Nov 25 07:10:47 localhost kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 25 07:10:47 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 25 07:10:47 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 07:10:47 localhost kernel: BIOS-provided physical RAM map:
Nov 25 07:10:47 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 25 07:10:47 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 25 07:10:47 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 25 07:10:47 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 25 07:10:47 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 25 07:10:47 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 25 07:10:47 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 25 07:10:47 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 25 07:10:47 localhost kernel: NX (Execute Disable) protection: active
Nov 25 07:10:47 localhost kernel: APIC: Static calls initialized
Nov 25 07:10:47 localhost kernel: SMBIOS 2.8 present.
Nov 25 07:10:47 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 25 07:10:47 localhost kernel: Hypervisor detected: KVM
Nov 25 07:10:47 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 25 07:10:47 localhost kernel: kvm-clock: using sched offset of 4503485878 cycles
Nov 25 07:10:47 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 25 07:10:47 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 25 07:10:47 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 25 07:10:47 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 25 07:10:47 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 25 07:10:47 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 25 07:10:47 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 25 07:10:47 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 25 07:10:47 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 25 07:10:47 localhost kernel: Using GB pages for direct mapping
Nov 25 07:10:47 localhost kernel: RAMDISK: [mem 0x2ed25000-0x3368afff]
Nov 25 07:10:47 localhost kernel: ACPI: Early table checksum verification disabled
Nov 25 07:10:47 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 25 07:10:47 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 07:10:47 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 07:10:47 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 07:10:47 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 25 07:10:47 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 07:10:47 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 07:10:47 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 25 07:10:47 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 25 07:10:47 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 25 07:10:47 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 25 07:10:47 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 25 07:10:47 localhost kernel: No NUMA configuration found
Nov 25 07:10:47 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 25 07:10:47 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 25 07:10:47 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 25 07:10:47 localhost kernel: Zone ranges:
Nov 25 07:10:47 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 25 07:10:47 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 25 07:10:47 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 07:10:47 localhost kernel:   Device   empty
Nov 25 07:10:47 localhost kernel: Movable zone start for each node
Nov 25 07:10:47 localhost kernel: Early memory node ranges
Nov 25 07:10:47 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 25 07:10:47 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 25 07:10:47 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 07:10:47 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 25 07:10:47 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 25 07:10:47 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 25 07:10:47 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 25 07:10:47 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 25 07:10:47 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 25 07:10:47 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 25 07:10:47 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 25 07:10:47 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 25 07:10:47 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 25 07:10:47 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 25 07:10:47 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 25 07:10:47 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 25 07:10:47 localhost kernel: TSC deadline timer available
Nov 25 07:10:47 localhost kernel: CPU topo: Max. logical packages:   8
Nov 25 07:10:47 localhost kernel: CPU topo: Max. logical dies:       8
Nov 25 07:10:47 localhost kernel: CPU topo: Max. dies per package:   1
Nov 25 07:10:47 localhost kernel: CPU topo: Max. threads per core:   1
Nov 25 07:10:47 localhost kernel: CPU topo: Num. cores per package:     1
Nov 25 07:10:47 localhost kernel: CPU topo: Num. threads per package:   1
Nov 25 07:10:47 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 25 07:10:47 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 25 07:10:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 25 07:10:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 25 07:10:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 25 07:10:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 25 07:10:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 25 07:10:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 25 07:10:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 25 07:10:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 25 07:10:47 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 25 07:10:47 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 25 07:10:47 localhost kernel: Booting paravirtualized kernel on KVM
Nov 25 07:10:47 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 25 07:10:47 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 25 07:10:47 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 25 07:10:47 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 25 07:10:47 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 25 07:10:47 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 25 07:10:47 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 07:10:47 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 25 07:10:47 localhost kernel: random: crng init done
Nov 25 07:10:47 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 25 07:10:47 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 25 07:10:47 localhost kernel: Fallback order for Node 0: 0 
Nov 25 07:10:47 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 25 07:10:47 localhost kernel: Policy zone: Normal
Nov 25 07:10:47 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 25 07:10:47 localhost kernel: software IO TLB: area num 8.
Nov 25 07:10:47 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 25 07:10:47 localhost kernel: ftrace: allocating 49313 entries in 193 pages
Nov 25 07:10:47 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 25 07:10:47 localhost kernel: Dynamic Preempt: voluntary
Nov 25 07:10:47 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 25 07:10:47 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 25 07:10:47 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 25 07:10:47 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 25 07:10:47 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 25 07:10:47 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 25 07:10:47 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 25 07:10:47 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 25 07:10:47 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 07:10:47 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 07:10:47 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 07:10:47 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 25 07:10:47 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 25 07:10:47 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 25 07:10:47 localhost kernel: Console: colour VGA+ 80x25
Nov 25 07:10:47 localhost kernel: printk: console [ttyS0] enabled
Nov 25 07:10:47 localhost kernel: ACPI: Core revision 20230331
Nov 25 07:10:47 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 25 07:10:47 localhost kernel: x2apic enabled
Nov 25 07:10:47 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 25 07:10:47 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 25 07:10:47 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 25 07:10:47 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 25 07:10:47 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 25 07:10:47 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 25 07:10:47 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 25 07:10:47 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 25 07:10:47 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 25 07:10:47 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 25 07:10:47 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 25 07:10:47 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 25 07:10:47 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 25 07:10:47 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 25 07:10:47 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 25 07:10:47 localhost kernel: x86/bugs: return thunk changed
Nov 25 07:10:47 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 25 07:10:47 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 25 07:10:47 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 25 07:10:47 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 25 07:10:47 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 25 07:10:47 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 25 07:10:47 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 25 07:10:47 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 25 07:10:47 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 25 07:10:47 localhost kernel: landlock: Up and running.
Nov 25 07:10:47 localhost kernel: Yama: becoming mindful.
Nov 25 07:10:47 localhost kernel: SELinux:  Initializing.
Nov 25 07:10:47 localhost kernel: LSM support for eBPF active
Nov 25 07:10:47 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 07:10:47 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 07:10:47 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 25 07:10:47 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 25 07:10:47 localhost kernel: ... version:                0
Nov 25 07:10:47 localhost kernel: ... bit width:              48
Nov 25 07:10:47 localhost kernel: ... generic registers:      6
Nov 25 07:10:47 localhost kernel: ... value mask:             0000ffffffffffff
Nov 25 07:10:47 localhost kernel: ... max period:             00007fffffffffff
Nov 25 07:10:47 localhost kernel: ... fixed-purpose events:   0
Nov 25 07:10:47 localhost kernel: ... event mask:             000000000000003f
Nov 25 07:10:47 localhost kernel: signal: max sigframe size: 1776
Nov 25 07:10:47 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 25 07:10:47 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 25 07:10:47 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 25 07:10:47 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 25 07:10:47 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 25 07:10:47 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 25 07:10:47 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 25 07:10:47 localhost kernel: node 0 deferred pages initialised in 7ms
Nov 25 07:10:47 localhost kernel: Memory: 7776600K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 605564K reserved, 0K cma-reserved)
Nov 25 07:10:47 localhost kernel: devtmpfs: initialized
Nov 25 07:10:47 localhost kernel: x86/mm: Memory block size: 128MB
Nov 25 07:10:47 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 25 07:10:47 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 25 07:10:47 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 25 07:10:47 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 25 07:10:47 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 25 07:10:47 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 25 07:10:47 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 25 07:10:47 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 25 07:10:47 localhost kernel: audit: type=2000 audit(1764054645.272:1): state=initialized audit_enabled=0 res=1
Nov 25 07:10:47 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 25 07:10:47 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 25 07:10:47 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 25 07:10:47 localhost kernel: cpuidle: using governor menu
Nov 25 07:10:47 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 25 07:10:47 localhost kernel: PCI: Using configuration type 1 for base access
Nov 25 07:10:47 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 25 07:10:47 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 25 07:10:47 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 25 07:10:47 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 25 07:10:47 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 25 07:10:47 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 25 07:10:47 localhost kernel: Demotion targets for Node 0: null
Nov 25 07:10:47 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 25 07:10:47 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 25 07:10:47 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 25 07:10:47 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 25 07:10:47 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 25 07:10:47 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 25 07:10:47 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 25 07:10:47 localhost kernel: ACPI: Interpreter enabled
Nov 25 07:10:47 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 25 07:10:47 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 25 07:10:47 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 25 07:10:47 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 25 07:10:47 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 25 07:10:47 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 25 07:10:47 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [3] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [4] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [5] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [6] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [7] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [8] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [9] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [10] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [11] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [12] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [13] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [14] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [15] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [16] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [17] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [18] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [19] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [20] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [21] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [22] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [23] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [24] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [25] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [26] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [27] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [28] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [29] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [30] registered
Nov 25 07:10:47 localhost kernel: acpiphp: Slot [31] registered
Nov 25 07:10:47 localhost kernel: PCI host bridge to bus 0000:00
Nov 25 07:10:47 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 25 07:10:47 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 25 07:10:47 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 25 07:10:47 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 25 07:10:47 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 25 07:10:47 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 25 07:10:47 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 25 07:10:47 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 25 07:10:47 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 25 07:10:47 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 25 07:10:47 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 25 07:10:47 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 25 07:10:47 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 25 07:10:47 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 25 07:10:47 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 25 07:10:47 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 25 07:10:47 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 25 07:10:47 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 25 07:10:47 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 25 07:10:47 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 25 07:10:47 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 25 07:10:47 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 25 07:10:47 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 25 07:10:47 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 25 07:10:47 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 25 07:10:47 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 25 07:10:47 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 25 07:10:47 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 25 07:10:47 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 25 07:10:47 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 25 07:10:47 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 25 07:10:47 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 25 07:10:47 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 25 07:10:47 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 25 07:10:47 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 07:10:47 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 25 07:10:47 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 25 07:10:47 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 07:10:47 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 25 07:10:47 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 25 07:10:47 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 25 07:10:47 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 25 07:10:47 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 25 07:10:47 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 25 07:10:47 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 25 07:10:47 localhost kernel: iommu: Default domain type: Translated
Nov 25 07:10:47 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 25 07:10:47 localhost kernel: SCSI subsystem initialized
Nov 25 07:10:47 localhost kernel: ACPI: bus type USB registered
Nov 25 07:10:47 localhost kernel: usbcore: registered new interface driver usbfs
Nov 25 07:10:47 localhost kernel: usbcore: registered new interface driver hub
Nov 25 07:10:47 localhost kernel: usbcore: registered new device driver usb
Nov 25 07:10:47 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 25 07:10:47 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 25 07:10:47 localhost kernel: PTP clock support registered
Nov 25 07:10:47 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 25 07:10:47 localhost kernel: NetLabel: Initializing
Nov 25 07:10:47 localhost kernel: NetLabel:  domain hash size = 128
Nov 25 07:10:47 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 25 07:10:47 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 25 07:10:47 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 25 07:10:47 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 25 07:10:47 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 25 07:10:47 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 25 07:10:47 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 25 07:10:47 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 25 07:10:47 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 25 07:10:47 localhost kernel: vgaarb: loaded
Nov 25 07:10:47 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 25 07:10:47 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 25 07:10:47 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 25 07:10:47 localhost kernel: pnp: PnP ACPI init
Nov 25 07:10:47 localhost kernel: pnp 00:03: [dma 2]
Nov 25 07:10:47 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 25 07:10:47 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 25 07:10:47 localhost kernel: NET: Registered PF_INET protocol family
Nov 25 07:10:47 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 25 07:10:47 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 25 07:10:47 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 25 07:10:47 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 25 07:10:47 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 25 07:10:47 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 25 07:10:47 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 25 07:10:47 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 07:10:47 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 07:10:47 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 25 07:10:47 localhost kernel: NET: Registered PF_XDP protocol family
Nov 25 07:10:47 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 25 07:10:47 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 25 07:10:47 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 25 07:10:47 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 25 07:10:47 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 25 07:10:47 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 25 07:10:47 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 25 07:10:47 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 25 07:10:47 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 74261 usecs
Nov 25 07:10:47 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 25 07:10:47 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 25 07:10:47 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 25 07:10:47 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 25 07:10:47 localhost kernel: ACPI: bus type thunderbolt registered
Nov 25 07:10:47 localhost kernel: Initialise system trusted keyrings
Nov 25 07:10:47 localhost kernel: Key type blacklist registered
Nov 25 07:10:47 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 25 07:10:47 localhost kernel: zbud: loaded
Nov 25 07:10:47 localhost kernel: integrity: Platform Keyring initialized
Nov 25 07:10:47 localhost kernel: integrity: Machine keyring initialized
Nov 25 07:10:47 localhost kernel: Freeing initrd memory: 75160K
Nov 25 07:10:47 localhost kernel: NET: Registered PF_ALG protocol family
Nov 25 07:10:47 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 25 07:10:47 localhost kernel: Key type asymmetric registered
Nov 25 07:10:47 localhost kernel: Asymmetric key parser 'x509' registered
Nov 25 07:10:47 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 25 07:10:47 localhost kernel: io scheduler mq-deadline registered
Nov 25 07:10:47 localhost kernel: io scheduler kyber registered
Nov 25 07:10:47 localhost kernel: io scheduler bfq registered
Nov 25 07:10:47 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 25 07:10:47 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 25 07:10:47 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 25 07:10:47 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 25 07:10:47 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 25 07:10:47 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 25 07:10:47 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 25 07:10:47 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 25 07:10:47 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 25 07:10:47 localhost kernel: Non-volatile memory driver v1.3
Nov 25 07:10:47 localhost kernel: rdac: device handler registered
Nov 25 07:10:47 localhost kernel: hp_sw: device handler registered
Nov 25 07:10:47 localhost kernel: emc: device handler registered
Nov 25 07:10:47 localhost kernel: alua: device handler registered
Nov 25 07:10:47 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 25 07:10:47 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 25 07:10:47 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 25 07:10:47 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 25 07:10:47 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 25 07:10:47 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 25 07:10:47 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 25 07:10:47 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 25 07:10:47 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 25 07:10:47 localhost kernel: hub 1-0:1.0: USB hub found
Nov 25 07:10:47 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 25 07:10:47 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 25 07:10:47 localhost kernel: usbserial: USB Serial support registered for generic
Nov 25 07:10:47 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 25 07:10:47 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 25 07:10:47 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 25 07:10:47 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 25 07:10:47 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 25 07:10:47 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 25 07:10:47 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 25 07:10:47 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-25T07:10:46 UTC (1764054646)
Nov 25 07:10:47 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 25 07:10:47 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 25 07:10:47 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 25 07:10:47 localhost kernel: usbcore: registered new interface driver usbhid
Nov 25 07:10:47 localhost kernel: usbhid: USB HID core driver
Nov 25 07:10:47 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 25 07:10:47 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 25 07:10:47 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 25 07:10:47 localhost kernel: Initializing XFRM netlink socket
Nov 25 07:10:47 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 25 07:10:47 localhost kernel: Segment Routing with IPv6
Nov 25 07:10:47 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 25 07:10:47 localhost kernel: mpls_gso: MPLS GSO support
Nov 25 07:10:47 localhost kernel: IPI shorthand broadcast: enabled
Nov 25 07:10:47 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 25 07:10:47 localhost kernel: AES CTR mode by8 optimization enabled
Nov 25 07:10:47 localhost kernel: sched_clock: Marking stable (1191002827, 150153723)->(1425753309, -84596759)
Nov 25 07:10:47 localhost kernel: registered taskstats version 1
Nov 25 07:10:47 localhost kernel: Loading compiled-in X.509 certificates
Nov 25 07:10:47 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 07:10:47 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 25 07:10:47 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 25 07:10:47 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 25 07:10:47 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 25 07:10:47 localhost kernel: Demotion targets for Node 0: null
Nov 25 07:10:47 localhost kernel: page_owner is disabled
Nov 25 07:10:47 localhost kernel: Key type .fscrypt registered
Nov 25 07:10:47 localhost kernel: Key type fscrypt-provisioning registered
Nov 25 07:10:47 localhost kernel: Key type big_key registered
Nov 25 07:10:47 localhost kernel: Key type encrypted registered
Nov 25 07:10:47 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 25 07:10:47 localhost kernel: Loading compiled-in module X.509 certificates
Nov 25 07:10:47 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 07:10:47 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 25 07:10:47 localhost kernel: ima: No architecture policies found
Nov 25 07:10:47 localhost kernel: evm: Initialising EVM extended attributes:
Nov 25 07:10:47 localhost kernel: evm: security.selinux
Nov 25 07:10:47 localhost kernel: evm: security.SMACK64 (disabled)
Nov 25 07:10:47 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 25 07:10:47 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 25 07:10:47 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 25 07:10:47 localhost kernel: evm: security.apparmor (disabled)
Nov 25 07:10:47 localhost kernel: evm: security.ima
Nov 25 07:10:47 localhost kernel: evm: security.capability
Nov 25 07:10:47 localhost kernel: evm: HMAC attrs: 0x1
Nov 25 07:10:47 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 25 07:10:47 localhost kernel: Running certificate verification RSA selftest
Nov 25 07:10:47 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 25 07:10:47 localhost kernel: Running certificate verification ECDSA selftest
Nov 25 07:10:47 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 25 07:10:47 localhost kernel: clk: Disabling unused clocks
Nov 25 07:10:47 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 25 07:10:47 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 25 07:10:47 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 25 07:10:47 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 25 07:10:47 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 25 07:10:47 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 25 07:10:47 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 25 07:10:47 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 25 07:10:47 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 25 07:10:47 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 25 07:10:47 localhost kernel: Run /init as init process
Nov 25 07:10:47 localhost kernel:   with arguments:
Nov 25 07:10:47 localhost kernel:     /init
Nov 25 07:10:47 localhost kernel:   with environment:
Nov 25 07:10:47 localhost kernel:     HOME=/
Nov 25 07:10:47 localhost kernel:     TERM=linux
Nov 25 07:10:47 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
Nov 25 07:10:47 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 25 07:10:47 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 25 07:10:47 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 07:10:47 localhost systemd[1]: Detected virtualization kvm.
Nov 25 07:10:47 localhost systemd[1]: Detected architecture x86-64.
Nov 25 07:10:47 localhost systemd[1]: Running in initrd.
Nov 25 07:10:47 localhost systemd[1]: No hostname configured, using default hostname.
Nov 25 07:10:47 localhost systemd[1]: Hostname set to <localhost>.
Nov 25 07:10:47 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 25 07:10:47 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 25 07:10:47 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 07:10:47 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 25 07:10:47 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 25 07:10:47 localhost systemd[1]: Reached target Local File Systems.
Nov 25 07:10:47 localhost systemd[1]: Reached target Path Units.
Nov 25 07:10:47 localhost systemd[1]: Reached target Slice Units.
Nov 25 07:10:47 localhost systemd[1]: Reached target Swaps.
Nov 25 07:10:47 localhost systemd[1]: Reached target Timer Units.
Nov 25 07:10:47 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 07:10:47 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 25 07:10:47 localhost systemd[1]: Listening on Journal Socket.
Nov 25 07:10:47 localhost systemd[1]: Listening on udev Control Socket.
Nov 25 07:10:47 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 25 07:10:47 localhost systemd[1]: Reached target Socket Units.
Nov 25 07:10:47 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 25 07:10:47 localhost systemd[1]: Starting Journal Service...
Nov 25 07:10:47 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 07:10:47 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 25 07:10:47 localhost systemd[1]: Starting Create System Users...
Nov 25 07:10:47 localhost systemd[1]: Starting Setup Virtual Console...
Nov 25 07:10:47 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 25 07:10:47 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 25 07:10:47 localhost systemd[1]: Finished Create System Users.
Nov 25 07:10:47 localhost systemd-journald[304]: Journal started
Nov 25 07:10:47 localhost systemd-journald[304]: Runtime Journal (/run/log/journal/c15e9c692e4d4822bb2513f5e1e4f89a) is 8.0M, max 153.6M, 145.6M free.
Nov 25 07:10:47 localhost systemd-sysusers[308]: Creating group 'users' with GID 100.
Nov 25 07:10:47 localhost systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Nov 25 07:10:47 localhost systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 25 07:10:47 localhost systemd[1]: Started Journal Service.
Nov 25 07:10:47 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 07:10:47 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 07:10:47 localhost systemd[1]: Finished Setup Virtual Console.
Nov 25 07:10:47 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 25 07:10:47 localhost systemd[1]: Starting dracut cmdline hook...
Nov 25 07:10:47 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 07:10:47 localhost dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Nov 25 07:10:47 localhost dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 07:10:47 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 07:10:47 localhost systemd[1]: Finished dracut cmdline hook.
Nov 25 07:10:47 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 25 07:10:47 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 25 07:10:47 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 25 07:10:47 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 25 07:10:47 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 25 07:10:47 localhost kernel: RPC: Registered udp transport module.
Nov 25 07:10:47 localhost kernel: RPC: Registered tcp transport module.
Nov 25 07:10:47 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 25 07:10:47 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 25 07:10:47 localhost rpc.statd[442]: Version 2.5.4 starting
Nov 25 07:10:47 localhost rpc.statd[442]: Initializing NSM state
Nov 25 07:10:47 localhost rpc.idmapd[447]: Setting log level to 0
Nov 25 07:10:47 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 25 07:10:47 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 07:10:47 localhost systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 07:10:47 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 07:10:47 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 25 07:10:47 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 25 07:10:47 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 25 07:10:47 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 25 07:10:47 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 07:10:47 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 25 07:10:47 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 07:10:47 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 07:10:47 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 07:10:47 localhost systemd[1]: Reached target Network.
Nov 25 07:10:47 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 07:10:47 localhost systemd[1]: Starting dracut initqueue hook...
Nov 25 07:10:48 localhost kernel: libata version 3.00 loaded.
Nov 25 07:10:48 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 25 07:10:48 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 25 07:10:48 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 25 07:10:48 localhost kernel: scsi host0: ata_piix
Nov 25 07:10:48 localhost kernel: scsi host1: ata_piix
Nov 25 07:10:48 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 25 07:10:48 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 25 07:10:48 localhost kernel:  vda: vda1
Nov 25 07:10:48 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 25 07:10:48 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 25 07:10:48 localhost systemd[1]: Reached target System Initialization.
Nov 25 07:10:48 localhost systemd[1]: Reached target Basic System.
Nov 25 07:10:48 localhost kernel: ata1: found unknown device (class 0)
Nov 25 07:10:48 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 25 07:10:48 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 25 07:10:48 localhost systemd-udevd[507]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 07:10:48 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 25 07:10:48 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 25 07:10:48 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 25 07:10:48 localhost systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 07:10:48 localhost systemd[1]: Reached target Initrd Root Device.
Nov 25 07:10:48 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 25 07:10:48 localhost systemd[1]: Finished dracut initqueue hook.
Nov 25 07:10:48 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 07:10:48 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 25 07:10:48 localhost systemd[1]: Reached target Remote File Systems.
Nov 25 07:10:48 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 25 07:10:48 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 25 07:10:48 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 25 07:10:48 localhost systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Nov 25 07:10:48 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 07:10:48 localhost systemd[1]: Mounting /sysroot...
Nov 25 07:10:49 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 25 07:10:49 localhost kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 25 07:10:49 localhost kernel: XFS (vda1): Ending clean mount
Nov 25 07:10:49 localhost systemd[1]: Mounted /sysroot.
Nov 25 07:10:49 localhost systemd[1]: Reached target Initrd Root File System.
Nov 25 07:10:49 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 25 07:10:49 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 25 07:10:49 localhost systemd[1]: Reached target Initrd File Systems.
Nov 25 07:10:49 localhost systemd[1]: Reached target Initrd Default Target.
Nov 25 07:10:49 localhost systemd[1]: Starting dracut mount hook...
Nov 25 07:10:49 localhost systemd[1]: Finished dracut mount hook.
Nov 25 07:10:49 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 25 07:10:49 localhost rpc.idmapd[447]: exiting on signal 15
Nov 25 07:10:49 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 25 07:10:49 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 25 07:10:49 localhost systemd[1]: Stopped target Network.
Nov 25 07:10:49 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 25 07:10:49 localhost systemd[1]: Stopped target Timer Units.
Nov 25 07:10:49 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 25 07:10:49 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 25 07:10:49 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 25 07:10:49 localhost systemd[1]: Stopped target Basic System.
Nov 25 07:10:49 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 25 07:10:49 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 25 07:10:49 localhost systemd[1]: Stopped target Path Units.
Nov 25 07:10:49 localhost systemd[1]: Stopped target Remote File Systems.
Nov 25 07:10:49 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 25 07:10:49 localhost systemd[1]: Stopped target Slice Units.
Nov 25 07:10:49 localhost systemd[1]: Stopped target Socket Units.
Nov 25 07:10:49 localhost systemd[1]: Stopped target System Initialization.
Nov 25 07:10:49 localhost systemd[1]: Stopped target Local File Systems.
Nov 25 07:10:49 localhost systemd[1]: Stopped target Swaps.
Nov 25 07:10:49 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Stopped dracut mount hook.
Nov 25 07:10:49 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 25 07:10:49 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 25 07:10:49 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 25 07:10:49 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 25 07:10:49 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 25 07:10:49 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 25 07:10:49 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 25 07:10:49 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 25 07:10:49 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 25 07:10:49 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 25 07:10:49 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 25 07:10:49 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 25 07:10:49 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Closed udev Control Socket.
Nov 25 07:10:49 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Closed udev Kernel Socket.
Nov 25 07:10:49 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 25 07:10:49 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 25 07:10:49 localhost systemd[1]: Starting Cleanup udev Database...
Nov 25 07:10:49 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 25 07:10:49 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 25 07:10:49 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Stopped Create System Users.
Nov 25 07:10:49 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 25 07:10:49 localhost systemd[1]: Finished Cleanup udev Database.
Nov 25 07:10:49 localhost systemd[1]: Reached target Switch Root.
Nov 25 07:10:49 localhost systemd[1]: Starting Switch Root...
Nov 25 07:10:49 localhost systemd[1]: Switching root.
Nov 25 07:10:49 localhost systemd-journald[304]: Journal stopped
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <entry name="serial">4bc97ee2-5aba-4bb5-86e2-f0806a200c04</entry>
Nov 25 08:22:42 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <entry name="uuid">4bc97ee2-5aba-4bb5-86e2-f0806a200c04</entry>
Nov 25 08:22:42 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     </system>
Nov 25 08:22:42 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:22:42 compute-0 nova_compute[253538]:   <os>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:   </os>
Nov 25 08:22:42 compute-0 nova_compute[253538]:   <features>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:   </features>
Nov 25 08:22:42 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:22:42 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:22:42 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk">
Nov 25 08:22:42 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       </source>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:22:42 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.eph0">
Nov 25 08:22:42 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       </source>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:22:42 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <target dev="vdb" bus="virtio"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.config">
Nov 25 08:22:42 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       </source>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:22:42 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:99:b2:39"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <target dev="tap029a2ee5-40"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04/console.log" append="off"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <video>
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     </video>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:22:42 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:22:42 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:22:42 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:22:42 compute-0 nova_compute[253538]: </domain>
Nov 25 08:22:42 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.768 253542 DEBUG nova.compute.manager [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Preparing to wait for external event network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.769 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.769 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.770 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.770 253542 DEBUG nova.virt.libvirt.vif [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:22:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1503393729',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1503393729',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(23),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1503393729',id=5,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=23,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNBzTqcG3JodznNWSwgj3rCrEedPdeiQ/mnvgYz3cydyWhFZXQKv9KjumDsSN5/xC3KFolCMDQs3EubeJBsTVkZbaVh8dka9krQSkDOu6i81Tbp1XKPFk+NDOA6XHT/FTw==',key_name='tempest-keypair-1479336102',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2671313ddba04346ac0e2eef435f909c',ramdisk_id='',reservation_id='r-1iln2b4j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-625252619',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-625252619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:22:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='19a48db5eafb4ccb9008a204aa3d72d4',uuid=4bc97ee2-5aba-4bb5-86e2-f0806a200c04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.771 253542 DEBUG nova.network.os_vif_util [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Converting VIF {"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.772 253542 DEBUG nova.network.os_vif_util [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:b2:39,bridge_name='br-int',has_traffic_filtering=True,id=029a2ee5-4018-4b73-8953-5436c5af3666,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap029a2ee5-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.772 253542 DEBUG os_vif [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:b2:39,bridge_name='br-int',has_traffic_filtering=True,id=029a2ee5-4018-4b73-8953-5436c5af3666,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap029a2ee5-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.773 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.774 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.774 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.782 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.783 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap029a2ee5-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.784 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap029a2ee5-40, col_values=(('external_ids', {'iface-id': '029a2ee5-4018-4b73-8953-5436c5af3666', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:b2:39', 'vm-uuid': '4bc97ee2-5aba-4bb5-86e2-f0806a200c04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.786 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:42 compute-0 NetworkManager[48915]: <info>  [1764058962.7871] manager: (tap029a2ee5-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.790 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.793 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.794 253542 INFO os_vif [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:b2:39,bridge_name='br-int',has_traffic_filtering=True,id=029a2ee5-4018-4b73-8953-5436c5af3666,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap029a2ee5-40')
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.850 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.850 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.851 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.851 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] No VIF found with MAC fa:16:3e:99:b2:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.851 253542 INFO nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Using config drive
Nov 25 08:22:42 compute-0 nova_compute[253538]: 2025-11-25 08:22:42.873 253542 DEBUG nova.storage.rbd_utils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:22:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 90 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Nov 25 08:22:43 compute-0 nova_compute[253538]: 2025-11-25 08:22:43.089 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764058948.0885918, 67445e66-65d6-487d-8c34-7d798ac485c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:22:43 compute-0 nova_compute[253538]: 2025-11-25 08:22:43.089 253542 INFO nova.compute.manager [-] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] VM Stopped (Lifecycle Event)
Nov 25 08:22:43 compute-0 nova_compute[253538]: 2025-11-25 08:22:43.118 253542 DEBUG nova.compute.manager [None req-3c004bf3-cf1f-4fa3-b305-06a94a8f88e3 - - - - - -] [instance: 67445e66-65d6-487d-8c34-7d798ac485c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:22:43 compute-0 nova_compute[253538]: 2025-11-25 08:22:43.165 253542 DEBUG nova.network.neutron [req-e1f91395-0808-4b43-b5a4-ccba19399888 req-ab321005-202c-4af5-95d7-f047639420bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Updated VIF entry in instance network info cache for port 029a2ee5-4018-4b73-8953-5436c5af3666. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:22:43 compute-0 nova_compute[253538]: 2025-11-25 08:22:43.166 253542 DEBUG nova.network.neutron [req-e1f91395-0808-4b43-b5a4-ccba19399888 req-ab321005-202c-4af5-95d7-f047639420bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Updating instance_info_cache with network_info: [{"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:22:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:22:43 compute-0 nova_compute[253538]: 2025-11-25 08:22:43.182 253542 DEBUG oslo_concurrency.lockutils [req-e1f91395-0808-4b43-b5a4-ccba19399888 req-ab321005-202c-4af5-95d7-f047639420bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:22:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/706444676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:22:43 compute-0 nova_compute[253538]: 2025-11-25 08:22:43.368 253542 INFO nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Creating config drive at /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04/disk.config
Nov 25 08:22:43 compute-0 nova_compute[253538]: 2025-11-25 08:22:43.376 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr8bqjidi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:22:43 compute-0 nova_compute[253538]: 2025-11-25 08:22:43.522 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr8bqjidi" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:22:43 compute-0 nova_compute[253538]: 2025-11-25 08:22:43.559 253542 DEBUG nova.storage.rbd_utils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] rbd image 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:22:43 compute-0 nova_compute[253538]: 2025-11-25 08:22:43.563 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04/disk.config 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:22:43 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 08:22:43 compute-0 nova_compute[253538]: 2025-11-25 08:22:43.758 253542 DEBUG oslo_concurrency.processutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04/disk.config 4bc97ee2-5aba-4bb5-86e2-f0806a200c04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:22:43 compute-0 nova_compute[253538]: 2025-11-25 08:22:43.759 253542 INFO nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Deleting local config drive /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04/disk.config because it was imported into RBD.
Nov 25 08:22:43 compute-0 kernel: tap029a2ee5-40: entered promiscuous mode
Nov 25 08:22:43 compute-0 NetworkManager[48915]: <info>  [1764058963.8329] manager: (tap029a2ee5-40): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Nov 25 08:22:43 compute-0 nova_compute[253538]: 2025-11-25 08:22:43.835 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:43 compute-0 ovn_controller[152859]: 2025-11-25T08:22:43Z|00036|binding|INFO|Claiming lport 029a2ee5-4018-4b73-8953-5436c5af3666 for this chassis.
Nov 25 08:22:43 compute-0 ovn_controller[152859]: 2025-11-25T08:22:43Z|00037|binding|INFO|029a2ee5-4018-4b73-8953-5436c5af3666: Claiming fa:16:3e:99:b2:39 10.100.0.13
Nov 25 08:22:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.841 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:b2:39 10.100.0.13'], port_security=['fa:16:3e:99:b2:39 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4bc97ee2-5aba-4bb5-86e2-f0806a200c04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2671313ddba04346ac0e2eef435f909c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45e37a08-d0c9-4931-b93a-912579eefb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e5eae09-1123-407e-9138-26c6151dcc1c, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=029a2ee5-4018-4b73-8953-5436c5af3666) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:22:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.843 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 029a2ee5-4018-4b73-8953-5436c5af3666 in datapath ef52fe4f-78d3-45fa-ab69-177fdfabe604 bound to our chassis
Nov 25 08:22:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.844 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef52fe4f-78d3-45fa-ab69-177fdfabe604
Nov 25 08:22:43 compute-0 ovn_controller[152859]: 2025-11-25T08:22:43Z|00038|binding|INFO|Setting lport 029a2ee5-4018-4b73-8953-5436c5af3666 ovn-installed in OVS
Nov 25 08:22:43 compute-0 ovn_controller[152859]: 2025-11-25T08:22:43Z|00039|binding|INFO|Setting lport 029a2ee5-4018-4b73-8953-5436c5af3666 up in Southbound
Nov 25 08:22:43 compute-0 nova_compute[253538]: 2025-11-25 08:22:43.852 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:43 compute-0 nova_compute[253538]: 2025-11-25 08:22:43.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.854 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56cb4533-78aa-443d-a15c-070093a36b81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:22:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.856 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef52fe4f-71 in ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:22:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.857 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef52fe4f-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:22:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.857 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf71671-009d-415f-8153-d5ace5ad7d7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:22:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.859 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9c67c94d-fae5-4ebb-ae8a-a1d7374bb0c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:22:43 compute-0 systemd-machined[215790]: New machine qemu-5-instance-00000005.
Nov 25 08:22:43 compute-0 systemd-udevd[271051]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:22:43 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Nov 25 08:22:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.873 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d46d080f-9eb5-4074-b3fe-b4f45f784d0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:22:43 compute-0 NetworkManager[48915]: <info>  [1764058963.8764] device (tap029a2ee5-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:22:43 compute-0 NetworkManager[48915]: <info>  [1764058963.8772] device (tap029a2ee5-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:22:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.895 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f45f24-e3e0-4dbe-b93b-a95e2c9dd83f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:22:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.918 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ba55b682-c144-4018-b03d-69055a233dd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:22:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.924 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1facfb-5e9c-4b32-b477-bb6da0b595e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:22:43 compute-0 NetworkManager[48915]: <info>  [1764058963.9250] manager: (tapef52fe4f-70): new Veth device (/org/freedesktop/NetworkManager/Devices/32)
Nov 25 08:22:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.955 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e747701d-d195-4363-92ba-849c2c3cc644]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:22:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.958 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[47eac41d-9341-45b3-ac22-1268ff8cc7f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:22:43 compute-0 NetworkManager[48915]: <info>  [1764058963.9834] device (tapef52fe4f-70): carrier: link connected
Nov 25 08:22:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:43.991 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b434ba-5a72-4207-8037-c9fd1f31f5c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.009 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[89444853-bb10-4803-b064-129fa781b4da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef52fe4f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:1c:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431858, 'reachable_time': 15570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271084, 'error': None, 'target': 'ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.028 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e6de10-b369-4ffc-8e14-66ca315e2dad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:1cc5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431858, 'tstamp': 431858}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271085, 'error': None, 'target': 'ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.051 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[87cda7dd-8430-4fd7-8d22-629ad2756e80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef52fe4f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:1c:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431858, 'reachable_time': 15570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271097, 'error': None, 'target': 'ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.089 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b1e7d4-19ca-4db4-9042-64c13f8a0204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.150 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7f25294f-170e-407b-b43c-074ed27d9614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.152 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef52fe4f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.152 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.153 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef52fe4f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.200 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:44 compute-0 NetworkManager[48915]: <info>  [1764058964.2020] manager: (tapef52fe4f-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Nov 25 08:22:44 compute-0 kernel: tapef52fe4f-70: entered promiscuous mode
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.206 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef52fe4f-70, col_values=(('external_ids', {'iface-id': 'c3160d37-f1ba-461a-855e-31f72d46baee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:22:44 compute-0 ovn_controller[152859]: 2025-11-25T08:22:44Z|00040|binding|INFO|Releasing lport c3160d37-f1ba-461a-855e-31f72d46baee from this chassis (sb_readonly=0)
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.207 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.237 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef52fe4f-78d3-45fa-ab69-177fdfabe604.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef52fe4f-78d3-45fa-ab69-177fdfabe604.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.238 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5e1e4e-7b58-4ae6-a5e9-8e175ffebcc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.239 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-ef52fe4f-78d3-45fa-ab69-177fdfabe604
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/ef52fe4f-78d3-45fa-ab69-177fdfabe604.pid.haproxy
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID ef52fe4f-78d3-45fa-ab69-177fdfabe604
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:22:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:22:44.240 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'env', 'PROCESS_TAG=haproxy-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef52fe4f-78d3-45fa-ab69-177fdfabe604.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.258 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764058949.2571197, a134e11b-8c87-4a96-a92c-b4dfdad7e518 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.259 253542 INFO nova.compute.manager [-] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] VM Stopped (Lifecycle Event)
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.290 253542 DEBUG nova.compute.manager [None req-5ced3f25-a661-4235-9a0c-185cc5561744 - - - - - -] [instance: a134e11b-8c87-4a96-a92c-b4dfdad7e518] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.328 253542 DEBUG nova.compute.manager [req-c4136aad-79c9-42a7-a4db-26735a555b96 req-f09eeca1-1b4a-4c25-bfeb-5a3138942c69 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received event network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.329 253542 DEBUG oslo_concurrency.lockutils [req-c4136aad-79c9-42a7-a4db-26735a555b96 req-f09eeca1-1b4a-4c25-bfeb-5a3138942c69 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.330 253542 DEBUG oslo_concurrency.lockutils [req-c4136aad-79c9-42a7-a4db-26735a555b96 req-f09eeca1-1b4a-4c25-bfeb-5a3138942c69 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.331 253542 DEBUG oslo_concurrency.lockutils [req-c4136aad-79c9-42a7-a4db-26735a555b96 req-f09eeca1-1b4a-4c25-bfeb-5a3138942c69 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.331 253542 DEBUG nova.compute.manager [req-c4136aad-79c9-42a7-a4db-26735a555b96 req-f09eeca1-1b4a-4c25-bfeb-5a3138942c69 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Processing event network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.359 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:44 compute-0 ceph-mon[75015]: pgmap v1112: 321 pgs: 321 active+clean; 90 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.674 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058964.67421, 4bc97ee2-5aba-4bb5-86e2-f0806a200c04 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.675 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] VM Started (Lifecycle Event)
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.678 253542 DEBUG nova.compute.manager [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.682 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.686 253542 INFO nova.virt.libvirt.driver [-] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Instance spawned successfully.
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.687 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.701 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.708 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.712 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.713 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.713 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.714 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.715 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.715 253542 DEBUG nova.virt.libvirt.driver [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.743 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.744 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058964.6755211, 4bc97ee2-5aba-4bb5-86e2-f0806a200c04 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.744 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] VM Paused (Lifecycle Event)
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.779 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.784 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058964.6814785, 4bc97ee2-5aba-4bb5-86e2-f0806a200c04 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.785 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] VM Resumed (Lifecycle Event)
Nov 25 08:22:44 compute-0 podman[271178]: 2025-11-25 08:22:44.71564856 +0000 UTC m=+0.052409398 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.810 253542 INFO nova.compute.manager [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Took 11.06 seconds to spawn the instance on the hypervisor.
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.811 253542 DEBUG nova.compute.manager [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.813 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.825 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.853 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.879 253542 INFO nova.compute.manager [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Took 12.37 seconds to build instance.
Nov 25 08:22:44 compute-0 nova_compute[253538]: 2025-11-25 08:22:44.895 253542 DEBUG oslo_concurrency.lockutils [None req-324a0cc4-7dc0-4eda-abc8-2a6c75889181 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:22:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 90 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 08:22:44 compute-0 podman[271178]: 2025-11-25 08:22:44.915704862 +0000 UTC m=+0.252465660 container create f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 08:22:45 compute-0 systemd[1]: Started libpod-conmon-f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000.scope.
Nov 25 08:22:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:22:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6262865f4caeb3aa76bd5939a209fa5913aca3e8ecc37800451d727ee1322951/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:22:45 compute-0 podman[271178]: 2025-11-25 08:22:45.123745137 +0000 UTC m=+0.460505985 container init f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 08:22:45 compute-0 podman[271178]: 2025-11-25 08:22:45.135974348 +0000 UTC m=+0.472735177 container start f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 08:22:45 compute-0 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[271193]: [NOTICE]   (271197) : New worker (271199) forked
Nov 25 08:22:45 compute-0 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[271193]: [NOTICE]   (271197) : Loading success.
Nov 25 08:22:45 compute-0 ceph-mon[75015]: pgmap v1113: 321 pgs: 321 active+clean; 90 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 08:22:46 compute-0 nova_compute[253538]: 2025-11-25 08:22:46.510 253542 DEBUG nova.compute.manager [req-f9f5e719-a72c-4557-bed1-38909e69fd83 req-1031c73e-929c-4347-9fb7-b4859b00b76b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received event network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:22:46 compute-0 nova_compute[253538]: 2025-11-25 08:22:46.510 253542 DEBUG oslo_concurrency.lockutils [req-f9f5e719-a72c-4557-bed1-38909e69fd83 req-1031c73e-929c-4347-9fb7-b4859b00b76b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:22:46 compute-0 nova_compute[253538]: 2025-11-25 08:22:46.510 253542 DEBUG oslo_concurrency.lockutils [req-f9f5e719-a72c-4557-bed1-38909e69fd83 req-1031c73e-929c-4347-9fb7-b4859b00b76b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:22:46 compute-0 nova_compute[253538]: 2025-11-25 08:22:46.510 253542 DEBUG oslo_concurrency.lockutils [req-f9f5e719-a72c-4557-bed1-38909e69fd83 req-1031c73e-929c-4347-9fb7-b4859b00b76b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:22:46 compute-0 nova_compute[253538]: 2025-11-25 08:22:46.511 253542 DEBUG nova.compute.manager [req-f9f5e719-a72c-4557-bed1-38909e69fd83 req-1031c73e-929c-4347-9fb7-b4859b00b76b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] No waiting events found dispatching network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:22:46 compute-0 nova_compute[253538]: 2025-11-25 08:22:46.511 253542 WARNING nova.compute.manager [req-f9f5e719-a72c-4557-bed1-38909e69fd83 req-1031c73e-929c-4347-9fb7-b4859b00b76b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received unexpected event network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 for instance with vm_state active and task_state None.
Nov 25 08:22:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 90 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 MiB/s wr, 20 op/s
Nov 25 08:22:47 compute-0 nova_compute[253538]: 2025-11-25 08:22:47.788 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:48 compute-0 ceph-mon[75015]: pgmap v1114: 321 pgs: 321 active+clean; 90 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 MiB/s wr, 20 op/s
Nov 25 08:22:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:22:48 compute-0 nova_compute[253538]: 2025-11-25 08:22:48.563 253542 DEBUG nova.compute.manager [req-84f8fe13-e525-4697-9932-ecd4dfa49dc4 req-14c65e82-6d51-4385-b2a2-2c65ebad1fa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received event network-changed-029a2ee5-4018-4b73-8953-5436c5af3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:22:48 compute-0 nova_compute[253538]: 2025-11-25 08:22:48.563 253542 DEBUG nova.compute.manager [req-84f8fe13-e525-4697-9932-ecd4dfa49dc4 req-14c65e82-6d51-4385-b2a2-2c65ebad1fa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Refreshing instance network info cache due to event network-changed-029a2ee5-4018-4b73-8953-5436c5af3666. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:22:48 compute-0 nova_compute[253538]: 2025-11-25 08:22:48.564 253542 DEBUG oslo_concurrency.lockutils [req-84f8fe13-e525-4697-9932-ecd4dfa49dc4 req-14c65e82-6d51-4385-b2a2-2c65ebad1fa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:22:48 compute-0 nova_compute[253538]: 2025-11-25 08:22:48.564 253542 DEBUG oslo_concurrency.lockutils [req-84f8fe13-e525-4697-9932-ecd4dfa49dc4 req-14c65e82-6d51-4385-b2a2-2c65ebad1fa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:22:48 compute-0 nova_compute[253538]: 2025-11-25 08:22:48.564 253542 DEBUG nova.network.neutron [req-84f8fe13-e525-4697-9932-ecd4dfa49dc4 req-14c65e82-6d51-4385-b2a2-2c65ebad1fa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Refreshing network info cache for port 029a2ee5-4018-4b73-8953-5436c5af3666 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:22:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 90 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 63 op/s
Nov 25 08:22:49 compute-0 nova_compute[253538]: 2025-11-25 08:22:49.360 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:49 compute-0 sudo[271208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:22:49 compute-0 sudo[271208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:49 compute-0 sudo[271208]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:49 compute-0 sudo[271233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:22:49 compute-0 sudo[271233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:49 compute-0 sudo[271233]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:49 compute-0 sudo[271258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:22:49 compute-0 sudo[271258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:49 compute-0 sudo[271258]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:49 compute-0 sudo[271283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:22:49 compute-0 sudo[271283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:50 compute-0 ceph-mon[75015]: pgmap v1115: 321 pgs: 321 active+clean; 90 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 63 op/s
Nov 25 08:22:50 compute-0 sudo[271283]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:22:50 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:22:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:22:50 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:22:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:22:50 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:22:50 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 9233aae9-9626-40ab-b90b-9a25b0c588d5 does not exist
Nov 25 08:22:50 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 5207d199-5b5e-44e6-9eda-4884de70436c does not exist
Nov 25 08:22:50 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 8f40c3e3-3c1c-447d-b86a-3b1365c103a0 does not exist
Nov 25 08:22:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:22:50 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:22:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:22:50 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:22:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:22:50 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:22:50 compute-0 sudo[271340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:22:50 compute-0 sudo[271340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:50 compute-0 sudo[271340]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:50 compute-0 sudo[271365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:22:50 compute-0 sudo[271365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:50 compute-0 sudo[271365]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:50 compute-0 sudo[271390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:22:50 compute-0 sudo[271390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:50 compute-0 sudo[271390]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:50 compute-0 sudo[271415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:22:50 compute-0 sudo[271415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 90 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 87 op/s
Nov 25 08:22:50 compute-0 nova_compute[253538]: 2025-11-25 08:22:50.978 253542 DEBUG nova.network.neutron [req-84f8fe13-e525-4697-9932-ecd4dfa49dc4 req-14c65e82-6d51-4385-b2a2-2c65ebad1fa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Updated VIF entry in instance network info cache for port 029a2ee5-4018-4b73-8953-5436c5af3666. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:22:50 compute-0 nova_compute[253538]: 2025-11-25 08:22:50.978 253542 DEBUG nova.network.neutron [req-84f8fe13-e525-4697-9932-ecd4dfa49dc4 req-14c65e82-6d51-4385-b2a2-2c65ebad1fa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Updating instance_info_cache with network_info: [{"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:22:51 compute-0 nova_compute[253538]: 2025-11-25 08:22:51.050 253542 DEBUG oslo_concurrency.lockutils [req-84f8fe13-e525-4697-9932-ecd4dfa49dc4 req-14c65e82-6d51-4385-b2a2-2c65ebad1fa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:22:51 compute-0 podman[271481]: 2025-11-25 08:22:51.137577979 +0000 UTC m=+0.101777281 container create 52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 08:22:51 compute-0 podman[271481]: 2025-11-25 08:22:51.063523835 +0000 UTC m=+0.027723116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:22:51 compute-0 systemd[1]: Started libpod-conmon-52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a.scope.
Nov 25 08:22:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:22:51 compute-0 podman[271481]: 2025-11-25 08:22:51.328959898 +0000 UTC m=+0.293159229 container init 52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:22:51 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:22:51 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:22:51 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:22:51 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:22:51 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:22:51 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:22:51 compute-0 podman[271481]: 2025-11-25 08:22:51.338407163 +0000 UTC m=+0.302606474 container start 52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 25 08:22:51 compute-0 podman[271481]: 2025-11-25 08:22:51.342791335 +0000 UTC m=+0.306990636 container attach 52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:22:51 compute-0 strange_mendeleev[271497]: 167 167
Nov 25 08:22:51 compute-0 systemd[1]: libpod-52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a.scope: Deactivated successfully.
Nov 25 08:22:51 compute-0 conmon[271497]: conmon 52150e599c254c0a609a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a.scope/container/memory.events
Nov 25 08:22:51 compute-0 podman[271481]: 2025-11-25 08:22:51.348034452 +0000 UTC m=+0.312233703 container died 52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mendeleev, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 25 08:22:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc48b6a9bbcb890314d71694179e246d31c7a0e133981316ef634972c671ae31-merged.mount: Deactivated successfully.
Nov 25 08:22:51 compute-0 podman[271481]: 2025-11-25 08:22:51.389533014 +0000 UTC m=+0.353732265 container remove 52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mendeleev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 08:22:51 compute-0 systemd[1]: libpod-conmon-52150e599c254c0a609a299a5b6812046b68cd0e364cce2c049bfec9a503aa3a.scope: Deactivated successfully.
Nov 25 08:22:51 compute-0 podman[271521]: 2025-11-25 08:22:51.611975462 +0000 UTC m=+0.067626435 container create 838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:22:51 compute-0 systemd[1]: Started libpod-conmon-838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a.scope.
Nov 25 08:22:51 compute-0 podman[271521]: 2025-11-25 08:22:51.590903662 +0000 UTC m=+0.046554625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:22:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:22:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53836abc474f8855562a907a6efc5c07b93dfa99b74da3e1618cfc67c2dff91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:22:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53836abc474f8855562a907a6efc5c07b93dfa99b74da3e1618cfc67c2dff91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:22:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53836abc474f8855562a907a6efc5c07b93dfa99b74da3e1618cfc67c2dff91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:22:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53836abc474f8855562a907a6efc5c07b93dfa99b74da3e1618cfc67c2dff91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:22:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53836abc474f8855562a907a6efc5c07b93dfa99b74da3e1618cfc67c2dff91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:22:51 compute-0 podman[271521]: 2025-11-25 08:22:51.735174551 +0000 UTC m=+0.190825504 container init 838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 08:22:51 compute-0 podman[271521]: 2025-11-25 08:22:51.743395822 +0000 UTC m=+0.199046765 container start 838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:22:51 compute-0 podman[271521]: 2025-11-25 08:22:51.754265706 +0000 UTC m=+0.209916669 container attach 838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 08:22:52 compute-0 ceph-mon[75015]: pgmap v1116: 321 pgs: 321 active+clean; 90 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 87 op/s
Nov 25 08:22:52 compute-0 charming_shannon[271538]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:22:52 compute-0 charming_shannon[271538]: --> relative data size: 1.0
Nov 25 08:22:52 compute-0 charming_shannon[271538]: --> All data devices are unavailable
Nov 25 08:22:52 compute-0 systemd[1]: libpod-838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a.scope: Deactivated successfully.
Nov 25 08:22:52 compute-0 nova_compute[253538]: 2025-11-25 08:22:52.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:52 compute-0 podman[271567]: 2025-11-25 08:22:52.834254895 +0000 UTC m=+0.047490150 container died 838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:22:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f53836abc474f8855562a907a6efc5c07b93dfa99b74da3e1618cfc67c2dff91-merged.mount: Deactivated successfully.
Nov 25 08:22:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 90 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Nov 25 08:22:52 compute-0 podman[271567]: 2025-11-25 08:22:52.974689537 +0000 UTC m=+0.187924802 container remove 838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:22:52 compute-0 systemd[1]: libpod-conmon-838146708a8fe149cd7b49d991b96d1c76d94bffd8cc0eacdb64f11bbb98b56a.scope: Deactivated successfully.
Nov 25 08:22:53 compute-0 sudo[271415]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:53 compute-0 sudo[271583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:22:53 compute-0 sudo[271583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:53 compute-0 sudo[271583]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:53 compute-0 sudo[271608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:22:53 compute-0 sudo[271608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:53 compute-0 sudo[271608]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:22:53
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'vms', '.mgr', 'cephfs.cephfs.data']
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:22:53 compute-0 sudo[271633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:22:53 compute-0 sudo[271633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:53 compute-0 sudo[271633]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:53 compute-0 sudo[271658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:22:53 compute-0 sudo[271658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:22:53 compute-0 ceph-mon[75015]: pgmap v1117: 321 pgs: 321 active+clean; 90 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:22:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:22:53 compute-0 podman[271723]: 2025-11-25 08:22:53.719001177 +0000 UTC m=+0.046937125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:22:53 compute-0 nova_compute[253538]: 2025-11-25 08:22:53.888 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "30491b9b-e328-43ff-9a35-3f5afa6fed34" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:22:53 compute-0 nova_compute[253538]: 2025-11-25 08:22:53.890 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "30491b9b-e328-43ff-9a35-3f5afa6fed34" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:22:53 compute-0 nova_compute[253538]: 2025-11-25 08:22:53.908 253542 DEBUG nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:22:53 compute-0 podman[271723]: 2025-11-25 08:22:53.925736195 +0000 UTC m=+0.253672043 container create 26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 08:22:53 compute-0 systemd[1]: Started libpod-conmon-26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1.scope.
Nov 25 08:22:53 compute-0 nova_compute[253538]: 2025-11-25 08:22:53.988 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:22:53 compute-0 nova_compute[253538]: 2025-11-25 08:22:53.989 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:22:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.005 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.006 253542 INFO nova.compute.claims [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:22:54 compute-0 podman[271723]: 2025-11-25 08:22:54.042094344 +0000 UTC m=+0.370030232 container init 26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 25 08:22:54 compute-0 podman[271723]: 2025-11-25 08:22:54.053426641 +0000 UTC m=+0.381362529 container start 26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_herschel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:22:54 compute-0 podman[271723]: 2025-11-25 08:22:54.060981432 +0000 UTC m=+0.388917310 container attach 26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_herschel, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 08:22:54 compute-0 fervent_herschel[271740]: 167 167
Nov 25 08:22:54 compute-0 systemd[1]: libpod-26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1.scope: Deactivated successfully.
Nov 25 08:22:54 compute-0 podman[271723]: 2025-11-25 08:22:54.062572127 +0000 UTC m=+0.390507975 container died 26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_herschel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:22:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d0eb8c71d7e3aa3fd702729fe91bfd803df4ff7014dada6e31e843a89e88fa5-merged.mount: Deactivated successfully.
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.170 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:22:54 compute-0 podman[271723]: 2025-11-25 08:22:54.220630162 +0000 UTC m=+0.548566010 container remove 26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_herschel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:22:54 compute-0 systemd[1]: libpod-conmon-26d9e662dbb48a488ea0a1eeda63a4b7331e09724b9934fa2134b5b118cba7a1.scope: Deactivated successfully.
Nov 25 08:22:54 compute-0 podman[271737]: 2025-11-25 08:22:54.271108725 +0000 UTC m=+0.295921196 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.362 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:54 compute-0 podman[271800]: 2025-11-25 08:22:54.428339999 +0000 UTC m=+0.054765665 container create 5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 08:22:54 compute-0 systemd[1]: Started libpod-conmon-5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a.scope.
Nov 25 08:22:54 compute-0 podman[271800]: 2025-11-25 08:22:54.398858013 +0000 UTC m=+0.025283719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:22:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:22:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bae54cc7bb89c96a151a488f8930db544ffc9ad8fb1258e03608c868c073fbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:22:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bae54cc7bb89c96a151a488f8930db544ffc9ad8fb1258e03608c868c073fbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:22:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bae54cc7bb89c96a151a488f8930db544ffc9ad8fb1258e03608c868c073fbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:22:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bae54cc7bb89c96a151a488f8930db544ffc9ad8fb1258e03608c868c073fbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:22:54 compute-0 podman[271800]: 2025-11-25 08:22:54.5591229 +0000 UTC m=+0.185548556 container init 5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:22:54 compute-0 podman[271800]: 2025-11-25 08:22:54.572942717 +0000 UTC m=+0.199368363 container start 5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brahmagupta, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 08:22:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:22:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2004869931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.617 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.625 253542 DEBUG nova.compute.provider_tree [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.653 253542 DEBUG nova.scheduler.client.report [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:22:54 compute-0 podman[271800]: 2025-11-25 08:22:54.671836505 +0000 UTC m=+0.298262161 container attach 5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brahmagupta, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.678 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.679 253542 DEBUG nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:22:54 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2004869931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.735 253542 DEBUG nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.736 253542 DEBUG nova.network.neutron [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.763 253542 INFO nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.785 253542 DEBUG nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.867 253542 DEBUG nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.868 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.869 253542 INFO nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Creating image(s)
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.891 253542 DEBUG nova.storage.rbd_utils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.915 253542 DEBUG nova.storage.rbd_utils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:22:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 90 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.937 253542 DEBUG nova.storage.rbd_utils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:22:54 compute-0 nova_compute[253538]: 2025-11-25 08:22:54.941 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:22:55 compute-0 nova_compute[253538]: 2025-11-25 08:22:55.032 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:22:55 compute-0 nova_compute[253538]: 2025-11-25 08:22:55.033 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:22:55 compute-0 nova_compute[253538]: 2025-11-25 08:22:55.033 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:22:55 compute-0 nova_compute[253538]: 2025-11-25 08:22:55.034 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:22:55 compute-0 nova_compute[253538]: 2025-11-25 08:22:55.058 253542 DEBUG nova.storage.rbd_utils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:22:55 compute-0 nova_compute[253538]: 2025-11-25 08:22:55.062 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:22:55 compute-0 nova_compute[253538]: 2025-11-25 08:22:55.183 253542 DEBUG nova.network.neutron [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 08:22:55 compute-0 nova_compute[253538]: 2025-11-25 08:22:55.184 253542 DEBUG nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]: {
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:     "0": [
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:         {
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "devices": [
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "/dev/loop3"
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             ],
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "lv_name": "ceph_lv0",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "lv_size": "21470642176",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "name": "ceph_lv0",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "tags": {
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.cluster_name": "ceph",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.crush_device_class": "",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.encrypted": "0",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.osd_id": "0",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.type": "block",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.vdo": "0"
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             },
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "type": "block",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "vg_name": "ceph_vg0"
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:         }
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:     ],
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:     "1": [
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:         {
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "devices": [
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "/dev/loop4"
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             ],
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "lv_name": "ceph_lv1",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "lv_size": "21470642176",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "name": "ceph_lv1",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "tags": {
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.cluster_name": "ceph",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.crush_device_class": "",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.encrypted": "0",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.osd_id": "1",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.type": "block",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.vdo": "0"
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             },
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "type": "block",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "vg_name": "ceph_vg1"
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:         }
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:     ],
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:     "2": [
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:         {
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "devices": [
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "/dev/loop5"
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             ],
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "lv_name": "ceph_lv2",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "lv_size": "21470642176",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "name": "ceph_lv2",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "tags": {
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.cluster_name": "ceph",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.crush_device_class": "",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.encrypted": "0",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.osd_id": "2",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.type": "block",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:                 "ceph.vdo": "0"
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             },
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "type": "block",
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:             "vg_name": "ceph_vg2"
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:         }
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]:     ]
Nov 25 08:22:55 compute-0 musing_brahmagupta[271817]: }
Nov 25 08:22:55 compute-0 systemd[1]: libpod-5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a.scope: Deactivated successfully.
Nov 25 08:22:55 compute-0 conmon[271817]: conmon 5d383cbaab5a384db3d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a.scope/container/memory.events
Nov 25 08:22:55 compute-0 podman[271800]: 2025-11-25 08:22:55.380750804 +0000 UTC m=+1.007176470 container died 5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 08:22:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bae54cc7bb89c96a151a488f8930db544ffc9ad8fb1258e03608c868c073fbd-merged.mount: Deactivated successfully.
Nov 25 08:22:55 compute-0 ceph-mon[75015]: pgmap v1118: 321 pgs: 321 active+clean; 90 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Nov 25 08:22:55 compute-0 podman[271800]: 2025-11-25 08:22:55.828923193 +0000 UTC m=+1.455348829 container remove 5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brahmagupta, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:22:55 compute-0 systemd[1]: libpod-conmon-5d383cbaab5a384db3d99ca8fa437ac4e7646dc6fa8dcd579c3c9d6d24b2b78a.scope: Deactivated successfully.
Nov 25 08:22:55 compute-0 sudo[271658]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:55 compute-0 sudo[271935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:22:55 compute-0 sudo[271935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:55 compute-0 sudo[271935]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:56 compute-0 sudo[271960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:22:56 compute-0 sudo[271960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:56 compute-0 sudo[271960]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:56 compute-0 sudo[271985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:22:56 compute-0 sudo[271985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:56 compute-0 sudo[271985]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:56 compute-0 sudo[272010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:22:56 compute-0 sudo[272010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.222546) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058976222827, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2092, "num_deletes": 251, "total_data_size": 3409477, "memory_usage": 3474464, "flush_reason": "Manual Compaction"}
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058976288592, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3331619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20976, "largest_seqno": 23067, "table_properties": {"data_size": 3322153, "index_size": 5960, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19488, "raw_average_key_size": 20, "raw_value_size": 3303054, "raw_average_value_size": 3426, "num_data_blocks": 268, "num_entries": 964, "num_filter_entries": 964, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764058759, "oldest_key_time": 1764058759, "file_creation_time": 1764058976, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 66115 microseconds, and 8480 cpu microseconds.
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.288668) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3331619 bytes OK
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.288703) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.304048) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.304102) EVENT_LOG_v1 {"time_micros": 1764058976304088, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.304124) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3400666, prev total WAL file size 3400666, number of live WAL files 2.
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.305522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3253KB)], [50(7551KB)]
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058976305576, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11064539, "oldest_snapshot_seqno": -1}
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4814 keys, 9321849 bytes, temperature: kUnknown
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058976635891, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9321849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9287209, "index_size": 21474, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12101, "raw_key_size": 118322, "raw_average_key_size": 24, "raw_value_size": 9197735, "raw_average_value_size": 1910, "num_data_blocks": 907, "num_entries": 4814, "num_filter_entries": 4814, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764058976, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.636203) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9321849 bytes
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.639200) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 33.5 rd, 28.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5332, records dropped: 518 output_compression: NoCompression
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.639230) EVENT_LOG_v1 {"time_micros": 1764058976639216, "job": 26, "event": "compaction_finished", "compaction_time_micros": 330408, "compaction_time_cpu_micros": 36959, "output_level": 6, "num_output_files": 1, "total_output_size": 9321849, "num_input_records": 5332, "num_output_records": 4814, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:22:56 compute-0 podman[272075]: 2025-11-25 08:22:56.639794776 +0000 UTC m=+0.129190559 container create 89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058976640479, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764058976643018, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.305408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.643114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.643123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.643126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.643130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:22:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:22:56.643133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:22:56 compute-0 podman[272075]: 2025-11-25 08:22:56.569445336 +0000 UTC m=+0.058841169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:22:56 compute-0 nova_compute[253538]: 2025-11-25 08:22:56.682 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:22:56 compute-0 systemd[1]: Started libpod-conmon-89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842.scope.
Nov 25 08:22:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:22:56 compute-0 nova_compute[253538]: 2025-11-25 08:22:56.769 253542 DEBUG nova.storage.rbd_utils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] resizing rbd image 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:22:56 compute-0 podman[272075]: 2025-11-25 08:22:56.806208445 +0000 UTC m=+0.295604218 container init 89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:22:56 compute-0 podman[272075]: 2025-11-25 08:22:56.816269037 +0000 UTC m=+0.305664810 container start 89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:22:56 compute-0 great_wescoff[272109]: 167 167
Nov 25 08:22:56 compute-0 systemd[1]: libpod-89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842.scope: Deactivated successfully.
Nov 25 08:22:56 compute-0 conmon[272109]: conmon 89f92b51218aaeeba043 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842.scope/container/memory.events
Nov 25 08:22:56 compute-0 podman[272075]: 2025-11-25 08:22:56.892561273 +0000 UTC m=+0.381957026 container attach 89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:22:56 compute-0 podman[272075]: 2025-11-25 08:22:56.893477299 +0000 UTC m=+0.382873042 container died 89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 08:22:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 90 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Nov 25 08:22:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d57164362c9ccd3bd8b67c02b10ce51ccb8222f91c6f0d05f3ad8467b8964d6-merged.mount: Deactivated successfully.
Nov 25 08:22:57 compute-0 podman[272075]: 2025-11-25 08:22:57.38460498 +0000 UTC m=+0.874000743 container remove 89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:22:57 compute-0 systemd[1]: libpod-conmon-89f92b51218aaeeba043e8444f787b9bf9188d6176042905d6902212d5284842.scope: Deactivated successfully.
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.497 253542 DEBUG nova.objects.instance [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lazy-loading 'migration_context' on Instance uuid 30491b9b-e328-43ff-9a35-3f5afa6fed34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:22:57 compute-0 podman[272162]: 2025-11-25 08:22:57.50209174 +0000 UTC m=+0.516961686 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.512 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.513 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Ensure instance console log exists: /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.514 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.514 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.515 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.517 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.528 253542 WARNING nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.538 253542 DEBUG nova.virt.libvirt.host [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.540 253542 DEBUG nova.virt.libvirt.host [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.545 253542 DEBUG nova.virt.libvirt.host [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.546 253542 DEBUG nova.virt.libvirt.host [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.547 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.548 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.549 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.550 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.551 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.551 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.552 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.552 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.553 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.553 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.553 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.553 253542 DEBUG nova.virt.hardware [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.557 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.662 253542 DEBUG oslo_concurrency.processutils [None req-457a00a4-03e6-4892-99d8-37e20d3e1cb2 3469d64ae20e4870ad703ac6d75a2a35 5a87aa3f5a47431ba468b9c1fdbcf5cd - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:22:57 compute-0 podman[272208]: 2025-11-25 08:22:57.597590963 +0000 UTC m=+0.033533129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.697 253542 DEBUG oslo_concurrency.processutils [None req-457a00a4-03e6-4892-99d8-37e20d3e1cb2 3469d64ae20e4870ad703ac6d75a2a35 5a87aa3f5a47431ba468b9c1fdbcf5cd - - default default] CMD "env LANG=C uptime" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:22:57 compute-0 podman[272208]: 2025-11-25 08:22:57.719153007 +0000 UTC m=+0.155095183 container create 11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:22:57 compute-0 nova_compute[253538]: 2025-11-25 08:22:57.830 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:57 compute-0 systemd[1]: Started libpod-conmon-11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a.scope.
Nov 25 08:22:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:22:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b0cd7dc87567a8c9cdcef12457cdd99e347bdc20399c0593b1b4ca4b9036b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:22:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b0cd7dc87567a8c9cdcef12457cdd99e347bdc20399c0593b1b4ca4b9036b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:22:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b0cd7dc87567a8c9cdcef12457cdd99e347bdc20399c0593b1b4ca4b9036b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:22:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b0cd7dc87567a8c9cdcef12457cdd99e347bdc20399c0593b1b4ca4b9036b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:22:57 compute-0 podman[272208]: 2025-11-25 08:22:57.91569571 +0000 UTC m=+0.351637876 container init 11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 08:22:57 compute-0 podman[272208]: 2025-11-25 08:22:57.923428026 +0000 UTC m=+0.359370152 container start 11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:22:57 compute-0 podman[272208]: 2025-11-25 08:22:57.965739081 +0000 UTC m=+0.401681257 container attach 11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:22:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:22:58 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/643865434' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:22:58 compute-0 nova_compute[253538]: 2025-11-25 08:22:58.052 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:22:58 compute-0 nova_compute[253538]: 2025-11-25 08:22:58.071 253542 DEBUG nova.storage.rbd_utils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:22:58 compute-0 nova_compute[253538]: 2025-11-25 08:22:58.074 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:22:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:22:58 compute-0 ceph-mon[75015]: pgmap v1119: 321 pgs: 321 active+clean; 90 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Nov 25 08:22:58 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/643865434' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:22:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:22:58 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121262340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:22:58 compute-0 nova_compute[253538]: 2025-11-25 08:22:58.564 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:22:58 compute-0 nova_compute[253538]: 2025-11-25 08:22:58.566 253542 DEBUG nova.objects.instance [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lazy-loading 'pci_devices' on Instance uuid 30491b9b-e328-43ff-9a35-3f5afa6fed34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:22:58 compute-0 nova_compute[253538]: 2025-11-25 08:22:58.579 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:22:58 compute-0 nova_compute[253538]:   <uuid>30491b9b-e328-43ff-9a35-3f5afa6fed34</uuid>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   <name>instance-00000006</name>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <nova:name>tempest-LiveMigrationNegativeTest-server-2012193751</nova:name>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:22:57</nova:creationTime>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:22:58 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:22:58 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:22:58 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:22:58 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:22:58 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:22:58 compute-0 nova_compute[253538]:         <nova:user uuid="e93ad3d8111e48218a5ab899be7d3708">tempest-LiveMigrationNegativeTest-1761635651-project-member</nova:user>
Nov 25 08:22:58 compute-0 nova_compute[253538]:         <nova:project uuid="029b328a15754b45970b2053b56564bc">tempest-LiveMigrationNegativeTest-1761635651</nova:project>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <system>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <entry name="serial">30491b9b-e328-43ff-9a35-3f5afa6fed34</entry>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <entry name="uuid">30491b9b-e328-43ff-9a35-3f5afa6fed34</entry>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     </system>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   <os>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   </os>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   <features>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   </features>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/30491b9b-e328-43ff-9a35-3f5afa6fed34_disk">
Nov 25 08:22:58 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       </source>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:22:58 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/30491b9b-e328-43ff-9a35-3f5afa6fed34_disk.config">
Nov 25 08:22:58 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       </source>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:22:58 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34/console.log" append="off"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <video>
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     </video>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:22:58 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:22:58 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:22:58 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:22:58 compute-0 nova_compute[253538]: </domain>
Nov 25 08:22:58 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:22:58 compute-0 nova_compute[253538]: 2025-11-25 08:22:58.625 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:22:58 compute-0 nova_compute[253538]: 2025-11-25 08:22:58.626 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:22:58 compute-0 nova_compute[253538]: 2025-11-25 08:22:58.626 253542 INFO nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Using config drive
Nov 25 08:22:58 compute-0 nova_compute[253538]: 2025-11-25 08:22:58.647 253542 DEBUG nova.storage.rbd_utils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:22:58 compute-0 nova_compute[253538]: 2025-11-25 08:22:58.876 253542 INFO nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Creating config drive at /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34/disk.config
Nov 25 08:22:58 compute-0 nova_compute[253538]: 2025-11-25 08:22:58.882 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_td_jay8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:22:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 119 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 87 op/s
Nov 25 08:22:59 compute-0 amazing_wing[272244]: {
Nov 25 08:22:59 compute-0 amazing_wing[272244]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:22:59 compute-0 amazing_wing[272244]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:22:59 compute-0 amazing_wing[272244]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:22:59 compute-0 amazing_wing[272244]:         "osd_id": 1,
Nov 25 08:22:59 compute-0 amazing_wing[272244]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:22:59 compute-0 amazing_wing[272244]:         "type": "bluestore"
Nov 25 08:22:59 compute-0 amazing_wing[272244]:     },
Nov 25 08:22:59 compute-0 amazing_wing[272244]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:22:59 compute-0 amazing_wing[272244]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:22:59 compute-0 amazing_wing[272244]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:22:59 compute-0 amazing_wing[272244]:         "osd_id": 2,
Nov 25 08:22:59 compute-0 amazing_wing[272244]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:22:59 compute-0 amazing_wing[272244]:         "type": "bluestore"
Nov 25 08:22:59 compute-0 amazing_wing[272244]:     },
Nov 25 08:22:59 compute-0 amazing_wing[272244]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:22:59 compute-0 amazing_wing[272244]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:22:59 compute-0 amazing_wing[272244]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:22:59 compute-0 amazing_wing[272244]:         "osd_id": 0,
Nov 25 08:22:59 compute-0 amazing_wing[272244]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:22:59 compute-0 amazing_wing[272244]:         "type": "bluestore"
Nov 25 08:22:59 compute-0 amazing_wing[272244]:     }
Nov 25 08:22:59 compute-0 amazing_wing[272244]: }
Nov 25 08:22:59 compute-0 nova_compute[253538]: 2025-11-25 08:22:59.019 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_td_jay8" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:22:59 compute-0 systemd[1]: libpod-11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a.scope: Deactivated successfully.
Nov 25 08:22:59 compute-0 systemd[1]: libpod-11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a.scope: Consumed 1.098s CPU time.
Nov 25 08:22:59 compute-0 podman[272208]: 2025-11-25 08:22:59.03785303 +0000 UTC m=+1.473795206 container died 11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 08:22:59 compute-0 nova_compute[253538]: 2025-11-25 08:22:59.059 253542 DEBUG nova.storage.rbd_utils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:22:59 compute-0 nova_compute[253538]: 2025-11-25 08:22:59.064 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34/disk.config 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5b0cd7dc87567a8c9cdcef12457cdd99e347bdc20399c0593b1b4ca4b9036b9-merged.mount: Deactivated successfully.
Nov 25 08:22:59 compute-0 podman[272208]: 2025-11-25 08:22:59.184788123 +0000 UTC m=+1.620730269 container remove 11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 08:22:59 compute-0 systemd[1]: libpod-conmon-11c1e00090e8e100697aff36008941eb455ae4252a1321ca11a872adf98a986a.scope: Deactivated successfully.
Nov 25 08:22:59 compute-0 sudo[272010]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:22:59 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:22:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:22:59 compute-0 nova_compute[253538]: 2025-11-25 08:22:59.255 253542 DEBUG oslo_concurrency.processutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34/disk.config 30491b9b-e328-43ff-9a35-3f5afa6fed34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:22:59 compute-0 nova_compute[253538]: 2025-11-25 08:22:59.256 253542 INFO nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Deleting local config drive /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34/disk.config because it was imported into RBD.
Nov 25 08:22:59 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:22:59 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 97476215-8325-442e-a4e8-a2f3d5922981 does not exist
Nov 25 08:22:59 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 82f75854-8c30-4852-b118-62f75371b604 does not exist
Nov 25 08:22:59 compute-0 systemd-machined[215790]: New machine qemu-6-instance-00000006.
Nov 25 08:22:59 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Nov 25 08:22:59 compute-0 sudo[272391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:22:59 compute-0 sudo[272391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:59 compute-0 sudo[272391]: pam_unix(sudo:session): session closed for user root
Nov 25 08:22:59 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3121262340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:22:59 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:22:59 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:22:59 compute-0 nova_compute[253538]: 2025-11-25 08:22:59.365 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:22:59 compute-0 sudo[272424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:22:59 compute-0 sudo[272424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:22:59 compute-0 sudo[272424]: pam_unix(sudo:session): session closed for user root
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.092 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058980.0920835, 30491b9b-e328-43ff-9a35-3f5afa6fed34 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.094 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] VM Resumed (Lifecycle Event)
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.098 253542 DEBUG nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.099 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.113 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquiring lock "a8f34404-8153-46bd-aea0-02d909cd66a9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.114 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "a8f34404-8153-46bd-aea0-02d909cd66a9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.116 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.117 253542 INFO nova.virt.libvirt.driver [-] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Instance spawned successfully.
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.119 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.123 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.145 253542 DEBUG nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.151 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.151 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058980.0946102, 30491b9b-e328-43ff-9a35-3f5afa6fed34 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.151 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] VM Started (Lifecycle Event)
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.156 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.157 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.157 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.158 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.158 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.159 253542 DEBUG nova.virt.libvirt.driver [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.182 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.187 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.230 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.233 253542 INFO nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Took 5.37 seconds to spawn the instance on the hypervisor.
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.234 253542 DEBUG nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.265 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.266 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.276 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.277 253542 INFO nova.compute.claims [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.314 253542 INFO nova.compute.manager [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Took 6.35 seconds to build instance.
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.346 253542 DEBUG oslo_concurrency.lockutils [None req-07a74685-1b4c-4827-a544-e8eec088d775 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "30491b9b-e328-43ff-9a35-3f5afa6fed34" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:00 compute-0 ceph-mon[75015]: pgmap v1120: 321 pgs: 321 active+clean; 119 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 87 op/s
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.424 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:00 compute-0 ovn_controller[152859]: 2025-11-25T08:23:00Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:b2:39 10.100.0.13
Nov 25 08:23:00 compute-0 ovn_controller[152859]: 2025-11-25T08:23:00Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:b2:39 10.100.0.13
Nov 25 08:23:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:23:00 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3312403547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.908 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.914 253542 DEBUG nova.compute.provider_tree [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:23:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 156 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 883 KiB/s rd, 2.8 MiB/s wr, 78 op/s
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.930 253542 DEBUG nova.scheduler.client.report [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.956 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:00 compute-0 nova_compute[253538]: 2025-11-25 08:23:00.957 253542 DEBUG nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.016 253542 DEBUG nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.029 253542 INFO nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.048 253542 DEBUG nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.125 253542 DEBUG nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.126 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.127 253542 INFO nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Creating image(s)
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.152 253542 DEBUG nova.storage.rbd_utils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] rbd image a8f34404-8153-46bd-aea0-02d909cd66a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.180 253542 DEBUG nova.storage.rbd_utils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] rbd image a8f34404-8153-46bd-aea0-02d909cd66a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.204 253542 DEBUG nova.storage.rbd_utils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] rbd image a8f34404-8153-46bd-aea0-02d909cd66a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.208 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.272 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.273 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.274 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.274 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.297 253542 DEBUG nova.storage.rbd_utils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] rbd image a8f34404-8153-46bd-aea0-02d909cd66a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.301 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a8f34404-8153-46bd-aea0-02d909cd66a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:01 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3312403547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.634 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a8f34404-8153-46bd-aea0-02d909cd66a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.704 253542 DEBUG nova.storage.rbd_utils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] resizing rbd image a8f34404-8153-46bd-aea0-02d909cd66a9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.798 253542 DEBUG nova.objects.instance [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lazy-loading 'migration_context' on Instance uuid a8f34404-8153-46bd-aea0-02d909cd66a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.809 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.809 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Ensure instance console log exists: /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.810 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.810 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.810 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.813 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.816 253542 WARNING nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.820 253542 DEBUG nova.virt.libvirt.host [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.820 253542 DEBUG nova.virt.libvirt.host [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.823 253542 DEBUG nova.virt.libvirt.host [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.823 253542 DEBUG nova.virt.libvirt.host [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.824 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.824 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.824 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.824 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.824 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.825 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.825 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.825 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.825 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.825 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.825 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.825 253542 DEBUG nova.virt.hardware [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:23:01 compute-0 nova_compute[253538]: 2025-11-25 08:23:01.828 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:23:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1620578608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:23:02 compute-0 nova_compute[253538]: 2025-11-25 08:23:02.252 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:02 compute-0 nova_compute[253538]: 2025-11-25 08:23:02.272 253542 DEBUG nova.storage.rbd_utils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] rbd image a8f34404-8153-46bd-aea0-02d909cd66a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:02 compute-0 nova_compute[253538]: 2025-11-25 08:23:02.275 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:02 compute-0 ceph-mon[75015]: pgmap v1121: 321 pgs: 321 active+clean; 156 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 883 KiB/s rd, 2.8 MiB/s wr, 78 op/s
Nov 25 08:23:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1620578608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:23:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:23:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1367038960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:23:02 compute-0 nova_compute[253538]: 2025-11-25 08:23:02.702 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:02 compute-0 nova_compute[253538]: 2025-11-25 08:23:02.705 253542 DEBUG nova.objects.instance [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lazy-loading 'pci_devices' on Instance uuid a8f34404-8153-46bd-aea0-02d909cd66a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:02 compute-0 nova_compute[253538]: 2025-11-25 08:23:02.728 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:23:02 compute-0 nova_compute[253538]:   <uuid>a8f34404-8153-46bd-aea0-02d909cd66a9</uuid>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   <name>instance-00000007</name>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerDiagnosticsV248Test-server-1222467350</nova:name>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:23:01</nova:creationTime>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:23:02 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:23:02 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:23:02 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:23:02 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:23:02 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:23:02 compute-0 nova_compute[253538]:         <nova:user uuid="2efed9769e444ce29167cef08d536e28">tempest-ServerDiagnosticsV248Test-1239151635-project-member</nova:user>
Nov 25 08:23:02 compute-0 nova_compute[253538]:         <nova:project uuid="87f3ce31098e4e0cb8717a11dd3fee23">tempest-ServerDiagnosticsV248Test-1239151635</nova:project>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <system>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <entry name="serial">a8f34404-8153-46bd-aea0-02d909cd66a9</entry>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <entry name="uuid">a8f34404-8153-46bd-aea0-02d909cd66a9</entry>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     </system>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   <os>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   </os>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   <features>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   </features>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/a8f34404-8153-46bd-aea0-02d909cd66a9_disk">
Nov 25 08:23:02 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       </source>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:23:02 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/a8f34404-8153-46bd-aea0-02d909cd66a9_disk.config">
Nov 25 08:23:02 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       </source>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:23:02 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9/console.log" append="off"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <video>
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     </video>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:23:02 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:23:02 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:23:02 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:23:02 compute-0 nova_compute[253538]: </domain>
Nov 25 08:23:02 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:23:02 compute-0 nova_compute[253538]: 2025-11-25 08:23:02.801 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:23:02 compute-0 nova_compute[253538]: 2025-11-25 08:23:02.801 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:23:02 compute-0 nova_compute[253538]: 2025-11-25 08:23:02.801 253542 INFO nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Using config drive
Nov 25 08:23:02 compute-0 nova_compute[253538]: 2025-11-25 08:23:02.828 253542 DEBUG nova.storage.rbd_utils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] rbd image a8f34404-8153-46bd-aea0-02d909cd66a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:02 compute-0 podman[272747]: 2025-11-25 08:23:02.832432795 +0000 UTC m=+0.085248148 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 08:23:02 compute-0 nova_compute[253538]: 2025-11-25 08:23:02.841 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 193 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.9 MiB/s wr, 139 op/s
Nov 25 08:23:03 compute-0 nova_compute[253538]: 2025-11-25 08:23:03.034 253542 INFO nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Creating config drive at /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9/disk.config
Nov 25 08:23:03 compute-0 nova_compute[253538]: 2025-11-25 08:23:03.039 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6x9uigvi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:23:03 compute-0 nova_compute[253538]: 2025-11-25 08:23:03.181 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6x9uigvi" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:03 compute-0 nova_compute[253538]: 2025-11-25 08:23:03.202 253542 DEBUG nova.storage.rbd_utils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] rbd image a8f34404-8153-46bd-aea0-02d909cd66a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:03 compute-0 nova_compute[253538]: 2025-11-25 08:23:03.206 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9/disk.config a8f34404-8153-46bd-aea0-02d909cd66a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:03 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1367038960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:23:03 compute-0 ceph-mon[75015]: pgmap v1122: 321 pgs: 321 active+clean; 193 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.9 MiB/s wr, 139 op/s
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012975048023702545 of space, bias 1.0, pg target 0.38925144071107637 quantized to 32 (current 32)
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:23:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:23:03 compute-0 nova_compute[253538]: 2025-11-25 08:23:03.714 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "7779552a-aa17-4b2f-8b15-69121d6b6a63" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:03 compute-0 nova_compute[253538]: 2025-11-25 08:23:03.715 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "7779552a-aa17-4b2f-8b15-69121d6b6a63" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:03 compute-0 nova_compute[253538]: 2025-11-25 08:23:03.734 253542 DEBUG nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:23:03 compute-0 nova_compute[253538]: 2025-11-25 08:23:03.799 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:03 compute-0 nova_compute[253538]: 2025-11-25 08:23:03.799 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:03 compute-0 nova_compute[253538]: 2025-11-25 08:23:03.807 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:23:03 compute-0 nova_compute[253538]: 2025-11-25 08:23:03.807 253542 INFO nova.compute.claims [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:23:03 compute-0 nova_compute[253538]: 2025-11-25 08:23:03.904 253542 DEBUG oslo_concurrency.processutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9/disk.config a8f34404-8153-46bd-aea0-02d909cd66a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.698s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:03 compute-0 nova_compute[253538]: 2025-11-25 08:23:03.904 253542 INFO nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Deleting local config drive /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9/disk.config because it was imported into RBD.
Nov 25 08:23:03 compute-0 systemd-machined[215790]: New machine qemu-7-instance-00000007.
Nov 25 08:23:03 compute-0 nova_compute[253538]: 2025-11-25 08:23:03.956 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:03 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.367 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:23:04 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1437880790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.413 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.421 253542 DEBUG nova.compute.provider_tree [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.437 253542 DEBUG nova.scheduler.client.report [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.463 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.464 253542 DEBUG nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.514 253542 DEBUG nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.515 253542 DEBUG nova.network.neutron [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.536 253542 INFO nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.553 253542 DEBUG nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:23:04 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1437880790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.674 253542 DEBUG nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.676 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.676 253542 INFO nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Creating image(s)
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.694 253542 DEBUG nova.storage.rbd_utils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.716 253542 DEBUG nova.storage.rbd_utils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.739 253542 DEBUG nova.storage.rbd_utils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.743 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.797 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.798 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.798 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.798 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.815 253542 DEBUG nova.storage.rbd_utils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:04 compute-0 nova_compute[253538]: 2025-11-25 08:23:04.817 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 215 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 201 op/s
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.105 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058985.10516, a8f34404-8153-46bd-aea0-02d909cd66a9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.107 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] VM Resumed (Lifecycle Event)
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.112 253542 DEBUG nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.113 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.118 253542 INFO nova.virt.libvirt.driver [-] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Instance spawned successfully.
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.119 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.144 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.150 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.153 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.154 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.154 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.155 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.155 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.155 253542 DEBUG nova.virt.libvirt.driver [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.182 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.183 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058985.1070108, a8f34404-8153-46bd-aea0-02d909cd66a9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.183 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] VM Started (Lifecycle Event)
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.196 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.200 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.215 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.304 253542 INFO nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Took 4.18 seconds to spawn the instance on the hypervisor.
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.304 253542 DEBUG nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.372 253542 INFO nova.compute.manager [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Took 5.15 seconds to build instance.
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.401 253542 DEBUG oslo_concurrency.lockutils [None req-512552d9-98f5-4ecc-a713-b9805f714944 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "a8f34404-8153-46bd-aea0-02d909cd66a9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:23:05 compute-0 ceph-mon[75015]: pgmap v1123: 321 pgs: 321 active+clean; 215 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 201 op/s
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.739 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.922s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.765 253542 DEBUG nova.network.neutron [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.766 253542 DEBUG nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:23:05 compute-0 nova_compute[253538]: 2025-11-25 08:23:05.808 253542 DEBUG nova.storage.rbd_utils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] resizing rbd image 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.261 253542 DEBUG nova.objects.instance [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lazy-loading 'migration_context' on Instance uuid 7779552a-aa17-4b2f-8b15-69121d6b6a63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.272 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.273 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Ensure instance console log exists: /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.273 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.273 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.274 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.276 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.281 253542 WARNING nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.285 253542 DEBUG nova.virt.libvirt.host [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.286 253542 DEBUG nova.virt.libvirt.host [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.288 253542 DEBUG nova.virt.libvirt.host [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.288 253542 DEBUG nova.virt.libvirt.host [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.289 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.289 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.289 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.289 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.289 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.290 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.290 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.290 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.290 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.290 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.290 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.291 253542 DEBUG nova.virt.hardware [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.293 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.552 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.578 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 08:23:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:23:06 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3598226049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.783 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.818 253542 DEBUG nova.storage.rbd_utils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.825 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:06 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3598226049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.850 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.851 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.851 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:23:06 compute-0 nova_compute[253538]: 2025-11-25 08:23:06.851 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4bc97ee2-5aba-4bb5-86e2-f0806a200c04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 215 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 201 op/s
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.000 253542 DEBUG nova.compute.manager [None req-4d910055-5443-4b5f-9a6b-aaa1a0992ab6 d603ec87a6444149a76c35f9302f5d37 5fb5f42261cd4144abf24782eefc3dec - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.004 253542 INFO nova.compute.manager [None req-4d910055-5443-4b5f-9a6b-aaa1a0992ab6 d603ec87a6444149a76c35f9302f5d37 5fb5f42261cd4144abf24782eefc3dec - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Retrieving diagnostics
Nov 25 08:23:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:23:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1309992808' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.237 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.239 253542 DEBUG nova.objects.instance [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lazy-loading 'pci_devices' on Instance uuid 7779552a-aa17-4b2f-8b15-69121d6b6a63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.253 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:23:07 compute-0 nova_compute[253538]:   <uuid>7779552a-aa17-4b2f-8b15-69121d6b6a63</uuid>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   <name>instance-00000008</name>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <nova:name>tempest-LiveMigrationNegativeTest-server-1157235376</nova:name>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:23:06</nova:creationTime>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:23:07 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:23:07 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:23:07 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:23:07 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:23:07 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:23:07 compute-0 nova_compute[253538]:         <nova:user uuid="e93ad3d8111e48218a5ab899be7d3708">tempest-LiveMigrationNegativeTest-1761635651-project-member</nova:user>
Nov 25 08:23:07 compute-0 nova_compute[253538]:         <nova:project uuid="029b328a15754b45970b2053b56564bc">tempest-LiveMigrationNegativeTest-1761635651</nova:project>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <system>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <entry name="serial">7779552a-aa17-4b2f-8b15-69121d6b6a63</entry>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <entry name="uuid">7779552a-aa17-4b2f-8b15-69121d6b6a63</entry>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     </system>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   <os>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   </os>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   <features>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   </features>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7779552a-aa17-4b2f-8b15-69121d6b6a63_disk">
Nov 25 08:23:07 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       </source>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:23:07 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7779552a-aa17-4b2f-8b15-69121d6b6a63_disk.config">
Nov 25 08:23:07 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       </source>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:23:07 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63/console.log" append="off"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <video>
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     </video>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:23:07 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:23:07 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:23:07 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:23:07 compute-0 nova_compute[253538]: </domain>
Nov 25 08:23:07 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.325 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.325 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.326 253542 INFO nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Using config drive
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.350 253542 DEBUG nova.storage.rbd_utils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.524 253542 INFO nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Creating config drive at /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63/disk.config
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.529 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcjpluod_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.657 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcjpluod_" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.681 253542 DEBUG nova.storage.rbd_utils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] rbd image 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.684 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63/disk.config 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.844 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:07 compute-0 ceph-mon[75015]: pgmap v1124: 321 pgs: 321 active+clean; 215 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 201 op/s
Nov 25 08:23:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1309992808' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.968 253542 DEBUG oslo_concurrency.processutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63/disk.config 7779552a-aa17-4b2f-8b15-69121d6b6a63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:07 compute-0 nova_compute[253538]: 2025-11-25 08:23:07.970 253542 INFO nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Deleting local config drive /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63/disk.config because it was imported into RBD.
Nov 25 08:23:08 compute-0 systemd-machined[215790]: New machine qemu-8-instance-00000008.
Nov 25 08:23:08 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Nov 25 08:23:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.610 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058988.6102822, 7779552a-aa17-4b2f-8b15-69121d6b6a63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.611 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] VM Resumed (Lifecycle Event)
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.615 253542 DEBUG nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.615 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.618 253542 INFO nova.virt.libvirt.driver [-] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Instance spawned successfully.
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.618 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.630 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.638 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.647 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.647 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.648 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.648 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.648 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.649 253542 DEBUG nova.virt.libvirt.driver [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.656 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.657 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058988.6144156, 7779552a-aa17-4b2f-8b15-69121d6b6a63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.657 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] VM Started (Lifecycle Event)
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.695 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.701 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.729 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.741 253542 INFO nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Took 4.07 seconds to spawn the instance on the hypervisor.
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.742 253542 DEBUG nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.839 253542 INFO nova.compute.manager [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Took 5.06 seconds to build instance.
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.850 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Updating instance_info_cache with network_info: [{"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.867 253542 DEBUG oslo_concurrency.lockutils [None req-cc692998-6ffd-4743-8f4d-b19c24a48e49 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "7779552a-aa17-4b2f-8b15-69121d6b6a63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.868 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-4bc97ee2-5aba-4bb5-86e2-f0806a200c04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.868 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.869 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.871 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:23:08 compute-0 nova_compute[253538]: 2025-11-25 08:23:08.872 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:23:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 247 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 6.7 MiB/s wr, 255 op/s
Nov 25 08:23:09 compute-0 nova_compute[253538]: 2025-11-25 08:23:09.368 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:10 compute-0 ceph-mon[75015]: pgmap v1125: 321 pgs: 321 active+clean; 247 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 6.7 MiB/s wr, 255 op/s
Nov 25 08:23:10 compute-0 nova_compute[253538]: 2025-11-25 08:23:10.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:23:10 compute-0 nova_compute[253538]: 2025-11-25 08:23:10.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:10 compute-0 nova_compute[253538]: 2025-11-25 08:23:10.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:10 compute-0 nova_compute[253538]: 2025-11-25 08:23:10.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:10 compute-0 nova_compute[253538]: 2025-11-25 08:23:10.576 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:23:10 compute-0 nova_compute[253538]: 2025-11-25 08:23:10.576 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:10 compute-0 nova_compute[253538]: 2025-11-25 08:23:10.729 253542 DEBUG nova.objects.instance [None req-86f476a4-aebb-4c1d-afe8-e5de1e03287f b0630450e2d545b5ac0a7dcada11588b 1249e66da1ff4135bf22d5fc5ecb4674 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7779552a-aa17-4b2f-8b15-69121d6b6a63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:10 compute-0 nova_compute[253538]: 2025-11-25 08:23:10.756 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764058990.7553976, 7779552a-aa17-4b2f-8b15-69121d6b6a63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:23:10 compute-0 nova_compute[253538]: 2025-11-25 08:23:10.757 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] VM Paused (Lifecycle Event)
Nov 25 08:23:10 compute-0 nova_compute[253538]: 2025-11-25 08:23:10.779 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:10 compute-0 nova_compute[253538]: 2025-11-25 08:23:10.785 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:23:10 compute-0 nova_compute[253538]: 2025-11-25 08:23:10.806 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 08:23:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 262 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.3 MiB/s wr, 296 op/s
Nov 25 08:23:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:23:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909703305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.323 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:11 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2909703305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:11 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 25 08:23:11 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 2.826s CPU time.
Nov 25 08:23:11 compute-0 systemd-machined[215790]: Machine qemu-8-instance-00000008 terminated.
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.505 253542 DEBUG nova.compute.manager [None req-86f476a4-aebb-4c1d-afe8-e5de1e03287f b0630450e2d545b5ac0a7dcada11588b 1249e66da1ff4135bf22d5fc5ecb4674 - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.607 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.608 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.613 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.614 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.614 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.618 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.618 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.624 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.624 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.804 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.805 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4253MB free_disk=59.88930892944336GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.806 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.806 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.877 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 4bc97ee2-5aba-4bb5-86e2-f0806a200c04 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.878 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 30491b9b-e328-43ff-9a35-3f5afa6fed34 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.878 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance a8f34404-8153-46bd-aea0-02d909cd66a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.878 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 7779552a-aa17-4b2f-8b15-69121d6b6a63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.879 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.879 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:23:11 compute-0 nova_compute[253538]: 2025-11-25 08:23:11.963 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:11 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 08:23:12 compute-0 ceph-mon[75015]: pgmap v1126: 321 pgs: 321 active+clean; 262 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.3 MiB/s wr, 296 op/s
Nov 25 08:23:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:23:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1025853926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:12 compute-0 nova_compute[253538]: 2025-11-25 08:23:12.426 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:12 compute-0 nova_compute[253538]: 2025-11-25 08:23:12.431 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:23:12 compute-0 nova_compute[253538]: 2025-11-25 08:23:12.446 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:23:12 compute-0 nova_compute[253538]: 2025-11-25 08:23:12.546 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:23:12 compute-0 nova_compute[253538]: 2025-11-25 08:23:12.547 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:12 compute-0 nova_compute[253538]: 2025-11-25 08:23:12.848 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 264 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 5.1 MiB/s wr, 303 op/s
Nov 25 08:23:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:23:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1025853926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:13 compute-0 nova_compute[253538]: 2025-11-25 08:23:13.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:23:13 compute-0 nova_compute[253538]: 2025-11-25 08:23:13.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:23:14 compute-0 ceph-mon[75015]: pgmap v1127: 321 pgs: 321 active+clean; 264 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 5.1 MiB/s wr, 303 op/s
Nov 25 08:23:14 compute-0 nova_compute[253538]: 2025-11-25 08:23:14.391 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:14 compute-0 nova_compute[253538]: 2025-11-25 08:23:14.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:23:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 287 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.3 MiB/s wr, 272 op/s
Nov 25 08:23:15 compute-0 nova_compute[253538]: 2025-11-25 08:23:15.142 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "7779552a-aa17-4b2f-8b15-69121d6b6a63" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:15 compute-0 nova_compute[253538]: 2025-11-25 08:23:15.143 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "7779552a-aa17-4b2f-8b15-69121d6b6a63" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:15 compute-0 nova_compute[253538]: 2025-11-25 08:23:15.144 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "7779552a-aa17-4b2f-8b15-69121d6b6a63-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:15 compute-0 nova_compute[253538]: 2025-11-25 08:23:15.144 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "7779552a-aa17-4b2f-8b15-69121d6b6a63-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:15 compute-0 nova_compute[253538]: 2025-11-25 08:23:15.145 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "7779552a-aa17-4b2f-8b15-69121d6b6a63-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:15 compute-0 nova_compute[253538]: 2025-11-25 08:23:15.147 253542 INFO nova.compute.manager [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Terminating instance
Nov 25 08:23:15 compute-0 nova_compute[253538]: 2025-11-25 08:23:15.148 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "refresh_cache-7779552a-aa17-4b2f-8b15-69121d6b6a63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:23:15 compute-0 nova_compute[253538]: 2025-11-25 08:23:15.149 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquired lock "refresh_cache-7779552a-aa17-4b2f-8b15-69121d6b6a63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:23:15 compute-0 nova_compute[253538]: 2025-11-25 08:23:15.149 253542 DEBUG nova.network.neutron [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:23:15 compute-0 ceph-mon[75015]: pgmap v1128: 321 pgs: 321 active+clean; 287 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.3 MiB/s wr, 272 op/s
Nov 25 08:23:15 compute-0 nova_compute[253538]: 2025-11-25 08:23:15.767 253542 DEBUG nova.network.neutron [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:23:16 compute-0 nova_compute[253538]: 2025-11-25 08:23:16.058 253542 DEBUG nova.network.neutron [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:23:16 compute-0 nova_compute[253538]: 2025-11-25 08:23:16.076 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Releasing lock "refresh_cache-7779552a-aa17-4b2f-8b15-69121d6b6a63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:23:16 compute-0 nova_compute[253538]: 2025-11-25 08:23:16.076 253542 DEBUG nova.compute.manager [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:23:16 compute-0 nova_compute[253538]: 2025-11-25 08:23:16.083 253542 INFO nova.virt.libvirt.driver [-] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Instance destroyed successfully.
Nov 25 08:23:16 compute-0 nova_compute[253538]: 2025-11-25 08:23:16.084 253542 DEBUG nova.objects.instance [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lazy-loading 'resources' on Instance uuid 7779552a-aa17-4b2f-8b15-69121d6b6a63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 287 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 210 op/s
Nov 25 08:23:16 compute-0 nova_compute[253538]: 2025-11-25 08:23:16.936 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:16.938 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:23:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:16.940 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:23:17 compute-0 ovn_controller[152859]: 2025-11-25T08:23:17Z|00041|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.359 253542 DEBUG nova.compute.manager [None req-144950eb-7800-46c1-8308-57c1bde3b8c0 d603ec87a6444149a76c35f9302f5d37 5fb5f42261cd4144abf24782eefc3dec - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.364 253542 INFO nova.compute.manager [None req-144950eb-7800-46c1-8308-57c1bde3b8c0 d603ec87a6444149a76c35f9302f5d37 5fb5f42261cd4144abf24782eefc3dec - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Retrieving diagnostics
Nov 25 08:23:17 compute-0 ceph-mon[75015]: pgmap v1129: 321 pgs: 321 active+clean; 287 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 210 op/s
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.622 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquiring lock "a8f34404-8153-46bd-aea0-02d909cd66a9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.623 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "a8f34404-8153-46bd-aea0-02d909cd66a9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.623 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquiring lock "a8f34404-8153-46bd-aea0-02d909cd66a9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.624 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "a8f34404-8153-46bd-aea0-02d909cd66a9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.624 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "a8f34404-8153-46bd-aea0-02d909cd66a9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.625 253542 INFO nova.compute.manager [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Terminating instance
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.626 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquiring lock "refresh_cache-a8f34404-8153-46bd-aea0-02d909cd66a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.627 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquired lock "refresh_cache-a8f34404-8153-46bd-aea0-02d909cd66a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.627 253542 DEBUG nova.network.neutron [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.654 253542 INFO nova.virt.libvirt.driver [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Deleting instance files /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63_del
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.655 253542 INFO nova.virt.libvirt.driver [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Deletion of /var/lib/nova/instances/7779552a-aa17-4b2f-8b15-69121d6b6a63_del complete
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.730 253542 INFO nova.compute.manager [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Took 1.65 seconds to destroy the instance on the hypervisor.
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.731 253542 DEBUG oslo.service.loopingcall [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.731 253542 DEBUG nova.compute.manager [-] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.731 253542 DEBUG nova.network.neutron [-] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.850 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.968 253542 DEBUG nova.network.neutron [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.976 253542 DEBUG nova.network.neutron [-] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:23:17 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.986 253542 DEBUG nova.network.neutron [-] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:23:18 compute-0 nova_compute[253538]: 2025-11-25 08:23:17.999 253542 INFO nova.compute.manager [-] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Took 0.27 seconds to deallocate network for instance.
Nov 25 08:23:18 compute-0 nova_compute[253538]: 2025-11-25 08:23:18.040 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:18 compute-0 nova_compute[253538]: 2025-11-25 08:23:18.041 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:18 compute-0 nova_compute[253538]: 2025-11-25 08:23:18.133 253542 DEBUG oslo_concurrency.processutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:23:18 compute-0 nova_compute[253538]: 2025-11-25 08:23:18.275 253542 DEBUG nova.network.neutron [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:23:18 compute-0 nova_compute[253538]: 2025-11-25 08:23:18.288 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Releasing lock "refresh_cache-a8f34404-8153-46bd-aea0-02d909cd66a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:23:18 compute-0 nova_compute[253538]: 2025-11-25 08:23:18.289 253542 DEBUG nova.compute.manager [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:23:18 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 25 08:23:18 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 12.938s CPU time.
Nov 25 08:23:18 compute-0 systemd-machined[215790]: Machine qemu-7-instance-00000007 terminated.
Nov 25 08:23:18 compute-0 nova_compute[253538]: 2025-11-25 08:23:18.514 253542 INFO nova.virt.libvirt.driver [-] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Instance destroyed successfully.
Nov 25 08:23:18 compute-0 nova_compute[253538]: 2025-11-25 08:23:18.515 253542 DEBUG nova.objects.instance [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lazy-loading 'resources' on Instance uuid a8f34404-8153-46bd-aea0-02d909cd66a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:23:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3231050133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:18 compute-0 nova_compute[253538]: 2025-11-25 08:23:18.602 253542 DEBUG oslo_concurrency.processutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:18 compute-0 nova_compute[253538]: 2025-11-25 08:23:18.608 253542 DEBUG nova.compute.provider_tree [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:23:18 compute-0 nova_compute[253538]: 2025-11-25 08:23:18.628 253542 DEBUG nova.scheduler.client.report [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:23:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3231050133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:18 compute-0 nova_compute[253538]: 2025-11-25 08:23:18.650 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:18 compute-0 nova_compute[253538]: 2025-11-25 08:23:18.673 253542 INFO nova.scheduler.client.report [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Deleted allocations for instance 7779552a-aa17-4b2f-8b15-69121d6b6a63
Nov 25 08:23:18 compute-0 nova_compute[253538]: 2025-11-25 08:23:18.733 253542 DEBUG oslo_concurrency.lockutils [None req-6b6e7722-7b71-4154-b094-5c23f5f5f7cb e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "7779552a-aa17-4b2f-8b15-69121d6b6a63" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 278 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.0 MiB/s wr, 271 op/s
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.195 253542 INFO nova.virt.libvirt.driver [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Deleting instance files /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9_del
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.196 253542 INFO nova.virt.libvirt.driver [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Deletion of /var/lib/nova/instances/a8f34404-8153-46bd-aea0-02d909cd66a9_del complete
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.282 253542 INFO nova.compute.manager [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Took 0.99 seconds to destroy the instance on the hypervisor.
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.282 253542 DEBUG oslo.service.loopingcall [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.283 253542 DEBUG nova.compute.manager [-] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.283 253542 DEBUG nova.network.neutron [-] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.392 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:19 compute-0 ceph-mon[75015]: pgmap v1130: 321 pgs: 321 active+clean; 278 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.0 MiB/s wr, 271 op/s
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.769 253542 DEBUG nova.network.neutron [-] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.786 253542 DEBUG nova.network.neutron [-] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.801 253542 INFO nova.compute.manager [-] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Took 0.52 seconds to deallocate network for instance.
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.845 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.846 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.889 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "30491b9b-e328-43ff-9a35-3f5afa6fed34" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.889 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "30491b9b-e328-43ff-9a35-3f5afa6fed34" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.890 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "30491b9b-e328-43ff-9a35-3f5afa6fed34-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.890 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "30491b9b-e328-43ff-9a35-3f5afa6fed34-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.891 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "30491b9b-e328-43ff-9a35-3f5afa6fed34-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.893 253542 INFO nova.compute.manager [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Terminating instance
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.895 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "refresh_cache-30491b9b-e328-43ff-9a35-3f5afa6fed34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.895 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquired lock "refresh_cache-30491b9b-e328-43ff-9a35-3f5afa6fed34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.896 253542 DEBUG nova.network.neutron [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:23:19 compute-0 nova_compute[253538]: 2025-11-25 08:23:19.941 253542 DEBUG oslo_concurrency.processutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.083 253542 DEBUG nova.network.neutron [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:23:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:23:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/583977569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.421 253542 DEBUG oslo_concurrency.processutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.426 253542 DEBUG nova.compute.provider_tree [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.447 253542 DEBUG nova.scheduler.client.report [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.467 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.493 253542 INFO nova.scheduler.client.report [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Deleted allocations for instance a8f34404-8153-46bd-aea0-02d909cd66a9
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.499 253542 DEBUG nova.network.neutron [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.512 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Releasing lock "refresh_cache-30491b9b-e328-43ff-9a35-3f5afa6fed34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.514 253542 DEBUG nova.compute.manager [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.571 253542 DEBUG oslo_concurrency.lockutils [None req-225adb63-c415-4664-b9ce-71d1298904a1 2efed9769e444ce29167cef08d536e28 87f3ce31098e4e0cb8717a11dd3fee23 - - default default] Lock "a8f34404-8153-46bd-aea0-02d909cd66a9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:20 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 25 08:23:20 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 13.716s CPU time.
Nov 25 08:23:20 compute-0 systemd-machined[215790]: Machine qemu-6-instance-00000006 terminated.
Nov 25 08:23:20 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/583977569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.747 253542 INFO nova.virt.libvirt.driver [-] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Instance destroyed successfully.
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.748 253542 DEBUG nova.objects.instance [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lazy-loading 'resources' on Instance uuid 30491b9b-e328-43ff-9a35-3f5afa6fed34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.754 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.755 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.755 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.755 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.756 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.758 253542 INFO nova.compute.manager [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Terminating instance
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.759 253542 DEBUG nova.compute.manager [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:23:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 242 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.9 MiB/s wr, 256 op/s
Nov 25 08:23:20 compute-0 kernel: tap029a2ee5-40 (unregistering): left promiscuous mode
Nov 25 08:23:20 compute-0 NetworkManager[48915]: <info>  [1764059000.9641] device (tap029a2ee5-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:20 compute-0 ovn_controller[152859]: 2025-11-25T08:23:20Z|00042|binding|INFO|Releasing lport 029a2ee5-4018-4b73-8953-5436c5af3666 from this chassis (sb_readonly=0)
Nov 25 08:23:20 compute-0 ovn_controller[152859]: 2025-11-25T08:23:20Z|00043|binding|INFO|Setting lport 029a2ee5-4018-4b73-8953-5436c5af3666 down in Southbound
Nov 25 08:23:20 compute-0 ovn_controller[152859]: 2025-11-25T08:23:20Z|00044|binding|INFO|Removing iface tap029a2ee5-40 ovn-installed in OVS
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.976 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:20.980 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:b2:39 10.100.0.13'], port_security=['fa:16:3e:99:b2:39 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4bc97ee2-5aba-4bb5-86e2-f0806a200c04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2671313ddba04346ac0e2eef435f909c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45e37a08-d0c9-4931-b93a-912579eefb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e5eae09-1123-407e-9138-26c6151dcc1c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=029a2ee5-4018-4b73-8953-5436c5af3666) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:23:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:20.981 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 029a2ee5-4018-4b73-8953-5436c5af3666 in datapath ef52fe4f-78d3-45fa-ab69-177fdfabe604 unbound from our chassis
Nov 25 08:23:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:20.982 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef52fe4f-78d3-45fa-ab69-177fdfabe604, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:23:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:20.984 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3a91bd-6c58-436f-9bcc-ef36e9f81c5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:23:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:20.984 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604 namespace which is not needed anymore
Nov 25 08:23:20 compute-0 nova_compute[253538]: 2025-11-25 08:23:20.995 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:21 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 25 08:23:21 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 15.198s CPU time.
Nov 25 08:23:21 compute-0 systemd-machined[215790]: Machine qemu-5-instance-00000005 terminated.
Nov 25 08:23:21 compute-0 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[271193]: [NOTICE]   (271197) : haproxy version is 2.8.14-c23fe91
Nov 25 08:23:21 compute-0 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[271193]: [NOTICE]   (271197) : path to executable is /usr/sbin/haproxy
Nov 25 08:23:21 compute-0 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[271193]: [WARNING]  (271197) : Exiting Master process...
Nov 25 08:23:21 compute-0 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[271193]: [ALERT]    (271197) : Current worker (271199) exited with code 143 (Terminated)
Nov 25 08:23:21 compute-0 neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604[271193]: [WARNING]  (271197) : All workers exited. Exiting... (0)
Nov 25 08:23:21 compute-0 systemd[1]: libpod-f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000.scope: Deactivated successfully.
Nov 25 08:23:21 compute-0 podman[273431]: 2025-11-25 08:23:21.130788725 +0000 UTC m=+0.050959798 container died f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 08:23:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000-userdata-shm.mount: Deactivated successfully.
Nov 25 08:23:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6262865f4caeb3aa76bd5939a209fa5913aca3e8ecc37800451d727ee1322951-merged.mount: Deactivated successfully.
Nov 25 08:23:21 compute-0 podman[273431]: 2025-11-25 08:23:21.191888766 +0000 UTC m=+0.112059799 container cleanup f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:23:21 compute-0 NetworkManager[48915]: <info>  [1764059001.1953] manager: (tap029a2ee5-40): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.196 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.200 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:21 compute-0 systemd[1]: libpod-conmon-f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000.scope: Deactivated successfully.
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.207 253542 INFO nova.virt.libvirt.driver [-] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Instance destroyed successfully.
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.207 253542 DEBUG nova.objects.instance [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lazy-loading 'resources' on Instance uuid 4bc97ee2-5aba-4bb5-86e2-f0806a200c04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.227 253542 DEBUG nova.virt.libvirt.vif [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:22:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1503393729',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1503393729',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(23),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1503393729',id=5,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=23,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNBzTqcG3JodznNWSwgj3rCrEedPdeiQ/mnvgYz3cydyWhFZXQKv9KjumDsSN5/xC3KFolCMDQs3EubeJBsTVkZbaVh8dka9krQSkDOu6i81Tbp1XKPFk+NDOA6XHT/FTw==',key_name='tempest-keypair-1479336102',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:22:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2671313ddba04346ac0e2eef435f909c',ramdisk_id='',reservation_id='r-1iln2b4j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-625252619',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-625252619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:22:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='19a48db5eafb4ccb9008a204aa3d72d4',uuid=4bc97ee2-5aba-4bb5-86e2-f0806a200c04,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.227 253542 DEBUG nova.network.os_vif_util [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Converting VIF {"id": "029a2ee5-4018-4b73-8953-5436c5af3666", "address": "fa:16:3e:99:b2:39", "network": {"id": "ef52fe4f-78d3-45fa-ab69-177fdfabe604", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1512173371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2671313ddba04346ac0e2eef435f909c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap029a2ee5-40", "ovs_interfaceid": "029a2ee5-4018-4b73-8953-5436c5af3666", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.228 253542 DEBUG nova.network.os_vif_util [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:b2:39,bridge_name='br-int',has_traffic_filtering=True,id=029a2ee5-4018-4b73-8953-5436c5af3666,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap029a2ee5-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.228 253542 DEBUG os_vif [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:b2:39,bridge_name='br-int',has_traffic_filtering=True,id=029a2ee5-4018-4b73-8953-5436c5af3666,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap029a2ee5-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.230 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.231 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap029a2ee5-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.232 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.234 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.236 253542 INFO os_vif [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:b2:39,bridge_name='br-int',has_traffic_filtering=True,id=029a2ee5-4018-4b73-8953-5436c5af3666,network=Network(ef52fe4f-78d3-45fa-ab69-177fdfabe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap029a2ee5-40')
Nov 25 08:23:21 compute-0 podman[273467]: 2025-11-25 08:23:21.723572523 +0000 UTC m=+0.507387247 container remove f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 08:23:21 compute-0 ceph-mon[75015]: pgmap v1131: 321 pgs: 321 active+clean; 242 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.9 MiB/s wr, 256 op/s
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.733 253542 DEBUG nova.compute.manager [req-cb4b8ff1-228d-4b3c-9f8f-9148d5089584 req-a8e4b972-925d-4925-8ae9-fb854dd5797f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received event network-vif-unplugged-029a2ee5-4018-4b73-8953-5436c5af3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.735 253542 DEBUG oslo_concurrency.lockutils [req-cb4b8ff1-228d-4b3c-9f8f-9148d5089584 req-a8e4b972-925d-4925-8ae9-fb854dd5797f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.736 253542 DEBUG oslo_concurrency.lockutils [req-cb4b8ff1-228d-4b3c-9f8f-9148d5089584 req-a8e4b972-925d-4925-8ae9-fb854dd5797f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.736 253542 DEBUG oslo_concurrency.lockutils [req-cb4b8ff1-228d-4b3c-9f8f-9148d5089584 req-a8e4b972-925d-4925-8ae9-fb854dd5797f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.735 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3e156510-1a0a-4c73-944f-3a95d40d1b49]: (4, ('Tue Nov 25 08:23:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604 (f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000)\nf531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000\nTue Nov 25 08:23:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604 (f531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000)\nf531910c08daeea977133b7f50a8ea2ee4c30fd4ce3969def7129d09431e9000\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.736 253542 DEBUG nova.compute.manager [req-cb4b8ff1-228d-4b3c-9f8f-9148d5089584 req-a8e4b972-925d-4925-8ae9-fb854dd5797f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] No waiting events found dispatching network-vif-unplugged-029a2ee5-4018-4b73-8953-5436c5af3666 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.737 253542 DEBUG nova.compute.manager [req-cb4b8ff1-228d-4b3c-9f8f-9148d5089584 req-a8e4b972-925d-4925-8ae9-fb854dd5797f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received event network-vif-unplugged-029a2ee5-4018-4b73-8953-5436c5af3666 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:23:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.738 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8c276331-eaeb-433a-8e92-faf007b766d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:23:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.739 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef52fe4f-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.741 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:21 compute-0 kernel: tapef52fe4f-70: left promiscuous mode
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.744 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.748 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9282c781-a7a1-4dd8-9785-856ca996cd36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:23:21 compute-0 nova_compute[253538]: 2025-11-25 08:23:21.758 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.765 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c85072-df14-4441-8eb8-242916270d27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:23:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.766 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5d419659-3196-416d-8745-6dc72e3f3733]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:23:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.781 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6b73fba3-8643-4702-9dd6-36a625a005d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431851, 'reachable_time': 25214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273502, 'error': None, 'target': 'ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:23:21 compute-0 systemd[1]: run-netns-ovnmeta\x2def52fe4f\x2d78d3\x2d45fa\x2dab69\x2d177fdfabe604.mount: Deactivated successfully.
Nov 25 08:23:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.785 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ef52fe4f-78d3-45fa-ab69-177fdfabe604 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:23:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:21.785 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a66331f7-928f-485c-934d-3a3cdd9c4f01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.024 253542 INFO nova.virt.libvirt.driver [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Deleting instance files /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34_del
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.025 253542 INFO nova.virt.libvirt.driver [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Deletion of /var/lib/nova/instances/30491b9b-e328-43ff-9a35-3f5afa6fed34_del complete
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.084 253542 INFO nova.compute.manager [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Took 1.57 seconds to destroy the instance on the hypervisor.
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.085 253542 DEBUG oslo.service.loopingcall [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.085 253542 DEBUG nova.compute.manager [-] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.085 253542 DEBUG nova.network.neutron [-] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.243 253542 DEBUG nova.network.neutron [-] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.255 253542 DEBUG nova.network.neutron [-] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.267 253542 INFO nova.compute.manager [-] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Took 0.18 seconds to deallocate network for instance.
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.317 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.318 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.380 253542 DEBUG oslo_concurrency.processutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:23:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1420791341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.847 253542 DEBUG oslo_concurrency.processutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.855 253542 DEBUG nova.compute.provider_tree [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.870 253542 DEBUG nova.scheduler.client.report [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.890 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:22 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1420791341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.919 253542 INFO nova.scheduler.client.report [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Deleted allocations for instance 30491b9b-e328-43ff-9a35-3f5afa6fed34
Nov 25 08:23:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 182 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 235 op/s
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.941 253542 INFO nova.virt.libvirt.driver [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Deleting instance files /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04_del
Nov 25 08:23:22 compute-0 nova_compute[253538]: 2025-11-25 08:23:22.942 253542 INFO nova.virt.libvirt.driver [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Deletion of /var/lib/nova/instances/4bc97ee2-5aba-4bb5-86e2-f0806a200c04_del complete
Nov 25 08:23:23 compute-0 nova_compute[253538]: 2025-11-25 08:23:23.004 253542 DEBUG oslo_concurrency.lockutils [None req-19cd2f8e-2d0c-4479-8994-7dbe017b7b35 e93ad3d8111e48218a5ab899be7d3708 029b328a15754b45970b2053b56564bc - - default default] Lock "30491b9b-e328-43ff-9a35-3f5afa6fed34" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:23 compute-0 nova_compute[253538]: 2025-11-25 08:23:23.010 253542 INFO nova.compute.manager [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Took 2.25 seconds to destroy the instance on the hypervisor.
Nov 25 08:23:23 compute-0 nova_compute[253538]: 2025-11-25 08:23:23.010 253542 DEBUG oslo.service.loopingcall [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:23:23 compute-0 nova_compute[253538]: 2025-11-25 08:23:23.011 253542 DEBUG nova.compute.manager [-] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:23:23 compute-0 nova_compute[253538]: 2025-11-25 08:23:23.011 253542 DEBUG nova.network.neutron [-] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:23:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:23:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:23:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:23:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:23:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:23:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:23:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:23:23 compute-0 ceph-mon[75015]: pgmap v1132: 321 pgs: 321 active+clean; 182 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 235 op/s
Nov 25 08:23:23 compute-0 nova_compute[253538]: 2025-11-25 08:23:23.940 253542 DEBUG nova.compute.manager [req-5136dd19-8373-43dd-b39e-d8a55b76110e req-db8fb72d-abc7-420e-9074-1292ba822b00 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received event network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:23:23 compute-0 nova_compute[253538]: 2025-11-25 08:23:23.941 253542 DEBUG oslo_concurrency.lockutils [req-5136dd19-8373-43dd-b39e-d8a55b76110e req-db8fb72d-abc7-420e-9074-1292ba822b00 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:23 compute-0 nova_compute[253538]: 2025-11-25 08:23:23.941 253542 DEBUG oslo_concurrency.lockutils [req-5136dd19-8373-43dd-b39e-d8a55b76110e req-db8fb72d-abc7-420e-9074-1292ba822b00 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:23 compute-0 nova_compute[253538]: 2025-11-25 08:23:23.941 253542 DEBUG oslo_concurrency.lockutils [req-5136dd19-8373-43dd-b39e-d8a55b76110e req-db8fb72d-abc7-420e-9074-1292ba822b00 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:23 compute-0 nova_compute[253538]: 2025-11-25 08:23:23.942 253542 DEBUG nova.compute.manager [req-5136dd19-8373-43dd-b39e-d8a55b76110e req-db8fb72d-abc7-420e-9074-1292ba822b00 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] No waiting events found dispatching network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:23:23 compute-0 nova_compute[253538]: 2025-11-25 08:23:23.942 253542 WARNING nova.compute.manager [req-5136dd19-8373-43dd-b39e-d8a55b76110e req-db8fb72d-abc7-420e-9074-1292ba822b00 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received unexpected event network-vif-plugged-029a2ee5-4018-4b73-8953-5436c5af3666 for instance with vm_state active and task_state deleting.
Nov 25 08:23:24 compute-0 nova_compute[253538]: 2025-11-25 08:23:24.114 253542 DEBUG nova.network.neutron [-] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:23:24 compute-0 nova_compute[253538]: 2025-11-25 08:23:24.131 253542 INFO nova.compute.manager [-] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Took 1.12 seconds to deallocate network for instance.
Nov 25 08:23:24 compute-0 nova_compute[253538]: 2025-11-25 08:23:24.176 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:24 compute-0 nova_compute[253538]: 2025-11-25 08:23:24.177 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:24 compute-0 nova_compute[253538]: 2025-11-25 08:23:24.245 253542 DEBUG oslo_concurrency.processutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:24 compute-0 nova_compute[253538]: 2025-11-25 08:23:24.395 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:23:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3946080216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:24 compute-0 nova_compute[253538]: 2025-11-25 08:23:24.708 253542 DEBUG oslo_concurrency.processutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:24 compute-0 nova_compute[253538]: 2025-11-25 08:23:24.712 253542 DEBUG nova.compute.provider_tree [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:23:24 compute-0 nova_compute[253538]: 2025-11-25 08:23:24.725 253542 DEBUG nova.scheduler.client.report [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:23:24 compute-0 nova_compute[253538]: 2025-11-25 08:23:24.742 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:24 compute-0 nova_compute[253538]: 2025-11-25 08:23:24.770 253542 INFO nova.scheduler.client.report [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Deleted allocations for instance 4bc97ee2-5aba-4bb5-86e2-f0806a200c04
Nov 25 08:23:24 compute-0 podman[273554]: 2025-11-25 08:23:24.822738687 +0000 UTC m=+0.072245933 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 08:23:24 compute-0 nova_compute[253538]: 2025-11-25 08:23:24.825 253542 DEBUG oslo_concurrency.lockutils [None req-ba2d0324-8de5-497a-a4e1-8a5349afa1d5 19a48db5eafb4ccb9008a204aa3d72d4 2671313ddba04346ac0e2eef435f909c - - default default] Lock "4bc97ee2-5aba-4bb5-86e2-f0806a200c04" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 74 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 238 op/s
Nov 25 08:23:24 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3946080216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:25 compute-0 ceph-mon[75015]: pgmap v1133: 321 pgs: 321 active+clean; 74 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 238 op/s
Nov 25 08:23:26 compute-0 nova_compute[253538]: 2025-11-25 08:23:26.088 253542 DEBUG nova.compute.manager [req-cb1f7d5f-0adc-4bff-8be3-37bfc2eb868f req-86c3df25-d20e-487c-848d-bf8f1ba84dc2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Received event network-vif-deleted-029a2ee5-4018-4b73-8953-5436c5af3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:23:26 compute-0 nova_compute[253538]: 2025-11-25 08:23:26.232 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:26 compute-0 nova_compute[253538]: 2025-11-25 08:23:26.507 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764058991.506591, 7779552a-aa17-4b2f-8b15-69121d6b6a63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:23:26 compute-0 nova_compute[253538]: 2025-11-25 08:23:26.508 253542 INFO nova.compute.manager [-] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] VM Stopped (Lifecycle Event)
Nov 25 08:23:26 compute-0 nova_compute[253538]: 2025-11-25 08:23:26.524 253542 DEBUG nova.compute.manager [None req-3b16fc33-da19-4034-8938-a70e5620a85c - - - - - -] [instance: 7779552a-aa17-4b2f-8b15-69121d6b6a63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 74 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 511 KiB/s rd, 2.5 MiB/s wr, 181 op/s
Nov 25 08:23:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:26.943 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:23:27 compute-0 podman[273573]: 2025-11-25 08:23:27.827358334 +0000 UTC m=+0.074534188 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 08:23:27 compute-0 ceph-mon[75015]: pgmap v1134: 321 pgs: 321 active+clean; 74 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 511 KiB/s rd, 2.5 MiB/s wr, 181 op/s
Nov 25 08:23:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:23:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:23:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2456352356' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:23:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:23:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2456352356' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:23:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 520 KiB/s rd, 2.5 MiB/s wr, 194 op/s
Nov 25 08:23:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2456352356' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:23:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2456352356' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:23:29 compute-0 nova_compute[253538]: 2025-11-25 08:23:29.397 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:30 compute-0 ceph-mon[75015]: pgmap v1135: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 520 KiB/s rd, 2.5 MiB/s wr, 194 op/s
Nov 25 08:23:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 1.0 MiB/s wr, 133 op/s
Nov 25 08:23:31 compute-0 nova_compute[253538]: 2025-11-25 08:23:31.011 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:31 compute-0 nova_compute[253538]: 2025-11-25 08:23:31.152 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:31 compute-0 nova_compute[253538]: 2025-11-25 08:23:31.235 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:32 compute-0 ceph-mon[75015]: pgmap v1136: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 1.0 MiB/s wr, 133 op/s
Nov 25 08:23:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 72 KiB/s wr, 94 op/s
Nov 25 08:23:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:23:33 compute-0 nova_compute[253538]: 2025-11-25 08:23:33.513 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764058998.5120335, a8f34404-8153-46bd-aea0-02d909cd66a9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:23:33 compute-0 nova_compute[253538]: 2025-11-25 08:23:33.514 253542 INFO nova.compute.manager [-] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] VM Stopped (Lifecycle Event)
Nov 25 08:23:33 compute-0 nova_compute[253538]: 2025-11-25 08:23:33.532 253542 DEBUG nova.compute.manager [None req-fec8b18e-f915-49d3-a3d5-1ab9417f5b6f - - - - - -] [instance: a8f34404-8153-46bd-aea0-02d909cd66a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:33 compute-0 podman[273595]: 2025-11-25 08:23:33.844822348 +0000 UTC m=+0.094342884 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 08:23:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 25 08:23:34 compute-0 ceph-mon[75015]: pgmap v1137: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 72 KiB/s wr, 94 op/s
Nov 25 08:23:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 25 08:23:34 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 25 08:23:34 compute-0 nova_compute[253538]: 2025-11-25 08:23:34.399 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.6 KiB/s wr, 26 op/s
Nov 25 08:23:35 compute-0 ceph-mon[75015]: osdmap e108: 3 total, 3 up, 3 in
Nov 25 08:23:35 compute-0 nova_compute[253538]: 2025-11-25 08:23:35.744 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059000.7430954, 30491b9b-e328-43ff-9a35-3f5afa6fed34 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:23:35 compute-0 nova_compute[253538]: 2025-11-25 08:23:35.745 253542 INFO nova.compute.manager [-] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] VM Stopped (Lifecycle Event)
Nov 25 08:23:35 compute-0 nova_compute[253538]: 2025-11-25 08:23:35.874 253542 DEBUG nova.compute.manager [None req-cbdf3cb4-00d2-437f-a627-b40e3f800aa3 - - - - - -] [instance: 30491b9b-e328-43ff-9a35-3f5afa6fed34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:35 compute-0 nova_compute[253538]: 2025-11-25 08:23:35.924 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "4659329f-611a-4436-aa9e-26937db9cd61" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:35 compute-0 nova_compute[253538]: 2025-11-25 08:23:35.925 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "4659329f-611a-4436-aa9e-26937db9cd61" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:35 compute-0 nova_compute[253538]: 2025-11-25 08:23:35.990 253542 DEBUG nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:23:36 compute-0 nova_compute[253538]: 2025-11-25 08:23:36.103 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:36 compute-0 nova_compute[253538]: 2025-11-25 08:23:36.104 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:36 compute-0 nova_compute[253538]: 2025-11-25 08:23:36.113 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:23:36 compute-0 nova_compute[253538]: 2025-11-25 08:23:36.113 253542 INFO nova.compute.claims [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:23:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 25 08:23:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 25 08:23:36 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 25 08:23:36 compute-0 ceph-mon[75015]: pgmap v1139: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.6 KiB/s wr, 26 op/s
Nov 25 08:23:36 compute-0 nova_compute[253538]: 2025-11-25 08:23:36.206 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059001.2050695, 4bc97ee2-5aba-4bb5-86e2-f0806a200c04 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:23:36 compute-0 nova_compute[253538]: 2025-11-25 08:23:36.206 253542 INFO nova.compute.manager [-] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] VM Stopped (Lifecycle Event)
Nov 25 08:23:36 compute-0 nova_compute[253538]: 2025-11-25 08:23:36.222 253542 DEBUG nova.compute.manager [None req-a7efacb9-7e8f-4976-8ac8-0f1c40b0ab70 - - - - - -] [instance: 4bc97ee2-5aba-4bb5-86e2-f0806a200c04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:36 compute-0 nova_compute[253538]: 2025-11-25 08:23:36.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 767 B/s wr, 12 op/s
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.082 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:37 compute-0 ceph-mon[75015]: osdmap e109: 3 total, 3 up, 3 in
Nov 25 08:23:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:23:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1089520596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.535 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.541 253542 DEBUG nova.compute.provider_tree [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.554 253542 DEBUG nova.scheduler.client.report [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.588 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.589 253542 DEBUG nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.649 253542 DEBUG nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.685 253542 INFO nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.724 253542 DEBUG nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.862 253542 DEBUG nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.864 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.865 253542 INFO nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating image(s)
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.890 253542 DEBUG nova.storage.rbd_utils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.912 253542 DEBUG nova.storage.rbd_utils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.929 253542 DEBUG nova.storage.rbd_utils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.931 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.997 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.998 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.998 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:37 compute-0 nova_compute[253538]: 2025-11-25 08:23:37.999 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.017 253542 DEBUG nova.storage.rbd_utils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.021 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4659329f-611a-4436-aa9e-26937db9cd61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:23:38 compute-0 ceph-mon[75015]: pgmap v1141: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 767 B/s wr, 12 op/s
Nov 25 08:23:38 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1089520596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.479 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4659329f-611a-4436-aa9e-26937db9cd61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.555 253542 DEBUG nova.storage.rbd_utils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] resizing rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.686 253542 DEBUG nova.objects.instance [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'migration_context' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.700 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.701 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Ensure instance console log exists: /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.701 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.702 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.702 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.704 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.710 253542 WARNING nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.715 253542 DEBUG nova.virt.libvirt.host [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.716 253542 DEBUG nova.virt.libvirt.host [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.719 253542 DEBUG nova.virt.libvirt.host [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.720 253542 DEBUG nova.virt.libvirt.host [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.721 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.721 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.722 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.722 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.722 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.722 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.722 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.723 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.723 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.723 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.723 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.724 253542 DEBUG nova.virt.hardware [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:23:38 compute-0 nova_compute[253538]: 2025-11-25 08:23:38.728 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 41 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.7 KiB/s wr, 27 op/s
Nov 25 08:23:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:23:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/480854766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:23:39 compute-0 nova_compute[253538]: 2025-11-25 08:23:39.201 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:39 compute-0 nova_compute[253538]: 2025-11-25 08:23:39.228 253542 DEBUG nova.storage.rbd_utils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:39 compute-0 nova_compute[253538]: 2025-11-25 08:23:39.232 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:39 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/480854766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:23:39 compute-0 nova_compute[253538]: 2025-11-25 08:23:39.400 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:23:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2162049715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:23:39 compute-0 nova_compute[253538]: 2025-11-25 08:23:39.651 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:39 compute-0 nova_compute[253538]: 2025-11-25 08:23:39.653 253542 DEBUG nova.objects.instance [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'pci_devices' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:39 compute-0 nova_compute[253538]: 2025-11-25 08:23:39.674 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:23:39 compute-0 nova_compute[253538]:   <uuid>4659329f-611a-4436-aa9e-26937db9cd61</uuid>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   <name>instance-00000009</name>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersAdmin275Test-server-190405605</nova:name>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:23:38</nova:creationTime>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:23:39 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:23:39 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:23:39 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:23:39 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:23:39 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:23:39 compute-0 nova_compute[253538]:         <nova:user uuid="57ccb3076c9145fda72f75af7dd3acc0">tempest-ServersAdmin275Test-1226019010-project-member</nova:user>
Nov 25 08:23:39 compute-0 nova_compute[253538]:         <nova:project uuid="e0f1bfa27d5e45138e846f38c1d92dfc">tempest-ServersAdmin275Test-1226019010</nova:project>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <system>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <entry name="serial">4659329f-611a-4436-aa9e-26937db9cd61</entry>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <entry name="uuid">4659329f-611a-4436-aa9e-26937db9cd61</entry>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     </system>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   <os>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   </os>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   <features>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   </features>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4659329f-611a-4436-aa9e-26937db9cd61_disk">
Nov 25 08:23:39 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       </source>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:23:39 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4659329f-611a-4436-aa9e-26937db9cd61_disk.config">
Nov 25 08:23:39 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       </source>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:23:39 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/console.log" append="off"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <video>
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     </video>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:23:39 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:23:39 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:23:39 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:23:39 compute-0 nova_compute[253538]: </domain>
Nov 25 08:23:39 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:23:39 compute-0 nova_compute[253538]: 2025-11-25 08:23:39.719 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:23:39 compute-0 nova_compute[253538]: 2025-11-25 08:23:39.720 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:23:39 compute-0 nova_compute[253538]: 2025-11-25 08:23:39.720 253542 INFO nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Using config drive
Nov 25 08:23:39 compute-0 nova_compute[253538]: 2025-11-25 08:23:39.739 253542 DEBUG nova.storage.rbd_utils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:39 compute-0 nova_compute[253538]: 2025-11-25 08:23:39.979 253542 INFO nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating config drive at /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config
Nov 25 08:23:39 compute-0 nova_compute[253538]: 2025-11-25 08:23:39.983 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkdikmrcs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.107 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkdikmrcs" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.137 253542 DEBUG nova.storage.rbd_utils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.141 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config 4659329f-611a-4436-aa9e-26937db9cd61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:23:40 compute-0 ceph-mon[75015]: pgmap v1142: 321 pgs: 321 active+clean; 41 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.7 KiB/s wr, 27 op/s
Nov 25 08:23:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2162049715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.316 253542 DEBUG oslo_concurrency.processutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config 4659329f-611a-4436-aa9e-26937db9cd61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.317 253542 INFO nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deleting local config drive /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config because it was imported into RBD.
Nov 25 08:23:40 compute-0 systemd-machined[215790]: New machine qemu-9-instance-00000009.
Nov 25 08:23:40 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.760 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059020.7600143, 4659329f-611a-4436-aa9e-26937db9cd61 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.761 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] VM Resumed (Lifecycle Event)
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.764 253542 DEBUG nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.764 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.768 253542 INFO nova.virt.libvirt.driver [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance spawned successfully.
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.768 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.784 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.789 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.793 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.794 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.794 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.795 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.795 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.795 253542 DEBUG nova.virt.libvirt.driver [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.817 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.818 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059020.7640014, 4659329f-611a-4436-aa9e-26937db9cd61 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.818 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] VM Started (Lifecycle Event)
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.842 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.845 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.859 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.866 253542 INFO nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Took 3.00 seconds to spawn the instance on the hypervisor.
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.866 253542 DEBUG nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.934 253542 INFO nova.compute.manager [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Took 4.86 seconds to build instance.
Nov 25 08:23:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 47 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 208 KiB/s wr, 68 op/s
Nov 25 08:23:40 compute-0 nova_compute[253538]: 2025-11-25 08:23:40.962 253542 DEBUG oslo_concurrency.lockutils [None req-95ec1bb3-c50b-4b21-8997-db3f364e8e2a 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "4659329f-611a-4436-aa9e-26937db9cd61" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:41.049 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:41.049 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:23:41.049 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:23:41 compute-0 nova_compute[253538]: 2025-11-25 08:23:41.238 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:42 compute-0 ceph-mon[75015]: pgmap v1143: 321 pgs: 321 active+clean; 47 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 208 KiB/s wr, 68 op/s
Nov 25 08:23:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 80 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 129 op/s
Nov 25 08:23:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:23:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 25 08:23:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 25 08:23:43 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 25 08:23:44 compute-0 nova_compute[253538]: 2025-11-25 08:23:44.094 253542 INFO nova.compute.manager [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Rebuilding instance
Nov 25 08:23:44 compute-0 ceph-mon[75015]: pgmap v1144: 321 pgs: 321 active+clean; 80 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 129 op/s
Nov 25 08:23:44 compute-0 ceph-mon[75015]: osdmap e110: 3 total, 3 up, 3 in
Nov 25 08:23:44 compute-0 nova_compute[253538]: 2025-11-25 08:23:44.313 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:44 compute-0 nova_compute[253538]: 2025-11-25 08:23:44.328 253542 DEBUG nova.compute.manager [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:23:44 compute-0 nova_compute[253538]: 2025-11-25 08:23:44.371 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'pci_requests' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:44 compute-0 nova_compute[253538]: 2025-11-25 08:23:44.382 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'pci_devices' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:44 compute-0 nova_compute[253538]: 2025-11-25 08:23:44.435 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:44 compute-0 nova_compute[253538]: 2025-11-25 08:23:44.438 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'resources' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:44 compute-0 nova_compute[253538]: 2025-11-25 08:23:44.446 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'migration_context' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:23:44 compute-0 nova_compute[253538]: 2025-11-25 08:23:44.453 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:23:44 compute-0 nova_compute[253538]: 2025-11-25 08:23:44.457 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:23:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 165 op/s
Nov 25 08:23:46 compute-0 nova_compute[253538]: 2025-11-25 08:23:46.241 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:46 compute-0 ceph-mon[75015]: pgmap v1146: 321 pgs: 321 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 165 op/s
Nov 25 08:23:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 150 op/s
Nov 25 08:23:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:23:48 compute-0 ceph-mon[75015]: pgmap v1147: 321 pgs: 321 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 150 op/s
Nov 25 08:23:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Nov 25 08:23:49 compute-0 nova_compute[253538]: 2025-11-25 08:23:49.437 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:50 compute-0 ceph-mon[75015]: pgmap v1148: 321 pgs: 321 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Nov 25 08:23:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Nov 25 08:23:51 compute-0 nova_compute[253538]: 2025-11-25 08:23:51.242 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:52 compute-0 ceph-mon[75015]: pgmap v1149: 321 pgs: 321 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Nov 25 08:23:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 91 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 686 KiB/s rd, 1.0 MiB/s wr, 54 op/s
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:23:53
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'volumes', '.rgw.root', '.mgr', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.control']
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:23:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:23:53 compute-0 ceph-mon[75015]: pgmap v1150: 321 pgs: 321 active+clean; 91 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 686 KiB/s rd, 1.0 MiB/s wr, 54 op/s
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:23:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:23:54 compute-0 nova_compute[253538]: 2025-11-25 08:23:54.438 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:54 compute-0 nova_compute[253538]: 2025-11-25 08:23:54.504 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:23:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 105 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 310 KiB/s rd, 1.6 MiB/s wr, 44 op/s
Nov 25 08:23:55 compute-0 podman[273987]: 2025-11-25 08:23:55.821983628 +0000 UTC m=+0.062867770 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:23:56 compute-0 ceph-mon[75015]: pgmap v1151: 321 pgs: 321 active+clean; 105 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 310 KiB/s rd, 1.6 MiB/s wr, 44 op/s
Nov 25 08:23:56 compute-0 nova_compute[253538]: 2025-11-25 08:23:56.244 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 112 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 392 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 25 08:23:58 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 25 08:23:58 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 12.885s CPU time.
Nov 25 08:23:58 compute-0 systemd-machined[215790]: Machine qemu-9-instance-00000009 terminated.
Nov 25 08:23:58 compute-0 podman[274006]: 2025-11-25 08:23:58.187046035 +0000 UTC m=+0.082637888 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 08:23:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:23:58 compute-0 ceph-mon[75015]: pgmap v1152: 321 pgs: 321 active+clean; 112 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 392 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 25 08:23:58 compute-0 nova_compute[253538]: 2025-11-25 08:23:58.526 253542 INFO nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance shutdown successfully after 14 seconds.
Nov 25 08:23:58 compute-0 nova_compute[253538]: 2025-11-25 08:23:58.533 253542 INFO nova.virt.libvirt.driver [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance destroyed successfully.
Nov 25 08:23:58 compute-0 nova_compute[253538]: 2025-11-25 08:23:58.539 253542 INFO nova.virt.libvirt.driver [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance destroyed successfully.
Nov 25 08:23:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 118 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.2 MiB/s wr, 60 op/s
Nov 25 08:23:59 compute-0 nova_compute[253538]: 2025-11-25 08:23:59.002 253542 INFO nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deleting instance files /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61_del
Nov 25 08:23:59 compute-0 nova_compute[253538]: 2025-11-25 08:23:59.003 253542 INFO nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deletion of /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61_del complete
Nov 25 08:23:59 compute-0 nova_compute[253538]: 2025-11-25 08:23:59.173 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:23:59 compute-0 nova_compute[253538]: 2025-11-25 08:23:59.174 253542 INFO nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating image(s)
Nov 25 08:23:59 compute-0 nova_compute[253538]: 2025-11-25 08:23:59.199 253542 DEBUG nova.storage.rbd_utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:59 compute-0 nova_compute[253538]: 2025-11-25 08:23:59.231 253542 DEBUG nova.storage.rbd_utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:59 compute-0 nova_compute[253538]: 2025-11-25 08:23:59.251 253542 DEBUG nova.storage.rbd_utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:23:59 compute-0 nova_compute[253538]: 2025-11-25 08:23:59.254 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:23:59 compute-0 nova_compute[253538]: 2025-11-25 08:23:59.255 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:23:59 compute-0 nova_compute[253538]: 2025-11-25 08:23:59.462 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:23:59 compute-0 sudo[274103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:23:59 compute-0 sudo[274103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:23:59 compute-0 sudo[274103]: pam_unix(sudo:session): session closed for user root
Nov 25 08:23:59 compute-0 sudo[274128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:23:59 compute-0 sudo[274128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:23:59 compute-0 sudo[274128]: pam_unix(sudo:session): session closed for user root
Nov 25 08:23:59 compute-0 sudo[274153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:23:59 compute-0 sudo[274153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:23:59 compute-0 sudo[274153]: pam_unix(sudo:session): session closed for user root
Nov 25 08:23:59 compute-0 nova_compute[253538]: 2025-11-25 08:23:59.790 253542 DEBUG nova.virt.libvirt.imagebackend [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Image locations are: [{'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/64385127-d622-49bb-be38-b33beb2692d1/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/64385127-d622-49bb-be38-b33beb2692d1/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 08:23:59 compute-0 sudo[274178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 08:23:59 compute-0 sudo[274178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:00 compute-0 ceph-mon[75015]: pgmap v1153: 321 pgs: 321 active+clean; 118 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.2 MiB/s wr, 60 op/s
Nov 25 08:24:00 compute-0 podman[274275]: 2025-11-25 08:24:00.307219203 +0000 UTC m=+0.079866341 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:24:00 compute-0 podman[274275]: 2025-11-25 08:24:00.438809446 +0000 UTC m=+0.211456574 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:24:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 107 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 286 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Nov 25 08:24:01 compute-0 sudo[274178]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:24:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:24:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:24:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:24:01 compute-0 nova_compute[253538]: 2025-11-25 08:24:01.135 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:01 compute-0 sudo[274436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:24:01 compute-0 sudo[274436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:01 compute-0 sudo[274436]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:01 compute-0 nova_compute[253538]: 2025-11-25 08:24:01.199 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676.part --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:01 compute-0 nova_compute[253538]: 2025-11-25 08:24:01.200 253542 DEBUG nova.virt.images [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] 64385127-d622-49bb-be38-b33beb2692d1 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 25 08:24:01 compute-0 nova_compute[253538]: 2025-11-25 08:24:01.202 253542 DEBUG nova.privsep.utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 25 08:24:01 compute-0 nova_compute[253538]: 2025-11-25 08:24:01.202 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676.part /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:01 compute-0 sudo[274462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:24:01 compute-0 sudo[274462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:01 compute-0 sudo[274462]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:01 compute-0 nova_compute[253538]: 2025-11-25 08:24:01.246 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:01 compute-0 sudo[274490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:24:01 compute-0 sudo[274490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:01 compute-0 sudo[274490]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:01 compute-0 sudo[274521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:24:01 compute-0 sudo[274521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:01 compute-0 nova_compute[253538]: 2025-11-25 08:24:01.399 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676.part /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676.converted" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:01 compute-0 nova_compute[253538]: 2025-11-25 08:24:01.402 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:01 compute-0 nova_compute[253538]: 2025-11-25 08:24:01.461 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676.converted --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:01 compute-0 nova_compute[253538]: 2025-11-25 08:24:01.463 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:01 compute-0 nova_compute[253538]: 2025-11-25 08:24:01.484 253542 DEBUG nova.storage.rbd_utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:01 compute-0 nova_compute[253538]: 2025-11-25 08:24:01.491 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 4659329f-611a-4436-aa9e-26937db9cd61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:01 compute-0 sudo[274521]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:24:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:24:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:24:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:24:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:24:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:24:01 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 7f5f9d89-8567-44e4-8a5f-7436811d9517 does not exist
Nov 25 08:24:01 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev f0e4c608-ba61-484f-ade1-cb1083c39236 does not exist
Nov 25 08:24:01 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 6cf16551-34e5-4e33-9371-bbbd31a8f497 does not exist
Nov 25 08:24:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:24:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:24:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:24:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:24:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:24:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:24:01 compute-0 nova_compute[253538]: 2025-11-25 08:24:01.971 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 4659329f-611a-4436-aa9e-26937db9cd61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:01 compute-0 sudo[274616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:24:01 compute-0 sudo[274616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:01 compute-0 sudo[274616]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.036 253542 DEBUG nova.storage.rbd_utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] resizing rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:24:02 compute-0 sudo[274659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:24:02 compute-0 sudo[274659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:02 compute-0 sudo[274659]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:02 compute-0 ceph-mon[75015]: pgmap v1154: 321 pgs: 321 active+clean; 107 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 286 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Nov 25 08:24:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:24:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:24:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:24:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:24:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:24:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:24:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:24:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:24:02 compute-0 sudo[274720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.122 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.123 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Ensure instance console log exists: /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.123 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:02 compute-0 sudo[274720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.124 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.125 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.126 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:24:02 compute-0 sudo[274720]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.130 253542 WARNING nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.136 253542 DEBUG nova.virt.libvirt.host [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.137 253542 DEBUG nova.virt.libvirt.host [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.144 253542 DEBUG nova.virt.libvirt.host [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.144 253542 DEBUG nova.virt.libvirt.host [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.145 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.145 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.146 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.146 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.146 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.146 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.147 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.147 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.147 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.147 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.148 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.148 253542 DEBUG nova.virt.hardware [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.148 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.162 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:02 compute-0 sudo[274763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:24:02 compute-0 sudo[274763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:02 compute-0 podman[274847]: 2025-11-25 08:24:02.511397157 +0000 UTC m=+0.044835642 container create 5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 08:24:02 compute-0 systemd[1]: Started libpod-conmon-5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb.scope.
Nov 25 08:24:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:24:02 compute-0 podman[274847]: 2025-11-25 08:24:02.488575095 +0000 UTC m=+0.022013500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:24:02 compute-0 podman[274847]: 2025-11-25 08:24:02.62785179 +0000 UTC m=+0.161290205 container init 5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 08:24:02 compute-0 podman[274847]: 2025-11-25 08:24:02.636804128 +0000 UTC m=+0.170242493 container start 5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:24:02 compute-0 hopeful_fermi[274863]: 167 167
Nov 25 08:24:02 compute-0 systemd[1]: libpod-5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb.scope: Deactivated successfully.
Nov 25 08:24:02 compute-0 podman[274847]: 2025-11-25 08:24:02.645872159 +0000 UTC m=+0.179310544 container attach 5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:24:02 compute-0 podman[274847]: 2025-11-25 08:24:02.64628591 +0000 UTC m=+0.179724285 container died 5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:24:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1330364686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.684 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.709 253542 DEBUG nova.storage.rbd_utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:02 compute-0 nova_compute[253538]: 2025-11-25 08:24:02.714 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 74 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 977 KiB/s rd, 2.5 MiB/s wr, 88 op/s
Nov 25 08:24:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-53a3f45fa07fb121a8728bcbcf4059822806b2cc59ad0822c29695b0a31a0ba0-merged.mount: Deactivated successfully.
Nov 25 08:24:03 compute-0 podman[274847]: 2025-11-25 08:24:03.011424507 +0000 UTC m=+0.544862872 container remove 5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 08:24:03 compute-0 systemd[1]: libpod-conmon-5590b1cfc8fcc3adca285d0047fae2eacb2fdd17438aafc60bdfa284a35037eb.scope: Deactivated successfully.
Nov 25 08:24:03 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1330364686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:03 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/110738618' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:03 compute-0 nova_compute[253538]: 2025-11-25 08:24:03.180 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:03 compute-0 nova_compute[253538]: 2025-11-25 08:24:03.189 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:24:03 compute-0 nova_compute[253538]:   <uuid>4659329f-611a-4436-aa9e-26937db9cd61</uuid>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   <name>instance-00000009</name>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersAdmin275Test-server-190405605</nova:name>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:24:02</nova:creationTime>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:24:03 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:24:03 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:24:03 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:24:03 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:24:03 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:24:03 compute-0 nova_compute[253538]:         <nova:user uuid="57ccb3076c9145fda72f75af7dd3acc0">tempest-ServersAdmin275Test-1226019010-project-member</nova:user>
Nov 25 08:24:03 compute-0 nova_compute[253538]:         <nova:project uuid="e0f1bfa27d5e45138e846f38c1d92dfc">tempest-ServersAdmin275Test-1226019010</nova:project>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <system>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <entry name="serial">4659329f-611a-4436-aa9e-26937db9cd61</entry>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <entry name="uuid">4659329f-611a-4436-aa9e-26937db9cd61</entry>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     </system>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   <os>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   </os>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   <features>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   </features>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4659329f-611a-4436-aa9e-26937db9cd61_disk">
Nov 25 08:24:03 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:03 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4659329f-611a-4436-aa9e-26937db9cd61_disk.config">
Nov 25 08:24:03 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:03 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/console.log" append="off"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <video>
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     </video>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:24:03 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:24:03 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:24:03 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:24:03 compute-0 nova_compute[253538]: </domain>
Nov 25 08:24:03 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:24:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:24:03 compute-0 nova_compute[253538]: 2025-11-25 08:24:03.252 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:03 compute-0 nova_compute[253538]: 2025-11-25 08:24:03.252 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:03 compute-0 nova_compute[253538]: 2025-11-25 08:24:03.253 253542 INFO nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Using config drive
Nov 25 08:24:03 compute-0 podman[274930]: 2025-11-25 08:24:03.277608995 +0000 UTC m=+0.074777260 container create fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:24:03 compute-0 nova_compute[253538]: 2025-11-25 08:24:03.281 253542 DEBUG nova.storage.rbd_utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:03 compute-0 nova_compute[253538]: 2025-11-25 08:24:03.303 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'ec2_ids' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:03 compute-0 podman[274930]: 2025-11-25 08:24:03.228321812 +0000 UTC m=+0.025490147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:24:03 compute-0 systemd[1]: Started libpod-conmon-fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a.scope.
Nov 25 08:24:03 compute-0 nova_compute[253538]: 2025-11-25 08:24:03.355 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'keypairs' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9e72b75be376b653ac768e958108a936568a51c24172a99d28e6f2a631a7dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9e72b75be376b653ac768e958108a936568a51c24172a99d28e6f2a631a7dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9e72b75be376b653ac768e958108a936568a51c24172a99d28e6f2a631a7dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9e72b75be376b653ac768e958108a936568a51c24172a99d28e6f2a631a7dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9e72b75be376b653ac768e958108a936568a51c24172a99d28e6f2a631a7dc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:24:03 compute-0 podman[274930]: 2025-11-25 08:24:03.378597981 +0000 UTC m=+0.175766326 container init fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_heyrovsky, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:24:03 compute-0 podman[274930]: 2025-11-25 08:24:03.38719369 +0000 UTC m=+0.184361985 container start fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:24:03 compute-0 podman[274930]: 2025-11-25 08:24:03.391010935 +0000 UTC m=+0.188179220 container attach fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_heyrovsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:24:03 compute-0 nova_compute[253538]: 2025-11-25 08:24:03.526 253542 INFO nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating config drive at /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config
Nov 25 08:24:03 compute-0 nova_compute[253538]: 2025-11-25 08:24:03.533 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfjdh9v4d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:03 compute-0 nova_compute[253538]: 2025-11-25 08:24:03.658 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfjdh9v4d" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:03 compute-0 nova_compute[253538]: 2025-11-25 08:24:03.683 253542 DEBUG nova.storage.rbd_utils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:03 compute-0 nova_compute[253538]: 2025-11-25 08:24:03.687 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config 4659329f-611a-4436-aa9e-26937db9cd61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00027935397945562287 of space, bias 1.0, pg target 0.08380619383668686 quantized to 32 (current 32)
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:24:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:24:03 compute-0 nova_compute[253538]: 2025-11-25 08:24:03.830 253542 DEBUG oslo_concurrency.processutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config 4659329f-611a-4436-aa9e-26937db9cd61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:03 compute-0 nova_compute[253538]: 2025-11-25 08:24:03.831 253542 INFO nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deleting local config drive /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config because it was imported into RBD.
Nov 25 08:24:03 compute-0 systemd-machined[215790]: New machine qemu-10-instance-00000009.
Nov 25 08:24:03 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000009.
Nov 25 08:24:03 compute-0 podman[275015]: 2025-11-25 08:24:03.984259037 +0000 UTC m=+0.083676438 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 08:24:04 compute-0 ceph-mon[75015]: pgmap v1155: 321 pgs: 321 active+clean; 74 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 977 KiB/s rd, 2.5 MiB/s wr, 88 op/s
Nov 25 08:24:04 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/110738618' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.253 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 4659329f-611a-4436-aa9e-26937db9cd61 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.253 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059044.2528143, 4659329f-611a-4436-aa9e-26937db9cd61 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.254 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] VM Resumed (Lifecycle Event)
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.256 253542 DEBUG nova.compute.manager [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.256 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.259 253542 INFO nova.virt.libvirt.driver [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance spawned successfully.
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.259 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.295 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.302 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.305 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.305 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.305 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.306 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.306 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.306 253542 DEBUG nova.virt.libvirt.driver [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.334 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.335 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059044.2561886, 4659329f-611a-4436-aa9e-26937db9cd61 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.335 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] VM Started (Lifecycle Event)
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.353 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.356 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.388 253542 DEBUG nova.compute.manager [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.389 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:24:04 compute-0 lucid_heyrovsky[274964]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:24:04 compute-0 lucid_heyrovsky[274964]: --> relative data size: 1.0
Nov 25 08:24:04 compute-0 lucid_heyrovsky[274964]: --> All data devices are unavailable
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.436 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.436 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.437 253542 DEBUG nova.objects.instance [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:24:04 compute-0 systemd[1]: libpod-fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a.scope: Deactivated successfully.
Nov 25 08:24:04 compute-0 podman[274930]: 2025-11-25 08:24:04.462374691 +0000 UTC m=+1.259542986 container died fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.463 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba9e72b75be376b653ac768e958108a936568a51c24172a99d28e6f2a631a7dc-merged.mount: Deactivated successfully.
Nov 25 08:24:04 compute-0 nova_compute[253538]: 2025-11-25 08:24:04.504 253542 DEBUG oslo_concurrency.lockutils [None req-30da9785-a1fd-47ff-a7d7-c47887bfaf5d 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:04 compute-0 podman[274930]: 2025-11-25 08:24:04.530483417 +0000 UTC m=+1.327651682 container remove fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:24:04 compute-0 systemd[1]: libpod-conmon-fc5bbaf7f80b1bce58037fc30817f7c37312e18a55b6455fdcb025ac58c40c3a.scope: Deactivated successfully.
Nov 25 08:24:04 compute-0 sudo[274763]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:04 compute-0 sudo[275127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:24:04 compute-0 sudo[275127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:04 compute-0 sudo[275127]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:04 compute-0 sudo[275152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:24:04 compute-0 sudo[275152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:04 compute-0 sudo[275152]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:04 compute-0 sudo[275177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:24:04 compute-0 sudo[275177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:04 compute-0 sudo[275177]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:04 compute-0 sudo[275202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:24:04 compute-0 sudo[275202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 88 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 121 op/s
Nov 25 08:24:05 compute-0 nova_compute[253538]: 2025-11-25 08:24:05.158 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquiring lock "7ca3cfae-7765-48b7-9d65-660c3b709a55" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:05 compute-0 nova_compute[253538]: 2025-11-25 08:24:05.158 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "7ca3cfae-7765-48b7-9d65-660c3b709a55" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:05 compute-0 nova_compute[253538]: 2025-11-25 08:24:05.173 253542 DEBUG nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:24:05 compute-0 nova_compute[253538]: 2025-11-25 08:24:05.247 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:05 compute-0 nova_compute[253538]: 2025-11-25 08:24:05.248 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:05 compute-0 podman[275268]: 2025-11-25 08:24:05.253272624 +0000 UTC m=+0.076149880 container create 0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:24:05 compute-0 nova_compute[253538]: 2025-11-25 08:24:05.256 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:24:05 compute-0 nova_compute[253538]: 2025-11-25 08:24:05.256 253542 INFO nova.compute.claims [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:24:05 compute-0 podman[275268]: 2025-11-25 08:24:05.205966464 +0000 UTC m=+0.028843820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:24:05 compute-0 nova_compute[253538]: 2025-11-25 08:24:05.374 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:05 compute-0 systemd[1]: Started libpod-conmon-0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251.scope.
Nov 25 08:24:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:24:05 compute-0 podman[275268]: 2025-11-25 08:24:05.618082072 +0000 UTC m=+0.440959348 container init 0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:24:05 compute-0 podman[275268]: 2025-11-25 08:24:05.625843827 +0000 UTC m=+0.448721083 container start 0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 08:24:05 compute-0 vigorous_allen[275286]: 167 167
Nov 25 08:24:05 compute-0 systemd[1]: libpod-0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251.scope: Deactivated successfully.
Nov 25 08:24:05 compute-0 podman[275268]: 2025-11-25 08:24:05.683471492 +0000 UTC m=+0.506348748 container attach 0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 08:24:05 compute-0 podman[275268]: 2025-11-25 08:24:05.683937364 +0000 UTC m=+0.506814610 container died 0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 08:24:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfc426047ba604c00f1fe66e2e0e9b00d7398d9aa1557a5eac07a8dcfc7a59af-merged.mount: Deactivated successfully.
Nov 25 08:24:05 compute-0 podman[275268]: 2025-11-25 08:24:05.739496862 +0000 UTC m=+0.562374118 container remove 0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:24:05 compute-0 systemd[1]: libpod-conmon-0bd531a2740412c907fd0ac3cedafc06206e815312ece3a9681172ae2f1fa251.scope: Deactivated successfully.
Nov 25 08:24:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:24:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3892928765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:05 compute-0 nova_compute[253538]: 2025-11-25 08:24:05.852 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:05 compute-0 nova_compute[253538]: 2025-11-25 08:24:05.864 253542 DEBUG nova.compute.provider_tree [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:24:05 compute-0 nova_compute[253538]: 2025-11-25 08:24:05.882 253542 DEBUG nova.scheduler.client.report [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:24:05 compute-0 nova_compute[253538]: 2025-11-25 08:24:05.916 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:05 compute-0 nova_compute[253538]: 2025-11-25 08:24:05.917 253542 DEBUG nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:24:05 compute-0 podman[275332]: 2025-11-25 08:24:05.937660558 +0000 UTC m=+0.047063674 container create 02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khorana, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 08:24:05 compute-0 systemd[1]: Started libpod-conmon-02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c.scope.
Nov 25 08:24:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef51937ae41297dd191c15f6e9993fab7c12bb2bae032df544df2e2d455bd6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef51937ae41297dd191c15f6e9993fab7c12bb2bae032df544df2e2d455bd6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef51937ae41297dd191c15f6e9993fab7c12bb2bae032df544df2e2d455bd6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef51937ae41297dd191c15f6e9993fab7c12bb2bae032df544df2e2d455bd6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.010 253542 DEBUG nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.010 253542 DEBUG nova.network.neutron [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:24:06 compute-0 podman[275332]: 2025-11-25 08:24:05.917638094 +0000 UTC m=+0.027041230 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:24:06 compute-0 podman[275332]: 2025-11-25 08:24:06.016376757 +0000 UTC m=+0.125779893 container init 02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 08:24:06 compute-0 podman[275332]: 2025-11-25 08:24:06.023784112 +0000 UTC m=+0.133187228 container start 02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khorana, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 08:24:06 compute-0 podman[275332]: 2025-11-25 08:24:06.026903148 +0000 UTC m=+0.136306284 container attach 02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.035 253542 INFO nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.048 253542 DEBUG nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:24:06 compute-0 ceph-mon[75015]: pgmap v1156: 321 pgs: 321 active+clean; 88 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 121 op/s
Nov 25 08:24:06 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3892928765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.156 253542 DEBUG nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.158 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.158 253542 INFO nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Creating image(s)
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.187 253542 DEBUG nova.storage.rbd_utils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] rbd image 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.209 253542 DEBUG nova.storage.rbd_utils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] rbd image 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.231 253542 DEBUG nova.storage.rbd_utils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] rbd image 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.235 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.261 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.304 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.305 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.305 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.305 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.328 253542 DEBUG nova.storage.rbd_utils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] rbd image 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.332 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.580 253542 DEBUG nova.network.neutron [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.582 253542 DEBUG nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.606 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.607 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.753 253542 INFO nova.compute.manager [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Rebuilding instance
Nov 25 08:24:06 compute-0 practical_khorana[275346]: {
Nov 25 08:24:06 compute-0 practical_khorana[275346]:     "0": [
Nov 25 08:24:06 compute-0 practical_khorana[275346]:         {
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "devices": [
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "/dev/loop3"
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             ],
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "lv_name": "ceph_lv0",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "lv_size": "21470642176",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "name": "ceph_lv0",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "tags": {
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.cluster_name": "ceph",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.crush_device_class": "",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.encrypted": "0",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.osd_id": "0",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.type": "block",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.vdo": "0"
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             },
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "type": "block",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "vg_name": "ceph_vg0"
Nov 25 08:24:06 compute-0 practical_khorana[275346]:         }
Nov 25 08:24:06 compute-0 practical_khorana[275346]:     ],
Nov 25 08:24:06 compute-0 practical_khorana[275346]:     "1": [
Nov 25 08:24:06 compute-0 practical_khorana[275346]:         {
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "devices": [
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "/dev/loop4"
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             ],
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "lv_name": "ceph_lv1",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "lv_size": "21470642176",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "name": "ceph_lv1",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "tags": {
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.cluster_name": "ceph",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.crush_device_class": "",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.encrypted": "0",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.osd_id": "1",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.type": "block",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.vdo": "0"
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             },
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "type": "block",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "vg_name": "ceph_vg1"
Nov 25 08:24:06 compute-0 practical_khorana[275346]:         }
Nov 25 08:24:06 compute-0 practical_khorana[275346]:     ],
Nov 25 08:24:06 compute-0 practical_khorana[275346]:     "2": [
Nov 25 08:24:06 compute-0 practical_khorana[275346]:         {
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "devices": [
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "/dev/loop5"
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             ],
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "lv_name": "ceph_lv2",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "lv_size": "21470642176",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "name": "ceph_lv2",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "tags": {
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.cluster_name": "ceph",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.crush_device_class": "",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.encrypted": "0",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.osd_id": "2",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.type": "block",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:                 "ceph.vdo": "0"
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             },
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "type": "block",
Nov 25 08:24:06 compute-0 practical_khorana[275346]:             "vg_name": "ceph_vg2"
Nov 25 08:24:06 compute-0 practical_khorana[275346]:         }
Nov 25 08:24:06 compute-0 practical_khorana[275346]:     ]
Nov 25 08:24:06 compute-0 practical_khorana[275346]: }
Nov 25 08:24:06 compute-0 systemd[1]: libpod-02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c.scope: Deactivated successfully.
Nov 25 08:24:06 compute-0 podman[275332]: 2025-11-25 08:24:06.800599465 +0000 UTC m=+0.910002601 container died 02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khorana, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 08:24:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ef51937ae41297dd191c15f6e9993fab7c12bb2bae032df544df2e2d455bd6e-merged.mount: Deactivated successfully.
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.882 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:06 compute-0 podman[275332]: 2025-11-25 08:24:06.914996311 +0000 UTC m=+1.024399437 container remove 02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 08:24:06 compute-0 systemd[1]: libpod-conmon-02f1513406697c03124c33dab04908b0458797ce6cc9e598f8ab24b35778968c.scope: Deactivated successfully.
Nov 25 08:24:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 101 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.1 MiB/s wr, 117 op/s
Nov 25 08:24:06 compute-0 sudo[275202]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:06 compute-0 nova_compute[253538]: 2025-11-25 08:24:06.981 253542 DEBUG nova.storage.rbd_utils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] resizing rbd image 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.022 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.041 253542 DEBUG nova.compute.manager [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:07 compute-0 sudo[275496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:24:07 compute-0 sudo[275496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:07 compute-0 sudo[275496]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.096 253542 DEBUG nova.objects.instance [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lazy-loading 'migration_context' on Instance uuid 7ca3cfae-7765-48b7-9d65-660c3b709a55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.104 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.112 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.112 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Ensure instance console log exists: /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.112 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.113 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.113 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.114 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.115 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.122 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lazy-loading 'resources' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.127 253542 WARNING nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.129 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:07 compute-0 sudo[275555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.132 253542 DEBUG nova.virt.libvirt.host [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.132 253542 DEBUG nova.virt.libvirt.host [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:24:07 compute-0 sudo[275555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.136 253542 DEBUG nova.virt.libvirt.host [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.136 253542 DEBUG nova.virt.libvirt.host [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.136 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.137 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.137 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.137 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.137 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.137 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.137 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.138 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:24:07 compute-0 sudo[275555]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.138 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.138 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.138 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.138 253542 DEBUG nova.virt.hardware [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.141 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.164 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.170 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:24:07 compute-0 sudo[275582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:24:07 compute-0 sudo[275582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:07 compute-0 sudo[275582]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:07 compute-0 sudo[275608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:24:07 compute-0 sudo[275608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/484734067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.576 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.615 253542 DEBUG nova.storage.rbd_utils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] rbd image 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:07 compute-0 nova_compute[253538]: 2025-11-25 08:24:07.623 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:07 compute-0 podman[275710]: 2025-11-25 08:24:07.645649966 +0000 UTC m=+0.037115269 container create 80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 08:24:07 compute-0 systemd[1]: Started libpod-conmon-80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d.scope.
Nov 25 08:24:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:24:07 compute-0 podman[275710]: 2025-11-25 08:24:07.63132931 +0000 UTC m=+0.022794643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:24:07 compute-0 podman[275710]: 2025-11-25 08:24:07.787553294 +0000 UTC m=+0.179018617 container init 80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hermann, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:24:07 compute-0 podman[275710]: 2025-11-25 08:24:07.796151922 +0000 UTC m=+0.187617255 container start 80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 08:24:07 compute-0 beautiful_hermann[275729]: 167 167
Nov 25 08:24:07 compute-0 podman[275710]: 2025-11-25 08:24:07.801050677 +0000 UTC m=+0.192516030 container attach 80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hermann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 08:24:07 compute-0 systemd[1]: libpod-80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d.scope: Deactivated successfully.
Nov 25 08:24:07 compute-0 conmon[275729]: conmon 80c9ef87afeae2e9551b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d.scope/container/memory.events
Nov 25 08:24:07 compute-0 podman[275710]: 2025-11-25 08:24:07.803714591 +0000 UTC m=+0.195179914 container died 80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hermann, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 08:24:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-80b1c9d29f67d0e5cb01335439a05baa3e0086a9d78405c403713a190709ef96-merged.mount: Deactivated successfully.
Nov 25 08:24:07 compute-0 podman[275710]: 2025-11-25 08:24:07.876687231 +0000 UTC m=+0.268152554 container remove 80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hermann, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:24:07 compute-0 systemd[1]: libpod-conmon-80c9ef87afeae2e9551bdd6e5a06b4e6126a1ac68b480f8654ac4ccdd620166d.scope: Deactivated successfully.
Nov 25 08:24:08 compute-0 podman[275772]: 2025-11-25 08:24:08.047744036 +0000 UTC m=+0.050614532 container create 38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bose, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:24:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694983390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:08 compute-0 nova_compute[253538]: 2025-11-25 08:24:08.080 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:08 compute-0 nova_compute[253538]: 2025-11-25 08:24:08.084 253542 DEBUG nova.objects.instance [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 7ca3cfae-7765-48b7-9d65-660c3b709a55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:08 compute-0 nova_compute[253538]: 2025-11-25 08:24:08.094 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:24:08 compute-0 nova_compute[253538]:   <uuid>7ca3cfae-7765-48b7-9d65-660c3b709a55</uuid>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   <name>instance-0000000a</name>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerDiagnosticsTest-server-1777165044</nova:name>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:24:07</nova:creationTime>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:24:08 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:24:08 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:24:08 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:24:08 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:24:08 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:24:08 compute-0 nova_compute[253538]:         <nova:user uuid="247c30d08ff74c88808c16ddee332fbe">tempest-ServerDiagnosticsTest-593637892-project-member</nova:user>
Nov 25 08:24:08 compute-0 nova_compute[253538]:         <nova:project uuid="5a8ba4490f464848abc986d6b52e37cc">tempest-ServerDiagnosticsTest-593637892</nova:project>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <system>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <entry name="serial">7ca3cfae-7765-48b7-9d65-660c3b709a55</entry>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <entry name="uuid">7ca3cfae-7765-48b7-9d65-660c3b709a55</entry>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     </system>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   <os>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   </os>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   <features>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   </features>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7ca3cfae-7765-48b7-9d65-660c3b709a55_disk">
Nov 25 08:24:08 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:08 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7ca3cfae-7765-48b7-9d65-660c3b709a55_disk.config">
Nov 25 08:24:08 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:08 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55/console.log" append="off"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <video>
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     </video>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:24:08 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:24:08 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:24:08 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:24:08 compute-0 nova_compute[253538]: </domain>
Nov 25 08:24:08 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:24:08 compute-0 systemd[1]: Started libpod-conmon-38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a.scope.
Nov 25 08:24:08 compute-0 podman[275772]: 2025-11-25 08:24:08.018536618 +0000 UTC m=+0.021407094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:24:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c215b97091e818a58a3954957626c70a59fd99e85135a8df913f22f46e878fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c215b97091e818a58a3954957626c70a59fd99e85135a8df913f22f46e878fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c215b97091e818a58a3954957626c70a59fd99e85135a8df913f22f46e878fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c215b97091e818a58a3954957626c70a59fd99e85135a8df913f22f46e878fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:24:08 compute-0 podman[275772]: 2025-11-25 08:24:08.171532842 +0000 UTC m=+0.174403348 container init 38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bose, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 08:24:08 compute-0 nova_compute[253538]: 2025-11-25 08:24:08.171 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:08 compute-0 nova_compute[253538]: 2025-11-25 08:24:08.172 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:08 compute-0 nova_compute[253538]: 2025-11-25 08:24:08.173 253542 INFO nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Using config drive
Nov 25 08:24:08 compute-0 ceph-mon[75015]: pgmap v1157: 321 pgs: 321 active+clean; 101 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.1 MiB/s wr, 117 op/s
Nov 25 08:24:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/484734067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/694983390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:08 compute-0 podman[275772]: 2025-11-25 08:24:08.181301523 +0000 UTC m=+0.184171979 container start 38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 08:24:08 compute-0 podman[275772]: 2025-11-25 08:24:08.19346262 +0000 UTC m=+0.196333106 container attach 38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:24:08 compute-0 nova_compute[253538]: 2025-11-25 08:24:08.207 253542 DEBUG nova.storage.rbd_utils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] rbd image 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:24:08 compute-0 nova_compute[253538]: 2025-11-25 08:24:08.812 253542 INFO nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Creating config drive at /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55/disk.config
Nov 25 08:24:08 compute-0 nova_compute[253538]: 2025-11-25 08:24:08.817 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwucte386 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:08 compute-0 nova_compute[253538]: 2025-11-25 08:24:08.947 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwucte386" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 109 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.8 MiB/s wr, 141 op/s
Nov 25 08:24:08 compute-0 nova_compute[253538]: 2025-11-25 08:24:08.973 253542 DEBUG nova.storage.rbd_utils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] rbd image 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:08 compute-0 nova_compute[253538]: 2025-11-25 08:24:08.977 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55/disk.config 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:09 compute-0 nova_compute[253538]: 2025-11-25 08:24:09.164 253542 DEBUG oslo_concurrency.processutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55/disk.config 7ca3cfae-7765-48b7-9d65-660c3b709a55_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:09 compute-0 nova_compute[253538]: 2025-11-25 08:24:09.165 253542 INFO nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Deleting local config drive /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55/disk.config because it was imported into RBD.
Nov 25 08:24:09 compute-0 condescending_bose[275791]: {
Nov 25 08:24:09 compute-0 condescending_bose[275791]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:24:09 compute-0 condescending_bose[275791]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:24:09 compute-0 condescending_bose[275791]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:24:09 compute-0 condescending_bose[275791]:         "osd_id": 1,
Nov 25 08:24:09 compute-0 condescending_bose[275791]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:24:09 compute-0 condescending_bose[275791]:         "type": "bluestore"
Nov 25 08:24:09 compute-0 condescending_bose[275791]:     },
Nov 25 08:24:09 compute-0 condescending_bose[275791]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:24:09 compute-0 condescending_bose[275791]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:24:09 compute-0 condescending_bose[275791]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:24:09 compute-0 condescending_bose[275791]:         "osd_id": 2,
Nov 25 08:24:09 compute-0 condescending_bose[275791]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:24:09 compute-0 condescending_bose[275791]:         "type": "bluestore"
Nov 25 08:24:09 compute-0 condescending_bose[275791]:     },
Nov 25 08:24:09 compute-0 condescending_bose[275791]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:24:09 compute-0 condescending_bose[275791]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:24:09 compute-0 condescending_bose[275791]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:24:09 compute-0 condescending_bose[275791]:         "osd_id": 0,
Nov 25 08:24:09 compute-0 condescending_bose[275791]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:24:09 compute-0 condescending_bose[275791]:         "type": "bluestore"
Nov 25 08:24:09 compute-0 condescending_bose[275791]:     }
Nov 25 08:24:09 compute-0 condescending_bose[275791]: }
Nov 25 08:24:09 compute-0 systemd[1]: libpod-38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a.scope: Deactivated successfully.
Nov 25 08:24:09 compute-0 podman[275772]: 2025-11-25 08:24:09.210392519 +0000 UTC m=+1.213262965 container died 38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bose, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:24:09 compute-0 systemd[1]: libpod-38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a.scope: Consumed 1.020s CPU time.
Nov 25 08:24:09 compute-0 systemd-machined[215790]: New machine qemu-11-instance-0000000a.
Nov 25 08:24:09 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000a.
Nov 25 08:24:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c215b97091e818a58a3954957626c70a59fd99e85135a8df913f22f46e878fc-merged.mount: Deactivated successfully.
Nov 25 08:24:09 compute-0 podman[275772]: 2025-11-25 08:24:09.323126359 +0000 UTC m=+1.325996815 container remove 38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 08:24:09 compute-0 systemd[1]: libpod-conmon-38b0ec16caf53e33c480aee01417c5d895daf34dfb358e7bd57293f08810829a.scope: Deactivated successfully.
Nov 25 08:24:09 compute-0 sudo[275608]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:24:09 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:24:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:24:09 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:24:09 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e162fcd0-88c4-4075-94a2-c7b3eeb4d01e does not exist
Nov 25 08:24:09 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev ed814936-35aa-485f-8096-3f1448dda72b does not exist
Nov 25 08:24:09 compute-0 sudo[275912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:24:09 compute-0 sudo[275912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:09 compute-0 sudo[275912]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:09 compute-0 nova_compute[253538]: 2025-11-25 08:24:09.467 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:09 compute-0 sudo[275937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:24:09 compute-0 sudo[275937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:24:09 compute-0 sudo[275937]: pam_unix(sudo:session): session closed for user root
Nov 25 08:24:09 compute-0 nova_compute[253538]: 2025-11-25 08:24:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.028 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059050.0281622, 7ca3cfae-7765-48b7-9d65-660c3b709a55 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.029 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] VM Resumed (Lifecycle Event)
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.033 253542 DEBUG nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.033 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.037 253542 INFO nova.virt.libvirt.driver [-] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Instance spawned successfully.
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.037 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.052 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.059 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.062 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.063 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.063 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.064 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.064 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.065 253542 DEBUG nova.virt.libvirt.driver [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.270 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.271 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059050.0322893, 7ca3cfae-7765-48b7-9d65-660c3b709a55 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.271 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] VM Started (Lifecycle Event)
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.312 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.316 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.324 253542 INFO nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Took 4.17 seconds to spawn the instance on the hypervisor.
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.324 253542 DEBUG nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.347 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.384 253542 INFO nova.compute.manager [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Took 5.16 seconds to build instance.
Nov 25 08:24:10 compute-0 ceph-mon[75015]: pgmap v1158: 321 pgs: 321 active+clean; 109 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.8 MiB/s wr, 141 op/s
Nov 25 08:24:10 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:24:10 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.400 253542 DEBUG oslo_concurrency.lockutils [None req-666ccd83-4bf8-4de6-810a-c21239428d00 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "7ca3cfae-7765-48b7-9d65-660c3b709a55" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.575 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:24:10 compute-0 nova_compute[253538]: 2025-11-25 08:24:10.576 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 134 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 164 op/s
Nov 25 08:24:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:24:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/511440864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.048 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.118 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.119 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.123 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.123 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.258 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.259 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4439MB free_disk=59.95622634887695GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.259 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.260 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.263 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.344 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 4659329f-611a-4436-aa9e-26937db9cd61 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.344 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 7ca3cfae-7765-48b7-9d65-660c3b709a55 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.345 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.345 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.397 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:11 compute-0 ceph-mon[75015]: pgmap v1159: 321 pgs: 321 active+clean; 134 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 164 op/s
Nov 25 08:24:11 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/511440864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.544 253542 DEBUG nova.compute.manager [None req-a85b4a57-e15c-486c-83b4-33fa9d9f719b d683a34b78b44a5ca9f59cd5f49e57cf 77990df78fda43d19109b5fd47d2d5ad - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.548 253542 INFO nova.compute.manager [None req-a85b4a57-e15c-486c-83b4-33fa9d9f719b d683a34b78b44a5ca9f59cd5f49e57cf 77990df78fda43d19109b5fd47d2d5ad - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Retrieving diagnostics
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.777 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquiring lock "7ca3cfae-7765-48b7-9d65-660c3b709a55" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.778 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "7ca3cfae-7765-48b7-9d65-660c3b709a55" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.778 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquiring lock "7ca3cfae-7765-48b7-9d65-660c3b709a55-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.779 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "7ca3cfae-7765-48b7-9d65-660c3b709a55-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.779 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "7ca3cfae-7765-48b7-9d65-660c3b709a55-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.780 253542 INFO nova.compute.manager [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Terminating instance
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.781 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquiring lock "refresh_cache-7ca3cfae-7765-48b7-9d65-660c3b709a55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.782 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquired lock "refresh_cache-7ca3cfae-7765-48b7-9d65-660c3b709a55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.782 253542 DEBUG nova.network.neutron [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:24:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:24:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890546180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.847 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.854 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.874 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.896 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:24:11 compute-0 nova_compute[253538]: 2025-11-25 08:24:11.897 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:12 compute-0 nova_compute[253538]: 2025-11-25 08:24:12.255 253542 DEBUG nova.network.neutron [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:24:12 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/890546180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:12 compute-0 nova_compute[253538]: 2025-11-25 08:24:12.812 253542 DEBUG nova.network.neutron [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:24:12 compute-0 nova_compute[253538]: 2025-11-25 08:24:12.826 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Releasing lock "refresh_cache-7ca3cfae-7765-48b7-9d65-660c3b709a55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:24:12 compute-0 nova_compute[253538]: 2025-11-25 08:24:12.827 253542 DEBUG nova.compute.manager [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:24:12 compute-0 nova_compute[253538]: 2025-11-25 08:24:12.898 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:24:12 compute-0 nova_compute[253538]: 2025-11-25 08:24:12.915 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:24:12 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 25 08:24:12 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Consumed 3.502s CPU time.
Nov 25 08:24:12 compute-0 systemd-machined[215790]: Machine qemu-11-instance-0000000a terminated.
Nov 25 08:24:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 134 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 179 op/s
Nov 25 08:24:13 compute-0 nova_compute[253538]: 2025-11-25 08:24:13.049 253542 INFO nova.virt.libvirt.driver [-] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Instance destroyed successfully.
Nov 25 08:24:13 compute-0 nova_compute[253538]: 2025-11-25 08:24:13.050 253542 DEBUG nova.objects.instance [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lazy-loading 'resources' on Instance uuid 7ca3cfae-7765-48b7-9d65-660c3b709a55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.449622) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059053449664, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1270, "num_deletes": 507, "total_data_size": 1316105, "memory_usage": 1343024, "flush_reason": "Manual Compaction"}
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059053566104, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1252824, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23068, "largest_seqno": 24337, "table_properties": {"data_size": 1247429, "index_size": 2279, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15474, "raw_average_key_size": 19, "raw_value_size": 1234214, "raw_average_value_size": 1518, "num_data_blocks": 102, "num_entries": 813, "num_filter_entries": 813, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764058977, "oldest_key_time": 1764058977, "file_creation_time": 1764059053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 116546 microseconds, and 5470 cpu microseconds.
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.566164) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1252824 bytes OK
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.566189) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.590104) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.590160) EVENT_LOG_v1 {"time_micros": 1764059053590145, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.590193) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1309240, prev total WAL file size 1309240, number of live WAL files 2.
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.591391) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353034' seq:72057594037927935, type:22 .. '6C6F676D00373537' seq:0, type:0; will stop at (end)
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1223KB)], [53(9103KB)]
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059053591439, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10574673, "oldest_snapshot_seqno": -1}
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4598 keys, 7217278 bytes, temperature: kUnknown
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059053962684, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7217278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7186963, "index_size": 17725, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11525, "raw_key_size": 115523, "raw_average_key_size": 25, "raw_value_size": 7104183, "raw_average_value_size": 1545, "num_data_blocks": 739, "num_entries": 4598, "num_filter_entries": 4598, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764059053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.962923) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7217278 bytes
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.974779) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 28.5 rd, 19.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.9 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(14.2) write-amplify(5.8) OK, records in: 5627, records dropped: 1029 output_compression: NoCompression
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.974824) EVENT_LOG_v1 {"time_micros": 1764059053974804, "job": 28, "event": "compaction_finished", "compaction_time_micros": 371311, "compaction_time_cpu_micros": 33617, "output_level": 6, "num_output_files": 1, "total_output_size": 7217278, "num_input_records": 5627, "num_output_records": 4598, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059053975435, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059053978062, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.591187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.978598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.978603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.978605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.978607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:24:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:24:13.978609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:24:13 compute-0 ceph-mon[75015]: pgmap v1160: 321 pgs: 321 active+clean; 134 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 179 op/s
Nov 25 08:24:14 compute-0 nova_compute[253538]: 2025-11-25 08:24:14.527 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:14 compute-0 nova_compute[253538]: 2025-11-25 08:24:14.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:24:14 compute-0 nova_compute[253538]: 2025-11-25 08:24:14.946 253542 INFO nova.virt.libvirt.driver [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Deleting instance files /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55_del
Nov 25 08:24:14 compute-0 nova_compute[253538]: 2025-11-25 08:24:14.947 253542 INFO nova.virt.libvirt.driver [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Deletion of /var/lib/nova/instances/7ca3cfae-7765-48b7-9d65-660c3b709a55_del complete
Nov 25 08:24:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 109 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.2 MiB/s wr, 214 op/s
Nov 25 08:24:15 compute-0 nova_compute[253538]: 2025-11-25 08:24:15.027 253542 INFO nova.compute.manager [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Took 2.20 seconds to destroy the instance on the hypervisor.
Nov 25 08:24:15 compute-0 nova_compute[253538]: 2025-11-25 08:24:15.028 253542 DEBUG oslo.service.loopingcall [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:24:15 compute-0 nova_compute[253538]: 2025-11-25 08:24:15.028 253542 DEBUG nova.compute.manager [-] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:24:15 compute-0 nova_compute[253538]: 2025-11-25 08:24:15.029 253542 DEBUG nova.network.neutron [-] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:24:15 compute-0 nova_compute[253538]: 2025-11-25 08:24:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:24:15 compute-0 nova_compute[253538]: 2025-11-25 08:24:15.820 253542 DEBUG nova.network.neutron [-] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:24:15 compute-0 nova_compute[253538]: 2025-11-25 08:24:15.832 253542 DEBUG nova.network.neutron [-] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:24:15 compute-0 nova_compute[253538]: 2025-11-25 08:24:15.852 253542 INFO nova.compute.manager [-] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Took 0.82 seconds to deallocate network for instance.
Nov 25 08:24:15 compute-0 nova_compute[253538]: 2025-11-25 08:24:15.894 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:15 compute-0 nova_compute[253538]: 2025-11-25 08:24:15.895 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:15 compute-0 nova_compute[253538]: 2025-11-25 08:24:15.975 253542 DEBUG oslo_concurrency.processutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:16 compute-0 ceph-mon[75015]: pgmap v1161: 321 pgs: 321 active+clean; 109 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.2 MiB/s wr, 214 op/s
Nov 25 08:24:16 compute-0 nova_compute[253538]: 2025-11-25 08:24:16.264 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:24:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1739399886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:16 compute-0 nova_compute[253538]: 2025-11-25 08:24:16.451 253542 DEBUG oslo_concurrency.processutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:16 compute-0 nova_compute[253538]: 2025-11-25 08:24:16.457 253542 DEBUG nova.compute.provider_tree [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:24:16 compute-0 nova_compute[253538]: 2025-11-25 08:24:16.478 253542 DEBUG nova.scheduler.client.report [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:24:16 compute-0 nova_compute[253538]: 2025-11-25 08:24:16.513 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:16 compute-0 nova_compute[253538]: 2025-11-25 08:24:16.542 253542 INFO nova.scheduler.client.report [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Deleted allocations for instance 7ca3cfae-7765-48b7-9d65-660c3b709a55
Nov 25 08:24:16 compute-0 nova_compute[253538]: 2025-11-25 08:24:16.615 253542 DEBUG oslo_concurrency.lockutils [None req-06f463bb-3052-409e-a356-29186e6c4bf3 247c30d08ff74c88808c16ddee332fbe 5a8ba4490f464848abc986d6b52e37cc - - default default] Lock "7ca3cfae-7765-48b7-9d65-660c3b709a55" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:16 compute-0 nova_compute[253538]: 2025-11-25 08:24:16.692 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquiring lock "1cb8bb78-4ff6-496c-858c-41159362ffb8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:16 compute-0 nova_compute[253538]: 2025-11-25 08:24:16.693 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "1cb8bb78-4ff6-496c-858c-41159362ffb8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:16 compute-0 nova_compute[253538]: 2025-11-25 08:24:16.708 253542 DEBUG nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:24:16 compute-0 nova_compute[253538]: 2025-11-25 08:24:16.776 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:16 compute-0 nova_compute[253538]: 2025-11-25 08:24:16.776 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:16 compute-0 nova_compute[253538]: 2025-11-25 08:24:16.784 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:24:16 compute-0 nova_compute[253538]: 2025-11-25 08:24:16.785 253542 INFO nova.compute.claims [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:24:16 compute-0 nova_compute[253538]: 2025-11-25 08:24:16.923 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 99 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 193 op/s
Nov 25 08:24:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1739399886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.256 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:24:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:24:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1552264641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.390 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.396 253542 DEBUG nova.compute.provider_tree [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.410 253542 DEBUG nova.scheduler.client.report [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.435 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.436 253542 DEBUG nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.489 253542 DEBUG nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.490 253542 DEBUG nova.network.neutron [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.520 253542 INFO nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.539 253542 DEBUG nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.649 253542 DEBUG nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.650 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.651 253542 INFO nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Creating image(s)
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.672 253542 DEBUG nova.storage.rbd_utils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] rbd image 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.695 253542 DEBUG nova.storage.rbd_utils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] rbd image 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.717 253542 DEBUG nova.storage.rbd_utils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] rbd image 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.720 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.777 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.779 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.779 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.780 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.804 253542 DEBUG nova.storage.rbd_utils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] rbd image 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.807 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.985 253542 DEBUG nova.network.neutron [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 08:24:17 compute-0 nova_compute[253538]: 2025-11-25 08:24:17.986 253542 DEBUG nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:24:18 compute-0 ovn_controller[152859]: 2025-11-25T08:24:18Z|00045|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 08:24:18 compute-0 ceph-mon[75015]: pgmap v1162: 321 pgs: 321 active+clean; 99 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 193 op/s
Nov 25 08:24:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1552264641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.257 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.310 253542 DEBUG nova.storage.rbd_utils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] resizing rbd image 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:24:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.471 253542 DEBUG nova.objects.instance [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lazy-loading 'migration_context' on Instance uuid 1cb8bb78-4ff6-496c-858c-41159362ffb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.487 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.487 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Ensure instance console log exists: /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.488 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.488 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.489 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.490 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.495 253542 WARNING nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.500 253542 DEBUG nova.virt.libvirt.host [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.501 253542 DEBUG nova.virt.libvirt.host [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.507 253542 DEBUG nova.virt.libvirt.host [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.508 253542 DEBUG nova.virt.libvirt.host [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.508 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.508 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.509 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.509 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.509 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.510 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.510 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.510 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.510 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.511 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.511 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.511 253542 DEBUG nova.virt.hardware [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.514 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3193521444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.928 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.952 253542 DEBUG nova.storage.rbd_utils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] rbd image 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:18 compute-0 nova_compute[253538]: 2025-11-25 08:24:18.957 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 114 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.9 MiB/s wr, 208 op/s
Nov 25 08:24:19 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3193521444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2042104617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:19 compute-0 nova_compute[253538]: 2025-11-25 08:24:19.534 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:19 compute-0 nova_compute[253538]: 2025-11-25 08:24:19.553 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:19 compute-0 nova_compute[253538]: 2025-11-25 08:24:19.556 253542 DEBUG nova.objects.instance [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lazy-loading 'pci_devices' on Instance uuid 1cb8bb78-4ff6-496c-858c-41159362ffb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:19 compute-0 nova_compute[253538]: 2025-11-25 08:24:19.588 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:24:19 compute-0 nova_compute[253538]:   <uuid>1cb8bb78-4ff6-496c-858c-41159362ffb8</uuid>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   <name>instance-0000000b</name>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerExternalEventsTest-server-44658727</nova:name>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:24:18</nova:creationTime>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:24:19 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:24:19 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:24:19 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:24:19 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:24:19 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:24:19 compute-0 nova_compute[253538]:         <nova:user uuid="479070fb6fab4140a63c6eb2f769af32">tempest-ServerExternalEventsTest-287738302-project-member</nova:user>
Nov 25 08:24:19 compute-0 nova_compute[253538]:         <nova:project uuid="17c27399c5cf49759f4c3caf086910cb">tempest-ServerExternalEventsTest-287738302</nova:project>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <system>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <entry name="serial">1cb8bb78-4ff6-496c-858c-41159362ffb8</entry>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <entry name="uuid">1cb8bb78-4ff6-496c-858c-41159362ffb8</entry>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     </system>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   <os>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   </os>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   <features>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   </features>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/1cb8bb78-4ff6-496c-858c-41159362ffb8_disk">
Nov 25 08:24:19 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:19 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/1cb8bb78-4ff6-496c-858c-41159362ffb8_disk.config">
Nov 25 08:24:19 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:19 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8/console.log" append="off"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <video>
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     </video>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:24:19 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:24:19 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:24:19 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:24:19 compute-0 nova_compute[253538]: </domain>
Nov 25 08:24:19 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:24:19 compute-0 nova_compute[253538]: 2025-11-25 08:24:19.688 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:19 compute-0 nova_compute[253538]: 2025-11-25 08:24:19.689 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:19 compute-0 nova_compute[253538]: 2025-11-25 08:24:19.693 253542 INFO nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Using config drive
Nov 25 08:24:19 compute-0 nova_compute[253538]: 2025-11-25 08:24:19.732 253542 DEBUG nova.storage.rbd_utils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] rbd image 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:19 compute-0 nova_compute[253538]: 2025-11-25 08:24:19.773 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:24:19.773 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:24:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:24:19.775 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:24:19 compute-0 nova_compute[253538]: 2025-11-25 08:24:19.968 253542 INFO nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Creating config drive at /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8/disk.config
Nov 25 08:24:19 compute-0 nova_compute[253538]: 2025-11-25 08:24:19.973 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvcob9wtt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:20 compute-0 nova_compute[253538]: 2025-11-25 08:24:20.099 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvcob9wtt" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:20 compute-0 nova_compute[253538]: 2025-11-25 08:24:20.123 253542 DEBUG nova.storage.rbd_utils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] rbd image 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:20 compute-0 nova_compute[253538]: 2025-11-25 08:24:20.127 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8/disk.config 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:20 compute-0 ceph-mon[75015]: pgmap v1163: 321 pgs: 321 active+clean; 114 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.9 MiB/s wr, 208 op/s
Nov 25 08:24:20 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2042104617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:20 compute-0 nova_compute[253538]: 2025-11-25 08:24:20.564 253542 DEBUG oslo_concurrency.processutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8/disk.config 1cb8bb78-4ff6-496c-858c-41159362ffb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:20 compute-0 nova_compute[253538]: 2025-11-25 08:24:20.565 253542 INFO nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Deleting local config drive /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8/disk.config because it was imported into RBD.
Nov 25 08:24:20 compute-0 systemd-machined[215790]: New machine qemu-12-instance-0000000b.
Nov 25 08:24:20 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000b.
Nov 25 08:24:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 134 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.0 MiB/s wr, 193 op/s
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.266 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:21 compute-0 ceph-mon[75015]: pgmap v1164: 321 pgs: 321 active+clean; 134 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.0 MiB/s wr, 193 op/s
Nov 25 08:24:21 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 25 08:24:21 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000009.scope: Consumed 13.060s CPU time.
Nov 25 08:24:21 compute-0 systemd-machined[215790]: Machine qemu-10-instance-00000009 terminated.
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.726 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059061.7259235, 1cb8bb78-4ff6-496c-858c-41159362ffb8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.726 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] VM Resumed (Lifecycle Event)
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.730 253542 DEBUG nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.731 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.735 253542 INFO nova.virt.libvirt.driver [-] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Instance spawned successfully.
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.736 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.749 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.755 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.758 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.759 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.759 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.760 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.760 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.761 253542 DEBUG nova.virt.libvirt.driver [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.787 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.787 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059061.7305493, 1cb8bb78-4ff6-496c-858c-41159362ffb8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.787 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] VM Started (Lifecycle Event)
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.815 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.819 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.824 253542 INFO nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Took 4.17 seconds to spawn the instance on the hypervisor.
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.825 253542 DEBUG nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.835 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.881 253542 INFO nova.compute.manager [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Took 5.13 seconds to build instance.
Nov 25 08:24:21 compute-0 nova_compute[253538]: 2025-11-25 08:24:21.897 253542 DEBUG oslo_concurrency.lockutils [None req-c5d6135b-f44a-4f19-8193-26c9f80add05 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "1cb8bb78-4ff6-496c-858c-41159362ffb8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:22 compute-0 nova_compute[253538]: 2025-11-25 08:24:22.458 253542 INFO nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance shutdown successfully after 15 seconds.
Nov 25 08:24:22 compute-0 nova_compute[253538]: 2025-11-25 08:24:22.464 253542 INFO nova.virt.libvirt.driver [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance destroyed successfully.
Nov 25 08:24:22 compute-0 nova_compute[253538]: 2025-11-25 08:24:22.468 253542 INFO nova.virt.libvirt.driver [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance destroyed successfully.
Nov 25 08:24:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 164 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 181 op/s
Nov 25 08:24:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:24:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:24:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:24:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:24:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:24:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:24:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.475 253542 INFO nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deleting instance files /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61_del
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.476 253542 INFO nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deletion of /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61_del complete
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.499 253542 DEBUG nova.compute.manager [None req-714c023e-24b6-4e82-adf6-0aa705fed6b6 fe6ab27d78934bb8bd1e2c796e24a021 8be8e15be92f4c8a8ad37f92b6ba82e4 - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.499 253542 DEBUG nova.compute.manager [None req-714c023e-24b6-4e82-adf6-0aa705fed6b6 fe6ab27d78934bb8bd1e2c796e24a021 8be8e15be92f4c8a8ad37f92b6ba82e4 - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.500 253542 DEBUG oslo_concurrency.lockutils [None req-714c023e-24b6-4e82-adf6-0aa705fed6b6 fe6ab27d78934bb8bd1e2c796e24a021 8be8e15be92f4c8a8ad37f92b6ba82e4 - - default default] Acquiring lock "refresh_cache-1cb8bb78-4ff6-496c-858c-41159362ffb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.500 253542 DEBUG oslo_concurrency.lockutils [None req-714c023e-24b6-4e82-adf6-0aa705fed6b6 fe6ab27d78934bb8bd1e2c796e24a021 8be8e15be92f4c8a8ad37f92b6ba82e4 - - default default] Acquired lock "refresh_cache-1cb8bb78-4ff6-496c-858c-41159362ffb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.501 253542 DEBUG nova.network.neutron [None req-714c023e-24b6-4e82-adf6-0aa705fed6b6 fe6ab27d78934bb8bd1e2c796e24a021 8be8e15be92f4c8a8ad37f92b6ba82e4 - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.627 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.629 253542 INFO nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating image(s)
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.655 253542 DEBUG nova.storage.rbd_utils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.684 253542 DEBUG nova.storage.rbd_utils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.708 253542 DEBUG nova.storage.rbd_utils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.712 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.747 253542 DEBUG nova.network.neutron [None req-714c023e-24b6-4e82-adf6-0aa705fed6b6 fe6ab27d78934bb8bd1e2c796e24a021 8be8e15be92f4c8a8ad37f92b6ba82e4 - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.798 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.799 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.800 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.800 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.822 253542 DEBUG nova.storage.rbd_utils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.826 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4659329f-611a-4436-aa9e-26937db9cd61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.943 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquiring lock "1cb8bb78-4ff6-496c-858c-41159362ffb8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.944 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "1cb8bb78-4ff6-496c-858c-41159362ffb8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.944 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquiring lock "1cb8bb78-4ff6-496c-858c-41159362ffb8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.945 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "1cb8bb78-4ff6-496c-858c-41159362ffb8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.945 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "1cb8bb78-4ff6-496c-858c-41159362ffb8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.946 253542 INFO nova.compute.manager [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Terminating instance
Nov 25 08:24:23 compute-0 nova_compute[253538]: 2025-11-25 08:24:23.947 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquiring lock "refresh_cache-1cb8bb78-4ff6-496c-858c-41159362ffb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.005 253542 DEBUG nova.network.neutron [None req-714c023e-24b6-4e82-adf6-0aa705fed6b6 fe6ab27d78934bb8bd1e2c796e24a021 8be8e15be92f4c8a8ad37f92b6ba82e4 - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.019 253542 DEBUG oslo_concurrency.lockutils [None req-714c023e-24b6-4e82-adf6-0aa705fed6b6 fe6ab27d78934bb8bd1e2c796e24a021 8be8e15be92f4c8a8ad37f92b6ba82e4 - - default default] Releasing lock "refresh_cache-1cb8bb78-4ff6-496c-858c-41159362ffb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.022 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquired lock "refresh_cache-1cb8bb78-4ff6-496c-858c-41159362ffb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.022 253542 DEBUG nova.network.neutron [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:24:24 compute-0 ceph-mon[75015]: pgmap v1165: 321 pgs: 321 active+clean; 164 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 181 op/s
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.123 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4659329f-611a-4436-aa9e-26937db9cd61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.178 253542 DEBUG nova.network.neutron [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.185 253542 DEBUG nova.storage.rbd_utils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] resizing rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.535 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.579 253542 DEBUG nova.network.neutron [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.591 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Releasing lock "refresh_cache-1cb8bb78-4ff6-496c-858c-41159362ffb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.592 253542 DEBUG nova.compute.manager [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.631 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquiring lock "651b0445-6a0f-41e0-ad78-6318f8175e0b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.631 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "651b0445-6a0f-41e0-ad78-6318f8175e0b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.644 253542 DEBUG nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.729 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.730 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.737 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.738 253542 INFO nova.compute.claims [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:24:24 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 25 08:24:24 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Consumed 4.015s CPU time.
Nov 25 08:24:24 compute-0 systemd-machined[215790]: Machine qemu-12-instance-0000000b terminated.
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.813 253542 INFO nova.virt.libvirt.driver [-] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Instance destroyed successfully.
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.814 253542 DEBUG nova.objects.instance [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lazy-loading 'resources' on Instance uuid 1cb8bb78-4ff6-496c-858c-41159362ffb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.967 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.968 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Ensure instance console log exists: /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.969 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.969 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.970 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 115 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.5 MiB/s wr, 254 op/s
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.972 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.977 253542 WARNING nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 08:24:24 compute-0 nova_compute[253538]: 2025-11-25 08:24:24.981 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.014 253542 DEBUG nova.virt.libvirt.host [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.017 253542 DEBUG nova.virt.libvirt.host [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.021 253542 DEBUG nova.virt.libvirt.host [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.022 253542 DEBUG nova.virt.libvirt.host [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.023 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.024 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.026 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.027 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.028 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.028 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.029 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.030 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.030 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.031 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.032 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.032 253542 DEBUG nova.virt.hardware [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.033 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.053 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3334326783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:24:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1947771019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.481 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.508 253542 DEBUG nova.storage.rbd_utils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.512 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.533 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.539 253542 DEBUG nova.compute.provider_tree [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.562 253542 DEBUG nova.scheduler.client.report [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.615 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.616 253542 DEBUG nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.830 253542 DEBUG nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.831 253542 DEBUG nova.network.neutron [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.910 253542 INFO nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.949 253542 DEBUG nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.977 253542 INFO nova.virt.libvirt.driver [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Deleting instance files /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8_del
Nov 25 08:24:25 compute-0 nova_compute[253538]: 2025-11-25 08:24:25.978 253542 INFO nova.virt.libvirt.driver [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Deletion of /var/lib/nova/instances/1cb8bb78-4ff6-496c-858c-41159362ffb8_del complete
Nov 25 08:24:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1254586632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.008 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.010 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:24:26 compute-0 nova_compute[253538]:   <uuid>4659329f-611a-4436-aa9e-26937db9cd61</uuid>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   <name>instance-00000009</name>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersAdmin275Test-server-190405605</nova:name>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:24:24</nova:creationTime>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:24:26 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:24:26 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:24:26 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:24:26 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:24:26 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:24:26 compute-0 nova_compute[253538]:         <nova:user uuid="57ccb3076c9145fda72f75af7dd3acc0">tempest-ServersAdmin275Test-1226019010-project-member</nova:user>
Nov 25 08:24:26 compute-0 nova_compute[253538]:         <nova:project uuid="e0f1bfa27d5e45138e846f38c1d92dfc">tempest-ServersAdmin275Test-1226019010</nova:project>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <system>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <entry name="serial">4659329f-611a-4436-aa9e-26937db9cd61</entry>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <entry name="uuid">4659329f-611a-4436-aa9e-26937db9cd61</entry>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     </system>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   <os>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   </os>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   <features>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   </features>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4659329f-611a-4436-aa9e-26937db9cd61_disk">
Nov 25 08:24:26 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:26 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4659329f-611a-4436-aa9e-26937db9cd61_disk.config">
Nov 25 08:24:26 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:26 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/console.log" append="off"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <video>
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     </video>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:24:26 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:24:26 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:24:26 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:24:26 compute-0 nova_compute[253538]: </domain>
Nov 25 08:24:26 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.051 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.051 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.052 253542 INFO nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Using config drive
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.077 253542 DEBUG nova.storage.rbd_utils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.101 253542 DEBUG nova.network.neutron [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.101 253542 DEBUG nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:24:26 compute-0 podman[276754]: 2025-11-25 08:24:26.108137011 +0000 UTC m=+0.055581800 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.123 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.150 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lazy-loading 'keypairs' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.237 253542 INFO nova.compute.manager [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Took 1.64 seconds to destroy the instance on the hypervisor.
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.237 253542 DEBUG oslo.service.loopingcall [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.237 253542 DEBUG nova.compute.manager [-] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.238 253542 DEBUG nova.network.neutron [-] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.252 253542 DEBUG nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.253 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.253 253542 INFO nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Creating image(s)
Nov 25 08:24:26 compute-0 ceph-mon[75015]: pgmap v1166: 321 pgs: 321 active+clean; 115 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.5 MiB/s wr, 254 op/s
Nov 25 08:24:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3334326783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1947771019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1254586632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.300 253542 DEBUG nova.storage.rbd_utils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] rbd image 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.322 253542 DEBUG nova.storage.rbd_utils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] rbd image 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.344 253542 DEBUG nova.storage.rbd_utils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] rbd image 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.347 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.370 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.374 253542 INFO nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Creating config drive at /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.378 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpql71eqeu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.397 253542 DEBUG nova.network.neutron [-] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.411 253542 DEBUG nova.network.neutron [-] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.425 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.426 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.427 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.427 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.447 253542 DEBUG nova.storage.rbd_utils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] rbd image 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.449 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.470 253542 INFO nova.compute.manager [-] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Took 0.23 seconds to deallocate network for instance.
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.502 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpql71eqeu" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.533 253542 DEBUG nova.storage.rbd_utils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] rbd image 4659329f-611a-4436-aa9e-26937db9cd61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.537 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config 4659329f-611a-4436-aa9e-26937db9cd61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.670 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.672 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:24:26.777 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:24:26 compute-0 nova_compute[253538]: 2025-11-25 08:24:26.792 253542 DEBUG oslo_concurrency.processutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 91 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.2 MiB/s wr, 239 op/s
Nov 25 08:24:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:24:27 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2978263295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:27 compute-0 nova_compute[253538]: 2025-11-25 08:24:27.253 253542 DEBUG oslo_concurrency.processutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:27 compute-0 nova_compute[253538]: 2025-11-25 08:24:27.258 253542 DEBUG nova.compute.provider_tree [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:24:27 compute-0 nova_compute[253538]: 2025-11-25 08:24:27.417 253542 DEBUG nova.scheduler.client.report [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:24:27 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2978263295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:27 compute-0 nova_compute[253538]: 2025-11-25 08:24:27.941 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:28 compute-0 nova_compute[253538]: 2025-11-25 08:24:28.048 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059053.047059, 7ca3cfae-7765-48b7-9d65-660c3b709a55 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:28 compute-0 nova_compute[253538]: 2025-11-25 08:24:28.048 253542 INFO nova.compute.manager [-] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] VM Stopped (Lifecycle Event)
Nov 25 08:24:28 compute-0 nova_compute[253538]: 2025-11-25 08:24:28.063 253542 INFO nova.scheduler.client.report [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Deleted allocations for instance 1cb8bb78-4ff6-496c-858c-41159362ffb8
Nov 25 08:24:28 compute-0 nova_compute[253538]: 2025-11-25 08:24:28.070 253542 DEBUG nova.compute.manager [None req-3746c55d-c83d-4029-9c9e-a8aee7ddcf44 - - - - - -] [instance: 7ca3cfae-7765-48b7-9d65-660c3b709a55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:28 compute-0 nova_compute[253538]: 2025-11-25 08:24:28.134 253542 DEBUG oslo_concurrency.lockutils [None req-cce667b4-09e0-4f99-a432-b9789a278886 479070fb6fab4140a63c6eb2f769af32 17c27399c5cf49759f4c3caf086910cb - - default default] Lock "1cb8bb78-4ff6-496c-858c-41159362ffb8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:28 compute-0 nova_compute[253538]: 2025-11-25 08:24:28.142 253542 DEBUG oslo_concurrency.processutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config 4659329f-611a-4436-aa9e-26937db9cd61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:28 compute-0 nova_compute[253538]: 2025-11-25 08:24:28.143 253542 INFO nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deleting local config drive /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61/disk.config because it was imported into RBD.
Nov 25 08:24:28 compute-0 systemd-machined[215790]: New machine qemu-13-instance-00000009.
Nov 25 08:24:28 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-00000009.
Nov 25 08:24:28 compute-0 nova_compute[253538]: 2025-11-25 08:24:28.306 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.857s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:28 compute-0 podman[276953]: 2025-11-25 08:24:28.319470312 +0000 UTC m=+0.077470445 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:24:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:24:28 compute-0 nova_compute[253538]: 2025-11-25 08:24:28.426 253542 DEBUG nova.storage.rbd_utils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] resizing rbd image 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:24:28 compute-0 ceph-mon[75015]: pgmap v1167: 321 pgs: 321 active+clean; 91 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.2 MiB/s wr, 239 op/s
Nov 25 08:24:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 111 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 247 op/s
Nov 25 08:24:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:24:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2152344658' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:24:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:24:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2152344658' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.121 253542 DEBUG nova.objects.instance [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lazy-loading 'migration_context' on Instance uuid 651b0445-6a0f-41e0-ad78-6318f8175e0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.138 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.139 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Ensure instance console log exists: /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.139 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.139 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.139 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.141 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.145 253542 WARNING nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.149 253542 DEBUG nova.virt.libvirt.host [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.149 253542 DEBUG nova.virt.libvirt.host [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.153 253542 DEBUG nova.virt.libvirt.host [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.154 253542 DEBUG nova.virt.libvirt.host [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.154 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.154 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.155 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.155 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.155 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.155 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.155 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.155 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.156 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.156 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.156 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.156 253542 DEBUG nova.virt.hardware [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.159 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.179 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 4659329f-611a-4436-aa9e-26937db9cd61 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.180 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059069.1679134, 4659329f-611a-4436-aa9e-26937db9cd61 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.180 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] VM Resumed (Lifecycle Event)
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.182 253542 DEBUG nova.compute.manager [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.183 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.187 253542 INFO nova.virt.libvirt.driver [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance spawned successfully.
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.188 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.199 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.205 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.208 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.209 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.209 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.210 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.210 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.211 253542 DEBUG nova.virt.libvirt.driver [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.237 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.238 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059069.1683638, 4659329f-611a-4436-aa9e-26937db9cd61 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.238 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] VM Started (Lifecycle Event)
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.271 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.275 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.295 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.301 253542 DEBUG nova.compute.manager [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.354 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.354 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.355 253542 DEBUG nova.objects.instance [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.421 253542 DEBUG oslo_concurrency.lockutils [None req-f0f67520-50d5-44d1-bd6c-82dd827f18a4 8eb21f29ec514ce1adc864a5875afc7f cb11b932e29641d899540daf5c89f9d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.538 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3515615473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.626 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:29 compute-0 ceph-mon[75015]: pgmap v1168: 321 pgs: 321 active+clean; 111 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 247 op/s
Nov 25 08:24:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2152344658' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:24:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2152344658' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:24:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3515615473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.651 253542 DEBUG nova.storage.rbd_utils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] rbd image 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:29 compute-0 nova_compute[253538]: 2025-11-25 08:24:29.658 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3243680153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:30 compute-0 nova_compute[253538]: 2025-11-25 08:24:30.151 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:30 compute-0 nova_compute[253538]: 2025-11-25 08:24:30.153 253542 DEBUG nova.objects.instance [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lazy-loading 'pci_devices' on Instance uuid 651b0445-6a0f-41e0-ad78-6318f8175e0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:30 compute-0 nova_compute[253538]: 2025-11-25 08:24:30.167 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:24:30 compute-0 nova_compute[253538]:   <uuid>651b0445-6a0f-41e0-ad78-6318f8175e0b</uuid>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   <name>instance-0000000c</name>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerDiagnosticsNegativeTest-server-1478904873</nova:name>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:24:29</nova:creationTime>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:24:30 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:24:30 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:24:30 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:24:30 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:24:30 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:24:30 compute-0 nova_compute[253538]:         <nova:user uuid="5fe54f3384ea4571bee28e13c88e6d14">tempest-ServerDiagnosticsNegativeTest-966457076-project-member</nova:user>
Nov 25 08:24:30 compute-0 nova_compute[253538]:         <nova:project uuid="c86b7b0d5b344dfb82fc7554a622988e">tempest-ServerDiagnosticsNegativeTest-966457076</nova:project>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <system>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <entry name="serial">651b0445-6a0f-41e0-ad78-6318f8175e0b</entry>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <entry name="uuid">651b0445-6a0f-41e0-ad78-6318f8175e0b</entry>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     </system>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   <os>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   </os>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   <features>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   </features>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/651b0445-6a0f-41e0-ad78-6318f8175e0b_disk">
Nov 25 08:24:30 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:30 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/651b0445-6a0f-41e0-ad78-6318f8175e0b_disk.config">
Nov 25 08:24:30 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:30 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b/console.log" append="off"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <video>
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     </video>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:24:30 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:24:30 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:24:30 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:24:30 compute-0 nova_compute[253538]: </domain>
Nov 25 08:24:30 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:24:30 compute-0 nova_compute[253538]: 2025-11-25 08:24:30.283 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:30 compute-0 nova_compute[253538]: 2025-11-25 08:24:30.284 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:30 compute-0 nova_compute[253538]: 2025-11-25 08:24:30.285 253542 INFO nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Using config drive
Nov 25 08:24:30 compute-0 nova_compute[253538]: 2025-11-25 08:24:30.317 253542 DEBUG nova.storage.rbd_utils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] rbd image 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:30 compute-0 nova_compute[253538]: 2025-11-25 08:24:30.629 253542 INFO nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Creating config drive at /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b/disk.config
Nov 25 08:24:30 compute-0 nova_compute[253538]: 2025-11-25 08:24:30.633 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9prsvbg2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3243680153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:30 compute-0 nova_compute[253538]: 2025-11-25 08:24:30.775 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9prsvbg2" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:30 compute-0 nova_compute[253538]: 2025-11-25 08:24:30.813 253542 DEBUG nova.storage.rbd_utils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] rbd image 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:30 compute-0 nova_compute[253538]: 2025-11-25 08:24:30.816 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b/disk.config 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 116 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 227 op/s
Nov 25 08:24:30 compute-0 nova_compute[253538]: 2025-11-25 08:24:30.988 253542 DEBUG oslo_concurrency.processutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b/disk.config 651b0445-6a0f-41e0-ad78-6318f8175e0b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:30 compute-0 nova_compute[253538]: 2025-11-25 08:24:30.989 253542 INFO nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Deleting local config drive /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b/disk.config because it was imported into RBD.
Nov 25 08:24:31 compute-0 systemd-machined[215790]: New machine qemu-14-instance-0000000c.
Nov 25 08:24:31 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000c.
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.372 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.761 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "4659329f-611a-4436-aa9e-26937db9cd61" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.761 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "4659329f-611a-4436-aa9e-26937db9cd61" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.762 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "4659329f-611a-4436-aa9e-26937db9cd61-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.762 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "4659329f-611a-4436-aa9e-26937db9cd61-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.762 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "4659329f-611a-4436-aa9e-26937db9cd61-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.764 253542 INFO nova.compute.manager [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Terminating instance
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.764 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "refresh_cache-4659329f-611a-4436-aa9e-26937db9cd61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.765 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquired lock "refresh_cache-4659329f-611a-4436-aa9e-26937db9cd61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.765 253542 DEBUG nova.network.neutron [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:24:31 compute-0 ceph-mon[75015]: pgmap v1169: 321 pgs: 321 active+clean; 116 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 227 op/s
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.880 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059071.8806245, 651b0445-6a0f-41e0-ad78-6318f8175e0b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.881 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] VM Resumed (Lifecycle Event)
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.883 253542 DEBUG nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.883 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.886 253542 INFO nova.virt.libvirt.driver [-] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Instance spawned successfully.
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.887 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.903 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.908 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.912 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.913 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.913 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.913 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.914 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.914 253542 DEBUG nova.virt.libvirt.driver [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:31 compute-0 sshd-session[277229]: Invalid user hmsftp from 193.32.162.151 port 59344
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.928 253542 DEBUG nova.network.neutron [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.945 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.946 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059071.8825336, 651b0445-6a0f-41e0-ad78-6318f8175e0b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.946 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] VM Started (Lifecycle Event)
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.978 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:31 compute-0 nova_compute[253538]: 2025-11-25 08:24:31.982 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:32 compute-0 nova_compute[253538]: 2025-11-25 08:24:32.005 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:24:32 compute-0 nova_compute[253538]: 2025-11-25 08:24:32.010 253542 INFO nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Took 5.76 seconds to spawn the instance on the hypervisor.
Nov 25 08:24:32 compute-0 nova_compute[253538]: 2025-11-25 08:24:32.011 253542 DEBUG nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:32 compute-0 sshd-session[277229]: Connection closed by invalid user hmsftp 193.32.162.151 port 59344 [preauth]
Nov 25 08:24:32 compute-0 nova_compute[253538]: 2025-11-25 08:24:32.094 253542 INFO nova.compute.manager [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Took 7.39 seconds to build instance.
Nov 25 08:24:32 compute-0 nova_compute[253538]: 2025-11-25 08:24:32.141 253542 DEBUG oslo_concurrency.lockutils [None req-1548f2c5-d48a-4cdb-8ce2-174785409a0b 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "651b0445-6a0f-41e0-ad78-6318f8175e0b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:32 compute-0 nova_compute[253538]: 2025-11-25 08:24:32.281 253542 DEBUG nova.network.neutron [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:24:32 compute-0 nova_compute[253538]: 2025-11-25 08:24:32.295 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Releasing lock "refresh_cache-4659329f-611a-4436-aa9e-26937db9cd61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:24:32 compute-0 nova_compute[253538]: 2025-11-25 08:24:32.295 253542 DEBUG nova.compute.manager [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:24:32 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 25 08:24:32 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000009.scope: Consumed 3.691s CPU time.
Nov 25 08:24:32 compute-0 systemd-machined[215790]: Machine qemu-13-instance-00000009 terminated.
Nov 25 08:24:32 compute-0 nova_compute[253538]: 2025-11-25 08:24:32.517 253542 INFO nova.virt.libvirt.driver [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance destroyed successfully.
Nov 25 08:24:32 compute-0 nova_compute[253538]: 2025-11-25 08:24:32.517 253542 DEBUG nova.objects.instance [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lazy-loading 'resources' on Instance uuid 4659329f-611a-4436-aa9e-26937db9cd61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 25 08:24:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 25 08:24:32 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 25 08:24:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 130 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 268 op/s
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.200 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquiring lock "651b0445-6a0f-41e0-ad78-6318f8175e0b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.200 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "651b0445-6a0f-41e0-ad78-6318f8175e0b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.201 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquiring lock "651b0445-6a0f-41e0-ad78-6318f8175e0b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.201 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "651b0445-6a0f-41e0-ad78-6318f8175e0b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.202 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "651b0445-6a0f-41e0-ad78-6318f8175e0b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.204 253542 INFO nova.compute.manager [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Terminating instance
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.207 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquiring lock "refresh_cache-651b0445-6a0f-41e0-ad78-6318f8175e0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.207 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquired lock "refresh_cache-651b0445-6a0f-41e0-ad78-6318f8175e0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.207 253542 DEBUG nova.network.neutron [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:24:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.421 253542 DEBUG nova.network.neutron [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.771 253542 DEBUG nova.network.neutron [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.785 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Releasing lock "refresh_cache-651b0445-6a0f-41e0-ad78-6318f8175e0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.786 253542 DEBUG nova.compute.manager [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:24:33 compute-0 ceph-mon[75015]: osdmap e111: 3 total, 3 up, 3 in
Nov 25 08:24:33 compute-0 ceph-mon[75015]: pgmap v1171: 321 pgs: 321 active+clean; 130 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 268 op/s
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.810 253542 INFO nova.virt.libvirt.driver [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deleting instance files /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61_del
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.811 253542 INFO nova.virt.libvirt.driver [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deletion of /var/lib/nova/instances/4659329f-611a-4436-aa9e-26937db9cd61_del complete
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.858 253542 INFO nova.compute.manager [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Took 1.56 seconds to destroy the instance on the hypervisor.
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.859 253542 DEBUG oslo.service.loopingcall [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.859 253542 DEBUG nova.compute.manager [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:24:33 compute-0 nova_compute[253538]: 2025-11-25 08:24:33.859 253542 DEBUG nova.network.neutron [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:24:33 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 25 08:24:33 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000c.scope: Consumed 2.723s CPU time.
Nov 25 08:24:33 compute-0 systemd-machined[215790]: Machine qemu-14-instance-0000000c terminated.
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.007 253542 INFO nova.virt.libvirt.driver [-] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Instance destroyed successfully.
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.008 253542 DEBUG nova.objects.instance [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lazy-loading 'resources' on Instance uuid 651b0445-6a0f-41e0-ad78-6318f8175e0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.088 253542 DEBUG nova.network.neutron [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.099 253542 DEBUG nova.network.neutron [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.122 253542 INFO nova.compute.manager [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Took 0.26 seconds to deallocate network for instance.
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.168 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.169 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.240 253542 DEBUG oslo_concurrency.processutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:24:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4178786896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.743 253542 DEBUG oslo_concurrency.processutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.752 253542 DEBUG nova.compute.provider_tree [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.770 253542 DEBUG nova.scheduler.client.report [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.796 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.805 253542 INFO nova.virt.libvirt.driver [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Deleting instance files /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b_del
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.806 253542 INFO nova.virt.libvirt.driver [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Deletion of /var/lib/nova/instances/651b0445-6a0f-41e0-ad78-6318f8175e0b_del complete
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.827 253542 INFO nova.scheduler.client.report [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Deleted allocations for instance 4659329f-611a-4436-aa9e-26937db9cd61
Nov 25 08:24:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4178786896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:34 compute-0 podman[277336]: 2025-11-25 08:24:34.849148498 +0000 UTC m=+0.093885169 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.865 253542 INFO nova.compute.manager [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Took 1.08 seconds to destroy the instance on the hypervisor.
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.865 253542 DEBUG oslo.service.loopingcall [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.866 253542 DEBUG nova.compute.manager [-] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.866 253542 DEBUG nova.network.neutron [-] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.927 253542 DEBUG oslo_concurrency.lockutils [None req-ab887b35-ff51-48ad-bf6d-cad94f8fbf10 57ccb3076c9145fda72f75af7dd3acc0 e0f1bfa27d5e45138e846f38c1d92dfc - - default default] Lock "4659329f-611a-4436-aa9e-26937db9cd61" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 91 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 294 op/s
Nov 25 08:24:34 compute-0 nova_compute[253538]: 2025-11-25 08:24:34.996 253542 DEBUG nova.network.neutron [-] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:24:35 compute-0 nova_compute[253538]: 2025-11-25 08:24:35.008 253542 DEBUG nova.network.neutron [-] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:24:35 compute-0 nova_compute[253538]: 2025-11-25 08:24:35.025 253542 INFO nova.compute.manager [-] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Took 0.16 seconds to deallocate network for instance.
Nov 25 08:24:35 compute-0 nova_compute[253538]: 2025-11-25 08:24:35.064 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:35 compute-0 nova_compute[253538]: 2025-11-25 08:24:35.065 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:35 compute-0 nova_compute[253538]: 2025-11-25 08:24:35.106 253542 DEBUG oslo_concurrency.processutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:24:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980519771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:35 compute-0 nova_compute[253538]: 2025-11-25 08:24:35.551 253542 DEBUG oslo_concurrency.processutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:35 compute-0 nova_compute[253538]: 2025-11-25 08:24:35.561 253542 DEBUG nova.compute.provider_tree [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:24:35 compute-0 nova_compute[253538]: 2025-11-25 08:24:35.578 253542 DEBUG nova.scheduler.client.report [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:24:35 compute-0 nova_compute[253538]: 2025-11-25 08:24:35.603 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:35 compute-0 nova_compute[253538]: 2025-11-25 08:24:35.629 253542 INFO nova.scheduler.client.report [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Deleted allocations for instance 651b0445-6a0f-41e0-ad78-6318f8175e0b
Nov 25 08:24:35 compute-0 nova_compute[253538]: 2025-11-25 08:24:35.695 253542 DEBUG oslo_concurrency.lockutils [None req-6395b2e8-ef8f-46a3-91fc-7dc07c33bce5 5fe54f3384ea4571bee28e13c88e6d14 c86b7b0d5b344dfb82fc7554a622988e - - default default] Lock "651b0445-6a0f-41e0-ad78-6318f8175e0b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:35 compute-0 ceph-mon[75015]: pgmap v1172: 321 pgs: 321 active+clean; 91 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 294 op/s
Nov 25 08:24:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2980519771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:36 compute-0 nova_compute[253538]: 2025-11-25 08:24:36.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 57 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.7 MiB/s wr, 308 op/s
Nov 25 08:24:38 compute-0 ceph-mon[75015]: pgmap v1173: 321 pgs: 321 active+clean; 57 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.7 MiB/s wr, 308 op/s
Nov 25 08:24:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:24:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 1.7 MiB/s wr, 287 op/s
Nov 25 08:24:39 compute-0 nova_compute[253538]: 2025-11-25 08:24:39.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:39 compute-0 nova_compute[253538]: 2025-11-25 08:24:39.813 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059064.8116925, 1cb8bb78-4ff6-496c-858c-41159362ffb8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:39 compute-0 nova_compute[253538]: 2025-11-25 08:24:39.813 253542 INFO nova.compute.manager [-] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] VM Stopped (Lifecycle Event)
Nov 25 08:24:39 compute-0 nova_compute[253538]: 2025-11-25 08:24:39.830 253542 DEBUG nova.compute.manager [None req-28143c51-d59c-45ad-919b-665e2967ca20 - - - - - -] [instance: 1cb8bb78-4ff6-496c-858c-41159362ffb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:40 compute-0 ceph-mon[75015]: pgmap v1174: 321 pgs: 321 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 1.7 MiB/s wr, 287 op/s
Nov 25 08:24:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 1.1 MiB/s wr, 270 op/s
Nov 25 08:24:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:24:41.050 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:24:41.050 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:24:41.050 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:41 compute-0 nova_compute[253538]: 2025-11-25 08:24:41.379 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:42 compute-0 ceph-mon[75015]: pgmap v1175: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 1.1 MiB/s wr, 270 op/s
Nov 25 08:24:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 408 KiB/s wr, 214 op/s
Nov 25 08:24:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:24:44 compute-0 ceph-mon[75015]: pgmap v1176: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 408 KiB/s wr, 214 op/s
Nov 25 08:24:44 compute-0 nova_compute[253538]: 2025-11-25 08:24:44.543 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 345 KiB/s wr, 182 op/s
Nov 25 08:24:45 compute-0 nova_compute[253538]: 2025-11-25 08:24:45.716 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:45 compute-0 nova_compute[253538]: 2025-11-25 08:24:45.717 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:45 compute-0 nova_compute[253538]: 2025-11-25 08:24:45.737 253542 DEBUG nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:24:45 compute-0 nova_compute[253538]: 2025-11-25 08:24:45.816 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:45 compute-0 nova_compute[253538]: 2025-11-25 08:24:45.817 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:45 compute-0 nova_compute[253538]: 2025-11-25 08:24:45.825 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:24:45 compute-0 nova_compute[253538]: 2025-11-25 08:24:45.826 253542 INFO nova.compute.claims [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:24:45 compute-0 nova_compute[253538]: 2025-11-25 08:24:45.940 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:46 compute-0 ceph-mon[75015]: pgmap v1177: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 345 KiB/s wr, 182 op/s
Nov 25 08:24:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:24:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2991617410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.382 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.384 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.389 253542 DEBUG nova.compute.provider_tree [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.411 253542 DEBUG nova.scheduler.client.report [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.437 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.438 253542 DEBUG nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.477 253542 DEBUG nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.477 253542 DEBUG nova.network.neutron [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.493 253542 INFO nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.507 253542 DEBUG nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.584 253542 DEBUG nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.585 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.586 253542 INFO nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Creating image(s)
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.607 253542 DEBUG nova.storage.rbd_utils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.627 253542 DEBUG nova.storage.rbd_utils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.647 253542 DEBUG nova.storage.rbd_utils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.651 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.720 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.721 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.721 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.721 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.746 253542 DEBUG nova.storage.rbd_utils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.750 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.850 253542 DEBUG nova.network.neutron [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 08:24:46 compute-0 nova_compute[253538]: 2025-11-25 08:24:46.850 253542 DEBUG nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:24:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 745 KiB/s rd, 2.3 KiB/s wr, 60 op/s
Nov 25 08:24:47 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2991617410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.481 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.731s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.532 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059072.5143735, 4659329f-611a-4436-aa9e-26937db9cd61 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.533 253542 INFO nova.compute.manager [-] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] VM Stopped (Lifecycle Event)
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.539 253542 DEBUG nova.storage.rbd_utils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] resizing rbd image 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.623 253542 DEBUG nova.compute.manager [None req-6c710a7f-d17d-4277-a446-38db8056cf2d - - - - - -] [instance: 4659329f-611a-4436-aa9e-26937db9cd61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.814 253542 DEBUG nova.objects.instance [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e6c81c4-a422-42e4-950b-66fa6411c1eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.825 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.825 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Ensure instance console log exists: /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.825 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.826 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.826 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.827 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.830 253542 WARNING nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.835 253542 DEBUG nova.virt.libvirt.host [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.835 253542 DEBUG nova.virt.libvirt.host [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.841 253542 DEBUG nova.virt.libvirt.host [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.841 253542 DEBUG nova.virt.libvirt.host [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.841 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.842 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.842 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.842 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.842 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.843 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.843 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.843 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.843 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.843 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.844 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.844 253542 DEBUG nova.virt.hardware [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:24:47 compute-0 nova_compute[253538]: 2025-11-25 08:24:47.846 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1466949336' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:48 compute-0 nova_compute[253538]: 2025-11-25 08:24:48.275 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:48 compute-0 nova_compute[253538]: 2025-11-25 08:24:48.294 253542 DEBUG nova.storage.rbd_utils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:48 compute-0 nova_compute[253538]: 2025-11-25 08:24:48.297 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:24:48 compute-0 ceph-mon[75015]: pgmap v1178: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 745 KiB/s rd, 2.3 KiB/s wr, 60 op/s
Nov 25 08:24:48 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1466949336' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3709436398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:48 compute-0 nova_compute[253538]: 2025-11-25 08:24:48.722 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:48 compute-0 nova_compute[253538]: 2025-11-25 08:24:48.725 253542 DEBUG nova.objects.instance [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e6c81c4-a422-42e4-950b-66fa6411c1eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:48 compute-0 nova_compute[253538]: 2025-11-25 08:24:48.745 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:24:48 compute-0 nova_compute[253538]:   <uuid>8e6c81c4-a422-42e4-950b-66fa6411c1eb</uuid>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   <name>instance-0000000d</name>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-1133317659</nova:name>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:24:47</nova:creationTime>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:24:48 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:24:48 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:24:48 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:24:48 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:24:48 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:24:48 compute-0 nova_compute[253538]:         <nova:user uuid="345cc8bd54dd46c9aaf034a44f55f52e">tempest-ServersAdminNegativeTestJSON-455379081-project-member</nova:user>
Nov 25 08:24:48 compute-0 nova_compute[253538]:         <nova:project uuid="fbc71ebb18c64a72bd6d93bc520d8921">tempest-ServersAdminNegativeTestJSON-455379081</nova:project>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <system>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <entry name="serial">8e6c81c4-a422-42e4-950b-66fa6411c1eb</entry>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <entry name="uuid">8e6c81c4-a422-42e4-950b-66fa6411c1eb</entry>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     </system>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   <os>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   </os>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   <features>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   </features>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk">
Nov 25 08:24:48 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:48 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk.config">
Nov 25 08:24:48 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:48 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb/console.log" append="off"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <video>
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     </video>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:24:48 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:24:48 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:24:48 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:24:48 compute-0 nova_compute[253538]: </domain>
Nov 25 08:24:48 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:24:48 compute-0 nova_compute[253538]: 2025-11-25 08:24:48.797 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:48 compute-0 nova_compute[253538]: 2025-11-25 08:24:48.797 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:48 compute-0 nova_compute[253538]: 2025-11-25 08:24:48.798 253542 INFO nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Using config drive
Nov 25 08:24:48 compute-0 nova_compute[253538]: 2025-11-25 08:24:48.816 253542 DEBUG nova.storage.rbd_utils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 46 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 344 KiB/s wr, 13 op/s
Nov 25 08:24:49 compute-0 nova_compute[253538]: 2025-11-25 08:24:49.006 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059074.0059495, 651b0445-6a0f-41e0-ad78-6318f8175e0b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:49 compute-0 nova_compute[253538]: 2025-11-25 08:24:49.007 253542 INFO nova.compute.manager [-] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] VM Stopped (Lifecycle Event)
Nov 25 08:24:49 compute-0 nova_compute[253538]: 2025-11-25 08:24:49.025 253542 DEBUG nova.compute.manager [None req-881a29ee-6414-46ec-937f-71063b4ee328 - - - - - -] [instance: 651b0445-6a0f-41e0-ad78-6318f8175e0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:49 compute-0 nova_compute[253538]: 2025-11-25 08:24:49.227 253542 INFO nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Creating config drive at /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb/disk.config
Nov 25 08:24:49 compute-0 nova_compute[253538]: 2025-11-25 08:24:49.233 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0w7wb0ti execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3709436398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:49 compute-0 nova_compute[253538]: 2025-11-25 08:24:49.361 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0w7wb0ti" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:49 compute-0 nova_compute[253538]: 2025-11-25 08:24:49.389 253542 DEBUG nova.storage.rbd_utils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:49 compute-0 nova_compute[253538]: 2025-11-25 08:24:49.393 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb/disk.config 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:49 compute-0 nova_compute[253538]: 2025-11-25 08:24:49.545 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:49 compute-0 nova_compute[253538]: 2025-11-25 08:24:49.604 253542 DEBUG oslo_concurrency.processutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb/disk.config 8e6c81c4-a422-42e4-950b-66fa6411c1eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:49 compute-0 nova_compute[253538]: 2025-11-25 08:24:49.605 253542 INFO nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Deleting local config drive /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb/disk.config because it was imported into RBD.
Nov 25 08:24:49 compute-0 systemd-machined[215790]: New machine qemu-15-instance-0000000d.
Nov 25 08:24:49 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000d.
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.148 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059090.1484919, 8e6c81c4-a422-42e4-950b-66fa6411c1eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.149 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] VM Resumed (Lifecycle Event)
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.151 253542 DEBUG nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.151 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.155 253542 INFO nova.virt.libvirt.driver [-] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Instance spawned successfully.
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.155 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.173 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.178 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.178 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.179 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.180 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.180 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.181 253542 DEBUG nova.virt.libvirt.driver [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.185 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.220 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.220 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059090.1490479, 8e6c81c4-a422-42e4-950b-66fa6411c1eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.221 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] VM Started (Lifecycle Event)
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.251 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.255 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.263 253542 INFO nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Took 3.68 seconds to spawn the instance on the hypervisor.
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.263 253542 DEBUG nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 25 08:24:50 compute-0 ceph-mon[75015]: pgmap v1179: 321 pgs: 321 active+clean; 46 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 344 KiB/s wr, 13 op/s
Nov 25 08:24:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 25 08:24:50 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.413 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.451 253542 INFO nova.compute.manager [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Took 4.66 seconds to build instance.
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.471 253542 DEBUG oslo_concurrency.lockutils [None req-bf34376d-b6e7-4feb-b2df-5913e90367e0 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.910 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquiring lock "7e576418-7454-49eb-9918-2d7f04547bd8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.911 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "7e576418-7454-49eb-9918-2d7f04547bd8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:50 compute-0 nova_compute[253538]: 2025-11-25 08:24:50.938 253542 DEBUG nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:24:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 63 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 849 KiB/s wr, 6 op/s
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.014 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.015 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.020 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.021 253542 INFO nova.compute.claims [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.153 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.416 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:51 compute-0 ceph-mon[75015]: osdmap e112: 3 total, 3 up, 3 in
Nov 25 08:24:51 compute-0 ceph-mon[75015]: pgmap v1181: 321 pgs: 321 active+clean; 63 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 849 KiB/s wr, 6 op/s
Nov 25 08:24:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:24:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3044485481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.659 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.666 253542 DEBUG nova.compute.provider_tree [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.686 253542 DEBUG nova.scheduler.client.report [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.715 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.716 253542 DEBUG nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.776 253542 DEBUG nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.777 253542 DEBUG nova.network.neutron [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.836 253542 INFO nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.853 253542 DEBUG nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.943 253542 DEBUG nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.947 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.948 253542 INFO nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Creating image(s)
Nov 25 08:24:51 compute-0 nova_compute[253538]: 2025-11-25 08:24:51.982 253542 DEBUG nova.storage.rbd_utils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] rbd image 7e576418-7454-49eb-9918-2d7f04547bd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.013 253542 DEBUG nova.storage.rbd_utils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] rbd image 7e576418-7454-49eb-9918-2d7f04547bd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.044 253542 DEBUG nova.storage.rbd_utils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] rbd image 7e576418-7454-49eb-9918-2d7f04547bd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.049 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.143 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.144 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.146 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.146 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.176 253542 DEBUG nova.storage.rbd_utils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] rbd image 7e576418-7454-49eb-9918-2d7f04547bd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.180 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7e576418-7454-49eb-9918-2d7f04547bd8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:52 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3044485481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.580 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7e576418-7454-49eb-9918-2d7f04547bd8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.647 253542 DEBUG nova.storage.rbd_utils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] resizing rbd image 7e576418-7454-49eb-9918-2d7f04547bd8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.752 253542 DEBUG nova.objects.instance [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lazy-loading 'migration_context' on Instance uuid 7e576418-7454-49eb-9918-2d7f04547bd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.775 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.776 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Ensure instance console log exists: /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.777 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.777 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.778 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.881 253542 DEBUG nova.network.neutron [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.883 253542 DEBUG nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.887 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.892 253542 WARNING nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.898 253542 DEBUG nova.virt.libvirt.host [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.899 253542 DEBUG nova.virt.libvirt.host [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.903 253542 DEBUG nova.virt.libvirt.host [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.904 253542 DEBUG nova.virt.libvirt.host [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.905 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.906 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.907 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.908 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.909 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.909 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.910 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.911 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.912 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.912 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.913 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.914 253542 DEBUG nova.virt.hardware [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:24:52 compute-0 nova_compute[253538]: 2025-11-25 08:24:52.920 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 88 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 104 op/s
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:24:53
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['images', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:24:53 compute-0 nova_compute[253538]: 2025-11-25 08:24:53.301 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "d3764acc-bce0-452e-bba5-90d76d88df2e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:53 compute-0 nova_compute[253538]: 2025-11-25 08:24:53.304 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "d3764acc-bce0-452e-bba5-90d76d88df2e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:53 compute-0 nova_compute[253538]: 2025-11-25 08:24:53.323 253542 DEBUG nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:24:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:24:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:53 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/753797572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:53 compute-0 nova_compute[253538]: 2025-11-25 08:24:53.403 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:24:53 compute-0 nova_compute[253538]: 2025-11-25 08:24:53.429 253542 DEBUG nova.storage.rbd_utils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] rbd image 7e576418-7454-49eb-9918-2d7f04547bd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:53 compute-0 nova_compute[253538]: 2025-11-25 08:24:53.435 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:53 compute-0 nova_compute[253538]: 2025-11-25 08:24:53.462 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:53 compute-0 nova_compute[253538]: 2025-11-25 08:24:53.463 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:53 compute-0 nova_compute[253538]: 2025-11-25 08:24:53.472 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:24:53 compute-0 nova_compute[253538]: 2025-11-25 08:24:53.472 253542 INFO nova.compute.claims [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:24:53 compute-0 ceph-mon[75015]: pgmap v1182: 321 pgs: 321 active+clean; 88 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 104 op/s
Nov 25 08:24:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/753797572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:53 compute-0 nova_compute[253538]: 2025-11-25 08:24:53.621 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:24:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:24:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:53 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/732634573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:53 compute-0 nova_compute[253538]: 2025-11-25 08:24:53.913 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:53 compute-0 nova_compute[253538]: 2025-11-25 08:24:53.917 253542 DEBUG nova.objects.instance [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e576418-7454-49eb-9918-2d7f04547bd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:53 compute-0 nova_compute[253538]: 2025-11-25 08:24:53.948 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:24:53 compute-0 nova_compute[253538]:   <uuid>7e576418-7454-49eb-9918-2d7f04547bd8</uuid>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   <name>instance-0000000e</name>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <nova:name>tempest-TenantUsagesTestJSON-server-625857128</nova:name>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:24:52</nova:creationTime>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:24:53 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:24:53 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:24:53 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:24:53 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:24:53 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:24:53 compute-0 nova_compute[253538]:         <nova:user uuid="33ac0c97ce984130b9394683c963cda1">tempest-TenantUsagesTestJSON-1684672578-project-member</nova:user>
Nov 25 08:24:53 compute-0 nova_compute[253538]:         <nova:project uuid="e8f1e6855a5b45a8b1c74763485618f4">tempest-TenantUsagesTestJSON-1684672578</nova:project>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <system>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <entry name="serial">7e576418-7454-49eb-9918-2d7f04547bd8</entry>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <entry name="uuid">7e576418-7454-49eb-9918-2d7f04547bd8</entry>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     </system>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   <os>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   </os>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   <features>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   </features>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7e576418-7454-49eb-9918-2d7f04547bd8_disk">
Nov 25 08:24:53 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:53 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7e576418-7454-49eb-9918-2d7f04547bd8_disk.config">
Nov 25 08:24:53 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:53 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8/console.log" append="off"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <video>
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     </video>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:24:53 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:24:53 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:24:53 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:24:53 compute-0 nova_compute[253538]: </domain>
Nov 25 08:24:53 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.037 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.038 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.040 253542 INFO nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Using config drive
Nov 25 08:24:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:24:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4086237870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.079 253542 DEBUG nova.storage.rbd_utils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] rbd image 7e576418-7454-49eb-9918-2d7f04547bd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.090 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.099 253542 DEBUG nova.compute.provider_tree [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.116 253542 DEBUG nova.scheduler.client.report [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.146 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.147 253542 DEBUG nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.211 253542 DEBUG nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.212 253542 DEBUG nova.network.neutron [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.236 253542 INFO nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.268 253542 DEBUG nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.368 253542 INFO nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Creating config drive at /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8/disk.config
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.375 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp054ozkg_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.408 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.409 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.413 253542 DEBUG nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.415 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.415 253542 INFO nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Creating image(s)
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.447 253542 DEBUG nova.storage.rbd_utils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image d3764acc-bce0-452e-bba5-90d76d88df2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.473 253542 DEBUG nova.storage.rbd_utils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image d3764acc-bce0-452e-bba5-90d76d88df2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:54 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/732634573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:54 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4086237870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.505 253542 DEBUG nova.storage.rbd_utils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image d3764acc-bce0-452e-bba5-90d76d88df2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.519 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.552 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.554 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp054ozkg_" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.591 253542 DEBUG nova.storage.rbd_utils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] rbd image 7e576418-7454-49eb-9918-2d7f04547bd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.595 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8/disk.config 7e576418-7454-49eb-9918-2d7f04547bd8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.616 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.620 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.621 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.621 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.622 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.641 253542 DEBUG nova.storage.rbd_utils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image d3764acc-bce0-452e-bba5-90d76d88df2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.644 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc d3764acc-bce0-452e-bba5-90d76d88df2e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.695 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.695 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.702 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.703 253542 INFO nova.compute.claims [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.714 253542 DEBUG oslo_concurrency.processutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8/disk.config 7e576418-7454-49eb-9918-2d7f04547bd8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.715 253542 INFO nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Deleting local config drive /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8/disk.config because it was imported into RBD.
Nov 25 08:24:54 compute-0 systemd-machined[215790]: New machine qemu-16-instance-0000000e.
Nov 25 08:24:54 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000e.
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.849 253542 DEBUG nova.network.neutron [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.850 253542 DEBUG nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.886 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.916 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc d3764acc-bce0-452e-bba5-90d76d88df2e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:54 compute-0 nova_compute[253538]: 2025-11-25 08:24:54.982 253542 DEBUG nova.storage.rbd_utils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] resizing rbd image d3764acc-bce0-452e-bba5-90d76d88df2e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:24:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 110 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.1 MiB/s wr, 176 op/s
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.096 253542 DEBUG nova.objects.instance [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lazy-loading 'migration_context' on Instance uuid d3764acc-bce0-452e-bba5-90d76d88df2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.110 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.110 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Ensure instance console log exists: /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.111 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.111 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.111 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.113 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.119 253542 WARNING nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.123 253542 DEBUG nova.virt.libvirt.host [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.124 253542 DEBUG nova.virt.libvirt.host [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.127 253542 DEBUG nova.virt.libvirt.host [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.127 253542 DEBUG nova.virt.libvirt.host [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.128 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.128 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.128 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.129 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.129 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.129 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.129 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.129 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.130 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.130 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.130 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.130 253542 DEBUG nova.virt.hardware [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.134 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:24:55 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1898110085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.410 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.417 253542 DEBUG nova.compute.provider_tree [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.430 253542 DEBUG nova.scheduler.client.report [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.450 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.451 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.497 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.506 253542 DEBUG nova.network.neutron [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.521 253542 INFO nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:24:55 compute-0 ceph-mon[75015]: pgmap v1183: 321 pgs: 321 active+clean; 110 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.1 MiB/s wr, 176 op/s
Nov 25 08:24:55 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1898110085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.538 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.557 253542 DEBUG nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.558 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.558 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059095.557283, 7e576418-7454-49eb-9918-2d7f04547bd8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.558 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] VM Resumed (Lifecycle Event)
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.562 253542 INFO nova.virt.libvirt.driver [-] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Instance spawned successfully.
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.562 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.594 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.603 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.606 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.606 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.607 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.607 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.607 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.608 253542 DEBUG nova.virt.libvirt.driver [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:55 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3090641423' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.657 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.657 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059095.5584865, 7e576418-7454-49eb-9918-2d7f04547bd8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.657 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] VM Started (Lifecycle Event)
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.674 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.695 253542 DEBUG nova.storage.rbd_utils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image d3764acc-bce0-452e-bba5-90d76d88df2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.699 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.720 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.722 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.722 253542 INFO nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating image(s)
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.741 253542 DEBUG nova.storage.rbd_utils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.760 253542 DEBUG nova.storage.rbd_utils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.779 253542 DEBUG nova.storage.rbd_utils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.784 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.809 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.811 253542 INFO nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Took 3.86 seconds to spawn the instance on the hypervisor.
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.811 253542 DEBUG nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.815 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.860 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.863 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.863 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.864 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.864 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.894 253542 DEBUG nova.storage.rbd_utils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.898 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.930 253542 INFO nova.compute.manager [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Took 4.94 seconds to build instance.
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.938 253542 DEBUG nova.policy [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '76d3377d398a4214a77bc0eb91638ec5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '65a2f983cce14453b2dc9251a520f289', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:24:55 compute-0 nova_compute[253538]: 2025-11-25 08:24:55.950 253542 DEBUG oslo_concurrency.lockutils [None req-2460f9f1-7e69-4aa3-b7b8-6ffd2bd05f80 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "7e576418-7454-49eb-9918-2d7f04547bd8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:56 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569556256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.191 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.193 253542 DEBUG nova.objects.instance [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lazy-loading 'pci_devices' on Instance uuid d3764acc-bce0-452e-bba5-90d76d88df2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.206 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:24:56 compute-0 nova_compute[253538]:   <uuid>d3764acc-bce0-452e-bba5-90d76d88df2e</uuid>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   <name>instance-0000000f</name>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-922281324</nova:name>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:24:55</nova:creationTime>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:24:56 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:24:56 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:24:56 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:24:56 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:24:56 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:24:56 compute-0 nova_compute[253538]:         <nova:user uuid="345cc8bd54dd46c9aaf034a44f55f52e">tempest-ServersAdminNegativeTestJSON-455379081-project-member</nova:user>
Nov 25 08:24:56 compute-0 nova_compute[253538]:         <nova:project uuid="fbc71ebb18c64a72bd6d93bc520d8921">tempest-ServersAdminNegativeTestJSON-455379081</nova:project>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <system>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <entry name="serial">d3764acc-bce0-452e-bba5-90d76d88df2e</entry>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <entry name="uuid">d3764acc-bce0-452e-bba5-90d76d88df2e</entry>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     </system>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   <os>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   </os>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   <features>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   </features>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/d3764acc-bce0-452e-bba5-90d76d88df2e_disk">
Nov 25 08:24:56 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:56 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/d3764acc-bce0-452e-bba5-90d76d88df2e_disk.config">
Nov 25 08:24:56 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:56 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e/console.log" append="off"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <video>
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     </video>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:24:56 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:24:56 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:24:56 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:24:56 compute-0 nova_compute[253538]: </domain>
Nov 25 08:24:56 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.211 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.296 253542 DEBUG nova.storage.rbd_utils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] resizing rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:24:56 compute-0 podman[278481]: 2025-11-25 08:24:56.311514842 +0000 UTC m=+0.069460364 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.329 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.329 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.329 253542 INFO nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Using config drive
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.350 253542 DEBUG nova.storage.rbd_utils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image d3764acc-bce0-452e-bba5-90d76d88df2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.399 253542 DEBUG nova.objects.instance [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'migration_context' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.413 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.413 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Ensure instance console log exists: /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.414 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.414 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.414 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.418 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:56 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3090641423' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:56 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1569556256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.623 253542 DEBUG nova.network.neutron [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Successfully created port: 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.650 253542 INFO nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Creating config drive at /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e/disk.config
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.659 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg_oc8sw0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.736 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "23ace5af-6840-42aa-a801-98abbb4f3a52" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.737 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.750 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.793 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg_oc8sw0" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.817 253542 DEBUG nova.storage.rbd_utils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] rbd image d3764acc-bce0-452e-bba5-90d76d88df2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.823 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e/disk.config d3764acc-bce0-452e-bba5-90d76d88df2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.872 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.872 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.880 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.881 253542 INFO nova.compute.claims [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.970 253542 DEBUG oslo_concurrency.processutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e/disk.config d3764acc-bce0-452e-bba5-90d76d88df2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:56 compute-0 nova_compute[253538]: 2025-11-25 08:24:56.971 253542 INFO nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Deleting local config drive /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e/disk.config because it was imported into RBD.
Nov 25 08:24:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 162 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.3 MiB/s wr, 216 op/s
Nov 25 08:24:57 compute-0 systemd-machined[215790]: New machine qemu-17-instance-0000000f.
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.034 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquiring lock "7e576418-7454-49eb-9918-2d7f04547bd8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.035 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "7e576418-7454-49eb-9918-2d7f04547bd8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.035 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquiring lock "7e576418-7454-49eb-9918-2d7f04547bd8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.035 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "7e576418-7454-49eb-9918-2d7f04547bd8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.035 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "7e576418-7454-49eb-9918-2d7f04547bd8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.036 253542 INFO nova.compute.manager [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Terminating instance
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.037 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquiring lock "refresh_cache-7e576418-7454-49eb-9918-2d7f04547bd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.037 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquired lock "refresh_cache-7e576418-7454-49eb-9918-2d7f04547bd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.037 253542 DEBUG nova.network.neutron [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:24:57 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-0000000f.
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.054 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.170 253542 DEBUG nova.network.neutron [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.393 253542 DEBUG nova.network.neutron [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.408 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Releasing lock "refresh_cache-7e576418-7454-49eb-9918-2d7f04547bd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.409 253542 DEBUG nova.compute.manager [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.425 253542 DEBUG nova.network.neutron [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Successfully updated port: 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.439 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.439 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquired lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.439 253542 DEBUG nova.network.neutron [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:24:57 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 25 08:24:57 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000e.scope: Consumed 2.525s CPU time.
Nov 25 08:24:57 compute-0 systemd-machined[215790]: Machine qemu-16-instance-0000000e terminated.
Nov 25 08:24:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:24:57 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315663466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.500 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.505 253542 DEBUG nova.compute.manager [req-39390d9a-11d1-4f93-92f7-b3896b27d9be req-4311a001-cb66-4f00-964d-cd5caed782f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-changed-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.505 253542 DEBUG nova.compute.manager [req-39390d9a-11d1-4f93-92f7-b3896b27d9be req-4311a001-cb66-4f00-964d-cd5caed782f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Refreshing instance network info cache due to event network-changed-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.505 253542 DEBUG oslo_concurrency.lockutils [req-39390d9a-11d1-4f93-92f7-b3896b27d9be req-4311a001-cb66-4f00-964d-cd5caed782f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.507 253542 DEBUG nova.compute.provider_tree [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.523 253542 DEBUG nova.scheduler.client.report [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:24:57 compute-0 ceph-mon[75015]: pgmap v1184: 321 pgs: 321 active+clean; 162 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.3 MiB/s wr, 216 op/s
Nov 25 08:24:57 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/315663466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.546 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.546 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.587 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.588 253542 DEBUG nova.network.neutron [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.602 253542 INFO nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.618 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.630 253542 INFO nova.virt.libvirt.driver [-] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Instance destroyed successfully.
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.630 253542 DEBUG nova.objects.instance [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lazy-loading 'resources' on Instance uuid 7e576418-7454-49eb-9918-2d7f04547bd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.688 253542 DEBUG nova.network.neutron [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.703 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.704 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.705 253542 INFO nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Creating image(s)
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.721 253542 DEBUG nova.storage.rbd_utils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 23ace5af-6840-42aa-a801-98abbb4f3a52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.740 253542 DEBUG nova.storage.rbd_utils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 23ace5af-6840-42aa-a801-98abbb4f3a52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.759 253542 DEBUG nova.storage.rbd_utils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 23ace5af-6840-42aa-a801-98abbb4f3a52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.764 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.792 253542 DEBUG nova.policy [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '76d3377d398a4214a77bc0eb91638ec5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '65a2f983cce14453b2dc9251a520f289', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.838 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.839 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.839 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.840 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.858 253542 DEBUG nova.storage.rbd_utils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 23ace5af-6840-42aa-a801-98abbb4f3a52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.862 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 23ace5af-6840-42aa-a801-98abbb4f3a52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.985 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059097.981997, d3764acc-bce0-452e-bba5-90d76d88df2e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.986 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] VM Resumed (Lifecycle Event)
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.995 253542 DEBUG nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:24:57 compute-0 nova_compute[253538]: 2025-11-25 08:24:57.997 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.002 253542 INFO nova.virt.libvirt.driver [-] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Instance spawned successfully.
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.002 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.004 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.009 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.024 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.025 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.025 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.026 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.026 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.026 253542 DEBUG nova.virt.libvirt.driver [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.032 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.032 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059097.9842272, d3764acc-bce0-452e-bba5-90d76d88df2e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.033 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] VM Started (Lifecycle Event)
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.072 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.083 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.088 253542 INFO nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Took 3.67 seconds to spawn the instance on the hypervisor.
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.088 253542 DEBUG nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.117 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.149 253542 INFO nova.compute.manager [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Took 4.77 seconds to build instance.
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.162 253542 DEBUG oslo_concurrency.lockutils [None req-bcd3d6b9-5252-4c91-8945-50734fbda5f2 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "d3764acc-bce0-452e-bba5-90d76d88df2e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.223 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 23ace5af-6840-42aa-a801-98abbb4f3a52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.269 253542 INFO nova.virt.libvirt.driver [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Deleting instance files /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8_del
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.270 253542 INFO nova.virt.libvirt.driver [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Deletion of /var/lib/nova/instances/7e576418-7454-49eb-9918-2d7f04547bd8_del complete
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.310 253542 DEBUG nova.network.neutron [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Successfully created port: e9d1298d-411a-4018-ba08-c41d40ba0d41 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.315 253542 DEBUG nova.network.neutron [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Updating instance_info_cache with network_info: [{"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.321 253542 DEBUG nova.storage.rbd_utils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] resizing rbd image 23ace5af-6840-42aa-a801-98abbb4f3a52_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:24:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:24:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 25 08:24:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 25 08:24:58 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.357 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Releasing lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.358 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance network_info: |[{"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.359 253542 DEBUG oslo_concurrency.lockutils [req-39390d9a-11d1-4f93-92f7-b3896b27d9be req-4311a001-cb66-4f00-964d-cd5caed782f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.359 253542 DEBUG nova.network.neutron [req-39390d9a-11d1-4f93-92f7-b3896b27d9be req-4311a001-cb66-4f00-964d-cd5caed782f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Refreshing network info cache for port 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.363 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Start _get_guest_xml network_info=[{"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.368 253542 INFO nova.compute.manager [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Took 0.96 seconds to destroy the instance on the hypervisor.
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.368 253542 DEBUG oslo.service.loopingcall [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.368 253542 DEBUG nova.compute.manager [-] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.369 253542 DEBUG nova.network.neutron [-] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.381 253542 WARNING nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.428 253542 DEBUG nova.objects.instance [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'migration_context' on Instance uuid 23ace5af-6840-42aa-a801-98abbb4f3a52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.431 253542 DEBUG nova.virt.libvirt.host [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.431 253542 DEBUG nova.virt.libvirt.host [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.437 253542 DEBUG nova.virt.libvirt.host [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.437 253542 DEBUG nova.virt.libvirt.host [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.437 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.437 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.438 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.438 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.438 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.438 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.438 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.438 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.439 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.439 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.439 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.439 253542 DEBUG nova.virt.hardware [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.441 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.458 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.459 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Ensure instance console log exists: /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.460 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.460 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.461 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.654 253542 DEBUG nova.network.neutron [-] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.669 253542 DEBUG nova.network.neutron [-] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.689 253542 INFO nova.compute.manager [-] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Took 0.32 seconds to deallocate network for instance.
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.731 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.732 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:58 compute-0 podman[278915]: 2025-11-25 08:24:58.819532286 +0000 UTC m=+0.068421405 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:24:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:58 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2853933548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.881 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.910 253542 DEBUG nova.storage.rbd_utils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.915 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:58 compute-0 nova_compute[253538]: 2025-11-25 08:24:58.935 253542 DEBUG oslo_concurrency.processutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:24:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 184 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.7 MiB/s wr, 323 op/s
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.225 253542 DEBUG nova.network.neutron [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Successfully updated port: e9d1298d-411a-4018-ba08-c41d40ba0d41 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.243 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.243 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquired lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.244 253542 DEBUG nova.network.neutron [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:24:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:24:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/317007074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.329 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.331 253542 DEBUG nova.virt.libvirt.vif [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664
825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:24:55Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.331 253542 DEBUG nova.network.os_vif_util [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.333 253542 DEBUG nova.network.os_vif_util [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.334 253542 DEBUG nova.objects.instance [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.341 253542 DEBUG nova.objects.instance [None req-dc4d201f-60cf-4a6c-bd3e-1fd40f0ead3f 645c20ba7f4c4bb7a8facc194a00857c f1a1c40fc7b44a8fb2422030f90fbc4c - - default default] Lazy-loading 'pci_devices' on Instance uuid d3764acc-bce0-452e-bba5-90d76d88df2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:24:59 compute-0 ceph-mon[75015]: osdmap e113: 3 total, 3 up, 3 in
Nov 25 08:24:59 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2853933548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:59 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/317007074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.362 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:24:59 compute-0 nova_compute[253538]:   <uuid>86bfa56f-56d0-4a5e-b0b2-302c375e37a3</uuid>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   <name>instance-00000010</name>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersAdminTestJSON-server-1649971692</nova:name>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:24:58</nova:creationTime>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:24:59 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:24:59 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:24:59 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:24:59 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:24:59 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:24:59 compute-0 nova_compute[253538]:         <nova:user uuid="76d3377d398a4214a77bc0eb91638ec5">tempest-ServersAdminTestJSON-857664825-project-member</nova:user>
Nov 25 08:24:59 compute-0 nova_compute[253538]:         <nova:project uuid="65a2f983cce14453b2dc9251a520f289">tempest-ServersAdminTestJSON-857664825</nova:project>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:24:59 compute-0 nova_compute[253538]:         <nova:port uuid="4ad9572b-6ac1-4659-8ea6-71b8a32c06fe">
Nov 25 08:24:59 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <system>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <entry name="serial">86bfa56f-56d0-4a5e-b0b2-302c375e37a3</entry>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <entry name="uuid">86bfa56f-56d0-4a5e-b0b2-302c375e37a3</entry>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     </system>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   <os>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   </os>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   <features>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   </features>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk">
Nov 25 08:24:59 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:59 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config">
Nov 25 08:24:59 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       </source>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:24:59 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:5e:0e:e0"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <target dev="tap4ad9572b-6a"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/console.log" append="off"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <video>
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     </video>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:24:59 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:24:59 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:24:59 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:24:59 compute-0 nova_compute[253538]: </domain>
Nov 25 08:24:59 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.362 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Preparing to wait for external event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.363 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.363 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.363 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.364 253542 DEBUG nova.virt.libvirt.vif [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:24:55Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.364 253542 DEBUG nova.network.os_vif_util [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.365 253542 DEBUG nova.network.os_vif_util [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.365 253542 DEBUG os_vif [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.367 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.367 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.368 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.372 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.372 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ad9572b-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.373 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4ad9572b-6a, col_values=(('external_ids', {'iface-id': '4ad9572b-6ac1-4659-8ea6-71b8a32c06fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:0e:e0', 'vm-uuid': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:24:59 compute-0 NetworkManager[48915]: <info>  [1764059099.3756] manager: (tap4ad9572b-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.375 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.381 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.383 253542 INFO os_vif [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a')
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.396 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059099.396449, d3764acc-bce0-452e-bba5-90d76d88df2e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.397 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] VM Paused (Lifecycle Event)
Nov 25 08:24:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:24:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1754520027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.414 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.422 253542 DEBUG nova.network.neutron [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.430 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.433 253542 DEBUG oslo_concurrency.processutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.443 253542 DEBUG nova.compute.provider_tree [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.453 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.458 253542 DEBUG nova.scheduler.client.report [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.464 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.464 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.464 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No VIF found with MAC fa:16:3e:5e:0e:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.465 253542 INFO nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Using config drive
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.485 253542 DEBUG nova.storage.rbd_utils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.503 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.538 253542 INFO nova.scheduler.client.report [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Deleted allocations for instance 7e576418-7454-49eb-9918-2d7f04547bd8
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.548 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.581 253542 DEBUG nova.compute.manager [req-8a32d8cc-b206-4136-94a9-584fea6fcd47 req-6b365ca6-cacd-43fa-9ad8-774b4ed63e33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Received event network-changed-e9d1298d-411a-4018-ba08-c41d40ba0d41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.582 253542 DEBUG nova.compute.manager [req-8a32d8cc-b206-4136-94a9-584fea6fcd47 req-6b365ca6-cacd-43fa-9ad8-774b4ed63e33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Refreshing instance network info cache due to event network-changed-e9d1298d-411a-4018-ba08-c41d40ba0d41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.582 253542 DEBUG oslo_concurrency.lockutils [req-8a32d8cc-b206-4136-94a9-584fea6fcd47 req-6b365ca6-cacd-43fa-9ad8-774b4ed63e33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:24:59 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 25 08:24:59 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Consumed 2.400s CPU time.
Nov 25 08:24:59 compute-0 systemd-machined[215790]: Machine qemu-17-instance-0000000f terminated.
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.598 253542 DEBUG oslo_concurrency.lockutils [None req-46639f30-841e-4511-9447-6e35fb117677 33ac0c97ce984130b9394683c963cda1 e8f1e6855a5b45a8b1c74763485618f4 - - default default] Lock "7e576418-7454-49eb-9918-2d7f04547bd8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.752 253542 DEBUG nova.compute.manager [None req-dc4d201f-60cf-4a6c-bd3e-1fd40f0ead3f 645c20ba7f4c4bb7a8facc194a00857c f1a1c40fc7b44a8fb2422030f90fbc4c - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.915 253542 DEBUG nova.network.neutron [req-39390d9a-11d1-4f93-92f7-b3896b27d9be req-4311a001-cb66-4f00-964d-cd5caed782f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Updated VIF entry in instance network info cache for port 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.915 253542 DEBUG nova.network.neutron [req-39390d9a-11d1-4f93-92f7-b3896b27d9be req-4311a001-cb66-4f00-964d-cd5caed782f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Updating instance_info_cache with network_info: [{"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:24:59 compute-0 nova_compute[253538]: 2025-11-25 08:24:59.930 253542 DEBUG oslo_concurrency.lockutils [req-39390d9a-11d1-4f93-92f7-b3896b27d9be req-4311a001-cb66-4f00-964d-cd5caed782f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.065 253542 INFO nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating config drive at /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.069 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmk73hby8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.200 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmk73hby8" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.222 253542 DEBUG nova.storage.rbd_utils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.224 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.353 253542 DEBUG oslo_concurrency.processutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.353 253542 INFO nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deleting local config drive /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config because it was imported into RBD.
Nov 25 08:25:00 compute-0 ceph-mon[75015]: pgmap v1186: 321 pgs: 321 active+clean; 184 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.7 MiB/s wr, 323 op/s
Nov 25 08:25:00 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1754520027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:00 compute-0 kernel: tap4ad9572b-6a: entered promiscuous mode
Nov 25 08:25:00 compute-0 systemd-udevd[279022]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:25:00 compute-0 NetworkManager[48915]: <info>  [1764059100.4002] manager: (tap4ad9572b-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Nov 25 08:25:00 compute-0 ovn_controller[152859]: 2025-11-25T08:25:00Z|00046|binding|INFO|Claiming lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for this chassis.
Nov 25 08:25:00 compute-0 ovn_controller[152859]: 2025-11-25T08:25:00Z|00047|binding|INFO|4ad9572b-6ac1-4659-8ea6-71b8a32c06fe: Claiming fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.404 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.407 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:00 compute-0 NetworkManager[48915]: <info>  [1764059100.4125] device (tap4ad9572b-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:25:00 compute-0 NetworkManager[48915]: <info>  [1764059100.4138] device (tap4ad9572b-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.413 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.417 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:0e:e0 10.100.0.6'], port_security=['fa:16:3e:5e:0e:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.418 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe in datapath 93269c36-ab23-4d95-925a-798173550624 bound to our chassis
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.419 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.429 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7927b67f-59a1-46b3-b331-66999dd0f287]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.430 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap93269c36-a1 in ovnmeta-93269c36-ab23-4d95-925a-798173550624 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.432 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap93269c36-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.432 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[48548175-609b-4795-9017-df44aceda165]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.433 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[97355664-0056-4b05-ad99-532f3ea0db63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:00 compute-0 systemd-machined[215790]: New machine qemu-18-instance-00000010.
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.443 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[cee897ce-da77-4a1e-81d5-710bfbd8d714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.466 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e11d7eb-f45e-45d5-881e-c323d09eebf9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:00 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000010.
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.488 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:00 compute-0 ovn_controller[152859]: 2025-11-25T08:25:00Z|00048|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe ovn-installed in OVS
Nov 25 08:25:00 compute-0 ovn_controller[152859]: 2025-11-25T08:25:00Z|00049|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe up in Southbound
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.489 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7692981b-1481-4bb7-a14e-6dd5ca41b99f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.492 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:00 compute-0 systemd-udevd[279077]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:25:00 compute-0 NetworkManager[48915]: <info>  [1764059100.4955] manager: (tap93269c36-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.494 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[62685375-b584-4037-b8ca-eef1135448b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.527 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[af9c9c9d-a84f-4752-8274-1d302da5d2bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.530 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d3be5eff-4940-4064-a404-eae407ad5b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:00 compute-0 NetworkManager[48915]: <info>  [1764059100.5510] device (tap93269c36-a0): carrier: link connected
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.555 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5491c5b9-d56e-4322-a3a9-2fbc68c420f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.570 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[89819ad9-ee81-49bf-b56a-2ac12ed50152]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279111, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.584 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[edc54449-25e4-4cf9-86bb-c46bdd112130]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:1121'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445515, 'tstamp': 445515}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279112, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.607 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7cdbfa62-7ab6-461b-b5ef-fc77784a67a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279113, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.636 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[82f8b477-27d8-4624-86cd-5aed35ae370d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.699 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6647dfd4-492d-481d-bcdf-2f6c6fb8f6b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.700 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.701 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.701 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.703 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:00 compute-0 kernel: tap93269c36-a0: entered promiscuous mode
Nov 25 08:25:00 compute-0 NetworkManager[48915]: <info>  [1764059100.7046] manager: (tap93269c36-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.704 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.706 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:00 compute-0 ovn_controller[152859]: 2025-11-25T08:25:00Z|00050|binding|INFO|Releasing lport 52d2128c-19c6-4892-8ba5-cc8740039f5e from this chassis (sb_readonly=0)
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.707 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.711 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/93269c36-ab23-4d95-925a-798173550624.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/93269c36-ab23-4d95-925a-798173550624.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.712 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5de66faa-f965-4b4a-ae61-12f22fd8887b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.712 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-93269c36-ab23-4d95-925a-798173550624
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/93269c36-ab23-4d95-925a-798173550624.pid.haproxy
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 93269c36-ab23-4d95-925a-798173550624
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:25:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:00.714 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'env', 'PROCESS_TAG=haproxy-93269c36-ab23-4d95-925a-798173550624', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/93269c36-ab23-4d95-925a-798173550624.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.723 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:00 compute-0 nova_compute[253538]: 2025-11-25 08:25:00.988 253542 DEBUG nova.network.neutron [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Updating instance_info_cache with network_info: [{"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 227 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 434 op/s
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.008 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Releasing lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.009 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Instance network_info: |[{"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.010 253542 DEBUG oslo_concurrency.lockutils [req-8a32d8cc-b206-4136-94a9-584fea6fcd47 req-6b365ca6-cacd-43fa-9ad8-774b4ed63e33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.011 253542 DEBUG nova.network.neutron [req-8a32d8cc-b206-4136-94a9-584fea6fcd47 req-6b365ca6-cacd-43fa-9ad8-774b4ed63e33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Refreshing network info cache for port e9d1298d-411a-4018-ba08-c41d40ba0d41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.018 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Start _get_guest_xml network_info=[{"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.026 253542 WARNING nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.035 253542 DEBUG nova.virt.libvirt.host [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.036 253542 DEBUG nova.virt.libvirt.host [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.040 253542 DEBUG nova.virt.libvirt.host [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.041 253542 DEBUG nova.virt.libvirt.host [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.041 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.041 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.042 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.042 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.042 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.043 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.043 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.043 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.044 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.044 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.045 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.045 253542 DEBUG nova.virt.hardware [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.049 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:01 compute-0 podman[279181]: 2025-11-25 08:25:01.120080466 +0000 UTC m=+0.059931039 container create 5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:25:01 compute-0 systemd[1]: Started libpod-conmon-5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea.scope.
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.165 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059101.1648073, 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.166 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] VM Started (Lifecycle Event)
Nov 25 08:25:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.191 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f08b2f30a3472e570aff96bae58b6aa5c1b8e49bb02775e83be72c16348ca83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:25:01 compute-0 podman[279181]: 2025-11-25 08:25:01.09311267 +0000 UTC m=+0.032963263 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.199 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059101.1654396, 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.200 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] VM Paused (Lifecycle Event)
Nov 25 08:25:01 compute-0 podman[279181]: 2025-11-25 08:25:01.205649775 +0000 UTC m=+0.145500368 container init 5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.213 253542 DEBUG nova.compute.manager [req-1599ae67-6320-4259-96d2-5031b2a52b2e req-e6ece45d-9ec7-483a-8c82-4adb1214a653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.214 253542 DEBUG oslo_concurrency.lockutils [req-1599ae67-6320-4259-96d2-5031b2a52b2e req-e6ece45d-9ec7-483a-8c82-4adb1214a653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.214 253542 DEBUG oslo_concurrency.lockutils [req-1599ae67-6320-4259-96d2-5031b2a52b2e req-e6ece45d-9ec7-483a-8c82-4adb1214a653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.215 253542 DEBUG oslo_concurrency.lockutils [req-1599ae67-6320-4259-96d2-5031b2a52b2e req-e6ece45d-9ec7-483a-8c82-4adb1214a653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:01 compute-0 podman[279181]: 2025-11-25 08:25:01.215626671 +0000 UTC m=+0.155477244 container start 5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.215 253542 DEBUG nova.compute.manager [req-1599ae67-6320-4259-96d2-5031b2a52b2e req-e6ece45d-9ec7-483a-8c82-4adb1214a653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Processing event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.217 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.218 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.226 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.228 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059101.2243848, 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.229 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] VM Resumed (Lifecycle Event)
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.233 253542 INFO nova.virt.libvirt.driver [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance spawned successfully.
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.234 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:25:01 compute-0 neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624[279202]: [NOTICE]   (279225) : New worker (279227) forked
Nov 25 08:25:01 compute-0 neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624[279202]: [NOTICE]   (279225) : Loading success.
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.246 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.250 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.259 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.260 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.260 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.260 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.261 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.261 253542 DEBUG nova.virt.libvirt.driver [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.269 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.319 253542 INFO nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Took 5.60 seconds to spawn the instance on the hypervisor.
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.320 253542 DEBUG nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.377 253542 INFO nova.compute.manager [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Took 6.71 seconds to build instance.
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.394 253542 DEBUG oslo_concurrency.lockutils [None req-cd280208-b595-4b3f-8796-270485a4caef 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:25:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1441138275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.530 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.561 253542 DEBUG nova.storage.rbd_utils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 23ace5af-6840-42aa-a801-98abbb4f3a52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:01 compute-0 nova_compute[253538]: 2025-11-25 08:25:01.565 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.014 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "d3764acc-bce0-452e-bba5-90d76d88df2e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.015 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "d3764acc-bce0-452e-bba5-90d76d88df2e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.016 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "d3764acc-bce0-452e-bba5-90d76d88df2e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.016 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "d3764acc-bce0-452e-bba5-90d76d88df2e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.017 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "d3764acc-bce0-452e-bba5-90d76d88df2e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.019 253542 INFO nova.compute.manager [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Terminating instance
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.020 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "refresh_cache-d3764acc-bce0-452e-bba5-90d76d88df2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.021 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquired lock "refresh_cache-d3764acc-bce0-452e-bba5-90d76d88df2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.021 253542 DEBUG nova.network.neutron [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:25:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:25:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3025257833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.047 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.048 253542 DEBUG nova.virt.libvirt.vif [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:24:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2132333394',display_name='tempest-ServersAdminTestJSON-server-2132333394',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2132333394',id=17,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-8p4p95p6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:24:57Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=23ace5af-6840-42aa-a801-98abbb4f3a52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.049 253542 DEBUG nova.network.os_vif_util [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.050 253542 DEBUG nova.network.os_vif_util [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:6c:e2,bridge_name='br-int',has_traffic_filtering=True,id=e9d1298d-411a-4018-ba08-c41d40ba0d41,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape9d1298d-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.051 253542 DEBUG nova.objects.instance [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'pci_devices' on Instance uuid 23ace5af-6840-42aa-a801-98abbb4f3a52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.063 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:25:02 compute-0 nova_compute[253538]:   <uuid>23ace5af-6840-42aa-a801-98abbb4f3a52</uuid>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   <name>instance-00000011</name>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersAdminTestJSON-server-2132333394</nova:name>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:25:01</nova:creationTime>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:25:02 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:25:02 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:25:02 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:25:02 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:25:02 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:25:02 compute-0 nova_compute[253538]:         <nova:user uuid="76d3377d398a4214a77bc0eb91638ec5">tempest-ServersAdminTestJSON-857664825-project-member</nova:user>
Nov 25 08:25:02 compute-0 nova_compute[253538]:         <nova:project uuid="65a2f983cce14453b2dc9251a520f289">tempest-ServersAdminTestJSON-857664825</nova:project>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:25:02 compute-0 nova_compute[253538]:         <nova:port uuid="e9d1298d-411a-4018-ba08-c41d40ba0d41">
Nov 25 08:25:02 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <system>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <entry name="serial">23ace5af-6840-42aa-a801-98abbb4f3a52</entry>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <entry name="uuid">23ace5af-6840-42aa-a801-98abbb4f3a52</entry>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     </system>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   <os>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   </os>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   <features>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   </features>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/23ace5af-6840-42aa-a801-98abbb4f3a52_disk">
Nov 25 08:25:02 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       </source>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:25:02 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/23ace5af-6840-42aa-a801-98abbb4f3a52_disk.config">
Nov 25 08:25:02 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       </source>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:25:02 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:af:6c:e2"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <target dev="tape9d1298d-41"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52/console.log" append="off"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <video>
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     </video>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:25:02 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:25:02 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:25:02 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:25:02 compute-0 nova_compute[253538]: </domain>
Nov 25 08:25:02 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.064 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Preparing to wait for external event network-vif-plugged-e9d1298d-411a-4018-ba08-c41d40ba0d41 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.064 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.064 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.065 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.065 253542 DEBUG nova.virt.libvirt.vif [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:24:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2132333394',display_name='tempest-ServersAdminTestJSON-server-2132333394',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2132333394',id=17,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-8p4p95p6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:24:57Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=23ace5af-6840-42aa-a801-98abbb4f3a52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.065 253542 DEBUG nova.network.os_vif_util [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.066 253542 DEBUG nova.network.os_vif_util [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:6c:e2,bridge_name='br-int',has_traffic_filtering=True,id=e9d1298d-411a-4018-ba08-c41d40ba0d41,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape9d1298d-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.066 253542 DEBUG os_vif [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:6c:e2,bridge_name='br-int',has_traffic_filtering=True,id=e9d1298d-411a-4018-ba08-c41d40ba0d41,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape9d1298d-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.067 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.067 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.068 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.070 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.070 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape9d1298d-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.071 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape9d1298d-41, col_values=(('external_ids', {'iface-id': 'e9d1298d-411a-4018-ba08-c41d40ba0d41', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:6c:e2', 'vm-uuid': '23ace5af-6840-42aa-a801-98abbb4f3a52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:02 compute-0 NetworkManager[48915]: <info>  [1764059102.0732] manager: (tape9d1298d-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.075 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.078 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.080 253542 INFO os_vif [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:6c:e2,bridge_name='br-int',has_traffic_filtering=True,id=e9d1298d-411a-4018-ba08-c41d40ba0d41,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape9d1298d-41')
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.130 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.130 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.131 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No VIF found with MAC fa:16:3e:af:6c:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.132 253542 INFO nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Using config drive
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.159 253542 DEBUG nova.storage.rbd_utils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 23ace5af-6840-42aa-a801-98abbb4f3a52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.232 253542 DEBUG nova.network.neutron [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:25:02 compute-0 ceph-mon[75015]: pgmap v1187: 321 pgs: 321 active+clean; 227 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 434 op/s
Nov 25 08:25:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1441138275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3025257833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.675 253542 DEBUG nova.network.neutron [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.688 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Releasing lock "refresh_cache-d3764acc-bce0-452e-bba5-90d76d88df2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.689 253542 DEBUG nova.compute.manager [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.694 253542 INFO nova.virt.libvirt.driver [-] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Instance destroyed successfully.
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.695 253542 DEBUG nova.objects.instance [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lazy-loading 'resources' on Instance uuid d3764acc-bce0-452e-bba5-90d76d88df2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.737 253542 DEBUG nova.network.neutron [req-8a32d8cc-b206-4136-94a9-584fea6fcd47 req-6b365ca6-cacd-43fa-9ad8-774b4ed63e33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Updated VIF entry in instance network info cache for port e9d1298d-411a-4018-ba08-c41d40ba0d41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.738 253542 DEBUG nova.network.neutron [req-8a32d8cc-b206-4136-94a9-584fea6fcd47 req-6b365ca6-cacd-43fa-9ad8-774b4ed63e33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Updating instance_info_cache with network_info: [{"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.761 253542 DEBUG oslo_concurrency.lockutils [req-8a32d8cc-b206-4136-94a9-584fea6fcd47 req-6b365ca6-cacd-43fa-9ad8-774b4ed63e33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.793 253542 INFO nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Creating config drive at /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52/disk.config
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.800 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcgf5618h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.945 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcgf5618h" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.971 253542 DEBUG nova.storage.rbd_utils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 23ace5af-6840-42aa-a801-98abbb4f3a52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:02 compute-0 nova_compute[253538]: 2025-11-25 08:25:02.975 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52/disk.config 23ace5af-6840-42aa-a801-98abbb4f3a52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 232 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 9.4 MiB/s wr, 406 op/s
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.139 253542 DEBUG oslo_concurrency.processutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52/disk.config 23ace5af-6840-42aa-a801-98abbb4f3a52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.140 253542 INFO nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Deleting local config drive /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52/disk.config because it was imported into RBD.
Nov 25 08:25:03 compute-0 kernel: tape9d1298d-41: entered promiscuous mode
Nov 25 08:25:03 compute-0 NetworkManager[48915]: <info>  [1764059103.1940] manager: (tape9d1298d-41): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Nov 25 08:25:03 compute-0 ovn_controller[152859]: 2025-11-25T08:25:03Z|00051|binding|INFO|Claiming lport e9d1298d-411a-4018-ba08-c41d40ba0d41 for this chassis.
Nov 25 08:25:03 compute-0 ovn_controller[152859]: 2025-11-25T08:25:03Z|00052|binding|INFO|e9d1298d-411a-4018-ba08-c41d40ba0d41: Claiming fa:16:3e:af:6c:e2 10.100.0.3
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.195 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.201 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:6c:e2 10.100.0.3'], port_security=['fa:16:3e:af:6c:e2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '23ace5af-6840-42aa-a801-98abbb4f3a52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=e9d1298d-411a-4018-ba08-c41d40ba0d41) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:25:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.202 162739 INFO neutron.agent.ovn.metadata.agent [-] Port e9d1298d-411a-4018-ba08-c41d40ba0d41 in datapath 93269c36-ab23-4d95-925a-798173550624 bound to our chassis
Nov 25 08:25:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.203 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624
Nov 25 08:25:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.217 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[20604f6f-f963-460d-b4f8-31f6d3befe8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:03 compute-0 ovn_controller[152859]: 2025-11-25T08:25:03Z|00053|binding|INFO|Setting lport e9d1298d-411a-4018-ba08-c41d40ba0d41 ovn-installed in OVS
Nov 25 08:25:03 compute-0 ovn_controller[152859]: 2025-11-25T08:25:03Z|00054|binding|INFO|Setting lport e9d1298d-411a-4018-ba08-c41d40ba0d41 up in Southbound
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.227 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:03 compute-0 systemd-machined[215790]: New machine qemu-19-instance-00000011.
Nov 25 08:25:03 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000011.
Nov 25 08:25:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.247 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[997360ca-f129-408f-98e9-a30a7497076a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.252 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ba18ad85-3a69-4b28-ba74-d704a01ea256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:03 compute-0 systemd-udevd[279374]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:25:03 compute-0 NetworkManager[48915]: <info>  [1764059103.2719] device (tape9d1298d-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:25:03 compute-0 NetworkManager[48915]: <info>  [1764059103.2728] device (tape9d1298d-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.280 253542 INFO nova.virt.libvirt.driver [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Deleting instance files /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e_del
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.280 253542 INFO nova.virt.libvirt.driver [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Deletion of /var/lib/nova/instances/d3764acc-bce0-452e-bba5-90d76d88df2e_del complete
Nov 25 08:25:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.287 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8c6de63c-0f4e-4462-813d-d1cd35342056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.305 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bd16d4f8-f594-45a6-b17d-f2df73350819]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279384, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.323 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d508698-aaf8-42fc-a736-81e181c82efa]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279385, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279385, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.325 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.327 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.328 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.328 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.329 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:03.329 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.329 253542 INFO nova.compute.manager [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Took 0.64 seconds to destroy the instance on the hypervisor.
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.329 253542 DEBUG oslo.service.loopingcall [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.330 253542 DEBUG nova.compute.manager [-] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.330 253542 DEBUG nova.network.neutron [-] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:25:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.372 253542 DEBUG nova.compute.manager [req-75f7bb4e-7955-4852-afd8-2223ffa7818a req-a2946efc-8d5f-4e0a-9447-55c4b4931d5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.373 253542 DEBUG oslo_concurrency.lockutils [req-75f7bb4e-7955-4852-afd8-2223ffa7818a req-a2946efc-8d5f-4e0a-9447-55c4b4931d5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.373 253542 DEBUG oslo_concurrency.lockutils [req-75f7bb4e-7955-4852-afd8-2223ffa7818a req-a2946efc-8d5f-4e0a-9447-55c4b4931d5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.374 253542 DEBUG oslo_concurrency.lockutils [req-75f7bb4e-7955-4852-afd8-2223ffa7818a req-a2946efc-8d5f-4e0a-9447-55c4b4931d5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.374 253542 DEBUG nova.compute.manager [req-75f7bb4e-7955-4852-afd8-2223ffa7818a req-a2946efc-8d5f-4e0a-9447-55c4b4931d5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.375 253542 WARNING nova.compute.manager [req-75f7bb4e-7955-4852-afd8-2223ffa7818a req-a2946efc-8d5f-4e0a-9447-55c4b4931d5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state active and task_state None.
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.423 253542 DEBUG nova.compute.manager [req-07d30ada-0d57-4ac7-91be-c62690ffcd32 req-668f9794-6d4a-470c-bfd3-b18373b10d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Received event network-vif-plugged-e9d1298d-411a-4018-ba08-c41d40ba0d41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.424 253542 DEBUG oslo_concurrency.lockutils [req-07d30ada-0d57-4ac7-91be-c62690ffcd32 req-668f9794-6d4a-470c-bfd3-b18373b10d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.425 253542 DEBUG oslo_concurrency.lockutils [req-07d30ada-0d57-4ac7-91be-c62690ffcd32 req-668f9794-6d4a-470c-bfd3-b18373b10d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.426 253542 DEBUG oslo_concurrency.lockutils [req-07d30ada-0d57-4ac7-91be-c62690ffcd32 req-668f9794-6d4a-470c-bfd3-b18373b10d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.426 253542 DEBUG nova.compute.manager [req-07d30ada-0d57-4ac7-91be-c62690ffcd32 req-668f9794-6d4a-470c-bfd3-b18373b10d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Processing event network-vif-plugged-e9d1298d-411a-4018-ba08-c41d40ba0d41 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.570 253542 DEBUG nova.network.neutron [-] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.588 253542 DEBUG nova.network.neutron [-] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.601 253542 INFO nova.compute.manager [-] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Took 0.27 seconds to deallocate network for instance.
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.634 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.635 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015271308483102168 of space, bias 1.0, pg target 0.45813925449306503 quantized to 32 (current 32)
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:25:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:25:03 compute-0 nova_compute[253538]: 2025-11-25 08:25:03.733 253542 DEBUG oslo_concurrency.processutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:25:04 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4289916280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.209 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059104.209265, 23ace5af-6840-42aa-a801-98abbb4f3a52 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.210 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] VM Started (Lifecycle Event)
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.213 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.215 253542 DEBUG oslo_concurrency.processutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.216 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.221 253542 DEBUG nova.compute.provider_tree [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.224 253542 INFO nova.virt.libvirt.driver [-] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Instance spawned successfully.
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.224 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.258 253542 DEBUG nova.scheduler.client.report [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.262 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.267 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.318 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.318 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059104.209551, 23ace5af-6840-42aa-a801-98abbb4f3a52 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.319 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] VM Paused (Lifecycle Event)
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.322 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.338 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.339 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.340 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.340 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.341 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.342 253542 DEBUG nova.virt.libvirt.driver [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.348 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.353 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059104.2160735, 23ace5af-6840-42aa-a801-98abbb4f3a52 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.353 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] VM Resumed (Lifecycle Event)
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.360 253542 INFO nova.scheduler.client.report [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Deleted allocations for instance d3764acc-bce0-452e-bba5-90d76d88df2e
Nov 25 08:25:04 compute-0 ceph-mon[75015]: pgmap v1188: 321 pgs: 321 active+clean; 232 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 9.4 MiB/s wr, 406 op/s
Nov 25 08:25:04 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4289916280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.405 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.409 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.431 253542 INFO nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Took 6.73 seconds to spawn the instance on the hypervisor.
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.432 253542 DEBUG nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.435 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.467 253542 DEBUG oslo_concurrency.lockutils [None req-3261949f-221a-4d13-874e-356c6716179f 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "d3764acc-bce0-452e-bba5-90d76d88df2e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.515 253542 INFO nova.compute.manager [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Took 7.66 seconds to build instance.
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.532 253542 DEBUG oslo_concurrency.lockutils [None req-eff26c03-6da1-4935-b2ab-a11367dd46a2 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:04 compute-0 nova_compute[253538]: 2025-11-25 08:25:04.549 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 227 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 10 MiB/s wr, 484 op/s
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.390 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.390 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.391 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.391 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.392 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.393 253542 INFO nova.compute.manager [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Terminating instance
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.394 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "refresh_cache-8e6c81c4-a422-42e4-950b-66fa6411c1eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.394 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquired lock "refresh_cache-8e6c81c4-a422-42e4-950b-66fa6411c1eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.394 253542 DEBUG nova.network.neutron [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.561 253542 DEBUG nova.network.neutron [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.716 253542 DEBUG nova.compute.manager [req-5f7cc571-c28d-4c20-a81c-e74d941ea48f req-d67d3d45-c748-4484-be1a-5e44d9ac8b3a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Received event network-vif-plugged-e9d1298d-411a-4018-ba08-c41d40ba0d41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.716 253542 DEBUG oslo_concurrency.lockutils [req-5f7cc571-c28d-4c20-a81c-e74d941ea48f req-d67d3d45-c748-4484-be1a-5e44d9ac8b3a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.717 253542 DEBUG oslo_concurrency.lockutils [req-5f7cc571-c28d-4c20-a81c-e74d941ea48f req-d67d3d45-c748-4484-be1a-5e44d9ac8b3a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.717 253542 DEBUG oslo_concurrency.lockutils [req-5f7cc571-c28d-4c20-a81c-e74d941ea48f req-d67d3d45-c748-4484-be1a-5e44d9ac8b3a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.718 253542 DEBUG nova.compute.manager [req-5f7cc571-c28d-4c20-a81c-e74d941ea48f req-d67d3d45-c748-4484-be1a-5e44d9ac8b3a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] No waiting events found dispatching network-vif-plugged-e9d1298d-411a-4018-ba08-c41d40ba0d41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.718 253542 WARNING nova.compute.manager [req-5f7cc571-c28d-4c20-a81c-e74d941ea48f req-d67d3d45-c748-4484-be1a-5e44d9ac8b3a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Received unexpected event network-vif-plugged-e9d1298d-411a-4018-ba08-c41d40ba0d41 for instance with vm_state active and task_state None.
Nov 25 08:25:05 compute-0 podman[279451]: 2025-11-25 08:25:05.834500145 +0000 UTC m=+0.086100545 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.902 253542 DEBUG nova.network.neutron [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.920 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Releasing lock "refresh_cache-8e6c81c4-a422-42e4-950b-66fa6411c1eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:05 compute-0 nova_compute[253538]: 2025-11-25 08:25:05.921 253542 DEBUG nova.compute.manager [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:25:05 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 25 08:25:05 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Consumed 12.782s CPU time.
Nov 25 08:25:05 compute-0 systemd-machined[215790]: Machine qemu-15-instance-0000000d terminated.
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.142 253542 INFO nova.virt.libvirt.driver [-] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Instance destroyed successfully.
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.143 253542 DEBUG nova.objects.instance [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lazy-loading 'resources' on Instance uuid 8e6c81c4-a422-42e4-950b-66fa6411c1eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:06 compute-0 ceph-mon[75015]: pgmap v1189: 321 pgs: 321 active+clean; 227 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 10 MiB/s wr, 484 op/s
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.454 253542 INFO nova.virt.libvirt.driver [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Deleting instance files /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb_del
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.454 253542 INFO nova.virt.libvirt.driver [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Deletion of /var/lib/nova/instances/8e6c81c4-a422-42e4-950b-66fa6411c1eb_del complete
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.538 253542 INFO nova.compute.manager [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Took 0.62 seconds to destroy the instance on the hypervisor.
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.539 253542 DEBUG oslo.service.loopingcall [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.539 253542 DEBUG nova.compute.manager [-] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.540 253542 DEBUG nova.network.neutron [-] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.583 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.890 253542 DEBUG nova.network.neutron [-] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.903 253542 DEBUG nova.network.neutron [-] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.920 253542 INFO nova.compute.manager [-] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Took 0.38 seconds to deallocate network for instance.
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.926 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.926 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.926 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.927 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.975 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:06 compute-0 nova_compute[253538]: 2025-11-25 08:25:06.976 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 183 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 7.6 MiB/s rd, 8.0 MiB/s wr, 530 op/s
Nov 25 08:25:07 compute-0 nova_compute[253538]: 2025-11-25 08:25:07.055 253542 DEBUG oslo_concurrency.processutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:07 compute-0 nova_compute[253538]: 2025-11-25 08:25:07.077 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:25:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3479462927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:07 compute-0 nova_compute[253538]: 2025-11-25 08:25:07.471 253542 DEBUG oslo_concurrency.processutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:07 compute-0 nova_compute[253538]: 2025-11-25 08:25:07.476 253542 DEBUG nova.compute.provider_tree [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:25:07 compute-0 nova_compute[253538]: 2025-11-25 08:25:07.490 253542 DEBUG nova.scheduler.client.report [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:25:07 compute-0 nova_compute[253538]: 2025-11-25 08:25:07.533 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:07 compute-0 nova_compute[253538]: 2025-11-25 08:25:07.586 253542 INFO nova.scheduler.client.report [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Deleted allocations for instance 8e6c81c4-a422-42e4-950b-66fa6411c1eb
Nov 25 08:25:07 compute-0 nova_compute[253538]: 2025-11-25 08:25:07.650 253542 DEBUG oslo_concurrency.lockutils [None req-85efc1f6-1c43-4482-88dd-c1294495f1db 345cc8bd54dd46c9aaf034a44f55f52e fbc71ebb18c64a72bd6d93bc520d8921 - - default default] Lock "8e6c81c4-a422-42e4-950b-66fa6411c1eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:25:08 compute-0 ceph-mon[75015]: pgmap v1190: 321 pgs: 321 active+clean; 183 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 7.6 MiB/s rd, 8.0 MiB/s wr, 530 op/s
Nov 25 08:25:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3479462927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:08 compute-0 nova_compute[253538]: 2025-11-25 08:25:08.518 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Updating instance_info_cache with network_info: [{"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:08 compute-0 nova_compute[253538]: 2025-11-25 08:25:08.540 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-86bfa56f-56d0-4a5e-b0b2-302c375e37a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:08 compute-0 nova_compute[253538]: 2025-11-25 08:25:08.540 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:25:08 compute-0 nova_compute[253538]: 2025-11-25 08:25:08.541 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:25:08 compute-0 nova_compute[253538]: 2025-11-25 08:25:08.542 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:25:08 compute-0 nova_compute[253538]: 2025-11-25 08:25:08.542 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:25:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 176 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 6.3 MiB/s wr, 459 op/s
Nov 25 08:25:09 compute-0 nova_compute[253538]: 2025-11-25 08:25:09.552 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:09 compute-0 nova_compute[253538]: 2025-11-25 08:25:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:25:09 compute-0 nova_compute[253538]: 2025-11-25 08:25:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:25:09 compute-0 nova_compute[253538]: 2025-11-25 08:25:09.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 08:25:09 compute-0 sudo[279520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:25:09 compute-0 sudo[279520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:09 compute-0 sudo[279520]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:09 compute-0 sudo[279545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:25:09 compute-0 sudo[279545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:09 compute-0 sudo[279545]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:09 compute-0 sudo[279570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:25:09 compute-0 sudo[279570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:09 compute-0 sudo[279570]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:09 compute-0 sudo[279595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:25:09 compute-0 sudo[279595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:10 compute-0 sudo[279595]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:25:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:25:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:25:10 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:25:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:25:10 compute-0 ceph-mon[75015]: pgmap v1191: 321 pgs: 321 active+clean; 176 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 6.3 MiB/s wr, 459 op/s
Nov 25 08:25:10 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:25:10 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:25:10 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:25:10 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 403a0863-1c3d-464f-bb32-6d10bcd916df does not exist
Nov 25 08:25:10 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 0bb4f461-3f61-48e8-a398-84f307cf5df9 does not exist
Nov 25 08:25:10 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 855f5374-84c0-4844-9d99-46c21a6fde16 does not exist
Nov 25 08:25:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:25:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:25:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:25:10 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:25:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:25:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:25:10 compute-0 sudo[279651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:25:10 compute-0 sudo[279651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:10 compute-0 sudo[279651]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:10 compute-0 sudo[279676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:25:10 compute-0 sudo[279676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:10 compute-0 sudo[279676]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:10 compute-0 nova_compute[253538]: 2025-11-25 08:25:10.576 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:25:10 compute-0 sudo[279701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:25:10 compute-0 sudo[279701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:10 compute-0 sudo[279701]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:10 compute-0 sudo[279726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:25:10 compute-0 nova_compute[253538]: 2025-11-25 08:25:10.691 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:10 compute-0 nova_compute[253538]: 2025-11-25 08:25:10.691 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:10 compute-0 sudo[279726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:10 compute-0 nova_compute[253538]: 2025-11-25 08:25:10.707 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:25:10 compute-0 nova_compute[253538]: 2025-11-25 08:25:10.765 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:10 compute-0 nova_compute[253538]: 2025-11-25 08:25:10.766 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:10 compute-0 nova_compute[253538]: 2025-11-25 08:25:10.773 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:25:10 compute-0 nova_compute[253538]: 2025-11-25 08:25:10.773 253542 INFO nova.compute.claims [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:25:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.6 MiB/s wr, 425 op/s
Nov 25 08:25:11 compute-0 podman[279789]: 2025-11-25 08:25:11.035746729 +0000 UTC m=+0.043849765 container create aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:25:11 compute-0 systemd[1]: Started libpod-conmon-aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144.scope.
Nov 25 08:25:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:25:11 compute-0 podman[279789]: 2025-11-25 08:25:11.014979144 +0000 UTC m=+0.023082210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:25:11 compute-0 podman[279789]: 2025-11-25 08:25:11.121069051 +0000 UTC m=+0.129172087 container init aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:25:11 compute-0 podman[279789]: 2025-11-25 08:25:11.128114236 +0000 UTC m=+0.136217272 container start aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 08:25:11 compute-0 podman[279789]: 2025-11-25 08:25:11.130969285 +0000 UTC m=+0.139072341 container attach aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 08:25:11 compute-0 vigorous_mirzakhani[279805]: 167 167
Nov 25 08:25:11 compute-0 systemd[1]: libpod-aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144.scope: Deactivated successfully.
Nov 25 08:25:11 compute-0 podman[279789]: 2025-11-25 08:25:11.135699595 +0000 UTC m=+0.143802641 container died aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:25:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c755a86c7ee0b329a4b70ebbfbb6b6c40051375c85626013e0354afc2f12b899-merged.mount: Deactivated successfully.
Nov 25 08:25:11 compute-0 podman[279789]: 2025-11-25 08:25:11.173955385 +0000 UTC m=+0.182058431 container remove aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 08:25:11 compute-0 nova_compute[253538]: 2025-11-25 08:25:11.202 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:11 compute-0 systemd[1]: libpod-conmon-aab33eacda8c52801b8a7ea3425540296a7d17a857bcc3457c740470a9c1b144.scope: Deactivated successfully.
Nov 25 08:25:11 compute-0 podman[279829]: 2025-11-25 08:25:11.356486767 +0000 UTC m=+0.044342299 container create ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 08:25:11 compute-0 systemd[1]: Started libpod-conmon-ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341.scope.
Nov 25 08:25:11 compute-0 podman[279829]: 2025-11-25 08:25:11.338554491 +0000 UTC m=+0.026410043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:25:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d72a772b7ee2eaf817353045316f4024f65e21593e09c6aa0b55b6db6243ea7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d72a772b7ee2eaf817353045316f4024f65e21593e09c6aa0b55b6db6243ea7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d72a772b7ee2eaf817353045316f4024f65e21593e09c6aa0b55b6db6243ea7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d72a772b7ee2eaf817353045316f4024f65e21593e09c6aa0b55b6db6243ea7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d72a772b7ee2eaf817353045316f4024f65e21593e09c6aa0b55b6db6243ea7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:25:11 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:25:11 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:25:11 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:25:11 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:25:11 compute-0 ceph-mon[75015]: pgmap v1192: 321 pgs: 321 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.6 MiB/s wr, 425 op/s
Nov 25 08:25:11 compute-0 podman[279829]: 2025-11-25 08:25:11.466578195 +0000 UTC m=+0.154433747 container init ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:25:11 compute-0 podman[279829]: 2025-11-25 08:25:11.475979075 +0000 UTC m=+0.163834617 container start ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 08:25:11 compute-0 podman[279829]: 2025-11-25 08:25:11.479519363 +0000 UTC m=+0.167374895 container attach ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 08:25:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:25:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2802608226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:11 compute-0 nova_compute[253538]: 2025-11-25 08:25:11.677 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:11 compute-0 nova_compute[253538]: 2025-11-25 08:25:11.685 253542 DEBUG nova.compute.provider_tree [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:25:11 compute-0 nova_compute[253538]: 2025-11-25 08:25:11.703 253542 DEBUG nova.scheduler.client.report [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:25:11 compute-0 nova_compute[253538]: 2025-11-25 08:25:11.726 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:11 compute-0 nova_compute[253538]: 2025-11-25 08:25:11.727 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:25:11 compute-0 nova_compute[253538]: 2025-11-25 08:25:11.784 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:25:11 compute-0 nova_compute[253538]: 2025-11-25 08:25:11.785 253542 DEBUG nova.network.neutron [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:25:11 compute-0 nova_compute[253538]: 2025-11-25 08:25:11.805 253542 INFO nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:25:11 compute-0 nova_compute[253538]: 2025-11-25 08:25:11.825 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:25:11 compute-0 nova_compute[253538]: 2025-11-25 08:25:11.935 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:25:11 compute-0 nova_compute[253538]: 2025-11-25 08:25:11.937 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:25:11 compute-0 nova_compute[253538]: 2025-11-25 08:25:11.938 253542 INFO nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Creating image(s)
Nov 25 08:25:11 compute-0 nova_compute[253538]: 2025-11-25 08:25:11.974 253542 DEBUG nova.storage.rbd_utils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:11 compute-0 nova_compute[253538]: 2025-11-25 08:25:11.997 253542 DEBUG nova.storage.rbd_utils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.022 253542 DEBUG nova.storage.rbd_utils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.026 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.081 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.091 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.091 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.092 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.092 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.112 253542 DEBUG nova.storage.rbd_utils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.115 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.321 253542 DEBUG nova.policy [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '76d3377d398a4214a77bc0eb91638ec5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '65a2f983cce14453b2dc9251a520f289', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.407 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:12 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2802608226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.517 253542 DEBUG nova.storage.rbd_utils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] resizing rbd image 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.559 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.560 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.586 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.587 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.587 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.587 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.587 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.629 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059097.627532, 7e576418-7454-49eb-9918-2d7f04547bd8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.629 253542 INFO nova.compute.manager [-] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] VM Stopped (Lifecycle Event)
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.649 253542 DEBUG nova.compute.manager [None req-faafaea5-dd09-4d16-b520-ace4c0b3c8f6 - - - - - -] [instance: 7e576418-7454-49eb-9918-2d7f04547bd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:12 compute-0 eloquent_lichterman[279864]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:25:12 compute-0 eloquent_lichterman[279864]: --> relative data size: 1.0
Nov 25 08:25:12 compute-0 eloquent_lichterman[279864]: --> All data devices are unavailable
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.710 253542 DEBUG nova.objects.instance [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'migration_context' on Instance uuid 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.722 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.722 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Ensure instance console log exists: /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.723 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.723 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.723 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:12 compute-0 systemd[1]: libpod-ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341.scope: Deactivated successfully.
Nov 25 08:25:12 compute-0 systemd[1]: libpod-ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341.scope: Consumed 1.139s CPU time.
Nov 25 08:25:12 compute-0 podman[279829]: 2025-11-25 08:25:12.781433361 +0000 UTC m=+1.469288913 container died ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:25:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d72a772b7ee2eaf817353045316f4024f65e21593e09c6aa0b55b6db6243ea7-merged.mount: Deactivated successfully.
Nov 25 08:25:12 compute-0 podman[279829]: 2025-11-25 08:25:12.867632117 +0000 UTC m=+1.555487669 container remove ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:25:12 compute-0 systemd[1]: libpod-conmon-ba33fa3523491854a4c04ca1f7735af8a9bf198d2845dd525e36fd11d347e341.scope: Deactivated successfully.
Nov 25 08:25:12 compute-0 sudo[279726]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:12 compute-0 nova_compute[253538]: 2025-11-25 08:25:12.977 253542 DEBUG nova.network.neutron [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Successfully created port: a0a9c956-aaa5-4981-a1d9-ae896cea7b7c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:25:12 compute-0 sudo[280094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:25:12 compute-0 sudo[280094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:12 compute-0 sudo[280094]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.3 MiB/s wr, 295 op/s
Nov 25 08:25:13 compute-0 sudo[280119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:25:13 compute-0 sudo[280119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:13 compute-0 sudo[280119]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:25:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/944451179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:13 compute-0 sudo[280144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:25:13 compute-0 sudo[280144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:13 compute-0 sudo[280144]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:13 compute-0 nova_compute[253538]: 2025-11-25 08:25:13.117 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:13 compute-0 sudo[280171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:25:13 compute-0 sudo[280171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:13 compute-0 nova_compute[253538]: 2025-11-25 08:25:13.199 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:25:13 compute-0 nova_compute[253538]: 2025-11-25 08:25:13.199 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:25:13 compute-0 nova_compute[253538]: 2025-11-25 08:25:13.203 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:25:13 compute-0 nova_compute[253538]: 2025-11-25 08:25:13.203 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:25:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:25:13 compute-0 nova_compute[253538]: 2025-11-25 08:25:13.460 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:25:13 compute-0 ceph-mon[75015]: pgmap v1193: 321 pgs: 321 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.3 MiB/s wr, 295 op/s
Nov 25 08:25:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/944451179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:13 compute-0 nova_compute[253538]: 2025-11-25 08:25:13.461 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4274MB free_disk=59.94648361206055GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:25:13 compute-0 nova_compute[253538]: 2025-11-25 08:25:13.461 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:13 compute-0 nova_compute[253538]: 2025-11-25 08:25:13.461 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:13 compute-0 podman[280238]: 2025-11-25 08:25:13.534056744 +0000 UTC m=+0.046882819 container create 32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:25:13 compute-0 nova_compute[253538]: 2025-11-25 08:25:13.560 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:25:13 compute-0 nova_compute[253538]: 2025-11-25 08:25:13.560 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 23ace5af-6840-42aa-a801-98abbb4f3a52 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:25:13 compute-0 nova_compute[253538]: 2025-11-25 08:25:13.560 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:25:13 compute-0 nova_compute[253538]: 2025-11-25 08:25:13.561 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:25:13 compute-0 nova_compute[253538]: 2025-11-25 08:25:13.561 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:25:13 compute-0 systemd[1]: Started libpod-conmon-32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612.scope.
Nov 25 08:25:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:25:13 compute-0 podman[280238]: 2025-11-25 08:25:13.517411864 +0000 UTC m=+0.030237969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:25:13 compute-0 podman[280238]: 2025-11-25 08:25:13.613258786 +0000 UTC m=+0.126084871 container init 32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 08:25:13 compute-0 podman[280238]: 2025-11-25 08:25:13.620758964 +0000 UTC m=+0.133585029 container start 32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 08:25:13 compute-0 podman[280238]: 2025-11-25 08:25:13.623437548 +0000 UTC m=+0.136263633 container attach 32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 08:25:13 compute-0 crazy_rubin[280253]: 167 167
Nov 25 08:25:13 compute-0 systemd[1]: libpod-32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612.scope: Deactivated successfully.
Nov 25 08:25:13 compute-0 conmon[280253]: conmon 32a1fc0e19eabd87fe59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612.scope/container/memory.events
Nov 25 08:25:13 compute-0 podman[280238]: 2025-11-25 08:25:13.627274494 +0000 UTC m=+0.140100569 container died 32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:25:13 compute-0 nova_compute[253538]: 2025-11-25 08:25:13.632 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2bff00768786ce34c44da0ced77eb537416f09d3517d8b85e90fdec3a696689-merged.mount: Deactivated successfully.
Nov 25 08:25:13 compute-0 podman[280238]: 2025-11-25 08:25:13.755568146 +0000 UTC m=+0.268394221 container remove 32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:25:13 compute-0 systemd[1]: libpod-conmon-32a1fc0e19eabd87fe59d3e303e0bdb5f382051f3cce4676e3a693633cacf612.scope: Deactivated successfully.
Nov 25 08:25:14 compute-0 podman[280296]: 2025-11-25 08:25:13.913359193 +0000 UTC m=+0.020910259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:25:14 compute-0 podman[280296]: 2025-11-25 08:25:14.009846314 +0000 UTC m=+0.117397400 container create 71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.042 253542 DEBUG nova.network.neutron [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Successfully updated port: a0a9c956-aaa5-4981-a1d9-ae896cea7b7c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.055 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "refresh_cache-29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.055 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquired lock "refresh_cache-29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.055 253542 DEBUG nova.network.neutron [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:25:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:25:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2997014469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:14 compute-0 ovn_controller[152859]: 2025-11-25T08:25:14Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 08:25:14 compute-0 ovn_controller[152859]: 2025-11-25T08:25:14Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 08:25:14 compute-0 systemd[1]: Started libpod-conmon-71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef.scope.
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.129 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.147 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:25:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3857365d3e57c6fb4bbd9e053d2a0d5a656d628fd6a7f618bb6311ba934924/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3857365d3e57c6fb4bbd9e053d2a0d5a656d628fd6a7f618bb6311ba934924/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3857365d3e57c6fb4bbd9e053d2a0d5a656d628fd6a7f618bb6311ba934924/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.162 253542 DEBUG nova.compute.manager [req-a4d7c72b-a628-4cc5-b63a-7d0a60797013 req-196991de-e32f-466d-bcf2-e0fdb462f89a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received event network-changed-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3857365d3e57c6fb4bbd9e053d2a0d5a656d628fd6a7f618bb6311ba934924/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.163 253542 DEBUG nova.compute.manager [req-a4d7c72b-a628-4cc5-b63a-7d0a60797013 req-196991de-e32f-466d-bcf2-e0fdb462f89a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Refreshing instance network info cache due to event network-changed-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.163 253542 DEBUG oslo_concurrency.lockutils [req-a4d7c72b-a628-4cc5-b63a-7d0a60797013 req-196991de-e32f-466d-bcf2-e0fdb462f89a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.165 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:25:14 compute-0 podman[280296]: 2025-11-25 08:25:14.181385063 +0000 UTC m=+0.288936129 container init 71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.186 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.186 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:14 compute-0 podman[280296]: 2025-11-25 08:25:14.187692797 +0000 UTC m=+0.295243843 container start 71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 25 08:25:14 compute-0 podman[280296]: 2025-11-25 08:25:14.196191662 +0000 UTC m=+0.303742708 container attach 71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.258 253542 DEBUG nova.network.neutron [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:25:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2997014469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.554 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.752 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059099.751519, d3764acc-bce0-452e-bba5-90d76d88df2e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.753 253542 INFO nova.compute.manager [-] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] VM Stopped (Lifecycle Event)
Nov 25 08:25:14 compute-0 nova_compute[253538]: 2025-11-25 08:25:14.766 253542 DEBUG nova.compute.manager [None req-21d3da1e-b910-4b7f-8654-7e47d31bb5f0 - - - - - -] [instance: d3764acc-bce0-452e-bba5-90d76d88df2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 183 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.1 MiB/s wr, 278 op/s
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]: {
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:     "0": [
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:         {
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "devices": [
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "/dev/loop3"
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             ],
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "lv_name": "ceph_lv0",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "lv_size": "21470642176",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "name": "ceph_lv0",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "tags": {
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.cluster_name": "ceph",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.crush_device_class": "",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.encrypted": "0",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.osd_id": "0",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.type": "block",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.vdo": "0"
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             },
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "type": "block",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "vg_name": "ceph_vg0"
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:         }
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:     ],
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:     "1": [
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:         {
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "devices": [
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "/dev/loop4"
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             ],
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "lv_name": "ceph_lv1",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "lv_size": "21470642176",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "name": "ceph_lv1",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "tags": {
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.cluster_name": "ceph",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.crush_device_class": "",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.encrypted": "0",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.osd_id": "1",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.type": "block",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.vdo": "0"
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             },
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "type": "block",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "vg_name": "ceph_vg1"
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:         }
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:     ],
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:     "2": [
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:         {
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "devices": [
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "/dev/loop5"
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             ],
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "lv_name": "ceph_lv2",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "lv_size": "21470642176",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "name": "ceph_lv2",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "tags": {
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.cluster_name": "ceph",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.crush_device_class": "",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.encrypted": "0",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.osd_id": "2",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.type": "block",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:                 "ceph.vdo": "0"
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             },
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "type": "block",
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:             "vg_name": "ceph_vg2"
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:         }
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]:     ]
Nov 25 08:25:15 compute-0 fervent_vaughan[280313]: }
Nov 25 08:25:15 compute-0 systemd[1]: libpod-71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef.scope: Deactivated successfully.
Nov 25 08:25:15 compute-0 podman[280296]: 2025-11-25 08:25:15.042646003 +0000 UTC m=+1.150197049 container died 71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:25:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d3857365d3e57c6fb4bbd9e053d2a0d5a656d628fd6a7f618bb6311ba934924-merged.mount: Deactivated successfully.
Nov 25 08:25:15 compute-0 podman[280296]: 2025-11-25 08:25:15.11407265 +0000 UTC m=+1.221623696 container remove 71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 08:25:15 compute-0 systemd[1]: libpod-conmon-71f08b7aae3a2f25adf573c881573b4e2627b2bad00243ed9fc5f9b745050cef.scope: Deactivated successfully.
Nov 25 08:25:15 compute-0 sudo[280171]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:15 compute-0 sudo[280337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:25:15 compute-0 sudo[280337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:15 compute-0 sudo[280337]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:15 compute-0 sudo[280362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:25:15 compute-0 sudo[280362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:15 compute-0 sudo[280362]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:15 compute-0 sudo[280387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:25:15 compute-0 sudo[280387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:15 compute-0 sudo[280387]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:15 compute-0 sudo[280412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:25:15 compute-0 sudo[280412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:15 compute-0 ceph-mon[75015]: pgmap v1194: 321 pgs: 321 active+clean; 183 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.1 MiB/s wr, 278 op/s
Nov 25 08:25:15 compute-0 podman[280477]: 2025-11-25 08:25:15.761258565 +0000 UTC m=+0.045531052 container create 0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 08:25:15 compute-0 systemd[1]: Started libpod-conmon-0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089.scope.
Nov 25 08:25:15 compute-0 podman[280477]: 2025-11-25 08:25:15.737803895 +0000 UTC m=+0.022076402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:25:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:25:16 compute-0 podman[280477]: 2025-11-25 08:25:16.043527137 +0000 UTC m=+0.327799644 container init 0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:25:16 compute-0 podman[280477]: 2025-11-25 08:25:16.049974627 +0000 UTC m=+0.334247114 container start 0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 08:25:16 compute-0 hopeful_bhabha[280493]: 167 167
Nov 25 08:25:16 compute-0 systemd[1]: libpod-0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089.scope: Deactivated successfully.
Nov 25 08:25:16 compute-0 conmon[280493]: conmon 0607b96bb67e0e6eb14c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089.scope/container/memory.events
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.154 253542 DEBUG nova.network.neutron [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Updating instance_info_cache with network_info: [{"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.185 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Releasing lock "refresh_cache-29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.186 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Instance network_info: |[{"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.186 253542 DEBUG oslo_concurrency.lockutils [req-a4d7c72b-a628-4cc5-b63a-7d0a60797013 req-196991de-e32f-466d-bcf2-e0fdb462f89a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.187 253542 DEBUG nova.network.neutron [req-a4d7c72b-a628-4cc5-b63a-7d0a60797013 req-196991de-e32f-466d-bcf2-e0fdb462f89a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Refreshing network info cache for port a0a9c956-aaa5-4981-a1d9-ae896cea7b7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.189 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Start _get_guest_xml network_info=[{"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.194 253542 WARNING nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:25:16 compute-0 podman[280477]: 2025-11-25 08:25:16.199196117 +0000 UTC m=+0.483468614 container attach 0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 08:25:16 compute-0 podman[280477]: 2025-11-25 08:25:16.199744502 +0000 UTC m=+0.484017009 container died 0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.201 253542 DEBUG nova.virt.libvirt.host [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.203 253542 DEBUG nova.virt.libvirt.host [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.207 253542 DEBUG nova.virt.libvirt.host [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.208 253542 DEBUG nova.virt.libvirt.host [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.209 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.209 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.210 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.211 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.211 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.212 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.212 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.213 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.213 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.213 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.213 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.214 253542 DEBUG nova.virt.hardware [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.218 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bf3baad1494d8441ef45d3c06a22c99bf61c080e0e1fa6acd331b87ce58e6bb-merged.mount: Deactivated successfully.
Nov 25 08:25:16 compute-0 podman[280477]: 2025-11-25 08:25:16.512926631 +0000 UTC m=+0.797199148 container remove 0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 08:25:16 compute-0 systemd[1]: libpod-conmon-0607b96bb67e0e6eb14c3312634b1e1178d8d971752c330b97f9f35f4dcf2089.scope: Deactivated successfully.
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.566 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:25:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:25:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1663447044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.682 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.704 253542 DEBUG nova.storage.rbd_utils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:16 compute-0 nova_compute[253538]: 2025-11-25 08:25:16.708 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:16 compute-0 podman[280536]: 2025-11-25 08:25:16.713786842 +0000 UTC m=+0.063731146 container create 7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_snyder, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:25:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1663447044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:16 compute-0 podman[280536]: 2025-11-25 08:25:16.670793401 +0000 UTC m=+0.020737645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:25:16 compute-0 systemd[1]: Started libpod-conmon-7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8.scope.
Nov 25 08:25:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05dd9628226752813a31a5b80303e45981eb1b06b4485f17be510e276d506cdb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05dd9628226752813a31a5b80303e45981eb1b06b4485f17be510e276d506cdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05dd9628226752813a31a5b80303e45981eb1b06b4485f17be510e276d506cdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05dd9628226752813a31a5b80303e45981eb1b06b4485f17be510e276d506cdb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:25:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 210 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.5 MiB/s wr, 195 op/s
Nov 25 08:25:17 compute-0 podman[280536]: 2025-11-25 08:25:17.05259959 +0000 UTC m=+0.402543924 container init 7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_snyder, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:25:17 compute-0 podman[280536]: 2025-11-25 08:25:17.068235062 +0000 UTC m=+0.418179286 container start 7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_snyder, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 08:25:17 compute-0 podman[280536]: 2025-11-25 08:25:17.072679676 +0000 UTC m=+0.422624010 container attach 7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_snyder, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.084 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:25:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980305441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.123 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.125 253542 DEBUG nova.virt.libvirt.vif [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1138480486',display_name='tempest-ServersAdminTestJSON-server-1138480486',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1138480486',id=18,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-hs51jeqx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:11Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=29fb9e2b-13d1-41e6-b0b1-1d5262dcadec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.126 253542 DEBUG nova.network.os_vif_util [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.127 253542 DEBUG nova.network.os_vif_util [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a9c956-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.128 253542 DEBUG nova.objects.instance [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'pci_devices' on Instance uuid 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.150 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:25:17 compute-0 nova_compute[253538]:   <uuid>29fb9e2b-13d1-41e6-b0b1-1d5262dcadec</uuid>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   <name>instance-00000012</name>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersAdminTestJSON-server-1138480486</nova:name>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:25:16</nova:creationTime>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:25:17 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:25:17 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:25:17 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:25:17 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:25:17 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:25:17 compute-0 nova_compute[253538]:         <nova:user uuid="76d3377d398a4214a77bc0eb91638ec5">tempest-ServersAdminTestJSON-857664825-project-member</nova:user>
Nov 25 08:25:17 compute-0 nova_compute[253538]:         <nova:project uuid="65a2f983cce14453b2dc9251a520f289">tempest-ServersAdminTestJSON-857664825</nova:project>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:25:17 compute-0 nova_compute[253538]:         <nova:port uuid="a0a9c956-aaa5-4981-a1d9-ae896cea7b7c">
Nov 25 08:25:17 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <system>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <entry name="serial">29fb9e2b-13d1-41e6-b0b1-1d5262dcadec</entry>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <entry name="uuid">29fb9e2b-13d1-41e6-b0b1-1d5262dcadec</entry>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     </system>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   <os>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   </os>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   <features>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   </features>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk">
Nov 25 08:25:17 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       </source>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:25:17 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk.config">
Nov 25 08:25:17 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       </source>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:25:17 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:11:6b:81"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <target dev="tapa0a9c956-aa"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec/console.log" append="off"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <video>
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     </video>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:25:17 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:25:17 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:25:17 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:25:17 compute-0 nova_compute[253538]: </domain>
Nov 25 08:25:17 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.150 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Preparing to wait for external event network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.151 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.151 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.151 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.152 253542 DEBUG nova.virt.libvirt.vif [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1138480486',display_name='tempest-ServersAdminTestJSON-server-1138480486',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1138480486',id=18,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-hs51jeqx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:11Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=29fb9e2b-13d1-41e6-b0b1-1d5262dcadec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.152 253542 DEBUG nova.network.os_vif_util [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.153 253542 DEBUG nova.network.os_vif_util [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a9c956-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.153 253542 DEBUG os_vif [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a9c956-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.154 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.154 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.155 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.158 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.158 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0a9c956-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.159 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0a9c956-aa, col_values=(('external_ids', {'iface-id': 'a0a9c956-aaa5-4981-a1d9-ae896cea7b7c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:6b:81', 'vm-uuid': '29fb9e2b-13d1-41e6-b0b1-1d5262dcadec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.161 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:17 compute-0 NetworkManager[48915]: <info>  [1764059117.1619] manager: (tapa0a9c956-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.163 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.170 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.171 253542 INFO os_vif [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a9c956-aa')
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.214 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.214 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.215 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No VIF found with MAC fa:16:3e:11:6b:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.215 253542 INFO nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Using config drive
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.234 253542 DEBUG nova.storage.rbd_utils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:25:17 compute-0 ceph-mon[75015]: pgmap v1195: 321 pgs: 321 active+clean; 210 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.5 MiB/s wr, 195 op/s
Nov 25 08:25:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2980305441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.859 253542 INFO nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Creating config drive at /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec/disk.config
Nov 25 08:25:17 compute-0 nova_compute[253538]: 2025-11-25 08:25:17.865 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxt5d0dfs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.009 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxt5d0dfs" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:18 compute-0 quirky_snyder[280591]: {
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:         "osd_id": 1,
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:         "type": "bluestore"
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:     },
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:         "osd_id": 2,
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:         "type": "bluestore"
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:     },
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:         "osd_id": 0,
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:         "type": "bluestore"
Nov 25 08:25:18 compute-0 quirky_snyder[280591]:     }
Nov 25 08:25:18 compute-0 quirky_snyder[280591]: }
Nov 25 08:25:18 compute-0 systemd[1]: libpod-7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8.scope: Deactivated successfully.
Nov 25 08:25:18 compute-0 podman[280536]: 2025-11-25 08:25:18.05728527 +0000 UTC m=+1.407229514 container died 7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_snyder, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.072 253542 DEBUG nova.storage.rbd_utils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.078 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec/disk.config 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-05dd9628226752813a31a5b80303e45981eb1b06b4485f17be510e276d506cdb-merged.mount: Deactivated successfully.
Nov 25 08:25:18 compute-0 podman[280536]: 2025-11-25 08:25:18.108542559 +0000 UTC m=+1.458486773 container remove 7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_snyder, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:25:18 compute-0 systemd[1]: libpod-conmon-7243b2bc33e5d1b1ef93941ee95b3713f020921da317e14451f06e2ca762f4d8.scope: Deactivated successfully.
Nov 25 08:25:18 compute-0 sudo[280412]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:25:18 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:25:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:25:18 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:25:18 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e3796ebb-2e9c-4484-a918-282a19ba3d80 does not exist
Nov 25 08:25:18 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 1471d9e8-7761-4cf0-aadf-66269d9f6469 does not exist
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.244 253542 DEBUG oslo_concurrency.processutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec/disk.config 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.245 253542 INFO nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Deleting local config drive /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec/disk.config because it was imported into RBD.
Nov 25 08:25:18 compute-0 sudo[280696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:25:18 compute-0 sudo[280696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:18 compute-0 sudo[280696]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:18 compute-0 kernel: tapa0a9c956-aa: entered promiscuous mode
Nov 25 08:25:18 compute-0 NetworkManager[48915]: <info>  [1764059118.2949] manager: (tapa0a9c956-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.294 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:18 compute-0 ovn_controller[152859]: 2025-11-25T08:25:18Z|00055|binding|INFO|Claiming lport a0a9c956-aaa5-4981-a1d9-ae896cea7b7c for this chassis.
Nov 25 08:25:18 compute-0 ovn_controller[152859]: 2025-11-25T08:25:18Z|00056|binding|INFO|a0a9c956-aaa5-4981-a1d9-ae896cea7b7c: Claiming fa:16:3e:11:6b:81 10.100.0.10
Nov 25 08:25:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.303 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:6b:81 10.100.0.10'], port_security=['fa:16:3e:11:6b:81 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '29fb9e2b-13d1-41e6-b0b1-1d5262dcadec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:25:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.305 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a0a9c956-aaa5-4981-a1d9-ae896cea7b7c in datapath 93269c36-ab23-4d95-925a-798173550624 bound to our chassis
Nov 25 08:25:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.308 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624
Nov 25 08:25:18 compute-0 ovn_controller[152859]: 2025-11-25T08:25:18Z|00057|binding|INFO|Setting lport a0a9c956-aaa5-4981-a1d9-ae896cea7b7c ovn-installed in OVS
Nov 25 08:25:18 compute-0 ovn_controller[152859]: 2025-11-25T08:25:18Z|00058|binding|INFO|Setting lport a0a9c956-aaa5-4981-a1d9-ae896cea7b7c up in Southbound
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.317 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:18 compute-0 sudo[280724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.320 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:18 compute-0 sudo[280724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:25:18 compute-0 sudo[280724]: pam_unix(sudo:session): session closed for user root
Nov 25 08:25:18 compute-0 systemd-machined[215790]: New machine qemu-20-instance-00000012.
Nov 25 08:25:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.330 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[53f7611e-c5a1-41e0-bb35-25e8ce1a3c4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:18 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000012.
Nov 25 08:25:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:25:18 compute-0 systemd-udevd[280762]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:25:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.360 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[175ffcde-22ee-4941-af75-21bdc4436d94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.362 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.363 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.363 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ef40d41c-c155-4528-84e0-0c4400830cd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:18 compute-0 NetworkManager[48915]: <info>  [1764059118.3706] device (tapa0a9c956-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:25:18 compute-0 NetworkManager[48915]: <info>  [1764059118.3719] device (tapa0a9c956-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.392 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:25:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.395 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a53c6e-408a-4df7-ad7e-7af93739de2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c3529a-7dc6-4bbc-a39d-8f1223d3472e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280772, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.432 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f8dc4080-5f35-48a7-8811-4e2aa846ac59]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280774, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280774, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.433 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.434 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.435 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.436 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.436 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.436 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:18.437 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.469 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.469 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.497 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.498 253542 INFO nova.compute.claims [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:25:18 compute-0 ovn_controller[152859]: 2025-11-25T08:25:18Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:af:6c:e2 10.100.0.3
Nov 25 08:25:18 compute-0 ovn_controller[152859]: 2025-11-25T08:25:18Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:6c:e2 10.100.0.3
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.687 253542 DEBUG nova.compute.manager [req-d598cda7-0034-4be3-b42b-e34efe52221e req-78cb6083-13f0-48d1-987d-2420fde67f21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received event network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.688 253542 DEBUG oslo_concurrency.lockutils [req-d598cda7-0034-4be3-b42b-e34efe52221e req-78cb6083-13f0-48d1-987d-2420fde67f21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.688 253542 DEBUG oslo_concurrency.lockutils [req-d598cda7-0034-4be3-b42b-e34efe52221e req-78cb6083-13f0-48d1-987d-2420fde67f21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.688 253542 DEBUG oslo_concurrency.lockutils [req-d598cda7-0034-4be3-b42b-e34efe52221e req-78cb6083-13f0-48d1-987d-2420fde67f21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.688 253542 DEBUG nova.compute.manager [req-d598cda7-0034-4be3-b42b-e34efe52221e req-78cb6083-13f0-48d1-987d-2420fde67f21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Processing event network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.693 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.898 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059118.897475, 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.898 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] VM Started (Lifecycle Event)
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.901 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.910 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.918 253542 INFO nova.virt.libvirt.driver [-] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Instance spawned successfully.
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.918 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.929 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.934 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.958 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.958 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059118.8977208, 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.958 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] VM Paused (Lifecycle Event)
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.964 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.964 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.965 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.965 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.966 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:18 compute-0 nova_compute[253538]: 2025-11-25 08:25:18.966 253542 DEBUG nova.virt.libvirt.driver [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 219 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 5.0 MiB/s wr, 136 op/s
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.016 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.019 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059118.9048998, 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.019 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] VM Resumed (Lifecycle Event)
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.042 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.044 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.054 253542 INFO nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Took 7.12 seconds to spawn the instance on the hypervisor.
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.055 253542 DEBUG nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.079 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.113 253542 INFO nova.compute.manager [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Took 8.36 seconds to build instance.
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.127 253542 DEBUG oslo_concurrency.lockutils [None req-9f33985b-68e9-4e9b-980c-be3fef1a34e0 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:25:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713389299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.148 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.153 253542 DEBUG nova.compute.provider_tree [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.166 253542 DEBUG nova.scheduler.client.report [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.190 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.191 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:25:19 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:25:19 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:25:19 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/713389299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.230 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.231 253542 DEBUG nova.network.neutron [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.244 253542 INFO nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.266 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.283 253542 DEBUG nova.network.neutron [req-a4d7c72b-a628-4cc5-b63a-7d0a60797013 req-196991de-e32f-466d-bcf2-e0fdb462f89a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Updated VIF entry in instance network info cache for port a0a9c956-aaa5-4981-a1d9-ae896cea7b7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.283 253542 DEBUG nova.network.neutron [req-a4d7c72b-a628-4cc5-b63a-7d0a60797013 req-196991de-e32f-466d-bcf2-e0fdb462f89a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Updating instance_info_cache with network_info: [{"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.302 253542 DEBUG oslo_concurrency.lockutils [req-a4d7c72b-a628-4cc5-b63a-7d0a60797013 req-196991de-e32f-466d-bcf2-e0fdb462f89a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.403 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.405 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.405 253542 INFO nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Creating image(s)
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.436 253542 DEBUG nova.storage.rbd_utils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.477 253542 DEBUG nova.storage.rbd_utils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.504 253542 DEBUG nova.storage.rbd_utils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.509 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.558 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.588 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.589 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.591 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.591 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.622 253542 DEBUG nova.storage.rbd_utils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.630 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.749 253542 DEBUG nova.policy [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd61511e82c674abeb4ba87a4e5c5bf9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d3671fc1a3f4b319a62f23168a9df72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.902 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:19 compute-0 nova_compute[253538]: 2025-11-25 08:25:19.957 253542 DEBUG nova.storage.rbd_utils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] resizing rbd image 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:25:20 compute-0 nova_compute[253538]: 2025-11-25 08:25:20.046 253542 DEBUG nova.objects.instance [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lazy-loading 'migration_context' on Instance uuid 1664fad5-765c-4ecc-93e2-6f96c7fb6d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:20 compute-0 nova_compute[253538]: 2025-11-25 08:25:20.057 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:25:20 compute-0 nova_compute[253538]: 2025-11-25 08:25:20.058 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Ensure instance console log exists: /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:25:20 compute-0 nova_compute[253538]: 2025-11-25 08:25:20.058 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:20 compute-0 nova_compute[253538]: 2025-11-25 08:25:20.058 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:20 compute-0 nova_compute[253538]: 2025-11-25 08:25:20.059 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:20 compute-0 ceph-mon[75015]: pgmap v1196: 321 pgs: 321 active+clean; 219 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 5.0 MiB/s wr, 136 op/s
Nov 25 08:25:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:20.255 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:25:20 compute-0 nova_compute[253538]: 2025-11-25 08:25:20.255 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:20.256 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:25:20 compute-0 nova_compute[253538]: 2025-11-25 08:25:20.570 253542 DEBUG nova.network.neutron [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Successfully created port: fdb3703c-f8da-4c10-9784-ed63bfe93fe1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:25:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 236 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 772 KiB/s rd, 5.9 MiB/s wr, 157 op/s
Nov 25 08:25:21 compute-0 nova_compute[253538]: 2025-11-25 08:25:21.141 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059106.1397002, 8e6c81c4-a422-42e4-950b-66fa6411c1eb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:21 compute-0 nova_compute[253538]: 2025-11-25 08:25:21.141 253542 INFO nova.compute.manager [-] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] VM Stopped (Lifecycle Event)
Nov 25 08:25:21 compute-0 nova_compute[253538]: 2025-11-25 08:25:21.163 253542 DEBUG nova.compute.manager [None req-b34467f4-01ab-4090-876e-68531eb66e24 - - - - - -] [instance: 8e6c81c4-a422-42e4-950b-66fa6411c1eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:21.258 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:21 compute-0 nova_compute[253538]: 2025-11-25 08:25:21.283 253542 DEBUG nova.compute.manager [req-f9821de4-a471-411c-9dd3-2c7c7edca002 req-c453527d-1cfb-4e4b-9e8d-04a1ec1c67d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received event network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:21 compute-0 nova_compute[253538]: 2025-11-25 08:25:21.283 253542 DEBUG oslo_concurrency.lockutils [req-f9821de4-a471-411c-9dd3-2c7c7edca002 req-c453527d-1cfb-4e4b-9e8d-04a1ec1c67d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:21 compute-0 nova_compute[253538]: 2025-11-25 08:25:21.283 253542 DEBUG oslo_concurrency.lockutils [req-f9821de4-a471-411c-9dd3-2c7c7edca002 req-c453527d-1cfb-4e4b-9e8d-04a1ec1c67d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:21 compute-0 nova_compute[253538]: 2025-11-25 08:25:21.284 253542 DEBUG oslo_concurrency.lockutils [req-f9821de4-a471-411c-9dd3-2c7c7edca002 req-c453527d-1cfb-4e4b-9e8d-04a1ec1c67d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:21 compute-0 nova_compute[253538]: 2025-11-25 08:25:21.284 253542 DEBUG nova.compute.manager [req-f9821de4-a471-411c-9dd3-2c7c7edca002 req-c453527d-1cfb-4e4b-9e8d-04a1ec1c67d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] No waiting events found dispatching network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:25:21 compute-0 nova_compute[253538]: 2025-11-25 08:25:21.284 253542 WARNING nova.compute.manager [req-f9821de4-a471-411c-9dd3-2c7c7edca002 req-c453527d-1cfb-4e4b-9e8d-04a1ec1c67d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received unexpected event network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c for instance with vm_state active and task_state None.
Nov 25 08:25:22 compute-0 nova_compute[253538]: 2025-11-25 08:25:22.057 253542 DEBUG nova.network.neutron [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Successfully updated port: fdb3703c-f8da-4c10-9784-ed63bfe93fe1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:25:22 compute-0 nova_compute[253538]: 2025-11-25 08:25:22.089 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:22 compute-0 nova_compute[253538]: 2025-11-25 08:25:22.089 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquired lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:22 compute-0 nova_compute[253538]: 2025-11-25 08:25:22.089 253542 DEBUG nova.network.neutron [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:25:22 compute-0 nova_compute[253538]: 2025-11-25 08:25:22.194 253542 DEBUG nova.compute.manager [req-c6aa6b31-0de2-49d5-9d04-393703a2d5cc req-b372d0ad-9092-4770-855c-ff8e8332c7f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:22 compute-0 nova_compute[253538]: 2025-11-25 08:25:22.195 253542 DEBUG nova.compute.manager [req-c6aa6b31-0de2-49d5-9d04-393703a2d5cc req-b372d0ad-9092-4770-855c-ff8e8332c7f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing instance network info cache due to event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:25:22 compute-0 nova_compute[253538]: 2025-11-25 08:25:22.195 253542 DEBUG oslo_concurrency.lockutils [req-c6aa6b31-0de2-49d5-9d04-393703a2d5cc req-b372d0ad-9092-4770-855c-ff8e8332c7f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:22 compute-0 nova_compute[253538]: 2025-11-25 08:25:22.201 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:22 compute-0 nova_compute[253538]: 2025-11-25 08:25:22.281 253542 DEBUG nova.network.neutron [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:25:22 compute-0 ceph-mon[75015]: pgmap v1197: 321 pgs: 321 active+clean; 236 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 772 KiB/s rd, 5.9 MiB/s wr, 157 op/s
Nov 25 08:25:22 compute-0 nova_compute[253538]: 2025-11-25 08:25:22.757 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:22 compute-0 nova_compute[253538]: 2025-11-25 08:25:22.757 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:22 compute-0 nova_compute[253538]: 2025-11-25 08:25:22.770 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:25:22 compute-0 nova_compute[253538]: 2025-11-25 08:25:22.936 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:22 compute-0 nova_compute[253538]: 2025-11-25 08:25:22.936 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:22 compute-0 nova_compute[253538]: 2025-11-25 08:25:22.942 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:25:22 compute-0 nova_compute[253538]: 2025-11-25 08:25:22.943 253542 INFO nova.compute.claims [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:25:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 255 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 6.3 MiB/s wr, 184 op/s
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.149 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:25:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:25:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:25:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:25:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:25:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:25:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:25:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684200752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.575 253542 DEBUG nova.network.neutron [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updating instance_info_cache with network_info: [{"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.581 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.590 253542 DEBUG nova.compute.provider_tree [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.596 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Releasing lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.596 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Instance network_info: |[{"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.597 253542 DEBUG oslo_concurrency.lockutils [req-c6aa6b31-0de2-49d5-9d04-393703a2d5cc req-b372d0ad-9092-4770-855c-ff8e8332c7f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.598 253542 DEBUG nova.network.neutron [req-c6aa6b31-0de2-49d5-9d04-393703a2d5cc req-b372d0ad-9092-4770-855c-ff8e8332c7f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.602 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Start _get_guest_xml network_info=[{"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.606 253542 DEBUG nova.scheduler.client.report [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.620 253542 WARNING nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.629 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.643 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.648 253542 DEBUG nova.virt.libvirt.host [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.649 253542 DEBUG nova.virt.libvirt.host [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.652 253542 DEBUG nova.virt.libvirt.host [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.652 253542 DEBUG nova.virt.libvirt.host [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.653 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.653 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.654 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.654 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.654 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.655 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.655 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.655 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.655 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.655 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.656 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.656 253542 DEBUG nova.virt.hardware [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.660 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.695 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.695 253542 DEBUG nova.network.neutron [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.709 253542 INFO nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.726 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.802 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.803 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.804 253542 INFO nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Creating image(s)
Nov 25 08:25:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.851 253542 DEBUG nova.storage.rbd_utils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:23 compute-0 ceph-mon[75015]: pgmap v1198: 321 pgs: 321 active+clean; 255 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 6.3 MiB/s wr, 184 op/s
Nov 25 08:25:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2684200752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.922 253542 DEBUG nova.storage.rbd_utils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.946 253542 DEBUG nova.storage.rbd_utils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.950 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:23 compute-0 nova_compute[253538]: 2025-11-25 08:25:23.983 253542 DEBUG nova.policy [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '76d3377d398a4214a77bc0eb91638ec5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '65a2f983cce14453b2dc9251a520f289', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.009 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.010 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.011 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.011 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.033 253542 DEBUG nova.storage.rbd_utils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.036 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:25:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/355161595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.087 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.109 253542 DEBUG nova.storage.rbd_utils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.112 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:25:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115453552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.516 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.520 253542 DEBUG nova.virt.libvirt.vif [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1794284611',display_name='tempest-FloatingIPsAssociationTestJSON-server-1794284611',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1794284611',id=19,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d3671fc1a3f4b319a62f23168a9df72',ramdisk_id='',reservation_id='r-f695m80c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1833680054',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1833680054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:19Z,user_data=None,user_id='d61511e82c674abeb4ba87a4e5c5bf9d',uuid=1664fad5-765c-4ecc-93e2-6f96c7fb6d44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.520 253542 DEBUG nova.network.os_vif_util [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converting VIF {"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.522 253542 DEBUG nova.network.os_vif_util [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8b:90,bridge_name='br-int',has_traffic_filtering=True,id=fdb3703c-f8da-4c10-9784-ed63bfe93fe1,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb3703c-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.524 253542 DEBUG nova.objects.instance [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1664fad5-765c-4ecc-93e2-6f96c7fb6d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.547 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:25:24 compute-0 nova_compute[253538]:   <uuid>1664fad5-765c-4ecc-93e2-6f96c7fb6d44</uuid>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   <name>instance-00000013</name>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1794284611</nova:name>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:25:23</nova:creationTime>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:25:24 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:25:24 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:25:24 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:25:24 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:25:24 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:25:24 compute-0 nova_compute[253538]:         <nova:user uuid="d61511e82c674abeb4ba87a4e5c5bf9d">tempest-FloatingIPsAssociationTestJSON-1833680054-project-member</nova:user>
Nov 25 08:25:24 compute-0 nova_compute[253538]:         <nova:project uuid="2d3671fc1a3f4b319a62f23168a9df72">tempest-FloatingIPsAssociationTestJSON-1833680054</nova:project>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:25:24 compute-0 nova_compute[253538]:         <nova:port uuid="fdb3703c-f8da-4c10-9784-ed63bfe93fe1">
Nov 25 08:25:24 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <system>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <entry name="serial">1664fad5-765c-4ecc-93e2-6f96c7fb6d44</entry>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <entry name="uuid">1664fad5-765c-4ecc-93e2-6f96c7fb6d44</entry>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     </system>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   <os>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   </os>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   <features>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   </features>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk">
Nov 25 08:25:24 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       </source>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:25:24 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk.config">
Nov 25 08:25:24 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       </source>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:25:24 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:f0:8b:90"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <target dev="tapfdb3703c-f8"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44/console.log" append="off"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <video>
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     </video>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:25:24 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:25:24 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:25:24 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:25:24 compute-0 nova_compute[253538]: </domain>
Nov 25 08:25:24 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.548 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Preparing to wait for external event network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.549 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.549 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.550 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.551 253542 DEBUG nova.virt.libvirt.vif [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1794284611',display_name='tempest-FloatingIPsAssociationTestJSON-server-1794284611',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1794284611',id=19,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d3671fc1a3f4b319a62f23168a9df72',ramdisk_id='',reservation_id='r-f695m80c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1833680054',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1833680054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:19Z,user_data=None,user_id='d61511e82c674abeb4ba87a4e5c5bf9d',uuid=1664fad5-765c-4ecc-93e2-6f96c7fb6d44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.552 253542 DEBUG nova.network.os_vif_util [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converting VIF {"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.553 253542 DEBUG nova.network.os_vif_util [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8b:90,bridge_name='br-int',has_traffic_filtering=True,id=fdb3703c-f8da-4c10-9784-ed63bfe93fe1,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb3703c-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.553 253542 DEBUG os_vif [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8b:90,bridge_name='br-int',has_traffic_filtering=True,id=fdb3703c-f8da-4c10-9784-ed63bfe93fe1,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb3703c-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.555 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.556 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.557 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.559 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.563 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.563 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdb3703c-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.564 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfdb3703c-f8, col_values=(('external_ids', {'iface-id': 'fdb3703c-f8da-4c10-9784-ed63bfe93fe1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:8b:90', 'vm-uuid': '1664fad5-765c-4ecc-93e2-6f96c7fb6d44'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:24 compute-0 NetworkManager[48915]: <info>  [1764059124.5670] manager: (tapfdb3703c-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.566 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.569 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.571 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.573 253542 INFO os_vif [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8b:90,bridge_name='br-int',has_traffic_filtering=True,id=fdb3703c-f8da-4c10-9784-ed63bfe93fe1,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb3703c-f8')
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.637 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.637 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.638 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] No VIF found with MAC fa:16:3e:f0:8b:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.639 253542 INFO nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Using config drive
Nov 25 08:25:24 compute-0 nova_compute[253538]: 2025-11-25 08:25:24.674 253542 DEBUG nova.storage.rbd_utils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:24 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/355161595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:24 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4115453552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 293 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.8 MiB/s wr, 255 op/s
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.085 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.153 253542 DEBUG nova.storage.rbd_utils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] resizing rbd image 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.280 253542 DEBUG nova.network.neutron [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Successfully created port: 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.287 253542 DEBUG nova.objects.instance [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'migration_context' on Instance uuid 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.304 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.304 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Ensure instance console log exists: /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.305 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.305 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.305 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.331 253542 INFO nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Creating config drive at /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44/disk.config
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.336 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl750rea7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.479 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl750rea7" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.504 253542 DEBUG nova.storage.rbd_utils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.507 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44/disk.config 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.527 253542 DEBUG nova.network.neutron [req-c6aa6b31-0de2-49d5-9d04-393703a2d5cc req-b372d0ad-9092-4770-855c-ff8e8332c7f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updated VIF entry in instance network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.528 253542 DEBUG nova.network.neutron [req-c6aa6b31-0de2-49d5-9d04-393703a2d5cc req-b372d0ad-9092-4770-855c-ff8e8332c7f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updating instance_info_cache with network_info: [{"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.545 253542 DEBUG oslo_concurrency.lockutils [req-c6aa6b31-0de2-49d5-9d04-393703a2d5cc req-b372d0ad-9092-4770-855c-ff8e8332c7f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.910 253542 DEBUG oslo_concurrency.processutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44/disk.config 1664fad5-765c-4ecc-93e2-6f96c7fb6d44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.911 253542 INFO nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Deleting local config drive /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44/disk.config because it was imported into RBD.
Nov 25 08:25:25 compute-0 kernel: tapfdb3703c-f8: entered promiscuous mode
Nov 25 08:25:25 compute-0 NetworkManager[48915]: <info>  [1764059125.9610] manager: (tapfdb3703c-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Nov 25 08:25:25 compute-0 nova_compute[253538]: 2025-11-25 08:25:25.963 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:25 compute-0 ovn_controller[152859]: 2025-11-25T08:25:25Z|00059|binding|INFO|Claiming lport fdb3703c-f8da-4c10-9784-ed63bfe93fe1 for this chassis.
Nov 25 08:25:25 compute-0 ovn_controller[152859]: 2025-11-25T08:25:25Z|00060|binding|INFO|fdb3703c-f8da-4c10-9784-ed63bfe93fe1: Claiming fa:16:3e:f0:8b:90 10.100.0.11
Nov 25 08:25:25 compute-0 ceph-mon[75015]: pgmap v1199: 321 pgs: 321 active+clean; 293 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.8 MiB/s wr, 255 op/s
Nov 25 08:25:25 compute-0 systemd-udevd[281326]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:25:26 compute-0 NetworkManager[48915]: <info>  [1764059126.0137] device (tapfdb3703c-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:25:26 compute-0 NetworkManager[48915]: <info>  [1764059126.0148] device (tapfdb3703c-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.017 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:8b:90 10.100.0.11'], port_security=['fa:16:3e:f0:8b:90 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1664fad5-765c-4ecc-93e2-6f96c7fb6d44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d3671fc1a3f4b319a62f23168a9df72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '487cd315-808b-491d-a4f3-9b5dcda46b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddf7b92c-cc8c-4886-ba67-90d06b198671, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=fdb3703c-f8da-4c10-9784-ed63bfe93fe1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.018 162739 INFO neutron.agent.ovn.metadata.agent [-] Port fdb3703c-f8da-4c10-9784-ed63bfe93fe1 in datapath f86a7b06-d9db-4462-bd9b-8ad648dec7f4 bound to our chassis
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.020 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f86a7b06-d9db-4462-bd9b-8ad648dec7f4
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.034 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2394fe-55e5-4d96-a5a4-bd35f20c2dbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.035 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf86a7b06-d1 in ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.036 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf86a7b06-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.036 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[624240c0-b9e0-40c1-a1ef-4f6345609b51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.037 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9fd287-4d5f-4cd2-9d55-f69b7161b8ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:26 compute-0 systemd-machined[215790]: New machine qemu-21-instance-00000013.
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.048 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c7a39d-ef6d-451c-94b8-d264935045f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:26 compute-0 nova_compute[253538]: 2025-11-25 08:25:26.052 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:26 compute-0 ovn_controller[152859]: 2025-11-25T08:25:26Z|00061|binding|INFO|Setting lport fdb3703c-f8da-4c10-9784-ed63bfe93fe1 ovn-installed in OVS
Nov 25 08:25:26 compute-0 ovn_controller[152859]: 2025-11-25T08:25:26Z|00062|binding|INFO|Setting lport fdb3703c-f8da-4c10-9784-ed63bfe93fe1 up in Southbound
Nov 25 08:25:26 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000013.
Nov 25 08:25:26 compute-0 nova_compute[253538]: 2025-11-25 08:25:26.056 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.070 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[645c382c-1e90-473b-9c6c-c21f9ab4307d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.096 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b34fd4-cf9a-4701-a1ae-533e7950bf07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.101 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bc5823fa-8c1e-43c5-b3e4-12d82e646d54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:26 compute-0 NetworkManager[48915]: <info>  [1764059126.1021] manager: (tapf86a7b06-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.136 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e092b3ef-4b92-429c-804b-d40a378c4cf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.141 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c75a8dbf-500d-4c34-a05c-da0eaf98da46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:26 compute-0 NetworkManager[48915]: <info>  [1764059126.1666] device (tapf86a7b06-d0): carrier: link connected
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.174 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd296ed-c498-48ce-bed8-fda0e4b3bdfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.190 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b56b75c9-1c97-4a71-a3b8-4d6c342d6632]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf86a7b06-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:7b:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448076, 'reachable_time': 32795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281361, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.203 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0d8fba91-6709-4840-84db-c572ce044b9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe10:7b45'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448076, 'tstamp': 448076}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281363, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.216 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3263cff7-21be-4ce5-a65a-dedf6a37ada3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf86a7b06-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:7b:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448076, 'reachable_time': 32795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281364, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.253 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6120951e-49e6-47cd-8f4f-093aec4fa8b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.316 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b442cb94-be02-48f4-a0b6-cbb2f27943b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.318 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86a7b06-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.318 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.319 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf86a7b06-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:26 compute-0 NetworkManager[48915]: <info>  [1764059126.3214] manager: (tapf86a7b06-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 25 08:25:26 compute-0 nova_compute[253538]: 2025-11-25 08:25:26.320 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:26 compute-0 kernel: tapf86a7b06-d0: entered promiscuous mode
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.323 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf86a7b06-d0, col_values=(('external_ids', {'iface-id': '156cbe7f-b9cd-46c4-9552-1484f581cde6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:26 compute-0 ovn_controller[152859]: 2025-11-25T08:25:26Z|00063|binding|INFO|Releasing lport 156cbe7f-b9cd-46c4-9552-1484f581cde6 from this chassis (sb_readonly=0)
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.342 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f86a7b06-d9db-4462-bd9b-8ad648dec7f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f86a7b06-d9db-4462-bd9b-8ad648dec7f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:25:26 compute-0 nova_compute[253538]: 2025-11-25 08:25:26.342 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.344 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f9e6d35-a3fc-430f-9283-2499eebae732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.344 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-f86a7b06-d9db-4462-bd9b-8ad648dec7f4
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/f86a7b06-d9db-4462-bd9b-8ad648dec7f4.pid.haproxy
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID f86a7b06-d9db-4462-bd9b-8ad648dec7f4
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:25:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:26.346 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'env', 'PROCESS_TAG=haproxy-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f86a7b06-d9db-4462-bd9b-8ad648dec7f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:25:26 compute-0 nova_compute[253538]: 2025-11-25 08:25:26.524 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059126.5242057, 1664fad5-765c-4ecc-93e2-6f96c7fb6d44 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:26 compute-0 nova_compute[253538]: 2025-11-25 08:25:26.525 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] VM Started (Lifecycle Event)
Nov 25 08:25:26 compute-0 nova_compute[253538]: 2025-11-25 08:25:26.548 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:26 compute-0 nova_compute[253538]: 2025-11-25 08:25:26.552 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059126.5244079, 1664fad5-765c-4ecc-93e2-6f96c7fb6d44 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:26 compute-0 nova_compute[253538]: 2025-11-25 08:25:26.552 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] VM Paused (Lifecycle Event)
Nov 25 08:25:26 compute-0 nova_compute[253538]: 2025-11-25 08:25:26.568 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:26 compute-0 nova_compute[253538]: 2025-11-25 08:25:26.572 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:25:26 compute-0 nova_compute[253538]: 2025-11-25 08:25:26.589 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:25:26 compute-0 podman[281438]: 2025-11-25 08:25:26.771502287 +0000 UTC m=+0.067979883 container create e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:25:26 compute-0 systemd[1]: Started libpod-conmon-e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84.scope.
Nov 25 08:25:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:25:26 compute-0 podman[281438]: 2025-11-25 08:25:26.742232756 +0000 UTC m=+0.038710362 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd67979dba2879b6102202e2641e51318f111cb23f4aeb0a647e6c7456f9106/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:25:26 compute-0 podman[281438]: 2025-11-25 08:25:26.842950714 +0000 UTC m=+0.139428310 container init e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 08:25:26 compute-0 podman[281450]: 2025-11-25 08:25:26.844473987 +0000 UTC m=+0.090976170 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:25:26 compute-0 podman[281438]: 2025-11-25 08:25:26.849952988 +0000 UTC m=+0.146430584 container start e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 08:25:26 compute-0 neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4[281468]: [NOTICE]   (281474) : New worker (281476) forked
Nov 25 08:25:26 compute-0 neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4[281468]: [NOTICE]   (281474) : Loading success.
Nov 25 08:25:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 304 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.5 MiB/s wr, 215 op/s
Nov 25 08:25:27 compute-0 nova_compute[253538]: 2025-11-25 08:25:27.527 253542 DEBUG nova.network.neutron [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Successfully updated port: 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:25:27 compute-0 nova_compute[253538]: 2025-11-25 08:25:27.543 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:27 compute-0 nova_compute[253538]: 2025-11-25 08:25:27.543 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquired lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:27 compute-0 nova_compute[253538]: 2025-11-25 08:25:27.543 253542 DEBUG nova.network.neutron [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:25:27 compute-0 nova_compute[253538]: 2025-11-25 08:25:27.747 253542 DEBUG nova.network.neutron [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:25:27 compute-0 nova_compute[253538]: 2025-11-25 08:25:27.761 253542 DEBUG nova.compute.manager [req-e2014a23-e9ea-4a75-9e0d-e6b5107b03ad req-1272014a-dda3-421c-8f0f-8fe3e699f901 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received event network-changed-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:27 compute-0 nova_compute[253538]: 2025-11-25 08:25:27.762 253542 DEBUG nova.compute.manager [req-e2014a23-e9ea-4a75-9e0d-e6b5107b03ad req-1272014a-dda3-421c-8f0f-8fe3e699f901 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Refreshing instance network info cache due to event network-changed-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:25:27 compute-0 nova_compute[253538]: 2025-11-25 08:25:27.763 253542 DEBUG oslo_concurrency.lockutils [req-e2014a23-e9ea-4a75-9e0d-e6b5107b03ad req-1272014a-dda3-421c-8f0f-8fe3e699f901 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:28 compute-0 ceph-mon[75015]: pgmap v1200: 321 pgs: 321 active+clean; 304 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.5 MiB/s wr, 215 op/s
Nov 25 08:25:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:25:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 324 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.7 MiB/s wr, 200 op/s
Nov 25 08:25:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:25:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4095158638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:25:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:25:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4095158638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:25:29 compute-0 nova_compute[253538]: 2025-11-25 08:25:29.561 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:29 compute-0 nova_compute[253538]: 2025-11-25 08:25:29.565 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:29 compute-0 podman[281485]: 2025-11-25 08:25:29.808064721 +0000 UTC m=+0.057965296 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.021 253542 DEBUG nova.network.neutron [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Updating instance_info_cache with network_info: [{"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.047 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Releasing lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.048 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Instance network_info: |[{"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.050 253542 DEBUG oslo_concurrency.lockutils [req-e2014a23-e9ea-4a75-9e0d-e6b5107b03ad req-1272014a-dda3-421c-8f0f-8fe3e699f901 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.051 253542 DEBUG nova.network.neutron [req-e2014a23-e9ea-4a75-9e0d-e6b5107b03ad req-1272014a-dda3-421c-8f0f-8fe3e699f901 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Refreshing network info cache for port 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.056 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Start _get_guest_xml network_info=[{"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:25:30 compute-0 ceph-mon[75015]: pgmap v1201: 321 pgs: 321 active+clean; 324 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.7 MiB/s wr, 200 op/s
Nov 25 08:25:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/4095158638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:25:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/4095158638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.064 253542 DEBUG nova.compute.manager [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.065 253542 DEBUG oslo_concurrency.lockutils [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.066 253542 DEBUG oslo_concurrency.lockutils [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.066 253542 DEBUG oslo_concurrency.lockutils [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.067 253542 DEBUG nova.compute.manager [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Processing event network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.067 253542 DEBUG nova.compute.manager [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.068 253542 DEBUG oslo_concurrency.lockutils [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.068 253542 DEBUG oslo_concurrency.lockutils [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.069 253542 DEBUG oslo_concurrency.lockutils [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.069 253542 DEBUG nova.compute.manager [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] No waiting events found dispatching network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.070 253542 WARNING nova.compute.manager [req-80c7cd52-6d53-4e9b-a3b7-07e38aa69622 req-126880cf-7b17-42f9-9c47-93c923a9d959 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received unexpected event network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 for instance with vm_state building and task_state spawning.
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.074 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.079 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059130.0794368, 1664fad5-765c-4ecc-93e2-6f96c7fb6d44 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.080 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] VM Resumed (Lifecycle Event)
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.082 253542 WARNING nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.083 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.087 253542 INFO nova.virt.libvirt.driver [-] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Instance spawned successfully.
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.088 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.090 253542 DEBUG nova.virt.libvirt.host [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.091 253542 DEBUG nova.virt.libvirt.host [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.095 253542 DEBUG nova.virt.libvirt.host [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.095 253542 DEBUG nova.virt.libvirt.host [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.096 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.097 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.098 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.098 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.098 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.099 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.099 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.099 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.099 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.100 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.100 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.100 253542 DEBUG nova.virt.hardware [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.103 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.132 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.138 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.145 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.146 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.146 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.147 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.147 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.148 253542 DEBUG nova.virt.libvirt.driver [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.167 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.199 253542 INFO nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Took 10.80 seconds to spawn the instance on the hypervisor.
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.200 253542 DEBUG nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.268 253542 INFO nova.compute.manager [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Took 11.82 seconds to build instance.
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.291 253542 DEBUG oslo_concurrency.lockutils [None req-fd954489-8e65-4c0e-94f5-d710e8a29ad8 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:25:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1404325393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.605 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.628 253542 DEBUG nova.storage.rbd_utils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:30 compute-0 nova_compute[253538]: 2025-11-25 08:25:30.632 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 339 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 194 op/s
Nov 25 08:25:31 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1404325393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:25:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/56515670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.149 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.152 253542 DEBUG nova.virt.libvirt.vif [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-485239503',display_name='tempest-ServersAdminTestJSON-server-485239503',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-485239503',id=20,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-bxwnte0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:23Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.153 253542 DEBUG nova.network.os_vif_util [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.154 253542 DEBUG nova.network.os_vif_util [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:3c:73,bridge_name='br-int',has_traffic_filtering=True,id=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817e9f9b-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.156 253542 DEBUG nova.objects.instance [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'pci_devices' on Instance uuid 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.172 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:25:31 compute-0 nova_compute[253538]:   <uuid>740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2</uuid>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   <name>instance-00000014</name>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersAdminTestJSON-server-485239503</nova:name>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:25:30</nova:creationTime>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:25:31 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:25:31 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:25:31 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:25:31 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:25:31 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:25:31 compute-0 nova_compute[253538]:         <nova:user uuid="76d3377d398a4214a77bc0eb91638ec5">tempest-ServersAdminTestJSON-857664825-project-member</nova:user>
Nov 25 08:25:31 compute-0 nova_compute[253538]:         <nova:project uuid="65a2f983cce14453b2dc9251a520f289">tempest-ServersAdminTestJSON-857664825</nova:project>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:25:31 compute-0 nova_compute[253538]:         <nova:port uuid="817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8">
Nov 25 08:25:31 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <system>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <entry name="serial">740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2</entry>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <entry name="uuid">740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2</entry>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     </system>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   <os>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   </os>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   <features>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   </features>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk">
Nov 25 08:25:31 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       </source>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:25:31 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk.config">
Nov 25 08:25:31 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       </source>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:25:31 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:50:3c:73"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <target dev="tap817e9f9b-9d"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2/console.log" append="off"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <video>
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     </video>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:25:31 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:25:31 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:25:31 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:25:31 compute-0 nova_compute[253538]: </domain>
Nov 25 08:25:31 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.178 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Preparing to wait for external event network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.179 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.179 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.179 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.180 253542 DEBUG nova.virt.libvirt.vif [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-485239503',display_name='tempest-ServersAdminTestJSON-server-485239503',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-485239503',id=20,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-bxwnte0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON
-857664825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:23Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.180 253542 DEBUG nova.network.os_vif_util [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.181 253542 DEBUG nova.network.os_vif_util [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:3c:73,bridge_name='br-int',has_traffic_filtering=True,id=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817e9f9b-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.182 253542 DEBUG os_vif [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:3c:73,bridge_name='br-int',has_traffic_filtering=True,id=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817e9f9b-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.182 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.183 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.183 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.187 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.188 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap817e9f9b-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.188 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap817e9f9b-9d, col_values=(('external_ids', {'iface-id': '817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:3c:73', 'vm-uuid': '740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.192 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:31 compute-0 NetworkManager[48915]: <info>  [1764059131.1938] manager: (tap817e9f9b-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.194 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.199 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.200 253542 INFO os_vif [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:3c:73,bridge_name='br-int',has_traffic_filtering=True,id=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817e9f9b-9d')
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.284 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.284 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.285 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No VIF found with MAC fa:16:3e:50:3c:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.285 253542 INFO nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Using config drive
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.313 253542 DEBUG nova.storage.rbd_utils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.853 253542 INFO nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Creating config drive at /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2/disk.config
Nov 25 08:25:31 compute-0 nova_compute[253538]: 2025-11-25 08:25:31.858 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6bynld62 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:32 compute-0 ceph-mon[75015]: pgmap v1202: 321 pgs: 321 active+clean; 339 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 194 op/s
Nov 25 08:25:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/56515670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:32 compute-0 nova_compute[253538]: 2025-11-25 08:25:32.288 253542 DEBUG nova.network.neutron [req-e2014a23-e9ea-4a75-9e0d-e6b5107b03ad req-1272014a-dda3-421c-8f0f-8fe3e699f901 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Updated VIF entry in instance network info cache for port 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:25:32 compute-0 nova_compute[253538]: 2025-11-25 08:25:32.289 253542 DEBUG nova.network.neutron [req-e2014a23-e9ea-4a75-9e0d-e6b5107b03ad req-1272014a-dda3-421c-8f0f-8fe3e699f901 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Updating instance_info_cache with network_info: [{"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:32 compute-0 nova_compute[253538]: 2025-11-25 08:25:32.309 253542 DEBUG oslo_concurrency.lockutils [req-e2014a23-e9ea-4a75-9e0d-e6b5107b03ad req-1272014a-dda3-421c-8f0f-8fe3e699f901 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:32 compute-0 nova_compute[253538]: 2025-11-25 08:25:32.395 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6bynld62" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:32 compute-0 nova_compute[253538]: 2025-11-25 08:25:32.436 253542 DEBUG nova.storage.rbd_utils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:32 compute-0 nova_compute[253538]: 2025-11-25 08:25:32.439 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2/disk.config 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:32 compute-0 nova_compute[253538]: 2025-11-25 08:25:32.590 253542 DEBUG oslo_concurrency.processutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2/disk.config 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:32 compute-0 nova_compute[253538]: 2025-11-25 08:25:32.592 253542 INFO nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Deleting local config drive /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2/disk.config because it was imported into RBD.
Nov 25 08:25:32 compute-0 kernel: tap817e9f9b-9d: entered promiscuous mode
Nov 25 08:25:32 compute-0 NetworkManager[48915]: <info>  [1764059132.6533] manager: (tap817e9f9b-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Nov 25 08:25:32 compute-0 ovn_controller[152859]: 2025-11-25T08:25:32Z|00064|binding|INFO|Claiming lport 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 for this chassis.
Nov 25 08:25:32 compute-0 ovn_controller[152859]: 2025-11-25T08:25:32Z|00065|binding|INFO|817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8: Claiming fa:16:3e:50:3c:73 10.100.0.4
Nov 25 08:25:32 compute-0 nova_compute[253538]: 2025-11-25 08:25:32.655 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.662 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:3c:73 10.100.0.4'], port_security=['fa:16:3e:50:3c:73 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:25:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.664 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 in datapath 93269c36-ab23-4d95-925a-798173550624 bound to our chassis
Nov 25 08:25:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.666 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624
Nov 25 08:25:32 compute-0 ovn_controller[152859]: 2025-11-25T08:25:32Z|00066|binding|INFO|Setting lport 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 ovn-installed in OVS
Nov 25 08:25:32 compute-0 ovn_controller[152859]: 2025-11-25T08:25:32Z|00067|binding|INFO|Setting lport 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 up in Southbound
Nov 25 08:25:32 compute-0 nova_compute[253538]: 2025-11-25 08:25:32.679 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:32 compute-0 nova_compute[253538]: 2025-11-25 08:25:32.687 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.687 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3da90c-6264-44bb-8c97-df611f05bdcf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:32 compute-0 systemd-machined[215790]: New machine qemu-22-instance-00000014.
Nov 25 08:25:32 compute-0 systemd-udevd[281643]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:25:32 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000014.
Nov 25 08:25:32 compute-0 NetworkManager[48915]: <info>  [1764059132.7212] device (tap817e9f9b-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:25:32 compute-0 NetworkManager[48915]: <info>  [1764059132.7249] device (tap817e9f9b-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:25:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.725 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0566b0e2-6cf8-482c-ad45-b5117582700a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.729 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[91f15788-4a4e-473c-a720-16181e1b74ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.760 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[324dc9f9-bca4-4cd8-ab28-8b6d07614d56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.779 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9a9774ab-247a-4aab-8bdb-a4d3f145060e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281652, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.807 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d473255-3e8b-4bb3-84eb-599b0be95b81]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281655, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281655, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.808 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:32 compute-0 nova_compute[253538]: 2025-11-25 08:25:32.813 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.813 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.814 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.814 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:32.814 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 350 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.4 MiB/s wr, 177 op/s
Nov 25 08:25:33 compute-0 nova_compute[253538]: 2025-11-25 08:25:33.063 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059133.0628486, 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:33 compute-0 nova_compute[253538]: 2025-11-25 08:25:33.063 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] VM Started (Lifecycle Event)
Nov 25 08:25:33 compute-0 nova_compute[253538]: 2025-11-25 08:25:33.083 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:33 compute-0 nova_compute[253538]: 2025-11-25 08:25:33.088 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059133.065332, 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:33 compute-0 nova_compute[253538]: 2025-11-25 08:25:33.088 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] VM Paused (Lifecycle Event)
Nov 25 08:25:33 compute-0 nova_compute[253538]: 2025-11-25 08:25:33.104 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:33 compute-0 nova_compute[253538]: 2025-11-25 08:25:33.107 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:25:33 compute-0 nova_compute[253538]: 2025-11-25 08:25:33.130 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:25:33 compute-0 ovn_controller[152859]: 2025-11-25T08:25:33Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:11:6b:81 10.100.0.10
Nov 25 08:25:33 compute-0 ovn_controller[152859]: 2025-11-25T08:25:33Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:6b:81 10.100.0.10
Nov 25 08:25:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:25:34 compute-0 ceph-mon[75015]: pgmap v1203: 321 pgs: 321 active+clean; 350 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.4 MiB/s wr, 177 op/s
Nov 25 08:25:34 compute-0 nova_compute[253538]: 2025-11-25 08:25:34.561 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 370 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.4 MiB/s wr, 235 op/s
Nov 25 08:25:35 compute-0 nova_compute[253538]: 2025-11-25 08:25:35.285 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "39580ba3-504b-4e17-b64f-f44ef66091da" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:35 compute-0 nova_compute[253538]: 2025-11-25 08:25:35.286 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:35 compute-0 nova_compute[253538]: 2025-11-25 08:25:35.310 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:25:35 compute-0 nova_compute[253538]: 2025-11-25 08:25:35.391 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:35 compute-0 nova_compute[253538]: 2025-11-25 08:25:35.392 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:35 compute-0 nova_compute[253538]: 2025-11-25 08:25:35.398 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:25:35 compute-0 nova_compute[253538]: 2025-11-25 08:25:35.398 253542 INFO nova.compute.claims [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:25:35 compute-0 nova_compute[253538]: 2025-11-25 08:25:35.612 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:25:36 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1124271192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.045 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.052 253542 DEBUG nova.compute.provider_tree [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.123 253542 DEBUG nova.scheduler.client.report [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.145 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.146 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:25:36 compute-0 ceph-mon[75015]: pgmap v1204: 321 pgs: 321 active+clean; 370 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.4 MiB/s wr, 235 op/s
Nov 25 08:25:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1124271192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.192 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.202 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.203 253542 DEBUG nova.network.neutron [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.236 253542 INFO nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.254 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.271 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.272 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.309 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.373 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.375 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.375 253542 INFO nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Creating image(s)
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.397 253542 DEBUG nova.storage.rbd_utils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 39580ba3-504b-4e17-b64f-f44ef66091da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.419 253542 DEBUG nova.storage.rbd_utils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 39580ba3-504b-4e17-b64f-f44ef66091da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.441 253542 DEBUG nova.storage.rbd_utils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 39580ba3-504b-4e17-b64f-f44ef66091da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.444 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.463 253542 DEBUG nova.policy [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd61511e82c674abeb4ba87a4e5c5bf9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d3671fc1a3f4b319a62f23168a9df72', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.480 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.482 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.490 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.490 253542 INFO nova.compute.claims [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.500 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.501 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.501 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.501 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.520 253542 DEBUG nova.storage.rbd_utils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 39580ba3-504b-4e17-b64f-f44ef66091da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.524 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 39580ba3-504b-4e17-b64f-f44ef66091da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.588 253542 DEBUG nova.compute.manager [req-1880f91f-80f5-4dfc-90d5-6d60ec4b64d5 req-b228b342-4d2a-4913-a372-ef9147ea1a74 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received event network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.589 253542 DEBUG oslo_concurrency.lockutils [req-1880f91f-80f5-4dfc-90d5-6d60ec4b64d5 req-b228b342-4d2a-4913-a372-ef9147ea1a74 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.589 253542 DEBUG oslo_concurrency.lockutils [req-1880f91f-80f5-4dfc-90d5-6d60ec4b64d5 req-b228b342-4d2a-4913-a372-ef9147ea1a74 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.589 253542 DEBUG oslo_concurrency.lockutils [req-1880f91f-80f5-4dfc-90d5-6d60ec4b64d5 req-b228b342-4d2a-4913-a372-ef9147ea1a74 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.589 253542 DEBUG nova.compute.manager [req-1880f91f-80f5-4dfc-90d5-6d60ec4b64d5 req-b228b342-4d2a-4913-a372-ef9147ea1a74 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Processing event network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.590 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.618 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059136.6022937, 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.618 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] VM Resumed (Lifecycle Event)
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.620 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.624 253542 INFO nova.virt.libvirt.driver [-] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Instance spawned successfully.
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.625 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.639 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.653 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.663 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.663 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.664 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.665 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.665 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.666 253542 DEBUG nova.virt.libvirt.driver [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.674 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.725 253542 INFO nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Took 12.92 seconds to spawn the instance on the hypervisor.
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.725 253542 DEBUG nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.757 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.792 253542 INFO nova.compute.manager [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Took 13.88 seconds to build instance.
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.808 253542 DEBUG oslo_concurrency.lockutils [None req-4f59ed76-8f96-42c9-8127-ec70884ea40a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.817 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 39580ba3-504b-4e17-b64f-f44ef66091da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:36 compute-0 podman[281815]: 2025-11-25 08:25:36.876554161 +0000 UTC m=+0.104372689 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:25:36 compute-0 nova_compute[253538]: 2025-11-25 08:25:36.919 253542 DEBUG nova.storage.rbd_utils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] resizing rbd image 39580ba3-504b-4e17-b64f-f44ef66091da_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:25:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 390 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.9 MiB/s wr, 186 op/s
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.024 253542 DEBUG nova.objects.instance [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lazy-loading 'migration_context' on Instance uuid 39580ba3-504b-4e17-b64f-f44ef66091da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.035 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.036 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Ensure instance console log exists: /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.037 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.037 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.037 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.063 253542 DEBUG nova.network.neutron [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Successfully created port: f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:25:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:25:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4014580120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.205 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.212 253542 DEBUG nova.compute.provider_tree [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.232 253542 DEBUG nova.scheduler.client.report [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.253 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.254 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.306 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.306 253542 DEBUG nova.network.neutron [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.325 253542 INFO nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.342 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.421 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.423 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.423 253542 INFO nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Creating image(s)
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.449 253542 DEBUG nova.storage.rbd_utils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] rbd image ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.479 253542 DEBUG nova.storage.rbd_utils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] rbd image ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.506 253542 DEBUG nova.storage.rbd_utils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] rbd image ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.510 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.589 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.590 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.591 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.591 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.628 253542 DEBUG nova.storage.rbd_utils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] rbd image ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.631 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:37 compute-0 nova_compute[253538]: 2025-11-25 08:25:37.715 253542 DEBUG nova.policy [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c53798457642457e8c93278c6bbae0b7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4ea3de796e6464fbf65835dc4c3ad79', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.126 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:38 compute-0 ceph-mon[75015]: pgmap v1205: 321 pgs: 321 active+clean; 390 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.9 MiB/s wr, 186 op/s
Nov 25 08:25:38 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4014580120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.182 253542 DEBUG nova.storage.rbd_utils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] resizing rbd image ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.406 253542 DEBUG nova.network.neutron [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Successfully updated port: f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.423 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.423 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquired lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.424 253542 DEBUG nova.network.neutron [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.577 253542 DEBUG nova.network.neutron [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Successfully created port: 6283ff13-d854-41d6-8a7a-eab602cc4cf4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.630 253542 DEBUG nova.network.neutron [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.696 253542 DEBUG nova.objects.instance [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lazy-loading 'migration_context' on Instance uuid ceb93a9d-5e18-4351-9cfa-3949c00b448a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.708 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.708 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Ensure instance console log exists: /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.708 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.709 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.709 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.758 253542 DEBUG nova.compute.manager [req-11e21f0b-6dca-44f1-b18a-ac1deecb514f req-b2d4d5ba-a814-47e3-a59f-0f8bc3e95993 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-changed-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.758 253542 DEBUG nova.compute.manager [req-11e21f0b-6dca-44f1-b18a-ac1deecb514f req-b2d4d5ba-a814-47e3-a59f-0f8bc3e95993 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Refreshing instance network info cache due to event network-changed-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.758 253542 DEBUG oslo_concurrency.lockutils [req-11e21f0b-6dca-44f1-b18a-ac1deecb514f req-b2d4d5ba-a814-47e3-a59f-0f8bc3e95993 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.951 253542 DEBUG nova.compute.manager [req-4973dde1-d4f7-4a7b-b360-cc328c9718f4 req-699c4fca-6ff8-4977-b9ea-81070012400a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received event network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.951 253542 DEBUG oslo_concurrency.lockutils [req-4973dde1-d4f7-4a7b-b360-cc328c9718f4 req-699c4fca-6ff8-4977-b9ea-81070012400a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.953 253542 DEBUG oslo_concurrency.lockutils [req-4973dde1-d4f7-4a7b-b360-cc328c9718f4 req-699c4fca-6ff8-4977-b9ea-81070012400a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.953 253542 DEBUG oslo_concurrency.lockutils [req-4973dde1-d4f7-4a7b-b360-cc328c9718f4 req-699c4fca-6ff8-4977-b9ea-81070012400a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.953 253542 DEBUG nova.compute.manager [req-4973dde1-d4f7-4a7b-b360-cc328c9718f4 req-699c4fca-6ff8-4977-b9ea-81070012400a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] No waiting events found dispatching network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:25:38 compute-0 nova_compute[253538]: 2025-11-25 08:25:38.953 253542 WARNING nova.compute.manager [req-4973dde1-d4f7-4a7b-b360-cc328c9718f4 req-699c4fca-6ff8-4977-b9ea-81070012400a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received unexpected event network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 for instance with vm_state active and task_state None.
Nov 25 08:25:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 412 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.1 MiB/s wr, 228 op/s
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.046 253542 DEBUG oslo_concurrency.lockutils [None req-72e3d350-4653-4ee9-889f-4e099416522c fa9e1d63b2ec4332a2c7a39b0a87664a 998870165086474c97c02b065e3e6837 - - default default] Acquiring lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.046 253542 DEBUG oslo_concurrency.lockutils [None req-72e3d350-4653-4ee9-889f-4e099416522c fa9e1d63b2ec4332a2c7a39b0a87664a 998870165086474c97c02b065e3e6837 - - default default] Acquired lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.046 253542 DEBUG nova.network.neutron [None req-72e3d350-4653-4ee9-889f-4e099416522c fa9e1d63b2ec4332a2c7a39b0a87664a 998870165086474c97c02b065e3e6837 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.606 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.798 253542 DEBUG nova.network.neutron [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Updating instance_info_cache with network_info: [{"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.812 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Releasing lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.812 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Instance network_info: |[{"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.813 253542 DEBUG oslo_concurrency.lockutils [req-11e21f0b-6dca-44f1-b18a-ac1deecb514f req-b2d4d5ba-a814-47e3-a59f-0f8bc3e95993 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.813 253542 DEBUG nova.network.neutron [req-11e21f0b-6dca-44f1-b18a-ac1deecb514f req-b2d4d5ba-a814-47e3-a59f-0f8bc3e95993 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Refreshing network info cache for port f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.817 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Start _get_guest_xml network_info=[{"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.821 253542 WARNING nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.829 253542 DEBUG nova.virt.libvirt.host [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.829 253542 DEBUG nova.virt.libvirt.host [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.838 253542 DEBUG nova.virt.libvirt.host [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.838 253542 DEBUG nova.virt.libvirt.host [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.838 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.838 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.839 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.839 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.839 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.839 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.839 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.840 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.840 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.840 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.840 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.840 253542 DEBUG nova.virt.hardware [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:25:39 compute-0 nova_compute[253538]: 2025-11-25 08:25:39.843 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.078 253542 DEBUG nova.network.neutron [None req-72e3d350-4653-4ee9-889f-4e099416522c fa9e1d63b2ec4332a2c7a39b0a87664a 998870165086474c97c02b065e3e6837 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Updating instance_info_cache with network_info: [{"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.094 253542 DEBUG oslo_concurrency.lockutils [None req-72e3d350-4653-4ee9-889f-4e099416522c fa9e1d63b2ec4332a2c7a39b0a87664a 998870165086474c97c02b065e3e6837 - - default default] Releasing lock "refresh_cache-740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.094 253542 DEBUG nova.compute.manager [None req-72e3d350-4653-4ee9-889f-4e099416522c fa9e1d63b2ec4332a2c7a39b0a87664a 998870165086474c97c02b065e3e6837 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.095 253542 DEBUG nova.compute.manager [None req-72e3d350-4653-4ee9-889f-4e099416522c fa9e1d63b2ec4332a2c7a39b0a87664a 998870165086474c97c02b065e3e6837 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] network_info to inject: |[{"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Nov 25 08:25:40 compute-0 ceph-mon[75015]: pgmap v1206: 321 pgs: 321 active+clean; 412 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.1 MiB/s wr, 228 op/s
Nov 25 08:25:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:25:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4248823452' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.319 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.340 253542 DEBUG nova.storage.rbd_utils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 39580ba3-504b-4e17-b64f-f44ef66091da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.344 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.369 253542 DEBUG nova.network.neutron [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Successfully updated port: 6283ff13-d854-41d6-8a7a-eab602cc4cf4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.390 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.390 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquired lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.390 253542 DEBUG nova.network.neutron [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.562 253542 DEBUG nova.network.neutron [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:25:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:25:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2657543922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.750 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.751 253542 DEBUG nova.virt.libvirt.vif [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1841402445',display_name='tempest-FloatingIPsAssociationTestJSON-server-1841402445',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1841402445',id=21,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d3671fc1a3f4b319a62f23168a9df72',ramdisk_id='',reservation_id='r-19c31s1c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1833680054',owner_user_
name='tempest-FloatingIPsAssociationTestJSON-1833680054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:36Z,user_data=None,user_id='d61511e82c674abeb4ba87a4e5c5bf9d',uuid=39580ba3-504b-4e17-b64f-f44ef66091da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.751 253542 DEBUG nova.network.os_vif_util [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converting VIF {"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.752 253542 DEBUG nova.network.os_vif_util [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:b8:f5,bridge_name='br-int',has_traffic_filtering=True,id=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf718b0c0-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.753 253542 DEBUG nova.objects.instance [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 39580ba3-504b-4e17-b64f-f44ef66091da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.767 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:25:40 compute-0 nova_compute[253538]:   <uuid>39580ba3-504b-4e17-b64f-f44ef66091da</uuid>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   <name>instance-00000015</name>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1841402445</nova:name>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:25:39</nova:creationTime>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:25:40 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:25:40 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:25:40 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:25:40 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:25:40 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:25:40 compute-0 nova_compute[253538]:         <nova:user uuid="d61511e82c674abeb4ba87a4e5c5bf9d">tempest-FloatingIPsAssociationTestJSON-1833680054-project-member</nova:user>
Nov 25 08:25:40 compute-0 nova_compute[253538]:         <nova:project uuid="2d3671fc1a3f4b319a62f23168a9df72">tempest-FloatingIPsAssociationTestJSON-1833680054</nova:project>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:25:40 compute-0 nova_compute[253538]:         <nova:port uuid="f718b0c0-ca1b-4f5d-aa70-3d1f48097b97">
Nov 25 08:25:40 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <system>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <entry name="serial">39580ba3-504b-4e17-b64f-f44ef66091da</entry>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <entry name="uuid">39580ba3-504b-4e17-b64f-f44ef66091da</entry>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     </system>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   <os>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   </os>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   <features>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   </features>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/39580ba3-504b-4e17-b64f-f44ef66091da_disk">
Nov 25 08:25:40 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       </source>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:25:40 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/39580ba3-504b-4e17-b64f-f44ef66091da_disk.config">
Nov 25 08:25:40 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       </source>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:25:40 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:60:b8:f5"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <target dev="tapf718b0c0-ca"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da/console.log" append="off"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <video>
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     </video>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:25:40 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:25:40 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:25:40 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:25:40 compute-0 nova_compute[253538]: </domain>
Nov 25 08:25:40 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.768 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Preparing to wait for external event network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.768 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.768 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.768 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.769 253542 DEBUG nova.virt.libvirt.vif [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1841402445',display_name='tempest-FloatingIPsAssociationTestJSON-server-1841402445',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1841402445',id=21,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d3671fc1a3f4b319a62f23168a9df72',ramdisk_id='',reservation_id='r-19c31s1c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1833680054',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1833680054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:36Z,user_data=None,user_id='d61511e82c674abeb4ba87a4e5c5bf9d',uuid=39580ba3-504b-4e17-b64f-f44ef66091da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.769 253542 DEBUG nova.network.os_vif_util [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converting VIF {"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.770 253542 DEBUG nova.network.os_vif_util [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:b8:f5,bridge_name='br-int',has_traffic_filtering=True,id=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf718b0c0-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.770 253542 DEBUG os_vif [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:b8:f5,bridge_name='br-int',has_traffic_filtering=True,id=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf718b0c0-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.771 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.771 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.771 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.774 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf718b0c0-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.774 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf718b0c0-ca, col_values=(('external_ids', {'iface-id': 'f718b0c0-ca1b-4f5d-aa70-3d1f48097b97', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:b8:f5', 'vm-uuid': '39580ba3-504b-4e17-b64f-f44ef66091da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.776 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:40 compute-0 NetworkManager[48915]: <info>  [1764059140.7770] manager: (tapf718b0c0-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.778 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.782 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.782 253542 INFO os_vif [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:b8:f5,bridge_name='br-int',has_traffic_filtering=True,id=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf718b0c0-ca')
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.832 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.832 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.832 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] No VIF found with MAC fa:16:3e:60:b8:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.833 253542 INFO nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Using config drive
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.853 253542 DEBUG nova.storage.rbd_utils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 39580ba3-504b-4e17-b64f-f44ef66091da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.861 253542 DEBUG nova.compute.manager [req-d77d0d92-b3bb-47df-a20f-d698fe4195d7 req-cdca7883-9149-4913-aa8e-af3576b866ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-changed-6283ff13-d854-41d6-8a7a-eab602cc4cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.861 253542 DEBUG nova.compute.manager [req-d77d0d92-b3bb-47df-a20f-d698fe4195d7 req-cdca7883-9149-4913-aa8e-af3576b866ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Refreshing instance network info cache due to event network-changed-6283ff13-d854-41d6-8a7a-eab602cc4cf4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:25:40 compute-0 nova_compute[253538]: 2025-11-25 08:25:40.861 253542 DEBUG oslo_concurrency.lockutils [req-d77d0d92-b3bb-47df-a20f-d698fe4195d7 req-cdca7883-9149-4913-aa8e-af3576b866ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 450 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.8 MiB/s wr, 247 op/s
Nov 25 08:25:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:41.051 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:41.051 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:41.052 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4248823452' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2657543922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.362 253542 INFO nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Creating config drive at /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da/disk.config
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.369 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnrowkyrm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.499 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnrowkyrm" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.527 253542 DEBUG nova.storage.rbd_utils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] rbd image 39580ba3-504b-4e17-b64f-f44ef66091da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.531 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da/disk.config 39580ba3-504b-4e17-b64f-f44ef66091da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.705 253542 DEBUG nova.network.neutron [req-11e21f0b-6dca-44f1-b18a-ac1deecb514f req-b2d4d5ba-a814-47e3-a59f-0f8bc3e95993 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Updated VIF entry in instance network info cache for port f718b0c0-ca1b-4f5d-aa70-3d1f48097b97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.706 253542 DEBUG nova.network.neutron [req-11e21f0b-6dca-44f1-b18a-ac1deecb514f req-b2d4d5ba-a814-47e3-a59f-0f8bc3e95993 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Updating instance_info_cache with network_info: [{"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.709 253542 DEBUG nova.network.neutron [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Updating instance_info_cache with network_info: [{"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.733 253542 DEBUG oslo_concurrency.lockutils [req-11e21f0b-6dca-44f1-b18a-ac1deecb514f req-b2d4d5ba-a814-47e3-a59f-0f8bc3e95993 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.921 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Releasing lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.921 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Instance network_info: |[{"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.922 253542 DEBUG oslo_concurrency.lockutils [req-d77d0d92-b3bb-47df-a20f-d698fe4195d7 req-cdca7883-9149-4913-aa8e-af3576b866ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.922 253542 DEBUG nova.network.neutron [req-d77d0d92-b3bb-47df-a20f-d698fe4195d7 req-cdca7883-9149-4913-aa8e-af3576b866ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Refreshing network info cache for port 6283ff13-d854-41d6-8a7a-eab602cc4cf4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.924 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Start _get_guest_xml network_info=[{"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.929 253542 WARNING nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.935 253542 DEBUG nova.virt.libvirt.host [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.936 253542 DEBUG nova.virt.libvirt.host [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.943 253542 DEBUG nova.virt.libvirt.host [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.943 253542 DEBUG nova.virt.libvirt.host [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.944 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.944 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.945 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.945 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.945 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.945 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.946 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.946 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.946 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.946 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.947 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.947 253542 DEBUG nova.virt.hardware [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:25:41 compute-0 nova_compute[253538]: 2025-11-25 08:25:41.951 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:42 compute-0 ceph-mon[75015]: pgmap v1207: 321 pgs: 321 active+clean; 450 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.8 MiB/s wr, 247 op/s
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.368 253542 DEBUG oslo_concurrency.processutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da/disk.config 39580ba3-504b-4e17-b64f-f44ef66091da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.837s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.369 253542 INFO nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Deleting local config drive /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da/disk.config because it was imported into RBD.
Nov 25 08:25:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:25:42 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2770839393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:42 compute-0 kernel: tapf718b0c0-ca: entered promiscuous mode
Nov 25 08:25:42 compute-0 NetworkManager[48915]: <info>  [1764059142.4154] manager: (tapf718b0c0-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.421 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:42 compute-0 ovn_controller[152859]: 2025-11-25T08:25:42Z|00068|binding|INFO|Claiming lport f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 for this chassis.
Nov 25 08:25:42 compute-0 ovn_controller[152859]: 2025-11-25T08:25:42Z|00069|binding|INFO|f718b0c0-ca1b-4f5d-aa70-3d1f48097b97: Claiming fa:16:3e:60:b8:f5 10.100.0.3
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.423 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.429 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b8:f5 10.100.0.3'], port_security=['fa:16:3e:60:b8:f5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '39580ba3-504b-4e17-b64f-f44ef66091da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d3671fc1a3f4b319a62f23168a9df72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '487cd315-808b-491d-a4f3-9b5dcda46b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddf7b92c-cc8c-4886-ba67-90d06b198671, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:25:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.430 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 in datapath f86a7b06-d9db-4462-bd9b-8ad648dec7f4 bound to our chassis
Nov 25 08:25:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.433 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f86a7b06-d9db-4462-bd9b-8ad648dec7f4
Nov 25 08:25:42 compute-0 ovn_controller[152859]: 2025-11-25T08:25:42Z|00070|binding|INFO|Setting lport f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 ovn-installed in OVS
Nov 25 08:25:42 compute-0 ovn_controller[152859]: 2025-11-25T08:25:42Z|00071|binding|INFO|Setting lport f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 up in Southbound
Nov 25 08:25:42 compute-0 systemd-udevd[282268]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:25:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.461 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4c50ac45-ec8c-4046-bf57-72abce74a397]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:42 compute-0 NetworkManager[48915]: <info>  [1764059142.4649] device (tapf718b0c0-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:25:42 compute-0 NetworkManager[48915]: <info>  [1764059142.4661] device (tapf718b0c0-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:25:42 compute-0 systemd-machined[215790]: New machine qemu-23-instance-00000015.
Nov 25 08:25:42 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000015.
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.478 253542 DEBUG nova.storage.rbd_utils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] rbd image ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.485 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.500 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b26c8c82-0395-46bf-a8c9-8fd8381a6cc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.506 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.507 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[41824de4-5e93-4889-8065-24dc5df6a641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.534 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b4c0c8-7da9-4e20-a6de-d37e3b96fbe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.551 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3f019c31-85ac-4e6f-bdf0-d0bee32c1d8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf86a7b06-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:7b:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448076, 'reachable_time': 32795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282290, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.567 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e8379f66-ccdc-4a9a-b119-f9a5344d73e6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf86a7b06-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448087, 'tstamp': 448087}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282291, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf86a7b06-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448090, 'tstamp': 448090}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282291, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.568 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86a7b06-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.569 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.570 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.571 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf86a7b06-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.571 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.571 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf86a7b06-d0, col_values=(('external_ids', {'iface-id': '156cbe7f-b9cd-46c4-9552-1484f581cde6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:42.572 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.872 253542 DEBUG nova.network.neutron [req-d77d0d92-b3bb-47df-a20f-d698fe4195d7 req-cdca7883-9149-4913-aa8e-af3576b866ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Updated VIF entry in instance network info cache for port 6283ff13-d854-41d6-8a7a-eab602cc4cf4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.872 253542 DEBUG nova.network.neutron [req-d77d0d92-b3bb-47df-a20f-d698fe4195d7 req-cdca7883-9149-4913-aa8e-af3576b866ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Updating instance_info_cache with network_info: [{"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.888 253542 DEBUG oslo_concurrency.lockutils [req-d77d0d92-b3bb-47df-a20f-d698fe4195d7 req-cdca7883-9149-4913-aa8e-af3576b866ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.954 253542 DEBUG nova.compute.manager [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.955 253542 DEBUG oslo_concurrency.lockutils [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.955 253542 DEBUG oslo_concurrency.lockutils [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.956 253542 DEBUG oslo_concurrency.lockutils [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.956 253542 DEBUG nova.compute.manager [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Processing event network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.957 253542 DEBUG nova.compute.manager [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.959 253542 DEBUG oslo_concurrency.lockutils [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.959 253542 DEBUG oslo_concurrency.lockutils [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.959 253542 DEBUG oslo_concurrency.lockutils [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.959 253542 DEBUG nova.compute.manager [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] No waiting events found dispatching network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:25:42 compute-0 nova_compute[253538]: 2025-11-25 08:25:42.960 253542 WARNING nova.compute.manager [req-b90993ed-a68e-4355-8fd1-ed4b37bab08a req-af8a3eab-096c-42d5-b620-04abfa9b8975 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received unexpected event network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 for instance with vm_state building and task_state spawning.
Nov 25 08:25:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 465 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.9 MiB/s wr, 257 op/s
Nov 25 08:25:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:25:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4191226321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.026 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.027 253542 DEBUG nova.virt.libvirt.vif [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1269146307',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1269146307',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-126914630',id=22,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ea3de796e6464fbf65835dc4c3ad79',ramdisk_id='',reservation_id='r-jh0ldu4e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1564808270',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1564808270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:37Z,user_data=None,user_id='c53798457642457e8c93278c6bbae0b7',uuid=ceb93a9d-5e18-4351-9cfa-3949c00b448a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.028 253542 DEBUG nova.network.os_vif_util [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Converting VIF {"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.029 253542 DEBUG nova.network.os_vif_util [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:ac:37,bridge_name='br-int',has_traffic_filtering=True,id=6283ff13-d854-41d6-8a7a-eab602cc4cf4,network=Network(f0cb07bc-dc94-4b65-bb7f-100ce36c9428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6283ff13-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.030 253542 DEBUG nova.objects.instance [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lazy-loading 'pci_devices' on Instance uuid ceb93a9d-5e18-4351-9cfa-3949c00b448a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.050 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:25:43 compute-0 nova_compute[253538]:   <uuid>ceb93a9d-5e18-4351-9cfa-3949c00b448a</uuid>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   <name>instance-00000016</name>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-1269146307</nova:name>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:25:41</nova:creationTime>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:25:43 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:25:43 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:25:43 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:25:43 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:25:43 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:25:43 compute-0 nova_compute[253538]:         <nova:user uuid="c53798457642457e8c93278c6bbae0b7">tempest-FloatingIPsAssociationNegativeTestJSON-1564808270-project-member</nova:user>
Nov 25 08:25:43 compute-0 nova_compute[253538]:         <nova:project uuid="c4ea3de796e6464fbf65835dc4c3ad79">tempest-FloatingIPsAssociationNegativeTestJSON-1564808270</nova:project>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:25:43 compute-0 nova_compute[253538]:         <nova:port uuid="6283ff13-d854-41d6-8a7a-eab602cc4cf4">
Nov 25 08:25:43 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <system>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <entry name="serial">ceb93a9d-5e18-4351-9cfa-3949c00b448a</entry>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <entry name="uuid">ceb93a9d-5e18-4351-9cfa-3949c00b448a</entry>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     </system>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   <os>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   </os>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   <features>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   </features>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk">
Nov 25 08:25:43 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       </source>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:25:43 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk.config">
Nov 25 08:25:43 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       </source>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:25:43 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:57:ac:37"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <target dev="tap6283ff13-d8"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a/console.log" append="off"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <video>
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     </video>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:25:43 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:25:43 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:25:43 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:25:43 compute-0 nova_compute[253538]: </domain>
Nov 25 08:25:43 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.054 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Preparing to wait for external event network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.054 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.054 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.055 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.056 253542 DEBUG nova.virt.libvirt.vif [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1269146307',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1269146307',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-126914630',id=22,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ea3de796e6464fbf65835dc4c3ad79',ramdisk_id='',reservation_id='r-jh0ldu4e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1564808270',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1564808270-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:37Z,user_data=None,user_id='c53798457642457e8c93278c6bbae0b7',uuid=ceb93a9d-5e18-4351-9cfa-3949c00b448a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.056 253542 DEBUG nova.network.os_vif_util [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Converting VIF {"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.057 253542 DEBUG nova.network.os_vif_util [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:ac:37,bridge_name='br-int',has_traffic_filtering=True,id=6283ff13-d854-41d6-8a7a-eab602cc4cf4,network=Network(f0cb07bc-dc94-4b65-bb7f-100ce36c9428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6283ff13-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.057 253542 DEBUG os_vif [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:ac:37,bridge_name='br-int',has_traffic_filtering=True,id=6283ff13-d854-41d6-8a7a-eab602cc4cf4,network=Network(f0cb07bc-dc94-4b65-bb7f-100ce36c9428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6283ff13-d8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.058 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.059 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.059 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.062 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.062 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6283ff13-d8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.062 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6283ff13-d8, col_values=(('external_ids', {'iface-id': '6283ff13-d854-41d6-8a7a-eab602cc4cf4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:ac:37', 'vm-uuid': 'ceb93a9d-5e18-4351-9cfa-3949c00b448a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.064 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:43 compute-0 NetworkManager[48915]: <info>  [1764059143.0652] manager: (tap6283ff13-d8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.069 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.069 253542 INFO os_vif [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:ac:37,bridge_name='br-int',has_traffic_filtering=True,id=6283ff13-d854-41d6-8a7a-eab602cc4cf4,network=Network(f0cb07bc-dc94-4b65-bb7f-100ce36c9428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6283ff13-d8')
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.195 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.196 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.196 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] No VIF found with MAC fa:16:3e:57:ac:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.197 253542 INFO nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Using config drive
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.228 253542 DEBUG nova.storage.rbd_utils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] rbd image ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.349 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.349 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059143.3487282, 39580ba3-504b-4e17-b64f-f44ef66091da => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.350 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] VM Started (Lifecycle Event)
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.353 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.358 253542 INFO nova.virt.libvirt.driver [-] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Instance spawned successfully.
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.358 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.370 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.379 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.384 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.385 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.385 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.386 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.387 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.387 253542 DEBUG nova.virt.libvirt.driver [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.394 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.395 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059143.3504753, 39580ba3-504b-4e17-b64f-f44ef66091da => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.395 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] VM Paused (Lifecycle Event)
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.413 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.416 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059143.352884, 39580ba3-504b-4e17-b64f-f44ef66091da => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.416 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] VM Resumed (Lifecycle Event)
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.440 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.444 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.458 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.503 253542 INFO nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Took 7.13 seconds to spawn the instance on the hypervisor.
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.504 253542 DEBUG nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2770839393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4191226321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.590 253542 INFO nova.compute.manager [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Took 8.22 seconds to build instance.
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.616 253542 DEBUG oslo_concurrency.lockutils [None req-975b261d-151e-4285-95bb-765b22dea8e7 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:43 compute-0 ovn_controller[152859]: 2025-11-25T08:25:43Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:8b:90 10.100.0.11
Nov 25 08:25:43 compute-0 ovn_controller[152859]: 2025-11-25T08:25:43Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:8b:90 10.100.0.11
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.840 253542 INFO nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Creating config drive at /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a/disk.config
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.846 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9yr98jz_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:25:43 compute-0 nova_compute[253538]: 2025-11-25 08:25:43.977 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9yr98jz_" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.016 253542 DEBUG nova.storage.rbd_utils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] rbd image ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.021 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a/disk.config ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.086 253542 INFO nova.compute.manager [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Rebuilding instance
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.311 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.324 253542 DEBUG nova.compute.manager [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.391 253542 DEBUG oslo_concurrency.processutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a/disk.config ceb93a9d-5e18-4351-9cfa-3949c00b448a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.370s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.393 253542 INFO nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Deleting local config drive /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a/disk.config because it was imported into RBD.
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.412 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'pci_requests' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.421 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.432 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'resources' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.443 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'migration_context' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:44 compute-0 kernel: tap6283ff13-d8: entered promiscuous mode
Nov 25 08:25:44 compute-0 NetworkManager[48915]: <info>  [1764059144.4492] manager: (tap6283ff13-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.451 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:25:44 compute-0 ovn_controller[152859]: 2025-11-25T08:25:44Z|00072|binding|INFO|Claiming lport 6283ff13-d854-41d6-8a7a-eab602cc4cf4 for this chassis.
Nov 25 08:25:44 compute-0 ovn_controller[152859]: 2025-11-25T08:25:44Z|00073|binding|INFO|6283ff13-d854-41d6-8a7a-eab602cc4cf4: Claiming fa:16:3e:57:ac:37 10.100.0.3
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.458 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:44 compute-0 NetworkManager[48915]: <info>  [1764059144.4685] device (tap6283ff13-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:25:44 compute-0 NetworkManager[48915]: <info>  [1764059144.4694] device (tap6283ff13-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.476 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:25:44 compute-0 systemd-machined[215790]: New machine qemu-24-instance-00000016.
Nov 25 08:25:44 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000016.
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.502 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:ac:37 10.100.0.3'], port_security=['fa:16:3e:57:ac:37 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ceb93a9d-5e18-4351-9cfa-3949c00b448a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0cb07bc-dc94-4b65-bb7f-100ce36c9428', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ea3de796e6464fbf65835dc4c3ad79', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c614f14d-ba5c-4351-9110-1ad24f7c46f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d60c6aa2-d509-48b2-b548-58ea5b315827, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6283ff13-d854-41d6-8a7a-eab602cc4cf4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.503 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6283ff13-d854-41d6-8a7a-eab602cc4cf4 in datapath f0cb07bc-dc94-4b65-bb7f-100ce36c9428 bound to our chassis
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.505 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0cb07bc-dc94-4b65-bb7f-100ce36c9428
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.517 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f18d6e-dcf3-4fb9-9617-b959bd44dab2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.517 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf0cb07bc-d1 in ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.519 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf0cb07bc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.519 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[35d0604f-2ad2-4dcf-b2b9-e431888ad3b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.520 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[205a2420-6962-4215-ad1e-98b6cdab0254]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.531 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[2367a0d7-ec1d-4dbe-bd0a-2133fd39f0bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.536 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:44 compute-0 ovn_controller[152859]: 2025-11-25T08:25:44Z|00074|binding|INFO|Setting lport 6283ff13-d854-41d6-8a7a-eab602cc4cf4 ovn-installed in OVS
Nov 25 08:25:44 compute-0 ovn_controller[152859]: 2025-11-25T08:25:44Z|00075|binding|INFO|Setting lport 6283ff13-d854-41d6-8a7a-eab602cc4cf4 up in Southbound
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.557 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d617b70-9cb4-43c8-9cb7-f73b2e282c7f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:44 compute-0 ceph-mon[75015]: pgmap v1208: 321 pgs: 321 active+clean; 465 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.9 MiB/s wr, 257 op/s
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.586 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[03222ada-b6a7-4a06-8a78-ae2beb4eec6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:44 compute-0 NetworkManager[48915]: <info>  [1764059144.5967] manager: (tapf0cb07bc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.597 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3be0acf6-907b-42c5-a66b-0b5ee855aaa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.611 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.637 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[329a8915-bbab-4d34-aa4f-5e95b97c54cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.640 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbe355d-163e-4618-b651-02d017832793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:44 compute-0 NetworkManager[48915]: <info>  [1764059144.6615] device (tapf0cb07bc-d0): carrier: link connected
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.667 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2131a6-261a-4b6e-8ae4-5b90d841cd22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.688 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[104c0139-177e-4f58-a9c2-595e455f2850]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0cb07bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cc:d7:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449926, 'reachable_time': 33256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282461, 'error': None, 'target': 'ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.706 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f9761f11-6053-4ed7-9a4e-1291f84a8213]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecc:d736'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449926, 'tstamp': 449926}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282462, 'error': None, 'target': 'ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.724 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea58f54-e059-46be-81cb-8979f36f4b8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0cb07bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cc:d7:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449926, 'reachable_time': 33256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282463, 'error': None, 'target': 'ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.757 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e0fc775c-5e4c-4050-8dd0-7c9fc3cebf66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.824 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2cd6f9-34f5-45a3-98d4-40bf3af09c60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.826 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0cb07bc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.826 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.826 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0cb07bc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:44 compute-0 NetworkManager[48915]: <info>  [1764059144.8288] manager: (tapf0cb07bc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.828 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:44 compute-0 kernel: tapf0cb07bc-d0: entered promiscuous mode
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.831 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.833 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0cb07bc-d0, col_values=(('external_ids', {'iface-id': '6ff33e52-8ce7-4c58-8236-5e210dda120f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:44 compute-0 ovn_controller[152859]: 2025-11-25T08:25:44Z|00076|binding|INFO|Releasing lport 6ff33e52-8ce7-4c58-8236-5e210dda120f from this chassis (sb_readonly=0)
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:44 compute-0 nova_compute[253538]: 2025-11-25 08:25:44.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.856 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f0cb07bc-dc94-4b65-bb7f-100ce36c9428.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f0cb07bc-dc94-4b65-bb7f-100ce36c9428.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.858 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[de5a4dc3-94e2-44a9-a543-bf5bd4e8aa1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.859 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-f0cb07bc-dc94-4b65-bb7f-100ce36c9428
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/f0cb07bc-dc94-4b65-bb7f-100ce36c9428.pid.haproxy
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID f0cb07bc-dc94-4b65-bb7f-100ce36c9428
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:25:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:44.860 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428', 'env', 'PROCESS_TAG=haproxy-f0cb07bc-dc94-4b65-bb7f-100ce36c9428', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f0cb07bc-dc94-4b65-bb7f-100ce36c9428.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:25:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 487 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.6 MiB/s wr, 291 op/s
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.148 253542 DEBUG nova.compute.manager [req-e4f1300b-48d4-4812-bcb8-42efc739cb53 req-59fa0ee5-cac6-4011-9d0e-298a0a93dcb7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.149 253542 DEBUG oslo_concurrency.lockutils [req-e4f1300b-48d4-4812-bcb8-42efc739cb53 req-59fa0ee5-cac6-4011-9d0e-298a0a93dcb7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.149 253542 DEBUG oslo_concurrency.lockutils [req-e4f1300b-48d4-4812-bcb8-42efc739cb53 req-59fa0ee5-cac6-4011-9d0e-298a0a93dcb7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.149 253542 DEBUG oslo_concurrency.lockutils [req-e4f1300b-48d4-4812-bcb8-42efc739cb53 req-59fa0ee5-cac6-4011-9d0e-298a0a93dcb7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.150 253542 DEBUG nova.compute.manager [req-e4f1300b-48d4-4812-bcb8-42efc739cb53 req-59fa0ee5-cac6-4011-9d0e-298a0a93dcb7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Processing event network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.273 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.274 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059145.2730155, ceb93a9d-5e18-4351-9cfa-3949c00b448a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.274 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] VM Started (Lifecycle Event)
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.280 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.293 253542 INFO nova.virt.libvirt.driver [-] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Instance spawned successfully.
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.293 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.297 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.315 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.321 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.322 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.322 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.322 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.323 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.323 253542 DEBUG nova.virt.libvirt.driver [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.348 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.349 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059145.2742052, ceb93a9d-5e18-4351-9cfa-3949c00b448a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.349 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] VM Paused (Lifecycle Event)
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.362 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.365 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059145.27616, ceb93a9d-5e18-4351-9cfa-3949c00b448a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.365 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] VM Resumed (Lifecycle Event)
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.379 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.386 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.401 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:25:45 compute-0 podman[282534]: 2025-11-25 08:25:45.43808362 +0000 UTC m=+0.030175076 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.549 253542 INFO nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Took 8.13 seconds to spawn the instance on the hypervisor.
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.550 253542 DEBUG nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:45 compute-0 podman[282534]: 2025-11-25 08:25:45.565676132 +0000 UTC m=+0.157767568 container create df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.627 253542 INFO nova.compute.manager [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Took 9.26 seconds to build instance.
Nov 25 08:25:45 compute-0 nova_compute[253538]: 2025-11-25 08:25:45.651 253542 DEBUG oslo_concurrency.lockutils [None req-e9b91fea-0bb5-4fc4-b71d-ebdcc710f0e6 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:45 compute-0 ceph-mon[75015]: pgmap v1209: 321 pgs: 321 active+clean; 487 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.6 MiB/s wr, 291 op/s
Nov 25 08:25:45 compute-0 systemd[1]: Started libpod-conmon-df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7.scope.
Nov 25 08:25:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f235619556de56030111cecddf940547ff0043e49f6964151133b56b1d88ee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:25:45 compute-0 podman[282534]: 2025-11-25 08:25:45.851020371 +0000 UTC m=+0.443111857 container init df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:25:45 compute-0 podman[282534]: 2025-11-25 08:25:45.859198948 +0000 UTC m=+0.451290404 container start df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:25:45 compute-0 neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428[282548]: [NOTICE]   (282554) : New worker (282556) forked
Nov 25 08:25:45 compute-0 neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428[282548]: [NOTICE]   (282554) : Loading success.
Nov 25 08:25:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 496 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 248 op/s
Nov 25 08:25:47 compute-0 kernel: tap4ad9572b-6a (unregistering): left promiscuous mode
Nov 25 08:25:47 compute-0 NetworkManager[48915]: <info>  [1764059147.2092] device (tap4ad9572b-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.228 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:47 compute-0 ovn_controller[152859]: 2025-11-25T08:25:47Z|00077|binding|INFO|Releasing lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe from this chassis (sb_readonly=0)
Nov 25 08:25:47 compute-0 ovn_controller[152859]: 2025-11-25T08:25:47Z|00078|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe down in Southbound
Nov 25 08:25:47 compute-0 ovn_controller[152859]: 2025-11-25T08:25:47Z|00079|binding|INFO|Removing iface tap4ad9572b-6a ovn-installed in OVS
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.237 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:0e:e0 10.100.0.6'], port_security=['fa:16:3e:5e:0e:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:25:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.238 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe in datapath 93269c36-ab23-4d95-925a-798173550624 unbound from our chassis
Nov 25 08:25:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.243 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.250 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.259 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d49d055c-1421-4774-b95a-62900599513f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:47 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 25 08:25:47 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Consumed 15.785s CPU time.
Nov 25 08:25:47 compute-0 systemd-machined[215790]: Machine qemu-18-instance-00000010 terminated.
Nov 25 08:25:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.291 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[675af06a-156c-4a4a-b32f-9160b8be0f12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.295 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[42dfdc23-e8bc-40f1-90ae-414906d44ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.324 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2afb00ba-a7d2-4e00-a050-a310a3a98603]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.343 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[897f6575-16f0-4bb4-be20-773baaa4244c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282576, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.351 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:47 compute-0 NetworkManager[48915]: <info>  [1764059147.3527] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Nov 25 08:25:47 compute-0 NetworkManager[48915]: <info>  [1764059147.3532] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Nov 25 08:25:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.358 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[53cbcafa-16ef-49f2-82fe-ea9b986bee27]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282578, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282578, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.359 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.360 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.463 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.465 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.465 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.465 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:47.466 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:47 compute-0 ovn_controller[152859]: 2025-11-25T08:25:47Z|00080|binding|INFO|Releasing lport 6ff33e52-8ce7-4c58-8236-5e210dda120f from this chassis (sb_readonly=0)
Nov 25 08:25:47 compute-0 ovn_controller[152859]: 2025-11-25T08:25:47Z|00081|binding|INFO|Releasing lport 156cbe7f-b9cd-46c4-9552-1484f581cde6 from this chassis (sb_readonly=0)
Nov 25 08:25:47 compute-0 ovn_controller[152859]: 2025-11-25T08:25:47Z|00082|binding|INFO|Releasing lport 52d2128c-19c6-4892-8ba5-cc8740039f5e from this chassis (sb_readonly=0)
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.488 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.501 253542 INFO nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance shutdown successfully after 3 seconds.
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.505 253542 INFO nova.virt.libvirt.driver [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance destroyed successfully.
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.509 253542 INFO nova.virt.libvirt.driver [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance destroyed successfully.
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.511 253542 DEBUG nova.virt.libvirt.vif [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_
state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:43Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.511 253542 DEBUG nova.network.os_vif_util [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.512 253542 DEBUG nova.network.os_vif_util [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.513 253542 DEBUG os_vif [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.514 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.515 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ad9572b-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.519 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:25:47 compute-0 nova_compute[253538]: 2025-11-25 08:25:47.521 253542 INFO os_vif [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a')
Nov 25 08:25:48 compute-0 ceph-mon[75015]: pgmap v1210: 321 pgs: 321 active+clean; 496 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 248 op/s
Nov 25 08:25:48 compute-0 nova_compute[253538]: 2025-11-25 08:25:48.160 253542 DEBUG nova.compute.manager [req-90627a71-3ed3-4ddf-93de-1a9d2f102f45 req-b318ee8f-dbab-4ebb-bb78-763c44333685 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:48 compute-0 nova_compute[253538]: 2025-11-25 08:25:48.161 253542 DEBUG oslo_concurrency.lockutils [req-90627a71-3ed3-4ddf-93de-1a9d2f102f45 req-b318ee8f-dbab-4ebb-bb78-763c44333685 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:48 compute-0 nova_compute[253538]: 2025-11-25 08:25:48.161 253542 DEBUG oslo_concurrency.lockutils [req-90627a71-3ed3-4ddf-93de-1a9d2f102f45 req-b318ee8f-dbab-4ebb-bb78-763c44333685 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:48 compute-0 nova_compute[253538]: 2025-11-25 08:25:48.162 253542 DEBUG oslo_concurrency.lockutils [req-90627a71-3ed3-4ddf-93de-1a9d2f102f45 req-b318ee8f-dbab-4ebb-bb78-763c44333685 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:48 compute-0 nova_compute[253538]: 2025-11-25 08:25:48.163 253542 DEBUG nova.compute.manager [req-90627a71-3ed3-4ddf-93de-1a9d2f102f45 req-b318ee8f-dbab-4ebb-bb78-763c44333685 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] No waiting events found dispatching network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:25:48 compute-0 nova_compute[253538]: 2025-11-25 08:25:48.163 253542 WARNING nova.compute.manager [req-90627a71-3ed3-4ddf-93de-1a9d2f102f45 req-b318ee8f-dbab-4ebb-bb78-763c44333685 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received unexpected event network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 for instance with vm_state active and task_state None.
Nov 25 08:25:48 compute-0 nova_compute[253538]: 2025-11-25 08:25:48.661 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:25:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 498 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.8 MiB/s wr, 274 op/s
Nov 25 08:25:49 compute-0 nova_compute[253538]: 2025-11-25 08:25:49.257 253542 DEBUG nova.compute.manager [req-4982b202-95a8-49b5-ac54-8fab6d8e7850 req-2235e8e6-07f5-476a-9b4a-f3c528cbbede b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:49 compute-0 nova_compute[253538]: 2025-11-25 08:25:49.257 253542 DEBUG nova.compute.manager [req-4982b202-95a8-49b5-ac54-8fab6d8e7850 req-2235e8e6-07f5-476a-9b4a-f3c528cbbede b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing instance network info cache due to event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:25:49 compute-0 nova_compute[253538]: 2025-11-25 08:25:49.258 253542 DEBUG oslo_concurrency.lockutils [req-4982b202-95a8-49b5-ac54-8fab6d8e7850 req-2235e8e6-07f5-476a-9b4a-f3c528cbbede b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:49 compute-0 nova_compute[253538]: 2025-11-25 08:25:49.258 253542 DEBUG oslo_concurrency.lockutils [req-4982b202-95a8-49b5-ac54-8fab6d8e7850 req-2235e8e6-07f5-476a-9b4a-f3c528cbbede b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:49 compute-0 nova_compute[253538]: 2025-11-25 08:25:49.258 253542 DEBUG nova.network.neutron [req-4982b202-95a8-49b5-ac54-8fab6d8e7850 req-2235e8e6-07f5-476a-9b4a-f3c528cbbede b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:25:49 compute-0 nova_compute[253538]: 2025-11-25 08:25:49.617 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:49 compute-0 nova_compute[253538]: 2025-11-25 08:25:49.894 253542 INFO nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deleting instance files /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_del
Nov 25 08:25:49 compute-0 nova_compute[253538]: 2025-11-25 08:25:49.895 253542 INFO nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deletion of /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_del complete
Nov 25 08:25:49 compute-0 ovn_controller[152859]: 2025-11-25T08:25:49Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:50:3c:73 10.100.0.4
Nov 25 08:25:49 compute-0 ovn_controller[152859]: 2025-11-25T08:25:49Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:50:3c:73 10.100.0.4
Nov 25 08:25:50 compute-0 ceph-mon[75015]: pgmap v1211: 321 pgs: 321 active+clean; 498 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.8 MiB/s wr, 274 op/s
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.400 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.401 253542 INFO nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating image(s)
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.430 253542 DEBUG nova.storage.rbd_utils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.464 253542 DEBUG nova.storage.rbd_utils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.485 253542 DEBUG nova.storage.rbd_utils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.488 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.564 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.565 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.565 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.566 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.588 253542 DEBUG nova.storage.rbd_utils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.592 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.813 253542 DEBUG nova.compute.manager [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.814 253542 DEBUG oslo_concurrency.lockutils [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.814 253542 DEBUG oslo_concurrency.lockutils [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.814 253542 DEBUG oslo_concurrency.lockutils [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.815 253542 DEBUG nova.compute.manager [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.815 253542 WARNING nova.compute.manager [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state error and task_state rebuild_spawning.
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.815 253542 DEBUG nova.compute.manager [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.815 253542 DEBUG oslo_concurrency.lockutils [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.815 253542 DEBUG oslo_concurrency.lockutils [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.816 253542 DEBUG oslo_concurrency.lockutils [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.816 253542 DEBUG nova.compute.manager [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.816 253542 WARNING nova.compute.manager [req-268b40a8-f919-4f1b-87f9-d12af86569ae req-dbeda999-65ef-4e89-b278-0818911bb33a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state error and task_state rebuild_spawning.
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.913 253542 DEBUG nova.network.neutron [req-4982b202-95a8-49b5-ac54-8fab6d8e7850 req-2235e8e6-07f5-476a-9b4a-f3c528cbbede b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updated VIF entry in instance network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.914 253542 DEBUG nova.network.neutron [req-4982b202-95a8-49b5-ac54-8fab6d8e7850 req-2235e8e6-07f5-476a-9b4a-f3c528cbbede b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updating instance_info_cache with network_info: [{"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:50 compute-0 nova_compute[253538]: 2025-11-25 08:25:50.927 253542 DEBUG oslo_concurrency.lockutils [req-4982b202-95a8-49b5-ac54-8fab6d8e7850 req-2235e8e6-07f5-476a-9b4a-f3c528cbbede b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 461 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.6 MiB/s wr, 296 op/s
Nov 25 08:25:51 compute-0 ceph-mon[75015]: pgmap v1212: 321 pgs: 321 active+clean; 461 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.6 MiB/s wr, 296 op/s
Nov 25 08:25:51 compute-0 nova_compute[253538]: 2025-11-25 08:25:51.778 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:51 compute-0 nova_compute[253538]: 2025-11-25 08:25:51.847 253542 DEBUG nova.storage.rbd_utils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] resizing rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.476 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.477 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Ensure instance console log exists: /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.477 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.478 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.478 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.481 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Start _get_guest_xml network_info=[{"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.485 253542 WARNING nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.492 253542 DEBUG nova.virt.libvirt.host [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.492 253542 DEBUG nova.virt.libvirt.host [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.496 253542 DEBUG nova.virt.libvirt.host [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.496 253542 DEBUG nova.virt.libvirt.host [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.497 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.497 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.498 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.498 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.498 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.498 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.498 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.499 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.499 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.499 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.499 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.500 253542 DEBUG nova.virt.hardware [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.500 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.515 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.536 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:25:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604412980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.933 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.974 253542 DEBUG nova.storage.rbd_utils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:52 compute-0 nova_compute[253538]: 2025-11-25 08:25:52.981 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 464 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.5 MiB/s wr, 282 op/s
Nov 25 08:25:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/604412980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:25:53
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'backups', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'volumes']
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:25:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:25:53 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3471726923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.416 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.418 253542 DEBUG nova.virt.libvirt.vif [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:50Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.418 253542 DEBUG nova.network.os_vif_util [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.419 253542 DEBUG nova.network.os_vif_util [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.422 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:25:53 compute-0 nova_compute[253538]:   <uuid>86bfa56f-56d0-4a5e-b0b2-302c375e37a3</uuid>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   <name>instance-00000010</name>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersAdminTestJSON-server-1649971692</nova:name>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:25:52</nova:creationTime>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:25:53 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:25:53 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:25:53 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:25:53 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:25:53 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:25:53 compute-0 nova_compute[253538]:         <nova:user uuid="76d3377d398a4214a77bc0eb91638ec5">tempest-ServersAdminTestJSON-857664825-project-member</nova:user>
Nov 25 08:25:53 compute-0 nova_compute[253538]:         <nova:project uuid="65a2f983cce14453b2dc9251a520f289">tempest-ServersAdminTestJSON-857664825</nova:project>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:25:53 compute-0 nova_compute[253538]:         <nova:port uuid="4ad9572b-6ac1-4659-8ea6-71b8a32c06fe">
Nov 25 08:25:53 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <system>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <entry name="serial">86bfa56f-56d0-4a5e-b0b2-302c375e37a3</entry>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <entry name="uuid">86bfa56f-56d0-4a5e-b0b2-302c375e37a3</entry>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     </system>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   <os>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   </os>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   <features>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   </features>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk">
Nov 25 08:25:53 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       </source>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:25:53 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config">
Nov 25 08:25:53 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       </source>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:25:53 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:5e:0e:e0"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <target dev="tap4ad9572b-6a"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/console.log" append="off"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <video>
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     </video>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:25:53 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:25:53 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:25:53 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:25:53 compute-0 nova_compute[253538]: </domain>
Nov 25 08:25:53 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.423 253542 DEBUG nova.virt.libvirt.vif [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:50Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.423 253542 DEBUG nova.network.os_vif_util [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.423 253542 DEBUG nova.network.os_vif_util [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.424 253542 DEBUG os_vif [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.424 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.425 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.425 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.429 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.429 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ad9572b-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.429 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4ad9572b-6a, col_values=(('external_ids', {'iface-id': '4ad9572b-6ac1-4659-8ea6-71b8a32c06fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:0e:e0', 'vm-uuid': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.431 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.434 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:25:53 compute-0 NetworkManager[48915]: <info>  [1764059153.4344] manager: (tap4ad9572b-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.436 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.436 253542 INFO os_vif [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a')
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.524 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.524 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.524 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No VIF found with MAC fa:16:3e:5e:0e:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.525 253542 INFO nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Using config drive
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.553 253542 DEBUG nova.storage.rbd_utils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.566 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.592 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'keypairs' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:25:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:25:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.860 253542 INFO nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating config drive at /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.866 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzqtpomag execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:53 compute-0 nova_compute[253538]: 2025-11-25 08:25:53.994 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzqtpomag" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.017 253542 DEBUG nova.storage.rbd_utils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.022 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:25:54 compute-0 ceph-mon[75015]: pgmap v1213: 321 pgs: 321 active+clean; 464 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.5 MiB/s wr, 282 op/s
Nov 25 08:25:54 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3471726923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.160 253542 DEBUG oslo_concurrency.processutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.161 253542 INFO nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deleting local config drive /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config because it was imported into RBD.
Nov 25 08:25:54 compute-0 NetworkManager[48915]: <info>  [1764059154.2026] manager: (tap4ad9572b-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Nov 25 08:25:54 compute-0 kernel: tap4ad9572b-6a: entered promiscuous mode
Nov 25 08:25:54 compute-0 ovn_controller[152859]: 2025-11-25T08:25:54Z|00083|binding|INFO|Claiming lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for this chassis.
Nov 25 08:25:54 compute-0 ovn_controller[152859]: 2025-11-25T08:25:54Z|00084|binding|INFO|4ad9572b-6ac1-4659-8ea6-71b8a32c06fe: Claiming fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.208 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.216 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:0e:e0 10.100.0.6'], port_security=['fa:16:3e:5e:0e:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:25:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.218 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe in datapath 93269c36-ab23-4d95-925a-798173550624 bound to our chassis
Nov 25 08:25:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.219 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624
Nov 25 08:25:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.233 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c5d1a6-9411-4bac-9823-f273cd744ea1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:54 compute-0 ovn_controller[152859]: 2025-11-25T08:25:54Z|00085|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe ovn-installed in OVS
Nov 25 08:25:54 compute-0 ovn_controller[152859]: 2025-11-25T08:25:54Z|00086|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe up in Southbound
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.252 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:54 compute-0 systemd-machined[215790]: New machine qemu-25-instance-00000010.
Nov 25 08:25:54 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000010.
Nov 25 08:25:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.272 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c228ebc8-f4d3-4a52-b58b-86d536720417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.277 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3162d5-adf6-4265-a308-6035c84b78b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:54 compute-0 systemd-udevd[282913]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:25:54 compute-0 NetworkManager[48915]: <info>  [1764059154.2994] device (tap4ad9572b-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:25:54 compute-0 NetworkManager[48915]: <info>  [1764059154.3002] device (tap4ad9572b-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:25:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.314 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[93f74cf2-c2b6-4c3b-b2b9-c5177f342d40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.330 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4a0bdc53-9a85-4995-adac-55d14de6e4ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282923, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.344 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[45364dd0-142d-4f46-8c22-16a48ba0d5f2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282924, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282924, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:25:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.345 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.349 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.348 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.349 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.349 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.350 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:25:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:25:54.350 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.643 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.699 253542 DEBUG nova.compute.manager [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.699 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.700 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.700 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059154.6986327, 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.701 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] VM Resumed (Lifecycle Event)
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.705 253542 INFO nova.virt.libvirt.driver [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance spawned successfully.
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.706 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.718 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.730 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.733 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.734 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.734 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.734 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.735 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.735 253542 DEBUG nova.virt.libvirt.driver [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.756 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.757 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059154.6998086, 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.757 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] VM Started (Lifecycle Event)
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.778 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.783 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.788 253542 DEBUG nova.compute.manager [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.806 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.835 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.835 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.836 253542 DEBUG nova.objects.instance [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:25:54 compute-0 nova_compute[253538]: 2025-11-25 08:25:54.884 253542 DEBUG oslo_concurrency.lockutils [None req-35bd05ab-f6ad-4042-81df-7cebe8557610 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:25:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 497 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 5.9 MiB/s wr, 327 op/s
Nov 25 08:25:55 compute-0 nova_compute[253538]: 2025-11-25 08:25:55.476 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:55 compute-0 nova_compute[253538]: 2025-11-25 08:25:55.958 253542 DEBUG nova.compute.manager [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:55 compute-0 nova_compute[253538]: 2025-11-25 08:25:55.958 253542 DEBUG nova.compute.manager [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing instance network info cache due to event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:25:55 compute-0 nova_compute[253538]: 2025-11-25 08:25:55.959 253542 DEBUG oslo_concurrency.lockutils [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:55 compute-0 nova_compute[253538]: 2025-11-25 08:25:55.959 253542 DEBUG oslo_concurrency.lockutils [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:55 compute-0 nova_compute[253538]: 2025-11-25 08:25:55.959 253542 DEBUG nova.network.neutron [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:25:56 compute-0 ceph-mon[75015]: pgmap v1214: 321 pgs: 321 active+clean; 497 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 5.9 MiB/s wr, 327 op/s
Nov 25 08:25:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 501 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.8 MiB/s wr, 319 op/s
Nov 25 08:25:57 compute-0 nova_compute[253538]: 2025-11-25 08:25:57.298 253542 DEBUG nova.network.neutron [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updated VIF entry in instance network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:25:57 compute-0 nova_compute[253538]: 2025-11-25 08:25:57.299 253542 DEBUG nova.network.neutron [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updating instance_info_cache with network_info: [{"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:57 compute-0 nova_compute[253538]: 2025-11-25 08:25:57.313 253542 DEBUG oslo_concurrency.lockutils [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:57 compute-0 nova_compute[253538]: 2025-11-25 08:25:57.313 253542 DEBUG nova.compute.manager [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-changed-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:57 compute-0 nova_compute[253538]: 2025-11-25 08:25:57.313 253542 DEBUG nova.compute.manager [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Refreshing instance network info cache due to event network-changed-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:25:57 compute-0 nova_compute[253538]: 2025-11-25 08:25:57.314 253542 DEBUG oslo_concurrency.lockutils [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:57 compute-0 nova_compute[253538]: 2025-11-25 08:25:57.314 253542 DEBUG oslo_concurrency.lockutils [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:57 compute-0 nova_compute[253538]: 2025-11-25 08:25:57.314 253542 DEBUG nova.network.neutron [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Refreshing network info cache for port f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:25:57 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 25 08:25:57 compute-0 podman[282968]: 2025-11-25 08:25:57.835973963 +0000 UTC m=+0.078841013 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 25 08:25:57 compute-0 ovn_controller[152859]: 2025-11-25T08:25:57Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:60:b8:f5 10.100.0.3
Nov 25 08:25:57 compute-0 ovn_controller[152859]: 2025-11-25T08:25:57Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:60:b8:f5 10.100.0.3
Nov 25 08:25:58 compute-0 ceph-mon[75015]: pgmap v1215: 321 pgs: 321 active+clean; 501 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.8 MiB/s wr, 319 op/s
Nov 25 08:25:58 compute-0 ovn_controller[152859]: 2025-11-25T08:25:58Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:ac:37 10.100.0.3
Nov 25 08:25:58 compute-0 ovn_controller[152859]: 2025-11-25T08:25:58Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:ac:37 10.100.0.3
Nov 25 08:25:58 compute-0 nova_compute[253538]: 2025-11-25 08:25:58.433 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:25:58 compute-0 nova_compute[253538]: 2025-11-25 08:25:58.560 253542 DEBUG nova.compute.manager [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-changed-6283ff13-d854-41d6-8a7a-eab602cc4cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:25:58 compute-0 nova_compute[253538]: 2025-11-25 08:25:58.560 253542 DEBUG nova.compute.manager [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Refreshing instance network info cache due to event network-changed-6283ff13-d854-41d6-8a7a-eab602cc4cf4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:25:58 compute-0 nova_compute[253538]: 2025-11-25 08:25:58.561 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:25:58 compute-0 nova_compute[253538]: 2025-11-25 08:25:58.561 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:25:58 compute-0 nova_compute[253538]: 2025-11-25 08:25:58.561 253542 DEBUG nova.network.neutron [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Refreshing network info cache for port 6283ff13-d854-41d6-8a7a-eab602cc4cf4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:25:58 compute-0 nova_compute[253538]: 2025-11-25 08:25:58.626 253542 DEBUG nova.network.neutron [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Updated VIF entry in instance network info cache for port f718b0c0-ca1b-4f5d-aa70-3d1f48097b97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:25:58 compute-0 nova_compute[253538]: 2025-11-25 08:25:58.627 253542 DEBUG nova.network.neutron [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Updating instance_info_cache with network_info: [{"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:25:58 compute-0 nova_compute[253538]: 2025-11-25 08:25:58.641 253542 DEBUG oslo_concurrency.lockutils [req-fdc29d66-032c-4ab0-9be3-093784b78cd8 req-101ca0cd-178a-43c4-bcb1-e15222c7c759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:25:58 compute-0 nova_compute[253538]: 2025-11-25 08:25:58.805 253542 INFO nova.compute.manager [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Rebuilding instance
Nov 25 08:25:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:25:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 515 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.6 MiB/s wr, 308 op/s
Nov 25 08:25:59 compute-0 nova_compute[253538]: 2025-11-25 08:25:59.072 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:59 compute-0 nova_compute[253538]: 2025-11-25 08:25:59.090 253542 DEBUG nova.compute.manager [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:25:59 compute-0 nova_compute[253538]: 2025-11-25 08:25:59.131 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'pci_requests' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:59 compute-0 nova_compute[253538]: 2025-11-25 08:25:59.139 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:59 compute-0 nova_compute[253538]: 2025-11-25 08:25:59.148 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'resources' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:59 compute-0 nova_compute[253538]: 2025-11-25 08:25:59.156 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'migration_context' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:25:59 compute-0 nova_compute[253538]: 2025-11-25 08:25:59.165 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:25:59 compute-0 nova_compute[253538]: 2025-11-25 08:25:59.168 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:25:59 compute-0 nova_compute[253538]: 2025-11-25 08:25:59.646 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:00 compute-0 ceph-mon[75015]: pgmap v1216: 321 pgs: 321 active+clean; 515 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.6 MiB/s wr, 308 op/s
Nov 25 08:26:00 compute-0 podman[282985]: 2025-11-25 08:26:00.821657409 +0000 UTC m=+0.071724487 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 08:26:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 549 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 7.3 MiB/s wr, 328 op/s
Nov 25 08:26:01 compute-0 nova_compute[253538]: 2025-11-25 08:26:01.592 253542 DEBUG nova.network.neutron [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Updated VIF entry in instance network info cache for port 6283ff13-d854-41d6-8a7a-eab602cc4cf4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:26:01 compute-0 nova_compute[253538]: 2025-11-25 08:26:01.594 253542 DEBUG nova.network.neutron [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Updating instance_info_cache with network_info: [{"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:01 compute-0 nova_compute[253538]: 2025-11-25 08:26:01.611 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:26:01 compute-0 nova_compute[253538]: 2025-11-25 08:26:01.612 253542 DEBUG nova.compute.manager [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:01 compute-0 nova_compute[253538]: 2025-11-25 08:26:01.613 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:01 compute-0 nova_compute[253538]: 2025-11-25 08:26:01.614 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:01 compute-0 nova_compute[253538]: 2025-11-25 08:26:01.614 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:01 compute-0 nova_compute[253538]: 2025-11-25 08:26:01.615 253542 DEBUG nova.compute.manager [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:01 compute-0 nova_compute[253538]: 2025-11-25 08:26:01.616 253542 WARNING nova.compute.manager [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state active and task_state rebuilding.
Nov 25 08:26:01 compute-0 nova_compute[253538]: 2025-11-25 08:26:01.617 253542 DEBUG nova.compute.manager [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:01 compute-0 nova_compute[253538]: 2025-11-25 08:26:01.618 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:01 compute-0 nova_compute[253538]: 2025-11-25 08:26:01.619 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:01 compute-0 nova_compute[253538]: 2025-11-25 08:26:01.619 253542 DEBUG oslo_concurrency.lockutils [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:01 compute-0 nova_compute[253538]: 2025-11-25 08:26:01.620 253542 DEBUG nova.compute.manager [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:01 compute-0 nova_compute[253538]: 2025-11-25 08:26:01.621 253542 WARNING nova.compute.manager [req-2e9582d4-ba45-41c2-b114-35b39a1f5d69 req-990bbf99-211e-49bb-8138-602dacd56d37 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state active and task_state rebuilding.
Nov 25 08:26:02 compute-0 ceph-mon[75015]: pgmap v1217: 321 pgs: 321 active+clean; 549 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 7.3 MiB/s wr, 328 op/s
Nov 25 08:26:02 compute-0 nova_compute[253538]: 2025-11-25 08:26:02.892 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "ca088afd-31e5-497b-bfc5-ba1f56096642" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:02 compute-0 nova_compute[253538]: 2025-11-25 08:26:02.893 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:02 compute-0 nova_compute[253538]: 2025-11-25 08:26:02.908 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:26:02 compute-0 nova_compute[253538]: 2025-11-25 08:26:02.962 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:02 compute-0 nova_compute[253538]: 2025-11-25 08:26:02.963 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:02 compute-0 nova_compute[253538]: 2025-11-25 08:26:02.973 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:26:02 compute-0 nova_compute[253538]: 2025-11-25 08:26:02.973 253542 INFO nova.compute.claims [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 561 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 7.7 MiB/s wr, 288 op/s
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.159 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.437 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:26:03 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3669348661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.632 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.638 253542 DEBUG nova.compute.provider_tree [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:26:03 compute-0 ceph-mon[75015]: pgmap v1218: 321 pgs: 321 active+clean; 561 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 7.7 MiB/s wr, 288 op/s
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.658 253542 DEBUG nova.scheduler.client.report [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.686 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.688 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00489235110617308 of space, bias 1.0, pg target 1.4677053318519242 quantized to 32 (current 32)
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.1991676866616201 quantized to 32 (current 32)
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:26:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.752 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.752 253542 DEBUG nova.network.neutron [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.792 253542 INFO nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.811 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:26:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.918 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.920 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.920 253542 INFO nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Creating image(s)
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.941 253542 DEBUG nova.storage.rbd_utils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image ca088afd-31e5-497b-bfc5-ba1f56096642_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.965 253542 DEBUG nova.storage.rbd_utils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image ca088afd-31e5-497b-bfc5-ba1f56096642_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.985 253542 DEBUG nova.storage.rbd_utils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image ca088afd-31e5-497b-bfc5-ba1f56096642_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:03 compute-0 nova_compute[253538]: 2025-11-25 08:26:03.988 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:04 compute-0 nova_compute[253538]: 2025-11-25 08:26:04.016 253542 DEBUG nova.policy [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '02e795c75a3b40bbbc3ca83d0501777f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52217f37b23343d697fa6d2be38e236d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:26:04 compute-0 nova_compute[253538]: 2025-11-25 08:26:04.073 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:04 compute-0 nova_compute[253538]: 2025-11-25 08:26:04.074 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:04 compute-0 nova_compute[253538]: 2025-11-25 08:26:04.075 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:04 compute-0 nova_compute[253538]: 2025-11-25 08:26:04.075 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:04 compute-0 nova_compute[253538]: 2025-11-25 08:26:04.094 253542 DEBUG nova.storage.rbd_utils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image ca088afd-31e5-497b-bfc5-ba1f56096642_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:04 compute-0 nova_compute[253538]: 2025-11-25 08:26:04.097 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ca088afd-31e5-497b-bfc5-ba1f56096642_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:04 compute-0 nova_compute[253538]: 2025-11-25 08:26:04.456 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ca088afd-31e5-497b-bfc5-ba1f56096642_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:04 compute-0 nova_compute[253538]: 2025-11-25 08:26:04.524 253542 DEBUG nova.storage.rbd_utils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] resizing rbd image ca088afd-31e5-497b-bfc5-ba1f56096642_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:26:04 compute-0 nova_compute[253538]: 2025-11-25 08:26:04.783 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:04 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3669348661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:04 compute-0 nova_compute[253538]: 2025-11-25 08:26:04.934 253542 DEBUG nova.objects.instance [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'migration_context' on Instance uuid ca088afd-31e5-497b-bfc5-ba1f56096642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:26:04 compute-0 nova_compute[253538]: 2025-11-25 08:26:04.951 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:26:04 compute-0 nova_compute[253538]: 2025-11-25 08:26:04.951 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Ensure instance console log exists: /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:26:04 compute-0 nova_compute[253538]: 2025-11-25 08:26:04.952 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:04 compute-0 nova_compute[253538]: 2025-11-25 08:26:04.952 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:04 compute-0 nova_compute[253538]: 2025-11-25 08:26:04.953 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 576 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.6 MiB/s wr, 264 op/s
Nov 25 08:26:05 compute-0 nova_compute[253538]: 2025-11-25 08:26:05.413 253542 DEBUG nova.network.neutron [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Successfully created port: 2089bf75-6119-4c42-a326-989b3931ec08 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:26:05 compute-0 nova_compute[253538]: 2025-11-25 08:26:05.546 253542 DEBUG nova.compute.manager [req-b735f3c2-a677-4c64-a6a0-cce02fd04c22 req-b87ecea7-9233-4090-9906-43e5e0ca7151 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-changed-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:05 compute-0 nova_compute[253538]: 2025-11-25 08:26:05.546 253542 DEBUG nova.compute.manager [req-b735f3c2-a677-4c64-a6a0-cce02fd04c22 req-b87ecea7-9233-4090-9906-43e5e0ca7151 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Refreshing instance network info cache due to event network-changed-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:26:05 compute-0 nova_compute[253538]: 2025-11-25 08:26:05.547 253542 DEBUG oslo_concurrency.lockutils [req-b735f3c2-a677-4c64-a6a0-cce02fd04c22 req-b87ecea7-9233-4090-9906-43e5e0ca7151 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:26:05 compute-0 nova_compute[253538]: 2025-11-25 08:26:05.547 253542 DEBUG oslo_concurrency.lockutils [req-b735f3c2-a677-4c64-a6a0-cce02fd04c22 req-b87ecea7-9233-4090-9906-43e5e0ca7151 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:26:05 compute-0 nova_compute[253538]: 2025-11-25 08:26:05.548 253542 DEBUG nova.network.neutron [req-b735f3c2-a677-4c64-a6a0-cce02fd04c22 req-b87ecea7-9233-4090-9906-43e5e0ca7151 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Refreshing network info cache for port f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:26:05 compute-0 nova_compute[253538]: 2025-11-25 08:26:05.814 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "39580ba3-504b-4e17-b64f-f44ef66091da" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:05 compute-0 nova_compute[253538]: 2025-11-25 08:26:05.815 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:05 compute-0 nova_compute[253538]: 2025-11-25 08:26:05.816 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:05 compute-0 nova_compute[253538]: 2025-11-25 08:26:05.817 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:05 compute-0 nova_compute[253538]: 2025-11-25 08:26:05.817 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:05 compute-0 nova_compute[253538]: 2025-11-25 08:26:05.819 253542 INFO nova.compute.manager [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Terminating instance
Nov 25 08:26:05 compute-0 nova_compute[253538]: 2025-11-25 08:26:05.820 253542 DEBUG nova.compute.manager [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:26:05 compute-0 ceph-mon[75015]: pgmap v1219: 321 pgs: 321 active+clean; 576 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.6 MiB/s wr, 264 op/s
Nov 25 08:26:05 compute-0 kernel: tapf718b0c0-ca (unregistering): left promiscuous mode
Nov 25 08:26:05 compute-0 NetworkManager[48915]: <info>  [1764059165.9987] device (tapf718b0c0-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:26:06 compute-0 ovn_controller[152859]: 2025-11-25T08:26:06Z|00087|binding|INFO|Releasing lport f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 from this chassis (sb_readonly=0)
Nov 25 08:26:06 compute-0 ovn_controller[152859]: 2025-11-25T08:26:06Z|00088|binding|INFO|Setting lport f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 down in Southbound
Nov 25 08:26:06 compute-0 nova_compute[253538]: 2025-11-25 08:26:06.010 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:06 compute-0 ovn_controller[152859]: 2025-11-25T08:26:06Z|00089|binding|INFO|Removing iface tapf718b0c0-ca ovn-installed in OVS
Nov 25 08:26:06 compute-0 nova_compute[253538]: 2025-11-25 08:26:06.015 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.021 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b8:f5 10.100.0.3'], port_security=['fa:16:3e:60:b8:f5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '39580ba3-504b-4e17-b64f-f44ef66091da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d3671fc1a3f4b319a62f23168a9df72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '487cd315-808b-491d-a4f3-9b5dcda46b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddf7b92c-cc8c-4886-ba67-90d06b198671, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:26:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.022 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 in datapath f86a7b06-d9db-4462-bd9b-8ad648dec7f4 unbound from our chassis
Nov 25 08:26:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.024 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f86a7b06-d9db-4462-bd9b-8ad648dec7f4
Nov 25 08:26:06 compute-0 nova_compute[253538]: 2025-11-25 08:26:06.035 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.049 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4be3e7c3-7b74-43a3-b7fc-e0371d14dc16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:06 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Deactivated successfully.
Nov 25 08:26:06 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Consumed 14.648s CPU time.
Nov 25 08:26:06 compute-0 systemd-machined[215790]: Machine qemu-23-instance-00000015 terminated.
Nov 25 08:26:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.079 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[73110975-fcbd-4e26-84d5-4999b2997173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.081 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[00c94474-e5a2-47ea-9f93-b55e275921c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.106 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[281df5e4-e555-47da-b413-4e2d59e3bbf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.122 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0255be6c-32c5-4aed-a5f0-f3f12a8dfa25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf86a7b06-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:7b:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448076, 'reachable_time': 32795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283205, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.137 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1e0f2a-fe78-4d24-aeef-cef019a5061a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf86a7b06-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448087, 'tstamp': 448087}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283206, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf86a7b06-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448090, 'tstamp': 448090}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283206, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.138 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86a7b06-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:06 compute-0 nova_compute[253538]: 2025-11-25 08:26:06.139 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:06 compute-0 nova_compute[253538]: 2025-11-25 08:26:06.144 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.144 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf86a7b06-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.144 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:26:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.145 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf86a7b06-d0, col_values=(('external_ids', {'iface-id': '156cbe7f-b9cd-46c4-9552-1484f581cde6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:06.145 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:26:06 compute-0 nova_compute[253538]: 2025-11-25 08:26:06.259 253542 INFO nova.virt.libvirt.driver [-] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Instance destroyed successfully.
Nov 25 08:26:06 compute-0 nova_compute[253538]: 2025-11-25 08:26:06.260 253542 DEBUG nova.objects.instance [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lazy-loading 'resources' on Instance uuid 39580ba3-504b-4e17-b64f-f44ef66091da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:26:06 compute-0 nova_compute[253538]: 2025-11-25 08:26:06.271 253542 DEBUG nova.virt.libvirt.vif [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:25:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1841402445',display_name='tempest-FloatingIPsAssociationTestJSON-server-1841402445',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1841402445',id=21,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d3671fc1a3f4b319a62f23168a9df72',ramdisk_id='',reservation_id='r-19c31s1c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1833680054',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1833680054-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:25:43Z,user_data=None,user_id='d61511e82c674abeb4ba87a4e5c5bf9d',uuid=39580ba3-504b-4e17-b64f-f44ef66091da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:26:06 compute-0 nova_compute[253538]: 2025-11-25 08:26:06.272 253542 DEBUG nova.network.os_vif_util [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converting VIF {"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:26:06 compute-0 nova_compute[253538]: 2025-11-25 08:26:06.272 253542 DEBUG nova.network.os_vif_util [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:b8:f5,bridge_name='br-int',has_traffic_filtering=True,id=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf718b0c0-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:26:06 compute-0 nova_compute[253538]: 2025-11-25 08:26:06.273 253542 DEBUG os_vif [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:b8:f5,bridge_name='br-int',has_traffic_filtering=True,id=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf718b0c0-ca') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:26:06 compute-0 nova_compute[253538]: 2025-11-25 08:26:06.275 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:06 compute-0 nova_compute[253538]: 2025-11-25 08:26:06.275 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf718b0c0-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:06 compute-0 nova_compute[253538]: 2025-11-25 08:26:06.277 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:06 compute-0 nova_compute[253538]: 2025-11-25 08:26:06.279 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:06 compute-0 nova_compute[253538]: 2025-11-25 08:26:06.283 253542 INFO os_vif [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:b8:f5,bridge_name='br-int',has_traffic_filtering=True,id=f718b0c0-ca1b-4f5d-aa70-3d1f48097b97,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf718b0c0-ca')
Nov 25 08:26:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 591 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.4 MiB/s wr, 209 op/s
Nov 25 08:26:07 compute-0 nova_compute[253538]: 2025-11-25 08:26:07.565 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:26:07 compute-0 nova_compute[253538]: 2025-11-25 08:26:07.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:26:07 compute-0 podman[283237]: 2025-11-25 08:26:07.84066537 +0000 UTC m=+0.082506634 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 08:26:07 compute-0 nova_compute[253538]: 2025-11-25 08:26:07.909 253542 DEBUG nova.network.neutron [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Successfully updated port: 2089bf75-6119-4c42-a326-989b3931ec08 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:26:07 compute-0 nova_compute[253538]: 2025-11-25 08:26:07.931 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:26:07 compute-0 nova_compute[253538]: 2025-11-25 08:26:07.931 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquired lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:26:07 compute-0 nova_compute[253538]: 2025-11-25 08:26:07.931 253542 DEBUG nova.network.neutron [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:26:08 compute-0 nova_compute[253538]: 2025-11-25 08:26:08.013 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:26:08 compute-0 nova_compute[253538]: 2025-11-25 08:26:08.013 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:26:08 compute-0 nova_compute[253538]: 2025-11-25 08:26:08.013 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:26:08 compute-0 nova_compute[253538]: 2025-11-25 08:26:08.209 253542 DEBUG nova.network.neutron [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:26:08 compute-0 ceph-mon[75015]: pgmap v1220: 321 pgs: 321 active+clean; 591 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.4 MiB/s wr, 209 op/s
Nov 25 08:26:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:26:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 586 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.7 MiB/s wr, 195 op/s
Nov 25 08:26:09 compute-0 nova_compute[253538]: 2025-11-25 08:26:09.238 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:26:09 compute-0 nova_compute[253538]: 2025-11-25 08:26:09.652 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:09 compute-0 nova_compute[253538]: 2025-11-25 08:26:09.936 253542 DEBUG nova.network.neutron [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updating instance_info_cache with network_info: [{"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:09 compute-0 nova_compute[253538]: 2025-11-25 08:26:09.998 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Releasing lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:26:09 compute-0 nova_compute[253538]: 2025-11-25 08:26:09.999 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Instance network_info: |[{"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.003 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Start _get_guest_xml network_info=[{"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:26:10 compute-0 ceph-mon[75015]: pgmap v1221: 321 pgs: 321 active+clean; 586 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.7 MiB/s wr, 195 op/s
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.009 253542 WARNING nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.017 253542 DEBUG nova.virt.libvirt.host [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.018 253542 DEBUG nova.virt.libvirt.host [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.028 253542 DEBUG nova.virt.libvirt.host [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.029 253542 DEBUG nova.virt.libvirt.host [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.030 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.031 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.031 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.032 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.033 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.033 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.034 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.034 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.035 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.035 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.036 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.037 253542 DEBUG nova.virt.hardware [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.041 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.253 253542 DEBUG nova.network.neutron [req-b735f3c2-a677-4c64-a6a0-cce02fd04c22 req-b87ecea7-9233-4090-9906-43e5e0ca7151 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Updated VIF entry in instance network info cache for port f718b0c0-ca1b-4f5d-aa70-3d1f48097b97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.254 253542 DEBUG nova.network.neutron [req-b735f3c2-a677-4c64-a6a0-cce02fd04c22 req-b87ecea7-9233-4090-9906-43e5e0ca7151 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Updating instance_info_cache with network_info: [{"id": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "address": "fa:16:3e:60:b8:f5", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf718b0c0-ca", "ovs_interfaceid": "f718b0c0-ca1b-4f5d-aa70-3d1f48097b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.273 253542 DEBUG oslo_concurrency.lockutils [req-b735f3c2-a677-4c64-a6a0-cce02fd04c22 req-b87ecea7-9233-4090-9906-43e5e0ca7151 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-39580ba3-504b-4e17-b64f-f44ef66091da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:26:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:26:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2062862655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.508 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.538 253542 DEBUG nova.storage.rbd_utils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image ca088afd-31e5-497b-bfc5-ba1f56096642_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.542 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:26:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/542444532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.973 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.975 253542 DEBUG nova.virt.libvirt.vif [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:26:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1453002528',display_name='tempest-SecurityGroupsTestJSON-server-1453002528',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1453002528',id=23,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-502qkvse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTestJSON-1828125381-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:26:03Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=ca088afd-31e5-497b-bfc5-ba1f56096642,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.976 253542 DEBUG nova.network.os_vif_util [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.978 253542 DEBUG nova.network.os_vif_util [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:c0:7d,bridge_name='br-int',has_traffic_filtering=True,id=2089bf75-6119-4c42-a326-989b3931ec08,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2089bf75-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.980 253542 DEBUG nova.objects.instance [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'pci_devices' on Instance uuid ca088afd-31e5-497b-bfc5-ba1f56096642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.997 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:26:10 compute-0 nova_compute[253538]:   <uuid>ca088afd-31e5-497b-bfc5-ba1f56096642</uuid>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   <name>instance-00000017</name>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1453002528</nova:name>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:26:10</nova:creationTime>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:26:10 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:26:10 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:26:10 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:26:10 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:26:10 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:26:10 compute-0 nova_compute[253538]:         <nova:user uuid="02e795c75a3b40bbbc3ca83d0501777f">tempest-SecurityGroupsTestJSON-1828125381-project-member</nova:user>
Nov 25 08:26:10 compute-0 nova_compute[253538]:         <nova:project uuid="52217f37b23343d697fa6d2be38e236d">tempest-SecurityGroupsTestJSON-1828125381</nova:project>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:26:10 compute-0 nova_compute[253538]:         <nova:port uuid="2089bf75-6119-4c42-a326-989b3931ec08">
Nov 25 08:26:10 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <system>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <entry name="serial">ca088afd-31e5-497b-bfc5-ba1f56096642</entry>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <entry name="uuid">ca088afd-31e5-497b-bfc5-ba1f56096642</entry>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     </system>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   <os>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   </os>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   <features>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   </features>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/ca088afd-31e5-497b-bfc5-ba1f56096642_disk">
Nov 25 08:26:10 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       </source>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:26:10 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/ca088afd-31e5-497b-bfc5-ba1f56096642_disk.config">
Nov 25 08:26:10 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       </source>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:26:10 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:b9:c0:7d"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <target dev="tap2089bf75-61"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642/console.log" append="off"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <video>
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     </video>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:26:10 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:26:10 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:26:10 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:26:10 compute-0 nova_compute[253538]: </domain>
Nov 25 08:26:10 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.998 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Preparing to wait for external event network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.999 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:10 compute-0 nova_compute[253538]: 2025-11-25 08:26:10.999 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.000 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.000 253542 DEBUG nova.virt.libvirt.vif [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:26:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1453002528',display_name='tempest-SecurityGroupsTestJSON-server-1453002528',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1453002528',id=23,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-502qkvse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTestJSON-1828125381-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:26:03Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=ca088afd-31e5-497b-bfc5-ba1f56096642,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.001 253542 DEBUG nova.network.os_vif_util [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.001 253542 DEBUG nova.network.os_vif_util [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:c0:7d,bridge_name='br-int',has_traffic_filtering=True,id=2089bf75-6119-4c42-a326-989b3931ec08,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2089bf75-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.002 253542 DEBUG os_vif [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:c0:7d,bridge_name='br-int',has_traffic_filtering=True,id=2089bf75-6119-4c42-a326-989b3931ec08,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2089bf75-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.002 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.003 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.003 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.006 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.006 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2089bf75-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.007 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2089bf75-61, col_values=(('external_ids', {'iface-id': '2089bf75-6119-4c42-a326-989b3931ec08', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:c0:7d', 'vm-uuid': 'ca088afd-31e5-497b-bfc5-ba1f56096642'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.008 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:11 compute-0 NetworkManager[48915]: <info>  [1764059171.0095] manager: (tap2089bf75-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.014 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.015 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.016 253542 INFO os_vif [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:c0:7d,bridge_name='br-int',has_traffic_filtering=True,id=2089bf75-6119-4c42-a326-989b3931ec08,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2089bf75-61')
Nov 25 08:26:11 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2062862655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:26:11 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/542444532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:26:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 555 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.7 MiB/s wr, 153 op/s
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:11.336374) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059171336408, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1340, "num_deletes": 252, "total_data_size": 1834743, "memory_usage": 1870416, "flush_reason": "Manual Compaction"}
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.360 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.360 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.361 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] No VIF found with MAC fa:16:3e:b9:c0:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.361 253542 INFO nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Using config drive
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.384 253542 DEBUG nova.storage.rbd_utils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image ca088afd-31e5-497b-bfc5-ba1f56096642_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059171700611, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1814575, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24338, "largest_seqno": 25677, "table_properties": {"data_size": 1808387, "index_size": 3327, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14087, "raw_average_key_size": 20, "raw_value_size": 1795583, "raw_average_value_size": 2587, "num_data_blocks": 148, "num_entries": 694, "num_filter_entries": 694, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764059053, "oldest_key_time": 1764059053, "file_creation_time": 1764059171, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 364281 microseconds, and 4960 cpu microseconds.
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.786 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Updating instance_info_cache with network_info: [{"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:11.700651) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1814575 bytes OK
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:11.700674) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:11.788471) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:11.788527) EVENT_LOG_v1 {"time_micros": 1764059171788513, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:11.788558) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1828675, prev total WAL file size 1828675, number of live WAL files 2.
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:11.790252) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1772KB)], [56(7048KB)]
Nov 25 08:26:11 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059171790365, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9031853, "oldest_snapshot_seqno": -1}
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.806 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-23ace5af-6840-42aa-a801-98abbb4f3a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.807 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.808 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.808 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.808 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.809 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.948 253542 INFO nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Creating config drive at /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642/disk.config
Nov 25 08:26:11 compute-0 nova_compute[253538]: 2025-11-25 08:26:11.957 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwoeschgk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:12 compute-0 nova_compute[253538]: 2025-11-25 08:26:12.088 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwoeschgk" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:12 compute-0 nova_compute[253538]: 2025-11-25 08:26:12.354 253542 DEBUG nova.storage.rbd_utils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image ca088afd-31e5-497b-bfc5-ba1f56096642_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:12 compute-0 nova_compute[253538]: 2025-11-25 08:26:12.359 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642/disk.config ca088afd-31e5-497b-bfc5-ba1f56096642_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:12 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4772 keys, 7303244 bytes, temperature: kUnknown
Nov 25 08:26:12 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059172373863, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7303244, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7271824, "index_size": 18382, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 119981, "raw_average_key_size": 25, "raw_value_size": 7186100, "raw_average_value_size": 1505, "num_data_blocks": 760, "num_entries": 4772, "num_filter_entries": 4772, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764059171, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:26:12 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:26:12 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:12.374232) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7303244 bytes
Nov 25 08:26:12 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:12.392637) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 15.5 rd, 12.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 6.9 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(9.0) write-amplify(4.0) OK, records in: 5292, records dropped: 520 output_compression: NoCompression
Nov 25 08:26:12 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:12.392669) EVENT_LOG_v1 {"time_micros": 1764059172392654, "job": 30, "event": "compaction_finished", "compaction_time_micros": 583346, "compaction_time_cpu_micros": 30630, "output_level": 6, "num_output_files": 1, "total_output_size": 7303244, "num_input_records": 5292, "num_output_records": 4772, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:26:12 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:26:12 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059172393560, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 25 08:26:12 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:26:12 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059172396620, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 25 08:26:12 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:11.790142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:26:12 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:12.396766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:26:12 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:12.396775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:26:12 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:12.396777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:26:12 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:12.396780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:26:12 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:26:12.396782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:26:12 compute-0 ceph-mon[75015]: pgmap v1222: 321 pgs: 321 active+clean; 555 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.7 MiB/s wr, 153 op/s
Nov 25 08:26:12 compute-0 nova_compute[253538]: 2025-11-25 08:26:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:26:12 compute-0 nova_compute[253538]: 2025-11-25 08:26:12.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:26:12 compute-0 nova_compute[253538]: 2025-11-25 08:26:12.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:12 compute-0 nova_compute[253538]: 2025-11-25 08:26:12.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:12 compute-0 nova_compute[253538]: 2025-11-25 08:26:12.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:12 compute-0 nova_compute[253538]: 2025-11-25 08:26:12.581 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:26:12 compute-0 nova_compute[253538]: 2025-11-25 08:26:12.581 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:26:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2610384772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 541 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 3.6 MiB/s wr, 95 op/s
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.044 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.141 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.142 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.148 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.148 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.154 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.155 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.160 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.161 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.165 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.166 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.171 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.171 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.175 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.175 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.180 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.181 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.513 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.514 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3327MB free_disk=59.70021438598633GB free_vcpus=2 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.515 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.515 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.602 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.602 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 23ace5af-6840-42aa-a801-98abbb4f3a52 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.603 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.603 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 1664fad5-765c-4ecc-93e2-6f96c7fb6d44 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.603 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.603 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 39580ba3-504b-4e17-b64f-f44ef66091da actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.603 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance ceb93a9d-5e18-4351-9cfa-3949c00b448a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.603 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance ca088afd-31e5-497b-bfc5-ba1f56096642 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.604 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 8 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.604 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=8 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:26:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2610384772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:13 compute-0 ceph-mon[75015]: pgmap v1223: 321 pgs: 321 active+clean; 541 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 3.6 MiB/s wr, 95 op/s
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.794 253542 DEBUG nova.compute.manager [req-44a1d498-f23a-492e-af7c-76d474b4228f req-4ef442e3-9dfa-4a71-bbda-722d26d7497a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-changed-2089bf75-6119-4c42-a326-989b3931ec08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.795 253542 DEBUG nova.compute.manager [req-44a1d498-f23a-492e-af7c-76d474b4228f req-4ef442e3-9dfa-4a71-bbda-722d26d7497a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Refreshing instance network info cache due to event network-changed-2089bf75-6119-4c42-a326-989b3931ec08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.796 253542 DEBUG oslo_concurrency.lockutils [req-44a1d498-f23a-492e-af7c-76d474b4228f req-4ef442e3-9dfa-4a71-bbda-722d26d7497a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.796 253542 DEBUG oslo_concurrency.lockutils [req-44a1d498-f23a-492e-af7c-76d474b4228f req-4ef442e3-9dfa-4a71-bbda-722d26d7497a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.797 253542 DEBUG nova.network.neutron [req-44a1d498-f23a-492e-af7c-76d474b4228f req-4ef442e3-9dfa-4a71-bbda-722d26d7497a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Refreshing network info cache for port 2089bf75-6119-4c42-a326-989b3931ec08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.841 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.878 253542 DEBUG nova.compute.manager [req-b2c97240-536f-4e6b-b0ca-f427c3316ae4 req-16d46cd5-26cd-4f60-8b65-2d1d41d37ff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-vif-unplugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.878 253542 DEBUG oslo_concurrency.lockutils [req-b2c97240-536f-4e6b-b0ca-f427c3316ae4 req-16d46cd5-26cd-4f60-8b65-2d1d41d37ff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.878 253542 DEBUG oslo_concurrency.lockutils [req-b2c97240-536f-4e6b-b0ca-f427c3316ae4 req-16d46cd5-26cd-4f60-8b65-2d1d41d37ff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.879 253542 DEBUG oslo_concurrency.lockutils [req-b2c97240-536f-4e6b-b0ca-f427c3316ae4 req-16d46cd5-26cd-4f60-8b65-2d1d41d37ff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.879 253542 DEBUG nova.compute.manager [req-b2c97240-536f-4e6b-b0ca-f427c3316ae4 req-16d46cd5-26cd-4f60-8b65-2d1d41d37ff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] No waiting events found dispatching network-vif-unplugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.879 253542 DEBUG nova.compute.manager [req-b2c97240-536f-4e6b-b0ca-f427c3316ae4 req-16d46cd5-26cd-4f60-8b65-2d1d41d37ff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-vif-unplugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:26:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.951 253542 DEBUG oslo_concurrency.processutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642/disk.config ca088afd-31e5-497b-bfc5-ba1f56096642_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.952 253542 INFO nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Deleting local config drive /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642/disk.config because it was imported into RBD.
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.992 253542 INFO nova.virt.libvirt.driver [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Deleting instance files /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da_del
Nov 25 08:26:13 compute-0 nova_compute[253538]: 2025-11-25 08:26:13.993 253542 INFO nova.virt.libvirt.driver [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Deletion of /var/lib/nova/instances/39580ba3-504b-4e17-b64f-f44ef66091da_del complete
Nov 25 08:26:14 compute-0 NetworkManager[48915]: <info>  [1764059174.0308] manager: (tap2089bf75-61): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Nov 25 08:26:14 compute-0 kernel: tap2089bf75-61: entered promiscuous mode
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.033 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:14 compute-0 ovn_controller[152859]: 2025-11-25T08:26:14Z|00090|binding|INFO|Claiming lport 2089bf75-6119-4c42-a326-989b3931ec08 for this chassis.
Nov 25 08:26:14 compute-0 ovn_controller[152859]: 2025-11-25T08:26:14Z|00091|binding|INFO|2089bf75-6119-4c42-a326-989b3931ec08: Claiming fa:16:3e:b9:c0:7d 10.100.0.7
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.046 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:c0:7d 10.100.0.7'], port_security=['fa:16:3e:b9:c0:7d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ca088afd-31e5-497b-bfc5-ba1f56096642', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52217f37b23343d697fa6d2be38e236d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '94ed9e1b-8451-4dd9-95ef-2d9affe4fca9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16b4c924-25ba-4e74-8549-ba25753d78e7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2089bf75-6119-4c42-a326-989b3931ec08) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.047 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2089bf75-6119-4c42-a326-989b3931ec08 in datapath ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 bound to our chassis
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.049 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954
Nov 25 08:26:14 compute-0 ovn_controller[152859]: 2025-11-25T08:26:14Z|00092|binding|INFO|Setting lport 2089bf75-6119-4c42-a326-989b3931ec08 ovn-installed in OVS
Nov 25 08:26:14 compute-0 ovn_controller[152859]: 2025-11-25T08:26:14Z|00093|binding|INFO|Setting lport 2089bf75-6119-4c42-a326-989b3931ec08 up in Southbound
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.061 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.063 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:14 compute-0 systemd-udevd[283443]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.068 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0686c896-a8e7-4fde-9aa6-fb5270581cdc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.070 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapec4e7ebb-a1 in ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.073 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapec4e7ebb-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.073 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a9f2b9-29b9-4925-bd66-845cae3a7ffd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.075 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[268871a9-5fa4-4475-8d5c-ee02e10adddf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:14 compute-0 systemd-machined[215790]: New machine qemu-26-instance-00000017.
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.080 253542 INFO nova.compute.manager [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Took 8.26 seconds to destroy the instance on the hypervisor.
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.081 253542 DEBUG oslo.service.loopingcall [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.081 253542 DEBUG nova.compute.manager [-] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.081 253542 DEBUG nova.network.neutron [-] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:26:14 compute-0 NetworkManager[48915]: <info>  [1764059174.0827] device (tap2089bf75-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:26:14 compute-0 NetworkManager[48915]: <info>  [1764059174.0837] device (tap2089bf75-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.088 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf44951-97f7-4fcf-aa06-726eef0fb88e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:14 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-00000017.
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.114 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3117f0c8-608d-416e-a6ab-19d143ae6360]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.148 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[17cff3b4-4d4b-4ccd-be41-6708efc3c4bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:14 compute-0 NetworkManager[48915]: <info>  [1764059174.1573] manager: (tapec4e7ebb-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Nov 25 08:26:14 compute-0 systemd-udevd[283447]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.156 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dab46d66-1768-41e0-bfbf-95b68f60dec5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.205 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8d756f13-8460-4018-817c-60cb4a285f0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.208 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6f06d28c-0978-4a39-86ec-737353d11800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:14 compute-0 NetworkManager[48915]: <info>  [1764059174.2410] device (tapec4e7ebb-a0): carrier: link connected
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.242 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[efcef1a0-f850-41ce-a401-ba5f21cd907e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.258 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4dae09c8-3505-4a93-a98e-b7270b795104]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec4e7ebb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:64:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452883, 'reachable_time': 27756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283478, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.273 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5d367f02-8706-4202-8c9a-4a2ecadc6847]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:641f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452883, 'tstamp': 452883}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283479, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.290 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cd258e48-aa8f-4043-838f-9f5afbb52693]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec4e7ebb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:64:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452883, 'reachable_time': 27756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283480, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.321 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6052f02d-35fe-454e-8b52-55c25a9b9b0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:26:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2774786305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.377 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f43a322d-8f17-4551-847d-81f95ae6d679]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.379 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec4e7ebb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.379 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.379 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec4e7ebb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.381 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:14 compute-0 NetworkManager[48915]: <info>  [1764059174.3820] manager: (tapec4e7ebb-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 25 08:26:14 compute-0 kernel: tapec4e7ebb-a0: entered promiscuous mode
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.384 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.389 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.392 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec4e7ebb-a0, col_values=(('external_ids', {'iface-id': '26e04d60-4f32-4592-b567-fc34513c5aba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:14 compute-0 ovn_controller[152859]: 2025-11-25T08:26:14Z|00094|binding|INFO|Releasing lport 26e04d60-4f32-4592-b567-fc34513c5aba from this chassis (sb_readonly=0)
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.394 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.401 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.402 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[83705e86-c491-49f5-a7d3-59b49fd0c157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.402 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954.pid.haproxy
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:26:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:14.403 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'env', 'PROCESS_TAG=haproxy-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.411 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.414 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.425 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.446 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.446 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:14 compute-0 nova_compute[253538]: 2025-11-25 08:26:14.709 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2774786305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:14 compute-0 podman[283529]: 2025-11-25 08:26:14.750579653 +0000 UTC m=+0.026982468 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:26:14 compute-0 podman[283529]: 2025-11-25 08:26:14.855884867 +0000 UTC m=+0.132287632 container create e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:26:15 compute-0 nova_compute[253538]: 2025-11-25 08:26:15.000 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059174.9994388, ca088afd-31e5-497b-bfc5-ba1f56096642 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:26:15 compute-0 nova_compute[253538]: 2025-11-25 08:26:15.000 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] VM Started (Lifecycle Event)
Nov 25 08:26:15 compute-0 systemd[1]: Started libpod-conmon-e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117.scope.
Nov 25 08:26:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 552 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 125 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 08:26:15 compute-0 nova_compute[253538]: 2025-11-25 08:26:15.039 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:26:15 compute-0 nova_compute[253538]: 2025-11-25 08:26:15.045 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059174.9996421, ca088afd-31e5-497b-bfc5-ba1f56096642 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:26:15 compute-0 nova_compute[253538]: 2025-11-25 08:26:15.045 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] VM Paused (Lifecycle Event)
Nov 25 08:26:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d651f3fccf787af2f2b211bc049772630cbec479a6881f484dd2c0eb6af6c354/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:26:15 compute-0 nova_compute[253538]: 2025-11-25 08:26:15.073 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:26:15 compute-0 nova_compute[253538]: 2025-11-25 08:26:15.076 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:26:15 compute-0 nova_compute[253538]: 2025-11-25 08:26:15.094 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:26:15 compute-0 podman[283529]: 2025-11-25 08:26:15.15903374 +0000 UTC m=+0.435436545 container init e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 08:26:15 compute-0 podman[283529]: 2025-11-25 08:26:15.166369472 +0000 UTC m=+0.442772247 container start e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:26:15 compute-0 neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954[283568]: [NOTICE]   (283572) : New worker (283574) forked
Nov 25 08:26:15 compute-0 neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954[283568]: [NOTICE]   (283572) : Loading success.
Nov 25 08:26:15 compute-0 ovn_controller[152859]: 2025-11-25T08:26:15Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 08:26:15 compute-0 ovn_controller[152859]: 2025-11-25T08:26:15Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 08:26:15 compute-0 nova_compute[253538]: 2025-11-25 08:26:15.798 253542 DEBUG nova.network.neutron [-] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:15 compute-0 nova_compute[253538]: 2025-11-25 08:26:15.811 253542 INFO nova.compute.manager [-] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Took 1.73 seconds to deallocate network for instance.
Nov 25 08:26:15 compute-0 nova_compute[253538]: 2025-11-25 08:26:15.856 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:15 compute-0 nova_compute[253538]: 2025-11-25 08:26:15.856 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:15 compute-0 ceph-mon[75015]: pgmap v1224: 321 pgs: 321 active+clean; 552 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 125 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.008 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.034 253542 DEBUG nova.network.neutron [req-44a1d498-f23a-492e-af7c-76d474b4228f req-4ef442e3-9dfa-4a71-bbda-722d26d7497a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updated VIF entry in instance network info cache for port 2089bf75-6119-4c42-a326-989b3931ec08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.034 253542 DEBUG nova.network.neutron [req-44a1d498-f23a-492e-af7c-76d474b4228f req-4ef442e3-9dfa-4a71-bbda-722d26d7497a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updating instance_info_cache with network_info: [{"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.053 253542 DEBUG oslo_concurrency.lockutils [req-44a1d498-f23a-492e-af7c-76d474b4228f req-4ef442e3-9dfa-4a71-bbda-722d26d7497a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.077 253542 DEBUG oslo_concurrency.processutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.439 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.471 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.471 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.503 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.503 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.504 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.504 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.504 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] No waiting events found dispatching network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.504 253542 WARNING nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received unexpected event network-vif-plugged-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 for instance with vm_state deleted and task_state None.
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.504 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-changed-6283ff13-d854-41d6-8a7a-eab602cc4cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.505 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Refreshing instance network info cache due to event network-changed-6283ff13-d854-41d6-8a7a-eab602cc4cf4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.505 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.505 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.505 253542 DEBUG nova.network.neutron [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Refreshing network info cache for port 6283ff13-d854-41d6-8a7a-eab602cc4cf4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:26:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:26:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644728288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.571 253542 DEBUG oslo_concurrency.processutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.579 253542 DEBUG nova.compute.provider_tree [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.592 253542 DEBUG nova.scheduler.client.report [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.610 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.636 253542 INFO nova.scheduler.client.report [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Deleted allocations for instance 39580ba3-504b-4e17-b64f-f44ef66091da
Nov 25 08:26:16 compute-0 nova_compute[253538]: 2025-11-25 08:26:16.716 253542 DEBUG oslo_concurrency.lockutils [None req-3e5b6bd1-af5e-4842-aa52-e2886e1e7447 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "39580ba3-504b-4e17-b64f-f44ef66091da" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 552 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 214 KiB/s rd, 3.5 MiB/s wr, 104 op/s
Nov 25 08:26:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2644728288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:17 compute-0 nova_compute[253538]: 2025-11-25 08:26:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:26:17 compute-0 nova_compute[253538]: 2025-11-25 08:26:17.817 253542 DEBUG nova.network.neutron [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Updated VIF entry in instance network info cache for port 6283ff13-d854-41d6-8a7a-eab602cc4cf4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:26:17 compute-0 nova_compute[253538]: 2025-11-25 08:26:17.818 253542 DEBUG nova.network.neutron [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Updating instance_info_cache with network_info: [{"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:17 compute-0 nova_compute[253538]: 2025-11-25 08:26:17.832 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ceb93a9d-5e18-4351-9cfa-3949c00b448a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:26:17 compute-0 nova_compute[253538]: 2025-11-25 08:26:17.833 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:17 compute-0 nova_compute[253538]: 2025-11-25 08:26:17.833 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing instance network info cache due to event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:26:17 compute-0 nova_compute[253538]: 2025-11-25 08:26:17.833 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:26:17 compute-0 nova_compute[253538]: 2025-11-25 08:26:17.834 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:26:17 compute-0 nova_compute[253538]: 2025-11-25 08:26:17.834 253542 DEBUG nova.network.neutron [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:26:18 compute-0 ceph-mon[75015]: pgmap v1225: 321 pgs: 321 active+clean; 552 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 214 KiB/s rd, 3.5 MiB/s wr, 104 op/s
Nov 25 08:26:18 compute-0 sudo[283606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:26:18 compute-0 sudo[283606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:18 compute-0 sudo[283606]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:18 compute-0 sudo[283631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:26:18 compute-0 sudo[283631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:18 compute-0 sudo[283631]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:18 compute-0 sudo[283656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:26:18 compute-0 sudo[283656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:18 compute-0 sudo[283656]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:18 compute-0 nova_compute[253538]: 2025-11-25 08:26:18.609 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:18 compute-0 nova_compute[253538]: 2025-11-25 08:26:18.610 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:18 compute-0 nova_compute[253538]: 2025-11-25 08:26:18.611 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:18 compute-0 nova_compute[253538]: 2025-11-25 08:26:18.611 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:18 compute-0 nova_compute[253538]: 2025-11-25 08:26:18.612 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:18 compute-0 nova_compute[253538]: 2025-11-25 08:26:18.614 253542 INFO nova.compute.manager [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Terminating instance
Nov 25 08:26:18 compute-0 nova_compute[253538]: 2025-11-25 08:26:18.616 253542 DEBUG nova.compute.manager [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:26:18 compute-0 nova_compute[253538]: 2025-11-25 08:26:18.663 253542 DEBUG nova.compute.manager [req-5e6c7527-69a6-493f-833b-481455803dfe req-0a21b136-3915-47ff-a24e-e0745ce46e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:18 compute-0 nova_compute[253538]: 2025-11-25 08:26:18.664 253542 DEBUG nova.compute.manager [req-5e6c7527-69a6-493f-833b-481455803dfe req-0a21b136-3915-47ff-a24e-e0745ce46e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing instance network info cache due to event network-changed-fdb3703c-f8da-4c10-9784-ed63bfe93fe1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:26:18 compute-0 nova_compute[253538]: 2025-11-25 08:26:18.665 253542 DEBUG oslo_concurrency.lockutils [req-5e6c7527-69a6-493f-833b-481455803dfe req-0a21b136-3915-47ff-a24e-e0745ce46e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:26:18 compute-0 sudo[283681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:26:18 compute-0 sudo[283681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:26:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 555 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.8 MiB/s wr, 114 op/s
Nov 25 08:26:19 compute-0 kernel: tap6283ff13-d8 (unregistering): left promiscuous mode
Nov 25 08:26:19 compute-0 NetworkManager[48915]: <info>  [1764059179.1184] device (tap6283ff13-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:26:19 compute-0 ovn_controller[152859]: 2025-11-25T08:26:19Z|00095|binding|INFO|Releasing lport 6283ff13-d854-41d6-8a7a-eab602cc4cf4 from this chassis (sb_readonly=0)
Nov 25 08:26:19 compute-0 ovn_controller[152859]: 2025-11-25T08:26:19Z|00096|binding|INFO|Setting lport 6283ff13-d854-41d6-8a7a-eab602cc4cf4 down in Southbound
Nov 25 08:26:19 compute-0 ovn_controller[152859]: 2025-11-25T08:26:19Z|00097|binding|INFO|Removing iface tap6283ff13-d8 ovn-installed in OVS
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.181 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.184 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:19.190 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:ac:37 10.100.0.3'], port_security=['fa:16:3e:57:ac:37 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ceb93a9d-5e18-4351-9cfa-3949c00b448a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0cb07bc-dc94-4b65-bb7f-100ce36c9428', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ea3de796e6464fbf65835dc4c3ad79', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c614f14d-ba5c-4351-9110-1ad24f7c46f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d60c6aa2-d509-48b2-b548-58ea5b315827, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6283ff13-d854-41d6-8a7a-eab602cc4cf4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:26:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:19.193 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6283ff13-d854-41d6-8a7a-eab602cc4cf4 in datapath f0cb07bc-dc94-4b65-bb7f-100ce36c9428 unbound from our chassis
Nov 25 08:26:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:19.195 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f0cb07bc-dc94-4b65-bb7f-100ce36c9428, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:26:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:19.197 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[64c969ce-39bb-4b93-827b-73f520df47e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:19.197 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428 namespace which is not needed anymore
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.201 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:19 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Deactivated successfully.
Nov 25 08:26:19 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Consumed 14.270s CPU time.
Nov 25 08:26:19 compute-0 systemd-machined[215790]: Machine qemu-24-instance-00000016 terminated.
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.255 253542 INFO nova.virt.libvirt.driver [-] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Instance destroyed successfully.
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.256 253542 DEBUG nova.objects.instance [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lazy-loading 'resources' on Instance uuid ceb93a9d-5e18-4351-9cfa-3949c00b448a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.271 253542 DEBUG nova.virt.libvirt.vif [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1269146307',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1269146307',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-126914630',id=22,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4ea3de796e6464fbf65835dc4c3ad79',ramdisk_id='',reservation_id='r-jh0ldu4e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1564808270',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1564808270-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:25:45Z,user_data=None,user_id='c53798457642457e8c93278c6bbae0b7',uuid=ceb93a9d-5e18-4351-9cfa-3949c00b448a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.271 253542 DEBUG nova.network.os_vif_util [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Converting VIF {"id": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "address": "fa:16:3e:57:ac:37", "network": {"id": "f0cb07bc-dc94-4b65-bb7f-100ce36c9428", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-827720896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ea3de796e6464fbf65835dc4c3ad79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6283ff13-d8", "ovs_interfaceid": "6283ff13-d854-41d6-8a7a-eab602cc4cf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.272 253542 DEBUG nova.network.os_vif_util [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:ac:37,bridge_name='br-int',has_traffic_filtering=True,id=6283ff13-d854-41d6-8a7a-eab602cc4cf4,network=Network(f0cb07bc-dc94-4b65-bb7f-100ce36c9428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6283ff13-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.272 253542 DEBUG os_vif [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:ac:37,bridge_name='br-int',has_traffic_filtering=True,id=6283ff13-d854-41d6-8a7a-eab602cc4cf4,network=Network(f0cb07bc-dc94-4b65-bb7f-100ce36c9428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6283ff13-d8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.274 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.275 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6283ff13-d8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.276 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.279 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.280 253542 INFO os_vif [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:ac:37,bridge_name='br-int',has_traffic_filtering=True,id=6283ff13-d854-41d6-8a7a-eab602cc4cf4,network=Network(f0cb07bc-dc94-4b65-bb7f-100ce36c9428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6283ff13-d8')
Nov 25 08:26:19 compute-0 sudo[283681]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:26:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:26:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:26:19 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:26:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:26:19 compute-0 neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428[282548]: [NOTICE]   (282554) : haproxy version is 2.8.14-c23fe91
Nov 25 08:26:19 compute-0 neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428[282548]: [NOTICE]   (282554) : path to executable is /usr/sbin/haproxy
Nov 25 08:26:19 compute-0 neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428[282548]: [WARNING]  (282554) : Exiting Master process...
Nov 25 08:26:19 compute-0 neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428[282548]: [ALERT]    (282554) : Current worker (282556) exited with code 143 (Terminated)
Nov 25 08:26:19 compute-0 neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428[282548]: [WARNING]  (282554) : All workers exited. Exiting... (0)
Nov 25 08:26:19 compute-0 systemd[1]: libpod-df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7.scope: Deactivated successfully.
Nov 25 08:26:19 compute-0 podman[283768]: 2025-11-25 08:26:19.427866093 +0000 UTC m=+0.135095080 container died df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.555 253542 DEBUG nova.network.neutron [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updated VIF entry in instance network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.556 253542 DEBUG nova.network.neutron [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updating instance_info_cache with network_info: [{"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:19 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.569 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.570 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.570 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:19 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 925a5a1a-10b7-4cfa-a432-0c154817475f does not exist
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.570 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.570 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:19 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e2ed2392-e53c-49d9-867d-57149a7af856 does not exist
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.570 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Processing event network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:26:19 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d9d1744f-29e2-4ca1-b9a9-5d534a90ddf9 does not exist
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.571 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.571 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.571 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.571 253542 DEBUG oslo_concurrency.lockutils [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.571 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] No waiting events found dispatching network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.571 253542 WARNING nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received unexpected event network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 for instance with vm_state building and task_state spawning.
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.572 253542 DEBUG nova.compute.manager [req-8bca21e1-6585-481d-b1e4-a2603df209f9 req-8cbb1707-de32-4799-837f-22db191e9416 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Received event network-vif-deleted-f718b0c0-ca1b-4f5d-aa70-3d1f48097b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.572 253542 DEBUG oslo_concurrency.lockutils [req-5e6c7527-69a6-493f-833b-481455803dfe req-0a21b136-3915-47ff-a24e-e0745ce46e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:26:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.572 253542 DEBUG nova.network.neutron [req-5e6c7527-69a6-493f-833b-481455803dfe req-0a21b136-3915-47ff-a24e-e0745ce46e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Refreshing network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.573 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:26:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:26:19 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:26:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:26:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.578 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059179.5779002, ca088afd-31e5-497b-bfc5-ba1f56096642 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.578 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] VM Resumed (Lifecycle Event)
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.581 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.586 253542 INFO nova.virt.libvirt.driver [-] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Instance spawned successfully.
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.586 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.605 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.612 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.612 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.613 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.613 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.614 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.614 253542 DEBUG nova.virt.libvirt.driver [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.619 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.646 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:26:19 compute-0 sudo[283815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:26:19 compute-0 sudo[283815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:19 compute-0 sudo[283815]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.673 253542 INFO nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Took 15.75 seconds to spawn the instance on the hypervisor.
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.673 253542 DEBUG nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.711 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.732 253542 INFO nova.compute.manager [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Took 16.79 seconds to build instance.
Nov 25 08:26:19 compute-0 sudo[283842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:26:19 compute-0 sudo[283842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:19 compute-0 sudo[283842]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:19 compute-0 nova_compute[253538]: 2025-11-25 08:26:19.746 253542 DEBUG oslo_concurrency.lockutils [None req-1e35dad3-44c2-4c96-99c5-86a99fa9ac76 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7-userdata-shm.mount: Deactivated successfully.
Nov 25 08:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-92f235619556de56030111cecddf940547ff0043e49f6964151133b56b1d88ee-merged.mount: Deactivated successfully.
Nov 25 08:26:19 compute-0 sudo[283867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:26:19 compute-0 sudo[283867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:19 compute-0 sudo[283867]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:19 compute-0 sudo[283894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:26:19 compute-0 sudo[283894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:20 compute-0 podman[283768]: 2025-11-25 08:26:20.173987787 +0000 UTC m=+0.881216744 container cleanup df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:26:20 compute-0 systemd[1]: libpod-conmon-df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7.scope: Deactivated successfully.
Nov 25 08:26:20 compute-0 ceph-mon[75015]: pgmap v1226: 321 pgs: 321 active+clean; 555 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.8 MiB/s wr, 114 op/s
Nov 25 08:26:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:26:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:26:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:26:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:26:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:26:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:26:20 compute-0 nova_compute[253538]: 2025-11-25 08:26:20.439 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:26:20 compute-0 nova_compute[253538]: 2025-11-25 08:26:20.739 253542 DEBUG nova.compute.manager [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-vif-unplugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:20 compute-0 nova_compute[253538]: 2025-11-25 08:26:20.739 253542 DEBUG oslo_concurrency.lockutils [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:20 compute-0 nova_compute[253538]: 2025-11-25 08:26:20.740 253542 DEBUG oslo_concurrency.lockutils [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:20 compute-0 nova_compute[253538]: 2025-11-25 08:26:20.740 253542 DEBUG oslo_concurrency.lockutils [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:20 compute-0 nova_compute[253538]: 2025-11-25 08:26:20.740 253542 DEBUG nova.compute.manager [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] No waiting events found dispatching network-vif-unplugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:20 compute-0 nova_compute[253538]: 2025-11-25 08:26:20.741 253542 DEBUG nova.compute.manager [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-vif-unplugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:26:20 compute-0 nova_compute[253538]: 2025-11-25 08:26:20.741 253542 DEBUG nova.compute.manager [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:20 compute-0 nova_compute[253538]: 2025-11-25 08:26:20.742 253542 DEBUG oslo_concurrency.lockutils [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:20 compute-0 nova_compute[253538]: 2025-11-25 08:26:20.742 253542 DEBUG oslo_concurrency.lockutils [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:20 compute-0 nova_compute[253538]: 2025-11-25 08:26:20.742 253542 DEBUG oslo_concurrency.lockutils [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:20 compute-0 nova_compute[253538]: 2025-11-25 08:26:20.743 253542 DEBUG nova.compute.manager [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] No waiting events found dispatching network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:20 compute-0 nova_compute[253538]: 2025-11-25 08:26:20.743 253542 WARNING nova.compute.manager [req-33f4f6af-4453-461a-a92b-5bc0bc7efea9 req-c8aaae13-cd56-43a6-a815-a589c63127c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received unexpected event network-vif-plugged-6283ff13-d854-41d6-8a7a-eab602cc4cf4 for instance with vm_state active and task_state deleting.
Nov 25 08:26:20 compute-0 podman[283932]: 2025-11-25 08:26:20.887209519 +0000 UTC m=+0.682513723 container remove df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:26:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.901 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eaac6397-4f36-470c-b9b1-e55409746c54]: (4, ('Tue Nov 25 08:26:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428 (df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7)\ndf9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7\nTue Nov 25 08:26:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428 (df9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7)\ndf9bbde1e0097c2246b98c50c44ed8de68e62f49a5f07f17aea74da675ba71b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.903 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[72751ae1-8963-48e8-8b4c-2b537961d6bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.904 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0cb07bc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:20 compute-0 nova_compute[253538]: 2025-11-25 08:26:20.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:20 compute-0 kernel: tapf0cb07bc-d0: left promiscuous mode
Nov 25 08:26:20 compute-0 nova_compute[253538]: 2025-11-25 08:26:20.931 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.935 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[be05cd4e-9d2c-419a-a000-233bc9e1cd07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.953 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aef8aa50-1b70-4b91-8bfb-366c725c415d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.955 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2edda1ef-3f65-44db-9fb9-08d6ed9a8304]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.974 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6850f3c8-f89c-49fc-81ee-16f819338f84]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449918, 'reachable_time': 28754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283969, 'error': None, 'target': 'ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.978 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f0cb07bc-dc94-4b65-bb7f-100ce36c9428 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:26:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:20.979 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[69e61c85-5243-4657-834c-e1803c6aff69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:20 compute-0 systemd[1]: run-netns-ovnmeta\x2df0cb07bc\x2ddc94\x2d4b65\x2dbb7f\x2d100ce36c9428.mount: Deactivated successfully.
Nov 25 08:26:21 compute-0 nova_compute[253538]: 2025-11-25 08:26:21.001 253542 DEBUG nova.network.neutron [req-5e6c7527-69a6-493f-833b-481455803dfe req-0a21b136-3915-47ff-a24e-e0745ce46e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updated VIF entry in instance network info cache for port fdb3703c-f8da-4c10-9784-ed63bfe93fe1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:26:21 compute-0 nova_compute[253538]: 2025-11-25 08:26:21.002 253542 DEBUG nova.network.neutron [req-5e6c7527-69a6-493f-833b-481455803dfe req-0a21b136-3915-47ff-a24e-e0745ce46e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updating instance_info_cache with network_info: [{"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:21 compute-0 nova_compute[253538]: 2025-11-25 08:26:21.014 253542 DEBUG oslo_concurrency.lockutils [req-5e6c7527-69a6-493f-833b-481455803dfe req-0a21b136-3915-47ff-a24e-e0745ce46e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1664fad5-765c-4ecc-93e2-6f96c7fb6d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:26:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 563 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 08:26:21 compute-0 podman[283977]: 2025-11-25 08:26:21.049175382 +0000 UTC m=+0.024736196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:26:21 compute-0 nova_compute[253538]: 2025-11-25 08:26:21.258 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059166.2564685, 39580ba3-504b-4e17-b64f-f44ef66091da => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:26:21 compute-0 nova_compute[253538]: 2025-11-25 08:26:21.258 253542 INFO nova.compute.manager [-] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] VM Stopped (Lifecycle Event)
Nov 25 08:26:21 compute-0 nova_compute[253538]: 2025-11-25 08:26:21.298 253542 DEBUG nova.compute.manager [None req-65bcaea8-254e-4e1a-b52e-5318a3029571 - - - - - -] [instance: 39580ba3-504b-4e17-b64f-f44ef66091da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:26:21 compute-0 nova_compute[253538]: 2025-11-25 08:26:21.359 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:21 compute-0 nova_compute[253538]: 2025-11-25 08:26:21.804 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:21 compute-0 nova_compute[253538]: 2025-11-25 08:26:21.804 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:21 compute-0 nova_compute[253538]: 2025-11-25 08:26:21.805 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:21 compute-0 nova_compute[253538]: 2025-11-25 08:26:21.805 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:21 compute-0 nova_compute[253538]: 2025-11-25 08:26:21.805 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:21 compute-0 nova_compute[253538]: 2025-11-25 08:26:21.806 253542 INFO nova.compute.manager [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Terminating instance
Nov 25 08:26:21 compute-0 nova_compute[253538]: 2025-11-25 08:26:21.807 253542 DEBUG nova.compute.manager [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:26:22 compute-0 podman[283977]: 2025-11-25 08:26:22.584856511 +0000 UTC m=+1.560417275 container create d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 08:26:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:22.585 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:26:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:22.587 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:26:22 compute-0 ceph-mon[75015]: pgmap v1227: 321 pgs: 321 active+clean; 563 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 08:26:22 compute-0 systemd[1]: Started libpod-conmon-d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7.scope.
Nov 25 08:26:22 compute-0 kernel: tapfdb3703c-f8 (unregistering): left promiscuous mode
Nov 25 08:26:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:26:22 compute-0 NetworkManager[48915]: <info>  [1764059182.6765] device (tapfdb3703c-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:26:22 compute-0 ovn_controller[152859]: 2025-11-25T08:26:22Z|00098|binding|INFO|Releasing lport fdb3703c-f8da-4c10-9784-ed63bfe93fe1 from this chassis (sb_readonly=0)
Nov 25 08:26:22 compute-0 ovn_controller[152859]: 2025-11-25T08:26:22Z|00099|binding|INFO|Setting lport fdb3703c-f8da-4c10-9784-ed63bfe93fe1 down in Southbound
Nov 25 08:26:22 compute-0 ovn_controller[152859]: 2025-11-25T08:26:22Z|00100|binding|INFO|Removing iface tapfdb3703c-f8 ovn-installed in OVS
Nov 25 08:26:22 compute-0 nova_compute[253538]: 2025-11-25 08:26:22.692 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:22.706 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:8b:90 10.100.0.11'], port_security=['fa:16:3e:f0:8b:90 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1664fad5-765c-4ecc-93e2-6f96c7fb6d44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d3671fc1a3f4b319a62f23168a9df72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '487cd315-808b-491d-a4f3-9b5dcda46b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddf7b92c-cc8c-4886-ba67-90d06b198671, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=fdb3703c-f8da-4c10-9784-ed63bfe93fe1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:26:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:22.708 162739 INFO neutron.agent.ovn.metadata.agent [-] Port fdb3703c-f8da-4c10-9784-ed63bfe93fe1 in datapath f86a7b06-d9db-4462-bd9b-8ad648dec7f4 unbound from our chassis
Nov 25 08:26:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:22.711 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f86a7b06-d9db-4462-bd9b-8ad648dec7f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:26:22 compute-0 nova_compute[253538]: 2025-11-25 08:26:22.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:22.712 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[40ac1e11-655e-4b79-9ff0-41581e0dc1da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:22.715 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4 namespace which is not needed anymore
Nov 25 08:26:22 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000013.scope: Deactivated successfully.
Nov 25 08:26:22 compute-0 podman[283977]: 2025-11-25 08:26:22.750449114 +0000 UTC m=+1.726009878 container init d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:26:22 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000013.scope: Consumed 14.413s CPU time.
Nov 25 08:26:22 compute-0 systemd-machined[215790]: Machine qemu-21-instance-00000013 terminated.
Nov 25 08:26:22 compute-0 podman[283977]: 2025-11-25 08:26:22.760141923 +0000 UTC m=+1.735702687 container start d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 08:26:22 compute-0 podman[283977]: 2025-11-25 08:26:22.765696357 +0000 UTC m=+1.741257131 container attach d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:26:22 compute-0 systemd[1]: libpod-d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7.scope: Deactivated successfully.
Nov 25 08:26:22 compute-0 festive_vaughan[283993]: 167 167
Nov 25 08:26:22 compute-0 conmon[283993]: conmon d9da1a08d30185296a0a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7.scope/container/memory.events
Nov 25 08:26:22 compute-0 podman[283977]: 2025-11-25 08:26:22.769960954 +0000 UTC m=+1.745521718 container died d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 08:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f31e0816a28c08c560d761f2bdeff31c1de2cc80ffa721a8db96e71131d706f6-merged.mount: Deactivated successfully.
Nov 25 08:26:22 compute-0 podman[283977]: 2025-11-25 08:26:22.815825315 +0000 UTC m=+1.791386079 container remove d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 08:26:22 compute-0 nova_compute[253538]: 2025-11-25 08:26:22.829 253542 INFO nova.virt.libvirt.driver [-] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Instance destroyed successfully.
Nov 25 08:26:22 compute-0 nova_compute[253538]: 2025-11-25 08:26:22.831 253542 DEBUG nova.objects.instance [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lazy-loading 'resources' on Instance uuid 1664fad5-765c-4ecc-93e2-6f96c7fb6d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:26:22 compute-0 systemd[1]: libpod-conmon-d9da1a08d30185296a0ac235796cfbeee6a924457684676df4eada67483807e7.scope: Deactivated successfully.
Nov 25 08:26:22 compute-0 nova_compute[253538]: 2025-11-25 08:26:22.856 253542 DEBUG nova.virt.libvirt.vif [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1794284611',display_name='tempest-FloatingIPsAssociationTestJSON-server-1794284611',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1794284611',id=19,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d3671fc1a3f4b319a62f23168a9df72',ramdisk_id='',reservation_id='r-f695m80c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1833680054',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1833680054-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:25:30Z,user_data=None,user_id='d61511e82c674abeb4ba87a4e5c5bf9d',uuid=1664fad5-765c-4ecc-93e2-6f96c7fb6d44,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:26:22 compute-0 nova_compute[253538]: 2025-11-25 08:26:22.857 253542 DEBUG nova.network.os_vif_util [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converting VIF {"id": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "address": "fa:16:3e:f0:8b:90", "network": {"id": "f86a7b06-d9db-4462-bd9b-8ad648dec7f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-795190525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d3671fc1a3f4b319a62f23168a9df72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb3703c-f8", "ovs_interfaceid": "fdb3703c-f8da-4c10-9784-ed63bfe93fe1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:26:22 compute-0 nova_compute[253538]: 2025-11-25 08:26:22.857 253542 DEBUG nova.network.os_vif_util [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:8b:90,bridge_name='br-int',has_traffic_filtering=True,id=fdb3703c-f8da-4c10-9784-ed63bfe93fe1,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb3703c-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:26:22 compute-0 nova_compute[253538]: 2025-11-25 08:26:22.858 253542 DEBUG os_vif [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:8b:90,bridge_name='br-int',has_traffic_filtering=True,id=fdb3703c-f8da-4c10-9784-ed63bfe93fe1,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb3703c-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:26:22 compute-0 nova_compute[253538]: 2025-11-25 08:26:22.859 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:22 compute-0 nova_compute[253538]: 2025-11-25 08:26:22.860 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdb3703c-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:22 compute-0 nova_compute[253538]: 2025-11-25 08:26:22.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:22 compute-0 nova_compute[253538]: 2025-11-25 08:26:22.863 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:26:22 compute-0 nova_compute[253538]: 2025-11-25 08:26:22.866 253542 INFO os_vif [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:8b:90,bridge_name='br-int',has_traffic_filtering=True,id=fdb3703c-f8da-4c10-9784-ed63bfe93fe1,network=Network(f86a7b06-d9db-4462-bd9b-8ad648dec7f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb3703c-f8')
Nov 25 08:26:22 compute-0 neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4[281468]: [NOTICE]   (281474) : haproxy version is 2.8.14-c23fe91
Nov 25 08:26:22 compute-0 neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4[281468]: [NOTICE]   (281474) : path to executable is /usr/sbin/haproxy
Nov 25 08:26:22 compute-0 neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4[281468]: [WARNING]  (281474) : Exiting Master process...
Nov 25 08:26:22 compute-0 neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4[281468]: [ALERT]    (281474) : Current worker (281476) exited with code 143 (Terminated)
Nov 25 08:26:22 compute-0 neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4[281468]: [WARNING]  (281474) : All workers exited. Exiting... (0)
Nov 25 08:26:22 compute-0 systemd[1]: libpod-e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84.scope: Deactivated successfully.
Nov 25 08:26:22 compute-0 podman[284036]: 2025-11-25 08:26:22.902925115 +0000 UTC m=+0.070663796 container died e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 08:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84-userdata-shm.mount: Deactivated successfully.
Nov 25 08:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-afd67979dba2879b6102202e2641e51318f111cb23f4aeb0a647e6c7456f9106-merged.mount: Deactivated successfully.
Nov 25 08:26:22 compute-0 podman[284036]: 2025-11-25 08:26:22.943968382 +0000 UTC m=+0.111707063 container cleanup e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 08:26:22 compute-0 systemd[1]: libpod-conmon-e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84.scope: Deactivated successfully.
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.024 253542 INFO nova.virt.libvirt.driver [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Deleting instance files /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a_del
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.026 253542 INFO nova.virt.libvirt.driver [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Deletion of /var/lib/nova/instances/ceb93a9d-5e18-4351-9cfa-3949c00b448a_del complete
Nov 25 08:26:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 563 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 1003 KiB/s rd, 1.9 MiB/s wr, 109 op/s
Nov 25 08:26:23 compute-0 podman[284090]: 2025-11-25 08:26:23.04214979 +0000 UTC m=+0.063409317 container remove e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 08:26:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.049 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1e44b79b-9854-4eda-a60a-978d894bf515]: (4, ('Tue Nov 25 08:26:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4 (e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84)\ne41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84\nTue Nov 25 08:26:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4 (e41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84)\ne41149f548783254118977cc5d5669b5407f85f52828edec42cb6985c8922e84\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.052 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7580a085-f973-443c-a17e-f4018d227ad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.053 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86a7b06-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.055 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:23 compute-0 kernel: tapf86a7b06-d0: left promiscuous mode
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.057 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.061 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe1f942-5c5d-4500-a5a3-ef7081bec513]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:23 compute-0 podman[284100]: 2025-11-25 08:26:23.07145367 +0000 UTC m=+0.072468697 container create d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.076 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.085 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[15ff8c87-fd4b-406a-b19c-80715de76a92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.093 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b3884268-57bc-4063-bad1-405e3233ca29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.096 253542 INFO nova.compute.manager [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Took 4.48 seconds to destroy the instance on the hypervisor.
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.097 253542 DEBUG oslo.service.loopingcall [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.097 253542 DEBUG nova.compute.manager [-] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.097 253542 DEBUG nova.network.neutron [-] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:26:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.111 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[44dfc36a-45d8-4b3b-924a-8e2d8703dba8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448068, 'reachable_time': 28220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284124, 'error': None, 'target': 'ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.114 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f86a7b06-d9db-4462-bd9b-8ad648dec7f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:26:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:23.114 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[91323fd4-1aa5-4291-a501-8f2d7359d651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:23 compute-0 systemd[1]: Started libpod-conmon-d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a.scope.
Nov 25 08:26:23 compute-0 podman[284100]: 2025-11-25 08:26:23.047034894 +0000 UTC m=+0.048049951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:26:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1cad7d95bc4d2cf12c5cdedea9f2c67ef6e9ca5e2f63e4c0d5b8678620710c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1cad7d95bc4d2cf12c5cdedea9f2c67ef6e9ca5e2f63e4c0d5b8678620710c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1cad7d95bc4d2cf12c5cdedea9f2c67ef6e9ca5e2f63e4c0d5b8678620710c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1cad7d95bc4d2cf12c5cdedea9f2c67ef6e9ca5e2f63e4c0d5b8678620710c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1cad7d95bc4d2cf12c5cdedea9f2c67ef6e9ca5e2f63e4c0d5b8678620710c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:26:23 compute-0 podman[284100]: 2025-11-25 08:26:23.174962075 +0000 UTC m=+0.175977132 container init d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:26:23 compute-0 podman[284100]: 2025-11-25 08:26:23.188352526 +0000 UTC m=+0.189367553 container start d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:26:23 compute-0 podman[284100]: 2025-11-25 08:26:23.192584463 +0000 UTC m=+0.193599510 container attach d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.213 253542 DEBUG nova.compute.manager [req-06f89d99-6e38-4d91-b3f0-f7dcb649d298 req-86856b76-7820-440a-ab2f-44d668f882d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-changed-2089bf75-6119-4c42-a326-989b3931ec08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.214 253542 DEBUG nova.compute.manager [req-06f89d99-6e38-4d91-b3f0-f7dcb649d298 req-86856b76-7820-440a-ab2f-44d668f882d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Refreshing instance network info cache due to event network-changed-2089bf75-6119-4c42-a326-989b3931ec08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.214 253542 DEBUG oslo_concurrency.lockutils [req-06f89d99-6e38-4d91-b3f0-f7dcb649d298 req-86856b76-7820-440a-ab2f-44d668f882d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.214 253542 DEBUG oslo_concurrency.lockutils [req-06f89d99-6e38-4d91-b3f0-f7dcb649d298 req-86856b76-7820-440a-ab2f-44d668f882d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.215 253542 DEBUG nova.network.neutron [req-06f89d99-6e38-4d91-b3f0-f7dcb649d298 req-86856b76-7820-440a-ab2f-44d668f882d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Refreshing network info cache for port 2089bf75-6119-4c42-a326-989b3931ec08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.287 253542 INFO nova.virt.libvirt.driver [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Deleting instance files /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44_del
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.288 253542 INFO nova.virt.libvirt.driver [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Deletion of /var/lib/nova/instances/1664fad5-765c-4ecc-93e2-6f96c7fb6d44_del complete
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.347 253542 INFO nova.compute.manager [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Took 1.54 seconds to destroy the instance on the hypervisor.
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.348 253542 DEBUG oslo.service.loopingcall [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.348 253542 DEBUG nova.compute.manager [-] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:26:23 compute-0 nova_compute[253538]: 2025-11-25 08:26:23.349 253542 DEBUG nova.network.neutron [-] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:26:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:26:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:26:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:26:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:26:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:26:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:26:23 compute-0 systemd[1]: run-netns-ovnmeta\x2df86a7b06\x2dd9db\x2d4462\x2dbd9b\x2d8ad648dec7f4.mount: Deactivated successfully.
Nov 25 08:26:23 compute-0 ceph-mon[75015]: pgmap v1228: 321 pgs: 321 active+clean; 563 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 1003 KiB/s rd, 1.9 MiB/s wr, 109 op/s
Nov 25 08:26:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:26:24 compute-0 laughing_carson[284127]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:26:24 compute-0 laughing_carson[284127]: --> relative data size: 1.0
Nov 25 08:26:24 compute-0 laughing_carson[284127]: --> All data devices are unavailable
Nov 25 08:26:24 compute-0 systemd[1]: libpod-d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a.scope: Deactivated successfully.
Nov 25 08:26:24 compute-0 podman[284100]: 2025-11-25 08:26:24.265558534 +0000 UTC m=+1.266573591 container died d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 08:26:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-be1cad7d95bc4d2cf12c5cdedea9f2c67ef6e9ca5e2f63e4c0d5b8678620710c-merged.mount: Deactivated successfully.
Nov 25 08:26:24 compute-0 podman[284100]: 2025-11-25 08:26:24.327524889 +0000 UTC m=+1.328539906 container remove d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 08:26:24 compute-0 systemd[1]: libpod-conmon-d27cb221c10778b905a507b24ab5c1867613f732f8f592d96e60efeb68688b4a.scope: Deactivated successfully.
Nov 25 08:26:24 compute-0 sudo[283894]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:24 compute-0 sudo[284171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:26:24 compute-0 sudo[284171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:24 compute-0 sudo[284171]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:24 compute-0 sudo[284196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:26:24 compute-0 sudo[284196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:24 compute-0 sudo[284196]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:24 compute-0 sudo[284221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:26:24 compute-0 sudo[284221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:24 compute-0 sudo[284221]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:24 compute-0 sudo[284246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:26:24 compute-0 sudo[284246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:24 compute-0 nova_compute[253538]: 2025-11-25 08:26:24.714 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:24 compute-0 kernel: tap4ad9572b-6a (unregistering): left promiscuous mode
Nov 25 08:26:24 compute-0 NetworkManager[48915]: <info>  [1764059184.9381] device (tap4ad9572b-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:26:24 compute-0 nova_compute[253538]: 2025-11-25 08:26:24.980 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:24 compute-0 ovn_controller[152859]: 2025-11-25T08:26:24Z|00101|binding|INFO|Releasing lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe from this chassis (sb_readonly=0)
Nov 25 08:26:24 compute-0 ovn_controller[152859]: 2025-11-25T08:26:24Z|00102|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe down in Southbound
Nov 25 08:26:24 compute-0 ovn_controller[152859]: 2025-11-25T08:26:24Z|00103|binding|INFO|Removing iface tap4ad9572b-6a ovn-installed in OVS
Nov 25 08:26:24 compute-0 nova_compute[253538]: 2025-11-25 08:26:24.983 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:24.998 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:0e:e0 10.100.0.6'], port_security=['fa:16:3e:5e:0e:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:26:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:24.999 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe in datapath 93269c36-ab23-4d95-925a-798173550624 unbound from our chassis
Nov 25 08:26:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.001 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:25 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 25 08:26:25 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000010.scope: Consumed 15.645s CPU time.
Nov 25 08:26:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.019 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c6c0cf-04c0-49fb-9ac8-a786286976e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:25 compute-0 systemd-machined[215790]: Machine qemu-25-instance-00000010 terminated.
Nov 25 08:26:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 449 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 190 op/s
Nov 25 08:26:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.068 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[abf57031-11ec-4a67-9885-48a3fa191bca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.071 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[82b24518-e5e6-447e-ab05-c3fb48981841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:25 compute-0 podman[284317]: 2025-11-25 08:26:25.088549695 +0000 UTC m=+0.043720772 container create a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 08:26:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.109 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[692c866a-f4e2-4602-978f-74fb3ac0690e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:25 compute-0 systemd[1]: Started libpod-conmon-a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926.scope.
Nov 25 08:26:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.143 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[84fc0e8f-141d-4033-ae02-02465288fd8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 15, 'rx_bytes': 952, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 15, 'rx_bytes': 952, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284334, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:25 compute-0 podman[284317]: 2025-11-25 08:26:25.069476287 +0000 UTC m=+0.024647404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:26:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.164 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef041dc-21e7-439b-a784-c327f3b9d96a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284340, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284340, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.166 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.167 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.175 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.175 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:26:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.175 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.175 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:25.176 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:26:25 compute-0 podman[284317]: 2025-11-25 08:26:25.20217247 +0000 UTC m=+0.157343547 container init a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_margulis, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.203 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:25 compute-0 podman[284317]: 2025-11-25 08:26:25.212241589 +0000 UTC m=+0.167412656 container start a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:26:25 compute-0 podman[284317]: 2025-11-25 08:26:25.219759117 +0000 UTC m=+0.174930184 container attach a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_margulis, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.219 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:25 compute-0 dreamy_margulis[284337]: 167 167
Nov 25 08:26:25 compute-0 systemd[1]: libpod-a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926.scope: Deactivated successfully.
Nov 25 08:26:25 compute-0 podman[284317]: 2025-11-25 08:26:25.229246189 +0000 UTC m=+0.184417266 container died a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:26:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b53ac15188cb2d94fe363d2caf9f843b8d435f08b2228d3dd98f43100bb8ea5f-merged.mount: Deactivated successfully.
Nov 25 08:26:25 compute-0 podman[284317]: 2025-11-25 08:26:25.273212126 +0000 UTC m=+0.228383193 container remove a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_margulis, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:26:25 compute-0 systemd[1]: libpod-conmon-a588c6b2effbec8717915a3b423733334c3a22751a1a5f6b2746d34053e31926.scope: Deactivated successfully.
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.420 253542 DEBUG nova.network.neutron [-] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.455 253542 INFO nova.compute.manager [-] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Took 2.36 seconds to deallocate network for instance.
Nov 25 08:26:25 compute-0 podman[284372]: 2025-11-25 08:26:25.472688348 +0000 UTC m=+0.059422496 container create e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ishizaka, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.507 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.508 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:25 compute-0 systemd[1]: Started libpod-conmon-e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545.scope.
Nov 25 08:26:25 compute-0 podman[284372]: 2025-11-25 08:26:25.443426588 +0000 UTC m=+0.030160756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:26:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f742f0aea007c0d03c4b5365e50b60f01100baa162c51f466d44f017dee90a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f742f0aea007c0d03c4b5365e50b60f01100baa162c51f466d44f017dee90a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f742f0aea007c0d03c4b5365e50b60f01100baa162c51f466d44f017dee90a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f742f0aea007c0d03c4b5365e50b60f01100baa162c51f466d44f017dee90a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:26:25 compute-0 podman[284372]: 2025-11-25 08:26:25.590840138 +0000 UTC m=+0.177574316 container init e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 08:26:25 compute-0 podman[284372]: 2025-11-25 08:26:25.597597666 +0000 UTC m=+0.184331824 container start e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ishizaka, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 08:26:25 compute-0 podman[284372]: 2025-11-25 08:26:25.603369506 +0000 UTC m=+0.190103684 container attach e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ishizaka, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.607 253542 INFO nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance shutdown successfully after 25 seconds.
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.612 253542 INFO nova.virt.libvirt.driver [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance destroyed successfully.
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.617 253542 INFO nova.virt.libvirt.driver [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance destroyed successfully.
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.618 253542 DEBUG nova.virt.libvirt.vif [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:25:58Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.618 253542 DEBUG nova.network.os_vif_util [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.618 253542 DEBUG nova.network.os_vif_util [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.619 253542 DEBUG os_vif [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.622 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.622 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ad9572b-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.624 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.625 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.626 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.628 253542 INFO os_vif [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a')
Nov 25 08:26:25 compute-0 nova_compute[253538]: 2025-11-25 08:26:25.682 253542 DEBUG oslo_concurrency.processutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:26 compute-0 ceph-mon[75015]: pgmap v1229: 321 pgs: 321 active+clean; 449 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 190 op/s
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.194 253542 INFO nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deleting instance files /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_del
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.195 253542 INFO nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deletion of /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_del complete
Nov 25 08:26:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:26:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481982083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.250 253542 DEBUG oslo_concurrency.processutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.257 253542 DEBUG nova.compute.provider_tree [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.273 253542 DEBUG nova.scheduler.client.report [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.335 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.363 253542 INFO nova.scheduler.client.report [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Deleted allocations for instance ceb93a9d-5e18-4351-9cfa-3949c00b448a
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.382 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.383 253542 INFO nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating image(s)
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.407 253542 DEBUG nova.storage.rbd_utils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.441 253542 DEBUG nova.storage.rbd_utils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]: {
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:     "0": [
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:         {
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "devices": [
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "/dev/loop3"
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             ],
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "lv_name": "ceph_lv0",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "lv_size": "21470642176",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "name": "ceph_lv0",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "tags": {
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.cluster_name": "ceph",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.crush_device_class": "",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.encrypted": "0",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.osd_id": "0",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.type": "block",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.vdo": "0"
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             },
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "type": "block",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "vg_name": "ceph_vg0"
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:         }
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:     ],
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:     "1": [
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:         {
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "devices": [
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "/dev/loop4"
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             ],
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "lv_name": "ceph_lv1",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "lv_size": "21470642176",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "name": "ceph_lv1",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "tags": {
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.cluster_name": "ceph",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.crush_device_class": "",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.encrypted": "0",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.osd_id": "1",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.type": "block",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.vdo": "0"
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             },
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "type": "block",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "vg_name": "ceph_vg1"
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:         }
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:     ],
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:     "2": [
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:         {
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "devices": [
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "/dev/loop5"
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             ],
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "lv_name": "ceph_lv2",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "lv_size": "21470642176",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "name": "ceph_lv2",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "tags": {
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.cluster_name": "ceph",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.crush_device_class": "",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.encrypted": "0",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.osd_id": "2",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.type": "block",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:                 "ceph.vdo": "0"
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             },
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "type": "block",
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:             "vg_name": "ceph_vg2"
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:         }
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]:     ]
Nov 25 08:26:26 compute-0 focused_ishizaka[284389]: }
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.464 253542 DEBUG nova.storage.rbd_utils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.467 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.489 253542 DEBUG oslo_concurrency.lockutils [None req-d54cf114-4cd0-4343-8d8e-85625af4e156 c53798457642457e8c93278c6bbae0b7 c4ea3de796e6464fbf65835dc4c3ad79 - - default default] Lock "ceb93a9d-5e18-4351-9cfa-3949c00b448a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:26 compute-0 systemd[1]: libpod-e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545.scope: Deactivated successfully.
Nov 25 08:26:26 compute-0 podman[284372]: 2025-11-25 08:26:26.495209162 +0000 UTC m=+1.081943320 container died e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ishizaka, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:26:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f742f0aea007c0d03c4b5365e50b60f01100baa162c51f466d44f017dee90a3-merged.mount: Deactivated successfully.
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.517 253542 DEBUG nova.network.neutron [req-06f89d99-6e38-4d91-b3f0-f7dcb649d298 req-86856b76-7820-440a-ab2f-44d668f882d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updated VIF entry in instance network info cache for port 2089bf75-6119-4c42-a326-989b3931ec08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.519 253542 DEBUG nova.network.neutron [req-06f89d99-6e38-4d91-b3f0-f7dcb649d298 req-86856b76-7820-440a-ab2f-44d668f882d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updating instance_info_cache with network_info: [{"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.529 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.530 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.531 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.531 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.549 253542 DEBUG nova.storage.rbd_utils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:26 compute-0 podman[284372]: 2025-11-25 08:26:26.550509723 +0000 UTC m=+1.137243871 container remove e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.552 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:26 compute-0 systemd[1]: libpod-conmon-e569ddb9b8361d60a11cea8df8ffadb8da02e075f3f0286e2a1f81fc17b00545.scope: Deactivated successfully.
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.582 253542 DEBUG nova.network.neutron [-] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.584 253542 DEBUG oslo_concurrency.lockutils [req-06f89d99-6e38-4d91-b3f0-f7dcb649d298 req-86856b76-7820-440a-ab2f-44d668f882d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:26:26 compute-0 sudo[284246]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:26.591 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.607 253542 DEBUG nova.compute.manager [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-vif-unplugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.607 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.607 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.608 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.608 253542 DEBUG nova.compute.manager [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] No waiting events found dispatching network-vif-unplugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.608 253542 DEBUG nova.compute.manager [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-vif-unplugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.608 253542 DEBUG nova.compute.manager [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-changed-2089bf75-6119-4c42-a326-989b3931ec08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.608 253542 DEBUG nova.compute.manager [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Refreshing instance network info cache due to event network-changed-2089bf75-6119-4c42-a326-989b3931ec08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.609 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.609 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.609 253542 DEBUG nova.network.neutron [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Refreshing network info cache for port 2089bf75-6119-4c42-a326-989b3931ec08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.612 253542 INFO nova.compute.manager [-] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Took 3.26 seconds to deallocate network for instance.
Nov 25 08:26:26 compute-0 sudo[284526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.663 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.663 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:26 compute-0 sudo[284526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:26 compute-0 sudo[284526]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:26 compute-0 sudo[284569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:26:26 compute-0 sudo[284569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:26 compute-0 sudo[284569]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:26 compute-0 sudo[284594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:26:26 compute-0 sudo[284594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:26 compute-0 sudo[284594]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.805 253542 DEBUG oslo_concurrency.processutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.836 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:26 compute-0 sudo[284619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:26:26 compute-0 sudo[284619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:26 compute-0 nova_compute[253538]: 2025-11-25 08:26:26.912 253542 DEBUG nova.storage.rbd_utils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] resizing rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.018 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.019 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Ensure instance console log exists: /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.019 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.020 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.020 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.022 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Start _get_guest_xml network_info=[{"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.033 253542 WARNING nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 08:26:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 398 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1019 KiB/s wr, 191 op/s
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.053 253542 DEBUG nova.virt.libvirt.host [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.054 253542 DEBUG nova.virt.libvirt.host [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.062 253542 DEBUG nova.virt.libvirt.host [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.063 253542 DEBUG nova.virt.libvirt.host [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.064 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.064 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.064 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.064 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.064 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.065 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.065 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.065 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.065 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.065 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.066 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.066 253542 DEBUG nova.virt.hardware [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.066 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.081 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:27 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1481982083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:26:27 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658417652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.224 253542 DEBUG oslo_concurrency.processutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.233 253542 DEBUG nova.compute.provider_tree [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.255 253542 DEBUG nova.scheduler.client.report [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.280 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:27 compute-0 podman[284775]: 2025-11-25 08:26:27.195676271 +0000 UTC m=+0.028237812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:26:27 compute-0 podman[284775]: 2025-11-25 08:26:27.296934294 +0000 UTC m=+0.129495775 container create c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.307 253542 INFO nova.scheduler.client.report [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Deleted allocations for instance 1664fad5-765c-4ecc-93e2-6f96c7fb6d44
Nov 25 08:26:27 compute-0 systemd[1]: Started libpod-conmon-c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c.scope.
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.385 253542 DEBUG oslo_concurrency.lockutils [None req-b75a2694-4fbc-4845-b2ff-c61073766bc9 d61511e82c674abeb4ba87a4e5c5bf9d 2d3671fc1a3f4b319a62f23168a9df72 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:26:27 compute-0 podman[284775]: 2025-11-25 08:26:27.417927144 +0000 UTC m=+0.250488605 container init c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 08:26:27 compute-0 podman[284775]: 2025-11-25 08:26:27.425236196 +0000 UTC m=+0.257797677 container start c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leavitt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:26:27 compute-0 vigorous_leavitt[284814]: 167 167
Nov 25 08:26:27 compute-0 systemd[1]: libpod-c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c.scope: Deactivated successfully.
Nov 25 08:26:27 compute-0 conmon[284814]: conmon c9af7bc7ff20425be8e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c.scope/container/memory.events
Nov 25 08:26:27 compute-0 podman[284775]: 2025-11-25 08:26:27.51605449 +0000 UTC m=+0.348615951 container attach c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 25 08:26:27 compute-0 podman[284775]: 2025-11-25 08:26:27.516461912 +0000 UTC m=+0.349023363 container died c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leavitt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 08:26:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:26:27 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3079511693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.556 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-356e5b7edcbf1016124a63d089578757035be54b6ba09f006c7d5f17c7c2ea4b-merged.mount: Deactivated successfully.
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.596 253542 DEBUG nova.storage.rbd_utils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.601 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:27 compute-0 podman[284775]: 2025-11-25 08:26:27.607119591 +0000 UTC m=+0.439681042 container remove c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:26:27 compute-0 systemd[1]: libpod-conmon-c9af7bc7ff20425be8e412e3d0eff0d1d4404eb1095e39489c2196738addf47c.scope: Deactivated successfully.
Nov 25 08:26:27 compute-0 podman[284879]: 2025-11-25 08:26:27.873601417 +0000 UTC m=+0.084145680 container create 9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 08:26:27 compute-0 podman[284879]: 2025-11-25 08:26:27.810773698 +0000 UTC m=+0.021317941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:26:27 compute-0 systemd[1]: Started libpod-conmon-9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299.scope.
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.960 253542 DEBUG nova.network.neutron [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updated VIF entry in instance network info cache for port 2089bf75-6119-4c42-a326-989b3931ec08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.960 253542 DEBUG nova.network.neutron [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updating instance_info_cache with network_info: [{"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.972 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.973 253542 DEBUG nova.compute.manager [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.975 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.976 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e76c69eb600601acfa40b4fe68bc76649ba23ce23fd23eb60d494d141d1974ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e76c69eb600601acfa40b4fe68bc76649ba23ce23fd23eb60d494d141d1974ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e76c69eb600601acfa40b4fe68bc76649ba23ce23fd23eb60d494d141d1974ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e76c69eb600601acfa40b4fe68bc76649ba23ce23fd23eb60d494d141d1974ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.978 253542 DEBUG oslo_concurrency.lockutils [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1664fad5-765c-4ecc-93e2-6f96c7fb6d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.979 253542 DEBUG nova.compute.manager [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] No waiting events found dispatching network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:27 compute-0 nova_compute[253538]: 2025-11-25 08:26:27.979 253542 WARNING nova.compute.manager [req-0d00b467-f72e-4f38-ab1b-8c00eed02b7b req-3a83c37e-8630-4116-91f6-b878ef1a453b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received unexpected event network-vif-plugged-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 for instance with vm_state active and task_state deleting.
Nov 25 08:26:28 compute-0 podman[284879]: 2025-11-25 08:26:28.040510987 +0000 UTC m=+0.251055230 container init 9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:26:28 compute-0 podman[284879]: 2025-11-25 08:26:28.049549157 +0000 UTC m=+0.260093380 container start 9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:26:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:26:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3656318652' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:26:28 compute-0 podman[284879]: 2025-11-25 08:26:28.069978723 +0000 UTC m=+0.280522946 container attach 9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 08:26:28 compute-0 podman[284892]: 2025-11-25 08:26:28.076610917 +0000 UTC m=+0.157054118 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.085 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.089 253542 DEBUG nova.virt.libvirt.vif [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:26:26Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.090 253542 DEBUG nova.network.os_vif_util [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.092 253542 DEBUG nova.network.os_vif_util [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.099 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:26:28 compute-0 nova_compute[253538]:   <uuid>86bfa56f-56d0-4a5e-b0b2-302c375e37a3</uuid>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   <name>instance-00000010</name>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersAdminTestJSON-server-1649971692</nova:name>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:26:27</nova:creationTime>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:26:28 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:26:28 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:26:28 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:26:28 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:26:28 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:26:28 compute-0 nova_compute[253538]:         <nova:user uuid="76d3377d398a4214a77bc0eb91638ec5">tempest-ServersAdminTestJSON-857664825-project-member</nova:user>
Nov 25 08:26:28 compute-0 nova_compute[253538]:         <nova:project uuid="65a2f983cce14453b2dc9251a520f289">tempest-ServersAdminTestJSON-857664825</nova:project>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:26:28 compute-0 nova_compute[253538]:         <nova:port uuid="4ad9572b-6ac1-4659-8ea6-71b8a32c06fe">
Nov 25 08:26:28 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <system>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <entry name="serial">86bfa56f-56d0-4a5e-b0b2-302c375e37a3</entry>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <entry name="uuid">86bfa56f-56d0-4a5e-b0b2-302c375e37a3</entry>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     </system>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   <os>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   </os>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   <features>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   </features>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk">
Nov 25 08:26:28 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       </source>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:26:28 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config">
Nov 25 08:26:28 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       </source>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:26:28 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:5e:0e:e0"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <target dev="tap4ad9572b-6a"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/console.log" append="off"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <video>
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     </video>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:26:28 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:26:28 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:26:28 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:26:28 compute-0 nova_compute[253538]: </domain>
Nov 25 08:26:28 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.101 253542 DEBUG nova.virt.libvirt.vif [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:26:26Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.102 253542 DEBUG nova.network.os_vif_util [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.103 253542 DEBUG nova.network.os_vif_util [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.104 253542 DEBUG os_vif [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.105 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.106 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.107 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.111 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.111 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ad9572b-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.113 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4ad9572b-6a, col_values=(('external_ids', {'iface-id': '4ad9572b-6ac1-4659-8ea6-71b8a32c06fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:0e:e0', 'vm-uuid': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:28 compute-0 NetworkManager[48915]: <info>  [1764059188.1168] manager: (tap4ad9572b-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.119 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.122 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.123 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.124 253542 INFO os_vif [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a')
Nov 25 08:26:28 compute-0 ceph-mon[75015]: pgmap v1230: 321 pgs: 321 active+clean; 398 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1019 KiB/s wr, 191 op/s
Nov 25 08:26:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1658417652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3079511693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:26:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3656318652' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.177 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.177 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.178 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] No VIF found with MAC fa:16:3e:5e:0e:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.179 253542 INFO nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Using config drive
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.212 253542 DEBUG nova.storage.rbd_utils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.232 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.258 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'keypairs' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.692 253542 DEBUG nova.compute.manager [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Received event network-vif-deleted-6283ff13-d854-41d6-8a7a-eab602cc4cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.693 253542 DEBUG nova.compute.manager [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.693 253542 DEBUG oslo_concurrency.lockutils [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.694 253542 DEBUG oslo_concurrency.lockutils [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.694 253542 DEBUG oslo_concurrency.lockutils [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.695 253542 DEBUG nova.compute.manager [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.695 253542 WARNING nova.compute.manager [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state active and task_state rebuild_spawning.
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.696 253542 DEBUG nova.compute.manager [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Received event network-vif-deleted-fdb3703c-f8da-4c10-9784-ed63bfe93fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.696 253542 DEBUG nova.compute.manager [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.696 253542 DEBUG oslo_concurrency.lockutils [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.697 253542 DEBUG oslo_concurrency.lockutils [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.697 253542 DEBUG oslo_concurrency.lockutils [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.698 253542 DEBUG nova.compute.manager [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.698 253542 WARNING nova.compute.manager [req-1c81eaa6-c2d5-45b5-8cd0-13a82bd9df65 req-3f558dd2-1e12-472c-89af-03cefc564d9b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state active and task_state rebuild_spawning.
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.702 253542 INFO nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Creating config drive at /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.712 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkbur3q6b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.851 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkbur3q6b" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.893 253542 DEBUG nova.storage.rbd_utils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] rbd image 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:28 compute-0 nova_compute[253538]: 2025-11-25 08:26:28.900 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:26:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:26:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3608054424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:26:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:26:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3608054424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:26:29 compute-0 kind_shannon[284905]: {
Nov 25 08:26:29 compute-0 kind_shannon[284905]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:26:29 compute-0 kind_shannon[284905]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:26:29 compute-0 kind_shannon[284905]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:26:29 compute-0 kind_shannon[284905]:         "osd_id": 1,
Nov 25 08:26:29 compute-0 kind_shannon[284905]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:26:29 compute-0 kind_shannon[284905]:         "type": "bluestore"
Nov 25 08:26:29 compute-0 kind_shannon[284905]:     },
Nov 25 08:26:29 compute-0 kind_shannon[284905]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:26:29 compute-0 kind_shannon[284905]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:26:29 compute-0 kind_shannon[284905]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:26:29 compute-0 kind_shannon[284905]:         "osd_id": 2,
Nov 25 08:26:29 compute-0 kind_shannon[284905]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:26:29 compute-0 kind_shannon[284905]:         "type": "bluestore"
Nov 25 08:26:29 compute-0 kind_shannon[284905]:     },
Nov 25 08:26:29 compute-0 kind_shannon[284905]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:26:29 compute-0 kind_shannon[284905]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:26:29 compute-0 kind_shannon[284905]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:26:29 compute-0 kind_shannon[284905]:         "osd_id": 0,
Nov 25 08:26:29 compute-0 kind_shannon[284905]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:26:29 compute-0 kind_shannon[284905]:         "type": "bluestore"
Nov 25 08:26:29 compute-0 kind_shannon[284905]:     }
Nov 25 08:26:29 compute-0 kind_shannon[284905]: }
Nov 25 08:26:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 393 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 190 op/s
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.053 253542 DEBUG oslo_concurrency.processutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config 86bfa56f-56d0-4a5e-b0b2-302c375e37a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.054 253542 INFO nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deleting local config drive /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3/disk.config because it was imported into RBD.
Nov 25 08:26:29 compute-0 systemd[1]: libpod-9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299.scope: Deactivated successfully.
Nov 25 08:26:29 compute-0 systemd[1]: libpod-9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299.scope: Consumed 1.006s CPU time.
Nov 25 08:26:29 compute-0 podman[284879]: 2025-11-25 08:26:29.079328003 +0000 UTC m=+1.289872246 container died 9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 08:26:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e76c69eb600601acfa40b4fe68bc76649ba23ce23fd23eb60d494d141d1974ee-merged.mount: Deactivated successfully.
Nov 25 08:26:29 compute-0 kernel: tap4ad9572b-6a: entered promiscuous mode
Nov 25 08:26:29 compute-0 NetworkManager[48915]: <info>  [1764059189.1125] manager: (tap4ad9572b-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Nov 25 08:26:29 compute-0 ovn_controller[152859]: 2025-11-25T08:26:29Z|00104|binding|INFO|Claiming lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for this chassis.
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.114 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:29 compute-0 ovn_controller[152859]: 2025-11-25T08:26:29Z|00105|binding|INFO|4ad9572b-6ac1-4659-8ea6-71b8a32c06fe: Claiming fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 08:26:29 compute-0 podman[284879]: 2025-11-25 08:26:29.132520045 +0000 UTC m=+1.343064258 container remove 9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 08:26:29 compute-0 ovn_controller[152859]: 2025-11-25T08:26:29Z|00106|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe ovn-installed in OVS
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.136 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:29 compute-0 systemd[1]: libpod-conmon-9970729f0b7d74dcb6d5ebd569d560cdcd4207f1f61d6818ed96ca1e1ad4a299.scope: Deactivated successfully.
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.145 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:29 compute-0 systemd-machined[215790]: New machine qemu-27-instance-00000010.
Nov 25 08:26:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.161 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:0e:e0 10.100.0.6'], port_security=['fa:16:3e:5e:0e:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:26:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.162 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe in datapath 93269c36-ab23-4d95-925a-798173550624 bound to our chassis
Nov 25 08:26:29 compute-0 ovn_controller[152859]: 2025-11-25T08:26:29Z|00107|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe up in Southbound
Nov 25 08:26:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.163 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624
Nov 25 08:26:29 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-00000010.
Nov 25 08:26:29 compute-0 sudo[284619]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3608054424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:26:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3608054424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:26:29 compute-0 systemd-udevd[285034]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:26:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:26:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.178 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5551d83f-8746-4590-9e3e-48db09addcc3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:29 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:26:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:26:29 compute-0 NetworkManager[48915]: <info>  [1764059189.1910] device (tap4ad9572b-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:26:29 compute-0 NetworkManager[48915]: <info>  [1764059189.1924] device (tap4ad9572b-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:26:29 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:26:29 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 9ab726ba-138a-479a-a078-9fd78359ba2c does not exist
Nov 25 08:26:29 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d10dd466-4fcc-49b2-bb69-36236bd85e42 does not exist
Nov 25 08:26:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.208 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f3af8e34-bce8-424e-a8a7-9b48bfd6c7de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.211 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3a72e9fb-8502-4029-a689-941fbd06cc37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.238 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[22e7f59e-6be3-4eb9-8f59-750bcb2833b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:29 compute-0 sudo[285039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:26:29 compute-0 sudo[285039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:29 compute-0 sudo[285039]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.263 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b4702e5e-3235-4729-8ae8-554665c1ae9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 17, 'rx_bytes': 952, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 17, 'rx_bytes': 952, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285070, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.289 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[777dab95-8245-449c-bd5e-dfe29bed0d9a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285080, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285080, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.290 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.293 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.294 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.294 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:26:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.294 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:29.295 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:26:29 compute-0 sudo[285073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:26:29 compute-0 sudo[285073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:26:29 compute-0 sudo[285073]: pam_unix(sudo:session): session closed for user root
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.716 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.763 253542 DEBUG nova.compute.manager [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.764 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.765 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.765 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059189.7644808, 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.765 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] VM Resumed (Lifecycle Event)
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.771 253542 INFO nova.virt.libvirt.driver [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance spawned successfully.
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.771 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.792 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.799 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.799 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.799 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.799 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.800 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.800 253542 DEBUG nova.virt.libvirt.driver [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.804 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.827 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.827 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059189.7678297, 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.828 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] VM Started (Lifecycle Event)
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.845 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.849 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.872 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:26:29 compute-0 nova_compute[253538]: 2025-11-25 08:26:29.899 253542 DEBUG nova.compute.manager [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.025 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.026 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.026 253542 DEBUG nova.objects.instance [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:26:30 compute-0 ceph-mon[75015]: pgmap v1231: 321 pgs: 321 active+clean; 393 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 190 op/s
Nov 25 08:26:30 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:26:30 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:26:30 compute-0 ovn_controller[152859]: 2025-11-25T08:26:30Z|00108|binding|INFO|Releasing lport 26e04d60-4f32-4592-b567-fc34513c5aba from this chassis (sb_readonly=0)
Nov 25 08:26:30 compute-0 ovn_controller[152859]: 2025-11-25T08:26:30Z|00109|binding|INFO|Releasing lport 52d2128c-19c6-4892-8ba5-cc8740039f5e from this chassis (sb_readonly=0)
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.198 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.203 253542 DEBUG oslo_concurrency.lockutils [None req-b2263c80-33ef-445c-930a-78c60ce7c0c9 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:30 compute-0 ovn_controller[152859]: 2025-11-25T08:26:30Z|00110|binding|INFO|Releasing lport 26e04d60-4f32-4592-b567-fc34513c5aba from this chassis (sb_readonly=0)
Nov 25 08:26:30 compute-0 ovn_controller[152859]: 2025-11-25T08:26:30Z|00111|binding|INFO|Releasing lport 52d2128c-19c6-4892-8ba5-cc8740039f5e from this chassis (sb_readonly=0)
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.487 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.793 253542 DEBUG nova.compute.manager [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.794 253542 DEBUG oslo_concurrency.lockutils [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.794 253542 DEBUG oslo_concurrency.lockutils [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.794 253542 DEBUG oslo_concurrency.lockutils [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.794 253542 DEBUG nova.compute.manager [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.794 253542 WARNING nova.compute.manager [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state active and task_state None.
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.795 253542 DEBUG nova.compute.manager [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.795 253542 DEBUG oslo_concurrency.lockutils [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.795 253542 DEBUG oslo_concurrency.lockutils [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.795 253542 DEBUG oslo_concurrency.lockutils [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.795 253542 DEBUG nova.compute.manager [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:30 compute-0 nova_compute[253538]: 2025-11-25 08:26:30.795 253542 WARNING nova.compute.manager [req-98a23fff-d300-475e-9da7-c942dfe11fc0 req-733a5dc0-10b8-496f-8042-09c8d93ac3e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state active and task_state None.
Nov 25 08:26:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 372 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 198 op/s
Nov 25 08:26:31 compute-0 podman[285142]: 2025-11-25 08:26:31.844513615 +0000 UTC m=+0.082752852 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 25 08:26:32 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 25 08:26:32 compute-0 ceph-mon[75015]: pgmap v1232: 321 pgs: 321 active+clean; 372 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 198 op/s
Nov 25 08:26:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 372 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Nov 25 08:26:33 compute-0 nova_compute[253538]: 2025-11-25 08:26:33.118 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:33 compute-0 ovn_controller[152859]: 2025-11-25T08:26:33Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b9:c0:7d 10.100.0.7
Nov 25 08:26:33 compute-0 ovn_controller[152859]: 2025-11-25T08:26:33Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:c0:7d 10.100.0.7
Nov 25 08:26:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:26:34 compute-0 nova_compute[253538]: 2025-11-25 08:26:34.253 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059179.2513971, ceb93a9d-5e18-4351-9cfa-3949c00b448a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:26:34 compute-0 nova_compute[253538]: 2025-11-25 08:26:34.253 253542 INFO nova.compute.manager [-] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] VM Stopped (Lifecycle Event)
Nov 25 08:26:34 compute-0 nova_compute[253538]: 2025-11-25 08:26:34.283 253542 DEBUG nova.compute.manager [None req-e4b6013e-160f-4e09-90e9-b8a56198e01d - - - - - -] [instance: ceb93a9d-5e18-4351-9cfa-3949c00b448a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:26:34 compute-0 ceph-mon[75015]: pgmap v1233: 321 pgs: 321 active+clean; 372 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Nov 25 08:26:34 compute-0 nova_compute[253538]: 2025-11-25 08:26:34.719 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 391 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 277 op/s
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.186 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.187 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.187 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.188 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.189 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.191 253542 INFO nova.compute.manager [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Terminating instance
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.193 253542 DEBUG nova.compute.manager [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:26:35 compute-0 kernel: tap817e9f9b-9d (unregistering): left promiscuous mode
Nov 25 08:26:35 compute-0 NetworkManager[48915]: <info>  [1764059195.2498] device (tap817e9f9b-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.262 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:35 compute-0 ovn_controller[152859]: 2025-11-25T08:26:35Z|00112|binding|INFO|Releasing lport 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 from this chassis (sb_readonly=0)
Nov 25 08:26:35 compute-0 ovn_controller[152859]: 2025-11-25T08:26:35Z|00113|binding|INFO|Setting lport 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 down in Southbound
Nov 25 08:26:35 compute-0 ovn_controller[152859]: 2025-11-25T08:26:35Z|00114|binding|INFO|Removing iface tap817e9f9b-9d ovn-installed in OVS
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.268 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.273 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:3c:73 10.100.0.4'], port_security=['fa:16:3e:50:3c:73 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:26:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.274 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 in datapath 93269c36-ab23-4d95-925a-798173550624 unbound from our chassis
Nov 25 08:26:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.277 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.281 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.301 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3125a6-43cb-4ec1-b193-17d4727ab929]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:35 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Deactivated successfully.
Nov 25 08:26:35 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Consumed 14.763s CPU time.
Nov 25 08:26:35 compute-0 systemd-machined[215790]: Machine qemu-22-instance-00000014 terminated.
Nov 25 08:26:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.333 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4b09113d-8a13-48b4-adac-d32827026da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.337 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f72bb549-b4ff-4c53-82c0-7a7c1ac2ebb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.368 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[58dbbe95-3064-4a79-91bf-479ef2b04366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.392 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae65681-a4ba-4b1a-8ff5-fe350c710972]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 19, 'rx_bytes': 952, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 19, 'rx_bytes': 952, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285174, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.414 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a082b5-fbd4-4106-bf7c-87b3c90ae619]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285175, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285175, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.418 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.420 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.420 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.425 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.426 253542 INFO nova.virt.libvirt.driver [-] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Instance destroyed successfully.
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.427 253542 DEBUG nova.objects.instance [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'resources' on Instance uuid 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:26:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.429 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.429 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:26:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.430 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:35.431 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.440 253542 DEBUG nova.virt.libvirt.vif [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:25:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-485239503',display_name='tempest-ServersAdminTestJSON-server-485239503',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-485239503',id=20,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-bxwnte0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:25:36Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.441 253542 DEBUG nova.network.os_vif_util [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "address": "fa:16:3e:50:3c:73", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817e9f9b-9d", "ovs_interfaceid": "817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.441 253542 DEBUG nova.network.os_vif_util [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:50:3c:73,bridge_name='br-int',has_traffic_filtering=True,id=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817e9f9b-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.442 253542 DEBUG os_vif [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:3c:73,bridge_name='br-int',has_traffic_filtering=True,id=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817e9f9b-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.443 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.444 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap817e9f9b-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.445 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.447 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.454 253542 INFO os_vif [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:3c:73,bridge_name='br-int',has_traffic_filtering=True,id=817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817e9f9b-9d')
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.775 253542 DEBUG nova.compute.manager [req-09f24101-bdce-4008-91e7-50d3bd6b78b5 req-c342ca05-35d6-4f80-a814-7180fe6bffbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received event network-vif-unplugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.776 253542 DEBUG oslo_concurrency.lockutils [req-09f24101-bdce-4008-91e7-50d3bd6b78b5 req-c342ca05-35d6-4f80-a814-7180fe6bffbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.776 253542 DEBUG oslo_concurrency.lockutils [req-09f24101-bdce-4008-91e7-50d3bd6b78b5 req-c342ca05-35d6-4f80-a814-7180fe6bffbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.776 253542 DEBUG oslo_concurrency.lockutils [req-09f24101-bdce-4008-91e7-50d3bd6b78b5 req-c342ca05-35d6-4f80-a814-7180fe6bffbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.777 253542 DEBUG nova.compute.manager [req-09f24101-bdce-4008-91e7-50d3bd6b78b5 req-c342ca05-35d6-4f80-a814-7180fe6bffbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] No waiting events found dispatching network-vif-unplugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.777 253542 DEBUG nova.compute.manager [req-09f24101-bdce-4008-91e7-50d3bd6b78b5 req-c342ca05-35d6-4f80-a814-7180fe6bffbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received event network-vif-unplugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.850 253542 INFO nova.virt.libvirt.driver [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Deleting instance files /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_del
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.852 253542 INFO nova.virt.libvirt.driver [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Deletion of /var/lib/nova/instances/740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2_del complete
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.926 253542 INFO nova.compute.manager [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.928 253542 DEBUG oslo.service.loopingcall [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.929 253542 DEBUG nova.compute.manager [-] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:26:35 compute-0 nova_compute[253538]: 2025-11-25 08:26:35.930 253542 DEBUG nova.network.neutron [-] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:26:36 compute-0 ceph-mon[75015]: pgmap v1234: 321 pgs: 321 active+clean; 391 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 277 op/s
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.035 253542 DEBUG nova.network.neutron [-] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 378 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.052 253542 INFO nova.compute.manager [-] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Took 1.12 seconds to deallocate network for instance.
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.093 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.094 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.204 253542 DEBUG oslo_concurrency.processutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:26:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3568247570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.613 253542 DEBUG oslo_concurrency.processutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.620 253542 DEBUG nova.compute.provider_tree [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.633 253542 DEBUG nova.scheduler.client.report [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.651 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.676 253542 INFO nova.scheduler.client.report [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Deleted allocations for instance 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.746 253542 DEBUG oslo_concurrency.lockutils [None req-ce87c40b-1887-4255-bdd8-f7c0edabc34a 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.825 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059182.822616, 1664fad5-765c-4ecc-93e2-6f96c7fb6d44 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.826 253542 INFO nova.compute.manager [-] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] VM Stopped (Lifecycle Event)
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.844 253542 DEBUG nova.compute.manager [req-191a6986-89a2-4d60-a74d-16f1522e5eb2 req-d20f4386-2a5c-49b9-9484-1a52f1ecac77 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received event network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.845 253542 DEBUG oslo_concurrency.lockutils [req-191a6986-89a2-4d60-a74d-16f1522e5eb2 req-d20f4386-2a5c-49b9-9484-1a52f1ecac77 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.846 253542 DEBUG oslo_concurrency.lockutils [req-191a6986-89a2-4d60-a74d-16f1522e5eb2 req-d20f4386-2a5c-49b9-9484-1a52f1ecac77 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.847 253542 DEBUG oslo_concurrency.lockutils [req-191a6986-89a2-4d60-a74d-16f1522e5eb2 req-d20f4386-2a5c-49b9-9484-1a52f1ecac77 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.848 253542 DEBUG nova.compute.manager [req-191a6986-89a2-4d60-a74d-16f1522e5eb2 req-d20f4386-2a5c-49b9-9484-1a52f1ecac77 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] No waiting events found dispatching network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.849 253542 WARNING nova.compute.manager [req-191a6986-89a2-4d60-a74d-16f1522e5eb2 req-d20f4386-2a5c-49b9-9484-1a52f1ecac77 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received unexpected event network-vif-plugged-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 for instance with vm_state deleted and task_state None.
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.849 253542 DEBUG nova.compute.manager [req-191a6986-89a2-4d60-a74d-16f1522e5eb2 req-d20f4386-2a5c-49b9-9484-1a52f1ecac77 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Received event network-vif-deleted-817e9f9b-9d8a-4f80-a7b3-f2bcae49ffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:37 compute-0 nova_compute[253538]: 2025-11-25 08:26:37.856 253542 DEBUG nova.compute.manager [None req-8b61a965-bf5b-40a2-8ce6-1c508b4e5bdc - - - - - -] [instance: 1664fad5-765c-4ecc-93e2-6f96c7fb6d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:26:38 compute-0 ceph-mon[75015]: pgmap v1235: 321 pgs: 321 active+clean; 378 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Nov 25 08:26:38 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3568247570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.403 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.404 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.405 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.406 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.407 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.410 253542 INFO nova.compute.manager [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Terminating instance
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.413 253542 DEBUG nova.compute.manager [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:26:38 compute-0 kernel: tapa0a9c956-aa (unregistering): left promiscuous mode
Nov 25 08:26:38 compute-0 NetworkManager[48915]: <info>  [1764059198.4732] device (tapa0a9c956-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:26:38 compute-0 ovn_controller[152859]: 2025-11-25T08:26:38Z|00115|binding|INFO|Releasing lport a0a9c956-aaa5-4981-a1d9-ae896cea7b7c from this chassis (sb_readonly=0)
Nov 25 08:26:38 compute-0 ovn_controller[152859]: 2025-11-25T08:26:38Z|00116|binding|INFO|Setting lport a0a9c956-aaa5-4981-a1d9-ae896cea7b7c down in Southbound
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.480 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:38 compute-0 ovn_controller[152859]: 2025-11-25T08:26:38Z|00117|binding|INFO|Removing iface tapa0a9c956-aa ovn-installed in OVS
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.483 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.488 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:6b:81 10.100.0.10'], port_security=['fa:16:3e:11:6b:81 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '29fb9e2b-13d1-41e6-b0b1-1d5262dcadec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:26:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.490 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a0a9c956-aaa5-4981-a1d9-ae896cea7b7c in datapath 93269c36-ab23-4d95-925a-798173550624 unbound from our chassis
Nov 25 08:26:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.492 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.502 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.506 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ba8ba7-3bd4-4d67-88a1-d66c389de283]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:38 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 25 08:26:38 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Consumed 15.726s CPU time.
Nov 25 08:26:38 compute-0 systemd-machined[215790]: Machine qemu-20-instance-00000012 terminated.
Nov 25 08:26:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.544 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1486c0b1-b170-4fdf-91fe-d8e9bf5bd4d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.548 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f7711ab7-dfa3-48ce-99c6-02a9d6af147b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.577 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f2983b72-bbe6-4b28-8074-8200094a9dde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:38 compute-0 podman[285228]: 2025-11-25 08:26:38.595128838 +0000 UTC m=+0.096295206 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:26:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.595 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0de3d83c-ba4b-4f29-a343-0344a8871b5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 21, 'rx_bytes': 952, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 21, 'rx_bytes': 952, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285262, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.612 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[30b1d5e0-b72e-4f1f-afac-d4c9cdd1c7c9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285267, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285267, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.614 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.615 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.620 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.621 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.622 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:26:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.622 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:38.623 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.648 253542 INFO nova.virt.libvirt.driver [-] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Instance destroyed successfully.
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.648 253542 DEBUG nova.objects.instance [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'resources' on Instance uuid 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.661 253542 DEBUG nova.virt.libvirt.vif [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1138480486',display_name='tempest-ServersAdminTestJSON-server-1138480486',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1138480486',id=18,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-hs51jeqx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:25:19Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=29fb9e2b-13d1-41e6-b0b1-1d5262dcadec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.661 253542 DEBUG nova.network.os_vif_util [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "address": "fa:16:3e:11:6b:81", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a9c956-aa", "ovs_interfaceid": "a0a9c956-aaa5-4981-a1d9-ae896cea7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.662 253542 DEBUG nova.network.os_vif_util [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a9c956-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.662 253542 DEBUG os_vif [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a9c956-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.665 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.665 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0a9c956-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.723 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.728 253542 INFO os_vif [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=a0a9c956-aaa5-4981-a1d9-ae896cea7b7c,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a9c956-aa')
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.795 253542 DEBUG nova.compute.manager [req-297b75c7-53bf-4aa4-9279-71734157ab68 req-ecc99db5-4961-4b84-a1f3-37f59666d0f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received event network-vif-unplugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.795 253542 DEBUG oslo_concurrency.lockutils [req-297b75c7-53bf-4aa4-9279-71734157ab68 req-ecc99db5-4961-4b84-a1f3-37f59666d0f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.795 253542 DEBUG oslo_concurrency.lockutils [req-297b75c7-53bf-4aa4-9279-71734157ab68 req-ecc99db5-4961-4b84-a1f3-37f59666d0f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.795 253542 DEBUG oslo_concurrency.lockutils [req-297b75c7-53bf-4aa4-9279-71734157ab68 req-ecc99db5-4961-4b84-a1f3-37f59666d0f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.795 253542 DEBUG nova.compute.manager [req-297b75c7-53bf-4aa4-9279-71734157ab68 req-ecc99db5-4961-4b84-a1f3-37f59666d0f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] No waiting events found dispatching network-vif-unplugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:38 compute-0 nova_compute[253538]: 2025-11-25 08:26:38.796 253542 DEBUG nova.compute.manager [req-297b75c7-53bf-4aa4-9279-71734157ab68 req-ecc99db5-4961-4b84-a1f3-37f59666d0f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received event network-vif-unplugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:26:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:26:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 348 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 203 op/s
Nov 25 08:26:39 compute-0 nova_compute[253538]: 2025-11-25 08:26:39.181 253542 INFO nova.virt.libvirt.driver [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Deleting instance files /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_del
Nov 25 08:26:39 compute-0 nova_compute[253538]: 2025-11-25 08:26:39.182 253542 INFO nova.virt.libvirt.driver [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Deletion of /var/lib/nova/instances/29fb9e2b-13d1-41e6-b0b1-1d5262dcadec_del complete
Nov 25 08:26:39 compute-0 nova_compute[253538]: 2025-11-25 08:26:39.255 253542 INFO nova.compute.manager [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Took 0.84 seconds to destroy the instance on the hypervisor.
Nov 25 08:26:39 compute-0 nova_compute[253538]: 2025-11-25 08:26:39.256 253542 DEBUG oslo.service.loopingcall [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:26:39 compute-0 nova_compute[253538]: 2025-11-25 08:26:39.256 253542 DEBUG nova.compute.manager [-] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:26:39 compute-0 nova_compute[253538]: 2025-11-25 08:26:39.256 253542 DEBUG nova.network.neutron [-] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:26:39 compute-0 nova_compute[253538]: 2025-11-25 08:26:39.720 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:40 compute-0 ceph-mon[75015]: pgmap v1236: 321 pgs: 321 active+clean; 348 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 203 op/s
Nov 25 08:26:40 compute-0 nova_compute[253538]: 2025-11-25 08:26:40.820 253542 DEBUG nova.network.neutron [-] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:40 compute-0 nova_compute[253538]: 2025-11-25 08:26:40.834 253542 INFO nova.compute.manager [-] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Took 1.58 seconds to deallocate network for instance.
Nov 25 08:26:40 compute-0 nova_compute[253538]: 2025-11-25 08:26:40.879 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:40 compute-0 nova_compute[253538]: 2025-11-25 08:26:40.879 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:40 compute-0 nova_compute[253538]: 2025-11-25 08:26:40.916 253542 DEBUG nova.compute.manager [req-afcab062-3f05-4eaa-b580-d68811020e8d req-124c631d-6164-4608-a048-efb88603ed82 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received event network-vif-deleted-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:40 compute-0 nova_compute[253538]: 2025-11-25 08:26:40.989 253542 DEBUG oslo_concurrency.processutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:41 compute-0 nova_compute[253538]: 2025-11-25 08:26:41.015 253542 DEBUG nova.compute.manager [req-f46006f8-fab1-4835-bcdd-99995a52a288 req-c4ab95dd-afd9-4835-b42f-fb542b4e78e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received event network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:41 compute-0 nova_compute[253538]: 2025-11-25 08:26:41.016 253542 DEBUG oslo_concurrency.lockutils [req-f46006f8-fab1-4835-bcdd-99995a52a288 req-c4ab95dd-afd9-4835-b42f-fb542b4e78e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:41 compute-0 nova_compute[253538]: 2025-11-25 08:26:41.016 253542 DEBUG oslo_concurrency.lockutils [req-f46006f8-fab1-4835-bcdd-99995a52a288 req-c4ab95dd-afd9-4835-b42f-fb542b4e78e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:41 compute-0 nova_compute[253538]: 2025-11-25 08:26:41.016 253542 DEBUG oslo_concurrency.lockutils [req-f46006f8-fab1-4835-bcdd-99995a52a288 req-c4ab95dd-afd9-4835-b42f-fb542b4e78e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:41 compute-0 nova_compute[253538]: 2025-11-25 08:26:41.017 253542 DEBUG nova.compute.manager [req-f46006f8-fab1-4835-bcdd-99995a52a288 req-c4ab95dd-afd9-4835-b42f-fb542b4e78e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] No waiting events found dispatching network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:41 compute-0 nova_compute[253538]: 2025-11-25 08:26:41.017 253542 WARNING nova.compute.manager [req-f46006f8-fab1-4835-bcdd-99995a52a288 req-c4ab95dd-afd9-4835-b42f-fb542b4e78e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Received unexpected event network-vif-plugged-a0a9c956-aaa5-4981-a1d9-ae896cea7b7c for instance with vm_state deleted and task_state None.
Nov 25 08:26:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 285 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 192 op/s
Nov 25 08:26:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:41.056 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:41.057 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:41.058 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:26:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3844717133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:41 compute-0 nova_compute[253538]: 2025-11-25 08:26:41.458 253542 DEBUG oslo_concurrency.processutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:41 compute-0 nova_compute[253538]: 2025-11-25 08:26:41.463 253542 DEBUG nova.compute.provider_tree [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:26:41 compute-0 nova_compute[253538]: 2025-11-25 08:26:41.491 253542 DEBUG nova.scheduler.client.report [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:26:41 compute-0 nova_compute[253538]: 2025-11-25 08:26:41.514 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:41 compute-0 nova_compute[253538]: 2025-11-25 08:26:41.541 253542 INFO nova.scheduler.client.report [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Deleted allocations for instance 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec
Nov 25 08:26:41 compute-0 nova_compute[253538]: 2025-11-25 08:26:41.603 253542 DEBUG oslo_concurrency.lockutils [None req-0ffadc59-036e-47fe-8a78-d4e1681ae88e 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "29fb9e2b-13d1-41e6-b0b1-1d5262dcadec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:41 compute-0 ovn_controller[152859]: 2025-11-25T08:26:41Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 08:26:41 compute-0 ovn_controller[152859]: 2025-11-25T08:26:41Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:0e:e0 10.100.0.6
Nov 25 08:26:41 compute-0 sshd-session[285321]: Invalid user loginuser from 193.32.162.151 port 46082
Nov 25 08:26:41 compute-0 nova_compute[253538]: 2025-11-25 08:26:41.985 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:41 compute-0 nova_compute[253538]: 2025-11-25 08:26:41.985 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.000 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:26:42 compute-0 sshd-session[285321]: Connection closed by invalid user loginuser 193.32.162.151 port 46082 [preauth]
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.070 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "23ace5af-6840-42aa-a801-98abbb4f3a52" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.070 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.071 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.071 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.071 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.072 253542 INFO nova.compute.manager [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Terminating instance
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.073 253542 DEBUG nova.compute.manager [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.080 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.080 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.087 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.088 253542 INFO nova.compute.claims [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.266 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:42 compute-0 ceph-mon[75015]: pgmap v1237: 321 pgs: 321 active+clean; 285 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 192 op/s
Nov 25 08:26:42 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3844717133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:42 compute-0 kernel: tape9d1298d-41 (unregistering): left promiscuous mode
Nov 25 08:26:42 compute-0 NetworkManager[48915]: <info>  [1764059202.5438] device (tape9d1298d-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:26:42 compute-0 ovn_controller[152859]: 2025-11-25T08:26:42Z|00118|binding|INFO|Releasing lport e9d1298d-411a-4018-ba08-c41d40ba0d41 from this chassis (sb_readonly=0)
Nov 25 08:26:42 compute-0 ovn_controller[152859]: 2025-11-25T08:26:42Z|00119|binding|INFO|Setting lport e9d1298d-411a-4018-ba08-c41d40ba0d41 down in Southbound
Nov 25 08:26:42 compute-0 ovn_controller[152859]: 2025-11-25T08:26:42Z|00120|binding|INFO|Removing iface tape9d1298d-41 ovn-installed in OVS
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.552 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.559 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:6c:e2 10.100.0.3'], port_security=['fa:16:3e:af:6c:e2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '23ace5af-6840-42aa-a801-98abbb4f3a52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=e9d1298d-411a-4018-ba08-c41d40ba0d41) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:26:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.560 162739 INFO neutron.agent.ovn.metadata.agent [-] Port e9d1298d-411a-4018-ba08-c41d40ba0d41 in datapath 93269c36-ab23-4d95-925a-798173550624 unbound from our chassis
Nov 25 08:26:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.562 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93269c36-ab23-4d95-925a-798173550624
Nov 25 08:26:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.579 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ced9b8-1d3b-4e52-8dc1-1c5fa64f0c41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.594 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.606 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[57522015-d764-43bd-90e4-41bfccc76ed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.609 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4a31f8-4a0e-4639-84e7-473c257312fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:42 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Deactivated successfully.
Nov 25 08:26:42 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Consumed 18.176s CPU time.
Nov 25 08:26:42 compute-0 systemd-machined[215790]: Machine qemu-19-instance-00000011 terminated.
Nov 25 08:26:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.637 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6a4d29-609d-429f-8332-b0db7ee25bdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.660 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79ab9411-81de-4ffe-9f4c-4afd28a310cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93269c36-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 23, 'rx_bytes': 952, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 23, 'rx_bytes': 952, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445515, 'reachable_time': 41516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285354, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.677 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3942dcff-8967-40e7-bb35-4247e12b4f52]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445525, 'tstamp': 445525}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285355, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap93269c36-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445529, 'tstamp': 445529}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285355, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.679 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.681 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.687 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93269c36-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.687 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.688 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:26:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.688 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93269c36-a0, col_values=(('external_ids', {'iface-id': '52d2128c-19c6-4892-8ba5-cc8740039f5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:42.688 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.698 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.708 253542 INFO nova.virt.libvirt.driver [-] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Instance destroyed successfully.
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.712 253542 DEBUG nova.objects.instance [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'resources' on Instance uuid 23ace5af-6840-42aa-a801-98abbb4f3a52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.737 253542 DEBUG nova.virt.libvirt.vif [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:24:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2132333394',display_name='tempest-ServersAdminTestJSON-server-2132333394',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2132333394',id=17,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:25:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-8p4p95p6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:25:04Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=23ace5af-6840-42aa-a801-98abbb4f3a52,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.738 253542 DEBUG nova.network.os_vif_util [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "address": "fa:16:3e:af:6c:e2", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape9d1298d-41", "ovs_interfaceid": "e9d1298d-411a-4018-ba08-c41d40ba0d41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.739 253542 DEBUG nova.network.os_vif_util [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:6c:e2,bridge_name='br-int',has_traffic_filtering=True,id=e9d1298d-411a-4018-ba08-c41d40ba0d41,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape9d1298d-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.739 253542 DEBUG os_vif [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:6c:e2,bridge_name='br-int',has_traffic_filtering=True,id=e9d1298d-411a-4018-ba08-c41d40ba0d41,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape9d1298d-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.741 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.742 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape9d1298d-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.743 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.745 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.747 253542 INFO os_vif [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:6c:e2,bridge_name='br-int',has_traffic_filtering=True,id=e9d1298d-411a-4018-ba08-c41d40ba0d41,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape9d1298d-41')
Nov 25 08:26:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:26:42 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/906575829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.826 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.831 253542 DEBUG nova.compute.provider_tree [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.845 253542 DEBUG nova.scheduler.client.report [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.864 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.865 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.909 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.909 253542 DEBUG nova.network.neutron [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.927 253542 INFO nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:26:42 compute-0 nova_compute[253538]: 2025-11-25 08:26:42.944 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.023 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.024 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.025 253542 INFO nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Creating image(s)
Nov 25 08:26:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 265 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 207 op/s
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.051 253542 DEBUG nova.storage.rbd_utils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image c787de46-dba9-458e-acc0-57470097fac5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.079 253542 DEBUG nova.storage.rbd_utils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image c787de46-dba9-458e-acc0-57470097fac5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.099 253542 DEBUG nova.storage.rbd_utils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image c787de46-dba9-458e-acc0-57470097fac5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.104 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.139 253542 DEBUG nova.policy [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '02e795c75a3b40bbbc3ca83d0501777f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52217f37b23343d697fa6d2be38e236d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.192 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.194 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.195 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.196 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.230 253542 DEBUG nova.storage.rbd_utils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image c787de46-dba9-458e-acc0-57470097fac5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.234 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc c787de46-dba9-458e-acc0-57470097fac5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.338 253542 INFO nova.virt.libvirt.driver [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Deleting instance files /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52_del
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.339 253542 INFO nova.virt.libvirt.driver [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Deletion of /var/lib/nova/instances/23ace5af-6840-42aa-a801-98abbb4f3a52_del complete
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.399 253542 INFO nova.compute.manager [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Took 1.33 seconds to destroy the instance on the hypervisor.
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.400 253542 DEBUG oslo.service.loopingcall [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.400 253542 DEBUG nova.compute.manager [-] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.400 253542 DEBUG nova.network.neutron [-] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:26:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/906575829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:43 compute-0 ceph-mon[75015]: pgmap v1238: 321 pgs: 321 active+clean; 265 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 207 op/s
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.513 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc c787de46-dba9-458e-acc0-57470097fac5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:43 compute-0 nova_compute[253538]: 2025-11-25 08:26:43.563 253542 DEBUG nova.storage.rbd_utils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] resizing rbd image c787de46-dba9-458e-acc0-57470097fac5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:26:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:26:44 compute-0 nova_compute[253538]: 2025-11-25 08:26:44.188 253542 DEBUG nova.objects.instance [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'migration_context' on Instance uuid c787de46-dba9-458e-acc0-57470097fac5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:26:44 compute-0 nova_compute[253538]: 2025-11-25 08:26:44.198 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:26:44 compute-0 nova_compute[253538]: 2025-11-25 08:26:44.198 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Ensure instance console log exists: /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:26:44 compute-0 nova_compute[253538]: 2025-11-25 08:26:44.198 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:44 compute-0 nova_compute[253538]: 2025-11-25 08:26:44.199 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:44 compute-0 nova_compute[253538]: 2025-11-25 08:26:44.199 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:44 compute-0 nova_compute[253538]: 2025-11-25 08:26:44.722 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:44 compute-0 nova_compute[253538]: 2025-11-25 08:26:44.750 253542 DEBUG nova.network.neutron [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Successfully created port: 0416b402-0842-4b73-910b-d30a5750474c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:26:44 compute-0 nova_compute[253538]: 2025-11-25 08:26:44.963 253542 DEBUG nova.network.neutron [-] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:44 compute-0 nova_compute[253538]: 2025-11-25 08:26:44.984 253542 INFO nova.compute.manager [-] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Took 1.58 seconds to deallocate network for instance.
Nov 25 08:26:45 compute-0 nova_compute[253538]: 2025-11-25 08:26:45.023 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:45 compute-0 nova_compute[253538]: 2025-11-25 08:26:45.024 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 249 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.6 MiB/s wr, 237 op/s
Nov 25 08:26:45 compute-0 nova_compute[253538]: 2025-11-25 08:26:45.118 253542 DEBUG oslo_concurrency.processutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:45 compute-0 nova_compute[253538]: 2025-11-25 08:26:45.340 253542 DEBUG nova.compute.manager [req-a4c87865-f06c-4c08-ba2e-b4613ed78149 req-060bfe8a-39fc-47a7-aff3-446af7c6698e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Received event network-vif-deleted-e9d1298d-411a-4018-ba08-c41d40ba0d41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:26:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580963184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:45 compute-0 nova_compute[253538]: 2025-11-25 08:26:45.570 253542 DEBUG oslo_concurrency.processutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:45 compute-0 nova_compute[253538]: 2025-11-25 08:26:45.576 253542 DEBUG nova.compute.provider_tree [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:26:45 compute-0 nova_compute[253538]: 2025-11-25 08:26:45.595 253542 DEBUG nova.scheduler.client.report [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:26:45 compute-0 nova_compute[253538]: 2025-11-25 08:26:45.619 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:45 compute-0 nova_compute[253538]: 2025-11-25 08:26:45.675 253542 INFO nova.scheduler.client.report [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Deleted allocations for instance 23ace5af-6840-42aa-a801-98abbb4f3a52
Nov 25 08:26:45 compute-0 nova_compute[253538]: 2025-11-25 08:26:45.789 253542 DEBUG oslo_concurrency.lockutils [None req-e04947b3-17a1-4175-939f-21d4228cb042 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "23ace5af-6840-42aa-a801-98abbb4f3a52" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:46 compute-0 ceph-mon[75015]: pgmap v1239: 321 pgs: 321 active+clean; 249 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.6 MiB/s wr, 237 op/s
Nov 25 08:26:46 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3580963184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 238 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 431 KiB/s rd, 4.2 MiB/s wr, 172 op/s
Nov 25 08:26:47 compute-0 nova_compute[253538]: 2025-11-25 08:26:47.470 253542 DEBUG nova.network.neutron [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Successfully updated port: 0416b402-0842-4b73-910b-d30a5750474c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:26:47 compute-0 nova_compute[253538]: 2025-11-25 08:26:47.664 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:26:47 compute-0 nova_compute[253538]: 2025-11-25 08:26:47.665 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquired lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:26:47 compute-0 nova_compute[253538]: 2025-11-25 08:26:47.665 253542 DEBUG nova.network.neutron [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:26:47 compute-0 nova_compute[253538]: 2025-11-25 08:26:47.724 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:47 compute-0 nova_compute[253538]: 2025-11-25 08:26:47.725 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:47 compute-0 nova_compute[253538]: 2025-11-25 08:26:47.726 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:47 compute-0 nova_compute[253538]: 2025-11-25 08:26:47.726 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:47 compute-0 nova_compute[253538]: 2025-11-25 08:26:47.726 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:47 compute-0 nova_compute[253538]: 2025-11-25 08:26:47.727 253542 INFO nova.compute.manager [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Terminating instance
Nov 25 08:26:47 compute-0 nova_compute[253538]: 2025-11-25 08:26:47.728 253542 DEBUG nova.compute.manager [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:26:47 compute-0 nova_compute[253538]: 2025-11-25 08:26:47.743 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:47 compute-0 nova_compute[253538]: 2025-11-25 08:26:47.863 253542 DEBUG nova.compute.manager [req-e029a2d7-062f-4137-86c6-8fd22eab0998 req-84005574-af02-4dd3-b715-83a682d4cba3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received event network-changed-0416b402-0842-4b73-910b-d30a5750474c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:47 compute-0 nova_compute[253538]: 2025-11-25 08:26:47.863 253542 DEBUG nova.compute.manager [req-e029a2d7-062f-4137-86c6-8fd22eab0998 req-84005574-af02-4dd3-b715-83a682d4cba3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Refreshing instance network info cache due to event network-changed-0416b402-0842-4b73-910b-d30a5750474c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:26:47 compute-0 nova_compute[253538]: 2025-11-25 08:26:47.864 253542 DEBUG oslo_concurrency.lockutils [req-e029a2d7-062f-4137-86c6-8fd22eab0998 req-84005574-af02-4dd3-b715-83a682d4cba3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:26:47 compute-0 nova_compute[253538]: 2025-11-25 08:26:47.913 253542 DEBUG nova.network.neutron [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:26:48 compute-0 ceph-mon[75015]: pgmap v1240: 321 pgs: 321 active+clean; 238 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 431 KiB/s rd, 4.2 MiB/s wr, 172 op/s
Nov 25 08:26:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:26:48 compute-0 kernel: tap4ad9572b-6a (unregistering): left promiscuous mode
Nov 25 08:26:48 compute-0 NetworkManager[48915]: <info>  [1764059208.9455] device (tap4ad9572b-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:26:48 compute-0 nova_compute[253538]: 2025-11-25 08:26:48.948 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:48 compute-0 ovn_controller[152859]: 2025-11-25T08:26:48Z|00121|binding|INFO|Releasing lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe from this chassis (sb_readonly=0)
Nov 25 08:26:48 compute-0 ovn_controller[152859]: 2025-11-25T08:26:48Z|00122|binding|INFO|Setting lport 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe down in Southbound
Nov 25 08:26:48 compute-0 ovn_controller[152859]: 2025-11-25T08:26:48Z|00123|binding|INFO|Removing iface tap4ad9572b-6a ovn-installed in OVS
Nov 25 08:26:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:48.972 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:0e:e0 10.100.0.6'], port_security=['fa:16:3e:5e:0e:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '86bfa56f-56d0-4a5e-b0b2-302c375e37a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93269c36-ab23-4d95-925a-798173550624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65a2f983cce14453b2dc9251a520f289', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'cc9f601e-bb99-4729-8b62-ddcf81c134a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a705f4ac-b7cc-4deb-b453-a20afb944392, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:26:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:48.973 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 4ad9572b-6ac1-4659-8ea6-71b8a32c06fe in datapath 93269c36-ab23-4d95-925a-798173550624 unbound from our chassis
Nov 25 08:26:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:48.974 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 93269c36-ab23-4d95-925a-798173550624, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:26:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:48.975 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f90d55a8-ac50-4eaa-9f50-d7ec48d06db0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:48.976 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-93269c36-ab23-4d95-925a-798173550624 namespace which is not needed anymore
Nov 25 08:26:48 compute-0 nova_compute[253538]: 2025-11-25 08:26:48.980 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:49 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 25 08:26:49 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000010.scope: Consumed 13.324s CPU time.
Nov 25 08:26:49 compute-0 systemd-machined[215790]: Machine qemu-27-instance-00000010 terminated.
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.016 253542 DEBUG nova.network.neutron [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updating instance_info_cache with network_info: [{"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 246 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 400 KiB/s rd, 3.9 MiB/s wr, 166 op/s
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.067 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Releasing lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.067 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Instance network_info: |[{"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.068 253542 DEBUG oslo_concurrency.lockutils [req-e029a2d7-062f-4137-86c6-8fd22eab0998 req-84005574-af02-4dd3-b715-83a682d4cba3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.068 253542 DEBUG nova.network.neutron [req-e029a2d7-062f-4137-86c6-8fd22eab0998 req-84005574-af02-4dd3-b715-83a682d4cba3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Refreshing network info cache for port 0416b402-0842-4b73-910b-d30a5750474c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.074 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Start _get_guest_xml network_info=[{"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.084 253542 WARNING nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.096 253542 DEBUG nova.virt.libvirt.host [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.097 253542 DEBUG nova.virt.libvirt.host [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.102 253542 DEBUG nova.virt.libvirt.host [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.103 253542 DEBUG nova.virt.libvirt.host [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.104 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.104 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.106 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.106 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.107 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.107 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.108 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.109 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.109 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.110 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.110 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.111 253542 DEBUG nova.virt.hardware [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.116 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.153 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.159 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.166 253542 INFO nova.virt.libvirt.driver [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Instance destroyed successfully.
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.166 253542 DEBUG nova.objects.instance [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lazy-loading 'resources' on Instance uuid 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.195 253542 DEBUG nova.virt.libvirt.vif [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1649971692',display_name='tempest-ServersAdminTestJSON-server-1649971692',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1649971692',id=16,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:26:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='65a2f983cce14453b2dc9251a520f289',ramdisk_id='',reservation_id='r-309qh2t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vi
rtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-857664825',owner_user_name='tempest-ServersAdminTestJSON-857664825-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:26:33Z,user_data=None,user_id='76d3377d398a4214a77bc0eb91638ec5',uuid=86bfa56f-56d0-4a5e-b0b2-302c375e37a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.196 253542 DEBUG nova.network.os_vif_util [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converting VIF {"id": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "address": "fa:16:3e:5e:0e:e0", "network": {"id": "93269c36-ab23-4d95-925a-798173550624", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-897372830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65a2f983cce14453b2dc9251a520f289", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad9572b-6a", "ovs_interfaceid": "4ad9572b-6ac1-4659-8ea6-71b8a32c06fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.198 253542 DEBUG nova.network.os_vif_util [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.199 253542 DEBUG os_vif [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.202 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.202 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ad9572b-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.206 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.208 253542 INFO os_vif [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:0e:e0,bridge_name='br-int',has_traffic_filtering=True,id=4ad9572b-6ac1-4659-8ea6-71b8a32c06fe,network=Network(93269c36-ab23-4d95-925a-798173550624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad9572b-6a')
Nov 25 08:26:49 compute-0 neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624[279202]: [NOTICE]   (279225) : haproxy version is 2.8.14-c23fe91
Nov 25 08:26:49 compute-0 neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624[279202]: [NOTICE]   (279225) : path to executable is /usr/sbin/haproxy
Nov 25 08:26:49 compute-0 neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624[279202]: [WARNING]  (279225) : Exiting Master process...
Nov 25 08:26:49 compute-0 neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624[279202]: [WARNING]  (279225) : Exiting Master process...
Nov 25 08:26:49 compute-0 neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624[279202]: [ALERT]    (279225) : Current worker (279227) exited with code 143 (Terminated)
Nov 25 08:26:49 compute-0 neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624[279202]: [WARNING]  (279225) : All workers exited. Exiting... (0)
Nov 25 08:26:49 compute-0 systemd[1]: libpod-5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea.scope: Deactivated successfully.
Nov 25 08:26:49 compute-0 podman[285601]: 2025-11-25 08:26:49.574248519 +0000 UTC m=+0.487453504 container died 5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 08:26:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:26:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2190769572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.646 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.680 253542 DEBUG nova.storage.rbd_utils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image c787de46-dba9-458e-acc0-57470097fac5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.685 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:49 compute-0 nova_compute[253538]: 2025-11-25 08:26:49.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:49 compute-0 ceph-mon[75015]: pgmap v1241: 321 pgs: 321 active+clean; 246 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 400 KiB/s rd, 3.9 MiB/s wr, 166 op/s
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.429 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059195.4277666, 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.430 253542 INFO nova.compute.manager [-] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] VM Stopped (Lifecycle Event)
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.451 253542 DEBUG nova.compute.manager [None req-be4c7771-d3d8-40f3-8e25-67d0c91263f4 - - - - - -] [instance: 740ce5e2-b8eb-46b1-9ec8-e9c89ed3e5b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:26:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea-userdata-shm.mount: Deactivated successfully.
Nov 25 08:26:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:26:50 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/660808336' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:26:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f08b2f30a3472e570aff96bae58b6aa5c1b8e49bb02775e83be72c16348ca83-merged.mount: Deactivated successfully.
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.710 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.711 253542 DEBUG nova.virt.libvirt.vif [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-910851624',display_name='tempest-SecurityGroupsTestJSON-server-910851624',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-910851624',id=24,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-s7z6d2k0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTestJSO
N-1828125381-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:26:42Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=c787de46-dba9-458e-acc0-57470097fac5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.711 253542 DEBUG nova.network.os_vif_util [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.712 253542 DEBUG nova.network.os_vif_util [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.713 253542 DEBUG nova.objects.instance [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'pci_devices' on Instance uuid c787de46-dba9-458e-acc0-57470097fac5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.726 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:26:50 compute-0 nova_compute[253538]:   <uuid>c787de46-dba9-458e-acc0-57470097fac5</uuid>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   <name>instance-00000018</name>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <nova:name>tempest-SecurityGroupsTestJSON-server-910851624</nova:name>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:26:49</nova:creationTime>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:26:50 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:26:50 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:26:50 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:26:50 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:26:50 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:26:50 compute-0 nova_compute[253538]:         <nova:user uuid="02e795c75a3b40bbbc3ca83d0501777f">tempest-SecurityGroupsTestJSON-1828125381-project-member</nova:user>
Nov 25 08:26:50 compute-0 nova_compute[253538]:         <nova:project uuid="52217f37b23343d697fa6d2be38e236d">tempest-SecurityGroupsTestJSON-1828125381</nova:project>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:26:50 compute-0 nova_compute[253538]:         <nova:port uuid="0416b402-0842-4b73-910b-d30a5750474c">
Nov 25 08:26:50 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <system>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <entry name="serial">c787de46-dba9-458e-acc0-57470097fac5</entry>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <entry name="uuid">c787de46-dba9-458e-acc0-57470097fac5</entry>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     </system>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   <os>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   </os>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   <features>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   </features>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/c787de46-dba9-458e-acc0-57470097fac5_disk">
Nov 25 08:26:50 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       </source>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:26:50 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/c787de46-dba9-458e-acc0-57470097fac5_disk.config">
Nov 25 08:26:50 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       </source>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:26:50 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:97:a5:e1"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <target dev="tap0416b402-08"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/console.log" append="off"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <video>
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     </video>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:26:50 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:26:50 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:26:50 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:26:50 compute-0 nova_compute[253538]: </domain>
Nov 25 08:26:50 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.727 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Preparing to wait for external event network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.727 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.727 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.727 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.728 253542 DEBUG nova.virt.libvirt.vif [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-910851624',display_name='tempest-SecurityGroupsTestJSON-server-910851624',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-910851624',id=24,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-s7z6d2k0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTestJSON-1828125381-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:26:42Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=c787de46-dba9-458e-acc0-57470097fac5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.728 253542 DEBUG nova.network.os_vif_util [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.728 253542 DEBUG nova.network.os_vif_util [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.728 253542 DEBUG os_vif [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.729 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.729 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.729 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.732 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.732 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0416b402-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.732 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0416b402-08, col_values=(('external_ids', {'iface-id': '0416b402-0842-4b73-910b-d30a5750474c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:a5:e1', 'vm-uuid': 'c787de46-dba9-458e-acc0-57470097fac5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.734 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:50 compute-0 NetworkManager[48915]: <info>  [1764059210.7348] manager: (tap0416b402-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.738 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.739 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.740 253542 INFO os_vif [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08')
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.978 253542 DEBUG nova.network.neutron [req-e029a2d7-062f-4137-86c6-8fd22eab0998 req-84005574-af02-4dd3-b715-83a682d4cba3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updated VIF entry in instance network info cache for port 0416b402-0842-4b73-910b-d30a5750474c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.978 253542 DEBUG nova.network.neutron [req-e029a2d7-062f-4137-86c6-8fd22eab0998 req-84005574-af02-4dd3-b715-83a682d4cba3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updating instance_info_cache with network_info: [{"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:50 compute-0 nova_compute[253538]: 2025-11-25 08:26:50.994 253542 DEBUG oslo_concurrency.lockutils [req-e029a2d7-062f-4137-86c6-8fd22eab0998 req-84005574-af02-4dd3-b715-83a682d4cba3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:26:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 246 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 381 KiB/s rd, 3.9 MiB/s wr, 148 op/s
Nov 25 08:26:51 compute-0 nova_compute[253538]: 2025-11-25 08:26:51.456 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:26:51 compute-0 nova_compute[253538]: 2025-11-25 08:26:51.456 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:26:51 compute-0 nova_compute[253538]: 2025-11-25 08:26:51.456 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] No VIF found with MAC fa:16:3e:97:a5:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:26:51 compute-0 nova_compute[253538]: 2025-11-25 08:26:51.457 253542 INFO nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Using config drive
Nov 25 08:26:51 compute-0 nova_compute[253538]: 2025-11-25 08:26:51.524 253542 DEBUG nova.storage.rbd_utils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image c787de46-dba9-458e-acc0-57470097fac5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:51 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2190769572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:26:51 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/660808336' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.016 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.016 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.035 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.067 253542 INFO nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Creating config drive at /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/disk.config
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.072 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfxcegl1t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:52 compute-0 podman[285601]: 2025-11-25 08:26:52.086768178 +0000 UTC m=+2.999973133 container cleanup 5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 08:26:52 compute-0 systemd[1]: libpod-conmon-5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea.scope: Deactivated successfully.
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.176 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.177 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.186 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.186 253542 INFO nova.compute.claims [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.215 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfxcegl1t" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.666 253542 DEBUG nova.storage.rbd_utils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] rbd image c787de46-dba9-458e-acc0-57470097fac5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.671 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/disk.config c787de46-dba9-458e-acc0-57470097fac5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.706 253542 DEBUG nova.compute.manager [req-3ed70af4-a745-46fb-a079-60bb39a660ce req-f986aea0-86ad-4060-9b83-a0f783eff1f3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.708 253542 DEBUG oslo_concurrency.lockutils [req-3ed70af4-a745-46fb-a079-60bb39a660ce req-f986aea0-86ad-4060-9b83-a0f783eff1f3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.708 253542 DEBUG oslo_concurrency.lockutils [req-3ed70af4-a745-46fb-a079-60bb39a660ce req-f986aea0-86ad-4060-9b83-a0f783eff1f3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.709 253542 DEBUG oslo_concurrency.lockutils [req-3ed70af4-a745-46fb-a079-60bb39a660ce req-f986aea0-86ad-4060-9b83-a0f783eff1f3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.709 253542 DEBUG nova.compute.manager [req-3ed70af4-a745-46fb-a079-60bb39a660ce req-f986aea0-86ad-4060-9b83-a0f783eff1f3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.710 253542 DEBUG nova.compute.manager [req-3ed70af4-a745-46fb-a079-60bb39a660ce req-f986aea0-86ad-4060-9b83-a0f783eff1f3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-unplugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.740 253542 DEBUG nova.scheduler.client.report [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.757 253542 DEBUG nova.scheduler.client.report [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.758 253542 DEBUG nova.compute.provider_tree [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.775 253542 DEBUG nova.scheduler.client.report [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.798 253542 DEBUG nova.scheduler.client.report [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 08:26:52 compute-0 nova_compute[253538]: 2025-11-25 08:26:52.881 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 246 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 376 KiB/s rd, 3.9 MiB/s wr, 137 op/s
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:26:53
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['images', '.rgw.root', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.mgr']
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:26:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:26:53 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/344841007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:53 compute-0 nova_compute[253538]: 2025-11-25 08:26:53.348 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:53 compute-0 nova_compute[253538]: 2025-11-25 08:26:53.354 253542 DEBUG nova.compute.provider_tree [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:26:53 compute-0 nova_compute[253538]: 2025-11-25 08:26:53.374 253542 DEBUG nova.scheduler.client.report [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:26:53 compute-0 nova_compute[253538]: 2025-11-25 08:26:53.393 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:53 compute-0 nova_compute[253538]: 2025-11-25 08:26:53.394 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:26:53 compute-0 nova_compute[253538]: 2025-11-25 08:26:53.437 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:26:53 compute-0 nova_compute[253538]: 2025-11-25 08:26:53.438 253542 DEBUG nova.network.neutron [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:26:53 compute-0 nova_compute[253538]: 2025-11-25 08:26:53.465 253542 INFO nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:26:53 compute-0 nova_compute[253538]: 2025-11-25 08:26:53.488 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:26:53 compute-0 nova_compute[253538]: 2025-11-25 08:26:53.581 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:26:53 compute-0 nova_compute[253538]: 2025-11-25 08:26:53.582 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:26:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:26:53 compute-0 nova_compute[253538]: 2025-11-25 08:26:53.916 253542 INFO nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Creating image(s)
Nov 25 08:26:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:26:54 compute-0 nova_compute[253538]: 2025-11-25 08:26:54.922 253542 DEBUG nova.storage.rbd_utils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:54 compute-0 nova_compute[253538]: 2025-11-25 08:26:54.941 253542 DEBUG nova.storage.rbd_utils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:54 compute-0 nova_compute[253538]: 2025-11-25 08:26:54.965 253542 DEBUG nova.storage.rbd_utils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:54 compute-0 nova_compute[253538]: 2025-11-25 08:26:54.968 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:54 compute-0 nova_compute[253538]: 2025-11-25 08:26:54.988 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:54 compute-0 nova_compute[253538]: 2025-11-25 08:26:54.994 253542 DEBUG nova.policy [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38fa175fb699405c9a05d7c28f994ebc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:26:54 compute-0 nova_compute[253538]: 2025-11-25 08:26:54.996 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059198.6460123, 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:26:54 compute-0 nova_compute[253538]: 2025-11-25 08:26:54.996 253542 INFO nova.compute.manager [-] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] VM Stopped (Lifecycle Event)
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.000 253542 DEBUG nova.compute.manager [req-09428a5b-8506-4e34-942c-9b16e49fcdf0 req-9027cdfb-a1bd-4971-8803-735b0ce0bbba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.000 253542 DEBUG oslo_concurrency.lockutils [req-09428a5b-8506-4e34-942c-9b16e49fcdf0 req-9027cdfb-a1bd-4971-8803-735b0ce0bbba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.000 253542 DEBUG oslo_concurrency.lockutils [req-09428a5b-8506-4e34-942c-9b16e49fcdf0 req-9027cdfb-a1bd-4971-8803-735b0ce0bbba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.000 253542 DEBUG oslo_concurrency.lockutils [req-09428a5b-8506-4e34-942c-9b16e49fcdf0 req-9027cdfb-a1bd-4971-8803-735b0ce0bbba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.001 253542 DEBUG nova.compute.manager [req-09428a5b-8506-4e34-942c-9b16e49fcdf0 req-9027cdfb-a1bd-4971-8803-735b0ce0bbba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] No waiting events found dispatching network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.001 253542 WARNING nova.compute.manager [req-09428a5b-8506-4e34-942c-9b16e49fcdf0 req-9027cdfb-a1bd-4971-8803-735b0ce0bbba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received unexpected event network-vif-plugged-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe for instance with vm_state active and task_state deleting.
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.028 253542 DEBUG nova.compute.manager [None req-eb85c2bd-abbb-4882-82d3-84e66b52d64b - - - - - -] [instance: 29fb9e2b-13d1-41e6-b0b1-1d5262dcadec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.029 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.029 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.030 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.030 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.052 253542 DEBUG nova.storage.rbd_utils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:26:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 246 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 274 KiB/s rd, 3.3 MiB/s wr, 106 op/s
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.055 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:26:55 compute-0 ceph-mon[75015]: pgmap v1242: 321 pgs: 321 active+clean; 246 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 381 KiB/s rd, 3.9 MiB/s wr, 148 op/s
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.561 253542 DEBUG nova.network.neutron [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Successfully created port: 521823df-589a-4370-a3ea-a5a6f4c73a6a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.735 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:55 compute-0 podman[285744]: 2025-11-25 08:26:55.917417184 +0000 UTC m=+3.807973759 container remove 5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:26:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:55.929 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e68c64c-3c5a-4a9d-8bc0-d62f192f26ce]: (4, ('Tue Nov 25 08:26:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624 (5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea)\n5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea\nTue Nov 25 08:26:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-93269c36-ab23-4d95-925a-798173550624 (5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea)\n5a79a5522617471f9f9e9a6b3106daa8a451cbbb718bec8de38f4fa7ab7821ea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:55.932 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7224dfd9-6807-4a6a-8e25-40ec23144e80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:55.933 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93269c36-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.936 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:55 compute-0 kernel: tap93269c36-a0: left promiscuous mode
Nov 25 08:26:55 compute-0 nova_compute[253538]: 2025-11-25 08:26:55.972 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:55.977 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[209da6d4-a35d-4182-ab60-c3e6cccc74c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:55.998 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5023897b-d168-4061-a124-21366ad1e2c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:55.999 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac7d5e0-daef-4120-9f68-182dd4940c12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:56.021 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[132fc897-12aa-4ec5-9e76-78d1305ab6de]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445508, 'reachable_time': 31941, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285908, 'error': None, 'target': 'ovnmeta-93269c36-ab23-4d95-925a-798173550624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:56.025 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-93269c36-ab23-4d95-925a-798173550624 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:26:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d93269c36\x2dab23\x2d4d95\x2d925a\x2d798173550624.mount: Deactivated successfully.
Nov 25 08:26:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:26:56.025 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[8e615fdc-e3ba-437e-8ef0-b2504f0d3f6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:26:56 compute-0 nova_compute[253538]: 2025-11-25 08:26:56.348 253542 DEBUG nova.network.neutron [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Successfully updated port: 521823df-589a-4370-a3ea-a5a6f4c73a6a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:26:56 compute-0 nova_compute[253538]: 2025-11-25 08:26:56.371 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "refresh_cache-0fc86c7e-5de2-431c-9152-cfe293f8cc7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:26:56 compute-0 nova_compute[253538]: 2025-11-25 08:26:56.371 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquired lock "refresh_cache-0fc86c7e-5de2-431c-9152-cfe293f8cc7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:26:56 compute-0 nova_compute[253538]: 2025-11-25 08:26:56.372 253542 DEBUG nova.network.neutron [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:26:56 compute-0 nova_compute[253538]: 2025-11-25 08:26:56.552 253542 DEBUG nova.compute.manager [req-221a7061-a639-40cb-853a-3ada2ec19579 req-ca28aa9e-fe35-4ee7-8546-8b1c990fbcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received event network-changed-521823df-589a-4370-a3ea-a5a6f4c73a6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:26:56 compute-0 nova_compute[253538]: 2025-11-25 08:26:56.553 253542 DEBUG nova.compute.manager [req-221a7061-a639-40cb-853a-3ada2ec19579 req-ca28aa9e-fe35-4ee7-8546-8b1c990fbcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Refreshing instance network info cache due to event network-changed-521823df-589a-4370-a3ea-a5a6f4c73a6a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:26:56 compute-0 nova_compute[253538]: 2025-11-25 08:26:56.553 253542 DEBUG oslo_concurrency.lockutils [req-221a7061-a639-40cb-853a-3ada2ec19579 req-ca28aa9e-fe35-4ee7-8546-8b1c990fbcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0fc86c7e-5de2-431c-9152-cfe293f8cc7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:26:56 compute-0 nova_compute[253538]: 2025-11-25 08:26:56.592 253542 DEBUG nova.network.neutron [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:26:56 compute-0 ceph-mon[75015]: pgmap v1243: 321 pgs: 321 active+clean; 246 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 376 KiB/s rd, 3.9 MiB/s wr, 137 op/s
Nov 25 08:26:56 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/344841007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:26:56 compute-0 ceph-mon[75015]: pgmap v1244: 321 pgs: 321 active+clean; 246 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 274 KiB/s rd, 3.3 MiB/s wr, 106 op/s
Nov 25 08:26:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 246 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1.5 MiB/s wr, 46 op/s
Nov 25 08:26:57 compute-0 nova_compute[253538]: 2025-11-25 08:26:57.478 253542 DEBUG nova.network.neutron [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Updating instance_info_cache with network_info: [{"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:26:57 compute-0 nova_compute[253538]: 2025-11-25 08:26:57.517 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Releasing lock "refresh_cache-0fc86c7e-5de2-431c-9152-cfe293f8cc7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:26:57 compute-0 nova_compute[253538]: 2025-11-25 08:26:57.518 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Instance network_info: |[{"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:26:57 compute-0 nova_compute[253538]: 2025-11-25 08:26:57.518 253542 DEBUG oslo_concurrency.lockutils [req-221a7061-a639-40cb-853a-3ada2ec19579 req-ca28aa9e-fe35-4ee7-8546-8b1c990fbcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0fc86c7e-5de2-431c-9152-cfe293f8cc7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:26:57 compute-0 nova_compute[253538]: 2025-11-25 08:26:57.518 253542 DEBUG nova.network.neutron [req-221a7061-a639-40cb-853a-3ada2ec19579 req-ca28aa9e-fe35-4ee7-8546-8b1c990fbcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Refreshing network info cache for port 521823df-589a-4370-a3ea-a5a6f4c73a6a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:26:57 compute-0 nova_compute[253538]: 2025-11-25 08:26:57.706 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059202.7051883, 23ace5af-6840-42aa-a801-98abbb4f3a52 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:26:57 compute-0 nova_compute[253538]: 2025-11-25 08:26:57.707 253542 INFO nova.compute.manager [-] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] VM Stopped (Lifecycle Event)
Nov 25 08:26:57 compute-0 nova_compute[253538]: 2025-11-25 08:26:57.720 253542 DEBUG nova.compute.manager [None req-bed7a0fa-c90e-44b0-a71b-15f6cb61e566 - - - - - -] [instance: 23ace5af-6840-42aa-a801-98abbb4f3a52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:26:57 compute-0 ceph-mon[75015]: pgmap v1245: 321 pgs: 321 active+clean; 246 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1.5 MiB/s wr, 46 op/s
Nov 25 08:26:58 compute-0 podman[285919]: 2025-11-25 08:26:58.807654827 +0000 UTC m=+0.056600068 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:26:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 246 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 118 KiB/s wr, 25 op/s
Nov 25 08:26:59 compute-0 nova_compute[253538]: 2025-11-25 08:26:59.730 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:26:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:27:00 compute-0 ceph-mon[75015]: pgmap v1246: 321 pgs: 321 active+clean; 246 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 118 KiB/s wr, 25 op/s
Nov 25 08:27:00 compute-0 nova_compute[253538]: 2025-11-25 08:27:00.541 253542 DEBUG nova.network.neutron [req-221a7061-a639-40cb-853a-3ada2ec19579 req-ca28aa9e-fe35-4ee7-8546-8b1c990fbcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Updated VIF entry in instance network info cache for port 521823df-589a-4370-a3ea-a5a6f4c73a6a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:27:00 compute-0 nova_compute[253538]: 2025-11-25 08:27:00.542 253542 DEBUG nova.network.neutron [req-221a7061-a639-40cb-853a-3ada2ec19579 req-ca28aa9e-fe35-4ee7-8546-8b1c990fbcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Updating instance_info_cache with network_info: [{"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:27:00 compute-0 nova_compute[253538]: 2025-11-25 08:27:00.557 253542 DEBUG oslo_concurrency.lockutils [req-221a7061-a639-40cb-853a-3ada2ec19579 req-ca28aa9e-fe35-4ee7-8546-8b1c990fbcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0fc86c7e-5de2-431c-9152-cfe293f8cc7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:27:00 compute-0 nova_compute[253538]: 2025-11-25 08:27:00.739 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:01 compute-0 nova_compute[253538]: 2025-11-25 08:27:01.028 253542 DEBUG oslo_concurrency.processutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/disk.config c787de46-dba9-458e-acc0-57470097fac5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 8.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:01 compute-0 nova_compute[253538]: 2025-11-25 08:27:01.028 253542 INFO nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Deleting local config drive /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/disk.config because it was imported into RBD.
Nov 25 08:27:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 260 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 496 KiB/s wr, 13 op/s
Nov 25 08:27:01 compute-0 kernel: tap0416b402-08: entered promiscuous mode
Nov 25 08:27:01 compute-0 NetworkManager[48915]: <info>  [1764059221.0872] manager: (tap0416b402-08): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Nov 25 08:27:01 compute-0 nova_compute[253538]: 2025-11-25 08:27:01.089 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:01 compute-0 ovn_controller[152859]: 2025-11-25T08:27:01Z|00124|binding|INFO|Claiming lport 0416b402-0842-4b73-910b-d30a5750474c for this chassis.
Nov 25 08:27:01 compute-0 ovn_controller[152859]: 2025-11-25T08:27:01Z|00125|binding|INFO|0416b402-0842-4b73-910b-d30a5750474c: Claiming fa:16:3e:97:a5:e1 10.100.0.4
Nov 25 08:27:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.097 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:a5:e1 10.100.0.4'], port_security=['fa:16:3e:97:a5:e1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c787de46-dba9-458e-acc0-57470097fac5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52217f37b23343d697fa6d2be38e236d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '94ed9e1b-8451-4dd9-95ef-2d9affe4fca9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16b4c924-25ba-4e74-8549-ba25753d78e7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0416b402-0842-4b73-910b-d30a5750474c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:27:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.098 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0416b402-0842-4b73-910b-d30a5750474c in datapath ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 bound to our chassis
Nov 25 08:27:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.099 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954
Nov 25 08:27:01 compute-0 ovn_controller[152859]: 2025-11-25T08:27:01Z|00126|binding|INFO|Setting lport 0416b402-0842-4b73-910b-d30a5750474c ovn-installed in OVS
Nov 25 08:27:01 compute-0 ovn_controller[152859]: 2025-11-25T08:27:01Z|00127|binding|INFO|Setting lport 0416b402-0842-4b73-910b-d30a5750474c up in Southbound
Nov 25 08:27:01 compute-0 systemd-machined[215790]: New machine qemu-28-instance-00000018.
Nov 25 08:27:01 compute-0 nova_compute[253538]: 2025-11-25 08:27:01.120 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.120 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[774ea90c-f294-4ec3-915e-cad83639da51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:01 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-00000018.
Nov 25 08:27:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.142 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9378f3-5152-490e-9815-c00e1faa230f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.145 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f466ef43-5c5e-41d5-b81b-c52ced0bcdfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:01 compute-0 systemd-udevd[285955]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:27:01 compute-0 NetworkManager[48915]: <info>  [1764059221.1594] device (tap0416b402-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:27:01 compute-0 NetworkManager[48915]: <info>  [1764059221.1615] device (tap0416b402-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:27:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.173 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b89f1ac1-5ecb-4564-bf48-82f03e8e5de1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.191 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[92df55f4-e025-4533-a897-9de3da0cb42d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec4e7ebb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:64:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452883, 'reachable_time': 43709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285965, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.206 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d8850034-bd2b-4086-a481-54f368f5e276]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec4e7ebb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452894, 'tstamp': 452894}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285967, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec4e7ebb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452897, 'tstamp': 452897}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285967, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.208 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec4e7ebb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:01 compute-0 nova_compute[253538]: 2025-11-25 08:27:01.209 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:01 compute-0 nova_compute[253538]: 2025-11-25 08:27:01.210 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.210 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec4e7ebb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.211 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:27:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.212 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec4e7ebb-a0, col_values=(('external_ids', {'iface-id': '26e04d60-4f32-4592-b567-fc34513c5aba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:01.212 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:27:01 compute-0 nova_compute[253538]: 2025-11-25 08:27:01.354 253542 DEBUG nova.compute.manager [req-ae373da8-694d-4a8f-8b70-69db663e2acc req-b19b6e48-fe37-49b2-80e2-ec3ad0c99062 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received event network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:01 compute-0 nova_compute[253538]: 2025-11-25 08:27:01.354 253542 DEBUG oslo_concurrency.lockutils [req-ae373da8-694d-4a8f-8b70-69db663e2acc req-b19b6e48-fe37-49b2-80e2-ec3ad0c99062 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:01 compute-0 nova_compute[253538]: 2025-11-25 08:27:01.355 253542 DEBUG oslo_concurrency.lockutils [req-ae373da8-694d-4a8f-8b70-69db663e2acc req-b19b6e48-fe37-49b2-80e2-ec3ad0c99062 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:01 compute-0 nova_compute[253538]: 2025-11-25 08:27:01.355 253542 DEBUG oslo_concurrency.lockutils [req-ae373da8-694d-4a8f-8b70-69db663e2acc req-b19b6e48-fe37-49b2-80e2-ec3ad0c99062 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:01 compute-0 nova_compute[253538]: 2025-11-25 08:27:01.355 253542 DEBUG nova.compute.manager [req-ae373da8-694d-4a8f-8b70-69db663e2acc req-b19b6e48-fe37-49b2-80e2-ec3ad0c99062 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Processing event network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:27:02 compute-0 nova_compute[253538]: 2025-11-25 08:27:02.118 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 7.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:02 compute-0 ceph-mon[75015]: pgmap v1247: 321 pgs: 321 active+clean; 260 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 496 KiB/s wr, 13 op/s
Nov 25 08:27:02 compute-0 nova_compute[253538]: 2025-11-25 08:27:02.197 253542 DEBUG nova.storage.rbd_utils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] resizing rbd image 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:27:02 compute-0 podman[286047]: 2025-11-25 08:27:02.853047257 +0000 UTC m=+0.087697880 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 277 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.200 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.201 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059223.1996746, c787de46-dba9-458e-acc0-57470097fac5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.201 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] VM Started (Lifecycle Event)
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.206 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.209 253542 INFO nova.virt.libvirt.driver [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] Instance spawned successfully.
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.209 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.222 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.227 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.231 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.232 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.232 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.232 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.233 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.233 253542 DEBUG nova.virt.libvirt.driver [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.254 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.254 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059223.2032046, c787de46-dba9-458e-acc0-57470097fac5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.255 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] VM Paused (Lifecycle Event)
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.284 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.287 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059223.2052667, c787de46-dba9-458e-acc0-57470097fac5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.287 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] VM Resumed (Lifecycle Event)
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.313 253542 INFO nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Took 20.29 seconds to spawn the instance on the hypervisor.
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.314 253542 DEBUG nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.319 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.324 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.363 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.387 253542 INFO nova.compute.manager [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Took 21.32 seconds to build instance.
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.402 253542 DEBUG oslo_concurrency.lockutils [None req-512bd3e2-021e-41ed-a450-06a605dc70eb 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.640 253542 DEBUG nova.compute.manager [req-13f026d9-7166-451e-8407-6e52025836f7 req-d0bd73bb-f4e5-45d7-940a-5ea8d97723f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received event network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.640 253542 DEBUG oslo_concurrency.lockutils [req-13f026d9-7166-451e-8407-6e52025836f7 req-d0bd73bb-f4e5-45d7-940a-5ea8d97723f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.640 253542 DEBUG oslo_concurrency.lockutils [req-13f026d9-7166-451e-8407-6e52025836f7 req-d0bd73bb-f4e5-45d7-940a-5ea8d97723f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.641 253542 DEBUG oslo_concurrency.lockutils [req-13f026d9-7166-451e-8407-6e52025836f7 req-d0bd73bb-f4e5-45d7-940a-5ea8d97723f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.641 253542 DEBUG nova.compute.manager [req-13f026d9-7166-451e-8407-6e52025836f7 req-d0bd73bb-f4e5-45d7-940a-5ea8d97723f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] No waiting events found dispatching network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.641 253542 WARNING nova.compute.manager [req-13f026d9-7166-451e-8407-6e52025836f7 req-d0bd73bb-f4e5-45d7-940a-5ea8d97723f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received unexpected event network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c for instance with vm_state active and task_state None.
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021430704442599465 of space, bias 1.0, pg target 0.642921133277984 quantized to 32 (current 32)
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:27:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.851 253542 DEBUG nova.objects.instance [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'migration_context' on Instance uuid 0fc86c7e-5de2-431c-9152-cfe293f8cc7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.864 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.865 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Ensure instance console log exists: /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.866 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.866 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.866 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.869 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Start _get_guest_xml network_info=[{"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.874 253542 WARNING nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.887 253542 DEBUG nova.virt.libvirt.host [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.888 253542 DEBUG nova.virt.libvirt.host [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.892 253542 DEBUG nova.virt.libvirt.host [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.893 253542 DEBUG nova.virt.libvirt.host [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.893 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.894 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.894 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.895 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.895 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.895 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.896 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.896 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.896 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.896 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.897 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.897 253542 DEBUG nova.virt.hardware [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:27:03 compute-0 nova_compute[253538]: 2025-11-25 08:27:03.900 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.165 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059209.1640756, 86bfa56f-56d0-4a5e-b0b2-302c375e37a3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.166 253542 INFO nova.compute.manager [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] VM Stopped (Lifecycle Event)
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.192 253542 DEBUG nova.compute.manager [None req-994cd579-8358-46f0-b4f1-c4e290e6dd48 - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.195 253542 DEBUG nova.compute.manager [None req-994cd579-8358-46f0-b4f1-c4e290e6dd48 - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.221 253542 INFO nova.compute.manager [None req-994cd579-8358-46f0-b4f1-c4e290e6dd48 - - - - - -] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] During sync_power_state the instance has a pending task (deleting). Skip.
Nov 25 08:27:04 compute-0 ceph-mon[75015]: pgmap v1248: 321 pgs: 321 active+clean; 277 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Nov 25 08:27:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:27:04 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1011166358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.441 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.466 253542 DEBUG nova.storage.rbd_utils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.472 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.732 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:27:04 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3043833436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.928 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.931 253542 DEBUG nova.virt.libvirt.vif [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:26:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-232041693',display_name='tempest-ImagesTestJSON-server-232041693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-232041693',id=25,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-pxohzxru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:26:53Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=0fc86c7e-5de2-431c-9152-cfe293f8cc7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.932 253542 DEBUG nova.network.os_vif_util [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.934 253542 DEBUG nova.network.os_vif_util [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:ef:3b,bridge_name='br-int',has_traffic_filtering=True,id=521823df-589a-4370-a3ea-a5a6f4c73a6a,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521823df-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.936 253542 DEBUG nova.objects.instance [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 0fc86c7e-5de2-431c-9152-cfe293f8cc7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.963 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:27:04 compute-0 nova_compute[253538]:   <uuid>0fc86c7e-5de2-431c-9152-cfe293f8cc7d</uuid>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   <name>instance-00000019</name>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <nova:name>tempest-ImagesTestJSON-server-232041693</nova:name>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:27:03</nova:creationTime>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:27:04 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:27:04 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:27:04 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:27:04 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:27:04 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:27:04 compute-0 nova_compute[253538]:         <nova:user uuid="38fa175fb699405c9a05d7c28f994ebc">tempest-ImagesTestJSON-109091550-project-member</nova:user>
Nov 25 08:27:04 compute-0 nova_compute[253538]:         <nova:project uuid="b0a28d62fb1841c087b84b40bf5a54ec">tempest-ImagesTestJSON-109091550</nova:project>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:27:04 compute-0 nova_compute[253538]:         <nova:port uuid="521823df-589a-4370-a3ea-a5a6f4c73a6a">
Nov 25 08:27:04 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <system>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <entry name="serial">0fc86c7e-5de2-431c-9152-cfe293f8cc7d</entry>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <entry name="uuid">0fc86c7e-5de2-431c-9152-cfe293f8cc7d</entry>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     </system>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   <os>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   </os>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   <features>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   </features>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk">
Nov 25 08:27:04 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       </source>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:27:04 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk.config">
Nov 25 08:27:04 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       </source>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:27:04 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:3f:ef:3b"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <target dev="tap521823df-58"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d/console.log" append="off"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <video>
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     </video>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:27:04 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:27:04 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:27:04 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:27:04 compute-0 nova_compute[253538]: </domain>
Nov 25 08:27:04 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.974 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Preparing to wait for external event network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.975 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.975 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.975 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.977 253542 DEBUG nova.virt.libvirt.vif [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:26:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-232041693',display_name='tempest-ImagesTestJSON-server-232041693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-232041693',id=25,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-pxohzxru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:26:53Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=0fc86c7e-5de2-431c-9152-cfe293f8cc7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.977 253542 DEBUG nova.network.os_vif_util [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.979 253542 DEBUG nova.network.os_vif_util [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:ef:3b,bridge_name='br-int',has_traffic_filtering=True,id=521823df-589a-4370-a3ea-a5a6f4c73a6a,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521823df-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.980 253542 DEBUG os_vif [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:ef:3b,bridge_name='br-int',has_traffic_filtering=True,id=521823df-589a-4370-a3ea-a5a6f4c73a6a,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521823df-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.981 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.982 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.983 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.988 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.988 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap521823df-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.989 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap521823df-58, col_values=(('external_ids', {'iface-id': '521823df-589a-4370-a3ea-a5a6f4c73a6a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:ef:3b', 'vm-uuid': '0fc86c7e-5de2-431c-9152-cfe293f8cc7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.991 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:04 compute-0 NetworkManager[48915]: <info>  [1764059224.9928] manager: (tap521823df-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Nov 25 08:27:04 compute-0 nova_compute[253538]: 2025-11-25 08:27:04.996 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:27:05 compute-0 nova_compute[253538]: 2025-11-25 08:27:05.000 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:05 compute-0 nova_compute[253538]: 2025-11-25 08:27:05.001 253542 INFO os_vif [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:ef:3b,bridge_name='br-int',has_traffic_filtering=True,id=521823df-589a-4370-a3ea-a5a6f4c73a6a,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521823df-58')
Nov 25 08:27:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 257 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 428 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 25 08:27:05 compute-0 nova_compute[253538]: 2025-11-25 08:27:05.078 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:27:05 compute-0 nova_compute[253538]: 2025-11-25 08:27:05.079 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:27:05 compute-0 nova_compute[253538]: 2025-11-25 08:27:05.079 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No VIF found with MAC fa:16:3e:3f:ef:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:27:05 compute-0 nova_compute[253538]: 2025-11-25 08:27:05.080 253542 INFO nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Using config drive
Nov 25 08:27:05 compute-0 nova_compute[253538]: 2025-11-25 08:27:05.107 253542 DEBUG nova.storage.rbd_utils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:05 compute-0 nova_compute[253538]: 2025-11-25 08:27:05.172 253542 INFO nova.virt.libvirt.driver [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deleting instance files /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_del
Nov 25 08:27:05 compute-0 nova_compute[253538]: 2025-11-25 08:27:05.173 253542 INFO nova.virt.libvirt.driver [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deletion of /var/lib/nova/instances/86bfa56f-56d0-4a5e-b0b2-302c375e37a3_del complete
Nov 25 08:27:05 compute-0 nova_compute[253538]: 2025-11-25 08:27:05.254 253542 INFO nova.compute.manager [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Took 17.53 seconds to destroy the instance on the hypervisor.
Nov 25 08:27:05 compute-0 nova_compute[253538]: 2025-11-25 08:27:05.254 253542 DEBUG oslo.service.loopingcall [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:27:05 compute-0 nova_compute[253538]: 2025-11-25 08:27:05.255 253542 DEBUG nova.compute.manager [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:27:05 compute-0 nova_compute[253538]: 2025-11-25 08:27:05.255 253542 DEBUG nova.network.neutron [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:27:05 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1011166358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:05 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3043833436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:05 compute-0 nova_compute[253538]: 2025-11-25 08:27:05.878 253542 INFO nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Creating config drive at /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d/disk.config
Nov 25 08:27:05 compute-0 nova_compute[253538]: 2025-11-25 08:27:05.890 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkx_hja5b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.019 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkx_hja5b" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.043 253542 DEBUG nova.storage.rbd_utils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.046 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d/disk.config 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.107 253542 DEBUG nova.network.neutron [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.131 253542 INFO nova.compute.manager [-] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Took 0.88 seconds to deallocate network for instance.
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.161 253542 DEBUG nova.compute.manager [req-47c5a6ae-e500-41d2-b257-c65ac7571bf7 req-7006548c-b6d7-4dbb-9ba5-e8fc41377cc9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 86bfa56f-56d0-4a5e-b0b2-302c375e37a3] Received event network-vif-deleted-4ad9572b-6ac1-4659-8ea6-71b8a32c06fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.165 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.165 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.262 253542 DEBUG oslo_concurrency.processutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:06 compute-0 ceph-mon[75015]: pgmap v1249: 321 pgs: 321 active+clean; 257 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 428 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.627 253542 DEBUG oslo_concurrency.processutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d/disk.config 0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.628 253542 INFO nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Deleting local config drive /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d/disk.config because it was imported into RBD.
Nov 25 08:27:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:27:06 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16656392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:06 compute-0 kernel: tap521823df-58: entered promiscuous mode
Nov 25 08:27:06 compute-0 NetworkManager[48915]: <info>  [1764059226.6885] manager: (tap521823df-58): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Nov 25 08:27:06 compute-0 ovn_controller[152859]: 2025-11-25T08:27:06Z|00128|binding|INFO|Claiming lport 521823df-589a-4370-a3ea-a5a6f4c73a6a for this chassis.
Nov 25 08:27:06 compute-0 ovn_controller[152859]: 2025-11-25T08:27:06Z|00129|binding|INFO|521823df-589a-4370-a3ea-a5a6f4c73a6a: Claiming fa:16:3e:3f:ef:3b 10.100.0.6
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.690 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.704 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:ef:3b 10.100.0.6'], port_security=['fa:16:3e:3f:ef:3b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0fc86c7e-5de2-431c-9152-cfe293f8cc7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=521823df-589a-4370-a3ea-a5a6f4c73a6a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.705 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 521823df-589a-4370-a3ea-a5a6f4c73a6a in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e bound to our chassis
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.706 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.713 253542 DEBUG oslo_concurrency.processutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.716 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2015a45e-5588-40d3-93ff-3e3ef4599acc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.719 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapba659d6c-c1 in ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.722 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapba659d6c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.722 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe16207-b281-45a1-bb85-8b583be0e632]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.723 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb8ee88-db82-4dfd-9903-bd6f6f2d6eda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.729 253542 DEBUG nova.compute.provider_tree [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:27:06 compute-0 systemd-machined[215790]: New machine qemu-29-instance-00000019.
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.744 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[f8cf343e-e3ef-4d26-9d11-80abbbe80498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:06 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-00000019.
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.747 253542 DEBUG nova.scheduler.client.report [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:27:06 compute-0 systemd-udevd[286265]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.765 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:06 compute-0 NetworkManager[48915]: <info>  [1764059226.7721] device (tap521823df-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:27:06 compute-0 NetworkManager[48915]: <info>  [1764059226.7730] device (tap521823df-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.776 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.774 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2000e3-e010-4963-9ebe-f678906e74fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:06 compute-0 ovn_controller[152859]: 2025-11-25T08:27:06Z|00130|binding|INFO|Setting lport 521823df-589a-4370-a3ea-a5a6f4c73a6a ovn-installed in OVS
Nov 25 08:27:06 compute-0 ovn_controller[152859]: 2025-11-25T08:27:06Z|00131|binding|INFO|Setting lport 521823df-589a-4370-a3ea-a5a6f4c73a6a up in Southbound
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.784 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.792 253542 INFO nova.scheduler.client.report [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Deleted allocations for instance 86bfa56f-56d0-4a5e-b0b2-302c375e37a3
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.804 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6e66aad4-70ef-4f7c-bb62-0efec26bcd9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:06 compute-0 NetworkManager[48915]: <info>  [1764059226.8111] manager: (tapba659d6c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/69)
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.810 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[81c7ed6d-7f0b-4f40-a415-007820056d5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.842 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d15174eb-91d5-44b4-880d-37d8e7369672]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.843 253542 DEBUG oslo_concurrency.lockutils [None req-e493ddf0-4205-4934-be72-4655b7982c78 76d3377d398a4214a77bc0eb91638ec5 65a2f983cce14453b2dc9251a520f289 - - default default] Lock "86bfa56f-56d0-4a5e-b0b2-302c375e37a3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 19.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.845 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8683b179-3543-44b3-941c-63862df46245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:06 compute-0 NetworkManager[48915]: <info>  [1764059226.8688] device (tapba659d6c-c0): carrier: link connected
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.874 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[eeda2e7f-79ba-4aec-8f1f-f7af777bfbb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.893 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b959388d-6ce8-4ad7-93c6-8918925b1c21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458146, 'reachable_time': 30819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286295, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.912 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[68c69db2-1a85-47f8-a1e4-a42eb443e8f2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:c340'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458146, 'tstamp': 458146}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286296, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.923 253542 DEBUG oslo_concurrency.lockutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.924 253542 DEBUG oslo_concurrency.lockutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.925 253542 INFO nova.compute.manager [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Rebooting instance
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.933 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[422d8087-64f2-4fb4-aa74-e27390cd44bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458146, 'reachable_time': 30819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286297, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.950 253542 DEBUG oslo_concurrency.lockutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.952 253542 DEBUG oslo_concurrency.lockutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquired lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:27:06 compute-0 nova_compute[253538]: 2025-11-25 08:27:06.952 253542 DEBUG nova.network.neutron [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:27:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:06.964 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b395e410-84bf-41c1-9fdd-15533cd1712a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.028 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2ff0fc-2f0a-4a5f-b5f3-63bcbeb77d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.035 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.036 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.036 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba659d6c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:07 compute-0 NetworkManager[48915]: <info>  [1764059227.0397] manager: (tapba659d6c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Nov 25 08:27:07 compute-0 nova_compute[253538]: 2025-11-25 08:27:07.040 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:07 compute-0 kernel: tapba659d6c-c0: entered promiscuous mode
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.045 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapba659d6c-c0, col_values=(('external_ids', {'iface-id': '02ee51d1-7fc5-4815-93ec-b9ead088a46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:07 compute-0 ovn_controller[152859]: 2025-11-25T08:27:07Z|00132|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 08:27:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 228 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.068 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:27:07 compute-0 nova_compute[253538]: 2025-11-25 08:27:07.069 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.069 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ce5b1216-6b7f-4140-b059-0fd676b60dc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.070 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:27:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:07.072 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'env', 'PROCESS_TAG=haproxy-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ba659d6c-c094-47d7-ba45-d0e659ce778e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:27:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/16656392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:07 compute-0 ceph-mon[75015]: pgmap v1250: 321 pgs: 321 active+clean; 228 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Nov 25 08:27:07 compute-0 podman[286346]: 2025-11-25 08:27:07.488770187 +0000 UTC m=+0.020338505 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:27:07 compute-0 nova_compute[253538]: 2025-11-25 08:27:07.650 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059227.6497922, 0fc86c7e-5de2-431c-9152-cfe293f8cc7d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:07 compute-0 nova_compute[253538]: 2025-11-25 08:27:07.651 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] VM Started (Lifecycle Event)
Nov 25 08:27:07 compute-0 nova_compute[253538]: 2025-11-25 08:27:07.680 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:07 compute-0 nova_compute[253538]: 2025-11-25 08:27:07.685 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059227.64991, 0fc86c7e-5de2-431c-9152-cfe293f8cc7d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:07 compute-0 nova_compute[253538]: 2025-11-25 08:27:07.685 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] VM Paused (Lifecycle Event)
Nov 25 08:27:07 compute-0 podman[286346]: 2025-11-25 08:27:07.691129948 +0000 UTC m=+0.222698246 container create ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:27:07 compute-0 nova_compute[253538]: 2025-11-25 08:27:07.704 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:07 compute-0 nova_compute[253538]: 2025-11-25 08:27:07.707 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:27:07 compute-0 nova_compute[253538]: 2025-11-25 08:27:07.721 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:27:07 compute-0 systemd[1]: Started libpod-conmon-ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06.scope.
Nov 25 08:27:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301b66a4b9ad7a58ec132f0b777f3b1abc4e7a7e468241d4a3386fefbc812d53/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:27:07 compute-0 podman[286346]: 2025-11-25 08:27:07.927980584 +0000 UTC m=+0.459548902 container init ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 08:27:07 compute-0 podman[286346]: 2025-11-25 08:27:07.934487915 +0000 UTC m=+0.466056213 container start ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 08:27:07 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[286385]: [NOTICE]   (286389) : New worker (286391) forked
Nov 25 08:27:07 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[286385]: [NOTICE]   (286389) : Loading success.
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.058 253542 DEBUG nova.network.neutron [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updating instance_info_cache with network_info: [{"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.090 253542 DEBUG oslo_concurrency.lockutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Releasing lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.092 253542 DEBUG nova.compute.manager [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.240 253542 DEBUG nova.compute.manager [req-dc39e33f-d083-4974-8b10-833f991e23a3 req-a22c4aef-ea3c-40d8-9fae-75855c837145 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received event network-changed-0416b402-0842-4b73-910b-d30a5750474c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.241 253542 DEBUG nova.compute.manager [req-dc39e33f-d083-4974-8b10-833f991e23a3 req-a22c4aef-ea3c-40d8-9fae-75855c837145 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Refreshing instance network info cache due to event network-changed-0416b402-0842-4b73-910b-d30a5750474c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.241 253542 DEBUG oslo_concurrency.lockutils [req-dc39e33f-d083-4974-8b10-833f991e23a3 req-a22c4aef-ea3c-40d8-9fae-75855c837145 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.242 253542 DEBUG oslo_concurrency.lockutils [req-dc39e33f-d083-4974-8b10-833f991e23a3 req-a22c4aef-ea3c-40d8-9fae-75855c837145 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.242 253542 DEBUG nova.network.neutron [req-dc39e33f-d083-4974-8b10-833f991e23a3 req-a22c4aef-ea3c-40d8-9fae-75855c837145 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Refreshing network info cache for port 0416b402-0842-4b73-910b-d30a5750474c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:27:08 compute-0 kernel: tap0416b402-08 (unregistering): left promiscuous mode
Nov 25 08:27:08 compute-0 NetworkManager[48915]: <info>  [1764059228.4101] device (tap0416b402-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:27:08 compute-0 ovn_controller[152859]: 2025-11-25T08:27:08Z|00133|binding|INFO|Releasing lport 0416b402-0842-4b73-910b-d30a5750474c from this chassis (sb_readonly=0)
Nov 25 08:27:08 compute-0 ovn_controller[152859]: 2025-11-25T08:27:08Z|00134|binding|INFO|Setting lport 0416b402-0842-4b73-910b-d30a5750474c down in Southbound
Nov 25 08:27:08 compute-0 ovn_controller[152859]: 2025-11-25T08:27:08Z|00135|binding|INFO|Removing iface tap0416b402-08 ovn-installed in OVS
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.421 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.428 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:a5:e1 10.100.0.4'], port_security=['fa:16:3e:97:a5:e1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c787de46-dba9-458e-acc0-57470097fac5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52217f37b23343d697fa6d2be38e236d', 'neutron:revision_number': '5', 'neutron:security_group_ids': '94ed9e1b-8451-4dd9-95ef-2d9affe4fca9 b88ff3c6-bea0-4b7c-9374-f058821e8f5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16b4c924-25ba-4e74-8549-ba25753d78e7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0416b402-0842-4b73-910b-d30a5750474c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:27:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.429 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0416b402-0842-4b73-910b-d30a5750474c in datapath ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 unbound from our chassis
Nov 25 08:27:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.431 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.440 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.451 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6f9deb1c-29cc-4a8d-b2c5-3f2041dc9795]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:08 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000018.scope: Deactivated successfully.
Nov 25 08:27:08 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000018.scope: Consumed 5.582s CPU time.
Nov 25 08:27:08 compute-0 systemd-machined[215790]: Machine qemu-28-instance-00000018 terminated.
Nov 25 08:27:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.478 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9c0236-983d-478c-ba2b-1fd6940d6f9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.481 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[47f69743-3728-435d-b64c-d6ffa7227ecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.508 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a7e06c-a7d6-45c9-be32-a164ff449fd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.523 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[29e1a1fc-f198-4e9c-bde5-7fbc16294c84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec4e7ebb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:64:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452883, 'reachable_time': 43709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286409, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.543 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ec1e3584-092d-4070-9361-df079ebaf618]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec4e7ebb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452894, 'tstamp': 452894}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286410, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec4e7ebb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452897, 'tstamp': 452897}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286410, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.545 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec4e7ebb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.550 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.551 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec4e7ebb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.551 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:27:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.552 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec4e7ebb-a0, col_values=(('external_ids', {'iface-id': '26e04d60-4f32-4592-b567-fc34513c5aba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:08.552 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:27:08 compute-0 NetworkManager[48915]: <info>  [1764059228.6529] manager: (tap0416b402-08): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.676 253542 INFO nova.virt.libvirt.driver [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] Instance destroyed successfully.
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.677 253542 DEBUG nova.objects.instance [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'resources' on Instance uuid c787de46-dba9-458e-acc0-57470097fac5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.689 253542 DEBUG nova.virt.libvirt.vif [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-910851624',display_name='tempest-SecurityGroupsTestJSON-server-910851624',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-910851624',id=24,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:27:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-s7z6d2k0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTestJSON-1828125381-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:27:08Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=c787de46-dba9-458e-acc0-57470097fac5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.690 253542 DEBUG nova.network.os_vif_util [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.691 253542 DEBUG nova.network.os_vif_util [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.691 253542 DEBUG os_vif [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.694 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0416b402-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.695 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.697 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.701 253542 INFO os_vif [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08')
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.707 253542 DEBUG nova.virt.libvirt.driver [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Start _get_guest_xml network_info=[{"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.711 253542 WARNING nova.virt.libvirt.driver [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.716 253542 DEBUG nova.virt.libvirt.host [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.717 253542 DEBUG nova.virt.libvirt.host [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.719 253542 DEBUG nova.virt.libvirt.host [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.719 253542 DEBUG nova.virt.libvirt.host [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.720 253542 DEBUG nova.virt.libvirt.driver [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.720 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.720 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.721 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.721 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.721 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.721 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.721 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.721 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.721 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.722 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.722 253542 DEBUG nova.virt.hardware [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.722 253542 DEBUG nova.objects.instance [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'vcpu_model' on Instance uuid c787de46-dba9-458e-acc0-57470097fac5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:08 compute-0 nova_compute[253538]: 2025-11-25 08:27:08.734 253542 DEBUG oslo_concurrency.processutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:08 compute-0 podman[286423]: 2025-11-25 08:27:08.826217198 +0000 UTC m=+0.100730869 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.052 253542 DEBUG nova.compute.manager [req-c4c95fb5-c6dc-473e-9c7f-40f6d3841953 req-f28342f1-c3c3-4047-8907-187e4d600112 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received event network-vif-unplugged-0416b402-0842-4b73-910b-d30a5750474c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.054 253542 DEBUG oslo_concurrency.lockutils [req-c4c95fb5-c6dc-473e-9c7f-40f6d3841953 req-f28342f1-c3c3-4047-8907-187e4d600112 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.054 253542 DEBUG oslo_concurrency.lockutils [req-c4c95fb5-c6dc-473e-9c7f-40f6d3841953 req-f28342f1-c3c3-4047-8907-187e4d600112 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.055 253542 DEBUG oslo_concurrency.lockutils [req-c4c95fb5-c6dc-473e-9c7f-40f6d3841953 req-f28342f1-c3c3-4047-8907-187e4d600112 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.056 253542 DEBUG nova.compute.manager [req-c4c95fb5-c6dc-473e-9c7f-40f6d3841953 req-f28342f1-c3c3-4047-8907-187e4d600112 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] No waiting events found dispatching network-vif-unplugged-0416b402-0842-4b73-910b-d30a5750474c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.056 253542 WARNING nova.compute.manager [req-c4c95fb5-c6dc-473e-9c7f-40f6d3841953 req-f28342f1-c3c3-4047-8907-187e4d600112 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received unexpected event network-vif-unplugged-0416b402-0842-4b73-910b-d30a5750474c for instance with vm_state active and task_state reboot_started_hard.
Nov 25 08:27:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 214 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Nov 25 08:27:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:27:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3080990611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.200 253542 DEBUG oslo_concurrency.processutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.233 253542 DEBUG oslo_concurrency.processutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.408 253542 DEBUG nova.network.neutron [req-dc39e33f-d083-4974-8b10-833f991e23a3 req-a22c4aef-ea3c-40d8-9fae-75855c837145 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updated VIF entry in instance network info cache for port 0416b402-0842-4b73-910b-d30a5750474c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.409 253542 DEBUG nova.network.neutron [req-dc39e33f-d083-4974-8b10-833f991e23a3 req-a22c4aef-ea3c-40d8-9fae-75855c837145 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updating instance_info_cache with network_info: [{"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.422 253542 DEBUG oslo_concurrency.lockutils [req-dc39e33f-d083-4974-8b10-833f991e23a3 req-a22c4aef-ea3c-40d8-9fae-75855c837145 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.574 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 08:27:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:27:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/683937940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.675 253542 DEBUG oslo_concurrency.processutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.676 253542 DEBUG nova.virt.libvirt.vif [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-910851624',display_name='tempest-SecurityGroupsTestJSON-server-910851624',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-910851624',id=24,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:27:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-s7z6d2k0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTestJSON-1828125381-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:27:08Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=c787de46-dba9-458e-acc0-57470097fac5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.676 253542 DEBUG nova.network.os_vif_util [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.677 253542 DEBUG nova.network.os_vif_util [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.678 253542 DEBUG nova.objects.instance [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'pci_devices' on Instance uuid c787de46-dba9-458e-acc0-57470097fac5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.689 253542 DEBUG nova.virt.libvirt.driver [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:27:09 compute-0 nova_compute[253538]:   <uuid>c787de46-dba9-458e-acc0-57470097fac5</uuid>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   <name>instance-00000018</name>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <nova:name>tempest-SecurityGroupsTestJSON-server-910851624</nova:name>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:27:08</nova:creationTime>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:27:09 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:27:09 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:27:09 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:27:09 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:27:09 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:27:09 compute-0 nova_compute[253538]:         <nova:user uuid="02e795c75a3b40bbbc3ca83d0501777f">tempest-SecurityGroupsTestJSON-1828125381-project-member</nova:user>
Nov 25 08:27:09 compute-0 nova_compute[253538]:         <nova:project uuid="52217f37b23343d697fa6d2be38e236d">tempest-SecurityGroupsTestJSON-1828125381</nova:project>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:27:09 compute-0 nova_compute[253538]:         <nova:port uuid="0416b402-0842-4b73-910b-d30a5750474c">
Nov 25 08:27:09 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <system>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <entry name="serial">c787de46-dba9-458e-acc0-57470097fac5</entry>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <entry name="uuid">c787de46-dba9-458e-acc0-57470097fac5</entry>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     </system>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   <os>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   </os>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   <features>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   </features>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/c787de46-dba9-458e-acc0-57470097fac5_disk">
Nov 25 08:27:09 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       </source>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:27:09 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/c787de46-dba9-458e-acc0-57470097fac5_disk.config">
Nov 25 08:27:09 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       </source>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:27:09 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:97:a5:e1"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <target dev="tap0416b402-08"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5/console.log" append="off"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <video>
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     </video>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <input type="keyboard" bus="usb"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:27:09 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:27:09 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:27:09 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:27:09 compute-0 nova_compute[253538]: </domain>
Nov 25 08:27:09 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.691 253542 DEBUG nova.virt.libvirt.driver [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.692 253542 DEBUG nova.virt.libvirt.driver [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.695 253542 DEBUG nova.virt.libvirt.vif [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-910851624',display_name='tempest-SecurityGroupsTestJSON-server-910851624',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-910851624',id=24,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:27:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-s7z6d2k0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTestJSON-1828125381-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:27:08Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=c787de46-dba9-458e-acc0-57470097fac5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.695 253542 DEBUG nova.network.os_vif_util [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.697 253542 DEBUG nova.network.os_vif_util [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.698 253542 DEBUG os_vif [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.699 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.700 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.701 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.705 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.706 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0416b402-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.707 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0416b402-08, col_values=(('external_ids', {'iface-id': '0416b402-0842-4b73-910b-d30a5750474c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:a5:e1', 'vm-uuid': 'c787de46-dba9-458e-acc0-57470097fac5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:09 compute-0 NetworkManager[48915]: <info>  [1764059229.7095] manager: (tap0416b402-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.711 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.714 253542 INFO os_vif [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08')
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.733 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:09 compute-0 kernel: tap0416b402-08: entered promiscuous mode
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:09 compute-0 ovn_controller[152859]: 2025-11-25T08:27:09Z|00136|binding|INFO|Claiming lport 0416b402-0842-4b73-910b-d30a5750474c for this chassis.
Nov 25 08:27:09 compute-0 ovn_controller[152859]: 2025-11-25T08:27:09Z|00137|binding|INFO|0416b402-0842-4b73-910b-d30a5750474c: Claiming fa:16:3e:97:a5:e1 10.100.0.4
Nov 25 08:27:09 compute-0 NetworkManager[48915]: <info>  [1764059229.8044] manager: (tap0416b402-08): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Nov 25 08:27:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.809 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:a5:e1 10.100.0.4'], port_security=['fa:16:3e:97:a5:e1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c787de46-dba9-458e-acc0-57470097fac5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52217f37b23343d697fa6d2be38e236d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '94ed9e1b-8451-4dd9-95ef-2d9affe4fca9 b88ff3c6-bea0-4b7c-9374-f058821e8f5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16b4c924-25ba-4e74-8549-ba25753d78e7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0416b402-0842-4b73-910b-d30a5750474c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:27:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.811 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0416b402-0842-4b73-910b-d30a5750474c in datapath ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 bound to our chassis
Nov 25 08:27:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.814 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.820 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:09 compute-0 ovn_controller[152859]: 2025-11-25T08:27:09Z|00138|binding|INFO|Setting lport 0416b402-0842-4b73-910b-d30a5750474c ovn-installed in OVS
Nov 25 08:27:09 compute-0 ovn_controller[152859]: 2025-11-25T08:27:09Z|00139|binding|INFO|Setting lport 0416b402-0842-4b73-910b-d30a5750474c up in Southbound
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.824 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.835 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b4d6209c-e531-465f-8f9a-b6b984765f0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:09 compute-0 systemd-machined[215790]: New machine qemu-30-instance-00000018.
Nov 25 08:27:09 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-00000018.
Nov 25 08:27:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.865 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8de7897e-4571-41bc-9816-42c37e698581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.868 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[eb67f60c-c0a0-436f-be50-b759305138fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:09 compute-0 systemd-udevd[286530]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:27:09 compute-0 NetworkManager[48915]: <info>  [1764059229.8886] device (tap0416b402-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:27:09 compute-0 NetworkManager[48915]: <info>  [1764059229.8896] device (tap0416b402-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:27:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.908 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0bcf9c40-f692-4949-a867-04fd1beb6de2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:27:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.928 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a839ae78-e1e1-4f3f-b938-28e425b057b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec4e7ebb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:64:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452883, 'reachable_time': 43709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286539, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.944 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cdea10da-6f04-42f4-8261-16468012d46a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec4e7ebb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452894, 'tstamp': 452894}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286541, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec4e7ebb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452897, 'tstamp': 452897}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286541, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.946 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec4e7ebb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.948 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.948 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec4e7ebb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.949 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:27:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.949 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec4e7ebb-a0, col_values=(('external_ids', {'iface-id': '26e04d60-4f32-4592-b567-fc34513c5aba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:09.949 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.960 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.960 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.960 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:27:09 compute-0 nova_compute[253538]: 2025-11-25 08:27:09.961 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ca088afd-31e5-497b-bfc5-ba1f56096642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:10 compute-0 ceph-mon[75015]: pgmap v1251: 321 pgs: 321 active+clean; 214 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Nov 25 08:27:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3080990611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/683937940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.345 253542 DEBUG nova.compute.manager [req-5818361c-b3ea-4bf3-8884-8d1c9da59516 req-4462a348-d5f3-40cd-a670-9a98fd31e329 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received event network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.345 253542 DEBUG oslo_concurrency.lockutils [req-5818361c-b3ea-4bf3-8884-8d1c9da59516 req-4462a348-d5f3-40cd-a670-9a98fd31e329 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.346 253542 DEBUG oslo_concurrency.lockutils [req-5818361c-b3ea-4bf3-8884-8d1c9da59516 req-4462a348-d5f3-40cd-a670-9a98fd31e329 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.346 253542 DEBUG oslo_concurrency.lockutils [req-5818361c-b3ea-4bf3-8884-8d1c9da59516 req-4462a348-d5f3-40cd-a670-9a98fd31e329 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.347 253542 DEBUG nova.compute.manager [req-5818361c-b3ea-4bf3-8884-8d1c9da59516 req-4462a348-d5f3-40cd-a670-9a98fd31e329 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Processing event network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.348 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.352 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059230.3520625, 0fc86c7e-5de2-431c-9152-cfe293f8cc7d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.352 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] VM Resumed (Lifecycle Event)
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.366 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.370 253542 INFO nova.virt.libvirt.driver [-] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Instance spawned successfully.
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.370 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.386 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.392 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.395 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.396 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.396 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.397 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.397 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.398 253542 DEBUG nova.virt.libvirt.driver [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.420 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.444 253542 INFO nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Took 16.86 seconds to spawn the instance on the hypervisor.
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.445 253542 DEBUG nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.491 253542 INFO nova.compute.manager [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Took 18.35 seconds to build instance.
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.505 253542 DEBUG oslo_concurrency.lockutils [None req-8fecfbed-03de-473a-b0f0-eb92655a9d55 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.576 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for c787de46-dba9-458e-acc0-57470097fac5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.577 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059230.5742455, c787de46-dba9-458e-acc0-57470097fac5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.577 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] VM Resumed (Lifecycle Event)
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.579 253542 DEBUG nova.compute.manager [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.583 253542 INFO nova.virt.libvirt.driver [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] Instance rebooted successfully.
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.583 253542 DEBUG nova.compute.manager [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.615 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.619 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.647 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.647 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059230.5768902, c787de46-dba9-458e-acc0-57470097fac5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.648 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] VM Started (Lifecycle Event)
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.652 253542 DEBUG oslo_concurrency.lockutils [None req-0abff3d9-f9fd-4392-94bd-904bac40d2f1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 3.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.671 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:10 compute-0 nova_compute[253538]: 2025-11-25 08:27:10.675 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:27:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 214 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.154 253542 DEBUG nova.compute.manager [req-19ad8b09-ed19-4825-adb5-059aeee99bc9 req-9115d95a-fce9-440e-a751-e518a2468eb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received event network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.155 253542 DEBUG oslo_concurrency.lockutils [req-19ad8b09-ed19-4825-adb5-059aeee99bc9 req-9115d95a-fce9-440e-a751-e518a2468eb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.155 253542 DEBUG oslo_concurrency.lockutils [req-19ad8b09-ed19-4825-adb5-059aeee99bc9 req-9115d95a-fce9-440e-a751-e518a2468eb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.156 253542 DEBUG oslo_concurrency.lockutils [req-19ad8b09-ed19-4825-adb5-059aeee99bc9 req-9115d95a-fce9-440e-a751-e518a2468eb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.156 253542 DEBUG nova.compute.manager [req-19ad8b09-ed19-4825-adb5-059aeee99bc9 req-9115d95a-fce9-440e-a751-e518a2468eb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] No waiting events found dispatching network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.156 253542 WARNING nova.compute.manager [req-19ad8b09-ed19-4825-adb5-059aeee99bc9 req-9115d95a-fce9-440e-a751-e518a2468eb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received unexpected event network-vif-plugged-0416b402-0842-4b73-910b-d30a5750474c for instance with vm_state active and task_state None.
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.618 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updating instance_info_cache with network_info: [{"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.632 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-ca088afd-31e5-497b-bfc5-ba1f56096642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.632 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.869 253542 INFO nova.compute.manager [None req-cee343ae-c164-4e92-b5d4-19a622a8797f 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Pausing
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.870 253542 DEBUG nova.objects.instance [None req-cee343ae-c164-4e92-b5d4-19a622a8797f 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'flavor' on Instance uuid 0fc86c7e-5de2-431c-9152-cfe293f8cc7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.919 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059231.9190938, 0fc86c7e-5de2-431c-9152-cfe293f8cc7d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.920 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] VM Paused (Lifecycle Event)
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.922 253542 DEBUG nova.compute.manager [None req-cee343ae-c164-4e92-b5d4-19a622a8797f 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.941 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.945 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:27:11 compute-0 nova_compute[253538]: 2025-11-25 08:27:11.978 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 25 08:27:12 compute-0 ceph-mon[75015]: pgmap v1252: 321 pgs: 321 active+clean; 214 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Nov 25 08:27:12 compute-0 nova_compute[253538]: 2025-11-25 08:27:12.421 253542 DEBUG nova.compute.manager [req-b52d6b9a-665b-4411-9300-1487fe3ae398 req-90262f6e-971e-4c33-9d61-5410b73325e0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received event network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:12 compute-0 nova_compute[253538]: 2025-11-25 08:27:12.422 253542 DEBUG oslo_concurrency.lockutils [req-b52d6b9a-665b-4411-9300-1487fe3ae398 req-90262f6e-971e-4c33-9d61-5410b73325e0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:12 compute-0 nova_compute[253538]: 2025-11-25 08:27:12.422 253542 DEBUG oslo_concurrency.lockutils [req-b52d6b9a-665b-4411-9300-1487fe3ae398 req-90262f6e-971e-4c33-9d61-5410b73325e0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:12 compute-0 nova_compute[253538]: 2025-11-25 08:27:12.422 253542 DEBUG oslo_concurrency.lockutils [req-b52d6b9a-665b-4411-9300-1487fe3ae398 req-90262f6e-971e-4c33-9d61-5410b73325e0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:12 compute-0 nova_compute[253538]: 2025-11-25 08:27:12.422 253542 DEBUG nova.compute.manager [req-b52d6b9a-665b-4411-9300-1487fe3ae398 req-90262f6e-971e-4c33-9d61-5410b73325e0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] No waiting events found dispatching network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:27:12 compute-0 nova_compute[253538]: 2025-11-25 08:27:12.423 253542 WARNING nova.compute.manager [req-b52d6b9a-665b-4411-9300-1487fe3ae398 req-90262f6e-971e-4c33-9d61-5410b73325e0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received unexpected event network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a for instance with vm_state paused and task_state None.
Nov 25 08:27:12 compute-0 nova_compute[253538]: 2025-11-25 08:27:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:27:12 compute-0 nova_compute[253538]: 2025-11-25 08:27:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:27:12 compute-0 nova_compute[253538]: 2025-11-25 08:27:12.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:12 compute-0 nova_compute[253538]: 2025-11-25 08:27:12.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:12 compute-0 nova_compute[253538]: 2025-11-25 08:27:12.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:12 compute-0 nova_compute[253538]: 2025-11-25 08:27:12.574 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:27:12 compute-0 nova_compute[253538]: 2025-11-25 08:27:12.574 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:27:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2012209073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.005 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 214 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.3 MiB/s wr, 192 op/s
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.090 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.091 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.097 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.098 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.102 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.103 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:27:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2012209073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.305 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.307 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3977MB free_disk=59.9009895324707GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.308 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.308 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.380 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance ca088afd-31e5-497b-bfc5-ba1f56096642 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.380 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance c787de46-dba9-458e-acc0-57470097fac5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.381 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0fc86c7e-5de2-431c-9152-cfe293f8cc7d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.381 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.381 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:27:13 compute-0 ovn_controller[152859]: 2025-11-25T08:27:13Z|00140|binding|INFO|Releasing lport 26e04d60-4f32-4592-b567-fc34513c5aba from this chassis (sb_readonly=0)
Nov 25 08:27:13 compute-0 ovn_controller[152859]: 2025-11-25T08:27:13Z|00141|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.448 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.469 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.826 253542 DEBUG nova.compute.manager [req-93594e17-9c44-489e-9111-1a93d2cc27b2 req-45805faf-a52d-4cd9-ab8a-25b9d8939fca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received event network-changed-0416b402-0842-4b73-910b-d30a5750474c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.827 253542 DEBUG nova.compute.manager [req-93594e17-9c44-489e-9111-1a93d2cc27b2 req-45805faf-a52d-4cd9-ab8a-25b9d8939fca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Refreshing instance network info cache due to event network-changed-0416b402-0842-4b73-910b-d30a5750474c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.828 253542 DEBUG oslo_concurrency.lockutils [req-93594e17-9c44-489e-9111-1a93d2cc27b2 req-45805faf-a52d-4cd9-ab8a-25b9d8939fca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.828 253542 DEBUG oslo_concurrency.lockutils [req-93594e17-9c44-489e-9111-1a93d2cc27b2 req-45805faf-a52d-4cd9-ab8a-25b9d8939fca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.828 253542 DEBUG nova.network.neutron [req-93594e17-9c44-489e-9111-1a93d2cc27b2 req-45805faf-a52d-4cd9-ab8a-25b9d8939fca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Refreshing network info cache for port 0416b402-0842-4b73-910b-d30a5750474c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:27:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:27:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/447937006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.901 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.908 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.936 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.965 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:27:13 compute-0 nova_compute[253538]: 2025-11-25 08:27:13.966 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:14 compute-0 ceph-mon[75015]: pgmap v1253: 321 pgs: 321 active+clean; 214 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.3 MiB/s wr, 192 op/s
Nov 25 08:27:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/447937006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.394 253542 DEBUG nova.compute.manager [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.429 253542 INFO nova.compute.manager [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] instance snapshotting
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.429 253542 WARNING nova.compute.manager [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] trying to snapshot a non-running instance: (state: 3 expected: 1)
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.492 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.493 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.494 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "c787de46-dba9-458e-acc0-57470097fac5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.494 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.494 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.495 253542 INFO nova.compute.manager [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Terminating instance
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.496 253542 DEBUG nova.compute.manager [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:27:14 compute-0 kernel: tap0416b402-08 (unregistering): left promiscuous mode
Nov 25 08:27:14 compute-0 NetworkManager[48915]: <info>  [1764059234.5341] device (tap0416b402-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:27:14 compute-0 ovn_controller[152859]: 2025-11-25T08:27:14Z|00142|binding|INFO|Releasing lport 0416b402-0842-4b73-910b-d30a5750474c from this chassis (sb_readonly=0)
Nov 25 08:27:14 compute-0 ovn_controller[152859]: 2025-11-25T08:27:14Z|00143|binding|INFO|Setting lport 0416b402-0842-4b73-910b-d30a5750474c down in Southbound
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.547 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:14 compute-0 ovn_controller[152859]: 2025-11-25T08:27:14Z|00144|binding|INFO|Removing iface tap0416b402-08 ovn-installed in OVS
Nov 25 08:27:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.555 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:a5:e1 10.100.0.4'], port_security=['fa:16:3e:97:a5:e1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c787de46-dba9-458e-acc0-57470097fac5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52217f37b23343d697fa6d2be38e236d', 'neutron:revision_number': '7', 'neutron:security_group_ids': '69424ef4-6807-4dfd-9ed6-238b08ebb77e 94ed9e1b-8451-4dd9-95ef-2d9affe4fca9 b88ff3c6-bea0-4b7c-9374-f058821e8f5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16b4c924-25ba-4e74-8549-ba25753d78e7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0416b402-0842-4b73-910b-d30a5750474c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:27:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.556 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0416b402-0842-4b73-910b-d30a5750474c in datapath ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 unbound from our chassis
Nov 25 08:27:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.558 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.565 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.572 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56af7d81-09b9-4386-b98b-ad0aca9d23c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:14 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000018.scope: Deactivated successfully.
Nov 25 08:27:14 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000018.scope: Consumed 4.754s CPU time.
Nov 25 08:27:14 compute-0 systemd-machined[215790]: Machine qemu-30-instance-00000018 terminated.
Nov 25 08:27:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.596 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e766f2be-bf4d-4607-a329-af208abe0935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.599 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[718ead1f-d223-4206-8f96-fd92e8e0c428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.623 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c8b3d092-eab4-4f93-851b-6f32b2ea02fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.640 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c6765490-9c70-4ad3-b265-58a2e5dfb1ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec4e7ebb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:64:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452883, 'reachable_time': 43709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286641, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.655 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ff16d0-9265-404e-98eb-41e9374bf3dd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec4e7ebb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452894, 'tstamp': 452894}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286642, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec4e7ebb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452897, 'tstamp': 452897}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286642, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.656 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec4e7ebb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.658 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.662 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.662 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec4e7ebb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.663 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:27:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.663 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec4e7ebb-a0, col_values=(('external_ids', {'iface-id': '26e04d60-4f32-4592-b567-fc34513c5aba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:14.663 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.709 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.735 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.739 253542 INFO nova.virt.libvirt.driver [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Beginning live snapshot process
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.743 253542 INFO nova.virt.libvirt.driver [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] Instance destroyed successfully.
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.744 253542 DEBUG nova.objects.instance [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'resources' on Instance uuid c787de46-dba9-458e-acc0-57470097fac5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.760 253542 DEBUG nova.virt.libvirt.vif [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-910851624',display_name='tempest-SecurityGroupsTestJSON-server-910851624',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-910851624',id=24,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:27:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-s7z6d2k0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTestJSON-1828125381-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:27:10Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=c787de46-dba9-458e-acc0-57470097fac5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.761 253542 DEBUG nova.network.os_vif_util [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.762 253542 DEBUG nova.network.os_vif_util [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.762 253542 DEBUG os_vif [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.765 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0416b402-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.770 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.773 253542 INFO os_vif [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:a5:e1,bridge_name='br-int',has_traffic_filtering=True,id=0416b402-0842-4b73-910b-d30a5750474c,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0416b402-08')
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.891 253542 DEBUG nova.virt.libvirt.imagebackend [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:27:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.939 253542 DEBUG nova.network.neutron [req-93594e17-9c44-489e-9111-1a93d2cc27b2 req-45805faf-a52d-4cd9-ab8a-25b9d8939fca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updated VIF entry in instance network info cache for port 0416b402-0842-4b73-910b-d30a5750474c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.940 253542 DEBUG nova.network.neutron [req-93594e17-9c44-489e-9111-1a93d2cc27b2 req-45805faf-a52d-4cd9-ab8a-25b9d8939fca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updating instance_info_cache with network_info: [{"id": "0416b402-0842-4b73-910b-d30a5750474c", "address": "fa:16:3e:97:a5:e1", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0416b402-08", "ovs_interfaceid": "0416b402-0842-4b73-910b-d30a5750474c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.959 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:27:14 compute-0 nova_compute[253538]: 2025-11-25 08:27:14.960 253542 DEBUG oslo_concurrency.lockutils [req-93594e17-9c44-489e-9111-1a93d2cc27b2 req-45805faf-a52d-4cd9-ab8a-25b9d8939fca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-c787de46-dba9-458e-acc0-57470097fac5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:27:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 214 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 712 KiB/s wr, 246 op/s
Nov 25 08:27:15 compute-0 nova_compute[253538]: 2025-11-25 08:27:15.078 253542 DEBUG nova.storage.rbd_utils [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(829c97c5e20f4a8eba46793041b33893) on rbd image(0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:27:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 25 08:27:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 25 08:27:15 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 25 08:27:15 compute-0 nova_compute[253538]: 2025-11-25 08:27:15.229 253542 DEBUG nova.storage.rbd_utils [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] cloning vms/0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk@829c97c5e20f4a8eba46793041b33893 to images/0743e309-9e26-4d9c-aa8d-6c681073dac1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:27:15 compute-0 nova_compute[253538]: 2025-11-25 08:27:15.263 253542 INFO nova.virt.libvirt.driver [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Deleting instance files /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5_del
Nov 25 08:27:15 compute-0 nova_compute[253538]: 2025-11-25 08:27:15.264 253542 INFO nova.virt.libvirt.driver [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Deletion of /var/lib/nova/instances/c787de46-dba9-458e-acc0-57470097fac5_del complete
Nov 25 08:27:15 compute-0 nova_compute[253538]: 2025-11-25 08:27:15.335 253542 INFO nova.compute.manager [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Took 0.84 seconds to destroy the instance on the hypervisor.
Nov 25 08:27:15 compute-0 nova_compute[253538]: 2025-11-25 08:27:15.336 253542 DEBUG oslo.service.loopingcall [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:27:15 compute-0 nova_compute[253538]: 2025-11-25 08:27:15.336 253542 DEBUG nova.compute.manager [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:27:15 compute-0 nova_compute[253538]: 2025-11-25 08:27:15.337 253542 DEBUG nova.network.neutron [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:27:15 compute-0 nova_compute[253538]: 2025-11-25 08:27:15.353 253542 DEBUG nova.storage.rbd_utils [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] flattening images/0743e309-9e26-4d9c-aa8d-6c681073dac1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:27:15 compute-0 nova_compute[253538]: 2025-11-25 08:27:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:27:15 compute-0 nova_compute[253538]: 2025-11-25 08:27:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:27:15 compute-0 nova_compute[253538]: 2025-11-25 08:27:15.735 253542 DEBUG nova.storage.rbd_utils [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] removing snapshot(829c97c5e20f4a8eba46793041b33893) on rbd image(0fc86c7e-5de2-431c-9152-cfe293f8cc7d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:27:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 25 08:27:16 compute-0 ceph-mon[75015]: pgmap v1254: 321 pgs: 321 active+clean; 214 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 712 KiB/s wr, 246 op/s
Nov 25 08:27:16 compute-0 ceph-mon[75015]: osdmap e114: 3 total, 3 up, 3 in
Nov 25 08:27:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 25 08:27:16 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 25 08:27:16 compute-0 nova_compute[253538]: 2025-11-25 08:27:16.219 253542 DEBUG nova.storage.rbd_utils [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(snap) on rbd image(0743e309-9e26-4d9c-aa8d-6c681073dac1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:27:16 compute-0 nova_compute[253538]: 2025-11-25 08:27:16.682 253542 DEBUG nova.network.neutron [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:27:16 compute-0 nova_compute[253538]: 2025-11-25 08:27:16.711 253542 INFO nova.compute.manager [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] Took 1.37 seconds to deallocate network for instance.
Nov 25 08:27:16 compute-0 nova_compute[253538]: 2025-11-25 08:27:16.758 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:16 compute-0 nova_compute[253538]: 2025-11-25 08:27:16.758 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:16 compute-0 nova_compute[253538]: 2025-11-25 08:27:16.765 253542 DEBUG nova.compute.manager [req-13ed3801-a1f3-4f37-98a9-1dbe215d1dca req-1aa031e2-3c70-46d3-981d-dea478ecb950 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c787de46-dba9-458e-acc0-57470097fac5] Received event network-vif-deleted-0416b402-0842-4b73-910b-d30a5750474c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:16 compute-0 nova_compute[253538]: 2025-11-25 08:27:16.833 253542 DEBUG oslo_concurrency.processutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 195 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 6.7 MiB/s rd, 119 KiB/s wr, 287 op/s
Nov 25 08:27:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 25 08:27:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 25 08:27:17 compute-0 ceph-mon[75015]: osdmap e115: 3 total, 3 up, 3 in
Nov 25 08:27:17 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 25 08:27:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:27:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638005216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:17 compute-0 nova_compute[253538]: 2025-11-25 08:27:17.271 253542 DEBUG oslo_concurrency.processutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:17 compute-0 nova_compute[253538]: 2025-11-25 08:27:17.277 253542 DEBUG nova.compute.provider_tree [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:27:17 compute-0 nova_compute[253538]: 2025-11-25 08:27:17.298 253542 DEBUG nova.scheduler.client.report [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:27:17 compute-0 nova_compute[253538]: 2025-11-25 08:27:17.323 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:17 compute-0 nova_compute[253538]: 2025-11-25 08:27:17.348 253542 INFO nova.scheduler.client.report [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Deleted allocations for instance c787de46-dba9-458e-acc0-57470097fac5
Nov 25 08:27:17 compute-0 nova_compute[253538]: 2025-11-25 08:27:17.409 253542 DEBUG oslo_concurrency.lockutils [None req-14254fe2-cf42-4e2e-8f63-f12433240d4d 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "c787de46-dba9-458e-acc0-57470097fac5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:18 compute-0 ceph-mon[75015]: pgmap v1257: 321 pgs: 321 active+clean; 195 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 6.7 MiB/s rd, 119 KiB/s wr, 287 op/s
Nov 25 08:27:18 compute-0 ceph-mon[75015]: osdmap e116: 3 total, 3 up, 3 in
Nov 25 08:27:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1638005216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 223 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.8 MiB/s wr, 303 op/s
Nov 25 08:27:19 compute-0 nova_compute[253538]: 2025-11-25 08:27:19.138 253542 INFO nova.virt.libvirt.driver [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Snapshot image upload complete
Nov 25 08:27:19 compute-0 nova_compute[253538]: 2025-11-25 08:27:19.139 253542 INFO nova.compute.manager [None req-572a994c-6629-4d26-bd6a-89259d771317 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Took 4.71 seconds to snapshot the instance on the hypervisor.
Nov 25 08:27:19 compute-0 nova_compute[253538]: 2025-11-25 08:27:19.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:27:19 compute-0 ceph-mon[75015]: pgmap v1259: 321 pgs: 321 active+clean; 223 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.8 MiB/s wr, 303 op/s
Nov 25 08:27:19 compute-0 nova_compute[253538]: 2025-11-25 08:27:19.771 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:27:19 compute-0 nova_compute[253538]: 2025-11-25 08:27:19.773 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:27:19 compute-0 nova_compute[253538]: 2025-11-25 08:27:19.773 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 08:27:19 compute-0 nova_compute[253538]: 2025-11-25 08:27:19.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 08:27:19 compute-0 nova_compute[253538]: 2025-11-25 08:27:19.782 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:19 compute-0 nova_compute[253538]: 2025-11-25 08:27:19.782 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 08:27:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:27:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 25 08:27:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 25 08:27:20 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 25 08:27:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 213 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Nov 25 08:27:21 compute-0 ceph-mon[75015]: osdmap e117: 3 total, 3 up, 3 in
Nov 25 08:27:21 compute-0 ceph-mon[75015]: pgmap v1261: 321 pgs: 321 active+clean; 213 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Nov 25 08:27:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:22.760 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:27:22 compute-0 nova_compute[253538]: 2025-11-25 08:27:22.760 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:22.761 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:27:22 compute-0 nova_compute[253538]: 2025-11-25 08:27:22.838 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:22 compute-0 nova_compute[253538]: 2025-11-25 08:27:22.839 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:22 compute-0 nova_compute[253538]: 2025-11-25 08:27:22.839 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:22 compute-0 nova_compute[253538]: 2025-11-25 08:27:22.839 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:22 compute-0 nova_compute[253538]: 2025-11-25 08:27:22.840 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:22 compute-0 nova_compute[253538]: 2025-11-25 08:27:22.841 253542 INFO nova.compute.manager [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Terminating instance
Nov 25 08:27:22 compute-0 nova_compute[253538]: 2025-11-25 08:27:22.842 253542 DEBUG nova.compute.manager [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:27:22 compute-0 kernel: tap521823df-58 (unregistering): left promiscuous mode
Nov 25 08:27:22 compute-0 NetworkManager[48915]: <info>  [1764059242.8888] device (tap521823df-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:27:22 compute-0 ovn_controller[152859]: 2025-11-25T08:27:22Z|00145|binding|INFO|Releasing lport 521823df-589a-4370-a3ea-a5a6f4c73a6a from this chassis (sb_readonly=0)
Nov 25 08:27:22 compute-0 ovn_controller[152859]: 2025-11-25T08:27:22Z|00146|binding|INFO|Setting lport 521823df-589a-4370-a3ea-a5a6f4c73a6a down in Southbound
Nov 25 08:27:22 compute-0 ovn_controller[152859]: 2025-11-25T08:27:22Z|00147|binding|INFO|Removing iface tap521823df-58 ovn-installed in OVS
Nov 25 08:27:22 compute-0 nova_compute[253538]: 2025-11-25 08:27:22.894 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:22.903 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:ef:3b 10.100.0.6'], port_security=['fa:16:3e:3f:ef:3b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0fc86c7e-5de2-431c-9152-cfe293f8cc7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=521823df-589a-4370-a3ea-a5a6f4c73a6a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:27:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:22.904 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 521823df-589a-4370-a3ea-a5a6f4c73a6a in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e unbound from our chassis
Nov 25 08:27:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:22.905 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba659d6c-c094-47d7-ba45-d0e659ce778e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:27:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:22.906 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[761f5ad5-3da1-4077-8853-a3ca39aee63e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:22.907 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace which is not needed anymore
Nov 25 08:27:22 compute-0 nova_compute[253538]: 2025-11-25 08:27:22.929 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:22 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000019.scope: Deactivated successfully.
Nov 25 08:27:22 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000019.scope: Consumed 2.440s CPU time.
Nov 25 08:27:22 compute-0 systemd-machined[215790]: Machine qemu-29-instance-00000019 terminated.
Nov 25 08:27:23 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[286385]: [NOTICE]   (286389) : haproxy version is 2.8.14-c23fe91
Nov 25 08:27:23 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[286385]: [NOTICE]   (286389) : path to executable is /usr/sbin/haproxy
Nov 25 08:27:23 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[286385]: [WARNING]  (286389) : Exiting Master process...
Nov 25 08:27:23 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[286385]: [ALERT]    (286389) : Current worker (286391) exited with code 143 (Terminated)
Nov 25 08:27:23 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[286385]: [WARNING]  (286389) : All workers exited. Exiting... (0)
Nov 25 08:27:23 compute-0 systemd[1]: libpod-ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06.scope: Deactivated successfully.
Nov 25 08:27:23 compute-0 podman[286864]: 2025-11-25 08:27:23.048889514 +0000 UTC m=+0.049416809 container died ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:27:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 213 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.1 MiB/s wr, 159 op/s
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.079 253542 INFO nova.virt.libvirt.driver [-] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Instance destroyed successfully.
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.079 253542 DEBUG nova.objects.instance [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'resources' on Instance uuid 0fc86c7e-5de2-431c-9152-cfe293f8cc7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-301b66a4b9ad7a58ec132f0b777f3b1abc4e7a7e468241d4a3386fefbc812d53-merged.mount: Deactivated successfully.
Nov 25 08:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06-userdata-shm.mount: Deactivated successfully.
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.094 253542 DEBUG nova.compute.manager [req-1bbc3295-4e90-4e20-8010-26963f339e9b req-0fd8b9d0-855d-4ee1-bb4d-844141a53162 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received event network-vif-unplugged-521823df-589a-4370-a3ea-a5a6f4c73a6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.094 253542 DEBUG oslo_concurrency.lockutils [req-1bbc3295-4e90-4e20-8010-26963f339e9b req-0fd8b9d0-855d-4ee1-bb4d-844141a53162 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.094 253542 DEBUG oslo_concurrency.lockutils [req-1bbc3295-4e90-4e20-8010-26963f339e9b req-0fd8b9d0-855d-4ee1-bb4d-844141a53162 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.095 253542 DEBUG oslo_concurrency.lockutils [req-1bbc3295-4e90-4e20-8010-26963f339e9b req-0fd8b9d0-855d-4ee1-bb4d-844141a53162 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.095 253542 DEBUG nova.compute.manager [req-1bbc3295-4e90-4e20-8010-26963f339e9b req-0fd8b9d0-855d-4ee1-bb4d-844141a53162 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] No waiting events found dispatching network-vif-unplugged-521823df-589a-4370-a3ea-a5a6f4c73a6a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:27:23 compute-0 podman[286864]: 2025-11-25 08:27:23.095573927 +0000 UTC m=+0.096101222 container cleanup ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.095 253542 DEBUG nova.compute.manager [req-1bbc3295-4e90-4e20-8010-26963f339e9b req-0fd8b9d0-855d-4ee1-bb4d-844141a53162 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received event network-vif-unplugged-521823df-589a-4370-a3ea-a5a6f4c73a6a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.097 253542 DEBUG nova.virt.libvirt.vif [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:26:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-232041693',display_name='tempest-ImagesTestJSON-server-232041693',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-232041693',id=25,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:27:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-pxohzxru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:27:19Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=0fc86c7e-5de2-431c-9152-cfe293f8cc7d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.097 253542 DEBUG nova.network.os_vif_util [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "address": "fa:16:3e:3f:ef:3b", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521823df-58", "ovs_interfaceid": "521823df-589a-4370-a3ea-a5a6f4c73a6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.098 253542 DEBUG nova.network.os_vif_util [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:ef:3b,bridge_name='br-int',has_traffic_filtering=True,id=521823df-589a-4370-a3ea-a5a6f4c73a6a,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521823df-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.098 253542 DEBUG os_vif [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:ef:3b,bridge_name='br-int',has_traffic_filtering=True,id=521823df-589a-4370-a3ea-a5a6f4c73a6a,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521823df-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.100 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.100 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap521823df-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:23 compute-0 systemd[1]: libpod-conmon-ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06.scope: Deactivated successfully.
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.104 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.105 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.107 253542 INFO os_vif [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:ef:3b,bridge_name='br-int',has_traffic_filtering=True,id=521823df-589a-4370-a3ea-a5a6f4c73a6a,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521823df-58')
Nov 25 08:27:23 compute-0 podman[286902]: 2025-11-25 08:27:23.21672841 +0000 UTC m=+0.099458264 container remove ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 08:27:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.224 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[64d77ff4-9134-4af2-90e2-6ebc736f100b]: (4, ('Tue Nov 25 08:27:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06)\nad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06\nTue Nov 25 08:27:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (ad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06)\nad75695d4bf05f8349f76f28b37e984abb0ee77e194871b1bc91803c0f376e06\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.226 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[326dd756-184d-4100-a5b5-690d7aea9388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.227 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:23 compute-0 kernel: tapba659d6c-c0: left promiscuous mode
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.252 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.254 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.255 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3c4750-c75c-4c36-a5d6-322f7348281e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.268 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[202355d6-2481-4b10-af9e-21d0fd7a8956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.270 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00fdab35-dce1-4a0e-869c-bc657d37c453]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.289 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a1a3e4-ea89-4d5e-918c-7662af9a8c2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458139, 'reachable_time': 39620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286935, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.291 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:27:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.291 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a1d904-272f-4e52-b0a9-e0300ab3f405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:23 compute-0 systemd[1]: run-netns-ovnmeta\x2dba659d6c\x2dc094\x2d47d7\x2dba45\x2dd0e659ce778e.mount: Deactivated successfully.
Nov 25 08:27:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:27:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:27:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:27:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:27:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:27:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.642 253542 INFO nova.virt.libvirt.driver [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Deleting instance files /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d_del
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.643 253542 INFO nova.virt.libvirt.driver [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Deletion of /var/lib/nova/instances/0fc86c7e-5de2-431c-9152-cfe293f8cc7d_del complete
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.707 253542 INFO nova.compute.manager [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Took 0.86 seconds to destroy the instance on the hypervisor.
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.708 253542 DEBUG oslo.service.loopingcall [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.709 253542 DEBUG nova.compute.manager [-] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.709 253542 DEBUG nova.network.neutron [-] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.892 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "ca088afd-31e5-497b-bfc5-ba1f56096642" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.893 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.893 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.893 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.893 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.894 253542 INFO nova.compute.manager [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Terminating instance
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.895 253542 DEBUG nova.compute.manager [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:27:23 compute-0 kernel: tap2089bf75-61 (unregistering): left promiscuous mode
Nov 25 08:27:23 compute-0 NetworkManager[48915]: <info>  [1764059243.9668] device (tap2089bf75-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:27:23 compute-0 ovn_controller[152859]: 2025-11-25T08:27:23Z|00148|binding|INFO|Releasing lport 2089bf75-6119-4c42-a326-989b3931ec08 from this chassis (sb_readonly=0)
Nov 25 08:27:23 compute-0 ovn_controller[152859]: 2025-11-25T08:27:23Z|00149|binding|INFO|Setting lport 2089bf75-6119-4c42-a326-989b3931ec08 down in Southbound
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:23 compute-0 ovn_controller[152859]: 2025-11-25T08:27:23Z|00150|binding|INFO|Removing iface tap2089bf75-61 ovn-installed in OVS
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.976 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.986 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:c0:7d 10.100.0.7'], port_security=['fa:16:3e:b9:c0:7d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ca088afd-31e5-497b-bfc5-ba1f56096642', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52217f37b23343d697fa6d2be38e236d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '401f5f56-5a06-49e0-952d-2d39380ca37b 883216ab-7df8-4c90-b073-93f5d75fcaa1 94ed9e1b-8451-4dd9-95ef-2d9affe4fca9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16b4c924-25ba-4e74-8549-ba25753d78e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2089bf75-6119-4c42-a326-989b3931ec08) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:27:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.987 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2089bf75-6119-4c42-a326-989b3931ec08 in datapath ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 unbound from our chassis
Nov 25 08:27:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.988 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:27:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.989 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[118c2ae7-080a-49a2-ac32-a3829e689828]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:23.989 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 namespace which is not needed anymore
Nov 25 08:27:23 compute-0 nova_compute[253538]: 2025-11-25 08:27:23.995 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:24 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000017.scope: Deactivated successfully.
Nov 25 08:27:24 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000017.scope: Consumed 15.853s CPU time.
Nov 25 08:27:24 compute-0 systemd-machined[215790]: Machine qemu-26-instance-00000017 terminated.
Nov 25 08:27:24 compute-0 NetworkManager[48915]: <info>  [1764059244.1125] manager: (tap2089bf75-61): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Nov 25 08:27:24 compute-0 neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954[283568]: [NOTICE]   (283572) : haproxy version is 2.8.14-c23fe91
Nov 25 08:27:24 compute-0 neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954[283568]: [NOTICE]   (283572) : path to executable is /usr/sbin/haproxy
Nov 25 08:27:24 compute-0 neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954[283568]: [WARNING]  (283572) : Exiting Master process...
Nov 25 08:27:24 compute-0 neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954[283568]: [WARNING]  (283572) : Exiting Master process...
Nov 25 08:27:24 compute-0 neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954[283568]: [ALERT]    (283572) : Current worker (283574) exited with code 143 (Terminated)
Nov 25 08:27:24 compute-0 neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954[283568]: [WARNING]  (283572) : All workers exited. Exiting... (0)
Nov 25 08:27:24 compute-0 systemd[1]: libpod-e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117.scope: Deactivated successfully.
Nov 25 08:27:24 compute-0 podman[286958]: 2025-11-25 08:27:24.125225228 +0000 UTC m=+0.046313793 container died e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:27:24 compute-0 ceph-mon[75015]: pgmap v1262: 321 pgs: 321 active+clean; 213 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.1 MiB/s wr, 159 op/s
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.126 253542 INFO nova.virt.libvirt.driver [-] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Instance destroyed successfully.
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.126 253542 DEBUG nova.objects.instance [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lazy-loading 'resources' on Instance uuid ca088afd-31e5-497b-bfc5-ba1f56096642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.139 253542 DEBUG nova.virt.libvirt.vif [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:26:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1453002528',display_name='tempest-SecurityGroupsTestJSON-server-1453002528',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1453002528',id=23,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:26:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52217f37b23343d697fa6d2be38e236d',ramdisk_id='',reservation_id='r-502qkvse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1828125381',owner_user_name='tempest-SecurityGroupsTestJSON-1828125381-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:26:19Z,user_data=None,user_id='02e795c75a3b40bbbc3ca83d0501777f',uuid=ca088afd-31e5-497b-bfc5-ba1f56096642,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.139 253542 DEBUG nova.network.os_vif_util [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converting VIF {"id": "2089bf75-6119-4c42-a326-989b3931ec08", "address": "fa:16:3e:b9:c0:7d", "network": {"id": "ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-908342220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52217f37b23343d697fa6d2be38e236d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2089bf75-61", "ovs_interfaceid": "2089bf75-6119-4c42-a326-989b3931ec08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.140 253542 DEBUG nova.network.os_vif_util [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:c0:7d,bridge_name='br-int',has_traffic_filtering=True,id=2089bf75-6119-4c42-a326-989b3931ec08,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2089bf75-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.140 253542 DEBUG os_vif [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:c0:7d,bridge_name='br-int',has_traffic_filtering=True,id=2089bf75-6119-4c42-a326-989b3931ec08,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2089bf75-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.141 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.141 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2089bf75-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.144 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.148 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.149 253542 INFO os_vif [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:c0:7d,bridge_name='br-int',has_traffic_filtering=True,id=2089bf75-6119-4c42-a326-989b3931ec08,network=Network(ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2089bf75-61')
Nov 25 08:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117-userdata-shm.mount: Deactivated successfully.
Nov 25 08:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-d651f3fccf787af2f2b211bc049772630cbec479a6881f484dd2c0eb6af6c354-merged.mount: Deactivated successfully.
Nov 25 08:27:24 compute-0 podman[286958]: 2025-11-25 08:27:24.181957688 +0000 UTC m=+0.103046233 container cleanup e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:27:24 compute-0 systemd[1]: libpod-conmon-e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117.scope: Deactivated successfully.
Nov 25 08:27:24 compute-0 podman[287016]: 2025-11-25 08:27:24.255595927 +0000 UTC m=+0.050977372 container remove e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 08:27:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.261 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[43e608b0-fecb-4b2d-a6db-be17556b80e9]: (4, ('Tue Nov 25 08:27:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 (e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117)\ne27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117\nTue Nov 25 08:27:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 (e27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117)\ne27492d5c587b8c65634ba52539c61715f617596520348327f85b4f22e359117\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.263 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9b341679-ca46-46e4-9aad-c70d8cea9065]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.263 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec4e7ebb-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.317 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:24 compute-0 kernel: tapec4e7ebb-a0: left promiscuous mode
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.332 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.335 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d1bb779d-5eef-4ce2-a73f-4d03f7ac0af7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.348 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3831cf37-93fa-4330-8dd6-a7494ec8d75e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.349 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4624f8d0-7a42-44b0-a451-99261d5657ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.362 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[61415e8d-aeac-4060-b6df-b709ba9aaeed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452873, 'reachable_time': 41890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287031, 'error': None, 'target': 'ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.364 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ec4e7ebb-aba7-46f2-8d8d-f7d49f5af954 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:27:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:24.364 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[715bfdf5-bc7c-4bcf-95ce-805b222e5a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:24 compute-0 systemd[1]: run-netns-ovnmeta\x2dec4e7ebb\x2daba7\x2d46f2\x2d8d8d\x2df7d49f5af954.mount: Deactivated successfully.
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.577 253542 DEBUG nova.network.neutron [-] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.592 253542 INFO nova.compute.manager [-] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Took 0.88 seconds to deallocate network for instance.
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.638 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.638 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.706 253542 INFO nova.virt.libvirt.driver [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Deleting instance files /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642_del
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.707 253542 INFO nova.virt.libvirt.driver [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Deletion of /var/lib/nova/instances/ca088afd-31e5-497b-bfc5-ba1f56096642_del complete
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.736 253542 DEBUG oslo_concurrency.processutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.783 253542 INFO nova.compute.manager [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Took 0.89 seconds to destroy the instance on the hypervisor.
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.784 253542 DEBUG oslo.service.loopingcall [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.785 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.787 253542 DEBUG nova.compute.manager [-] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.787 253542 DEBUG nova.network.neutron [-] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.874 253542 DEBUG nova.compute.manager [req-6bd5a465-1fae-48bc-9847-2e7fd019c8dd req-9c26e124-5f06-46b9-86ac-01d571f49af8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-vif-unplugged-2089bf75-6119-4c42-a326-989b3931ec08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.875 253542 DEBUG oslo_concurrency.lockutils [req-6bd5a465-1fae-48bc-9847-2e7fd019c8dd req-9c26e124-5f06-46b9-86ac-01d571f49af8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.875 253542 DEBUG oslo_concurrency.lockutils [req-6bd5a465-1fae-48bc-9847-2e7fd019c8dd req-9c26e124-5f06-46b9-86ac-01d571f49af8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.876 253542 DEBUG oslo_concurrency.lockutils [req-6bd5a465-1fae-48bc-9847-2e7fd019c8dd req-9c26e124-5f06-46b9-86ac-01d571f49af8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.876 253542 DEBUG nova.compute.manager [req-6bd5a465-1fae-48bc-9847-2e7fd019c8dd req-9c26e124-5f06-46b9-86ac-01d571f49af8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] No waiting events found dispatching network-vif-unplugged-2089bf75-6119-4c42-a326-989b3931ec08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:27:24 compute-0 nova_compute[253538]: 2025-11-25 08:27:24.876 253542 DEBUG nova.compute.manager [req-6bd5a465-1fae-48bc-9847-2e7fd019c8dd req-9c26e124-5f06-46b9-86ac-01d571f49af8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-vif-unplugged-2089bf75-6119-4c42-a326-989b3931ec08 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:27:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:27:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 25 08:27:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 25 08:27:24 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 25 08:27:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 128 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 136 op/s
Nov 25 08:27:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:27:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1876696747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.202 253542 DEBUG oslo_concurrency.processutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.208 253542 DEBUG nova.compute.provider_tree [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.215 253542 DEBUG nova.compute.manager [req-9302e5ad-fe7b-4152-920c-c476b3a85c43 req-750f4741-9d50-456b-8b4e-533c1e8f2045 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received event network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.216 253542 DEBUG oslo_concurrency.lockutils [req-9302e5ad-fe7b-4152-920c-c476b3a85c43 req-750f4741-9d50-456b-8b4e-533c1e8f2045 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.216 253542 DEBUG oslo_concurrency.lockutils [req-9302e5ad-fe7b-4152-920c-c476b3a85c43 req-750f4741-9d50-456b-8b4e-533c1e8f2045 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.216 253542 DEBUG oslo_concurrency.lockutils [req-9302e5ad-fe7b-4152-920c-c476b3a85c43 req-750f4741-9d50-456b-8b4e-533c1e8f2045 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.216 253542 DEBUG nova.compute.manager [req-9302e5ad-fe7b-4152-920c-c476b3a85c43 req-750f4741-9d50-456b-8b4e-533c1e8f2045 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] No waiting events found dispatching network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.217 253542 WARNING nova.compute.manager [req-9302e5ad-fe7b-4152-920c-c476b3a85c43 req-750f4741-9d50-456b-8b4e-533c1e8f2045 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received unexpected event network-vif-plugged-521823df-589a-4370-a3ea-a5a6f4c73a6a for instance with vm_state deleted and task_state None.
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.217 253542 DEBUG nova.compute.manager [req-9302e5ad-fe7b-4152-920c-c476b3a85c43 req-750f4741-9d50-456b-8b4e-533c1e8f2045 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Received event network-vif-deleted-521823df-589a-4370-a3ea-a5a6f4c73a6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.221 253542 DEBUG nova.scheduler.client.report [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.240 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.266 253542 INFO nova.scheduler.client.report [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Deleted allocations for instance 0fc86c7e-5de2-431c-9152-cfe293f8cc7d
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.357 253542 DEBUG oslo_concurrency.lockutils [None req-33d7afed-25cd-4c7e-a18b-79c0bb0f5750 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0fc86c7e-5de2-431c-9152-cfe293f8cc7d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.461 253542 DEBUG nova.network.neutron [-] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.476 253542 INFO nova.compute.manager [-] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Took 0.69 seconds to deallocate network for instance.
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.493 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.494 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.510 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.533 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.534 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.578 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.586 253542 DEBUG oslo_concurrency.processutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:25 compute-0 ceph-mon[75015]: osdmap e118: 3 total, 3 up, 3 in
Nov 25 08:27:25 compute-0 ceph-mon[75015]: pgmap v1264: 321 pgs: 321 active+clean; 128 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 136 op/s
Nov 25 08:27:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1876696747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:27:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/550632676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:25 compute-0 nova_compute[253538]: 2025-11-25 08:27:25.997 253542 DEBUG oslo_concurrency.processutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.002 253542 DEBUG nova.compute.provider_tree [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.017 253542 DEBUG nova.scheduler.client.report [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.043 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.045 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.051 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.051 253542 INFO nova.compute.claims [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.081 253542 INFO nova.scheduler.client.report [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Deleted allocations for instance ca088afd-31e5-497b-bfc5-ba1f56096642
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.147 253542 DEBUG oslo_concurrency.lockutils [None req-5a0606ae-7546-4870-a8b8-3eb9852746c1 02e795c75a3b40bbbc3ca83d0501777f 52217f37b23343d697fa6d2be38e236d - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.169 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:27:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2897132439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.560 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.565 253542 DEBUG nova.compute.provider_tree [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.582 253542 DEBUG nova.scheduler.client.report [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.606 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.607 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.673 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.673 253542 DEBUG nova.network.neutron [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.697 253542 INFO nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.718 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.806 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.808 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.808 253542 INFO nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Creating image(s)
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.833 253542 DEBUG nova.storage.rbd_utils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.861 253542 DEBUG nova.storage.rbd_utils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.882 253542 DEBUG nova.storage.rbd_utils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.885 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/550632676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2897132439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.941 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.942 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.942 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.942 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.960 253542 DEBUG nova.storage.rbd_utils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.965 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.994 253542 DEBUG nova.compute.manager [req-a0caca5f-cf48-464f-8694-6f461ef9d834 req-71d20ab9-0549-406e-9910-6ea58befccd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.995 253542 DEBUG oslo_concurrency.lockutils [req-a0caca5f-cf48-464f-8694-6f461ef9d834 req-71d20ab9-0549-406e-9910-6ea58befccd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.995 253542 DEBUG oslo_concurrency.lockutils [req-a0caca5f-cf48-464f-8694-6f461ef9d834 req-71d20ab9-0549-406e-9910-6ea58befccd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.995 253542 DEBUG oslo_concurrency.lockutils [req-a0caca5f-cf48-464f-8694-6f461ef9d834 req-71d20ab9-0549-406e-9910-6ea58befccd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ca088afd-31e5-497b-bfc5-ba1f56096642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.996 253542 DEBUG nova.compute.manager [req-a0caca5f-cf48-464f-8694-6f461ef9d834 req-71d20ab9-0549-406e-9910-6ea58befccd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] No waiting events found dispatching network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:27:26 compute-0 nova_compute[253538]: 2025-11-25 08:27:26.996 253542 WARNING nova.compute.manager [req-a0caca5f-cf48-464f-8694-6f461ef9d834 req-71d20ab9-0549-406e-9910-6ea58befccd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received unexpected event network-vif-plugged-2089bf75-6119-4c42-a326-989b3931ec08 for instance with vm_state deleted and task_state None.
Nov 25 08:27:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 83 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 586 KiB/s wr, 102 op/s
Nov 25 08:27:27 compute-0 nova_compute[253538]: 2025-11-25 08:27:27.355 253542 DEBUG nova.compute.manager [req-e6acb18e-070d-4e06-911e-b8da5f460392 req-e32861e6-24d4-4ae7-b81a-6d82e92966ca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Received event network-vif-deleted-2089bf75-6119-4c42-a326-989b3931ec08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:28 compute-0 ceph-mon[75015]: pgmap v1265: 321 pgs: 321 active+clean; 83 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 586 KiB/s wr, 102 op/s
Nov 25 08:27:28 compute-0 nova_compute[253538]: 2025-11-25 08:27:28.106 253542 DEBUG nova.policy [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38fa175fb699405c9a05d7c28f994ebc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:27:28 compute-0 nova_compute[253538]: 2025-11-25 08:27:28.154 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:28 compute-0 nova_compute[253538]: 2025-11-25 08:27:28.217 253542 DEBUG nova.storage.rbd_utils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] resizing rbd image 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:27:28 compute-0 nova_compute[253538]: 2025-11-25 08:27:28.333 253542 DEBUG nova.objects.instance [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'migration_context' on Instance uuid 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:28 compute-0 nova_compute[253538]: 2025-11-25 08:27:28.356 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:27:28 compute-0 nova_compute[253538]: 2025-11-25 08:27:28.357 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Ensure instance console log exists: /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:27:28 compute-0 nova_compute[253538]: 2025-11-25 08:27:28.357 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:28 compute-0 nova_compute[253538]: 2025-11-25 08:27:28.358 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:28 compute-0 nova_compute[253538]: 2025-11-25 08:27:28.358 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:27:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3651653136' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:27:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:27:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3651653136' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:27:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 58 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 930 KiB/s wr, 128 op/s
Nov 25 08:27:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3651653136' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:27:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3651653136' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:27:29 compute-0 nova_compute[253538]: 2025-11-25 08:27:29.143 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:29 compute-0 sudo[287265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:27:29 compute-0 sudo[287265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:29 compute-0 sudo[287265]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:29 compute-0 nova_compute[253538]: 2025-11-25 08:27:29.483 253542 DEBUG nova.network.neutron [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Successfully created port: d6aa33fe-8dd6-4546-aa75-715ad57e5b5c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:27:29 compute-0 sudo[287296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:27:29 compute-0 sudo[287296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:29 compute-0 sudo[287296]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:29 compute-0 podman[287289]: 2025-11-25 08:27:29.515635388 +0000 UTC m=+0.094747593 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 08:27:29 compute-0 sudo[287334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:27:29 compute-0 sudo[287334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:29 compute-0 sudo[287334]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:29 compute-0 sudo[287359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:27:29 compute-0 sudo[287359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:29 compute-0 nova_compute[253538]: 2025-11-25 08:27:29.735 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059234.732528, c787de46-dba9-458e-acc0-57470097fac5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:29 compute-0 nova_compute[253538]: 2025-11-25 08:27:29.735 253542 INFO nova.compute.manager [-] [instance: c787de46-dba9-458e-acc0-57470097fac5] VM Stopped (Lifecycle Event)
Nov 25 08:27:29 compute-0 nova_compute[253538]: 2025-11-25 08:27:29.768 253542 DEBUG nova.compute.manager [None req-7868c83d-cef1-4c14-b055-f04fb2931d79 - - - - - -] [instance: c787de46-dba9-458e-acc0-57470097fac5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:29 compute-0 nova_compute[253538]: 2025-11-25 08:27:29.785 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:27:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 25 08:27:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 25 08:27:29 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 25 08:27:30 compute-0 ceph-mon[75015]: pgmap v1266: 321 pgs: 321 active+clean; 58 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 930 KiB/s wr, 128 op/s
Nov 25 08:27:30 compute-0 ceph-mon[75015]: osdmap e119: 3 total, 3 up, 3 in
Nov 25 08:27:30 compute-0 sudo[287359]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:27:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:27:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:27:30 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:27:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:27:30 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:27:30 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 9cd63bc1-3d4b-4033-a776-2fff13e856a7 does not exist
Nov 25 08:27:30 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 3dc20461-e5af-4d4b-baf9-dece8a303856 does not exist
Nov 25 08:27:30 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 80ca0317-a9de-4908-90d2-ad8a6df29e33 does not exist
Nov 25 08:27:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:27:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:27:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:27:30 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:27:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:27:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:27:30 compute-0 sudo[287414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:27:30 compute-0 sudo[287414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:30 compute-0 sudo[287414]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:30 compute-0 sudo[287439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:27:30 compute-0 sudo[287439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:30 compute-0 sudo[287439]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:30 compute-0 sudo[287464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:27:30 compute-0 sudo[287464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:30 compute-0 sudo[287464]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:30 compute-0 sudo[287489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:27:30 compute-0 sudo[287489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:30.763 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:30 compute-0 podman[287556]: 2025-11-25 08:27:30.861647787 +0000 UTC m=+0.046754135 container create 3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:27:30 compute-0 systemd[1]: Started libpod-conmon-3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0.scope.
Nov 25 08:27:30 compute-0 podman[287556]: 2025-11-25 08:27:30.839839573 +0000 UTC m=+0.024945921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:27:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:27:30 compute-0 podman[287556]: 2025-11-25 08:27:30.962590851 +0000 UTC m=+0.147697219 container init 3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:27:30 compute-0 podman[287556]: 2025-11-25 08:27:30.970224693 +0000 UTC m=+0.155331031 container start 3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:27:30 compute-0 systemd[1]: libpod-3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0.scope: Deactivated successfully.
Nov 25 08:27:30 compute-0 zen_hopper[287572]: 167 167
Nov 25 08:27:30 compute-0 podman[287556]: 2025-11-25 08:27:30.978032208 +0000 UTC m=+0.163138556 container attach 3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 08:27:30 compute-0 conmon[287572]: conmon 3bf5018f1c110f251759 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0.scope/container/memory.events
Nov 25 08:27:30 compute-0 podman[287556]: 2025-11-25 08:27:30.978374858 +0000 UTC m=+0.163481187 container died 3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 25 08:27:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac1d07c6bf788864a02885dde0a4826cab77a929c996a4098073c777cf857f91-merged.mount: Deactivated successfully.
Nov 25 08:27:31 compute-0 podman[287556]: 2025-11-25 08:27:31.041968958 +0000 UTC m=+0.227075296 container remove 3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hopper, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:27:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 70 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 1.4 MiB/s wr, 146 op/s
Nov 25 08:27:31 compute-0 systemd[1]: libpod-conmon-3bf5018f1c110f2517596f90ddaf8e7211e8f4c77bc39a3146baf477dc1ee4e0.scope: Deactivated successfully.
Nov 25 08:27:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:27:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:27:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:27:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:27:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:27:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:27:31 compute-0 podman[287596]: 2025-11-25 08:27:31.241203133 +0000 UTC m=+0.047129915 container create 09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lalande, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 08:27:31 compute-0 nova_compute[253538]: 2025-11-25 08:27:31.250 253542 DEBUG nova.network.neutron [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Successfully updated port: d6aa33fe-8dd6-4546-aa75-715ad57e5b5c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:27:31 compute-0 nova_compute[253538]: 2025-11-25 08:27:31.278 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "refresh_cache-30afcbba-78f3-433c-ba0a-5a2d25cf2d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:27:31 compute-0 nova_compute[253538]: 2025-11-25 08:27:31.278 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquired lock "refresh_cache-30afcbba-78f3-433c-ba0a-5a2d25cf2d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:27:31 compute-0 nova_compute[253538]: 2025-11-25 08:27:31.278 253542 DEBUG nova.network.neutron [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:27:31 compute-0 systemd[1]: Started libpod-conmon-09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8.scope.
Nov 25 08:27:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:27:31 compute-0 podman[287596]: 2025-11-25 08:27:31.221892128 +0000 UTC m=+0.027818890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:27:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9470821c9b21846ee5102deabbd54f6b1a0b24bc73785509a92c67e0846bcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:27:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9470821c9b21846ee5102deabbd54f6b1a0b24bc73785509a92c67e0846bcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:27:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9470821c9b21846ee5102deabbd54f6b1a0b24bc73785509a92c67e0846bcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:27:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9470821c9b21846ee5102deabbd54f6b1a0b24bc73785509a92c67e0846bcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:27:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9470821c9b21846ee5102deabbd54f6b1a0b24bc73785509a92c67e0846bcb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:27:31 compute-0 podman[287596]: 2025-11-25 08:27:31.336342657 +0000 UTC m=+0.142269429 container init 09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:27:31 compute-0 podman[287596]: 2025-11-25 08:27:31.345729367 +0000 UTC m=+0.151656119 container start 09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lalande, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:27:31 compute-0 podman[287596]: 2025-11-25 08:27:31.349263405 +0000 UTC m=+0.155190177 container attach 09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lalande, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:27:31 compute-0 nova_compute[253538]: 2025-11-25 08:27:31.432 253542 DEBUG nova.compute.manager [req-22ff0f5f-f5eb-4f01-b807-93f88b489afe req-b261fd50-76c4-400b-9aaa-180f713ea52a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received event network-changed-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:31 compute-0 nova_compute[253538]: 2025-11-25 08:27:31.433 253542 DEBUG nova.compute.manager [req-22ff0f5f-f5eb-4f01-b807-93f88b489afe req-b261fd50-76c4-400b-9aaa-180f713ea52a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Refreshing instance network info cache due to event network-changed-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:27:31 compute-0 nova_compute[253538]: 2025-11-25 08:27:31.433 253542 DEBUG oslo_concurrency.lockutils [req-22ff0f5f-f5eb-4f01-b807-93f88b489afe req-b261fd50-76c4-400b-9aaa-180f713ea52a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-30afcbba-78f3-433c-ba0a-5a2d25cf2d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:27:31 compute-0 nova_compute[253538]: 2025-11-25 08:27:31.467 253542 DEBUG nova.network.neutron [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:27:32 compute-0 ceph-mon[75015]: pgmap v1268: 321 pgs: 321 active+clean; 70 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 1.4 MiB/s wr, 146 op/s
Nov 25 08:27:32 compute-0 adoring_lalande[287613]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:27:32 compute-0 adoring_lalande[287613]: --> relative data size: 1.0
Nov 25 08:27:32 compute-0 adoring_lalande[287613]: --> All data devices are unavailable
Nov 25 08:27:32 compute-0 systemd[1]: libpod-09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8.scope: Deactivated successfully.
Nov 25 08:27:32 compute-0 podman[287642]: 2025-11-25 08:27:32.398152798 +0000 UTC m=+0.030738382 container died 09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lalande, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 25 08:27:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed9470821c9b21846ee5102deabbd54f6b1a0b24bc73785509a92c67e0846bcb-merged.mount: Deactivated successfully.
Nov 25 08:27:32 compute-0 podman[287642]: 2025-11-25 08:27:32.718444235 +0000 UTC m=+0.351029809 container remove 09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lalande, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:27:32 compute-0 systemd[1]: libpod-conmon-09429595a68217f42d57a18a744ffc8e4789ace4dc544839f22fd8956bc25ce8.scope: Deactivated successfully.
Nov 25 08:27:32 compute-0 sudo[287489]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:32 compute-0 sudo[287658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:27:32 compute-0 sudo[287658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:32 compute-0 sudo[287658]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:32 compute-0 sudo[287687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:27:32 compute-0 sudo[287687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:32 compute-0 sudo[287687]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:33 compute-0 podman[287682]: 2025-11-25 08:27:32.997024276 +0000 UTC m=+0.085251331 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:27:33 compute-0 sudo[287728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:27:33 compute-0 sudo[287728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:33 compute-0 sudo[287728]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 88 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 2.6 MiB/s wr, 90 op/s
Nov 25 08:27:33 compute-0 sudo[287753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:27:33 compute-0 sudo[287753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.264 253542 DEBUG nova.network.neutron [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Updating instance_info_cache with network_info: [{"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.306 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Releasing lock "refresh_cache-30afcbba-78f3-433c-ba0a-5a2d25cf2d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.307 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Instance network_info: |[{"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.308 253542 DEBUG oslo_concurrency.lockutils [req-22ff0f5f-f5eb-4f01-b807-93f88b489afe req-b261fd50-76c4-400b-9aaa-180f713ea52a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-30afcbba-78f3-433c-ba0a-5a2d25cf2d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.308 253542 DEBUG nova.network.neutron [req-22ff0f5f-f5eb-4f01-b807-93f88b489afe req-b261fd50-76c4-400b-9aaa-180f713ea52a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Refreshing network info cache for port d6aa33fe-8dd6-4546-aa75-715ad57e5b5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.314 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Start _get_guest_xml network_info=[{"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.323 253542 WARNING nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.330 253542 DEBUG nova.virt.libvirt.host [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.332 253542 DEBUG nova.virt.libvirt.host [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.343 253542 DEBUG nova.virt.libvirt.host [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.344 253542 DEBUG nova.virt.libvirt.host [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.345 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.345 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.346 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.346 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.347 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.347 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.347 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.347 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.348 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.348 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.348 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.349 253542 DEBUG nova.virt.hardware [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.353 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:33 compute-0 podman[287834]: 2025-11-25 08:27:33.616178424 +0000 UTC m=+0.099905327 container create 2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:27:33 compute-0 podman[287834]: 2025-11-25 08:27:33.538624257 +0000 UTC m=+0.022351190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:27:33 compute-0 systemd[1]: Started libpod-conmon-2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa.scope.
Nov 25 08:27:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:27:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:27:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3991690127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:33 compute-0 nova_compute[253538]: 2025-11-25 08:27:33.778 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:33 compute-0 podman[287834]: 2025-11-25 08:27:33.93610574 +0000 UTC m=+0.419832733 container init 2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 08:27:33 compute-0 podman[287834]: 2025-11-25 08:27:33.943425083 +0000 UTC m=+0.427151986 container start 2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 08:27:33 compute-0 crazy_gould[287851]: 167 167
Nov 25 08:27:33 compute-0 systemd[1]: libpod-2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa.scope: Deactivated successfully.
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.004 253542 DEBUG nova.storage.rbd_utils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.007 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:34 compute-0 podman[287834]: 2025-11-25 08:27:34.015901569 +0000 UTC m=+0.499628482 container attach 2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 08:27:34 compute-0 podman[287834]: 2025-11-25 08:27:34.02173424 +0000 UTC m=+0.505461143 container died 2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.146 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c2756e7871396c548ef9241b696d2f78ce0cf44a7f0053c39e78d92809c6dbc-merged.mount: Deactivated successfully.
Nov 25 08:27:34 compute-0 podman[287834]: 2025-11-25 08:27:34.216154502 +0000 UTC m=+0.699881415 container remove 2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:27:34 compute-0 systemd[1]: libpod-conmon-2752baa65f290c01a504d2bd40658a491ef0acb08fb1933a0f5ea927f9e7cbfa.scope: Deactivated successfully.
Nov 25 08:27:34 compute-0 ceph-mon[75015]: pgmap v1269: 321 pgs: 321 active+clean; 88 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 2.6 MiB/s wr, 90 op/s
Nov 25 08:27:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3991690127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.401 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:34 compute-0 podman[287916]: 2025-11-25 08:27:34.411805078 +0000 UTC m=+0.047021733 container create 6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 08:27:34 compute-0 systemd[1]: Started libpod-conmon-6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb.scope.
Nov 25 08:27:34 compute-0 podman[287916]: 2025-11-25 08:27:34.390862808 +0000 UTC m=+0.026079483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:27:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa01c12bac5dc48b980e357c7430fefb7567d708d4ced2b306713edccca7e1fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa01c12bac5dc48b980e357c7430fefb7567d708d4ced2b306713edccca7e1fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa01c12bac5dc48b980e357c7430fefb7567d708d4ced2b306713edccca7e1fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa01c12bac5dc48b980e357c7430fefb7567d708d4ced2b306713edccca7e1fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:27:34 compute-0 podman[287916]: 2025-11-25 08:27:34.525872335 +0000 UTC m=+0.161089090 container init 6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:27:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:27:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3930265700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:34 compute-0 podman[287916]: 2025-11-25 08:27:34.533086174 +0000 UTC m=+0.168302829 container start 6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chandrasekhar, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 08:27:34 compute-0 podman[287916]: 2025-11-25 08:27:34.540945542 +0000 UTC m=+0.176162217 container attach 6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.550 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.552 253542 DEBUG nova.virt.libvirt.vif [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:27:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2071435205',display_name='tempest-ImagesTestJSON-server-2071435205',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2071435205',id=26,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-a9u90nyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagL
ist,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:27:26Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=30afcbba-78f3-433c-ba0a-5a2d25cf2d48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.552 253542 DEBUG nova.network.os_vif_util [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.553 253542 DEBUG nova.network.os_vif_util [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:2b:39,bridge_name='br-int',has_traffic_filtering=True,id=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6aa33fe-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.554 253542 DEBUG nova.objects.instance [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.568 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:27:34 compute-0 nova_compute[253538]:   <uuid>30afcbba-78f3-433c-ba0a-5a2d25cf2d48</uuid>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   <name>instance-0000001a</name>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <nova:name>tempest-ImagesTestJSON-server-2071435205</nova:name>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:27:33</nova:creationTime>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:27:34 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:27:34 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:27:34 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:27:34 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:27:34 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:27:34 compute-0 nova_compute[253538]:         <nova:user uuid="38fa175fb699405c9a05d7c28f994ebc">tempest-ImagesTestJSON-109091550-project-member</nova:user>
Nov 25 08:27:34 compute-0 nova_compute[253538]:         <nova:project uuid="b0a28d62fb1841c087b84b40bf5a54ec">tempest-ImagesTestJSON-109091550</nova:project>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:27:34 compute-0 nova_compute[253538]:         <nova:port uuid="d6aa33fe-8dd6-4546-aa75-715ad57e5b5c">
Nov 25 08:27:34 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <system>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <entry name="serial">30afcbba-78f3-433c-ba0a-5a2d25cf2d48</entry>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <entry name="uuid">30afcbba-78f3-433c-ba0a-5a2d25cf2d48</entry>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     </system>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   <os>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   </os>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   <features>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   </features>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk">
Nov 25 08:27:34 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       </source>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:27:34 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk.config">
Nov 25 08:27:34 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       </source>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:27:34 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:93:2b:39"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <target dev="tapd6aa33fe-8d"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48/console.log" append="off"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <video>
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     </video>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:27:34 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:27:34 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:27:34 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:27:34 compute-0 nova_compute[253538]: </domain>
Nov 25 08:27:34 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.569 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Preparing to wait for external event network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.569 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.569 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.569 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.570 253542 DEBUG nova.virt.libvirt.vif [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:27:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2071435205',display_name='tempest-ImagesTestJSON-server-2071435205',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2071435205',id=26,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-a9u90nyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:27:26Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=30afcbba-78f3-433c-ba0a-5a2d25cf2d48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.570 253542 DEBUG nova.network.os_vif_util [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.571 253542 DEBUG nova.network.os_vif_util [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:2b:39,bridge_name='br-int',has_traffic_filtering=True,id=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6aa33fe-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.571 253542 DEBUG os_vif [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:2b:39,bridge_name='br-int',has_traffic_filtering=True,id=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6aa33fe-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.572 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.572 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.572 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.577 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6aa33fe-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.577 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6aa33fe-8d, col_values=(('external_ids', {'iface-id': 'd6aa33fe-8dd6-4546-aa75-715ad57e5b5c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:2b:39', 'vm-uuid': '30afcbba-78f3-433c-ba0a-5a2d25cf2d48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:34 compute-0 NetworkManager[48915]: <info>  [1764059254.5800] manager: (tapd6aa33fe-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.581 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.588 253542 INFO os_vif [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:2b:39,bridge_name='br-int',has_traffic_filtering=True,id=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6aa33fe-8d')
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.633 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.634 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.634 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No VIF found with MAC fa:16:3e:93:2b:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.634 253542 INFO nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Using config drive
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.659 253542 DEBUG nova.storage.rbd_utils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:34 compute-0 nova_compute[253538]: 2025-11-25 08:27:34.786 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:27:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 88 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.232 253542 INFO nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Creating config drive at /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48/disk.config
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.236 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm4h9nfd6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.300 253542 DEBUG nova.network.neutron [req-22ff0f5f-f5eb-4f01-b807-93f88b489afe req-b261fd50-76c4-400b-9aaa-180f713ea52a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Updated VIF entry in instance network info cache for port d6aa33fe-8dd6-4546-aa75-715ad57e5b5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.301 253542 DEBUG nova.network.neutron [req-22ff0f5f-f5eb-4f01-b807-93f88b489afe req-b261fd50-76c4-400b-9aaa-180f713ea52a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Updating instance_info_cache with network_info: [{"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.315 253542 DEBUG oslo_concurrency.lockutils [req-22ff0f5f-f5eb-4f01-b807-93f88b489afe req-b261fd50-76c4-400b-9aaa-180f713ea52a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-30afcbba-78f3-433c-ba0a-5a2d25cf2d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:27:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3930265700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]: {
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:     "0": [
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:         {
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "devices": [
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "/dev/loop3"
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             ],
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "lv_name": "ceph_lv0",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "lv_size": "21470642176",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "name": "ceph_lv0",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "tags": {
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.cluster_name": "ceph",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.crush_device_class": "",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.encrypted": "0",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.osd_id": "0",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.type": "block",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.vdo": "0"
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             },
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "type": "block",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "vg_name": "ceph_vg0"
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:         }
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:     ],
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:     "1": [
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:         {
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "devices": [
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "/dev/loop4"
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             ],
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "lv_name": "ceph_lv1",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "lv_size": "21470642176",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "name": "ceph_lv1",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "tags": {
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.cluster_name": "ceph",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.crush_device_class": "",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.encrypted": "0",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.osd_id": "1",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.type": "block",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.vdo": "0"
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             },
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "type": "block",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "vg_name": "ceph_vg1"
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:         }
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:     ],
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:     "2": [
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:         {
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "devices": [
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "/dev/loop5"
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             ],
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "lv_name": "ceph_lv2",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "lv_size": "21470642176",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "name": "ceph_lv2",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "tags": {
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.cluster_name": "ceph",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.crush_device_class": "",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.encrypted": "0",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.osd_id": "2",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.type": "block",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:                 "ceph.vdo": "0"
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             },
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "type": "block",
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:             "vg_name": "ceph_vg2"
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:         }
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]:     ]
Nov 25 08:27:35 compute-0 vigilant_chandrasekhar[287933]: }
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.378 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm4h9nfd6" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:35 compute-0 systemd[1]: libpod-6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb.scope: Deactivated successfully.
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.412 253542 DEBUG nova.storage.rbd_utils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.416 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48/disk.config 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:35 compute-0 podman[287983]: 2025-11-25 08:27:35.447174848 +0000 UTC m=+0.026680810 container died 6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.458 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "575b6526-de38-4a80-a952-be1b891b4792" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.459 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.482 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:27:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa01c12bac5dc48b980e357c7430fefb7567d708d4ced2b306713edccca7e1fd-merged.mount: Deactivated successfully.
Nov 25 08:27:35 compute-0 podman[287983]: 2025-11-25 08:27:35.52853114 +0000 UTC m=+0.108037082 container remove 6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 08:27:35 compute-0 systemd[1]: libpod-conmon-6cd3cec6f9b344259a34a523e6b7b658d999538d8bf77d0167a1ef75fbcadceb.scope: Deactivated successfully.
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.565 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.566 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:35 compute-0 sudo[287753]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.573 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.574 253542 INFO nova.compute.claims [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.598 253542 DEBUG oslo_concurrency.processutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48/disk.config 30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.599 253542 INFO nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Deleting local config drive /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48/disk.config because it was imported into RBD.
Nov 25 08:27:35 compute-0 sudo[288020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:27:35 compute-0 sudo[288020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:35 compute-0 sudo[288020]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:35 compute-0 kernel: tapd6aa33fe-8d: entered promiscuous mode
Nov 25 08:27:35 compute-0 ovn_controller[152859]: 2025-11-25T08:27:35Z|00151|binding|INFO|Claiming lport d6aa33fe-8dd6-4546-aa75-715ad57e5b5c for this chassis.
Nov 25 08:27:35 compute-0 ovn_controller[152859]: 2025-11-25T08:27:35Z|00152|binding|INFO|d6aa33fe-8dd6-4546-aa75-715ad57e5b5c: Claiming fa:16:3e:93:2b:39 10.100.0.13
Nov 25 08:27:35 compute-0 NetworkManager[48915]: <info>  [1764059255.6584] manager: (tapd6aa33fe-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.658 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.662 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.675 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:2b:39 10.100.0.13'], port_security=['fa:16:3e:93:2b:39 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '30afcbba-78f3-433c-ba0a-5a2d25cf2d48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.676 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d6aa33fe-8dd6-4546-aa75-715ad57e5b5c in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e bound to our chassis
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.677 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.689 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[023f8a68-111c-4a32-bd37-d2b8066ee69a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.691 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapba659d6c-c1 in ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:27:35 compute-0 systemd-udevd[288080]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.696 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.693 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapba659d6c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.693 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[51cb4b9d-2247-493c-9af6-3bdd896807aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.694 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9f677bb3-bfea-4db6-9b02-ad925941f0b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:35 compute-0 systemd-machined[215790]: New machine qemu-31-instance-0000001a.
Nov 25 08:27:35 compute-0 NetworkManager[48915]: <info>  [1764059255.7070] device (tapd6aa33fe-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:27:35 compute-0 NetworkManager[48915]: <info>  [1764059255.7084] device (tapd6aa33fe-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.707 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6f84e6-2bc3-42bf-9362-c27eaf006a47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:35 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-0000001a.
Nov 25 08:27:35 compute-0 sudo[288055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:27:35 compute-0 sudo[288055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:35 compute-0 sudo[288055]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.723 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7f212446-e696-473a-81d8-ecb4887bee06]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.735 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:35 compute-0 ovn_controller[152859]: 2025-11-25T08:27:35Z|00153|binding|INFO|Setting lport d6aa33fe-8dd6-4546-aa75-715ad57e5b5c ovn-installed in OVS
Nov 25 08:27:35 compute-0 ovn_controller[152859]: 2025-11-25T08:27:35Z|00154|binding|INFO|Setting lport d6aa33fe-8dd6-4546-aa75-715ad57e5b5c up in Southbound
Nov 25 08:27:35 compute-0 nova_compute[253538]: 2025-11-25 08:27:35.742 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.753 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a8612fd6-a70e-4d9c-be29-72e6ba2a31fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.757 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a482220e-0e2d-439e-b07b-9dac0eb4b8ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:35 compute-0 NetworkManager[48915]: <info>  [1764059255.7588] manager: (tapba659d6c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Nov 25 08:27:35 compute-0 systemd-udevd[288086]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:27:35 compute-0 sudo[288091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:27:35 compute-0 sudo[288091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:35 compute-0 sudo[288091]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.790 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ad7220eb-1c42-45e6-b1a7-84415f1e747d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.798 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[649c022a-fcf1-4cc2-8b94-4c40b9b83a5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:35 compute-0 NetworkManager[48915]: <info>  [1764059255.8191] device (tapba659d6c-c0): carrier: link connected
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.823 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[77df593f-ecc9-43b0-a40f-3d3f6853e03f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.841 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0e98ac-543d-4fb9-a484-f5fd1ec91a2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461041, 'reachable_time': 24761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288175, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:35 compute-0 sudo[288139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:27:35 compute-0 sudo[288139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.857 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb1abac-8d9c-4318-a95b-b90e5d3dd368]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:c340'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461041, 'tstamp': 461041}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288187, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.875 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[967f1271-4eb4-4e5c-94ce-a185cb766bf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461041, 'reachable_time': 24761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288188, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.907 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6d09e639-3ae8-45c8-916b-049c509300a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.976 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e6eb185b-f701-47fd-9518-de546bd81002]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.978 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.978 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.979 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba659d6c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:35 compute-0 NetworkManager[48915]: <info>  [1764059255.9815] manager: (tapba659d6c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Nov 25 08:27:35 compute-0 kernel: tapba659d6c-c0: entered promiscuous mode
Nov 25 08:27:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:35.987 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapba659d6c-c0, col_values=(('external_ids', {'iface-id': '02ee51d1-7fc5-4815-93ec-b9ead088a46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:35 compute-0 ovn_controller[152859]: 2025-11-25T08:27:35Z|00155|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.001 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.012 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:36.013 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:36.014 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c8458879-63bd-4c0e-bbfe-fa558915454c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:36.015 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:27:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:36.017 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'env', 'PROCESS_TAG=haproxy-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ba659d6c-c094-47d7-ba45-d0e659ce778e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:27:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:27:36 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684891317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.165 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.175 253542 DEBUG nova.compute.provider_tree [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.190 253542 DEBUG nova.scheduler.client.report [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:27:36 compute-0 podman[288238]: 2025-11-25 08:27:36.207393471 +0000 UTC m=+0.046683064 container create c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.213 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.214 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:27:36 compute-0 systemd[1]: Started libpod-conmon-c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09.scope.
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.264 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.265 253542 DEBUG nova.network.neutron [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:27:36 compute-0 podman[288238]: 2025-11-25 08:27:36.185395942 +0000 UTC m=+0.024685555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:27:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.287 253542 INFO nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:27:36 compute-0 podman[288238]: 2025-11-25 08:27:36.301041904 +0000 UTC m=+0.140331537 container init c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_euler, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.303 253542 DEBUG nova.compute.manager [req-215b1f70-ef99-4afa-b336-17d9d02c3cdb req-e77c9183-69e1-41af-8083-724b54e8a0bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received event network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.304 253542 DEBUG oslo_concurrency.lockutils [req-215b1f70-ef99-4afa-b336-17d9d02c3cdb req-e77c9183-69e1-41af-8083-724b54e8a0bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.304 253542 DEBUG oslo_concurrency.lockutils [req-215b1f70-ef99-4afa-b336-17d9d02c3cdb req-e77c9183-69e1-41af-8083-724b54e8a0bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.304 253542 DEBUG oslo_concurrency.lockutils [req-215b1f70-ef99-4afa-b336-17d9d02c3cdb req-e77c9183-69e1-41af-8083-724b54e8a0bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.304 253542 DEBUG nova.compute.manager [req-215b1f70-ef99-4afa-b336-17d9d02c3cdb req-e77c9183-69e1-41af-8083-724b54e8a0bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Processing event network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:27:36 compute-0 podman[288238]: 2025-11-25 08:27:36.310461664 +0000 UTC m=+0.149751257 container start c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_euler, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:27:36 compute-0 gifted_euler[288281]: 167 167
Nov 25 08:27:36 compute-0 podman[288238]: 2025-11-25 08:27:36.31757671 +0000 UTC m=+0.156866303 container attach c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 08:27:36 compute-0 systemd[1]: libpod-c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09.scope: Deactivated successfully.
Nov 25 08:27:36 compute-0 conmon[288281]: conmon c47d5ea98aa78c4695bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09.scope/container/memory.events
Nov 25 08:27:36 compute-0 podman[288238]: 2025-11-25 08:27:36.3200741 +0000 UTC m=+0.159363683 container died c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_euler, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.331 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:27:36 compute-0 ceph-mon[75015]: pgmap v1270: 321 pgs: 321 active+clean; 88 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Nov 25 08:27:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2684891317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc3ce88f042b26dbe9064ab00d824af8761c2a1816db79bf02cd9d027003b52c-merged.mount: Deactivated successfully.
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.382 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059256.3823512, 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.383 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] VM Started (Lifecycle Event)
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.385 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.389 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:27:36 compute-0 podman[288238]: 2025-11-25 08:27:36.390359035 +0000 UTC m=+0.229648608 container remove c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_euler, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.393 253542 INFO nova.virt.libvirt.driver [-] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Instance spawned successfully.
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.393 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:27:36 compute-0 systemd[1]: libpod-conmon-c47d5ea98aa78c4695bbbabe965e58f90159e0f2336d6d53bf85011960ec8f09.scope: Deactivated successfully.
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.414 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.418 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.419 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.419 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.420 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.420 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.421 253542 DEBUG nova.virt.libvirt.driver [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.424 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.446 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.446 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059256.382505, 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.446 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] VM Paused (Lifecycle Event)
Nov 25 08:27:36 compute-0 podman[288336]: 2025-11-25 08:27:36.447406635 +0000 UTC m=+0.070424830 container create 48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.461 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.477 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059256.387846, 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.477 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] VM Resumed (Lifecycle Event)
Nov 25 08:27:36 compute-0 systemd[1]: Started libpod-conmon-48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529.scope.
Nov 25 08:27:36 compute-0 podman[288336]: 2025-11-25 08:27:36.404378383 +0000 UTC m=+0.027396608 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:27:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:27:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dedff8d23f1f42a2bd3738ba82be5e7d6fdc0734473529518931479e1db062ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:27:36 compute-0 podman[288336]: 2025-11-25 08:27:36.534966168 +0000 UTC m=+0.157984383 container init 48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:27:36 compute-0 podman[288336]: 2025-11-25 08:27:36.541270003 +0000 UTC m=+0.164288198 container start 48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:27:36 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[288351]: [NOTICE]   (288371) : New worker (288376) forked
Nov 25 08:27:36 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[288351]: [NOTICE]   (288371) : Loading success.
Nov 25 08:27:36 compute-0 podman[288359]: 2025-11-25 08:27:36.56899756 +0000 UTC m=+0.046880059 container create 48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:27:36 compute-0 systemd[1]: Started libpod-conmon-48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e.scope.
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.632 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.640 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:27:36 compute-0 podman[288359]: 2025-11-25 08:27:36.547127705 +0000 UTC m=+0.025010234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.645 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.646 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.647 253542 INFO nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Creating image(s)
Nov 25 08:27:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:27:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4a0d49da341748888f3ce03ed5ae40d2ac41d60dc48ee22f8786ea18eb3aaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:27:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4a0d49da341748888f3ce03ed5ae40d2ac41d60dc48ee22f8786ea18eb3aaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.674 253542 DEBUG nova.storage.rbd_utils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] rbd image 575b6526-de38-4a80-a952-be1b891b4792_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4a0d49da341748888f3ce03ed5ae40d2ac41d60dc48ee22f8786ea18eb3aaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:27:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4a0d49da341748888f3ce03ed5ae40d2ac41d60dc48ee22f8786ea18eb3aaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:27:36 compute-0 podman[288359]: 2025-11-25 08:27:36.693686652 +0000 UTC m=+0.171569181 container init 48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.695 253542 DEBUG nova.storage.rbd_utils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] rbd image 575b6526-de38-4a80-a952-be1b891b4792_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:36 compute-0 podman[288359]: 2025-11-25 08:27:36.7019136 +0000 UTC m=+0.179796099 container start 48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 08:27:36 compute-0 podman[288359]: 2025-11-25 08:27:36.709024516 +0000 UTC m=+0.186907045 container attach 48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.716 253542 DEBUG nova.storage.rbd_utils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] rbd image 575b6526-de38-4a80-a952-be1b891b4792_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.721 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.750 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.753 253542 INFO nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Took 9.95 seconds to spawn the instance on the hypervisor.
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.753 253542 DEBUG nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.789 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.790 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.791 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.791 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.816 253542 DEBUG nova.storage.rbd_utils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] rbd image 575b6526-de38-4a80-a952-be1b891b4792_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.821 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 575b6526-de38-4a80-a952-be1b891b4792_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.857 253542 INFO nova.compute.manager [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Took 11.30 seconds to build instance.
Nov 25 08:27:36 compute-0 nova_compute[253538]: 2025-11-25 08:27:36.873 253542 DEBUG oslo_concurrency.lockutils [None req-f786152c-1e05-4ab9-9ade-48969d18372c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.380s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:37 compute-0 nova_compute[253538]: 2025-11-25 08:27:37.067 253542 DEBUG nova.policy [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cbe175b10d9243369c5cae0b1a0c718b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '16044f9687494680b68b927090e5afc5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:27:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 91 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 2.2 MiB/s wr, 57 op/s
Nov 25 08:27:37 compute-0 nova_compute[253538]: 2025-11-25 08:27:37.154 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 575b6526-de38-4a80-a952-be1b891b4792_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:37 compute-0 nova_compute[253538]: 2025-11-25 08:27:37.231 253542 DEBUG nova.storage.rbd_utils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] resizing rbd image 575b6526-de38-4a80-a952-be1b891b4792_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:27:37 compute-0 nova_compute[253538]: 2025-11-25 08:27:37.361 253542 DEBUG nova.objects.instance [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lazy-loading 'migration_context' on Instance uuid 575b6526-de38-4a80-a952-be1b891b4792 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:37 compute-0 nova_compute[253538]: 2025-11-25 08:27:37.374 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:27:37 compute-0 nova_compute[253538]: 2025-11-25 08:27:37.375 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Ensure instance console log exists: /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:27:37 compute-0 nova_compute[253538]: 2025-11-25 08:27:37.375 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:37 compute-0 nova_compute[253538]: 2025-11-25 08:27:37.375 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:37 compute-0 nova_compute[253538]: 2025-11-25 08:27:37.376 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:37 compute-0 hardcore_black[288387]: {
Nov 25 08:27:37 compute-0 hardcore_black[288387]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:27:37 compute-0 hardcore_black[288387]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:27:37 compute-0 hardcore_black[288387]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:27:37 compute-0 hardcore_black[288387]:         "osd_id": 1,
Nov 25 08:27:37 compute-0 hardcore_black[288387]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:27:37 compute-0 hardcore_black[288387]:         "type": "bluestore"
Nov 25 08:27:37 compute-0 hardcore_black[288387]:     },
Nov 25 08:27:37 compute-0 hardcore_black[288387]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:27:37 compute-0 hardcore_black[288387]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:27:37 compute-0 hardcore_black[288387]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:27:37 compute-0 hardcore_black[288387]:         "osd_id": 2,
Nov 25 08:27:37 compute-0 hardcore_black[288387]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:27:37 compute-0 hardcore_black[288387]:         "type": "bluestore"
Nov 25 08:27:37 compute-0 hardcore_black[288387]:     },
Nov 25 08:27:37 compute-0 hardcore_black[288387]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:27:37 compute-0 hardcore_black[288387]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:27:37 compute-0 hardcore_black[288387]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:27:37 compute-0 hardcore_black[288387]:         "osd_id": 0,
Nov 25 08:27:37 compute-0 hardcore_black[288387]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:27:37 compute-0 hardcore_black[288387]:         "type": "bluestore"
Nov 25 08:27:37 compute-0 hardcore_black[288387]:     }
Nov 25 08:27:37 compute-0 hardcore_black[288387]: }
Nov 25 08:27:37 compute-0 systemd[1]: libpod-48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e.scope: Deactivated successfully.
Nov 25 08:27:37 compute-0 podman[288359]: 2025-11-25 08:27:37.783519299 +0000 UTC m=+1.261401818 container died 48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:27:37 compute-0 systemd[1]: libpod-48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e.scope: Consumed 1.037s CPU time.
Nov 25 08:27:37 compute-0 nova_compute[253538]: 2025-11-25 08:27:37.788 253542 DEBUG nova.network.neutron [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Successfully created port: 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:27:38 compute-0 nova_compute[253538]: 2025-11-25 08:27:38.076 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059243.074372, 0fc86c7e-5de2-431c-9152-cfe293f8cc7d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:38 compute-0 nova_compute[253538]: 2025-11-25 08:27:38.076 253542 INFO nova.compute.manager [-] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] VM Stopped (Lifecycle Event)
Nov 25 08:27:38 compute-0 nova_compute[253538]: 2025-11-25 08:27:38.095 253542 DEBUG nova.compute.manager [None req-f77fe06f-35c1-4131-a261-3794bbda4539 - - - - - -] [instance: 0fc86c7e-5de2-431c-9152-cfe293f8cc7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:38 compute-0 nova_compute[253538]: 2025-11-25 08:27:38.470 253542 DEBUG nova.compute.manager [req-774b4035-6e54-4529-baba-7d17350c3073 req-7e2d1fe0-b1da-46b9-afc8-17ff74412b0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received event network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:38 compute-0 nova_compute[253538]: 2025-11-25 08:27:38.471 253542 DEBUG oslo_concurrency.lockutils [req-774b4035-6e54-4529-baba-7d17350c3073 req-7e2d1fe0-b1da-46b9-afc8-17ff74412b0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:38 compute-0 nova_compute[253538]: 2025-11-25 08:27:38.471 253542 DEBUG oslo_concurrency.lockutils [req-774b4035-6e54-4529-baba-7d17350c3073 req-7e2d1fe0-b1da-46b9-afc8-17ff74412b0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:38 compute-0 nova_compute[253538]: 2025-11-25 08:27:38.471 253542 DEBUG oslo_concurrency.lockutils [req-774b4035-6e54-4529-baba-7d17350c3073 req-7e2d1fe0-b1da-46b9-afc8-17ff74412b0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:38 compute-0 nova_compute[253538]: 2025-11-25 08:27:38.472 253542 DEBUG nova.compute.manager [req-774b4035-6e54-4529-baba-7d17350c3073 req-7e2d1fe0-b1da-46b9-afc8-17ff74412b0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] No waiting events found dispatching network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:27:38 compute-0 nova_compute[253538]: 2025-11-25 08:27:38.472 253542 WARNING nova.compute.manager [req-774b4035-6e54-4529-baba-7d17350c3073 req-7e2d1fe0-b1da-46b9-afc8-17ff74412b0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received unexpected event network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c for instance with vm_state active and task_state None.
Nov 25 08:27:38 compute-0 ceph-mon[75015]: pgmap v1271: 321 pgs: 321 active+clean; 91 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 2.2 MiB/s wr, 57 op/s
Nov 25 08:27:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd4a0d49da341748888f3ce03ed5ae40d2ac41d60dc48ee22f8786ea18eb3aaa-merged.mount: Deactivated successfully.
Nov 25 08:27:38 compute-0 nova_compute[253538]: 2025-11-25 08:27:38.953 253542 DEBUG nova.network.neutron [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Successfully updated port: 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:27:38 compute-0 nova_compute[253538]: 2025-11-25 08:27:38.967 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "refresh_cache-575b6526-de38-4a80-a952-be1b891b4792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:27:38 compute-0 nova_compute[253538]: 2025-11-25 08:27:38.967 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquired lock "refresh_cache-575b6526-de38-4a80-a952-be1b891b4792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:27:38 compute-0 nova_compute[253538]: 2025-11-25 08:27:38.968 253542 DEBUG nova.network.neutron [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:27:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 104 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 931 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 25 08:27:39 compute-0 nova_compute[253538]: 2025-11-25 08:27:39.122 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059244.1218808, ca088afd-31e5-497b-bfc5-ba1f56096642 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:39 compute-0 nova_compute[253538]: 2025-11-25 08:27:39.123 253542 INFO nova.compute.manager [-] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] VM Stopped (Lifecycle Event)
Nov 25 08:27:39 compute-0 nova_compute[253538]: 2025-11-25 08:27:39.137 253542 DEBUG nova.compute.manager [None req-5ca2c5ed-d721-431e-ac93-21a53164a5a2 - - - - - -] [instance: ca088afd-31e5-497b-bfc5-ba1f56096642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:39 compute-0 podman[288359]: 2025-11-25 08:27:39.212235497 +0000 UTC m=+2.690118006 container remove 48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 08:27:39 compute-0 systemd[1]: libpod-conmon-48574a6bfaa1a5b66c264939ba99714244fb483bf5ac0cb267a09a1c5beb1f8e.scope: Deactivated successfully.
Nov 25 08:27:39 compute-0 sudo[288139]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:27:39 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:27:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:27:39 compute-0 podman[288598]: 2025-11-25 08:27:39.320400981 +0000 UTC m=+0.405548617 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 25 08:27:39 compute-0 nova_compute[253538]: 2025-11-25 08:27:39.328 253542 DEBUG nova.network.neutron [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:27:39 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:27:39 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 3d570d29-7fda-46e8-9bfd-852966078f3c does not exist
Nov 25 08:27:39 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 2b31b402-648e-4e3f-9073-e08a3940b324 does not exist
Nov 25 08:27:39 compute-0 sudo[288623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:27:39 compute-0 sudo[288623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:39 compute-0 sudo[288623]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:39 compute-0 sudo[288648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:27:39 compute-0 sudo[288648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:27:39 compute-0 sudo[288648]: pam_unix(sudo:session): session closed for user root
Nov 25 08:27:39 compute-0 nova_compute[253538]: 2025-11-25 08:27:39.610 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:39 compute-0 nova_compute[253538]: 2025-11-25 08:27:39.789 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:39 compute-0 nova_compute[253538]: 2025-11-25 08:27:39.906 253542 DEBUG oslo_concurrency.lockutils [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:39 compute-0 nova_compute[253538]: 2025-11-25 08:27:39.906 253542 DEBUG oslo_concurrency.lockutils [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:39 compute-0 nova_compute[253538]: 2025-11-25 08:27:39.906 253542 DEBUG nova.compute.manager [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:39 compute-0 nova_compute[253538]: 2025-11-25 08:27:39.909 253542 DEBUG nova.compute.manager [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 08:27:39 compute-0 nova_compute[253538]: 2025-11-25 08:27:39.910 253542 DEBUG nova.objects.instance [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'flavor' on Instance uuid 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:27:39 compute-0 nova_compute[253538]: 2025-11-25 08:27:39.933 253542 DEBUG nova.virt.libvirt.driver [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.177 253542 DEBUG nova.network.neutron [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Updating instance_info_cache with network_info: [{"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.214 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Releasing lock "refresh_cache-575b6526-de38-4a80-a952-be1b891b4792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.215 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Instance network_info: |[{"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.219 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Start _get_guest_xml network_info=[{"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.226 253542 WARNING nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.234 253542 DEBUG nova.virt.libvirt.host [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.236 253542 DEBUG nova.virt.libvirt.host [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.240 253542 DEBUG nova.virt.libvirt.host [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.240 253542 DEBUG nova.virt.libvirt.host [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.241 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.242 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.242 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.243 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.243 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.244 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.244 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.245 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.246 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.247 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.247 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.247 253542 DEBUG nova.virt.hardware [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.251 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:40 compute-0 ceph-mon[75015]: pgmap v1272: 321 pgs: 321 active+clean; 104 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 931 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 25 08:27:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:27:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.640 253542 DEBUG nova.compute.manager [req-47aed39f-8024-48c2-a459-1bbb66a66725 req-6d382777-c191-4a50-b088-ce9c2156ec58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received event network-changed-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.641 253542 DEBUG nova.compute.manager [req-47aed39f-8024-48c2-a459-1bbb66a66725 req-6d382777-c191-4a50-b088-ce9c2156ec58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Refreshing instance network info cache due to event network-changed-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.641 253542 DEBUG oslo_concurrency.lockutils [req-47aed39f-8024-48c2-a459-1bbb66a66725 req-6d382777-c191-4a50-b088-ce9c2156ec58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-575b6526-de38-4a80-a952-be1b891b4792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.641 253542 DEBUG oslo_concurrency.lockutils [req-47aed39f-8024-48c2-a459-1bbb66a66725 req-6d382777-c191-4a50-b088-ce9c2156ec58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-575b6526-de38-4a80-a952-be1b891b4792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.642 253542 DEBUG nova.network.neutron [req-47aed39f-8024-48c2-a459-1bbb66a66725 req-6d382777-c191-4a50-b088-ce9c2156ec58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Refreshing network info cache for port 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:27:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:27:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1518166724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.776 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.807 253542 DEBUG nova.storage.rbd_utils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] rbd image 575b6526-de38-4a80-a952-be1b891b4792_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:40 compute-0 nova_compute[253538]: 2025-11-25 08:27:40.813 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:41.053 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:41.054 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:41.054 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 125 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.5 MiB/s wr, 83 op/s
Nov 25 08:27:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:27:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1259217215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.235 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.237 253542 DEBUG nova.virt.libvirt.vif [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:27:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-719907871',display_name='tempest-ImagesNegativeTestJSON-server-719907871',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-719907871',id=27,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='16044f9687494680b68b927090e5afc5',ramdisk_id='',reservation_id='r-a19z8id3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1651122487',owner_user_name='tempest-ImagesNegativeTestJSON-1651122487-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:27:36Z,user_data=None,user_id='cbe175b10d9243369c5cae0b1a0c718b',uuid=575b6526-de38-4a80-a952-be1b891b4792,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.237 253542 DEBUG nova.network.os_vif_util [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Converting VIF {"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.238 253542 DEBUG nova.network.os_vif_util [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:9a:0c,bridge_name='br-int',has_traffic_filtering=True,id=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2,network=Network(57f2ccd3-ed2f-4d2f-8493-3dd1452e16da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de27b55-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.239 253542 DEBUG nova.objects.instance [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 575b6526-de38-4a80-a952-be1b891b4792 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.253 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:27:41 compute-0 nova_compute[253538]:   <uuid>575b6526-de38-4a80-a952-be1b891b4792</uuid>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   <name>instance-0000001b</name>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <nova:name>tempest-ImagesNegativeTestJSON-server-719907871</nova:name>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:27:40</nova:creationTime>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:27:41 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:27:41 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:27:41 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:27:41 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:27:41 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:27:41 compute-0 nova_compute[253538]:         <nova:user uuid="cbe175b10d9243369c5cae0b1a0c718b">tempest-ImagesNegativeTestJSON-1651122487-project-member</nova:user>
Nov 25 08:27:41 compute-0 nova_compute[253538]:         <nova:project uuid="16044f9687494680b68b927090e5afc5">tempest-ImagesNegativeTestJSON-1651122487</nova:project>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:27:41 compute-0 nova_compute[253538]:         <nova:port uuid="1de27b55-f4ed-42e6-a9b2-65d84a8a77f2">
Nov 25 08:27:41 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <system>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <entry name="serial">575b6526-de38-4a80-a952-be1b891b4792</entry>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <entry name="uuid">575b6526-de38-4a80-a952-be1b891b4792</entry>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     </system>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   <os>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   </os>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   <features>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   </features>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/575b6526-de38-4a80-a952-be1b891b4792_disk">
Nov 25 08:27:41 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       </source>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:27:41 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/575b6526-de38-4a80-a952-be1b891b4792_disk.config">
Nov 25 08:27:41 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       </source>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:27:41 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:14:9a:0c"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <target dev="tap1de27b55-f4"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792/console.log" append="off"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <video>
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     </video>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:27:41 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:27:41 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:27:41 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:27:41 compute-0 nova_compute[253538]: </domain>
Nov 25 08:27:41 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.254 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Preparing to wait for external event network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.254 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "575b6526-de38-4a80-a952-be1b891b4792-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.255 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.255 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.255 253542 DEBUG nova.virt.libvirt.vif [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:27:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-719907871',display_name='tempest-ImagesNegativeTestJSON-server-719907871',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-719907871',id=27,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='16044f9687494680b68b927090e5afc5',ramdisk_id='',reservation_id='r-a19z8id3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1651122487',owner_user_name='tempest-ImagesNegativeTestJSON-1651122487-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:27:36Z,user_data=None,user_id='cbe175b10d9243369c5cae0b1a0c718b',uuid=575b6526-de38-4a80-a952-be1b891b4792,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.256 253542 DEBUG nova.network.os_vif_util [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Converting VIF {"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.256 253542 DEBUG nova.network.os_vif_util [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:9a:0c,bridge_name='br-int',has_traffic_filtering=True,id=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2,network=Network(57f2ccd3-ed2f-4d2f-8493-3dd1452e16da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de27b55-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.257 253542 DEBUG os_vif [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:9a:0c,bridge_name='br-int',has_traffic_filtering=True,id=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2,network=Network(57f2ccd3-ed2f-4d2f-8493-3dd1452e16da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de27b55-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.257 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.257 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.258 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.263 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.264 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1de27b55-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.265 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1de27b55-f4, col_values=(('external_ids', {'iface-id': '1de27b55-f4ed-42e6-a9b2-65d84a8a77f2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:9a:0c', 'vm-uuid': '575b6526-de38-4a80-a952-be1b891b4792'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:41 compute-0 NetworkManager[48915]: <info>  [1764059261.2681] manager: (tap1de27b55-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.269 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.274 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.275 253542 INFO os_vif [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:9a:0c,bridge_name='br-int',has_traffic_filtering=True,id=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2,network=Network(57f2ccd3-ed2f-4d2f-8493-3dd1452e16da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de27b55-f4')
Nov 25 08:27:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1518166724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1259217215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.336 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.337 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.337 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] No VIF found with MAC fa:16:3e:14:9a:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.338 253542 INFO nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Using config drive
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.368 253542 DEBUG nova.storage.rbd_utils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] rbd image 575b6526-de38-4a80-a952-be1b891b4792_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.869 253542 INFO nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Creating config drive at /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792/disk.config
Nov 25 08:27:41 compute-0 nova_compute[253538]: 2025-11-25 08:27:41.873 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6jevnn_8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:42 compute-0 nova_compute[253538]: 2025-11-25 08:27:42.005 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6jevnn_8" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:42 compute-0 nova_compute[253538]: 2025-11-25 08:27:42.031 253542 DEBUG nova.storage.rbd_utils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] rbd image 575b6526-de38-4a80-a952-be1b891b4792_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:42 compute-0 nova_compute[253538]: 2025-11-25 08:27:42.035 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792/disk.config 575b6526-de38-4a80-a952-be1b891b4792_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:42 compute-0 nova_compute[253538]: 2025-11-25 08:27:42.157 253542 DEBUG nova.network.neutron [req-47aed39f-8024-48c2-a459-1bbb66a66725 req-6d382777-c191-4a50-b088-ce9c2156ec58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Updated VIF entry in instance network info cache for port 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:27:42 compute-0 nova_compute[253538]: 2025-11-25 08:27:42.159 253542 DEBUG nova.network.neutron [req-47aed39f-8024-48c2-a459-1bbb66a66725 req-6d382777-c191-4a50-b088-ce9c2156ec58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Updating instance_info_cache with network_info: [{"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:27:42 compute-0 nova_compute[253538]: 2025-11-25 08:27:42.174 253542 DEBUG oslo_concurrency.lockutils [req-47aed39f-8024-48c2-a459-1bbb66a66725 req-6d382777-c191-4a50-b088-ce9c2156ec58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-575b6526-de38-4a80-a952-be1b891b4792" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:27:42 compute-0 ceph-mon[75015]: pgmap v1273: 321 pgs: 321 active+clean; 125 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.5 MiB/s wr, 83 op/s
Nov 25 08:27:42 compute-0 nova_compute[253538]: 2025-11-25 08:27:42.420 253542 DEBUG oslo_concurrency.processutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792/disk.config 575b6526-de38-4a80-a952-be1b891b4792_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:42 compute-0 nova_compute[253538]: 2025-11-25 08:27:42.421 253542 INFO nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Deleting local config drive /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792/disk.config because it was imported into RBD.
Nov 25 08:27:42 compute-0 kernel: tap1de27b55-f4: entered promiscuous mode
Nov 25 08:27:42 compute-0 NetworkManager[48915]: <info>  [1764059262.5100] manager: (tap1de27b55-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Nov 25 08:27:42 compute-0 ovn_controller[152859]: 2025-11-25T08:27:42Z|00156|binding|INFO|Claiming lport 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 for this chassis.
Nov 25 08:27:42 compute-0 ovn_controller[152859]: 2025-11-25T08:27:42Z|00157|binding|INFO|1de27b55-f4ed-42e6-a9b2-65d84a8a77f2: Claiming fa:16:3e:14:9a:0c 10.100.0.9
Nov 25 08:27:42 compute-0 nova_compute[253538]: 2025-11-25 08:27:42.513 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:42 compute-0 nova_compute[253538]: 2025-11-25 08:27:42.520 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.526 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:9a:0c 10.100.0.9'], port_security=['fa:16:3e:14:9a:0c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '575b6526-de38-4a80-a952-be1b891b4792', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '16044f9687494680b68b927090e5afc5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c14b24f1-7e28-4619-b440-53caae2d78a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05232433-f85e-44a8-b448-6f9e1f5b2c59, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.528 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 in datapath 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da bound to our chassis
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.530 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.542 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[50f8c6ca-a330-42e9-bad1-5dc6b683dd04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.547 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap57f2ccd3-e1 in ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:27:42 compute-0 systemd-machined[215790]: New machine qemu-32-instance-0000001b.
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.551 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap57f2ccd3-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.551 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[477f6a10-ffa7-47a0-8333-807e0b428c2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.552 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b2328658-8d74-4187-9b02-b4eba664ce02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:42 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-0000001b.
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.566 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[774ff828-195b-4ab0-bf6e-95290e31883f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.591 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[baad3c66-d6e6-4fca-85a2-5f0aa1ce9647]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:42 compute-0 systemd-udevd[288812]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:27:42 compute-0 ovn_controller[152859]: 2025-11-25T08:27:42Z|00158|binding|INFO|Setting lport 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 ovn-installed in OVS
Nov 25 08:27:42 compute-0 ovn_controller[152859]: 2025-11-25T08:27:42Z|00159|binding|INFO|Setting lport 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 up in Southbound
Nov 25 08:27:42 compute-0 nova_compute[253538]: 2025-11-25 08:27:42.616 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:42 compute-0 NetworkManager[48915]: <info>  [1764059262.6230] device (tap1de27b55-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:27:42 compute-0 NetworkManager[48915]: <info>  [1764059262.6242] device (tap1de27b55-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.634 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ef74eecb-0284-4fb5-b7d2-dfc53627c249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:42 compute-0 NetworkManager[48915]: <info>  [1764059262.6412] manager: (tap57f2ccd3-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/81)
Nov 25 08:27:42 compute-0 systemd-udevd[288820]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.640 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8b37b037-d516-458b-974c-0cf55ce949db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.678 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[82595a53-a03f-49d5-8377-8ccf6978a2b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.681 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[69379c7c-2785-49c2-8918-92c25a19f361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:42 compute-0 NetworkManager[48915]: <info>  [1764059262.7106] device (tap57f2ccd3-e0): carrier: link connected
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.717 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c2b829-ed47-412d-9d38-872dd0d1ca98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.733 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[139e32de-0a17-4630-ab83-8f3517953dc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap57f2ccd3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:48:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461730, 'reachable_time': 16521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288842, 'error': None, 'target': 'ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.751 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2a894e95-527f-4dd6-a6c3-48815bdd12e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:48fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461730, 'tstamp': 461730}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288843, 'error': None, 'target': 'ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.770 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b86809fd-4b1e-48ee-b430-cc6d38c5c333]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap57f2ccd3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:48:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461730, 'reachable_time': 16521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288844, 'error': None, 'target': 'ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.808 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[82eb9985-25c2-4aa1-b36b-c35c38a359fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.858 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab595cb-cccc-411b-b32c-155c4a73080b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.859 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57f2ccd3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.860 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.860 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57f2ccd3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:42 compute-0 nova_compute[253538]: 2025-11-25 08:27:42.862 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:42 compute-0 kernel: tap57f2ccd3-e0: entered promiscuous mode
Nov 25 08:27:42 compute-0 NetworkManager[48915]: <info>  [1764059262.8632] manager: (tap57f2ccd3-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Nov 25 08:27:42 compute-0 nova_compute[253538]: 2025-11-25 08:27:42.865 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.867 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap57f2ccd3-e0, col_values=(('external_ids', {'iface-id': 'f5b8b379-b9d0-48f1-8e76-7cf52c7f9630'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:42 compute-0 nova_compute[253538]: 2025-11-25 08:27:42.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:42 compute-0 nova_compute[253538]: 2025-11-25 08:27:42.869 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:42 compute-0 ovn_controller[152859]: 2025-11-25T08:27:42Z|00160|binding|INFO|Releasing lport f5b8b379-b9d0-48f1-8e76-7cf52c7f9630 from this chassis (sb_readonly=0)
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.870 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/57f2ccd3-ed2f-4d2f-8493-3dd1452e16da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/57f2ccd3-ed2f-4d2f-8493-3dd1452e16da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.872 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c2aae2f4-5f2e-4ccb-afed-e2756dd7552d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.873 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/57f2ccd3-ed2f-4d2f-8493-3dd1452e16da.pid.haproxy
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:27:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:42.876 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'env', 'PROCESS_TAG=haproxy-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/57f2ccd3-ed2f-4d2f-8493-3dd1452e16da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:27:42 compute-0 nova_compute[253538]: 2025-11-25 08:27:42.883 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 106 op/s
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.092 253542 DEBUG nova.compute.manager [req-84bd18c8-a7c5-44e5-9f86-4a7f5dbc83bf req-b1253b4e-1ef4-4fe6-9af0-3672e8889cbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received event network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.093 253542 DEBUG oslo_concurrency.lockutils [req-84bd18c8-a7c5-44e5-9f86-4a7f5dbc83bf req-b1253b4e-1ef4-4fe6-9af0-3672e8889cbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "575b6526-de38-4a80-a952-be1b891b4792-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.093 253542 DEBUG oslo_concurrency.lockutils [req-84bd18c8-a7c5-44e5-9f86-4a7f5dbc83bf req-b1253b4e-1ef4-4fe6-9af0-3672e8889cbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.094 253542 DEBUG oslo_concurrency.lockutils [req-84bd18c8-a7c5-44e5-9f86-4a7f5dbc83bf req-b1253b4e-1ef4-4fe6-9af0-3672e8889cbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.094 253542 DEBUG nova.compute.manager [req-84bd18c8-a7c5-44e5-9f86-4a7f5dbc83bf req-b1253b4e-1ef4-4fe6-9af0-3672e8889cbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Processing event network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:27:43 compute-0 podman[288876]: 2025-11-25 08:27:43.3065462 +0000 UTC m=+0.039123494 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:27:43 compute-0 podman[288876]: 2025-11-25 08:27:43.453695054 +0000 UTC m=+0.186272328 container create a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 08:27:43 compute-0 systemd[1]: Started libpod-conmon-a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91.scope.
Nov 25 08:27:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754c241e4d2ab39b3351c7386a235b84734f23374e154e299f4df3e053aaff0b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:27:43 compute-0 podman[288876]: 2025-11-25 08:27:43.767578762 +0000 UTC m=+0.500156106 container init a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 08:27:43 compute-0 podman[288876]: 2025-11-25 08:27:43.780809609 +0000 UTC m=+0.513386893 container start a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 08:27:43 compute-0 neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da[288890]: [NOTICE]   (288928) : New worker (288933) forked
Nov 25 08:27:43 compute-0 neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da[288890]: [NOTICE]   (288928) : Loading success.
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.918 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059263.9179223, 575b6526-de38-4a80-a952-be1b891b4792 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.919 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] VM Started (Lifecycle Event)
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.921 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.924 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.928 253542 INFO nova.virt.libvirt.driver [-] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Instance spawned successfully.
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.929 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.940 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.945 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.949 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.950 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.950 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.951 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.952 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.952 253542 DEBUG nova.virt.libvirt.driver [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.960 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.960 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059263.9187248, 575b6526-de38-4a80-a952-be1b891b4792 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.960 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] VM Paused (Lifecycle Event)
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.977 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.980 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059263.9238884, 575b6526-de38-4a80-a952-be1b891b4792 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.980 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] VM Resumed (Lifecycle Event)
Nov 25 08:27:43 compute-0 nova_compute[253538]: 2025-11-25 08:27:43.997 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:44 compute-0 nova_compute[253538]: 2025-11-25 08:27:44.000 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:27:44 compute-0 nova_compute[253538]: 2025-11-25 08:27:44.005 253542 INFO nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Took 7.36 seconds to spawn the instance on the hypervisor.
Nov 25 08:27:44 compute-0 nova_compute[253538]: 2025-11-25 08:27:44.005 253542 DEBUG nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:44 compute-0 nova_compute[253538]: 2025-11-25 08:27:44.013 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:27:44 compute-0 nova_compute[253538]: 2025-11-25 08:27:44.058 253542 INFO nova.compute.manager [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Took 8.52 seconds to build instance.
Nov 25 08:27:44 compute-0 nova_compute[253538]: 2025-11-25 08:27:44.070 253542 DEBUG oslo_concurrency.lockutils [None req-953d03a0-0edd-4a18-b9f6-3c24a3ba24fc cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:44 compute-0 ceph-mon[75015]: pgmap v1274: 321 pgs: 321 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 106 op/s
Nov 25 08:27:44 compute-0 nova_compute[253538]: 2025-11-25 08:27:44.791 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:27:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 25 08:27:45 compute-0 nova_compute[253538]: 2025-11-25 08:27:45.191 253542 DEBUG nova.compute.manager [req-1bc5dc39-0b6b-4913-9311-c1334be29b5d req-96df723b-d274-400e-81af-f07152cf5d7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received event network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:45 compute-0 nova_compute[253538]: 2025-11-25 08:27:45.191 253542 DEBUG oslo_concurrency.lockutils [req-1bc5dc39-0b6b-4913-9311-c1334be29b5d req-96df723b-d274-400e-81af-f07152cf5d7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "575b6526-de38-4a80-a952-be1b891b4792-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:45 compute-0 nova_compute[253538]: 2025-11-25 08:27:45.191 253542 DEBUG oslo_concurrency.lockutils [req-1bc5dc39-0b6b-4913-9311-c1334be29b5d req-96df723b-d274-400e-81af-f07152cf5d7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:45 compute-0 nova_compute[253538]: 2025-11-25 08:27:45.192 253542 DEBUG oslo_concurrency.lockutils [req-1bc5dc39-0b6b-4913-9311-c1334be29b5d req-96df723b-d274-400e-81af-f07152cf5d7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:45 compute-0 nova_compute[253538]: 2025-11-25 08:27:45.192 253542 DEBUG nova.compute.manager [req-1bc5dc39-0b6b-4913-9311-c1334be29b5d req-96df723b-d274-400e-81af-f07152cf5d7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] No waiting events found dispatching network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:27:45 compute-0 nova_compute[253538]: 2025-11-25 08:27:45.192 253542 WARNING nova.compute.manager [req-1bc5dc39-0b6b-4913-9311-c1334be29b5d req-96df723b-d274-400e-81af-f07152cf5d7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received unexpected event network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 for instance with vm_state active and task_state None.
Nov 25 08:27:45 compute-0 nova_compute[253538]: 2025-11-25 08:27:45.567 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "575b6526-de38-4a80-a952-be1b891b4792" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:45 compute-0 nova_compute[253538]: 2025-11-25 08:27:45.568 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:45 compute-0 nova_compute[253538]: 2025-11-25 08:27:45.568 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "575b6526-de38-4a80-a952-be1b891b4792-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:45 compute-0 nova_compute[253538]: 2025-11-25 08:27:45.569 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:45 compute-0 nova_compute[253538]: 2025-11-25 08:27:45.569 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:45 compute-0 nova_compute[253538]: 2025-11-25 08:27:45.570 253542 INFO nova.compute.manager [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Terminating instance
Nov 25 08:27:45 compute-0 nova_compute[253538]: 2025-11-25 08:27:45.571 253542 DEBUG nova.compute.manager [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:27:45 compute-0 ceph-mon[75015]: pgmap v1275: 321 pgs: 321 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 25 08:27:45 compute-0 kernel: tap1de27b55-f4 (unregistering): left promiscuous mode
Nov 25 08:27:45 compute-0 NetworkManager[48915]: <info>  [1764059265.7046] device (tap1de27b55-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:27:45 compute-0 ovn_controller[152859]: 2025-11-25T08:27:45Z|00161|binding|INFO|Releasing lport 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 from this chassis (sb_readonly=0)
Nov 25 08:27:45 compute-0 ovn_controller[152859]: 2025-11-25T08:27:45Z|00162|binding|INFO|Setting lport 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 down in Southbound
Nov 25 08:27:45 compute-0 ovn_controller[152859]: 2025-11-25T08:27:45Z|00163|binding|INFO|Removing iface tap1de27b55-f4 ovn-installed in OVS
Nov 25 08:27:45 compute-0 nova_compute[253538]: 2025-11-25 08:27:45.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:45.788 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:9a:0c 10.100.0.9'], port_security=['fa:16:3e:14:9a:0c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '575b6526-de38-4a80-a952-be1b891b4792', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '16044f9687494680b68b927090e5afc5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c14b24f1-7e28-4619-b440-53caae2d78a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05232433-f85e-44a8-b448-6f9e1f5b2c59, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:27:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:45.790 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 in datapath 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da unbound from our chassis
Nov 25 08:27:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:45.791 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:27:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:45.792 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[54a61603-4817-43e2-8307-ee3923036067]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:45.793 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da namespace which is not needed anymore
Nov 25 08:27:45 compute-0 nova_compute[253538]: 2025-11-25 08:27:45.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:45 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Nov 25 08:27:45 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001b.scope: Consumed 2.879s CPU time.
Nov 25 08:27:45 compute-0 systemd-machined[215790]: Machine qemu-32-instance-0000001b terminated.
Nov 25 08:27:45 compute-0 kernel: tap1de27b55-f4: entered promiscuous mode
Nov 25 08:27:45 compute-0 NetworkManager[48915]: <info>  [1764059265.9942] manager: (tap1de27b55-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Nov 25 08:27:46 compute-0 ovn_controller[152859]: 2025-11-25T08:27:46Z|00164|binding|INFO|Claiming lport 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 for this chassis.
Nov 25 08:27:46 compute-0 ovn_controller[152859]: 2025-11-25T08:27:46Z|00165|binding|INFO|1de27b55-f4ed-42e6-a9b2-65d84a8a77f2: Claiming fa:16:3e:14:9a:0c 10.100.0.9
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:46 compute-0 kernel: tap1de27b55-f4 (unregistering): left promiscuous mode
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.014 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:9a:0c 10.100.0.9'], port_security=['fa:16:3e:14:9a:0c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '575b6526-de38-4a80-a952-be1b891b4792', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '16044f9687494680b68b927090e5afc5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c14b24f1-7e28-4619-b440-53caae2d78a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05232433-f85e-44a8-b448-6f9e1f5b2c59, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:27:46 compute-0 neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da[288890]: [NOTICE]   (288928) : haproxy version is 2.8.14-c23fe91
Nov 25 08:27:46 compute-0 neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da[288890]: [NOTICE]   (288928) : path to executable is /usr/sbin/haproxy
Nov 25 08:27:46 compute-0 neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da[288890]: [WARNING]  (288928) : Exiting Master process...
Nov 25 08:27:46 compute-0 neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da[288890]: [WARNING]  (288928) : Exiting Master process...
Nov 25 08:27:46 compute-0 neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da[288890]: [ALERT]    (288928) : Current worker (288933) exited with code 143 (Terminated)
Nov 25 08:27:46 compute-0 neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da[288890]: [WARNING]  (288928) : All workers exited. Exiting... (0)
Nov 25 08:27:46 compute-0 systemd[1]: libpod-a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91.scope: Deactivated successfully.
Nov 25 08:27:46 compute-0 ovn_controller[152859]: 2025-11-25T08:27:46Z|00166|binding|INFO|Releasing lport 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 from this chassis (sb_readonly=0)
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.041 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.043 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:46 compute-0 podman[288972]: 2025-11-25 08:27:46.044065036 +0000 UTC m=+0.148780799 container died a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.046 253542 INFO nova.virt.libvirt.driver [-] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Instance destroyed successfully.
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.047 253542 DEBUG nova.objects.instance [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lazy-loading 'resources' on Instance uuid 575b6526-de38-4a80-a952-be1b891b4792 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.052 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:9a:0c 10.100.0.9'], port_security=['fa:16:3e:14:9a:0c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '575b6526-de38-4a80-a952-be1b891b4792', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '16044f9687494680b68b927090e5afc5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c14b24f1-7e28-4619-b440-53caae2d78a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05232433-f85e-44a8-b448-6f9e1f5b2c59, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.062 253542 DEBUG nova.virt.libvirt.vif [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:27:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-719907871',display_name='tempest-ImagesNegativeTestJSON-server-719907871',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-719907871',id=27,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:27:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='16044f9687494680b68b927090e5afc5',ramdisk_id='',reservation_id='r-a19z8id3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-1651122487',owner_user_name='tempest-ImagesNegativeTestJSON-1651122487-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:27:44Z,user_data=None,user_id='cbe175b10d9243369c5cae0b1a0c718b',uuid=575b6526-de38-4a80-a952-be1b891b4792,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.062 253542 DEBUG nova.network.os_vif_util [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Converting VIF {"id": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "address": "fa:16:3e:14:9a:0c", "network": {"id": "57f2ccd3-ed2f-4d2f-8493-3dd1452e16da", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1870872754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16044f9687494680b68b927090e5afc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de27b55-f4", "ovs_interfaceid": "1de27b55-f4ed-42e6-a9b2-65d84a8a77f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.063 253542 DEBUG nova.network.os_vif_util [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:9a:0c,bridge_name='br-int',has_traffic_filtering=True,id=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2,network=Network(57f2ccd3-ed2f-4d2f-8493-3dd1452e16da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de27b55-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.063 253542 DEBUG os_vif [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:9a:0c,bridge_name='br-int',has_traffic_filtering=True,id=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2,network=Network(57f2ccd3-ed2f-4d2f-8493-3dd1452e16da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de27b55-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.068 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.069 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1de27b55-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.071 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.076 253542 INFO os_vif [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:9a:0c,bridge_name='br-int',has_traffic_filtering=True,id=1de27b55-f4ed-42e6-a9b2-65d84a8a77f2,network=Network(57f2ccd3-ed2f-4d2f-8493-3dd1452e16da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de27b55-f4')
Nov 25 08:27:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91-userdata-shm.mount: Deactivated successfully.
Nov 25 08:27:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-754c241e4d2ab39b3351c7386a235b84734f23374e154e299f4df3e053aaff0b-merged.mount: Deactivated successfully.
Nov 25 08:27:46 compute-0 podman[288972]: 2025-11-25 08:27:46.10884306 +0000 UTC m=+0.213558823 container cleanup a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:27:46 compute-0 systemd[1]: libpod-conmon-a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91.scope: Deactivated successfully.
Nov 25 08:27:46 compute-0 podman[289026]: 2025-11-25 08:27:46.198839891 +0000 UTC m=+0.067102498 container remove a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.206 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa0e1dd-4789-49c3-9334-c93ebd72c803]: (4, ('Tue Nov 25 08:27:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da (a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91)\na0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91\nTue Nov 25 08:27:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da (a0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91)\na0a016cd70afdaf0b4b5e64fd65184e8ba97739405a3f728b1b0a8d1d9a0ac91\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.208 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4d59de-cef8-4ac2-b742-e63c766b8eca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.211 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57f2ccd3-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.214 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:46 compute-0 kernel: tap57f2ccd3-e0: left promiscuous mode
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.220 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5990001e-08be-4f9c-8012-15801acf57bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.238 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7ccbc2dd-5cc7-4b38-bfa1-03c8ae8f474e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.240 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[11498008-a8a5-4ec9-9875-04a642dab501]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.255 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[efde0ba3-baf8-422c-a8af-501623aa1f0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461722, 'reachable_time': 18279, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289041, 'error': None, 'target': 'ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d57f2ccd3\x2ded2f\x2d4d2f\x2d8493\x2d3dd1452e16da.mount: Deactivated successfully.
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.258 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-57f2ccd3-ed2f-4d2f-8493-3dd1452e16da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.258 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[078a0fcf-ac5d-4ed9-a688-0f58846028a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.261 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 in datapath 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da unbound from our chassis
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.262 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.263 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fa1634a4-0b63-4976-bd57-25503727750d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.263 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 in datapath 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da unbound from our chassis
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.264 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 57f2ccd3-ed2f-4d2f-8493-3dd1452e16da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:27:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:46.265 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e89017b6-fd54-4f32-83f8-3fd599c524b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.834 253542 INFO nova.virt.libvirt.driver [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Deleting instance files /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792_del
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.835 253542 INFO nova.virt.libvirt.driver [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Deletion of /var/lib/nova/instances/575b6526-de38-4a80-a952-be1b891b4792_del complete
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.907 253542 INFO nova.compute.manager [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Took 1.34 seconds to destroy the instance on the hypervisor.
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.909 253542 DEBUG oslo.service.loopingcall [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.909 253542 DEBUG nova.compute.manager [-] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:27:46 compute-0 nova_compute[253538]: 2025-11-25 08:27:46.910 253542 DEBUG nova.network.neutron [-] [instance: 575b6526-de38-4a80-a952-be1b891b4792] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:27:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 121 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Nov 25 08:27:47 compute-0 nova_compute[253538]: 2025-11-25 08:27:47.330 253542 DEBUG nova.compute.manager [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received event network-vif-unplugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:47 compute-0 nova_compute[253538]: 2025-11-25 08:27:47.331 253542 DEBUG oslo_concurrency.lockutils [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "575b6526-de38-4a80-a952-be1b891b4792-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:47 compute-0 nova_compute[253538]: 2025-11-25 08:27:47.331 253542 DEBUG oslo_concurrency.lockutils [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:47 compute-0 nova_compute[253538]: 2025-11-25 08:27:47.332 253542 DEBUG oslo_concurrency.lockutils [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:47 compute-0 nova_compute[253538]: 2025-11-25 08:27:47.332 253542 DEBUG nova.compute.manager [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] No waiting events found dispatching network-vif-unplugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:27:47 compute-0 nova_compute[253538]: 2025-11-25 08:27:47.332 253542 DEBUG nova.compute.manager [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received event network-vif-unplugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:27:47 compute-0 nova_compute[253538]: 2025-11-25 08:27:47.332 253542 DEBUG nova.compute.manager [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received event network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:47 compute-0 nova_compute[253538]: 2025-11-25 08:27:47.332 253542 DEBUG oslo_concurrency.lockutils [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "575b6526-de38-4a80-a952-be1b891b4792-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:47 compute-0 nova_compute[253538]: 2025-11-25 08:27:47.333 253542 DEBUG oslo_concurrency.lockutils [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:47 compute-0 nova_compute[253538]: 2025-11-25 08:27:47.333 253542 DEBUG oslo_concurrency.lockutils [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:47 compute-0 nova_compute[253538]: 2025-11-25 08:27:47.333 253542 DEBUG nova.compute.manager [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] No waiting events found dispatching network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:27:47 compute-0 nova_compute[253538]: 2025-11-25 08:27:47.333 253542 WARNING nova.compute.manager [req-794c29ef-a149-44a9-9b74-36e5acea01dd req-969727de-f242-4b18-8a02-d71b97fea74b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received unexpected event network-vif-plugged-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 for instance with vm_state active and task_state deleting.
Nov 25 08:27:48 compute-0 ceph-mon[75015]: pgmap v1276: 321 pgs: 321 active+clean; 121 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Nov 25 08:27:48 compute-0 nova_compute[253538]: 2025-11-25 08:27:48.653 253542 DEBUG nova.network.neutron [-] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:27:48 compute-0 nova_compute[253538]: 2025-11-25 08:27:48.676 253542 INFO nova.compute.manager [-] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Took 1.77 seconds to deallocate network for instance.
Nov 25 08:27:48 compute-0 nova_compute[253538]: 2025-11-25 08:27:48.739 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:48 compute-0 nova_compute[253538]: 2025-11-25 08:27:48.740 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:48 compute-0 nova_compute[253538]: 2025-11-25 08:27:48.815 253542 DEBUG oslo_concurrency.processutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 117 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 148 op/s
Nov 25 08:27:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:27:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3834716908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:49 compute-0 nova_compute[253538]: 2025-11-25 08:27:49.315 253542 DEBUG oslo_concurrency.processutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:49 compute-0 nova_compute[253538]: 2025-11-25 08:27:49.322 253542 DEBUG nova.compute.provider_tree [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:27:49 compute-0 nova_compute[253538]: 2025-11-25 08:27:49.336 253542 DEBUG nova.scheduler.client.report [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:27:49 compute-0 nova_compute[253538]: 2025-11-25 08:27:49.392 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:49 compute-0 nova_compute[253538]: 2025-11-25 08:27:49.412 253542 DEBUG nova.compute.manager [req-e73a15f3-0671-4374-b80b-bd34353be108 req-daddbc0b-843c-4473-9be6-48d928b4d2e8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Received event network-vif-deleted-1de27b55-f4ed-42e6-a9b2-65d84a8a77f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:49 compute-0 ovn_controller[152859]: 2025-11-25T08:27:49Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:2b:39 10.100.0.13
Nov 25 08:27:49 compute-0 ovn_controller[152859]: 2025-11-25T08:27:49Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:2b:39 10.100.0.13
Nov 25 08:27:49 compute-0 nova_compute[253538]: 2025-11-25 08:27:49.449 253542 INFO nova.scheduler.client.report [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Deleted allocations for instance 575b6526-de38-4a80-a952-be1b891b4792
Nov 25 08:27:49 compute-0 nova_compute[253538]: 2025-11-25 08:27:49.547 253542 DEBUG oslo_concurrency.lockutils [None req-99878b1a-80c7-4f02-93f8-9a091688e41f cbe175b10d9243369c5cae0b1a0c718b 16044f9687494680b68b927090e5afc5 - - default default] Lock "575b6526-de38-4a80-a952-be1b891b4792" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:49 compute-0 nova_compute[253538]: 2025-11-25 08:27:49.793 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:27:49 compute-0 nova_compute[253538]: 2025-11-25 08:27:49.994 253542 DEBUG nova.virt.libvirt.driver [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:27:50 compute-0 ceph-mon[75015]: pgmap v1277: 321 pgs: 321 active+clean; 117 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 148 op/s
Nov 25 08:27:50 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3834716908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:51 compute-0 nova_compute[253538]: 2025-11-25 08:27:51.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 104 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 180 op/s
Nov 25 08:27:52 compute-0 ceph-mon[75015]: pgmap v1278: 321 pgs: 321 active+clean; 104 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 180 op/s
Nov 25 08:27:52 compute-0 kernel: tapd6aa33fe-8d (unregistering): left promiscuous mode
Nov 25 08:27:52 compute-0 NetworkManager[48915]: <info>  [1764059272.4284] device (tapd6aa33fe-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:27:52 compute-0 ovn_controller[152859]: 2025-11-25T08:27:52Z|00167|binding|INFO|Releasing lport d6aa33fe-8dd6-4546-aa75-715ad57e5b5c from this chassis (sb_readonly=0)
Nov 25 08:27:52 compute-0 ovn_controller[152859]: 2025-11-25T08:27:52Z|00168|binding|INFO|Setting lport d6aa33fe-8dd6-4546-aa75-715ad57e5b5c down in Southbound
Nov 25 08:27:52 compute-0 ovn_controller[152859]: 2025-11-25T08:27:52Z|00169|binding|INFO|Removing iface tapd6aa33fe-8d ovn-installed in OVS
Nov 25 08:27:52 compute-0 nova_compute[253538]: 2025-11-25 08:27:52.487 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:52 compute-0 nova_compute[253538]: 2025-11-25 08:27:52.490 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.511 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:2b:39 10.100.0.13'], port_security=['fa:16:3e:93:2b:39 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '30afcbba-78f3-433c-ba0a-5a2d25cf2d48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:27:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.514 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d6aa33fe-8dd6-4546-aa75-715ad57e5b5c in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e unbound from our chassis
Nov 25 08:27:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.516 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba659d6c-c094-47d7-ba45-d0e659ce778e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:27:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.517 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bb132d1d-3791-47d8-9d27-c59547b429cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.518 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace which is not needed anymore
Nov 25 08:27:52 compute-0 nova_compute[253538]: 2025-11-25 08:27:52.521 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:52 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Nov 25 08:27:52 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001a.scope: Consumed 13.638s CPU time.
Nov 25 08:27:52 compute-0 systemd-machined[215790]: Machine qemu-31-instance-0000001a terminated.
Nov 25 08:27:52 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[288351]: [NOTICE]   (288371) : haproxy version is 2.8.14-c23fe91
Nov 25 08:27:52 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[288351]: [NOTICE]   (288371) : path to executable is /usr/sbin/haproxy
Nov 25 08:27:52 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[288351]: [WARNING]  (288371) : Exiting Master process...
Nov 25 08:27:52 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[288351]: [ALERT]    (288371) : Current worker (288376) exited with code 143 (Terminated)
Nov 25 08:27:52 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[288351]: [WARNING]  (288371) : All workers exited. Exiting... (0)
Nov 25 08:27:52 compute-0 systemd[1]: libpod-48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529.scope: Deactivated successfully.
Nov 25 08:27:52 compute-0 podman[289091]: 2025-11-25 08:27:52.683704618 +0000 UTC m=+0.071022647 container died 48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 08:27:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529-userdata-shm.mount: Deactivated successfully.
Nov 25 08:27:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-dedff8d23f1f42a2bd3738ba82be5e7d6fdc0734473529518931479e1db062ba-merged.mount: Deactivated successfully.
Nov 25 08:27:52 compute-0 podman[289091]: 2025-11-25 08:27:52.750452602 +0000 UTC m=+0.137770621 container cleanup 48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 08:27:52 compute-0 systemd[1]: libpod-conmon-48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529.scope: Deactivated successfully.
Nov 25 08:27:52 compute-0 podman[289132]: 2025-11-25 08:27:52.825698432 +0000 UTC m=+0.052295646 container remove 48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:27:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.832 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7109513b-a5e2-4780-aedf-140b4cac6f4a]: (4, ('Tue Nov 25 08:27:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529)\n48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529\nTue Nov 25 08:27:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529)\n48affac628b5196fd72e2915e1684dca5200b312cdab4ddbed73be0f5b5df529\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.834 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c4eb6c86-879f-450a-82e0-954af3fb9635]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.835 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:52 compute-0 kernel: tapba659d6c-c0: left promiscuous mode
Nov 25 08:27:52 compute-0 nova_compute[253538]: 2025-11-25 08:27:52.837 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:52 compute-0 nova_compute[253538]: 2025-11-25 08:27:52.859 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.862 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a95262b0-89fb-460b-8d87-ba7245e11c65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.876 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d6cab376-2519-4d7d-bf47-dd6fd7bb9996]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.878 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b19fedf7-d269-4a15-b3ad-3441296da47b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.899 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8d63a616-520c-48e3-9cd7-2c7e34dcd3d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461034, 'reachable_time': 16044, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289150, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.901 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:27:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:27:52.901 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a3083430-1856-4580-8160-d58e618cf9b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:27:52 compute-0 systemd[1]: run-netns-ovnmeta\x2dba659d6c\x2dc094\x2d47d7\x2dba45\x2dd0e659ce778e.mount: Deactivated successfully.
Nov 25 08:27:52 compute-0 nova_compute[253538]: 2025-11-25 08:27:52.921 253542 DEBUG nova.compute.manager [req-c48052f3-0bea-4429-869e-7b5ed89d6fd2 req-c597cdab-dd8b-4d4d-8bd9-ddb4fd9e2141 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received event network-vif-unplugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:52 compute-0 nova_compute[253538]: 2025-11-25 08:27:52.922 253542 DEBUG oslo_concurrency.lockutils [req-c48052f3-0bea-4429-869e-7b5ed89d6fd2 req-c597cdab-dd8b-4d4d-8bd9-ddb4fd9e2141 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:52 compute-0 nova_compute[253538]: 2025-11-25 08:27:52.922 253542 DEBUG oslo_concurrency.lockutils [req-c48052f3-0bea-4429-869e-7b5ed89d6fd2 req-c597cdab-dd8b-4d4d-8bd9-ddb4fd9e2141 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:52 compute-0 nova_compute[253538]: 2025-11-25 08:27:52.922 253542 DEBUG oslo_concurrency.lockutils [req-c48052f3-0bea-4429-869e-7b5ed89d6fd2 req-c597cdab-dd8b-4d4d-8bd9-ddb4fd9e2141 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:52 compute-0 nova_compute[253538]: 2025-11-25 08:27:52.922 253542 DEBUG nova.compute.manager [req-c48052f3-0bea-4429-869e-7b5ed89d6fd2 req-c597cdab-dd8b-4d4d-8bd9-ddb4fd9e2141 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] No waiting events found dispatching network-vif-unplugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:27:52 compute-0 nova_compute[253538]: 2025-11-25 08:27:52.923 253542 WARNING nova.compute.manager [req-c48052f3-0bea-4429-869e-7b5ed89d6fd2 req-c597cdab-dd8b-4d4d-8bd9-ddb4fd9e2141 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received unexpected event network-vif-unplugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c for instance with vm_state active and task_state powering-off.
Nov 25 08:27:53 compute-0 nova_compute[253538]: 2025-11-25 08:27:53.009 253542 INFO nova.virt.libvirt.driver [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Instance shutdown successfully after 13 seconds.
Nov 25 08:27:53 compute-0 nova_compute[253538]: 2025-11-25 08:27:53.015 253542 INFO nova.virt.libvirt.driver [-] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Instance destroyed successfully.
Nov 25 08:27:53 compute-0 nova_compute[253538]: 2025-11-25 08:27:53.016 253542 DEBUG nova.objects.instance [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'numa_topology' on Instance uuid 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:53 compute-0 nova_compute[253538]: 2025-11-25 08:27:53.036 253542 DEBUG nova.compute.manager [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 111 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.3 MiB/s wr, 179 op/s
Nov 25 08:27:53 compute-0 nova_compute[253538]: 2025-11-25 08:27:53.125 253542 DEBUG oslo_concurrency.lockutils [None req-a6ead9e6-c858-49aa-82dc-e5d94b3760e8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:27:53
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'vms', '.rgw.root', 'cephfs.cephfs.data']
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:27:53 compute-0 nova_compute[253538]: 2025-11-25 08:27:53.349 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:53 compute-0 nova_compute[253538]: 2025-11-25 08:27:53.350 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:53 compute-0 nova_compute[253538]: 2025-11-25 08:27:53.392 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:27:53 compute-0 nova_compute[253538]: 2025-11-25 08:27:53.501 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:53 compute-0 nova_compute[253538]: 2025-11-25 08:27:53.501 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:53 compute-0 nova_compute[253538]: 2025-11-25 08:27:53.509 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:27:53 compute-0 nova_compute[253538]: 2025-11-25 08:27:53.509 253542 INFO nova.compute.claims [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:27:53 compute-0 nova_compute[253538]: 2025-11-25 08:27:53.639 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:27:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:27:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:27:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/5311080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.087 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.093 253542 DEBUG nova.compute.provider_tree [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.106 253542 DEBUG nova.scheduler.client.report [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:27:54 compute-0 ceph-mon[75015]: pgmap v1279: 321 pgs: 321 active+clean; 111 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.3 MiB/s wr, 179 op/s
Nov 25 08:27:54 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/5311080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.250 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.251 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.319 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.320 253542 DEBUG nova.network.neutron [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.425 253542 INFO nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.461 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.536 253542 DEBUG nova.compute.manager [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.584 253542 INFO nova.compute.manager [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] instance snapshotting
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.585 253542 WARNING nova.compute.manager [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] trying to snapshot a non-running instance: (state: 4 expected: 1)
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.587 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.588 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.588 253542 INFO nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Creating image(s)
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.611 253542 DEBUG nova.storage.rbd_utils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] rbd image 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.639 253542 DEBUG nova.storage.rbd_utils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] rbd image 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.663 253542 DEBUG nova.storage.rbd_utils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] rbd image 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.666 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.724 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.724 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.725 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.725 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.746 253542 DEBUG nova.storage.rbd_utils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] rbd image 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.750 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:54 compute-0 nova_compute[253538]: 2025-11-25 08:27:54.886 253542 INFO nova.virt.libvirt.driver [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Beginning cold snapshot process
Nov 25 08:27:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.021 253542 DEBUG nova.compute.manager [req-d3350719-728d-4f6b-8895-7884f7b90819 req-23c6dbcb-9ba0-4188-bd5c-98777a23a055 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received event network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.021 253542 DEBUG oslo_concurrency.lockutils [req-d3350719-728d-4f6b-8895-7884f7b90819 req-23c6dbcb-9ba0-4188-bd5c-98777a23a055 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.022 253542 DEBUG oslo_concurrency.lockutils [req-d3350719-728d-4f6b-8895-7884f7b90819 req-23c6dbcb-9ba0-4188-bd5c-98777a23a055 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.022 253542 DEBUG oslo_concurrency.lockutils [req-d3350719-728d-4f6b-8895-7884f7b90819 req-23c6dbcb-9ba0-4188-bd5c-98777a23a055 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.022 253542 DEBUG nova.compute.manager [req-d3350719-728d-4f6b-8895-7884f7b90819 req-23c6dbcb-9ba0-4188-bd5c-98777a23a055 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] No waiting events found dispatching network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.022 253542 WARNING nova.compute.manager [req-d3350719-728d-4f6b-8895-7884f7b90819 req-23c6dbcb-9ba0-4188-bd5c-98777a23a055 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received unexpected event network-vif-plugged-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c for instance with vm_state stopped and task_state image_uploading.
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.024 253542 DEBUG nova.policy [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5981df3e8536420ea5b8fcd98ef92e1b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7627e6bf071942db89329eee4a7d6b59', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.029 253542 DEBUG nova.virt.libvirt.imagebackend [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:27:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 121 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 166 op/s
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.230 253542 DEBUG nova.storage.rbd_utils [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(b68f0d230d0b4d0ebbc0c5333f15da85) on rbd image(30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.263 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.292 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.332 253542 DEBUG nova.storage.rbd_utils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] resizing rbd image 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.722 253542 DEBUG nova.network.neutron [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Successfully created port: eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.827 253542 DEBUG nova.objects.instance [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lazy-loading 'migration_context' on Instance uuid 14cd6797-cf47-44da-acac-0e5e3d5dfe11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.840 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.841 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Ensure instance console log exists: /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.841 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.841 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:55 compute-0 nova_compute[253538]: 2025-11-25 08:27:55.842 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:56 compute-0 nova_compute[253538]: 2025-11-25 08:27:56.075 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 25 08:27:56 compute-0 ceph-mon[75015]: pgmap v1280: 321 pgs: 321 active+clean; 121 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 166 op/s
Nov 25 08:27:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 25 08:27:56 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 25 08:27:56 compute-0 nova_compute[253538]: 2025-11-25 08:27:56.321 253542 DEBUG nova.storage.rbd_utils [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] cloning vms/30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk@b68f0d230d0b4d0ebbc0c5333f15da85 to images/d060ded4-54b8-40d0-bea0-dc1f1f572072 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:27:56 compute-0 nova_compute[253538]: 2025-11-25 08:27:56.439 253542 DEBUG nova.storage.rbd_utils [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] flattening images/d060ded4-54b8-40d0-bea0-dc1f1f572072 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:27:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 180 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 6.1 MiB/s wr, 206 op/s
Nov 25 08:27:57 compute-0 ceph-mon[75015]: osdmap e120: 3 total, 3 up, 3 in
Nov 25 08:27:57 compute-0 nova_compute[253538]: 2025-11-25 08:27:57.261 253542 DEBUG nova.storage.rbd_utils [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] removing snapshot(b68f0d230d0b4d0ebbc0c5333f15da85) on rbd image(30afcbba-78f3-433c-ba0a-5a2d25cf2d48_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:27:57 compute-0 nova_compute[253538]: 2025-11-25 08:27:57.396 253542 DEBUG nova.network.neutron [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Successfully updated port: eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:27:57 compute-0 nova_compute[253538]: 2025-11-25 08:27:57.411 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:27:57 compute-0 nova_compute[253538]: 2025-11-25 08:27:57.412 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquired lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:27:57 compute-0 nova_compute[253538]: 2025-11-25 08:27:57.412 253542 DEBUG nova.network.neutron [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:27:57 compute-0 nova_compute[253538]: 2025-11-25 08:27:57.882 253542 DEBUG nova.compute.manager [req-96070fe5-3c6d-412c-beec-ba7e590d0015 req-86449235-eb99-486f-8abc-526a0eec8eeb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-changed-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:27:57 compute-0 nova_compute[253538]: 2025-11-25 08:27:57.883 253542 DEBUG nova.compute.manager [req-96070fe5-3c6d-412c-beec-ba7e590d0015 req-86449235-eb99-486f-8abc-526a0eec8eeb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Refreshing instance network info cache due to event network-changed-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:27:57 compute-0 nova_compute[253538]: 2025-11-25 08:27:57.883 253542 DEBUG oslo_concurrency.lockutils [req-96070fe5-3c6d-412c-beec-ba7e590d0015 req-86449235-eb99-486f-8abc-526a0eec8eeb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:27:57 compute-0 nova_compute[253538]: 2025-11-25 08:27:57.983 253542 DEBUG nova.network.neutron [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:27:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 25 08:27:58 compute-0 ceph-mon[75015]: pgmap v1282: 321 pgs: 321 active+clean; 180 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 6.1 MiB/s wr, 206 op/s
Nov 25 08:27:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 25 08:27:58 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.353 253542 DEBUG nova.storage.rbd_utils [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(snap) on rbd image(d060ded4-54b8-40d0-bea0-dc1f1f572072) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.777 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.778 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.796 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.860 253542 DEBUG nova.network.neutron [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Updating instance_info_cache with network_info: [{"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.872 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.872 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.880 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.881 253542 INFO nova.compute.claims [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.885 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Releasing lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.885 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Instance network_info: |[{"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.886 253542 DEBUG oslo_concurrency.lockutils [req-96070fe5-3c6d-412c-beec-ba7e590d0015 req-86449235-eb99-486f-8abc-526a0eec8eeb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.886 253542 DEBUG nova.network.neutron [req-96070fe5-3c6d-412c-beec-ba7e590d0015 req-86449235-eb99-486f-8abc-526a0eec8eeb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Refreshing network info cache for port eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.889 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Start _get_guest_xml network_info=[{"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.893 253542 WARNING nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.898 253542 DEBUG nova.virt.libvirt.host [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.898 253542 DEBUG nova.virt.libvirt.host [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.901 253542 DEBUG nova.virt.libvirt.host [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.902 253542 DEBUG nova.virt.libvirt.host [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.902 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.903 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.904 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.904 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.904 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.905 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.905 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.905 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.906 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.906 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.907 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.907 253542 DEBUG nova.virt.hardware [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:27:58 compute-0 nova_compute[253538]: 2025-11-25 08:27:58.911 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.041 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 216 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 7.8 MiB/s wr, 166 op/s
Nov 25 08:27:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 25 08:27:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 25 08:27:59 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 25 08:27:59 compute-0 ceph-mon[75015]: osdmap e121: 3 total, 3 up, 3 in
Nov 25 08:27:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:27:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/421263637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.405 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.426 253542 DEBUG nova.storage.rbd_utils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] rbd image 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.430 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:27:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:27:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2498905340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.513 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.520 253542 DEBUG nova.compute.provider_tree [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.539 253542 DEBUG nova.scheduler.client.report [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.692 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.693 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.750 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.751 253542 DEBUG nova.network.neutron [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.797 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:59 compute-0 podman[289562]: 2025-11-25 08:27:59.81933087 +0000 UTC m=+0.066320094 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.830 253542 INFO nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:27:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:27:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/864016032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.876 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.877 253542 DEBUG nova.virt.libvirt.vif [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1019332220',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1019332220',id=28,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7627e6bf071942db89329eee4a7d6b59',ramdisk_id='',reservation_id='r-ykh6cze2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-968002196',owner_user_name='tempest-AttachInterfacesV270Test-968002196-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:27:54Z,user_data=None,user_id='5981df3e8536420ea5b8fcd98ef92e1b',uuid=14cd6797-cf47-44da-acac-0e5e3d5dfe11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.878 253542 DEBUG nova.network.os_vif_util [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converting VIF {"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.879 253542 DEBUG nova.network.os_vif_util [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:13:20,bridge_name='br-int',has_traffic_filtering=True,id=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb3ca9e2-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.879 253542 DEBUG nova.objects.instance [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lazy-loading 'pci_devices' on Instance uuid 14cd6797-cf47-44da-acac-0e5e3d5dfe11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.888 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:27:59 compute-0 nova_compute[253538]:   <uuid>14cd6797-cf47-44da-acac-0e5e3d5dfe11</uuid>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   <name>instance-0000001c</name>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <nova:name>tempest-AttachInterfacesV270Test-server-1019332220</nova:name>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:27:58</nova:creationTime>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:27:59 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:27:59 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:27:59 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:27:59 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:27:59 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:27:59 compute-0 nova_compute[253538]:         <nova:user uuid="5981df3e8536420ea5b8fcd98ef92e1b">tempest-AttachInterfacesV270Test-968002196-project-member</nova:user>
Nov 25 08:27:59 compute-0 nova_compute[253538]:         <nova:project uuid="7627e6bf071942db89329eee4a7d6b59">tempest-AttachInterfacesV270Test-968002196</nova:project>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:27:59 compute-0 nova_compute[253538]:         <nova:port uuid="eb3ca9e2-cc78-478d-97c2-03b1c7d29b95">
Nov 25 08:27:59 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <system>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <entry name="serial">14cd6797-cf47-44da-acac-0e5e3d5dfe11</entry>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <entry name="uuid">14cd6797-cf47-44da-acac-0e5e3d5dfe11</entry>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     </system>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   <os>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   </os>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   <features>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   </features>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk">
Nov 25 08:27:59 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       </source>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:27:59 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk.config">
Nov 25 08:27:59 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       </source>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:27:59 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:f3:13:20"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <target dev="tapeb3ca9e2-cc"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11/console.log" append="off"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <video>
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     </video>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:27:59 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:27:59 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:27:59 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:27:59 compute-0 nova_compute[253538]: </domain>
Nov 25 08:27:59 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.890 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Preparing to wait for external event network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.890 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.890 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.890 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.891 253542 DEBUG nova.virt.libvirt.vif [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1019332220',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1019332220',id=28,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7627e6bf071942db89329eee4a7d6b59',ramdisk_id='',reservation_id='r-ykh6cze2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-968002196',owner_user_name='tempest-AttachInterfacesV270Test-968002196-project-member
'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:27:54Z,user_data=None,user_id='5981df3e8536420ea5b8fcd98ef92e1b',uuid=14cd6797-cf47-44da-acac-0e5e3d5dfe11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.891 253542 DEBUG nova.network.os_vif_util [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converting VIF {"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.892 253542 DEBUG nova.network.os_vif_util [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:13:20,bridge_name='br-int',has_traffic_filtering=True,id=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb3ca9e2-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.892 253542 DEBUG os_vif [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:13:20,bridge_name='br-int',has_traffic_filtering=True,id=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb3ca9e2-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.893 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.893 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.894 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.896 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.896 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb3ca9e2-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.897 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeb3ca9e2-cc, col_values=(('external_ids', {'iface-id': 'eb3ca9e2-cc78-478d-97c2-03b1c7d29b95', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:13:20', 'vm-uuid': '14cd6797-cf47-44da-acac-0e5e3d5dfe11'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.898 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:59 compute-0 NetworkManager[48915]: <info>  [1764059279.8998] manager: (tapeb3ca9e2-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.912 253542 INFO os_vif [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:13:20,bridge_name='br-int',has_traffic_filtering=True,id=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb3ca9e2-cc')
Nov 25 08:27:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:27:59 compute-0 nova_compute[253538]: 2025-11-25 08:27:59.957 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.059 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.060 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.060 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] No VIF found with MAC fa:16:3e:f3:13:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.061 253542 INFO nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Using config drive
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.086 253542 DEBUG nova.storage.rbd_utils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] rbd image 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.144 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.145 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.145 253542 INFO nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Creating image(s)
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.226 253542 DEBUG nova.storage.rbd_utils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.255 253542 DEBUG nova.storage.rbd_utils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.278 253542 DEBUG nova.storage.rbd_utils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.283 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.360 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.361 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.361 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.362 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:00 compute-0 ceph-mon[75015]: pgmap v1284: 321 pgs: 321 active+clean; 216 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 7.8 MiB/s wr, 166 op/s
Nov 25 08:28:00 compute-0 ceph-mon[75015]: osdmap e122: 3 total, 3 up, 3 in
Nov 25 08:28:00 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/421263637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:00 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2498905340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:00 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/864016032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.431 253542 DEBUG nova.storage.rbd_utils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:00 compute-0 nova_compute[253538]: 2025-11-25 08:28:00.435 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.012 253542 DEBUG nova.policy [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8350a560f2bc4b57a5da0e3a1f582f82', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c5b1125d171240e2895276836b4fd6d7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.016 253542 INFO nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Creating config drive at /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11/disk.config
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.025 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc2vr__eu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.061 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059266.0391502, 575b6526-de38-4a80-a952-be1b891b4792 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.062 253542 INFO nova.compute.manager [-] [instance: 575b6526-de38-4a80-a952-be1b891b4792] VM Stopped (Lifecycle Event)
Nov 25 08:28:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 236 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 185 op/s
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.083 253542 DEBUG nova.compute.manager [None req-c3086c63-fc5e-49a1-9cea-9bd1c40fcbcc - - - - - -] [instance: 575b6526-de38-4a80-a952-be1b891b4792] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.174 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc2vr__eu" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.206 253542 DEBUG nova.storage.rbd_utils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] rbd image 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.209 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11/disk.config 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.389 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.954s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.457 253542 DEBUG nova.storage.rbd_utils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] resizing rbd image 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.523 253542 DEBUG oslo_concurrency.processutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11/disk.config 14cd6797-cf47-44da-acac-0e5e3d5dfe11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.525 253542 INFO nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Deleting local config drive /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11/disk.config because it was imported into RBD.
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.543 253542 DEBUG nova.network.neutron [req-96070fe5-3c6d-412c-beec-ba7e590d0015 req-86449235-eb99-486f-8abc-526a0eec8eeb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Updated VIF entry in instance network info cache for port eb3ca9e2-cc78-478d-97c2-03b1c7d29b95. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.543 253542 DEBUG nova.network.neutron [req-96070fe5-3c6d-412c-beec-ba7e590d0015 req-86449235-eb99-486f-8abc-526a0eec8eeb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Updating instance_info_cache with network_info: [{"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.556 253542 DEBUG oslo_concurrency.lockutils [req-96070fe5-3c6d-412c-beec-ba7e590d0015 req-86449235-eb99-486f-8abc-526a0eec8eeb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:28:01 compute-0 kernel: tapeb3ca9e2-cc: entered promiscuous mode
Nov 25 08:28:01 compute-0 NetworkManager[48915]: <info>  [1764059281.6058] manager: (tapeb3ca9e2-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Nov 25 08:28:01 compute-0 ovn_controller[152859]: 2025-11-25T08:28:01Z|00170|binding|INFO|Claiming lport eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 for this chassis.
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:01 compute-0 ovn_controller[152859]: 2025-11-25T08:28:01Z|00171|binding|INFO|eb3ca9e2-cc78-478d-97c2-03b1c7d29b95: Claiming fa:16:3e:f3:13:20 10.100.0.6
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.627 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:13:20 10.100.0.6'], port_security=['fa:16:3e:f3:13:20 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '14cd6797-cf47-44da-acac-0e5e3d5dfe11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7627e6bf071942db89329eee4a7d6b59', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0d49786c-dade-46e6-8a51-0f68b7957195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ee2cc9a-2b72-45fd-9005-f89765076013, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.628 162739 INFO neutron.agent.ovn.metadata.agent [-] Port eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 in datapath 431353a0-bdb3-445c-95e7-9cd19a8e3783 bound to our chassis
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.630 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 431353a0-bdb3-445c-95e7-9cd19a8e3783
Nov 25 08:28:01 compute-0 systemd-udevd[289809]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.646 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8245ea9f-c32f-4489-964d-b73c2c66eb67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.647 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap431353a0-b1 in ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.648 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap431353a0-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.649 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd36c31-1ec8-44be-9547-745d2afe3627]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.650 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c85e3dc9-96a1-4921-8aa2-3f3eff5d3537]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:01 compute-0 systemd-machined[215790]: New machine qemu-33-instance-0000001c.
Nov 25 08:28:01 compute-0 NetworkManager[48915]: <info>  [1764059281.6644] device (tapeb3ca9e2-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:28:01 compute-0 NetworkManager[48915]: <info>  [1764059281.6652] device (tapeb3ca9e2-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.668 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[8aac51c7-5d2e-4f4b-86ba-6d5104857451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:01 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-0000001c.
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.698 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c3f24b-3878-42da-ada3-e5d1e117f670]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.704 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:01 compute-0 ovn_controller[152859]: 2025-11-25T08:28:01Z|00172|binding|INFO|Setting lport eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 ovn-installed in OVS
Nov 25 08:28:01 compute-0 ovn_controller[152859]: 2025-11-25T08:28:01Z|00173|binding|INFO|Setting lport eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 up in Southbound
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.713 253542 DEBUG nova.objects.instance [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'migration_context' on Instance uuid 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.716 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.726 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.726 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Ensure instance console log exists: /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.727 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.728 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.728 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.740 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d9688416-8e4d-4c0f-82ca-0e0cb02c2f92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.742 253542 DEBUG nova.network.neutron [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Successfully created port: a0d5bf0b-a708-4159-968d-5c597313379d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.750 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e4562937-3bf4-47eb-b0c0-4dc8388952b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:01 compute-0 NetworkManager[48915]: <info>  [1764059281.7516] manager: (tap431353a0-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/86)
Nov 25 08:28:01 compute-0 systemd-udevd[289820]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.788 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3a54720f-fc9c-4d60-b240-5edcc5c5b038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.792 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dd55da40-385a-4174-9b9a-2227a4aaa2af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:01 compute-0 NetworkManager[48915]: <info>  [1764059281.8203] device (tap431353a0-b0): carrier: link connected
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.826 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc35469-4394-48bc-bb9f-0f9a4ea33d6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.848 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4343a0-baf3-4a55-9784-dc268726fe86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap431353a0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:1b:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463641, 'reachable_time': 15962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289865, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.868 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[102fe676-9fd3-4b7e-bb32-a14efdac2890]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0d:1b92'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463641, 'tstamp': 463641}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289866, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.888 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2b562297-d733-49f3-9bcb-e70af75d421b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap431353a0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:1b:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 3, 'rx_bytes': 176, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 3, 'rx_bytes': 176, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463641, 'reachable_time': 15962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289867, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:01.927 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fc014f6c-b33f-4f6b-a01e-3f3b6a10098a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:01 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.999 253542 INFO nova.virt.libvirt.driver [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Snapshot image upload complete
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:01.999 253542 INFO nova.compute.manager [None req-edca5066-bf60-4cb0-8cf6-52d8ab5e0c17 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Took 7.41 seconds to snapshot the instance on the hypervisor.
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.002 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b111b0f6-282f-42cc-ab18-d1c45afd1139]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.003 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap431353a0-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.004 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.004 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap431353a0-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.006 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:02 compute-0 NetworkManager[48915]: <info>  [1764059282.0071] manager: (tap431353a0-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Nov 25 08:28:02 compute-0 kernel: tap431353a0-b0: entered promiscuous mode
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.012 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.013 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap431353a0-b0, col_values=(('external_ids', {'iface-id': 'eb9dc67a-a121-4efb-a3df-9647173b8d46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.014 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:02 compute-0 ovn_controller[152859]: 2025-11-25T08:28:02Z|00174|binding|INFO|Releasing lport eb9dc67a-a121-4efb-a3df-9647173b8d46 from this chassis (sb_readonly=0)
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.030 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.036 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.036 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/431353a0-bdb3-445c-95e7-9cd19a8e3783.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/431353a0-bdb3-445c-95e7-9cd19a8e3783.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.037 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ea49440e-fd3f-4c75-842c-7d67ab55c126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.038 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-431353a0-bdb3-445c-95e7-9cd19a8e3783
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/431353a0-bdb3-445c-95e7-9cd19a8e3783.pid.haproxy
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 431353a0-bdb3-445c-95e7-9cd19a8e3783
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:28:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:02.039 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'env', 'PROCESS_TAG=haproxy-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/431353a0-bdb3-445c-95e7-9cd19a8e3783.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.196 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059282.1963282, 14cd6797-cf47-44da-acac-0e5e3d5dfe11 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.198 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] VM Started (Lifecycle Event)
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.214 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.219 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059282.1964796, 14cd6797-cf47-44da-acac-0e5e3d5dfe11 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.219 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] VM Paused (Lifecycle Event)
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.223 253542 DEBUG nova.compute.manager [req-ca0bc5d2-692b-4564-b8f0-84496287ff64 req-9398c8ca-6443-4042-b90c-f2f3db152888 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.223 253542 DEBUG oslo_concurrency.lockutils [req-ca0bc5d2-692b-4564-b8f0-84496287ff64 req-9398c8ca-6443-4042-b90c-f2f3db152888 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.224 253542 DEBUG oslo_concurrency.lockutils [req-ca0bc5d2-692b-4564-b8f0-84496287ff64 req-9398c8ca-6443-4042-b90c-f2f3db152888 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.224 253542 DEBUG oslo_concurrency.lockutils [req-ca0bc5d2-692b-4564-b8f0-84496287ff64 req-9398c8ca-6443-4042-b90c-f2f3db152888 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.224 253542 DEBUG nova.compute.manager [req-ca0bc5d2-692b-4564-b8f0-84496287ff64 req-9398c8ca-6443-4042-b90c-f2f3db152888 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Processing event network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.225 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.229 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.232 253542 INFO nova.virt.libvirt.driver [-] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Instance spawned successfully.
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.233 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.256 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.263 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059282.2292566, 14cd6797-cf47-44da-acac-0e5e3d5dfe11 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.264 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] VM Resumed (Lifecycle Event)
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.267 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.268 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.268 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.269 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.269 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.270 253542 DEBUG nova.virt.libvirt.driver [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.298 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.303 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.321 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.429 253542 INFO nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Took 7.84 seconds to spawn the instance on the hypervisor.
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.430 253542 DEBUG nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:02 compute-0 ceph-mon[75015]: pgmap v1286: 321 pgs: 321 active+clean; 236 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 185 op/s
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.508 253542 INFO nova.compute.manager [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Took 9.02 seconds to build instance.
Nov 25 08:28:02 compute-0 podman[289945]: 2025-11-25 08:28:02.427393186 +0000 UTC m=+0.027592674 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:28:02 compute-0 podman[289945]: 2025-11-25 08:28:02.546074136 +0000 UTC m=+0.146273604 container create 36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:28:02 compute-0 nova_compute[253538]: 2025-11-25 08:28:02.563 253542 DEBUG oslo_concurrency.lockutils [None req-a08427a9-393d-477c-899d-da297a0ffeb3 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:02 compute-0 systemd[1]: Started libpod-conmon-36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16.scope.
Nov 25 08:28:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d59ce0fb78beb290599dd8ea1cac60e0a337aea145db00ab9301e67824cad23/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:02 compute-0 podman[289945]: 2025-11-25 08:28:02.68224433 +0000 UTC m=+0.282443838 container init 36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 08:28:02 compute-0 podman[289945]: 2025-11-25 08:28:02.690388675 +0000 UTC m=+0.290588143 container start 36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 08:28:02 compute-0 neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783[289960]: [NOTICE]   (289964) : New worker (289966) forked
Nov 25 08:28:02 compute-0 neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783[289960]: [NOTICE]   (289964) : Loading success.
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 258 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 11 MiB/s wr, 203 op/s
Nov 25 08:28:03 compute-0 nova_compute[253538]: 2025-11-25 08:28:03.215 253542 DEBUG nova.network.neutron [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Successfully updated port: a0d5bf0b-a708-4159-968d-5c597313379d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:28:03 compute-0 nova_compute[253538]: 2025-11-25 08:28:03.308 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "refresh_cache-59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:28:03 compute-0 nova_compute[253538]: 2025-11-25 08:28:03.308 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquired lock "refresh_cache-59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:28:03 compute-0 nova_compute[253538]: 2025-11-25 08:28:03.308 253542 DEBUG nova.network.neutron [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:28:03 compute-0 nova_compute[253538]: 2025-11-25 08:28:03.335 253542 DEBUG nova.compute.manager [req-589eb583-e7c2-44d7-ad18-a259ef45fcd9 req-8d07beb8-111f-4d19-a355-e50741638f40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received event network-changed-a0d5bf0b-a708-4159-968d-5c597313379d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:03 compute-0 nova_compute[253538]: 2025-11-25 08:28:03.336 253542 DEBUG nova.compute.manager [req-589eb583-e7c2-44d7-ad18-a259ef45fcd9 req-8d07beb8-111f-4d19-a355-e50741638f40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Refreshing instance network info cache due to event network-changed-a0d5bf0b-a708-4159-968d-5c597313379d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:28:03 compute-0 nova_compute[253538]: 2025-11-25 08:28:03.336 253542 DEBUG oslo_concurrency.lockutils [req-589eb583-e7c2-44d7-ad18-a259ef45fcd9 req-8d07beb8-111f-4d19-a355-e50741638f40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:28:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 25 08:28:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 25 08:28:03 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 25 08:28:03 compute-0 ceph-mon[75015]: pgmap v1287: 321 pgs: 321 active+clean; 258 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 11 MiB/s wr, 203 op/s
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014231600410236375 of space, bias 1.0, pg target 0.4269480123070913 quantized to 32 (current 32)
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014230328596079964 of space, bias 1.0, pg target 0.42690985788239894 quantized to 32 (current 32)
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:28:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:28:03 compute-0 nova_compute[253538]: 2025-11-25 08:28:03.768 253542 DEBUG nova.network.neutron [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:28:03 compute-0 podman[289975]: 2025-11-25 08:28:03.823531774 +0000 UTC m=+0.071503097 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.545 253542 DEBUG nova.compute.manager [req-a2486f81-c28a-478f-bc2e-26f5c297c673 req-7b3f93e0-4eb3-4ea5-a720-5d3ec4c180d8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.546 253542 DEBUG oslo_concurrency.lockutils [req-a2486f81-c28a-478f-bc2e-26f5c297c673 req-7b3f93e0-4eb3-4ea5-a720-5d3ec4c180d8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.546 253542 DEBUG oslo_concurrency.lockutils [req-a2486f81-c28a-478f-bc2e-26f5c297c673 req-7b3f93e0-4eb3-4ea5-a720-5d3ec4c180d8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.546 253542 DEBUG oslo_concurrency.lockutils [req-a2486f81-c28a-478f-bc2e-26f5c297c673 req-7b3f93e0-4eb3-4ea5-a720-5d3ec4c180d8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.547 253542 DEBUG nova.compute.manager [req-a2486f81-c28a-478f-bc2e-26f5c297c673 req-7b3f93e0-4eb3-4ea5-a720-5d3ec4c180d8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] No waiting events found dispatching network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.547 253542 WARNING nova.compute.manager [req-a2486f81-c28a-478f-bc2e-26f5c297c673 req-7b3f93e0-4eb3-4ea5-a720-5d3ec4c180d8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received unexpected event network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 for instance with vm_state active and task_state None.
Nov 25 08:28:04 compute-0 ceph-mon[75015]: osdmap e123: 3 total, 3 up, 3 in
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.743 253542 DEBUG nova.network.neutron [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Updating instance_info_cache with network_info: [{"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.799 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.842 253542 DEBUG oslo_concurrency.lockutils [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "interface-14cd6797-cf47-44da-acac-0e5e3d5dfe11-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.844 253542 DEBUG oslo_concurrency.lockutils [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "interface-14cd6797-cf47-44da-acac-0e5e3d5dfe11-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.844 253542 DEBUG nova.objects.instance [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lazy-loading 'flavor' on Instance uuid 14cd6797-cf47-44da-acac-0e5e3d5dfe11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.855 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Releasing lock "refresh_cache-59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.856 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Instance network_info: |[{"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.856 253542 DEBUG oslo_concurrency.lockutils [req-589eb583-e7c2-44d7-ad18-a259ef45fcd9 req-8d07beb8-111f-4d19-a355-e50741638f40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.856 253542 DEBUG nova.network.neutron [req-589eb583-e7c2-44d7-ad18-a259ef45fcd9 req-8d07beb8-111f-4d19-a355-e50741638f40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Refreshing network info cache for port a0d5bf0b-a708-4159-968d-5c597313379d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.860 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Start _get_guest_xml network_info=[{"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.862 253542 DEBUG nova.objects.instance [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lazy-loading 'pci_requests' on Instance uuid 14cd6797-cf47-44da-acac-0e5e3d5dfe11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.878 253542 DEBUG nova.network.neutron [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.884 253542 WARNING nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.890 253542 DEBUG nova.virt.libvirt.host [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.891 253542 DEBUG nova.virt.libvirt.host [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.893 253542 DEBUG nova.virt.libvirt.host [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.894 253542 DEBUG nova.virt.libvirt.host [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.894 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.894 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.895 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.895 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.895 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.895 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.895 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.896 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.896 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.896 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.896 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.897 253542 DEBUG nova.virt.hardware [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.899 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:28:04 compute-0 nova_compute[253538]: 2025-11-25 08:28:04.934 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 292 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.1 MiB/s wr, 167 op/s
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.234 253542 DEBUG nova.policy [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5981df3e8536420ea5b8fcd98ef92e1b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7627e6bf071942db89329eee4a7d6b59', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:28:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:28:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/264332856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.394 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.419 253542 DEBUG nova.storage.rbd_utils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.424 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:05 compute-0 ceph-mon[75015]: pgmap v1289: 321 pgs: 321 active+clean; 292 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.1 MiB/s wr, 167 op/s
Nov 25 08:28:05 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/264332856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:05 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 08:28:05 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 5918 writes, 26K keys, 5918 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 5917 writes, 5917 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1509 writes, 6764 keys, 1509 commit groups, 1.0 writes per commit group, ingest: 9.28 MB, 0.02 MB/s
                                           Interval WAL: 1509 writes, 1509 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.4      1.42              0.10        15    0.094       0      0       0.0       0.0
                                             L6      1/0    6.96 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     40.5     32.7      3.02              0.32        14    0.215     64K   7831       0.0       0.0
                                            Sum      1/0    6.96 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     27.5     29.1      4.43              0.42        29    0.153     64K   7831       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7     20.2     20.1      1.90              0.14         8    0.237     21K   2587       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     40.5     32.7      3.02              0.32        14    0.215     64K   7831       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.4      1.41              0.10        14    0.101       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.030, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 4.4 seconds
                                           Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 1.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 308.00 MB usage: 13.19 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000117 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(863,12.69 MB,4.11933%) FilterBlock(30,184.17 KB,0.0583946%) IndexBlock(30,334.05 KB,0.105915%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 08:28:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:28:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241261070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.884 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.887 253542 DEBUG nova.virt.libvirt.vif [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:27:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-184009211',display_name='tempest-ImagesOneServerNegativeTestJSON-server-184009211',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-184009211',id=29,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-7mj1pjji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_
name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:27:59Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.888 253542 DEBUG nova.network.os_vif_util [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.889 253542 DEBUG nova.network.os_vif_util [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:41:bb,bridge_name='br-int',has_traffic_filtering=True,id=a0d5bf0b-a708-4159-968d-5c597313379d,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0d5bf0b-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.890 253542 DEBUG nova.objects.instance [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.917 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:28:05 compute-0 nova_compute[253538]:   <uuid>59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0</uuid>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   <name>instance-0000001d</name>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-184009211</nova:name>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:28:04</nova:creationTime>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:28:05 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:28:05 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:28:05 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:28:05 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:28:05 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:28:05 compute-0 nova_compute[253538]:         <nova:user uuid="8350a560f2bc4b57a5da0e3a1f582f82">tempest-ImagesOneServerNegativeTestJSON-192511421-project-member</nova:user>
Nov 25 08:28:05 compute-0 nova_compute[253538]:         <nova:project uuid="c5b1125d171240e2895276836b4fd6d7">tempest-ImagesOneServerNegativeTestJSON-192511421</nova:project>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:28:05 compute-0 nova_compute[253538]:         <nova:port uuid="a0d5bf0b-a708-4159-968d-5c597313379d">
Nov 25 08:28:05 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <system>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <entry name="serial">59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0</entry>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <entry name="uuid">59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0</entry>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     </system>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   <os>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   </os>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   <features>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   </features>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk">
Nov 25 08:28:05 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       </source>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:28:05 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk.config">
Nov 25 08:28:05 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       </source>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:28:05 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:67:41:bb"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <target dev="tapa0d5bf0b-a7"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0/console.log" append="off"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <video>
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     </video>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:28:05 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:28:05 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:28:05 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:28:05 compute-0 nova_compute[253538]: </domain>
Nov 25 08:28:05 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.919 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Preparing to wait for external event network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.919 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.920 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.920 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.921 253542 DEBUG nova.virt.libvirt.vif [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:27:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-184009211',display_name='tempest-ImagesOneServerNegativeTestJSON-server-184009211',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-184009211',id=29,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-7mj1pjji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:27:59Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.921 253542 DEBUG nova.network.os_vif_util [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.921 253542 DEBUG nova.network.os_vif_util [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:41:bb,bridge_name='br-int',has_traffic_filtering=True,id=a0d5bf0b-a708-4159-968d-5c597313379d,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0d5bf0b-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.922 253542 DEBUG os_vif [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:41:bb,bridge_name='br-int',has_traffic_filtering=True,id=a0d5bf0b-a708-4159-968d-5c597313379d,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0d5bf0b-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.922 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.923 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.923 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.926 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.926 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d5bf0b-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.927 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0d5bf0b-a7, col_values=(('external_ids', {'iface-id': 'a0d5bf0b-a708-4159-968d-5c597313379d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:41:bb', 'vm-uuid': '59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.928 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:05 compute-0 NetworkManager[48915]: <info>  [1764059285.9289] manager: (tapa0d5bf0b-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.930 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.936 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.937 253542 INFO os_vif [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:41:bb,bridge_name='br-int',has_traffic_filtering=True,id=a0d5bf0b-a708-4159-968d-5c597313379d,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0d5bf0b-a7')
Nov 25 08:28:05 compute-0 nova_compute[253538]: 2025-11-25 08:28:05.962 253542 DEBUG nova.network.neutron [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Successfully created port: c620cff4-b028-4d86-b951-0d489781da2f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.004 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.004 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.005 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No VIF found with MAC fa:16:3e:67:41:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.005 253542 INFO nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Using config drive
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.028 253542 DEBUG nova.storage.rbd_utils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.071 253542 DEBUG nova.network.neutron [req-589eb583-e7c2-44d7-ad18-a259ef45fcd9 req-8d07beb8-111f-4d19-a355-e50741638f40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Updated VIF entry in instance network info cache for port a0d5bf0b-a708-4159-968d-5c597313379d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.071 253542 DEBUG nova.network.neutron [req-589eb583-e7c2-44d7-ad18-a259ef45fcd9 req-8d07beb8-111f-4d19-a355-e50741638f40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Updating instance_info_cache with network_info: [{"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.089 253542 DEBUG oslo_concurrency.lockutils [req-589eb583-e7c2-44d7-ad18-a259ef45fcd9 req-8d07beb8-111f-4d19-a355-e50741638f40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.616 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.617 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.617 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.618 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.619 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.621 253542 INFO nova.compute.manager [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Terminating instance
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.622 253542 DEBUG nova.compute.manager [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.630 253542 INFO nova.virt.libvirt.driver [-] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Instance destroyed successfully.
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.631 253542 DEBUG nova.objects.instance [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'resources' on Instance uuid 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.642 253542 DEBUG nova.virt.libvirt.vif [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:27:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2071435205',display_name='tempest-ImagesTestJSON-server-2071435205',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2071435205',id=26,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:27:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-a9u90nyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:02Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=30afcbba-78f3-433c-ba0a-5a2d25cf2d48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.642 253542 DEBUG nova.network.os_vif_util [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "address": "fa:16:3e:93:2b:39", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6aa33fe-8d", "ovs_interfaceid": "d6aa33fe-8dd6-4546-aa75-715ad57e5b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.643 253542 DEBUG nova.network.os_vif_util [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:2b:39,bridge_name='br-int',has_traffic_filtering=True,id=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6aa33fe-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.643 253542 DEBUG os_vif [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:2b:39,bridge_name='br-int',has_traffic_filtering=True,id=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6aa33fe-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.645 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.646 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6aa33fe-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.647 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.649 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.654 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.656 253542 INFO os_vif [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:2b:39,bridge_name='br-int',has_traffic_filtering=True,id=d6aa33fe-8dd6-4546-aa75-715ad57e5b5c,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6aa33fe-8d')
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.679 253542 INFO nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Creating config drive at /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0/disk.config
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.683 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkrjw_fcq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.831 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkrjw_fcq" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.862 253542 DEBUG nova.storage.rbd_utils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:06 compute-0 nova_compute[253538]: 2025-11-25 08:28:06.865 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0/disk.config 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:06 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3241261070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 248 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.4 MiB/s wr, 222 op/s
Nov 25 08:28:07 compute-0 nova_compute[253538]: 2025-11-25 08:28:07.288 253542 DEBUG nova.network.neutron [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Successfully updated port: c620cff4-b028-4d86-b951-0d489781da2f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:28:07 compute-0 nova_compute[253538]: 2025-11-25 08:28:07.310 253542 DEBUG oslo_concurrency.lockutils [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:28:07 compute-0 nova_compute[253538]: 2025-11-25 08:28:07.310 253542 DEBUG oslo_concurrency.lockutils [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquired lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:28:07 compute-0 nova_compute[253538]: 2025-11-25 08:28:07.311 253542 DEBUG nova.network.neutron [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:28:07 compute-0 nova_compute[253538]: 2025-11-25 08:28:07.506 253542 WARNING nova.network.neutron [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] 431353a0-bdb3-445c-95e7-9cd19a8e3783 already exists in list: networks containing: ['431353a0-bdb3-445c-95e7-9cd19a8e3783']. ignoring it
Nov 25 08:28:07 compute-0 nova_compute[253538]: 2025-11-25 08:28:07.714 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059272.7129986, 30afcbba-78f3-433c-ba0a-5a2d25cf2d48 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:07 compute-0 nova_compute[253538]: 2025-11-25 08:28:07.715 253542 INFO nova.compute.manager [-] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] VM Stopped (Lifecycle Event)
Nov 25 08:28:07 compute-0 nova_compute[253538]: 2025-11-25 08:28:07.731 253542 DEBUG nova.compute.manager [None req-cbf23426-e2d5-45a5-b75b-e828c7d4e001 - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:07 compute-0 nova_compute[253538]: 2025-11-25 08:28:07.734 253542 DEBUG nova.compute.manager [None req-cbf23426-e2d5-45a5-b75b-e828c7d4e001 - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: deleting, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:28:07 compute-0 nova_compute[253538]: 2025-11-25 08:28:07.754 253542 INFO nova.compute.manager [None req-cbf23426-e2d5-45a5-b75b-e828c7d4e001 - - - - - -] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] During sync_power_state the instance has a pending task (deleting). Skip.
Nov 25 08:28:08 compute-0 ceph-mon[75015]: pgmap v1290: 321 pgs: 321 active+clean; 248 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.4 MiB/s wr, 222 op/s
Nov 25 08:28:08 compute-0 nova_compute[253538]: 2025-11-25 08:28:08.469 253542 DEBUG oslo_concurrency.processutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0/disk.config 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:08 compute-0 nova_compute[253538]: 2025-11-25 08:28:08.470 253542 INFO nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Deleting local config drive /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0/disk.config because it was imported into RBD.
Nov 25 08:28:08 compute-0 nova_compute[253538]: 2025-11-25 08:28:08.505 253542 DEBUG nova.compute.manager [req-94db6fa7-eede-427e-b821-ca953530c2dc req-59292340-7c53-407e-8ddc-d441fccb4a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-changed-c620cff4-b028-4d86-b951-0d489781da2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:08 compute-0 nova_compute[253538]: 2025-11-25 08:28:08.505 253542 DEBUG nova.compute.manager [req-94db6fa7-eede-427e-b821-ca953530c2dc req-59292340-7c53-407e-8ddc-d441fccb4a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Refreshing instance network info cache due to event network-changed-c620cff4-b028-4d86-b951-0d489781da2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:28:08 compute-0 nova_compute[253538]: 2025-11-25 08:28:08.505 253542 DEBUG oslo_concurrency.lockutils [req-94db6fa7-eede-427e-b821-ca953530c2dc req-59292340-7c53-407e-8ddc-d441fccb4a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:28:08 compute-0 NetworkManager[48915]: <info>  [1764059288.5251] manager: (tapa0d5bf0b-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Nov 25 08:28:08 compute-0 kernel: tapa0d5bf0b-a7: entered promiscuous mode
Nov 25 08:28:08 compute-0 ovn_controller[152859]: 2025-11-25T08:28:08Z|00175|binding|INFO|Claiming lport a0d5bf0b-a708-4159-968d-5c597313379d for this chassis.
Nov 25 08:28:08 compute-0 nova_compute[253538]: 2025-11-25 08:28:08.533 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:08 compute-0 ovn_controller[152859]: 2025-11-25T08:28:08Z|00176|binding|INFO|a0d5bf0b-a708-4159-968d-5c597313379d: Claiming fa:16:3e:67:41:bb 10.100.0.9
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.542 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:41:bb 10.100.0.9'], port_security=['fa:16:3e:67:41:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5b1125d171240e2895276836b4fd6d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bd498cc2-3a1f-4313-a3a9-7b1fa737f1eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98666ab-8fd3-4256-b0ce-536c54c0072e, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a0d5bf0b-a708-4159-968d-5c597313379d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.544 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a0d5bf0b-a708-4159-968d-5c597313379d in datapath 52a7668b-f0ac-4b07-a778-1ee89adbf076 bound to our chassis
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.545 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52a7668b-f0ac-4b07-a778-1ee89adbf076
Nov 25 08:28:08 compute-0 nova_compute[253538]: 2025-11-25 08:28:08.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:28:08 compute-0 nova_compute[253538]: 2025-11-25 08:28:08.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.556 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[75352861-793b-4210-a151-14303e116dba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.557 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52a7668b-f1 in ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.558 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52a7668b-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.558 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[236a2c3a-f199-4973-8e36-fc67fa1d7d0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.559 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[78c52bc3-b874-4caa-b40c-03c5a17c43dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.576 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[107b6888-0786-451d-8ff9-d412e8de5f52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:08 compute-0 systemd-machined[215790]: New machine qemu-34-instance-0000001d.
Nov 25 08:28:08 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-0000001d.
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.598 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fa87def3-074e-4e20-bf4d-f65a54834655]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:08 compute-0 nova_compute[253538]: 2025-11-25 08:28:08.610 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:08 compute-0 ovn_controller[152859]: 2025-11-25T08:28:08Z|00177|binding|INFO|Setting lport a0d5bf0b-a708-4159-968d-5c597313379d ovn-installed in OVS
Nov 25 08:28:08 compute-0 ovn_controller[152859]: 2025-11-25T08:28:08Z|00178|binding|INFO|Setting lport a0d5bf0b-a708-4159-968d-5c597313379d up in Southbound
Nov 25 08:28:08 compute-0 nova_compute[253538]: 2025-11-25 08:28:08.615 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:08 compute-0 systemd-udevd[290155]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:28:08 compute-0 NetworkManager[48915]: <info>  [1764059288.6348] device (tapa0d5bf0b-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:28:08 compute-0 NetworkManager[48915]: <info>  [1764059288.6356] device (tapa0d5bf0b-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.640 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3873ada9-7138-4051-92ef-7977e3c1a97b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:08 compute-0 systemd-udevd[290159]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.645 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[890d1d5e-e1f7-4736-85f9-26747f9ac50d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:08 compute-0 NetworkManager[48915]: <info>  [1764059288.6463] manager: (tap52a7668b-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.671 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[03fd2872-b28d-4e7f-ae4c-40bb2c2827b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.674 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[06d9993a-af4f-4a59-9dfa-ead28fdb1931]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:08 compute-0 NetworkManager[48915]: <info>  [1764059288.6961] device (tap52a7668b-f0): carrier: link connected
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.701 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d87f490e-82d6-4b69-8dd4-4434ea9fbcdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.720 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f052f4c1-a9ca-47fd-ba07-3c7dc9b85658]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52a7668b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:1c:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464329, 'reachable_time': 30934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290184, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.736 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7cabd5b9-dfc4-41cc-9818-f2ed5d8c29b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:1c70'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464329, 'tstamp': 464329}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290185, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.756 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf71ae5-cee1-4ca1-9212-1766d923da1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52a7668b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:1c:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464329, 'reachable_time': 30934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290186, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.783 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c1cec52e-27e1-4d17-9f28-3e0eae284d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.858 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa4ff86-a7de-4440-9e67-25e8397517b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.860 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52a7668b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.860 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.861 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52a7668b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:08 compute-0 kernel: tap52a7668b-f0: entered promiscuous mode
Nov 25 08:28:08 compute-0 nova_compute[253538]: 2025-11-25 08:28:08.863 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:08 compute-0 NetworkManager[48915]: <info>  [1764059288.8645] manager: (tap52a7668b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.866 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52a7668b-f0, col_values=(('external_ids', {'iface-id': 'ac244317-fa52-4a6a-92f4-98845a41804d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:08 compute-0 ovn_controller[152859]: 2025-11-25T08:28:08Z|00179|binding|INFO|Releasing lport ac244317-fa52-4a6a-92f4-98845a41804d from this chassis (sb_readonly=0)
Nov 25 08:28:08 compute-0 nova_compute[253538]: 2025-11-25 08:28:08.896 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.898 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.899 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1cf23358-e226-4791-aa47-c4663f04cabf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.900 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-52a7668b-f0ac-4b07-a778-1ee89adbf076
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 52a7668b-f0ac-4b07-a778-1ee89adbf076
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:28:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:08.901 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'env', 'PROCESS_TAG=haproxy-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52a7668b-f0ac-4b07-a778-1ee89adbf076.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:28:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 230 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 210 op/s
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.184 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "6998a6cf-b660-4558-98cf-bf5984775b1d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.184 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.198 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.246 253542 DEBUG nova.network.neutron [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Updating instance_info_cache with network_info: [{"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c620cff4-b028-4d86-b951-0d489781da2f", "address": "fa:16:3e:b0:33:b7", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc620cff4-b0", "ovs_interfaceid": "c620cff4-b028-4d86-b951-0d489781da2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.266 253542 DEBUG oslo_concurrency.lockutils [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Releasing lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.267 253542 DEBUG oslo_concurrency.lockutils [req-94db6fa7-eede-427e-b821-ca953530c2dc req-59292340-7c53-407e-8ddc-d441fccb4a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.268 253542 DEBUG nova.network.neutron [req-94db6fa7-eede-427e-b821-ca953530c2dc req-59292340-7c53-407e-8ddc-d441fccb4a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Refreshing network info cache for port c620cff4-b028-4d86-b951-0d489781da2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.273 253542 DEBUG nova.virt.libvirt.vif [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1019332220',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1019332220',id=28,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7627e6bf071942db89329eee4a7d6b59',ramdisk_id='',reservation_id='r-ykh6cze2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='
0',owner_project_name='tempest-AttachInterfacesV270Test-968002196',owner_user_name='tempest-AttachInterfacesV270Test-968002196-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:02Z,user_data=None,user_id='5981df3e8536420ea5b8fcd98ef92e1b',uuid=14cd6797-cf47-44da-acac-0e5e3d5dfe11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c620cff4-b028-4d86-b951-0d489781da2f", "address": "fa:16:3e:b0:33:b7", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc620cff4-b0", "ovs_interfaceid": "c620cff4-b028-4d86-b951-0d489781da2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.274 253542 DEBUG nova.network.os_vif_util [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converting VIF {"id": "c620cff4-b028-4d86-b951-0d489781da2f", "address": "fa:16:3e:b0:33:b7", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc620cff4-b0", "ovs_interfaceid": "c620cff4-b028-4d86-b951-0d489781da2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.275 253542 DEBUG nova.network.os_vif_util [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=c620cff4-b028-4d86-b951-0d489781da2f,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc620cff4-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.275 253542 DEBUG os_vif [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=c620cff4-b028-4d86-b951-0d489781da2f,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc620cff4-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.276 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.276 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.277 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.281 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.282 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.286 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.286 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc620cff4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.287 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc620cff4-b0, col_values=(('external_ids', {'iface-id': 'c620cff4-b028-4d86-b951-0d489781da2f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:33:b7', 'vm-uuid': '14cd6797-cf47-44da-acac-0e5e3d5dfe11'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:09 compute-0 NetworkManager[48915]: <info>  [1764059289.2900] manager: (tapc620cff4-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.292 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.294 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.295 253542 INFO nova.compute.claims [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.298 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.299 253542 INFO os_vif [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=c620cff4-b028-4d86-b951-0d489781da2f,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc620cff4-b0')
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.300 253542 DEBUG nova.virt.libvirt.vif [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1019332220',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1019332220',id=28,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7627e6bf071942db89329eee4a7d6b59',ramdisk_id='',reservation_id='r-ykh6cze2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='
0',owner_project_name='tempest-AttachInterfacesV270Test-968002196',owner_user_name='tempest-AttachInterfacesV270Test-968002196-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:02Z,user_data=None,user_id='5981df3e8536420ea5b8fcd98ef92e1b',uuid=14cd6797-cf47-44da-acac-0e5e3d5dfe11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c620cff4-b028-4d86-b951-0d489781da2f", "address": "fa:16:3e:b0:33:b7", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc620cff4-b0", "ovs_interfaceid": "c620cff4-b028-4d86-b951-0d489781da2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.300 253542 DEBUG nova.network.os_vif_util [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converting VIF {"id": "c620cff4-b028-4d86-b951-0d489781da2f", "address": "fa:16:3e:b0:33:b7", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc620cff4-b0", "ovs_interfaceid": "c620cff4-b028-4d86-b951-0d489781da2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.301 253542 DEBUG nova.network.os_vif_util [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=c620cff4-b028-4d86-b951-0d489781da2f,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc620cff4-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.303 253542 DEBUG nova.virt.libvirt.guest [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] attach device xml: <interface type="ethernet">
Nov 25 08:28:09 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:b0:33:b7"/>
Nov 25 08:28:09 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:28:09 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:28:09 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:28:09 compute-0 nova_compute[253538]:   <target dev="tapc620cff4-b0"/>
Nov 25 08:28:09 compute-0 nova_compute[253538]: </interface>
Nov 25 08:28:09 compute-0 nova_compute[253538]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 08:28:09 compute-0 NetworkManager[48915]: <info>  [1764059289.3158] manager: (tapc620cff4-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Nov 25 08:28:09 compute-0 kernel: tapc620cff4-b0: entered promiscuous mode
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.318 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:09 compute-0 ovn_controller[152859]: 2025-11-25T08:28:09Z|00180|binding|INFO|Claiming lport c620cff4-b028-4d86-b951-0d489781da2f for this chassis.
Nov 25 08:28:09 compute-0 ovn_controller[152859]: 2025-11-25T08:28:09Z|00181|binding|INFO|c620cff4-b028-4d86-b951-0d489781da2f: Claiming fa:16:3e:b0:33:b7 10.100.0.5
Nov 25 08:28:09 compute-0 systemd-udevd[290179]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:28:09 compute-0 ovn_controller[152859]: 2025-11-25T08:28:09Z|00182|binding|INFO|Setting lport c620cff4-b028-4d86-b951-0d489781da2f ovn-installed in OVS
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.338 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:09 compute-0 ovn_controller[152859]: 2025-11-25T08:28:09Z|00183|binding|INFO|Setting lport c620cff4-b028-4d86-b951-0d489781da2f up in Southbound
Nov 25 08:28:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:09.339 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:33:b7 10.100.0.5'], port_security=['fa:16:3e:b0:33:b7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '14cd6797-cf47-44da-acac-0e5e3d5dfe11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7627e6bf071942db89329eee4a7d6b59', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0d49786c-dade-46e6-8a51-0f68b7957195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ee2cc9a-2b72-45fd-9005-f89765076013, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=c620cff4-b028-4d86-b951-0d489781da2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.341 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:09 compute-0 NetworkManager[48915]: <info>  [1764059289.3471] device (tapc620cff4-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:28:09 compute-0 NetworkManager[48915]: <info>  [1764059289.3492] device (tapc620cff4-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:28:09 compute-0 podman[290217]: 2025-11-25 08:28:09.262905556 +0000 UTC m=+0.026243117 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.414 253542 DEBUG nova.compute.manager [req-98f8ec82-1b8f-4caa-af16-2af48d577536 req-407a7fae-7369-4e64-8914-9100bcfabaef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received event network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.414 253542 DEBUG oslo_concurrency.lockutils [req-98f8ec82-1b8f-4caa-af16-2af48d577536 req-407a7fae-7369-4e64-8914-9100bcfabaef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.415 253542 DEBUG oslo_concurrency.lockutils [req-98f8ec82-1b8f-4caa-af16-2af48d577536 req-407a7fae-7369-4e64-8914-9100bcfabaef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.415 253542 DEBUG oslo_concurrency.lockutils [req-98f8ec82-1b8f-4caa-af16-2af48d577536 req-407a7fae-7369-4e64-8914-9100bcfabaef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.416 253542 DEBUG nova.compute.manager [req-98f8ec82-1b8f-4caa-af16-2af48d577536 req-407a7fae-7369-4e64-8914-9100bcfabaef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Processing event network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.433 253542 DEBUG nova.virt.libvirt.driver [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.433 253542 DEBUG nova.virt.libvirt.driver [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.434 253542 DEBUG nova.virt.libvirt.driver [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] No VIF found with MAC fa:16:3e:f3:13:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.434 253542 DEBUG nova.virt.libvirt.driver [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] No VIF found with MAC fa:16:3e:b0:33:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.460 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.497 253542 DEBUG nova.virt.libvirt.guest [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:28:09 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:28:09 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesV270Test-server-1019332220</nova:name>
Nov 25 08:28:09 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:28:09</nova:creationTime>
Nov 25 08:28:09 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:28:09 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:28:09 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:28:09 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:28:09 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:28:09 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:28:09 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:28:09 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:28:09 compute-0 nova_compute[253538]:     <nova:user uuid="5981df3e8536420ea5b8fcd98ef92e1b">tempest-AttachInterfacesV270Test-968002196-project-member</nova:user>
Nov 25 08:28:09 compute-0 nova_compute[253538]:     <nova:project uuid="7627e6bf071942db89329eee4a7d6b59">tempest-AttachInterfacesV270Test-968002196</nova:project>
Nov 25 08:28:09 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:28:09 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:28:09 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:28:09 compute-0 nova_compute[253538]:     <nova:port uuid="eb3ca9e2-cc78-478d-97c2-03b1c7d29b95">
Nov 25 08:28:09 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 08:28:09 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:28:09 compute-0 nova_compute[253538]:     <nova:port uuid="c620cff4-b028-4d86-b951-0d489781da2f">
Nov 25 08:28:09 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 08:28:09 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:28:09 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:28:09 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:28:09 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.632 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.632 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:28:09 compute-0 podman[290217]: 2025-11-25 08:28:09.646714464 +0000 UTC m=+0.410052015 container create 1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.656 253542 DEBUG oslo_concurrency.lockutils [None req-31bdda25-7846-472f-a438-75508dd174e9 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "interface-14cd6797-cf47-44da-acac-0e5e3d5dfe11-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.658 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 08:28:09 compute-0 podman[290237]: 2025-11-25 08:28:09.792800972 +0000 UTC m=+0.420787571 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.802 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:09 compute-0 systemd[1]: Started libpod-conmon-1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644.scope.
Nov 25 08:28:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.841 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059289.8408206, 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.841 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] VM Started (Lifecycle Event)
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.843 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/320269360c4cc6b75b0f40afebd67ecea31de81414d235d4d1629a905a53b4de/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.849 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.854 253542 INFO nova.virt.libvirt.driver [-] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Instance spawned successfully.
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.854 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.883 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.888 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.891 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.892 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.892 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.892 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.893 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.893 253542 DEBUG nova.virt.libvirt.driver [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.912 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.912 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059289.8410053, 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.912 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] VM Paused (Lifecycle Event)
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.926 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.930 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059289.8494318, 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.930 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] VM Resumed (Lifecycle Event)
Nov 25 08:28:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:28:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.956 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.960 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.979 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.985 253542 INFO nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Took 9.84 seconds to spawn the instance on the hypervisor.
Nov 25 08:28:09 compute-0 nova_compute[253538]: 2025-11-25 08:28:09.985 253542 DEBUG nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:09 compute-0 podman[290217]: 2025-11-25 08:28:09.994337582 +0000 UTC m=+0.757675143 container init 1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 08:28:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:28:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2062628715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:10 compute-0 podman[290217]: 2025-11-25 08:28:10.003093324 +0000 UTC m=+0.766430865 container start 1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:28:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.019 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.025 253542 DEBUG nova.compute.provider_tree [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:28:10 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[290325]: [NOTICE]   (290331) : New worker (290333) forked
Nov 25 08:28:10 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[290325]: [NOTICE]   (290331) : Loading success.
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.035 253542 DEBUG nova.scheduler.client.report [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.075 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.076 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.089 253542 INFO nova.compute.manager [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Took 11.24 seconds to build instance.
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.109 253542 DEBUG oslo_concurrency.lockutils [None req-0ab1c55e-a54f-4e9f-8e35-b9475b4764b8 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:10 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.128 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.128 253542 DEBUG nova.network.neutron [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.146 253542 INFO nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.186 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:28:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.200 162739 INFO neutron.agent.ovn.metadata.agent [-] Port c620cff4-b028-4d86-b951-0d489781da2f in datapath 431353a0-bdb3-445c-95e7-9cd19a8e3783 unbound from our chassis
Nov 25 08:28:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.203 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 431353a0-bdb3-445c-95e7-9cd19a8e3783
Nov 25 08:28:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.221 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd509c0-da19-4905-8d1c-1e3bf657ec6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.249 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[89a70fbf-7c68-4cb4-915a-3e38b0618162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.252 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5357f81c-6edd-43d8-98b3-3067f96b6388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:10 compute-0 ceph-mon[75015]: pgmap v1291: 321 pgs: 321 active+clean; 230 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 210 op/s
Nov 25 08:28:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2062628715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:10 compute-0 ceph-mon[75015]: osdmap e124: 3 total, 3 up, 3 in
Nov 25 08:28:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.280 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2fd3b9-cd8e-43d4-8e2e-9cac8c0e10d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.294 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[907a9ac0-2263-4e98-b1d7-debff6924579]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap431353a0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:1b:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463641, 'reachable_time': 15962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290347, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.308 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b15aa228-267f-46d4-9501-759d563378a4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap431353a0-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463655, 'tstamp': 463655}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290348, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap431353a0-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463659, 'tstamp': 463659}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290348, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.310 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap431353a0-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.311 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.312 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.313 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap431353a0-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.313 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.314 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap431353a0-b0, col_values=(('external_ids', {'iface-id': 'eb9dc67a-a121-4efb-a3df-9647173b8d46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:10.314 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.335 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.336 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.336 253542 INFO nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Creating image(s)
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.414 253542 DEBUG nova.storage.rbd_utils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 6998a6cf-b660-4558-98cf-bf5984775b1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.446 253542 DEBUG nova.storage.rbd_utils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 6998a6cf-b660-4558-98cf-bf5984775b1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.472 253542 DEBUG nova.storage.rbd_utils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 6998a6cf-b660-4558-98cf-bf5984775b1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.477 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.517 253542 DEBUG nova.policy [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38fa175fb699405c9a05d7c28f994ebc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.562 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.562 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.566 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.567 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.588 253542 DEBUG nova.storage.rbd_utils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 6998a6cf-b660-4558-98cf-bf5984775b1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.597 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 6998a6cf-b660-4558-98cf-bf5984775b1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.847 253542 DEBUG nova.network.neutron [req-94db6fa7-eede-427e-b821-ca953530c2dc req-59292340-7c53-407e-8ddc-d441fccb4a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Updated VIF entry in instance network info cache for port c620cff4-b028-4d86-b951-0d489781da2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.848 253542 DEBUG nova.network.neutron [req-94db6fa7-eede-427e-b821-ca953530c2dc req-59292340-7c53-407e-8ddc-d441fccb4a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Updating instance_info_cache with network_info: [{"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c620cff4-b028-4d86-b951-0d489781da2f", "address": "fa:16:3e:b0:33:b7", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc620cff4-b0", "ovs_interfaceid": "c620cff4-b028-4d86-b951-0d489781da2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:10 compute-0 nova_compute[253538]: 2025-11-25 08:28:10.866 253542 DEBUG oslo_concurrency.lockutils [req-94db6fa7-eede-427e-b821-ca953530c2dc req-59292340-7c53-407e-8ddc-d441fccb4a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-14cd6797-cf47-44da-acac-0e5e3d5dfe11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:28:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 190 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 207 op/s
Nov 25 08:28:12 compute-0 nova_compute[253538]: 2025-11-25 08:28:12.008 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 6998a6cf-b660-4558-98cf-bf5984775b1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:12 compute-0 nova_compute[253538]: 2025-11-25 08:28:12.068 253542 DEBUG nova.storage.rbd_utils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] resizing rbd image 6998a6cf-b660-4558-98cf-bf5984775b1d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:28:12 compute-0 ceph-mon[75015]: pgmap v1293: 321 pgs: 321 active+clean; 190 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 207 op/s
Nov 25 08:28:12 compute-0 nova_compute[253538]: 2025-11-25 08:28:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:28:12 compute-0 nova_compute[253538]: 2025-11-25 08:28:12.945 253542 INFO nova.virt.libvirt.driver [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Deleting instance files /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48_del
Nov 25 08:28:12 compute-0 nova_compute[253538]: 2025-11-25 08:28:12.947 253542 INFO nova.virt.libvirt.driver [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Deletion of /var/lib/nova/instances/30afcbba-78f3-433c-ba0a-5a2d25cf2d48_del complete
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.021 253542 DEBUG nova.objects.instance [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'migration_context' on Instance uuid 6998a6cf-b660-4558-98cf-bf5984775b1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.041 253542 DEBUG nova.compute.manager [req-3ebb8ac9-5487-4fb9-9761-c057403ccdd0 req-55054e0c-3264-4a9d-95c8-6bc2facb0b2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received event network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.042 253542 DEBUG oslo_concurrency.lockutils [req-3ebb8ac9-5487-4fb9-9761-c057403ccdd0 req-55054e0c-3264-4a9d-95c8-6bc2facb0b2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.043 253542 DEBUG oslo_concurrency.lockutils [req-3ebb8ac9-5487-4fb9-9761-c057403ccdd0 req-55054e0c-3264-4a9d-95c8-6bc2facb0b2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.044 253542 DEBUG oslo_concurrency.lockutils [req-3ebb8ac9-5487-4fb9-9761-c057403ccdd0 req-55054e0c-3264-4a9d-95c8-6bc2facb0b2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.044 253542 DEBUG nova.compute.manager [req-3ebb8ac9-5487-4fb9-9761-c057403ccdd0 req-55054e0c-3264-4a9d-95c8-6bc2facb0b2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] No waiting events found dispatching network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.045 253542 WARNING nova.compute.manager [req-3ebb8ac9-5487-4fb9-9761-c057403ccdd0 req-55054e0c-3264-4a9d-95c8-6bc2facb0b2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received unexpected event network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d for instance with vm_state active and task_state None.
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.050 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.051 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Ensure instance console log exists: /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.052 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.053 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.054 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.060 253542 INFO nova.compute.manager [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Took 6.44 seconds to destroy the instance on the hypervisor.
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.061 253542 DEBUG oslo.service.loopingcall [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.063 253542 DEBUG nova.compute.manager [-] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.064 253542 DEBUG nova.network.neutron [-] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:28:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 182 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.3 MiB/s wr, 184 op/s
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.105 253542 DEBUG nova.network.neutron [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Successfully created port: 65fd7d0e-59ee-4411-92ee-f934016f1d1f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.236 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "4c934302-d7cd-4826-835e-cab6dba97e3a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.237 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.251 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.321 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.322 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.328 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.329 253542 INFO nova.compute.claims [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.499 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:13 compute-0 ceph-mon[75015]: pgmap v1294: 321 pgs: 321 active+clean; 182 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.3 MiB/s wr, 184 op/s
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.963 253542 DEBUG nova.network.neutron [-] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:13 compute-0 nova_compute[253538]: 2025-11-25 08:28:13.986 253542 INFO nova.compute.manager [-] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Took 0.92 seconds to deallocate network for instance.
Nov 25 08:28:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:28:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3424324964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.030 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.036 253542 DEBUG nova.compute.provider_tree [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.039 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.045 253542 DEBUG nova.scheduler.client.report [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.063 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.063 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.065 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.065 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.065 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.066 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.086 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.127 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.128 253542 DEBUG nova.network.neutron [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.145 253542 INFO nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.222 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.297 253542 DEBUG oslo_concurrency.processutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.350 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.353 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.354 253542 INFO nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Creating image(s)
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.393 253542 DEBUG nova.storage.rbd_utils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] rbd image 4c934302-d7cd-4826-835e-cab6dba97e3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.432 253542 DEBUG nova.storage.rbd_utils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] rbd image 4c934302-d7cd-4826-835e-cab6dba97e3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.456 253542 DEBUG nova.storage.rbd_utils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] rbd image 4c934302-d7cd-4826-835e-cab6dba97e3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.460 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:28:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1716634779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.527 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.530 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.530 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.531 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.531 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.550 253542 DEBUG nova.storage.rbd_utils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] rbd image 4c934302-d7cd-4826-835e-cab6dba97e3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.559 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4c934302-d7cd-4826-835e-cab6dba97e3a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3424324964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1716634779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.669 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.670 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.675 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.675 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.804 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:28:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2793932564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.838 253542 DEBUG oslo_concurrency.processutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.847 253542 DEBUG nova.compute.provider_tree [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.858 253542 DEBUG nova.scheduler.client.report [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.878 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.904 253542 INFO nova.scheduler.client.report [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Deleted allocations for instance 30afcbba-78f3-433c-ba0a-5a2d25cf2d48
Nov 25 08:28:14 compute-0 nova_compute[253538]: 2025-11-25 08:28:14.973 253542 DEBUG oslo_concurrency.lockutils [None req-ac44ee47-68fa-4cb1-8769-1d0727a691a8 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "30afcbba-78f3-433c-ba0a-5a2d25cf2d48" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.005 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.007 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4013MB free_disk=59.918922424316406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.008 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.008 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.016 253542 DEBUG nova.compute.manager [req-643e9ddd-7ccd-48a2-9bad-130631332854 req-5f4b0066-9833-46ee-99b5-367bd8be7371 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.017 253542 DEBUG oslo_concurrency.lockutils [req-643e9ddd-7ccd-48a2-9bad-130631332854 req-5f4b0066-9833-46ee-99b5-367bd8be7371 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.017 253542 DEBUG oslo_concurrency.lockutils [req-643e9ddd-7ccd-48a2-9bad-130631332854 req-5f4b0066-9833-46ee-99b5-367bd8be7371 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.018 253542 DEBUG oslo_concurrency.lockutils [req-643e9ddd-7ccd-48a2-9bad-130631332854 req-5f4b0066-9833-46ee-99b5-367bd8be7371 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.018 253542 DEBUG nova.compute.manager [req-643e9ddd-7ccd-48a2-9bad-130631332854 req-5f4b0066-9833-46ee-99b5-367bd8be7371 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] No waiting events found dispatching network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.018 253542 WARNING nova.compute.manager [req-643e9ddd-7ccd-48a2-9bad-130631332854 req-5f4b0066-9833-46ee-99b5-367bd8be7371 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received unexpected event network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f for instance with vm_state active and task_state None.
Nov 25 08:28:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 189 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 246 op/s
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.085 253542 DEBUG nova.policy [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee2fe69e0dfa4467926cec954790823e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9bd945273cd04d8981dcb3a319e8d026', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:28:15 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.115 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 14cd6797-cf47-44da-acac-0e5e3d5dfe11 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.115 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.116 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 6998a6cf-b660-4558-98cf-bf5984775b1d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.116 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 4c934302-d7cd-4826-835e-cab6dba97e3a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.116 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.116 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.274 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.452 253542 DEBUG nova.network.neutron [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Successfully updated port: 65fd7d0e-59ee-4411-92ee-f934016f1d1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:28:15 compute-0 ovn_controller[152859]: 2025-11-25T08:28:15Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:33:b7 10.100.0.5
Nov 25 08:28:15 compute-0 ovn_controller[152859]: 2025-11-25T08:28:15Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:33:b7 10.100.0.5
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.468 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "refresh_cache-6998a6cf-b660-4558-98cf-bf5984775b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.468 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquired lock "refresh_cache-6998a6cf-b660-4558-98cf-bf5984775b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.468 253542 DEBUG nova.network.neutron [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.502 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4c934302-d7cd-4826-835e-cab6dba97e3a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.943s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.568 253542 DEBUG nova.storage.rbd_utils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] resizing rbd image 4c934302-d7cd-4826-835e-cab6dba97e3a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:28:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2793932564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:15 compute-0 ceph-mon[75015]: pgmap v1295: 321 pgs: 321 active+clean; 189 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 246 op/s
Nov 25 08:28:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:28:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3303659574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.754 253542 DEBUG nova.network.neutron [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.759 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.767 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.790 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.900 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.900 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.909 253542 DEBUG nova.objects.instance [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lazy-loading 'migration_context' on Instance uuid 4c934302-d7cd-4826-835e-cab6dba97e3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.927 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.928 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Ensure instance console log exists: /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.929 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.930 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:15 compute-0 nova_compute[253538]: 2025-11-25 08:28:15.930 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.201 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.202 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.203 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.203 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.204 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.206 253542 INFO nova.compute.manager [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Terminating instance
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.208 253542 DEBUG nova.compute.manager [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.278 253542 DEBUG nova.network.neutron [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Successfully created port: b62f3741-11c8-4840-a720-d6ee07f06284 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:28:16 compute-0 kernel: tapeb3ca9e2-cc (unregistering): left promiscuous mode
Nov 25 08:28:16 compute-0 NetworkManager[48915]: <info>  [1764059296.3065] device (tapeb3ca9e2-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:28:16 compute-0 ovn_controller[152859]: 2025-11-25T08:28:16Z|00184|binding|INFO|Releasing lport eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 from this chassis (sb_readonly=0)
Nov 25 08:28:16 compute-0 ovn_controller[152859]: 2025-11-25T08:28:16Z|00185|binding|INFO|Setting lport eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 down in Southbound
Nov 25 08:28:16 compute-0 ovn_controller[152859]: 2025-11-25T08:28:16Z|00186|binding|INFO|Removing iface tapeb3ca9e2-cc ovn-installed in OVS
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.360 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:16 compute-0 kernel: tapc620cff4-b0 (unregistering): left promiscuous mode
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.372 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:13:20 10.100.0.6'], port_security=['fa:16:3e:f3:13:20 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '14cd6797-cf47-44da-acac-0e5e3d5dfe11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7627e6bf071942db89329eee4a7d6b59', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0d49786c-dade-46e6-8a51-0f68b7957195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ee2cc9a-2b72-45fd-9005-f89765076013, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.374 162739 INFO neutron.agent.ovn.metadata.agent [-] Port eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 in datapath 431353a0-bdb3-445c-95e7-9cd19a8e3783 unbound from our chassis
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.376 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 431353a0-bdb3-445c-95e7-9cd19a8e3783
Nov 25 08:28:16 compute-0 NetworkManager[48915]: <info>  [1764059296.3831] device (tapc620cff4-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.389 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.392 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0eb01a-92bc-40af-af50-2d638a7fc5ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:16 compute-0 ovn_controller[152859]: 2025-11-25T08:28:16Z|00187|binding|INFO|Releasing lport c620cff4-b028-4d86-b951-0d489781da2f from this chassis (sb_readonly=0)
Nov 25 08:28:16 compute-0 ovn_controller[152859]: 2025-11-25T08:28:16Z|00188|binding|INFO|Setting lport c620cff4-b028-4d86-b951-0d489781da2f down in Southbound
Nov 25 08:28:16 compute-0 ovn_controller[152859]: 2025-11-25T08:28:16Z|00189|binding|INFO|Removing iface tapc620cff4-b0 ovn-installed in OVS
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.398 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.421 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.429 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:33:b7 10.100.0.5'], port_security=['fa:16:3e:b0:33:b7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '14cd6797-cf47-44da-acac-0e5e3d5dfe11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7627e6bf071942db89329eee4a7d6b59', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0d49786c-dade-46e6-8a51-0f68b7957195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ee2cc9a-2b72-45fd-9005-f89765076013, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=c620cff4-b028-4d86-b951-0d489781da2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.431 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[db6bfac8-c97f-4596-8e7c-51c609262b99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.434 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9220b30b-5f0a-4ed9-8474-63b9b3e30ad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:16 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Nov 25 08:28:16 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001c.scope: Consumed 12.629s CPU time.
Nov 25 08:28:16 compute-0 systemd-machined[215790]: Machine qemu-33-instance-0000001c terminated.
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.465 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[27c364a4-baac-4016-95b5-6715312d05d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.482 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d96486-6f8b-4299-a5de-88fdd0eb0cdb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap431353a0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:1b:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463641, 'reachable_time': 15962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290789, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.500 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7e914b39-c53a-4386-8bb1-7c5bb8bbe49e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap431353a0-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463655, 'tstamp': 463655}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290790, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap431353a0-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463659, 'tstamp': 463659}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290790, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.502 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap431353a0-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.503 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.509 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.510 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap431353a0-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.510 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.511 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap431353a0-b0, col_values=(('external_ids', {'iface-id': 'eb9dc67a-a121-4efb-a3df-9647173b8d46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.511 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.512 162739 INFO neutron.agent.ovn.metadata.agent [-] Port c620cff4-b028-4d86-b951-0d489781da2f in datapath 431353a0-bdb3-445c-95e7-9cd19a8e3783 unbound from our chassis
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.514 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 431353a0-bdb3-445c-95e7-9cd19a8e3783, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.514 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[446cfd46-da82-4561-b905-4c7383c4bd87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:16.515 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783 namespace which is not needed anymore
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.648 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:16 compute-0 NetworkManager[48915]: <info>  [1764059296.6568] manager: (tapc620cff4-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.658 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.665 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.677 253542 INFO nova.virt.libvirt.driver [-] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Instance destroyed successfully.
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.677 253542 DEBUG nova.objects.instance [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lazy-loading 'resources' on Instance uuid 14cd6797-cf47-44da-acac-0e5e3d5dfe11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.690 253542 DEBUG nova.virt.libvirt.vif [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1019332220',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1019332220',id=28,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7627e6bf071942db89329eee4a7d6b59',ramdisk_id='',reservation_id='r-ykh6cze2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-968002196',owner_user_name='tempest-AttachInterfacesV270Test-968002196-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:02Z,user_data=None,user_id='5981df3e8536420ea5b8fcd98ef92e1b',uuid=14cd6797-cf47-44da-acac-0e5e3d5dfe11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.691 253542 DEBUG nova.network.os_vif_util [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converting VIF {"id": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "address": "fa:16:3e:f3:13:20", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb3ca9e2-cc", "ovs_interfaceid": "eb3ca9e2-cc78-478d-97c2-03b1c7d29b95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.691 253542 DEBUG nova.network.os_vif_util [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:13:20,bridge_name='br-int',has_traffic_filtering=True,id=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb3ca9e2-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.692 253542 DEBUG os_vif [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:13:20,bridge_name='br-int',has_traffic_filtering=True,id=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb3ca9e2-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.694 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb3ca9e2-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.696 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.697 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.700 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.702 253542 INFO os_vif [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:13:20,bridge_name='br-int',has_traffic_filtering=True,id=eb3ca9e2-cc78-478d-97c2-03b1c7d29b95,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb3ca9e2-cc')
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.702 253542 DEBUG nova.virt.libvirt.vif [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1019332220',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1019332220',id=28,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7627e6bf071942db89329eee4a7d6b59',ramdisk_id='',reservation_id='r-ykh6cze2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-968002196',owner_user_name='tempest-AttachInterfacesV270Test-968002196-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:02Z,user_data=None,user_id='5981df3e8536420ea5b8fcd98ef92e1b',uuid=14cd6797-cf47-44da-acac-0e5e3d5dfe11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c620cff4-b028-4d86-b951-0d489781da2f", "address": "fa:16:3e:b0:33:b7", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc620cff4-b0", "ovs_interfaceid": "c620cff4-b028-4d86-b951-0d489781da2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.703 253542 DEBUG nova.network.os_vif_util [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converting VIF {"id": "c620cff4-b028-4d86-b951-0d489781da2f", "address": "fa:16:3e:b0:33:b7", "network": {"id": "431353a0-bdb3-445c-95e7-9cd19a8e3783", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-862491740-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7627e6bf071942db89329eee4a7d6b59", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc620cff4-b0", "ovs_interfaceid": "c620cff4-b028-4d86-b951-0d489781da2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.703 253542 DEBUG nova.network.os_vif_util [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=c620cff4-b028-4d86-b951-0d489781da2f,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc620cff4-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.704 253542 DEBUG os_vif [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=c620cff4-b028-4d86-b951-0d489781da2f,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc620cff4-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.705 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.705 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc620cff4-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.712 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.713 253542 INFO os_vif [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=c620cff4-b028-4d86-b951-0d489781da2f,network=Network(431353a0-bdb3-445c-95e7-9cd19a8e3783),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc620cff4-b0')
Nov 25 08:28:16 compute-0 neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783[289960]: [NOTICE]   (289964) : haproxy version is 2.8.14-c23fe91
Nov 25 08:28:16 compute-0 neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783[289960]: [NOTICE]   (289964) : path to executable is /usr/sbin/haproxy
Nov 25 08:28:16 compute-0 neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783[289960]: [WARNING]  (289964) : Exiting Master process...
Nov 25 08:28:16 compute-0 neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783[289960]: [WARNING]  (289964) : Exiting Master process...
Nov 25 08:28:16 compute-0 neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783[289960]: [ALERT]    (289964) : Current worker (289966) exited with code 143 (Terminated)
Nov 25 08:28:16 compute-0 neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783[289960]: [WARNING]  (289964) : All workers exited. Exiting... (0)
Nov 25 08:28:16 compute-0 systemd[1]: libpod-36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16.scope: Deactivated successfully.
Nov 25 08:28:16 compute-0 podman[290811]: 2025-11-25 08:28:16.813422329 +0000 UTC m=+0.215704433 container died 36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 08:28:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3303659574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.895 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.895 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:28:16 compute-0 nova_compute[253538]: 2025-11-25 08:28:16.917 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.027 253542 DEBUG nova.compute.manager [req-a5068d2e-94b4-4f85-bca6-8102cf38dc9b req-0e348ebf-de72-4090-8c34-701b456d8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-unplugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.028 253542 DEBUG oslo_concurrency.lockutils [req-a5068d2e-94b4-4f85-bca6-8102cf38dc9b req-0e348ebf-de72-4090-8c34-701b456d8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.028 253542 DEBUG oslo_concurrency.lockutils [req-a5068d2e-94b4-4f85-bca6-8102cf38dc9b req-0e348ebf-de72-4090-8c34-701b456d8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.029 253542 DEBUG oslo_concurrency.lockutils [req-a5068d2e-94b4-4f85-bca6-8102cf38dc9b req-0e348ebf-de72-4090-8c34-701b456d8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.029 253542 DEBUG nova.compute.manager [req-a5068d2e-94b4-4f85-bca6-8102cf38dc9b req-0e348ebf-de72-4090-8c34-701b456d8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] No waiting events found dispatching network-vif-unplugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.030 253542 DEBUG nova.compute.manager [req-a5068d2e-94b4-4f85-bca6-8102cf38dc9b req-0e348ebf-de72-4090-8c34-701b456d8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-unplugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.036 253542 DEBUG nova.network.neutron [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Updating instance_info_cache with network_info: [{"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.053 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Releasing lock "refresh_cache-6998a6cf-b660-4558-98cf-bf5984775b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.054 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Instance network_info: |[{"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.058 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Start _get_guest_xml network_info=[{"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.066 253542 WARNING nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.072 253542 DEBUG nova.virt.libvirt.host [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.073 253542 DEBUG nova.virt.libvirt.host [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:28:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16-userdata-shm.mount: Deactivated successfully.
Nov 25 08:28:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d59ce0fb78beb290599dd8ea1cac60e0a337aea145db00ab9301e67824cad23-merged.mount: Deactivated successfully.
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.083 253542 DEBUG nova.virt.libvirt.host [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.084 253542 DEBUG nova.virt.libvirt.host [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.084 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.085 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:28:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 215 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.9 MiB/s wr, 224 op/s
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.086 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.087 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.087 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.088 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.089 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.089 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.090 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.090 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.091 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.091 253542 DEBUG nova.virt.hardware [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.097 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:17 compute-0 podman[290811]: 2025-11-25 08:28:17.187247711 +0000 UTC m=+0.589529795 container cleanup 36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.232 253542 DEBUG nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 30afcbba-78f3-433c-ba0a-5a2d25cf2d48] Received event network-vif-deleted-d6aa33fe-8dd6-4546-aa75-715ad57e5b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.234 253542 DEBUG nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.235 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.235 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.236 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.237 253542 DEBUG nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] No waiting events found dispatching network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.237 253542 WARNING nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received unexpected event network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f for instance with vm_state active and task_state deleting.
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.238 253542 DEBUG nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received event network-changed-65fd7d0e-59ee-4411-92ee-f934016f1d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.238 253542 DEBUG nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Refreshing instance network info cache due to event network-changed-65fd7d0e-59ee-4411-92ee-f934016f1d1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.239 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-6998a6cf-b660-4558-98cf-bf5984775b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.240 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-6998a6cf-b660-4558-98cf-bf5984775b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.240 253542 DEBUG nova.network.neutron [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Refreshing network info cache for port 65fd7d0e-59ee-4411-92ee-f934016f1d1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:28:17 compute-0 podman[290878]: 2025-11-25 08:28:17.435171493 +0000 UTC m=+0.207697541 container remove 36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 08:28:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.444 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f5eaa1de-1ff2-4b82-b7e3-df675eaac5b5]: (4, ('Tue Nov 25 08:28:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783 (36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16)\n36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16\nTue Nov 25 08:28:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783 (36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16)\n36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.445 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd93369-577a-4871-9226-7913ed44baaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.446 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap431353a0-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:17 compute-0 systemd[1]: libpod-conmon-36cca6be3031d3a8b0ade847ff4a722403a1ba0f8d0bfdc5b75ab94c88e38b16.scope: Deactivated successfully.
Nov 25 08:28:17 compute-0 kernel: tap431353a0-b0: left promiscuous mode
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.514 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.527 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.529 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a62f3cb8-0419-49bc-8ef9-5d5bad120cf1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:28:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2223899209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.545 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f29e96-4faf-4fcd-8cc1-575cab74614e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.546 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d08e1cd5-e5a0-4121-891b-ad8cad749366]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:28:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.564 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f17340a1-e306-4027-8cb7-491c18093de5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463633, 'reachable_time': 41570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290914, 'error': None, 'target': 'ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.567 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-431353a0-bdb3-445c-95e7-9cd19a8e3783 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:28:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d431353a0\x2dbdb3\x2d445c\x2d95e7\x2d9cd19a8e3783.mount: Deactivated successfully.
Nov 25 08:28:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:17.567 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[6f47a3b2-26cd-42d8-9462-0a470311ff7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.573 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.592 253542 DEBUG nova.storage.rbd_utils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 6998a6cf-b660-4558-98cf-bf5984775b1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.600 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.773 253542 INFO nova.virt.libvirt.driver [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Deleting instance files /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11_del
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.775 253542 INFO nova.virt.libvirt.driver [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Deletion of /var/lib/nova/instances/14cd6797-cf47-44da-acac-0e5e3d5dfe11_del complete
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.843 253542 INFO nova.compute.manager [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Took 1.63 seconds to destroy the instance on the hypervisor.
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.844 253542 DEBUG oslo.service.loopingcall [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.845 253542 DEBUG nova.compute.manager [-] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:28:17 compute-0 nova_compute[253538]: 2025-11-25 08:28:17.845 253542 DEBUG nova.network.neutron [-] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:28:17 compute-0 ceph-mon[75015]: pgmap v1296: 321 pgs: 321 active+clean; 215 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.9 MiB/s wr, 224 op/s
Nov 25 08:28:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2223899209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:28:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/837282604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.063 253542 DEBUG nova.network.neutron [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Successfully updated port: b62f3741-11c8-4840-a720-d6ee07f06284 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.069 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.070 253542 DEBUG nova.virt.libvirt.vif [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-577988769',display_name='tempest-ImagesTestJSON-server-577988769',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-577988769',id=30,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-r74nry8d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:10Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=6998a6cf-b660-4558-98cf-bf5984775b1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.071 253542 DEBUG nova.network.os_vif_util [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.072 253542 DEBUG nova.network.os_vif_util [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:80:23,bridge_name='br-int',has_traffic_filtering=True,id=65fd7d0e-59ee-4411-92ee-f934016f1d1f,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65fd7d0e-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.073 253542 DEBUG nova.objects.instance [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 6998a6cf-b660-4558-98cf-bf5984775b1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.115 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:28:18 compute-0 nova_compute[253538]:   <uuid>6998a6cf-b660-4558-98cf-bf5984775b1d</uuid>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   <name>instance-0000001e</name>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <nova:name>tempest-ImagesTestJSON-server-577988769</nova:name>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:28:17</nova:creationTime>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:28:18 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:28:18 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:28:18 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:28:18 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:28:18 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:28:18 compute-0 nova_compute[253538]:         <nova:user uuid="38fa175fb699405c9a05d7c28f994ebc">tempest-ImagesTestJSON-109091550-project-member</nova:user>
Nov 25 08:28:18 compute-0 nova_compute[253538]:         <nova:project uuid="b0a28d62fb1841c087b84b40bf5a54ec">tempest-ImagesTestJSON-109091550</nova:project>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:28:18 compute-0 nova_compute[253538]:         <nova:port uuid="65fd7d0e-59ee-4411-92ee-f934016f1d1f">
Nov 25 08:28:18 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <system>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <entry name="serial">6998a6cf-b660-4558-98cf-bf5984775b1d</entry>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <entry name="uuid">6998a6cf-b660-4558-98cf-bf5984775b1d</entry>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     </system>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   <os>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   </os>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   <features>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   </features>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/6998a6cf-b660-4558-98cf-bf5984775b1d_disk">
Nov 25 08:28:18 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       </source>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:28:18 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/6998a6cf-b660-4558-98cf-bf5984775b1d_disk.config">
Nov 25 08:28:18 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       </source>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:28:18 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:73:80:23"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <target dev="tap65fd7d0e-59"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d/console.log" append="off"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <video>
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     </video>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:28:18 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:28:18 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:28:18 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:28:18 compute-0 nova_compute[253538]: </domain>
Nov 25 08:28:18 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.121 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Preparing to wait for external event network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.122 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.122 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.122 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.123 253542 DEBUG nova.virt.libvirt.vif [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-577988769',display_name='tempest-ImagesTestJSON-server-577988769',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-577988769',id=30,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-r74nry8d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:10Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=6998a6cf-b660-4558-98cf-bf5984775b1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.123 253542 DEBUG nova.network.os_vif_util [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.124 253542 DEBUG nova.network.os_vif_util [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:80:23,bridge_name='br-int',has_traffic_filtering=True,id=65fd7d0e-59ee-4411-92ee-f934016f1d1f,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65fd7d0e-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.124 253542 DEBUG os_vif [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:80:23,bridge_name='br-int',has_traffic_filtering=True,id=65fd7d0e-59ee-4411-92ee-f934016f1d1f,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65fd7d0e-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.125 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.125 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.125 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.128 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.129 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65fd7d0e-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.129 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap65fd7d0e-59, col_values=(('external_ids', {'iface-id': '65fd7d0e-59ee-4411-92ee-f934016f1d1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:80:23', 'vm-uuid': '6998a6cf-b660-4558-98cf-bf5984775b1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:18 compute-0 NetworkManager[48915]: <info>  [1764059298.1319] manager: (tap65fd7d0e-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.135 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "refresh_cache-4c934302-d7cd-4826-835e-cab6dba97e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.135 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquired lock "refresh_cache-4c934302-d7cd-4826-835e-cab6dba97e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.136 253542 DEBUG nova.network.neutron [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.137 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.138 253542 INFO os_vif [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:80:23,bridge_name='br-int',has_traffic_filtering=True,id=65fd7d0e-59ee-4411-92ee-f934016f1d1f,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65fd7d0e-59')
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.214 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.215 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.215 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No VIF found with MAC fa:16:3e:73:80:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.216 253542 INFO nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Using config drive
Nov 25 08:28:18 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.237 253542 DEBUG nova.storage.rbd_utils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 6998a6cf-b660-4558-98cf-bf5984775b1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/837282604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:18.997 253542 DEBUG nova.network.neutron [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:28:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 210 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.3 MiB/s wr, 233 op/s
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.313 253542 INFO nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Creating config drive at /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d/disk.config
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.317 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqu7941ar execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.461 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqu7941ar" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.487 253542 DEBUG nova.storage.rbd_utils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 6998a6cf-b660-4558-98cf-bf5984775b1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.491 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d/disk.config 6998a6cf-b660-4558-98cf-bf5984775b1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.638 253542 DEBUG oslo_concurrency.processutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d/disk.config 6998a6cf-b660-4558-98cf-bf5984775b1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.640 253542 INFO nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Deleting local config drive /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d/disk.config because it was imported into RBD.
Nov 25 08:28:19 compute-0 kernel: tap65fd7d0e-59: entered promiscuous mode
Nov 25 08:28:19 compute-0 NetworkManager[48915]: <info>  [1764059299.6902] manager: (tap65fd7d0e-59): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Nov 25 08:28:19 compute-0 ovn_controller[152859]: 2025-11-25T08:28:19Z|00190|binding|INFO|Claiming lport 65fd7d0e-59ee-4411-92ee-f934016f1d1f for this chassis.
Nov 25 08:28:19 compute-0 ovn_controller[152859]: 2025-11-25T08:28:19Z|00191|binding|INFO|65fd7d0e-59ee-4411-92ee-f934016f1d1f: Claiming fa:16:3e:73:80:23 10.100.0.10
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.698 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.707 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:80:23 10.100.0.10'], port_security=['fa:16:3e:73:80:23 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6998a6cf-b660-4558-98cf-bf5984775b1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=65fd7d0e-59ee-4411-92ee-f934016f1d1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.709 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 65fd7d0e-59ee-4411-92ee-f934016f1d1f in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e bound to our chassis
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.711 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:28:19 compute-0 systemd-udevd[291029]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.729 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d8b162-ccb4-4de4-8a59-62668f306bee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.730 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapba659d6c-c1 in ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.732 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapba659d6c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.732 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e179db54-0dfa-4c22-a4f1-52093477db63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.733 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a6a6ad-f50f-40f1-b9c2-a5203d9a2fe5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:19 compute-0 NetworkManager[48915]: <info>  [1764059299.7400] device (tap65fd7d0e-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:28:19 compute-0 NetworkManager[48915]: <info>  [1764059299.7410] device (tap65fd7d0e-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:28:19 compute-0 systemd-machined[215790]: New machine qemu-35-instance-0000001e.
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.747 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d1943182-2098-4cbc-b20c-5e4d36df2cdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:19 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-0000001e.
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.760 253542 DEBUG nova.network.neutron [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Updated VIF entry in instance network info cache for port 65fd7d0e-59ee-4411-92ee-f934016f1d1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.760 253542 DEBUG nova.network.neutron [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Updating instance_info_cache with network_info: [{"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.768 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:19 compute-0 ovn_controller[152859]: 2025-11-25T08:28:19Z|00192|binding|INFO|Setting lport 65fd7d0e-59ee-4411-92ee-f934016f1d1f ovn-installed in OVS
Nov 25 08:28:19 compute-0 ovn_controller[152859]: 2025-11-25T08:28:19Z|00193|binding|INFO|Setting lport 65fd7d0e-59ee-4411-92ee-f934016f1d1f up in Southbound
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.774 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[578d09b3-664c-4564-8a93-7767c9cd79e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.776 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.780 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-6998a6cf-b660-4558-98cf-bf5984775b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.780 253542 DEBUG nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-unplugged-c620cff4-b028-4d86-b951-0d489781da2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.781 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.781 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.781 253542 DEBUG oslo_concurrency.lockutils [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.783 253542 DEBUG nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] No waiting events found dispatching network-vif-unplugged-c620cff4-b028-4d86-b951-0d489781da2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.783 253542 DEBUG nova.compute.manager [req-0d137173-6982-4ee3-9fef-a9a2757559d9 req-07825075-34a3-423b-b2ae-42131cfb4350 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-unplugged-c620cff4-b028-4d86-b951-0d489781da2f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:28:19 compute-0 nova_compute[253538]: 2025-11-25 08:28:19.804 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.804 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1cf661a6-6335-44da-b3c0-ec23ab132059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:19 compute-0 NetworkManager[48915]: <info>  [1764059299.8100] manager: (tapba659d6c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.808 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b99e90af-84d4-4dea-b006-2e357a25e7a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.838 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[73cbca7f-921b-49e1-9f0d-70f19400ed70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.842 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[31cda7a4-c392-439d-8e75-1a461b261efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:19 compute-0 NetworkManager[48915]: <info>  [1764059299.8653] device (tapba659d6c-c0): carrier: link connected
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.870 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa99d32-a968-4a62-90f8-abcfad042a42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.885 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f58dfe-1f33-4ce8-9194-d8252ad98b06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465446, 'reachable_time': 41021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291062, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.901 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2c4b18-4a39-47db-ada6-030a8b98ee97]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:c340'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465446, 'tstamp': 465446}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291063, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:19 compute-0 ceph-mon[75015]: pgmap v1297: 321 pgs: 321 active+clean; 210 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.3 MiB/s wr, 233 op/s
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.926 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[07c7a8cb-939c-4a66-b7e0-4545ebcb5d72]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465446, 'reachable_time': 41021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291064, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:19.957 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8d391f-63d5-4401-b349-df4681893d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.009 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0cf228-a61a-4d74-8001-99b44dfddd42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.020 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.020 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.021 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba659d6c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.023 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:20 compute-0 NetworkManager[48915]: <info>  [1764059300.0236] manager: (tapba659d6c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Nov 25 08:28:20 compute-0 kernel: tapba659d6c-c0: entered promiscuous mode
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.026 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.030 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapba659d6c-c0, col_values=(('external_ids', {'iface-id': '02ee51d1-7fc5-4815-93ec-b9ead088a46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.031 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:20 compute-0 ovn_controller[152859]: 2025-11-25T08:28:20Z|00194|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.034 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.035 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd12b5f-4b04-4908-a05b-239187a40815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.036 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:28:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:20.036 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'env', 'PROCESS_TAG=haproxy-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ba659d6c-c094-47d7-ba45-d0e659ce778e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.039 253542 DEBUG nova.network.neutron [-] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.045 253542 DEBUG nova.compute.manager [req-24913f6b-9e14-490d-b602-944de98bbad2 req-5991dc5b-e828-44c8-870f-66e33805b622 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.045 253542 DEBUG oslo_concurrency.lockutils [req-24913f6b-9e14-490d-b602-944de98bbad2 req-5991dc5b-e828-44c8-870f-66e33805b622 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.045 253542 DEBUG oslo_concurrency.lockutils [req-24913f6b-9e14-490d-b602-944de98bbad2 req-5991dc5b-e828-44c8-870f-66e33805b622 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.046 253542 DEBUG oslo_concurrency.lockutils [req-24913f6b-9e14-490d-b602-944de98bbad2 req-5991dc5b-e828-44c8-870f-66e33805b622 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.046 253542 DEBUG nova.compute.manager [req-24913f6b-9e14-490d-b602-944de98bbad2 req-5991dc5b-e828-44c8-870f-66e33805b622 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] No waiting events found dispatching network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.046 253542 WARNING nova.compute.manager [req-24913f6b-9e14-490d-b602-944de98bbad2 req-5991dc5b-e828-44c8-870f-66e33805b622 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received unexpected event network-vif-plugged-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 for instance with vm_state active and task_state deleting.
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.049 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.055 253542 INFO nova.compute.manager [-] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Took 2.21 seconds to deallocate network for instance.
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.120 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.120 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.121 253542 DEBUG nova.network.neutron [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Updating instance_info_cache with network_info: [{"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.136 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Releasing lock "refresh_cache-4c934302-d7cd-4826-835e-cab6dba97e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.137 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Instance network_info: |[{"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.139 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Start _get_guest_xml network_info=[{"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.144 253542 WARNING nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.154 253542 DEBUG nova.virt.libvirt.host [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.160 253542 DEBUG nova.virt.libvirt.host [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.165 253542 DEBUG nova.virt.libvirt.host [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.166 253542 DEBUG nova.virt.libvirt.host [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.166 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.167 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.167 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.167 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.168 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.168 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.168 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.168 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.169 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.169 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.169 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.170 253542 DEBUG nova.virt.hardware [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.173 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.205 253542 DEBUG nova.compute.manager [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.206 253542 DEBUG oslo_concurrency.lockutils [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.206 253542 DEBUG oslo_concurrency.lockutils [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.206 253542 DEBUG oslo_concurrency.lockutils [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.207 253542 DEBUG nova.compute.manager [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] No waiting events found dispatching network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.207 253542 WARNING nova.compute.manager [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received unexpected event network-vif-plugged-c620cff4-b028-4d86-b951-0d489781da2f for instance with vm_state deleted and task_state None.
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.207 253542 DEBUG nova.compute.manager [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received event network-changed-b62f3741-11c8-4840-a720-d6ee07f06284 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.207 253542 DEBUG nova.compute.manager [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Refreshing instance network info cache due to event network-changed-b62f3741-11c8-4840-a720-d6ee07f06284. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.208 253542 DEBUG oslo_concurrency.lockutils [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-4c934302-d7cd-4826-835e-cab6dba97e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.208 253542 DEBUG oslo_concurrency.lockutils [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-4c934302-d7cd-4826-835e-cab6dba97e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.208 253542 DEBUG nova.network.neutron [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Refreshing network info cache for port b62f3741-11c8-4840-a720-d6ee07f06284 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.265 253542 DEBUG oslo_concurrency.processutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:20 compute-0 podman[291114]: 2025-11-25 08:28:20.417171304 +0000 UTC m=+0.031966764 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:28:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:28:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3984392094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.634 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.671 253542 DEBUG nova.storage.rbd_utils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] rbd image 4c934302-d7cd-4826-835e-cab6dba97e3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.675 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:28:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2221969757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.705 253542 DEBUG oslo_concurrency.processutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.714 253542 DEBUG nova.compute.provider_tree [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.747 253542 DEBUG nova.scheduler.client.report [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.760 253542 DEBUG nova.compute.manager [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.775 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.802 253542 INFO nova.compute.manager [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] instance snapshotting
Nov 25 08:28:20 compute-0 podman[291114]: 2025-11-25 08:28:20.81895388 +0000 UTC m=+0.433749320 container create a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.831 253542 INFO nova.scheduler.client.report [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Deleted allocations for instance 14cd6797-cf47-44da-acac-0e5e3d5dfe11
Nov 25 08:28:20 compute-0 systemd[1]: Started libpod-conmon-a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d.scope.
Nov 25 08:28:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.894 253542 DEBUG oslo_concurrency.lockutils [None req-2c8e5d4b-37aa-44f4-8353-01f647647394 5981df3e8536420ea5b8fcd98ef92e1b 7627e6bf071942db89329eee4a7d6b59 - - default default] Lock "14cd6797-cf47-44da-acac-0e5e3d5dfe11" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7948cd9e3db77c2b6e7fa96e239c18a011489fdccbd892f78fbbd5fdba626e7d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:20 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3984392094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:20 compute-0 podman[291114]: 2025-11-25 08:28:20.91050122 +0000 UTC m=+0.525296670 container init a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 25 08:28:20 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2221969757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:20 compute-0 podman[291114]: 2025-11-25 08:28:20.916466225 +0000 UTC m=+0.531261665 container start a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 08:28:20 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[291226]: [NOTICE]   (291234) : New worker (291237) forked
Nov 25 08:28:20 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[291226]: [NOTICE]   (291234) : Loading success.
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.978 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059300.977583, 6998a6cf-b660-4558-98cf-bf5984775b1d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.978 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] VM Started (Lifecycle Event)
Nov 25 08:28:20 compute-0 nova_compute[253538]: 2025-11-25 08:28:20.997 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.000 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059300.9778752, 6998a6cf-b660-4558-98cf-bf5984775b1d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.000 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] VM Paused (Lifecycle Event)
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.019 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.022 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.045 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:28:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 210 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.2 MiB/s wr, 218 op/s
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.138 253542 INFO nova.virt.libvirt.driver [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Beginning live snapshot process
Nov 25 08:28:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:28:21 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2752781916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.166 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.168 253542 DEBUG nova.virt.libvirt.vif [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1824498337',display_name='tempest-ImagesOneServerTestJSON-server-1824498337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1824498337',id=31,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bd945273cd04d8981dcb3a319e8d026',ramdisk_id='',reservation_id='r-p00agd9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-174767469',owner_user_name='tempest-ImagesOneServerTestJSON-174767469-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:14Z,user_data=None,user_id='ee2fe69e0dfa4467926cec954790823e',uuid=4c934302-d7cd-4826-835e-cab6dba97e3a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.168 253542 DEBUG nova.network.os_vif_util [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Converting VIF {"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.169 253542 DEBUG nova.network.os_vif_util [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:9b:99,bridge_name='br-int',has_traffic_filtering=True,id=b62f3741-11c8-4840-a720-d6ee07f06284,network=Network(0070171d-b7ca-4ed3-baea-814d9cd382de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb62f3741-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.171 253542 DEBUG nova.objects.instance [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c934302-d7cd-4826-835e-cab6dba97e3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.188 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:28:21 compute-0 nova_compute[253538]:   <uuid>4c934302-d7cd-4826-835e-cab6dba97e3a</uuid>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   <name>instance-0000001f</name>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <nova:name>tempest-ImagesOneServerTestJSON-server-1824498337</nova:name>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:28:20</nova:creationTime>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:28:21 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:28:21 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:28:21 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:28:21 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:28:21 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:28:21 compute-0 nova_compute[253538]:         <nova:user uuid="ee2fe69e0dfa4467926cec954790823e">tempest-ImagesOneServerTestJSON-174767469-project-member</nova:user>
Nov 25 08:28:21 compute-0 nova_compute[253538]:         <nova:project uuid="9bd945273cd04d8981dcb3a319e8d026">tempest-ImagesOneServerTestJSON-174767469</nova:project>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:28:21 compute-0 nova_compute[253538]:         <nova:port uuid="b62f3741-11c8-4840-a720-d6ee07f06284">
Nov 25 08:28:21 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <system>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <entry name="serial">4c934302-d7cd-4826-835e-cab6dba97e3a</entry>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <entry name="uuid">4c934302-d7cd-4826-835e-cab6dba97e3a</entry>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     </system>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   <os>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   </os>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   <features>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   </features>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4c934302-d7cd-4826-835e-cab6dba97e3a_disk">
Nov 25 08:28:21 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       </source>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:28:21 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4c934302-d7cd-4826-835e-cab6dba97e3a_disk.config">
Nov 25 08:28:21 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       </source>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:28:21 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:c9:9b:99"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <target dev="tapb62f3741-11"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a/console.log" append="off"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <video>
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     </video>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:28:21 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:28:21 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:28:21 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:28:21 compute-0 nova_compute[253538]: </domain>
Nov 25 08:28:21 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.189 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Preparing to wait for external event network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.189 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.190 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.190 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.191 253542 DEBUG nova.virt.libvirt.vif [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1824498337',display_name='tempest-ImagesOneServerTestJSON-server-1824498337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1824498337',id=31,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bd945273cd04d8981dcb3a319e8d026',ramdisk_id='',reservation_id='r-p00agd9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-174767469',owner_user_name='tempest-ImagesOneServerTestJSON-174767469-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:14Z,user_data=None,user_id='ee2fe69e0dfa4467926cec954790823e',uuid=4c934302-d7cd-4826-835e-cab6dba97e3a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.191 253542 DEBUG nova.network.os_vif_util [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Converting VIF {"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.192 253542 DEBUG nova.network.os_vif_util [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:9b:99,bridge_name='br-int',has_traffic_filtering=True,id=b62f3741-11c8-4840-a720-d6ee07f06284,network=Network(0070171d-b7ca-4ed3-baea-814d9cd382de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb62f3741-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.193 253542 DEBUG os_vif [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:9b:99,bridge_name='br-int',has_traffic_filtering=True,id=b62f3741-11c8-4840-a720-d6ee07f06284,network=Network(0070171d-b7ca-4ed3-baea-814d9cd382de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb62f3741-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.194 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.194 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.195 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.200 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.201 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb62f3741-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.202 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb62f3741-11, col_values=(('external_ids', {'iface-id': 'b62f3741-11c8-4840-a720-d6ee07f06284', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:9b:99', 'vm-uuid': '4c934302-d7cd-4826-835e-cab6dba97e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:21 compute-0 NetworkManager[48915]: <info>  [1764059301.2049] manager: (tapb62f3741-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.207 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.210 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.211 253542 INFO os_vif [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:9b:99,bridge_name='br-int',has_traffic_filtering=True,id=b62f3741-11c8-4840-a720-d6ee07f06284,network=Network(0070171d-b7ca-4ed3-baea-814d9cd382de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb62f3741-11')
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.290 253542 DEBUG nova.virt.libvirt.imagebackend [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.310 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.310 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.311 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] No VIF found with MAC fa:16:3e:c9:9b:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.311 253542 INFO nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Using config drive
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.329 253542 DEBUG nova.storage.rbd_utils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] rbd image 4c934302-d7cd-4826-835e-cab6dba97e3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.453 253542 DEBUG nova.storage.rbd_utils [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] creating snapshot(55c71d68d39f458abfbf1f8209ae0ece) on rbd image(59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.673 253542 DEBUG nova.network.neutron [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Updated VIF entry in instance network info cache for port b62f3741-11c8-4840-a720-d6ee07f06284. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.673 253542 DEBUG nova.network.neutron [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Updating instance_info_cache with network_info: [{"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.702 253542 DEBUG oslo_concurrency.lockutils [req-d5e41382-4243-4e0c-a72d-eb44af4f2f14 req-0addf58c-2f95-46d8-b275-548b87250e2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4c934302-d7cd-4826-835e-cab6dba97e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.754 253542 INFO nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Creating config drive at /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a/disk.config
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.759 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_4cbba3g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.891 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_4cbba3g" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.912 253542 DEBUG nova.storage.rbd_utils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] rbd image 4c934302-d7cd-4826-835e-cab6dba97e3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.916 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a/disk.config 4c934302-d7cd-4826-835e-cab6dba97e3a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:21 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 25 08:28:21 compute-0 ceph-mon[75015]: pgmap v1298: 321 pgs: 321 active+clean; 210 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.2 MiB/s wr, 218 op/s
Nov 25 08:28:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2752781916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 25 08:28:21 compute-0 nova_compute[253538]: 2025-11-25 08:28:21.990 253542 DEBUG nova.storage.rbd_utils [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] cloning vms/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk@55c71d68d39f458abfbf1f8209ae0ece to images/28dfc6fb-4f2c-4796-bac4-301408e87b71 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.131 253542 DEBUG nova.compute.manager [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-deleted-c620cff4-b028-4d86-b951-0d489781da2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.132 253542 DEBUG nova.compute.manager [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received event network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.132 253542 DEBUG oslo_concurrency.lockutils [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.133 253542 DEBUG oslo_concurrency.lockutils [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.133 253542 DEBUG oslo_concurrency.lockutils [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.133 253542 DEBUG nova.compute.manager [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Processing event network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.133 253542 DEBUG nova.compute.manager [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Received event network-vif-deleted-eb3ca9e2-cc78-478d-97c2-03b1c7d29b95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.134 253542 DEBUG nova.compute.manager [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received event network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.134 253542 DEBUG oslo_concurrency.lockutils [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.134 253542 DEBUG oslo_concurrency.lockutils [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.134 253542 DEBUG oslo_concurrency.lockutils [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.135 253542 DEBUG nova.compute.manager [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] No waiting events found dispatching network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.135 253542 WARNING nova.compute.manager [req-7c8135ec-ab5c-4435-862d-93b20778d67b req-797760c7-3753-40cb-9fa3-64f1540c022d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received unexpected event network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f for instance with vm_state building and task_state spawning.
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.136 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.140 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059302.140143, 6998a6cf-b660-4558-98cf-bf5984775b1d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.140 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] VM Resumed (Lifecycle Event)
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.142 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.145 253542 INFO nova.virt.libvirt.driver [-] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Instance spawned successfully.
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.145 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.171 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.174 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.175 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.175 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.175 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.176 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.176 253542 DEBUG nova.virt.libvirt.driver [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.180 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.210 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.338 253542 INFO nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Took 12.00 seconds to spawn the instance on the hypervisor.
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.339 253542 DEBUG nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.341 253542 DEBUG oslo_concurrency.processutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a/disk.config 4c934302-d7cd-4826-835e-cab6dba97e3a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.343 253542 INFO nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Deleting local config drive /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a/disk.config because it was imported into RBD.
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.368 253542 DEBUG nova.storage.rbd_utils [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] flattening images/28dfc6fb-4f2c-4796-bac4-301408e87b71 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:28:22 compute-0 kernel: tapb62f3741-11: entered promiscuous mode
Nov 25 08:28:22 compute-0 NetworkManager[48915]: <info>  [1764059302.4012] manager: (tapb62f3741-11): new Tun device (/org/freedesktop/NetworkManager/Devices/100)
Nov 25 08:28:22 compute-0 systemd-udevd[291055]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:28:22 compute-0 ovn_controller[152859]: 2025-11-25T08:28:22Z|00195|binding|INFO|Claiming lport b62f3741-11c8-4840-a720-d6ee07f06284 for this chassis.
Nov 25 08:28:22 compute-0 ovn_controller[152859]: 2025-11-25T08:28:22Z|00196|binding|INFO|b62f3741-11c8-4840-a720-d6ee07f06284: Claiming fa:16:3e:c9:9b:99 10.100.0.9
Nov 25 08:28:22 compute-0 NetworkManager[48915]: <info>  [1764059302.4150] device (tapb62f3741-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:28:22 compute-0 NetworkManager[48915]: <info>  [1764059302.4162] device (tapb62f3741-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.439 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:22 compute-0 systemd-machined[215790]: New machine qemu-36-instance-0000001f.
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.448 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:9b:99 10.100.0.9'], port_security=['fa:16:3e:c9:9b:99 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4c934302-d7cd-4826-835e-cab6dba97e3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0070171d-b7ca-4ed3-baea-814d9cd382de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bd945273cd04d8981dcb3a319e8d026', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ee7f6e6-6de7-4c93-8dc8-a8140fbc4a5f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2ce04b2-ff6f-4536-bd4d-73688e8a9b75, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=b62f3741-11c8-4840-a720-d6ee07f06284) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.450 162739 INFO neutron.agent.ovn.metadata.agent [-] Port b62f3741-11c8-4840-a720-d6ee07f06284 in datapath 0070171d-b7ca-4ed3-baea-814d9cd382de bound to our chassis
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.452 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0070171d-b7ca-4ed3-baea-814d9cd382de
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.457 253542 INFO nova.compute.manager [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Took 13.22 seconds to build instance.
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.462 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d602cb-3be1-49c0-9d49-521d82482bb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.463 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0070171d-b1 in ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.469 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0070171d-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.469 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[81fe3fa5-9c94-49f1-b61f-1e89184d5aec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.469 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cb245cfd-2cf0-4cf4-8a5d-d4533cf465e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:22 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-0000001f.
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.486 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[de65e1f6-ba14-467d-af91-96f470a59978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.495 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:22 compute-0 ovn_controller[152859]: 2025-11-25T08:28:22Z|00197|binding|INFO|Setting lport b62f3741-11c8-4840-a720-d6ee07f06284 ovn-installed in OVS
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.500 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:22 compute-0 ovn_controller[152859]: 2025-11-25T08:28:22Z|00198|binding|INFO|Setting lport b62f3741-11c8-4840-a720-d6ee07f06284 up in Southbound
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.508 253542 DEBUG oslo_concurrency.lockutils [None req-e3022e31-fb04-4eb1-b57c-c8286674db75 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.324s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.515 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c441706a-d7ac-47bd-b5ea-b3723fdb0cce]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.543 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0d024a-f0dc-46e9-b041-757b088acde4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:22 compute-0 NetworkManager[48915]: <info>  [1764059302.5489] manager: (tap0070171d-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/101)
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.548 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[36f617e5-a9fe-485a-bf6e-a52e1127b8b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.582 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f8d0d8-e7f0-4785-8f42-0d5016c692df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.585 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[56915201-bd95-4b2f-bba1-639124801146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:22 compute-0 NetworkManager[48915]: <info>  [1764059302.6121] device (tap0070171d-b0): carrier: link connected
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.615 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f89de0df-00c3-4dd2-ba5b-263b57f401cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.631 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bf0b1f73-413e-40fc-b511-8b10c1c800ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0070171d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:a2:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465721, 'reachable_time': 36339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291444, 'error': None, 'target': 'ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.652 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6322f8b4-84c8-4eeb-a5e3-5ed6a1660dab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4a:a24f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465721, 'tstamp': 465721}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291445, 'error': None, 'target': 'ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.681 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[696333dc-4ad0-468a-b813-6ef2b4cfe4bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0070171d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:a2:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465721, 'reachable_time': 36339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291446, 'error': None, 'target': 'ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.718 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[590fe60d-199c-483a-8eb9-903dc4c3e5fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.800 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c7e3e4-69cb-42a9-93d1-54a69c0c530d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.802 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0070171d-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.802 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.802 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0070171d-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:22 compute-0 NetworkManager[48915]: <info>  [1764059302.8051] manager: (tap0070171d-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.804 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:22 compute-0 kernel: tap0070171d-b0: entered promiscuous mode
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.808 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0070171d-b0, col_values=(('external_ids', {'iface-id': 'd8cb45b7-fcc6-4a5e-82c7-1991008fce33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:22 compute-0 ovn_controller[152859]: 2025-11-25T08:28:22Z|00199|binding|INFO|Releasing lport d8cb45b7-fcc6-4a5e-82c7-1991008fce33 from this chassis (sb_readonly=0)
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.810 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.811 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0070171d-b7ca-4ed3-baea-814d9cd382de.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0070171d-b7ca-4ed3-baea-814d9cd382de.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.817 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0581f5e2-108a-46d7-a6fb-9f19117525ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.818 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-0070171d-b7ca-4ed3-baea-814d9cd382de
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/0070171d-b7ca-4ed3-baea-814d9cd382de.pid.haproxy
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 0070171d-b7ca-4ed3-baea-814d9cd382de
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:28:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:22.818 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de', 'env', 'PROCESS_TAG=haproxy-0070171d-b7ca-4ed3-baea-814d9cd382de', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0070171d-b7ca-4ed3-baea-814d9cd382de.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.874 253542 DEBUG nova.storage.rbd_utils [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] removing snapshot(55c71d68d39f458abfbf1f8209ae0ece) on rbd image(59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:28:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 25 08:28:22 compute-0 ceph-mon[75015]: osdmap e125: 3 total, 3 up, 3 in
Nov 25 08:28:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 25 08:28:22 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 25 08:28:22 compute-0 nova_compute[253538]: 2025-11-25 08:28:22.976 253542 DEBUG nova.storage.rbd_utils [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] creating snapshot(snap) on rbd image(28dfc6fb-4f2c-4796-bac4-301408e87b71) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:28:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 186 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 489 KiB/s rd, 5.5 MiB/s wr, 161 op/s
Nov 25 08:28:23 compute-0 podman[291514]: 2025-11-25 08:28:23.24446996 +0000 UTC m=+0.059585828 container create e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 08:28:23 compute-0 systemd[1]: Started libpod-conmon-e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0.scope.
Nov 25 08:28:23 compute-0 podman[291514]: 2025-11-25 08:28:23.211826508 +0000 UTC m=+0.026942406 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:28:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38844f50af0338993d204476dce181cc29aa7db95beeb492a051beb5aa7d312/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:23 compute-0 podman[291514]: 2025-11-25 08:28:23.344757813 +0000 UTC m=+0.159873711 container init e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 08:28:23 compute-0 podman[291514]: 2025-11-25 08:28:23.352510966 +0000 UTC m=+0.167626834 container start e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 08:28:23 compute-0 neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de[291562]: [NOTICE]   (291575) : New worker (291578) forked
Nov 25 08:28:23 compute-0 neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de[291562]: [NOTICE]   (291575) : Loading success.
Nov 25 08:28:23 compute-0 nova_compute[253538]: 2025-11-25 08:28:23.419 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059303.4188738, 4c934302-d7cd-4826-835e-cab6dba97e3a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:23 compute-0 nova_compute[253538]: 2025-11-25 08:28:23.420 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] VM Started (Lifecycle Event)
Nov 25 08:28:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:28:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:28:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:28:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:28:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:28:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:28:23 compute-0 nova_compute[253538]: 2025-11-25 08:28:23.443 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:23 compute-0 nova_compute[253538]: 2025-11-25 08:28:23.448 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059303.4189668, 4c934302-d7cd-4826-835e-cab6dba97e3a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:23 compute-0 nova_compute[253538]: 2025-11-25 08:28:23.448 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] VM Paused (Lifecycle Event)
Nov 25 08:28:23 compute-0 nova_compute[253538]: 2025-11-25 08:28:23.463 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:23 compute-0 nova_compute[253538]: 2025-11-25 08:28:23.467 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:28:23 compute-0 nova_compute[253538]: 2025-11-25 08:28:23.483 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:28:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 25 08:28:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 25 08:28:23 compute-0 ceph-mon[75015]: osdmap e126: 3 total, 3 up, 3 in
Nov 25 08:28:23 compute-0 ceph-mon[75015]: pgmap v1301: 321 pgs: 321 active+clean; 186 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 489 KiB/s rd, 5.5 MiB/s wr, 161 op/s
Nov 25 08:28:23 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 25 08:28:24 compute-0 ovn_controller[152859]: 2025-11-25T08:28:24Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:41:bb 10.100.0.9
Nov 25 08:28:24 compute-0 ovn_controller[152859]: 2025-11-25T08:28:24Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:41:bb 10.100.0.9
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 28dfc6fb-4f2c-4796-bac4-301408e87b71 could not be found.
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 28dfc6fb-4f2c-4796-bac4-301408e87b71
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver 
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver 
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 28dfc6fb-4f2c-4796-bac4-301408e87b71 could not be found.
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.112 253542 ERROR nova.virt.libvirt.driver 
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.413 253542 DEBUG nova.compute.manager [req-497feb2e-6932-43a3-9b65-5a2a8fde9303 req-8f8da338-818a-43e7-a29a-10108817084b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received event network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.414 253542 DEBUG oslo_concurrency.lockutils [req-497feb2e-6932-43a3-9b65-5a2a8fde9303 req-8f8da338-818a-43e7-a29a-10108817084b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.414 253542 DEBUG oslo_concurrency.lockutils [req-497feb2e-6932-43a3-9b65-5a2a8fde9303 req-8f8da338-818a-43e7-a29a-10108817084b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.415 253542 DEBUG oslo_concurrency.lockutils [req-497feb2e-6932-43a3-9b65-5a2a8fde9303 req-8f8da338-818a-43e7-a29a-10108817084b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.415 253542 DEBUG nova.compute.manager [req-497feb2e-6932-43a3-9b65-5a2a8fde9303 req-8f8da338-818a-43e7-a29a-10108817084b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Processing event network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.419 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.435 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059304.4341586, 4c934302-d7cd-4826-835e-cab6dba97e3a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.436 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] VM Resumed (Lifecycle Event)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.437 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.437 253542 DEBUG nova.storage.rbd_utils [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] removing snapshot(snap) on rbd image(28dfc6fb-4f2c-4796-bac4-301408e87b71) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.441 253542 INFO nova.virt.libvirt.driver [-] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Instance spawned successfully.
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.441 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.456 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.460 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.465 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.465 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.465 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.465 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.466 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.466 253542 DEBUG nova.virt.libvirt.driver [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.469 253542 DEBUG nova.objects.instance [None req-70405417-f9f7-44d7-8864-4fb84eda8c4f 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 6998a6cf-b660-4558-98cf-bf5984775b1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.507 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.511 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059304.5114098, 6998a6cf-b660-4558-98cf-bf5984775b1d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.511 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] VM Paused (Lifecycle Event)
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.531 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.543 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.572 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.618 253542 INFO nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Took 10.27 seconds to spawn the instance on the hypervisor.
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.619 253542 DEBUG nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.645 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:24.644 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:28:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:24.646 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.706 253542 INFO nova.compute.manager [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Took 11.40 seconds to build instance.
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.766 253542 DEBUG oslo_concurrency.lockutils [None req-d8ccf12d-2ede-4970-9c6f-82a0d8301467 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:24 compute-0 nova_compute[253538]: 2025-11-25 08:28:24.806 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 25 08:28:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 25 08:28:24 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 25 08:28:24 compute-0 ceph-mon[75015]: osdmap e127: 3 total, 3 up, 3 in
Nov 25 08:28:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:28:25 compute-0 kernel: tap65fd7d0e-59 (unregistering): left promiscuous mode
Nov 25 08:28:25 compute-0 NetworkManager[48915]: <info>  [1764059305.0291] device (tap65fd7d0e-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:28:25 compute-0 ovn_controller[152859]: 2025-11-25T08:28:25Z|00200|binding|INFO|Releasing lport 65fd7d0e-59ee-4411-92ee-f934016f1d1f from this chassis (sb_readonly=0)
Nov 25 08:28:25 compute-0 ovn_controller[152859]: 2025-11-25T08:28:25Z|00201|binding|INFO|Setting lport 65fd7d0e-59ee-4411-92ee-f934016f1d1f down in Southbound
Nov 25 08:28:25 compute-0 ovn_controller[152859]: 2025-11-25T08:28:25Z|00202|binding|INFO|Removing iface tap65fd7d0e-59 ovn-installed in OVS
Nov 25 08:28:25 compute-0 nova_compute[253538]: 2025-11-25 08:28:25.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:25 compute-0 nova_compute[253538]: 2025-11-25 08:28:25.063 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.085 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:80:23 10.100.0.10'], port_security=['fa:16:3e:73:80:23 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6998a6cf-b660-4558-98cf-bf5984775b1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=65fd7d0e-59ee-4411-92ee-f934016f1d1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:28:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.089 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 65fd7d0e-59ee-4411-92ee-f934016f1d1f in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e unbound from our chassis
Nov 25 08:28:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 234 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 8.7 MiB/s rd, 11 MiB/s wr, 477 op/s
Nov 25 08:28:25 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Nov 25 08:28:25 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001e.scope: Consumed 3.481s CPU time.
Nov 25 08:28:25 compute-0 systemd-machined[215790]: Machine qemu-35-instance-0000001e terminated.
Nov 25 08:28:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.100 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba659d6c-c094-47d7-ba45-d0e659ce778e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:28:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.102 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0bc5aa-e4d6-4b84-a91d-1c469f8fea08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.104 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace which is not needed anymore
Nov 25 08:28:25 compute-0 nova_compute[253538]: 2025-11-25 08:28:25.208 253542 DEBUG nova.compute.manager [None req-70405417-f9f7-44d7-8864-4fb84eda8c4f 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:25 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[291226]: [NOTICE]   (291234) : haproxy version is 2.8.14-c23fe91
Nov 25 08:28:25 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[291226]: [NOTICE]   (291234) : path to executable is /usr/sbin/haproxy
Nov 25 08:28:25 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[291226]: [WARNING]  (291234) : Exiting Master process...
Nov 25 08:28:25 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[291226]: [WARNING]  (291234) : Exiting Master process...
Nov 25 08:28:25 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[291226]: [ALERT]    (291234) : Current worker (291237) exited with code 143 (Terminated)
Nov 25 08:28:25 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[291226]: [WARNING]  (291234) : All workers exited. Exiting... (0)
Nov 25 08:28:25 compute-0 systemd[1]: libpod-a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d.scope: Deactivated successfully.
Nov 25 08:28:25 compute-0 conmon[291226]: conmon a52ce674315fe1f72af4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d.scope/container/memory.events
Nov 25 08:28:25 compute-0 podman[291657]: 2025-11-25 08:28:25.310650928 +0000 UTC m=+0.054699902 container died a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:28:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d-userdata-shm.mount: Deactivated successfully.
Nov 25 08:28:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-7948cd9e3db77c2b6e7fa96e239c18a011489fdccbd892f78fbbd5fdba626e7d-merged.mount: Deactivated successfully.
Nov 25 08:28:25 compute-0 nova_compute[253538]: 2025-11-25 08:28:25.350 253542 DEBUG nova.compute.manager [req-7477ccbc-064d-410a-85c1-32f3e0dda821 req-7f2290fa-be28-4eb2-96e3-057cfafad11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received event network-vif-unplugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:25 compute-0 nova_compute[253538]: 2025-11-25 08:28:25.352 253542 DEBUG oslo_concurrency.lockutils [req-7477ccbc-064d-410a-85c1-32f3e0dda821 req-7f2290fa-be28-4eb2-96e3-057cfafad11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:25 compute-0 nova_compute[253538]: 2025-11-25 08:28:25.353 253542 DEBUG oslo_concurrency.lockutils [req-7477ccbc-064d-410a-85c1-32f3e0dda821 req-7f2290fa-be28-4eb2-96e3-057cfafad11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:25 compute-0 nova_compute[253538]: 2025-11-25 08:28:25.353 253542 DEBUG oslo_concurrency.lockutils [req-7477ccbc-064d-410a-85c1-32f3e0dda821 req-7f2290fa-be28-4eb2-96e3-057cfafad11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:25 compute-0 nova_compute[253538]: 2025-11-25 08:28:25.353 253542 DEBUG nova.compute.manager [req-7477ccbc-064d-410a-85c1-32f3e0dda821 req-7f2290fa-be28-4eb2-96e3-057cfafad11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] No waiting events found dispatching network-vif-unplugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:25 compute-0 nova_compute[253538]: 2025-11-25 08:28:25.354 253542 WARNING nova.compute.manager [req-7477ccbc-064d-410a-85c1-32f3e0dda821 req-7f2290fa-be28-4eb2-96e3-057cfafad11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received unexpected event network-vif-unplugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f for instance with vm_state suspended and task_state None.
Nov 25 08:28:25 compute-0 podman[291657]: 2025-11-25 08:28:25.370173074 +0000 UTC m=+0.114222048 container cleanup a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:28:25 compute-0 systemd[1]: libpod-conmon-a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d.scope: Deactivated successfully.
Nov 25 08:28:25 compute-0 podman[291687]: 2025-11-25 08:28:25.451845061 +0000 UTC m=+0.054875448 container remove a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:28:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.460 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[187a6dd0-309c-4898-9c04-fb122cd92f63]: (4, ('Tue Nov 25 08:28:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d)\na52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d\nTue Nov 25 08:28:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (a52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d)\na52ce674315fe1f72af4d85a4b1eace3568c16e615f4490b2a4b9368ff45761d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.462 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[81b84588-dd03-45d9-a8cf-98ecc2f8dff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:25 compute-0 nova_compute[253538]: 2025-11-25 08:28:25.462 253542 WARNING nova.compute.manager [None req-b9b3cf22-1625-4d33-91e2-a44aef1e71a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Image not found during snapshot: nova.exception.ImageNotFound: Image 28dfc6fb-4f2c-4796-bac4-301408e87b71 could not be found.
Nov 25 08:28:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.463 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:25 compute-0 kernel: tapba659d6c-c0: left promiscuous mode
Nov 25 08:28:25 compute-0 nova_compute[253538]: 2025-11-25 08:28:25.465 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:25 compute-0 nova_compute[253538]: 2025-11-25 08:28:25.485 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.489 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[695f0c5f-885f-40a0-ae8b-b85181f042de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.501 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea4f683-6108-4550-a367-a38e9452cd69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.502 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5626f217-eb1a-457d-9efb-697fd25f5c31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.522 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc9a87f-08c7-47ca-bb32-c08bc44e1147]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465439, 'reachable_time': 20682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291706, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.526 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:28:25 compute-0 systemd[1]: run-netns-ovnmeta\x2dba659d6c\x2dc094\x2d47d7\x2dba45\x2dd0e659ce778e.mount: Deactivated successfully.
Nov 25 08:28:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:25.526 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[adaec329-9eb6-489c-8f7f-a256bd0e76ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:25 compute-0 ceph-mon[75015]: osdmap e128: 3 total, 3 up, 3 in
Nov 25 08:28:25 compute-0 ceph-mon[75015]: pgmap v1304: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 234 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 8.7 MiB/s rd, 11 MiB/s wr, 477 op/s
Nov 25 08:28:26 compute-0 ovn_controller[152859]: 2025-11-25T08:28:26Z|00203|binding|INFO|Releasing lport d8cb45b7-fcc6-4a5e-82c7-1991008fce33 from this chassis (sb_readonly=0)
Nov 25 08:28:26 compute-0 ovn_controller[152859]: 2025-11-25T08:28:26Z|00204|binding|INFO|Releasing lport ac244317-fa52-4a6a-92f4-98845a41804d from this chassis (sb_readonly=0)
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.205 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.268 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.297 253542 DEBUG nova.compute.manager [req-e97c4579-f27e-4665-ad63-b2d873470b85 req-cf47b261-58ad-4efa-9caa-253cb5b1ecfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received event network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.298 253542 DEBUG oslo_concurrency.lockutils [req-e97c4579-f27e-4665-ad63-b2d873470b85 req-cf47b261-58ad-4efa-9caa-253cb5b1ecfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.298 253542 DEBUG oslo_concurrency.lockutils [req-e97c4579-f27e-4665-ad63-b2d873470b85 req-cf47b261-58ad-4efa-9caa-253cb5b1ecfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.298 253542 DEBUG oslo_concurrency.lockutils [req-e97c4579-f27e-4665-ad63-b2d873470b85 req-cf47b261-58ad-4efa-9caa-253cb5b1ecfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.298 253542 DEBUG nova.compute.manager [req-e97c4579-f27e-4665-ad63-b2d873470b85 req-cf47b261-58ad-4efa-9caa-253cb5b1ecfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] No waiting events found dispatching network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.299 253542 WARNING nova.compute.manager [req-e97c4579-f27e-4665-ad63-b2d873470b85 req-cf47b261-58ad-4efa-9caa-253cb5b1ecfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received unexpected event network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 for instance with vm_state active and task_state image_snapshot.
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.354 253542 DEBUG nova.compute.manager [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.396 253542 INFO nova.compute.manager [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] instance snapshotting
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.629 253542 INFO nova.virt.libvirt.driver [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Beginning live snapshot process
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.787 253542 DEBUG nova.virt.libvirt.imagebackend [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.838 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.839 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.840 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.840 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.841 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.843 253542 INFO nova.compute.manager [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Terminating instance
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.845 253542 DEBUG nova.compute.manager [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:28:26 compute-0 kernel: tapa0d5bf0b-a7 (unregistering): left promiscuous mode
Nov 25 08:28:26 compute-0 NetworkManager[48915]: <info>  [1764059306.9491] device (tapa0d5bf0b-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:28:26 compute-0 nova_compute[253538]: 2025-11-25 08:28:26.958 253542 DEBUG nova.storage.rbd_utils [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] creating snapshot(ff48a6a1401942de8cd2cfe7818718dc) on rbd image(4c934302-d7cd-4826-835e-cab6dba97e3a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:28:26 compute-0 ovn_controller[152859]: 2025-11-25T08:28:26Z|00205|binding|INFO|Releasing lport a0d5bf0b-a708-4159-968d-5c597313379d from this chassis (sb_readonly=0)
Nov 25 08:28:26 compute-0 ovn_controller[152859]: 2025-11-25T08:28:26Z|00206|binding|INFO|Setting lport a0d5bf0b-a708-4159-968d-5c597313379d down in Southbound
Nov 25 08:28:26 compute-0 ovn_controller[152859]: 2025-11-25T08:28:26Z|00207|binding|INFO|Removing iface tapa0d5bf0b-a7 ovn-installed in OVS
Nov 25 08:28:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:26.972 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:41:bb 10.100.0.9'], port_security=['fa:16:3e:67:41:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5b1125d171240e2895276836b4fd6d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bd498cc2-3a1f-4313-a3a9-7b1fa737f1eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98666ab-8fd3-4256-b0ce-536c54c0072e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a0d5bf0b-a708-4159-968d-5c597313379d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:28:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:26.973 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a0d5bf0b-a708-4159-968d-5c597313379d in datapath 52a7668b-f0ac-4b07-a778-1ee89adbf076 unbound from our chassis
Nov 25 08:28:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:26.974 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52a7668b-f0ac-4b07-a778-1ee89adbf076, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:28:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:26.975 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d8344a-4dd6-4bae-aeda-d40137e7056c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:26.976 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 namespace which is not needed anymore
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:27 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Nov 25 08:28:27 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001d.scope: Consumed 13.819s CPU time.
Nov 25 08:28:27 compute-0 systemd-machined[215790]: Machine qemu-34-instance-0000001d terminated.
Nov 25 08:28:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.080 253542 INFO nova.virt.libvirt.driver [-] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Instance destroyed successfully.
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.081 253542 DEBUG nova.objects.instance [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'resources' on Instance uuid 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 25 08:28:27 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 25 08:28:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 243 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 18 MiB/s rd, 13 MiB/s wr, 705 op/s
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.095 253542 DEBUG nova.virt.libvirt.vif [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:27:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-184009211',display_name='tempest-ImagesOneServerNegativeTestJSON-server-184009211',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-184009211',id=29,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-7mj1pjji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:25Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.096 253542 DEBUG nova.network.os_vif_util [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "a0d5bf0b-a708-4159-968d-5c597313379d", "address": "fa:16:3e:67:41:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0d5bf0b-a7", "ovs_interfaceid": "a0d5bf0b-a708-4159-968d-5c597313379d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.096 253542 DEBUG nova.network.os_vif_util [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:41:bb,bridge_name='br-int',has_traffic_filtering=True,id=a0d5bf0b-a708-4159-968d-5c597313379d,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0d5bf0b-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.097 253542 DEBUG os_vif [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:41:bb,bridge_name='br-int',has_traffic_filtering=True,id=a0d5bf0b-a708-4159-968d-5c597313379d,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0d5bf0b-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.098 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.099 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d5bf0b-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.100 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.103 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.107 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.109 253542 INFO os_vif [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:41:bb,bridge_name='br-int',has_traffic_filtering=True,id=a0d5bf0b-a708-4159-968d-5c597313379d,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0d5bf0b-a7')
Nov 25 08:28:27 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[290325]: [NOTICE]   (290331) : haproxy version is 2.8.14-c23fe91
Nov 25 08:28:27 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[290325]: [NOTICE]   (290331) : path to executable is /usr/sbin/haproxy
Nov 25 08:28:27 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[290325]: [WARNING]  (290331) : Exiting Master process...
Nov 25 08:28:27 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[290325]: [ALERT]    (290331) : Current worker (290333) exited with code 143 (Terminated)
Nov 25 08:28:27 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[290325]: [WARNING]  (290331) : All workers exited. Exiting... (0)
Nov 25 08:28:27 compute-0 systemd[1]: libpod-1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644.scope: Deactivated successfully.
Nov 25 08:28:27 compute-0 podman[291784]: 2025-11-25 08:28:27.124982936 +0000 UTC m=+0.066047327 container died 1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 08:28:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644-userdata-shm.mount: Deactivated successfully.
Nov 25 08:28:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-320269360c4cc6b75b0f40afebd67ecea31de81414d235d4d1629a905a53b4de-merged.mount: Deactivated successfully.
Nov 25 08:28:27 compute-0 podman[291784]: 2025-11-25 08:28:27.175829311 +0000 UTC m=+0.116893672 container cleanup 1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 08:28:27 compute-0 systemd[1]: libpod-conmon-1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644.scope: Deactivated successfully.
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.191 253542 DEBUG nova.storage.rbd_utils [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] cloning vms/4c934302-d7cd-4826-835e-cab6dba97e3a_disk@ff48a6a1401942de8cd2cfe7818718dc to images/a5f8815f-c59d-400a-82d8-e11527b9e78e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:28:27 compute-0 podman[291844]: 2025-11-25 08:28:27.271137095 +0000 UTC m=+0.065407738 container remove 1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:28:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.278 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9f185328-319b-4932-b888-9efbb7b017e4]: (4, ('Tue Nov 25 08:28:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 (1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644)\n1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644\nTue Nov 25 08:28:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 (1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644)\n1b5d918d4aa47ffd23b27816a787433eb73a973f296732e37d895598606e1644\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.279 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e8ef9898-804b-4001-9173-279f293d284b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.280 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52a7668b-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:27 compute-0 kernel: tap52a7668b-f0: left promiscuous mode
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.282 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.301 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.305 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[01c7eafb-4531-496f-a90d-de3f06b88c36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.318 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a2271e17-05f0-4694-8688-a78800b00e1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.321 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbefe770-6811-4c8a-bac6-cf33f363c8f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.338 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[23a658a1-c271-43e4-ac89-1a9c6d5b9d15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464323, 'reachable_time': 28199, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291898, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.340 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:28:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.340 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[746c8db1-fd6e-4d62-9253-bc41f5efac4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d52a7668b\x2df0ac\x2d4b07\x2da778\x2d1ee89adbf076.mount: Deactivated successfully.
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.358 253542 DEBUG nova.storage.rbd_utils [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] flattening images/a5f8815f-c59d-400a-82d8-e11527b9e78e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.438 253542 DEBUG nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received event network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.439 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.439 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.439 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.440 253542 DEBUG nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] No waiting events found dispatching network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.440 253542 WARNING nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received unexpected event network-vif-plugged-65fd7d0e-59ee-4411-92ee-f934016f1d1f for instance with vm_state suspended and task_state image_snapshot_pending.
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.440 253542 DEBUG nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received event network-vif-unplugged-a0d5bf0b-a708-4159-968d-5c597313379d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.440 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.441 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.442 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.442 253542 DEBUG nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] No waiting events found dispatching network-vif-unplugged-a0d5bf0b-a708-4159-968d-5c597313379d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.442 253542 DEBUG nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received event network-vif-unplugged-a0d5bf0b-a708-4159-968d-5c597313379d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.442 253542 DEBUG nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received event network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.442 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.443 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.443 253542 DEBUG oslo_concurrency.lockutils [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.443 253542 DEBUG nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] No waiting events found dispatching network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.443 253542 WARNING nova.compute.manager [req-3e456390-ca37-48eb-842f-785bf5eb9c6e req-248c58a1-3962-4a03-b272-0e04843245f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received unexpected event network-vif-plugged-a0d5bf0b-a708-4159-968d-5c597313379d for instance with vm_state active and task_state deleting.
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.460 253542 DEBUG nova.compute.manager [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.500 253542 INFO nova.compute.manager [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] instance snapshotting
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.501 253542 WARNING nova.compute.manager [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] trying to snapshot a non-running instance: (state: 4 expected: 1)
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.636 253542 DEBUG nova.storage.rbd_utils [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] removing snapshot(ff48a6a1401942de8cd2cfe7818718dc) on rbd image(4c934302-d7cd-4826-835e-cab6dba97e3a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:28:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:27.648 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.700 253542 INFO nova.virt.libvirt.driver [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Deleting instance files /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_del
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.700 253542 INFO nova.virt.libvirt.driver [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Deletion of /var/lib/nova/instances/59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0_del complete
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.749 253542 INFO nova.compute.manager [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Took 0.90 seconds to destroy the instance on the hypervisor.
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.750 253542 DEBUG oslo.service.loopingcall [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.750 253542 DEBUG nova.compute.manager [-] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.750 253542 DEBUG nova.network.neutron [-] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.837 253542 INFO nova.virt.libvirt.driver [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Beginning cold snapshot process
Nov 25 08:28:27 compute-0 nova_compute[253538]: 2025-11-25 08:28:27.947 253542 DEBUG nova.virt.libvirt.imagebackend [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:28:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 25 08:28:28 compute-0 ceph-mon[75015]: osdmap e129: 3 total, 3 up, 3 in
Nov 25 08:28:28 compute-0 ceph-mon[75015]: pgmap v1306: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 243 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 18 MiB/s rd, 13 MiB/s wr, 705 op/s
Nov 25 08:28:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 25 08:28:28 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 25 08:28:28 compute-0 nova_compute[253538]: 2025-11-25 08:28:28.124 253542 DEBUG nova.storage.rbd_utils [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(0c03632062564fcfafdab5877261a564) on rbd image(6998a6cf-b660-4558-98cf-bf5984775b1d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:28:28 compute-0 nova_compute[253538]: 2025-11-25 08:28:28.164 253542 DEBUG nova.storage.rbd_utils [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] creating snapshot(snap) on rbd image(a5f8815f-c59d-400a-82d8-e11527b9e78e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:28:28 compute-0 nova_compute[253538]: 2025-11-25 08:28:28.967 253542 DEBUG nova.network.neutron [-] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:28 compute-0 nova_compute[253538]: 2025-11-25 08:28:28.982 253542 INFO nova.compute.manager [-] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Took 1.23 seconds to deallocate network for instance.
Nov 25 08:28:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:28:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2048595304' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:28:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:28:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2048595304' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:28:29 compute-0 nova_compute[253538]: 2025-11-25 08:28:29.021 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:29 compute-0 nova_compute[253538]: 2025-11-25 08:28:29.021 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 209 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 7.3 MiB/s wr, 545 op/s
Nov 25 08:28:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 25 08:28:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 25 08:28:29 compute-0 ceph-mon[75015]: osdmap e130: 3 total, 3 up, 3 in
Nov 25 08:28:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2048595304' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:28:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2048595304' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:28:29 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 25 08:28:29 compute-0 nova_compute[253538]: 2025-11-25 08:28:29.128 253542 DEBUG oslo_concurrency.processutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:29 compute-0 nova_compute[253538]: 2025-11-25 08:28:29.193 253542 DEBUG nova.storage.rbd_utils [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] cloning vms/6998a6cf-b660-4558-98cf-bf5984775b1d_disk@0c03632062564fcfafdab5877261a564 to images/f94ff9e5-3861-4109-9467-e810e355f205 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:28:29 compute-0 nova_compute[253538]: 2025-11-25 08:28:29.284 253542 DEBUG nova.storage.rbd_utils [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] flattening images/f94ff9e5-3861-4109-9467-e810e355f205 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:28:29 compute-0 nova_compute[253538]: 2025-11-25 08:28:29.498 253542 DEBUG nova.storage.rbd_utils [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] removing snapshot(0c03632062564fcfafdab5877261a564) on rbd image(6998a6cf-b660-4558-98cf-bf5984775b1d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:28:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:28:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1796532119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:29 compute-0 nova_compute[253538]: 2025-11-25 08:28:29.542 253542 DEBUG oslo_concurrency.processutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:29 compute-0 nova_compute[253538]: 2025-11-25 08:28:29.550 253542 DEBUG nova.compute.provider_tree [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:28:29 compute-0 nova_compute[253538]: 2025-11-25 08:28:29.561 253542 DEBUG nova.compute.manager [req-4def1102-ab73-40e1-9fe2-d62db9a5c97b req-d7c96622-9266-4dc0-974d-5e9a85c9dd3b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Received event network-vif-deleted-a0d5bf0b-a708-4159-968d-5c597313379d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:29 compute-0 nova_compute[253538]: 2025-11-25 08:28:29.564 253542 DEBUG nova.scheduler.client.report [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:28:29 compute-0 nova_compute[253538]: 2025-11-25 08:28:29.586 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:29 compute-0 nova_compute[253538]: 2025-11-25 08:28:29.609 253542 INFO nova.scheduler.client.report [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Deleted allocations for instance 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0
Nov 25 08:28:29 compute-0 nova_compute[253538]: 2025-11-25 08:28:29.673 253542 DEBUG oslo_concurrency.lockutils [None req-e2a9b0ff-347a-4ae3-86f9-9b5a02e4a477 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:29 compute-0 nova_compute[253538]: 2025-11-25 08:28:29.808 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:28:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 25 08:28:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 25 08:28:30 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 25 08:28:30 compute-0 nova_compute[253538]: 2025-11-25 08:28:30.077 253542 DEBUG nova.storage.rbd_utils [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(snap) on rbd image(f94ff9e5-3861-4109-9467-e810e355f205) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:28:30 compute-0 ceph-mon[75015]: pgmap v1308: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 209 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 7.3 MiB/s wr, 545 op/s
Nov 25 08:28:30 compute-0 ceph-mon[75015]: osdmap e131: 3 total, 3 up, 3 in
Nov 25 08:28:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1796532119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:30 compute-0 ceph-mon[75015]: osdmap e132: 3 total, 3 up, 3 in
Nov 25 08:28:30 compute-0 nova_compute[253538]: 2025-11-25 08:28:30.393 253542 INFO nova.virt.libvirt.driver [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Snapshot image upload complete
Nov 25 08:28:30 compute-0 nova_compute[253538]: 2025-11-25 08:28:30.394 253542 INFO nova.compute.manager [None req-05ede99d-9309-4616-8f19-cd02ee40c71b ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Took 4.00 seconds to snapshot the instance on the hypervisor.
Nov 25 08:28:30 compute-0 podman[292117]: 2025-11-25 08:28:30.863255109 +0000 UTC m=+0.104516799 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 25 08:28:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 201 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 4.7 MiB/s wr, 441 op/s
Nov 25 08:28:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 25 08:28:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 25 08:28:31 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 25 08:28:31 compute-0 nova_compute[253538]: 2025-11-25 08:28:31.675 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059296.6746256, 14cd6797-cf47-44da-acac-0e5e3d5dfe11 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:31 compute-0 nova_compute[253538]: 2025-11-25 08:28:31.676 253542 INFO nova.compute.manager [-] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] VM Stopped (Lifecycle Event)
Nov 25 08:28:31 compute-0 nova_compute[253538]: 2025-11-25 08:28:31.695 253542 DEBUG nova.compute.manager [None req-1b16c6c4-013a-43d8-9941-88b8870d9237 - - - - - -] [instance: 14cd6797-cf47-44da-acac-0e5e3d5dfe11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:32 compute-0 nova_compute[253538]: 2025-11-25 08:28:32.100 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 25 08:28:32 compute-0 ceph-mon[75015]: pgmap v1311: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 201 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 4.7 MiB/s wr, 441 op/s
Nov 25 08:28:32 compute-0 ceph-mon[75015]: osdmap e133: 3 total, 3 up, 3 in
Nov 25 08:28:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 25 08:28:32 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 25 08:28:32 compute-0 nova_compute[253538]: 2025-11-25 08:28:32.624 253542 INFO nova.virt.libvirt.driver [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Snapshot image upload complete
Nov 25 08:28:32 compute-0 nova_compute[253538]: 2025-11-25 08:28:32.624 253542 INFO nova.compute.manager [None req-239d6fbf-11db-4cc3-8f20-0da7fd675135 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Took 5.12 seconds to snapshot the instance on the hypervisor.
Nov 25 08:28:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 217 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 8.7 MiB/s rd, 8.5 MiB/s wr, 415 op/s
Nov 25 08:28:33 compute-0 ceph-mon[75015]: osdmap e134: 3 total, 3 up, 3 in
Nov 25 08:28:33 compute-0 nova_compute[253538]: 2025-11-25 08:28:33.457 253542 DEBUG nova.compute.manager [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:33 compute-0 nova_compute[253538]: 2025-11-25 08:28:33.514 253542 INFO nova.compute.manager [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] instance snapshotting
Nov 25 08:28:33 compute-0 nova_compute[253538]: 2025-11-25 08:28:33.737 253542 INFO nova.virt.libvirt.driver [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Beginning live snapshot process
Nov 25 08:28:33 compute-0 nova_compute[253538]: 2025-11-25 08:28:33.861 253542 DEBUG nova.virt.libvirt.imagebackend [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:28:34 compute-0 nova_compute[253538]: 2025-11-25 08:28:34.057 253542 DEBUG nova.storage.rbd_utils [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] creating snapshot(f40154d546174793b9fabb1461848c07) on rbd image(4c934302-d7cd-4826-835e-cab6dba97e3a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:28:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 25 08:28:34 compute-0 ceph-mon[75015]: pgmap v1314: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 217 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 8.7 MiB/s rd, 8.5 MiB/s wr, 415 op/s
Nov 25 08:28:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 25 08:28:34 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 25 08:28:34 compute-0 nova_compute[253538]: 2025-11-25 08:28:34.263 253542 DEBUG nova.storage.rbd_utils [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] cloning vms/4c934302-d7cd-4826-835e-cab6dba97e3a_disk@f40154d546174793b9fabb1461848c07 to images/1a54e9d9-bd25-49b0-8ef1-8fd05dc29219 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:28:34 compute-0 nova_compute[253538]: 2025-11-25 08:28:34.351 253542 DEBUG nova.storage.rbd_utils [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] flattening images/1a54e9d9-bd25-49b0-8ef1-8fd05dc29219 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:28:34 compute-0 nova_compute[253538]: 2025-11-25 08:28:34.860 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:34 compute-0 podman[292240]: 2025-11-25 08:28:34.873350977 +0000 UTC m=+0.118593619 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Nov 25 08:28:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 08:28:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 25 08:28:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 25 08:28:35 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 25 08:28:35 compute-0 nova_compute[253538]: 2025-11-25 08:28:35.090 253542 DEBUG nova.storage.rbd_utils [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] removing snapshot(f40154d546174793b9fabb1461848c07) on rbd image(4c934302-d7cd-4826-835e-cab6dba97e3a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:28:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 198 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 8.3 MiB/s rd, 6.0 MiB/s wr, 266 op/s
Nov 25 08:28:35 compute-0 ceph-mon[75015]: osdmap e135: 3 total, 3 up, 3 in
Nov 25 08:28:35 compute-0 ceph-mon[75015]: osdmap e136: 3 total, 3 up, 3 in
Nov 25 08:28:35 compute-0 nova_compute[253538]: 2025-11-25 08:28:35.347 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:35 compute-0 nova_compute[253538]: 2025-11-25 08:28:35.348 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:35 compute-0 nova_compute[253538]: 2025-11-25 08:28:35.364 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:28:35 compute-0 nova_compute[253538]: 2025-11-25 08:28:35.422 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:35 compute-0 nova_compute[253538]: 2025-11-25 08:28:35.423 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:35 compute-0 nova_compute[253538]: 2025-11-25 08:28:35.429 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:28:35 compute-0 nova_compute[253538]: 2025-11-25 08:28:35.430 253542 INFO nova.compute.claims [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:28:35 compute-0 nova_compute[253538]: 2025-11-25 08:28:35.602 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 25 08:28:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 25 08:28:36 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 25 08:28:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:28:36 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2726501938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.088 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.093 253542 DEBUG nova.compute.provider_tree [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.106 253542 DEBUG nova.scheduler.client.report [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.128 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.128 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.204 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.205 253542 DEBUG nova.network.neutron [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.243 253542 INFO nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:28:36 compute-0 ceph-mon[75015]: pgmap v1317: 321 pgs: 321 active+clean; 198 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 8.3 MiB/s rd, 6.0 MiB/s wr, 266 op/s
Nov 25 08:28:36 compute-0 ceph-mon[75015]: osdmap e137: 3 total, 3 up, 3 in
Nov 25 08:28:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2726501938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.278 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.286 253542 DEBUG nova.storage.rbd_utils [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] creating snapshot(snap) on rbd image(1a54e9d9-bd25-49b0-8ef1-8fd05dc29219) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.370 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.371 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.372 253542 INFO nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Creating image(s)
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.397 253542 DEBUG nova.storage.rbd_utils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.447 253542 DEBUG nova.storage.rbd_utils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.474 253542 DEBUG nova.storage.rbd_utils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.477 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.505 253542 DEBUG nova.policy [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8350a560f2bc4b57a5da0e3a1f582f82', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c5b1125d171240e2895276836b4fd6d7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.567 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.568 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.568 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.569 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.596 253542 DEBUG nova.storage.rbd_utils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.600 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:36 compute-0 nova_compute[253538]: 2025-11-25 08:28:36.987 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.018 253542 DEBUG nova.network.neutron [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Successfully created port: 6bcb5ada-83f7-419f-9909-98ba6f37630c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.053 253542 DEBUG nova.storage.rbd_utils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] resizing rbd image c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:28:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 202 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 7.7 MiB/s wr, 276 op/s
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.102 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.201 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "6998a6cf-b660-4558-98cf-bf5984775b1d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.201 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.202 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.202 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.202 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.204 253542 INFO nova.compute.manager [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Terminating instance
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.205 253542 DEBUG nova.compute.manager [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.212 253542 INFO nova.virt.libvirt.driver [-] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Instance destroyed successfully.
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.212 253542 DEBUG nova.objects.instance [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'resources' on Instance uuid 6998a6cf-b660-4558-98cf-bf5984775b1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.224 253542 DEBUG nova.virt.libvirt.vif [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:28:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-577988769',display_name='tempest-ImagesTestJSON-server-577988769',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-577988769',id=30,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-r74nry8d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram=
'0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:32Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=6998a6cf-b660-4558-98cf-bf5984775b1d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.224 253542 DEBUG nova.network.os_vif_util [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "address": "fa:16:3e:73:80:23", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65fd7d0e-59", "ovs_interfaceid": "65fd7d0e-59ee-4411-92ee-f934016f1d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.225 253542 DEBUG nova.network.os_vif_util [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:80:23,bridge_name='br-int',has_traffic_filtering=True,id=65fd7d0e-59ee-4411-92ee-f934016f1d1f,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65fd7d0e-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.226 253542 DEBUG os_vif [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:80:23,bridge_name='br-int',has_traffic_filtering=True,id=65fd7d0e-59ee-4411-92ee-f934016f1d1f,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65fd7d0e-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.227 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.228 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65fd7d0e-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.230 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.234 253542 INFO os_vif [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:80:23,bridge_name='br-int',has_traffic_filtering=True,id=65fd7d0e-59ee-4411-92ee-f934016f1d1f,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65fd7d0e-59')
Nov 25 08:28:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 25 08:28:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 25 08:28:37 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.406 253542 DEBUG nova.objects.instance [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'migration_context' on Instance uuid c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.419 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.420 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Ensure instance console log exists: /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.421 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.421 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.422 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.775 253542 DEBUG nova.network.neutron [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Successfully updated port: 6bcb5ada-83f7-419f-9909-98ba6f37630c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.793 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "refresh_cache-c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.794 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquired lock "refresh_cache-c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.794 253542 DEBUG nova.network.neutron [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:28:37 compute-0 nova_compute[253538]: 2025-11-25 08:28:37.979 253542 DEBUG nova.network.neutron [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:28:38 compute-0 ceph-mon[75015]: pgmap v1319: 321 pgs: 321 active+clean; 202 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 7.7 MiB/s wr, 276 op/s
Nov 25 08:28:38 compute-0 ceph-mon[75015]: osdmap e138: 3 total, 3 up, 3 in
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.728 253542 DEBUG nova.network.neutron [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Updating instance_info_cache with network_info: [{"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.756 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Releasing lock "refresh_cache-c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.757 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Instance network_info: |[{"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.759 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Start _get_guest_xml network_info=[{"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.765 253542 WARNING nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.771 253542 DEBUG nova.virt.libvirt.host [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.772 253542 DEBUG nova.virt.libvirt.host [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.774 253542 DEBUG nova.virt.libvirt.host [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.775 253542 DEBUG nova.virt.libvirt.host [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.775 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.776 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.776 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.776 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.777 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.777 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.777 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.777 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.777 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.778 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.778 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.778 253542 DEBUG nova.virt.hardware [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:28:38 compute-0 nova_compute[253538]: 2025-11-25 08:28:38.782 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:38 compute-0 ovn_controller[152859]: 2025-11-25T08:28:38Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:9b:99 10.100.0.9
Nov 25 08:28:38 compute-0 ovn_controller[152859]: 2025-11-25T08:28:38Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:9b:99 10.100.0.9
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.035 253542 DEBUG nova.compute.manager [req-81995c14-e38a-4ce0-a60f-f041242f0b0b req-8d88d25f-da60-4bbb-94ca-6c633d0f12ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received event network-changed-6bcb5ada-83f7-419f-9909-98ba6f37630c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.036 253542 DEBUG nova.compute.manager [req-81995c14-e38a-4ce0-a60f-f041242f0b0b req-8d88d25f-da60-4bbb-94ca-6c633d0f12ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Refreshing instance network info cache due to event network-changed-6bcb5ada-83f7-419f-9909-98ba6f37630c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.036 253542 DEBUG oslo_concurrency.lockutils [req-81995c14-e38a-4ce0-a60f-f041242f0b0b req-8d88d25f-da60-4bbb-94ca-6c633d0f12ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.036 253542 DEBUG oslo_concurrency.lockutils [req-81995c14-e38a-4ce0-a60f-f041242f0b0b req-8d88d25f-da60-4bbb-94ca-6c633d0f12ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.037 253542 DEBUG nova.network.neutron [req-81995c14-e38a-4ce0-a60f-f041242f0b0b req-8d88d25f-da60-4bbb-94ca-6c633d0f12ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Refreshing network info cache for port 6bcb5ada-83f7-419f-9909-98ba6f37630c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:28:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 240 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 11 MiB/s wr, 372 op/s
Nov 25 08:28:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:28:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2669809045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.283 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.337 253542 DEBUG nova.storage.rbd_utils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.344 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.388 253542 INFO nova.virt.libvirt.driver [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Snapshot image upload complete
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.389 253542 INFO nova.compute.manager [None req-815c19c7-4c38-4bdc-bf80-3f6608f1b2b5 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Took 5.87 seconds to snapshot the instance on the hypervisor.
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.396 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.396 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.424 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.516 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.517 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.522 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.523 253542 INFO nova.compute.claims [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:28:39 compute-0 sudo[292557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:28:39 compute-0 sudo[292557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:39 compute-0 sudo[292557]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.613 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "cf07e611-51eb-4bcf-8757-8f75d3807da6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.614 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:39 compute-0 sudo[292591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.645 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:28:39 compute-0 sudo[292591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:39 compute-0 sudo[292591]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:39 compute-0 sudo[292616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:28:39 compute-0 sudo[292616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:39 compute-0 sudo[292616]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.708 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.763 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:39 compute-0 sudo[292641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:28:39 compute-0 sudo[292641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:39 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2669809045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:28:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3042903216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.845 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.848 253542 DEBUG nova.virt.libvirt.vif [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-492840202',display_name='tempest-ImagesOneServerNegativeTestJSON-server-492840202',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-492840202',id=32,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-ftgun0ke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:36Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.849 253542 DEBUG nova.network.os_vif_util [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.851 253542 DEBUG nova.network.os_vif_util [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:c6:fc,bridge_name='br-int',has_traffic_filtering=True,id=6bcb5ada-83f7-419f-9909-98ba6f37630c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bcb5ada-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.853 253542 DEBUG nova.objects.instance [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.863 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.867 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:28:39 compute-0 nova_compute[253538]:   <uuid>c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e</uuid>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   <name>instance-00000020</name>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-492840202</nova:name>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:28:38</nova:creationTime>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:28:39 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:28:39 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:28:39 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:28:39 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:28:39 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:28:39 compute-0 nova_compute[253538]:         <nova:user uuid="8350a560f2bc4b57a5da0e3a1f582f82">tempest-ImagesOneServerNegativeTestJSON-192511421-project-member</nova:user>
Nov 25 08:28:39 compute-0 nova_compute[253538]:         <nova:project uuid="c5b1125d171240e2895276836b4fd6d7">tempest-ImagesOneServerNegativeTestJSON-192511421</nova:project>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:28:39 compute-0 nova_compute[253538]:         <nova:port uuid="6bcb5ada-83f7-419f-9909-98ba6f37630c">
Nov 25 08:28:39 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <system>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <entry name="serial">c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e</entry>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <entry name="uuid">c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e</entry>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     </system>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   <os>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   </os>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   <features>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   </features>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk">
Nov 25 08:28:39 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       </source>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:28:39 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk.config">
Nov 25 08:28:39 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       </source>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:28:39 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:34:c6:fc"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <target dev="tap6bcb5ada-83"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e/console.log" append="off"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <video>
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     </video>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:28:39 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:28:39 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:28:39 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:28:39 compute-0 nova_compute[253538]: </domain>
Nov 25 08:28:39 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.876 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Preparing to wait for external event network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.877 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.877 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.878 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.879 253542 DEBUG nova.virt.libvirt.vif [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-492840202',display_name='tempest-ImagesOneServerNegativeTestJSON-server-492840202',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-492840202',id=32,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-ftgun0ke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:36Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.879 253542 DEBUG nova.network.os_vif_util [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.880 253542 DEBUG nova.network.os_vif_util [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:c6:fc,bridge_name='br-int',has_traffic_filtering=True,id=6bcb5ada-83f7-419f-9909-98ba6f37630c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bcb5ada-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.881 253542 DEBUG os_vif [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:c6:fc,bridge_name='br-int',has_traffic_filtering=True,id=6bcb5ada-83f7-419f-9909-98ba6f37630c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bcb5ada-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.885 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.886 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.887 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.891 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.891 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6bcb5ada-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.892 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6bcb5ada-83, col_values=(('external_ids', {'iface-id': '6bcb5ada-83f7-419f-9909-98ba6f37630c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:c6:fc', 'vm-uuid': 'c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.894 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:39 compute-0 NetworkManager[48915]: <info>  [1764059319.8950] manager: (tap6bcb5ada-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.898 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.900 253542 INFO os_vif [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:c6:fc,bridge_name='br-int',has_traffic_filtering=True,id=6bcb5ada-83f7-419f-9909-98ba6f37630c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bcb5ada-83')
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.984 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.984 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.985 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No VIF found with MAC fa:16:3e:34:c6:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:28:39 compute-0 nova_compute[253538]: 2025-11-25 08:28:39.985 253542 INFO nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Using config drive
Nov 25 08:28:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:28:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.013 253542 DEBUG nova.storage.rbd_utils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:40 compute-0 podman[292693]: 2025-11-25 08:28:40.072459768 +0000 UTC m=+0.142012057 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 08:28:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 25 08:28:40 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.211 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059305.20793, 6998a6cf-b660-4558-98cf-bf5984775b1d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.212 253542 INFO nova.compute.manager [-] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] VM Stopped (Lifecycle Event)
Nov 25 08:28:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:28:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/818203405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.234 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.236 253542 DEBUG nova.compute.manager [None req-1134abea-d389-4369-b8ab-1af4eaeda085 - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.241 253542 DEBUG nova.compute.manager [None req-1134abea-d389-4369-b8ab-1af4eaeda085 - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: suspended, current task_state: deleting, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.244 253542 DEBUG nova.compute.provider_tree [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.261 253542 DEBUG nova.scheduler.client.report [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.265 253542 INFO nova.compute.manager [None req-1134abea-d389-4369-b8ab-1af4eaeda085 - - - - - -] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] During sync_power_state the instance has a pending task (deleting). Skip.
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.280 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.281 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.282 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.291 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.292 253542 INFO nova.compute.claims [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.298 253542 DEBUG nova.network.neutron [req-81995c14-e38a-4ce0-a60f-f041242f0b0b req-8d88d25f-da60-4bbb-94ca-6c633d0f12ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Updated VIF entry in instance network info cache for port 6bcb5ada-83f7-419f-9909-98ba6f37630c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.298 253542 DEBUG nova.network.neutron [req-81995c14-e38a-4ce0-a60f-f041242f0b0b req-8d88d25f-da60-4bbb-94ca-6c633d0f12ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Updating instance_info_cache with network_info: [{"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:40 compute-0 sudo[292641]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.339 253542 DEBUG oslo_concurrency.lockutils [req-81995c14-e38a-4ce0-a60f-f041242f0b0b req-8d88d25f-da60-4bbb-94ca-6c633d0f12ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.353 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.355 253542 DEBUG nova.network.neutron [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.383 253542 INFO nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:28:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:28:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:28:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:28:40 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:28:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.402 253542 INFO nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Creating config drive at /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e/disk.config
Nov 25 08:28:40 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:28:40 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e72d64c1-bc6f-44e3-b2ce-04588f38a37b does not exist
Nov 25 08:28:40 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 8d9dbce8-f271-4654-83b1-ea4a108a988a does not exist
Nov 25 08:28:40 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev a8917cfd-7774-4c30-aa59-367cbee98299 does not exist
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.408 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc7e1pms2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:28:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:28:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:28:40 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:28:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:28:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.452 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.464 253542 INFO nova.virt.libvirt.driver [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Deleting instance files /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d_del
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.465 253542 INFO nova.virt.libvirt.driver [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Deletion of /var/lib/nova/instances/6998a6cf-b660-4558-98cf-bf5984775b1d_del complete
Nov 25 08:28:40 compute-0 sudo[292767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:28:40 compute-0 sudo[292767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:40 compute-0 sudo[292767]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.529 253542 DEBUG nova.policy [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38fa175fb699405c9a05d7c28f994ebc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.552 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:40 compute-0 sudo[292794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:28:40 compute-0 sudo[292794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:40 compute-0 sudo[292794]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.582 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc7e1pms2" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.585 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.586 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.587 253542 INFO nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Creating image(s)
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.626 253542 DEBUG nova.storage.rbd_utils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:40 compute-0 sudo[292820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:28:40 compute-0 sudo[292820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:40 compute-0 sudo[292820]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.655 253542 DEBUG nova.storage.rbd_utils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:40 compute-0 sudo[292870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:28:40 compute-0 sudo[292870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.703 253542 DEBUG nova.storage.rbd_utils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.716 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.747 253542 INFO nova.compute.manager [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Took 3.54 seconds to destroy the instance on the hypervisor.
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.749 253542 DEBUG oslo.service.loopingcall [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.784 253542 DEBUG nova.storage.rbd_utils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.788 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e/disk.config c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:40 compute-0 ceph-mon[75015]: pgmap v1321: 321 pgs: 321 active+clean; 240 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 11 MiB/s wr, 372 op/s
Nov 25 08:28:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3042903216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:40 compute-0 ceph-mon[75015]: osdmap e139: 3 total, 3 up, 3 in
Nov 25 08:28:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/818203405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:28:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:28:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:28:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:28:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:28:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.815 253542 DEBUG nova.compute.manager [-] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.816 253542 DEBUG nova.network.neutron [-] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.819 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.819 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.820 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.820 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.847 253542 DEBUG nova.storage.rbd_utils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.851 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:28:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/223984199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.988 253542 DEBUG oslo_concurrency.processutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e/disk.config c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:40 compute-0 nova_compute[253538]: 2025-11-25 08:28:40.989 253542 INFO nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Deleting local config drive /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e/disk.config because it was imported into RBD.
Nov 25 08:28:41 compute-0 virtqemud[253839]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 08:28:41 compute-0 virtqemud[253839]: hostname: compute-0
Nov 25 08:28:41 compute-0 virtqemud[253839]: End of file while reading data: Input/output error
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.007 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.017 253542 DEBUG nova.compute.provider_tree [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.039 253542 DEBUG nova.scheduler.client.report [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:28:41 compute-0 kernel: tap6bcb5ada-83: entered promiscuous mode
Nov 25 08:28:41 compute-0 NetworkManager[48915]: <info>  [1764059321.0505] manager: (tap6bcb5ada-83): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Nov 25 08:28:41 compute-0 ovn_controller[152859]: 2025-11-25T08:28:41Z|00208|binding|INFO|Claiming lport 6bcb5ada-83f7-419f-9909-98ba6f37630c for this chassis.
Nov 25 08:28:41 compute-0 ovn_controller[152859]: 2025-11-25T08:28:41Z|00209|binding|INFO|6bcb5ada-83f7-419f-9909-98ba6f37630c: Claiming fa:16:3e:34:c6:fc 10.100.0.8
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.053 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.052 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.054 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.055 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.059 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:c6:fc 10.100.0.8'], port_security=['fa:16:3e:34:c6:fc 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5b1125d171240e2895276836b4fd6d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bd498cc2-3a1f-4313-a3a9-7b1fa737f1eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98666ab-8fd3-4256-b0ce-536c54c0072e, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6bcb5ada-83f7-419f-9909-98ba6f37630c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.060 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.061 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.062 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6bcb5ada-83f7-419f-9909-98ba6f37630c in datapath 52a7668b-f0ac-4b07-a778-1ee89adbf076 bound to our chassis
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.065 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52a7668b-f0ac-4b07-a778-1ee89adbf076
Nov 25 08:28:41 compute-0 ovn_controller[152859]: 2025-11-25T08:28:41Z|00210|binding|INFO|Setting lport 6bcb5ada-83f7-419f-9909-98ba6f37630c ovn-installed in OVS
Nov 25 08:28:41 compute-0 ovn_controller[152859]: 2025-11-25T08:28:41Z|00211|binding|INFO|Setting lport 6bcb5ada-83f7-419f-9909-98ba6f37630c up in Southbound
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.077 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.076 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[156207ff-6454-42fe-b4ea-85584de4c47a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.077 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52a7668b-f1 in ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.080 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52a7668b-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.080 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[15bf3519-e8da-4448-bffe-2a262dd1edb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.081 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[34f45e27-bde5-4059-b8e9-c921ba89c167]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:41 compute-0 systemd-machined[215790]: New machine qemu-37-instance-00000020.
Nov 25 08:28:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 241 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 11 MiB/s wr, 317 op/s
Nov 25 08:28:41 compute-0 podman[293060]: 2025-11-25 08:28:41.099176435 +0000 UTC m=+0.092280842 container create b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.098 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[6d398dcb-deab-4af5-9447-d9f4bd4e80f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:41 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-00000020.
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.103 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.103 253542 DEBUG nova.network.neutron [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.117 253542 INFO nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:28:41 compute-0 systemd-udevd[293088]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.130 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[72c79670-b6f2-471f-9d11-9e1302591464]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.132 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:28:41 compute-0 NetworkManager[48915]: <info>  [1764059321.1371] device (tap6bcb5ada-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:28:41 compute-0 NetworkManager[48915]: <info>  [1764059321.1382] device (tap6bcb5ada-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:28:41 compute-0 podman[293060]: 2025-11-25 08:28:41.046795878 +0000 UTC m=+0.039900355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.170 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6bded1e5-2e60-4ef6-914e-9608d2f481aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:41 compute-0 systemd[1]: Started libpod-conmon-b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70.scope.
Nov 25 08:28:41 compute-0 NetworkManager[48915]: <info>  [1764059321.1781] manager: (tap52a7668b-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/105)
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.180 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[968ccfb0-7ee4-4e0c-96ac-bcfc3c2c8985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.207 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8b1f34ad-7e5b-4476-9567-902103180dcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.209 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.210 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.211 253542 INFO nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Creating image(s)
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.211 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c45e0a1f-ed1e-437c-b713-d24c5b10e8e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.233 253542 DEBUG nova.storage.rbd_utils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] rbd image cf07e611-51eb-4bcf-8757-8f75d3807da6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:41 compute-0 NetworkManager[48915]: <info>  [1764059321.2342] device (tap52a7668b-f0): carrier: link connected
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.241 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e8defb0f-99d6-4e59-bd1b-038af4308ad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.261 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4e7c8f81-84a2-4bc9-91f0-920e32000497]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52a7668b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:1c:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467583, 'reachable_time': 25642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293150, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.264 253542 DEBUG nova.storage.rbd_utils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] rbd image cf07e611-51eb-4bcf-8757-8f75d3807da6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.275 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e89648be-afee-4f4a-9114-5f478beb66a8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:1c70'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467583, 'tstamp': 467583}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293162, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:41 compute-0 podman[293060]: 2025-11-25 08:28:41.27879829 +0000 UTC m=+0.271902707 container init b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 08:28:41 compute-0 podman[293060]: 2025-11-25 08:28:41.291359597 +0000 UTC m=+0.284463994 container start b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:28:41 compute-0 podman[293060]: 2025-11-25 08:28:41.295784749 +0000 UTC m=+0.288889156 container attach b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 08:28:41 compute-0 sweet_gauss[293103]: 167 167
Nov 25 08:28:41 compute-0 systemd[1]: libpod-b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70.scope: Deactivated successfully.
Nov 25 08:28:41 compute-0 podman[293060]: 2025-11-25 08:28:41.299809911 +0000 UTC m=+0.292914298 container died b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.299 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[68cf7bed-7a52-43c3-a420-0ffc4de57627]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52a7668b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:1c:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467583, 'reachable_time': 25642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293181, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.307 253542 DEBUG nova.storage.rbd_utils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] rbd image cf07e611-51eb-4bcf-8757-8f75d3807da6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.312 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.334 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dd73c1fb-c57e-482c-a0fc-9363da5bbc9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.346 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.389 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6d59a09d-96b9-4c53-8859-cee42cfd57c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.392 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52a7668b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.392 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.392 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52a7668b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:41 compute-0 kernel: tap52a7668b-f0: entered promiscuous mode
Nov 25 08:28:41 compute-0 NetworkManager[48915]: <info>  [1764059321.4528] manager: (tap52a7668b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.455 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52a7668b-f0, col_values=(('external_ids', {'iface-id': 'ac244317-fa52-4a6a-92f4-98845a41804d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:41 compute-0 ovn_controller[152859]: 2025-11-25T08:28:41Z|00212|binding|INFO|Releasing lport ac244317-fa52-4a6a-92f4-98845a41804d from this chassis (sb_readonly=0)
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.464 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.468 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.468 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.469 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.470 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.474 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.475 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e7baf434-f3be-453e-98e9-ca8063299a7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.475 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-52a7668b-f0ac-4b07-a778-1ee89adbf076
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 52a7668b-f0ac-4b07-a778-1ee89adbf076
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:41.476 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'env', 'PROCESS_TAG=haproxy-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52a7668b-f0ac-4b07-a778-1ee89adbf076.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.498 253542 DEBUG nova.storage.rbd_utils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] rbd image cf07e611-51eb-4bcf-8757-8f75d3807da6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.503 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc cf07e611-51eb-4bcf-8757-8f75d3807da6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d11ed5076b45d3f966f16c650926cd17b316ab1961582b80e4362b526b06dfcd-merged.mount: Deactivated successfully.
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.531 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.542 253542 DEBUG nova.storage.rbd_utils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] resizing rbd image 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:28:41 compute-0 podman[293060]: 2025-11-25 08:28:41.55849465 +0000 UTC m=+0.551599037 container remove b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 08:28:41 compute-0 systemd[1]: libpod-conmon-b79eaa40a8462881f2a8ca5419040d8334ee86246bebfa10528f69a57a184c70.scope: Deactivated successfully.
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.618 253542 DEBUG nova.policy [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9a96a98d6bb448aab904a8763d3675ec', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '17d31dbb1e4542daaa43d2fda87e18ad', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.676 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059321.6757755, c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.676 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] VM Started (Lifecycle Event)
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.720 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.732 253542 DEBUG nova.objects.instance [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'migration_context' on Instance uuid 8b29fb31-718d-4926-bf4f-bae461ea70ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.740 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059321.6759026, c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.740 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] VM Paused (Lifecycle Event)
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.751 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.752 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Ensure instance console log exists: /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.756 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.757 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.757 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.760 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.763 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.781 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:28:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 25 08:28:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/223984199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:41 compute-0 ceph-mon[75015]: pgmap v1323: 321 pgs: 321 active+clean; 241 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 11 MiB/s wr, 317 op/s
Nov 25 08:28:41 compute-0 podman[293369]: 2025-11-25 08:28:41.83741396 +0000 UTC m=+0.101321161 container create 8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.843 253542 DEBUG nova.network.neutron [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Successfully created port: 799c50c8-d1e7-4c15-a3d9-29903d576304 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:28:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 25 08:28:41 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 25 08:28:41 compute-0 podman[293369]: 2025-11-25 08:28:41.77119945 +0000 UTC m=+0.035106681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.877 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc cf07e611-51eb-4bcf-8757-8f75d3807da6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:41 compute-0 systemd[1]: Started libpod-conmon-8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d.scope.
Nov 25 08:28:41 compute-0 podman[293403]: 2025-11-25 08:28:41.902000845 +0000 UTC m=+0.094949265 container create 528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:28:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:28:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40668a1f5dedaf4cab05a01ddcda2a8366970cc4fe4c9fc1f11870dd061fe30c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40668a1f5dedaf4cab05a01ddcda2a8366970cc4fe4c9fc1f11870dd061fe30c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40668a1f5dedaf4cab05a01ddcda2a8366970cc4fe4c9fc1f11870dd061fe30c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40668a1f5dedaf4cab05a01ddcda2a8366970cc4fe4c9fc1f11870dd061fe30c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40668a1f5dedaf4cab05a01ddcda2a8366970cc4fe4c9fc1f11870dd061fe30c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:41 compute-0 systemd[1]: Started libpod-conmon-528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e.scope.
Nov 25 08:28:41 compute-0 podman[293369]: 2025-11-25 08:28:41.945854728 +0000 UTC m=+0.209761959 container init 8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 25 08:28:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:28:41 compute-0 podman[293369]: 2025-11-25 08:28:41.955342199 +0000 UTC m=+0.219249410 container start 8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 08:28:41 compute-0 podman[293369]: 2025-11-25 08:28:41.958691572 +0000 UTC m=+0.222598833 container attach 8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 08:28:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db1af69cade0c066d85c02aae054723464bc6f1508df360470a31f2e5094b6d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:41 compute-0 podman[293403]: 2025-11-25 08:28:41.876250964 +0000 UTC m=+0.069199384 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:28:41 compute-0 podman[293403]: 2025-11-25 08:28:41.974043027 +0000 UTC m=+0.166991487 container init 528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.977 253542 DEBUG nova.compute.manager [req-28f61271-c089-4064-abc6-a333935f5e5e req-08b25b9c-5453-41b8-b7cf-9f3c9dc0739e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received event network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.978 253542 DEBUG oslo_concurrency.lockutils [req-28f61271-c089-4064-abc6-a333935f5e5e req-08b25b9c-5453-41b8-b7cf-9f3c9dc0739e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.978 253542 DEBUG oslo_concurrency.lockutils [req-28f61271-c089-4064-abc6-a333935f5e5e req-08b25b9c-5453-41b8-b7cf-9f3c9dc0739e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:41 compute-0 podman[293403]: 2025-11-25 08:28:41.979224889 +0000 UTC m=+0.172173319 container start 528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.978 253542 DEBUG oslo_concurrency.lockutils [req-28f61271-c089-4064-abc6-a333935f5e5e req-08b25b9c-5453-41b8-b7cf-9f3c9dc0739e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.980 253542 DEBUG nova.compute.manager [req-28f61271-c089-4064-abc6-a333935f5e5e req-08b25b9c-5453-41b8-b7cf-9f3c9dc0739e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Processing event network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.981 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.989 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059321.9890084, c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.989 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] VM Resumed (Lifecycle Event)
Nov 25 08:28:41 compute-0 nova_compute[253538]: 2025-11-25 08:28:41.991 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.002 253542 DEBUG nova.storage.rbd_utils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] resizing rbd image cf07e611-51eb-4bcf-8757-8f75d3807da6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:28:42 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[293442]: [NOTICE]   (293466) : New worker (293468) forked
Nov 25 08:28:42 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[293442]: [NOTICE]   (293466) : Loading success.
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.038 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.040 253542 INFO nova.virt.libvirt.driver [-] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Instance spawned successfully.
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.040 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.043 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.064 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.068 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.068 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.068 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.069 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.069 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.069 253542 DEBUG nova.virt.libvirt.driver [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.102 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059307.078669, 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.103 253542 INFO nova.compute.manager [-] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] VM Stopped (Lifecycle Event)
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.107 253542 DEBUG nova.objects.instance [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lazy-loading 'migration_context' on Instance uuid cf07e611-51eb-4bcf-8757-8f75d3807da6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.126 253542 DEBUG nova.compute.manager [None req-92a3afe2-aa5c-4ae9-aa51-aae8fb994f8f - - - - - -] [instance: 59d5b429-b6cb-4ced-bf99-b3a5b42dc5a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.127 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.127 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Ensure instance console log exists: /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.128 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.128 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.128 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.137 253542 INFO nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Took 5.77 seconds to spawn the instance on the hypervisor.
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.137 253542 DEBUG nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.205 253542 INFO nova.compute.manager [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Took 6.80 seconds to build instance.
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.220 253542 DEBUG oslo_concurrency.lockutils [None req-066abce6-4ea2-415d-9239-fa5fb27251b9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.303 253542 DEBUG nova.network.neutron [-] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.317 253542 INFO nova.compute.manager [-] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Took 1.50 seconds to deallocate network for instance.
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.363 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.364 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.443 253542 DEBUG nova.network.neutron [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Successfully created port: d4493bab-df0a-4934-ab26-43dae0dbae72 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.496 253542 DEBUG oslo_concurrency.processutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.632 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "4c934302-d7cd-4826-835e-cab6dba97e3a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.634 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.634 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.635 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.635 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.637 253542 INFO nova.compute.manager [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Terminating instance
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.639 253542 DEBUG nova.compute.manager [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:28:42 compute-0 kernel: tapb62f3741-11 (unregistering): left promiscuous mode
Nov 25 08:28:42 compute-0 NetworkManager[48915]: <info>  [1764059322.6985] device (tapb62f3741-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:28:42 compute-0 ovn_controller[152859]: 2025-11-25T08:28:42Z|00213|binding|INFO|Releasing lport b62f3741-11c8-4840-a720-d6ee07f06284 from this chassis (sb_readonly=0)
Nov 25 08:28:42 compute-0 ovn_controller[152859]: 2025-11-25T08:28:42Z|00214|binding|INFO|Setting lport b62f3741-11c8-4840-a720-d6ee07f06284 down in Southbound
Nov 25 08:28:42 compute-0 ovn_controller[152859]: 2025-11-25T08:28:42Z|00215|binding|INFO|Removing iface tapb62f3741-11 ovn-installed in OVS
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.710 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:42.716 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:9b:99 10.100.0.9'], port_security=['fa:16:3e:c9:9b:99 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4c934302-d7cd-4826-835e-cab6dba97e3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0070171d-b7ca-4ed3-baea-814d9cd382de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bd945273cd04d8981dcb3a319e8d026', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ee7f6e6-6de7-4c93-8dc8-a8140fbc4a5f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2ce04b2-ff6f-4536-bd4d-73688e8a9b75, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=b62f3741-11c8-4840-a720-d6ee07f06284) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:28:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:42.717 162739 INFO neutron.agent.ovn.metadata.agent [-] Port b62f3741-11c8-4840-a720-d6ee07f06284 in datapath 0070171d-b7ca-4ed3-baea-814d9cd382de unbound from our chassis
Nov 25 08:28:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:42.719 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0070171d-b7ca-4ed3-baea-814d9cd382de, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:28:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:42.720 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cd150b9b-fe78-41db-b8bc-cf0bc24c0028]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:42.721 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de namespace which is not needed anymore
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.731 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:42 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Nov 25 08:28:42 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000001f.scope: Consumed 14.134s CPU time.
Nov 25 08:28:42 compute-0 systemd-machined[215790]: Machine qemu-36-instance-0000001f terminated.
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.760 253542 DEBUG nova.network.neutron [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Successfully updated port: 799c50c8-d1e7-4c15-a3d9-29903d576304 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.774 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.775 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquired lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.775 253542 DEBUG nova.network.neutron [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:28:42 compute-0 ceph-mon[75015]: osdmap e140: 3 total, 3 up, 3 in
Nov 25 08:28:42 compute-0 neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de[291562]: [NOTICE]   (291575) : haproxy version is 2.8.14-c23fe91
Nov 25 08:28:42 compute-0 neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de[291562]: [NOTICE]   (291575) : path to executable is /usr/sbin/haproxy
Nov 25 08:28:42 compute-0 neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de[291562]: [WARNING]  (291575) : Exiting Master process...
Nov 25 08:28:42 compute-0 neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de[291562]: [ALERT]    (291575) : Current worker (291578) exited with code 143 (Terminated)
Nov 25 08:28:42 compute-0 neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de[291562]: [WARNING]  (291575) : All workers exited. Exiting... (0)
Nov 25 08:28:42 compute-0 systemd[1]: libpod-e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0.scope: Deactivated successfully.
Nov 25 08:28:42 compute-0 podman[293560]: 2025-11-25 08:28:42.877296862 +0000 UTC m=+0.062177560 container died e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.880 253542 INFO nova.virt.libvirt.driver [-] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Instance destroyed successfully.
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.881 253542 DEBUG nova.objects.instance [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lazy-loading 'resources' on Instance uuid 4c934302-d7cd-4826-835e-cab6dba97e3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.895 253542 DEBUG nova.virt.libvirt.vif [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:28:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1824498337',display_name='tempest-ImagesOneServerTestJSON-server-1824498337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1824498337',id=31,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9bd945273cd04d8981dcb3a319e8d026',ramdisk_id='',reservation_id='r-p00agd9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-174767469',owner_user_name='tempest-ImagesOneServerTestJSON-174767469-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:39Z,user_data=None,user_id='ee2fe69e0dfa4467926cec954790823e',uuid=4c934302-d7cd-4826-835e-cab6dba97e3a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.896 253542 DEBUG nova.network.os_vif_util [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Converting VIF {"id": "b62f3741-11c8-4840-a720-d6ee07f06284", "address": "fa:16:3e:c9:9b:99", "network": {"id": "0070171d-b7ca-4ed3-baea-814d9cd382de", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-33340236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bd945273cd04d8981dcb3a319e8d026", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb62f3741-11", "ovs_interfaceid": "b62f3741-11c8-4840-a720-d6ee07f06284", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.897 253542 DEBUG nova.network.os_vif_util [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:9b:99,bridge_name='br-int',has_traffic_filtering=True,id=b62f3741-11c8-4840-a720-d6ee07f06284,network=Network(0070171d-b7ca-4ed3-baea-814d9cd382de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb62f3741-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.897 253542 DEBUG os_vif [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:9b:99,bridge_name='br-int',has_traffic_filtering=True,id=b62f3741-11c8-4840-a720-d6ee07f06284,network=Network(0070171d-b7ca-4ed3-baea-814d9cd382de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb62f3741-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.899 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.899 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb62f3741-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.903 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.906 253542 INFO os_vif [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:9b:99,bridge_name='br-int',has_traffic_filtering=True,id=b62f3741-11c8-4840-a720-d6ee07f06284,network=Network(0070171d-b7ca-4ed3-baea-814d9cd382de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb62f3741-11')
Nov 25 08:28:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0-userdata-shm.mount: Deactivated successfully.
Nov 25 08:28:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b38844f50af0338993d204476dce181cc29aa7db95beeb492a051beb5aa7d312-merged.mount: Deactivated successfully.
Nov 25 08:28:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:28:42 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/879197356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.964 253542 DEBUG oslo_concurrency.processutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.970 253542 DEBUG nova.compute.provider_tree [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:28:42 compute-0 podman[293560]: 2025-11-25 08:28:42.970919229 +0000 UTC m=+0.155799937 container cleanup e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:28:42 compute-0 systemd[1]: libpod-conmon-e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0.scope: Deactivated successfully.
Nov 25 08:28:42 compute-0 nova_compute[253538]: 2025-11-25 08:28:42.988 253542 DEBUG nova.scheduler.client.report [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.003 253542 DEBUG nova.network.neutron [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.011 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:43 compute-0 podman[293625]: 2025-11-25 08:28:43.034658251 +0000 UTC m=+0.043816852 container remove e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.038 253542 INFO nova.scheduler.client.report [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Deleted allocations for instance 6998a6cf-b660-4558-98cf-bf5984775b1d
Nov 25 08:28:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.038 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1310e79f-e6c0-42c7-bf5b-e616aea94f66]: (4, ('Tue Nov 25 08:28:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de (e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0)\ne1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0\nTue Nov 25 08:28:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de (e1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0)\ne1b52127dc1ec05b94275d0736abfa77621b03b28b6ce6d889d5bd0ecf1be9c0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.041 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[807a07a0-46e8-46cb-baf6-0e9091d6e6bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.043 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0070171d-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:43 compute-0 kernel: tap0070171d-b0: left promiscuous mode
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.047 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.065 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.068 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5db841-c956-40d5-8d3b-c94f66698622]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.084 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[50393e9c-abf9-44c7-b0c3-61d9984f8772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.088 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6b083f00-9f03-4bc8-a34b-9cc4744885bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 257 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 9.0 MiB/s wr, 297 op/s
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.104 253542 DEBUG oslo_concurrency.lockutils [None req-bf3b8450-5c1c-44a8-9b80-e93361ab5f19 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "6998a6cf-b660-4558-98cf-bf5984775b1d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.114 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d78b7ccb-0757-4694-99df-93bd8a59b731]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465713, 'reachable_time': 27140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293645, 'error': None, 'target': 'ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.117 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0070171d-b7ca-4ed3-baea-814d9cd382de deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:28:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:43.117 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[11c3a51b-3c39-4a0c-8712-b62b085dd2ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d0070171d\x2db7ca\x2d4ed3\x2dbaea\x2d814d9cd382de.mount: Deactivated successfully.
Nov 25 08:28:43 compute-0 optimistic_kowalevski[293426]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:28:43 compute-0 optimistic_kowalevski[293426]: --> relative data size: 1.0
Nov 25 08:28:43 compute-0 optimistic_kowalevski[293426]: --> All data devices are unavailable
Nov 25 08:28:43 compute-0 systemd[1]: libpod-8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d.scope: Deactivated successfully.
Nov 25 08:28:43 compute-0 systemd[1]: libpod-8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d.scope: Consumed 1.074s CPU time.
Nov 25 08:28:43 compute-0 conmon[293426]: conmon 8291d7d8ca45642f360e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d.scope/container/memory.events
Nov 25 08:28:43 compute-0 podman[293369]: 2025-11-25 08:28:43.178926669 +0000 UTC m=+1.442833870 container died 8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.179 253542 DEBUG nova.network.neutron [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Successfully updated port: d4493bab-df0a-4934-ab26-43dae0dbae72 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.194 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.195 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquired lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.195 253542 DEBUG nova.network.neutron [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:28:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-40668a1f5dedaf4cab05a01ddcda2a8366970cc4fe4c9fc1f11870dd061fe30c-merged.mount: Deactivated successfully.
Nov 25 08:28:43 compute-0 podman[293369]: 2025-11-25 08:28:43.256872382 +0000 UTC m=+1.520779583 container remove 8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.267 253542 DEBUG nova.compute.manager [req-41ba1e70-abb1-4d6a-bb57-fa81f134d33f req-765bb381-5988-4deb-b389-6750012978b5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received event network-changed-d4493bab-df0a-4934-ab26-43dae0dbae72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.267 253542 DEBUG nova.compute.manager [req-41ba1e70-abb1-4d6a-bb57-fa81f134d33f req-765bb381-5988-4deb-b389-6750012978b5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Refreshing instance network info cache due to event network-changed-d4493bab-df0a-4934-ab26-43dae0dbae72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.268 253542 DEBUG oslo_concurrency.lockutils [req-41ba1e70-abb1-4d6a-bb57-fa81f134d33f req-765bb381-5988-4deb-b389-6750012978b5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:28:43 compute-0 systemd[1]: libpod-conmon-8291d7d8ca45642f360ebce1d5b883212a7d530c7b569291626b68abf79cab8d.scope: Deactivated successfully.
Nov 25 08:28:43 compute-0 sudo[292870]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:43 compute-0 sudo[293663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:28:43 compute-0 sudo[293663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:43 compute-0 sudo[293663]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.378 253542 INFO nova.virt.libvirt.driver [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Deleting instance files /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a_del
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.379 253542 INFO nova.virt.libvirt.driver [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Deletion of /var/lib/nova/instances/4c934302-d7cd-4826-835e-cab6dba97e3a_del complete
Nov 25 08:28:43 compute-0 sudo[293688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:28:43 compute-0 sudo[293688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:43 compute-0 sudo[293688]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.435 253542 INFO nova.compute.manager [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Took 0.80 seconds to destroy the instance on the hypervisor.
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.436 253542 DEBUG oslo.service.loopingcall [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.436 253542 DEBUG nova.compute.manager [-] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.436 253542 DEBUG nova.network.neutron [-] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:28:43 compute-0 sudo[293713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:28:43 compute-0 sudo[293713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:43 compute-0 sudo[293713]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:43 compute-0 sudo[293740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:28:43 compute-0 sudo[293740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.567 253542 DEBUG nova.network.neutron [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:28:43 compute-0 podman[293805]: 2025-11-25 08:28:43.846353686 +0000 UTC m=+0.048705107 container create 2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:28:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/879197356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:43 compute-0 ceph-mon[75015]: pgmap v1325: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 257 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 9.0 MiB/s wr, 297 op/s
Nov 25 08:28:43 compute-0 systemd[1]: Started libpod-conmon-2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f.scope.
Nov 25 08:28:43 compute-0 podman[293805]: 2025-11-25 08:28:43.817586521 +0000 UTC m=+0.019937972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:28:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:28:43 compute-0 sshd-session[293736]: Invalid user loginuser from 193.32.162.151 port 32856
Nov 25 08:28:43 compute-0 podman[293805]: 2025-11-25 08:28:43.949754014 +0000 UTC m=+0.152105535 container init 2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 08:28:43 compute-0 podman[293805]: 2025-11-25 08:28:43.96190469 +0000 UTC m=+0.164256121 container start 2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.967 253542 DEBUG nova.network.neutron [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Updating instance_info_cache with network_info: [{"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:43 compute-0 podman[293805]: 2025-11-25 08:28:43.965294603 +0000 UTC m=+0.167646094 container attach 2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:28:43 compute-0 boring_wilson[293823]: 167 167
Nov 25 08:28:43 compute-0 systemd[1]: libpod-2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f.scope: Deactivated successfully.
Nov 25 08:28:43 compute-0 podman[293805]: 2025-11-25 08:28:43.973109949 +0000 UTC m=+0.175461380 container died 2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.984 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Releasing lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.984 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Instance network_info: |[{"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.986 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Start _get_guest_xml network_info=[{"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.990 253542 WARNING nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.996 253542 DEBUG nova.virt.libvirt.host [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:28:43 compute-0 nova_compute[253538]: 2025-11-25 08:28:43.996 253542 DEBUG nova.virt.libvirt.host [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.000 253542 DEBUG nova.virt.libvirt.host [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.000 253542 DEBUG nova.virt.libvirt.host [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.001 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.001 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.001 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.001 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.002 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.002 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.002 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.002 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.002 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.002 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.003 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.003 253542 DEBUG nova.virt.hardware [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.005 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-0797a8ee5200bbda7b4216fba6239f07c6a0fc4b80c27932480a39e4b316e0e1-merged.mount: Deactivated successfully.
Nov 25 08:28:44 compute-0 sshd-session[293736]: Connection closed by invalid user loginuser 193.32.162.151 port 32856 [preauth]
Nov 25 08:28:44 compute-0 podman[293805]: 2025-11-25 08:28:44.126791007 +0000 UTC m=+0.329142468 container remove 2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 08:28:44 compute-0 systemd[1]: libpod-conmon-2f5a8a58cf5da28159707136391d71c0f8ac81986a1e109aae90be15a9043d1f.scope: Deactivated successfully.
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.156 253542 DEBUG nova.network.neutron [-] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.170 253542 INFO nova.compute.manager [-] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Took 0.73 seconds to deallocate network for instance.
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.208 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.209 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.300 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received event network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.301 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.301 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.301 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.301 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] No waiting events found dispatching network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.302 253542 WARNING nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received unexpected event network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c for instance with vm_state active and task_state None.
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.302 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6998a6cf-b660-4558-98cf-bf5984775b1d] Received event network-vif-deleted-65fd7d0e-59ee-4411-92ee-f934016f1d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.302 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received event network-changed-799c50c8-d1e7-4c15-a3d9-29903d576304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.302 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Refreshing instance network info cache due to event network-changed-799c50c8-d1e7-4c15-a3d9-29903d576304. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.303 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.303 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.303 253542 DEBUG nova.network.neutron [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Refreshing network info cache for port 799c50c8-d1e7-4c15-a3d9-29903d576304 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:28:44 compute-0 podman[293866]: 2025-11-25 08:28:44.334423686 +0000 UTC m=+0.062025665 container create 6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mccarthy, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.335 253542 DEBUG oslo_concurrency.processutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:44 compute-0 podman[293866]: 2025-11-25 08:28:44.297526006 +0000 UTC m=+0.025128005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.392 253542 DEBUG nova.network.neutron [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Updating instance_info_cache with network_info: [{"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:44 compute-0 systemd[1]: Started libpod-conmon-6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742.scope.
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.418 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Releasing lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.418 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Instance network_info: |[{"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.418 253542 DEBUG oslo_concurrency.lockutils [req-41ba1e70-abb1-4d6a-bb57-fa81f134d33f req-765bb381-5988-4deb-b389-6750012978b5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.419 253542 DEBUG nova.network.neutron [req-41ba1e70-abb1-4d6a-bb57-fa81f134d33f req-765bb381-5988-4deb-b389-6750012978b5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Refreshing network info cache for port d4493bab-df0a-4934-ab26-43dae0dbae72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:28:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c701313089dcf59b569866adaf6ca7c27a06e3d84927e44617ca2d58f785c6cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c701313089dcf59b569866adaf6ca7c27a06e3d84927e44617ca2d58f785c6cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c701313089dcf59b569866adaf6ca7c27a06e3d84927e44617ca2d58f785c6cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c701313089dcf59b569866adaf6ca7c27a06e3d84927e44617ca2d58f785c6cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.422 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Start _get_guest_xml network_info=[{"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:28:44 compute-0 podman[293866]: 2025-11-25 08:28:44.441083974 +0000 UTC m=+0.168686033 container init 6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.446 253542 WARNING nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:28:44 compute-0 podman[293866]: 2025-11-25 08:28:44.448686154 +0000 UTC m=+0.176288133 container start 6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mccarthy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.451 253542 DEBUG nova.virt.libvirt.host [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:28:44 compute-0 podman[293866]: 2025-11-25 08:28:44.452247292 +0000 UTC m=+0.179849321 container attach 6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mccarthy, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.452 253542 DEBUG nova.virt.libvirt.host [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.469 253542 DEBUG nova.virt.libvirt.host [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.469 253542 DEBUG nova.virt.libvirt.host [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.470 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.470 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.470 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.470 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.470 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.471 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.471 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.471 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.471 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.471 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:28:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.471 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.472 253542 DEBUG nova.virt.hardware [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:28:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2352777220' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.474 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.500 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.521 253542 DEBUG nova.storage.rbd_utils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.525 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:28:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2400758869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.799 253542 DEBUG oslo_concurrency.processutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.805 253542 DEBUG nova.compute.provider_tree [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.818 253542 DEBUG nova.scheduler.client.report [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.842 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.864 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.872 253542 INFO nova.scheduler.client.report [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Deleted allocations for instance 4c934302-d7cd-4826-835e-cab6dba97e3a
Nov 25 08:28:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:28:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1742349790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.894 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2352777220' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2400758869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:28:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1742349790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.915 253542 DEBUG nova.storage.rbd_utils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] rbd image cf07e611-51eb-4bcf-8757-8f75d3807da6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.919 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:44 compute-0 nova_compute[253538]: 2025-11-25 08:28:44.952 253542 DEBUG oslo_concurrency.lockutils [None req-c5756116-ca2b-4696-927b-d39eef6d1be9 ee2fe69e0dfa4467926cec954790823e 9bd945273cd04d8981dcb3a319e8d026 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.028 253542 DEBUG nova.compute.manager [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:28:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1557599900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:28:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 25 08:28:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 25 08:28:45 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.064 253542 INFO nova.compute.manager [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] instance snapshotting
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.080 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.081 253542 DEBUG nova.virt.libvirt.vif [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1025077292',display_name='tempest-ImagesTestJSON-server-1025077292',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1025077292',id=33,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-4kbbjx71',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:40Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=8b29fb31-718d-4926-bf4f-bae461ea70ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.081 253542 DEBUG nova.network.os_vif_util [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.082 253542 DEBUG nova.network.os_vif_util [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:03:96,bridge_name='br-int',has_traffic_filtering=True,id=799c50c8-d1e7-4c15-a3d9-29903d576304,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap799c50c8-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.083 253542 DEBUG nova.objects.instance [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 8b29fb31-718d-4926-bf4f-bae461ea70ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.093 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <uuid>8b29fb31-718d-4926-bf4f-bae461ea70ef</uuid>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <name>instance-00000021</name>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <nova:name>tempest-ImagesTestJSON-server-1025077292</nova:name>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:28:43</nova:creationTime>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <nova:user uuid="38fa175fb699405c9a05d7c28f994ebc">tempest-ImagesTestJSON-109091550-project-member</nova:user>
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <nova:project uuid="b0a28d62fb1841c087b84b40bf5a54ec">tempest-ImagesTestJSON-109091550</nova:project>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <nova:port uuid="799c50c8-d1e7-4c15-a3d9-29903d576304">
Nov 25 08:28:45 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <system>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <entry name="serial">8b29fb31-718d-4926-bf4f-bae461ea70ef</entry>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <entry name="uuid">8b29fb31-718d-4926-bf4f-bae461ea70ef</entry>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </system>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <os>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   </os>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <features>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   </features>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/8b29fb31-718d-4926-bf4f-bae461ea70ef_disk">
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       </source>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/8b29fb31-718d-4926-bf4f-bae461ea70ef_disk.config">
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       </source>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:fd:03:96"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <target dev="tap799c50c8-d1"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef/console.log" append="off"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <video>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </video>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:28:45 compute-0 nova_compute[253538]: </domain>
Nov 25 08:28:45 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.093 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Preparing to wait for external event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.094 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.094 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.094 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.094 253542 DEBUG nova.virt.libvirt.vif [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1025077292',display_name='tempest-ImagesTestJSON-server-1025077292',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1025077292',id=33,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-4kbbjx71',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:40Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=8b29fb31-718d-4926-bf4f-bae461ea70ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.095 253542 DEBUG nova.network.os_vif_util [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.095 253542 DEBUG nova.network.os_vif_util [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:03:96,bridge_name='br-int',has_traffic_filtering=True,id=799c50c8-d1e7-4c15-a3d9-29903d576304,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap799c50c8-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.096 253542 DEBUG os_vif [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:03:96,bridge_name='br-int',has_traffic_filtering=True,id=799c50c8-d1e7-4c15-a3d9-29903d576304,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap799c50c8-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.096 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.096 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 232 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 9.6 MiB/s wr, 450 op/s
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.097 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.099 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.099 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap799c50c8-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.100 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap799c50c8-d1, col_values=(('external_ids', {'iface-id': '799c50c8-d1e7-4c15-a3d9-29903d576304', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fd:03:96', 'vm-uuid': '8b29fb31-718d-4926-bf4f-bae461ea70ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:45 compute-0 NetworkManager[48915]: <info>  [1764059325.1024] manager: (tap799c50c8-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.103 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.108 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.109 253542 INFO os_vif [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:03:96,bridge_name='br-int',has_traffic_filtering=True,id=799c50c8-d1e7-4c15-a3d9-29903d576304,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap799c50c8-d1')
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.153 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.153 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.154 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No VIF found with MAC fa:16:3e:fd:03:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.154 253542 INFO nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Using config drive
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.180 253542 DEBUG nova.storage.rbd_utils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]: {
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:     "0": [
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:         {
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "devices": [
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "/dev/loop3"
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             ],
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "lv_name": "ceph_lv0",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "lv_size": "21470642176",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "name": "ceph_lv0",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "tags": {
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.cluster_name": "ceph",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.crush_device_class": "",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.encrypted": "0",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.osd_id": "0",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.type": "block",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.vdo": "0"
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             },
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "type": "block",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "vg_name": "ceph_vg0"
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:         }
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:     ],
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:     "1": [
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:         {
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "devices": [
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "/dev/loop4"
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             ],
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "lv_name": "ceph_lv1",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "lv_size": "21470642176",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "name": "ceph_lv1",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "tags": {
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.cluster_name": "ceph",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.crush_device_class": "",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.encrypted": "0",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.osd_id": "1",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.type": "block",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.vdo": "0"
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             },
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "type": "block",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "vg_name": "ceph_vg1"
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:         }
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:     ],
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:     "2": [
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:         {
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "devices": [
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "/dev/loop5"
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             ],
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "lv_name": "ceph_lv2",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "lv_size": "21470642176",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "name": "ceph_lv2",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "tags": {
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.cluster_name": "ceph",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.crush_device_class": "",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.encrypted": "0",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.osd_id": "2",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.type": "block",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:                 "ceph.vdo": "0"
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             },
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "type": "block",
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:             "vg_name": "ceph_vg2"
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:         }
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]:     ]
Nov 25 08:28:45 compute-0 epic_mccarthy[293883]: }
Nov 25 08:28:45 compute-0 systemd[1]: libpod-6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742.scope: Deactivated successfully.
Nov 25 08:28:45 compute-0 podman[293866]: 2025-11-25 08:28:45.263697571 +0000 UTC m=+0.991299550 container died 6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:28:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c701313089dcf59b569866adaf6ca7c27a06e3d84927e44617ca2d58f785c6cd-merged.mount: Deactivated successfully.
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.321 253542 INFO nova.virt.libvirt.driver [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Beginning live snapshot process
Nov 25 08:28:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:28:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3011737728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:45 compute-0 podman[293866]: 2025-11-25 08:28:45.339687731 +0000 UTC m=+1.067289710 container remove 6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mccarthy, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:28:45 compute-0 systemd[1]: libpod-conmon-6529d1593ec1bb086527735570c4ab2a33f3f0f5efd982af9053f1898b4ae742.scope: Deactivated successfully.
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.376 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.378 253542 DEBUG nova.virt.libvirt.vif [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-159347704',display_name='tempest-ServersTestJSON-server-159347704',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-159347704',id=34,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEJTnavPSMTdJ98k7GbPudzwCAZWvOVoss8PE9qNwjiCue78AnUJTbduASU9tXAUM03eX8VLrSKKQxmPEVUcAUgD9baA3BJYk4n2P01dqgil022Gs27o2zUO7uKTgrjs9Q==',key_name='tempest-keypair-1656658254',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17d31dbb1e4542daaa43d2fda87e18ad',ramdisk_id='',reservation_id='r-6gyhk8p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1586688039',owner_user_name='tempest-ServersTestJSON-1586688039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9a96a98d6bb448aab904a8763d3675ec',uuid=cf07e611-51eb-4bcf-8757-8f75d3807da6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.378 253542 DEBUG nova.network.os_vif_util [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Converting VIF {"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.379 253542 DEBUG nova.network.os_vif_util [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:9f:ca,bridge_name='br-int',has_traffic_filtering=True,id=d4493bab-df0a-4934-ab26-43dae0dbae72,network=Network(3eeb3245-b22f-4899-9ec0-084ea5f63b6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4493bab-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.380 253542 DEBUG nova.objects.instance [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lazy-loading 'pci_devices' on Instance uuid cf07e611-51eb-4bcf-8757-8f75d3807da6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:28:45 compute-0 sudo[293740]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.395 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <uuid>cf07e611-51eb-4bcf-8757-8f75d3807da6</uuid>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <name>instance-00000022</name>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersTestJSON-server-159347704</nova:name>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:28:44</nova:creationTime>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <nova:user uuid="9a96a98d6bb448aab904a8763d3675ec">tempest-ServersTestJSON-1586688039-project-member</nova:user>
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <nova:project uuid="17d31dbb1e4542daaa43d2fda87e18ad">tempest-ServersTestJSON-1586688039</nova:project>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <nova:port uuid="d4493bab-df0a-4934-ab26-43dae0dbae72">
Nov 25 08:28:45 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <system>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <entry name="serial">cf07e611-51eb-4bcf-8757-8f75d3807da6</entry>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <entry name="uuid">cf07e611-51eb-4bcf-8757-8f75d3807da6</entry>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </system>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <os>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   </os>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <features>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   </features>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/cf07e611-51eb-4bcf-8757-8f75d3807da6_disk">
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       </source>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/cf07e611-51eb-4bcf-8757-8f75d3807da6_disk.config">
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       </source>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:28:45 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:6b:9f:ca"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <target dev="tapd4493bab-df"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6/console.log" append="off"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <video>
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </video>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:28:45 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:28:45 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:28:45 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:28:45 compute-0 nova_compute[253538]: </domain>
Nov 25 08:28:45 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.396 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Preparing to wait for external event network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.396 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.396 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.397 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.397 253542 DEBUG nova.virt.libvirt.vif [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:28:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-159347704',display_name='tempest-ServersTestJSON-server-159347704',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-159347704',id=34,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEJTnavPSMTdJ98k7GbPudzwCAZWvOVoss8PE9qNwjiCue78AnUJTbduASU9tXAUM03eX8VLrSKKQxmPEVUcAUgD9baA3BJYk4n2P01dqgil022Gs27o2zUO7uKTgrjs9Q==',key_name='tempest-keypair-1656658254',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17d31dbb1e4542daaa43d2fda87e18ad',ramdisk_id='',reservation_id='r-6gyhk8p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1586688039',owner_user_name='tempest-ServersTestJSON-1586688039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:28:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9a96a98d6bb448aab904a8763d3675ec',uuid=cf07e611-51eb-4bcf-8757-8f75d3807da6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.398 253542 DEBUG nova.network.os_vif_util [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Converting VIF {"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.398 253542 DEBUG nova.network.os_vif_util [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:9f:ca,bridge_name='br-int',has_traffic_filtering=True,id=d4493bab-df0a-4934-ab26-43dae0dbae72,network=Network(3eeb3245-b22f-4899-9ec0-084ea5f63b6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4493bab-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.399 253542 DEBUG os_vif [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:9f:ca,bridge_name='br-int',has_traffic_filtering=True,id=d4493bab-df0a-4934-ab26-43dae0dbae72,network=Network(3eeb3245-b22f-4899-9ec0-084ea5f63b6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4493bab-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.399 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.399 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.400 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.450 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.451 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4493bab-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.451 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4493bab-df, col_values=(('external_ids', {'iface-id': 'd4493bab-df0a-4934-ab26-43dae0dbae72', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:9f:ca', 'vm-uuid': 'cf07e611-51eb-4bcf-8757-8f75d3807da6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.452 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:45 compute-0 NetworkManager[48915]: <info>  [1764059325.4535] manager: (tapd4493bab-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.458 253542 DEBUG nova.virt.libvirt.imagebackend [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.460 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.461 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.462 253542 INFO os_vif [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:9f:ca,bridge_name='br-int',has_traffic_filtering=True,id=d4493bab-df0a-4934-ab26-43dae0dbae72,network=Network(3eeb3245-b22f-4899-9ec0-084ea5f63b6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4493bab-df')
Nov 25 08:28:45 compute-0 sudo[294052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:28:45 compute-0 sudo[294052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:45 compute-0 sudo[294052]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.504 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.505 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.505 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] No VIF found with MAC fa:16:3e:6b:9f:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.505 253542 INFO nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Using config drive
Nov 25 08:28:45 compute-0 sudo[294112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:28:45 compute-0 sudo[294112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.530 253542 DEBUG nova.storage.rbd_utils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] rbd image cf07e611-51eb-4bcf-8757-8f75d3807da6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:45 compute-0 sudo[294112]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.562 253542 DEBUG nova.network.neutron [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Updated VIF entry in instance network info cache for port 799c50c8-d1e7-4c15-a3d9-29903d576304. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.563 253542 DEBUG nova.network.neutron [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Updating instance_info_cache with network_info: [{"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.575 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.576 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received event network-vif-unplugged-b62f3741-11c8-4840-a720-d6ee07f06284 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.576 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.576 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.576 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.577 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] No waiting events found dispatching network-vif-unplugged-b62f3741-11c8-4840-a720-d6ee07f06284 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.577 253542 WARNING nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received unexpected event network-vif-unplugged-b62f3741-11c8-4840-a720-d6ee07f06284 for instance with vm_state deleted and task_state None.
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.577 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received event network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.577 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.578 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.578 253542 DEBUG oslo_concurrency.lockutils [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4c934302-d7cd-4826-835e-cab6dba97e3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.578 253542 DEBUG nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] No waiting events found dispatching network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.578 253542 WARNING nova.compute.manager [req-5266ba18-be39-490a-b6ab-c72dfc626390 req-47b2092e-bd30-4057-80eb-996673e38468 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received unexpected event network-vif-plugged-b62f3741-11c8-4840-a720-d6ee07f06284 for instance with vm_state deleted and task_state None.
Nov 25 08:28:45 compute-0 sudo[294155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:28:45 compute-0 sudo[294155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:45 compute-0 sudo[294155]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.619 253542 INFO nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Creating config drive at /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef/disk.config
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.626 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpklaxl3p8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:45 compute-0 sudo[294180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:28:45 compute-0 sudo[294180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.712 253542 DEBUG nova.storage.rbd_utils [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] creating snapshot(c22d1af405f648b08ec82c457c2957ba) on rbd image(c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.778 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpklaxl3p8" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.802 253542 DEBUG nova.storage.rbd_utils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.805 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef/disk.config 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.834 253542 DEBUG nova.network.neutron [req-41ba1e70-abb1-4d6a-bb57-fa81f134d33f req-765bb381-5988-4deb-b389-6750012978b5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Updated VIF entry in instance network info cache for port d4493bab-df0a-4934-ab26-43dae0dbae72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.835 253542 DEBUG nova.network.neutron [req-41ba1e70-abb1-4d6a-bb57-fa81f134d33f req-765bb381-5988-4deb-b389-6750012978b5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Updating instance_info_cache with network_info: [{"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.852 253542 DEBUG oslo_concurrency.lockutils [req-41ba1e70-abb1-4d6a-bb57-fa81f134d33f req-765bb381-5988-4deb-b389-6750012978b5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.942 253542 INFO nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Creating config drive at /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6/disk.config
Nov 25 08:28:45 compute-0 nova_compute[253538]: 2025-11-25 08:28:45.949 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqvqme_g6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:46 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1557599900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:46 compute-0 ceph-mon[75015]: osdmap e141: 3 total, 3 up, 3 in
Nov 25 08:28:46 compute-0 ceph-mon[75015]: pgmap v1327: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 232 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 9.6 MiB/s wr, 450 op/s
Nov 25 08:28:46 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3011737728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:28:46 compute-0 podman[294298]: 2025-11-25 08:28:45.958934907 +0000 UTC m=+0.020559549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:28:46 compute-0 nova_compute[253538]: 2025-11-25 08:28:46.086 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqvqme_g6" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:46 compute-0 podman[294298]: 2025-11-25 08:28:46.220734872 +0000 UTC m=+0.282359504 container create 81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 25 08:28:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 25 08:28:46 compute-0 nova_compute[253538]: 2025-11-25 08:28:46.291 253542 DEBUG nova.storage.rbd_utils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] rbd image cf07e611-51eb-4bcf-8757-8f75d3807da6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:28:46 compute-0 nova_compute[253538]: 2025-11-25 08:28:46.295 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6/disk.config cf07e611-51eb-4bcf-8757-8f75d3807da6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:28:46 compute-0 systemd[1]: Started libpod-conmon-81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df.scope.
Nov 25 08:28:46 compute-0 nova_compute[253538]: 2025-11-25 08:28:46.382 253542 DEBUG nova.compute.manager [req-9a5e8226-4d2e-4e64-be3c-59447e89c8dc req-aa4b130c-8c65-4f9c-9d7f-844c4c66f8de b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Received event network-vif-deleted-b62f3741-11c8-4840-a720-d6ee07f06284 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:28:46 compute-0 podman[294298]: 2025-11-25 08:28:46.554585541 +0000 UTC m=+0.616210263 container init 81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:28:46 compute-0 podman[294298]: 2025-11-25 08:28:46.563721073 +0000 UTC m=+0.625345715 container start 81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:28:46 compute-0 affectionate_shtern[294351]: 167 167
Nov 25 08:28:46 compute-0 systemd[1]: libpod-81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df.scope: Deactivated successfully.
Nov 25 08:28:46 compute-0 conmon[294351]: conmon 81344a838ab472469a2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df.scope/container/memory.events
Nov 25 08:28:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 25 08:28:46 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 25 08:28:46 compute-0 podman[294298]: 2025-11-25 08:28:46.785928344 +0000 UTC m=+0.847552996 container attach 81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 08:28:46 compute-0 podman[294298]: 2025-11-25 08:28:46.787548039 +0000 UTC m=+0.849172671 container died 81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shtern, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:28:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 180 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 7.3 MiB/s wr, 426 op/s
Nov 25 08:28:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0ef35e2278b48f2a0c771a4fe239f72944bc5bb0dd86c0849fae98b51a10205-merged.mount: Deactivated successfully.
Nov 25 08:28:47 compute-0 podman[294298]: 2025-11-25 08:28:47.509619707 +0000 UTC m=+1.571244349 container remove 81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shtern, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:28:47 compute-0 systemd[1]: libpod-conmon-81344a838ab472469a2ce4aa2ec860eb711a769a3bf0df0e28652a73271bb3df.scope: Deactivated successfully.
Nov 25 08:28:47 compute-0 podman[294382]: 2025-11-25 08:28:47.728663801 +0000 UTC m=+0.032099628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:28:48 compute-0 podman[294382]: 2025-11-25 08:28:48.135381933 +0000 UTC m=+0.438817700 container create e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 08:28:48 compute-0 ceph-mon[75015]: osdmap e142: 3 total, 3 up, 3 in
Nov 25 08:28:48 compute-0 ceph-mon[75015]: pgmap v1329: 321 pgs: 321 active+clean; 180 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 7.3 MiB/s wr, 426 op/s
Nov 25 08:28:48 compute-0 systemd[1]: Started libpod-conmon-e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781.scope.
Nov 25 08:28:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443e567917fa884a16c81c49e288bf3bfe26774cf959ec0db389e252dbdf0cb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443e567917fa884a16c81c49e288bf3bfe26774cf959ec0db389e252dbdf0cb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443e567917fa884a16c81c49e288bf3bfe26774cf959ec0db389e252dbdf0cb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443e567917fa884a16c81c49e288bf3bfe26774cf959ec0db389e252dbdf0cb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:48 compute-0 nova_compute[253538]: 2025-11-25 08:28:48.541 253542 DEBUG oslo_concurrency.processutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef/disk.config 8b29fb31-718d-4926-bf4f-bae461ea70ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.736s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:48 compute-0 nova_compute[253538]: 2025-11-25 08:28:48.544 253542 INFO nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Deleting local config drive /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef/disk.config because it was imported into RBD.
Nov 25 08:28:48 compute-0 kernel: tap799c50c8-d1: entered promiscuous mode
Nov 25 08:28:48 compute-0 NetworkManager[48915]: <info>  [1764059328.6366] manager: (tap799c50c8-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Nov 25 08:28:48 compute-0 nova_compute[253538]: 2025-11-25 08:28:48.641 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:48 compute-0 podman[294382]: 2025-11-25 08:28:48.647633361 +0000 UTC m=+0.951069118 container init e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 25 08:28:48 compute-0 ovn_controller[152859]: 2025-11-25T08:28:48Z|00216|binding|INFO|Claiming lport 799c50c8-d1e7-4c15-a3d9-29903d576304 for this chassis.
Nov 25 08:28:48 compute-0 ovn_controller[152859]: 2025-11-25T08:28:48Z|00217|binding|INFO|799c50c8-d1e7-4c15-a3d9-29903d576304: Claiming fa:16:3e:fd:03:96 10.100.0.12
Nov 25 08:28:48 compute-0 podman[294382]: 2025-11-25 08:28:48.660454015 +0000 UTC m=+0.963889762 container start e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.664 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:03:96 10.100.0.12'], port_security=['fa:16:3e:fd:03:96 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8b29fb31-718d-4926-bf4f-bae461ea70ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=799c50c8-d1e7-4c15-a3d9-29903d576304) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.665 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 799c50c8-d1e7-4c15-a3d9-29903d576304 in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e bound to our chassis
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.667 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.681 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[58e86e0b-4b00-4bf4-8718-922ab7ac9e9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.682 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapba659d6c-c1 in ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:28:48 compute-0 systemd-machined[215790]: New machine qemu-38-instance-00000021.
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.687 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapba659d6c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.687 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7105c66d-7a24-4039-bc00-6c505b3aa540]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.689 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a59e38-72c3-4aa5-9413-2454e62bdfe2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:48 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-00000021.
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.703 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[df1b49a7-b702-4ef5-a674-a4da1ca3789b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:48 compute-0 systemd-udevd[294421]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.730 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[689bdc15-67e4-408c-9b60-bdd38c91652c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:48 compute-0 NetworkManager[48915]: <info>  [1764059328.7325] device (tap799c50c8-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:28:48 compute-0 NetworkManager[48915]: <info>  [1764059328.7335] device (tap799c50c8-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:28:48 compute-0 nova_compute[253538]: 2025-11-25 08:28:48.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:48 compute-0 podman[294382]: 2025-11-25 08:28:48.758489475 +0000 UTC m=+1.061925242 container attach e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 08:28:48 compute-0 ovn_controller[152859]: 2025-11-25T08:28:48Z|00218|binding|INFO|Setting lport 799c50c8-d1e7-4c15-a3d9-29903d576304 ovn-installed in OVS
Nov 25 08:28:48 compute-0 ovn_controller[152859]: 2025-11-25T08:28:48Z|00219|binding|INFO|Setting lport 799c50c8-d1e7-4c15-a3d9-29903d576304 up in Southbound
Nov 25 08:28:48 compute-0 nova_compute[253538]: 2025-11-25 08:28:48.760 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.764 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0af4bc-1e58-4278-a75b-442dd3d63e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.771 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9b02be0b-b06e-40cf-99e7-54795611d9ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:48 compute-0 systemd-udevd[294423]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:28:48 compute-0 NetworkManager[48915]: <info>  [1764059328.7720] manager: (tapba659d6c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/110)
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.803 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d52beaaf-17c1-4f21-92aa-6ad5c8606844]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.810 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc14722-25af-4139-9c75-1894d918fd65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:48 compute-0 nova_compute[253538]: 2025-11-25 08:28:48.831 253542 DEBUG nova.storage.rbd_utils [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] cloning vms/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk@c22d1af405f648b08ec82c457c2957ba to images/86f2be0d-0948-4bc3-9e65-5ca5e45d1b2d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:28:48 compute-0 NetworkManager[48915]: <info>  [1764059328.8412] device (tapba659d6c-c0): carrier: link connected
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.846 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[279e8e2e-1e65-408c-92ca-0fd7fd16b7d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.863 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2650088a-7fe9-401a-a8a9-634ad8b539d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468344, 'reachable_time': 30576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294470, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.880 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6888de8c-0a49-439b-86c0-2bf6e1d28c75]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:c340'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468344, 'tstamp': 468344}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294486, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.896 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[25970630-f083-4aab-ada7-94e26e7137c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468344, 'reachable_time': 30576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294490, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.923 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3c340c-fe41-4d42-9ace-c545a4dd4506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.988 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[048fc774-22a1-48ee-9be4-b7444a7b561e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.990 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.990 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.991 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba659d6c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:48 compute-0 nova_compute[253538]: 2025-11-25 08:28:48.992 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:48 compute-0 NetworkManager[48915]: <info>  [1764059328.9930] manager: (tapba659d6c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Nov 25 08:28:48 compute-0 kernel: tapba659d6c-c0: entered promiscuous mode
Nov 25 08:28:48 compute-0 nova_compute[253538]: 2025-11-25 08:28:48.995 253542 DEBUG oslo_concurrency.processutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6/disk.config cf07e611-51eb-4bcf-8757-8f75d3807da6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:28:48 compute-0 nova_compute[253538]: 2025-11-25 08:28:48.995 253542 INFO nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Deleting local config drive /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6/disk.config because it was imported into RBD.
Nov 25 08:28:48 compute-0 nova_compute[253538]: 2025-11-25 08:28:48.996 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:48.997 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapba659d6c-c0, col_values=(('external_ids', {'iface-id': '02ee51d1-7fc5-4815-93ec-b9ead088a46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:48 compute-0 ovn_controller[152859]: 2025-11-25T08:28:48Z|00220|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.003 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.019 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.025 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:49.026 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:49.028 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[66466645-dd6a-460c-8d09-a04b91318b33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:49.029 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:49.029 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'env', 'PROCESS_TAG=haproxy-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ba659d6c-c094-47d7-ba45-d0e659ce778e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:28:49 compute-0 kernel: tapd4493bab-df: entered promiscuous mode
Nov 25 08:28:49 compute-0 NetworkManager[48915]: <info>  [1764059329.0479] manager: (tapd4493bab-df): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Nov 25 08:28:49 compute-0 systemd-udevd[294442]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:28:49 compute-0 ovn_controller[152859]: 2025-11-25T08:28:49Z|00221|binding|INFO|Claiming lport d4493bab-df0a-4934-ab26-43dae0dbae72 for this chassis.
Nov 25 08:28:49 compute-0 ovn_controller[152859]: 2025-11-25T08:28:49Z|00222|binding|INFO|d4493bab-df0a-4934-ab26-43dae0dbae72: Claiming fa:16:3e:6b:9f:ca 10.100.0.3
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.050 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:49 compute-0 NetworkManager[48915]: <info>  [1764059329.0616] device (tapd4493bab-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.061 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:49 compute-0 NetworkManager[48915]: <info>  [1764059329.0621] device (tapd4493bab-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:28:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:49.068 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:9f:ca 10.100.0.3'], port_security=['fa:16:3e:6b:9f:ca 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf07e611-51eb-4bcf-8757-8f75d3807da6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3eeb3245-b22f-4899-9ec0-084ea5f63b6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17d31dbb1e4542daaa43d2fda87e18ad', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5ca9dda1-b2ea-4f89-8fc2-d2049ead3ade', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d85b3127-2363-4567-943e-4e79235e055a, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d4493bab-df0a-4934-ab26-43dae0dbae72) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:28:49 compute-0 systemd-machined[215790]: New machine qemu-39-instance-00000022.
Nov 25 08:28:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 180 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 6.0 MiB/s wr, 385 op/s
Nov 25 08:28:49 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-00000022.
Nov 25 08:28:49 compute-0 ovn_controller[152859]: 2025-11-25T08:28:49Z|00223|binding|INFO|Setting lport d4493bab-df0a-4934-ab26-43dae0dbae72 ovn-installed in OVS
Nov 25 08:28:49 compute-0 ovn_controller[152859]: 2025-11-25T08:28:49Z|00224|binding|INFO|Setting lport d4493bab-df0a-4934-ab26-43dae0dbae72 up in Southbound
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.158 253542 DEBUG nova.storage.rbd_utils [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] flattening images/86f2be0d-0948-4bc3-9e65-5ca5e45d1b2d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.383 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059329.2709084, 8b29fb31-718d-4926-bf4f-bae461ea70ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.383 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] VM Started (Lifecycle Event)
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.385 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.389 253542 DEBUG nova.compute.manager [req-0a552268-0ec6-46c2-ae20-9e54096ac86f req-08454a75-4b98-4b89-9e40-5e3d3b69d322 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.390 253542 DEBUG oslo_concurrency.lockutils [req-0a552268-0ec6-46c2-ae20-9e54096ac86f req-08454a75-4b98-4b89-9e40-5e3d3b69d322 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.390 253542 DEBUG oslo_concurrency.lockutils [req-0a552268-0ec6-46c2-ae20-9e54096ac86f req-08454a75-4b98-4b89-9e40-5e3d3b69d322 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.390 253542 DEBUG oslo_concurrency.lockutils [req-0a552268-0ec6-46c2-ae20-9e54096ac86f req-08454a75-4b98-4b89-9e40-5e3d3b69d322 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.391 253542 DEBUG nova.compute.manager [req-0a552268-0ec6-46c2-ae20-9e54096ac86f req-08454a75-4b98-4b89-9e40-5e3d3b69d322 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Processing event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.392 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.396 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.404 253542 INFO nova.virt.libvirt.driver [-] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Instance spawned successfully.
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.405 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:28:49 compute-0 ovn_controller[152859]: 2025-11-25T08:28:49Z|00225|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 08:28:49 compute-0 ovn_controller[152859]: 2025-11-25T08:28:49Z|00226|binding|INFO|Releasing lport ac244317-fa52-4a6a-92f4-98845a41804d from this chassis (sb_readonly=0)
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.410 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.412 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.435 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.436 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059329.2710547, 8b29fb31-718d-4926-bf4f-bae461ea70ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.436 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] VM Paused (Lifecycle Event)
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.444 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.445 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.445 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.446 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.447 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.447 253542 DEBUG nova.virt.libvirt.driver [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.451 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.454 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059329.396191, 8b29fb31-718d-4926-bf4f-bae461ea70ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.454 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] VM Resumed (Lifecycle Event)
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.506 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.508 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.509 253542 INFO nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Took 8.92 seconds to spawn the instance on the hypervisor.
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.510 253542 DEBUG nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.513 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:28:49 compute-0 podman[294611]: 2025-11-25 08:28:49.437137833 +0000 UTC m=+0.045510110 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.536 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.570 253542 INFO nova.compute.manager [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Took 10.08 seconds to build instance.
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.589 253542 DEBUG oslo_concurrency.lockutils [None req-02fcfbc8-8adf-4b3c-8fc6-7f90fda8674d 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:49 compute-0 friendly_pike[294398]: {
Nov 25 08:28:49 compute-0 friendly_pike[294398]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:28:49 compute-0 friendly_pike[294398]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:28:49 compute-0 friendly_pike[294398]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:28:49 compute-0 friendly_pike[294398]:         "osd_id": 1,
Nov 25 08:28:49 compute-0 friendly_pike[294398]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:28:49 compute-0 friendly_pike[294398]:         "type": "bluestore"
Nov 25 08:28:49 compute-0 friendly_pike[294398]:     },
Nov 25 08:28:49 compute-0 friendly_pike[294398]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:28:49 compute-0 friendly_pike[294398]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:28:49 compute-0 friendly_pike[294398]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:28:49 compute-0 friendly_pike[294398]:         "osd_id": 2,
Nov 25 08:28:49 compute-0 friendly_pike[294398]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:28:49 compute-0 friendly_pike[294398]:         "type": "bluestore"
Nov 25 08:28:49 compute-0 friendly_pike[294398]:     },
Nov 25 08:28:49 compute-0 friendly_pike[294398]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:28:49 compute-0 friendly_pike[294398]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:28:49 compute-0 friendly_pike[294398]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:28:49 compute-0 friendly_pike[294398]:         "osd_id": 0,
Nov 25 08:28:49 compute-0 friendly_pike[294398]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:28:49 compute-0 friendly_pike[294398]:         "type": "bluestore"
Nov 25 08:28:49 compute-0 friendly_pike[294398]:     }
Nov 25 08:28:49 compute-0 friendly_pike[294398]: }
Nov 25 08:28:49 compute-0 systemd[1]: libpod-e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781.scope: Deactivated successfully.
Nov 25 08:28:49 compute-0 podman[294611]: 2025-11-25 08:28:49.735142639 +0000 UTC m=+0.343514886 container create 74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 08:28:49 compute-0 podman[294382]: 2025-11-25 08:28:49.746642287 +0000 UTC m=+2.050078074 container died e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.866 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:49 compute-0 systemd[1]: Started libpod-conmon-74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3.scope.
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.914 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059329.9140046, cf07e611-51eb-4bcf-8757-8f75d3807da6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.915 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] VM Started (Lifecycle Event)
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.931 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.937 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059329.9147372, cf07e611-51eb-4bcf-8757-8f75d3807da6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.938 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] VM Paused (Lifecycle Event)
Nov 25 08:28:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:28:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb3ae8526a3fd45e0f449dbfc51ea63977bc5a178cd81680ec65b696722764a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.959 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.964 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:28:49 compute-0 nova_compute[253538]: 2025-11-25 08:28:49.987 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:28:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:28:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 25 08:28:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-443e567917fa884a16c81c49e288bf3bfe26774cf959ec0db389e252dbdf0cb9-merged.mount: Deactivated successfully.
Nov 25 08:28:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 25 08:28:50 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 25 08:28:50 compute-0 podman[294382]: 2025-11-25 08:28:50.36399698 +0000 UTC m=+2.667432717 container remove e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:28:50 compute-0 ceph-mon[75015]: pgmap v1330: 321 pgs: 321 active+clean; 180 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 6.0 MiB/s wr, 385 op/s
Nov 25 08:28:50 compute-0 ceph-mon[75015]: osdmap e143: 3 total, 3 up, 3 in
Nov 25 08:28:50 compute-0 podman[294611]: 2025-11-25 08:28:50.392825318 +0000 UTC m=+1.001197585 container init 74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 08:28:50 compute-0 podman[294611]: 2025-11-25 08:28:50.399490241 +0000 UTC m=+1.007862478 container start 74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:28:50 compute-0 sudo[294180]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:28:50 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:28:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:28:50 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[294697]: [NOTICE]   (294717) : New worker (294722) forked
Nov 25 08:28:50 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[294697]: [NOTICE]   (294717) : Loading success.
Nov 25 08:28:50 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:28:50 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 92748db0-d4de-4c59-8155-823f2b014854 does not exist
Nov 25 08:28:50 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev ae4e0042-c102-402a-a02c-b1c15b9cc4ef does not exist
Nov 25 08:28:50 compute-0 systemd[1]: libpod-conmon-e472271b1d3f6307594b1ec5027db1077bfb61c89e380196eb076b88a5e08781.scope: Deactivated successfully.
Nov 25 08:28:50 compute-0 nova_compute[253538]: 2025-11-25 08:28:50.454 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:50 compute-0 nova_compute[253538]: 2025-11-25 08:28:50.460 253542 DEBUG nova.storage.rbd_utils [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] removing snapshot(c22d1af405f648b08ec82c457c2957ba) on rbd image(c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.471 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d4493bab-df0a-4934-ab26-43dae0dbae72 in datapath 3eeb3245-b22f-4899-9ec0-084ea5f63b6b unbound from our chassis
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.473 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3eeb3245-b22f-4899-9ec0-084ea5f63b6b
Nov 25 08:28:50 compute-0 sudo[294731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.484 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[326f85e0-8ef7-4063-b084-3b6026a8882c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.485 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3eeb3245-b1 in ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:28:50 compute-0 sudo[294731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.488 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3eeb3245-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.489 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56153995-97a0-45e8-a5bf-c1356b4581d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.489 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb06096-7298-48d5-a275-d73e1f883117]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:50 compute-0 sudo[294731]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.505 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[debe0072-d6b5-47c8-8dd3-15ccfa30a926]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.528 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3e36e8-d327-4a40-a28f-0f7bbdf45b50]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:50 compute-0 sudo[294758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:28:50 compute-0 sudo[294758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:28:50 compute-0 sudo[294758]: pam_unix(sudo:session): session closed for user root
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.558 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[09dcd5c9-d1d1-40ef-9b95-a7ebec9e1792]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.565 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4fb470-f6cb-4149-adc7-80d53f354a2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:50 compute-0 NetworkManager[48915]: <info>  [1764059330.5678] manager: (tap3eeb3245-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.598 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[96a6e261-e908-4a5e-83a6-aaae374998a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.602 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[aba639be-1786-4136-8df4-b2c262ef9024]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:50 compute-0 NetworkManager[48915]: <info>  [1764059330.6214] device (tap3eeb3245-b0): carrier: link connected
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.625 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4df0b3-d8ec-4ec7-b5ae-9838a11b6154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.641 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e1fde7-01cb-4680-9712-b6c66fe086d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3eeb3245-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:bc:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468522, 'reachable_time': 40763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294791, 'error': None, 'target': 'ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.654 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f603cecb-efb9-4a5f-a05c-cec43b7c4ed8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:bc10'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468522, 'tstamp': 468522}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294792, 'error': None, 'target': 'ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.671 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[be27721a-99c9-4dd1-af0e-240d45b03504]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3eeb3245-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:bc:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468522, 'reachable_time': 40763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294793, 'error': None, 'target': 'ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.696 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[37ecffa5-9251-43e4-907e-6218b5f3aa33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.746 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d101aeed-4b6c-4afc-bfe6-ecccae96654e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.747 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3eeb3245-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.748 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.748 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3eeb3245-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:50 compute-0 NetworkManager[48915]: <info>  [1764059330.7584] manager: (tap3eeb3245-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Nov 25 08:28:50 compute-0 kernel: tap3eeb3245-b0: entered promiscuous mode
Nov 25 08:28:50 compute-0 nova_compute[253538]: 2025-11-25 08:28:50.757 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.760 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3eeb3245-b0, col_values=(('external_ids', {'iface-id': 'ff896c66-8e2b-41d0-a738-217411538e37'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:28:50 compute-0 nova_compute[253538]: 2025-11-25 08:28:50.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:50 compute-0 ovn_controller[152859]: 2025-11-25T08:28:50Z|00227|binding|INFO|Releasing lport ff896c66-8e2b-41d0-a738-217411538e37 from this chassis (sb_readonly=0)
Nov 25 08:28:50 compute-0 nova_compute[253538]: 2025-11-25 08:28:50.779 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.780 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3eeb3245-b22f-4899-9ec0-084ea5f63b6b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3eeb3245-b22f-4899-9ec0-084ea5f63b6b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.784 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[76f2804f-e432-44e1-bf8c-8c679305d7d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.785 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-3eeb3245-b22f-4899-9ec0-084ea5f63b6b
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/3eeb3245-b22f-4899-9ec0-084ea5f63b6b.pid.haproxy
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 3eeb3245-b22f-4899-9ec0-084ea5f63b6b
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:28:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:28:50.785 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b', 'env', 'PROCESS_TAG=haproxy-3eeb3245-b22f-4899-9ec0-084ea5f63b6b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3eeb3245-b22f-4899-9ec0-084ea5f63b6b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:28:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 181 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 139 op/s
Nov 25 08:28:51 compute-0 podman[294826]: 2025-11-25 08:28:51.181812434 +0000 UTC m=+0.061959823 container create b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:28:51 compute-0 systemd[1]: Started libpod-conmon-b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162.scope.
Nov 25 08:28:51 compute-0 podman[294826]: 2025-11-25 08:28:51.146523499 +0000 UTC m=+0.026670908 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:28:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/609605f94689c92905145955c14d87bafa567f0f1b1543ae9778cd645791164d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:28:51 compute-0 podman[294826]: 2025-11-25 08:28:51.290867459 +0000 UTC m=+0.171014868 container init b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 08:28:51 compute-0 podman[294826]: 2025-11-25 08:28:51.297075921 +0000 UTC m=+0.177223300 container start b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:28:51 compute-0 neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b[294841]: [NOTICE]   (294845) : New worker (294847) forked
Nov 25 08:28:51 compute-0 neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b[294841]: [NOTICE]   (294845) : Loading success.
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.389 253542 DEBUG nova.compute.manager [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.420 253542 DEBUG nova.compute.manager [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.420 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.421 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.421 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.421 253542 DEBUG nova.compute.manager [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] No waiting events found dispatching network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:51 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.422 253542 WARNING nova.compute.manager [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received unexpected event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 for instance with vm_state active and task_state image_snapshot.
Nov 25 08:28:51 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.423 253542 DEBUG nova.compute.manager [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received event network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.423 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.423 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.424 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.424 253542 DEBUG nova.compute.manager [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Processing event network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.425 253542 DEBUG nova.compute.manager [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received event network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.425 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.425 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.426 253542 DEBUG oslo_concurrency.lockutils [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.426 253542 DEBUG nova.compute.manager [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] No waiting events found dispatching network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.426 253542 WARNING nova.compute.manager [req-85bf1f0e-c512-4044-ac7f-4814156d57a5 req-e3db0289-02d4-4cae-b757-c97947bb924e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received unexpected event network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 for instance with vm_state building and task_state spawning.
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.427 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.431 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059331.4312408, cf07e611-51eb-4bcf-8757-8f75d3807da6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.432 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] VM Resumed (Lifecycle Event)
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.433 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.439 253542 INFO nova.virt.libvirt.driver [-] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Instance spawned successfully.
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.440 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.456 253542 INFO nova.compute.manager [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] instance snapshotting
Nov 25 08:28:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.459 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.472 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.473 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:51 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.474 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.474 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.475 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.476 253542 DEBUG nova.virt.libvirt.driver [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.479 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.512 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.534 253542 DEBUG nova.storage.rbd_utils [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] creating snapshot(snap) on rbd image(86f2be0d-0948-4bc3-9e65-5ca5e45d1b2d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.563 253542 INFO nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Took 10.35 seconds to spawn the instance on the hypervisor.
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.564 253542 DEBUG nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.619 253542 INFO nova.compute.manager [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Took 11.92 seconds to build instance.
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.633 253542 DEBUG oslo_concurrency.lockutils [None req-d5ac8371-613d-488f-9c4b-164aad80e8a3 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.765 253542 INFO nova.virt.libvirt.driver [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Beginning live snapshot process
Nov 25 08:28:51 compute-0 nova_compute[253538]: 2025-11-25 08:28:51.921 253542 DEBUG nova.virt.libvirt.imagebackend [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:28:52 compute-0 nova_compute[253538]: 2025-11-25 08:28:52.171 253542 DEBUG nova.storage.rbd_utils [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(c40393c46a9146f59b3d7035ca3e9699) on rbd image(8b29fb31-718d-4926-bf4f-bae461ea70ef_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:28:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 25 08:28:52 compute-0 ceph-mon[75015]: pgmap v1332: 321 pgs: 321 active+clean; 181 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 139 op/s
Nov 25 08:28:52 compute-0 ceph-mon[75015]: osdmap e144: 3 total, 3 up, 3 in
Nov 25 08:28:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 25 08:28:52 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 196 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 725 KiB/s wr, 148 op/s
Nov 25 08:28:53 compute-0 ovn_controller[152859]: 2025-11-25T08:28:53Z|00228|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 08:28:53 compute-0 ovn_controller[152859]: 2025-11-25T08:28:53Z|00229|binding|INFO|Releasing lport ff896c66-8e2b-41d0-a738-217411538e37 from this chassis (sb_readonly=0)
Nov 25 08:28:53 compute-0 ovn_controller[152859]: 2025-11-25T08:28:53Z|00230|binding|INFO|Releasing lport ac244317-fa52-4a6a-92f4-98845a41804d from this chassis (sb_readonly=0)
Nov 25 08:28:53 compute-0 nova_compute[253538]: 2025-11-25 08:28:53.111 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:53 compute-0 NetworkManager[48915]: <info>  [1764059333.1134] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Nov 25 08:28:53 compute-0 NetworkManager[48915]: <info>  [1764059333.1141] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Nov 25 08:28:53 compute-0 nova_compute[253538]: 2025-11-25 08:28:53.114 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:53 compute-0 ovn_controller[152859]: 2025-11-25T08:28:53Z|00231|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 08:28:53 compute-0 ovn_controller[152859]: 2025-11-25T08:28:53Z|00232|binding|INFO|Releasing lport ff896c66-8e2b-41d0-a738-217411538e37 from this chassis (sb_readonly=0)
Nov 25 08:28:53 compute-0 ovn_controller[152859]: 2025-11-25T08:28:53Z|00233|binding|INFO|Releasing lport ac244317-fa52-4a6a-92f4-98845a41804d from this chassis (sb_readonly=0)
Nov 25 08:28:53 compute-0 nova_compute[253538]: 2025-11-25 08:28:53.126 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:28:53
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'images']
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:28:53 compute-0 nova_compute[253538]: 2025-11-25 08:28:53.670 253542 DEBUG nova.compute.manager [req-dd56b59c-5eb3-4b69-b32c-1d142b2e015f req-f57ec85d-8d62-4e28-8eb5-1e5b27c22d0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received event network-changed-d4493bab-df0a-4934-ab26-43dae0dbae72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:28:53 compute-0 nova_compute[253538]: 2025-11-25 08:28:53.671 253542 DEBUG nova.compute.manager [req-dd56b59c-5eb3-4b69-b32c-1d142b2e015f req-f57ec85d-8d62-4e28-8eb5-1e5b27c22d0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Refreshing instance network info cache due to event network-changed-d4493bab-df0a-4934-ab26-43dae0dbae72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:28:53 compute-0 nova_compute[253538]: 2025-11-25 08:28:53.671 253542 DEBUG oslo_concurrency.lockutils [req-dd56b59c-5eb3-4b69-b32c-1d142b2e015f req-f57ec85d-8d62-4e28-8eb5-1e5b27c22d0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:28:53 compute-0 nova_compute[253538]: 2025-11-25 08:28:53.671 253542 DEBUG oslo_concurrency.lockutils [req-dd56b59c-5eb3-4b69-b32c-1d142b2e015f req-f57ec85d-8d62-4e28-8eb5-1e5b27c22d0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:28:53 compute-0 nova_compute[253538]: 2025-11-25 08:28:53.671 253542 DEBUG nova.network.neutron [req-dd56b59c-5eb3-4b69-b32c-1d142b2e015f req-f57ec85d-8d62-4e28-8eb5-1e5b27c22d0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Refreshing network info cache for port d4493bab-df0a-4934-ab26-43dae0dbae72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:28:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:28:54 compute-0 nova_compute[253538]: 2025-11-25 08:28:54.869 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 227 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 9.5 MiB/s rd, 3.6 MiB/s wr, 345 op/s
Nov 25 08:28:55 compute-0 nova_compute[253538]: 2025-11-25 08:28:55.124 253542 DEBUG nova.network.neutron [req-dd56b59c-5eb3-4b69-b32c-1d142b2e015f req-f57ec85d-8d62-4e28-8eb5-1e5b27c22d0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Updated VIF entry in instance network info cache for port d4493bab-df0a-4934-ab26-43dae0dbae72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:28:55 compute-0 nova_compute[253538]: 2025-11-25 08:28:55.125 253542 DEBUG nova.network.neutron [req-dd56b59c-5eb3-4b69-b32c-1d142b2e015f req-f57ec85d-8d62-4e28-8eb5-1e5b27c22d0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Updating instance_info_cache with network_info: [{"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:28:55 compute-0 nova_compute[253538]: 2025-11-25 08:28:55.151 253542 DEBUG oslo_concurrency.lockutils [req-dd56b59c-5eb3-4b69-b32c-1d142b2e015f req-f57ec85d-8d62-4e28-8eb5-1e5b27c22d0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-cf07e611-51eb-4bcf-8757-8f75d3807da6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:28:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:28:55 compute-0 ceph-mon[75015]: osdmap e145: 3 total, 3 up, 3 in
Nov 25 08:28:55 compute-0 ceph-mon[75015]: pgmap v1335: 321 pgs: 321 active+clean; 196 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 725 KiB/s wr, 148 op/s
Nov 25 08:28:55 compute-0 nova_compute[253538]: 2025-11-25 08:28:55.455 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:28:56 compute-0 nova_compute[253538]: 2025-11-25 08:28:56.161 253542 DEBUG nova.storage.rbd_utils [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] cloning vms/8b29fb31-718d-4926-bf4f-bae461ea70ef_disk@c40393c46a9146f59b3d7035ca3e9699 to images/c4072411-f87d-45fb-92e8-02dc5884a35e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:28:56 compute-0 ceph-mon[75015]: pgmap v1336: 321 pgs: 321 active+clean; 227 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 9.5 MiB/s rd, 3.6 MiB/s wr, 345 op/s
Nov 25 08:28:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 227 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 3.2 MiB/s wr, 345 op/s
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 86f2be0d-0948-4bc3-9e65-5ca5e45d1b2d could not be found.
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 86f2be0d-0948-4bc3-9e65-5ca5e45d1b2d
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver 
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver 
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 86f2be0d-0948-4bc3-9e65-5ca5e45d1b2d could not be found.
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.479 253542 ERROR nova.virt.libvirt.driver 
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.874 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059322.8733459, 4c934302-d7cd-4826-835e-cab6dba97e3a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.874 253542 INFO nova.compute.manager [-] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] VM Stopped (Lifecycle Event)
Nov 25 08:28:57 compute-0 nova_compute[253538]: 2025-11-25 08:28:57.898 253542 DEBUG nova.compute.manager [None req-75a64359-8c63-4d8d-8411-66c536a9a120 - - - - - -] [instance: 4c934302-d7cd-4826-835e-cab6dba97e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:28:58 compute-0 nova_compute[253538]: 2025-11-25 08:28:58.314 253542 DEBUG nova.storage.rbd_utils [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] removing snapshot(snap) on rbd image(86f2be0d-0948-4bc3-9e65-5ca5e45d1b2d) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:28:58 compute-0 ceph-mon[75015]: pgmap v1337: 321 pgs: 321 active+clean; 227 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 3.2 MiB/s wr, 345 op/s
Nov 25 08:28:58 compute-0 nova_compute[253538]: 2025-11-25 08:28:58.612 253542 DEBUG nova.storage.rbd_utils [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] flattening images/c4072411-f87d-45fb-92e8-02dc5884a35e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:28:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 230 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 8.5 MiB/s rd, 3.0 MiB/s wr, 315 op/s
Nov 25 08:28:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 25 08:28:59 compute-0 ceph-mon[75015]: pgmap v1338: 321 pgs: 321 active+clean; 230 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 8.5 MiB/s rd, 3.0 MiB/s wr, 315 op/s
Nov 25 08:28:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 25 08:28:59 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 25 08:28:59 compute-0 nova_compute[253538]: 2025-11-25 08:28:59.827 253542 DEBUG nova.storage.rbd_utils [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] removing snapshot(c40393c46a9146f59b3d7035ca3e9699) on rbd image(8b29fb31-718d-4926-bf4f-bae461ea70ef_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:28:59 compute-0 nova_compute[253538]: 2025-11-25 08:28:59.871 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:00 compute-0 nova_compute[253538]: 2025-11-25 08:29:00.054 253542 WARNING nova.compute.manager [None req-aaed5288-95fd-4850-a790-cb3482a467a9 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Image not found during snapshot: nova.exception.ImageNotFound: Image 86f2be0d-0948-4bc3-9e65-5ca5e45d1b2d could not be found.
Nov 25 08:29:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:29:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 25 08:29:00 compute-0 ovn_controller[152859]: 2025-11-25T08:29:00Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:34:c6:fc 10.100.0.8
Nov 25 08:29:00 compute-0 ovn_controller[152859]: 2025-11-25T08:29:00Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:34:c6:fc 10.100.0.8
Nov 25 08:29:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 25 08:29:00 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 25 08:29:00 compute-0 nova_compute[253538]: 2025-11-25 08:29:00.459 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:00 compute-0 nova_compute[253538]: 2025-11-25 08:29:00.667 253542 DEBUG nova.storage.rbd_utils [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(snap) on rbd image(c4072411-f87d-45fb-92e8-02dc5884a35e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:29:00 compute-0 ceph-mon[75015]: osdmap e146: 3 total, 3 up, 3 in
Nov 25 08:29:00 compute-0 ceph-mon[75015]: osdmap e147: 3 total, 3 up, 3 in
Nov 25 08:29:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 245 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 4.1 MiB/s wr, 285 op/s
Nov 25 08:29:01 compute-0 nova_compute[253538]: 2025-11-25 08:29:01.849 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:01 compute-0 nova_compute[253538]: 2025-11-25 08:29:01.849 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:01 compute-0 nova_compute[253538]: 2025-11-25 08:29:01.850 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:01 compute-0 nova_compute[253538]: 2025-11-25 08:29:01.850 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:01 compute-0 nova_compute[253538]: 2025-11-25 08:29:01.850 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:01 compute-0 nova_compute[253538]: 2025-11-25 08:29:01.851 253542 INFO nova.compute.manager [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Terminating instance
Nov 25 08:29:01 compute-0 nova_compute[253538]: 2025-11-25 08:29:01.852 253542 DEBUG nova.compute.manager [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:29:01 compute-0 podman[295058]: 2025-11-25 08:29:01.860953991 +0000 UTC m=+0.100759396 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 08:29:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 25 08:29:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 25 08:29:02 compute-0 ceph-mon[75015]: pgmap v1341: 321 pgs: 321 active+clean; 245 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 4.1 MiB/s wr, 285 op/s
Nov 25 08:29:02 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 25 08:29:02 compute-0 kernel: tap6bcb5ada-83 (unregistering): left promiscuous mode
Nov 25 08:29:02 compute-0 NetworkManager[48915]: <info>  [1764059342.7728] device (tap6bcb5ada-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:29:02 compute-0 ovn_controller[152859]: 2025-11-25T08:29:02Z|00234|binding|INFO|Releasing lport 6bcb5ada-83f7-419f-9909-98ba6f37630c from this chassis (sb_readonly=0)
Nov 25 08:29:02 compute-0 ovn_controller[152859]: 2025-11-25T08:29:02Z|00235|binding|INFO|Setting lport 6bcb5ada-83f7-419f-9909-98ba6f37630c down in Southbound
Nov 25 08:29:02 compute-0 ovn_controller[152859]: 2025-11-25T08:29:02Z|00236|binding|INFO|Removing iface tap6bcb5ada-83 ovn-installed in OVS
Nov 25 08:29:02 compute-0 nova_compute[253538]: 2025-11-25 08:29:02.793 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:02.803 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:c6:fc 10.100.0.8'], port_security=['fa:16:3e:34:c6:fc 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5b1125d171240e2895276836b4fd6d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bd498cc2-3a1f-4313-a3a9-7b1fa737f1eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98666ab-8fd3-4256-b0ce-536c54c0072e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6bcb5ada-83f7-419f-9909-98ba6f37630c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:29:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:02.804 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6bcb5ada-83f7-419f-9909-98ba6f37630c in datapath 52a7668b-f0ac-4b07-a778-1ee89adbf076 unbound from our chassis
Nov 25 08:29:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:02.805 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52a7668b-f0ac-4b07-a778-1ee89adbf076, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:29:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:02.806 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[80242c66-d61d-4b5b-a9c5-6720eca7812b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:02.807 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 namespace which is not needed anymore
Nov 25 08:29:02 compute-0 nova_compute[253538]: 2025-11-25 08:29:02.811 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:02 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Deactivated successfully.
Nov 25 08:29:02 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Consumed 13.006s CPU time.
Nov 25 08:29:02 compute-0 systemd-machined[215790]: Machine qemu-37-instance-00000020 terminated.
Nov 25 08:29:02 compute-0 nova_compute[253538]: 2025-11-25 08:29:02.888 253542 INFO nova.virt.libvirt.driver [-] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Instance destroyed successfully.
Nov 25 08:29:02 compute-0 nova_compute[253538]: 2025-11-25 08:29:02.889 253542 DEBUG nova.objects.instance [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'resources' on Instance uuid c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:02 compute-0 nova_compute[253538]: 2025-11-25 08:29:02.905 253542 DEBUG nova.virt.libvirt.vif [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:28:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-492840202',display_name='tempest-ImagesOneServerNegativeTestJSON-server-492840202',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-492840202',id=32,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-ftgun0ke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:29:00Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:29:02 compute-0 nova_compute[253538]: 2025-11-25 08:29:02.906 253542 DEBUG nova.network.os_vif_util [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "address": "fa:16:3e:34:c6:fc", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bcb5ada-83", "ovs_interfaceid": "6bcb5ada-83f7-419f-9909-98ba6f37630c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:29:02 compute-0 nova_compute[253538]: 2025-11-25 08:29:02.907 253542 DEBUG nova.network.os_vif_util [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:c6:fc,bridge_name='br-int',has_traffic_filtering=True,id=6bcb5ada-83f7-419f-9909-98ba6f37630c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bcb5ada-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:29:02 compute-0 nova_compute[253538]: 2025-11-25 08:29:02.907 253542 DEBUG os_vif [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:c6:fc,bridge_name='br-int',has_traffic_filtering=True,id=6bcb5ada-83f7-419f-9909-98ba6f37630c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bcb5ada-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:29:02 compute-0 nova_compute[253538]: 2025-11-25 08:29:02.909 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:02 compute-0 nova_compute[253538]: 2025-11-25 08:29:02.910 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6bcb5ada-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:02 compute-0 nova_compute[253538]: 2025-11-25 08:29:02.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:02 compute-0 nova_compute[253538]: 2025-11-25 08:29:02.915 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:29:02 compute-0 nova_compute[253538]: 2025-11-25 08:29:02.919 253542 INFO os_vif [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:c6:fc,bridge_name='br-int',has_traffic_filtering=True,id=6bcb5ada-83f7-419f-9909-98ba6f37630c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bcb5ada-83')
Nov 25 08:29:02 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[293442]: [NOTICE]   (293466) : haproxy version is 2.8.14-c23fe91
Nov 25 08:29:02 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[293442]: [NOTICE]   (293466) : path to executable is /usr/sbin/haproxy
Nov 25 08:29:02 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[293442]: [ALERT]    (293466) : Current worker (293468) exited with code 143 (Terminated)
Nov 25 08:29:02 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[293442]: [WARNING]  (293466) : All workers exited. Exiting... (0)
Nov 25 08:29:02 compute-0 systemd[1]: libpod-528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e.scope: Deactivated successfully.
Nov 25 08:29:02 compute-0 podman[295108]: 2025-11-25 08:29:02.987725064 +0000 UTC m=+0.073472632 container died 528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:29:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e-userdata-shm.mount: Deactivated successfully.
Nov 25 08:29:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-7db1af69cade0c066d85c02aae054723464bc6f1508df360470a31f2e5094b6d-merged.mount: Deactivated successfully.
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 267 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.6 MiB/s wr, 164 op/s
Nov 25 08:29:03 compute-0 podman[295108]: 2025-11-25 08:29:03.109510769 +0000 UTC m=+0.195258327 container cleanup 528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 08:29:03 compute-0 systemd[1]: libpod-conmon-528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e.scope: Deactivated successfully.
Nov 25 08:29:03 compute-0 nova_compute[253538]: 2025-11-25 08:29:03.123 253542 DEBUG nova.compute.manager [req-6bec4f1d-9e5a-41c2-b8b4-f3f73b5e020a req-60ec1b9e-2a46-498b-9c16-8203e91b48b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received event network-vif-unplugged-6bcb5ada-83f7-419f-9909-98ba6f37630c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:03 compute-0 nova_compute[253538]: 2025-11-25 08:29:03.124 253542 DEBUG oslo_concurrency.lockutils [req-6bec4f1d-9e5a-41c2-b8b4-f3f73b5e020a req-60ec1b9e-2a46-498b-9c16-8203e91b48b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:03 compute-0 nova_compute[253538]: 2025-11-25 08:29:03.124 253542 DEBUG oslo_concurrency.lockutils [req-6bec4f1d-9e5a-41c2-b8b4-f3f73b5e020a req-60ec1b9e-2a46-498b-9c16-8203e91b48b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:03 compute-0 nova_compute[253538]: 2025-11-25 08:29:03.124 253542 DEBUG oslo_concurrency.lockutils [req-6bec4f1d-9e5a-41c2-b8b4-f3f73b5e020a req-60ec1b9e-2a46-498b-9c16-8203e91b48b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:03 compute-0 nova_compute[253538]: 2025-11-25 08:29:03.125 253542 DEBUG nova.compute.manager [req-6bec4f1d-9e5a-41c2-b8b4-f3f73b5e020a req-60ec1b9e-2a46-498b-9c16-8203e91b48b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] No waiting events found dispatching network-vif-unplugged-6bcb5ada-83f7-419f-9909-98ba6f37630c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:29:03 compute-0 nova_compute[253538]: 2025-11-25 08:29:03.125 253542 DEBUG nova.compute.manager [req-6bec4f1d-9e5a-41c2-b8b4-f3f73b5e020a req-60ec1b9e-2a46-498b-9c16-8203e91b48b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received event network-vif-unplugged-6bcb5ada-83f7-419f-9909-98ba6f37630c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:29:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 25 08:29:03 compute-0 ceph-mon[75015]: osdmap e148: 3 total, 3 up, 3 in
Nov 25 08:29:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 25 08:29:03 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 25 08:29:03 compute-0 podman[295159]: 2025-11-25 08:29:03.290560574 +0000 UTC m=+0.155319145 container remove 528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:29:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.300 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b5757ca2-4c96-4b87-9fad-18776b253eb6]: (4, ('Tue Nov 25 08:29:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 (528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e)\n528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e\nTue Nov 25 08:29:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 (528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e)\n528056e06a9a8fa83064f1b5b0c05273e19c81181eb5b70492a449cf2e44017e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.302 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d4124f1a-818c-4a03-9744-3e8955a02fa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.304 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52a7668b-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:03 compute-0 nova_compute[253538]: 2025-11-25 08:29:03.307 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:03 compute-0 kernel: tap52a7668b-f0: left promiscuous mode
Nov 25 08:29:03 compute-0 nova_compute[253538]: 2025-11-25 08:29:03.310 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.314 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[67b4f865-49af-4959-8ec8-c0f8ceef058b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.327 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8c042106-bc59-4317-95b8-53e10741a8ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.329 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f1330b-0be4-43e8-9127-2cc29101fa15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:03 compute-0 nova_compute[253538]: 2025-11-25 08:29:03.332 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.358 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a78b62d6-9acc-4650-ba0f-42bb22a5cfa1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467576, 'reachable_time': 27034, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295173, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:03 compute-0 systemd[1]: run-netns-ovnmeta\x2d52a7668b\x2df0ac\x2d4b07\x2da778\x2d1ee89adbf076.mount: Deactivated successfully.
Nov 25 08:29:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.362 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:29:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:03.362 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[05ea67ee-cd50-4433-b969-194536adb5b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014835712134531483 of space, bias 1.0, pg target 0.4450713640359445 quantized to 32 (current 32)
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001186539017223416 of space, bias 1.0, pg target 0.3559617051670248 quantized to 32 (current 32)
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:29:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:29:04 compute-0 ceph-mon[75015]: pgmap v1343: 321 pgs: 321 active+clean; 267 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.6 MiB/s wr, 164 op/s
Nov 25 08:29:04 compute-0 ceph-mon[75015]: osdmap e149: 3 total, 3 up, 3 in
Nov 25 08:29:04 compute-0 ovn_controller[152859]: 2025-11-25T08:29:04Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:9f:ca 10.100.0.3
Nov 25 08:29:04 compute-0 ovn_controller[152859]: 2025-11-25T08:29:04Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:9f:ca 10.100.0.3
Nov 25 08:29:04 compute-0 nova_compute[253538]: 2025-11-25 08:29:04.521 253542 INFO nova.virt.libvirt.driver [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Snapshot image upload complete
Nov 25 08:29:04 compute-0 nova_compute[253538]: 2025-11-25 08:29:04.522 253542 INFO nova.compute.manager [None req-0d4386b8-1e51-441e-9ef0-e32bc45c6e4e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Took 13.06 seconds to snapshot the instance on the hypervisor.
Nov 25 08:29:04 compute-0 ovn_controller[152859]: 2025-11-25T08:29:04Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fd:03:96 10.100.0.12
Nov 25 08:29:04 compute-0 ovn_controller[152859]: 2025-11-25T08:29:04Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fd:03:96 10.100.0.12
Nov 25 08:29:04 compute-0 nova_compute[253538]: 2025-11-25 08:29:04.874 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 262 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 13 MiB/s wr, 377 op/s
Nov 25 08:29:05 compute-0 nova_compute[253538]: 2025-11-25 08:29:05.185 253542 DEBUG nova.compute.manager [req-46b80c08-5c8a-429d-9120-1abd2fafef2c req-03656427-4c8f-4066-b277-46e8bf7624df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received event network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:05 compute-0 nova_compute[253538]: 2025-11-25 08:29:05.185 253542 DEBUG oslo_concurrency.lockutils [req-46b80c08-5c8a-429d-9120-1abd2fafef2c req-03656427-4c8f-4066-b277-46e8bf7624df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:05 compute-0 nova_compute[253538]: 2025-11-25 08:29:05.185 253542 DEBUG oslo_concurrency.lockutils [req-46b80c08-5c8a-429d-9120-1abd2fafef2c req-03656427-4c8f-4066-b277-46e8bf7624df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:05 compute-0 nova_compute[253538]: 2025-11-25 08:29:05.186 253542 DEBUG oslo_concurrency.lockutils [req-46b80c08-5c8a-429d-9120-1abd2fafef2c req-03656427-4c8f-4066-b277-46e8bf7624df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:05 compute-0 nova_compute[253538]: 2025-11-25 08:29:05.186 253542 DEBUG nova.compute.manager [req-46b80c08-5c8a-429d-9120-1abd2fafef2c req-03656427-4c8f-4066-b277-46e8bf7624df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] No waiting events found dispatching network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:29:05 compute-0 nova_compute[253538]: 2025-11-25 08:29:05.186 253542 WARNING nova.compute.manager [req-46b80c08-5c8a-429d-9120-1abd2fafef2c req-03656427-4c8f-4066-b277-46e8bf7624df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received unexpected event network-vif-plugged-6bcb5ada-83f7-419f-9909-98ba6f37630c for instance with vm_state active and task_state deleting.
Nov 25 08:29:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:29:05 compute-0 podman[295175]: 2025-11-25 08:29:05.811922153 +0000 UTC m=+0.059929868 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:29:06 compute-0 nova_compute[253538]: 2025-11-25 08:29:06.149 253542 INFO nova.virt.libvirt.driver [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Deleting instance files /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_del
Nov 25 08:29:06 compute-0 nova_compute[253538]: 2025-11-25 08:29:06.150 253542 INFO nova.virt.libvirt.driver [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Deletion of /var/lib/nova/instances/c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e_del complete
Nov 25 08:29:06 compute-0 nova_compute[253538]: 2025-11-25 08:29:06.216 253542 INFO nova.compute.manager [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Took 4.36 seconds to destroy the instance on the hypervisor.
Nov 25 08:29:06 compute-0 nova_compute[253538]: 2025-11-25 08:29:06.216 253542 DEBUG oslo.service.loopingcall [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:29:06 compute-0 nova_compute[253538]: 2025-11-25 08:29:06.217 253542 DEBUG nova.compute.manager [-] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:29:06 compute-0 nova_compute[253538]: 2025-11-25 08:29:06.217 253542 DEBUG nova.network.neutron [-] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:29:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 25 08:29:06 compute-0 ceph-mon[75015]: pgmap v1345: 321 pgs: 321 active+clean; 262 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 13 MiB/s wr, 377 op/s
Nov 25 08:29:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 25 08:29:06 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 25 08:29:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 245 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 13 MiB/s wr, 471 op/s
Nov 25 08:29:07 compute-0 nova_compute[253538]: 2025-11-25 08:29:07.266 253542 DEBUG nova.network.neutron [-] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:07 compute-0 nova_compute[253538]: 2025-11-25 08:29:07.281 253542 INFO nova.compute.manager [-] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Took 1.06 seconds to deallocate network for instance.
Nov 25 08:29:07 compute-0 nova_compute[253538]: 2025-11-25 08:29:07.318 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:07 compute-0 nova_compute[253538]: 2025-11-25 08:29:07.318 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:07 compute-0 nova_compute[253538]: 2025-11-25 08:29:07.396 253542 DEBUG oslo_concurrency.processutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:07 compute-0 nova_compute[253538]: 2025-11-25 08:29:07.427 253542 DEBUG nova.compute.manager [req-57121eb1-ca13-44e7-bae4-13009971df6c req-433d6ade-0cbb-4226-8954-106d16bf7b2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Received event network-vif-deleted-6bcb5ada-83f7-419f-9909-98ba6f37630c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:07 compute-0 ceph-mon[75015]: osdmap e150: 3 total, 3 up, 3 in
Nov 25 08:29:07 compute-0 nova_compute[253538]: 2025-11-25 08:29:07.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:29:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1514227300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.020 253542 DEBUG oslo_concurrency.processutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.027 253542 DEBUG nova.compute.provider_tree [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.040 253542 DEBUG nova.scheduler.client.report [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.064 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.089 253542 INFO nova.scheduler.client.report [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Deleted allocations for instance c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.147 253542 DEBUG oslo_concurrency.lockutils [None req-cf7082c2-f65a-40a6-ae91-ce7e371516ad 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.293 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "0067149a-8f99-4257-af2a-fd9adcc41719" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.293 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.309 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.362 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.362 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.368 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.368 253542 INFO nova.compute.claims [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.480 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:29:08 compute-0 ceph-mon[75015]: pgmap v1347: 321 pgs: 321 active+clean; 245 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 13 MiB/s wr, 471 op/s
Nov 25 08:29:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1514227300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.765 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "8400a9a9-bd7a-434b-a11b-6db7e12a4e18" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.766 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "8400a9a9-bd7a-434b-a11b-6db7e12a4e18" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.779 253542 DEBUG nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.846 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:29:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3588988723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.956 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.961 253542 DEBUG nova.compute.provider_tree [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.975 253542 DEBUG nova.scheduler.client.report [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.994 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.995 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:29:08 compute-0 nova_compute[253538]: 2025-11-25 08:29:08.998 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.004 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.004 253542 INFO nova.compute.claims [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.067 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.068 253542 DEBUG nova.network.neutron [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.086 253542 INFO nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.100 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:29:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 237 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 8.1 MiB/s wr, 372 op/s
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.189 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.225 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.227 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.228 253542 INFO nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Creating image(s)
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.254 253542 DEBUG nova.storage.rbd_utils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0067149a-8f99-4257-af2a-fd9adcc41719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.280 253542 DEBUG nova.storage.rbd_utils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0067149a-8f99-4257-af2a-fd9adcc41719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.304 253542 DEBUG nova.storage.rbd_utils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0067149a-8f99-4257-af2a-fd9adcc41719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.309 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "855ebb89fbe713448ddff4ee3e0e4fec7ce78acc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.310 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "855ebb89fbe713448ddff4ee3e0e4fec7ce78acc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.506 253542 DEBUG nova.virt.libvirt.imagebackend [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image locations are: [{'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/c4072411-f87d-45fb-92e8-02dc5884a35e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/c4072411-f87d-45fb-92e8-02dc5884a35e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.557 253542 DEBUG nova.virt.libvirt.imagebackend [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Selected location: {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/c4072411-f87d-45fb-92e8-02dc5884a35e/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.557 253542 DEBUG nova.storage.rbd_utils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] cloning images/c4072411-f87d-45fb-92e8-02dc5884a35e@snap to None/0067149a-8f99-4257-af2a-fd9adcc41719_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:29:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3588988723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:09 compute-0 ceph-mon[75015]: pgmap v1348: 321 pgs: 321 active+clean; 237 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 8.1 MiB/s wr, 372 op/s
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:29:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4263290914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.948 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.759s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.953 253542 DEBUG nova.compute.provider_tree [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.969 253542 DEBUG nova.scheduler.client.report [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.993 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:09 compute-0 nova_compute[253538]: 2025-11-25 08:29:09.994 253542 DEBUG nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.048 253542 DEBUG nova.policy [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38fa175fb699405c9a05d7c28f994ebc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.056 253542 DEBUG nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.057 253542 DEBUG nova.network.neutron [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.078 253542 INFO nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.101 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "855ebb89fbe713448ddff4ee3e0e4fec7ce78acc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.131 253542 DEBUG nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.215 253542 DEBUG nova.objects.instance [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'migration_context' on Instance uuid 0067149a-8f99-4257-af2a-fd9adcc41719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.228 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.228 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Ensure instance console log exists: /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.229 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.229 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.229 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.257 253542 DEBUG nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.259 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.259 253542 INFO nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Creating image(s)
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.278 253542 DEBUG nova.storage.rbd_utils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.299 253542 DEBUG nova.storage.rbd_utils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:29:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 25 08:29:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 25 08:29:10 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.353 253542 DEBUG nova.storage.rbd_utils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.356 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.390 253542 DEBUG nova.network.neutron [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.391 253542 DEBUG nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.421 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.421 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.422 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.422 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.439 253542 DEBUG nova.storage.rbd_utils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.442 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.586 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.587 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 08:29:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4263290914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:10 compute-0 ceph-mon[75015]: osdmap e151: 3 total, 3 up, 3 in
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.703 253542 DEBUG nova.network.neutron [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Successfully created port: f3dedfca-04a0-44af-bca1-33a95c9804fa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.757 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.758 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.758 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.758 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8b29fb31-718d-4926-bf4f-bae461ea70ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:10 compute-0 podman[295533]: 2025-11-25 08:29:10.864046511 +0000 UTC m=+0.098320229 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.970 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "225f80e2-9e66-46fb-b77d-9a54fa8a2a41" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.971 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "225f80e2-9e66-46fb-b77d-9a54fa8a2a41" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:10 compute-0 nova_compute[253538]: 2025-11-25 08:29:10.986 253542 DEBUG nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.063 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.064 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.071 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.071 253542 INFO nova.compute.claims [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.081 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 246 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 7.3 MiB/s wr, 356 op/s
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.157 253542 DEBUG nova.storage.rbd_utils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] resizing rbd image 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.353 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.547 253542 DEBUG nova.objects.instance [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lazy-loading 'migration_context' on Instance uuid 8400a9a9-bd7a-434b-a11b-6db7e12a4e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.562 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.562 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Ensure instance console log exists: /var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.563 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.563 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.563 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.565 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.566 253542 DEBUG nova.network.neutron [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Successfully updated port: f3dedfca-04a0-44af-bca1-33a95c9804fa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.571 253542 WARNING nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.576 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "refresh_cache-0067149a-8f99-4257-af2a-fd9adcc41719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.576 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquired lock "refresh_cache-0067149a-8f99-4257-af2a-fd9adcc41719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.577 253542 DEBUG nova.network.neutron [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.578 253542 DEBUG nova.virt.libvirt.host [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.579 253542 DEBUG nova.virt.libvirt.host [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.583 253542 DEBUG nova.virt.libvirt.host [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.584 253542 DEBUG nova.virt.libvirt.host [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.584 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.584 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.585 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.585 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.585 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.585 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.586 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.586 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.586 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.586 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.587 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.587 253542 DEBUG nova.virt.hardware [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.589 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:11 compute-0 ceph-mon[75015]: pgmap v1350: 321 pgs: 321 active+clean; 246 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 7.3 MiB/s wr, 356 op/s
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.695478) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059351695509, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2310, "num_deletes": 267, "total_data_size": 3387415, "memory_usage": 3434448, "flush_reason": "Manual Compaction"}
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.699 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "cf07e611-51eb-4bcf-8757-8f75d3807da6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.700 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.700 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.700 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.701 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.702 253542 INFO nova.compute.manager [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Terminating instance
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.703 253542 DEBUG nova.compute.manager [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059351721591, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3322828, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25678, "largest_seqno": 27987, "table_properties": {"data_size": 3312245, "index_size": 6821, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22622, "raw_average_key_size": 21, "raw_value_size": 3290856, "raw_average_value_size": 3081, "num_data_blocks": 297, "num_entries": 1068, "num_filter_entries": 1068, "num_deletions": 267, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764059172, "oldest_key_time": 1764059172, "file_creation_time": 1764059351, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 26155 microseconds, and 6398 cpu microseconds.
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.721631) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3322828 bytes OK
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.721650) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.733144) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.733168) EVENT_LOG_v1 {"time_micros": 1764059351733161, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.733189) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3377458, prev total WAL file size 3377458, number of live WAL files 2.
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.733968) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3244KB)], [59(7132KB)]
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059351734000, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10626072, "oldest_snapshot_seqno": -1}
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.757 253542 DEBUG nova.compute.manager [req-18bf05de-4be4-4677-9f2b-6d6e4615819a req-141cd7a5-5ca6-4986-81cb-7da71f3f603d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received event network-changed-f3dedfca-04a0-44af-bca1-33a95c9804fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.758 253542 DEBUG nova.compute.manager [req-18bf05de-4be4-4677-9f2b-6d6e4615819a req-141cd7a5-5ca6-4986-81cb-7da71f3f603d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Refreshing instance network info cache due to event network-changed-f3dedfca-04a0-44af-bca1-33a95c9804fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.758 253542 DEBUG oslo_concurrency.lockutils [req-18bf05de-4be4-4677-9f2b-6d6e4615819a req-141cd7a5-5ca6-4986-81cb-7da71f3f603d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0067149a-8f99-4257-af2a-fd9adcc41719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.767 253542 DEBUG nova.network.neutron [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:29:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:29:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1306774249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.821 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.826 253542 DEBUG nova.compute.provider_tree [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.838 253542 DEBUG nova.scheduler.client.report [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5304 keys, 8901387 bytes, temperature: kUnknown
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059351848001, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8901387, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8864454, "index_size": 22551, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 132222, "raw_average_key_size": 24, "raw_value_size": 8767454, "raw_average_value_size": 1652, "num_data_blocks": 930, "num_entries": 5304, "num_filter_entries": 5304, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764059351, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.848284) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8901387 bytes
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.849802) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.1 rd, 78.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.0 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5840, records dropped: 536 output_compression: NoCompression
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.850083) EVENT_LOG_v1 {"time_micros": 1764059351850072, "job": 32, "event": "compaction_finished", "compaction_time_micros": 114079, "compaction_time_cpu_micros": 18263, "output_level": 6, "num_output_files": 1, "total_output_size": 8901387, "num_input_records": 5840, "num_output_records": 5304, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059351850859, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059351852081, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.733897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.852118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.852141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.852143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.852145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:29:11 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:29:11.852147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.857 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.858 253542 DEBUG nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:29:11 compute-0 kernel: tapd4493bab-df (unregistering): left promiscuous mode
Nov 25 08:29:11 compute-0 NetworkManager[48915]: <info>  [1764059351.9028] device (tapd4493bab-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:29:11 compute-0 ovn_controller[152859]: 2025-11-25T08:29:11Z|00237|binding|INFO|Releasing lport d4493bab-df0a-4934-ab26-43dae0dbae72 from this chassis (sb_readonly=0)
Nov 25 08:29:11 compute-0 ovn_controller[152859]: 2025-11-25T08:29:11Z|00238|binding|INFO|Setting lport d4493bab-df0a-4934-ab26-43dae0dbae72 down in Southbound
Nov 25 08:29:11 compute-0 ovn_controller[152859]: 2025-11-25T08:29:11Z|00239|binding|INFO|Removing iface tapd4493bab-df ovn-installed in OVS
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.918 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.921 253542 DEBUG nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.922 253542 DEBUG nova.network.neutron [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:29:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:11.922 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:9f:ca 10.100.0.3'], port_security=['fa:16:3e:6b:9f:ca 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf07e611-51eb-4bcf-8757-8f75d3807da6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3eeb3245-b22f-4899-9ec0-084ea5f63b6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17d31dbb1e4542daaa43d2fda87e18ad', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5ca9dda1-b2ea-4f89-8fc2-d2049ead3ade', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d85b3127-2363-4567-943e-4e79235e055a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d4493bab-df0a-4934-ab26-43dae0dbae72) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:29:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:11.926 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d4493bab-df0a-4934-ab26-43dae0dbae72 in datapath 3eeb3245-b22f-4899-9ec0-084ea5f63b6b unbound from our chassis
Nov 25 08:29:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:11.928 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3eeb3245-b22f-4899-9ec0-084ea5f63b6b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:29:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:11.930 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5add2573-b93d-47db-9e0e-9f2873978d2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:11.931 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b namespace which is not needed anymore
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.941 253542 INFO nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.948 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:11 compute-0 nova_compute[253538]: 2025-11-25 08:29:11.961 253542 DEBUG nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:29:11 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000022.scope: Deactivated successfully.
Nov 25 08:29:11 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000022.scope: Consumed 13.015s CPU time.
Nov 25 08:29:11 compute-0 systemd-machined[215790]: Machine qemu-39-instance-00000022 terminated.
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.038 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Updating instance_info_cache with network_info: [{"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:29:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/181897552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.045 253542 DEBUG nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.047 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.047 253542 INFO nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Creating image(s)
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.073 253542 DEBUG nova.storage.rbd_utils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:12 compute-0 neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b[294841]: [NOTICE]   (294845) : haproxy version is 2.8.14-c23fe91
Nov 25 08:29:12 compute-0 neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b[294841]: [NOTICE]   (294845) : path to executable is /usr/sbin/haproxy
Nov 25 08:29:12 compute-0 neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b[294841]: [ALERT]    (294845) : Current worker (294847) exited with code 143 (Terminated)
Nov 25 08:29:12 compute-0 neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b[294841]: [WARNING]  (294845) : All workers exited. Exiting... (0)
Nov 25 08:29:12 compute-0 systemd[1]: libpod-b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162.scope: Deactivated successfully.
Nov 25 08:29:12 compute-0 podman[295698]: 2025-11-25 08:29:12.105205516 +0000 UTC m=+0.048966094 container died b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.106 253542 DEBUG nova.storage.rbd_utils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.142 253542 DEBUG nova.storage.rbd_utils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.148 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.187 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.196 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-8b29fb31-718d-4926-bf4f-bae461ea70ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.197 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.223 253542 DEBUG nova.storage.rbd_utils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.228 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.262 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.265 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.265 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.266 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.267 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-609605f94689c92905145955c14d87bafa567f0f1b1543ae9778cd645791164d-merged.mount: Deactivated successfully.
Nov 25 08:29:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162-userdata-shm.mount: Deactivated successfully.
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.294 253542 DEBUG nova.storage.rbd_utils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.298 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.329 253542 INFO nova.virt.libvirt.driver [-] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Instance destroyed successfully.
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.330 253542 DEBUG nova.objects.instance [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lazy-loading 'resources' on Instance uuid cf07e611-51eb-4bcf-8757-8f75d3807da6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.345 253542 DEBUG nova.virt.libvirt.vif [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:28:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-159347704',display_name='tempest-ServersTestJSON-server-159347704',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-159347704',id=34,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEJTnavPSMTdJ98k7GbPudzwCAZWvOVoss8PE9qNwjiCue78AnUJTbduASU9tXAUM03eX8VLrSKKQxmPEVUcAUgD9baA3BJYk4n2P01dqgil022Gs27o2zUO7uKTgrjs9Q==',key_name='tempest-keypair-1656658254',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='17d31dbb1e4542daaa43d2fda87e18ad',ramdisk_id='',reservation_id='r-6gyhk8p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1586688039',owner_user_name='tempest-ServersTestJSON-1586688039-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:28:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9a96a98d6bb448aab904a8763d3675ec',uuid=cf07e611-51eb-4bcf-8757-8f75d3807da6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.346 253542 DEBUG nova.network.os_vif_util [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Converting VIF {"id": "d4493bab-df0a-4934-ab26-43dae0dbae72", "address": "fa:16:3e:6b:9f:ca", "network": {"id": "3eeb3245-b22f-4899-9ec0-084ea5f63b6b", "bridge": "br-int", "label": "tempest-ServersTestJSON-406706004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17d31dbb1e4542daaa43d2fda87e18ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4493bab-df", "ovs_interfaceid": "d4493bab-df0a-4934-ab26-43dae0dbae72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.347 253542 DEBUG nova.network.os_vif_util [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6b:9f:ca,bridge_name='br-int',has_traffic_filtering=True,id=d4493bab-df0a-4934-ab26-43dae0dbae72,network=Network(3eeb3245-b22f-4899-9ec0-084ea5f63b6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4493bab-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.347 253542 DEBUG os_vif [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:9f:ca,bridge_name='br-int',has_traffic_filtering=True,id=d4493bab-df0a-4934-ab26-43dae0dbae72,network=Network(3eeb3245-b22f-4899-9ec0-084ea5f63b6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4493bab-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.350 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.351 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4493bab-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.357 253542 INFO os_vif [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:9f:ca,bridge_name='br-int',has_traffic_filtering=True,id=d4493bab-df0a-4934-ab26-43dae0dbae72,network=Network(3eeb3245-b22f-4899-9ec0-084ea5f63b6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4493bab-df')
Nov 25 08:29:12 compute-0 podman[295698]: 2025-11-25 08:29:12.457856963 +0000 UTC m=+0.401617531 container cleanup b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:29:12 compute-0 systemd[1]: libpod-conmon-b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162.scope: Deactivated successfully.
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:29:12 compute-0 podman[295885]: 2025-11-25 08:29:12.657607455 +0000 UTC m=+0.170247877 container remove b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:29:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.665 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0bdfc806-c9da-4a81-b814-cadb70f385ee]: (4, ('Tue Nov 25 08:29:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b (b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162)\nb4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162\nTue Nov 25 08:29:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b (b4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162)\nb4f1371a2efedd0cc183639779022ae9dfbcbde79cef4e481238a42d80498162\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.669 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4440d0-398f-490f-9bfe-a5dd4e34b6f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.670 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3eeb3245-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:12 compute-0 kernel: tap3eeb3245-b0: left promiscuous mode
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.672 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.694 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.697 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[52970f9a-4d28-4745-82d6-959be0c9e27a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:12 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1306774249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:12 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/181897552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.712 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[068f7354-0d41-4918-8930-11a91e4b3649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.714 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4753a65b-b668-4878-aff9-d51e1bae75d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.731 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0100d9ca-3e1a-4901-8e6c-76af52433700]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468515, 'reachable_time': 38849, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295905, 'error': None, 'target': 'ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d3eeb3245\x2db22f\x2d4899\x2d9ec0\x2d084ea5f63b6b.mount: Deactivated successfully.
Nov 25 08:29:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.734 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3eeb3245-b22f-4899-9ec0-084ea5f63b6b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:29:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:12.734 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba6e13c-0941-4514-b05f-4b6d033edf18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:29:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/275503310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.791 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.793 253542 DEBUG nova.objects.instance [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8400a9a9-bd7a-434b-a11b-6db7e12a4e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.794 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.824 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:29:12 compute-0 nova_compute[253538]:   <uuid>8400a9a9-bd7a-434b-a11b-6db7e12a4e18</uuid>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   <name>instance-00000024</name>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <nova:name>tempest-ListImageFiltersTestJSON-server-502320856</nova:name>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:29:11</nova:creationTime>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:29:12 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:29:12 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:29:12 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:29:12 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:29:12 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:29:12 compute-0 nova_compute[253538]:         <nova:user uuid="6bc7b68c86ab44d29c118388df2a8bc0">tempest-ListImageFiltersTestJSON-638123770-project-member</nova:user>
Nov 25 08:29:12 compute-0 nova_compute[253538]:         <nova:project uuid="70a02215d9344af388c6439ace9208a4">tempest-ListImageFiltersTestJSON-638123770</nova:project>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <system>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <entry name="serial">8400a9a9-bd7a-434b-a11b-6db7e12a4e18</entry>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <entry name="uuid">8400a9a9-bd7a-434b-a11b-6db7e12a4e18</entry>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     </system>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   <os>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   </os>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   <features>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   </features>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk">
Nov 25 08:29:12 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       </source>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:29:12 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk.config">
Nov 25 08:29:12 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       </source>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:29:12 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18/console.log" append="off"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <video>
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     </video>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:29:12 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:29:12 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:29:12 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:29:12 compute-0 nova_compute[253538]: </domain>
Nov 25 08:29:12 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.867 253542 DEBUG nova.storage.rbd_utils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] resizing rbd image 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.915 253542 DEBUG nova.network.neutron [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.915 253542 DEBUG nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.922 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.923 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.923 253542 INFO nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Using config drive
Nov 25 08:29:12 compute-0 nova_compute[253538]: 2025-11-25 08:29:12.944 253542 DEBUG nova.storage.rbd_utils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.058 253542 DEBUG nova.compute.manager [req-25bb67f6-af5f-4f47-a72c-857abf06dc5c req-e1a37be8-dc97-4dde-8d22-d040c1c6cfa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received event network-vif-unplugged-d4493bab-df0a-4934-ab26-43dae0dbae72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.059 253542 DEBUG oslo_concurrency.lockutils [req-25bb67f6-af5f-4f47-a72c-857abf06dc5c req-e1a37be8-dc97-4dde-8d22-d040c1c6cfa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.059 253542 DEBUG oslo_concurrency.lockutils [req-25bb67f6-af5f-4f47-a72c-857abf06dc5c req-e1a37be8-dc97-4dde-8d22-d040c1c6cfa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.059 253542 DEBUG oslo_concurrency.lockutils [req-25bb67f6-af5f-4f47-a72c-857abf06dc5c req-e1a37be8-dc97-4dde-8d22-d040c1c6cfa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.059 253542 DEBUG nova.compute.manager [req-25bb67f6-af5f-4f47-a72c-857abf06dc5c req-e1a37be8-dc97-4dde-8d22-d040c1c6cfa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] No waiting events found dispatching network-vif-unplugged-d4493bab-df0a-4934-ab26-43dae0dbae72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.060 253542 DEBUG nova.compute.manager [req-25bb67f6-af5f-4f47-a72c-857abf06dc5c req-e1a37be8-dc97-4dde-8d22-d040c1c6cfa4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received event network-vif-unplugged-d4493bab-df0a-4934-ab26-43dae0dbae72 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.064 253542 DEBUG nova.objects.instance [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lazy-loading 'migration_context' on Instance uuid 225f80e2-9e66-46fb-b77d-9a54fa8a2a41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.074 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.075 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Ensure instance console log exists: /var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.075 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.075 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.075 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.077 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.082 253542 WARNING nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.086 253542 DEBUG nova.virt.libvirt.host [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.086 253542 DEBUG nova.virt.libvirt.host [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.089 253542 DEBUG nova.virt.libvirt.host [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.091 253542 DEBUG nova.virt.libvirt.host [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.091 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.091 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.092 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.092 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.092 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.092 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.093 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.093 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.093 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.093 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.093 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.093 253542 DEBUG nova.virt.hardware [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.097 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 250 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 563 KiB/s rd, 1.8 MiB/s wr, 217 op/s
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.136 253542 INFO nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Creating config drive at /var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18/disk.config
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.141 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptld7tzua execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.186 253542 DEBUG nova.network.neutron [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Updating instance_info_cache with network_info: [{"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.204 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Releasing lock "refresh_cache-0067149a-8f99-4257-af2a-fd9adcc41719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.204 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Instance network_info: |[{"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.204 253542 DEBUG oslo_concurrency.lockutils [req-18bf05de-4be4-4677-9f2b-6d6e4615819a req-141cd7a5-5ca6-4986-81cb-7da71f3f603d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0067149a-8f99-4257-af2a-fd9adcc41719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.205 253542 DEBUG nova.network.neutron [req-18bf05de-4be4-4677-9f2b-6d6e4615819a req-141cd7a5-5ca6-4986-81cb-7da71f3f603d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Refreshing network info cache for port f3dedfca-04a0-44af-bca1-33a95c9804fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.208 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Start _get_guest_xml network_info=[{"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T08:28:51Z,direct_url=<?>,disk_format='raw',id=c4072411-f87d-45fb-92e8-02dc5884a35e,min_disk=1,min_ram=0,name='tempest-test-snap-1547610271',owner='b0a28d62fb1841c087b84b40bf5a54ec',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T08:29:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': 'c4072411-f87d-45fb-92e8-02dc5884a35e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.212 253542 WARNING nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.216 253542 DEBUG nova.virt.libvirt.host [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.216 253542 DEBUG nova.virt.libvirt.host [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.222 253542 DEBUG nova.virt.libvirt.host [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.223 253542 DEBUG nova.virt.libvirt.host [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.223 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.223 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T08:28:51Z,direct_url=<?>,disk_format='raw',id=c4072411-f87d-45fb-92e8-02dc5884a35e,min_disk=1,min_ram=0,name='tempest-test-snap-1547610271',owner='b0a28d62fb1841c087b84b40bf5a54ec',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T08:29:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.224 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.224 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.224 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.224 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.224 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.224 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.225 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.225 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.225 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.225 253542 DEBUG nova.virt.hardware [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.228 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.285 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.285 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.294 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptld7tzua" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.321 253542 DEBUG nova.storage.rbd_utils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.334 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18/disk.config 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.373 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.446 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.447 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.455 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.456 253542 INFO nova.compute.claims [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.541 253542 INFO nova.virt.libvirt.driver [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Deleting instance files /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6_del
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.542 253542 INFO nova.virt.libvirt.driver [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Deletion of /var/lib/nova/instances/cf07e611-51eb-4bcf-8757-8f75d3807da6_del complete
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.553 253542 DEBUG oslo_concurrency.processutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18/disk.config 8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.553 253542 INFO nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Deleting local config drive /var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18/disk.config because it was imported into RBD.
Nov 25 08:29:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:29:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3042899764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.583 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.612 253542 DEBUG nova.storage.rbd_utils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.619 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:13 compute-0 systemd-machined[215790]: New machine qemu-40-instance-00000024.
Nov 25 08:29:13 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000024.
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.669 253542 INFO nova.compute.manager [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Took 1.97 seconds to destroy the instance on the hypervisor.
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.670 253542 DEBUG oslo.service.loopingcall [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.670 253542 DEBUG nova.compute.manager [-] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.670 253542 DEBUG nova.network.neutron [-] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:29:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:29:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3679845287' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.727 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.755 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.791 253542 DEBUG nova.storage.rbd_utils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0067149a-8f99-4257-af2a-fd9adcc41719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.797 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/275503310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:13 compute-0 ceph-mon[75015]: pgmap v1351: 321 pgs: 321 active+clean; 250 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 563 KiB/s rd, 1.8 MiB/s wr, 217 op/s
Nov 25 08:29:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3042899764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3679845287' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.978 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059353.9762833, 8400a9a9-bd7a-434b-a11b-6db7e12a4e18 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.978 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] VM Resumed (Lifecycle Event)
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.981 253542 DEBUG nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.981 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.985 253542 INFO nova.virt.libvirt.driver [-] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Instance spawned successfully.
Nov 25 08:29:13 compute-0 nova_compute[253538]: 2025-11-25 08:29:13.985 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.002 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.010 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.016 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.017 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.017 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.018 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.018 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.018 253542 DEBUG nova.virt.libvirt.driver [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.041 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.042 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059353.9768515, 8400a9a9-bd7a-434b-a11b-6db7e12a4e18 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.042 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] VM Started (Lifecycle Event)
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.076 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.079 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.100 253542 INFO nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Took 3.84 seconds to spawn the instance on the hypervisor.
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.100 253542 DEBUG nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.102 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:29:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:29:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2186389793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.135 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.136 253542 DEBUG nova.objects.instance [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 225f80e2-9e66-46fb-b77d-9a54fa8a2a41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.152 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <uuid>225f80e2-9e66-46fb-b77d-9a54fa8a2a41</uuid>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <name>instance-00000025</name>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <nova:name>tempest-ListImageFiltersTestJSON-server-185269720</nova:name>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:29:13</nova:creationTime>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <nova:user uuid="6bc7b68c86ab44d29c118388df2a8bc0">tempest-ListImageFiltersTestJSON-638123770-project-member</nova:user>
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <nova:project uuid="70a02215d9344af388c6439ace9208a4">tempest-ListImageFiltersTestJSON-638123770</nova:project>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <system>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <entry name="serial">225f80e2-9e66-46fb-b77d-9a54fa8a2a41</entry>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <entry name="uuid">225f80e2-9e66-46fb-b77d-9a54fa8a2a41</entry>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </system>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <os>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   </os>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <features>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   </features>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk">
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       </source>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk.config">
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       </source>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41/console.log" append="off"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <video>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </video>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:29:14 compute-0 nova_compute[253538]: </domain>
Nov 25 08:29:14 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.164 253542 INFO nova.compute.manager [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Took 5.34 seconds to build instance.
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.184 253542 DEBUG oslo_concurrency.lockutils [None req-d0807b7a-6d5d-4783-b9b7-10e6a0c3760c 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "8400a9a9-bd7a-434b-a11b-6db7e12a4e18" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:29:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446547332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.204 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.209 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.209 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.210 253542 INFO nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Using config drive
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.232 253542 DEBUG nova.storage.rbd_utils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:29:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3484542476' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.250 253542 DEBUG nova.compute.provider_tree [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.266 253542 DEBUG nova.scheduler.client.report [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.270 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.271 253542 DEBUG nova.virt.libvirt.vif [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-533297759',display_name='tempest-ImagesTestJSON-server-533297759',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-533297759',id=35,image_ref='c4072411-f87d-45fb-92e8-02dc5884a35e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-xg0ar0p0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='8b29fb31-718d-4926-bf4f-bae461ea70ef',image_min_disk='1',image_min_ram='0',image_owner_id='b0a28d62fb1841c087b84b40bf5a54ec',image_owner_project_name='tempest-ImagesTestJSON-109091550',image_owner_user_name='tempest-ImagesTestJSON-109091550-project-member',image_user_id='38fa175fb699405c9a05d7c28f994ebc',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:29:09Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=0067149a-8f99-4257-af2a-fd9adcc41719,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.271 253542 DEBUG nova.network.os_vif_util [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.272 253542 DEBUG nova.network.os_vif_util [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ec:bd,bridge_name='br-int',has_traffic_filtering=True,id=f3dedfca-04a0-44af-bca1-33a95c9804fa,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3dedfca-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.272 253542 DEBUG nova.objects.instance [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 0067149a-8f99-4257-af2a-fd9adcc41719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.287 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <uuid>0067149a-8f99-4257-af2a-fd9adcc41719</uuid>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <name>instance-00000023</name>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <nova:name>tempest-ImagesTestJSON-server-533297759</nova:name>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:29:13</nova:creationTime>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <nova:user uuid="38fa175fb699405c9a05d7c28f994ebc">tempest-ImagesTestJSON-109091550-project-member</nova:user>
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <nova:project uuid="b0a28d62fb1841c087b84b40bf5a54ec">tempest-ImagesTestJSON-109091550</nova:project>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="c4072411-f87d-45fb-92e8-02dc5884a35e"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <nova:port uuid="f3dedfca-04a0-44af-bca1-33a95c9804fa">
Nov 25 08:29:14 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <system>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <entry name="serial">0067149a-8f99-4257-af2a-fd9adcc41719</entry>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <entry name="uuid">0067149a-8f99-4257-af2a-fd9adcc41719</entry>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </system>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <os>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   </os>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <features>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   </features>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0067149a-8f99-4257-af2a-fd9adcc41719_disk">
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       </source>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0067149a-8f99-4257-af2a-fd9adcc41719_disk.config">
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       </source>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:29:14 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:7d:ec:bd"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <target dev="tapf3dedfca-04"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719/console.log" append="off"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <video>
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </video>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <input type="keyboard" bus="usb"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:29:14 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:29:14 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:29:14 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:29:14 compute-0 nova_compute[253538]: </domain>
Nov 25 08:29:14 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.288 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Preparing to wait for external event network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.288 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.288 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.289 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.289 253542 DEBUG nova.virt.libvirt.vif [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-533297759',display_name='tempest-ImagesTestJSON-server-533297759',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-533297759',id=35,image_ref='c4072411-f87d-45fb-92e8-02dc5884a35e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-xg0ar0p0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='8b29fb31-718d-4926-bf4f-bae461ea70ef',image_min_disk='1',image_min_ram='0',image_owner_id='b0a28d62fb1841c087b84b40bf5a54ec',image_owner_project_name='tempest-ImagesTestJSON-109091550',image_owner_user_name='tempest-ImagesTestJSON-109091550-project-member',image_user_id='38fa175fb699405c9a05d7c28f994ebc',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:29:09Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=0067149a-8f99-4257-af2a-fd9adcc41719,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.289 253542 DEBUG nova.network.os_vif_util [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.290 253542 DEBUG nova.network.os_vif_util [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ec:bd,bridge_name='br-int',has_traffic_filtering=True,id=f3dedfca-04a0-44af-bca1-33a95c9804fa,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3dedfca-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.290 253542 DEBUG os_vif [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ec:bd,bridge_name='br-int',has_traffic_filtering=True,id=f3dedfca-04a0-44af-bca1-33a95c9804fa,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3dedfca-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.291 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.292 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.294 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.294 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.297 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.297 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3dedfca-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.298 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf3dedfca-04, col_values=(('external_ids', {'iface-id': 'f3dedfca-04a0-44af-bca1-33a95c9804fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:ec:bd', 'vm-uuid': '0067149a-8f99-4257-af2a-fd9adcc41719'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.299 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.300 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:29:14 compute-0 NetworkManager[48915]: <info>  [1764059354.3009] manager: (tapf3dedfca-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.305 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.306 253542 INFO os_vif [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ec:bd,bridge_name='br-int',has_traffic_filtering=True,id=f3dedfca-04a0-44af-bca1-33a95c9804fa,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3dedfca-04')
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.348 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.349 253542 DEBUG nova.network.neutron [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.360 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.360 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.360 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No VIF found with MAC fa:16:3e:7d:ec:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.361 253542 INFO nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Using config drive
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.377 253542 DEBUG nova.storage.rbd_utils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0067149a-8f99-4257-af2a-fd9adcc41719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.383 253542 INFO nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.403 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.425 253542 INFO nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Creating config drive at /var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41/disk.config
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.430 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn4v9w67l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.498 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.500 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.500 253542 INFO nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Creating image(s)
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.523 253542 DEBUG nova.storage.rbd_utils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.542 253542 DEBUG nova.storage.rbd_utils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.563 253542 DEBUG nova.storage.rbd_utils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.567 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.608 253542 DEBUG nova.network.neutron [req-18bf05de-4be4-4677-9f2b-6d6e4615819a req-141cd7a5-5ca6-4986-81cb-7da71f3f603d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Updated VIF entry in instance network info cache for port f3dedfca-04a0-44af-bca1-33a95c9804fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.609 253542 DEBUG nova.network.neutron [req-18bf05de-4be4-4677-9f2b-6d6e4615819a req-141cd7a5-5ca6-4986-81cb-7da71f3f603d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Updating instance_info_cache with network_info: [{"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.611 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn4v9w67l" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.639 253542 DEBUG nova.storage.rbd_utils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] rbd image 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.642 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41/disk.config 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.670 253542 DEBUG nova.policy [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8350a560f2bc4b57a5da0e3a1f582f82', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c5b1125d171240e2895276836b4fd6d7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.680 253542 DEBUG nova.network.neutron [-] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.685 253542 DEBUG nova.compute.manager [req-c46d4873-a3c4-4b29-91d6-2a53edc712f2 req-ca3d4a7e-0090-4e6c-b9a0-e0c577cff9fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received event network-vif-deleted-d4493bab-df0a-4934-ab26-43dae0dbae72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.686 253542 INFO nova.compute.manager [req-c46d4873-a3c4-4b29-91d6-2a53edc712f2 req-ca3d4a7e-0090-4e6c-b9a0-e0c577cff9fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Neutron deleted interface d4493bab-df0a-4934-ab26-43dae0dbae72; detaching it from the instance and deleting it from the info cache
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.686 253542 DEBUG nova.network.neutron [req-c46d4873-a3c4-4b29-91d6-2a53edc712f2 req-ca3d4a7e-0090-4e6c-b9a0-e0c577cff9fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.689 253542 DEBUG oslo_concurrency.lockutils [req-18bf05de-4be4-4677-9f2b-6d6e4615819a req-141cd7a5-5ca6-4986-81cb-7da71f3f603d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0067149a-8f99-4257-af2a-fd9adcc41719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.690 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.693 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.694 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.694 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.726 253542 DEBUG nova.storage.rbd_utils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.730 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.765 253542 INFO nova.compute.manager [-] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Took 1.09 seconds to deallocate network for instance.
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.777 253542 DEBUG nova.compute.manager [req-c46d4873-a3c4-4b29-91d6-2a53edc712f2 req-ca3d4a7e-0090-4e6c-b9a0-e0c577cff9fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Detach interface failed, port_id=d4493bab-df0a-4934-ab26-43dae0dbae72, reason: Instance cf07e611-51eb-4bcf-8757-8f75d3807da6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.828 253542 INFO nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Creating config drive at /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719/disk.config
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.834 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcfosizp9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.864 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.865 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2186389793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1446547332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3484542476' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.881 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:14 compute-0 nova_compute[253538]: 2025-11-25 08:29:14.969 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcfosizp9" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.009 253542 DEBUG nova.storage.rbd_utils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 0067149a-8f99-4257-af2a-fd9adcc41719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.011 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719/disk.config 0067149a-8f99-4257-af2a-fd9adcc41719_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.052 253542 DEBUG oslo_concurrency.processutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 280 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 767 KiB/s rd, 6.4 MiB/s wr, 285 op/s
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.130 253542 DEBUG nova.compute.manager [req-581e18a4-980e-478e-8f6a-1947078ca743 req-de7cf18e-61bd-42b4-920b-189af2eb538e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received event network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.130 253542 DEBUG oslo_concurrency.lockutils [req-581e18a4-980e-478e-8f6a-1947078ca743 req-de7cf18e-61bd-42b4-920b-189af2eb538e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.131 253542 DEBUG oslo_concurrency.lockutils [req-581e18a4-980e-478e-8f6a-1947078ca743 req-de7cf18e-61bd-42b4-920b-189af2eb538e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.131 253542 DEBUG oslo_concurrency.lockutils [req-581e18a4-980e-478e-8f6a-1947078ca743 req-de7cf18e-61bd-42b4-920b-189af2eb538e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.131 253542 DEBUG nova.compute.manager [req-581e18a4-980e-478e-8f6a-1947078ca743 req-de7cf18e-61bd-42b4-920b-189af2eb538e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] No waiting events found dispatching network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.131 253542 WARNING nova.compute.manager [req-581e18a4-980e-478e-8f6a-1947078ca743 req-de7cf18e-61bd-42b4-920b-189af2eb538e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Received unexpected event network-vif-plugged-d4493bab-df0a-4934-ab26-43dae0dbae72 for instance with vm_state deleted and task_state None.
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.212 253542 DEBUG nova.network.neutron [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Successfully created port: 19217fbd-a123-469a-b432-be5d2543613c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.258 253542 DEBUG oslo_concurrency.processutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41/disk.config 225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.260 253542 INFO nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Deleting local config drive /var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41/disk.config because it was imported into RBD.
Nov 25 08:29:15 compute-0 systemd-machined[215790]: New machine qemu-41-instance-00000025.
Nov 25 08:29:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:29:15 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-00000025.
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.475 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.745s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:29:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1433953808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.531 253542 DEBUG oslo_concurrency.processutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.536 253542 DEBUG nova.storage.rbd_utils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] resizing rbd image 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.627 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.628 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.633 253542 DEBUG nova.compute.provider_tree [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.650 253542 DEBUG nova.scheduler.client.report [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.659 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.671 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.673 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.674 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.674 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.674 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.727 253542 INFO nova.scheduler.client.report [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Deleted allocations for instance cf07e611-51eb-4bcf-8757-8f75d3807da6
Nov 25 08:29:15 compute-0 nova_compute[253538]: 2025-11-25 08:29:15.789 253542 DEBUG oslo_concurrency.lockutils [None req-d5334ba9-2022-4d0b-a27d-51c535991f7d 9a96a98d6bb448aab904a8763d3675ec 17d31dbb1e4542daaa43d2fda87e18ad - - default default] Lock "cf07e611-51eb-4bcf-8757-8f75d3807da6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:16 compute-0 ceph-mon[75015]: pgmap v1352: 321 pgs: 321 active+clean; 280 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 767 KiB/s rd, 6.4 MiB/s wr, 285 op/s
Nov 25 08:29:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1433953808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:29:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/43977870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.260 253542 DEBUG oslo_concurrency.processutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719/disk.config 0067149a-8f99-4257-af2a-fd9adcc41719_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.261 253542 INFO nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Deleting local config drive /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719/disk.config because it was imported into RBD.
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.274 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:16 compute-0 kernel: tapf3dedfca-04: entered promiscuous mode
Nov 25 08:29:16 compute-0 NetworkManager[48915]: <info>  [1764059356.3041] manager: (tapf3dedfca-04): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Nov 25 08:29:16 compute-0 ovn_controller[152859]: 2025-11-25T08:29:16Z|00240|binding|INFO|Claiming lport f3dedfca-04a0-44af-bca1-33a95c9804fa for this chassis.
Nov 25 08:29:16 compute-0 ovn_controller[152859]: 2025-11-25T08:29:16Z|00241|binding|INFO|f3dedfca-04a0-44af-bca1-33a95c9804fa: Claiming fa:16:3e:7d:ec:bd 10.100.0.14
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.317 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.323 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:ec:bd 10.100.0.14'], port_security=['fa:16:3e:7d:ec:bd 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0067149a-8f99-4257-af2a-fd9adcc41719', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f3dedfca-04a0-44af-bca1-33a95c9804fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:29:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.325 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f3dedfca-04a0-44af-bca1-33a95c9804fa in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e bound to our chassis
Nov 25 08:29:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.326 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:29:16 compute-0 systemd-udevd[296585]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:29:16 compute-0 ovn_controller[152859]: 2025-11-25T08:29:16Z|00242|binding|INFO|Setting lport f3dedfca-04a0-44af-bca1-33a95c9804fa ovn-installed in OVS
Nov 25 08:29:16 compute-0 ovn_controller[152859]: 2025-11-25T08:29:16Z|00243|binding|INFO|Setting lport f3dedfca-04a0-44af-bca1-33a95c9804fa up in Southbound
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.342 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:16 compute-0 systemd-machined[215790]: New machine qemu-42-instance-00000023.
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.350 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.351 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[098eeb37-ed75-42b5-977f-d36cb6608d0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:16 compute-0 NetworkManager[48915]: <info>  [1764059356.3561] device (tapf3dedfca-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:29:16 compute-0 NetworkManager[48915]: <info>  [1764059356.3575] device (tapf3dedfca-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:29:16 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-00000023.
Nov 25 08:29:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.378 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2cc40c-d0b6-435d-b7ab-fac09359d717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.381 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7f9b5c-d2f0-4700-a3f3-c8f0aea3279f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.407 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1f63ba-a4fd-4797-b8ab-530e271ffa0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.426 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[78a9d06a-a8f6-494a-adb9-6bec7040fa83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468344, 'reachable_time': 30576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296612, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.443 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[86f585ce-0573-4988-b53b-2f709bca94df]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapba659d6c-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468354, 'tstamp': 468354}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296614, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapba659d6c-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468357, 'tstamp': 468357}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296614, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.445 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.448 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba659d6c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.448 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:29:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.449 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapba659d6c-c0, col_values=(('external_ids', {'iface-id': '02ee51d1-7fc5-4815-93ec-b9ead088a46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.449 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:16.450 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.466 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.466 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.470 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.470 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.473 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.474 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.477 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.478 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.714 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.715 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3878MB free_disk=59.88420486450195GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.716 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.716 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.786 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 8b29fb31-718d-4926-bf4f-bae461ea70ef actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.786 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0067149a-8f99-4257-af2a-fd9adcc41719 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.786 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 8400a9a9-bd7a-434b-a11b-6db7e12a4e18 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.787 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 225f80e2-9e66-46fb-b77d-9a54fa8a2a41 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.787 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.787 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.788 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:29:16 compute-0 nova_compute[253538]: 2025-11-25 08:29:16.877 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.038 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059357.015526, 225f80e2-9e66-46fb-b77d-9a54fa8a2a41 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.040 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] VM Resumed (Lifecycle Event)
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.042 253542 DEBUG nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.043 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.053 253542 DEBUG nova.objects.instance [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'migration_context' on Instance uuid 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.057 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.063 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.063 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Ensure instance console log exists: /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.064 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.064 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.064 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.065 253542 INFO nova.virt.libvirt.driver [-] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Instance spawned successfully.
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.065 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.071 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.078 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.079 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.080 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.081 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.082 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.082 253542 DEBUG nova.virt.libvirt.driver [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.085 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.085 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059357.0156102, 225f80e2-9e66-46fb-b77d-9a54fa8a2a41 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.085 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] VM Started (Lifecycle Event)
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.104 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.107 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:29:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 270 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 433 KiB/s rd, 5.3 MiB/s wr, 204 op/s
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.121 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.122 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059357.0695934, 0067149a-8f99-4257-af2a-fd9adcc41719 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.122 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] VM Started (Lifecycle Event)
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.133 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.137 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059357.0703814, 0067149a-8f99-4257-af2a-fd9adcc41719 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.138 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] VM Paused (Lifecycle Event)
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.151 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.153 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.166 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:29:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/43977870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:29:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16103673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.378 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.383 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.405 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.440 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.441 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.457 253542 INFO nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Took 5.41 seconds to spawn the instance on the hypervisor.
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.458 253542 DEBUG nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.514 253542 INFO nova.compute.manager [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Took 6.47 seconds to build instance.
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.529 253542 DEBUG nova.network.neutron [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Successfully updated port: 19217fbd-a123-469a-b432-be5d2543613c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.530 253542 DEBUG oslo_concurrency.lockutils [None req-a0928e58-2fe3-4ced-a142-64404b6cb388 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "225f80e2-9e66-46fb-b77d-9a54fa8a2a41" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.538 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "refresh_cache-8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.538 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquired lock "refresh_cache-8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.539 253542 DEBUG nova.network.neutron [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.717 253542 DEBUG nova.network.neutron [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.884 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059342.8839343, c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.885 253542 INFO nova.compute.manager [-] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] VM Stopped (Lifecycle Event)
Nov 25 08:29:17 compute-0 nova_compute[253538]: 2025-11-25 08:29:17.900 253542 DEBUG nova.compute.manager [None req-17f0f62d-08db-488d-943b-90562cd16652 - - - - - -] [instance: c3cdbe22-9aa9-4d2e-a76e-8d28c1edc71e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:18 compute-0 nova_compute[253538]: 2025-11-25 08:29:18.018 253542 DEBUG nova.compute.manager [req-3738d1d8-be7e-4326-bc9c-c8d2a8cdc755 req-0d8a5858-021b-40c6-9b52-39690b81703d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received event network-changed-19217fbd-a123-469a-b432-be5d2543613c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:18 compute-0 nova_compute[253538]: 2025-11-25 08:29:18.018 253542 DEBUG nova.compute.manager [req-3738d1d8-be7e-4326-bc9c-c8d2a8cdc755 req-0d8a5858-021b-40c6-9b52-39690b81703d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Refreshing instance network info cache due to event network-changed-19217fbd-a123-469a-b432-be5d2543613c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:29:18 compute-0 nova_compute[253538]: 2025-11-25 08:29:18.018 253542 DEBUG oslo_concurrency.lockutils [req-3738d1d8-be7e-4326-bc9c-c8d2a8cdc755 req-0d8a5858-021b-40c6-9b52-39690b81703d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:29:18 compute-0 ceph-mon[75015]: pgmap v1353: 321 pgs: 321 active+clean; 270 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 433 KiB/s rd, 5.3 MiB/s wr, 204 op/s
Nov 25 08:29:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/16103673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.073 253542 DEBUG nova.network.neutron [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Updating instance_info_cache with network_info: [{"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.092 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Releasing lock "refresh_cache-8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.092 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Instance network_info: |[{"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.093 253542 DEBUG oslo_concurrency.lockutils [req-3738d1d8-be7e-4326-bc9c-c8d2a8cdc755 req-0d8a5858-021b-40c6-9b52-39690b81703d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.093 253542 DEBUG nova.network.neutron [req-3738d1d8-be7e-4326-bc9c-c8d2a8cdc755 req-0d8a5858-021b-40c6-9b52-39690b81703d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Refreshing network info cache for port 19217fbd-a123-469a-b432-be5d2543613c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.095 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Start _get_guest_xml network_info=[{"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:29:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 281 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.9 MiB/s wr, 256 op/s
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.132 253542 WARNING nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.142 253542 DEBUG nova.virt.libvirt.host [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.142 253542 DEBUG nova.virt.libvirt.host [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.146 253542 DEBUG nova.virt.libvirt.host [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.147 253542 DEBUG nova.virt.libvirt.host [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.147 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.148 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.148 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.149 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.149 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.149 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.149 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.150 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.150 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.151 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.151 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.151 253542 DEBUG nova.virt.hardware [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.154 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.299 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.367 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.368 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:29:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:29:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1794632115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.668 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.692 253542 DEBUG nova.storage.rbd_utils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.696 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.785 253542 DEBUG nova.compute.manager [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.822 253542 INFO nova.compute.manager [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] instance snapshotting
Nov 25 08:29:19 compute-0 nova_compute[253538]: 2025-11-25 08:29:19.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.039 253542 INFO nova.virt.libvirt.driver [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Beginning live snapshot process
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.108 253542 DEBUG nova.compute.manager [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received event network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.109 253542 DEBUG oslo_concurrency.lockutils [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.110 253542 DEBUG oslo_concurrency.lockutils [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.110 253542 DEBUG oslo_concurrency.lockutils [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.110 253542 DEBUG nova.compute.manager [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Processing event network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.111 253542 DEBUG nova.compute.manager [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received event network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.111 253542 DEBUG oslo_concurrency.lockutils [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.111 253542 DEBUG oslo_concurrency.lockutils [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.112 253542 DEBUG oslo_concurrency.lockutils [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.112 253542 DEBUG nova.compute.manager [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] No waiting events found dispatching network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.113 253542 WARNING nova.compute.manager [req-bcbb7520-d840-4040-bbc2-c93da766eec4 req-eec480e9-9a4b-43ce-8344-3e286ed80fe7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received unexpected event network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa for instance with vm_state building and task_state spawning.
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.114 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.117 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059360.116616, 0067149a-8f99-4257-af2a-fd9adcc41719 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.117 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] VM Resumed (Lifecycle Event)
Nov 25 08:29:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:29:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2180142779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.165 253542 DEBUG nova.virt.libvirt.driver [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.168 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.173 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.178 253542 DEBUG nova.virt.libvirt.imagebackend [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.182 253542 INFO nova.virt.libvirt.driver [-] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Instance spawned successfully.
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.183 253542 INFO nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Took 10.96 seconds to spawn the instance on the hypervisor.
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.183 253542 DEBUG nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.191 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.193 253542 DEBUG nova.virt.libvirt.vif [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:29:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1871314991',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1871314991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1871314991',id=38,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-np4uml6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:29:14Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=8a8b6989-6ea7-4cf7-ad21-a1563967c7f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.193 253542 DEBUG nova.network.os_vif_util [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.195 253542 DEBUG nova.network.os_vif_util [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:9f:bb,bridge_name='br-int',has_traffic_filtering=True,id=19217fbd-a123-469a-b432-be5d2543613c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19217fbd-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.196 253542 DEBUG nova.objects.instance [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.197 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.210 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:29:20 compute-0 nova_compute[253538]:   <uuid>8a8b6989-6ea7-4cf7-ad21-a1563967c7f4</uuid>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   <name>instance-00000026</name>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1871314991</nova:name>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:29:19</nova:creationTime>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:29:20 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:29:20 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:29:20 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:29:20 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:29:20 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:29:20 compute-0 nova_compute[253538]:         <nova:user uuid="8350a560f2bc4b57a5da0e3a1f582f82">tempest-ImagesOneServerNegativeTestJSON-192511421-project-member</nova:user>
Nov 25 08:29:20 compute-0 nova_compute[253538]:         <nova:project uuid="c5b1125d171240e2895276836b4fd6d7">tempest-ImagesOneServerNegativeTestJSON-192511421</nova:project>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:29:20 compute-0 nova_compute[253538]:         <nova:port uuid="19217fbd-a123-469a-b432-be5d2543613c">
Nov 25 08:29:20 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <system>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <entry name="serial">8a8b6989-6ea7-4cf7-ad21-a1563967c7f4</entry>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <entry name="uuid">8a8b6989-6ea7-4cf7-ad21-a1563967c7f4</entry>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     </system>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   <os>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   </os>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   <features>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   </features>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk">
Nov 25 08:29:20 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       </source>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:29:20 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk.config">
Nov 25 08:29:20 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       </source>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:29:20 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:85:9f:bb"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <target dev="tap19217fbd-a1"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4/console.log" append="off"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <video>
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     </video>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:29:20 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:29:20 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:29:20 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:29:20 compute-0 nova_compute[253538]: </domain>
Nov 25 08:29:20 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.217 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Preparing to wait for external event network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.217 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.218 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.218 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.219 253542 DEBUG nova.virt.libvirt.vif [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:29:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1871314991',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1871314991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1871314991',id=38,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-np4uml6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:29:14Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=8a8b6989-6ea7-4cf7-ad21-a1563967c7f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.219 253542 DEBUG nova.network.os_vif_util [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.220 253542 DEBUG nova.network.os_vif_util [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:9f:bb,bridge_name='br-int',has_traffic_filtering=True,id=19217fbd-a123-469a-b432-be5d2543613c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19217fbd-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.221 253542 DEBUG os_vif [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:9f:bb,bridge_name='br-int',has_traffic_filtering=True,id=19217fbd-a123-469a-b432-be5d2543613c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19217fbd-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.221 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.222 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.222 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.225 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.225 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19217fbd-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.226 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap19217fbd-a1, col_values=(('external_ids', {'iface-id': '19217fbd-a123-469a-b432-be5d2543613c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:9f:bb', 'vm-uuid': '8a8b6989-6ea7-4cf7-ad21-a1563967c7f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.227 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:20 compute-0 NetworkManager[48915]: <info>  [1764059360.2284] manager: (tap19217fbd-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.239 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.241 253542 INFO os_vif [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:9f:bb,bridge_name='br-int',has_traffic_filtering=True,id=19217fbd-a123-469a-b432-be5d2543613c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19217fbd-a1')
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.243 253542 INFO nova.compute.manager [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Took 11.90 seconds to build instance.
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.260 253542 DEBUG oslo_concurrency.lockutils [None req-4a46e8e8-8247-4f85-ab93-5b0a8164f13c 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.299 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.300 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.300 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] No VIF found with MAC fa:16:3e:85:9f:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.301 253542 INFO nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Using config drive
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.318 253542 DEBUG nova.storage.rbd_utils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:29:20 compute-0 nova_compute[253538]: 2025-11-25 08:29:20.379 253542 DEBUG nova.storage.rbd_utils [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] creating snapshot(ac64e0e5302e4b08871927b6e4b87158) on rbd image(8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:29:20 compute-0 ceph-mon[75015]: pgmap v1354: 321 pgs: 321 active+clean; 281 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.9 MiB/s wr, 256 op/s
Nov 25 08:29:20 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1794632115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:20 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2180142779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.061 253542 DEBUG nova.network.neutron [req-3738d1d8-be7e-4326-bc9c-c8d2a8cdc755 req-0d8a5858-021b-40c6-9b52-39690b81703d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Updated VIF entry in instance network info cache for port 19217fbd-a123-469a-b432-be5d2543613c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.062 253542 DEBUG nova.network.neutron [req-3738d1d8-be7e-4326-bc9c-c8d2a8cdc755 req-0d8a5858-021b-40c6-9b52-39690b81703d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Updating instance_info_cache with network_info: [{"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.076 253542 DEBUG oslo_concurrency.lockutils [req-3738d1d8-be7e-4326-bc9c-c8d2a8cdc755 req-0d8a5858-021b-40c6-9b52-39690b81703d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:29:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 306 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.0 MiB/s wr, 327 op/s
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.182 253542 INFO nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Creating config drive at /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4/disk.config
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.187 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnfyp67o4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.274 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "0067149a-8f99-4257-af2a-fd9adcc41719" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.275 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.276 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.276 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.276 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.278 253542 INFO nova.compute.manager [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Terminating instance
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.279 253542 DEBUG nova.compute.manager [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:29:21 compute-0 kernel: tapf3dedfca-04 (unregistering): left promiscuous mode
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.322 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnfyp67o4" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:21 compute-0 NetworkManager[48915]: <info>  [1764059361.3326] device (tapf3dedfca-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:29:21 compute-0 ovn_controller[152859]: 2025-11-25T08:29:21Z|00244|binding|INFO|Releasing lport f3dedfca-04a0-44af-bca1-33a95c9804fa from this chassis (sb_readonly=0)
Nov 25 08:29:21 compute-0 ovn_controller[152859]: 2025-11-25T08:29:21Z|00245|binding|INFO|Setting lport f3dedfca-04a0-44af-bca1-33a95c9804fa down in Southbound
Nov 25 08:29:21 compute-0 ovn_controller[152859]: 2025-11-25T08:29:21Z|00246|binding|INFO|Removing iface tapf3dedfca-04 ovn-installed in OVS
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.348 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:ec:bd 10.100.0.14'], port_security=['fa:16:3e:7d:ec:bd 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0067149a-8f99-4257-af2a-fd9adcc41719', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f3dedfca-04a0-44af-bca1-33a95c9804fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.349 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f3dedfca-04a0-44af-bca1-33a95c9804fa in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e unbound from our chassis
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.350 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.368 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca06695-8ce8-4a5e-86c8-b83cf4fca5f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.376 253542 DEBUG nova.storage.rbd_utils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] rbd image 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.381 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4/disk.config 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:21 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000023.scope: Deactivated successfully.
Nov 25 08:29:21 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000023.scope: Consumed 1.621s CPU time.
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.401 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4749c2-c798-4ae6-858a-f29dfcb1f9e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:21 compute-0 systemd-machined[215790]: Machine qemu-42-instance-00000023 terminated.
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.407 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f99403-72eb-4eec-b020-f3c255c442de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.418 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.435 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9c72a808-499e-4457-9af2-df34fc48a6bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.459 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e3280b-519e-4c35-ae10-37675bf88104]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468344, 'reachable_time': 30576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296889, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.480 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[83b7c4b7-9ed6-4ffe-b2c2-185099f04a10]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapba659d6c-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468354, 'tstamp': 468354}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296905, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapba659d6c-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468357, 'tstamp': 468357}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296905, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.481 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.483 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.493 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba659d6c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.492 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.493 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.493 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapba659d6c-c0, col_values=(('external_ids', {'iface-id': '02ee51d1-7fc5-4815-93ec-b9ead088a46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.494 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:29:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.518 253542 INFO nova.virt.libvirt.driver [-] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Instance destroyed successfully.
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.519 253542 DEBUG nova.objects.instance [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'resources' on Instance uuid 0067149a-8f99-4257-af2a-fd9adcc41719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 25 08:29:21 compute-0 ceph-mon[75015]: pgmap v1355: 321 pgs: 321 active+clean; 306 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.0 MiB/s wr, 327 op/s
Nov 25 08:29:21 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.536 253542 DEBUG nova.virt.libvirt.vif [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-533297759',display_name='tempest-ImagesTestJSON-server-533297759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-533297759',id=35,image_ref='c4072411-f87d-45fb-92e8-02dc5884a35e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:29:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-xg0ar0p0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='8b29fb31-718d-4926-bf4f-bae461ea70ef',image_min_disk='1',image_min_ram='0',image_owner_id='b0a28d62fb1841c087b84b40bf5a54ec',image_owner_project_name='tempest-ImagesTestJSON-109091550',image_owner_user_name='tempest-ImagesTestJSON-109091550-project-member',image_user_id='38fa175fb699405c9a05d7c28f994ebc',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:29:20Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=0067149a-8f99-4257-af2a-fd9adcc41719,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.537 253542 DEBUG nova.network.os_vif_util [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "address": "fa:16:3e:7d:ec:bd", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3dedfca-04", "ovs_interfaceid": "f3dedfca-04a0-44af-bca1-33a95c9804fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.538 253542 DEBUG nova.network.os_vif_util [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ec:bd,bridge_name='br-int',has_traffic_filtering=True,id=f3dedfca-04a0-44af-bca1-33a95c9804fa,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3dedfca-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.539 253542 DEBUG os_vif [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ec:bd,bridge_name='br-int',has_traffic_filtering=True,id=f3dedfca-04a0-44af-bca1-33a95c9804fa,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3dedfca-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.544 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3dedfca-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.548 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.551 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.554 253542 INFO os_vif [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ec:bd,bridge_name='br-int',has_traffic_filtering=True,id=f3dedfca-04a0-44af-bca1-33a95c9804fa,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3dedfca-04')
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.614 253542 DEBUG oslo_concurrency.processutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4/disk.config 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.615 253542 INFO nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Deleting local config drive /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4/disk.config because it was imported into RBD.
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.639 253542 DEBUG nova.storage.rbd_utils [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] cloning vms/8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk@ac64e0e5302e4b08871927b6e4b87158 to images/148b37ac-1ea9-4409-a4ce-912163252ba4 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:29:21 compute-0 systemd-udevd[296877]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:29:21 compute-0 kernel: tap19217fbd-a1: entered promiscuous mode
Nov 25 08:29:21 compute-0 NetworkManager[48915]: <info>  [1764059361.6809] manager: (tap19217fbd-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Nov 25 08:29:21 compute-0 ovn_controller[152859]: 2025-11-25T08:29:21Z|00247|binding|INFO|Claiming lport 19217fbd-a123-469a-b432-be5d2543613c for this chassis.
Nov 25 08:29:21 compute-0 ovn_controller[152859]: 2025-11-25T08:29:21Z|00248|binding|INFO|19217fbd-a123-469a-b432-be5d2543613c: Claiming fa:16:3e:85:9f:bb 10.100.0.10
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.684 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:21 compute-0 NetworkManager[48915]: <info>  [1764059361.6917] device (tap19217fbd-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:29:21 compute-0 NetworkManager[48915]: <info>  [1764059361.6924] device (tap19217fbd-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.694 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:9f:bb 10.100.0.10'], port_security=['fa:16:3e:85:9f:bb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8a8b6989-6ea7-4cf7-ad21-a1563967c7f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5b1125d171240e2895276836b4fd6d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bd498cc2-3a1f-4313-a3a9-7b1fa737f1eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98666ab-8fd3-4256-b0ce-536c54c0072e, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=19217fbd-a123-469a-b432-be5d2543613c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.695 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 19217fbd-a123-469a-b432-be5d2543613c in datapath 52a7668b-f0ac-4b07-a778-1ee89adbf076 bound to our chassis
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.696 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52a7668b-f0ac-4b07-a778-1ee89adbf076
Nov 25 08:29:21 compute-0 ovn_controller[152859]: 2025-11-25T08:29:21Z|00249|binding|INFO|Setting lport 19217fbd-a123-469a-b432-be5d2543613c ovn-installed in OVS
Nov 25 08:29:21 compute-0 ovn_controller[152859]: 2025-11-25T08:29:21Z|00250|binding|INFO|Setting lport 19217fbd-a123-469a-b432-be5d2543613c up in Southbound
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.707 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.711 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d16af8-0f65-457c-932a-6f32625ad03f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.711 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52a7668b-f1 in ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.713 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52a7668b-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.713 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f88867a0-f88d-4517-b70a-a8026b2f27de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.714 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a5b566-5d91-423a-8724-6a2dbc77807a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.729 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[ff83f894-da6a-4672-b1fe-8c6d3ccea5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.881 253542 DEBUG nova.compute.manager [req-87ab2865-eb93-4a96-a6e2-6511885924cc req-d3c9db26-8404-4e99-b75f-22848772f58b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received event network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.883 253542 DEBUG oslo_concurrency.lockutils [req-87ab2865-eb93-4a96-a6e2-6511885924cc req-d3c9db26-8404-4e99-b75f-22848772f58b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.883 253542 DEBUG oslo_concurrency.lockutils [req-87ab2865-eb93-4a96-a6e2-6511885924cc req-d3c9db26-8404-4e99-b75f-22848772f58b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.883 253542 DEBUG oslo_concurrency.lockutils [req-87ab2865-eb93-4a96-a6e2-6511885924cc req-d3c9db26-8404-4e99-b75f-22848772f58b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.884 253542 DEBUG nova.compute.manager [req-87ab2865-eb93-4a96-a6e2-6511885924cc req-d3c9db26-8404-4e99-b75f-22848772f58b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Processing event network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.889 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d63f7c6b-5f47-4074-9c7a-44d08984b5df]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.890 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:21 compute-0 systemd-machined[215790]: New machine qemu-43-instance-00000026.
Nov 25 08:29:21 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000026.
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.938 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[099b688d-47da-43d3-a8dc-44fd72628656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:21 compute-0 NetworkManager[48915]: <info>  [1764059361.9464] manager: (tap52a7668b-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/121)
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.947 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f929366-6d8d-4bd4-826e-f8528bee7569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:21 compute-0 nova_compute[253538]: 2025-11-25 08:29:21.975 253542 DEBUG nova.storage.rbd_utils [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] flattening images/148b37ac-1ea9-4409-a4ce-912163252ba4 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.977 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[53153c15-1f3f-4d02-9b5a-62eded2898db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:21.983 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0db5b8dc-309c-406c-bcfc-e373160dbe5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:22 compute-0 NetworkManager[48915]: <info>  [1764059362.0158] device (tap52a7668b-f0): carrier: link connected
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.024 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5421ff45-99f2-4b98-9fea-0b8a6f3c35d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:22 compute-0 ovn_controller[152859]: 2025-11-25T08:29:22Z|00251|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.053 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.062 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c631c40e-196d-4ce3-b2cd-77c7619462c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52a7668b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:1c:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471661, 'reachable_time': 16282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297042, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.087 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ba31fdb6-4010-42a1-9605-147c8fa3c2e0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:1c70'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471661, 'tstamp': 471661}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297043, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.110 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[511b1436-9965-43c2-b40a-5cd823d49262]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52a7668b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:1c:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471661, 'reachable_time': 16282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297044, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.152 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a644aa-897e-4952-a2e3-63acebaa01bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:22 compute-0 ovn_controller[152859]: 2025-11-25T08:29:22Z|00252|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.211 253542 DEBUG nova.compute.manager [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received event network-vif-unplugged-f3dedfca-04a0-44af-bca1-33a95c9804fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.211 253542 DEBUG oslo_concurrency.lockutils [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.212 253542 DEBUG oslo_concurrency.lockutils [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.212 253542 DEBUG oslo_concurrency.lockutils [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.212 253542 DEBUG nova.compute.manager [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] No waiting events found dispatching network-vif-unplugged-f3dedfca-04a0-44af-bca1-33a95c9804fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.212 253542 DEBUG nova.compute.manager [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received event network-vif-unplugged-f3dedfca-04a0-44af-bca1-33a95c9804fa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.213 253542 DEBUG nova.compute.manager [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received event network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.213 253542 DEBUG oslo_concurrency.lockutils [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.214 253542 DEBUG oslo_concurrency.lockutils [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.214 253542 DEBUG oslo_concurrency.lockutils [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.214 253542 DEBUG nova.compute.manager [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] No waiting events found dispatching network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.214 253542 WARNING nova.compute.manager [req-2daf4382-0f37-4f30-9c87-e4a7e53b8247 req-96c448d5-c11d-456e-a4c1-5a2bbde6520b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received unexpected event network-vif-plugged-f3dedfca-04a0-44af-bca1-33a95c9804fa for instance with vm_state active and task_state deleting.
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.234 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9770c090-a90a-4a65-bc68-d6a8cf14031a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.235 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52a7668b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.235 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.235 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52a7668b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:22 compute-0 kernel: tap52a7668b-f0: entered promiscuous mode
Nov 25 08:29:22 compute-0 NetworkManager[48915]: <info>  [1764059362.2379] manager: (tap52a7668b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.241 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52a7668b-f0, col_values=(('external_ids', {'iface-id': 'ac244317-fa52-4a6a-92f4-98845a41804d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:22 compute-0 ovn_controller[152859]: 2025-11-25T08:29:22Z|00253|binding|INFO|Releasing lport ac244317-fa52-4a6a-92f4-98845a41804d from this chassis (sb_readonly=0)
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.246 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.246 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[170d6208-b276-4027-ab0c-3aa4b3b5bd39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.247 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-52a7668b-f0ac-4b07-a778-1ee89adbf076
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/52a7668b-f0ac-4b07-a778-1ee89adbf076.pid.haproxy
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 52a7668b-f0ac-4b07-a778-1ee89adbf076
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:29:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:22.248 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'env', 'PROCESS_TAG=haproxy-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52a7668b-f0ac-4b07-a778-1ee89adbf076.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.292 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.311 253542 DEBUG nova.storage.rbd_utils [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] removing snapshot(ac64e0e5302e4b08871927b6e4b87158) on rbd image(8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.363 253542 INFO nova.virt.libvirt.driver [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Deleting instance files /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719_del
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.363 253542 INFO nova.virt.libvirt.driver [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Deletion of /var/lib/nova/instances/0067149a-8f99-4257-af2a-fd9adcc41719_del complete
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.427 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059362.4252894, 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.428 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] VM Started (Lifecycle Event)
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.431 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.436 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.436 253542 INFO nova.compute.manager [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Took 1.16 seconds to destroy the instance on the hypervisor.
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.437 253542 DEBUG oslo.service.loopingcall [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.437 253542 DEBUG nova.compute.manager [-] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.437 253542 DEBUG nova.network.neutron [-] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.442 253542 INFO nova.virt.libvirt.driver [-] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Instance spawned successfully.
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.442 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.456 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.461 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.466 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.466 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.467 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.467 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.467 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.468 253542 DEBUG nova.virt.libvirt.driver [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.487 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.487 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059362.4257002, 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.487 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] VM Paused (Lifecycle Event)
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.514 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.517 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059362.441391, 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.517 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] VM Resumed (Lifecycle Event)
Nov 25 08:29:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 25 08:29:22 compute-0 ceph-mon[75015]: osdmap e152: 3 total, 3 up, 3 in
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.534 253542 INFO nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Took 8.04 seconds to spawn the instance on the hypervisor.
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.534 253542 DEBUG nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.536 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 25 08:29:22 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.546 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.575 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.598 253542 INFO nova.compute.manager [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Took 9.18 seconds to build instance.
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.611 253542 DEBUG oslo_concurrency.lockutils [None req-3b0c1526-8a8f-41e3-b395-6f88020e2279 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:22 compute-0 nova_compute[253538]: 2025-11-25 08:29:22.635 253542 DEBUG nova.storage.rbd_utils [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] creating snapshot(snap) on rbd image(148b37ac-1ea9-4409-a4ce-912163252ba4) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:29:22 compute-0 podman[297134]: 2025-11-25 08:29:22.68654021 +0000 UTC m=+0.088498008 container create 371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 08:29:22 compute-0 podman[297134]: 2025-11-25 08:29:22.622299724 +0000 UTC m=+0.024257552 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:29:22 compute-0 systemd[1]: Started libpod-conmon-371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3.scope.
Nov 25 08:29:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/362e28361050f34fb4be0843849315f4707ed0cafb024b05b82f317688c6fbf5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:29:22 compute-0 podman[297134]: 2025-11-25 08:29:22.774434069 +0000 UTC m=+0.176391877 container init 371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:29:22 compute-0 podman[297134]: 2025-11-25 08:29:22.781954397 +0000 UTC m=+0.183912185 container start 371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 08:29:22 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[297166]: [NOTICE]   (297170) : New worker (297172) forked
Nov 25 08:29:22 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[297166]: [NOTICE]   (297170) : Loading success.
Nov 25 08:29:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 306 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 2.9 MiB/s wr, 333 op/s
Nov 25 08:29:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:29:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:29:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:29:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:29:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:29:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:29:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 25 08:29:23 compute-0 ceph-mon[75015]: osdmap e153: 3 total, 3 up, 3 in
Nov 25 08:29:23 compute-0 ceph-mon[75015]: pgmap v1358: 321 pgs: 321 active+clean; 306 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 2.9 MiB/s wr, 333 op/s
Nov 25 08:29:23 compute-0 nova_compute[253538]: 2025-11-25 08:29:23.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:29:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 25 08:29:23 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 25 08:29:23 compute-0 nova_compute[253538]: 2025-11-25 08:29:23.798 253542 DEBUG nova.network.neutron [-] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:23 compute-0 nova_compute[253538]: 2025-11-25 08:29:23.813 253542 INFO nova.compute.manager [-] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Took 1.38 seconds to deallocate network for instance.
Nov 25 08:29:23 compute-0 nova_compute[253538]: 2025-11-25 08:29:23.851 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:23 compute-0 nova_compute[253538]: 2025-11-25 08:29:23.852 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:23 compute-0 nova_compute[253538]: 2025-11-25 08:29:23.970 253542 DEBUG oslo_concurrency.processutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.029 253542 DEBUG nova.compute.manager [req-8e26a821-5c81-4a85-8a8e-5fffecb8d6e3 req-771722c4-e599-479a-a77b-d4c0d907ce95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received event network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.030 253542 DEBUG oslo_concurrency.lockutils [req-8e26a821-5c81-4a85-8a8e-5fffecb8d6e3 req-771722c4-e599-479a-a77b-d4c0d907ce95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.030 253542 DEBUG oslo_concurrency.lockutils [req-8e26a821-5c81-4a85-8a8e-5fffecb8d6e3 req-771722c4-e599-479a-a77b-d4c0d907ce95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.030 253542 DEBUG oslo_concurrency.lockutils [req-8e26a821-5c81-4a85-8a8e-5fffecb8d6e3 req-771722c4-e599-479a-a77b-d4c0d907ce95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.030 253542 DEBUG nova.compute.manager [req-8e26a821-5c81-4a85-8a8e-5fffecb8d6e3 req-771722c4-e599-479a-a77b-d4c0d907ce95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] No waiting events found dispatching network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.031 253542 WARNING nova.compute.manager [req-8e26a821-5c81-4a85-8a8e-5fffecb8d6e3 req-771722c4-e599-479a-a77b-d4c0d907ce95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received unexpected event network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c for instance with vm_state active and task_state None.
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.204 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.204 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.204 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.204 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.205 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.207 253542 INFO nova.compute.manager [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Terminating instance
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.207 253542 DEBUG nova.compute.manager [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:29:24 compute-0 kernel: tap19217fbd-a1 (unregistering): left promiscuous mode
Nov 25 08:29:24 compute-0 NetworkManager[48915]: <info>  [1764059364.2425] device (tap19217fbd-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.253 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:24 compute-0 ovn_controller[152859]: 2025-11-25T08:29:24Z|00254|binding|INFO|Releasing lport 19217fbd-a123-469a-b432-be5d2543613c from this chassis (sb_readonly=0)
Nov 25 08:29:24 compute-0 ovn_controller[152859]: 2025-11-25T08:29:24Z|00255|binding|INFO|Setting lport 19217fbd-a123-469a-b432-be5d2543613c down in Southbound
Nov 25 08:29:24 compute-0 ovn_controller[152859]: 2025-11-25T08:29:24Z|00256|binding|INFO|Removing iface tap19217fbd-a1 ovn-installed in OVS
Nov 25 08:29:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.262 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:9f:bb 10.100.0.10'], port_security=['fa:16:3e:85:9f:bb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8a8b6989-6ea7-4cf7-ad21-a1563967c7f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5b1125d171240e2895276836b4fd6d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bd498cc2-3a1f-4313-a3a9-7b1fa737f1eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98666ab-8fd3-4256-b0ce-536c54c0072e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=19217fbd-a123-469a-b432-be5d2543613c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:29:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.263 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 19217fbd-a123-469a-b432-be5d2543613c in datapath 52a7668b-f0ac-4b07-a778-1ee89adbf076 unbound from our chassis
Nov 25 08:29:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.265 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52a7668b-f0ac-4b07-a778-1ee89adbf076, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:29:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.266 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f6962f25-a17c-44e9-999e-dbe814c6079d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.267 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 namespace which is not needed anymore
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.276 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:24 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000026.scope: Deactivated successfully.
Nov 25 08:29:24 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000026.scope: Consumed 2.191s CPU time.
Nov 25 08:29:24 compute-0 systemd-machined[215790]: Machine qemu-43-instance-00000026 terminated.
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.371 253542 DEBUG nova.compute.manager [req-67bc24ee-244d-4169-ac18-e8205afb00f5 req-3ae60689-e8e2-4181-910d-a40e84107fff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Received event network-vif-deleted-f3dedfca-04a0-44af-bca1-33a95c9804fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:24 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[297166]: [NOTICE]   (297170) : haproxy version is 2.8.14-c23fe91
Nov 25 08:29:24 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[297166]: [NOTICE]   (297170) : path to executable is /usr/sbin/haproxy
Nov 25 08:29:24 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[297166]: [WARNING]  (297170) : Exiting Master process...
Nov 25 08:29:24 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[297166]: [ALERT]    (297170) : Current worker (297172) exited with code 143 (Terminated)
Nov 25 08:29:24 compute-0 neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076[297166]: [WARNING]  (297170) : All workers exited. Exiting... (0)
Nov 25 08:29:24 compute-0 systemd[1]: libpod-371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3.scope: Deactivated successfully.
Nov 25 08:29:24 compute-0 conmon[297166]: conmon 371385d5ffc10739b94c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3.scope/container/memory.events
Nov 25 08:29:24 compute-0 podman[297222]: 2025-11-25 08:29:24.401589923 +0000 UTC m=+0.046008324 container died 371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 08:29:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:29:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913488154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.447 253542 INFO nova.virt.libvirt.driver [-] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Instance destroyed successfully.
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.447 253542 DEBUG nova.objects.instance [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lazy-loading 'resources' on Instance uuid 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.453 253542 DEBUG oslo_concurrency.processutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3-userdata-shm.mount: Deactivated successfully.
Nov 25 08:29:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-362e28361050f34fb4be0843849315f4707ed0cafb024b05b82f317688c6fbf5-merged.mount: Deactivated successfully.
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.461 253542 DEBUG nova.virt.libvirt.vif [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:29:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1871314991',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1871314991',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1871314991',id=38,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:29:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5b1125d171240e2895276836b4fd6d7',ramdisk_id='',reservation_id='r-np4uml6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-192511421',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-192511421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:29:22Z,user_data=None,user_id='8350a560f2bc4b57a5da0e3a1f582f82',uuid=8a8b6989-6ea7-4cf7-ad21-a1563967c7f4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.461 253542 DEBUG nova.network.os_vif_util [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converting VIF {"id": "19217fbd-a123-469a-b432-be5d2543613c", "address": "fa:16:3e:85:9f:bb", "network": {"id": "52a7668b-f0ac-4b07-a778-1ee89adbf076", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1430369364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5b1125d171240e2895276836b4fd6d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19217fbd-a1", "ovs_interfaceid": "19217fbd-a123-469a-b432-be5d2543613c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.462 253542 DEBUG nova.network.os_vif_util [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:9f:bb,bridge_name='br-int',has_traffic_filtering=True,id=19217fbd-a123-469a-b432-be5d2543613c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19217fbd-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.463 253542 DEBUG os_vif [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:9f:bb,bridge_name='br-int',has_traffic_filtering=True,id=19217fbd-a123-469a-b432-be5d2543613c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19217fbd-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.465 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.465 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19217fbd-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.466 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.469 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:29:24 compute-0 podman[297222]: 2025-11-25 08:29:24.471962638 +0000 UTC m=+0.116381039 container cleanup 371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.471 253542 INFO os_vif [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:9f:bb,bridge_name='br-int',has_traffic_filtering=True,id=19217fbd-a123-469a-b432-be5d2543613c,network=Network(52a7668b-f0ac-4b07-a778-1ee89adbf076),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19217fbd-a1')
Nov 25 08:29:24 compute-0 systemd[1]: libpod-conmon-371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3.scope: Deactivated successfully.
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.494 253542 DEBUG nova.compute.provider_tree [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.514 253542 DEBUG nova.scheduler.client.report [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.536 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:24 compute-0 podman[297269]: 2025-11-25 08:29:24.541994293 +0000 UTC m=+0.040451139 container remove 371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 08:29:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.550 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cf828c7e-a64a-449e-a008-ebeaa1e6fb31]: (4, ('Tue Nov 25 08:29:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 (371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3)\n371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3\nTue Nov 25 08:29:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 (371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3)\n371385d5ffc10739b94c041f3d057447f138b63edba250e5c30c3c74cfd71ae3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.551 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[07816e29-af07-47f3-9158-70c2243ae057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.552 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52a7668b-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.553 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:24 compute-0 kernel: tap52a7668b-f0: left promiscuous mode
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.560 253542 INFO nova.scheduler.client.report [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Deleted allocations for instance 0067149a-8f99-4257-af2a-fd9adcc41719
Nov 25 08:29:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.567 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0dcc4a9d-e67b-4750-9130-685b0d8c086a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:24 compute-0 ceph-mon[75015]: osdmap e154: 3 total, 3 up, 3 in
Nov 25 08:29:24 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/913488154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.584 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4e324810-1c2b-4bc1-9aa2-d87268f14914]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.585 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b80679ea-15f7-4519-b0ae-3a8e12ed9d4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.601 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9b90cf-cbb7-41a5-b86d-465082dde436]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471653, 'reachable_time': 26697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297297, 'error': None, 'target': 'ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d52a7668b\x2df0ac\x2d4b07\x2da778\x2d1ee89adbf076.mount: Deactivated successfully.
Nov 25 08:29:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.606 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-52a7668b-f0ac-4b07-a778-1ee89adbf076 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:29:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:24.606 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[677c4762-fdb8-4800-9032-5d2ebce3650a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.621 253542 DEBUG oslo_concurrency.lockutils [None req-f7dfe45d-327e-47e6-b09f-33bac34bd270 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "0067149a-8f99-4257-af2a-fd9adcc41719" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.876 253542 INFO nova.virt.libvirt.driver [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Deleting instance files /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_del
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.877 253542 INFO nova.virt.libvirt.driver [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Deletion of /var/lib/nova/instances/8a8b6989-6ea7-4cf7-ad21-a1563967c7f4_del complete
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.914 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.927 253542 INFO nova.compute.manager [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Took 0.72 seconds to destroy the instance on the hypervisor.
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.928 253542 DEBUG oslo.service.loopingcall [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.928 253542 DEBUG nova.compute.manager [-] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:29:24 compute-0 nova_compute[253538]: 2025-11-25 08:29:24.928 253542 DEBUG nova.network.neutron [-] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:29:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 340 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 3.2 MiB/s wr, 494 op/s
Nov 25 08:29:25 compute-0 nova_compute[253538]: 2025-11-25 08:29:25.245 253542 INFO nova.virt.libvirt.driver [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Snapshot image upload complete
Nov 25 08:29:25 compute-0 nova_compute[253538]: 2025-11-25 08:29:25.245 253542 INFO nova.compute.manager [None req-03d2cd0f-2ae1-4e1f-a91a-eab170244953 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Took 5.42 seconds to snapshot the instance on the hypervisor.
Nov 25 08:29:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:29:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:25.598 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:29:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:25.599 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:29:25 compute-0 nova_compute[253538]: 2025-11-25 08:29:25.604 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:25 compute-0 nova_compute[253538]: 2025-11-25 08:29:25.620 253542 DEBUG nova.network.neutron [-] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:25 compute-0 ceph-mon[75015]: pgmap v1360: 321 pgs: 321 active+clean; 340 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 3.2 MiB/s wr, 494 op/s
Nov 25 08:29:25 compute-0 nova_compute[253538]: 2025-11-25 08:29:25.646 253542 INFO nova.compute.manager [-] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Took 0.72 seconds to deallocate network for instance.
Nov 25 08:29:25 compute-0 nova_compute[253538]: 2025-11-25 08:29:25.690 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:25 compute-0 nova_compute[253538]: 2025-11-25 08:29:25.690 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:25 compute-0 nova_compute[253538]: 2025-11-25 08:29:25.795 253542 DEBUG oslo_concurrency.processutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:29:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3603604533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.333 253542 DEBUG oslo_concurrency.processutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.338 253542 DEBUG nova.compute.provider_tree [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.351 253542 DEBUG nova.scheduler.client.report [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.374 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.400 253542 INFO nova.scheduler.client.report [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Deleted allocations for instance 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.478 253542 DEBUG oslo_concurrency.lockutils [None req-ce3b7f4c-22f4-4056-a25c-047ba7dc62ce 8350a560f2bc4b57a5da0e3a1f582f82 c5b1125d171240e2895276836b4fd6d7 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.274s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.510 253542 DEBUG nova.compute.manager [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received event network-vif-unplugged-19217fbd-a123-469a-b432-be5d2543613c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.510 253542 DEBUG oslo_concurrency.lockutils [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.511 253542 DEBUG oslo_concurrency.lockutils [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.512 253542 DEBUG oslo_concurrency.lockutils [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.512 253542 DEBUG nova.compute.manager [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] No waiting events found dispatching network-vif-unplugged-19217fbd-a123-469a-b432-be5d2543613c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.513 253542 WARNING nova.compute.manager [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received unexpected event network-vif-unplugged-19217fbd-a123-469a-b432-be5d2543613c for instance with vm_state deleted and task_state None.
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.513 253542 DEBUG nova.compute.manager [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received event network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.514 253542 DEBUG oslo_concurrency.lockutils [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.515 253542 DEBUG oslo_concurrency.lockutils [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.515 253542 DEBUG oslo_concurrency.lockutils [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a8b6989-6ea7-4cf7-ad21-a1563967c7f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.516 253542 DEBUG nova.compute.manager [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] No waiting events found dispatching network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.516 253542 WARNING nova.compute.manager [req-690552b6-e2a9-42b8-b243-3f0ba48b1704 req-2cbd6bff-d12c-4c58-bbf6-2b4c03b2ea42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received unexpected event network-vif-plugged-19217fbd-a123-469a-b432-be5d2543613c for instance with vm_state deleted and task_state None.
Nov 25 08:29:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 25 08:29:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3603604533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 25 08:29:26 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 25 08:29:26 compute-0 nova_compute[253538]: 2025-11-25 08:29:26.664 253542 DEBUG nova.compute.manager [req-817e43e1-64ef-4af2-93dc-ce3973720dda req-dbab75b1-4cde-4bae-9ca0-bb6228e6a783 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Received event network-vif-deleted-19217fbd-a123-469a-b432-be5d2543613c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.064 253542 DEBUG nova.compute.manager [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 328 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 4.0 MiB/s wr, 486 op/s
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.121 253542 INFO nova.compute.manager [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] instance snapshotting
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.185 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059352.130481, cf07e611-51eb-4bcf-8757-8f75d3807da6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.186 253542 INFO nova.compute.manager [-] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] VM Stopped (Lifecycle Event)
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.202 253542 DEBUG nova.compute.manager [None req-a72e1acd-12cf-4ca1-9a7e-d458e9a00069 - - - - - -] [instance: cf07e611-51eb-4bcf-8757-8f75d3807da6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.343 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.343 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.344 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.344 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.345 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.347 253542 INFO nova.compute.manager [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Terminating instance
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.349 253542 DEBUG nova.compute.manager [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.360 253542 INFO nova.virt.libvirt.driver [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Beginning live snapshot process
Nov 25 08:29:27 compute-0 kernel: tap799c50c8-d1 (unregistering): left promiscuous mode
Nov 25 08:29:27 compute-0 NetworkManager[48915]: <info>  [1764059367.4177] device (tap799c50c8-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.429 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:27 compute-0 ovn_controller[152859]: 2025-11-25T08:29:27Z|00257|binding|INFO|Releasing lport 799c50c8-d1e7-4c15-a3d9-29903d576304 from this chassis (sb_readonly=0)
Nov 25 08:29:27 compute-0 ovn_controller[152859]: 2025-11-25T08:29:27Z|00258|binding|INFO|Setting lport 799c50c8-d1e7-4c15-a3d9-29903d576304 down in Southbound
Nov 25 08:29:27 compute-0 ovn_controller[152859]: 2025-11-25T08:29:27Z|00259|binding|INFO|Removing iface tap799c50c8-d1 ovn-installed in OVS
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.434 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.457 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.459 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:03:96 10.100.0.12'], port_security=['fa:16:3e:fd:03:96 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8b29fb31-718d-4926-bf4f-bae461ea70ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=799c50c8-d1e7-4c15-a3d9-29903d576304) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.461 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 799c50c8-d1e7-4c15-a3d9-29903d576304 in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e unbound from our chassis
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.463 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba659d6c-c094-47d7-ba45-d0e659ce778e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.464 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[76001237-7f43-4b5d-975c-9009dbfbb9df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.464 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace which is not needed anymore
Nov 25 08:29:27 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000021.scope: Deactivated successfully.
Nov 25 08:29:27 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000021.scope: Consumed 13.842s CPU time.
Nov 25 08:29:27 compute-0 systemd-machined[215790]: Machine qemu-38-instance-00000021 terminated.
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.531 253542 DEBUG nova.virt.libvirt.imagebackend [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:29:27 compute-0 kernel: tap799c50c8-d1: entered promiscuous mode
Nov 25 08:29:27 compute-0 systemd-udevd[297343]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:29:27 compute-0 NetworkManager[48915]: <info>  [1764059367.5829] manager: (tap799c50c8-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/123)
Nov 25 08:29:27 compute-0 ovn_controller[152859]: 2025-11-25T08:29:27Z|00260|binding|INFO|Claiming lport 799c50c8-d1e7-4c15-a3d9-29903d576304 for this chassis.
Nov 25 08:29:27 compute-0 ovn_controller[152859]: 2025-11-25T08:29:27Z|00261|binding|INFO|799c50c8-d1e7-4c15-a3d9-29903d576304: Claiming fa:16:3e:fd:03:96 10.100.0.12
Nov 25 08:29:27 compute-0 kernel: tap799c50c8-d1 (unregistering): left promiscuous mode
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.633 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:27 compute-0 virtnodedevd[239165]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 08:29:27 compute-0 virtnodedevd[239165]: hostname: compute-0
Nov 25 08:29:27 compute-0 virtnodedevd[239165]: ethtool ioctl error on tap799c50c8-d1: No such device
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.640 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:03:96 10.100.0.12'], port_security=['fa:16:3e:fd:03:96 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8b29fb31-718d-4926-bf4f-bae461ea70ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=799c50c8-d1e7-4c15-a3d9-29903d576304) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:29:27 compute-0 virtnodedevd[239165]: ethtool ioctl error on tap799c50c8-d1: No such device
Nov 25 08:29:27 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[294697]: [NOTICE]   (294717) : haproxy version is 2.8.14-c23fe91
Nov 25 08:29:27 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[294697]: [NOTICE]   (294717) : path to executable is /usr/sbin/haproxy
Nov 25 08:29:27 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[294697]: [WARNING]  (294717) : Exiting Master process...
Nov 25 08:29:27 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[294697]: [ALERT]    (294717) : Current worker (294722) exited with code 143 (Terminated)
Nov 25 08:29:27 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[294697]: [WARNING]  (294717) : All workers exited. Exiting... (0)
Nov 25 08:29:27 compute-0 systemd[1]: libpod-74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3.scope: Deactivated successfully.
Nov 25 08:29:27 compute-0 virtnodedevd[239165]: ethtool ioctl error on tap799c50c8-d1: No such device
Nov 25 08:29:27 compute-0 podman[297377]: 2025-11-25 08:29:27.657118384 +0000 UTC m=+0.099305377 container died 74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:29:27 compute-0 virtnodedevd[239165]: ethtool ioctl error on tap799c50c8-d1: No such device
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.661 253542 INFO nova.virt.libvirt.driver [-] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Instance destroyed successfully.
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.662 253542 DEBUG nova.objects.instance [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'resources' on Instance uuid 8b29fb31-718d-4926-bf4f-bae461ea70ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:27 compute-0 ceph-mon[75015]: osdmap e155: 3 total, 3 up, 3 in
Nov 25 08:29:27 compute-0 virtnodedevd[239165]: ethtool ioctl error on tap799c50c8-d1: No such device
Nov 25 08:29:27 compute-0 ceph-mon[75015]: pgmap v1362: 321 pgs: 321 active+clean; 328 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 4.0 MiB/s wr, 486 op/s
Nov 25 08:29:27 compute-0 ovn_controller[152859]: 2025-11-25T08:29:27Z|00262|binding|INFO|Setting lport 799c50c8-d1e7-4c15-a3d9-29903d576304 ovn-installed in OVS
Nov 25 08:29:27 compute-0 ovn_controller[152859]: 2025-11-25T08:29:27Z|00263|binding|INFO|Setting lport 799c50c8-d1e7-4c15-a3d9-29903d576304 up in Southbound
Nov 25 08:29:27 compute-0 ovn_controller[152859]: 2025-11-25T08:29:27Z|00264|binding|INFO|Releasing lport 799c50c8-d1e7-4c15-a3d9-29903d576304 from this chassis (sb_readonly=1)
Nov 25 08:29:27 compute-0 ovn_controller[152859]: 2025-11-25T08:29:27Z|00265|if_status|INFO|Not setting lport 799c50c8-d1e7-4c15-a3d9-29903d576304 down as sb is readonly
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.670 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:27 compute-0 ovn_controller[152859]: 2025-11-25T08:29:27Z|00266|binding|INFO|Removing iface tap799c50c8-d1 ovn-installed in OVS
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.672 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.674 253542 DEBUG nova.virt.libvirt.vif [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:28:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1025077292',display_name='tempest-ImagesTestJSON-server-1025077292',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1025077292',id=33,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:28:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-4kbbjx71',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:29:04Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=8b29fb31-718d-4926-bf4f-bae461ea70ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.674 253542 DEBUG nova.network.os_vif_util [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "799c50c8-d1e7-4c15-a3d9-29903d576304", "address": "fa:16:3e:fd:03:96", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap799c50c8-d1", "ovs_interfaceid": "799c50c8-d1e7-4c15-a3d9-29903d576304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.675 253542 DEBUG nova.network.os_vif_util [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fd:03:96,bridge_name='br-int',has_traffic_filtering=True,id=799c50c8-d1e7-4c15-a3d9-29903d576304,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap799c50c8-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:29:27 compute-0 ovn_controller[152859]: 2025-11-25T08:29:27Z|00267|binding|INFO|Releasing lport 799c50c8-d1e7-4c15-a3d9-29903d576304 from this chassis (sb_readonly=0)
Nov 25 08:29:27 compute-0 ovn_controller[152859]: 2025-11-25T08:29:27Z|00268|binding|INFO|Setting lport 799c50c8-d1e7-4c15-a3d9-29903d576304 down in Southbound
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.675 253542 DEBUG os_vif [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:03:96,bridge_name='br-int',has_traffic_filtering=True,id=799c50c8-d1e7-4c15-a3d9-29903d576304,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap799c50c8-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:29:27 compute-0 virtnodedevd[239165]: ethtool ioctl error on tap799c50c8-d1: No such device
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.679 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.679 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap799c50c8-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.683 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.686 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:29:27 compute-0 virtnodedevd[239165]: ethtool ioctl error on tap799c50c8-d1: No such device
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.687 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:03:96 10.100.0.12'], port_security=['fa:16:3e:fd:03:96 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8b29fb31-718d-4926-bf4f-bae461ea70ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=799c50c8-d1e7-4c15-a3d9-29903d576304) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:27 compute-0 virtnodedevd[239165]: ethtool ioctl error on tap799c50c8-d1: No such device
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.696 253542 INFO os_vif [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:03:96,bridge_name='br-int',has_traffic_filtering=True,id=799c50c8-d1e7-4c15-a3d9-29903d576304,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap799c50c8-d1')
Nov 25 08:29:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3-userdata-shm.mount: Deactivated successfully.
Nov 25 08:29:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fb3ae8526a3fd45e0f449dbfc51ea63977bc5a178cd81680ec65b696722764a-merged.mount: Deactivated successfully.
Nov 25 08:29:27 compute-0 podman[297377]: 2025-11-25 08:29:27.720235898 +0000 UTC m=+0.162422941 container cleanup 74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 08:29:27 compute-0 systemd[1]: libpod-conmon-74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3.scope: Deactivated successfully.
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.746 253542 DEBUG nova.storage.rbd_utils [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] creating snapshot(6c0f9d2c7de3419c86919344433cfae0) on rbd image(225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:29:27 compute-0 podman[297444]: 2025-11-25 08:29:27.810347689 +0000 UTC m=+0.058385225 container remove 74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.819 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8388b7-ceab-4672-9dc2-ac02a5551c22]: (4, ('Tue Nov 25 08:29:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3)\n74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3\nTue Nov 25 08:29:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3)\n74d9d88aec0d43eba2eea2e94d83fd5630607c65e951353c7bdd0b61f83bfda3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.821 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5886fe56-8c77-4e14-878c-74d5db841da1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.823 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:27 compute-0 kernel: tapba659d6c-c0: left promiscuous mode
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.828 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c777495c-d75e-4008-9870-3e9cdad1c072]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.830 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:27 compute-0 nova_compute[253538]: 2025-11-25 08:29:27.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.848 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d34f734-1cbc-437f-a97a-58693fbe9309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.850 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fe59fe-3874-4f88-97a9-9f6803a23e16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.867 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bacf4c0c-64d9-4877-be6f-c432bd39221c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468335, 'reachable_time': 31048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297479, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:27 compute-0 systemd[1]: run-netns-ovnmeta\x2dba659d6c\x2dc094\x2d47d7\x2dba45\x2dd0e659ce778e.mount: Deactivated successfully.
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.874 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.874 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a945e3-b816-44d6-8f99-55468c06a6e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.875 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 799c50c8-d1e7-4c15-a3d9-29903d576304 in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e unbound from our chassis
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.877 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba659d6c-c094-47d7-ba45-d0e659ce778e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.878 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bb54484d-e314-4f4d-857e-9fe13438f490]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.879 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 799c50c8-d1e7-4c15-a3d9-29903d576304 in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e unbound from our chassis
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.881 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba659d6c-c094-47d7-ba45-d0e659ce778e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:29:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:27.882 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79ea8b00-3f2f-4f89-b37c-8de1cf1316be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.125 253542 INFO nova.virt.libvirt.driver [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Deleting instance files /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef_del
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.126 253542 INFO nova.virt.libvirt.driver [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Deletion of /var/lib/nova/instances/8b29fb31-718d-4926-bf4f-bae461ea70ef_del complete
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.192 253542 INFO nova.compute.manager [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Took 0.84 seconds to destroy the instance on the hypervisor.
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.192 253542 DEBUG oslo.service.loopingcall [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.192 253542 DEBUG nova.compute.manager [-] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.192 253542 DEBUG nova.network.neutron [-] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:29:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 25 08:29:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 25 08:29:28 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.716 253542 DEBUG nova.storage.rbd_utils [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] cloning vms/225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk@6c0f9d2c7de3419c86919344433cfae0 to images/f65b3684-821a-49e5-bd6a-65afe2f061e8 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.761 253542 DEBUG nova.compute.manager [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received event network-vif-unplugged-799c50c8-d1e7-4c15-a3d9-29903d576304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.763 253542 DEBUG oslo_concurrency.lockutils [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.764 253542 DEBUG oslo_concurrency.lockutils [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.764 253542 DEBUG oslo_concurrency.lockutils [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.764 253542 DEBUG nova.compute.manager [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] No waiting events found dispatching network-vif-unplugged-799c50c8-d1e7-4c15-a3d9-29903d576304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.765 253542 DEBUG nova.compute.manager [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received event network-vif-unplugged-799c50c8-d1e7-4c15-a3d9-29903d576304 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.765 253542 DEBUG nova.compute.manager [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.765 253542 DEBUG oslo_concurrency.lockutils [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.766 253542 DEBUG oslo_concurrency.lockutils [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.767 253542 DEBUG oslo_concurrency.lockutils [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.767 253542 DEBUG nova.compute.manager [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] No waiting events found dispatching network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.768 253542 WARNING nova.compute.manager [req-c608692c-825e-49a1-99e5-edea536d24c3 req-83f5f79b-57b2-4006-9638-763c4c5f3436 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received unexpected event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 for instance with vm_state active and task_state deleting.
Nov 25 08:29:28 compute-0 nova_compute[253538]: 2025-11-25 08:29:28.876 253542 DEBUG nova.storage.rbd_utils [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] flattening images/f65b3684-821a-49e5-bd6a-65afe2f061e8 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:29:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:29:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/955554572' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:29:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:29:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/955554572' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:29:29 compute-0 nova_compute[253538]: 2025-11-25 08:29:29.104 253542 DEBUG nova.storage.rbd_utils [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] removing snapshot(6c0f9d2c7de3419c86919344433cfae0) on rbd image(225f80e2-9e66-46fb-b77d-9a54fa8a2a41_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:29:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 280 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 4.8 MiB/s wr, 448 op/s
Nov 25 08:29:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 25 08:29:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 25 08:29:29 compute-0 ceph-mon[75015]: osdmap e156: 3 total, 3 up, 3 in
Nov 25 08:29:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/955554572' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:29:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/955554572' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:29:29 compute-0 ceph-mon[75015]: pgmap v1364: 321 pgs: 321 active+clean; 280 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 4.8 MiB/s wr, 448 op/s
Nov 25 08:29:29 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 25 08:29:29 compute-0 nova_compute[253538]: 2025-11-25 08:29:29.709 253542 DEBUG nova.storage.rbd_utils [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] creating snapshot(snap) on rbd image(f65b3684-821a-49e5-bd6a-65afe2f061e8) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:29:29 compute-0 nova_compute[253538]: 2025-11-25 08:29:29.946 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:29:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 25 08:29:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 25 08:29:30 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 25 08:29:30 compute-0 nova_compute[253538]: 2025-11-25 08:29:30.426 253542 DEBUG nova.network.neutron [-] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:30 compute-0 nova_compute[253538]: 2025-11-25 08:29:30.442 253542 INFO nova.compute.manager [-] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Took 2.25 seconds to deallocate network for instance.
Nov 25 08:29:30 compute-0 nova_compute[253538]: 2025-11-25 08:29:30.484 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:30 compute-0 nova_compute[253538]: 2025-11-25 08:29:30.484 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:30 compute-0 nova_compute[253538]: 2025-11-25 08:29:30.489 253542 DEBUG nova.compute.manager [req-5bec9bd3-d80a-4b58-a58f-d5b1fa8fc4d9 req-7d391862-b113-4c47-af88-3a85fbf2804c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received event network-vif-deleted-799c50c8-d1e7-4c15-a3d9-29903d576304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:30 compute-0 nova_compute[253538]: 2025-11-25 08:29:30.574 253542 DEBUG oslo_concurrency.processutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:30 compute-0 ceph-mon[75015]: osdmap e157: 3 total, 3 up, 3 in
Nov 25 08:29:30 compute-0 ceph-mon[75015]: osdmap e158: 3 total, 3 up, 3 in
Nov 25 08:29:30 compute-0 nova_compute[253538]: 2025-11-25 08:29:30.884 253542 DEBUG nova.compute.manager [req-087c7b64-5957-4c4b-b328-05a94cee6f54 req-ad5d6ec3-076f-47f9-973e-01881daeafa2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:30 compute-0 nova_compute[253538]: 2025-11-25 08:29:30.885 253542 DEBUG oslo_concurrency.lockutils [req-087c7b64-5957-4c4b-b328-05a94cee6f54 req-ad5d6ec3-076f-47f9-973e-01881daeafa2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:30 compute-0 nova_compute[253538]: 2025-11-25 08:29:30.885 253542 DEBUG oslo_concurrency.lockutils [req-087c7b64-5957-4c4b-b328-05a94cee6f54 req-ad5d6ec3-076f-47f9-973e-01881daeafa2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:30 compute-0 nova_compute[253538]: 2025-11-25 08:29:30.886 253542 DEBUG oslo_concurrency.lockutils [req-087c7b64-5957-4c4b-b328-05a94cee6f54 req-ad5d6ec3-076f-47f9-973e-01881daeafa2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:30 compute-0 nova_compute[253538]: 2025-11-25 08:29:30.886 253542 DEBUG nova.compute.manager [req-087c7b64-5957-4c4b-b328-05a94cee6f54 req-ad5d6ec3-076f-47f9-973e-01881daeafa2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] No waiting events found dispatching network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:29:30 compute-0 nova_compute[253538]: 2025-11-25 08:29:30.886 253542 WARNING nova.compute.manager [req-087c7b64-5957-4c4b-b328-05a94cee6f54 req-ad5d6ec3-076f-47f9-973e-01881daeafa2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Received unexpected event network-vif-plugged-799c50c8-d1e7-4c15-a3d9-29903d576304 for instance with vm_state deleted and task_state None.
Nov 25 08:29:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:29:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4047568817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:31 compute-0 nova_compute[253538]: 2025-11-25 08:29:31.085 253542 DEBUG oslo_concurrency.processutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:31 compute-0 nova_compute[253538]: 2025-11-25 08:29:31.096 253542 DEBUG nova.compute.provider_tree [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:29:31 compute-0 nova_compute[253538]: 2025-11-25 08:29:31.112 253542 DEBUG nova.scheduler.client.report [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:29:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 267 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 12 MiB/s wr, 725 op/s
Nov 25 08:29:31 compute-0 nova_compute[253538]: 2025-11-25 08:29:31.138 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:31 compute-0 nova_compute[253538]: 2025-11-25 08:29:31.176 253542 INFO nova.scheduler.client.report [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Deleted allocations for instance 8b29fb31-718d-4926-bf4f-bae461ea70ef
Nov 25 08:29:31 compute-0 nova_compute[253538]: 2025-11-25 08:29:31.281 253542 DEBUG oslo_concurrency.lockutils [None req-dae6bf40-45fc-4390-ad8f-9e0a2e1fba54 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "8b29fb31-718d-4926-bf4f-bae461ea70ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:31 compute-0 nova_compute[253538]: 2025-11-25 08:29:31.521 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "34941187-b6b9-4153-a8b9-6f5c00f10dda" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:31 compute-0 nova_compute[253538]: 2025-11-25 08:29:31.522 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:31 compute-0 nova_compute[253538]: 2025-11-25 08:29:31.542 253542 DEBUG nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:29:31 compute-0 nova_compute[253538]: 2025-11-25 08:29:31.613 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:31 compute-0 nova_compute[253538]: 2025-11-25 08:29:31.614 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:31 compute-0 nova_compute[253538]: 2025-11-25 08:29:31.624 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:29:31 compute-0 nova_compute[253538]: 2025-11-25 08:29:31.625 253542 INFO nova.compute.claims [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:29:31 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4047568817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:31 compute-0 ceph-mon[75015]: pgmap v1367: 321 pgs: 321 active+clean; 267 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 12 MiB/s wr, 725 op/s
Nov 25 08:29:31 compute-0 nova_compute[253538]: 2025-11-25 08:29:31.783 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:31 compute-0 nova_compute[253538]: 2025-11-25 08:29:31.993 253542 INFO nova.virt.libvirt.driver [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Snapshot image upload complete
Nov 25 08:29:31 compute-0 nova_compute[253538]: 2025-11-25 08:29:31.994 253542 INFO nova.compute.manager [None req-dfbd3510-f758-4bfb-ae01-969361f55bfe 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Took 4.87 seconds to snapshot the instance on the hypervisor.
Nov 25 08:29:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:29:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3376256686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.282 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.290 253542 DEBUG nova.compute.provider_tree [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.303 253542 DEBUG nova.scheduler.client.report [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.332 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.333 253542 DEBUG nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.397 253542 DEBUG nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.398 253542 DEBUG nova.network.neutron [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.418 253542 INFO nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.440 253542 DEBUG nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.568 253542 DEBUG nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.571 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.572 253542 INFO nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Creating image(s)
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.612 253542 DEBUG nova.storage.rbd_utils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.647 253542 DEBUG nova.storage.rbd_utils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.675 253542 DEBUG nova.storage.rbd_utils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.679 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.721 253542 DEBUG nova.policy [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38fa175fb699405c9a05d7c28f994ebc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.724 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3376256686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.767 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.768 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.769 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.769 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.792 253542 DEBUG nova.storage.rbd_utils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:32 compute-0 nova_compute[253538]: 2025-11-25 08:29:32.796 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:32 compute-0 podman[297670]: 2025-11-25 08:29:32.816433965 +0000 UTC m=+0.058768475 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:29:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 276 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 11 MiB/s wr, 436 op/s
Nov 25 08:29:33 compute-0 nova_compute[253538]: 2025-11-25 08:29:33.135 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:33 compute-0 nova_compute[253538]: 2025-11-25 08:29:33.216 253542 DEBUG nova.storage.rbd_utils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] resizing rbd image 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:29:33 compute-0 nova_compute[253538]: 2025-11-25 08:29:33.438 253542 DEBUG nova.objects.instance [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'migration_context' on Instance uuid 34941187-b6b9-4153-a8b9-6f5c00f10dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:33 compute-0 nova_compute[253538]: 2025-11-25 08:29:33.534 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:29:33 compute-0 nova_compute[253538]: 2025-11-25 08:29:33.535 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Ensure instance console log exists: /var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:29:33 compute-0 nova_compute[253538]: 2025-11-25 08:29:33.535 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:33 compute-0 nova_compute[253538]: 2025-11-25 08:29:33.535 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:33 compute-0 nova_compute[253538]: 2025-11-25 08:29:33.536 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:33 compute-0 ceph-mon[75015]: pgmap v1368: 321 pgs: 321 active+clean; 276 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 11 MiB/s wr, 436 op/s
Nov 25 08:29:34 compute-0 nova_compute[253538]: 2025-11-25 08:29:34.131 253542 DEBUG nova.compute.manager [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:34 compute-0 nova_compute[253538]: 2025-11-25 08:29:34.168 253542 INFO nova.compute.manager [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] instance snapshotting
Nov 25 08:29:34 compute-0 nova_compute[253538]: 2025-11-25 08:29:34.398 253542 INFO nova.virt.libvirt.driver [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Beginning live snapshot process
Nov 25 08:29:34 compute-0 nova_compute[253538]: 2025-11-25 08:29:34.570 253542 DEBUG nova.virt.libvirt.imagebackend [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:29:34 compute-0 nova_compute[253538]: 2025-11-25 08:29:34.770 253542 DEBUG nova.network.neutron [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Successfully created port: eab8f4a1-ffde-4387-94df-ebfe864e9534 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:29:34 compute-0 nova_compute[253538]: 2025-11-25 08:29:34.782 253542 DEBUG nova.storage.rbd_utils [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] creating snapshot(46aa4661084f4d7b8fa958b23428336b) on rbd image(8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:29:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 25 08:29:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 25 08:29:34 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 25 08:29:34 compute-0 nova_compute[253538]: 2025-11-25 08:29:34.895 253542 DEBUG nova.storage.rbd_utils [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] cloning vms/8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk@46aa4661084f4d7b8fa958b23428336b to images/f798a86c-e34a-4469-9738-76f1311b65e9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:29:34 compute-0 nova_compute[253538]: 2025-11-25 08:29:34.947 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:35 compute-0 nova_compute[253538]: 2025-11-25 08:29:35.013 253542 DEBUG nova.storage.rbd_utils [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] flattening images/f798a86c-e34a-4469-9738-76f1311b65e9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:29:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 328 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 13 MiB/s wr, 485 op/s
Nov 25 08:29:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:29:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 25 08:29:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 25 08:29:35 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 25 08:29:35 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 25 08:29:35 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 25 08:29:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:35.601 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:35 compute-0 nova_compute[253538]: 2025-11-25 08:29:35.603 253542 DEBUG nova.storage.rbd_utils [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] removing snapshot(46aa4661084f4d7b8fa958b23428336b) on rbd image(8400a9a9-bd7a-434b-a11b-6db7e12a4e18_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:29:35 compute-0 ceph-mon[75015]: osdmap e159: 3 total, 3 up, 3 in
Nov 25 08:29:35 compute-0 ceph-mon[75015]: pgmap v1370: 321 pgs: 321 active+clean; 328 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 13 MiB/s wr, 485 op/s
Nov 25 08:29:35 compute-0 ceph-mon[75015]: osdmap e160: 3 total, 3 up, 3 in
Nov 25 08:29:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 25 08:29:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 25 08:29:36 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 25 08:29:36 compute-0 nova_compute[253538]: 2025-11-25 08:29:36.414 253542 DEBUG nova.storage.rbd_utils [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] creating snapshot(snap) on rbd image(f798a86c-e34a-4469-9738-76f1311b65e9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:29:36 compute-0 nova_compute[253538]: 2025-11-25 08:29:36.511 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059361.5100331, 0067149a-8f99-4257-af2a-fd9adcc41719 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:36 compute-0 nova_compute[253538]: 2025-11-25 08:29:36.512 253542 INFO nova.compute.manager [-] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] VM Stopped (Lifecycle Event)
Nov 25 08:29:36 compute-0 nova_compute[253538]: 2025-11-25 08:29:36.541 253542 DEBUG nova.compute.manager [None req-be29be4c-f1f2-4509-8523-b9d8ec6813e9 - - - - - -] [instance: 0067149a-8f99-4257-af2a-fd9adcc41719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:36 compute-0 sshd-session[297940]: Connection closed by 119.96.131.8 port 60702
Nov 25 08:29:36 compute-0 podman[297941]: 2025-11-25 08:29:36.826327536 +0000 UTC m=+0.077351830 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:29:37 compute-0 nova_compute[253538]: 2025-11-25 08:29:37.044 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 397 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 14 MiB/s wr, 316 op/s
Nov 25 08:29:37 compute-0 nova_compute[253538]: 2025-11-25 08:29:37.212 253542 DEBUG nova.network.neutron [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Successfully updated port: eab8f4a1-ffde-4387-94df-ebfe864e9534 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:29:37 compute-0 nova_compute[253538]: 2025-11-25 08:29:37.231 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "refresh_cache-34941187-b6b9-4153-a8b9-6f5c00f10dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:29:37 compute-0 nova_compute[253538]: 2025-11-25 08:29:37.232 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquired lock "refresh_cache-34941187-b6b9-4153-a8b9-6f5c00f10dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:29:37 compute-0 nova_compute[253538]: 2025-11-25 08:29:37.232 253542 DEBUG nova.network.neutron [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:29:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Nov 25 08:29:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Nov 25 08:29:37 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Nov 25 08:29:37 compute-0 ceph-mon[75015]: osdmap e161: 3 total, 3 up, 3 in
Nov 25 08:29:37 compute-0 nova_compute[253538]: 2025-11-25 08:29:37.362 253542 DEBUG nova.compute.manager [req-aa801ab4-0fa1-45f8-8c35-e6d0221c209b req-10dcdeb1-13d1-487b-9943-5fca30f3d918 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Received event network-changed-eab8f4a1-ffde-4387-94df-ebfe864e9534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:37 compute-0 nova_compute[253538]: 2025-11-25 08:29:37.363 253542 DEBUG nova.compute.manager [req-aa801ab4-0fa1-45f8-8c35-e6d0221c209b req-10dcdeb1-13d1-487b-9943-5fca30f3d918 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Refreshing instance network info cache due to event network-changed-eab8f4a1-ffde-4387-94df-ebfe864e9534. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:29:37 compute-0 nova_compute[253538]: 2025-11-25 08:29:37.364 253542 DEBUG oslo_concurrency.lockutils [req-aa801ab4-0fa1-45f8-8c35-e6d0221c209b req-10dcdeb1-13d1-487b-9943-5fca30f3d918 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-34941187-b6b9-4153-a8b9-6f5c00f10dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:29:37 compute-0 nova_compute[253538]: 2025-11-25 08:29:37.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:37 compute-0 nova_compute[253538]: 2025-11-25 08:29:37.727 253542 DEBUG nova.network.neutron [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:29:38 compute-0 ceph-mon[75015]: pgmap v1373: 321 pgs: 321 active+clean; 397 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 14 MiB/s wr, 316 op/s
Nov 25 08:29:38 compute-0 ceph-mon[75015]: osdmap e162: 3 total, 3 up, 3 in
Nov 25 08:29:38 compute-0 nova_compute[253538]: 2025-11-25 08:29:38.672 253542 INFO nova.virt.libvirt.driver [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Snapshot image upload complete
Nov 25 08:29:38 compute-0 nova_compute[253538]: 2025-11-25 08:29:38.673 253542 INFO nova.compute.manager [None req-2953982c-db6e-4419-befe-28abb827d0a5 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Took 4.50 seconds to snapshot the instance on the hypervisor.
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.077 253542 DEBUG nova.network.neutron [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Updating instance_info_cache with network_info: [{"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 408 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 11 MiB/s wr, 237 op/s
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.131 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Releasing lock "refresh_cache-34941187-b6b9-4153-a8b9-6f5c00f10dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.132 253542 DEBUG nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Instance network_info: |[{"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.132 253542 DEBUG oslo_concurrency.lockutils [req-aa801ab4-0fa1-45f8-8c35-e6d0221c209b req-10dcdeb1-13d1-487b-9943-5fca30f3d918 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-34941187-b6b9-4153-a8b9-6f5c00f10dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.132 253542 DEBUG nova.network.neutron [req-aa801ab4-0fa1-45f8-8c35-e6d0221c209b req-10dcdeb1-13d1-487b-9943-5fca30f3d918 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Refreshing network info cache for port eab8f4a1-ffde-4387-94df-ebfe864e9534 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.137 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Start _get_guest_xml network_info=[{"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.142 253542 WARNING nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.151 253542 DEBUG nova.virt.libvirt.host [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.152 253542 DEBUG nova.virt.libvirt.host [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.166 253542 DEBUG nova.virt.libvirt.host [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.166 253542 DEBUG nova.virt.libvirt.host [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.167 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.168 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.169 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.169 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.170 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.170 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.171 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.171 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.172 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.172 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.173 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.173 253542 DEBUG nova.virt.hardware [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.178 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.443 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059364.4406812, 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.444 253542 INFO nova.compute.manager [-] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] VM Stopped (Lifecycle Event)
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.470 253542 DEBUG nova.compute.manager [None req-bdfb7777-6bc1-4ecd-b418-ec97d91f8f09 - - - - - -] [instance: 8a8b6989-6ea7-4cf7-ad21-a1563967c7f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:29:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/140178893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.636 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.658 253542 DEBUG nova.storage.rbd_utils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.662 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:39 compute-0 nova_compute[253538]: 2025-11-25 08:29:39.949 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:29:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/685734955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.079 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.082 253542 DEBUG nova.virt.libvirt.vif [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:29:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-722195337',display_name='tempest-ImagesTestJSON-server-722195337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-722195337',id=39,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-saunovqo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:29:32Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=34941187-b6b9-4153-a8b9-6f5c00f10dda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.083 253542 DEBUG nova.network.os_vif_util [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.084 253542 DEBUG nova.network.os_vif_util [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:af:86,bridge_name='br-int',has_traffic_filtering=True,id=eab8f4a1-ffde-4387-94df-ebfe864e9534,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeab8f4a1-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.086 253542 DEBUG nova.objects.instance [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 34941187-b6b9-4153-a8b9-6f5c00f10dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.105 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:29:40 compute-0 nova_compute[253538]:   <uuid>34941187-b6b9-4153-a8b9-6f5c00f10dda</uuid>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   <name>instance-00000027</name>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <nova:name>tempest-ImagesTestJSON-server-722195337</nova:name>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:29:39</nova:creationTime>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:29:40 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:29:40 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:29:40 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:29:40 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:29:40 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:29:40 compute-0 nova_compute[253538]:         <nova:user uuid="38fa175fb699405c9a05d7c28f994ebc">tempest-ImagesTestJSON-109091550-project-member</nova:user>
Nov 25 08:29:40 compute-0 nova_compute[253538]:         <nova:project uuid="b0a28d62fb1841c087b84b40bf5a54ec">tempest-ImagesTestJSON-109091550</nova:project>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:29:40 compute-0 nova_compute[253538]:         <nova:port uuid="eab8f4a1-ffde-4387-94df-ebfe864e9534">
Nov 25 08:29:40 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <system>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <entry name="serial">34941187-b6b9-4153-a8b9-6f5c00f10dda</entry>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <entry name="uuid">34941187-b6b9-4153-a8b9-6f5c00f10dda</entry>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     </system>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   <os>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   </os>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   <features>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   </features>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/34941187-b6b9-4153-a8b9-6f5c00f10dda_disk">
Nov 25 08:29:40 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       </source>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:29:40 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/34941187-b6b9-4153-a8b9-6f5c00f10dda_disk.config">
Nov 25 08:29:40 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       </source>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:29:40 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:52:af:86"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <target dev="tapeab8f4a1-ff"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda/console.log" append="off"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <video>
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     </video>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:29:40 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:29:40 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:29:40 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:29:40 compute-0 nova_compute[253538]: </domain>
Nov 25 08:29:40 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.106 253542 DEBUG nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Preparing to wait for external event network-vif-plugged-eab8f4a1-ffde-4387-94df-ebfe864e9534 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.107 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.107 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.107 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.108 253542 DEBUG nova.virt.libvirt.vif [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:29:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-722195337',display_name='tempest-ImagesTestJSON-server-722195337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-722195337',id=39,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-saunovqo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},ta
gs=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:29:32Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=34941187-b6b9-4153-a8b9-6f5c00f10dda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.108 253542 DEBUG nova.network.os_vif_util [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.109 253542 DEBUG nova.network.os_vif_util [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:af:86,bridge_name='br-int',has_traffic_filtering=True,id=eab8f4a1-ffde-4387-94df-ebfe864e9534,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeab8f4a1-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.109 253542 DEBUG os_vif [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:af:86,bridge_name='br-int',has_traffic_filtering=True,id=eab8f4a1-ffde-4387-94df-ebfe864e9534,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeab8f4a1-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.110 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.111 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.111 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.115 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.115 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeab8f4a1-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.116 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeab8f4a1-ff, col_values=(('external_ids', {'iface-id': 'eab8f4a1-ffde-4387-94df-ebfe864e9534', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:af:86', 'vm-uuid': '34941187-b6b9-4153-a8b9-6f5c00f10dda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.117 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:40 compute-0 NetworkManager[48915]: <info>  [1764059380.1188] manager: (tapeab8f4a1-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.120 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.125 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.127 253542 INFO os_vif [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:af:86,bridge_name='br-int',has_traffic_filtering=True,id=eab8f4a1-ffde-4387-94df-ebfe864e9534,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeab8f4a1-ff')
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.197 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.197 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.198 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No VIF found with MAC fa:16:3e:52:af:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.198 253542 INFO nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Using config drive
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.230 253542 DEBUG nova.storage.rbd_utils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:29:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Nov 25 08:29:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Nov 25 08:29:40 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Nov 25 08:29:40 compute-0 ceph-mon[75015]: pgmap v1375: 321 pgs: 321 active+clean; 408 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 11 MiB/s wr, 237 op/s
Nov 25 08:29:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/140178893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/685734955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:40 compute-0 ceph-mon[75015]: osdmap e163: 3 total, 3 up, 3 in
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.748 253542 INFO nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Creating config drive at /var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda/disk.config
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.757 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8l5_jsld execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.912 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8l5_jsld" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.940 253542 DEBUG nova.storage.rbd_utils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] rbd image 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:40 compute-0 nova_compute[253538]: 2025-11-25 08:29:40.943 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda/disk.config 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.055 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.055 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.056 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.066 253542 DEBUG nova.network.neutron [req-aa801ab4-0fa1-45f8-8c35-e6d0221c209b req-10dcdeb1-13d1-487b-9943-5fca30f3d918 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Updated VIF entry in instance network info cache for port eab8f4a1-ffde-4387-94df-ebfe864e9534. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.067 253542 DEBUG nova.network.neutron [req-aa801ab4-0fa1-45f8-8c35-e6d0221c209b req-10dcdeb1-13d1-487b-9943-5fca30f3d918 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Updating instance_info_cache with network_info: [{"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.080 253542 DEBUG oslo_concurrency.lockutils [req-aa801ab4-0fa1-45f8-8c35-e6d0221c209b req-10dcdeb1-13d1-487b-9943-5fca30f3d918 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-34941187-b6b9-4153-a8b9-6f5c00f10dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.099 253542 DEBUG oslo_concurrency.processutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda/disk.config 34941187-b6b9-4153-a8b9-6f5c00f10dda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.099 253542 INFO nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Deleting local config drive /var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda/disk.config because it was imported into RBD.
Nov 25 08:29:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 418 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 9.0 MiB/s wr, 182 op/s
Nov 25 08:29:41 compute-0 kernel: tapeab8f4a1-ff: entered promiscuous mode
Nov 25 08:29:41 compute-0 NetworkManager[48915]: <info>  [1764059381.1802] manager: (tapeab8f4a1-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Nov 25 08:29:41 compute-0 ovn_controller[152859]: 2025-11-25T08:29:41Z|00269|binding|INFO|Claiming lport eab8f4a1-ffde-4387-94df-ebfe864e9534 for this chassis.
Nov 25 08:29:41 compute-0 ovn_controller[152859]: 2025-11-25T08:29:41Z|00270|binding|INFO|eab8f4a1-ffde-4387-94df-ebfe864e9534: Claiming fa:16:3e:52:af:86 10.100.0.12
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.180 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:41 compute-0 systemd-machined[215790]: New machine qemu-44-instance-00000027.
Nov 25 08:29:41 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-00000027.
Nov 25 08:29:41 compute-0 systemd-udevd[298113]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:29:41 compute-0 NetworkManager[48915]: <info>  [1764059381.2607] device (tapeab8f4a1-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:29:41 compute-0 NetworkManager[48915]: <info>  [1764059381.2621] device (tapeab8f4a1-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.278 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:af:86 10.100.0.12'], port_security=['fa:16:3e:52:af:86 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '34941187-b6b9-4153-a8b9-6f5c00f10dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=eab8f4a1-ffde-4387-94df-ebfe864e9534) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.279 162739 INFO neutron.agent.ovn.metadata.agent [-] Port eab8f4a1-ffde-4387-94df-ebfe864e9534 in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e bound to our chassis
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.280 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.291 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b7dad777-ee81-430c-acc4-214c7064f5c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.292 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapba659d6c-c1 in ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.293 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.293 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapba659d6c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.294 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e45637c3-fe7c-4e9a-aac4-5199ebb9ba83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:41 compute-0 ovn_controller[152859]: 2025-11-25T08:29:41Z|00271|binding|INFO|Setting lport eab8f4a1-ffde-4387-94df-ebfe864e9534 ovn-installed in OVS
Nov 25 08:29:41 compute-0 ovn_controller[152859]: 2025-11-25T08:29:41Z|00272|binding|INFO|Setting lport eab8f4a1-ffde-4387-94df-ebfe864e9534 up in Southbound
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.294 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.295 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf52e68-b9c1-4dbc-938b-3393ce1eba24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.305 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[5768dd8d-e35c-4994-bfc9-35ae67cde1b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:41 compute-0 podman[298094]: 2025-11-25 08:29:41.322736273 +0000 UTC m=+0.116467200 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.327 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[35170cc2-dac3-4891-8d21-cc0f7d99fc71]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.355 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[47b0bedf-c568-47f9-8893-4106582cba65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.360 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[12c6fc1c-b7bb-4444-8ca3-fb7651f8d74c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:41 compute-0 NetworkManager[48915]: <info>  [1764059381.3616] manager: (tapba659d6c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/126)
Nov 25 08:29:41 compute-0 systemd-udevd[298120]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.393 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd8f866-395d-4578-a9ae-a0d6252e8ccc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.395 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[007e2dba-a4ef-4b4b-9884-29faa81015eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:41 compute-0 NetworkManager[48915]: <info>  [1764059381.4181] device (tapba659d6c-c0): carrier: link connected
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.423 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[46690044-c39e-442e-aef7-d5dd33504cb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.440 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f0a256-1bd1-408e-af6a-a254fadc0a1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473601, 'reachable_time': 39620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298155, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.458 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[426b9dff-7c71-4072-8289-7d352d146ee3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:c340'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473601, 'tstamp': 473601}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298156, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.475 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[03456cae-3606-4ded-8ae7-76a1a0326cbf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba659d6c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:c3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473601, 'reachable_time': 39620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298157, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.510 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[df424c34-743e-44ca-b769-6149f9c77c7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.577 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[134deb86-a076-4a35-b50e-4589612b2026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.579 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:41 compute-0 rsyslogd[1007]: imjournal from <np0005534516:ovn_metadata_agent>: begin to drop messages due to rate-limiting
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.579 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.579 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba659d6c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:41 compute-0 NetworkManager[48915]: <info>  [1764059381.5825] manager: (tapba659d6c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Nov 25 08:29:41 compute-0 kernel: tapba659d6c-c0: entered promiscuous mode
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.581 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.585 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapba659d6c-c0, col_values=(('external_ids', {'iface-id': '02ee51d1-7fc5-4815-93ec-b9ead088a46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:41 compute-0 ovn_controller[152859]: 2025-11-25T08:29:41Z|00273|binding|INFO|Releasing lport 02ee51d1-7fc5-4815-93ec-b9ead088a46e from this chassis (sb_readonly=0)
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.611 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.612 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[18db7390-e696-425d-af52-82f21fda3cda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.613 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/ba659d6c-c094-47d7-ba45-d0e659ce778e.pid.haproxy
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID ba659d6c-c094-47d7-ba45-d0e659ce778e
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:41.614 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'env', 'PROCESS_TAG=haproxy-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ba659d6c-c094-47d7-ba45-d0e659ce778e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.649 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059381.649272, 34941187-b6b9-4153-a8b9-6f5c00f10dda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.650 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] VM Started (Lifecycle Event)
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.666 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.671 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059381.6493998, 34941187-b6b9-4153-a8b9-6f5c00f10dda => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.671 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] VM Paused (Lifecycle Event)
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.689 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.692 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:29:41 compute-0 nova_compute[253538]: 2025-11-25 08:29:41.708 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:29:42 compute-0 podman[298229]: 2025-11-25 08:29:42.012468117 +0000 UTC m=+0.072075983 container create 07b9c4b844a6935f2d1b14f0197c74d94a3e0ad95a311d04996c02c2f7c602c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:29:42 compute-0 podman[298229]: 2025-11-25 08:29:41.96483858 +0000 UTC m=+0.024446526 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:29:42 compute-0 systemd[1]: Started libpod-conmon-07b9c4b844a6935f2d1b14f0197c74d94a3e0ad95a311d04996c02c2f7c602c1.scope.
Nov 25 08:29:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ea6b4f6838733412fb90c0109af913f2e952229503320472fd0695bd9461c66/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.091 253542 DEBUG nova.compute.manager [req-3ccbe0a8-2420-45a6-ba2b-48ab4b17c113 req-75130cfe-d070-4458-8089-2c9c5facd354 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Received event network-vif-plugged-eab8f4a1-ffde-4387-94df-ebfe864e9534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.092 253542 DEBUG oslo_concurrency.lockutils [req-3ccbe0a8-2420-45a6-ba2b-48ab4b17c113 req-75130cfe-d070-4458-8089-2c9c5facd354 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.092 253542 DEBUG oslo_concurrency.lockutils [req-3ccbe0a8-2420-45a6-ba2b-48ab4b17c113 req-75130cfe-d070-4458-8089-2c9c5facd354 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.092 253542 DEBUG oslo_concurrency.lockutils [req-3ccbe0a8-2420-45a6-ba2b-48ab4b17c113 req-75130cfe-d070-4458-8089-2c9c5facd354 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.093 253542 DEBUG nova.compute.manager [req-3ccbe0a8-2420-45a6-ba2b-48ab4b17c113 req-75130cfe-d070-4458-8089-2c9c5facd354 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Processing event network-vif-plugged-eab8f4a1-ffde-4387-94df-ebfe864e9534 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.093 253542 DEBUG nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.097 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059382.0974786, 34941187-b6b9-4153-a8b9-6f5c00f10dda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.097 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] VM Resumed (Lifecycle Event)
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.099 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.102 253542 INFO nova.virt.libvirt.driver [-] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Instance spawned successfully.
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.102 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:29:42 compute-0 podman[298229]: 2025-11-25 08:29:42.115292799 +0000 UTC m=+0.174900695 container init 07b9c4b844a6935f2d1b14f0197c74d94a3e0ad95a311d04996c02c2f7c602c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:29:42 compute-0 podman[298229]: 2025-11-25 08:29:42.121973143 +0000 UTC m=+0.181581009 container start 07b9c4b844a6935f2d1b14f0197c74d94a3e0ad95a311d04996c02c2f7c602c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.122 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.122 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.123 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.123 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.124 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.124 253542 DEBUG nova.virt.libvirt.driver [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.131 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.134 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:29:42 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[298244]: [NOTICE]   (298248) : New worker (298250) forked
Nov 25 08:29:42 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[298244]: [NOTICE]   (298248) : Loading success.
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.177 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.204 253542 INFO nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Took 9.63 seconds to spawn the instance on the hypervisor.
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.205 253542 DEBUG nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.267 253542 INFO nova.compute.manager [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Took 10.67 seconds to build instance.
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.284 253542 DEBUG oslo_concurrency.lockutils [None req-3847c90e-2ac9-43e2-ae42-058e295bfd8e 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:42 compute-0 ceph-mon[75015]: pgmap v1377: 321 pgs: 321 active+clean; 418 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 9.0 MiB/s wr, 182 op/s
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.650 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059367.649511, 8b29fb31-718d-4926-bf4f-bae461ea70ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.651 253542 INFO nova.compute.manager [-] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] VM Stopped (Lifecycle Event)
Nov 25 08:29:42 compute-0 nova_compute[253538]: 2025-11-25 08:29:42.668 253542 DEBUG nova.compute.manager [None req-b18f4e49-f9d3-4e0b-ac1d-02f81652ee44 - - - - - -] [instance: 8b29fb31-718d-4926-bf4f-bae461ea70ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 418 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 7.7 MiB/s wr, 181 op/s
Nov 25 08:29:44 compute-0 nova_compute[253538]: 2025-11-25 08:29:44.009 253542 DEBUG nova.compute.manager [None req-be452acc-b5c2-4a49-9975-db5eb418dbae 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:44 compute-0 nova_compute[253538]: 2025-11-25 08:29:44.117 253542 INFO nova.compute.manager [None req-be452acc-b5c2-4a49-9975-db5eb418dbae 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] instance snapshotting
Nov 25 08:29:44 compute-0 ceph-mon[75015]: pgmap v1378: 321 pgs: 321 active+clean; 418 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 7.7 MiB/s wr, 181 op/s
Nov 25 08:29:44 compute-0 nova_compute[253538]: 2025-11-25 08:29:44.717 253542 DEBUG nova.compute.manager [req-0bd52044-ce74-487f-b4de-ad35400177df req-a10075b6-751d-45d7-a226-143b2e0ac48f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Received event network-vif-plugged-eab8f4a1-ffde-4387-94df-ebfe864e9534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:44 compute-0 nova_compute[253538]: 2025-11-25 08:29:44.718 253542 DEBUG oslo_concurrency.lockutils [req-0bd52044-ce74-487f-b4de-ad35400177df req-a10075b6-751d-45d7-a226-143b2e0ac48f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:44 compute-0 nova_compute[253538]: 2025-11-25 08:29:44.719 253542 DEBUG oslo_concurrency.lockutils [req-0bd52044-ce74-487f-b4de-ad35400177df req-a10075b6-751d-45d7-a226-143b2e0ac48f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:44 compute-0 nova_compute[253538]: 2025-11-25 08:29:44.719 253542 DEBUG oslo_concurrency.lockutils [req-0bd52044-ce74-487f-b4de-ad35400177df req-a10075b6-751d-45d7-a226-143b2e0ac48f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:44 compute-0 nova_compute[253538]: 2025-11-25 08:29:44.719 253542 DEBUG nova.compute.manager [req-0bd52044-ce74-487f-b4de-ad35400177df req-a10075b6-751d-45d7-a226-143b2e0ac48f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] No waiting events found dispatching network-vif-plugged-eab8f4a1-ffde-4387-94df-ebfe864e9534 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:29:44 compute-0 nova_compute[253538]: 2025-11-25 08:29:44.720 253542 WARNING nova.compute.manager [req-0bd52044-ce74-487f-b4de-ad35400177df req-a10075b6-751d-45d7-a226-143b2e0ac48f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Received unexpected event network-vif-plugged-eab8f4a1-ffde-4387-94df-ebfe864e9534 for instance with vm_state active and task_state image_snapshot.
Nov 25 08:29:44 compute-0 nova_compute[253538]: 2025-11-25 08:29:44.753 253542 INFO nova.virt.libvirt.driver [None req-be452acc-b5c2-4a49-9975-db5eb418dbae 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Beginning live snapshot process
Nov 25 08:29:44 compute-0 nova_compute[253538]: 2025-11-25 08:29:44.947 253542 DEBUG nova.virt.libvirt.imagebackend [None req-be452acc-b5c2-4a49-9975-db5eb418dbae 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:29:44 compute-0 nova_compute[253538]: 2025-11-25 08:29:44.951 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:45 compute-0 nova_compute[253538]: 2025-11-25 08:29:45.117 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 418 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 1.2 MiB/s wr, 123 op/s
Nov 25 08:29:45 compute-0 nova_compute[253538]: 2025-11-25 08:29:45.185 253542 DEBUG nova.storage.rbd_utils [None req-be452acc-b5c2-4a49-9975-db5eb418dbae 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(ec25c3cb9ea942969c8b35d3b27fd4d4) on rbd image(34941187-b6b9-4153-a8b9-6f5c00f10dda_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:29:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:29:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Nov 25 08:29:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Nov 25 08:29:45 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Nov 25 08:29:45 compute-0 nova_compute[253538]: 2025-11-25 08:29:45.491 253542 DEBUG nova.storage.rbd_utils [None req-be452acc-b5c2-4a49-9975-db5eb418dbae 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] cloning vms/34941187-b6b9-4153-a8b9-6f5c00f10dda_disk@ec25c3cb9ea942969c8b35d3b27fd4d4 to images/a477aad7-2f73-4e2c-85f7-d6b7ac0e3112 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:29:45 compute-0 nova_compute[253538]: 2025-11-25 08:29:45.854 253542 DEBUG nova.storage.rbd_utils [None req-be452acc-b5c2-4a49-9975-db5eb418dbae 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] flattening images/a477aad7-2f73-4e2c-85f7-d6b7ac0e3112 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:29:46 compute-0 nova_compute[253538]: 2025-11-25 08:29:46.163 253542 DEBUG nova.storage.rbd_utils [None req-be452acc-b5c2-4a49-9975-db5eb418dbae 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] removing snapshot(ec25c3cb9ea942969c8b35d3b27fd4d4) on rbd image(34941187-b6b9-4153-a8b9-6f5c00f10dda_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:29:46 compute-0 ceph-mon[75015]: pgmap v1379: 321 pgs: 321 active+clean; 418 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 1.2 MiB/s wr, 123 op/s
Nov 25 08:29:46 compute-0 ceph-mon[75015]: osdmap e164: 3 total, 3 up, 3 in
Nov 25 08:29:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Nov 25 08:29:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Nov 25 08:29:46 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Nov 25 08:29:46 compute-0 nova_compute[253538]: 2025-11-25 08:29:46.474 253542 DEBUG nova.storage.rbd_utils [None req-be452acc-b5c2-4a49-9975-db5eb418dbae 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] creating snapshot(snap) on rbd image(a477aad7-2f73-4e2c-85f7-d6b7ac0e3112) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:29:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1382: 321 pgs: 321 active+clean; 444 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.0 MiB/s wr, 218 op/s
Nov 25 08:29:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Nov 25 08:29:47 compute-0 ceph-mon[75015]: osdmap e165: 3 total, 3 up, 3 in
Nov 25 08:29:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Nov 25 08:29:47 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.506 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Acquiring lock "ce1afa72-143f-43f3-9859-df7f3523888a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.506 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lock "ce1afa72-143f-43f3-9859-df7f3523888a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.526 253542 DEBUG nova.compute.manager [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.599 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.600 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.607 253542 DEBUG nova.virt.hardware [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.608 253542 INFO nova.compute.claims [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver [None req-be452acc-b5c2-4a49-9975-db5eb418dbae 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image a477aad7-2f73-4e2c-85f7-d6b7ac0e3112 could not be found.
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID a477aad7-2f73-4e2c-85f7-d6b7ac0e3112
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver 
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver 
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image a477aad7-2f73-4e2c-85f7-d6b7ac0e3112 could not be found.
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.695 253542 ERROR nova.virt.libvirt.driver 
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.774 253542 DEBUG nova.storage.rbd_utils [None req-be452acc-b5c2-4a49-9975-db5eb418dbae 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] removing snapshot(snap) on rbd image(a477aad7-2f73-4e2c-85f7-d6b7ac0e3112) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:29:47 compute-0 nova_compute[253538]: 2025-11-25 08:29:47.795 253542 DEBUG oslo_concurrency.processutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:29:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2681446942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.297 253542 DEBUG oslo_concurrency.processutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.304 253542 DEBUG nova.compute.provider_tree [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.316 253542 DEBUG nova.scheduler.client.report [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.336 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.337 253542 DEBUG nova.compute.manager [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.388 253542 DEBUG nova.compute.manager [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.389 253542 DEBUG nova.network.neutron [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.410 253542 INFO nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.433 253542 DEBUG nova.compute.manager [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:29:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Nov 25 08:29:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Nov 25 08:29:48 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Nov 25 08:29:48 compute-0 ceph-mon[75015]: pgmap v1382: 321 pgs: 321 active+clean; 444 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.0 MiB/s wr, 218 op/s
Nov 25 08:29:48 compute-0 ceph-mon[75015]: osdmap e166: 3 total, 3 up, 3 in
Nov 25 08:29:48 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2681446942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.519 253542 DEBUG nova.compute.manager [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.521 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.522 253542 INFO nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Creating image(s)
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.546 253542 DEBUG nova.storage.rbd_utils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] rbd image ce1afa72-143f-43f3-9859-df7f3523888a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.581 253542 DEBUG nova.storage.rbd_utils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] rbd image ce1afa72-143f-43f3-9859-df7f3523888a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.613 253542 DEBUG nova.storage.rbd_utils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] rbd image ce1afa72-143f-43f3-9859-df7f3523888a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.617 253542 DEBUG oslo_concurrency.processutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.699 253542 DEBUG oslo_concurrency.processutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.701 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.701 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.702 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.721 253542 DEBUG nova.storage.rbd_utils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] rbd image ce1afa72-143f-43f3-9859-df7f3523888a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.724 253542 DEBUG oslo_concurrency.processutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ce1afa72-143f-43f3-9859-df7f3523888a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:48 compute-0 nova_compute[253538]: 2025-11-25 08:29:48.893 253542 WARNING nova.compute.manager [None req-be452acc-b5c2-4a49-9975-db5eb418dbae 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Image not found during snapshot: nova.exception.ImageNotFound: Image a477aad7-2f73-4e2c-85f7-d6b7ac0e3112 could not be found.
Nov 25 08:29:49 compute-0 nova_compute[253538]: 2025-11-25 08:29:49.041 253542 DEBUG oslo_concurrency.processutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ce1afa72-143f-43f3-9859-df7f3523888a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:49 compute-0 nova_compute[253538]: 2025-11-25 08:29:49.104 253542 DEBUG nova.policy [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cc7f739420484ab696255ef9cdfcc581', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e37021edb08c44fbb7ea019842489f3c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:29:49 compute-0 nova_compute[253538]: 2025-11-25 08:29:49.112 253542 DEBUG nova.storage.rbd_utils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] resizing rbd image ce1afa72-143f-43f3-9859-df7f3523888a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:29:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 457 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.7 MiB/s wr, 204 op/s
Nov 25 08:29:49 compute-0 nova_compute[253538]: 2025-11-25 08:29:49.212 253542 DEBUG nova.objects.instance [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lazy-loading 'migration_context' on Instance uuid ce1afa72-143f-43f3-9859-df7f3523888a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:49 compute-0 nova_compute[253538]: 2025-11-25 08:29:49.225 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:29:49 compute-0 nova_compute[253538]: 2025-11-25 08:29:49.226 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Ensure instance console log exists: /var/lib/nova/instances/ce1afa72-143f-43f3-9859-df7f3523888a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:29:49 compute-0 nova_compute[253538]: 2025-11-25 08:29:49.226 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:49 compute-0 nova_compute[253538]: 2025-11-25 08:29:49.227 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:49 compute-0 nova_compute[253538]: 2025-11-25 08:29:49.227 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Nov 25 08:29:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Nov 25 08:29:49 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Nov 25 08:29:49 compute-0 ceph-mon[75015]: osdmap e167: 3 total, 3 up, 3 in
Nov 25 08:29:49 compute-0 nova_compute[253538]: 2025-11-25 08:29:49.953 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:50 compute-0 nova_compute[253538]: 2025-11-25 08:29:50.119 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:29:50 compute-0 ceph-mon[75015]: pgmap v1385: 321 pgs: 321 active+clean; 457 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.7 MiB/s wr, 204 op/s
Nov 25 08:29:50 compute-0 ceph-mon[75015]: osdmap e168: 3 total, 3 up, 3 in
Nov 25 08:29:50 compute-0 sudo[298625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:29:50 compute-0 sudo[298625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:29:50 compute-0 sudo[298625]: pam_unix(sudo:session): session closed for user root
Nov 25 08:29:50 compute-0 nova_compute[253538]: 2025-11-25 08:29:50.654 253542 DEBUG nova.network.neutron [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Successfully created port: 4521eb38-aa12-4b8a-92f6-3f1a9121fe5c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:29:50 compute-0 sudo[298650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:29:50 compute-0 sudo[298650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:29:50 compute-0 sudo[298650]: pam_unix(sudo:session): session closed for user root
Nov 25 08:29:50 compute-0 sudo[298675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:29:50 compute-0 sudo[298675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:29:50 compute-0 sudo[298675]: pam_unix(sudo:session): session closed for user root
Nov 25 08:29:50 compute-0 sudo[298700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:29:50 compute-0 sudo[298700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:29:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 472 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 6.0 MiB/s wr, 224 op/s
Nov 25 08:29:51 compute-0 sudo[298700]: pam_unix(sudo:session): session closed for user root
Nov 25 08:29:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:29:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:29:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:29:51 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:29:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:29:51 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:29:51 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 3bff131e-da53-4cdb-8b18-228dee6ca553 does not exist
Nov 25 08:29:51 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 0a51fa5d-613a-4b8a-8b9b-f7878e69bac3 does not exist
Nov 25 08:29:51 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 66b5daf9-e670-4031-acd5-3e5d6b81b6cf does not exist
Nov 25 08:29:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:29:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:29:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:29:51 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:29:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:29:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:29:51 compute-0 sudo[298756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:29:51 compute-0 sudo[298756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:29:51 compute-0 sudo[298756]: pam_unix(sudo:session): session closed for user root
Nov 25 08:29:51 compute-0 sudo[298781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:29:51 compute-0 sudo[298781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:29:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Nov 25 08:29:51 compute-0 sudo[298781]: pam_unix(sudo:session): session closed for user root
Nov 25 08:29:51 compute-0 ceph-mon[75015]: pgmap v1387: 321 pgs: 321 active+clean; 472 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 6.0 MiB/s wr, 224 op/s
Nov 25 08:29:51 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:29:51 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:29:51 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:29:51 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:29:51 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:29:51 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:29:51 compute-0 sudo[298806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:29:51 compute-0 sudo[298806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:29:51 compute-0 sudo[298806]: pam_unix(sudo:session): session closed for user root
Nov 25 08:29:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Nov 25 08:29:51 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Nov 25 08:29:51 compute-0 sudo[298831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:29:51 compute-0 sudo[298831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.031 253542 DEBUG nova.network.neutron [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Successfully updated port: 4521eb38-aa12-4b8a-92f6-3f1a9121fe5c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:29:52 compute-0 podman[298893]: 2025-11-25 08:29:52.133785755 +0000 UTC m=+0.061643975 container create f0f522dfb4bc3ec0fef20fd33fd7c829c3ee0be88063e1c754619cc5d74b72a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.157 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Acquiring lock "refresh_cache-ce1afa72-143f-43f3-9859-df7f3523888a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.157 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Acquired lock "refresh_cache-ce1afa72-143f-43f3-9859-df7f3523888a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.157 253542 DEBUG nova.network.neutron [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:29:52 compute-0 systemd[1]: Started libpod-conmon-f0f522dfb4bc3ec0fef20fd33fd7c829c3ee0be88063e1c754619cc5d74b72a1.scope.
Nov 25 08:29:52 compute-0 podman[298893]: 2025-11-25 08:29:52.096597698 +0000 UTC m=+0.024455898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:29:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:29:52 compute-0 podman[298893]: 2025-11-25 08:29:52.227131906 +0000 UTC m=+0.154990136 container init f0f522dfb4bc3ec0fef20fd33fd7c829c3ee0be88063e1c754619cc5d74b72a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:29:52 compute-0 podman[298893]: 2025-11-25 08:29:52.237035479 +0000 UTC m=+0.164893679 container start f0f522dfb4bc3ec0fef20fd33fd7c829c3ee0be88063e1c754619cc5d74b72a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 08:29:52 compute-0 podman[298893]: 2025-11-25 08:29:52.2417369 +0000 UTC m=+0.169595130 container attach f0f522dfb4bc3ec0fef20fd33fd7c829c3ee0be88063e1c754619cc5d74b72a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 25 08:29:52 compute-0 focused_dhawan[298910]: 167 167
Nov 25 08:29:52 compute-0 systemd[1]: libpod-f0f522dfb4bc3ec0fef20fd33fd7c829c3ee0be88063e1c754619cc5d74b72a1.scope: Deactivated successfully.
Nov 25 08:29:52 compute-0 podman[298893]: 2025-11-25 08:29:52.248425094 +0000 UTC m=+0.176283334 container died f0f522dfb4bc3ec0fef20fd33fd7c829c3ee0be88063e1c754619cc5d74b72a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 08:29:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-79a2fa40c6cec9e8b51fb3d3ca9177a7103ef4e4851cecc5e2dfcec037023016-merged.mount: Deactivated successfully.
Nov 25 08:29:52 compute-0 podman[298893]: 2025-11-25 08:29:52.29894936 +0000 UTC m=+0.226807560 container remove f0f522dfb4bc3ec0fef20fd33fd7c829c3ee0be88063e1c754619cc5d74b72a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:29:52 compute-0 systemd[1]: libpod-conmon-f0f522dfb4bc3ec0fef20fd33fd7c829c3ee0be88063e1c754619cc5d74b72a1.scope: Deactivated successfully.
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.325 253542 DEBUG oslo_concurrency.lockutils [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "34941187-b6b9-4153-a8b9-6f5c00f10dda" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.327 253542 DEBUG oslo_concurrency.lockutils [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.328 253542 DEBUG oslo_concurrency.lockutils [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.328 253542 DEBUG oslo_concurrency.lockutils [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.329 253542 DEBUG oslo_concurrency.lockutils [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.330 253542 INFO nova.compute.manager [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Terminating instance
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.332 253542 DEBUG nova.compute.manager [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.354 253542 DEBUG nova.network.neutron [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:29:52 compute-0 kernel: tapeab8f4a1-ff (unregistering): left promiscuous mode
Nov 25 08:29:52 compute-0 NetworkManager[48915]: <info>  [1764059392.3754] device (tapeab8f4a1-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.378 253542 DEBUG nova.compute.manager [req-0ced56c2-3ebc-4f9e-8065-a1491fd7940e req-c5d146b9-59ae-48ec-8450-edc7f1b7d79c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Received event network-changed-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.378 253542 DEBUG nova.compute.manager [req-0ced56c2-3ebc-4f9e-8065-a1491fd7940e req-c5d146b9-59ae-48ec-8450-edc7f1b7d79c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Refreshing instance network info cache due to event network-changed-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.378 253542 DEBUG oslo_concurrency.lockutils [req-0ced56c2-3ebc-4f9e-8065-a1491fd7940e req-c5d146b9-59ae-48ec-8450-edc7f1b7d79c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ce1afa72-143f-43f3-9859-df7f3523888a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.385 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:52 compute-0 ovn_controller[152859]: 2025-11-25T08:29:52Z|00274|binding|INFO|Releasing lport eab8f4a1-ffde-4387-94df-ebfe864e9534 from this chassis (sb_readonly=0)
Nov 25 08:29:52 compute-0 ovn_controller[152859]: 2025-11-25T08:29:52Z|00275|binding|INFO|Setting lport eab8f4a1-ffde-4387-94df-ebfe864e9534 down in Southbound
Nov 25 08:29:52 compute-0 ovn_controller[152859]: 2025-11-25T08:29:52Z|00276|binding|INFO|Removing iface tapeab8f4a1-ff ovn-installed in OVS
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.388 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:52.394 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:af:86 10.100.0.12'], port_security=['fa:16:3e:52:af:86 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '34941187-b6b9-4153-a8b9-6f5c00f10dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0a28d62fb1841c087b84b40bf5a54ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0dcf48e-8342-437f-bc91-be284d9d2e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38cfcfdd-6d8a-45fc-8bf6-5c1aa5128b91, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=eab8f4a1-ffde-4387-94df-ebfe864e9534) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:29:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:52.395 162739 INFO neutron.agent.ovn.metadata.agent [-] Port eab8f4a1-ffde-4387-94df-ebfe864e9534 in datapath ba659d6c-c094-47d7-ba45-d0e659ce778e unbound from our chassis
Nov 25 08:29:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:52.396 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba659d6c-c094-47d7-ba45-d0e659ce778e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:29:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:52.398 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7b2c85-6e9e-45f4-9eac-c7dc50af41aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:52.398 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e namespace which is not needed anymore
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.414 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:52 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000027.scope: Deactivated successfully.
Nov 25 08:29:52 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000027.scope: Consumed 10.781s CPU time.
Nov 25 08:29:52 compute-0 systemd-machined[215790]: Machine qemu-44-instance-00000027 terminated.
Nov 25 08:29:52 compute-0 podman[298943]: 2025-11-25 08:29:52.472430285 +0000 UTC m=+0.040420477 container create d4806f8b9d96f087af299bd694241735559f7e9fff5c33bd333095dee118c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 08:29:52 compute-0 systemd[1]: Started libpod-conmon-d4806f8b9d96f087af299bd694241735559f7e9fff5c33bd333095dee118c446.scope.
Nov 25 08:29:52 compute-0 podman[298943]: 2025-11-25 08:29:52.455026795 +0000 UTC m=+0.023017017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:29:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137ace5abb572b876d1455b8477a054bf409307174493b386f0edb1508feae5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137ace5abb572b876d1455b8477a054bf409307174493b386f0edb1508feae5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.562 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137ace5abb572b876d1455b8477a054bf409307174493b386f0edb1508feae5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:29:52 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[298244]: [NOTICE]   (298248) : haproxy version is 2.8.14-c23fe91
Nov 25 08:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137ace5abb572b876d1455b8477a054bf409307174493b386f0edb1508feae5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137ace5abb572b876d1455b8477a054bf409307174493b386f0edb1508feae5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:29:52 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[298244]: [NOTICE]   (298248) : path to executable is /usr/sbin/haproxy
Nov 25 08:29:52 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[298244]: [WARNING]  (298248) : Exiting Master process...
Nov 25 08:29:52 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[298244]: [ALERT]    (298248) : Current worker (298250) exited with code 143 (Terminated)
Nov 25 08:29:52 compute-0 neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e[298244]: [WARNING]  (298248) : All workers exited. Exiting... (0)
Nov 25 08:29:52 compute-0 systemd[1]: libpod-07b9c4b844a6935f2d1b14f0197c74d94a3e0ad95a311d04996c02c2f7c602c1.scope: Deactivated successfully.
Nov 25 08:29:52 compute-0 podman[298967]: 2025-11-25 08:29:52.579150315 +0000 UTC m=+0.068128524 container died 07b9c4b844a6935f2d1b14f0197c74d94a3e0ad95a311d04996c02c2f7c602c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.579 253542 INFO nova.virt.libvirt.driver [-] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Instance destroyed successfully.
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.580 253542 DEBUG nova.objects.instance [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lazy-loading 'resources' on Instance uuid 34941187-b6b9-4153-a8b9-6f5c00f10dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:52 compute-0 podman[298943]: 2025-11-25 08:29:52.594181451 +0000 UTC m=+0.162171673 container init d4806f8b9d96f087af299bd694241735559f7e9fff5c33bd333095dee118c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.594 253542 DEBUG nova.virt.libvirt.vif [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:29:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-722195337',display_name='tempest-ImagesTestJSON-server-722195337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-722195337',id=39,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:29:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b0a28d62fb1841c087b84b40bf5a54ec',ramdisk_id='',reservation_id='r-saunovqo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram=
'0',owner_project_name='tempest-ImagesTestJSON-109091550',owner_user_name='tempest-ImagesTestJSON-109091550-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:29:48Z,user_data=None,user_id='38fa175fb699405c9a05d7c28f994ebc',uuid=34941187-b6b9-4153-a8b9-6f5c00f10dda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.595 253542 DEBUG nova.network.os_vif_util [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converting VIF {"id": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "address": "fa:16:3e:52:af:86", "network": {"id": "ba659d6c-c094-47d7-ba45-d0e659ce778e", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1404047212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0a28d62fb1841c087b84b40bf5a54ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeab8f4a1-ff", "ovs_interfaceid": "eab8f4a1-ffde-4387-94df-ebfe864e9534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.595 253542 DEBUG nova.network.os_vif_util [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:af:86,bridge_name='br-int',has_traffic_filtering=True,id=eab8f4a1-ffde-4387-94df-ebfe864e9534,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeab8f4a1-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.596 253542 DEBUG os_vif [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:af:86,bridge_name='br-int',has_traffic_filtering=True,id=eab8f4a1-ffde-4387-94df-ebfe864e9534,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeab8f4a1-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.598 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.599 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeab8f4a1-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.605 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:29:52 compute-0 podman[298943]: 2025-11-25 08:29:52.608518847 +0000 UTC m=+0.176509039 container start d4806f8b9d96f087af299bd694241735559f7e9fff5c33bd333095dee118c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.609 253542 INFO os_vif [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:af:86,bridge_name='br-int',has_traffic_filtering=True,id=eab8f4a1-ffde-4387-94df-ebfe864e9534,network=Network(ba659d6c-c094-47d7-ba45-d0e659ce778e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeab8f4a1-ff')
Nov 25 08:29:52 compute-0 podman[298943]: 2025-11-25 08:29:52.613654509 +0000 UTC m=+0.181644731 container attach d4806f8b9d96f087af299bd694241735559f7e9fff5c33bd333095dee118c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 08:29:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-07b9c4b844a6935f2d1b14f0197c74d94a3e0ad95a311d04996c02c2f7c602c1-userdata-shm.mount: Deactivated successfully.
Nov 25 08:29:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ea6b4f6838733412fb90c0109af913f2e952229503320472fd0695bd9461c66-merged.mount: Deactivated successfully.
Nov 25 08:29:52 compute-0 podman[298967]: 2025-11-25 08:29:52.635878043 +0000 UTC m=+0.124856242 container cleanup 07b9c4b844a6935f2d1b14f0197c74d94a3e0ad95a311d04996c02c2f7c602c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:29:52 compute-0 systemd[1]: libpod-conmon-07b9c4b844a6935f2d1b14f0197c74d94a3e0ad95a311d04996c02c2f7c602c1.scope: Deactivated successfully.
Nov 25 08:29:52 compute-0 ceph-mon[75015]: osdmap e169: 3 total, 3 up, 3 in
Nov 25 08:29:52 compute-0 podman[299030]: 2025-11-25 08:29:52.741331268 +0000 UTC m=+0.075997042 container remove 07b9c4b844a6935f2d1b14f0197c74d94a3e0ad95a311d04996c02c2f7c602c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:29:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:52.747 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[91fd4b92-2660-4a97-a2e8-fe90a432bf8b]: (4, ('Tue Nov 25 08:29:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (07b9c4b844a6935f2d1b14f0197c74d94a3e0ad95a311d04996c02c2f7c602c1)\n07b9c4b844a6935f2d1b14f0197c74d94a3e0ad95a311d04996c02c2f7c602c1\nTue Nov 25 08:29:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e (07b9c4b844a6935f2d1b14f0197c74d94a3e0ad95a311d04996c02c2f7c602c1)\n07b9c4b844a6935f2d1b14f0197c74d94a3e0ad95a311d04996c02c2f7c602c1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:52.749 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1e5a7025-c5df-49fc-8465-23e6456ece2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:52.750 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba659d6c-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.751 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:52 compute-0 kernel: tapba659d6c-c0: left promiscuous mode
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.753 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:52.756 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[95dc7504-1410-4e50-bd07-49dfc3e085d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:52.771 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ecafb3c8-fa5d-40d6-bb5a-66375d1e7075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:52.774 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c3bce0bb-1953-4fa3-abf5-d9cc1e557f25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:52 compute-0 nova_compute[253538]: 2025-11-25 08:29:52.783 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:52.800 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4de4118b-a7c4-4e33-81ae-e37b25388389]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473594, 'reachable_time': 32315, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299047, 'error': None, 'target': 'ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:52.802 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ba659d6c-c094-47d7-ba45-d0e659ce778e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:29:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:52.803 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[184c0582-5a1d-46dc-a01b-e4f9bde55f43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.088 253542 INFO nova.virt.libvirt.driver [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Deleting instance files /var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda_del
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.090 253542 INFO nova.virt.libvirt.driver [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Deletion of /var/lib/nova/instances/34941187-b6b9-4153-a8b9-6f5c00f10dda_del complete
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 396 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.3 MiB/s wr, 140 op/s
Nov 25 08:29:53 compute-0 systemd[1]: run-netns-ovnmeta\x2dba659d6c\x2dc094\x2d47d7\x2dba45\x2dd0e659ce778e.mount: Deactivated successfully.
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.161 253542 INFO nova.compute.manager [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Took 0.83 seconds to destroy the instance on the hypervisor.
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.162 253542 DEBUG oslo.service.loopingcall [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.163 253542 DEBUG nova.compute.manager [-] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.163 253542 DEBUG nova.network.neutron [-] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:29:53
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'volumes', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.meta', 'vms', 'default.rgw.log', 'backups', 'default.rgw.control']
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:29:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Nov 25 08:29:53 compute-0 ceph-mon[75015]: pgmap v1389: 321 pgs: 321 active+clean; 396 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.3 MiB/s wr, 140 op/s
Nov 25 08:29:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Nov 25 08:29:53 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.760 253542 DEBUG nova.network.neutron [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Updating instance_info_cache with network_info: [{"id": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "address": "fa:16:3e:13:78:70", "network": {"id": "824090d9-c62a-4860-847c-47ccffd255d9", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-816014635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e37021edb08c44fbb7ea019842489f3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4521eb38-aa", "ovs_interfaceid": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:29:53 compute-0 charming_brown[298981]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:29:53 compute-0 charming_brown[298981]: --> relative data size: 1.0
Nov 25 08:29:53 compute-0 charming_brown[298981]: --> All data devices are unavailable
Nov 25 08:29:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.798 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Releasing lock "refresh_cache-ce1afa72-143f-43f3-9859-df7f3523888a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.799 253542 DEBUG nova.compute.manager [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Instance network_info: |[{"id": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "address": "fa:16:3e:13:78:70", "network": {"id": "824090d9-c62a-4860-847c-47ccffd255d9", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-816014635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e37021edb08c44fbb7ea019842489f3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4521eb38-aa", "ovs_interfaceid": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.800 253542 DEBUG oslo_concurrency.lockutils [req-0ced56c2-3ebc-4f9e-8065-a1491fd7940e req-c5d146b9-59ae-48ec-8450-edc7f1b7d79c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ce1afa72-143f-43f3-9859-df7f3523888a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.800 253542 DEBUG nova.network.neutron [req-0ced56c2-3ebc-4f9e-8065-a1491fd7940e req-c5d146b9-59ae-48ec-8450-edc7f1b7d79c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Refreshing network info cache for port 4521eb38-aa12-4b8a-92f6-3f1a9121fe5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.804 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Start _get_guest_xml network_info=[{"id": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "address": "fa:16:3e:13:78:70", "network": {"id": "824090d9-c62a-4860-847c-47ccffd255d9", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-816014635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e37021edb08c44fbb7ea019842489f3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4521eb38-aa", "ovs_interfaceid": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.809 253542 WARNING nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:29:53 compute-0 systemd[1]: libpod-d4806f8b9d96f087af299bd694241735559f7e9fff5c33bd333095dee118c446.scope: Deactivated successfully.
Nov 25 08:29:53 compute-0 systemd[1]: libpod-d4806f8b9d96f087af299bd694241735559f7e9fff5c33bd333095dee118c446.scope: Consumed 1.082s CPU time.
Nov 25 08:29:53 compute-0 podman[298943]: 2025-11-25 08:29:53.812022151 +0000 UTC m=+1.380012343 container died d4806f8b9d96f087af299bd694241735559f7e9fff5c33bd333095dee118c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.816 253542 DEBUG nova.virt.libvirt.host [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.817 253542 DEBUG nova.virt.libvirt.host [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.825 253542 DEBUG nova.virt.libvirt.host [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.826 253542 DEBUG nova.virt.libvirt.host [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.826 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.827 253542 DEBUG nova.virt.hardware [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.827 253542 DEBUG nova.virt.hardware [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.827 253542 DEBUG nova.virt.hardware [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.828 253542 DEBUG nova.virt.hardware [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.828 253542 DEBUG nova.virt.hardware [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.828 253542 DEBUG nova.virt.hardware [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.828 253542 DEBUG nova.virt.hardware [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.828 253542 DEBUG nova.virt.hardware [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.829 253542 DEBUG nova.virt.hardware [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.829 253542 DEBUG nova.virt.hardware [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.829 253542 DEBUG nova.virt.hardware [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:29:53 compute-0 nova_compute[253538]: 2025-11-25 08:29:53.831 253542 DEBUG oslo_concurrency.processutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-c137ace5abb572b876d1455b8477a054bf409307174493b386f0edb1508feae5-merged.mount: Deactivated successfully.
Nov 25 08:29:53 compute-0 podman[298943]: 2025-11-25 08:29:53.985597878 +0000 UTC m=+1.553588080 container remove d4806f8b9d96f087af299bd694241735559f7e9fff5c33bd333095dee118c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 08:29:53 compute-0 systemd[1]: libpod-conmon-d4806f8b9d96f087af299bd694241735559f7e9fff5c33bd333095dee118c446.scope: Deactivated successfully.
Nov 25 08:29:54 compute-0 sudo[298831]: pam_unix(sudo:session): session closed for user root
Nov 25 08:29:54 compute-0 sudo[299104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:29:54 compute-0 sudo[299104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:29:54 compute-0 sudo[299104]: pam_unix(sudo:session): session closed for user root
Nov 25 08:29:54 compute-0 sudo[299129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:29:54 compute-0 sudo[299129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:29:54 compute-0 sudo[299129]: pam_unix(sudo:session): session closed for user root
Nov 25 08:29:54 compute-0 sudo[299154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:29:54 compute-0 sudo[299154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:29:54 compute-0 sudo[299154]: pam_unix(sudo:session): session closed for user root
Nov 25 08:29:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:29:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/714800288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:54 compute-0 sudo[299179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:29:54 compute-0 sudo[299179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.284 253542 DEBUG oslo_concurrency.processutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.307 253542 DEBUG nova.storage.rbd_utils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] rbd image ce1afa72-143f-43f3-9859-df7f3523888a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.310 253542 DEBUG oslo_concurrency.processutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.547 253542 DEBUG nova.compute.manager [req-ecab1eaa-1a6e-4499-b677-7f0baf8c8211 req-76df9f9d-5c68-44ac-a02f-d7ada8822653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Received event network-vif-unplugged-eab8f4a1-ffde-4387-94df-ebfe864e9534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.548 253542 DEBUG oslo_concurrency.lockutils [req-ecab1eaa-1a6e-4499-b677-7f0baf8c8211 req-76df9f9d-5c68-44ac-a02f-d7ada8822653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.548 253542 DEBUG oslo_concurrency.lockutils [req-ecab1eaa-1a6e-4499-b677-7f0baf8c8211 req-76df9f9d-5c68-44ac-a02f-d7ada8822653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.548 253542 DEBUG oslo_concurrency.lockutils [req-ecab1eaa-1a6e-4499-b677-7f0baf8c8211 req-76df9f9d-5c68-44ac-a02f-d7ada8822653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.549 253542 DEBUG nova.compute.manager [req-ecab1eaa-1a6e-4499-b677-7f0baf8c8211 req-76df9f9d-5c68-44ac-a02f-d7ada8822653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] No waiting events found dispatching network-vif-unplugged-eab8f4a1-ffde-4387-94df-ebfe864e9534 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.549 253542 DEBUG nova.compute.manager [req-ecab1eaa-1a6e-4499-b677-7f0baf8c8211 req-76df9f9d-5c68-44ac-a02f-d7ada8822653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Received event network-vif-unplugged-eab8f4a1-ffde-4387-94df-ebfe864e9534 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.549 253542 DEBUG nova.compute.manager [req-ecab1eaa-1a6e-4499-b677-7f0baf8c8211 req-76df9f9d-5c68-44ac-a02f-d7ada8822653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Received event network-vif-plugged-eab8f4a1-ffde-4387-94df-ebfe864e9534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.549 253542 DEBUG oslo_concurrency.lockutils [req-ecab1eaa-1a6e-4499-b677-7f0baf8c8211 req-76df9f9d-5c68-44ac-a02f-d7ada8822653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.550 253542 DEBUG oslo_concurrency.lockutils [req-ecab1eaa-1a6e-4499-b677-7f0baf8c8211 req-76df9f9d-5c68-44ac-a02f-d7ada8822653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.550 253542 DEBUG oslo_concurrency.lockutils [req-ecab1eaa-1a6e-4499-b677-7f0baf8c8211 req-76df9f9d-5c68-44ac-a02f-d7ada8822653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.550 253542 DEBUG nova.compute.manager [req-ecab1eaa-1a6e-4499-b677-7f0baf8c8211 req-76df9f9d-5c68-44ac-a02f-d7ada8822653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] No waiting events found dispatching network-vif-plugged-eab8f4a1-ffde-4387-94df-ebfe864e9534 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.550 253542 WARNING nova.compute.manager [req-ecab1eaa-1a6e-4499-b677-7f0baf8c8211 req-76df9f9d-5c68-44ac-a02f-d7ada8822653 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Received unexpected event network-vif-plugged-eab8f4a1-ffde-4387-94df-ebfe864e9534 for instance with vm_state active and task_state deleting.
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.580 253542 DEBUG oslo_concurrency.lockutils [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "225f80e2-9e66-46fb-b77d-9a54fa8a2a41" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.581 253542 DEBUG oslo_concurrency.lockutils [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "225f80e2-9e66-46fb-b77d-9a54fa8a2a41" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.582 253542 DEBUG oslo_concurrency.lockutils [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "225f80e2-9e66-46fb-b77d-9a54fa8a2a41-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.582 253542 DEBUG oslo_concurrency.lockutils [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "225f80e2-9e66-46fb-b77d-9a54fa8a2a41-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.583 253542 DEBUG oslo_concurrency.lockutils [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "225f80e2-9e66-46fb-b77d-9a54fa8a2a41-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.584 253542 INFO nova.compute.manager [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Terminating instance
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.585 253542 DEBUG oslo_concurrency.lockutils [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "refresh_cache-225f80e2-9e66-46fb-b77d-9a54fa8a2a41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.586 253542 DEBUG oslo_concurrency.lockutils [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquired lock "refresh_cache-225f80e2-9e66-46fb-b77d-9a54fa8a2a41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.586 253542 DEBUG nova.network.neutron [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:29:54 compute-0 podman[299283]: 2025-11-25 08:29:54.594944221 +0000 UTC m=+0.051773332 container create 015bfe31d3d7d153eed0b30dae58edadf678bbbd99726048ae72d53a43e8befd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.619 253542 DEBUG nova.network.neutron [-] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:54 compute-0 systemd[1]: Started libpod-conmon-015bfe31d3d7d153eed0b30dae58edadf678bbbd99726048ae72d53a43e8befd.scope.
Nov 25 08:29:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:29:54 compute-0 podman[299283]: 2025-11-25 08:29:54.56919256 +0000 UTC m=+0.026021701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:29:54 compute-0 podman[299283]: 2025-11-25 08:29:54.676111225 +0000 UTC m=+0.132940356 container init 015bfe31d3d7d153eed0b30dae58edadf678bbbd99726048ae72d53a43e8befd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:29:54 compute-0 podman[299283]: 2025-11-25 08:29:54.686315856 +0000 UTC m=+0.143144967 container start 015bfe31d3d7d153eed0b30dae58edadf678bbbd99726048ae72d53a43e8befd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:29:54 compute-0 podman[299283]: 2025-11-25 08:29:54.690017169 +0000 UTC m=+0.146846300 container attach 015bfe31d3d7d153eed0b30dae58edadf678bbbd99726048ae72d53a43e8befd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 08:29:54 compute-0 hopeful_lovelace[299299]: 167 167
Nov 25 08:29:54 compute-0 systemd[1]: libpod-015bfe31d3d7d153eed0b30dae58edadf678bbbd99726048ae72d53a43e8befd.scope: Deactivated successfully.
Nov 25 08:29:54 compute-0 podman[299283]: 2025-11-25 08:29:54.694193694 +0000 UTC m=+0.151022815 container died 015bfe31d3d7d153eed0b30dae58edadf678bbbd99726048ae72d53a43e8befd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.709 253542 INFO nova.compute.manager [-] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Took 1.55 seconds to deallocate network for instance.
Nov 25 08:29:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-68a9872822f03f3f951dcea771eb17e592dcd4cb000b366c3fcc9b7352146431-merged.mount: Deactivated successfully.
Nov 25 08:29:54 compute-0 podman[299283]: 2025-11-25 08:29:54.731726042 +0000 UTC m=+0.188555143 container remove 015bfe31d3d7d153eed0b30dae58edadf678bbbd99726048ae72d53a43e8befd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:29:54 compute-0 ceph-mon[75015]: osdmap e170: 3 total, 3 up, 3 in
Nov 25 08:29:54 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/714800288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:54 compute-0 systemd[1]: libpod-conmon-015bfe31d3d7d153eed0b30dae58edadf678bbbd99726048ae72d53a43e8befd.scope: Deactivated successfully.
Nov 25 08:29:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:29:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237204949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.787 253542 DEBUG oslo_concurrency.processutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.788 253542 DEBUG nova.virt.libvirt.vif [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:29:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-342565699',display_name='tempest-ServersTestManualDisk-server-342565699',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-342565699',id=40,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMDhRS19G0kH2X3pDMDdi0IU8772j8a8Yzb7Q6dfDkWUMmP5RtMrsn0oQKvygM9ASW3NADtnaU7L2yy4qYCSo157088ZYCg7StYPEN9aWQW29kHzWDtSoReuM5x9SQIeWw==',key_name='tempest-keypair-1530897481',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e37021edb08c44fbb7ea019842489f3c',ramdisk_id='',reservation_id='r-9hx6wxky',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1420264531',owner_user_name='tempest-ServersTestManualDisk-1420264531-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:29:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cc7f739420484ab696255ef9cdfcc581',uuid=ce1afa72-143f-43f3-9859-df7f3523888a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "address": "fa:16:3e:13:78:70", "network": {"id": "824090d9-c62a-4860-847c-47ccffd255d9", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-816014635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e37021edb08c44fbb7ea019842489f3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4521eb38-aa", "ovs_interfaceid": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.789 253542 DEBUG nova.network.os_vif_util [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Converting VIF {"id": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "address": "fa:16:3e:13:78:70", "network": {"id": "824090d9-c62a-4860-847c-47ccffd255d9", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-816014635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e37021edb08c44fbb7ea019842489f3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4521eb38-aa", "ovs_interfaceid": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.789 253542 DEBUG nova.network.os_vif_util [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:78:70,bridge_name='br-int',has_traffic_filtering=True,id=4521eb38-aa12-4b8a-92f6-3f1a9121fe5c,network=Network(824090d9-c62a-4860-847c-47ccffd255d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4521eb38-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.791 253542 DEBUG nova.objects.instance [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lazy-loading 'pci_devices' on Instance uuid ce1afa72-143f-43f3-9859-df7f3523888a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.807 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:29:54 compute-0 nova_compute[253538]:   <uuid>ce1afa72-143f-43f3-9859-df7f3523888a</uuid>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   <name>instance-00000028</name>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersTestManualDisk-server-342565699</nova:name>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:29:53</nova:creationTime>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:29:54 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:29:54 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:29:54 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:29:54 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:29:54 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:29:54 compute-0 nova_compute[253538]:         <nova:user uuid="cc7f739420484ab696255ef9cdfcc581">tempest-ServersTestManualDisk-1420264531-project-member</nova:user>
Nov 25 08:29:54 compute-0 nova_compute[253538]:         <nova:project uuid="e37021edb08c44fbb7ea019842489f3c">tempest-ServersTestManualDisk-1420264531</nova:project>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:29:54 compute-0 nova_compute[253538]:         <nova:port uuid="4521eb38-aa12-4b8a-92f6-3f1a9121fe5c">
Nov 25 08:29:54 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <system>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <entry name="serial">ce1afa72-143f-43f3-9859-df7f3523888a</entry>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <entry name="uuid">ce1afa72-143f-43f3-9859-df7f3523888a</entry>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     </system>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   <os>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   </os>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   <features>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   </features>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/ce1afa72-143f-43f3-9859-df7f3523888a_disk">
Nov 25 08:29:54 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       </source>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:29:54 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/ce1afa72-143f-43f3-9859-df7f3523888a_disk.config">
Nov 25 08:29:54 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       </source>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:29:54 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:13:78:70"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <target dev="tap4521eb38-aa"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/ce1afa72-143f-43f3-9859-df7f3523888a/console.log" append="off"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <video>
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     </video>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:29:54 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:29:54 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:29:54 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:29:54 compute-0 nova_compute[253538]: </domain>
Nov 25 08:29:54 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.808 253542 DEBUG nova.compute.manager [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Preparing to wait for external event network-vif-plugged-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.808 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Acquiring lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.809 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.809 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.810 253542 DEBUG nova.virt.libvirt.vif [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:29:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-342565699',display_name='tempest-ServersTestManualDisk-server-342565699',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-342565699',id=40,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMDhRS19G0kH2X3pDMDdi0IU8772j8a8Yzb7Q6dfDkWUMmP5RtMrsn0oQKvygM9ASW3NADtnaU7L2yy4qYCSo157088ZYCg7StYPEN9aWQW29kHzWDtSoReuM5x9SQIeWw==',key_name='tempest-keypair-1530897481',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e37021edb08c44fbb7ea019842489f3c',ramdisk_id='',reservation_id='r-9hx6wxky',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1420264531',owner_user_name='tempest-ServersTestManualDisk-1420264531-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:29:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cc7f739420484ab696255ef9cdfcc581',uuid=ce1afa72-143f-43f3-9859-df7f3523888a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "address": "fa:16:3e:13:78:70", "network": {"id": "824090d9-c62a-4860-847c-47ccffd255d9", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-816014635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e37021edb08c44fbb7ea019842489f3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4521eb38-aa", "ovs_interfaceid": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.810 253542 DEBUG nova.network.os_vif_util [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Converting VIF {"id": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "address": "fa:16:3e:13:78:70", "network": {"id": "824090d9-c62a-4860-847c-47ccffd255d9", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-816014635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e37021edb08c44fbb7ea019842489f3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4521eb38-aa", "ovs_interfaceid": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.810 253542 DEBUG nova.network.os_vif_util [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:78:70,bridge_name='br-int',has_traffic_filtering=True,id=4521eb38-aa12-4b8a-92f6-3f1a9121fe5c,network=Network(824090d9-c62a-4860-847c-47ccffd255d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4521eb38-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.811 253542 DEBUG os_vif [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:78:70,bridge_name='br-int',has_traffic_filtering=True,id=4521eb38-aa12-4b8a-92f6-3f1a9121fe5c,network=Network(824090d9-c62a-4860-847c-47ccffd255d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4521eb38-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.811 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.812 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.812 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.815 253542 DEBUG nova.network.neutron [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.823 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.823 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4521eb38-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.824 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4521eb38-aa, col_values=(('external_ids', {'iface-id': '4521eb38-aa12-4b8a-92f6-3f1a9121fe5c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:78:70', 'vm-uuid': 'ce1afa72-143f-43f3-9859-df7f3523888a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.825 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:54 compute-0 NetworkManager[48915]: <info>  [1764059394.8267] manager: (tap4521eb38-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.827 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.833 253542 DEBUG oslo_concurrency.lockutils [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.833 253542 DEBUG oslo_concurrency.lockutils [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.835 253542 INFO os_vif [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:78:70,bridge_name='br-int',has_traffic_filtering=True,id=4521eb38-aa12-4b8a-92f6-3f1a9121fe5c,network=Network(824090d9-c62a-4860-847c-47ccffd255d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4521eb38-aa')
Nov 25 08:29:54 compute-0 podman[299328]: 2025-11-25 08:29:54.895990641 +0000 UTC m=+0.043812771 container create 50525884c317d5970286c72722e28d4c64aadd99c90b6a6f35639c1282818b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.899 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.900 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.900 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] No VIF found with MAC fa:16:3e:13:78:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.901 253542 INFO nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Using config drive
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.921 253542 DEBUG nova.storage.rbd_utils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] rbd image ce1afa72-143f-43f3-9859-df7f3523888a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:54 compute-0 systemd[1]: Started libpod-conmon-50525884c317d5970286c72722e28d4c64aadd99c90b6a6f35639c1282818b39.scope.
Nov 25 08:29:54 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.954 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:54 compute-0 podman[299328]: 2025-11-25 08:29:54.875168026 +0000 UTC m=+0.022990196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:29:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a44b0060ab41b26da4cec6106f4bb6868ec0ae2769bb4b89c94011e057c1af9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a44b0060ab41b26da4cec6106f4bb6868ec0ae2769bb4b89c94011e057c1af9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a44b0060ab41b26da4cec6106f4bb6868ec0ae2769bb4b89c94011e057c1af9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a44b0060ab41b26da4cec6106f4bb6868ec0ae2769bb4b89c94011e057c1af9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:29:55 compute-0 nova_compute[253538]: 2025-11-25 08:29:54.999 253542 DEBUG oslo_concurrency.processutils [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:55 compute-0 podman[299328]: 2025-11-25 08:29:55.004986294 +0000 UTC m=+0.152808454 container init 50525884c317d5970286c72722e28d4c64aadd99c90b6a6f35639c1282818b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 08:29:55 compute-0 podman[299328]: 2025-11-25 08:29:55.012911293 +0000 UTC m=+0.160733433 container start 50525884c317d5970286c72722e28d4c64aadd99c90b6a6f35639c1282818b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:29:55 compute-0 podman[299328]: 2025-11-25 08:29:55.023776284 +0000 UTC m=+0.171598454 container attach 50525884c317d5970286c72722e28d4c64aadd99c90b6a6f35639c1282818b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:29:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 310 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 256 op/s
Nov 25 08:29:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:29:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Nov 25 08:29:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:29:55 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3383558781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Nov 25 08:29:55 compute-0 nova_compute[253538]: 2025-11-25 08:29:55.459 253542 DEBUG oslo_concurrency.processutils [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:55 compute-0 nova_compute[253538]: 2025-11-25 08:29:55.473 253542 DEBUG nova.compute.provider_tree [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:29:55 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Nov 25 08:29:55 compute-0 nova_compute[253538]: 2025-11-25 08:29:55.490 253542 DEBUG nova.scheduler.client.report [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:29:55 compute-0 nova_compute[253538]: 2025-11-25 08:29:55.532 253542 DEBUG oslo_concurrency.lockutils [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:55 compute-0 nova_compute[253538]: 2025-11-25 08:29:55.626 253542 INFO nova.scheduler.client.report [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Deleted allocations for instance 34941187-b6b9-4153-a8b9-6f5c00f10dda
Nov 25 08:29:55 compute-0 nova_compute[253538]: 2025-11-25 08:29:55.646 253542 DEBUG nova.network.neutron [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:55 compute-0 nova_compute[253538]: 2025-11-25 08:29:55.690 253542 DEBUG oslo_concurrency.lockutils [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Releasing lock "refresh_cache-225f80e2-9e66-46fb-b77d-9a54fa8a2a41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:29:55 compute-0 nova_compute[253538]: 2025-11-25 08:29:55.691 253542 DEBUG nova.compute.manager [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:29:55 compute-0 nova_compute[253538]: 2025-11-25 08:29:55.700 253542 INFO nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Creating config drive at /var/lib/nova/instances/ce1afa72-143f-43f3-9859-df7f3523888a/disk.config
Nov 25 08:29:55 compute-0 nova_compute[253538]: 2025-11-25 08:29:55.707 253542 DEBUG oslo_concurrency.processutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce1afa72-143f-43f3-9859-df7f3523888a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzwqq50z8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:55 compute-0 nova_compute[253538]: 2025-11-25 08:29:55.746 253542 DEBUG oslo_concurrency.lockutils [None req-953aac5a-2931-4727-be29-4dd517d2dca1 38fa175fb699405c9a05d7c28f994ebc b0a28d62fb1841c087b84b40bf5a54ec - - default default] Lock "34941187-b6b9-4153-a8b9-6f5c00f10dda" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:55 compute-0 nervous_allen[299363]: {
Nov 25 08:29:55 compute-0 nervous_allen[299363]:     "0": [
Nov 25 08:29:55 compute-0 nervous_allen[299363]:         {
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "devices": [
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "/dev/loop3"
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             ],
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "lv_name": "ceph_lv0",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "lv_size": "21470642176",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "name": "ceph_lv0",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "tags": {
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.cluster_name": "ceph",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.crush_device_class": "",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.encrypted": "0",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.osd_id": "0",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.type": "block",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.vdo": "0"
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             },
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "type": "block",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "vg_name": "ceph_vg0"
Nov 25 08:29:55 compute-0 nervous_allen[299363]:         }
Nov 25 08:29:55 compute-0 nervous_allen[299363]:     ],
Nov 25 08:29:55 compute-0 nervous_allen[299363]:     "1": [
Nov 25 08:29:55 compute-0 nervous_allen[299363]:         {
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "devices": [
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "/dev/loop4"
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             ],
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "lv_name": "ceph_lv1",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "lv_size": "21470642176",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "name": "ceph_lv1",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "tags": {
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.cluster_name": "ceph",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.crush_device_class": "",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.encrypted": "0",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.osd_id": "1",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.type": "block",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.vdo": "0"
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             },
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "type": "block",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "vg_name": "ceph_vg1"
Nov 25 08:29:55 compute-0 nervous_allen[299363]:         }
Nov 25 08:29:55 compute-0 nervous_allen[299363]:     ],
Nov 25 08:29:55 compute-0 nervous_allen[299363]:     "2": [
Nov 25 08:29:55 compute-0 nervous_allen[299363]:         {
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "devices": [
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "/dev/loop5"
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             ],
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "lv_name": "ceph_lv2",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "lv_size": "21470642176",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "name": "ceph_lv2",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "tags": {
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.cluster_name": "ceph",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.crush_device_class": "",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.encrypted": "0",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.osd_id": "2",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.type": "block",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:                 "ceph.vdo": "0"
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             },
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "type": "block",
Nov 25 08:29:55 compute-0 nervous_allen[299363]:             "vg_name": "ceph_vg2"
Nov 25 08:29:55 compute-0 nervous_allen[299363]:         }
Nov 25 08:29:55 compute-0 nervous_allen[299363]:     ]
Nov 25 08:29:55 compute-0 nervous_allen[299363]: }
Nov 25 08:29:55 compute-0 systemd[1]: libpod-50525884c317d5970286c72722e28d4c64aadd99c90b6a6f35639c1282818b39.scope: Deactivated successfully.
Nov 25 08:29:55 compute-0 podman[299328]: 2025-11-25 08:29:55.802782445 +0000 UTC m=+0.950604625 container died 50525884c317d5970286c72722e28d4c64aadd99c90b6a6f35639c1282818b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:29:55 compute-0 nova_compute[253538]: 2025-11-25 08:29:55.847 253542 DEBUG oslo_concurrency.processutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce1afa72-143f-43f3-9859-df7f3523888a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzwqq50z8" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:55 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3237204949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:29:55 compute-0 ceph-mon[75015]: pgmap v1391: 321 pgs: 321 active+clean; 310 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 256 op/s
Nov 25 08:29:55 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3383558781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:29:55 compute-0 ceph-mon[75015]: osdmap e171: 3 total, 3 up, 3 in
Nov 25 08:29:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a44b0060ab41b26da4cec6106f4bb6868ec0ae2769bb4b89c94011e057c1af9e-merged.mount: Deactivated successfully.
Nov 25 08:29:55 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000025.scope: Deactivated successfully.
Nov 25 08:29:55 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000025.scope: Consumed 14.148s CPU time.
Nov 25 08:29:56 compute-0 systemd-machined[215790]: Machine qemu-41-instance-00000025 terminated.
Nov 25 08:29:56 compute-0 nova_compute[253538]: 2025-11-25 08:29:56.051 253542 DEBUG nova.storage.rbd_utils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] rbd image ce1afa72-143f-43f3-9859-df7f3523888a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:29:56 compute-0 nova_compute[253538]: 2025-11-25 08:29:56.055 253542 DEBUG oslo_concurrency.processutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce1afa72-143f-43f3-9859-df7f3523888a/disk.config ce1afa72-143f-43f3-9859-df7f3523888a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:29:56 compute-0 nova_compute[253538]: 2025-11-25 08:29:56.115 253542 INFO nova.virt.libvirt.driver [-] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Instance destroyed successfully.
Nov 25 08:29:56 compute-0 nova_compute[253538]: 2025-11-25 08:29:56.116 253542 DEBUG nova.objects.instance [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lazy-loading 'resources' on Instance uuid 225f80e2-9e66-46fb-b77d-9a54fa8a2a41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:29:56 compute-0 podman[299328]: 2025-11-25 08:29:56.203596883 +0000 UTC m=+1.351419023 container remove 50525884c317d5970286c72722e28d4c64aadd99c90b6a6f35639c1282818b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 08:29:56 compute-0 systemd[1]: libpod-conmon-50525884c317d5970286c72722e28d4c64aadd99c90b6a6f35639c1282818b39.scope: Deactivated successfully.
Nov 25 08:29:56 compute-0 sudo[299179]: pam_unix(sudo:session): session closed for user root
Nov 25 08:29:56 compute-0 sudo[299463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:29:56 compute-0 sudo[299463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:29:56 compute-0 sudo[299463]: pam_unix(sudo:session): session closed for user root
Nov 25 08:29:56 compute-0 nova_compute[253538]: 2025-11-25 08:29:56.361 253542 DEBUG nova.network.neutron [req-0ced56c2-3ebc-4f9e-8065-a1491fd7940e req-c5d146b9-59ae-48ec-8450-edc7f1b7d79c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Updated VIF entry in instance network info cache for port 4521eb38-aa12-4b8a-92f6-3f1a9121fe5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:29:56 compute-0 nova_compute[253538]: 2025-11-25 08:29:56.363 253542 DEBUG nova.network.neutron [req-0ced56c2-3ebc-4f9e-8065-a1491fd7940e req-c5d146b9-59ae-48ec-8450-edc7f1b7d79c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Updating instance_info_cache with network_info: [{"id": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "address": "fa:16:3e:13:78:70", "network": {"id": "824090d9-c62a-4860-847c-47ccffd255d9", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-816014635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e37021edb08c44fbb7ea019842489f3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4521eb38-aa", "ovs_interfaceid": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:29:56 compute-0 sudo[299489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:29:56 compute-0 sudo[299489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:29:56 compute-0 nova_compute[253538]: 2025-11-25 08:29:56.375 253542 DEBUG oslo_concurrency.lockutils [req-0ced56c2-3ebc-4f9e-8065-a1491fd7940e req-c5d146b9-59ae-48ec-8450-edc7f1b7d79c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ce1afa72-143f-43f3-9859-df7f3523888a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:29:56 compute-0 sudo[299489]: pam_unix(sudo:session): session closed for user root
Nov 25 08:29:56 compute-0 sudo[299514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:29:56 compute-0 sudo[299514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:29:56 compute-0 sudo[299514]: pam_unix(sudo:session): session closed for user root
Nov 25 08:29:56 compute-0 sudo[299539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:29:56 compute-0 sudo[299539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:29:56 compute-0 nova_compute[253538]: 2025-11-25 08:29:56.623 253542 DEBUG nova.compute.manager [req-66a23416-a3ba-429a-ab49-e2295352d42f req-a4f07a60-245b-4b97-8f4f-f53e5ef09c08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Received event network-vif-deleted-eab8f4a1-ffde-4387-94df-ebfe864e9534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:56 compute-0 podman[299605]: 2025-11-25 08:29:56.83661674 +0000 UTC m=+0.038794594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:29:57 compute-0 podman[299605]: 2025-11-25 08:29:57.054575084 +0000 UTC m=+0.256752908 container create b70e776d83d7f674c6588867b69e35b8201f47a71becb69b6fdd408f70326ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 08:29:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 261 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 172 KiB/s rd, 2.4 MiB/s wr, 247 op/s
Nov 25 08:29:57 compute-0 systemd[1]: Started libpod-conmon-b70e776d83d7f674c6588867b69e35b8201f47a71becb69b6fdd408f70326ea4.scope.
Nov 25 08:29:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:29:57 compute-0 podman[299605]: 2025-11-25 08:29:57.287564934 +0000 UTC m=+0.489742778 container init b70e776d83d7f674c6588867b69e35b8201f47a71becb69b6fdd408f70326ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:29:57 compute-0 podman[299605]: 2025-11-25 08:29:57.294655799 +0000 UTC m=+0.496833643 container start b70e776d83d7f674c6588867b69e35b8201f47a71becb69b6fdd408f70326ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_satoshi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 08:29:57 compute-0 busy_satoshi[299624]: 167 167
Nov 25 08:29:57 compute-0 systemd[1]: libpod-b70e776d83d7f674c6588867b69e35b8201f47a71becb69b6fdd408f70326ea4.scope: Deactivated successfully.
Nov 25 08:29:57 compute-0 podman[299605]: 2025-11-25 08:29:57.324217027 +0000 UTC m=+0.526394871 container attach b70e776d83d7f674c6588867b69e35b8201f47a71becb69b6fdd408f70326ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 25 08:29:57 compute-0 podman[299605]: 2025-11-25 08:29:57.324580867 +0000 UTC m=+0.526758701 container died b70e776d83d7f674c6588867b69e35b8201f47a71becb69b6fdd408f70326ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_satoshi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 08:29:57 compute-0 nova_compute[253538]: 2025-11-25 08:29:57.939 253542 DEBUG oslo_concurrency.processutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce1afa72-143f-43f3-9859-df7f3523888a/disk.config ce1afa72-143f-43f3-9859-df7f3523888a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.883s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:29:57 compute-0 nova_compute[253538]: 2025-11-25 08:29:57.940 253542 INFO nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Deleting local config drive /var/lib/nova/instances/ce1afa72-143f-43f3-9859-df7f3523888a/disk.config because it was imported into RBD.
Nov 25 08:29:58 compute-0 kernel: tap4521eb38-aa: entered promiscuous mode
Nov 25 08:29:58 compute-0 systemd-udevd[299427]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:29:58 compute-0 ovn_controller[152859]: 2025-11-25T08:29:58Z|00277|binding|INFO|Claiming lport 4521eb38-aa12-4b8a-92f6-3f1a9121fe5c for this chassis.
Nov 25 08:29:58 compute-0 ovn_controller[152859]: 2025-11-25T08:29:58Z|00278|binding|INFO|4521eb38-aa12-4b8a-92f6-3f1a9121fe5c: Claiming fa:16:3e:13:78:70 10.100.0.8
Nov 25 08:29:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-168a0035aafe40632caacb27c0f00f443573e59276dcf2b65e35b15909b67d94-merged.mount: Deactivated successfully.
Nov 25 08:29:58 compute-0 nova_compute[253538]: 2025-11-25 08:29:58.032 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:58 compute-0 NetworkManager[48915]: <info>  [1764059398.0343] manager: (tap4521eb38-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Nov 25 08:29:58 compute-0 NetworkManager[48915]: <info>  [1764059398.0455] device (tap4521eb38-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:29:58 compute-0 NetworkManager[48915]: <info>  [1764059398.0466] device (tap4521eb38-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:29:58 compute-0 systemd-machined[215790]: New machine qemu-45-instance-00000028.
Nov 25 08:29:58 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-00000028.
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.142 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:78:70 10.100.0.8'], port_security=['fa:16:3e:13:78:70 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ce1afa72-143f-43f3-9859-df7f3523888a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-824090d9-c62a-4860-847c-47ccffd255d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e37021edb08c44fbb7ea019842489f3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '868a82d8-4d65-49d2-a7a6-131752d09d46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18939c60-bcf8-4c2d-8acd-d9d0c9ad5069, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=4521eb38-aa12-4b8a-92f6-3f1a9121fe5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.144 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 4521eb38-aa12-4b8a-92f6-3f1a9121fe5c in datapath 824090d9-c62a-4860-847c-47ccffd255d9 bound to our chassis
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.147 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 824090d9-c62a-4860-847c-47ccffd255d9
Nov 25 08:29:58 compute-0 nova_compute[253538]: 2025-11-25 08:29:58.149 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:58 compute-0 ovn_controller[152859]: 2025-11-25T08:29:58Z|00279|binding|INFO|Setting lport 4521eb38-aa12-4b8a-92f6-3f1a9121fe5c ovn-installed in OVS
Nov 25 08:29:58 compute-0 ovn_controller[152859]: 2025-11-25T08:29:58Z|00280|binding|INFO|Setting lport 4521eb38-aa12-4b8a-92f6-3f1a9121fe5c up in Southbound
Nov 25 08:29:58 compute-0 nova_compute[253538]: 2025-11-25 08:29:58.158 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.159 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[81e612f7-56d8-4efe-9125-4f5cc34ff8af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.160 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap824090d9-c1 in ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.162 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap824090d9-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.162 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8d63d0bc-c481-49b5-adaf-530b248c457c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.164 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[199c8a65-bbb5-4eb7-9027-73469c201c6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.175 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[92b388b2-5060-406f-9823-1b3d1379388d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.204 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5db49dfa-cbae-4208-982d-d1c8cb1ed5b3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.240 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8529e366-1f18-4a34-880d-69dfc4bf38c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.245 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[86fc6a04-3a0c-443d-af2f-7703a88f2666]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:58 compute-0 systemd-udevd[299653]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:29:58 compute-0 NetworkManager[48915]: <info>  [1764059398.2485] manager: (tap824090d9-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/130)
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.290 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[34073851-6f1b-45c2-9f26-d8159a8b812c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.293 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fd66d0bd-87a9-4237-bdb2-1874d855c440]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:58 compute-0 NetworkManager[48915]: <info>  [1764059398.3235] device (tap824090d9-c0): carrier: link connected
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.331 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fa863772-ef56-4a7d-a2df-d0d8928c1bc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.350 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[360a5855-2bff-4b34-aab7-523ecb86f200]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap824090d9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:fb:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475292, 'reachable_time': 25621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299687, 'error': None, 'target': 'ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.378 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e10373a5-0765-44f3-93eb-92aeb2d06db4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:fb8c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 475292, 'tstamp': 475292}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299695, 'error': None, 'target': 'ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.407 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6cbae8e7-7a98-442c-b303-20f9c77bf56a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap824090d9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:fb:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475292, 'reachable_time': 25621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299707, 'error': None, 'target': 'ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.448 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79e034ac-00b9-4600-b288-9bd8394d697b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.557 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[107afc61-4416-4dbc-a1dd-9b2c8dfc470f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.558 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap824090d9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.558 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.559 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap824090d9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:58 compute-0 nova_compute[253538]: 2025-11-25 08:29:58.589 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:58 compute-0 kernel: tap824090d9-c0: entered promiscuous mode
Nov 25 08:29:58 compute-0 NetworkManager[48915]: <info>  [1764059398.5921] manager: (tap824090d9-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Nov 25 08:29:58 compute-0 nova_compute[253538]: 2025-11-25 08:29:58.592 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.595 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap824090d9-c0, col_values=(('external_ids', {'iface-id': 'f0e1127d-a27d-48d2-a04e-31cdc9a5e135'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:29:58 compute-0 ovn_controller[152859]: 2025-11-25T08:29:58Z|00281|binding|INFO|Releasing lport f0e1127d-a27d-48d2-a04e-31cdc9a5e135 from this chassis (sb_readonly=0)
Nov 25 08:29:58 compute-0 nova_compute[253538]: 2025-11-25 08:29:58.596 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:58 compute-0 nova_compute[253538]: 2025-11-25 08:29:58.615 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:58 compute-0 ceph-mon[75015]: pgmap v1393: 321 pgs: 321 active+clean; 261 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 172 KiB/s rd, 2.4 MiB/s wr, 247 op/s
Nov 25 08:29:58 compute-0 nova_compute[253538]: 2025-11-25 08:29:58.616 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.616 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/824090d9-c62a-4860-847c-47ccffd255d9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/824090d9-c62a-4860-847c-47ccffd255d9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.617 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[414ced9d-ef55-420e-b789-5ac430ca3060]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.618 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-824090d9-c62a-4860-847c-47ccffd255d9
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/824090d9-c62a-4860-847c-47ccffd255d9.pid.haproxy
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 824090d9-c62a-4860-847c-47ccffd255d9
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:29:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:29:58.619 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9', 'env', 'PROCESS_TAG=haproxy-824090d9-c62a-4860-847c-47ccffd255d9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/824090d9-c62a-4860-847c-47ccffd255d9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:29:58 compute-0 podman[299605]: 2025-11-25 08:29:58.732622734 +0000 UTC m=+1.934800558 container remove b70e776d83d7f674c6588867b69e35b8201f47a71becb69b6fdd408f70326ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_satoshi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:29:58 compute-0 nova_compute[253538]: 2025-11-25 08:29:58.737 253542 DEBUG nova.compute.manager [req-864069fd-592f-4cc4-8ff2-7aca5fb334e6 req-0deb4384-47ac-4f03-a146-1875ecff4e14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Received event network-vif-plugged-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:29:58 compute-0 nova_compute[253538]: 2025-11-25 08:29:58.737 253542 DEBUG oslo_concurrency.lockutils [req-864069fd-592f-4cc4-8ff2-7aca5fb334e6 req-0deb4384-47ac-4f03-a146-1875ecff4e14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:29:58 compute-0 nova_compute[253538]: 2025-11-25 08:29:58.738 253542 DEBUG oslo_concurrency.lockutils [req-864069fd-592f-4cc4-8ff2-7aca5fb334e6 req-0deb4384-47ac-4f03-a146-1875ecff4e14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:29:58 compute-0 nova_compute[253538]: 2025-11-25 08:29:58.738 253542 DEBUG oslo_concurrency.lockutils [req-864069fd-592f-4cc4-8ff2-7aca5fb334e6 req-0deb4384-47ac-4f03-a146-1875ecff4e14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:58 compute-0 nova_compute[253538]: 2025-11-25 08:29:58.738 253542 DEBUG nova.compute.manager [req-864069fd-592f-4cc4-8ff2-7aca5fb334e6 req-0deb4384-47ac-4f03-a146-1875ecff4e14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Processing event network-vif-plugged-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:29:58 compute-0 systemd[1]: libpod-conmon-b70e776d83d7f674c6588867b69e35b8201f47a71becb69b6fdd408f70326ea4.scope: Deactivated successfully.
Nov 25 08:29:58 compute-0 podman[299742]: 2025-11-25 08:29:58.889462059 +0000 UTC m=+0.020725264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:29:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 246 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 142 KiB/s rd, 2.0 MiB/s wr, 205 op/s
Nov 25 08:29:59 compute-0 podman[299742]: 2025-11-25 08:29:59.308949514 +0000 UTC m=+0.440212729 container create f2d9b11baf10205611e2ee05a8dbaed94f127e65b059bed63e40e14b83caca26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.450 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059399.4495244, ce1afa72-143f-43f3-9859-df7f3523888a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.450 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] VM Started (Lifecycle Event)
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.454 253542 DEBUG nova.compute.manager [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.467 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.474 253542 INFO nova.virt.libvirt.driver [-] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Instance spawned successfully.
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.476 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.479 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.501 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.507 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.507 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.508 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.509 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.510 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.510 253542 DEBUG nova.virt.libvirt.driver [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:29:59 compute-0 systemd[1]: Started libpod-conmon-f2d9b11baf10205611e2ee05a8dbaed94f127e65b059bed63e40e14b83caca26.scope.
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.544 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.544 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059399.4497626, ce1afa72-143f-43f3-9859-df7f3523888a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.545 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] VM Paused (Lifecycle Event)
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.575 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.578 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059399.466129, ce1afa72-143f-43f3-9859-df7f3523888a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.579 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] VM Resumed (Lifecycle Event)
Nov 25 08:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f7c7a134cacbea8a2fef7d773e82e5ef29e8d4a01baee283221ce1b6b2ac46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f7c7a134cacbea8a2fef7d773e82e5ef29e8d4a01baee283221ce1b6b2ac46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f7c7a134cacbea8a2fef7d773e82e5ef29e8d4a01baee283221ce1b6b2ac46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f7c7a134cacbea8a2fef7d773e82e5ef29e8d4a01baee283221ce1b6b2ac46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.596 253542 INFO nova.compute.manager [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Took 11.08 seconds to spawn the instance on the hypervisor.
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.597 253542 DEBUG nova.compute.manager [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.598 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.604 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.627 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.667 253542 INFO nova.compute.manager [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Took 12.09 seconds to build instance.
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.682 253542 DEBUG oslo_concurrency.lockutils [None req-3904c638-fb24-4579-a5c0-af0367188ae0 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lock "ce1afa72-143f-43f3-9859-df7f3523888a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:29:59 compute-0 podman[299789]: 2025-11-25 08:29:59.59501986 +0000 UTC m=+0.030353440 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:29:59 compute-0 podman[299742]: 2025-11-25 08:29:59.711128139 +0000 UTC m=+0.842391404 container init f2d9b11baf10205611e2ee05a8dbaed94f127e65b059bed63e40e14b83caca26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:29:59 compute-0 podman[299742]: 2025-11-25 08:29:59.730953217 +0000 UTC m=+0.862216422 container start f2d9b11baf10205611e2ee05a8dbaed94f127e65b059bed63e40e14b83caca26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:29:59 compute-0 ceph-mon[75015]: pgmap v1394: 321 pgs: 321 active+clean; 246 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 142 KiB/s rd, 2.0 MiB/s wr, 205 op/s
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.828 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:59 compute-0 podman[299742]: 2025-11-25 08:29:59.834448418 +0000 UTC m=+0.965711673 container attach f2d9b11baf10205611e2ee05a8dbaed94f127e65b059bed63e40e14b83caca26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_robinson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 08:29:59 compute-0 nova_compute[253538]: 2025-11-25 08:29:59.956 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:29:59 compute-0 podman[299789]: 2025-11-25 08:29:59.960662986 +0000 UTC m=+0.395996536 container create 8ff12cb12cbe19feec9a0eb4f56ea43d873f8344ae0dbda5484502a526892a37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:30:00 compute-0 systemd[1]: Started libpod-conmon-8ff12cb12cbe19feec9a0eb4f56ea43d873f8344ae0dbda5484502a526892a37.scope.
Nov 25 08:30:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b87c0f7057e314a671ce4e484b7e89dc62dc70e55daf171ffb4199d529243f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:30:00 compute-0 podman[299789]: 2025-11-25 08:30:00.204880876 +0000 UTC m=+0.640214416 container init 8ff12cb12cbe19feec9a0eb4f56ea43d873f8344ae0dbda5484502a526892a37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:30:00 compute-0 podman[299789]: 2025-11-25 08:30:00.215422228 +0000 UTC m=+0.650755778 container start 8ff12cb12cbe19feec9a0eb4f56ea43d873f8344ae0dbda5484502a526892a37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:30:00 compute-0 neutron-haproxy-ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9[299807]: [NOTICE]   (299811) : New worker (299813) forked
Nov 25 08:30:00 compute-0 neutron-haproxy-ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9[299807]: [NOTICE]   (299811) : Loading success.
Nov 25 08:30:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:30:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Nov 25 08:30:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Nov 25 08:30:00 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Nov 25 08:30:00 compute-0 nova_compute[253538]: 2025-11-25 08:30:00.844 253542 DEBUG nova.compute.manager [req-d9b181a4-ab86-46b2-b850-f66f99477924 req-a7bf6ec1-feaa-4cc5-b6eb-8aab0a6f5a34 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Received event network-vif-plugged-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:00 compute-0 nova_compute[253538]: 2025-11-25 08:30:00.844 253542 DEBUG oslo_concurrency.lockutils [req-d9b181a4-ab86-46b2-b850-f66f99477924 req-a7bf6ec1-feaa-4cc5-b6eb-8aab0a6f5a34 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:00 compute-0 nova_compute[253538]: 2025-11-25 08:30:00.845 253542 DEBUG oslo_concurrency.lockutils [req-d9b181a4-ab86-46b2-b850-f66f99477924 req-a7bf6ec1-feaa-4cc5-b6eb-8aab0a6f5a34 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:00 compute-0 nova_compute[253538]: 2025-11-25 08:30:00.845 253542 DEBUG oslo_concurrency.lockutils [req-d9b181a4-ab86-46b2-b850-f66f99477924 req-a7bf6ec1-feaa-4cc5-b6eb-8aab0a6f5a34 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:00 compute-0 nova_compute[253538]: 2025-11-25 08:30:00.845 253542 DEBUG nova.compute.manager [req-d9b181a4-ab86-46b2-b850-f66f99477924 req-a7bf6ec1-feaa-4cc5-b6eb-8aab0a6f5a34 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] No waiting events found dispatching network-vif-plugged-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:30:00 compute-0 nova_compute[253538]: 2025-11-25 08:30:00.846 253542 WARNING nova.compute.manager [req-d9b181a4-ab86-46b2-b850-f66f99477924 req-a7bf6ec1-feaa-4cc5-b6eb-8aab0a6f5a34 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Received unexpected event network-vif-plugged-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c for instance with vm_state active and task_state None.
Nov 25 08:30:00 compute-0 cranky_robinson[299782]: {
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:         "osd_id": 1,
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:         "type": "bluestore"
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:     },
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:         "osd_id": 2,
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:         "type": "bluestore"
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:     },
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:         "osd_id": 0,
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:         "type": "bluestore"
Nov 25 08:30:00 compute-0 cranky_robinson[299782]:     }
Nov 25 08:30:00 compute-0 cranky_robinson[299782]: }
Nov 25 08:30:00 compute-0 systemd[1]: libpod-f2d9b11baf10205611e2ee05a8dbaed94f127e65b059bed63e40e14b83caca26.scope: Deactivated successfully.
Nov 25 08:30:00 compute-0 systemd[1]: libpod-f2d9b11baf10205611e2ee05a8dbaed94f127e65b059bed63e40e14b83caca26.scope: Consumed 1.152s CPU time.
Nov 25 08:30:00 compute-0 podman[299742]: 2025-11-25 08:30:00.937561108 +0000 UTC m=+2.068824313 container died f2d9b11baf10205611e2ee05a8dbaed94f127e65b059bed63e40e14b83caca26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_robinson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:30:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 246 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 23 KiB/s wr, 121 op/s
Nov 25 08:30:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-94f7c7a134cacbea8a2fef7d773e82e5ef29e8d4a01baee283221ce1b6b2ac46-merged.mount: Deactivated successfully.
Nov 25 08:30:01 compute-0 podman[299742]: 2025-11-25 08:30:01.479747573 +0000 UTC m=+2.611010808 container remove f2d9b11baf10205611e2ee05a8dbaed94f127e65b059bed63e40e14b83caca26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 08:30:01 compute-0 systemd[1]: libpod-conmon-f2d9b11baf10205611e2ee05a8dbaed94f127e65b059bed63e40e14b83caca26.scope: Deactivated successfully.
Nov 25 08:30:01 compute-0 sudo[299539]: pam_unix(sudo:session): session closed for user root
Nov 25 08:30:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:30:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:30:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:30:01 compute-0 ceph-mon[75015]: osdmap e172: 3 total, 3 up, 3 in
Nov 25 08:30:01 compute-0 ceph-mon[75015]: pgmap v1396: 321 pgs: 321 active+clean; 246 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 23 KiB/s wr, 121 op/s
Nov 25 08:30:02 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:30:02 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 37465248-d0d5-4d38-b55f-f0bde82b0dd8 does not exist
Nov 25 08:30:02 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 7dd3b9c3-9839-4b9c-817e-cdb9f47dcda5 does not exist
Nov 25 08:30:02 compute-0 sudo[299862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:30:02 compute-0 nova_compute[253538]: 2025-11-25 08:30:02.470 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:02 compute-0 sudo[299862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:30:02 compute-0 NetworkManager[48915]: <info>  [1764059402.4731] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Nov 25 08:30:02 compute-0 NetworkManager[48915]: <info>  [1764059402.4740] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Nov 25 08:30:02 compute-0 sudo[299862]: pam_unix(sudo:session): session closed for user root
Nov 25 08:30:02 compute-0 nova_compute[253538]: 2025-11-25 08:30:02.534 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:02 compute-0 ovn_controller[152859]: 2025-11-25T08:30:02Z|00282|binding|INFO|Releasing lport f0e1127d-a27d-48d2-a04e-31cdc9a5e135 from this chassis (sb_readonly=0)
Nov 25 08:30:02 compute-0 sudo[299887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:30:02 compute-0 sudo[299887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:30:02 compute-0 nova_compute[253538]: 2025-11-25 08:30:02.545 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:02 compute-0 sudo[299887]: pam_unix(sudo:session): session closed for user root
Nov 25 08:30:02 compute-0 nova_compute[253538]: 2025-11-25 08:30:02.608 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:30:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:30:02 compute-0 nova_compute[253538]: 2025-11-25 08:30:02.895 253542 DEBUG nova.compute.manager [req-b7290989-4d4d-444d-8fa8-4bb5279852a6 req-1aee1ace-a92d-42dd-ac49-afe1b4d5ff32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Received event network-changed-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:02 compute-0 nova_compute[253538]: 2025-11-25 08:30:02.896 253542 DEBUG nova.compute.manager [req-b7290989-4d4d-444d-8fa8-4bb5279852a6 req-1aee1ace-a92d-42dd-ac49-afe1b4d5ff32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Refreshing instance network info cache due to event network-changed-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:30:02 compute-0 nova_compute[253538]: 2025-11-25 08:30:02.896 253542 DEBUG oslo_concurrency.lockutils [req-b7290989-4d4d-444d-8fa8-4bb5279852a6 req-1aee1ace-a92d-42dd-ac49-afe1b4d5ff32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ce1afa72-143f-43f3-9859-df7f3523888a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:30:02 compute-0 nova_compute[253538]: 2025-11-25 08:30:02.896 253542 DEBUG oslo_concurrency.lockutils [req-b7290989-4d4d-444d-8fa8-4bb5279852a6 req-1aee1ace-a92d-42dd-ac49-afe1b4d5ff32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ce1afa72-143f-43f3-9859-df7f3523888a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:30:02 compute-0 nova_compute[253538]: 2025-11-25 08:30:02.896 253542 DEBUG nova.network.neutron [req-b7290989-4d4d-444d-8fa8-4bb5279852a6 req-1aee1ace-a92d-42dd-ac49-afe1b4d5ff32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Refreshing network info cache for port 4521eb38-aa12-4b8a-92f6-3f1a9121fe5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 233 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 746 KiB/s rd, 37 KiB/s wr, 71 op/s
Nov 25 08:30:03 compute-0 nova_compute[253538]: 2025-11-25 08:30:03.516 253542 INFO nova.virt.libvirt.driver [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Deleting instance files /var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41_del
Nov 25 08:30:03 compute-0 nova_compute[253538]: 2025-11-25 08:30:03.518 253542 INFO nova.virt.libvirt.driver [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Deletion of /var/lib/nova/instances/225f80e2-9e66-46fb-b77d-9a54fa8a2a41_del complete
Nov 25 08:30:03 compute-0 nova_compute[253538]: 2025-11-25 08:30:03.580 253542 INFO nova.compute.manager [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Took 7.89 seconds to destroy the instance on the hypervisor.
Nov 25 08:30:03 compute-0 nova_compute[253538]: 2025-11-25 08:30:03.580 253542 DEBUG oslo.service.loopingcall [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:30:03 compute-0 nova_compute[253538]: 2025-11-25 08:30:03.581 253542 DEBUG nova.compute.manager [-] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:30:03 compute-0 nova_compute[253538]: 2025-11-25 08:30:03.581 253542 DEBUG nova.network.neutron [-] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:30:03 compute-0 nova_compute[253538]: 2025-11-25 08:30:03.733 253542 DEBUG nova.network.neutron [-] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:30:03 compute-0 nova_compute[253538]: 2025-11-25 08:30:03.749 253542 DEBUG nova.network.neutron [-] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:30:03 compute-0 nova_compute[253538]: 2025-11-25 08:30:03.761 253542 INFO nova.compute.manager [-] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Took 0.18 seconds to deallocate network for instance.
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017758341065963405 of space, bias 1.0, pg target 0.5327502319789021 quantized to 32 (current 32)
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006668757529139806 of space, bias 1.0, pg target 0.20006272587419416 quantized to 32 (current 32)
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:30:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:30:03 compute-0 nova_compute[253538]: 2025-11-25 08:30:03.797 253542 DEBUG oslo_concurrency.lockutils [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:03 compute-0 nova_compute[253538]: 2025-11-25 08:30:03.799 253542 DEBUG oslo_concurrency.lockutils [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:03 compute-0 podman[299913]: 2025-11-25 08:30:03.817130468 +0000 UTC m=+0.054709344 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:30:03 compute-0 nova_compute[253538]: 2025-11-25 08:30:03.890 253542 DEBUG oslo_concurrency.processutils [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:03 compute-0 ceph-mon[75015]: pgmap v1397: 321 pgs: 321 active+clean; 233 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 746 KiB/s rd, 37 KiB/s wr, 71 op/s
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.067 253542 DEBUG nova.network.neutron [req-b7290989-4d4d-444d-8fa8-4bb5279852a6 req-1aee1ace-a92d-42dd-ac49-afe1b4d5ff32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Updated VIF entry in instance network info cache for port 4521eb38-aa12-4b8a-92f6-3f1a9121fe5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.068 253542 DEBUG nova.network.neutron [req-b7290989-4d4d-444d-8fa8-4bb5279852a6 req-1aee1ace-a92d-42dd-ac49-afe1b4d5ff32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Updating instance_info_cache with network_info: [{"id": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "address": "fa:16:3e:13:78:70", "network": {"id": "824090d9-c62a-4860-847c-47ccffd255d9", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-816014635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e37021edb08c44fbb7ea019842489f3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4521eb38-aa", "ovs_interfaceid": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.091 253542 DEBUG oslo_concurrency.lockutils [req-b7290989-4d4d-444d-8fa8-4bb5279852a6 req-1aee1ace-a92d-42dd-ac49-afe1b4d5ff32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ce1afa72-143f-43f3-9859-df7f3523888a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:30:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:30:04 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2228829295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.345 253542 DEBUG oslo_concurrency.processutils [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.356 253542 DEBUG nova.compute.provider_tree [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.378 253542 DEBUG nova.scheduler.client.report [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.405 253542 DEBUG oslo_concurrency.lockutils [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.441 253542 INFO nova.scheduler.client.report [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Deleted allocations for instance 225f80e2-9e66-46fb-b77d-9a54fa8a2a41
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.544 253542 DEBUG oslo_concurrency.lockutils [None req-139c0c48-167a-4777-8402-d207f4542b36 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "225f80e2-9e66-46fb-b77d-9a54fa8a2a41" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.869 253542 DEBUG oslo_concurrency.lockutils [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "8400a9a9-bd7a-434b-a11b-6db7e12a4e18" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.870 253542 DEBUG oslo_concurrency.lockutils [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "8400a9a9-bd7a-434b-a11b-6db7e12a4e18" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.870 253542 DEBUG oslo_concurrency.lockutils [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "8400a9a9-bd7a-434b-a11b-6db7e12a4e18-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.870 253542 DEBUG oslo_concurrency.lockutils [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "8400a9a9-bd7a-434b-a11b-6db7e12a4e18-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.870 253542 DEBUG oslo_concurrency.lockutils [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "8400a9a9-bd7a-434b-a11b-6db7e12a4e18-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.871 253542 INFO nova.compute.manager [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Terminating instance
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.872 253542 DEBUG oslo_concurrency.lockutils [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "refresh_cache-8400a9a9-bd7a-434b-a11b-6db7e12a4e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.872 253542 DEBUG oslo_concurrency.lockutils [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquired lock "refresh_cache-8400a9a9-bd7a-434b-a11b-6db7e12a4e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.872 253542 DEBUG nova.network.neutron [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.882 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:04 compute-0 nova_compute[253538]: 2025-11-25 08:30:04.957 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:05 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2228829295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:30:05 compute-0 nova_compute[253538]: 2025-11-25 08:30:05.095 253542 DEBUG nova.network.neutron [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:30:05 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 08:30:05 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.5 total, 600.0 interval
                                           Cumulative writes: 16K writes, 66K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 16K writes, 5104 syncs, 3.25 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 42K keys, 10K commit groups, 1.0 writes per commit group, ingest: 43.13 MB, 0.07 MB/s
                                           Interval WAL: 10K writes, 4064 syncs, 2.63 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 08:30:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 167 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 31 KiB/s wr, 141 op/s
Nov 25 08:30:05 compute-0 nova_compute[253538]: 2025-11-25 08:30:05.346 253542 DEBUG nova.network.neutron [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:30:05 compute-0 nova_compute[253538]: 2025-11-25 08:30:05.357 253542 DEBUG oslo_concurrency.lockutils [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Releasing lock "refresh_cache-8400a9a9-bd7a-434b-a11b-6db7e12a4e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:30:05 compute-0 nova_compute[253538]: 2025-11-25 08:30:05.358 253542 DEBUG nova.compute.manager [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:30:05 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Deactivated successfully.
Nov 25 08:30:05 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Consumed 14.455s CPU time.
Nov 25 08:30:05 compute-0 systemd-machined[215790]: Machine qemu-40-instance-00000024 terminated.
Nov 25 08:30:05 compute-0 nova_compute[253538]: 2025-11-25 08:30:05.596 253542 INFO nova.virt.libvirt.driver [-] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Instance destroyed successfully.
Nov 25 08:30:05 compute-0 nova_compute[253538]: 2025-11-25 08:30:05.597 253542 DEBUG nova.objects.instance [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lazy-loading 'resources' on Instance uuid 8400a9a9-bd7a-434b-a11b-6db7e12a4e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:30:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:30:06 compute-0 ceph-mon[75015]: pgmap v1398: 321 pgs: 321 active+clean; 167 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 31 KiB/s wr, 141 op/s
Nov 25 08:30:06 compute-0 nova_compute[253538]: 2025-11-25 08:30:06.215 253542 INFO nova.virt.libvirt.driver [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Deleting instance files /var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18_del
Nov 25 08:30:06 compute-0 nova_compute[253538]: 2025-11-25 08:30:06.216 253542 INFO nova.virt.libvirt.driver [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Deletion of /var/lib/nova/instances/8400a9a9-bd7a-434b-a11b-6db7e12a4e18_del complete
Nov 25 08:30:06 compute-0 nova_compute[253538]: 2025-11-25 08:30:06.292 253542 INFO nova.compute.manager [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Took 0.93 seconds to destroy the instance on the hypervisor.
Nov 25 08:30:06 compute-0 nova_compute[253538]: 2025-11-25 08:30:06.292 253542 DEBUG oslo.service.loopingcall [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:30:06 compute-0 nova_compute[253538]: 2025-11-25 08:30:06.292 253542 DEBUG nova.compute.manager [-] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:30:06 compute-0 nova_compute[253538]: 2025-11-25 08:30:06.293 253542 DEBUG nova.network.neutron [-] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:30:06 compute-0 nova_compute[253538]: 2025-11-25 08:30:06.449 253542 DEBUG nova.network.neutron [-] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:30:06 compute-0 nova_compute[253538]: 2025-11-25 08:30:06.457 253542 DEBUG nova.network.neutron [-] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:30:06 compute-0 nova_compute[253538]: 2025-11-25 08:30:06.468 253542 INFO nova.compute.manager [-] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Took 0.18 seconds to deallocate network for instance.
Nov 25 08:30:06 compute-0 nova_compute[253538]: 2025-11-25 08:30:06.508 253542 DEBUG oslo_concurrency.lockutils [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:06 compute-0 nova_compute[253538]: 2025-11-25 08:30:06.508 253542 DEBUG oslo_concurrency.lockutils [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:06 compute-0 nova_compute[253538]: 2025-11-25 08:30:06.625 253542 DEBUG oslo_concurrency.processutils [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:30:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/889314553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:30:07 compute-0 nova_compute[253538]: 2025-11-25 08:30:07.021 253542 DEBUG oslo_concurrency.processutils [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:07 compute-0 nova_compute[253538]: 2025-11-25 08:30:07.028 253542 DEBUG nova.compute.provider_tree [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:30:07 compute-0 nova_compute[253538]: 2025-11-25 08:30:07.042 253542 DEBUG nova.scheduler.client.report [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:30:07 compute-0 nova_compute[253538]: 2025-11-25 08:30:07.066 253542 DEBUG oslo_concurrency.lockutils [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:07 compute-0 nova_compute[253538]: 2025-11-25 08:30:07.089 253542 INFO nova.scheduler.client.report [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Deleted allocations for instance 8400a9a9-bd7a-434b-a11b-6db7e12a4e18
Nov 25 08:30:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/889314553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:30:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 158 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 31 KiB/s wr, 148 op/s
Nov 25 08:30:07 compute-0 nova_compute[253538]: 2025-11-25 08:30:07.158 253542 DEBUG oslo_concurrency.lockutils [None req-766959c1-b4e1-41b6-9a51-b896564fb685 6bc7b68c86ab44d29c118388df2a8bc0 70a02215d9344af388c6439ace9208a4 - - default default] Lock "8400a9a9-bd7a-434b-a11b-6db7e12a4e18" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:07 compute-0 nova_compute[253538]: 2025-11-25 08:30:07.576 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059392.5729303, 34941187-b6b9-4153-a8b9-6f5c00f10dda => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:30:07 compute-0 nova_compute[253538]: 2025-11-25 08:30:07.576 253542 INFO nova.compute.manager [-] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] VM Stopped (Lifecycle Event)
Nov 25 08:30:07 compute-0 nova_compute[253538]: 2025-11-25 08:30:07.591 253542 DEBUG nova.compute.manager [None req-9097a184-de2e-4ec6-afd0-74ae43fdec33 - - - - - -] [instance: 34941187-b6b9-4153-a8b9-6f5c00f10dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:30:07 compute-0 podman[299999]: 2025-11-25 08:30:07.806457251 +0000 UTC m=+0.056596495 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 08:30:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Nov 25 08:30:08 compute-0 ceph-mon[75015]: pgmap v1399: 321 pgs: 321 active+clean; 158 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 31 KiB/s wr, 148 op/s
Nov 25 08:30:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Nov 25 08:30:08 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Nov 25 08:30:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 134 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 23 KiB/s wr, 165 op/s
Nov 25 08:30:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Nov 25 08:30:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Nov 25 08:30:09 compute-0 ceph-mon[75015]: osdmap e173: 3 total, 3 up, 3 in
Nov 25 08:30:09 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Nov 25 08:30:09 compute-0 nova_compute[253538]: 2025-11-25 08:30:09.327 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:09 compute-0 nova_compute[253538]: 2025-11-25 08:30:09.328 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:09 compute-0 nova_compute[253538]: 2025-11-25 08:30:09.352 253542 DEBUG nova.compute.manager [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:30:09 compute-0 nova_compute[253538]: 2025-11-25 08:30:09.435 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:09 compute-0 nova_compute[253538]: 2025-11-25 08:30:09.435 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:09 compute-0 nova_compute[253538]: 2025-11-25 08:30:09.443 253542 DEBUG nova.virt.hardware [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:30:09 compute-0 nova_compute[253538]: 2025-11-25 08:30:09.443 253542 INFO nova.compute.claims [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:30:09 compute-0 nova_compute[253538]: 2025-11-25 08:30:09.547 253542 DEBUG oslo_concurrency.processutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:09 compute-0 nova_compute[253538]: 2025-11-25 08:30:09.584 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:30:09 compute-0 nova_compute[253538]: 2025-11-25 08:30:09.585 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:30:09 compute-0 nova_compute[253538]: 2025-11-25 08:30:09.917 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:09 compute-0 nova_compute[253538]: 2025-11-25 08:30:09.960 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:30:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947323409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.011 253542 DEBUG oslo_concurrency.processutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.018 253542 DEBUG nova.compute.provider_tree [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.032 253542 DEBUG nova.scheduler.client.report [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.051 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.051 253542 DEBUG nova.compute.manager [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.087 253542 DEBUG nova.compute.manager [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.087 253542 DEBUG nova.network.neutron [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.101 253542 INFO nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.114 253542 DEBUG nova.compute.manager [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.194 253542 DEBUG nova.compute.manager [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.195 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.196 253542 INFO nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Creating image(s)
Nov 25 08:30:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Nov 25 08:30:10 compute-0 ceph-mon[75015]: pgmap v1401: 321 pgs: 321 active+clean; 134 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 23 KiB/s wr, 165 op/s
Nov 25 08:30:10 compute-0 ceph-mon[75015]: osdmap e174: 3 total, 3 up, 3 in
Nov 25 08:30:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2947323409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:30:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Nov 25 08:30:10 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.241 253542 DEBUG nova.storage.rbd_utils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] rbd image b06ecfc8-23f8-42c8-8a5a-396257c46b68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.274 253542 DEBUG nova.storage.rbd_utils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] rbd image b06ecfc8-23f8-42c8-8a5a-396257c46b68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.304 253542 DEBUG nova.storage.rbd_utils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] rbd image b06ecfc8-23f8-42c8-8a5a-396257c46b68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.310 253542 DEBUG oslo_concurrency.processutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.385 253542 DEBUG oslo_concurrency.processutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.386 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.387 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.387 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.413 253542 DEBUG nova.storage.rbd_utils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] rbd image b06ecfc8-23f8-42c8-8a5a-396257c46b68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.417 253542 DEBUG oslo_concurrency.processutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc b06ecfc8-23f8-42c8-8a5a-396257c46b68_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.452 253542 DEBUG nova.policy [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ccf1e57f59084541821b20089873a6ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c50e3969ac5b472b8defc2e5cca2901a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.557 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.572 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 08:30:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.722 253542 DEBUG oslo_concurrency.processutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc b06ecfc8-23f8-42c8-8a5a-396257c46b68_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.784 253542 DEBUG nova.storage.rbd_utils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] resizing rbd image b06ecfc8-23f8-42c8-8a5a-396257c46b68_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.873 253542 DEBUG nova.objects.instance [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lazy-loading 'migration_context' on Instance uuid b06ecfc8-23f8-42c8-8a5a-396257c46b68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.884 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.885 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Ensure instance console log exists: /var/lib/nova/instances/b06ecfc8-23f8-42c8-8a5a-396257c46b68/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.885 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.885 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.885 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:10 compute-0 nova_compute[253538]: 2025-11-25 08:30:10.952 253542 DEBUG nova.network.neutron [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Successfully created port: a0054aae-e1e0-48e7-87eb-6836c8f5d8ec _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:30:11 compute-0 nova_compute[253538]: 2025-11-25 08:30:11.113 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059396.1125371, 225f80e2-9e66-46fb-b77d-9a54fa8a2a41 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:30:11 compute-0 nova_compute[253538]: 2025-11-25 08:30:11.114 253542 INFO nova.compute.manager [-] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] VM Stopped (Lifecycle Event)
Nov 25 08:30:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 88 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 6.3 KiB/s wr, 104 op/s
Nov 25 08:30:11 compute-0 nova_compute[253538]: 2025-11-25 08:30:11.138 253542 DEBUG nova.compute.manager [None req-26f5154d-830e-4f05-9c1a-7c0f1d295f51 - - - - - -] [instance: 225f80e2-9e66-46fb-b77d-9a54fa8a2a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:30:11 compute-0 ceph-mon[75015]: osdmap e175: 3 total, 3 up, 3 in
Nov 25 08:30:11 compute-0 nova_compute[253538]: 2025-11-25 08:30:11.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:30:11 compute-0 nova_compute[253538]: 2025-11-25 08:30:11.583 253542 DEBUG nova.network.neutron [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Successfully created port: 97c308c5-90af-4f05-a10a-723dc57687cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:30:11 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 08:30:11 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.4 total, 600.0 interval
                                           Cumulative writes: 16K writes, 68K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 16K writes, 4991 syncs, 3.37 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 43.04 MB, 0.07 MB/s
                                           Interval WAL: 10K writes, 3886 syncs, 2.65 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 08:30:11 compute-0 podman[300207]: 2025-11-25 08:30:11.860090131 +0000 UTC m=+0.106398232 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 25 08:30:12 compute-0 nova_compute[253538]: 2025-11-25 08:30:12.225 253542 DEBUG nova.network.neutron [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Successfully created port: 4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:30:12 compute-0 ceph-mon[75015]: pgmap v1404: 321 pgs: 321 active+clean; 88 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 6.3 KiB/s wr, 104 op/s
Nov 25 08:30:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 107 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 1.4 MiB/s wr, 112 op/s
Nov 25 08:30:13 compute-0 nova_compute[253538]: 2025-11-25 08:30:13.715 253542 DEBUG nova.network.neutron [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Successfully updated port: a0054aae-e1e0-48e7-87eb-6836c8f5d8ec _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:30:13 compute-0 nova_compute[253538]: 2025-11-25 08:30:13.906 253542 DEBUG nova.compute.manager [req-ffe33105-0e7e-41b0-9545-2be399361000 req-b8ca371d-514c-4584-bd25-4c21eecae67a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-changed-a0054aae-e1e0-48e7-87eb-6836c8f5d8ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:13 compute-0 nova_compute[253538]: 2025-11-25 08:30:13.906 253542 DEBUG nova.compute.manager [req-ffe33105-0e7e-41b0-9545-2be399361000 req-b8ca371d-514c-4584-bd25-4c21eecae67a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Refreshing instance network info cache due to event network-changed-a0054aae-e1e0-48e7-87eb-6836c8f5d8ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:30:13 compute-0 nova_compute[253538]: 2025-11-25 08:30:13.907 253542 DEBUG oslo_concurrency.lockutils [req-ffe33105-0e7e-41b0-9545-2be399361000 req-b8ca371d-514c-4584-bd25-4c21eecae67a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-b06ecfc8-23f8-42c8-8a5a-396257c46b68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:30:13 compute-0 nova_compute[253538]: 2025-11-25 08:30:13.907 253542 DEBUG oslo_concurrency.lockutils [req-ffe33105-0e7e-41b0-9545-2be399361000 req-b8ca371d-514c-4584-bd25-4c21eecae67a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-b06ecfc8-23f8-42c8-8a5a-396257c46b68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:30:13 compute-0 nova_compute[253538]: 2025-11-25 08:30:13.907 253542 DEBUG nova.network.neutron [req-ffe33105-0e7e-41b0-9545-2be399361000 req-b8ca371d-514c-4584-bd25-4c21eecae67a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Refreshing network info cache for port a0054aae-e1e0-48e7-87eb-6836c8f5d8ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:30:14 compute-0 nova_compute[253538]: 2025-11-25 08:30:14.165 253542 DEBUG nova.network.neutron [req-ffe33105-0e7e-41b0-9545-2be399361000 req-b8ca371d-514c-4584-bd25-4c21eecae67a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:30:14 compute-0 nova_compute[253538]: 2025-11-25 08:30:14.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:30:14 compute-0 nova_compute[253538]: 2025-11-25 08:30:14.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:30:14 compute-0 nova_compute[253538]: 2025-11-25 08:30:14.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 08:30:14 compute-0 ceph-mon[75015]: pgmap v1405: 321 pgs: 321 active+clean; 107 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 1.4 MiB/s wr, 112 op/s
Nov 25 08:30:14 compute-0 nova_compute[253538]: 2025-11-25 08:30:14.565 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 08:30:14 compute-0 ovn_controller[152859]: 2025-11-25T08:30:14Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:13:78:70 10.100.0.8
Nov 25 08:30:14 compute-0 ovn_controller[152859]: 2025-11-25T08:30:14Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:13:78:70 10.100.0.8
Nov 25 08:30:14 compute-0 nova_compute[253538]: 2025-11-25 08:30:14.717 253542 DEBUG nova.network.neutron [req-ffe33105-0e7e-41b0-9545-2be399361000 req-b8ca371d-514c-4584-bd25-4c21eecae67a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:30:14 compute-0 nova_compute[253538]: 2025-11-25 08:30:14.731 253542 DEBUG oslo_concurrency.lockutils [req-ffe33105-0e7e-41b0-9545-2be399361000 req-b8ca371d-514c-4584-bd25-4c21eecae67a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-b06ecfc8-23f8-42c8-8a5a-396257c46b68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:30:14 compute-0 nova_compute[253538]: 2025-11-25 08:30:14.963 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:30:14 compute-0 nova_compute[253538]: 2025-11-25 08:30:14.964 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:30:14 compute-0 nova_compute[253538]: 2025-11-25 08:30:14.964 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 08:30:14 compute-0 nova_compute[253538]: 2025-11-25 08:30:14.965 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 08:30:14 compute-0 nova_compute[253538]: 2025-11-25 08:30:14.987 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:14 compute-0 nova_compute[253538]: 2025-11-25 08:30:14.987 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 08:30:15 compute-0 nova_compute[253538]: 2025-11-25 08:30:15.054 253542 DEBUG nova.network.neutron [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Successfully updated port: 97c308c5-90af-4f05-a10a-723dc57687cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:30:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 156 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 477 KiB/s rd, 6.6 MiB/s wr, 233 op/s
Nov 25 08:30:15 compute-0 nova_compute[253538]: 2025-11-25 08:30:15.558 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:30:15 compute-0 ceph-mon[75015]: pgmap v1406: 321 pgs: 321 active+clean; 156 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 477 KiB/s rd, 6.6 MiB/s wr, 233 op/s
Nov 25 08:30:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:30:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Nov 25 08:30:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Nov 25 08:30:15 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Nov 25 08:30:16 compute-0 nova_compute[253538]: 2025-11-25 08:30:16.505 253542 DEBUG nova.compute.manager [req-d5fb6646-85de-4a96-bd65-a013f8cf9a19 req-3a50b543-0891-468c-8382-2ef7f48235c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-changed-97c308c5-90af-4f05-a10a-723dc57687cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:16 compute-0 nova_compute[253538]: 2025-11-25 08:30:16.505 253542 DEBUG nova.compute.manager [req-d5fb6646-85de-4a96-bd65-a013f8cf9a19 req-3a50b543-0891-468c-8382-2ef7f48235c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Refreshing instance network info cache due to event network-changed-97c308c5-90af-4f05-a10a-723dc57687cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:30:16 compute-0 nova_compute[253538]: 2025-11-25 08:30:16.506 253542 DEBUG oslo_concurrency.lockutils [req-d5fb6646-85de-4a96-bd65-a013f8cf9a19 req-3a50b543-0891-468c-8382-2ef7f48235c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-b06ecfc8-23f8-42c8-8a5a-396257c46b68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:30:16 compute-0 nova_compute[253538]: 2025-11-25 08:30:16.506 253542 DEBUG oslo_concurrency.lockutils [req-d5fb6646-85de-4a96-bd65-a013f8cf9a19 req-3a50b543-0891-468c-8382-2ef7f48235c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-b06ecfc8-23f8-42c8-8a5a-396257c46b68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:30:16 compute-0 nova_compute[253538]: 2025-11-25 08:30:16.506 253542 DEBUG nova.network.neutron [req-d5fb6646-85de-4a96-bd65-a013f8cf9a19 req-3a50b543-0891-468c-8382-2ef7f48235c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Refreshing network info cache for port 97c308c5-90af-4f05-a10a-723dc57687cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:30:16 compute-0 nova_compute[253538]: 2025-11-25 08:30:16.551 253542 DEBUG nova.network.neutron [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Successfully updated port: 4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:30:16 compute-0 nova_compute[253538]: 2025-11-25 08:30:16.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:30:16 compute-0 nova_compute[253538]: 2025-11-25 08:30:16.568 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "refresh_cache-b06ecfc8-23f8-42c8-8a5a-396257c46b68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:30:16 compute-0 nova_compute[253538]: 2025-11-25 08:30:16.572 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:16 compute-0 nova_compute[253538]: 2025-11-25 08:30:16.572 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:16 compute-0 nova_compute[253538]: 2025-11-25 08:30:16.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:16 compute-0 nova_compute[253538]: 2025-11-25 08:30:16.573 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:30:16 compute-0 nova_compute[253538]: 2025-11-25 08:30:16.573 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:16 compute-0 ceph-mon[75015]: osdmap e176: 3 total, 3 up, 3 in
Nov 25 08:30:16 compute-0 nova_compute[253538]: 2025-11-25 08:30:16.910 253542 DEBUG nova.network.neutron [req-d5fb6646-85de-4a96-bd65-a013f8cf9a19 req-3a50b543-0891-468c-8382-2ef7f48235c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:30:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:30:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1415415013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.057 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1408: 321 pgs: 321 active+clean; 159 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 541 KiB/s rd, 5.8 MiB/s wr, 218 op/s
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.135 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.135 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.303 253542 DEBUG nova.network.neutron [req-d5fb6646-85de-4a96-bd65-a013f8cf9a19 req-3a50b543-0891-468c-8382-2ef7f48235c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.315 253542 DEBUG oslo_concurrency.lockutils [req-d5fb6646-85de-4a96-bd65-a013f8cf9a19 req-3a50b543-0891-468c-8382-2ef7f48235c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-b06ecfc8-23f8-42c8-8a5a-396257c46b68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.316 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquired lock "refresh_cache-b06ecfc8-23f8-42c8-8a5a-396257c46b68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.317 253542 DEBUG nova.network.neutron [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.326 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.328 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4018MB free_disk=59.92296600341797GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.329 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.330 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.498 253542 DEBUG nova.network.neutron [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.631 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance ce1afa72-143f-43f3-9859-df7f3523888a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.632 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance b06ecfc8-23f8-42c8-8a5a-396257c46b68 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.633 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.633 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:30:17 compute-0 nova_compute[253538]: 2025-11-25 08:30:17.733 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1415415013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:30:17 compute-0 ceph-mon[75015]: pgmap v1408: 321 pgs: 321 active+clean; 159 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 541 KiB/s rd, 5.8 MiB/s wr, 218 op/s
Nov 25 08:30:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:30:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3144351261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:30:18 compute-0 nova_compute[253538]: 2025-11-25 08:30:18.248 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:18 compute-0 nova_compute[253538]: 2025-11-25 08:30:18.254 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:30:18 compute-0 nova_compute[253538]: 2025-11-25 08:30:18.267 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:30:18 compute-0 nova_compute[253538]: 2025-11-25 08:30:18.584 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:30:18 compute-0 nova_compute[253538]: 2025-11-25 08:30:18.584 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:18 compute-0 nova_compute[253538]: 2025-11-25 08:30:18.585 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:30:18 compute-0 nova_compute[253538]: 2025-11-25 08:30:18.651 253542 DEBUG nova.compute.manager [req-64a28fef-e73d-4f20-9c4c-0861bc9c2b72 req-1bb72c7e-6ba8-451a-9065-e93587c68f5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-changed-4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:18 compute-0 nova_compute[253538]: 2025-11-25 08:30:18.651 253542 DEBUG nova.compute.manager [req-64a28fef-e73d-4f20-9c4c-0861bc9c2b72 req-1bb72c7e-6ba8-451a-9065-e93587c68f5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Refreshing instance network info cache due to event network-changed-4205ccc7-d2d4-4623-a44a-9ee072d2f2a1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:30:18 compute-0 nova_compute[253538]: 2025-11-25 08:30:18.652 253542 DEBUG oslo_concurrency.lockutils [req-64a28fef-e73d-4f20-9c4c-0861bc9c2b72 req-1bb72c7e-6ba8-451a-9065-e93587c68f5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-b06ecfc8-23f8-42c8-8a5a-396257c46b68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:30:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3144351261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:30:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 166 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 588 KiB/s rd, 5.2 MiB/s wr, 175 op/s
Nov 25 08:30:19 compute-0 nova_compute[253538]: 2025-11-25 08:30:19.529 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:19 compute-0 nova_compute[253538]: 2025-11-25 08:30:19.596 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:30:19 compute-0 nova_compute[253538]: 2025-11-25 08:30:19.614 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:30:20 compute-0 ceph-mon[75015]: pgmap v1409: 321 pgs: 321 active+clean; 166 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 588 KiB/s rd, 5.2 MiB/s wr, 175 op/s
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.035 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.222 253542 DEBUG nova.network.neutron [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Updating instance_info_cache with network_info: [{"id": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "address": "fa:16:3e:36:e3:d5", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0054aae-e1", "ovs_interfaceid": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97c308c5-90af-4f05-a10a-723dc57687cf", "address": "fa:16:3e:5b:6e:e0", "network": {"id": "ffe1357e-4705-49bd-8af4-1a224647f586", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1434780146", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.247", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c308c5-90", "ovs_interfaceid": "97c308c5-90af-4f05-a10a-723dc57687cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "address": "fa:16:3e:cd:7b:1e", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4205ccc7-d2", "ovs_interfaceid": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.246 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Releasing lock "refresh_cache-b06ecfc8-23f8-42c8-8a5a-396257c46b68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.247 253542 DEBUG nova.compute.manager [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Instance network_info: |[{"id": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "address": "fa:16:3e:36:e3:d5", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0054aae-e1", "ovs_interfaceid": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97c308c5-90af-4f05-a10a-723dc57687cf", "address": "fa:16:3e:5b:6e:e0", "network": {"id": "ffe1357e-4705-49bd-8af4-1a224647f586", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1434780146", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.247", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c308c5-90", "ovs_interfaceid": "97c308c5-90af-4f05-a10a-723dc57687cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "address": "fa:16:3e:cd:7b:1e", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4205ccc7-d2", "ovs_interfaceid": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.248 253542 DEBUG oslo_concurrency.lockutils [req-64a28fef-e73d-4f20-9c4c-0861bc9c2b72 req-1bb72c7e-6ba8-451a-9065-e93587c68f5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-b06ecfc8-23f8-42c8-8a5a-396257c46b68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.249 253542 DEBUG nova.network.neutron [req-64a28fef-e73d-4f20-9c4c-0861bc9c2b72 req-1bb72c7e-6ba8-451a-9065-e93587c68f5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Refreshing network info cache for port 4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.256 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Start _get_guest_xml network_info=[{"id": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "address": "fa:16:3e:36:e3:d5", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0054aae-e1", "ovs_interfaceid": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97c308c5-90af-4f05-a10a-723dc57687cf", "address": "fa:16:3e:5b:6e:e0", "network": {"id": "ffe1357e-4705-49bd-8af4-1a224647f586", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1434780146", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.247", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c308c5-90", "ovs_interfaceid": "97c308c5-90af-4f05-a10a-723dc57687cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "address": "fa:16:3e:cd:7b:1e", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4205ccc7-d2", "ovs_interfaceid": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.263 253542 WARNING nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.268 253542 DEBUG nova.virt.libvirt.host [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.397 253542 DEBUG nova.virt.libvirt.host [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.404 253542 DEBUG nova.virt.libvirt.host [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.405 253542 DEBUG nova.virt.libvirt.host [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.405 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.406 253542 DEBUG nova.virt.hardware [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.406 253542 DEBUG nova.virt.hardware [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.407 253542 DEBUG nova.virt.hardware [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.407 253542 DEBUG nova.virt.hardware [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.407 253542 DEBUG nova.virt.hardware [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.407 253542 DEBUG nova.virt.hardware [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.408 253542 DEBUG nova.virt.hardware [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.408 253542 DEBUG nova.virt.hardware [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.408 253542 DEBUG nova.virt.hardware [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.408 253542 DEBUG nova.virt.hardware [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.409 253542 DEBUG nova.virt.hardware [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.411 253542 DEBUG oslo_concurrency.processutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.559 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.560 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.560 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.595 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059405.5936518, 8400a9a9-bd7a-434b-a11b-6db7e12a4e18 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.595 253542 INFO nova.compute.manager [-] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] VM Stopped (Lifecycle Event)
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.620 253542 DEBUG nova.compute.manager [None req-3dda9093-2c1d-406b-b2da-5cf81f98d18d - - - - - -] [instance: 8400a9a9-bd7a-434b-a11b-6db7e12a4e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:30:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:30:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:30:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3264573138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.921 253542 DEBUG oslo_concurrency.processutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.952 253542 DEBUG nova.storage.rbd_utils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] rbd image b06ecfc8-23f8-42c8-8a5a-396257c46b68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:30:20 compute-0 nova_compute[253538]: 2025-11-25 08:30:20.956 253542 DEBUG oslo_concurrency.processutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3264573138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:30:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1410: 321 pgs: 321 active+clean; 167 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 535 KiB/s rd, 4.7 MiB/s wr, 163 op/s
Nov 25 08:30:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:30:21 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1834785206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.426 253542 DEBUG oslo_concurrency.processutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.428 253542 DEBUG nova.virt.libvirt.vif [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:30:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-123732340',display_name='tempest-ServersTestMultiNic-server-123732340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-123732340',id=41,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c50e3969ac5b472b8defc2e5cca2901a',ramdisk_id='',reservation_id='r-ajc9bizs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-272267582',owner_user_name='tempest-ServersTestMultiNic-272267582-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:30:10Z,user_data=None,user_id='ccf1e57f59084541821b20089873a6ac',uuid=b06ecfc8-23f8-42c8-8a5a-396257c46b68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "address": "fa:16:3e:36:e3:d5", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0054aae-e1", "ovs_interfaceid": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.429 253542 DEBUG nova.network.os_vif_util [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converting VIF {"id": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "address": "fa:16:3e:36:e3:d5", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0054aae-e1", "ovs_interfaceid": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.430 253542 DEBUG nova.network.os_vif_util [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:e3:d5,bridge_name='br-int',has_traffic_filtering=True,id=a0054aae-e1e0-48e7-87eb-6836c8f5d8ec,network=Network(36151320-921f-4006-be79-7a57c3a2b422),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0054aae-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.431 253542 DEBUG nova.virt.libvirt.vif [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:30:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-123732340',display_name='tempest-ServersTestMultiNic-server-123732340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-123732340',id=41,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c50e3969ac5b472b8defc2e5cca2901a',ramdisk_id='',reservation_id='r-ajc9bizs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-272267582',owner_user_name='tempest-ServersTestMultiNic-272267582-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:30:10Z,user_data=None,user_id='ccf1e57f59084541821b20089873a6ac',uuid=b06ecfc8-23f8-42c8-8a5a-396257c46b68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97c308c5-90af-4f05-a10a-723dc57687cf", "address": "fa:16:3e:5b:6e:e0", "network": {"id": "ffe1357e-4705-49bd-8af4-1a224647f586", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1434780146", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.247", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c308c5-90", "ovs_interfaceid": "97c308c5-90af-4f05-a10a-723dc57687cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.432 253542 DEBUG nova.network.os_vif_util [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converting VIF {"id": "97c308c5-90af-4f05-a10a-723dc57687cf", "address": "fa:16:3e:5b:6e:e0", "network": {"id": "ffe1357e-4705-49bd-8af4-1a224647f586", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1434780146", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.247", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c308c5-90", "ovs_interfaceid": "97c308c5-90af-4f05-a10a-723dc57687cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.432 253542 DEBUG nova.network.os_vif_util [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:6e:e0,bridge_name='br-int',has_traffic_filtering=True,id=97c308c5-90af-4f05-a10a-723dc57687cf,network=Network(ffe1357e-4705-49bd-8af4-1a224647f586),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97c308c5-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.433 253542 DEBUG nova.virt.libvirt.vif [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:30:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-123732340',display_name='tempest-ServersTestMultiNic-server-123732340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-123732340',id=41,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c50e3969ac5b472b8defc2e5cca2901a',ramdisk_id='',reservation_id='r-ajc9bizs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-272267582',owner_user_name='tempest-ServersTestMultiNic-272267582-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:30:10Z,user_data=None,user_id='ccf1e57f59084541821b20089873a6ac',uuid=b06ecfc8-23f8-42c8-8a5a-396257c46b68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "address": "fa:16:3e:cd:7b:1e", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4205ccc7-d2", "ovs_interfaceid": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.433 253542 DEBUG nova.network.os_vif_util [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converting VIF {"id": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "address": "fa:16:3e:cd:7b:1e", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4205ccc7-d2", "ovs_interfaceid": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.434 253542 DEBUG nova.network.os_vif_util [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7b:1e,bridge_name='br-int',has_traffic_filtering=True,id=4205ccc7-d2d4-4623-a44a-9ee072d2f2a1,network=Network(36151320-921f-4006-be79-7a57c3a2b422),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4205ccc7-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.436 253542 DEBUG nova.objects.instance [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lazy-loading 'pci_devices' on Instance uuid b06ecfc8-23f8-42c8-8a5a-396257c46b68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.456 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:30:21 compute-0 nova_compute[253538]:   <uuid>b06ecfc8-23f8-42c8-8a5a-396257c46b68</uuid>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   <name>instance-00000029</name>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersTestMultiNic-server-123732340</nova:name>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:30:20</nova:creationTime>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:30:21 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:30:21 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:30:21 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:30:21 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:30:21 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:30:21 compute-0 nova_compute[253538]:         <nova:user uuid="ccf1e57f59084541821b20089873a6ac">tempest-ServersTestMultiNic-272267582-project-member</nova:user>
Nov 25 08:30:21 compute-0 nova_compute[253538]:         <nova:project uuid="c50e3969ac5b472b8defc2e5cca2901a">tempest-ServersTestMultiNic-272267582</nova:project>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:30:21 compute-0 nova_compute[253538]:         <nova:port uuid="a0054aae-e1e0-48e7-87eb-6836c8f5d8ec">
Nov 25 08:30:21 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.180" ipVersion="4"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:30:21 compute-0 nova_compute[253538]:         <nova:port uuid="97c308c5-90af-4f05-a10a-723dc57687cf">
Nov 25 08:30:21 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.1.247" ipVersion="4"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:30:21 compute-0 nova_compute[253538]:         <nova:port uuid="4205ccc7-d2d4-4623-a44a-9ee072d2f2a1">
Nov 25 08:30:21 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.186" ipVersion="4"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <system>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <entry name="serial">b06ecfc8-23f8-42c8-8a5a-396257c46b68</entry>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <entry name="uuid">b06ecfc8-23f8-42c8-8a5a-396257c46b68</entry>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     </system>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   <os>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   </os>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   <features>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   </features>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/b06ecfc8-23f8-42c8-8a5a-396257c46b68_disk">
Nov 25 08:30:21 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       </source>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:30:21 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/b06ecfc8-23f8-42c8-8a5a-396257c46b68_disk.config">
Nov 25 08:30:21 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       </source>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:30:21 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:36:e3:d5"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <target dev="tapa0054aae-e1"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:5b:6e:e0"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <target dev="tap97c308c5-90"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:cd:7b:1e"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <target dev="tap4205ccc7-d2"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/b06ecfc8-23f8-42c8-8a5a-396257c46b68/console.log" append="off"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <video>
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     </video>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:30:21 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:30:21 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:30:21 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:30:21 compute-0 nova_compute[253538]: </domain>
Nov 25 08:30:21 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.459 253542 DEBUG nova.compute.manager [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Preparing to wait for external event network-vif-plugged-a0054aae-e1e0-48e7-87eb-6836c8f5d8ec prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.460 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.460 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.460 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.460 253542 DEBUG nova.compute.manager [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Preparing to wait for external event network-vif-plugged-97c308c5-90af-4f05-a10a-723dc57687cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.460 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.461 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.461 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.461 253542 DEBUG nova.compute.manager [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Preparing to wait for external event network-vif-plugged-4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.461 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.461 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.461 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.462 253542 DEBUG nova.virt.libvirt.vif [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:30:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-123732340',display_name='tempest-ServersTestMultiNic-server-123732340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-123732340',id=41,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c50e3969ac5b472b8defc2e5cca2901a',ramdisk_id='',reservation_id='r-ajc9bizs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-272267582',owner_user_name='tempest-ServersTestMultiNic-2722
67582-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:30:10Z,user_data=None,user_id='ccf1e57f59084541821b20089873a6ac',uuid=b06ecfc8-23f8-42c8-8a5a-396257c46b68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "address": "fa:16:3e:36:e3:d5", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0054aae-e1", "ovs_interfaceid": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.462 253542 DEBUG nova.network.os_vif_util [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converting VIF {"id": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "address": "fa:16:3e:36:e3:d5", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0054aae-e1", "ovs_interfaceid": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.463 253542 DEBUG nova.network.os_vif_util [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:e3:d5,bridge_name='br-int',has_traffic_filtering=True,id=a0054aae-e1e0-48e7-87eb-6836c8f5d8ec,network=Network(36151320-921f-4006-be79-7a57c3a2b422),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0054aae-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.463 253542 DEBUG os_vif [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:e3:d5,bridge_name='br-int',has_traffic_filtering=True,id=a0054aae-e1e0-48e7-87eb-6836c8f5d8ec,network=Network(36151320-921f-4006-be79-7a57c3a2b422),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0054aae-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.464 253542 DEBUG nova.network.neutron [req-64a28fef-e73d-4f20-9c4c-0861bc9c2b72 req-1bb72c7e-6ba8-451a-9065-e93587c68f5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Updated VIF entry in instance network info cache for port 4205ccc7-d2d4-4623-a44a-9ee072d2f2a1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.464 253542 DEBUG nova.network.neutron [req-64a28fef-e73d-4f20-9c4c-0861bc9c2b72 req-1bb72c7e-6ba8-451a-9065-e93587c68f5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Updating instance_info_cache with network_info: [{"id": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "address": "fa:16:3e:36:e3:d5", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0054aae-e1", "ovs_interfaceid": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97c308c5-90af-4f05-a10a-723dc57687cf", "address": "fa:16:3e:5b:6e:e0", "network": {"id": "ffe1357e-4705-49bd-8af4-1a224647f586", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1434780146", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.247", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c308c5-90", "ovs_interfaceid": "97c308c5-90af-4f05-a10a-723dc57687cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "address": "fa:16:3e:cd:7b:1e", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4205ccc7-d2", "ovs_interfaceid": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.465 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.466 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.466 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.471 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.471 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0054aae-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.471 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0054aae-e1, col_values=(('external_ids', {'iface-id': 'a0054aae-e1e0-48e7-87eb-6836c8f5d8ec', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:e3:d5', 'vm-uuid': 'b06ecfc8-23f8-42c8-8a5a-396257c46b68'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.473 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:21 compute-0 NetworkManager[48915]: <info>  [1764059421.4743] manager: (tapa0054aae-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.477 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.480 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.481 253542 INFO os_vif [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:e3:d5,bridge_name='br-int',has_traffic_filtering=True,id=a0054aae-e1e0-48e7-87eb-6836c8f5d8ec,network=Network(36151320-921f-4006-be79-7a57c3a2b422),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0054aae-e1')
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.483 253542 DEBUG nova.virt.libvirt.vif [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:30:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-123732340',display_name='tempest-ServersTestMultiNic-server-123732340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-123732340',id=41,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c50e3969ac5b472b8defc2e5cca2901a',ramdisk_id='',reservation_id='r-ajc9bizs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-272267582',owner_user_name='tempest-ServersTestMultiNic-2722
67582-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:30:10Z,user_data=None,user_id='ccf1e57f59084541821b20089873a6ac',uuid=b06ecfc8-23f8-42c8-8a5a-396257c46b68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97c308c5-90af-4f05-a10a-723dc57687cf", "address": "fa:16:3e:5b:6e:e0", "network": {"id": "ffe1357e-4705-49bd-8af4-1a224647f586", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1434780146", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.247", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c308c5-90", "ovs_interfaceid": "97c308c5-90af-4f05-a10a-723dc57687cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.483 253542 DEBUG nova.network.os_vif_util [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converting VIF {"id": "97c308c5-90af-4f05-a10a-723dc57687cf", "address": "fa:16:3e:5b:6e:e0", "network": {"id": "ffe1357e-4705-49bd-8af4-1a224647f586", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1434780146", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.247", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c308c5-90", "ovs_interfaceid": "97c308c5-90af-4f05-a10a-723dc57687cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.484 253542 DEBUG nova.network.os_vif_util [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:6e:e0,bridge_name='br-int',has_traffic_filtering=True,id=97c308c5-90af-4f05-a10a-723dc57687cf,network=Network(ffe1357e-4705-49bd-8af4-1a224647f586),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97c308c5-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.485 253542 DEBUG os_vif [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:6e:e0,bridge_name='br-int',has_traffic_filtering=True,id=97c308c5-90af-4f05-a10a-723dc57687cf,network=Network(ffe1357e-4705-49bd-8af4-1a224647f586),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97c308c5-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.486 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.486 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.487 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.489 253542 DEBUG oslo_concurrency.lockutils [req-64a28fef-e73d-4f20-9c4c-0861bc9c2b72 req-1bb72c7e-6ba8-451a-9065-e93587c68f5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-b06ecfc8-23f8-42c8-8a5a-396257c46b68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.490 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.491 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97c308c5-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.492 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97c308c5-90, col_values=(('external_ids', {'iface-id': '97c308c5-90af-4f05-a10a-723dc57687cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:6e:e0', 'vm-uuid': 'b06ecfc8-23f8-42c8-8a5a-396257c46b68'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.493 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:21 compute-0 NetworkManager[48915]: <info>  [1764059421.4944] manager: (tap97c308c5-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.497 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.501 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.501 253542 INFO os_vif [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:6e:e0,bridge_name='br-int',has_traffic_filtering=True,id=97c308c5-90af-4f05-a10a-723dc57687cf,network=Network(ffe1357e-4705-49bd-8af4-1a224647f586),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97c308c5-90')
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.502 253542 DEBUG nova.virt.libvirt.vif [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:30:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-123732340',display_name='tempest-ServersTestMultiNic-server-123732340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-123732340',id=41,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c50e3969ac5b472b8defc2e5cca2901a',ramdisk_id='',reservation_id='r-ajc9bizs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-272267582',owner_user_name='tempest-ServersTestMultiNic-2722
67582-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:30:10Z,user_data=None,user_id='ccf1e57f59084541821b20089873a6ac',uuid=b06ecfc8-23f8-42c8-8a5a-396257c46b68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "address": "fa:16:3e:cd:7b:1e", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4205ccc7-d2", "ovs_interfaceid": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.502 253542 DEBUG nova.network.os_vif_util [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converting VIF {"id": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "address": "fa:16:3e:cd:7b:1e", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4205ccc7-d2", "ovs_interfaceid": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.503 253542 DEBUG nova.network.os_vif_util [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7b:1e,bridge_name='br-int',has_traffic_filtering=True,id=4205ccc7-d2d4-4623-a44a-9ee072d2f2a1,network=Network(36151320-921f-4006-be79-7a57c3a2b422),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4205ccc7-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.503 253542 DEBUG os_vif [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7b:1e,bridge_name='br-int',has_traffic_filtering=True,id=4205ccc7-d2d4-4623-a44a-9ee072d2f2a1,network=Network(36151320-921f-4006-be79-7a57c3a2b422),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4205ccc7-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.504 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.504 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.504 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.507 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.507 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4205ccc7-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.507 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4205ccc7-d2, col_values=(('external_ids', {'iface-id': '4205ccc7-d2d4-4623-a44a-9ee072d2f2a1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:7b:1e', 'vm-uuid': 'b06ecfc8-23f8-42c8-8a5a-396257c46b68'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.508 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:21 compute-0 NetworkManager[48915]: <info>  [1764059421.5091] manager: (tap4205ccc7-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.510 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.523 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.524 253542 INFO os_vif [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7b:1e,bridge_name='br-int',has_traffic_filtering=True,id=4205ccc7-d2d4-4623-a44a-9ee072d2f2a1,network=Network(36151320-921f-4006-be79-7a57c3a2b422),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4205ccc7-d2')
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.804 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.805 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.805 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] No VIF found with MAC fa:16:3e:36:e3:d5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.806 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] No VIF found with MAC fa:16:3e:5b:6e:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.806 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] No VIF found with MAC fa:16:3e:cd:7b:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.807 253542 INFO nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Using config drive
Nov 25 08:30:21 compute-0 nova_compute[253538]: 2025-11-25 08:30:21.841 253542 DEBUG nova.storage.rbd_utils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] rbd image b06ecfc8-23f8-42c8-8a5a-396257c46b68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:30:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 08:30:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2402.4 total, 600.0 interval
                                           Cumulative writes: 14K writes, 57K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 4285 syncs, 3.39 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8701 writes, 33K keys, 8701 commit groups, 1.0 writes per commit group, ingest: 33.90 MB, 0.06 MB/s
                                           Interval WAL: 8701 writes, 3355 syncs, 2.59 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 08:30:22 compute-0 nova_compute[253538]: 2025-11-25 08:30:22.318 253542 INFO nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Creating config drive at /var/lib/nova/instances/b06ecfc8-23f8-42c8-8a5a-396257c46b68/disk.config
Nov 25 08:30:22 compute-0 nova_compute[253538]: 2025-11-25 08:30:22.324 253542 DEBUG oslo_concurrency.processutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b06ecfc8-23f8-42c8-8a5a-396257c46b68/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm_d_ba1u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:22 compute-0 ceph-mon[75015]: pgmap v1410: 321 pgs: 321 active+clean; 167 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 535 KiB/s rd, 4.7 MiB/s wr, 163 op/s
Nov 25 08:30:22 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1834785206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:30:22 compute-0 nova_compute[253538]: 2025-11-25 08:30:22.968 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 167 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 501 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Nov 25 08:30:23 compute-0 nova_compute[253538]: 2025-11-25 08:30:23.391 253542 DEBUG oslo_concurrency.processutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b06ecfc8-23f8-42c8-8a5a-396257c46b68/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm_d_ba1u" returned: 0 in 1.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:23 compute-0 nova_compute[253538]: 2025-11-25 08:30:23.417 253542 DEBUG nova.storage.rbd_utils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] rbd image b06ecfc8-23f8-42c8-8a5a-396257c46b68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:30:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:30:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:30:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:30:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:30:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:30:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:30:23 compute-0 nova_compute[253538]: 2025-11-25 08:30:23.430 253542 DEBUG oslo_concurrency.processutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b06ecfc8-23f8-42c8-8a5a-396257c46b68/disk.config b06ecfc8-23f8-42c8-8a5a-396257c46b68_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:23 compute-0 ceph-mon[75015]: pgmap v1411: 321 pgs: 321 active+clean; 167 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 501 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Nov 25 08:30:24 compute-0 nova_compute[253538]: 2025-11-25 08:30:24.290 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 228 KiB/s rd, 143 KiB/s wr, 39 op/s
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.294 253542 DEBUG oslo_concurrency.processutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b06ecfc8-23f8-42c8-8a5a-396257c46b68/disk.config b06ecfc8-23f8-42c8-8a5a-396257c46b68_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.864s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.295 253542 INFO nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Deleting local config drive /var/lib/nova/instances/b06ecfc8-23f8-42c8-8a5a-396257c46b68/disk.config because it was imported into RBD.
Nov 25 08:30:25 compute-0 NetworkManager[48915]: <info>  [1764059425.3768] manager: (tapa0054aae-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/137)
Nov 25 08:30:25 compute-0 kernel: tapa0054aae-e1: entered promiscuous mode
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.390 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:25 compute-0 ovn_controller[152859]: 2025-11-25T08:30:25Z|00283|binding|INFO|Claiming lport a0054aae-e1e0-48e7-87eb-6836c8f5d8ec for this chassis.
Nov 25 08:30:25 compute-0 ovn_controller[152859]: 2025-11-25T08:30:25Z|00284|binding|INFO|a0054aae-e1e0-48e7-87eb-6836c8f5d8ec: Claiming fa:16:3e:36:e3:d5 10.100.0.180
Nov 25 08:30:25 compute-0 NetworkManager[48915]: <info>  [1764059425.3959] manager: (tap97c308c5-90): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.402 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:e3:d5 10.100.0.180'], port_security=['fa:16:3e:36:e3:d5 10.100.0.180'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.180/24', 'neutron:device_id': 'b06ecfc8-23f8-42c8-8a5a-396257c46b68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36151320-921f-4006-be79-7a57c3a2b422', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c50e3969ac5b472b8defc2e5cca2901a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b2ea7a0-7c1c-42c6-b6e3-ba1d0d87a9c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d20b1cad-00b2-45d3-9840-5880fb3efff3, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a0054aae-e1e0-48e7-87eb-6836c8f5d8ec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.405 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a0054aae-e1e0-48e7-87eb-6836c8f5d8ec in datapath 36151320-921f-4006-be79-7a57c3a2b422 bound to our chassis
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.408 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 36151320-921f-4006-be79-7a57c3a2b422
Nov 25 08:30:25 compute-0 NetworkManager[48915]: <info>  [1764059425.4161] manager: (tap4205ccc7-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Nov 25 08:30:25 compute-0 systemd-udevd[300421]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:30:25 compute-0 systemd-udevd[300422]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:30:25 compute-0 systemd-udevd[300424]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.427 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6dd833ab-e4b1-4ede-a308-db4f354be860]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.429 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap36151320-91 in ovnmeta-36151320-921f-4006-be79-7a57c3a2b422 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.431 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap36151320-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.431 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[63325699-d084-4c3c-a01c-ad5fa9888d21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:25 compute-0 NetworkManager[48915]: <info>  [1764059425.4340] device (tapa0054aae-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:30:25 compute-0 NetworkManager[48915]: <info>  [1764059425.4347] device (tapa0054aae-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.433 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[92c25840-6245-4a17-849c-9bff1cf4e2e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.448 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[153c598d-dd77-4440-b5ca-9d6cd81bcf3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:25 compute-0 kernel: tap97c308c5-90: entered promiscuous mode
Nov 25 08:30:25 compute-0 kernel: tap4205ccc7-d2: entered promiscuous mode
Nov 25 08:30:25 compute-0 NetworkManager[48915]: <info>  [1764059425.4569] device (tap97c308c5-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:30:25 compute-0 ovn_controller[152859]: 2025-11-25T08:30:25Z|00285|binding|INFO|Claiming lport 97c308c5-90af-4f05-a10a-723dc57687cf for this chassis.
Nov 25 08:30:25 compute-0 ovn_controller[152859]: 2025-11-25T08:30:25Z|00286|binding|INFO|97c308c5-90af-4f05-a10a-723dc57687cf: Claiming fa:16:3e:5b:6e:e0 10.100.1.247
Nov 25 08:30:25 compute-0 ovn_controller[152859]: 2025-11-25T08:30:25Z|00287|binding|INFO|Claiming lport 4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 for this chassis.
Nov 25 08:30:25 compute-0 ovn_controller[152859]: 2025-11-25T08:30:25Z|00288|binding|INFO|4205ccc7-d2d4-4623-a44a-9ee072d2f2a1: Claiming fa:16:3e:cd:7b:1e 10.100.0.186
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.459 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:25 compute-0 NetworkManager[48915]: <info>  [1764059425.4609] device (tap4205ccc7-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:30:25 compute-0 NetworkManager[48915]: <info>  [1764059425.4619] device (tap97c308c5-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:30:25 compute-0 NetworkManager[48915]: <info>  [1764059425.4623] device (tap4205ccc7-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.463 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:25 compute-0 systemd-machined[215790]: New machine qemu-46-instance-00000029.
Nov 25 08:30:25 compute-0 ovn_controller[152859]: 2025-11-25T08:30:25Z|00289|binding|INFO|Setting lport a0054aae-e1e0-48e7-87eb-6836c8f5d8ec ovn-installed in OVS
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.468 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.471 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:6e:e0 10.100.1.247'], port_security=['fa:16:3e:5b:6e:e0 10.100.1.247'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.247/24', 'neutron:device_id': 'b06ecfc8-23f8-42c8-8a5a-396257c46b68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ffe1357e-4705-49bd-8af4-1a224647f586', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c50e3969ac5b472b8defc2e5cca2901a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b2ea7a0-7c1c-42c6-b6e3-ba1d0d87a9c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b2c610e-45ce-4148-ba5e-2ba066ffe335, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=97c308c5-90af-4f05-a10a-723dc57687cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.472 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:7b:1e 10.100.0.186'], port_security=['fa:16:3e:cd:7b:1e 10.100.0.186'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.186/24', 'neutron:device_id': 'b06ecfc8-23f8-42c8-8a5a-396257c46b68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36151320-921f-4006-be79-7a57c3a2b422', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c50e3969ac5b472b8defc2e5cca2901a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b2ea7a0-7c1c-42c6-b6e3-ba1d0d87a9c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d20b1cad-00b2-45d3-9840-5880fb3efff3, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=4205ccc7-d2d4-4623-a44a-9ee072d2f2a1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:30:25 compute-0 ovn_controller[152859]: 2025-11-25T08:30:25Z|00290|binding|INFO|Setting lport a0054aae-e1e0-48e7-87eb-6836c8f5d8ec up in Southbound
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.480 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f171af10-acf4-44da-8fa8-7fe255391c20]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:25 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-00000029.
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.513 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f73c4513-62a1-4bd3-9864-2090c109137c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:25 compute-0 NetworkManager[48915]: <info>  [1764059425.5214] manager: (tap36151320-90): new Veth device (/org/freedesktop/NetworkManager/Devices/140)
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.520 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3665c7d5-6ff4-4916-be77-f07fa0702817]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:25 compute-0 ovn_controller[152859]: 2025-11-25T08:30:25Z|00291|binding|INFO|Setting lport 97c308c5-90af-4f05-a10a-723dc57687cf ovn-installed in OVS
Nov 25 08:30:25 compute-0 ovn_controller[152859]: 2025-11-25T08:30:25Z|00292|binding|INFO|Setting lport 97c308c5-90af-4f05-a10a-723dc57687cf up in Southbound
Nov 25 08:30:25 compute-0 ovn_controller[152859]: 2025-11-25T08:30:25Z|00293|binding|INFO|Setting lport 4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 ovn-installed in OVS
Nov 25 08:30:25 compute-0 ovn_controller[152859]: 2025-11-25T08:30:25Z|00294|binding|INFO|Setting lport 4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 up in Southbound
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.538 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.553 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ead8a9f9-efe3-41d5-a105-134c006e65d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.556 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[926d277b-8164-45c6-ba91-d78122970f79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.575 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:30:25 compute-0 NetworkManager[48915]: <info>  [1764059425.5799] device (tap36151320-90): carrier: link connected
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.586 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1688181d-db0a-4e66-987c-e75be87ac39b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.604 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e92aa666-f0d0-4a39-913e-8435b246bb8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap36151320-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:0d:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478017, 'reachable_time': 35452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300460, 'error': None, 'target': 'ovnmeta-36151320-921f-4006-be79-7a57c3a2b422', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.622 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9cf342-bfd4-4214-9752-f6068ae2b919]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef3:dbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478017, 'tstamp': 478017}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300461, 'error': None, 'target': 'ovnmeta-36151320-921f-4006-be79-7a57c3a2b422', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.641 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[303443e1-e735-4f88-b256-7bd5a86eda4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap36151320-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:0d:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478017, 'reachable_time': 35452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300462, 'error': None, 'target': 'ovnmeta-36151320-921f-4006-be79-7a57c3a2b422', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.678 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[75f7e609-f403-41ee-913f-b8e019b46269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.742 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[25dff00b-cf17-428f-b882-e4d321e30dc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.744 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36151320-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.745 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.745 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36151320-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.747 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:25 compute-0 NetworkManager[48915]: <info>  [1764059425.7483] manager: (tap36151320-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Nov 25 08:30:25 compute-0 kernel: tap36151320-90: entered promiscuous mode
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.763 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap36151320-90, col_values=(('external_ids', {'iface-id': '656dbfd7-9cd2-48f5-a6cb-8c6b2d5a1323'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.765 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:25 compute-0 ovn_controller[152859]: 2025-11-25T08:30:25Z|00295|binding|INFO|Releasing lport 656dbfd7-9cd2-48f5-a6cb-8c6b2d5a1323 from this chassis (sb_readonly=0)
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.766 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.767 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/36151320-921f-4006-be79-7a57c3a2b422.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/36151320-921f-4006-be79-7a57c3a2b422.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.768 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[41ada8ef-e7e2-4f21-a9f8-8ac76f646927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.769 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-36151320-921f-4006-be79-7a57c3a2b422
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/36151320-921f-4006-be79-7a57c3a2b422.pid.haproxy
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 36151320-921f-4006-be79-7a57c3a2b422
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:30:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:25.769 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-36151320-921f-4006-be79-7a57c3a2b422', 'env', 'PROCESS_TAG=haproxy-36151320-921f-4006-be79-7a57c3a2b422', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/36151320-921f-4006-be79-7a57c3a2b422.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.784 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.957 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059425.9571502, b06ecfc8-23f8-42c8-8a5a-396257c46b68 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.958 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] VM Started (Lifecycle Event)
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.974 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.978 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059425.958285, b06ecfc8-23f8-42c8-8a5a-396257c46b68 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:30:25 compute-0 nova_compute[253538]: 2025-11-25 08:30:25.978 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] VM Paused (Lifecycle Event)
Nov 25 08:30:26 compute-0 nova_compute[253538]: 2025-11-25 08:30:26.001 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:30:26 compute-0 nova_compute[253538]: 2025-11-25 08:30:26.004 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:30:26 compute-0 nova_compute[253538]: 2025-11-25 08:30:26.025 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:30:26 compute-0 podman[300538]: 2025-11-25 08:30:26.126793883 +0000 UTC m=+0.056170343 container create e52b22bf526a0b7826cc00df7d0a5144f5f5ecc60330e39ca12eb90b90524f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-36151320-921f-4006-be79-7a57c3a2b422, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:30:26 compute-0 systemd[1]: Started libpod-conmon-e52b22bf526a0b7826cc00df7d0a5144f5f5ecc60330e39ca12eb90b90524f00.scope.
Nov 25 08:30:26 compute-0 podman[300538]: 2025-11-25 08:30:26.094392787 +0000 UTC m=+0.023769267 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:30:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:30:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73ad405296ac57cc5ec61b9c7623922a009183345e523712d1eb9c093ad9c37/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:30:26 compute-0 podman[300538]: 2025-11-25 08:30:26.22837535 +0000 UTC m=+0.157751830 container init e52b22bf526a0b7826cc00df7d0a5144f5f5ecc60330e39ca12eb90b90524f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-36151320-921f-4006-be79-7a57c3a2b422, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:30:26 compute-0 podman[300538]: 2025-11-25 08:30:26.236091534 +0000 UTC m=+0.165467994 container start e52b22bf526a0b7826cc00df7d0a5144f5f5ecc60330e39ca12eb90b90524f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-36151320-921f-4006-be79-7a57c3a2b422, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 08:30:26 compute-0 neutron-haproxy-ovnmeta-36151320-921f-4006-be79-7a57c3a2b422[300553]: [NOTICE]   (300557) : New worker (300559) forked
Nov 25 08:30:26 compute-0 neutron-haproxy-ovnmeta-36151320-921f-4006-be79-7a57c3a2b422[300553]: [NOTICE]   (300557) : Loading success.
Nov 25 08:30:26 compute-0 ceph-mon[75015]: pgmap v1412: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 228 KiB/s rd, 143 KiB/s wr, 39 op/s
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.302 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 97c308c5-90af-4f05-a10a-723dc57687cf in datapath ffe1357e-4705-49bd-8af4-1a224647f586 unbound from our chassis
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.305 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ffe1357e-4705-49bd-8af4-1a224647f586
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.316 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3409a505-4ddf-4da1-b63f-0115c4d7f2d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.317 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapffe1357e-41 in ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.319 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapffe1357e-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.319 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d435c261-0967-402d-9b17-5e50c4815dad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.320 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4523634-6c50-4abc-b53e-fc6f313b8a1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.333 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[498c6eb8-a616-44f7-9f29-8bd424888e9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.355 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[54c7bc4b-407e-4d8a-af0c-f10c67a74d9a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.388 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5111babb-d039-470a-85c6-483439741202]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.396 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[66875398-7c41-49ed-9756-3c24ac54c2b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:26 compute-0 systemd-udevd[300451]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:30:26 compute-0 NetworkManager[48915]: <info>  [1764059426.3980] manager: (tapffe1357e-40): new Veth device (/org/freedesktop/NetworkManager/Devices/142)
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.446 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1550cd70-bd87-4080-8e91-d28622011b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.448 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[da9f8a82-93a2-4162-982d-c7313e34a894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:26 compute-0 NetworkManager[48915]: <info>  [1764059426.4753] device (tapffe1357e-40): carrier: link connected
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.484 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab457e8-f4d3-4290-8efd-2d4d200a83c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.504 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c1573644-5b37-4fea-8a26-c4b5e08a1cd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapffe1357e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:90:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478107, 'reachable_time': 24526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300578, 'error': None, 'target': 'ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:26 compute-0 nova_compute[253538]: 2025-11-25 08:30:26.509 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.520 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1aa145-0f7b-4da7-8c61-95cd7c61cc7d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe37:901b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478107, 'tstamp': 478107}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300579, 'error': None, 'target': 'ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.537 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[761824de-e8f7-4c64-b958-075ce95e99c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapffe1357e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:90:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478107, 'reachable_time': 24526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300580, 'error': None, 'target': 'ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:26 compute-0 nova_compute[253538]: 2025-11-25 08:30:26.548 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.549 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.566 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a34456-8ae8-4cbc-b6ce-5f66a04d3ed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.643 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0c6d0309-c6a9-4c18-b587-5b7e58b01701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.645 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapffe1357e-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.645 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.645 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapffe1357e-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:26 compute-0 kernel: tapffe1357e-40: entered promiscuous mode
Nov 25 08:30:26 compute-0 nova_compute[253538]: 2025-11-25 08:30:26.648 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:26 compute-0 NetworkManager[48915]: <info>  [1764059426.6483] manager: (tapffe1357e-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Nov 25 08:30:26 compute-0 nova_compute[253538]: 2025-11-25 08:30:26.651 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.653 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapffe1357e-40, col_values=(('external_ids', {'iface-id': '9f86fc8b-05ff-4d48-a576-c17d4e405d09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:26 compute-0 ovn_controller[152859]: 2025-11-25T08:30:26Z|00296|binding|INFO|Releasing lport 9f86fc8b-05ff-4d48-a576-c17d4e405d09 from this chassis (sb_readonly=0)
Nov 25 08:30:26 compute-0 nova_compute[253538]: 2025-11-25 08:30:26.654 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:26 compute-0 nova_compute[253538]: 2025-11-25 08:30:26.673 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.675 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ffe1357e-4705-49bd-8af4-1a224647f586.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ffe1357e-4705-49bd-8af4-1a224647f586.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.676 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fa4bda4f-8cb0-4fb0-9582-2ae0d6a538b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.677 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-ffe1357e-4705-49bd-8af4-1a224647f586
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/ffe1357e-4705-49bd-8af4-1a224647f586.pid.haproxy
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID ffe1357e-4705-49bd-8af4-1a224647f586
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:30:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:26.678 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586', 'env', 'PROCESS_TAG=haproxy-ffe1357e-4705-49bd-8af4-1a224647f586', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ffe1357e-4705-49bd-8af4-1a224647f586.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:30:27 compute-0 podman[300612]: 2025-11-25 08:30:27.052619592 +0000 UTC m=+0.049765147 container create 2ca82038d1f1ac9b06002c1e928a87bade1d8c1134e6ab0d4d6c40a803c4e6c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 08:30:27 compute-0 systemd[1]: Started libpod-conmon-2ca82038d1f1ac9b06002c1e928a87bade1d8c1134e6ab0d4d6c40a803c4e6c4.scope.
Nov 25 08:30:27 compute-0 podman[300612]: 2025-11-25 08:30:27.024055913 +0000 UTC m=+0.021201488 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:30:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:30:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06c82035632b1860d313a3b83f0f0e10bd79834170fde3ef606cc0809a7274b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:30:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1413: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 202 KiB/s rd, 138 KiB/s wr, 36 op/s
Nov 25 08:30:27 compute-0 podman[300612]: 2025-11-25 08:30:27.142041234 +0000 UTC m=+0.139186809 container init 2ca82038d1f1ac9b06002c1e928a87bade1d8c1134e6ab0d4d6c40a803c4e6c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:30:27 compute-0 podman[300612]: 2025-11-25 08:30:27.148928333 +0000 UTC m=+0.146073888 container start 2ca82038d1f1ac9b06002c1e928a87bade1d8c1134e6ab0d4d6c40a803c4e6c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 08:30:27 compute-0 neutron-haproxy-ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586[300627]: [NOTICE]   (300631) : New worker (300633) forked
Nov 25 08:30:27 compute-0 neutron-haproxy-ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586[300627]: [NOTICE]   (300631) : Loading success.
Nov 25 08:30:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:27.200 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 in datapath 36151320-921f-4006-be79-7a57c3a2b422 unbound from our chassis
Nov 25 08:30:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:27.202 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 36151320-921f-4006-be79-7a57c3a2b422
Nov 25 08:30:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:27.217 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[470da813-3086-4bba-8414-5e5762487bb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:27.244 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[48c71569-409a-49ce-a48c-da1a004d360e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:27.247 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[17804b45-47d8-40b2-86e9-8f355d1addf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:27.274 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8676de61-ed06-4f91-bdac-1b75fd37e97e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:27.296 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a727e2-86a4-4c2d-ac91-a7711c7a7098]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap36151320-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:0d:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 5, 'tx_packets': 5, 'rx_bytes': 442, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 5, 'tx_packets': 5, 'rx_bytes': 442, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478017, 'reachable_time': 35452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 5, 'inoctets': 372, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 5, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 372, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 5, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300647, 'error': None, 'target': 'ovnmeta-36151320-921f-4006-be79-7a57c3a2b422', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:27.315 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[72297df1-755f-41a6-906b-e5447f91c3cc]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap36151320-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478030, 'tstamp': 478030}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300648, 'error': None, 'target': 'ovnmeta-36151320-921f-4006-be79-7a57c3a2b422', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap36151320-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478033, 'tstamp': 478033}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300648, 'error': None, 'target': 'ovnmeta-36151320-921f-4006-be79-7a57c3a2b422', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:27.319 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36151320-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:27 compute-0 nova_compute[253538]: 2025-11-25 08:30:27.368 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:27 compute-0 nova_compute[253538]: 2025-11-25 08:30:27.370 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:27.370 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36151320-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:27.371 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:30:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:27.371 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap36151320-90, col_values=(('external_ids', {'iface-id': '656dbfd7-9cd2-48f5-a6cb-8c6b2d5a1323'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:27.371 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:30:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:27.372 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:30:28 compute-0 ceph-mon[75015]: pgmap v1413: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 202 KiB/s rd, 138 KiB/s wr, 36 op/s
Nov 25 08:30:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:30:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3914427107' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:30:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:30:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3914427107' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:30:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 83 KiB/s wr, 21 op/s
Nov 25 08:30:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3914427107' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:30:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3914427107' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:30:30 compute-0 nova_compute[253538]: 2025-11-25 08:30:30.113 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:30 compute-0 ceph-mon[75015]: pgmap v1414: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 83 KiB/s wr, 21 op/s
Nov 25 08:30:30 compute-0 nova_compute[253538]: 2025-11-25 08:30:30.525 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Acquiring lock "21243957-732f-435d-854b-56d6dd7c1ee5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:30 compute-0 nova_compute[253538]: 2025-11-25 08:30:30.525 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lock "21243957-732f-435d-854b-56d6dd7c1ee5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:30 compute-0 nova_compute[253538]: 2025-11-25 08:30:30.539 253542 DEBUG nova.compute.manager [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:30:30 compute-0 nova_compute[253538]: 2025-11-25 08:30:30.624 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:30 compute-0 nova_compute[253538]: 2025-11-25 08:30:30.625 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:30 compute-0 nova_compute[253538]: 2025-11-25 08:30:30.631 253542 DEBUG nova.virt.hardware [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:30:30 compute-0 nova_compute[253538]: 2025-11-25 08:30:30.632 253542 INFO nova.compute.claims [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:30:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:30:30 compute-0 nova_compute[253538]: 2025-11-25 08:30:30.753 253542 DEBUG oslo_concurrency.processutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 31 KiB/s wr, 16 op/s
Nov 25 08:30:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:30:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446891655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.163 253542 DEBUG oslo_concurrency.processutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.171 253542 DEBUG nova.compute.provider_tree [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.182 253542 DEBUG nova.scheduler.client.report [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.278 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.279 253542 DEBUG nova.compute.manager [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.357 253542 DEBUG nova.compute.manager [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.358 253542 DEBUG nova.network.neutron [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.384 253542 INFO nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.404 253542 DEBUG nova.compute.manager [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.489 253542 DEBUG nova.compute.manager [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.490 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.490 253542 INFO nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Creating image(s)
Nov 25 08:30:31 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/446891655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.845 253542 DEBUG nova.storage.rbd_utils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] rbd image 21243957-732f-435d-854b-56d6dd7c1ee5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.891 253542 DEBUG nova.storage.rbd_utils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] rbd image 21243957-732f-435d-854b-56d6dd7c1ee5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.912 253542 DEBUG nova.storage.rbd_utils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] rbd image 21243957-732f-435d-854b-56d6dd7c1ee5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.916 253542 DEBUG oslo_concurrency.processutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.950 253542 DEBUG nova.policy [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '65e8abd41b2b4a4ab175f581875790ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '60e7437d74e5463f92e6045be3ca5172', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:30:31 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.953 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:32 compute-0 nova_compute[253538]: 2025-11-25 08:30:31.999 253542 DEBUG oslo_concurrency.processutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:32 compute-0 nova_compute[253538]: 2025-11-25 08:30:32.000 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:32 compute-0 nova_compute[253538]: 2025-11-25 08:30:32.001 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:32 compute-0 nova_compute[253538]: 2025-11-25 08:30:32.002 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:32 compute-0 nova_compute[253538]: 2025-11-25 08:30:32.027 253542 DEBUG nova.storage.rbd_utils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] rbd image 21243957-732f-435d-854b-56d6dd7c1ee5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:30:32 compute-0 nova_compute[253538]: 2025-11-25 08:30:32.031 253542 DEBUG oslo_concurrency.processutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 21243957-732f-435d-854b-56d6dd7c1ee5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:32 compute-0 nova_compute[253538]: 2025-11-25 08:30:32.364 253542 DEBUG oslo_concurrency.processutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 21243957-732f-435d-854b-56d6dd7c1ee5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:32 compute-0 nova_compute[253538]: 2025-11-25 08:30:32.433 253542 DEBUG nova.storage.rbd_utils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] resizing rbd image 21243957-732f-435d-854b-56d6dd7c1ee5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:30:32 compute-0 ceph-mon[75015]: pgmap v1415: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 31 KiB/s wr, 16 op/s
Nov 25 08:30:32 compute-0 nova_compute[253538]: 2025-11-25 08:30:32.574 253542 DEBUG nova.objects.instance [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lazy-loading 'migration_context' on Instance uuid 21243957-732f-435d-854b-56d6dd7c1ee5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:30:32 compute-0 nova_compute[253538]: 2025-11-25 08:30:32.589 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:30:32 compute-0 nova_compute[253538]: 2025-11-25 08:30:32.589 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Ensure instance console log exists: /var/lib/nova/instances/21243957-732f-435d-854b-56d6dd7c1ee5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:30:32 compute-0 nova_compute[253538]: 2025-11-25 08:30:32.590 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:32 compute-0 nova_compute[253538]: 2025-11-25 08:30:32.590 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:32 compute-0 nova_compute[253538]: 2025-11-25 08:30:32.591 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 25 KiB/s wr, 10 op/s
Nov 25 08:30:33 compute-0 nova_compute[253538]: 2025-11-25 08:30:33.546 253542 DEBUG nova.network.neutron [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Successfully created port: b1fe2a2b-159d-40ea-a88d-1a311f1e7702 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:30:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:34.374 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:34 compute-0 ceph-mon[75015]: pgmap v1416: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 25 KiB/s wr, 10 op/s
Nov 25 08:30:34 compute-0 podman[300837]: 2025-11-25 08:30:34.839802555 +0000 UTC m=+0.082063169 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 25 08:30:35 compute-0 nova_compute[253538]: 2025-11-25 08:30:35.080 253542 DEBUG nova.network.neutron [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Successfully updated port: b1fe2a2b-159d-40ea-a88d-1a311f1e7702 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:30:35 compute-0 nova_compute[253538]: 2025-11-25 08:30:35.098 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Acquiring lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:30:35 compute-0 nova_compute[253538]: 2025-11-25 08:30:35.098 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Acquired lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:30:35 compute-0 nova_compute[253538]: 2025-11-25 08:30:35.098 253542 DEBUG nova.network.neutron [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:30:35 compute-0 nova_compute[253538]: 2025-11-25 08:30:35.115 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 198 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Nov 25 08:30:35 compute-0 ceph-mon[75015]: pgmap v1417: 321 pgs: 321 active+clean; 198 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Nov 25 08:30:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:30:36 compute-0 nova_compute[253538]: 2025-11-25 08:30:36.060 253542 DEBUG nova.network.neutron [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:30:36 compute-0 nova_compute[253538]: 2025-11-25 08:30:36.954 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:37 compute-0 nova_compute[253538]: 2025-11-25 08:30:37.064 253542 DEBUG nova.compute.manager [req-e8ee876b-1ae1-408e-9e0f-dc4e6c7058b0 req-8c68724f-8b46-452f-a554-111adbe372b3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Received event network-changed-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:37 compute-0 nova_compute[253538]: 2025-11-25 08:30:37.065 253542 DEBUG nova.compute.manager [req-e8ee876b-1ae1-408e-9e0f-dc4e6c7058b0 req-8c68724f-8b46-452f-a554-111adbe372b3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Refreshing instance network info cache due to event network-changed-b1fe2a2b-159d-40ea-a88d-1a311f1e7702. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:30:37 compute-0 nova_compute[253538]: 2025-11-25 08:30:37.065 253542 DEBUG oslo_concurrency.lockutils [req-e8ee876b-1ae1-408e-9e0f-dc4e6c7058b0 req-8c68724f-8b46-452f-a554-111adbe372b3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:30:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 213 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 08:30:38 compute-0 ceph-mon[75015]: pgmap v1418: 321 pgs: 321 active+clean; 213 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 08:30:38 compute-0 podman[300857]: 2025-11-25 08:30:38.81399935 +0000 UTC m=+0.063560157 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.090 253542 DEBUG nova.network.neutron [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Updating instance_info_cache with network_info: [{"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.108 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Releasing lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.109 253542 DEBUG nova.compute.manager [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Instance network_info: |[{"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.110 253542 DEBUG oslo_concurrency.lockutils [req-e8ee876b-1ae1-408e-9e0f-dc4e6c7058b0 req-8c68724f-8b46-452f-a554-111adbe372b3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.111 253542 DEBUG nova.network.neutron [req-e8ee876b-1ae1-408e-9e0f-dc4e6c7058b0 req-8c68724f-8b46-452f-a554-111adbe372b3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Refreshing network info cache for port b1fe2a2b-159d-40ea-a88d-1a311f1e7702 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.116 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Start _get_guest_xml network_info=[{"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.123 253542 WARNING nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.139 253542 DEBUG nova.virt.libvirt.host [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.140 253542 DEBUG nova.virt.libvirt.host [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.143 253542 DEBUG nova.virt.libvirt.host [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.144 253542 DEBUG nova.virt.libvirt.host [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:30:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 213 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.144 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.144 253542 DEBUG nova.virt.hardware [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.145 253542 DEBUG nova.virt.hardware [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.145 253542 DEBUG nova.virt.hardware [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.145 253542 DEBUG nova.virt.hardware [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.145 253542 DEBUG nova.virt.hardware [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.145 253542 DEBUG nova.virt.hardware [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.145 253542 DEBUG nova.virt.hardware [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.146 253542 DEBUG nova.virt.hardware [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.146 253542 DEBUG nova.virt.hardware [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.146 253542 DEBUG nova.virt.hardware [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.146 253542 DEBUG nova.virt.hardware [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.149 253542 DEBUG oslo_concurrency.processutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:30:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1103531901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.611 253542 DEBUG oslo_concurrency.processutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.640 253542 DEBUG nova.storage.rbd_utils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] rbd image 21243957-732f-435d-854b-56d6dd7c1ee5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:30:39 compute-0 nova_compute[253538]: 2025-11-25 08:30:39.644 253542 DEBUG oslo_concurrency.processutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:30:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2184867165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.093 253542 DEBUG oslo_concurrency.processutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.095 253542 DEBUG nova.virt.libvirt.vif [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:30:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-431160682',display_name='tempest-AttachInterfacesUnderV243Test-server-431160682',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-431160682',id=42,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM5DjtqNcv+6RkNArYu24kFktfxDRaWC3D2gbYUaR8sRxNJVPe0ZuP7IU98YDbQlWI7Cp8dxno/YFmEni1mtKuJTQQphjmW7WKzVnDUcxPTuZ4GE9nq3vP0QvvkqbmlX/g==',key_name='tempest-keypair-617120992',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='60e7437d74e5463f92e6045be3ca5172',ramdisk_id='',reservation_id='r-qs1dttp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-616024449',owner_user_name='tempest-AttachInterfacesUnderV243Test-616024449-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:30:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='65e8abd41b2b4a4ab175f581875790ac',uuid=21243957-732f-435d-854b-56d6dd7c1ee5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.096 253542 DEBUG nova.network.os_vif_util [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Converting VIF {"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.097 253542 DEBUG nova.network.os_vif_util [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:41:6c,bridge_name='br-int',has_traffic_filtering=True,id=b1fe2a2b-159d-40ea-a88d-1a311f1e7702,network=Network(87ff5af4-98f5-4e7f-8049-0f70796e8c58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fe2a2b-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.098 253542 DEBUG nova.objects.instance [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lazy-loading 'pci_devices' on Instance uuid 21243957-732f-435d-854b-56d6dd7c1ee5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.122 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:30:40 compute-0 nova_compute[253538]:   <uuid>21243957-732f-435d-854b-56d6dd7c1ee5</uuid>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   <name>instance-0000002a</name>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-431160682</nova:name>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:30:39</nova:creationTime>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:30:40 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:30:40 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:30:40 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:30:40 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:30:40 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:30:40 compute-0 nova_compute[253538]:         <nova:user uuid="65e8abd41b2b4a4ab175f581875790ac">tempest-AttachInterfacesUnderV243Test-616024449-project-member</nova:user>
Nov 25 08:30:40 compute-0 nova_compute[253538]:         <nova:project uuid="60e7437d74e5463f92e6045be3ca5172">tempest-AttachInterfacesUnderV243Test-616024449</nova:project>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:30:40 compute-0 nova_compute[253538]:         <nova:port uuid="b1fe2a2b-159d-40ea-a88d-1a311f1e7702">
Nov 25 08:30:40 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <system>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <entry name="serial">21243957-732f-435d-854b-56d6dd7c1ee5</entry>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <entry name="uuid">21243957-732f-435d-854b-56d6dd7c1ee5</entry>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     </system>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   <os>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   </os>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   <features>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   </features>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/21243957-732f-435d-854b-56d6dd7c1ee5_disk">
Nov 25 08:30:40 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       </source>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:30:40 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/21243957-732f-435d-854b-56d6dd7c1ee5_disk.config">
Nov 25 08:30:40 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       </source>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:30:40 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:13:41:6c"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <target dev="tapb1fe2a2b-15"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/21243957-732f-435d-854b-56d6dd7c1ee5/console.log" append="off"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <video>
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     </video>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:30:40 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:30:40 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:30:40 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:30:40 compute-0 nova_compute[253538]: </domain>
Nov 25 08:30:40 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.123 253542 DEBUG nova.compute.manager [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Preparing to wait for external event network-vif-plugged-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.124 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Acquiring lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.124 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.124 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.126 253542 DEBUG nova.virt.libvirt.vif [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:30:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-431160682',display_name='tempest-AttachInterfacesUnderV243Test-server-431160682',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-431160682',id=42,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM5DjtqNcv+6RkNArYu24kFktfxDRaWC3D2gbYUaR8sRxNJVPe0ZuP7IU98YDbQlWI7Cp8dxno/YFmEni1mtKuJTQQphjmW7WKzVnDUcxPTuZ4GE9nq3vP0QvvkqbmlX/g==',key_name='tempest-keypair-617120992',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='60e7437d74e5463f92e6045be3ca5172',ramdisk_id='',reservation_id='r-qs1dttp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-616024449',owner_user_name='tempest-AttachInterfacesUnderV243Test-616024449-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:30:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='65e8abd41b2b4a4ab175f581875790ac',uuid=21243957-732f-435d-854b-56d6dd7c1ee5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.126 253542 DEBUG nova.network.os_vif_util [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Converting VIF {"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.127 253542 DEBUG nova.network.os_vif_util [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:41:6c,bridge_name='br-int',has_traffic_filtering=True,id=b1fe2a2b-159d-40ea-a88d-1a311f1e7702,network=Network(87ff5af4-98f5-4e7f-8049-0f70796e8c58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fe2a2b-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.128 253542 DEBUG os_vif [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:41:6c,bridge_name='br-int',has_traffic_filtering=True,id=b1fe2a2b-159d-40ea-a88d-1a311f1e7702,network=Network(87ff5af4-98f5-4e7f-8049-0f70796e8c58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fe2a2b-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.129 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.132 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.132 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.133 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.136 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.136 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1fe2a2b-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.137 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb1fe2a2b-15, col_values=(('external_ids', {'iface-id': 'b1fe2a2b-159d-40ea-a88d-1a311f1e7702', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:41:6c', 'vm-uuid': '21243957-732f-435d-854b-56d6dd7c1ee5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.138 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:40 compute-0 NetworkManager[48915]: <info>  [1764059440.1400] manager: (tapb1fe2a2b-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.141 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.149 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.150 253542 INFO os_vif [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:41:6c,bridge_name='br-int',has_traffic_filtering=True,id=b1fe2a2b-159d-40ea-a88d-1a311f1e7702,network=Network(87ff5af4-98f5-4e7f-8049-0f70796e8c58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fe2a2b-15')
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.409 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.409 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.410 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] No VIF found with MAC fa:16:3e:13:41:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.410 253542 INFO nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Using config drive
Nov 25 08:30:40 compute-0 ceph-mon[75015]: pgmap v1419: 321 pgs: 321 active+clean; 213 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 08:30:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1103531901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:30:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2184867165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:30:40 compute-0 nova_compute[253538]: 2025-11-25 08:30:40.460 253542 DEBUG nova.storage.rbd_utils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] rbd image 21243957-732f-435d-854b-56d6dd7c1ee5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:30:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:30:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:41.055 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:41.056 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:41.057 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1420: 321 pgs: 321 active+clean; 213 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 08:30:41 compute-0 nova_compute[253538]: 2025-11-25 08:30:41.208 253542 DEBUG nova.compute.manager [req-a0f7ab2a-1d7d-484c-a607-082456d90213 req-21c9534f-6b28-4f5e-bc8b-84d726344c6a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-plugged-a0054aae-e1e0-48e7-87eb-6836c8f5d8ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:41 compute-0 nova_compute[253538]: 2025-11-25 08:30:41.209 253542 DEBUG oslo_concurrency.lockutils [req-a0f7ab2a-1d7d-484c-a607-082456d90213 req-21c9534f-6b28-4f5e-bc8b-84d726344c6a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:41 compute-0 nova_compute[253538]: 2025-11-25 08:30:41.209 253542 DEBUG oslo_concurrency.lockutils [req-a0f7ab2a-1d7d-484c-a607-082456d90213 req-21c9534f-6b28-4f5e-bc8b-84d726344c6a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:41 compute-0 nova_compute[253538]: 2025-11-25 08:30:41.209 253542 DEBUG oslo_concurrency.lockutils [req-a0f7ab2a-1d7d-484c-a607-082456d90213 req-21c9534f-6b28-4f5e-bc8b-84d726344c6a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:41 compute-0 nova_compute[253538]: 2025-11-25 08:30:41.209 253542 DEBUG nova.compute.manager [req-a0f7ab2a-1d7d-484c-a607-082456d90213 req-21c9534f-6b28-4f5e-bc8b-84d726344c6a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Processing event network-vif-plugged-a0054aae-e1e0-48e7-87eb-6836c8f5d8ec _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:30:41 compute-0 nova_compute[253538]: 2025-11-25 08:30:41.210 253542 DEBUG nova.compute.manager [req-a0f7ab2a-1d7d-484c-a607-082456d90213 req-21c9534f-6b28-4f5e-bc8b-84d726344c6a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-plugged-a0054aae-e1e0-48e7-87eb-6836c8f5d8ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:41 compute-0 nova_compute[253538]: 2025-11-25 08:30:41.210 253542 DEBUG oslo_concurrency.lockutils [req-a0f7ab2a-1d7d-484c-a607-082456d90213 req-21c9534f-6b28-4f5e-bc8b-84d726344c6a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:41 compute-0 nova_compute[253538]: 2025-11-25 08:30:41.210 253542 DEBUG oslo_concurrency.lockutils [req-a0f7ab2a-1d7d-484c-a607-082456d90213 req-21c9534f-6b28-4f5e-bc8b-84d726344c6a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:41 compute-0 nova_compute[253538]: 2025-11-25 08:30:41.210 253542 DEBUG oslo_concurrency.lockutils [req-a0f7ab2a-1d7d-484c-a607-082456d90213 req-21c9534f-6b28-4f5e-bc8b-84d726344c6a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:41 compute-0 nova_compute[253538]: 2025-11-25 08:30:41.210 253542 DEBUG nova.compute.manager [req-a0f7ab2a-1d7d-484c-a607-082456d90213 req-21c9534f-6b28-4f5e-bc8b-84d726344c6a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] No event matching network-vif-plugged-a0054aae-e1e0-48e7-87eb-6836c8f5d8ec in dict_keys([('network-vif-plugged', '97c308c5-90af-4f05-a10a-723dc57687cf'), ('network-vif-plugged', '4205ccc7-d2d4-4623-a44a-9ee072d2f2a1')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 25 08:30:41 compute-0 nova_compute[253538]: 2025-11-25 08:30:41.211 253542 WARNING nova.compute.manager [req-a0f7ab2a-1d7d-484c-a607-082456d90213 req-21c9534f-6b28-4f5e-bc8b-84d726344c6a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received unexpected event network-vif-plugged-a0054aae-e1e0-48e7-87eb-6836c8f5d8ec for instance with vm_state building and task_state spawning.
Nov 25 08:30:41 compute-0 nova_compute[253538]: 2025-11-25 08:30:41.334 253542 INFO nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Creating config drive at /var/lib/nova/instances/21243957-732f-435d-854b-56d6dd7c1ee5/disk.config
Nov 25 08:30:41 compute-0 nova_compute[253538]: 2025-11-25 08:30:41.345 253542 DEBUG oslo_concurrency.processutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/21243957-732f-435d-854b-56d6dd7c1ee5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj7xhuzi1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:41 compute-0 nova_compute[253538]: 2025-11-25 08:30:41.516 253542 DEBUG oslo_concurrency.processutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/21243957-732f-435d-854b-56d6dd7c1ee5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj7xhuzi1" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:41 compute-0 nova_compute[253538]: 2025-11-25 08:30:41.559 253542 DEBUG nova.storage.rbd_utils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] rbd image 21243957-732f-435d-854b-56d6dd7c1ee5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:30:41 compute-0 nova_compute[253538]: 2025-11-25 08:30:41.564 253542 DEBUG oslo_concurrency.processutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/21243957-732f-435d-854b-56d6dd7c1ee5/disk.config 21243957-732f-435d-854b-56d6dd7c1ee5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.202 253542 DEBUG oslo_concurrency.processutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/21243957-732f-435d-854b-56d6dd7c1ee5/disk.config 21243957-732f-435d-854b-56d6dd7c1ee5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.203 253542 INFO nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Deleting local config drive /var/lib/nova/instances/21243957-732f-435d-854b-56d6dd7c1ee5/disk.config because it was imported into RBD.
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.215 253542 DEBUG oslo_concurrency.lockutils [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Acquiring lock "ce1afa72-143f-43f3-9859-df7f3523888a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.216 253542 DEBUG oslo_concurrency.lockutils [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lock "ce1afa72-143f-43f3-9859-df7f3523888a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.216 253542 DEBUG oslo_concurrency.lockutils [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Acquiring lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.217 253542 DEBUG oslo_concurrency.lockutils [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.217 253542 DEBUG oslo_concurrency.lockutils [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.219 253542 INFO nova.compute.manager [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Terminating instance
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.221 253542 DEBUG nova.compute.manager [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:30:42 compute-0 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 08:30:42 compute-0 kernel: tapb1fe2a2b-15: entered promiscuous mode
Nov 25 08:30:42 compute-0 NetworkManager[48915]: <info>  [1764059442.2566] manager: (tapb1fe2a2b-15): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.254 253542 DEBUG nova.network.neutron [req-e8ee876b-1ae1-408e-9e0f-dc4e6c7058b0 req-8c68724f-8b46-452f-a554-111adbe372b3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Updated VIF entry in instance network info cache for port b1fe2a2b-159d-40ea-a88d-1a311f1e7702. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.255 253542 DEBUG nova.network.neutron [req-e8ee876b-1ae1-408e-9e0f-dc4e6c7058b0 req-8c68724f-8b46-452f-a554-111adbe372b3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Updating instance_info_cache with network_info: [{"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.258 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:42 compute-0 ovn_controller[152859]: 2025-11-25T08:30:42Z|00297|binding|INFO|Claiming lport b1fe2a2b-159d-40ea-a88d-1a311f1e7702 for this chassis.
Nov 25 08:30:42 compute-0 ovn_controller[152859]: 2025-11-25T08:30:42Z|00298|binding|INFO|b1fe2a2b-159d-40ea-a88d-1a311f1e7702: Claiming fa:16:3e:13:41:6c 10.100.0.3
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.266 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:41:6c 10.100.0.3'], port_security=['fa:16:3e:13:41:6c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '21243957-732f-435d-854b-56d6dd7c1ee5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87ff5af4-98f5-4e7f-8049-0f70796e8c58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '60e7437d74e5463f92e6045be3ca5172', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd01ef662-2da8-4b58-a1cd-baf30a0ed9e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2c94f8cf-40ef-4aa3-b5ea-ef85bbbaaac9, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=b1fe2a2b-159d-40ea-a88d-1a311f1e7702) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.266 162739 INFO neutron.agent.ovn.metadata.agent [-] Port b1fe2a2b-159d-40ea-a88d-1a311f1e7702 in datapath 87ff5af4-98f5-4e7f-8049-0f70796e8c58 bound to our chassis
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.268 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87ff5af4-98f5-4e7f-8049-0f70796e8c58
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.269 253542 DEBUG oslo_concurrency.lockutils [req-e8ee876b-1ae1-408e-9e0f-dc4e6c7058b0 req-8c68724f-8b46-452f-a554-111adbe372b3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:30:42 compute-0 ovn_controller[152859]: 2025-11-25T08:30:42Z|00299|binding|INFO|Setting lport b1fe2a2b-159d-40ea-a88d-1a311f1e7702 ovn-installed in OVS
Nov 25 08:30:42 compute-0 ovn_controller[152859]: 2025-11-25T08:30:42Z|00300|binding|INFO|Setting lport b1fe2a2b-159d-40ea-a88d-1a311f1e7702 up in Southbound
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.278 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d09125-d2ef-4e56-810a-e06fa93cb7cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.279 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87ff5af4-91 in ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.279 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.281 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87ff5af4-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.281 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa6bd16f-01c2-4e22-aeff-c650c3747dc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.283 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.282 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[514e7e13-efa7-4634-a2c7-57b298e9d140]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.301 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[dc785c17-26bc-4751-bb51-decbf57aae35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:42 compute-0 systemd-machined[215790]: New machine qemu-47-instance-0000002a.
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.316 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[01e3ccb3-507d-4bbd-b74a-bfa5ad1c256c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:42 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-0000002a.
Nov 25 08:30:42 compute-0 systemd-udevd[301029]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:30:42 compute-0 NetworkManager[48915]: <info>  [1764059442.3415] device (tapb1fe2a2b-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:30:42 compute-0 NetworkManager[48915]: <info>  [1764059442.3424] device (tapb1fe2a2b-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.348 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9118eae2-f0b6-40f9-b4f5-43cfe94ffe12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:42 compute-0 NetworkManager[48915]: <info>  [1764059442.3543] manager: (tap87ff5af4-90): new Veth device (/org/freedesktop/NetworkManager/Devices/146)
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.353 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f85b5e78-e447-40d7-8564-c664fb250c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:42 compute-0 kernel: tap4521eb38-aa (unregistering): left promiscuous mode
Nov 25 08:30:42 compute-0 NetworkManager[48915]: <info>  [1764059442.3613] device (tap4521eb38-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:30:42 compute-0 ovn_controller[152859]: 2025-11-25T08:30:42Z|00301|binding|INFO|Releasing lport 4521eb38-aa12-4b8a-92f6-3f1a9121fe5c from this chassis (sb_readonly=0)
Nov 25 08:30:42 compute-0 ovn_controller[152859]: 2025-11-25T08:30:42Z|00302|binding|INFO|Setting lport 4521eb38-aa12-4b8a-92f6-3f1a9121fe5c down in Southbound
Nov 25 08:30:42 compute-0 ovn_controller[152859]: 2025-11-25T08:30:42Z|00303|binding|INFO|Removing iface tap4521eb38-aa ovn-installed in OVS
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.374 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.379 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:78:70 10.100.0.8'], port_security=['fa:16:3e:13:78:70 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ce1afa72-143f-43f3-9859-df7f3523888a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-824090d9-c62a-4860-847c-47ccffd255d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e37021edb08c44fbb7ea019842489f3c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '868a82d8-4d65-49d2-a7a6-131752d09d46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18939c60-bcf8-4c2d-8acd-d9d0c9ad5069, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=4521eb38-aa12-4b8a-92f6-3f1a9121fe5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.390 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.394 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[73280581-c7a6-4daa-9913-89a13872f36d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.397 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[86df2bca-ff6e-44f7-81a3-199ddeba312b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:42 compute-0 NetworkManager[48915]: <info>  [1764059442.4197] device (tap87ff5af4-90): carrier: link connected
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.424 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[95413e7e-d966-4be4-bc54-8c3221e3671e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:42 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000028.scope: Deactivated successfully.
Nov 25 08:30:42 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000028.scope: Consumed 15.073s CPU time.
Nov 25 08:30:42 compute-0 systemd-machined[215790]: Machine qemu-45-instance-00000028 terminated.
Nov 25 08:30:42 compute-0 podman[301011]: 2025-11-25 08:30:42.441631766 +0000 UTC m=+0.139975940 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.441 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[88700c4a-74db-4540-8204-a2b8f207e184]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87ff5af4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:4e:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479701, 'reachable_time': 20966, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301077, 'error': None, 'target': 'ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:42 compute-0 NetworkManager[48915]: <info>  [1764059442.4488] manager: (tap4521eb38-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/147)
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.457 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[012b590a-1cf2-4ebc-8898-1c97d56c1f7c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:4efe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479701, 'tstamp': 479701}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301080, 'error': None, 'target': 'ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.469 253542 INFO nova.virt.libvirt.driver [-] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Instance destroyed successfully.
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.469 253542 DEBUG nova.objects.instance [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lazy-loading 'resources' on Instance uuid ce1afa72-143f-43f3-9859-df7f3523888a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.475 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e375d55-ecfb-4a03-9ccb-c06f0476adb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87ff5af4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:4e:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479701, 'reachable_time': 20966, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301087, 'error': None, 'target': 'ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.480 253542 DEBUG nova.virt.libvirt.vif [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:29:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-342565699',display_name='tempest-ServersTestManualDisk-server-342565699',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-342565699',id=40,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMDhRS19G0kH2X3pDMDdi0IU8772j8a8Yzb7Q6dfDkWUMmP5RtMrsn0oQKvygM9ASW3NADtnaU7L2yy4qYCSo157088ZYCg7StYPEN9aWQW29kHzWDtSoReuM5x9SQIeWw==',key_name='tempest-keypair-1530897481',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:29:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e37021edb08c44fbb7ea019842489f3c',ramdisk_id='',reservation_id='r-9hx6wxky',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1420264531',owner_user_name='tempest-ServersTestManualDisk-1420264531-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:29:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cc7f739420484ab696255ef9cdfcc581',uuid=ce1afa72-143f-43f3-9859-df7f3523888a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "address": "fa:16:3e:13:78:70", "network": {"id": "824090d9-c62a-4860-847c-47ccffd255d9", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-816014635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e37021edb08c44fbb7ea019842489f3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4521eb38-aa", "ovs_interfaceid": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.481 253542 DEBUG nova.network.os_vif_util [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Converting VIF {"id": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "address": "fa:16:3e:13:78:70", "network": {"id": "824090d9-c62a-4860-847c-47ccffd255d9", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-816014635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e37021edb08c44fbb7ea019842489f3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4521eb38-aa", "ovs_interfaceid": "4521eb38-aa12-4b8a-92f6-3f1a9121fe5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.482 253542 DEBUG nova.network.os_vif_util [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:13:78:70,bridge_name='br-int',has_traffic_filtering=True,id=4521eb38-aa12-4b8a-92f6-3f1a9121fe5c,network=Network(824090d9-c62a-4860-847c-47ccffd255d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4521eb38-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.482 253542 DEBUG os_vif [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:13:78:70,bridge_name='br-int',has_traffic_filtering=True,id=4521eb38-aa12-4b8a-92f6-3f1a9121fe5c,network=Network(824090d9-c62a-4860-847c-47ccffd255d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4521eb38-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.493 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.494 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4521eb38-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.496 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.497 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.499 253542 INFO os_vif [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:13:78:70,bridge_name='br-int',has_traffic_filtering=True,id=4521eb38-aa12-4b8a-92f6-3f1a9121fe5c,network=Network(824090d9-c62a-4860-847c-47ccffd255d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4521eb38-aa')
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.503 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[939ff2f2-edaa-4a5e-a2b9-060f811179bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:42 compute-0 ceph-mon[75015]: pgmap v1420: 321 pgs: 321 active+clean; 213 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.547 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[490d4d7a-e782-4f59-af7c-1d67da6b1edb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.548 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87ff5af4-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.548 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.548 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87ff5af4-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.550 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:42 compute-0 NetworkManager[48915]: <info>  [1764059442.5508] manager: (tap87ff5af4-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Nov 25 08:30:42 compute-0 kernel: tap87ff5af4-90: entered promiscuous mode
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.554 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87ff5af4-90, col_values=(('external_ids', {'iface-id': 'c8895407-4b3c-419a-8694-55e3e69b82b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.555 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:42 compute-0 ovn_controller[152859]: 2025-11-25T08:30:42Z|00304|binding|INFO|Releasing lport c8895407-4b3c-419a-8694-55e3e69b82b8 from this chassis (sb_readonly=0)
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.588 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87ff5af4-98f5-4e7f-8049-0f70796e8c58.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87ff5af4-98f5-4e7f-8049-0f70796e8c58.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.589 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6e386cdd-2315-4ade-8c74-0d087287ad76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.589 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-87ff5af4-98f5-4e7f-8049-0f70796e8c58
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/87ff5af4-98f5-4e7f-8049-0f70796e8c58.pid.haproxy
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 87ff5af4-98f5-4e7f-8049-0f70796e8c58
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:30:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:42.590 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58', 'env', 'PROCESS_TAG=haproxy-87ff5af4-98f5-4e7f-8049-0f70796e8c58', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87ff5af4-98f5-4e7f-8049-0f70796e8c58.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.873 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059442.873183, 21243957-732f-435d-854b-56d6dd7c1ee5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.874 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] VM Started (Lifecycle Event)
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.893 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.897 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059442.8739564, 21243957-732f-435d-854b-56d6dd7c1ee5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.898 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] VM Paused (Lifecycle Event)
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.915 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.918 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:30:42 compute-0 nova_compute[253538]: 2025-11-25 08:30:42.938 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:30:43 compute-0 podman[301181]: 2025-11-25 08:30:42.95156234 +0000 UTC m=+0.023972793 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:30:43 compute-0 podman[301181]: 2025-11-25 08:30:43.06083478 +0000 UTC m=+0.133245253 container create 825d8ade8a0e5a05f3283987747ff80f37f3794079d11a17d54012ce256974f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 08:30:43 compute-0 systemd[1]: Started libpod-conmon-825d8ade8a0e5a05f3283987747ff80f37f3794079d11a17d54012ce256974f8.scope.
Nov 25 08:30:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 213 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 08:30:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d121a466039b5ba2f5cf48cb7c03ea4f42ecb7c68566a4e7acbc1cf4c46fa14b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:30:43 compute-0 podman[301181]: 2025-11-25 08:30:43.181271589 +0000 UTC m=+0.253682042 container init 825d8ade8a0e5a05f3283987747ff80f37f3794079d11a17d54012ce256974f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 08:30:43 compute-0 podman[301181]: 2025-11-25 08:30:43.194058663 +0000 UTC m=+0.266469096 container start 825d8ade8a0e5a05f3283987747ff80f37f3794079d11a17d54012ce256974f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 08:30:43 compute-0 neutron-haproxy-ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58[301198]: [NOTICE]   (301202) : New worker (301204) forked
Nov 25 08:30:43 compute-0 neutron-haproxy-ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58[301198]: [NOTICE]   (301202) : Loading success.
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.248 253542 DEBUG nova.compute.manager [req-99bae36b-6634-48eb-85c8-adb5bc707e60 req-d37c1eaa-8424-4196-9aee-0cc954534677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Received event network-vif-plugged-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.249 253542 DEBUG oslo_concurrency.lockutils [req-99bae36b-6634-48eb-85c8-adb5bc707e60 req-d37c1eaa-8424-4196-9aee-0cc954534677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.249 253542 DEBUG oslo_concurrency.lockutils [req-99bae36b-6634-48eb-85c8-adb5bc707e60 req-d37c1eaa-8424-4196-9aee-0cc954534677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.249 253542 DEBUG oslo_concurrency.lockutils [req-99bae36b-6634-48eb-85c8-adb5bc707e60 req-d37c1eaa-8424-4196-9aee-0cc954534677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.249 253542 DEBUG nova.compute.manager [req-99bae36b-6634-48eb-85c8-adb5bc707e60 req-d37c1eaa-8424-4196-9aee-0cc954534677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Processing event network-vif-plugged-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.250 253542 DEBUG nova.compute.manager [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.254 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:30:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:43.254 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 4521eb38-aa12-4b8a-92f6-3f1a9121fe5c in datapath 824090d9-c62a-4860-847c-47ccffd255d9 unbound from our chassis
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.255 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059443.255079, 21243957-732f-435d-854b-56d6dd7c1ee5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.255 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] VM Resumed (Lifecycle Event)
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.259 253542 INFO nova.virt.libvirt.driver [-] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Instance spawned successfully.
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.259 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:30:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:43.260 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 824090d9-c62a-4860-847c-47ccffd255d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:30:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:43.264 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[05374517-9881-4e17-8905-310ea7c8d620]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:43.265 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9 namespace which is not needed anymore
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.279 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.286 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.289 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.290 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.290 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.291 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.291 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.292 253542 DEBUG nova.virt.libvirt.driver [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.300 253542 INFO nova.virt.libvirt.driver [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Deleting instance files /var/lib/nova/instances/ce1afa72-143f-43f3-9859-df7f3523888a_del
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.301 253542 INFO nova.virt.libvirt.driver [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Deletion of /var/lib/nova/instances/ce1afa72-143f-43f3-9859-df7f3523888a_del complete
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.321 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.362 253542 INFO nova.compute.manager [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Took 1.14 seconds to destroy the instance on the hypervisor.
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.362 253542 DEBUG oslo.service.loopingcall [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.363 253542 DEBUG nova.compute.manager [-] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.364 253542 DEBUG nova.network.neutron [-] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.367 253542 INFO nova.compute.manager [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Took 11.88 seconds to spawn the instance on the hypervisor.
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.367 253542 DEBUG nova.compute.manager [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.432 253542 INFO nova.compute.manager [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Took 12.84 seconds to build instance.
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.444 253542 DEBUG oslo_concurrency.lockutils [None req-c5e8afdf-0ef9-4d7b-89b7-8fd08457e293 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lock "21243957-732f-435d-854b-56d6dd7c1ee5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:43 compute-0 neutron-haproxy-ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9[299807]: [NOTICE]   (299811) : haproxy version is 2.8.14-c23fe91
Nov 25 08:30:43 compute-0 neutron-haproxy-ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9[299807]: [NOTICE]   (299811) : path to executable is /usr/sbin/haproxy
Nov 25 08:30:43 compute-0 neutron-haproxy-ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9[299807]: [WARNING]  (299811) : Exiting Master process...
Nov 25 08:30:43 compute-0 neutron-haproxy-ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9[299807]: [WARNING]  (299811) : Exiting Master process...
Nov 25 08:30:43 compute-0 neutron-haproxy-ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9[299807]: [ALERT]    (299811) : Current worker (299813) exited with code 143 (Terminated)
Nov 25 08:30:43 compute-0 neutron-haproxy-ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9[299807]: [WARNING]  (299811) : All workers exited. Exiting... (0)
Nov 25 08:30:43 compute-0 systemd[1]: libpod-8ff12cb12cbe19feec9a0eb4f56ea43d873f8344ae0dbda5484502a526892a37.scope: Deactivated successfully.
Nov 25 08:30:43 compute-0 podman[301230]: 2025-11-25 08:30:43.488149562 +0000 UTC m=+0.146281284 container died 8ff12cb12cbe19feec9a0eb4f56ea43d873f8344ae0dbda5484502a526892a37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ff12cb12cbe19feec9a0eb4f56ea43d873f8344ae0dbda5484502a526892a37-userdata-shm.mount: Deactivated successfully.
Nov 25 08:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-23b87c0f7057e314a671ce4e484b7e89dc62dc70e55daf171ffb4199d529243f-merged.mount: Deactivated successfully.
Nov 25 08:30:43 compute-0 ceph-mon[75015]: pgmap v1421: 321 pgs: 321 active+clean; 213 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 08:30:43 compute-0 podman[301230]: 2025-11-25 08:30:43.608423536 +0000 UTC m=+0.266554778 container cleanup 8ff12cb12cbe19feec9a0eb4f56ea43d873f8344ae0dbda5484502a526892a37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:30:43 compute-0 systemd[1]: libpod-conmon-8ff12cb12cbe19feec9a0eb4f56ea43d873f8344ae0dbda5484502a526892a37.scope: Deactivated successfully.
Nov 25 08:30:43 compute-0 podman[301256]: 2025-11-25 08:30:43.69970867 +0000 UTC m=+0.054919420 container remove 8ff12cb12cbe19feec9a0eb4f56ea43d873f8344ae0dbda5484502a526892a37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 08:30:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:43.706 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2017f650-fb6a-44f8-9e52-563a6b580b67]: (4, ('Tue Nov 25 08:30:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9 (8ff12cb12cbe19feec9a0eb4f56ea43d873f8344ae0dbda5484502a526892a37)\n8ff12cb12cbe19feec9a0eb4f56ea43d873f8344ae0dbda5484502a526892a37\nTue Nov 25 08:30:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9 (8ff12cb12cbe19feec9a0eb4f56ea43d873f8344ae0dbda5484502a526892a37)\n8ff12cb12cbe19feec9a0eb4f56ea43d873f8344ae0dbda5484502a526892a37\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:43.708 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1415ae3c-90f4-4f10-8a28-d083fb8eef04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:43.709 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap824090d9-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:43 compute-0 kernel: tap824090d9-c0: left promiscuous mode
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.712 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.728 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:43.731 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf53053-3fc0-4b71-92cb-d30e24668aa5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.744 253542 DEBUG nova.compute.manager [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-plugged-97c308c5-90af-4f05-a10a-723dc57687cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.744 253542 DEBUG oslo_concurrency.lockutils [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.745 253542 DEBUG oslo_concurrency.lockutils [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.745 253542 DEBUG oslo_concurrency.lockutils [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.745 253542 DEBUG nova.compute.manager [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Processing event network-vif-plugged-97c308c5-90af-4f05-a10a-723dc57687cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.746 253542 DEBUG nova.compute.manager [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-plugged-97c308c5-90af-4f05-a10a-723dc57687cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.746 253542 DEBUG oslo_concurrency.lockutils [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.747 253542 DEBUG oslo_concurrency.lockutils [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.747 253542 DEBUG oslo_concurrency.lockutils [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.747 253542 DEBUG nova.compute.manager [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] No event matching network-vif-plugged-97c308c5-90af-4f05-a10a-723dc57687cf in dict_keys([('network-vif-plugged', '4205ccc7-d2d4-4623-a44a-9ee072d2f2a1')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.749 253542 WARNING nova.compute.manager [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received unexpected event network-vif-plugged-97c308c5-90af-4f05-a10a-723dc57687cf for instance with vm_state building and task_state spawning.
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.749 253542 DEBUG nova.compute.manager [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-plugged-4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.750 253542 DEBUG oslo_concurrency.lockutils [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.751 253542 DEBUG oslo_concurrency.lockutils [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:43.750 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cec94f6c-fa5d-408a-adaf-5e912960b2f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.751 253542 DEBUG oslo_concurrency.lockutils [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.752 253542 DEBUG nova.compute.manager [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Processing event network-vif-plugged-4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.752 253542 DEBUG nova.compute.manager [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-plugged-4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:43.752 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[73af960a-9d6e-45bc-ac4f-b616d56ae123]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.753 253542 DEBUG oslo_concurrency.lockutils [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.753 253542 DEBUG oslo_concurrency.lockutils [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.753 253542 DEBUG oslo_concurrency.lockutils [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.754 253542 DEBUG nova.compute.manager [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] No waiting events found dispatching network-vif-plugged-4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.754 253542 WARNING nova.compute.manager [req-4347151d-5dbd-495b-abe8-9aba125b5176 req-e5edbf2c-58f0-4fef-8cfc-bfc20cc1f203 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received unexpected event network-vif-plugged-4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 for instance with vm_state building and task_state spawning.
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.755 253542 DEBUG nova.compute.manager [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Instance event wait completed in 17 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.758 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059443.7583215, b06ecfc8-23f8-42c8-8a5a-396257c46b68 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.758 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] VM Resumed (Lifecycle Event)
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.760 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.763 253542 INFO nova.virt.libvirt.driver [-] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Instance spawned successfully.
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.764 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:30:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:43.771 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[13681009-c55b-4706-b57b-140e2a4fa268]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475283, 'reachable_time': 34063, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301272, 'error': None, 'target': 'ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:43.773 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-824090d9-c62a-4860-847c-47ccffd255d9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:30:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:43.773 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[18bd6ed7-0d2c-4e10-be70-f57cf01fd3d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d824090d9\x2dc62a\x2d4860\x2d847c\x2d47ccffd255d9.mount: Deactivated successfully.
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.778 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.785 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.788 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.789 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.789 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.789 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.790 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.791 253542 DEBUG nova.virt.libvirt.driver [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.819 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.872 253542 INFO nova.compute.manager [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Took 33.68 seconds to spawn the instance on the hypervisor.
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.873 253542 DEBUG nova.compute.manager [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.970 253542 INFO nova.compute.manager [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Took 34.55 seconds to build instance.
Nov 25 08:30:43 compute-0 nova_compute[253538]: 2025-11-25 08:30:43.996 253542 DEBUG oslo_concurrency.lockutils [None req-4a70d496-ae5a-423a-be3b-844f566f0c4c ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 34.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:44 compute-0 nova_compute[253538]: 2025-11-25 08:30:44.337 253542 DEBUG nova.network.neutron [-] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:30:44 compute-0 nova_compute[253538]: 2025-11-25 08:30:44.363 253542 INFO nova.compute.manager [-] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Took 1.00 seconds to deallocate network for instance.
Nov 25 08:30:44 compute-0 nova_compute[253538]: 2025-11-25 08:30:44.421 253542 DEBUG oslo_concurrency.lockutils [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:44 compute-0 nova_compute[253538]: 2025-11-25 08:30:44.422 253542 DEBUG oslo_concurrency.lockutils [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:44 compute-0 nova_compute[253538]: 2025-11-25 08:30:44.502 253542 DEBUG oslo_concurrency.processutils [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:30:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/362586139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:30:44 compute-0 nova_compute[253538]: 2025-11-25 08:30:44.970 253542 DEBUG oslo_concurrency.processutils [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:44 compute-0 nova_compute[253538]: 2025-11-25 08:30:44.976 253542 DEBUG nova.compute.provider_tree [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:30:44 compute-0 nova_compute[253538]: 2025-11-25 08:30:44.995 253542 DEBUG nova.scheduler.client.report [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:30:45 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/362586139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.021 253542 DEBUG oslo_concurrency.lockutils [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.048 253542 INFO nova.scheduler.client.report [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Deleted allocations for instance ce1afa72-143f-43f3-9859-df7f3523888a
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.143 253542 DEBUG oslo_concurrency.lockutils [None req-cedc77f0-f09d-48b5-aec1-c5a6bb2cf299 cc7f739420484ab696255ef9cdfcc581 e37021edb08c44fbb7ea019842489f3c - - default default] Lock "ce1afa72-143f-43f3-9859-df7f3523888a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 151 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.402 253542 DEBUG nova.compute.manager [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Received event network-vif-plugged-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.403 253542 DEBUG oslo_concurrency.lockutils [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.403 253542 DEBUG oslo_concurrency.lockutils [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.404 253542 DEBUG oslo_concurrency.lockutils [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.404 253542 DEBUG nova.compute.manager [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] No waiting events found dispatching network-vif-plugged-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.405 253542 WARNING nova.compute.manager [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Received unexpected event network-vif-plugged-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 for instance with vm_state active and task_state None.
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.405 253542 DEBUG nova.compute.manager [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Received event network-vif-unplugged-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.405 253542 DEBUG oslo_concurrency.lockutils [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.406 253542 DEBUG oslo_concurrency.lockutils [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.406 253542 DEBUG oslo_concurrency.lockutils [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.406 253542 DEBUG nova.compute.manager [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] No waiting events found dispatching network-vif-unplugged-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.407 253542 WARNING nova.compute.manager [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Received unexpected event network-vif-unplugged-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c for instance with vm_state deleted and task_state None.
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.407 253542 DEBUG nova.compute.manager [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Received event network-vif-plugged-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.407 253542 DEBUG oslo_concurrency.lockutils [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.408 253542 DEBUG oslo_concurrency.lockutils [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.408 253542 DEBUG oslo_concurrency.lockutils [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce1afa72-143f-43f3-9859-df7f3523888a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.408 253542 DEBUG nova.compute.manager [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] No waiting events found dispatching network-vif-plugged-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:30:45 compute-0 nova_compute[253538]: 2025-11-25 08:30:45.409 253542 WARNING nova.compute.manager [req-668cec16-7ddc-44bf-a1e9-c1e478db35de req-51968b67-8379-4e49-a459-d055aad00edb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Received unexpected event network-vif-plugged-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c for instance with vm_state deleted and task_state None.
Nov 25 08:30:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:30:46 compute-0 ceph-mon[75015]: pgmap v1422: 321 pgs: 321 active+clean; 151 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.231 253542 DEBUG nova.compute.manager [req-f65c686a-5aa2-4220-a69d-278f7786b86d req-becb3f2c-2d1e-4db7-aa9d-1a6f1386a3fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Received event network-vif-deleted-4521eb38-aa12-4b8a-92f6-3f1a9121fe5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.529 253542 DEBUG oslo_concurrency.lockutils [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.530 253542 DEBUG oslo_concurrency.lockutils [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.531 253542 DEBUG oslo_concurrency.lockutils [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.531 253542 DEBUG oslo_concurrency.lockutils [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.531 253542 DEBUG oslo_concurrency.lockutils [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.533 253542 INFO nova.compute.manager [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Terminating instance
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.534 253542 DEBUG nova.compute.manager [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:30:46 compute-0 kernel: tapa0054aae-e1 (unregistering): left promiscuous mode
Nov 25 08:30:46 compute-0 NetworkManager[48915]: <info>  [1764059446.5786] device (tapa0054aae-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.595 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 ovn_controller[152859]: 2025-11-25T08:30:46Z|00305|binding|INFO|Releasing lport a0054aae-e1e0-48e7-87eb-6836c8f5d8ec from this chassis (sb_readonly=0)
Nov 25 08:30:46 compute-0 ovn_controller[152859]: 2025-11-25T08:30:46Z|00306|binding|INFO|Setting lport a0054aae-e1e0-48e7-87eb-6836c8f5d8ec down in Southbound
Nov 25 08:30:46 compute-0 ovn_controller[152859]: 2025-11-25T08:30:46Z|00307|binding|INFO|Removing iface tapa0054aae-e1 ovn-installed in OVS
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.601 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 kernel: tap97c308c5-90 (unregistering): left promiscuous mode
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.606 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:e3:d5 10.100.0.180'], port_security=['fa:16:3e:36:e3:d5 10.100.0.180'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.180/24', 'neutron:device_id': 'b06ecfc8-23f8-42c8-8a5a-396257c46b68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36151320-921f-4006-be79-7a57c3a2b422', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c50e3969ac5b472b8defc2e5cca2901a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b2ea7a0-7c1c-42c6-b6e3-ba1d0d87a9c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d20b1cad-00b2-45d3-9840-5880fb3efff3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a0054aae-e1e0-48e7-87eb-6836c8f5d8ec) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.607 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a0054aae-e1e0-48e7-87eb-6836c8f5d8ec in datapath 36151320-921f-4006-be79-7a57c3a2b422 unbound from our chassis
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.609 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 36151320-921f-4006-be79-7a57c3a2b422
Nov 25 08:30:46 compute-0 NetworkManager[48915]: <info>  [1764059446.6117] device (tap97c308c5-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.621 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.630 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ab624077-9d87-4584-aabe-92a6ef0c5052]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.639 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 kernel: tap4205ccc7-d2 (unregistering): left promiscuous mode
Nov 25 08:30:46 compute-0 ovn_controller[152859]: 2025-11-25T08:30:46Z|00308|binding|INFO|Releasing lport 97c308c5-90af-4f05-a10a-723dc57687cf from this chassis (sb_readonly=0)
Nov 25 08:30:46 compute-0 ovn_controller[152859]: 2025-11-25T08:30:46Z|00309|binding|INFO|Setting lport 97c308c5-90af-4f05-a10a-723dc57687cf down in Southbound
Nov 25 08:30:46 compute-0 ovn_controller[152859]: 2025-11-25T08:30:46Z|00310|binding|INFO|Removing iface tap97c308c5-90 ovn-installed in OVS
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.642 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 NetworkManager[48915]: <info>  [1764059446.6461] device (tap4205ccc7-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.646 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:6e:e0 10.100.1.247'], port_security=['fa:16:3e:5b:6e:e0 10.100.1.247'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.247/24', 'neutron:device_id': 'b06ecfc8-23f8-42c8-8a5a-396257c46b68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ffe1357e-4705-49bd-8af4-1a224647f586', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c50e3969ac5b472b8defc2e5cca2901a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b2ea7a0-7c1c-42c6-b6e3-ba1d0d87a9c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b2c610e-45ce-4148-ba5e-2ba066ffe335, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=97c308c5-90af-4f05-a10a-723dc57687cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.671 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d00d4a1e-4c6f-4779-8a40-28a57766ae50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.675 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[63cd2cc3-cf6d-4ac0-8f39-c9268d0f0dd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:46 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000029.scope: Deactivated successfully.
Nov 25 08:30:46 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000029.scope: Consumed 3.331s CPU time.
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.689 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 ovn_controller[152859]: 2025-11-25T08:30:46Z|00311|binding|INFO|Releasing lport 4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 from this chassis (sb_readonly=0)
Nov 25 08:30:46 compute-0 ovn_controller[152859]: 2025-11-25T08:30:46Z|00312|binding|INFO|Setting lport 4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 down in Southbound
Nov 25 08:30:46 compute-0 ovn_controller[152859]: 2025-11-25T08:30:46Z|00313|binding|INFO|Removing iface tap4205ccc7-d2 ovn-installed in OVS
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.691 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 systemd-machined[215790]: Machine qemu-46-instance-00000029 terminated.
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.696 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:7b:1e 10.100.0.186'], port_security=['fa:16:3e:cd:7b:1e 10.100.0.186'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.186/24', 'neutron:device_id': 'b06ecfc8-23f8-42c8-8a5a-396257c46b68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36151320-921f-4006-be79-7a57c3a2b422', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c50e3969ac5b472b8defc2e5cca2901a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b2ea7a0-7c1c-42c6-b6e3-ba1d0d87a9c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d20b1cad-00b2-45d3-9840-5880fb3efff3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=4205ccc7-d2d4-4623-a44a-9ee072d2f2a1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.704 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[04a1740d-7eca-496d-8827-fa1a2e438dda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.723 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[796f5087-f011-4bd3-9d34-14b04c650e85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap36151320-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:0d:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478017, 'reachable_time': 35452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301316, 'error': None, 'target': 'ovnmeta-36151320-921f-4006-be79-7a57c3a2b422', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.737 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[be34d912-c657-48a1-a3f1-9b2f610e9d9f]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap36151320-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478030, 'tstamp': 478030}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301317, 'error': None, 'target': 'ovnmeta-36151320-921f-4006-be79-7a57c3a2b422', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap36151320-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478033, 'tstamp': 478033}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301317, 'error': None, 'target': 'ovnmeta-36151320-921f-4006-be79-7a57c3a2b422', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.738 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36151320-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.739 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.748 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.749 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36151320-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.750 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.750 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap36151320-90, col_values=(('external_ids', {'iface-id': '656dbfd7-9cd2-48f5-a6cb-8c6b2d5a1323'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.750 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.751 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 97c308c5-90af-4f05-a10a-723dc57687cf in datapath ffe1357e-4705-49bd-8af4-1a224647f586 unbound from our chassis
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.752 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ffe1357e-4705-49bd-8af4-1a224647f586, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:30:46 compute-0 NetworkManager[48915]: <info>  [1764059446.7550] manager: (tapa0054aae-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/149)
Nov 25 08:30:46 compute-0 systemd-udevd[301301]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.755 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1929ef4d-b15a-49a0-9193-083478be5fdc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:46.756 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586 namespace which is not needed anymore
Nov 25 08:30:46 compute-0 NetworkManager[48915]: <info>  [1764059446.7839] manager: (tap4205ccc7-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/150)
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.800 253542 INFO nova.virt.libvirt.driver [-] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Instance destroyed successfully.
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.801 253542 DEBUG nova.objects.instance [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lazy-loading 'resources' on Instance uuid b06ecfc8-23f8-42c8-8a5a-396257c46b68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.813 253542 DEBUG nova.virt.libvirt.vif [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:30:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-123732340',display_name='tempest-ServersTestMultiNic-server-123732340',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-123732340',id=41,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:30:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c50e3969ac5b472b8defc2e5cca2901a',ramdisk_id='',reservation_id='r-ajc9bizs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-272267582',owner_user_name='tempest-ServersTestMultiNic-272267582-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:30:43Z,user_data=None,user_id='ccf1e57f59084541821b20089873a6ac',uuid=b06ecfc8-23f8-42c8-8a5a-396257c46b68,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "address": "fa:16:3e:36:e3:d5", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0054aae-e1", "ovs_interfaceid": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.814 253542 DEBUG nova.network.os_vif_util [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converting VIF {"id": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "address": "fa:16:3e:36:e3:d5", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0054aae-e1", "ovs_interfaceid": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.815 253542 DEBUG nova.network.os_vif_util [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:e3:d5,bridge_name='br-int',has_traffic_filtering=True,id=a0054aae-e1e0-48e7-87eb-6836c8f5d8ec,network=Network(36151320-921f-4006-be79-7a57c3a2b422),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0054aae-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.815 253542 DEBUG os_vif [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:e3:d5,bridge_name='br-int',has_traffic_filtering=True,id=a0054aae-e1e0-48e7-87eb-6836c8f5d8ec,network=Network(36151320-921f-4006-be79-7a57c3a2b422),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0054aae-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.817 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.817 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0054aae-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.818 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.820 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.825 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.827 253542 INFO os_vif [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:e3:d5,bridge_name='br-int',has_traffic_filtering=True,id=a0054aae-e1e0-48e7-87eb-6836c8f5d8ec,network=Network(36151320-921f-4006-be79-7a57c3a2b422),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0054aae-e1')
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.828 253542 DEBUG nova.virt.libvirt.vif [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:30:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-123732340',display_name='tempest-ServersTestMultiNic-server-123732340',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-123732340',id=41,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:30:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c50e3969ac5b472b8defc2e5cca2901a',ramdisk_id='',reservation_id='r-ajc9bizs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-272267582',owner_user_name='tempest-ServersTestMultiNic-272267582-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:30:43Z,user_data=None,user_id='ccf1e57f59084541821b20089873a6ac',uuid=b06ecfc8-23f8-42c8-8a5a-396257c46b68,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97c308c5-90af-4f05-a10a-723dc57687cf", "address": "fa:16:3e:5b:6e:e0", "network": {"id": "ffe1357e-4705-49bd-8af4-1a224647f586", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1434780146", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.247", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c308c5-90", "ovs_interfaceid": "97c308c5-90af-4f05-a10a-723dc57687cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.828 253542 DEBUG nova.network.os_vif_util [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converting VIF {"id": "97c308c5-90af-4f05-a10a-723dc57687cf", "address": "fa:16:3e:5b:6e:e0", "network": {"id": "ffe1357e-4705-49bd-8af4-1a224647f586", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1434780146", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.247", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c308c5-90", "ovs_interfaceid": "97c308c5-90af-4f05-a10a-723dc57687cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.829 253542 DEBUG nova.network.os_vif_util [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:6e:e0,bridge_name='br-int',has_traffic_filtering=True,id=97c308c5-90af-4f05-a10a-723dc57687cf,network=Network(ffe1357e-4705-49bd-8af4-1a224647f586),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97c308c5-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.829 253542 DEBUG os_vif [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:6e:e0,bridge_name='br-int',has_traffic_filtering=True,id=97c308c5-90af-4f05-a10a-723dc57687cf,network=Network(ffe1357e-4705-49bd-8af4-1a224647f586),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97c308c5-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.830 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.830 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97c308c5-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.831 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.833 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.835 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.837 253542 INFO os_vif [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:6e:e0,bridge_name='br-int',has_traffic_filtering=True,id=97c308c5-90af-4f05-a10a-723dc57687cf,network=Network(ffe1357e-4705-49bd-8af4-1a224647f586),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97c308c5-90')
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.838 253542 DEBUG nova.virt.libvirt.vif [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:30:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-123732340',display_name='tempest-ServersTestMultiNic-server-123732340',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-123732340',id=41,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:30:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c50e3969ac5b472b8defc2e5cca2901a',ramdisk_id='',reservation_id='r-ajc9bizs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-272267582',owner_user_name='tempest-ServersTestMultiNic-272267582-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:30:43Z,user_data=None,user_id='ccf1e57f59084541821b20089873a6ac',uuid=b06ecfc8-23f8-42c8-8a5a-396257c46b68,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "address": "fa:16:3e:cd:7b:1e", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4205ccc7-d2", "ovs_interfaceid": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.838 253542 DEBUG nova.network.os_vif_util [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converting VIF {"id": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "address": "fa:16:3e:cd:7b:1e", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4205ccc7-d2", "ovs_interfaceid": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.838 253542 DEBUG nova.network.os_vif_util [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7b:1e,bridge_name='br-int',has_traffic_filtering=True,id=4205ccc7-d2d4-4623-a44a-9ee072d2f2a1,network=Network(36151320-921f-4006-be79-7a57c3a2b422),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4205ccc7-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.839 253542 DEBUG os_vif [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7b:1e,bridge_name='br-int',has_traffic_filtering=True,id=4205ccc7-d2d4-4623-a44a-9ee072d2f2a1,network=Network(36151320-921f-4006-be79-7a57c3a2b422),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4205ccc7-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.839 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.840 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4205ccc7-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.841 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.843 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:30:46 compute-0 nova_compute[253538]: 2025-11-25 08:30:46.844 253542 INFO os_vif [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7b:1e,bridge_name='br-int',has_traffic_filtering=True,id=4205ccc7-d2d4-4623-a44a-9ee072d2f2a1,network=Network(36151320-921f-4006-be79-7a57c3a2b422),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4205ccc7-d2')
Nov 25 08:30:46 compute-0 neutron-haproxy-ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586[300627]: [NOTICE]   (300631) : haproxy version is 2.8.14-c23fe91
Nov 25 08:30:46 compute-0 neutron-haproxy-ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586[300627]: [NOTICE]   (300631) : path to executable is /usr/sbin/haproxy
Nov 25 08:30:46 compute-0 neutron-haproxy-ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586[300627]: [WARNING]  (300631) : Exiting Master process...
Nov 25 08:30:46 compute-0 neutron-haproxy-ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586[300627]: [ALERT]    (300631) : Current worker (300633) exited with code 143 (Terminated)
Nov 25 08:30:46 compute-0 neutron-haproxy-ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586[300627]: [WARNING]  (300631) : All workers exited. Exiting... (0)
Nov 25 08:30:46 compute-0 systemd[1]: libpod-2ca82038d1f1ac9b06002c1e928a87bade1d8c1134e6ab0d4d6c40a803c4e6c4.scope: Deactivated successfully.
Nov 25 08:30:46 compute-0 podman[301370]: 2025-11-25 08:30:46.90096055 +0000 UTC m=+0.052115731 container died 2ca82038d1f1ac9b06002c1e928a87bade1d8c1134e6ab0d4d6c40a803c4e6c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 08:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ca82038d1f1ac9b06002c1e928a87bade1d8c1134e6ab0d4d6c40a803c4e6c4-userdata-shm.mount: Deactivated successfully.
Nov 25 08:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c06c82035632b1860d313a3b83f0f0e10bd79834170fde3ef606cc0809a7274b-merged.mount: Deactivated successfully.
Nov 25 08:30:46 compute-0 podman[301370]: 2025-11-25 08:30:46.990509625 +0000 UTC m=+0.141664846 container cleanup 2ca82038d1f1ac9b06002c1e928a87bade1d8c1134e6ab0d4d6c40a803c4e6c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:30:47 compute-0 systemd[1]: libpod-conmon-2ca82038d1f1ac9b06002c1e928a87bade1d8c1134e6ab0d4d6c40a803c4e6c4.scope: Deactivated successfully.
Nov 25 08:30:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 134 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 684 KiB/s wr, 143 op/s
Nov 25 08:30:47 compute-0 podman[301418]: 2025-11-25 08:30:47.506289721 +0000 UTC m=+0.476753248 container remove 2ca82038d1f1ac9b06002c1e928a87bade1d8c1134e6ab0d4d6c40a803c4e6c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.514 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7c9e01b7-d0d8-4b49-94db-5c8ee7fb9582]: (4, ('Tue Nov 25 08:30:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586 (2ca82038d1f1ac9b06002c1e928a87bade1d8c1134e6ab0d4d6c40a803c4e6c4)\n2ca82038d1f1ac9b06002c1e928a87bade1d8c1134e6ab0d4d6c40a803c4e6c4\nTue Nov 25 08:30:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586 (2ca82038d1f1ac9b06002c1e928a87bade1d8c1134e6ab0d4d6c40a803c4e6c4)\n2ca82038d1f1ac9b06002c1e928a87bade1d8c1134e6ab0d4d6c40a803c4e6c4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.516 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[14cad7e3-b478-4256-86e3-a5cea027fb02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.518 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapffe1357e-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:47 compute-0 nova_compute[253538]: 2025-11-25 08:30:47.520 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:47 compute-0 kernel: tapffe1357e-40: left promiscuous mode
Nov 25 08:30:47 compute-0 nova_compute[253538]: 2025-11-25 08:30:47.565 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.573 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7e6395c7-f48a-4fa7-9ce0-7e924e4d1d83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.589 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9bdfba2b-9d7d-489e-867c-f0ef97b5bb17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.590 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[92d9e46c-d9f1-4774-9a64-0a448d8b9378]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.610 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[61334caa-68e5-47e9-839c-901c2f7a5a09]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478097, 'reachable_time': 18460, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301433, 'error': None, 'target': 'ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.612 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ffe1357e-4705-49bd-8af4-1a224647f586 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.612 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a99732cf-e9d7-422f-9264-681a2914477a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.613 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 in datapath 36151320-921f-4006-be79-7a57c3a2b422 unbound from our chassis
Nov 25 08:30:47 compute-0 systemd[1]: run-netns-ovnmeta\x2dffe1357e\x2d4705\x2d49bd\x2d8af4\x2d1a224647f586.mount: Deactivated successfully.
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.616 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 36151320-921f-4006-be79-7a57c3a2b422, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.617 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6506d0-4ec4-4a98-a0fc-a0514712d5d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.617 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-36151320-921f-4006-be79-7a57c3a2b422 namespace which is not needed anymore
Nov 25 08:30:47 compute-0 neutron-haproxy-ovnmeta-36151320-921f-4006-be79-7a57c3a2b422[300553]: [NOTICE]   (300557) : haproxy version is 2.8.14-c23fe91
Nov 25 08:30:47 compute-0 neutron-haproxy-ovnmeta-36151320-921f-4006-be79-7a57c3a2b422[300553]: [NOTICE]   (300557) : path to executable is /usr/sbin/haproxy
Nov 25 08:30:47 compute-0 neutron-haproxy-ovnmeta-36151320-921f-4006-be79-7a57c3a2b422[300553]: [WARNING]  (300557) : Exiting Master process...
Nov 25 08:30:47 compute-0 neutron-haproxy-ovnmeta-36151320-921f-4006-be79-7a57c3a2b422[300553]: [ALERT]    (300557) : Current worker (300559) exited with code 143 (Terminated)
Nov 25 08:30:47 compute-0 neutron-haproxy-ovnmeta-36151320-921f-4006-be79-7a57c3a2b422[300553]: [WARNING]  (300557) : All workers exited. Exiting... (0)
Nov 25 08:30:47 compute-0 systemd[1]: libpod-e52b22bf526a0b7826cc00df7d0a5144f5f5ecc60330e39ca12eb90b90524f00.scope: Deactivated successfully.
Nov 25 08:30:47 compute-0 podman[301453]: 2025-11-25 08:30:47.743374514 +0000 UTC m=+0.044135921 container died e52b22bf526a0b7826cc00df7d0a5144f5f5ecc60330e39ca12eb90b90524f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-36151320-921f-4006-be79-7a57c3a2b422, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e52b22bf526a0b7826cc00df7d0a5144f5f5ecc60330e39ca12eb90b90524f00-userdata-shm.mount: Deactivated successfully.
Nov 25 08:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f73ad405296ac57cc5ec61b9c7623922a009183345e523712d1eb9c093ad9c37-merged.mount: Deactivated successfully.
Nov 25 08:30:47 compute-0 podman[301453]: 2025-11-25 08:30:47.777852137 +0000 UTC m=+0.078613544 container cleanup e52b22bf526a0b7826cc00df7d0a5144f5f5ecc60330e39ca12eb90b90524f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-36151320-921f-4006-be79-7a57c3a2b422, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:30:47 compute-0 systemd[1]: libpod-conmon-e52b22bf526a0b7826cc00df7d0a5144f5f5ecc60330e39ca12eb90b90524f00.scope: Deactivated successfully.
Nov 25 08:30:47 compute-0 podman[301478]: 2025-11-25 08:30:47.854607509 +0000 UTC m=+0.054653692 container remove e52b22bf526a0b7826cc00df7d0a5144f5f5ecc60330e39ca12eb90b90524f00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-36151320-921f-4006-be79-7a57c3a2b422, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:30:47 compute-0 nova_compute[253538]: 2025-11-25 08:30:47.855 253542 INFO nova.virt.libvirt.driver [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Deleting instance files /var/lib/nova/instances/b06ecfc8-23f8-42c8-8a5a-396257c46b68_del
Nov 25 08:30:47 compute-0 nova_compute[253538]: 2025-11-25 08:30:47.856 253542 INFO nova.virt.libvirt.driver [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Deletion of /var/lib/nova/instances/b06ecfc8-23f8-42c8-8a5a-396257c46b68_del complete
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.863 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6aaf6fc-eb5b-4dfe-ad12-e76662089964]: (4, ('Tue Nov 25 08:30:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-36151320-921f-4006-be79-7a57c3a2b422 (e52b22bf526a0b7826cc00df7d0a5144f5f5ecc60330e39ca12eb90b90524f00)\ne52b22bf526a0b7826cc00df7d0a5144f5f5ecc60330e39ca12eb90b90524f00\nTue Nov 25 08:30:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-36151320-921f-4006-be79-7a57c3a2b422 (e52b22bf526a0b7826cc00df7d0a5144f5f5ecc60330e39ca12eb90b90524f00)\ne52b22bf526a0b7826cc00df7d0a5144f5f5ecc60330e39ca12eb90b90524f00\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.865 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[366467cd-c133-48b6-b789-3bcc821fb267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.866 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36151320-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:30:47 compute-0 kernel: tap36151320-90: left promiscuous mode
Nov 25 08:30:47 compute-0 nova_compute[253538]: 2025-11-25 08:30:47.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.874 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b001e4-d4cb-4fad-9119-641e4465c51c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:47 compute-0 nova_compute[253538]: 2025-11-25 08:30:47.887 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.891 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79497819-0c77-4f13-bcb4-bb7f080a4b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.892 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[132709e5-d9ce-4c00-a5b5-079dc24872df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:47 compute-0 nova_compute[253538]: 2025-11-25 08:30:47.902 253542 INFO nova.compute.manager [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Took 1.37 seconds to destroy the instance on the hypervisor.
Nov 25 08:30:47 compute-0 nova_compute[253538]: 2025-11-25 08:30:47.903 253542 DEBUG oslo.service.loopingcall [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:30:47 compute-0 nova_compute[253538]: 2025-11-25 08:30:47.903 253542 DEBUG nova.compute.manager [-] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:30:47 compute-0 nova_compute[253538]: 2025-11-25 08:30:47.903 253542 DEBUG nova.network.neutron [-] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.906 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3c6fdbec-8e87-4aef-a97c-6465c4344c1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478010, 'reachable_time': 36629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301496, 'error': None, 'target': 'ovnmeta-36151320-921f-4006-be79-7a57c3a2b422', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.907 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-36151320-921f-4006-be79-7a57c3a2b422 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:30:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:30:47.908 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[abc6d8bd-bb40-4257-95e5-c87a3b594ffc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:30:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d36151320\x2d921f\x2d4006\x2dbe79\x2d7a57c3a2b422.mount: Deactivated successfully.
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.337 253542 DEBUG nova.compute.manager [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-unplugged-a0054aae-e1e0-48e7-87eb-6836c8f5d8ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.337 253542 DEBUG oslo_concurrency.lockutils [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.338 253542 DEBUG oslo_concurrency.lockutils [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.338 253542 DEBUG oslo_concurrency.lockutils [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.338 253542 DEBUG nova.compute.manager [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] No waiting events found dispatching network-vif-unplugged-a0054aae-e1e0-48e7-87eb-6836c8f5d8ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.339 253542 DEBUG nova.compute.manager [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-unplugged-a0054aae-e1e0-48e7-87eb-6836c8f5d8ec for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.339 253542 DEBUG nova.compute.manager [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-plugged-a0054aae-e1e0-48e7-87eb-6836c8f5d8ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.339 253542 DEBUG oslo_concurrency.lockutils [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.339 253542 DEBUG oslo_concurrency.lockutils [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.340 253542 DEBUG oslo_concurrency.lockutils [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.340 253542 DEBUG nova.compute.manager [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] No waiting events found dispatching network-vif-plugged-a0054aae-e1e0-48e7-87eb-6836c8f5d8ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.340 253542 WARNING nova.compute.manager [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received unexpected event network-vif-plugged-a0054aae-e1e0-48e7-87eb-6836c8f5d8ec for instance with vm_state active and task_state deleting.
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.341 253542 DEBUG nova.compute.manager [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Received event network-changed-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.341 253542 DEBUG nova.compute.manager [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Refreshing instance network info cache due to event network-changed-b1fe2a2b-159d-40ea-a88d-1a311f1e7702. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.341 253542 DEBUG oslo_concurrency.lockutils [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.342 253542 DEBUG oslo_concurrency.lockutils [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:30:48 compute-0 nova_compute[253538]: 2025-11-25 08:30:48.342 253542 DEBUG nova.network.neutron [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Refreshing network info cache for port b1fe2a2b-159d-40ea-a88d-1a311f1e7702 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:30:48 compute-0 ceph-mon[75015]: pgmap v1423: 321 pgs: 321 active+clean; 134 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 684 KiB/s wr, 143 op/s
Nov 25 08:30:48 compute-0 sshd-session[301497]: Invalid user loginuser from 193.32.162.151 port 47848
Nov 25 08:30:48 compute-0 sshd-session[301497]: Connection closed by invalid user loginuser 193.32.162.151 port 47848 [preauth]
Nov 25 08:30:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 123 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 18 KiB/s wr, 143 op/s
Nov 25 08:30:50 compute-0 ceph-mon[75015]: pgmap v1424: 321 pgs: 321 active+clean; 123 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 18 KiB/s wr, 143 op/s
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.222 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.432 253542 DEBUG nova.network.neutron [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Updated VIF entry in instance network info cache for port b1fe2a2b-159d-40ea-a88d-1a311f1e7702. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.433 253542 DEBUG nova.network.neutron [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Updating instance_info_cache with network_info: [{"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.467 253542 DEBUG oslo_concurrency.lockutils [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.468 253542 DEBUG nova.compute.manager [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-unplugged-97c308c5-90af-4f05-a10a-723dc57687cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.468 253542 DEBUG oslo_concurrency.lockutils [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.469 253542 DEBUG oslo_concurrency.lockutils [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.469 253542 DEBUG oslo_concurrency.lockutils [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.470 253542 DEBUG nova.compute.manager [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] No waiting events found dispatching network-vif-unplugged-97c308c5-90af-4f05-a10a-723dc57687cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.470 253542 DEBUG nova.compute.manager [req-48aeca56-fdb5-45b7-b8ce-29059d0df477 req-5bce762e-5160-468f-9938-744d5e857837 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-unplugged-97c308c5-90af-4f05-a10a-723dc57687cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.597 253542 DEBUG nova.compute.manager [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-plugged-97c308c5-90af-4f05-a10a-723dc57687cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.598 253542 DEBUG oslo_concurrency.lockutils [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.598 253542 DEBUG oslo_concurrency.lockutils [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.599 253542 DEBUG oslo_concurrency.lockutils [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.599 253542 DEBUG nova.compute.manager [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] No waiting events found dispatching network-vif-plugged-97c308c5-90af-4f05-a10a-723dc57687cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.600 253542 WARNING nova.compute.manager [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received unexpected event network-vif-plugged-97c308c5-90af-4f05-a10a-723dc57687cf for instance with vm_state active and task_state deleting.
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.600 253542 DEBUG nova.compute.manager [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-deleted-97c308c5-90af-4f05-a10a-723dc57687cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.601 253542 INFO nova.compute.manager [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Neutron deleted interface 97c308c5-90af-4f05-a10a-723dc57687cf; detaching it from the instance and deleting it from the info cache
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.601 253542 DEBUG nova.network.neutron [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Updating instance_info_cache with network_info: [{"id": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "address": "fa:16:3e:36:e3:d5", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0054aae-e1", "ovs_interfaceid": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "address": "fa:16:3e:cd:7b:1e", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4205ccc7-d2", "ovs_interfaceid": "4205ccc7-d2d4-4623-a44a-9ee072d2f2a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.641 253542 DEBUG nova.compute.manager [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Detach interface failed, port_id=97c308c5-90af-4f05-a10a-723dc57687cf, reason: Instance b06ecfc8-23f8-42c8-8a5a-396257c46b68 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.642 253542 DEBUG nova.compute.manager [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-unplugged-4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.642 253542 DEBUG oslo_concurrency.lockutils [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.643 253542 DEBUG oslo_concurrency.lockutils [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.643 253542 DEBUG oslo_concurrency.lockutils [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.644 253542 DEBUG nova.compute.manager [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] No waiting events found dispatching network-vif-unplugged-4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.644 253542 DEBUG nova.compute.manager [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-unplugged-4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.645 253542 DEBUG nova.compute.manager [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-plugged-4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.646 253542 DEBUG oslo_concurrency.lockutils [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.646 253542 DEBUG oslo_concurrency.lockutils [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.648 253542 DEBUG oslo_concurrency.lockutils [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.648 253542 DEBUG nova.compute.manager [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] No waiting events found dispatching network-vif-plugged-4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.649 253542 WARNING nova.compute.manager [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received unexpected event network-vif-plugged-4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 for instance with vm_state active and task_state deleting.
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.649 253542 DEBUG nova.compute.manager [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-deleted-4205ccc7-d2d4-4623-a44a-9ee072d2f2a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.649 253542 INFO nova.compute.manager [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Neutron deleted interface 4205ccc7-d2d4-4623-a44a-9ee072d2f2a1; detaching it from the instance and deleting it from the info cache
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.650 253542 DEBUG nova.network.neutron [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Updating instance_info_cache with network_info: [{"id": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "address": "fa:16:3e:36:e3:d5", "network": {"id": "36151320-921f-4006-be79-7a57c3a2b422", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-92739981", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0054aae-e1", "ovs_interfaceid": "a0054aae-e1e0-48e7-87eb-6836c8f5d8ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.674 253542 DEBUG nova.compute.manager [req-2a74bb95-5a75-4911-8f31-bdd2e3f4b10d req-0992d5af-e494-467e-88a0-39cb9f3c4d8a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Detach interface failed, port_id=4205ccc7-d2d4-4623-a44a-9ee072d2f2a1, reason: Instance b06ecfc8-23f8-42c8-8a5a-396257c46b68 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 08:30:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:30:50 compute-0 ovn_controller[152859]: 2025-11-25T08:30:50Z|00314|binding|INFO|Releasing lport c8895407-4b3c-419a-8694-55e3e69b82b8 from this chassis (sb_readonly=0)
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.835 253542 DEBUG nova.network.neutron [-] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.878 253542 INFO nova.compute.manager [-] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Took 2.97 seconds to deallocate network for instance.
Nov 25 08:30:50 compute-0 ovn_controller[152859]: 2025-11-25T08:30:50Z|00315|binding|INFO|Releasing lport c8895407-4b3c-419a-8694-55e3e69b82b8 from this chassis (sb_readonly=0)
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.943 253542 DEBUG oslo_concurrency.lockutils [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.944 253542 DEBUG oslo_concurrency.lockutils [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:30:50 compute-0 nova_compute[253538]: 2025-11-25 08:30:50.946 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:51 compute-0 nova_compute[253538]: 2025-11-25 08:30:51.017 253542 DEBUG oslo_concurrency.processutils [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:30:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 321 active+clean; 103 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 18 KiB/s wr, 186 op/s
Nov 25 08:30:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:30:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3522056250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:30:51 compute-0 nova_compute[253538]: 2025-11-25 08:30:51.460 253542 DEBUG oslo_concurrency.processutils [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:30:51 compute-0 ovn_controller[152859]: 2025-11-25T08:30:51Z|00316|binding|INFO|Releasing lport c8895407-4b3c-419a-8694-55e3e69b82b8 from this chassis (sb_readonly=0)
Nov 25 08:30:51 compute-0 nova_compute[253538]: 2025-11-25 08:30:51.467 253542 DEBUG nova.compute.provider_tree [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:30:51 compute-0 nova_compute[253538]: 2025-11-25 08:30:51.496 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:51 compute-0 nova_compute[253538]: 2025-11-25 08:30:51.507 253542 DEBUG nova.scheduler.client.report [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:30:51 compute-0 nova_compute[253538]: 2025-11-25 08:30:51.534 253542 DEBUG oslo_concurrency.lockutils [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:51 compute-0 nova_compute[253538]: 2025-11-25 08:30:51.560 253542 INFO nova.scheduler.client.report [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Deleted allocations for instance b06ecfc8-23f8-42c8-8a5a-396257c46b68
Nov 25 08:30:51 compute-0 nova_compute[253538]: 2025-11-25 08:30:51.633 253542 DEBUG oslo_concurrency.lockutils [None req-95a57ad9-c13a-4cb3-9b33-976e77055680 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "b06ecfc8-23f8-42c8-8a5a-396257c46b68" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:30:51 compute-0 nova_compute[253538]: 2025-11-25 08:30:51.842 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:52 compute-0 ceph-mon[75015]: pgmap v1425: 321 pgs: 321 active+clean; 103 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 18 KiB/s wr, 186 op/s
Nov 25 08:30:52 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3522056250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:30:52 compute-0 nova_compute[253538]: 2025-11-25 08:30:52.903 253542 DEBUG nova.compute.manager [req-3ad2f647-56a9-4b1e-8ceb-ac3c89291cf3 req-e5abe0ee-40b2-4a8a-a51b-45ff8b4b85b3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Received event network-vif-deleted-a0054aae-e1e0-48e7-87eb-6836c8f5d8ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 88 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 18 KiB/s wr, 191 op/s
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:30:53
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['vms', '.mgr', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'backups']
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:30:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:30:54 compute-0 ceph-mon[75015]: pgmap v1426: 321 pgs: 321 active+clean; 88 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 18 KiB/s wr, 191 op/s
Nov 25 08:30:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 88 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 18 KiB/s wr, 194 op/s
Nov 25 08:30:55 compute-0 nova_compute[253538]: 2025-11-25 08:30:55.272 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:30:56 compute-0 sshd-session[301521]: banner exchange: Connection from 139.59.146.234 port 54836: invalid format
Nov 25 08:30:56 compute-0 sshd-session[301522]: banner exchange: Connection from 139.59.146.234 port 54852: invalid format
Nov 25 08:30:56 compute-0 ceph-mon[75015]: pgmap v1427: 321 pgs: 321 active+clean; 88 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 18 KiB/s wr, 194 op/s
Nov 25 08:30:56 compute-0 nova_compute[253538]: 2025-11-25 08:30:56.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 91 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 273 KiB/s wr, 136 op/s
Nov 25 08:30:57 compute-0 nova_compute[253538]: 2025-11-25 08:30:57.467 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059442.4664946, ce1afa72-143f-43f3-9859-df7f3523888a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:30:57 compute-0 nova_compute[253538]: 2025-11-25 08:30:57.468 253542 INFO nova.compute.manager [-] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] VM Stopped (Lifecycle Event)
Nov 25 08:30:57 compute-0 nova_compute[253538]: 2025-11-25 08:30:57.591 253542 DEBUG nova.compute.manager [None req-04aa3285-ac6f-424f-a6f1-cd0b1953a8f6 - - - - - -] [instance: ce1afa72-143f-43f3-9859-df7f3523888a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:30:57 compute-0 nova_compute[253538]: 2025-11-25 08:30:57.676 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:30:57 compute-0 ceph-mon[75015]: pgmap v1428: 321 pgs: 321 active+clean; 91 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 273 KiB/s wr, 136 op/s
Nov 25 08:30:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 92 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 301 KiB/s wr, 83 op/s
Nov 25 08:30:59 compute-0 ceph-mon[75015]: pgmap v1429: 321 pgs: 321 active+clean; 92 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 301 KiB/s wr, 83 op/s
Nov 25 08:31:00 compute-0 nova_compute[253538]: 2025-11-25 08:31:00.274 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:31:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 101 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 1020 KiB/s rd, 1.5 MiB/s wr, 68 op/s
Nov 25 08:31:01 compute-0 nova_compute[253538]: 2025-11-25 08:31:01.244 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:01 compute-0 nova_compute[253538]: 2025-11-25 08:31:01.726 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "964bed05-dc03-42f0-9a11-b18fa70a4787" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:01 compute-0 nova_compute[253538]: 2025-11-25 08:31:01.726 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:01 compute-0 nova_compute[253538]: 2025-11-25 08:31:01.745 253542 DEBUG nova.compute.manager [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:31:01 compute-0 nova_compute[253538]: 2025-11-25 08:31:01.798 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059446.7981906, b06ecfc8-23f8-42c8-8a5a-396257c46b68 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:01 compute-0 nova_compute[253538]: 2025-11-25 08:31:01.799 253542 INFO nova.compute.manager [-] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] VM Stopped (Lifecycle Event)
Nov 25 08:31:01 compute-0 nova_compute[253538]: 2025-11-25 08:31:01.824 253542 DEBUG nova.compute.manager [None req-8c13104d-b730-47c3-aca4-0b8faec86277 - - - - - -] [instance: b06ecfc8-23f8-42c8-8a5a-396257c46b68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:01 compute-0 nova_compute[253538]: 2025-11-25 08:31:01.836 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:01 compute-0 nova_compute[253538]: 2025-11-25 08:31:01.837 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:01 compute-0 nova_compute[253538]: 2025-11-25 08:31:01.843 253542 DEBUG nova.virt.hardware [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:31:01 compute-0 nova_compute[253538]: 2025-11-25 08:31:01.843 253542 INFO nova.compute.claims [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:31:01 compute-0 nova_compute[253538]: 2025-11-25 08:31:01.846 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:01 compute-0 nova_compute[253538]: 2025-11-25 08:31:01.957 253542 DEBUG oslo_concurrency.processutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:02 compute-0 ovn_controller[152859]: 2025-11-25T08:31:02Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:13:41:6c 10.100.0.3
Nov 25 08:31:02 compute-0 ovn_controller[152859]: 2025-11-25T08:31:02Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:13:41:6c 10.100.0.3
Nov 25 08:31:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:31:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2046800255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:02 compute-0 ceph-mon[75015]: pgmap v1430: 321 pgs: 321 active+clean; 101 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 1020 KiB/s rd, 1.5 MiB/s wr, 68 op/s
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.450 253542 DEBUG oslo_concurrency.processutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.459 253542 DEBUG nova.compute.provider_tree [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.488 253542 DEBUG nova.scheduler.client.report [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.514 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.515 253542 DEBUG nova.compute.manager [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.562 253542 DEBUG nova.compute.manager [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.563 253542 DEBUG nova.network.neutron [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.587 253542 INFO nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.614 253542 DEBUG nova.compute.manager [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:31:02 compute-0 sudo[301545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:31:02 compute-0 sudo[301545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:02 compute-0 sudo[301545]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.703 253542 DEBUG nova.compute.manager [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.704 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.705 253542 INFO nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Creating image(s)
Nov 25 08:31:02 compute-0 sudo[301570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:31:02 compute-0 sudo[301570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:02 compute-0 sudo[301570]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.754 253542 DEBUG nova.storage.rbd_utils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] rbd image 964bed05-dc03-42f0-9a11-b18fa70a4787_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:02 compute-0 sudo[301602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:31:02 compute-0 sudo[301602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:02 compute-0 sudo[301602]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.794 253542 DEBUG nova.storage.rbd_utils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] rbd image 964bed05-dc03-42f0-9a11-b18fa70a4787_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.817 253542 DEBUG nova.storage.rbd_utils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] rbd image 964bed05-dc03-42f0-9a11-b18fa70a4787_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.824 253542 DEBUG oslo_concurrency.processutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:02 compute-0 sudo[301656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 08:31:02 compute-0 sudo[301656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.876 253542 DEBUG nova.policy [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ccf1e57f59084541821b20089873a6ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c50e3969ac5b472b8defc2e5cca2901a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.893 253542 DEBUG oslo_concurrency.processutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.894 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.896 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.896 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.926 253542 DEBUG nova.storage.rbd_utils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] rbd image 964bed05-dc03-42f0-9a11-b18fa70a4787_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:02 compute-0 nova_compute[253538]: 2025-11-25 08:31:02.930 253542 DEBUG oslo_concurrency.processutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 964bed05-dc03-42f0-9a11-b18fa70a4787_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:03 compute-0 sudo[301656]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:31:03 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:31:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 102 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 1.7 MiB/s wr, 34 op/s
Nov 25 08:31:03 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:31:03 compute-0 sudo[301756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:31:03 compute-0 sudo[301756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:03 compute-0 sudo[301756]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:03 compute-0 sudo[301784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:31:03 compute-0 sudo[301784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:03 compute-0 sudo[301784]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:03 compute-0 sudo[301809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:31:03 compute-0 sudo[301809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:03 compute-0 sudo[301809]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:03 compute-0 nova_compute[253538]: 2025-11-25 08:31:03.449 253542 DEBUG nova.network.neutron [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Successfully created port: 66f4cf24-739f-46ed-af06-5f4556c06239 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:31:03 compute-0 sudo[301834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:31:03 compute-0 sudo[301834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:03 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2046800255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:03 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:31:03 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000705475312561047 of space, bias 1.0, pg target 0.21164259376831412 quantized to 32 (current 32)
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:31:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:31:03 compute-0 sudo[301834]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:03 compute-0 nova_compute[253538]: 2025-11-25 08:31:03.938 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:31:03 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:31:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:31:03 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:31:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:31:04 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:31:04 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev fcc40020-e857-4c84-a655-f7584952cd2c does not exist
Nov 25 08:31:04 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 138ad768-b00a-4db3-984f-6c7c3450b63f does not exist
Nov 25 08:31:04 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 03a44bc5-9af7-4564-89b9-63e865ade653 does not exist
Nov 25 08:31:04 compute-0 nova_compute[253538]: 2025-11-25 08:31:04.175 253542 DEBUG nova.network.neutron [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Successfully created port: bcb935bd-8596-426c-8f99-55d4b9545321 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:31:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:31:04 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:31:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:31:04 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:31:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:31:04 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:31:04 compute-0 sudo[301889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:31:04 compute-0 sudo[301889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:04 compute-0 sudo[301889]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:04 compute-0 sudo[301914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:31:04 compute-0 sudo[301914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:04 compute-0 sudo[301914]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:04 compute-0 sudo[301939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:31:04 compute-0 sudo[301939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:04 compute-0 sudo[301939]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:04 compute-0 sudo[301964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:31:04 compute-0 sudo[301964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:04 compute-0 ceph-mon[75015]: pgmap v1431: 321 pgs: 321 active+clean; 102 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 1.7 MiB/s wr, 34 op/s
Nov 25 08:31:04 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:31:04 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:31:04 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:31:04 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:31:04 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:31:04 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:31:04 compute-0 podman[302031]: 2025-11-25 08:31:04.795045273 +0000 UTC m=+0.028184620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:31:04 compute-0 podman[302031]: 2025-11-25 08:31:04.94398842 +0000 UTC m=+0.177127737 container create 55f91e080e8b2dd4e4d68ad3a195a4cd10c35e98538fc5f1ad9300957f821782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:31:05 compute-0 nova_compute[253538]: 2025-11-25 08:31:05.004 253542 DEBUG oslo_concurrency.processutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 964bed05-dc03-42f0-9a11-b18fa70a4787_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:05 compute-0 systemd[1]: Started libpod-conmon-55f91e080e8b2dd4e4d68ad3a195a4cd10c35e98538fc5f1ad9300957f821782.scope.
Nov 25 08:31:05 compute-0 nova_compute[253538]: 2025-11-25 08:31:05.049 253542 DEBUG nova.network.neutron [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Successfully updated port: 66f4cf24-739f-46ed-af06-5f4556c06239 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:31:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:31:05 compute-0 nova_compute[253538]: 2025-11-25 08:31:05.099 253542 DEBUG nova.storage.rbd_utils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] resizing rbd image 964bed05-dc03-42f0-9a11-b18fa70a4787_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:31:05 compute-0 podman[302031]: 2025-11-25 08:31:05.142763984 +0000 UTC m=+0.375903321 container init 55f91e080e8b2dd4e4d68ad3a195a4cd10c35e98538fc5f1ad9300957f821782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 08:31:05 compute-0 podman[302045]: 2025-11-25 08:31:05.143290428 +0000 UTC m=+0.143218839 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 08:31:05 compute-0 podman[302031]: 2025-11-25 08:31:05.150274682 +0000 UTC m=+0.383413999 container start 55f91e080e8b2dd4e4d68ad3a195a4cd10c35e98538fc5f1ad9300957f821782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brown, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:31:05 compute-0 quizzical_brown[302070]: 167 167
Nov 25 08:31:05 compute-0 systemd[1]: libpod-55f91e080e8b2dd4e4d68ad3a195a4cd10c35e98538fc5f1ad9300957f821782.scope: Deactivated successfully.
Nov 25 08:31:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 130 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 245 KiB/s rd, 2.5 MiB/s wr, 65 op/s
Nov 25 08:31:05 compute-0 podman[302031]: 2025-11-25 08:31:05.166651214 +0000 UTC m=+0.399790531 container attach 55f91e080e8b2dd4e4d68ad3a195a4cd10c35e98538fc5f1ad9300957f821782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brown, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:31:05 compute-0 podman[302031]: 2025-11-25 08:31:05.167922079 +0000 UTC m=+0.401061396 container died 55f91e080e8b2dd4e4d68ad3a195a4cd10c35e98538fc5f1ad9300957f821782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brown, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 08:31:05 compute-0 nova_compute[253538]: 2025-11-25 08:31:05.169 253542 DEBUG nova.compute.manager [req-d130050b-c5fd-4786-9089-d2bfe5f58599 req-fed692ec-ee40-4cfe-af81-6a8c94f876fb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received event network-changed-66f4cf24-739f-46ed-af06-5f4556c06239 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:05 compute-0 nova_compute[253538]: 2025-11-25 08:31:05.170 253542 DEBUG nova.compute.manager [req-d130050b-c5fd-4786-9089-d2bfe5f58599 req-fed692ec-ee40-4cfe-af81-6a8c94f876fb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Refreshing instance network info cache due to event network-changed-66f4cf24-739f-46ed-af06-5f4556c06239. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:31:05 compute-0 nova_compute[253538]: 2025-11-25 08:31:05.170 253542 DEBUG oslo_concurrency.lockutils [req-d130050b-c5fd-4786-9089-d2bfe5f58599 req-fed692ec-ee40-4cfe-af81-6a8c94f876fb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-964bed05-dc03-42f0-9a11-b18fa70a4787" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:05 compute-0 nova_compute[253538]: 2025-11-25 08:31:05.170 253542 DEBUG oslo_concurrency.lockutils [req-d130050b-c5fd-4786-9089-d2bfe5f58599 req-fed692ec-ee40-4cfe-af81-6a8c94f876fb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-964bed05-dc03-42f0-9a11-b18fa70a4787" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:05 compute-0 nova_compute[253538]: 2025-11-25 08:31:05.170 253542 DEBUG nova.network.neutron [req-d130050b-c5fd-4786-9089-d2bfe5f58599 req-fed692ec-ee40-4cfe-af81-6a8c94f876fb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Refreshing network info cache for port 66f4cf24-739f-46ed-af06-5f4556c06239 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:31:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ede857fd15dbcf9486318a9796657be68431f1296f0431b4064193c9ca07b46-merged.mount: Deactivated successfully.
Nov 25 08:31:05 compute-0 nova_compute[253538]: 2025-11-25 08:31:05.314 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:05 compute-0 nova_compute[253538]: 2025-11-25 08:31:05.383 253542 DEBUG nova.network.neutron [req-d130050b-c5fd-4786-9089-d2bfe5f58599 req-fed692ec-ee40-4cfe-af81-6a8c94f876fb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:31:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:31:05 compute-0 podman[302031]: 2025-11-25 08:31:05.745964666 +0000 UTC m=+0.979103993 container remove 55f91e080e8b2dd4e4d68ad3a195a4cd10c35e98538fc5f1ad9300957f821782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:31:05 compute-0 ceph-mon[75015]: pgmap v1432: 321 pgs: 321 active+clean; 130 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 245 KiB/s rd, 2.5 MiB/s wr, 65 op/s
Nov 25 08:31:05 compute-0 nova_compute[253538]: 2025-11-25 08:31:05.836 253542 DEBUG nova.objects.instance [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lazy-loading 'migration_context' on Instance uuid 964bed05-dc03-42f0-9a11-b18fa70a4787 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:05 compute-0 systemd[1]: libpod-conmon-55f91e080e8b2dd4e4d68ad3a195a4cd10c35e98538fc5f1ad9300957f821782.scope: Deactivated successfully.
Nov 25 08:31:05 compute-0 nova_compute[253538]: 2025-11-25 08:31:05.847 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:31:05 compute-0 nova_compute[253538]: 2025-11-25 08:31:05.848 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Ensure instance console log exists: /var/lib/nova/instances/964bed05-dc03-42f0-9a11-b18fa70a4787/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:31:05 compute-0 nova_compute[253538]: 2025-11-25 08:31:05.850 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:05 compute-0 nova_compute[253538]: 2025-11-25 08:31:05.850 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:05 compute-0 nova_compute[253538]: 2025-11-25 08:31:05.851 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:05 compute-0 podman[302161]: 2025-11-25 08:31:05.975923332 +0000 UTC m=+0.058644382 container create f6bdc15533634f82f5ba4f39b9d41ccd31a85c1dd46bae4666aa9f5b2b00f2c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:31:06 compute-0 nova_compute[253538]: 2025-11-25 08:31:06.029 253542 DEBUG nova.network.neutron [req-d130050b-c5fd-4786-9089-d2bfe5f58599 req-fed692ec-ee40-4cfe-af81-6a8c94f876fb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:06 compute-0 systemd[1]: Started libpod-conmon-f6bdc15533634f82f5ba4f39b9d41ccd31a85c1dd46bae4666aa9f5b2b00f2c4.scope.
Nov 25 08:31:06 compute-0 podman[302161]: 2025-11-25 08:31:05.949931374 +0000 UTC m=+0.032652434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:31:06 compute-0 nova_compute[253538]: 2025-11-25 08:31:06.050 253542 DEBUG oslo_concurrency.lockutils [req-d130050b-c5fd-4786-9089-d2bfe5f58599 req-fed692ec-ee40-4cfe-af81-6a8c94f876fb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-964bed05-dc03-42f0-9a11-b18fa70a4787" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:31:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8121c1d707f525473f30fc5a3b229d7d75ad86963f14cef9c77aa48d17ead78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8121c1d707f525473f30fc5a3b229d7d75ad86963f14cef9c77aa48d17ead78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8121c1d707f525473f30fc5a3b229d7d75ad86963f14cef9c77aa48d17ead78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8121c1d707f525473f30fc5a3b229d7d75ad86963f14cef9c77aa48d17ead78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8121c1d707f525473f30fc5a3b229d7d75ad86963f14cef9c77aa48d17ead78/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:06 compute-0 podman[302161]: 2025-11-25 08:31:06.102514372 +0000 UTC m=+0.185235412 container init f6bdc15533634f82f5ba4f39b9d41ccd31a85c1dd46bae4666aa9f5b2b00f2c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 08:31:06 compute-0 podman[302161]: 2025-11-25 08:31:06.10970105 +0000 UTC m=+0.192422090 container start f6bdc15533634f82f5ba4f39b9d41ccd31a85c1dd46bae4666aa9f5b2b00f2c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_euler, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:31:06 compute-0 podman[302161]: 2025-11-25 08:31:06.112440596 +0000 UTC m=+0.195161666 container attach f6bdc15533634f82f5ba4f39b9d41ccd31a85c1dd46bae4666aa9f5b2b00f2c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_euler, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:31:06 compute-0 nova_compute[253538]: 2025-11-25 08:31:06.312 253542 DEBUG nova.network.neutron [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Successfully updated port: bcb935bd-8596-426c-8f99-55d4b9545321 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:31:06 compute-0 nova_compute[253538]: 2025-11-25 08:31:06.328 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "refresh_cache-964bed05-dc03-42f0-9a11-b18fa70a4787" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:06 compute-0 nova_compute[253538]: 2025-11-25 08:31:06.329 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquired lock "refresh_cache-964bed05-dc03-42f0-9a11-b18fa70a4787" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:06 compute-0 nova_compute[253538]: 2025-11-25 08:31:06.329 253542 DEBUG nova.network.neutron [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:31:06 compute-0 nova_compute[253538]: 2025-11-25 08:31:06.526 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:06 compute-0 nova_compute[253538]: 2025-11-25 08:31:06.701 253542 DEBUG nova.network.neutron [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:31:06 compute-0 nova_compute[253538]: 2025-11-25 08:31:06.848 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 151 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 256 KiB/s rd, 3.5 MiB/s wr, 73 op/s
Nov 25 08:31:07 compute-0 nova_compute[253538]: 2025-11-25 08:31:07.227 253542 DEBUG nova.compute.manager [req-c0ff67f5-48cd-45cd-9233-22b14dc87e27 req-b450ebb1-3e89-4644-bb8f-ef248dc5cef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received event network-changed-bcb935bd-8596-426c-8f99-55d4b9545321 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:07 compute-0 nova_compute[253538]: 2025-11-25 08:31:07.228 253542 DEBUG nova.compute.manager [req-c0ff67f5-48cd-45cd-9233-22b14dc87e27 req-b450ebb1-3e89-4644-bb8f-ef248dc5cef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Refreshing instance network info cache due to event network-changed-bcb935bd-8596-426c-8f99-55d4b9545321. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:31:07 compute-0 nova_compute[253538]: 2025-11-25 08:31:07.229 253542 DEBUG oslo_concurrency.lockutils [req-c0ff67f5-48cd-45cd-9233-22b14dc87e27 req-b450ebb1-3e89-4644-bb8f-ef248dc5cef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-964bed05-dc03-42f0-9a11-b18fa70a4787" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:07 compute-0 epic_euler[302177]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:31:07 compute-0 epic_euler[302177]: --> relative data size: 1.0
Nov 25 08:31:07 compute-0 epic_euler[302177]: --> All data devices are unavailable
Nov 25 08:31:07 compute-0 systemd[1]: libpod-f6bdc15533634f82f5ba4f39b9d41ccd31a85c1dd46bae4666aa9f5b2b00f2c4.scope: Deactivated successfully.
Nov 25 08:31:07 compute-0 podman[302161]: 2025-11-25 08:31:07.26852844 +0000 UTC m=+1.351249490 container died f6bdc15533634f82f5ba4f39b9d41ccd31a85c1dd46bae4666aa9f5b2b00f2c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_euler, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 08:31:07 compute-0 systemd[1]: libpod-f6bdc15533634f82f5ba4f39b9d41ccd31a85c1dd46bae4666aa9f5b2b00f2c4.scope: Consumed 1.087s CPU time.
Nov 25 08:31:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8121c1d707f525473f30fc5a3b229d7d75ad86963f14cef9c77aa48d17ead78-merged.mount: Deactivated successfully.
Nov 25 08:31:07 compute-0 podman[302161]: 2025-11-25 08:31:07.421368354 +0000 UTC m=+1.504089394 container remove f6bdc15533634f82f5ba4f39b9d41ccd31a85c1dd46bae4666aa9f5b2b00f2c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_euler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 25 08:31:07 compute-0 systemd[1]: libpod-conmon-f6bdc15533634f82f5ba4f39b9d41ccd31a85c1dd46bae4666aa9f5b2b00f2c4.scope: Deactivated successfully.
Nov 25 08:31:07 compute-0 sudo[301964]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:07 compute-0 sudo[302220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:31:07 compute-0 sudo[302220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:07 compute-0 sudo[302220]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:07 compute-0 sudo[302245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:31:07 compute-0 sudo[302245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:07 compute-0 sudo[302245]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:07 compute-0 sudo[302270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:31:07 compute-0 sudo[302270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:07 compute-0 sudo[302270]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:07 compute-0 sudo[302295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:31:07 compute-0 sudo[302295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:08 compute-0 podman[302360]: 2025-11-25 08:31:08.065896808 +0000 UTC m=+0.089328979 container create 5d36704c6e06b05ed097a29e37c64af0ba3f6613915b37b1497c2256adc6bdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_faraday, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:31:08 compute-0 podman[302360]: 2025-11-25 08:31:07.997487877 +0000 UTC m=+0.020920028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:31:08 compute-0 systemd[1]: Started libpod-conmon-5d36704c6e06b05ed097a29e37c64af0ba3f6613915b37b1497c2256adc6bdaf.scope.
Nov 25 08:31:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:31:08 compute-0 podman[302360]: 2025-11-25 08:31:08.1455404 +0000 UTC m=+0.168972551 container init 5d36704c6e06b05ed097a29e37c64af0ba3f6613915b37b1497c2256adc6bdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_faraday, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 08:31:08 compute-0 podman[302360]: 2025-11-25 08:31:08.154436596 +0000 UTC m=+0.177868767 container start 5d36704c6e06b05ed097a29e37c64af0ba3f6613915b37b1497c2256adc6bdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_faraday, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:31:08 compute-0 affectionate_faraday[302376]: 167 167
Nov 25 08:31:08 compute-0 systemd[1]: libpod-5d36704c6e06b05ed097a29e37c64af0ba3f6613915b37b1497c2256adc6bdaf.scope: Deactivated successfully.
Nov 25 08:31:08 compute-0 podman[302360]: 2025-11-25 08:31:08.22984461 +0000 UTC m=+0.253276761 container attach 5d36704c6e06b05ed097a29e37c64af0ba3f6613915b37b1497c2256adc6bdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 08:31:08 compute-0 podman[302360]: 2025-11-25 08:31:08.230525808 +0000 UTC m=+0.253957939 container died 5d36704c6e06b05ed097a29e37c64af0ba3f6613915b37b1497c2256adc6bdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_faraday, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:31:08 compute-0 ceph-mon[75015]: pgmap v1433: 321 pgs: 321 active+clean; 151 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 256 KiB/s rd, 3.5 MiB/s wr, 73 op/s
Nov 25 08:31:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f22c65eb05c13d614c4d78111e43088e8be71f817590355e1608e866cf791417-merged.mount: Deactivated successfully.
Nov 25 08:31:08 compute-0 podman[302360]: 2025-11-25 08:31:08.414893034 +0000 UTC m=+0.438325195 container remove 5d36704c6e06b05ed097a29e37c64af0ba3f6613915b37b1497c2256adc6bdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_faraday, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 08:31:08 compute-0 systemd[1]: libpod-conmon-5d36704c6e06b05ed097a29e37c64af0ba3f6613915b37b1497c2256adc6bdaf.scope: Deactivated successfully.
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.668 253542 DEBUG nova.network.neutron [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Updating instance_info_cache with network_info: [{"id": "66f4cf24-739f-46ed-af06-5f4556c06239", "address": "fa:16:3e:5d:9a:e6", "network": {"id": "27ca22e4-edb9-4716-b7d0-baed03e35444", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1734967384", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66f4cf24-73", "ovs_interfaceid": "66f4cf24-739f-46ed-af06-5f4556c06239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bcb935bd-8596-426c-8f99-55d4b9545321", "address": "fa:16:3e:17:c3:d7", "network": {"id": "8b63866f-8f7c-4d12-9269-798a110ab5eb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-265709819", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcb935bd-85", "ovs_interfaceid": "bcb935bd-8596-426c-8f99-55d4b9545321", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:08 compute-0 podman[302399]: 2025-11-25 08:31:08.675510808 +0000 UTC m=+0.058398966 container create 4b2236234a1c59390355d9948624fcf0e4bb7b09601638087ccdf322d4af5ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.687 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Releasing lock "refresh_cache-964bed05-dc03-42f0-9a11-b18fa70a4787" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.688 253542 DEBUG nova.compute.manager [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Instance network_info: |[{"id": "66f4cf24-739f-46ed-af06-5f4556c06239", "address": "fa:16:3e:5d:9a:e6", "network": {"id": "27ca22e4-edb9-4716-b7d0-baed03e35444", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1734967384", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66f4cf24-73", "ovs_interfaceid": "66f4cf24-739f-46ed-af06-5f4556c06239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bcb935bd-8596-426c-8f99-55d4b9545321", "address": "fa:16:3e:17:c3:d7", "network": {"id": "8b63866f-8f7c-4d12-9269-798a110ab5eb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-265709819", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcb935bd-85", "ovs_interfaceid": "bcb935bd-8596-426c-8f99-55d4b9545321", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.689 253542 DEBUG oslo_concurrency.lockutils [req-c0ff67f5-48cd-45cd-9233-22b14dc87e27 req-b450ebb1-3e89-4644-bb8f-ef248dc5cef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-964bed05-dc03-42f0-9a11-b18fa70a4787" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.690 253542 DEBUG nova.network.neutron [req-c0ff67f5-48cd-45cd-9233-22b14dc87e27 req-b450ebb1-3e89-4644-bb8f-ef248dc5cef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Refreshing network info cache for port bcb935bd-8596-426c-8f99-55d4b9545321 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.693 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Start _get_guest_xml network_info=[{"id": "66f4cf24-739f-46ed-af06-5f4556c06239", "address": "fa:16:3e:5d:9a:e6", "network": {"id": "27ca22e4-edb9-4716-b7d0-baed03e35444", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1734967384", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66f4cf24-73", "ovs_interfaceid": "66f4cf24-739f-46ed-af06-5f4556c06239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bcb935bd-8596-426c-8f99-55d4b9545321", "address": "fa:16:3e:17:c3:d7", "network": {"id": "8b63866f-8f7c-4d12-9269-798a110ab5eb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-265709819", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcb935bd-85", "ovs_interfaceid": "bcb935bd-8596-426c-8f99-55d4b9545321", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.699 253542 WARNING nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.710 253542 DEBUG nova.virt.libvirt.host [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.711 253542 DEBUG nova.virt.libvirt.host [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:31:08 compute-0 systemd[1]: Started libpod-conmon-4b2236234a1c59390355d9948624fcf0e4bb7b09601638087ccdf322d4af5ebe.scope.
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.718 253542 DEBUG nova.virt.libvirt.host [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.719 253542 DEBUG nova.virt.libvirt.host [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.719 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.720 253542 DEBUG nova.virt.hardware [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.720 253542 DEBUG nova.virt.hardware [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.721 253542 DEBUG nova.virt.hardware [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.721 253542 DEBUG nova.virt.hardware [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.721 253542 DEBUG nova.virt.hardware [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.722 253542 DEBUG nova.virt.hardware [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.722 253542 DEBUG nova.virt.hardware [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.722 253542 DEBUG nova.virt.hardware [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.723 253542 DEBUG nova.virt.hardware [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.723 253542 DEBUG nova.virt.hardware [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.723 253542 DEBUG nova.virt.hardware [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:31:08 compute-0 nova_compute[253538]: 2025-11-25 08:31:08.727 253542 DEBUG oslo_concurrency.processutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:08 compute-0 podman[302399]: 2025-11-25 08:31:08.646161237 +0000 UTC m=+0.029049425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:31:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906af4511434ac130b1635301fd0fe1ac21782c9d9d81633630d086b4f971e0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906af4511434ac130b1635301fd0fe1ac21782c9d9d81633630d086b4f971e0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906af4511434ac130b1635301fd0fe1ac21782c9d9d81633630d086b4f971e0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906af4511434ac130b1635301fd0fe1ac21782c9d9d81633630d086b4f971e0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:08 compute-0 podman[302399]: 2025-11-25 08:31:08.784843699 +0000 UTC m=+0.167731857 container init 4b2236234a1c59390355d9948624fcf0e4bb7b09601638087ccdf322d4af5ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_pascal, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 08:31:08 compute-0 podman[302399]: 2025-11-25 08:31:08.793421686 +0000 UTC m=+0.176309834 container start 4b2236234a1c59390355d9948624fcf0e4bb7b09601638087ccdf322d4af5ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_pascal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 08:31:08 compute-0 podman[302399]: 2025-11-25 08:31:08.804873953 +0000 UTC m=+0.187762101 container attach 4b2236234a1c59390355d9948624fcf0e4bb7b09601638087ccdf322d4af5ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_pascal, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 25 08:31:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 164 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 254 KiB/s rd, 3.6 MiB/s wr, 72 op/s
Nov 25 08:31:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:31:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2900756493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.252 253542 DEBUG oslo_concurrency.processutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.277 253542 DEBUG nova.storage.rbd_utils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] rbd image 964bed05-dc03-42f0-9a11-b18fa70a4787_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.280 253542 DEBUG oslo_concurrency.processutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2900756493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:09 compute-0 charming_pascal[302415]: {
Nov 25 08:31:09 compute-0 charming_pascal[302415]:     "0": [
Nov 25 08:31:09 compute-0 charming_pascal[302415]:         {
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "devices": [
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "/dev/loop3"
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             ],
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "lv_name": "ceph_lv0",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "lv_size": "21470642176",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "name": "ceph_lv0",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "tags": {
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.cluster_name": "ceph",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.crush_device_class": "",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.encrypted": "0",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.osd_id": "0",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.type": "block",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.vdo": "0"
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             },
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "type": "block",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "vg_name": "ceph_vg0"
Nov 25 08:31:09 compute-0 charming_pascal[302415]:         }
Nov 25 08:31:09 compute-0 charming_pascal[302415]:     ],
Nov 25 08:31:09 compute-0 charming_pascal[302415]:     "1": [
Nov 25 08:31:09 compute-0 charming_pascal[302415]:         {
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "devices": [
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "/dev/loop4"
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             ],
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "lv_name": "ceph_lv1",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "lv_size": "21470642176",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "name": "ceph_lv1",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "tags": {
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.cluster_name": "ceph",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.crush_device_class": "",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.encrypted": "0",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.osd_id": "1",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.type": "block",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.vdo": "0"
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             },
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "type": "block",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "vg_name": "ceph_vg1"
Nov 25 08:31:09 compute-0 charming_pascal[302415]:         }
Nov 25 08:31:09 compute-0 charming_pascal[302415]:     ],
Nov 25 08:31:09 compute-0 charming_pascal[302415]:     "2": [
Nov 25 08:31:09 compute-0 charming_pascal[302415]:         {
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "devices": [
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "/dev/loop5"
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             ],
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "lv_name": "ceph_lv2",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "lv_size": "21470642176",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "name": "ceph_lv2",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "tags": {
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.cluster_name": "ceph",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.crush_device_class": "",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.encrypted": "0",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.osd_id": "2",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.type": "block",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:                 "ceph.vdo": "0"
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             },
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "type": "block",
Nov 25 08:31:09 compute-0 charming_pascal[302415]:             "vg_name": "ceph_vg2"
Nov 25 08:31:09 compute-0 charming_pascal[302415]:         }
Nov 25 08:31:09 compute-0 charming_pascal[302415]:     ]
Nov 25 08:31:09 compute-0 charming_pascal[302415]: }
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.557 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:31:09 compute-0 systemd[1]: libpod-4b2236234a1c59390355d9948624fcf0e4bb7b09601638087ccdf322d4af5ebe.scope: Deactivated successfully.
Nov 25 08:31:09 compute-0 podman[302399]: 2025-11-25 08:31:09.589657194 +0000 UTC m=+0.972545342 container died 4b2236234a1c59390355d9948624fcf0e4bb7b09601638087ccdf322d4af5ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 08:31:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-906af4511434ac130b1635301fd0fe1ac21782c9d9d81633630d086b4f971e0f-merged.mount: Deactivated successfully.
Nov 25 08:31:09 compute-0 podman[302399]: 2025-11-25 08:31:09.649010195 +0000 UTC m=+1.031898343 container remove 4b2236234a1c59390355d9948624fcf0e4bb7b09601638087ccdf322d4af5ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 08:31:09 compute-0 systemd[1]: libpod-conmon-4b2236234a1c59390355d9948624fcf0e4bb7b09601638087ccdf322d4af5ebe.scope: Deactivated successfully.
Nov 25 08:31:09 compute-0 podman[302486]: 2025-11-25 08:31:09.687025025 +0000 UTC m=+0.069041939 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:31:09 compute-0 sudo[302295]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:31:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751746376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.736 253542 DEBUG oslo_concurrency.processutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.738 253542 DEBUG nova.virt.libvirt.vif [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:31:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-839232317',display_name='tempest-ServersTestMultiNic-server-839232317',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-839232317',id=43,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c50e3969ac5b472b8defc2e5cca2901a',ramdisk_id='',reservation_id='r-jc82luva',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-272267582',owner_user_name='tempest-ServersTestMultiNic-272267582-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:31:02Z,user_data=None,user_id='ccf1e57f59084541821b20089873a6ac',uuid=964bed05-dc03-42f0-9a11-b18fa70a4787,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66f4cf24-739f-46ed-af06-5f4556c06239", "address": "fa:16:3e:5d:9a:e6", "network": {"id": "27ca22e4-edb9-4716-b7d0-baed03e35444", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1734967384", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66f4cf24-73", "ovs_interfaceid": "66f4cf24-739f-46ed-af06-5f4556c06239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.738 253542 DEBUG nova.network.os_vif_util [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converting VIF {"id": "66f4cf24-739f-46ed-af06-5f4556c06239", "address": "fa:16:3e:5d:9a:e6", "network": {"id": "27ca22e4-edb9-4716-b7d0-baed03e35444", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1734967384", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66f4cf24-73", "ovs_interfaceid": "66f4cf24-739f-46ed-af06-5f4556c06239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.739 253542 DEBUG nova.network.os_vif_util [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9a:e6,bridge_name='br-int',has_traffic_filtering=True,id=66f4cf24-739f-46ed-af06-5f4556c06239,network=Network(27ca22e4-edb9-4716-b7d0-baed03e35444),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66f4cf24-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.740 253542 DEBUG nova.virt.libvirt.vif [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:31:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-839232317',display_name='tempest-ServersTestMultiNic-server-839232317',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-839232317',id=43,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c50e3969ac5b472b8defc2e5cca2901a',ramdisk_id='',reservation_id='r-jc82luva',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-272267582',owner_user_name='tempest-ServersTestMultiNic-272267582-proj
ect-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:31:02Z,user_data=None,user_id='ccf1e57f59084541821b20089873a6ac',uuid=964bed05-dc03-42f0-9a11-b18fa70a4787,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bcb935bd-8596-426c-8f99-55d4b9545321", "address": "fa:16:3e:17:c3:d7", "network": {"id": "8b63866f-8f7c-4d12-9269-798a110ab5eb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-265709819", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcb935bd-85", "ovs_interfaceid": "bcb935bd-8596-426c-8f99-55d4b9545321", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.740 253542 DEBUG nova.network.os_vif_util [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converting VIF {"id": "bcb935bd-8596-426c-8f99-55d4b9545321", "address": "fa:16:3e:17:c3:d7", "network": {"id": "8b63866f-8f7c-4d12-9269-798a110ab5eb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-265709819", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcb935bd-85", "ovs_interfaceid": "bcb935bd-8596-426c-8f99-55d4b9545321", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.740 253542 DEBUG nova.network.os_vif_util [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:c3:d7,bridge_name='br-int',has_traffic_filtering=True,id=bcb935bd-8596-426c-8f99-55d4b9545321,network=Network(8b63866f-8f7c-4d12-9269-798a110ab5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcb935bd-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.741 253542 DEBUG nova.objects.instance [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lazy-loading 'pci_devices' on Instance uuid 964bed05-dc03-42f0-9a11-b18fa70a4787 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:09 compute-0 sudo[302519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.761 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:31:09 compute-0 nova_compute[253538]:   <uuid>964bed05-dc03-42f0-9a11-b18fa70a4787</uuid>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   <name>instance-0000002b</name>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersTestMultiNic-server-839232317</nova:name>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:31:08</nova:creationTime>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:31:09 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:31:09 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:31:09 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:31:09 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:31:09 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:31:09 compute-0 nova_compute[253538]:         <nova:user uuid="ccf1e57f59084541821b20089873a6ac">tempest-ServersTestMultiNic-272267582-project-member</nova:user>
Nov 25 08:31:09 compute-0 nova_compute[253538]:         <nova:project uuid="c50e3969ac5b472b8defc2e5cca2901a">tempest-ServersTestMultiNic-272267582</nova:project>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:31:09 compute-0 nova_compute[253538]:         <nova:port uuid="66f4cf24-739f-46ed-af06-5f4556c06239">
Nov 25 08:31:09 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.157" ipVersion="4"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:31:09 compute-0 nova_compute[253538]:         <nova:port uuid="bcb935bd-8596-426c-8f99-55d4b9545321">
Nov 25 08:31:09 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.1.47" ipVersion="4"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <system>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <entry name="serial">964bed05-dc03-42f0-9a11-b18fa70a4787</entry>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <entry name="uuid">964bed05-dc03-42f0-9a11-b18fa70a4787</entry>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     </system>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   <os>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   </os>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   <features>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   </features>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/964bed05-dc03-42f0-9a11-b18fa70a4787_disk">
Nov 25 08:31:09 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       </source>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:31:09 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/964bed05-dc03-42f0-9a11-b18fa70a4787_disk.config">
Nov 25 08:31:09 compute-0 sudo[302519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:09 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       </source>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:31:09 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:5d:9a:e6"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <target dev="tap66f4cf24-73"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:17:c3:d7"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <target dev="tapbcb935bd-85"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/964bed05-dc03-42f0-9a11-b18fa70a4787/console.log" append="off"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <video>
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     </video>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 sudo[302519]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:31:09 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:31:09 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:31:09 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:31:09 compute-0 nova_compute[253538]: </domain>
Nov 25 08:31:09 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.763 253542 DEBUG nova.compute.manager [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Preparing to wait for external event network-vif-plugged-66f4cf24-739f-46ed-af06-5f4556c06239 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.763 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.763 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.764 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.764 253542 DEBUG nova.compute.manager [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Preparing to wait for external event network-vif-plugged-bcb935bd-8596-426c-8f99-55d4b9545321 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.764 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.764 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.764 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.765 253542 DEBUG nova.virt.libvirt.vif [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:31:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-839232317',display_name='tempest-ServersTestMultiNic-server-839232317',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-839232317',id=43,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c50e3969ac5b472b8defc2e5cca2901a',ramdisk_id='',reservation_id='r-jc82luva',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-272267582',owner_user_name='tempest-ServersTestMultiNic-2722
67582-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:31:02Z,user_data=None,user_id='ccf1e57f59084541821b20089873a6ac',uuid=964bed05-dc03-42f0-9a11-b18fa70a4787,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66f4cf24-739f-46ed-af06-5f4556c06239", "address": "fa:16:3e:5d:9a:e6", "network": {"id": "27ca22e4-edb9-4716-b7d0-baed03e35444", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1734967384", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66f4cf24-73", "ovs_interfaceid": "66f4cf24-739f-46ed-af06-5f4556c06239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.765 253542 DEBUG nova.network.os_vif_util [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converting VIF {"id": "66f4cf24-739f-46ed-af06-5f4556c06239", "address": "fa:16:3e:5d:9a:e6", "network": {"id": "27ca22e4-edb9-4716-b7d0-baed03e35444", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1734967384", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66f4cf24-73", "ovs_interfaceid": "66f4cf24-739f-46ed-af06-5f4556c06239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.766 253542 DEBUG nova.network.os_vif_util [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9a:e6,bridge_name='br-int',has_traffic_filtering=True,id=66f4cf24-739f-46ed-af06-5f4556c06239,network=Network(27ca22e4-edb9-4716-b7d0-baed03e35444),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66f4cf24-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.766 253542 DEBUG os_vif [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9a:e6,bridge_name='br-int',has_traffic_filtering=True,id=66f4cf24-739f-46ed-af06-5f4556c06239,network=Network(27ca22e4-edb9-4716-b7d0-baed03e35444),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66f4cf24-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.766 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.767 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.767 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.773 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.773 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66f4cf24-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.774 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap66f4cf24-73, col_values=(('external_ids', {'iface-id': '66f4cf24-739f-46ed-af06-5f4556c06239', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:9a:e6', 'vm-uuid': '964bed05-dc03-42f0-9a11-b18fa70a4787'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.775 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:09 compute-0 NetworkManager[48915]: <info>  [1764059469.7762] manager: (tap66f4cf24-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.778 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.785 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.786 253542 INFO os_vif [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9a:e6,bridge_name='br-int',has_traffic_filtering=True,id=66f4cf24-739f-46ed-af06-5f4556c06239,network=Network(27ca22e4-edb9-4716-b7d0-baed03e35444),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66f4cf24-73')
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.786 253542 DEBUG nova.virt.libvirt.vif [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:31:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-839232317',display_name='tempest-ServersTestMultiNic-server-839232317',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-839232317',id=43,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c50e3969ac5b472b8defc2e5cca2901a',ramdisk_id='',reservation_id='r-jc82luva',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-272267582',owner_user_name='tempest-ServersTestMultiNic-2722
67582-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:31:02Z,user_data=None,user_id='ccf1e57f59084541821b20089873a6ac',uuid=964bed05-dc03-42f0-9a11-b18fa70a4787,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bcb935bd-8596-426c-8f99-55d4b9545321", "address": "fa:16:3e:17:c3:d7", "network": {"id": "8b63866f-8f7c-4d12-9269-798a110ab5eb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-265709819", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcb935bd-85", "ovs_interfaceid": "bcb935bd-8596-426c-8f99-55d4b9545321", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.787 253542 DEBUG nova.network.os_vif_util [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converting VIF {"id": "bcb935bd-8596-426c-8f99-55d4b9545321", "address": "fa:16:3e:17:c3:d7", "network": {"id": "8b63866f-8f7c-4d12-9269-798a110ab5eb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-265709819", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcb935bd-85", "ovs_interfaceid": "bcb935bd-8596-426c-8f99-55d4b9545321", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.787 253542 DEBUG nova.network.os_vif_util [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:c3:d7,bridge_name='br-int',has_traffic_filtering=True,id=bcb935bd-8596-426c-8f99-55d4b9545321,network=Network(8b63866f-8f7c-4d12-9269-798a110ab5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcb935bd-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.787 253542 DEBUG os_vif [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:c3:d7,bridge_name='br-int',has_traffic_filtering=True,id=bcb935bd-8596-426c-8f99-55d4b9545321,network=Network(8b63866f-8f7c-4d12-9269-798a110ab5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcb935bd-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.788 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.788 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.788 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.790 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.790 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbcb935bd-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.791 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbcb935bd-85, col_values=(('external_ids', {'iface-id': 'bcb935bd-8596-426c-8f99-55d4b9545321', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:c3:d7', 'vm-uuid': '964bed05-dc03-42f0-9a11-b18fa70a4787'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.792 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:09 compute-0 NetworkManager[48915]: <info>  [1764059469.7929] manager: (tapbcb935bd-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.799 253542 INFO os_vif [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:c3:d7,bridge_name='br-int',has_traffic_filtering=True,id=bcb935bd-8596-426c-8f99-55d4b9545321,network=Network(8b63866f-8f7c-4d12-9269-798a110ab5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcb935bd-85')
Nov 25 08:31:09 compute-0 sudo[302546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:31:09 compute-0 sudo[302546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:09 compute-0 sudo[302546]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.843 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.864 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.865 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.865 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] No VIF found with MAC fa:16:3e:5d:9a:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.865 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] No VIF found with MAC fa:16:3e:17:c3:d7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.865 253542 INFO nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Using config drive
Nov 25 08:31:09 compute-0 sudo[302576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:31:09 compute-0 sudo[302576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:09 compute-0 sudo[302576]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.886 253542 DEBUG nova.storage.rbd_utils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] rbd image 964bed05-dc03-42f0-9a11-b18fa70a4787_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.898 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.899 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.902 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Triggering sync for uuid 21243957-732f-435d-854b-56d6dd7c1ee5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.903 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Triggering sync for uuid 964bed05-dc03-42f0-9a11-b18fa70a4787 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.903 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "21243957-732f-435d-854b-56d6dd7c1ee5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.903 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "21243957-732f-435d-854b-56d6dd7c1ee5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.903 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "964bed05-dc03-42f0-9a11-b18fa70a4787" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.911 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.917 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "1a48a5fd-440c-4f73-89a2-720e38b2f798" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.918 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "1a48a5fd-440c-4f73-89a2-720e38b2f798" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.934 253542 DEBUG nova.network.neutron [req-c0ff67f5-48cd-45cd-9233-22b14dc87e27 req-b450ebb1-3e89-4644-bb8f-ef248dc5cef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Updated VIF entry in instance network info cache for port bcb935bd-8596-426c-8f99-55d4b9545321. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.934 253542 DEBUG nova.network.neutron [req-c0ff67f5-48cd-45cd-9233-22b14dc87e27 req-b450ebb1-3e89-4644-bb8f-ef248dc5cef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Updating instance_info_cache with network_info: [{"id": "66f4cf24-739f-46ed-af06-5f4556c06239", "address": "fa:16:3e:5d:9a:e6", "network": {"id": "27ca22e4-edb9-4716-b7d0-baed03e35444", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1734967384", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66f4cf24-73", "ovs_interfaceid": "66f4cf24-739f-46ed-af06-5f4556c06239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bcb935bd-8596-426c-8f99-55d4b9545321", "address": "fa:16:3e:17:c3:d7", "network": {"id": "8b63866f-8f7c-4d12-9269-798a110ab5eb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-265709819", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcb935bd-85", "ovs_interfaceid": "bcb935bd-8596-426c-8f99-55d4b9545321", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:09 compute-0 sudo[302619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:31:09 compute-0 sudo[302619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.975 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "21243957-732f-435d-854b-56d6dd7c1ee5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:09 compute-0 nova_compute[253538]: 2025-11-25 08:31:09.976 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:31:10 compute-0 ovn_controller[152859]: 2025-11-25T08:31:10Z|00317|binding|INFO|Releasing lport c8895407-4b3c-419a-8694-55e3e69b82b8 from this chassis (sb_readonly=0)
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.012 253542 DEBUG oslo_concurrency.lockutils [req-c0ff67f5-48cd-45cd-9233-22b14dc87e27 req-b450ebb1-3e89-4644-bb8f-ef248dc5cef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-964bed05-dc03-42f0-9a11-b18fa70a4787" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.036 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.037 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.050 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.052 253542 INFO nova.compute.claims [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.056 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.129 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.200 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.245 253542 INFO nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Creating config drive at /var/lib/nova/instances/964bed05-dc03-42f0-9a11-b18fa70a4787/disk.config
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.249 253542 DEBUG oslo_concurrency.processutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/964bed05-dc03-42f0-9a11-b18fa70a4787/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2nupafzc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.316 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:10 compute-0 ceph-mon[75015]: pgmap v1434: 321 pgs: 321 active+clean; 164 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 254 KiB/s rd, 3.6 MiB/s wr, 72 op/s
Nov 25 08:31:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1751746376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:10 compute-0 podman[302684]: 2025-11-25 08:31:10.329266066 +0000 UTC m=+0.066238811 container create 4824c2a1064387d21055fa7c13761c6e890d4243771f90c53615967b44361091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_burnell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:31:10 compute-0 podman[302684]: 2025-11-25 08:31:10.287867123 +0000 UTC m=+0.024839888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.391 253542 DEBUG oslo_concurrency.processutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/964bed05-dc03-42f0-9a11-b18fa70a4787/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2nupafzc" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:10 compute-0 systemd[1]: Started libpod-conmon-4824c2a1064387d21055fa7c13761c6e890d4243771f90c53615967b44361091.scope.
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.415 253542 DEBUG nova.storage.rbd_utils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] rbd image 964bed05-dc03-42f0-9a11-b18fa70a4787_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.418 253542 DEBUG oslo_concurrency.processutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/964bed05-dc03-42f0-9a11-b18fa70a4787/disk.config 964bed05-dc03-42f0-9a11-b18fa70a4787_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:31:10 compute-0 podman[302684]: 2025-11-25 08:31:10.449730396 +0000 UTC m=+0.186703171 container init 4824c2a1064387d21055fa7c13761c6e890d4243771f90c53615967b44361091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_burnell, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 25 08:31:10 compute-0 podman[302684]: 2025-11-25 08:31:10.457018388 +0000 UTC m=+0.193991133 container start 4824c2a1064387d21055fa7c13761c6e890d4243771f90c53615967b44361091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_burnell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 08:31:10 compute-0 systemd[1]: libpod-4824c2a1064387d21055fa7c13761c6e890d4243771f90c53615967b44361091.scope: Deactivated successfully.
Nov 25 08:31:10 compute-0 xenodochial_burnell[302722]: 167 167
Nov 25 08:31:10 compute-0 conmon[302722]: conmon 4824c2a1064387d21055 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4824c2a1064387d21055fa7c13761c6e890d4243771f90c53615967b44361091.scope/container/memory.events
Nov 25 08:31:10 compute-0 podman[302684]: 2025-11-25 08:31:10.470651155 +0000 UTC m=+0.207623920 container attach 4824c2a1064387d21055fa7c13761c6e890d4243771f90c53615967b44361091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:31:10 compute-0 podman[302684]: 2025-11-25 08:31:10.471902649 +0000 UTC m=+0.208875424 container died 4824c2a1064387d21055fa7c13761c6e890d4243771f90c53615967b44361091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_burnell, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:31:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-02ae054985d9c06ee68522aab038249767ec50d09c14864d35a75a410130c065-merged.mount: Deactivated successfully.
Nov 25 08:31:10 compute-0 podman[302684]: 2025-11-25 08:31:10.606594321 +0000 UTC m=+0.343567066 container remove 4824c2a1064387d21055fa7c13761c6e890d4243771f90c53615967b44361091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.617 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.617 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.618 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:31:10 compute-0 systemd[1]: libpod-conmon-4824c2a1064387d21055fa7c13761c6e890d4243771f90c53615967b44361091.scope: Deactivated successfully.
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.643 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.643 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 08:31:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:31:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3902370476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.699 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.705 253542 DEBUG nova.compute.provider_tree [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.716 253542 DEBUG nova.scheduler.client.report [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:31:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.743 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.744 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.747 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.752 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.753 253542 INFO nova.compute.claims [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:31:10 compute-0 podman[302786]: 2025-11-25 08:31:10.796935732 +0000 UTC m=+0.054688222 container create 563f145f9cf9eadae6426ddb94ce5a0f2d04674f2b01a919aa08b9c96f4ba81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_elion, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.806 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.807 253542 DEBUG nova.network.neutron [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.823 253542 INFO nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.827 253542 DEBUG oslo_concurrency.processutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/964bed05-dc03-42f0-9a11-b18fa70a4787/disk.config 964bed05-dc03-42f0-9a11-b18fa70a4787_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.835 253542 INFO nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Deleting local config drive /var/lib/nova/instances/964bed05-dc03-42f0-9a11-b18fa70a4787/disk.config because it was imported into RBD.
Nov 25 08:31:10 compute-0 systemd[1]: Started libpod-conmon-563f145f9cf9eadae6426ddb94ce5a0f2d04674f2b01a919aa08b9c96f4ba81b.scope.
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.840 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:31:10 compute-0 podman[302786]: 2025-11-25 08:31:10.768460716 +0000 UTC m=+0.026213216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:31:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0c243f687bf07047bc14ca35667bada7d30f12cb17b93def6749ce338dd672/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0c243f687bf07047bc14ca35667bada7d30f12cb17b93def6749ce338dd672/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0c243f687bf07047bc14ca35667bada7d30f12cb17b93def6749ce338dd672/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0c243f687bf07047bc14ca35667bada7d30f12cb17b93def6749ce338dd672/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:10 compute-0 kernel: tap66f4cf24-73: entered promiscuous mode
Nov 25 08:31:10 compute-0 NetworkManager[48915]: <info>  [1764059470.8948] manager: (tap66f4cf24-73): new Tun device (/org/freedesktop/NetworkManager/Devices/153)
Nov 25 08:31:10 compute-0 podman[302786]: 2025-11-25 08:31:10.895018933 +0000 UTC m=+0.152771443 container init 563f145f9cf9eadae6426ddb94ce5a0f2d04674f2b01a919aa08b9c96f4ba81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:31:10 compute-0 podman[302786]: 2025-11-25 08:31:10.902601553 +0000 UTC m=+0.160354033 container start 563f145f9cf9eadae6426ddb94ce5a0f2d04674f2b01a919aa08b9c96f4ba81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_elion, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 08:31:10 compute-0 ovn_controller[152859]: 2025-11-25T08:31:10Z|00318|binding|INFO|Claiming lport 66f4cf24-739f-46ed-af06-5f4556c06239 for this chassis.
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.903 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:10 compute-0 ovn_controller[152859]: 2025-11-25T08:31:10Z|00319|binding|INFO|66f4cf24-739f-46ed-af06-5f4556c06239: Claiming fa:16:3e:5d:9a:e6 10.100.0.157
Nov 25 08:31:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:10.911 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:9a:e6 10.100.0.157'], port_security=['fa:16:3e:5d:9a:e6 10.100.0.157'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.157/24', 'neutron:device_id': '964bed05-dc03-42f0-9a11-b18fa70a4787', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27ca22e4-edb9-4716-b7d0-baed03e35444', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c50e3969ac5b472b8defc2e5cca2901a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b2ea7a0-7c1c-42c6-b6e3-ba1d0d87a9c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3ce3aaa-a195-4aa5-988d-c73542209679, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=66f4cf24-739f-46ed-af06-5f4556c06239) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:31:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:10.912 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 66f4cf24-739f-46ed-af06-5f4556c06239 in datapath 27ca22e4-edb9-4716-b7d0-baed03e35444 bound to our chassis
Nov 25 08:31:10 compute-0 podman[302786]: 2025-11-25 08:31:10.912220559 +0000 UTC m=+0.169973059 container attach 563f145f9cf9eadae6426ddb94ce5a0f2d04674f2b01a919aa08b9c96f4ba81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_elion, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:31:10 compute-0 NetworkManager[48915]: <info>  [1764059470.9126] manager: (tapbcb935bd-85): new Tun device (/org/freedesktop/NetworkManager/Devices/154)
Nov 25 08:31:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:10.913 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 27ca22e4-edb9-4716-b7d0-baed03e35444
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.915 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.916 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.916 253542 INFO nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Creating image(s)
Nov 25 08:31:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:10.924 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd6ba82-239a-4dc2-a1d9-5bde79659c6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:10.925 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap27ca22e4-e1 in ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:31:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:10.927 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap27ca22e4-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:31:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:10.928 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7053ea67-8bf0-419c-a0e9-3c8962e5b5b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:10.928 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fdda4b0a-26b1-4dec-b582-cb34f2a46385]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:10 compute-0 systemd-udevd[302834]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:31:10 compute-0 systemd-udevd[302832]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:31:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:10.940 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d4ed0d-1b9a-4c38-ad0b-5b82ef2fbf92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:10 compute-0 systemd-machined[215790]: New machine qemu-48-instance-0000002b.
Nov 25 08:31:10 compute-0 NetworkManager[48915]: <info>  [1764059470.9482] device (tap66f4cf24-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:31:10 compute-0 NetworkManager[48915]: <info>  [1764059470.9497] device (tap66f4cf24-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:31:10 compute-0 NetworkManager[48915]: <info>  [1764059470.9581] device (tapbcb935bd-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:31:10 compute-0 kernel: tapbcb935bd-85: entered promiscuous mode
Nov 25 08:31:10 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-0000002b.
Nov 25 08:31:10 compute-0 NetworkManager[48915]: <info>  [1764059470.9599] device (tapbcb935bd-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:31:10 compute-0 ovn_controller[152859]: 2025-11-25T08:31:10Z|00320|binding|INFO|Claiming lport bcb935bd-8596-426c-8f99-55d4b9545321 for this chassis.
Nov 25 08:31:10 compute-0 ovn_controller[152859]: 2025-11-25T08:31:10Z|00321|binding|INFO|bcb935bd-8596-426c-8f99-55d4b9545321: Claiming fa:16:3e:17:c3:d7 10.100.1.47
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.963 253542 DEBUG nova.storage.rbd_utils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 59d86d83-1ff4-4f9e-96f3-5e381272de5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:10.965 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6df67f29-92cf-4f09-bab2-41666237496e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:10 compute-0 ovn_controller[152859]: 2025-11-25T08:31:10Z|00322|binding|INFO|Setting lport 66f4cf24-739f-46ed-af06-5f4556c06239 ovn-installed in OVS
Nov 25 08:31:10 compute-0 ovn_controller[152859]: 2025-11-25T08:31:10Z|00323|binding|INFO|Setting lport 66f4cf24-739f-46ed-af06-5f4556c06239 up in Southbound
Nov 25 08:31:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:10.969 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:c3:d7 10.100.1.47'], port_security=['fa:16:3e:17:c3:d7 10.100.1.47'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.47/24', 'neutron:device_id': '964bed05-dc03-42f0-9a11-b18fa70a4787', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b63866f-8f7c-4d12-9269-798a110ab5eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c50e3969ac5b472b8defc2e5cca2901a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b2ea7a0-7c1c-42c6-b6e3-ba1d0d87a9c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eee42a71-97b6-4c5b-9c9c-2038ee9718a1, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bcb935bd-8596-426c-8f99-55d4b9545321) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:31:10 compute-0 nova_compute[253538]: 2025-11-25 08:31:10.991 253542 DEBUG nova.storage.rbd_utils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 59d86d83-1ff4-4f9e-96f3-5e381272de5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:10.994 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[980eb0ed-3900-4eb3-8ae4-01f0cba8072c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:11 compute-0 NetworkManager[48915]: <info>  [1764059471.0058] manager: (tap27ca22e4-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/155)
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.006 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ee67eb44-29f9-469e-b2f9-53e6526dea29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:11 compute-0 ovn_controller[152859]: 2025-11-25T08:31:11Z|00324|binding|INFO|Setting lport bcb935bd-8596-426c-8f99-55d4b9545321 ovn-installed in OVS
Nov 25 08:31:11 compute-0 ovn_controller[152859]: 2025-11-25T08:31:11Z|00325|binding|INFO|Setting lport bcb935bd-8596-426c-8f99-55d4b9545321 up in Southbound
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.037 253542 DEBUG nova.storage.rbd_utils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 59d86d83-1ff4-4f9e-96f3-5e381272de5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.043 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.045 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9d1161-da9c-4033-95a9-30dc51da0173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.048 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e27325-4557-471a-bec9-8b3559326d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:11 compute-0 NetworkManager[48915]: <info>  [1764059471.0699] device (tap27ca22e4-e0): carrier: link connected
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.075 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe38020-2b49-47c9-96fa-1fac9a24942c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.081 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.086 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.086 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.086 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.087 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 21243957-732f-435d-854b-56d6dd7c1ee5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.091 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[db49e09c-be74-4daa-bcb5-cc2db4cd2b67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27ca22e4-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f9:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482566, 'reachable_time': 22596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302914, 'error': None, 'target': 'ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.106 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d7bdab3a-a532-49e1-9861-240490b3e208]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:f9be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 482566, 'tstamp': 482566}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302915, 'error': None, 'target': 'ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.114 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.124 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9deba3c-7946-447b-8286-881cc00e4047]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27ca22e4-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f9:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482566, 'reachable_time': 22596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302916, 'error': None, 'target': 'ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.154 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e1943213-6d8f-44d7-8f9d-d2559ace416a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 167 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 302 KiB/s rd, 3.6 MiB/s wr, 83 op/s
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.165 253542 DEBUG nova.policy [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e3ba89d7ba114005bf727750ed2eb249', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8098b3bc99ed4993a40e217876568115', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.172 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.173 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.174 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.174 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.197 253542 DEBUG nova.storage.rbd_utils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 59d86d83-1ff4-4f9e-96f3-5e381272de5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.203 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 59d86d83-1ff4-4f9e-96f3-5e381272de5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.209 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[317a3934-7607-463b-a08c-c27fd8a7276a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.210 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27ca22e4-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.211 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.211 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27ca22e4-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.254 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:11 compute-0 kernel: tap27ca22e4-e0: entered promiscuous mode
Nov 25 08:31:11 compute-0 NetworkManager[48915]: <info>  [1764059471.2570] manager: (tap27ca22e4-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.257 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.258 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap27ca22e4-e0, col_values=(('external_ids', {'iface-id': '0755eaa2-fddf-434b-be9c-b845dcdeda11'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:11 compute-0 ovn_controller[152859]: 2025-11-25T08:31:11Z|00326|binding|INFO|Releasing lport 0755eaa2-fddf-434b-be9c-b845dcdeda11 from this chassis (sb_readonly=0)
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.259 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.276 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.282 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/27ca22e4-edb9-4716-b7d0-baed03e35444.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/27ca22e4-edb9-4716-b7d0-baed03e35444.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.283 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e4cf38a7-53f1-40ca-9b73-764ae905861c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.283 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-27ca22e4-edb9-4716-b7d0-baed03e35444
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/27ca22e4-edb9-4716-b7d0-baed03e35444.pid.haproxy
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 27ca22e4-edb9-4716-b7d0-baed03e35444
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:31:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:11.285 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444', 'env', 'PROCESS_TAG=haproxy-27ca22e4-edb9-4716-b7d0-baed03e35444', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/27ca22e4-edb9-4716-b7d0-baed03e35444.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.291 253542 DEBUG nova.compute.manager [req-8bdfc621-40e1-41f1-8c6d-afd17d71b45a req-3a8a95b2-b398-43cb-80ac-f16c7fa362b7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received event network-vif-plugged-66f4cf24-739f-46ed-af06-5f4556c06239 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.292 253542 DEBUG oslo_concurrency.lockutils [req-8bdfc621-40e1-41f1-8c6d-afd17d71b45a req-3a8a95b2-b398-43cb-80ac-f16c7fa362b7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.292 253542 DEBUG oslo_concurrency.lockutils [req-8bdfc621-40e1-41f1-8c6d-afd17d71b45a req-3a8a95b2-b398-43cb-80ac-f16c7fa362b7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.292 253542 DEBUG oslo_concurrency.lockutils [req-8bdfc621-40e1-41f1-8c6d-afd17d71b45a req-3a8a95b2-b398-43cb-80ac-f16c7fa362b7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.293 253542 DEBUG nova.compute.manager [req-8bdfc621-40e1-41f1-8c6d-afd17d71b45a req-3a8a95b2-b398-43cb-80ac-f16c7fa362b7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Processing event network-vif-plugged-66f4cf24-739f-46ed-af06-5f4556c06239 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:31:11 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3902370476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:31:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2636827777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.592 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.599 253542 DEBUG nova.compute.provider_tree [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.621 253542 DEBUG nova.scheduler.client.report [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.651 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.652 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.706 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.707 253542 DEBUG nova.network.neutron [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:31:11 compute-0 podman[303019]: 2025-11-25 08:31:11.623132068 +0000 UTC m=+0.031014178 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.727 253542 INFO nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.748 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:31:11 compute-0 podman[303019]: 2025-11-25 08:31:11.828050962 +0000 UTC m=+0.235933052 container create b720d735a61b178616057bb2a768976d3e91d336a9314c5eac1c0d22335bf219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.842 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.844 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.844 253542 INFO nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Creating image(s)
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.873 253542 DEBUG nova.storage.rbd_utils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 1a48a5fd-440c-4f73-89a2-720e38b2f798_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:11 compute-0 systemd[1]: Started libpod-conmon-b720d735a61b178616057bb2a768976d3e91d336a9314c5eac1c0d22335bf219.scope.
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.916 253542 DEBUG nova.storage.rbd_utils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 1a48a5fd-440c-4f73-89a2-720e38b2f798_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7494be21727f1d148df6cdffdefb1b7c9dfc8dd7633349f9d5176174bb46e059/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.946 253542 DEBUG nova.storage.rbd_utils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 1a48a5fd-440c-4f73-89a2-720e38b2f798_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.963 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:11 compute-0 wizardly_elion[302803]: {
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:         "osd_id": 1,
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:         "type": "bluestore"
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:     },
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:         "osd_id": 2,
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:         "type": "bluestore"
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:     },
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:         "osd_id": 0,
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:         "type": "bluestore"
Nov 25 08:31:11 compute-0 wizardly_elion[302803]:     }
Nov 25 08:31:11 compute-0 wizardly_elion[302803]: }
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.997 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059471.8902488, 964bed05-dc03-42f0-9a11-b18fa70a4787 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:11 compute-0 nova_compute[253538]: 2025-11-25 08:31:11.997 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] VM Started (Lifecycle Event)
Nov 25 08:31:12 compute-0 systemd[1]: libpod-563f145f9cf9eadae6426ddb94ce5a0f2d04674f2b01a919aa08b9c96f4ba81b.scope: Deactivated successfully.
Nov 25 08:31:12 compute-0 systemd[1]: libpod-563f145f9cf9eadae6426ddb94ce5a0f2d04674f2b01a919aa08b9c96f4ba81b.scope: Consumed 1.050s CPU time.
Nov 25 08:31:12 compute-0 conmon[302803]: conmon 563f145f9cf9eadae642 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-563f145f9cf9eadae6426ddb94ce5a0f2d04674f2b01a919aa08b9c96f4ba81b.scope/container/memory.events
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.016 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.021 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059471.8904066, 964bed05-dc03-42f0-9a11-b18fa70a4787 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.021 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] VM Paused (Lifecycle Event)
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.041 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.041 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.042 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.042 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.063 253542 DEBUG nova.storage.rbd_utils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 1a48a5fd-440c-4f73-89a2-720e38b2f798_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.066 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 1a48a5fd-440c-4f73-89a2-720e38b2f798_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:12 compute-0 podman[303019]: 2025-11-25 08:31:12.06782832 +0000 UTC m=+0.475710420 container init b720d735a61b178616057bb2a768976d3e91d336a9314c5eac1c0d22335bf219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 08:31:12 compute-0 podman[303019]: 2025-11-25 08:31:12.076101918 +0000 UTC m=+0.483983998 container start b720d735a61b178616057bb2a768976d3e91d336a9314c5eac1c0d22335bf219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 08:31:12 compute-0 neutron-haproxy-ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444[303121]: [NOTICE]   (303183) : New worker (303185) forked
Nov 25 08:31:12 compute-0 neutron-haproxy-ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444[303121]: [NOTICE]   (303183) : Loading success.
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.104 253542 DEBUG nova.policy [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e3ba89d7ba114005bf727750ed2eb249', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8098b3bc99ed4993a40e217876568115', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.108 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.113 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.128 253542 DEBUG nova.network.neutron [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Successfully created port: b929eb45-ce97-4f96-9519-2f3a4aebb9d9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.138 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:31:12 compute-0 podman[302786]: 2025-11-25 08:31:12.183370183 +0000 UTC m=+1.441122663 container died 563f145f9cf9eadae6426ddb94ce5a0f2d04674f2b01a919aa08b9c96f4ba81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.228 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bcb935bd-8596-426c-8f99-55d4b9545321 in datapath 8b63866f-8f7c-4d12-9269-798a110ab5eb unbound from our chassis
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.229 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8b63866f-8f7c-4d12-9269-798a110ab5eb
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.239 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[223d9984-c035-4ed4-ad40-84a2e49fb723]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.240 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8b63866f-81 in ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.242 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8b63866f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.243 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4f62aa-a6f2-4567-bb2a-3fa37c0db064]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.244 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[81608c47-25c7-4928-ae4b-cff7e086dff5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.255 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[29f3feb1-2902-4b28-adb5-91c239e39ff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.285 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a97fa978-6515-43b0-b052-7c27db1f694b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-be0c243f687bf07047bc14ca35667bada7d30f12cb17b93def6749ce338dd672-merged.mount: Deactivated successfully.
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.322 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[23c6637d-d733-4282-9b90-d70ed46327a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.328 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e6855ace-571a-48c5-9023-e224f172b31c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:12 compute-0 NetworkManager[48915]: <info>  [1764059472.3301] manager: (tap8b63866f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/157)
Nov 25 08:31:12 compute-0 systemd-udevd[302893]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.353 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 59d86d83-1ff4-4f9e-96f3-5e381272de5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.374 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5df4b077-d7ac-47a6-b3c4-d64818f7bdce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.377 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d9157bfe-2ca0-4397-a7d2-0c5db802d571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:12 compute-0 NetworkManager[48915]: <info>  [1764059472.4136] device (tap8b63866f-80): carrier: link connected
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.423 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[69f9d10b-c6d4-4bb6-9f82-42a12eccf596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.437 253542 DEBUG nova.storage.rbd_utils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] resizing rbd image 59d86d83-1ff4-4f9e-96f3-5e381272de5c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.442 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[370b9e4f-1713-4498-8f7f-7734974fa42e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b63866f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:b6:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482701, 'reachable_time': 29353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303260, 'error': None, 'target': 'ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.458 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eeddb0ea-98de-43dd-8575-95739c72bb1a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:b611'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 482701, 'tstamp': 482701}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303276, 'error': None, 'target': 'ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.476 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7ffb6807-4fa6-4513-87da-b28089939584]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b63866f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:b6:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482701, 'reachable_time': 29353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303280, 'error': None, 'target': 'ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.511 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[da3745b2-651f-4d4d-b47e-a75d9655a1a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:12 compute-0 ceph-mon[75015]: pgmap v1435: 321 pgs: 321 active+clean; 167 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 302 KiB/s rd, 3.6 MiB/s wr, 83 op/s
Nov 25 08:31:12 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2636827777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:12 compute-0 podman[303150]: 2025-11-25 08:31:12.56751204 +0000 UTC m=+0.550078814 container remove 563f145f9cf9eadae6426ddb94ce5a0f2d04674f2b01a919aa08b9c96f4ba81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_elion, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:31:12 compute-0 systemd[1]: libpod-conmon-563f145f9cf9eadae6426ddb94ce5a0f2d04674f2b01a919aa08b9c96f4ba81b.scope: Deactivated successfully.
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.584 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c27a7144-9252-4ab8-8df6-73fb8e92f7f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.587 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b63866f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.587 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.588 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b63866f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:12 compute-0 kernel: tap8b63866f-80: entered promiscuous mode
Nov 25 08:31:12 compute-0 NetworkManager[48915]: <info>  [1764059472.6389] manager: (tap8b63866f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.639 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.645 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8b63866f-80, col_values=(('external_ids', {'iface-id': '466aec11-00d8-46b8-b8ef-acde0ec32831'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.647 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:12 compute-0 ovn_controller[152859]: 2025-11-25T08:31:12Z|00327|binding|INFO|Releasing lport 466aec11-00d8-46b8-b8ef-acde0ec32831 from this chassis (sb_readonly=0)
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.648 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.650 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8b63866f-8f7c-4d12-9269-798a110ab5eb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8b63866f-8f7c-4d12-9269-798a110ab5eb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.652 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0bf57be5-5380-4085-8d4c-4937fd33dcee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.653 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-8b63866f-8f7c-4d12-9269-798a110ab5eb
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/8b63866f-8f7c-4d12-9269-798a110ab5eb.pid.haproxy
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:31:12 compute-0 sudo[302619]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 8b63866f-8f7c-4d12-9269-798a110ab5eb
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:31:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:12.653 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb', 'env', 'PROCESS_TAG=haproxy-8b63866f-8f7c-4d12-9269-798a110ab5eb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8b63866f-8f7c-4d12-9269-798a110ab5eb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:31:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.669 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.679 253542 DEBUG nova.network.neutron [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Successfully created port: f11428cf-fdd2-454c-a9f5-6974d43ec027 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:31:12 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:31:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:31:12 compute-0 podman[303287]: 2025-11-25 08:31:12.760211107 +0000 UTC m=+0.105639081 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:31:12 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.774 253542 DEBUG nova.network.neutron [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Successfully updated port: b929eb45-ce97-4f96-9519-2f3a4aebb9d9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:31:12 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev af56ef93-8afd-43f4-a261-043db7890bbe does not exist
Nov 25 08:31:12 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev a024a95f-5a34-493b-b14f-23560b065681 does not exist
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.788 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "refresh_cache-59d86d83-1ff4-4f9e-96f3-5e381272de5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.788 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquired lock "refresh_cache-59d86d83-1ff4-4f9e-96f3-5e381272de5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.789 253542 DEBUG nova.network.neutron [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:31:12 compute-0 sudo[303317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:31:12 compute-0 sudo[303317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:12 compute-0 sudo[303317]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:12 compute-0 sudo[303342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:31:12 compute-0 sudo[303342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:31:12 compute-0 sudo[303342]: pam_unix(sudo:session): session closed for user root
Nov 25 08:31:12 compute-0 nova_compute[253538]: 2025-11-25 08:31:12.942 253542 DEBUG nova.network.neutron [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:31:13 compute-0 podman[303389]: 2025-11-25 08:31:13.028818201 +0000 UTC m=+0.034020601 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:31:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 173 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 2.8 MiB/s wr, 76 op/s
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.190 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Updating instance_info_cache with network_info: [{"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:13 compute-0 podman[303389]: 2025-11-25 08:31:13.194006977 +0000 UTC m=+0.199209347 container create 32751cff355a8459d4ced169be3a04dca751fc267595686fe15361ead7dfe009 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.203 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.203 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.204 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:31:13 compute-0 systemd[1]: Started libpod-conmon-32751cff355a8459d4ced169be3a04dca751fc267595686fe15361ead7dfe009.scope.
Nov 25 08:31:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6e6ce104a0cc9dc2672ac84e82409bbed51283b99160cc57540d4e70e657e7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:13 compute-0 podman[303389]: 2025-11-25 08:31:13.314085326 +0000 UTC m=+0.319287736 container init 32751cff355a8459d4ced169be3a04dca751fc267595686fe15361ead7dfe009 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.316 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 1a48a5fd-440c-4f73-89a2-720e38b2f798_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:13 compute-0 podman[303389]: 2025-11-25 08:31:13.320719639 +0000 UTC m=+0.325922009 container start 32751cff355a8459d4ced169be3a04dca751fc267595686fe15361ead7dfe009 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 08:31:13 compute-0 neutron-haproxy-ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb[303405]: [NOTICE]   (303427) : New worker (303445) forked
Nov 25 08:31:13 compute-0 neutron-haproxy-ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb[303405]: [NOTICE]   (303427) : Loading success.
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.371 253542 DEBUG nova.objects.instance [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lazy-loading 'migration_context' on Instance uuid 59d86d83-1ff4-4f9e-96f3-5e381272de5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.407 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.408 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Ensure instance console log exists: /var/lib/nova/instances/59d86d83-1ff4-4f9e-96f3-5e381272de5c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.408 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.408 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.409 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.415 253542 DEBUG nova.storage.rbd_utils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] resizing rbd image 1a48a5fd-440c-4f73-89a2-720e38b2f798_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.582 253542 DEBUG nova.objects.instance [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lazy-loading 'migration_context' on Instance uuid 1a48a5fd-440c-4f73-89a2-720e38b2f798 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.593 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.594 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Ensure instance console log exists: /var/lib/nova/instances/1a48a5fd-440c-4f73-89a2-720e38b2f798/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.594 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.594 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.594 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:31:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:31:13 compute-0 ceph-mon[75015]: pgmap v1436: 321 pgs: 321 active+clean; 173 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 2.8 MiB/s wr, 76 op/s
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.738 253542 DEBUG nova.network.neutron [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Successfully updated port: f11428cf-fdd2-454c-a9f5-6974d43ec027 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.754 253542 DEBUG nova.compute.manager [req-bc8f8090-24b1-47d2-82c8-e2258b102cee req-974af66c-806d-4637-9a57-b3fe197615a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received event network-vif-plugged-66f4cf24-739f-46ed-af06-5f4556c06239 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.754 253542 DEBUG oslo_concurrency.lockutils [req-bc8f8090-24b1-47d2-82c8-e2258b102cee req-974af66c-806d-4637-9a57-b3fe197615a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.754 253542 DEBUG oslo_concurrency.lockutils [req-bc8f8090-24b1-47d2-82c8-e2258b102cee req-974af66c-806d-4637-9a57-b3fe197615a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.754 253542 DEBUG oslo_concurrency.lockutils [req-bc8f8090-24b1-47d2-82c8-e2258b102cee req-974af66c-806d-4637-9a57-b3fe197615a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.755 253542 DEBUG nova.compute.manager [req-bc8f8090-24b1-47d2-82c8-e2258b102cee req-974af66c-806d-4637-9a57-b3fe197615a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] No event matching network-vif-plugged-66f4cf24-739f-46ed-af06-5f4556c06239 in dict_keys([('network-vif-plugged', 'bcb935bd-8596-426c-8f99-55d4b9545321')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.755 253542 WARNING nova.compute.manager [req-bc8f8090-24b1-47d2-82c8-e2258b102cee req-974af66c-806d-4637-9a57-b3fe197615a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received unexpected event network-vif-plugged-66f4cf24-739f-46ed-af06-5f4556c06239 for instance with vm_state building and task_state spawning.
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.755 253542 DEBUG nova.compute.manager [req-bc8f8090-24b1-47d2-82c8-e2258b102cee req-974af66c-806d-4637-9a57-b3fe197615a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Received event network-changed-b929eb45-ce97-4f96-9519-2f3a4aebb9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.755 253542 DEBUG nova.compute.manager [req-bc8f8090-24b1-47d2-82c8-e2258b102cee req-974af66c-806d-4637-9a57-b3fe197615a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Refreshing instance network info cache due to event network-changed-b929eb45-ce97-4f96-9519-2f3a4aebb9d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.756 253542 DEBUG oslo_concurrency.lockutils [req-bc8f8090-24b1-47d2-82c8-e2258b102cee req-974af66c-806d-4637-9a57-b3fe197615a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-59d86d83-1ff4-4f9e-96f3-5e381272de5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.758 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "refresh_cache-1a48a5fd-440c-4f73-89a2-720e38b2f798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.758 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquired lock "refresh_cache-1a48a5fd-440c-4f73-89a2-720e38b2f798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.758 253542 DEBUG nova.network.neutron [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.829 253542 DEBUG nova.compute.manager [req-789f5e9a-723d-497d-a2e4-ac24442b2e9a req-bf12b13f-3dd8-40f3-a127-2273090e2e30 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Received event network-changed-f11428cf-fdd2-454c-a9f5-6974d43ec027 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.830 253542 DEBUG nova.compute.manager [req-789f5e9a-723d-497d-a2e4-ac24442b2e9a req-bf12b13f-3dd8-40f3-a127-2273090e2e30 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Refreshing instance network info cache due to event network-changed-f11428cf-fdd2-454c-a9f5-6974d43ec027. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.830 253542 DEBUG oslo_concurrency.lockutils [req-789f5e9a-723d-497d-a2e4-ac24442b2e9a req-bf12b13f-3dd8-40f3-a127-2273090e2e30 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1a48a5fd-440c-4f73-89a2-720e38b2f798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.928 253542 DEBUG nova.network.neutron [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.965 253542 DEBUG nova.network.neutron [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Updating instance_info_cache with network_info: [{"id": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "address": "fa:16:3e:5b:58:52", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb929eb45-ce", "ovs_interfaceid": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.982 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Releasing lock "refresh_cache-59d86d83-1ff4-4f9e-96f3-5e381272de5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.983 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Instance network_info: |[{"id": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "address": "fa:16:3e:5b:58:52", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb929eb45-ce", "ovs_interfaceid": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.983 253542 DEBUG oslo_concurrency.lockutils [req-bc8f8090-24b1-47d2-82c8-e2258b102cee req-974af66c-806d-4637-9a57-b3fe197615a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-59d86d83-1ff4-4f9e-96f3-5e381272de5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.983 253542 DEBUG nova.network.neutron [req-bc8f8090-24b1-47d2-82c8-e2258b102cee req-974af66c-806d-4637-9a57-b3fe197615a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Refreshing network info cache for port b929eb45-ce97-4f96-9519-2f3a4aebb9d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.987 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Start _get_guest_xml network_info=[{"id": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "address": "fa:16:3e:5b:58:52", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb929eb45-ce", "ovs_interfaceid": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.992 253542 WARNING nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.999 253542 DEBUG nova.virt.libvirt.host [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:31:13 compute-0 nova_compute[253538]: 2025-11-25 08:31:13.999 253542 DEBUG nova.virt.libvirt.host [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.004 253542 DEBUG nova.virt.libvirt.host [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.005 253542 DEBUG nova.virt.libvirt.host [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.005 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.005 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.006 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.006 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.006 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.006 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.006 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.007 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.007 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.007 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.007 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.008 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.010 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:31:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3452793299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.469 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.489 253542 DEBUG nova.storage.rbd_utils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 59d86d83-1ff4-4f9e-96f3-5e381272de5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.492 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:31:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3452793299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.831 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.893 253542 DEBUG nova.network.neutron [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Updating instance_info_cache with network_info: [{"id": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "address": "fa:16:3e:ba:01:86", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11428cf-fd", "ovs_interfaceid": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:31:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4236593914' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.968 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.969 253542 DEBUG nova.virt.libvirt.vif [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:31:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-541278070',display_name='tempest-tempest.common.compute-instance-541278070-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-541278070-1',id=44,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8098b3bc99ed4993a40e217876568115',ramdisk_id='',reservation_id='r-i36adckz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1813277106',owner_user_name='tempest-MultipleCreateTestJSON-1813277106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:31:10Z,user_data=None,user_id='e3ba89d7ba114005bf727750ed2eb249',uuid=59d86d83-1ff4-4f9e-96f3-5e381272de5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "address": "fa:16:3e:5b:58:52", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb929eb45-ce", "ovs_interfaceid": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.970 253542 DEBUG nova.network.os_vif_util [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converting VIF {"id": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "address": "fa:16:3e:5b:58:52", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb929eb45-ce", "ovs_interfaceid": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.971 253542 DEBUG nova.network.os_vif_util [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:58:52,bridge_name='br-int',has_traffic_filtering=True,id=b929eb45-ce97-4f96-9519-2f3a4aebb9d9,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb929eb45-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.972 253542 DEBUG nova.objects.instance [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lazy-loading 'pci_devices' on Instance uuid 59d86d83-1ff4-4f9e-96f3-5e381272de5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.991 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:31:14 compute-0 nova_compute[253538]:   <uuid>59d86d83-1ff4-4f9e-96f3-5e381272de5c</uuid>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   <name>instance-0000002c</name>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <nova:name>tempest-tempest.common.compute-instance-541278070-1</nova:name>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:31:13</nova:creationTime>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:31:14 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:31:14 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:31:14 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:31:14 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:31:14 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:31:14 compute-0 nova_compute[253538]:         <nova:user uuid="e3ba89d7ba114005bf727750ed2eb249">tempest-MultipleCreateTestJSON-1813277106-project-member</nova:user>
Nov 25 08:31:14 compute-0 nova_compute[253538]:         <nova:project uuid="8098b3bc99ed4993a40e217876568115">tempest-MultipleCreateTestJSON-1813277106</nova:project>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:31:14 compute-0 nova_compute[253538]:         <nova:port uuid="b929eb45-ce97-4f96-9519-2f3a4aebb9d9">
Nov 25 08:31:14 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <system>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <entry name="serial">59d86d83-1ff4-4f9e-96f3-5e381272de5c</entry>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <entry name="uuid">59d86d83-1ff4-4f9e-96f3-5e381272de5c</entry>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     </system>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   <os>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   </os>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   <features>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   </features>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/59d86d83-1ff4-4f9e-96f3-5e381272de5c_disk">
Nov 25 08:31:14 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       </source>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:31:14 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/59d86d83-1ff4-4f9e-96f3-5e381272de5c_disk.config">
Nov 25 08:31:14 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       </source>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:31:14 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:5b:58:52"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <target dev="tapb929eb45-ce"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/59d86d83-1ff4-4f9e-96f3-5e381272de5c/console.log" append="off"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <video>
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     </video>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:31:14 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:31:14 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:31:14 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:31:14 compute-0 nova_compute[253538]: </domain>
Nov 25 08:31:14 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.992 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Preparing to wait for external event network-vif-plugged-b929eb45-ce97-4f96-9519-2f3a4aebb9d9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.992 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.992 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.993 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.993 253542 DEBUG nova.virt.libvirt.vif [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:31:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-541278070',display_name='tempest-tempest.common.compute-instance-541278070-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-541278070-1',id=44,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8098b3bc99ed4993a40e217876568115',ramdisk_id='',reservation_id='r-i36adckz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1813277106',owner_user_name='tempest-MultipleCreateTestJSON-1813277106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:31:10Z,user_data=None,user_id='e3ba89d7ba114005bf727750ed2eb249',uuid=59d86d83-1ff4-4f9e-96f3-5e381272de5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "address": "fa:16:3e:5b:58:52", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb929eb45-ce", "ovs_interfaceid": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.994 253542 DEBUG nova.network.os_vif_util [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converting VIF {"id": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "address": "fa:16:3e:5b:58:52", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb929eb45-ce", "ovs_interfaceid": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.995 253542 DEBUG nova.network.os_vif_util [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:58:52,bridge_name='br-int',has_traffic_filtering=True,id=b929eb45-ce97-4f96-9519-2f3a4aebb9d9,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb929eb45-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.995 253542 DEBUG os_vif [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:58:52,bridge_name='br-int',has_traffic_filtering=True,id=b929eb45-ce97-4f96-9519-2f3a4aebb9d9,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb929eb45-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.996 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.996 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:14 compute-0 nova_compute[253538]: 2025-11-25 08:31:14.996 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.000 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.000 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb929eb45-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.001 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb929eb45-ce, col_values=(('external_ids', {'iface-id': 'b929eb45-ce97-4f96-9519-2f3a4aebb9d9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:58:52', 'vm-uuid': '59d86d83-1ff4-4f9e-96f3-5e381272de5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.003 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:15 compute-0 NetworkManager[48915]: <info>  [1764059475.0047] manager: (tapb929eb45-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/159)
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.014 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.015 253542 INFO os_vif [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:58:52,bridge_name='br-int',has_traffic_filtering=True,id=b929eb45-ce97-4f96-9519-2f3a4aebb9d9,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb929eb45-ce')
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.054 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Releasing lock "refresh_cache-1a48a5fd-440c-4f73-89a2-720e38b2f798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.055 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Instance network_info: |[{"id": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "address": "fa:16:3e:ba:01:86", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11428cf-fd", "ovs_interfaceid": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.055 253542 DEBUG oslo_concurrency.lockutils [req-789f5e9a-723d-497d-a2e4-ac24442b2e9a req-bf12b13f-3dd8-40f3-a127-2273090e2e30 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1a48a5fd-440c-4f73-89a2-720e38b2f798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.055 253542 DEBUG nova.network.neutron [req-789f5e9a-723d-497d-a2e4-ac24442b2e9a req-bf12b13f-3dd8-40f3-a127-2273090e2e30 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Refreshing network info cache for port f11428cf-fdd2-454c-a9f5-6974d43ec027 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.060 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Start _get_guest_xml network_info=[{"id": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "address": "fa:16:3e:ba:01:86", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11428cf-fd", "ovs_interfaceid": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.066 253542 WARNING nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.071 253542 DEBUG nova.virt.libvirt.host [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.072 253542 DEBUG nova.virt.libvirt.host [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.076 253542 DEBUG nova.virt.libvirt.host [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.076 253542 DEBUG nova.virt.libvirt.host [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.076 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.077 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.077 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.077 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.078 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.078 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.078 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.078 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.078 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.079 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.079 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.079 253542 DEBUG nova.virt.hardware [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.082 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.126 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.126 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.126 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] No VIF found with MAC fa:16:3e:5b:58:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.127 253542 INFO nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Using config drive
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.146 253542 DEBUG nova.storage.rbd_utils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 59d86d83-1ff4-4f9e-96f3-5e381272de5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 235 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 5.0 MiB/s wr, 125 op/s
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.318 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:31:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2116522926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.543 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.573 253542 DEBUG nova.storage.rbd_utils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 1a48a5fd-440c-4f73-89a2-720e38b2f798_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:15 compute-0 nova_compute[253538]: 2025-11-25 08:31:15.579 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:31:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4236593914' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:15 compute-0 ceph-mon[75015]: pgmap v1437: 321 pgs: 321 active+clean; 235 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 5.0 MiB/s wr, 125 op/s
Nov 25 08:31:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2116522926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:31:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1870820973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.044 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.047 253542 DEBUG nova.virt.libvirt.vif [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:31:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-541278070',display_name='tempest-tempest.common.compute-instance-541278070-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-541278070-2',id=45,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8098b3bc99ed4993a40e217876568115',ramdisk_id='',reservation_id='r-i36adckz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1813277106',owner_user_name='tempest-MultipleCreateTestJSON-1813277106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:31:11Z,user_data=None,user_id='e3ba89d7ba114005bf727750ed2eb249',uuid=1a48a5fd-440c-4f73-89a2-720e38b2f798,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "address": "fa:16:3e:ba:01:86", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11428cf-fd", "ovs_interfaceid": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.048 253542 DEBUG nova.network.os_vif_util [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converting VIF {"id": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "address": "fa:16:3e:ba:01:86", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11428cf-fd", "ovs_interfaceid": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.049 253542 DEBUG nova.network.os_vif_util [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:01:86,bridge_name='br-int',has_traffic_filtering=True,id=f11428cf-fdd2-454c-a9f5-6974d43ec027,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf11428cf-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.051 253542 DEBUG nova.objects.instance [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1a48a5fd-440c-4f73-89a2-720e38b2f798 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.066 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:31:16 compute-0 nova_compute[253538]:   <uuid>1a48a5fd-440c-4f73-89a2-720e38b2f798</uuid>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   <name>instance-0000002d</name>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <nova:name>tempest-tempest.common.compute-instance-541278070-2</nova:name>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:31:15</nova:creationTime>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:31:16 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:31:16 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:31:16 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:31:16 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:31:16 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:31:16 compute-0 nova_compute[253538]:         <nova:user uuid="e3ba89d7ba114005bf727750ed2eb249">tempest-MultipleCreateTestJSON-1813277106-project-member</nova:user>
Nov 25 08:31:16 compute-0 nova_compute[253538]:         <nova:project uuid="8098b3bc99ed4993a40e217876568115">tempest-MultipleCreateTestJSON-1813277106</nova:project>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:31:16 compute-0 nova_compute[253538]:         <nova:port uuid="f11428cf-fdd2-454c-a9f5-6974d43ec027">
Nov 25 08:31:16 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <system>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <entry name="serial">1a48a5fd-440c-4f73-89a2-720e38b2f798</entry>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <entry name="uuid">1a48a5fd-440c-4f73-89a2-720e38b2f798</entry>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     </system>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   <os>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   </os>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   <features>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   </features>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/1a48a5fd-440c-4f73-89a2-720e38b2f798_disk">
Nov 25 08:31:16 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       </source>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:31:16 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/1a48a5fd-440c-4f73-89a2-720e38b2f798_disk.config">
Nov 25 08:31:16 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       </source>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:31:16 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:ba:01:86"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <target dev="tapf11428cf-fd"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/1a48a5fd-440c-4f73-89a2-720e38b2f798/console.log" append="off"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <video>
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     </video>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:31:16 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:31:16 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:31:16 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:31:16 compute-0 nova_compute[253538]: </domain>
Nov 25 08:31:16 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.067 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Preparing to wait for external event network-vif-plugged-f11428cf-fdd2-454c-a9f5-6974d43ec027 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.067 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.068 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.068 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.069 253542 DEBUG nova.virt.libvirt.vif [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:31:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-541278070',display_name='tempest-tempest.common.compute-instance-541278070-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-541278070-2',id=45,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8098b3bc99ed4993a40e217876568115',ramdisk_id='',reservation_id='r-i36adckz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1813277106',owner_user_name='tempest-MultipleCreateTestJSON-1813277106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:31:11Z,user_data=None,user_id='e3ba89d7ba114005bf727750ed2eb249',uuid=1a48a5fd-440c-4f73-89a2-720e38b2f798,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "address": "fa:16:3e:ba:01:86", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11428cf-fd", "ovs_interfaceid": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.070 253542 DEBUG nova.network.os_vif_util [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converting VIF {"id": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "address": "fa:16:3e:ba:01:86", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11428cf-fd", "ovs_interfaceid": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.070 253542 DEBUG nova.network.os_vif_util [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:01:86,bridge_name='br-int',has_traffic_filtering=True,id=f11428cf-fdd2-454c-a9f5-6974d43ec027,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf11428cf-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.071 253542 DEBUG os_vif [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:01:86,bridge_name='br-int',has_traffic_filtering=True,id=f11428cf-fdd2-454c-a9f5-6974d43ec027,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf11428cf-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.071 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.072 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.073 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.075 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.075 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf11428cf-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.076 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf11428cf-fd, col_values=(('external_ids', {'iface-id': 'f11428cf-fdd2-454c-a9f5-6974d43ec027', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:01:86', 'vm-uuid': '1a48a5fd-440c-4f73-89a2-720e38b2f798'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.078 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:16 compute-0 NetworkManager[48915]: <info>  [1764059476.0789] manager: (tapf11428cf-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.081 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.085 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.086 253542 INFO os_vif [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:01:86,bridge_name='br-int',has_traffic_filtering=True,id=f11428cf-fdd2-454c-a9f5-6974d43ec027,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf11428cf-fd')
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.126 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.127 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.127 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] No VIF found with MAC fa:16:3e:ba:01:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.128 253542 INFO nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Using config drive
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.148 253542 DEBUG nova.storage.rbd_utils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 1a48a5fd-440c-4f73-89a2-720e38b2f798_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.420 253542 DEBUG nova.compute.manager [req-39bff585-6d0f-48fa-8166-046e56f6d063 req-8ad5e8d2-935b-4990-8147-fa557d14188e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received event network-vif-plugged-bcb935bd-8596-426c-8f99-55d4b9545321 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.420 253542 DEBUG oslo_concurrency.lockutils [req-39bff585-6d0f-48fa-8166-046e56f6d063 req-8ad5e8d2-935b-4990-8147-fa557d14188e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.420 253542 DEBUG oslo_concurrency.lockutils [req-39bff585-6d0f-48fa-8166-046e56f6d063 req-8ad5e8d2-935b-4990-8147-fa557d14188e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.420 253542 DEBUG oslo_concurrency.lockutils [req-39bff585-6d0f-48fa-8166-046e56f6d063 req-8ad5e8d2-935b-4990-8147-fa557d14188e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.421 253542 DEBUG nova.compute.manager [req-39bff585-6d0f-48fa-8166-046e56f6d063 req-8ad5e8d2-935b-4990-8147-fa557d14188e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Processing event network-vif-plugged-bcb935bd-8596-426c-8f99-55d4b9545321 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.421 253542 DEBUG nova.compute.manager [req-39bff585-6d0f-48fa-8166-046e56f6d063 req-8ad5e8d2-935b-4990-8147-fa557d14188e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received event network-vif-plugged-bcb935bd-8596-426c-8f99-55d4b9545321 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.421 253542 DEBUG oslo_concurrency.lockutils [req-39bff585-6d0f-48fa-8166-046e56f6d063 req-8ad5e8d2-935b-4990-8147-fa557d14188e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.421 253542 DEBUG oslo_concurrency.lockutils [req-39bff585-6d0f-48fa-8166-046e56f6d063 req-8ad5e8d2-935b-4990-8147-fa557d14188e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.421 253542 DEBUG oslo_concurrency.lockutils [req-39bff585-6d0f-48fa-8166-046e56f6d063 req-8ad5e8d2-935b-4990-8147-fa557d14188e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.421 253542 DEBUG nova.compute.manager [req-39bff585-6d0f-48fa-8166-046e56f6d063 req-8ad5e8d2-935b-4990-8147-fa557d14188e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] No waiting events found dispatching network-vif-plugged-bcb935bd-8596-426c-8f99-55d4b9545321 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.422 253542 WARNING nova.compute.manager [req-39bff585-6d0f-48fa-8166-046e56f6d063 req-8ad5e8d2-935b-4990-8147-fa557d14188e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received unexpected event network-vif-plugged-bcb935bd-8596-426c-8f99-55d4b9545321 for instance with vm_state building and task_state spawning.
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.422 253542 DEBUG nova.compute.manager [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Instance event wait completed in 4 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.427 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059476.4259725, 964bed05-dc03-42f0-9a11-b18fa70a4787 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.428 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] VM Resumed (Lifecycle Event)
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.430 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.433 253542 INFO nova.virt.libvirt.driver [-] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Instance spawned successfully.
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.434 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.447 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.451 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.454 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.454 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.455 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.455 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.455 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.456 253542 DEBUG nova.virt.libvirt.driver [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.475 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.547 253542 INFO nova.compute.manager [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Took 13.84 seconds to spawn the instance on the hypervisor.
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.548 253542 DEBUG nova.compute.manager [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1870820973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.843 253542 INFO nova.compute.manager [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Took 15.04 seconds to build instance.
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.871 253542 DEBUG oslo_concurrency.lockutils [None req-8a35b37d-5eee-45b4-8e9b-6f14c0eb4b80 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.872 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 6.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.872 253542 INFO nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.873 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.959 253542 INFO nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Creating config drive at /var/lib/nova/instances/59d86d83-1ff4-4f9e-96f3-5e381272de5c/disk.config
Nov 25 08:31:16 compute-0 nova_compute[253538]: 2025-11-25 08:31:16.965 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59d86d83-1ff4-4f9e-96f3-5e381272de5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpos1dpykz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.039 253542 DEBUG nova.network.neutron [req-bc8f8090-24b1-47d2-82c8-e2258b102cee req-974af66c-806d-4637-9a57-b3fe197615a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Updated VIF entry in instance network info cache for port b929eb45-ce97-4f96-9519-2f3a4aebb9d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.040 253542 DEBUG nova.network.neutron [req-bc8f8090-24b1-47d2-82c8-e2258b102cee req-974af66c-806d-4637-9a57-b3fe197615a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Updating instance_info_cache with network_info: [{"id": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "address": "fa:16:3e:5b:58:52", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb929eb45-ce", "ovs_interfaceid": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.061 253542 DEBUG oslo_concurrency.lockutils [req-bc8f8090-24b1-47d2-82c8-e2258b102cee req-974af66c-806d-4637-9a57-b3fe197615a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-59d86d83-1ff4-4f9e-96f3-5e381272de5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.124 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59d86d83-1ff4-4f9e-96f3-5e381272de5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpos1dpykz" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.148 253542 DEBUG nova.storage.rbd_utils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 59d86d83-1ff4-4f9e-96f3-5e381272de5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.152 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59d86d83-1ff4-4f9e-96f3-5e381272de5c/disk.config 59d86d83-1ff4-4f9e-96f3-5e381272de5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 260 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 5.0 MiB/s wr, 93 op/s
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.185 253542 INFO nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Creating config drive at /var/lib/nova/instances/1a48a5fd-440c-4f73-89a2-720e38b2f798/disk.config
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.195 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1a48a5fd-440c-4f73-89a2-720e38b2f798/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy0d19qbc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.340 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1a48a5fd-440c-4f73-89a2-720e38b2f798/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy0d19qbc" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.365 253542 DEBUG nova.storage.rbd_utils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 1a48a5fd-440c-4f73-89a2-720e38b2f798_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.370 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1a48a5fd-440c-4f73-89a2-720e38b2f798/disk.config 1a48a5fd-440c-4f73-89a2-720e38b2f798_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.551 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.576 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.576 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.619 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59d86d83-1ff4-4f9e-96f3-5e381272de5c/disk.config 59d86d83-1ff4-4f9e-96f3-5e381272de5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.620 253542 DEBUG oslo_concurrency.processutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1a48a5fd-440c-4f73-89a2-720e38b2f798/disk.config 1a48a5fd-440c-4f73-89a2-720e38b2f798_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.621 253542 INFO nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Deleting local config drive /var/lib/nova/instances/59d86d83-1ff4-4f9e-96f3-5e381272de5c/disk.config because it was imported into RBD.
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.621 253542 INFO nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Deleting local config drive /var/lib/nova/instances/1a48a5fd-440c-4f73-89a2-720e38b2f798/disk.config because it was imported into RBD.
Nov 25 08:31:17 compute-0 NetworkManager[48915]: <info>  [1764059477.7138] manager: (tapf11428cf-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/161)
Nov 25 08:31:17 compute-0 kernel: tapf11428cf-fd: entered promiscuous mode
Nov 25 08:31:17 compute-0 ovn_controller[152859]: 2025-11-25T08:31:17Z|00328|binding|INFO|Claiming lport f11428cf-fdd2-454c-a9f5-6974d43ec027 for this chassis.
Nov 25 08:31:17 compute-0 ovn_controller[152859]: 2025-11-25T08:31:17Z|00329|binding|INFO|f11428cf-fdd2-454c-a9f5-6974d43ec027: Claiming fa:16:3e:ba:01:86 10.100.0.14
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.722 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:17 compute-0 NetworkManager[48915]: <info>  [1764059477.7295] manager: (tapb929eb45-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/162)
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.735 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:01:86 10.100.0.14'], port_security=['fa:16:3e:ba:01:86 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1a48a5fd-440c-4f73-89a2-720e38b2f798', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8098b3bc99ed4993a40e217876568115', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fca0453d-8e16-42b4-ac5c-91be3458ce00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eafba2a5-2b65-4a07-b367-d4c961f70ef4, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f11428cf-fdd2-454c-a9f5-6974d43ec027) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.737 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f11428cf-fdd2-454c-a9f5-6974d43ec027 in datapath 65eaa05f-c76b-4ec2-a994-1bde6d48fae1 bound to our chassis
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.740 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65eaa05f-c76b-4ec2-a994-1bde6d48fae1
Nov 25 08:31:17 compute-0 kernel: tapb929eb45-ce: entered promiscuous mode
Nov 25 08:31:17 compute-0 ovn_controller[152859]: 2025-11-25T08:31:17Z|00330|binding|INFO|Setting lport f11428cf-fdd2-454c-a9f5-6974d43ec027 ovn-installed in OVS
Nov 25 08:31:17 compute-0 ovn_controller[152859]: 2025-11-25T08:31:17Z|00331|binding|INFO|Setting lport f11428cf-fdd2-454c-a9f5-6974d43ec027 up in Southbound
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.751 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:17 compute-0 ovn_controller[152859]: 2025-11-25T08:31:17Z|00332|if_status|INFO|Not updating pb chassis for b929eb45-ce97-4f96-9519-2f3a4aebb9d9 now as sb is readonly
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.755 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:17 compute-0 systemd-udevd[303791]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:31:17 compute-0 systemd-udevd[303792]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.760 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1028e63a-70ed-4c17-83a1-93a77cd4c23b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.762 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap65eaa05f-c1 in ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.765 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap65eaa05f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.765 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[23d894a4-0e25-40b6-8f52-769b4dbab660]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:17 compute-0 ovn_controller[152859]: 2025-11-25T08:31:17Z|00333|binding|INFO|Claiming lport b929eb45-ce97-4f96-9519-2f3a4aebb9d9 for this chassis.
Nov 25 08:31:17 compute-0 ovn_controller[152859]: 2025-11-25T08:31:17Z|00334|binding|INFO|b929eb45-ce97-4f96-9519-2f3a4aebb9d9: Claiming fa:16:3e:5b:58:52 10.100.0.5
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.767 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[10c4fc2a-e449-46f4-85cf-b0a36ebebfae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.778 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:58:52 10.100.0.5'], port_security=['fa:16:3e:5b:58:52 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '59d86d83-1ff4-4f9e-96f3-5e381272de5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8098b3bc99ed4993a40e217876568115', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fca0453d-8e16-42b4-ac5c-91be3458ce00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eafba2a5-2b65-4a07-b367-d4c961f70ef4, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=b929eb45-ce97-4f96-9519-2f3a4aebb9d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:31:17 compute-0 NetworkManager[48915]: <info>  [1764059477.7795] device (tapb929eb45-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:31:17 compute-0 NetworkManager[48915]: <info>  [1764059477.7811] device (tapb929eb45-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:31:17 compute-0 NetworkManager[48915]: <info>  [1764059477.7819] device (tapf11428cf-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:31:17 compute-0 NetworkManager[48915]: <info>  [1764059477.7829] device (tapf11428cf-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.785 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[ed4d713b-f146-4080-a2f1-19ca0eaf92a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:17 compute-0 ovn_controller[152859]: 2025-11-25T08:31:17Z|00335|binding|INFO|Setting lport b929eb45-ce97-4f96-9519-2f3a4aebb9d9 ovn-installed in OVS
Nov 25 08:31:17 compute-0 ovn_controller[152859]: 2025-11-25T08:31:17Z|00336|binding|INFO|Setting lport b929eb45-ce97-4f96-9519-2f3a4aebb9d9 up in Southbound
Nov 25 08:31:17 compute-0 nova_compute[253538]: 2025-11-25 08:31:17.791 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:17 compute-0 systemd-machined[215790]: New machine qemu-50-instance-0000002d.
Nov 25 08:31:17 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-0000002d.
Nov 25 08:31:17 compute-0 systemd-machined[215790]: New machine qemu-49-instance-0000002c.
Nov 25 08:31:17 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-0000002c.
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.811 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a2db2b57-c399-47a2-b037-28526f1c1ec9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:17 compute-0 ceph-mon[75015]: pgmap v1438: 321 pgs: 321 active+clean; 260 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 5.0 MiB/s wr, 93 op/s
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.842 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7e308f92-fc39-4dc7-9097-64f141ba361f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:17 compute-0 NetworkManager[48915]: <info>  [1764059477.8547] manager: (tap65eaa05f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/163)
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.853 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8da06517-a381-4221-a04f-3718dbabaf65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.890 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[973b3c81-ba67-4f83-8350-9c8dd6f944e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.893 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0060de-7ce7-4801-ad47-97d26e33346d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:17 compute-0 NetworkManager[48915]: <info>  [1764059477.9123] device (tap65eaa05f-c0): carrier: link connected
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.916 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b65bac75-e58d-48b8-b12f-741482ca8e5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.933 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[251efcc3-5322-4483-90fc-b0a031fd2368]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65eaa05f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:f2:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483251, 'reachable_time': 39905, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303842, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.952 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c2d6e7e-5216-4ce0-bee2-a6e438ce26a2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe65:f258'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 483251, 'tstamp': 483251}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303843, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:17.971 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7b81006d-4448-495e-af98-b1bb3b54c96a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65eaa05f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:f2:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483251, 'reachable_time': 39905, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303844, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.004 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6b324915-0993-4652-aa50-f4c8034c400c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.063 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e53b635e-f5c2-4f2f-b040-7081a2975b58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.065 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65eaa05f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.065 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.066 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65eaa05f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:18 compute-0 kernel: tap65eaa05f-c0: entered promiscuous mode
Nov 25 08:31:18 compute-0 NetworkManager[48915]: <info>  [1764059478.0687] manager: (tap65eaa05f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/164)
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.067 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.071 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.074 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65eaa05f-c0, col_values=(('external_ids', {'iface-id': 'd2f4cf46-9887-4d81-a2f6-0b34dd30bde4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.076 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:18 compute-0 ovn_controller[152859]: 2025-11-25T08:31:18Z|00337|binding|INFO|Releasing lport d2f4cf46-9887-4d81-a2f6-0b34dd30bde4 from this chassis (sb_readonly=0)
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.076 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.079 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/65eaa05f-c76b-4ec2-a994-1bde6d48fae1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/65eaa05f-c76b-4ec2-a994-1bde6d48fae1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.079 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cec162a7-be0a-4c79-bbd3-ffea4b22aa83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.080 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-65eaa05f-c76b-4ec2-a994-1bde6d48fae1
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/65eaa05f-c76b-4ec2-a994-1bde6d48fae1.pid.haproxy
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 65eaa05f-c76b-4ec2-a994-1bde6d48fae1
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.082 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'env', 'PROCESS_TAG=haproxy-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/65eaa05f-c76b-4ec2-a994-1bde6d48fae1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.091 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:31:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3680679224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.189 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.272 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.272 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.276 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.276 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.278 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.278 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.281 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.281 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.515 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059478.5147948, 59d86d83-1ff4-4f9e-96f3-5e381272de5c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.515 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] VM Started (Lifecycle Event)
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.537 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.538 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3875MB free_disk=59.88031768798828GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.538 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.538 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.540 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.543 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059478.5170894, 59d86d83-1ff4-4f9e-96f3-5e381272de5c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.544 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] VM Paused (Lifecycle Event)
Nov 25 08:31:18 compute-0 podman[303918]: 2025-11-25 08:31:18.549930402 +0000 UTC m=+0.073400120 container create be1b4ce2bcda8f974cb5e879d58145b80f52d9c58b0771abfd6f243fa8ef0624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.572 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.576 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:31:18 compute-0 systemd[1]: Started libpod-conmon-be1b4ce2bcda8f974cb5e879d58145b80f52d9c58b0771abfd6f243fa8ef0624.scope.
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.598 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:31:18 compute-0 podman[303918]: 2025-11-25 08:31:18.514328498 +0000 UTC m=+0.037798236 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:31:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13b98a62fb015a89bad2779e7916acdfc1dec5ee161483004032cc1254611da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:18 compute-0 podman[303918]: 2025-11-25 08:31:18.635726043 +0000 UTC m=+0.159195771 container init be1b4ce2bcda8f974cb5e879d58145b80f52d9c58b0771abfd6f243fa8ef0624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:31:18 compute-0 podman[303918]: 2025-11-25 08:31:18.641563485 +0000 UTC m=+0.165033193 container start be1b4ce2bcda8f974cb5e879d58145b80f52d9c58b0771abfd6f243fa8ef0624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.648 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 21243957-732f-435d-854b-56d6dd7c1ee5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.648 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 964bed05-dc03-42f0-9a11-b18fa70a4787 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.648 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 59d86d83-1ff4-4f9e-96f3-5e381272de5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.648 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 1a48a5fd-440c-4f73-89a2-720e38b2f798 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.649 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.649 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:31:18 compute-0 neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1[303971]: [NOTICE]   (303979) : New worker (303981) forked
Nov 25 08:31:18 compute-0 neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1[303971]: [NOTICE]   (303979) : Loading success.
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.701 162739 INFO neutron.agent.ovn.metadata.agent [-] Port b929eb45-ce97-4f96-9519-2f3a4aebb9d9 in datapath 65eaa05f-c76b-4ec2-a994-1bde6d48fae1 unbound from our chassis
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.703 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65eaa05f-c76b-4ec2-a994-1bde6d48fae1
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.704 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059478.7036905, 1a48a5fd-440c-4f73-89a2-720e38b2f798 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.704 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] VM Started (Lifecycle Event)
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.716 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[102913f1-9b4c-4406-bf2f-c822788689e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.734 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.737 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059478.7038627, 1a48a5fd-440c-4f73-89a2-720e38b2f798 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.737 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] VM Paused (Lifecycle Event)
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.744 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c6dbce95-c308-4d5f-949c-2c444afc5dea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.748 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[875c2624-0d3a-4b30-bbfd-c2c661252b67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.756 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.758 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.775 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[12f2fb88-16a7-4d06-b195-4caed0858892]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.780 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.795 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e55d6a12-7825-4017-b17a-2ec05392d8f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65eaa05f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:f2:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483251, 'reachable_time': 39905, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303996, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.811 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b50df6-9838-49ea-b087-c4a9cf561567]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap65eaa05f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 483262, 'tstamp': 483262}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303997, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap65eaa05f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 483265, 'tstamp': 483265}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303997, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.813 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65eaa05f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.815 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.816 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65eaa05f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.817 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.817 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65eaa05f-c0, col_values=(('external_ids', {'iface-id': 'd2f4cf46-9887-4d81-a2f6-0b34dd30bde4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:18.818 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.819 253542 DEBUG nova.network.neutron [req-789f5e9a-723d-497d-a2e4-ac24442b2e9a req-bf12b13f-3dd8-40f3-a127-2273090e2e30 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Updated VIF entry in instance network info cache for port f11428cf-fdd2-454c-a9f5-6974d43ec027. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.819 253542 DEBUG nova.network.neutron [req-789f5e9a-723d-497d-a2e4-ac24442b2e9a req-bf12b13f-3dd8-40f3-a127-2273090e2e30 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Updating instance_info_cache with network_info: [{"id": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "address": "fa:16:3e:ba:01:86", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11428cf-fd", "ovs_interfaceid": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.824 253542 DEBUG nova.compute.manager [req-018e8e8d-b699-494a-a2a7-e57d2fda82a4 req-8e328dda-5576-40bc-ae27-bd7052a2e328 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Received event network-vif-plugged-f11428cf-fdd2-454c-a9f5-6974d43ec027 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.825 253542 DEBUG oslo_concurrency.lockutils [req-018e8e8d-b699-494a-a2a7-e57d2fda82a4 req-8e328dda-5576-40bc-ae27-bd7052a2e328 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.825 253542 DEBUG oslo_concurrency.lockutils [req-018e8e8d-b699-494a-a2a7-e57d2fda82a4 req-8e328dda-5576-40bc-ae27-bd7052a2e328 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.825 253542 DEBUG oslo_concurrency.lockutils [req-018e8e8d-b699-494a-a2a7-e57d2fda82a4 req-8e328dda-5576-40bc-ae27-bd7052a2e328 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.825 253542 DEBUG nova.compute.manager [req-018e8e8d-b699-494a-a2a7-e57d2fda82a4 req-8e328dda-5576-40bc-ae27-bd7052a2e328 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Processing event network-vif-plugged-f11428cf-fdd2-454c-a9f5-6974d43ec027 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.826 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.829 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059478.8291035, 1a48a5fd-440c-4f73-89a2-720e38b2f798 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.829 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] VM Resumed (Lifecycle Event)
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.831 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.832 253542 DEBUG oslo_concurrency.lockutils [req-789f5e9a-723d-497d-a2e4-ac24442b2e9a req-bf12b13f-3dd8-40f3-a127-2273090e2e30 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1a48a5fd-440c-4f73-89a2-720e38b2f798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.834 253542 INFO nova.virt.libvirt.driver [-] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Instance spawned successfully.
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.834 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:31:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3680679224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.853 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.854 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.854 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.854 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.854 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.855 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.861 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.864 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.878 253542 DEBUG nova.compute.manager [req-e454782a-2227-42bc-8814-9d31138d8138 req-8fe08623-7621-4500-9d27-e3ad45a7b0d2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Received event network-vif-plugged-b929eb45-ce97-4f96-9519-2f3a4aebb9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.878 253542 DEBUG oslo_concurrency.lockutils [req-e454782a-2227-42bc-8814-9d31138d8138 req-8fe08623-7621-4500-9d27-e3ad45a7b0d2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.878 253542 DEBUG oslo_concurrency.lockutils [req-e454782a-2227-42bc-8814-9d31138d8138 req-8fe08623-7621-4500-9d27-e3ad45a7b0d2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.879 253542 DEBUG oslo_concurrency.lockutils [req-e454782a-2227-42bc-8814-9d31138d8138 req-8fe08623-7621-4500-9d27-e3ad45a7b0d2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.879 253542 DEBUG nova.compute.manager [req-e454782a-2227-42bc-8814-9d31138d8138 req-8fe08623-7621-4500-9d27-e3ad45a7b0d2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Processing event network-vif-plugged-b929eb45-ce97-4f96-9519-2f3a4aebb9d9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.879 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.882 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.893 253542 INFO nova.virt.libvirt.driver [-] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Instance spawned successfully.
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.894 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.896 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.897 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059478.8821654, 59d86d83-1ff4-4f9e-96f3-5e381272de5c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.897 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] VM Resumed (Lifecycle Event)
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.903 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.949 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.951 253542 INFO nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Took 7.11 seconds to spawn the instance on the hypervisor.
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.951 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.959 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.959 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.960 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.960 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.961 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.961 253542 DEBUG nova.virt.libvirt.driver [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:18 compute-0 nova_compute[253538]: 2025-11-25 08:31:18.968 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.009 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.058 253542 INFO nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Took 9.02 seconds to build instance.
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.061 253542 INFO nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Took 8.15 seconds to spawn the instance on the hypervisor.
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.062 253542 DEBUG nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.074 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "1a48a5fd-440c-4f73-89a2-720e38b2f798" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.100 253542 DEBUG oslo_concurrency.lockutils [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "964bed05-dc03-42f0-9a11-b18fa70a4787" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.101 253542 DEBUG oslo_concurrency.lockutils [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.101 253542 DEBUG oslo_concurrency.lockutils [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.101 253542 DEBUG oslo_concurrency.lockutils [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.102 253542 DEBUG oslo_concurrency.lockutils [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.103 253542 INFO nova.compute.manager [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Terminating instance
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.104 253542 DEBUG nova.compute.manager [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.136 253542 INFO nova.compute.manager [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Took 9.13 seconds to build instance.
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.153 253542 DEBUG oslo_concurrency.lockutils [None req-9ecae5ef-8182-4491-9ce5-c64521deb19f e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:19 compute-0 kernel: tap66f4cf24-73 (unregistering): left promiscuous mode
Nov 25 08:31:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 260 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 593 KiB/s rd, 4.0 MiB/s wr, 105 op/s
Nov 25 08:31:19 compute-0 NetworkManager[48915]: <info>  [1764059479.1646] device (tap66f4cf24-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:31:19 compute-0 ovn_controller[152859]: 2025-11-25T08:31:19Z|00338|binding|INFO|Releasing lport 66f4cf24-739f-46ed-af06-5f4556c06239 from this chassis (sb_readonly=0)
Nov 25 08:31:19 compute-0 ovn_controller[152859]: 2025-11-25T08:31:19Z|00339|binding|INFO|Setting lport 66f4cf24-739f-46ed-af06-5f4556c06239 down in Southbound
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.177 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:19 compute-0 ovn_controller[152859]: 2025-11-25T08:31:19Z|00340|binding|INFO|Removing iface tap66f4cf24-73 ovn-installed in OVS
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.180 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.184 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:9a:e6 10.100.0.157'], port_security=['fa:16:3e:5d:9a:e6 10.100.0.157'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.157/24', 'neutron:device_id': '964bed05-dc03-42f0-9a11-b18fa70a4787', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27ca22e4-edb9-4716-b7d0-baed03e35444', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c50e3969ac5b472b8defc2e5cca2901a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b2ea7a0-7c1c-42c6-b6e3-ba1d0d87a9c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3ce3aaa-a195-4aa5-988d-c73542209679, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=66f4cf24-739f-46ed-af06-5f4556c06239) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.186 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 66f4cf24-739f-46ed-af06-5f4556c06239 in datapath 27ca22e4-edb9-4716-b7d0-baed03e35444 unbound from our chassis
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.187 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 27ca22e4-edb9-4716-b7d0-baed03e35444, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.188 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[281ea948-eb09-46bf-8216-97169bbcf8c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.189 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444 namespace which is not needed anymore
Nov 25 08:31:19 compute-0 kernel: tapbcb935bd-85 (unregistering): left promiscuous mode
Nov 25 08:31:19 compute-0 NetworkManager[48915]: <info>  [1764059479.1977] device (tapbcb935bd-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.202 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:19 compute-0 ovn_controller[152859]: 2025-11-25T08:31:19Z|00341|binding|INFO|Releasing lport bcb935bd-8596-426c-8f99-55d4b9545321 from this chassis (sb_readonly=0)
Nov 25 08:31:19 compute-0 ovn_controller[152859]: 2025-11-25T08:31:19Z|00342|binding|INFO|Setting lport bcb935bd-8596-426c-8f99-55d4b9545321 down in Southbound
Nov 25 08:31:19 compute-0 ovn_controller[152859]: 2025-11-25T08:31:19Z|00343|binding|INFO|Removing iface tapbcb935bd-85 ovn-installed in OVS
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.211 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.217 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:c3:d7 10.100.1.47'], port_security=['fa:16:3e:17:c3:d7 10.100.1.47'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.47/24', 'neutron:device_id': '964bed05-dc03-42f0-9a11-b18fa70a4787', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b63866f-8f7c-4d12-9269-798a110ab5eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c50e3969ac5b472b8defc2e5cca2901a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b2ea7a0-7c1c-42c6-b6e3-ba1d0d87a9c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eee42a71-97b6-4c5b-9c9c-2038ee9718a1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bcb935bd-8596-426c-8f99-55d4b9545321) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.229 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:19 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Nov 25 08:31:19 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002b.scope: Consumed 3.359s CPU time.
Nov 25 08:31:19 compute-0 systemd-machined[215790]: Machine qemu-48-instance-0000002b terminated.
Nov 25 08:31:19 compute-0 NetworkManager[48915]: <info>  [1764059479.3287] manager: (tap66f4cf24-73): new Tun device (/org/freedesktop/NetworkManager/Devices/165)
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.334 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:31:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4051022673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:19 compute-0 NetworkManager[48915]: <info>  [1764059479.3408] manager: (tapbcb935bd-85): new Tun device (/org/freedesktop/NetworkManager/Devices/166)
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.352 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.360 253542 INFO nova.virt.libvirt.driver [-] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Instance destroyed successfully.
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.361 253542 DEBUG nova.objects.instance [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lazy-loading 'resources' on Instance uuid 964bed05-dc03-42f0-9a11-b18fa70a4787 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:19 compute-0 neutron-haproxy-ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444[303121]: [NOTICE]   (303183) : haproxy version is 2.8.14-c23fe91
Nov 25 08:31:19 compute-0 neutron-haproxy-ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444[303121]: [NOTICE]   (303183) : path to executable is /usr/sbin/haproxy
Nov 25 08:31:19 compute-0 neutron-haproxy-ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444[303121]: [WARNING]  (303183) : Exiting Master process...
Nov 25 08:31:19 compute-0 neutron-haproxy-ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444[303121]: [WARNING]  (303183) : Exiting Master process...
Nov 25 08:31:19 compute-0 neutron-haproxy-ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444[303121]: [ALERT]    (303183) : Current worker (303185) exited with code 143 (Terminated)
Nov 25 08:31:19 compute-0 neutron-haproxy-ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444[303121]: [WARNING]  (303183) : All workers exited. Exiting... (0)
Nov 25 08:31:19 compute-0 systemd[1]: libpod-b720d735a61b178616057bb2a768976d3e91d336a9314c5eac1c0d22335bf219.scope: Deactivated successfully.
Nov 25 08:31:19 compute-0 conmon[303121]: conmon b720d735a61b17861605 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b720d735a61b178616057bb2a768976d3e91d336a9314c5eac1c0d22335bf219.scope/container/memory.events
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.372 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:19 compute-0 podman[304043]: 2025-11-25 08:31:19.377100434 +0000 UTC m=+0.075583730 container died b720d735a61b178616057bb2a768976d3e91d336a9314c5eac1c0d22335bf219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.378 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.382 253542 DEBUG nova.virt.libvirt.vif [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-839232317',display_name='tempest-ServersTestMultiNic-server-839232317',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-839232317',id=43,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:31:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c50e3969ac5b472b8defc2e5cca2901a',ramdisk_id='',reservation_id='r-jc82luva',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-272267582',owner_user_name='tempest-ServersTestMultiNic-272267582-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:16Z,user_data=None,user_id='ccf1e57f59084541821b20089873a6ac',uuid=964bed05-dc03-42f0-9a11-b18fa70a4787,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "66f4cf24-739f-46ed-af06-5f4556c06239", "address": "fa:16:3e:5d:9a:e6", "network": {"id": "27ca22e4-edb9-4716-b7d0-baed03e35444", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1734967384", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66f4cf24-73", "ovs_interfaceid": "66f4cf24-739f-46ed-af06-5f4556c06239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.383 253542 DEBUG nova.network.os_vif_util [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converting VIF {"id": "66f4cf24-739f-46ed-af06-5f4556c06239", "address": "fa:16:3e:5d:9a:e6", "network": {"id": "27ca22e4-edb9-4716-b7d0-baed03e35444", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1734967384", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66f4cf24-73", "ovs_interfaceid": "66f4cf24-739f-46ed-af06-5f4556c06239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.383 253542 DEBUG nova.network.os_vif_util [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9a:e6,bridge_name='br-int',has_traffic_filtering=True,id=66f4cf24-739f-46ed-af06-5f4556c06239,network=Network(27ca22e4-edb9-4716-b7d0-baed03e35444),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66f4cf24-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.384 253542 DEBUG os_vif [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9a:e6,bridge_name='br-int',has_traffic_filtering=True,id=66f4cf24-739f-46ed-af06-5f4556c06239,network=Network(27ca22e4-edb9-4716-b7d0-baed03e35444),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66f4cf24-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.386 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.387 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66f4cf24-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.396 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.401 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.405 253542 INFO os_vif [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9a:e6,bridge_name='br-int',has_traffic_filtering=True,id=66f4cf24-739f-46ed-af06-5f4556c06239,network=Network(27ca22e4-edb9-4716-b7d0-baed03e35444),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66f4cf24-73')
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.406 253542 DEBUG nova.virt.libvirt.vif [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-839232317',display_name='tempest-ServersTestMultiNic-server-839232317',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-839232317',id=43,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:31:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c50e3969ac5b472b8defc2e5cca2901a',ramdisk_id='',reservation_id='r-jc82luva',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-272267582',owner_user_name='tempest-ServersTestMultiNic-272267582-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:16Z,user_data=None,user_id='ccf1e57f59084541821b20089873a6ac',uuid=964bed05-dc03-42f0-9a11-b18fa70a4787,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bcb935bd-8596-426c-8f99-55d4b9545321", "address": "fa:16:3e:17:c3:d7", "network": {"id": "8b63866f-8f7c-4d12-9269-798a110ab5eb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-265709819", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcb935bd-85", "ovs_interfaceid": "bcb935bd-8596-426c-8f99-55d4b9545321", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.406 253542 DEBUG nova.network.os_vif_util [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converting VIF {"id": "bcb935bd-8596-426c-8f99-55d4b9545321", "address": "fa:16:3e:17:c3:d7", "network": {"id": "8b63866f-8f7c-4d12-9269-798a110ab5eb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-265709819", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c50e3969ac5b472b8defc2e5cca2901a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcb935bd-85", "ovs_interfaceid": "bcb935bd-8596-426c-8f99-55d4b9545321", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.407 253542 DEBUG nova.network.os_vif_util [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:c3:d7,bridge_name='br-int',has_traffic_filtering=True,id=bcb935bd-8596-426c-8f99-55d4b9545321,network=Network(8b63866f-8f7c-4d12-9269-798a110ab5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcb935bd-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.407 253542 DEBUG os_vif [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:c3:d7,bridge_name='br-int',has_traffic_filtering=True,id=bcb935bd-8596-426c-8f99-55d4b9545321,network=Network(8b63866f-8f7c-4d12-9269-798a110ab5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcb935bd-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.410 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.410 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbcb935bd-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b720d735a61b178616057bb2a768976d3e91d336a9314c5eac1c0d22335bf219-userdata-shm.mount: Deactivated successfully.
Nov 25 08:31:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7494be21727f1d148df6cdffdefb1b7c9dfc8dd7633349f9d5176174bb46e059-merged.mount: Deactivated successfully.
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.424 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.425 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.425 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.429 253542 INFO os_vif [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:c3:d7,bridge_name='br-int',has_traffic_filtering=True,id=bcb935bd-8596-426c-8f99-55d4b9545321,network=Network(8b63866f-8f7c-4d12-9269-798a110ab5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcb935bd-85')
Nov 25 08:31:19 compute-0 podman[304043]: 2025-11-25 08:31:19.43665233 +0000 UTC m=+0.135135626 container cleanup b720d735a61b178616057bb2a768976d3e91d336a9314c5eac1c0d22335bf219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:31:19 compute-0 systemd[1]: libpod-conmon-b720d735a61b178616057bb2a768976d3e91d336a9314c5eac1c0d22335bf219.scope: Deactivated successfully.
Nov 25 08:31:19 compute-0 podman[304103]: 2025-11-25 08:31:19.520392035 +0000 UTC m=+0.049819418 container remove b720d735a61b178616057bb2a768976d3e91d336a9314c5eac1c0d22335bf219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.526 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[989e73b4-d262-487f-a27a-702b3749ea70]: (4, ('Tue Nov 25 08:31:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444 (b720d735a61b178616057bb2a768976d3e91d336a9314c5eac1c0d22335bf219)\nb720d735a61b178616057bb2a768976d3e91d336a9314c5eac1c0d22335bf219\nTue Nov 25 08:31:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444 (b720d735a61b178616057bb2a768976d3e91d336a9314c5eac1c0d22335bf219)\nb720d735a61b178616057bb2a768976d3e91d336a9314c5eac1c0d22335bf219\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.528 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c557ea-5c6e-4c73-be80-aa8ea19fbaee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.529 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27ca22e4-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:19 compute-0 kernel: tap27ca22e4-e0: left promiscuous mode
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.537 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.543 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[11e9595e-4417-4fa0-9c90-8030683d84cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.554 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.573 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[217b616c-cc31-4627-bb3a-a9ee43bab804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.575 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2733a275-94c3-498c-97c1-20dd7c7755d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.595 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c5461efa-4be0-4634-8f09-60c6c7b96575]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482558, 'reachable_time': 41544, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304120, 'error': None, 'target': 'ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d27ca22e4\x2dedb9\x2d4716\x2db7d0\x2dbaed03e35444.mount: Deactivated successfully.
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.597 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-27ca22e4-edb9-4716-b7d0-baed03e35444 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.598 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f36ca3-8c97-4fb8-a68a-e29685058683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.603 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bcb935bd-8596-426c-8f99-55d4b9545321 in datapath 8b63866f-8f7c-4d12-9269-798a110ab5eb unbound from our chassis
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.606 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8b63866f-8f7c-4d12-9269-798a110ab5eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.607 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5486865c-0cf9-4ce2-8efe-aa901e3ba5e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.608 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb namespace which is not needed anymore
Nov 25 08:31:19 compute-0 neutron-haproxy-ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb[303405]: [NOTICE]   (303427) : haproxy version is 2.8.14-c23fe91
Nov 25 08:31:19 compute-0 neutron-haproxy-ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb[303405]: [NOTICE]   (303427) : path to executable is /usr/sbin/haproxy
Nov 25 08:31:19 compute-0 neutron-haproxy-ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb[303405]: [WARNING]  (303427) : Exiting Master process...
Nov 25 08:31:19 compute-0 neutron-haproxy-ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb[303405]: [ALERT]    (303427) : Current worker (303445) exited with code 143 (Terminated)
Nov 25 08:31:19 compute-0 neutron-haproxy-ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb[303405]: [WARNING]  (303427) : All workers exited. Exiting... (0)
Nov 25 08:31:19 compute-0 systemd[1]: libpod-32751cff355a8459d4ced169be3a04dca751fc267595686fe15361ead7dfe009.scope: Deactivated successfully.
Nov 25 08:31:19 compute-0 podman[304138]: 2025-11-25 08:31:19.799571561 +0000 UTC m=+0.059237898 container died 32751cff355a8459d4ced169be3a04dca751fc267595686fe15361ead7dfe009 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:31:19 compute-0 ceph-mon[75015]: pgmap v1439: 321 pgs: 321 active+clean; 260 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 593 KiB/s rd, 4.0 MiB/s wr, 105 op/s
Nov 25 08:31:19 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4051022673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32751cff355a8459d4ced169be3a04dca751fc267595686fe15361ead7dfe009-userdata-shm.mount: Deactivated successfully.
Nov 25 08:31:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f6e6ce104a0cc9dc2672ac84e82409bbed51283b99160cc57540d4e70e657e7-merged.mount: Deactivated successfully.
Nov 25 08:31:19 compute-0 podman[304138]: 2025-11-25 08:31:19.878783891 +0000 UTC m=+0.138450218 container cleanup 32751cff355a8459d4ced169be3a04dca751fc267595686fe15361ead7dfe009 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:31:19 compute-0 systemd[1]: libpod-conmon-32751cff355a8459d4ced169be3a04dca751fc267595686fe15361ead7dfe009.scope: Deactivated successfully.
Nov 25 08:31:19 compute-0 podman[304166]: 2025-11-25 08:31:19.961867767 +0000 UTC m=+0.050209429 container remove 32751cff355a8459d4ced169be3a04dca751fc267595686fe15361ead7dfe009 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.973 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[de0347a3-1d1d-465b-b5b5-723a260083a0]: (4, ('Tue Nov 25 08:31:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb (32751cff355a8459d4ced169be3a04dca751fc267595686fe15361ead7dfe009)\n32751cff355a8459d4ced169be3a04dca751fc267595686fe15361ead7dfe009\nTue Nov 25 08:31:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb (32751cff355a8459d4ced169be3a04dca751fc267595686fe15361ead7dfe009)\n32751cff355a8459d4ced169be3a04dca751fc267595686fe15361ead7dfe009\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.976 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b3457bbb-ebe1-4fcd-b665-76716456958f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:19.978 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b63866f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.981 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:19 compute-0 kernel: tap8b63866f-80: left promiscuous mode
Nov 25 08:31:19 compute-0 nova_compute[253538]: 2025-11-25 08:31:19.998 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:20.006 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[58f1e0ef-e0b4-42a6-b643-77d54899ad3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.026 253542 INFO nova.virt.libvirt.driver [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Deleting instance files /var/lib/nova/instances/964bed05-dc03-42f0-9a11-b18fa70a4787_del
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.027 253542 INFO nova.virt.libvirt.driver [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Deletion of /var/lib/nova/instances/964bed05-dc03-42f0-9a11-b18fa70a4787_del complete
Nov 25 08:31:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:20.030 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[735fde13-9529-4645-ba36-d0d3cb4a263b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:20.032 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c718ac2-b9d7-44c9-a46d-5be7cd7e9bd9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:20.050 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7e09dad8-2427-464c-a803-b9585b44e4ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482691, 'reachable_time': 16319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304179, 'error': None, 'target': 'ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:20.053 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8b63866f-8f7c-4d12-9269-798a110ab5eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:31:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:20.053 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9bc1fa-b6b7-4c33-ada4-8d6d10135d32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.091 253542 INFO nova.compute.manager [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Took 0.99 seconds to destroy the instance on the hypervisor.
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.092 253542 DEBUG oslo.service.loopingcall [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.092 253542 DEBUG nova.compute.manager [-] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.092 253542 DEBUG nova.network.neutron [-] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.321 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d8b63866f\x2d8f7c\x2d4d12\x2d9269\x2d798a110ab5eb.mount: Deactivated successfully.
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.668 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.962 253542 DEBUG nova.compute.manager [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Received event network-vif-plugged-f11428cf-fdd2-454c-a9f5-6974d43ec027 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.962 253542 DEBUG oslo_concurrency.lockutils [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.963 253542 DEBUG oslo_concurrency.lockutils [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.963 253542 DEBUG oslo_concurrency.lockutils [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.963 253542 DEBUG nova.compute.manager [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] No waiting events found dispatching network-vif-plugged-f11428cf-fdd2-454c-a9f5-6974d43ec027 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.963 253542 WARNING nova.compute.manager [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Received unexpected event network-vif-plugged-f11428cf-fdd2-454c-a9f5-6974d43ec027 for instance with vm_state active and task_state None.
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.964 253542 DEBUG nova.compute.manager [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received event network-vif-unplugged-66f4cf24-739f-46ed-af06-5f4556c06239 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.964 253542 DEBUG oslo_concurrency.lockutils [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.964 253542 DEBUG oslo_concurrency.lockutils [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.964 253542 DEBUG oslo_concurrency.lockutils [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.964 253542 DEBUG nova.compute.manager [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] No waiting events found dispatching network-vif-unplugged-66f4cf24-739f-46ed-af06-5f4556c06239 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.965 253542 DEBUG nova.compute.manager [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received event network-vif-unplugged-66f4cf24-739f-46ed-af06-5f4556c06239 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.965 253542 DEBUG nova.compute.manager [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received event network-vif-plugged-66f4cf24-739f-46ed-af06-5f4556c06239 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.965 253542 DEBUG oslo_concurrency.lockutils [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.965 253542 DEBUG oslo_concurrency.lockutils [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.965 253542 DEBUG oslo_concurrency.lockutils [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.966 253542 DEBUG nova.compute.manager [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] No waiting events found dispatching network-vif-plugged-66f4cf24-739f-46ed-af06-5f4556c06239 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:20 compute-0 nova_compute[253538]: 2025-11-25 08:31:20.966 253542 WARNING nova.compute.manager [req-f0917f05-a6b8-4b1b-9688-646d2a4688ac req-be7e06cf-42b4-453a-9d20-6eef44c7a009 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received unexpected event network-vif-plugged-66f4cf24-739f-46ed-af06-5f4556c06239 for instance with vm_state active and task_state deleting.
Nov 25 08:31:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 260 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 141 op/s
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.412 253542 DEBUG nova.compute.manager [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Received event network-vif-plugged-b929eb45-ce97-4f96-9519-2f3a4aebb9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.413 253542 DEBUG oslo_concurrency.lockutils [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.413 253542 DEBUG oslo_concurrency.lockutils [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.413 253542 DEBUG oslo_concurrency.lockutils [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.413 253542 DEBUG nova.compute.manager [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] No waiting events found dispatching network-vif-plugged-b929eb45-ce97-4f96-9519-2f3a4aebb9d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.414 253542 WARNING nova.compute.manager [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Received unexpected event network-vif-plugged-b929eb45-ce97-4f96-9519-2f3a4aebb9d9 for instance with vm_state active and task_state None.
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.414 253542 DEBUG nova.compute.manager [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received event network-vif-unplugged-bcb935bd-8596-426c-8f99-55d4b9545321 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.414 253542 DEBUG oslo_concurrency.lockutils [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.414 253542 DEBUG oslo_concurrency.lockutils [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.414 253542 DEBUG oslo_concurrency.lockutils [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.415 253542 DEBUG nova.compute.manager [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] No waiting events found dispatching network-vif-unplugged-bcb935bd-8596-426c-8f99-55d4b9545321 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.415 253542 DEBUG nova.compute.manager [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received event network-vif-unplugged-bcb935bd-8596-426c-8f99-55d4b9545321 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.415 253542 DEBUG nova.compute.manager [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received event network-vif-plugged-bcb935bd-8596-426c-8f99-55d4b9545321 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.415 253542 DEBUG oslo_concurrency.lockutils [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.415 253542 DEBUG oslo_concurrency.lockutils [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.416 253542 DEBUG oslo_concurrency.lockutils [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.416 253542 DEBUG nova.compute.manager [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] No waiting events found dispatching network-vif-plugged-bcb935bd-8596-426c-8f99-55d4b9545321 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.416 253542 WARNING nova.compute.manager [req-937f50f4-18da-489a-8589-454f1b14cef2 req-50cceb05-828f-46c6-867c-48456cb745bf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received unexpected event network-vif-plugged-bcb935bd-8596-426c-8f99-55d4b9545321 for instance with vm_state active and task_state deleting.
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.565 253542 DEBUG oslo_concurrency.lockutils [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.565 253542 DEBUG oslo_concurrency.lockutils [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.566 253542 DEBUG oslo_concurrency.lockutils [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.566 253542 DEBUG oslo_concurrency.lockutils [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.566 253542 DEBUG oslo_concurrency.lockutils [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.567 253542 INFO nova.compute.manager [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Terminating instance
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.568 253542 DEBUG nova.compute.manager [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:31:21 compute-0 kernel: tapb929eb45-ce (unregistering): left promiscuous mode
Nov 25 08:31:21 compute-0 NetworkManager[48915]: <info>  [1764059481.6155] device (tapb929eb45-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:31:21 compute-0 ovn_controller[152859]: 2025-11-25T08:31:21Z|00344|binding|INFO|Releasing lport b929eb45-ce97-4f96-9519-2f3a4aebb9d9 from this chassis (sb_readonly=0)
Nov 25 08:31:21 compute-0 ovn_controller[152859]: 2025-11-25T08:31:21Z|00345|binding|INFO|Setting lport b929eb45-ce97-4f96-9519-2f3a4aebb9d9 down in Southbound
Nov 25 08:31:21 compute-0 ovn_controller[152859]: 2025-11-25T08:31:21Z|00346|binding|INFO|Removing iface tapb929eb45-ce ovn-installed in OVS
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.629 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.635 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:58:52 10.100.0.5'], port_security=['fa:16:3e:5b:58:52 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '59d86d83-1ff4-4f9e-96f3-5e381272de5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8098b3bc99ed4993a40e217876568115', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fca0453d-8e16-42b4-ac5c-91be3458ce00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eafba2a5-2b65-4a07-b367-d4c961f70ef4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=b929eb45-ce97-4f96-9519-2f3a4aebb9d9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.636 162739 INFO neutron.agent.ovn.metadata.agent [-] Port b929eb45-ce97-4f96-9519-2f3a4aebb9d9 in datapath 65eaa05f-c76b-4ec2-a994-1bde6d48fae1 unbound from our chassis
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.638 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65eaa05f-c76b-4ec2-a994-1bde6d48fae1
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.649 253542 DEBUG nova.objects.instance [None req-2010cf8b-d95b-4e09-9bcb-a78c1328cbb5 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lazy-loading 'flavor' on Instance uuid 21243957-732f-435d-854b-56d6dd7c1ee5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.655 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.658 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2f965000-67ae-420e-adef-1529aebe9dba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.676 253542 DEBUG oslo_concurrency.lockutils [None req-2010cf8b-d95b-4e09-9bcb-a78c1328cbb5 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Acquiring lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.677 253542 DEBUG oslo_concurrency.lockutils [None req-2010cf8b-d95b-4e09-9bcb-a78c1328cbb5 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Acquired lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.696 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c2aa53-4084-4028-8d35-14689046e461]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.699 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[acb11daa-b47e-41d6-9346-e5b870adac47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:21 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Nov 25 08:31:21 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Consumed 3.293s CPU time.
Nov 25 08:31:21 compute-0 systemd-machined[215790]: Machine qemu-49-instance-0000002c terminated.
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.720 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c761d784-9cc0-4735-980c-3f3ea83be0b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.738 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9707f1a8-e32d-44eb-bf62-b27c8eff5e7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65eaa05f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:f2:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483251, 'reachable_time': 39905, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304192, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.758 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bb40f180-5af0-48e9-a9a0-c63e10e0a6be]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap65eaa05f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 483262, 'tstamp': 483262}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304193, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap65eaa05f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 483265, 'tstamp': 483265}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304193, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.759 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65eaa05f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.766 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.768 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65eaa05f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.769 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.769 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65eaa05f-c0, col_values=(('external_ids', {'iface-id': 'd2f4cf46-9887-4d81-a2f6-0b34dd30bde4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.769 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.788 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.793 253542 DEBUG oslo_concurrency.lockutils [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "1a48a5fd-440c-4f73-89a2-720e38b2f798" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.794 253542 DEBUG oslo_concurrency.lockutils [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "1a48a5fd-440c-4f73-89a2-720e38b2f798" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.794 253542 DEBUG oslo_concurrency.lockutils [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.795 253542 DEBUG oslo_concurrency.lockutils [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.795 253542 DEBUG oslo_concurrency.lockutils [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.797 253542 INFO nova.compute.manager [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Terminating instance
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.800 253542 DEBUG nova.compute.manager [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.800 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.808 253542 INFO nova.virt.libvirt.driver [-] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Instance destroyed successfully.
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.809 253542 DEBUG nova.objects.instance [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lazy-loading 'resources' on Instance uuid 59d86d83-1ff4-4f9e-96f3-5e381272de5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.823 253542 DEBUG nova.virt.libvirt.vif [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-541278070',display_name='tempest-tempest.common.compute-instance-541278070-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-541278070-1',id=44,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:31:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8098b3bc99ed4993a40e217876568115',ramdisk_id='',reservation_id='r-i36adckz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1813277106',owner_user_name='tempest-MultipleCreateTestJSON-1813277106-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:19Z,user_data=None,user_id='e3ba89d7ba114005bf727750ed2eb249',uuid=59d86d83-1ff4-4f9e-96f3-5e381272de5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "address": "fa:16:3e:5b:58:52", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb929eb45-ce", "ovs_interfaceid": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.824 253542 DEBUG nova.network.os_vif_util [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converting VIF {"id": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "address": "fa:16:3e:5b:58:52", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb929eb45-ce", "ovs_interfaceid": "b929eb45-ce97-4f96-9519-2f3a4aebb9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.824 253542 DEBUG nova.network.os_vif_util [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:58:52,bridge_name='br-int',has_traffic_filtering=True,id=b929eb45-ce97-4f96-9519-2f3a4aebb9d9,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb929eb45-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.825 253542 DEBUG os_vif [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:58:52,bridge_name='br-int',has_traffic_filtering=True,id=b929eb45-ce97-4f96-9519-2f3a4aebb9d9,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb929eb45-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.826 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb929eb45-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.828 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.829 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.831 253542 INFO os_vif [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:58:52,bridge_name='br-int',has_traffic_filtering=True,id=b929eb45-ce97-4f96-9519-2f3a4aebb9d9,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb929eb45-ce')
Nov 25 08:31:21 compute-0 kernel: tapf11428cf-fd (unregistering): left promiscuous mode
Nov 25 08:31:21 compute-0 NetworkManager[48915]: <info>  [1764059481.8407] device (tapf11428cf-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.850 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:21 compute-0 ovn_controller[152859]: 2025-11-25T08:31:21Z|00347|binding|INFO|Releasing lport f11428cf-fdd2-454c-a9f5-6974d43ec027 from this chassis (sb_readonly=0)
Nov 25 08:31:21 compute-0 ovn_controller[152859]: 2025-11-25T08:31:21Z|00348|binding|INFO|Setting lport f11428cf-fdd2-454c-a9f5-6974d43ec027 down in Southbound
Nov 25 08:31:21 compute-0 ovn_controller[152859]: 2025-11-25T08:31:21Z|00349|binding|INFO|Removing iface tapf11428cf-fd ovn-installed in OVS
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.852 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.858 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:01:86 10.100.0.14'], port_security=['fa:16:3e:ba:01:86 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1a48a5fd-440c-4f73-89a2-720e38b2f798', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8098b3bc99ed4993a40e217876568115', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fca0453d-8e16-42b4-ac5c-91be3458ce00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eafba2a5-2b65-4a07-b367-d4c961f70ef4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f11428cf-fdd2-454c-a9f5-6974d43ec027) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.859 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f11428cf-fdd2-454c-a9f5-6974d43ec027 in datapath 65eaa05f-c76b-4ec2-a994-1bde6d48fae1 unbound from our chassis
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.861 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65eaa05f-c76b-4ec2-a994-1bde6d48fae1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.863 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56412f09-de24-400d-b727-621b9661ca84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:21.863 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1 namespace which is not needed anymore
Nov 25 08:31:21 compute-0 nova_compute[253538]: 2025-11-25 08:31:21.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:21 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Nov 25 08:31:21 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002d.scope: Consumed 3.812s CPU time.
Nov 25 08:31:21 compute-0 systemd-machined[215790]: Machine qemu-50-instance-0000002d terminated.
Nov 25 08:31:21 compute-0 neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1[303971]: [NOTICE]   (303979) : haproxy version is 2.8.14-c23fe91
Nov 25 08:31:21 compute-0 neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1[303971]: [NOTICE]   (303979) : path to executable is /usr/sbin/haproxy
Nov 25 08:31:21 compute-0 neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1[303971]: [WARNING]  (303979) : Exiting Master process...
Nov 25 08:31:21 compute-0 neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1[303971]: [WARNING]  (303979) : Exiting Master process...
Nov 25 08:31:21 compute-0 neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1[303971]: [ALERT]    (303979) : Current worker (303981) exited with code 143 (Terminated)
Nov 25 08:31:21 compute-0 neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1[303971]: [WARNING]  (303979) : All workers exited. Exiting... (0)
Nov 25 08:31:21 compute-0 systemd[1]: libpod-be1b4ce2bcda8f974cb5e879d58145b80f52d9c58b0771abfd6f243fa8ef0624.scope: Deactivated successfully.
Nov 25 08:31:22 compute-0 podman[304243]: 2025-11-25 08:31:22.001108881 +0000 UTC m=+0.047456573 container died be1b4ce2bcda8f974cb5e879d58145b80f52d9c58b0771abfd6f243fa8ef0624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:31:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be1b4ce2bcda8f974cb5e879d58145b80f52d9c58b0771abfd6f243fa8ef0624-userdata-shm.mount: Deactivated successfully.
Nov 25 08:31:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f13b98a62fb015a89bad2779e7916acdfc1dec5ee161483004032cc1254611da-merged.mount: Deactivated successfully.
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.038 253542 INFO nova.virt.libvirt.driver [-] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Instance destroyed successfully.
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.039 253542 DEBUG nova.objects.instance [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lazy-loading 'resources' on Instance uuid 1a48a5fd-440c-4f73-89a2-720e38b2f798 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:22 compute-0 podman[304243]: 2025-11-25 08:31:22.04198003 +0000 UTC m=+0.088327722 container cleanup be1b4ce2bcda8f974cb5e879d58145b80f52d9c58b0771abfd6f243fa8ef0624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 08:31:22 compute-0 systemd[1]: libpod-conmon-be1b4ce2bcda8f974cb5e879d58145b80f52d9c58b0771abfd6f243fa8ef0624.scope: Deactivated successfully.
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.060 253542 DEBUG nova.virt.libvirt.vif [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-541278070',display_name='tempest-tempest.common.compute-instance-541278070-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-541278070-2',id=45,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-25T08:31:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8098b3bc99ed4993a40e217876568115',ramdisk_id='',reservation_id='r-i36adckz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1813277106',owner_user_name='tempest-MultipleCreateTestJSON-1813277106-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:19Z,user_data=None,user_id='e3ba89d7ba114005bf727750ed2eb249',uuid=1a48a5fd-440c-4f73-89a2-720e38b2f798,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "address": "fa:16:3e:ba:01:86", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11428cf-fd", "ovs_interfaceid": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.060 253542 DEBUG nova.network.os_vif_util [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converting VIF {"id": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "address": "fa:16:3e:ba:01:86", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11428cf-fd", "ovs_interfaceid": "f11428cf-fdd2-454c-a9f5-6974d43ec027", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.061 253542 DEBUG nova.network.os_vif_util [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:01:86,bridge_name='br-int',has_traffic_filtering=True,id=f11428cf-fdd2-454c-a9f5-6974d43ec027,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf11428cf-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.061 253542 DEBUG os_vif [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:01:86,bridge_name='br-int',has_traffic_filtering=True,id=f11428cf-fdd2-454c-a9f5-6974d43ec027,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf11428cf-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.062 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.063 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf11428cf-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.105 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.106 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.108 253542 INFO os_vif [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:01:86,bridge_name='br-int',has_traffic_filtering=True,id=f11428cf-fdd2-454c-a9f5-6974d43ec027,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf11428cf-fd')
Nov 25 08:31:22 compute-0 podman[304283]: 2025-11-25 08:31:22.164214569 +0000 UTC m=+0.058917349 container remove be1b4ce2bcda8f974cb5e879d58145b80f52d9c58b0771abfd6f243fa8ef0624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:31:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:22.171 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[12ac0e16-33b2-4dea-b9bf-80bf6034bed8]: (4, ('Tue Nov 25 08:31:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1 (be1b4ce2bcda8f974cb5e879d58145b80f52d9c58b0771abfd6f243fa8ef0624)\nbe1b4ce2bcda8f974cb5e879d58145b80f52d9c58b0771abfd6f243fa8ef0624\nTue Nov 25 08:31:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1 (be1b4ce2bcda8f974cb5e879d58145b80f52d9c58b0771abfd6f243fa8ef0624)\nbe1b4ce2bcda8f974cb5e879d58145b80f52d9c58b0771abfd6f243fa8ef0624\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:22.172 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8b0d71cb-daab-45d1-a596-5277f642466a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:22.173 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65eaa05f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.175 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:22 compute-0 kernel: tap65eaa05f-c0: left promiscuous mode
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.191 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:22.197 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dfbef9cb-c27a-4dd2-993c-3741e2bcf893]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:22.221 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bdeb9c45-499c-4598-b768-dad195238d6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:22.222 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3ee688-0c5f-460b-a9a2-13b233bf7a64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:22 compute-0 ceph-mon[75015]: pgmap v1440: 321 pgs: 321 active+clean; 260 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 141 op/s
Nov 25 08:31:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:22.240 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa2f8a7-a09b-41cd-9b4a-3189641926a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483243, 'reachable_time': 33308, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304316, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:22.242 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:31:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:22.242 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b57969fa-2ac8-4246-a639-32653c568ad4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d65eaa05f\x2dc76b\x2d4ec2\x2da994\x2d1bde6d48fae1.mount: Deactivated successfully.
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.278 253542 INFO nova.virt.libvirt.driver [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Deleting instance files /var/lib/nova/instances/59d86d83-1ff4-4f9e-96f3-5e381272de5c_del
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.279 253542 INFO nova.virt.libvirt.driver [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Deletion of /var/lib/nova/instances/59d86d83-1ff4-4f9e-96f3-5e381272de5c_del complete
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.327 253542 INFO nova.compute.manager [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Took 0.76 seconds to destroy the instance on the hypervisor.
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.327 253542 DEBUG oslo.service.loopingcall [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.327 253542 DEBUG nova.compute.manager [-] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.328 253542 DEBUG nova.network.neutron [-] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.393 253542 DEBUG nova.network.neutron [-] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.417 253542 INFO nova.compute.manager [-] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Took 2.32 seconds to deallocate network for instance.
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.479 253542 DEBUG oslo_concurrency.lockutils [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.480 253542 DEBUG oslo_concurrency.lockutils [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.526 253542 INFO nova.virt.libvirt.driver [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Deleting instance files /var/lib/nova/instances/1a48a5fd-440c-4f73-89a2-720e38b2f798_del
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.527 253542 INFO nova.virt.libvirt.driver [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Deletion of /var/lib/nova/instances/1a48a5fd-440c-4f73-89a2-720e38b2f798_del complete
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.567 253542 DEBUG oslo_concurrency.processutils [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.623 253542 INFO nova.compute.manager [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.624 253542 DEBUG oslo.service.loopingcall [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.625 253542 DEBUG nova.compute.manager [-] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:31:22 compute-0 nova_compute[253538]: 2025-11-25 08:31:22.625 253542 DEBUG nova.network.neutron [-] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:31:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:31:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3874588309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.066 253542 DEBUG oslo_concurrency.processutils [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.073 253542 DEBUG nova.compute.provider_tree [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.088 253542 DEBUG nova.scheduler.client.report [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.112 253542 DEBUG oslo_concurrency.lockutils [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.147 253542 INFO nova.scheduler.client.report [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Deleted allocations for instance 964bed05-dc03-42f0-9a11-b18fa70a4787
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.160 253542 DEBUG nova.compute.manager [req-facfa29c-7615-4e1d-9cc7-04af4def2da7 req-eb965d34-2171-4c2e-a619-747f9039b9a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Received event network-vif-unplugged-f11428cf-fdd2-454c-a9f5-6974d43ec027 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.161 253542 DEBUG oslo_concurrency.lockutils [req-facfa29c-7615-4e1d-9cc7-04af4def2da7 req-eb965d34-2171-4c2e-a619-747f9039b9a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.161 253542 DEBUG oslo_concurrency.lockutils [req-facfa29c-7615-4e1d-9cc7-04af4def2da7 req-eb965d34-2171-4c2e-a619-747f9039b9a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.161 253542 DEBUG oslo_concurrency.lockutils [req-facfa29c-7615-4e1d-9cc7-04af4def2da7 req-eb965d34-2171-4c2e-a619-747f9039b9a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.162 253542 DEBUG nova.compute.manager [req-facfa29c-7615-4e1d-9cc7-04af4def2da7 req-eb965d34-2171-4c2e-a619-747f9039b9a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] No waiting events found dispatching network-vif-unplugged-f11428cf-fdd2-454c-a9f5-6974d43ec027 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.162 253542 DEBUG nova.compute.manager [req-facfa29c-7615-4e1d-9cc7-04af4def2da7 req-eb965d34-2171-4c2e-a619-747f9039b9a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Received event network-vif-unplugged-f11428cf-fdd2-454c-a9f5-6974d43ec027 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:31:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 226 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 204 op/s
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.223 253542 DEBUG oslo_concurrency.lockutils [None req-c22ff29e-3e53-436a-980c-07d5ff8da8d8 ccf1e57f59084541821b20089873a6ac c50e3969ac5b472b8defc2e5cca2901a - - default default] Lock "964bed05-dc03-42f0-9a11-b18fa70a4787" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3874588309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.426 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:31:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:31:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:31:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:31:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:31:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:31:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.503 253542 DEBUG nova.compute.manager [req-b59185e0-fa3c-4e76-af81-1e9d9d745e5c req-f798d26a-3886-4068-a7c4-e0a64201e44a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received event network-vif-deleted-66f4cf24-739f-46ed-af06-5f4556c06239 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.504 253542 DEBUG nova.compute.manager [req-b59185e0-fa3c-4e76-af81-1e9d9d745e5c req-f798d26a-3886-4068-a7c4-e0a64201e44a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Received event network-vif-unplugged-b929eb45-ce97-4f96-9519-2f3a4aebb9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.504 253542 DEBUG oslo_concurrency.lockutils [req-b59185e0-fa3c-4e76-af81-1e9d9d745e5c req-f798d26a-3886-4068-a7c4-e0a64201e44a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.505 253542 DEBUG oslo_concurrency.lockutils [req-b59185e0-fa3c-4e76-af81-1e9d9d745e5c req-f798d26a-3886-4068-a7c4-e0a64201e44a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.505 253542 DEBUG oslo_concurrency.lockutils [req-b59185e0-fa3c-4e76-af81-1e9d9d745e5c req-f798d26a-3886-4068-a7c4-e0a64201e44a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.505 253542 DEBUG nova.compute.manager [req-b59185e0-fa3c-4e76-af81-1e9d9d745e5c req-f798d26a-3886-4068-a7c4-e0a64201e44a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] No waiting events found dispatching network-vif-unplugged-b929eb45-ce97-4f96-9519-2f3a4aebb9d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.506 253542 DEBUG nova.compute.manager [req-b59185e0-fa3c-4e76-af81-1e9d9d745e5c req-f798d26a-3886-4068-a7c4-e0a64201e44a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Received event network-vif-unplugged-b929eb45-ce97-4f96-9519-2f3a4aebb9d9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.506 253542 DEBUG nova.compute.manager [req-b59185e0-fa3c-4e76-af81-1e9d9d745e5c req-f798d26a-3886-4068-a7c4-e0a64201e44a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Received event network-vif-deleted-bcb935bd-8596-426c-8f99-55d4b9545321 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.506 253542 DEBUG nova.compute.manager [req-b59185e0-fa3c-4e76-af81-1e9d9d745e5c req-f798d26a-3886-4068-a7c4-e0a64201e44a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Received event network-vif-plugged-b929eb45-ce97-4f96-9519-2f3a4aebb9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.506 253542 DEBUG oslo_concurrency.lockutils [req-b59185e0-fa3c-4e76-af81-1e9d9d745e5c req-f798d26a-3886-4068-a7c4-e0a64201e44a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.507 253542 DEBUG oslo_concurrency.lockutils [req-b59185e0-fa3c-4e76-af81-1e9d9d745e5c req-f798d26a-3886-4068-a7c4-e0a64201e44a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.507 253542 DEBUG oslo_concurrency.lockutils [req-b59185e0-fa3c-4e76-af81-1e9d9d745e5c req-f798d26a-3886-4068-a7c4-e0a64201e44a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.507 253542 DEBUG nova.compute.manager [req-b59185e0-fa3c-4e76-af81-1e9d9d745e5c req-f798d26a-3886-4068-a7c4-e0a64201e44a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] No waiting events found dispatching network-vif-plugged-b929eb45-ce97-4f96-9519-2f3a4aebb9d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.507 253542 WARNING nova.compute.manager [req-b59185e0-fa3c-4e76-af81-1e9d9d745e5c req-f798d26a-3886-4068-a7c4-e0a64201e44a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Received unexpected event network-vif-plugged-b929eb45-ce97-4f96-9519-2f3a4aebb9d9 for instance with vm_state active and task_state deleting.
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.677 253542 DEBUG nova.network.neutron [None req-2010cf8b-d95b-4e09-9bcb-a78c1328cbb5 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.800 253542 DEBUG nova.network.neutron [-] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.816 253542 INFO nova.compute.manager [-] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Took 1.49 seconds to deallocate network for instance.
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.852 253542 DEBUG oslo_concurrency.lockutils [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.853 253542 DEBUG oslo_concurrency.lockutils [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.870 253542 DEBUG nova.network.neutron [-] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.890 253542 INFO nova.compute.manager [-] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Took 1.26 seconds to deallocate network for instance.
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.943 253542 DEBUG oslo_concurrency.lockutils [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:23 compute-0 nova_compute[253538]: 2025-11-25 08:31:23.951 253542 DEBUG oslo_concurrency.processutils [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:24 compute-0 ceph-mon[75015]: pgmap v1441: 321 pgs: 321 active+clean; 226 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 204 op/s
Nov 25 08:31:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:31:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/158503590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:24 compute-0 nova_compute[253538]: 2025-11-25 08:31:24.385 253542 DEBUG oslo_concurrency.processutils [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:24 compute-0 nova_compute[253538]: 2025-11-25 08:31:24.393 253542 DEBUG nova.compute.provider_tree [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:31:24 compute-0 nova_compute[253538]: 2025-11-25 08:31:24.414 253542 DEBUG nova.scheduler.client.report [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:31:24 compute-0 nova_compute[253538]: 2025-11-25 08:31:24.438 253542 DEBUG oslo_concurrency.lockutils [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:24 compute-0 nova_compute[253538]: 2025-11-25 08:31:24.442 253542 DEBUG oslo_concurrency.lockutils [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:24 compute-0 nova_compute[253538]: 2025-11-25 08:31:24.461 253542 INFO nova.scheduler.client.report [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Deleted allocations for instance 59d86d83-1ff4-4f9e-96f3-5e381272de5c
Nov 25 08:31:24 compute-0 nova_compute[253538]: 2025-11-25 08:31:24.519 253542 DEBUG oslo_concurrency.processutils [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:24 compute-0 nova_compute[253538]: 2025-11-25 08:31:24.575 253542 DEBUG oslo_concurrency.lockutils [None req-d606e788-b7eb-42c2-b048-946eda390c40 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "59d86d83-1ff4-4f9e-96f3-5e381272de5c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:31:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1068819025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:24 compute-0 nova_compute[253538]: 2025-11-25 08:31:24.991 253542 DEBUG oslo_concurrency.processutils [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:24 compute-0 nova_compute[253538]: 2025-11-25 08:31:24.996 253542 DEBUG nova.compute.provider_tree [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.013 253542 DEBUG nova.scheduler.client.report [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.031 253542 DEBUG oslo_concurrency.lockutils [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.052 253542 INFO nova.scheduler.client.report [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Deleted allocations for instance 1a48a5fd-440c-4f73-89a2-720e38b2f798
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.113 253542 DEBUG oslo_concurrency.lockutils [None req-5efbb6c9-084c-4d9f-aea8-420174a545d2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "1a48a5fd-440c-4f73-89a2-720e38b2f798" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 137 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.2 MiB/s wr, 347 op/s
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.224 253542 DEBUG nova.compute.manager [req-387cff26-8e73-4dd0-9401-ac0ff6a2f9ec req-5c0947b1-5648-456d-8e9f-1cc19a440ba4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Received event network-vif-plugged-f11428cf-fdd2-454c-a9f5-6974d43ec027 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.224 253542 DEBUG oslo_concurrency.lockutils [req-387cff26-8e73-4dd0-9401-ac0ff6a2f9ec req-5c0947b1-5648-456d-8e9f-1cc19a440ba4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.224 253542 DEBUG oslo_concurrency.lockutils [req-387cff26-8e73-4dd0-9401-ac0ff6a2f9ec req-5c0947b1-5648-456d-8e9f-1cc19a440ba4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.225 253542 DEBUG oslo_concurrency.lockutils [req-387cff26-8e73-4dd0-9401-ac0ff6a2f9ec req-5c0947b1-5648-456d-8e9f-1cc19a440ba4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1a48a5fd-440c-4f73-89a2-720e38b2f798-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.225 253542 DEBUG nova.compute.manager [req-387cff26-8e73-4dd0-9401-ac0ff6a2f9ec req-5c0947b1-5648-456d-8e9f-1cc19a440ba4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] No waiting events found dispatching network-vif-plugged-f11428cf-fdd2-454c-a9f5-6974d43ec027 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.225 253542 WARNING nova.compute.manager [req-387cff26-8e73-4dd0-9401-ac0ff6a2f9ec req-5c0947b1-5648-456d-8e9f-1cc19a440ba4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Received unexpected event network-vif-plugged-f11428cf-fdd2-454c-a9f5-6974d43ec027 for instance with vm_state deleted and task_state None.
Nov 25 08:31:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/158503590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1068819025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.323 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.428 253542 DEBUG nova.network.neutron [None req-2010cf8b-d95b-4e09-9bcb-a78c1328cbb5 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Updating instance_info_cache with network_info: [{"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.446 253542 DEBUG oslo_concurrency.lockutils [None req-2010cf8b-d95b-4e09-9bcb-a78c1328cbb5 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Releasing lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.446 253542 DEBUG nova.compute.manager [None req-2010cf8b-d95b-4e09-9bcb-a78c1328cbb5 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.447 253542 DEBUG nova.compute.manager [None req-2010cf8b-d95b-4e09-9bcb-a78c1328cbb5 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] network_info to inject: |[{"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.613 253542 DEBUG nova.compute.manager [req-559e2247-0c51-4eb4-841b-994fc34d7311 req-51486290-20a5-480c-b421-b808906dbf99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Received event network-changed-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.614 253542 DEBUG nova.compute.manager [req-559e2247-0c51-4eb4-841b-994fc34d7311 req-51486290-20a5-480c-b421-b808906dbf99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Refreshing instance network info cache due to event network-changed-b1fe2a2b-159d-40ea-a88d-1a311f1e7702. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.614 253542 DEBUG oslo_concurrency.lockutils [req-559e2247-0c51-4eb4-841b-994fc34d7311 req-51486290-20a5-480c-b421-b808906dbf99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.614 253542 DEBUG oslo_concurrency.lockutils [req-559e2247-0c51-4eb4-841b-994fc34d7311 req-51486290-20a5-480c-b421-b808906dbf99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.615 253542 DEBUG nova.network.neutron [req-559e2247-0c51-4eb4-841b-994fc34d7311 req-51486290-20a5-480c-b421-b808906dbf99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Refreshing network info cache for port b1fe2a2b-159d-40ea-a88d-1a311f1e7702 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:31:25 compute-0 ovn_controller[152859]: 2025-11-25T08:31:25Z|00350|binding|INFO|Releasing lport c8895407-4b3c-419a-8694-55e3e69b82b8 from this chassis (sb_readonly=0)
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.711 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:31:25 compute-0 nova_compute[253538]: 2025-11-25 08:31:25.771 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:26 compute-0 ceph-mon[75015]: pgmap v1442: 321 pgs: 321 active+clean; 137 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.2 MiB/s wr, 347 op/s
Nov 25 08:31:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:26.616 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:31:26 compute-0 nova_compute[253538]: 2025-11-25 08:31:26.616 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:26.617 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:31:27 compute-0 nova_compute[253538]: 2025-11-25 08:31:27.105 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:27 compute-0 nova_compute[253538]: 2025-11-25 08:31:27.162 253542 DEBUG nova.objects.instance [None req-6222740b-ab64-4ad1-a704-a65242056288 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lazy-loading 'flavor' on Instance uuid 21243957-732f-435d-854b-56d6dd7c1ee5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 121 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 841 KiB/s wr, 295 op/s
Nov 25 08:31:27 compute-0 nova_compute[253538]: 2025-11-25 08:31:27.185 253542 DEBUG oslo_concurrency.lockutils [None req-6222740b-ab64-4ad1-a704-a65242056288 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Acquiring lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:27 compute-0 nova_compute[253538]: 2025-11-25 08:31:27.274 253542 DEBUG nova.network.neutron [req-559e2247-0c51-4eb4-841b-994fc34d7311 req-51486290-20a5-480c-b421-b808906dbf99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Updated VIF entry in instance network info cache for port b1fe2a2b-159d-40ea-a88d-1a311f1e7702. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:31:27 compute-0 nova_compute[253538]: 2025-11-25 08:31:27.275 253542 DEBUG nova.network.neutron [req-559e2247-0c51-4eb4-841b-994fc34d7311 req-51486290-20a5-480c-b421-b808906dbf99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Updating instance_info_cache with network_info: [{"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:27 compute-0 nova_compute[253538]: 2025-11-25 08:31:27.294 253542 DEBUG oslo_concurrency.lockutils [req-559e2247-0c51-4eb4-841b-994fc34d7311 req-51486290-20a5-480c-b421-b808906dbf99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:27 compute-0 nova_compute[253538]: 2025-11-25 08:31:27.294 253542 DEBUG nova.compute.manager [req-559e2247-0c51-4eb4-841b-994fc34d7311 req-51486290-20a5-480c-b421-b808906dbf99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Received event network-vif-deleted-b929eb45-ce97-4f96-9519-2f3a4aebb9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:27 compute-0 nova_compute[253538]: 2025-11-25 08:31:27.295 253542 DEBUG nova.compute.manager [req-559e2247-0c51-4eb4-841b-994fc34d7311 req-51486290-20a5-480c-b421-b808906dbf99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Received event network-vif-deleted-f11428cf-fdd2-454c-a9f5-6974d43ec027 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:27 compute-0 nova_compute[253538]: 2025-11-25 08:31:27.296 253542 DEBUG oslo_concurrency.lockutils [None req-6222740b-ab64-4ad1-a704-a65242056288 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Acquired lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:28 compute-0 ceph-mon[75015]: pgmap v1443: 321 pgs: 321 active+clean; 121 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 841 KiB/s wr, 295 op/s
Nov 25 08:31:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:28.619 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:28 compute-0 nova_compute[253538]: 2025-11-25 08:31:28.758 253542 DEBUG nova.network.neutron [None req-6222740b-ab64-4ad1-a704-a65242056288 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:31:28 compute-0 nova_compute[253538]: 2025-11-25 08:31:28.849 253542 DEBUG nova.compute.manager [req-ac23df45-f3c2-4b0b-a089-351674349c3b req-7dded195-5e00-4467-a5de-38d83c51f981 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Received event network-changed-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:28 compute-0 nova_compute[253538]: 2025-11-25 08:31:28.850 253542 DEBUG nova.compute.manager [req-ac23df45-f3c2-4b0b-a089-351674349c3b req-7dded195-5e00-4467-a5de-38d83c51f981 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Refreshing instance network info cache due to event network-changed-b1fe2a2b-159d-40ea-a88d-1a311f1e7702. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:31:28 compute-0 nova_compute[253538]: 2025-11-25 08:31:28.850 253542 DEBUG oslo_concurrency.lockutils [req-ac23df45-f3c2-4b0b-a089-351674349c3b req-7dded195-5e00-4467-a5de-38d83c51f981 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:28 compute-0 nova_compute[253538]: 2025-11-25 08:31:28.880 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "44f92d51-7981-4905-8ec2-179722b520f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:28 compute-0 nova_compute[253538]: 2025-11-25 08:31:28.881 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "44f92d51-7981-4905-8ec2-179722b520f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:28 compute-0 nova_compute[253538]: 2025-11-25 08:31:28.894 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:31:28 compute-0 nova_compute[253538]: 2025-11-25 08:31:28.917 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "e7169151-5523-42c6-bebf-4b9ff2640da3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:28 compute-0 nova_compute[253538]: 2025-11-25 08:31:28.918 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "e7169151-5523-42c6-bebf-4b9ff2640da3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:28 compute-0 nova_compute[253538]: 2025-11-25 08:31:28.958 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:31:28 compute-0 nova_compute[253538]: 2025-11-25 08:31:28.988 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:28 compute-0 nova_compute[253538]: 2025-11-25 08:31:28.988 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:28 compute-0 nova_compute[253538]: 2025-11-25 08:31:28.998 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:31:28 compute-0 nova_compute[253538]: 2025-11-25 08:31:28.999 253542 INFO nova.compute.claims [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:31:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:31:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3128039099' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:31:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:31:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3128039099' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.026 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.140 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 121 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 30 KiB/s wr, 291 op/s
Nov 25 08:31:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3128039099' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:31:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3128039099' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:31:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:31:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2066363707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.616 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.624 253542 DEBUG nova.compute.provider_tree [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.641 253542 DEBUG nova.scheduler.client.report [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.665 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.666 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.671 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.681 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.682 253542 INFO nova.compute.claims [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.727 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.728 253542 DEBUG nova.network.neutron [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.751 253542 INFO nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.764 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.819 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.853 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.855 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.855 253542 INFO nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Creating image(s)
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.876 253542 DEBUG nova.storage.rbd_utils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 44f92d51-7981-4905-8ec2-179722b520f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.903 253542 DEBUG nova.storage.rbd_utils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 44f92d51-7981-4905-8ec2-179722b520f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.933 253542 DEBUG nova.storage.rbd_utils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 44f92d51-7981-4905-8ec2-179722b520f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:29 compute-0 nova_compute[253538]: 2025-11-25 08:31:29.937 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.008 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.010 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.010 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.011 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.033 253542 DEBUG nova.storage.rbd_utils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 44f92d51-7981-4905-8ec2-179722b520f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.036 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 44f92d51-7981-4905-8ec2-179722b520f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:31:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1802109954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.308 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.319 253542 DEBUG nova.compute.provider_tree [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.324 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.333 253542 DEBUG nova.scheduler.client.report [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:31:30 compute-0 ceph-mon[75015]: pgmap v1444: 321 pgs: 321 active+clean; 121 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 30 KiB/s wr, 291 op/s
Nov 25 08:31:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2066363707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1802109954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.359 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.360 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.365 253542 DEBUG nova.policy [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e3ba89d7ba114005bf727750ed2eb249', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8098b3bc99ed4993a40e217876568115', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.394 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 44f92d51-7981-4905-8ec2-179722b520f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.454 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.454 253542 DEBUG nova.network.neutron [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.463 253542 DEBUG nova.storage.rbd_utils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] resizing rbd image 44f92d51-7981-4905-8ec2-179722b520f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.494 253542 INFO nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.519 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.571 253542 DEBUG nova.objects.instance [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lazy-loading 'migration_context' on Instance uuid 44f92d51-7981-4905-8ec2-179722b520f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.581 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.582 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Ensure instance console log exists: /var/lib/nova/instances/44f92d51-7981-4905-8ec2-179722b520f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.582 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.583 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.583 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.623 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.624 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.624 253542 INFO nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Creating image(s)
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.643 253542 DEBUG nova.storage.rbd_utils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image e7169151-5523-42c6-bebf-4b9ff2640da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.669 253542 DEBUG nova.storage.rbd_utils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image e7169151-5523-42c6-bebf-4b9ff2640da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.696 253542 DEBUG nova.storage.rbd_utils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image e7169151-5523-42c6-bebf-4b9ff2640da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.700 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.732 253542 DEBUG nova.network.neutron [None req-6222740b-ab64-4ad1-a704-a65242056288 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Updating instance_info_cache with network_info: [{"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.771 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.772 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.773 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.773 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.798 253542 DEBUG nova.storage.rbd_utils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image e7169151-5523-42c6-bebf-4b9ff2640da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.801 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e7169151-5523-42c6-bebf-4b9ff2640da3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.836 253542 DEBUG nova.policy [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e3ba89d7ba114005bf727750ed2eb249', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8098b3bc99ed4993a40e217876568115', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.840 253542 DEBUG oslo_concurrency.lockutils [None req-6222740b-ab64-4ad1-a704-a65242056288 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Releasing lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.840 253542 DEBUG nova.compute.manager [None req-6222740b-ab64-4ad1-a704-a65242056288 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.841 253542 DEBUG nova.compute.manager [None req-6222740b-ab64-4ad1-a704-a65242056288 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] network_info to inject: |[{"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.844 253542 DEBUG oslo_concurrency.lockutils [req-ac23df45-f3c2-4b0b-a089-351674349c3b req-7dded195-5e00-4467-a5de-38d83c51f981 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:30 compute-0 nova_compute[253538]: 2025-11-25 08:31:30.846 253542 DEBUG nova.network.neutron [req-ac23df45-f3c2-4b0b-a089-351674349c3b req-7dded195-5e00-4467-a5de-38d83c51f981 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Refreshing network info cache for port b1fe2a2b-159d-40ea-a88d-1a311f1e7702 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:31:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 121 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 5.3 MiB/s rd, 18 KiB/s wr, 268 op/s
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.187 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e7169151-5523-42c6-bebf-4b9ff2640da3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.247 253542 DEBUG nova.storage.rbd_utils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] resizing rbd image e7169151-5523-42c6-bebf-4b9ff2640da3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.274 253542 DEBUG nova.network.neutron [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Successfully created port: 2d3ff707-0229-412a-9743-73ef63760a76 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.330 253542 DEBUG nova.objects.instance [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lazy-loading 'migration_context' on Instance uuid e7169151-5523-42c6-bebf-4b9ff2640da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.341 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.342 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Ensure instance console log exists: /var/lib/nova/instances/e7169151-5523-42c6-bebf-4b9ff2640da3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.342 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.343 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.343 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:31 compute-0 ovn_controller[152859]: 2025-11-25T08:31:31Z|00351|binding|INFO|Releasing lport c8895407-4b3c-419a-8694-55e3e69b82b8 from this chassis (sb_readonly=0)
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.790 253542 DEBUG nova.network.neutron [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Successfully created port: eec863b4-83ac-47db-bf76-d7ba4eee9f0d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.810 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.912 253542 DEBUG oslo_concurrency.lockutils [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Acquiring lock "21243957-732f-435d-854b-56d6dd7c1ee5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.913 253542 DEBUG oslo_concurrency.lockutils [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lock "21243957-732f-435d-854b-56d6dd7c1ee5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.914 253542 DEBUG oslo_concurrency.lockutils [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Acquiring lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.914 253542 DEBUG oslo_concurrency.lockutils [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.914 253542 DEBUG oslo_concurrency.lockutils [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.916 253542 INFO nova.compute.manager [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Terminating instance
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.917 253542 DEBUG nova.compute.manager [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:31:31 compute-0 kernel: tapb1fe2a2b-15 (unregistering): left promiscuous mode
Nov 25 08:31:31 compute-0 NetworkManager[48915]: <info>  [1764059491.9704] device (tapb1fe2a2b-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:31:31 compute-0 nova_compute[253538]: 2025-11-25 08:31:31.982 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:31 compute-0 ovn_controller[152859]: 2025-11-25T08:31:31Z|00352|binding|INFO|Releasing lport b1fe2a2b-159d-40ea-a88d-1a311f1e7702 from this chassis (sb_readonly=0)
Nov 25 08:31:31 compute-0 ovn_controller[152859]: 2025-11-25T08:31:31Z|00353|binding|INFO|Setting lport b1fe2a2b-159d-40ea-a88d-1a311f1e7702 down in Southbound
Nov 25 08:31:31 compute-0 ovn_controller[152859]: 2025-11-25T08:31:31Z|00354|binding|INFO|Removing iface tapb1fe2a2b-15 ovn-installed in OVS
Nov 25 08:31:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:31.991 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:41:6c 10.100.0.3'], port_security=['fa:16:3e:13:41:6c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '21243957-732f-435d-854b-56d6dd7c1ee5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87ff5af4-98f5-4e7f-8049-0f70796e8c58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '60e7437d74e5463f92e6045be3ca5172', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd01ef662-2da8-4b58-a1cd-baf30a0ed9e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2c94f8cf-40ef-4aa3-b5ea-ef85bbbaaac9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=b1fe2a2b-159d-40ea-a88d-1a311f1e7702) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:31:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:31.994 162739 INFO neutron.agent.ovn.metadata.agent [-] Port b1fe2a2b-159d-40ea-a88d-1a311f1e7702 in datapath 87ff5af4-98f5-4e7f-8049-0f70796e8c58 unbound from our chassis
Nov 25 08:31:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:31.997 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87ff5af4-98f5-4e7f-8049-0f70796e8c58, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:31:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:32.000 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dca1ba66-a5ba-4479-a9f1-e444203e91ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:32.001 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58 namespace which is not needed anymore
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.061 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:32 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Nov 25 08:31:32 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Consumed 14.934s CPU time.
Nov 25 08:31:32 compute-0 systemd-machined[215790]: Machine qemu-47-instance-0000002a terminated.
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.107 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.118 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.118 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.138 253542 DEBUG nova.compute.manager [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:31:32 compute-0 neutron-haproxy-ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58[301198]: [NOTICE]   (301202) : haproxy version is 2.8.14-c23fe91
Nov 25 08:31:32 compute-0 neutron-haproxy-ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58[301198]: [NOTICE]   (301202) : path to executable is /usr/sbin/haproxy
Nov 25 08:31:32 compute-0 neutron-haproxy-ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58[301198]: [WARNING]  (301202) : Exiting Master process...
Nov 25 08:31:32 compute-0 neutron-haproxy-ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58[301198]: [ALERT]    (301202) : Current worker (301204) exited with code 143 (Terminated)
Nov 25 08:31:32 compute-0 neutron-haproxy-ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58[301198]: [WARNING]  (301202) : All workers exited. Exiting... (0)
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.145 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:32 compute-0 systemd[1]: libpod-825d8ade8a0e5a05f3283987747ff80f37f3794079d11a17d54012ce256974f8.scope: Deactivated successfully.
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.150 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:32 compute-0 podman[304786]: 2025-11-25 08:31:32.154621309 +0000 UTC m=+0.052246354 container died 825d8ade8a0e5a05f3283987747ff80f37f3794079d11a17d54012ce256974f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.164 253542 INFO nova.virt.libvirt.driver [-] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Instance destroyed successfully.
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.164 253542 DEBUG nova.objects.instance [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lazy-loading 'resources' on Instance uuid 21243957-732f-435d-854b-56d6dd7c1ee5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.191 253542 DEBUG nova.virt.libvirt.vif [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:30:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-431160682',display_name='tempest-AttachInterfacesUnderV243Test-server-431160682',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-431160682',id=42,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM5DjtqNcv+6RkNArYu24kFktfxDRaWC3D2gbYUaR8sRxNJVPe0ZuP7IU98YDbQlWI7Cp8dxno/YFmEni1mtKuJTQQphjmW7WKzVnDUcxPTuZ4GE9nq3vP0QvvkqbmlX/g==',key_name='tempest-keypair-617120992',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:30:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='60e7437d74e5463f92e6045be3ca5172',ramdisk_id='',reservation_id='r-qs1dttp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-616024449',owner_user_name='tempest-AttachInterfacesUnderV243Test-616024449-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='65e8abd41b2b4a4ab175f581875790ac',uuid=21243957-732f-435d-854b-56d6dd7c1ee5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.191 253542 DEBUG nova.network.os_vif_util [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Converting VIF {"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.192 253542 DEBUG nova.network.os_vif_util [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:13:41:6c,bridge_name='br-int',has_traffic_filtering=True,id=b1fe2a2b-159d-40ea-a88d-1a311f1e7702,network=Network(87ff5af4-98f5-4e7f-8049-0f70796e8c58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fe2a2b-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.192 253542 DEBUG os_vif [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:13:41:6c,bridge_name='br-int',has_traffic_filtering=True,id=b1fe2a2b-159d-40ea-a88d-1a311f1e7702,network=Network(87ff5af4-98f5-4e7f-8049-0f70796e8c58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fe2a2b-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.194 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.194 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1fe2a2b-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-825d8ade8a0e5a05f3283987747ff80f37f3794079d11a17d54012ce256974f8-userdata-shm.mount: Deactivated successfully.
Nov 25 08:31:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-d121a466039b5ba2f5cf48cb7c03ea4f42ecb7c68566a4e7acbc1cf4c46fa14b-merged.mount: Deactivated successfully.
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.202 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.206 253542 INFO os_vif [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:13:41:6c,bridge_name='br-int',has_traffic_filtering=True,id=b1fe2a2b-159d-40ea-a88d-1a311f1e7702,network=Network(87ff5af4-98f5-4e7f-8049-0f70796e8c58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fe2a2b-15')
Nov 25 08:31:32 compute-0 podman[304786]: 2025-11-25 08:31:32.208259841 +0000 UTC m=+0.105884886 container cleanup 825d8ade8a0e5a05f3283987747ff80f37f3794079d11a17d54012ce256974f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 08:31:32 compute-0 systemd[1]: libpod-conmon-825d8ade8a0e5a05f3283987747ff80f37f3794079d11a17d54012ce256974f8.scope: Deactivated successfully.
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.232 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.232 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.239 253542 DEBUG nova.virt.hardware [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.240 253542 INFO nova.compute.claims [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.254 253542 DEBUG nova.compute.manager [req-71fbcc33-d2da-4091-99fc-378470e5e521 req-0828a97c-2fcc-4f0f-8a26-f2300d137777 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Received event network-vif-unplugged-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.254 253542 DEBUG oslo_concurrency.lockutils [req-71fbcc33-d2da-4091-99fc-378470e5e521 req-0828a97c-2fcc-4f0f-8a26-f2300d137777 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.254 253542 DEBUG oslo_concurrency.lockutils [req-71fbcc33-d2da-4091-99fc-378470e5e521 req-0828a97c-2fcc-4f0f-8a26-f2300d137777 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.254 253542 DEBUG oslo_concurrency.lockutils [req-71fbcc33-d2da-4091-99fc-378470e5e521 req-0828a97c-2fcc-4f0f-8a26-f2300d137777 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.255 253542 DEBUG nova.compute.manager [req-71fbcc33-d2da-4091-99fc-378470e5e521 req-0828a97c-2fcc-4f0f-8a26-f2300d137777 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] No waiting events found dispatching network-vif-unplugged-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.255 253542 DEBUG nova.compute.manager [req-71fbcc33-d2da-4091-99fc-378470e5e521 req-0828a97c-2fcc-4f0f-8a26-f2300d137777 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Received event network-vif-unplugged-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.272 253542 DEBUG nova.network.neutron [req-ac23df45-f3c2-4b0b-a089-351674349c3b req-7dded195-5e00-4467-a5de-38d83c51f981 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Updated VIF entry in instance network info cache for port b1fe2a2b-159d-40ea-a88d-1a311f1e7702. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.272 253542 DEBUG nova.network.neutron [req-ac23df45-f3c2-4b0b-a089-351674349c3b req-7dded195-5e00-4467-a5de-38d83c51f981 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Updating instance_info_cache with network_info: [{"id": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "address": "fa:16:3e:13:41:6c", "network": {"id": "87ff5af4-98f5-4e7f-8049-0f70796e8c58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1494963892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60e7437d74e5463f92e6045be3ca5172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fe2a2b-15", "ovs_interfaceid": "b1fe2a2b-159d-40ea-a88d-1a311f1e7702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.299 253542 DEBUG oslo_concurrency.lockutils [req-ac23df45-f3c2-4b0b-a089-351674349c3b req-7dded195-5e00-4467-a5de-38d83c51f981 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-21243957-732f-435d-854b-56d6dd7c1ee5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:32 compute-0 podman[304830]: 2025-11-25 08:31:32.307412753 +0000 UTC m=+0.070681695 container remove 825d8ade8a0e5a05f3283987747ff80f37f3794079d11a17d54012ce256974f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:31:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:32.312 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c36be20a-bb1a-4bec-8ac3-281bd88c1a24]: (4, ('Tue Nov 25 08:31:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58 (825d8ade8a0e5a05f3283987747ff80f37f3794079d11a17d54012ce256974f8)\n825d8ade8a0e5a05f3283987747ff80f37f3794079d11a17d54012ce256974f8\nTue Nov 25 08:31:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58 (825d8ade8a0e5a05f3283987747ff80f37f3794079d11a17d54012ce256974f8)\n825d8ade8a0e5a05f3283987747ff80f37f3794079d11a17d54012ce256974f8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:32.314 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[40690d31-aa51-471a-a62d-bf6952e81a7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:32.314 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87ff5af4-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.316 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:32 compute-0 kernel: tap87ff5af4-90: left promiscuous mode
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.318 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:32.321 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c423b1-ec3e-4120-9ae2-4259927b2402]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:32.334 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[643bb622-6bd0-4320-852f-2cfaf9ab5ad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:32.335 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f1bcb455-f8e4-4da6-9ef7-b0a4e3662fb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.337 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:32 compute-0 ceph-mon[75015]: pgmap v1445: 321 pgs: 321 active+clean; 121 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 5.3 MiB/s rd, 18 KiB/s wr, 268 op/s
Nov 25 08:31:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:32.356 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c36ae8-83db-4f30-9c93-485d16dd303f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479694, 'reachable_time': 19931, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304856, 'error': None, 'target': 'ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d87ff5af4\x2d98f5\x2d4e7f\x2d8049\x2d0f70796e8c58.mount: Deactivated successfully.
Nov 25 08:31:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:32.362 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87ff5af4-98f5-4e7f-8049-0f70796e8c58 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:31:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:32.363 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b444fe26-1c32-4601-8a51-ae61d7eac14e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.395 253542 DEBUG oslo_concurrency.processutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.442 253542 DEBUG nova.network.neutron [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Successfully updated port: 2d3ff707-0229-412a-9743-73ef63760a76 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.457 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "refresh_cache-44f92d51-7981-4905-8ec2-179722b520f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.458 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquired lock "refresh_cache-44f92d51-7981-4905-8ec2-179722b520f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.458 253542 DEBUG nova.network.neutron [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.595 253542 INFO nova.virt.libvirt.driver [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Deleting instance files /var/lib/nova/instances/21243957-732f-435d-854b-56d6dd7c1ee5_del
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.597 253542 INFO nova.virt.libvirt.driver [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Deletion of /var/lib/nova/instances/21243957-732f-435d-854b-56d6dd7c1ee5_del complete
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.633 253542 DEBUG nova.network.neutron [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.665 253542 INFO nova.compute.manager [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Took 0.75 seconds to destroy the instance on the hypervisor.
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.665 253542 DEBUG oslo.service.loopingcall [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.666 253542 DEBUG nova.compute.manager [-] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.666 253542 DEBUG nova.network.neutron [-] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:31:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:31:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/387063666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.826 253542 DEBUG oslo_concurrency.processutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.833 253542 DEBUG nova.compute.provider_tree [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.847 253542 DEBUG nova.scheduler.client.report [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.871 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.872 253542 DEBUG nova.compute.manager [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.907 253542 DEBUG nova.network.neutron [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Successfully updated port: eec863b4-83ac-47db-bf76-d7ba4eee9f0d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.931 253542 DEBUG nova.compute.manager [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.931 253542 DEBUG nova.network.neutron [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.934 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "refresh_cache-e7169151-5523-42c6-bebf-4b9ff2640da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.935 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquired lock "refresh_cache-e7169151-5523-42c6-bebf-4b9ff2640da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.935 253542 DEBUG nova.network.neutron [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.960 253542 INFO nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:31:32 compute-0 nova_compute[253538]: 2025-11-25 08:31:32.977 253542 DEBUG nova.compute.manager [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.067 253542 DEBUG nova.compute.manager [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.069 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.070 253542 INFO nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Creating image(s)
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.103 253542 DEBUG nova.storage.rbd_utils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.141 253542 DEBUG nova.storage.rbd_utils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 151 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 982 KiB/s wr, 231 op/s
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.175 253542 DEBUG nova.storage.rbd_utils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.180 253542 DEBUG oslo_concurrency.processutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.214 253542 DEBUG nova.policy [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.218 253542 DEBUG nova.network.neutron [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.249 253542 DEBUG oslo_concurrency.processutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.250 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.250 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.251 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.274 253542 DEBUG nova.storage.rbd_utils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.277 253542 DEBUG oslo_concurrency.processutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/387063666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.606 253542 DEBUG oslo_concurrency.processutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.675 253542 DEBUG nova.storage.rbd_utils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] resizing rbd image 450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.773 253542 DEBUG nova.objects.instance [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'migration_context' on Instance uuid 450d4b82-4475-4cfc-b868-dc3b0fc37af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.786 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.787 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Ensure instance console log exists: /var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.787 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.787 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:33 compute-0 nova_compute[253538]: 2025-11-25 08:31:33.788 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.014 253542 DEBUG nova.network.neutron [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Successfully created port: 2456d48f-9440-411c-b5f2-5c27136126f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.217 253542 DEBUG nova.network.neutron [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Updating instance_info_cache with network_info: [{"id": "2d3ff707-0229-412a-9743-73ef63760a76", "address": "fa:16:3e:bb:29:93", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d3ff707-02", "ovs_interfaceid": "2d3ff707-0229-412a-9743-73ef63760a76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.233 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Releasing lock "refresh_cache-44f92d51-7981-4905-8ec2-179722b520f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.233 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Instance network_info: |[{"id": "2d3ff707-0229-412a-9743-73ef63760a76", "address": "fa:16:3e:bb:29:93", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d3ff707-02", "ovs_interfaceid": "2d3ff707-0229-412a-9743-73ef63760a76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.235 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Start _get_guest_xml network_info=[{"id": "2d3ff707-0229-412a-9743-73ef63760a76", "address": "fa:16:3e:bb:29:93", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d3ff707-02", "ovs_interfaceid": "2d3ff707-0229-412a-9743-73ef63760a76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.240 253542 WARNING nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.246 253542 DEBUG nova.virt.libvirt.host [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.246 253542 DEBUG nova.virt.libvirt.host [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.250 253542 DEBUG nova.virt.libvirt.host [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.250 253542 DEBUG nova.virt.libvirt.host [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.251 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.251 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.251 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.251 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.252 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.252 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.252 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.252 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.252 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.252 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.252 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.253 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.255 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.344 253542 DEBUG nova.network.neutron [-] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.352 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059479.3510795, 964bed05-dc03-42f0-9a11-b18fa70a4787 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.352 253542 INFO nova.compute.manager [-] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] VM Stopped (Lifecycle Event)
Nov 25 08:31:34 compute-0 ceph-mon[75015]: pgmap v1446: 321 pgs: 321 active+clean; 151 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 982 KiB/s wr, 231 op/s
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.383 253542 INFO nova.compute.manager [-] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Took 1.72 seconds to deallocate network for instance.
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.390 253542 DEBUG nova.compute.manager [None req-1eb3ca80-d9bb-4438-9826-933169ed8b7c - - - - - -] [instance: 964bed05-dc03-42f0-9a11-b18fa70a4787] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.430 253542 DEBUG oslo_concurrency.lockutils [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.431 253542 DEBUG oslo_concurrency.lockutils [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.564 253542 DEBUG nova.network.neutron [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Updating instance_info_cache with network_info: [{"id": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "address": "fa:16:3e:3e:4f:9c", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec863b4-83", "ovs_interfaceid": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.586 253542 DEBUG oslo_concurrency.processutils [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.622 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Releasing lock "refresh_cache-e7169151-5523-42c6-bebf-4b9ff2640da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.623 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Instance network_info: |[{"id": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "address": "fa:16:3e:3e:4f:9c", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec863b4-83", "ovs_interfaceid": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.629 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Start _get_guest_xml network_info=[{"id": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "address": "fa:16:3e:3e:4f:9c", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec863b4-83", "ovs_interfaceid": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.634 253542 WARNING nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.639 253542 DEBUG nova.virt.libvirt.host [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.639 253542 DEBUG nova.virt.libvirt.host [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.642 253542 DEBUG nova.virt.libvirt.host [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.642 253542 DEBUG nova.virt.libvirt.host [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.643 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.643 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.643 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.643 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.643 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.644 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.644 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.644 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.644 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.644 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.645 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.645 253542 DEBUG nova.virt.hardware [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.648 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:31:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2833027081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.699 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.725 253542 DEBUG nova.storage.rbd_utils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 44f92d51-7981-4905-8ec2-179722b520f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:34 compute-0 nova_compute[253538]: 2025-11-25 08:31:34.729 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.004 253542 DEBUG nova.compute.manager [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Received event network-changed-2d3ff707-0229-412a-9743-73ef63760a76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.004 253542 DEBUG nova.compute.manager [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Refreshing instance network info cache due to event network-changed-2d3ff707-0229-412a-9743-73ef63760a76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.005 253542 DEBUG oslo_concurrency.lockutils [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-44f92d51-7981-4905-8ec2-179722b520f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.005 253542 DEBUG oslo_concurrency.lockutils [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-44f92d51-7981-4905-8ec2-179722b520f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.005 253542 DEBUG nova.network.neutron [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Refreshing network info cache for port 2d3ff707-0229-412a-9743-73ef63760a76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:31:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:31:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3795277023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.072 253542 DEBUG oslo_concurrency.processutils [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.078 253542 DEBUG nova.compute.provider_tree [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.092 253542 DEBUG nova.scheduler.client.report [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.098 253542 DEBUG nova.compute.manager [req-3cecf987-44ac-4865-9b0c-d04b86555803 req-4e83e233-c6a5-400f-9278-15461ad2ff5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Received event network-vif-deleted-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.109 253542 DEBUG oslo_concurrency.lockutils [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.134 253542 INFO nova.scheduler.client.report [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Deleted allocations for instance 21243957-732f-435d-854b-56d6dd7c1ee5
Nov 25 08:31:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:31:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3741235124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:31:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446188965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.167 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.168 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.169 253542 DEBUG nova.virt.libvirt.vif [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:31:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1423595435',display_name='tempest-MultipleCreateTestJSON-server-1423595435-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1423595435-1',id=46,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8098b3bc99ed4993a40e217876568115',ramdisk_id='',reservation_id='r-kx0ioy43',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1813277106',owner_user_name='tempest-MultipleCreateTestJSON-1813277106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:31:29Z,user_data=None,user_id='e3ba89d7ba114005bf727750ed2eb249',uuid=44f92d51-7981-4905-8ec2-179722b520f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d3ff707-0229-412a-9743-73ef63760a76", "address": "fa:16:3e:bb:29:93", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d3ff707-02", "ovs_interfaceid": "2d3ff707-0229-412a-9743-73ef63760a76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.169 253542 DEBUG nova.network.os_vif_util [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converting VIF {"id": "2d3ff707-0229-412a-9743-73ef63760a76", "address": "fa:16:3e:bb:29:93", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d3ff707-02", "ovs_interfaceid": "2d3ff707-0229-412a-9743-73ef63760a76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 188 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.4 MiB/s wr, 243 op/s
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.170 253542 DEBUG nova.network.os_vif_util [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:29:93,bridge_name='br-int',has_traffic_filtering=True,id=2d3ff707-0229-412a-9743-73ef63760a76,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d3ff707-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.171 253542 DEBUG nova.objects.instance [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lazy-loading 'pci_devices' on Instance uuid 44f92d51-7981-4905-8ec2-179722b520f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.193 253542 DEBUG nova.storage.rbd_utils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image e7169151-5523-42c6-bebf-4b9ff2640da3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.196 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.227 253542 DEBUG nova.network.neutron [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Successfully updated port: 2456d48f-9440-411c-b5f2-5c27136126f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.231 253542 DEBUG oslo_concurrency.lockutils [None req-253e928b-5456-457f-ba1d-34b8510880f6 65e8abd41b2b4a4ab175f581875790ac 60e7437d74e5463f92e6045be3ca5172 - - default default] Lock "21243957-732f-435d-854b-56d6dd7c1ee5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.257 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <uuid>44f92d51-7981-4905-8ec2-179722b520f9</uuid>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <name>instance-0000002e</name>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <nova:name>tempest-MultipleCreateTestJSON-server-1423595435-1</nova:name>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:31:34</nova:creationTime>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <nova:user uuid="e3ba89d7ba114005bf727750ed2eb249">tempest-MultipleCreateTestJSON-1813277106-project-member</nova:user>
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <nova:project uuid="8098b3bc99ed4993a40e217876568115">tempest-MultipleCreateTestJSON-1813277106</nova:project>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <nova:port uuid="2d3ff707-0229-412a-9743-73ef63760a76">
Nov 25 08:31:35 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <system>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <entry name="serial">44f92d51-7981-4905-8ec2-179722b520f9</entry>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <entry name="uuid">44f92d51-7981-4905-8ec2-179722b520f9</entry>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </system>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <os>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   </os>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <features>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   </features>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/44f92d51-7981-4905-8ec2-179722b520f9_disk">
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       </source>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/44f92d51-7981-4905-8ec2-179722b520f9_disk.config">
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       </source>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:bb:29:93"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <target dev="tap2d3ff707-02"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/44f92d51-7981-4905-8ec2-179722b520f9/console.log" append="off"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <video>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </video>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:31:35 compute-0 nova_compute[253538]: </domain>
Nov 25 08:31:35 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.258 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Preparing to wait for external event network-vif-plugged-2d3ff707-0229-412a-9743-73ef63760a76 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.259 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "44f92d51-7981-4905-8ec2-179722b520f9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.259 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "44f92d51-7981-4905-8ec2-179722b520f9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.259 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "44f92d51-7981-4905-8ec2-179722b520f9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.260 253542 DEBUG nova.virt.libvirt.vif [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:31:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1423595435',display_name='tempest-MultipleCreateTestJSON-server-1423595435-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1423595435-1',id=46,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8098b3bc99ed4993a40e217876568115',ramdisk_id='',reservation_id='r-kx0ioy43',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1813277106',owner_user_name='tempest-MultipleCreateTestJSON-1813277106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:31:29Z,user_data=None,user_id='e3ba89d7ba114005bf727750ed2eb249',uuid=44f92d51-7981-4905-8ec2-179722b520f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d3ff707-0229-412a-9743-73ef63760a76", "address": "fa:16:3e:bb:29:93", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d3ff707-02", "ovs_interfaceid": "2d3ff707-0229-412a-9743-73ef63760a76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.260 253542 DEBUG nova.network.os_vif_util [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converting VIF {"id": "2d3ff707-0229-412a-9743-73ef63760a76", "address": "fa:16:3e:bb:29:93", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d3ff707-02", "ovs_interfaceid": "2d3ff707-0229-412a-9743-73ef63760a76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.261 253542 DEBUG nova.network.os_vif_util [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:29:93,bridge_name='br-int',has_traffic_filtering=True,id=2d3ff707-0229-412a-9743-73ef63760a76,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d3ff707-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.261 253542 DEBUG os_vif [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:29:93,bridge_name='br-int',has_traffic_filtering=True,id=2d3ff707-0229-412a-9743-73ef63760a76,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d3ff707-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.262 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.262 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.263 253542 DEBUG nova.network.neutron [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.265 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.265 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.266 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.268 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.268 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d3ff707-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.269 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d3ff707-02, col_values=(('external_ids', {'iface-id': '2d3ff707-0229-412a-9743-73ef63760a76', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:29:93', 'vm-uuid': '44f92d51-7981-4905-8ec2-179722b520f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:35 compute-0 NetworkManager[48915]: <info>  [1764059495.3130] manager: (tap2d3ff707-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.313 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.318 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.319 253542 INFO os_vif [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:29:93,bridge_name='br-int',has_traffic_filtering=True,id=2d3ff707-0229-412a-9743-73ef63760a76,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d3ff707-02')
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.326 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.371 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.372 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.372 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] No VIF found with MAC fa:16:3e:bb:29:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.373 253542 INFO nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Using config drive
Nov 25 08:31:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2833027081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3795277023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3741235124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/446188965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:35 compute-0 podman[305175]: 2025-11-25 08:31:35.403591889 +0000 UTC m=+0.052207554 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.403 253542 DEBUG nova.storage.rbd_utils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 44f92d51-7981-4905-8ec2-179722b520f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.634 253542 DEBUG nova.network.neutron [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:31:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:31:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4197125596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.706 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.708 253542 DEBUG nova.virt.libvirt.vif [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:31:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1423595435',display_name='tempest-MultipleCreateTestJSON-server-1423595435-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1423595435-2',id=47,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8098b3bc99ed4993a40e217876568115',ramdisk_id='',reservation_id='r-kx0ioy43',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1813277106',owner_user_name='tempest-MultipleCreateTestJSON-1813277106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:31:30Z,user_data=None,user_id='e3ba89d7ba114005bf727750ed2eb249',uuid=e7169151-5523-42c6-bebf-4b9ff2640da3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "address": "fa:16:3e:3e:4f:9c", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec863b4-83", "ovs_interfaceid": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.708 253542 DEBUG nova.network.os_vif_util [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converting VIF {"id": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "address": "fa:16:3e:3e:4f:9c", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec863b4-83", "ovs_interfaceid": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.709 253542 DEBUG nova.network.os_vif_util [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:4f:9c,bridge_name='br-int',has_traffic_filtering=True,id=eec863b4-83ac-47db-bf76-d7ba4eee9f0d,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec863b4-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.710 253542 DEBUG nova.objects.instance [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lazy-loading 'pci_devices' on Instance uuid e7169151-5523-42c6-bebf-4b9ff2640da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.725 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <uuid>e7169151-5523-42c6-bebf-4b9ff2640da3</uuid>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <name>instance-0000002f</name>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <nova:name>tempest-MultipleCreateTestJSON-server-1423595435-2</nova:name>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:31:34</nova:creationTime>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <nova:user uuid="e3ba89d7ba114005bf727750ed2eb249">tempest-MultipleCreateTestJSON-1813277106-project-member</nova:user>
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <nova:project uuid="8098b3bc99ed4993a40e217876568115">tempest-MultipleCreateTestJSON-1813277106</nova:project>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <nova:port uuid="eec863b4-83ac-47db-bf76-d7ba4eee9f0d">
Nov 25 08:31:35 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <system>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <entry name="serial">e7169151-5523-42c6-bebf-4b9ff2640da3</entry>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <entry name="uuid">e7169151-5523-42c6-bebf-4b9ff2640da3</entry>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </system>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <os>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   </os>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <features>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   </features>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/e7169151-5523-42c6-bebf-4b9ff2640da3_disk">
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       </source>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/e7169151-5523-42c6-bebf-4b9ff2640da3_disk.config">
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       </source>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:31:35 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:3e:4f:9c"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <target dev="tapeec863b4-83"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/e7169151-5523-42c6-bebf-4b9ff2640da3/console.log" append="off"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <video>
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </video>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:31:35 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:31:35 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:31:35 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:31:35 compute-0 nova_compute[253538]: </domain>
Nov 25 08:31:35 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.727 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Preparing to wait for external event network-vif-plugged-eec863b4-83ac-47db-bf76-d7ba4eee9f0d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.727 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "e7169151-5523-42c6-bebf-4b9ff2640da3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.727 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "e7169151-5523-42c6-bebf-4b9ff2640da3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.728 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "e7169151-5523-42c6-bebf-4b9ff2640da3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.728 253542 DEBUG nova.virt.libvirt.vif [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:31:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1423595435',display_name='tempest-MultipleCreateTestJSON-server-1423595435-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1423595435-2',id=47,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8098b3bc99ed4993a40e217876568115',ramdisk_id='',reservation_id='r-kx0ioy43',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1813277106',owner_user_name='tempest-MultipleCreateTestJSON-1813277106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:31:30Z,user_data=None,user_id='e3ba89d7ba114005bf727750ed2eb249',uuid=e7169151-5523-42c6-bebf-4b9ff2640da3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "address": "fa:16:3e:3e:4f:9c", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec863b4-83", "ovs_interfaceid": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.729 253542 DEBUG nova.network.os_vif_util [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converting VIF {"id": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "address": "fa:16:3e:3e:4f:9c", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec863b4-83", "ovs_interfaceid": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.729 253542 DEBUG nova.network.os_vif_util [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:4f:9c,bridge_name='br-int',has_traffic_filtering=True,id=eec863b4-83ac-47db-bf76-d7ba4eee9f0d,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec863b4-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.730 253542 DEBUG os_vif [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:4f:9c,bridge_name='br-int',has_traffic_filtering=True,id=eec863b4-83ac-47db-bf76-d7ba4eee9f0d,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec863b4-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.730 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.730 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.731 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.733 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.733 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeec863b4-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.734 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeec863b4-83, col_values=(('external_ids', {'iface-id': 'eec863b4-83ac-47db-bf76-d7ba4eee9f0d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:4f:9c', 'vm-uuid': 'e7169151-5523-42c6-bebf-4b9ff2640da3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:35 compute-0 NetworkManager[48915]: <info>  [1764059495.7364] manager: (tapeec863b4-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/168)
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.738 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.743 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.744 253542 INFO os_vif [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:4f:9c,bridge_name='br-int',has_traffic_filtering=True,id=eec863b4-83ac-47db-bf76-d7ba4eee9f0d,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec863b4-83')
Nov 25 08:31:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:31:35.754160) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059495754241, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2140, "num_deletes": 520, "total_data_size": 2669190, "memory_usage": 2721448, "flush_reason": "Manual Compaction"}
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059495774363, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1744575, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27988, "largest_seqno": 30127, "table_properties": {"data_size": 1737058, "index_size": 3695, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22379, "raw_average_key_size": 20, "raw_value_size": 1718499, "raw_average_value_size": 1566, "num_data_blocks": 164, "num_entries": 1097, "num_filter_entries": 1097, "num_deletions": 520, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764059352, "oldest_key_time": 1764059352, "file_creation_time": 1764059495, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 20244 microseconds, and 5975 cpu microseconds.
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:31:35.774413) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1744575 bytes OK
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:31:35.774433) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:31:35.778509) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:31:35.778536) EVENT_LOG_v1 {"time_micros": 1764059495778529, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:31:35.778555) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 2658838, prev total WAL file size 2658838, number of live WAL files 2.
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:31:35.779578) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303034' seq:72057594037927935, type:22 .. '6D6772737461740031323536' seq:0, type:0; will stop at (end)
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1703KB)], [62(8692KB)]
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059495779628, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 10645962, "oldest_snapshot_seqno": -1}
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.800 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.801 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.802 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] No VIF found with MAC fa:16:3e:3e:4f:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.802 253542 INFO nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Using config drive
Nov 25 08:31:35 compute-0 nova_compute[253538]: 2025-11-25 08:31:35.822 253542 DEBUG nova.storage.rbd_utils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image e7169151-5523-42c6-bebf-4b9ff2640da3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5421 keys, 8103451 bytes, temperature: kUnknown
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059495878750, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 8103451, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8067387, "index_size": 21426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 136389, "raw_average_key_size": 25, "raw_value_size": 7969950, "raw_average_value_size": 1470, "num_data_blocks": 883, "num_entries": 5421, "num_filter_entries": 5421, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764059495, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:31:35.879065) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8103451 bytes
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:31:35.883906) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.3 rd, 81.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 8.5 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(10.7) write-amplify(4.6) OK, records in: 6401, records dropped: 980 output_compression: NoCompression
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:31:35.883937) EVENT_LOG_v1 {"time_micros": 1764059495883923, "job": 34, "event": "compaction_finished", "compaction_time_micros": 99220, "compaction_time_cpu_micros": 19050, "output_level": 6, "num_output_files": 1, "total_output_size": 8103451, "num_input_records": 6401, "num_output_records": 5421, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059495884776, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059495888049, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:31:35.779485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:31:35.888155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:31:35.888160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:31:35.888162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:31:35.888164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:31:35 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:31:35.888166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.035 253542 INFO nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Creating config drive at /var/lib/nova/instances/44f92d51-7981-4905-8ec2-179722b520f9/disk.config
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.040 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44f92d51-7981-4905-8ec2-179722b520f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp762916qg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.175 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44f92d51-7981-4905-8ec2-179722b520f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp762916qg" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.213 253542 DEBUG nova.storage.rbd_utils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image 44f92d51-7981-4905-8ec2-179722b520f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.216 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/44f92d51-7981-4905-8ec2-179722b520f9/disk.config 44f92d51-7981-4905-8ec2-179722b520f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:36 compute-0 ceph-mon[75015]: pgmap v1447: 321 pgs: 321 active+clean; 188 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.4 MiB/s wr, 243 op/s
Nov 25 08:31:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4197125596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.415 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/44f92d51-7981-4905-8ec2-179722b520f9/disk.config 44f92d51-7981-4905-8ec2-179722b520f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.416 253542 INFO nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Deleting local config drive /var/lib/nova/instances/44f92d51-7981-4905-8ec2-179722b520f9/disk.config because it was imported into RBD.
Nov 25 08:31:36 compute-0 kernel: tap2d3ff707-02: entered promiscuous mode
Nov 25 08:31:36 compute-0 NetworkManager[48915]: <info>  [1764059496.4742] manager: (tap2d3ff707-02): new Tun device (/org/freedesktop/NetworkManager/Devices/169)
Nov 25 08:31:36 compute-0 ovn_controller[152859]: 2025-11-25T08:31:36Z|00355|binding|INFO|Claiming lport 2d3ff707-0229-412a-9743-73ef63760a76 for this chassis.
Nov 25 08:31:36 compute-0 ovn_controller[152859]: 2025-11-25T08:31:36Z|00356|binding|INFO|2d3ff707-0229-412a-9743-73ef63760a76: Claiming fa:16:3e:bb:29:93 10.100.0.8
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.499 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.507 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:29:93 10.100.0.8'], port_security=['fa:16:3e:bb:29:93 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '44f92d51-7981-4905-8ec2-179722b520f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8098b3bc99ed4993a40e217876568115', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fca0453d-8e16-42b4-ac5c-91be3458ce00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eafba2a5-2b65-4a07-b367-d4c961f70ef4, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2d3ff707-0229-412a-9743-73ef63760a76) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.508 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2d3ff707-0229-412a-9743-73ef63760a76 in datapath 65eaa05f-c76b-4ec2-a994-1bde6d48fae1 bound to our chassis
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.521 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65eaa05f-c76b-4ec2-a994-1bde6d48fae1
Nov 25 08:31:36 compute-0 ovn_controller[152859]: 2025-11-25T08:31:36Z|00357|binding|INFO|Setting lport 2d3ff707-0229-412a-9743-73ef63760a76 ovn-installed in OVS
Nov 25 08:31:36 compute-0 ovn_controller[152859]: 2025-11-25T08:31:36Z|00358|binding|INFO|Setting lport 2d3ff707-0229-412a-9743-73ef63760a76 up in Southbound
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.530 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.531 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5478332c-7ad9-40e2-9488-cbf32dd917ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.532 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap65eaa05f-c1 in ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.534 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap65eaa05f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.534 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[72387357-5403-456f-8eac-69a638919c7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.536 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:36 compute-0 systemd-machined[215790]: New machine qemu-51-instance-0000002e.
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.536 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e1553325-acb6-4779-beb3-6e89bd3792e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:36 compute-0 systemd-udevd[305310]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.550 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e83ade4a-dca8-408d-9029-304342255e84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:36 compute-0 NetworkManager[48915]: <info>  [1764059496.5515] device (tap2d3ff707-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:31:36 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-0000002e.
Nov 25 08:31:36 compute-0 NetworkManager[48915]: <info>  [1764059496.5526] device (tap2d3ff707-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.576 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbd86ab3-dcf8-42a2-8d09-305d8601ddfd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.586 253542 INFO nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Creating config drive at /var/lib/nova/instances/e7169151-5523-42c6-bebf-4b9ff2640da3/disk.config
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.591 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7169151-5523-42c6-bebf-4b9ff2640da3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzbz88c0t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.603 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e1b2dc-6fc7-4376-b340-1ddb66526cc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.608 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[761a6913-6df6-4f4f-8fbb-4a01656c9f8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:36 compute-0 NetworkManager[48915]: <info>  [1764059496.6087] manager: (tap65eaa05f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/170)
Nov 25 08:31:36 compute-0 systemd-udevd[305313]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.641 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea6148b-2522-4669-82cf-76ebad378de3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.643 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2050438a-e833-4ac4-bb76-cfc732fd8e04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:36 compute-0 NetworkManager[48915]: <info>  [1764059496.6637] device (tap65eaa05f-c0): carrier: link connected
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.668 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[56034666-2b00-46d0-a60a-8a03982bd88b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.685 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00da4009-4dc9-4073-aefd-3e0634126d52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65eaa05f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:f2:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485126, 'reachable_time': 39383, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305348, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.701 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[85578752-ac9f-4d68-a253-06f0b61ca8c9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe65:f258'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485126, 'tstamp': 485126}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305349, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.719 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[beda2638-b72b-418d-8441-704edb3b0ad5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65eaa05f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:f2:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485126, 'reachable_time': 39383, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305350, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.729 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7169151-5523-42c6-bebf-4b9ff2640da3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzbz88c0t" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.754 253542 DEBUG nova.storage.rbd_utils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] rbd image e7169151-5523-42c6-bebf-4b9ff2640da3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.754 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[49ec568d-299c-4ad2-8395-e3cc2fec1f03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.771 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7169151-5523-42c6-bebf-4b9ff2640da3/disk.config e7169151-5523-42c6-bebf-4b9ff2640da3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.804 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059481.8009377, 59d86d83-1ff4-4f9e-96f3-5e381272de5c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.805 253542 INFO nova.compute.manager [-] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] VM Stopped (Lifecycle Event)
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.827 253542 DEBUG nova.compute.manager [None req-85a60e93-715a-4e21-b8b5-7da2fc78e1ee - - - - - -] [instance: 59d86d83-1ff4-4f9e-96f3-5e381272de5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.828 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f7983331-fc6b-4dcf-a0ad-863a42db7055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.830 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65eaa05f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.830 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.831 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65eaa05f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.833 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:36 compute-0 NetworkManager[48915]: <info>  [1764059496.8339] manager: (tap65eaa05f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Nov 25 08:31:36 compute-0 kernel: tap65eaa05f-c0: entered promiscuous mode
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.837 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.840 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65eaa05f-c0, col_values=(('external_ids', {'iface-id': 'd2f4cf46-9887-4d81-a2f6-0b34dd30bde4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.841 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:36 compute-0 ovn_controller[152859]: 2025-11-25T08:31:36Z|00359|binding|INFO|Releasing lport d2f4cf46-9887-4d81-a2f6-0b34dd30bde4 from this chassis (sb_readonly=0)
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.859 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.860 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/65eaa05f-c76b-4ec2-a994-1bde6d48fae1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/65eaa05f-c76b-4ec2-a994-1bde6d48fae1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.861 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d73cb8-442c-46ce-8399-72679d60c09a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.862 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-65eaa05f-c76b-4ec2-a994-1bde6d48fae1
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/65eaa05f-c76b-4ec2-a994-1bde6d48fae1.pid.haproxy
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 65eaa05f-c76b-4ec2-a994-1bde6d48fae1
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:31:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:36.862 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'env', 'PROCESS_TAG=haproxy-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/65eaa05f-c76b-4ec2-a994-1bde6d48fae1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.899 253542 DEBUG nova.network.neutron [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updating instance_info_cache with network_info: [{"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.915 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.916 253542 DEBUG nova.compute.manager [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Instance network_info: |[{"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.918 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Start _get_guest_xml network_info=[{"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.922 253542 WARNING nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.932 253542 DEBUG nova.virt.libvirt.host [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.934 253542 DEBUG nova.virt.libvirt.host [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.942 253542 DEBUG nova.virt.libvirt.host [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.943 253542 DEBUG nova.virt.libvirt.host [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.944 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.944 253542 DEBUG nova.virt.hardware [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.945 253542 DEBUG nova.virt.hardware [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.945 253542 DEBUG nova.virt.hardware [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.946 253542 DEBUG nova.virt.hardware [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.946 253542 DEBUG nova.virt.hardware [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.946 253542 DEBUG nova.virt.hardware [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.947 253542 DEBUG nova.virt.hardware [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.947 253542 DEBUG nova.virt.hardware [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.947 253542 DEBUG nova.virt.hardware [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.948 253542 DEBUG nova.virt.hardware [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.948 253542 DEBUG nova.virt.hardware [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.951 253542 DEBUG oslo_concurrency.processutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.992 253542 DEBUG oslo_concurrency.processutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7169151-5523-42c6-bebf-4b9ff2640da3/disk.config e7169151-5523-42c6-bebf-4b9ff2640da3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:36 compute-0 nova_compute[253538]: 2025-11-25 08:31:36.993 253542 INFO nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Deleting local config drive /var/lib/nova/instances/e7169151-5523-42c6-bebf-4b9ff2640da3/disk.config because it was imported into RBD.
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.031 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059482.0298529, 1a48a5fd-440c-4f73-89a2-720e38b2f798 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.032 253542 INFO nova.compute.manager [-] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] VM Stopped (Lifecycle Event)
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.038 253542 DEBUG nova.network.neutron [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Updated VIF entry in instance network info cache for port 2d3ff707-0229-412a-9743-73ef63760a76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.039 253542 DEBUG nova.network.neutron [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Updating instance_info_cache with network_info: [{"id": "2d3ff707-0229-412a-9743-73ef63760a76", "address": "fa:16:3e:bb:29:93", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d3ff707-02", "ovs_interfaceid": "2d3ff707-0229-412a-9743-73ef63760a76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:37 compute-0 systemd-udevd[305332]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:31:37 compute-0 kernel: tapeec863b4-83: entered promiscuous mode
Nov 25 08:31:37 compute-0 NetworkManager[48915]: <info>  [1764059497.0471] manager: (tapeec863b4-83): new Tun device (/org/freedesktop/NetworkManager/Devices/172)
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.048 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:37 compute-0 ovn_controller[152859]: 2025-11-25T08:31:37Z|00360|binding|INFO|Claiming lport eec863b4-83ac-47db-bf76-d7ba4eee9f0d for this chassis.
Nov 25 08:31:37 compute-0 ovn_controller[152859]: 2025-11-25T08:31:37Z|00361|binding|INFO|eec863b4-83ac-47db-bf76-d7ba4eee9f0d: Claiming fa:16:3e:3e:4f:9c 10.100.0.10
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.052 253542 DEBUG nova.compute.manager [None req-52a94b07-8671-457a-af9e-dafe53161a04 - - - - - -] [instance: 1a48a5fd-440c-4f73-89a2-720e38b2f798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.054 253542 DEBUG oslo_concurrency.lockutils [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-44f92d51-7981-4905-8ec2-179722b520f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.054 253542 DEBUG nova.compute.manager [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Received event network-vif-plugged-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.055 253542 DEBUG oslo_concurrency.lockutils [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.055 253542 DEBUG oslo_concurrency.lockutils [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.056 253542 DEBUG oslo_concurrency.lockutils [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "21243957-732f-435d-854b-56d6dd7c1ee5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.056 253542 DEBUG nova.compute.manager [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] No waiting events found dispatching network-vif-plugged-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:37 compute-0 NetworkManager[48915]: <info>  [1764059497.0574] device (tapeec863b4-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.057 253542 WARNING nova.compute.manager [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Received unexpected event network-vif-plugged-b1fe2a2b-159d-40ea-a88d-1a311f1e7702 for instance with vm_state deleted and task_state None.
Nov 25 08:31:37 compute-0 NetworkManager[48915]: <info>  [1764059497.0582] device (tapeec863b4-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.057 253542 DEBUG nova.compute.manager [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Received event network-changed-eec863b4-83ac-47db-bf76-d7ba4eee9f0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.057 253542 DEBUG nova.compute.manager [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Refreshing instance network info cache due to event network-changed-eec863b4-83ac-47db-bf76-d7ba4eee9f0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.058 253542 DEBUG oslo_concurrency.lockutils [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e7169151-5523-42c6-bebf-4b9ff2640da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.058 253542 DEBUG oslo_concurrency.lockutils [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e7169151-5523-42c6-bebf-4b9ff2640da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.059 253542 DEBUG nova.network.neutron [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Refreshing network info cache for port eec863b4-83ac-47db-bf76-d7ba4eee9f0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:31:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:37.061 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:4f:9c 10.100.0.10'], port_security=['fa:16:3e:3e:4f:9c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e7169151-5523-42c6-bebf-4b9ff2640da3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8098b3bc99ed4993a40e217876568115', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fca0453d-8e16-42b4-ac5c-91be3458ce00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eafba2a5-2b65-4a07-b367-d4c961f70ef4, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=eec863b4-83ac-47db-bf76-d7ba4eee9f0d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:31:37 compute-0 ovn_controller[152859]: 2025-11-25T08:31:37Z|00362|binding|INFO|Setting lport eec863b4-83ac-47db-bf76-d7ba4eee9f0d ovn-installed in OVS
Nov 25 08:31:37 compute-0 ovn_controller[152859]: 2025-11-25T08:31:37Z|00363|binding|INFO|Setting lport eec863b4-83ac-47db-bf76-d7ba4eee9f0d up in Southbound
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.070 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.074 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:37 compute-0 systemd-machined[215790]: New machine qemu-52-instance-0000002f.
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.089 253542 DEBUG nova.compute.manager [req-fa20382a-c508-4481-b12f-b95d89243a21 req-01d5d1a1-43c6-4724-91e2-934a353de632 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-changed-2456d48f-9440-411c-b5f2-5c27136126f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.090 253542 DEBUG nova.compute.manager [req-fa20382a-c508-4481-b12f-b95d89243a21 req-01d5d1a1-43c6-4724-91e2-934a353de632 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Refreshing instance network info cache due to event network-changed-2456d48f-9440-411c-b5f2-5c27136126f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.090 253542 DEBUG oslo_concurrency.lockutils [req-fa20382a-c508-4481-b12f-b95d89243a21 req-01d5d1a1-43c6-4724-91e2-934a353de632 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.090 253542 DEBUG oslo_concurrency.lockutils [req-fa20382a-c508-4481-b12f-b95d89243a21 req-01d5d1a1-43c6-4724-91e2-934a353de632 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.091 253542 DEBUG nova.network.neutron [req-fa20382a-c508-4481-b12f-b95d89243a21 req-01d5d1a1-43c6-4724-91e2-934a353de632 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Refreshing network info cache for port 2456d48f-9440-411c-b5f2-5c27136126f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:31:37 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-0000002f.
Nov 25 08:31:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 176 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 5.0 MiB/s wr, 116 op/s
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.199 253542 DEBUG nova.compute.manager [req-01884c96-359a-4f40-b941-99019208e30c req-74409a47-7968-4cf9-b6f0-f2a6409d90ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Received event network-vif-plugged-2d3ff707-0229-412a-9743-73ef63760a76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.200 253542 DEBUG oslo_concurrency.lockutils [req-01884c96-359a-4f40-b941-99019208e30c req-74409a47-7968-4cf9-b6f0-f2a6409d90ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "44f92d51-7981-4905-8ec2-179722b520f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.200 253542 DEBUG oslo_concurrency.lockutils [req-01884c96-359a-4f40-b941-99019208e30c req-74409a47-7968-4cf9-b6f0-f2a6409d90ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "44f92d51-7981-4905-8ec2-179722b520f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.200 253542 DEBUG oslo_concurrency.lockutils [req-01884c96-359a-4f40-b941-99019208e30c req-74409a47-7968-4cf9-b6f0-f2a6409d90ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "44f92d51-7981-4905-8ec2-179722b520f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.200 253542 DEBUG nova.compute.manager [req-01884c96-359a-4f40-b941-99019208e30c req-74409a47-7968-4cf9-b6f0-f2a6409d90ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Processing event network-vif-plugged-2d3ff707-0229-412a-9743-73ef63760a76 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:31:37 compute-0 podman[305460]: 2025-11-25 08:31:37.20976204 +0000 UTC m=+0.030221716 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:31:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:31:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3057041135' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.435 253542 DEBUG oslo_concurrency.processutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:37 compute-0 podman[305460]: 2025-11-25 08:31:37.440124237 +0000 UTC m=+0.260583893 container create a7b4c426a8a573ba8714c8c58a09a5e786656f81a5cbfcd2f8bd3ee0e5c06c42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.472 253542 DEBUG nova.storage.rbd_utils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.477 253542 DEBUG oslo_concurrency.processutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:37 compute-0 systemd[1]: Started libpod-conmon-a7b4c426a8a573ba8714c8c58a09a5e786656f81a5cbfcd2f8bd3ee0e5c06c42.scope.
Nov 25 08:31:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38c35fb9cffac7c12c797a37349b7a4addede0ad12183983cb45e13dca3f860/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:37 compute-0 podman[305460]: 2025-11-25 08:31:37.60558288 +0000 UTC m=+0.426042546 container init a7b4c426a8a573ba8714c8c58a09a5e786656f81a5cbfcd2f8bd3ee0e5c06c42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:31:37 compute-0 podman[305460]: 2025-11-25 08:31:37.611666608 +0000 UTC m=+0.432126254 container start a7b4c426a8a573ba8714c8c58a09a5e786656f81a5cbfcd2f8bd3ee0e5c06c42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.635 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059497.6342869, 44f92d51-7981-4905-8ec2-179722b520f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:37 compute-0 neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1[305531]: [NOTICE]   (305559) : New worker (305578) forked
Nov 25 08:31:37 compute-0 neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1[305531]: [NOTICE]   (305559) : Loading success.
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.637 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] VM Started (Lifecycle Event)
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.642 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.648 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.664 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.665 253542 INFO nova.virt.libvirt.driver [-] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Instance spawned successfully.
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.665 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.670 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:31:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:37.693 162739 INFO neutron.agent.ovn.metadata.agent [-] Port eec863b4-83ac-47db-bf76-d7ba4eee9f0d in datapath 65eaa05f-c76b-4ec2-a994-1bde6d48fae1 unbound from our chassis
Nov 25 08:31:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:37.695 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65eaa05f-c76b-4ec2-a994-1bde6d48fae1
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.696 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.696 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059497.6348884, 44f92d51-7981-4905-8ec2-179722b520f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.697 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] VM Paused (Lifecycle Event)
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.705 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.706 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.706 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.707 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.707 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.708 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:37.713 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c6d06a-7262-4e9a-af50-619b3c17e711]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:37.744 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[48eda54c-1b18-4bdf-b0aa-1645faf041f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:37.748 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[48ff7107-8c96-4676-850f-913c89ff7a0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.750 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.754 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059497.647141, 44f92d51-7981-4905-8ec2-179722b520f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.754 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] VM Resumed (Lifecycle Event)
Nov 25 08:31:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:37.777 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2e869eba-2300-44e3-8c4e-cd8427049cc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.783 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.786 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:31:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:37.794 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3ecad804-c845-48dc-9ebc-0265fef778e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65eaa05f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:f2:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 266, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 266, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485126, 'reachable_time': 39383, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305618, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.806 253542 INFO nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Took 7.95 seconds to spawn the instance on the hypervisor.
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.806 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.808 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.808 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059497.794162, e7169151-5523-42c6-bebf-4b9ff2640da3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.808 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] VM Started (Lifecycle Event)
Nov 25 08:31:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:37.813 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1e3e4d-a79b-43ca-8946-4ac6ee1614c4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap65eaa05f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485137, 'tstamp': 485137}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305619, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap65eaa05f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485142, 'tstamp': 485142}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305619, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:37.815 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65eaa05f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.817 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:37.822 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65eaa05f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:37.822 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:37.823 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65eaa05f-c0, col_values=(('external_ids', {'iface-id': 'd2f4cf46-9887-4d81-a2f6-0b34dd30bde4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:37.823 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.832 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.836 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059497.794228, e7169151-5523-42c6-bebf-4b9ff2640da3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.836 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] VM Paused (Lifecycle Event)
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.861 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.863 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.875 253542 INFO nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Took 8.92 seconds to build instance.
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.881 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:31:37 compute-0 nova_compute[253538]: 2025-11-25 08:31:37.901 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "44f92d51-7981-4905-8ec2-179722b520f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:31:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842823693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.000 253542 DEBUG oslo_concurrency.processutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.002 253542 DEBUG nova.virt.libvirt.vif [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1331902627',display_name='tempest-AttachInterfacesTestJSON-server-1331902627',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1331902627',id=48,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxOUDVDCQJ+lDKjOnhY9ZKMp68C2gI2GeL81bN5+Cg0KECDUKYqyfMCz6cyxM0cgudxNhn7b2Ag1lQMKrSgrlBGJM1ipNmUdJjzx17GCK7/NrryCNhuCHqxgCE2AGNPYg==',key_name='tempest-keypair-1979969978',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-f66b6qgm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=450d4b82-4475-4cfc-b868-dc3b0fc37af5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.003 253542 DEBUG nova.network.os_vif_util [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.003 253542 DEBUG nova.network.os_vif_util [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:50:72,bridge_name='br-int',has_traffic_filtering=True,id=2456d48f-9440-411c-b5f2-5c27136126f9,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2456d48f-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.005 253542 DEBUG nova.objects.instance [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_devices' on Instance uuid 450d4b82-4475-4cfc-b868-dc3b0fc37af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.022 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:31:38 compute-0 nova_compute[253538]:   <uuid>450d4b82-4475-4cfc-b868-dc3b0fc37af5</uuid>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   <name>instance-00000030</name>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1331902627</nova:name>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:31:36</nova:creationTime>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:31:38 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:31:38 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:31:38 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:31:38 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:31:38 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:31:38 compute-0 nova_compute[253538]:         <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:31:38 compute-0 nova_compute[253538]:         <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:31:38 compute-0 nova_compute[253538]:         <nova:port uuid="2456d48f-9440-411c-b5f2-5c27136126f9">
Nov 25 08:31:38 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <system>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <entry name="serial">450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <entry name="uuid">450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     </system>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   <os>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   </os>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   <features>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   </features>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk">
Nov 25 08:31:38 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       </source>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:31:38 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk.config">
Nov 25 08:31:38 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       </source>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:31:38 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:c2:50:72"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <target dev="tap2456d48f-94"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log" append="off"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <video>
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     </video>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:31:38 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:31:38 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:31:38 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:31:38 compute-0 nova_compute[253538]: </domain>
Nov 25 08:31:38 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.024 253542 DEBUG nova.compute.manager [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Preparing to wait for external event network-vif-plugged-2456d48f-9440-411c-b5f2-5c27136126f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.024 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.024 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.025 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.025 253542 DEBUG nova.virt.libvirt.vif [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1331902627',display_name='tempest-AttachInterfacesTestJSON-server-1331902627',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1331902627',id=48,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxOUDVDCQJ+lDKjOnhY9ZKMp68C2gI2GeL81bN5+Cg0KECDUKYqyfMCz6cyxM0cgudxNhn7b2Ag1lQMKrSgrlBGJM1ipNmUdJjzx17GCK7/NrryCNhuCHqxgCE2AGNPYg==',key_name='tempest-keypair-1979969978',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-f66b6qgm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:31:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=450d4b82-4475-4cfc-b868-dc3b0fc37af5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.026 253542 DEBUG nova.network.os_vif_util [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.027 253542 DEBUG nova.network.os_vif_util [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:50:72,bridge_name='br-int',has_traffic_filtering=True,id=2456d48f-9440-411c-b5f2-5c27136126f9,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2456d48f-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.027 253542 DEBUG os_vif [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:50:72,bridge_name='br-int',has_traffic_filtering=True,id=2456d48f-9440-411c-b5f2-5c27136126f9,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2456d48f-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.028 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.029 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.029 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.035 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.036 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2456d48f-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.036 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2456d48f-94, col_values=(('external_ids', {'iface-id': '2456d48f-9440-411c-b5f2-5c27136126f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:50:72', 'vm-uuid': '450d4b82-4475-4cfc-b868-dc3b0fc37af5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:38 compute-0 NetworkManager[48915]: <info>  [1764059498.0391] manager: (tap2456d48f-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/173)
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.040 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.047 253542 INFO os_vif [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:50:72,bridge_name='br-int',has_traffic_filtering=True,id=2456d48f-9440-411c-b5f2-5c27136126f9,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2456d48f-94')
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.138 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.140 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.140 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:c2:50:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.140 253542 INFO nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Using config drive
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.166 253542 DEBUG nova.storage.rbd_utils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:38 compute-0 ceph-mon[75015]: pgmap v1448: 321 pgs: 321 active+clean; 176 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 5.0 MiB/s wr, 116 op/s
Nov 25 08:31:38 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3057041135' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:38 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1842823693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.706 253542 INFO nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Creating config drive at /var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/disk.config
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.712 253542 DEBUG oslo_concurrency.processutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxzwi21vz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.747 253542 DEBUG nova.network.neutron [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Updated VIF entry in instance network info cache for port eec863b4-83ac-47db-bf76-d7ba4eee9f0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.748 253542 DEBUG nova.network.neutron [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Updating instance_info_cache with network_info: [{"id": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "address": "fa:16:3e:3e:4f:9c", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec863b4-83", "ovs_interfaceid": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.752 253542 DEBUG nova.network.neutron [req-fa20382a-c508-4481-b12f-b95d89243a21 req-01d5d1a1-43c6-4724-91e2-934a353de632 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updated VIF entry in instance network info cache for port 2456d48f-9440-411c-b5f2-5c27136126f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.752 253542 DEBUG nova.network.neutron [req-fa20382a-c508-4481-b12f-b95d89243a21 req-01d5d1a1-43c6-4724-91e2-934a353de632 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updating instance_info_cache with network_info: [{"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.774 253542 DEBUG oslo_concurrency.lockutils [req-fa20382a-c508-4481-b12f-b95d89243a21 req-01d5d1a1-43c6-4724-91e2-934a353de632 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.775 253542 DEBUG oslo_concurrency.lockutils [req-feb3592c-4b12-49e9-b695-19cc42fdeae9 req-340e3129-c0c7-4dd5-a688-540308d66093 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e7169151-5523-42c6-bebf-4b9ff2640da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.852 253542 DEBUG oslo_concurrency.processutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxzwi21vz" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.876 253542 DEBUG nova.storage.rbd_utils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:38 compute-0 nova_compute[253538]: 2025-11-25 08:31:38.879 253542 DEBUG oslo_concurrency.processutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/disk.config 450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.053 253542 DEBUG oslo_concurrency.processutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/disk.config 450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.055 253542 INFO nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Deleting local config drive /var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/disk.config because it was imported into RBD.
Nov 25 08:31:39 compute-0 kernel: tap2456d48f-94: entered promiscuous mode
Nov 25 08:31:39 compute-0 NetworkManager[48915]: <info>  [1764059499.1233] manager: (tap2456d48f-94): new Tun device (/org/freedesktop/NetworkManager/Devices/174)
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.125 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:39 compute-0 ovn_controller[152859]: 2025-11-25T08:31:39Z|00364|binding|INFO|Claiming lport 2456d48f-9440-411c-b5f2-5c27136126f9 for this chassis.
Nov 25 08:31:39 compute-0 ovn_controller[152859]: 2025-11-25T08:31:39Z|00365|binding|INFO|2456d48f-9440-411c-b5f2-5c27136126f9: Claiming fa:16:3e:c2:50:72 10.100.0.7
Nov 25 08:31:39 compute-0 NetworkManager[48915]: <info>  [1764059499.1406] device (tap2456d48f-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:31:39 compute-0 NetworkManager[48915]: <info>  [1764059499.1422] device (tap2456d48f-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:31:39 compute-0 ovn_controller[152859]: 2025-11-25T08:31:39Z|00366|binding|INFO|Setting lport 2456d48f-9440-411c-b5f2-5c27136126f9 ovn-installed in OVS
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.151 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.155 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:50:72 10.100.0.7'], port_security=['fa:16:3e:c2:50:72 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '450d4b82-4475-4cfc-b868-dc3b0fc37af5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1111213c-e81d-4f44-8d10-b8f2ced48789', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2456d48f-9440-411c-b5f2-5c27136126f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.156 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2456d48f-9440-411c-b5f2-5c27136126f9 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe bound to our chassis
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.157 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:31:39 compute-0 ovn_controller[152859]: 2025-11-25T08:31:39Z|00367|binding|INFO|Setting lport 2456d48f-9440-411c-b5f2-5c27136126f9 up in Southbound
Nov 25 08:31:39 compute-0 systemd-machined[215790]: New machine qemu-53-instance-00000030.
Nov 25 08:31:39 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-00000030.
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.170 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[77a971d2-cb43-4319-b70a-6e25a2ea34e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.173 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bf3cbfa-71 in ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:31:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 181 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 5.3 MiB/s wr, 121 op/s
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.175 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bf3cbfa-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.175 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[14e4860e-ab9b-446c-b105-1c543e28974b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.176 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2ca996-e41b-4575-903d-9372d951c318]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.192 253542 DEBUG nova.compute.manager [req-63730bcd-776f-4da2-b66c-25e26a678ebc req-c9a3920b-400b-425d-868e-e0fa97579f4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Received event network-vif-plugged-eec863b4-83ac-47db-bf76-d7ba4eee9f0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.193 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[7e4e1082-56c3-4a4c-aa54-c2107e967e53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.194 253542 DEBUG oslo_concurrency.lockutils [req-63730bcd-776f-4da2-b66c-25e26a678ebc req-c9a3920b-400b-425d-868e-e0fa97579f4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e7169151-5523-42c6-bebf-4b9ff2640da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.194 253542 DEBUG oslo_concurrency.lockutils [req-63730bcd-776f-4da2-b66c-25e26a678ebc req-c9a3920b-400b-425d-868e-e0fa97579f4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7169151-5523-42c6-bebf-4b9ff2640da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.195 253542 DEBUG oslo_concurrency.lockutils [req-63730bcd-776f-4da2-b66c-25e26a678ebc req-c9a3920b-400b-425d-868e-e0fa97579f4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7169151-5523-42c6-bebf-4b9ff2640da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.195 253542 DEBUG nova.compute.manager [req-63730bcd-776f-4da2-b66c-25e26a678ebc req-c9a3920b-400b-425d-868e-e0fa97579f4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Processing event network-vif-plugged-eec863b4-83ac-47db-bf76-d7ba4eee9f0d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.195 253542 DEBUG nova.compute.manager [req-63730bcd-776f-4da2-b66c-25e26a678ebc req-c9a3920b-400b-425d-868e-e0fa97579f4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Received event network-vif-plugged-eec863b4-83ac-47db-bf76-d7ba4eee9f0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.195 253542 DEBUG oslo_concurrency.lockutils [req-63730bcd-776f-4da2-b66c-25e26a678ebc req-c9a3920b-400b-425d-868e-e0fa97579f4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e7169151-5523-42c6-bebf-4b9ff2640da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.195 253542 DEBUG oslo_concurrency.lockutils [req-63730bcd-776f-4da2-b66c-25e26a678ebc req-c9a3920b-400b-425d-868e-e0fa97579f4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7169151-5523-42c6-bebf-4b9ff2640da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.196 253542 DEBUG oslo_concurrency.lockutils [req-63730bcd-776f-4da2-b66c-25e26a678ebc req-c9a3920b-400b-425d-868e-e0fa97579f4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7169151-5523-42c6-bebf-4b9ff2640da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.196 253542 DEBUG nova.compute.manager [req-63730bcd-776f-4da2-b66c-25e26a678ebc req-c9a3920b-400b-425d-868e-e0fa97579f4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] No waiting events found dispatching network-vif-plugged-eec863b4-83ac-47db-bf76-d7ba4eee9f0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.196 253542 WARNING nova.compute.manager [req-63730bcd-776f-4da2-b66c-25e26a678ebc req-c9a3920b-400b-425d-868e-e0fa97579f4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Received unexpected event network-vif-plugged-eec863b4-83ac-47db-bf76-d7ba4eee9f0d for instance with vm_state building and task_state spawning.
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.197 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.203 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059499.203015, e7169151-5523-42c6-bebf-4b9ff2640da3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.204 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] VM Resumed (Lifecycle Event)
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.208 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.213 253542 INFO nova.virt.libvirt.driver [-] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Instance spawned successfully.
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.214 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.225 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d76ccef8-0bfd-482d-85b7-c4739fe9260a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.236 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.251 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.261 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[310895f7-e2af-43a7-995e-6d1b9480f71c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:39 compute-0 NetworkManager[48915]: <info>  [1764059499.2672] manager: (tap9bf3cbfa-70): new Veth device (/org/freedesktop/NetworkManager/Devices/175)
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.265 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0638f959-da15-4c40-ac72-cedae1b70382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.269 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.270 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.270 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.271 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.271 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.272 253542 DEBUG nova.virt.libvirt.driver [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.295 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.294 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1eec9111-ea13-47d7-91c3-b1a096b98645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.297 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[476b9d40-0967-4dfa-81e2-6fb8fa889bd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.315 253542 DEBUG nova.compute.manager [req-b17eeca3-6026-44c0-b1ab-23c7e83aeab2 req-aee0eea9-1f87-4221-82c5-beea1ac6a0ce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Received event network-vif-plugged-2d3ff707-0229-412a-9743-73ef63760a76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.320 253542 DEBUG oslo_concurrency.lockutils [req-b17eeca3-6026-44c0-b1ab-23c7e83aeab2 req-aee0eea9-1f87-4221-82c5-beea1ac6a0ce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "44f92d51-7981-4905-8ec2-179722b520f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.321 253542 DEBUG oslo_concurrency.lockutils [req-b17eeca3-6026-44c0-b1ab-23c7e83aeab2 req-aee0eea9-1f87-4221-82c5-beea1ac6a0ce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "44f92d51-7981-4905-8ec2-179722b520f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.321 253542 DEBUG oslo_concurrency.lockutils [req-b17eeca3-6026-44c0-b1ab-23c7e83aeab2 req-aee0eea9-1f87-4221-82c5-beea1ac6a0ce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "44f92d51-7981-4905-8ec2-179722b520f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.321 253542 DEBUG nova.compute.manager [req-b17eeca3-6026-44c0-b1ab-23c7e83aeab2 req-aee0eea9-1f87-4221-82c5-beea1ac6a0ce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] No waiting events found dispatching network-vif-plugged-2d3ff707-0229-412a-9743-73ef63760a76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.322 253542 WARNING nova.compute.manager [req-b17eeca3-6026-44c0-b1ab-23c7e83aeab2 req-aee0eea9-1f87-4221-82c5-beea1ac6a0ce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Received unexpected event network-vif-plugged-2d3ff707-0229-412a-9743-73ef63760a76 for instance with vm_state active and task_state None.
Nov 25 08:31:39 compute-0 NetworkManager[48915]: <info>  [1764059499.3257] device (tap9bf3cbfa-70): carrier: link connected
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.325 253542 INFO nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Took 8.70 seconds to spawn the instance on the hypervisor.
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.326 253542 DEBUG nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.332 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae5361a-287d-4722-acc5-c3e23988c103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.351 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eefb37f1-9013-4970-a07a-0fc029a31128]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485392, 'reachable_time': 26442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305712, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.367 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[77035155-7513-4eaa-8fc6-292de249eae8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:8fc7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485392, 'tstamp': 485392}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305713, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.383 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[93af0c86-1ded-4572-833f-7a4441f6e3df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485392, 'reachable_time': 26442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305714, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.413 253542 INFO nova.compute.manager [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Took 10.40 seconds to build instance.
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.422 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bd158897-c7b0-4ec5-b905-9221566fa9ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.441 253542 DEBUG oslo_concurrency.lockutils [None req-ad59fa07-ba1e-4c13-b1be-024186aaca23 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "e7169151-5523-42c6-bebf-4b9ff2640da3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.493 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d45ef03-5aac-493a-96ff-9d6d0c0a26b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.495 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.495 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.495 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.497 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:39 compute-0 kernel: tap9bf3cbfa-70: entered promiscuous mode
Nov 25 08:31:39 compute-0 NetworkManager[48915]: <info>  [1764059499.4979] manager: (tap9bf3cbfa-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/176)
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.501 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:39 compute-0 ovn_controller[152859]: 2025-11-25T08:31:39Z|00368|binding|INFO|Releasing lport 98660c0c-0936-4c4d-9a89-87b784d8d5cc from this chassis (sb_readonly=0)
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.522 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.531 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.532 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[da7ea306-97b0-4b3c-8180-3e6595cf5605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.533 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.pid.haproxy
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:31:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:39.533 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'env', 'PROCESS_TAG=haproxy-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.740 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059499.7400317, 450d4b82-4475-4cfc-b868-dc3b0fc37af5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.741 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] VM Started (Lifecycle Event)
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.788 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.795 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059499.7402642, 450d4b82-4475-4cfc-b868-dc3b0fc37af5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.795 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] VM Paused (Lifecycle Event)
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.812 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.815 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:31:39 compute-0 nova_compute[253538]: 2025-11-25 08:31:39.834 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:31:39 compute-0 podman[305767]: 2025-11-25 08:31:39.843984139 +0000 UTC m=+0.102998388 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 08:31:39 compute-0 podman[305807]: 2025-11-25 08:31:39.969798186 +0000 UTC m=+0.082753078 container create a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:31:40 compute-0 podman[305807]: 2025-11-25 08:31:39.918270983 +0000 UTC m=+0.031225895 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:31:40 compute-0 systemd[1]: Started libpod-conmon-a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84.scope.
Nov 25 08:31:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6e1a4e2eca1eca6ce4c110ec30d8b792ff0a483f2c08ec38ee2cef1cf99ecc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:40 compute-0 podman[305807]: 2025-11-25 08:31:40.092901859 +0000 UTC m=+0.205856761 container init a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:31:40 compute-0 podman[305807]: 2025-11-25 08:31:40.0983772 +0000 UTC m=+0.211332092 container start a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:31:40 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[305822]: [NOTICE]   (305826) : New worker (305828) forked
Nov 25 08:31:40 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[305822]: [NOTICE]   (305826) : Loading success.
Nov 25 08:31:40 compute-0 nova_compute[253538]: 2025-11-25 08:31:40.329 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:40 compute-0 ceph-mon[75015]: pgmap v1449: 321 pgs: 321 active+clean; 181 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 5.3 MiB/s wr, 121 op/s
Nov 25 08:31:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:31:40 compute-0 nova_compute[253538]: 2025-11-25 08:31:40.939 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:40 compute-0 ovn_controller[152859]: 2025-11-25T08:31:40Z|00369|binding|INFO|Releasing lport d2f4cf46-9887-4d81-a2f6-0b34dd30bde4 from this chassis (sb_readonly=0)
Nov 25 08:31:40 compute-0 ovn_controller[152859]: 2025-11-25T08:31:40Z|00370|binding|INFO|Releasing lport 98660c0c-0936-4c4d-9a89-87b784d8d5cc from this chassis (sb_readonly=0)
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.032 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.042 253542 DEBUG oslo_concurrency.lockutils [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "44f92d51-7981-4905-8ec2-179722b520f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.043 253542 DEBUG oslo_concurrency.lockutils [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "44f92d51-7981-4905-8ec2-179722b520f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.044 253542 DEBUG oslo_concurrency.lockutils [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "44f92d51-7981-4905-8ec2-179722b520f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.044 253542 DEBUG oslo_concurrency.lockutils [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "44f92d51-7981-4905-8ec2-179722b520f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.045 253542 DEBUG oslo_concurrency.lockutils [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "44f92d51-7981-4905-8ec2-179722b520f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.046 253542 INFO nova.compute.manager [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Terminating instance
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.047 253542 DEBUG nova.compute.manager [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.056 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.057 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.057 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:41 compute-0 kernel: tap2d3ff707-02 (unregistering): left promiscuous mode
Nov 25 08:31:41 compute-0 NetworkManager[48915]: <info>  [1764059501.1285] device (tap2d3ff707-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.148 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 ovn_controller[152859]: 2025-11-25T08:31:41Z|00371|binding|INFO|Releasing lport d2f4cf46-9887-4d81-a2f6-0b34dd30bde4 from this chassis (sb_readonly=0)
Nov 25 08:31:41 compute-0 ovn_controller[152859]: 2025-11-25T08:31:41Z|00372|binding|INFO|Releasing lport 2d3ff707-0229-412a-9743-73ef63760a76 from this chassis (sb_readonly=0)
Nov 25 08:31:41 compute-0 ovn_controller[152859]: 2025-11-25T08:31:41Z|00373|binding|INFO|Releasing lport 98660c0c-0936-4c4d-9a89-87b784d8d5cc from this chassis (sb_readonly=0)
Nov 25 08:31:41 compute-0 ovn_controller[152859]: 2025-11-25T08:31:41Z|00374|binding|INFO|Removing iface tap2d3ff707-02 ovn-installed in OVS
Nov 25 08:31:41 compute-0 ovn_controller[152859]: 2025-11-25T08:31:41Z|00375|binding|INFO|Setting lport 2d3ff707-0229-412a-9743-73ef63760a76 down in Southbound
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.163 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.175 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:29:93 10.100.0.8'], port_security=['fa:16:3e:bb:29:93 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '44f92d51-7981-4905-8ec2-179722b520f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8098b3bc99ed4993a40e217876568115', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fca0453d-8e16-42b4-ac5c-91be3458ce00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eafba2a5-2b65-4a07-b367-d4c961f70ef4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2d3ff707-0229-412a-9743-73ef63760a76) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:31:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 181 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.3 MiB/s wr, 171 op/s
Nov 25 08:31:41 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Nov 25 08:31:41 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002e.scope: Consumed 4.454s CPU time.
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.178 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.181 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2d3ff707-0229-412a-9743-73ef63760a76 in datapath 65eaa05f-c76b-4ec2-a994-1bde6d48fae1 unbound from our chassis
Nov 25 08:31:41 compute-0 systemd-machined[215790]: Machine qemu-51-instance-0000002e terminated.
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.183 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65eaa05f-c76b-4ec2-a994-1bde6d48fae1
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.189 253542 DEBUG oslo_concurrency.lockutils [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "e7169151-5523-42c6-bebf-4b9ff2640da3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.190 253542 DEBUG oslo_concurrency.lockutils [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "e7169151-5523-42c6-bebf-4b9ff2640da3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.190 253542 DEBUG oslo_concurrency.lockutils [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "e7169151-5523-42c6-bebf-4b9ff2640da3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.190 253542 DEBUG oslo_concurrency.lockutils [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "e7169151-5523-42c6-bebf-4b9ff2640da3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.191 253542 DEBUG oslo_concurrency.lockutils [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "e7169151-5523-42c6-bebf-4b9ff2640da3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.192 253542 INFO nova.compute.manager [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Terminating instance
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.193 253542 DEBUG nova.compute.manager [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.197 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd13a05-d58e-4d64-a20d-eb1c5fc62c2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.224 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ef14b0-82ad-40a1-9690-03a44a9b717c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.227 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b2d1c9-7bac-44db-b808-79345e54e863]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:41 compute-0 kernel: tapeec863b4-83 (unregistering): left promiscuous mode
Nov 25 08:31:41 compute-0 NetworkManager[48915]: <info>  [1764059501.2348] device (tapeec863b4-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.245 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 ovn_controller[152859]: 2025-11-25T08:31:41Z|00376|binding|INFO|Releasing lport eec863b4-83ac-47db-bf76-d7ba4eee9f0d from this chassis (sb_readonly=0)
Nov 25 08:31:41 compute-0 ovn_controller[152859]: 2025-11-25T08:31:41Z|00377|binding|INFO|Setting lport eec863b4-83ac-47db-bf76-d7ba4eee9f0d down in Southbound
Nov 25 08:31:41 compute-0 ovn_controller[152859]: 2025-11-25T08:31:41Z|00378|binding|INFO|Removing iface tapeec863b4-83 ovn-installed in OVS
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.248 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.254 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:4f:9c 10.100.0.10'], port_security=['fa:16:3e:3e:4f:9c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e7169151-5523-42c6-bebf-4b9ff2640da3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8098b3bc99ed4993a40e217876568115', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fca0453d-8e16-42b4-ac5c-91be3458ce00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eafba2a5-2b65-4a07-b367-d4c961f70ef4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=eec863b4-83ac-47db-bf76-d7ba4eee9f0d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.256 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8e634611-063a-49c5-a9df-1bd298e63520]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.262 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.275 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1bdc46-5a11-47dc-9c1f-8baea26affb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65eaa05f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:f2:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485126, 'reachable_time': 39383, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305853, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:41 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Nov 25 08:31:41 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002f.scope: Consumed 2.716s CPU time.
Nov 25 08:31:41 compute-0 systemd-machined[215790]: Machine qemu-52-instance-0000002f terminated.
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.292 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[572b4b34-6171-4bbe-9782-d7338fe6fa63]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap65eaa05f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485137, 'tstamp': 485137}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305858, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap65eaa05f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485142, 'tstamp': 485142}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305858, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.294 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65eaa05f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.296 253542 INFO nova.virt.libvirt.driver [-] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Instance destroyed successfully.
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.296 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.297 253542 DEBUG nova.objects.instance [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lazy-loading 'resources' on Instance uuid 44f92d51-7981-4905-8ec2-179722b520f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.303 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.304 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65eaa05f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.304 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.305 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65eaa05f-c0, col_values=(('external_ids', {'iface-id': 'd2f4cf46-9887-4d81-a2f6-0b34dd30bde4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.305 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.306 162739 INFO neutron.agent.ovn.metadata.agent [-] Port eec863b4-83ac-47db-bf76-d7ba4eee9f0d in datapath 65eaa05f-c76b-4ec2-a994-1bde6d48fae1 unbound from our chassis
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.307 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65eaa05f-c76b-4ec2-a994-1bde6d48fae1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.308 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe062b6-efc4-44e5-a3a6-bf897dc1d8eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.308 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1 namespace which is not needed anymore
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.313 253542 DEBUG nova.virt.libvirt.vif [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1423595435',display_name='tempest-MultipleCreateTestJSON-server-1423595435-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1423595435-1',id=46,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:31:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8098b3bc99ed4993a40e217876568115',ramdisk_id='',reservation_id='r-kx0ioy43',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1813277106',owner_user_name='tempest-MultipleCreateTestJSON-1813277106-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:37Z,user_data=None,user_id='e3ba89d7ba114005bf727750ed2eb249',uuid=44f92d51-7981-4905-8ec2-179722b520f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2d3ff707-0229-412a-9743-73ef63760a76", "address": "fa:16:3e:bb:29:93", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d3ff707-02", "ovs_interfaceid": "2d3ff707-0229-412a-9743-73ef63760a76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.314 253542 DEBUG nova.network.os_vif_util [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converting VIF {"id": "2d3ff707-0229-412a-9743-73ef63760a76", "address": "fa:16:3e:bb:29:93", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d3ff707-02", "ovs_interfaceid": "2d3ff707-0229-412a-9743-73ef63760a76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.315 253542 DEBUG nova.network.os_vif_util [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:29:93,bridge_name='br-int',has_traffic_filtering=True,id=2d3ff707-0229-412a-9743-73ef63760a76,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d3ff707-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.316 253542 DEBUG os_vif [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:29:93,bridge_name='br-int',has_traffic_filtering=True,id=2d3ff707-0229-412a-9743-73ef63760a76,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d3ff707-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.318 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.318 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d3ff707-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.320 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.322 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.325 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.329 253542 INFO os_vif [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:29:93,bridge_name='br-int',has_traffic_filtering=True,id=2d3ff707-0229-412a-9743-73ef63760a76,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d3ff707-02')
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.427 253542 INFO nova.virt.libvirt.driver [-] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Instance destroyed successfully.
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.428 253542 DEBUG nova.objects.instance [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lazy-loading 'resources' on Instance uuid e7169151-5523-42c6-bebf-4b9ff2640da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.438 253542 DEBUG nova.virt.libvirt.vif [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1423595435',display_name='tempest-MultipleCreateTestJSON-server-1423595435-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1423595435-2',id=47,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-25T08:31:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8098b3bc99ed4993a40e217876568115',ramdisk_id='',reservation_id='r-kx0ioy43',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1813277106',owner_user_name='tempest-MultipleCreateTestJSON-1813277106-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:39Z,user_data=None,user_id='e3ba89d7ba114005bf727750ed2eb249',uuid=e7169151-5523-42c6-bebf-4b9ff2640da3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "address": "fa:16:3e:3e:4f:9c", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec863b4-83", "ovs_interfaceid": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.439 253542 DEBUG nova.network.os_vif_util [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converting VIF {"id": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "address": "fa:16:3e:3e:4f:9c", "network": {"id": "65eaa05f-c76b-4ec2-a994-1bde6d48fae1", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1709665371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8098b3bc99ed4993a40e217876568115", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec863b4-83", "ovs_interfaceid": "eec863b4-83ac-47db-bf76-d7ba4eee9f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.440 253542 DEBUG nova.network.os_vif_util [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:4f:9c,bridge_name='br-int',has_traffic_filtering=True,id=eec863b4-83ac-47db-bf76-d7ba4eee9f0d,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec863b4-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.441 253542 DEBUG os_vif [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:4f:9c,bridge_name='br-int',has_traffic_filtering=True,id=eec863b4-83ac-47db-bf76-d7ba4eee9f0d,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec863b4-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.442 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.443 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeec863b4-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.444 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1[305531]: [NOTICE]   (305559) : haproxy version is 2.8.14-c23fe91
Nov 25 08:31:41 compute-0 neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1[305531]: [NOTICE]   (305559) : path to executable is /usr/sbin/haproxy
Nov 25 08:31:41 compute-0 neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1[305531]: [WARNING]  (305559) : Exiting Master process...
Nov 25 08:31:41 compute-0 neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1[305531]: [ALERT]    (305559) : Current worker (305578) exited with code 143 (Terminated)
Nov 25 08:31:41 compute-0 neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1[305531]: [WARNING]  (305559) : All workers exited. Exiting... (0)
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.448 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.451 253542 INFO os_vif [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:4f:9c,bridge_name='br-int',has_traffic_filtering=True,id=eec863b4-83ac-47db-bf76-d7ba4eee9f0d,network=Network(65eaa05f-c76b-4ec2-a994-1bde6d48fae1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec863b4-83')
Nov 25 08:31:41 compute-0 systemd[1]: libpod-a7b4c426a8a573ba8714c8c58a09a5e786656f81a5cbfcd2f8bd3ee0e5c06c42.scope: Deactivated successfully.
Nov 25 08:31:41 compute-0 podman[305906]: 2025-11-25 08:31:41.457008672 +0000 UTC m=+0.055277949 container died a7b4c426a8a573ba8714c8c58a09a5e786656f81a5cbfcd2f8bd3ee0e5c06c42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 08:31:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7b4c426a8a573ba8714c8c58a09a5e786656f81a5cbfcd2f8bd3ee0e5c06c42-userdata-shm.mount: Deactivated successfully.
Nov 25 08:31:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e38c35fb9cffac7c12c797a37349b7a4addede0ad12183983cb45e13dca3f860-merged.mount: Deactivated successfully.
Nov 25 08:31:41 compute-0 podman[305906]: 2025-11-25 08:31:41.517444002 +0000 UTC m=+0.115713279 container cleanup a7b4c426a8a573ba8714c8c58a09a5e786656f81a5cbfcd2f8bd3ee0e5c06c42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:31:41 compute-0 systemd[1]: libpod-conmon-a7b4c426a8a573ba8714c8c58a09a5e786656f81a5cbfcd2f8bd3ee0e5c06c42.scope: Deactivated successfully.
Nov 25 08:31:41 compute-0 podman[305965]: 2025-11-25 08:31:41.604618812 +0000 UTC m=+0.059878036 container remove a7b4c426a8a573ba8714c8c58a09a5e786656f81a5cbfcd2f8bd3ee0e5c06c42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.612 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ba50c2-b58e-4aaf-ba03-f2da77a5b531]: (4, ('Tue Nov 25 08:31:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1 (a7b4c426a8a573ba8714c8c58a09a5e786656f81a5cbfcd2f8bd3ee0e5c06c42)\na7b4c426a8a573ba8714c8c58a09a5e786656f81a5cbfcd2f8bd3ee0e5c06c42\nTue Nov 25 08:31:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1 (a7b4c426a8a573ba8714c8c58a09a5e786656f81a5cbfcd2f8bd3ee0e5c06c42)\na7b4c426a8a573ba8714c8c58a09a5e786656f81a5cbfcd2f8bd3ee0e5c06c42\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.614 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fe70fc2a-a5ee-4899-b771-435175787235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.615 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65eaa05f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:41 compute-0 kernel: tap65eaa05f-c0: left promiscuous mode
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.617 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.631 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.634 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6bf9a0-aeaa-475e-836c-e38ad343a227]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.659 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[353e9816-3936-46d2-8c4c-6a927b0d3913]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.661 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[adf8bc68-ba4c-452e-8818-afa579eacaea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.676 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b29fed-0f3b-4e83-8fd0-cdb8248a9f46]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485119, 'reachable_time': 15147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305981, 'error': None, 'target': 'ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.679 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-65eaa05f-c76b-4ec2-a994-1bde6d48fae1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:31:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d65eaa05f\x2dc76b\x2d4ec2\x2da994\x2d1bde6d48fae1.mount: Deactivated successfully.
Nov 25 08:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:41.680 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[c93b9317-8926-4bb9-888d-2960cb22db22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.743 253542 DEBUG nova.compute.manager [req-26c44f8f-c741-418b-87f3-3228aa2ad91f req-9e10485c-f0a3-4c97-90ca-f910a03e2a9e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-plugged-2456d48f-9440-411c-b5f2-5c27136126f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.743 253542 DEBUG oslo_concurrency.lockutils [req-26c44f8f-c741-418b-87f3-3228aa2ad91f req-9e10485c-f0a3-4c97-90ca-f910a03e2a9e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.744 253542 DEBUG oslo_concurrency.lockutils [req-26c44f8f-c741-418b-87f3-3228aa2ad91f req-9e10485c-f0a3-4c97-90ca-f910a03e2a9e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.745 253542 DEBUG oslo_concurrency.lockutils [req-26c44f8f-c741-418b-87f3-3228aa2ad91f req-9e10485c-f0a3-4c97-90ca-f910a03e2a9e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.746 253542 DEBUG nova.compute.manager [req-26c44f8f-c741-418b-87f3-3228aa2ad91f req-9e10485c-f0a3-4c97-90ca-f910a03e2a9e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Processing event network-vif-plugged-2456d48f-9440-411c-b5f2-5c27136126f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.747 253542 DEBUG nova.compute.manager [req-26c44f8f-c741-418b-87f3-3228aa2ad91f req-9e10485c-f0a3-4c97-90ca-f910a03e2a9e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-plugged-2456d48f-9440-411c-b5f2-5c27136126f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.748 253542 DEBUG oslo_concurrency.lockutils [req-26c44f8f-c741-418b-87f3-3228aa2ad91f req-9e10485c-f0a3-4c97-90ca-f910a03e2a9e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.748 253542 DEBUG oslo_concurrency.lockutils [req-26c44f8f-c741-418b-87f3-3228aa2ad91f req-9e10485c-f0a3-4c97-90ca-f910a03e2a9e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.749 253542 DEBUG oslo_concurrency.lockutils [req-26c44f8f-c741-418b-87f3-3228aa2ad91f req-9e10485c-f0a3-4c97-90ca-f910a03e2a9e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.749 253542 DEBUG nova.compute.manager [req-26c44f8f-c741-418b-87f3-3228aa2ad91f req-9e10485c-f0a3-4c97-90ca-f910a03e2a9e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] No waiting events found dispatching network-vif-plugged-2456d48f-9440-411c-b5f2-5c27136126f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.750 253542 WARNING nova.compute.manager [req-26c44f8f-c741-418b-87f3-3228aa2ad91f req-9e10485c-f0a3-4c97-90ca-f910a03e2a9e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received unexpected event network-vif-plugged-2456d48f-9440-411c-b5f2-5c27136126f9 for instance with vm_state building and task_state spawning.
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.752 253542 DEBUG nova.compute.manager [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.756 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059501.7563026, 450d4b82-4475-4cfc-b868-dc3b0fc37af5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.757 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] VM Resumed (Lifecycle Event)
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.760 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.764 253542 INFO nova.virt.libvirt.driver [-] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Instance spawned successfully.
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.765 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.778 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.783 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.786 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.787 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.787 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.787 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.788 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.788 253542 DEBUG nova.virt.libvirt.driver [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.811 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.842 253542 INFO nova.compute.manager [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Took 8.77 seconds to spawn the instance on the hypervisor.
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.842 253542 DEBUG nova.compute.manager [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.899 253542 INFO nova.compute.manager [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Took 9.70 seconds to build instance.
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.912 253542 DEBUG oslo_concurrency.lockutils [None req-3ad81f6e-adc7-487c-9459-8bc90f7117c5 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.991 253542 INFO nova.virt.libvirt.driver [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Deleting instance files /var/lib/nova/instances/44f92d51-7981-4905-8ec2-179722b520f9_del
Nov 25 08:31:41 compute-0 nova_compute[253538]: 2025-11-25 08:31:41.992 253542 INFO nova.virt.libvirt.driver [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Deletion of /var/lib/nova/instances/44f92d51-7981-4905-8ec2-179722b520f9_del complete
Nov 25 08:31:42 compute-0 nova_compute[253538]: 2025-11-25 08:31:42.050 253542 INFO nova.virt.libvirt.driver [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Deleting instance files /var/lib/nova/instances/e7169151-5523-42c6-bebf-4b9ff2640da3_del
Nov 25 08:31:42 compute-0 nova_compute[253538]: 2025-11-25 08:31:42.050 253542 INFO nova.virt.libvirt.driver [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Deletion of /var/lib/nova/instances/e7169151-5523-42c6-bebf-4b9ff2640da3_del complete
Nov 25 08:31:42 compute-0 nova_compute[253538]: 2025-11-25 08:31:42.056 253542 INFO nova.compute.manager [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Took 1.01 seconds to destroy the instance on the hypervisor.
Nov 25 08:31:42 compute-0 nova_compute[253538]: 2025-11-25 08:31:42.056 253542 DEBUG oslo.service.loopingcall [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:31:42 compute-0 nova_compute[253538]: 2025-11-25 08:31:42.056 253542 DEBUG nova.compute.manager [-] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:31:42 compute-0 nova_compute[253538]: 2025-11-25 08:31:42.056 253542 DEBUG nova.network.neutron [-] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:31:42 compute-0 nova_compute[253538]: 2025-11-25 08:31:42.093 253542 INFO nova.compute.manager [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Took 0.90 seconds to destroy the instance on the hypervisor.
Nov 25 08:31:42 compute-0 nova_compute[253538]: 2025-11-25 08:31:42.093 253542 DEBUG oslo.service.loopingcall [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:31:42 compute-0 nova_compute[253538]: 2025-11-25 08:31:42.094 253542 DEBUG nova.compute.manager [-] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:31:42 compute-0 nova_compute[253538]: 2025-11-25 08:31:42.094 253542 DEBUG nova.network.neutron [-] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:31:42 compute-0 ceph-mon[75015]: pgmap v1450: 321 pgs: 321 active+clean; 181 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.3 MiB/s wr, 171 op/s
Nov 25 08:31:42 compute-0 nova_compute[253538]: 2025-11-25 08:31:42.649 253542 DEBUG nova.network.neutron [-] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:42 compute-0 nova_compute[253538]: 2025-11-25 08:31:42.662 253542 INFO nova.compute.manager [-] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Took 0.61 seconds to deallocate network for instance.
Nov 25 08:31:42 compute-0 nova_compute[253538]: 2025-11-25 08:31:42.708 253542 DEBUG nova.compute.manager [req-4dfc691a-3c11-417f-8f72-c77e4c6c28a3 req-0a44071b-ad7e-44d1-9073-ebc9a4e8b0b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Received event network-vif-deleted-2d3ff707-0229-412a-9743-73ef63760a76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:42 compute-0 nova_compute[253538]: 2025-11-25 08:31:42.711 253542 DEBUG oslo_concurrency.lockutils [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:42 compute-0 nova_compute[253538]: 2025-11-25 08:31:42.712 253542 DEBUG oslo_concurrency.lockutils [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:42 compute-0 nova_compute[253538]: 2025-11-25 08:31:42.816 253542 DEBUG oslo_concurrency.processutils [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:43 compute-0 nova_compute[253538]: 2025-11-25 08:31:43.152 253542 DEBUG nova.network.neutron [-] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:43 compute-0 nova_compute[253538]: 2025-11-25 08:31:43.176 253542 INFO nova.compute.manager [-] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Took 1.08 seconds to deallocate network for instance.
Nov 25 08:31:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 151 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.4 MiB/s wr, 232 op/s
Nov 25 08:31:43 compute-0 nova_compute[253538]: 2025-11-25 08:31:43.231 253542 DEBUG oslo_concurrency.lockutils [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:31:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/371660903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:43 compute-0 nova_compute[253538]: 2025-11-25 08:31:43.308 253542 DEBUG oslo_concurrency.processutils [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:43 compute-0 nova_compute[253538]: 2025-11-25 08:31:43.317 253542 DEBUG nova.compute.provider_tree [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:31:43 compute-0 nova_compute[253538]: 2025-11-25 08:31:43.334 253542 DEBUG nova.scheduler.client.report [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:31:43 compute-0 nova_compute[253538]: 2025-11-25 08:31:43.367 253542 DEBUG oslo_concurrency.lockutils [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:43 compute-0 nova_compute[253538]: 2025-11-25 08:31:43.371 253542 DEBUG oslo_concurrency.lockutils [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:43 compute-0 nova_compute[253538]: 2025-11-25 08:31:43.409 253542 INFO nova.scheduler.client.report [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Deleted allocations for instance 44f92d51-7981-4905-8ec2-179722b520f9
Nov 25 08:31:43 compute-0 nova_compute[253538]: 2025-11-25 08:31:43.441 253542 DEBUG oslo_concurrency.processutils [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/371660903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:43 compute-0 nova_compute[253538]: 2025-11-25 08:31:43.491 253542 DEBUG oslo_concurrency.lockutils [None req-dcc6446d-af99-4f71-91f1-9eb29a868803 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "44f92d51-7981-4905-8ec2-179722b520f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:43 compute-0 podman[306025]: 2025-11-25 08:31:43.845758797 +0000 UTC m=+0.096032886 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:31:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:31:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3622210238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:43 compute-0 nova_compute[253538]: 2025-11-25 08:31:43.927 253542 DEBUG oslo_concurrency.processutils [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:43 compute-0 nova_compute[253538]: 2025-11-25 08:31:43.933 253542 DEBUG nova.compute.provider_tree [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:31:43 compute-0 nova_compute[253538]: 2025-11-25 08:31:43.991 253542 DEBUG nova.scheduler.client.report [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:31:44 compute-0 nova_compute[253538]: 2025-11-25 08:31:44.007 253542 DEBUG oslo_concurrency.lockutils [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:44 compute-0 nova_compute[253538]: 2025-11-25 08:31:44.037 253542 INFO nova.scheduler.client.report [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Deleted allocations for instance e7169151-5523-42c6-bebf-4b9ff2640da3
Nov 25 08:31:44 compute-0 nova_compute[253538]: 2025-11-25 08:31:44.112 253542 DEBUG oslo_concurrency.lockutils [None req-f521ec3c-5256-4a20-9f73-3c86efa2aae2 e3ba89d7ba114005bf727750ed2eb249 8098b3bc99ed4993a40e217876568115 - - default default] Lock "e7169151-5523-42c6-bebf-4b9ff2640da3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:44 compute-0 nova_compute[253538]: 2025-11-25 08:31:44.138 253542 DEBUG nova.compute.manager [req-8093ea24-c489-435a-a9e9-7602f3107a16 req-a9525f23-5ac9-4456-97cf-8a4caaa8711a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Received event network-vif-unplugged-2d3ff707-0229-412a-9743-73ef63760a76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:44 compute-0 nova_compute[253538]: 2025-11-25 08:31:44.139 253542 DEBUG oslo_concurrency.lockutils [req-8093ea24-c489-435a-a9e9-7602f3107a16 req-a9525f23-5ac9-4456-97cf-8a4caaa8711a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "44f92d51-7981-4905-8ec2-179722b520f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:44 compute-0 nova_compute[253538]: 2025-11-25 08:31:44.139 253542 DEBUG oslo_concurrency.lockutils [req-8093ea24-c489-435a-a9e9-7602f3107a16 req-a9525f23-5ac9-4456-97cf-8a4caaa8711a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "44f92d51-7981-4905-8ec2-179722b520f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:44 compute-0 nova_compute[253538]: 2025-11-25 08:31:44.139 253542 DEBUG oslo_concurrency.lockutils [req-8093ea24-c489-435a-a9e9-7602f3107a16 req-a9525f23-5ac9-4456-97cf-8a4caaa8711a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "44f92d51-7981-4905-8ec2-179722b520f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:44 compute-0 nova_compute[253538]: 2025-11-25 08:31:44.139 253542 DEBUG nova.compute.manager [req-8093ea24-c489-435a-a9e9-7602f3107a16 req-a9525f23-5ac9-4456-97cf-8a4caaa8711a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] No waiting events found dispatching network-vif-unplugged-2d3ff707-0229-412a-9743-73ef63760a76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:44 compute-0 nova_compute[253538]: 2025-11-25 08:31:44.139 253542 WARNING nova.compute.manager [req-8093ea24-c489-435a-a9e9-7602f3107a16 req-a9525f23-5ac9-4456-97cf-8a4caaa8711a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Received unexpected event network-vif-unplugged-2d3ff707-0229-412a-9743-73ef63760a76 for instance with vm_state deleted and task_state None.
Nov 25 08:31:44 compute-0 nova_compute[253538]: 2025-11-25 08:31:44.140 253542 DEBUG nova.compute.manager [req-8093ea24-c489-435a-a9e9-7602f3107a16 req-a9525f23-5ac9-4456-97cf-8a4caaa8711a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Received event network-vif-plugged-2d3ff707-0229-412a-9743-73ef63760a76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:44 compute-0 nova_compute[253538]: 2025-11-25 08:31:44.140 253542 DEBUG oslo_concurrency.lockutils [req-8093ea24-c489-435a-a9e9-7602f3107a16 req-a9525f23-5ac9-4456-97cf-8a4caaa8711a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "44f92d51-7981-4905-8ec2-179722b520f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:44 compute-0 nova_compute[253538]: 2025-11-25 08:31:44.140 253542 DEBUG oslo_concurrency.lockutils [req-8093ea24-c489-435a-a9e9-7602f3107a16 req-a9525f23-5ac9-4456-97cf-8a4caaa8711a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "44f92d51-7981-4905-8ec2-179722b520f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:44 compute-0 nova_compute[253538]: 2025-11-25 08:31:44.140 253542 DEBUG oslo_concurrency.lockutils [req-8093ea24-c489-435a-a9e9-7602f3107a16 req-a9525f23-5ac9-4456-97cf-8a4caaa8711a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "44f92d51-7981-4905-8ec2-179722b520f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:44 compute-0 nova_compute[253538]: 2025-11-25 08:31:44.140 253542 DEBUG nova.compute.manager [req-8093ea24-c489-435a-a9e9-7602f3107a16 req-a9525f23-5ac9-4456-97cf-8a4caaa8711a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] No waiting events found dispatching network-vif-plugged-2d3ff707-0229-412a-9743-73ef63760a76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:31:44 compute-0 nova_compute[253538]: 2025-11-25 08:31:44.141 253542 WARNING nova.compute.manager [req-8093ea24-c489-435a-a9e9-7602f3107a16 req-a9525f23-5ac9-4456-97cf-8a4caaa8711a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Received unexpected event network-vif-plugged-2d3ff707-0229-412a-9743-73ef63760a76 for instance with vm_state deleted and task_state None.
Nov 25 08:31:44 compute-0 nova_compute[253538]: 2025-11-25 08:31:44.141 253542 DEBUG nova.compute.manager [req-8093ea24-c489-435a-a9e9-7602f3107a16 req-a9525f23-5ac9-4456-97cf-8a4caaa8711a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Received event network-vif-deleted-eec863b4-83ac-47db-bf76-d7ba4eee9f0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:44 compute-0 ceph-mon[75015]: pgmap v1451: 321 pgs: 321 active+clean; 151 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.4 MiB/s wr, 232 op/s
Nov 25 08:31:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3622210238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:45 compute-0 nova_compute[253538]: 2025-11-25 08:31:45.103 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:45 compute-0 NetworkManager[48915]: <info>  [1764059505.1041] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Nov 25 08:31:45 compute-0 NetworkManager[48915]: <info>  [1764059505.1053] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Nov 25 08:31:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 88 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.4 MiB/s wr, 358 op/s
Nov 25 08:31:45 compute-0 nova_compute[253538]: 2025-11-25 08:31:45.209 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:45 compute-0 ovn_controller[152859]: 2025-11-25T08:31:45Z|00379|binding|INFO|Releasing lport 98660c0c-0936-4c4d-9a89-87b784d8d5cc from this chassis (sb_readonly=0)
Nov 25 08:31:45 compute-0 nova_compute[253538]: 2025-11-25 08:31:45.223 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:45 compute-0 nova_compute[253538]: 2025-11-25 08:31:45.249 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:45 compute-0 nova_compute[253538]: 2025-11-25 08:31:45.376 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:31:46 compute-0 nova_compute[253538]: 2025-11-25 08:31:46.258 253542 DEBUG nova.compute.manager [req-6327e95c-c52c-406b-8c4c-06507e085510 req-6217527f-62fe-4c9c-b366-6016ff073a72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-changed-2456d48f-9440-411c-b5f2-5c27136126f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:46 compute-0 nova_compute[253538]: 2025-11-25 08:31:46.258 253542 DEBUG nova.compute.manager [req-6327e95c-c52c-406b-8c4c-06507e085510 req-6217527f-62fe-4c9c-b366-6016ff073a72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Refreshing instance network info cache due to event network-changed-2456d48f-9440-411c-b5f2-5c27136126f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:31:46 compute-0 nova_compute[253538]: 2025-11-25 08:31:46.258 253542 DEBUG oslo_concurrency.lockutils [req-6327e95c-c52c-406b-8c4c-06507e085510 req-6217527f-62fe-4c9c-b366-6016ff073a72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:46 compute-0 nova_compute[253538]: 2025-11-25 08:31:46.258 253542 DEBUG oslo_concurrency.lockutils [req-6327e95c-c52c-406b-8c4c-06507e085510 req-6217527f-62fe-4c9c-b366-6016ff073a72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:46 compute-0 nova_compute[253538]: 2025-11-25 08:31:46.259 253542 DEBUG nova.network.neutron [req-6327e95c-c52c-406b-8c4c-06507e085510 req-6217527f-62fe-4c9c-b366-6016ff073a72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Refreshing network info cache for port 2456d48f-9440-411c-b5f2-5c27136126f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:31:46 compute-0 ovn_controller[152859]: 2025-11-25T08:31:46Z|00380|binding|INFO|Releasing lport 98660c0c-0936-4c4d-9a89-87b784d8d5cc from this chassis (sb_readonly=0)
Nov 25 08:31:46 compute-0 nova_compute[253538]: 2025-11-25 08:31:46.489 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:46 compute-0 ceph-mon[75015]: pgmap v1452: 321 pgs: 321 active+clean; 88 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.4 MiB/s wr, 358 op/s
Nov 25 08:31:47 compute-0 nova_compute[253538]: 2025-11-25 08:31:47.161 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059492.1600246, 21243957-732f-435d-854b-56d6dd7c1ee5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:47 compute-0 nova_compute[253538]: 2025-11-25 08:31:47.161 253542 INFO nova.compute.manager [-] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] VM Stopped (Lifecycle Event)
Nov 25 08:31:47 compute-0 nova_compute[253538]: 2025-11-25 08:31:47.177 253542 DEBUG nova.compute.manager [None req-f50c672e-6191-4433-b2b6-5ffc0adcea0a - - - - - -] [instance: 21243957-732f-435d-854b-56d6dd7c1ee5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 949 KiB/s wr, 289 op/s
Nov 25 08:31:48 compute-0 nova_compute[253538]: 2025-11-25 08:31:48.190 253542 DEBUG nova.network.neutron [req-6327e95c-c52c-406b-8c4c-06507e085510 req-6217527f-62fe-4c9c-b366-6016ff073a72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updated VIF entry in instance network info cache for port 2456d48f-9440-411c-b5f2-5c27136126f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:31:48 compute-0 nova_compute[253538]: 2025-11-25 08:31:48.191 253542 DEBUG nova.network.neutron [req-6327e95c-c52c-406b-8c4c-06507e085510 req-6217527f-62fe-4c9c-b366-6016ff073a72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updating instance_info_cache with network_info: [{"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:48 compute-0 nova_compute[253538]: 2025-11-25 08:31:48.208 253542 DEBUG oslo_concurrency.lockutils [req-6327e95c-c52c-406b-8c4c-06507e085510 req-6217527f-62fe-4c9c-b366-6016ff073a72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:48 compute-0 ceph-mon[75015]: pgmap v1453: 321 pgs: 321 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 949 KiB/s wr, 289 op/s
Nov 25 08:31:49 compute-0 nova_compute[253538]: 2025-11-25 08:31:49.169 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:49 compute-0 nova_compute[253538]: 2025-11-25 08:31:49.169 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 383 KiB/s wr, 271 op/s
Nov 25 08:31:49 compute-0 nova_compute[253538]: 2025-11-25 08:31:49.204 253542 DEBUG nova.compute.manager [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:31:49 compute-0 nova_compute[253538]: 2025-11-25 08:31:49.329 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:49 compute-0 nova_compute[253538]: 2025-11-25 08:31:49.330 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:49 compute-0 nova_compute[253538]: 2025-11-25 08:31:49.337 253542 DEBUG nova.virt.hardware [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:31:49 compute-0 nova_compute[253538]: 2025-11-25 08:31:49.338 253542 INFO nova.compute.claims [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:31:49 compute-0 nova_compute[253538]: 2025-11-25 08:31:49.488 253542 DEBUG oslo_concurrency.processutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:31:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2255527565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:49 compute-0 nova_compute[253538]: 2025-11-25 08:31:49.987 253542 DEBUG oslo_concurrency.processutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:49 compute-0 nova_compute[253538]: 2025-11-25 08:31:49.993 253542 DEBUG nova.compute.provider_tree [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.012 253542 DEBUG nova.scheduler.client.report [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.038 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.039 253542 DEBUG nova.compute.manager [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.083 253542 DEBUG nova.compute.manager [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.084 253542 DEBUG nova.network.neutron [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.170 253542 INFO nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.193 253542 DEBUG nova.compute.manager [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.296 253542 DEBUG nova.compute.manager [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.297 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.298 253542 INFO nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Creating image(s)
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.318 253542 DEBUG nova.storage.rbd_utils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.340 253542 DEBUG nova.storage.rbd_utils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.363 253542 DEBUG nova.storage.rbd_utils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.367 253542 DEBUG oslo_concurrency.processutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.405 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.411 253542 DEBUG nova.policy [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7ad88cb0e4cf4d0b8e4cbec835318015', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.454 253542 DEBUG oslo_concurrency.processutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.456 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.457 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.457 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.478 253542 DEBUG nova.storage.rbd_utils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.482 253542 DEBUG oslo_concurrency.processutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:50 compute-0 ceph-mon[75015]: pgmap v1454: 321 pgs: 321 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 383 KiB/s wr, 271 op/s
Nov 25 08:31:50 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2255527565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:31:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.819 253542 DEBUG oslo_concurrency.processutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.875 253542 DEBUG nova.storage.rbd_utils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] resizing rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.973 253542 DEBUG nova.objects.instance [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'migration_context' on Instance uuid 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.988 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.988 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Ensure instance console log exists: /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.989 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.990 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:50 compute-0 nova_compute[253538]: 2025-11-25 08:31:50.990 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 17 KiB/s wr, 261 op/s
Nov 25 08:31:51 compute-0 nova_compute[253538]: 2025-11-25 08:31:51.313 253542 DEBUG nova.network.neutron [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Successfully created port: 3c3c9c20-8436-4b41-9184-2061010ba6e2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:31:51 compute-0 nova_compute[253538]: 2025-11-25 08:31:51.492 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:51 compute-0 nova_compute[253538]: 2025-11-25 08:31:51.827 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:52 compute-0 nova_compute[253538]: 2025-11-25 08:31:52.322 253542 DEBUG nova.network.neutron [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Successfully updated port: 3c3c9c20-8436-4b41-9184-2061010ba6e2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:31:52 compute-0 nova_compute[253538]: 2025-11-25 08:31:52.339 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "refresh_cache-7aefbad8-edb5-417c-a34d-e9e3c2dd0c03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:52 compute-0 nova_compute[253538]: 2025-11-25 08:31:52.340 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquired lock "refresh_cache-7aefbad8-edb5-417c-a34d-e9e3c2dd0c03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:52 compute-0 nova_compute[253538]: 2025-11-25 08:31:52.341 253542 DEBUG nova.network.neutron [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:31:52 compute-0 nova_compute[253538]: 2025-11-25 08:31:52.425 253542 DEBUG nova.compute.manager [req-93ab7c5f-b22c-495d-9ce4-917dc1edfc0e req-7c4d3cdb-223e-4b1d-8242-a7c8318a7602 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-changed-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:31:52 compute-0 nova_compute[253538]: 2025-11-25 08:31:52.425 253542 DEBUG nova.compute.manager [req-93ab7c5f-b22c-495d-9ce4-917dc1edfc0e req-7c4d3cdb-223e-4b1d-8242-a7c8318a7602 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Refreshing instance network info cache due to event network-changed-3c3c9c20-8436-4b41-9184-2061010ba6e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:31:52 compute-0 nova_compute[253538]: 2025-11-25 08:31:52.426 253542 DEBUG oslo_concurrency.lockutils [req-93ab7c5f-b22c-495d-9ce4-917dc1edfc0e req-7c4d3cdb-223e-4b1d-8242-a7c8318a7602 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7aefbad8-edb5-417c-a34d-e9e3c2dd0c03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:31:52 compute-0 nova_compute[253538]: 2025-11-25 08:31:52.528 253542 DEBUG nova.network.neutron [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:31:52 compute-0 ceph-mon[75015]: pgmap v1455: 321 pgs: 321 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 17 KiB/s wr, 261 op/s
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1456: 321 pgs: 321 active+clean; 112 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 663 KiB/s wr, 224 op/s
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:31:53
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.meta', '.mgr', 'images']
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.742 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:31:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.911 253542 DEBUG nova.network.neutron [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Updating instance_info_cache with network_info: [{"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.931 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Releasing lock "refresh_cache-7aefbad8-edb5-417c-a34d-e9e3c2dd0c03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.932 253542 DEBUG nova.compute.manager [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance network_info: |[{"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.932 253542 DEBUG oslo_concurrency.lockutils [req-93ab7c5f-b22c-495d-9ce4-917dc1edfc0e req-7c4d3cdb-223e-4b1d-8242-a7c8318a7602 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7aefbad8-edb5-417c-a34d-e9e3c2dd0c03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.933 253542 DEBUG nova.network.neutron [req-93ab7c5f-b22c-495d-9ce4-917dc1edfc0e req-7c4d3cdb-223e-4b1d-8242-a7c8318a7602 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Refreshing network info cache for port 3c3c9c20-8436-4b41-9184-2061010ba6e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.936 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Start _get_guest_xml network_info=[{"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.940 253542 WARNING nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.945 253542 DEBUG nova.virt.libvirt.host [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.946 253542 DEBUG nova.virt.libvirt.host [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.952 253542 DEBUG nova.virt.libvirt.host [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.953 253542 DEBUG nova.virt.libvirt.host [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.953 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.953 253542 DEBUG nova.virt.hardware [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.954 253542 DEBUG nova.virt.hardware [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.954 253542 DEBUG nova.virt.hardware [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.954 253542 DEBUG nova.virt.hardware [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.954 253542 DEBUG nova.virt.hardware [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.954 253542 DEBUG nova.virt.hardware [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.954 253542 DEBUG nova.virt.hardware [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.954 253542 DEBUG nova.virt.hardware [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.955 253542 DEBUG nova.virt.hardware [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.955 253542 DEBUG nova.virt.hardware [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.955 253542 DEBUG nova.virt.hardware [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:31:53 compute-0 nova_compute[253538]: 2025-11-25 08:31:53.958 253542 DEBUG oslo_concurrency.processutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:31:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4011635042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.438 253542 DEBUG oslo_concurrency.processutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.460 253542 DEBUG nova.storage.rbd_utils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.465 253542 DEBUG oslo_concurrency.processutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:54 compute-0 ceph-mon[75015]: pgmap v1456: 321 pgs: 321 active+clean; 112 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 663 KiB/s wr, 224 op/s
Nov 25 08:31:54 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4011635042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:31:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1361522939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:54 compute-0 ovn_controller[152859]: 2025-11-25T08:31:54Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c2:50:72 10.100.0.7
Nov 25 08:31:54 compute-0 ovn_controller[152859]: 2025-11-25T08:31:54Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c2:50:72 10.100.0.7
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.870 253542 DEBUG oslo_concurrency.processutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.872 253542 DEBUG nova.virt.libvirt.vif [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:31:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1569463086',display_name='tempest-ServerDiskConfigTestJSON-server-1569463086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1569463086',id=49,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-pn5emsvi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDisk
ConfigTestJSON-1655399928-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:31:50Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=7aefbad8-edb5-417c-a34d-e9e3c2dd0c03,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.873 253542 DEBUG nova.network.os_vif_util [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.874 253542 DEBUG nova.network.os_vif_util [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.875 253542 DEBUG nova.objects.instance [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.890 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:31:54 compute-0 nova_compute[253538]:   <uuid>7aefbad8-edb5-417c-a34d-e9e3c2dd0c03</uuid>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   <name>instance-00000031</name>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1569463086</nova:name>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:31:53</nova:creationTime>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:31:54 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:31:54 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:31:54 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:31:54 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:31:54 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:31:54 compute-0 nova_compute[253538]:         <nova:user uuid="7ad88cb0e4cf4d0b8e4cbec835318015">tempest-ServerDiskConfigTestJSON-1655399928-project-member</nova:user>
Nov 25 08:31:54 compute-0 nova_compute[253538]:         <nova:project uuid="dc93aa65bef7473d961e0cad1e8f2962">tempest-ServerDiskConfigTestJSON-1655399928</nova:project>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:31:54 compute-0 nova_compute[253538]:         <nova:port uuid="3c3c9c20-8436-4b41-9184-2061010ba6e2">
Nov 25 08:31:54 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <system>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <entry name="serial">7aefbad8-edb5-417c-a34d-e9e3c2dd0c03</entry>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <entry name="uuid">7aefbad8-edb5-417c-a34d-e9e3c2dd0c03</entry>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     </system>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   <os>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   </os>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   <features>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   </features>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk">
Nov 25 08:31:54 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       </source>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:31:54 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config">
Nov 25 08:31:54 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       </source>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:31:54 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:c4:6e:6c"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <target dev="tap3c3c9c20-84"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/console.log" append="off"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <video>
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     </video>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:31:54 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:31:54 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:31:54 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:31:54 compute-0 nova_compute[253538]: </domain>
Nov 25 08:31:54 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.890 253542 DEBUG nova.compute.manager [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Preparing to wait for external event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.890 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.891 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.891 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.891 253542 DEBUG nova.virt.libvirt.vif [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:31:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1569463086',display_name='tempest-ServerDiskConfigTestJSON-server-1569463086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1569463086',id=49,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-pn5emsvi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:31:50Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=7aefbad8-edb5-417c-a34d-e9e3c2dd0c03,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.892 253542 DEBUG nova.network.os_vif_util [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.892 253542 DEBUG nova.network.os_vif_util [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.893 253542 DEBUG os_vif [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.893 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.894 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.894 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.897 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.897 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c3c9c20-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.898 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c3c9c20-84, col_values=(('external_ids', {'iface-id': '3c3c9c20-8436-4b41-9184-2061010ba6e2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:6e:6c', 'vm-uuid': '7aefbad8-edb5-417c-a34d-e9e3c2dd0c03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.941 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:54 compute-0 NetworkManager[48915]: <info>  [1764059514.9422] manager: (tap3c3c9c20-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.944 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.947 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:54 compute-0 nova_compute[253538]: 2025-11-25 08:31:54.948 253542 INFO os_vif [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84')
Nov 25 08:31:55 compute-0 nova_compute[253538]: 2025-11-25 08:31:55.028 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:31:55 compute-0 nova_compute[253538]: 2025-11-25 08:31:55.028 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:31:55 compute-0 nova_compute[253538]: 2025-11-25 08:31:55.028 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No VIF found with MAC fa:16:3e:c4:6e:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:31:55 compute-0 nova_compute[253538]: 2025-11-25 08:31:55.029 253542 INFO nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Using config drive
Nov 25 08:31:55 compute-0 nova_compute[253538]: 2025-11-25 08:31:55.059 253542 DEBUG nova.storage.rbd_utils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 157 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 215 op/s
Nov 25 08:31:55 compute-0 nova_compute[253538]: 2025-11-25 08:31:55.380 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:55 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1361522939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:31:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:31:56 compute-0 nova_compute[253538]: 2025-11-25 08:31:56.294 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059501.2931814, 44f92d51-7981-4905-8ec2-179722b520f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:56 compute-0 nova_compute[253538]: 2025-11-25 08:31:56.294 253542 INFO nova.compute.manager [-] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] VM Stopped (Lifecycle Event)
Nov 25 08:31:56 compute-0 nova_compute[253538]: 2025-11-25 08:31:56.324 253542 DEBUG nova.compute.manager [None req-f9b62259-a3f1-4ce8-9348-e46a0469169f - - - - - -] [instance: 44f92d51-7981-4905-8ec2-179722b520f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:56 compute-0 nova_compute[253538]: 2025-11-25 08:31:56.410 253542 INFO nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Creating config drive at /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config
Nov 25 08:31:56 compute-0 nova_compute[253538]: 2025-11-25 08:31:56.420 253542 DEBUG oslo_concurrency.processutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnbodv1ft execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:56 compute-0 nova_compute[253538]: 2025-11-25 08:31:56.458 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059501.4253511, e7169151-5523-42c6-bebf-4b9ff2640da3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:56 compute-0 nova_compute[253538]: 2025-11-25 08:31:56.459 253542 INFO nova.compute.manager [-] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] VM Stopped (Lifecycle Event)
Nov 25 08:31:56 compute-0 nova_compute[253538]: 2025-11-25 08:31:56.467 253542 DEBUG nova.network.neutron [req-93ab7c5f-b22c-495d-9ce4-917dc1edfc0e req-7c4d3cdb-223e-4b1d-8242-a7c8318a7602 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Updated VIF entry in instance network info cache for port 3c3c9c20-8436-4b41-9184-2061010ba6e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:31:56 compute-0 nova_compute[253538]: 2025-11-25 08:31:56.468 253542 DEBUG nova.network.neutron [req-93ab7c5f-b22c-495d-9ce4-917dc1edfc0e req-7c4d3cdb-223e-4b1d-8242-a7c8318a7602 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Updating instance_info_cache with network_info: [{"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:31:56 compute-0 nova_compute[253538]: 2025-11-25 08:31:56.507 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:56 compute-0 nova_compute[253538]: 2025-11-25 08:31:56.511 253542 DEBUG nova.compute.manager [None req-e2c0ad2a-9660-4a60-b670-eea43d50193d - - - - - -] [instance: e7169151-5523-42c6-bebf-4b9ff2640da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:56 compute-0 nova_compute[253538]: 2025-11-25 08:31:56.513 253542 DEBUG oslo_concurrency.lockutils [req-93ab7c5f-b22c-495d-9ce4-917dc1edfc0e req-7c4d3cdb-223e-4b1d-8242-a7c8318a7602 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7aefbad8-edb5-417c-a34d-e9e3c2dd0c03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:31:56 compute-0 nova_compute[253538]: 2025-11-25 08:31:56.568 253542 DEBUG oslo_concurrency.processutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnbodv1ft" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:56 compute-0 nova_compute[253538]: 2025-11-25 08:31:56.648 253542 DEBUG nova.storage.rbd_utils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:31:56 compute-0 nova_compute[253538]: 2025-11-25 08:31:56.653 253542 DEBUG oslo_concurrency.processutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:31:56 compute-0 ceph-mon[75015]: pgmap v1457: 321 pgs: 321 active+clean; 157 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 215 op/s
Nov 25 08:31:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 162 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 1020 KiB/s rd, 3.9 MiB/s wr, 114 op/s
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.222 253542 DEBUG oslo_concurrency.processutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.223 253542 INFO nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Deleting local config drive /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config because it was imported into RBD.
Nov 25 08:31:57 compute-0 kernel: tap3c3c9c20-84: entered promiscuous mode
Nov 25 08:31:57 compute-0 NetworkManager[48915]: <info>  [1764059517.2784] manager: (tap3c3c9c20-84): new Tun device (/org/freedesktop/NetworkManager/Devices/180)
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.331 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:57 compute-0 ovn_controller[152859]: 2025-11-25T08:31:57Z|00381|binding|INFO|Claiming lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 for this chassis.
Nov 25 08:31:57 compute-0 ovn_controller[152859]: 2025-11-25T08:31:57Z|00382|binding|INFO|3c3c9c20-8436-4b41-9184-2061010ba6e2: Claiming fa:16:3e:c4:6e:6c 10.100.0.4
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.341 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:6e:6c 10.100.0.4'], port_security=['fa:16:3e:c4:6e:6c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7aefbad8-edb5-417c-a34d-e9e3c2dd0c03', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3c3c9c20-8436-4b41-9184-2061010ba6e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.343 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3c3c9c20-8436-4b41-9184-2061010ba6e2 in datapath eb25945d-6002-4a99-b682-034a8a3dc901 bound to our chassis
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.345 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 08:31:57 compute-0 ovn_controller[152859]: 2025-11-25T08:31:57Z|00383|binding|INFO|Setting lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 ovn-installed in OVS
Nov 25 08:31:57 compute-0 ovn_controller[152859]: 2025-11-25T08:31:57Z|00384|binding|INFO|Setting lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 up in Southbound
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.350 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.356 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4c38bb89-0faa-4a03-b4d5-324d930421cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.357 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb25945d-61 in ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:31:57 compute-0 systemd-udevd[306379]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.358 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb25945d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.359 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8773ba80-8289-4308-84d9-4d6cfb4fb76e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:57 compute-0 systemd-machined[215790]: New machine qemu-54-instance-00000031.
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.359 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cd04de6f-3d08-456e-a355-c07f33435fa1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:57 compute-0 NetworkManager[48915]: <info>  [1764059517.3684] device (tap3c3c9c20-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:31:57 compute-0 NetworkManager[48915]: <info>  [1764059517.3696] device (tap3c3c9c20-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.369 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee28a73-a704-48b2-9c6c-8fb24a8eb8a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:57 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-00000031.
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.391 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b92b8d1a-e533-400e-873c-62edb61af93c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.416 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c6813351-7339-43d6-9c41-4dddc04ba648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.421 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f466c50e-1781-437c-8e81-b1eef45c60a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:57 compute-0 systemd-udevd[306382]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:31:57 compute-0 NetworkManager[48915]: <info>  [1764059517.4227] manager: (tapeb25945d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/181)
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.448 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bd6f2659-984a-47ca-a913-11a70287955d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.451 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ebb8aaf8-d3ed-4591-8987-2e012fede04b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:57 compute-0 NetworkManager[48915]: <info>  [1764059517.4674] device (tapeb25945d-60): carrier: link connected
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.470 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1af6bfe2-9554-4bc9-9e3b-0820c5ca85d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.485 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3bab3acd-5705-4c24-8149-24aebb72d113]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487206, 'reachable_time': 42468, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306411, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.498 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca25255-9b80-445b-b19d-234f2a424e41]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:fcf3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487206, 'tstamp': 487206}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306412, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.513 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[10f9e283-792a-45fc-acf4-23f9698cc71e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487206, 'reachable_time': 42468, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306413, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.540 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cdaac3f0-4b85-413d-9d6a-2f83cc9bfd2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.593 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a15015-052e-4df0-bb97-6ef35a444269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.595 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.595 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.595 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb25945d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.597 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:57 compute-0 NetworkManager[48915]: <info>  [1764059517.5978] manager: (tapeb25945d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Nov 25 08:31:57 compute-0 kernel: tapeb25945d-60: entered promiscuous mode
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.599 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.600 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb25945d-60, col_values=(('external_ids', {'iface-id': 'f4a838c4-0817-4ff4-8792-2e2721905e98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.601 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:57 compute-0 ovn_controller[152859]: 2025-11-25T08:31:57Z|00385|binding|INFO|Releasing lport f4a838c4-0817-4ff4-8792-2e2721905e98 from this chassis (sb_readonly=0)
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.603 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.603 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[74709df8-a7bf-4a44-a6d0-fc731a04aeb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.604 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:31:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:31:57.605 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'env', 'PROCESS_TAG=haproxy-eb25945d-6002-4a99-b682-034a8a3dc901', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb25945d-6002-4a99-b682-034a8a3dc901.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.616 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:57 compute-0 ceph-mon[75015]: pgmap v1458: 321 pgs: 321 active+clean; 162 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 1020 KiB/s rd, 3.9 MiB/s wr, 114 op/s
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.859 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059517.8585122, 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.859 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] VM Started (Lifecycle Event)
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.881 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.886 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059517.8587277, 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.886 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] VM Paused (Lifecycle Event)
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.904 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.911 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:31:57 compute-0 nova_compute[253538]: 2025-11-25 08:31:57.926 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:31:57 compute-0 ovn_controller[152859]: 2025-11-25T08:31:57Z|00386|binding|INFO|Releasing lport f4a838c4-0817-4ff4-8792-2e2721905e98 from this chassis (sb_readonly=0)
Nov 25 08:31:57 compute-0 ovn_controller[152859]: 2025-11-25T08:31:57Z|00387|binding|INFO|Releasing lport 98660c0c-0936-4c4d-9a89-87b784d8d5cc from this chassis (sb_readonly=0)
Nov 25 08:31:57 compute-0 podman[306487]: 2025-11-25 08:31:57.990562332 +0000 UTC m=+0.059411813 container create 8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:31:58 compute-0 nova_compute[253538]: 2025-11-25 08:31:58.018 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:31:58 compute-0 systemd[1]: Started libpod-conmon-8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b.scope.
Nov 25 08:31:58 compute-0 podman[306487]: 2025-11-25 08:31:57.958158486 +0000 UTC m=+0.027007977 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:31:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e001beb4c5ea87434b006a998a71e0ebe65ce32375515f22c2726c3467336969/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:31:58 compute-0 podman[306487]: 2025-11-25 08:31:58.089438805 +0000 UTC m=+0.158288276 container init 8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:31:58 compute-0 podman[306487]: 2025-11-25 08:31:58.099299148 +0000 UTC m=+0.168148619 container start 8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:31:58 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[306500]: [NOTICE]   (306504) : New worker (306506) forked
Nov 25 08:31:58 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[306500]: [NOTICE]   (306504) : Loading success.
Nov 25 08:31:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 167 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Nov 25 08:31:59 compute-0 nova_compute[253538]: 2025-11-25 08:31:59.942 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:00 compute-0 ceph-mon[75015]: pgmap v1459: 321 pgs: 321 active+clean; 167 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.323 253542 DEBUG nova.compute.manager [req-7bd5de46-4441-4dc4-8006-81be915846b5 req-fe077490-8e35-49d1-a564-4f829b13b7b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.324 253542 DEBUG oslo_concurrency.lockutils [req-7bd5de46-4441-4dc4-8006-81be915846b5 req-fe077490-8e35-49d1-a564-4f829b13b7b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.324 253542 DEBUG oslo_concurrency.lockutils [req-7bd5de46-4441-4dc4-8006-81be915846b5 req-fe077490-8e35-49d1-a564-4f829b13b7b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.324 253542 DEBUG oslo_concurrency.lockutils [req-7bd5de46-4441-4dc4-8006-81be915846b5 req-fe077490-8e35-49d1-a564-4f829b13b7b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.325 253542 DEBUG nova.compute.manager [req-7bd5de46-4441-4dc4-8006-81be915846b5 req-fe077490-8e35-49d1-a564-4f829b13b7b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Processing event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.325 253542 DEBUG nova.compute.manager [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.331 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059520.3316095, 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.332 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] VM Resumed (Lifecycle Event)
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.334 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.337 253542 INFO nova.virt.libvirt.driver [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance spawned successfully.
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.338 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.354 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.361 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.365 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.366 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.366 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.367 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.367 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.368 253542 DEBUG nova.virt.libvirt.driver [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.382 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.393 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.430 253542 INFO nova.compute.manager [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Took 10.13 seconds to spawn the instance on the hypervisor.
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.431 253542 DEBUG nova.compute.manager [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.493 253542 INFO nova.compute.manager [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Took 11.24 seconds to build instance.
Nov 25 08:32:00 compute-0 nova_compute[253538]: 2025-11-25 08:32:00.508 253542 DEBUG oslo_concurrency.lockutils [None req-65c4f402-dd73-428e-a17e-a51a9f047f14 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:32:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 351 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Nov 25 08:32:01 compute-0 nova_compute[253538]: 2025-11-25 08:32:01.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:02 compute-0 nova_compute[253538]: 2025-11-25 08:32:02.298 253542 DEBUG oslo_concurrency.lockutils [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-450d4b82-4475-4cfc-b868-dc3b0fc37af5-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:02 compute-0 nova_compute[253538]: 2025-11-25 08:32:02.298 253542 DEBUG oslo_concurrency.lockutils [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-450d4b82-4475-4cfc-b868-dc3b0fc37af5-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:02 compute-0 nova_compute[253538]: 2025-11-25 08:32:02.299 253542 DEBUG nova.objects.instance [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 450d4b82-4475-4cfc-b868-dc3b0fc37af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:02 compute-0 ceph-mon[75015]: pgmap v1460: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 351 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Nov 25 08:32:02 compute-0 nova_compute[253538]: 2025-11-25 08:32:02.666 253542 DEBUG nova.objects.instance [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_requests' on Instance uuid 450d4b82-4475-4cfc-b868-dc3b0fc37af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:02 compute-0 nova_compute[253538]: 2025-11-25 08:32:02.676 253542 DEBUG nova.network.neutron [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:32:02 compute-0 nova_compute[253538]: 2025-11-25 08:32:02.709 253542 DEBUG nova.compute.manager [req-09e73212-183f-432d-ba8e-2a692c970468 req-1a737e60-616f-46da-9443-15a182fcb48d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:02 compute-0 nova_compute[253538]: 2025-11-25 08:32:02.709 253542 DEBUG oslo_concurrency.lockutils [req-09e73212-183f-432d-ba8e-2a692c970468 req-1a737e60-616f-46da-9443-15a182fcb48d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:02 compute-0 nova_compute[253538]: 2025-11-25 08:32:02.711 253542 DEBUG oslo_concurrency.lockutils [req-09e73212-183f-432d-ba8e-2a692c970468 req-1a737e60-616f-46da-9443-15a182fcb48d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:02 compute-0 nova_compute[253538]: 2025-11-25 08:32:02.711 253542 DEBUG oslo_concurrency.lockutils [req-09e73212-183f-432d-ba8e-2a692c970468 req-1a737e60-616f-46da-9443-15a182fcb48d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:02 compute-0 nova_compute[253538]: 2025-11-25 08:32:02.711 253542 DEBUG nova.compute.manager [req-09e73212-183f-432d-ba8e-2a692c970468 req-1a737e60-616f-46da-9443-15a182fcb48d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] No waiting events found dispatching network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:02 compute-0 nova_compute[253538]: 2025-11-25 08:32:02.712 253542 WARNING nova.compute.manager [req-09e73212-183f-432d-ba8e-2a692c970468 req-1a737e60-616f-46da-9443-15a182fcb48d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received unexpected event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 for instance with vm_state active and task_state None.
Nov 25 08:32:02 compute-0 nova_compute[253538]: 2025-11-25 08:32:02.936 253542 DEBUG nova.policy [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 167 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 997 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011055244554600508 of space, bias 1.0, pg target 0.3316573366380153 quantized to 32 (current 32)
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:32:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:32:03 compute-0 nova_compute[253538]: 2025-11-25 08:32:03.830 253542 DEBUG nova.network.neutron [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Successfully created port: 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:32:03 compute-0 nova_compute[253538]: 2025-11-25 08:32:03.994 253542 INFO nova.compute.manager [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Rebuilding instance
Nov 25 08:32:04 compute-0 nova_compute[253538]: 2025-11-25 08:32:04.224 253542 DEBUG nova.objects.instance [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:04 compute-0 nova_compute[253538]: 2025-11-25 08:32:04.238 253542 DEBUG nova.compute.manager [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:04 compute-0 nova_compute[253538]: 2025-11-25 08:32:04.283 253542 DEBUG nova.objects.instance [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'pci_requests' on Instance uuid 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:04 compute-0 nova_compute[253538]: 2025-11-25 08:32:04.293 253542 DEBUG nova.objects.instance [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:04 compute-0 nova_compute[253538]: 2025-11-25 08:32:04.302 253542 DEBUG nova.objects.instance [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'resources' on Instance uuid 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:04 compute-0 nova_compute[253538]: 2025-11-25 08:32:04.311 253542 DEBUG nova.objects.instance [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'migration_context' on Instance uuid 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:04 compute-0 nova_compute[253538]: 2025-11-25 08:32:04.320 253542 DEBUG nova.objects.instance [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:32:04 compute-0 nova_compute[253538]: 2025-11-25 08:32:04.324 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:32:04 compute-0 ceph-mon[75015]: pgmap v1461: 321 pgs: 321 active+clean; 167 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 997 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Nov 25 08:32:04 compute-0 nova_compute[253538]: 2025-11-25 08:32:04.944 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1462: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 158 op/s
Nov 25 08:32:05 compute-0 nova_compute[253538]: 2025-11-25 08:32:05.221 253542 DEBUG nova.network.neutron [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Successfully updated port: 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:32:05 compute-0 nova_compute[253538]: 2025-11-25 08:32:05.240 253542 DEBUG oslo_concurrency.lockutils [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:05 compute-0 nova_compute[253538]: 2025-11-25 08:32:05.240 253542 DEBUG oslo_concurrency.lockutils [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:05 compute-0 nova_compute[253538]: 2025-11-25 08:32:05.240 253542 DEBUG nova.network.neutron [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:32:05 compute-0 nova_compute[253538]: 2025-11-25 08:32:05.416 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:05 compute-0 nova_compute[253538]: 2025-11-25 08:32:05.423 253542 WARNING nova.network.neutron [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it
Nov 25 08:32:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:32:05 compute-0 podman[306515]: 2025-11-25 08:32:05.808105524 +0000 UTC m=+0.056412510 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 08:32:05 compute-0 nova_compute[253538]: 2025-11-25 08:32:05.882 253542 DEBUG nova.compute.manager [req-0c2fb53b-a4fd-4782-ae00-8bba71b865c1 req-3f71c077-ba18-4fbf-aeeb-4242c11df470 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-changed-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:05 compute-0 nova_compute[253538]: 2025-11-25 08:32:05.883 253542 DEBUG nova.compute.manager [req-0c2fb53b-a4fd-4782-ae00-8bba71b865c1 req-3f71c077-ba18-4fbf-aeeb-4242c11df470 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Refreshing instance network info cache due to event network-changed-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:32:05 compute-0 nova_compute[253538]: 2025-11-25 08:32:05.883 253542 DEBUG oslo_concurrency.lockutils [req-0c2fb53b-a4fd-4782-ae00-8bba71b865c1 req-3f71c077-ba18-4fbf-aeeb-4242c11df470 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:06 compute-0 nova_compute[253538]: 2025-11-25 08:32:06.121 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:06 compute-0 nova_compute[253538]: 2025-11-25 08:32:06.121 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:06 compute-0 nova_compute[253538]: 2025-11-25 08:32:06.147 253542 DEBUG nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:32:06 compute-0 nova_compute[253538]: 2025-11-25 08:32:06.239 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:06 compute-0 nova_compute[253538]: 2025-11-25 08:32:06.240 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:06 compute-0 nova_compute[253538]: 2025-11-25 08:32:06.255 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:32:06 compute-0 nova_compute[253538]: 2025-11-25 08:32:06.257 253542 INFO nova.compute.claims [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:32:06 compute-0 nova_compute[253538]: 2025-11-25 08:32:06.347 253542 DEBUG nova.scheduler.client.report [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 08:32:06 compute-0 nova_compute[253538]: 2025-11-25 08:32:06.364 253542 DEBUG nova.scheduler.client.report [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 08:32:06 compute-0 nova_compute[253538]: 2025-11-25 08:32:06.365 253542 DEBUG nova.compute.provider_tree [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 08:32:06 compute-0 nova_compute[253538]: 2025-11-25 08:32:06.378 253542 DEBUG nova.scheduler.client.report [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 08:32:06 compute-0 nova_compute[253538]: 2025-11-25 08:32:06.407 253542 DEBUG nova.scheduler.client.report [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 08:32:06 compute-0 nova_compute[253538]: 2025-11-25 08:32:06.476 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:06 compute-0 nova_compute[253538]: 2025-11-25 08:32:06.529 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:06 compute-0 ceph-mon[75015]: pgmap v1462: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 158 op/s
Nov 25 08:32:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:32:06 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/682816746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.006 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.012 253542 DEBUG nova.compute.provider_tree [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.027 253542 DEBUG nova.scheduler.client.report [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.048 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.049 253542 DEBUG nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.076 253542 DEBUG nova.network.neutron [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updating instance_info_cache with network_info: [{"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.099 253542 DEBUG nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.100 253542 DEBUG nova.network.neutron [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.103 253542 DEBUG oslo_concurrency.lockutils [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.104 253542 DEBUG oslo_concurrency.lockutils [req-0c2fb53b-a4fd-4782-ae00-8bba71b865c1 req-3f71c077-ba18-4fbf-aeeb-4242c11df470 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.104 253542 DEBUG nova.network.neutron [req-0c2fb53b-a4fd-4782-ae00-8bba71b865c1 req-3f71c077-ba18-4fbf-aeeb-4242c11df470 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Refreshing network info cache for port 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.108 253542 DEBUG nova.virt.libvirt.vif [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1331902627',display_name='tempest-AttachInterfacesTestJSON-server-1331902627',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1331902627',id=48,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxOUDVDCQJ+lDKjOnhY9ZKMp68C2gI2GeL81bN5+Cg0KECDUKYqyfMCz6cyxM0cgudxNhn7b2Ag1lQMKrSgrlBGJM1ipNmUdJjzx17GCK7/NrryCNhuCHqxgCE2AGNPYg==',key_name='tempest-keypair-1979969978',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:31:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-f66b6qgm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=450d4b82-4475-4cfc-b868-dc3b0fc37af5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.108 253542 DEBUG nova.network.os_vif_util [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.109 253542 DEBUG nova.network.os_vif_util [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.109 253542 DEBUG os_vif [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.110 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.111 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.111 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.114 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.114 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b0678fa-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.115 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b0678fa-ee, col_values=(('external_ids', {'iface-id': '1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:f7:0c', 'vm-uuid': '450d4b82-4475-4cfc-b868-dc3b0fc37af5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.116 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:07 compute-0 NetworkManager[48915]: <info>  [1764059527.1175] manager: (tap1b0678fa-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.125 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.126 253542 INFO nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.130 253542 INFO os_vif [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee')
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.131 253542 DEBUG nova.virt.libvirt.vif [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1331902627',display_name='tempest-AttachInterfacesTestJSON-server-1331902627',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1331902627',id=48,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxOUDVDCQJ+lDKjOnhY9ZKMp68C2gI2GeL81bN5+Cg0KECDUKYqyfMCz6cyxM0cgudxNhn7b2Ag1lQMKrSgrlBGJM1ipNmUdJjzx17GCK7/NrryCNhuCHqxgCE2AGNPYg==',key_name='tempest-keypair-1979969978',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:31:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-f66b6qgm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=450d4b82-4475-4cfc-b868-dc3b0fc37af5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.131 253542 DEBUG nova.network.os_vif_util [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.132 253542 DEBUG nova.network.os_vif_util [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.134 253542 DEBUG nova.virt.libvirt.guest [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] attach device xml: <interface type="ethernet">
Nov 25 08:32:07 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:ad:f7:0c"/>
Nov 25 08:32:07 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:32:07 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:32:07 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:32:07 compute-0 nova_compute[253538]:   <target dev="tap1b0678fa-ee"/>
Nov 25 08:32:07 compute-0 nova_compute[253538]: </interface>
Nov 25 08:32:07 compute-0 nova_compute[253538]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 08:32:07 compute-0 kernel: tap1b0678fa-ee: entered promiscuous mode
Nov 25 08:32:07 compute-0 NetworkManager[48915]: <info>  [1764059527.1454] manager: (tap1b0678fa-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/184)
Nov 25 08:32:07 compute-0 ovn_controller[152859]: 2025-11-25T08:32:07Z|00388|binding|INFO|Claiming lport 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 for this chassis.
Nov 25 08:32:07 compute-0 ovn_controller[152859]: 2025-11-25T08:32:07Z|00389|binding|INFO|1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53: Claiming fa:16:3e:ad:f7:0c 10.100.0.14
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.146 253542 DEBUG nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.149 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:07 compute-0 ovn_controller[152859]: 2025-11-25T08:32:07Z|00390|binding|INFO|Setting lport 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 ovn-installed in OVS
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.164 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.167 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:07 compute-0 systemd-udevd[306560]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:32:07 compute-0 ovn_controller[152859]: 2025-11-25T08:32:07Z|00391|binding|INFO|Setting lport 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 up in Southbound
Nov 25 08:32:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:07.173 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:f7:0c 10.100.0.14'], port_security=['fa:16:3e:ad:f7:0c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '450d4b82-4475-4cfc-b868-dc3b0fc37af5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:07.174 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe bound to our chassis
Nov 25 08:32:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:07.175 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:32:07 compute-0 NetworkManager[48915]: <info>  [1764059527.1858] device (tap1b0678fa-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:32:07 compute-0 NetworkManager[48915]: <info>  [1764059527.1871] device (tap1b0678fa-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:32:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 411 KiB/s wr, 126 op/s
Nov 25 08:32:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:07.192 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[766c6a4e-92d3-4d22-8067-45d4dac09854]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:07.228 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7b4611-9ce1-4d51-a740-9b5a5706ea1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.233 253542 DEBUG nova.virt.libvirt.driver [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.234 253542 DEBUG nova.virt.libvirt.driver [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.234 253542 DEBUG nova.virt.libvirt.driver [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:c2:50:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.234 253542 DEBUG nova.virt.libvirt.driver [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:ad:f7:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:32:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:07.233 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[39c4e1e0-a4b0-46cf-96b5-686d6e2556ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.239 253542 DEBUG nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.240 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.241 253542 INFO nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Creating image(s)
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.257 253542 DEBUG nova.storage.rbd_utils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 0feca801-4630-4450-b915-616d8496ab51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:07.269 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a5091eb3-9762-4c56-b94a-c47f54c49629]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.276 253542 DEBUG nova.storage.rbd_utils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 0feca801-4630-4450-b915-616d8496ab51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:07.286 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6fc35a1-7d86-4aff-a435-b28ad6c686a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485392, 'reachable_time': 20076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306602, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:07.307 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1592edf-51a4-4ef9-b0d7-0d4ae9361cdc]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485404, 'tstamp': 485404}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306612, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485408, 'tstamp': 485408}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306612, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:07.310 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.310 253542 DEBUG nova.storage.rbd_utils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 0feca801-4630-4450-b915-616d8496ab51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:07.313 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:07.313 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:07.313 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:07.314 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.316 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.347 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.350 253542 DEBUG nova.virt.libvirt.guest [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:32:07 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:32:07 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1331902627</nova:name>
Nov 25 08:32:07 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:32:07</nova:creationTime>
Nov 25 08:32:07 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:32:07 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:32:07 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:32:07 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:32:07 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:32:07 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:32:07 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:32:07 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:32:07 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:32:07 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:32:07 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:32:07 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:32:07 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:32:07 compute-0 nova_compute[253538]:     <nova:port uuid="2456d48f-9440-411c-b5f2-5c27136126f9">
Nov 25 08:32:07 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 08:32:07 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:32:07 compute-0 nova_compute[253538]:     <nova:port uuid="1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53">
Nov 25 08:32:07 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 08:32:07 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:32:07 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:32:07 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:32:07 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.372 253542 DEBUG oslo_concurrency.lockutils [None req-cbfb0977-7258-413f-b378-9b5d06e09892 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-450d4b82-4475-4cfc-b868-dc3b0fc37af5-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.379 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.379 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.380 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.380 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.398 253542 DEBUG nova.storage.rbd_utils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 0feca801-4630-4450-b915-616d8496ab51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.401 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0feca801-4630-4450-b915-616d8496ab51_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/682816746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:07 compute-0 ceph-mon[75015]: pgmap v1463: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 411 KiB/s wr, 126 op/s
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.612 253542 DEBUG nova.policy [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c199ca353ed54a53ab7fe37d3089c82a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '23237e7592b247838e62457157e64e9e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.714 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0feca801-4630-4450-b915-616d8496ab51_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.780 253542 DEBUG nova.storage.rbd_utils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] resizing rbd image 0feca801-4630-4450-b915-616d8496ab51_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.897 253542 DEBUG nova.objects.instance [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'migration_context' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.908 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.908 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Ensure instance console log exists: /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.909 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.909 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.909 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.954 253542 DEBUG nova.compute.manager [req-51c29430-3de2-481d-9d42-701a36427051 req-8b7298ba-d474-4439-9223-88b25b4b5291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-plugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.955 253542 DEBUG oslo_concurrency.lockutils [req-51c29430-3de2-481d-9d42-701a36427051 req-8b7298ba-d474-4439-9223-88b25b4b5291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.955 253542 DEBUG oslo_concurrency.lockutils [req-51c29430-3de2-481d-9d42-701a36427051 req-8b7298ba-d474-4439-9223-88b25b4b5291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.955 253542 DEBUG oslo_concurrency.lockutils [req-51c29430-3de2-481d-9d42-701a36427051 req-8b7298ba-d474-4439-9223-88b25b4b5291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.956 253542 DEBUG nova.compute.manager [req-51c29430-3de2-481d-9d42-701a36427051 req-8b7298ba-d474-4439-9223-88b25b4b5291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] No waiting events found dispatching network-vif-plugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:07 compute-0 nova_compute[253538]: 2025-11-25 08:32:07.956 253542 WARNING nova.compute.manager [req-51c29430-3de2-481d-9d42-701a36427051 req-8b7298ba-d474-4439-9223-88b25b4b5291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received unexpected event network-vif-plugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 for instance with vm_state active and task_state None.
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.031 253542 DEBUG nova.network.neutron [req-0c2fb53b-a4fd-4782-ae00-8bba71b865c1 req-3f71c077-ba18-4fbf-aeeb-4242c11df470 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updated VIF entry in instance network info cache for port 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.033 253542 DEBUG nova.network.neutron [req-0c2fb53b-a4fd-4782-ae00-8bba71b865c1 req-3f71c077-ba18-4fbf-aeeb-4242c11df470 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updating instance_info_cache with network_info: [{"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.048 253542 DEBUG oslo_concurrency.lockutils [req-0c2fb53b-a4fd-4782-ae00-8bba71b865c1 req-3f71c077-ba18-4fbf-aeeb-4242c11df470 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:09 compute-0 rsyslogd[1007]: imjournal: 6776 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 25 08:32:09 compute-0 ovn_controller[152859]: 2025-11-25T08:32:09Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:f7:0c 10.100.0.14
Nov 25 08:32:09 compute-0 ovn_controller[152859]: 2025-11-25T08:32:09Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:f7:0c 10.100.0.14
Nov 25 08:32:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 175 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 307 KiB/s wr, 118 op/s
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.243 253542 DEBUG nova.network.neutron [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Successfully created port: 15af3dd8-9788-4a34-b4b2-d3b24300cd4c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.438 253542 DEBUG oslo_concurrency.lockutils [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-450d4b82-4475-4cfc-b868-dc3b0fc37af5-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.438 253542 DEBUG oslo_concurrency.lockutils [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-450d4b82-4475-4cfc-b868-dc3b0fc37af5-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.457 253542 DEBUG nova.objects.instance [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 450d4b82-4475-4cfc-b868-dc3b0fc37af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.474 253542 DEBUG nova.virt.libvirt.vif [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1331902627',display_name='tempest-AttachInterfacesTestJSON-server-1331902627',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1331902627',id=48,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxOUDVDCQJ+lDKjOnhY9ZKMp68C2gI2GeL81bN5+Cg0KECDUKYqyfMCz6cyxM0cgudxNhn7b2Ag1lQMKrSgrlBGJM1ipNmUdJjzx17GCK7/NrryCNhuCHqxgCE2AGNPYg==',key_name='tempest-keypair-1979969978',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:31:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-f66b6qgm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=450d4b82-4475-4cfc-b868-dc3b0fc37af5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.475 253542 DEBUG nova.network.os_vif_util [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.476 253542 DEBUG nova.network.os_vif_util [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.478 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.480 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.483 253542 DEBUG nova.virt.libvirt.driver [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Attempting to detach device tap1b0678fa-ee from instance 450d4b82-4475-4cfc-b868-dc3b0fc37af5 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.483 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] detach device xml: <interface type="ethernet">
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:ad:f7:0c"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <target dev="tap1b0678fa-ee"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]: </interface>
Nov 25 08:32:09 compute-0 nova_compute[253538]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.487 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.490 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface>not found in domain: <domain type='kvm' id='53'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <name>instance-00000030</name>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <uuid>450d4b82-4475-4cfc-b868-dc3b0fc37af5</uuid>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1331902627</nova:name>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:32:07</nova:creationTime>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:port uuid="2456d48f-9440-411c-b5f2-5c27136126f9">
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:port uuid="1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53">
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:32:09 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <system>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <entry name='serial'>450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <entry name='uuid'>450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </system>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <os>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </os>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <features>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </features>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk' index='2'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk.config' index='1'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:c2:50:72'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target dev='tap2456d48f-94'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:ad:f7:0c'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target dev='tap1b0678fa-ee'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='net1'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <source path='/dev/pts/2'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log' append='off'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       </target>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/2'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <source path='/dev/pts/2'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log' append='off'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </console>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </input>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </input>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </input>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <video>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </video>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c273,c336</label>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c273,c336</imagelabel>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:32:09 compute-0 nova_compute[253538]: </domain>
Nov 25 08:32:09 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.493 253542 INFO nova.virt.libvirt.driver [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully detached device tap1b0678fa-ee from instance 450d4b82-4475-4cfc-b868-dc3b0fc37af5 from the persistent domain config.
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.493 253542 DEBUG nova.virt.libvirt.driver [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] (1/8): Attempting to detach device tap1b0678fa-ee with device alias net1 from instance 450d4b82-4475-4cfc-b868-dc3b0fc37af5 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.493 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] detach device xml: <interface type="ethernet">
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:ad:f7:0c"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <target dev="tap1b0678fa-ee"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]: </interface>
Nov 25 08:32:09 compute-0 nova_compute[253538]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 08:32:09 compute-0 kernel: tap1b0678fa-ee (unregistering): left promiscuous mode
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:32:09 compute-0 NetworkManager[48915]: <info>  [1764059529.5559] device (tap1b0678fa-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.563 253542 DEBUG nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Received event <DeviceRemovedEvent: 1764059529.5628252, 450d4b82-4475-4cfc-b868-dc3b0fc37af5 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.565 253542 DEBUG nova.virt.libvirt.driver [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Start waiting for the detach event from libvirt for device tap1b0678fa-ee with device alias net1 for instance 450d4b82-4475-4cfc-b868-dc3b0fc37af5 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.566 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:32:09 compute-0 ovn_controller[152859]: 2025-11-25T08:32:09Z|00392|binding|INFO|Releasing lport 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 from this chassis (sb_readonly=0)
Nov 25 08:32:09 compute-0 ovn_controller[152859]: 2025-11-25T08:32:09Z|00393|binding|INFO|Setting lport 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 down in Southbound
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.567 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:09 compute-0 ovn_controller[152859]: 2025-11-25T08:32:09Z|00394|binding|INFO|Removing iface tap1b0678fa-ee ovn-installed in OVS
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.571 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.574 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface>not found in domain: <domain type='kvm' id='53'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <name>instance-00000030</name>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <uuid>450d4b82-4475-4cfc-b868-dc3b0fc37af5</uuid>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1331902627</nova:name>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:32:07</nova:creationTime>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:port uuid="2456d48f-9440-411c-b5f2-5c27136126f9">
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:port uuid="1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53">
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:32:09 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <system>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <entry name='serial'>450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <entry name='uuid'>450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </system>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <os>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </os>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <features>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </features>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk' index='2'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk.config' index='1'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.575 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:f7:0c 10.100.0.14'], port_security=['fa:16:3e:ad:f7:0c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '450d4b82-4475-4cfc-b868-dc3b0fc37af5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.576 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:c2:50:72'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target dev='tap2456d48f-94'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <source path='/dev/pts/2'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log' append='off'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       </target>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/2'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <source path='/dev/pts/2'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log' append='off'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </console>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </input>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </input>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </input>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <video>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </video>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c273,c336</label>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c273,c336</imagelabel>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:32:09 compute-0 nova_compute[253538]: </domain>
Nov 25 08:32:09 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:32:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.577 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.574 253542 INFO nova.virt.libvirt.driver [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully detached device tap1b0678fa-ee from instance 450d4b82-4475-4cfc-b868-dc3b0fc37af5 from the live domain config.
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.578 253542 DEBUG nova.virt.libvirt.vif [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1331902627',display_name='tempest-AttachInterfacesTestJSON-server-1331902627',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1331902627',id=48,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxOUDVDCQJ+lDKjOnhY9ZKMp68C2gI2GeL81bN5+Cg0KECDUKYqyfMCz6cyxM0cgudxNhn7b2Ag1lQMKrSgrlBGJM1ipNmUdJjzx17GCK7/NrryCNhuCHqxgCE2AGNPYg==',key_name='tempest-keypair-1979969978',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:31:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-f66b6qgm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=450d4b82-4475-4cfc-b868-dc3b0fc37af5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.584 253542 DEBUG nova.network.os_vif_util [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.585 253542 DEBUG nova.network.os_vif_util [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.586 253542 DEBUG os_vif [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.588 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.589 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b0678fa-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.592 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.594 253542 INFO os_vif [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee')
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.595 253542 DEBUG nova.virt.libvirt.guest [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1331902627</nova:name>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:32:09</nova:creationTime>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     <nova:port uuid="2456d48f-9440-411c-b5f2-5c27136126f9">
Nov 25 08:32:09 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 08:32:09 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:32:09 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:32:09 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:32:09 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:32:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.606 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1597a09e-8eb7-4e31-8d54-f5c66ad2de91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.642 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[413f890d-82a2-4db8-8c4c-28a45858c57a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.645 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5850664c-8087-483c-9fa9-0cdf10d1eb63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.675 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4c72c2de-0982-4f0b-b5b1-204bee6b7aec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.701 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c82f6e-4be8-49db-84fc-d30622992473]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485392, 'reachable_time': 20076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306745, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.733 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b18817-0ac9-4082-a395-887609109313]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485404, 'tstamp': 485404}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306746, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485408, 'tstamp': 485408}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306746, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.735 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.781 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:09 compute-0 nova_compute[253538]: 2025-11-25 08:32:09.783 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.784 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.784 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.784 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:09.785 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.041 253542 DEBUG nova.compute.manager [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-plugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.042 253542 DEBUG oslo_concurrency.lockutils [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.042 253542 DEBUG oslo_concurrency.lockutils [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.043 253542 DEBUG oslo_concurrency.lockutils [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.043 253542 DEBUG nova.compute.manager [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] No waiting events found dispatching network-vif-plugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.043 253542 WARNING nova.compute.manager [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received unexpected event network-vif-plugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 for instance with vm_state active and task_state None.
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.043 253542 DEBUG nova.compute.manager [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-unplugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.044 253542 DEBUG oslo_concurrency.lockutils [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.044 253542 DEBUG oslo_concurrency.lockutils [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.044 253542 DEBUG oslo_concurrency.lockutils [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.045 253542 DEBUG nova.compute.manager [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] No waiting events found dispatching network-vif-unplugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.045 253542 WARNING nova.compute.manager [req-2ef8a1a8-227a-4f06-9eaf-94ee7987a1d3 req-41569daf-a402-4ae0-8a88-c4befa553ce9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received unexpected event network-vif-unplugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 for instance with vm_state active and task_state None.
Nov 25 08:32:10 compute-0 ceph-mon[75015]: pgmap v1464: 321 pgs: 321 active+clean; 175 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 307 KiB/s wr, 118 op/s
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.309 253542 DEBUG nova.network.neutron [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Successfully updated port: 15af3dd8-9788-4a34-b4b2-d3b24300cd4c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.329 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.330 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.330 253542 DEBUG nova.network.neutron [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.417 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.429 253542 DEBUG oslo_concurrency.lockutils [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.430 253542 DEBUG oslo_concurrency.lockutils [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.431 253542 DEBUG nova.network.neutron [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.549 253542 DEBUG nova.network.neutron [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.604 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.618 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.665 253542 DEBUG nova.compute.manager [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-deleted-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.666 253542 INFO nova.compute.manager [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Neutron deleted interface 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53; detaching it from the instance and deleting it from the info cache
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.666 253542 DEBUG nova.network.neutron [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updating instance_info_cache with network_info: [{"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.728 253542 DEBUG nova.objects.instance [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lazy-loading 'system_metadata' on Instance uuid 450d4b82-4475-4cfc-b868-dc3b0fc37af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.778 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.791 253542 DEBUG nova.objects.instance [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lazy-loading 'flavor' on Instance uuid 450d4b82-4475-4cfc-b868-dc3b0fc37af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.828 253542 DEBUG nova.virt.libvirt.vif [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1331902627',display_name='tempest-AttachInterfacesTestJSON-server-1331902627',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1331902627',id=48,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxOUDVDCQJ+lDKjOnhY9ZKMp68C2gI2GeL81bN5+Cg0KECDUKYqyfMCz6cyxM0cgudxNhn7b2Ag1lQMKrSgrlBGJM1ipNmUdJjzx17GCK7/NrryCNhuCHqxgCE2AGNPYg==',key_name='tempest-keypair-1979969978',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:31:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-f66b6qgm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=450d4b82-4475-4cfc-b868-dc3b0fc37af5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.828 253542 DEBUG nova.network.os_vif_util [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converting VIF {"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.829 253542 DEBUG nova.network.os_vif_util [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.832 253542 DEBUG nova.virt.libvirt.guest [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.835 253542 DEBUG nova.virt.libvirt.guest [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface>not found in domain: <domain type='kvm' id='53'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <name>instance-00000030</name>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <uuid>450d4b82-4475-4cfc-b868-dc3b0fc37af5</uuid>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1331902627</nova:name>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:32:09</nova:creationTime>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:port uuid="2456d48f-9440-411c-b5f2-5c27136126f9">
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:32:10 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <system>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <entry name='serial'>450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <entry name='uuid'>450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </system>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <os>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </os>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <features>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </features>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk' index='2'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk.config' index='1'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:c2:50:72'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target dev='tap2456d48f-94'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <source path='/dev/pts/2'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log' append='off'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       </target>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/2'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <source path='/dev/pts/2'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log' append='off'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </console>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </input>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </input>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </input>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <video>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </video>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c273,c336</label>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c273,c336</imagelabel>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:32:10 compute-0 nova_compute[253538]: </domain>
Nov 25 08:32:10 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.836 253542 DEBUG nova.virt.libvirt.guest [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.841 253542 DEBUG nova.virt.libvirt.guest [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ad:f7:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b0678fa-ee"/></interface>not found in domain: <domain type='kvm' id='53'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <name>instance-00000030</name>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <uuid>450d4b82-4475-4cfc-b868-dc3b0fc37af5</uuid>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1331902627</nova:name>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:32:09</nova:creationTime>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:port uuid="2456d48f-9440-411c-b5f2-5c27136126f9">
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:32:10 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <system>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <entry name='serial'>450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <entry name='uuid'>450d4b82-4475-4cfc-b868-dc3b0fc37af5</entry>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </system>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <os>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </os>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <features>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </features>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk' index='2'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/450d4b82-4475-4cfc-b868-dc3b0fc37af5_disk.config' index='1'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:c2:50:72'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target dev='tap2456d48f-94'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <source path='/dev/pts/2'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log' append='off'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       </target>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/2'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <source path='/dev/pts/2'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5/console.log' append='off'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </console>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </input>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </input>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </input>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <video>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </video>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c273,c336</label>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c273,c336</imagelabel>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:32:10 compute-0 nova_compute[253538]: </domain>
Nov 25 08:32:10 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.841 253542 WARNING nova.virt.libvirt.driver [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Detaching interface fa:16:3e:ad:f7:0c failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap1b0678fa-ee' not found.
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.842 253542 DEBUG nova.virt.libvirt.vif [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1331902627',display_name='tempest-AttachInterfacesTestJSON-server-1331902627',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1331902627',id=48,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxOUDVDCQJ+lDKjOnhY9ZKMp68C2gI2GeL81bN5+Cg0KECDUKYqyfMCz6cyxM0cgudxNhn7b2Ag1lQMKrSgrlBGJM1ipNmUdJjzx17GCK7/NrryCNhuCHqxgCE2AGNPYg==',key_name='tempest-keypair-1979969978',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:31:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-f66b6qgm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=450d4b82-4475-4cfc-b868-dc3b0fc37af5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.842 253542 DEBUG nova.network.os_vif_util [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converting VIF {"id": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "address": "fa:16:3e:ad:f7:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b0678fa-ee", "ovs_interfaceid": "1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.843 253542 DEBUG nova.network.os_vif_util [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.843 253542 DEBUG os_vif [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.845 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b0678fa-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.845 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.847 253542 INFO os_vif [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:f7:0c,bridge_name='br-int',has_traffic_filtering=True,id=1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b0678fa-ee')
Nov 25 08:32:10 compute-0 nova_compute[253538]: 2025-11-25 08:32:10.848 253542 DEBUG nova.virt.libvirt.guest [req-aa9cefe6-a4cb-44e9-9ffd-ec4185bbf8ac req-7c3ac807-4590-43f2-a1db-832851a8688c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1331902627</nova:name>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:32:10</nova:creationTime>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     <nova:port uuid="2456d48f-9440-411c-b5f2-5c27136126f9">
Nov 25 08:32:10 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 08:32:10 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:32:10 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:32:10 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:32:10 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:32:10 compute-0 podman[306747]: 2025-11-25 08:32:10.869255846 +0000 UTC m=+0.101260233 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:32:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 185 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 799 KiB/s wr, 153 op/s
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.339 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.339 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.339 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.340 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.340 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.341 253542 INFO nova.compute.manager [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Terminating instance
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.342 253542 DEBUG nova.compute.manager [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:32:11 compute-0 kernel: tap2456d48f-94 (unregistering): left promiscuous mode
Nov 25 08:32:11 compute-0 NetworkManager[48915]: <info>  [1764059531.4085] device (tap2456d48f-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.410 253542 DEBUG nova.network.neutron [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.417 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:11 compute-0 ovn_controller[152859]: 2025-11-25T08:32:11Z|00395|binding|INFO|Releasing lport 2456d48f-9440-411c-b5f2-5c27136126f9 from this chassis (sb_readonly=0)
Nov 25 08:32:11 compute-0 ovn_controller[152859]: 2025-11-25T08:32:11Z|00396|binding|INFO|Setting lport 2456d48f-9440-411c-b5f2-5c27136126f9 down in Southbound
Nov 25 08:32:11 compute-0 ovn_controller[152859]: 2025-11-25T08:32:11Z|00397|binding|INFO|Removing iface tap2456d48f-94 ovn-installed in OVS
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.424 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.438 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:11 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Deactivated successfully.
Nov 25 08:32:11 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Consumed 13.353s CPU time.
Nov 25 08:32:11 compute-0 systemd-machined[215790]: Machine qemu-53-instance-00000030 terminated.
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.502 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:50:72 10.100.0.7'], port_security=['fa:16:3e:c2:50:72 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '450d4b82-4475-4cfc-b868-dc3b0fc37af5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1111213c-e81d-4f44-8d10-b8f2ced48789', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2456d48f-9440-411c-b5f2-5c27136126f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.503 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2456d48f-9440-411c-b5f2-5c27136126f9 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.504 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.505 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[672c0e39-55b1-495b-a96c-baded9e2ea35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.507 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe namespace which is not needed anymore
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.511 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.511 253542 DEBUG nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance network_info: |[{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.517 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Start _get_guest_xml network_info=[{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.525 253542 WARNING nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.533 253542 DEBUG nova.virt.libvirt.host [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.534 253542 DEBUG nova.virt.libvirt.host [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.538 253542 DEBUG nova.virt.libvirt.host [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.539 253542 DEBUG nova.virt.libvirt.host [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.540 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.541 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.542 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.542 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.543 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.543 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.543 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.544 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.544 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.544 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.545 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.545 253542 DEBUG nova.virt.hardware [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.551 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:11 compute-0 kernel: tap2456d48f-94: entered promiscuous mode
Nov 25 08:32:11 compute-0 NetworkManager[48915]: <info>  [1764059531.5609] manager: (tap2456d48f-94): new Tun device (/org/freedesktop/NetworkManager/Devices/185)
Nov 25 08:32:11 compute-0 ovn_controller[152859]: 2025-11-25T08:32:11Z|00398|binding|INFO|Claiming lport 2456d48f-9440-411c-b5f2-5c27136126f9 for this chassis.
Nov 25 08:32:11 compute-0 ovn_controller[152859]: 2025-11-25T08:32:11Z|00399|binding|INFO|2456d48f-9440-411c-b5f2-5c27136126f9: Claiming fa:16:3e:c2:50:72 10.100.0.7
Nov 25 08:32:11 compute-0 kernel: tap2456d48f-94 (unregistering): left promiscuous mode
Nov 25 08:32:11 compute-0 ovn_controller[152859]: 2025-11-25T08:32:11Z|00400|binding|INFO|Setting lport 2456d48f-9440-411c-b5f2-5c27136126f9 ovn-installed in OVS
Nov 25 08:32:11 compute-0 ovn_controller[152859]: 2025-11-25T08:32:11Z|00401|if_status|INFO|Dropped 2 log messages in last 164 seconds (most recently, 164 seconds ago) due to excessive rate
Nov 25 08:32:11 compute-0 ovn_controller[152859]: 2025-11-25T08:32:11Z|00402|if_status|INFO|Not setting lport 2456d48f-9440-411c-b5f2-5c27136126f9 down as sb is readonly
Nov 25 08:32:11 compute-0 ovn_controller[152859]: 2025-11-25T08:32:11Z|00403|binding|INFO|Releasing lport 2456d48f-9440-411c-b5f2-5c27136126f9 from this chassis (sb_readonly=0)
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.591 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.593 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:50:72 10.100.0.7'], port_security=['fa:16:3e:c2:50:72 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '450d4b82-4475-4cfc-b868-dc3b0fc37af5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1111213c-e81d-4f44-8d10-b8f2ced48789', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2456d48f-9440-411c-b5f2-5c27136126f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.598 253542 INFO nova.network.neutron [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Port 1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.599 253542 DEBUG nova.network.neutron [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updating instance_info_cache with network_info: [{"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.606 253542 INFO nova.virt.libvirt.driver [-] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Instance destroyed successfully.
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.607 253542 DEBUG nova.objects.instance [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'resources' on Instance uuid 450d4b82-4475-4cfc-b868-dc3b0fc37af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.610 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.626 253542 DEBUG oslo_concurrency.lockutils [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.630 253542 DEBUG nova.virt.libvirt.vif [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1331902627',display_name='tempest-AttachInterfacesTestJSON-server-1331902627',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1331902627',id=48,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxOUDVDCQJ+lDKjOnhY9ZKMp68C2gI2GeL81bN5+Cg0KECDUKYqyfMCz6cyxM0cgudxNhn7b2Ag1lQMKrSgrlBGJM1ipNmUdJjzx17GCK7/NrryCNhuCHqxgCE2AGNPYg==',key_name='tempest-keypair-1979969978',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:31:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-f66b6qgm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:31:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=450d4b82-4475-4cfc-b868-dc3b0fc37af5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.630 253542 DEBUG nova.network.os_vif_util [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.631 253542 DEBUG nova.network.os_vif_util [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:50:72,bridge_name='br-int',has_traffic_filtering=True,id=2456d48f-9440-411c-b5f2-5c27136126f9,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2456d48f-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.632 253542 DEBUG os_vif [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:50:72,bridge_name='br-int',has_traffic_filtering=True,id=2456d48f-9440-411c-b5f2-5c27136126f9,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2456d48f-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.634 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.634 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2456d48f-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.637 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.637 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.638 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 450d4b82-4475-4cfc-b868-dc3b0fc37af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.641 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.643 253542 INFO os_vif [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:50:72,bridge_name='br-int',has_traffic_filtering=True,id=2456d48f-9440-411c-b5f2-5c27136126f9,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2456d48f-94')
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.665 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:50:72 10.100.0.7'], port_security=['fa:16:3e:c2:50:72 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '450d4b82-4475-4cfc-b868-dc3b0fc37af5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1111213c-e81d-4f44-8d10-b8f2ced48789', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2456d48f-9440-411c-b5f2-5c27136126f9) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.665 253542 DEBUG oslo_concurrency.lockutils [None req-cfe6e3ea-e03a-44c4-b92f-47de6ae51c25 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-450d4b82-4475-4cfc-b868-dc3b0fc37af5-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:11 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[305822]: [NOTICE]   (305826) : haproxy version is 2.8.14-c23fe91
Nov 25 08:32:11 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[305822]: [NOTICE]   (305826) : path to executable is /usr/sbin/haproxy
Nov 25 08:32:11 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[305822]: [WARNING]  (305826) : Exiting Master process...
Nov 25 08:32:11 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[305822]: [ALERT]    (305826) : Current worker (305828) exited with code 143 (Terminated)
Nov 25 08:32:11 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[305822]: [WARNING]  (305826) : All workers exited. Exiting... (0)
Nov 25 08:32:11 compute-0 systemd[1]: libpod-a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84.scope: Deactivated successfully.
Nov 25 08:32:11 compute-0 podman[306798]: 2025-11-25 08:32:11.688861624 +0000 UTC m=+0.056989793 container died a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:32:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84-userdata-shm.mount: Deactivated successfully.
Nov 25 08:32:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f6e1a4e2eca1eca6ce4c110ec30d8b792ff0a483f2c08ec38ee2cef1cf99ecc-merged.mount: Deactivated successfully.
Nov 25 08:32:11 compute-0 podman[306798]: 2025-11-25 08:32:11.766785518 +0000 UTC m=+0.134913687 container cleanup a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:32:11 compute-0 systemd[1]: libpod-conmon-a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84.scope: Deactivated successfully.
Nov 25 08:32:11 compute-0 podman[306864]: 2025-11-25 08:32:11.855964035 +0000 UTC m=+0.064808893 container remove a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.869 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00b429b1-47ee-4aca-8ee4-725e5c0f7e00]: (4, ('Tue Nov 25 08:32:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe (a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84)\na388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84\nTue Nov 25 08:32:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe (a388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84)\na388b34065d921c4716fbb1467f852ca7e6045b1633250ff69d18975b6d3ae84\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.870 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[49c61f01-d38b-4fb7-9712-2e726a3a138b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.871 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:11 compute-0 kernel: tap9bf3cbfa-70: left promiscuous mode
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:11 compute-0 nova_compute[253538]: 2025-11-25 08:32:11.894 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.899 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[46458f0c-9bfa-4ef3-88f5-ee539cf62265]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.924 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[698c0244-4c6a-4a47-97ee-68dcf20b78cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.925 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3ef820-8a6e-4044-ad05-5b0de1fee5be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.946 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3df4fe29-3e9d-4b49-a669-1e93481e1244]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485385, 'reachable_time': 43792, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306879, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d9bf3cbfa\x2d7e0d\x2d4c98\x2d99a2\x2d4ca14fb6bbbe.mount: Deactivated successfully.
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.949 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.950 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[372d0f4a-3a70-44d4-9246-33a39d2d8967]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.951 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2456d48f-9440-411c-b5f2-5c27136126f9 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.954 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.955 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[60686ee7-4c35-46d1-8709-595bd201c451]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.955 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2456d48f-9440-411c-b5f2-5c27136126f9 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.957 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:32:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:11.957 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ba3892-31d9-49dc-9ada-a580672efd25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:32:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2534801478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.045 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.066 253542 DEBUG nova.storage.rbd_utils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 0feca801-4630-4450-b915-616d8496ab51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.070 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.233 253542 INFO nova.virt.libvirt.driver [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Deleting instance files /var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5_del
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.235 253542 INFO nova.virt.libvirt.driver [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Deletion of /var/lib/nova/instances/450d4b82-4475-4cfc-b868-dc3b0fc37af5_del complete
Nov 25 08:32:12 compute-0 ceph-mon[75015]: pgmap v1465: 321 pgs: 321 active+clean; 185 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 799 KiB/s wr, 153 op/s
Nov 25 08:32:12 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2534801478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.325 253542 INFO nova.compute.manager [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Took 0.98 seconds to destroy the instance on the hypervisor.
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.326 253542 DEBUG oslo.service.loopingcall [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.327 253542 DEBUG nova.compute.manager [-] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.327 253542 DEBUG nova.network.neutron [-] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:32:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:32:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4058347384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.514 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.516 253542 DEBUG nova.virt.libvirt.vif [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.517 253542 DEBUG nova.network.os_vif_util [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.518 253542 DEBUG nova.network.os_vif_util [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.519 253542 DEBUG nova.objects.instance [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.533 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:32:12 compute-0 nova_compute[253538]:   <uuid>0feca801-4630-4450-b915-616d8496ab51</uuid>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   <name>instance-00000032</name>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerActionsTestJSON-server-1351113969</nova:name>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:32:11</nova:creationTime>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:32:12 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:32:12 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:32:12 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:32:12 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:32:12 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:32:12 compute-0 nova_compute[253538]:         <nova:user uuid="c199ca353ed54a53ab7fe37d3089c82a">tempest-ServerActionsTestJSON-1880843108-project-member</nova:user>
Nov 25 08:32:12 compute-0 nova_compute[253538]:         <nova:project uuid="23237e7592b247838e62457157e64e9e">tempest-ServerActionsTestJSON-1880843108</nova:project>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:32:12 compute-0 nova_compute[253538]:         <nova:port uuid="15af3dd8-9788-4a34-b4b2-d3b24300cd4c">
Nov 25 08:32:12 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <system>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <entry name="serial">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <entry name="uuid">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     </system>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   <os>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   </os>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   <features>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   </features>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk">
Nov 25 08:32:12 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:32:12 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk.config">
Nov 25 08:32:12 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:32:12 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:07:cd:40"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <target dev="tap15af3dd8-97"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/console.log" append="off"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <video>
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     </video>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:32:12 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:32:12 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:32:12 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:32:12 compute-0 nova_compute[253538]: </domain>
Nov 25 08:32:12 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.534 253542 DEBUG nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Preparing to wait for external event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.534 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.535 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.535 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.536 253542 DEBUG nova.virt.libvirt.vif [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.536 253542 DEBUG nova.network.os_vif_util [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.537 253542 DEBUG nova.network.os_vif_util [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.537 253542 DEBUG os_vif [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.538 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.540 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.541 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.546 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15af3dd8-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.547 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15af3dd8-97, col_values=(('external_ids', {'iface-id': '15af3dd8-9788-4a34-b4b2-d3b24300cd4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:cd:40', 'vm-uuid': '0feca801-4630-4450-b915-616d8496ab51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.549 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:12 compute-0 NetworkManager[48915]: <info>  [1764059532.5506] manager: (tap15af3dd8-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.551 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.557 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.559 253542 INFO os_vif [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.677 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.677 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.678 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No VIF found with MAC fa:16:3e:07:cd:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.679 253542 INFO nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Using config drive
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.711 253542 DEBUG nova.storage.rbd_utils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 0feca801-4630-4450-b915-616d8496ab51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.825 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-plugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.825 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.826 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.826 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.827 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] No waiting events found dispatching network-vif-plugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.827 253542 WARNING nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received unexpected event network-vif-plugged-1b0678fa-ee9d-4aaa-adae-a0cd4b6bbc53 for instance with vm_state active and task_state deleting.
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.828 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-changed-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.828 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Refreshing instance network info cache due to event network-changed-15af3dd8-9788-4a34-b4b2-d3b24300cd4c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.828 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.829 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:12 compute-0 nova_compute[253538]: 2025-11-25 08:32:12.829 253542 DEBUG nova.network.neutron [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Refreshing network info cache for port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:32:12 compute-0 sudo[306943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:32:12 compute-0 sudo[306943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:12 compute-0 sudo[306943]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:13 compute-0 sudo[306968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:32:13 compute-0 sudo[306968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:13 compute-0 sudo[306968]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:13 compute-0 sudo[306993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:32:13 compute-0 sudo[306993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:13 compute-0 sudo[306993]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 203 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 178 op/s
Nov 25 08:32:13 compute-0 sudo[307018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:32:13 compute-0 sudo[307018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.252 253542 INFO nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Creating config drive at /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/disk.config
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.265 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppocf7t8m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.337 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updating instance_info_cache with network_info: [{"id": "2456d48f-9440-411c-b5f2-5c27136126f9", "address": "fa:16:3e:c2:50:72", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2456d48f-94", "ovs_interfaceid": "2456d48f-9440-411c-b5f2-5c27136126f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.386 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-450d4b82-4475-4cfc-b868-dc3b0fc37af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.387 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.387 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.448 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppocf7t8m" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4058347384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.485 253542 DEBUG nova.storage.rbd_utils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 0feca801-4630-4450-b915-616d8496ab51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.490 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/disk.config 0feca801-4630-4450-b915-616d8496ab51_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.639 253542 DEBUG nova.network.neutron [-] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.668 253542 INFO nova.compute.manager [-] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Took 1.34 seconds to deallocate network for instance.
Nov 25 08:32:13 compute-0 ovn_controller[152859]: 2025-11-25T08:32:13Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:6e:6c 10.100.0.4
Nov 25 08:32:13 compute-0 ovn_controller[152859]: 2025-11-25T08:32:13Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:6e:6c 10.100.0.4
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.771 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.772 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:13 compute-0 sudo[307018]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.871 253542 DEBUG oslo_concurrency.processutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 08:32:13 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 08:32:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:32:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:32:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:32:13 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:32:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.913 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "0201b222-1aa1-4d57-901c-e3c79170b567" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.914 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.916 253542 DEBUG oslo_concurrency.processutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/disk.config 0feca801-4630-4450-b915-616d8496ab51_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.917 253542 INFO nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Deleting local config drive /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/disk.config because it was imported into RBD.
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.937 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:32:13 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:32:13 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 9ef845e4-48a4-486e-be3a-905adc939ef1 does not exist
Nov 25 08:32:13 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 2e8249d5-4dd1-4c39-8980-1abacb6728f9 does not exist
Nov 25 08:32:13 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 3ecd4065-cb91-4cdf-8bcc-189b9114580e does not exist
Nov 25 08:32:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:32:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:32:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:32:13 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:32:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:32:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:32:13 compute-0 kernel: tap15af3dd8-97: entered promiscuous mode
Nov 25 08:32:13 compute-0 NetworkManager[48915]: <info>  [1764059533.9846] manager: (tap15af3dd8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/187)
Nov 25 08:32:13 compute-0 ovn_controller[152859]: 2025-11-25T08:32:13Z|00404|binding|INFO|Claiming lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c for this chassis.
Nov 25 08:32:13 compute-0 ovn_controller[152859]: 2025-11-25T08:32:13Z|00405|binding|INFO|15af3dd8-9788-4a34-b4b2-d3b24300cd4c: Claiming fa:16:3e:07:cd:40 10.100.0.13
Nov 25 08:32:13 compute-0 nova_compute[253538]: 2025-11-25 08:32:13.990 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:13 compute-0 systemd-udevd[306769]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.011 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:14 compute-0 ovn_controller[152859]: 2025-11-25T08:32:14Z|00406|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c ovn-installed in OVS
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.015 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:14 compute-0 ovn_controller[152859]: 2025-11-25T08:32:14Z|00407|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c up in Southbound
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.027 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.027 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.028 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 bound to our chassis
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.034 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:32:14 compute-0 NetworkManager[48915]: <info>  [1764059534.0445] device (tap15af3dd8-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:32:14 compute-0 NetworkManager[48915]: <info>  [1764059534.0454] device (tap15af3dd8-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.049 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[20e9b7da-2566-48e7-8ac6-f14100cfb955]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.050 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap908154e6-31 in ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.052 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap908154e6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:32:14 compute-0 sudo[307119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.052 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca00027-b6ec-4a76-8a97-cd72f17aff97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:14 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.053 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3c473b-a030-42b2-909f-27c7217fc669]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:14 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 08:32:14 compute-0 sudo[307119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:14 compute-0 sudo[307119]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.068 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[1835a091-ec72-4ecb-b6b1-194b7778a31e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:14 compute-0 systemd-machined[215790]: New machine qemu-55-instance-00000032.
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.086 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[13f6d76a-b376-45ac-93df-172c0a9ed110]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:14 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-00000032.
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.119 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5a5e4fc0-d45c-4457-a32e-d4815ea10cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:14 compute-0 NetworkManager[48915]: <info>  [1764059534.1254] manager: (tap908154e6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/188)
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.124 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[95d4f47b-c15f-4c15-b075-a689f0c06c25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:14 compute-0 sudo[307181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:32:14 compute-0 sudo[307181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:14 compute-0 sudo[307181]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.159 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[18609545-6e7d-41f0-8da5-a003eff74c03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.162 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[045ad219-553c-4d27-b117-99c450967db7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:14 compute-0 podman[307169]: 2025-11-25 08:32:14.171574589 +0000 UTC m=+0.105075895 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:32:14 compute-0 NetworkManager[48915]: <info>  [1764059534.1895] device (tap908154e6-30): carrier: link connected
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.194 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[22333cae-452e-4905-9930-e4ad1e90d0da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:14 compute-0 sudo[307240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:32:14 compute-0 sudo[307240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:14 compute-0 sudo[307240]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.219 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed5164e-6126-4e44-ad57-a4cdef7f5ec8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488878, 'reachable_time': 30892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307277, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.237 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[97c67f80-6e5b-490f-8ca4-4b1c42050e22]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:5909'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 488878, 'tstamp': 488878}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307286, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:14 compute-0 sudo[307279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:32:14 compute-0 sudo[307279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.267 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c6307b9f-173c-46da-b6fa-7a8b4fafb61c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488878, 'reachable_time': 30892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307303, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.299 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f74e7c4-d086-45c2-bbb1-e505e5b52be4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:32:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3845340065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.351 253542 DEBUG nova.network.neutron [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updated VIF entry in instance network info cache for port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.352 253542 DEBUG nova.network.neutron [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.356 253542 DEBUG oslo_concurrency.processutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.359 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e23cfc78-6876-4951-a5cd-39a38081e539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.361 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.361 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.362 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:14 compute-0 NetworkManager[48915]: <info>  [1764059534.3645] manager: (tap908154e6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.365 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:14 compute-0 kernel: tap908154e6-30: entered promiscuous mode
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.368 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.367 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.371 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.371 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-unplugged-2456d48f-9440-411c-b5f2-5c27136126f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.372 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.372 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.372 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.373 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] No waiting events found dispatching network-vif-unplugged-2456d48f-9440-411c-b5f2-5c27136126f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.373 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-unplugged-2456d48f-9440-411c-b5f2-5c27136126f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.373 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-plugged-2456d48f-9440-411c-b5f2-5c27136126f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.374 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.374 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.374 253542 DEBUG oslo_concurrency.lockutils [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.374 253542 DEBUG nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] No waiting events found dispatching network-vif-plugged-2456d48f-9440-411c-b5f2-5c27136126f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.375 253542 WARNING nova.compute.manager [req-ba0bcfde-c7c4-4673-a0cb-addab144830e req-c9c3e631-1ec5-4df8-86cc-fa43f56c6722 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received unexpected event network-vif-plugged-2456d48f-9440-411c-b5f2-5c27136126f9 for instance with vm_state active and task_state deleting.
Nov 25 08:32:14 compute-0 ovn_controller[152859]: 2025-11-25T08:32:14Z|00408|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.378 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.379 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.379 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56fa2438-0a7d-465a-b56d-c0ec228ddffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.380 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:32:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:14.381 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'env', 'PROCESS_TAG=haproxy-908154e6-322e-4607-bb65-df3f3f8daca6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/908154e6-322e-4607-bb65-df3f3f8daca6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.391 253542 DEBUG nova.compute.provider_tree [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.394 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.402 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.411 253542 DEBUG nova.scheduler.client.report [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.439 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.442 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.451 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.452 253542 INFO nova.compute.claims [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:32:14 compute-0 ceph-mon[75015]: pgmap v1466: 321 pgs: 321 active+clean; 203 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 178 op/s
Nov 25 08:32:14 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 08:32:14 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:32:14 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:32:14 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:32:14 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:32:14 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:32:14 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:32:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3845340065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.507 253542 INFO nova.scheduler.client.report [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Deleted allocations for instance 450d4b82-4475-4cfc-b868-dc3b0fc37af5
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.583 253542 DEBUG oslo_concurrency.lockutils [None req-35e5603c-cfad-490a-9a18-06b273165b85 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "450d4b82-4475-4cfc-b868-dc3b0fc37af5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.622 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:14 compute-0 podman[307387]: 2025-11-25 08:32:14.709953428 +0000 UTC m=+0.091393907 container create 0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_allen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.716 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059534.715351, 0feca801-4630-4450-b915-616d8496ab51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.716 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Started (Lifecycle Event)
Nov 25 08:32:14 compute-0 podman[307387]: 2025-11-25 08:32:14.640202834 +0000 UTC m=+0.021643333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:32:14 compute-0 systemd[1]: Started libpod-conmon-0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5.scope.
Nov 25 08:32:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:32:14 compute-0 podman[307426]: 2025-11-25 08:32:14.783425123 +0000 UTC m=+0.030636085 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.896 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:14 compute-0 podman[307387]: 2025-11-25 08:32:14.901451745 +0000 UTC m=+0.282892274 container init 0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_allen, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.903 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059534.7193797, 0feca801-4630-4450-b915-616d8496ab51 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.903 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Paused (Lifecycle Event)
Nov 25 08:32:14 compute-0 podman[307387]: 2025-11-25 08:32:14.909142862 +0000 UTC m=+0.290583351 container start 0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 08:32:14 compute-0 systemd[1]: libpod-0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5.scope: Deactivated successfully.
Nov 25 08:32:14 compute-0 pensive_allen[307457]: 167 167
Nov 25 08:32:14 compute-0 conmon[307457]: conmon 0bd5ed51b07aa59878a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5.scope/container/memory.events
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.922 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.927 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:32:14 compute-0 podman[307426]: 2025-11-25 08:32:14.938941043 +0000 UTC m=+0.186152015 container create 5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.949 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.964 253542 DEBUG nova.compute.manager [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Received event network-vif-deleted-2456d48f-9440-411c-b5f2-5c27136126f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.965 253542 DEBUG nova.compute.manager [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.966 253542 DEBUG oslo_concurrency.lockutils [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.966 253542 DEBUG oslo_concurrency.lockutils [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.966 253542 DEBUG oslo_concurrency.lockutils [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.967 253542 DEBUG nova.compute.manager [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Processing event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.967 253542 DEBUG nova.compute.manager [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.968 253542 DEBUG oslo_concurrency.lockutils [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:14 compute-0 podman[307387]: 2025-11-25 08:32:14.972811373 +0000 UTC m=+0.354251872 container attach 0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:32:14 compute-0 podman[307387]: 2025-11-25 08:32:14.97454913 +0000 UTC m=+0.355989619 container died 0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.969 253542 DEBUG oslo_concurrency.lockutils [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.974 253542 DEBUG oslo_concurrency.lockutils [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.975 253542 DEBUG nova.compute.manager [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.975 253542 WARNING nova.compute.manager [req-724127d0-6bc1-4904-92ff-81bce992bc15 req-c3c3520c-c26f-4f87-b6ab-d147b43fbe02 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state building and task_state spawning.
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.976 253542 DEBUG nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.982 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059534.9802048, 0feca801-4630-4450-b915-616d8496ab51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.982 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Resumed (Lifecycle Event)
Nov 25 08:32:14 compute-0 systemd[1]: Started libpod-conmon-5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826.scope.
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.984 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.991 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance spawned successfully.
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.992 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:32:14 compute-0 nova_compute[253538]: 2025-11-25 08:32:14.999 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.008 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.011 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.011 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.012 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.012 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.013 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.014 253542 DEBUG nova.virt.libvirt.driver [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6642dd5f19a45eda9399f2698ab38ce8fab03c28deb9c9cf4ff5295c9db6ea21/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.023 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:32:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bf48b96507f8f2079ed42df6dbba16301031af8c0514fe30d686a4e5306855a-merged.mount: Deactivated successfully.
Nov 25 08:32:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:32:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1486061073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.076 253542 INFO nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Took 7.84 seconds to spawn the instance on the hypervisor.
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.077 253542 DEBUG nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:15 compute-0 podman[307387]: 2025-11-25 08:32:15.0891835 +0000 UTC m=+0.470623979 container remove 0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.096 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:15 compute-0 systemd[1]: libpod-conmon-0bd5ed51b07aa59878a3bed4dccd98f69be98e172f165d201c7ec3b59695b1f5.scope: Deactivated successfully.
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.107 253542 DEBUG nova.compute.provider_tree [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:32:15 compute-0 podman[307426]: 2025-11-25 08:32:15.116544996 +0000 UTC m=+0.363755968 container init 5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:32:15 compute-0 podman[307426]: 2025-11-25 08:32:15.124297284 +0000 UTC m=+0.371508226 container start 5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.138 253542 DEBUG nova.scheduler.client.report [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:32:15 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[307477]: [NOTICE]   (307488) : New worker (307490) forked
Nov 25 08:32:15 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[307477]: [NOTICE]   (307488) : Loading success.
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.160 253542 INFO nova.compute.manager [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Took 8.94 seconds to build instance.
Nov 25 08:32:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 157 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 208 op/s
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.206 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.207 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.223 253542 DEBUG oslo_concurrency.lockutils [None req-e7c1ffa7-e4a2-4981-85ff-8897d3cb3e5e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.251 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.252 253542 DEBUG nova.network.neutron [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.266 253542 INFO nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:32:15 compute-0 podman[307504]: 2025-11-25 08:32:15.276663479 +0000 UTC m=+0.052010738 container create 1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.286 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:32:15 compute-0 systemd[1]: Started libpod-conmon-1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1.scope.
Nov 25 08:32:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:32:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db9ed113397d14766cc2d1a917cd50a5501612d4c6db0f89dd4f2452934b6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db9ed113397d14766cc2d1a917cd50a5501612d4c6db0f89dd4f2452934b6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db9ed113397d14766cc2d1a917cd50a5501612d4c6db0f89dd4f2452934b6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db9ed113397d14766cc2d1a917cd50a5501612d4c6db0f89dd4f2452934b6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db9ed113397d14766cc2d1a917cd50a5501612d4c6db0f89dd4f2452934b6a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:15 compute-0 podman[307504]: 2025-11-25 08:32:15.250191068 +0000 UTC m=+0.025538347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:32:15 compute-0 podman[307504]: 2025-11-25 08:32:15.359910217 +0000 UTC m=+0.135257496 container init 1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:32:15 compute-0 podman[307504]: 2025-11-25 08:32:15.369946177 +0000 UTC m=+0.145293446 container start 1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:32:15 compute-0 podman[307504]: 2025-11-25 08:32:15.373342298 +0000 UTC m=+0.148689587 container attach 1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.386 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.387 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.387 253542 INFO nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Creating image(s)
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.410 253542 DEBUG nova.storage.rbd_utils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 0201b222-1aa1-4d57-901c-e3c79170b567_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1486061073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.505 253542 DEBUG nova.storage.rbd_utils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 0201b222-1aa1-4d57-901c-e3c79170b567_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.524 253542 DEBUG nova.storage.rbd_utils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 0201b222-1aa1-4d57-901c-e3c79170b567_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.527 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.553 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.564 253542 DEBUG nova.policy [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a649c62aaacd4f01a93ea978066f5976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.570 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.607 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.608 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.609 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.609 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.627 253542 DEBUG nova.storage.rbd_utils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 0201b222-1aa1-4d57-901c-e3c79170b567_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.630 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0201b222-1aa1-4d57-901c-e3c79170b567_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:32:15 compute-0 nova_compute[253538]: 2025-11-25 08:32:15.945 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0201b222-1aa1-4d57-901c-e3c79170b567_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:16 compute-0 nova_compute[253538]: 2025-11-25 08:32:16.010 253542 DEBUG nova.storage.rbd_utils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] resizing rbd image 0201b222-1aa1-4d57-901c-e3c79170b567_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:32:16 compute-0 nova_compute[253538]: 2025-11-25 08:32:16.104 253542 DEBUG nova.objects.instance [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'migration_context' on Instance uuid 0201b222-1aa1-4d57-901c-e3c79170b567 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:16 compute-0 nova_compute[253538]: 2025-11-25 08:32:16.120 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:32:16 compute-0 nova_compute[253538]: 2025-11-25 08:32:16.121 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Ensure instance console log exists: /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:32:16 compute-0 nova_compute[253538]: 2025-11-25 08:32:16.121 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:16 compute-0 nova_compute[253538]: 2025-11-25 08:32:16.122 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:16 compute-0 nova_compute[253538]: 2025-11-25 08:32:16.122 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:16 compute-0 ceph-mon[75015]: pgmap v1467: 321 pgs: 321 active+clean; 157 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 208 op/s
Nov 25 08:32:16 compute-0 sweet_northcutt[307520]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:32:16 compute-0 sweet_northcutt[307520]: --> relative data size: 1.0
Nov 25 08:32:16 compute-0 sweet_northcutt[307520]: --> All data devices are unavailable
Nov 25 08:32:16 compute-0 systemd[1]: libpod-1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1.scope: Deactivated successfully.
Nov 25 08:32:16 compute-0 systemd[1]: libpod-1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1.scope: Consumed 1.018s CPU time.
Nov 25 08:32:16 compute-0 podman[307715]: 2025-11-25 08:32:16.619692745 +0000 UTC m=+0.042117203 container died 1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:32:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-99db9ed113397d14766cc2d1a917cd50a5501612d4c6db0f89dd4f2452934b6a-merged.mount: Deactivated successfully.
Nov 25 08:32:16 compute-0 podman[307715]: 2025-11-25 08:32:16.687782915 +0000 UTC m=+0.110207353 container remove 1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:32:16 compute-0 systemd[1]: libpod-conmon-1ee956ad6b069ac5d3db8869f10e848b4460c999cc7a99a7846950f8377bd6f1.scope: Deactivated successfully.
Nov 25 08:32:16 compute-0 sudo[307279]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:16 compute-0 sudo[307730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:32:16 compute-0 sudo[307730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:16 compute-0 sudo[307730]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:16 compute-0 sudo[307755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:32:16 compute-0 sudo[307755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:16 compute-0 sudo[307755]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:16 compute-0 sudo[307780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:32:16 compute-0 sudo[307780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:16 compute-0 sudo[307780]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:16 compute-0 sudo[307805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:32:16 compute-0 sudo[307805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 182 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.8 MiB/s wr, 217 op/s
Nov 25 08:32:17 compute-0 podman[307869]: 2025-11-25 08:32:17.287211675 +0000 UTC m=+0.038182228 container create 45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:32:17 compute-0 kernel: tap3c3c9c20-84 (unregistering): left promiscuous mode
Nov 25 08:32:17 compute-0 NetworkManager[48915]: <info>  [1764059537.3189] device (tap3c3c9c20-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:32:17 compute-0 systemd[1]: Started libpod-conmon-45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b.scope.
Nov 25 08:32:17 compute-0 ovn_controller[152859]: 2025-11-25T08:32:17Z|00409|binding|INFO|Releasing lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 from this chassis (sb_readonly=0)
Nov 25 08:32:17 compute-0 ovn_controller[152859]: 2025-11-25T08:32:17Z|00410|binding|INFO|Setting lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 down in Southbound
Nov 25 08:32:17 compute-0 ovn_controller[152859]: 2025-11-25T08:32:17Z|00411|binding|INFO|Removing iface tap3c3c9c20-84 ovn-installed in OVS
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.324 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.326 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.332 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:6e:6c 10.100.0.4'], port_security=['fa:16:3e:c4:6e:6c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7aefbad8-edb5-417c-a34d-e9e3c2dd0c03', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3c3c9c20-8436-4b41-9184-2061010ba6e2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.333 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3c3c9c20-8436-4b41-9184-2061010ba6e2 in datapath eb25945d-6002-4a99-b682-034a8a3dc901 unbound from our chassis
Nov 25 08:32:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.335 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb25945d-6002-4a99-b682-034a8a3dc901, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:32:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.339 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e641a58-9553-4aa9-a04e-db450fea5b70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.338 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.340 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace which is not needed anymore
Nov 25 08:32:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:32:17 compute-0 podman[307869]: 2025-11-25 08:32:17.271066931 +0000 UTC m=+0.022037504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:32:17 compute-0 podman[307869]: 2025-11-25 08:32:17.380219674 +0000 UTC m=+0.131190247 container init 45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wozniak, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 08:32:17 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Deactivated successfully.
Nov 25 08:32:17 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Consumed 12.754s CPU time.
Nov 25 08:32:17 compute-0 systemd-machined[215790]: Machine qemu-54-instance-00000031 terminated.
Nov 25 08:32:17 compute-0 podman[307869]: 2025-11-25 08:32:17.388006284 +0000 UTC m=+0.138976867 container start 45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wozniak, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 08:32:17 compute-0 podman[307869]: 2025-11-25 08:32:17.390558092 +0000 UTC m=+0.141528655 container attach 45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wozniak, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 08:32:17 compute-0 sleepy_wozniak[307889]: 167 167
Nov 25 08:32:17 compute-0 systemd[1]: libpod-45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b.scope: Deactivated successfully.
Nov 25 08:32:17 compute-0 conmon[307889]: conmon 45e73f2cb71fecf4e95d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b.scope/container/memory.events
Nov 25 08:32:17 compute-0 podman[307869]: 2025-11-25 08:32:17.394940451 +0000 UTC m=+0.145911014 container died 45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wozniak, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.404 253542 DEBUG nova.network.neutron [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Successfully created port: d02d0c40-ff59-4db1-8105-d39f0c8b67c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-be42d92085fef3901b00721071f360a2aeae076a8f78e3735f93ed543f6d3afb-merged.mount: Deactivated successfully.
Nov 25 08:32:17 compute-0 podman[307869]: 2025-11-25 08:32:17.437199546 +0000 UTC m=+0.188170099 container remove 45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:32:17 compute-0 systemd[1]: libpod-conmon-45e73f2cb71fecf4e95d844c25268538a39fce263c430ba53000f11674a2cb9b.scope: Deactivated successfully.
Nov 25 08:32:17 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[306500]: [NOTICE]   (306504) : haproxy version is 2.8.14-c23fe91
Nov 25 08:32:17 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[306500]: [NOTICE]   (306504) : path to executable is /usr/sbin/haproxy
Nov 25 08:32:17 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[306500]: [WARNING]  (306504) : Exiting Master process...
Nov 25 08:32:17 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[306500]: [ALERT]    (306504) : Current worker (306506) exited with code 143 (Terminated)
Nov 25 08:32:17 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[306500]: [WARNING]  (306504) : All workers exited. Exiting... (0)
Nov 25 08:32:17 compute-0 systemd[1]: libpod-8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b.scope: Deactivated successfully.
Nov 25 08:32:17 compute-0 podman[307919]: 2025-11-25 08:32:17.508692547 +0000 UTC m=+0.054198117 container died 8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.549 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.556 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.566 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.587 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.588 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.588 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.588 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.588 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.623 253542 DEBUG nova.compute.manager [req-ceebd3ce-bc79-4745-b37e-3b0dba23f72e req-2b075403-0424-4b50-94e5-aed3e4413a35 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-unplugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.623 253542 DEBUG oslo_concurrency.lockutils [req-ceebd3ce-bc79-4745-b37e-3b0dba23f72e req-2b075403-0424-4b50-94e5-aed3e4413a35 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.624 253542 DEBUG oslo_concurrency.lockutils [req-ceebd3ce-bc79-4745-b37e-3b0dba23f72e req-2b075403-0424-4b50-94e5-aed3e4413a35 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.624 253542 DEBUG oslo_concurrency.lockutils [req-ceebd3ce-bc79-4745-b37e-3b0dba23f72e req-2b075403-0424-4b50-94e5-aed3e4413a35 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.624 253542 DEBUG nova.compute.manager [req-ceebd3ce-bc79-4745-b37e-3b0dba23f72e req-2b075403-0424-4b50-94e5-aed3e4413a35 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] No waiting events found dispatching network-vif-unplugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.624 253542 WARNING nova.compute.manager [req-ceebd3ce-bc79-4745-b37e-3b0dba23f72e req-2b075403-0424-4b50-94e5-aed3e4413a35 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received unexpected event network-vif-unplugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 for instance with vm_state active and task_state rebuilding.
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.625 253542 INFO nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance shutdown successfully after 13 seconds.
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.633 253542 INFO nova.virt.libvirt.driver [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance destroyed successfully.
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.641 253542 INFO nova.virt.libvirt.driver [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance destroyed successfully.
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.642 253542 DEBUG nova.virt.libvirt.vif [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:31:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1569463086',display_name='tempest-ServerDiskConfigTestJSON-server-1569463086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1569463086',id=49,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-pn5emsvi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:03Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=7aefbad8-edb5-417c-a34d-e9e3c2dd0c03,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.642 253542 DEBUG nova.network.os_vif_util [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.644 253542 DEBUG nova.network.os_vif_util [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.644 253542 DEBUG os_vif [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b-userdata-shm.mount: Deactivated successfully.
Nov 25 08:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e001beb4c5ea87434b006a998a71e0ebe65ce32375515f22c2726c3467336969-merged.mount: Deactivated successfully.
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.647 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.652 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c3c9c20-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.657 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.659 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.660 253542 INFO os_vif [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84')
Nov 25 08:32:17 compute-0 podman[307919]: 2025-11-25 08:32:17.68104293 +0000 UTC m=+0.226548490 container cleanup 8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:32:17 compute-0 systemd[1]: libpod-conmon-8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b.scope: Deactivated successfully.
Nov 25 08:32:17 compute-0 podman[307956]: 2025-11-25 08:32:17.702779403 +0000 UTC m=+0.124812915 container create 86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 08:32:17 compute-0 systemd[1]: Started libpod-conmon-86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6.scope.
Nov 25 08:32:17 compute-0 podman[307956]: 2025-11-25 08:32:17.680416302 +0000 UTC m=+0.102449844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:32:17 compute-0 podman[307997]: 2025-11-25 08:32:17.762203701 +0000 UTC m=+0.056857980 container remove 8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 08:32:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.768 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[203429d6-584e-45d6-bad3-fbef1ef02222]: (4, ('Tue Nov 25 08:32:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b)\n8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b\nTue Nov 25 08:32:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b)\n8b01de287dc8e8945dfe6ac0027b248fe1b900bdcc53279896633ea29595ae2b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.770 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0f57adde-8f26-4dc3-91e7-cea9028cc82b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.771 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.809 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:17 compute-0 kernel: tapeb25945d-60: left promiscuous mode
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.811 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69d31fcbeab19be8f3e7cba029bd6b8beb2560ce9561f0a7dae8398df57e98b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69d31fcbeab19be8f3e7cba029bd6b8beb2560ce9561f0a7dae8398df57e98b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69d31fcbeab19be8f3e7cba029bd6b8beb2560ce9561f0a7dae8398df57e98b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69d31fcbeab19be8f3e7cba029bd6b8beb2560ce9561f0a7dae8398df57e98b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.815 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0b786f93-9b36-435e-b359-9594e596ded4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:17 compute-0 podman[307956]: 2025-11-25 08:32:17.833031034 +0000 UTC m=+0.255064586 container init 86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:32:17 compute-0 nova_compute[253538]: 2025-11-25 08:32:17.833 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.835 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d2496451-5e81-4ecd-a525-4d6d79fe6690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.837 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59d68911-7f2e-46d9-89ce-259f5c480038]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:17 compute-0 podman[307956]: 2025-11-25 08:32:17.840726381 +0000 UTC m=+0.262759903 container start 86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:32:17 compute-0 podman[307956]: 2025-11-25 08:32:17.844095332 +0000 UTC m=+0.266128854 container attach 86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 25 08:32:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.853 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc0c412-33fe-4c12-83b0-a6c1983dd685]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487201, 'reachable_time': 39245, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308036, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:17 compute-0 systemd[1]: run-netns-ovnmeta\x2deb25945d\x2d6002\x2d4a99\x2db682\x2d034a8a3dc901.mount: Deactivated successfully.
Nov 25 08:32:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.856 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:32:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:17.856 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1538ab-9c65-4c49-b973-2f57c302a8ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:32:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/11066372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.035 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.085 253542 INFO nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Deleting instance files /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_del
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.086 253542 INFO nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Deletion of /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_del complete
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.097 253542 DEBUG nova.compute.manager [req-eba182a9-e603-459e-a115-5410135c0215 req-30641b3d-43ca-4dcb-8c83-e3cc3fa746bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-changed-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.098 253542 DEBUG nova.compute.manager [req-eba182a9-e603-459e-a115-5410135c0215 req-30641b3d-43ca-4dcb-8c83-e3cc3fa746bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Refreshing instance network info cache due to event network-changed-15af3dd8-9788-4a34-b4b2-d3b24300cd4c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.098 253542 DEBUG oslo_concurrency.lockutils [req-eba182a9-e603-459e-a115-5410135c0215 req-30641b3d-43ca-4dcb-8c83-e3cc3fa746bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.098 253542 DEBUG oslo_concurrency.lockutils [req-eba182a9-e603-459e-a115-5410135c0215 req-30641b3d-43ca-4dcb-8c83-e3cc3fa746bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.098 253542 DEBUG nova.network.neutron [req-eba182a9-e603-459e-a115-5410135c0215 req-30641b3d-43ca-4dcb-8c83-e3cc3fa746bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Refreshing network info cache for port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.125 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.125 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.128 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.129 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.237 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.237 253542 INFO nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Creating image(s)
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.256 253542 DEBUG nova.storage.rbd_utils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.280 253542 DEBUG nova.storage.rbd_utils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.306 253542 DEBUG nova.storage.rbd_utils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.311 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.395 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.398 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.399 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.400 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.423 253542 DEBUG nova.storage.rbd_utils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.428 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.505 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.506 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3991MB free_disk=59.91162109375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.507 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.507 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:18 compute-0 ceph-mon[75015]: pgmap v1468: 321 pgs: 321 active+clean; 182 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.8 MiB/s wr, 217 op/s
Nov 25 08:32:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/11066372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.567861) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059538567951, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 611, "num_deletes": 251, "total_data_size": 628411, "memory_usage": 639368, "flush_reason": "Manual Compaction"}
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059538572662, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 622088, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30128, "largest_seqno": 30738, "table_properties": {"data_size": 618836, "index_size": 1160, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7786, "raw_average_key_size": 19, "raw_value_size": 612236, "raw_average_value_size": 1522, "num_data_blocks": 52, "num_entries": 402, "num_filter_entries": 402, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764059496, "oldest_key_time": 1764059496, "file_creation_time": 1764059538, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 4818 microseconds, and 2284 cpu microseconds.
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.572704) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 622088 bytes OK
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.572723) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.574177) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.574225) EVENT_LOG_v1 {"time_micros": 1764059538574213, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.574256) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 625059, prev total WAL file size 625059, number of live WAL files 2.
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.574959) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(607KB)], [65(7913KB)]
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059538575024, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 8725539, "oldest_snapshot_seqno": -1}
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.614 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.614 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0feca801-4630-4450-b915-616d8496ab51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.614 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0201b222-1aa1-4d57-901c-e3c79170b567 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.615 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.615 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5311 keys, 7139090 bytes, temperature: kUnknown
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059538619727, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7139090, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7104605, "index_size": 20103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 134856, "raw_average_key_size": 25, "raw_value_size": 7009882, "raw_average_value_size": 1319, "num_data_blocks": 820, "num_entries": 5311, "num_filter_entries": 5311, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764059538, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.619959) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7139090 bytes
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.623698) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.9 rd, 159.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 7.7 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(25.5) write-amplify(11.5) OK, records in: 5823, records dropped: 512 output_compression: NoCompression
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.623727) EVENT_LOG_v1 {"time_micros": 1764059538623715, "job": 36, "event": "compaction_finished", "compaction_time_micros": 44770, "compaction_time_cpu_micros": 23747, "output_level": 6, "num_output_files": 1, "total_output_size": 7139090, "num_input_records": 5823, "num_output_records": 5311, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059538623975, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059538625475, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.574837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.625539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.625546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.625548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.625549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:32:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:32:18.625550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]: {
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:     "0": [
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:         {
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "devices": [
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "/dev/loop3"
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             ],
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "lv_name": "ceph_lv0",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "lv_size": "21470642176",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "name": "ceph_lv0",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "tags": {
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.cluster_name": "ceph",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.crush_device_class": "",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.encrypted": "0",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.osd_id": "0",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.type": "block",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.vdo": "0"
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             },
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "type": "block",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "vg_name": "ceph_vg0"
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:         }
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:     ],
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:     "1": [
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:         {
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "devices": [
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "/dev/loop4"
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             ],
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "lv_name": "ceph_lv1",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "lv_size": "21470642176",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "name": "ceph_lv1",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "tags": {
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.cluster_name": "ceph",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.crush_device_class": "",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.encrypted": "0",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.osd_id": "1",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.type": "block",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.vdo": "0"
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             },
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "type": "block",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "vg_name": "ceph_vg1"
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:         }
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:     ],
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:     "2": [
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:         {
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "devices": [
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "/dev/loop5"
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             ],
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "lv_name": "ceph_lv2",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "lv_size": "21470642176",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "name": "ceph_lv2",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "tags": {
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.cluster_name": "ceph",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.crush_device_class": "",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.encrypted": "0",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.osd_id": "2",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.type": "block",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:                 "ceph.vdo": "0"
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             },
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "type": "block",
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:             "vg_name": "ceph_vg2"
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:         }
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]:     ]
Nov 25 08:32:18 compute-0 jolly_proskuriakova[308031]: }
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.713 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:18 compute-0 systemd[1]: libpod-86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6.scope: Deactivated successfully.
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.758 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:18 compute-0 podman[308143]: 2025-11-25 08:32:18.773083599 +0000 UTC m=+0.042887664 container died 86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 08:32:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c69d31fcbeab19be8f3e7cba029bd6b8beb2560ce9561f0a7dae8398df57e98b-merged.mount: Deactivated successfully.
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.840 253542 DEBUG nova.storage.rbd_utils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] resizing rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:32:18 compute-0 podman[308143]: 2025-11-25 08:32:18.841808056 +0000 UTC m=+0.111612041 container remove 86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 08:32:18 compute-0 systemd[1]: libpod-conmon-86bef5820814aca1b74a1ad451e0138f0e3fa38106fd59cffd13dc0fd141d8a6.scope: Deactivated successfully.
Nov 25 08:32:18 compute-0 sudo[307805]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:18 compute-0 sudo[308232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:32:18 compute-0 sudo[308232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:18 compute-0 sudo[308232]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.962 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.969 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Ensure instance console log exists: /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.970 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.970 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.971 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.976 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Start _get_guest_xml network_info=[{"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.982 253542 WARNING nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.988 253542 DEBUG nova.virt.libvirt.host [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.989 253542 DEBUG nova.virt.libvirt.host [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.993 253542 DEBUG nova.virt.libvirt.host [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.993 253542 DEBUG nova.virt.libvirt.host [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.993 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.994 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.994 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.994 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.994 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.994 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.995 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.995 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.995 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.995 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.995 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.996 253542 DEBUG nova.virt.hardware [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:32:18 compute-0 nova_compute[253538]: 2025-11-25 08:32:18.996 253542 DEBUG nova.objects.instance [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.013 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:19 compute-0 sudo[308275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:32:19 compute-0 sudo[308275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:19 compute-0 sudo[308275]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:19 compute-0 sudo[308301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:32:19 compute-0 sudo[308301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:19 compute-0 sudo[308301]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:19 compute-0 sudo[308326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:32:19 compute-0 sudo[308326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:32:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/45810366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 166 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.1 MiB/s wr, 232 op/s
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.213 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.218 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.229 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.453 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.453 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:32:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1443797501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.497 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:19 compute-0 podman[308412]: 2025-11-25 08:32:19.516502199 +0000 UTC m=+0.050502418 container create 70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.523 253542 DEBUG nova.storage.rbd_utils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.530 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:19 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/45810366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:19 compute-0 ceph-mon[75015]: pgmap v1469: 321 pgs: 321 active+clean; 166 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.1 MiB/s wr, 232 op/s
Nov 25 08:32:19 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1443797501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:19 compute-0 systemd[1]: Started libpod-conmon-70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be.scope.
Nov 25 08:32:19 compute-0 podman[308412]: 2025-11-25 08:32:19.490775758 +0000 UTC m=+0.024776007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:32:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:32:19 compute-0 podman[308412]: 2025-11-25 08:32:19.61549965 +0000 UTC m=+0.149499899 container init 70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 08:32:19 compute-0 podman[308412]: 2025-11-25 08:32:19.62333224 +0000 UTC m=+0.157332469 container start 70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:32:19 compute-0 podman[308412]: 2025-11-25 08:32:19.625982832 +0000 UTC m=+0.159983051 container attach 70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 08:32:19 compute-0 cranky_taussig[308450]: 167 167
Nov 25 08:32:19 compute-0 systemd[1]: libpod-70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be.scope: Deactivated successfully.
Nov 25 08:32:19 compute-0 podman[308412]: 2025-11-25 08:32:19.628752426 +0000 UTC m=+0.162752645 container died 70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:32:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-5912157e55dad6a8a45e8bd09c364ada27ec40abc089fac2690a8f8019686762-merged.mount: Deactivated successfully.
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.656 253542 DEBUG nova.compute.manager [req-47e1396c-84a2-494c-ab16-348bf9cdc5fb req-1ed138e8-e78d-4489-b3dd-803e169a4450 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.657 253542 DEBUG oslo_concurrency.lockutils [req-47e1396c-84a2-494c-ab16-348bf9cdc5fb req-1ed138e8-e78d-4489-b3dd-803e169a4450 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.657 253542 DEBUG oslo_concurrency.lockutils [req-47e1396c-84a2-494c-ab16-348bf9cdc5fb req-1ed138e8-e78d-4489-b3dd-803e169a4450 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.657 253542 DEBUG oslo_concurrency.lockutils [req-47e1396c-84a2-494c-ab16-348bf9cdc5fb req-1ed138e8-e78d-4489-b3dd-803e169a4450 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.657 253542 DEBUG nova.compute.manager [req-47e1396c-84a2-494c-ab16-348bf9cdc5fb req-1ed138e8-e78d-4489-b3dd-803e169a4450 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] No waiting events found dispatching network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.658 253542 WARNING nova.compute.manager [req-47e1396c-84a2-494c-ab16-348bf9cdc5fb req-1ed138e8-e78d-4489-b3dd-803e169a4450 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received unexpected event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 for instance with vm_state active and task_state rebuild_spawning.
Nov 25 08:32:19 compute-0 podman[308412]: 2025-11-25 08:32:19.661845976 +0000 UTC m=+0.195846195 container remove 70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 08:32:19 compute-0 systemd[1]: libpod-conmon-70f1145475ab15ed9b21a28caf7f2b416505eee1cd6c35e8683ba01ec37387be.scope: Deactivated successfully.
Nov 25 08:32:19 compute-0 podman[308492]: 2025-11-25 08:32:19.865030046 +0000 UTC m=+0.068438340 container create 2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:32:19 compute-0 systemd[1]: Started libpod-conmon-2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880.scope.
Nov 25 08:32:19 compute-0 podman[308492]: 2025-11-25 08:32:19.839250503 +0000 UTC m=+0.042658877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:32:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:32:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/133880493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ac0440b039a9884f8324e4a53668ecee436b00d36e254a964a4ffbea35093c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ac0440b039a9884f8324e4a53668ecee436b00d36e254a964a4ffbea35093c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ac0440b039a9884f8324e4a53668ecee436b00d36e254a964a4ffbea35093c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ac0440b039a9884f8324e4a53668ecee436b00d36e254a964a4ffbea35093c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.966 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.968 253542 DEBUG nova.virt.libvirt.vif [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:31:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1569463086',display_name='tempest-ServerDiskConfigTestJSON-server-1569463086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1569463086',id=49,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-pn5emsvi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-
ServerDiskConfigTestJSON-1655399928-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:18Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=7aefbad8-edb5-417c-a34d-e9e3c2dd0c03,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:32:19 compute-0 podman[308492]: 2025-11-25 08:32:19.970437939 +0000 UTC m=+0.173846253 container init 2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.969 253542 DEBUG nova.network.os_vif_util [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.972 253542 DEBUG nova.network.os_vif_util [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.978 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:32:19 compute-0 nova_compute[253538]:   <uuid>7aefbad8-edb5-417c-a34d-e9e3c2dd0c03</uuid>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   <name>instance-00000031</name>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1569463086</nova:name>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:32:18</nova:creationTime>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:32:19 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:32:19 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:32:19 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:32:19 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:32:19 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:32:19 compute-0 nova_compute[253538]:         <nova:user uuid="7ad88cb0e4cf4d0b8e4cbec835318015">tempest-ServerDiskConfigTestJSON-1655399928-project-member</nova:user>
Nov 25 08:32:19 compute-0 nova_compute[253538]:         <nova:project uuid="dc93aa65bef7473d961e0cad1e8f2962">tempest-ServerDiskConfigTestJSON-1655399928</nova:project>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:32:19 compute-0 nova_compute[253538]:         <nova:port uuid="3c3c9c20-8436-4b41-9184-2061010ba6e2">
Nov 25 08:32:19 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <system>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <entry name="serial">7aefbad8-edb5-417c-a34d-e9e3c2dd0c03</entry>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <entry name="uuid">7aefbad8-edb5-417c-a34d-e9e3c2dd0c03</entry>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     </system>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   <os>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   </os>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   <features>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   </features>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk">
Nov 25 08:32:19 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:32:19 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config">
Nov 25 08:32:19 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:32:19 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:c4:6e:6c"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <target dev="tap3c3c9c20-84"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/console.log" append="off"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <video>
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     </video>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:32:19 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:32:19 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:32:19 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:32:19 compute-0 nova_compute[253538]: </domain>
Nov 25 08:32:19 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.978 253542 DEBUG nova.compute.manager [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Preparing to wait for external event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.979 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.980 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.980 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.981 253542 DEBUG nova.virt.libvirt.vif [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:31:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1569463086',display_name='tempest-ServerDiskConfigTestJSON-server-1569463086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1569463086',id=49,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-pn5emsvi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-
ServerDiskConfigTestJSON-1655399928-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:18Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=7aefbad8-edb5-417c-a34d-e9e3c2dd0c03,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.982 253542 DEBUG nova.network.os_vif_util [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:19 compute-0 podman[308492]: 2025-11-25 08:32:19.982258987 +0000 UTC m=+0.185667311 container start 2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nobel, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.983 253542 DEBUG nova.network.os_vif_util [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.984 253542 DEBUG os_vif [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:32:19 compute-0 podman[308492]: 2025-11-25 08:32:19.986163842 +0000 UTC m=+0.189572146 container attach 2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.986 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.986 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.987 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.991 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.991 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c3c9c20-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:19 compute-0 nova_compute[253538]: 2025-11-25 08:32:19.992 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c3c9c20-84, col_values=(('external_ids', {'iface-id': '3c3c9c20-8436-4b41-9184-2061010ba6e2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:6e:6c', 'vm-uuid': '7aefbad8-edb5-417c-a34d-e9e3c2dd0c03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.037 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:20 compute-0 NetworkManager[48915]: <info>  [1764059540.0391] manager: (tap3c3c9c20-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.041 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.047 253542 INFO os_vif [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84')
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.097 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.097 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.097 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No VIF found with MAC fa:16:3e:c4:6e:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.098 253542 INFO nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Using config drive
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.120 253542 DEBUG nova.storage.rbd_utils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.145 253542 DEBUG nova.network.neutron [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Successfully updated port: d02d0c40-ff59-4db1-8105-d39f0c8b67c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.148 253542 DEBUG nova.objects.instance [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.166 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "refresh_cache-0201b222-1aa1-4d57-901c-e3c79170b567" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.166 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquired lock "refresh_cache-0201b222-1aa1-4d57-901c-e3c79170b567" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.166 253542 DEBUG nova.network.neutron [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.192 253542 DEBUG nova.objects.instance [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'keypairs' on Instance uuid 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.469 253542 DEBUG nova.compute.manager [req-afb0f8b4-1fdb-4a84-9902-9bf6d5255fc3 req-de375276-d6ea-4697-a201-96d3fa0c8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received event network-changed-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.470 253542 DEBUG nova.compute.manager [req-afb0f8b4-1fdb-4a84-9902-9bf6d5255fc3 req-de375276-d6ea-4697-a201-96d3fa0c8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Refreshing instance network info cache due to event network-changed-d02d0c40-ff59-4db1-8105-d39f0c8b67c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.470 253542 DEBUG oslo_concurrency.lockutils [req-afb0f8b4-1fdb-4a84-9902-9bf6d5255fc3 req-de375276-d6ea-4697-a201-96d3fa0c8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0201b222-1aa1-4d57-901c-e3c79170b567" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.479 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.611 253542 DEBUG nova.network.neutron [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:32:20 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/133880493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:32:20 compute-0 frosty_nobel[308508]: {
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:         "osd_id": 1,
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:         "type": "bluestore"
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:     },
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:         "osd_id": 2,
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:         "type": "bluestore"
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:     },
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:         "osd_id": 0,
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:         "type": "bluestore"
Nov 25 08:32:20 compute-0 frosty_nobel[308508]:     }
Nov 25 08:32:20 compute-0 frosty_nobel[308508]: }
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.973 253542 INFO nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Creating config drive at /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config
Nov 25 08:32:20 compute-0 systemd[1]: libpod-2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880.scope: Deactivated successfully.
Nov 25 08:32:20 compute-0 nova_compute[253538]: 2025-11-25 08:32:20.981 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4bbeth4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:21 compute-0 podman[308564]: 2025-11-25 08:32:21.011833788 +0000 UTC m=+0.020887713 container died 2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:32:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-36ac0440b039a9884f8324e4a53668ecee436b00d36e254a964a4ffbea35093c-merged.mount: Deactivated successfully.
Nov 25 08:32:21 compute-0 podman[308564]: 2025-11-25 08:32:21.075675243 +0000 UTC m=+0.084729188 container remove 2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 08:32:21 compute-0 systemd[1]: libpod-conmon-2472c4b9c2a9a22bb94e050c9e271e6ed4ceb0fea5f79b523bd13a92d1df7880.scope: Deactivated successfully.
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.088 253542 DEBUG nova.network.neutron [req-eba182a9-e603-459e-a115-5410135c0215 req-30641b3d-43ca-4dcb-8c83-e3cc3fa746bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updated VIF entry in instance network info cache for port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.090 253542 DEBUG nova.network.neutron [req-eba182a9-e603-459e-a115-5410135c0215 req-30641b3d-43ca-4dcb-8c83-e3cc3fa746bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:21 compute-0 sudo[308326]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.125 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4bbeth4" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:32:21 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:32:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:32:21 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:32:21 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 7a078195-7fdf-4c9b-bd17-b04cbf9484cc does not exist
Nov 25 08:32:21 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 47fbdb3c-8454-4957-b6d2-7303bf04a38e does not exist
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.164 253542 DEBUG nova.storage.rbd_utils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.171 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 198 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.9 MiB/s wr, 256 op/s
Nov 25 08:32:21 compute-0 sudo[308595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:32:21 compute-0 sudo[308595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:21 compute-0 sudo[308595]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.299 253542 DEBUG oslo_concurrency.lockutils [req-eba182a9-e603-459e-a115-5410135c0215 req-30641b3d-43ca-4dcb-8c83-e3cc3fa746bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:21 compute-0 sudo[308623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:32:21 compute-0 sudo[308623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:32:21 compute-0 sudo[308623]: pam_unix(sudo:session): session closed for user root
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.329 253542 DEBUG oslo_concurrency.processutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.330 253542 INFO nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Deleting local config drive /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03/disk.config because it was imported into RBD.
Nov 25 08:32:21 compute-0 kernel: tap3c3c9c20-84: entered promiscuous mode
Nov 25 08:32:21 compute-0 NetworkManager[48915]: <info>  [1764059541.3956] manager: (tap3c3c9c20-84): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.397 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:21 compute-0 ovn_controller[152859]: 2025-11-25T08:32:21Z|00412|binding|INFO|Claiming lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 for this chassis.
Nov 25 08:32:21 compute-0 ovn_controller[152859]: 2025-11-25T08:32:21Z|00413|binding|INFO|3c3c9c20-8436-4b41-9184-2061010ba6e2: Claiming fa:16:3e:c4:6e:6c 10.100.0.4
Nov 25 08:32:21 compute-0 ovn_controller[152859]: 2025-11-25T08:32:21Z|00414|binding|INFO|Setting lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 ovn-installed in OVS
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.425 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.433 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:21 compute-0 systemd-machined[215790]: New machine qemu-56-instance-00000031.
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.447 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.447 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:32:21 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-00000031.
Nov 25 08:32:21 compute-0 ovn_controller[152859]: 2025-11-25T08:32:21Z|00415|binding|INFO|Setting lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 up in Southbound
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.465 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:6e:6c 10.100.0.4'], port_security=['fa:16:3e:c4:6e:6c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7aefbad8-edb5-417c-a34d-e9e3c2dd0c03', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3c3c9c20-8436-4b41-9184-2061010ba6e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.466 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3c3c9c20-8436-4b41-9184-2061010ba6e2 in datapath eb25945d-6002-4a99-b682-034a8a3dc901 bound to our chassis
Nov 25 08:32:21 compute-0 systemd-udevd[308679]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.468 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.478 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7c9e52bd-3017-4f9e-994a-f871996c0a15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.479 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb25945d-61 in ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.480 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb25945d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.480 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[df1f4b91-8ac1-49f3-9d53-10fa56c70efc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.480 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.481 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2052bdfe-c045-4fa3-b511-1b5ca966acc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:21 compute-0 NetworkManager[48915]: <info>  [1764059541.4870] device (tap3c3c9c20-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:32:21 compute-0 NetworkManager[48915]: <info>  [1764059541.4918] device (tap3c3c9c20-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.492 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[9a1aa2bf-2acc-4bab-84fe-b843450688ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.516 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4576f327-ab0a-4777-b392-d7852fec9bb6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.540 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf4c154-9779-47aa-9700-40b3b73538e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:21 compute-0 NetworkManager[48915]: <info>  [1764059541.5467] manager: (tapeb25945d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/192)
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.547 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[013b28a3-3845-44d6-99ab-0132135c10b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.586 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a1181fd3-2d14-4343-8e86-fb43725e3cc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.588 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fc967540-7010-4433-b256-b4afdc5defd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:21 compute-0 NetworkManager[48915]: <info>  [1764059541.6066] device (tapeb25945d-60): carrier: link connected
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.611 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[33ed429b-81c4-4605-8a1b-f0dd2bf3bf8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.625 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[039feeb9-33f6-4e60-ba46-1361d3591b0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489620, 'reachable_time': 15279, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308712, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.637 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a3344277-8c8f-453e-84b5-338bd1fbbf83]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:fcf3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489620, 'tstamp': 489620}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308713, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.656 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0e91a7f8-2337-411b-9d60-cf477cba0c3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489620, 'reachable_time': 15279, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308714, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.681 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d7376603-e872-43e5-b121-cdc1f43c12fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.729 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcdcc60-7092-4a2b-a7d4-6aa7bf328064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.730 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.731 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.731 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb25945d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.733 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:21 compute-0 kernel: tapeb25945d-60: entered promiscuous mode
Nov 25 08:32:21 compute-0 NetworkManager[48915]: <info>  [1764059541.7343] manager: (tapeb25945d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.736 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.738 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb25945d-60, col_values=(('external_ids', {'iface-id': 'f4a838c4-0817-4ff4-8792-2e2721905e98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.739 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:21 compute-0 ovn_controller[152859]: 2025-11-25T08:32:21Z|00416|binding|INFO|Releasing lport f4a838c4-0817-4ff4-8792-2e2721905e98 from this chassis (sb_readonly=0)
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.754 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:21 compute-0 nova_compute[253538]: 2025-11-25 08:32:21.758 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.759 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.759 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[75c1b7e3-5479-4127-8627-04ecb2a9cba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.760 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:32:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:21.760 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'env', 'PROCESS_TAG=haproxy-eb25945d-6002-4a99-b682-034a8a3dc901', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb25945d-6002-4a99-b682-034a8a3dc901.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:32:22 compute-0 podman[308746]: 2025-11-25 08:32:22.105700096 +0000 UTC m=+0.047993201 container create a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 08:32:22 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:32:22 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:32:22 compute-0 ceph-mon[75015]: pgmap v1470: 321 pgs: 321 active+clean; 198 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.9 MiB/s wr, 256 op/s
Nov 25 08:32:22 compute-0 systemd[1]: Started libpod-conmon-a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd.scope.
Nov 25 08:32:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:32:22 compute-0 podman[308746]: 2025-11-25 08:32:22.082618106 +0000 UTC m=+0.024911231 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.179 253542 DEBUG nova.compute.manager [req-305a7c6f-3bfa-4a94-b4df-a90fe5cb4ff7 req-a5c2f81e-bc58-41dc-94cb-88e4425f830f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.180 253542 DEBUG oslo_concurrency.lockutils [req-305a7c6f-3bfa-4a94-b4df-a90fe5cb4ff7 req-a5c2f81e-bc58-41dc-94cb-88e4425f830f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.181 253542 DEBUG oslo_concurrency.lockutils [req-305a7c6f-3bfa-4a94-b4df-a90fe5cb4ff7 req-a5c2f81e-bc58-41dc-94cb-88e4425f830f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.181 253542 DEBUG oslo_concurrency.lockutils [req-305a7c6f-3bfa-4a94-b4df-a90fe5cb4ff7 req-a5c2f81e-bc58-41dc-94cb-88e4425f830f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.182 253542 DEBUG nova.compute.manager [req-305a7c6f-3bfa-4a94-b4df-a90fe5cb4ff7 req-a5c2f81e-bc58-41dc-94cb-88e4425f830f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Processing event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:32:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e58279ce3b57ee04aae33b135da848ead0b67f41b7f32ad9c03b3604e4992d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:22 compute-0 podman[308746]: 2025-11-25 08:32:22.199921819 +0000 UTC m=+0.142214964 container init a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 08:32:22 compute-0 podman[308746]: 2025-11-25 08:32:22.206219908 +0000 UTC m=+0.148513023 container start a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 08:32:22 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[308762]: [NOTICE]   (308766) : New worker (308768) forked
Nov 25 08:32:22 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[308762]: [NOTICE]   (308766) : Loading success.
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.285 253542 DEBUG nova.network.neutron [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Updating instance_info_cache with network_info: [{"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.306 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Releasing lock "refresh_cache-0201b222-1aa1-4d57-901c-e3c79170b567" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.306 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Instance network_info: |[{"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.307 253542 DEBUG oslo_concurrency.lockutils [req-afb0f8b4-1fdb-4a84-9902-9bf6d5255fc3 req-de375276-d6ea-4697-a201-96d3fa0c8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0201b222-1aa1-4d57-901c-e3c79170b567" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.307 253542 DEBUG nova.network.neutron [req-afb0f8b4-1fdb-4a84-9902-9bf6d5255fc3 req-de375276-d6ea-4697-a201-96d3fa0c8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Refreshing network info cache for port d02d0c40-ff59-4db1-8105-d39f0c8b67c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.310 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Start _get_guest_xml network_info=[{"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.315 253542 WARNING nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.320 253542 DEBUG nova.virt.libvirt.host [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.321 253542 DEBUG nova.virt.libvirt.host [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.327 253542 DEBUG nova.virt.libvirt.host [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.327 253542 DEBUG nova.virt.libvirt.host [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.328 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.328 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.329 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.329 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.329 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.329 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.330 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.330 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.330 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.330 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.330 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.331 253542 DEBUG nova.virt.hardware [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.333 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:32:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:32:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/848310532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.801 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.821 253542 DEBUG nova.storage.rbd_utils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 0201b222-1aa1-4d57-901c-e3c79170b567_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.825 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.857 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.858 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059542.8024304, 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.858 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] VM Started (Lifecycle Event)
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.860 253542 DEBUG nova.compute.manager [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.864 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.867 253542 INFO nova.virt.libvirt.driver [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance spawned successfully.
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.867 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.889 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.897 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.901 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.902 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.902 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.903 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.903 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.904 253542 DEBUG nova.virt.libvirt.driver [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.931 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.931 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059542.802881, 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.932 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] VM Paused (Lifecycle Event)
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.968 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.971 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059542.863391, 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.972 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] VM Resumed (Lifecycle Event)
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.990 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.996 253542 DEBUG nova.compute.manager [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:22 compute-0 nova_compute[253538]: 2025-11-25 08:32:22.998 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.033 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.139 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.140 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.140 253542 DEBUG nova.objects.instance [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:32:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/848310532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 177 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.7 MiB/s wr, 244 op/s
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.251 253542 DEBUG oslo_concurrency.lockutils [None req-274b1ae7-4359-493e-ab71-686e6f17a045 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:32:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3622454695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.333 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.335 253542 DEBUG nova.virt.libvirt.vif [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-93037066',display_name='tempest-DeleteServersTestJSON-server-93037066',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-93037066',id=51,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-6q1swzq1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-209569
4504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:15Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=0201b222-1aa1-4d57-901c-e3c79170b567,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.335 253542 DEBUG nova.network.os_vif_util [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.336 253542 DEBUG nova.network.os_vif_util [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:88:0d,bridge_name='br-int',has_traffic_filtering=True,id=d02d0c40-ff59-4db1-8105-d39f0c8b67c5,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02d0c40-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.337 253542 DEBUG nova.objects.instance [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0201b222-1aa1-4d57-901c-e3c79170b567 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.350 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:32:23 compute-0 nova_compute[253538]:   <uuid>0201b222-1aa1-4d57-901c-e3c79170b567</uuid>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   <name>instance-00000033</name>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <nova:name>tempest-DeleteServersTestJSON-server-93037066</nova:name>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:32:22</nova:creationTime>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:32:23 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:32:23 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:32:23 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:32:23 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:32:23 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:32:23 compute-0 nova_compute[253538]:         <nova:user uuid="a649c62aaacd4f01a93ea978066f5976">tempest-DeleteServersTestJSON-2095694504-project-member</nova:user>
Nov 25 08:32:23 compute-0 nova_compute[253538]:         <nova:project uuid="a9c243220ecd4ba3af10cdbc0ea76bd6">tempest-DeleteServersTestJSON-2095694504</nova:project>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:32:23 compute-0 nova_compute[253538]:         <nova:port uuid="d02d0c40-ff59-4db1-8105-d39f0c8b67c5">
Nov 25 08:32:23 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <system>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <entry name="serial">0201b222-1aa1-4d57-901c-e3c79170b567</entry>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <entry name="uuid">0201b222-1aa1-4d57-901c-e3c79170b567</entry>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     </system>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   <os>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   </os>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   <features>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   </features>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0201b222-1aa1-4d57-901c-e3c79170b567_disk">
Nov 25 08:32:23 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:32:23 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0201b222-1aa1-4d57-901c-e3c79170b567_disk.config">
Nov 25 08:32:23 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:32:23 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:5c:88:0d"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <target dev="tapd02d0c40-ff"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567/console.log" append="off"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <video>
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     </video>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:32:23 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:32:23 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:32:23 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:32:23 compute-0 nova_compute[253538]: </domain>
Nov 25 08:32:23 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.355 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Preparing to wait for external event network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.356 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.356 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.356 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.357 253542 DEBUG nova.virt.libvirt.vif [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-93037066',display_name='tempest-DeleteServersTestJSON-server-93037066',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-93037066',id=51,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-6q1swzq1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:15Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=0201b222-1aa1-4d57-901c-e3c79170b567,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.357 253542 DEBUG nova.network.os_vif_util [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.358 253542 DEBUG nova.network.os_vif_util [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:88:0d,bridge_name='br-int',has_traffic_filtering=True,id=d02d0c40-ff59-4db1-8105-d39f0c8b67c5,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02d0c40-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.358 253542 DEBUG os_vif [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:88:0d,bridge_name='br-int',has_traffic_filtering=True,id=d02d0c40-ff59-4db1-8105-d39f0c8b67c5,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02d0c40-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.359 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.360 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.360 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.363 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.363 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd02d0c40-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.363 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd02d0c40-ff, col_values=(('external_ids', {'iface-id': 'd02d0c40-ff59-4db1-8105-d39f0c8b67c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5c:88:0d', 'vm-uuid': '0201b222-1aa1-4d57-901c-e3c79170b567'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:23 compute-0 NetworkManager[48915]: <info>  [1764059543.3662] manager: (tapd02d0c40-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.367 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.370 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.372 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.373 253542 INFO os_vif [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:88:0d,bridge_name='br-int',has_traffic_filtering=True,id=d02d0c40-ff59-4db1-8105-d39f0c8b67c5,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02d0c40-ff')
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.426 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.427 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.427 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No VIF found with MAC fa:16:3e:5c:88:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.427 253542 INFO nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Using config drive
Nov 25 08:32:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:32:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:32:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:32:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:32:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:32:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:32:23 compute-0 nova_compute[253538]: 2025-11-25 08:32:23.455 253542 DEBUG nova.storage.rbd_utils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 0201b222-1aa1-4d57-901c-e3c79170b567_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.106 253542 INFO nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Creating config drive at /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567/disk.config
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.111 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9xfdw1m9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.147 253542 DEBUG nova.network.neutron [req-afb0f8b4-1fdb-4a84-9902-9bf6d5255fc3 req-de375276-d6ea-4697-a201-96d3fa0c8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Updated VIF entry in instance network info cache for port d02d0c40-ff59-4db1-8105-d39f0c8b67c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.148 253542 DEBUG nova.network.neutron [req-afb0f8b4-1fdb-4a84-9902-9bf6d5255fc3 req-de375276-d6ea-4697-a201-96d3fa0c8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Updating instance_info_cache with network_info: [{"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:24 compute-0 ceph-mon[75015]: pgmap v1471: 321 pgs: 321 active+clean; 177 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.7 MiB/s wr, 244 op/s
Nov 25 08:32:24 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3622454695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.164 253542 DEBUG oslo_concurrency.lockutils [req-afb0f8b4-1fdb-4a84-9902-9bf6d5255fc3 req-de375276-d6ea-4697-a201-96d3fa0c8822 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0201b222-1aa1-4d57-901c-e3c79170b567" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.255 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9xfdw1m9" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.276 253542 DEBUG nova.storage.rbd_utils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 0201b222-1aa1-4d57-901c-e3c79170b567_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.278 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567/disk.config 0201b222-1aa1-4d57-901c-e3c79170b567_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.426 253542 DEBUG oslo_concurrency.processutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567/disk.config 0201b222-1aa1-4d57-901c-e3c79170b567_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.427 253542 INFO nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Deleting local config drive /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567/disk.config because it was imported into RBD.
Nov 25 08:32:24 compute-0 kernel: tapd02d0c40-ff: entered promiscuous mode
Nov 25 08:32:24 compute-0 NetworkManager[48915]: <info>  [1764059544.4872] manager: (tapd02d0c40-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/195)
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.491 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:24 compute-0 ovn_controller[152859]: 2025-11-25T08:32:24Z|00417|binding|INFO|Claiming lport d02d0c40-ff59-4db1-8105-d39f0c8b67c5 for this chassis.
Nov 25 08:32:24 compute-0 ovn_controller[152859]: 2025-11-25T08:32:24Z|00418|binding|INFO|d02d0c40-ff59-4db1-8105-d39f0c8b67c5: Claiming fa:16:3e:5c:88:0d 10.100.0.7
Nov 25 08:32:24 compute-0 systemd-udevd[308952]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:32:24 compute-0 NetworkManager[48915]: <info>  [1764059544.5336] device (tapd02d0c40-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:32:24 compute-0 NetworkManager[48915]: <info>  [1764059544.5359] device (tapd02d0c40-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:32:24 compute-0 ovn_controller[152859]: 2025-11-25T08:32:24Z|00419|binding|INFO|Setting lport d02d0c40-ff59-4db1-8105-d39f0c8b67c5 ovn-installed in OVS
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.537 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:24 compute-0 systemd-machined[215790]: New machine qemu-57-instance-00000033.
Nov 25 08:32:24 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-00000033.
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.947 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059544.9471781, 0201b222-1aa1-4d57-901c-e3c79170b567 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.949 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] VM Started (Lifecycle Event)
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.966 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.971 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059544.9483302, 0201b222-1aa1-4d57-901c-e3c79170b567 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.971 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] VM Paused (Lifecycle Event)
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.988 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:24 compute-0 nova_compute[253538]: 2025-11-25 08:32:24.992 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:32:25 compute-0 nova_compute[253538]: 2025-11-25 08:32:25.009 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:32:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 181 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.1 MiB/s wr, 282 op/s
Nov 25 08:32:25 compute-0 nova_compute[253538]: 2025-11-25 08:32:25.481 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:25 compute-0 nova_compute[253538]: 2025-11-25 08:32:25.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:32:25 compute-0 ovn_controller[152859]: 2025-11-25T08:32:25Z|00420|binding|INFO|Setting lport d02d0c40-ff59-4db1-8105-d39f0c8b67c5 up in Southbound
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.665 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:88:0d 10.100.0.7'], port_security=['fa:16:3e:5c:88:0d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '0201b222-1aa1-4d57-901c-e3c79170b567', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d02d0c40-ff59-4db1-8105-d39f0c8b67c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.667 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d02d0c40-ff59-4db1-8105-d39f0c8b67c5 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 bound to our chassis
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.669 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.682 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[38abe753-625a-4ab1-98b3-9b29678ac1a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.684 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa66e51b8-e1 in ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.686 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa66e51b8-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.686 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[33123b87-1dc6-4ad7-bff7-c01a00fdbf4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.687 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9d57c6-6426-440f-84dc-7003811c3b00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.698 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e355cad0-aa25-41ee-998b-71610fb10442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.711 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[58594e66-955e-4f74-8590-4aa677a52b9a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.739 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae355c1-a9cc-4aad-922f-8f2e2128eb17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.745 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e1143db2-465a-41ab-b777-67bc3216a93c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:25 compute-0 NetworkManager[48915]: <info>  [1764059545.7465] manager: (tapa66e51b8-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/196)
Nov 25 08:32:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.779 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b5cb8338-e082-417c-a2f3-00e7c8ac8242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.782 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[cd0f39f5-a027-4e19-ba46-8ca68a090c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:25 compute-0 NetworkManager[48915]: <info>  [1764059545.8042] device (tapa66e51b8-e0): carrier: link connected
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.814 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[496a37d9-3b21-493a-8f6d-5562f3df7a38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.831 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c1c90e-683a-4bd0-b8e4-3ec54997a13d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490040, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309032, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.848 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c26f495c-2569-4562-be40-ded0104d54bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:2c20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490040, 'tstamp': 490040}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309033, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.865 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[02cd96db-27e8-4d52-bbc2-49111edc2a09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490040, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309034, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.908 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1b718439-5db6-4ee4-8fe6-d1982058922b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.989 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c9562818-5af4-4279-9529-86782bb72516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.990 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.991 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:25.991 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa66e51b8-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:25 compute-0 NetworkManager[48915]: <info>  [1764059545.9937] manager: (tapa66e51b8-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Nov 25 08:32:25 compute-0 kernel: tapa66e51b8-e0: entered promiscuous mode
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:26.002 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa66e51b8-e0, col_values=(('external_ids', {'iface-id': 'c0d74b17-7eba-4096-a861-b9247777e01c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:26 compute-0 ovn_controller[152859]: 2025-11-25T08:32:26Z|00421|binding|INFO|Releasing lport c0d74b17-7eba-4096-a861-b9247777e01c from this chassis (sb_readonly=0)
Nov 25 08:32:26 compute-0 nova_compute[253538]: 2025-11-25 08:32:26.001 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:26.036 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:32:26 compute-0 nova_compute[253538]: 2025-11-25 08:32:26.035 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:26.037 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[481bfa73-5e55-4d29-b080-f7e37414f165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:26.038 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:32:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:26.042 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'env', 'PROCESS_TAG=haproxy-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a66e51b8-ecb0-4289-a1b5-d5e379727721.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:32:26 compute-0 ceph-mon[75015]: pgmap v1472: 321 pgs: 321 active+clean; 181 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.1 MiB/s wr, 282 op/s
Nov 25 08:32:26 compute-0 podman[309065]: 2025-11-25 08:32:26.431254069 +0000 UTC m=+0.057528596 container create 071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:32:26 compute-0 systemd[1]: Started libpod-conmon-071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d.scope.
Nov 25 08:32:26 compute-0 podman[309065]: 2025-11-25 08:32:26.401463979 +0000 UTC m=+0.027738536 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:32:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c7019eddc15bde3ab32eaf7c4b3a0396a948ded755fecc5de1ea0cb4bb9f24/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:26 compute-0 podman[309065]: 2025-11-25 08:32:26.520361155 +0000 UTC m=+0.146635662 container init 071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 08:32:26 compute-0 podman[309065]: 2025-11-25 08:32:26.526361706 +0000 UTC m=+0.152636213 container start 071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 08:32:26 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[309080]: [NOTICE]   (309084) : New worker (309086) forked
Nov 25 08:32:26 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[309080]: [NOTICE]   (309084) : Loading success.
Nov 25 08:32:26 compute-0 nova_compute[253538]: 2025-11-25 08:32:26.597 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059531.5846958, 450d4b82-4475-4cfc-b868-dc3b0fc37af5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:26 compute-0 nova_compute[253538]: 2025-11-25 08:32:26.597 253542 INFO nova.compute.manager [-] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] VM Stopped (Lifecycle Event)
Nov 25 08:32:26 compute-0 nova_compute[253538]: 2025-11-25 08:32:26.613 253542 DEBUG nova.compute.manager [None req-9c864453-b358-470d-931c-ea3f52d5915d - - - - - -] [instance: 450d4b82-4475-4cfc-b868-dc3b0fc37af5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 190 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.8 MiB/s wr, 252 op/s
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.248 253542 DEBUG nova.compute.manager [req-440fc139-f679-4422-a669-03af311bb59b req-31745e3a-c325-420e-91fa-4be7047a6a1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.248 253542 DEBUG oslo_concurrency.lockutils [req-440fc139-f679-4422-a669-03af311bb59b req-31745e3a-c325-420e-91fa-4be7047a6a1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.249 253542 DEBUG oslo_concurrency.lockutils [req-440fc139-f679-4422-a669-03af311bb59b req-31745e3a-c325-420e-91fa-4be7047a6a1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.249 253542 DEBUG oslo_concurrency.lockutils [req-440fc139-f679-4422-a669-03af311bb59b req-31745e3a-c325-420e-91fa-4be7047a6a1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.249 253542 DEBUG nova.compute.manager [req-440fc139-f679-4422-a669-03af311bb59b req-31745e3a-c325-420e-91fa-4be7047a6a1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] No waiting events found dispatching network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.249 253542 WARNING nova.compute.manager [req-440fc139-f679-4422-a669-03af311bb59b req-31745e3a-c325-420e-91fa-4be7047a6a1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received unexpected event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 for instance with vm_state active and task_state None.
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.330 253542 DEBUG nova.compute.manager [req-c8922be0-9073-4901-aa4f-d97d8222df97 req-d8e82eec-4fc1-49cb-b4d8-5c6d358f7f13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received event network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.331 253542 DEBUG oslo_concurrency.lockutils [req-c8922be0-9073-4901-aa4f-d97d8222df97 req-d8e82eec-4fc1-49cb-b4d8-5c6d358f7f13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.331 253542 DEBUG oslo_concurrency.lockutils [req-c8922be0-9073-4901-aa4f-d97d8222df97 req-d8e82eec-4fc1-49cb-b4d8-5c6d358f7f13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.331 253542 DEBUG oslo_concurrency.lockutils [req-c8922be0-9073-4901-aa4f-d97d8222df97 req-d8e82eec-4fc1-49cb-b4d8-5c6d358f7f13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.332 253542 DEBUG nova.compute.manager [req-c8922be0-9073-4901-aa4f-d97d8222df97 req-d8e82eec-4fc1-49cb-b4d8-5c6d358f7f13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Processing event network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.332 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.347 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059547.33592, 0201b222-1aa1-4d57-901c-e3c79170b567 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.360 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] VM Resumed (Lifecycle Event)
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.362 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.365 253542 INFO nova.virt.libvirt.driver [-] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Instance spawned successfully.
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.365 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.399 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.406 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.407 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.407 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.407 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.408 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.408 253542 DEBUG nova.virt.libvirt.driver [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.410 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.444 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.475 253542 INFO nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Took 12.09 seconds to spawn the instance on the hypervisor.
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.476 253542 DEBUG nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.531 253542 INFO nova.compute.manager [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Took 13.55 seconds to build instance.
Nov 25 08:32:27 compute-0 nova_compute[253538]: 2025-11-25 08:32:27.547 253542 DEBUG oslo_concurrency.lockutils [None req-952fbd32-a4ae-4ddf-92ed-a7c95d18e3d1 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:27 compute-0 ovn_controller[152859]: 2025-11-25T08:32:27Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:07:cd:40 10.100.0.13
Nov 25 08:32:27 compute-0 ovn_controller[152859]: 2025-11-25T08:32:27Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:cd:40 10.100.0.13
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.123 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.125 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.127 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:32:28 compute-0 ceph-mon[75015]: pgmap v1473: 321 pgs: 321 active+clean; 190 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.8 MiB/s wr, 252 op/s
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.365 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.447 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.449 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.451 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.451 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.451 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.452 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.452 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.453 253542 INFO nova.compute.manager [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Terminating instance
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.454 253542 DEBUG nova.compute.manager [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.469 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:32:28 compute-0 kernel: tap3c3c9c20-84 (unregistering): left promiscuous mode
Nov 25 08:32:28 compute-0 NetworkManager[48915]: <info>  [1764059548.4991] device (tap3c3c9c20-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:32:28 compute-0 ovn_controller[152859]: 2025-11-25T08:32:28Z|00422|binding|INFO|Releasing lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 from this chassis (sb_readonly=0)
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.506 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:28 compute-0 ovn_controller[152859]: 2025-11-25T08:32:28Z|00423|binding|INFO|Setting lport 3c3c9c20-8436-4b41-9184-2061010ba6e2 down in Southbound
Nov 25 08:32:28 compute-0 ovn_controller[152859]: 2025-11-25T08:32:28Z|00424|binding|INFO|Removing iface tap3c3c9c20-84 ovn-installed in OVS
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.508 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.522 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.523 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:6e:6c 10.100.0.4'], port_security=['fa:16:3e:c4:6e:6c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7aefbad8-edb5-417c-a34d-e9e3c2dd0c03', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3c3c9c20-8436-4b41-9184-2061010ba6e2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.525 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3c3c9c20-8436-4b41-9184-2061010ba6e2 in datapath eb25945d-6002-4a99-b682-034a8a3dc901 unbound from our chassis
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.528 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb25945d-6002-4a99-b682-034a8a3dc901, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.529 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[683fb4d7-0864-404d-9627-329b946c60c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.530 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace which is not needed anymore
Nov 25 08:32:28 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000031.scope: Deactivated successfully.
Nov 25 08:32:28 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000031.scope: Consumed 6.983s CPU time.
Nov 25 08:32:28 compute-0 systemd-machined[215790]: Machine qemu-56-instance-00000031 terminated.
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.598 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.598 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.608 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.608 253542 INFO nova.compute.claims [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.679 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "0201b222-1aa1-4d57-901c-e3c79170b567" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.680 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:28 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[308762]: [NOTICE]   (308766) : haproxy version is 2.8.14-c23fe91
Nov 25 08:32:28 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[308762]: [NOTICE]   (308766) : path to executable is /usr/sbin/haproxy
Nov 25 08:32:28 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[308762]: [WARNING]  (308766) : Exiting Master process...
Nov 25 08:32:28 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[308762]: [ALERT]    (308766) : Current worker (308768) exited with code 143 (Terminated)
Nov 25 08:32:28 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[308762]: [WARNING]  (308766) : All workers exited. Exiting... (0)
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.680 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.686 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.686 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:28 compute-0 systemd[1]: libpod-a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd.scope: Deactivated successfully.
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.687 253542 INFO nova.compute.manager [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Terminating instance
Nov 25 08:32:28 compute-0 conmon[308762]: conmon a6adb3fa2f4352cc0f42 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd.scope/container/memory.events
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.688 253542 DEBUG nova.compute.manager [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.692 253542 INFO nova.virt.libvirt.driver [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Instance destroyed successfully.
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.693 253542 DEBUG nova.objects.instance [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'resources' on Instance uuid 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:28 compute-0 podman[309115]: 2025-11-25 08:32:28.695260917 +0000 UTC m=+0.060152158 container died a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.706 253542 DEBUG nova.virt.libvirt.vif [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:31:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1569463086',display_name='tempest-ServerDiskConfigTestJSON-server-1569463086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1569463086',id=49,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-pn5emsvi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:23Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=7aefbad8-edb5-417c-a34d-e9e3c2dd0c03,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.706 253542 DEBUG nova.network.os_vif_util [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "address": "fa:16:3e:c4:6e:6c", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c3c9c20-84", "ovs_interfaceid": "3c3c9c20-8436-4b41-9184-2061010ba6e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.707 253542 DEBUG nova.network.os_vif_util [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.707 253542 DEBUG os_vif [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.709 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c3c9c20-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.762 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.765 253542 INFO os_vif [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:6e:6c,bridge_name='br-int',has_traffic_filtering=True,id=3c3c9c20-8436-4b41-9184-2061010ba6e2,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c3c9c20-84')
Nov 25 08:32:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd-userdata-shm.mount: Deactivated successfully.
Nov 25 08:32:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-91e58279ce3b57ee04aae33b135da848ead0b67f41b7f32ad9c03b3604e4992d-merged.mount: Deactivated successfully.
Nov 25 08:32:28 compute-0 podman[309115]: 2025-11-25 08:32:28.780109068 +0000 UTC m=+0.145000319 container cleanup a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:32:28 compute-0 kernel: tapd02d0c40-ff (unregistering): left promiscuous mode
Nov 25 08:32:28 compute-0 NetworkManager[48915]: <info>  [1764059548.7859] device (tapd02d0c40-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:32:28 compute-0 systemd[1]: libpod-conmon-a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd.scope: Deactivated successfully.
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:28 compute-0 ovn_controller[152859]: 2025-11-25T08:32:28Z|00425|binding|INFO|Releasing lport d02d0c40-ff59-4db1-8105-d39f0c8b67c5 from this chassis (sb_readonly=0)
Nov 25 08:32:28 compute-0 ovn_controller[152859]: 2025-11-25T08:32:28Z|00426|binding|INFO|Setting lport d02d0c40-ff59-4db1-8105-d39f0c8b67c5 down in Southbound
Nov 25 08:32:28 compute-0 ovn_controller[152859]: 2025-11-25T08:32:28Z|00427|binding|INFO|Removing iface tapd02d0c40-ff ovn-installed in OVS
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.803 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.803 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:88:0d 10.100.0.7'], port_security=['fa:16:3e:5c:88:0d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '0201b222-1aa1-4d57-901c-e3c79170b567', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d02d0c40-ff59-4db1-8105-d39f0c8b67c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.820 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:28 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Deactivated successfully.
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.827 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:28 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Consumed 1.777s CPU time.
Nov 25 08:32:28 compute-0 systemd-machined[215790]: Machine qemu-57-instance-00000033 terminated.
Nov 25 08:32:28 compute-0 podman[309170]: 2025-11-25 08:32:28.859771548 +0000 UTC m=+0.050542819 container remove a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.865 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[44662b88-01ae-479e-a7ed-d564ccf17388]: (4, ('Tue Nov 25 08:32:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd)\na6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd\nTue Nov 25 08:32:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (a6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd)\na6adb3fa2f4352cc0f422dc92678d1d649ab944164ad77928c29dc7fbe180acd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.867 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8d037f-5c18-4b7f-8217-191c6c847b3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.867 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.869 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:28 compute-0 kernel: tapeb25945d-60: left promiscuous mode
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.887 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.889 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[372bae15-f129-4c15-872b-618f43271ded]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.905 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[33ba23cd-c223-4e43-906a-81dc1ae63fcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.905 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e68d80f2-68c2-4c3e-a9ee-bef78cc74623]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.919 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3695bc8e-046a-4209-86f0-51f57927f3b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489613, 'reachable_time': 25022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309196, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:28 compute-0 systemd[1]: run-netns-ovnmeta\x2deb25945d\x2d6002\x2d4a99\x2db682\x2d034a8a3dc901.mount: Deactivated successfully.
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.924 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.924 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab94bbf-589d-4501-9663-bd25775fe600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.924 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d02d0c40-ff59-4db1-8105-d39f0c8b67c5 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 unbound from our chassis
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.926 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a66e51b8-ecb0-4289-a1b5-d5e379727721, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.926 253542 INFO nova.virt.libvirt.driver [-] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Instance destroyed successfully.
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.927 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[db184aff-2160-490c-9f4a-8bbe35113329]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:28.927 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace which is not needed anymore
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.928 253542 DEBUG nova.objects.instance [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'resources' on Instance uuid 0201b222-1aa1-4d57-901c-e3c79170b567 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.942 253542 DEBUG nova.virt.libvirt.vif [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-93037066',display_name='tempest-DeleteServersTestJSON-server-93037066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-93037066',id=51,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-6q1swzq1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:27Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=0201b222-1aa1-4d57-901c-e3c79170b567,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.942 253542 DEBUG nova.network.os_vif_util [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "address": "fa:16:3e:5c:88:0d", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02d0c40-ff", "ovs_interfaceid": "d02d0c40-ff59-4db1-8105-d39f0c8b67c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.944 253542 DEBUG nova.network.os_vif_util [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:88:0d,bridge_name='br-int',has_traffic_filtering=True,id=d02d0c40-ff59-4db1-8105-d39f0c8b67c5,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02d0c40-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.944 253542 DEBUG os_vif [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:88:0d,bridge_name='br-int',has_traffic_filtering=True,id=d02d0c40-ff59-4db1-8105-d39f0c8b67c5,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02d0c40-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.947 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.948 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd02d0c40-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.950 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.953 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:32:28 compute-0 nova_compute[253538]: 2025-11-25 08:32:28.956 253542 INFO os_vif [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:88:0d,bridge_name='br-int',has_traffic_filtering=True,id=d02d0c40-ff59-4db1-8105-d39f0c8b67c5,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02d0c40-ff')
Nov 25 08:32:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:32:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/717178773' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:32:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:32:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/717178773' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:32:29 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[309080]: [NOTICE]   (309084) : haproxy version is 2.8.14-c23fe91
Nov 25 08:32:29 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[309080]: [NOTICE]   (309084) : path to executable is /usr/sbin/haproxy
Nov 25 08:32:29 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[309080]: [WARNING]  (309084) : Exiting Master process...
Nov 25 08:32:29 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[309080]: [WARNING]  (309084) : Exiting Master process...
Nov 25 08:32:29 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[309080]: [ALERT]    (309084) : Current worker (309086) exited with code 143 (Terminated)
Nov 25 08:32:29 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[309080]: [WARNING]  (309084) : All workers exited. Exiting... (0)
Nov 25 08:32:29 compute-0 systemd[1]: libpod-071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d.scope: Deactivated successfully.
Nov 25 08:32:29 compute-0 podman[309257]: 2025-11-25 08:32:29.086203664 +0000 UTC m=+0.047197510 container died 071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:32:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d-userdata-shm.mount: Deactivated successfully.
Nov 25 08:32:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6c7019eddc15bde3ab32eaf7c4b3a0396a948ded755fecc5de1ea0cb4bb9f24-merged.mount: Deactivated successfully.
Nov 25 08:32:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 199 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.0 MiB/s wr, 224 op/s
Nov 25 08:32:29 compute-0 podman[309257]: 2025-11-25 08:32:29.213460394 +0000 UTC m=+0.174454220 container cleanup 071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 08:32:29 compute-0 systemd[1]: libpod-conmon-071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d.scope: Deactivated successfully.
Nov 25 08:32:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/717178773' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:32:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/717178773' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:32:29 compute-0 podman[309287]: 2025-11-25 08:32:29.291128131 +0000 UTC m=+0.055737579 container remove 071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:32:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.296 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f55be7-29c6-4d2b-afc9-25da29d01944]: (4, ('Tue Nov 25 08:32:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d)\n071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d\nTue Nov 25 08:32:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d)\n071a11fdff16014d72bfaf3316156ef333154916592d1d18547abad2ab3c7a9d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.298 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e92631fc-d7fe-41b3-b71c-9b421898ff60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.299 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.300 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:29 compute-0 kernel: tapa66e51b8-e0: left promiscuous mode
Nov 25 08:32:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:32:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982273807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.310 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[06ff67f3-1b8f-44bc-8a2a-6916e4782957]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.319 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.325 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4d391a-4f5f-4234-94bc-85c9dcd35e01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.326 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[60f00b6f-8826-4382-aa40-3841e407d1d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.327 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.333 253542 DEBUG nova.compute.provider_tree [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:32:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.342 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fce89d1d-1170-468e-8d26-4e740814cf1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490033, 'reachable_time': 33711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309304, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.344 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:32:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:29.344 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e337028d-fdcb-44b9-995c-1e68f9e3928c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.348 253542 DEBUG nova.scheduler.client.report [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.356 253542 INFO nova.virt.libvirt.driver [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Deleting instance files /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_del
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.356 253542 INFO nova.virt.libvirt.driver [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Deletion of /var/lib/nova/instances/7aefbad8-edb5-417c-a34d-e9e3c2dd0c03_del complete
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.378 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.379 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.442 253542 INFO nova.compute.manager [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Took 0.99 seconds to destroy the instance on the hypervisor.
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.443 253542 DEBUG oslo.service.loopingcall [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.443 253542 DEBUG nova.compute.manager [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.443 253542 DEBUG nova.network.neutron [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.457 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.458 253542 DEBUG nova.network.neutron [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.498 253542 INFO nova.virt.libvirt.driver [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Deleting instance files /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567_del
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.499 253542 INFO nova.virt.libvirt.driver [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Deletion of /var/lib/nova/instances/0201b222-1aa1-4d57-901c-e3c79170b567_del complete
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.520 253542 INFO nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.562 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.568 253542 INFO nova.compute.manager [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Took 0.88 seconds to destroy the instance on the hypervisor.
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.568 253542 DEBUG oslo.service.loopingcall [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.569 253542 DEBUG nova.compute.manager [-] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.569 253542 DEBUG nova.network.neutron [-] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.587 253542 DEBUG nova.compute.manager [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-unplugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.587 253542 DEBUG oslo_concurrency.lockutils [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.588 253542 DEBUG oslo_concurrency.lockutils [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.588 253542 DEBUG oslo_concurrency.lockutils [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.588 253542 DEBUG nova.compute.manager [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] No waiting events found dispatching network-vif-unplugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.589 253542 DEBUG nova.compute.manager [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-unplugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.589 253542 DEBUG nova.compute.manager [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.590 253542 DEBUG oslo_concurrency.lockutils [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.590 253542 DEBUG oslo_concurrency.lockutils [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.590 253542 DEBUG oslo_concurrency.lockutils [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.591 253542 DEBUG nova.compute.manager [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] No waiting events found dispatching network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.591 253542 WARNING nova.compute.manager [req-9a53ffd2-3526-4f22-8ab6-c87ce1c45757 req-eac66823-3872-4b16-bb4a-a4455dd5b52c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received unexpected event network-vif-plugged-3c3c9c20-8436-4b41-9184-2061010ba6e2 for instance with vm_state active and task_state deleting.
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.660 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.661 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.662 253542 INFO nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Creating image(s)
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.688 253542 DEBUG nova.storage.rbd_utils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 8191f951-44bc-4371-957a-f2e7d37c1a32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.715 253542 DEBUG nova.storage.rbd_utils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 8191f951-44bc-4371-957a-f2e7d37c1a32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.743 253542 DEBUG nova.storage.rbd_utils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 8191f951-44bc-4371-957a-f2e7d37c1a32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.746 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:29 compute-0 systemd[1]: run-netns-ovnmeta\x2da66e51b8\x2decb0\x2d4289\x2da1b5\x2dd5e379727721.mount: Deactivated successfully.
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.790 253542 DEBUG nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received event network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.790 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.791 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.791 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.791 253542 DEBUG nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] No waiting events found dispatching network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.791 253542 WARNING nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received unexpected event network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 for instance with vm_state active and task_state deleting.
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.792 253542 DEBUG nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received event network-vif-unplugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.792 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.792 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.792 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.793 253542 DEBUG nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] No waiting events found dispatching network-vif-unplugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.793 253542 DEBUG nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received event network-vif-unplugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.793 253542 DEBUG nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received event network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.793 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.794 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.794 253542 DEBUG oslo_concurrency.lockutils [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.794 253542 DEBUG nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] No waiting events found dispatching network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.794 253542 WARNING nova.compute.manager [req-d73e4afc-40cb-4493-8ac4-d149e0a0f188 req-399ad564-b9ca-4a85-a820-1656d6e74b6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received unexpected event network-vif-plugged-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 for instance with vm_state active and task_state deleting.
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.842 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.843 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.845 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.845 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.881 253542 DEBUG nova.storage.rbd_utils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 8191f951-44bc-4371-957a-f2e7d37c1a32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:29 compute-0 nova_compute[253538]: 2025-11-25 08:32:29.886 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8191f951-44bc-4371-957a-f2e7d37c1a32_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:30.130 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:30 compute-0 nova_compute[253538]: 2025-11-25 08:32:30.175 253542 DEBUG nova.policy [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:32:30 compute-0 nova_compute[253538]: 2025-11-25 08:32:30.247 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8191f951-44bc-4371-957a-f2e7d37c1a32_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:30 compute-0 ceph-mon[75015]: pgmap v1474: 321 pgs: 321 active+clean; 199 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.0 MiB/s wr, 224 op/s
Nov 25 08:32:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3982273807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:30 compute-0 nova_compute[253538]: 2025-11-25 08:32:30.327 253542 DEBUG nova.storage.rbd_utils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] resizing rbd image 8191f951-44bc-4371-957a-f2e7d37c1a32_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:32:30 compute-0 nova_compute[253538]: 2025-11-25 08:32:30.419 253542 DEBUG nova.objects.instance [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'migration_context' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:30 compute-0 nova_compute[253538]: 2025-11-25 08:32:30.431 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:32:30 compute-0 nova_compute[253538]: 2025-11-25 08:32:30.432 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Ensure instance console log exists: /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:32:30 compute-0 nova_compute[253538]: 2025-11-25 08:32:30.432 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:30 compute-0 nova_compute[253538]: 2025-11-25 08:32:30.433 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:30 compute-0 nova_compute[253538]: 2025-11-25 08:32:30.433 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:30 compute-0 nova_compute[253538]: 2025-11-25 08:32:30.484 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:32:30 compute-0 nova_compute[253538]: 2025-11-25 08:32:30.861 253542 DEBUG nova.network.neutron [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:30 compute-0 nova_compute[253538]: 2025-11-25 08:32:30.882 253542 INFO nova.compute.manager [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Took 1.44 seconds to deallocate network for instance.
Nov 25 08:32:30 compute-0 nova_compute[253538]: 2025-11-25 08:32:30.929 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:30 compute-0 nova_compute[253538]: 2025-11-25 08:32:30.930 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:30 compute-0 nova_compute[253538]: 2025-11-25 08:32:30.990 253542 DEBUG nova.network.neutron [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Successfully created port: d8bd16e1-3695-474d-be04-7fdf44bee803 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.019 253542 DEBUG nova.network.neutron [-] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.040 253542 INFO nova.compute.manager [-] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Took 1.47 seconds to deallocate network for instance.
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.046 253542 DEBUG oslo_concurrency.processutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.127 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 321 active+clean; 164 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.5 MiB/s wr, 254 op/s
Nov 25 08:32:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:32:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/191338927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.464 253542 DEBUG oslo_concurrency.processutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.470 253542 DEBUG nova.compute.provider_tree [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.487 253542 DEBUG nova.scheduler.client.report [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.525 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.528 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.551 253542 INFO nova.scheduler.client.report [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Deleted allocations for instance 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.616 253542 DEBUG oslo_concurrency.processutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.653 253542 DEBUG oslo_concurrency.lockutils [None req-c7431d93-c045-4619-a3d4-99f3f3e442c8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "7aefbad8-edb5-417c-a34d-e9e3c2dd0c03" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.668 253542 DEBUG nova.compute.manager [req-984860ab-0f99-486f-9723-593d5c2b1902 req-db01d342-8991-4fb5-b89a-e6138cf066ba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Received event network-vif-deleted-d02d0c40-ff59-4db1-8105-d39f0c8b67c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.790 253542 DEBUG nova.compute.manager [req-5492f7e0-491a-419c-8e56-94e9731d534e req-8cf02a02-97bd-4978-b33f-b61639dd8dd9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Received event network-vif-deleted-3c3c9c20-8436-4b41-9184-2061010ba6e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.807 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.808 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.819 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.863 253542 DEBUG nova.network.neutron [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Successfully updated port: d8bd16e1-3695-474d-be04-7fdf44bee803 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.873 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.874 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.874 253542 DEBUG nova.network.neutron [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.876 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:31 compute-0 nova_compute[253538]: 2025-11-25 08:32:31.982 253542 DEBUG nova.network.neutron [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:32:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:32:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2086060253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.039 253542 DEBUG oslo_concurrency.processutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.046 253542 DEBUG nova.compute.provider_tree [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.063 253542 DEBUG nova.scheduler.client.report [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.083 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.086 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.094 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.095 253542 INFO nova.compute.claims [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.110 253542 INFO nova.scheduler.client.report [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Deleted allocations for instance 0201b222-1aa1-4d57-901c-e3c79170b567
Nov 25 08:32:32 compute-0 ceph-mon[75015]: pgmap v1475: 321 pgs: 321 active+clean; 164 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.5 MiB/s wr, 254 op/s
Nov 25 08:32:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/191338927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2086060253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.300 253542 DEBUG oslo_concurrency.lockutils [None req-f13ec5d5-0a9b-4d5a-9c2a-7dfd143c2b5e a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "0201b222-1aa1-4d57-901c-e3c79170b567" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.351 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.670 253542 DEBUG nova.network.neutron [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.683 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.683 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Instance network_info: |[{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.685 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Start _get_guest_xml network_info=[{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.690 253542 WARNING nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.694 253542 DEBUG nova.virt.libvirt.host [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.695 253542 DEBUG nova.virt.libvirt.host [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.698 253542 DEBUG nova.virt.libvirt.host [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.698 253542 DEBUG nova.virt.libvirt.host [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.698 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.699 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.699 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.699 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.699 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.699 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.700 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.700 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.700 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.700 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.700 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.700 253542 DEBUG nova.virt.hardware [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.703 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:32:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3115266596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.789 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.796 253542 DEBUG nova.compute.provider_tree [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.809 253542 DEBUG nova.scheduler.client.report [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.830 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.831 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.871 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.871 253542 DEBUG nova.network.neutron [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.892 253542 INFO nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:32:32 compute-0 nova_compute[253538]: 2025-11-25 08:32:32.912 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.006 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.007 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.008 253542 INFO nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Creating image(s)
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.029 253542 DEBUG nova.storage.rbd_utils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.052 253542 DEBUG nova.storage.rbd_utils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.075 253542 DEBUG nova.storage.rbd_utils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.078 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.111 253542 DEBUG nova.policy [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7ad88cb0e4cf4d0b8e4cbec835318015', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.151 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.151 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.152 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.152 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:32:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3316736445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.174 253542 DEBUG nova.storage.rbd_utils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.176 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 146 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.0 MiB/s wr, 260 op/s
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.203 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.222 253542 DEBUG nova.storage.rbd_utils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.227 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3115266596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3316736445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.437 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.518 253542 DEBUG nova.storage.rbd_utils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] resizing rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.614 253542 DEBUG nova.objects.instance [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'migration_context' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.626 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.626 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Ensure instance console log exists: /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.626 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.627 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.627 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.672 253542 DEBUG nova.network.neutron [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Successfully created port: 79f4b8f5-d582-44c5-b8e0-a82ad73193de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:32:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:32:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1440657913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.752 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.754 253542 DEBUG nova.virt.libvirt.vif [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.755 253542 DEBUG nova.network.os_vif_util [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.756 253542 DEBUG nova.network.os_vif_util [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:58:19,bridge_name='br-int',has_traffic_filtering=True,id=d8bd16e1-3695-474d-be04-7fdf44bee803,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8bd16e1-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.758 253542 DEBUG nova.objects.instance [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.773 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:32:33 compute-0 nova_compute[253538]:   <uuid>8191f951-44bc-4371-957a-f2e7d37c1a32</uuid>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   <name>instance-00000034</name>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:32:32</nova:creationTime>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:32:33 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:32:33 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:32:33 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:32:33 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:32:33 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:32:33 compute-0 nova_compute[253538]:         <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:32:33 compute-0 nova_compute[253538]:         <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:32:33 compute-0 nova_compute[253538]:         <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 08:32:33 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <system>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <entry name="serial">8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <entry name="uuid">8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     </system>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   <os>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   </os>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   <features>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   </features>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk">
Nov 25 08:32:33 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:32:33 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config">
Nov 25 08:32:33 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:32:33 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:1a:58:19"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <target dev="tapd8bd16e1-36"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log" append="off"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <video>
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     </video>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:32:33 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:32:33 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:32:33 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:32:33 compute-0 nova_compute[253538]: </domain>
Nov 25 08:32:33 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.773 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Preparing to wait for external event network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.774 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.774 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.775 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.776 253542 DEBUG nova.virt.libvirt.vif [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.776 253542 DEBUG nova.network.os_vif_util [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.777 253542 DEBUG nova.network.os_vif_util [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:58:19,bridge_name='br-int',has_traffic_filtering=True,id=d8bd16e1-3695-474d-be04-7fdf44bee803,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8bd16e1-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.778 253542 DEBUG os_vif [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:58:19,bridge_name='br-int',has_traffic_filtering=True,id=d8bd16e1-3695-474d-be04-7fdf44bee803,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8bd16e1-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.779 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.780 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.780 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.784 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.785 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8bd16e1-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.786 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8bd16e1-36, col_values=(('external_ids', {'iface-id': 'd8bd16e1-3695-474d-be04-7fdf44bee803', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:58:19', 'vm-uuid': '8191f951-44bc-4371-957a-f2e7d37c1a32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:33 compute-0 NetworkManager[48915]: <info>  [1764059553.7897] manager: (tapd8bd16e1-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.792 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.796 253542 INFO os_vif [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:58:19,bridge_name='br-int',has_traffic_filtering=True,id=d8bd16e1-3695-474d-be04-7fdf44bee803,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8bd16e1-36')
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.850 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.851 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.851 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:1a:58:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.852 253542 INFO nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Using config drive
Nov 25 08:32:33 compute-0 nova_compute[253538]: 2025-11-25 08:32:33.873 253542 DEBUG nova.storage.rbd_utils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:34 compute-0 ceph-mon[75015]: pgmap v1476: 321 pgs: 321 active+clean; 146 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.0 MiB/s wr, 260 op/s
Nov 25 08:32:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1440657913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:34 compute-0 nova_compute[253538]: 2025-11-25 08:32:34.312 253542 INFO nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Creating config drive at /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/disk.config
Nov 25 08:32:34 compute-0 nova_compute[253538]: 2025-11-25 08:32:34.320 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpod093_q4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:34 compute-0 nova_compute[253538]: 2025-11-25 08:32:34.466 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpod093_q4" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:34 compute-0 nova_compute[253538]: 2025-11-25 08:32:34.500 253542 DEBUG nova.storage.rbd_utils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:34 compute-0 nova_compute[253538]: 2025-11-25 08:32:34.506 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/disk.config 8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:34 compute-0 nova_compute[253538]: 2025-11-25 08:32:34.550 253542 DEBUG nova.compute.manager [req-1157b120-9cba-4533-b92b-f1a40dc184e9 req-de4c1131-2bb9-4a6b-8193-6b7c66ee7b0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-changed-d8bd16e1-3695-474d-be04-7fdf44bee803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:34 compute-0 nova_compute[253538]: 2025-11-25 08:32:34.550 253542 DEBUG nova.compute.manager [req-1157b120-9cba-4533-b92b-f1a40dc184e9 req-de4c1131-2bb9-4a6b-8193-6b7c66ee7b0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing instance network info cache due to event network-changed-d8bd16e1-3695-474d-be04-7fdf44bee803. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:32:34 compute-0 nova_compute[253538]: 2025-11-25 08:32:34.551 253542 DEBUG oslo_concurrency.lockutils [req-1157b120-9cba-4533-b92b-f1a40dc184e9 req-de4c1131-2bb9-4a6b-8193-6b7c66ee7b0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:34 compute-0 nova_compute[253538]: 2025-11-25 08:32:34.551 253542 DEBUG oslo_concurrency.lockutils [req-1157b120-9cba-4533-b92b-f1a40dc184e9 req-de4c1131-2bb9-4a6b-8193-6b7c66ee7b0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:34 compute-0 nova_compute[253538]: 2025-11-25 08:32:34.551 253542 DEBUG nova.network.neutron [req-1157b120-9cba-4533-b92b-f1a40dc184e9 req-de4c1131-2bb9-4a6b-8193-6b7c66ee7b0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing network info cache for port d8bd16e1-3695-474d-be04-7fdf44bee803 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:32:34 compute-0 nova_compute[253538]: 2025-11-25 08:32:34.911 253542 DEBUG nova.network.neutron [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Successfully updated port: 79f4b8f5-d582-44c5-b8e0-a82ad73193de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:32:34 compute-0 nova_compute[253538]: 2025-11-25 08:32:34.924 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "refresh_cache-c0942fc7-74d4-4fc8-9574-4fea9179e71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:34 compute-0 nova_compute[253538]: 2025-11-25 08:32:34.925 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquired lock "refresh_cache-c0942fc7-74d4-4fc8-9574-4fea9179e71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:34 compute-0 nova_compute[253538]: 2025-11-25 08:32:34.925 253542 DEBUG nova.network.neutron [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.039 253542 DEBUG oslo_concurrency.processutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/disk.config 8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.040 253542 INFO nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Deleting local config drive /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/disk.config because it was imported into RBD.
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.065 253542 DEBUG nova.network.neutron [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:32:35 compute-0 kernel: tapd8bd16e1-36: entered promiscuous mode
Nov 25 08:32:35 compute-0 NetworkManager[48915]: <info>  [1764059555.1053] manager: (tapd8bd16e1-36): new Tun device (/org/freedesktop/NetworkManager/Devices/199)
Nov 25 08:32:35 compute-0 ovn_controller[152859]: 2025-11-25T08:32:35Z|00428|binding|INFO|Claiming lport d8bd16e1-3695-474d-be04-7fdf44bee803 for this chassis.
Nov 25 08:32:35 compute-0 ovn_controller[152859]: 2025-11-25T08:32:35Z|00429|binding|INFO|d8bd16e1-3695-474d-be04-7fdf44bee803: Claiming fa:16:3e:1a:58:19 10.100.0.11
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.110 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.116 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:58:19 10.100.0.11'], port_security=['fa:16:3e:1a:58:19 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8191f951-44bc-4371-957a-f2e7d37c1a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ab6e7e4a-351f-4b59-b94e-a7f51f236dd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d8bd16e1-3695-474d-be04-7fdf44bee803) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.118 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d8bd16e1-3695-474d-be04-7fdf44bee803 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe bound to our chassis
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.119 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:32:35 compute-0 systemd-udevd[309840]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:32:35 compute-0 ovn_controller[152859]: 2025-11-25T08:32:35Z|00430|binding|INFO|Setting lport d8bd16e1-3695-474d-be04-7fdf44bee803 ovn-installed in OVS
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.135 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b5600931-7e42-47f9-9986-e410dfa61388]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:35 compute-0 ovn_controller[152859]: 2025-11-25T08:32:35Z|00431|binding|INFO|Setting lport d8bd16e1-3695-474d-be04-7fdf44bee803 up in Southbound
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.137 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.136 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bf3cbfa-71 in ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.138 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bf3cbfa-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.139 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2f954a33-8d40-45b5-928a-2627855d853c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.140 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[413a598e-b4d0-4f4a-a162-01d0faa83691]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.143 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:35 compute-0 NetworkManager[48915]: <info>  [1764059555.1498] device (tapd8bd16e1-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:32:35 compute-0 NetworkManager[48915]: <info>  [1764059555.1509] device (tapd8bd16e1-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.151 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[7797f486-cf2a-4249-82e3-ece8558596ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:35 compute-0 systemd-machined[215790]: New machine qemu-58-instance-00000034.
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.165 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9686e6d5-2143-442c-9c2d-4f9203580ab3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:35 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-00000034.
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.191 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6654ea-077f-4523-ad15-5ee226489236]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 191 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.9 MiB/s wr, 285 op/s
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.197 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1324e03a-ad18-45af-8753-aded4a2b30d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:35 compute-0 NetworkManager[48915]: <info>  [1764059555.1999] manager: (tap9bf3cbfa-70): new Veth device (/org/freedesktop/NetworkManager/Devices/200)
Nov 25 08:32:35 compute-0 systemd-udevd[309844]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.231 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d998355b-c55b-47d4-9555-d210028fdf58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.235 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8560003f-340c-41f1-88b5-777a8aa67046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:35 compute-0 NetworkManager[48915]: <info>  [1764059555.2572] device (tap9bf3cbfa-70): carrier: link connected
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.263 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[84e09d2a-14ab-40a7-a08b-258e34bcae4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.278 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d3077cfd-d194-45e0-bc75-fde10de1a346]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490985, 'reachable_time': 15416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309873, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.294 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e3ac3d-2beb-4ac4-a144-a487af70190d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:8fc7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490985, 'tstamp': 490985}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309874, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.308 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa6b383-b02b-43d2-9ccb-204bff828868]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490985, 'reachable_time': 15416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309875, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.340 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab1c470-9375-4902-a013-691bfc1121c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.397 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9e5114-840c-4f16-9c54-0d7cb623e896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.398 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.398 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.399 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:35 compute-0 kernel: tap9bf3cbfa-70: entered promiscuous mode
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.400 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:35 compute-0 NetworkManager[48915]: <info>  [1764059555.4013] manager: (tap9bf3cbfa-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.403 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:35 compute-0 ovn_controller[152859]: 2025-11-25T08:32:35Z|00432|binding|INFO|Releasing lport 98660c0c-0936-4c4d-9a89-87b784d8d5cc from this chassis (sb_readonly=0)
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.405 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.405 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.406 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a5e691-10c5-43e5-8ad3-fbd33fb8ed21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.407 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.pid.haproxy
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:32:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:35.408 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'env', 'PROCESS_TAG=haproxy-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.425 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.468 253542 DEBUG nova.compute.manager [req-9b0a464b-30fa-49c6-8775-0829733b9766 req-84cc943e-ea59-4880-bda5-7680788e4257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.468 253542 DEBUG oslo_concurrency.lockutils [req-9b0a464b-30fa-49c6-8775-0829733b9766 req-84cc943e-ea59-4880-bda5-7680788e4257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.468 253542 DEBUG oslo_concurrency.lockutils [req-9b0a464b-30fa-49c6-8775-0829733b9766 req-84cc943e-ea59-4880-bda5-7680788e4257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.469 253542 DEBUG oslo_concurrency.lockutils [req-9b0a464b-30fa-49c6-8775-0829733b9766 req-84cc943e-ea59-4880-bda5-7680788e4257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.469 253542 DEBUG nova.compute.manager [req-9b0a464b-30fa-49c6-8775-0829733b9766 req-84cc943e-ea59-4880-bda5-7680788e4257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Processing event network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.486 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.681 253542 DEBUG nova.network.neutron [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Updating instance_info_cache with network_info: [{"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.704 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Releasing lock "refresh_cache-c0942fc7-74d4-4fc8-9574-4fea9179e71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.705 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance network_info: |[{"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.709 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Start _get_guest_xml network_info=[{"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.716 253542 WARNING nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.722 253542 DEBUG nova.virt.libvirt.host [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.723 253542 DEBUG nova.virt.libvirt.host [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.725 253542 DEBUG nova.virt.libvirt.host [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.726 253542 DEBUG nova.virt.libvirt.host [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.726 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.726 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.727 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.727 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.727 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.728 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.728 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.728 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.728 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.728 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.729 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.729 253542 DEBUG nova.virt.hardware [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.732 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:32:35 compute-0 podman[309905]: 2025-11-25 08:32:35.789925332 +0000 UTC m=+0.051739122 container create 9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 08:32:35 compute-0 systemd[1]: Started libpod-conmon-9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54.scope.
Nov 25 08:32:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.852 253542 DEBUG nova.network.neutron [req-1157b120-9cba-4533-b92b-f1a40dc184e9 req-de4c1131-2bb9-4a6b-8193-6b7c66ee7b0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updated VIF entry in instance network info cache for port d8bd16e1-3695-474d-be04-7fdf44bee803. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.854 253542 DEBUG nova.network.neutron [req-1157b120-9cba-4533-b92b-f1a40dc184e9 req-de4c1131-2bb9-4a6b-8193-6b7c66ee7b0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:35 compute-0 podman[309905]: 2025-11-25 08:32:35.760653625 +0000 UTC m=+0.022467415 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc5cfaa8d3aa0802256d8920c39f377bf45d88d3a7eb5b3dfababcae814e2ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:35 compute-0 podman[309905]: 2025-11-25 08:32:35.868901474 +0000 UTC m=+0.130715284 container init 9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.870 253542 DEBUG oslo_concurrency.lockutils [req-1157b120-9cba-4533-b92b-f1a40dc184e9 req-de4c1131-2bb9-4a6b-8193-6b7c66ee7b0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:35 compute-0 podman[309905]: 2025-11-25 08:32:35.873908469 +0000 UTC m=+0.135722259 container start 9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 08:32:35 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[309957]: [NOTICE]   (309993) : New worker (309999) forked
Nov 25 08:32:35 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[309957]: [NOTICE]   (309993) : Loading success.
Nov 25 08:32:35 compute-0 podman[309959]: 2025-11-25 08:32:35.915725873 +0000 UTC m=+0.071771400 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.934 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059555.933833, 8191f951-44bc-4371-957a-f2e7d37c1a32 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.934 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] VM Started (Lifecycle Event)
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.937 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.940 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.943 253542 INFO nova.virt.libvirt.driver [-] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Instance spawned successfully.
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.943 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.954 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.959 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.971 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.972 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.972 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.973 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.973 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.973 253542 DEBUG nova.virt.libvirt.driver [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.977 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.978 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059555.9340353, 8191f951-44bc-4371-957a-f2e7d37c1a32 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:35 compute-0 nova_compute[253538]: 2025-11-25 08:32:35.978 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] VM Paused (Lifecycle Event)
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.002 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.008 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059555.9397054, 8191f951-44bc-4371-957a-f2e7d37c1a32 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.009 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] VM Resumed (Lifecycle Event)
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.026 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.030 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.044 253542 INFO nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Took 6.38 seconds to spawn the instance on the hypervisor.
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.045 253542 DEBUG nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.052 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.111 253542 INFO nova.compute.manager [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Took 7.58 seconds to build instance.
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.129 253542 DEBUG oslo_concurrency.lockutils [None req-27efaf5d-93c8-4a0c-b1f2-436697dd9392 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:32:36 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1682330192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.218 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.244 253542 DEBUG nova.storage.rbd_utils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.248 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:36 compute-0 ceph-mon[75015]: pgmap v1477: 321 pgs: 321 active+clean; 191 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.9 MiB/s wr, 285 op/s
Nov 25 08:32:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1682330192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.349 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "8ee60656-c206-4a84-9774-e8f852386097" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.350 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "8ee60656-c206-4a84-9774-e8f852386097" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.365 253542 DEBUG nova.compute.manager [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.408 253542 DEBUG oslo_concurrency.lockutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.409 253542 DEBUG oslo_concurrency.lockutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.409 253542 INFO nova.compute.manager [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Rebooting instance
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.419 253542 DEBUG oslo_concurrency.lockutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.419 253542 DEBUG oslo_concurrency.lockutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.419 253542 DEBUG nova.network.neutron [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.436 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.437 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.443 253542 DEBUG nova.virt.hardware [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.443 253542 INFO nova.compute.claims [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.598 253542 DEBUG oslo_concurrency.processutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.650 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "8ee60656-c206-4a84-9774-e8f852386097" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.671 253542 DEBUG nova.compute.manager [req-b12e2af8-80ae-4b82-aa4c-c3db6c45a76a req-15a7a207-3d78-43a5-8a0d-b929e4bd52c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-changed-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.672 253542 DEBUG nova.compute.manager [req-b12e2af8-80ae-4b82-aa4c-c3db6c45a76a req-15a7a207-3d78-43a5-8a0d-b929e4bd52c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Refreshing instance network info cache due to event network-changed-79f4b8f5-d582-44c5-b8e0-a82ad73193de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.672 253542 DEBUG oslo_concurrency.lockutils [req-b12e2af8-80ae-4b82-aa4c-c3db6c45a76a req-15a7a207-3d78-43a5-8a0d-b929e4bd52c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-c0942fc7-74d4-4fc8-9574-4fea9179e71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.672 253542 DEBUG oslo_concurrency.lockutils [req-b12e2af8-80ae-4b82-aa4c-c3db6c45a76a req-15a7a207-3d78-43a5-8a0d-b929e4bd52c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-c0942fc7-74d4-4fc8-9574-4fea9179e71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.672 253542 DEBUG nova.network.neutron [req-b12e2af8-80ae-4b82-aa4c-c3db6c45a76a req-15a7a207-3d78-43a5-8a0d-b929e4bd52c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Refreshing network info cache for port 79f4b8f5-d582-44c5-b8e0-a82ad73193de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:32:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:32:36 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2443545900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.714 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.715 253542 DEBUG nova.virt.libvirt.vif [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1031241876',display_name='tempest-ServerDiskConfigTestJSON-server-1031241876',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1031241876',id=53,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-p1t9fwfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:32Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=c0942fc7-74d4-4fc8-9574-4fea9179e71b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.715 253542 DEBUG nova.network.os_vif_util [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.716 253542 DEBUG nova.network.os_vif_util [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.717 253542 DEBUG nova.objects.instance [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'pci_devices' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.729 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:32:36 compute-0 nova_compute[253538]:   <uuid>c0942fc7-74d4-4fc8-9574-4fea9179e71b</uuid>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   <name>instance-00000035</name>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1031241876</nova:name>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:32:35</nova:creationTime>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:32:36 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:32:36 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:32:36 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:32:36 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:32:36 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:32:36 compute-0 nova_compute[253538]:         <nova:user uuid="7ad88cb0e4cf4d0b8e4cbec835318015">tempest-ServerDiskConfigTestJSON-1655399928-project-member</nova:user>
Nov 25 08:32:36 compute-0 nova_compute[253538]:         <nova:project uuid="dc93aa65bef7473d961e0cad1e8f2962">tempest-ServerDiskConfigTestJSON-1655399928</nova:project>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:32:36 compute-0 nova_compute[253538]:         <nova:port uuid="79f4b8f5-d582-44c5-b8e0-a82ad73193de">
Nov 25 08:32:36 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <system>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <entry name="serial">c0942fc7-74d4-4fc8-9574-4fea9179e71b</entry>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <entry name="uuid">c0942fc7-74d4-4fc8-9574-4fea9179e71b</entry>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     </system>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   <os>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   </os>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   <features>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   </features>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk">
Nov 25 08:32:36 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:32:36 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config">
Nov 25 08:32:36 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:32:36 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:24:9d:e4"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <target dev="tap79f4b8f5-d5"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/console.log" append="off"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <video>
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     </video>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:32:36 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:32:36 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:32:36 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:32:36 compute-0 nova_compute[253538]: </domain>
Nov 25 08:32:36 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.730 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Preparing to wait for external event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.730 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.730 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.730 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.731 253542 DEBUG nova.virt.libvirt.vif [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1031241876',display_name='tempest-ServerDiskConfigTestJSON-server-1031241876',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1031241876',id=53,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-p1t9fwfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:32Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=c0942fc7-74d4-4fc8-9574-4fea9179e71b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.731 253542 DEBUG nova.network.os_vif_util [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.732 253542 DEBUG nova.network.os_vif_util [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.732 253542 DEBUG os_vif [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.733 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.733 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.734 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.737 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.738 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79f4b8f5-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.738 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79f4b8f5-d5, col_values=(('external_ids', {'iface-id': '79f4b8f5-d582-44c5-b8e0-a82ad73193de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:9d:e4', 'vm-uuid': 'c0942fc7-74d4-4fc8-9574-4fea9179e71b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.739 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:36 compute-0 NetworkManager[48915]: <info>  [1764059556.7409] manager: (tap79f4b8f5-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.742 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.747 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.748 253542 INFO os_vif [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5')
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.799 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.799 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.799 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No VIF found with MAC fa:16:3e:24:9d:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.800 253542 INFO nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Using config drive
Nov 25 08:32:36 compute-0 nova_compute[253538]: 2025-11-25 08:32:36.822 253542 DEBUG nova.storage.rbd_utils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:32:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695310889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.062 253542 DEBUG oslo_concurrency.processutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.067 253542 DEBUG nova.compute.provider_tree [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.078 253542 DEBUG nova.scheduler.client.report [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.107 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.107 253542 DEBUG nova.compute.manager [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.154 253542 DEBUG nova.compute.claims [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Aborting claim: <nova.compute.claims.Claim object at 0x7f843dedb2b0> abort /usr/lib/python3.9/site-packages/nova/compute/claims.py:85
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.155 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.155 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1478: 321 pgs: 321 active+clean; 204 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.2 MiB/s wr, 242 op/s
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.220 253542 INFO nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Creating config drive at /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.224 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt1fay3ve execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:37 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2443545900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:37 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1695310889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.349 253542 DEBUG oslo_concurrency.processutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.379 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt1fay3ve" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.398 253542 DEBUG nova.storage.rbd_utils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.401 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.539 253542 DEBUG oslo_concurrency.processutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.539 253542 INFO nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Deleting local config drive /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config because it was imported into RBD.
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.559 253542 DEBUG nova.compute.manager [req-aa97446f-68ee-4e2a-a8e2-91ee6faee270 req-39aab6b4-75fb-4e0d-a0c4-cbc94bc9cd20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.559 253542 DEBUG oslo_concurrency.lockutils [req-aa97446f-68ee-4e2a-a8e2-91ee6faee270 req-39aab6b4-75fb-4e0d-a0c4-cbc94bc9cd20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.559 253542 DEBUG oslo_concurrency.lockutils [req-aa97446f-68ee-4e2a-a8e2-91ee6faee270 req-39aab6b4-75fb-4e0d-a0c4-cbc94bc9cd20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.560 253542 DEBUG oslo_concurrency.lockutils [req-aa97446f-68ee-4e2a-a8e2-91ee6faee270 req-39aab6b4-75fb-4e0d-a0c4-cbc94bc9cd20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.560 253542 DEBUG nova.compute.manager [req-aa97446f-68ee-4e2a-a8e2-91ee6faee270 req-39aab6b4-75fb-4e0d-a0c4-cbc94bc9cd20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.560 253542 WARNING nova.compute.manager [req-aa97446f-68ee-4e2a-a8e2-91ee6faee270 req-39aab6b4-75fb-4e0d-a0c4-cbc94bc9cd20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 for instance with vm_state active and task_state None.
Nov 25 08:32:37 compute-0 NetworkManager[48915]: <info>  [1764059557.5862] manager: (tap79f4b8f5-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/203)
Nov 25 08:32:37 compute-0 kernel: tap79f4b8f5-d5: entered promiscuous mode
Nov 25 08:32:37 compute-0 systemd-udevd[309856]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:32:37 compute-0 ovn_controller[152859]: 2025-11-25T08:32:37Z|00433|binding|INFO|Claiming lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de for this chassis.
Nov 25 08:32:37 compute-0 ovn_controller[152859]: 2025-11-25T08:32:37Z|00434|binding|INFO|79f4b8f5-d582-44c5-b8e0-a82ad73193de: Claiming fa:16:3e:24:9d:e4 10.100.0.11
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.588 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:37 compute-0 NetworkManager[48915]: <info>  [1764059557.5981] device (tap79f4b8f5-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:32:37 compute-0 NetworkManager[48915]: <info>  [1764059557.5991] device (tap79f4b8f5-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.600 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:9d:e4 10.100.0.11'], port_security=['fa:16:3e:24:9d:e4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c0942fc7-74d4-4fc8-9574-4fea9179e71b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=79f4b8f5-d582-44c5-b8e0-a82ad73193de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.602 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 79f4b8f5-d582-44c5-b8e0-a82ad73193de in datapath eb25945d-6002-4a99-b682-034a8a3dc901 bound to our chassis
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.603 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 08:32:37 compute-0 ovn_controller[152859]: 2025-11-25T08:32:37Z|00435|binding|INFO|Setting lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de ovn-installed in OVS
Nov 25 08:32:37 compute-0 ovn_controller[152859]: 2025-11-25T08:32:37Z|00436|binding|INFO|Setting lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de up in Southbound
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.614 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c3b49a-e3cd-431f-9c04-b2d4a3f9235b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.615 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb25945d-61 in ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.615 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.617 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb25945d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.617 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0932c291-ab27-470e-a062-bcc00619195b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:37 compute-0 systemd-machined[215790]: New machine qemu-59-instance-00000035.
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.621 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4a883a77-f871-4c0b-9f0a-54526b92232a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:37 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-00000035.
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.632 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[47e37947-fee9-4235-9e06-c17fa97c2fcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.654 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f192ca24-2a31-4e9b-8c49-5b99eba6d666]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.695 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[155559a1-c58b-4515-8c06-41f35668810f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:37 compute-0 NetworkManager[48915]: <info>  [1764059557.7045] manager: (tapeb25945d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/204)
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.703 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e84aac-30e8-429e-9cd7-dba53af22e66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.732 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ecebcb31-4c5b-470d-af21-987af9eefff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.735 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f505df12-8f7e-4d13-a556-30f2f0821841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.750 253542 DEBUG nova.network.neutron [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:37 compute-0 NetworkManager[48915]: <info>  [1764059557.7590] device (tapeb25945d-60): carrier: link connected
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.765 253542 DEBUG oslo_concurrency.lockutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.767 253542 DEBUG nova.compute.manager [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.770 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[16d90b5d-1324-426a-bbee-8a2ef4aaee7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.786 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9f086ade-43d5-4165-8740-e209a9c902c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491235, 'reachable_time': 42549, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310192, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:32:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4106536887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.801 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[841a84dd-02c4-462a-a5f9-2b7d69e88194]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:fcf3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491235, 'tstamp': 491235}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310193, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.812 253542 DEBUG oslo_concurrency.processutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.817 253542 DEBUG nova.compute.provider_tree [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.819 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad5835f-cd83-4012-97cc-f48144622ffe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491235, 'reachable_time': 42549, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310196, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.843 253542 DEBUG nova.scheduler.client.report [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.851 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e369befb-7de9-460f-adc6-15fbfa7337d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.876 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.877 253542 DEBUG nova.compute.utils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Conflict updating instance 8ee60656-c206-4a84-9774-e8f852386097. Expected: {'task_state': [None]}. Actual: {'task_state': 'deleting'} notify_about_instance_usage /usr/lib/python3.9/site-packages/nova/compute/utils.py:430
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.877 253542 DEBUG nova.compute.manager [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Instance disappeared during build. _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2483
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.878 253542 DEBUG nova.compute.manager [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Unplugging VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:2976
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.878 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "refresh_cache-8ee60656-c206-4a84-9774-e8f852386097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.878 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquired lock "refresh_cache-8ee60656-c206-4a84-9774-e8f852386097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.878 253542 DEBUG nova.network.neutron [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.894 253542 DEBUG nova.network.neutron [req-b12e2af8-80ae-4b82-aa4c-c3db6c45a76a req-15a7a207-3d78-43a5-8a0d-b929e4bd52c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Updated VIF entry in instance network info cache for port 79f4b8f5-d582-44c5-b8e0-a82ad73193de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.894 253542 DEBUG nova.network.neutron [req-b12e2af8-80ae-4b82-aa4c-c3db6c45a76a req-15a7a207-3d78-43a5-8a0d-b929e4bd52c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Updating instance_info_cache with network_info: [{"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.905 253542 DEBUG oslo_concurrency.lockutils [req-b12e2af8-80ae-4b82-aa4c-c3db6c45a76a req-15a7a207-3d78-43a5-8a0d-b929e4bd52c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-c0942fc7-74d4-4fc8-9574-4fea9179e71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.914 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9b61f710-8395-4cd2-bb3e-ca1976819289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.915 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.916 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.916 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb25945d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:37 compute-0 NetworkManager[48915]: <info>  [1764059557.9199] manager: (tapeb25945d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.918 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:37 compute-0 kernel: tapeb25945d-60: entered promiscuous mode
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.922 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.923 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb25945d-60, col_values=(('external_ids', {'iface-id': 'f4a838c4-0817-4ff4-8792-2e2721905e98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.924 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:37 compute-0 ovn_controller[152859]: 2025-11-25T08:32:37Z|00437|binding|INFO|Releasing lport f4a838c4-0817-4ff4-8792-2e2721905e98 from this chassis (sb_readonly=0)
Nov 25 08:32:37 compute-0 kernel: tap15af3dd8-97 (unregistering): left promiscuous mode
Nov 25 08:32:37 compute-0 NetworkManager[48915]: <info>  [1764059557.9466] device (tap15af3dd8-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.952 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:37 compute-0 ovn_controller[152859]: 2025-11-25T08:32:37Z|00438|binding|INFO|Releasing lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c from this chassis (sb_readonly=0)
Nov 25 08:32:37 compute-0 ovn_controller[152859]: 2025-11-25T08:32:37Z|00439|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c down in Southbound
Nov 25 08:32:37 compute-0 ovn_controller[152859]: 2025-11-25T08:32:37Z|00440|binding|INFO|Removing iface tap15af3dd8-97 ovn-installed in OVS
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.954 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.955 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.958 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d12011e4-6e2b-4def-8093-218f391e4088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.959 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.960 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'env', 'PROCESS_TAG=haproxy-eb25945d-6002-4a99-b682-034a8a3dc901', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb25945d-6002-4a99-b682-034a8a3dc901.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:32:37 compute-0 nova_compute[253538]: 2025-11-25 08:32:37.975 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:37.975 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:38 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 25 08:32:38 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000032.scope: Consumed 13.642s CPU time.
Nov 25 08:32:38 compute-0 systemd-machined[215790]: Machine qemu-55-instance-00000032 terminated.
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.029 253542 DEBUG nova.network.neutron [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:32:38 compute-0 kernel: tap15af3dd8-97: entered promiscuous mode
Nov 25 08:32:38 compute-0 NetworkManager[48915]: <info>  [1764059558.0999] manager: (tap15af3dd8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Nov 25 08:32:38 compute-0 ovn_controller[152859]: 2025-11-25T08:32:38Z|00441|binding|INFO|Claiming lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c for this chassis.
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:38 compute-0 kernel: tap15af3dd8-97 (unregistering): left promiscuous mode
Nov 25 08:32:38 compute-0 ovn_controller[152859]: 2025-11-25T08:32:38Z|00442|binding|INFO|15af3dd8-9788-4a34-b4b2-d3b24300cd4c: Claiming fa:16:3e:07:cd:40 10.100.0.13
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.109 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:38 compute-0 ovn_controller[152859]: 2025-11-25T08:32:38Z|00443|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c ovn-installed in OVS
Nov 25 08:32:38 compute-0 ovn_controller[152859]: 2025-11-25T08:32:38Z|00444|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c up in Southbound
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.136 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance destroyed successfully.
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.137 253542 DEBUG nova.objects.instance [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'resources' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:38 compute-0 ovn_controller[152859]: 2025-11-25T08:32:38Z|00445|binding|INFO|Releasing lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c from this chassis (sb_readonly=1)
Nov 25 08:32:38 compute-0 ovn_controller[152859]: 2025-11-25T08:32:38Z|00446|if_status|INFO|Dropped 1 log messages in last 26 seconds (most recently, 26 seconds ago) due to excessive rate
Nov 25 08:32:38 compute-0 ovn_controller[152859]: 2025-11-25T08:32:38Z|00447|if_status|INFO|Not setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c down as sb is readonly
Nov 25 08:32:38 compute-0 ovn_controller[152859]: 2025-11-25T08:32:38Z|00448|binding|INFO|Removing iface tap15af3dd8-97 ovn-installed in OVS
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.138 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:38 compute-0 ovn_controller[152859]: 2025-11-25T08:32:38Z|00449|binding|INFO|Releasing lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c from this chassis (sb_readonly=0)
Nov 25 08:32:38 compute-0 ovn_controller[152859]: 2025-11-25T08:32:38Z|00450|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c down in Southbound
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.157 253542 DEBUG nova.virt.libvirt.vif [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.158 253542 DEBUG nova.network.os_vif_util [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.158 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.158 253542 DEBUG nova.network.os_vif_util [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.159 253542 DEBUG os_vif [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.161 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.161 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15af3dd8-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.162 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.163 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.167 253542 INFO os_vif [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.174 253542 DEBUG nova.virt.libvirt.driver [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Start _get_guest_xml network_info=[{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.178 253542 WARNING nova.virt.libvirt.driver [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.182 253542 DEBUG nova.virt.libvirt.host [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.183 253542 DEBUG nova.virt.libvirt.host [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.186 253542 DEBUG nova.virt.libvirt.host [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.186 253542 DEBUG nova.virt.libvirt.host [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.186 253542 DEBUG nova.virt.libvirt.driver [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.187 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.187 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.187 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.187 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.188 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.188 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.188 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.188 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.188 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.189 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.189 253542 DEBUG nova.virt.hardware [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.189 253542 DEBUG nova.objects.instance [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.208 253542 DEBUG oslo_concurrency.processutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:38 compute-0 ceph-mon[75015]: pgmap v1478: 321 pgs: 321 active+clean; 204 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.2 MiB/s wr, 242 op/s
Nov 25 08:32:38 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4106536887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:38 compute-0 podman[310239]: 2025-11-25 08:32:38.358133045 +0000 UTC m=+0.071375810 container create 54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:32:38 compute-0 systemd[1]: Started libpod-conmon-54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb.scope.
Nov 25 08:32:38 compute-0 podman[310239]: 2025-11-25 08:32:38.318033607 +0000 UTC m=+0.031276482 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:32:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eef64a7b2bc3a9c676eca2c757f9f8de2a6b5948fd7a6d5db2aa9098a5e44dd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:38 compute-0 podman[310239]: 2025-11-25 08:32:38.4509887 +0000 UTC m=+0.164231485 container init 54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 08:32:38 compute-0 podman[310239]: 2025-11-25 08:32:38.457846484 +0000 UTC m=+0.171089239 container start 54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:32:38 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[310310]: [NOTICE]   (310318) : New worker (310321) forked
Nov 25 08:32:38 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[310310]: [NOTICE]   (310318) : Loading success.
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.504 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.506 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 908154e6-322e-4607-bb65-df3f3f8daca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.507 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e81cbb3f-7f8d-4e67-a050-97ed634e92f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.507 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace which is not needed anymore
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.513 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059558.5129726, c0942fc7-74d4-4fc8-9574-4fea9179e71b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.514 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] VM Started (Lifecycle Event)
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.535 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.539 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059558.5169563, c0942fc7-74d4-4fc8-9574-4fea9179e71b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.539 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] VM Paused (Lifecycle Event)
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.555 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.558 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.578 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:32:38 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[307477]: [NOTICE]   (307488) : haproxy version is 2.8.14-c23fe91
Nov 25 08:32:38 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[307477]: [NOTICE]   (307488) : path to executable is /usr/sbin/haproxy
Nov 25 08:32:38 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[307477]: [WARNING]  (307488) : Exiting Master process...
Nov 25 08:32:38 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[307477]: [WARNING]  (307488) : Exiting Master process...
Nov 25 08:32:38 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[307477]: [ALERT]    (307488) : Current worker (307490) exited with code 143 (Terminated)
Nov 25 08:32:38 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[307477]: [WARNING]  (307488) : All workers exited. Exiting... (0)
Nov 25 08:32:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:32:38 compute-0 systemd[1]: libpod-5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826.scope: Deactivated successfully.
Nov 25 08:32:38 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4265011623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:38 compute-0 podman[310347]: 2025-11-25 08:32:38.669866793 +0000 UTC m=+0.052932194 container died 5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.682 253542 DEBUG oslo_concurrency.processutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826-userdata-shm.mount: Deactivated successfully.
Nov 25 08:32:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6642dd5f19a45eda9399f2698ab38ce8fab03c28deb9c9cf4ff5295c9db6ea21-merged.mount: Deactivated successfully.
Nov 25 08:32:38 compute-0 podman[310347]: 2025-11-25 08:32:38.711765739 +0000 UTC m=+0.094831120 container cleanup 5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:32:38 compute-0 systemd[1]: libpod-conmon-5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826.scope: Deactivated successfully.
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.732 253542 DEBUG nova.network.neutron [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.742 253542 DEBUG oslo_concurrency.processutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.773 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Releasing lock "refresh_cache-8ee60656-c206-4a84-9774-e8f852386097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.774 253542 DEBUG nova.compute.manager [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Unplugged VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:3012
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.774 253542 DEBUG nova.compute.manager [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.775 253542 DEBUG nova.network.neutron [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:32:38 compute-0 podman[310392]: 2025-11-25 08:32:38.786149647 +0000 UTC m=+0.056636523 container remove 5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.791 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4c1b75-8beb-4a43-af7b-e27776c227ce]: (4, ('Tue Nov 25 08:32:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826)\n5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826\nTue Nov 25 08:32:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826)\n5cd8eb5bcd8ba37565e28b878ebca1895bd29ed8fc6e168ca934fc9aa8eb1826\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.792 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7756bab9-35fe-4310-9e65-037cd15c2b0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.793 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.797 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:38 compute-0 kernel: tap908154e6-30: left promiscuous mode
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.814 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.815 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.819 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f52a94-ebd7-4b63-b770-7ee124b5df15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.834 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[53512463-746a-4839-b002-4d091645dee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.835 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4ada17b5-e1ed-4735-905a-a5de99092166]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.851 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[94912074-f5ee-45d5-9be4-74a794710526]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488871, 'reachable_time': 44099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310411, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d908154e6\x2d322e\x2d4607\x2dbb65\x2ddf3f3f8daca6.mount: Deactivated successfully.
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.858 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.858 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[81ffaf5a-696c-4a0a-baca-267728221815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.859 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.861 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 908154e6-322e-4607-bb65-df3f3f8daca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.861 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e3539fbe-3283-4f16-84ec-07f5961ef18a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.862 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.863 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 908154e6-322e-4607-bb65-df3f3f8daca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:32:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:38.864 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[95587370-91a1-4572-8263-d66d17efa289]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.929 253542 DEBUG nova.network.neutron [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.946 253542 DEBUG nova.network.neutron [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:38 compute-0 nova_compute[253538]: 2025-11-25 08:32:38.959 253542 INFO nova.compute.manager [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Took 0.18 seconds to deallocate network for instance.
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.031 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.034 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.034 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.035 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.035 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Processing event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.035 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.036 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.036 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.036 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.037 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] No waiting events found dispatching network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.038 253542 WARNING nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received unexpected event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de for instance with vm_state building and task_state spawning.
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.038 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.040 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.040 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.040 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.041 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.041 253542 WARNING nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state reboot_started_hard.
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.041 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.042 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.042 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.042 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.043 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.043 253542 WARNING nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state reboot_started_hard.
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.043 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.044 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.044 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.044 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.044 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.044 253542 WARNING nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state reboot_started_hard.
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.045 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.045 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.045 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.045 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.045 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.046 253542 WARNING nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state reboot_started_hard.
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.046 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.046 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.046 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.046 253542 DEBUG oslo_concurrency.lockutils [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.047 253542 DEBUG nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.047 253542 WARNING nova.compute.manager [req-8ea3216f-8de3-4f19-99a6-84f31029b706 req-a93148d7-054b-486c-98c6-00e60899f968 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state reboot_started_hard.
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.049 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.055 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.058 253542 INFO nova.virt.libvirt.driver [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance spawned successfully.
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.058 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.063 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059559.0629315, c0942fc7-74d4-4fc8-9574-4fea9179e71b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.063 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] VM Resumed (Lifecycle Event)
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.081 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.086 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.086 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.087 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.087 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.088 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.088 253542 DEBUG nova.virt.libvirt.driver [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.101 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.105 253542 INFO nova.scheduler.client.report [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Deleted allocations for instance 8ee60656-c206-4a84-9774-e8f852386097
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.105 253542 DEBUG oslo_concurrency.lockutils [None req-f406ea9d-e657-4635-a30a-83190a3a473f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "8ee60656-c206-4a84-9774-e8f852386097" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 2.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.108 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "8ee60656-c206-4a84-9774-e8f852386097" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 2.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.109 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "8ee60656-c206-4a84-9774-e8f852386097-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.109 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "8ee60656-c206-4a84-9774-e8f852386097-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.109 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "8ee60656-c206-4a84-9774-e8f852386097-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.110 253542 DEBUG nova.objects.instance [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'flavor' on Instance uuid 8ee60656-c206-4a84-9774-e8f852386097 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.134 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.141 253542 DEBUG nova.objects.instance [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'metadata' on Instance uuid 8ee60656-c206-4a84-9774-e8f852386097 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:32:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2687400569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.179 253542 INFO nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Took 6.17 seconds to spawn the instance on the hypervisor.
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.179 253542 DEBUG nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.183 253542 INFO nova.compute.manager [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Terminating instance
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.196 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "refresh_cache-8ee60656-c206-4a84-9774-e8f852386097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.197 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquired lock "refresh_cache-8ee60656-c206-4a84-9774-e8f852386097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.197 253542 DEBUG nova.network.neutron [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.198 253542 DEBUG oslo_concurrency.processutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 214 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.6 MiB/s wr, 240 op/s
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.200 253542 DEBUG nova.virt.libvirt.vif [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.200 253542 DEBUG nova.network.os_vif_util [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.201 253542 DEBUG nova.network.os_vif_util [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.202 253542 DEBUG nova.objects.instance [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.237 253542 DEBUG nova.virt.libvirt.driver [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:32:39 compute-0 nova_compute[253538]:   <uuid>0feca801-4630-4450-b915-616d8496ab51</uuid>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   <name>instance-00000032</name>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerActionsTestJSON-server-1351113969</nova:name>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:32:38</nova:creationTime>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:32:39 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:32:39 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:32:39 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:32:39 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:32:39 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:32:39 compute-0 nova_compute[253538]:         <nova:user uuid="c199ca353ed54a53ab7fe37d3089c82a">tempest-ServerActionsTestJSON-1880843108-project-member</nova:user>
Nov 25 08:32:39 compute-0 nova_compute[253538]:         <nova:project uuid="23237e7592b247838e62457157e64e9e">tempest-ServerActionsTestJSON-1880843108</nova:project>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:32:39 compute-0 nova_compute[253538]:         <nova:port uuid="15af3dd8-9788-4a34-b4b2-d3b24300cd4c">
Nov 25 08:32:39 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <system>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <entry name="serial">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <entry name="uuid">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     </system>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   <os>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   </os>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   <features>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   </features>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk">
Nov 25 08:32:39 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:32:39 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk.config">
Nov 25 08:32:39 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:32:39 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:07:cd:40"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <target dev="tap15af3dd8-97"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/console.log" append="off"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <video>
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     </video>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <input type="keyboard" bus="usb"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:32:39 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:32:39 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:32:39 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:32:39 compute-0 nova_compute[253538]: </domain>
Nov 25 08:32:39 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.238 253542 DEBUG nova.virt.libvirt.driver [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.238 253542 DEBUG nova.virt.libvirt.driver [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.239 253542 DEBUG nova.virt.libvirt.vif [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.239 253542 DEBUG nova.network.os_vif_util [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.241 253542 DEBUG nova.network.os_vif_util [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.242 253542 DEBUG os_vif [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.242 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.242 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.243 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.245 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.245 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15af3dd8-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.246 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15af3dd8-97, col_values=(('external_ids', {'iface-id': '15af3dd8-9788-4a34-b4b2-d3b24300cd4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:cd:40', 'vm-uuid': '0feca801-4630-4450-b915-616d8496ab51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.247 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:39 compute-0 NetworkManager[48915]: <info>  [1764059559.2485] manager: (tap15af3dd8-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.250 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.256 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.258 253542 INFO os_vif [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.278 253542 INFO nova.compute.manager [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Took 7.42 seconds to build instance.
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.302 253542 DEBUG oslo_concurrency.lockutils [None req-f4fdfd78-b540-405c-9e87-7c67184bb7d8 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:39 compute-0 kernel: tap15af3dd8-97: entered promiscuous mode
Nov 25 08:32:39 compute-0 systemd-udevd[310215]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:32:39 compute-0 ovn_controller[152859]: 2025-11-25T08:32:39Z|00451|binding|INFO|Claiming lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c for this chassis.
Nov 25 08:32:39 compute-0 ovn_controller[152859]: 2025-11-25T08:32:39Z|00452|binding|INFO|15af3dd8-9788-4a34-b4b2-d3b24300cd4c: Claiming fa:16:3e:07:cd:40 10.100.0.13
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.336 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:39 compute-0 NetworkManager[48915]: <info>  [1764059559.3381] manager: (tap15af3dd8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.342 253542 DEBUG nova.network.neutron [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.342 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '7', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.343 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 bound to our chassis
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.344 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:32:39 compute-0 NetworkManager[48915]: <info>  [1764059559.3482] device (tap15af3dd8-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:32:39 compute-0 NetworkManager[48915]: <info>  [1764059559.3492] device (tap15af3dd8-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:32:39 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4265011623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:39 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2687400569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:39 compute-0 ovn_controller[152859]: 2025-11-25T08:32:39Z|00453|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c ovn-installed in OVS
Nov 25 08:32:39 compute-0 ovn_controller[152859]: 2025-11-25T08:32:39Z|00454|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c up in Southbound
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.359 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.358 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[41195a4f-8f31-4845-a086-bb9ac3862ff7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.359 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap908154e6-31 in ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.361 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap908154e6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.361 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[02bdab70-2fbf-4a56-9b73-07102ea663e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.364 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.365 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab270bd-b172-4364-9f3c-00e91e09d19b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.376 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[72518e40-79b6-4525-895d-b2a54c626726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:39 compute-0 systemd-machined[215790]: New machine qemu-60-instance-00000032.
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.387 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e27e76-2934-4898-aad0-33dd07d60847]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:39 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-00000032.
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.434 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8dbe7d-e516-4c07-8ea1-eca90745de22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:39 compute-0 NetworkManager[48915]: <info>  [1764059559.4485] manager: (tap908154e6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.446 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e8fe79-1d71-4ed4-b334-d03dae6af503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.489 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5d923724-c372-45be-ba8b-59c68f95487c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.492 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9db1faec-0525-44ac-a566-10f01e255c64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:39 compute-0 NetworkManager[48915]: <info>  [1764059559.5192] device (tap908154e6-30): carrier: link connected
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.528 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf7b325-771c-43fb-b368-cbbcf66b4855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.549 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8219559e-3103-4949-8e95-eaa7c3b37197]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491411, 'reachable_time': 34687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310478, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.562 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[30813ce8-88a7-4e5b-afbf-557e505a6d57]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:5909'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491411, 'tstamp': 491411}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310479, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.583 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4f88bdd6-ffb0-4225-a5ea-d99e32c68f00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491411, 'reachable_time': 34687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310480, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.615 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e2a636-d9d2-4806-80e9-8784ddfd8af1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.661 253542 DEBUG nova.network.neutron [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.665 253542 DEBUG nova.compute.manager [req-6157f941-262c-41d1-b71e-6d8ed7cdecde req-fcdcabaa-82d6-452d-8b15-5e85bc690e0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-changed-d8bd16e1-3695-474d-be04-7fdf44bee803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.665 253542 DEBUG nova.compute.manager [req-6157f941-262c-41d1-b71e-6d8ed7cdecde req-fcdcabaa-82d6-452d-8b15-5e85bc690e0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing instance network info cache due to event network-changed-d8bd16e1-3695-474d-be04-7fdf44bee803. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.665 253542 DEBUG oslo_concurrency.lockutils [req-6157f941-262c-41d1-b71e-6d8ed7cdecde req-fcdcabaa-82d6-452d-8b15-5e85bc690e0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.665 253542 DEBUG oslo_concurrency.lockutils [req-6157f941-262c-41d1-b71e-6d8ed7cdecde req-fcdcabaa-82d6-452d-8b15-5e85bc690e0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.666 253542 DEBUG nova.network.neutron [req-6157f941-262c-41d1-b71e-6d8ed7cdecde req-fcdcabaa-82d6-452d-8b15-5e85bc690e0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing network info cache for port d8bd16e1-3695-474d-be04-7fdf44bee803 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.676 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Releasing lock "refresh_cache-8ee60656-c206-4a84-9774-e8f852386097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.677 253542 DEBUG nova.compute.manager [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.678 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b7868153-0ed5-4564-b23f-e7e349ec5721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.679 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.679 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.679 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.681 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:39 compute-0 NetworkManager[48915]: <info>  [1764059559.6822] manager: (tap908154e6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Nov 25 08:32:39 compute-0 kernel: tap908154e6-30: entered promiscuous mode
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.684 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.685 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:39 compute-0 ovn_controller[152859]: 2025-11-25T08:32:39Z|00455|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.685 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.702 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.702 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.704 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2d46d274-e852-40f6-ae89-36b995e162f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.704 253542 DEBUG nova.virt.libvirt.driver [-] [instance: 8ee60656-c206-4a84-9774-e8f852386097] During wait destroy, instance disappeared. _wait_for_destroy /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1527
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.705 253542 INFO nova.virt.libvirt.driver [-] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Instance destroyed successfully.
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.705 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:32:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:39.706 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'env', 'PROCESS_TAG=haproxy-908154e6-322e-4607-bb65-df3f3f8daca6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/908154e6-322e-4607-bb65-df3f3f8daca6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.705 253542 DEBUG nova.objects.instance [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8ee60656-c206-4a84-9774-e8f852386097 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.715 253542 DEBUG nova.objects.instance [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'resources' on Instance uuid 8ee60656-c206-4a84-9774-e8f852386097 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.749 253542 INFO nova.virt.libvirt.driver [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Deletion of /var/lib/nova/instances/8ee60656-c206-4a84-9774-e8f852386097_del complete
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.807 253542 INFO nova.compute.manager [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Took 0.13 seconds to destroy the instance on the hypervisor.
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.808 253542 DEBUG oslo.service.loopingcall [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.808 253542 DEBUG nova.compute.manager [-] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.809 253542 DEBUG nova.network.neutron [-] [instance: 8ee60656-c206-4a84-9774-e8f852386097] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.913 253542 DEBUG nova.network.neutron [-] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.923 253542 DEBUG nova.network.neutron [-] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:39 compute-0 nova_compute[253538]: 2025-11-25 08:32:39.939 253542 INFO nova.compute.manager [-] [instance: 8ee60656-c206-4a84-9774-e8f852386097] Took 0.13 seconds to deallocate network for instance.
Nov 25 08:32:40 compute-0 nova_compute[253538]: 2025-11-25 08:32:40.162 253542 DEBUG oslo_concurrency.lockutils [None req-c6ed4ceb-c663-4069-85c5-c7caedbbd858 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "8ee60656-c206-4a84-9774-e8f852386097" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:40 compute-0 podman[310544]: 2025-11-25 08:32:40.122879954 +0000 UTC m=+0.034767536 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:32:40 compute-0 nova_compute[253538]: 2025-11-25 08:32:40.364 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 0feca801-4630-4450-b915-616d8496ab51 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:32:40 compute-0 nova_compute[253538]: 2025-11-25 08:32:40.365 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059560.364427, 0feca801-4630-4450-b915-616d8496ab51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:40 compute-0 nova_compute[253538]: 2025-11-25 08:32:40.365 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Resumed (Lifecycle Event)
Nov 25 08:32:40 compute-0 nova_compute[253538]: 2025-11-25 08:32:40.367 253542 DEBUG nova.compute.manager [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:32:40 compute-0 nova_compute[253538]: 2025-11-25 08:32:40.370 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance rebooted successfully.
Nov 25 08:32:40 compute-0 nova_compute[253538]: 2025-11-25 08:32:40.370 253542 DEBUG nova.compute.manager [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:40 compute-0 nova_compute[253538]: 2025-11-25 08:32:40.407 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:40 compute-0 nova_compute[253538]: 2025-11-25 08:32:40.410 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:32:40 compute-0 nova_compute[253538]: 2025-11-25 08:32:40.436 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 25 08:32:40 compute-0 nova_compute[253538]: 2025-11-25 08:32:40.437 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059560.3678687, 0feca801-4630-4450-b915-616d8496ab51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:40 compute-0 nova_compute[253538]: 2025-11-25 08:32:40.437 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Started (Lifecycle Event)
Nov 25 08:32:40 compute-0 nova_compute[253538]: 2025-11-25 08:32:40.448 253542 DEBUG oslo_concurrency.lockutils [None req-7c71d64c-53be-44a4-a016-71bef448fe19 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:40 compute-0 nova_compute[253538]: 2025-11-25 08:32:40.460 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:40 compute-0 nova_compute[253538]: 2025-11-25 08:32:40.464 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:32:40 compute-0 nova_compute[253538]: 2025-11-25 08:32:40.491 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:40 compute-0 podman[310544]: 2025-11-25 08:32:40.517692735 +0000 UTC m=+0.429580277 container create 2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 08:32:40 compute-0 ceph-mon[75015]: pgmap v1479: 321 pgs: 321 active+clean; 214 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.6 MiB/s wr, 240 op/s
Nov 25 08:32:40 compute-0 systemd[1]: Started libpod-conmon-2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339.scope.
Nov 25 08:32:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/facc729861bf2043006a179130b6eb00c45dc0c9002c2a929f8d9cf2dc4ea94f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:32:40 compute-0 podman[310544]: 2025-11-25 08:32:40.773796258 +0000 UTC m=+0.685683850 container init 2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:32:40 compute-0 podman[310544]: 2025-11-25 08:32:40.784085114 +0000 UTC m=+0.695972666 container start 2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 08:32:40 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[310582]: [NOTICE]   (310586) : New worker (310588) forked
Nov 25 08:32:40 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[310582]: [NOTICE]   (310586) : Loading success.
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.018 253542 DEBUG nova.network.neutron [req-6157f941-262c-41d1-b71e-6d8ed7cdecde req-fcdcabaa-82d6-452d-8b15-5e85bc690e0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updated VIF entry in instance network info cache for port d8bd16e1-3695-474d-be04-7fdf44bee803. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.018 253542 DEBUG nova.network.neutron [req-6157f941-262c-41d1-b71e-6d8ed7cdecde req-fcdcabaa-82d6-452d-8b15-5e85bc690e0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.037 253542 DEBUG oslo_concurrency.lockutils [req-6157f941-262c-41d1-b71e-6d8ed7cdecde req-fcdcabaa-82d6-452d-8b15-5e85bc690e0d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:41.057 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:41.058 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:41.058 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 214 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.4 MiB/s wr, 258 op/s
Nov 25 08:32:41 compute-0 ceph-mon[75015]: pgmap v1480: 321 pgs: 321 active+clean; 214 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.4 MiB/s wr, 258 op/s
Nov 25 08:32:41 compute-0 podman[310597]: 2025-11-25 08:32:41.81084351 +0000 UTC m=+0.068032150 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.852 253542 DEBUG nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.852 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.852 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.853 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.853 253542 DEBUG nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.853 253542 WARNING nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.853 253542 DEBUG nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.853 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.853 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.854 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.854 253542 DEBUG nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.854 253542 WARNING nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.854 253542 DEBUG nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.854 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.854 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.855 253542 DEBUG oslo_concurrency.lockutils [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.855 253542 DEBUG nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:41 compute-0 nova_compute[253538]: 2025-11-25 08:32:41.855 253542 WARNING nova.compute.manager [req-8c1bfeb1-4f2e-4cb4-a1b4-6243b259ff58 req-c9e12778-7a48-4312-85ce-677d50dc0ded b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.
Nov 25 08:32:43 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.109 253542 INFO nova.compute.manager [None req-f0c7279b-1181-4e1c-a5ce-64e12d6684b3 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Get console output
Nov 25 08:32:43 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.115 253542 INFO oslo.privsep.daemon [None req-f0c7279b-1181-4e1c-a5ce-64e12d6684b3 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp3ey7djee/privsep.sock']
Nov 25 08:32:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 214 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.7 MiB/s wr, 261 op/s
Nov 25 08:32:43 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.324 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "6fd0259d-3f5c-487b-906c-db0ac2b00830" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:43 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.325 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:43 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.340 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:32:43 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.407 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:43 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.408 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:43 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.414 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:32:43 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.415 253542 INFO nova.compute.claims [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:32:43 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.557 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:43 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.690 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059548.6869287, 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:43 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.691 253542 INFO nova.compute.manager [-] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] VM Stopped (Lifecycle Event)
Nov 25 08:32:43 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.713 253542 DEBUG nova.compute.manager [None req-7510b18c-133e-43cd-9017-c4e3c8b5b1be - - - - - -] [instance: 7aefbad8-edb5-417c-a34d-e9e3c2dd0c03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:43 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.920 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059548.9199588, 0201b222-1aa1-4d57-901c-e3c79170b567 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:43 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.921 253542 INFO nova.compute.manager [-] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] VM Stopped (Lifecycle Event)
Nov 25 08:32:43 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.939 253542 DEBUG nova.compute.manager [None req-41fddf24-ff8c-4fbd-8212-38d2b4875bef - - - - - -] [instance: 0201b222-1aa1-4d57-901c-e3c79170b567] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:32:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1318530023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.027 253542 INFO oslo.privsep.daemon [None req-f0c7279b-1181-4e1c-a5ce-64e12d6684b3 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Spawned new privsep daemon via rootwrap
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.902 310639 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.906 310639 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.908 310639 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:43.908 310639 INFO oslo.privsep.daemon [-] privsep daemon running as pid 310639
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.038 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.044 253542 DEBUG nova.compute.provider_tree [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.056 253542 DEBUG nova.scheduler.client.report [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.076 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.077 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.118 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.119 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.119 253542 DEBUG nova.network.neutron [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.140 253542 INFO nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.158 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.247 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.249 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.249 253542 INFO nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Creating image(s)
Nov 25 08:32:44 compute-0 ceph-mon[75015]: pgmap v1481: 321 pgs: 321 active+clean; 214 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.7 MiB/s wr, 261 op/s
Nov 25 08:32:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1318530023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.279 253542 DEBUG nova.storage.rbd_utils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.317 253542 DEBUG nova.storage.rbd_utils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.355 253542 DEBUG nova.storage.rbd_utils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.361 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.401 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.441 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.442 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.443 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.443 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.462 253542 DEBUG nova.storage.rbd_utils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.465 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.537 253542 INFO nova.compute.manager [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Rebuilding instance
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.762 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.800 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.833 253542 DEBUG nova.compute.manager [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.839 253542 DEBUG nova.storage.rbd_utils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] resizing rbd image 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:32:44 compute-0 podman[310737]: 2025-11-25 08:32:44.848651924 +0000 UTC m=+0.106987757 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.932 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'pci_requests' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.939 253542 DEBUG nova.objects.instance [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'migration_context' on Instance uuid 6fd0259d-3f5c-487b-906c-db0ac2b00830 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.942 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'pci_devices' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.953 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.953 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Ensure instance console log exists: /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.954 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.954 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.955 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.956 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'resources' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.968 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'migration_context' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.979 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:32:44 compute-0 nova_compute[253538]: 2025-11-25 08:32:44.982 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:32:45 compute-0 nova_compute[253538]: 2025-11-25 08:32:45.144 253542 DEBUG nova.policy [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a649c62aaacd4f01a93ea978066f5976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:32:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 218 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.1 MiB/s wr, 304 op/s
Nov 25 08:32:45 compute-0 nova_compute[253538]: 2025-11-25 08:32:45.530 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:32:46 compute-0 ceph-mon[75015]: pgmap v1482: 321 pgs: 321 active+clean; 218 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.1 MiB/s wr, 304 op/s
Nov 25 08:32:46 compute-0 nova_compute[253538]: 2025-11-25 08:32:46.666 253542 DEBUG nova.network.neutron [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Successfully created port: 40df73d0-e48b-4bb9-96eb-c236dd2ca614 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:32:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 229 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.1 MiB/s wr, 263 op/s
Nov 25 08:32:47 compute-0 nova_compute[253538]: 2025-11-25 08:32:47.455 253542 DEBUG nova.network.neutron [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Successfully updated port: 40df73d0-e48b-4bb9-96eb-c236dd2ca614 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:32:47 compute-0 nova_compute[253538]: 2025-11-25 08:32:47.466 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "refresh_cache-6fd0259d-3f5c-487b-906c-db0ac2b00830" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:47 compute-0 nova_compute[253538]: 2025-11-25 08:32:47.467 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquired lock "refresh_cache-6fd0259d-3f5c-487b-906c-db0ac2b00830" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:47 compute-0 nova_compute[253538]: 2025-11-25 08:32:47.467 253542 DEBUG nova.network.neutron [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:32:47 compute-0 nova_compute[253538]: 2025-11-25 08:32:47.605 253542 DEBUG nova.network.neutron [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:32:47 compute-0 nova_compute[253538]: 2025-11-25 08:32:47.632 253542 DEBUG nova.compute.manager [req-937c237f-6dea-4dcd-9db8-34d2413968c2 req-e1b46b55-ba74-4220-9055-b54a9df9f8bc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received event network-changed-40df73d0-e48b-4bb9-96eb-c236dd2ca614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:47 compute-0 nova_compute[253538]: 2025-11-25 08:32:47.632 253542 DEBUG nova.compute.manager [req-937c237f-6dea-4dcd-9db8-34d2413968c2 req-e1b46b55-ba74-4220-9055-b54a9df9f8bc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Refreshing instance network info cache due to event network-changed-40df73d0-e48b-4bb9-96eb-c236dd2ca614. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:32:47 compute-0 nova_compute[253538]: 2025-11-25 08:32:47.632 253542 DEBUG oslo_concurrency.lockutils [req-937c237f-6dea-4dcd-9db8-34d2413968c2 req-e1b46b55-ba74-4220-9055-b54a9df9f8bc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-6fd0259d-3f5c-487b-906c-db0ac2b00830" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.249 253542 DEBUG nova.network.neutron [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Updating instance_info_cache with network_info: [{"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.268 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Releasing lock "refresh_cache-6fd0259d-3f5c-487b-906c-db0ac2b00830" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.269 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Instance network_info: |[{"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.269 253542 DEBUG oslo_concurrency.lockutils [req-937c237f-6dea-4dcd-9db8-34d2413968c2 req-e1b46b55-ba74-4220-9055-b54a9df9f8bc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-6fd0259d-3f5c-487b-906c-db0ac2b00830" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.269 253542 DEBUG nova.network.neutron [req-937c237f-6dea-4dcd-9db8-34d2413968c2 req-e1b46b55-ba74-4220-9055-b54a9df9f8bc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Refreshing network info cache for port 40df73d0-e48b-4bb9-96eb-c236dd2ca614 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.272 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Start _get_guest_xml network_info=[{"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.275 253542 WARNING nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.279 253542 DEBUG nova.virt.libvirt.host [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.279 253542 DEBUG nova.virt.libvirt.host [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.283 253542 DEBUG nova.virt.libvirt.host [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.284 253542 DEBUG nova.virt.libvirt.host [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.284 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.285 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.285 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.285 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.286 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.286 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.286 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.287 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.287 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.287 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.287 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.288 253542 DEBUG nova.virt.hardware [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.290 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:48 compute-0 ceph-mon[75015]: pgmap v1483: 321 pgs: 321 active+clean; 229 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.1 MiB/s wr, 263 op/s
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.531 253542 DEBUG oslo_concurrency.lockutils [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.535 253542 DEBUG oslo_concurrency.lockutils [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.535 253542 DEBUG nova.compute.manager [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.541 253542 DEBUG nova.compute.manager [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.543 253542 DEBUG nova.objects.instance [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'flavor' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.568 253542 DEBUG nova.virt.libvirt.driver [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:32:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:32:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914164472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.764 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.784 253542 DEBUG nova.storage.rbd_utils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:48 compute-0 nova_compute[253538]: 2025-11-25 08:32:48.791 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 244 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.1 MiB/s wr, 254 op/s
Nov 25 08:32:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:32:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/852451613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.273 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.277 253542 DEBUG nova.virt.libvirt.vif [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1160841163',display_name='tempest-DeleteServersTestJSON-server-1160841163',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1160841163',id=55,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-tls64ynb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-
2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:44Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=6fd0259d-3f5c-487b-906c-db0ac2b00830,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.277 253542 DEBUG nova.network.os_vif_util [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.278 253542 DEBUG nova.network.os_vif_util [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:48,bridge_name='br-int',has_traffic_filtering=True,id=40df73d0-e48b-4bb9-96eb-c236dd2ca614,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40df73d0-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.279 253542 DEBUG nova.objects.instance [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6fd0259d-3f5c-487b-906c-db0ac2b00830 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.292 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:32:49 compute-0 nova_compute[253538]:   <uuid>6fd0259d-3f5c-487b-906c-db0ac2b00830</uuid>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   <name>instance-00000037</name>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <nova:name>tempest-DeleteServersTestJSON-server-1160841163</nova:name>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:32:48</nova:creationTime>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:32:49 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:32:49 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:32:49 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:32:49 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:32:49 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:32:49 compute-0 nova_compute[253538]:         <nova:user uuid="a649c62aaacd4f01a93ea978066f5976">tempest-DeleteServersTestJSON-2095694504-project-member</nova:user>
Nov 25 08:32:49 compute-0 nova_compute[253538]:         <nova:project uuid="a9c243220ecd4ba3af10cdbc0ea76bd6">tempest-DeleteServersTestJSON-2095694504</nova:project>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:32:49 compute-0 nova_compute[253538]:         <nova:port uuid="40df73d0-e48b-4bb9-96eb-c236dd2ca614">
Nov 25 08:32:49 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <system>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <entry name="serial">6fd0259d-3f5c-487b-906c-db0ac2b00830</entry>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <entry name="uuid">6fd0259d-3f5c-487b-906c-db0ac2b00830</entry>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     </system>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   <os>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   </os>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   <features>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   </features>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/6fd0259d-3f5c-487b-906c-db0ac2b00830_disk">
Nov 25 08:32:49 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:32:49 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/6fd0259d-3f5c-487b-906c-db0ac2b00830_disk.config">
Nov 25 08:32:49 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       </source>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:32:49 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:f5:15:48"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <target dev="tap40df73d0-e4"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830/console.log" append="off"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <video>
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     </video>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:32:49 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:32:49 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:32:49 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:32:49 compute-0 nova_compute[253538]: </domain>
Nov 25 08:32:49 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.298 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Preparing to wait for external event network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.298 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.298 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.298 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.299 253542 DEBUG nova.virt.libvirt.vif [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1160841163',display_name='tempest-DeleteServersTestJSON-server-1160841163',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1160841163',id=55,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-tls64ynb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServer
sTestJSON-2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:44Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=6fd0259d-3f5c-487b-906c-db0ac2b00830,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.300 253542 DEBUG nova.network.os_vif_util [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.300 253542 DEBUG nova.network.os_vif_util [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:48,bridge_name='br-int',has_traffic_filtering=True,id=40df73d0-e48b-4bb9-96eb-c236dd2ca614,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40df73d0-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.301 253542 DEBUG os_vif [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:48,bridge_name='br-int',has_traffic_filtering=True,id=40df73d0-e48b-4bb9-96eb-c236dd2ca614,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40df73d0-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.301 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.302 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.302 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.305 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.305 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40df73d0-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.306 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap40df73d0-e4, col_values=(('external_ids', {'iface-id': '40df73d0-e48b-4bb9-96eb-c236dd2ca614', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:15:48', 'vm-uuid': '6fd0259d-3f5c-487b-906c-db0ac2b00830'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:49 compute-0 NetworkManager[48915]: <info>  [1764059569.3083] manager: (tap40df73d0-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.309 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.312 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.313 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.314 253542 INFO os_vif [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:48,bridge_name='br-int',has_traffic_filtering=True,id=40df73d0-e48b-4bb9-96eb-c236dd2ca614,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40df73d0-e4')
Nov 25 08:32:49 compute-0 ovn_controller[152859]: 2025-11-25T08:32:49Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:58:19 10.100.0.11
Nov 25 08:32:49 compute-0 ovn_controller[152859]: 2025-11-25T08:32:49Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:58:19 10.100.0.11
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.399 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.400 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.401 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No VIF found with MAC fa:16:3e:f5:15:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.401 253542 INFO nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Using config drive
Nov 25 08:32:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2914164472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/852451613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:32:49 compute-0 nova_compute[253538]: 2025-11-25 08:32:49.427 253542 DEBUG nova.storage.rbd_utils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:50 compute-0 nova_compute[253538]: 2025-11-25 08:32:50.201 253542 INFO nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Creating config drive at /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830/disk.config
Nov 25 08:32:50 compute-0 nova_compute[253538]: 2025-11-25 08:32:50.207 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuyr1zuj8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:50 compute-0 nova_compute[253538]: 2025-11-25 08:32:50.360 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuyr1zuj8" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:50 compute-0 nova_compute[253538]: 2025-11-25 08:32:50.382 253542 DEBUG nova.storage.rbd_utils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:50 compute-0 nova_compute[253538]: 2025-11-25 08:32:50.391 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830/disk.config 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:50 compute-0 ceph-mon[75015]: pgmap v1484: 321 pgs: 321 active+clean; 244 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.1 MiB/s wr, 254 op/s
Nov 25 08:32:50 compute-0 nova_compute[253538]: 2025-11-25 08:32:50.533 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:32:51 compute-0 nova_compute[253538]: 2025-11-25 08:32:51.091 253542 DEBUG nova.network.neutron [req-937c237f-6dea-4dcd-9db8-34d2413968c2 req-e1b46b55-ba74-4220-9055-b54a9df9f8bc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Updated VIF entry in instance network info cache for port 40df73d0-e48b-4bb9-96eb-c236dd2ca614. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:32:51 compute-0 nova_compute[253538]: 2025-11-25 08:32:51.092 253542 DEBUG nova.network.neutron [req-937c237f-6dea-4dcd-9db8-34d2413968c2 req-e1b46b55-ba74-4220-9055-b54a9df9f8bc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Updating instance_info_cache with network_info: [{"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:51 compute-0 nova_compute[253538]: 2025-11-25 08:32:51.106 253542 DEBUG oslo_concurrency.lockutils [req-937c237f-6dea-4dcd-9db8-34d2413968c2 req-e1b46b55-ba74-4220-9055-b54a9df9f8bc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-6fd0259d-3f5c-487b-906c-db0ac2b00830" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 274 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.9 MiB/s wr, 261 op/s
Nov 25 08:32:51 compute-0 ceph-mon[75015]: pgmap v1485: 321 pgs: 321 active+clean; 274 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.9 MiB/s wr, 261 op/s
Nov 25 08:32:51 compute-0 nova_compute[253538]: 2025-11-25 08:32:51.845 253542 DEBUG oslo_concurrency.processutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830/disk.config 6fd0259d-3f5c-487b-906c-db0ac2b00830_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:51 compute-0 nova_compute[253538]: 2025-11-25 08:32:51.846 253542 INFO nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Deleting local config drive /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830/disk.config because it was imported into RBD.
Nov 25 08:32:51 compute-0 kernel: tap40df73d0-e4: entered promiscuous mode
Nov 25 08:32:51 compute-0 NetworkManager[48915]: <info>  [1764059571.9098] manager: (tap40df73d0-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/212)
Nov 25 08:32:51 compute-0 ovn_controller[152859]: 2025-11-25T08:32:51Z|00456|binding|INFO|Claiming lport 40df73d0-e48b-4bb9-96eb-c236dd2ca614 for this chassis.
Nov 25 08:32:51 compute-0 ovn_controller[152859]: 2025-11-25T08:32:51Z|00457|binding|INFO|40df73d0-e48b-4bb9-96eb-c236dd2ca614: Claiming fa:16:3e:f5:15:48 10.100.0.10
Nov 25 08:32:51 compute-0 nova_compute[253538]: 2025-11-25 08:32:51.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:51 compute-0 ovn_controller[152859]: 2025-11-25T08:32:51Z|00458|binding|INFO|Setting lport 40df73d0-e48b-4bb9-96eb-c236dd2ca614 ovn-installed in OVS
Nov 25 08:32:51 compute-0 nova_compute[253538]: 2025-11-25 08:32:51.932 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:51 compute-0 nova_compute[253538]: 2025-11-25 08:32:51.935 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:51 compute-0 ovn_controller[152859]: 2025-11-25T08:32:51Z|00459|binding|INFO|Setting lport 40df73d0-e48b-4bb9-96eb-c236dd2ca614 up in Southbound
Nov 25 08:32:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.942 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:15:48 10.100.0.10'], port_security=['fa:16:3e:f5:15:48 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6fd0259d-3f5c-487b-906c-db0ac2b00830', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=40df73d0-e48b-4bb9-96eb-c236dd2ca614) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.944 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 40df73d0-e48b-4bb9-96eb-c236dd2ca614 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 bound to our chassis
Nov 25 08:32:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.946 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 08:32:51 compute-0 systemd-udevd[310974]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:32:51 compute-0 systemd-machined[215790]: New machine qemu-61-instance-00000037.
Nov 25 08:32:51 compute-0 NetworkManager[48915]: <info>  [1764059571.9643] device (tap40df73d0-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:32:51 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-00000037.
Nov 25 08:32:51 compute-0 NetworkManager[48915]: <info>  [1764059571.9655] device (tap40df73d0-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:32:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.966 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[927faca9-86af-4367-ab33-62ca61bee446]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.968 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa66e51b8-e1 in ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:32:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.970 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa66e51b8-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:32:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.970 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[41bf9a3f-6e49-4535-a483-c803cf743c05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.971 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[17c7d82c-9e11-44f4-9440-266f20230f04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.981 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[23784088-96b1-4f88-baa2-42f41cf195b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:51.994 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[126af961-b72c-41b0-b61d-4af8538d67b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.026 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[50f81391-eeda-48ad-a318-38e35e05e719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.031 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[57bfc8cd-325b-4f99-bb1e-17ac8eee47c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:52 compute-0 NetworkManager[48915]: <info>  [1764059572.0327] manager: (tapa66e51b8-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/213)
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.069 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f91090bc-9e50-414a-8642-23dbf99f389b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.072 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[69d1625d-6ebb-4165-bd3c-7313351ce5b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:52 compute-0 NetworkManager[48915]: <info>  [1764059572.0952] device (tapa66e51b8-e0): carrier: link connected
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.100 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[25d712eb-ef22-4261-9cbf-478e4addd614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:52 compute-0 sshd-session[310961]: Invalid user loginuser from 193.32.162.151 port 34616
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.119 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[afd58ac0-651f-4f09-a82f-fec876a01291]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492669, 'reachable_time': 37257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311008, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.135 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ebf1bf0-b0f8-486a-bfc9-623733ec9bc6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:2c20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 492669, 'tstamp': 492669}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311009, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.153 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7863c43d-1052-4d1a-ac8e-3f88d8515b34]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492669, 'reachable_time': 37257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311010, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.203 253542 DEBUG nova.compute.manager [req-1fd85400-7485-4ae7-833e-60fe4169e7e0 req-73916319-90e3-444b-8ea1-de5551d86721 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received event network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.203 253542 DEBUG oslo_concurrency.lockutils [req-1fd85400-7485-4ae7-833e-60fe4169e7e0 req-73916319-90e3-444b-8ea1-de5551d86721 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.204 253542 DEBUG oslo_concurrency.lockutils [req-1fd85400-7485-4ae7-833e-60fe4169e7e0 req-73916319-90e3-444b-8ea1-de5551d86721 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.204 253542 DEBUG oslo_concurrency.lockutils [req-1fd85400-7485-4ae7-833e-60fe4169e7e0 req-73916319-90e3-444b-8ea1-de5551d86721 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.204 253542 DEBUG nova.compute.manager [req-1fd85400-7485-4ae7-833e-60fe4169e7e0 req-73916319-90e3-444b-8ea1-de5551d86721 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Processing event network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.206 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7da12626-1af6-4ded-8aba-ce15303ebce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:52 compute-0 sshd-session[310961]: Connection closed by invalid user loginuser 193.32.162.151 port 34616 [preauth]
Nov 25 08:32:52 compute-0 kernel: tapa66e51b8-e0: entered promiscuous mode
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.273 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fa44f851-9af1-45b1-8204-f7d34e763e4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.276 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.276 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.276 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa66e51b8-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.278 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:52 compute-0 NetworkManager[48915]: <info>  [1764059572.2793] manager: (tapa66e51b8-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.280 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.282 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa66e51b8-e0, col_values=(('external_ids', {'iface-id': 'c0d74b17-7eba-4096-a861-b9247777e01c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.284 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.287 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.288 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:32:52 compute-0 ovn_controller[152859]: 2025-11-25T08:32:52Z|00460|binding|INFO|Releasing lport c0d74b17-7eba-4096-a861-b9247777e01c from this chassis (sb_readonly=0)
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.291 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3db9f9-0ee7-40c8-9126-30c9c20476a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.292 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:32:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:52.293 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'env', 'PROCESS_TAG=haproxy-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a66e51b8-ecb0-4289-a1b5-d5e379727721.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.317 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.584 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059572.584037, 6fd0259d-3f5c-487b-906c-db0ac2b00830 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.585 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] VM Started (Lifecycle Event)
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.587 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.591 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.593 253542 INFO nova.virt.libvirt.driver [-] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Instance spawned successfully.
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.593 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.624 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.631 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.635 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.636 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.637 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.637 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.637 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.638 253542 DEBUG nova.virt.libvirt.driver [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:32:52 compute-0 podman[311083]: 2025-11-25 08:32:52.691565548 +0000 UTC m=+0.029041101 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.801 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.801 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059572.5843024, 6fd0259d-3f5c-487b-906c-db0ac2b00830 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.801 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] VM Paused (Lifecycle Event)
Nov 25 08:32:52 compute-0 podman[311083]: 2025-11-25 08:32:52.805206933 +0000 UTC m=+0.142682466 container create 11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.820 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.825 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059572.590889, 6fd0259d-3f5c-487b-906c-db0ac2b00830 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.825 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] VM Resumed (Lifecycle Event)
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.842 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.847 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.860 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:32:52 compute-0 systemd[1]: Started libpod-conmon-11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4.scope.
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.896 253542 INFO nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Took 8.65 seconds to spawn the instance on the hypervisor.
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.897 253542 DEBUG nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:32:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/391b8476a98a77d0632e121cc7458acf3109db7ed9a9f0b088b923765e8de82c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:32:52 compute-0 podman[311083]: 2025-11-25 08:32:52.931653731 +0000 UTC m=+0.269129294 container init 11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:32:52 compute-0 podman[311083]: 2025-11-25 08:32:52.937641542 +0000 UTC m=+0.275117075 container start 11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.954 253542 INFO nova.compute.manager [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Took 9.56 seconds to build instance.
Nov 25 08:32:52 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[311098]: [NOTICE]   (311102) : New worker (311104) forked
Nov 25 08:32:52 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[311098]: [NOTICE]   (311102) : Loading success.
Nov 25 08:32:52 compute-0 nova_compute[253538]: 2025-11-25 08:32:52.986 253542 DEBUG oslo_concurrency.lockutils [None req-eed2a685-acc6-4daa-9135-691466117b97 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 299 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.6 MiB/s wr, 256 op/s
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:32:53
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.control']
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:32:53 compute-0 ovn_controller[152859]: 2025-11-25T08:32:53Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:9d:e4 10.100.0.11
Nov 25 08:32:53 compute-0 ovn_controller[152859]: 2025-11-25T08:32:53Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:9d:e4 10.100.0.11
Nov 25 08:32:53 compute-0 ovn_controller[152859]: 2025-11-25T08:32:53Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:cd:40 10.100.0.13
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:32:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:32:54 compute-0 ceph-mon[75015]: pgmap v1486: 321 pgs: 321 active+clean; 299 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.6 MiB/s wr, 256 op/s
Nov 25 08:32:54 compute-0 nova_compute[253538]: 2025-11-25 08:32:54.308 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:54 compute-0 nova_compute[253538]: 2025-11-25 08:32:54.328 253542 INFO nova.compute.manager [None req-32b6625c-4f16-4686-a25b-706aa6659172 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Pausing
Nov 25 08:32:54 compute-0 nova_compute[253538]: 2025-11-25 08:32:54.329 253542 DEBUG nova.objects.instance [None req-32b6625c-4f16-4686-a25b-706aa6659172 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'flavor' on Instance uuid 6fd0259d-3f5c-487b-906c-db0ac2b00830 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:54 compute-0 nova_compute[253538]: 2025-11-25 08:32:54.358 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059574.3579557, 6fd0259d-3f5c-487b-906c-db0ac2b00830 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:32:54 compute-0 nova_compute[253538]: 2025-11-25 08:32:54.358 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] VM Paused (Lifecycle Event)
Nov 25 08:32:54 compute-0 nova_compute[253538]: 2025-11-25 08:32:54.359 253542 DEBUG nova.compute.manager [None req-32b6625c-4f16-4686-a25b-706aa6659172 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:54 compute-0 nova_compute[253538]: 2025-11-25 08:32:54.387 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:32:54 compute-0 nova_compute[253538]: 2025-11-25 08:32:54.390 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:32:54 compute-0 nova_compute[253538]: 2025-11-25 08:32:54.415 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 25 08:32:54 compute-0 nova_compute[253538]: 2025-11-25 08:32:54.637 253542 DEBUG nova.compute.manager [req-31da3433-ec87-46c3-affc-d3cc304f55e4 req-1cf28725-cccc-4292-b311-ffb3de48b611 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received event network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:54 compute-0 nova_compute[253538]: 2025-11-25 08:32:54.637 253542 DEBUG oslo_concurrency.lockutils [req-31da3433-ec87-46c3-affc-d3cc304f55e4 req-1cf28725-cccc-4292-b311-ffb3de48b611 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:54 compute-0 nova_compute[253538]: 2025-11-25 08:32:54.638 253542 DEBUG oslo_concurrency.lockutils [req-31da3433-ec87-46c3-affc-d3cc304f55e4 req-1cf28725-cccc-4292-b311-ffb3de48b611 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:54 compute-0 nova_compute[253538]: 2025-11-25 08:32:54.638 253542 DEBUG oslo_concurrency.lockutils [req-31da3433-ec87-46c3-affc-d3cc304f55e4 req-1cf28725-cccc-4292-b311-ffb3de48b611 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:54 compute-0 nova_compute[253538]: 2025-11-25 08:32:54.638 253542 DEBUG nova.compute.manager [req-31da3433-ec87-46c3-affc-d3cc304f55e4 req-1cf28725-cccc-4292-b311-ffb3de48b611 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] No waiting events found dispatching network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:54 compute-0 nova_compute[253538]: 2025-11-25 08:32:54.638 253542 WARNING nova.compute.manager [req-31da3433-ec87-46c3-affc-d3cc304f55e4 req-1cf28725-cccc-4292-b311-ffb3de48b611 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received unexpected event network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 for instance with vm_state paused and task_state None.
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.023 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:32:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 318 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 6.0 MiB/s wr, 286 op/s
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.285 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "6fd0259d-3f5c-487b-906c-db0ac2b00830" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.286 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.286 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.286 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.286 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.288 253542 INFO nova.compute.manager [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Terminating instance
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.289 253542 DEBUG nova.compute.manager [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:32:55 compute-0 kernel: tap40df73d0-e4 (unregistering): left promiscuous mode
Nov 25 08:32:55 compute-0 NetworkManager[48915]: <info>  [1764059575.3301] device (tap40df73d0-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.343 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:55 compute-0 ovn_controller[152859]: 2025-11-25T08:32:55Z|00461|binding|INFO|Releasing lport 40df73d0-e48b-4bb9-96eb-c236dd2ca614 from this chassis (sb_readonly=0)
Nov 25 08:32:55 compute-0 ovn_controller[152859]: 2025-11-25T08:32:55Z|00462|binding|INFO|Setting lport 40df73d0-e48b-4bb9-96eb-c236dd2ca614 down in Southbound
Nov 25 08:32:55 compute-0 ovn_controller[152859]: 2025-11-25T08:32:55Z|00463|binding|INFO|Removing iface tap40df73d0-e4 ovn-installed in OVS
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.345 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.351 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:15:48 10.100.0.10'], port_security=['fa:16:3e:f5:15:48 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6fd0259d-3f5c-487b-906c-db0ac2b00830', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=40df73d0-e48b-4bb9-96eb-c236dd2ca614) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.352 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 40df73d0-e48b-4bb9-96eb-c236dd2ca614 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 unbound from our chassis
Nov 25 08:32:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.353 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a66e51b8-ecb0-4289-a1b5-d5e379727721, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:32:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.354 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a58a4db8-6b6e-4775-92f5-4d79557b1af5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.355 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace which is not needed anymore
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.360 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:55 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000037.scope: Deactivated successfully.
Nov 25 08:32:55 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000037.scope: Consumed 2.110s CPU time.
Nov 25 08:32:55 compute-0 systemd-machined[215790]: Machine qemu-61-instance-00000037 terminated.
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.489 253542 DEBUG oslo_concurrency.lockutils [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.491 253542 DEBUG oslo_concurrency.lockutils [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.492 253542 DEBUG nova.objects.instance [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.516 253542 DEBUG nova.objects.instance [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.528 253542 DEBUG nova.network.neutron [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.579 253542 INFO nova.virt.libvirt.driver [-] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Instance destroyed successfully.
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.581 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.584 253542 DEBUG nova.objects.instance [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'resources' on Instance uuid 6fd0259d-3f5c-487b-906c-db0ac2b00830 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.597 253542 DEBUG nova.virt.libvirt.vif [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1160841163',display_name='tempest-DeleteServersTestJSON-server-1160841163',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1160841163',id=55,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-tls64ynb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:54Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=6fd0259d-3f5c-487b-906c-db0ac2b00830,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.598 253542 DEBUG nova.network.os_vif_util [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "address": "fa:16:3e:f5:15:48", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40df73d0-e4", "ovs_interfaceid": "40df73d0-e48b-4bb9-96eb-c236dd2ca614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.599 253542 DEBUG nova.network.os_vif_util [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:48,bridge_name='br-int',has_traffic_filtering=True,id=40df73d0-e48b-4bb9-96eb-c236dd2ca614,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40df73d0-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.600 253542 DEBUG os_vif [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:48,bridge_name='br-int',has_traffic_filtering=True,id=40df73d0-e48b-4bb9-96eb-c236dd2ca614,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40df73d0-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.603 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40df73d0-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.604 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.606 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.609 253542 INFO os_vif [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:48,bridge_name='br-int',has_traffic_filtering=True,id=40df73d0-e48b-4bb9-96eb-c236dd2ca614,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40df73d0-e4')
Nov 25 08:32:55 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[311098]: [NOTICE]   (311102) : haproxy version is 2.8.14-c23fe91
Nov 25 08:32:55 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[311098]: [NOTICE]   (311102) : path to executable is /usr/sbin/haproxy
Nov 25 08:32:55 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[311098]: [WARNING]  (311102) : Exiting Master process...
Nov 25 08:32:55 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[311098]: [WARNING]  (311102) : Exiting Master process...
Nov 25 08:32:55 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[311098]: [ALERT]    (311102) : Current worker (311104) exited with code 143 (Terminated)
Nov 25 08:32:55 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[311098]: [WARNING]  (311102) : All workers exited. Exiting... (0)
Nov 25 08:32:55 compute-0 systemd[1]: libpod-11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4.scope: Deactivated successfully.
Nov 25 08:32:55 compute-0 podman[311138]: 2025-11-25 08:32:55.626839086 +0000 UTC m=+0.107650483 container died 11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-391b8476a98a77d0632e121cc7458acf3109db7ed9a9f0b088b923765e8de82c-merged.mount: Deactivated successfully.
Nov 25 08:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4-userdata-shm.mount: Deactivated successfully.
Nov 25 08:32:55 compute-0 podman[311138]: 2025-11-25 08:32:55.679781429 +0000 UTC m=+0.160592816 container cleanup 11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:32:55 compute-0 systemd[1]: libpod-conmon-11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4.scope: Deactivated successfully.
Nov 25 08:32:55 compute-0 podman[311194]: 2025-11-25 08:32:55.761577407 +0000 UTC m=+0.053888289 container remove 11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:32:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:32:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.772 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7f2169a2-5931-4d19-ae13-30826ebfd4d2]: (4, ('Tue Nov 25 08:32:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4)\n11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4\nTue Nov 25 08:32:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4)\n11d557cb7ecd3bf9378976bca866b5227a5d6df4370ee8cee61561fe6a2481f4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.774 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf3990a-5a20-48b3-8d24-a9f939f6b20e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.775 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:55 compute-0 kernel: tapa66e51b8-e0: left promiscuous mode
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.777 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.792 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:55 compute-0 nova_compute[253538]: 2025-11-25 08:32:55.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.795 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7b9ecf93-c459-465a-b4b0-c92d01c6828c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.806 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d73c4bd9-6caa-4cd2-99d1-577acd93523d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.808 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00d35e44-b179-4103-9774-6d7cf7abe3bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.825 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e79f9ebe-7c71-4a2c-9bf2-c6a9dbbe7f0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492661, 'reachable_time': 25224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311209, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.828 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:32:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:55.828 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ceefa5-cff5-4beb-a0bc-619457430f8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:55 compute-0 systemd[1]: run-netns-ovnmeta\x2da66e51b8\x2decb0\x2d4289\x2da1b5\x2dd5e379727721.mount: Deactivated successfully.
Nov 25 08:32:56 compute-0 nova_compute[253538]: 2025-11-25 08:32:56.068 253542 DEBUG nova.policy [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:32:56 compute-0 nova_compute[253538]: 2025-11-25 08:32:56.114 253542 INFO nova.virt.libvirt.driver [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Deleting instance files /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830_del
Nov 25 08:32:56 compute-0 nova_compute[253538]: 2025-11-25 08:32:56.115 253542 INFO nova.virt.libvirt.driver [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Deletion of /var/lib/nova/instances/6fd0259d-3f5c-487b-906c-db0ac2b00830_del complete
Nov 25 08:32:56 compute-0 nova_compute[253538]: 2025-11-25 08:32:56.174 253542 INFO nova.compute.manager [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Took 0.88 seconds to destroy the instance on the hypervisor.
Nov 25 08:32:56 compute-0 nova_compute[253538]: 2025-11-25 08:32:56.174 253542 DEBUG oslo.service.loopingcall [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:32:56 compute-0 nova_compute[253538]: 2025-11-25 08:32:56.174 253542 DEBUG nova.compute.manager [-] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:32:56 compute-0 nova_compute[253538]: 2025-11-25 08:32:56.175 253542 DEBUG nova.network.neutron [-] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:32:56 compute-0 ceph-mon[75015]: pgmap v1487: 321 pgs: 321 active+clean; 318 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 6.0 MiB/s wr, 286 op/s
Nov 25 08:32:56 compute-0 nova_compute[253538]: 2025-11-25 08:32:56.988 253542 DEBUG nova.network.neutron [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Successfully created port: 223207be-35e0-4b8b-bf78-113792059910 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:32:56 compute-0 nova_compute[253538]: 2025-11-25 08:32:56.992 253542 DEBUG nova.network.neutron [-] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.006 253542 INFO nova.compute.manager [-] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Took 0.83 seconds to deallocate network for instance.
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.029 253542 DEBUG nova.compute.manager [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received event network-vif-unplugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.029 253542 DEBUG oslo_concurrency.lockutils [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.029 253542 DEBUG oslo_concurrency.lockutils [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.030 253542 DEBUG oslo_concurrency.lockutils [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.030 253542 DEBUG nova.compute.manager [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] No waiting events found dispatching network-vif-unplugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.030 253542 DEBUG nova.compute.manager [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received event network-vif-unplugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.030 253542 DEBUG nova.compute.manager [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received event network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.030 253542 DEBUG oslo_concurrency.lockutils [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.030 253542 DEBUG oslo_concurrency.lockutils [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.031 253542 DEBUG oslo_concurrency.lockutils [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.031 253542 DEBUG nova.compute.manager [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] No waiting events found dispatching network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.031 253542 WARNING nova.compute.manager [req-bf46663e-2576-46d8-b8f5-31b61d9ddb22 req-a0d7f00a-cdd5-4b23-a1fe-c4dc91e543c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received unexpected event network-vif-plugged-40df73d0-e48b-4bb9-96eb-c236dd2ca614 for instance with vm_state paused and task_state deleting.
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.039 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.040 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.151 253542 DEBUG oslo_concurrency.processutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 310 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.0 MiB/s wr, 237 op/s
Nov 25 08:32:57 compute-0 kernel: tap79f4b8f5-d5 (unregistering): left promiscuous mode
Nov 25 08:32:57 compute-0 NetworkManager[48915]: <info>  [1764059577.5404] device (tap79f4b8f5-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:32:57 compute-0 ovn_controller[152859]: 2025-11-25T08:32:57Z|00464|binding|INFO|Releasing lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de from this chassis (sb_readonly=0)
Nov 25 08:32:57 compute-0 ovn_controller[152859]: 2025-11-25T08:32:57Z|00465|binding|INFO|Setting lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de down in Southbound
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.547 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:57 compute-0 ovn_controller[152859]: 2025-11-25T08:32:57Z|00466|binding|INFO|Removing iface tap79f4b8f5-d5 ovn-installed in OVS
Nov 25 08:32:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:57.555 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:9d:e4 10.100.0.11'], port_security=['fa:16:3e:24:9d:e4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c0942fc7-74d4-4fc8-9574-4fea9179e71b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=79f4b8f5-d582-44c5-b8e0-a82ad73193de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:57.556 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 79f4b8f5-d582-44c5-b8e0-a82ad73193de in datapath eb25945d-6002-4a99-b682-034a8a3dc901 unbound from our chassis
Nov 25 08:32:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:57.557 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb25945d-6002-4a99-b682-034a8a3dc901, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:32:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:57.558 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f17a0973-a10a-45d5-ad72-efc039dc7895]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:57.558 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace which is not needed anymore
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:57 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000035.scope: Deactivated successfully.
Nov 25 08:32:57 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000035.scope: Consumed 13.937s CPU time.
Nov 25 08:32:57 compute-0 systemd-machined[215790]: Machine qemu-59-instance-00000035 terminated.
Nov 25 08:32:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:32:57 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2859965703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.687 253542 DEBUG oslo_concurrency.processutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.694 253542 DEBUG nova.compute.provider_tree [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.707 253542 DEBUG nova.scheduler.client.report [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:32:57 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[310310]: [NOTICE]   (310318) : haproxy version is 2.8.14-c23fe91
Nov 25 08:32:57 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[310310]: [NOTICE]   (310318) : path to executable is /usr/sbin/haproxy
Nov 25 08:32:57 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[310310]: [WARNING]  (310318) : Exiting Master process...
Nov 25 08:32:57 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[310310]: [WARNING]  (310318) : Exiting Master process...
Nov 25 08:32:57 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[310310]: [ALERT]    (310318) : Current worker (310321) exited with code 143 (Terminated)
Nov 25 08:32:57 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[310310]: [WARNING]  (310318) : All workers exited. Exiting... (0)
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.730 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:57 compute-0 systemd[1]: libpod-54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb.scope: Deactivated successfully.
Nov 25 08:32:57 compute-0 podman[311253]: 2025-11-25 08:32:57.732440277 +0000 UTC m=+0.073203559 container died 54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.752 253542 INFO nova.scheduler.client.report [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Deleted allocations for instance 6fd0259d-3f5c-487b-906c-db0ac2b00830
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.763 253542 DEBUG nova.network.neutron [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Successfully updated port: 223207be-35e0-4b8b-bf78-113792059910 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.768 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.778 253542 DEBUG oslo_concurrency.lockutils [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.778 253542 DEBUG oslo_concurrency.lockutils [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.778 253542 DEBUG nova.network.neutron [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.810 253542 DEBUG oslo_concurrency.lockutils [None req-e3223844-2cf6-4665-a1fb-a42de2e4838f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "6fd0259d-3f5c-487b-906c-db0ac2b00830" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.830 253542 DEBUG nova.compute.manager [req-e046f0cf-39ab-4a45-8eaa-3396c1ebfedd req-9f58b525-4a70-4510-aec9-05b9481a02e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-unplugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.830 253542 DEBUG oslo_concurrency.lockutils [req-e046f0cf-39ab-4a45-8eaa-3396c1ebfedd req-9f58b525-4a70-4510-aec9-05b9481a02e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.831 253542 DEBUG oslo_concurrency.lockutils [req-e046f0cf-39ab-4a45-8eaa-3396c1ebfedd req-9f58b525-4a70-4510-aec9-05b9481a02e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.831 253542 DEBUG oslo_concurrency.lockutils [req-e046f0cf-39ab-4a45-8eaa-3396c1ebfedd req-9f58b525-4a70-4510-aec9-05b9481a02e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.831 253542 DEBUG nova.compute.manager [req-e046f0cf-39ab-4a45-8eaa-3396c1ebfedd req-9f58b525-4a70-4510-aec9-05b9481a02e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] No waiting events found dispatching network-vif-unplugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:32:57 compute-0 nova_compute[253538]: 2025-11-25 08:32:57.831 253542 WARNING nova.compute.manager [req-e046f0cf-39ab-4a45-8eaa-3396c1ebfedd req-9f58b525-4a70-4510-aec9-05b9481a02e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received unexpected event network-vif-unplugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de for instance with vm_state active and task_state rebuilding.
Nov 25 08:32:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-1eef64a7b2bc3a9c676eca2c757f9f8de2a6b5948fd7a6d5db2aa9098a5e44dd-merged.mount: Deactivated successfully.
Nov 25 08:32:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb-userdata-shm.mount: Deactivated successfully.
Nov 25 08:32:57 compute-0 podman[311253]: 2025-11-25 08:32:57.98263689 +0000 UTC m=+0.323400162 container cleanup 54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:32:57 compute-0 systemd[1]: libpod-conmon-54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb.scope: Deactivated successfully.
Nov 25 08:32:58 compute-0 nova_compute[253538]: 2025-11-25 08:32:58.039 253542 INFO nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance shutdown successfully after 13 seconds.
Nov 25 08:32:58 compute-0 nova_compute[253538]: 2025-11-25 08:32:58.045 253542 INFO nova.virt.libvirt.driver [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance destroyed successfully.
Nov 25 08:32:58 compute-0 nova_compute[253538]: 2025-11-25 08:32:58.048 253542 INFO nova.virt.libvirt.driver [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance destroyed successfully.
Nov 25 08:32:58 compute-0 nova_compute[253538]: 2025-11-25 08:32:58.049 253542 DEBUG nova.virt.libvirt.vif [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1031241876',display_name='tempest-ServerDiskConfigTestJSON-server-1031241876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1031241876',id=53,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-p1t9fwfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-m
ember'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:44Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=c0942fc7-74d4-4fc8-9574-4fea9179e71b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:32:58 compute-0 nova_compute[253538]: 2025-11-25 08:32:58.049 253542 DEBUG nova.network.os_vif_util [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:58 compute-0 nova_compute[253538]: 2025-11-25 08:32:58.050 253542 DEBUG nova.network.os_vif_util [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:58 compute-0 nova_compute[253538]: 2025-11-25 08:32:58.050 253542 DEBUG os_vif [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:32:58 compute-0 nova_compute[253538]: 2025-11-25 08:32:58.052 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:58 compute-0 nova_compute[253538]: 2025-11-25 08:32:58.052 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79f4b8f5-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:58 compute-0 nova_compute[253538]: 2025-11-25 08:32:58.054 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:32:58 compute-0 nova_compute[253538]: 2025-11-25 08:32:58.056 253542 INFO os_vif [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5')
Nov 25 08:32:58 compute-0 podman[311295]: 2025-11-25 08:32:58.069007302 +0000 UTC m=+0.066777556 container remove 54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:32:58 compute-0 nova_compute[253538]: 2025-11-25 08:32:58.073 253542 WARNING nova.network.neutron [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it
Nov 25 08:32:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.082 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f187da-03c7-4b5a-9b0e-7697847525b2]: (4, ('Tue Nov 25 08:32:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb)\n54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb\nTue Nov 25 08:32:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb)\n54bbfdbca908ba90b6cb297dbb25cf4a267533c3b4a170896d3f7c9c52c76ceb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.084 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f74a7425-4c3e-4310-ae51-b49256fe18b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.085 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:58 compute-0 nova_compute[253538]: 2025-11-25 08:32:58.087 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:58 compute-0 kernel: tapeb25945d-60: left promiscuous mode
Nov 25 08:32:58 compute-0 nova_compute[253538]: 2025-11-25 08:32:58.122 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.125 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[274a246c-3c09-4f7e-a2a5-b4c92d182515]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.145 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e61c9af1-b33e-4a2a-bd1e-9c10229ab6b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.147 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c97c1296-c7c2-4f26-85b7-ab812716e16b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.163 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6d47231b-a59b-4430-9c42-8c7343fb6477]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491227, 'reachable_time': 42325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311328, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.166 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:32:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:58.166 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[2505cf60-b893-4357-8906-889cf9d270d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:58 compute-0 systemd[1]: run-netns-ovnmeta\x2deb25945d\x2d6002\x2d4a99\x2db682\x2d034a8a3dc901.mount: Deactivated successfully.
Nov 25 08:32:58 compute-0 ceph-mon[75015]: pgmap v1488: 321 pgs: 321 active+clean; 310 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.0 MiB/s wr, 237 op/s
Nov 25 08:32:58 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2859965703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:32:58 compute-0 nova_compute[253538]: 2025-11-25 08:32:58.644 253542 DEBUG nova.virt.libvirt.driver [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.112 253542 INFO nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Deleting instance files /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b_del
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.113 253542 INFO nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Deletion of /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b_del complete
Nov 25 08:32:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 301 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.8 MiB/s wr, 273 op/s
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.254 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.254 253542 INFO nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Creating image(s)
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.283 253542 DEBUG nova.storage.rbd_utils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.310 253542 DEBUG nova.storage.rbd_utils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.339 253542 DEBUG nova.storage.rbd_utils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.343 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.431 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.433 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.434 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.434 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.466 253542 DEBUG nova.storage.rbd_utils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.469 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.697 253542 DEBUG nova.network.neutron [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.722 253542 DEBUG oslo_concurrency.lockutils [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.732 253542 DEBUG nova.virt.libvirt.vif [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.733 253542 DEBUG nova.network.os_vif_util [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.735 253542 DEBUG nova.network.os_vif_util [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.736 253542 DEBUG os_vif [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.737 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.738 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.738 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.742 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.743 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap223207be-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.743 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap223207be-35, col_values=(('external_ids', {'iface-id': '223207be-35e0-4b8b-bf78-113792059910', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:eb:0c', 'vm-uuid': '8191f951-44bc-4371-957a-f2e7d37c1a32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.745 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:59 compute-0 NetworkManager[48915]: <info>  [1764059579.7468] manager: (tap223207be-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.748 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.754 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.755 253542 INFO os_vif [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35')
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.756 253542 DEBUG nova.virt.libvirt.vif [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.756 253542 DEBUG nova.network.os_vif_util [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.757 253542 DEBUG nova.network.os_vif_util [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.760 253542 DEBUG nova.virt.libvirt.guest [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] attach device xml: <interface type="ethernet">
Nov 25 08:32:59 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:3a:eb:0c"/>
Nov 25 08:32:59 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:32:59 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:32:59 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:32:59 compute-0 nova_compute[253538]:   <target dev="tap223207be-35"/>
Nov 25 08:32:59 compute-0 nova_compute[253538]: </interface>
Nov 25 08:32:59 compute-0 nova_compute[253538]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 08:32:59 compute-0 kernel: tap223207be-35: entered promiscuous mode
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.773 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:59 compute-0 ovn_controller[152859]: 2025-11-25T08:32:59Z|00467|binding|INFO|Claiming lport 223207be-35e0-4b8b-bf78-113792059910 for this chassis.
Nov 25 08:32:59 compute-0 ovn_controller[152859]: 2025-11-25T08:32:59Z|00468|binding|INFO|223207be-35e0-4b8b-bf78-113792059910: Claiming fa:16:3e:3a:eb:0c 10.100.0.12
Nov 25 08:32:59 compute-0 NetworkManager[48915]: <info>  [1764059579.7748] manager: (tap223207be-35): new Tun device (/org/freedesktop/NetworkManager/Devices/216)
Nov 25 08:32:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.779 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:eb:0c 10.100.0.12'], port_security=['fa:16:3e:3a:eb:0c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8191f951-44bc-4371-957a-f2e7d37c1a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=223207be-35e0-4b8b-bf78-113792059910) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:32:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.781 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 223207be-35e0-4b8b-bf78-113792059910 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe bound to our chassis
Nov 25 08:32:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.782 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:32:59 compute-0 ovn_controller[152859]: 2025-11-25T08:32:59Z|00469|binding|INFO|Setting lport 223207be-35e0-4b8b-bf78-113792059910 ovn-installed in OVS
Nov 25 08:32:59 compute-0 ovn_controller[152859]: 2025-11-25T08:32:59Z|00470|binding|INFO|Setting lport 223207be-35e0-4b8b-bf78-113792059910 up in Southbound
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.803 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c3acb253-ac54-49ff-b7fb-7e23ea344d6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:59 compute-0 systemd-udevd[311431]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:32:59 compute-0 NetworkManager[48915]: <info>  [1764059579.8230] device (tap223207be-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:32:59 compute-0 NetworkManager[48915]: <info>  [1764059579.8237] device (tap223207be-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:32:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.833 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc1b561-35ec-48f7-a769-b63c3dde36f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.837 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ae981f40-ba9f-4245-aac6-539521b0772e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.852 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:32:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.870 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa452e9-9f44-4fa6-bdbd-3b0d38682187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.889 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[80135125-7fcf-4105-b33c-c186399884a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490985, 'reachable_time': 15416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311454, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.908 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[36002729-5665-41b7-8b43-b432e87ec16e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490996, 'tstamp': 490996}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311464, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490999, 'tstamp': 490999}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311464, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:32:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.910 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.913 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.913 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.913 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:32:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:32:59.914 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.914 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.916 253542 DEBUG nova.virt.libvirt.driver [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.917 253542 DEBUG nova.virt.libvirt.driver [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.917 253542 DEBUG nova.virt.libvirt.driver [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:1a:58:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.917 253542 DEBUG nova.virt.libvirt.driver [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:3a:eb:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.926 253542 DEBUG nova.storage.rbd_utils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] resizing rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.951 253542 DEBUG nova.virt.libvirt.guest [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:32:59 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:32:59 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 08:32:59 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:32:59</nova:creationTime>
Nov 25 08:32:59 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:32:59 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:32:59 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:32:59 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:32:59 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:32:59 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:32:59 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:32:59 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:32:59 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:32:59 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:32:59 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:32:59 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:32:59 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:32:59 compute-0 nova_compute[253538]:     <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 08:32:59 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:32:59 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:32:59 compute-0 nova_compute[253538]:     <nova:port uuid="223207be-35e0-4b8b-bf78-113792059910">
Nov 25 08:32:59 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 08:32:59 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:32:59 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:32:59 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:32:59 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:32:59 compute-0 nova_compute[253538]: 2025-11-25 08:32:59.971 253542 DEBUG oslo_concurrency.lockutils [None req-752c8843-dcf6-4988-b600-41333c98989c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.017 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.018 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Ensure instance console log exists: /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.018 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.019 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.019 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.021 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Start _get_guest_xml network_info=[{"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.023 253542 WARNING nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.028 253542 DEBUG nova.virt.libvirt.host [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.028 253542 DEBUG nova.virt.libvirt.host [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.030 253542 DEBUG nova.virt.libvirt.host [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.031 253542 DEBUG nova.virt.libvirt.host [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.031 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.031 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.032 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.032 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.032 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.032 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.032 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.032 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.033 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.033 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.033 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.033 253542 DEBUG nova.virt.hardware [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.033 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.050 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.211 253542 DEBUG nova.compute.manager [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Received event network-vif-deleted-40df73d0-e48b-4bb9-96eb-c236dd2ca614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.212 253542 DEBUG nova.compute.manager [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-changed-223207be-35e0-4b8b-bf78-113792059910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.213 253542 DEBUG nova.compute.manager [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing instance network info cache due to event network-changed-223207be-35e0-4b8b-bf78-113792059910. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.213 253542 DEBUG oslo_concurrency.lockutils [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.214 253542 DEBUG oslo_concurrency.lockutils [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.214 253542 DEBUG nova.network.neutron [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing network info cache for port 223207be-35e0-4b8b-bf78-113792059910 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:33:00 compute-0 ceph-mon[75015]: pgmap v1489: 321 pgs: 321 active+clean; 301 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.8 MiB/s wr, 273 op/s
Nov 25 08:33:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:33:00 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2046479535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.495 253542 DEBUG nova.compute.manager [req-ba8200bb-8537-4279-8720-f65b110a39fd req-c0823b6b-4ac3-4866-8896-b666875febf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.496 253542 DEBUG oslo_concurrency.lockutils [req-ba8200bb-8537-4279-8720-f65b110a39fd req-c0823b6b-4ac3-4866-8896-b666875febf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.496 253542 DEBUG oslo_concurrency.lockutils [req-ba8200bb-8537-4279-8720-f65b110a39fd req-c0823b6b-4ac3-4866-8896-b666875febf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.496 253542 DEBUG oslo_concurrency.lockutils [req-ba8200bb-8537-4279-8720-f65b110a39fd req-c0823b6b-4ac3-4866-8896-b666875febf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.496 253542 DEBUG nova.compute.manager [req-ba8200bb-8537-4279-8720-f65b110a39fd req-c0823b6b-4ac3-4866-8896-b666875febf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] No waiting events found dispatching network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.497 253542 WARNING nova.compute.manager [req-ba8200bb-8537-4279-8720-f65b110a39fd req-c0823b6b-4ac3-4866-8896-b666875febf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received unexpected event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de for instance with vm_state active and task_state rebuild_spawning.
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.511 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.529 253542 DEBUG nova.storage.rbd_utils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.532 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.568 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "528fb917-0169-441d-b32d-652963344aea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.569 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.585 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.650 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.651 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.657 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.658 253542 INFO nova.compute.claims [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:33:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.803 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:33:00 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3902323065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.970 253542 DEBUG oslo_concurrency.lockutils [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.971 253542 DEBUG oslo_concurrency.lockutils [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.971 253542 DEBUG nova.objects.instance [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.980 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.981 253542 DEBUG nova.virt.libvirt.vif [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1031241876',display_name='tempest-ServerDiskConfigTestJSON-server-1031241876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1031241876',id=53,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-p1t9fwfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest
-ServerDiskConfigTestJSON-1655399928-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:59Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=c0942fc7-74d4-4fc8-9574-4fea9179e71b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.982 253542 DEBUG nova.network.os_vif_util [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.983 253542 DEBUG nova.network.os_vif_util [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.985 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:33:00 compute-0 nova_compute[253538]:   <uuid>c0942fc7-74d4-4fc8-9574-4fea9179e71b</uuid>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   <name>instance-00000035</name>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1031241876</nova:name>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:33:00</nova:creationTime>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:33:00 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:33:00 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:33:00 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:33:00 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:33:00 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:33:00 compute-0 nova_compute[253538]:         <nova:user uuid="7ad88cb0e4cf4d0b8e4cbec835318015">tempest-ServerDiskConfigTestJSON-1655399928-project-member</nova:user>
Nov 25 08:33:00 compute-0 nova_compute[253538]:         <nova:project uuid="dc93aa65bef7473d961e0cad1e8f2962">tempest-ServerDiskConfigTestJSON-1655399928</nova:project>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:33:00 compute-0 nova_compute[253538]:         <nova:port uuid="79f4b8f5-d582-44c5-b8e0-a82ad73193de">
Nov 25 08:33:00 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <system>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <entry name="serial">c0942fc7-74d4-4fc8-9574-4fea9179e71b</entry>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <entry name="uuid">c0942fc7-74d4-4fc8-9574-4fea9179e71b</entry>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     </system>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   <os>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   </os>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   <features>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   </features>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk">
Nov 25 08:33:00 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:33:00 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config">
Nov 25 08:33:00 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:33:00 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:24:9d:e4"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <target dev="tap79f4b8f5-d5"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/console.log" append="off"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <video>
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     </video>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:33:00 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:33:00 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:33:00 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:33:00 compute-0 nova_compute[253538]: </domain>
Nov 25 08:33:00 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.985 253542 DEBUG nova.compute.manager [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Preparing to wait for external event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.985 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.986 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.986 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.987 253542 DEBUG nova.virt.libvirt.vif [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1031241876',display_name='tempest-ServerDiskConfigTestJSON-server-1031241876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1031241876',id=53,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-p1t9fwfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:32:59Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=c0942fc7-74d4-4fc8-9574-4fea9179e71b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.987 253542 DEBUG nova.network.os_vif_util [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.987 253542 DEBUG nova.network.os_vif_util [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.988 253542 DEBUG os_vif [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.990 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.991 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.992 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:00 compute-0 ovn_controller[152859]: 2025-11-25T08:33:00Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:eb:0c 10.100.0.12
Nov 25 08:33:00 compute-0 ovn_controller[152859]: 2025-11-25T08:33:00Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:eb:0c 10.100.0.12
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.998 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.998 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79f4b8f5-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:00 compute-0 nova_compute[253538]: 2025-11-25 08:33:00.998 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79f4b8f5-d5, col_values=(('external_ids', {'iface-id': '79f4b8f5-d582-44c5-b8e0-a82ad73193de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:9d:e4', 'vm-uuid': 'c0942fc7-74d4-4fc8-9574-4fea9179e71b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.000 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:01 compute-0 NetworkManager[48915]: <info>  [1764059581.0012] manager: (tap79f4b8f5-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.003 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.005 253542 INFO os_vif [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5')
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.049 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.049 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.050 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No VIF found with MAC fa:16:3e:24:9d:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.050 253542 INFO nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Using config drive
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.076 253542 DEBUG nova.storage.rbd_utils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:01 compute-0 kernel: tap15af3dd8-97 (unregistering): left promiscuous mode
Nov 25 08:33:01 compute-0 NetworkManager[48915]: <info>  [1764059581.0911] device (tap15af3dd8-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.093 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:01 compute-0 ovn_controller[152859]: 2025-11-25T08:33:01Z|00471|binding|INFO|Releasing lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c from this chassis (sb_readonly=0)
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.106 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:01 compute-0 ovn_controller[152859]: 2025-11-25T08:33:01Z|00472|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c down in Southbound
Nov 25 08:33:01 compute-0 ovn_controller[152859]: 2025-11-25T08:33:01Z|00473|binding|INFO|Removing iface tap15af3dd8-97 ovn-installed in OVS
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.108 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.117 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'keypairs' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.119 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.121 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.122 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 908154e6-322e-4607-bb65-df3f3f8daca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.123 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[41d00465-6b31-4305-b5ae-f2f02c62c79a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.124 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace which is not needed anymore
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.124 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:01 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 25 08:33:01 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000032.scope: Consumed 13.452s CPU time.
Nov 25 08:33:01 compute-0 systemd-machined[215790]: Machine qemu-60-instance-00000032 terminated.
Nov 25 08:33:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 250 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.6 MiB/s wr, 290 op/s
Nov 25 08:33:01 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[310582]: [NOTICE]   (310586) : haproxy version is 2.8.14-c23fe91
Nov 25 08:33:01 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[310582]: [NOTICE]   (310586) : path to executable is /usr/sbin/haproxy
Nov 25 08:33:01 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[310582]: [ALERT]    (310586) : Current worker (310588) exited with code 143 (Terminated)
Nov 25 08:33:01 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[310582]: [WARNING]  (310586) : All workers exited. Exiting... (0)
Nov 25 08:33:01 compute-0 systemd[1]: libpod-2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339.scope: Deactivated successfully.
Nov 25 08:33:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:33:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2802527729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:01 compute-0 podman[311637]: 2025-11-25 08:33:01.271400699 +0000 UTC m=+0.052672407 container died 2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.280 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.287 253542 DEBUG nova.compute.provider_tree [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339-userdata-shm.mount: Deactivated successfully.
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.304 253542 DEBUG nova.scheduler.client.report [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-facc729861bf2043006a179130b6eb00c45dc0c9002c2a929f8d9cf2dc4ea94f-merged.mount: Deactivated successfully.
Nov 25 08:33:01 compute-0 podman[311637]: 2025-11-25 08:33:01.321597928 +0000 UTC m=+0.102869636 container cleanup 2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:33:01 compute-0 systemd[1]: libpod-conmon-2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339.scope: Deactivated successfully.
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.336 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.337 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.353 253542 DEBUG nova.network.neutron [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updated VIF entry in instance network info cache for port 223207be-35e0-4b8b-bf78-113792059910. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.353 253542 DEBUG nova.network.neutron [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.369 253542 DEBUG nova.objects.instance [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.383 253542 DEBUG oslo_concurrency.lockutils [req-454c1506-0d0b-4590-908a-fcb20569d3e2 req-b6d343cc-8cec-4488-987e-49d8bd765a86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.385 253542 DEBUG nova.network.neutron [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:33:01 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2046479535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:01 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3902323065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:01 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2802527729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.392 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.393 253542 DEBUG nova.network.neutron [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:33:01 compute-0 podman[311673]: 2025-11-25 08:33:01.401658769 +0000 UTC m=+0.056405537 container remove 2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.407 253542 INFO nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.406 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[750bf71c-6ec1-47fd-a5ba-040ba4a0cbe2]: (4, ('Tue Nov 25 08:33:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339)\n2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339\nTue Nov 25 08:33:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339)\n2289953be081e6c39272df663cab12ee7163a6905ffd288c559f372770c19339\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.409 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb73e65-fca0-4156-b1c0-16d753a6d868]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.410 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.412 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:01 compute-0 kernel: tap908154e6-30: left promiscuous mode
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.424 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.455 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.459 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5c24e52f-321f-4f06-b4e3-d1e3a2773995]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.477 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[65ec2c40-875e-469d-a501-4251582bfc59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.479 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bae69034-a8ee-4aba-a26a-6819a836ab90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.483 253542 INFO nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Creating config drive at /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.487 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuoqy3o6j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.496 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[40332456-d02c-4550-98e3-43156a02045b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491402, 'reachable_time': 44539, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311704, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.499 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.499 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b86daaf4-cde2-48b5-a129-22a536002620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d908154e6\x2d322e\x2d4607\x2dbb65\x2ddf3f3f8daca6.mount: Deactivated successfully.
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.561 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.563 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.564 253542 INFO nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Creating image(s)
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.581 253542 DEBUG nova.storage.rbd_utils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 528fb917-0169-441d-b32d-652963344aea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.600 253542 DEBUG nova.storage.rbd_utils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 528fb917-0169-441d-b32d-652963344aea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.619 253542 DEBUG nova.storage.rbd_utils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 528fb917-0169-441d-b32d-652963344aea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.622 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.649 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuoqy3o6j" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.652 253542 DEBUG nova.policy [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a649c62aaacd4f01a93ea978066f5976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.655 253542 DEBUG nova.policy [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.675 253542 DEBUG nova.storage.rbd_utils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.678 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.706 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.708 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.709 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.709 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.733 253542 DEBUG nova.storage.rbd_utils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 528fb917-0169-441d-b32d-652963344aea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.736 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 528fb917-0169-441d-b32d-652963344aea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.768 253542 INFO nova.virt.libvirt.driver [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance shutdown successfully after 13 seconds.
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.777 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance destroyed successfully.
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.777 253542 DEBUG nova.objects.instance [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'numa_topology' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.796 253542 DEBUG nova.compute.manager [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.830 253542 DEBUG oslo_concurrency.processutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config c0942fc7-74d4-4fc8-9574-4fea9179e71b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.831 253542 INFO nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Deleting local config drive /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b/disk.config because it was imported into RBD.
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.839 253542 DEBUG oslo_concurrency.lockutils [None req-2b669202-d948-45cd-9a6e-057ea8730fac c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:01 compute-0 kernel: tap79f4b8f5-d5: entered promiscuous mode
Nov 25 08:33:01 compute-0 NetworkManager[48915]: <info>  [1764059581.8883] manager: (tap79f4b8f5-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/218)
Nov 25 08:33:01 compute-0 systemd-udevd[311435]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.894 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:01 compute-0 ovn_controller[152859]: 2025-11-25T08:33:01Z|00474|binding|INFO|Claiming lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de for this chassis.
Nov 25 08:33:01 compute-0 ovn_controller[152859]: 2025-11-25T08:33:01Z|00475|binding|INFO|79f4b8f5-d582-44c5-b8e0-a82ad73193de: Claiming fa:16:3e:24:9d:e4 10.100.0.11
Nov 25 08:33:01 compute-0 NetworkManager[48915]: <info>  [1764059581.9024] device (tap79f4b8f5-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:33:01 compute-0 NetworkManager[48915]: <info>  [1764059581.9030] device (tap79f4b8f5-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.906 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:9d:e4 10.100.0.11'], port_security=['fa:16:3e:24:9d:e4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c0942fc7-74d4-4fc8-9574-4fea9179e71b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=79f4b8f5-d582-44c5-b8e0-a82ad73193de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.908 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 79f4b8f5-d582-44c5-b8e0-a82ad73193de in datapath eb25945d-6002-4a99-b682-034a8a3dc901 bound to our chassis
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.910 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.923 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[90870a19-cb75-46b4-abbc-0788645b771f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.924 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb25945d-61 in ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.925 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb25945d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.925 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee757ba-fbf4-4d86-9d1a-3bc374b8732a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.926 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bada7701-730e-4c28-bb9a-f70043cd0e40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:01 compute-0 ovn_controller[152859]: 2025-11-25T08:33:01Z|00476|binding|INFO|Setting lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de ovn-installed in OVS
Nov 25 08:33:01 compute-0 ovn_controller[152859]: 2025-11-25T08:33:01Z|00477|binding|INFO|Setting lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de up in Southbound
Nov 25 08:33:01 compute-0 systemd-machined[215790]: New machine qemu-62-instance-00000035.
Nov 25 08:33:01 compute-0 nova_compute[253538]: 2025-11-25 08:33:01.931 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.937 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[44c9be7c-7b84-4eb3-b33b-589c4e919639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:01 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-00000035.
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.960 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7abd1f-8771-4482-b31c-f38f3429b105]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.986 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c583e1cf-42b9-432f-ba41-b69f17493037]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:01.991 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e336f6ed-13a9-459c-a08a-60f83a280f02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:01 compute-0 NetworkManager[48915]: <info>  [1764059581.9919] manager: (tapeb25945d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/219)
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.018 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e8a27ea9-64ac-47aa-8456-ee5bed7b1ff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.021 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[13f65a1c-9d5e-4143-be54-f2b1b30cb0f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:02 compute-0 NetworkManager[48915]: <info>  [1764059582.0464] device (tapeb25945d-60): carrier: link connected
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.053 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[be9ebdff-7062-4194-9051-ae296edf70b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.070 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[15e5150d-2d04-4bef-8153-6a5e9bf3f6a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493664, 'reachable_time': 43980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311888, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.087 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[929a789c-e42b-4312-bff5-aa069b202a31]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:fcf3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493664, 'tstamp': 493664}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311889, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.094 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 528fb917-0169-441d-b32d-652963344aea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.105 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6448ab28-29b2-4fd0-8735-50d54daebdfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493664, 'reachable_time': 43980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311890, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.136 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[78450a83-9d78-438a-aa3f-58d068d6b465]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.166 253542 DEBUG nova.storage.rbd_utils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] resizing rbd image 528fb917-0169-441d-b32d-652963344aea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.199 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5572aacb-0efc-4074-b228-93310b3a4d42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.201 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.201 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.201 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb25945d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:02 compute-0 kernel: tapeb25945d-60: entered promiscuous mode
Nov 25 08:33:02 compute-0 NetworkManager[48915]: <info>  [1764059582.2063] manager: (tapeb25945d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.208 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb25945d-60, col_values=(('external_ids', {'iface-id': 'f4a838c4-0817-4ff4-8792-2e2721905e98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:02 compute-0 ovn_controller[152859]: 2025-11-25T08:33:02Z|00478|binding|INFO|Releasing lport f4a838c4-0817-4ff4-8792-2e2721905e98 from this chassis (sb_readonly=0)
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.227 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.228 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4774418e-a190-48a3-a2a7-e09225ae955a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.229 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:33:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:02.231 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'env', 'PROCESS_TAG=haproxy-eb25945d-6002-4a99-b682-034a8a3dc901', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb25945d-6002-4a99-b682-034a8a3dc901.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.245 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:02 compute-0 ceph-mon[75015]: pgmap v1490: 321 pgs: 321 active+clean; 250 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.6 MiB/s wr, 290 op/s
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.492 253542 DEBUG nova.objects.instance [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'migration_context' on Instance uuid 528fb917-0169-441d-b32d-652963344aea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.509 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.509 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Ensure instance console log exists: /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.510 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.510 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.511 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:02 compute-0 podman[311997]: 2025-11-25 08:33:02.604065826 +0000 UTC m=+0.046332587 container create 6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 08:33:02 compute-0 systemd[1]: Started libpod-conmon-6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5.scope.
Nov 25 08:33:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27abebbb45e35e11fa193c86fc45de186c88f818d07d7200fb6e69019986f4b1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:02 compute-0 podman[311997]: 2025-11-25 08:33:02.579363232 +0000 UTC m=+0.021630023 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:33:02 compute-0 podman[311997]: 2025-11-25 08:33:02.679623257 +0000 UTC m=+0.121890058 container init 6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 08:33:02 compute-0 podman[311997]: 2025-11-25 08:33:02.685557976 +0000 UTC m=+0.127824747 container start 6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.716 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for c0942fc7-74d4-4fc8-9574-4fea9179e71b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.716 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059582.716126, c0942fc7-74d4-4fc8-9574-4fea9179e71b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.717 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] VM Started (Lifecycle Event)
Nov 25 08:33:02 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[312054]: [NOTICE]   (312060) : New worker (312062) forked
Nov 25 08:33:02 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[312054]: [NOTICE]   (312060) : Loading success.
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.733 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.737 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059582.7199795, c0942fc7-74d4-4fc8-9574-4fea9179e71b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.737 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] VM Paused (Lifecycle Event)
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.746 253542 DEBUG nova.network.neutron [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Successfully created port: 582f57a6-32d3-44a0-ab47-d147a0bb0f43 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.752 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.754 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.771 253542 DEBUG nova.network.neutron [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Successfully created port: 56d077f0-8f69-40d8-bd5e-267a70c4c319 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:33:02 compute-0 nova_compute[253538]: 2025-11-25 08:33:02.773 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.172 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.173 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.173 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.173 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.173 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.173 253542 WARNING nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 for instance with vm_state active and task_state None.
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.174 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.174 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.174 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.174 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.174 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.174 253542 WARNING nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 for instance with vm_state active and task_state None.
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.175 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.175 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.175 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.175 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.175 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.175 253542 WARNING nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state stopped and task_state None.
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.176 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.176 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.176 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.176 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.176 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.176 253542 WARNING nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state stopped and task_state None.
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.176 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.177 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.177 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.177 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.177 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Processing event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.177 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.177 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.178 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.178 253542 DEBUG oslo_concurrency.lockutils [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.178 253542 DEBUG nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] No waiting events found dispatching network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.178 253542 WARNING nova.compute.manager [req-888688be-5cbc-4847-8868-5c73433d8b04 req-687b7398-2bbe-4d27-941d-51cdc498b757 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received unexpected event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de for instance with vm_state active and task_state rebuild_spawning.
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.179 253542 DEBUG nova.compute.manager [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.182 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059583.1820388, c0942fc7-74d4-4fc8-9574-4fea9179e71b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.182 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] VM Resumed (Lifecycle Event)
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.184 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.187 253542 INFO nova.virt.libvirt.driver [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance spawned successfully.
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.187 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 262 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.7 MiB/s wr, 288 op/s
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.225 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.226 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.227 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.228 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.229 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.230 253542 DEBUG nova.virt.libvirt.driver [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.238 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.243 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.271 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.312 253542 DEBUG nova.compute.manager [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.345 253542 DEBUG nova.objects.instance [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'flavor' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.401 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.402 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.402 253542 DEBUG nova.objects.instance [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.482 253542 DEBUG oslo_concurrency.lockutils [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.483 253542 DEBUG oslo_concurrency.lockutils [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.483 253542 DEBUG nova.network.neutron [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.483 253542 DEBUG nova.objects.instance [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'info_cache' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:03 compute-0 nova_compute[253538]: 2025-11-25 08:33:03.589 253542 DEBUG oslo_concurrency.lockutils [None req-cb0b45a0-9374-4f2b-9271-6d8352f48120 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002005269380412841 of space, bias 1.0, pg target 0.6015808141238523 quantized to 32 (current 32)
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:33:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:33:04 compute-0 ceph-mon[75015]: pgmap v1491: 321 pgs: 321 active+clean; 262 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.7 MiB/s wr, 288 op/s
Nov 25 08:33:04 compute-0 nova_compute[253538]: 2025-11-25 08:33:04.528 253542 DEBUG nova.network.neutron [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Successfully updated port: 56d077f0-8f69-40d8-bd5e-267a70c4c319 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:33:04 compute-0 nova_compute[253538]: 2025-11-25 08:33:04.540 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:33:04 compute-0 nova_compute[253538]: 2025-11-25 08:33:04.540 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquired lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:33:04 compute-0 nova_compute[253538]: 2025-11-25 08:33:04.540 253542 DEBUG nova.network.neutron [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:33:04 compute-0 nova_compute[253538]: 2025-11-25 08:33:04.573 253542 DEBUG nova.network.neutron [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Successfully updated port: 582f57a6-32d3-44a0-ab47-d147a0bb0f43 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:33:04 compute-0 nova_compute[253538]: 2025-11-25 08:33:04.585 253542 DEBUG oslo_concurrency.lockutils [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:33:04 compute-0 nova_compute[253538]: 2025-11-25 08:33:04.586 253542 DEBUG oslo_concurrency.lockutils [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:33:04 compute-0 nova_compute[253538]: 2025-11-25 08:33:04.586 253542 DEBUG nova.network.neutron [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.117 253542 WARNING nova.network.neutron [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.118 253542 WARNING nova.network.neutron [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.120 253542 DEBUG nova.network.neutron [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.132 253542 DEBUG nova.network.neutron [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.166 253542 DEBUG oslo_concurrency.lockutils [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.190 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance destroyed successfully.
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.192 253542 DEBUG nova.objects.instance [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'numa_topology' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.200 253542 DEBUG nova.objects.instance [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'resources' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.207 253542 DEBUG nova.virt.libvirt.vif [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.208 253542 DEBUG nova.network.os_vif_util [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.208 253542 DEBUG nova.network.os_vif_util [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.209 253542 DEBUG os_vif [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.210 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.211 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15af3dd8-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.212 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.213 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 295 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.1 MiB/s wr, 317 op/s
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.216 253542 INFO os_vif [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.223 253542 DEBUG nova.virt.libvirt.driver [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Start _get_guest_xml network_info=[{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.228 253542 WARNING nova.virt.libvirt.driver [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.233 253542 DEBUG nova.virt.libvirt.host [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.233 253542 DEBUG nova.virt.libvirt.host [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.237 253542 DEBUG nova.virt.libvirt.host [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.237 253542 DEBUG nova.virt.libvirt.host [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.237 253542 DEBUG nova.virt.libvirt.driver [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.238 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.238 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.238 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.238 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.239 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.239 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.239 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.239 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.240 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.240 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.240 253542 DEBUG nova.virt.hardware [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.240 253542 DEBUG nova.objects.instance [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.251 253542 DEBUG oslo_concurrency.processutils [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.584 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:33:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3731586341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.748 253542 DEBUG oslo_concurrency.processutils [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:33:05 compute-0 nova_compute[253538]: 2025-11-25 08:33:05.787 253542 DEBUG oslo_concurrency.processutils [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:33:06 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914449147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.267 253542 DEBUG oslo_concurrency.processutils [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.268 253542 DEBUG nova.virt.libvirt.vif [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.269 253542 DEBUG nova.network.os_vif_util [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.270 253542 DEBUG nova.network.os_vif_util [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.271 253542 DEBUG nova.objects.instance [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.282 253542 DEBUG nova.virt.libvirt.driver [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:33:06 compute-0 nova_compute[253538]:   <uuid>0feca801-4630-4450-b915-616d8496ab51</uuid>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   <name>instance-00000032</name>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerActionsTestJSON-server-1351113969</nova:name>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:33:05</nova:creationTime>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:33:06 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:33:06 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:33:06 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:33:06 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:33:06 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:33:06 compute-0 nova_compute[253538]:         <nova:user uuid="c199ca353ed54a53ab7fe37d3089c82a">tempest-ServerActionsTestJSON-1880843108-project-member</nova:user>
Nov 25 08:33:06 compute-0 nova_compute[253538]:         <nova:project uuid="23237e7592b247838e62457157e64e9e">tempest-ServerActionsTestJSON-1880843108</nova:project>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:33:06 compute-0 nova_compute[253538]:         <nova:port uuid="15af3dd8-9788-4a34-b4b2-d3b24300cd4c">
Nov 25 08:33:06 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <system>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <entry name="serial">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <entry name="uuid">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     </system>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   <os>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   </os>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   <features>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   </features>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk">
Nov 25 08:33:06 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:33:06 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk.config">
Nov 25 08:33:06 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:33:06 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:07:cd:40"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <target dev="tap15af3dd8-97"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/console.log" append="off"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <video>
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     </video>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <input type="keyboard" bus="usb"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:33:06 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:33:06 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:33:06 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:33:06 compute-0 nova_compute[253538]: </domain>
Nov 25 08:33:06 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.283 253542 DEBUG nova.virt.libvirt.driver [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.283 253542 DEBUG nova.virt.libvirt.driver [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.283 253542 DEBUG nova.virt.libvirt.vif [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.284 253542 DEBUG nova.network.os_vif_util [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.284 253542 DEBUG nova.network.os_vif_util [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.285 253542 DEBUG os_vif [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.286 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.286 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.289 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15af3dd8-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.290 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15af3dd8-97, col_values=(('external_ids', {'iface-id': '15af3dd8-9788-4a34-b4b2-d3b24300cd4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:cd:40', 'vm-uuid': '0feca801-4630-4450-b915-616d8496ab51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:06 compute-0 NetworkManager[48915]: <info>  [1764059586.2919] manager: (tap15af3dd8-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.293 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.295 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.296 253542 INFO os_vif [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')
Nov 25 08:33:06 compute-0 podman[312136]: 2025-11-25 08:33:06.390743616 +0000 UTC m=+0.064085353 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 08:33:06 compute-0 kernel: tap15af3dd8-97: entered promiscuous mode
Nov 25 08:33:06 compute-0 ovn_controller[152859]: 2025-11-25T08:33:06Z|00479|binding|INFO|Claiming lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c for this chassis.
Nov 25 08:33:06 compute-0 ovn_controller[152859]: 2025-11-25T08:33:06Z|00480|binding|INFO|15af3dd8-9788-4a34-b4b2-d3b24300cd4c: Claiming fa:16:3e:07:cd:40 10.100.0.13
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.429 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:06 compute-0 NetworkManager[48915]: <info>  [1764059586.4337] manager: (tap15af3dd8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.443 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '9', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.444 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 bound to our chassis
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.446 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.457 253542 DEBUG nova.network.neutron [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Updating instance_info_cache with network_info: [{"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.457 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59bab187-ccc6-4a3e-9583-5617a1226f31]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.458 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap908154e6-31 in ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:33:06 compute-0 ovn_controller[152859]: 2025-11-25T08:33:06Z|00481|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c ovn-installed in OVS
Nov 25 08:33:06 compute-0 ovn_controller[152859]: 2025-11-25T08:33:06Z|00482|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c up in Southbound
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.461 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap908154e6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.461 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0e1a2fb8-b54a-4f44-8e79-61232d34f108]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.461 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.462 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7f89f333-97a0-43fd-8903-7a59f515ea0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:06 compute-0 systemd-udevd[312165]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.466 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.474 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[476a251c-4606-48bd-9baf-731e9238b488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:06 compute-0 NetworkManager[48915]: <info>  [1764059586.4798] device (tap15af3dd8-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:33:06 compute-0 NetworkManager[48915]: <info>  [1764059586.4807] device (tap15af3dd8-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:33:06 compute-0 systemd-machined[215790]: New machine qemu-63-instance-00000032.
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.492 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[34a17383-b820-4070-85d5-bfb715b58eb7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.504 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Releasing lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:33:06 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-00000032.
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.505 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance network_info: |[{"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.508 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Start _get_guest_xml network_info=[{"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.514 253542 WARNING nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.523 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff101a3-3ff5-48db-9bb8-8d02b163d5dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:06 compute-0 NetworkManager[48915]: <info>  [1764059586.5348] manager: (tap908154e6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/223)
Nov 25 08:33:06 compute-0 systemd-udevd[312170]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.537 253542 DEBUG nova.virt.libvirt.host [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.534 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[755797df-0694-49f2-96c8-2e98a1f677e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.537 253542 DEBUG nova.virt.libvirt.host [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:33:06 compute-0 ceph-mon[75015]: pgmap v1492: 321 pgs: 321 active+clean; 295 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.1 MiB/s wr, 317 op/s
Nov 25 08:33:06 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3731586341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:06 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2914449147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.561 253542 DEBUG nova.virt.libvirt.host [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.562 253542 DEBUG nova.virt.libvirt.host [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.562 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.562 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.563 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.563 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.563 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.563 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.563 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.563 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.564 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.564 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.564 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.564 253542 DEBUG nova.virt.hardware [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.565 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8cfcef-b521-404b-af21-076bc343a839]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.567 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.570 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b288d1-ac1b-440b-9f4a-5499bb69efd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.609 253542 DEBUG nova.compute.manager [req-c137dc15-1230-42c1-a944-37ea4e34f26b req-c7d6fc16-2ebe-4a25-9cd3-9ad0279e9663 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received event network-changed-56d077f0-8f69-40d8-bd5e-267a70c4c319 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.610 253542 DEBUG nova.compute.manager [req-c137dc15-1230-42c1-a944-37ea4e34f26b req-c7d6fc16-2ebe-4a25-9cd3-9ad0279e9663 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Refreshing instance network info cache due to event network-changed-56d077f0-8f69-40d8-bd5e-267a70c4c319. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.610 253542 DEBUG oslo_concurrency.lockutils [req-c137dc15-1230-42c1-a944-37ea4e34f26b req-c7d6fc16-2ebe-4a25-9cd3-9ad0279e9663 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.610 253542 DEBUG oslo_concurrency.lockutils [req-c137dc15-1230-42c1-a944-37ea4e34f26b req-c7d6fc16-2ebe-4a25-9cd3-9ad0279e9663 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.610 253542 DEBUG nova.network.neutron [req-c137dc15-1230-42c1-a944-37ea4e34f26b req-c7d6fc16-2ebe-4a25-9cd3-9ad0279e9663 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Refreshing network info cache for port 56d077f0-8f69-40d8-bd5e-267a70c4c319 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:33:06 compute-0 NetworkManager[48915]: <info>  [1764059586.6193] device (tap908154e6-30): carrier: link connected
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.629 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f1c347-31fc-49ee-8710-ace6d6086484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.648 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[61a5f850-3c36-4470-b2e3-b0f905cb9543]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494121, 'reachable_time': 43234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312201, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.664 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[04d64fd6-985d-4490-9c43-941557f84270]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:5909'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494121, 'tstamp': 494121}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312202, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.691 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac18e20-cceb-4331-8e6b-c273793ad7da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494121, 'reachable_time': 43234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312203, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.722 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[be132dc6-cfb9-44df-a78a-5e5f4e6f3c55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.793 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab4ad17-85eb-4e08-8589-68f375179650]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.794 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.795 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.795 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.797 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:06 compute-0 kernel: tap908154e6-30: entered promiscuous mode
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.799 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:06 compute-0 NetworkManager[48915]: <info>  [1764059586.7999] manager: (tap908154e6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Nov 25 08:33:06 compute-0 ovn_controller[152859]: 2025-11-25T08:33:06Z|00483|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 08:33:06 compute-0 nova_compute[253538]: 2025-11-25 08:33:06.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.817 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.817 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec027dd-6f5d-48a1-9faf-ebb0084bc049]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.818 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:33:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:06.819 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'env', 'PROCESS_TAG=haproxy-908154e6-322e-4607-bb65-df3f3f8daca6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/908154e6-322e-4607-bb65-df3f3f8daca6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:33:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:33:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/535910666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.089 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.109 253542 DEBUG nova.storage.rbd_utils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 528fb917-0169-441d-b32d-652963344aea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.113 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.7 MiB/s wr, 245 op/s
Nov 25 08:33:07 compute-0 podman[312282]: 2025-11-25 08:33:07.21954215 +0000 UTC m=+0.054257678 container create bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 08:33:07 compute-0 systemd[1]: Started libpod-conmon-bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d.scope.
Nov 25 08:33:07 compute-0 podman[312282]: 2025-11-25 08:33:07.188779534 +0000 UTC m=+0.023495082 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:33:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bbb652004c5b4ad3c3896e006b5f7cd4464afd4fecf01ab4a4f22a51d10b5b9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:07 compute-0 podman[312282]: 2025-11-25 08:33:07.304571186 +0000 UTC m=+0.139286744 container init bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 08:33:07 compute-0 podman[312282]: 2025-11-25 08:33:07.310558157 +0000 UTC m=+0.145273685 container start bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.321 253542 DEBUG nova.compute.manager [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.322 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 0feca801-4630-4450-b915-616d8496ab51 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.323 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059587.3185778, 0feca801-4630-4450-b915-616d8496ab51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.323 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Resumed (Lifecycle Event)
Nov 25 08:33:07 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[312349]: [NOTICE]   (312354) : New worker (312356) forked
Nov 25 08:33:07 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[312349]: [NOTICE]   (312354) : Loading success.
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.339 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance rebooted successfully.
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.340 253542 DEBUG nova.compute.manager [None req-6b86692d-3418-4f34-b54b-b2e5d568385f c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.352 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.355 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.382 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.383 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059587.3200617, 0feca801-4630-4450-b915-616d8496ab51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.383 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Started (Lifecycle Event)
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.407 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.413 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.416 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.416 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.417 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.417 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.417 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.419 253542 INFO nova.compute.manager [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Terminating instance
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.422 253542 DEBUG nova.compute.manager [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:33:07 compute-0 kernel: tap79f4b8f5-d5 (unregistering): left promiscuous mode
Nov 25 08:33:07 compute-0 NetworkManager[48915]: <info>  [1764059587.4611] device (tap79f4b8f5-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.470 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:07 compute-0 ovn_controller[152859]: 2025-11-25T08:33:07Z|00484|binding|INFO|Releasing lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de from this chassis (sb_readonly=0)
Nov 25 08:33:07 compute-0 ovn_controller[152859]: 2025-11-25T08:33:07Z|00485|binding|INFO|Setting lport 79f4b8f5-d582-44c5-b8e0-a82ad73193de down in Southbound
Nov 25 08:33:07 compute-0 ovn_controller[152859]: 2025-11-25T08:33:07Z|00486|binding|INFO|Removing iface tap79f4b8f5-d5 ovn-installed in OVS
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.474 253542 DEBUG nova.compute.manager [req-94057288-f1a0-4aa2-972e-2c70bf4585a0 req-f0da8de4-4a9e-47e9-82e0-626ebc977eac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.474 253542 DEBUG oslo_concurrency.lockutils [req-94057288-f1a0-4aa2-972e-2c70bf4585a0 req-f0da8de4-4a9e-47e9-82e0-626ebc977eac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.475 253542 DEBUG oslo_concurrency.lockutils [req-94057288-f1a0-4aa2-972e-2c70bf4585a0 req-f0da8de4-4a9e-47e9-82e0-626ebc977eac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.475 253542 DEBUG oslo_concurrency.lockutils [req-94057288-f1a0-4aa2-972e-2c70bf4585a0 req-f0da8de4-4a9e-47e9-82e0-626ebc977eac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.475 253542 DEBUG nova.compute.manager [req-94057288-f1a0-4aa2-972e-2c70bf4585a0 req-f0da8de4-4a9e-47e9-82e0-626ebc977eac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.475 253542 WARNING nova.compute.manager [req-94057288-f1a0-4aa2-972e-2c70bf4585a0 req-f0da8de4-4a9e-47e9-82e0-626ebc977eac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.476 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.480 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:9d:e4 10.100.0.11'], port_security=['fa:16:3e:24:9d:e4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c0942fc7-74d4-4fc8-9574-4fea9179e71b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=79f4b8f5-d582-44c5-b8e0-a82ad73193de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.481 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 79f4b8f5-d582-44c5-b8e0-a82ad73193de in datapath eb25945d-6002-4a99-b682-034a8a3dc901 unbound from our chassis
Nov 25 08:33:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.483 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb25945d-6002-4a99-b682-034a8a3dc901, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:33:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.484 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[071f2bc7-3671-44b8-9e83-d20c5bccda6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.484 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace which is not needed anymore
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.489 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:07 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000035.scope: Deactivated successfully.
Nov 25 08:33:07 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000035.scope: Consumed 5.008s CPU time.
Nov 25 08:33:07 compute-0 systemd-machined[215790]: Machine qemu-62-instance-00000035 terminated.
Nov 25 08:33:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:33:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3152049899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/535910666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3152049899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.564 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.565 253542 DEBUG nova.virt.libvirt.vif [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-688174086',display_name='tempest-DeleteServersTestJSON-server-688174086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-688174086',id=56,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-qasdqh9g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:33:01Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=528fb917-0169-441d-b32d-652963344aea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.565 253542 DEBUG nova.network.os_vif_util [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.566 253542 DEBUG nova.network.os_vif_util [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ce:b3,bridge_name='br-int',has_traffic_filtering=True,id=56d077f0-8f69-40d8-bd5e-267a70c4c319,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56d077f0-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.567 253542 DEBUG nova.objects.instance [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 528fb917-0169-441d-b32d-652963344aea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.579 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:33:07 compute-0 nova_compute[253538]:   <uuid>528fb917-0169-441d-b32d-652963344aea</uuid>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   <name>instance-00000038</name>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <nova:name>tempest-DeleteServersTestJSON-server-688174086</nova:name>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:33:06</nova:creationTime>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:33:07 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:33:07 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:33:07 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:33:07 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:33:07 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:33:07 compute-0 nova_compute[253538]:         <nova:user uuid="a649c62aaacd4f01a93ea978066f5976">tempest-DeleteServersTestJSON-2095694504-project-member</nova:user>
Nov 25 08:33:07 compute-0 nova_compute[253538]:         <nova:project uuid="a9c243220ecd4ba3af10cdbc0ea76bd6">tempest-DeleteServersTestJSON-2095694504</nova:project>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:33:07 compute-0 nova_compute[253538]:         <nova:port uuid="56d077f0-8f69-40d8-bd5e-267a70c4c319">
Nov 25 08:33:07 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <system>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <entry name="serial">528fb917-0169-441d-b32d-652963344aea</entry>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <entry name="uuid">528fb917-0169-441d-b32d-652963344aea</entry>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     </system>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   <os>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   </os>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   <features>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   </features>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/528fb917-0169-441d-b32d-652963344aea_disk">
Nov 25 08:33:07 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:33:07 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/528fb917-0169-441d-b32d-652963344aea_disk.config">
Nov 25 08:33:07 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:33:07 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:7a:ce:b3"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <target dev="tap56d077f0-8f"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea/console.log" append="off"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <video>
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     </video>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:33:07 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:33:07 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:33:07 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:33:07 compute-0 nova_compute[253538]: </domain>
Nov 25 08:33:07 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.579 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Preparing to wait for external event network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.580 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "528fb917-0169-441d-b32d-652963344aea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.580 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.580 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.581 253542 DEBUG nova.virt.libvirt.vif [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:32:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-688174086',display_name='tempest-DeleteServersTestJSON-server-688174086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-688174086',id=56,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-qasdqh9g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTe
stJSON-2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:33:01Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=528fb917-0169-441d-b32d-652963344aea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.581 253542 DEBUG nova.network.os_vif_util [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.581 253542 DEBUG nova.network.os_vif_util [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ce:b3,bridge_name='br-int',has_traffic_filtering=True,id=56d077f0-8f69-40d8-bd5e-267a70c4c319,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56d077f0-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.582 253542 DEBUG os_vif [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ce:b3,bridge_name='br-int',has_traffic_filtering=True,id=56d077f0-8f69-40d8-bd5e-267a70c4c319,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56d077f0-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.582 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.583 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.591 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.591 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56d077f0-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.592 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap56d077f0-8f, col_values=(('external_ids', {'iface-id': '56d077f0-8f69-40d8-bd5e-267a70c4c319', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:ce:b3', 'vm-uuid': '528fb917-0169-441d-b32d-652963344aea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.593 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:07 compute-0 NetworkManager[48915]: <info>  [1764059587.5942] manager: (tap56d077f0-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.595 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.603 253542 INFO os_vif [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ce:b3,bridge_name='br-int',has_traffic_filtering=True,id=56d077f0-8f69-40d8-bd5e-267a70c4c319,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56d077f0-8f')
Nov 25 08:33:07 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[312054]: [NOTICE]   (312060) : haproxy version is 2.8.14-c23fe91
Nov 25 08:33:07 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[312054]: [NOTICE]   (312060) : path to executable is /usr/sbin/haproxy
Nov 25 08:33:07 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[312054]: [WARNING]  (312060) : Exiting Master process...
Nov 25 08:33:07 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[312054]: [WARNING]  (312060) : Exiting Master process...
Nov 25 08:33:07 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[312054]: [ALERT]    (312060) : Current worker (312062) exited with code 143 (Terminated)
Nov 25 08:33:07 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[312054]: [WARNING]  (312060) : All workers exited. Exiting... (0)
Nov 25 08:33:07 compute-0 systemd[1]: libpod-6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5.scope: Deactivated successfully.
Nov 25 08:33:07 compute-0 podman[312386]: 2025-11-25 08:33:07.624810043 +0000 UTC m=+0.051769633 container died 6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 08:33:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5-userdata-shm.mount: Deactivated successfully.
Nov 25 08:33:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-27abebbb45e35e11fa193c86fc45de186c88f818d07d7200fb6e69019986f4b1-merged.mount: Deactivated successfully.
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.660 253542 INFO nova.virt.libvirt.driver [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Instance destroyed successfully.
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.660 253542 DEBUG nova.objects.instance [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'resources' on Instance uuid c0942fc7-74d4-4fc8-9574-4fea9179e71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:07 compute-0 podman[312386]: 2025-11-25 08:33:07.670568632 +0000 UTC m=+0.097528222 container cleanup 6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.679 253542 DEBUG nova.virt.libvirt.vif [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1031241876',display_name='tempest-ServerDiskConfigTestJSON-server-1031241876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1031241876',id=53,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:33:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-p1t9fwfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:03Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=c0942fc7-74d4-4fc8-9574-4fea9179e71b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.679 253542 DEBUG nova.network.os_vif_util [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "address": "fa:16:3e:24:9d:e4", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79f4b8f5-d5", "ovs_interfaceid": "79f4b8f5-d582-44c5-b8e0-a82ad73193de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.680 253542 DEBUG nova.network.os_vif_util [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.680 253542 DEBUG os_vif [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.682 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.682 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79f4b8f5-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.684 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.684 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:33:07 compute-0 systemd[1]: libpod-conmon-6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5.scope: Deactivated successfully.
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.688 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.688 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.688 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No VIF found with MAC fa:16:3e:7a:ce:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.689 253542 INFO nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Using config drive
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.709 253542 DEBUG nova.storage.rbd_utils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 528fb917-0169-441d-b32d-652963344aea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.716 253542 INFO os_vif [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:9d:e4,bridge_name='br-int',has_traffic_filtering=True,id=79f4b8f5-d582-44c5-b8e0-a82ad73193de,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79f4b8f5-d5')
Nov 25 08:33:07 compute-0 podman[312431]: 2025-11-25 08:33:07.741213041 +0000 UTC m=+0.045922855 container remove 6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 08:33:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.747 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[430409f3-b680-4f40-9128-55ea122cb251]: (4, ('Tue Nov 25 08:33:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5)\n6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5\nTue Nov 25 08:33:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5)\n6579f1936f19547b0b95956599794047f5a155eb5f2332d13d8dc270bbe894b5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.750 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[36871710-1f80-414e-9759-78bccca031a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.751 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:07 compute-0 kernel: tapeb25945d-60: left promiscuous mode
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.754 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:07 compute-0 nova_compute[253538]: 2025-11-25 08:33:07.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.776 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b00c75e4-e71d-4db2-a15e-e7672a423608]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.793 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[48538c91-86cb-498f-9f61-732f63d542dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.795 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4607aaa-13a2-49bd-b0c6-8253bce5b2e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.812 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[af550e99-5dcd-4e15-9805-4ce4af4bf1ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493658, 'reachable_time': 32183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312486, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:07 compute-0 systemd[1]: run-netns-ovnmeta\x2deb25945d\x2d6002\x2d4a99\x2db682\x2d034a8a3dc901.mount: Deactivated successfully.
Nov 25 08:33:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.817 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:33:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:07.817 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[afdc76ae-dea6-4579-aba6-9f8ad19972fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:08 compute-0 nova_compute[253538]: 2025-11-25 08:33:08.074 253542 INFO nova.virt.libvirt.driver [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Deleting instance files /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b_del
Nov 25 08:33:08 compute-0 nova_compute[253538]: 2025-11-25 08:33:08.075 253542 INFO nova.virt.libvirt.driver [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Deletion of /var/lib/nova/instances/c0942fc7-74d4-4fc8-9574-4fea9179e71b_del complete
Nov 25 08:33:08 compute-0 nova_compute[253538]: 2025-11-25 08:33:08.127 253542 INFO nova.compute.manager [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Took 0.71 seconds to destroy the instance on the hypervisor.
Nov 25 08:33:08 compute-0 nova_compute[253538]: 2025-11-25 08:33:08.128 253542 DEBUG oslo.service.loopingcall [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:33:08 compute-0 nova_compute[253538]: 2025-11-25 08:33:08.128 253542 DEBUG nova.compute.manager [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:33:08 compute-0 nova_compute[253538]: 2025-11-25 08:33:08.128 253542 DEBUG nova.network.neutron [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:33:08 compute-0 nova_compute[253538]: 2025-11-25 08:33:08.328 253542 INFO nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Creating config drive at /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea/disk.config
Nov 25 08:33:08 compute-0 nova_compute[253538]: 2025-11-25 08:33:08.333 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_m1mhmf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:08 compute-0 nova_compute[253538]: 2025-11-25 08:33:08.473 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_m1mhmf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:08 compute-0 nova_compute[253538]: 2025-11-25 08:33:08.495 253542 DEBUG nova.storage.rbd_utils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 528fb917-0169-441d-b32d-652963344aea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:08 compute-0 nova_compute[253538]: 2025-11-25 08:33:08.498 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea/disk.config 528fb917-0169-441d-b32d-652963344aea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:08 compute-0 ceph-mon[75015]: pgmap v1493: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.7 MiB/s wr, 245 op/s
Nov 25 08:33:08 compute-0 nova_compute[253538]: 2025-11-25 08:33:08.654 253542 DEBUG nova.network.neutron [req-c137dc15-1230-42c1-a944-37ea4e34f26b req-c7d6fc16-2ebe-4a25-9cd3-9ad0279e9663 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Updated VIF entry in instance network info cache for port 56d077f0-8f69-40d8-bd5e-267a70c4c319. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:33:08 compute-0 nova_compute[253538]: 2025-11-25 08:33:08.655 253542 DEBUG nova.network.neutron [req-c137dc15-1230-42c1-a944-37ea4e34f26b req-c7d6fc16-2ebe-4a25-9cd3-9ad0279e9663 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Updating instance_info_cache with network_info: [{"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:08 compute-0 nova_compute[253538]: 2025-11-25 08:33:08.671 253542 DEBUG oslo_concurrency.lockutils [req-c137dc15-1230-42c1-a944-37ea4e34f26b req-c7d6fc16-2ebe-4a25-9cd3-9ad0279e9663 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.163 253542 DEBUG nova.network.neutron [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.194 253542 DEBUG nova.network.neutron [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.208 253542 DEBUG oslo_concurrency.lockutils [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.212 253542 INFO nova.compute.manager [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Took 1.08 seconds to deallocate network for instance.
Nov 25 08:33:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 286 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 230 op/s
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.218 253542 DEBUG nova.virt.libvirt.vif [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.218 253542 DEBUG nova.network.os_vif_util [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.219 253542 DEBUG nova.network.os_vif_util [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:1a:cb,bridge_name='br-int',has_traffic_filtering=True,id=582f57a6-32d3-44a0-ab47-d147a0bb0f43,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap582f57a6-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.219 253542 DEBUG os_vif [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:1a:cb,bridge_name='br-int',has_traffic_filtering=True,id=582f57a6-32d3-44a0-ab47-d147a0bb0f43,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap582f57a6-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.220 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.220 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.220 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.223 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.223 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap582f57a6-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.223 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap582f57a6-32, col_values=(('external_ids', {'iface-id': '582f57a6-32d3-44a0-ab47-d147a0bb0f43', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:1a:cb', 'vm-uuid': '8191f951-44bc-4371-957a-f2e7d37c1a32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.224 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:09 compute-0 NetworkManager[48915]: <info>  [1764059589.2255] manager: (tap582f57a6-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/226)
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.225 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.235 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.236 253542 INFO os_vif [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:1a:cb,bridge_name='br-int',has_traffic_filtering=True,id=582f57a6-32d3-44a0-ab47-d147a0bb0f43,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap582f57a6-32')
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.237 253542 DEBUG nova.virt.libvirt.vif [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.237 253542 DEBUG nova.network.os_vif_util [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.237 253542 DEBUG nova.network.os_vif_util [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:1a:cb,bridge_name='br-int',has_traffic_filtering=True,id=582f57a6-32d3-44a0-ab47-d147a0bb0f43,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap582f57a6-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.240 253542 DEBUG nova.virt.libvirt.guest [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] attach device xml: <interface type="ethernet">
Nov 25 08:33:09 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:7a:1a:cb"/>
Nov 25 08:33:09 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:33:09 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:33:09 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:33:09 compute-0 nova_compute[253538]:   <target dev="tap582f57a6-32"/>
Nov 25 08:33:09 compute-0 nova_compute[253538]: </interface>
Nov 25 08:33:09 compute-0 nova_compute[253538]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 08:33:09 compute-0 systemd-udevd[312194]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:33:09 compute-0 NetworkManager[48915]: <info>  [1764059589.2539] manager: (tap582f57a6-32): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Nov 25 08:33:09 compute-0 kernel: tap582f57a6-32: entered promiscuous mode
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.257 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.258 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.258 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:09 compute-0 ovn_controller[152859]: 2025-11-25T08:33:09Z|00487|binding|INFO|Claiming lport 582f57a6-32d3-44a0-ab47-d147a0bb0f43 for this chassis.
Nov 25 08:33:09 compute-0 ovn_controller[152859]: 2025-11-25T08:33:09Z|00488|binding|INFO|582f57a6-32d3-44a0-ab47-d147a0bb0f43: Claiming fa:16:3e:7a:1a:cb 10.100.0.14
Nov 25 08:33:09 compute-0 NetworkManager[48915]: <info>  [1764059589.2644] device (tap582f57a6-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:33:09 compute-0 NetworkManager[48915]: <info>  [1764059589.2651] device (tap582f57a6-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.267 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:1a:cb 10.100.0.14'], port_security=['fa:16:3e:7a:1a:cb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8191f951-44bc-4371-957a-f2e7d37c1a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=582f57a6-32d3-44a0-ab47-d147a0bb0f43) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.268 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 582f57a6-32d3-44a0-ab47-d147a0bb0f43 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe bound to our chassis
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.271 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.288 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[39bb33ff-1d8d-45ff-b10e-46a4d39b7c92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:09 compute-0 ovn_controller[152859]: 2025-11-25T08:33:09Z|00489|binding|INFO|Setting lport 582f57a6-32d3-44a0-ab47-d147a0bb0f43 ovn-installed in OVS
Nov 25 08:33:09 compute-0 ovn_controller[152859]: 2025-11-25T08:33:09Z|00490|binding|INFO|Setting lport 582f57a6-32d3-44a0-ab47-d147a0bb0f43 up in Southbound
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.295 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.308 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.330 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d8e323eb-8e10-434e-863b-fd720757ad44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.334 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f6085b5a-c409-4583-8654-f7ae7f138669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.365 253542 DEBUG nova.virt.libvirt.driver [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.365 253542 DEBUG nova.virt.libvirt.driver [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.365 253542 DEBUG nova.virt.libvirt.driver [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:1a:58:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.365 253542 DEBUG nova.virt.libvirt.driver [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:3a:eb:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.366 253542 DEBUG nova.virt.libvirt.driver [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:7a:1a:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.370 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[71a0b079-48ab-410b-8868-c7f5f710b450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.387 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e90eaf5c-b71c-4f8a-98da-333ab6b5d9b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490985, 'reachable_time': 15416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312546, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.391 253542 DEBUG oslo_concurrency.processutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.402 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[18cd78f8-aa5f-46c7-927d-808bf7b053da]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490996, 'tstamp': 490996}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312547, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490999, 'tstamp': 490999}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312547, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.404 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.411 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.411 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.411 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.412 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.438 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.441 253542 DEBUG nova.virt.libvirt.guest [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:33:09 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:33:09 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 08:33:09 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:33:09</nova:creationTime>
Nov 25 08:33:09 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:33:09 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:33:09 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:33:09 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:33:09 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:33:09 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:33:09 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:33:09 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:33:09 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:33:09 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:33:09 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:33:09 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:33:09 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:33:09 compute-0 nova_compute[253538]:     <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 08:33:09 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:33:09 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:09 compute-0 nova_compute[253538]:     <nova:port uuid="223207be-35e0-4b8b-bf78-113792059910">
Nov 25 08:33:09 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 08:33:09 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:09 compute-0 nova_compute[253538]:     <nova:port uuid="582f57a6-32d3-44a0-ab47-d147a0bb0f43">
Nov 25 08:33:09 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 08:33:09 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:09 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:33:09 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:33:09 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.464 253542 DEBUG oslo_concurrency.lockutils [None req-ed1ff473-fcd5-40ad-9af6-a9eb8b3853e9 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:09 compute-0 ceph-mon[75015]: pgmap v1494: 321 pgs: 321 active+clean; 286 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 230 op/s
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.852 253542 DEBUG oslo_concurrency.processutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea/disk.config 528fb917-0169-441d-b32d-652963344aea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.853 253542 INFO nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Deleting local config drive /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea/disk.config because it was imported into RBD.
Nov 25 08:33:09 compute-0 kernel: tap56d077f0-8f: entered promiscuous mode
Nov 25 08:33:09 compute-0 NetworkManager[48915]: <info>  [1764059589.9094] manager: (tap56d077f0-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/228)
Nov 25 08:33:09 compute-0 ovn_controller[152859]: 2025-11-25T08:33:09Z|00491|binding|INFO|Claiming lport 56d077f0-8f69-40d8-bd5e-267a70c4c319 for this chassis.
Nov 25 08:33:09 compute-0 ovn_controller[152859]: 2025-11-25T08:33:09Z|00492|binding|INFO|56d077f0-8f69-40d8-bd5e-267a70c4c319: Claiming fa:16:3e:7a:ce:b3 10.100.0.4
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.936 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:ce:b3 10.100.0.4'], port_security=['fa:16:3e:7a:ce:b3 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '528fb917-0169-441d-b32d-652963344aea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=56d077f0-8f69-40d8-bd5e-267a70c4c319) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.938 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 56d077f0-8f69-40d8-bd5e-267a70c4c319 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 bound to our chassis
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.940 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 08:33:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:33:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3440581534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.951 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b693a906-19fd-4782-8b0c-996934a2cbcb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.953 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa66e51b8-e1 in ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.954 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa66e51b8-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.954 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[67b84fbe-f78e-43cc-9215-7b01b1767aa5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.956 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eca5a557-b52e-4e06-974a-98584bb59387]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:09 compute-0 systemd-udevd[312584]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:33:09 compute-0 ovn_controller[152859]: 2025-11-25T08:33:09Z|00493|binding|INFO|Setting lport 56d077f0-8f69-40d8-bd5e-267a70c4c319 ovn-installed in OVS
Nov 25 08:33:09 compute-0 ovn_controller[152859]: 2025-11-25T08:33:09Z|00494|binding|INFO|Setting lport 56d077f0-8f69-40d8-bd5e-267a70c4c319 up in Southbound
Nov 25 08:33:09 compute-0 systemd-machined[215790]: New machine qemu-64-instance-00000038.
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.964 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:09 compute-0 NetworkManager[48915]: <info>  [1764059589.9674] device (tap56d077f0-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:33:09 compute-0 NetworkManager[48915]: <info>  [1764059589.9679] device (tap56d077f0-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:33:09 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-00000038.
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.974 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:09.978 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[cb78c8b3-f9c5-45ec-8931-b71c0ddaa3c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.992 253542 DEBUG oslo_concurrency.processutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:09 compute-0 nova_compute[253538]: 2025-11-25 08:33:09.999 253542 DEBUG nova.compute.provider_tree [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.001 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0c50e584-cb64-4f71-95bf-e08fc849302e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.016 253542 DEBUG nova.scheduler.client.report [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.034 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.034 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2122eb85-23ec-4acb-aeba-03398380ad1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:10 compute-0 NetworkManager[48915]: <info>  [1764059590.0404] manager: (tapa66e51b8-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/229)
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.039 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[536fa575-44f3-434b-acbd-8d681bb84171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:10 compute-0 systemd-udevd[312587]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.059 253542 INFO nova.scheduler.client.report [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Deleted allocations for instance c0942fc7-74d4-4fc8-9574-4fea9179e71b
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.073 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4e1942-2ad3-416d-896f-86dc65e08042]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.077 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e1aa45fd-fb76-4149-ac59-cd8765eef48f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:10 compute-0 NetworkManager[48915]: <info>  [1764059590.1057] device (tapa66e51b8-e0): carrier: link connected
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.111 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3b551b5d-0e6e-490b-a59c-5755662e24db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.129 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b751b99f-f3f0-44ba-9ae6-48cd326659f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494470, 'reachable_time': 26050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312616, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.131 253542 DEBUG oslo_concurrency.lockutils [None req-4723d2bf-00e2-4979-a157-d3c878bd71f2 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.143 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aec1373c-3908-45c4-b067-0767328a601f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:2c20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494470, 'tstamp': 494470}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312617, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.160 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe55981-7740-4fe2-ba94-9b23ce138098]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494470, 'reachable_time': 26050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312618, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.190 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d93e89c3-9b04-444d-811f-a748354cff53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.251 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ede2a001-807a-4600-82ed-c4a19a576912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.252 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.252 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.252 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa66e51b8-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:10 compute-0 NetworkManager[48915]: <info>  [1764059590.2549] manager: (tapa66e51b8-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Nov 25 08:33:10 compute-0 kernel: tapa66e51b8-e0: entered promiscuous mode
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.254 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.257 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.258 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa66e51b8-e0, col_values=(('external_ids', {'iface-id': 'c0d74b17-7eba-4096-a861-b9247777e01c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.259 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:10 compute-0 ovn_controller[152859]: 2025-11-25T08:33:10Z|00495|binding|INFO|Releasing lport c0d74b17-7eba-4096-a861-b9247777e01c from this chassis (sb_readonly=0)
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.291 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.292 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[10e33483-c543-4ee0-90ce-eff858670aee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.293 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:33:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:10.293 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'env', 'PROCESS_TAG=haproxy-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a66e51b8-ecb0-4289-a1b5-d5e379727721.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.318 253542 DEBUG nova.compute.manager [req-9450a81c-2182-4fba-8826-2ec29e919dec req-2459aebe-5265-4d4f-9540-c28782ad7407 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-changed-582f57a6-32d3-44a0-ab47-d147a0bb0f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.318 253542 DEBUG nova.compute.manager [req-9450a81c-2182-4fba-8826-2ec29e919dec req-2459aebe-5265-4d4f-9540-c28782ad7407 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing instance network info cache due to event network-changed-582f57a6-32d3-44a0-ab47-d147a0bb0f43. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.319 253542 DEBUG oslo_concurrency.lockutils [req-9450a81c-2182-4fba-8826-2ec29e919dec req-2459aebe-5265-4d4f-9540-c28782ad7407 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.319 253542 DEBUG oslo_concurrency.lockutils [req-9450a81c-2182-4fba-8826-2ec29e919dec req-2459aebe-5265-4d4f-9540-c28782ad7407 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.319 253542 DEBUG nova.network.neutron [req-9450a81c-2182-4fba-8826-2ec29e919dec req-2459aebe-5265-4d4f-9540-c28782ad7407 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing network info cache for port 582f57a6-32d3-44a0-ab47-d147a0bb0f43 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.528 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059575.5264928, 6fd0259d-3f5c-487b-906c-db0ac2b00830 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.528 253542 INFO nova.compute.manager [-] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] VM Stopped (Lifecycle Event)
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.549 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059590.5494423, 528fb917-0169-441d-b32d-652963344aea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.550 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] VM Started (Lifecycle Event)
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.556 253542 DEBUG nova.compute.manager [None req-5aa08cfc-0450-425d-8888-466a68a195af - - - - - -] [instance: 6fd0259d-3f5c-487b-906c-db0ac2b00830] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.577 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.581 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059590.550347, 528fb917-0169-441d-b32d-652963344aea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.581 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] VM Paused (Lifecycle Event)
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.597 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.602 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.617 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.621 253542 DEBUG nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.621 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.622 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.622 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.622 253542 DEBUG nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.623 253542 WARNING nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.623 253542 DEBUG nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-unplugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.623 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.623 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.624 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.624 253542 DEBUG nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] No waiting events found dispatching network-vif-unplugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.624 253542 WARNING nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received unexpected event network-vif-unplugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de for instance with vm_state deleted and task_state None.
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.625 253542 DEBUG nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.625 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.625 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.625 253542 DEBUG oslo_concurrency.lockutils [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "c0942fc7-74d4-4fc8-9574-4fea9179e71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.626 253542 DEBUG nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] No waiting events found dispatching network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:10 compute-0 nova_compute[253538]: 2025-11-25 08:33:10.626 253542 WARNING nova.compute.manager [req-3d49d313-759b-4639-9598-abbfc7b96d22 req-38add6f3-2994-4461-94c0-ae6654208c07 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received unexpected event network-vif-plugged-79f4b8f5-d582-44c5-b8e0-a82ad73193de for instance with vm_state deleted and task_state None.
Nov 25 08:33:10 compute-0 podman[312691]: 2025-11-25 08:33:10.660926301 +0000 UTC m=+0.043295885 container create 78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 08:33:10 compute-0 systemd[1]: Started libpod-conmon-78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31.scope.
Nov 25 08:33:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3b17704a9232b7fd9a3c6391403db11755eeec93cf71409d4cffb3b11bf81a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:10 compute-0 podman[312691]: 2025-11-25 08:33:10.728774635 +0000 UTC m=+0.111144239 container init 78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:33:10 compute-0 podman[312691]: 2025-11-25 08:33:10.733867812 +0000 UTC m=+0.116237396 container start 78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 08:33:10 compute-0 podman[312691]: 2025-11-25 08:33:10.640479511 +0000 UTC m=+0.022849095 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:33:10 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[312705]: [NOTICE]   (312709) : New worker (312711) forked
Nov 25 08:33:10 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[312705]: [NOTICE]   (312709) : Loading success.
Nov 25 08:33:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:33:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3440581534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:11 compute-0 ovn_controller[152859]: 2025-11-25T08:33:11Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:1a:cb 10.100.0.14
Nov 25 08:33:11 compute-0 ovn_controller[152859]: 2025-11-25T08:33:11Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:1a:cb 10.100.0.14
Nov 25 08:33:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 262 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 207 op/s
Nov 25 08:33:11 compute-0 nova_compute[253538]: 2025-11-25 08:33:11.441 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:11 compute-0 nova_compute[253538]: 2025-11-25 08:33:11.442 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:11 compute-0 nova_compute[253538]: 2025-11-25 08:33:11.454 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:33:11 compute-0 nova_compute[253538]: 2025-11-25 08:33:11.514 253542 DEBUG nova.network.neutron [req-9450a81c-2182-4fba-8826-2ec29e919dec req-2459aebe-5265-4d4f-9540-c28782ad7407 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updated VIF entry in instance network info cache for port 582f57a6-32d3-44a0-ab47-d147a0bb0f43. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:33:11 compute-0 nova_compute[253538]: 2025-11-25 08:33:11.514 253542 DEBUG nova.network.neutron [req-9450a81c-2182-4fba-8826-2ec29e919dec req-2459aebe-5265-4d4f-9540-c28782ad7407 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:11 compute-0 nova_compute[253538]: 2025-11-25 08:33:11.529 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:11 compute-0 nova_compute[253538]: 2025-11-25 08:33:11.530 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:11 compute-0 nova_compute[253538]: 2025-11-25 08:33:11.538 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:33:11 compute-0 nova_compute[253538]: 2025-11-25 08:33:11.539 253542 INFO nova.compute.claims [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:33:11 compute-0 nova_compute[253538]: 2025-11-25 08:33:11.544 253542 DEBUG oslo_concurrency.lockutils [req-9450a81c-2182-4fba-8826-2ec29e919dec req-2459aebe-5265-4d4f-9540-c28782ad7407 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:33:11 compute-0 nova_compute[253538]: 2025-11-25 08:33:11.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:33:11 compute-0 nova_compute[253538]: 2025-11-25 08:33:11.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:33:11 compute-0 nova_compute[253538]: 2025-11-25 08:33:11.577 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 08:33:11 compute-0 nova_compute[253538]: 2025-11-25 08:33:11.670 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:12 compute-0 ceph-mon[75015]: pgmap v1495: 321 pgs: 321 active+clean; 262 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 207 op/s
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.122 253542 DEBUG oslo_concurrency.lockutils [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-9541f2fd-4ec3-47ef-a6a9-66e0052c303f" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.122 253542 DEBUG oslo_concurrency.lockutils [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-9541f2fd-4ec3-47ef-a6a9-66e0052c303f" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.123 253542 DEBUG nova.objects.instance [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:33:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2897483266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.209 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.214 253542 DEBUG nova.compute.provider_tree [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.224 253542 DEBUG nova.scheduler.client.report [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.244 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.244 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.287 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.288 253542 DEBUG nova.network.neutron [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.300 253542 INFO nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.315 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.389 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.390 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.391 253542 INFO nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Creating image(s)
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.414 253542 DEBUG nova.storage.rbd_utils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.441 253542 DEBUG nova.storage.rbd_utils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.461 253542 DEBUG nova.storage.rbd_utils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.467 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.505 253542 DEBUG nova.policy [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7ad88cb0e4cf4d0b8e4cbec835318015', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.540 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.542 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.543 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.544 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.571 253542 DEBUG nova.storage.rbd_utils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.576 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.614 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.660 253542 DEBUG nova.objects.instance [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.674 253542 DEBUG nova.network.neutron [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:33:12 compute-0 podman[312836]: 2025-11-25 08:33:12.826915234 +0000 UTC m=+0.066547149 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.841 253542 DEBUG nova.compute.manager [req-495902d4-5669-491e-95d7-6593ff750674 req-f00d369a-5082-4574-b94e-10a4e5a1992b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Received event network-vif-deleted-79f4b8f5-d582-44c5-b8e0-a82ad73193de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.859 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.905 253542 DEBUG nova.storage.rbd_utils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] resizing rbd image fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.943 253542 DEBUG nova.policy [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.981 253542 DEBUG nova.objects.instance [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'migration_context' on Instance uuid fb888d2a-db54-44dc-8ec7-db417fa3cff6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.995 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.996 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Ensure instance console log exists: /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.996 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.997 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:12 compute-0 nova_compute[253538]: 2025-11-25 08:33:12.997 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2897483266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 248 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 239 op/s
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.240 253542 DEBUG nova.network.neutron [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Successfully created port: dc1f5923-d984-4e49-bb97-bc1a77ade410 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.445 253542 DEBUG nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.446 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.446 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.447 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.447 253542 DEBUG nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.447 253542 WARNING nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 for instance with vm_state active and task_state None.
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.447 253542 DEBUG nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.448 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.448 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.448 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.449 253542 DEBUG nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.449 253542 WARNING nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 for instance with vm_state active and task_state None.
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.449 253542 DEBUG nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received event network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.449 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "528fb917-0169-441d-b32d-652963344aea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.450 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.450 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.450 253542 DEBUG nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Processing event network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.450 253542 DEBUG nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received event network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.451 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "528fb917-0169-441d-b32d-652963344aea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.451 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.451 253542 DEBUG oslo_concurrency.lockutils [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.452 253542 DEBUG nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] No waiting events found dispatching network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.452 253542 WARNING nova.compute.manager [req-9cca2f5a-cc7a-4b60-b2e2-c988fc95f339 req-1f1e712b-9930-451d-ae47-cfc9d6624291 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received unexpected event network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 for instance with vm_state building and task_state spawning.
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.452 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.455 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059593.4555595, 528fb917-0169-441d-b32d-652963344aea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.456 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] VM Resumed (Lifecycle Event)
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.457 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.460 253542 INFO nova.virt.libvirt.driver [-] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance spawned successfully.
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.460 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.475 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.479 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.482 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.482 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.483 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.483 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.484 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.484 253542 DEBUG nova.virt.libvirt.driver [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.508 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.548 253542 INFO nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Took 11.99 seconds to spawn the instance on the hypervisor.
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.549 253542 DEBUG nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.583 253542 INFO nova.compute.manager [None req-e714bd7c-bfc8-4e2d-b57b-cb7682c01223 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Pausing
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.584 253542 DEBUG nova.objects.instance [None req-e714bd7c-bfc8-4e2d-b57b-cb7682c01223 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'flavor' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.634 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059593.63393, 0feca801-4630-4450-b915-616d8496ab51 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.635 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Paused (Lifecycle Event)
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.643 253542 DEBUG nova.compute.manager [None req-e714bd7c-bfc8-4e2d-b57b-cb7682c01223 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.645 253542 INFO nova.compute.manager [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Took 13.02 seconds to build instance.
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.674 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.677 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.679 253542 DEBUG oslo_concurrency.lockutils [None req-09265231-b5f9-435b-aa4b-158217848829 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.706 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.739 253542 DEBUG nova.network.neutron [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Successfully updated port: 9541f2fd-4ec3-47ef-a6a9-66e0052c303f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.753 253542 DEBUG oslo_concurrency.lockutils [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.754 253542 DEBUG oslo_concurrency.lockutils [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.754 253542 DEBUG nova.network.neutron [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.899 253542 WARNING nova.network.neutron [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.900 253542 WARNING nova.network.neutron [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it
Nov 25 08:33:13 compute-0 nova_compute[253538]: 2025-11-25 08:33:13.900 253542 WARNING nova.network.neutron [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it
Nov 25 08:33:14 compute-0 nova_compute[253538]: 2025-11-25 08:33:14.098 253542 DEBUG nova.network.neutron [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Successfully updated port: dc1f5923-d984-4e49-bb97-bc1a77ade410 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:33:14 compute-0 nova_compute[253538]: 2025-11-25 08:33:14.111 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "refresh_cache-fb888d2a-db54-44dc-8ec7-db417fa3cff6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:33:14 compute-0 nova_compute[253538]: 2025-11-25 08:33:14.112 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquired lock "refresh_cache-fb888d2a-db54-44dc-8ec7-db417fa3cff6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:33:14 compute-0 nova_compute[253538]: 2025-11-25 08:33:14.112 253542 DEBUG nova.network.neutron [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:33:14 compute-0 ceph-mon[75015]: pgmap v1496: 321 pgs: 321 active+clean; 248 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 239 op/s
Nov 25 08:33:14 compute-0 nova_compute[253538]: 2025-11-25 08:33:14.226 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:14 compute-0 nova_compute[253538]: 2025-11-25 08:33:14.246 253542 DEBUG nova.network.neutron [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:33:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 287 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.4 MiB/s wr, 234 op/s
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.266 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "528fb917-0169-441d-b32d-652963344aea" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.267 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.267 253542 INFO nova.compute.manager [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Shelving
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.283 253542 DEBUG nova.virt.libvirt.driver [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.421 253542 DEBUG nova.network.neutron [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Updating instance_info_cache with network_info: [{"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.440 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Releasing lock "refresh_cache-fb888d2a-db54-44dc-8ec7-db417fa3cff6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.440 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Instance network_info: |[{"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.444 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Start _get_guest_xml network_info=[{"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.447 253542 WARNING nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.452 253542 DEBUG nova.virt.libvirt.host [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.453 253542 DEBUG nova.virt.libvirt.host [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.456 253542 DEBUG nova.virt.libvirt.host [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.456 253542 DEBUG nova.virt.libvirt.host [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.456 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.457 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.457 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.458 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.458 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.458 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.459 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.459 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.460 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.460 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.460 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.461 253542 DEBUG nova.virt.hardware [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.464 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.591 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.752 253542 INFO nova.compute.manager [None req-21b76c73-efff-4931-8e4f-c94d0ccb4ebd c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Unpausing
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.754 253542 DEBUG nova.objects.instance [None req-21b76c73-efff-4931-8e4f-c94d0ccb4ebd c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'flavor' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:33:15 compute-0 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.778 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059595.7784793, 0feca801-4630-4450-b915-616d8496ab51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.779 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Resumed (Lifecycle Event)
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.781 253542 DEBUG nova.virt.libvirt.guest [None req-21b76c73-efff-4931-8e4f-c94d0ccb4ebd c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.782 253542 DEBUG nova.compute.manager [None req-21b76c73-efff-4931-8e4f-c94d0ccb4ebd c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.791 253542 DEBUG nova.compute.manager [req-a3e9e76f-a050-484e-a9d1-77728fdda6bd req-92a3a015-dc81-4b54-9b3b-bf11151f547c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received event network-changed-dc1f5923-d984-4e49-bb97-bc1a77ade410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.791 253542 DEBUG nova.compute.manager [req-a3e9e76f-a050-484e-a9d1-77728fdda6bd req-92a3a015-dc81-4b54-9b3b-bf11151f547c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Refreshing instance network info cache due to event network-changed-dc1f5923-d984-4e49-bb97-bc1a77ade410. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.791 253542 DEBUG oslo_concurrency.lockutils [req-a3e9e76f-a050-484e-a9d1-77728fdda6bd req-92a3a015-dc81-4b54-9b3b-bf11151f547c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-fb888d2a-db54-44dc-8ec7-db417fa3cff6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.792 253542 DEBUG oslo_concurrency.lockutils [req-a3e9e76f-a050-484e-a9d1-77728fdda6bd req-92a3a015-dc81-4b54-9b3b-bf11151f547c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-fb888d2a-db54-44dc-8ec7-db417fa3cff6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.792 253542 DEBUG nova.network.neutron [req-a3e9e76f-a050-484e-a9d1-77728fdda6bd req-92a3a015-dc81-4b54-9b3b-bf11151f547c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Refreshing network info cache for port dc1f5923-d984-4e49-bb97-bc1a77ade410 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.805 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.811 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:33:15 compute-0 podman[312949]: 2025-11-25 08:33:15.827297672 +0000 UTC m=+0.074035331 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.840 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (unpausing). Skip.
Nov 25 08:33:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:33:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3127209050' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.945 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.966 253542 DEBUG nova.storage.rbd_utils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:15 compute-0 nova_compute[253538]: 2025-11-25 08:33:15.969 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:33:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2117530641' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:16 compute-0 ceph-mon[75015]: pgmap v1497: 321 pgs: 321 active+clean; 287 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.4 MiB/s wr, 234 op/s
Nov 25 08:33:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3127209050' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.435 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.438 253542 DEBUG nova.virt.libvirt.vif [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:33:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-2088861040',display_name='tempest-ServerDiskConfigTestJSON-server-2088861040',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-2088861040',id=57,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-piq0ju9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDisk
ConfigTestJSON-1655399928-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:33:12Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=fb888d2a-db54-44dc-8ec7-db417fa3cff6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.438 253542 DEBUG nova.network.os_vif_util [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.439 253542 DEBUG nova.network.os_vif_util [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:1c:27,bridge_name='br-int',has_traffic_filtering=True,id=dc1f5923-d984-4e49-bb97-bc1a77ade410,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc1f5923-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.440 253542 DEBUG nova.objects.instance [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'pci_devices' on Instance uuid fb888d2a-db54-44dc-8ec7-db417fa3cff6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.457 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:33:16 compute-0 nova_compute[253538]:   <uuid>fb888d2a-db54-44dc-8ec7-db417fa3cff6</uuid>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   <name>instance-00000039</name>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-2088861040</nova:name>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:33:15</nova:creationTime>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:33:16 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:33:16 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:33:16 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:33:16 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:33:16 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:33:16 compute-0 nova_compute[253538]:         <nova:user uuid="7ad88cb0e4cf4d0b8e4cbec835318015">tempest-ServerDiskConfigTestJSON-1655399928-project-member</nova:user>
Nov 25 08:33:16 compute-0 nova_compute[253538]:         <nova:project uuid="dc93aa65bef7473d961e0cad1e8f2962">tempest-ServerDiskConfigTestJSON-1655399928</nova:project>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:33:16 compute-0 nova_compute[253538]:         <nova:port uuid="dc1f5923-d984-4e49-bb97-bc1a77ade410">
Nov 25 08:33:16 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <system>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <entry name="serial">fb888d2a-db54-44dc-8ec7-db417fa3cff6</entry>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <entry name="uuid">fb888d2a-db54-44dc-8ec7-db417fa3cff6</entry>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     </system>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   <os>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   </os>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   <features>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   </features>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk">
Nov 25 08:33:16 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:33:16 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk.config">
Nov 25 08:33:16 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:33:16 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:7d:1c:27"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <target dev="tapdc1f5923-d9"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6/console.log" append="off"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <video>
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     </video>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:33:16 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:33:16 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:33:16 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:33:16 compute-0 nova_compute[253538]: </domain>
Nov 25 08:33:16 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.463 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Preparing to wait for external event network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.464 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.464 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.465 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.465 253542 DEBUG nova.virt.libvirt.vif [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:33:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-2088861040',display_name='tempest-ServerDiskConfigTestJSON-server-2088861040',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-2088861040',id=57,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-piq0ju9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:33:12Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=fb888d2a-db54-44dc-8ec7-db417fa3cff6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.466 253542 DEBUG nova.network.os_vif_util [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.467 253542 DEBUG nova.network.os_vif_util [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:1c:27,bridge_name='br-int',has_traffic_filtering=True,id=dc1f5923-d984-4e49-bb97-bc1a77ade410,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc1f5923-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.468 253542 DEBUG os_vif [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:1c:27,bridge_name='br-int',has_traffic_filtering=True,id=dc1f5923-d984-4e49-bb97-bc1a77ade410,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc1f5923-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.469 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.469 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.470 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.472 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.472 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc1f5923-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.473 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc1f5923-d9, col_values=(('external_ids', {'iface-id': 'dc1f5923-d984-4e49-bb97-bc1a77ade410', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:1c:27', 'vm-uuid': 'fb888d2a-db54-44dc-8ec7-db417fa3cff6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.474 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:16 compute-0 NetworkManager[48915]: <info>  [1764059596.4756] manager: (tapdc1f5923-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.478 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.484 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.485 253542 INFO os_vif [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:1c:27,bridge_name='br-int',has_traffic_filtering=True,id=dc1f5923-d984-4e49-bb97-bc1a77ade410,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc1f5923-d9')
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.549 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.549 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.550 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] No VIF found with MAC fa:16:3e:7d:1c:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.550 253542 INFO nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Using config drive
Nov 25 08:33:16 compute-0 nova_compute[253538]: 2025-11-25 08:33:16.569 253542 DEBUG nova.storage.rbd_utils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 213 op/s
Nov 25 08:33:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2117530641' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:17 compute-0 nova_compute[253538]: 2025-11-25 08:33:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:33:18 compute-0 ceph-mon[75015]: pgmap v1498: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 213 op/s
Nov 25 08:33:19 compute-0 nova_compute[253538]: 2025-11-25 08:33:19.124 253542 INFO nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Creating config drive at /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6/disk.config
Nov 25 08:33:19 compute-0 nova_compute[253538]: 2025-11-25 08:33:19.132 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2w_aq4k4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:19 compute-0 nova_compute[253538]: 2025-11-25 08:33:19.197 253542 DEBUG nova.compute.manager [req-7c5cbe18-4074-4fe0-b624-6fbdbc718a64 req-a832312b-ec77-4f03-a9b5-c2a2ba13565e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-changed-9541f2fd-4ec3-47ef-a6a9-66e0052c303f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:19 compute-0 nova_compute[253538]: 2025-11-25 08:33:19.198 253542 DEBUG nova.compute.manager [req-7c5cbe18-4074-4fe0-b624-6fbdbc718a64 req-a832312b-ec77-4f03-a9b5-c2a2ba13565e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing instance network info cache due to event network-changed-9541f2fd-4ec3-47ef-a6a9-66e0052c303f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:33:19 compute-0 nova_compute[253538]: 2025-11-25 08:33:19.199 253542 DEBUG oslo_concurrency.lockutils [req-7c5cbe18-4074-4fe0-b624-6fbdbc718a64 req-a832312b-ec77-4f03-a9b5-c2a2ba13565e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:33:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 212 op/s
Nov 25 08:33:19 compute-0 nova_compute[253538]: 2025-11-25 08:33:19.288 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2w_aq4k4" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:19 compute-0 nova_compute[253538]: 2025-11-25 08:33:19.316 253542 DEBUG nova.storage.rbd_utils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] rbd image fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:19 compute-0 nova_compute[253538]: 2025-11-25 08:33:19.320 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6/disk.config fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:19 compute-0 nova_compute[253538]: 2025-11-25 08:33:19.550 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:33:19 compute-0 nova_compute[253538]: 2025-11-25 08:33:19.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:33:19 compute-0 nova_compute[253538]: 2025-11-25 08:33:19.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:33:19 compute-0 nova_compute[253538]: 2025-11-25 08:33:19.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:19 compute-0 nova_compute[253538]: 2025-11-25 08:33:19.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:19 compute-0 nova_compute[253538]: 2025-11-25 08:33:19.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:19 compute-0 nova_compute[253538]: 2025-11-25 08:33:19.579 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:33:19 compute-0 nova_compute[253538]: 2025-11-25 08:33:19.579 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:19 compute-0 ceph-mon[75015]: pgmap v1499: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 212 op/s
Nov 25 08:33:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:33:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4123912326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.031 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.112 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.113 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.120 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.120 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.128 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.128 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.133 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.134 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.211 253542 DEBUG oslo_concurrency.processutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6/disk.config fb888d2a-db54-44dc-8ec7-db417fa3cff6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.891s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.211 253542 INFO nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Deleting local config drive /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6/disk.config because it was imported into RBD.
Nov 25 08:33:20 compute-0 kernel: tapdc1f5923-d9: entered promiscuous mode
Nov 25 08:33:20 compute-0 NetworkManager[48915]: <info>  [1764059600.2616] manager: (tapdc1f5923-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/232)
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.263 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:20 compute-0 ovn_controller[152859]: 2025-11-25T08:33:20Z|00496|binding|INFO|Claiming lport dc1f5923-d984-4e49-bb97-bc1a77ade410 for this chassis.
Nov 25 08:33:20 compute-0 ovn_controller[152859]: 2025-11-25T08:33:20Z|00497|binding|INFO|dc1f5923-d984-4e49-bb97-bc1a77ade410: Claiming fa:16:3e:7d:1c:27 10.100.0.6
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.273 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:1c:27 10.100.0.6'], port_security=['fa:16:3e:7d:1c:27 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'fb888d2a-db54-44dc-8ec7-db417fa3cff6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=dc1f5923-d984-4e49-bb97-bc1a77ade410) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.275 162739 INFO neutron.agent.ovn.metadata.agent [-] Port dc1f5923-d984-4e49-bb97-bc1a77ade410 in datapath eb25945d-6002-4a99-b682-034a8a3dc901 bound to our chassis
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.277 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 08:33:20 compute-0 ovn_controller[152859]: 2025-11-25T08:33:20Z|00498|binding|INFO|Setting lport dc1f5923-d984-4e49-bb97-bc1a77ade410 ovn-installed in OVS
Nov 25 08:33:20 compute-0 ovn_controller[152859]: 2025-11-25T08:33:20Z|00499|binding|INFO|Setting lport dc1f5923-d984-4e49-bb97-bc1a77ade410 up in Southbound
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.288 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ca74b7f6-9115-420e-990f-9c5bf595ac86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.288 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.289 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb25945d-61 in ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.295 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb25945d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.295 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[605b3903-63eb-4559-ae55-6a912ff58a83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.296 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cf67c914-1267-4bc4-8510-757c668b5b35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:20 compute-0 systemd-machined[215790]: New machine qemu-65-instance-00000039.
Nov 25 08:33:20 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-00000039.
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.309 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[18896c09-9a11-42e1-bd67-00fa6905df4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.328 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6fb02e-cd2a-4445-953b-735a64287850]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:20 compute-0 systemd-udevd[313116]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:33:20 compute-0 NetworkManager[48915]: <info>  [1764059600.3536] device (tapdc1f5923-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:33:20 compute-0 NetworkManager[48915]: <info>  [1764059600.3546] device (tapdc1f5923-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.357 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3c94712b-9997-4fa7-b28b-7e5bca84c862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.365 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5e6f3402-79c0-4d84-9e0c-e5aa1a2068fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:20 compute-0 NetworkManager[48915]: <info>  [1764059600.3658] manager: (tapeb25945d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/233)
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.401 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c64831-2ad3-42f0-930d-1a8066a7bdf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.405 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[83c99464-6d09-44b0-b7b2-73ceddcb8a48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:20 compute-0 NetworkManager[48915]: <info>  [1764059600.4295] device (tapeb25945d-60): carrier: link connected
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.435 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[61209d49-69b9-4c30-a361-b972731baf78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.457 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59386e88-8819-496c-aff1-d6bf0e585a7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495502, 'reachable_time': 43707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313146, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.470 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.471 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3612MB free_disk=59.85540008544922GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.471 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.472 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.479 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c48f7aa2-e791-4c1e-858b-724903bad402]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:fcf3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495502, 'tstamp': 495502}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313147, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.500 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6c067e86-de80-4c1c-af4f-80380dd77b93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb25945d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:fc:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495502, 'reachable_time': 43707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313148, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.535 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2829fd98-b34e-47b3-ab85-ea26dcf491db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.569 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0feca801-4630-4450-b915-616d8496ab51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.569 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 8191f951-44bc-4371-957a-f2e7d37c1a32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.570 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 528fb917-0169-441d-b32d-652963344aea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.570 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance fb888d2a-db54-44dc-8ec7-db417fa3cff6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.570 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.571 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.597 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5911e3-f49e-4341-8827-7711d515885d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.598 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.600 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.600 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.600 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb25945d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:20 compute-0 kernel: tapeb25945d-60: entered promiscuous mode
Nov 25 08:33:20 compute-0 NetworkManager[48915]: <info>  [1764059600.6035] manager: (tapeb25945d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.605 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.606 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb25945d-60, col_values=(('external_ids', {'iface-id': 'f4a838c4-0817-4ff4-8792-2e2721905e98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:20 compute-0 ovn_controller[152859]: 2025-11-25T08:33:20Z|00500|binding|INFO|Releasing lport f4a838c4-0817-4ff4-8792-2e2721905e98 from this chassis (sb_readonly=0)
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.622 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.623 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[552464f0-0d7e-4249-b3d2-9590c94be22c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.624 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/eb25945d-6002-4a99-b682-034a8a3dc901.pid.haproxy
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID eb25945d-6002-4a99-b682-034a8a3dc901
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:33:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:20.625 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'env', 'PROCESS_TAG=haproxy-eb25945d-6002-4a99-b682-034a8a3dc901', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb25945d-6002-4a99-b682-034a8a3dc901.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.673 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.753 253542 DEBUG nova.network.neutron [req-a3e9e76f-a050-484e-a9d1-77728fdda6bd req-92a3a015-dc81-4b54-9b3b-bf11151f547c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Updated VIF entry in instance network info cache for port dc1f5923-d984-4e49-bb97-bc1a77ade410. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.754 253542 DEBUG nova.network.neutron [req-a3e9e76f-a050-484e-a9d1-77728fdda6bd req-92a3a015-dc81-4b54-9b3b-bf11151f547c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Updating instance_info_cache with network_info: [{"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:33:20 compute-0 nova_compute[253538]: 2025-11-25 08:33:20.781 253542 DEBUG oslo_concurrency.lockutils [req-a3e9e76f-a050-484e-a9d1-77728fdda6bd req-92a3a015-dc81-4b54-9b3b-bf11151f547c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-fb888d2a-db54-44dc-8ec7-db417fa3cff6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:33:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 185 op/s
Nov 25 08:33:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:33:21 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3007355957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4123912326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.265 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.271 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.284 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.344 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.346 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:21 compute-0 sudo[313239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:33:21 compute-0 sudo[313239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:21 compute-0 sudo[313239]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:21 compute-0 podman[313238]: 2025-11-25 08:33:21.36302011 +0000 UTC m=+0.032602017 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:33:21 compute-0 sudo[313275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:33:21 compute-0 sudo[313275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.476 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:21 compute-0 sudo[313275]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.516 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059601.5157537, fb888d2a-db54-44dc-8ec7-db417fa3cff6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.517 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] VM Started (Lifecycle Event)
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.535 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.538 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059601.5161715, fb888d2a-db54-44dc-8ec7-db417fa3cff6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.539 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] VM Paused (Lifecycle Event)
Nov 25 08:33:21 compute-0 sudo[313305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:33:21 compute-0 sudo[313305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:21 compute-0 sudo[313305]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.554 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.557 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:33:21 compute-0 podman[313238]: 2025-11-25 08:33:21.563119658 +0000 UTC m=+0.232701535 container create e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.573 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:33:21 compute-0 sudo[313330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:33:21 compute-0 sudo[313330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.634 253542 DEBUG nova.network.neutron [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.707 253542 DEBUG oslo_concurrency.lockutils [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.708 253542 DEBUG oslo_concurrency.lockutils [req-7c5cbe18-4074-4fe0-b624-6fbdbc718a64 req-a832312b-ec77-4f03-a9b5-c2a2ba13565e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.708 253542 DEBUG nova.network.neutron [req-7c5cbe18-4074-4fe0-b624-6fbdbc718a64 req-a832312b-ec77-4f03-a9b5-c2a2ba13565e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Refreshing network info cache for port 9541f2fd-4ec3-47ef-a6a9-66e0052c303f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.712 253542 DEBUG nova.virt.libvirt.vif [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.712 253542 DEBUG nova.network.os_vif_util [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.713 253542 DEBUG nova.network.os_vif_util [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:39:c6,bridge_name='br-int',has_traffic_filtering=True,id=9541f2fd-4ec3-47ef-a6a9-66e0052c303f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9541f2fd-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.713 253542 DEBUG os_vif [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:39:c6,bridge_name='br-int',has_traffic_filtering=True,id=9541f2fd-4ec3-47ef-a6a9-66e0052c303f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9541f2fd-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.714 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.714 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.715 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:21 compute-0 systemd[1]: Started libpod-conmon-e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b.scope.
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.717 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.718 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9541f2fd-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.719 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9541f2fd-4e, col_values=(('external_ids', {'iface-id': '9541f2fd-4ec3-47ef-a6a9-66e0052c303f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:39:c6', 'vm-uuid': '8191f951-44bc-4371-957a-f2e7d37c1a32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:21 compute-0 NetworkManager[48915]: <info>  [1764059601.7224] manager: (tap9541f2fd-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.738 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.739 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.740 253542 INFO os_vif [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:39:c6,bridge_name='br-int',has_traffic_filtering=True,id=9541f2fd-4ec3-47ef-a6a9-66e0052c303f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9541f2fd-4e')
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.741 253542 DEBUG nova.virt.libvirt.vif [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.741 253542 DEBUG nova.network.os_vif_util [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.742 253542 DEBUG nova.network.os_vif_util [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:39:c6,bridge_name='br-int',has_traffic_filtering=True,id=9541f2fd-4ec3-47ef-a6a9-66e0052c303f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9541f2fd-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.745 253542 DEBUG nova.virt.libvirt.guest [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] attach device xml: <interface type="ethernet">
Nov 25 08:33:21 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:b9:39:c6"/>
Nov 25 08:33:21 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:33:21 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:33:21 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:33:21 compute-0 nova_compute[253538]:   <target dev="tap9541f2fd-4e"/>
Nov 25 08:33:21 compute-0 nova_compute[253538]: </interface>
Nov 25 08:33:21 compute-0 nova_compute[253538]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 08:33:21 compute-0 kernel: tap9541f2fd-4e: entered promiscuous mode
Nov 25 08:33:21 compute-0 NetworkManager[48915]: <info>  [1764059601.7564] manager: (tap9541f2fd-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Nov 25 08:33:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:33:21 compute-0 systemd-udevd[313130]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed956ae5cd2d22f379fbcbbfc4cf7ec18f54c995101ca09b784bc90a153cec72/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:21 compute-0 ovn_controller[152859]: 2025-11-25T08:33:21Z|00501|binding|INFO|Claiming lport 9541f2fd-4ec3-47ef-a6a9-66e0052c303f for this chassis.
Nov 25 08:33:21 compute-0 ovn_controller[152859]: 2025-11-25T08:33:21Z|00502|binding|INFO|9541f2fd-4ec3-47ef-a6a9-66e0052c303f: Claiming fa:16:3e:b9:39:c6 10.100.0.5
Nov 25 08:33:21 compute-0 NetworkManager[48915]: <info>  [1764059601.7744] device (tap9541f2fd-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:33:21 compute-0 NetworkManager[48915]: <info>  [1764059601.7752] device (tap9541f2fd-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:33:21 compute-0 ovn_controller[152859]: 2025-11-25T08:33:21Z|00503|binding|INFO|Setting lport 9541f2fd-4ec3-47ef-a6a9-66e0052c303f ovn-installed in OVS
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.791 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.793 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:21 compute-0 ovn_controller[152859]: 2025-11-25T08:33:21Z|00504|binding|INFO|Setting lport 9541f2fd-4ec3-47ef-a6a9-66e0052c303f up in Southbound
Nov 25 08:33:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:21.823 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:39:c6 10.100.0.5'], port_security=['fa:16:3e:b9:39:c6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-790044012', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8191f951-44bc-4371-957a-f2e7d37c1a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-790044012', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9541f2fd-4ec3-47ef-a6a9-66e0052c303f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:21 compute-0 podman[313238]: 2025-11-25 08:33:21.834017909 +0000 UTC m=+0.503599816 container init e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:33:21 compute-0 podman[313238]: 2025-11-25 08:33:21.844401308 +0000 UTC m=+0.513983185 container start e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 08:33:21 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[313358]: [NOTICE]   (313379) : New worker (313383) forked
Nov 25 08:33:21 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[313358]: [NOTICE]   (313379) : Loading success.
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.909 253542 DEBUG nova.virt.libvirt.driver [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.909 253542 DEBUG nova.virt.libvirt.driver [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.909 253542 DEBUG nova.virt.libvirt.driver [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:1a:58:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.910 253542 DEBUG nova.virt.libvirt.driver [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:3a:eb:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.910 253542 DEBUG nova.virt.libvirt.driver [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:7a:1a:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.910 253542 DEBUG nova.virt.libvirt.driver [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:b9:39:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:33:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:21.943 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9541f2fd-4ec3-47ef-a6a9-66e0052c303f in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 08:33:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:21.946 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:33:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:21.960 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[92fefc84-c9d8-4b27-81f3-01df33e39fd9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:21 compute-0 nova_compute[253538]: 2025-11-25 08:33:21.992 253542 DEBUG nova.virt.libvirt.guest [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:33:21 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:33:21 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 08:33:21 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:33:21</nova:creationTime>
Nov 25 08:33:21 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:33:21 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:33:21 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:33:21 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:33:21 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:33:21 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:33:21 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:33:21 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:33:21 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:33:21 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:33:21 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:33:21 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:33:21 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:33:21 compute-0 nova_compute[253538]:     <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 08:33:21 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:33:21 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:21 compute-0 nova_compute[253538]:     <nova:port uuid="223207be-35e0-4b8b-bf78-113792059910">
Nov 25 08:33:21 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 08:33:21 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:21 compute-0 nova_compute[253538]:     <nova:port uuid="582f57a6-32d3-44a0-ab47-d147a0bb0f43">
Nov 25 08:33:21 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 08:33:21 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:21 compute-0 nova_compute[253538]:     <nova:port uuid="9541f2fd-4ec3-47ef-a6a9-66e0052c303f">
Nov 25 08:33:21 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 08:33:21 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:21 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:33:21 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:33:21 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:33:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.017 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f602d176-501c-4f39-b391-f83210c2c782]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.021 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c1846b33-9686-4971-bd74-21c11d006192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.037 253542 DEBUG nova.compute.manager [req-891871b1-31a7-4ea9-bc48-2a15563445f6 req-254fd201-49c9-43e4-8811-4ecf5c8c9efb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received event network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.037 253542 DEBUG oslo_concurrency.lockutils [req-891871b1-31a7-4ea9-bc48-2a15563445f6 req-254fd201-49c9-43e4-8811-4ecf5c8c9efb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.037 253542 DEBUG oslo_concurrency.lockutils [req-891871b1-31a7-4ea9-bc48-2a15563445f6 req-254fd201-49c9-43e4-8811-4ecf5c8c9efb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.038 253542 DEBUG oslo_concurrency.lockutils [req-891871b1-31a7-4ea9-bc48-2a15563445f6 req-254fd201-49c9-43e4-8811-4ecf5c8c9efb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.038 253542 DEBUG nova.compute.manager [req-891871b1-31a7-4ea9-bc48-2a15563445f6 req-254fd201-49c9-43e4-8811-4ecf5c8c9efb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Processing event network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.038 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.046 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059602.0463004, fb888d2a-db54-44dc-8ec7-db417fa3cff6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.046 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] VM Resumed (Lifecycle Event)
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.049 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.053 253542 INFO nova.virt.libvirt.driver [-] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Instance spawned successfully.
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.054 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:33:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.058 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[18f17941-7d38-4c07-b48a-7a2897b25d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.063 253542 DEBUG oslo_concurrency.lockutils [None req-1bbadccb-c6ee-41c2-883b-6cdb2308cf5a 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-9541f2fd-4ec3-47ef-a6a9-66e0052c303f" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.066 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.071 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:33:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.075 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[63537506-ff1c-4bfd-b2f6-2d0d160db728]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490985, 'reachable_time': 15416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313400, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.090 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.091 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.091 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.092 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.092 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.093 253542 DEBUG nova.virt.libvirt.driver [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:33:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.096 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c1cd297f-d05c-413d-a485-b8dd2aea124d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490996, 'tstamp': 490996}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313401, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490999, 'tstamp': 490999}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313401, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.097 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:33:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.098 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.100 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.101 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.102 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.102 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:22.103 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:22 compute-0 sudo[313330]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.266 253542 INFO nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Took 9.88 seconds to spawn the instance on the hypervisor.
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.267 253542 DEBUG nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:22 compute-0 ceph-mon[75015]: pgmap v1500: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 185 op/s
Nov 25 08:33:22 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3007355957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:33:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:33:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:33:22 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:33:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:33:22 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:33:22 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 98858d83-4ef4-4cbe-ad4e-0e01660e7266 does not exist
Nov 25 08:33:22 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev beaeef86-70aa-4762-8f03-cd1ef0d76ee4 does not exist
Nov 25 08:33:22 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 26dccc6c-2afc-4309-9281-5d3cd0f8265b does not exist
Nov 25 08:33:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:33:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:33:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:33:22 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:33:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:33:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.399 253542 INFO nova.compute.manager [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Took 10.90 seconds to build instance.
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.432 253542 DEBUG oslo_concurrency.lockutils [None req-81d88d46-8ae0-4bea-8353-0eaa046e7abc 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:22 compute-0 sudo[313416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:33:22 compute-0 sudo[313416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:22 compute-0 sudo[313416]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:22 compute-0 sudo[313441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:33:22 compute-0 sudo[313441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:22 compute-0 sudo[313441]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:22 compute-0 sudo[313466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:33:22 compute-0 sudo[313466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:22 compute-0 sudo[313466]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:22 compute-0 sudo[313491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:33:22 compute-0 sudo[313491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.659 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059587.6583495, c0942fc7-74d4-4fc8-9574-4fea9179e71b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.660 253542 INFO nova.compute.manager [-] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] VM Stopped (Lifecycle Event)
Nov 25 08:33:22 compute-0 nova_compute[253538]: 2025-11-25 08:33:22.677 253542 DEBUG nova.compute.manager [None req-c77bd0b9-4498-45a3-b736-6fa40271e370 - - - - - -] [instance: c0942fc7-74d4-4fc8-9574-4fea9179e71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:22 compute-0 ovn_controller[152859]: 2025-11-25T08:33:22Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:cd:40 10.100.0.13
Nov 25 08:33:23 compute-0 podman[313551]: 2025-11-25 08:33:23.001515957 +0000 UTC m=+0.058718819 container create 0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 08:33:23 compute-0 podman[313551]: 2025-11-25 08:33:22.962463617 +0000 UTC m=+0.019666489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:33:23 compute-0 systemd[1]: Started libpod-conmon-0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8.scope.
Nov 25 08:33:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:33:23 compute-0 podman[313551]: 2025-11-25 08:33:23.18358537 +0000 UTC m=+0.240788232 container init 0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 25 08:33:23 compute-0 podman[313551]: 2025-11-25 08:33:23.193390243 +0000 UTC m=+0.250593095 container start 0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 08:33:23 compute-0 serene_almeida[313565]: 167 167
Nov 25 08:33:23 compute-0 systemd[1]: libpod-0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8.scope: Deactivated successfully.
Nov 25 08:33:23 compute-0 conmon[313565]: conmon 0d04284f5d6c7a5b663a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8.scope/container/memory.events
Nov 25 08:33:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 168 op/s
Nov 25 08:33:23 compute-0 podman[313551]: 2025-11-25 08:33:23.229153995 +0000 UTC m=+0.286356847 container attach 0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 08:33:23 compute-0 podman[313551]: 2025-11-25 08:33:23.230001797 +0000 UTC m=+0.287204669 container died 0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 08:33:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:33:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:33:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:33:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:33:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:33:23 compute-0 nova_compute[253538]: 2025-11-25 08:33:23.318 253542 DEBUG nova.compute.manager [req-4cd832e7-ed61-4647-817f-cba40f80edd3 req-bef9a756-a49a-40e6-a604-0da79f140caa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-9541f2fd-4ec3-47ef-a6a9-66e0052c303f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:33:23 compute-0 nova_compute[253538]: 2025-11-25 08:33:23.321 253542 DEBUG oslo_concurrency.lockutils [req-4cd832e7-ed61-4647-817f-cba40f80edd3 req-bef9a756-a49a-40e6-a604-0da79f140caa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:23 compute-0 nova_compute[253538]: 2025-11-25 08:33:23.321 253542 DEBUG oslo_concurrency.lockutils [req-4cd832e7-ed61-4647-817f-cba40f80edd3 req-bef9a756-a49a-40e6-a604-0da79f140caa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:23 compute-0 nova_compute[253538]: 2025-11-25 08:33:23.322 253542 DEBUG oslo_concurrency.lockutils [req-4cd832e7-ed61-4647-817f-cba40f80edd3 req-bef9a756-a49a-40e6-a604-0da79f140caa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:23 compute-0 nova_compute[253538]: 2025-11-25 08:33:23.322 253542 DEBUG nova.compute.manager [req-4cd832e7-ed61-4647-817f-cba40f80edd3 req-bef9a756-a49a-40e6-a604-0da79f140caa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-9541f2fd-4ec3-47ef-a6a9-66e0052c303f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:23 compute-0 nova_compute[253538]: 2025-11-25 08:33:23.323 253542 WARNING nova.compute.manager [req-4cd832e7-ed61-4647-817f-cba40f80edd3 req-bef9a756-a49a-40e6-a604-0da79f140caa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-9541f2fd-4ec3-47ef-a6a9-66e0052c303f for instance with vm_state active and task_state None.
Nov 25 08:33:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-56e0d7094f993eb703b400e74939d34d77075dff8cc8cd9364e06beae62b565e-merged.mount: Deactivated successfully.
Nov 25 08:33:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:33:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:33:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:33:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:33:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:33:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:33:23 compute-0 podman[313551]: 2025-11-25 08:33:23.6501711 +0000 UTC m=+0.707373972 container remove 0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:33:23 compute-0 systemd[1]: libpod-conmon-0d04284f5d6c7a5b663a053ebe31651c7cc53c2d38a75eb526214916b64670a8.scope: Deactivated successfully.
Nov 25 08:33:23 compute-0 nova_compute[253538]: 2025-11-25 08:33:23.890 253542 DEBUG nova.network.neutron [req-7c5cbe18-4074-4fe0-b624-6fbdbc718a64 req-a832312b-ec77-4f03-a9b5-c2a2ba13565e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updated VIF entry in instance network info cache for port 9541f2fd-4ec3-47ef-a6a9-66e0052c303f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:33:23 compute-0 nova_compute[253538]: 2025-11-25 08:33:23.891 253542 DEBUG nova.network.neutron [req-7c5cbe18-4074-4fe0-b624-6fbdbc718a64 req-a832312b-ec77-4f03-a9b5-c2a2ba13565e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:23 compute-0 podman[313593]: 2025-11-25 08:33:23.908444152 +0000 UTC m=+0.052081081 container create ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:33:23 compute-0 nova_compute[253538]: 2025-11-25 08:33:23.909 253542 DEBUG oslo_concurrency.lockutils [req-7c5cbe18-4074-4fe0-b624-6fbdbc718a64 req-a832312b-ec77-4f03-a9b5-c2a2ba13565e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:33:23 compute-0 systemd[1]: Started libpod-conmon-ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0.scope.
Nov 25 08:33:23 compute-0 podman[313593]: 2025-11-25 08:33:23.88121661 +0000 UTC m=+0.024853559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:33:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98274429cf82ccaddf7a858c684a0f74f27d533c50c85be35abaad5dd5a1d605/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98274429cf82ccaddf7a858c684a0f74f27d533c50c85be35abaad5dd5a1d605/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98274429cf82ccaddf7a858c684a0f74f27d533c50c85be35abaad5dd5a1d605/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98274429cf82ccaddf7a858c684a0f74f27d533c50c85be35abaad5dd5a1d605/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98274429cf82ccaddf7a858c684a0f74f27d533c50c85be35abaad5dd5a1d605/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:24 compute-0 podman[313593]: 2025-11-25 08:33:24.106891774 +0000 UTC m=+0.250528723 container init ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:33:24 compute-0 podman[313593]: 2025-11-25 08:33:24.113222225 +0000 UTC m=+0.256859194 container start ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 08:33:24 compute-0 podman[313593]: 2025-11-25 08:33:24.146077808 +0000 UTC m=+0.289714757 container attach ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 25 08:33:24 compute-0 nova_compute[253538]: 2025-11-25 08:33:24.285 253542 DEBUG nova.compute.manager [req-24a92d83-b85d-4de1-a5b8-f2eb9a87df61 req-277b93f0-c508-4252-9ce8-1a6dd530b7cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received event network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:24 compute-0 nova_compute[253538]: 2025-11-25 08:33:24.286 253542 DEBUG oslo_concurrency.lockutils [req-24a92d83-b85d-4de1-a5b8-f2eb9a87df61 req-277b93f0-c508-4252-9ce8-1a6dd530b7cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:24 compute-0 nova_compute[253538]: 2025-11-25 08:33:24.287 253542 DEBUG oslo_concurrency.lockutils [req-24a92d83-b85d-4de1-a5b8-f2eb9a87df61 req-277b93f0-c508-4252-9ce8-1a6dd530b7cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:24 compute-0 nova_compute[253538]: 2025-11-25 08:33:24.287 253542 DEBUG oslo_concurrency.lockutils [req-24a92d83-b85d-4de1-a5b8-f2eb9a87df61 req-277b93f0-c508-4252-9ce8-1a6dd530b7cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:24 compute-0 nova_compute[253538]: 2025-11-25 08:33:24.287 253542 DEBUG nova.compute.manager [req-24a92d83-b85d-4de1-a5b8-f2eb9a87df61 req-277b93f0-c508-4252-9ce8-1a6dd530b7cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] No waiting events found dispatching network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:24 compute-0 nova_compute[253538]: 2025-11-25 08:33:24.287 253542 WARNING nova.compute.manager [req-24a92d83-b85d-4de1-a5b8-f2eb9a87df61 req-277b93f0-c508-4252-9ce8-1a6dd530b7cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received unexpected event network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 for instance with vm_state active and task_state None.
Nov 25 08:33:24 compute-0 ovn_controller[152859]: 2025-11-25T08:33:24Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b9:39:c6 10.100.0.5
Nov 25 08:33:24 compute-0 ovn_controller[152859]: 2025-11-25T08:33:24Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:39:c6 10.100.0.5
Nov 25 08:33:24 compute-0 nova_compute[253538]: 2025-11-25 08:33:24.346 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:33:24 compute-0 ceph-mon[75015]: pgmap v1501: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 168 op/s
Nov 25 08:33:25 compute-0 sharp_chatterjee[313609]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:33:25 compute-0 sharp_chatterjee[313609]: --> relative data size: 1.0
Nov 25 08:33:25 compute-0 sharp_chatterjee[313609]: --> All data devices are unavailable
Nov 25 08:33:25 compute-0 systemd[1]: libpod-ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0.scope: Deactivated successfully.
Nov 25 08:33:25 compute-0 podman[313593]: 2025-11-25 08:33:25.150300338 +0000 UTC m=+1.293937267 container died ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 08:33:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 192 op/s
Nov 25 08:33:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-98274429cf82ccaddf7a858c684a0f74f27d533c50c85be35abaad5dd5a1d605-merged.mount: Deactivated successfully.
Nov 25 08:33:25 compute-0 nova_compute[253538]: 2025-11-25 08:33:25.445 253542 DEBUG nova.virt.libvirt.driver [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:33:25 compute-0 nova_compute[253538]: 2025-11-25 08:33:25.604 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:33:26 compute-0 ceph-mon[75015]: pgmap v1502: 321 pgs: 321 active+clean; 295 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 192 op/s
Nov 25 08:33:26 compute-0 podman[313593]: 2025-11-25 08:33:26.342709245 +0000 UTC m=+2.486346184 container remove ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:33:26 compute-0 systemd[1]: libpod-conmon-ef9d800720ed4002728f2801005b1cb5059b485c638dd19445a5d59b2194beb0.scope: Deactivated successfully.
Nov 25 08:33:26 compute-0 sudo[313491]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:26 compute-0 sudo[313649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:33:26 compute-0 sudo[313649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:26 compute-0 sudo[313649]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:26 compute-0 nova_compute[253538]: 2025-11-25 08:33:26.501 253542 DEBUG nova.compute.manager [req-a85980e0-7c57-48b7-867a-a2de6b7113c4 req-1c52ad12-35eb-447b-89e6-04062cf804ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-9541f2fd-4ec3-47ef-a6a9-66e0052c303f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:26 compute-0 nova_compute[253538]: 2025-11-25 08:33:26.501 253542 DEBUG oslo_concurrency.lockutils [req-a85980e0-7c57-48b7-867a-a2de6b7113c4 req-1c52ad12-35eb-447b-89e6-04062cf804ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:26 compute-0 nova_compute[253538]: 2025-11-25 08:33:26.501 253542 DEBUG oslo_concurrency.lockutils [req-a85980e0-7c57-48b7-867a-a2de6b7113c4 req-1c52ad12-35eb-447b-89e6-04062cf804ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:26 compute-0 nova_compute[253538]: 2025-11-25 08:33:26.501 253542 DEBUG oslo_concurrency.lockutils [req-a85980e0-7c57-48b7-867a-a2de6b7113c4 req-1c52ad12-35eb-447b-89e6-04062cf804ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:26 compute-0 nova_compute[253538]: 2025-11-25 08:33:26.502 253542 DEBUG nova.compute.manager [req-a85980e0-7c57-48b7-867a-a2de6b7113c4 req-1c52ad12-35eb-447b-89e6-04062cf804ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-9541f2fd-4ec3-47ef-a6a9-66e0052c303f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:26 compute-0 nova_compute[253538]: 2025-11-25 08:33:26.502 253542 WARNING nova.compute.manager [req-a85980e0-7c57-48b7-867a-a2de6b7113c4 req-1c52ad12-35eb-447b-89e6-04062cf804ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-9541f2fd-4ec3-47ef-a6a9-66e0052c303f for instance with vm_state active and task_state None.
Nov 25 08:33:26 compute-0 sudo[313674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:33:26 compute-0 sudo[313674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:26 compute-0 sudo[313674]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:26 compute-0 sudo[313699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:33:26 compute-0 sudo[313699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:26 compute-0 sudo[313699]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:26 compute-0 sudo[313724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:33:26 compute-0 sudo[313724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:26 compute-0 nova_compute[253538]: 2025-11-25 08:33:26.723 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:27 compute-0 podman[313788]: 2025-11-25 08:33:27.024976141 +0000 UTC m=+0.028054995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:33:27 compute-0 podman[313788]: 2025-11-25 08:33:27.173526383 +0000 UTC m=+0.176605177 container create a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_gates, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:33:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 300 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 1006 KiB/s wr, 199 op/s
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.290 253542 DEBUG oslo_concurrency.lockutils [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-223207be-35e0-4b8b-bf78-113792059910" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.291 253542 DEBUG oslo_concurrency.lockutils [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-223207be-35e0-4b8b-bf78-113792059910" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.314 253542 DEBUG nova.objects.instance [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.331 253542 DEBUG nova.virt.libvirt.vif [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.331 253542 DEBUG nova.network.os_vif_util [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.332 253542 DEBUG nova.network.os_vif_util [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.337 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.340 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.344 253542 DEBUG nova.virt.libvirt.driver [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Attempting to detach device tap223207be-35 from instance 8191f951-44bc-4371-957a-f2e7d37c1a32 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.345 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] detach device xml: <interface type="ethernet">
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:3a:eb:0c"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <target dev="tap223207be-35"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]: </interface>
Nov 25 08:33:27 compute-0 nova_compute[253538]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 08:33:27 compute-0 systemd[1]: Started libpod-conmon-a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e.scope.
Nov 25 08:33:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.456 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:33:27 compute-0 podman[313788]: 2025-11-25 08:33:27.465423008 +0000 UTC m=+0.468501812 container init a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.467 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface>not found in domain: <domain type='kvm' id='58'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <name>instance-00000034</name>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <uuid>8191f951-44bc-4371-957a-f2e7d37c1a32</uuid>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:33:21</nova:creationTime>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:port uuid="223207be-35e0-4b8b-bf78-113792059910">
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:port uuid="582f57a6-32d3-44a0-ab47-d147a0bb0f43">
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:port uuid="9541f2fd-4ec3-47ef-a6a9-66e0052c303f">
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:33:27 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <system>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <entry name='serial'>8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <entry name='uuid'>8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </system>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <os>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </os>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <features>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </features>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk' index='2'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config' index='1'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:1a:58:19'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target dev='tapd8bd16e1-36'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:3a:eb:0c'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target dev='tap223207be-35'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='net1'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:7a:1a:cb'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target dev='tap582f57a6-32'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='net2'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:b9:39:c6'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target dev='tap9541f2fd-4e'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='net3'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log' append='off'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       </target>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log' append='off'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </console>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </input>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </input>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </input>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <video>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </video>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c36,c489</label>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c36,c489</imagelabel>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:33:27 compute-0 nova_compute[253538]: </domain>
Nov 25 08:33:27 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.468 253542 INFO nova.virt.libvirt.driver [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully detached device tap223207be-35 from instance 8191f951-44bc-4371-957a-f2e7d37c1a32 from the persistent domain config.
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.468 253542 DEBUG nova.virt.libvirt.driver [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] (1/8): Attempting to detach device tap223207be-35 with device alias net1 from instance 8191f951-44bc-4371-957a-f2e7d37c1a32 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.468 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] detach device xml: <interface type="ethernet">
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:3a:eb:0c"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <target dev="tap223207be-35"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]: </interface>
Nov 25 08:33:27 compute-0 nova_compute[253538]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 08:33:27 compute-0 podman[313788]: 2025-11-25 08:33:27.475665394 +0000 UTC m=+0.478744168 container start a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_gates, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 08:33:27 compute-0 goofy_gates[313804]: 167 167
Nov 25 08:33:27 compute-0 systemd[1]: libpod-a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e.scope: Deactivated successfully.
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:33:27 compute-0 kernel: tap223207be-35 (unregistering): left promiscuous mode
Nov 25 08:33:27 compute-0 NetworkManager[48915]: <info>  [1764059607.5853] device (tap223207be-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:33:27 compute-0 ovn_controller[152859]: 2025-11-25T08:33:27Z|00505|binding|INFO|Releasing lport 223207be-35e0-4b8b-bf78-113792059910 from this chassis (sb_readonly=0)
Nov 25 08:33:27 compute-0 ovn_controller[152859]: 2025-11-25T08:33:27Z|00506|binding|INFO|Setting lport 223207be-35e0-4b8b-bf78-113792059910 down in Southbound
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.593 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:27 compute-0 ovn_controller[152859]: 2025-11-25T08:33:27Z|00507|binding|INFO|Removing iface tap223207be-35 ovn-installed in OVS
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.600 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.602 253542 DEBUG nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Received event <DeviceRemovedEvent: 1764059607.6025817, 8191f951-44bc-4371-957a-f2e7d37c1a32 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.604 253542 DEBUG nova.virt.libvirt.driver [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Start waiting for the detach event from libvirt for device tap223207be-35 with device alias net1 for instance 8191f951-44bc-4371-957a-f2e7d37c1a32 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.605 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.612 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface>not found in domain: <domain type='kvm' id='58'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <name>instance-00000034</name>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <uuid>8191f951-44bc-4371-957a-f2e7d37c1a32</uuid>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:33:21</nova:creationTime>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:port uuid="223207be-35e0-4b8b-bf78-113792059910">
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:port uuid="582f57a6-32d3-44a0-ab47-d147a0bb0f43">
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:port uuid="9541f2fd-4ec3-47ef-a6a9-66e0052c303f">
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:33:27 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <system>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <entry name='serial'>8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <entry name='uuid'>8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </system>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <os>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </os>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <features>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </features>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk' index='2'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config' index='1'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 podman[313788]: 2025-11-25 08:33:27.617591469 +0000 UTC m=+0.620670323 container attach a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_gates, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:1a:58:19'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target dev='tapd8bd16e1-36'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:7a:1a:cb'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target dev='tap582f57a6-32'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='net2'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:b9:39:c6'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target dev='tap9541f2fd-4e'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='net3'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log' append='off'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       </target>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log' append='off'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </console>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </input>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </input>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </input>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <video>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </video>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:33:27 compute-0 podman[313788]: 2025-11-25 08:33:27.619187941 +0000 UTC m=+0.622266705 container died a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_gates, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c36,c489</label>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c36,c489</imagelabel>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:33:27 compute-0 nova_compute[253538]: </domain>
Nov 25 08:33:27 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.613 253542 INFO nova.virt.libvirt.driver [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully detached device tap223207be-35 from instance 8191f951-44bc-4371-957a-f2e7d37c1a32 from the live domain config.
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.614 253542 DEBUG nova.virt.libvirt.vif [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.614 253542 DEBUG nova.network.os_vif_util [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.615 253542 DEBUG nova.network.os_vif_util [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.615 253542 DEBUG os_vif [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.618 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.618 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap223207be-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.620 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:33:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.633 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:eb:0c 10.100.0.12'], port_security=['fa:16:3e:3a:eb:0c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8191f951-44bc-4371-957a-f2e7d37c1a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=223207be-35e0-4b8b-bf78-113792059910) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.635 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 223207be-35e0-4b8b-bf78-113792059910 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.637 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.641 253542 INFO os_vif [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35')
Nov 25 08:33:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.638 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.642 253542 DEBUG nova.virt.libvirt.guest [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:33:27</nova:creationTime>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:port uuid="582f57a6-32d3-44a0-ab47-d147a0bb0f43">
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     <nova:port uuid="9541f2fd-4ec3-47ef-a6a9-66e0052c303f">
Nov 25 08:33:27 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 08:33:27 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:27 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:33:27 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:33:27 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:33:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.659 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d94eb40c-e4ef-4666-a3b9-ebda647840dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.696 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[aae8a132-c5cb-4e06-98ad-e8a13f72ef8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.699 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[34dae8cf-bb93-4363-9746-78866093dd78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.757 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9761f06e-ac55-494e-a033-ba83f7fe85f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.776 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5f54db-029e-4230-8fc2-197a7551f25a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490985, 'reachable_time': 15416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313832, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.791 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[670fdf36-ec00-4b06-a4aa-b56037c8e388]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490996, 'tstamp': 490996}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313833, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490999, 'tstamp': 490999}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313833, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.794 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.796 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:27 compute-0 nova_compute[253538]: 2025-11-25 08:33:27.797 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.797 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.797 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.798 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:27.798 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-14ff2af751f70f01b043bc8c408fb365f5a32a9c6de7b29eafc2c3a4c60fda83-merged.mount: Deactivated successfully.
Nov 25 08:33:28 compute-0 podman[313788]: 2025-11-25 08:33:28.140368768 +0000 UTC m=+1.143447532 container remove a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:33:28 compute-0 systemd[1]: libpod-conmon-a7f6d496c311e2fc85886fe31accaca71ecbf474c326342bf69ef70ac456099e.scope: Deactivated successfully.
Nov 25 08:33:28 compute-0 ceph-mon[75015]: pgmap v1503: 321 pgs: 321 active+clean; 300 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 1006 KiB/s wr, 199 op/s
Nov 25 08:33:28 compute-0 podman[313842]: 2025-11-25 08:33:28.418579006 +0000 UTC m=+0.050659913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:33:28 compute-0 podman[313842]: 2025-11-25 08:33:28.523381762 +0000 UTC m=+0.155462699 container create 8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.593 253542 DEBUG nova.compute.manager [req-fa14ca78-fcd1-46da-ada8-65ae4e5bb2c8 req-c42f14fa-2329-40a6-9ab5-fe44ee0003c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-unplugged-223207be-35e0-4b8b-bf78-113792059910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.593 253542 DEBUG oslo_concurrency.lockutils [req-fa14ca78-fcd1-46da-ada8-65ae4e5bb2c8 req-c42f14fa-2329-40a6-9ab5-fe44ee0003c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.594 253542 DEBUG oslo_concurrency.lockutils [req-fa14ca78-fcd1-46da-ada8-65ae4e5bb2c8 req-c42f14fa-2329-40a6-9ab5-fe44ee0003c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.594 253542 DEBUG oslo_concurrency.lockutils [req-fa14ca78-fcd1-46da-ada8-65ae4e5bb2c8 req-c42f14fa-2329-40a6-9ab5-fe44ee0003c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.594 253542 DEBUG nova.compute.manager [req-fa14ca78-fcd1-46da-ada8-65ae4e5bb2c8 req-c42f14fa-2329-40a6-9ab5-fe44ee0003c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-unplugged-223207be-35e0-4b8b-bf78-113792059910 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.594 253542 WARNING nova.compute.manager [req-fa14ca78-fcd1-46da-ada8-65ae4e5bb2c8 req-c42f14fa-2329-40a6-9ab5-fe44ee0003c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-unplugged-223207be-35e0-4b8b-bf78-113792059910 for instance with vm_state active and task_state None.
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.637 253542 DEBUG oslo_concurrency.lockutils [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.638 253542 DEBUG oslo_concurrency.lockutils [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.638 253542 DEBUG nova.network.neutron [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:33:28 compute-0 systemd[1]: Started libpod-conmon-8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31.scope.
Nov 25 08:33:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c3a8fa0b9121f1c8446a516c7153b04f9329d971deb1fa3dfe8c51e1bdffd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.700 253542 DEBUG nova.compute.manager [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-deleted-223207be-35e0-4b8b-bf78-113792059910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.701 253542 INFO nova.compute.manager [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Neutron deleted interface 223207be-35e0-4b8b-bf78-113792059910; detaching it from the instance and deleting it from the info cache
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.702 253542 DEBUG nova.network.neutron [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c3a8fa0b9121f1c8446a516c7153b04f9329d971deb1fa3dfe8c51e1bdffd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c3a8fa0b9121f1c8446a516c7153b04f9329d971deb1fa3dfe8c51e1bdffd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c3a8fa0b9121f1c8446a516c7153b04f9329d971deb1fa3dfe8c51e1bdffd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:28 compute-0 podman[313842]: 2025-11-25 08:33:28.721726263 +0000 UTC m=+0.353807200 container init 8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 08:33:28 compute-0 podman[313842]: 2025-11-25 08:33:28.731368582 +0000 UTC m=+0.363449489 container start 8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:33:28 compute-0 podman[313842]: 2025-11-25 08:33:28.735694429 +0000 UTC m=+0.367775366 container attach 8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.863 253542 DEBUG nova.objects.instance [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lazy-loading 'system_metadata' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.885 253542 DEBUG nova.objects.instance [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lazy-loading 'flavor' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.905 253542 DEBUG nova.virt.libvirt.vif [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.905 253542 DEBUG nova.network.os_vif_util [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converting VIF {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.906 253542 DEBUG nova.network.os_vif_util [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.910 253542 DEBUG nova.virt.libvirt.guest [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.914 253542 DEBUG nova.virt.libvirt.guest [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface>not found in domain: <domain type='kvm' id='58'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <name>instance-00000034</name>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <uuid>8191f951-44bc-4371-957a-f2e7d37c1a32</uuid>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:33:27</nova:creationTime>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:port uuid="582f57a6-32d3-44a0-ab47-d147a0bb0f43">
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:port uuid="9541f2fd-4ec3-47ef-a6a9-66e0052c303f">
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:33:28 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <system>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <entry name='serial'>8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <entry name='uuid'>8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </system>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <os>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </os>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <features>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </features>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk' index='2'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config' index='1'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:1a:58:19'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target dev='tapd8bd16e1-36'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:7a:1a:cb'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target dev='tap582f57a6-32'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='net2'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:b9:39:c6'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target dev='tap9541f2fd-4e'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='net3'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log' append='off'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       </target>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log' append='off'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </console>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </input>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </input>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </input>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <video>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </video>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c36,c489</label>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c36,c489</imagelabel>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:33:28 compute-0 nova_compute[253538]: </domain>
Nov 25 08:33:28 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.914 253542 DEBUG nova.virt.libvirt.guest [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.920 253542 DEBUG nova.virt.libvirt.guest [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3a:eb:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap223207be-35"/></interface>not found in domain: <domain type='kvm' id='58'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <name>instance-00000034</name>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <uuid>8191f951-44bc-4371-957a-f2e7d37c1a32</uuid>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:33:27</nova:creationTime>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:port uuid="582f57a6-32d3-44a0-ab47-d147a0bb0f43">
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:port uuid="9541f2fd-4ec3-47ef-a6a9-66e0052c303f">
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:33:28 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <system>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <entry name='serial'>8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <entry name='uuid'>8191f951-44bc-4371-957a-f2e7d37c1a32</entry>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </system>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <os>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </os>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <features>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </features>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk' index='2'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/8191f951-44bc-4371-957a-f2e7d37c1a32_disk.config' index='1'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:1a:58:19'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target dev='tapd8bd16e1-36'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:7a:1a:cb'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target dev='tap582f57a6-32'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='net2'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:b9:39:c6'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target dev='tap9541f2fd-4e'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='net3'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log' append='off'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       </target>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32/console.log' append='off'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </console>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </input>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </input>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </input>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <video>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </video>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c36,c489</label>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c36,c489</imagelabel>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:33:28 compute-0 nova_compute[253538]: </domain>
Nov 25 08:33:28 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.921 253542 WARNING nova.virt.libvirt.driver [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Detaching interface fa:16:3e:3a:eb:0c failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap223207be-35' not found.
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.922 253542 DEBUG nova.virt.libvirt.vif [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.922 253542 DEBUG nova.network.os_vif_util [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converting VIF {"id": "223207be-35e0-4b8b-bf78-113792059910", "address": "fa:16:3e:3a:eb:0c", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223207be-35", "ovs_interfaceid": "223207be-35e0-4b8b-bf78-113792059910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.923 253542 DEBUG nova.network.os_vif_util [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.923 253542 DEBUG os_vif [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.925 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.925 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap223207be-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.925 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.928 253542 INFO os_vif [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:eb:0c,bridge_name='br-int',has_traffic_filtering=True,id=223207be-35e0-4b8b-bf78-113792059910,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223207be-35')
Nov 25 08:33:28 compute-0 nova_compute[253538]: 2025-11-25 08:33:28.929 253542 DEBUG nova.virt.libvirt.guest [req-48af5876-65ca-4802-b482-d07f1f694d56 req-db19be76-ba5d-4c9c-bdf0-e0b5743b2797 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2072750142</nova:name>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:33:28</nova:creationTime>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:port uuid="d8bd16e1-3695-474d-be04-7fdf44bee803">
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:port uuid="582f57a6-32d3-44a0-ab47-d147a0bb0f43">
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     <nova:port uuid="9541f2fd-4ec3-47ef-a6a9-66e0052c303f">
Nov 25 08:33:28 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 08:33:28 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:33:28 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:33:28 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:33:28 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:33:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:33:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/433025928' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:33:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:33:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/433025928' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:33:29 compute-0 ovn_controller[152859]: 2025-11-25T08:33:29Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:ce:b3 10.100.0.4
Nov 25 08:33:29 compute-0 ovn_controller[152859]: 2025-11-25T08:33:29Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:ce:b3 10.100.0.4
Nov 25 08:33:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 303 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.0 MiB/s wr, 159 op/s
Nov 25 08:33:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:29.272 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:29 compute-0 nova_compute[253538]: 2025-11-25 08:33:29.272 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:29.274 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:33:29 compute-0 nova_compute[253538]: 2025-11-25 08:33:29.274 253542 DEBUG oslo_concurrency.lockutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:29 compute-0 nova_compute[253538]: 2025-11-25 08:33:29.274 253542 DEBUG oslo_concurrency.lockutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:29 compute-0 nova_compute[253538]: 2025-11-25 08:33:29.275 253542 INFO nova.compute.manager [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Rebooting instance
Nov 25 08:33:29 compute-0 nova_compute[253538]: 2025-11-25 08:33:29.286 253542 DEBUG oslo_concurrency.lockutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:33:29 compute-0 nova_compute[253538]: 2025-11-25 08:33:29.286 253542 DEBUG oslo_concurrency.lockutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:33:29 compute-0 nova_compute[253538]: 2025-11-25 08:33:29.286 253542 DEBUG nova.network.neutron [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:33:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/433025928' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:33:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/433025928' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:33:29 compute-0 elastic_darwin[313859]: {
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:     "0": [
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:         {
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "devices": [
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "/dev/loop3"
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             ],
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "lv_name": "ceph_lv0",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "lv_size": "21470642176",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "name": "ceph_lv0",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "tags": {
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.cluster_name": "ceph",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.crush_device_class": "",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.encrypted": "0",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.osd_id": "0",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.type": "block",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.vdo": "0"
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             },
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "type": "block",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "vg_name": "ceph_vg0"
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:         }
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:     ],
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:     "1": [
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:         {
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "devices": [
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "/dev/loop4"
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             ],
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "lv_name": "ceph_lv1",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "lv_size": "21470642176",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "name": "ceph_lv1",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "tags": {
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.cluster_name": "ceph",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.crush_device_class": "",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.encrypted": "0",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.osd_id": "1",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.type": "block",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.vdo": "0"
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             },
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "type": "block",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "vg_name": "ceph_vg1"
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:         }
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:     ],
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:     "2": [
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:         {
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "devices": [
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "/dev/loop5"
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             ],
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "lv_name": "ceph_lv2",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "lv_size": "21470642176",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "name": "ceph_lv2",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "tags": {
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.cluster_name": "ceph",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.crush_device_class": "",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.encrypted": "0",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.osd_id": "2",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.type": "block",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:                 "ceph.vdo": "0"
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             },
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "type": "block",
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:             "vg_name": "ceph_vg2"
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:         }
Nov 25 08:33:29 compute-0 elastic_darwin[313859]:     ]
Nov 25 08:33:29 compute-0 elastic_darwin[313859]: }
Nov 25 08:33:29 compute-0 systemd[1]: libpod-8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31.scope: Deactivated successfully.
Nov 25 08:33:29 compute-0 podman[313842]: 2025-11-25 08:33:29.606588395 +0000 UTC m=+1.238669332 container died 8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:33:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-51c3a8fa0b9121f1c8446a516c7153b04f9329d971deb1fa3dfe8c51e1bdffd3-merged.mount: Deactivated successfully.
Nov 25 08:33:29 compute-0 podman[313842]: 2025-11-25 08:33:29.655410167 +0000 UTC m=+1.287491074 container remove 8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 08:33:29 compute-0 systemd[1]: libpod-conmon-8d2f63eeb39430ca49e42b4fac24132445c815c01488089da83fa5199188be31.scope: Deactivated successfully.
Nov 25 08:33:29 compute-0 sudo[313724]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:29 compute-0 sudo[313879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:33:29 compute-0 sudo[313879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:29 compute-0 sudo[313879]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:29 compute-0 sudo[313904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:33:29 compute-0 sudo[313904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:29 compute-0 sudo[313904]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:29 compute-0 sudo[313929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:33:29 compute-0 sudo[313929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:29 compute-0 sudo[313929]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:29 compute-0 sudo[313954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:33:29 compute-0 sudo[313954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:30 compute-0 podman[314019]: 2025-11-25 08:33:30.330070409 +0000 UTC m=+0.046325287 container create cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:33:30 compute-0 systemd[1]: Started libpod-conmon-cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb.scope.
Nov 25 08:33:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:33:30 compute-0 podman[314019]: 2025-11-25 08:33:30.395655271 +0000 UTC m=+0.111910179 container init cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:33:30 compute-0 podman[314019]: 2025-11-25 08:33:30.402027793 +0000 UTC m=+0.118282671 container start cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 08:33:30 compute-0 podman[314019]: 2025-11-25 08:33:30.405025313 +0000 UTC m=+0.121280211 container attach cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 08:33:30 compute-0 bold_spence[314036]: 167 167
Nov 25 08:33:30 compute-0 systemd[1]: libpod-cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb.scope: Deactivated successfully.
Nov 25 08:33:30 compute-0 conmon[314036]: conmon cc5d00d3da3cca2a3aff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb.scope/container/memory.events
Nov 25 08:33:30 compute-0 podman[314019]: 2025-11-25 08:33:30.311616942 +0000 UTC m=+0.027871840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:33:30 compute-0 podman[314041]: 2025-11-25 08:33:30.447984478 +0000 UTC m=+0.026329318 container died cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:33:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5dd66dd306135f39a966fd87e162aea3afbfd7cf1e2fa055c0e1faee25f77b9-merged.mount: Deactivated successfully.
Nov 25 08:33:30 compute-0 ceph-mon[75015]: pgmap v1504: 321 pgs: 321 active+clean; 303 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.0 MiB/s wr, 159 op/s
Nov 25 08:33:30 compute-0 podman[314041]: 2025-11-25 08:33:30.488937939 +0000 UTC m=+0.067282759 container remove cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:33:30 compute-0 systemd[1]: libpod-conmon-cc5d00d3da3cca2a3affe7a5211241602a7cd4a1ffcd8d8b6726258a7102d5fb.scope: Deactivated successfully.
Nov 25 08:33:30 compute-0 nova_compute[253538]: 2025-11-25 08:33:30.534 253542 INFO nova.network.neutron [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Port 223207be-35e0-4b8b-bf78-113792059910 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 08:33:30 compute-0 nova_compute[253538]: 2025-11-25 08:33:30.604 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:30 compute-0 podman[314063]: 2025-11-25 08:33:30.702212371 +0000 UTC m=+0.044828616 container create 3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:33:30 compute-0 systemd[1]: Started libpod-conmon-3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5.scope.
Nov 25 08:33:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c449e9c7c85f013e30844a99ea13890446f37682b82672e076560ac8f88d97e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:30 compute-0 podman[314063]: 2025-11-25 08:33:30.685227404 +0000 UTC m=+0.027843679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c449e9c7c85f013e30844a99ea13890446f37682b82672e076560ac8f88d97e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c449e9c7c85f013e30844a99ea13890446f37682b82672e076560ac8f88d97e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c449e9c7c85f013e30844a99ea13890446f37682b82672e076560ac8f88d97e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:30 compute-0 podman[314063]: 2025-11-25 08:33:30.795324793 +0000 UTC m=+0.137941088 container init 3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 08:33:30 compute-0 podman[314063]: 2025-11-25 08:33:30.803255366 +0000 UTC m=+0.145871611 container start 3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hamilton, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:33:30 compute-0 podman[314063]: 2025-11-25 08:33:30.806270067 +0000 UTC m=+0.148886322 container attach 3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:33:30 compute-0 nova_compute[253538]: 2025-11-25 08:33:30.815 253542 DEBUG nova.compute.manager [req-fd32e393-3363-473d-9796-701b281b57e1 req-d30b300b-39c9-486e-a329-b3ab53583679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:30 compute-0 nova_compute[253538]: 2025-11-25 08:33:30.816 253542 DEBUG oslo_concurrency.lockutils [req-fd32e393-3363-473d-9796-701b281b57e1 req-d30b300b-39c9-486e-a329-b3ab53583679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:30 compute-0 nova_compute[253538]: 2025-11-25 08:33:30.816 253542 DEBUG oslo_concurrency.lockutils [req-fd32e393-3363-473d-9796-701b281b57e1 req-d30b300b-39c9-486e-a329-b3ab53583679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:30 compute-0 nova_compute[253538]: 2025-11-25 08:33:30.816 253542 DEBUG oslo_concurrency.lockutils [req-fd32e393-3363-473d-9796-701b281b57e1 req-d30b300b-39c9-486e-a329-b3ab53583679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:30 compute-0 nova_compute[253538]: 2025-11-25 08:33:30.816 253542 DEBUG nova.compute.manager [req-fd32e393-3363-473d-9796-701b281b57e1 req-d30b300b-39c9-486e-a329-b3ab53583679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:30 compute-0 nova_compute[253538]: 2025-11-25 08:33:30.817 253542 WARNING nova.compute.manager [req-fd32e393-3363-473d-9796-701b281b57e1 req-d30b300b-39c9-486e-a329-b3ab53583679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-223207be-35e0-4b8b-bf78-113792059910 for instance with vm_state active and task_state None.
Nov 25 08:33:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:33:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:30.910 162739 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port ea1847ed-a3ec-4944-b720-ee87acf36b74 with type ""
Nov 25 08:33:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:30.912 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:39:c6 10.100.0.5'], port_security=['fa:16:3e:b9:39:c6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-790044012', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8191f951-44bc-4371-957a-f2e7d37c1a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-790044012', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9541f2fd-4ec3-47ef-a6a9-66e0052c303f) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:30.913 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9541f2fd-4ec3-47ef-a6a9-66e0052c303f in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 08:33:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:30.915 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:33:30 compute-0 ovn_controller[152859]: 2025-11-25T08:33:30Z|00508|binding|INFO|Removing iface tap9541f2fd-4e ovn-installed in OVS
Nov 25 08:33:30 compute-0 ovn_controller[152859]: 2025-11-25T08:33:30Z|00509|binding|INFO|Removing lport 9541f2fd-4ec3-47ef-a6a9-66e0052c303f ovn-installed in OVS
Nov 25 08:33:30 compute-0 nova_compute[253538]: 2025-11-25 08:33:30.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:30.931 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[32c9317b-3a7a-4f59-9f4e-e4496aae36fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:30 compute-0 nova_compute[253538]: 2025-11-25 08:33:30.938 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:30.965 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[96a5f9ac-11ed-414f-90fb-108a69894281]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:30.968 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8c381c-a325-46e0-a140-7f96530ca563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.001 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c39d6667-535e-41d4-8af7-bfb00655d8b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.019 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[29768961-2caa-4257-addc-0b306f1b8d7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 826, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 826, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490985, 'reachable_time': 15416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314089, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.037 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[edaea7b9-1c11-4a99-bced-4e98e3621be8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490996, 'tstamp': 490996}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314090, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490999, 'tstamp': 490999}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314090, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.040 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.096 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.097 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.097 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.097 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.098 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.181 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.182 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.183 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.185 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.186 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.187 253542 INFO nova.compute.manager [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Terminating instance
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.188 253542 DEBUG nova.compute.manager [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:33:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 312 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.7 MiB/s wr, 157 op/s
Nov 25 08:33:31 compute-0 kernel: tapd8bd16e1-36 (unregistering): left promiscuous mode
Nov 25 08:33:31 compute-0 NetworkManager[48915]: <info>  [1764059611.2518] device (tapd8bd16e1-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:33:31 compute-0 kernel: tap582f57a6-32 (unregistering): left promiscuous mode
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.269 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:31 compute-0 ovn_controller[152859]: 2025-11-25T08:33:31Z|00510|binding|INFO|Releasing lport d8bd16e1-3695-474d-be04-7fdf44bee803 from this chassis (sb_readonly=0)
Nov 25 08:33:31 compute-0 ovn_controller[152859]: 2025-11-25T08:33:31Z|00511|binding|INFO|Setting lport d8bd16e1-3695-474d-be04-7fdf44bee803 down in Southbound
Nov 25 08:33:31 compute-0 ovn_controller[152859]: 2025-11-25T08:33:31Z|00512|binding|INFO|Removing iface tapd8bd16e1-36 ovn-installed in OVS
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.273 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:31 compute-0 NetworkManager[48915]: <info>  [1764059611.2750] device (tap582f57a6-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.283 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:58:19 10.100.0.11'], port_security=['fa:16:3e:1a:58:19 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8191f951-44bc-4371-957a-f2e7d37c1a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ab6e7e4a-351f-4b59-b94e-a7f51f236dd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d8bd16e1-3695-474d-be04-7fdf44bee803) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.286 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d8bd16e1-3695-474d-be04-7fdf44bee803 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.289 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:33:31 compute-0 kernel: tap9541f2fd-4e (unregistering): left promiscuous mode
Nov 25 08:33:31 compute-0 NetworkManager[48915]: <info>  [1764059611.3120] device (tap9541f2fd-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.319 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab4d1fe-9ca3-4836-a40b-9bce1a91fbf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:31 compute-0 ovn_controller[152859]: 2025-11-25T08:33:31Z|00513|binding|INFO|Releasing lport 582f57a6-32d3-44a0-ab47-d147a0bb0f43 from this chassis (sb_readonly=0)
Nov 25 08:33:31 compute-0 ovn_controller[152859]: 2025-11-25T08:33:31Z|00514|binding|INFO|Setting lport 582f57a6-32d3-44a0-ab47-d147a0bb0f43 down in Southbound
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.333 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:31 compute-0 ovn_controller[152859]: 2025-11-25T08:33:31Z|00515|binding|INFO|Removing iface tap582f57a6-32 ovn-installed in OVS
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.337 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.340 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:1a:cb 10.100.0.14'], port_security=['fa:16:3e:7a:1a:cb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8191f951-44bc-4371-957a-f2e7d37c1a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=582f57a6-32d3-44a0-ab47-d147a0bb0f43) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.364 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5485558a-b46a-4ca1-aa00-6e6c22486a51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.366 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.367 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[69575316-ee99-4a9d-9462-f93015a861cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:31 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000034.scope: Deactivated successfully.
Nov 25 08:33:31 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000034.scope: Consumed 15.499s CPU time.
Nov 25 08:33:31 compute-0 systemd-machined[215790]: Machine qemu-58-instance-00000034 terminated.
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.394 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[acef6736-1fd9-4868-8ac7-204b990f0007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.414 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a47d0d65-357b-4a66-8d71-3a17d6fce061]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 15, 'rx_bytes': 826, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 15, 'rx_bytes': 826, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490985, 'reachable_time': 15416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314112, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:31 compute-0 NetworkManager[48915]: <info>  [1764059611.4190] manager: (tap582f57a6-32): new Tun device (/org/freedesktop/NetworkManager/Devices/237)
Nov 25 08:33:31 compute-0 NetworkManager[48915]: <info>  [1764059611.4297] manager: (tap9541f2fd-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.439 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2fd808-453b-4487-be93-a88b19368c5d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490996, 'tstamp': 490996}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314127, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490999, 'tstamp': 490999}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314127, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.440 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.448 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.453 253542 INFO nova.virt.libvirt.driver [-] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Instance destroyed successfully.
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.453 253542 DEBUG nova.objects.instance [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'resources' on Instance uuid 8191f951-44bc-4371-957a-f2e7d37c1a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.456 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.456 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.457 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.457 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.457 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.458 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 582f57a6-32d3-44a0-ab47-d147a0bb0f43 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.461 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.463 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f0828591-373b-4cfe-801d-bba912c36f0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:31.463 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe namespace which is not needed anymore
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.474 253542 DEBUG nova.virt.libvirt.vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.475 253542 DEBUG nova.network.os_vif_util [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.476 253542 DEBUG nova.network.os_vif_util [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:58:19,bridge_name='br-int',has_traffic_filtering=True,id=d8bd16e1-3695-474d-be04-7fdf44bee803,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8bd16e1-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.476 253542 DEBUG os_vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:58:19,bridge_name='br-int',has_traffic_filtering=True,id=d8bd16e1-3695-474d-be04-7fdf44bee803,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8bd16e1-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.477 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.478 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8bd16e1-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.479 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.482 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.485 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.487 253542 INFO os_vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:58:19,bridge_name='br-int',has_traffic_filtering=True,id=d8bd16e1-3695-474d-be04-7fdf44bee803,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8bd16e1-36')
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.488 253542 DEBUG nova.virt.libvirt.vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.489 253542 DEBUG nova.network.os_vif_util [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.489 253542 DEBUG nova.network.os_vif_util [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:1a:cb,bridge_name='br-int',has_traffic_filtering=True,id=582f57a6-32d3-44a0-ab47-d147a0bb0f43,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap582f57a6-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.490 253542 DEBUG os_vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:1a:cb,bridge_name='br-int',has_traffic_filtering=True,id=582f57a6-32d3-44a0-ab47-d147a0bb0f43,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap582f57a6-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.491 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.491 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap582f57a6-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.493 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.496 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.499 253542 INFO os_vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:1a:cb,bridge_name='br-int',has_traffic_filtering=True,id=582f57a6-32d3-44a0-ab47-d147a0bb0f43,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap582f57a6-32')
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.499 253542 DEBUG nova.virt.libvirt.vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2072750142',display_name='tempest-AttachInterfacesTestJSON-server-2072750142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2072750142',id=52,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlIbVdTjrPi47Xp1ay/t1WIi0WKTQJPoNHCD5jEl7ye+GEjFhiA/mB2rFe7IhKIxVjHVKF2hIwCbWYxBLjoyv+NSB0KC8NG+zdVJzTKR2nUpIyzOpLkhsEIm7Jy2dcm6w==',key_name='tempest-keypair-2060783228',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-558nuebf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:32:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=8191f951-44bc-4371-957a-f2e7d37c1a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.500 253542 DEBUG nova.network.os_vif_util [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.500 253542 DEBUG nova.network.os_vif_util [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:39:c6,bridge_name='br-int',has_traffic_filtering=True,id=9541f2fd-4ec3-47ef-a6a9-66e0052c303f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9541f2fd-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.500 253542 DEBUG os_vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:39:c6,bridge_name='br-int',has_traffic_filtering=True,id=9541f2fd-4ec3-47ef-a6a9-66e0052c303f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9541f2fd-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.501 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.502 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9541f2fd-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.503 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.505 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.506 253542 INFO os_vif [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:39:c6,bridge_name='br-int',has_traffic_filtering=True,id=9541f2fd-4ec3-47ef-a6a9-66e0052c303f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9541f2fd-4e')
Nov 25 08:33:31 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[309957]: [NOTICE]   (309993) : haproxy version is 2.8.14-c23fe91
Nov 25 08:33:31 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[309957]: [NOTICE]   (309993) : path to executable is /usr/sbin/haproxy
Nov 25 08:33:31 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[309957]: [WARNING]  (309993) : Exiting Master process...
Nov 25 08:33:31 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[309957]: [WARNING]  (309993) : Exiting Master process...
Nov 25 08:33:31 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[309957]: [ALERT]    (309993) : Current worker (309999) exited with code 143 (Terminated)
Nov 25 08:33:31 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[309957]: [WARNING]  (309993) : All workers exited. Exiting... (0)
Nov 25 08:33:31 compute-0 systemd[1]: libpod-9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54.scope: Deactivated successfully.
Nov 25 08:33:31 compute-0 conmon[309957]: conmon 9f6e487e8d7e168d80b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54.scope/container/memory.events
Nov 25 08:33:31 compute-0 podman[314189]: 2025-11-25 08:33:31.602137757 +0000 UTC m=+0.045930015 container died 9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:33:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54-userdata-shm.mount: Deactivated successfully.
Nov 25 08:33:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fc5cfaa8d3aa0802256d8920c39f377bf45d88d3a7eb5b3dfababcae814e2ac-merged.mount: Deactivated successfully.
Nov 25 08:33:31 compute-0 podman[314189]: 2025-11-25 08:33:31.683153864 +0000 UTC m=+0.126946122 container cleanup 9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:33:31 compute-0 systemd[1]: libpod-conmon-9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54.scope: Deactivated successfully.
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]: {
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:         "osd_id": 1,
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:         "type": "bluestore"
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:     },
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:         "osd_id": 2,
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:         "type": "bluestore"
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:     },
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:         "osd_id": 0,
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:         "type": "bluestore"
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]:     }
Nov 25 08:33:31 compute-0 wonderful_hamilton[314079]: }
Nov 25 08:33:31 compute-0 systemd[1]: libpod-3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5.scope: Deactivated successfully.
Nov 25 08:33:31 compute-0 conmon[314079]: conmon 3dd93b34bac3793ca1c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5.scope/container/memory.events
Nov 25 08:33:31 compute-0 podman[314063]: 2025-11-25 08:33:31.83407638 +0000 UTC m=+1.176692625 container died 3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hamilton, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.906 253542 DEBUG nova.network.neutron [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.931 253542 DEBUG oslo_concurrency.lockutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:33:31 compute-0 nova_compute[253538]: 2025-11-25 08:33:31.932 253542 DEBUG nova.compute.manager [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c449e9c7c85f013e30844a99ea13890446f37682b82672e076560ac8f88d97e-merged.mount: Deactivated successfully.
Nov 25 08:33:32 compute-0 podman[314063]: 2025-11-25 08:33:32.143912647 +0000 UTC m=+1.486528902 container remove 3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hamilton, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:33:32 compute-0 kernel: tap15af3dd8-97 (unregistering): left promiscuous mode
Nov 25 08:33:32 compute-0 systemd[1]: libpod-conmon-3dd93b34bac3793ca1c98de596fd3402b8cb72247791f6152fb8508a8b3005d5.scope: Deactivated successfully.
Nov 25 08:33:32 compute-0 NetworkManager[48915]: <info>  [1764059612.1655] device (tap15af3dd8-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:33:32 compute-0 podman[314236]: 2025-11-25 08:33:32.16706191 +0000 UTC m=+0.454307542 container remove 9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.173 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1e3649-69e7-4851-93ba-8a837d49bde3]: (4, ('Tue Nov 25 08:33:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe (9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54)\n9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54\nTue Nov 25 08:33:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe (9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54)\n9f6e487e8d7e168d80b68d990e56d9004a91b65410e023e55400f4e4213b6b54\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.175 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4924befc-6705-44bc-9de5-1403d5e553d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.181 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:32 compute-0 sudo[313954]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:32 compute-0 ovn_controller[152859]: 2025-11-25T08:33:32Z|00516|binding|INFO|Releasing lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c from this chassis (sb_readonly=0)
Nov 25 08:33:32 compute-0 ovn_controller[152859]: 2025-11-25T08:33:32Z|00517|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c down in Southbound
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.235 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:32 compute-0 kernel: tap9bf3cbfa-70: left promiscuous mode
Nov 25 08:33:32 compute-0 ovn_controller[152859]: 2025-11-25T08:33:32Z|00518|binding|INFO|Removing iface tap15af3dd8-97 ovn-installed in OVS
Nov 25 08:33:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.242 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.243 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '10', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:32 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:33:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:33:32 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:33:32 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev f5342b74-0d7a-4f9c-8638-cc7a1fb0c94d does not exist
Nov 25 08:33:32 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e7a2f0ed-1f54-4994-9f78-403b9c146648 does not exist
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.265 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.267 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:32 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 25 08:33:32 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000032.scope: Consumed 13.684s CPU time.
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.270 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fd850f6e-3ab3-4779-b776-a75678fe55bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:32 compute-0 systemd-machined[215790]: Machine qemu-63-instance-00000032 terminated.
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.285 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[696041c5-2289-4727-8d78-097d9db4763a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.287 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a2040811-a899-4cc1-ac3e-6c9e6c93431c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.310 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa6acdef-19b1-49ed-8563-9adaf67f9a44]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490978, 'reachable_time': 38226, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314296, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d9bf3cbfa\x2d7e0d\x2d4c98\x2d99a2\x2d4ca14fb6bbbe.mount: Deactivated successfully.
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.315 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.315 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[7e310f34-9025-4a57-830f-4fb7e0c9b313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.316 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.317 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 908154e6-322e-4607-bb65-df3f3f8daca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.318 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9addd85e-e12b-4e3f-9a6f-2a9dfee76879]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.319 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace which is not needed anymore
Nov 25 08:33:32 compute-0 sudo[314272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:33:32 compute-0 sudo[314272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:32 compute-0 sudo[314272]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:32 compute-0 sudo[314305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:33:32 compute-0 sudo[314305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:33:32 compute-0 sudo[314305]: pam_unix(sudo:session): session closed for user root
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.420 253542 DEBUG nova.compute.manager [req-1bb502bd-de8c-4092-8372-32144f1e5ef6 req-8e91c61b-e9d3-4cde-bc15-c53d3f8faf3d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-unplugged-d8bd16e1-3695-474d-be04-7fdf44bee803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.421 253542 DEBUG oslo_concurrency.lockutils [req-1bb502bd-de8c-4092-8372-32144f1e5ef6 req-8e91c61b-e9d3-4cde-bc15-c53d3f8faf3d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.421 253542 DEBUG oslo_concurrency.lockutils [req-1bb502bd-de8c-4092-8372-32144f1e5ef6 req-8e91c61b-e9d3-4cde-bc15-c53d3f8faf3d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.421 253542 DEBUG oslo_concurrency.lockutils [req-1bb502bd-de8c-4092-8372-32144f1e5ef6 req-8e91c61b-e9d3-4cde-bc15-c53d3f8faf3d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.421 253542 DEBUG nova.compute.manager [req-1bb502bd-de8c-4092-8372-32144f1e5ef6 req-8e91c61b-e9d3-4cde-bc15-c53d3f8faf3d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-unplugged-d8bd16e1-3695-474d-be04-7fdf44bee803 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.422 253542 DEBUG nova.compute.manager [req-1bb502bd-de8c-4092-8372-32144f1e5ef6 req-8e91c61b-e9d3-4cde-bc15-c53d3f8faf3d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-unplugged-d8bd16e1-3695-474d-be04-7fdf44bee803 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.433 253542 INFO nova.virt.libvirt.driver [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Deleting instance files /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32_del
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.434 253542 INFO nova.virt.libvirt.driver [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Deletion of /var/lib/nova/instances/8191f951-44bc-4371-957a-f2e7d37c1a32_del complete
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.472 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance destroyed successfully.
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.473 253542 DEBUG nova.objects.instance [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'resources' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:32 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[312349]: [NOTICE]   (312354) : haproxy version is 2.8.14-c23fe91
Nov 25 08:33:32 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[312349]: [NOTICE]   (312354) : path to executable is /usr/sbin/haproxy
Nov 25 08:33:32 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[312349]: [WARNING]  (312354) : Exiting Master process...
Nov 25 08:33:32 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[312349]: [ALERT]    (312354) : Current worker (312356) exited with code 143 (Terminated)
Nov 25 08:33:32 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[312349]: [WARNING]  (312354) : All workers exited. Exiting... (0)
Nov 25 08:33:32 compute-0 systemd[1]: libpod-bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d.scope: Deactivated successfully.
Nov 25 08:33:32 compute-0 podman[314342]: 2025-11-25 08:33:32.48929202 +0000 UTC m=+0.057942028 container died bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.491 253542 DEBUG nova.virt.libvirt.vif [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.492 253542 DEBUG nova.network.os_vif_util [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.492 253542 DEBUG nova.network.os_vif_util [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.493 253542 DEBUG os_vif [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.494 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.494 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15af3dd8-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.496 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.497 253542 INFO nova.compute.manager [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Took 1.31 seconds to destroy the instance on the hypervisor.
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.498 253542 DEBUG oslo.service.loopingcall [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.498 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.499 253542 DEBUG nova.compute.manager [-] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.499 253542 DEBUG nova.network.neutron [-] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:33:32 compute-0 ceph-mon[75015]: pgmap v1505: 321 pgs: 321 active+clean; 312 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.7 MiB/s wr, 157 op/s
Nov 25 08:33:32 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:33:32 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.508 253542 INFO os_vif [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.516 253542 DEBUG nova.virt.libvirt.driver [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Start _get_guest_xml network_info=[{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.521 253542 WARNING nova.virt.libvirt.driver [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d-userdata-shm.mount: Deactivated successfully.
Nov 25 08:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bbb652004c5b4ad3c3896e006b5f7cd4464afd4fecf01ab4a4f22a51d10b5b9-merged.mount: Deactivated successfully.
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.530 253542 DEBUG nova.virt.libvirt.host [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.530 253542 DEBUG nova.virt.libvirt.host [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.534 253542 DEBUG nova.virt.libvirt.host [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.534 253542 DEBUG nova.virt.libvirt.host [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.534 253542 DEBUG nova.virt.libvirt.driver [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.535 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.535 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.535 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.536 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.536 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.536 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.536 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.536 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.537 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.537 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.537 253542 DEBUG nova.virt.hardware [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.537 253542 DEBUG nova.objects.instance [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:32 compute-0 podman[314342]: 2025-11-25 08:33:32.540372883 +0000 UTC m=+0.109022891 container cleanup bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 08:33:32 compute-0 systemd[1]: libpod-conmon-bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d.scope: Deactivated successfully.
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.550 253542 DEBUG oslo_concurrency.processutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:32 compute-0 podman[314378]: 2025-11-25 08:33:32.60129681 +0000 UTC m=+0.039559574 container remove bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.612 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0c6a54-10b3-4f7d-a0be-98678300159e]: (4, ('Tue Nov 25 08:33:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d)\nbdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d\nTue Nov 25 08:33:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (bdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d)\nbdef0da94390ea813c5d4981c3a419e9fafc5becd6725743a2c0041cc0ba302d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.614 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0462a7-7552-459b-b8b0-1b3c6ec219ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.614 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.616 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:32 compute-0 kernel: tap908154e6-30: left promiscuous mode
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.635 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.635 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e637d905-1e8d-4940-bdbb-699e88b1cc51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.653 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[01a4a9d6-0005-473c-b99d-db939941c60f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.654 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f42c2a8d-fa77-42fe-9f82-93d37111748e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.670 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[404e5754-9173-434b-ad83-e30f0584c6e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494111, 'reachable_time': 40353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314393, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d908154e6\x2d322e\x2d4607\x2dbb65\x2ddf3f3f8daca6.mount: Deactivated successfully.
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.674 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:33:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:32.674 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d19c718e-ab7d-4045-84b7-271128a65845]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.723 253542 DEBUG nova.network.neutron [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "address": "fa:16:3e:b9:39:c6", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9541f2fd-4e", "ovs_interfaceid": "9541f2fd-4ec3-47ef-a6a9-66e0052c303f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.766 253542 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port 9541f2fd-4ec3-47ef-a6a9-66e0052c303f could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.766 253542 DEBUG nova.network.neutron [-] Unable to show port 9541f2fd-4ec3-47ef-a6a9-66e0052c303f as it no longer exists. _unbind_ports /usr/lib/python3.9/site-packages/nova/network/neutron.py:666
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.857 253542 DEBUG oslo_concurrency.lockutils [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-8191f951-44bc-4371-957a-f2e7d37c1a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.878 253542 DEBUG oslo_concurrency.lockutils [None req-56603a31-362f-47a6-8425-b7532e4dcfee 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-8191f951-44bc-4371-957a-f2e7d37c1a32-223207be-35e0-4b8b-bf78-113792059910" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.896 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-deleted-9541f2fd-4ec3-47ef-a6a9-66e0052c303f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.897 253542 INFO nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Neutron deleted interface 9541f2fd-4ec3-47ef-a6a9-66e0052c303f; detaching it from the instance and deleting it from the info cache
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.897 253542 DEBUG nova.network.neutron [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [{"id": "d8bd16e1-3695-474d-be04-7fdf44bee803", "address": "fa:16:3e:1a:58:19", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8bd16e1-36", "ovs_interfaceid": "d8bd16e1-3695-474d-be04-7fdf44bee803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "address": "fa:16:3e:7a:1a:cb", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap582f57a6-32", "ovs_interfaceid": "582f57a6-32d3-44a0-ab47-d147a0bb0f43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:33:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2956643919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.961 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Detach interface failed, port_id=9541f2fd-4ec3-47ef-a6a9-66e0052c303f, reason: Instance 8191f951-44bc-4371-957a-f2e7d37c1a32 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.961 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-unplugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.962 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.963 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.963 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.964 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-unplugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.964 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-unplugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.964 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.965 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.965 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.965 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.966 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.966 253542 WARNING nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-582f57a6-32d3-44a0-ab47-d147a0bb0f43 for instance with vm_state active and task_state deleting.
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.966 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.966 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.967 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.967 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.967 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.967 253542 WARNING nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state reboot_started_hard.
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.968 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.968 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.968 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.968 253542 DEBUG oslo_concurrency.lockutils [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.968 253542 DEBUG nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.969 253542 WARNING nova.compute.manager [req-b5891d6f-bf3f-418d-8e47-6f6436b64dfd req-fcbab208-1c29-4531-9bfb-389cd3786246 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state reboot_started_hard.
Nov 25 08:33:32 compute-0 nova_compute[253538]: 2025-11-25 08:33:32.976 253542 DEBUG oslo_concurrency.processutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.003 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.003 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.004 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.004 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.004 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.005 253542 INFO nova.compute.manager [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Terminating instance
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.006 253542 DEBUG nova.compute.manager [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.009 253542 DEBUG oslo_concurrency.processutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:33 compute-0 kernel: tapdc1f5923-d9 (unregistering): left promiscuous mode
Nov 25 08:33:33 compute-0 NetworkManager[48915]: <info>  [1764059613.0892] device (tapdc1f5923-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 ovn_controller[152859]: 2025-11-25T08:33:33Z|00519|binding|INFO|Releasing lport dc1f5923-d984-4e49-bb97-bc1a77ade410 from this chassis (sb_readonly=0)
Nov 25 08:33:33 compute-0 ovn_controller[152859]: 2025-11-25T08:33:33Z|00520|binding|INFO|Setting lport dc1f5923-d984-4e49-bb97-bc1a77ade410 down in Southbound
Nov 25 08:33:33 compute-0 ovn_controller[152859]: 2025-11-25T08:33:33Z|00521|binding|INFO|Removing iface tapdc1f5923-d9 ovn-installed in OVS
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.118 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:1c:27 10.100.0.6'], port_security=['fa:16:3e:7d:1c:27 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'fb888d2a-db54-44dc-8ec7-db417fa3cff6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb25945d-6002-4a99-b682-034a8a3dc901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc93aa65bef7473d961e0cad1e8f2962', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e75a559d-2985-4816-b432-9eef78e9b129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d4314d0-90db-4d2a-a971-774f6d589653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=dc1f5923-d984-4e49-bb97-bc1a77ade410) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.119 162739 INFO neutron.agent.ovn.metadata.agent [-] Port dc1f5923-d984-4e49-bb97-bc1a77ade410 in datapath eb25945d-6002-4a99-b682-034a8a3dc901 unbound from our chassis
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.122 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb25945d-6002-4a99-b682-034a8a3dc901, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.123 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[353f3ed2-c880-47dc-9f82-661d7a82f73d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.124 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 namespace which is not needed anymore
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.133 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000039.scope: Deactivated successfully.
Nov 25 08:33:33 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000039.scope: Consumed 11.529s CPU time.
Nov 25 08:33:33 compute-0 systemd-machined[215790]: Machine qemu-65-instance-00000039 terminated.
Nov 25 08:33:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 296 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 169 op/s
Nov 25 08:33:33 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[313358]: [NOTICE]   (313379) : haproxy version is 2.8.14-c23fe91
Nov 25 08:33:33 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[313358]: [NOTICE]   (313379) : path to executable is /usr/sbin/haproxy
Nov 25 08:33:33 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[313358]: [WARNING]  (313379) : Exiting Master process...
Nov 25 08:33:33 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[313358]: [ALERT]    (313379) : Current worker (313383) exited with code 143 (Terminated)
Nov 25 08:33:33 compute-0 neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901[313358]: [WARNING]  (313379) : All workers exited. Exiting... (0)
Nov 25 08:33:33 compute-0 systemd[1]: libpod-e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b.scope: Deactivated successfully.
Nov 25 08:33:33 compute-0 podman[314470]: 2025-11-25 08:33:33.260529778 +0000 UTC m=+0.055008580 container died e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.299 253542 INFO nova.virt.libvirt.driver [-] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Instance destroyed successfully.
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.300 253542 DEBUG nova.objects.instance [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lazy-loading 'resources' on Instance uuid fb888d2a-db54-44dc-8ec7-db417fa3cff6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.312 253542 DEBUG nova.virt.libvirt.vif [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:33:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-2088861040',display_name='tempest-ServerDiskConfigTestJSON-server-2088861040',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-2088861040',id=57,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:33:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc93aa65bef7473d961e0cad1e8f2962',ramdisk_id='',reservation_id='r-piq0ju9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1655399928',owner_user_name='tempest-ServerDiskConfigTestJSON-1655399928-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:31Z,user_data=None,user_id='7ad88cb0e4cf4d0b8e4cbec835318015',uuid=fb888d2a-db54-44dc-8ec7-db417fa3cff6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.313 253542 DEBUG nova.network.os_vif_util [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converting VIF {"id": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "address": "fa:16:3e:7d:1c:27", "network": {"id": "eb25945d-6002-4a99-b682-034a8a3dc901", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1802666542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc93aa65bef7473d961e0cad1e8f2962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc1f5923-d9", "ovs_interfaceid": "dc1f5923-d984-4e49-bb97-bc1a77ade410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.315 253542 DEBUG nova.network.os_vif_util [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:1c:27,bridge_name='br-int',has_traffic_filtering=True,id=dc1f5923-d984-4e49-bb97-bc1a77ade410,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc1f5923-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.315 253542 DEBUG os_vif [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:1c:27,bridge_name='br-int',has_traffic_filtering=True,id=dc1f5923-d984-4e49-bb97-bc1a77ade410,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc1f5923-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:33:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b-userdata-shm.mount: Deactivated successfully.
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.318 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.319 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc1f5923-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed956ae5cd2d22f379fbcbbfc4cf7ec18f54c995101ca09b784bc90a153cec72-merged.mount: Deactivated successfully.
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.350 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.353 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:33:33 compute-0 podman[314470]: 2025-11-25 08:33:33.353727972 +0000 UTC m=+0.148206774 container cleanup e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.355 253542 INFO os_vif [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:1c:27,bridge_name='br-int',has_traffic_filtering=True,id=dc1f5923-d984-4e49-bb97-bc1a77ade410,network=Network(eb25945d-6002-4a99-b682-034a8a3dc901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc1f5923-d9')
Nov 25 08:33:33 compute-0 systemd[1]: libpod-conmon-e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b.scope: Deactivated successfully.
Nov 25 08:33:33 compute-0 podman[314507]: 2025-11-25 08:33:33.419239383 +0000 UTC m=+0.042324598 container remove e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.424 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0069469d-9629-4fdb-b60e-0e51d1b0514d]: (4, ('Tue Nov 25 08:33:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b)\ne03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b\nTue Nov 25 08:33:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 (e03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b)\ne03fdcecaa738056887a121fee2fa47bbc6cdb5f263c3a08e5530469a1b34b0b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.426 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c66ebabb-b95a-40b6-84fe-9fe9f30436e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.427 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb25945d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.429 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 kernel: tapeb25945d-60: left promiscuous mode
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.431 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.434 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7c24ab10-ae22-4916-81bb-629a5a206f29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.448 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c5e680b-3f86-40bc-91d2-3f6658000610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.450 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.451 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c23955e1-b4e9-4290-a967-d2780969018b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:33:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3777305441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.466 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9fab8219-970b-4152-9198-a052cae7c587]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495494, 'reachable_time': 32216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314533, 'error': None, 'target': 'ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 systemd[1]: run-netns-ovnmeta\x2deb25945d\x2d6002\x2d4a99\x2db682\x2d034a8a3dc901.mount: Deactivated successfully.
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.468 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb25945d-6002-4a99-b682-034a8a3dc901 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.468 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4bad87-e68a-41e0-bc4c-bbf3701f68fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.482 253542 DEBUG oslo_concurrency.processutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.483 253542 DEBUG nova.virt.libvirt.vif [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.484 253542 DEBUG nova.network.os_vif_util [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.485 253542 DEBUG nova.network.os_vif_util [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.486 253542 DEBUG nova.objects.instance [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.499 253542 DEBUG nova.virt.libvirt.driver [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:33:33 compute-0 nova_compute[253538]:   <uuid>0feca801-4630-4450-b915-616d8496ab51</uuid>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   <name>instance-00000032</name>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerActionsTestJSON-server-1351113969</nova:name>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:33:32</nova:creationTime>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:33:33 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:33:33 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:33:33 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:33:33 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:33:33 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:33:33 compute-0 nova_compute[253538]:         <nova:user uuid="c199ca353ed54a53ab7fe37d3089c82a">tempest-ServerActionsTestJSON-1880843108-project-member</nova:user>
Nov 25 08:33:33 compute-0 nova_compute[253538]:         <nova:project uuid="23237e7592b247838e62457157e64e9e">tempest-ServerActionsTestJSON-1880843108</nova:project>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:33:33 compute-0 nova_compute[253538]:         <nova:port uuid="15af3dd8-9788-4a34-b4b2-d3b24300cd4c">
Nov 25 08:33:33 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <system>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <entry name="serial">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <entry name="uuid">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     </system>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   <os>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   </os>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   <features>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   </features>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk">
Nov 25 08:33:33 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:33:33 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk.config">
Nov 25 08:33:33 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:33:33 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:07:cd:40"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <target dev="tap15af3dd8-97"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/console.log" append="off"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <video>
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     </video>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <input type="keyboard" bus="usb"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:33:33 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:33:33 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:33:33 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:33:33 compute-0 nova_compute[253538]: </domain>
Nov 25 08:33:33 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.501 253542 DEBUG nova.virt.libvirt.driver [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.502 253542 DEBUG nova.virt.libvirt.driver [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.503 253542 DEBUG nova.virt.libvirt.vif [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.503 253542 DEBUG nova.network.os_vif_util [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2956643919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3777305441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.504 253542 DEBUG nova.network.os_vif_util [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.504 253542 DEBUG os_vif [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.505 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.505 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.505 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.509 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.509 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15af3dd8-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.509 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15af3dd8-97, col_values=(('external_ids', {'iface-id': '15af3dd8-9788-4a34-b4b2-d3b24300cd4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:cd:40', 'vm-uuid': '0feca801-4630-4450-b915-616d8496ab51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.511 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 NetworkManager[48915]: <info>  [1764059613.5119] manager: (tap15af3dd8-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.513 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.517 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.518 253542 INFO os_vif [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')
Nov 25 08:33:33 compute-0 kernel: tap15af3dd8-97: entered promiscuous mode
Nov 25 08:33:33 compute-0 NetworkManager[48915]: <info>  [1764059613.5838] manager: (tap15af3dd8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Nov 25 08:33:33 compute-0 systemd-udevd[314134]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.585 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 ovn_controller[152859]: 2025-11-25T08:33:33Z|00522|binding|INFO|Claiming lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c for this chassis.
Nov 25 08:33:33 compute-0 ovn_controller[152859]: 2025-11-25T08:33:33Z|00523|binding|INFO|15af3dd8-9788-4a34-b4b2-d3b24300cd4c: Claiming fa:16:3e:07:cd:40 10.100.0.13
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.594 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '11', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.595 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 bound to our chassis
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.597 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:33:33 compute-0 NetworkManager[48915]: <info>  [1764059613.5995] device (tap15af3dd8-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:33:33 compute-0 NetworkManager[48915]: <info>  [1764059613.6005] device (tap15af3dd8-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:33:33 compute-0 ovn_controller[152859]: 2025-11-25T08:33:33Z|00524|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c ovn-installed in OVS
Nov 25 08:33:33 compute-0 ovn_controller[152859]: 2025-11-25T08:33:33Z|00525|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c up in Southbound
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.606 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.608 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.609 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00a99641-0f49-43a3-9c14-7e309dfe4f6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.610 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap908154e6-31 in ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.613 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap908154e6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.613 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[44ece8cf-5865-4ea8-8e40-7f6491047be3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.614 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb0109a-c1f4-435c-828c-f4b7be4d4ddc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 systemd-machined[215790]: New machine qemu-66-instance-00000032.
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.625 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[0e746305-5cc4-49a4-bbe4-a298a67bb7ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-00000032.
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.648 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[950a124f-9303-4d17-a1ec-74b6a42b9062]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.673 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c480c082-068f-4fb7-82c0-d9a3f724a636]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 NetworkManager[48915]: <info>  [1764059613.6801] manager: (tap908154e6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/241)
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.679 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e16d78c3-030c-4bc9-8bb4-ce7ad16d854b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.711 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ad7af771-bcc7-4f9a-b1ce-16fb2942bd26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.714 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7594475f-cab6-4bdb-b179-e227b1f4b42d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.716 253542 INFO nova.virt.libvirt.driver [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Deleting instance files /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6_del
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.717 253542 INFO nova.virt.libvirt.driver [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Deletion of /var/lib/nova/instances/fb888d2a-db54-44dc-8ec7-db417fa3cff6_del complete
Nov 25 08:33:33 compute-0 NetworkManager[48915]: <info>  [1764059613.7416] device (tap908154e6-30): carrier: link connected
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.747 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a38f5d05-d8bd-4e0a-8e8e-91bd61995c53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.765 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[229d605c-4b97-451b-894e-7c0387b098bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496834, 'reachable_time': 23814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314580, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.766 253542 INFO nova.compute.manager [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Took 0.76 seconds to destroy the instance on the hypervisor.
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.766 253542 DEBUG oslo.service.loopingcall [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.767 253542 DEBUG nova.compute.manager [-] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.767 253542 DEBUG nova.network.neutron [-] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.781 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[66027d0b-c3a7-4e27-a0b1-311bc486a284]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:5909'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496834, 'tstamp': 496834}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314581, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.799 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3281fe4d-7bd5-4f6d-b079-d02d38ae8e0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496834, 'reachable_time': 23814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314582, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.827 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[24ede5e0-50c5-4346-bd38-83527a09616e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.888 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2568daaa-3ce2-411a-9706-2a563b2ead2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.889 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.889 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.889 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:33 compute-0 kernel: tap908154e6-30: entered promiscuous mode
Nov 25 08:33:33 compute-0 NetworkManager[48915]: <info>  [1764059613.8919] manager: (tap908154e6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.892 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.894 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.895 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 ovn_controller[152859]: 2025-11-25T08:33:33Z|00526|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.911 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.912 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[68fdae4a-5b69-403e-be51-17c7dea7aae6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.913 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:33:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:33.914 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'env', 'PROCESS_TAG=haproxy-908154e6-322e-4607-bb65-df3f3f8daca6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/908154e6-322e-4607-bb65-df3f3f8daca6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.917 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.968 253542 DEBUG nova.network.neutron [-] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.989 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 0feca801-4630-4450-b915-616d8496ab51 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.989 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059613.9894643, 0feca801-4630-4450-b915-616d8496ab51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.990 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Resumed (Lifecycle Event)
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.991 253542 DEBUG nova.compute.manager [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.996 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance rebooted successfully.
Nov 25 08:33:33 compute-0 nova_compute[253538]: 2025-11-25 08:33:33.996 253542 DEBUG nova.compute.manager [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.037 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.040 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.055 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.056 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059613.9907825, 0feca801-4630-4450-b915-616d8496ab51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.056 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Started (Lifecycle Event)
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.068 253542 INFO nova.compute.manager [-] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Took 1.57 seconds to deallocate network for instance.
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.073 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.077 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.095 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.181 253542 DEBUG oslo_concurrency.lockutils [None req-2cd0e226-b95f-4d12-96f9-bbfd794cf495 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.242 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.242 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:34 compute-0 podman[314653]: 2025-11-25 08:33:34.322038767 +0000 UTC m=+0.069310814 container create 66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.351 253542 DEBUG oslo_concurrency.processutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:34 compute-0 systemd[1]: Started libpod-conmon-66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803.scope.
Nov 25 08:33:34 compute-0 podman[314653]: 2025-11-25 08:33:34.275225998 +0000 UTC m=+0.022498095 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:33:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b287a9705456981153ae9eff66d536d8de2c4f253b08bf0d27b3e6ca96100e2d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:34 compute-0 podman[314653]: 2025-11-25 08:33:34.407786332 +0000 UTC m=+0.155058399 container init 66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:33:34 compute-0 podman[314653]: 2025-11-25 08:33:34.418217901 +0000 UTC m=+0.165489948 container start 66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 08:33:34 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[314669]: [NOTICE]   (314674) : New worker (314676) forked
Nov 25 08:33:34 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[314669]: [NOTICE]   (314674) : Loading success.
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.516 253542 DEBUG nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.517 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.517 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.518 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.518 253542 DEBUG nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] No waiting events found dispatching network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.518 253542 WARNING nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received unexpected event network-vif-plugged-d8bd16e1-3695-474d-be04-7fdf44bee803 for instance with vm_state deleted and task_state None.
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.519 253542 DEBUG nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received event network-vif-unplugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.519 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.520 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.520 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.520 253542 DEBUG nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] No waiting events found dispatching network-vif-unplugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.521 253542 DEBUG nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received event network-vif-unplugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:33:34 compute-0 ceph-mon[75015]: pgmap v1506: 321 pgs: 321 active+clean; 296 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 169 op/s
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.521 253542 DEBUG nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received event network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.522 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.523 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.523 253542 DEBUG oslo_concurrency.lockutils [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.523 253542 DEBUG nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] No waiting events found dispatching network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.524 253542 WARNING nova.compute.manager [req-958ad06d-4ddf-46c6-b7eb-4fb17c96b89d req-ff6382c1-2808-4e29-9096-bad183bf704a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received unexpected event network-vif-plugged-dc1f5923-d984-4e49-bb97-bc1a77ade410 for instance with vm_state active and task_state deleting.
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.565 253542 DEBUG nova.network.neutron [-] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.677 253542 INFO nova.compute.manager [-] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Took 0.91 seconds to deallocate network for instance.
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.772 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:33:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2105950564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.849 253542 DEBUG oslo_concurrency.processutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.855 253542 DEBUG nova.compute.provider_tree [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.867 253542 DEBUG nova.scheduler.client.report [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.907 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.910 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:34 compute-0 nova_compute[253538]: 2025-11-25 08:33:34.948 253542 INFO nova.scheduler.client.report [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Deleted allocations for instance 8191f951-44bc-4371-957a-f2e7d37c1a32
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.011 253542 DEBUG oslo_concurrency.processutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.048 253542 DEBUG oslo_concurrency.lockutils [None req-458a06ae-eb9f-40ca-bf5d-32973710d0dd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "8191f951-44bc-4371-957a-f2e7d37c1a32" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.063 253542 DEBUG nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-deleted-d8bd16e1-3695-474d-be04-7fdf44bee803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.063 253542 DEBUG nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Received event network-vif-deleted-582f57a6-32d3-44a0-ab47-d147a0bb0f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.063 253542 DEBUG nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.064 253542 DEBUG oslo_concurrency.lockutils [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.064 253542 DEBUG oslo_concurrency.lockutils [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.064 253542 DEBUG oslo_concurrency.lockutils [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.065 253542 DEBUG nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.065 253542 WARNING nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.065 253542 DEBUG nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.065 253542 DEBUG oslo_concurrency.lockutils [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.066 253542 DEBUG oslo_concurrency.lockutils [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.066 253542 DEBUG oslo_concurrency.lockutils [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.066 253542 DEBUG nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.067 253542 WARNING nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.067 253542 DEBUG nova.compute.manager [req-d5c08f4e-9720-4f4b-aacf-93f928b709f6 req-a12bc3c7-5e20-4ef6-8001-0e3507a422d7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Received event network-vif-deleted-dc1f5923-d984-4e49-bb97-bc1a77ade410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 222 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 215 op/s
Nov 25 08:33:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:33:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2884688720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.452 253542 DEBUG oslo_concurrency.processutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.459 253542 DEBUG nova.compute.provider_tree [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.473 253542 DEBUG nova.scheduler.client.report [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.496 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.525 253542 INFO nova.scheduler.client.report [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Deleted allocations for instance fb888d2a-db54-44dc-8ec7-db417fa3cff6
Nov 25 08:33:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2105950564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2884688720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.575 253542 DEBUG oslo_concurrency.lockutils [None req-3a2f73f6-f459-4b47-8c06-62a351d4ec6d 7ad88cb0e4cf4d0b8e4cbec835318015 dc93aa65bef7473d961e0cad1e8f2962 - - default default] Lock "fb888d2a-db54-44dc-8ec7-db417fa3cff6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:35 compute-0 nova_compute[253538]: 2025-11-25 08:33:35.638 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:33:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:36.277 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:36 compute-0 ceph-mon[75015]: pgmap v1507: 321 pgs: 321 active+clean; 222 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 215 op/s
Nov 25 08:33:36 compute-0 nova_compute[253538]: 2025-11-25 08:33:36.545 253542 DEBUG nova.virt.libvirt.driver [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:33:36 compute-0 podman[314728]: 2025-11-25 08:33:36.796776247 +0000 UTC m=+0.043507390 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 08:33:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 214 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 186 op/s
Nov 25 08:33:38 compute-0 nova_compute[253538]: 2025-11-25 08:33:38.512 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:38 compute-0 ceph-mon[75015]: pgmap v1508: 321 pgs: 321 active+clean; 214 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 186 op/s
Nov 25 08:33:38 compute-0 kernel: tap56d077f0-8f (unregistering): left promiscuous mode
Nov 25 08:33:38 compute-0 NetworkManager[48915]: <info>  [1764059618.7767] device (tap56d077f0-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:33:38 compute-0 ovn_controller[152859]: 2025-11-25T08:33:38Z|00527|binding|INFO|Releasing lport 56d077f0-8f69-40d8-bd5e-267a70c4c319 from this chassis (sb_readonly=0)
Nov 25 08:33:38 compute-0 ovn_controller[152859]: 2025-11-25T08:33:38Z|00528|binding|INFO|Setting lport 56d077f0-8f69-40d8-bd5e-267a70c4c319 down in Southbound
Nov 25 08:33:38 compute-0 ovn_controller[152859]: 2025-11-25T08:33:38Z|00529|binding|INFO|Removing iface tap56d077f0-8f ovn-installed in OVS
Nov 25 08:33:38 compute-0 nova_compute[253538]: 2025-11-25 08:33:38.787 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:38 compute-0 nova_compute[253538]: 2025-11-25 08:33:38.808 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:38.809 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:ce:b3 10.100.0.4'], port_security=['fa:16:3e:7a:ce:b3 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '528fb917-0169-441d-b32d-652963344aea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=56d077f0-8f69-40d8-bd5e-267a70c4c319) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:38.811 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 56d077f0-8f69-40d8-bd5e-267a70c4c319 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 unbound from our chassis
Nov 25 08:33:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:38.812 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a66e51b8-ecb0-4289-a1b5-d5e379727721, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:33:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:38.813 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9da6864d-a619-4a85-a5ff-ee792d142ba8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:38.815 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace which is not needed anymore
Nov 25 08:33:38 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000038.scope: Deactivated successfully.
Nov 25 08:33:38 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000038.scope: Consumed 13.730s CPU time.
Nov 25 08:33:38 compute-0 systemd-machined[215790]: Machine qemu-64-instance-00000038 terminated.
Nov 25 08:33:38 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[312705]: [NOTICE]   (312709) : haproxy version is 2.8.14-c23fe91
Nov 25 08:33:38 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[312705]: [NOTICE]   (312709) : path to executable is /usr/sbin/haproxy
Nov 25 08:33:38 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[312705]: [WARNING]  (312709) : Exiting Master process...
Nov 25 08:33:38 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[312705]: [ALERT]    (312709) : Current worker (312711) exited with code 143 (Terminated)
Nov 25 08:33:38 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[312705]: [WARNING]  (312709) : All workers exited. Exiting... (0)
Nov 25 08:33:38 compute-0 systemd[1]: libpod-78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31.scope: Deactivated successfully.
Nov 25 08:33:38 compute-0 podman[314771]: 2025-11-25 08:33:38.943855253 +0000 UTC m=+0.046074950 container died 78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 08:33:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31-userdata-shm.mount: Deactivated successfully.
Nov 25 08:33:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f3b17704a9232b7fd9a3c6391403db11755eeec93cf71409d4cffb3b11bf81a-merged.mount: Deactivated successfully.
Nov 25 08:33:38 compute-0 podman[314771]: 2025-11-25 08:33:38.992486819 +0000 UTC m=+0.094706526 container cleanup 78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 08:33:39 compute-0 systemd[1]: libpod-conmon-78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31.scope: Deactivated successfully.
Nov 25 08:33:39 compute-0 podman[314805]: 2025-11-25 08:33:39.061798272 +0000 UTC m=+0.044823195 container remove 78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:33:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.067 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3204d20a-8d5e-4f21-b4ca-fb1a4df1ca07]: (4, ('Tue Nov 25 08:33:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31)\n78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31\nTue Nov 25 08:33:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31)\n78d4b6a7caeeac4acbf7fb46f257a6945cc87c7f6a44d0822f61b7d110045a31\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.069 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1488f68-95fc-474a-b62a-e16f9943312a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.070 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:39 compute-0 kernel: tapa66e51b8-e0: left promiscuous mode
Nov 25 08:33:39 compute-0 nova_compute[253538]: 2025-11-25 08:33:39.125 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:39 compute-0 nova_compute[253538]: 2025-11-25 08:33:39.146 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.150 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0885ac72-9c03-4c86-bdba-525e656f9d42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.163 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[16763442-708b-4853-8d01-948e6acbe837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.165 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9858a396-a3df-4713-9a09-48a4965352a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.182 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cd53454c-f119-4c72-bc67-d428e55fc112]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494462, 'reachable_time': 38439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314825, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:39 compute-0 systemd[1]: run-netns-ovnmeta\x2da66e51b8\x2decb0\x2d4289\x2da1b5\x2dd5e379727721.mount: Deactivated successfully.
Nov 25 08:33:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.185 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:33:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:39.185 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[29b087f8-35b7-4234-a7d7-21e939d3949d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 202 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.6 MiB/s wr, 174 op/s
Nov 25 08:33:39 compute-0 nova_compute[253538]: 2025-11-25 08:33:39.563 253542 INFO nova.virt.libvirt.driver [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance shutdown successfully after 24 seconds.
Nov 25 08:33:39 compute-0 nova_compute[253538]: 2025-11-25 08:33:39.569 253542 INFO nova.virt.libvirt.driver [-] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance destroyed successfully.
Nov 25 08:33:39 compute-0 nova_compute[253538]: 2025-11-25 08:33:39.570 253542 DEBUG nova.objects.instance [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 528fb917-0169-441d-b32d-652963344aea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:39 compute-0 nova_compute[253538]: 2025-11-25 08:33:39.780 253542 INFO nova.virt.libvirt.driver [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Beginning cold snapshot process
Nov 25 08:33:40 compute-0 nova_compute[253538]: 2025-11-25 08:33:40.037 253542 DEBUG nova.virt.libvirt.imagebackend [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:33:40 compute-0 nova_compute[253538]: 2025-11-25 08:33:40.275 253542 DEBUG nova.compute.manager [req-560ab25e-9be3-47f9-a1dc-bb3e47bbe5c0 req-dcae2697-f9be-4252-b422-fe434042c605 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received event network-vif-unplugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:40 compute-0 nova_compute[253538]: 2025-11-25 08:33:40.276 253542 DEBUG oslo_concurrency.lockutils [req-560ab25e-9be3-47f9-a1dc-bb3e47bbe5c0 req-dcae2697-f9be-4252-b422-fe434042c605 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "528fb917-0169-441d-b32d-652963344aea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:40 compute-0 nova_compute[253538]: 2025-11-25 08:33:40.276 253542 DEBUG oslo_concurrency.lockutils [req-560ab25e-9be3-47f9-a1dc-bb3e47bbe5c0 req-dcae2697-f9be-4252-b422-fe434042c605 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:40 compute-0 nova_compute[253538]: 2025-11-25 08:33:40.277 253542 DEBUG oslo_concurrency.lockutils [req-560ab25e-9be3-47f9-a1dc-bb3e47bbe5c0 req-dcae2697-f9be-4252-b422-fe434042c605 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:40 compute-0 nova_compute[253538]: 2025-11-25 08:33:40.277 253542 DEBUG nova.compute.manager [req-560ab25e-9be3-47f9-a1dc-bb3e47bbe5c0 req-dcae2697-f9be-4252-b422-fe434042c605 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] No waiting events found dispatching network-vif-unplugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:40 compute-0 nova_compute[253538]: 2025-11-25 08:33:40.278 253542 WARNING nova.compute.manager [req-560ab25e-9be3-47f9-a1dc-bb3e47bbe5c0 req-dcae2697-f9be-4252-b422-fe434042c605 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received unexpected event network-vif-unplugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 for instance with vm_state active and task_state shelving_image_uploading.
Nov 25 08:33:40 compute-0 nova_compute[253538]: 2025-11-25 08:33:40.319 253542 DEBUG nova.storage.rbd_utils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] creating snapshot(1d443195410148ad889f449ade5f87f7) on rbd image(528fb917-0169-441d-b32d-652963344aea_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:33:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Nov 25 08:33:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Nov 25 08:33:40 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Nov 25 08:33:40 compute-0 ceph-mon[75015]: pgmap v1509: 321 pgs: 321 active+clean; 202 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.6 MiB/s wr, 174 op/s
Nov 25 08:33:40 compute-0 nova_compute[253538]: 2025-11-25 08:33:40.642 253542 DEBUG nova.storage.rbd_utils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] cloning vms/528fb917-0169-441d-b32d-652963344aea_disk@1d443195410148ad889f449ade5f87f7 to images/98ee8490-b027-417d-923b-76479289f395 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:33:40 compute-0 nova_compute[253538]: 2025-11-25 08:33:40.675 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:40 compute-0 nova_compute[253538]: 2025-11-25 08:33:40.757 253542 DEBUG nova.storage.rbd_utils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] flattening images/98ee8490-b027-417d-923b-76479289f395 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:33:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:33:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:41.058 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:41.058 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:41.059 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:41 compute-0 nova_compute[253538]: 2025-11-25 08:33:41.140 253542 DEBUG nova.storage.rbd_utils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] removing snapshot(1d443195410148ad889f449ade5f87f7) on rbd image(528fb917-0169-441d-b32d-652963344aea_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:33:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 202 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 660 KiB/s wr, 188 op/s
Nov 25 08:33:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Nov 25 08:33:41 compute-0 ceph-mon[75015]: osdmap e177: 3 total, 3 up, 3 in
Nov 25 08:33:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Nov 25 08:33:41 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Nov 25 08:33:41 compute-0 nova_compute[253538]: 2025-11-25 08:33:41.650 253542 DEBUG nova.storage.rbd_utils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] creating snapshot(snap) on rbd image(98ee8490-b027-417d-923b-76479289f395) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:33:42 compute-0 nova_compute[253538]: 2025-11-25 08:33:42.395 253542 DEBUG nova.compute.manager [req-deb71369-3041-4a47-b9d7-c7694d285d3a req-31ae0f90-e1ce-47fe-8fdb-0b0858c7e5b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received event network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:42 compute-0 nova_compute[253538]: 2025-11-25 08:33:42.396 253542 DEBUG oslo_concurrency.lockutils [req-deb71369-3041-4a47-b9d7-c7694d285d3a req-31ae0f90-e1ce-47fe-8fdb-0b0858c7e5b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "528fb917-0169-441d-b32d-652963344aea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:42 compute-0 nova_compute[253538]: 2025-11-25 08:33:42.396 253542 DEBUG oslo_concurrency.lockutils [req-deb71369-3041-4a47-b9d7-c7694d285d3a req-31ae0f90-e1ce-47fe-8fdb-0b0858c7e5b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:42 compute-0 nova_compute[253538]: 2025-11-25 08:33:42.397 253542 DEBUG oslo_concurrency.lockutils [req-deb71369-3041-4a47-b9d7-c7694d285d3a req-31ae0f90-e1ce-47fe-8fdb-0b0858c7e5b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:42 compute-0 nova_compute[253538]: 2025-11-25 08:33:42.397 253542 DEBUG nova.compute.manager [req-deb71369-3041-4a47-b9d7-c7694d285d3a req-31ae0f90-e1ce-47fe-8fdb-0b0858c7e5b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] No waiting events found dispatching network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:33:42 compute-0 nova_compute[253538]: 2025-11-25 08:33:42.398 253542 WARNING nova.compute.manager [req-deb71369-3041-4a47-b9d7-c7694d285d3a req-31ae0f90-e1ce-47fe-8fdb-0b0858c7e5b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received unexpected event network-vif-plugged-56d077f0-8f69-40d8-bd5e-267a70c4c319 for instance with vm_state active and task_state shelving_image_uploading.
Nov 25 08:33:42 compute-0 ovn_controller[152859]: 2025-11-25T08:33:42Z|00530|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 08:33:42 compute-0 nova_compute[253538]: 2025-11-25 08:33:42.523 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:42 compute-0 ceph-mon[75015]: pgmap v1511: 321 pgs: 321 active+clean; 202 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 660 KiB/s wr, 188 op/s
Nov 25 08:33:42 compute-0 ceph-mon[75015]: osdmap e178: 3 total, 3 up, 3 in
Nov 25 08:33:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Nov 25 08:33:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Nov 25 08:33:42 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Nov 25 08:33:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 223 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 1.4 MiB/s wr, 120 op/s
Nov 25 08:33:43 compute-0 nova_compute[253538]: 2025-11-25 08:33:43.515 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:43 compute-0 ceph-mon[75015]: osdmap e179: 3 total, 3 up, 3 in
Nov 25 08:33:43 compute-0 ceph-mon[75015]: pgmap v1514: 321 pgs: 321 active+clean; 223 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 1.4 MiB/s wr, 120 op/s
Nov 25 08:33:43 compute-0 podman[314967]: 2025-11-25 08:33:43.854152712 +0000 UTC m=+0.094484251 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:33:43 compute-0 nova_compute[253538]: 2025-11-25 08:33:43.943 253542 INFO nova.virt.libvirt.driver [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Snapshot image upload complete
Nov 25 08:33:43 compute-0 nova_compute[253538]: 2025-11-25 08:33:43.943 253542 DEBUG nova.compute.manager [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:43 compute-0 nova_compute[253538]: 2025-11-25 08:33:43.994 253542 INFO nova.compute.manager [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Shelve offloading
Nov 25 08:33:44 compute-0 nova_compute[253538]: 2025-11-25 08:33:44.002 253542 INFO nova.virt.libvirt.driver [-] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance destroyed successfully.
Nov 25 08:33:44 compute-0 nova_compute[253538]: 2025-11-25 08:33:44.003 253542 DEBUG nova.compute.manager [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:44 compute-0 nova_compute[253538]: 2025-11-25 08:33:44.006 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:33:44 compute-0 nova_compute[253538]: 2025-11-25 08:33:44.006 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquired lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:33:44 compute-0 nova_compute[253538]: 2025-11-25 08:33:44.007 253542 DEBUG nova.network.neutron [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:33:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 281 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 8.7 MiB/s rd, 7.8 MiB/s wr, 183 op/s
Nov 25 08:33:45 compute-0 nova_compute[253538]: 2025-11-25 08:33:45.641 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:33:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Nov 25 08:33:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Nov 25 08:33:46 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Nov 25 08:33:46 compute-0 nova_compute[253538]: 2025-11-25 08:33:46.449 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059611.4465885, 8191f951-44bc-4371-957a-f2e7d37c1a32 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:46 compute-0 nova_compute[253538]: 2025-11-25 08:33:46.449 253542 INFO nova.compute.manager [-] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] VM Stopped (Lifecycle Event)
Nov 25 08:33:46 compute-0 nova_compute[253538]: 2025-11-25 08:33:46.471 253542 DEBUG nova.compute.manager [None req-adb8f138-196b-4ed1-98a6-602f46fae392 - - - - - -] [instance: 8191f951-44bc-4371-957a-f2e7d37c1a32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:46 compute-0 ceph-mon[75015]: pgmap v1515: 321 pgs: 321 active+clean; 281 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 8.7 MiB/s rd, 7.8 MiB/s wr, 183 op/s
Nov 25 08:33:46 compute-0 ceph-mon[75015]: osdmap e180: 3 total, 3 up, 3 in
Nov 25 08:33:46 compute-0 podman[314992]: 2025-11-25 08:33:46.856099882 +0000 UTC m=+0.097571893 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:33:47 compute-0 ovn_controller[152859]: 2025-11-25T08:33:47Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:cd:40 10.100.0.13
Nov 25 08:33:47 compute-0 nova_compute[253538]: 2025-11-25 08:33:47.164 253542 DEBUG nova.network.neutron [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Updating instance_info_cache with network_info: [{"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:47 compute-0 nova_compute[253538]: 2025-11-25 08:33:47.181 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Releasing lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:33:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 281 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 7.8 MiB/s wr, 176 op/s
Nov 25 08:33:47 compute-0 nova_compute[253538]: 2025-11-25 08:33:47.716 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:47 compute-0 nova_compute[253538]: 2025-11-25 08:33:47.717 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:47 compute-0 nova_compute[253538]: 2025-11-25 08:33:47.732 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:33:47 compute-0 ceph-mon[75015]: pgmap v1517: 321 pgs: 321 active+clean; 281 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 7.8 MiB/s wr, 176 op/s
Nov 25 08:33:47 compute-0 nova_compute[253538]: 2025-11-25 08:33:47.812 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:47 compute-0 nova_compute[253538]: 2025-11-25 08:33:47.812 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:47 compute-0 nova_compute[253538]: 2025-11-25 08:33:47.821 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:33:47 compute-0 nova_compute[253538]: 2025-11-25 08:33:47.821 253542 INFO nova.compute.claims [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:33:47 compute-0 nova_compute[253538]: 2025-11-25 08:33:47.947 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.280 253542 INFO nova.virt.libvirt.driver [-] [instance: 528fb917-0169-441d-b32d-652963344aea] Instance destroyed successfully.
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.282 253542 DEBUG nova.objects.instance [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'resources' on Instance uuid 528fb917-0169-441d-b32d-652963344aea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.292 253542 DEBUG nova.virt.libvirt.vif [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-688174086',display_name='tempest-DeleteServersTestJSON-server-688174086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-688174086',id=56,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:33:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-qasdqh9g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member',shelved_at='2025-11-25T08:33:43.943593',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='98ee8490-b027-417d-923b-76479289f395'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:33:39Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=528fb917-0169-441d-b32d-652963344aea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.292 253542 DEBUG nova.network.os_vif_util [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56d077f0-8f", "ovs_interfaceid": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.293 253542 DEBUG nova.network.os_vif_util [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ce:b3,bridge_name='br-int',has_traffic_filtering=True,id=56d077f0-8f69-40d8-bd5e-267a70c4c319,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56d077f0-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.293 253542 DEBUG os_vif [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ce:b3,bridge_name='br-int',has_traffic_filtering=True,id=56d077f0-8f69-40d8-bd5e-267a70c4c319,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56d077f0-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.295 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.295 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56d077f0-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.296 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059613.2900736, fb888d2a-db54-44dc-8ec7-db417fa3cff6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.297 253542 INFO nova.compute.manager [-] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] VM Stopped (Lifecycle Event)
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.301 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.302 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.305 253542 INFO os_vif [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ce:b3,bridge_name='br-int',has_traffic_filtering=True,id=56d077f0-8f69-40d8-bd5e-267a70c4c319,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56d077f0-8f')
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.326 253542 DEBUG nova.compute.manager [None req-6f45b46c-99cd-453e-b384-1e92b29703bd - - - - - -] [instance: fb888d2a-db54-44dc-8ec7-db417fa3cff6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.368 253542 DEBUG nova.compute.manager [req-489109fd-810e-48aa-a0b8-81b32d9beccd req-a18d564c-8f4f-4d7a-9f78-dfa7c2ab0290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Received event network-changed-56d077f0-8f69-40d8-bd5e-267a70c4c319 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.369 253542 DEBUG nova.compute.manager [req-489109fd-810e-48aa-a0b8-81b32d9beccd req-a18d564c-8f4f-4d7a-9f78-dfa7c2ab0290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Refreshing instance network info cache due to event network-changed-56d077f0-8f69-40d8-bd5e-267a70c4c319. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.370 253542 DEBUG oslo_concurrency.lockutils [req-489109fd-810e-48aa-a0b8-81b32d9beccd req-a18d564c-8f4f-4d7a-9f78-dfa7c2ab0290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.371 253542 DEBUG oslo_concurrency.lockutils [req-489109fd-810e-48aa-a0b8-81b32d9beccd req-a18d564c-8f4f-4d7a-9f78-dfa7c2ab0290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.371 253542 DEBUG nova.network.neutron [req-489109fd-810e-48aa-a0b8-81b32d9beccd req-a18d564c-8f4f-4d7a-9f78-dfa7c2ab0290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Refreshing network info cache for port 56d077f0-8f69-40d8-bd5e-267a70c4c319 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:33:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:33:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3184927680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.436 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.443 253542 DEBUG nova.compute.provider_tree [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.455 253542 DEBUG nova.scheduler.client.report [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.473 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.474 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.518 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.519 253542 DEBUG nova.network.neutron [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.557 253542 INFO nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.576 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.656 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.659 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.659 253542 INFO nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Creating image(s)
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.687 253542 DEBUG nova.storage.rbd_utils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.716 253542 DEBUG nova.storage.rbd_utils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.745 253542 DEBUG nova.storage.rbd_utils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.750 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.838 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.839 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.840 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.840 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.863 253542 DEBUG nova.storage.rbd_utils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:48 compute-0 nova_compute[253538]: 2025-11-25 08:33:48.867 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3184927680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:49 compute-0 nova_compute[253538]: 2025-11-25 08:33:49.215 253542 DEBUG nova.policy [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:33:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 281 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.1 MiB/s wr, 174 op/s
Nov 25 08:33:50 compute-0 ceph-mon[75015]: pgmap v1518: 321 pgs: 321 active+clean; 281 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.1 MiB/s wr, 174 op/s
Nov 25 08:33:50 compute-0 nova_compute[253538]: 2025-11-25 08:33:50.644 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:50 compute-0 nova_compute[253538]: 2025-11-25 08:33:50.693 253542 DEBUG nova.network.neutron [req-489109fd-810e-48aa-a0b8-81b32d9beccd req-a18d564c-8f4f-4d7a-9f78-dfa7c2ab0290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Updated VIF entry in instance network info cache for port 56d077f0-8f69-40d8-bd5e-267a70c4c319. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:33:50 compute-0 nova_compute[253538]: 2025-11-25 08:33:50.694 253542 DEBUG nova.network.neutron [req-489109fd-810e-48aa-a0b8-81b32d9beccd req-a18d564c-8f4f-4d7a-9f78-dfa7c2ab0290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Updating instance_info_cache with network_info: [{"id": "56d077f0-8f69-40d8-bd5e-267a70c4c319", "address": "fa:16:3e:7a:ce:b3", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": null, "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap56d077f0-8f", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:50 compute-0 nova_compute[253538]: 2025-11-25 08:33:50.713 253542 DEBUG oslo_concurrency.lockutils [req-489109fd-810e-48aa-a0b8-81b32d9beccd req-a18d564c-8f4f-4d7a-9f78-dfa7c2ab0290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-528fb917-0169-441d-b32d-652963344aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:33:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:33:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 281 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.4 MiB/s wr, 140 op/s
Nov 25 08:33:51 compute-0 nova_compute[253538]: 2025-11-25 08:33:51.307 253542 DEBUG nova.network.neutron [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Successfully created port: 1682bdaf-1dd6-4036-8d17-a169dbaaca8f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:33:52 compute-0 nova_compute[253538]: 2025-11-25 08:33:52.017 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:52 compute-0 ceph-mon[75015]: pgmap v1519: 321 pgs: 321 active+clean; 281 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.4 MiB/s wr, 140 op/s
Nov 25 08:33:52 compute-0 nova_compute[253538]: 2025-11-25 08:33:52.311 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:52 compute-0 nova_compute[253538]: 2025-11-25 08:33:52.319 253542 DEBUG nova.storage.rbd_utils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] resizing rbd image 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:33:53 compute-0 nova_compute[253538]: 2025-11-25 08:33:53.008 253542 DEBUG nova.network.neutron [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Successfully updated port: 1682bdaf-1dd6-4036-8d17-a169dbaaca8f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:33:53 compute-0 nova_compute[253538]: 2025-11-25 08:33:53.024 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:33:53 compute-0 nova_compute[253538]: 2025-11-25 08:33:53.024 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:33:53 compute-0 nova_compute[253538]: 2025-11-25 08:33:53.024 253542 DEBUG nova.network.neutron [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:33:53 compute-0 nova_compute[253538]: 2025-11-25 08:33:53.088 253542 DEBUG nova.compute.manager [req-85b5f7b4-76de-47ae-9771-07c0ddc04761 req-6ce601b4-76f2-4a2d-b78b-a826a4ce7ce6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:33:53 compute-0 nova_compute[253538]: 2025-11-25 08:33:53.089 253542 DEBUG nova.compute.manager [req-85b5f7b4-76de-47ae-9771-07c0ddc04761 req-6ce601b4-76f2-4a2d-b78b-a826a4ce7ce6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing instance network info cache due to event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:33:53 compute-0 nova_compute[253538]: 2025-11-25 08:33:53.089 253542 DEBUG oslo_concurrency.lockutils [req-85b5f7b4-76de-47ae-9771-07c0ddc04761 req-6ce601b4-76f2-4a2d-b78b-a826a4ce7ce6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:33:53 compute-0 nova_compute[253538]: 2025-11-25 08:33:53.164 253542 DEBUG nova.network.neutron [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 291 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.6 MiB/s wr, 136 op/s
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:33:53
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'images', 'backups']
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:33:53 compute-0 nova_compute[253538]: 2025-11-25 08:33:53.299 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:53 compute-0 nova_compute[253538]: 2025-11-25 08:33:53.335 253542 DEBUG nova.objects.instance [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'migration_context' on Instance uuid 5c6656ef-7ad0-4eb4-a597-aa9a8078805b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:53 compute-0 nova_compute[253538]: 2025-11-25 08:33:53.349 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:33:53 compute-0 nova_compute[253538]: 2025-11-25 08:33:53.349 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Ensure instance console log exists: /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:33:53 compute-0 nova_compute[253538]: 2025-11-25 08:33:53.350 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:53 compute-0 nova_compute[253538]: 2025-11-25 08:33:53.351 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:53 compute-0 nova_compute[253538]: 2025-11-25 08:33:53.351 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:33:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.016 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059619.0152655, 528fb917-0169-441d-b32d-652963344aea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.016 253542 INFO nova.compute.manager [-] [instance: 528fb917-0169-441d-b32d-652963344aea] VM Stopped (Lifecycle Event)
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.027 253542 DEBUG nova.network.neutron [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.031 253542 DEBUG nova.compute.manager [None req-4a26b254-2479-46d9-b2fd-91a6059c934d - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.036 253542 DEBUG nova.compute.manager [None req-4a26b254-2479-46d9-b2fd-91a6059c934d - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.045 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.046 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Instance network_info: |[{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.046 253542 DEBUG oslo_concurrency.lockutils [req-85b5f7b4-76de-47ae-9771-07c0ddc04761 req-6ce601b4-76f2-4a2d-b78b-a826a4ce7ce6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.046 253542 DEBUG nova.network.neutron [req-85b5f7b4-76de-47ae-9771-07c0ddc04761 req-6ce601b4-76f2-4a2d-b78b-a826a4ce7ce6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.050 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Start _get_guest_xml network_info=[{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.052 253542 INFO nova.compute.manager [None req-4a26b254-2479-46d9-b2fd-91a6059c934d - - - - - -] [instance: 528fb917-0169-441d-b32d-652963344aea] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.056 253542 WARNING nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.066 253542 DEBUG nova.virt.libvirt.host [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.066 253542 DEBUG nova.virt.libvirt.host [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.107 253542 DEBUG nova.virt.libvirt.host [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.108 253542 DEBUG nova.virt.libvirt.host [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.108 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.109 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.109 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.110 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.110 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.110 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.111 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.111 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.111 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.111 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.112 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.112 253542 DEBUG nova.virt.hardware [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.115 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:54 compute-0 ceph-mgr[75313]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3119838916
Nov 25 08:33:54 compute-0 ceph-mon[75015]: pgmap v1520: 321 pgs: 321 active+clean; 291 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.6 MiB/s wr, 136 op/s
Nov 25 08:33:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:33:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3164742920' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.560 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.592 253542 DEBUG nova.storage.rbd_utils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.596 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.890 253542 INFO nova.virt.libvirt.driver [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Deleting instance files /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea_del
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.892 253542 INFO nova.virt.libvirt.driver [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 528fb917-0169-441d-b32d-652963344aea] Deletion of /var/lib/nova/instances/528fb917-0169-441d-b32d-652963344aea_del complete
Nov 25 08:33:54 compute-0 nova_compute[253538]: 2025-11-25 08:33:54.997 253542 INFO nova.scheduler.client.report [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Deleted allocations for instance 528fb917-0169-441d-b32d-652963344aea
Nov 25 08:33:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:33:55 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/277159241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.036 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.037 253542 DEBUG nova.virt.libvirt.vif [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632172140',display_name='tempest-tempest.common.compute-instance-632172140',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632172140',id=58,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-yi71ppnf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:33:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=5c6656ef-7ad0-4eb4-a597-aa9a8078805b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.038 253542 DEBUG nova.network.os_vif_util [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.039 253542 DEBUG nova.network.os_vif_util [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:42:da,bridge_name='br-int',has_traffic_filtering=True,id=1682bdaf-1dd6-4036-8d17-a169dbaaca8f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1682bdaf-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.040 253542 DEBUG nova.objects.instance [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c6656ef-7ad0-4eb4-a597-aa9a8078805b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.043 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.043 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.052 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:33:55 compute-0 nova_compute[253538]:   <uuid>5c6656ef-7ad0-4eb4-a597-aa9a8078805b</uuid>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   <name>instance-0000003a</name>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <nova:name>tempest-tempest.common.compute-instance-632172140</nova:name>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:33:54</nova:creationTime>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:33:55 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:33:55 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:33:55 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:33:55 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:33:55 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:33:55 compute-0 nova_compute[253538]:         <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:33:55 compute-0 nova_compute[253538]:         <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:33:55 compute-0 nova_compute[253538]:         <nova:port uuid="1682bdaf-1dd6-4036-8d17-a169dbaaca8f">
Nov 25 08:33:55 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <system>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <entry name="serial">5c6656ef-7ad0-4eb4-a597-aa9a8078805b</entry>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <entry name="uuid">5c6656ef-7ad0-4eb4-a597-aa9a8078805b</entry>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     </system>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   <os>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   </os>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   <features>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   </features>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk">
Nov 25 08:33:55 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:33:55 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk.config">
Nov 25 08:33:55 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       </source>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:33:55 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:60:42:da"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <target dev="tap1682bdaf-1d"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/console.log" append="off"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <video>
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     </video>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:33:55 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:33:55 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:33:55 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:33:55 compute-0 nova_compute[253538]: </domain>
Nov 25 08:33:55 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.052 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Preparing to wait for external event network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.053 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.053 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.053 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.054 253542 DEBUG nova.virt.libvirt.vif [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632172140',display_name='tempest-tempest.common.compute-instance-632172140',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632172140',id=58,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-yi71ppnf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:33:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=5c6656ef-7ad0-4eb4-a597-aa9a8078805b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.054 253542 DEBUG nova.network.os_vif_util [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.055 253542 DEBUG nova.network.os_vif_util [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:42:da,bridge_name='br-int',has_traffic_filtering=True,id=1682bdaf-1dd6-4036-8d17-a169dbaaca8f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1682bdaf-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.055 253542 DEBUG os_vif [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:42:da,bridge_name='br-int',has_traffic_filtering=True,id=1682bdaf-1dd6-4036-8d17-a169dbaaca8f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1682bdaf-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.056 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.056 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.057 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.062 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.062 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1682bdaf-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.063 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1682bdaf-1d, col_values=(('external_ids', {'iface-id': '1682bdaf-1dd6-4036-8d17-a169dbaaca8f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:42:da', 'vm-uuid': '5c6656ef-7ad0-4eb4-a597-aa9a8078805b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.064 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:55 compute-0 NetworkManager[48915]: <info>  [1764059635.0656] manager: (tap1682bdaf-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.069 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.070 253542 INFO os_vif [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:42:da,bridge_name='br-int',has_traffic_filtering=True,id=1682bdaf-1dd6-4036-8d17-a169dbaaca8f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1682bdaf-1d')
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.134 253542 DEBUG oslo_concurrency.processutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.222 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.225 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.225 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:60:42:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.226 253542 INFO nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Using config drive
Nov 25 08:33:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 289 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 682 KiB/s rd, 2.2 MiB/s wr, 114 op/s
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.250 253542 DEBUG nova.storage.rbd_utils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.502 253542 DEBUG nova.network.neutron [req-85b5f7b4-76de-47ae-9771-07c0ddc04761 req-6ce601b4-76f2-4a2d-b78b-a826a4ce7ce6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updated VIF entry in instance network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.503 253542 DEBUG nova.network.neutron [req-85b5f7b4-76de-47ae-9771-07c0ddc04761 req-6ce601b4-76f2-4a2d-b78b-a826a4ce7ce6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.523 253542 DEBUG oslo_concurrency.lockutils [req-85b5f7b4-76de-47ae-9771-07c0ddc04761 req-6ce601b4-76f2-4a2d-b78b-a826a4ce7ce6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.546 253542 INFO nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Creating config drive at /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/disk.config
Nov 25 08:33:55 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3164742920' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:55 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/277159241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.558 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz454n58_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:33:55 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023425656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.616 253542 DEBUG oslo_concurrency.processutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.626 253542 DEBUG nova.compute.provider_tree [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.644 253542 DEBUG nova.scheduler.client.report [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.651 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.675 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.722 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz454n58_" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.745 253542 DEBUG nova.storage.rbd_utils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.748 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/disk.config 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.779 253542 DEBUG oslo_concurrency.lockutils [None req-af0be6ff-9106-4536-a846-3295c2b2f70b a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "528fb917-0169-441d-b32d-652963344aea" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 40.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:33:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.942 253542 DEBUG oslo_concurrency.processutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/disk.config 5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.943 253542 INFO nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Deleting local config drive /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/disk.config because it was imported into RBD.
Nov 25 08:33:55 compute-0 kernel: tap1682bdaf-1d: entered promiscuous mode
Nov 25 08:33:55 compute-0 NetworkManager[48915]: <info>  [1764059635.9935] manager: (tap1682bdaf-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Nov 25 08:33:55 compute-0 ovn_controller[152859]: 2025-11-25T08:33:55Z|00531|binding|INFO|Claiming lport 1682bdaf-1dd6-4036-8d17-a169dbaaca8f for this chassis.
Nov 25 08:33:55 compute-0 ovn_controller[152859]: 2025-11-25T08:33:55Z|00532|binding|INFO|1682bdaf-1dd6-4036-8d17-a169dbaaca8f: Claiming fa:16:3e:60:42:da 10.100.0.9
Nov 25 08:33:55 compute-0 nova_compute[253538]: 2025-11-25 08:33:55.996 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.004 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:42:da 10.100.0.9'], port_security=['fa:16:3e:60:42:da 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5c6656ef-7ad0-4eb4-a597-aa9a8078805b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '2', 'neutron:security_group_ids': '379bb9ab-2c12-4eea-bd54-6b8a24f607d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1682bdaf-1dd6-4036-8d17-a169dbaaca8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.005 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe bound to our chassis
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.006 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:33:56 compute-0 ovn_controller[152859]: 2025-11-25T08:33:56Z|00533|binding|INFO|Setting lport 1682bdaf-1dd6-4036-8d17-a169dbaaca8f ovn-installed in OVS
Nov 25 08:33:56 compute-0 ovn_controller[152859]: 2025-11-25T08:33:56Z|00534|binding|INFO|Setting lport 1682bdaf-1dd6-4036-8d17-a169dbaaca8f up in Southbound
Nov 25 08:33:56 compute-0 nova_compute[253538]: 2025-11-25 08:33:56.017 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.019 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eb976d80-d3df-4d37-bacd-de49a17926d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.020 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bf3cbfa-71 in ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:33:56 compute-0 nova_compute[253538]: 2025-11-25 08:33:56.021 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.022 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bf3cbfa-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.022 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[845b3e57-0f16-4d50-a9e3-927b4e3b20a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.023 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ea466ccc-c3cb-465f-9cfc-53c1eb3cf720]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:56 compute-0 systemd-udevd[315383]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.033 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b81d991a-0534-4249-af61-eda4791e0a90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:56 compute-0 NetworkManager[48915]: <info>  [1764059636.0452] device (tap1682bdaf-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:33:56 compute-0 systemd-machined[215790]: New machine qemu-67-instance-0000003a.
Nov 25 08:33:56 compute-0 NetworkManager[48915]: <info>  [1764059636.0464] device (tap1682bdaf-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.054 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[12f02f5f-6022-43be-a661-6613f489b21f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:56 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-0000003a.
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.081 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[794826d2-1346-46aa-bb72-06d6e6471881]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.087 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a778c9-ced7-41df-b3f0-a6746aa5a34e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:56 compute-0 NetworkManager[48915]: <info>  [1764059636.0884] manager: (tap9bf3cbfa-70): new Veth device (/org/freedesktop/NetworkManager/Devices/245)
Nov 25 08:33:56 compute-0 systemd-udevd[315387]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.113 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[afa14bc7-dda4-415c-a21d-584129275d35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.116 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8d3e420b-ff8f-485f-90c6-efff38d39a83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:56 compute-0 NetworkManager[48915]: <info>  [1764059636.1351] device (tap9bf3cbfa-70): carrier: link connected
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.141 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0f9f52-5db3-4a89-9084-4f7c4cefae56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.158 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd5775b-86aa-4aa2-903d-9288e7aa115b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315415, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.172 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6c125742-094d-42ec-b242-4829f62fe2d1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:8fc7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499073, 'tstamp': 499073}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315416, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.196 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[149c3794-3259-411c-907f-98d5f9e34412]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315417, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.233 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[854c6dce-3e7f-4f23-9803-65ec84793df8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:56 compute-0 nova_compute[253538]: 2025-11-25 08:33:56.252 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.295 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd1bdd7-e770-4f4f-ba49-b2e10b414c33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.296 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.296 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.297 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:56 compute-0 kernel: tap9bf3cbfa-70: entered promiscuous mode
Nov 25 08:33:56 compute-0 NetworkManager[48915]: <info>  [1764059636.2992] manager: (tap9bf3cbfa-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Nov 25 08:33:56 compute-0 nova_compute[253538]: 2025-11-25 08:33:56.298 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.302 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:33:56 compute-0 ovn_controller[152859]: 2025-11-25T08:33:56Z|00535|binding|INFO|Releasing lport 98660c0c-0936-4c4d-9a89-87b784d8d5cc from this chassis (sb_readonly=0)
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.304 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.305 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d4a9b2-c4ee-499a-9348-b5fc42bc6c27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.306 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.pid.haproxy
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:33:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:33:56.306 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'env', 'PROCESS_TAG=haproxy-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:33:56 compute-0 nova_compute[253538]: 2025-11-25 08:33:56.319 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:33:56 compute-0 nova_compute[253538]: 2025-11-25 08:33:56.532 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059636.5310957, 5c6656ef-7ad0-4eb4-a597-aa9a8078805b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:56 compute-0 nova_compute[253538]: 2025-11-25 08:33:56.532 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] VM Started (Lifecycle Event)
Nov 25 08:33:56 compute-0 nova_compute[253538]: 2025-11-25 08:33:56.547 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:56 compute-0 nova_compute[253538]: 2025-11-25 08:33:56.553 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059636.5314503, 5c6656ef-7ad0-4eb4-a597-aa9a8078805b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:33:56 compute-0 nova_compute[253538]: 2025-11-25 08:33:56.554 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] VM Paused (Lifecycle Event)
Nov 25 08:33:56 compute-0 ceph-mon[75015]: pgmap v1521: 321 pgs: 321 active+clean; 289 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 682 KiB/s rd, 2.2 MiB/s wr, 114 op/s
Nov 25 08:33:56 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3023425656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:33:56 compute-0 nova_compute[253538]: 2025-11-25 08:33:56.570 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:33:56 compute-0 nova_compute[253538]: 2025-11-25 08:33:56.574 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:33:56 compute-0 nova_compute[253538]: 2025-11-25 08:33:56.590 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:33:56 compute-0 podman[315491]: 2025-11-25 08:33:56.720445854 +0000 UTC m=+0.073379283 container create fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 08:33:56 compute-0 podman[315491]: 2025-11-25 08:33:56.671431267 +0000 UTC m=+0.024364776 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:33:56 compute-0 systemd[1]: Started libpod-conmon-fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc.scope.
Nov 25 08:33:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:33:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17ce801fda8e4bb6e919da95eaa73f45324ff65118d3fb3c246879cdb73c93b9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:33:56 compute-0 podman[315491]: 2025-11-25 08:33:56.849290248 +0000 UTC m=+0.202223707 container init fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 08:33:56 compute-0 podman[315491]: 2025-11-25 08:33:56.85607795 +0000 UTC m=+0.209011409 container start fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 08:33:56 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[315506]: [NOTICE]   (315510) : New worker (315512) forked
Nov 25 08:33:56 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[315506]: [NOTICE]   (315510) : Loading success.
Nov 25 08:33:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 248 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 621 KiB/s rd, 1.9 MiB/s wr, 114 op/s
Nov 25 08:33:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Nov 25 08:33:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Nov 25 08:33:57 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Nov 25 08:33:58 compute-0 ceph-mon[75015]: pgmap v1522: 321 pgs: 321 active+clean; 248 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 621 KiB/s rd, 1.9 MiB/s wr, 114 op/s
Nov 25 08:33:58 compute-0 ceph-mon[75015]: osdmap e181: 3 total, 3 up, 3 in
Nov 25 08:33:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 213 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.2 MiB/s wr, 120 op/s
Nov 25 08:33:59 compute-0 ceph-mon[75015]: pgmap v1524: 321 pgs: 321 active+clean; 213 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.2 MiB/s wr, 120 op/s
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.065 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.498 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.499 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.516 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.592 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.593 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.604 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.605 253542 INFO nova.compute.claims [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.650 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.759 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.860 253542 DEBUG nova.compute.manager [req-8d4df46c-5c3b-4839-974e-20c5ce0129ab req-e9777216-3578-4979-b5b4-e716f21f9c17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.860 253542 DEBUG oslo_concurrency.lockutils [req-8d4df46c-5c3b-4839-974e-20c5ce0129ab req-e9777216-3578-4979-b5b4-e716f21f9c17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.861 253542 DEBUG oslo_concurrency.lockutils [req-8d4df46c-5c3b-4839-974e-20c5ce0129ab req-e9777216-3578-4979-b5b4-e716f21f9c17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.861 253542 DEBUG oslo_concurrency.lockutils [req-8d4df46c-5c3b-4839-974e-20c5ce0129ab req-e9777216-3578-4979-b5b4-e716f21f9c17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.861 253542 DEBUG nova.compute.manager [req-8d4df46c-5c3b-4839-974e-20c5ce0129ab req-e9777216-3578-4979-b5b4-e716f21f9c17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Processing event network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.862 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.882 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059640.87377, 5c6656ef-7ad0-4eb4-a597-aa9a8078805b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.882 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] VM Resumed (Lifecycle Event)
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.885 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.889 253542 INFO nova.virt.libvirt.driver [-] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Instance spawned successfully.
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.890 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.913 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.920 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.925 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.926 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.927 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.927 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.928 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.928 253542 DEBUG nova.virt.libvirt.driver [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:34:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:00.938 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.009 253542 INFO nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Took 12.35 seconds to spawn the instance on the hypervisor.
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.010 253542 DEBUG nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.082 253542 INFO nova.compute.manager [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Took 13.30 seconds to build instance.
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.095 253542 DEBUG oslo_concurrency.lockutils [None req-f62146bf-1a2c-40d8-a6cb-799a30cd3c43 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:34:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4022972059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.223 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.229 253542 DEBUG nova.compute.provider_tree [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:34:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 190 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 2.2 MiB/s wr, 106 op/s
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.242 253542 DEBUG nova.scheduler.client.report [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.264 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.264 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.321 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.322 253542 DEBUG nova.network.neutron [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.337 253542 INFO nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.355 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.439 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.440 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.441 253542 INFO nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Creating image(s)
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.466 253542 DEBUG nova.storage.rbd_utils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] rbd image e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.488 253542 DEBUG nova.storage.rbd_utils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] rbd image e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.506 253542 DEBUG nova.storage.rbd_utils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] rbd image e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.509 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:01 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4022972059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.580 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.581 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.582 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.582 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.639 253542 DEBUG nova.storage.rbd_utils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] rbd image e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.643 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:01 compute-0 nova_compute[253538]: 2025-11-25 08:34:01.679 253542 DEBUG nova.policy [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd4a674c4114a4e4fb5e446089be3ffc0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'adf7500b3b404802bc7f4ada42a72100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:34:02 compute-0 nova_compute[253538]: 2025-11-25 08:34:02.017 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:02 compute-0 nova_compute[253538]: 2025-11-25 08:34:02.086 253542 DEBUG nova.storage.rbd_utils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] resizing rbd image e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:34:02 compute-0 nova_compute[253538]: 2025-11-25 08:34:02.165 253542 DEBUG nova.objects.instance [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lazy-loading 'migration_context' on Instance uuid e3f4ee5b-6bb5-456f-b522-426ea1ebf32f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:02 compute-0 nova_compute[253538]: 2025-11-25 08:34:02.177 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:34:02 compute-0 nova_compute[253538]: 2025-11-25 08:34:02.178 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Ensure instance console log exists: /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:34:02 compute-0 nova_compute[253538]: 2025-11-25 08:34:02.178 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:02 compute-0 nova_compute[253538]: 2025-11-25 08:34:02.179 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:02 compute-0 nova_compute[253538]: 2025-11-25 08:34:02.179 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:02 compute-0 ceph-mon[75015]: pgmap v1525: 321 pgs: 321 active+clean; 190 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 2.2 MiB/s wr, 106 op/s
Nov 25 08:34:03 compute-0 nova_compute[253538]: 2025-11-25 08:34:03.079 253542 DEBUG nova.network.neutron [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Successfully created port: 40faec4b-dd3f-4659-972d-beeeb707761f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:34:03 compute-0 nova_compute[253538]: 2025-11-25 08:34:03.120 253542 DEBUG nova.compute.manager [req-67d6f468-f4dd-4493-8bfb-4e5a1792bc9b req-dfd3a586-5eaa-40f7-998b-ea3f6f0cd6f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:03 compute-0 nova_compute[253538]: 2025-11-25 08:34:03.121 253542 DEBUG oslo_concurrency.lockutils [req-67d6f468-f4dd-4493-8bfb-4e5a1792bc9b req-dfd3a586-5eaa-40f7-998b-ea3f6f0cd6f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:03 compute-0 nova_compute[253538]: 2025-11-25 08:34:03.122 253542 DEBUG oslo_concurrency.lockutils [req-67d6f468-f4dd-4493-8bfb-4e5a1792bc9b req-dfd3a586-5eaa-40f7-998b-ea3f6f0cd6f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:03 compute-0 nova_compute[253538]: 2025-11-25 08:34:03.122 253542 DEBUG oslo_concurrency.lockutils [req-67d6f468-f4dd-4493-8bfb-4e5a1792bc9b req-dfd3a586-5eaa-40f7-998b-ea3f6f0cd6f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:03 compute-0 nova_compute[253538]: 2025-11-25 08:34:03.122 253542 DEBUG nova.compute.manager [req-67d6f468-f4dd-4493-8bfb-4e5a1792bc9b req-dfd3a586-5eaa-40f7-998b-ea3f6f0cd6f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] No waiting events found dispatching network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:03 compute-0 nova_compute[253538]: 2025-11-25 08:34:03.123 253542 WARNING nova.compute.manager [req-67d6f468-f4dd-4493-8bfb-4e5a1792bc9b req-dfd3a586-5eaa-40f7-998b-ea3f6f0cd6f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received unexpected event network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f for instance with vm_state active and task_state None.
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 192 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 609 KiB/s rd, 2.7 MiB/s wr, 135 op/s
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013273924350459074 of space, bias 1.0, pg target 0.39821773051377224 quantized to 32 (current 32)
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:34:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:34:04 compute-0 nova_compute[253538]: 2025-11-25 08:34:04.490 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:04 compute-0 nova_compute[253538]: 2025-11-25 08:34:04.490 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:04 compute-0 nova_compute[253538]: 2025-11-25 08:34:04.523 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:34:04 compute-0 ceph-mon[75015]: pgmap v1526: 321 pgs: 321 active+clean; 192 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 609 KiB/s rd, 2.7 MiB/s wr, 135 op/s
Nov 25 08:34:04 compute-0 nova_compute[253538]: 2025-11-25 08:34:04.814 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:04 compute-0 nova_compute[253538]: 2025-11-25 08:34:04.815 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:04 compute-0 nova_compute[253538]: 2025-11-25 08:34:04.821 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:34:04 compute-0 nova_compute[253538]: 2025-11-25 08:34:04.821 253542 INFO nova.compute.claims [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.027 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 215 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 155 op/s
Nov 25 08:34:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:34:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/685134027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.462 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.467 253542 DEBUG nova.compute.provider_tree [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.484 253542 DEBUG nova.scheduler.client.report [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.505 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.505 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.534 253542 DEBUG nova.compute.manager [req-59cdd7fd-9e07-41b6-826d-42e952c14328 req-435e780f-6b9f-4aaa-b2d8-53f2c8abbb0c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.535 253542 DEBUG nova.compute.manager [req-59cdd7fd-9e07-41b6-826d-42e952c14328 req-435e780f-6b9f-4aaa-b2d8-53f2c8abbb0c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing instance network info cache due to event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.535 253542 DEBUG oslo_concurrency.lockutils [req-59cdd7fd-9e07-41b6-826d-42e952c14328 req-435e780f-6b9f-4aaa-b2d8-53f2c8abbb0c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.535 253542 DEBUG oslo_concurrency.lockutils [req-59cdd7fd-9e07-41b6-826d-42e952c14328 req-435e780f-6b9f-4aaa-b2d8-53f2c8abbb0c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.535 253542 DEBUG nova.network.neutron [req-59cdd7fd-9e07-41b6-826d-42e952c14328 req-435e780f-6b9f-4aaa-b2d8-53f2c8abbb0c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:05 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/685134027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.576 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.577 253542 DEBUG nova.network.neutron [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.595 253542 INFO nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.627 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.653 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.717 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.718 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.718 253542 INFO nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Creating image(s)
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.750 253542 DEBUG nova.storage.rbd_utils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image bf44124c-1a65-4bde-a777-043ae1a53557_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.777 253542 DEBUG nova.storage.rbd_utils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image bf44124c-1a65-4bde-a777-043ae1a53557_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.801 253542 DEBUG nova.storage.rbd_utils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image bf44124c-1a65-4bde-a777-043ae1a53557_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.805 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.908 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.910 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.911 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.912 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.938 253542 DEBUG nova.storage.rbd_utils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image bf44124c-1a65-4bde-a777-043ae1a53557_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:05 compute-0 nova_compute[253538]: 2025-11-25 08:34:05.942 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc bf44124c-1a65-4bde-a777-043ae1a53557_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:34:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Nov 25 08:34:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Nov 25 08:34:05 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Nov 25 08:34:06 compute-0 nova_compute[253538]: 2025-11-25 08:34:06.162 253542 DEBUG nova.policy [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a649c62aaacd4f01a93ea978066f5976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:34:06 compute-0 nova_compute[253538]: 2025-11-25 08:34:06.165 253542 DEBUG nova.network.neutron [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Successfully updated port: 40faec4b-dd3f-4659-972d-beeeb707761f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:34:06 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 25 08:34:06 compute-0 nova_compute[253538]: 2025-11-25 08:34:06.182 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "refresh_cache-e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:06 compute-0 nova_compute[253538]: 2025-11-25 08:34:06.183 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquired lock "refresh_cache-e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:06 compute-0 nova_compute[253538]: 2025-11-25 08:34:06.183 253542 DEBUG nova.network.neutron [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:34:06 compute-0 nova_compute[253538]: 2025-11-25 08:34:06.268 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc bf44124c-1a65-4bde-a777-043ae1a53557_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:06 compute-0 nova_compute[253538]: 2025-11-25 08:34:06.329 253542 DEBUG nova.storage.rbd_utils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] resizing rbd image bf44124c-1a65-4bde-a777-043ae1a53557_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:34:06 compute-0 nova_compute[253538]: 2025-11-25 08:34:06.359 253542 DEBUG nova.network.neutron [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:34:06 compute-0 nova_compute[253538]: 2025-11-25 08:34:06.425 253542 DEBUG nova.objects.instance [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'migration_context' on Instance uuid bf44124c-1a65-4bde-a777-043ae1a53557 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:06 compute-0 nova_compute[253538]: 2025-11-25 08:34:06.443 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:34:06 compute-0 nova_compute[253538]: 2025-11-25 08:34:06.444 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Ensure instance console log exists: /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:34:06 compute-0 nova_compute[253538]: 2025-11-25 08:34:06.445 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:06 compute-0 nova_compute[253538]: 2025-11-25 08:34:06.445 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:06 compute-0 nova_compute[253538]: 2025-11-25 08:34:06.445 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:06 compute-0 ceph-mon[75015]: pgmap v1527: 321 pgs: 321 active+clean; 215 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 155 op/s
Nov 25 08:34:06 compute-0 ceph-mon[75015]: osdmap e182: 3 total, 3 up, 3 in
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.147 253542 DEBUG nova.network.neutron [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Updating instance_info_cache with network_info: [{"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.170 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Releasing lock "refresh_cache-e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.171 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Instance network_info: |[{"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.175 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Start _get_guest_xml network_info=[{"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.180 253542 WARNING nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.185 253542 DEBUG nova.virt.libvirt.host [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.186 253542 DEBUG nova.virt.libvirt.host [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.190 253542 DEBUG nova.virt.libvirt.host [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.191 253542 DEBUG nova.virt.libvirt.host [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.191 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.192 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.192 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.193 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.193 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.193 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.194 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.194 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.194 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.195 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.195 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.196 253542 DEBUG nova.virt.hardware [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.199 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 242 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 159 op/s
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.243 253542 DEBUG nova.network.neutron [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Successfully created port: 269f9bd3-f267-459c-8e24-4b1f6c943345 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.517 253542 DEBUG nova.network.neutron [req-59cdd7fd-9e07-41b6-826d-42e952c14328 req-435e780f-6b9f-4aaa-b2d8-53f2c8abbb0c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updated VIF entry in instance network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.518 253542 DEBUG nova.network.neutron [req-59cdd7fd-9e07-41b6-826d-42e952c14328 req-435e780f-6b9f-4aaa-b2d8-53f2c8abbb0c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.537 253542 DEBUG oslo_concurrency.lockutils [req-59cdd7fd-9e07-41b6-826d-42e952c14328 req-435e780f-6b9f-4aaa-b2d8-53f2c8abbb0c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1429719939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.689 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.714 253542 DEBUG nova.storage.rbd_utils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] rbd image e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:07 compute-0 nova_compute[253538]: 2025-11-25 08:34:07.718 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:07 compute-0 podman[315937]: 2025-11-25 08:34:07.792019866 +0000 UTC m=+0.049386429 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.077 253542 DEBUG nova.compute.manager [req-ac005210-2cba-4875-8efe-5bf1ff7d88f3 req-5b1ce70b-61d5-48c5-b763-cffab2c1330b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received event network-changed-40faec4b-dd3f-4659-972d-beeeb707761f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.078 253542 DEBUG nova.compute.manager [req-ac005210-2cba-4875-8efe-5bf1ff7d88f3 req-5b1ce70b-61d5-48c5-b763-cffab2c1330b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Refreshing instance network info cache due to event network-changed-40faec4b-dd3f-4659-972d-beeeb707761f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.079 253542 DEBUG oslo_concurrency.lockutils [req-ac005210-2cba-4875-8efe-5bf1ff7d88f3 req-5b1ce70b-61d5-48c5-b763-cffab2c1330b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.079 253542 DEBUG oslo_concurrency.lockutils [req-ac005210-2cba-4875-8efe-5bf1ff7d88f3 req-5b1ce70b-61d5-48c5-b763-cffab2c1330b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.080 253542 DEBUG nova.network.neutron [req-ac005210-2cba-4875-8efe-5bf1ff7d88f3 req-5b1ce70b-61d5-48c5-b763-cffab2c1330b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Refreshing network info cache for port 40faec4b-dd3f-4659-972d-beeeb707761f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1796066015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.150 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.151 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.167 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.169 253542 DEBUG nova.virt.libvirt.vif [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:33:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1084102530',display_name='tempest-InstanceActionsNegativeTestJSON-server-1084102530',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1084102530',id=59,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='adf7500b3b404802bc7f4ada42a72100',ramdisk_id='',reservation_id='r-gymefxtd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-800535511',owner_user_name='tempest-InstanceActionsNegativeTestJSON-800535511-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:01Z,user_data=None,user_id='d4a674c4114a4e4fb5e446089be3ffc0',uuid=e3f4ee5b-6bb5-456f-b522-426ea1ebf32f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.170 253542 DEBUG nova.network.os_vif_util [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Converting VIF {"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.171 253542 DEBUG nova.network.os_vif_util [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:c8:d4,bridge_name='br-int',has_traffic_filtering=True,id=40faec4b-dd3f-4659-972d-beeeb707761f,network=Network(610430d6-5ea7-4c04-9b64-2dc2d0a55169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40faec4b-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.173 253542 DEBUG nova.objects.instance [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lazy-loading 'pci_devices' on Instance uuid e3f4ee5b-6bb5-456f-b522-426ea1ebf32f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.176 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.193 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:34:08 compute-0 nova_compute[253538]:   <uuid>e3f4ee5b-6bb5-456f-b522-426ea1ebf32f</uuid>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   <name>instance-0000003b</name>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1084102530</nova:name>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:34:07</nova:creationTime>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:34:08 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:34:08 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:34:08 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:34:08 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:08 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:34:08 compute-0 nova_compute[253538]:         <nova:user uuid="d4a674c4114a4e4fb5e446089be3ffc0">tempest-InstanceActionsNegativeTestJSON-800535511-project-member</nova:user>
Nov 25 08:34:08 compute-0 nova_compute[253538]:         <nova:project uuid="adf7500b3b404802bc7f4ada42a72100">tempest-InstanceActionsNegativeTestJSON-800535511</nova:project>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:34:08 compute-0 nova_compute[253538]:         <nova:port uuid="40faec4b-dd3f-4659-972d-beeeb707761f">
Nov 25 08:34:08 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <system>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <entry name="serial">e3f4ee5b-6bb5-456f-b522-426ea1ebf32f</entry>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <entry name="uuid">e3f4ee5b-6bb5-456f-b522-426ea1ebf32f</entry>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     </system>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   <os>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   </os>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   <features>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   </features>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk">
Nov 25 08:34:08 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:08 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk.config">
Nov 25 08:34:08 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:08 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:37:c8:d4"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <target dev="tap40faec4b-dd"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f/console.log" append="off"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <video>
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     </video>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:34:08 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:34:08 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:34:08 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:34:08 compute-0 nova_compute[253538]: </domain>
Nov 25 08:34:08 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.203 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Preparing to wait for external event network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.204 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.204 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.205 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.206 253542 DEBUG nova.virt.libvirt.vif [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:33:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1084102530',display_name='tempest-InstanceActionsNegativeTestJSON-server-1084102530',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1084102530',id=59,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='adf7500b3b404802bc7f4ada42a72100',ramdisk_id='',reservation_id='r-gymefxtd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-800535511',owner_user_name='tempest-InstanceActionsNegativeTestJSON-800535511-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:01Z,user_data=None,user_id='d4a674c4114a4e4fb5e446089be3ffc0',uuid=e3f4ee5b-6bb5-456f-b522-426ea1ebf32f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.206 253542 DEBUG nova.network.os_vif_util [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Converting VIF {"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.207 253542 DEBUG nova.network.os_vif_util [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:c8:d4,bridge_name='br-int',has_traffic_filtering=True,id=40faec4b-dd3f-4659-972d-beeeb707761f,network=Network(610430d6-5ea7-4c04-9b64-2dc2d0a55169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40faec4b-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.208 253542 DEBUG os_vif [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:c8:d4,bridge_name='br-int',has_traffic_filtering=True,id=40faec4b-dd3f-4659-972d-beeeb707761f,network=Network(610430d6-5ea7-4c04-9b64-2dc2d0a55169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40faec4b-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.209 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.209 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.210 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.214 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.214 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40faec4b-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.217 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap40faec4b-dd, col_values=(('external_ids', {'iface-id': '40faec4b-dd3f-4659-972d-beeeb707761f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:c8:d4', 'vm-uuid': 'e3f4ee5b-6bb5-456f-b522-426ea1ebf32f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.219 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:08 compute-0 NetworkManager[48915]: <info>  [1764059648.2212] manager: (tap40faec4b-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.225 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.229 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.230 253542 INFO os_vif [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:c8:d4,bridge_name='br-int',has_traffic_filtering=True,id=40faec4b-dd3f-4659-972d-beeeb707761f,network=Network(610430d6-5ea7-4c04-9b64-2dc2d0a55169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40faec4b-dd')
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.270 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.271 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.278 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.279 253542 INFO nova.compute.claims [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.307 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.308 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.308 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] No VIF found with MAC fa:16:3e:37:c8:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.308 253542 INFO nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Using config drive
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.329 253542 DEBUG nova.storage.rbd_utils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] rbd image e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.398 253542 DEBUG nova.network.neutron [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Successfully updated port: 269f9bd3-f267-459c-8e24-4b1f6c943345 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.414 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "refresh_cache-bf44124c-1a65-4bde-a777-043ae1a53557" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.415 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquired lock "refresh_cache-bf44124c-1a65-4bde-a777-043ae1a53557" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.415 253542 DEBUG nova.network.neutron [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.485 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:08 compute-0 ceph-mon[75015]: pgmap v1529: 321 pgs: 321 active+clean; 242 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 159 op/s
Nov 25 08:34:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1429719939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1796066015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:34:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2779625697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.924 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.931 253542 DEBUG nova.compute.provider_tree [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.944 253542 DEBUG nova.scheduler.client.report [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.967 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:08 compute-0 nova_compute[253538]: 2025-11-25 08:34:08.970 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.040 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.042 253542 DEBUG nova.network.neutron [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.069 253542 INFO nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.088 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:34:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 258 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 136 op/s
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.390 253542 DEBUG nova.network.neutron [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.394 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.395 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.396 253542 INFO nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Creating image(s)
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.428 253542 DEBUG nova.storage.rbd_utils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 52d39d67-b456-44e4-8804-2de0c941edae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.455 253542 DEBUG nova.storage.rbd_utils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 52d39d67-b456-44e4-8804-2de0c941edae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.489 253542 DEBUG nova.storage.rbd_utils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 52d39d67-b456-44e4-8804-2de0c941edae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.493 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.581 253542 DEBUG nova.policy [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.583 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.584 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.585 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.587 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2779625697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:09 compute-0 ceph-mon[75015]: pgmap v1530: 321 pgs: 321 active+clean; 258 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 136 op/s
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.621 253542 DEBUG nova.storage.rbd_utils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 52d39d67-b456-44e4-8804-2de0c941edae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:09 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.627 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 52d39d67-b456-44e4-8804-2de0c941edae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:09.999 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 52d39d67-b456-44e4-8804-2de0c941edae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.060 253542 DEBUG nova.storage.rbd_utils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] resizing rbd image 52d39d67-b456-44e4-8804-2de0c941edae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.154 253542 INFO nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Creating config drive at /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f/disk.config
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.161 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe23862qi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.200 253542 DEBUG nova.objects.instance [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'migration_context' on Instance uuid 52d39d67-b456-44e4-8804-2de0c941edae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.213 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.214 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Ensure instance console log exists: /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.215 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.216 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.216 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.300 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe23862qi" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.331 253542 DEBUG nova.storage.rbd_utils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] rbd image e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.336 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f/disk.config e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.486 253542 DEBUG oslo_concurrency.processutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f/disk.config e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.487 253542 INFO nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Deleting local config drive /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f/disk.config because it was imported into RBD.
Nov 25 08:34:10 compute-0 virtqemud[253839]: End of file while reading data: Input/output error
Nov 25 08:34:10 compute-0 virtqemud[253839]: End of file while reading data: Input/output error
Nov 25 08:34:10 compute-0 NetworkManager[48915]: <info>  [1764059650.5421] manager: (tap40faec4b-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/248)
Nov 25 08:34:10 compute-0 kernel: tap40faec4b-dd: entered promiscuous mode
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.545 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:10 compute-0 ovn_controller[152859]: 2025-11-25T08:34:10Z|00536|binding|INFO|Claiming lport 40faec4b-dd3f-4659-972d-beeeb707761f for this chassis.
Nov 25 08:34:10 compute-0 ovn_controller[152859]: 2025-11-25T08:34:10Z|00537|binding|INFO|40faec4b-dd3f-4659-972d-beeeb707761f: Claiming fa:16:3e:37:c8:d4 10.100.0.4
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.552 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:c8:d4 10.100.0.4'], port_security=['fa:16:3e:37:c8:d4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e3f4ee5b-6bb5-456f-b522-426ea1ebf32f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-610430d6-5ea7-4c04-9b64-2dc2d0a55169', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'adf7500b3b404802bc7f4ada42a72100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '89f00338-7004-4e97-a33e-2330c787d850', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f67d87c1-e8c8-46a3-b6be-f7e585c56ed4, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=40faec4b-dd3f-4659-972d-beeeb707761f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.554 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 40faec4b-dd3f-4659-972d-beeeb707761f in datapath 610430d6-5ea7-4c04-9b64-2dc2d0a55169 bound to our chassis
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.555 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 610430d6-5ea7-4c04-9b64-2dc2d0a55169
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.570 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[32228a32-8d82-40ce-993d-7c2e06fc23e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.571 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap610430d6-51 in ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.574 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap610430d6-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.574 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1004302b-5d26-41f8-832d-a664df834df1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.575 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59960b4b-575a-4aef-8cf8-50b4bd5aaa00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:10 compute-0 ovn_controller[152859]: 2025-11-25T08:34:10Z|00538|binding|INFO|Setting lport 40faec4b-dd3f-4659-972d-beeeb707761f ovn-installed in OVS
Nov 25 08:34:10 compute-0 ovn_controller[152859]: 2025-11-25T08:34:10Z|00539|binding|INFO|Setting lport 40faec4b-dd3f-4659-972d-beeeb707761f up in Southbound
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.577 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:10 compute-0 systemd-udevd[316239]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.589 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[dccc95bf-ff2b-4358-a0e3-0d55be3283ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:10 compute-0 systemd-machined[215790]: New machine qemu-68-instance-0000003b.
Nov 25 08:34:10 compute-0 NetworkManager[48915]: <info>  [1764059650.6023] device (tap40faec4b-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:34:10 compute-0 NetworkManager[48915]: <info>  [1764059650.6036] device (tap40faec4b-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.604 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0a210e8b-587f-4784-bc8a-4016ec0b6693]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:10 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-0000003b.
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.633 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7b8fd0-488f-418a-befc-eea3cdc2c0f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.640 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[03813421-5fa8-4ab6-940a-32977ede8628]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:10 compute-0 NetworkManager[48915]: <info>  [1764059650.6440] manager: (tap610430d6-50): new Veth device (/org/freedesktop/NetworkManager/Devices/249)
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.654 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.673 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6449b333-2ce0-4e55-a996-785a324229e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.676 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c883bf40-00ac-4ac0-b76d-34512bd672b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:10 compute-0 NetworkManager[48915]: <info>  [1764059650.6961] device (tap610430d6-50): carrier: link connected
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.701 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfd3718-b9fd-4177-9ede-543404dd212b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.721 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7b00ee-55b1-4c00-aa8d-d9ac75a4a7be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap610430d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:c7:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500529, 'reachable_time': 34207, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316271, 'error': None, 'target': 'ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.736 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f94fd54f-631b-4c13-9f65-29d61a921d8f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:c76b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500529, 'tstamp': 500529}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316272, 'error': None, 'target': 'ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.757 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[15339852-b9fe-4f1b-9406-e4e06fde75d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap610430d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:c7:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500529, 'reachable_time': 34207, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316273, 'error': None, 'target': 'ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.782 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6ae4ce48-0654-4d69-86f6-14059845e110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.848 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e164d01f-fa54-4644-a744-4de320ac025a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.849 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap610430d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.849 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.849 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap610430d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:10 compute-0 NetworkManager[48915]: <info>  [1764059650.8521] manager: (tap610430d6-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Nov 25 08:34:10 compute-0 kernel: tap610430d6-50: entered promiscuous mode
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.855 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap610430d6-50, col_values=(('external_ids', {'iface-id': 'cdf0d105-3698-42de-8860-e30ea5c5fbfa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:10 compute-0 ovn_controller[152859]: 2025-11-25T08:34:10Z|00540|binding|INFO|Releasing lport cdf0d105-3698-42de-8860-e30ea5c5fbfa from this chassis (sb_readonly=0)
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.857 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:10 compute-0 nova_compute[253538]: 2025-11-25 08:34:10.875 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.878 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/610430d6-5ea7-4c04-9b64-2dc2d0a55169.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/610430d6-5ea7-4c04-9b64-2dc2d0a55169.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.878 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7943f8d5-e455-455e-a6ba-be80491eac73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.880 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-610430d6-5ea7-4c04-9b64-2dc2d0a55169
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/610430d6-5ea7-4c04-9b64-2dc2d0a55169.pid.haproxy
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 610430d6-5ea7-4c04-9b64-2dc2d0a55169
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:34:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:10.881 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169', 'env', 'PROCESS_TAG=haproxy-610430d6-5ea7-4c04-9b64-2dc2d0a55169', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/610430d6-5ea7-4c04-9b64-2dc2d0a55169.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:34:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.056 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059651.0560687, e3f4ee5b-6bb5-456f-b522-426ea1ebf32f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.056 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] VM Started (Lifecycle Event)
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.071 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.074 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059651.0583458, e3f4ee5b-6bb5-456f-b522-426ea1ebf32f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.074 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] VM Paused (Lifecycle Event)
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.088 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.091 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.103 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.188 253542 DEBUG nova.compute.manager [req-5451ff9f-1b7d-4ffd-972a-f224f1068dc7 req-fc75a8a2-3e12-4a7a-9630-991b029fdf7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received event network-changed-269f9bd3-f267-459c-8e24-4b1f6c943345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.189 253542 DEBUG nova.compute.manager [req-5451ff9f-1b7d-4ffd-972a-f224f1068dc7 req-fc75a8a2-3e12-4a7a-9630-991b029fdf7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Refreshing instance network info cache due to event network-changed-269f9bd3-f267-459c-8e24-4b1f6c943345. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.189 253542 DEBUG oslo_concurrency.lockutils [req-5451ff9f-1b7d-4ffd-972a-f224f1068dc7 req-fc75a8a2-3e12-4a7a-9630-991b029fdf7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-bf44124c-1a65-4bde-a777-043ae1a53557" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 262 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 143 op/s
Nov 25 08:34:11 compute-0 podman[316346]: 2025-11-25 08:34:11.277754098 +0000 UTC m=+0.052924523 container create a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:34:11 compute-0 systemd[1]: Started libpod-conmon-a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f.scope.
Nov 25 08:34:11 compute-0 podman[316346]: 2025-11-25 08:34:11.251292557 +0000 UTC m=+0.026463002 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:34:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e19f3f33dd4cf5e2ac16478c7cf2cb2437ef71f6951caaa3e6af27e077a7e4a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:34:11 compute-0 podman[316346]: 2025-11-25 08:34:11.370738607 +0000 UTC m=+0.145909082 container init a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 08:34:11 compute-0 podman[316346]: 2025-11-25 08:34:11.375843665 +0000 UTC m=+0.151014110 container start a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:34:11 compute-0 neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169[316361]: [NOTICE]   (316365) : New worker (316367) forked
Nov 25 08:34:11 compute-0 neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169[316361]: [NOTICE]   (316365) : Loading success.
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.824 253542 DEBUG nova.compute.manager [req-82648ef4-5e8d-47bf-a2f9-9f14ca568136 req-7f9f86e8-ebb1-4acb-9f25-aa785614bdba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received event network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.825 253542 DEBUG oslo_concurrency.lockutils [req-82648ef4-5e8d-47bf-a2f9-9f14ca568136 req-7f9f86e8-ebb1-4acb-9f25-aa785614bdba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.825 253542 DEBUG oslo_concurrency.lockutils [req-82648ef4-5e8d-47bf-a2f9-9f14ca568136 req-7f9f86e8-ebb1-4acb-9f25-aa785614bdba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.826 253542 DEBUG oslo_concurrency.lockutils [req-82648ef4-5e8d-47bf-a2f9-9f14ca568136 req-7f9f86e8-ebb1-4acb-9f25-aa785614bdba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.826 253542 DEBUG nova.compute.manager [req-82648ef4-5e8d-47bf-a2f9-9f14ca568136 req-7f9f86e8-ebb1-4acb-9f25-aa785614bdba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Processing event network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.827 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.832 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.833 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059651.8322413, e3f4ee5b-6bb5-456f-b522-426ea1ebf32f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.833 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] VM Resumed (Lifecycle Event)
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.838 253542 INFO nova.virt.libvirt.driver [-] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Instance spawned successfully.
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.838 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.856 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.862 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.864 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.864 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.865 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.865 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.866 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.866 253542 DEBUG nova.virt.libvirt.driver [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.887 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.924 253542 INFO nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Took 10.48 seconds to spawn the instance on the hypervisor.
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.924 253542 DEBUG nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.935 253542 DEBUG nova.network.neutron [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Updating instance_info_cache with network_info: [{"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.964 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Releasing lock "refresh_cache-bf44124c-1a65-4bde-a777-043ae1a53557" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.964 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance network_info: |[{"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.965 253542 DEBUG oslo_concurrency.lockutils [req-5451ff9f-1b7d-4ffd-972a-f224f1068dc7 req-fc75a8a2-3e12-4a7a-9630-991b029fdf7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-bf44124c-1a65-4bde-a777-043ae1a53557" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.965 253542 DEBUG nova.network.neutron [req-5451ff9f-1b7d-4ffd-972a-f224f1068dc7 req-fc75a8a2-3e12-4a7a-9630-991b029fdf7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Refreshing network info cache for port 269f9bd3-f267-459c-8e24-4b1f6c943345 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.968 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Start _get_guest_xml network_info=[{"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.972 253542 WARNING nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.978 253542 DEBUG nova.virt.libvirt.host [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.979 253542 DEBUG nova.virt.libvirt.host [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.988 253542 DEBUG nova.virt.libvirt.host [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.988 253542 DEBUG nova.virt.libvirt.host [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.989 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.989 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.990 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.990 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.990 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.991 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.991 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.991 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.992 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.992 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.992 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.993 253542 DEBUG nova.virt.hardware [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:34:11 compute-0 nova_compute[253538]: 2025-11-25 08:34:11.997 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:12 compute-0 nova_compute[253538]: 2025-11-25 08:34:12.035 253542 INFO nova.compute.manager [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Took 11.48 seconds to build instance.
Nov 25 08:34:12 compute-0 nova_compute[253538]: 2025-11-25 08:34:12.056 253542 DEBUG oslo_concurrency.lockutils [None req-de0428eb-b428-4465-9e27-e82bef732ea3 d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:12 compute-0 nova_compute[253538]: 2025-11-25 08:34:12.106 253542 DEBUG nova.network.neutron [req-ac005210-2cba-4875-8efe-5bf1ff7d88f3 req-5b1ce70b-61d5-48c5-b763-cffab2c1330b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Updated VIF entry in instance network info cache for port 40faec4b-dd3f-4659-972d-beeeb707761f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:34:12 compute-0 nova_compute[253538]: 2025-11-25 08:34:12.107 253542 DEBUG nova.network.neutron [req-ac005210-2cba-4875-8efe-5bf1ff7d88f3 req-5b1ce70b-61d5-48c5-b763-cffab2c1330b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Updating instance_info_cache with network_info: [{"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:12 compute-0 nova_compute[253538]: 2025-11-25 08:34:12.109 253542 DEBUG nova.network.neutron [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Successfully created port: 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:34:12 compute-0 nova_compute[253538]: 2025-11-25 08:34:12.119 253542 DEBUG oslo_concurrency.lockutils [req-ac005210-2cba-4875-8efe-5bf1ff7d88f3 req-5b1ce70b-61d5-48c5-b763-cffab2c1330b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:12 compute-0 ceph-mon[75015]: pgmap v1531: 321 pgs: 321 active+clean; 262 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 143 op/s
Nov 25 08:34:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1782277273' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:12 compute-0 nova_compute[253538]: 2025-11-25 08:34:12.459 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:12 compute-0 nova_compute[253538]: 2025-11-25 08:34:12.491 253542 DEBUG nova.storage.rbd_utils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image bf44124c-1a65-4bde-a777-043ae1a53557_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:12 compute-0 nova_compute[253538]: 2025-11-25 08:34:12.496 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:12 compute-0 nova_compute[253538]: 2025-11-25 08:34:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:34:12 compute-0 nova_compute[253538]: 2025-11-25 08:34:12.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:34:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/467150374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.032 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.034 253542 DEBUG nova.virt.libvirt.vif [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-960435538',display_name='tempest-DeleteServersTestJSON-server-960435538',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-960435538',id=60,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-31sgadur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:05Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=bf44124c-1a65-4bde-a777-043ae1a53557,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.035 253542 DEBUG nova.network.os_vif_util [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.036 253542 DEBUG nova.network.os_vif_util [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ae:8e,bridge_name='br-int',has_traffic_filtering=True,id=269f9bd3-f267-459c-8e24-4b1f6c943345,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap269f9bd3-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.038 253542 DEBUG nova.objects.instance [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'pci_devices' on Instance uuid bf44124c-1a65-4bde-a777-043ae1a53557 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.053 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:34:13 compute-0 nova_compute[253538]:   <uuid>bf44124c-1a65-4bde-a777-043ae1a53557</uuid>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   <name>instance-0000003c</name>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <nova:name>tempest-DeleteServersTestJSON-server-960435538</nova:name>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:34:11</nova:creationTime>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:34:13 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:34:13 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:34:13 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:34:13 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:13 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:34:13 compute-0 nova_compute[253538]:         <nova:user uuid="a649c62aaacd4f01a93ea978066f5976">tempest-DeleteServersTestJSON-2095694504-project-member</nova:user>
Nov 25 08:34:13 compute-0 nova_compute[253538]:         <nova:project uuid="a9c243220ecd4ba3af10cdbc0ea76bd6">tempest-DeleteServersTestJSON-2095694504</nova:project>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:34:13 compute-0 nova_compute[253538]:         <nova:port uuid="269f9bd3-f267-459c-8e24-4b1f6c943345">
Nov 25 08:34:13 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <system>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <entry name="serial">bf44124c-1a65-4bde-a777-043ae1a53557</entry>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <entry name="uuid">bf44124c-1a65-4bde-a777-043ae1a53557</entry>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     </system>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   <os>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   </os>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   <features>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   </features>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/bf44124c-1a65-4bde-a777-043ae1a53557_disk">
Nov 25 08:34:13 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:13 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/bf44124c-1a65-4bde-a777-043ae1a53557_disk.config">
Nov 25 08:34:13 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:13 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:13:ae:8e"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <target dev="tap269f9bd3-f2"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557/console.log" append="off"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <video>
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     </video>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:34:13 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:34:13 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:34:13 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:34:13 compute-0 nova_compute[253538]: </domain>
Nov 25 08:34:13 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.055 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Preparing to wait for external event network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.055 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.055 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.056 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.057 253542 DEBUG nova.virt.libvirt.vif [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-960435538',display_name='tempest-DeleteServersTestJSON-server-960435538',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-960435538',id=60,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-31sgadur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:05Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=bf44124c-1a65-4bde-a777-043ae1a53557,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.057 253542 DEBUG nova.network.os_vif_util [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.058 253542 DEBUG nova.network.os_vif_util [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ae:8e,bridge_name='br-int',has_traffic_filtering=True,id=269f9bd3-f267-459c-8e24-4b1f6c943345,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap269f9bd3-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.058 253542 DEBUG os_vif [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ae:8e,bridge_name='br-int',has_traffic_filtering=True,id=269f9bd3-f267-459c-8e24-4b1f6c943345,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap269f9bd3-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.059 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.060 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.060 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.065 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.066 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap269f9bd3-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.066 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap269f9bd3-f2, col_values=(('external_ids', {'iface-id': '269f9bd3-f267-459c-8e24-4b1f6c943345', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:ae:8e', 'vm-uuid': 'bf44124c-1a65-4bde-a777-043ae1a53557'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.068 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:13 compute-0 NetworkManager[48915]: <info>  [1764059653.0693] manager: (tap269f9bd3-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.070 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.075 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.076 253542 INFO os_vif [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ae:8e,bridge_name='br-int',has_traffic_filtering=True,id=269f9bd3-f267-459c-8e24-4b1f6c943345,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap269f9bd3-f2')
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.126 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.126 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.127 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No VIF found with MAC fa:16:3e:13:ae:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.129 253542 INFO nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Using config drive
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.167 253542 DEBUG nova.storage.rbd_utils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image bf44124c-1a65-4bde-a777-043ae1a53557_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 278 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 116 op/s
Nov 25 08:34:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1782277273' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/467150374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.545 253542 DEBUG nova.network.neutron [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Successfully updated port: 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.556 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.556 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.557 253542 DEBUG nova.network.neutron [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.575 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.575 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.612 253542 INFO nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Creating config drive at /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557/disk.config
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.618 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp0ykwq6n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.756 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp0ykwq6n" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.783 253542 DEBUG nova.storage.rbd_utils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image bf44124c-1a65-4bde-a777-043ae1a53557_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.787 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557/disk.config bf44124c-1a65-4bde-a777-043ae1a53557_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.825 253542 DEBUG nova.network.neutron [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.850 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.850 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.850 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.851 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.887 253542 DEBUG nova.compute.manager [req-3d4a59f7-5e91-44df-ab9f-983d26bcf780 req-55f998e1-75ae-41d1-af01-ce2112fde043 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-changed-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.888 253542 DEBUG nova.compute.manager [req-3d4a59f7-5e91-44df-ab9f-983d26bcf780 req-55f998e1-75ae-41d1-af01-ce2112fde043 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing instance network info cache due to event network-changed-9fa407fa-661b-4b02-b4f4-656f6ae34cd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.889 253542 DEBUG oslo_concurrency.lockutils [req-3d4a59f7-5e91-44df-ab9f-983d26bcf780 req-55f998e1-75ae-41d1-af01-ce2112fde043 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.896 253542 DEBUG nova.network.neutron [req-5451ff9f-1b7d-4ffd-972a-f224f1068dc7 req-fc75a8a2-3e12-4a7a-9630-991b029fdf7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Updated VIF entry in instance network info cache for port 269f9bd3-f267-459c-8e24-4b1f6c943345. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.897 253542 DEBUG nova.network.neutron [req-5451ff9f-1b7d-4ffd-972a-f224f1068dc7 req-fc75a8a2-3e12-4a7a-9630-991b029fdf7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Updating instance_info_cache with network_info: [{"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.924 253542 DEBUG oslo_concurrency.lockutils [req-5451ff9f-1b7d-4ffd-972a-f224f1068dc7 req-fc75a8a2-3e12-4a7a-9630-991b029fdf7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-bf44124c-1a65-4bde-a777-043ae1a53557" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.975 253542 DEBUG oslo_concurrency.processutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557/disk.config bf44124c-1a65-4bde-a777-043ae1a53557_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:13 compute-0 nova_compute[253538]: 2025-11-25 08:34:13.975 253542 INFO nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Deleting local config drive /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557/disk.config because it was imported into RBD.
Nov 25 08:34:14 compute-0 kernel: tap269f9bd3-f2: entered promiscuous mode
Nov 25 08:34:14 compute-0 NetworkManager[48915]: <info>  [1764059654.0341] manager: (tap269f9bd3-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/252)
Nov 25 08:34:14 compute-0 ovn_controller[152859]: 2025-11-25T08:34:14Z|00541|binding|INFO|Claiming lport 269f9bd3-f267-459c-8e24-4b1f6c943345 for this chassis.
Nov 25 08:34:14 compute-0 ovn_controller[152859]: 2025-11-25T08:34:14Z|00542|binding|INFO|269f9bd3-f267-459c-8e24-4b1f6c943345: Claiming fa:16:3e:13:ae:8e 10.100.0.12
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.042 253542 DEBUG nova.compute.manager [req-7be5a64c-2458-4de5-9fc2-a86c8e1a738c req-58a5e110-e63b-4b9e-a1bf-788df1ff37a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received event network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.043 253542 DEBUG oslo_concurrency.lockutils [req-7be5a64c-2458-4de5-9fc2-a86c8e1a738c req-58a5e110-e63b-4b9e-a1bf-788df1ff37a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.043 253542 DEBUG oslo_concurrency.lockutils [req-7be5a64c-2458-4de5-9fc2-a86c8e1a738c req-58a5e110-e63b-4b9e-a1bf-788df1ff37a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.044 253542 DEBUG oslo_concurrency.lockutils [req-7be5a64c-2458-4de5-9fc2-a86c8e1a738c req-58a5e110-e63b-4b9e-a1bf-788df1ff37a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.044 253542 DEBUG nova.compute.manager [req-7be5a64c-2458-4de5-9fc2-a86c8e1a738c req-58a5e110-e63b-4b9e-a1bf-788df1ff37a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] No waiting events found dispatching network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.044 253542 WARNING nova.compute.manager [req-7be5a64c-2458-4de5-9fc2-a86c8e1a738c req-58a5e110-e63b-4b9e-a1bf-788df1ff37a0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received unexpected event network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f for instance with vm_state active and task_state None.
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.045 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:14 compute-0 ovn_controller[152859]: 2025-11-25T08:34:14Z|00543|binding|INFO|Setting lport 269f9bd3-f267-459c-8e24-4b1f6c943345 ovn-installed in OVS
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.076 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.078 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:14 compute-0 systemd-udevd[316517]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:34:14 compute-0 ovn_controller[152859]: 2025-11-25T08:34:14Z|00544|binding|INFO|Setting lport 269f9bd3-f267-459c-8e24-4b1f6c943345 up in Southbound
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.086 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:ae:8e 10.100.0.12'], port_security=['fa:16:3e:13:ae:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bf44124c-1a65-4bde-a777-043ae1a53557', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=269f9bd3-f267-459c-8e24-4b1f6c943345) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.088 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 269f9bd3-f267-459c-8e24-4b1f6c943345 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 bound to our chassis
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.089 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.100 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[788132f1-f076-444d-8242-f25028f7289c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.101 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa66e51b8-e1 in ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.103 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa66e51b8-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.103 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d13b36b5-1c90-4d0a-9e7b-080f55d2747e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 NetworkManager[48915]: <info>  [1764059654.1077] device (tap269f9bd3-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.108 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[641b8c19-4bec-451f-8a94-48a8a1110ce4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 NetworkManager[48915]: <info>  [1764059654.1088] device (tap269f9bd3-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:34:14 compute-0 systemd-machined[215790]: New machine qemu-69-instance-0000003c.
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.117 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[04993176-0095-43ab-a472-1d447779b02d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-0000003c.
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.142 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9cdc8a31-0c3b-45d9-a44f-ce2ce103bba3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 podman[316508]: 2025-11-25 08:34:14.146722155 +0000 UTC m=+0.087881733 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.176 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9f392d85-3c3d-4a90-83f1-126d4db82971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 NetworkManager[48915]: <info>  [1764059654.1851] manager: (tapa66e51b8-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/253)
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.184 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e89011dd-8d5c-4458-ba66-0b8479da06cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.223 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4a14d6b4-9f79-4742-8f59-15b3550cac24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.226 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b7ac97-a041-44c7-a2f6-8d22232c9ee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 NetworkManager[48915]: <info>  [1764059654.2456] device (tapa66e51b8-e0): carrier: link connected
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.250 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7905d2cd-e11e-43b0-a934-b1c67eccd7fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.270 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ae58236a-7597-4860-a78b-e9706d37cf4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500884, 'reachable_time': 36770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316561, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.287 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[810984ab-f9bd-4da6-88f5-cebf7dfd011d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:2c20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500884, 'tstamp': 500884}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316562, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.302 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0de82015-6cae-460b-929d-af914e7ca3cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500884, 'reachable_time': 36770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316563, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 ceph-mon[75015]: pgmap v1532: 321 pgs: 321 active+clean; 278 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 116 op/s
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.335 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bd13f4cc-0e2c-4397-924c-4dd51f2c0834]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.390 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e545cd4e-9adc-4137-a3ad-8fe9374ec5c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.391 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.392 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.392 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa66e51b8-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:14 compute-0 NetworkManager[48915]: <info>  [1764059654.3944] manager: (tapa66e51b8-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.393 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:14 compute-0 kernel: tapa66e51b8-e0: entered promiscuous mode
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.395 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.396 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa66e51b8-e0, col_values=(('external_ids', {'iface-id': 'c0d74b17-7eba-4096-a861-b9247777e01c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.397 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:14 compute-0 ovn_controller[152859]: 2025-11-25T08:34:14Z|00545|binding|INFO|Releasing lport c0d74b17-7eba-4096-a861-b9247777e01c from this chassis (sb_readonly=0)
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.415 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.416 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2854f836-cfc9-4cdc-b7c0-03ab83e5018b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.417 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.418 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'env', 'PROCESS_TAG=haproxy-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a66e51b8-ecb0-4289-a1b5-d5e379727721.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:34:14 compute-0 ovn_controller[152859]: 2025-11-25T08:34:14Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:60:42:da 10.100.0.9
Nov 25 08:34:14 compute-0 ovn_controller[152859]: 2025-11-25T08:34:14Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:60:42:da 10.100.0.9
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.488 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059654.4873943, bf44124c-1a65-4bde-a777-043ae1a53557 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.488 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] VM Started (Lifecycle Event)
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.517 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.521 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059654.48775, bf44124c-1a65-4bde-a777-043ae1a53557 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.521 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] VM Paused (Lifecycle Event)
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.588 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.600 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.618 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.746 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.746 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.747 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.747 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.747 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.749 253542 INFO nova.compute.manager [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Terminating instance
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.750 253542 DEBUG nova.compute.manager [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:34:14 compute-0 podman[316637]: 2025-11-25 08:34:14.766736778 +0000 UTC m=+0.055366769 container create 6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 08:34:14 compute-0 kernel: tap40faec4b-dd (unregistering): left promiscuous mode
Nov 25 08:34:14 compute-0 NetworkManager[48915]: <info>  [1764059654.7962] device (tap40faec4b-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:34:14 compute-0 systemd[1]: Started libpod-conmon-6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624.scope.
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.805 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:14 compute-0 ovn_controller[152859]: 2025-11-25T08:34:14Z|00546|binding|INFO|Releasing lport 40faec4b-dd3f-4659-972d-beeeb707761f from this chassis (sb_readonly=0)
Nov 25 08:34:14 compute-0 ovn_controller[152859]: 2025-11-25T08:34:14Z|00547|binding|INFO|Setting lport 40faec4b-dd3f-4659-972d-beeeb707761f down in Southbound
Nov 25 08:34:14 compute-0 ovn_controller[152859]: 2025-11-25T08:34:14Z|00548|binding|INFO|Removing iface tap40faec4b-dd ovn-installed in OVS
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.809 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.820 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:c8:d4 10.100.0.4'], port_security=['fa:16:3e:37:c8:d4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e3f4ee5b-6bb5-456f-b522-426ea1ebf32f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-610430d6-5ea7-4c04-9b64-2dc2d0a55169', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'adf7500b3b404802bc7f4ada42a72100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89f00338-7004-4e97-a33e-2330c787d850', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f67d87c1-e8c8-46a3-b6be-f7e585c56ed4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=40faec4b-dd3f-4659-972d-beeeb707761f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.824 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:34:14 compute-0 podman[316637]: 2025-11-25 08:34:14.735466747 +0000 UTC m=+0.024096758 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8245a978a5022c98149f488e1744d8b7505a54bb18353d96e818cdced9046a6b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:34:14 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Nov 25 08:34:14 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003b.scope: Consumed 3.320s CPU time.
Nov 25 08:34:14 compute-0 systemd-machined[215790]: Machine qemu-68-instance-0000003b terminated.
Nov 25 08:34:14 compute-0 podman[316637]: 2025-11-25 08:34:14.845489234 +0000 UTC m=+0.134119265 container init 6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:34:14 compute-0 podman[316637]: 2025-11-25 08:34:14.851780634 +0000 UTC m=+0.140410645 container start 6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 08:34:14 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[316655]: [NOTICE]   (316659) : New worker (316661) forked
Nov 25 08:34:14 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[316655]: [NOTICE]   (316659) : Loading success.
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.911 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 40faec4b-dd3f-4659-972d-beeeb707761f in datapath 610430d6-5ea7-4c04-9b64-2dc2d0a55169 unbound from our chassis
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.913 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 610430d6-5ea7-4c04-9b64-2dc2d0a55169, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.914 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a25f166a-aaeb-41c7-a521-8b057df21e1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:14.914 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169 namespace which is not needed anymore
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.915 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.916 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.935 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.986 253542 INFO nova.virt.libvirt.driver [-] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Instance destroyed successfully.
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.986 253542 DEBUG nova.objects.instance [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lazy-loading 'resources' on Instance uuid e3f4ee5b-6bb5-456f-b522-426ea1ebf32f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.998 253542 DEBUG nova.virt.libvirt.vif [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:33:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1084102530',display_name='tempest-InstanceActionsNegativeTestJSON-server-1084102530',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1084102530',id=59,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='adf7500b3b404802bc7f4ada42a72100',ramdisk_id='',reservation_id='r-gymefxtd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-800535511',owner_user_name='tempest-InstanceActionsNegativeTestJSON-800535511-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:11Z,user_data=None,user_id='d4a674c4114a4e4fb5e446089be3ffc0',uuid=e3f4ee5b-6bb5-456f-b522-426ea1ebf32f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.998 253542 DEBUG nova.network.os_vif_util [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Converting VIF {"id": "40faec4b-dd3f-4659-972d-beeeb707761f", "address": "fa:16:3e:37:c8:d4", "network": {"id": "610430d6-5ea7-4c04-9b64-2dc2d0a55169", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-598339358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adf7500b3b404802bc7f4ada42a72100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap40faec4b-dd", "ovs_interfaceid": "40faec4b-dd3f-4659-972d-beeeb707761f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:14 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.999 253542 DEBUG nova.network.os_vif_util [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:c8:d4,bridge_name='br-int',has_traffic_filtering=True,id=40faec4b-dd3f-4659-972d-beeeb707761f,network=Network(610430d6-5ea7-4c04-9b64-2dc2d0a55169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40faec4b-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:14.999 253542 DEBUG os_vif [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:c8:d4,bridge_name='br-int',has_traffic_filtering=True,id=40faec4b-dd3f-4659-972d-beeeb707761f,network=Network(610430d6-5ea7-4c04-9b64-2dc2d0a55169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40faec4b-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.003 253542 DEBUG nova.network.neutron [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.004 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.004 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40faec4b-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.008 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.008 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.008 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.011 253542 INFO os_vif [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:c8:d4,bridge_name='br-int',has_traffic_filtering=True,id=40faec4b-dd3f-4659-972d-beeeb707761f,network=Network(610430d6-5ea7-4c04-9b64-2dc2d0a55169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap40faec4b-dd')
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.031 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.031 253542 INFO nova.compute.claims [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.034 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.034 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Instance network_info: |[{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.035 253542 DEBUG oslo_concurrency.lockutils [req-3d4a59f7-5e91-44df-ab9f-983d26bcf780 req-55f998e1-75ae-41d1-af01-ce2112fde043 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.035 253542 DEBUG nova.network.neutron [req-3d4a59f7-5e91-44df-ab9f-983d26bcf780 req-55f998e1-75ae-41d1-af01-ce2112fde043 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing network info cache for port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.038 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Start _get_guest_xml network_info=[{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.045 253542 WARNING nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.054 253542 DEBUG nova.virt.libvirt.host [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.055 253542 DEBUG nova.virt.libvirt.host [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:34:15 compute-0 neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169[316361]: [NOTICE]   (316365) : haproxy version is 2.8.14-c23fe91
Nov 25 08:34:15 compute-0 neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169[316361]: [NOTICE]   (316365) : path to executable is /usr/sbin/haproxy
Nov 25 08:34:15 compute-0 neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169[316361]: [WARNING]  (316365) : Exiting Master process...
Nov 25 08:34:15 compute-0 neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169[316361]: [ALERT]    (316365) : Current worker (316367) exited with code 143 (Terminated)
Nov 25 08:34:15 compute-0 neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169[316361]: [WARNING]  (316365) : All workers exited. Exiting... (0)
Nov 25 08:34:15 compute-0 systemd[1]: libpod-a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f.scope: Deactivated successfully.
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.064 253542 DEBUG nova.virt.libvirt.host [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.066 253542 DEBUG nova.virt.libvirt.host [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.066 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.066 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.067 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.067 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.067 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.068 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.068 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.068 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.068 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.069 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.069 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:34:15 compute-0 podman[316697]: 2025-11-25 08:34:15.069291399 +0000 UTC m=+0.054338731 container died a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.069 253542 DEBUG nova.virt.hardware [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.073 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f-userdata-shm.mount: Deactivated successfully.
Nov 25 08:34:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e19f3f33dd4cf5e2ac16478c7cf2cb2437ef71f6951caaa3e6af27e077a7e4a-merged.mount: Deactivated successfully.
Nov 25 08:34:15 compute-0 podman[316697]: 2025-11-25 08:34:15.125952363 +0000 UTC m=+0.110999695 container cleanup a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 08:34:15 compute-0 systemd[1]: libpod-conmon-a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f.scope: Deactivated successfully.
Nov 25 08:34:15 compute-0 podman[316742]: 2025-11-25 08:34:15.203408584 +0000 UTC m=+0.056854489 container remove a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 08:34:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.209 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0838eac5-43e9-4312-9c33-d1c4e2e2b491]: (4, ('Tue Nov 25 08:34:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169 (a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f)\na9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f\nTue Nov 25 08:34:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169 (a9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f)\na9aa281d687f553f489151220e0518c490d12d8d9ea2df973ae70a1b883b253f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.211 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79e36448-8cdc-45b6-96f4-6684f479b756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.213 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap610430d6-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:15 compute-0 kernel: tap610430d6-50: left promiscuous mode
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.216 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.234 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.238 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d6fed4-77df-4138-ae20-2706d67953ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 328 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 6.5 MiB/s wr, 170 op/s
Nov 25 08:34:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.254 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[25f1c2bd-4990-4244-8281-a3ab1aea1018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.255 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[acb004c7-67df-4299-ac5d-7ec3ceaab88f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.269 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.271 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b6960f-8823-4988-ae76-50a7dbf9fe04]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500522, 'reachable_time': 42502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316777, 'error': None, 'target': 'ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d610430d6\x2d5ea7\x2d4c04\x2d9b64\x2d2dc2d0a55169.mount: Deactivated successfully.
Nov 25 08:34:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.276 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-610430d6-5ea7-4c04-9b64-2dc2d0a55169 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:34:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:15.277 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[73520c1e-80f4-4574-9649-4d5df306534d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.305 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.322 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.322 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.323 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.367 253542 INFO nova.virt.libvirt.driver [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Deleting instance files /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_del
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.369 253542 INFO nova.virt.libvirt.driver [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Deletion of /var/lib/nova/instances/e3f4ee5b-6bb5-456f-b522-426ea1ebf32f_del complete
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.417 253542 INFO nova.compute.manager [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Took 0.67 seconds to destroy the instance on the hypervisor.
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.419 253542 DEBUG oslo.service.loopingcall [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.419 253542 DEBUG nova.compute.manager [-] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.420 253542 DEBUG nova.network.neutron [-] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:34:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4191464744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.555 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.586 253542 DEBUG nova.storage.rbd_utils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 52d39d67-b456-44e4-8804-2de0c941edae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.591 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.719 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:34:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1369036725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.767 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.772 253542 DEBUG nova.compute.provider_tree [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.784 253542 DEBUG nova.scheduler.client.report [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.802 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.803 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.850 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.850 253542 DEBUG nova.network.neutron [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.866 253542 INFO nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.879 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:34:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.964 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.965 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.965 253542 INFO nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Creating image(s)
Nov 25 08:34:15 compute-0 nova_compute[253538]: 2025-11-25 08:34:15.984 253542 DEBUG nova.storage.rbd_utils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.006 253542 DEBUG nova.storage.rbd_utils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/348268589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.028 253542 DEBUG nova.storage.rbd_utils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.031 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.064 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.065 253542 DEBUG nova.virt.libvirt.vif [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1833307559',display_name='tempest-tempest.common.compute-instance-1833307559',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1833307559',id=61,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-14irc3k3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=52d39d67-b456-44e4-8804-2de0c941edae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.066 253542 DEBUG nova.network.os_vif_util [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.066 253542 DEBUG nova.network.os_vif_util [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ce:d4,bridge_name='br-int',has_traffic_filtering=True,id=9fa407fa-661b-4b02-b4f4-656f6ae34cd8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fa407fa-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.067 253542 DEBUG nova.objects.instance [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_devices' on Instance uuid 52d39d67-b456-44e4-8804-2de0c941edae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.084 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:34:16 compute-0 nova_compute[253538]:   <uuid>52d39d67-b456-44e4-8804-2de0c941edae</uuid>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   <name>instance-0000003d</name>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <nova:name>tempest-tempest.common.compute-instance-1833307559</nova:name>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:34:15</nova:creationTime>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:34:16 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:34:16 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:34:16 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:34:16 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:16 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:34:16 compute-0 nova_compute[253538]:         <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:34:16 compute-0 nova_compute[253538]:         <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:34:16 compute-0 nova_compute[253538]:         <nova:port uuid="9fa407fa-661b-4b02-b4f4-656f6ae34cd8">
Nov 25 08:34:16 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <system>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <entry name="serial">52d39d67-b456-44e4-8804-2de0c941edae</entry>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <entry name="uuid">52d39d67-b456-44e4-8804-2de0c941edae</entry>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     </system>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   <os>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   </os>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   <features>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   </features>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/52d39d67-b456-44e4-8804-2de0c941edae_disk">
Nov 25 08:34:16 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:16 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/52d39d67-b456-44e4-8804-2de0c941edae_disk.config">
Nov 25 08:34:16 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:16 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:4d:ce:d4"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <target dev="tap9fa407fa-66"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/console.log" append="off"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <video>
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     </video>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:34:16 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:34:16 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:34:16 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:34:16 compute-0 nova_compute[253538]: </domain>
Nov 25 08:34:16 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.085 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Preparing to wait for external event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.085 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.085 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.085 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.086 253542 DEBUG nova.virt.libvirt.vif [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1833307559',display_name='tempest-tempest.common.compute-instance-1833307559',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1833307559',id=61,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-14irc3k3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=52d39d67-b456-44e4-8804-2de0c941edae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.086 253542 DEBUG nova.network.os_vif_util [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.087 253542 DEBUG nova.network.os_vif_util [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ce:d4,bridge_name='br-int',has_traffic_filtering=True,id=9fa407fa-661b-4b02-b4f4-656f6ae34cd8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fa407fa-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.087 253542 DEBUG os_vif [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ce:d4,bridge_name='br-int',has_traffic_filtering=True,id=9fa407fa-661b-4b02-b4f4-656f6ae34cd8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fa407fa-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.087 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.088 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.088 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.091 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.091 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fa407fa-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.091 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9fa407fa-66, col_values=(('external_ids', {'iface-id': '9fa407fa-661b-4b02-b4f4-656f6ae34cd8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:ce:d4', 'vm-uuid': '52d39d67-b456-44e4-8804-2de0c941edae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.093 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:16 compute-0 NetworkManager[48915]: <info>  [1764059656.0940] manager: (tap9fa407fa-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.095 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.098 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.098 253542 INFO os_vif [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ce:d4,bridge_name='br-int',has_traffic_filtering=True,id=9fa407fa-661b-4b02-b4f4-656f6ae34cd8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fa407fa-66')
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.104 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.104 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.105 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.105 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.123 253542 DEBUG nova.storage.rbd_utils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.126 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.192 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.192 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.193 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:4d:ce:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.193 253542 INFO nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Using config drive
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.213 253542 DEBUG nova.storage.rbd_utils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 52d39d67-b456-44e4-8804-2de0c941edae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.220 253542 DEBUG nova.compute.manager [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received event network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.221 253542 DEBUG oslo_concurrency.lockutils [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.221 253542 DEBUG oslo_concurrency.lockutils [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.221 253542 DEBUG oslo_concurrency.lockutils [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.221 253542 DEBUG nova.compute.manager [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Processing event network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.221 253542 DEBUG nova.compute.manager [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received event network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.221 253542 DEBUG oslo_concurrency.lockutils [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.222 253542 DEBUG oslo_concurrency.lockutils [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.222 253542 DEBUG oslo_concurrency.lockutils [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.222 253542 DEBUG nova.compute.manager [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] No waiting events found dispatching network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.222 253542 WARNING nova.compute.manager [req-62d0912b-5663-4692-a4cb-6f9d90b3aba9 req-4fed2556-c6a8-4eca-8489-42b5a68d9fef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received unexpected event network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 for instance with vm_state building and task_state spawning.
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.223 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.226 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059656.2259817, bf44124c-1a65-4bde-a777-043ae1a53557 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.226 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] VM Resumed (Lifecycle Event)
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.228 253542 DEBUG nova.policy [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c199ca353ed54a53ab7fe37d3089c82a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '23237e7592b247838e62457157e64e9e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.230 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.247 253542 INFO nova.virt.libvirt.driver [-] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance spawned successfully.
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.247 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.248 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.253 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.273 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.276 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.277 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.277 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.277 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.278 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.278 253542 DEBUG nova.virt.libvirt.driver [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.290 253542 DEBUG nova.compute.manager [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received event network-vif-unplugged-40faec4b-dd3f-4659-972d-beeeb707761f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.290 253542 DEBUG oslo_concurrency.lockutils [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.291 253542 DEBUG oslo_concurrency.lockutils [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.291 253542 DEBUG oslo_concurrency.lockutils [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.291 253542 DEBUG nova.compute.manager [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] No waiting events found dispatching network-vif-unplugged-40faec4b-dd3f-4659-972d-beeeb707761f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.291 253542 DEBUG nova.compute.manager [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received event network-vif-unplugged-40faec4b-dd3f-4659-972d-beeeb707761f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.291 253542 DEBUG nova.compute.manager [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received event network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.291 253542 DEBUG oslo_concurrency.lockutils [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.292 253542 DEBUG oslo_concurrency.lockutils [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.292 253542 DEBUG oslo_concurrency.lockutils [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.292 253542 DEBUG nova.compute.manager [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] No waiting events found dispatching network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.292 253542 WARNING nova.compute.manager [req-0bdb849e-7695-497b-bcef-93c0fb3071a3 req-afd5851a-4d50-4e76-be92-76a9633d0b8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received unexpected event network-vif-plugged-40faec4b-dd3f-4659-972d-beeeb707761f for instance with vm_state active and task_state deleting.
Nov 25 08:34:16 compute-0 ceph-mon[75015]: pgmap v1533: 321 pgs: 321 active+clean; 328 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 6.5 MiB/s wr, 170 op/s
Nov 25 08:34:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4191464744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1369036725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/348268589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.420 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.460 253542 INFO nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Took 10.74 seconds to spawn the instance on the hypervisor.
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.460 253542 DEBUG nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.508 253542 DEBUG nova.storage.rbd_utils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] resizing rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.606 253542 DEBUG nova.objects.instance [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'migration_context' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.608 253542 INFO nova.compute.manager [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Took 11.82 seconds to build instance.
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.625 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.625 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Ensure instance console log exists: /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.626 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.626 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.626 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.630 253542 DEBUG oslo_concurrency.lockutils [None req-264bf83f-c6d5-4d83-a2a4-c0156230e328 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.664 253542 DEBUG nova.network.neutron [req-3d4a59f7-5e91-44df-ab9f-983d26bcf780 req-55f998e1-75ae-41d1-af01-ce2112fde043 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updated VIF entry in instance network info cache for port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.664 253542 DEBUG nova.network.neutron [req-3d4a59f7-5e91-44df-ab9f-983d26bcf780 req-55f998e1-75ae-41d1-af01-ce2112fde043 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.673 253542 DEBUG nova.network.neutron [-] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.680 253542 DEBUG oslo_concurrency.lockutils [req-3d4a59f7-5e91-44df-ab9f-983d26bcf780 req-55f998e1-75ae-41d1-af01-ce2112fde043 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.690 253542 INFO nova.compute.manager [-] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Took 1.27 seconds to deallocate network for instance.
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.729 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.729 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:16 compute-0 nova_compute[253538]: 2025-11-25 08:34:16.855 253542 DEBUG oslo_concurrency.processutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.118 253542 INFO nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Creating config drive at /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/disk.config
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.127 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmz8t7lsc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 321 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.5 MiB/s wr, 241 op/s
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.274 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmz8t7lsc" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.304 253542 DEBUG nova.storage.rbd_utils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] rbd image 52d39d67-b456-44e4-8804-2de0c941edae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.308 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/disk.config 52d39d67-b456-44e4-8804-2de0c941edae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:34:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3519464579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.352 253542 DEBUG nova.network.neutron [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Successfully created port: 9200cc12-927d-418b-99c1-ca0421535979 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.357 253542 DEBUG oslo_concurrency.processutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.363 253542 DEBUG nova.compute.provider_tree [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.381 253542 DEBUG nova.scheduler.client.report [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.421 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.451 253542 INFO nova.scheduler.client.report [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Deleted allocations for instance e3f4ee5b-6bb5-456f-b522-426ea1ebf32f
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.452 253542 DEBUG oslo_concurrency.processutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/disk.config 52d39d67-b456-44e4-8804-2de0c941edae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.452 253542 INFO nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Deleting local config drive /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/disk.config because it was imported into RBD.
Nov 25 08:34:17 compute-0 NetworkManager[48915]: <info>  [1764059657.4967] manager: (tap9fa407fa-66): new Tun device (/org/freedesktop/NetworkManager/Devices/256)
Nov 25 08:34:17 compute-0 kernel: tap9fa407fa-66: entered promiscuous mode
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.501 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:17 compute-0 ovn_controller[152859]: 2025-11-25T08:34:17Z|00549|binding|INFO|Claiming lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for this chassis.
Nov 25 08:34:17 compute-0 ovn_controller[152859]: 2025-11-25T08:34:17Z|00550|binding|INFO|9fa407fa-661b-4b02-b4f4-656f6ae34cd8: Claiming fa:16:3e:4d:ce:d4 10.100.0.8
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.507 253542 DEBUG oslo_concurrency.lockutils [None req-0352b895-b5eb-4885-b29d-154a1d5181fb d4a674c4114a4e4fb5e446089be3ffc0 adf7500b3b404802bc7f4ada42a72100 - - default default] Lock "e3f4ee5b-6bb5-456f-b522-426ea1ebf32f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.507 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:ce:d4 10.100.0.8'], port_security=['fa:16:3e:4d:ce:d4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '52d39d67-b456-44e4-8804-2de0c941edae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '2', 'neutron:security_group_ids': '379bb9ab-2c12-4eea-bd54-6b8a24f607d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9fa407fa-661b-4b02-b4f4-656f6ae34cd8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.508 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe bound to our chassis
Nov 25 08:34:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.510 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:34:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.526 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[906b886a-2ed6-4661-a1bf-88845063611d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:17 compute-0 ovn_controller[152859]: 2025-11-25T08:34:17Z|00551|binding|INFO|Setting lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 ovn-installed in OVS
Nov 25 08:34:17 compute-0 ovn_controller[152859]: 2025-11-25T08:34:17Z|00552|binding|INFO|Setting lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 up in Southbound
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.541 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.547 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:17 compute-0 systemd-udevd[317114]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:34:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.559 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3d41e5f0-0b4b-4efc-896b-35f66f665413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.562 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[318d5717-8010-45cb-95c9-136f90b7583c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:17 compute-0 systemd-machined[215790]: New machine qemu-70-instance-0000003d.
Nov 25 08:34:17 compute-0 NetworkManager[48915]: <info>  [1764059657.5682] device (tap9fa407fa-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:34:17 compute-0 NetworkManager[48915]: <info>  [1764059657.5692] device (tap9fa407fa-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:34:17 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-0000003d.
Nov 25 08:34:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.589 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e289bd42-f6e1-4d24-abb8-a8a570b8dcea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.608 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[776bcfc5-0859-4bb2-aeda-a9347a5daf45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317130, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.623 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[311749a8-d93d-4c14-9148-45c2751fd292]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499086, 'tstamp': 499086}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317137, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499088, 'tstamp': 499088}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317137, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.625 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.626 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:17 compute-0 nova_compute[253538]: 2025-11-25 08:34:17.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.627 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:17.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:17 compute-0 podman[317099]: 2025-11-25 08:34:17.647798269 +0000 UTC m=+0.125782161 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.041 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059658.0411303, 52d39d67-b456-44e4-8804-2de0c941edae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.042 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] VM Started (Lifecycle Event)
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.055 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.059 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059658.041302, 52d39d67-b456-44e4-8804-2de0c941edae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.059 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] VM Paused (Lifecycle Event)
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.076 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.079 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.098 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.295 253542 DEBUG nova.compute.manager [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Received event network-vif-deleted-40faec4b-dd3f-4659-972d-beeeb707761f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.296 253542 DEBUG nova.compute.manager [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.296 253542 DEBUG oslo_concurrency.lockutils [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.297 253542 DEBUG oslo_concurrency.lockutils [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.297 253542 DEBUG oslo_concurrency.lockutils [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.297 253542 DEBUG nova.compute.manager [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Processing event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.297 253542 DEBUG nova.compute.manager [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.297 253542 DEBUG oslo_concurrency.lockutils [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.297 253542 DEBUG oslo_concurrency.lockutils [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.297 253542 DEBUG oslo_concurrency.lockutils [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.298 253542 DEBUG nova.compute.manager [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.298 253542 WARNING nova.compute.manager [req-f468f4a1-8713-4273-98b7-27025f3c796f req-3b9e0d06-8123-4217-b452-e6651a54e056 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for instance with vm_state building and task_state spawning.
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.299 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.303 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059658.3034234, 52d39d67-b456-44e4-8804-2de0c941edae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.303 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] VM Resumed (Lifecycle Event)
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.304 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.307 253542 INFO nova.virt.libvirt.driver [-] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Instance spawned successfully.
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.307 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.322 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.327 253542 DEBUG oslo_concurrency.lockutils [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.327 253542 DEBUG oslo_concurrency.lockutils [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.328 253542 DEBUG nova.compute.manager [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.328 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.332 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.332 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.333 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.333 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.333 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.334 253542 DEBUG nova.virt.libvirt.driver [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.336 253542 DEBUG nova.compute.manager [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.337 253542 DEBUG nova.objects.instance [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'flavor' on Instance uuid bf44124c-1a65-4bde-a777-043ae1a53557 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:18 compute-0 ceph-mon[75015]: pgmap v1534: 321 pgs: 321 active+clean; 321 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.5 MiB/s wr, 241 op/s
Nov 25 08:34:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3519464579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.362 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.377 253542 DEBUG nova.network.neutron [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Successfully updated port: 9200cc12-927d-418b-99c1-ca0421535979 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.378 253542 DEBUG nova.virt.libvirt.driver [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.396 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "refresh_cache-420c5373-d9c4-4da0-9658-90eff9a19f8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.396 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquired lock "refresh_cache-420c5373-d9c4-4da0-9658-90eff9a19f8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.396 253542 DEBUG nova.network.neutron [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.411 253542 INFO nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Took 9.02 seconds to spawn the instance on the hypervisor.
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.412 253542 DEBUG nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.483 253542 INFO nova.compute.manager [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Took 10.24 seconds to build instance.
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.496 253542 DEBUG oslo_concurrency.lockutils [None req-ea508da6-2d37-4e51-9a1b-53e06fde3653 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:18 compute-0 nova_compute[253538]: 2025-11-25 08:34:18.555 253542 DEBUG nova.network.neutron [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:34:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 335 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.6 MiB/s wr, 278 op/s
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.726 253542 DEBUG nova.network.neutron [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Updating instance_info_cache with network_info: [{"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.744 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Releasing lock "refresh_cache-420c5373-d9c4-4da0-9658-90eff9a19f8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.745 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance network_info: |[{"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.748 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Start _get_guest_xml network_info=[{"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.925 253542 WARNING nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.931 253542 DEBUG nova.virt.libvirt.host [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.931 253542 DEBUG nova.virt.libvirt.host [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.934 253542 DEBUG nova.virt.libvirt.host [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.935 253542 DEBUG nova.virt.libvirt.host [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.935 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.936 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.936 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.937 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.937 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.937 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.938 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.938 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.938 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.939 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.939 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.939 253542 DEBUG nova.virt.hardware [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:34:19 compute-0 nova_compute[253538]: 2025-11-25 08:34:19.943 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:20 compute-0 ceph-mon[75015]: pgmap v1535: 321 pgs: 321 active+clean; 335 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.6 MiB/s wr, 278 op/s
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.407 253542 DEBUG nova.compute.manager [req-e4737e5f-2d51-4d01-89ac-2ec49c12161a req-482d4b3f-959f-4a11-8f8c-2960d1a7d876 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-changed-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.408 253542 DEBUG nova.compute.manager [req-e4737e5f-2d51-4d01-89ac-2ec49c12161a req-482d4b3f-959f-4a11-8f8c-2960d1a7d876 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Refreshing instance network info cache due to event network-changed-9200cc12-927d-418b-99c1-ca0421535979. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.408 253542 DEBUG oslo_concurrency.lockutils [req-e4737e5f-2d51-4d01-89ac-2ec49c12161a req-482d4b3f-959f-4a11-8f8c-2960d1a7d876 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-420c5373-d9c4-4da0-9658-90eff9a19f8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.409 253542 DEBUG oslo_concurrency.lockutils [req-e4737e5f-2d51-4d01-89ac-2ec49c12161a req-482d4b3f-959f-4a11-8f8c-2960d1a7d876 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-420c5373-d9c4-4da0-9658-90eff9a19f8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.409 253542 DEBUG nova.network.neutron [req-e4737e5f-2d51-4d01-89ac-2ec49c12161a req-482d4b3f-959f-4a11-8f8c-2960d1a7d876 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Refreshing network info cache for port 9200cc12-927d-418b-99c1-ca0421535979 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3134672587' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.440 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.466 253542 DEBUG nova.storage.rbd_utils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.470 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.720 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/203499940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.957 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.959 253542 DEBUG nova.virt.libvirt.vif [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-614557291',display_name='tempest-tempest.common.compute-instance-614557291',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-614557291',id=62,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-x0c0gdce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:15Z,user_data=None,user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=420c5373-d9c4-4da0-9658-90eff9a19f8d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.960 253542 DEBUG nova.network.os_vif_util [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.962 253542 DEBUG nova.network.os_vif_util [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.964 253542 DEBUG nova.objects.instance [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.980 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:34:20 compute-0 nova_compute[253538]:   <uuid>420c5373-d9c4-4da0-9658-90eff9a19f8d</uuid>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   <name>instance-0000003e</name>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <nova:name>tempest-tempest.common.compute-instance-614557291</nova:name>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:34:19</nova:creationTime>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:34:20 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:34:20 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:34:20 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:34:20 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:20 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:34:20 compute-0 nova_compute[253538]:         <nova:user uuid="c199ca353ed54a53ab7fe37d3089c82a">tempest-ServerActionsTestJSON-1880843108-project-member</nova:user>
Nov 25 08:34:20 compute-0 nova_compute[253538]:         <nova:project uuid="23237e7592b247838e62457157e64e9e">tempest-ServerActionsTestJSON-1880843108</nova:project>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:34:20 compute-0 nova_compute[253538]:         <nova:port uuid="9200cc12-927d-418b-99c1-ca0421535979">
Nov 25 08:34:20 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <system>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <entry name="serial">420c5373-d9c4-4da0-9658-90eff9a19f8d</entry>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <entry name="uuid">420c5373-d9c4-4da0-9658-90eff9a19f8d</entry>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     </system>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   <os>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   </os>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   <features>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   </features>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/420c5373-d9c4-4da0-9658-90eff9a19f8d_disk">
Nov 25 08:34:20 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:20 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config">
Nov 25 08:34:20 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:20 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:73:f0:9b"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <target dev="tap9200cc12-92"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/console.log" append="off"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <video>
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     </video>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:34:20 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:34:20 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:34:20 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:34:20 compute-0 nova_compute[253538]: </domain>
Nov 25 08:34:20 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.981 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Preparing to wait for external event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.981 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.982 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.982 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.983 253542 DEBUG nova.virt.libvirt.vif [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-614557291',display_name='tempest-tempest.common.compute-instance-614557291',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-614557291',id=62,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-x0c0gdce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:15Z,user_data=None,user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=420c5373-d9c4-4da0-9658-90eff9a19f8d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.983 253542 DEBUG nova.network.os_vif_util [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.984 253542 DEBUG nova.network.os_vif_util [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.985 253542 DEBUG os_vif [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.985 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.986 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.986 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.989 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.989 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9200cc12-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.990 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9200cc12-92, col_values=(('external_ids', {'iface-id': '9200cc12-927d-418b-99c1-ca0421535979', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:f0:9b', 'vm-uuid': '420c5373-d9c4-4da0-9658-90eff9a19f8d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:20 compute-0 NetworkManager[48915]: <info>  [1764059660.9927] manager: (tap9200cc12-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.994 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.998 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:20 compute-0 nova_compute[253538]: 2025-11-25 08:34:20.999 253542 INFO os_vif [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92')
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.048 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.049 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.049 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No VIF found with MAC fa:16:3e:73:f0:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.050 253542 INFO nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Using config drive
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.075 253542 DEBUG nova.storage.rbd_utils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 341 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.7 MiB/s wr, 330 op/s
Nov 25 08:34:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3134672587' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/203499940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.463 253542 INFO nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Creating config drive at /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.472 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6a3fg7gu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.580 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.580 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.620 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6a3fg7gu" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.644 253542 DEBUG nova.storage.rbd_utils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.647 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.815 253542 DEBUG oslo_concurrency.processutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.816 253542 INFO nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Deleting local config drive /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config because it was imported into RBD.
Nov 25 08:34:21 compute-0 kernel: tap9200cc12-92: entered promiscuous mode
Nov 25 08:34:21 compute-0 ovn_controller[152859]: 2025-11-25T08:34:21Z|00553|binding|INFO|Claiming lport 9200cc12-927d-418b-99c1-ca0421535979 for this chassis.
Nov 25 08:34:21 compute-0 ovn_controller[152859]: 2025-11-25T08:34:21Z|00554|binding|INFO|9200cc12-927d-418b-99c1-ca0421535979: Claiming fa:16:3e:73:f0:9b 10.100.0.11
Nov 25 08:34:21 compute-0 NetworkManager[48915]: <info>  [1764059661.8829] manager: (tap9200cc12-92): new Tun device (/org/freedesktop/NetworkManager/Devices/258)
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.881 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:21 compute-0 ovn_controller[152859]: 2025-11-25T08:34:21Z|00555|binding|INFO|Setting lport 9200cc12-927d-418b-99c1-ca0421535979 ovn-installed in OVS
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.902 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:21 compute-0 nova_compute[253538]: 2025-11-25 08:34:21.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:21 compute-0 systemd-udevd[317339]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:34:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:21.931 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:f0:9b 10.100.0.11'], port_security=['fa:16:3e:73:f0:9b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '420c5373-d9c4-4da0-9658-90eff9a19f8d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c602cd86-40c0-467a-8b7a-b573e0a7cefa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9200cc12-927d-418b-99c1-ca0421535979) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:21 compute-0 ovn_controller[152859]: 2025-11-25T08:34:21Z|00556|binding|INFO|Setting lport 9200cc12-927d-418b-99c1-ca0421535979 up in Southbound
Nov 25 08:34:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:21.932 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9200cc12-927d-418b-99c1-ca0421535979 in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 bound to our chassis
Nov 25 08:34:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:21.933 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:34:21 compute-0 systemd-machined[215790]: New machine qemu-71-instance-0000003e.
Nov 25 08:34:21 compute-0 NetworkManager[48915]: <info>  [1764059661.9423] device (tap9200cc12-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:34:21 compute-0 NetworkManager[48915]: <info>  [1764059661.9429] device (tap9200cc12-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:34:21 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-0000003e.
Nov 25 08:34:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:21.956 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[60e04df7-37c5-40bf-833d-b51e9b0781b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:21.994 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9faadc58-27ca-4ea1-8b23-8500ff166054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:21.997 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d88bbcdf-d498-46ff-b814-0bd432433ff2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:22.033 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2bc74b-9159-4f26-b361-bc6f4a1d85b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:22.050 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[015e0c6e-fe91-43ab-841a-62720a4e776d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496834, 'reachable_time': 23814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317354, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:22.074 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d81bdc1-ca1e-4d87-a283-15f0c5b844ff]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap908154e6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496845, 'tstamp': 496845}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317355, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap908154e6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496848, 'tstamp': 496848}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317355, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:22.076 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.077 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.078 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:22.079 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:22.079 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:22.079 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:22.079 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:34:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1101747595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.130 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.221 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.221 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.225 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.225 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.229 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.230 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.234 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.234 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.238 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.238 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.333 253542 DEBUG nova.network.neutron [req-e4737e5f-2d51-4d01-89ac-2ec49c12161a req-482d4b3f-959f-4a11-8f8c-2960d1a7d876 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Updated VIF entry in instance network info cache for port 9200cc12-927d-418b-99c1-ca0421535979. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.333 253542 DEBUG nova.network.neutron [req-e4737e5f-2d51-4d01-89ac-2ec49c12161a req-482d4b3f-959f-4a11-8f8c-2960d1a7d876 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Updating instance_info_cache with network_info: [{"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.347 253542 DEBUG oslo_concurrency.lockutils [req-e4737e5f-2d51-4d01-89ac-2ec49c12161a req-482d4b3f-959f-4a11-8f8c-2960d1a7d876 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-420c5373-d9c4-4da0-9658-90eff9a19f8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.361 253542 DEBUG nova.compute.manager [req-077676c9-9500-4d50-b580-1d398dd006a0 req-b047daa1-e92a-4074-805d-ff09b8e77525 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.361 253542 DEBUG oslo_concurrency.lockutils [req-077676c9-9500-4d50-b580-1d398dd006a0 req-b047daa1-e92a-4074-805d-ff09b8e77525 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.361 253542 DEBUG oslo_concurrency.lockutils [req-077676c9-9500-4d50-b580-1d398dd006a0 req-b047daa1-e92a-4074-805d-ff09b8e77525 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.362 253542 DEBUG oslo_concurrency.lockutils [req-077676c9-9500-4d50-b580-1d398dd006a0 req-b047daa1-e92a-4074-805d-ff09b8e77525 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.362 253542 DEBUG nova.compute.manager [req-077676c9-9500-4d50-b580-1d398dd006a0 req-b047daa1-e92a-4074-805d-ff09b8e77525 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Processing event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:34:22 compute-0 ceph-mon[75015]: pgmap v1536: 321 pgs: 321 active+clean; 341 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.7 MiB/s wr, 330 op/s
Nov 25 08:34:22 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1101747595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.446 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.447 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059662.446147, 420c5373-d9c4-4da0-9658-90eff9a19f8d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.447 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] VM Started (Lifecycle Event)
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.451 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.455 253542 INFO nova.virt.libvirt.driver [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance spawned successfully.
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.455 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.469 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.475 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.478 253542 DEBUG nova.compute.manager [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.478 253542 DEBUG nova.compute.manager [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing instance network info cache due to event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.479 253542 DEBUG oslo_concurrency.lockutils [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.479 253542 DEBUG oslo_concurrency.lockutils [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.479 253542 DEBUG nova.network.neutron [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.482 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.482 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.483 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.483 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.483 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.484 253542 DEBUG nova.virt.libvirt.driver [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.510 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.510 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059662.448038, 420c5373-d9c4-4da0-9658-90eff9a19f8d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.510 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] VM Paused (Lifecycle Event)
Nov 25 08:34:22 compute-0 ovn_controller[152859]: 2025-11-25T08:34:22Z|00557|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 08:34:22 compute-0 ovn_controller[152859]: 2025-11-25T08:34:22Z|00558|binding|INFO|Releasing lport c0d74b17-7eba-4096-a861-b9247777e01c from this chassis (sb_readonly=0)
Nov 25 08:34:22 compute-0 ovn_controller[152859]: 2025-11-25T08:34:22Z|00559|binding|INFO|Releasing lport 98660c0c-0936-4c4d-9a89-87b784d8d5cc from this chassis (sb_readonly=0)
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.552 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.555 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059662.4502008, 420c5373-d9c4-4da0-9658-90eff9a19f8d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.556 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] VM Resumed (Lifecycle Event)
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.559 253542 INFO nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Took 6.60 seconds to spawn the instance on the hypervisor.
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.560 253542 DEBUG nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.561 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.562 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3394MB free_disk=59.83451461791992GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.562 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.562 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.572 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.577 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.610 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.637 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.656 253542 INFO nova.compute.manager [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Took 7.67 seconds to build instance.
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.673 253542 DEBUG oslo_concurrency.lockutils [None req-898de7fa-98d1-45c8-93c3-87953770422a c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.681 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0feca801-4630-4450-b915-616d8496ab51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.682 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 5c6656ef-7ad0-4eb4-a597-aa9a8078805b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.682 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance bf44124c-1a65-4bde-a777-043ae1a53557 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.682 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 52d39d67-b456-44e4-8804-2de0c941edae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.682 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 420c5373-d9c4-4da0-9658-90eff9a19f8d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.682 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.682 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:34:22 compute-0 nova_compute[253538]: 2025-11-25 08:34:22.791 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:34:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3264092626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:23 compute-0 nova_compute[253538]: 2025-11-25 08:34:23.235 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:23 compute-0 nova_compute[253538]: 2025-11-25 08:34:23.246 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:34:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 341 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.7 MiB/s wr, 362 op/s
Nov 25 08:34:23 compute-0 nova_compute[253538]: 2025-11-25 08:34:23.260 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:34:23 compute-0 nova_compute[253538]: 2025-11-25 08:34:23.284 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:34:23 compute-0 nova_compute[253538]: 2025-11-25 08:34:23.284 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:34:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:34:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:34:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:34:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:34:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:34:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3264092626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:23 compute-0 nova_compute[253538]: 2025-11-25 08:34:23.854 253542 DEBUG nova.network.neutron [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updated VIF entry in instance network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:34:23 compute-0 nova_compute[253538]: 2025-11-25 08:34:23.855 253542 DEBUG nova.network.neutron [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:23 compute-0 nova_compute[253538]: 2025-11-25 08:34:23.877 253542 DEBUG oslo_concurrency.lockutils [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:23 compute-0 nova_compute[253538]: 2025-11-25 08:34:23.877 253542 DEBUG nova.compute.manager [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-changed-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:23 compute-0 nova_compute[253538]: 2025-11-25 08:34:23.877 253542 DEBUG nova.compute.manager [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing instance network info cache due to event network-changed-9fa407fa-661b-4b02-b4f4-656f6ae34cd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:23 compute-0 nova_compute[253538]: 2025-11-25 08:34:23.878 253542 DEBUG oslo_concurrency.lockutils [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:23 compute-0 nova_compute[253538]: 2025-11-25 08:34:23.878 253542 DEBUG oslo_concurrency.lockutils [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:23 compute-0 nova_compute[253538]: 2025-11-25 08:34:23.878 253542 DEBUG nova.network.neutron [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing network info cache for port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:24 compute-0 ceph-mon[75015]: pgmap v1537: 321 pgs: 321 active+clean; 341 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.7 MiB/s wr, 362 op/s
Nov 25 08:34:24 compute-0 nova_compute[253538]: 2025-11-25 08:34:24.514 253542 DEBUG nova.compute.manager [req-0dae40eb-26df-4f2b-987e-b79dc1a6a54c req-17484b22-b8b4-4414-a40e-2661b1780037 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:24 compute-0 nova_compute[253538]: 2025-11-25 08:34:24.515 253542 DEBUG oslo_concurrency.lockutils [req-0dae40eb-26df-4f2b-987e-b79dc1a6a54c req-17484b22-b8b4-4414-a40e-2661b1780037 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:24 compute-0 nova_compute[253538]: 2025-11-25 08:34:24.515 253542 DEBUG oslo_concurrency.lockutils [req-0dae40eb-26df-4f2b-987e-b79dc1a6a54c req-17484b22-b8b4-4414-a40e-2661b1780037 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:24 compute-0 nova_compute[253538]: 2025-11-25 08:34:24.515 253542 DEBUG oslo_concurrency.lockutils [req-0dae40eb-26df-4f2b-987e-b79dc1a6a54c req-17484b22-b8b4-4414-a40e-2661b1780037 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:24 compute-0 nova_compute[253538]: 2025-11-25 08:34:24.516 253542 DEBUG nova.compute.manager [req-0dae40eb-26df-4f2b-987e-b79dc1a6a54c req-17484b22-b8b4-4414-a40e-2661b1780037 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] No waiting events found dispatching network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:24 compute-0 nova_compute[253538]: 2025-11-25 08:34:24.516 253542 WARNING nova.compute.manager [req-0dae40eb-26df-4f2b-987e-b79dc1a6a54c req-17484b22-b8b4-4414-a40e-2661b1780037 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received unexpected event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 for instance with vm_state active and task_state None.
Nov 25 08:34:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 341 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 5.3 MiB/s wr, 391 op/s
Nov 25 08:34:25 compute-0 nova_compute[253538]: 2025-11-25 08:34:25.670 253542 DEBUG nova.network.neutron [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updated VIF entry in instance network info cache for port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:34:25 compute-0 nova_compute[253538]: 2025-11-25 08:34:25.671 253542 DEBUG nova.network.neutron [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:25 compute-0 nova_compute[253538]: 2025-11-25 08:34:25.692 253542 DEBUG oslo_concurrency.lockutils [req-b0e577e2-64fe-4d13-b40d-920268e99398 req-60579c43-8260-4204-b3f3-1e6ff05305da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:25 compute-0 nova_compute[253538]: 2025-11-25 08:34:25.724 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:25 compute-0 nova_compute[253538]: 2025-11-25 08:34:25.810 253542 DEBUG oslo_concurrency.lockutils [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-5c6656ef-7ad0-4eb4-a597-aa9a8078805b-f2a4b65b-419e-44be-9413-f01693268aa8" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:25 compute-0 nova_compute[253538]: 2025-11-25 08:34:25.810 253542 DEBUG oslo_concurrency.lockutils [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-5c6656ef-7ad0-4eb4-a597-aa9a8078805b-f2a4b65b-419e-44be-9413-f01693268aa8" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:25 compute-0 nova_compute[253538]: 2025-11-25 08:34:25.811 253542 DEBUG nova.objects.instance [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 5c6656ef-7ad0-4eb4-a597-aa9a8078805b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:34:25 compute-0 nova_compute[253538]: 2025-11-25 08:34:25.991 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.212 253542 INFO nova.compute.manager [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Rebuilding instance
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.279 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.304 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.474 253542 DEBUG nova.objects.instance [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_requests' on Instance uuid 5c6656ef-7ad0-4eb4-a597-aa9a8078805b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.485 253542 DEBUG nova.network.neutron [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.496 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:26 compute-0 ceph-mon[75015]: pgmap v1538: 321 pgs: 321 active+clean; 341 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 5.3 MiB/s wr, 391 op/s
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.514 253542 DEBUG nova.compute.manager [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.567 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_requests' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.581 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.591 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'resources' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.602 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'migration_context' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.611 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.620 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.827 253542 DEBUG nova.policy [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.939 253542 DEBUG nova.compute.manager [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-changed-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.940 253542 DEBUG nova.compute.manager [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing instance network info cache due to event network-changed-9fa407fa-661b-4b02-b4f4-656f6ae34cd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.940 253542 DEBUG oslo_concurrency.lockutils [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.940 253542 DEBUG oslo_concurrency.lockutils [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:26 compute-0 nova_compute[253538]: 2025-11-25 08:34:26.940 253542 DEBUG nova.network.neutron [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing network info cache for port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 341 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 2.1 MiB/s wr, 313 op/s
Nov 25 08:34:27 compute-0 nova_compute[253538]: 2025-11-25 08:34:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:34:28 compute-0 nova_compute[253538]: 2025-11-25 08:34:28.427 253542 DEBUG nova.virt.libvirt.driver [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:34:28 compute-0 ceph-mon[75015]: pgmap v1539: 321 pgs: 321 active+clean; 341 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 2.1 MiB/s wr, 313 op/s
Nov 25 08:34:28 compute-0 nova_compute[253538]: 2025-11-25 08:34:28.655 253542 DEBUG nova.network.neutron [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updated VIF entry in instance network info cache for port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:34:28 compute-0 nova_compute[253538]: 2025-11-25 08:34:28.655 253542 DEBUG nova.network.neutron [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:28 compute-0 nova_compute[253538]: 2025-11-25 08:34:28.673 253542 DEBUG oslo_concurrency.lockutils [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:28 compute-0 nova_compute[253538]: 2025-11-25 08:34:28.674 253542 DEBUG nova.compute.manager [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:28 compute-0 nova_compute[253538]: 2025-11-25 08:34:28.674 253542 DEBUG nova.compute.manager [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing instance network info cache due to event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:28 compute-0 nova_compute[253538]: 2025-11-25 08:34:28.674 253542 DEBUG oslo_concurrency.lockutils [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:28 compute-0 nova_compute[253538]: 2025-11-25 08:34:28.674 253542 DEBUG oslo_concurrency.lockutils [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:28 compute-0 nova_compute[253538]: 2025-11-25 08:34:28.674 253542 DEBUG nova.network.neutron [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:28 compute-0 nova_compute[253538]: 2025-11-25 08:34:28.866 253542 DEBUG nova.network.neutron [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Successfully updated port: f2a4b65b-419e-44be-9413-f01693268aa8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:34:28 compute-0 nova_compute[253538]: 2025-11-25 08:34:28.899 253542 DEBUG oslo_concurrency.lockutils [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:34:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1929195072' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:34:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:34:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1929195072' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:34:29 compute-0 nova_compute[253538]: 2025-11-25 08:34:29.093 253542 DEBUG nova.compute.manager [req-0cf3f703-c207-457e-add0-fb5a3155aacf req-7505f40a-58ba-4b90-a0cb-5f9c4d248156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-changed-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:29 compute-0 nova_compute[253538]: 2025-11-25 08:34:29.093 253542 DEBUG nova.compute.manager [req-0cf3f703-c207-457e-add0-fb5a3155aacf req-7505f40a-58ba-4b90-a0cb-5f9c4d248156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing instance network info cache due to event network-changed-f2a4b65b-419e-44be-9413-f01693268aa8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:29 compute-0 nova_compute[253538]: 2025-11-25 08:34:29.094 253542 DEBUG oslo_concurrency.lockutils [req-0cf3f703-c207-457e-add0-fb5a3155aacf req-7505f40a-58ba-4b90-a0cb-5f9c4d248156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 341 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.4 MiB/s wr, 248 op/s
Nov 25 08:34:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1929195072' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:34:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1929195072' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:34:29 compute-0 nova_compute[253538]: 2025-11-25 08:34:29.823 253542 DEBUG nova.network.neutron [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updated VIF entry in instance network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:34:29 compute-0 nova_compute[253538]: 2025-11-25 08:34:29.823 253542 DEBUG nova.network.neutron [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:29 compute-0 nova_compute[253538]: 2025-11-25 08:34:29.916 253542 DEBUG oslo_concurrency.lockutils [req-4ca38ef0-59e5-4b3f-984f-bd85d5dff43f req-1be6f862-8c45-4a0f-8e1b-d1a651f789ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:29 compute-0 nova_compute[253538]: 2025-11-25 08:34:29.916 253542 DEBUG oslo_concurrency.lockutils [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:29 compute-0 nova_compute[253538]: 2025-11-25 08:34:29.917 253542 DEBUG nova.network.neutron [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:34:29 compute-0 nova_compute[253538]: 2025-11-25 08:34:29.977 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059654.976347, e3f4ee5b-6bb5-456f-b522-426ea1ebf32f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:29 compute-0 nova_compute[253538]: 2025-11-25 08:34:29.977 253542 INFO nova.compute.manager [-] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] VM Stopped (Lifecycle Event)
Nov 25 08:34:30 compute-0 nova_compute[253538]: 2025-11-25 08:34:29.999 253542 DEBUG nova.compute.manager [None req-b7fb386f-e36b-4863-b455-4db65d1f22e7 - - - - - -] [instance: e3f4ee5b-6bb5-456f-b522-426ea1ebf32f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:30 compute-0 nova_compute[253538]: 2025-11-25 08:34:30.411 253542 WARNING nova.network.neutron [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it
Nov 25 08:34:30 compute-0 ceph-mon[75015]: pgmap v1540: 321 pgs: 321 active+clean; 341 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.4 MiB/s wr, 248 op/s
Nov 25 08:34:30 compute-0 nova_compute[253538]: 2025-11-25 08:34:30.728 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:34:30 compute-0 nova_compute[253538]: 2025-11-25 08:34:30.993 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:31 compute-0 ovn_controller[152859]: 2025-11-25T08:34:31Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:13:ae:8e 10.100.0.12
Nov 25 08:34:31 compute-0 ovn_controller[152859]: 2025-11-25T08:34:31Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:13:ae:8e 10.100.0.12
Nov 25 08:34:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 349 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 1017 KiB/s wr, 192 op/s
Nov 25 08:34:31 compute-0 ceph-mon[75015]: pgmap v1541: 321 pgs: 321 active+clean; 349 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 1017 KiB/s wr, 192 op/s
Nov 25 08:34:31 compute-0 nova_compute[253538]: 2025-11-25 08:34:31.841 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.093 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.092 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.093 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:34:32 compute-0 sudo[317422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:34:32 compute-0 sudo[317422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:32 compute-0 sudo[317422]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:32 compute-0 sudo[317447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:34:32 compute-0 sudo[317447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:32 compute-0 sudo[317447]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:32 compute-0 sudo[317472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:34:32 compute-0 sudo[317472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:32 compute-0 sudo[317472]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:32 compute-0 sudo[317497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 08:34:32 compute-0 sudo[317497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.780 253542 DEBUG nova.network.neutron [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.811 253542 DEBUG oslo_concurrency.lockutils [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.812 253542 DEBUG oslo_concurrency.lockutils [req-0cf3f703-c207-457e-add0-fb5a3155aacf req-7505f40a-58ba-4b90-a0cb-5f9c4d248156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.813 253542 DEBUG nova.network.neutron [req-0cf3f703-c207-457e-add0-fb5a3155aacf req-7505f40a-58ba-4b90-a0cb-5f9c4d248156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing network info cache for port f2a4b65b-419e-44be-9413-f01693268aa8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.817 253542 DEBUG nova.virt.libvirt.vif [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632172140',display_name='tempest-tempest.common.compute-instance-632172140',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632172140',id=58,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-yi71ppnf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=5c6656ef-7ad0-4eb4-a597-aa9a8078805b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.817 253542 DEBUG nova.network.os_vif_util [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.818 253542 DEBUG nova.network.os_vif_util [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.818 253542 DEBUG os_vif [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.818 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.819 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.819 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.821 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2a4b65b-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.822 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf2a4b65b-41, col_values=(('external_ids', {'iface-id': 'f2a4b65b-419e-44be-9413-f01693268aa8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:ed:fc', 'vm-uuid': '5c6656ef-7ad0-4eb4-a597-aa9a8078805b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:32 compute-0 NetworkManager[48915]: <info>  [1764059672.8242] manager: (tapf2a4b65b-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.824 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.830 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.832 253542 INFO os_vif [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41')
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.832 253542 DEBUG nova.virt.libvirt.vif [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632172140',display_name='tempest-tempest.common.compute-instance-632172140',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632172140',id=58,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-yi71ppnf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=5c6656ef-7ad0-4eb4-a597-aa9a8078805b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.833 253542 DEBUG nova.network.os_vif_util [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.833 253542 DEBUG nova.network.os_vif_util [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.835 253542 DEBUG nova.virt.libvirt.guest [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] attach device xml: <interface type="ethernet">
Nov 25 08:34:32 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:26:ed:fc"/>
Nov 25 08:34:32 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:34:32 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:34:32 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:34:32 compute-0 nova_compute[253538]:   <target dev="tapf2a4b65b-41"/>
Nov 25 08:34:32 compute-0 nova_compute[253538]: </interface>
Nov 25 08:34:32 compute-0 nova_compute[253538]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 08:34:32 compute-0 kernel: tapf2a4b65b-41: entered promiscuous mode
Nov 25 08:34:32 compute-0 NetworkManager[48915]: <info>  [1764059672.8463] manager: (tapf2a4b65b-41): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.848 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:32 compute-0 ovn_controller[152859]: 2025-11-25T08:34:32Z|00560|binding|INFO|Claiming lport f2a4b65b-419e-44be-9413-f01693268aa8 for this chassis.
Nov 25 08:34:32 compute-0 ovn_controller[152859]: 2025-11-25T08:34:32Z|00561|binding|INFO|f2a4b65b-419e-44be-9413-f01693268aa8: Claiming fa:16:3e:26:ed:fc 10.100.0.3
Nov 25 08:34:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.856 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:ed:fc 10.100.0.3'], port_security=['fa:16:3e:26:ed:fc 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-663266119', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5c6656ef-7ad0-4eb4-a597-aa9a8078805b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-663266119', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f2a4b65b-419e-44be-9413-f01693268aa8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.857 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f2a4b65b-419e-44be-9413-f01693268aa8 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe bound to our chassis
Nov 25 08:34:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.859 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:34:32 compute-0 ovn_controller[152859]: 2025-11-25T08:34:32Z|00562|binding|INFO|Setting lport f2a4b65b-419e-44be-9413-f01693268aa8 ovn-installed in OVS
Nov 25 08:34:32 compute-0 ovn_controller[152859]: 2025-11-25T08:34:32Z|00563|binding|INFO|Setting lport f2a4b65b-419e-44be-9413-f01693268aa8 up in Southbound
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.875 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.880 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c40fd45b-8f96-4c8d-b1f4-da1f23b01e04]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:32 compute-0 systemd-udevd[317552]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:34:32 compute-0 NetworkManager[48915]: <info>  [1764059672.9089] device (tapf2a4b65b-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:34:32 compute-0 NetworkManager[48915]: <info>  [1764059672.9101] device (tapf2a4b65b-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:34:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.911 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e6cc68-b044-4700-91d7-ef8fc711a167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.914 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[363f7f09-3ed5-4cba-8ef6-e3b7490f476b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.948 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c70b2c5b-989f-4ad2-8820-dd94fd77e48b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.972 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8ed971-e780-462a-95f1-91b416e3252f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317570, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.985 253542 DEBUG nova.virt.libvirt.driver [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.986 253542 DEBUG nova.virt.libvirt.driver [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.986 253542 DEBUG nova.virt.libvirt.driver [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:60:42:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:34:32 compute-0 nova_compute[253538]: 2025-11-25 08:34:32.986 253542 DEBUG nova.virt.libvirt.driver [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:26:ed:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:34:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.990 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5a4d2e-2ccf-460a-84bf-85fbae382c04]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499086, 'tstamp': 499086}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317573, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499088, 'tstamp': 499088}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317573, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:32.991 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:33 compute-0 nova_compute[253538]: 2025-11-25 08:34:33.059 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:33.062 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:33.063 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:33.063 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:33.064 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:33 compute-0 nova_compute[253538]: 2025-11-25 08:34:33.073 253542 DEBUG nova.virt.libvirt.guest [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:33 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:33 compute-0 nova_compute[253538]:   <nova:name>tempest-tempest.common.compute-instance-632172140</nova:name>
Nov 25 08:34:33 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:34:33</nova:creationTime>
Nov 25 08:34:33 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:34:33 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:34:33 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:34:33 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:34:33 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:33 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:33 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:34:33 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:34:33 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:34:33 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:34:33 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:34:33 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:34:33 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:34:33 compute-0 nova_compute[253538]:     <nova:port uuid="1682bdaf-1dd6-4036-8d17-a169dbaaca8f">
Nov 25 08:34:33 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:34:33 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:34:33 compute-0 nova_compute[253538]:     <nova:port uuid="f2a4b65b-419e-44be-9413-f01693268aa8">
Nov 25 08:34:33 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:34:33 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:34:33 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:34:33 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:34:33 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:34:33 compute-0 nova_compute[253538]: 2025-11-25 08:34:33.098 253542 DEBUG oslo_concurrency.lockutils [None req-c368117c-2586-4b3a-8fd2-af52c4c1b427 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-5c6656ef-7ad0-4eb4-a597-aa9a8078805b-f2a4b65b-419e-44be-9413-f01693268aa8" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:33 compute-0 ovn_controller[152859]: 2025-11-25T08:34:33Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:ce:d4 10.100.0.8
Nov 25 08:34:33 compute-0 ovn_controller[152859]: 2025-11-25T08:34:33Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:ce:d4 10.100.0.8
Nov 25 08:34:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 360 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 170 op/s
Nov 25 08:34:33 compute-0 podman[317604]: 2025-11-25 08:34:33.472498943 +0000 UTC m=+0.306198961 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:34:33 compute-0 podman[317624]: 2025-11-25 08:34:33.685464067 +0000 UTC m=+0.109184616 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 08:34:33 compute-0 podman[317604]: 2025-11-25 08:34:33.700453389 +0000 UTC m=+0.534153417 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:34:33 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 25 08:34:33 compute-0 nova_compute[253538]: 2025-11-25 08:34:33.797 253542 DEBUG nova.compute.manager [req-ed729e2c-e8e4-4120-9f00-8591584dad7a req-e7d61782-b5db-4aee-8f8b-534332db073d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:33 compute-0 nova_compute[253538]: 2025-11-25 08:34:33.798 253542 DEBUG oslo_concurrency.lockutils [req-ed729e2c-e8e4-4120-9f00-8591584dad7a req-e7d61782-b5db-4aee-8f8b-534332db073d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:33 compute-0 nova_compute[253538]: 2025-11-25 08:34:33.798 253542 DEBUG oslo_concurrency.lockutils [req-ed729e2c-e8e4-4120-9f00-8591584dad7a req-e7d61782-b5db-4aee-8f8b-534332db073d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:33 compute-0 nova_compute[253538]: 2025-11-25 08:34:33.798 253542 DEBUG oslo_concurrency.lockutils [req-ed729e2c-e8e4-4120-9f00-8591584dad7a req-e7d61782-b5db-4aee-8f8b-534332db073d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:33 compute-0 nova_compute[253538]: 2025-11-25 08:34:33.798 253542 DEBUG nova.compute.manager [req-ed729e2c-e8e4-4120-9f00-8591584dad7a req-e7d61782-b5db-4aee-8f8b-534332db073d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] No waiting events found dispatching network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:33 compute-0 nova_compute[253538]: 2025-11-25 08:34:33.798 253542 WARNING nova.compute.manager [req-ed729e2c-e8e4-4120-9f00-8591584dad7a req-e7d61782-b5db-4aee-8f8b-534332db073d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received unexpected event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 for instance with vm_state active and task_state None.
Nov 25 08:34:34 compute-0 sudo[317497]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:34:34 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:34:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:34:34 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:34:34 compute-0 ceph-mon[75015]: pgmap v1542: 321 pgs: 321 active+clean; 360 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 170 op/s
Nov 25 08:34:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:34:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:34:34 compute-0 sudo[317759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:34:34 compute-0 sudo[317759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:34 compute-0 sudo[317759]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:34 compute-0 sudo[317784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:34:34 compute-0 sudo[317784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:34 compute-0 sudo[317784]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:34 compute-0 sudo[317809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:34:34 compute-0 sudo[317809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:34 compute-0 sudo[317809]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:34 compute-0 sudo[317834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:34:34 compute-0 sudo[317834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:35.095 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:35 compute-0 sudo[317834]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:34:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:34:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:34:35 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:34:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:34:35 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:34:35 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev fa43844e-6990-4241-af3d-4b8e73492270 does not exist
Nov 25 08:34:35 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 4d3f1dd5-f01d-42ee-ad50-a0835956ceec does not exist
Nov 25 08:34:35 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 981f73f4-017f-47a9-8878-f8da23fe9ad5 does not exist
Nov 25 08:34:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:34:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:34:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:34:35 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:34:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:34:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:34:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 402 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Nov 25 08:34:35 compute-0 sudo[317891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:34:35 compute-0 sudo[317891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:35 compute-0 sudo[317891]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:35 compute-0 sudo[317916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:34:35 compute-0 sudo[317916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:35 compute-0 sudo[317916]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:35 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:34:35 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:34:35 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:34:35 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:34:35 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:34:35 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:34:35 compute-0 sudo[317941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:34:35 compute-0 sudo[317941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:35 compute-0 sudo[317941]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:35 compute-0 sudo[317966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:34:35 compute-0 sudo[317966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:35 compute-0 nova_compute[253538]: 2025-11-25 08:34:35.730 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:35 compute-0 ovn_controller[152859]: 2025-11-25T08:34:35Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:ed:fc 10.100.0.3
Nov 25 08:34:35 compute-0 ovn_controller[152859]: 2025-11-25T08:34:35Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:ed:fc 10.100.0.3
Nov 25 08:34:35 compute-0 podman[318031]: 2025-11-25 08:34:35.850233518 +0000 UTC m=+0.039526313 container create fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 08:34:35 compute-0 systemd[1]: Started libpod-conmon-fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f.scope.
Nov 25 08:34:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:34:35 compute-0 podman[318031]: 2025-11-25 08:34:35.83206959 +0000 UTC m=+0.021362395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:34:35 compute-0 podman[318031]: 2025-11-25 08:34:35.93551131 +0000 UTC m=+0.124804185 container init fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:34:35 compute-0 podman[318031]: 2025-11-25 08:34:35.948300303 +0000 UTC m=+0.137593098 container start fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 08:34:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:34:35 compute-0 podman[318031]: 2025-11-25 08:34:35.951608923 +0000 UTC m=+0.140901738 container attach fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:34:35 compute-0 interesting_meninsky[318047]: 167 167
Nov 25 08:34:35 compute-0 systemd[1]: libpod-fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f.scope: Deactivated successfully.
Nov 25 08:34:35 compute-0 conmon[318047]: conmon fc9b57bb1e3109838022 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f.scope/container/memory.events
Nov 25 08:34:35 compute-0 podman[318031]: 2025-11-25 08:34:35.959186416 +0000 UTC m=+0.148479211 container died fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:34:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fd26b4d4cdb87f7444b710014334203ce2a13bc41437ac210f2f56ee48e733e-merged.mount: Deactivated successfully.
Nov 25 08:34:36 compute-0 podman[318031]: 2025-11-25 08:34:36.023574867 +0000 UTC m=+0.212867662 container remove fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:34:36 compute-0 systemd[1]: libpod-conmon-fc9b57bb1e3109838022b18530d42369d994b869e7f0e25ffd67abfeaa0c048f.scope: Deactivated successfully.
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.222 253542 DEBUG nova.network.neutron [req-0cf3f703-c207-457e-add0-fb5a3155aacf req-7505f40a-58ba-4b90-a0cb-5f9c4d248156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updated VIF entry in instance network info cache for port f2a4b65b-419e-44be-9413-f01693268aa8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.223 253542 DEBUG nova.network.neutron [req-0cf3f703-c207-457e-add0-fb5a3155aacf req-7505f40a-58ba-4b90-a0cb-5f9c4d248156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.238 253542 DEBUG oslo_concurrency.lockutils [req-0cf3f703-c207-457e-add0-fb5a3155aacf req-7505f40a-58ba-4b90-a0cb-5f9c4d248156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:36 compute-0 podman[318071]: 2025-11-25 08:34:36.271046979 +0000 UTC m=+0.057428705 container create 4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 25 08:34:36 compute-0 systemd[1]: Started libpod-conmon-4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0.scope.
Nov 25 08:34:36 compute-0 podman[318071]: 2025-11-25 08:34:36.24248467 +0000 UTC m=+0.028866426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:34:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.364 253542 DEBUG oslo_concurrency.lockutils [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-5c6656ef-7ad0-4eb4-a597-aa9a8078805b-f2a4b65b-419e-44be-9413-f01693268aa8" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.365 253542 DEBUG oslo_concurrency.lockutils [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-5c6656ef-7ad0-4eb4-a597-aa9a8078805b-f2a4b65b-419e-44be-9413-f01693268aa8" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06b83c690e9075e320bf9de04a406a4caab642e3b15187f68ef7f03244740ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06b83c690e9075e320bf9de04a406a4caab642e3b15187f68ef7f03244740ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06b83c690e9075e320bf9de04a406a4caab642e3b15187f68ef7f03244740ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06b83c690e9075e320bf9de04a406a4caab642e3b15187f68ef7f03244740ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06b83c690e9075e320bf9de04a406a4caab642e3b15187f68ef7f03244740ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.386 253542 DEBUG nova.objects.instance [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 5c6656ef-7ad0-4eb4-a597-aa9a8078805b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:36 compute-0 podman[318071]: 2025-11-25 08:34:36.387844987 +0000 UTC m=+0.174226713 container init 4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_clarke, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.396 253542 DEBUG nova.compute.manager [req-85943c80-fdf8-41ff-a524-178cc9e05615 req-9f55746a-46a4-4284-b226-adf7c488ba32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.396 253542 DEBUG oslo_concurrency.lockutils [req-85943c80-fdf8-41ff-a524-178cc9e05615 req-9f55746a-46a4-4284-b226-adf7c488ba32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.397 253542 DEBUG oslo_concurrency.lockutils [req-85943c80-fdf8-41ff-a524-178cc9e05615 req-9f55746a-46a4-4284-b226-adf7c488ba32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.397 253542 DEBUG oslo_concurrency.lockutils [req-85943c80-fdf8-41ff-a524-178cc9e05615 req-9f55746a-46a4-4284-b226-adf7c488ba32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.398 253542 DEBUG nova.compute.manager [req-85943c80-fdf8-41ff-a524-178cc9e05615 req-9f55746a-46a4-4284-b226-adf7c488ba32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] No waiting events found dispatching network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:36 compute-0 podman[318071]: 2025-11-25 08:34:36.398510094 +0000 UTC m=+0.184891810 container start 4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_clarke, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.398 253542 WARNING nova.compute.manager [req-85943c80-fdf8-41ff-a524-178cc9e05615 req-9f55746a-46a4-4284-b226-adf7c488ba32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received unexpected event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 for instance with vm_state active and task_state None.
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.407 253542 DEBUG nova.virt.libvirt.vif [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632172140',display_name='tempest-tempest.common.compute-instance-632172140',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632172140',id=58,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-yi71ppnf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=5c6656ef-7ad0-4eb4-a597-aa9a8078805b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.407 253542 DEBUG nova.network.os_vif_util [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.408 253542 DEBUG nova.network.os_vif_util [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.412 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.414 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:34:36 compute-0 podman[318071]: 2025-11-25 08:34:36.417278308 +0000 UTC m=+0.203660024 container attach 4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.417 253542 DEBUG nova.virt.libvirt.driver [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Attempting to detach device tapf2a4b65b-41 from instance 5c6656ef-7ad0-4eb4-a597-aa9a8078805b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.418 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] detach device xml: <interface type="ethernet">
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:26:ed:fc"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <target dev="tapf2a4b65b-41"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]: </interface>
Nov 25 08:34:36 compute-0 nova_compute[253538]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 08:34:36 compute-0 ceph-mon[75015]: pgmap v1543: 321 pgs: 321 active+clean; 402 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.424 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.427 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface>not found in domain: <domain type='kvm' id='67'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <name>instance-0000003a</name>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <uuid>5c6656ef-7ad0-4eb4-a597-aa9a8078805b</uuid>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:name>tempest-tempest.common.compute-instance-632172140</nova:name>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:34:33</nova:creationTime>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:port uuid="1682bdaf-1dd6-4036-8d17-a169dbaaca8f">
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:port uuid="f2a4b65b-419e-44be-9413-f01693268aa8">
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:34:36 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <system>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <entry name='serial'>5c6656ef-7ad0-4eb4-a597-aa9a8078805b</entry>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <entry name='uuid'>5c6656ef-7ad0-4eb4-a597-aa9a8078805b</entry>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </system>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <os>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </os>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <features>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </features>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk' index='2'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk.config' index='1'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:60:42:da'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target dev='tap1682bdaf-1d'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:26:ed:fc'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target dev='tapf2a4b65b-41'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='net1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <source path='/dev/pts/1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/console.log' append='off'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       </target>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/1'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <source path='/dev/pts/1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/console.log' append='off'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </console>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </input>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </input>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </input>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <video>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </video>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c36,c548</label>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c36,c548</imagelabel>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:34:36 compute-0 nova_compute[253538]: </domain>
Nov 25 08:34:36 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.428 253542 INFO nova.virt.libvirt.driver [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully detached device tapf2a4b65b-41 from instance 5c6656ef-7ad0-4eb4-a597-aa9a8078805b from the persistent domain config.
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.428 253542 DEBUG nova.virt.libvirt.driver [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] (1/8): Attempting to detach device tapf2a4b65b-41 with device alias net1 from instance 5c6656ef-7ad0-4eb4-a597-aa9a8078805b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.429 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] detach device xml: <interface type="ethernet">
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:26:ed:fc"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <target dev="tapf2a4b65b-41"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]: </interface>
Nov 25 08:34:36 compute-0 nova_compute[253538]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 08:34:36 compute-0 kernel: tapf2a4b65b-41 (unregistering): left promiscuous mode
Nov 25 08:34:36 compute-0 NetworkManager[48915]: <info>  [1764059676.5406] device (tapf2a4b65b-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.563 253542 DEBUG nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Received event <DeviceRemovedEvent: 1764059676.5634155, 5c6656ef-7ad0-4eb4-a597-aa9a8078805b => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.564 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:36 compute-0 ovn_controller[152859]: 2025-11-25T08:34:36Z|00564|binding|INFO|Releasing lport f2a4b65b-419e-44be-9413-f01693268aa8 from this chassis (sb_readonly=0)
Nov 25 08:34:36 compute-0 ovn_controller[152859]: 2025-11-25T08:34:36Z|00565|binding|INFO|Setting lport f2a4b65b-419e-44be-9413-f01693268aa8 down in Southbound
Nov 25 08:34:36 compute-0 ovn_controller[152859]: 2025-11-25T08:34:36Z|00566|binding|INFO|Removing iface tapf2a4b65b-41 ovn-installed in OVS
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.568 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.571 253542 DEBUG nova.virt.libvirt.driver [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Start waiting for the detach event from libvirt for device tapf2a4b65b-41 with device alias net1 for instance 5c6656ef-7ad0-4eb4-a597-aa9a8078805b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.572 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:34:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.572 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:ed:fc 10.100.0.3'], port_security=['fa:16:3e:26:ed:fc 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-663266119', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5c6656ef-7ad0-4eb4-a597-aa9a8078805b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-663266119', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f2a4b65b-419e-44be-9413-f01693268aa8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.573 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f2a4b65b-419e-44be-9413-f01693268aa8 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 08:34:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.575 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.576 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface>not found in domain: <domain type='kvm' id='67'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <name>instance-0000003a</name>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <uuid>5c6656ef-7ad0-4eb4-a597-aa9a8078805b</uuid>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:name>tempest-tempest.common.compute-instance-632172140</nova:name>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:34:33</nova:creationTime>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:port uuid="1682bdaf-1dd6-4036-8d17-a169dbaaca8f">
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:port uuid="f2a4b65b-419e-44be-9413-f01693268aa8">
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:34:36 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <system>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <entry name='serial'>5c6656ef-7ad0-4eb4-a597-aa9a8078805b</entry>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <entry name='uuid'>5c6656ef-7ad0-4eb4-a597-aa9a8078805b</entry>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </system>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <os>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </os>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <features>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </features>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk' index='2'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/5c6656ef-7ad0-4eb4-a597-aa9a8078805b_disk.config' index='1'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:60:42:da'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target dev='tap1682bdaf-1d'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <source path='/dev/pts/1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/console.log' append='off'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       </target>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/1'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <source path='/dev/pts/1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b/console.log' append='off'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </console>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </input>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </input>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </input>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <video>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </video>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c36,c548</label>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c36,c548</imagelabel>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:34:36 compute-0 nova_compute[253538]: </domain>
Nov 25 08:34:36 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.576 253542 INFO nova.virt.libvirt.driver [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully detached device tapf2a4b65b-41 from instance 5c6656ef-7ad0-4eb4-a597-aa9a8078805b from the live domain config.
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.577 253542 DEBUG nova.virt.libvirt.vif [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632172140',display_name='tempest-tempest.common.compute-instance-632172140',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632172140',id=58,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-yi71ppnf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=5c6656ef-7ad0-4eb4-a597-aa9a8078805b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.577 253542 DEBUG nova.network.os_vif_util [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.579 253542 DEBUG nova.network.os_vif_util [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.579 253542 DEBUG os_vif [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.584 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.585 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2a4b65b-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.588 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:34:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.593 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9c573077-0fb8-41cc-addd-2e89d39c4d6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.616 253542 INFO os_vif [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41')
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.616 253542 DEBUG nova.virt.libvirt.guest [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:name>tempest-tempest.common.compute-instance-632172140</nova:name>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:34:36</nova:creationTime>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     <nova:port uuid="1682bdaf-1dd6-4036-8d17-a169dbaaca8f">
Nov 25 08:34:36 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:34:36 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:34:36 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:34:36 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:34:36 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:34:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.636 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a3d26f-cef5-4472-89f7-94e47e71e8cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.640 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f4454cb7-e215-48bf-a809-5c893c3a6ad1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.672 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ca12de-4637-461f-b60e-4d703b5d0839]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.675 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:34:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.689 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fd25f7bb-a4ee-4c54-93f3-8d53372ac5cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318103, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.706 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2911c0da-3226-41e0-903e-bed028aabf5a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499086, 'tstamp': 499086}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318104, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499088, 'tstamp': 499088}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318104, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.707 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:36 compute-0 nova_compute[253538]: 2025-11-25 08:34:36.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.710 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.710 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.710 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:36.710 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 407 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.5 MiB/s wr, 165 op/s
Nov 25 08:34:37 compute-0 dreamy_clarke[318087]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:34:37 compute-0 dreamy_clarke[318087]: --> relative data size: 1.0
Nov 25 08:34:37 compute-0 dreamy_clarke[318087]: --> All data devices are unavailable
Nov 25 08:34:37 compute-0 systemd[1]: libpod-4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0.scope: Deactivated successfully.
Nov 25 08:34:37 compute-0 systemd[1]: libpod-4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0.scope: Consumed 1.027s CPU time.
Nov 25 08:34:37 compute-0 podman[318071]: 2025-11-25 08:34:37.540539877 +0000 UTC m=+1.326921583 container died 4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 08:34:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b06b83c690e9075e320bf9de04a406a4caab642e3b15187f68ef7f03244740ec-merged.mount: Deactivated successfully.
Nov 25 08:34:37 compute-0 podman[318071]: 2025-11-25 08:34:37.600772296 +0000 UTC m=+1.387154052 container remove 4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_clarke, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:34:37 compute-0 systemd[1]: libpod-conmon-4ef2272bc06eb5c39aa3018a0599c5bba776465bd03561f8bab724a4236696c0.scope: Deactivated successfully.
Nov 25 08:34:37 compute-0 sudo[317966]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:37 compute-0 sudo[318144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:34:37 compute-0 sudo[318144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:37 compute-0 sudo[318144]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:37 compute-0 sudo[318169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:34:37 compute-0 sudo[318169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:37 compute-0 sudo[318169]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:37 compute-0 sudo[318194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:34:37 compute-0 sudo[318194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:37 compute-0 sudo[318194]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:37 compute-0 nova_compute[253538]: 2025-11-25 08:34:37.895 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:37 compute-0 sudo[318220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:34:37 compute-0 sudo[318220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:37 compute-0 podman[318218]: 2025-11-25 08:34:37.972203178 +0000 UTC m=+0.104512349 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 08:34:38 compute-0 nova_compute[253538]: 2025-11-25 08:34:38.203 253542 DEBUG oslo_concurrency.lockutils [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:38 compute-0 nova_compute[253538]: 2025-11-25 08:34:38.204 253542 DEBUG oslo_concurrency.lockutils [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:38 compute-0 nova_compute[253538]: 2025-11-25 08:34:38.204 253542 DEBUG nova.network.neutron [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:34:38 compute-0 ovn_controller[152859]: 2025-11-25T08:34:38Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:73:f0:9b 10.100.0.11
Nov 25 08:34:38 compute-0 ovn_controller[152859]: 2025-11-25T08:34:38Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:f0:9b 10.100.0.11
Nov 25 08:34:38 compute-0 podman[318305]: 2025-11-25 08:34:38.301942201 +0000 UTC m=+0.061695449 container create f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 08:34:38 compute-0 systemd[1]: Started libpod-conmon-f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6.scope.
Nov 25 08:34:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:34:38 compute-0 podman[318305]: 2025-11-25 08:34:38.279002233 +0000 UTC m=+0.038755501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:34:38 compute-0 podman[318305]: 2025-11-25 08:34:38.388431954 +0000 UTC m=+0.148185232 container init f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:34:38 compute-0 podman[318305]: 2025-11-25 08:34:38.396208863 +0000 UTC m=+0.155962111 container start f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 08:34:38 compute-0 podman[318305]: 2025-11-25 08:34:38.399801901 +0000 UTC m=+0.159555139 container attach f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:34:38 compute-0 sleepy_wu[318321]: 167 167
Nov 25 08:34:38 compute-0 systemd[1]: libpod-f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6.scope: Deactivated successfully.
Nov 25 08:34:38 compute-0 podman[318305]: 2025-11-25 08:34:38.403354926 +0000 UTC m=+0.163108184 container died f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 08:34:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5b968a5c9d1b1215b896e920353779a8abd4970046db97f2d423a9840f70b35-merged.mount: Deactivated successfully.
Nov 25 08:34:38 compute-0 ceph-mon[75015]: pgmap v1544: 321 pgs: 321 active+clean; 407 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.5 MiB/s wr, 165 op/s
Nov 25 08:34:38 compute-0 podman[318305]: 2025-11-25 08:34:38.443722471 +0000 UTC m=+0.203475719 container remove f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 08:34:38 compute-0 systemd[1]: libpod-conmon-f71d2e8b636b9bb1acf278db0f387e37f62110c685827a0550b4968b6e8a37f6.scope: Deactivated successfully.
Nov 25 08:34:38 compute-0 podman[318345]: 2025-11-25 08:34:38.695938519 +0000 UTC m=+0.051745371 container create f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 08:34:38 compute-0 systemd[1]: Started libpod-conmon-f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490.scope.
Nov 25 08:34:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3caa84de915cca0f43548acb9c2a267f1698fd81e1651a5556c5c80faf00fef5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3caa84de915cca0f43548acb9c2a267f1698fd81e1651a5556c5c80faf00fef5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:34:38 compute-0 podman[318345]: 2025-11-25 08:34:38.678514261 +0000 UTC m=+0.034321143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3caa84de915cca0f43548acb9c2a267f1698fd81e1651a5556c5c80faf00fef5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3caa84de915cca0f43548acb9c2a267f1698fd81e1651a5556c5c80faf00fef5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:34:38 compute-0 podman[318345]: 2025-11-25 08:34:38.787720766 +0000 UTC m=+0.143527618 container init f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 08:34:38 compute-0 podman[318345]: 2025-11-25 08:34:38.794620991 +0000 UTC m=+0.150427843 container start f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 25 08:34:38 compute-0 podman[318345]: 2025-11-25 08:34:38.797461697 +0000 UTC m=+0.153268569 container attach f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 08:34:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 420 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.4 MiB/s wr, 177 op/s
Nov 25 08:34:39 compute-0 nova_compute[253538]: 2025-11-25 08:34:39.421 253542 DEBUG nova.compute.manager [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-unplugged-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:39 compute-0 nova_compute[253538]: 2025-11-25 08:34:39.422 253542 DEBUG oslo_concurrency.lockutils [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:39 compute-0 nova_compute[253538]: 2025-11-25 08:34:39.422 253542 DEBUG oslo_concurrency.lockutils [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:39 compute-0 nova_compute[253538]: 2025-11-25 08:34:39.422 253542 DEBUG oslo_concurrency.lockutils [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:39 compute-0 nova_compute[253538]: 2025-11-25 08:34:39.422 253542 DEBUG nova.compute.manager [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] No waiting events found dispatching network-vif-unplugged-f2a4b65b-419e-44be-9413-f01693268aa8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:39 compute-0 nova_compute[253538]: 2025-11-25 08:34:39.423 253542 WARNING nova.compute.manager [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received unexpected event network-vif-unplugged-f2a4b65b-419e-44be-9413-f01693268aa8 for instance with vm_state active and task_state None.
Nov 25 08:34:39 compute-0 nova_compute[253538]: 2025-11-25 08:34:39.423 253542 DEBUG nova.compute.manager [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:39 compute-0 nova_compute[253538]: 2025-11-25 08:34:39.423 253542 DEBUG oslo_concurrency.lockutils [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:39 compute-0 nova_compute[253538]: 2025-11-25 08:34:39.423 253542 DEBUG oslo_concurrency.lockutils [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:39 compute-0 nova_compute[253538]: 2025-11-25 08:34:39.423 253542 DEBUG oslo_concurrency.lockutils [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:39 compute-0 nova_compute[253538]: 2025-11-25 08:34:39.423 253542 DEBUG nova.compute.manager [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] No waiting events found dispatching network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:39 compute-0 nova_compute[253538]: 2025-11-25 08:34:39.424 253542 WARNING nova.compute.manager [req-7397d89e-4554-4a36-bb1f-1a69bd43d1c4 req-5c14f039-68e6-4ac5-9216-a56db52abc99 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received unexpected event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 for instance with vm_state active and task_state None.
Nov 25 08:34:39 compute-0 nova_compute[253538]: 2025-11-25 08:34:39.473 253542 DEBUG nova.virt.libvirt.driver [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:34:39 compute-0 kind_booth[318362]: {
Nov 25 08:34:39 compute-0 kind_booth[318362]:     "0": [
Nov 25 08:34:39 compute-0 kind_booth[318362]:         {
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "devices": [
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "/dev/loop3"
Nov 25 08:34:39 compute-0 kind_booth[318362]:             ],
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "lv_name": "ceph_lv0",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "lv_size": "21470642176",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "name": "ceph_lv0",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "tags": {
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.cluster_name": "ceph",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.crush_device_class": "",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.encrypted": "0",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.osd_id": "0",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.type": "block",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.vdo": "0"
Nov 25 08:34:39 compute-0 kind_booth[318362]:             },
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "type": "block",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "vg_name": "ceph_vg0"
Nov 25 08:34:39 compute-0 kind_booth[318362]:         }
Nov 25 08:34:39 compute-0 kind_booth[318362]:     ],
Nov 25 08:34:39 compute-0 kind_booth[318362]:     "1": [
Nov 25 08:34:39 compute-0 kind_booth[318362]:         {
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "devices": [
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "/dev/loop4"
Nov 25 08:34:39 compute-0 kind_booth[318362]:             ],
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "lv_name": "ceph_lv1",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "lv_size": "21470642176",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "name": "ceph_lv1",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "tags": {
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.cluster_name": "ceph",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.crush_device_class": "",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.encrypted": "0",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.osd_id": "1",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.type": "block",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.vdo": "0"
Nov 25 08:34:39 compute-0 kind_booth[318362]:             },
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "type": "block",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "vg_name": "ceph_vg1"
Nov 25 08:34:39 compute-0 kind_booth[318362]:         }
Nov 25 08:34:39 compute-0 kind_booth[318362]:     ],
Nov 25 08:34:39 compute-0 kind_booth[318362]:     "2": [
Nov 25 08:34:39 compute-0 kind_booth[318362]:         {
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "devices": [
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "/dev/loop5"
Nov 25 08:34:39 compute-0 kind_booth[318362]:             ],
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "lv_name": "ceph_lv2",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "lv_size": "21470642176",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "name": "ceph_lv2",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "tags": {
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.cluster_name": "ceph",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.crush_device_class": "",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.encrypted": "0",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.osd_id": "2",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.type": "block",
Nov 25 08:34:39 compute-0 kind_booth[318362]:                 "ceph.vdo": "0"
Nov 25 08:34:39 compute-0 kind_booth[318362]:             },
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "type": "block",
Nov 25 08:34:39 compute-0 kind_booth[318362]:             "vg_name": "ceph_vg2"
Nov 25 08:34:39 compute-0 kind_booth[318362]:         }
Nov 25 08:34:39 compute-0 kind_booth[318362]:     ]
Nov 25 08:34:39 compute-0 kind_booth[318362]: }
Nov 25 08:34:39 compute-0 systemd[1]: libpod-f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490.scope: Deactivated successfully.
Nov 25 08:34:39 compute-0 podman[318345]: 2025-11-25 08:34:39.629177921 +0000 UTC m=+0.984984773 container died f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:34:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3caa84de915cca0f43548acb9c2a267f1698fd81e1651a5556c5c80faf00fef5-merged.mount: Deactivated successfully.
Nov 25 08:34:40 compute-0 podman[318345]: 2025-11-25 08:34:40.271532055 +0000 UTC m=+1.627338907 container remove f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:34:40 compute-0 systemd[1]: libpod-conmon-f8ec6237a1bbb32d11dd70963275acea0cfc9fef92fdb4ddd691c5b6b75fa490.scope: Deactivated successfully.
Nov 25 08:34:40 compute-0 sudo[318220]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:40 compute-0 sudo[318385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:34:40 compute-0 sudo[318385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:40 compute-0 sudo[318385]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:40 compute-0 ceph-mon[75015]: pgmap v1545: 321 pgs: 321 active+clean; 420 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.4 MiB/s wr, 177 op/s
Nov 25 08:34:40 compute-0 sudo[318410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:34:40 compute-0 sudo[318410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:40 compute-0 sudo[318410]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:40 compute-0 sudo[318435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:34:40 compute-0 sudo[318435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:40 compute-0 sudo[318435]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:40 compute-0 sudo[318460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:34:40 compute-0 sudo[318460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:40 compute-0 nova_compute[253538]: 2025-11-25 08:34:40.733 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:34:40 compute-0 podman[318523]: 2025-11-25 08:34:40.988671219 +0000 UTC m=+0.066085557 container create 7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 08:34:41 compute-0 podman[318523]: 2025-11-25 08:34:40.944836421 +0000 UTC m=+0.022250779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:34:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:41.058 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:41.061 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:41.062 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:41 compute-0 systemd[1]: Started libpod-conmon-7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0.scope.
Nov 25 08:34:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:34:41 compute-0 podman[318523]: 2025-11-25 08:34:41.129995487 +0000 UTC m=+0.207409855 container init 7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 08:34:41 compute-0 podman[318523]: 2025-11-25 08:34:41.140279984 +0000 UTC m=+0.217694322 container start 7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 08:34:41 compute-0 hardcore_shaw[318539]: 167 167
Nov 25 08:34:41 compute-0 systemd[1]: libpod-7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0.scope: Deactivated successfully.
Nov 25 08:34:41 compute-0 conmon[318539]: conmon 7673b519e9e5a7d9c0d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0.scope/container/memory.events
Nov 25 08:34:41 compute-0 podman[318523]: 2025-11-25 08:34:41.219935744 +0000 UTC m=+0.297350182 container attach 7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 08:34:41 compute-0 podman[318523]: 2025-11-25 08:34:41.220989633 +0000 UTC m=+0.298403981 container died 7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:34:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 437 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 931 KiB/s rd, 6.4 MiB/s wr, 187 op/s
Nov 25 08:34:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-70b5a491dda378af6bd9eef8366851c6de8b71bdfde4f81b60633de5a1394d53-merged.mount: Deactivated successfully.
Nov 25 08:34:41 compute-0 podman[318523]: 2025-11-25 08:34:41.43110671 +0000 UTC m=+0.508521078 container remove 7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:34:41 compute-0 systemd[1]: libpod-conmon-7673b519e9e5a7d9c0d25820b3c379b239ba6f146e2c271e2c064462734072d0.scope: Deactivated successfully.
Nov 25 08:34:41 compute-0 nova_compute[253538]: 2025-11-25 08:34:41.588 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:41 compute-0 podman[318564]: 2025-11-25 08:34:41.707180669 +0000 UTC m=+0.060015694 container create d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lamport, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:34:41 compute-0 systemd[1]: Started libpod-conmon-d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81.scope.
Nov 25 08:34:41 compute-0 podman[318564]: 2025-11-25 08:34:41.677843001 +0000 UTC m=+0.030678046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:34:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:34:41 compute-0 nova_compute[253538]: 2025-11-25 08:34:41.787 253542 INFO nova.network.neutron [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Port f2a4b65b-419e-44be-9413-f01693268aa8 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 08:34:41 compute-0 nova_compute[253538]: 2025-11-25 08:34:41.788 253542 DEBUG nova.network.neutron [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa0c99149a76e4629cdf6ad8208140f9f21c4b5900f96d3066fd2df05d6641d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa0c99149a76e4629cdf6ad8208140f9f21c4b5900f96d3066fd2df05d6641d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa0c99149a76e4629cdf6ad8208140f9f21c4b5900f96d3066fd2df05d6641d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa0c99149a76e4629cdf6ad8208140f9f21c4b5900f96d3066fd2df05d6641d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:34:41 compute-0 podman[318564]: 2025-11-25 08:34:41.804102604 +0000 UTC m=+0.156937599 container init d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lamport, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:34:41 compute-0 podman[318564]: 2025-11-25 08:34:41.816737543 +0000 UTC m=+0.169572528 container start d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lamport, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:34:41 compute-0 podman[318564]: 2025-11-25 08:34:41.821777839 +0000 UTC m=+0.174612824 container attach d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:34:41 compute-0 nova_compute[253538]: 2025-11-25 08:34:41.872 253542 DEBUG oslo_concurrency.lockutils [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:41 compute-0 nova_compute[253538]: 2025-11-25 08:34:41.894 253542 DEBUG oslo_concurrency.lockutils [None req-5d44d1bd-3f6f-48a5-8fa6-7c656f5e1d6c 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-5c6656ef-7ad0-4eb4-a597-aa9a8078805b-f2a4b65b-419e-44be-9413-f01693268aa8" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:42 compute-0 ceph-mon[75015]: pgmap v1546: 321 pgs: 321 active+clean; 437 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 931 KiB/s rd, 6.4 MiB/s wr, 187 op/s
Nov 25 08:34:42 compute-0 kernel: tap269f9bd3-f2 (unregistering): left promiscuous mode
Nov 25 08:34:42 compute-0 nova_compute[253538]: 2025-11-25 08:34:42.488 253542 INFO nova.virt.libvirt.driver [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance shutdown successfully after 24 seconds.
Nov 25 08:34:42 compute-0 NetworkManager[48915]: <info>  [1764059682.4935] device (tap269f9bd3-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:34:42 compute-0 ovn_controller[152859]: 2025-11-25T08:34:42Z|00567|binding|INFO|Releasing lport 269f9bd3-f267-459c-8e24-4b1f6c943345 from this chassis (sb_readonly=0)
Nov 25 08:34:42 compute-0 ovn_controller[152859]: 2025-11-25T08:34:42Z|00568|binding|INFO|Setting lport 269f9bd3-f267-459c-8e24-4b1f6c943345 down in Southbound
Nov 25 08:34:42 compute-0 nova_compute[253538]: 2025-11-25 08:34:42.501 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:42 compute-0 ovn_controller[152859]: 2025-11-25T08:34:42Z|00569|binding|INFO|Removing iface tap269f9bd3-f2 ovn-installed in OVS
Nov 25 08:34:42 compute-0 nova_compute[253538]: 2025-11-25 08:34:42.504 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.512 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:ae:8e 10.100.0.12'], port_security=['fa:16:3e:13:ae:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bf44124c-1a65-4bde-a777-043ae1a53557', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=269f9bd3-f267-459c-8e24-4b1f6c943345) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.513 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 269f9bd3-f267-459c-8e24-4b1f6c943345 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 unbound from our chassis
Nov 25 08:34:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.516 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a66e51b8-ecb0-4289-a1b5-d5e379727721, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:34:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.517 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3802cc64-0afb-4050-afc1-dca398deaed7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.518 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace which is not needed anymore
Nov 25 08:34:42 compute-0 nova_compute[253538]: 2025-11-25 08:34:42.521 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:42 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Nov 25 08:34:42 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003c.scope: Consumed 15.318s CPU time.
Nov 25 08:34:42 compute-0 systemd-machined[215790]: Machine qemu-69-instance-0000003c terminated.
Nov 25 08:34:42 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[316655]: [NOTICE]   (316659) : haproxy version is 2.8.14-c23fe91
Nov 25 08:34:42 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[316655]: [NOTICE]   (316659) : path to executable is /usr/sbin/haproxy
Nov 25 08:34:42 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[316655]: [WARNING]  (316659) : Exiting Master process...
Nov 25 08:34:42 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[316655]: [ALERT]    (316659) : Current worker (316661) exited with code 143 (Terminated)
Nov 25 08:34:42 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[316655]: [WARNING]  (316659) : All workers exited. Exiting... (0)
Nov 25 08:34:42 compute-0 systemd[1]: libpod-6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624.scope: Deactivated successfully.
Nov 25 08:34:42 compute-0 conmon[316655]: conmon 6e22b195a22364538f85 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624.scope/container/memory.events
Nov 25 08:34:42 compute-0 podman[318616]: 2025-11-25 08:34:42.647998934 +0000 UTC m=+0.045434391 container died 6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:34:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624-userdata-shm.mount: Deactivated successfully.
Nov 25 08:34:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8245a978a5022c98149f488e1744d8b7505a54bb18353d96e818cdced9046a6b-merged.mount: Deactivated successfully.
Nov 25 08:34:42 compute-0 nova_compute[253538]: 2025-11-25 08:34:42.735 253542 INFO nova.virt.libvirt.driver [-] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance destroyed successfully.
Nov 25 08:34:42 compute-0 nova_compute[253538]: 2025-11-25 08:34:42.735 253542 DEBUG nova.objects.instance [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'numa_topology' on Instance uuid bf44124c-1a65-4bde-a777-043ae1a53557 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:42 compute-0 nova_compute[253538]: 2025-11-25 08:34:42.760 253542 DEBUG nova.compute.manager [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:42 compute-0 podman[318616]: 2025-11-25 08:34:42.772411718 +0000 UTC m=+0.169847175 container cleanup 6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 08:34:42 compute-0 systemd[1]: libpod-conmon-6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624.scope: Deactivated successfully.
Nov 25 08:34:42 compute-0 nova_compute[253538]: 2025-11-25 08:34:42.799 253542 DEBUG oslo_concurrency.lockutils [None req-1aa77871-377d-42f4-9eb6-3670589c0274 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:42 compute-0 nervous_lamport[318580]: {
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:         "osd_id": 1,
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:         "type": "bluestore"
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:     },
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:         "osd_id": 2,
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:         "type": "bluestore"
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:     },
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:         "osd_id": 0,
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:         "type": "bluestore"
Nov 25 08:34:42 compute-0 nervous_lamport[318580]:     }
Nov 25 08:34:42 compute-0 nervous_lamport[318580]: }
Nov 25 08:34:42 compute-0 systemd[1]: libpod-d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81.scope: Deactivated successfully.
Nov 25 08:34:42 compute-0 systemd[1]: libpod-d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81.scope: Consumed 1.006s CPU time.
Nov 25 08:34:42 compute-0 podman[318564]: 2025-11-25 08:34:42.83460246 +0000 UTC m=+1.187437455 container died d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 08:34:42 compute-0 podman[318674]: 2025-11-25 08:34:42.864274257 +0000 UTC m=+0.069190951 container remove 6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:34:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.872 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb1a323-d470-43d3-ab62-6b5486a97d40]: (4, ('Tue Nov 25 08:34:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624)\n6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624\nTue Nov 25 08:34:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624)\n6e22b195a22364538f85e0288f00a1284d5f2f55e8968e456a8c6ed978d6d624\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-efa0c99149a76e4629cdf6ad8208140f9f21c4b5900f96d3066fd2df05d6641d-merged.mount: Deactivated successfully.
Nov 25 08:34:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.876 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1adc95ac-af36-4cd1-94fa-058dddb86417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.878 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:42 compute-0 nova_compute[253538]: 2025-11-25 08:34:42.880 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:42 compute-0 kernel: tapa66e51b8-e0: left promiscuous mode
Nov 25 08:34:42 compute-0 nova_compute[253538]: 2025-11-25 08:34:42.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:42 compute-0 nova_compute[253538]: 2025-11-25 08:34:42.905 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.908 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[15ea0fc6-14b2-4013-948f-7f9b4534dcfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:42 compute-0 podman[318564]: 2025-11-25 08:34:42.914547048 +0000 UTC m=+1.267382033 container remove d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:34:42 compute-0 systemd[1]: libpod-conmon-d3fffb18af60199a32f4f73f08ae07f91a8fd4c38a2c63e6482f22fe43218b81.scope: Deactivated successfully.
Nov 25 08:34:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.923 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[50fc3d7b-f231-44cd-871e-c7ab82526ed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.926 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[27fc10a1-98f4-46b3-a361-2fd93fa32c23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.947 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[114497d1-af42-45ee-99f4-8f0aaa81061c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500876, 'reachable_time': 34904, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318707, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:42 compute-0 sudo[318460]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:42 compute-0 systemd[1]: run-netns-ovnmeta\x2da66e51b8\x2decb0\x2d4289\x2da1b5\x2dd5e379727721.mount: Deactivated successfully.
Nov 25 08:34:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.950 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:34:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:42.951 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[87b46483-ccbe-443f-8998-ab35d46a300b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:34:42 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:34:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:34:42 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:34:42 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev ce77821b-a426-4c46-8d24-c31e9078ce39 does not exist
Nov 25 08:34:42 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 0cafe8a4-51f5-435c-b5d9-1fd27d273cd4 does not exist
Nov 25 08:34:43 compute-0 sudo[318708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:34:43 compute-0 sudo[318708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:43 compute-0 sudo[318708]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:43 compute-0 nova_compute[253538]: 2025-11-25 08:34:43.025 253542 DEBUG nova.compute.manager [req-24668577-37a9-4f3b-8375-bfcb7a368548 req-8f1b0644-1ad0-49d9-a80f-b2b95b174283 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:43 compute-0 nova_compute[253538]: 2025-11-25 08:34:43.025 253542 DEBUG nova.compute.manager [req-24668577-37a9-4f3b-8375-bfcb7a368548 req-8f1b0644-1ad0-49d9-a80f-b2b95b174283 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing instance network info cache due to event network-changed-1682bdaf-1dd6-4036-8d17-a169dbaaca8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:43 compute-0 nova_compute[253538]: 2025-11-25 08:34:43.025 253542 DEBUG oslo_concurrency.lockutils [req-24668577-37a9-4f3b-8375-bfcb7a368548 req-8f1b0644-1ad0-49d9-a80f-b2b95b174283 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:43 compute-0 nova_compute[253538]: 2025-11-25 08:34:43.026 253542 DEBUG oslo_concurrency.lockutils [req-24668577-37a9-4f3b-8375-bfcb7a368548 req-8f1b0644-1ad0-49d9-a80f-b2b95b174283 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:43 compute-0 nova_compute[253538]: 2025-11-25 08:34:43.026 253542 DEBUG nova.network.neutron [req-24668577-37a9-4f3b-8375-bfcb7a368548 req-8f1b0644-1ad0-49d9-a80f-b2b95b174283 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Refreshing network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:43 compute-0 sudo[318733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:34:43 compute-0 sudo[318733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:34:43 compute-0 sudo[318733]: pam_unix(sudo:session): session closed for user root
Nov 25 08:34:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 440 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 939 KiB/s rd, 6.0 MiB/s wr, 187 op/s
Nov 25 08:34:43 compute-0 nova_compute[253538]: 2025-11-25 08:34:43.390 253542 DEBUG oslo_concurrency.lockutils [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-52d39d67-b456-44e4-8804-2de0c941edae-f2a4b65b-419e-44be-9413-f01693268aa8" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:43 compute-0 nova_compute[253538]: 2025-11-25 08:34:43.391 253542 DEBUG oslo_concurrency.lockutils [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-52d39d67-b456-44e4-8804-2de0c941edae-f2a4b65b-419e-44be-9413-f01693268aa8" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:43 compute-0 nova_compute[253538]: 2025-11-25 08:34:43.391 253542 DEBUG nova.objects.instance [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 52d39d67-b456-44e4-8804-2de0c941edae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:43 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:34:43 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:34:43 compute-0 ceph-mon[75015]: pgmap v1547: 321 pgs: 321 active+clean; 440 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 939 KiB/s rd, 6.0 MiB/s wr, 187 op/s
Nov 25 08:34:43 compute-0 nova_compute[253538]: 2025-11-25 08:34:43.994 253542 DEBUG nova.objects.instance [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'pci_requests' on Instance uuid 52d39d67-b456-44e4-8804-2de0c941edae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.008 253542 DEBUG nova.network.neutron [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.160 253542 DEBUG nova.network.neutron [req-24668577-37a9-4f3b-8375-bfcb7a368548 req-8f1b0644-1ad0-49d9-a80f-b2b95b174283 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updated VIF entry in instance network info cache for port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.160 253542 DEBUG nova.network.neutron [req-24668577-37a9-4f3b-8375-bfcb7a368548 req-8f1b0644-1ad0-49d9-a80f-b2b95b174283 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [{"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.336 253542 DEBUG oslo_concurrency.lockutils [req-24668577-37a9-4f3b-8375-bfcb7a368548 req-8f1b0644-1ad0-49d9-a80f-b2b95b174283 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5c6656ef-7ad0-4eb4-a597-aa9a8078805b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.360 253542 DEBUG nova.policy [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '329d8dc9d78743d4a09a38fef3a9143d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d8307470c794815a028592990efca57', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.367 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.368 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.392 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.453 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.454 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.474 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.503 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.504 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.506 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.506 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.517 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.518 253542 INFO nova.compute.claims [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.529 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.544 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.596 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:44 compute-0 nova_compute[253538]: 2025-11-25 08:34:44.722 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:44 compute-0 podman[318758]: 2025-11-25 08:34:44.851656971 +0000 UTC m=+0.083436064 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:34:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:34:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1784611586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.154 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.161 253542 DEBUG nova.compute.provider_tree [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.186 253542 DEBUG nova.scheduler.client.report [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:34:45 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1784611586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.215 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.216 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.220 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.227 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.227 253542 INFO nova.compute.claims [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:34:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 440 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 740 KiB/s rd, 4.3 MiB/s wr, 145 op/s
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.298 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.299 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.325 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.349 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.508 253542 DEBUG nova.policy [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dc986518148d44de9f5908ed5be317bd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.532 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.623 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.625 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.625 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Creating image(s)
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.651 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.679 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.710 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.713 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.744 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.784 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.785 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.786 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.786 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.810 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.813 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:34:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:34:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790279086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.978 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:45 compute-0 nova_compute[253538]: 2025-11-25 08:34:45.984 253542 DEBUG nova.compute.provider_tree [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.004 253542 DEBUG nova.scheduler.client.report [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.043 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.045 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.048 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.053 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.054 253542 INFO nova.compute.claims [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.109 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.109 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.162 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.177 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:46 compute-0 ceph-mon[75015]: pgmap v1548: 321 pgs: 321 active+clean; 440 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 740 KiB/s rd, 4.3 MiB/s wr, 145 op/s
Nov 25 08:34:46 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2790279086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.206 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.245 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] resizing rbd image d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.368 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.370 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.371 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Creating image(s)
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.396 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.424 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.454 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.457 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.492 253542 DEBUG nova.policy [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dc986518148d44de9f5908ed5be317bd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.498 253542 DEBUG nova.compute.manager [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-changed-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.498 253542 DEBUG nova.compute.manager [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing instance network info cache due to event network-changed-9fa407fa-661b-4b02-b4f4-656f6ae34cd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.498 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.498 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.499 253542 DEBUG nova.network.neutron [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing network info cache for port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.505 253542 DEBUG nova.objects.instance [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'migration_context' on Instance uuid d99c7a05-3cc3-4a8b-bce4-1185023a269f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.523 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.523 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Ensure instance console log exists: /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.523 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.524 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.524 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.537 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.538 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.539 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.540 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.568 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.572 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.636 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:46 compute-0 nova_compute[253538]: 2025-11-25 08:34:46.666 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.031 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.081 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] resizing rbd image a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:34:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:34:47 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2836616779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.201 253542 DEBUG nova.network.neutron [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Successfully updated port: f2a4b65b-419e-44be-9413-f01693268aa8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.203 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:47 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2836616779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.209 253542 DEBUG nova.objects.instance [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'migration_context' on Instance uuid a4fd9f97-b160-432d-9cb7-0fa3874c6468 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.213 253542 DEBUG nova.compute.provider_tree [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.215 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Successfully created port: 182a5a1a-c06d-4265-857f-3ea363ae01c2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.218 253542 DEBUG oslo_concurrency.lockutils [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.219 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.219 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Ensure instance console log exists: /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.219 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.220 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.220 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.226 253542 DEBUG nova.scheduler.client.report [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:34:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 477 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.274 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.275 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.429 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.430 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.431 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.431 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.432 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.434 253542 INFO nova.compute.manager [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Terminating instance
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.436 253542 DEBUG nova.compute.manager [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.438 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.438 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.455 253542 INFO nova.virt.libvirt.driver [-] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Instance destroyed successfully.
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.456 253542 DEBUG nova.objects.instance [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'resources' on Instance uuid bf44124c-1a65-4bde-a777-043ae1a53557 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.467 253542 DEBUG nova.virt.libvirt.vif [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-960435538',display_name='tempest-DeleteServersTestJSON-server-960435538',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-960435538',id=60,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-31sgadur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:42Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=bf44124c-1a65-4bde-a777-043ae1a53557,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.468 253542 DEBUG nova.network.os_vif_util [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "269f9bd3-f267-459c-8e24-4b1f6c943345", "address": "fa:16:3e:13:ae:8e", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap269f9bd3-f2", "ovs_interfaceid": "269f9bd3-f267-459c-8e24-4b1f6c943345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.469 253542 DEBUG nova.network.os_vif_util [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ae:8e,bridge_name='br-int',has_traffic_filtering=True,id=269f9bd3-f267-459c-8e24-4b1f6c943345,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap269f9bd3-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.470 253542 DEBUG os_vif [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ae:8e,bridge_name='br-int',has_traffic_filtering=True,id=269f9bd3-f267-459c-8e24-4b1f6c943345,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap269f9bd3-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.474 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.474 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269f9bd3-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.477 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.481 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.484 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.487 253542 INFO os_vif [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ae:8e,bridge_name='br-int',has_traffic_filtering=True,id=269f9bd3-f267-459c-8e24-4b1f6c943345,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap269f9bd3-f2')
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.508 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.643 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.646 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.647 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Creating image(s)
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.676 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.701 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.730 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.734 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.776 253542 DEBUG nova.policy [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dc986518148d44de9f5908ed5be317bd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.789 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.826 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.827 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.828 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.828 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.860 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.865 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:47 compute-0 podman[319246]: 2025-11-25 08:34:47.880213056 +0000 UTC m=+0.127040886 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true)
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.904 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Successfully created port: f9d205bf-0705-485d-b89c-f9b9c3cdccdb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.976 253542 INFO nova.virt.libvirt.driver [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Deleting instance files /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557_del
Nov 25 08:34:47 compute-0 nova_compute[253538]: 2025-11-25 08:34:47.977 253542 INFO nova.virt.libvirt.driver [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Deletion of /var/lib/nova/instances/bf44124c-1a65-4bde-a777-043ae1a53557_del complete
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.028 253542 INFO nova.compute.manager [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Took 0.59 seconds to destroy the instance on the hypervisor.
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.029 253542 DEBUG oslo.service.loopingcall [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.031 253542 DEBUG nova.compute.manager [-] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.031 253542 DEBUG nova.network.neutron [-] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.104 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Successfully updated port: 182a5a1a-c06d-4265-857f-3ea363ae01c2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.119 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "refresh_cache-d99c7a05-3cc3-4a8b-bce4-1185023a269f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.120 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquired lock "refresh_cache-d99c7a05-3cc3-4a8b-bce4-1185023a269f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.120 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.200 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:48 compute-0 ceph-mon[75015]: pgmap v1549: 321 pgs: 321 active+clean; 477 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.255 253542 DEBUG nova.network.neutron [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updated VIF entry in instance network info cache for port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.257 253542 DEBUG nova.network.neutron [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.264 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] resizing rbd image e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.290 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.291 253542 DEBUG nova.compute.manager [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received event network-vif-unplugged-269f9bd3-f267-459c-8e24-4b1f6c943345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.292 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.292 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.292 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.292 253542 DEBUG nova.compute.manager [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] No waiting events found dispatching network-vif-unplugged-269f9bd3-f267-459c-8e24-4b1f6c943345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.292 253542 WARNING nova.compute.manager [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received unexpected event network-vif-unplugged-269f9bd3-f267-459c-8e24-4b1f6c943345 for instance with vm_state stopped and task_state None.
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.292 253542 DEBUG nova.compute.manager [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received event network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.293 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.293 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.293 253542 DEBUG oslo_concurrency.lockutils [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.293 253542 DEBUG nova.compute.manager [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] No waiting events found dispatching network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.293 253542 WARNING nova.compute.manager [req-f6c2b60a-6c85-4550-93dc-064e1b75996f req-4fd63bb9-699b-4b69-ad64-5f3fb93d45c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received unexpected event network-vif-plugged-269f9bd3-f267-459c-8e24-4b1f6c943345 for instance with vm_state stopped and task_state None.
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.294 253542 DEBUG oslo_concurrency.lockutils [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.294 253542 DEBUG nova.network.neutron [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.316 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.342 253542 DEBUG nova.objects.instance [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'migration_context' on Instance uuid e07ccbcb-d60d-4c15-95c2-9f5046ab99a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.353 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.353 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Ensure instance console log exists: /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.354 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.354 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.354 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.476 253542 DEBUG nova.compute.manager [req-055411a1-74ac-4424-b1d2-dc4c518f2f32 req-fdbc24b2-668a-49d4-8d56-13563317ccd7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received event network-changed-182a5a1a-c06d-4265-857f-3ea363ae01c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.477 253542 DEBUG nova.compute.manager [req-055411a1-74ac-4424-b1d2-dc4c518f2f32 req-fdbc24b2-668a-49d4-8d56-13563317ccd7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Refreshing instance network info cache due to event network-changed-182a5a1a-c06d-4265-857f-3ea363ae01c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.477 253542 DEBUG oslo_concurrency.lockutils [req-055411a1-74ac-4424-b1d2-dc4c518f2f32 req-fdbc24b2-668a-49d4-8d56-13563317ccd7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-d99c7a05-3cc3-4a8b-bce4-1185023a269f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.520 253542 WARNING nova.network.neutron [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe already exists in list: networks containing: ['9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe']. ignoring it
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.856 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Successfully created port: 54f02527-a6c1-4059-aa22-2c19fc6f351d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.877 253542 DEBUG nova.network.neutron [-] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.902 253542 INFO nova.compute.manager [-] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Took 0.87 seconds to deallocate network for instance.
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.957 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:48 compute-0 nova_compute[253538]: 2025-11-25 08:34:48.957 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.154 253542 DEBUG oslo_concurrency.processutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.227 253542 DEBUG nova.compute.manager [req-195059ee-9d4d-4fad-8132-8722f099b55d req-adf28043-f52f-4401-8dc9-d2dbed02789f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-changed-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.228 253542 DEBUG nova.compute.manager [req-195059ee-9d4d-4fad-8132-8722f099b55d req-adf28043-f52f-4401-8dc9-d2dbed02789f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing instance network info cache due to event network-changed-f2a4b65b-419e-44be-9413-f01693268aa8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.228 253542 DEBUG oslo_concurrency.lockutils [req-195059ee-9d4d-4fad-8132-8722f099b55d req-adf28043-f52f-4401-8dc9-d2dbed02789f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 481 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 4.6 MiB/s wr, 136 op/s
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.305 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Updating instance_info_cache with network_info: [{"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.323 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Releasing lock "refresh_cache-d99c7a05-3cc3-4a8b-bce4-1185023a269f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.323 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Instance network_info: |[{"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.324 253542 DEBUG oslo_concurrency.lockutils [req-055411a1-74ac-4424-b1d2-dc4c518f2f32 req-fdbc24b2-668a-49d4-8d56-13563317ccd7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-d99c7a05-3cc3-4a8b-bce4-1185023a269f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.324 253542 DEBUG nova.network.neutron [req-055411a1-74ac-4424-b1d2-dc4c518f2f32 req-fdbc24b2-668a-49d4-8d56-13563317ccd7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Refreshing network info cache for port 182a5a1a-c06d-4265-857f-3ea363ae01c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.326 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Start _get_guest_xml network_info=[{"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.328 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Successfully updated port: f9d205bf-0705-485d-b89c-f9b9c3cdccdb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.333 253542 WARNING nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.339 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.340 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.347 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.348 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.349 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.349 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.350 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.350 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.351 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.351 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.351 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.351 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.352 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.352 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.352 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.352 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.356 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.395 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "refresh_cache-a4fd9f97-b160-432d-9cb7-0fa3874c6468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.396 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquired lock "refresh_cache-a4fd9f97-b160-432d-9cb7-0fa3874c6468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.396 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.560 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:34:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:34:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/62237574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.638 253542 DEBUG oslo_concurrency.processutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.644 253542 DEBUG nova.compute.provider_tree [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.661 253542 DEBUG nova.scheduler.client.report [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.687 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.730 253542 INFO nova.scheduler.client.report [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Deleted allocations for instance bf44124c-1a65-4bde-a777-043ae1a53557
Nov 25 08:34:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3654611122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.794 253542 DEBUG oslo_concurrency.lockutils [None req-72368f87-f064-4902-b8f7-414f65b8954f a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "bf44124c-1a65-4bde-a777-043ae1a53557" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.365s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.800 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.819 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:49 compute-0 nova_compute[253538]: 2025-11-25 08:34:49.823 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:50 compute-0 kernel: tap9200cc12-92 (unregistering): left promiscuous mode
Nov 25 08:34:50 compute-0 NetworkManager[48915]: <info>  [1764059690.0619] device (tap9200cc12-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:34:50 compute-0 ovn_controller[152859]: 2025-11-25T08:34:50Z|00570|binding|INFO|Releasing lport 9200cc12-927d-418b-99c1-ca0421535979 from this chassis (sb_readonly=0)
Nov 25 08:34:50 compute-0 ovn_controller[152859]: 2025-11-25T08:34:50Z|00571|binding|INFO|Setting lport 9200cc12-927d-418b-99c1-ca0421535979 down in Southbound
Nov 25 08:34:50 compute-0 ovn_controller[152859]: 2025-11-25T08:34:50Z|00572|binding|INFO|Removing iface tap9200cc12-92 ovn-installed in OVS
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.129 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.134 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:f0:9b 10.100.0.11'], port_security=['fa:16:3e:73:f0:9b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '420c5373-d9c4-4da0-9658-90eff9a19f8d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c602cd86-40c0-467a-8b7a-b573e0a7cefa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9200cc12-927d-418b-99c1-ca0421535979) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.135 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9200cc12-927d-418b-99c1-ca0421535979 in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.137 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.149 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.154 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d79a1b80-a3ea-4672-b9d1-c8eae02ee0c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:50 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Nov 25 08:34:50 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003e.scope: Consumed 15.762s CPU time.
Nov 25 08:34:50 compute-0 systemd-machined[215790]: Machine qemu-71-instance-0000003e terminated.
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.193 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e05ef19b-553e-4aa4-859a-0cd804df9691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.196 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a30f896a-6ab5-4227-9d09-5f2b55e614f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.205 253542 DEBUG nova.network.neutron [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.224 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dac3b8d3-270a-4852-9dd8-b850804b13d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.226 253542 DEBUG oslo_concurrency.lockutils [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.227 253542 DEBUG oslo_concurrency.lockutils [req-195059ee-9d4d-4fad-8132-8722f099b55d req-adf28043-f52f-4401-8dc9-d2dbed02789f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.227 253542 DEBUG nova.network.neutron [req-195059ee-9d4d-4fad-8132-8722f099b55d req-adf28043-f52f-4401-8dc9-d2dbed02789f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Refreshing network info cache for port f2a4b65b-419e-44be-9413-f01693268aa8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.234 253542 DEBUG nova.virt.libvirt.vif [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1833307559',display_name='tempest-tempest.common.compute-instance-1833307559',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1833307559',id=61,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-14irc3k3',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=52d39d67-b456-44e4-8804-2de0c941edae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.234 253542 DEBUG nova.network.os_vif_util [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.235 253542 DEBUG nova.network.os_vif_util [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.235 253542 DEBUG os_vif [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.236 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.237 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.241 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.242 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2a4b65b-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.242 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[907d4763-7a52-4387-98d5-fb18cd5cd0c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496834, 'reachable_time': 23814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319480, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.243 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf2a4b65b-41, col_values=(('external_ids', {'iface-id': 'f2a4b65b-419e-44be-9413-f01693268aa8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:ed:fc', 'vm-uuid': '52d39d67-b456-44e4-8804-2de0c941edae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:50 compute-0 NetworkManager[48915]: <info>  [1764059690.2458] manager: (tapf2a4b65b-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.247 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.251 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.252 253542 INFO os_vif [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41')
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.253 253542 DEBUG nova.virt.libvirt.vif [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1833307559',display_name='tempest-tempest.common.compute-instance-1833307559',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1833307559',id=61,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-14irc3k3',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=52d39d67-b456-44e4-8804-2de0c941edae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.253 253542 DEBUG nova.network.os_vif_util [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.254 253542 DEBUG nova.network.os_vif_util [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.256 253542 DEBUG nova.virt.libvirt.guest [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] attach device xml: <interface type="ethernet">
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:26:ed:fc"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <target dev="tapf2a4b65b-41"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]: </interface>
Nov 25 08:34:50 compute-0 nova_compute[253538]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 08:34:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:50 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/539818481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.260 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c27125c2-a79b-4fc5-8c4d-f95bf29ac8aa]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap908154e6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496845, 'tstamp': 496845}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319483, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap908154e6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496848, 'tstamp': 496848}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319483, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.262 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.264 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 kernel: tapf2a4b65b-41: entered promiscuous mode
Nov 25 08:34:50 compute-0 systemd-udevd[319472]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:34:50 compute-0 NetworkManager[48915]: <info>  [1764059690.2690] manager: (tapf2a4b65b-41): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Nov 25 08:34:50 compute-0 ovn_controller[152859]: 2025-11-25T08:34:50Z|00573|binding|INFO|Claiming lport f2a4b65b-419e-44be-9413-f01693268aa8 for this chassis.
Nov 25 08:34:50 compute-0 ovn_controller[152859]: 2025-11-25T08:34:50Z|00574|binding|INFO|f2a4b65b-419e-44be-9413-f01693268aa8: Claiming fa:16:3e:26:ed:fc 10.100.0.3
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.271 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.271 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.272 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.272 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.273 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.281 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:ed:fc 10.100.0.3'], port_security=['fa:16:3e:26:ed:fc 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-663266119', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '52d39d67-b456-44e4-8804-2de0c941edae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-663266119', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '7', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f2a4b65b-419e-44be-9413-f01693268aa8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:50 compute-0 NetworkManager[48915]: <info>  [1764059690.2818] device (tapf2a4b65b-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.282 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f2a4b65b-419e-44be-9413-f01693268aa8 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe bound to our chassis
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.283 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Successfully updated port: 54f02527-a6c1-4059-aa22-2c19fc6f351d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:34:50 compute-0 NetworkManager[48915]: <info>  [1764059690.2844] device (tapf2a4b65b-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.284 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.286 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.287 253542 DEBUG nova.virt.libvirt.vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-1',id=63,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_user_name='tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:45Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=d99c7a05-3cc3-4a8b-bce4-1185023a269f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.288 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.288 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:cb:10,bridge_name='br-int',has_traffic_filtering=True,id=182a5a1a-c06d-4265-857f-3ea363ae01c2,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap182a5a1a-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.289 253542 DEBUG nova.objects.instance [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'pci_devices' on Instance uuid d99c7a05-3cc3-4a8b-bce4-1185023a269f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:50 compute-0 ovn_controller[152859]: 2025-11-25T08:34:50Z|00575|binding|INFO|Setting lport f2a4b65b-419e-44be-9413-f01693268aa8 ovn-installed in OVS
Nov 25 08:34:50 compute-0 ovn_controller[152859]: 2025-11-25T08:34:50Z|00576|binding|INFO|Setting lport f2a4b65b-419e-44be-9413-f01693268aa8 up in Southbound
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.298 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.301 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "refresh_cache-e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.302 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquired lock "refresh_cache-e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.302 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.304 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2105de87-15ce-4b63-8dc4-f99524b6035c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.308 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <uuid>d99c7a05-3cc3-4a8b-bce4-1185023a269f</uuid>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <name>instance-0000003f</name>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1822986507-1</nova:name>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:34:49</nova:creationTime>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:34:50 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:34:50 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:34:50 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:34:50 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:50 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:34:50 compute-0 nova_compute[253538]:         <nova:user uuid="dc986518148d44de9f5908ed5be317bd">tempest-ListServersNegativeTestJSON-704382836-project-member</nova:user>
Nov 25 08:34:50 compute-0 nova_compute[253538]:         <nova:project uuid="ce0bc5c65f2f47a9a854ec892fe53bc8">tempest-ListServersNegativeTestJSON-704382836</nova:project>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:34:50 compute-0 nova_compute[253538]:         <nova:port uuid="182a5a1a-c06d-4265-857f-3ea363ae01c2">
Nov 25 08:34:50 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <system>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <entry name="serial">d99c7a05-3cc3-4a8b-bce4-1185023a269f</entry>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <entry name="uuid">d99c7a05-3cc3-4a8b-bce4-1185023a269f</entry>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     </system>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <os>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   </os>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <features>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   </features>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk">
Nov 25 08:34:50 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:50 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk.config">
Nov 25 08:34:50 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:50 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:35:cb:10"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <target dev="tap182a5a1a-c0"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/console.log" append="off"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <video>
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     </video>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:34:50 compute-0 nova_compute[253538]: </domain>
Nov 25 08:34:50 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.309 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Preparing to wait for external event network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.309 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.310 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.310 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.311 253542 DEBUG nova.virt.libvirt.vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-1',id=63,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_user_name='tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:45Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=d99c7a05-3cc3-4a8b-bce4-1185023a269f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.312 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.313 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:cb:10,bridge_name='br-int',has_traffic_filtering=True,id=182a5a1a-c06d-4265-857f-3ea363ae01c2,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap182a5a1a-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.313 253542 DEBUG os_vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:cb:10,bridge_name='br-int',has_traffic_filtering=True,id=182a5a1a-c06d-4265-857f-3ea363ae01c2,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap182a5a1a-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.314 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.315 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.315 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.316 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 ceph-mon[75015]: pgmap v1550: 321 pgs: 321 active+clean; 481 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 4.6 MiB/s wr, 136 op/s
Nov 25 08:34:50 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/62237574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:50 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3654611122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:50 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/539818481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.323 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.323 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap182a5a1a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.324 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap182a5a1a-c0, col_values=(('external_ids', {'iface-id': '182a5a1a-c06d-4265-857f-3ea363ae01c2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:35:cb:10', 'vm-uuid': 'd99c7a05-3cc3-4a8b-bce4-1185023a269f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.325 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 NetworkManager[48915]: <info>  [1764059690.3265] manager: (tap182a5a1a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.328 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.332 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.333 253542 INFO os_vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:cb:10,bridge_name='br-int',has_traffic_filtering=True,id=182a5a1a-c06d-4265-857f-3ea363ae01c2,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap182a5a1a-c0')
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.337 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1253b33d-8249-494a-9c12-72b61dc2710e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.340 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Updating instance_info_cache with network_info: [{"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:50 compute-0 NetworkManager[48915]: <info>  [1764059690.3420] manager: (tap9200cc12-92): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.342 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e61d76cc-d80d-4600-ac01-25fef6893bca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.365 253542 DEBUG nova.virt.libvirt.driver [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.366 253542 DEBUG nova.virt.libvirt.driver [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.366 253542 DEBUG nova.virt.libvirt.driver [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:4d:ce:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.366 253542 DEBUG nova.virt.libvirt.driver [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] No VIF found with MAC fa:16:3e:26:ed:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.370 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Releasing lock "refresh_cache-a4fd9f97-b160-432d-9cb7-0fa3874c6468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.370 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Instance network_info: |[{"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.373 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Start _get_guest_xml network_info=[{"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.375 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b0a2ab-d925-47bf-9797-adc25f5b7f03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.380 253542 WARNING nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.385 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.386 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.391 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.392 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.392 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.393 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.392 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[31e8ea5b-2318-43ff-9003-a54214c68d4e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319514, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.393 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.393 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.394 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.394 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.394 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.395 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.395 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.395 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.395 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.396 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.399 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.407 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c40488f9-c421-458d-89b9-8ab7988fd169]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499086, 'tstamp': 499086}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319515, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499088, 'tstamp': 499088}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319515, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.408 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.415 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.415 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.416 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:50.416 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.439 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.445 253542 DEBUG nova.virt.libvirt.guest [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <nova:name>tempest-tempest.common.compute-instance-1833307559</nova:name>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:34:50</nova:creationTime>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <nova:port uuid="9fa407fa-661b-4b02-b4f4-656f6ae34cd8">
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     <nova:port uuid="f2a4b65b-419e-44be-9413-f01693268aa8">
Nov 25 08:34:50 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:34:50 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:34:50 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:34:50 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:34:50 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.454 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.454 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.454 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No VIF found with MAC fa:16:3e:35:cb:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.455 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Using config drive
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.477 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.484 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.488 253542 DEBUG oslo_concurrency.lockutils [None req-a4883817-8c02-41e5-8d27-475adf7618fd 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-52d39d67-b456-44e4-8804-2de0c941edae-f2a4b65b-419e-44be-9413-f01693268aa8" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.738 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.804 253542 INFO nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance shutdown successfully after 24 seconds.
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.832 253542 DEBUG nova.compute.manager [req-374d4aff-10a9-48cf-ac90-e3042ce6f710 req-812488c2-b586-45dd-a882-13cc59a57c36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Received event network-vif-deleted-269f9bd3-f267-459c-8e24-4b1f6c943345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.832 253542 DEBUG nova.compute.manager [req-374d4aff-10a9-48cf-ac90-e3042ce6f710 req-812488c2-b586-45dd-a882-13cc59a57c36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-unplugged-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.833 253542 DEBUG oslo_concurrency.lockutils [req-374d4aff-10a9-48cf-ac90-e3042ce6f710 req-812488c2-b586-45dd-a882-13cc59a57c36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.834 253542 DEBUG oslo_concurrency.lockutils [req-374d4aff-10a9-48cf-ac90-e3042ce6f710 req-812488c2-b586-45dd-a882-13cc59a57c36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.835 253542 DEBUG oslo_concurrency.lockutils [req-374d4aff-10a9-48cf-ac90-e3042ce6f710 req-812488c2-b586-45dd-a882-13cc59a57c36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.835 253542 DEBUG nova.compute.manager [req-374d4aff-10a9-48cf-ac90-e3042ce6f710 req-812488c2-b586-45dd-a882-13cc59a57c36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] No waiting events found dispatching network-vif-unplugged-9200cc12-927d-418b-99c1-ca0421535979 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.836 253542 WARNING nova.compute.manager [req-374d4aff-10a9-48cf-ac90-e3042ce6f710 req-812488c2-b586-45dd-a882-13cc59a57c36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received unexpected event network-vif-unplugged-9200cc12-927d-418b-99c1-ca0421535979 for instance with vm_state active and task_state rebuilding.
Nov 25 08:34:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.839 253542 INFO nova.virt.libvirt.driver [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance destroyed successfully.
Nov 25 08:34:50 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/506731877' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.848 253542 INFO nova.virt.libvirt.driver [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance destroyed successfully.
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.849 253542 DEBUG nova.virt.libvirt.vif [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-614557291',display_name='tempest-ServerActionsTestJSON-server-1864891791',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-614557291',id=62,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-x0c0gdce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:25Z,user_data=None,user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=420c5373-d9c4-4da0-9658-90eff9a19f8d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.850 253542 DEBUG nova.network.os_vif_util [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.852 253542 DEBUG nova.network.os_vif_util [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.852 253542 DEBUG os_vif [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.855 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9200cc12-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.856 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.859 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.878 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.882 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:50 compute-0 nova_compute[253538]: 2025-11-25 08:34:50.926 253542 INFO os_vif [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92')
Nov 25 08:34:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.176 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Creating config drive at /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/disk.config
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.181 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi0if3ix6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.221 253542 DEBUG nova.network.neutron [req-055411a1-74ac-4424-b1d2-dc4c518f2f32 req-fdbc24b2-668a-49d4-8d56-13563317ccd7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Updated VIF entry in instance network info cache for port 182a5a1a-c06d-4265-857f-3ea363ae01c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.222 253542 DEBUG nova.network.neutron [req-055411a1-74ac-4424-b1d2-dc4c518f2f32 req-fdbc24b2-668a-49d4-8d56-13563317ccd7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Updating instance_info_cache with network_info: [{"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.242 253542 DEBUG oslo_concurrency.lockutils [req-055411a1-74ac-4424-b1d2-dc4c518f2f32 req-fdbc24b2-668a-49d4-8d56-13563317ccd7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-d99c7a05-3cc3-4a8b-bce4-1185023a269f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 517 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 185 KiB/s rd, 5.6 MiB/s wr, 136 op/s
Nov 25 08:34:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3790800774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.328 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi0if3ix6" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.383 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.387 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/disk.config d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.418 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.421 253542 DEBUG nova.virt.libvirt.vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-2',id=64,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_user_name='tem
pest-ListServersNegativeTestJSON-704382836-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:46Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=a4fd9f97-b160-432d-9cb7-0fa3874c6468,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.421 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.422 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:08:11,bridge_name='br-int',has_traffic_filtering=True,id=f9d205bf-0705-485d-b89c-f9b9c3cdccdb,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d205bf-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.423 253542 DEBUG nova.objects.instance [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'pci_devices' on Instance uuid a4fd9f97-b160-432d-9cb7-0fa3874c6468 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:51 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/506731877' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:51 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3790800774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.439 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:34:51 compute-0 nova_compute[253538]:   <uuid>a4fd9f97-b160-432d-9cb7-0fa3874c6468</uuid>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   <name>instance-00000040</name>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1822986507-2</nova:name>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:34:50</nova:creationTime>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:34:51 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:34:51 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:34:51 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:34:51 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:51 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:34:51 compute-0 nova_compute[253538]:         <nova:user uuid="dc986518148d44de9f5908ed5be317bd">tempest-ListServersNegativeTestJSON-704382836-project-member</nova:user>
Nov 25 08:34:51 compute-0 nova_compute[253538]:         <nova:project uuid="ce0bc5c65f2f47a9a854ec892fe53bc8">tempest-ListServersNegativeTestJSON-704382836</nova:project>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:34:51 compute-0 nova_compute[253538]:         <nova:port uuid="f9d205bf-0705-485d-b89c-f9b9c3cdccdb">
Nov 25 08:34:51 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <system>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <entry name="serial">a4fd9f97-b160-432d-9cb7-0fa3874c6468</entry>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <entry name="uuid">a4fd9f97-b160-432d-9cb7-0fa3874c6468</entry>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     </system>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   <os>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   </os>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   <features>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   </features>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk">
Nov 25 08:34:51 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:51 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk.config">
Nov 25 08:34:51 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:51 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:9d:08:11"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <target dev="tapf9d205bf-07"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468/console.log" append="off"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <video>
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     </video>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:34:51 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:34:51 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:34:51 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:34:51 compute-0 nova_compute[253538]: </domain>
Nov 25 08:34:51 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.439 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Preparing to wait for external event network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.440 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.440 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.440 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.441 253542 DEBUG nova.virt.libvirt.vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-2',id=64,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_user
_name='tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:46Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=a4fd9f97-b160-432d-9cb7-0fa3874c6468,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.441 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.441 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:08:11,bridge_name='br-int',has_traffic_filtering=True,id=f9d205bf-0705-485d-b89c-f9b9c3cdccdb,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d205bf-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.442 253542 DEBUG os_vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:08:11,bridge_name='br-int',has_traffic_filtering=True,id=f9d205bf-0705-485d-b89c-f9b9c3cdccdb,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d205bf-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.442 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.442 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.443 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.444 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.444 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf9d205bf-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.445 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf9d205bf-07, col_values=(('external_ids', {'iface-id': 'f9d205bf-0705-485d-b89c-f9b9c3cdccdb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:08:11', 'vm-uuid': 'a4fd9f97-b160-432d-9cb7-0fa3874c6468'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:51 compute-0 NetworkManager[48915]: <info>  [1764059691.4474] manager: (tapf9d205bf-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.449 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.452 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.453 253542 INFO os_vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:08:11,bridge_name='br-int',has_traffic_filtering=True,id=f9d205bf-0705-485d-b89c-f9b9c3cdccdb,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d205bf-07')
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.576 253542 DEBUG nova.compute.manager [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received event network-changed-f9d205bf-0705-485d-b89c-f9b9c3cdccdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.576 253542 DEBUG nova.compute.manager [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Refreshing instance network info cache due to event network-changed-f9d205bf-0705-485d-b89c-f9b9c3cdccdb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.576 253542 DEBUG oslo_concurrency.lockutils [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a4fd9f97-b160-432d-9cb7-0fa3874c6468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.577 253542 DEBUG oslo_concurrency.lockutils [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a4fd9f97-b160-432d-9cb7-0fa3874c6468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.577 253542 DEBUG nova.network.neutron [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Refreshing network info cache for port f9d205bf-0705-485d-b89c-f9b9c3cdccdb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.599 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.599 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.600 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No VIF found with MAC fa:16:3e:9d:08:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.601 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Using config drive
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.630 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:51 compute-0 ovn_controller[152859]: 2025-11-25T08:34:51Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:ed:fc 10.100.0.3
Nov 25 08:34:51 compute-0 ovn_controller[152859]: 2025-11-25T08:34:51Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:ed:fc 10.100.0.3
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.942 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/disk.config d99c7a05-3cc3-4a8b-bce4-1185023a269f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:51 compute-0 nova_compute[253538]: 2025-11-25 08:34:51.943 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Deleting local config drive /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f/disk.config because it was imported into RBD.
Nov 25 08:34:52 compute-0 NetworkManager[48915]: <info>  [1764059692.0024] manager: (tap182a5a1a-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/266)
Nov 25 08:34:52 compute-0 kernel: tap182a5a1a-c0: entered promiscuous mode
Nov 25 08:34:52 compute-0 ovn_controller[152859]: 2025-11-25T08:34:52Z|00577|binding|INFO|Claiming lport 182a5a1a-c06d-4265-857f-3ea363ae01c2 for this chassis.
Nov 25 08:34:52 compute-0 ovn_controller[152859]: 2025-11-25T08:34:52Z|00578|binding|INFO|182a5a1a-c06d-4265-857f-3ea363ae01c2: Claiming fa:16:3e:35:cb:10 10.100.0.3
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.006 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.016 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:cb:10 10.100.0.3'], port_security=['fa:16:3e:35:cb:10 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd99c7a05-3cc3-4a8b-bce4-1185023a269f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-837c6e7b-bab2-4553-9d96-986f67153365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a89d18be-0725-45a6-b621-49391e35be2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=332be39e-bd54-4968-b056-968d29abecfa, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=182a5a1a-c06d-4265-857f-3ea363ae01c2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.018 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 182a5a1a-c06d-4265-857f-3ea363ae01c2 in datapath 837c6e7b-bab2-4553-9d96-986f67153365 bound to our chassis
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.022 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 837c6e7b-bab2-4553-9d96-986f67153365
Nov 25 08:34:52 compute-0 NetworkManager[48915]: <info>  [1764059692.0247] device (tap182a5a1a-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:34:52 compute-0 NetworkManager[48915]: <info>  [1764059692.0259] device (tap182a5a1a-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.042 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[991959d9-5dda-4473-9f82-3f5179f75bfd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.043 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap837c6e7b-b1 in ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.045 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap837c6e7b-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.046 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca0bdea-6740-4ca3-ae5c-394f20e8f23d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.047 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0e748b42-7d96-4fdb-a004-48e049666926]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:52 compute-0 systemd-machined[215790]: New machine qemu-72-instance-0000003f.
Nov 25 08:34:52 compute-0 ovn_controller[152859]: 2025-11-25T08:34:52Z|00579|binding|INFO|Setting lport 182a5a1a-c06d-4265-857f-3ea363ae01c2 ovn-installed in OVS
Nov 25 08:34:52 compute-0 ovn_controller[152859]: 2025-11-25T08:34:52Z|00580|binding|INFO|Setting lport 182a5a1a-c06d-4265-857f-3ea363ae01c2 up in Southbound
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.056 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:52 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-0000003f.
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.064 253542 DEBUG nova.network.neutron [req-195059ee-9d4d-4fad-8132-8722f099b55d req-adf28043-f52f-4401-8dc9-d2dbed02789f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updated VIF entry in instance network info cache for port f2a4b65b-419e-44be-9413-f01693268aa8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.065 253542 DEBUG nova.network.neutron [req-195059ee-9d4d-4fad-8132-8722f099b55d req-adf28043-f52f-4401-8dc9-d2dbed02789f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.068 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a46125-b82e-47c8-9845-e86cd8c26108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.081 253542 DEBUG oslo_concurrency.lockutils [req-195059ee-9d4d-4fad-8132-8722f099b55d req-adf28043-f52f-4401-8dc9-d2dbed02789f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.086 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c4fc17e2-3307-4a0e-a9ef-05cc1a0dd23d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.128 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c998cff4-d8dc-45fb-b865-962db41f20b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.133 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e91f3490-111c-4f9a-839b-fc0be4f2f163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:52 compute-0 NetworkManager[48915]: <info>  [1764059692.1358] manager: (tap837c6e7b-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/267)
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.136 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Creating config drive at /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468/disk.config
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.142 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpodx1ooj5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.174 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb205ae-ed0b-4b50-875d-83be77185a82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.177 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd6010f-1ede-4d93-af6f-c40051db5a7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.182 253542 DEBUG nova.network.neutron [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Updating instance_info_cache with network_info: [{"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.200 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Releasing lock "refresh_cache-e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.200 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Instance network_info: |[{"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:34:52 compute-0 NetworkManager[48915]: <info>  [1764059692.2034] device (tap837c6e7b-b0): carrier: link connected
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.203 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Start _get_guest_xml network_info=[{"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.210 253542 WARNING nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.211 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[feb653fc-ba33-4c83-a709-f56a3b78813b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.216 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.217 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.222 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.223 253542 DEBUG nova.virt.libvirt.host [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.223 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.223 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.224 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.224 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.224 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.224 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.224 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.225 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.225 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.225 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.225 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.225 253542 DEBUG nova.virt.hardware [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.228 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.231 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d0ee90-e9be-429a-beda-a154b4b962aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap837c6e7b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:6e:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504680, 'reachable_time': 28680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319729, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.247 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[52238428-da13-4e05-9c84-4713af5f90cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:6e5a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504680, 'tstamp': 504680}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319730, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.265 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bba6fc07-c8d2-4e8a-a730-f356ff0442f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap837c6e7b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:6e:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504680, 'reachable_time': 28680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319732, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.289 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpodx1ooj5" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.303 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[711651d4-e4c5-4db4-93e7-a1f58ebcfa69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.315 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.323 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468/disk.config a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.363 253542 INFO nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Deleting instance files /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d_del
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.365 253542 INFO nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Deletion of /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d_del complete
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.368 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d0666f55-c7f7-4e5d-853f-a02a775a2048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.369 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap837c6e7b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.369 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.370 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap837c6e7b-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:52 compute-0 NetworkManager[48915]: <info>  [1764059692.3723] manager: (tap837c6e7b-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.372 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:52 compute-0 kernel: tap837c6e7b-b0: entered promiscuous mode
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.378 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap837c6e7b-b0, col_values=(('external_ids', {'iface-id': 'f915f58f-151e-47fb-a373-5fd022b7fd3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.379 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:52 compute-0 ovn_controller[152859]: 2025-11-25T08:34:52Z|00581|binding|INFO|Releasing lport f915f58f-151e-47fb-a373-5fd022b7fd3c from this chassis (sb_readonly=0)
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.396 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.401 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.401 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/837c6e7b-bab2-4553-9d96-986f67153365.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/837c6e7b-bab2-4553-9d96-986f67153365.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.402 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c319bbda-7cd3-4016-8b16-96af0be3253b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.403 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-837c6e7b-bab2-4553-9d96-986f67153365
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/837c6e7b-bab2-4553-9d96-986f67153365.pid.haproxy
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 837c6e7b-bab2-4553-9d96-986f67153365
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.404 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'env', 'PROCESS_TAG=haproxy-837c6e7b-bab2-4553-9d96-986f67153365', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/837c6e7b-bab2-4553-9d96-986f67153365.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:34:52 compute-0 ceph-mon[75015]: pgmap v1551: 321 pgs: 321 active+clean; 517 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 185 KiB/s rd, 5.6 MiB/s wr, 136 op/s
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.490 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468/disk.config a4fd9f97-b160-432d-9cb7-0fa3874c6468_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.491 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Deleting local config drive /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468/disk.config because it was imported into RBD.
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.503 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.504 253542 INFO nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Creating image(s)
Nov 25 08:34:52 compute-0 kernel: tapf9d205bf-07: entered promiscuous mode
Nov 25 08:34:52 compute-0 systemd-udevd[319719]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:34:52 compute-0 NetworkManager[48915]: <info>  [1764059692.5354] manager: (tapf9d205bf-07): new Tun device (/org/freedesktop/NetworkManager/Devices/269)
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.532 253542 DEBUG nova.storage.rbd_utils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:52 compute-0 ovn_controller[152859]: 2025-11-25T08:34:52Z|00582|binding|INFO|Claiming lport f9d205bf-0705-485d-b89c-f9b9c3cdccdb for this chassis.
Nov 25 08:34:52 compute-0 ovn_controller[152859]: 2025-11-25T08:34:52Z|00583|binding|INFO|f9d205bf-0705-485d-b89c-f9b9c3cdccdb: Claiming fa:16:3e:9d:08:11 10.100.0.6
Nov 25 08:34:52 compute-0 NetworkManager[48915]: <info>  [1764059692.5476] device (tapf9d205bf-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:34:52 compute-0 NetworkManager[48915]: <info>  [1764059692.5490] device (tapf9d205bf-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:34:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:52.549 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:08:11 10.100.0.6'], port_security=['fa:16:3e:9d:08:11 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a4fd9f97-b160-432d-9cb7-0fa3874c6468', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-837c6e7b-bab2-4553-9d96-986f67153365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a89d18be-0725-45a6-b621-49391e35be2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=332be39e-bd54-4968-b056-968d29abecfa, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f9d205bf-0705-485d-b89c-f9b9c3cdccdb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:52 compute-0 ovn_controller[152859]: 2025-11-25T08:34:52Z|00584|binding|INFO|Setting lport f9d205bf-0705-485d-b89c-f9b9c3cdccdb ovn-installed in OVS
Nov 25 08:34:52 compute-0 ovn_controller[152859]: 2025-11-25T08:34:52Z|00585|binding|INFO|Setting lport f9d205bf-0705-485d-b89c-f9b9c3cdccdb up in Southbound
Nov 25 08:34:52 compute-0 systemd-machined[215790]: New machine qemu-73-instance-00000040.
Nov 25 08:34:52 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-00000040.
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.588 253542 DEBUG nova.storage.rbd_utils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.622 253542 DEBUG nova.storage.rbd_utils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.625 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.665 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/359251585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.685 253542 DEBUG oslo_concurrency.lockutils [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "interface-52d39d67-b456-44e4-8804-2de0c941edae-f2a4b65b-419e-44be-9413-f01693268aa8" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.686 253542 DEBUG oslo_concurrency.lockutils [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-52d39d67-b456-44e4-8804-2de0c941edae-f2a4b65b-419e-44be-9413-f01693268aa8" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.710 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.713 253542 DEBUG nova.objects.instance [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'flavor' on Instance uuid 52d39d67-b456-44e4-8804-2de0c941edae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.737 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.743 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.784 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.785 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.786 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.786 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.809 253542 DEBUG nova.storage.rbd_utils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.815 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:52 compute-0 podman[319901]: 2025-11-25 08:34:52.845081291 +0000 UTC m=+0.110740738 container create 3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 08:34:52 compute-0 podman[319901]: 2025-11-25 08:34:52.754806825 +0000 UTC m=+0.020466262 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.870 253542 DEBUG nova.virt.libvirt.vif [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1833307559',display_name='tempest-tempest.common.compute-instance-1833307559',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1833307559',id=61,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-14irc3k3',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=52d39d67-b456-44e4-8804-2de0c941edae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.871 253542 DEBUG nova.network.os_vif_util [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.873 253542 DEBUG nova.network.os_vif_util [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.879 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.883 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.887 253542 DEBUG nova.virt.libvirt.driver [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Attempting to detach device tapf2a4b65b-41 from instance 52d39d67-b456-44e4-8804-2de0c941edae from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.888 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] detach device xml: <interface type="ethernet">
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:26:ed:fc"/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <target dev="tapf2a4b65b-41"/>
Nov 25 08:34:52 compute-0 nova_compute[253538]: </interface>
Nov 25 08:34:52 compute-0 nova_compute[253538]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 08:34:52 compute-0 systemd[1]: Started libpod-conmon-3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430.scope.
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.917 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.930 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface>not found in domain: <domain type='kvm' id='70'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <name>instance-0000003d</name>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <uuid>52d39d67-b456-44e4-8804-2de0c941edae</uuid>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <nova:name>tempest-tempest.common.compute-instance-1833307559</nova:name>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:34:50</nova:creationTime>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <nova:port uuid="9fa407fa-661b-4b02-b4f4-656f6ae34cd8">
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <nova:port uuid="f2a4b65b-419e-44be-9413-f01693268aa8">
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:34:52 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <system>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <entry name='serial'>52d39d67-b456-44e4-8804-2de0c941edae</entry>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <entry name='uuid'>52d39d67-b456-44e4-8804-2de0c941edae</entry>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </system>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <os>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   </os>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <features>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   </features>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/52d39d67-b456-44e4-8804-2de0c941edae_disk' index='2'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/52d39d67-b456-44e4-8804-2de0c941edae_disk.config' index='1'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:4d:ce:d4'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target dev='tap9fa407fa-66'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:26:ed:fc'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target dev='tapf2a4b65b-41'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='net1'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <source path='/dev/pts/2'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/console.log' append='off'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       </target>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/2'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <source path='/dev/pts/2'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/console.log' append='off'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </console>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </input>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </input>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </input>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <video>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </video>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c468,c889</label>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c468,c889</imagelabel>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:34:52 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:34:52 compute-0 nova_compute[253538]: </domain>
Nov 25 08:34:52 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.931 253542 INFO nova.virt.libvirt.driver [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully detached device tapf2a4b65b-41 from instance 52d39d67-b456-44e4-8804-2de0c941edae from the persistent domain config.
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.931 253542 DEBUG nova.virt.libvirt.driver [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] (1/8): Attempting to detach device tapf2a4b65b-41 with device alias net1 from instance 52d39d67-b456-44e4-8804-2de0c941edae from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.932 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] detach device xml: <interface type="ethernet">
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:26:ed:fc"/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:34:52 compute-0 nova_compute[253538]:   <target dev="tapf2a4b65b-41"/>
Nov 25 08:34:52 compute-0 nova_compute[253538]: </interface>
Nov 25 08:34:52 compute-0 nova_compute[253538]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 08:34:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598152ed7896db807c1194d497cbf99580dfa554ebe08979ae464ebb4bcff12a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.956 253542 DEBUG nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.957 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.957 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.958 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.958 253542 DEBUG nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] No waiting events found dispatching network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.958 253542 WARNING nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received unexpected event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 for instance with vm_state active and task_state rebuild_spawning.
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.958 253542 DEBUG nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.959 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.959 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.959 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.959 253542 DEBUG nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.960 253542 WARNING nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 for instance with vm_state active and task_state None.
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.960 253542 DEBUG nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.961 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.961 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.961 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.962 253542 DEBUG nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.962 253542 WARNING nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 for instance with vm_state active and task_state None.
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.962 253542 DEBUG nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received event network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.963 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.963 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.963 253542 DEBUG oslo_concurrency.lockutils [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:52 compute-0 nova_compute[253538]: 2025-11-25 08:34:52.964 253542 DEBUG nova.compute.manager [req-12b4da8a-f19b-4777-9837-86f8dd5ae7e6 req-a8944e43-4288-41f0-8ebf-8dc94e8bbbd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Processing event network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:34:52 compute-0 podman[319901]: 2025-11-25 08:34:52.97156816 +0000 UTC m=+0.237227597 container init 3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 08:34:52 compute-0 podman[319901]: 2025-11-25 08:34:52.977422027 +0000 UTC m=+0.243081444 container start 3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:34:53 compute-0 neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365[320001]: [NOTICE]   (320027) : New worker (320037) forked
Nov 25 08:34:53 compute-0 neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365[320001]: [NOTICE]   (320027) : Loading success.
Nov 25 08:34:53 compute-0 kernel: tapf2a4b65b-41 (unregistering): left promiscuous mode
Nov 25 08:34:53 compute-0 NetworkManager[48915]: <info>  [1764059693.0544] device (tapf2a4b65b-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:34:53 compute-0 ovn_controller[152859]: 2025-11-25T08:34:53Z|00586|binding|INFO|Releasing lport f2a4b65b-419e-44be-9413-f01693268aa8 from this chassis (sb_readonly=0)
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.063 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:53 compute-0 ovn_controller[152859]: 2025-11-25T08:34:53Z|00587|binding|INFO|Setting lport f2a4b65b-419e-44be-9413-f01693268aa8 down in Southbound
Nov 25 08:34:53 compute-0 ovn_controller[152859]: 2025-11-25T08:34:53Z|00588|binding|INFO|Removing iface tapf2a4b65b-41 ovn-installed in OVS
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.071 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:ed:fc 10.100.0.3'], port_security=['fa:16:3e:26:ed:fc 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-663266119', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '52d39d67-b456-44e4-8804-2de0c941edae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-663266119', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '9', 'neutron:security_group_ids': '2fc6083a-2d5e-4949-a854-57468915c521', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f2a4b65b-419e-44be-9413-f01693268aa8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.073 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059693.0727398, d99c7a05-3cc3-4a8b-bce4-1185023a269f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.073 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] VM Started (Lifecycle Event)
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.077 253542 DEBUG nova.virt.libvirt.driver [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Start waiting for the detach event from libvirt for device tapf2a4b65b-41 with device alias net1 for instance 52d39d67-b456-44e4-8804-2de0c941edae _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.080 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.093 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.099 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059693.0748725, d99c7a05-3cc3-4a8b-bce4-1185023a269f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.099 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] VM Paused (Lifecycle Event)
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.123 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.127 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.136 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f9d205bf-0705-485d-b89c-f9b9c3cdccdb in datapath 837c6e7b-bab2-4553-9d96-986f67153365 unbound from our chassis
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.137 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 837c6e7b-bab2-4553-9d96-986f67153365
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.144 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.144 253542 DEBUG nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Received event <DeviceRemovedEvent: 1764059693.0751965, 52d39d67-b456-44e4-8804-2de0c941edae => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.145 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.149 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:26:ed:fc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf2a4b65b-41"/></interface>not found in domain: <domain type='kvm' id='70'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <name>instance-0000003d</name>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <uuid>52d39d67-b456-44e4-8804-2de0c941edae</uuid>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <nova:name>tempest-tempest.common.compute-instance-1833307559</nova:name>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:34:50</nova:creationTime>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:port uuid="9fa407fa-661b-4b02-b4f4-656f6ae34cd8">
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:port uuid="f2a4b65b-419e-44be-9413-f01693268aa8">
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:34:53 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <system>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <entry name='serial'>52d39d67-b456-44e4-8804-2de0c941edae</entry>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <entry name='uuid'>52d39d67-b456-44e4-8804-2de0c941edae</entry>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </system>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <os>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </os>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <features>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </features>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/52d39d67-b456-44e4-8804-2de0c941edae_disk' index='2'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/52d39d67-b456-44e4-8804-2de0c941edae_disk.config' index='1'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:4d:ce:d4'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target dev='tap9fa407fa-66'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <source path='/dev/pts/2'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/console.log' append='off'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       </target>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/2'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <source path='/dev/pts/2'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae/console.log' append='off'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </console>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </input>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </input>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </input>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <video>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </video>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c468,c889</label>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c468,c889</imagelabel>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:34:53 compute-0 nova_compute[253538]: </domain>
Nov 25 08:34:53 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.150 253542 INFO nova.virt.libvirt.driver [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully detached device tapf2a4b65b-41 from instance 52d39d67-b456-44e4-8804-2de0c941edae from the live domain config.
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.150 253542 DEBUG nova.virt.libvirt.vif [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1833307559',display_name='tempest-tempest.common.compute-instance-1833307559',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1833307559',id=61,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-14irc3k3',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=52d39d67-b456-44e4-8804-2de0c941edae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.151 253542 DEBUG nova.network.os_vif_util [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.151 253542 DEBUG nova.network.os_vif_util [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.152 253542 DEBUG os_vif [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.153 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.153 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2a4b65b-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.153 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4aa813-ef37-4f98-a71b-d3762e553dd9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.155 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.157 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.159 253542 INFO os_vif [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41')
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.159 253542 DEBUG nova.virt.libvirt.guest [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <nova:name>tempest-tempest.common.compute-instance-1833307559</nova:name>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:34:53</nova:creationTime>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:user uuid="329d8dc9d78743d4a09a38fef3a9143d">tempest-AttachInterfacesTestJSON-1895576257-project-member</nova:user>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:project uuid="7d8307470c794815a028592990efca57">tempest-AttachInterfacesTestJSON-1895576257</nova:project>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:port uuid="9fa407fa-661b-4b02-b4f4-656f6ae34cd8">
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:34:53 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:34:53 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.184 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[53e506e8-0bf4-4c20-a62b-5649efe42d22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.187 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6940c9-4de0-49b6-b212-439e49dc9c74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.218 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8df9f674-7a00-4827-8b6b-b3d5f45e7fd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.236 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4ec501-54ed-42a7-b150-edd07436a456]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap837c6e7b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:6e:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504680, 'reachable_time': 28680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320080, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:53 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2678124908' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:34:53
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'backups', '.rgw.root']
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.254 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ffe6af74-6591-40c9-8856-b066b6c310ed]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap837c6e7b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504693, 'tstamp': 504693}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320091, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap837c6e7b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504696, 'tstamp': 504696}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320091, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.255 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap837c6e7b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.257 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.258 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.258 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap837c6e7b-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.258 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.259 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap837c6e7b-b0, col_values=(('external_ids', {'iface-id': 'f915f58f-151e-47fb-a373-5fd022b7fd3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.259 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.260 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f2a4b65b-419e-44be-9413-f01693268aa8 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.260 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.261 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 485 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 5.4 MiB/s wr, 120 op/s
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.261 253542 DEBUG nova.virt.libvirt.vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-3',id=65,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_user_name='tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:47Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=e07ccbcb-d60d-4c15-95c2-9f5046ab99a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.261 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.262 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:f0,bridge_name='br-int',has_traffic_filtering=True,id=54f02527-a6c1-4059-aa22-2c19fc6f351d,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f02527-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.263 253542 DEBUG nova.objects.instance [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'pci_devices' on Instance uuid e07ccbcb-d60d-4c15-95c2-9f5046ab99a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.277 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[57692c65-ac6e-439c-ba2f-a60f7dd67191]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.281 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <uuid>e07ccbcb-d60d-4c15-95c2-9f5046ab99a3</uuid>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <name>instance-00000041</name>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1822986507-3</nova:name>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:34:52</nova:creationTime>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <nova:user uuid="dc986518148d44de9f5908ed5be317bd">tempest-ListServersNegativeTestJSON-704382836-project-member</nova:user>
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <nova:project uuid="ce0bc5c65f2f47a9a854ec892fe53bc8">tempest-ListServersNegativeTestJSON-704382836</nova:project>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <nova:port uuid="54f02527-a6c1-4059-aa22-2c19fc6f351d">
Nov 25 08:34:53 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <system>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <entry name="serial">e07ccbcb-d60d-4c15-95c2-9f5046ab99a3</entry>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <entry name="uuid">e07ccbcb-d60d-4c15-95c2-9f5046ab99a3</entry>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </system>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <os>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </os>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <features>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </features>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk">
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk.config">
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:53 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:eb:30:f0"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <target dev="tap54f02527-a6"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3/console.log" append="off"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <video>
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </video>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:34:53 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:34:53 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:34:53 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:34:53 compute-0 nova_compute[253538]: </domain>
Nov 25 08:34:53 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.281 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Preparing to wait for external event network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.282 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.282 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.282 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.283 253542 DEBUG nova.virt.libvirt.vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-3',id=65,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_user_name='tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:47Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=e07ccbcb-d60d-4c15-95c2-9f5046ab99a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.283 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.283 253542 DEBUG nova.network.os_vif_util [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:f0,bridge_name='br-int',has_traffic_filtering=True,id=54f02527-a6c1-4059-aa22-2c19fc6f351d,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f02527-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.283 253542 DEBUG os_vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:f0,bridge_name='br-int',has_traffic_filtering=True,id=54f02527-a6c1-4059-aa22-2c19fc6f351d,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f02527-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.284 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.284 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.284 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.287 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.288 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54f02527-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.288 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54f02527-a6, col_values=(('external_ids', {'iface-id': '54f02527-a6c1-4059-aa22-2c19fc6f351d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:30:f0', 'vm-uuid': 'e07ccbcb-d60d-4c15-95c2-9f5046ab99a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:53 compute-0 NetworkManager[48915]: <info>  [1764059693.2903] manager: (tap54f02527-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.292 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.294 253542 INFO os_vif [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:f0,bridge_name='br-int',has_traffic_filtering=True,id=54f02527-a6c1-4059-aa22-2c19fc6f351d,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f02527-a6')
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.308 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[11afd92c-97bf-428a-8a73-a86010826d1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.311 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8c7e6305-69ca-47de-8ae2-3eb43ba81718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.351 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1e5dc36c-5285-4a69-bfbe-9f4c0706b761]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.356 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.356 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.356 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] No VIF found with MAC fa:16:3e:eb:30:f0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.356 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Using config drive
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.371 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[86c422da-1b1e-44d7-8dd3-14262f23ab88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320110, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.380 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.387 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa5d27f-f97c-4282-8a99-80623eb990e6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499086, 'tstamp': 499086}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320126, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499088, 'tstamp': 499088}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320126, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.396 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.400 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.404 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.404 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.404 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:53.405 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.421 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059693.420985, a4fd9f97-b160-432d-9cb7-0fa3874c6468 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.421 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] VM Started (Lifecycle Event)
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.423 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.425 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.428 253542 INFO nova.virt.libvirt.driver [-] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Instance spawned successfully.
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.428 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.435 253542 DEBUG nova.network.neutron [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Updated VIF entry in instance network info cache for port f9d205bf-0705-485d-b89c-f9b9c3cdccdb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.435 253542 DEBUG nova.network.neutron [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Updating instance_info_cache with network_info: [{"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.449 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.454 253542 DEBUG oslo_concurrency.lockutils [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a4fd9f97-b160-432d-9cb7-0fa3874c6468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.454 253542 DEBUG nova.compute.manager [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received event network-changed-54f02527-a6c1-4059-aa22-2c19fc6f351d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.454 253542 DEBUG nova.compute.manager [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Refreshing instance network info cache due to event network-changed-54f02527-a6c1-4059-aa22-2c19fc6f351d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.455 253542 DEBUG oslo_concurrency.lockutils [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.455 253542 DEBUG oslo_concurrency.lockutils [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.455 253542 DEBUG nova.network.neutron [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Refreshing network info cache for port 54f02527-a6c1-4059-aa22-2c19fc6f351d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.457 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.460 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.460 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.461 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.461 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.461 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.462 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.486 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.486 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059693.4210463, a4fd9f97-b160-432d-9cb7-0fa3874c6468 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.487 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] VM Paused (Lifecycle Event)
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.510 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.513 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059693.4251022, a4fd9f97-b160-432d-9cb7-0fa3874c6468 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.513 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] VM Resumed (Lifecycle Event)
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.530 253542 INFO nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Took 7.16 seconds to spawn the instance on the hypervisor.
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.530 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.537 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "2f20fb1c-0a44-4209-aa4a-020331708117" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.538 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.539 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.547 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.564 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:34:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/359251585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2678124908' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.580 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.632 253542 INFO nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Took 9.11 seconds to build instance.
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.645 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.659 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.844s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.686 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.687 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.719 253542 DEBUG nova.compute.manager [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received event network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.719 253542 DEBUG oslo_concurrency.lockutils [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.719 253542 DEBUG oslo_concurrency.lockutils [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.720 253542 DEBUG oslo_concurrency.lockutils [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.720 253542 DEBUG nova.compute.manager [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Processing event network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.720 253542 DEBUG nova.compute.manager [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received event network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.720 253542 DEBUG oslo_concurrency.lockutils [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.720 253542 DEBUG oslo_concurrency.lockutils [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.720 253542 DEBUG oslo_concurrency.lockutils [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.721 253542 DEBUG nova.compute.manager [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] No waiting events found dispatching network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.721 253542 WARNING nova.compute.manager [req-f6ae82d6-e364-4abb-8ece-107726fca1a6 req-71950707-8fe0-454b-a29f-063bf22cfb78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received unexpected event network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 for instance with vm_state building and task_state spawning.
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.722 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.722 253542 INFO nova.compute.claims [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.725 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.732 253542 DEBUG nova.storage.rbd_utils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] resizing rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.776 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Creating config drive at /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3/disk.config
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.780 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf4hcm1wz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.812 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059693.7284205, d99c7a05-3cc3-4a8b-bce4-1185023a269f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.812 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] VM Resumed (Lifecycle Event)
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.816 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.821 253542 INFO nova.virt.libvirt.driver [-] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Instance spawned successfully.
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.821 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:34:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.834 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.841 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.847 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.847 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.848 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.848 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.848 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.849 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.871 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.911 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.912 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Ensure instance console log exists: /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.912 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.913 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.913 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.914 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Start _get_guest_xml network_info=[{"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.916 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf4hcm1wz" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.917 253542 INFO nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Took 8.29 seconds to spawn the instance on the hypervisor.
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.917 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.934 253542 DEBUG nova.storage.rbd_utils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] rbd image e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.937 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3/disk.config e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.981 253542 WARNING nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.987 253542 DEBUG nova.virt.libvirt.host [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.988 253542 DEBUG nova.virt.libvirt.host [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.991 253542 DEBUG nova.virt.libvirt.host [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.992 253542 DEBUG nova.virt.libvirt.host [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.992 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.992 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.993 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.993 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.993 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.993 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.994 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.994 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.994 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.995 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.995 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.995 253542 DEBUG nova.virt.hardware [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:34:53 compute-0 nova_compute[253538]: 2025-11-25 08:34:53.995 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.017 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.065 253542 INFO nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Took 9.60 seconds to build instance.
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.082 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.112 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.344 253542 DEBUG oslo_concurrency.processutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3/disk.config e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.346 253542 INFO nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Deleting local config drive /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3/disk.config because it was imported into RBD.
Nov 25 08:34:54 compute-0 NetworkManager[48915]: <info>  [1764059694.4078] manager: (tap54f02527-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Nov 25 08:34:54 compute-0 kernel: tap54f02527-a6: entered promiscuous mode
Nov 25 08:34:54 compute-0 systemd-udevd[320109]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.413 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:54 compute-0 ovn_controller[152859]: 2025-11-25T08:34:54Z|00589|binding|INFO|Claiming lport 54f02527-a6c1-4059-aa22-2c19fc6f351d for this chassis.
Nov 25 08:34:54 compute-0 ovn_controller[152859]: 2025-11-25T08:34:54Z|00590|binding|INFO|54f02527-a6c1-4059-aa22-2c19fc6f351d: Claiming fa:16:3e:eb:30:f0 10.100.0.10
Nov 25 08:34:54 compute-0 NetworkManager[48915]: <info>  [1764059694.4267] device (tap54f02527-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:34:54 compute-0 NetworkManager[48915]: <info>  [1764059694.4275] device (tap54f02527-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:34:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.426 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:30:f0 10.100.0.10'], port_security=['fa:16:3e:eb:30:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e07ccbcb-d60d-4c15-95c2-9f5046ab99a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-837c6e7b-bab2-4553-9d96-986f67153365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a89d18be-0725-45a6-b621-49391e35be2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=332be39e-bd54-4968-b056-968d29abecfa, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=54f02527-a6c1-4059-aa22-2c19fc6f351d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.427 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 54f02527-a6c1-4059-aa22-2c19fc6f351d in datapath 837c6e7b-bab2-4553-9d96-986f67153365 bound to our chassis
Nov 25 08:34:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.428 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 837c6e7b-bab2-4553-9d96-986f67153365
Nov 25 08:34:54 compute-0 ovn_controller[152859]: 2025-11-25T08:34:54Z|00591|binding|INFO|Setting lport 54f02527-a6c1-4059-aa22-2c19fc6f351d ovn-installed in OVS
Nov 25 08:34:54 compute-0 ovn_controller[152859]: 2025-11-25T08:34:54Z|00592|binding|INFO|Setting lport 54f02527-a6c1-4059-aa22-2c19fc6f351d up in Southbound
Nov 25 08:34:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.445 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa56a9b5-7596-4485-a20f-503db02d1b97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.445 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:54 compute-0 systemd-machined[215790]: New machine qemu-74-instance-00000041.
Nov 25 08:34:54 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-00000041.
Nov 25 08:34:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.480 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ad041cef-90f9-490a-bcf5-eae81d46b427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.486 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6af6dd1e-7cfd-4d7d-9d92-46947e991aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.517 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7e074554-ba76-4d68-8df4-7f8d0df2be60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1609599863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.540 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[16c780a3-f9f9-4fa5-ab1c-fa30463b7de8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap837c6e7b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:6e:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504680, 'reachable_time': 28680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320312, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.552 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.576 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e7445a4e-f1ff-4a56-a68e-1d477432abcc]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap837c6e7b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504693, 'tstamp': 504693}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320315, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap837c6e7b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504696, 'tstamp': 504696}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320315, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.578 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap837c6e7b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.581 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap837c6e7b-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap837c6e7b-b0, col_values=(('external_ids', {'iface-id': 'f915f58f-151e-47fb-a373-5fd022b7fd3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:54.583 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:54 compute-0 ceph-mon[75015]: pgmap v1552: 321 pgs: 321 active+clean; 485 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 5.4 MiB/s wr, 120 op/s
Nov 25 08:34:54 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1609599863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.601 253542 DEBUG nova.storage.rbd_utils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.607 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.656 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.685 253542 DEBUG oslo_concurrency.lockutils [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.686 253542 DEBUG oslo_concurrency.lockutils [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquired lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.686 253542 DEBUG nova.network.neutron [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:34:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:34:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2724557845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.763 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.777 253542 DEBUG nova.compute.provider_tree [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.789 253542 DEBUG nova.scheduler.client.report [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.810 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.811 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.849 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.849 253542 DEBUG nova.network.neutron [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.866 253542 INFO nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.882 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.970 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.971 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:34:54 compute-0 nova_compute[253538]: 2025-11-25 08:34:54.971 253542 INFO nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Creating image(s)
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.000 253542 DEBUG nova.storage.rbd_utils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 2f20fb1c-0a44-4209-aa4a-020331708117_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.025 253542 DEBUG nova.storage.rbd_utils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 2f20fb1c-0a44-4209-aa4a-020331708117_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.051 253542 DEBUG nova.storage.rbd_utils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 2f20fb1c-0a44-4209-aa4a-020331708117_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.055 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.100 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received event network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.101 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.101 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.102 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.102 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] No waiting events found dispatching network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.102 253542 WARNING nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received unexpected event network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb for instance with vm_state active and task_state None.
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.102 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-unplugged-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.103 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.103 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.103 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.104 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-unplugged-f2a4b65b-419e-44be-9413-f01693268aa8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.104 253542 WARNING nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-unplugged-f2a4b65b-419e-44be-9413-f01693268aa8 for instance with vm_state active and task_state None.
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.104 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.104 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.105 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.105 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.105 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.105 253542 WARNING nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-plugged-f2a4b65b-419e-44be-9413-f01693268aa8 for instance with vm_state active and task_state None.
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.106 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received event network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.106 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.106 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.106 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.107 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Processing event network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.107 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received event network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.108 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.108 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.108 253542 DEBUG oslo_concurrency.lockutils [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.109 253542 DEBUG nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] No waiting events found dispatching network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.109 253542 WARNING nova.compute.manager [req-a2dd39d2-8bad-45ef-8b32-ddc36a635c2c req-ab604b62-b2a7-42aa-9d1b-322381b6cd21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received unexpected event network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d for instance with vm_state building and task_state spawning.
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.139 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.144 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.144 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.145 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:55 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/107995451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.167 253542 DEBUG nova.storage.rbd_utils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 2f20fb1c-0a44-4209-aa4a-020331708117_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:55 compute-0 sshd-session[320284]: Invalid user vyos from 193.32.162.151 port 49590
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.176 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2f20fb1c-0a44-4209-aa4a-020331708117_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.213 253542 DEBUG nova.policy [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a649c62aaacd4f01a93ea978066f5976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.216 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.218 253542 DEBUG nova.virt.libvirt.vif [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:34:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-614557291',display_name='tempest-ServerActionsTestJSON-server-1864891791',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-614557291',id=62,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-x0c0gdce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name=
'tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:52Z,user_data=None,user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=420c5373-d9c4-4da0-9658-90eff9a19f8d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.218 253542 DEBUG nova.network.os_vif_util [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.219 253542 DEBUG nova.network.os_vif_util [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.221 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:34:55 compute-0 nova_compute[253538]:   <uuid>420c5373-d9c4-4da0-9658-90eff9a19f8d</uuid>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   <name>instance-0000003e</name>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerActionsTestJSON-server-1864891791</nova:name>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:34:53</nova:creationTime>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:34:55 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:34:55 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:34:55 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:34:55 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:55 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:34:55 compute-0 nova_compute[253538]:         <nova:user uuid="c199ca353ed54a53ab7fe37d3089c82a">tempest-ServerActionsTestJSON-1880843108-project-member</nova:user>
Nov 25 08:34:55 compute-0 nova_compute[253538]:         <nova:project uuid="23237e7592b247838e62457157e64e9e">tempest-ServerActionsTestJSON-1880843108</nova:project>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:34:55 compute-0 nova_compute[253538]:         <nova:port uuid="9200cc12-927d-418b-99c1-ca0421535979">
Nov 25 08:34:55 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <system>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <entry name="serial">420c5373-d9c4-4da0-9658-90eff9a19f8d</entry>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <entry name="uuid">420c5373-d9c4-4da0-9658-90eff9a19f8d</entry>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     </system>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   <os>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   </os>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   <features>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   </features>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/420c5373-d9c4-4da0-9658-90eff9a19f8d_disk">
Nov 25 08:34:55 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:55 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config">
Nov 25 08:34:55 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:55 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:73:f0:9b"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <target dev="tap9200cc12-92"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/console.log" append="off"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <video>
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     </video>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:34:55 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:34:55 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:34:55 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:34:55 compute-0 nova_compute[253538]: </domain>
Nov 25 08:34:55 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.222 253542 DEBUG nova.compute.manager [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Preparing to wait for external event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.222 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.222 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.223 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.223 253542 DEBUG nova.virt.libvirt.vif [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:34:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-614557291',display_name='tempest-ServerActionsTestJSON-server-1864891791',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-614557291',id=62,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-x0c0gdce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name=
'tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:52Z,user_data=None,user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=420c5373-d9c4-4da0-9658-90eff9a19f8d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.224 253542 DEBUG nova.network.os_vif_util [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.224 253542 DEBUG nova.network.os_vif_util [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.224 253542 DEBUG os_vif [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.225 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.225 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.226 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.227 253542 DEBUG nova.network.neutron [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Updated VIF entry in instance network info cache for port 54f02527-a6c1-4059-aa22-2c19fc6f351d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.227 253542 DEBUG nova.network.neutron [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Updating instance_info_cache with network_info: [{"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.231 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9200cc12-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.232 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9200cc12-92, col_values=(('external_ids', {'iface-id': '9200cc12-927d-418b-99c1-ca0421535979', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:f0:9b', 'vm-uuid': '420c5373-d9c4-4da0-9658-90eff9a19f8d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.233 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:55 compute-0 NetworkManager[48915]: <info>  [1764059695.2340] manager: (tap9200cc12-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/272)
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.242 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.243 253542 INFO os_vif [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92')
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.245 253542 DEBUG oslo_concurrency.lockutils [req-8a499574-6252-475b-9fd0-636a7b00a694 req-5722426d-7de3-4937-b016-6981a6265ead b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 437 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 913 KiB/s rd, 6.2 MiB/s wr, 192 op/s
Nov 25 08:34:55 compute-0 sshd-session[320284]: Connection closed by invalid user vyos 193.32.162.151 port 49590 [preauth]
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.294 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.294 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.294 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] No VIF found with MAC fa:16:3e:73:f0:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.295 253542 INFO nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Using config drive
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.323 253542 DEBUG nova.storage.rbd_utils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.329 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059695.3204894, e07ccbcb-d60d-4c15-95c2-9f5046ab99a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.329 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] VM Started (Lifecycle Event)
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.333 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.341 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.345 253542 INFO nova.virt.libvirt.driver [-] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Instance spawned successfully.
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.346 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.352 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.356 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.357 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.358 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.358 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.359 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.359 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.364 253542 INFO nova.compute.manager [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Terminating instance
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.365 253542 DEBUG nova.compute.manager [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.376 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.385 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'keypairs' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.391 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.391 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.392 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.392 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.393 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.394 253542 DEBUG nova.virt.libvirt.driver [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.403 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.403 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059695.32059, e07ccbcb-d60d-4c15-95c2-9f5046ab99a3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.404 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] VM Paused (Lifecycle Event)
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.424 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.428 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059695.3370552, e07ccbcb-d60d-4c15-95c2-9f5046ab99a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.428 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] VM Resumed (Lifecycle Event)
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.451 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.454 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.464 253542 INFO nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Took 7.82 seconds to spawn the instance on the hypervisor.
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.465 253542 DEBUG nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.475 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:34:55 compute-0 kernel: tap9fa407fa-66 (unregistering): left promiscuous mode
Nov 25 08:34:55 compute-0 NetworkManager[48915]: <info>  [1764059695.4963] device (tap9fa407fa-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.503 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2f20fb1c-0a44-4209-aa4a-020331708117_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:55 compute-0 ovn_controller[152859]: 2025-11-25T08:34:55Z|00593|binding|INFO|Releasing lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 from this chassis (sb_readonly=0)
Nov 25 08:34:55 compute-0 ovn_controller[152859]: 2025-11-25T08:34:55Z|00594|binding|INFO|Setting lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 down in Southbound
Nov 25 08:34:55 compute-0 ovn_controller[152859]: 2025-11-25T08:34:55Z|00595|binding|INFO|Removing iface tap9fa407fa-66 ovn-installed in OVS
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.519 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:ce:d4 10.100.0.8'], port_security=['fa:16:3e:4d:ce:d4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '52d39d67-b456-44e4-8804-2de0c941edae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '379bb9ab-2c12-4eea-bd54-6b8a24f607d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9fa407fa-661b-4b02-b4f4-656f6ae34cd8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.521 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.522 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.537 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[21fccf96-67c0-45a0-86ca-0fae5a55a1e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:55 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Nov 25 08:34:55 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003d.scope: Consumed 15.665s CPU time.
Nov 25 08:34:55 compute-0 systemd-machined[215790]: Machine qemu-70-instance-0000003d terminated.
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.560 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.570 253542 INFO nova.compute.manager [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Took 10.99 seconds to build instance.
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.576 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce4c197-de97-4c1b-a941-28ec79611651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.588 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0845d5-925e-4787-b8fb-d8050e2740d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:55 compute-0 kernel: tap9fa407fa-66: entered promiscuous mode
Nov 25 08:34:55 compute-0 systemd-udevd[320298]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:34:55 compute-0 NetworkManager[48915]: <info>  [1764059695.5990] manager: (tap9fa407fa-66): new Tun device (/org/freedesktop/NetworkManager/Devices/273)
Nov 25 08:34:55 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2724557845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:55 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/107995451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:55 compute-0 kernel: tap9fa407fa-66 (unregistering): left promiscuous mode
Nov 25 08:34:55 compute-0 ovn_controller[152859]: 2025-11-25T08:34:55Z|00596|binding|INFO|Claiming lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for this chassis.
Nov 25 08:34:55 compute-0 ovn_controller[152859]: 2025-11-25T08:34:55Z|00597|binding|INFO|9fa407fa-661b-4b02-b4f4-656f6ae34cd8: Claiming fa:16:3e:4d:ce:d4 10.100.0.8
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.618 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:ce:d4 10.100.0.8'], port_security=['fa:16:3e:4d:ce:d4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '52d39d67-b456-44e4-8804-2de0c941edae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '379bb9ab-2c12-4eea-bd54-6b8a24f607d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9fa407fa-661b-4b02-b4f4-656f6ae34cd8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.625 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.627 253542 DEBUG oslo_concurrency.lockutils [None req-da202073-cdcd-4b87-9d6e-525198fc199b dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.632 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6d15d9-f5a3-4434-800e-f482769a4956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.633 253542 INFO nova.virt.libvirt.driver [-] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Instance destroyed successfully.
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.634 253542 DEBUG nova.objects.instance [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'resources' on Instance uuid 52d39d67-b456-44e4-8804-2de0c941edae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:55 compute-0 ovn_controller[152859]: 2025-11-25T08:34:55Z|00598|binding|INFO|Setting lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 ovn-installed in OVS
Nov 25 08:34:55 compute-0 ovn_controller[152859]: 2025-11-25T08:34:55Z|00599|binding|INFO|Setting lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 up in Southbound
Nov 25 08:34:55 compute-0 ovn_controller[152859]: 2025-11-25T08:34:55Z|00600|binding|INFO|Releasing lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 from this chassis (sb_readonly=1)
Nov 25 08:34:55 compute-0 ovn_controller[152859]: 2025-11-25T08:34:55Z|00601|if_status|INFO|Dropped 2 log messages in last 138 seconds (most recently, 138 seconds ago) due to excessive rate
Nov 25 08:34:55 compute-0 ovn_controller[152859]: 2025-11-25T08:34:55Z|00602|if_status|INFO|Not setting lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 down as sb is readonly
Nov 25 08:34:55 compute-0 ovn_controller[152859]: 2025-11-25T08:34:55Z|00603|binding|INFO|Removing iface tap9fa407fa-66 ovn-installed in OVS
Nov 25 08:34:55 compute-0 ovn_controller[152859]: 2025-11-25T08:34:55Z|00604|binding|INFO|Releasing lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 from this chassis (sb_readonly=0)
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.645 253542 DEBUG nova.storage.rbd_utils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] resizing rbd image 2f20fb1c-0a44-4209-aa4a-020331708117_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:34:55 compute-0 ovn_controller[152859]: 2025-11-25T08:34:55Z|00605|binding|INFO|Setting lport 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 down in Southbound
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.649 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9236022-d031-47bb-993b-05f339516801]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320567, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.659 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:ce:d4 10.100.0.8'], port_security=['fa:16:3e:4d:ce:d4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '52d39d67-b456-44e4-8804-2de0c941edae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '379bb9ab-2c12-4eea-bd54-6b8a24f607d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9fa407fa-661b-4b02-b4f4-656f6ae34cd8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.673 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c2907a8f-ebf8-4572-b462-c318ebe62857]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499086, 'tstamp': 499086}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320576, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499088, 'tstamp': 499088}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320576, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.675 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.683 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.683 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.683 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.684 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.686 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.687 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.699 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.705 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e25c23d8-75c6-4103-b1c8-c45cc0d43b07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.712 253542 DEBUG nova.virt.libvirt.vif [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1833307559',display_name='tempest-tempest.common.compute-instance-1833307559',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1833307559',id=61,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-14irc3k3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=52d39d67-b456-44e4-8804-2de0c941edae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.713 253542 DEBUG nova.network.os_vif_util [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.714 253542 DEBUG nova.network.os_vif_util [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:ce:d4,bridge_name='br-int',has_traffic_filtering=True,id=9fa407fa-661b-4b02-b4f4-656f6ae34cd8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fa407fa-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.715 253542 DEBUG os_vif [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:ce:d4,bridge_name='br-int',has_traffic_filtering=True,id=9fa407fa-661b-4b02-b4f4-656f6ae34cd8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fa407fa-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.718 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.718 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fa407fa-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.719 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.722 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.728 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.730 253542 INFO os_vif [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:ce:d4,bridge_name='br-int',has_traffic_filtering=True,id=9fa407fa-661b-4b02-b4f4-656f6ae34cd8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fa407fa-66')
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.731 253542 DEBUG nova.virt.libvirt.vif [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1833307559',display_name='tempest-tempest.common.compute-instance-1833307559',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1833307559',id=61,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-14irc3k3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=52d39d67-b456-44e4-8804-2de0c941edae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.731 253542 DEBUG nova.network.os_vif_util [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "f2a4b65b-419e-44be-9413-f01693268aa8", "address": "fa:16:3e:26:ed:fc", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2a4b65b-41", "ovs_interfaceid": "f2a4b65b-419e-44be-9413-f01693268aa8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.732 253542 DEBUG nova.network.os_vif_util [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.733 253542 DEBUG os_vif [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.734 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.735 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2a4b65b-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.735 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.737 253542 INFO os_vif [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ed:fc,bridge_name='br-int',has_traffic_filtering=True,id=f2a4b65b-419e-44be-9413-f01693268aa8,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf2a4b65b-41')
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.752 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.771 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f901cd64-886c-4acb-908b-689f7ed42438]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.779 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[108691f3-a3da-4524-9859-bc8048008ed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.818 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6b80ba2c-1369-4451-a092-ad742c31dfd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.830 253542 DEBUG nova.objects.instance [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'migration_context' on Instance uuid 2f20fb1c-0a44-4209-aa4a-020331708117 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.840 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5700870d-09f3-4f71-af4d-b7b57206be82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320632, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.841 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.841 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Ensure instance console log exists: /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.841 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.842 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.842 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.862 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fc601ee1-398a-4d47-9126-7180343cedf4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499086, 'tstamp': 499086}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320633, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499088, 'tstamp': 499088}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320633, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.863 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.865 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:55 compute-0 nova_compute[253538]: 2025-11-25 08:34:55.869 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.870 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.870 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.870 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.871 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.872 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9fa407fa-661b-4b02-b4f4-656f6ae34cd8 in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.874 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.891 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59bd7269-6b85-4835-99b8-0fa52089f67f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.932 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[df3d383c-8449-4e72-89f8-7d76c2578af8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.946 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0dacfcad-ca84-4da9-b0b1-e4ad6b081180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:34:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:55.984 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[83869591-d767-4577-a0d9-52b093e2e8f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.004 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[944043f7-a47c-49de-b787-6348e3faac95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bf3cbfa-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:8f:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499073, 'reachable_time': 17548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320641, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.027 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[95eb7f2c-9c20-4582-8fbf-fcb85d1bcee7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499086, 'tstamp': 499086}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320642, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bf3cbfa-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499088, 'tstamp': 499088}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320642, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.029 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.031 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.035 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.036 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bf3cbfa-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.036 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.036 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bf3cbfa-70, col_values=(('external_ids', {'iface-id': '98660c0c-0936-4c4d-9a89-87b784d8d5cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.037 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.136 253542 INFO nova.virt.libvirt.driver [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Deleting instance files /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae_del
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.137 253542 INFO nova.virt.libvirt.driver [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Deletion of /var/lib/nova/instances/52d39d67-b456-44e4-8804-2de0c941edae_del complete
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.245 253542 INFO nova.compute.manager [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Took 0.88 seconds to destroy the instance on the hypervisor.
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.246 253542 DEBUG oslo.service.loopingcall [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.246 253542 DEBUG nova.compute.manager [-] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.246 253542 DEBUG nova.network.neutron [-] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.271 253542 INFO nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Creating config drive at /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.291 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwuauc57n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.334 253542 DEBUG nova.network.neutron [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Successfully created port: 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.441 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwuauc57n" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.473 253542 DEBUG nova.storage.rbd_utils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] rbd image 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.478 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:56 compute-0 ceph-mon[75015]: pgmap v1553: 321 pgs: 321 active+clean; 437 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 913 KiB/s rd, 6.2 MiB/s wr, 192 op/s
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.669 253542 DEBUG oslo_concurrency.processutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config 420c5373-d9c4-4da0-9658-90eff9a19f8d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.670 253542 INFO nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Deleting local config drive /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d/disk.config because it was imported into RBD.
Nov 25 08:34:56 compute-0 NetworkManager[48915]: <info>  [1764059696.7202] manager: (tap9200cc12-92): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Nov 25 08:34:56 compute-0 kernel: tap9200cc12-92: entered promiscuous mode
Nov 25 08:34:56 compute-0 ovn_controller[152859]: 2025-11-25T08:34:56Z|00606|binding|INFO|Claiming lport 9200cc12-927d-418b-99c1-ca0421535979 for this chassis.
Nov 25 08:34:56 compute-0 ovn_controller[152859]: 2025-11-25T08:34:56Z|00607|binding|INFO|9200cc12-927d-418b-99c1-ca0421535979: Claiming fa:16:3e:73:f0:9b 10.100.0.11
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.733 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:f0:9b 10.100.0.11'], port_security=['fa:16:3e:73:f0:9b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '420c5373-d9c4-4da0-9658-90eff9a19f8d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c602cd86-40c0-467a-8b7a-b573e0a7cefa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9200cc12-927d-418b-99c1-ca0421535979) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.734 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9200cc12-927d-418b-99c1-ca0421535979 in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 bound to our chassis
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.736 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:34:56 compute-0 ovn_controller[152859]: 2025-11-25T08:34:56Z|00608|binding|INFO|Setting lport 9200cc12-927d-418b-99c1-ca0421535979 ovn-installed in OVS
Nov 25 08:34:56 compute-0 ovn_controller[152859]: 2025-11-25T08:34:56Z|00609|binding|INFO|Setting lport 9200cc12-927d-418b-99c1-ca0421535979 up in Southbound
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.749 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.755 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f52c02da-d870-47d1-a3f6-df50c73e39b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.757 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:56 compute-0 systemd-udevd[320699]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:34:56 compute-0 systemd-machined[215790]: New machine qemu-75-instance-0000003e.
Nov 25 08:34:56 compute-0 NetworkManager[48915]: <info>  [1764059696.7832] device (tap9200cc12-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:34:56 compute-0 NetworkManager[48915]: <info>  [1764059696.7840] device (tap9200cc12-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:34:56 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-0000003e.
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.792 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[cb519d5d-d374-4b3f-99f3-207b465c20f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.798 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9abf77f1-beb7-442d-b9c7-2ef022f065d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.828 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[784bb3da-b0f9-43b9-b07f-6cff0c57cd12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.844 253542 DEBUG nova.compute.manager [req-e81dd064-4199-439f-8119-829fddde841f req-a445ea1c-7353-4362-b2a9-fe5106ffdd05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-unplugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.845 253542 DEBUG oslo_concurrency.lockutils [req-e81dd064-4199-439f-8119-829fddde841f req-a445ea1c-7353-4362-b2a9-fe5106ffdd05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.845 253542 DEBUG oslo_concurrency.lockutils [req-e81dd064-4199-439f-8119-829fddde841f req-a445ea1c-7353-4362-b2a9-fe5106ffdd05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.845 253542 DEBUG oslo_concurrency.lockutils [req-e81dd064-4199-439f-8119-829fddde841f req-a445ea1c-7353-4362-b2a9-fe5106ffdd05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.846 253542 DEBUG nova.compute.manager [req-e81dd064-4199-439f-8119-829fddde841f req-a445ea1c-7353-4362-b2a9-fe5106ffdd05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-unplugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.846 253542 DEBUG nova.compute.manager [req-e81dd064-4199-439f-8119-829fddde841f req-a445ea1c-7353-4362-b2a9-fe5106ffdd05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-unplugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.846 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dfa281a3-716b-41a0-a28d-29fab07f32b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496834, 'reachable_time': 23814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320709, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.854 253542 INFO nova.network.neutron [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Port f2a4b65b-419e-44be-9413-f01693268aa8 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.855 253542 DEBUG nova.network.neutron [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [{"id": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "address": "fa:16:3e:4d:ce:d4", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fa407fa-66", "ovs_interfaceid": "9fa407fa-661b-4b02-b4f4-656f6ae34cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.870 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1c7bcc-0ea6-4da5-9a5b-71a12c35fcd7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap908154e6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496845, 'tstamp': 496845}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320710, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap908154e6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496848, 'tstamp': 496848}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320710, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.871 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.874 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.874 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.875 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.875 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:56.875 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.876 253542 DEBUG oslo_concurrency.lockutils [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Releasing lock "refresh_cache-52d39d67-b456-44e4-8804-2de0c941edae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:56 compute-0 nova_compute[253538]: 2025-11-25 08:34:56.896 253542 DEBUG oslo_concurrency.lockutils [None req-aabf42ea-bd42-4934-91c6-12ef85d0f85e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "interface-52d39d67-b456-44e4-8804-2de0c941edae-f2a4b65b-419e-44be-9413-f01693268aa8" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.121 253542 DEBUG nova.compute.manager [req-06fb5981-e24c-4b2c-aa98-792beaee7fc6 req-37960740-111f-459b-8c02-263dcff0e60b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.122 253542 DEBUG oslo_concurrency.lockutils [req-06fb5981-e24c-4b2c-aa98-792beaee7fc6 req-37960740-111f-459b-8c02-263dcff0e60b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.122 253542 DEBUG oslo_concurrency.lockutils [req-06fb5981-e24c-4b2c-aa98-792beaee7fc6 req-37960740-111f-459b-8c02-263dcff0e60b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.123 253542 DEBUG oslo_concurrency.lockutils [req-06fb5981-e24c-4b2c-aa98-792beaee7fc6 req-37960740-111f-459b-8c02-263dcff0e60b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.123 253542 DEBUG nova.compute.manager [req-06fb5981-e24c-4b2c-aa98-792beaee7fc6 req-37960740-111f-459b-8c02-263dcff0e60b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Processing event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.218 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 420c5373-d9c4-4da0-9658-90eff9a19f8d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.218 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059697.217494, 420c5373-d9c4-4da0-9658-90eff9a19f8d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.218 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] VM Started (Lifecycle Event)
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.220 253542 DEBUG nova.compute.manager [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.224 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.230 253542 INFO nova.virt.libvirt.driver [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance spawned successfully.
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.232 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.234 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.237 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.251 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.251 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.252 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.252 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.253 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.254 253542 DEBUG nova.virt.libvirt.driver [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.258 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.259 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059697.2177112, 420c5373-d9c4-4da0-9658-90eff9a19f8d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.259 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] VM Paused (Lifecycle Event)
Nov 25 08:34:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 445 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.5 MiB/s wr, 325 op/s
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.291 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.294 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059697.2267354, 420c5373-d9c4-4da0-9658-90eff9a19f8d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.294 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] VM Resumed (Lifecycle Event)
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.324 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.327 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.340 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.363 253542 DEBUG nova.compute.manager [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.422 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.423 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.423 253542 DEBUG nova.objects.instance [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.475 253542 DEBUG oslo_concurrency.lockutils [None req-5d8b5969-a11f-4efe-b1ba-bf5400b29765 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.590 253542 DEBUG nova.network.neutron [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Successfully updated port: 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.613 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "refresh_cache-2f20fb1c-0a44-4209-aa4a-020331708117" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.613 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquired lock "refresh_cache-2f20fb1c-0a44-4209-aa4a-020331708117" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.614 253542 DEBUG nova.network.neutron [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.733 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059682.732818, bf44124c-1a65-4bde-a777-043ae1a53557 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.733 253542 INFO nova.compute.manager [-] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] VM Stopped (Lifecycle Event)
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.752 253542 DEBUG nova.compute.manager [None req-7c0358f1-3e28-44c3-8787-6cd667603606 - - - - - -] [instance: bf44124c-1a65-4bde-a777-043ae1a53557] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.809 253542 DEBUG nova.network.neutron [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.936 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.937 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.938 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.938 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.939 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.940 253542 INFO nova.compute.manager [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Terminating instance
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.941 253542 DEBUG nova.compute.manager [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:34:57 compute-0 kernel: tap182a5a1a-c0 (unregistering): left promiscuous mode
Nov 25 08:34:57 compute-0 NetworkManager[48915]: <info>  [1764059697.9778] device (tap182a5a1a-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:34:57 compute-0 nova_compute[253538]: 2025-11-25 08:34:57.989 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:57 compute-0 ovn_controller[152859]: 2025-11-25T08:34:57Z|00610|binding|INFO|Releasing lport 182a5a1a-c06d-4265-857f-3ea363ae01c2 from this chassis (sb_readonly=0)
Nov 25 08:34:57 compute-0 ovn_controller[152859]: 2025-11-25T08:34:57Z|00611|binding|INFO|Setting lport 182a5a1a-c06d-4265-857f-3ea363ae01c2 down in Southbound
Nov 25 08:34:57 compute-0 ovn_controller[152859]: 2025-11-25T08:34:57Z|00612|binding|INFO|Removing iface tap182a5a1a-c0 ovn-installed in OVS
Nov 25 08:34:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:57.996 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:cb:10 10.100.0.3'], port_security=['fa:16:3e:35:cb:10 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd99c7a05-3cc3-4a8b-bce4-1185023a269f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-837c6e7b-bab2-4553-9d96-986f67153365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a89d18be-0725-45a6-b621-49391e35be2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=332be39e-bd54-4968-b056-968d29abecfa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=182a5a1a-c06d-4265-857f-3ea363ae01c2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:57.997 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 182a5a1a-c06d-4265-857f-3ea363ae01c2 in datapath 837c6e7b-bab2-4553-9d96-986f67153365 unbound from our chassis
Nov 25 08:34:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:57.999 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 837c6e7b-bab2-4553-9d96-986f67153365
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.012 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79a31f81-a187-40f2-8455-5f3e73dd316b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:58 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Nov 25 08:34:58 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003f.scope: Consumed 5.036s CPU time.
Nov 25 08:34:58 compute-0 systemd-machined[215790]: Machine qemu-72-instance-0000003f terminated.
Nov 25 08:34:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.040 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5fa50d-17a5-46c2-9c02-97028853e315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.043 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc62b71-289e-4456-b9c4-2deefe1b4398]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.085 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ba86a56b-11d3-4acf-bee1-c148d2b4b844]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.106 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d642e0be-1597-4599-b3d8-e8b92cc9df7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap837c6e7b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:6e:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504680, 'reachable_time': 28680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320763, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.125 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[666c6dd3-031a-49fc-b45c-88fc1b212269]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap837c6e7b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504693, 'tstamp': 504693}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320764, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap837c6e7b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504696, 'tstamp': 504696}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320764, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.126 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap837c6e7b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.178 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap837c6e7b-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.179 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.179 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap837c6e7b-b0, col_values=(('external_ids', {'iface-id': 'f915f58f-151e-47fb-a373-5fd022b7fd3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:58.179 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.181 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.188 253542 INFO nova.virt.libvirt.driver [-] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Instance destroyed successfully.
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.188 253542 DEBUG nova.objects.instance [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'resources' on Instance uuid d99c7a05-3cc3-4a8b-bce4-1185023a269f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.198 253542 DEBUG nova.virt.libvirt.vif [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-1',id=63,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_user_name='tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:54Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=d99c7a05-3cc3-4a8b-bce4-1185023a269f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.199 253542 DEBUG nova.network.os_vif_util [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "address": "fa:16:3e:35:cb:10", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap182a5a1a-c0", "ovs_interfaceid": "182a5a1a-c06d-4265-857f-3ea363ae01c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.200 253542 DEBUG nova.network.os_vif_util [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:cb:10,bridge_name='br-int',has_traffic_filtering=True,id=182a5a1a-c06d-4265-857f-3ea363ae01c2,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap182a5a1a-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.200 253542 DEBUG os_vif [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:cb:10,bridge_name='br-int',has_traffic_filtering=True,id=182a5a1a-c06d-4265-857f-3ea363ae01c2,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap182a5a1a-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.202 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.203 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap182a5a1a-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.207 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.209 253542 INFO os_vif [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:cb:10,bridge_name='br-int',has_traffic_filtering=True,id=182a5a1a-c06d-4265-857f-3ea363ae01c2,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap182a5a1a-c0')
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.451 253542 DEBUG nova.network.neutron [-] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.509 253542 INFO nova.compute.manager [-] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Took 2.26 seconds to deallocate network for instance.
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.603 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.603 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:58 compute-0 ceph-mon[75015]: pgmap v1554: 321 pgs: 321 active+clean; 445 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.5 MiB/s wr, 325 op/s
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.782 253542 DEBUG oslo_concurrency.processutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.952 253542 DEBUG nova.network.neutron [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Updating instance_info_cache with network_info: [{"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.987 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Releasing lock "refresh_cache-2f20fb1c-0a44-4209-aa4a-020331708117" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.988 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Instance network_info: |[{"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:34:58 compute-0 nova_compute[253538]: 2025-11-25 08:34:58.992 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Start _get_guest_xml network_info=[{"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.002 253542 WARNING nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.012 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.013 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.013 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.013 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.013 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.014 253542 WARNING nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for instance with vm_state deleted and task_state None.
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.014 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.014 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.014 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.014 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.015 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.015 253542 WARNING nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for instance with vm_state deleted and task_state None.
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.015 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.015 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.015 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.016 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.016 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.016 253542 WARNING nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for instance with vm_state deleted and task_state None.
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.016 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-unplugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.016 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.016 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.017 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.017 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-unplugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.017 253542 WARNING nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-unplugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for instance with vm_state deleted and task_state None.
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.017 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.017 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "52d39d67-b456-44e4-8804-2de0c941edae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.017 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.018 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.018 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] No waiting events found dispatching network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.018 253542 WARNING nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received unexpected event network-vif-plugged-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 for instance with vm_state deleted and task_state None.
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.018 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received event network-vif-unplugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.018 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.019 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.019 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.019 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] No waiting events found dispatching network-vif-unplugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.019 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received event network-vif-unplugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.019 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received event network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.019 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.020 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.020 253542 DEBUG oslo_concurrency.lockutils [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.020 253542 DEBUG nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] No waiting events found dispatching network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.020 253542 WARNING nova.compute.manager [req-a1f6d729-cdf4-4e2d-af49-d7a4fa845c0a req-09ad2581-6ab9-419f-a41d-47b02ba29df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received unexpected event network-vif-plugged-182a5a1a-c06d-4265-857f-3ea363ae01c2 for instance with vm_state active and task_state deleting.
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.021 253542 DEBUG nova.virt.libvirt.host [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.022 253542 DEBUG nova.virt.libvirt.host [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.025 253542 DEBUG nova.virt.libvirt.host [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.026 253542 DEBUG nova.virt.libvirt.host [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.026 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.026 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.027 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.027 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.027 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.027 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.027 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.027 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.028 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.028 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.028 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.028 253542 DEBUG nova.virt.hardware [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.032 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.112 253542 INFO nova.virt.libvirt.driver [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Deleting instance files /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f_del
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.114 253542 INFO nova.virt.libvirt.driver [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Deletion of /var/lib/nova/instances/d99c7a05-3cc3-4a8b-bce4-1185023a269f_del complete
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.165 253542 INFO nova.compute.manager [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Took 1.22 seconds to destroy the instance on the hypervisor.
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.166 253542 DEBUG oslo.service.loopingcall [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.166 253542 DEBUG nova.compute.manager [-] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.166 253542 DEBUG nova.network.neutron [-] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.194 253542 DEBUG nova.compute.manager [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.195 253542 DEBUG oslo_concurrency.lockutils [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.195 253542 DEBUG oslo_concurrency.lockutils [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.195 253542 DEBUG oslo_concurrency.lockutils [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.195 253542 DEBUG nova.compute.manager [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] No waiting events found dispatching network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.196 253542 WARNING nova.compute.manager [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received unexpected event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 for instance with vm_state active and task_state None.
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.196 253542 DEBUG nova.compute.manager [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received event network-changed-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.196 253542 DEBUG nova.compute.manager [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Refreshing instance network info cache due to event network-changed-69b7733c-f471-4b5d-9fe9-b9b25d5836d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.196 253542 DEBUG oslo_concurrency.lockutils [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2f20fb1c-0a44-4209-aa4a-020331708117" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.197 253542 DEBUG oslo_concurrency.lockutils [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2f20fb1c-0a44-4209-aa4a-020331708117" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.197 253542 DEBUG nova.network.neutron [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Refreshing network info cache for port 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:34:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 453 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 7.0 MiB/s wr, 392 op/s
Nov 25 08:34:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:34:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3581526924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.298 253542 DEBUG oslo_concurrency.processutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.307 253542 DEBUG nova.compute.provider_tree [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.323 253542 DEBUG nova.scheduler.client.report [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.355 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3819003180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.479 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.503 253542 DEBUG nova.storage.rbd_utils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 2f20fb1c-0a44-4209-aa4a-020331708117_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.509 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.556 253542 INFO nova.scheduler.client.report [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Deleted allocations for instance 52d39d67-b456-44e4-8804-2de0c941edae
Nov 25 08:34:59 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3581526924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:34:59 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3819003180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.644 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.645 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.645 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.645 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.646 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.647 253542 INFO nova.compute.manager [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Terminating instance
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.649 253542 DEBUG nova.compute.manager [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:34:59 compute-0 kernel: tap9200cc12-92 (unregistering): left promiscuous mode
Nov 25 08:34:59 compute-0 NetworkManager[48915]: <info>  [1764059699.6965] device (tap9200cc12-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:34:59 compute-0 ovn_controller[152859]: 2025-11-25T08:34:59Z|00613|binding|INFO|Releasing lport 9200cc12-927d-418b-99c1-ca0421535979 from this chassis (sb_readonly=0)
Nov 25 08:34:59 compute-0 ovn_controller[152859]: 2025-11-25T08:34:59Z|00614|binding|INFO|Setting lport 9200cc12-927d-418b-99c1-ca0421535979 down in Southbound
Nov 25 08:34:59 compute-0 ovn_controller[152859]: 2025-11-25T08:34:59Z|00615|binding|INFO|Removing iface tap9200cc12-92 ovn-installed in OVS
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.703 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.737 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.755 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:f0:9b 10.100.0.11'], port_security=['fa:16:3e:73:f0:9b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '420c5373-d9c4-4da0-9658-90eff9a19f8d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c602cd86-40c0-467a-8b7a-b573e0a7cefa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9200cc12-927d-418b-99c1-ca0421535979) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:34:59 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Nov 25 08:34:59 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000003e.scope: Consumed 2.820s CPU time.
Nov 25 08:34:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.756 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9200cc12-927d-418b-99c1-ca0421535979 in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis
Nov 25 08:34:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.759 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:34:59 compute-0 systemd-machined[215790]: Machine qemu-75-instance-0000003e terminated.
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.761 253542 DEBUG oslo_concurrency.lockutils [None req-d67c3dc3-4176-4c68-afa1-6ea74d5b348e 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "52d39d67-b456-44e4-8804-2de0c941edae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.772 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ed165fcc-b3ce-42e2-9cc9-d7388f464d31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.810 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[985a7323-2bcb-416e-b0c1-67646a92db9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.812 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[607becfd-411a-4784-bea8-fe3eede2648b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.852 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0804ae8a-75c4-4e4b-835a-a821c5795a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.869 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca0999e-e2aa-4a36-b4e0-88fedd8460d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496834, 'reachable_time': 23814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320888, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:59 compute-0 NetworkManager[48915]: <info>  [1764059699.8709] manager: (tap9200cc12-92): new Tun device (/org/freedesktop/NetworkManager/Devices/275)
Nov 25 08:34:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.887 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3d640a2c-ef89-4eb9-8616-f51b7f2e971e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap908154e6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496845, 'tstamp': 496845}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320892, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap908154e6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496848, 'tstamp': 496848}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320892, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:34:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.889 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.893 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.895 253542 INFO nova.virt.libvirt.driver [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Instance destroyed successfully.
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.895 253542 DEBUG nova.objects.instance [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'resources' on Instance uuid 420c5373-d9c4-4da0-9658-90eff9a19f8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.897 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.898 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.898 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.898 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:34:59.899 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.907 253542 DEBUG nova.virt.libvirt.vif [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:34:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-614557291',display_name='tempest-ServerActionsTestJSON-server-1864891791',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-614557291',id=62,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-x0c0gdce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio'
,image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:57Z,user_data=None,user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=420c5373-d9c4-4da0-9658-90eff9a19f8d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.907 253542 DEBUG nova.network.os_vif_util [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "9200cc12-927d-418b-99c1-ca0421535979", "address": "fa:16:3e:73:f0:9b", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9200cc12-92", "ovs_interfaceid": "9200cc12-927d-418b-99c1-ca0421535979", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.908 253542 DEBUG nova.network.os_vif_util [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.908 253542 DEBUG os_vif [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.909 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.910 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9200cc12-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.913 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.915 253542 INFO os_vif [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:f0:9b,bridge_name='br-int',has_traffic_filtering=True,id=9200cc12-927d-418b-99c1-ca0421535979,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9200cc12-92')
Nov 25 08:34:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:34:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1172560043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.967 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.968 253542 DEBUG nova.virt.libvirt.vif [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-756634894',display_name='tempest-DeleteServersTestJSON-server-756634894',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-756634894',id=66,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-6zflsg5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-209
5694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:54Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=2f20fb1c-0a44-4209-aa4a-020331708117,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.969 253542 DEBUG nova.network.os_vif_util [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.970 253542 DEBUG nova.network.os_vif_util [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ff:8f,bridge_name='br-int',has_traffic_filtering=True,id=69b7733c-f471-4b5d-9fe9-b9b25d5836d9,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69b7733c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.971 253542 DEBUG nova.objects.instance [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f20fb1c-0a44-4209-aa4a-020331708117 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.988 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:34:59 compute-0 nova_compute[253538]:   <uuid>2f20fb1c-0a44-4209-aa4a-020331708117</uuid>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   <name>instance-00000042</name>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <nova:name>tempest-DeleteServersTestJSON-server-756634894</nova:name>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:34:59</nova:creationTime>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:34:59 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:34:59 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:34:59 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:34:59 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:34:59 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:34:59 compute-0 nova_compute[253538]:         <nova:user uuid="a649c62aaacd4f01a93ea978066f5976">tempest-DeleteServersTestJSON-2095694504-project-member</nova:user>
Nov 25 08:34:59 compute-0 nova_compute[253538]:         <nova:project uuid="a9c243220ecd4ba3af10cdbc0ea76bd6">tempest-DeleteServersTestJSON-2095694504</nova:project>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:34:59 compute-0 nova_compute[253538]:         <nova:port uuid="69b7733c-f471-4b5d-9fe9-b9b25d5836d9">
Nov 25 08:34:59 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <system>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <entry name="serial">2f20fb1c-0a44-4209-aa4a-020331708117</entry>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <entry name="uuid">2f20fb1c-0a44-4209-aa4a-020331708117</entry>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     </system>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   <os>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   </os>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   <features>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   </features>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/2f20fb1c-0a44-4209-aa4a-020331708117_disk">
Nov 25 08:34:59 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:59 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/2f20fb1c-0a44-4209-aa4a-020331708117_disk.config">
Nov 25 08:34:59 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       </source>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:34:59 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:a2:ff:8f"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <target dev="tap69b7733c-f4"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117/console.log" append="off"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <video>
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     </video>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:34:59 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:34:59 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:34:59 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:34:59 compute-0 nova_compute[253538]: </domain>
Nov 25 08:34:59 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.998 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Preparing to wait for external event network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.998 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.998 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.998 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:34:59 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.999 253542 DEBUG nova.virt.libvirt.vif [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:34:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-756634894',display_name='tempest-DeleteServersTestJSON-server-756634894',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-756634894',id=66,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-6zflsg5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:34:54Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=2f20fb1c-0a44-4209-aa4a-020331708117,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:34:59.999 253542 DEBUG nova.network.os_vif_util [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.000 253542 DEBUG nova.network.os_vif_util [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ff:8f,bridge_name='br-int',has_traffic_filtering=True,id=69b7733c-f471-4b5d-9fe9-b9b25d5836d9,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69b7733c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.000 253542 DEBUG os_vif [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ff:8f,bridge_name='br-int',has_traffic_filtering=True,id=69b7733c-f471-4b5d-9fe9-b9b25d5836d9,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69b7733c-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.003 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.004 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.004 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.010 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.011 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69b7733c-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.011 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap69b7733c-f4, col_values=(('external_ids', {'iface-id': '69b7733c-f471-4b5d-9fe9-b9b25d5836d9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:ff:8f', 'vm-uuid': '2f20fb1c-0a44-4209-aa4a-020331708117'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:00 compute-0 NetworkManager[48915]: <info>  [1764059700.0152] manager: (tap69b7733c-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.015 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.017 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.021 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.022 253542 INFO os_vif [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ff:8f,bridge_name='br-int',has_traffic_filtering=True,id=69b7733c-f471-4b5d-9fe9-b9b25d5836d9,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69b7733c-f4')
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.041 253542 DEBUG nova.network.neutron [-] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.068 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.068 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.069 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] No VIF found with MAC fa:16:3e:a2:ff:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.069 253542 INFO nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Using config drive
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.095 253542 DEBUG nova.storage.rbd_utils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 2f20fb1c-0a44-4209-aa4a-020331708117_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.107 253542 INFO nova.compute.manager [-] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Took 0.94 seconds to deallocate network for instance.
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.172 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.173 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.324 253542 DEBUG oslo_concurrency.processutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.375 253542 DEBUG nova.network.neutron [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Updated VIF entry in instance network info cache for port 69b7733c-f471-4b5d-9fe9-b9b25d5836d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.376 253542 DEBUG nova.network.neutron [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Updating instance_info_cache with network_info: [{"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.389 253542 DEBUG oslo_concurrency.lockutils [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2f20fb1c-0a44-4209-aa4a-020331708117" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.389 253542 DEBUG nova.compute.manager [req-987a49e5-3794-4f15-8b59-420fe96250f8 req-95d4a563-e7e0-499b-b654-7927bd7c2c28 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Received event network-vif-deleted-9fa407fa-661b-4b02-b4f4-656f6ae34cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.467 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.468 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.469 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.470 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.470 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.472 253542 INFO nova.compute.manager [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Terminating instance
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.474 253542 DEBUG nova.compute.manager [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.526 253542 INFO nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Creating config drive at /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117/disk.config
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.531 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ad94_ok execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.680 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ad94_ok" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.748 253542 DEBUG nova.storage.rbd_utils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] rbd image 2f20fb1c-0a44-4209-aa4a-020331708117_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.761 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117/disk.config 2f20fb1c-0a44-4209-aa4a-020331708117_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:35:00 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1420102889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:00 compute-0 ceph-mon[75015]: pgmap v1555: 321 pgs: 321 active+clean; 453 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 7.0 MiB/s wr, 392 op/s
Nov 25 08:35:00 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1172560043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:00 compute-0 kernel: tap1682bdaf-1d (unregistering): left promiscuous mode
Nov 25 08:35:00 compute-0 NetworkManager[48915]: <info>  [1764059700.7762] device (tap1682bdaf-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:35:00 compute-0 ovn_controller[152859]: 2025-11-25T08:35:00Z|00616|binding|INFO|Releasing lport 1682bdaf-1dd6-4036-8d17-a169dbaaca8f from this chassis (sb_readonly=0)
Nov 25 08:35:00 compute-0 ovn_controller[152859]: 2025-11-25T08:35:00Z|00617|binding|INFO|Setting lport 1682bdaf-1dd6-4036-8d17-a169dbaaca8f down in Southbound
Nov 25 08:35:00 compute-0 ovn_controller[152859]: 2025-11-25T08:35:00Z|00618|binding|INFO|Removing iface tap1682bdaf-1d ovn-installed in OVS
Nov 25 08:35:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:00.808 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:42:da 10.100.0.9'], port_security=['fa:16:3e:60:42:da 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5c6656ef-7ad0-4eb4-a597-aa9a8078805b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d8307470c794815a028592990efca57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '379bb9ab-2c12-4eea-bd54-6b8a24f607d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51a82dc-da84-4ad1-90c6-51b8e242435f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1682bdaf-1dd6-4036-8d17-a169dbaaca8f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:00.810 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1682bdaf-1dd6-4036-8d17-a169dbaaca8f in datapath 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe unbound from our chassis
Nov 25 08:35:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:00.811 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:35:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:00.812 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8e108de8-9efb-484e-9cca-947de2a31918]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:00.813 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe namespace which is not needed anymore
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.827 253542 DEBUG oslo_concurrency.processutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:00 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Nov 25 08:35:00 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003a.scope: Consumed 15.229s CPU time.
Nov 25 08:35:00 compute-0 systemd-machined[215790]: Machine qemu-67-instance-0000003a terminated.
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.834 253542 DEBUG nova.compute.provider_tree [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.847 253542 DEBUG nova.scheduler.client.report [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.875 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.911 253542 INFO nova.scheduler.client.report [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Deleted allocations for instance d99c7a05-3cc3-4a8b-bce4-1185023a269f
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.918 253542 INFO nova.virt.libvirt.driver [-] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Instance destroyed successfully.
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.918 253542 DEBUG nova.objects.instance [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lazy-loading 'resources' on Instance uuid 5c6656ef-7ad0-4eb4-a597-aa9a8078805b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.930 253542 DEBUG nova.virt.libvirt.vif [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632172140',display_name='tempest-tempest.common.compute-instance-632172140',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632172140',id=58,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI45VX5ozelo+A26Yolp08RcM4mInQuWdkCriWdcAJEvmrG1M64+l6O6qrC1PEYY9Zv1hNrRdaOuY2Hx3qn6BPjsgdWVfumtuAipvIEJaR4T3qitr35JgGW4++DGBtyuWA==',key_name='tempest-keypair-1507828644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:34:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d8307470c794815a028592990efca57',ramdisk_id='',reservation_id='r-yi71ppnf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1895576257',owner_user_name='tempest-AttachInterfacesTestJSON-1895576257-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='329d8dc9d78743d4a09a38fef3a9143d',uuid=5c6656ef-7ad0-4eb4-a597-aa9a8078805b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.932 253542 DEBUG nova.network.os_vif_util [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converting VIF {"id": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "address": "fa:16:3e:60:42:da", "network": {"id": "9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2079765623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d8307470c794815a028592990efca57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1682bdaf-1d", "ovs_interfaceid": "1682bdaf-1dd6-4036-8d17-a169dbaaca8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.933 253542 DEBUG nova.network.os_vif_util [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:42:da,bridge_name='br-int',has_traffic_filtering=True,id=1682bdaf-1dd6-4036-8d17-a169dbaaca8f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1682bdaf-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.933 253542 DEBUG os_vif [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:42:da,bridge_name='br-int',has_traffic_filtering=True,id=1682bdaf-1dd6-4036-8d17-a169dbaaca8f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1682bdaf-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.935 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.935 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1682bdaf-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.937 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.942 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.944 253542 INFO os_vif [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:42:da,bridge_name='br-int',has_traffic_filtering=True,id=1682bdaf-1dd6-4036-8d17-a169dbaaca8f,network=Network(9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1682bdaf-1d')
Nov 25 08:35:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:35:00 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[315506]: [NOTICE]   (315510) : haproxy version is 2.8.14-c23fe91
Nov 25 08:35:00 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[315506]: [NOTICE]   (315510) : path to executable is /usr/sbin/haproxy
Nov 25 08:35:00 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[315506]: [WARNING]  (315510) : Exiting Master process...
Nov 25 08:35:00 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[315506]: [ALERT]    (315510) : Current worker (315512) exited with code 143 (Terminated)
Nov 25 08:35:00 compute-0 neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe[315506]: [WARNING]  (315510) : All workers exited. Exiting... (0)
Nov 25 08:35:00 compute-0 systemd[1]: libpod-fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc.scope: Deactivated successfully.
Nov 25 08:35:00 compute-0 nova_compute[253538]: 2025-11-25 08:35:00.995 253542 DEBUG oslo_concurrency.lockutils [None req-62ab5a8c-f0ba-4dbd-86c2-7733e309b0ca dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "d99c7a05-3cc3-4a8b-bce4-1185023a269f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:01 compute-0 podman[321034]: 2025-11-25 08:35:01.003083965 +0000 UTC m=+0.070111345 container died fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 08:35:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 416 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 6.4 MiB/s wr, 437 op/s
Nov 25 08:35:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc-userdata-shm.mount: Deactivated successfully.
Nov 25 08:35:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-17ce801fda8e4bb6e919da95eaa73f45324ff65118d3fb3c246879cdb73c93b9-merged.mount: Deactivated successfully.
Nov 25 08:35:01 compute-0 podman[321034]: 2025-11-25 08:35:01.449767131 +0000 UTC m=+0.516794501 container cleanup fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:35:01 compute-0 systemd[1]: libpod-conmon-fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc.scope: Deactivated successfully.
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.485 253542 DEBUG oslo_concurrency.processutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117/disk.config 2f20fb1c-0a44-4209-aa4a-020331708117_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.486 253542 INFO nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Deleting local config drive /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117/disk.config because it was imported into RBD.
Nov 25 08:35:01 compute-0 podman[321085]: 2025-11-25 08:35:01.531148908 +0000 UTC m=+0.057799835 container remove fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.540 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fedb1dd7-f100-47d0-b7c4-ecfabe31ab31]: (4, ('Tue Nov 25 08:35:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe (fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc)\nfd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc\nTue Nov 25 08:35:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe (fd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc)\nfd84595ec77f0159adf977cc77075b96e42e38dd6f668ef949dec2535d8255cc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.541 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[18e10e87-5976-4d9b-8f82-e90d27578346]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.542 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bf3cbfa-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:01 compute-0 kernel: tap9bf3cbfa-70: left promiscuous mode
Nov 25 08:35:01 compute-0 kernel: tap69b7733c-f4: entered promiscuous mode
Nov 25 08:35:01 compute-0 systemd-udevd[320899]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:35:01 compute-0 NetworkManager[48915]: <info>  [1764059701.5539] manager: (tap69b7733c-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/277)
Nov 25 08:35:01 compute-0 NetworkManager[48915]: <info>  [1764059701.5641] device (tap69b7733c-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:35:01 compute-0 NetworkManager[48915]: <info>  [1764059701.5652] device (tap69b7733c-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:35:01 compute-0 ovn_controller[152859]: 2025-11-25T08:35:01Z|00619|binding|INFO|Claiming lport 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 for this chassis.
Nov 25 08:35:01 compute-0 ovn_controller[152859]: 2025-11-25T08:35:01Z|00620|binding|INFO|69b7733c-f471-4b5d-9fe9-b9b25d5836d9: Claiming fa:16:3e:a2:ff:8f 10.100.0.4
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.572 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.577 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ac2f7c9b-585b-428d-b529-34b8467df6cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.589 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:ff:8f 10.100.0.4'], port_security=['fa:16:3e:a2:ff:8f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2f20fb1c-0a44-4209-aa4a-020331708117', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=69b7733c-f471-4b5d-9fe9-b9b25d5836d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:01 compute-0 ovn_controller[152859]: 2025-11-25T08:35:01Z|00621|binding|INFO|Setting lport 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 ovn-installed in OVS
Nov 25 08:35:01 compute-0 ovn_controller[152859]: 2025-11-25T08:35:01Z|00622|binding|INFO|Setting lport 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 up in Southbound
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.592 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1b29a1e8-935f-44e7-995d-6bb973ff3fb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.594 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cc757cfd-d497-45d8-9b8a-02f3a59821a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 systemd-machined[215790]: New machine qemu-76-instance-00000042.
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.603 253542 INFO nova.virt.libvirt.driver [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Deleting instance files /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d_del
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.604 253542 INFO nova.virt.libvirt.driver [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Deletion of /var/lib/nova/instances/420c5373-d9c4-4da0-9658-90eff9a19f8d_del complete
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.607 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.609 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6161fa95-5712-42c2-b994-7cd45772ff1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499067, 'reachable_time': 44139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321114, 'error': None, 'target': 'ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.611 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bf3cbfa-7e0d-4c98-99a2-4ca14fb6bbbe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.612 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a606b700-b5ad-4272-b5fb-97e8ba11876c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.612 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 unbound from our chassis
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.614 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 08:35:01 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-00000042.
Nov 25 08:35:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d9bf3cbfa\x2d7e0d\x2d4c98\x2d99a2\x2d4ca14fb6bbbe.mount: Deactivated successfully.
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.635 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f2099ec5-b240-4899-a95f-d16f20a43f22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.638 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa66e51b8-e1 in ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.639 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa66e51b8-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.640 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[281f25ac-2ba9-4050-91b1-7b6ee457111f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.641 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e3dd98c0-4293-4f9e-a13d-2fe2590be237]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.657 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[9d421845-85b2-41b2-bee9-614c1106baa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.685 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b70f6d29-74cf-4f7f-b4d6-511bac96dc52]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.695 253542 INFO nova.compute.manager [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Took 2.05 seconds to destroy the instance on the hypervisor.
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.695 253542 DEBUG oslo.service.loopingcall [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.695 253542 DEBUG nova.compute.manager [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.696 253542 DEBUG nova.network.neutron [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.723 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[879b237f-e2c3-41f9-89f4-6ea7a0a4b8c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 NetworkManager[48915]: <info>  [1764059701.7312] manager: (tapa66e51b8-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/278)
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.731 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2abf82fc-cf6f-43bc-87e3-153518fa88d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.765 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9b89c7-1fdc-493d-8f66-5f8e6b0902a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.769 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a53d6770-305e-4a8b-a549-19ab55a989a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1420102889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:01 compute-0 ceph-mon[75015]: pgmap v1556: 321 pgs: 321 active+clean; 416 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 6.4 MiB/s wr, 437 op/s
Nov 25 08:35:01 compute-0 NetworkManager[48915]: <info>  [1764059701.8014] device (tapa66e51b8-e0): carrier: link connected
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.809 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[51784082-e1b0-4334-8e34-218b8aa904ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.832 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1e4ac16f-851e-4b47-ad5a-45c37df44baa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505640, 'reachable_time': 36641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321146, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.849 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2d21f907-6aa2-4669-b4ad-b86c521e91d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:2c20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505640, 'tstamp': 505640}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321147, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.869 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[57859526-096d-4356-ac31-7e9243d41b45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66e51b8-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:2c:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505640, 'reachable_time': 36641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321148, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.883 253542 DEBUG nova.compute.manager [req-ff4d7b5a-506b-4a62-b1d7-d7dd0167aa29 req-a8715817-61a4-4e62-bbea-6269116df64a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-unplugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.884 253542 DEBUG oslo_concurrency.lockutils [req-ff4d7b5a-506b-4a62-b1d7-d7dd0167aa29 req-a8715817-61a4-4e62-bbea-6269116df64a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.884 253542 DEBUG oslo_concurrency.lockutils [req-ff4d7b5a-506b-4a62-b1d7-d7dd0167aa29 req-a8715817-61a4-4e62-bbea-6269116df64a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.884 253542 DEBUG oslo_concurrency.lockutils [req-ff4d7b5a-506b-4a62-b1d7-d7dd0167aa29 req-a8715817-61a4-4e62-bbea-6269116df64a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.884 253542 DEBUG nova.compute.manager [req-ff4d7b5a-506b-4a62-b1d7-d7dd0167aa29 req-a8715817-61a4-4e62-bbea-6269116df64a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] No waiting events found dispatching network-vif-unplugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.885 253542 DEBUG nova.compute.manager [req-ff4d7b5a-506b-4a62-b1d7-d7dd0167aa29 req-a8715817-61a4-4e62-bbea-6269116df64a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-unplugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.892 253542 INFO nova.virt.libvirt.driver [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Deleting instance files /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b_del
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.892 253542 INFO nova.virt.libvirt.driver [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Deletion of /var/lib/nova/instances/5c6656ef-7ad0-4eb4-a597-aa9a8078805b_del complete
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.924 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[adea9f82-5ba9-45a6-91a7-f37058d9af73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.943 253542 DEBUG nova.compute.manager [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-unplugged-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.943 253542 DEBUG oslo_concurrency.lockutils [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.943 253542 DEBUG oslo_concurrency.lockutils [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.943 253542 DEBUG oslo_concurrency.lockutils [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.944 253542 DEBUG nova.compute.manager [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] No waiting events found dispatching network-vif-unplugged-9200cc12-927d-418b-99c1-ca0421535979 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.944 253542 DEBUG nova.compute.manager [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-unplugged-9200cc12-927d-418b-99c1-ca0421535979 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.944 253542 DEBUG nova.compute.manager [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Received event network-vif-deleted-182a5a1a-c06d-4265-857f-3ea363ae01c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.944 253542 DEBUG nova.compute.manager [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.944 253542 DEBUG oslo_concurrency.lockutils [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.944 253542 DEBUG oslo_concurrency.lockutils [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.945 253542 DEBUG oslo_concurrency.lockutils [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.945 253542 DEBUG nova.compute.manager [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] No waiting events found dispatching network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.945 253542 WARNING nova.compute.manager [req-2da4079d-f730-4344-bf7f-d672340c624e req-e69986b2-dada-417c-8af5-6e9544d6507c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received unexpected event network-vif-plugged-9200cc12-927d-418b-99c1-ca0421535979 for instance with vm_state active and task_state deleting.
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.978 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f726e2-8fe4-4139-b0dc-1a44b6904c55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.980 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.980 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.980 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa66e51b8-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:01 compute-0 NetworkManager[48915]: <info>  [1764059701.9827] manager: (tapa66e51b8-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Nov 25 08:35:01 compute-0 kernel: tapa66e51b8-e0: entered promiscuous mode
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.982 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.986 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa66e51b8-e0, col_values=(('external_ids', {'iface-id': 'c0d74b17-7eba-4096-a861-b9247777e01c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.987 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:01 compute-0 ovn_controller[152859]: 2025-11-25T08:35:01Z|00623|binding|INFO|Releasing lport c0d74b17-7eba-4096-a861-b9247777e01c from this chassis (sb_readonly=0)
Nov 25 08:35:01 compute-0 nova_compute[253538]: 2025-11-25 08:35:01.989 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.991 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.992 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c542b7c-aa02-40f2-8579-8fc3db65f722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.993 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/a66e51b8-ecb0-4289-a1b5-d5e379727721.pid.haproxy
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID a66e51b8-ecb0-4289-a1b5-d5e379727721
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:35:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:01.995 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'env', 'PROCESS_TAG=haproxy-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a66e51b8-ecb0-4289-a1b5-d5e379727721.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:35:02 compute-0 nova_compute[253538]: 2025-11-25 08:35:02.007 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:02 compute-0 nova_compute[253538]: 2025-11-25 08:35:02.046 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059702.0462747, 2f20fb1c-0a44-4209-aa4a-020331708117 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:02 compute-0 nova_compute[253538]: 2025-11-25 08:35:02.047 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] VM Started (Lifecycle Event)
Nov 25 08:35:02 compute-0 nova_compute[253538]: 2025-11-25 08:35:02.061 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:02 compute-0 nova_compute[253538]: 2025-11-25 08:35:02.070 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059702.0463982, 2f20fb1c-0a44-4209-aa4a-020331708117 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:02 compute-0 nova_compute[253538]: 2025-11-25 08:35:02.070 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] VM Paused (Lifecycle Event)
Nov 25 08:35:02 compute-0 nova_compute[253538]: 2025-11-25 08:35:02.084 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:02 compute-0 nova_compute[253538]: 2025-11-25 08:35:02.087 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:35:02 compute-0 nova_compute[253538]: 2025-11-25 08:35:02.103 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:35:02 compute-0 nova_compute[253538]: 2025-11-25 08:35:02.150 253542 INFO nova.compute.manager [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Took 1.68 seconds to destroy the instance on the hypervisor.
Nov 25 08:35:02 compute-0 nova_compute[253538]: 2025-11-25 08:35:02.150 253542 DEBUG oslo.service.loopingcall [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:35:02 compute-0 nova_compute[253538]: 2025-11-25 08:35:02.151 253542 DEBUG nova.compute.manager [-] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:35:02 compute-0 nova_compute[253538]: 2025-11-25 08:35:02.151 253542 DEBUG nova.network.neutron [-] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:35:02 compute-0 podman[321222]: 2025-11-25 08:35:02.357777864 +0000 UTC m=+0.057214359 container create ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:35:02 compute-0 systemd[1]: Started libpod-conmon-ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47.scope.
Nov 25 08:35:02 compute-0 podman[321222]: 2025-11-25 08:35:02.327792138 +0000 UTC m=+0.027228653 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:35:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686e5e97ca0c4d027b1dcfdd9419f6c15acdd73a33a167ddf2036db8efab363c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:02 compute-0 podman[321222]: 2025-11-25 08:35:02.451122212 +0000 UTC m=+0.150558717 container init ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 08:35:02 compute-0 podman[321222]: 2025-11-25 08:35:02.459142858 +0000 UTC m=+0.158579353 container start ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:35:02 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[321237]: [NOTICE]   (321241) : New worker (321243) forked
Nov 25 08:35:02 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[321237]: [NOTICE]   (321241) : Loading success.
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 361 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 4.4 MiB/s wr, 452 op/s
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.429 253542 DEBUG nova.network.neutron [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.446 253542 INFO nova.compute.manager [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Took 1.75 seconds to deallocate network for instance.
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.497 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.497 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.595 253542 DEBUG nova.network.neutron [-] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.616 253542 INFO nova.compute.manager [-] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Took 1.47 seconds to deallocate network for instance.
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.657 253542 DEBUG oslo_concurrency.processutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.702 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.805 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.806 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.806 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.806 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.806 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.808 253542 INFO nova.compute.manager [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Terminating instance
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.809 253542 DEBUG nova.compute.manager [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002689950531516572 of space, bias 1.0, pg target 0.8069851594549716 quantized to 32 (current 32)
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:35:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:35:03 compute-0 kernel: tapf9d205bf-07 (unregistering): left promiscuous mode
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.951 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.951 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.951 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.952 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.952 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.953 253542 INFO nova.compute.manager [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Terminating instance
Nov 25 08:35:03 compute-0 NetworkManager[48915]: <info>  [1764059703.9559] device (tapf9d205bf-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.957 253542 DEBUG nova.compute.manager [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:03 compute-0 ovn_controller[152859]: 2025-11-25T08:35:03Z|00624|binding|INFO|Releasing lport f9d205bf-0705-485d-b89c-f9b9c3cdccdb from this chassis (sb_readonly=0)
Nov 25 08:35:03 compute-0 ovn_controller[152859]: 2025-11-25T08:35:03Z|00625|binding|INFO|Setting lport f9d205bf-0705-485d-b89c-f9b9c3cdccdb down in Southbound
Nov 25 08:35:03 compute-0 ovn_controller[152859]: 2025-11-25T08:35:03Z|00626|binding|INFO|Removing iface tapf9d205bf-07 ovn-installed in OVS
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.971 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:03.976 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:08:11 10.100.0.6'], port_security=['fa:16:3e:9d:08:11 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a4fd9f97-b160-432d-9cb7-0fa3874c6468', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-837c6e7b-bab2-4553-9d96-986f67153365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a89d18be-0725-45a6-b621-49391e35be2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=332be39e-bd54-4968-b056-968d29abecfa, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f9d205bf-0705-485d-b89c-f9b9c3cdccdb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:03.977 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f9d205bf-0705-485d-b89c-f9b9c3cdccdb in datapath 837c6e7b-bab2-4553-9d96-986f67153365 unbound from our chassis
Nov 25 08:35:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:03.979 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 837c6e7b-bab2-4553-9d96-986f67153365
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.994 253542 DEBUG nova.compute.manager [req-379ba195-352e-4dd4-9bec-5f845e03ed74 req-a4d25dfd-98d8-4ca1-81ff-ac6327fef6cd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.994 253542 DEBUG oslo_concurrency.lockutils [req-379ba195-352e-4dd4-9bec-5f845e03ed74 req-a4d25dfd-98d8-4ca1-81ff-ac6327fef6cd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.994 253542 DEBUG oslo_concurrency.lockutils [req-379ba195-352e-4dd4-9bec-5f845e03ed74 req-a4d25dfd-98d8-4ca1-81ff-ac6327fef6cd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:03.994 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[04128c00-854f-4737-86ee-a1d67bd494bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.995 253542 DEBUG oslo_concurrency.lockutils [req-379ba195-352e-4dd4-9bec-5f845e03ed74 req-a4d25dfd-98d8-4ca1-81ff-ac6327fef6cd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.995 253542 DEBUG nova.compute.manager [req-379ba195-352e-4dd4-9bec-5f845e03ed74 req-a4d25dfd-98d8-4ca1-81ff-ac6327fef6cd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] No waiting events found dispatching network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.996 253542 WARNING nova.compute.manager [req-379ba195-352e-4dd4-9bec-5f845e03ed74 req-a4d25dfd-98d8-4ca1-81ff-ac6327fef6cd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received unexpected event network-vif-plugged-1682bdaf-1dd6-4036-8d17-a169dbaaca8f for instance with vm_state deleted and task_state None.
Nov 25 08:35:03 compute-0 nova_compute[253538]: 2025-11-25 08:35:03.996 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:04 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000040.scope: Deactivated successfully.
Nov 25 08:35:04 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000040.scope: Consumed 10.820s CPU time.
Nov 25 08:35:04 compute-0 systemd-machined[215790]: Machine qemu-73-instance-00000040 terminated.
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.022 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2b681a2a-d8eb-42c3-b74c-f5b16f6c623e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.025 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2c1fab-22af-4905-bec5-1302483d6bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.053 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[de6f9dec-01bb-48b8-835d-c8d5061c2bb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:04 compute-0 kernel: tap54f02527-a6 (unregistering): left promiscuous mode
Nov 25 08:35:04 compute-0 NetworkManager[48915]: <info>  [1764059704.0680] device (tap54f02527-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.078 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:04 compute-0 ovn_controller[152859]: 2025-11-25T08:35:04Z|00627|binding|INFO|Releasing lport 54f02527-a6c1-4059-aa22-2c19fc6f351d from this chassis (sb_readonly=0)
Nov 25 08:35:04 compute-0 ovn_controller[152859]: 2025-11-25T08:35:04Z|00628|binding|INFO|Setting lport 54f02527-a6c1-4059-aa22-2c19fc6f351d down in Southbound
Nov 25 08:35:04 compute-0 ovn_controller[152859]: 2025-11-25T08:35:04Z|00629|binding|INFO|Removing iface tap54f02527-a6 ovn-installed in OVS
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.080 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.083 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc7d6d5-0582-4f8f-9d20-6ec264311e8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap837c6e7b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:6e:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504680, 'reachable_time': 28680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321283, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.088 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:30:f0 10.100.0.10'], port_security=['fa:16:3e:eb:30:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e07ccbcb-d60d-4c15-95c2-9f5046ab99a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-837c6e7b-bab2-4553-9d96-986f67153365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce0bc5c65f2f47a9a854ec892fe53bc8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a89d18be-0725-45a6-b621-49391e35be2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=332be39e-bd54-4968-b056-968d29abecfa, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=54f02527-a6c1-4059-aa22-2c19fc6f351d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.105 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.107 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9d81ffb5-802b-42f2-82f1-ea7f45585280]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap837c6e7b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504693, 'tstamp': 504693}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321288, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap837c6e7b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504696, 'tstamp': 504696}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321288, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.115 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap837c6e7b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.116 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.124 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.125 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap837c6e7b-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.125 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.126 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap837c6e7b-b0, col_values=(('external_ids', {'iface-id': 'f915f58f-151e-47fb-a373-5fd022b7fd3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.126 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.127 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 54f02527-a6c1-4059-aa22-2c19fc6f351d in datapath 837c6e7b-bab2-4553-9d96-986f67153365 unbound from our chassis
Nov 25 08:35:04 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000041.scope: Deactivated successfully.
Nov 25 08:35:04 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000041.scope: Consumed 9.233s CPU time.
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.129 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 837c6e7b-bab2-4553-9d96-986f67153365, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.129 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ccb488-b107-4d4f-8c37-b590037e9131]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.130 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365 namespace which is not needed anymore
Nov 25 08:35:04 compute-0 systemd-machined[215790]: Machine qemu-74-instance-00000041 terminated.
Nov 25 08:35:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:35:04 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3635438713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.178 253542 DEBUG oslo_concurrency.processutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.187 253542 DEBUG nova.compute.provider_tree [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.195 253542 INFO nova.virt.libvirt.driver [-] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Instance destroyed successfully.
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.196 253542 DEBUG nova.objects.instance [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'resources' on Instance uuid e07ccbcb-d60d-4c15-95c2-9f5046ab99a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.202 253542 DEBUG nova.scheduler.client.report [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.213 253542 DEBUG nova.virt.libvirt.vif [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-3',id=65,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-11-25T08:34:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_user_name='tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:55Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=e07ccbcb-d60d-4c15-95c2-9f5046ab99a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.214 253542 DEBUG nova.network.os_vif_util [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "address": "fa:16:3e:eb:30:f0", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f02527-a6", "ovs_interfaceid": "54f02527-a6c1-4059-aa22-2c19fc6f351d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.215 253542 DEBUG nova.network.os_vif_util [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:f0,bridge_name='br-int',has_traffic_filtering=True,id=54f02527-a6c1-4059-aa22-2c19fc6f351d,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f02527-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.215 253542 DEBUG os_vif [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:f0,bridge_name='br-int',has_traffic_filtering=True,id=54f02527-a6c1-4059-aa22-2c19fc6f351d,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f02527-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.218 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54f02527-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.220 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.222 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.224 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.227 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.233 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.235 253542 INFO os_vif [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:f0,bridge_name='br-int',has_traffic_filtering=True,id=54f02527-a6c1-4059-aa22-2c19fc6f351d,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f02527-a6')
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.265 253542 INFO nova.virt.libvirt.driver [-] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Instance destroyed successfully.
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.266 253542 DEBUG nova.objects.instance [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lazy-loading 'resources' on Instance uuid a4fd9f97-b160-432d-9cb7-0fa3874c6468 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:04 compute-0 neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365[320001]: [NOTICE]   (320027) : haproxy version is 2.8.14-c23fe91
Nov 25 08:35:04 compute-0 neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365[320001]: [NOTICE]   (320027) : path to executable is /usr/sbin/haproxy
Nov 25 08:35:04 compute-0 neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365[320001]: [WARNING]  (320027) : Exiting Master process...
Nov 25 08:35:04 compute-0 neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365[320001]: [ALERT]    (320027) : Current worker (320037) exited with code 143 (Terminated)
Nov 25 08:35:04 compute-0 neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365[320001]: [WARNING]  (320027) : All workers exited. Exiting... (0)
Nov 25 08:35:04 compute-0 systemd[1]: libpod-3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430.scope: Deactivated successfully.
Nov 25 08:35:04 compute-0 podman[321324]: 2025-11-25 08:35:04.280414496 +0000 UTC m=+0.047467636 container died 3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.282 253542 DEBUG nova.virt.libvirt.vif [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1822986507',display_name='tempest-ListServersNegativeTestJSON-server-1822986507-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1822986507-2',id=64,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-25T08:34:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce0bc5c65f2f47a9a854ec892fe53bc8',ramdisk_id='',reservation_id='r-prpznqfv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-704382836',owner_user_name='tempest-ListServersNegativeTestJSON-704382836-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:34:53Z,user_data=None,user_id='dc986518148d44de9f5908ed5be317bd',uuid=a4fd9f97-b160-432d-9cb7-0fa3874c6468,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.282 253542 DEBUG nova.network.os_vif_util [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converting VIF {"id": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "address": "fa:16:3e:9d:08:11", "network": {"id": "837c6e7b-bab2-4553-9d96-986f67153365", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-60200676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce0bc5c65f2f47a9a854ec892fe53bc8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d205bf-07", "ovs_interfaceid": "f9d205bf-0705-485d-b89c-f9b9c3cdccdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.283 253542 DEBUG nova.network.os_vif_util [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:08:11,bridge_name='br-int',has_traffic_filtering=True,id=f9d205bf-0705-485d-b89c-f9b9c3cdccdb,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d205bf-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.283 253542 DEBUG os_vif [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:08:11,bridge_name='br-int',has_traffic_filtering=True,id=f9d205bf-0705-485d-b89c-f9b9c3cdccdb,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d205bf-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.285 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf9d205bf-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.287 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.291 253542 INFO nova.scheduler.client.report [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Deleted allocations for instance 420c5373-d9c4-4da0-9658-90eff9a19f8d
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.292 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.294 253542 INFO os_vif [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:08:11,bridge_name='br-int',has_traffic_filtering=True,id=f9d205bf-0705-485d-b89c-f9b9c3cdccdb,network=Network(837c6e7b-bab2-4553-9d96-986f67153365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d205bf-07')
Nov 25 08:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430-userdata-shm.mount: Deactivated successfully.
Nov 25 08:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-598152ed7896db807c1194d497cbf99580dfa554ebe08979ae464ebb4bcff12a-merged.mount: Deactivated successfully.
Nov 25 08:35:04 compute-0 podman[321324]: 2025-11-25 08:35:04.330353189 +0000 UTC m=+0.097406289 container cleanup 3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:35:04 compute-0 systemd[1]: libpod-conmon-3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430.scope: Deactivated successfully.
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.377 253542 DEBUG oslo_concurrency.lockutils [None req-79bcfe17-6c4e-4b92-86db-d7c7e382865b c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "420c5373-d9c4-4da0-9658-90eff9a19f8d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:04 compute-0 ceph-mon[75015]: pgmap v1557: 321 pgs: 321 active+clean; 361 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 4.4 MiB/s wr, 452 op/s
Nov 25 08:35:04 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3635438713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:04 compute-0 podman[321396]: 2025-11-25 08:35:04.403445613 +0000 UTC m=+0.051759322 container remove 3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.408 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received event network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:04 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.409 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:04 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.409 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.409 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.409 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Processing event network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.409 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received event network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.410 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.410 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.410 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.410 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] No waiting events found dispatching network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.410 253542 WARNING nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received unexpected event network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 for instance with vm_state building and task_state spawning.
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.410 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Received event network-vif-deleted-9200cc12-927d-418b-99c1-ca0421535979 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.411 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Received event network-vif-deleted-1682bdaf-1dd6-4036-8d17-a169dbaaca8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.411 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received event network-vif-unplugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.411 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.411 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.411 253542 DEBUG oslo_concurrency.lockutils [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.411 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] No waiting events found dispatching network-vif-unplugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.412 253542 DEBUG nova.compute.manager [req-d303be84-1983-4854-bd01-7718361d1a8c req-c3d5276e-f4c1-4e2a-91f1-9868921db278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received event network-vif-unplugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.411 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ddea79e4-2efb-468f-8085-b2d67e854840]: (4, ('Tue Nov 25 08:35:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365 (3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430)\n3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430\nTue Nov 25 08:35:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365 (3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430)\n3430c8305e50c4278e220e4add931691e3ee2ec9f471304956bdde4a55cfd430\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.412 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.413 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[72fa818a-b90e-4e2e-a6d0-8bcfa96e67c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.414 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap837c6e7b-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.416 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:04 compute-0 kernel: tap837c6e7b-b0: left promiscuous mode
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.419 253542 DEBUG oslo_concurrency.processutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.442 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9df85d97-97e4-46de-9fe4-f3005a4d7f46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.457 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[60d29175-b96d-4311-87f0-7032cbd2929c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.459 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6cef38-2637-41d4-b64e-2d72105fdcf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.460 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.461 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059704.4195974, 2f20fb1c-0a44-4209-aa4a-020331708117 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.462 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] VM Resumed (Lifecycle Event)
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.463 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.467 253542 INFO nova.virt.libvirt.driver [-] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Instance spawned successfully.
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.467 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.476 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1710d80c-a254-4a86-b9de-6a1d6ee41d63]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504671, 'reachable_time': 24555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321412, 'error': None, 'target': 'ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d837c6e7b\x2dbab2\x2d4553\x2d9d96\x2d986f67153365.mount: Deactivated successfully.
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.481 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-837c6e7b-bab2-4553-9d96-986f67153365 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:35:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:04.482 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[0051f546-20e0-41e6-89e2-8a64b0fa4e5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.489 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.495 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.499 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.499 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.500 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.500 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.501 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.501 253542 DEBUG nova.virt.libvirt.driver [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.529 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.572 253542 INFO nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Took 9.60 seconds to spawn the instance on the hypervisor.
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.573 253542 DEBUG nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.647 253542 INFO nova.compute.manager [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Took 11.02 seconds to build instance.
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.662 253542 DEBUG oslo_concurrency.lockutils [None req-488a352f-842c-4afb-9b2e-192f1999f70c a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.729 253542 INFO nova.virt.libvirt.driver [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Deleting instance files /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_del
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.730 253542 INFO nova.virt.libvirt.driver [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Deletion of /var/lib/nova/instances/e07ccbcb-d60d-4c15-95c2-9f5046ab99a3_del complete
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.775 253542 INFO nova.virt.libvirt.driver [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Deleting instance files /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468_del
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.776 253542 INFO nova.virt.libvirt.driver [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Deletion of /var/lib/nova/instances/a4fd9f97-b160-432d-9cb7-0fa3874c6468_del complete
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.785 253542 INFO nova.compute.manager [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Took 0.83 seconds to destroy the instance on the hypervisor.
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.786 253542 DEBUG oslo.service.loopingcall [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.787 253542 DEBUG nova.compute.manager [-] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.787 253542 DEBUG nova.network.neutron [-] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.824 253542 INFO nova.compute.manager [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Took 1.01 seconds to destroy the instance on the hypervisor.
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.825 253542 DEBUG oslo.service.loopingcall [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.825 253542 DEBUG nova.compute.manager [-] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.826 253542 DEBUG nova.network.neutron [-] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:35:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:35:04 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531562368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.907 253542 DEBUG oslo_concurrency.processutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.913 253542 DEBUG nova.compute.provider_tree [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.926 253542 DEBUG nova.scheduler.client.report [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.947 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:04 compute-0 nova_compute[253538]: 2025-11-25 08:35:04.983 253542 INFO nova.scheduler.client.report [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Deleted allocations for instance 5c6656ef-7ad0-4eb4-a597-aa9a8078805b
Nov 25 08:35:05 compute-0 nova_compute[253538]: 2025-11-25 08:35:05.062 253542 DEBUG oslo_concurrency.lockutils [None req-71b5dbd9-53be-49dc-9125-f878a11f92ea 329d8dc9d78743d4a09a38fef3a9143d 7d8307470c794815a028592990efca57 - - default default] Lock "5c6656ef-7ad0-4eb4-a597-aa9a8078805b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:05 compute-0 nova_compute[253538]: 2025-11-25 08:35:05.257 253542 DEBUG nova.objects.instance [None req-f68b9c9e-aef8-46c1-add9-b3b1d2c03b48 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f20fb1c-0a44-4209-aa4a-020331708117 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 228 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 3.6 MiB/s wr, 499 op/s
Nov 25 08:35:05 compute-0 nova_compute[253538]: 2025-11-25 08:35:05.292 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059705.292629, 2f20fb1c-0a44-4209-aa4a-020331708117 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:05 compute-0 nova_compute[253538]: 2025-11-25 08:35:05.293 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] VM Paused (Lifecycle Event)
Nov 25 08:35:05 compute-0 nova_compute[253538]: 2025-11-25 08:35:05.314 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:05 compute-0 nova_compute[253538]: 2025-11-25 08:35:05.319 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:35:05 compute-0 nova_compute[253538]: 2025-11-25 08:35:05.338 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 08:35:05 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1531562368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:05 compute-0 kernel: tap69b7733c-f4 (unregistering): left promiscuous mode
Nov 25 08:35:05 compute-0 NetworkManager[48915]: <info>  [1764059705.4887] device (tap69b7733c-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:35:05 compute-0 ovn_controller[152859]: 2025-11-25T08:35:05Z|00630|binding|INFO|Releasing lport 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 from this chassis (sb_readonly=0)
Nov 25 08:35:05 compute-0 ovn_controller[152859]: 2025-11-25T08:35:05Z|00631|binding|INFO|Setting lport 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 down in Southbound
Nov 25 08:35:05 compute-0 ovn_controller[152859]: 2025-11-25T08:35:05Z|00632|binding|INFO|Removing iface tap69b7733c-f4 ovn-installed in OVS
Nov 25 08:35:05 compute-0 nova_compute[253538]: 2025-11-25 08:35:05.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.555 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:ff:8f 10.100.0.4'], port_security=['fa:16:3e:a2:ff:8f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2f20fb1c-0a44-4209-aa4a-020331708117', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9c243220ecd4ba3af10cdbc0ea76bd6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5deaf81-ec7a-4196-8622-4e499ce185db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864356ca-a329-4a45-a3a1-6cef04812832, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=69b7733c-f471-4b5d-9fe9-b9b25d5836d9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.557 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 69b7733c-f471-4b5d-9fe9-b9b25d5836d9 in datapath a66e51b8-ecb0-4289-a1b5-d5e379727721 unbound from our chassis
Nov 25 08:35:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.560 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a66e51b8-ecb0-4289-a1b5-d5e379727721, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:35:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.561 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[af1dab43-3410-4183-8ca4-86cbf014e174]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.562 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 namespace which is not needed anymore
Nov 25 08:35:05 compute-0 nova_compute[253538]: 2025-11-25 08:35:05.580 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:05 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000042.scope: Deactivated successfully.
Nov 25 08:35:05 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000042.scope: Consumed 1.322s CPU time.
Nov 25 08:35:05 compute-0 systemd-machined[215790]: Machine qemu-76-instance-00000042 terminated.
Nov 25 08:35:05 compute-0 nova_compute[253538]: 2025-11-25 08:35:05.681 253542 DEBUG nova.compute.manager [None req-f68b9c9e-aef8-46c1-add9-b3b1d2c03b48 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:05 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[321237]: [NOTICE]   (321241) : haproxy version is 2.8.14-c23fe91
Nov 25 08:35:05 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[321237]: [NOTICE]   (321241) : path to executable is /usr/sbin/haproxy
Nov 25 08:35:05 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[321237]: [WARNING]  (321241) : Exiting Master process...
Nov 25 08:35:05 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[321237]: [WARNING]  (321241) : Exiting Master process...
Nov 25 08:35:05 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[321237]: [ALERT]    (321241) : Current worker (321243) exited with code 143 (Terminated)
Nov 25 08:35:05 compute-0 neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721[321237]: [WARNING]  (321241) : All workers exited. Exiting... (0)
Nov 25 08:35:05 compute-0 systemd[1]: libpod-ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47.scope: Deactivated successfully.
Nov 25 08:35:05 compute-0 podman[321461]: 2025-11-25 08:35:05.72570264 +0000 UTC m=+0.057098255 container died ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 08:35:05 compute-0 nova_compute[253538]: 2025-11-25 08:35:05.747 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47-userdata-shm.mount: Deactivated successfully.
Nov 25 08:35:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-686e5e97ca0c4d027b1dcfdd9419f6c15acdd73a33a167ddf2036db8efab363c-merged.mount: Deactivated successfully.
Nov 25 08:35:05 compute-0 podman[321461]: 2025-11-25 08:35:05.785199349 +0000 UTC m=+0.116594954 container cleanup ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:35:05 compute-0 systemd[1]: libpod-conmon-ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47.scope: Deactivated successfully.
Nov 25 08:35:05 compute-0 podman[321502]: 2025-11-25 08:35:05.860630596 +0000 UTC m=+0.049157182 container remove ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 08:35:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.866 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[60fc96ca-ffe0-4df2-abca-c4be787fa0e8]: (4, ('Tue Nov 25 08:35:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47)\nac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47\nTue Nov 25 08:35:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 (ac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47)\nac55993ff18ecc64dd91216bc21f1d60c7edd635ec4e1931e146e7a696109f47\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.869 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[083fe962-ed47-45b3-965a-a9d3a10b2dab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.870 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66e51b8-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:05 compute-0 nova_compute[253538]: 2025-11-25 08:35:05.872 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:05 compute-0 kernel: tapa66e51b8-e0: left promiscuous mode
Nov 25 08:35:05 compute-0 nova_compute[253538]: 2025-11-25 08:35:05.891 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.894 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[38606bdc-97a6-433f-9306-0541ad4a67ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.911 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1d8550ca-eed0-43f4-855c-911d43f026d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.912 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c7fc454c-e93f-486d-a79b-cac0253f0ce0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.936 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a1a1100-079c-4de0-b565-3cbd7ef2224f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505631, 'reachable_time': 33367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321520, 'error': None, 'target': 'ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.939 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a66e51b8-ecb0-4289-a1b5-d5e379727721 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:35:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:05.939 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[5524840b-45e1-464f-8b09-f8cb067f81b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:05 compute-0 systemd[1]: run-netns-ovnmeta\x2da66e51b8\x2decb0\x2d4289\x2da1b5\x2dd5e379727721.mount: Deactivated successfully.
Nov 25 08:35:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:35:06 compute-0 ceph-mon[75015]: pgmap v1558: 321 pgs: 321 active+clean; 228 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 3.6 MiB/s wr, 499 op/s
Nov 25 08:35:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 191 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.8 MiB/s wr, 442 op/s
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.509 253542 DEBUG nova.network.neutron [-] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.534 253542 DEBUG nova.network.neutron [-] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.537 253542 INFO nova.compute.manager [-] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Took 2.71 seconds to deallocate network for instance.
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.557 253542 INFO nova.compute.manager [-] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Took 2.77 seconds to deallocate network for instance.
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.595 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.596 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.610 253542 DEBUG nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received event network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.611 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.611 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.612 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.612 253542 DEBUG nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] No waiting events found dispatching network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.612 253542 WARNING nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received unexpected event network-vif-plugged-f9d205bf-0705-485d-b89c-f9b9c3cdccdb for instance with vm_state deleted and task_state None.
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.613 253542 DEBUG nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received event network-vif-unplugged-54f02527-a6c1-4059-aa22-2c19fc6f351d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.613 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.613 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.613 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.614 253542 DEBUG nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] No waiting events found dispatching network-vif-unplugged-54f02527-a6c1-4059-aa22-2c19fc6f351d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.614 253542 DEBUG nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received event network-vif-unplugged-54f02527-a6c1-4059-aa22-2c19fc6f351d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.615 253542 DEBUG nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received event network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.615 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.615 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.616 253542 DEBUG oslo_concurrency.lockutils [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.616 253542 DEBUG nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] No waiting events found dispatching network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.616 253542 WARNING nova.compute.manager [req-ff357fed-772b-497d-a8fc-f77b68476ec2 req-cd4f7fe3-bc0a-41d2-ba41-5a055e074582 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received unexpected event network-vif-plugged-54f02527-a6c1-4059-aa22-2c19fc6f351d for instance with vm_state active and task_state deleting.
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.618 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.692 253542 DEBUG oslo_concurrency.processutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.810 253542 DEBUG oslo_concurrency.lockutils [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.811 253542 DEBUG oslo_concurrency.lockutils [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.811 253542 DEBUG nova.compute.manager [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.816 253542 DEBUG nova.compute.manager [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.817 253542 DEBUG nova.objects.instance [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'flavor' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:07 compute-0 nova_compute[253538]: 2025-11-25 08:35:07.836 253542 DEBUG nova.virt.libvirt.driver [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:35:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:35:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1628093699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:08 compute-0 nova_compute[253538]: 2025-11-25 08:35:08.137 253542 DEBUG oslo_concurrency.processutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:08 compute-0 nova_compute[253538]: 2025-11-25 08:35:08.143 253542 DEBUG nova.compute.provider_tree [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:35:08 compute-0 nova_compute[253538]: 2025-11-25 08:35:08.154 253542 DEBUG nova.scheduler.client.report [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:35:08 compute-0 nova_compute[253538]: 2025-11-25 08:35:08.173 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:08 compute-0 nova_compute[253538]: 2025-11-25 08:35:08.175 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:08 compute-0 nova_compute[253538]: 2025-11-25 08:35:08.198 253542 INFO nova.scheduler.client.report [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Deleted allocations for instance a4fd9f97-b160-432d-9cb7-0fa3874c6468
Nov 25 08:35:08 compute-0 nova_compute[253538]: 2025-11-25 08:35:08.266 253542 DEBUG oslo_concurrency.processutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:08 compute-0 nova_compute[253538]: 2025-11-25 08:35:08.303 253542 DEBUG oslo_concurrency.lockutils [None req-22a055bc-1d78-41e0-94c2-05bf396d3c16 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "a4fd9f97-b160-432d-9cb7-0fa3874c6468" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:08 compute-0 ceph-mon[75015]: pgmap v1559: 321 pgs: 321 active+clean; 191 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.8 MiB/s wr, 442 op/s
Nov 25 08:35:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1628093699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:35:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/764482650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:08 compute-0 nova_compute[253538]: 2025-11-25 08:35:08.691 253542 DEBUG oslo_concurrency.processutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:08 compute-0 nova_compute[253538]: 2025-11-25 08:35:08.697 253542 DEBUG nova.compute.provider_tree [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:35:08 compute-0 nova_compute[253538]: 2025-11-25 08:35:08.709 253542 DEBUG nova.scheduler.client.report [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:35:08 compute-0 nova_compute[253538]: 2025-11-25 08:35:08.727 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:08 compute-0 nova_compute[253538]: 2025-11-25 08:35:08.757 253542 INFO nova.scheduler.client.report [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Deleted allocations for instance e07ccbcb-d60d-4c15-95c2-9f5046ab99a3
Nov 25 08:35:08 compute-0 podman[321565]: 2025-11-25 08:35:08.792107702 +0000 UTC m=+0.048179357 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 08:35:08 compute-0 nova_compute[253538]: 2025-11-25 08:35:08.812 253542 DEBUG oslo_concurrency.lockutils [None req-7d88309f-64d0-46d8-b872-576c223d70b5 dc986518148d44de9f5908ed5be317bd ce0bc5c65f2f47a9a854ec892fe53bc8 - - default default] Lock "e07ccbcb-d60d-4c15-95c2-9f5046ab99a3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.188 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "2f20fb1c-0a44-4209-aa4a-020331708117" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.189 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.189 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.190 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.190 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.191 253542 INFO nova.compute.manager [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Terminating instance
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.192 253542 DEBUG nova.compute.manager [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.198 253542 INFO nova.virt.libvirt.driver [-] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Instance destroyed successfully.
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.198 253542 DEBUG nova.objects.instance [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lazy-loading 'resources' on Instance uuid 2f20fb1c-0a44-4209-aa4a-020331708117 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.209 253542 DEBUG nova.virt.libvirt.vif [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:34:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-756634894',display_name='tempest-DeleteServersTestJSON-server-756634894',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-756634894',id=66,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:35:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a9c243220ecd4ba3af10cdbc0ea76bd6',ramdisk_id='',reservation_id='r-6zflsg5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-2095694504',owner_user_name='tempest-DeleteServersTestJSON-2095694504-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:05Z,user_data=None,user_id='a649c62aaacd4f01a93ea978066f5976',uuid=2f20fb1c-0a44-4209-aa4a-020331708117,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.209 253542 DEBUG nova.network.os_vif_util [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converting VIF {"id": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "address": "fa:16:3e:a2:ff:8f", "network": {"id": "a66e51b8-ecb0-4289-a1b5-d5e379727721", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-903247260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9c243220ecd4ba3af10cdbc0ea76bd6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69b7733c-f4", "ovs_interfaceid": "69b7733c-f471-4b5d-9fe9-b9b25d5836d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.210 253542 DEBUG nova.network.os_vif_util [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ff:8f,bridge_name='br-int',has_traffic_filtering=True,id=69b7733c-f471-4b5d-9fe9-b9b25d5836d9,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69b7733c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.210 253542 DEBUG os_vif [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ff:8f,bridge_name='br-int',has_traffic_filtering=True,id=69b7733c-f471-4b5d-9fe9-b9b25d5836d9,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69b7733c-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.213 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.213 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69b7733c-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.216 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.217 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.219 253542 INFO os_vif [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ff:8f,bridge_name='br-int',has_traffic_filtering=True,id=69b7733c-f471-4b5d-9fe9-b9b25d5836d9,network=Network(a66e51b8-ecb0-4289-a1b5-d5e379727721),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69b7733c-f4')
Nov 25 08:35:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 169 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.5 MiB/s wr, 341 op/s
Nov 25 08:35:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/764482650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.710 253542 INFO nova.virt.libvirt.driver [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Deleting instance files /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117_del
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.711 253542 INFO nova.virt.libvirt.driver [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Deletion of /var/lib/nova/instances/2f20fb1c-0a44-4209-aa4a-020331708117_del complete
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.757 253542 INFO nova.compute.manager [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Took 0.56 seconds to destroy the instance on the hypervisor.
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.758 253542 DEBUG oslo.service.loopingcall [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.759 253542 DEBUG nova.compute.manager [-] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:35:09 compute-0 nova_compute[253538]: 2025-11-25 08:35:09.759 253542 DEBUG nova.network.neutron [-] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:35:10 compute-0 kernel: tap15af3dd8-97 (unregistering): left promiscuous mode
Nov 25 08:35:10 compute-0 NetworkManager[48915]: <info>  [1764059710.1130] device (tap15af3dd8-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.128 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:10 compute-0 ovn_controller[152859]: 2025-11-25T08:35:10Z|00633|binding|INFO|Releasing lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c from this chassis (sb_readonly=0)
Nov 25 08:35:10 compute-0 ovn_controller[152859]: 2025-11-25T08:35:10Z|00634|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c down in Southbound
Nov 25 08:35:10 compute-0 ovn_controller[152859]: 2025-11-25T08:35:10Z|00635|binding|INFO|Removing iface tap15af3dd8-97 ovn-installed in OVS
Nov 25 08:35:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.148 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '12', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.149 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis
Nov 25 08:35:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.151 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 908154e6-322e-4607-bb65-df3f3f8daca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:35:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.152 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[210c3741-b8fb-4ba8-891e-a3b0cd202062]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.153 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace which is not needed anymore
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.159 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:10 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 25 08:35:10 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000032.scope: Consumed 17.728s CPU time.
Nov 25 08:35:10 compute-0 systemd-machined[215790]: Machine qemu-66-instance-00000032 terminated.
Nov 25 08:35:10 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[314669]: [NOTICE]   (314674) : haproxy version is 2.8.14-c23fe91
Nov 25 08:35:10 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[314669]: [NOTICE]   (314674) : path to executable is /usr/sbin/haproxy
Nov 25 08:35:10 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[314669]: [WARNING]  (314674) : Exiting Master process...
Nov 25 08:35:10 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[314669]: [ALERT]    (314674) : Current worker (314676) exited with code 143 (Terminated)
Nov 25 08:35:10 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[314669]: [WARNING]  (314674) : All workers exited. Exiting... (0)
Nov 25 08:35:10 compute-0 systemd[1]: libpod-66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803.scope: Deactivated successfully.
Nov 25 08:35:10 compute-0 podman[321627]: 2025-11-25 08:35:10.290003549 +0000 UTC m=+0.044399294 container died 66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:35:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b287a9705456981153ae9eff66d536d8de2c4f253b08bf0d27b3e6ca96100e2d-merged.mount: Deactivated successfully.
Nov 25 08:35:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803-userdata-shm.mount: Deactivated successfully.
Nov 25 08:35:10 compute-0 podman[321627]: 2025-11-25 08:35:10.329083359 +0000 UTC m=+0.083479084 container cleanup 66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:35:10 compute-0 systemd[1]: libpod-conmon-66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803.scope: Deactivated successfully.
Nov 25 08:35:10 compute-0 podman[321657]: 2025-11-25 08:35:10.392414722 +0000 UTC m=+0.042853742 container remove 66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:35:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.397 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9940e066-a224-4d00-ae62-094c71da7b00]: (4, ('Tue Nov 25 08:35:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803)\n66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803\nTue Nov 25 08:35:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803)\n66d7c79572db99ff1b5636cd54dec519fd2c6d6de66365ff85c02c686dacf803\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.399 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d87984ec-2c12-4fc8-bc63-f052f629f042]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.400 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.444 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:10 compute-0 kernel: tap908154e6-30: left promiscuous mode
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.464 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.467 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a9c84a-a990-4db2-b5fe-911fb70dbade]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:10 compute-0 ceph-mon[75015]: pgmap v1560: 321 pgs: 321 active+clean; 169 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.5 MiB/s wr, 341 op/s
Nov 25 08:35:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.480 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d01d519a-ad76-4d0a-b83a-c1dc25c30708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.481 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3622cf3a-a253-4187-9e71-731c9ec30433]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.493 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7797dc00-1e9a-4e43-9133-c4ed0b5d8e3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496826, 'reachable_time': 34901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321686, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.495 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:35:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:10.495 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a06ceb-aed9-43ec-9fd7-1a0485fb5ffc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d908154e6\x2d322e\x2d4607\x2dbb65\x2ddf3f3f8daca6.mount: Deactivated successfully.
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.497 253542 DEBUG nova.compute.manager [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received event network-vif-unplugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.498 253542 DEBUG oslo_concurrency.lockutils [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.498 253542 DEBUG oslo_concurrency.lockutils [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.498 253542 DEBUG oslo_concurrency.lockutils [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.498 253542 DEBUG nova.compute.manager [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] No waiting events found dispatching network-vif-unplugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.498 253542 DEBUG nova.compute.manager [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received event network-vif-unplugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.499 253542 DEBUG nova.compute.manager [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Received event network-vif-deleted-f9d205bf-0705-485d-b89c-f9b9c3cdccdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.499 253542 DEBUG nova.compute.manager [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Received event network-vif-deleted-54f02527-a6c1-4059-aa22-2c19fc6f351d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.499 253542 DEBUG nova.compute.manager [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received event network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.499 253542 DEBUG oslo_concurrency.lockutils [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.500 253542 DEBUG oslo_concurrency.lockutils [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.500 253542 DEBUG oslo_concurrency.lockutils [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.500 253542 DEBUG nova.compute.manager [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] No waiting events found dispatching network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.500 253542 WARNING nova.compute.manager [req-a85fc9a1-78c4-4091-a5df-6e9b62728afd req-b7b78e0b-d147-458e-b043-fa6eb26199fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received unexpected event network-vif-plugged-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 for instance with vm_state suspended and task_state deleting.
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.544 253542 DEBUG nova.network.neutron [-] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.573 253542 INFO nova.compute.manager [-] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Took 0.81 seconds to deallocate network for instance.
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.627 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059695.61694, 52d39d67-b456-44e4-8804-2de0c941edae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.628 253542 INFO nova.compute.manager [-] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] VM Stopped (Lifecycle Event)
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.643 253542 DEBUG nova.compute.manager [req-3e56227c-ed54-4558-8f6e-66b7b93e5dfa req-c245171d-ffd8-4110-8edd-d98acb30b661 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.643 253542 DEBUG oslo_concurrency.lockutils [req-3e56227c-ed54-4558-8f6e-66b7b93e5dfa req-c245171d-ffd8-4110-8edd-d98acb30b661 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.644 253542 DEBUG oslo_concurrency.lockutils [req-3e56227c-ed54-4558-8f6e-66b7b93e5dfa req-c245171d-ffd8-4110-8edd-d98acb30b661 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.644 253542 DEBUG oslo_concurrency.lockutils [req-3e56227c-ed54-4558-8f6e-66b7b93e5dfa req-c245171d-ffd8-4110-8edd-d98acb30b661 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.644 253542 DEBUG nova.compute.manager [req-3e56227c-ed54-4558-8f6e-66b7b93e5dfa req-c245171d-ffd8-4110-8edd-d98acb30b661 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.644 253542 WARNING nova.compute.manager [req-3e56227c-ed54-4558-8f6e-66b7b93e5dfa req-c245171d-ffd8-4110-8edd-d98acb30b661 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state powering-off.
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.647 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.647 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.648 253542 DEBUG nova.compute.manager [None req-a9c3559c-4910-451c-bee8-1335b1bf61b4 - - - - - -] [instance: 52d39d67-b456-44e4-8804-2de0c941edae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.709 253542 DEBUG oslo_concurrency.processutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.747 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.855 253542 INFO nova.virt.libvirt.driver [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance shutdown successfully after 3 seconds.
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.860 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance destroyed successfully.
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.860 253542 DEBUG nova.objects.instance [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'numa_topology' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.877 253542 DEBUG nova.compute.manager [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:10 compute-0 nova_compute[253538]: 2025-11-25 08:35:10.931 253542 DEBUG oslo_concurrency.lockutils [None req-4b6f99df-06c9-49d2-8253-b9379e379d90 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:35:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:35:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/427142117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:11 compute-0 nova_compute[253538]: 2025-11-25 08:35:11.180 253542 DEBUG oslo_concurrency.processutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:11 compute-0 nova_compute[253538]: 2025-11-25 08:35:11.186 253542 DEBUG nova.compute.provider_tree [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:35:11 compute-0 nova_compute[253538]: 2025-11-25 08:35:11.200 253542 DEBUG nova.scheduler.client.report [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:35:11 compute-0 nova_compute[253538]: 2025-11-25 08:35:11.221 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 147 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 615 KiB/s wr, 282 op/s
Nov 25 08:35:11 compute-0 nova_compute[253538]: 2025-11-25 08:35:11.276 253542 INFO nova.scheduler.client.report [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Deleted allocations for instance 2f20fb1c-0a44-4209-aa4a-020331708117
Nov 25 08:35:11 compute-0 nova_compute[253538]: 2025-11-25 08:35:11.359 253542 DEBUG oslo_concurrency.lockutils [None req-312f15ca-4eb9-4d06-8c07-859f37fb3975 a649c62aaacd4f01a93ea978066f5976 a9c243220ecd4ba3af10cdbc0ea76bd6 - - default default] Lock "2f20fb1c-0a44-4209-aa4a-020331708117" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:11 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/427142117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:12 compute-0 nova_compute[253538]: 2025-11-25 08:35:12.176 253542 DEBUG nova.objects.instance [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'flavor' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:12 compute-0 nova_compute[253538]: 2025-11-25 08:35:12.194 253542 DEBUG oslo_concurrency.lockutils [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:35:12 compute-0 nova_compute[253538]: 2025-11-25 08:35:12.194 253542 DEBUG oslo_concurrency.lockutils [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:35:12 compute-0 nova_compute[253538]: 2025-11-25 08:35:12.194 253542 DEBUG nova.network.neutron [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:35:12 compute-0 nova_compute[253538]: 2025-11-25 08:35:12.194 253542 DEBUG nova.objects.instance [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'info_cache' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:12 compute-0 ceph-mon[75015]: pgmap v1561: 321 pgs: 321 active+clean; 147 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 615 KiB/s wr, 282 op/s
Nov 25 08:35:12 compute-0 nova_compute[253538]: 2025-11-25 08:35:12.560 253542 DEBUG nova.compute.manager [req-697b3ecf-193e-4f25-bf0e-6e131f90124a req-f29b0fb8-ac0e-4fb4-90eb-23ab29ac105e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Received event network-vif-deleted-69b7733c-f471-4b5d-9fe9-b9b25d5836d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:12 compute-0 nova_compute[253538]: 2025-11-25 08:35:12.725 253542 DEBUG nova.compute.manager [req-e38a8e7d-7924-41a5-b9d5-eb72f28d6cba req-ab6a5fdb-5fe7-4aed-8941-cd22fb2c0cfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:12 compute-0 nova_compute[253538]: 2025-11-25 08:35:12.726 253542 DEBUG oslo_concurrency.lockutils [req-e38a8e7d-7924-41a5-b9d5-eb72f28d6cba req-ab6a5fdb-5fe7-4aed-8941-cd22fb2c0cfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:12 compute-0 nova_compute[253538]: 2025-11-25 08:35:12.726 253542 DEBUG oslo_concurrency.lockutils [req-e38a8e7d-7924-41a5-b9d5-eb72f28d6cba req-ab6a5fdb-5fe7-4aed-8941-cd22fb2c0cfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:12 compute-0 nova_compute[253538]: 2025-11-25 08:35:12.726 253542 DEBUG oslo_concurrency.lockutils [req-e38a8e7d-7924-41a5-b9d5-eb72f28d6cba req-ab6a5fdb-5fe7-4aed-8941-cd22fb2c0cfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:12 compute-0 nova_compute[253538]: 2025-11-25 08:35:12.727 253542 DEBUG nova.compute.manager [req-e38a8e7d-7924-41a5-b9d5-eb72f28d6cba req-ab6a5fdb-5fe7-4aed-8941-cd22fb2c0cfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:12 compute-0 nova_compute[253538]: 2025-11-25 08:35:12.727 253542 WARNING nova.compute.manager [req-e38a8e7d-7924-41a5-b9d5-eb72f28d6cba req-ab6a5fdb-5fe7-4aed-8941-cd22fb2c0cfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state stopped and task_state powering-on.
Nov 25 08:35:13 compute-0 nova_compute[253538]: 2025-11-25 08:35:13.185 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059698.18283, d99c7a05-3cc3-4a8b-bce4-1185023a269f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:13 compute-0 nova_compute[253538]: 2025-11-25 08:35:13.186 253542 INFO nova.compute.manager [-] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] VM Stopped (Lifecycle Event)
Nov 25 08:35:13 compute-0 nova_compute[253538]: 2025-11-25 08:35:13.206 253542 DEBUG nova.compute.manager [None req-81db9d12-62c4-4de8-9adf-6a8bf78990ab - - - - - -] [instance: d99c7a05-3cc3-4a8b-bce4-1185023a269f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 140 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 24 KiB/s wr, 189 op/s
Nov 25 08:35:13 compute-0 nova_compute[253538]: 2025-11-25 08:35:13.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:35:13 compute-0 nova_compute[253538]: 2025-11-25 08:35:13.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:35:13 compute-0 nova_compute[253538]: 2025-11-25 08:35:13.589 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 08:35:13 compute-0 nova_compute[253538]: 2025-11-25 08:35:13.942 253542 DEBUG nova.network.neutron [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:13 compute-0 nova_compute[253538]: 2025-11-25 08:35:13.970 253542 DEBUG oslo_concurrency.lockutils [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:35:13 compute-0 nova_compute[253538]: 2025-11-25 08:35:13.996 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance destroyed successfully.
Nov 25 08:35:13 compute-0 nova_compute[253538]: 2025-11-25 08:35:13.996 253542 DEBUG nova.objects.instance [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'numa_topology' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.010 253542 DEBUG nova.objects.instance [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'resources' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.020 253542 DEBUG nova.virt.libvirt.vif [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.021 253542 DEBUG nova.network.os_vif_util [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.022 253542 DEBUG nova.network.os_vif_util [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.022 253542 DEBUG os_vif [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.024 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.025 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15af3dd8-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.027 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.028 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.030 253542 INFO os_vif [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.037 253542 DEBUG nova.virt.libvirt.driver [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Start _get_guest_xml network_info=[{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.040 253542 WARNING nova.virt.libvirt.driver [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.046 253542 DEBUG nova.virt.libvirt.host [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.046 253542 DEBUG nova.virt.libvirt.host [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.049 253542 DEBUG nova.virt.libvirt.host [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.049 253542 DEBUG nova.virt.libvirt.host [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.050 253542 DEBUG nova.virt.libvirt.driver [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.050 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.051 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.051 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.051 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.051 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.052 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.052 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.052 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.053 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.053 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.053 253542 DEBUG nova.virt.hardware [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.053 253542 DEBUG nova.objects.instance [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.066 253542 DEBUG oslo_concurrency.processutils [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:35:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/603385909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:14 compute-0 ceph-mon[75015]: pgmap v1562: 321 pgs: 321 active+clean; 140 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 24 KiB/s wr, 189 op/s
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.517 253542 DEBUG oslo_concurrency.processutils [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.549 253542 DEBUG oslo_concurrency.processutils [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.583 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.584 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.882 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059699.8811498, 420c5373-d9c4-4da0-9658-90eff9a19f8d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.883 253542 INFO nova.compute.manager [-] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] VM Stopped (Lifecycle Event)
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.898 253542 DEBUG nova.compute.manager [None req-86ae75ba-a465-4286-a688-0fffc66ecbdb - - - - - -] [instance: 420c5373-d9c4-4da0-9658-90eff9a19f8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:35:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/904315795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.986 253542 DEBUG oslo_concurrency.processutils [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.989 253542 DEBUG nova.virt.libvirt.vif [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.990 253542 DEBUG nova.network.os_vif_util [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.991 253542 DEBUG nova.network.os_vif_util [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:14 compute-0 nova_compute[253538]: 2025-11-25 08:35:14.993 253542 DEBUG nova.objects.instance [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.009 253542 DEBUG nova.virt.libvirt.driver [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:35:15 compute-0 nova_compute[253538]:   <uuid>0feca801-4630-4450-b915-616d8496ab51</uuid>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   <name>instance-00000032</name>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerActionsTestJSON-server-1351113969</nova:name>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:35:14</nova:creationTime>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:35:15 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:35:15 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:35:15 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:35:15 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:35:15 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:35:15 compute-0 nova_compute[253538]:         <nova:user uuid="c199ca353ed54a53ab7fe37d3089c82a">tempest-ServerActionsTestJSON-1880843108-project-member</nova:user>
Nov 25 08:35:15 compute-0 nova_compute[253538]:         <nova:project uuid="23237e7592b247838e62457157e64e9e">tempest-ServerActionsTestJSON-1880843108</nova:project>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:35:15 compute-0 nova_compute[253538]:         <nova:port uuid="15af3dd8-9788-4a34-b4b2-d3b24300cd4c">
Nov 25 08:35:15 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <system>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <entry name="serial">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <entry name="uuid">0feca801-4630-4450-b915-616d8496ab51</entry>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     </system>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   <os>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   </os>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   <features>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   </features>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk">
Nov 25 08:35:15 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       </source>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:35:15 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0feca801-4630-4450-b915-616d8496ab51_disk.config">
Nov 25 08:35:15 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       </source>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:35:15 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:07:cd:40"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <target dev="tap15af3dd8-97"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51/console.log" append="off"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <video>
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     </video>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <input type="keyboard" bus="usb"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:35:15 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:35:15 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:35:15 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:35:15 compute-0 nova_compute[253538]: </domain>
Nov 25 08:35:15 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.010 253542 DEBUG nova.virt.libvirt.driver [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.011 253542 DEBUG nova.virt.libvirt.driver [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.012 253542 DEBUG nova.virt.libvirt.vif [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.012 253542 DEBUG nova.network.os_vif_util [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.013 253542 DEBUG nova.network.os_vif_util [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.013 253542 DEBUG os_vif [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.014 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.015 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.015 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.018 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.019 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15af3dd8-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.019 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15af3dd8-97, col_values=(('external_ids', {'iface-id': '15af3dd8-9788-4a34-b4b2-d3b24300cd4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:cd:40', 'vm-uuid': '0feca801-4630-4450-b915-616d8496ab51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.075 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:15 compute-0 NetworkManager[48915]: <info>  [1764059715.0765] manager: (tap15af3dd8-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/280)
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.078 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.080 253542 INFO os_vif [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')
Nov 25 08:35:15 compute-0 kernel: tap15af3dd8-97: entered promiscuous mode
Nov 25 08:35:15 compute-0 NetworkManager[48915]: <info>  [1764059715.1734] manager: (tap15af3dd8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/281)
Nov 25 08:35:15 compute-0 ovn_controller[152859]: 2025-11-25T08:35:15Z|00636|binding|INFO|Claiming lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c for this chassis.
Nov 25 08:35:15 compute-0 ovn_controller[152859]: 2025-11-25T08:35:15Z|00637|binding|INFO|15af3dd8-9788-4a34-b4b2-d3b24300cd4c: Claiming fa:16:3e:07:cd:40 10.100.0.13
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.173 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:15 compute-0 podman[321773]: 2025-11-25 08:35:15.182883991 +0000 UTC m=+0.073614859 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.187 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '13', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.190 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 bound to our chassis
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.192 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:35:15 compute-0 ovn_controller[152859]: 2025-11-25T08:35:15Z|00638|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c ovn-installed in OVS
Nov 25 08:35:15 compute-0 ovn_controller[152859]: 2025-11-25T08:35:15Z|00639|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c up in Southbound
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.200 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.206 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[88641a7b-baa0-4f24-9602-4bd50cccbec3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.207 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap908154e6-31 in ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.209 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap908154e6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.209 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb97d23-b2af-436c-9c81-9213cac752e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.210 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[345b537f-a23a-457c-b0e7-84f39ade06b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:15 compute-0 systemd-udevd[321809]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:35:15 compute-0 systemd-machined[215790]: New machine qemu-77-instance-00000032.
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.223 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e7651176-0b0a-47ae-a4d7-19e9c711020b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:15 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-00000032.
Nov 25 08:35:15 compute-0 NetworkManager[48915]: <info>  [1764059715.2418] device (tap15af3dd8-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:35:15 compute-0 NetworkManager[48915]: <info>  [1764059715.2427] device (tap15af3dd8-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.252 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5282d0-d03a-4de8-a5fb-c77a5f466c37]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 721 KiB/s rd, 11 KiB/s wr, 156 op/s
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.277 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[68d1d40b-0dda-4465-ac74-11eb5ffe6ee5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:15 compute-0 systemd-udevd[321813]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.283 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0907a705-6274-48a4-ab52-67ac02f7b50c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:15 compute-0 NetworkManager[48915]: <info>  [1764059715.2847] manager: (tap908154e6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/282)
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.313 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc7d450-00f0-454f-89f5-cbd89b01733e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.316 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fc3c17bf-7427-4d9b-b864-84e455f3bdd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:15 compute-0 NetworkManager[48915]: <info>  [1764059715.3388] device (tap908154e6-30): carrier: link connected
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.346 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a916c606-dcbe-48b3-8e5a-e48678425d6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.364 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b0ed39dd-2634-421c-918b-0df904c99d74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506993, 'reachable_time': 30597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321841, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.380 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe7e294-53f6-4fb3-a69f-7fc9d4a0580f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:5909'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506993, 'tstamp': 506993}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321842, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.399 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d26e34e0-3ba4-4809-b1ff-9ab6a411dc2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506993, 'reachable_time': 30597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321843, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.431 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3971ce30-5258-43eb-ba33-79be3cbfe0d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.494 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[13537e89-8144-486b-8e21-50836a30454e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.495 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.495 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.495 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.497 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:15 compute-0 NetworkManager[48915]: <info>  [1764059715.4980] manager: (tap908154e6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Nov 25 08:35:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/603385909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/904315795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:35:15 compute-0 kernel: tap908154e6-30: entered promiscuous mode
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.574 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.575 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:15 compute-0 ovn_controller[152859]: 2025-11-25T08:35:15Z|00640|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.580 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.581 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cac754c4-7abf-4cd4-a9b8-512fdc84aeba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.581 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:35:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:15.583 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'env', 'PROCESS_TAG=haproxy-908154e6-322e-4607-bb65-df3f3f8daca6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/908154e6-322e-4607-bb65-df3f3f8daca6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.596 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.632 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 0feca801-4630-4450-b915-616d8496ab51 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.632 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059715.6318457, 0feca801-4630-4450-b915-616d8496ab51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.633 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Resumed (Lifecycle Event)
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.635 253542 DEBUG nova.compute.manager [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.639 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance rebooted successfully.
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.639 253542 DEBUG nova.compute.manager [None req-01405e48-7ab3-417a-ae6c-48ef451d06ba c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.660 253542 DEBUG nova.compute.manager [req-b7c527b9-5ffd-4a68-9642-85e3e454dd2e req-18f01761-3498-45c6-acfb-b8f09c461397 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.660 253542 DEBUG oslo_concurrency.lockutils [req-b7c527b9-5ffd-4a68-9642-85e3e454dd2e req-18f01761-3498-45c6-acfb-b8f09c461397 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.661 253542 DEBUG oslo_concurrency.lockutils [req-b7c527b9-5ffd-4a68-9642-85e3e454dd2e req-18f01761-3498-45c6-acfb-b8f09c461397 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.661 253542 DEBUG oslo_concurrency.lockutils [req-b7c527b9-5ffd-4a68-9642-85e3e454dd2e req-18f01761-3498-45c6-acfb-b8f09c461397 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.661 253542 DEBUG nova.compute.manager [req-b7c527b9-5ffd-4a68-9642-85e3e454dd2e req-18f01761-3498-45c6-acfb-b8f09c461397 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.661 253542 WARNING nova.compute.manager [req-b7c527b9-5ffd-4a68-9642-85e3e454dd2e req-18f01761-3498-45c6-acfb-b8f09c461397 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state stopped and task_state powering-on.
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.663 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.666 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.688 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.689 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059715.6325064, 0feca801-4630-4450-b915-616d8496ab51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.689 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Started (Lifecycle Event)
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.712 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.715 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.750 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.906 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059700.9039412, 5c6656ef-7ad0-4eb4-a597-aa9a8078805b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.906 253542 INFO nova.compute.manager [-] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] VM Stopped (Lifecycle Event)
Nov 25 08:35:15 compute-0 nova_compute[253538]: 2025-11-25 08:35:15.933 253542 DEBUG nova.compute.manager [None req-34ad2fe7-b995-48da-a2c5-cb2b21db8c38 - - - - - -] [instance: 5c6656ef-7ad0-4eb4-a597-aa9a8078805b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:35:15 compute-0 podman[321917]: 2025-11-25 08:35:15.992558142 +0000 UTC m=+0.051164686 container create cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 08:35:16 compute-0 systemd[1]: Started libpod-conmon-cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b.scope.
Nov 25 08:35:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a31a78ffe15d62f0c5013ee06ef65d7b8dd91142ec16803a1a277f69dadb23b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:16 compute-0 podman[321917]: 2025-11-25 08:35:15.967823067 +0000 UTC m=+0.026429631 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:35:16 compute-0 podman[321917]: 2025-11-25 08:35:16.076487438 +0000 UTC m=+0.135094032 container init cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:35:16 compute-0 podman[321917]: 2025-11-25 08:35:16.087022671 +0000 UTC m=+0.145629225 container start cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:35:16 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[321932]: [NOTICE]   (321936) : New worker (321938) forked
Nov 25 08:35:16 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[321932]: [NOTICE]   (321936) : Loading success.
Nov 25 08:35:16 compute-0 ovn_controller[152859]: 2025-11-25T08:35:16Z|00641|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 08:35:16 compute-0 nova_compute[253538]: 2025-11-25 08:35:16.217 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:16 compute-0 ceph-mon[75015]: pgmap v1563: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 721 KiB/s rd, 11 KiB/s wr, 156 op/s
Nov 25 08:35:16 compute-0 ovn_controller[152859]: 2025-11-25T08:35:16Z|00642|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 08:35:16 compute-0 nova_compute[253538]: 2025-11-25 08:35:16.766 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 9.0 KiB/s wr, 136 op/s
Nov 25 08:35:17 compute-0 nova_compute[253538]: 2025-11-25 08:35:17.748 253542 DEBUG nova.compute.manager [req-367d3f26-4fb7-4170-959c-2dd2fa8d6311 req-211d545d-65a3-458d-b115-4cef40fb8d40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:17 compute-0 nova_compute[253538]: 2025-11-25 08:35:17.749 253542 DEBUG oslo_concurrency.lockutils [req-367d3f26-4fb7-4170-959c-2dd2fa8d6311 req-211d545d-65a3-458d-b115-4cef40fb8d40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:17 compute-0 nova_compute[253538]: 2025-11-25 08:35:17.749 253542 DEBUG oslo_concurrency.lockutils [req-367d3f26-4fb7-4170-959c-2dd2fa8d6311 req-211d545d-65a3-458d-b115-4cef40fb8d40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:17 compute-0 nova_compute[253538]: 2025-11-25 08:35:17.749 253542 DEBUG oslo_concurrency.lockutils [req-367d3f26-4fb7-4170-959c-2dd2fa8d6311 req-211d545d-65a3-458d-b115-4cef40fb8d40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:17 compute-0 nova_compute[253538]: 2025-11-25 08:35:17.750 253542 DEBUG nova.compute.manager [req-367d3f26-4fb7-4170-959c-2dd2fa8d6311 req-211d545d-65a3-458d-b115-4cef40fb8d40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:17 compute-0 nova_compute[253538]: 2025-11-25 08:35:17.750 253542 WARNING nova.compute.manager [req-367d3f26-4fb7-4170-959c-2dd2fa8d6311 req-211d545d-65a3-458d-b115-4cef40fb8d40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.
Nov 25 08:35:18 compute-0 nova_compute[253538]: 2025-11-25 08:35:18.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:35:18 compute-0 nova_compute[253538]: 2025-11-25 08:35:18.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:35:18 compute-0 nova_compute[253538]: 2025-11-25 08:35:18.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 08:35:18 compute-0 nova_compute[253538]: 2025-11-25 08:35:18.580 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 08:35:18 compute-0 ceph-mon[75015]: pgmap v1564: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 9.0 KiB/s wr, 136 op/s
Nov 25 08:35:18 compute-0 podman[321947]: 2025-11-25 08:35:18.842468016 +0000 UTC m=+0.093032652 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:35:19 compute-0 nova_compute[253538]: 2025-11-25 08:35:19.194 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059704.191218, e07ccbcb-d60d-4c15-95c2-9f5046ab99a3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:19 compute-0 nova_compute[253538]: 2025-11-25 08:35:19.195 253542 INFO nova.compute.manager [-] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] VM Stopped (Lifecycle Event)
Nov 25 08:35:19 compute-0 nova_compute[253538]: 2025-11-25 08:35:19.212 253542 DEBUG nova.compute.manager [None req-eb9e7493-4570-4b79-bea8-8b556d29863f - - - - - -] [instance: e07ccbcb-d60d-4c15-95c2-9f5046ab99a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:19 compute-0 nova_compute[253538]: 2025-11-25 08:35:19.261 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059704.2376945, a4fd9f97-b160-432d-9cb7-0fa3874c6468 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:19 compute-0 nova_compute[253538]: 2025-11-25 08:35:19.261 253542 INFO nova.compute.manager [-] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] VM Stopped (Lifecycle Event)
Nov 25 08:35:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1565: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.0 KiB/s wr, 124 op/s
Nov 25 08:35:19 compute-0 nova_compute[253538]: 2025-11-25 08:35:19.286 253542 DEBUG nova.compute.manager [None req-5c63b1a7-2038-4e84-af7a-520278b886c5 - - - - - -] [instance: a4fd9f97-b160-432d-9cb7-0fa3874c6468] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:19 compute-0 ceph-mon[75015]: pgmap v1565: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.0 KiB/s wr, 124 op/s
Nov 25 08:35:20 compute-0 nova_compute[253538]: 2025-11-25 08:35:20.075 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:20 compute-0 nova_compute[253538]: 2025-11-25 08:35:20.681 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059705.6809561, 2f20fb1c-0a44-4209-aa4a-020331708117 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:20 compute-0 nova_compute[253538]: 2025-11-25 08:35:20.682 253542 INFO nova.compute.manager [-] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] VM Stopped (Lifecycle Event)
Nov 25 08:35:20 compute-0 nova_compute[253538]: 2025-11-25 08:35:20.700 253542 DEBUG nova.compute.manager [None req-c157c6cf-732a-4df1-b659-095f9f17cdb2 - - - - - -] [instance: 2f20fb1c-0a44-4209-aa4a-020331708117] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:20 compute-0 nova_compute[253538]: 2025-11-25 08:35:20.752 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:35:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.1 KiB/s wr, 115 op/s
Nov 25 08:35:21 compute-0 nova_compute[253538]: 2025-11-25 08:35:21.575 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:35:21 compute-0 nova_compute[253538]: 2025-11-25 08:35:21.576 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:35:21 compute-0 nova_compute[253538]: 2025-11-25 08:35:21.594 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:21 compute-0 nova_compute[253538]: 2025-11-25 08:35:21.595 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:21 compute-0 nova_compute[253538]: 2025-11-25 08:35:21.595 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:21 compute-0 nova_compute[253538]: 2025-11-25 08:35:21.595 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:35:21 compute-0 nova_compute[253538]: 2025-11-25 08:35:21.596 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:21 compute-0 nova_compute[253538]: 2025-11-25 08:35:21.678 253542 DEBUG nova.objects.instance [None req-9f740164-f431-46f6-a0c8-8a900b27e40e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.196 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059722.1954565, 0feca801-4630-4450-b915-616d8496ab51 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.196 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Paused (Lifecycle Event)
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.213 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.218 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.232 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 08:35:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:35:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890726360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.519 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.923s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.590 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.592 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:35:22 compute-0 ceph-mon[75015]: pgmap v1566: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.1 KiB/s wr, 115 op/s
Nov 25 08:35:22 compute-0 kernel: tap15af3dd8-97 (unregistering): left promiscuous mode
Nov 25 08:35:22 compute-0 NetworkManager[48915]: <info>  [1764059722.7004] device (tap15af3dd8-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.711 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:22 compute-0 ovn_controller[152859]: 2025-11-25T08:35:22Z|00643|binding|INFO|Releasing lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c from this chassis (sb_readonly=0)
Nov 25 08:35:22 compute-0 ovn_controller[152859]: 2025-11-25T08:35:22Z|00644|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c down in Southbound
Nov 25 08:35:22 compute-0 ovn_controller[152859]: 2025-11-25T08:35:22Z|00645|binding|INFO|Removing iface tap15af3dd8-97 ovn-installed in OVS
Nov 25 08:35:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:22.719 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '14', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:22.722 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis
Nov 25 08:35:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:22.723 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 908154e6-322e-4607-bb65-df3f3f8daca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:35:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:22.724 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f54a0704-8a8c-4dba-a67d-e4ea579dc412]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:22.725 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace which is not needed anymore
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.734 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:22 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 25 08:35:22 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000032.scope: Consumed 7.326s CPU time.
Nov 25 08:35:22 compute-0 systemd-machined[215790]: Machine qemu-77-instance-00000032 terminated.
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.816 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.817 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3853MB free_disk=59.942623138427734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.818 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.818 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.843 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.846 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:22 compute-0 nova_compute[253538]: 2025-11-25 08:35:22.861 253542 DEBUG nova.compute.manager [None req-9f740164-f431-46f6-a0c8-8a900b27e40e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:22 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[321932]: [NOTICE]   (321936) : haproxy version is 2.8.14-c23fe91
Nov 25 08:35:22 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[321932]: [NOTICE]   (321936) : path to executable is /usr/sbin/haproxy
Nov 25 08:35:22 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[321932]: [WARNING]  (321936) : Exiting Master process...
Nov 25 08:35:22 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[321932]: [WARNING]  (321936) : Exiting Master process...
Nov 25 08:35:22 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[321932]: [ALERT]    (321936) : Current worker (321938) exited with code 143 (Terminated)
Nov 25 08:35:22 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[321932]: [WARNING]  (321936) : All workers exited. Exiting... (0)
Nov 25 08:35:22 compute-0 systemd[1]: libpod-cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b.scope: Deactivated successfully.
Nov 25 08:35:22 compute-0 podman[322021]: 2025-11-25 08:35:22.876967406 +0000 UTC m=+0.060632229 container died cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 08:35:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b-userdata-shm.mount: Deactivated successfully.
Nov 25 08:35:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a31a78ffe15d62f0c5013ee06ef65d7b8dd91142ec16803a1a277f69dadb23b-merged.mount: Deactivated successfully.
Nov 25 08:35:22 compute-0 podman[322021]: 2025-11-25 08:35:22.92137505 +0000 UTC m=+0.105039863 container cleanup cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:35:22 compute-0 systemd[1]: libpod-conmon-cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b.scope: Deactivated successfully.
Nov 25 08:35:23 compute-0 podman[322058]: 2025-11-25 08:35:23.002433589 +0000 UTC m=+0.058832412 container remove cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 08:35:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.007 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[687f659e-1608-455c-b7d2-9dd1a6605e84]: (4, ('Tue Nov 25 08:35:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b)\ncfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b\nTue Nov 25 08:35:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (cfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b)\ncfd868e91fe20210b80519512156df31c00ac2f978f08bf54e2e99bdc458958b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.009 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a37d22fc-2ed2-4569-bc08-968da5fa871f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.010 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:23 compute-0 kernel: tap908154e6-30: left promiscuous mode
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.013 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0feca801-4630-4450-b915-616d8496ab51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.013 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.014 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.017 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.030 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.033 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[24943546-06b1-4297-a32b-ad6d509746f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.046 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f5b91ae9-bab6-4ee9-afc6-d47cf2a2ee8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.048 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[25be9ef3-4ed6-4ddb-acde-bb4a2b88bc94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.061 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[944b09f9-90f5-4405-aad0-4c26cba89f56]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506987, 'reachable_time': 35124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322077, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.064 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:35:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:23.064 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a8fdd051-9ccc-4e26-9cab-ffa6891149eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d908154e6\x2d322e\x2d4607\x2dbb65\x2ddf3f3f8daca6.mount: Deactivated successfully.
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.103 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1567: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.2 KiB/s wr, 93 op/s
Nov 25 08:35:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:35:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:35:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:35:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:35:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:35:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.456 253542 DEBUG nova.compute.manager [req-dfe08cc9-e6ae-43b9-baa8-761aef1f392a req-46df6485-92de-4611-9574-b6a1a78dba7d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.456 253542 DEBUG oslo_concurrency.lockutils [req-dfe08cc9-e6ae-43b9-baa8-761aef1f392a req-46df6485-92de-4611-9574-b6a1a78dba7d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.457 253542 DEBUG oslo_concurrency.lockutils [req-dfe08cc9-e6ae-43b9-baa8-761aef1f392a req-46df6485-92de-4611-9574-b6a1a78dba7d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.457 253542 DEBUG oslo_concurrency.lockutils [req-dfe08cc9-e6ae-43b9-baa8-761aef1f392a req-46df6485-92de-4611-9574-b6a1a78dba7d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.457 253542 DEBUG nova.compute.manager [req-dfe08cc9-e6ae-43b9-baa8-761aef1f392a req-46df6485-92de-4611-9574-b6a1a78dba7d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.457 253542 WARNING nova.compute.manager [req-dfe08cc9-e6ae-43b9-baa8-761aef1f392a req-46df6485-92de-4611-9574-b6a1a78dba7d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-unplugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state suspended and task_state None.
Nov 25 08:35:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:35:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1106798545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.563 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.568 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.586 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.615 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:35:23 compute-0 nova_compute[253538]: 2025-11-25 08:35:23.615 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3890726360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1106798545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:24 compute-0 nova_compute[253538]: 2025-11-25 08:35:24.594 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:35:24 compute-0 ceph-mon[75015]: pgmap v1567: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.2 KiB/s wr, 93 op/s
Nov 25 08:35:25 compute-0 nova_compute[253538]: 2025-11-25 08:35:25.076 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:25 compute-0 nova_compute[253538]: 2025-11-25 08:35:25.135 253542 INFO nova.compute.manager [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Resuming
Nov 25 08:35:25 compute-0 nova_compute[253538]: 2025-11-25 08:35:25.136 253542 DEBUG nova.objects.instance [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'flavor' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:25 compute-0 nova_compute[253538]: 2025-11-25 08:35:25.175 253542 DEBUG oslo_concurrency.lockutils [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:35:25 compute-0 nova_compute[253538]: 2025-11-25 08:35:25.176 253542 DEBUG oslo_concurrency.lockutils [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquired lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:35:25 compute-0 nova_compute[253538]: 2025-11-25 08:35:25.177 253542 DEBUG nova.network.neutron [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:35:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 83 op/s
Nov 25 08:35:25 compute-0 nova_compute[253538]: 2025-11-25 08:35:25.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:35:25 compute-0 nova_compute[253538]: 2025-11-25 08:35:25.606 253542 DEBUG nova.compute.manager [req-30d106b9-6879-4cca-9d6c-13df7c5f5c69 req-a450639b-7319-491b-a8cc-9c31353f4e11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:25 compute-0 nova_compute[253538]: 2025-11-25 08:35:25.607 253542 DEBUG oslo_concurrency.lockutils [req-30d106b9-6879-4cca-9d6c-13df7c5f5c69 req-a450639b-7319-491b-a8cc-9c31353f4e11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:25 compute-0 nova_compute[253538]: 2025-11-25 08:35:25.607 253542 DEBUG oslo_concurrency.lockutils [req-30d106b9-6879-4cca-9d6c-13df7c5f5c69 req-a450639b-7319-491b-a8cc-9c31353f4e11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:25 compute-0 nova_compute[253538]: 2025-11-25 08:35:25.607 253542 DEBUG oslo_concurrency.lockutils [req-30d106b9-6879-4cca-9d6c-13df7c5f5c69 req-a450639b-7319-491b-a8cc-9c31353f4e11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:25 compute-0 nova_compute[253538]: 2025-11-25 08:35:25.607 253542 DEBUG nova.compute.manager [req-30d106b9-6879-4cca-9d6c-13df7c5f5c69 req-a450639b-7319-491b-a8cc-9c31353f4e11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:25 compute-0 nova_compute[253538]: 2025-11-25 08:35:25.608 253542 WARNING nova.compute.manager [req-30d106b9-6879-4cca-9d6c-13df7c5f5c69 req-a450639b-7319-491b-a8cc-9c31353f4e11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state suspended and task_state resuming.
Nov 25 08:35:25 compute-0 nova_compute[253538]: 2025-11-25 08:35:25.785 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:35:26 compute-0 ceph-mon[75015]: pgmap v1568: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 83 op/s
Nov 25 08:35:26 compute-0 nova_compute[253538]: 2025-11-25 08:35:26.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:26 compute-0 nova_compute[253538]: 2025-11-25 08:35:26.874 253542 DEBUG nova.network.neutron [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [{"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:26 compute-0 nova_compute[253538]: 2025-11-25 08:35:26.896 253542 DEBUG oslo_concurrency.lockutils [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Releasing lock "refresh_cache-0feca801-4630-4450-b915-616d8496ab51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:35:26 compute-0 nova_compute[253538]: 2025-11-25 08:35:26.903 253542 DEBUG nova.virt.libvirt.vif [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:35:26 compute-0 nova_compute[253538]: 2025-11-25 08:35:26.904 253542 DEBUG nova.network.os_vif_util [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:26 compute-0 nova_compute[253538]: 2025-11-25 08:35:26.905 253542 DEBUG nova.network.os_vif_util [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:26 compute-0 nova_compute[253538]: 2025-11-25 08:35:26.905 253542 DEBUG os_vif [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:35:26 compute-0 nova_compute[253538]: 2025-11-25 08:35:26.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:26 compute-0 nova_compute[253538]: 2025-11-25 08:35:26.907 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:26 compute-0 nova_compute[253538]: 2025-11-25 08:35:26.908 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:35:26 compute-0 nova_compute[253538]: 2025-11-25 08:35:26.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:26 compute-0 nova_compute[253538]: 2025-11-25 08:35:26.911 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15af3dd8-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:26 compute-0 nova_compute[253538]: 2025-11-25 08:35:26.912 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15af3dd8-97, col_values=(('external_ids', {'iface-id': '15af3dd8-9788-4a34-b4b2-d3b24300cd4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:cd:40', 'vm-uuid': '0feca801-4630-4450-b915-616d8496ab51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:26 compute-0 nova_compute[253538]: 2025-11-25 08:35:26.912 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:35:26 compute-0 nova_compute[253538]: 2025-11-25 08:35:26.913 253542 INFO os_vif [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')
Nov 25 08:35:26 compute-0 nova_compute[253538]: 2025-11-25 08:35:26.935 253542 DEBUG nova.objects.instance [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'numa_topology' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:27 compute-0 kernel: tap15af3dd8-97: entered promiscuous mode
Nov 25 08:35:27 compute-0 ovn_controller[152859]: 2025-11-25T08:35:27Z|00646|binding|INFO|Claiming lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c for this chassis.
Nov 25 08:35:27 compute-0 ovn_controller[152859]: 2025-11-25T08:35:27Z|00647|binding|INFO|15af3dd8-9788-4a34-b4b2-d3b24300cd4c: Claiming fa:16:3e:07:cd:40 10.100.0.13
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.017 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:27 compute-0 NetworkManager[48915]: <info>  [1764059727.0204] manager: (tap15af3dd8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Nov 25 08:35:27 compute-0 ovn_controller[152859]: 2025-11-25T08:35:27Z|00648|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c ovn-installed in OVS
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.035 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.036 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:27 compute-0 systemd-udevd[322114]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:35:27 compute-0 systemd-machined[215790]: New machine qemu-78-instance-00000032.
Nov 25 08:35:27 compute-0 ovn_controller[152859]: 2025-11-25T08:35:27Z|00649|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c up in Southbound
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.061 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '15', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.063 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 bound to our chassis
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.064 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:35:27 compute-0 NetworkManager[48915]: <info>  [1764059727.0660] device (tap15af3dd8-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:35:27 compute-0 NetworkManager[48915]: <info>  [1764059727.0668] device (tap15af3dd8-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:35:27 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-00000032.
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.075 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[09e2be14-e5fb-4e50-b5f8-1debbf13292a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.076 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap908154e6-31 in ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.078 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap908154e6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.078 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6d0b52-5123-43ff-87f1-5b6213dd788b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.079 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[53857431-a190-4942-a45f-5f8416c51690]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.093 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a34ad5-a9d3-404c-9365-5b1b4ff0bb0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.104 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b92fe500-b099-4361-b4f8-48104f1b1140]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.134 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[36a1745a-3897-48aa-bc41-156bd2fc42bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.140 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ebbbe2-baac-48d7-96c5-50cbfc931b18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:27 compute-0 systemd-udevd[322116]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:35:27 compute-0 NetworkManager[48915]: <info>  [1764059727.1418] manager: (tap908154e6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/285)
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.176 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[77081318-ec25-4f75-9863-afe1e4fbfe57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.179 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3c165424-c4c1-46d2-a938-e27a653a30af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:27 compute-0 NetworkManager[48915]: <info>  [1764059727.2050] device (tap908154e6-30): carrier: link connected
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.213 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba833ee-4e21-40f2-b56d-0f123befc591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.235 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[90ff64ab-b4cb-4663-b58f-9151037a8ab7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508180, 'reachable_time': 33009, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322147, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.251 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3d5d91ba-864d-4cff-b343-21a8506ef3ae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:5909'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508180, 'tstamp': 508180}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322148, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.279 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a89d1850-e270-4514-8513-9b866b70058e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap908154e6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:59:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508180, 'reachable_time': 33009, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322149, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.312 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2e5b52-30e5-412a-a707-d50c1de10d85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.318 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.372 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquiring lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.373 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.375 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[35a5f83a-e7e4-4d75-87f3-f5ec572812de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.376 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.376 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.377 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap908154e6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.378 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:27 compute-0 NetworkManager[48915]: <info>  [1764059727.3794] manager: (tap908154e6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Nov 25 08:35:27 compute-0 kernel: tap908154e6-30: entered promiscuous mode
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.382 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.383 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap908154e6-30, col_values=(('external_ids', {'iface-id': 'a5c69233-73e9-45f3-95c2-e76d52711966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.384 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:27 compute-0 ovn_controller[152859]: 2025-11-25T08:35:27Z|00650|binding|INFO|Releasing lport a5c69233-73e9-45f3-95c2-e76d52711966 from this chassis (sb_readonly=0)
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.397 253542 DEBUG nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.403 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.404 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.405 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.406 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d06e43af-8949-46dc-a190-94be5525bcaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.407 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/908154e6-322e-4607-bb65-df3f3f8daca6.pid.haproxy
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 908154e6-322e-4607-bb65-df3f3f8daca6
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:35:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:27.407 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'env', 'PROCESS_TAG=haproxy-908154e6-322e-4607-bb65-df3f3f8daca6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/908154e6-322e-4607-bb65-df3f3f8daca6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.494 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.495 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.503 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.504 253542 INFO nova.compute.claims [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:35:27 compute-0 nova_compute[253538]: 2025-11-25 08:35:27.626 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:27 compute-0 podman[322181]: 2025-11-25 08:35:27.772207532 +0000 UTC m=+0.042610546 container create 1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 08:35:27 compute-0 systemd[1]: Started libpod-conmon-1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81.scope.
Nov 25 08:35:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9436bb21b625d65243e687d48f6f742a3a3013b2c6388d6c2603db017f5bb0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:27 compute-0 podman[322181]: 2025-11-25 08:35:27.75019437 +0000 UTC m=+0.020597414 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:35:27 compute-0 podman[322181]: 2025-11-25 08:35:27.854517234 +0000 UTC m=+0.124920248 container init 1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:35:27 compute-0 podman[322181]: 2025-11-25 08:35:27.867081432 +0000 UTC m=+0.137484446 container start 1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 08:35:27 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[322215]: [NOTICE]   (322235) : New worker (322254) forked
Nov 25 08:35:27 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[322215]: [NOTICE]   (322235) : Loading success.
Nov 25 08:35:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:35:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3784469813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.081 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 0feca801-4630-4450-b915-616d8496ab51 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.083 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059728.0814462, 0feca801-4630-4450-b915-616d8496ab51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.083 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Started (Lifecycle Event)
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.088 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.095 253542 DEBUG nova.compute.provider_tree [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.099 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.104 253542 DEBUG nova.compute.manager [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.104 253542 DEBUG nova.objects.instance [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.108 253542 DEBUG nova.scheduler.client.report [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.114 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.130 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance running successfully.
Nov 25 08:35:28 compute-0 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.131 253542 DEBUG nova.virt.libvirt.guest [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.131 253542 DEBUG nova.compute.manager [None req-496ec884-0571-4657-9326-42d315c9f10e c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.134 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.134 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059728.0860713, 0feca801-4630-4450-b915-616d8496ab51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.134 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Resumed (Lifecycle Event)
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.157 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.158 253542 DEBUG nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.161 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.164 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.186 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.235 253542 DEBUG nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.261 253542 INFO nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.287 253542 DEBUG nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.399 253542 DEBUG nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.401 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.402 253542 INFO nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Creating image(s)
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.428 253542 DEBUG nova.storage.rbd_utils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] rbd image cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.454 253542 DEBUG nova.storage.rbd_utils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] rbd image cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.477 253542 DEBUG nova.storage.rbd_utils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] rbd image cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.481 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.570 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.571 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.572 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.572 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.594 253542 DEBUG nova.storage.rbd_utils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] rbd image cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.597 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:28 compute-0 ceph-mon[75015]: pgmap v1569: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Nov 25 08:35:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3784469813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.927 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:35:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3935479186' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:35:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:35:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3935479186' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:35:28 compute-0 nova_compute[253538]: 2025-11-25 08:35:28.994 253542 DEBUG nova.storage.rbd_utils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] resizing rbd image cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.105 253542 DEBUG nova.objects.instance [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lazy-loading 'migration_context' on Instance uuid cab0bbd2-96e3-43ed-970b-0b49c7581fef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.119 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.120 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Ensure instance console log exists: /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.120 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.120 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.121 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.122 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.127 253542 WARNING nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.131 253542 DEBUG nova.virt.libvirt.host [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.131 253542 DEBUG nova.virt.libvirt.host [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.135 253542 DEBUG nova.virt.libvirt.host [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.135 253542 DEBUG nova.virt.libvirt.host [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.136 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.136 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.137 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.137 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.137 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.137 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.138 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.138 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.138 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.138 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.138 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.139 253542 DEBUG nova.virt.hardware [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.142 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 737 KiB/s rd, 85 B/s wr, 31 op/s
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:35:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:35:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2265566824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.606 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.628 253542 DEBUG nova.storage.rbd_utils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] rbd image cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:29 compute-0 nova_compute[253538]: 2025-11-25 08:35:29.632 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3935479186' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:35:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3935479186' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:35:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2265566824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:30 compute-0 nova_compute[253538]: 2025-11-25 08:35:30.079 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:35:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2659968424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:30 compute-0 nova_compute[253538]: 2025-11-25 08:35:30.113 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:30 compute-0 nova_compute[253538]: 2025-11-25 08:35:30.116 253542 DEBUG nova.objects.instance [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lazy-loading 'pci_devices' on Instance uuid cab0bbd2-96e3-43ed-970b-0b49c7581fef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:30 compute-0 nova_compute[253538]: 2025-11-25 08:35:30.132 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:35:30 compute-0 nova_compute[253538]:   <uuid>cab0bbd2-96e3-43ed-970b-0b49c7581fef</uuid>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   <name>instance-00000043</name>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersAaction247Test-server-1613086338</nova:name>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:35:29</nova:creationTime>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:35:30 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:35:30 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:35:30 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:35:30 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:35:30 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:35:30 compute-0 nova_compute[253538]:         <nova:user uuid="78e69376f7924a5695ba6f4672139f68">tempest-ServersAaction247Test-1472148412-project-member</nova:user>
Nov 25 08:35:30 compute-0 nova_compute[253538]:         <nova:project uuid="d2de800215da4cac8d9fd3a6a3cf4a55">tempest-ServersAaction247Test-1472148412</nova:project>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <system>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <entry name="serial">cab0bbd2-96e3-43ed-970b-0b49c7581fef</entry>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <entry name="uuid">cab0bbd2-96e3-43ed-970b-0b49c7581fef</entry>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     </system>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   <os>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   </os>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   <features>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   </features>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk">
Nov 25 08:35:30 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       </source>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:35:30 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk.config">
Nov 25 08:35:30 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       </source>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:35:30 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef/console.log" append="off"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <video>
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     </video>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:35:30 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:35:30 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:35:30 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:35:30 compute-0 nova_compute[253538]: </domain>
Nov 25 08:35:30 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:35:30 compute-0 nova_compute[253538]: 2025-11-25 08:35:30.225 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:35:30 compute-0 nova_compute[253538]: 2025-11-25 08:35:30.226 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:35:30 compute-0 nova_compute[253538]: 2025-11-25 08:35:30.227 253542 INFO nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Using config drive
Nov 25 08:35:30 compute-0 nova_compute[253538]: 2025-11-25 08:35:30.265 253542 DEBUG nova.storage.rbd_utils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] rbd image cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:30 compute-0 nova_compute[253538]: 2025-11-25 08:35:30.787 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:30 compute-0 ceph-mon[75015]: pgmap v1570: 321 pgs: 321 active+clean; 123 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 737 KiB/s rd, 85 B/s wr, 31 op/s
Nov 25 08:35:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2659968424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.166 253542 INFO nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Creating config drive at /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef/disk.config
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.173 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp9x7oxwh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 142 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 637 KiB/s wr, 30 op/s
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.322 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp9x7oxwh" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.348 253542 DEBUG nova.storage.rbd_utils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] rbd image cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.351 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef/disk.config cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.541 253542 DEBUG oslo_concurrency.processutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef/disk.config cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.542 253542 INFO nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Deleting local config drive /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef/disk.config because it was imported into RBD.
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:35:31 compute-0 systemd-machined[215790]: New machine qemu-79-instance-00000043.
Nov 25 08:35:31 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-00000043.
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.745 253542 DEBUG nova.compute.manager [req-91404eef-f298-4cbd-af09-5da3630b7d28 req-b1ae69a8-ff58-4a01-9a8b-7fb93af3ee2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.746 253542 DEBUG oslo_concurrency.lockutils [req-91404eef-f298-4cbd-af09-5da3630b7d28 req-b1ae69a8-ff58-4a01-9a8b-7fb93af3ee2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.747 253542 DEBUG oslo_concurrency.lockutils [req-91404eef-f298-4cbd-af09-5da3630b7d28 req-b1ae69a8-ff58-4a01-9a8b-7fb93af3ee2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.748 253542 DEBUG oslo_concurrency.lockutils [req-91404eef-f298-4cbd-af09-5da3630b7d28 req-b1ae69a8-ff58-4a01-9a8b-7fb93af3ee2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.748 253542 DEBUG nova.compute.manager [req-91404eef-f298-4cbd-af09-5da3630b7d28 req-b1ae69a8-ff58-4a01-9a8b-7fb93af3ee2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.749 253542 WARNING nova.compute.manager [req-91404eef-f298-4cbd-af09-5da3630b7d28 req-b1ae69a8-ff58-4a01-9a8b-7fb93af3ee2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state None.
Nov 25 08:35:31 compute-0 ceph-mon[75015]: pgmap v1571: 321 pgs: 321 active+clean; 142 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 637 KiB/s wr, 30 op/s
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.824 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.825 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.826 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.826 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.826 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.828 253542 INFO nova.compute.manager [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Terminating instance
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.829 253542 DEBUG nova.compute.manager [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:35:31 compute-0 kernel: tap15af3dd8-97 (unregistering): left promiscuous mode
Nov 25 08:35:31 compute-0 NetworkManager[48915]: <info>  [1764059731.8765] device (tap15af3dd8-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:35:31 compute-0 ovn_controller[152859]: 2025-11-25T08:35:31Z|00651|binding|INFO|Releasing lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c from this chassis (sb_readonly=0)
Nov 25 08:35:31 compute-0 ovn_controller[152859]: 2025-11-25T08:35:31Z|00652|binding|INFO|Setting lport 15af3dd8-9788-4a34-b4b2-d3b24300cd4c down in Southbound
Nov 25 08:35:31 compute-0 ovn_controller[152859]: 2025-11-25T08:35:31Z|00653|binding|INFO|Removing iface tap15af3dd8-97 ovn-installed in OVS
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.886 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:31.898 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:cd:40 10.100.0.13'], port_security=['fa:16:3e:07:cd:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0feca801-4630-4450-b915-616d8496ab51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-908154e6-322e-4607-bb65-df3f3f8daca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23237e7592b247838e62457157e64e9e', 'neutron:revision_number': '16', 'neutron:security_group_ids': '3358d53f-61a0-46d5-a068-e095dc0322d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f04e87cb-da21-4cc9-be16-4ad52b84fb85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=15af3dd8-9788-4a34-b4b2-d3b24300cd4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:31.901 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 15af3dd8-9788-4a34-b4b2-d3b24300cd4c in datapath 908154e6-322e-4607-bb65-df3f3f8daca6 unbound from our chassis
Nov 25 08:35:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:31.902 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 908154e6-322e-4607-bb65-df3f3f8daca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:35:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:31.904 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c71ba676-22c2-4c80-b547-46342e183d2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:31.906 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 namespace which is not needed anymore
Nov 25 08:35:31 compute-0 nova_compute[253538]: 2025-11-25 08:35:31.910 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:31 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 25 08:35:31 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000032.scope: Consumed 4.796s CPU time.
Nov 25 08:35:31 compute-0 systemd-machined[215790]: Machine qemu-78-instance-00000032 terminated.
Nov 25 08:35:32 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[322215]: [NOTICE]   (322235) : haproxy version is 2.8.14-c23fe91
Nov 25 08:35:32 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[322215]: [NOTICE]   (322235) : path to executable is /usr/sbin/haproxy
Nov 25 08:35:32 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[322215]: [WARNING]  (322235) : Exiting Master process...
Nov 25 08:35:32 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[322215]: [WARNING]  (322235) : Exiting Master process...
Nov 25 08:35:32 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[322215]: [ALERT]    (322235) : Current worker (322254) exited with code 143 (Terminated)
Nov 25 08:35:32 compute-0 neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6[322215]: [WARNING]  (322235) : All workers exited. Exiting... (0)
Nov 25 08:35:32 compute-0 systemd[1]: libpod-1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81.scope: Deactivated successfully.
Nov 25 08:35:32 compute-0 podman[322598]: 2025-11-25 08:35:32.043515808 +0000 UTC m=+0.047388205 container died 1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.079 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.083 253542 INFO nova.virt.libvirt.driver [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Instance destroyed successfully.
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.084 253542 DEBUG nova.objects.instance [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lazy-loading 'resources' on Instance uuid 0feca801-4630-4450-b915-616d8496ab51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.092 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.095 253542 DEBUG nova.virt.libvirt.vif [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1351113969',display_name='tempest-ServerActionsTestJSON-server-1351113969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1351113969',id=50,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5Fk56Q4lNqV9K7WDUeyi9Q8LXfzmP3pHaY3lQ+esJSSZiBEhWQZtQw4QEFpwpSqYGNN6+MiKdvSMZAjIxsIMhhevlSp0lI0bm/7fQanmTm+NtC/LaRja7uscM7lRA6IQ==',key_name='tempest-keypair-23618085',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23237e7592b247838e62457157e64e9e',ramdisk_id='',reservation_id='r-ui990d0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1880843108',owner_user_name='tempest-ServerActionsTestJSON-1880843108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c199ca353ed54a53ab7fe37d3089c82a',uuid=0feca801-4630-4450-b915-616d8496ab51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.095 253542 DEBUG nova.network.os_vif_util [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converting VIF {"id": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "address": "fa:16:3e:07:cd:40", "network": {"id": "908154e6-322e-4607-bb65-df3f3f8daca6", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1395486665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23237e7592b247838e62457157e64e9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15af3dd8-97", "ovs_interfaceid": "15af3dd8-9788-4a34-b4b2-d3b24300cd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.096 253542 DEBUG nova.network.os_vif_util [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.097 253542 DEBUG os_vif [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.099 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.100 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15af3dd8-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81-userdata-shm.mount: Deactivated successfully.
Nov 25 08:35:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a9436bb21b625d65243e687d48f6f742a3a3013b2c6388d6c2603db017f5bb0-merged.mount: Deactivated successfully.
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.105 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.106 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.109 253542 INFO os_vif [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:cd:40,bridge_name='br-int',has_traffic_filtering=True,id=15af3dd8-9788-4a34-b4b2-d3b24300cd4c,network=Network(908154e6-322e-4607-bb65-df3f3f8daca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15af3dd8-97')
Nov 25 08:35:32 compute-0 podman[322598]: 2025-11-25 08:35:32.109646514 +0000 UTC m=+0.113518901 container cleanup 1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 08:35:32 compute-0 systemd[1]: libpod-conmon-1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81.scope: Deactivated successfully.
Nov 25 08:35:32 compute-0 podman[322635]: 2025-11-25 08:35:32.174361454 +0000 UTC m=+0.045443442 container remove 1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:35:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.179 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c9970b97-4889-4591-b20d-e88cd54e3e32]: (4, ('Tue Nov 25 08:35:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81)\n1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81\nTue Nov 25 08:35:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 (1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81)\n1f8218895f077b5577d2b628b28c3f3a7162df45e0b0a88502945a35a3202f81\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.181 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[65ff64ef-3388-4cde-8a12-3e23f579bd5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.182 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap908154e6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:32 compute-0 kernel: tap908154e6-30: left promiscuous mode
Nov 25 08:35:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.185 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.183 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.199 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.201 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc36eff-aea2-48bc-9742-2cb4be42fd1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.218 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[22fa8bd0-b370-4471-8ea7-471358d1e991]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.220 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[efc8b560-6b5f-46b9-be58-977ea40102b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.242 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[37cb74a9-9e73-4992-bd05-708fd74cbb3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508172, 'reachable_time': 43632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322668, 'error': None, 'target': 'ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d908154e6\x2d322e\x2d4607\x2dbb65\x2ddf3f3f8daca6.mount: Deactivated successfully.
Nov 25 08:35:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.247 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-908154e6-322e-4607-bb65-df3f3f8daca6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:35:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.248 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b1fa98f1-4b0b-4a68-a01b-25a9da0d39ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:32.248 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.526 253542 INFO nova.virt.libvirt.driver [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Deleting instance files /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51_del
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.527 253542 INFO nova.virt.libvirt.driver [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Deletion of /var/lib/nova/instances/0feca801-4630-4450-b915-616d8496ab51_del complete
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.571 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.571 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.582 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059732.581664, cab0bbd2-96e3-43ed-970b-0b49c7581fef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.583 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] VM Resumed (Lifecycle Event)
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.584 253542 DEBUG nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.584 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.587 253542 INFO nova.virt.libvirt.driver [-] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Instance spawned successfully.
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.588 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.611 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.618 253542 INFO nova.compute.manager [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.619 253542 DEBUG oslo.service.loopingcall [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.620 253542 DEBUG nova.compute.manager [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.621 253542 DEBUG nova.network.neutron [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.627 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.631 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.632 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.633 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.634 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.634 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.635 253542 DEBUG nova.virt.libvirt.driver [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.667 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.667 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059732.585585, cab0bbd2-96e3-43ed-970b-0b49c7581fef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.667 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] VM Started (Lifecycle Event)
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.692 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.694 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.711 253542 INFO nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Took 4.31 seconds to spawn the instance on the hypervisor.
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.712 253542 DEBUG nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.713 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.765 253542 INFO nova.compute.manager [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Took 5.29 seconds to build instance.
Nov 25 08:35:32 compute-0 nova_compute[253538]: 2025-11-25 08:35:32.783 253542 DEBUG oslo_concurrency.lockutils [None req-040bbfc9-8ac2-47a0-959c-a5377d60a6bc 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:33.251 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 160 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.4 MiB/s wr, 10 op/s
Nov 25 08:35:33 compute-0 nova_compute[253538]: 2025-11-25 08:35:33.527 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:33 compute-0 nova_compute[253538]: 2025-11-25 08:35:33.836 253542 DEBUG nova.compute.manager [req-a3a95b4b-2356-4331-be37-2f269d8e8742 req-70f5e85a-1a7d-4a7b-ae22-dec03ccb8dbb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:33 compute-0 nova_compute[253538]: 2025-11-25 08:35:33.836 253542 DEBUG oslo_concurrency.lockutils [req-a3a95b4b-2356-4331-be37-2f269d8e8742 req-70f5e85a-1a7d-4a7b-ae22-dec03ccb8dbb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0feca801-4630-4450-b915-616d8496ab51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:33 compute-0 nova_compute[253538]: 2025-11-25 08:35:33.836 253542 DEBUG oslo_concurrency.lockutils [req-a3a95b4b-2356-4331-be37-2f269d8e8742 req-70f5e85a-1a7d-4a7b-ae22-dec03ccb8dbb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:33 compute-0 nova_compute[253538]: 2025-11-25 08:35:33.837 253542 DEBUG oslo_concurrency.lockutils [req-a3a95b4b-2356-4331-be37-2f269d8e8742 req-70f5e85a-1a7d-4a7b-ae22-dec03ccb8dbb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:33 compute-0 nova_compute[253538]: 2025-11-25 08:35:33.837 253542 DEBUG nova.compute.manager [req-a3a95b4b-2356-4331-be37-2f269d8e8742 req-70f5e85a-1a7d-4a7b-ae22-dec03ccb8dbb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] No waiting events found dispatching network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:33 compute-0 nova_compute[253538]: 2025-11-25 08:35:33.837 253542 WARNING nova.compute.manager [req-a3a95b4b-2356-4331-be37-2f269d8e8742 req-70f5e85a-1a7d-4a7b-ae22-dec03ccb8dbb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received unexpected event network-vif-plugged-15af3dd8-9788-4a34-b4b2-d3b24300cd4c for instance with vm_state active and task_state deleting.
Nov 25 08:35:33 compute-0 nova_compute[253538]: 2025-11-25 08:35:33.883 253542 DEBUG nova.network.neutron [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:33 compute-0 nova_compute[253538]: 2025-11-25 08:35:33.904 253542 INFO nova.compute.manager [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] Took 1.28 seconds to deallocate network for instance.
Nov 25 08:35:33 compute-0 nova_compute[253538]: 2025-11-25 08:35:33.944 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:33 compute-0 nova_compute[253538]: 2025-11-25 08:35:33.945 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:33 compute-0 nova_compute[253538]: 2025-11-25 08:35:33.962 253542 DEBUG nova.compute.manager [req-09053c5a-a672-4e4c-97c0-a66975d96667 req-382fd079-8979-470b-83ca-5d7e8ec7b13d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0feca801-4630-4450-b915-616d8496ab51] Received event network-vif-deleted-15af3dd8-9788-4a34-b4b2-d3b24300cd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:34 compute-0 nova_compute[253538]: 2025-11-25 08:35:34.036 253542 DEBUG oslo_concurrency.processutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:34 compute-0 ceph-mon[75015]: pgmap v1572: 321 pgs: 321 active+clean; 160 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.4 MiB/s wr, 10 op/s
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.350576) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059734350603, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2120, "num_deletes": 253, "total_data_size": 3162604, "memory_usage": 3207088, "flush_reason": "Manual Compaction"}
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059734376424, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3094649, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30739, "largest_seqno": 32858, "table_properties": {"data_size": 3085310, "index_size": 5705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20612, "raw_average_key_size": 20, "raw_value_size": 3066062, "raw_average_value_size": 3062, "num_data_blocks": 252, "num_entries": 1001, "num_filter_entries": 1001, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764059539, "oldest_key_time": 1764059539, "file_creation_time": 1764059734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 25886 microseconds, and 5894 cpu microseconds.
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.376461) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3094649 bytes OK
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.376478) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.378709) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.378722) EVENT_LOG_v1 {"time_micros": 1764059734378718, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.378738) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3153607, prev total WAL file size 3153607, number of live WAL files 2.
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.379663) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3022KB)], [68(6971KB)]
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059734379718, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10233739, "oldest_snapshot_seqno": -1}
Nov 25 08:35:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:35:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2397339520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:34 compute-0 nova_compute[253538]: 2025-11-25 08:35:34.488 253542 DEBUG oslo_concurrency.processutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:34 compute-0 nova_compute[253538]: 2025-11-25 08:35:34.493 253542 DEBUG nova.compute.provider_tree [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:35:34 compute-0 nova_compute[253538]: 2025-11-25 08:35:34.510 253542 DEBUG nova.scheduler.client.report [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5790 keys, 8519501 bytes, temperature: kUnknown
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059734525497, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8519501, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8480901, "index_size": 23002, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 145745, "raw_average_key_size": 25, "raw_value_size": 8376996, "raw_average_value_size": 1446, "num_data_blocks": 939, "num_entries": 5790, "num_filter_entries": 5790, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764059734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.525729) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8519501 bytes
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.528631) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 70.2 rd, 58.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 6.8 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 6312, records dropped: 522 output_compression: NoCompression
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.528649) EVENT_LOG_v1 {"time_micros": 1764059734528640, "job": 38, "event": "compaction_finished", "compaction_time_micros": 145861, "compaction_time_cpu_micros": 30827, "output_level": 6, "num_output_files": 1, "total_output_size": 8519501, "num_input_records": 6312, "num_output_records": 5790, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059734529150, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059734530179, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.379537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.530246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.530253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.530256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.530259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:35:34 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:35:34.530262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:35:34 compute-0 nova_compute[253538]: 2025-11-25 08:35:34.531 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:34 compute-0 nova_compute[253538]: 2025-11-25 08:35:34.567 253542 INFO nova.scheduler.client.report [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Deleted allocations for instance 0feca801-4630-4450-b915-616d8496ab51
Nov 25 08:35:34 compute-0 nova_compute[253538]: 2025-11-25 08:35:34.655 253542 DEBUG oslo_concurrency.lockutils [None req-55f650df-f7c4-4b19-8bef-7a3b81e30460 c199ca353ed54a53ab7fe37d3089c82a 23237e7592b247838e62457157e64e9e - - default default] Lock "0feca801-4630-4450-b915-616d8496ab51" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.180 253542 DEBUG nova.compute.manager [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.218 253542 INFO nova.compute.manager [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] instance snapshotting
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.219 253542 DEBUG nova.objects.instance [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lazy-loading 'flavor' on Instance uuid cab0bbd2-96e3-43ed-970b-0b49c7581fef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 127 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 822 KiB/s rd, 1.8 MiB/s wr, 89 op/s
Nov 25 08:35:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2397339520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.511 253542 INFO nova.virt.libvirt.driver [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Beginning live snapshot process
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.688 253542 DEBUG nova.virt.libvirt.imagebackend [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.760 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquiring lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.761 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.762 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquiring lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.763 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.763 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.766 253542 INFO nova.compute.manager [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Terminating instance
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.768 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquiring lock "refresh_cache-cab0bbd2-96e3-43ed-970b-0b49c7581fef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.768 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquired lock "refresh_cache-cab0bbd2-96e3-43ed-970b-0b49c7581fef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.769 253542 DEBUG nova.network.neutron [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.834 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "9396c5ff-9457-400c-8916-ecd03eded0c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.835 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.910 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:35:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.991 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:35 compute-0 nova_compute[253538]: 2025-11-25 08:35:35.992 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.000 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.000 253542 INFO nova.compute.claims [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.022 253542 DEBUG nova.storage.rbd_utils [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] creating snapshot(befacc2a29f94475a36706369e285e5b) on rbd image(cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.065 253542 DEBUG nova.network.neutron [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.153 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Nov 25 08:35:36 compute-0 ceph-mon[75015]: pgmap v1573: 321 pgs: 321 active+clean; 127 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 822 KiB/s rd, 1.8 MiB/s wr, 89 op/s
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.388 253542 DEBUG nova.network.neutron [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Nov 25 08:35:36 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.415 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Releasing lock "refresh_cache-cab0bbd2-96e3-43ed-970b-0b49c7581fef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.416 253542 DEBUG nova.compute.manager [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.455 253542 DEBUG nova.storage.rbd_utils [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] cloning vms/cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk@befacc2a29f94475a36706369e285e5b to images/dd6f5947-0c8f-4755-b44f-279630d6448e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:35:36 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000043.scope: Deactivated successfully.
Nov 25 08:35:36 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000043.scope: Consumed 4.851s CPU time.
Nov 25 08:35:36 compute-0 systemd-machined[215790]: Machine qemu-79-instance-00000043 terminated.
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.560 253542 DEBUG nova.storage.rbd_utils [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] flattening images/dd6f5947-0c8f-4755-b44f-279630d6448e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.646 253542 INFO nova.virt.libvirt.driver [-] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Instance destroyed successfully.
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.647 253542 DEBUG nova.objects.instance [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lazy-loading 'resources' on Instance uuid cab0bbd2-96e3-43ed-970b-0b49c7581fef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:35:36 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2315898247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.683 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.688 253542 DEBUG nova.compute.provider_tree [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.705 253542 DEBUG nova.scheduler.client.report [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.738 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.739 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.828 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.828 253542 DEBUG nova.network.neutron [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.882 253542 INFO nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.898 253542 DEBUG nova.storage.rbd_utils [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] removing snapshot(befacc2a29f94475a36706369e285e5b) on rbd image(cab0bbd2-96e3-43ed-970b-0b49c7581fef_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:35:36 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.902 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:36.999 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.001 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.002 253542 INFO nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Creating image(s)
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.032 253542 DEBUG nova.storage.rbd_utils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] rbd image 9396c5ff-9457-400c-8916-ecd03eded0c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.061 253542 DEBUG nova.storage.rbd_utils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] rbd image 9396c5ff-9457-400c-8916-ecd03eded0c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.083 253542 DEBUG nova.storage.rbd_utils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] rbd image 9396c5ff-9457-400c-8916-ecd03eded0c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.086 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.146 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.185 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.186 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.187 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.187 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.210 253542 DEBUG nova.storage.rbd_utils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] rbd image 9396c5ff-9457-400c-8916-ecd03eded0c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.213 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 9396c5ff-9457-400c-8916-ecd03eded0c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 112 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 169 op/s
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.390 253542 DEBUG nova.policy [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '596f8d994ec145beb9244f5f01713555', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b96d13f13da43468269abb6dc6185d1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:35:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Nov 25 08:35:37 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.473 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.474 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Nov 25 08:35:37 compute-0 ceph-mon[75015]: osdmap e183: 3 total, 3 up, 3 in
Nov 25 08:35:37 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2315898247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.489 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:35:37 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.547 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.548 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.555 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.556 253542 INFO nova.compute.claims [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.570 253542 DEBUG nova.storage.rbd_utils [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] creating snapshot(snap) on rbd image(dd6f5947-0c8f-4755-b44f-279630d6448e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.624 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 9396c5ff-9457-400c-8916-ecd03eded0c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.698 253542 DEBUG nova.storage.rbd_utils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] resizing rbd image 9396c5ff-9457-400c-8916-ecd03eded0c1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.787 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.826 253542 DEBUG nova.objects.instance [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 9396c5ff-9457-400c-8916-ecd03eded0c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.837 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.838 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Ensure instance console log exists: /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.838 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.838 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:37 compute-0 nova_compute[253538]: 2025-11-25 08:35:37.839 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:35:38 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3716399622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.216 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.224 253542 DEBUG nova.compute.provider_tree [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.247 253542 DEBUG nova.scheduler.client.report [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.298 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.300 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.361 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.362 253542 DEBUG nova.network.neutron [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.373 253542 DEBUG nova.network.neutron [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Successfully created port: 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.387 253542 INFO nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.418 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:35:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Nov 25 08:35:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Nov 25 08:35:38 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Nov 25 08:35:38 compute-0 ceph-mon[75015]: pgmap v1575: 321 pgs: 321 active+clean; 112 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 169 op/s
Nov 25 08:35:38 compute-0 ceph-mon[75015]: osdmap e184: 3 total, 3 up, 3 in
Nov 25 08:35:38 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3716399622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.505 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.507 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.507 253542 INFO nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Creating image(s)
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.537 253542 DEBUG nova.storage.rbd_utils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] rbd image 4e9d3984-d789-45e1-83e3-8909597d3265_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.561 253542 DEBUG nova.storage.rbd_utils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] rbd image 4e9d3984-d789-45e1-83e3-8909597d3265_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.588 253542 DEBUG nova.storage.rbd_utils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] rbd image 4e9d3984-d789-45e1-83e3-8909597d3265_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.591 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.679 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.680 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.680 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.680 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.698 253542 DEBUG nova.storage.rbd_utils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] rbd image 4e9d3984-d789-45e1-83e3-8909597d3265_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.700 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4e9d3984-d789-45e1-83e3-8909597d3265_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:38 compute-0 nova_compute[253538]: 2025-11-25 08:35:38.985 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4e9d3984-d789-45e1-83e3-8909597d3265_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:39 compute-0 nova_compute[253538]: 2025-11-25 08:35:39.033 253542 DEBUG nova.storage.rbd_utils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] resizing rbd image 4e9d3984-d789-45e1-83e3-8909597d3265_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:35:39 compute-0 nova_compute[253538]: 2025-11-25 08:35:39.112 253542 DEBUG nova.objects.instance [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e9d3984-d789-45e1-83e3-8909597d3265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:39 compute-0 nova_compute[253538]: 2025-11-25 08:35:39.125 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:35:39 compute-0 nova_compute[253538]: 2025-11-25 08:35:39.126 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Ensure instance console log exists: /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:35:39 compute-0 nova_compute[253538]: 2025-11-25 08:35:39.126 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:39 compute-0 nova_compute[253538]: 2025-11-25 08:35:39.126 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:39 compute-0 nova_compute[253538]: 2025-11-25 08:35:39.127 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:39 compute-0 nova_compute[253538]: 2025-11-25 08:35:39.174 253542 DEBUG nova.policy [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4de06a7985be4463b069db269e2882d4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a20ef4bed55a408c8933a4956b2dd3e4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:35:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1578: 321 pgs: 321 active+clean; 135 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 3.4 MiB/s wr, 355 op/s
Nov 25 08:35:39 compute-0 ceph-mon[75015]: osdmap e185: 3 total, 3 up, 3 in
Nov 25 08:35:39 compute-0 podman[323289]: 2025-11-25 08:35:39.80706653 +0000 UTC m=+0.056382516 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:35:39 compute-0 nova_compute[253538]: 2025-11-25 08:35:39.831 253542 DEBUG nova.network.neutron [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Successfully created port: d553507f-4019-4ce0-b549-4d221b9089cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.015 253542 INFO nova.virt.libvirt.driver [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Snapshot image upload complete
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.015 253542 INFO nova.compute.manager [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Took 4.78 seconds to snapshot the instance on the hypervisor.
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.056 253542 DEBUG nova.compute.manager [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.300 253542 DEBUG nova.network.neutron [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Successfully updated port: 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.313 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "refresh_cache-9396c5ff-9457-400c-8916-ecd03eded0c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.313 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquired lock "refresh_cache-9396c5ff-9457-400c-8916-ecd03eded0c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.313 253542 DEBUG nova.network.neutron [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:35:40 compute-0 ceph-mon[75015]: pgmap v1578: 321 pgs: 321 active+clean; 135 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 3.4 MiB/s wr, 355 op/s
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.673 253542 DEBUG nova.compute.manager [None req-6945dc43-ae27-49fa-97b6-869081f2415a 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.711 253542 DEBUG nova.compute.manager [req-0e4fb5a9-20ab-4129-8a16-ded88e0a087b req-7a2d93d7-d203-4267-baea-4b01d2c267fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received event network-changed-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.712 253542 DEBUG nova.compute.manager [req-0e4fb5a9-20ab-4129-8a16-ded88e0a087b req-7a2d93d7-d203-4267-baea-4b01d2c267fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Refreshing instance network info cache due to event network-changed-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.712 253542 DEBUG oslo_concurrency.lockutils [req-0e4fb5a9-20ab-4129-8a16-ded88e0a087b req-7a2d93d7-d203-4267-baea-4b01d2c267fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9396c5ff-9457-400c-8916-ecd03eded0c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.726 253542 DEBUG nova.network.neutron [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.790 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.881 253542 DEBUG nova.network.neutron [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Successfully updated port: d553507f-4019-4ce0-b549-4d221b9089cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.901 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.901 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquired lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.901 253542 DEBUG nova.network.neutron [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:35:40 compute-0 nova_compute[253538]: 2025-11-25 08:35:40.957 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:35:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:41.059 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:41.060 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:41.060 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:41 compute-0 nova_compute[253538]: 2025-11-25 08:35:41.104 253542 DEBUG nova.compute.manager [req-ffe973bb-b0df-4a54-b49f-2474602e89ab req-9a2630d4-249b-42ee-8a59-7de5dc9b9479 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received event network-changed-d553507f-4019-4ce0-b549-4d221b9089cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:41 compute-0 nova_compute[253538]: 2025-11-25 08:35:41.105 253542 DEBUG nova.compute.manager [req-ffe973bb-b0df-4a54-b49f-2474602e89ab req-9a2630d4-249b-42ee-8a59-7de5dc9b9479 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Refreshing instance network info cache due to event network-changed-d553507f-4019-4ce0-b549-4d221b9089cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:35:41 compute-0 nova_compute[253538]: 2025-11-25 08:35:41.105 253542 DEBUG oslo_concurrency.lockutils [req-ffe973bb-b0df-4a54-b49f-2474602e89ab req-9a2630d4-249b-42ee-8a59-7de5dc9b9479 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:35:41 compute-0 nova_compute[253538]: 2025-11-25 08:35:41.113 253542 DEBUG nova.network.neutron [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:35:41 compute-0 nova_compute[253538]: 2025-11-25 08:35:41.172 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 185 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 7.6 MiB/s wr, 232 op/s
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.149 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.442 253542 DEBUG nova.network.neutron [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Updating instance_info_cache with network_info: [{"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.479 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Releasing lock "refresh_cache-9396c5ff-9457-400c-8916-ecd03eded0c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.480 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Instance network_info: |[{"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.480 253542 DEBUG oslo_concurrency.lockutils [req-0e4fb5a9-20ab-4129-8a16-ded88e0a087b req-7a2d93d7-d203-4267-baea-4b01d2c267fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9396c5ff-9457-400c-8916-ecd03eded0c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.481 253542 DEBUG nova.network.neutron [req-0e4fb5a9-20ab-4129-8a16-ded88e0a087b req-7a2d93d7-d203-4267-baea-4b01d2c267fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Refreshing network info cache for port 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.486 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Start _get_guest_xml network_info=[{"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.493 253542 WARNING nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.503 253542 DEBUG nova.virt.libvirt.host [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.504 253542 DEBUG nova.virt.libvirt.host [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.520 253542 DEBUG nova.virt.libvirt.host [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.521 253542 DEBUG nova.virt.libvirt.host [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.522 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.522 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.523 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.523 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.524 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.524 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.525 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.525 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.526 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.526 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.527 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:35:42 compute-0 ceph-mon[75015]: pgmap v1579: 321 pgs: 321 active+clean; 185 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 7.6 MiB/s wr, 232 op/s
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.527 253542 DEBUG nova.virt.hardware [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.532 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.591 253542 INFO nova.virt.libvirt.driver [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Deleting instance files /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef_del
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.593 253542 INFO nova.virt.libvirt.driver [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Deletion of /var/lib/nova/instances/cab0bbd2-96e3-43ed-970b-0b49c7581fef_del complete
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.683 253542 INFO nova.compute.manager [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Took 6.27 seconds to destroy the instance on the hypervisor.
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.685 253542 DEBUG oslo.service.loopingcall [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.686 253542 DEBUG nova.compute.manager [-] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.686 253542 DEBUG nova.network.neutron [-] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.838 253542 DEBUG nova.network.neutron [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Updating instance_info_cache with network_info: [{"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.841 253542 DEBUG nova.network.neutron [-] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.855 253542 DEBUG nova.network.neutron [-] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:42 compute-0 nova_compute[253538]: 2025-11-25 08:35:42.871 253542 INFO nova.compute.manager [-] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Took 0.18 seconds to deallocate network for instance.
Nov 25 08:35:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:35:42 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/557205148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.010 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.032 253542 DEBUG nova.storage.rbd_utils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] rbd image 9396c5ff-9457-400c-8916-ecd03eded0c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.037 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.076 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Releasing lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.077 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Instance network_info: |[{"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.078 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.078 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.079 253542 DEBUG oslo_concurrency.lockutils [req-ffe973bb-b0df-4a54-b49f-2474602e89ab req-9a2630d4-249b-42ee-8a59-7de5dc9b9479 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.079 253542 DEBUG nova.network.neutron [req-ffe973bb-b0df-4a54-b49f-2474602e89ab req-9a2630d4-249b-42ee-8a59-7de5dc9b9479 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Refreshing network info cache for port d553507f-4019-4ce0-b549-4d221b9089cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.082 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Start _get_guest_xml network_info=[{"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.095 253542 WARNING nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.102 253542 DEBUG nova.virt.libvirt.host [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.103 253542 DEBUG nova.virt.libvirt.host [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.105 253542 DEBUG nova.virt.libvirt.host [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.106 253542 DEBUG nova.virt.libvirt.host [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.107 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.107 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.108 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.108 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.108 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.109 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.109 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.109 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.110 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.110 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.110 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.111 253542 DEBUG nova.virt.hardware [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.115 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:43 compute-0 sudo[323351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:35:43 compute-0 sudo[323351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:43 compute-0 sudo[323351]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.198 253542 DEBUG oslo_concurrency.processutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:43 compute-0 sudo[323377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:35:43 compute-0 sudo[323377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:43 compute-0 sudo[323377]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:43 compute-0 sudo[323422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:35:43 compute-0 sudo[323422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:43 compute-0 sudo[323422]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 218 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 9.3 MiB/s wr, 320 op/s
Nov 25 08:35:43 compute-0 sudo[323466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:35:43 compute-0 sudo[323466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:35:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/586325570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.510 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.512 253542 DEBUG nova.virt.libvirt.vif [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:35:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-82069614',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-82069614',id=68,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b96d13f13da43468269abb6dc6185d1',ramdisk_id='',reservation_id='r-0e395ga0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-1033089060',owner_user_name='tempest-InstanceActionsV221TestJSON-1033089060-project-member
'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:35:36Z,user_data=None,user_id='596f8d994ec145beb9244f5f01713555',uuid=9396c5ff-9457-400c-8916-ecd03eded0c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.512 253542 DEBUG nova.network.os_vif_util [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Converting VIF {"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.527 253542 DEBUG nova.network.os_vif_util [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:fa,bridge_name='br-int',has_traffic_filtering=True,id=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8,network=Network(77b0065f-12e9-4121-b463-93a7fd9a5ff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33ae6d28-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.529 253542 DEBUG nova.objects.instance [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9396c5ff-9457-400c-8916-ecd03eded0c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/557205148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/586325570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.546 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:35:43 compute-0 nova_compute[253538]:   <uuid>9396c5ff-9457-400c-8916-ecd03eded0c1</uuid>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   <name>instance-00000044</name>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <nova:name>tempest-InstanceActionsV221TestJSON-server-82069614</nova:name>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:35:42</nova:creationTime>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:35:43 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:35:43 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:35:43 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:35:43 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:35:43 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:35:43 compute-0 nova_compute[253538]:         <nova:user uuid="596f8d994ec145beb9244f5f01713555">tempest-InstanceActionsV221TestJSON-1033089060-project-member</nova:user>
Nov 25 08:35:43 compute-0 nova_compute[253538]:         <nova:project uuid="3b96d13f13da43468269abb6dc6185d1">tempest-InstanceActionsV221TestJSON-1033089060</nova:project>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:35:43 compute-0 nova_compute[253538]:         <nova:port uuid="33ae6d28-9d12-4e42-9874-7f5c7a27c9c8">
Nov 25 08:35:43 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <system>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <entry name="serial">9396c5ff-9457-400c-8916-ecd03eded0c1</entry>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <entry name="uuid">9396c5ff-9457-400c-8916-ecd03eded0c1</entry>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     </system>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   <os>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   </os>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   <features>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   </features>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/9396c5ff-9457-400c-8916-ecd03eded0c1_disk">
Nov 25 08:35:43 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       </source>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:35:43 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/9396c5ff-9457-400c-8916-ecd03eded0c1_disk.config">
Nov 25 08:35:43 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       </source>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:35:43 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:bb:e0:fa"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <target dev="tap33ae6d28-9d"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1/console.log" append="off"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <video>
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     </video>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:35:43 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:35:43 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:35:43 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:35:43 compute-0 nova_compute[253538]: </domain>
Nov 25 08:35:43 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.548 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Preparing to wait for external event network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.549 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.550 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.550 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.551 253542 DEBUG nova.virt.libvirt.vif [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:35:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-82069614',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-82069614',id=68,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b96d13f13da43468269abb6dc6185d1',ramdisk_id='',reservation_id='r-0e395ga0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-1033089060',owner_user_name='tempest-InstanceActionsV221TestJSON-1033089060-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:35:36Z,user_data=None,user_id='596f8d994ec145beb9244f5f01713555',uuid=9396c5ff-9457-400c-8916-ecd03eded0c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:35:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204648431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.552 253542 DEBUG nova.network.os_vif_util [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Converting VIF {"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.553 253542 DEBUG nova.network.os_vif_util [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:fa,bridge_name='br-int',has_traffic_filtering=True,id=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8,network=Network(77b0065f-12e9-4121-b463-93a7fd9a5ff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33ae6d28-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.554 253542 DEBUG os_vif [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:fa,bridge_name='br-int',has_traffic_filtering=True,id=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8,network=Network(77b0065f-12e9-4121-b463-93a7fd9a5ff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33ae6d28-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.555 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.556 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.556 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.561 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.562 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33ae6d28-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.563 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap33ae6d28-9d, col_values=(('external_ids', {'iface-id': '33ae6d28-9d12-4e42-9874-7f5c7a27c9c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:e0:fa', 'vm-uuid': '9396c5ff-9457-400c-8916-ecd03eded0c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:43 compute-0 NetworkManager[48915]: <info>  [1764059743.5662] manager: (tap33ae6d28-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.565 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.575 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.577 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.577 253542 INFO os_vif [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:fa,bridge_name='br-int',has_traffic_filtering=True,id=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8,network=Network(77b0065f-12e9-4121-b463-93a7fd9a5ff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33ae6d28-9d')
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.600 253542 DEBUG nova.storage.rbd_utils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] rbd image 4e9d3984-d789-45e1-83e3-8909597d3265_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.624 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.711 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.712 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.713 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] No VIF found with MAC fa:16:3e:bb:e0:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:35:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.713 253542 INFO nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Using config drive
Nov 25 08:35:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/371612900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.734 253542 DEBUG nova.storage.rbd_utils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] rbd image 9396c5ff-9457-400c-8916-ecd03eded0c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.739 253542 DEBUG oslo_concurrency.processutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.743 253542 DEBUG nova.compute.provider_tree [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.764 253542 DEBUG nova.scheduler.client.report [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.791 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.816 253542 INFO nova.scheduler.client.report [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Deleted allocations for instance cab0bbd2-96e3-43ed-970b-0b49c7581fef
Nov 25 08:35:43 compute-0 sudo[323466]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.878 253542 DEBUG oslo_concurrency.lockutils [None req-9db5aae1-9953-40d4-8e1a-8d4bdd6e5498 78e69376f7924a5695ba6f4672139f68 d2de800215da4cac8d9fd3a6a3cf4a55 - - default default] Lock "cab0bbd2-96e3-43ed-970b-0b49c7581fef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:35:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.886 253542 DEBUG nova.network.neutron [req-0e4fb5a9-20ab-4129-8a16-ded88e0a087b req-7a2d93d7-d203-4267-baea-4b01d2c267fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Updated VIF entry in instance network info cache for port 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.886 253542 DEBUG nova.network.neutron [req-0e4fb5a9-20ab-4129-8a16-ded88e0a087b req-7a2d93d7-d203-4267-baea-4b01d2c267fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Updating instance_info_cache with network_info: [{"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:35:43 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:35:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:35:43 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:35:43 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c97e4d49-54f7-46cc-8d0e-2441f3163053 does not exist
Nov 25 08:35:43 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 37b68f44-8f84-4fe8-9d47-b8e793bb0d76 does not exist
Nov 25 08:35:43 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 1c440712-b92e-4656-98ca-30b5deb6b155 does not exist
Nov 25 08:35:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:35:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:35:43 compute-0 nova_compute[253538]: 2025-11-25 08:35:43.901 253542 DEBUG oslo_concurrency.lockutils [req-0e4fb5a9-20ab-4129-8a16-ded88e0a087b req-7a2d93d7-d203-4267-baea-4b01d2c267fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9396c5ff-9457-400c-8916-ecd03eded0c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:35:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:35:43 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:35:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:35:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:35:43 compute-0 sudo[323606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:35:43 compute-0 sudo[323606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:43 compute-0 sudo[323606]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:44 compute-0 sudo[323631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:35:44 compute-0 sudo[323631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:44 compute-0 sudo[323631]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:35:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3327401896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:44 compute-0 sudo[323656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:35:44 compute-0 sudo[323656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:44 compute-0 sudo[323656]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.079 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.080 253542 DEBUG nova.virt.libvirt.vif [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:35:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1670811507',display_name='tempest-InstanceActionsTestJSON-server-1670811507',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1670811507',id=69,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a20ef4bed55a408c8933a4956b2dd3e4',ramdisk_id='',reservation_id='r-7noz6z35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-270987687',owner_user_name='tempest-InstanceActionsTestJSON-270987687-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:35:38Z,user_data=None,user_id='4de06a7985be4463b069db269e2882d4',uuid=4e9d3984-d789-45e1-83e3-8909597d3265,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.081 253542 DEBUG nova.network.os_vif_util [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converting VIF {"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.081 253542 DEBUG nova.network.os_vif_util [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.083 253542 DEBUG nova.objects.instance [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e9d3984-d789-45e1-83e3-8909597d3265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.095 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:35:44 compute-0 nova_compute[253538]:   <uuid>4e9d3984-d789-45e1-83e3-8909597d3265</uuid>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   <name>instance-00000045</name>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <nova:name>tempest-InstanceActionsTestJSON-server-1670811507</nova:name>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:35:43</nova:creationTime>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:35:44 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:35:44 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:35:44 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:35:44 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:35:44 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:35:44 compute-0 nova_compute[253538]:         <nova:user uuid="4de06a7985be4463b069db269e2882d4">tempest-InstanceActionsTestJSON-270987687-project-member</nova:user>
Nov 25 08:35:44 compute-0 nova_compute[253538]:         <nova:project uuid="a20ef4bed55a408c8933a4956b2dd3e4">tempest-InstanceActionsTestJSON-270987687</nova:project>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:35:44 compute-0 nova_compute[253538]:         <nova:port uuid="d553507f-4019-4ce0-b549-4d221b9089cd">
Nov 25 08:35:44 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <system>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <entry name="serial">4e9d3984-d789-45e1-83e3-8909597d3265</entry>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <entry name="uuid">4e9d3984-d789-45e1-83e3-8909597d3265</entry>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     </system>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   <os>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   </os>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   <features>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   </features>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4e9d3984-d789-45e1-83e3-8909597d3265_disk">
Nov 25 08:35:44 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       </source>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:35:44 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4e9d3984-d789-45e1-83e3-8909597d3265_disk.config">
Nov 25 08:35:44 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       </source>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:35:44 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:d8:47:0f"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <target dev="tapd553507f-40"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/console.log" append="off"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <video>
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     </video>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:35:44 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:35:44 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:35:44 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:35:44 compute-0 nova_compute[253538]: </domain>
Nov 25 08:35:44 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.097 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Preparing to wait for external event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.097 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.097 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.098 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.098 253542 DEBUG nova.virt.libvirt.vif [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:35:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1670811507',display_name='tempest-InstanceActionsTestJSON-server-1670811507',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1670811507',id=69,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a20ef4bed55a408c8933a4956b2dd3e4',ramdisk_id='',reservation_id='r-7noz6z35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-270987687',owner_user_name='tempest-InstanceActionsTestJSON-270987687-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:35:38Z,user_data=None,user_id='4de06a7985be4463b069db269e2882d4',uuid=4e9d3984-d789-45e1-83e3-8909597d3265,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.098 253542 DEBUG nova.network.os_vif_util [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converting VIF {"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.099 253542 DEBUG nova.network.os_vif_util [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.099 253542 DEBUG os_vif [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.100 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.100 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.100 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.103 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.104 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd553507f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.104 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd553507f-40, col_values=(('external_ids', {'iface-id': 'd553507f-4019-4ce0-b549-4d221b9089cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:47:0f', 'vm-uuid': '4e9d3984-d789-45e1-83e3-8909597d3265'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.105 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:44 compute-0 NetworkManager[48915]: <info>  [1764059744.1067] manager: (tapd553507f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.108 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.113 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.114 253542 INFO os_vif [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40')
Nov 25 08:35:44 compute-0 sudo[323683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:35:44 compute-0 sudo[323683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.168 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.168 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.169 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] No VIF found with MAC fa:16:3e:d8:47:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.169 253542 INFO nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Using config drive
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.191 253542 DEBUG nova.storage.rbd_utils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] rbd image 4e9d3984-d789-45e1-83e3-8909597d3265_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.198 253542 INFO nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Creating config drive at /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1/disk.config
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.203 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeemjyazy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.347 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeemjyazy" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.373 253542 DEBUG nova.storage.rbd_utils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] rbd image 9396c5ff-9457-400c-8916-ecd03eded0c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.376 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1/disk.config 9396c5ff-9457-400c-8916-ecd03eded0c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:44 compute-0 podman[323787]: 2025-11-25 08:35:44.438116994 +0000 UTC m=+0.042285167 container create 0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_edison, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 08:35:44 compute-0 systemd[1]: Started libpod-conmon-0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d.scope.
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.497 253542 DEBUG nova.network.neutron [req-ffe973bb-b0df-4a54-b49f-2474602e89ab req-9a2630d4-249b-42ee-8a59-7de5dc9b9479 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Updated VIF entry in instance network info cache for port d553507f-4019-4ce0-b549-4d221b9089cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.498 253542 DEBUG nova.network.neutron [req-ffe973bb-b0df-4a54-b49f-2474602e89ab req-9a2630d4-249b-42ee-8a59-7de5dc9b9479 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Updating instance_info_cache with network_info: [{"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.512 253542 DEBUG oslo_concurrency.lockutils [req-ffe973bb-b0df-4a54-b49f-2474602e89ab req-9a2630d4-249b-42ee-8a59-7de5dc9b9479 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:35:44 compute-0 podman[323787]: 2025-11-25 08:35:44.418000984 +0000 UTC m=+0.022169167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.524 253542 DEBUG oslo_concurrency.processutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1/disk.config 9396c5ff-9457-400c-8916-ecd03eded0c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.525 253542 INFO nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Deleting local config drive /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1/disk.config because it was imported into RBD.
Nov 25 08:35:44 compute-0 podman[323787]: 2025-11-25 08:35:44.531407061 +0000 UTC m=+0.135575254 container init 0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:35:44 compute-0 podman[323787]: 2025-11-25 08:35:44.537247798 +0000 UTC m=+0.141415981 container start 0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_edison, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:35:44 compute-0 ceph-mon[75015]: pgmap v1580: 321 pgs: 321 active+clean; 218 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 9.3 MiB/s wr, 320 op/s
Nov 25 08:35:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4204648431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/371612900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:44 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:35:44 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:35:44 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:35:44 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:35:44 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:35:44 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:35:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3327401896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:44 compute-0 podman[323787]: 2025-11-25 08:35:44.540626679 +0000 UTC m=+0.144794902 container attach 0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_edison, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:35:44 compute-0 systemd[1]: libpod-0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d.scope: Deactivated successfully.
Nov 25 08:35:44 compute-0 modest_edison[323820]: 167 167
Nov 25 08:35:44 compute-0 conmon[323820]: conmon 0a2f9a1f75369844990c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d.scope/container/memory.events
Nov 25 08:35:44 compute-0 podman[323787]: 2025-11-25 08:35:44.544284167 +0000 UTC m=+0.148452360 container died 0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_edison, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 25 08:35:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f1ce2f8c4119389a9f2ccc5b4f30564935916e4bc888b0271bc8f6281ca1f03-merged.mount: Deactivated successfully.
Nov 25 08:35:44 compute-0 NetworkManager[48915]: <info>  [1764059744.5812] manager: (tap33ae6d28-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/289)
Nov 25 08:35:44 compute-0 kernel: tap33ae6d28-9d: entered promiscuous mode
Nov 25 08:35:44 compute-0 podman[323787]: 2025-11-25 08:35:44.5871657 +0000 UTC m=+0.191333893 container remove 0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:44 compute-0 ovn_controller[152859]: 2025-11-25T08:35:44Z|00654|binding|INFO|Claiming lport 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 for this chassis.
Nov 25 08:35:44 compute-0 ovn_controller[152859]: 2025-11-25T08:35:44Z|00655|binding|INFO|33ae6d28-9d12-4e42-9874-7f5c7a27c9c8: Claiming fa:16:3e:bb:e0:fa 10.100.0.12
Nov 25 08:35:44 compute-0 systemd[1]: libpod-conmon-0a2f9a1f75369844990c86af5292d08665e280d72186e37dc4c970206e59214d.scope: Deactivated successfully.
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.601 253542 INFO nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Creating config drive at /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/disk.config
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.606 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl6v_0lha execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.608 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:e0:fa 10.100.0.12'], port_security=['fa:16:3e:bb:e0:fa 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9396c5ff-9457-400c-8916-ecd03eded0c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77b0065f-12e9-4121-b463-93a7fd9a5ff0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b96d13f13da43468269abb6dc6185d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '086c6d0d-fe38-46de-a484-0a651367668f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92cc438c-7163-401b-8f9b-e1ec1d29a1db, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.609 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 in datapath 77b0065f-12e9-4121-b463-93a7fd9a5ff0 bound to our chassis
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.610 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77b0065f-12e9-4121-b463-93a7fd9a5ff0
Nov 25 08:35:44 compute-0 systemd-udevd[323851]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.623 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d93afc-459a-4319-910a-63cb47539277]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.624 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77b0065f-11 in ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.626 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77b0065f-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.626 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f344dd3b-3423-4a7b-a3d4-9888cb1999f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.627 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ecfbfaf2-f86e-4ad9-a994-73e434eb5f9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:44 compute-0 systemd-machined[215790]: New machine qemu-80-instance-00000044.
Nov 25 08:35:44 compute-0 NetworkManager[48915]: <info>  [1764059744.6327] device (tap33ae6d28-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:35:44 compute-0 NetworkManager[48915]: <info>  [1764059744.6339] device (tap33ae6d28-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.639 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[397a8170-d4c2-4b37-adba-083a5b368c35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:44 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-00000044.
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.663 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[28567bf5-ee02-48b9-a34f-d98356553680]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.688 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:44 compute-0 ovn_controller[152859]: 2025-11-25T08:35:44Z|00656|binding|INFO|Setting lport 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 ovn-installed in OVS
Nov 25 08:35:44 compute-0 ovn_controller[152859]: 2025-11-25T08:35:44Z|00657|binding|INFO|Setting lport 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 up in Southbound
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.693 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bfeaecdf-6fda-4da2-8276-293a287ba311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.698 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[359b5409-f26a-4197-92a2-93bb9596cc4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:44 compute-0 NetworkManager[48915]: <info>  [1764059744.6994] manager: (tap77b0065f-10): new Veth device (/org/freedesktop/NetworkManager/Devices/290)
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.729 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[75efa693-72cf-49db-a90d-6144ab93fb99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.732 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a836d88b-7e60-4bf2-aff1-6e78a280f9c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:44 compute-0 NetworkManager[48915]: <info>  [1764059744.7523] device (tap77b0065f-10): carrier: link connected
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.755 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl6v_0lha" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.757 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2f38a843-f857-4e80-b374-2a15ca1f24ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:44 compute-0 podman[323878]: 2025-11-25 08:35:44.761397532 +0000 UTC m=+0.039872102 container create 2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.776 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[edf737d7-0d63-448e-8c0d-5ba2a895a222]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77b0065f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:0a:ff'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 198], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509935, 'reachable_time': 24057, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323910, 'error': None, 'target': 'ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.786 253542 DEBUG nova.storage.rbd_utils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] rbd image 4e9d3984-d789-45e1-83e3-8909597d3265_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.793 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/disk.config 4e9d3984-d789-45e1-83e3-8909597d3265_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.796 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b26553d5-ce85-4775-9150-e9b97c14bb97]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:aff'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 509935, 'tstamp': 509935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323927, 'error': None, 'target': 'ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:44 compute-0 systemd[1]: Started libpod-conmon-2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df.scope.
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.814 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5c16c373-5206-454a-abec-6854ca44a100]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77b0065f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:0a:ff'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 198], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509935, 'reachable_time': 24057, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323934, 'error': None, 'target': 'ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4f82949cc3592642d3e401181e19ab18fee3a0fa68cbeae83abc8f2bdc59f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4f82949cc3592642d3e401181e19ab18fee3a0fa68cbeae83abc8f2bdc59f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4f82949cc3592642d3e401181e19ab18fee3a0fa68cbeae83abc8f2bdc59f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4f82949cc3592642d3e401181e19ab18fee3a0fa68cbeae83abc8f2bdc59f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:44 compute-0 podman[323878]: 2025-11-25 08:35:44.746142653 +0000 UTC m=+0.024617233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4f82949cc3592642d3e401181e19ab18fee3a0fa68cbeae83abc8f2bdc59f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.846 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3b24c2b1-519f-480b-a1d5-da8d22a6e6f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:44 compute-0 podman[323878]: 2025-11-25 08:35:44.855000848 +0000 UTC m=+0.133475448 container init 2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 08:35:44 compute-0 podman[323878]: 2025-11-25 08:35:44.86323506 +0000 UTC m=+0.141709630 container start 2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 25 08:35:44 compute-0 podman[323878]: 2025-11-25 08:35:44.867634017 +0000 UTC m=+0.146108617 container attach 2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.908 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00b24520-3105-41df-b297-b0c3f0780dc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.910 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77b0065f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.910 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.911 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77b0065f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:44 compute-0 NetworkManager[48915]: <info>  [1764059744.9134] manager: (tap77b0065f-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Nov 25 08:35:44 compute-0 kernel: tap77b0065f-10: entered promiscuous mode
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.922 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77b0065f-10, col_values=(('external_ids', {'iface-id': '9fb4a874-a628-417d-9e88-cf4274450252'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:44 compute-0 ovn_controller[152859]: 2025-11-25T08:35:44Z|00658|binding|INFO|Releasing lport 9fb4a874-a628-417d-9e88-cf4274450252 from this chassis (sb_readonly=0)
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.945 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.949 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.950 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77b0065f-12e9-4121-b463-93a7fd9a5ff0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77b0065f-12e9-4121-b463-93a7fd9a5ff0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.950 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[612c1b10-cb22-419a-a5a0-71f48e206cdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.951 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-77b0065f-12e9-4121-b463-93a7fd9a5ff0
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/77b0065f-12e9-4121-b463-93a7fd9a5ff0.pid.haproxy
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 77b0065f-12e9-4121-b463-93a7fd9a5ff0
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:35:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:44.952 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0', 'env', 'PROCESS_TAG=haproxy-77b0065f-12e9-4121-b463-93a7fd9a5ff0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77b0065f-12e9-4121-b463-93a7fd9a5ff0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.957 253542 DEBUG oslo_concurrency.processutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/disk.config 4e9d3984-d789-45e1-83e3-8909597d3265_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:44 compute-0 nova_compute[253538]: 2025-11-25 08:35:44.958 253542 INFO nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Deleting local config drive /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/disk.config because it was imported into RBD.
Nov 25 08:35:45 compute-0 NetworkManager[48915]: <info>  [1764059745.0039] manager: (tapd553507f-40): new Tun device (/org/freedesktop/NetworkManager/Devices/292)
Nov 25 08:35:45 compute-0 systemd-udevd[323891]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:35:45 compute-0 kernel: tapd553507f-40: entered promiscuous mode
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.007 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:45 compute-0 ovn_controller[152859]: 2025-11-25T08:35:45Z|00659|binding|INFO|Claiming lport d553507f-4019-4ce0-b549-4d221b9089cd for this chassis.
Nov 25 08:35:45 compute-0 ovn_controller[152859]: 2025-11-25T08:35:45Z|00660|binding|INFO|d553507f-4019-4ce0-b549-4d221b9089cd: Claiming fa:16:3e:d8:47:0f 10.100.0.9
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.011 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.014 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:45 compute-0 NetworkManager[48915]: <info>  [1764059745.0187] device (tapd553507f-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:35:45 compute-0 NetworkManager[48915]: <info>  [1764059745.0196] device (tapd553507f-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.028 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:47:0f 10.100.0.9'], port_security=['fa:16:3e:d8:47:0f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4e9d3984-d789-45e1-83e3-8909597d3265', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66249d1f-478b-4b2b-a784-933c0556752e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a20ef4bed55a408c8933a4956b2dd3e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '99b8ce64-92d7-4b35-99db-f9518100b84f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5891e4e5-b944-481c-a074-f90df3e14ac2, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d553507f-4019-4ce0-b549-4d221b9089cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:45 compute-0 systemd-machined[215790]: New machine qemu-81-instance-00000045.
Nov 25 08:35:45 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-00000045.
Nov 25 08:35:45 compute-0 ovn_controller[152859]: 2025-11-25T08:35:45Z|00661|binding|INFO|Setting lport d553507f-4019-4ce0-b549-4d221b9089cd ovn-installed in OVS
Nov 25 08:35:45 compute-0 ovn_controller[152859]: 2025-11-25T08:35:45Z|00662|binding|INFO|Setting lport d553507f-4019-4ce0-b549-4d221b9089cd up in Southbound
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.100 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.146 253542 DEBUG nova.compute.manager [req-ab56c401-c095-491e-ab31-719a47bb04a7 req-47ceb8d0-1ea1-4601-957e-956c8769967d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received event network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.147 253542 DEBUG oslo_concurrency.lockutils [req-ab56c401-c095-491e-ab31-719a47bb04a7 req-47ceb8d0-1ea1-4601-957e-956c8769967d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.147 253542 DEBUG oslo_concurrency.lockutils [req-ab56c401-c095-491e-ab31-719a47bb04a7 req-47ceb8d0-1ea1-4601-957e-956c8769967d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.148 253542 DEBUG oslo_concurrency.lockutils [req-ab56c401-c095-491e-ab31-719a47bb04a7 req-47ceb8d0-1ea1-4601-957e-956c8769967d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.148 253542 DEBUG nova.compute.manager [req-ab56c401-c095-491e-ab31-719a47bb04a7 req-47ceb8d0-1ea1-4601-957e-956c8769967d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Processing event network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.174 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.174 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059745.1735632, 9396c5ff-9457-400c-8916-ecd03eded0c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.175 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] VM Started (Lifecycle Event)
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.177 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.180 253542 INFO nova.virt.libvirt.driver [-] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Instance spawned successfully.
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.180 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.195 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.206 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.210 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.211 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.211 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.212 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.212 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.213 253542 DEBUG nova.virt.libvirt.driver [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.241 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.242 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059745.1745691, 9396c5ff-9457-400c-8916-ecd03eded0c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.242 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] VM Paused (Lifecycle Event)
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.260 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.263 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059745.1773849, 9396c5ff-9457-400c-8916-ecd03eded0c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.263 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] VM Resumed (Lifecycle Event)
Nov 25 08:35:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 199 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 7.4 MiB/s wr, 201 op/s
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.296 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.300 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.344 253542 INFO nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Took 8.34 seconds to spawn the instance on the hypervisor.
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.344 253542 DEBUG nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:45 compute-0 podman[324087]: 2025-11-25 08:35:45.357569815 +0000 UTC m=+0.052096151 container create 9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.379 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:35:45 compute-0 systemd[1]: Started libpod-conmon-9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6.scope.
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.400 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059745.3994076, 4e9d3984-d789-45e1-83e3-8909597d3265 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.400 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] VM Started (Lifecycle Event)
Nov 25 08:35:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:35:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49574312e8f4af9bf3a8c8febb2214b41ab32d1992c105e91eb0c713b6ff73e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.427 253542 INFO nova.compute.manager [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Took 9.45 seconds to build instance.
Nov 25 08:35:45 compute-0 podman[324087]: 2025-11-25 08:35:45.332962903 +0000 UTC m=+0.027489249 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.431 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:45 compute-0 podman[324087]: 2025-11-25 08:35:45.433778573 +0000 UTC m=+0.128304899 container init 9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.435 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059745.40055, 4e9d3984-d789-45e1-83e3-8909597d3265 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.436 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] VM Paused (Lifecycle Event)
Nov 25 08:35:45 compute-0 podman[324087]: 2025-11-25 08:35:45.440260838 +0000 UTC m=+0.134787164 container start 9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:35:45 compute-0 podman[324106]: 2025-11-25 08:35:45.445375835 +0000 UTC m=+0.054998629 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.454 253542 DEBUG oslo_concurrency.lockutils [None req-fcd9fa15-e55d-4b89-a908-feb02df038df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:45 compute-0 neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0[324112]: [NOTICE]   (324133) : New worker (324135) forked
Nov 25 08:35:45 compute-0 neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0[324112]: [NOTICE]   (324133) : Loading success.
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.470 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.472 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.493 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.525 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d553507f-4019-4ce0-b549-4d221b9089cd in datapath 66249d1f-478b-4b2b-a784-933c0556752e unbound from our chassis
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.527 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 66249d1f-478b-4b2b-a784-933c0556752e
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.536 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[34f1439c-c6b0-43ac-a391-b071658c0316]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.537 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap66249d1f-41 in ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.539 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap66249d1f-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.539 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[65d85625-b87c-4e0f-bf7f-cf06a807345c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.540 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6f113b3c-0c6c-429a-815d-380c6f573d12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.553 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[aa255a23-1708-43b7-b587-6e0ab2fa843d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.580 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cb39e3f0-6d7d-46e3-9800-4b32ab493cdc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.620 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ae55d5c7-4f0b-441a-927c-e084ce19c701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.626 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[af6390b7-f465-4470-9417-3132034cade4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:45 compute-0 NetworkManager[48915]: <info>  [1764059745.6277] manager: (tap66249d1f-40): new Veth device (/org/freedesktop/NetworkManager/Devices/293)
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.661 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2fdfa6-9346-4de5-9c15-397f8d018685]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.668 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[82d87c45-c98d-4662-9997-0fc9777a89c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:45 compute-0 NetworkManager[48915]: <info>  [1764059745.6952] device (tap66249d1f-40): carrier: link connected
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.703 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[20eaed6a-4466-4ac5-9ae6-f48cd136cc89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.726 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d0fcaa06-2b42-4f9f-99ff-0c27b156b68e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66249d1f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:29:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510029, 'reachable_time': 39756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324162, 'error': None, 'target': 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.747 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[93aa8b7c-4133-4c9d-b810-107401657fad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:29ed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510029, 'tstamp': 510029}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324163, 'error': None, 'target': 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.768 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8144f26d-ef07-4220-8cd9-b627bd5a5afe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66249d1f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:29:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510029, 'reachable_time': 39756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324165, 'error': None, 'target': 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.792 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.799 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8418c390-5a66-41a1-810c-3ad37a5a7e61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.860 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f44377-01e6-459b-85d3-407f894f8afd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.862 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66249d1f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.862 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.862 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66249d1f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:45 compute-0 kernel: tap66249d1f-40: entered promiscuous mode
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.864 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:45 compute-0 NetworkManager[48915]: <info>  [1764059745.8649] manager: (tap66249d1f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.867 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap66249d1f-40, col_values=(('external_ids', {'iface-id': '57f8eb8e-0895-4599-b15e-b1a08378dfc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:45 compute-0 ovn_controller[152859]: 2025-11-25T08:35:45Z|00663|binding|INFO|Releasing lport 57f8eb8e-0895-4599-b15e-b1a08378dfc1 from this chassis (sb_readonly=0)
Nov 25 08:35:45 compute-0 nova_compute[253538]: 2025-11-25 08:35:45.892 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.893 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/66249d1f-478b-4b2b-a784-933c0556752e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/66249d1f-478b-4b2b-a784-933c0556752e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.894 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[164cb1cb-90d4-4d6c-92ea-9e764d339417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.895 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-66249d1f-478b-4b2b-a784-933c0556752e
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/66249d1f-478b-4b2b-a784-933c0556752e.pid.haproxy
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 66249d1f-478b-4b2b-a784-933c0556752e
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:35:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:45.896 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'env', 'PROCESS_TAG=haproxy-66249d1f-478b-4b2b-a784-933c0556752e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/66249d1f-478b-4b2b-a784-933c0556752e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:35:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:35:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Nov 25 08:35:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Nov 25 08:35:45 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Nov 25 08:35:45 compute-0 affectionate_kepler[323936]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:35:45 compute-0 affectionate_kepler[323936]: --> relative data size: 1.0
Nov 25 08:35:45 compute-0 affectionate_kepler[323936]: --> All data devices are unavailable
Nov 25 08:35:46 compute-0 systemd[1]: libpod-2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df.scope: Deactivated successfully.
Nov 25 08:35:46 compute-0 systemd[1]: libpod-2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df.scope: Consumed 1.070s CPU time.
Nov 25 08:35:46 compute-0 podman[323878]: 2025-11-25 08:35:46.041508116 +0000 UTC m=+1.319982686 container died 2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kepler, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 08:35:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f4f82949cc3592642d3e401181e19ab18fee3a0fa68cbeae83abc8f2bdc59f0-merged.mount: Deactivated successfully.
Nov 25 08:35:46 compute-0 podman[323878]: 2025-11-25 08:35:46.096933656 +0000 UTC m=+1.375408226 container remove 2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:35:46 compute-0 systemd[1]: libpod-conmon-2034211cc0a1cf7ee0163ae64ce42c5b087fccefc3e15669487030bf4094b1df.scope: Deactivated successfully.
Nov 25 08:35:46 compute-0 sudo[323683]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:46 compute-0 sudo[324202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:35:46 compute-0 sudo[324202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:46 compute-0 sudo[324202]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:46 compute-0 sudo[324242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:35:46 compute-0 sudo[324242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:46 compute-0 sudo[324242]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:46 compute-0 podman[324264]: 2025-11-25 08:35:46.291543197 +0000 UTC m=+0.060021505 container create 382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 08:35:46 compute-0 sudo[324283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:35:46 compute-0 sudo[324283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:46 compute-0 sudo[324283]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:46 compute-0 systemd[1]: Started libpod-conmon-382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec.scope.
Nov 25 08:35:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:35:46 compute-0 podman[324264]: 2025-11-25 08:35:46.261031497 +0000 UTC m=+0.029509825 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcacdbab04bdd2d6eb0a58f860b5d214abfab69870a1def2a93eb666e475b92f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:46 compute-0 podman[324264]: 2025-11-25 08:35:46.370984292 +0000 UTC m=+0.139462610 container init 382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 08:35:46 compute-0 sudo[324312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:35:46 compute-0 sudo[324312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:46 compute-0 podman[324264]: 2025-11-25 08:35:46.380430185 +0000 UTC m=+0.148908493 container start 382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:35:46 compute-0 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[324315]: [NOTICE]   (324341) : New worker (324343) forked
Nov 25 08:35:46 compute-0 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[324315]: [NOTICE]   (324341) : Loading success.
Nov 25 08:35:46 compute-0 ceph-mon[75015]: pgmap v1581: 321 pgs: 321 active+clean; 199 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 7.4 MiB/s wr, 201 op/s
Nov 25 08:35:46 compute-0 ceph-mon[75015]: osdmap e186: 3 total, 3 up, 3 in
Nov 25 08:35:46 compute-0 podman[324394]: 2025-11-25 08:35:46.738413807 +0000 UTC m=+0.045698189 container create 9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:35:46 compute-0 systemd[1]: Started libpod-conmon-9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954.scope.
Nov 25 08:35:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:35:46 compute-0 podman[324394]: 2025-11-25 08:35:46.713015114 +0000 UTC m=+0.020299536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:35:46 compute-0 podman[324394]: 2025-11-25 08:35:46.811058339 +0000 UTC m=+0.118342821 container init 9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:35:46 compute-0 podman[324394]: 2025-11-25 08:35:46.819220888 +0000 UTC m=+0.126505280 container start 9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:35:46 compute-0 podman[324394]: 2025-11-25 08:35:46.82189312 +0000 UTC m=+0.129177512 container attach 9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:35:46 compute-0 dreamy_tharp[324410]: 167 167
Nov 25 08:35:46 compute-0 systemd[1]: libpod-9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954.scope: Deactivated successfully.
Nov 25 08:35:46 compute-0 podman[324394]: 2025-11-25 08:35:46.825827066 +0000 UTC m=+0.133111448 container died 9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 08:35:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d75aed93030466703e9360beb8f7435f6f5a5304f6c269e3930f053d390a1e40-merged.mount: Deactivated successfully.
Nov 25 08:35:46 compute-0 podman[324394]: 2025-11-25 08:35:46.860873858 +0000 UTC m=+0.168158250 container remove 9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 08:35:46 compute-0 systemd[1]: libpod-conmon-9d37edf9b09a376f994979597ce7d5cbc80b5a3d54ab0b62398051ec0e887954.scope: Deactivated successfully.
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.081 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059732.08056, 0feca801-4630-4450-b915-616d8496ab51 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.082 253542 INFO nova.compute.manager [-] [instance: 0feca801-4630-4450-b915-616d8496ab51] VM Stopped (Lifecycle Event)
Nov 25 08:35:47 compute-0 podman[324433]: 2025-11-25 08:35:47.081203609 +0000 UTC m=+0.052388358 container create 3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.103 253542 DEBUG nova.compute.manager [None req-a99a98c5-bb74-4a7a-a84d-c3785864261e - - - - - -] [instance: 0feca801-4630-4450-b915-616d8496ab51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:47 compute-0 systemd[1]: Started libpod-conmon-3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85.scope.
Nov 25 08:35:47 compute-0 podman[324433]: 2025-11-25 08:35:47.061562191 +0000 UTC m=+0.032746960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:35:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:35:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821e11d79e5afa54e6aac158217c9d3623ae4c5f773936e81b24162aba5586d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821e11d79e5afa54e6aac158217c9d3623ae4c5f773936e81b24162aba5586d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821e11d79e5afa54e6aac158217c9d3623ae4c5f773936e81b24162aba5586d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821e11d79e5afa54e6aac158217c9d3623ae4c5f773936e81b24162aba5586d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:47 compute-0 podman[324433]: 2025-11-25 08:35:47.197266399 +0000 UTC m=+0.168451228 container init 3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 08:35:47 compute-0 podman[324433]: 2025-11-25 08:35:47.20807502 +0000 UTC m=+0.179259769 container start 3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 08:35:47 compute-0 podman[324433]: 2025-11-25 08:35:47.220187665 +0000 UTC m=+0.191372494 container attach 3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 08:35:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 180 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.4 MiB/s wr, 168 op/s
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.334 253542 DEBUG nova.compute.manager [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received event network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.335 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.335 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.335 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.335 253542 DEBUG nova.compute.manager [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] No waiting events found dispatching network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.335 253542 WARNING nova.compute.manager [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received unexpected event network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 for instance with vm_state active and task_state None.
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.335 253542 DEBUG nova.compute.manager [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.336 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.336 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.336 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.336 253542 DEBUG nova.compute.manager [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Processing event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.336 253542 DEBUG nova.compute.manager [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.336 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.337 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.337 253542 DEBUG oslo_concurrency.lockutils [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.337 253542 DEBUG nova.compute.manager [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] No waiting events found dispatching network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.337 253542 WARNING nova.compute.manager [req-0c600357-d53b-4950-8df2-a3ea5cc3d702 req-98012491-6ed3-4d73-900a-3af66d591f14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received unexpected event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd for instance with vm_state building and task_state spawning.
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.339 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.346 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059747.3459518, 4e9d3984-d789-45e1-83e3-8909597d3265 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.347 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] VM Resumed (Lifecycle Event)
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.361 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.366 253542 INFO nova.virt.libvirt.driver [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Instance spawned successfully.
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.367 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.395 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.396 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.397 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.399 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.400 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.402 253542 DEBUG nova.virt.libvirt.driver [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.415 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.423 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.450 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.481 253542 INFO nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Took 8.98 seconds to spawn the instance on the hypervisor.
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.482 253542 DEBUG nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.565 253542 INFO nova.compute.manager [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Took 10.03 seconds to build instance.
Nov 25 08:35:47 compute-0 nova_compute[253538]: 2025-11-25 08:35:47.594 253542 DEBUG oslo_concurrency.lockutils [None req-11d37a5c-d5cb-48ae-a62e-f9b39d45d4b5 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]: {
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:     "0": [
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:         {
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "devices": [
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "/dev/loop3"
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             ],
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "lv_name": "ceph_lv0",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "lv_size": "21470642176",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "name": "ceph_lv0",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "tags": {
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.cluster_name": "ceph",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.crush_device_class": "",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.encrypted": "0",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.osd_id": "0",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.type": "block",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.vdo": "0"
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             },
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "type": "block",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "vg_name": "ceph_vg0"
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:         }
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:     ],
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:     "1": [
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:         {
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "devices": [
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "/dev/loop4"
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             ],
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "lv_name": "ceph_lv1",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "lv_size": "21470642176",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "name": "ceph_lv1",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "tags": {
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.cluster_name": "ceph",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.crush_device_class": "",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.encrypted": "0",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.osd_id": "1",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.type": "block",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.vdo": "0"
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             },
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "type": "block",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "vg_name": "ceph_vg1"
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:         }
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:     ],
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:     "2": [
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:         {
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "devices": [
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "/dev/loop5"
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             ],
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "lv_name": "ceph_lv2",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "lv_size": "21470642176",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "name": "ceph_lv2",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "tags": {
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.cluster_name": "ceph",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.crush_device_class": "",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.encrypted": "0",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.osd_id": "2",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.type": "block",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:                 "ceph.vdo": "0"
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             },
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "type": "block",
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:             "vg_name": "ceph_vg2"
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:         }
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]:     ]
Nov 25 08:35:48 compute-0 strange_ishizaka[324449]: }
Nov 25 08:35:48 compute-0 systemd[1]: libpod-3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85.scope: Deactivated successfully.
Nov 25 08:35:48 compute-0 podman[324433]: 2025-11-25 08:35:48.170297 +0000 UTC m=+1.141481749 container died 3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.252 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "9396c5ff-9457-400c-8916-ecd03eded0c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.252 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.252 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.253 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.253 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.254 253542 INFO nova.compute.manager [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Terminating instance
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.255 253542 DEBUG nova.compute.manager [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:35:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-821e11d79e5afa54e6aac158217c9d3623ae4c5f773936e81b24162aba5586d7-merged.mount: Deactivated successfully.
Nov 25 08:35:48 compute-0 ceph-mon[75015]: pgmap v1583: 321 pgs: 321 active+clean; 180 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.4 MiB/s wr, 168 op/s
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.729 253542 DEBUG oslo_concurrency.lockutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.729 253542 DEBUG oslo_concurrency.lockutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.730 253542 INFO nova.compute.manager [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Rebooting instance
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.748 253542 DEBUG oslo_concurrency.lockutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.748 253542 DEBUG oslo_concurrency.lockutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquired lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.749 253542 DEBUG nova.network.neutron [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:35:48 compute-0 kernel: tap33ae6d28-9d (unregistering): left promiscuous mode
Nov 25 08:35:48 compute-0 NetworkManager[48915]: <info>  [1764059748.7560] device (tap33ae6d28-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:35:48 compute-0 ovn_controller[152859]: 2025-11-25T08:35:48Z|00664|binding|INFO|Releasing lport 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 from this chassis (sb_readonly=0)
Nov 25 08:35:48 compute-0 ovn_controller[152859]: 2025-11-25T08:35:48Z|00665|binding|INFO|Setting lport 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 down in Southbound
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.768 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:48 compute-0 ovn_controller[152859]: 2025-11-25T08:35:48Z|00666|binding|INFO|Removing iface tap33ae6d28-9d ovn-installed in OVS
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.771 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.788 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:48.799 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:e0:fa 10.100.0.12'], port_security=['fa:16:3e:bb:e0:fa 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9396c5ff-9457-400c-8916-ecd03eded0c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77b0065f-12e9-4121-b463-93a7fd9a5ff0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b96d13f13da43468269abb6dc6185d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '086c6d0d-fe38-46de-a484-0a651367668f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92cc438c-7163-401b-8f9b-e1ec1d29a1db, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:48.800 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 in datapath 77b0065f-12e9-4121-b463-93a7fd9a5ff0 unbound from our chassis
Nov 25 08:35:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:48.801 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77b0065f-12e9-4121-b463-93a7fd9a5ff0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:35:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:48.816 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fb565bd2-4d01-4396-81f7-2a291e9c9280]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:48.817 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0 namespace which is not needed anymore
Nov 25 08:35:48 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000044.scope: Deactivated successfully.
Nov 25 08:35:48 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000044.scope: Consumed 3.615s CPU time.
Nov 25 08:35:48 compute-0 systemd-machined[215790]: Machine qemu-80-instance-00000044 terminated.
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.962 253542 INFO nova.virt.libvirt.driver [-] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Instance destroyed successfully.
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.963 253542 DEBUG nova.objects.instance [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lazy-loading 'resources' on Instance uuid 9396c5ff-9457-400c-8916-ecd03eded0c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.976 253542 DEBUG nova.virt.libvirt.vif [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:35:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-82069614',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-82069614',id=68,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:35:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b96d13f13da43468269abb6dc6185d1',ramdisk_id='',reservation_id='r-0e395ga0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsV221TestJSON-1033089060',owner_user_name='tempest-InstanceActionsV221TestJSON-1033089060-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:45Z,user_data=None,user_id='596f8d994ec145beb9244f5f01713555',uuid=9396c5ff-9457-400c-8916-ecd03eded0c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.977 253542 DEBUG nova.network.os_vif_util [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Converting VIF {"id": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "address": "fa:16:3e:bb:e0:fa", "network": {"id": "77b0065f-12e9-4121-b463-93a7fd9a5ff0", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-579014610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b96d13f13da43468269abb6dc6185d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33ae6d28-9d", "ovs_interfaceid": "33ae6d28-9d12-4e42-9874-7f5c7a27c9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.977 253542 DEBUG nova.network.os_vif_util [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:fa,bridge_name='br-int',has_traffic_filtering=True,id=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8,network=Network(77b0065f-12e9-4121-b463-93a7fd9a5ff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33ae6d28-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.978 253542 DEBUG os_vif [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:fa,bridge_name='br-int',has_traffic_filtering=True,id=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8,network=Network(77b0065f-12e9-4121-b463-93a7fd9a5ff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33ae6d28-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.980 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.980 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33ae6d28-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.983 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.983 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:48 compute-0 nova_compute[253538]: 2025-11-25 08:35:48.985 253542 INFO os_vif [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:fa,bridge_name='br-int',has_traffic_filtering=True,id=33ae6d28-9d12-4e42-9874-7f5c7a27c9c8,network=Network(77b0065f-12e9-4121-b463-93a7fd9a5ff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33ae6d28-9d')
Nov 25 08:35:49 compute-0 podman[324433]: 2025-11-25 08:35:49.034807335 +0000 UTC m=+2.005992084 container remove 3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 08:35:49 compute-0 sudo[324312]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:49 compute-0 podman[324500]: 2025-11-25 08:35:49.107984901 +0000 UTC m=+0.149187270 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:35:49 compute-0 sudo[324538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:35:49 compute-0 sudo[324538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:49 compute-0 sudo[324538]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:49 compute-0 neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0[324112]: [NOTICE]   (324133) : haproxy version is 2.8.14-c23fe91
Nov 25 08:35:49 compute-0 neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0[324112]: [NOTICE]   (324133) : path to executable is /usr/sbin/haproxy
Nov 25 08:35:49 compute-0 neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0[324112]: [ALERT]    (324133) : Current worker (324135) exited with code 143 (Terminated)
Nov 25 08:35:49 compute-0 neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0[324112]: [WARNING]  (324133) : All workers exited. Exiting... (0)
Nov 25 08:35:49 compute-0 systemd[1]: libpod-9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6.scope: Deactivated successfully.
Nov 25 08:35:49 compute-0 podman[324550]: 2025-11-25 08:35:49.159805474 +0000 UTC m=+0.053336285 container died 9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 08:35:49 compute-0 sudo[324586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:35:49 compute-0 sudo[324586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:49 compute-0 sudo[324586]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6-userdata-shm.mount: Deactivated successfully.
Nov 25 08:35:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-49574312e8f4af9bf3a8c8febb2214b41ab32d1992c105e91eb0c713b6ff73e1-merged.mount: Deactivated successfully.
Nov 25 08:35:49 compute-0 sudo[324623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:35:49 compute-0 systemd[1]: libpod-conmon-3590f92a42b13a64075226fa82e13f879043952a665f7d4894341b120d80fd85.scope: Deactivated successfully.
Nov 25 08:35:49 compute-0 sudo[324623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:49 compute-0 sudo[324623]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:49 compute-0 podman[324550]: 2025-11-25 08:35:49.263146701 +0000 UTC m=+0.156677532 container cleanup 9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 08:35:49 compute-0 systemd[1]: libpod-conmon-9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6.scope: Deactivated successfully.
Nov 25 08:35:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 180 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.8 MiB/s wr, 172 op/s
Nov 25 08:35:49 compute-0 sudo[324648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:35:49 compute-0 sudo[324648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:49 compute-0 podman[324662]: 2025-11-25 08:35:49.335816384 +0000 UTC m=+0.041668301 container remove 9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:35:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.341 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c63a2e59-3ea0-41d0-94c9-c97e3a8175da]: (4, ('Tue Nov 25 08:35:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0 (9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6)\n9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6\nTue Nov 25 08:35:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0 (9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6)\n9a907ba470efda663bb059281b20d70f9de26488b0442ace9ecf1bf7f12590e6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.342 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[584f4014-2ab7-42fc-80e3-1500badeb16b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.345 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77b0065f-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:49 compute-0 kernel: tap77b0065f-10: left promiscuous mode
Nov 25 08:35:49 compute-0 nova_compute[253538]: 2025-11-25 08:35:49.347 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:49 compute-0 nova_compute[253538]: 2025-11-25 08:35:49.364 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.366 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3752506a-c563-48a5-b997-44d202aaa7de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.388 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[07b128ce-4e97-42b9-895d-9d37b021a9dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.390 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[45d8882f-93f2-452a-bc6b-0cccc98458d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.407 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8a485ca8-c812-4b10-8b78-a8479692f665]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509928, 'reachable_time': 36644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324689, 'error': None, 'target': 'ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d77b0065f\x2d12e9\x2d4121\x2db463\x2d93a7fd9a5ff0.mount: Deactivated successfully.
Nov 25 08:35:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.413 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77b0065f-12e9-4121-b463-93a7fd9a5ff0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:35:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:49.413 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[8706cead-7c74-4b45-aff4-72487fdd5d8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:49 compute-0 nova_compute[253538]: 2025-11-25 08:35:49.444 253542 DEBUG nova.compute.manager [req-41ac899d-c8b4-429f-8fab-0b605ac53bbc req-74f5da56-745f-4dbf-8564-3b748e3f9c3e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received event network-vif-unplugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:49 compute-0 nova_compute[253538]: 2025-11-25 08:35:49.444 253542 DEBUG oslo_concurrency.lockutils [req-41ac899d-c8b4-429f-8fab-0b605ac53bbc req-74f5da56-745f-4dbf-8564-3b748e3f9c3e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:49 compute-0 nova_compute[253538]: 2025-11-25 08:35:49.444 253542 DEBUG oslo_concurrency.lockutils [req-41ac899d-c8b4-429f-8fab-0b605ac53bbc req-74f5da56-745f-4dbf-8564-3b748e3f9c3e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:49 compute-0 nova_compute[253538]: 2025-11-25 08:35:49.445 253542 DEBUG oslo_concurrency.lockutils [req-41ac899d-c8b4-429f-8fab-0b605ac53bbc req-74f5da56-745f-4dbf-8564-3b748e3f9c3e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:49 compute-0 nova_compute[253538]: 2025-11-25 08:35:49.445 253542 DEBUG nova.compute.manager [req-41ac899d-c8b4-429f-8fab-0b605ac53bbc req-74f5da56-745f-4dbf-8564-3b748e3f9c3e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] No waiting events found dispatching network-vif-unplugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:49 compute-0 nova_compute[253538]: 2025-11-25 08:35:49.445 253542 DEBUG nova.compute.manager [req-41ac899d-c8b4-429f-8fab-0b605ac53bbc req-74f5da56-745f-4dbf-8564-3b748e3f9c3e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received event network-vif-unplugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:35:49 compute-0 nova_compute[253538]: 2025-11-25 08:35:49.508 253542 INFO nova.virt.libvirt.driver [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Deleting instance files /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1_del
Nov 25 08:35:49 compute-0 nova_compute[253538]: 2025-11-25 08:35:49.509 253542 INFO nova.virt.libvirt.driver [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Deletion of /var/lib/nova/instances/9396c5ff-9457-400c-8916-ecd03eded0c1_del complete
Nov 25 08:35:49 compute-0 nova_compute[253538]: 2025-11-25 08:35:49.621 253542 INFO nova.compute.manager [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Took 1.37 seconds to destroy the instance on the hypervisor.
Nov 25 08:35:49 compute-0 nova_compute[253538]: 2025-11-25 08:35:49.621 253542 DEBUG oslo.service.loopingcall [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:35:49 compute-0 nova_compute[253538]: 2025-11-25 08:35:49.622 253542 DEBUG nova.compute.manager [-] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:35:49 compute-0 nova_compute[253538]: 2025-11-25 08:35:49.622 253542 DEBUG nova.network.neutron [-] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:35:49 compute-0 podman[324731]: 2025-11-25 08:35:49.700604259 +0000 UTC m=+0.038466835 container create 2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:35:49 compute-0 ceph-mon[75015]: pgmap v1584: 321 pgs: 321 active+clean; 180 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.8 MiB/s wr, 172 op/s
Nov 25 08:35:49 compute-0 systemd[1]: Started libpod-conmon-2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d.scope.
Nov 25 08:35:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:35:49 compute-0 podman[324731]: 2025-11-25 08:35:49.682094751 +0000 UTC m=+0.019957347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:35:49 compute-0 podman[324731]: 2025-11-25 08:35:49.800517084 +0000 UTC m=+0.138379750 container init 2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:35:49 compute-0 podman[324731]: 2025-11-25 08:35:49.811805237 +0000 UTC m=+0.149667813 container start 2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:35:49 compute-0 podman[324731]: 2025-11-25 08:35:49.814465959 +0000 UTC m=+0.152328615 container attach 2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:35:49 compute-0 keen_perlman[324748]: 167 167
Nov 25 08:35:49 compute-0 systemd[1]: libpod-2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d.scope: Deactivated successfully.
Nov 25 08:35:49 compute-0 conmon[324748]: conmon 2a5d00baee5dc425c593 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d.scope/container/memory.events
Nov 25 08:35:49 compute-0 podman[324753]: 2025-11-25 08:35:49.891570082 +0000 UTC m=+0.041281371 container died 2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 08:35:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cbc937b4efd33327833009a85f307c6f48ced3ff857d1583fdd5163be30b4f7-merged.mount: Deactivated successfully.
Nov 25 08:35:49 compute-0 podman[324753]: 2025-11-25 08:35:49.939250133 +0000 UTC m=+0.088961442 container remove 2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:35:49 compute-0 systemd[1]: libpod-conmon-2a5d00baee5dc425c593e8b5c8b27d9524d2c9ed7d67ba6cf533b7d0a539705d.scope: Deactivated successfully.
Nov 25 08:35:50 compute-0 podman[324774]: 2025-11-25 08:35:50.184921004 +0000 UTC m=+0.044766724 container create e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 08:35:50 compute-0 systemd[1]: Started libpod-conmon-e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9.scope.
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.224 253542 DEBUG nova.network.neutron [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Updating instance_info_cache with network_info: [{"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbc9fb77ea03d21db0655dff2fc7edbcc98e3091b6a250b10edcb7175f36e60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbc9fb77ea03d21db0655dff2fc7edbcc98e3091b6a250b10edcb7175f36e60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:50 compute-0 podman[324774]: 2025-11-25 08:35:50.168370289 +0000 UTC m=+0.028216029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbc9fb77ea03d21db0655dff2fc7edbcc98e3091b6a250b10edcb7175f36e60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbc9fb77ea03d21db0655dff2fc7edbcc98e3091b6a250b10edcb7175f36e60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.268 253542 DEBUG oslo_concurrency.lockutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Releasing lock "refresh_cache-4e9d3984-d789-45e1-83e3-8909597d3265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.269 253542 DEBUG nova.compute.manager [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:50 compute-0 podman[324774]: 2025-11-25 08:35:50.27705996 +0000 UTC m=+0.136905700 container init e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 08:35:50 compute-0 podman[324774]: 2025-11-25 08:35:50.283598567 +0000 UTC m=+0.143444287 container start e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:35:50 compute-0 podman[324774]: 2025-11-25 08:35:50.286277259 +0000 UTC m=+0.146122999 container attach e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:35:50 compute-0 kernel: tapd553507f-40 (unregistering): left promiscuous mode
Nov 25 08:35:50 compute-0 NetworkManager[48915]: <info>  [1764059750.4894] device (tapd553507f-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.512 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:50 compute-0 ovn_controller[152859]: 2025-11-25T08:35:50Z|00667|binding|INFO|Releasing lport d553507f-4019-4ce0-b549-4d221b9089cd from this chassis (sb_readonly=0)
Nov 25 08:35:50 compute-0 ovn_controller[152859]: 2025-11-25T08:35:50Z|00668|binding|INFO|Setting lport d553507f-4019-4ce0-b549-4d221b9089cd down in Southbound
Nov 25 08:35:50 compute-0 ovn_controller[152859]: 2025-11-25T08:35:50Z|00669|binding|INFO|Removing iface tapd553507f-40 ovn-installed in OVS
Nov 25 08:35:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.539 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:47:0f 10.100.0.9'], port_security=['fa:16:3e:d8:47:0f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4e9d3984-d789-45e1-83e3-8909597d3265', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66249d1f-478b-4b2b-a784-933c0556752e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a20ef4bed55a408c8933a4956b2dd3e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '99b8ce64-92d7-4b35-99db-f9518100b84f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5891e4e5-b944-481c-a074-f90df3e14ac2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d553507f-4019-4ce0-b549-4d221b9089cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.540 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.541 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d553507f-4019-4ce0-b549-4d221b9089cd in datapath 66249d1f-478b-4b2b-a784-933c0556752e unbound from our chassis
Nov 25 08:35:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.542 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 66249d1f-478b-4b2b-a784-933c0556752e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:35:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.543 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[123a7ffb-4f38-4474-9c68-dd776a10d21a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.544 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e namespace which is not needed anymore
Nov 25 08:35:50 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000045.scope: Deactivated successfully.
Nov 25 08:35:50 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000045.scope: Consumed 3.544s CPU time.
Nov 25 08:35:50 compute-0 systemd-machined[215790]: Machine qemu-81-instance-00000045 terminated.
Nov 25 08:35:50 compute-0 NetworkManager[48915]: <info>  [1764059750.6714] manager: (tapd553507f-40): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.674 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.690 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.694 253542 INFO nova.virt.libvirt.driver [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Instance destroyed successfully.
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.694 253542 DEBUG nova.objects.instance [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lazy-loading 'resources' on Instance uuid 4e9d3984-d789-45e1-83e3-8909597d3265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.706 253542 DEBUG nova.virt.libvirt.vif [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:35:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1670811507',display_name='tempest-InstanceActionsTestJSON-server-1670811507',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1670811507',id=69,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a20ef4bed55a408c8933a4956b2dd3e4',ramdisk_id='',reservation_id='r-7noz6z35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-270987687',owner_user_name='tempest-InstanceActionsTestJSON-270987687-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:50Z,user_data=None,user_id='4de06a7985be4463b069db269e2882d4',uuid=4e9d3984-d789-45e1-83e3-8909597d3265,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.706 253542 DEBUG nova.network.os_vif_util [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converting VIF {"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.707 253542 DEBUG nova.network.os_vif_util [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.707 253542 DEBUG os_vif [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.709 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.709 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd553507f-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.714 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.716 253542 INFO os_vif [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40')
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.727 253542 DEBUG nova.virt.libvirt.driver [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Start _get_guest_xml network_info=[{"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.732 253542 WARNING nova.virt.libvirt.driver [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.737 253542 DEBUG nova.virt.libvirt.host [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:35:50 compute-0 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[324315]: [NOTICE]   (324341) : haproxy version is 2.8.14-c23fe91
Nov 25 08:35:50 compute-0 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[324315]: [NOTICE]   (324341) : path to executable is /usr/sbin/haproxy
Nov 25 08:35:50 compute-0 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[324315]: [WARNING]  (324341) : Exiting Master process...
Nov 25 08:35:50 compute-0 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[324315]: [WARNING]  (324341) : Exiting Master process...
Nov 25 08:35:50 compute-0 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[324315]: [ALERT]    (324341) : Current worker (324343) exited with code 143 (Terminated)
Nov 25 08:35:50 compute-0 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[324315]: [WARNING]  (324341) : All workers exited. Exiting... (0)
Nov 25 08:35:50 compute-0 systemd[1]: libpod-382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec.scope: Deactivated successfully.
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.742 253542 DEBUG nova.virt.libvirt.host [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:35:50 compute-0 podman[324818]: 2025-11-25 08:35:50.749618711 +0000 UTC m=+0.071480392 container died 382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.750 253542 DEBUG nova.virt.libvirt.host [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.750 253542 DEBUG nova.virt.libvirt.host [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.751 253542 DEBUG nova.virt.libvirt.driver [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.751 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.751 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.752 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.752 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.752 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.752 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.753 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.753 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.753 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.753 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.754 253542 DEBUG nova.virt.hardware [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.754 253542 DEBUG nova.objects.instance [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4e9d3984-d789-45e1-83e3-8909597d3265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.767 253542 DEBUG oslo_concurrency.processutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec-userdata-shm.mount: Deactivated successfully.
Nov 25 08:35:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcacdbab04bdd2d6eb0a58f860b5d214abfab69870a1def2a93eb666e475b92f-merged.mount: Deactivated successfully.
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.807 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:50 compute-0 podman[324818]: 2025-11-25 08:35:50.821493333 +0000 UTC m=+0.143354984 container cleanup 382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 08:35:50 compute-0 systemd[1]: libpod-conmon-382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec.scope: Deactivated successfully.
Nov 25 08:35:50 compute-0 podman[324857]: 2025-11-25 08:35:50.890006104 +0000 UTC m=+0.047468797 container remove 382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 08:35:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.895 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0871f00c-91b5-41ca-ae20-58ef58c13b19]: (4, ('Tue Nov 25 08:35:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e (382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec)\n382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec\nTue Nov 25 08:35:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e (382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec)\n382bcc9e9cd06446e4d63c9f940deab344c4f9c9f42d4897cc96091c1e1e2dec\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.902 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[050a1d39-0aec-4bb2-8997-6eff98a9e381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.903 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66249d1f-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:50 compute-0 kernel: tap66249d1f-40: left promiscuous mode
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.904 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:50 compute-0 nova_compute[253538]: 2025-11-25 08:35:50.924 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.926 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[65ccae7a-80fb-4d60-9aad-21c0165001ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.941 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d61e01-3b70-4f20-a474-388b431fe101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.943 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[07ca9778-66bb-4591-8161-4bee54c42322]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.961 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[facac444-c169-4ad4-9245-7d4e0c5aacb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510021, 'reachable_time': 18556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324889, 'error': None, 'target': 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.963 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:35:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:50.964 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[1686cacf-3fe6-4a66-a61f-222af962a1c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d66249d1f\x2d478b\x2d4b2b\x2da784\x2d933c0556752e.mount: Deactivated successfully.
Nov 25 08:35:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]: {
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:         "osd_id": 1,
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:         "type": "bluestore"
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:     },
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:         "osd_id": 2,
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:         "type": "bluestore"
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:     },
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:         "osd_id": 0,
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:         "type": "bluestore"
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]:     }
Nov 25 08:35:51 compute-0 cranky_agnesi[324791]: }
Nov 25 08:35:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:35:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4211642391' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.246 253542 DEBUG nova.network.neutron [-] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:51 compute-0 systemd[1]: libpod-e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9.scope: Deactivated successfully.
Nov 25 08:35:51 compute-0 conmon[324791]: conmon e5ffbbdb1a81db44c454 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9.scope/container/memory.events
Nov 25 08:35:51 compute-0 podman[324774]: 2025-11-25 08:35:51.257437259 +0000 UTC m=+1.117282999 container died e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.259 253542 DEBUG oslo_concurrency.processutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdbc9fb77ea03d21db0655dff2fc7edbcc98e3091b6a250b10edcb7175f36e60-merged.mount: Deactivated successfully.
Nov 25 08:35:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1585: 321 pgs: 321 active+clean; 159 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.9 MiB/s wr, 224 op/s
Nov 25 08:35:51 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4211642391' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.296 253542 DEBUG oslo_concurrency.processutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:51 compute-0 podman[324774]: 2025-11-25 08:35:51.318708856 +0000 UTC m=+1.178554576 container remove e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.330 253542 INFO nova.compute.manager [-] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Took 1.71 seconds to deallocate network for instance.
Nov 25 08:35:51 compute-0 systemd[1]: libpod-conmon-e5ffbbdb1a81db44c4545f529ea861d35fd23ec94ac4a3c7d31e14df9a3da1a9.scope: Deactivated successfully.
Nov 25 08:35:51 compute-0 sudo[324648]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:35:51 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:35:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:35:51 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:35:51 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev a02c63c4-ba47-4d3e-a31d-c3f27d71e1a8 does not exist
Nov 25 08:35:51 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev cac9dff5-de03-498a-939e-1a25cdd45215 does not exist
Nov 25 08:35:51 compute-0 sudo[324951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:35:51 compute-0 sudo[324951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:51 compute-0 sudo[324951]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.454 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.454 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:51 compute-0 sudo[324995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:35:51 compute-0 sudo[324995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:35:51 compute-0 sudo[324995]: pam_unix(sudo:session): session closed for user root
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.563 253542 DEBUG oslo_concurrency.processutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.610 253542 DEBUG nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received event network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.612 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.612 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.613 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.613 253542 DEBUG nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] No waiting events found dispatching network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.614 253542 WARNING nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received unexpected event network-vif-plugged-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 for instance with vm_state deleted and task_state None.
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.614 253542 DEBUG nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received event network-vif-unplugged-d553507f-4019-4ce0-b549-4d221b9089cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.615 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.615 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.616 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.616 253542 DEBUG nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] No waiting events found dispatching network-vif-unplugged-d553507f-4019-4ce0-b549-4d221b9089cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.617 253542 WARNING nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received unexpected event network-vif-unplugged-d553507f-4019-4ce0-b549-4d221b9089cd for instance with vm_state active and task_state reboot_started_hard.
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.617 253542 DEBUG nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Received event network-vif-deleted-33ae6d28-9d12-4e42-9874-7f5c7a27c9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.618 253542 DEBUG nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.618 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.619 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.619 253542 DEBUG oslo_concurrency.lockutils [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.620 253542 DEBUG nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] No waiting events found dispatching network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.620 253542 WARNING nova.compute.manager [req-7666324b-2549-41a8-861e-c962d8f29030 req-1ebf8616-b2b9-4967-9eee-096fa0bcf4c8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received unexpected event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd for instance with vm_state active and task_state reboot_started_hard.
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.639 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059736.6345212, cab0bbd2-96e3-43ed-970b-0b49c7581fef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.639 253542 INFO nova.compute.manager [-] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] VM Stopped (Lifecycle Event)
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.658 253542 DEBUG nova.compute.manager [None req-d5c8f677-cf87-4069-bdb7-0179c9186399 - - - - - -] [instance: cab0bbd2-96e3-43ed-970b-0b49c7581fef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:35:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4082249916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.747 253542 DEBUG oslo_concurrency.processutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.749 253542 DEBUG nova.virt.libvirt.vif [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:35:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1670811507',display_name='tempest-InstanceActionsTestJSON-server-1670811507',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1670811507',id=69,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a20ef4bed55a408c8933a4956b2dd3e4',ramdisk_id='',reservation_id='r-7noz6z35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-270987687',owner_user_name='tempest-InstanceActionsTestJSON-270987687-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:50Z,user_data=None,user_id='4de06a7985be4463b069db269e2882d4',uuid=4e9d3984-d789-45e1-83e3-8909597d3265,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.750 253542 DEBUG nova.network.os_vif_util [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converting VIF {"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.752 253542 DEBUG nova.network.os_vif_util [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.754 253542 DEBUG nova.objects.instance [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e9d3984-d789-45e1-83e3-8909597d3265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.775 253542 DEBUG nova.virt.libvirt.driver [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:35:51 compute-0 nova_compute[253538]:   <uuid>4e9d3984-d789-45e1-83e3-8909597d3265</uuid>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   <name>instance-00000045</name>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <nova:name>tempest-InstanceActionsTestJSON-server-1670811507</nova:name>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:35:50</nova:creationTime>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:35:51 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:35:51 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:35:51 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:35:51 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:35:51 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:35:51 compute-0 nova_compute[253538]:         <nova:user uuid="4de06a7985be4463b069db269e2882d4">tempest-InstanceActionsTestJSON-270987687-project-member</nova:user>
Nov 25 08:35:51 compute-0 nova_compute[253538]:         <nova:project uuid="a20ef4bed55a408c8933a4956b2dd3e4">tempest-InstanceActionsTestJSON-270987687</nova:project>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:35:51 compute-0 nova_compute[253538]:         <nova:port uuid="d553507f-4019-4ce0-b549-4d221b9089cd">
Nov 25 08:35:51 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <system>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <entry name="serial">4e9d3984-d789-45e1-83e3-8909597d3265</entry>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <entry name="uuid">4e9d3984-d789-45e1-83e3-8909597d3265</entry>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     </system>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   <os>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   </os>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   <features>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   </features>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4e9d3984-d789-45e1-83e3-8909597d3265_disk">
Nov 25 08:35:51 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       </source>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:35:51 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4e9d3984-d789-45e1-83e3-8909597d3265_disk.config">
Nov 25 08:35:51 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       </source>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:35:51 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:d8:47:0f"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <target dev="tapd553507f-40"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265/console.log" append="off"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <video>
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     </video>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <input type="keyboard" bus="usb"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:35:51 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:35:51 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:35:51 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:35:51 compute-0 nova_compute[253538]: </domain>
Nov 25 08:35:51 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.778 253542 DEBUG nova.virt.libvirt.driver [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.778 253542 DEBUG nova.virt.libvirt.driver [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.780 253542 DEBUG nova.virt.libvirt.vif [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:35:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1670811507',display_name='tempest-InstanceActionsTestJSON-server-1670811507',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1670811507',id=69,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='a20ef4bed55a408c8933a4956b2dd3e4',ramdisk_id='',reservation_id='r-7noz6z35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-270987687',owner_user_name='tempest-InstanceActionsTestJSON-270987687-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:50Z,user_data=None,user_id='4de06a7985be4463b069db269e2882d4',uuid=4e9d3984-d789-45e1-83e3-8909597d3265,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.780 253542 DEBUG nova.network.os_vif_util [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converting VIF {"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.781 253542 DEBUG nova.network.os_vif_util [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.782 253542 DEBUG os_vif [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.783 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.784 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.785 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.790 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.790 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd553507f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.791 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd553507f-40, col_values=(('external_ids', {'iface-id': 'd553507f-4019-4ce0-b549-4d221b9089cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:47:0f', 'vm-uuid': '4e9d3984-d789-45e1-83e3-8909597d3265'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:51 compute-0 NetworkManager[48915]: <info>  [1764059751.7952] manager: (tapd553507f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/296)
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.796 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.804 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.806 253542 INFO os_vif [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40')
Nov 25 08:35:51 compute-0 kernel: tapd553507f-40: entered promiscuous mode
Nov 25 08:35:51 compute-0 NetworkManager[48915]: <info>  [1764059751.8956] manager: (tapd553507f-40): new Tun device (/org/freedesktop/NetworkManager/Devices/297)
Nov 25 08:35:51 compute-0 ovn_controller[152859]: 2025-11-25T08:35:51Z|00670|binding|INFO|Claiming lport d553507f-4019-4ce0-b549-4d221b9089cd for this chassis.
Nov 25 08:35:51 compute-0 ovn_controller[152859]: 2025-11-25T08:35:51Z|00671|binding|INFO|d553507f-4019-4ce0-b549-4d221b9089cd: Claiming fa:16:3e:d8:47:0f 10.100.0.9
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.896 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.926 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:47:0f 10.100.0.9'], port_security=['fa:16:3e:d8:47:0f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4e9d3984-d789-45e1-83e3-8909597d3265', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66249d1f-478b-4b2b-a784-933c0556752e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a20ef4bed55a408c8933a4956b2dd3e4', 'neutron:revision_number': '5', 'neutron:security_group_ids': '99b8ce64-92d7-4b35-99db-f9518100b84f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5891e4e5-b944-481c-a074-f90df3e14ac2, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d553507f-4019-4ce0-b549-4d221b9089cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.927 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d553507f-4019-4ce0-b549-4d221b9089cd in datapath 66249d1f-478b-4b2b-a784-933c0556752e bound to our chassis
Nov 25 08:35:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.929 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 66249d1f-478b-4b2b-a784-933c0556752e
Nov 25 08:35:51 compute-0 systemd-udevd[325055]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:35:51 compute-0 ovn_controller[152859]: 2025-11-25T08:35:51Z|00672|binding|INFO|Setting lport d553507f-4019-4ce0-b549-4d221b9089cd ovn-installed in OVS
Nov 25 08:35:51 compute-0 ovn_controller[152859]: 2025-11-25T08:35:51Z|00673|binding|INFO|Setting lport d553507f-4019-4ce0-b549-4d221b9089cd up in Southbound
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.936 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:51 compute-0 systemd-machined[215790]: New machine qemu-82-instance-00000045.
Nov 25 08:35:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.942 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ab375394-1abf-4515-9f6d-893a629016df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.944 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap66249d1f-41 in ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:35:51 compute-0 nova_compute[253538]: 2025-11-25 08:35:51.942 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.946 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap66249d1f-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:35:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.946 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[46a3fabe-6ad2-467b-99c7-16e5701a06b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.947 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0beab3af-2b77-444d-90ac-7fdb95861af7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:51 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-00000045.
Nov 25 08:35:51 compute-0 NetworkManager[48915]: <info>  [1764059751.9589] device (tapd553507f-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:35:51 compute-0 NetworkManager[48915]: <info>  [1764059751.9600] device (tapd553507f-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:35:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.958 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[13835c2a-dad2-4f7c-a3d8-580507d6781f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:51.988 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[32c3782e-b201-4f41-a113-976d4033cec4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:35:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/444701479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.019 253542 DEBUG oslo_concurrency.processutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.023 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e308f19c-1d38-483c-b8a4-f3378cb965b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.025 253542 DEBUG nova.compute.provider_tree [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.028 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[65638bb4-5de3-4d99-bcca-eab1026483f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:52 compute-0 NetworkManager[48915]: <info>  [1764059752.0294] manager: (tap66249d1f-40): new Veth device (/org/freedesktop/NetworkManager/Devices/298)
Nov 25 08:35:52 compute-0 systemd-udevd[325060]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.038 253542 DEBUG nova.scheduler.client.report [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.064 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[12823ccd-2bd7-4fe6-a509-7d52cbd909b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.067 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b20ca17e-8305-40e8-8661-4e0a47bb8338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:52 compute-0 NetworkManager[48915]: <info>  [1764059752.0897] device (tap66249d1f-40): carrier: link connected
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.094 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.094 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d2cd01-5e7a-494d-a718-9b9088876eff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.111 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[596e70f0-b5b4-4de6-a593-2f5d46df790b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66249d1f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:29:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510668, 'reachable_time': 32783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325091, 'error': None, 'target': 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.131 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8e9a4a-d00e-43c2-b9a9-e084aa1804d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:29ed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510668, 'tstamp': 510668}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325092, 'error': None, 'target': 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.150 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[66678557-d654-41ff-9251-626671d96448]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66249d1f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:29:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510668, 'reachable_time': 32783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325093, 'error': None, 'target': 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.160 253542 INFO nova.scheduler.client.report [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Deleted allocations for instance 9396c5ff-9457-400c-8916-ecd03eded0c1
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.186 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9d39e1a5-79cf-45cc-acb7-0005abbb7515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.249 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[978cbf4f-8594-4180-b73f-14cbfdcddb0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.250 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66249d1f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.251 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.252 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66249d1f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:52 compute-0 NetworkManager[48915]: <info>  [1764059752.2545] manager: (tap66249d1f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Nov 25 08:35:52 compute-0 kernel: tap66249d1f-40: entered promiscuous mode
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.256 253542 DEBUG oslo_concurrency.lockutils [None req-aa0f63c9-da87-49af-abeb-54e2523715df 596f8d994ec145beb9244f5f01713555 3b96d13f13da43468269abb6dc6185d1 - - default default] Lock "9396c5ff-9457-400c-8916-ecd03eded0c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.257 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap66249d1f-40, col_values=(('external_ids', {'iface-id': '57f8eb8e-0895-4599-b15e-b1a08378dfc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.258 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:52 compute-0 ovn_controller[152859]: 2025-11-25T08:35:52Z|00674|binding|INFO|Releasing lport 57f8eb8e-0895-4599-b15e-b1a08378dfc1 from this chassis (sb_readonly=0)
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.277 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.278 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/66249d1f-478b-4b2b-a784-933c0556752e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/66249d1f-478b-4b2b-a784-933c0556752e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.281 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eff374fb-4ac1-4d41-845e-b4047473d0a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.282 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-66249d1f-478b-4b2b-a784-933c0556752e
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/66249d1f-478b-4b2b-a784-933c0556752e.pid.haproxy
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 66249d1f-478b-4b2b-a784-933c0556752e
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:35:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:52.283 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'env', 'PROCESS_TAG=haproxy-66249d1f-478b-4b2b-a784-933c0556752e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/66249d1f-478b-4b2b-a784-933c0556752e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.347 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 4e9d3984-d789-45e1-83e3-8909597d3265 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.347 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059752.3455155, 4e9d3984-d789-45e1-83e3-8909597d3265 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.347 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] VM Resumed (Lifecycle Event)
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.350 253542 DEBUG nova.compute.manager [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.353 253542 INFO nova.virt.libvirt.driver [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Instance rebooted successfully.
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.354 253542 DEBUG nova.compute.manager [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:52 compute-0 ceph-mon[75015]: pgmap v1585: 321 pgs: 321 active+clean; 159 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.9 MiB/s wr, 224 op/s
Nov 25 08:35:52 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:35:52 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:35:52 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4082249916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:35:52 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/444701479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.536 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.543 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.563 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.564 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059752.3478527, 4e9d3984-d789-45e1-83e3-8909597d3265 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.564 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] VM Started (Lifecycle Event)
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.584 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.589 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:35:52 compute-0 nova_compute[253538]: 2025-11-25 08:35:52.606 253542 DEBUG oslo_concurrency.lockutils [None req-b76fc119-15bb-4231-90b7-ef24446d6cfa 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 3.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:52 compute-0 podman[325167]: 2025-11-25 08:35:52.662646746 +0000 UTC m=+0.044963250 container create 8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:35:52 compute-0 systemd[1]: Started libpod-conmon-8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c.scope.
Nov 25 08:35:52 compute-0 podman[325167]: 2025-11-25 08:35:52.638973289 +0000 UTC m=+0.021289813 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:35:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676346057d689cb023f9cac0e4de20ceef90b417e6dbb2d3799ce4df248a42ed/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:35:52 compute-0 podman[325167]: 2025-11-25 08:35:52.758662896 +0000 UTC m=+0.140979450 container init 8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 08:35:52 compute-0 podman[325167]: 2025-11-25 08:35:52.765785938 +0000 UTC m=+0.148102452 container start 8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 08:35:52 compute-0 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[325182]: [NOTICE]   (325186) : New worker (325188) forked
Nov 25 08:35:52 compute-0 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[325182]: [NOTICE]   (325186) : Loading success.
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:35:53
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'images', '.rgw.root', 'volumes', 'vms', 'backups']
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 134 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 32 KiB/s wr, 233 op/s
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:35:53 compute-0 nova_compute[253538]: 2025-11-25 08:35:53.663 253542 DEBUG nova.compute.manager [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:53 compute-0 nova_compute[253538]: 2025-11-25 08:35:53.664 253542 DEBUG oslo_concurrency.lockutils [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:53 compute-0 nova_compute[253538]: 2025-11-25 08:35:53.665 253542 DEBUG oslo_concurrency.lockutils [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:53 compute-0 nova_compute[253538]: 2025-11-25 08:35:53.665 253542 DEBUG oslo_concurrency.lockutils [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:53 compute-0 nova_compute[253538]: 2025-11-25 08:35:53.665 253542 DEBUG nova.compute.manager [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] No waiting events found dispatching network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:53 compute-0 nova_compute[253538]: 2025-11-25 08:35:53.665 253542 WARNING nova.compute.manager [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received unexpected event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd for instance with vm_state active and task_state None.
Nov 25 08:35:53 compute-0 nova_compute[253538]: 2025-11-25 08:35:53.666 253542 DEBUG nova.compute.manager [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:53 compute-0 nova_compute[253538]: 2025-11-25 08:35:53.666 253542 DEBUG oslo_concurrency.lockutils [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:53 compute-0 nova_compute[253538]: 2025-11-25 08:35:53.666 253542 DEBUG oslo_concurrency.lockutils [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:53 compute-0 nova_compute[253538]: 2025-11-25 08:35:53.666 253542 DEBUG oslo_concurrency.lockutils [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:53 compute-0 nova_compute[253538]: 2025-11-25 08:35:53.667 253542 DEBUG nova.compute.manager [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] No waiting events found dispatching network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:35:53 compute-0 nova_compute[253538]: 2025-11-25 08:35:53.667 253542 WARNING nova.compute.manager [req-df7ed42f-2184-4737-8431-cc92fe4130af req-3f46bbd0-956e-4453-b182-55e006bd8de0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received unexpected event network-vif-plugged-d553507f-4019-4ce0-b549-4d221b9089cd for instance with vm_state active and task_state None.
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:35:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:35:54 compute-0 ceph-mon[75015]: pgmap v1586: 321 pgs: 321 active+clean; 134 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 32 KiB/s wr, 233 op/s
Nov 25 08:35:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1587: 321 pgs: 321 active+clean; 134 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 32 KiB/s wr, 266 op/s
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.613 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.615 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.615 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.616 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.616 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.618 253542 INFO nova.compute.manager [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Terminating instance
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.621 253542 DEBUG nova.compute.manager [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:35:55 compute-0 kernel: tapd553507f-40 (unregistering): left promiscuous mode
Nov 25 08:35:55 compute-0 NetworkManager[48915]: <info>  [1764059755.6628] device (tapd553507f-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.670 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:55 compute-0 ovn_controller[152859]: 2025-11-25T08:35:55Z|00675|binding|INFO|Releasing lport d553507f-4019-4ce0-b549-4d221b9089cd from this chassis (sb_readonly=0)
Nov 25 08:35:55 compute-0 ovn_controller[152859]: 2025-11-25T08:35:55Z|00676|binding|INFO|Setting lport d553507f-4019-4ce0-b549-4d221b9089cd down in Southbound
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.672 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:55 compute-0 ovn_controller[152859]: 2025-11-25T08:35:55Z|00677|binding|INFO|Removing iface tapd553507f-40 ovn-installed in OVS
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.673 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.693 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:47:0f 10.100.0.9'], port_security=['fa:16:3e:d8:47:0f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4e9d3984-d789-45e1-83e3-8909597d3265', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66249d1f-478b-4b2b-a784-933c0556752e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a20ef4bed55a408c8933a4956b2dd3e4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '99b8ce64-92d7-4b35-99db-f9518100b84f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5891e4e5-b944-481c-a074-f90df3e14ac2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d553507f-4019-4ce0-b549-4d221b9089cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:35:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.695 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d553507f-4019-4ce0-b549-4d221b9089cd in datapath 66249d1f-478b-4b2b-a784-933c0556752e unbound from our chassis
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.695 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.696 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 66249d1f-478b-4b2b-a784-933c0556752e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:35:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.697 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1403c730-08ee-4fdc-8aa8-b85f5169e4bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.697 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e namespace which is not needed anymore
Nov 25 08:35:55 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000045.scope: Deactivated successfully.
Nov 25 08:35:55 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000045.scope: Consumed 3.808s CPU time.
Nov 25 08:35:55 compute-0 systemd-machined[215790]: Machine qemu-82-instance-00000045 terminated.
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.796 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:55 compute-0 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[325182]: [NOTICE]   (325186) : haproxy version is 2.8.14-c23fe91
Nov 25 08:35:55 compute-0 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[325182]: [NOTICE]   (325186) : path to executable is /usr/sbin/haproxy
Nov 25 08:35:55 compute-0 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[325182]: [WARNING]  (325186) : Exiting Master process...
Nov 25 08:35:55 compute-0 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[325182]: [ALERT]    (325186) : Current worker (325188) exited with code 143 (Terminated)
Nov 25 08:35:55 compute-0 neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e[325182]: [WARNING]  (325186) : All workers exited. Exiting... (0)
Nov 25 08:35:55 compute-0 systemd[1]: libpod-8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c.scope: Deactivated successfully.
Nov 25 08:35:55 compute-0 podman[325222]: 2025-11-25 08:35:55.842713013 +0000 UTC m=+0.048097844 container died 8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.854 253542 INFO nova.virt.libvirt.driver [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Instance destroyed successfully.
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.855 253542 DEBUG nova.objects.instance [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lazy-loading 'resources' on Instance uuid 4e9d3984-d789-45e1-83e3-8909597d3265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.867 253542 DEBUG nova.virt.libvirt.vif [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:35:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1670811507',display_name='tempest-InstanceActionsTestJSON-server-1670811507',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1670811507',id=69,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a20ef4bed55a408c8933a4956b2dd3e4',ramdisk_id='',reservation_id='r-7noz6z35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-270987687',owner_user_name='tempest-InstanceActionsTestJSON-270987687-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:35:52Z,user_data=None,user_id='4de06a7985be4463b069db269e2882d4',uuid=4e9d3984-d789-45e1-83e3-8909597d3265,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.868 253542 DEBUG nova.network.os_vif_util [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converting VIF {"id": "d553507f-4019-4ce0-b549-4d221b9089cd", "address": "fa:16:3e:d8:47:0f", "network": {"id": "66249d1f-478b-4b2b-a784-933c0556752e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-179501336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a20ef4bed55a408c8933a4956b2dd3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd553507f-40", "ovs_interfaceid": "d553507f-4019-4ce0-b549-4d221b9089cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:35:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c-userdata-shm.mount: Deactivated successfully.
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.869 253542 DEBUG nova.network.os_vif_util [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.869 253542 DEBUG os_vif [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:35:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-676346057d689cb023f9cac0e4de20ceef90b417e6dbb2d3799ce4df248a42ed-merged.mount: Deactivated successfully.
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.871 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.871 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd553507f-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.872 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.875 253542 INFO os_vif [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:47:0f,bridge_name='br-int',has_traffic_filtering=True,id=d553507f-4019-4ce0-b549-4d221b9089cd,network=Network(66249d1f-478b-4b2b-a784-933c0556752e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd553507f-40')
Nov 25 08:35:55 compute-0 podman[325222]: 2025-11-25 08:35:55.879031829 +0000 UTC m=+0.084416650 container cleanup 8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 08:35:55 compute-0 systemd[1]: libpod-conmon-8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c.scope: Deactivated successfully.
Nov 25 08:35:55 compute-0 podman[325270]: 2025-11-25 08:35:55.939146114 +0000 UTC m=+0.037356175 container remove 8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:35:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.944 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e25f94b8-b3e1-40a3-97fd-3d423ffe49ed]: (4, ('Tue Nov 25 08:35:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e (8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c)\n8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c\nTue Nov 25 08:35:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e (8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c)\n8bb7ce6de75d314a91eb621d3fd8a3ea32ded08d9eb82cd69f1e17ed6ec4ca6c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.946 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[39ccc98c-4e1f-49c9-91ee-3b36598aef66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.947 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66249d1f-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.949 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:55 compute-0 kernel: tap66249d1f-40: left promiscuous mode
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.965 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:35:55 compute-0 nova_compute[253538]: 2025-11-25 08:35:55.966 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.968 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[97b2738b-0a20-4b0f-a133-66adbb5b98a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.979 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e08b5657-6d95-4d41-a5b8-6fb97c04ae85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.980 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d5d257-bfa4-451b-a006-c48294be27d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.995 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9ddfb548-f4aa-433d-81ba-94d0021be331]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510661, 'reachable_time': 19678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325297, 'error': None, 'target': 'ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.998 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-66249d1f-478b-4b2b-a784-933c0556752e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:35:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d66249d1f\x2d478b\x2d4b2b\x2da784\x2d933c0556752e.mount: Deactivated successfully.
Nov 25 08:35:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:35:55.998 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[6a37bc94-fd85-401c-97d1-814bcc4c7849]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:35:56 compute-0 nova_compute[253538]: 2025-11-25 08:35:56.178 253542 INFO nova.virt.libvirt.driver [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Deleting instance files /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265_del
Nov 25 08:35:56 compute-0 nova_compute[253538]: 2025-11-25 08:35:56.179 253542 INFO nova.virt.libvirt.driver [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Deletion of /var/lib/nova/instances/4e9d3984-d789-45e1-83e3-8909597d3265_del complete
Nov 25 08:35:56 compute-0 nova_compute[253538]: 2025-11-25 08:35:56.261 253542 INFO nova.compute.manager [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Took 0.64 seconds to destroy the instance on the hypervisor.
Nov 25 08:35:56 compute-0 nova_compute[253538]: 2025-11-25 08:35:56.262 253542 DEBUG oslo.service.loopingcall [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:35:56 compute-0 nova_compute[253538]: 2025-11-25 08:35:56.262 253542 DEBUG nova.compute.manager [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:35:56 compute-0 nova_compute[253538]: 2025-11-25 08:35:56.263 253542 DEBUG nova.network.neutron [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:35:56 compute-0 ceph-mon[75015]: pgmap v1587: 321 pgs: 321 active+clean; 134 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 32 KiB/s wr, 266 op/s
Nov 25 08:35:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 117 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 29 KiB/s wr, 279 op/s
Nov 25 08:35:57 compute-0 nova_compute[253538]: 2025-11-25 08:35:57.674 253542 DEBUG nova.network.neutron [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:57 compute-0 nova_compute[253538]: 2025-11-25 08:35:57.727 253542 DEBUG nova.compute.manager [req-bf4488fa-86b6-41ec-a06a-0c65cc6fd139 req-6e775f5f-bc47-4452-a775-05927c3cec64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Received event network-vif-deleted-d553507f-4019-4ce0-b549-4d221b9089cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:35:57 compute-0 nova_compute[253538]: 2025-11-25 08:35:57.727 253542 INFO nova.compute.manager [req-bf4488fa-86b6-41ec-a06a-0c65cc6fd139 req-6e775f5f-bc47-4452-a775-05927c3cec64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Neutron deleted interface d553507f-4019-4ce0-b549-4d221b9089cd; detaching it from the instance and deleting it from the info cache
Nov 25 08:35:57 compute-0 nova_compute[253538]: 2025-11-25 08:35:57.728 253542 DEBUG nova.network.neutron [req-bf4488fa-86b6-41ec-a06a-0c65cc6fd139 req-6e775f5f-bc47-4452-a775-05927c3cec64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:35:57 compute-0 nova_compute[253538]: 2025-11-25 08:35:57.778 253542 INFO nova.compute.manager [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Took 1.51 seconds to deallocate network for instance.
Nov 25 08:35:57 compute-0 nova_compute[253538]: 2025-11-25 08:35:57.784 253542 DEBUG nova.compute.manager [req-bf4488fa-86b6-41ec-a06a-0c65cc6fd139 req-6e775f5f-bc47-4452-a775-05927c3cec64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Detach interface failed, port_id=d553507f-4019-4ce0-b549-4d221b9089cd, reason: Instance 4e9d3984-d789-45e1-83e3-8909597d3265 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 08:35:57 compute-0 nova_compute[253538]: 2025-11-25 08:35:57.824 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:35:57 compute-0 nova_compute[253538]: 2025-11-25 08:35:57.825 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:35:57 compute-0 nova_compute[253538]: 2025-11-25 08:35:57.875 253542 DEBUG oslo_concurrency.processutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:35:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:35:58 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4259423462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:58 compute-0 nova_compute[253538]: 2025-11-25 08:35:58.352 253542 DEBUG oslo_concurrency.processutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:35:58 compute-0 nova_compute[253538]: 2025-11-25 08:35:58.360 253542 DEBUG nova.compute.provider_tree [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:35:58 compute-0 nova_compute[253538]: 2025-11-25 08:35:58.375 253542 DEBUG nova.scheduler.client.report [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:35:58 compute-0 ceph-mon[75015]: pgmap v1588: 321 pgs: 321 active+clean; 117 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 29 KiB/s wr, 279 op/s
Nov 25 08:35:58 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4259423462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:35:58 compute-0 nova_compute[253538]: 2025-11-25 08:35:58.405 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:58 compute-0 nova_compute[253538]: 2025-11-25 08:35:58.438 253542 INFO nova.scheduler.client.report [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Deleted allocations for instance 4e9d3984-d789-45e1-83e3-8909597d3265
Nov 25 08:35:58 compute-0 nova_compute[253538]: 2025-11-25 08:35:58.549 253542 DEBUG oslo_concurrency.lockutils [None req-96ec4fff-ef93-444c-a38f-13891d9af9b3 4de06a7985be4463b069db269e2882d4 a20ef4bed55a408c8933a4956b2dd3e4 - - default default] Lock "4e9d3984-d789-45e1-83e3-8909597d3265" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:35:58 compute-0 nova_compute[253538]: 2025-11-25 08:35:58.948 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:35:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1589: 321 pgs: 321 active+clean; 105 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 27 KiB/s wr, 241 op/s
Nov 25 08:36:00 compute-0 ceph-mon[75015]: pgmap v1589: 321 pgs: 321 active+clean; 105 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 27 KiB/s wr, 241 op/s
Nov 25 08:36:00 compute-0 nova_compute[253538]: 2025-11-25 08:36:00.797 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:00 compute-0 nova_compute[253538]: 2025-11-25 08:36:00.869 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Acquiring lock "2274b091-0cde-4f9c-a067-0bec4dd614e4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:00 compute-0 nova_compute[253538]: 2025-11-25 08:36:00.869 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:00 compute-0 nova_compute[253538]: 2025-11-25 08:36:00.872 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:00 compute-0 nova_compute[253538]: 2025-11-25 08:36:00.898 253542 DEBUG nova.compute.manager [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:36:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.040 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.040 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.046 253542 DEBUG nova.virt.hardware [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.047 253542 INFO nova.compute.claims [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:36:01 compute-0 anacron[157083]: Job `cron.weekly' started
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.151 253542 DEBUG oslo_concurrency.processutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:01 compute-0 anacron[157083]: Job `cron.weekly' terminated
Nov 25 08:36:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1590: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 15 KiB/s wr, 225 op/s
Nov 25 08:36:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:36:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/762858983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.604 253542 DEBUG oslo_concurrency.processutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.610 253542 DEBUG nova.compute.provider_tree [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.625 253542 DEBUG nova.scheduler.client.report [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.656 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.657 253542 DEBUG nova.compute.manager [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.704 253542 DEBUG nova.compute.manager [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.704 253542 DEBUG nova.network.neutron [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.732 253542 INFO nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.759 253542 DEBUG nova.compute.manager [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.862 253542 DEBUG nova.compute.manager [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.863 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.863 253542 INFO nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Creating image(s)
Nov 25 08:36:01 compute-0 rsyslogd[1007]: imjournal from <np0005534516:nova_compute>: begin to drop messages due to rate-limiting
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.880 253542 DEBUG nova.storage.rbd_utils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] rbd image 2274b091-0cde-4f9c-a067-0bec4dd614e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.900 253542 DEBUG nova.storage.rbd_utils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] rbd image 2274b091-0cde-4f9c-a067-0bec4dd614e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.917 253542 DEBUG nova.storage.rbd_utils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] rbd image 2274b091-0cde-4f9c-a067-0bec4dd614e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.920 253542 DEBUG oslo_concurrency.processutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.960 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.961 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.975 253542 DEBUG nova.compute.manager [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.997 253542 DEBUG oslo_concurrency.processutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.998 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:01 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.999 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:01.999 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.023 253542 DEBUG nova.storage.rbd_utils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] rbd image 2274b091-0cde-4f9c-a067-0bec4dd614e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.026 253542 DEBUG oslo_concurrency.processutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2274b091-0cde-4f9c-a067-0bec4dd614e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.058 253542 DEBUG nova.policy [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '01007a4199f147399e37549741f618ff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db308c5623d94a66b4201cf58d23c882', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.083 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.084 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.091 253542 DEBUG nova.virt.hardware [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.091 253542 INFO nova.compute.claims [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.245 253542 DEBUG oslo_concurrency.processutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.290 253542 DEBUG oslo_concurrency.processutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2274b091-0cde-4f9c-a067-0bec4dd614e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.353 253542 DEBUG nova.storage.rbd_utils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] resizing rbd image 2274b091-0cde-4f9c-a067-0bec4dd614e4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.428 253542 DEBUG nova.objects.instance [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lazy-loading 'migration_context' on Instance uuid 2274b091-0cde-4f9c-a067-0bec4dd614e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.439 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.439 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Ensure instance console log exists: /var/lib/nova/instances/2274b091-0cde-4f9c-a067-0bec4dd614e4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.439 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.440 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.440 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:02 compute-0 ceph-mon[75015]: pgmap v1590: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 15 KiB/s wr, 225 op/s
Nov 25 08:36:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/762858983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:36:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002107800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.699 253542 DEBUG oslo_concurrency.processutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.705 253542 DEBUG nova.compute.provider_tree [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.728 253542 DEBUG nova.scheduler.client.report [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.758 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.759 253542 DEBUG nova.compute.manager [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.832 253542 DEBUG nova.compute.manager [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.833 253542 DEBUG nova.network.neutron [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.864 253542 INFO nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:36:02 compute-0 nova_compute[253538]: 2025-11-25 08:36:02.986 253542 DEBUG nova.compute.manager [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.166 253542 DEBUG nova.policy [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ad675e78b1b34f1c92c57e42532c3c20', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '488c6d53000c47848dba6b7be6b4ff40', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 103 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 568 KiB/s wr, 165 op/s
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.400 253542 DEBUG nova.compute.manager [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.402 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.403 253542 INFO nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Creating image(s)
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.434 253542 DEBUG nova.storage.rbd_utils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.454 253542 DEBUG nova.storage.rbd_utils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:03 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4002107800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.481 253542 DEBUG nova.storage.rbd_utils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.484 253542 DEBUG oslo_concurrency.processutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.550 253542 DEBUG oslo_concurrency.processutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.551 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.552 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.552 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.581 253542 DEBUG nova.storage.rbd_utils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.586 253542 DEBUG oslo_concurrency.processutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.728 253542 DEBUG nova.network.neutron [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Successfully created port: 453cfd64-d4ea-4656-935f-7d19b9b2f2d7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00010810420329491437 of space, bias 1.0, pg target 0.03243126098847431 quantized to 32 (current 32)
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.3036329117015042 quantized to 32 (current 32)
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:36:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.894 253542 DEBUG oslo_concurrency.processutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.941 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059748.9247053, 9396c5ff-9457-400c-8916-ecd03eded0c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.941 253542 INFO nova.compute.manager [-] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] VM Stopped (Lifecycle Event)
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.947 253542 DEBUG nova.storage.rbd_utils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] resizing rbd image 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:36:03 compute-0 nova_compute[253538]: 2025-11-25 08:36:03.973 253542 DEBUG nova.compute.manager [None req-c24c4469-84c2-411b-9d67-c182c8023889 - - - - - -] [instance: 9396c5ff-9457-400c-8916-ecd03eded0c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:04 compute-0 nova_compute[253538]: 2025-11-25 08:36:04.022 253542 DEBUG nova.objects.instance [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'migration_context' on Instance uuid 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:04 compute-0 nova_compute[253538]: 2025-11-25 08:36:04.034 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:36:04 compute-0 nova_compute[253538]: 2025-11-25 08:36:04.034 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Ensure instance console log exists: /var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:36:04 compute-0 nova_compute[253538]: 2025-11-25 08:36:04.035 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:04 compute-0 nova_compute[253538]: 2025-11-25 08:36:04.035 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:04 compute-0 nova_compute[253538]: 2025-11-25 08:36:04.035 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:04 compute-0 ceph-mon[75015]: pgmap v1591: 321 pgs: 321 active+clean; 103 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 568 KiB/s wr, 165 op/s
Nov 25 08:36:04 compute-0 nova_compute[253538]: 2025-11-25 08:36:04.605 253542 DEBUG nova.network.neutron [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Successfully created port: 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:36:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 128 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 106 op/s
Nov 25 08:36:05 compute-0 nova_compute[253538]: 2025-11-25 08:36:05.433 253542 DEBUG nova.network.neutron [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Successfully updated port: 453cfd64-d4ea-4656-935f-7d19b9b2f2d7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:36:05 compute-0 nova_compute[253538]: 2025-11-25 08:36:05.455 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Acquiring lock "refresh_cache-2274b091-0cde-4f9c-a067-0bec4dd614e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:36:05 compute-0 nova_compute[253538]: 2025-11-25 08:36:05.455 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Acquired lock "refresh_cache-2274b091-0cde-4f9c-a067-0bec4dd614e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:36:05 compute-0 nova_compute[253538]: 2025-11-25 08:36:05.455 253542 DEBUG nova.network.neutron [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:36:05 compute-0 nova_compute[253538]: 2025-11-25 08:36:05.719 253542 DEBUG nova.network.neutron [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:36:05 compute-0 nova_compute[253538]: 2025-11-25 08:36:05.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:05 compute-0 nova_compute[253538]: 2025-11-25 08:36:05.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:05 compute-0 nova_compute[253538]: 2025-11-25 08:36:05.957 253542 DEBUG nova.compute.manager [req-50cd09ed-749b-46bd-be05-3824453449c7 req-cb9642e0-4bb1-41db-a5fa-a03c63139c6f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Received event network-changed-453cfd64-d4ea-4656-935f-7d19b9b2f2d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:05 compute-0 nova_compute[253538]: 2025-11-25 08:36:05.958 253542 DEBUG nova.compute.manager [req-50cd09ed-749b-46bd-be05-3824453449c7 req-cb9642e0-4bb1-41db-a5fa-a03c63139c6f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Refreshing instance network info cache due to event network-changed-453cfd64-d4ea-4656-935f-7d19b9b2f2d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:36:05 compute-0 nova_compute[253538]: 2025-11-25 08:36:05.958 253542 DEBUG oslo_concurrency.lockutils [req-50cd09ed-749b-46bd-be05-3824453449c7 req-cb9642e0-4bb1-41db-a5fa-a03c63139c6f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2274b091-0cde-4f9c-a067-0bec4dd614e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:36:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:36:06 compute-0 nova_compute[253538]: 2025-11-25 08:36:06.551 253542 DEBUG nova.network.neutron [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Successfully updated port: 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:36:06 compute-0 nova_compute[253538]: 2025-11-25 08:36:06.565 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "refresh_cache-884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:36:06 compute-0 nova_compute[253538]: 2025-11-25 08:36:06.565 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquired lock "refresh_cache-884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:36:06 compute-0 nova_compute[253538]: 2025-11-25 08:36:06.566 253542 DEBUG nova.network.neutron [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:36:06 compute-0 ceph-mon[75015]: pgmap v1592: 321 pgs: 321 active+clean; 128 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 106 op/s
Nov 25 08:36:06 compute-0 nova_compute[253538]: 2025-11-25 08:36:06.822 253542 DEBUG nova.network.neutron [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.155 253542 DEBUG nova.network.neutron [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Updating instance_info_cache with network_info: [{"id": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "address": "fa:16:3e:8b:0a:9c", "network": {"id": "5cbc3ec6-ae8e-4961-a680-2893fb482362", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1965213762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db308c5623d94a66b4201cf58d23c882", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap453cfd64-d4", "ovs_interfaceid": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.172 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Releasing lock "refresh_cache-2274b091-0cde-4f9c-a067-0bec4dd614e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.172 253542 DEBUG nova.compute.manager [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Instance network_info: |[{"id": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "address": "fa:16:3e:8b:0a:9c", "network": {"id": "5cbc3ec6-ae8e-4961-a680-2893fb482362", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1965213762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db308c5623d94a66b4201cf58d23c882", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap453cfd64-d4", "ovs_interfaceid": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.173 253542 DEBUG oslo_concurrency.lockutils [req-50cd09ed-749b-46bd-be05-3824453449c7 req-cb9642e0-4bb1-41db-a5fa-a03c63139c6f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2274b091-0cde-4f9c-a067-0bec4dd614e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.173 253542 DEBUG nova.network.neutron [req-50cd09ed-749b-46bd-be05-3824453449c7 req-cb9642e0-4bb1-41db-a5fa-a03c63139c6f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Refreshing network info cache for port 453cfd64-d4ea-4656-935f-7d19b9b2f2d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.180 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Start _get_guest_xml network_info=[{"id": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "address": "fa:16:3e:8b:0a:9c", "network": {"id": "5cbc3ec6-ae8e-4961-a680-2893fb482362", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1965213762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db308c5623d94a66b4201cf58d23c882", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap453cfd64-d4", "ovs_interfaceid": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.207 253542 WARNING nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.211 253542 DEBUG nova.virt.libvirt.host [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.212 253542 DEBUG nova.virt.libvirt.host [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.216 253542 DEBUG nova.virt.libvirt.host [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.217 253542 DEBUG nova.virt.libvirt.host [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.217 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.218 253542 DEBUG nova.virt.hardware [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.218 253542 DEBUG nova.virt.hardware [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.219 253542 DEBUG nova.virt.hardware [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.219 253542 DEBUG nova.virt.hardware [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.219 253542 DEBUG nova.virt.hardware [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.220 253542 DEBUG nova.virt.hardware [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.220 253542 DEBUG nova.virt.hardware [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.220 253542 DEBUG nova.virt.hardware [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.221 253542 DEBUG nova.virt.hardware [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.221 253542 DEBUG nova.virt.hardware [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.222 253542 DEBUG nova.virt.hardware [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.226 253542 DEBUG oslo_concurrency.processutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1593: 321 pgs: 321 active+clean; 163 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 935 KiB/s rd, 3.0 MiB/s wr, 111 op/s
Nov 25 08:36:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2076725826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.667 253542 DEBUG oslo_concurrency.processutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.729 253542 DEBUG nova.storage.rbd_utils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] rbd image 2274b091-0cde-4f9c-a067-0bec4dd614e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:07 compute-0 nova_compute[253538]: 2025-11-25 08:36:07.733 253542 DEBUG oslo_concurrency.processutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:07 compute-0 ceph-mon[75015]: pgmap v1593: 321 pgs: 321 active+clean; 163 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 935 KiB/s rd, 3.0 MiB/s wr, 111 op/s
Nov 25 08:36:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2076725826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2444957371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.264 253542 DEBUG oslo_concurrency.processutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.267 253542 DEBUG nova.virt.libvirt.vif [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:35:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-2147422183',display_name='tempest-ServerPasswordTestJSON-server-2147422183',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-2147422183',id=70,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db308c5623d94a66b4201cf58d23c882',ramdisk_id='',reservation_id='r-ld33cfuv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-2048964165',owner_user_name='tempest-ServerPasswordTestJSON-2048964165-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:36:01Z,user_data=None,user_id='01007a4199f147399e37549741f618ff',uuid=2274b091-0cde-4f9c-a067-0bec4dd614e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "address": "fa:16:3e:8b:0a:9c", "network": {"id": "5cbc3ec6-ae8e-4961-a680-2893fb482362", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1965213762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db308c5623d94a66b4201cf58d23c882", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap453cfd64-d4", "ovs_interfaceid": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.268 253542 DEBUG nova.network.os_vif_util [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Converting VIF {"id": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "address": "fa:16:3e:8b:0a:9c", "network": {"id": "5cbc3ec6-ae8e-4961-a680-2893fb482362", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1965213762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db308c5623d94a66b4201cf58d23c882", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap453cfd64-d4", "ovs_interfaceid": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.269 253542 DEBUG nova.network.os_vif_util [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:0a:9c,bridge_name='br-int',has_traffic_filtering=True,id=453cfd64-d4ea-4656-935f-7d19b9b2f2d7,network=Network(5cbc3ec6-ae8e-4961-a680-2893fb482362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap453cfd64-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.271 253542 DEBUG nova.objects.instance [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2274b091-0cde-4f9c-a067-0bec4dd614e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.277 253542 DEBUG nova.compute.manager [req-422fee16-2bae-49bc-b56d-136995e0fcae req-a569f086-c111-48a5-bcab-595385c45c5a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received event network-changed-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.277 253542 DEBUG nova.compute.manager [req-422fee16-2bae-49bc-b56d-136995e0fcae req-a569f086-c111-48a5-bcab-595385c45c5a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Refreshing instance network info cache due to event network-changed-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.278 253542 DEBUG oslo_concurrency.lockutils [req-422fee16-2bae-49bc-b56d-136995e0fcae req-a569f086-c111-48a5-bcab-595385c45c5a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.288 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:36:08 compute-0 nova_compute[253538]:   <uuid>2274b091-0cde-4f9c-a067-0bec4dd614e4</uuid>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   <name>instance-00000046</name>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerPasswordTestJSON-server-2147422183</nova:name>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:36:07</nova:creationTime>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:36:08 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:36:08 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:36:08 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:36:08 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:36:08 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:36:08 compute-0 nova_compute[253538]:         <nova:user uuid="01007a4199f147399e37549741f618ff">tempest-ServerPasswordTestJSON-2048964165-project-member</nova:user>
Nov 25 08:36:08 compute-0 nova_compute[253538]:         <nova:project uuid="db308c5623d94a66b4201cf58d23c882">tempest-ServerPasswordTestJSON-2048964165</nova:project>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:36:08 compute-0 nova_compute[253538]:         <nova:port uuid="453cfd64-d4ea-4656-935f-7d19b9b2f2d7">
Nov 25 08:36:08 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <system>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <entry name="serial">2274b091-0cde-4f9c-a067-0bec4dd614e4</entry>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <entry name="uuid">2274b091-0cde-4f9c-a067-0bec4dd614e4</entry>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     </system>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   <os>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   </os>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   <features>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   </features>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/2274b091-0cde-4f9c-a067-0bec4dd614e4_disk">
Nov 25 08:36:08 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:08 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/2274b091-0cde-4f9c-a067-0bec4dd614e4_disk.config">
Nov 25 08:36:08 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:08 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:8b:0a:9c"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <target dev="tap453cfd64-d4"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/2274b091-0cde-4f9c-a067-0bec4dd614e4/console.log" append="off"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <video>
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     </video>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:36:08 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:36:08 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:36:08 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:36:08 compute-0 nova_compute[253538]: </domain>
Nov 25 08:36:08 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.290 253542 DEBUG nova.compute.manager [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Preparing to wait for external event network-vif-plugged-453cfd64-d4ea-4656-935f-7d19b9b2f2d7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.291 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Acquiring lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.291 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.292 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.293 253542 DEBUG nova.virt.libvirt.vif [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:35:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-2147422183',display_name='tempest-ServerPasswordTestJSON-server-2147422183',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-2147422183',id=70,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db308c5623d94a66b4201cf58d23c882',ramdisk_id='',reservation_id='r-ld33cfuv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-2048964165',owner_user_name='tempest-ServerPasswordTestJSON-2048964165-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:36:01Z,user_data=None,user_id='01007a4199f147399e37549741f618ff',uuid=2274b091-0cde-4f9c-a067-0bec4dd614e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "address": "fa:16:3e:8b:0a:9c", "network": {"id": "5cbc3ec6-ae8e-4961-a680-2893fb482362", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1965213762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db308c5623d94a66b4201cf58d23c882", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap453cfd64-d4", "ovs_interfaceid": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.293 253542 DEBUG nova.network.os_vif_util [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Converting VIF {"id": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "address": "fa:16:3e:8b:0a:9c", "network": {"id": "5cbc3ec6-ae8e-4961-a680-2893fb482362", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1965213762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db308c5623d94a66b4201cf58d23c882", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap453cfd64-d4", "ovs_interfaceid": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.294 253542 DEBUG nova.network.os_vif_util [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:0a:9c,bridge_name='br-int',has_traffic_filtering=True,id=453cfd64-d4ea-4656-935f-7d19b9b2f2d7,network=Network(5cbc3ec6-ae8e-4961-a680-2893fb482362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap453cfd64-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.295 253542 DEBUG os_vif [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:0a:9c,bridge_name='br-int',has_traffic_filtering=True,id=453cfd64-d4ea-4656-935f-7d19b9b2f2d7,network=Network(5cbc3ec6-ae8e-4961-a680-2893fb482362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap453cfd64-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.296 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.297 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.298 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.302 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.303 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap453cfd64-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.303 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap453cfd64-d4, col_values=(('external_ids', {'iface-id': '453cfd64-d4ea-4656-935f-7d19b9b2f2d7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:0a:9c', 'vm-uuid': '2274b091-0cde-4f9c-a067-0bec4dd614e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.304 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:08 compute-0 NetworkManager[48915]: <info>  [1764059768.3056] manager: (tap453cfd64-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/300)
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.308 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.311 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.312 253542 INFO os_vif [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:0a:9c,bridge_name='br-int',has_traffic_filtering=True,id=453cfd64-d4ea-4656-935f-7d19b9b2f2d7,network=Network(5cbc3ec6-ae8e-4961-a680-2893fb482362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap453cfd64-d4')
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.371 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.372 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.372 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] No VIF found with MAC fa:16:3e:8b:0a:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.373 253542 INFO nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Using config drive
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.394 253542 DEBUG nova.storage.rbd_utils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] rbd image 2274b091-0cde-4f9c-a067-0bec4dd614e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.493 253542 DEBUG nova.network.neutron [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Updating instance_info_cache with network_info: [{"id": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "address": "fa:16:3e:a4:fe:4e", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b844d5-71", "ovs_interfaceid": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.512 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Releasing lock "refresh_cache-884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.513 253542 DEBUG nova.compute.manager [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Instance network_info: |[{"id": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "address": "fa:16:3e:a4:fe:4e", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b844d5-71", "ovs_interfaceid": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.514 253542 DEBUG oslo_concurrency.lockutils [req-422fee16-2bae-49bc-b56d-136995e0fcae req-a569f086-c111-48a5-bcab-595385c45c5a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.514 253542 DEBUG nova.network.neutron [req-422fee16-2bae-49bc-b56d-136995e0fcae req-a569f086-c111-48a5-bcab-595385c45c5a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Refreshing network info cache for port 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.520 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Start _get_guest_xml network_info=[{"id": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "address": "fa:16:3e:a4:fe:4e", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b844d5-71", "ovs_interfaceid": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.525 253542 WARNING nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.531 253542 DEBUG nova.virt.libvirt.host [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.532 253542 DEBUG nova.virt.libvirt.host [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.540 253542 DEBUG nova.virt.libvirt.host [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.541 253542 DEBUG nova.virt.libvirt.host [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.541 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.541 253542 DEBUG nova.virt.hardware [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.542 253542 DEBUG nova.virt.hardware [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.542 253542 DEBUG nova.virt.hardware [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.543 253542 DEBUG nova.virt.hardware [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.543 253542 DEBUG nova.virt.hardware [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.543 253542 DEBUG nova.virt.hardware [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.544 253542 DEBUG nova.virt.hardware [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.544 253542 DEBUG nova.virt.hardware [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.544 253542 DEBUG nova.virt.hardware [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.545 253542 DEBUG nova.virt.hardware [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.545 253542 DEBUG nova.virt.hardware [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.548 253542 DEBUG oslo_concurrency.processutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.839 253542 INFO nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Creating config drive at /var/lib/nova/instances/2274b091-0cde-4f9c-a067-0bec4dd614e4/disk.config
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.846 253542 DEBUG oslo_concurrency.processutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2274b091-0cde-4f9c-a067-0bec4dd614e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzmj4yfrq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/714958616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2444957371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:08 compute-0 nova_compute[253538]: 2025-11-25 08:36:08.966 253542 DEBUG oslo_concurrency.processutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.014 253542 DEBUG nova.storage.rbd_utils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.018 253542 DEBUG oslo_concurrency.processutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.054 253542 DEBUG oslo_concurrency.processutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2274b091-0cde-4f9c-a067-0bec4dd614e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzmj4yfrq" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.080 253542 DEBUG nova.storage.rbd_utils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] rbd image 2274b091-0cde-4f9c-a067-0bec4dd614e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.083 253542 DEBUG oslo_concurrency.processutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2274b091-0cde-4f9c-a067-0bec4dd614e4/disk.config 2274b091-0cde-4f9c-a067-0bec4dd614e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.244 253542 DEBUG oslo_concurrency.processutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2274b091-0cde-4f9c-a067-0bec4dd614e4/disk.config 2274b091-0cde-4f9c-a067-0bec4dd614e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.245 253542 INFO nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Deleting local config drive /var/lib/nova/instances/2274b091-0cde-4f9c-a067-0bec4dd614e4/disk.config because it was imported into RBD.
Nov 25 08:36:09 compute-0 kernel: tap453cfd64-d4: entered promiscuous mode
Nov 25 08:36:09 compute-0 NetworkManager[48915]: <info>  [1764059769.2902] manager: (tap453cfd64-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/301)
Nov 25 08:36:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1594: 321 pgs: 321 active+clean; 180 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 3.5 MiB/s wr, 71 op/s
Nov 25 08:36:09 compute-0 systemd-udevd[325890]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.325 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:09 compute-0 ovn_controller[152859]: 2025-11-25T08:36:09Z|00678|binding|INFO|Claiming lport 453cfd64-d4ea-4656-935f-7d19b9b2f2d7 for this chassis.
Nov 25 08:36:09 compute-0 ovn_controller[152859]: 2025-11-25T08:36:09Z|00679|binding|INFO|453cfd64-d4ea-4656-935f-7d19b9b2f2d7: Claiming fa:16:3e:8b:0a:9c 10.100.0.9
Nov 25 08:36:09 compute-0 NetworkManager[48915]: <info>  [1764059769.3335] device (tap453cfd64-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:36:09 compute-0 NetworkManager[48915]: <info>  [1764059769.3345] device (tap453cfd64-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.340 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:0a:9c 10.100.0.9'], port_security=['fa:16:3e:8b:0a:9c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2274b091-0cde-4f9c-a067-0bec4dd614e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cbc3ec6-ae8e-4961-a680-2893fb482362', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db308c5623d94a66b4201cf58d23c882', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7c2397ac-3731-4985-98f9-e8527af8874d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba89d329-3291-480b-8438-1f63d287dc6d, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=453cfd64-d4ea-4656-935f-7d19b9b2f2d7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.341 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 453cfd64-d4ea-4656-935f-7d19b9b2f2d7 in datapath 5cbc3ec6-ae8e-4961-a680-2893fb482362 bound to our chassis
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.342 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cbc3ec6-ae8e-4961-a680-2893fb482362
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.354 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4a53477a-095f-4f41-9641-1720ff9fd8b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.355 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5cbc3ec6-a1 in ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.357 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5cbc3ec6-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.357 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce9a71f-4f4a-456d-98c4-2a3d3500d542]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.358 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[77a2541f-cef2-47e9-933f-bc0b9d30dd75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:09 compute-0 systemd-machined[215790]: New machine qemu-83-instance-00000046.
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.380 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef1978b-276f-4db9-b62e-88f8822839a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:09 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-00000046.
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.405 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e3738600-e1b6-4324-8800-ee422c300db2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.410 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:09 compute-0 ovn_controller[152859]: 2025-11-25T08:36:09Z|00680|binding|INFO|Setting lport 453cfd64-d4ea-4656-935f-7d19b9b2f2d7 ovn-installed in OVS
Nov 25 08:36:09 compute-0 ovn_controller[152859]: 2025-11-25T08:36:09Z|00681|binding|INFO|Setting lport 453cfd64-d4ea-4656-935f-7d19b9b2f2d7 up in Southbound
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.416 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.435 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[43e84beb-a38e-4415-b1a5-eb6ec7cabe5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:09 compute-0 NetworkManager[48915]: <info>  [1764059769.4546] manager: (tap5cbc3ec6-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/302)
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.453 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba262d5-deef-49dd-ac55-959f7549f515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2140653896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.495 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[cce38dbc-4684-434e-a32c-449cd60f0b2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.498 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3c180cf0-d8b6-45f6-b6bd-055a60c5d0a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.505 253542 DEBUG oslo_concurrency.processutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.507 253542 DEBUG nova.virt.libvirt.vif [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:36:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1272068251',display_name='tempest-ServerRescueTestJSON-server-1272068251',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1272068251',id=71,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='488c6d53000c47848dba6b7be6b4ff40',ramdisk_id='',reservation_id='r-eevo0tl5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-324239197',owner_user_name='tempest-ServerRescueTestJSON-324239197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:36:03Z,user_data=None,user_id='ad675e78b1b34f1c92c57e42532c3c20',uuid=884b0bf9-764d-4aa8-8bcb-c9e8644a0dad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "address": "fa:16:3e:a4:fe:4e", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b844d5-71", "ovs_interfaceid": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.507 253542 DEBUG nova.network.os_vif_util [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Converting VIF {"id": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "address": "fa:16:3e:a4:fe:4e", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b844d5-71", "ovs_interfaceid": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.508 253542 DEBUG nova.network.os_vif_util [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:fe:4e,bridge_name='br-int',has_traffic_filtering=True,id=88b844d5-7175-4dc1-92cc-d7a4d59e1d1a,network=Network(6b2becc8-2b4e-4727-a105-454acfe44b89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b844d5-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.509 253542 DEBUG nova.objects.instance [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'pci_devices' on Instance uuid 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.530 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:36:09 compute-0 nova_compute[253538]:   <uuid>884b0bf9-764d-4aa8-8bcb-c9e8644a0dad</uuid>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   <name>instance-00000047</name>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerRescueTestJSON-server-1272068251</nova:name>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:36:08</nova:creationTime>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:36:09 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:36:09 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:36:09 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:36:09 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:36:09 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:36:09 compute-0 nova_compute[253538]:         <nova:user uuid="ad675e78b1b34f1c92c57e42532c3c20">tempest-ServerRescueTestJSON-324239197-project-member</nova:user>
Nov 25 08:36:09 compute-0 nova_compute[253538]:         <nova:project uuid="488c6d53000c47848dba6b7be6b4ff40">tempest-ServerRescueTestJSON-324239197</nova:project>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:36:09 compute-0 nova_compute[253538]:         <nova:port uuid="88b844d5-7175-4dc1-92cc-d7a4d59e1d1a">
Nov 25 08:36:09 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <system>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <entry name="serial">884b0bf9-764d-4aa8-8bcb-c9e8644a0dad</entry>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <entry name="uuid">884b0bf9-764d-4aa8-8bcb-c9e8644a0dad</entry>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     </system>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   <os>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   </os>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   <features>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   </features>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk">
Nov 25 08:36:09 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:09 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.config">
Nov 25 08:36:09 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:09 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:a4:fe:4e"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <target dev="tap88b844d5-71"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad/console.log" append="off"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <video>
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     </video>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:36:09 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:36:09 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:36:09 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:36:09 compute-0 nova_compute[253538]: </domain>
Nov 25 08:36:09 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.530 253542 DEBUG nova.compute.manager [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Preparing to wait for external event network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.531 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.531 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.531 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.532 253542 DEBUG nova.virt.libvirt.vif [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:36:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1272068251',display_name='tempest-ServerRescueTestJSON-server-1272068251',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1272068251',id=71,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='488c6d53000c47848dba6b7be6b4ff40',ramdisk_id='',reservation_id='r-eevo0tl5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-324239197',owner_user_name='tempest-ServerRescueTestJSON-324239197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:36:03Z,user_data=None,user_id='ad675e78b1b34f1c92c57e42532c3c20',uuid=884b0bf9-764d-4aa8-8bcb-c9e8644a0dad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "address": "fa:16:3e:a4:fe:4e", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b844d5-71", "ovs_interfaceid": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.533 253542 DEBUG nova.network.os_vif_util [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Converting VIF {"id": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "address": "fa:16:3e:a4:fe:4e", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b844d5-71", "ovs_interfaceid": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.533 253542 DEBUG nova.network.os_vif_util [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:fe:4e,bridge_name='br-int',has_traffic_filtering=True,id=88b844d5-7175-4dc1-92cc-d7a4d59e1d1a,network=Network(6b2becc8-2b4e-4727-a105-454acfe44b89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b844d5-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.534 253542 DEBUG os_vif [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:fe:4e,bridge_name='br-int',has_traffic_filtering=True,id=88b844d5-7175-4dc1-92cc-d7a4d59e1d1a,network=Network(6b2becc8-2b4e-4727-a105-454acfe44b89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b844d5-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.534 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.535 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.535 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:36:09 compute-0 NetworkManager[48915]: <info>  [1764059769.5381] device (tap5cbc3ec6-a0): carrier: link connected
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.538 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.538 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88b844d5-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.538 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88b844d5-71, col_values=(('external_ids', {'iface-id': '88b844d5-7175-4dc1-92cc-d7a4d59e1d1a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:fe:4e', 'vm-uuid': '884b0bf9-764d-4aa8-8bcb-c9e8644a0dad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:09 compute-0 NetworkManager[48915]: <info>  [1764059769.5406] manager: (tap88b844d5-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.541 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.545 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[16e106c4-c3a7-4123-b01d-148e17a7c001]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.547 253542 INFO os_vif [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:fe:4e,bridge_name='br-int',has_traffic_filtering=True,id=88b844d5-7175-4dc1-92cc-d7a4d59e1d1a,network=Network(6b2becc8-2b4e-4727-a105-454acfe44b89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b844d5-71')
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.560 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3214af-c228-4187-a383-24a25823a0d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cbc3ec6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:54:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 207], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512413, 'reachable_time': 25543, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325930, 'error': None, 'target': 'ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.576 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[71d5de29-0ba7-4bff-9c07-1c8117a4fe45]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea2:5407'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512413, 'tstamp': 512413}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325931, 'error': None, 'target': 'ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.591 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9c61e0-d946-43a7-aaff-0253f4a6cfd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cbc3ec6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:54:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 207], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512413, 'reachable_time': 25543, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325932, 'error': None, 'target': 'ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.619 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.619 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.620 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] No VIF found with MAC fa:16:3e:a4:fe:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.620 253542 INFO nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Using config drive
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.625 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc62553-1d1b-464d-a126-7f56a0abce44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.650 253542 DEBUG nova.storage.rbd_utils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.689 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[039bdce4-7d99-44e6-8dbf-16a40d02326f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.691 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cbc3ec6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.692 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.693 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cbc3ec6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.695 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:09 compute-0 NetworkManager[48915]: <info>  [1764059769.6967] manager: (tap5cbc3ec6-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Nov 25 08:36:09 compute-0 kernel: tap5cbc3ec6-a0: entered promiscuous mode
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.701 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.702 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cbc3ec6-a0, col_values=(('external_ids', {'iface-id': 'ed9b05cb-224d-49ac-a6d7-cc526992c633'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:09 compute-0 ovn_controller[152859]: 2025-11-25T08:36:09Z|00682|binding|INFO|Releasing lport ed9b05cb-224d-49ac-a6d7-cc526992c633 from this chassis (sb_readonly=0)
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.703 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.722 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.727 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5cbc3ec6-ae8e-4961-a680-2893fb482362.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5cbc3ec6-ae8e-4961-a680-2893fb482362.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.729 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4517aa6e-c82d-496d-a66c-3aaf192eaeb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.730 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-5cbc3ec6-ae8e-4961-a680-2893fb482362
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/5cbc3ec6-ae8e-4961-a680-2893fb482362.pid.haproxy
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 5cbc3ec6-ae8e-4961-a680-2893fb482362
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:36:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:09.734 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362', 'env', 'PROCESS_TAG=haproxy-5cbc3ec6-ae8e-4961-a680-2893fb482362', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5cbc3ec6-ae8e-4961-a680-2893fb482362.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.948 253542 DEBUG nova.network.neutron [req-50cd09ed-749b-46bd-be05-3824453449c7 req-cb9642e0-4bb1-41db-a5fa-a03c63139c6f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Updated VIF entry in instance network info cache for port 453cfd64-d4ea-4656-935f-7d19b9b2f2d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.949 253542 DEBUG nova.network.neutron [req-50cd09ed-749b-46bd-be05-3824453449c7 req-cb9642e0-4bb1-41db-a5fa-a03c63139c6f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Updating instance_info_cache with network_info: [{"id": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "address": "fa:16:3e:8b:0a:9c", "network": {"id": "5cbc3ec6-ae8e-4961-a680-2893fb482362", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1965213762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db308c5623d94a66b4201cf58d23c882", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap453cfd64-d4", "ovs_interfaceid": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:09 compute-0 nova_compute[253538]: 2025-11-25 08:36:09.964 253542 DEBUG oslo_concurrency.lockutils [req-50cd09ed-749b-46bd-be05-3824453449c7 req-cb9642e0-4bb1-41db-a5fa-a03c63139c6f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2274b091-0cde-4f9c-a067-0bec4dd614e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.014 253542 INFO nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Creating config drive at /var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad/disk.config
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.019 253542 DEBUG oslo_concurrency.processutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7tg3tdy3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/714958616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:10 compute-0 ceph-mon[75015]: pgmap v1594: 321 pgs: 321 active+clean; 180 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 3.5 MiB/s wr, 71 op/s
Nov 25 08:36:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2140653896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.055 253542 DEBUG nova.network.neutron [req-422fee16-2bae-49bc-b56d-136995e0fcae req-a569f086-c111-48a5-bcab-595385c45c5a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Updated VIF entry in instance network info cache for port 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.056 253542 DEBUG nova.network.neutron [req-422fee16-2bae-49bc-b56d-136995e0fcae req-a569f086-c111-48a5-bcab-595385c45c5a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Updating instance_info_cache with network_info: [{"id": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "address": "fa:16:3e:a4:fe:4e", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b844d5-71", "ovs_interfaceid": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.081 253542 DEBUG oslo_concurrency.lockutils [req-422fee16-2bae-49bc-b56d-136995e0fcae req-a569f086-c111-48a5-bcab-595385c45c5a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.157 253542 DEBUG nova.compute.manager [req-3d0efe06-e117-4534-b23e-dcad57f263fa req-625d7b69-ca33-4b52-b216-1493ef1e16c4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Received event network-vif-plugged-453cfd64-d4ea-4656-935f-7d19b9b2f2d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.157 253542 DEBUG oslo_concurrency.lockutils [req-3d0efe06-e117-4534-b23e-dcad57f263fa req-625d7b69-ca33-4b52-b216-1493ef1e16c4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.157 253542 DEBUG oslo_concurrency.lockutils [req-3d0efe06-e117-4534-b23e-dcad57f263fa req-625d7b69-ca33-4b52-b216-1493ef1e16c4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.158 253542 DEBUG oslo_concurrency.lockutils [req-3d0efe06-e117-4534-b23e-dcad57f263fa req-625d7b69-ca33-4b52-b216-1493ef1e16c4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.158 253542 DEBUG nova.compute.manager [req-3d0efe06-e117-4534-b23e-dcad57f263fa req-625d7b69-ca33-4b52-b216-1493ef1e16c4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Processing event network-vif-plugged-453cfd64-d4ea-4656-935f-7d19b9b2f2d7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.164 253542 DEBUG oslo_concurrency.processutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7tg3tdy3" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:10 compute-0 podman[326024]: 2025-11-25 08:36:10.121861408 +0000 UTC m=+0.024399207 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.244 253542 DEBUG nova.storage.rbd_utils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.247 253542 DEBUG oslo_concurrency.processutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad/disk.config 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.289 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059770.26775, 2274b091-0cde-4f9c-a067-0bec4dd614e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.290 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] VM Started (Lifecycle Event)
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.297 253542 DEBUG nova.compute.manager [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.303 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.315 253542 INFO nova.virt.libvirt.driver [-] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Instance spawned successfully.
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.316 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.320 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.323 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.336 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.336 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059770.2682548, 2274b091-0cde-4f9c-a067-0bec4dd614e4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.337 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] VM Paused (Lifecycle Event)
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.342 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.342 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.343 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.343 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.344 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.345 253542 DEBUG nova.virt.libvirt.driver [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.349 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.353 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059770.3020456, 2274b091-0cde-4f9c-a067-0bec4dd614e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.353 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] VM Resumed (Lifecycle Event)
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.381 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.384 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.402 253542 INFO nova.compute.manager [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Took 8.54 seconds to spawn the instance on the hypervisor.
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.403 253542 DEBUG nova.compute.manager [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.405 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:36:10 compute-0 podman[326024]: 2025-11-25 08:36:10.427865892 +0000 UTC m=+0.330403651 container create 34a90a18d3ecf4e6793bba2fab0be0c1d5003909ec6aa9e7757f16821921bd33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.458 253542 INFO nova.compute.manager [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Took 9.44 seconds to build instance.
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.471 253542 DEBUG oslo_concurrency.lockutils [None req-3b99383b-0b20-4892-abc5-089d79671081 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:10 compute-0 systemd[1]: Started libpod-conmon-34a90a18d3ecf4e6793bba2fab0be0c1d5003909ec6aa9e7757f16821921bd33.scope.
Nov 25 08:36:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:36:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4bcfab369b68938cf2cec994b7db58753fe7c908345e8b79fce0e986aa36c57/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:36:10 compute-0 podman[326076]: 2025-11-25 08:36:10.615131895 +0000 UTC m=+0.156237139 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 08:36:10 compute-0 podman[326024]: 2025-11-25 08:36:10.673173916 +0000 UTC m=+0.575711675 container init 34a90a18d3ecf4e6793bba2fab0be0c1d5003909ec6aa9e7757f16821921bd33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:36:10 compute-0 podman[326024]: 2025-11-25 08:36:10.679451764 +0000 UTC m=+0.581989533 container start 34a90a18d3ecf4e6793bba2fab0be0c1d5003909ec6aa9e7757f16821921bd33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 25 08:36:10 compute-0 neutron-haproxy-ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362[326091]: [NOTICE]   (326101) : New worker (326103) forked
Nov 25 08:36:10 compute-0 neutron-haproxy-ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362[326091]: [NOTICE]   (326101) : Loading success.
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.801 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.853 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059755.851816, 4e9d3984-d789-45e1-83e3-8909597d3265 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.853 253542 INFO nova.compute.manager [-] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] VM Stopped (Lifecycle Event)
Nov 25 08:36:10 compute-0 nova_compute[253538]: 2025-11-25 08:36:10.870 253542 DEBUG nova.compute.manager [None req-ff04fb2f-4ac4-4f6d-87ab-ada81bc37a03 - - - - - -] [instance: 4e9d3984-d789-45e1-83e3-8909597d3265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:36:11 compute-0 nova_compute[253538]: 2025-11-25 08:36:11.175 253542 DEBUG oslo_concurrency.processutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad/disk.config 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.928s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:11 compute-0 nova_compute[253538]: 2025-11-25 08:36:11.177 253542 INFO nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Deleting local config drive /var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad/disk.config because it was imported into RBD.
Nov 25 08:36:11 compute-0 NetworkManager[48915]: <info>  [1764059771.2439] manager: (tap88b844d5-71): new Tun device (/org/freedesktop/NetworkManager/Devices/305)
Nov 25 08:36:11 compute-0 kernel: tap88b844d5-71: entered promiscuous mode
Nov 25 08:36:11 compute-0 nova_compute[253538]: 2025-11-25 08:36:11.246 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:11 compute-0 ovn_controller[152859]: 2025-11-25T08:36:11Z|00683|binding|INFO|Claiming lport 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a for this chassis.
Nov 25 08:36:11 compute-0 ovn_controller[152859]: 2025-11-25T08:36:11Z|00684|binding|INFO|88b844d5-7175-4dc1-92cc-d7a4d59e1d1a: Claiming fa:16:3e:a4:fe:4e 10.100.0.5
Nov 25 08:36:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:11.259 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:fe:4e 10.100.0.5'], port_security=['fa:16:3e:a4:fe:4e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '884b0bf9-764d-4aa8-8bcb-c9e8644a0dad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b2becc8-2b4e-4727-a105-454acfe44b89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '488c6d53000c47848dba6b7be6b4ff40', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f93c15b0-dfea-420d-abd0-8856b9b0a2b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b19baa8b-ade0-4619-a5ed-df7feb0c6cc0, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=88b844d5-7175-4dc1-92cc-d7a4d59e1d1a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:36:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:11.260 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a in datapath 6b2becc8-2b4e-4727-a105-454acfe44b89 bound to our chassis
Nov 25 08:36:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:11.261 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6b2becc8-2b4e-4727-a105-454acfe44b89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 08:36:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:11.262 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdd0ad6-0b95-4315-8ff3-02764745a510]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:11 compute-0 NetworkManager[48915]: <info>  [1764059771.2710] device (tap88b844d5-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:36:11 compute-0 NetworkManager[48915]: <info>  [1764059771.2720] device (tap88b844d5-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:36:11 compute-0 systemd-machined[215790]: New machine qemu-84-instance-00000047.
Nov 25 08:36:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1595: 321 pgs: 321 active+clean; 180 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 61 op/s
Nov 25 08:36:11 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-00000047.
Nov 25 08:36:11 compute-0 nova_compute[253538]: 2025-11-25 08:36:11.344 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:11 compute-0 ovn_controller[152859]: 2025-11-25T08:36:11Z|00685|binding|INFO|Setting lport 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a ovn-installed in OVS
Nov 25 08:36:11 compute-0 ovn_controller[152859]: 2025-11-25T08:36:11Z|00686|binding|INFO|Setting lport 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a up in Southbound
Nov 25 08:36:11 compute-0 nova_compute[253538]: 2025-11-25 08:36:11.349 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:11 compute-0 nova_compute[253538]: 2025-11-25 08:36:11.824 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059771.8243184, 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:11 compute-0 nova_compute[253538]: 2025-11-25 08:36:11.825 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] VM Started (Lifecycle Event)
Nov 25 08:36:11 compute-0 nova_compute[253538]: 2025-11-25 08:36:11.852 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:11 compute-0 nova_compute[253538]: 2025-11-25 08:36:11.855 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059771.8251314, 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:11 compute-0 nova_compute[253538]: 2025-11-25 08:36:11.855 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] VM Paused (Lifecycle Event)
Nov 25 08:36:11 compute-0 nova_compute[253538]: 2025-11-25 08:36:11.871 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:11 compute-0 nova_compute[253538]: 2025-11-25 08:36:11.873 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:11 compute-0 nova_compute[253538]: 2025-11-25 08:36:11.887 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.344 253542 DEBUG nova.compute.manager [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Received event network-vif-plugged-453cfd64-d4ea-4656-935f-7d19b9b2f2d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.344 253542 DEBUG oslo_concurrency.lockutils [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.344 253542 DEBUG oslo_concurrency.lockutils [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.345 253542 DEBUG oslo_concurrency.lockutils [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.345 253542 DEBUG nova.compute.manager [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] No waiting events found dispatching network-vif-plugged-453cfd64-d4ea-4656-935f-7d19b9b2f2d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.345 253542 WARNING nova.compute.manager [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Received unexpected event network-vif-plugged-453cfd64-d4ea-4656-935f-7d19b9b2f2d7 for instance with vm_state active and task_state None.
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.345 253542 DEBUG nova.compute.manager [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received event network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.345 253542 DEBUG oslo_concurrency.lockutils [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.346 253542 DEBUG oslo_concurrency.lockutils [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.346 253542 DEBUG oslo_concurrency.lockutils [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.346 253542 DEBUG nova.compute.manager [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Processing event network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.346 253542 DEBUG nova.compute.manager [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received event network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.347 253542 DEBUG oslo_concurrency.lockutils [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.347 253542 DEBUG oslo_concurrency.lockutils [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.347 253542 DEBUG oslo_concurrency.lockutils [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.347 253542 DEBUG nova.compute.manager [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] No waiting events found dispatching network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.348 253542 WARNING nova.compute.manager [req-4ccca4fa-2b44-46df-8fd9-8aeea4a12e80 req-48db4528-67bc-41ee-b2a6-684bfd5cee76 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received unexpected event network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a for instance with vm_state building and task_state spawning.
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.348 253542 DEBUG nova.compute.manager [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.351 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059772.3514857, 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.351 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] VM Resumed (Lifecycle Event)
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.353 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.355 253542 INFO nova.virt.libvirt.driver [-] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Instance spawned successfully.
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.356 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.375 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.379 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.379 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.380 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.380 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.381 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.381 253542 DEBUG nova.virt.libvirt.driver [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.385 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.410 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:36:12 compute-0 ceph-mon[75015]: pgmap v1595: 321 pgs: 321 active+clean; 180 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 61 op/s
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.438 253542 INFO nova.compute.manager [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Took 9.04 seconds to spawn the instance on the hypervisor.
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.439 253542 DEBUG nova.compute.manager [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.509 253542 INFO nova.compute.manager [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Took 10.45 seconds to build instance.
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.526 253542 DEBUG oslo_concurrency.lockutils [None req-2faec43d-7c54-4836-8668-ee47bf58f13f ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.642 253542 DEBUG oslo_concurrency.lockutils [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Acquiring lock "2274b091-0cde-4f9c-a067-0bec4dd614e4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.643 253542 DEBUG oslo_concurrency.lockutils [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.643 253542 DEBUG oslo_concurrency.lockutils [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Acquiring lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.644 253542 DEBUG oslo_concurrency.lockutils [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.645 253542 DEBUG oslo_concurrency.lockutils [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.645 253542 INFO nova.compute.manager [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Terminating instance
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.646 253542 DEBUG nova.compute.manager [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:36:12 compute-0 kernel: tap453cfd64-d4 (unregistering): left promiscuous mode
Nov 25 08:36:12 compute-0 NetworkManager[48915]: <info>  [1764059772.8095] device (tap453cfd64-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.817 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:12 compute-0 ovn_controller[152859]: 2025-11-25T08:36:12Z|00687|binding|INFO|Releasing lport 453cfd64-d4ea-4656-935f-7d19b9b2f2d7 from this chassis (sb_readonly=0)
Nov 25 08:36:12 compute-0 ovn_controller[152859]: 2025-11-25T08:36:12Z|00688|binding|INFO|Setting lport 453cfd64-d4ea-4656-935f-7d19b9b2f2d7 down in Southbound
Nov 25 08:36:12 compute-0 ovn_controller[152859]: 2025-11-25T08:36:12Z|00689|binding|INFO|Removing iface tap453cfd64-d4 ovn-installed in OVS
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.820 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:12.835 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:0a:9c 10.100.0.9'], port_security=['fa:16:3e:8b:0a:9c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2274b091-0cde-4f9c-a067-0bec4dd614e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cbc3ec6-ae8e-4961-a680-2893fb482362', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db308c5623d94a66b4201cf58d23c882', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7c2397ac-3731-4985-98f9-e8527af8874d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba89d329-3291-480b-8438-1f63d287dc6d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=453cfd64-d4ea-4656-935f-7d19b9b2f2d7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:36:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:12.836 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 453cfd64-d4ea-4656-935f-7d19b9b2f2d7 in datapath 5cbc3ec6-ae8e-4961-a680-2893fb482362 unbound from our chassis
Nov 25 08:36:12 compute-0 nova_compute[253538]: 2025-11-25 08:36:12.835 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:12.837 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cbc3ec6-ae8e-4961-a680-2893fb482362, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:36:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:12.839 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3e6f1977-f22b-4e83-b1ce-b33e315ad695]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:12.839 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362 namespace which is not needed anymore
Nov 25 08:36:12 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000046.scope: Deactivated successfully.
Nov 25 08:36:12 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000046.scope: Consumed 3.006s CPU time.
Nov 25 08:36:12 compute-0 systemd-machined[215790]: Machine qemu-83-instance-00000046 terminated.
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.030 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.030 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.047 253542 DEBUG nova.compute.manager [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:36:13 compute-0 neutron-haproxy-ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362[326091]: [NOTICE]   (326101) : haproxy version is 2.8.14-c23fe91
Nov 25 08:36:13 compute-0 neutron-haproxy-ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362[326091]: [NOTICE]   (326101) : path to executable is /usr/sbin/haproxy
Nov 25 08:36:13 compute-0 neutron-haproxy-ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362[326091]: [WARNING]  (326101) : Exiting Master process...
Nov 25 08:36:13 compute-0 neutron-haproxy-ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362[326091]: [ALERT]    (326101) : Current worker (326103) exited with code 143 (Terminated)
Nov 25 08:36:13 compute-0 neutron-haproxy-ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362[326091]: [WARNING]  (326101) : All workers exited. Exiting... (0)
Nov 25 08:36:13 compute-0 systemd[1]: libpod-34a90a18d3ecf4e6793bba2fab0be0c1d5003909ec6aa9e7757f16821921bd33.scope: Deactivated successfully.
Nov 25 08:36:13 compute-0 podman[326200]: 2025-11-25 08:36:13.08813762 +0000 UTC m=+0.122628067 container died 34a90a18d3ecf4e6793bba2fab0be0c1d5003909ec6aa9e7757f16821921bd33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.094 253542 INFO nova.virt.libvirt.driver [-] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Instance destroyed successfully.
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.096 253542 DEBUG nova.objects.instance [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lazy-loading 'resources' on Instance uuid 2274b091-0cde-4f9c-a067-0bec4dd614e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.107 253542 DEBUG nova.virt.libvirt.vif [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:35:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-2147422183',display_name='tempest-ServerPasswordTestJSON-server-2147422183',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-2147422183',id=70,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:36:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db308c5623d94a66b4201cf58d23c882',ramdisk_id='',reservation_id='r-ld33cfuv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-2048964165',owner_user_name='tempest-ServerPasswordTestJSON-2048964165-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:36:12Z,user_data=None,user_id='01007a4199f147399e37549741f618ff',uuid=2274b091-0cde-4f9c-a067-0bec4dd614e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "address": "fa:16:3e:8b:0a:9c", "network": {"id": "5cbc3ec6-ae8e-4961-a680-2893fb482362", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1965213762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db308c5623d94a66b4201cf58d23c882", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap453cfd64-d4", "ovs_interfaceid": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.108 253542 DEBUG nova.network.os_vif_util [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Converting VIF {"id": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "address": "fa:16:3e:8b:0a:9c", "network": {"id": "5cbc3ec6-ae8e-4961-a680-2893fb482362", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1965213762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db308c5623d94a66b4201cf58d23c882", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap453cfd64-d4", "ovs_interfaceid": "453cfd64-d4ea-4656-935f-7d19b9b2f2d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.109 253542 DEBUG nova.network.os_vif_util [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:0a:9c,bridge_name='br-int',has_traffic_filtering=True,id=453cfd64-d4ea-4656-935f-7d19b9b2f2d7,network=Network(5cbc3ec6-ae8e-4961-a680-2893fb482362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap453cfd64-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.109 253542 DEBUG os_vif [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:0a:9c,bridge_name='br-int',has_traffic_filtering=True,id=453cfd64-d4ea-4656-935f-7d19b9b2f2d7,network=Network(5cbc3ec6-ae8e-4961-a680-2893fb482362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap453cfd64-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.111 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.111 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap453cfd64-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.113 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.117 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.117 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.118 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.120 253542 INFO os_vif [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:0a:9c,bridge_name='br-int',has_traffic_filtering=True,id=453cfd64-d4ea-4656-935f-7d19b9b2f2d7,network=Network(5cbc3ec6-ae8e-4961-a680-2893fb482362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap453cfd64-d4')
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.141 253542 DEBUG nova.virt.hardware [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.141 253542 INFO nova.compute.claims [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:36:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-34a90a18d3ecf4e6793bba2fab0be0c1d5003909ec6aa9e7757f16821921bd33-userdata-shm.mount: Deactivated successfully.
Nov 25 08:36:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4bcfab369b68938cf2cec994b7db58753fe7c908345e8b79fce0e986aa36c57-merged.mount: Deactivated successfully.
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.253 253542 DEBUG oslo_concurrency.processutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 181 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 844 KiB/s rd, 3.6 MiB/s wr, 94 op/s
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.419 253542 INFO nova.compute.manager [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Rescuing
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.420 253542 DEBUG oslo_concurrency.lockutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "refresh_cache-884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.420 253542 DEBUG oslo_concurrency.lockutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquired lock "refresh_cache-884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.421 253542 DEBUG nova.network.neutron [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:36:13 compute-0 podman[326200]: 2025-11-25 08:36:13.428540609 +0000 UTC m=+0.463031026 container cleanup 34a90a18d3ecf4e6793bba2fab0be0c1d5003909ec6aa9e7757f16821921bd33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:36:13 compute-0 systemd[1]: libpod-conmon-34a90a18d3ecf4e6793bba2fab0be0c1d5003909ec6aa9e7757f16821921bd33.scope: Deactivated successfully.
Nov 25 08:36:13 compute-0 podman[326277]: 2025-11-25 08:36:13.506189236 +0000 UTC m=+0.049157212 container remove 34a90a18d3ecf4e6793bba2fab0be0c1d5003909ec6aa9e7757f16821921bd33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:36:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:13.512 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[671e526c-b7d6-411d-b602-176d53c7f0c2]: (4, ('Tue Nov 25 08:36:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362 (34a90a18d3ecf4e6793bba2fab0be0c1d5003909ec6aa9e7757f16821921bd33)\n34a90a18d3ecf4e6793bba2fab0be0c1d5003909ec6aa9e7757f16821921bd33\nTue Nov 25 08:36:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362 (34a90a18d3ecf4e6793bba2fab0be0c1d5003909ec6aa9e7757f16821921bd33)\n34a90a18d3ecf4e6793bba2fab0be0c1d5003909ec6aa9e7757f16821921bd33\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:13.514 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a4352b87-0484-4a0e-ab43-37640ed86e44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:13.515 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cbc3ec6-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.517 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:13 compute-0 kernel: tap5cbc3ec6-a0: left promiscuous mode
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.535 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:13.537 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fac9dad4-8924-472f-80d6-9bed23d8a03b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:13.553 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[185c31ce-5f65-4bec-81a8-f296f9079d10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:13.556 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f9fb95-3ccf-4f8e-828a-52101a5250c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:13.576 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[48612ec2-0378-4a44-92e6-db7d31a4975c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512402, 'reachable_time': 25872, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326291, 'error': None, 'target': 'ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d5cbc3ec6\x2dae8e\x2d4961\x2da680\x2d2893fb482362.mount: Deactivated successfully.
Nov 25 08:36:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:13.579 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5cbc3ec6-ae8e-4961-a680-2893fb482362 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:36:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:13.579 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[284f2c96-7f50-4e6d-8c79-f3929bf309e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:36:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2153778789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.793 253542 DEBUG oslo_concurrency.processutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.799 253542 DEBUG nova.compute.provider_tree [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.810 253542 DEBUG nova.scheduler.client.report [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.829 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.829 253542 DEBUG nova.compute.manager [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.868 253542 DEBUG nova.compute.manager [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.869 253542 DEBUG nova.network.neutron [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.884 253542 INFO nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.899 253542 DEBUG nova.compute.manager [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.996 253542 DEBUG nova.compute.manager [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.997 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:36:13 compute-0 nova_compute[253538]: 2025-11-25 08:36:13.998 253542 INFO nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Creating image(s)
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.050 253542 DEBUG nova.storage.rbd_utils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image 6c10a34e-4126-4e88-ad4d-ba7c407a379e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.105 253542 DEBUG nova.storage.rbd_utils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image 6c10a34e-4126-4e88-ad4d-ba7c407a379e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.129 253542 DEBUG nova.storage.rbd_utils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image 6c10a34e-4126-4e88-ad4d-ba7c407a379e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.132 253542 DEBUG oslo_concurrency.processutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.175 253542 DEBUG nova.policy [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee3a6261ded642fa9ef617b29b026d86', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aa56d31750374b64b67d1be19bb4e989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.203 253542 DEBUG oslo_concurrency.processutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.204 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.205 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.205 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.224 253542 DEBUG nova.storage.rbd_utils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image 6c10a34e-4126-4e88-ad4d-ba7c407a379e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.227 253542 DEBUG oslo_concurrency.processutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 6c10a34e-4126-4e88-ad4d-ba7c407a379e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.467 253542 DEBUG nova.compute.manager [req-f3888902-708d-4b96-9285-f5fb018adea6 req-6436750f-4348-4f8f-978f-7c055ef0e0af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Received event network-vif-unplugged-453cfd64-d4ea-4656-935f-7d19b9b2f2d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.467 253542 DEBUG oslo_concurrency.lockutils [req-f3888902-708d-4b96-9285-f5fb018adea6 req-6436750f-4348-4f8f-978f-7c055ef0e0af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.467 253542 DEBUG oslo_concurrency.lockutils [req-f3888902-708d-4b96-9285-f5fb018adea6 req-6436750f-4348-4f8f-978f-7c055ef0e0af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.468 253542 DEBUG oslo_concurrency.lockutils [req-f3888902-708d-4b96-9285-f5fb018adea6 req-6436750f-4348-4f8f-978f-7c055ef0e0af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.468 253542 DEBUG nova.compute.manager [req-f3888902-708d-4b96-9285-f5fb018adea6 req-6436750f-4348-4f8f-978f-7c055ef0e0af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] No waiting events found dispatching network-vif-unplugged-453cfd64-d4ea-4656-935f-7d19b9b2f2d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.468 253542 DEBUG nova.compute.manager [req-f3888902-708d-4b96-9285-f5fb018adea6 req-6436750f-4348-4f8f-978f-7c055ef0e0af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Received event network-vif-unplugged-453cfd64-d4ea-4656-935f-7d19b9b2f2d7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.468 253542 DEBUG nova.compute.manager [req-f3888902-708d-4b96-9285-f5fb018adea6 req-6436750f-4348-4f8f-978f-7c055ef0e0af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Received event network-vif-plugged-453cfd64-d4ea-4656-935f-7d19b9b2f2d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.468 253542 DEBUG oslo_concurrency.lockutils [req-f3888902-708d-4b96-9285-f5fb018adea6 req-6436750f-4348-4f8f-978f-7c055ef0e0af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.469 253542 DEBUG oslo_concurrency.lockutils [req-f3888902-708d-4b96-9285-f5fb018adea6 req-6436750f-4348-4f8f-978f-7c055ef0e0af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.469 253542 DEBUG oslo_concurrency.lockutils [req-f3888902-708d-4b96-9285-f5fb018adea6 req-6436750f-4348-4f8f-978f-7c055ef0e0af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.469 253542 DEBUG nova.compute.manager [req-f3888902-708d-4b96-9285-f5fb018adea6 req-6436750f-4348-4f8f-978f-7c055ef0e0af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] No waiting events found dispatching network-vif-plugged-453cfd64-d4ea-4656-935f-7d19b9b2f2d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.469 253542 WARNING nova.compute.manager [req-f3888902-708d-4b96-9285-f5fb018adea6 req-6436750f-4348-4f8f-978f-7c055ef0e0af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Received unexpected event network-vif-plugged-453cfd64-d4ea-4656-935f-7d19b9b2f2d7 for instance with vm_state active and task_state deleting.
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.591 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.591 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.592 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.611 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.611 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 08:36:14 compute-0 nova_compute[253538]: 2025-11-25 08:36:14.611 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:36:14 compute-0 ceph-mon[75015]: pgmap v1596: 321 pgs: 321 active+clean; 181 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 844 KiB/s rd, 3.6 MiB/s wr, 94 op/s
Nov 25 08:36:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2153778789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1597: 321 pgs: 321 active+clean; 163 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.0 MiB/s wr, 139 op/s
Nov 25 08:36:15 compute-0 nova_compute[253538]: 2025-11-25 08:36:15.689 253542 DEBUG oslo_concurrency.processutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 6c10a34e-4126-4e88-ad4d-ba7c407a379e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:15 compute-0 ceph-mon[75015]: pgmap v1597: 321 pgs: 321 active+clean; 163 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.0 MiB/s wr, 139 op/s
Nov 25 08:36:15 compute-0 nova_compute[253538]: 2025-11-25 08:36:15.770 253542 INFO nova.virt.libvirt.driver [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Deleting instance files /var/lib/nova/instances/2274b091-0cde-4f9c-a067-0bec4dd614e4_del
Nov 25 08:36:15 compute-0 nova_compute[253538]: 2025-11-25 08:36:15.771 253542 INFO nova.virt.libvirt.driver [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Deletion of /var/lib/nova/instances/2274b091-0cde-4f9c-a067-0bec4dd614e4_del complete
Nov 25 08:36:15 compute-0 nova_compute[253538]: 2025-11-25 08:36:15.788 253542 DEBUG nova.storage.rbd_utils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] resizing rbd image 6c10a34e-4126-4e88-ad4d-ba7c407a379e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:36:15 compute-0 podman[326414]: 2025-11-25 08:36:15.843701979 +0000 UTC m=+0.094957723 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 08:36:15 compute-0 nova_compute[253538]: 2025-11-25 08:36:15.925 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:15 compute-0 nova_compute[253538]: 2025-11-25 08:36:15.929 253542 DEBUG nova.network.neutron [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Successfully created port: 8bb4cdd1-8082-4ad3-9350-be7270fb373b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:36:15 compute-0 nova_compute[253538]: 2025-11-25 08:36:15.939 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:15 compute-0 nova_compute[253538]: 2025-11-25 08:36:15.939 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:15 compute-0 nova_compute[253538]: 2025-11-25 08:36:15.950 253542 INFO nova.compute.manager [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Took 3.30 seconds to destroy the instance on the hypervisor.
Nov 25 08:36:15 compute-0 nova_compute[253538]: 2025-11-25 08:36:15.951 253542 DEBUG oslo.service.loopingcall [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:36:15 compute-0 nova_compute[253538]: 2025-11-25 08:36:15.951 253542 DEBUG nova.compute.manager [-] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:36:15 compute-0 nova_compute[253538]: 2025-11-25 08:36:15.951 253542 DEBUG nova.network.neutron [-] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:36:15 compute-0 nova_compute[253538]: 2025-11-25 08:36:15.967 253542 DEBUG nova.compute.manager [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:36:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.040 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.041 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.050 253542 DEBUG nova.virt.hardware [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.050 253542 INFO nova.compute.claims [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:36:16.105785) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059776105824, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 691, "num_deletes": 257, "total_data_size": 741161, "memory_usage": 754040, "flush_reason": "Manual Compaction"}
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059776171335, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 733555, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32859, "largest_seqno": 33549, "table_properties": {"data_size": 729902, "index_size": 1433, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8593, "raw_average_key_size": 19, "raw_value_size": 722360, "raw_average_value_size": 1626, "num_data_blocks": 63, "num_entries": 444, "num_filter_entries": 444, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764059735, "oldest_key_time": 1764059735, "file_creation_time": 1764059776, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 65636 microseconds, and 3805 cpu microseconds.
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:36:16.171410) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 733555 bytes OK
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:36:16.171444) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:36:16.174650) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:36:16.174670) EVENT_LOG_v1 {"time_micros": 1764059776174664, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:36:16.174700) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 737459, prev total WAL file size 737459, number of live WAL files 2.
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:36:16.175956) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303037' seq:72057594037927935, type:22 .. '6C6F676D0031323539' seq:0, type:0; will stop at (end)
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(716KB)], [71(8319KB)]
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059776176029, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9253056, "oldest_snapshot_seqno": -1}
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.217 253542 DEBUG oslo_concurrency.processutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.255 253542 DEBUG nova.network.neutron [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Updating instance_info_cache with network_info: [{"id": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "address": "fa:16:3e:a4:fe:4e", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b844d5-71", "ovs_interfaceid": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.270 253542 DEBUG oslo_concurrency.lockutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Releasing lock "refresh_cache-884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.277 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.278 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.278 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5704 keys, 9140151 bytes, temperature: kUnknown
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059776403232, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9140151, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9100819, "index_size": 23990, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 144992, "raw_average_key_size": 25, "raw_value_size": 8997054, "raw_average_value_size": 1577, "num_data_blocks": 978, "num_entries": 5704, "num_filter_entries": 5704, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764059776, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:36:16.403789) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9140151 bytes
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:36:16.458049) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 40.7 rd, 40.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.1 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(25.1) write-amplify(12.5) OK, records in: 6234, records dropped: 530 output_compression: NoCompression
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:36:16.458100) EVENT_LOG_v1 {"time_micros": 1764059776458081, "job": 40, "event": "compaction_finished", "compaction_time_micros": 227496, "compaction_time_cpu_micros": 24787, "output_level": 6, "num_output_files": 1, "total_output_size": 9140151, "num_input_records": 6234, "num_output_records": 5704, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059776458883, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059776462376, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:36:16.175757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:36:16.462517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:36:16.462523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:36:16.462524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:36:16.462526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:36:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:36:16.462527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.558 253542 DEBUG nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:36:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:36:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2838270062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.699 253542 DEBUG oslo_concurrency.processutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.708 253542 DEBUG nova.compute.provider_tree [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.722 253542 DEBUG nova.scheduler.client.report [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.781 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.781 253542 DEBUG nova.compute.manager [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.793 253542 DEBUG nova.objects.instance [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'migration_context' on Instance uuid 6c10a34e-4126-4e88-ad4d-ba7c407a379e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.821 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.822 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Ensure instance console log exists: /var/lib/nova/instances/6c10a34e-4126-4e88-ad4d-ba7c407a379e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.822 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.823 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.823 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.831 253542 DEBUG nova.compute.manager [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.832 253542 DEBUG nova.network.neutron [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.847 253542 INFO nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.874 253542 DEBUG nova.compute.manager [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.970 253542 DEBUG nova.compute.manager [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.972 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.973 253542 INFO nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Creating image(s)
Nov 25 08:36:16 compute-0 nova_compute[253538]: 2025-11-25 08:36:16.993 253542 DEBUG nova.storage.rbd_utils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image b62dacb0-2605-4b3f-b00a-9ecf5d2728f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.056 253542 DEBUG nova.storage.rbd_utils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image b62dacb0-2605-4b3f-b00a-9ecf5d2728f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.079 253542 DEBUG nova.storage.rbd_utils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image b62dacb0-2605-4b3f-b00a-9ecf5d2728f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.090 253542 DEBUG oslo_concurrency.processutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.137 253542 DEBUG nova.network.neutron [-] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.156 253542 DEBUG oslo_concurrency.processutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.157 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.158 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.158 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.279 253542 DEBUG nova.storage.rbd_utils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image b62dacb0-2605-4b3f-b00a-9ecf5d2728f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.284 253542 DEBUG oslo_concurrency.processutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 b62dacb0-2605-4b3f-b00a-9ecf5d2728f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1598: 321 pgs: 321 active+clean; 155 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.4 MiB/s wr, 207 op/s
Nov 25 08:36:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2838270062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.331 253542 DEBUG nova.network.neutron [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Successfully updated port: 8bb4cdd1-8082-4ad3-9350-be7270fb373b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.343 253542 DEBUG nova.policy [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee3a6261ded642fa9ef617b29b026d86', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aa56d31750374b64b67d1be19bb4e989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.345 253542 INFO nova.compute.manager [-] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Took 1.39 seconds to deallocate network for instance.
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.347 253542 DEBUG nova.compute.manager [req-ea0e799f-128c-46b6-98de-ccc01525e312 req-a02052c6-4e0a-45e8-a609-92bb3fe4de4a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Received event network-vif-deleted-453cfd64-d4ea-4656-935f-7d19b9b2f2d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.377 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "refresh_cache-6c10a34e-4126-4e88-ad4d-ba7c407a379e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.378 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquired lock "refresh_cache-6c10a34e-4126-4e88-ad4d-ba7c407a379e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.379 253542 DEBUG nova.network.neutron [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.402 253542 DEBUG oslo_concurrency.lockutils [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.403 253542 DEBUG oslo_concurrency.lockutils [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.469 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "04d190be-1443-48a9-ad51-3625b65dff6c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.470 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "04d190be-1443-48a9-ad51-3625b65dff6c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.506 253542 DEBUG nova.compute.manager [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.518 253542 DEBUG oslo_concurrency.processutils [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.563 253542 DEBUG nova.network.neutron [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.584 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:36:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2427616083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.958 253542 DEBUG oslo_concurrency.processutils [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.965 253542 DEBUG nova.compute.provider_tree [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:36:17 compute-0 nova_compute[253538]: 2025-11-25 08:36:17.980 253542 DEBUG nova.scheduler.client.report [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.010 253542 DEBUG oslo_concurrency.lockutils [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.014 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.025 253542 DEBUG nova.virt.hardware [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.026 253542 INFO nova.compute.claims [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.144 253542 INFO nova.scheduler.client.report [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Deleted allocations for instance 2274b091-0cde-4f9c-a067-0bec4dd614e4
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.158 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.221 253542 DEBUG oslo_concurrency.lockutils [None req-0dc91119-7029-4d31-9f84-3986e9e063ca 01007a4199f147399e37549741f618ff db308c5623d94a66b4201cf58d23c882 - - default default] Lock "2274b091-0cde-4f9c-a067-0bec4dd614e4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.230 253542 DEBUG oslo_concurrency.processutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:18 compute-0 ceph-mon[75015]: pgmap v1598: 321 pgs: 321 active+clean; 155 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.4 MiB/s wr, 207 op/s
Nov 25 08:36:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2427616083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.613 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Updating instance_info_cache with network_info: [{"id": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "address": "fa:16:3e:a4:fe:4e", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b844d5-71", "ovs_interfaceid": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.629 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.630 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.631 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.631 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.631 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:36:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:36:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/616305422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.691 253542 DEBUG oslo_concurrency.processutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.698 253542 DEBUG nova.compute.provider_tree [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.711 253542 DEBUG nova.scheduler.client.report [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.749 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.750 253542 DEBUG nova.compute.manager [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.805 253542 DEBUG nova.compute.manager [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.805 253542 DEBUG nova.network.neutron [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.824 253542 INFO nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.839 253542 DEBUG nova.compute.manager [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.963 253542 DEBUG nova.compute.manager [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.966 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:36:18 compute-0 nova_compute[253538]: 2025-11-25 08:36:18.967 253542 INFO nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Creating image(s)
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.003 253542 DEBUG nova.storage.rbd_utils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image 04d190be-1443-48a9-ad51-3625b65dff6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.035 253542 DEBUG nova.storage.rbd_utils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image 04d190be-1443-48a9-ad51-3625b65dff6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.058 253542 DEBUG nova.storage.rbd_utils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image 04d190be-1443-48a9-ad51-3625b65dff6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.062 253542 DEBUG oslo_concurrency.processutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.108 253542 DEBUG nova.network.neutron [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Successfully created port: 41bc52e3-37ed-4096-9d29-9868b1e29c3b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.119 253542 DEBUG nova.policy [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee3a6261ded642fa9ef617b29b026d86', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aa56d31750374b64b67d1be19bb4e989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.123 253542 DEBUG nova.network.neutron [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Updating instance_info_cache with network_info: [{"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.139 253542 DEBUG oslo_concurrency.processutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 b62dacb0-2605-4b3f-b00a-9ecf5d2728f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.855s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.172 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Releasing lock "refresh_cache-6c10a34e-4126-4e88-ad4d-ba7c407a379e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.172 253542 DEBUG nova.compute.manager [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Instance network_info: |[{"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.173 253542 DEBUG oslo_concurrency.processutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.177 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Start _get_guest_xml network_info=[{"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.178 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.179 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.180 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.201 253542 DEBUG nova.storage.rbd_utils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image 04d190be-1443-48a9-ad51-3625b65dff6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.205 253542 DEBUG oslo_concurrency.processutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 04d190be-1443-48a9-ad51-3625b65dff6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.298 253542 DEBUG nova.compute.manager [req-608ff305-8cf5-43bc-8a77-d1408c6160e8 req-bb227ba3-5ef9-4893-bf1c-b7e333bcfcd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received event network-changed-8bb4cdd1-8082-4ad3-9350-be7270fb373b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.299 253542 DEBUG nova.compute.manager [req-608ff305-8cf5-43bc-8a77-d1408c6160e8 req-bb227ba3-5ef9-4893-bf1c-b7e333bcfcd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Refreshing instance network info cache due to event network-changed-8bb4cdd1-8082-4ad3-9350-be7270fb373b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.300 253542 DEBUG oslo_concurrency.lockutils [req-608ff305-8cf5-43bc-8a77-d1408c6160e8 req-bb227ba3-5ef9-4893-bf1c-b7e333bcfcd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-6c10a34e-4126-4e88-ad4d-ba7c407a379e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.300 253542 DEBUG oslo_concurrency.lockutils [req-608ff305-8cf5-43bc-8a77-d1408c6160e8 req-bb227ba3-5ef9-4893-bf1c-b7e333bcfcd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-6c10a34e-4126-4e88-ad4d-ba7c407a379e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.300 253542 DEBUG nova.network.neutron [req-608ff305-8cf5-43bc-8a77-d1408c6160e8 req-bb227ba3-5ef9-4893-bf1c-b7e333bcfcd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Refreshing network info cache for port 8bb4cdd1-8082-4ad3-9350-be7270fb373b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:36:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1599: 321 pgs: 321 active+clean; 158 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 196 op/s
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.308 253542 DEBUG nova.storage.rbd_utils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] resizing rbd image b62dacb0-2605-4b3f-b00a-9ecf5d2728f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.534 253542 WARNING nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.550 253542 DEBUG nova.virt.libvirt.host [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.552 253542 DEBUG nova.virt.libvirt.host [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.557 253542 DEBUG nova.virt.libvirt.host [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.558 253542 DEBUG nova.virt.libvirt.host [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.559 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.559 253542 DEBUG nova.virt.hardware [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.560 253542 DEBUG nova.virt.hardware [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.561 253542 DEBUG nova.virt.hardware [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.562 253542 DEBUG nova.virt.hardware [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.562 253542 DEBUG nova.virt.hardware [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.563 253542 DEBUG nova.virt.hardware [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.563 253542 DEBUG nova.virt.hardware [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.564 253542 DEBUG nova.virt.hardware [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.564 253542 DEBUG nova.virt.hardware [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.564 253542 DEBUG nova.virt.hardware [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.565 253542 DEBUG nova.virt.hardware [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:36:19 compute-0 nova_compute[253538]: 2025-11-25 08:36:19.571 253542 DEBUG oslo_concurrency.processutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:19 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/616305422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:19 compute-0 podman[326808]: 2025-11-25 08:36:19.857627957 +0000 UTC m=+0.105244500 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.033 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Acquiring lock "774c40d8-01ae-49cc-ad06-05003232491a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.033 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lock "774c40d8-01ae-49cc-ad06-05003232491a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.055 253542 DEBUG nova.objects.instance [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'migration_context' on Instance uuid b62dacb0-2605-4b3f-b00a-9ecf5d2728f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.066 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.067 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Ensure instance console log exists: /var/lib/nova/instances/b62dacb0-2605-4b3f-b00a-9ecf5d2728f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.067 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.068 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.068 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.083 253542 DEBUG nova.compute.manager [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.143 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.144 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.150 253542 DEBUG nova.virt.hardware [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.150 253542 INFO nova.compute.claims [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:36:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1564819450' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.177 253542 DEBUG oslo_concurrency.processutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.200 253542 DEBUG nova.storage.rbd_utils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image 6c10a34e-4126-4e88-ad4d-ba7c407a379e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.205 253542 DEBUG oslo_concurrency.processutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.373 253542 DEBUG oslo_concurrency.processutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.409 253542 DEBUG nova.network.neutron [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Successfully created port: 62ab60cc-67a7-4f48-a964-8684f3731d02 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.472 253542 DEBUG nova.network.neutron [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Successfully updated port: 41bc52e3-37ed-4096-9d29-9868b1e29c3b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.489 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "refresh_cache-b62dacb0-2605-4b3f-b00a-9ecf5d2728f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.490 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquired lock "refresh_cache-b62dacb0-2605-4b3f-b00a-9ecf5d2728f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.490 253542 DEBUG nova.network.neutron [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.621 253542 DEBUG nova.compute.manager [req-bfd573cc-317f-435c-a0b3-57a0921e29e2 req-29752aef-f3bb-4358-8e40-491ec3c6e9f1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Received event network-changed-41bc52e3-37ed-4096-9d29-9868b1e29c3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.621 253542 DEBUG nova.compute.manager [req-bfd573cc-317f-435c-a0b3-57a0921e29e2 req-29752aef-f3bb-4358-8e40-491ec3c6e9f1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Refreshing instance network info cache due to event network-changed-41bc52e3-37ed-4096-9d29-9868b1e29c3b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.622 253542 DEBUG oslo_concurrency.lockutils [req-bfd573cc-317f-435c-a0b3-57a0921e29e2 req-29752aef-f3bb-4358-8e40-491ec3c6e9f1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-b62dacb0-2605-4b3f-b00a-9ecf5d2728f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:36:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2322393036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:20 compute-0 nova_compute[253538]: 2025-11-25 08:36:20.675 253542 DEBUG nova.network.neutron [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:36:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 199 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 219 op/s
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.417 253542 DEBUG nova.network.neutron [req-608ff305-8cf5-43bc-8a77-d1408c6160e8 req-bb227ba3-5ef9-4893-bf1c-b7e333bcfcd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Updated VIF entry in instance network info cache for port 8bb4cdd1-8082-4ad3-9350-be7270fb373b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.418 253542 DEBUG nova.network.neutron [req-608ff305-8cf5-43bc-8a77-d1408c6160e8 req-bb227ba3-5ef9-4893-bf1c-b7e333bcfcd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Updating instance_info_cache with network_info: [{"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.431 253542 DEBUG oslo_concurrency.lockutils [req-608ff305-8cf5-43bc-8a77-d1408c6160e8 req-bb227ba3-5ef9-4893-bf1c-b7e333bcfcd2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-6c10a34e-4126-4e88-ad4d-ba7c407a379e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.518 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.552 253542 DEBUG oslo_concurrency.processutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.553 253542 DEBUG nova.virt.libvirt.vif [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:36:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1875494502',display_name='tempest-ListServerFiltersTestJSON-instance-1875494502',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1875494502',id=72,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa56d31750374b64b67d1be19bb4e989',ramdisk_id='',reservation_id='r-usfwimbp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1878680398',owner_user_name='tempest-ListServerFiltersTestJSON-1878680398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:36:13Z,user_data=None,user_id='ee3a6261ded642fa9ef617b29b026d86',uuid=6c10a34e-4126-4e88-ad4d-ba7c407a379e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.553 253542 DEBUG nova.network.os_vif_util [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converting VIF {"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.554 253542 DEBUG nova.network.os_vif_util [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:57:05,bridge_name='br-int',has_traffic_filtering=True,id=8bb4cdd1-8082-4ad3-9350-be7270fb373b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bb4cdd1-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.555 253542 DEBUG nova.objects.instance [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c10a34e-4126-4e88-ad4d-ba7c407a379e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:36:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:36:21 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3836118839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.568 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:36:21 compute-0 nova_compute[253538]:   <uuid>6c10a34e-4126-4e88-ad4d-ba7c407a379e</uuid>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   <name>instance-00000048</name>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1875494502</nova:name>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:36:19</nova:creationTime>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:36:21 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:36:21 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:36:21 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:36:21 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:36:21 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:36:21 compute-0 nova_compute[253538]:         <nova:user uuid="ee3a6261ded642fa9ef617b29b026d86">tempest-ListServerFiltersTestJSON-1878680398-project-member</nova:user>
Nov 25 08:36:21 compute-0 nova_compute[253538]:         <nova:project uuid="aa56d31750374b64b67d1be19bb4e989">tempest-ListServerFiltersTestJSON-1878680398</nova:project>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:36:21 compute-0 nova_compute[253538]:         <nova:port uuid="8bb4cdd1-8082-4ad3-9350-be7270fb373b">
Nov 25 08:36:21 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <system>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <entry name="serial">6c10a34e-4126-4e88-ad4d-ba7c407a379e</entry>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <entry name="uuid">6c10a34e-4126-4e88-ad4d-ba7c407a379e</entry>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     </system>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   <os>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   </os>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   <features>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   </features>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/6c10a34e-4126-4e88-ad4d-ba7c407a379e_disk">
Nov 25 08:36:21 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:21 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/6c10a34e-4126-4e88-ad4d-ba7c407a379e_disk.config">
Nov 25 08:36:21 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:21 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:a4:57:05"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <target dev="tap8bb4cdd1-80"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/6c10a34e-4126-4e88-ad4d-ba7c407a379e/console.log" append="off"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <video>
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     </video>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:36:21 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:36:21 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:36:21 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:36:21 compute-0 nova_compute[253538]: </domain>
Nov 25 08:36:21 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.569 253542 DEBUG nova.compute.manager [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Preparing to wait for external event network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.570 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.570 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.571 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.571 253542 DEBUG nova.virt.libvirt.vif [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:36:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1875494502',display_name='tempest-ListServerFiltersTestJSON-instance-1875494502',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1875494502',id=72,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa56d31750374b64b67d1be19bb4e989',ramdisk_id='',reservation_id='r-usfwimbp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1878680398',owner_user_name='tempest-ListServerFiltersTestJSON-1878680398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:36:13Z,user_data=None,user_id='ee3a6261ded642fa9ef617b29b026d86',uuid=6c10a34e-4126-4e88-ad4d-ba7c407a379e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.572 253542 DEBUG nova.network.os_vif_util [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converting VIF {"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.572 253542 DEBUG nova.network.os_vif_util [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:57:05,bridge_name='br-int',has_traffic_filtering=True,id=8bb4cdd1-8082-4ad3-9350-be7270fb373b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bb4cdd1-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.572 253542 DEBUG os_vif [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:57:05,bridge_name='br-int',has_traffic_filtering=True,id=8bb4cdd1-8082-4ad3-9350-be7270fb373b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bb4cdd1-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.573 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.573 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.574 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.576 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8bb4cdd1-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.576 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8bb4cdd1-80, col_values=(('external_ids', {'iface-id': '8bb4cdd1-8082-4ad3-9350-be7270fb373b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:57:05', 'vm-uuid': '6c10a34e-4126-4e88-ad4d-ba7c407a379e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.578 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:21 compute-0 NetworkManager[48915]: <info>  [1764059781.5791] manager: (tap8bb4cdd1-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.580 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.589 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.590 253542 INFO os_vif [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:57:05,bridge_name='br-int',has_traffic_filtering=True,id=8bb4cdd1-8082-4ad3-9350-be7270fb373b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bb4cdd1-80')
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.599 253542 DEBUG oslo_concurrency.processutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.606 253542 DEBUG nova.compute.provider_tree [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.610 253542 DEBUG nova.network.neutron [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Updating instance_info_cache with network_info: [{"id": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "address": "fa:16:3e:91:21:01", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc52e3-37", "ovs_interfaceid": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:21 compute-0 ceph-mon[75015]: pgmap v1599: 321 pgs: 321 active+clean; 158 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 196 op/s
Nov 25 08:36:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1564819450' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2322393036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.620 253542 DEBUG nova.scheduler.client.report [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.657 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Releasing lock "refresh_cache-b62dacb0-2605-4b3f-b00a-9ecf5d2728f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.657 253542 DEBUG nova.compute.manager [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Instance network_info: |[{"id": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "address": "fa:16:3e:91:21:01", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc52e3-37", "ovs_interfaceid": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.657 253542 DEBUG oslo_concurrency.lockutils [req-bfd573cc-317f-435c-a0b3-57a0921e29e2 req-29752aef-f3bb-4358-8e40-491ec3c6e9f1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-b62dacb0-2605-4b3f-b00a-9ecf5d2728f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.658 253542 DEBUG nova.network.neutron [req-bfd573cc-317f-435c-a0b3-57a0921e29e2 req-29752aef-f3bb-4358-8e40-491ec3c6e9f1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Refreshing network info cache for port 41bc52e3-37ed-4096-9d29-9868b1e29c3b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.662 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Start _get_guest_xml network_info=[{"id": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "address": "fa:16:3e:91:21:01", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc52e3-37", "ovs_interfaceid": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '64385127-d622-49bb-be38-b33beb2692d1'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.667 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.667 253542 DEBUG nova.compute.manager [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.675 253542 WARNING nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.681 253542 DEBUG nova.virt.libvirt.host [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.682 253542 DEBUG nova.virt.libvirt.host [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.685 253542 DEBUG nova.virt.libvirt.host [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.686 253542 DEBUG nova.virt.libvirt.host [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.686 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.686 253542 DEBUG nova.virt.hardware [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.689 253542 DEBUG nova.virt.hardware [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.690 253542 DEBUG nova.virt.hardware [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.693 253542 DEBUG nova.virt.hardware [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.693 253542 DEBUG nova.virt.hardware [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.694 253542 DEBUG nova.virt.hardware [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.694 253542 DEBUG nova.virt.hardware [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.695 253542 DEBUG nova.virt.hardware [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.695 253542 DEBUG nova.virt.hardware [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.695 253542 DEBUG nova.virt.hardware [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.695 253542 DEBUG nova.virt.hardware [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.702 253542 DEBUG oslo_concurrency.processutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.759 253542 DEBUG nova.compute.manager [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.761 253542 DEBUG nova.network.neutron [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.776 253542 DEBUG nova.network.neutron [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Successfully updated port: 62ab60cc-67a7-4f48-a964-8684f3731d02 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.808 253542 DEBUG oslo_concurrency.processutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 04d190be-1443-48a9-ad51-3625b65dff6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.863 253542 INFO nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.870 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "refresh_cache-04d190be-1443-48a9-ad51-3625b65dff6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.871 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquired lock "refresh_cache-04d190be-1443-48a9-ad51-3625b65dff6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.875 253542 DEBUG nova.network.neutron [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.914 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.920 253542 DEBUG nova.compute.manager [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.929 253542 DEBUG nova.storage.rbd_utils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] resizing rbd image 04d190be-1443-48a9-ad51-3625b65dff6c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.982 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.983 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.984 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] No VIF found with MAC fa:16:3e:a4:57:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:36:21 compute-0 nova_compute[253538]: 2025-11-25 08:36:21.984 253542 INFO nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Using config drive
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.007 253542 DEBUG nova.storage.rbd_utils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image 6c10a34e-4126-4e88-ad4d-ba7c407a379e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3479190312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.154 253542 DEBUG nova.policy [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1aaf041a5d4344a1b22e039b0d22e198', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fb75cadcaecd4a2fb54df6dd80902908', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.158 253542 DEBUG nova.network.neutron [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.168 253542 DEBUG oslo_concurrency.processutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.192 253542 DEBUG nova.storage.rbd_utils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image b62dacb0-2605-4b3f-b00a-9ecf5d2728f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.196 253542 DEBUG oslo_concurrency.processutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.269 253542 DEBUG nova.compute.manager [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.271 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.272 253542 INFO nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Creating image(s)
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.296 253542 DEBUG nova.storage.rbd_utils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] rbd image 774c40d8-01ae-49cc-ad06-05003232491a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.319 253542 DEBUG nova.storage.rbd_utils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] rbd image 774c40d8-01ae-49cc-ad06-05003232491a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.352 253542 DEBUG nova.storage.rbd_utils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] rbd image 774c40d8-01ae-49cc-ad06-05003232491a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.367 253542 DEBUG oslo_concurrency.processutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.452 253542 DEBUG oslo_concurrency.processutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.457 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.458 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.458 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.481 253542 DEBUG nova.storage.rbd_utils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] rbd image 774c40d8-01ae-49cc-ad06-05003232491a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.484 253542 DEBUG oslo_concurrency.processutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 774c40d8-01ae-49cc-ad06-05003232491a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.629 253542 INFO nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Creating config drive at /var/lib/nova/instances/6c10a34e-4126-4e88-ad4d-ba7c407a379e/disk.config
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.638 253542 DEBUG oslo_concurrency.processutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c10a34e-4126-4e88-ad4d-ba7c407a379e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3jsplcud execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.801 253542 DEBUG oslo_concurrency.processutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c10a34e-4126-4e88-ad4d-ba7c407a379e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3jsplcud" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3895370780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.855 253542 DEBUG nova.storage.rbd_utils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image 6c10a34e-4126-4e88-ad4d-ba7c407a379e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.859 253542 DEBUG oslo_concurrency.processutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6c10a34e-4126-4e88-ad4d-ba7c407a379e/disk.config 6c10a34e-4126-4e88-ad4d-ba7c407a379e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.908 253542 DEBUG oslo_concurrency.processutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.712s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.910 253542 DEBUG nova.virt.libvirt.vif [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-48731400',display_name='tempest-ListServerFiltersTestJSON-instance-48731400',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-48731400',id=73,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa56d31750374b64b67d1be19bb4e989',ramdisk_id='',reservation_id='r-eg67u00c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1878680398',owner_user_name='tempest-ListServerFiltersTestJSON-1878680398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:36:16Z,user_data=None,user_id='ee3a6261ded642fa9ef617b29b026d86',uuid=b62dacb0-2605-4b3f-b00a-9ecf5d2728f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "address": "fa:16:3e:91:21:01", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc52e3-37", "ovs_interfaceid": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.911 253542 DEBUG nova.network.os_vif_util [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converting VIF {"id": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "address": "fa:16:3e:91:21:01", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc52e3-37", "ovs_interfaceid": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.912 253542 DEBUG nova.network.os_vif_util [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:21:01,bridge_name='br-int',has_traffic_filtering=True,id=41bc52e3-37ed-4096-9d29-9868b1e29c3b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc52e3-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.914 253542 DEBUG nova.objects.instance [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'pci_devices' on Instance uuid b62dacb0-2605-4b3f-b00a-9ecf5d2728f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.931 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:36:22 compute-0 nova_compute[253538]:   <uuid>b62dacb0-2605-4b3f-b00a-9ecf5d2728f7</uuid>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   <name>instance-00000049</name>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-48731400</nova:name>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:36:21</nova:creationTime>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:36:22 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:36:22 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:36:22 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:36:22 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:36:22 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:36:22 compute-0 nova_compute[253538]:         <nova:user uuid="ee3a6261ded642fa9ef617b29b026d86">tempest-ListServerFiltersTestJSON-1878680398-project-member</nova:user>
Nov 25 08:36:22 compute-0 nova_compute[253538]:         <nova:project uuid="aa56d31750374b64b67d1be19bb4e989">tempest-ListServerFiltersTestJSON-1878680398</nova:project>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:36:22 compute-0 nova_compute[253538]:         <nova:port uuid="41bc52e3-37ed-4096-9d29-9868b1e29c3b">
Nov 25 08:36:22 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <system>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <entry name="serial">b62dacb0-2605-4b3f-b00a-9ecf5d2728f7</entry>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <entry name="uuid">b62dacb0-2605-4b3f-b00a-9ecf5d2728f7</entry>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     </system>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   <os>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   </os>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   <features>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   </features>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/b62dacb0-2605-4b3f-b00a-9ecf5d2728f7_disk">
Nov 25 08:36:22 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:22 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/b62dacb0-2605-4b3f-b00a-9ecf5d2728f7_disk.config">
Nov 25 08:36:22 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:22 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:91:21:01"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <target dev="tap41bc52e3-37"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/b62dacb0-2605-4b3f-b00a-9ecf5d2728f7/console.log" append="off"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <video>
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     </video>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:36:22 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:36:22 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:36:22 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:36:22 compute-0 nova_compute[253538]: </domain>
Nov 25 08:36:22 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.934 253542 DEBUG nova.compute.manager [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Preparing to wait for external event network-vif-plugged-41bc52e3-37ed-4096-9d29-9868b1e29c3b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.935 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.935 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.935 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.936 253542 DEBUG nova.virt.libvirt.vif [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-48731400',display_name='tempest-ListServerFiltersTestJSON-instance-48731400',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-48731400',id=73,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa56d31750374b64b67d1be19bb4e989',ramdisk_id='',reservation_id='r-eg67u00c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1878680398',owner_user_name='tempest-ListServerFiltersTestJSON-1878680398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:36:16Z,user_data=None,user_id='ee3a6261ded642fa9ef617b29b026d86',uuid=b62dacb0-2605-4b3f-b00a-9ecf5d2728f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "address": "fa:16:3e:91:21:01", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc52e3-37", "ovs_interfaceid": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.936 253542 DEBUG nova.network.os_vif_util [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converting VIF {"id": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "address": "fa:16:3e:91:21:01", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc52e3-37", "ovs_interfaceid": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.937 253542 DEBUG nova.network.os_vif_util [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:21:01,bridge_name='br-int',has_traffic_filtering=True,id=41bc52e3-37ed-4096-9d29-9868b1e29c3b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc52e3-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.937 253542 DEBUG os_vif [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:21:01,bridge_name='br-int',has_traffic_filtering=True,id=41bc52e3-37ed-4096-9d29-9868b1e29c3b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc52e3-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.939 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.940 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.940 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.943 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.943 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41bc52e3-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.943 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41bc52e3-37, col_values=(('external_ids', {'iface-id': '41bc52e3-37ed-4096-9d29-9868b1e29c3b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:21:01', 'vm-uuid': 'b62dacb0-2605-4b3f-b00a-9ecf5d2728f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.945 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:22 compute-0 NetworkManager[48915]: <info>  [1764059782.9466] manager: (tap41bc52e3-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/307)
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.948 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.956 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:22 compute-0 nova_compute[253538]: 2025-11-25 08:36:22.957 253542 INFO os_vif [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:21:01,bridge_name='br-int',has_traffic_filtering=True,id=41bc52e3-37ed-4096-9d29-9868b1e29c3b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc52e3-37')
Nov 25 08:36:23 compute-0 ceph-mon[75015]: pgmap v1600: 321 pgs: 321 active+clean; 199 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 219 op/s
Nov 25 08:36:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3836118839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3479190312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.166 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.167 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.167 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] No VIF found with MAC fa:16:3e:91:21:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.168 253542 INFO nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Using config drive
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.248 253542 DEBUG nova.storage.rbd_utils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image b62dacb0-2605-4b3f-b00a-9ecf5d2728f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.263 253542 DEBUG nova.compute.manager [req-45875d7f-4110-4b11-8a0d-42d5f40eb66d req-d1c7a93e-3039-4730-8619-4a8de76e6411 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Received event network-changed-62ab60cc-67a7-4f48-a964-8684f3731d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.264 253542 DEBUG nova.compute.manager [req-45875d7f-4110-4b11-8a0d-42d5f40eb66d req-d1c7a93e-3039-4730-8619-4a8de76e6411 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Refreshing instance network info cache due to event network-changed-62ab60cc-67a7-4f48-a964-8684f3731d02. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.264 253542 DEBUG oslo_concurrency.lockutils [req-45875d7f-4110-4b11-8a0d-42d5f40eb66d req-d1c7a93e-3039-4730-8619-4a8de76e6411 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-04d190be-1443-48a9-ad51-3625b65dff6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.279 253542 DEBUG nova.network.neutron [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Successfully created port: 8d7ef71e-d272-4075-b3f5-8a029c7c860f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.295 253542 DEBUG nova.network.neutron [req-bfd573cc-317f-435c-a0b3-57a0921e29e2 req-29752aef-f3bb-4358-8e40-491ec3c6e9f1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Updated VIF entry in instance network info cache for port 41bc52e3-37ed-4096-9d29-9868b1e29c3b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.296 253542 DEBUG nova.network.neutron [req-bfd573cc-317f-435c-a0b3-57a0921e29e2 req-29752aef-f3bb-4358-8e40-491ec3c6e9f1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Updating instance_info_cache with network_info: [{"id": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "address": "fa:16:3e:91:21:01", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc52e3-37", "ovs_interfaceid": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 253 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.7 MiB/s wr, 238 op/s
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.308 253542 DEBUG oslo_concurrency.lockutils [req-bfd573cc-317f-435c-a0b3-57a0921e29e2 req-29752aef-f3bb-4358-8e40-491ec3c6e9f1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-b62dacb0-2605-4b3f-b00a-9ecf5d2728f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.378 253542 DEBUG nova.objects.instance [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'migration_context' on Instance uuid 04d190be-1443-48a9-ad51-3625b65dff6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.388 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.389 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Ensure instance console log exists: /var/lib/nova/instances/04d190be-1443-48a9-ad51-3625b65dff6c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.389 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.389 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.389 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:36:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:36:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:36:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:36:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:36:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.478 253542 DEBUG nova.network.neutron [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Updating instance_info_cache with network_info: [{"id": "62ab60cc-67a7-4f48-a964-8684f3731d02", "address": "fa:16:3e:84:f3:02", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62ab60cc-67", "ovs_interfaceid": "62ab60cc-67a7-4f48-a964-8684f3731d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.504 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Releasing lock "refresh_cache-04d190be-1443-48a9-ad51-3625b65dff6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.504 253542 DEBUG nova.compute.manager [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Instance network_info: |[{"id": "62ab60cc-67a7-4f48-a964-8684f3731d02", "address": "fa:16:3e:84:f3:02", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62ab60cc-67", "ovs_interfaceid": "62ab60cc-67a7-4f48-a964-8684f3731d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.504 253542 DEBUG oslo_concurrency.lockutils [req-45875d7f-4110-4b11-8a0d-42d5f40eb66d req-d1c7a93e-3039-4730-8619-4a8de76e6411 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-04d190be-1443-48a9-ad51-3625b65dff6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.504 253542 DEBUG nova.network.neutron [req-45875d7f-4110-4b11-8a0d-42d5f40eb66d req-d1c7a93e-3039-4730-8619-4a8de76e6411 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Refreshing network info cache for port 62ab60cc-67a7-4f48-a964-8684f3731d02 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.507 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Start _get_guest_xml network_info=[{"id": "62ab60cc-67a7-4f48-a964-8684f3731d02", "address": "fa:16:3e:84:f3:02", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62ab60cc-67", "ovs_interfaceid": "62ab60cc-67a7-4f48-a964-8684f3731d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.510 253542 WARNING nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.514 253542 DEBUG nova.virt.libvirt.host [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.515 253542 DEBUG nova.virt.libvirt.host [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.521 253542 DEBUG nova.virt.libvirt.host [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.521 253542 DEBUG nova.virt.libvirt.host [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.522 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.522 253542 DEBUG nova.virt.hardware [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a3f28470-2767-4983-bb62-a706449905cc',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.522 253542 DEBUG nova.virt.hardware [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.522 253542 DEBUG nova.virt.hardware [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.523 253542 DEBUG nova.virt.hardware [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.523 253542 DEBUG nova.virt.hardware [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.523 253542 DEBUG nova.virt.hardware [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.523 253542 DEBUG nova.virt.hardware [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.524 253542 DEBUG nova.virt.hardware [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.524 253542 DEBUG nova.virt.hardware [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.524 253542 DEBUG nova.virt.hardware [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.524 253542 DEBUG nova.virt.hardware [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.526 253542 DEBUG oslo_concurrency.processutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.571 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.593 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.594 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.594 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.594 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.595 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.736 253542 INFO nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Creating config drive at /var/lib/nova/instances/b62dacb0-2605-4b3f-b00a-9ecf5d2728f7/disk.config
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.742 253542 DEBUG oslo_concurrency.processutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b62dacb0-2605-4b3f-b00a-9ecf5d2728f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp76kruauq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.897 253542 DEBUG oslo_concurrency.processutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b62dacb0-2605-4b3f-b00a-9ecf5d2728f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp76kruauq" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.937 253542 DEBUG nova.storage.rbd_utils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image b62dacb0-2605-4b3f-b00a-9ecf5d2728f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:23 compute-0 nova_compute[253538]: 2025-11-25 08:36:23.942 253542 DEBUG oslo_concurrency.processutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b62dacb0-2605-4b3f-b00a-9ecf5d2728f7/disk.config b62dacb0-2605-4b3f-b00a-9ecf5d2728f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806358985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.012 253542 DEBUG oslo_concurrency.processutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.045 253542 DEBUG nova.storage.rbd_utils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image 04d190be-1443-48a9-ad51-3625b65dff6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.052 253542 DEBUG oslo_concurrency.processutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.152 253542 DEBUG nova.network.neutron [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Successfully updated port: 8d7ef71e-d272-4075-b3f5-8a029c7c860f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.187 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Acquiring lock "refresh_cache-774c40d8-01ae-49cc-ad06-05003232491a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.187 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Acquired lock "refresh_cache-774c40d8-01ae-49cc-ad06-05003232491a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.188 253542 DEBUG nova.network.neutron [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.258 253542 DEBUG nova.compute.manager [req-148d6696-291f-4e51-b74d-f7027b8e481f req-99dfaa31-d012-49ea-a45f-33b98d6a58fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Received event network-changed-8d7ef71e-d272-4075-b3f5-8a029c7c860f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.259 253542 DEBUG nova.compute.manager [req-148d6696-291f-4e51-b74d-f7027b8e481f req-99dfaa31-d012-49ea-a45f-33b98d6a58fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Refreshing instance network info cache due to event network-changed-8d7ef71e-d272-4075-b3f5-8a029c7c860f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.259 253542 DEBUG oslo_concurrency.lockutils [req-148d6696-291f-4e51-b74d-f7027b8e481f req-99dfaa31-d012-49ea-a45f-33b98d6a58fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-774c40d8-01ae-49cc-ad06-05003232491a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:36:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:36:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2953618894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.290 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.695s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.355 253542 DEBUG nova.network.neutron [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.360 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.360 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.363 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.363 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.367 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.367 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.550 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.551 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3810MB free_disk=59.93784713745117GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.551 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.552 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.618 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.619 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 6c10a34e-4126-4e88-ad4d-ba7c407a379e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.619 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance b62dacb0-2605-4b3f-b00a-9ecf5d2728f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.619 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 04d190be-1443-48a9-ad51-3625b65dff6c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.619 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 774c40d8-01ae-49cc-ad06-05003232491a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.619 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.619 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1216MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.713 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2907313733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.838 253542 DEBUG oslo_concurrency.processutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.786s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.841 253542 DEBUG nova.virt.libvirt.vif [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-488069752',display_name='tempest-ListServerFiltersTestJSON-instance-488069752',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-488069752',id=74,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa56d31750374b64b67d1be19bb4e989',ramdisk_id='',reservation_id='r-4ha8394r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1878680398',owner_user_name='tempest-Lis
tServerFiltersTestJSON-1878680398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:36:18Z,user_data=None,user_id='ee3a6261ded642fa9ef617b29b026d86',uuid=04d190be-1443-48a9-ad51-3625b65dff6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62ab60cc-67a7-4f48-a964-8684f3731d02", "address": "fa:16:3e:84:f3:02", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62ab60cc-67", "ovs_interfaceid": "62ab60cc-67a7-4f48-a964-8684f3731d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.841 253542 DEBUG nova.network.os_vif_util [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converting VIF {"id": "62ab60cc-67a7-4f48-a964-8684f3731d02", "address": "fa:16:3e:84:f3:02", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62ab60cc-67", "ovs_interfaceid": "62ab60cc-67a7-4f48-a964-8684f3731d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.842 253542 DEBUG nova.network.os_vif_util [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:f3:02,bridge_name='br-int',has_traffic_filtering=True,id=62ab60cc-67a7-4f48-a964-8684f3731d02,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62ab60cc-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.844 253542 DEBUG nova.objects.instance [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'pci_devices' on Instance uuid 04d190be-1443-48a9-ad51-3625b65dff6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.857 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:36:24 compute-0 nova_compute[253538]:   <uuid>04d190be-1443-48a9-ad51-3625b65dff6c</uuid>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   <name>instance-0000004a</name>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   <memory>196608</memory>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-488069752</nova:name>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:36:23</nova:creationTime>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <nova:flavor name="m1.micro">
Nov 25 08:36:24 compute-0 nova_compute[253538]:         <nova:memory>192</nova:memory>
Nov 25 08:36:24 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:36:24 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:36:24 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:36:24 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:36:24 compute-0 nova_compute[253538]:         <nova:user uuid="ee3a6261ded642fa9ef617b29b026d86">tempest-ListServerFiltersTestJSON-1878680398-project-member</nova:user>
Nov 25 08:36:24 compute-0 nova_compute[253538]:         <nova:project uuid="aa56d31750374b64b67d1be19bb4e989">tempest-ListServerFiltersTestJSON-1878680398</nova:project>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:36:24 compute-0 nova_compute[253538]:         <nova:port uuid="62ab60cc-67a7-4f48-a964-8684f3731d02">
Nov 25 08:36:24 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <system>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <entry name="serial">04d190be-1443-48a9-ad51-3625b65dff6c</entry>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <entry name="uuid">04d190be-1443-48a9-ad51-3625b65dff6c</entry>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     </system>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   <os>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   </os>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   <features>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   </features>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/04d190be-1443-48a9-ad51-3625b65dff6c_disk">
Nov 25 08:36:24 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:24 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/04d190be-1443-48a9-ad51-3625b65dff6c_disk.config">
Nov 25 08:36:24 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:24 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:84:f3:02"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <target dev="tap62ab60cc-67"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/04d190be-1443-48a9-ad51-3625b65dff6c/console.log" append="off"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <video>
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     </video>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:36:24 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:36:24 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:36:24 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:36:24 compute-0 nova_compute[253538]: </domain>
Nov 25 08:36:24 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.859 253542 DEBUG nova.compute.manager [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Preparing to wait for external event network-vif-plugged-62ab60cc-67a7-4f48-a964-8684f3731d02 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.859 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "04d190be-1443-48a9-ad51-3625b65dff6c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.860 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "04d190be-1443-48a9-ad51-3625b65dff6c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.860 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "04d190be-1443-48a9-ad51-3625b65dff6c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.861 253542 DEBUG nova.virt.libvirt.vif [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-488069752',display_name='tempest-ListServerFiltersTestJSON-instance-488069752',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-488069752',id=74,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa56d31750374b64b67d1be19bb4e989',ramdisk_id='',reservation_id='r-4ha8394r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1878680398',owner_user_name='t
empest-ListServerFiltersTestJSON-1878680398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:36:18Z,user_data=None,user_id='ee3a6261ded642fa9ef617b29b026d86',uuid=04d190be-1443-48a9-ad51-3625b65dff6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62ab60cc-67a7-4f48-a964-8684f3731d02", "address": "fa:16:3e:84:f3:02", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62ab60cc-67", "ovs_interfaceid": "62ab60cc-67a7-4f48-a964-8684f3731d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.861 253542 DEBUG nova.network.os_vif_util [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converting VIF {"id": "62ab60cc-67a7-4f48-a964-8684f3731d02", "address": "fa:16:3e:84:f3:02", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62ab60cc-67", "ovs_interfaceid": "62ab60cc-67a7-4f48-a964-8684f3731d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.862 253542 DEBUG nova.network.os_vif_util [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:f3:02,bridge_name='br-int',has_traffic_filtering=True,id=62ab60cc-67a7-4f48-a964-8684f3731d02,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62ab60cc-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.862 253542 DEBUG os_vif [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:f3:02,bridge_name='br-int',has_traffic_filtering=True,id=62ab60cc-67a7-4f48-a964-8684f3731d02,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62ab60cc-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.863 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.864 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.864 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.868 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62ab60cc-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.868 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap62ab60cc-67, col_values=(('external_ids', {'iface-id': '62ab60cc-67a7-4f48-a964-8684f3731d02', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:f3:02', 'vm-uuid': '04d190be-1443-48a9-ad51-3625b65dff6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:24 compute-0 NetworkManager[48915]: <info>  [1764059784.9077] manager: (tap62ab60cc-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/308)
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.907 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.920 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:24 compute-0 nova_compute[253538]: 2025-11-25 08:36:24.921 253542 INFO os_vif [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:f3:02,bridge_name='br-int',has_traffic_filtering=True,id=62ab60cc-67a7-4f48-a964-8684f3731d02,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62ab60cc-67')
Nov 25 08:36:24 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3895370780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:24 compute-0 ceph-mon[75015]: pgmap v1601: 321 pgs: 321 active+clean; 253 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.7 MiB/s wr, 238 op/s
Nov 25 08:36:24 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3806358985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.054 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.056 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.056 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] No VIF found with MAC fa:16:3e:84:f3:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.058 253542 INFO nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Using config drive
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.108 253542 DEBUG nova.storage.rbd_utils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image 04d190be-1443-48a9-ad51-3625b65dff6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:36:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2878051064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.283 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.289 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:36:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1602: 321 pgs: 321 active+clean; 273 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.3 MiB/s wr, 207 op/s
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.305 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.335 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.336 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.483 253542 INFO nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Creating config drive at /var/lib/nova/instances/04d190be-1443-48a9-ad51-3625b65dff6c/disk.config
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.487 253542 DEBUG oslo_concurrency.processutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/04d190be-1443-48a9-ad51-3625b65dff6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6p1t95j0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.523 253542 DEBUG nova.network.neutron [req-45875d7f-4110-4b11-8a0d-42d5f40eb66d req-d1c7a93e-3039-4730-8619-4a8de76e6411 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Updated VIF entry in instance network info cache for port 62ab60cc-67a7-4f48-a964-8684f3731d02. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.524 253542 DEBUG nova.network.neutron [req-45875d7f-4110-4b11-8a0d-42d5f40eb66d req-d1c7a93e-3039-4730-8619-4a8de76e6411 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Updating instance_info_cache with network_info: [{"id": "62ab60cc-67a7-4f48-a964-8684f3731d02", "address": "fa:16:3e:84:f3:02", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62ab60cc-67", "ovs_interfaceid": "62ab60cc-67a7-4f48-a964-8684f3731d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.543 253542 DEBUG oslo_concurrency.lockutils [req-45875d7f-4110-4b11-8a0d-42d5f40eb66d req-d1c7a93e-3039-4730-8619-4a8de76e6411 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-04d190be-1443-48a9-ad51-3625b65dff6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.585 253542 DEBUG nova.network.neutron [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Updating instance_info_cache with network_info: [{"id": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "address": "fa:16:3e:d1:94:3d", "network": {"id": "46e51a76-3d39-490e-ba9a-1d5ac591a461", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-62640839-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb75cadcaecd4a2fb54df6dd80902908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7ef71e-d2", "ovs_interfaceid": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.624 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Releasing lock "refresh_cache-774c40d8-01ae-49cc-ad06-05003232491a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.625 253542 DEBUG nova.compute.manager [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Instance network_info: |[{"id": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "address": "fa:16:3e:d1:94:3d", "network": {"id": "46e51a76-3d39-490e-ba9a-1d5ac591a461", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-62640839-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb75cadcaecd4a2fb54df6dd80902908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7ef71e-d2", "ovs_interfaceid": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.625 253542 DEBUG oslo_concurrency.lockutils [req-148d6696-291f-4e51-b74d-f7027b8e481f req-99dfaa31-d012-49ea-a45f-33b98d6a58fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-774c40d8-01ae-49cc-ad06-05003232491a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.625 253542 DEBUG nova.network.neutron [req-148d6696-291f-4e51-b74d-f7027b8e481f req-99dfaa31-d012-49ea-a45f-33b98d6a58fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Refreshing network info cache for port 8d7ef71e-d272-4075-b3f5-8a029c7c860f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.627 253542 DEBUG oslo_concurrency.processutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/04d190be-1443-48a9-ad51-3625b65dff6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6p1t95j0" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.653 253542 DEBUG nova.storage.rbd_utils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] rbd image 04d190be-1443-48a9-ad51-3625b65dff6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:25 compute-0 nova_compute[253538]: 2025-11-25 08:36:25.660 253542 DEBUG oslo_concurrency.processutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/04d190be-1443-48a9-ad51-3625b65dff6c/disk.config 04d190be-1443-48a9-ad51-3625b65dff6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2953618894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2907313733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2878051064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:26 compute-0 ceph-mon[75015]: pgmap v1602: 321 pgs: 321 active+clean; 273 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.3 MiB/s wr, 207 op/s
Nov 25 08:36:26 compute-0 nova_compute[253538]: 2025-11-25 08:36:26.319 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:36:26 compute-0 nova_compute[253538]: 2025-11-25 08:36:26.345 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:36:26 compute-0 nova_compute[253538]: 2025-11-25 08:36:26.523 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:26 compute-0 nova_compute[253538]: 2025-11-25 08:36:26.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:36:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:36:26 compute-0 nova_compute[253538]: 2025-11-25 08:36:26.713 253542 DEBUG nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:36:27 compute-0 nova_compute[253538]: 2025-11-25 08:36:27.202 253542 DEBUG nova.network.neutron [req-148d6696-291f-4e51-b74d-f7027b8e481f req-99dfaa31-d012-49ea-a45f-33b98d6a58fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Updated VIF entry in instance network info cache for port 8d7ef71e-d272-4075-b3f5-8a029c7c860f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:36:27 compute-0 nova_compute[253538]: 2025-11-25 08:36:27.203 253542 DEBUG nova.network.neutron [req-148d6696-291f-4e51-b74d-f7027b8e481f req-99dfaa31-d012-49ea-a45f-33b98d6a58fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Updating instance_info_cache with network_info: [{"id": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "address": "fa:16:3e:d1:94:3d", "network": {"id": "46e51a76-3d39-490e-ba9a-1d5ac591a461", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-62640839-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb75cadcaecd4a2fb54df6dd80902908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7ef71e-d2", "ovs_interfaceid": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:27 compute-0 nova_compute[253538]: 2025-11-25 08:36:27.217 253542 DEBUG oslo_concurrency.lockutils [req-148d6696-291f-4e51-b74d-f7027b8e481f req-99dfaa31-d012-49ea-a45f-33b98d6a58fc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-774c40d8-01ae-49cc-ad06-05003232491a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:36:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 280 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.6 MiB/s wr, 191 op/s
Nov 25 08:36:27 compute-0 ceph-mon[75015]: pgmap v1603: 321 pgs: 321 active+clean; 280 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.6 MiB/s wr, 191 op/s
Nov 25 08:36:27 compute-0 nova_compute[253538]: 2025-11-25 08:36:27.729 253542 DEBUG oslo_concurrency.processutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6c10a34e-4126-4e88-ad4d-ba7c407a379e/disk.config 6c10a34e-4126-4e88-ad4d-ba7c407a379e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.870s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:27 compute-0 nova_compute[253538]: 2025-11-25 08:36:27.730 253542 INFO nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Deleting local config drive /var/lib/nova/instances/6c10a34e-4126-4e88-ad4d-ba7c407a379e/disk.config because it was imported into RBD.
Nov 25 08:36:27 compute-0 nova_compute[253538]: 2025-11-25 08:36:27.746 253542 DEBUG oslo_concurrency.processutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 774c40d8-01ae-49cc-ad06-05003232491a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:27 compute-0 NetworkManager[48915]: <info>  [1764059787.7938] manager: (tap8bb4cdd1-80): new Tun device (/org/freedesktop/NetworkManager/Devices/309)
Nov 25 08:36:27 compute-0 kernel: tap8bb4cdd1-80: entered promiscuous mode
Nov 25 08:36:27 compute-0 ovn_controller[152859]: 2025-11-25T08:36:27Z|00690|binding|INFO|Claiming lport 8bb4cdd1-8082-4ad3-9350-be7270fb373b for this chassis.
Nov 25 08:36:27 compute-0 ovn_controller[152859]: 2025-11-25T08:36:27Z|00691|binding|INFO|8bb4cdd1-8082-4ad3-9350-be7270fb373b: Claiming fa:16:3e:a4:57:05 10.100.0.3
Nov 25 08:36:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:27.824 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:57:05 10.100.0.3'], port_security=['fa:16:3e:a4:57:05 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6c10a34e-4126-4e88-ad4d-ba7c407a379e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa56d31750374b64b67d1be19bb4e989', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8d5f0080-1d0d-4c90-8c84-7d1437ba109d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec4cd11-f7ee-427b-8b83-e8d21ceafdd1, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8bb4cdd1-8082-4ad3-9350-be7270fb373b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:36:27 compute-0 systemd-udevd[327468]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:36:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:27.826 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8bb4cdd1-8082-4ad3-9350-be7270fb373b in datapath 30bd3cca-f71b-4541-ae95-8d0eb4dfe470 bound to our chassis
Nov 25 08:36:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:27.827 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30bd3cca-f71b-4541-ae95-8d0eb4dfe470
Nov 25 08:36:27 compute-0 systemd-machined[215790]: New machine qemu-85-instance-00000048.
Nov 25 08:36:27 compute-0 NetworkManager[48915]: <info>  [1764059787.8421] device (tap8bb4cdd1-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:36:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:27.839 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b15af25e-b4b4-4b24-9acc-d9aec0c35678]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:27.841 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap30bd3cca-f1 in ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:36:27 compute-0 NetworkManager[48915]: <info>  [1764059787.8431] device (tap8bb4cdd1-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:36:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:27.844 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap30bd3cca-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:36:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:27.844 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb8796d-73a1-48ca-bf63-556aad5c52c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:27.846 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad1c82b-efc6-48a0-9d52-646094c99913]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:27 compute-0 nova_compute[253538]: 2025-11-25 08:36:27.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:27.856 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3137c3-f0c4-45c4-accb-1f3624fe26f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:27 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-00000048.
Nov 25 08:36:27 compute-0 ovn_controller[152859]: 2025-11-25T08:36:27Z|00692|binding|INFO|Setting lport 8bb4cdd1-8082-4ad3-9350-be7270fb373b ovn-installed in OVS
Nov 25 08:36:27 compute-0 ovn_controller[152859]: 2025-11-25T08:36:27Z|00693|binding|INFO|Setting lport 8bb4cdd1-8082-4ad3-9350-be7270fb373b up in Southbound
Nov 25 08:36:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:27.881 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4f589e06-7978-44af-a474-b728756d7dc5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:27 compute-0 nova_compute[253538]: 2025-11-25 08:36:27.890 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:27 compute-0 nova_compute[253538]: 2025-11-25 08:36:27.898 253542 DEBUG nova.storage.rbd_utils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] resizing rbd image 774c40d8-01ae-49cc-ad06-05003232491a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:36:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:27.908 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8c52b083-feaa-4bed-9472-962df05c55b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:27.913 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ca0a4cfa-bf1d-458c-a08e-25c863818e38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:27 compute-0 systemd-udevd[327476]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:36:27 compute-0 NetworkManager[48915]: <info>  [1764059787.9148] manager: (tap30bd3cca-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/310)
Nov 25 08:36:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:27.944 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[610cb690-18ef-4c0b-8f01-8b3f7e9633bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:27.946 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[93ef9dff-656e-434c-981f-1f20b57ed0d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:27 compute-0 NetworkManager[48915]: <info>  [1764059787.9688] device (tap30bd3cca-f0): carrier: link connected
Nov 25 08:36:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:27.974 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d6c7e5-44ef-4b1c-8c02-1a2bf388913c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:27.990 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2733f464-cd0b-47fe-90c7-5f90f1260909]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30bd3cca-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:18:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514256, 'reachable_time': 18429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327552, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.004 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[091f5e0f-fb82-4e80-869b-7460f387520c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:1842'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514256, 'tstamp': 514256}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327553, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.027 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0b239b8f-a13f-48c6-baad-08df5c56d1e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30bd3cca-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:18:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514256, 'reachable_time': 18429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327554, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.056 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9d82a3-d9c3-47e8-90b5-2faa335c016a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.081 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059773.0802622, 2274b091-0cde-4f9c-a067-0bec4dd614e4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.081 253542 INFO nova.compute.manager [-] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] VM Stopped (Lifecycle Event)
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.097 253542 DEBUG nova.compute.manager [None req-47a0e74e-3edd-4142-b00e-38ea68eacce0 - - - - - -] [instance: 2274b091-0cde-4f9c-a067-0bec4dd614e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.213 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ed41b58b-c9f7-489b-a82c-83c555acf446]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.215 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30bd3cca-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.216 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.217 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30bd3cca-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:28 compute-0 NetworkManager[48915]: <info>  [1764059788.2197] manager: (tap30bd3cca-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.219 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:28 compute-0 kernel: tap30bd3cca-f0: entered promiscuous mode
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.230 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30bd3cca-f0, col_values=(('external_ids', {'iface-id': 'fb637403-e23d-4de9-9c54-1f8015bf829f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.232 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:28 compute-0 ovn_controller[152859]: 2025-11-25T08:36:28Z|00694|binding|INFO|Releasing lport fb637403-e23d-4de9-9c54-1f8015bf829f from this chassis (sb_readonly=0)
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.255 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.261 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.262 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/30bd3cca-f71b-4541-ae95-8d0eb4dfe470.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/30bd3cca-f71b-4541-ae95-8d0eb4dfe470.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.263 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[89a08248-464e-4ef3-b709-da3e8d964be4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.265 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-30bd3cca-f71b-4541-ae95-8d0eb4dfe470
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/30bd3cca-f71b-4541-ae95-8d0eb4dfe470.pid.haproxy
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 30bd3cca-f71b-4541-ae95-8d0eb4dfe470
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.268 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'env', 'PROCESS_TAG=haproxy-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/30bd3cca-f71b-4541-ae95-8d0eb4dfe470.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.396 253542 DEBUG oslo_concurrency.processutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b62dacb0-2605-4b3f-b00a-9ecf5d2728f7/disk.config b62dacb0-2605-4b3f-b00a-9ecf5d2728f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.398 253542 INFO nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Deleting local config drive /var/lib/nova/instances/b62dacb0-2605-4b3f-b00a-9ecf5d2728f7/disk.config because it was imported into RBD.
Nov 25 08:36:28 compute-0 NetworkManager[48915]: <info>  [1764059788.4609] manager: (tap41bc52e3-37): new Tun device (/org/freedesktop/NetworkManager/Devices/312)
Nov 25 08:36:28 compute-0 kernel: tap41bc52e3-37: entered promiscuous mode
Nov 25 08:36:28 compute-0 NetworkManager[48915]: <info>  [1764059788.4850] device (tap41bc52e3-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:36:28 compute-0 NetworkManager[48915]: <info>  [1764059788.4858] device (tap41bc52e3-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.481 253542 DEBUG nova.compute.manager [req-a89e5358-c026-465a-a526-7864be6788d9 req-3e8c5cc9-2bd4-43e6-aa72-a03b79546e7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received event network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.482 253542 DEBUG oslo_concurrency.lockutils [req-a89e5358-c026-465a-a526-7864be6788d9 req-3e8c5cc9-2bd4-43e6-aa72-a03b79546e7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.482 253542 DEBUG oslo_concurrency.lockutils [req-a89e5358-c026-465a-a526-7864be6788d9 req-3e8c5cc9-2bd4-43e6-aa72-a03b79546e7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.482 253542 DEBUG oslo_concurrency.lockutils [req-a89e5358-c026-465a-a526-7864be6788d9 req-3e8c5cc9-2bd4-43e6-aa72-a03b79546e7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.483 253542 DEBUG nova.compute.manager [req-a89e5358-c026-465a-a526-7864be6788d9 req-3e8c5cc9-2bd4-43e6-aa72-a03b79546e7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Processing event network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:28 compute-0 ovn_controller[152859]: 2025-11-25T08:36:28Z|00695|binding|INFO|Claiming lport 41bc52e3-37ed-4096-9d29-9868b1e29c3b for this chassis.
Nov 25 08:36:28 compute-0 ovn_controller[152859]: 2025-11-25T08:36:28Z|00696|binding|INFO|41bc52e3-37ed-4096-9d29-9868b1e29c3b: Claiming fa:16:3e:91:21:01 10.100.0.9
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.560 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:21:01 10.100.0.9'], port_security=['fa:16:3e:91:21:01 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b62dacb0-2605-4b3f-b00a-9ecf5d2728f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa56d31750374b64b67d1be19bb4e989', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8d5f0080-1d0d-4c90-8c84-7d1437ba109d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec4cd11-f7ee-427b-8b83-e8d21ceafdd1, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=41bc52e3-37ed-4096-9d29-9868b1e29c3b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:36:28 compute-0 ovn_controller[152859]: 2025-11-25T08:36:28Z|00697|binding|INFO|Setting lport 41bc52e3-37ed-4096-9d29-9868b1e29c3b ovn-installed in OVS
Nov 25 08:36:28 compute-0 ovn_controller[152859]: 2025-11-25T08:36:28Z|00698|binding|INFO|Setting lport 41bc52e3-37ed-4096-9d29-9868b1e29c3b up in Southbound
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:28 compute-0 systemd-machined[215790]: New machine qemu-86-instance-00000049.
Nov 25 08:36:28 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-00000049.
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.645 253542 DEBUG oslo_concurrency.processutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/04d190be-1443-48a9-ad51-3625b65dff6c/disk.config 04d190be-1443-48a9-ad51-3625b65dff6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.986s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.646 253542 INFO nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Deleting local config drive /var/lib/nova/instances/04d190be-1443-48a9-ad51-3625b65dff6c/disk.config because it was imported into RBD.
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.659 253542 DEBUG nova.objects.instance [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lazy-loading 'migration_context' on Instance uuid 774c40d8-01ae-49cc-ad06-05003232491a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.680 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.680 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Ensure instance console log exists: /var/lib/nova/instances/774c40d8-01ae-49cc-ad06-05003232491a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.681 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.682 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.682 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.685 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Start _get_guest_xml network_info=[{"id": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "address": "fa:16:3e:d1:94:3d", "network": {"id": "46e51a76-3d39-490e-ba9a-1d5ac591a461", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-62640839-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb75cadcaecd4a2fb54df6dd80902908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7ef71e-d2", "ovs_interfaceid": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.690 253542 WARNING nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:36:28 compute-0 kernel: tap62ab60cc-67: entered promiscuous mode
Nov 25 08:36:28 compute-0 NetworkManager[48915]: <info>  [1764059788.6984] manager: (tap62ab60cc-67): new Tun device (/org/freedesktop/NetworkManager/Devices/313)
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.698 253542 DEBUG nova.virt.libvirt.host [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.699 253542 DEBUG nova.virt.libvirt.host [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.702 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:28 compute-0 ovn_controller[152859]: 2025-11-25T08:36:28Z|00699|binding|INFO|Claiming lport 62ab60cc-67a7-4f48-a964-8684f3731d02 for this chassis.
Nov 25 08:36:28 compute-0 ovn_controller[152859]: 2025-11-25T08:36:28Z|00700|binding|INFO|62ab60cc-67a7-4f48-a964-8684f3731d02: Claiming fa:16:3e:84:f3:02 10.100.0.12
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.715 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:f3:02 10.100.0.12'], port_security=['fa:16:3e:84:f3:02 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '04d190be-1443-48a9-ad51-3625b65dff6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa56d31750374b64b67d1be19bb4e989', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8d5f0080-1d0d-4c90-8c84-7d1437ba109d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec4cd11-f7ee-427b-8b83-e8d21ceafdd1, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=62ab60cc-67a7-4f48-a964-8684f3731d02) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.716 253542 DEBUG nova.virt.libvirt.host [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.717 253542 DEBUG nova.virt.libvirt.host [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.718 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.718 253542 DEBUG nova.virt.hardware [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.719 253542 DEBUG nova.virt.hardware [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.720 253542 DEBUG nova.virt.hardware [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.720 253542 DEBUG nova.virt.hardware [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.720 253542 DEBUG nova.virt.hardware [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:36:28 compute-0 NetworkManager[48915]: <info>  [1764059788.7209] device (tap62ab60cc-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.721 253542 DEBUG nova.virt.hardware [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.721 253542 DEBUG nova.virt.hardware [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:36:28 compute-0 NetworkManager[48915]: <info>  [1764059788.7218] device (tap62ab60cc-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.722 253542 DEBUG nova.virt.hardware [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.722 253542 DEBUG nova.virt.hardware [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.722 253542 DEBUG nova.virt.hardware [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.723 253542 DEBUG nova.virt.hardware [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:36:28 compute-0 podman[327644]: 2025-11-25 08:36:28.724345708 +0000 UTC m=+0.077544235 container create 53838f3658ab9d32225ce5cb1d0e41cbf77092af4d4545bd0e0dd1501d76a9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.728 253542 DEBUG oslo_concurrency.processutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:28 compute-0 ovn_controller[152859]: 2025-11-25T08:36:28Z|00701|binding|INFO|Setting lport 62ab60cc-67a7-4f48-a964-8684f3731d02 ovn-installed in OVS
Nov 25 08:36:28 compute-0 ovn_controller[152859]: 2025-11-25T08:36:28Z|00702|binding|INFO|Setting lport 62ab60cc-67a7-4f48-a964-8684f3731d02 up in Southbound
Nov 25 08:36:28 compute-0 systemd-machined[215790]: New machine qemu-87-instance-0000004a.
Nov 25 08:36:28 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-0000004a.
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.765 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:28 compute-0 systemd[1]: Started libpod-conmon-53838f3658ab9d32225ce5cb1d0e41cbf77092af4d4545bd0e0dd1501d76a9c0.scope.
Nov 25 08:36:28 compute-0 podman[327644]: 2025-11-25 08:36:28.680184201 +0000 UTC m=+0.033382748 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:36:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:36:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e28477818bfc2658c000012d9190692087ae09da101bd0facd99ecffda3b6d4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:36:28 compute-0 podman[327644]: 2025-11-25 08:36:28.803242289 +0000 UTC m=+0.156440836 container init 53838f3658ab9d32225ce5cb1d0e41cbf77092af4d4545bd0e0dd1501d76a9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:36:28 compute-0 podman[327644]: 2025-11-25 08:36:28.813338819 +0000 UTC m=+0.166537346 container start 53838f3658ab9d32225ce5cb1d0e41cbf77092af4d4545bd0e0dd1501d76a9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:36:28 compute-0 neutron-haproxy-ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470[327701]: [NOTICE]   (327711) : New worker (327714) forked
Nov 25 08:36:28 compute-0 neutron-haproxy-ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470[327701]: [NOTICE]   (327711) : Loading success.
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.846 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059788.8460057, 6c10a34e-4126-4e88-ad4d-ba7c407a379e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.847 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] VM Started (Lifecycle Event)
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.852 253542 DEBUG nova.compute.manager [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.860 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 41bc52e3-37ed-4096-9d29-9868b1e29c3b in datapath 30bd3cca-f71b-4541-ae95-8d0eb4dfe470 unbound from our chassis
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.862 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30bd3cca-f71b-4541-ae95-8d0eb4dfe470
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.869 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.871 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.879 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[80875c66-937e-4e3c-83aa-b6445d133798]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.880 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.883 253542 INFO nova.virt.libvirt.driver [-] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Instance spawned successfully.
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.884 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.904 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.905 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059788.8462193, 6c10a34e-4126-4e88-ad4d-ba7c407a379e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.905 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] VM Paused (Lifecycle Event)
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.914 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6c64985f-45a7-418d-94c2-29e8a6e77ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.917 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e8ada52d-8244-4ec6-918b-45eff19eb66e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.917 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.918 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.919 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.920 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.920 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.921 253542 DEBUG nova.virt.libvirt.driver [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.927 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.931 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059788.8667445, 6c10a34e-4126-4e88-ad4d-ba7c407a379e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.931 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] VM Resumed (Lifecycle Event)
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.936 253542 DEBUG nova.compute.manager [req-55ea6113-3bd3-40ff-9f49-8c6d349372bf req-dfb3415a-a0bd-4602-ae7e-ae4c0a391f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Received event network-vif-plugged-41bc52e3-37ed-4096-9d29-9868b1e29c3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.936 253542 DEBUG oslo_concurrency.lockutils [req-55ea6113-3bd3-40ff-9f49-8c6d349372bf req-dfb3415a-a0bd-4602-ae7e-ae4c0a391f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.937 253542 DEBUG oslo_concurrency.lockutils [req-55ea6113-3bd3-40ff-9f49-8c6d349372bf req-dfb3415a-a0bd-4602-ae7e-ae4c0a391f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.937 253542 DEBUG oslo_concurrency.lockutils [req-55ea6113-3bd3-40ff-9f49-8c6d349372bf req-dfb3415a-a0bd-4602-ae7e-ae4c0a391f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.937 253542 DEBUG nova.compute.manager [req-55ea6113-3bd3-40ff-9f49-8c6d349372bf req-dfb3415a-a0bd-4602-ae7e-ae4c0a391f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Processing event network-vif-plugged-41bc52e3-37ed-4096-9d29-9868b1e29c3b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.942 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b5e69d-9e44-4160-aab9-ce7e466046d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.952 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.956 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.959 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[15009138-03d0-46d2-963a-ab9c1ca2e268]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30bd3cca-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:18:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514256, 'reachable_time': 18429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327747, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.977 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d415891-b682-4e19-885c-8933a88d3076]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap30bd3cca-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514276, 'tstamp': 514276}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327748, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap30bd3cca-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514280, 'tstamp': 514280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327748, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.979 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30bd3cca-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.980 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.982 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30bd3cca-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.982 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.982 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30bd3cca-f0, col_values=(('external_ids', {'iface-id': 'fb637403-e23d-4de9-9c54-1f8015bf829f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.983 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.983 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 62ab60cc-67a7-4f48-a964-8684f3731d02 in datapath 30bd3cca-f71b-4541-ae95-8d0eb4dfe470 unbound from our chassis
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.983 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:36:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.985 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30bd3cca-f71b-4541-ae95-8d0eb4dfe470
Nov 25 08:36:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:36:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2200244604' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:36:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:36:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2200244604' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.995 253542 INFO nova.compute.manager [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Took 15.00 seconds to spawn the instance on the hypervisor.
Nov 25 08:36:28 compute-0 nova_compute[253538]: 2025-11-25 08:36:28.996 253542 DEBUG nova.compute.manager [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:28.999 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[616da134-54aa-498e-b480-27e574ed33e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:29.026 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[98044e11-87a1-4aba-abec-3818965e28c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:29.029 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7022c3e1-42f9-4903-8a4f-d16687b22d24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2200244604' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:36:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2200244604' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.057 253542 INFO nova.compute.manager [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Took 15.97 seconds to build instance.
Nov 25 08:36:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:29.059 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e9ec1d-bb13-496b-bfc2-3fc4ab0968c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.077 253542 DEBUG oslo_concurrency.lockutils [None req-804c3ae4-faf1-4921-a916-94b044fffc34 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:29.078 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8520adfd-d712-479e-989b-98ac26a68ede]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30bd3cca-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:18:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 7, 'rx_bytes': 266, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 7, 'rx_bytes': 266, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514256, 'reachable_time': 18429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327754, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:29.093 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb1352b-1c10-4230-9742-ba630c63c93b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap30bd3cca-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514276, 'tstamp': 514276}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327755, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap30bd3cca-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514280, 'tstamp': 514280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327755, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:29.095 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30bd3cca-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:29.097 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30bd3cca-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:29.097 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.097 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:29.098 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30bd3cca-f0, col_values=(('external_ids', {'iface-id': 'fb637403-e23d-4de9-9c54-1f8015bf829f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:29.098 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:36:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/421271548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.238 253542 DEBUG oslo_concurrency.processutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.264 253542 DEBUG nova.storage.rbd_utils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] rbd image 774c40d8-01ae-49cc-ad06-05003232491a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.268 253542 DEBUG oslo_concurrency.processutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 297 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 451 KiB/s rd, 6.3 MiB/s wr, 124 op/s
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.438 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059789.4379547, 04d190be-1443-48a9-ad51-3625b65dff6c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.438 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] VM Started (Lifecycle Event)
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.457 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.460 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059789.4406674, 04d190be-1443-48a9-ad51-3625b65dff6c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.460 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] VM Paused (Lifecycle Event)
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.472 253542 DEBUG nova.compute.manager [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.482 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.482 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.488 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.497 253542 DEBUG oslo_concurrency.lockutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "806e081d-6b1a-4909-be7c-5490c631ebfe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.498 253542 DEBUG oslo_concurrency.lockutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "806e081d-6b1a-4909-be7c-5490c631ebfe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.501 253542 INFO nova.virt.libvirt.driver [-] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Instance spawned successfully.
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.501 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.510 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.511 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059789.4715147, b62dacb0-2605-4b3f-b00a-9ecf5d2728f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.511 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] VM Started (Lifecycle Event)
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.530 253542 DEBUG nova.compute.manager [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.536 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.544 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.544 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.546 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.546 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.548 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.548 253542 DEBUG nova.virt.libvirt.driver [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.558 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.596 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.597 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059789.4716775, b62dacb0-2605-4b3f-b00a-9ecf5d2728f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.597 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] VM Paused (Lifecycle Event)
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.620 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.625 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059789.4754765, b62dacb0-2605-4b3f-b00a-9ecf5d2728f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.626 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] VM Resumed (Lifecycle Event)
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.633 253542 DEBUG oslo_concurrency.lockutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.633 253542 DEBUG oslo_concurrency.lockutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.640 253542 DEBUG nova.virt.hardware [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.640 253542 INFO nova.compute.claims [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.646 253542 INFO nova.compute.manager [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Took 12.67 seconds to spawn the instance on the hypervisor.
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.647 253542 DEBUG nova.compute.manager [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.666 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.669 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.695 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.706 253542 INFO nova.compute.manager [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Took 13.69 seconds to build instance.
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.724 253542 DEBUG oslo_concurrency.lockutils [None req-fd936a8e-d836-4038-a457-1a811e52a4e5 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3265262333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.825 253542 DEBUG oslo_concurrency.processutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.827 253542 DEBUG nova.virt.libvirt.vif [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:36:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-2049682504',display_name='tempest-ServerMetadataTestJSON-server-2049682504',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-2049682504',id=75,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fb75cadcaecd4a2fb54df6dd80902908',ramdisk_id='',reservation_id='r-jy21tybp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-470098851',owner_user_name='tempest-ServerMetadataTestJSON-470098851-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:36:21Z,user_data=None,user_id='1aaf041a5d4344a1b22e039b0d22e198',uuid=774c40d8-01ae-49cc-ad06-05003232491a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "address": "fa:16:3e:d1:94:3d", "network": {"id": "46e51a76-3d39-490e-ba9a-1d5ac591a461", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-62640839-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb75cadcaecd4a2fb54df6dd80902908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7ef71e-d2", "ovs_interfaceid": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.828 253542 DEBUG nova.network.os_vif_util [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Converting VIF {"id": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "address": "fa:16:3e:d1:94:3d", "network": {"id": "46e51a76-3d39-490e-ba9a-1d5ac591a461", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-62640839-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb75cadcaecd4a2fb54df6dd80902908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7ef71e-d2", "ovs_interfaceid": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.829 253542 DEBUG nova.network.os_vif_util [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=8d7ef71e-d272-4075-b3f5-8a029c7c860f,network=Network(46e51a76-3d39-490e-ba9a-1d5ac591a461),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7ef71e-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.830 253542 DEBUG nova.objects.instance [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lazy-loading 'pci_devices' on Instance uuid 774c40d8-01ae-49cc-ad06-05003232491a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.832 253542 DEBUG oslo_concurrency.processutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.882 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:36:29 compute-0 nova_compute[253538]:   <uuid>774c40d8-01ae-49cc-ad06-05003232491a</uuid>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   <name>instance-0000004b</name>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerMetadataTestJSON-server-2049682504</nova:name>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:36:28</nova:creationTime>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:36:29 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:36:29 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:36:29 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:36:29 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:36:29 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:36:29 compute-0 nova_compute[253538]:         <nova:user uuid="1aaf041a5d4344a1b22e039b0d22e198">tempest-ServerMetadataTestJSON-470098851-project-member</nova:user>
Nov 25 08:36:29 compute-0 nova_compute[253538]:         <nova:project uuid="fb75cadcaecd4a2fb54df6dd80902908">tempest-ServerMetadataTestJSON-470098851</nova:project>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:36:29 compute-0 nova_compute[253538]:         <nova:port uuid="8d7ef71e-d272-4075-b3f5-8a029c7c860f">
Nov 25 08:36:29 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <system>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <entry name="serial">774c40d8-01ae-49cc-ad06-05003232491a</entry>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <entry name="uuid">774c40d8-01ae-49cc-ad06-05003232491a</entry>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     </system>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   <os>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   </os>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   <features>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   </features>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/774c40d8-01ae-49cc-ad06-05003232491a_disk">
Nov 25 08:36:29 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:29 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/774c40d8-01ae-49cc-ad06-05003232491a_disk.config">
Nov 25 08:36:29 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:29 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:d1:94:3d"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <target dev="tap8d7ef71e-d2"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/774c40d8-01ae-49cc-ad06-05003232491a/console.log" append="off"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <video>
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     </video>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:36:29 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:36:29 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:36:29 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:36:29 compute-0 nova_compute[253538]: </domain>
Nov 25 08:36:29 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.889 253542 DEBUG nova.compute.manager [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Preparing to wait for external event network-vif-plugged-8d7ef71e-d272-4075-b3f5-8a029c7c860f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.890 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Acquiring lock "774c40d8-01ae-49cc-ad06-05003232491a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.891 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lock "774c40d8-01ae-49cc-ad06-05003232491a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.891 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lock "774c40d8-01ae-49cc-ad06-05003232491a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.892 253542 DEBUG nova.virt.libvirt.vif [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:36:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-2049682504',display_name='tempest-ServerMetadataTestJSON-server-2049682504',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-2049682504',id=75,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fb75cadcaecd4a2fb54df6dd80902908',ramdisk_id='',reservation_id='r-jy21tybp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-470098851',owner_user_name='tempest-ServerMetadataTestJSON-470098851-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:36:21Z,user_data=None,user_id='1aaf041a5d4344a1b22e039b0d22e198',uuid=774c40d8-01ae-49cc-ad06-05003232491a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "address": "fa:16:3e:d1:94:3d", "network": {"id": "46e51a76-3d39-490e-ba9a-1d5ac591a461", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-62640839-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb75cadcaecd4a2fb54df6dd80902908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7ef71e-d2", "ovs_interfaceid": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.893 253542 DEBUG nova.network.os_vif_util [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Converting VIF {"id": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "address": "fa:16:3e:d1:94:3d", "network": {"id": "46e51a76-3d39-490e-ba9a-1d5ac591a461", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-62640839-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb75cadcaecd4a2fb54df6dd80902908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7ef71e-d2", "ovs_interfaceid": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.894 253542 DEBUG nova.network.os_vif_util [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=8d7ef71e-d272-4075-b3f5-8a029c7c860f,network=Network(46e51a76-3d39-490e-ba9a-1d5ac591a461),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7ef71e-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.895 253542 DEBUG os_vif [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=8d7ef71e-d272-4075-b3f5-8a029c7c860f,network=Network(46e51a76-3d39-490e-ba9a-1d5ac591a461),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7ef71e-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.896 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.897 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.898 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.902 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d7ef71e-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.903 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d7ef71e-d2, col_values=(('external_ids', {'iface-id': '8d7ef71e-d272-4075-b3f5-8a029c7c860f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:94:3d', 'vm-uuid': '774c40d8-01ae-49cc-ad06-05003232491a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.904 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:29 compute-0 NetworkManager[48915]: <info>  [1764059789.9054] manager: (tap8d7ef71e-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.907 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.914 253542 INFO os_vif [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=8d7ef71e-d272-4075-b3f5-8a029c7c860f,network=Network(46e51a76-3d39-490e-ba9a-1d5ac591a461),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7ef71e-d2')
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.960 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.960 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.961 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] No VIF found with MAC fa:16:3e:d1:94:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.961 253542 INFO nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Using config drive
Nov 25 08:36:29 compute-0 nova_compute[253538]: 2025-11-25 08:36:29.980 253542 DEBUG nova.storage.rbd_utils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] rbd image 774c40d8-01ae-49cc-ad06-05003232491a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/421271548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:30 compute-0 ceph-mon[75015]: pgmap v1604: 321 pgs: 321 active+clean; 297 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 451 KiB/s rd, 6.3 MiB/s wr, 124 op/s
Nov 25 08:36:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3265262333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:36:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925613141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.307 253542 DEBUG oslo_concurrency.processutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.312 253542 DEBUG nova.compute.provider_tree [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.324 253542 DEBUG nova.scheduler.client.report [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.364 253542 INFO nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Creating config drive at /var/lib/nova/instances/774c40d8-01ae-49cc-ad06-05003232491a/disk.config
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.368 253542 DEBUG oslo_concurrency.processutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/774c40d8-01ae-49cc-ad06-05003232491a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1lq4qxup execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.408 253542 DEBUG oslo_concurrency.lockutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.410 253542 DEBUG nova.compute.manager [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.511 253542 DEBUG nova.compute.manager [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.518 253542 DEBUG oslo_concurrency.processutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/774c40d8-01ae-49cc-ad06-05003232491a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1lq4qxup" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.547 253542 DEBUG nova.storage.rbd_utils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] rbd image 774c40d8-01ae-49cc-ad06-05003232491a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.551 253542 DEBUG oslo_concurrency.processutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/774c40d8-01ae-49cc-ad06-05003232491a/disk.config 774c40d8-01ae-49cc-ad06-05003232491a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.597 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.599 253542 INFO nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.607 253542 DEBUG nova.compute.manager [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received event network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.608 253542 DEBUG oslo_concurrency.lockutils [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.609 253542 DEBUG oslo_concurrency.lockutils [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.609 253542 DEBUG oslo_concurrency.lockutils [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.610 253542 DEBUG nova.compute.manager [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] No waiting events found dispatching network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.610 253542 WARNING nova.compute.manager [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received unexpected event network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b for instance with vm_state active and task_state None.
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.611 253542 DEBUG nova.compute.manager [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Received event network-vif-plugged-62ab60cc-67a7-4f48-a964-8684f3731d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.611 253542 DEBUG oslo_concurrency.lockutils [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "04d190be-1443-48a9-ad51-3625b65dff6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.612 253542 DEBUG oslo_concurrency.lockutils [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "04d190be-1443-48a9-ad51-3625b65dff6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.613 253542 DEBUG oslo_concurrency.lockutils [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "04d190be-1443-48a9-ad51-3625b65dff6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.613 253542 DEBUG nova.compute.manager [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Processing event network-vif-plugged-62ab60cc-67a7-4f48-a964-8684f3731d02 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.614 253542 DEBUG nova.compute.manager [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Received event network-vif-plugged-62ab60cc-67a7-4f48-a964-8684f3731d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.615 253542 DEBUG oslo_concurrency.lockutils [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "04d190be-1443-48a9-ad51-3625b65dff6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.615 253542 DEBUG oslo_concurrency.lockutils [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "04d190be-1443-48a9-ad51-3625b65dff6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.616 253542 DEBUG oslo_concurrency.lockutils [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "04d190be-1443-48a9-ad51-3625b65dff6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.616 253542 DEBUG nova.compute.manager [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] No waiting events found dispatching network-vif-plugged-62ab60cc-67a7-4f48-a964-8684f3731d02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.617 253542 WARNING nova.compute.manager [req-b5407e27-0dab-4607-a664-31ca9cf53a9c req-5a889769-65a4-4710-b3b3-9a8458a1c599 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Received unexpected event network-vif-plugged-62ab60cc-67a7-4f48-a964-8684f3731d02 for instance with vm_state building and task_state spawning.
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.619 253542 DEBUG nova.compute.manager [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.624 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.625 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059790.6242664, 04d190be-1443-48a9-ad51-3625b65dff6c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.627 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] VM Resumed (Lifecycle Event)
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.635 253542 INFO nova.virt.libvirt.driver [-] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Instance spawned successfully.
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.636 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.647 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.654 253542 DEBUG nova.compute.manager [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.659 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.664 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.665 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.665 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.666 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.667 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.667 253542 DEBUG nova.virt.libvirt.driver [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.692 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.723 253542 DEBUG oslo_concurrency.lockutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "89bd5b48-efcd-45aa-98f5-e9d9f8373467" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.724 253542 DEBUG oslo_concurrency.lockutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "89bd5b48-efcd-45aa-98f5-e9d9f8373467" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.728 253542 DEBUG oslo_concurrency.processutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/774c40d8-01ae-49cc-ad06-05003232491a/disk.config 774c40d8-01ae-49cc-ad06-05003232491a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.730 253542 INFO nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Deleting local config drive /var/lib/nova/instances/774c40d8-01ae-49cc-ad06-05003232491a/disk.config because it was imported into RBD.
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.732 253542 INFO nova.compute.manager [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Took 11.77 seconds to spawn the instance on the hypervisor.
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.736 253542 DEBUG nova.compute.manager [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.755 253542 DEBUG nova.compute.manager [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:36:30 compute-0 NetworkManager[48915]: <info>  [1764059790.7751] manager: (tap8d7ef71e-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/315)
Nov 25 08:36:30 compute-0 kernel: tap8d7ef71e-d2: entered promiscuous mode
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.783 253542 DEBUG nova.compute.manager [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:36:30 compute-0 ovn_controller[152859]: 2025-11-25T08:36:30Z|00703|binding|INFO|Claiming lport 8d7ef71e-d272-4075-b3f5-8a029c7c860f for this chassis.
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.784 253542 DEBUG nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:36:30 compute-0 ovn_controller[152859]: 2025-11-25T08:36:30Z|00704|binding|INFO|8d7ef71e-d272-4075-b3f5-8a029c7c860f: Claiming fa:16:3e:d1:94:3d 10.100.0.9
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.784 253542 INFO nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Creating image(s)
Nov 25 08:36:30 compute-0 NetworkManager[48915]: <info>  [1764059790.7914] device (tap8d7ef71e-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:36:30 compute-0 NetworkManager[48915]: <info>  [1764059790.7928] device (tap8d7ef71e-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:36:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:30.799 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:94:3d 10.100.0.9'], port_security=['fa:16:3e:d1:94:3d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '774c40d8-01ae-49cc-ad06-05003232491a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-46e51a76-3d39-490e-ba9a-1d5ac591a461', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb75cadcaecd4a2fb54df6dd80902908', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ba2bb850-2bd0-481a-a946-d9347bf738fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bcba803-2428-46d3-9e5d-7a3639faf1f8, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8d7ef71e-d272-4075-b3f5-8a029c7c860f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:36:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:30.800 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8d7ef71e-d272-4075-b3f5-8a029c7c860f in datapath 46e51a76-3d39-490e-ba9a-1d5ac591a461 bound to our chassis
Nov 25 08:36:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:30.802 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 46e51a76-3d39-490e-ba9a-1d5ac591a461
Nov 25 08:36:30 compute-0 systemd-machined[215790]: New machine qemu-88-instance-0000004b.
Nov 25 08:36:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:30.820 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[87ae50e9-7bf9-490a-ab54-e8e4a479b961]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:30.821 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap46e51a76-31 in ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:36:30 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-0000004b.
Nov 25 08:36:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:30.823 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap46e51a76-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:36:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:30.823 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ade0a1f5-7bd9-4d94-8052-fca1b29df5f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:30.825 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f7062925-53da-4fe1-a331-5150ee6018ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:30.843 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[8060cb5e-edfa-4d9c-88ed-a9bae73b0432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.851 253542 DEBUG nova.storage.rbd_utils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 806e081d-6b1a-4909-be7c-5490c631ebfe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:30.869 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[595d3402-af1e-4f97-8a9d-bb9e2e8c7f46]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:30 compute-0 ovn_controller[152859]: 2025-11-25T08:36:30Z|00705|binding|INFO|Setting lport 8d7ef71e-d272-4075-b3f5-8a029c7c860f ovn-installed in OVS
Nov 25 08:36:30 compute-0 ovn_controller[152859]: 2025-11-25T08:36:30Z|00706|binding|INFO|Setting lport 8d7ef71e-d272-4075-b3f5-8a029c7c860f up in Southbound
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.895 253542 DEBUG nova.storage.rbd_utils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 806e081d-6b1a-4909-be7c-5490c631ebfe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:30.900 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[34ad3d0a-6472-46c0-ad81-02d8562d6f75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:30 compute-0 NetworkManager[48915]: <info>  [1764059790.9131] manager: (tap46e51a76-30): new Veth device (/org/freedesktop/NetworkManager/Devices/316)
Nov 25 08:36:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:30.908 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c5615776-17dd-4516-9848-77a72974b258]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:30 compute-0 systemd-udevd[328040]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.964 253542 DEBUG nova.storage.rbd_utils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 806e081d-6b1a-4909-be7c-5490c631ebfe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:30.967 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[deb2fe22-9f97-4215-ac89-b3e4b3d5149c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:30.970 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6bfd556b-d404-4e14-bca4-3bd382d41607]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:30 compute-0 nova_compute[253538]: 2025-11-25 08:36:30.980 253542 DEBUG oslo_concurrency.processutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:30 compute-0 NetworkManager[48915]: <info>  [1764059790.9936] device (tap46e51a76-30): carrier: link connected
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:31.002 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e902e770-a45b-4027-9a05-630172ce1b5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.008 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:31.021 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[81b032ec-6020-4c62-a4cd-a287df24c7bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap46e51a76-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:27:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514559, 'reachable_time': 20477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328065, 'error': None, 'target': 'ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.025 253542 INFO nova.compute.manager [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Took 13.46 seconds to build instance.
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.034 253542 DEBUG oslo_concurrency.lockutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.035 253542 DEBUG oslo_concurrency.lockutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.042 253542 DEBUG nova.virt.hardware [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.042 253542 INFO nova.compute.claims [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.045 253542 DEBUG oslo_concurrency.processutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.046 253542 DEBUG oslo_concurrency.lockutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.046 253542 DEBUG oslo_concurrency.lockutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.047 253542 DEBUG oslo_concurrency.lockutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:31.053 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[77b8c023-2818-4c11-98bf-47e2d876a184]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:27bb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514559, 'tstamp': 514559}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328066, 'error': None, 'target': 'ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:31 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1925613141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:31.072 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[35d0f6fc-c351-4b65-8af0-593f7cfe9dcd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap46e51a76-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:27:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514559, 'reachable_time': 20477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328076, 'error': None, 'target': 'ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.080 253542 DEBUG nova.storage.rbd_utils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 806e081d-6b1a-4909-be7c-5490c631ebfe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.089 253542 DEBUG oslo_concurrency.processutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 806e081d-6b1a-4909-be7c-5490c631ebfe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:31.099 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e78129-dd50-4f21-828d-25c5eebe1104]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.125 253542 DEBUG oslo_concurrency.lockutils [None req-30680af9-34c3-47ab-8b2a-f27d808f941c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "04d190be-1443-48a9-ad51-3625b65dff6c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.170 253542 DEBUG nova.compute.manager [req-1237c299-9003-4336-a8c3-57a719213797 req-d36dd74e-8275-4fa0-b7df-cea45f57750c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Received event network-vif-plugged-41bc52e3-37ed-4096-9d29-9868b1e29c3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.171 253542 DEBUG oslo_concurrency.lockutils [req-1237c299-9003-4336-a8c3-57a719213797 req-d36dd74e-8275-4fa0-b7df-cea45f57750c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.172 253542 DEBUG oslo_concurrency.lockutils [req-1237c299-9003-4336-a8c3-57a719213797 req-d36dd74e-8275-4fa0-b7df-cea45f57750c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.173 253542 DEBUG oslo_concurrency.lockutils [req-1237c299-9003-4336-a8c3-57a719213797 req-d36dd74e-8275-4fa0-b7df-cea45f57750c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.173 253542 DEBUG nova.compute.manager [req-1237c299-9003-4336-a8c3-57a719213797 req-d36dd74e-8275-4fa0-b7df-cea45f57750c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] No waiting events found dispatching network-vif-plugged-41bc52e3-37ed-4096-9d29-9868b1e29c3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.174 253542 WARNING nova.compute.manager [req-1237c299-9003-4336-a8c3-57a719213797 req-d36dd74e-8275-4fa0-b7df-cea45f57750c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Received unexpected event network-vif-plugged-41bc52e3-37ed-4096-9d29-9868b1e29c3b for instance with vm_state active and task_state None.
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:31.207 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aafc0d31-ecdb-4844-abfa-b72fc21086dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:31.208 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap46e51a76-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:31.209 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:31.209 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap46e51a76-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:31 compute-0 NetworkManager[48915]: <info>  [1764059791.2120] manager: (tap46e51a76-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/317)
Nov 25 08:36:31 compute-0 kernel: tap46e51a76-30: entered promiscuous mode
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.214 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:31.221 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap46e51a76-30, col_values=(('external_ids', {'iface-id': '8241a3c6-f71d-4ce0-aad7-9ad7154e1d73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:31 compute-0 ovn_controller[152859]: 2025-11-25T08:36:31Z|00707|binding|INFO|Releasing lport 8241a3c6-f71d-4ce0-aad7-9ad7154e1d73 from this chassis (sb_readonly=0)
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:31.225 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/46e51a76-3d39-490e-ba9a-1d5ac591a461.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/46e51a76-3d39-490e-ba9a-1d5ac591a461.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.227 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:31.235 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf9dd05-7896-456a-b3f7-457b158297b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:31.238 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-46e51a76-3d39-490e-ba9a-1d5ac591a461
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/46e51a76-3d39-490e-ba9a-1d5ac591a461.pid.haproxy
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 46e51a76-3d39-490e-ba9a-1d5ac591a461
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:36:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:31.241 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461', 'env', 'PROCESS_TAG=haproxy-46e51a76-3d39-490e-ba9a-1d5ac591a461', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/46e51a76-3d39-490e-ba9a-1d5ac591a461.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.247 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1605: 321 pgs: 321 active+clean; 333 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 179 KiB/s rd, 7.5 MiB/s wr, 117 op/s
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.313 253542 DEBUG oslo_concurrency.processutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.421 253542 DEBUG oslo_concurrency.processutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 806e081d-6b1a-4909-be7c-5490c631ebfe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.545 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059791.4934845, 774c40d8-01ae-49cc-ad06-05003232491a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.546 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] VM Started (Lifecycle Event)
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.553 253542 DEBUG nova.storage.rbd_utils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] resizing rbd image 806e081d-6b1a-4909-be7c-5490c631ebfe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.579 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.584 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059791.4935944, 774c40d8-01ae-49cc-ad06-05003232491a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.585 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] VM Paused (Lifecycle Event)
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.600 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.606 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.648 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.655 253542 DEBUG nova.objects.instance [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lazy-loading 'migration_context' on Instance uuid 806e081d-6b1a-4909-be7c-5490c631ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.666 253542 DEBUG nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.666 253542 DEBUG nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Ensure instance console log exists: /var/lib/nova/instances/806e081d-6b1a-4909-be7c-5490c631ebfe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.667 253542 DEBUG oslo_concurrency.lockutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.667 253542 DEBUG oslo_concurrency.lockutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.667 253542 DEBUG oslo_concurrency.lockutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.669 253542 DEBUG nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.673 253542 WARNING nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.676 253542 DEBUG nova.virt.libvirt.host [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.677 253542 DEBUG nova.virt.libvirt.host [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.682 253542 DEBUG nova.virt.libvirt.host [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.683 253542 DEBUG nova.virt.libvirt.host [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.683 253542 DEBUG nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.683 253542 DEBUG nova.virt.hardware [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.684 253542 DEBUG nova.virt.hardware [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.684 253542 DEBUG nova.virt.hardware [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.684 253542 DEBUG nova.virt.hardware [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.684 253542 DEBUG nova.virt.hardware [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.685 253542 DEBUG nova.virt.hardware [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.685 253542 DEBUG nova.virt.hardware [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.685 253542 DEBUG nova.virt.hardware [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.685 253542 DEBUG nova.virt.hardware [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.685 253542 DEBUG nova.virt.hardware [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.686 253542 DEBUG nova.virt.hardware [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.688 253542 DEBUG oslo_concurrency.processutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:31 compute-0 podman[328269]: 2025-11-25 08:36:31.721795916 +0000 UTC m=+0.056849551 container create b72d04da50501247ba607d52f56acc0f865b55eff7b66188b6f7a6a399d45f79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 08:36:31 compute-0 systemd[1]: Started libpod-conmon-b72d04da50501247ba607d52f56acc0f865b55eff7b66188b6f7a6a399d45f79.scope.
Nov 25 08:36:31 compute-0 podman[328269]: 2025-11-25 08:36:31.693176535 +0000 UTC m=+0.028230220 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:36:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c4a0343413517a24bf4daf3bd714bfbfe09767cc3869688a3f94670942274a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:36:31 compute-0 podman[328269]: 2025-11-25 08:36:31.81699346 +0000 UTC m=+0.152047115 container init b72d04da50501247ba607d52f56acc0f865b55eff7b66188b6f7a6a399d45f79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:36:31 compute-0 podman[328269]: 2025-11-25 08:36:31.824057312 +0000 UTC m=+0.159110947 container start b72d04da50501247ba607d52f56acc0f865b55eff7b66188b6f7a6a399d45f79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 08:36:31 compute-0 neutron-haproxy-ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461[328284]: [NOTICE]   (328288) : New worker (328292) forked
Nov 25 08:36:31 compute-0 neutron-haproxy-ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461[328284]: [NOTICE]   (328288) : Loading success.
Nov 25 08:36:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:36:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4224171160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.933 253542 DEBUG oslo_concurrency.processutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.939 253542 DEBUG nova.compute.provider_tree [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.952 253542 DEBUG nova.scheduler.client.report [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.973 253542 DEBUG oslo_concurrency.lockutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:31 compute-0 nova_compute[253538]: 2025-11-25 08:36:31.974 253542 DEBUG nova.compute.manager [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.014 253542 DEBUG nova.compute.manager [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.027 253542 INFO nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.046 253542 DEBUG nova.compute.manager [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:36:32 compute-0 ceph-mon[75015]: pgmap v1605: 321 pgs: 321 active+clean; 333 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 179 KiB/s rd, 7.5 MiB/s wr, 117 op/s
Nov 25 08:36:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4224171160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.124 253542 DEBUG nova.compute.manager [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.126 253542 DEBUG nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.127 253542 INFO nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Creating image(s)
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.156 253542 DEBUG nova.storage.rbd_utils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.191 253542 DEBUG nova.storage.rbd_utils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1268048412' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.221 253542 DEBUG nova.storage.rbd_utils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.224 253542 DEBUG oslo_concurrency.processutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.255 253542 DEBUG oslo_concurrency.processutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.291 253542 DEBUG nova.storage.rbd_utils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 806e081d-6b1a-4909-be7c-5490c631ebfe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.303 253542 DEBUG oslo_concurrency.processutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.342 253542 DEBUG oslo_concurrency.processutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.343 253542 DEBUG oslo_concurrency.lockutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.345 253542 DEBUG oslo_concurrency.lockutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.346 253542 DEBUG oslo_concurrency.lockutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.372 253542 DEBUG nova.storage.rbd_utils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.376 253542 DEBUG oslo_concurrency.processutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.697 253542 DEBUG oslo_concurrency.processutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2566800783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.772 253542 DEBUG nova.storage.rbd_utils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] resizing rbd image 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.798 253542 DEBUG oslo_concurrency.processutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.800 253542 DEBUG nova.objects.instance [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lazy-loading 'pci_devices' on Instance uuid 806e081d-6b1a-4909-be7c-5490c631ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.812 253542 DEBUG nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:36:32 compute-0 nova_compute[253538]:   <uuid>806e081d-6b1a-4909-be7c-5490c631ebfe</uuid>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   <name>instance-0000004c</name>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerShowV247Test-server-332636866</nova:name>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:36:31</nova:creationTime>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:36:32 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:36:32 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:36:32 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:36:32 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:36:32 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:36:32 compute-0 nova_compute[253538]:         <nova:user uuid="6e7ed13625c1463da308fbfab28c4541">tempest-ServerShowV247Test-1492743735-project-member</nova:user>
Nov 25 08:36:32 compute-0 nova_compute[253538]:         <nova:project uuid="504c5057c1894355afeafd57cd62b7ab">tempest-ServerShowV247Test-1492743735</nova:project>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <system>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <entry name="serial">806e081d-6b1a-4909-be7c-5490c631ebfe</entry>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <entry name="uuid">806e081d-6b1a-4909-be7c-5490c631ebfe</entry>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     </system>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   <os>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   </os>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   <features>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   </features>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/806e081d-6b1a-4909-be7c-5490c631ebfe_disk">
Nov 25 08:36:32 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:32 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/806e081d-6b1a-4909-be7c-5490c631ebfe_disk.config">
Nov 25 08:36:32 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:32 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/806e081d-6b1a-4909-be7c-5490c631ebfe/console.log" append="off"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <video>
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     </video>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:36:32 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:36:32 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:36:32 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:36:32 compute-0 nova_compute[253538]: </domain>
Nov 25 08:36:32 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.869 253542 DEBUG nova.objects.instance [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lazy-loading 'migration_context' on Instance uuid 89bd5b48-efcd-45aa-98f5-e9d9f8373467 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.879 253542 DEBUG nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.879 253542 DEBUG nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Ensure instance console log exists: /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.879 253542 DEBUG oslo_concurrency.lockutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.880 253542 DEBUG oslo_concurrency.lockutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.880 253542 DEBUG oslo_concurrency.lockutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.881 253542 DEBUG nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.886 253542 DEBUG nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.886 253542 DEBUG nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.886 253542 INFO nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Using config drive
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.903 253542 DEBUG nova.storage.rbd_utils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 806e081d-6b1a-4909-be7c-5490c631ebfe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.910 253542 WARNING nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.914 253542 DEBUG nova.virt.libvirt.host [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.914 253542 DEBUG nova.virt.libvirt.host [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.917 253542 DEBUG nova.virt.libvirt.host [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.917 253542 DEBUG nova.virt.libvirt.host [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.917 253542 DEBUG nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.918 253542 DEBUG nova.virt.hardware [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.918 253542 DEBUG nova.virt.hardware [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.918 253542 DEBUG nova.virt.hardware [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.918 253542 DEBUG nova.virt.hardware [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.919 253542 DEBUG nova.virt.hardware [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.919 253542 DEBUG nova.virt.hardware [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.919 253542 DEBUG nova.virt.hardware [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.919 253542 DEBUG nova.virt.hardware [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.919 253542 DEBUG nova.virt.hardware [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.920 253542 DEBUG nova.virt.hardware [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.920 253542 DEBUG nova.virt.hardware [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:36:32 compute-0 nova_compute[253538]: 2025-11-25 08:36:32.922 253542 DEBUG oslo_concurrency.processutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1268048412' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2566800783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.075 253542 INFO nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Creating config drive at /var/lib/nova/instances/806e081d-6b1a-4909-be7c-5490c631ebfe/disk.config
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.080 253542 DEBUG oslo_concurrency.processutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/806e081d-6b1a-4909-be7c-5490c631ebfe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpck2kem5s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.221 253542 DEBUG oslo_concurrency.processutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/806e081d-6b1a-4909-be7c-5490c631ebfe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpck2kem5s" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.250 253542 DEBUG nova.storage.rbd_utils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 806e081d-6b1a-4909-be7c-5490c631ebfe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.254 253542 DEBUG oslo_concurrency.processutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/806e081d-6b1a-4909-be7c-5490c631ebfe/disk.config 806e081d-6b1a-4909-be7c-5490c631ebfe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1606: 321 pgs: 321 active+clean; 386 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.2 MiB/s wr, 294 op/s
Nov 25 08:36:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1737359623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.398 253542 DEBUG oslo_concurrency.processutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/806e081d-6b1a-4909-be7c-5490c631ebfe/disk.config 806e081d-6b1a-4909-be7c-5490c631ebfe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.400 253542 INFO nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Deleting local config drive /var/lib/nova/instances/806e081d-6b1a-4909-be7c-5490c631ebfe/disk.config because it was imported into RBD.
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.401 253542 DEBUG oslo_concurrency.processutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.436 253542 DEBUG nova.storage.rbd_utils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.444 253542 DEBUG oslo_concurrency.processutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:33 compute-0 systemd-machined[215790]: New machine qemu-89-instance-0000004c.
Nov 25 08:36:33 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-0000004c.
Nov 25 08:36:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2427021540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.886 253542 DEBUG oslo_concurrency.processutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.888 253542 DEBUG nova.objects.instance [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lazy-loading 'pci_devices' on Instance uuid 89bd5b48-efcd-45aa-98f5-e9d9f8373467 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.920 253542 DEBUG nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:36:33 compute-0 nova_compute[253538]:   <uuid>89bd5b48-efcd-45aa-98f5-e9d9f8373467</uuid>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   <name>instance-0000004d</name>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerShowV247Test-server-170954124</nova:name>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:36:32</nova:creationTime>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:36:33 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:36:33 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:36:33 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:36:33 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:36:33 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:36:33 compute-0 nova_compute[253538]:         <nova:user uuid="6e7ed13625c1463da308fbfab28c4541">tempest-ServerShowV247Test-1492743735-project-member</nova:user>
Nov 25 08:36:33 compute-0 nova_compute[253538]:         <nova:project uuid="504c5057c1894355afeafd57cd62b7ab">tempest-ServerShowV247Test-1492743735</nova:project>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <system>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <entry name="serial">89bd5b48-efcd-45aa-98f5-e9d9f8373467</entry>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <entry name="uuid">89bd5b48-efcd-45aa-98f5-e9d9f8373467</entry>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     </system>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   <os>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   </os>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   <features>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   </features>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk">
Nov 25 08:36:33 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:33 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk.config">
Nov 25 08:36:33 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:33 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467/console.log" append="off"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <video>
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     </video>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:36:33 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:36:33 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:36:33 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:36:33 compute-0 nova_compute[253538]: </domain>
Nov 25 08:36:33 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.965 253542 DEBUG nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.969 253542 DEBUG nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.969 253542 INFO nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Using config drive
Nov 25 08:36:33 compute-0 nova_compute[253538]: 2025-11-25 08:36:33.998 253542 DEBUG nova.storage.rbd_utils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.008 253542 DEBUG nova.compute.manager [req-231c2a21-6f50-467a-9947-93affb6efb8e req-47cd4b33-ffad-4ccf-9589-f1c6b8e8149c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Received event network-vif-plugged-8d7ef71e-d272-4075-b3f5-8a029c7c860f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.009 253542 DEBUG oslo_concurrency.lockutils [req-231c2a21-6f50-467a-9947-93affb6efb8e req-47cd4b33-ffad-4ccf-9589-f1c6b8e8149c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "774c40d8-01ae-49cc-ad06-05003232491a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.009 253542 DEBUG oslo_concurrency.lockutils [req-231c2a21-6f50-467a-9947-93affb6efb8e req-47cd4b33-ffad-4ccf-9589-f1c6b8e8149c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "774c40d8-01ae-49cc-ad06-05003232491a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.010 253542 DEBUG oslo_concurrency.lockutils [req-231c2a21-6f50-467a-9947-93affb6efb8e req-47cd4b33-ffad-4ccf-9589-f1c6b8e8149c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "774c40d8-01ae-49cc-ad06-05003232491a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.010 253542 DEBUG nova.compute.manager [req-231c2a21-6f50-467a-9947-93affb6efb8e req-47cd4b33-ffad-4ccf-9589-f1c6b8e8149c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Processing event network-vif-plugged-8d7ef71e-d272-4075-b3f5-8a029c7c860f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.010 253542 DEBUG nova.compute.manager [req-231c2a21-6f50-467a-9947-93affb6efb8e req-47cd4b33-ffad-4ccf-9589-f1c6b8e8149c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Received event network-vif-plugged-8d7ef71e-d272-4075-b3f5-8a029c7c860f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.011 253542 DEBUG oslo_concurrency.lockutils [req-231c2a21-6f50-467a-9947-93affb6efb8e req-47cd4b33-ffad-4ccf-9589-f1c6b8e8149c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "774c40d8-01ae-49cc-ad06-05003232491a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.011 253542 DEBUG oslo_concurrency.lockutils [req-231c2a21-6f50-467a-9947-93affb6efb8e req-47cd4b33-ffad-4ccf-9589-f1c6b8e8149c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "774c40d8-01ae-49cc-ad06-05003232491a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.011 253542 DEBUG oslo_concurrency.lockutils [req-231c2a21-6f50-467a-9947-93affb6efb8e req-47cd4b33-ffad-4ccf-9589-f1c6b8e8149c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "774c40d8-01ae-49cc-ad06-05003232491a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.011 253542 DEBUG nova.compute.manager [req-231c2a21-6f50-467a-9947-93affb6efb8e req-47cd4b33-ffad-4ccf-9589-f1c6b8e8149c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] No waiting events found dispatching network-vif-plugged-8d7ef71e-d272-4075-b3f5-8a029c7c860f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.012 253542 WARNING nova.compute.manager [req-231c2a21-6f50-467a-9947-93affb6efb8e req-47cd4b33-ffad-4ccf-9589-f1c6b8e8149c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Received unexpected event network-vif-plugged-8d7ef71e-d272-4075-b3f5-8a029c7c860f for instance with vm_state building and task_state spawning.
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.014 253542 DEBUG nova.compute.manager [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.023 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059794.022128, 774c40d8-01ae-49cc-ad06-05003232491a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.024 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] VM Resumed (Lifecycle Event)
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.026 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.031 253542 INFO nova.virt.libvirt.driver [-] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Instance spawned successfully.
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.031 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.045 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.055 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.058 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.059 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.060 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.061 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.061 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.062 253542 DEBUG nova.virt.libvirt.driver [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:34 compute-0 ceph-mon[75015]: pgmap v1606: 321 pgs: 321 active+clean; 386 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.2 MiB/s wr, 294 op/s
Nov 25 08:36:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1737359623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2427021540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.085 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.129 253542 INFO nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Creating config drive at /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467/disk.config
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.135 253542 DEBUG oslo_concurrency.processutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6x4rv8pg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.176 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059794.1600678, 806e081d-6b1a-4909-be7c-5490c631ebfe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.178 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] VM Resumed (Lifecycle Event)
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.183 253542 INFO nova.compute.manager [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Took 11.91 seconds to spawn the instance on the hypervisor.
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.183 253542 DEBUG nova.compute.manager [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.184 253542 DEBUG nova.compute.manager [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.184 253542 DEBUG nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.196 253542 INFO nova.virt.libvirt.driver [-] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Instance spawned successfully.
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.196 253542 DEBUG nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.216 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.224 253542 DEBUG nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.224 253542 DEBUG nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.225 253542 DEBUG nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.225 253542 DEBUG nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.225 253542 DEBUG nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.226 253542 DEBUG nova.virt.libvirt.driver [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.229 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.256 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.256 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059794.1605039, 806e081d-6b1a-4909-be7c-5490c631ebfe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.256 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] VM Started (Lifecycle Event)
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.283 253542 DEBUG oslo_concurrency.processutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6x4rv8pg" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.310 253542 DEBUG nova.storage.rbd_utils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.315 253542 DEBUG oslo_concurrency.processutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467/disk.config 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.369 253542 INFO nova.compute.manager [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Took 14.24 seconds to build instance.
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.375 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.377 253542 INFO nova.compute.manager [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Took 3.59 seconds to spawn the instance on the hypervisor.
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.378 253542 DEBUG nova.compute.manager [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.384 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.414 253542 DEBUG oslo_concurrency.lockutils [None req-81276052-7311-4f57-a221-04dd0d72565c 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lock "774c40d8-01ae-49cc-ad06-05003232491a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.380s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.416 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.453 253542 INFO nova.compute.manager [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Took 4.85 seconds to build instance.
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.469 253542 DEBUG oslo_concurrency.lockutils [None req-b0899546-2ee0-4208-8a7e-f24a64c4bb88 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "806e081d-6b1a-4909-be7c-5490c631ebfe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:34 compute-0 nova_compute[253538]: 2025-11-25 08:36:34.905 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:35 compute-0 nova_compute[253538]: 2025-11-25 08:36:35.163 253542 DEBUG oslo_concurrency.processutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467/disk.config 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.849s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:35 compute-0 nova_compute[253538]: 2025-11-25 08:36:35.166 253542 INFO nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Deleting local config drive /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467/disk.config because it was imported into RBD.
Nov 25 08:36:35 compute-0 systemd-machined[215790]: New machine qemu-90-instance-0000004d.
Nov 25 08:36:35 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-0000004d.
Nov 25 08:36:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 437 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 7.9 MiB/s wr, 357 op/s
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.177 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059796.1775293, 89bd5b48-efcd-45aa-98f5-e9d9f8373467 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.179 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] VM Resumed (Lifecycle Event)
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.183 253542 DEBUG nova.compute.manager [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.184 253542 DEBUG nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.187 253542 INFO nova.virt.libvirt.driver [-] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Instance spawned successfully.
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.187 253542 DEBUG nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.207 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.212 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.215 253542 DEBUG nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.215 253542 DEBUG nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.215 253542 DEBUG nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.215 253542 DEBUG nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.216 253542 DEBUG nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.216 253542 DEBUG nova.virt.libvirt.driver [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.304 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.305 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059796.1835668, 89bd5b48-efcd-45aa-98f5-e9d9f8373467 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.305 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] VM Started (Lifecycle Event)
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.331 253542 INFO nova.compute.manager [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Took 4.21 seconds to spawn the instance on the hypervisor.
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.332 253542 DEBUG nova.compute.manager [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.333 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.338 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.375 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:36:36 compute-0 ceph-mon[75015]: pgmap v1607: 321 pgs: 321 active+clean; 437 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 7.9 MiB/s wr, 357 op/s
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.434 253542 INFO nova.compute.manager [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Took 5.42 seconds to build instance.
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.459 253542 DEBUG oslo_concurrency.lockutils [None req-fa346d3f-1cc6-4798-9ed1-bd0a7f002cc1 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "89bd5b48-efcd-45aa-98f5-e9d9f8373467" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:36 compute-0 nova_compute[253538]: 2025-11-25 08:36:36.527 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:36:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1608: 321 pgs: 321 active+clean; 445 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 7.5 MiB/s wr, 415 op/s
Nov 25 08:36:38 compute-0 nova_compute[253538]: 2025-11-25 08:36:38.066 253542 DEBUG nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:36:38 compute-0 ceph-mon[75015]: pgmap v1608: 321 pgs: 321 active+clean; 445 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 7.5 MiB/s wr, 415 op/s
Nov 25 08:36:39 compute-0 nova_compute[253538]: 2025-11-25 08:36:39.037 253542 INFO nova.compute.manager [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Rebuilding instance
Nov 25 08:36:39 compute-0 nova_compute[253538]: 2025-11-25 08:36:39.248 253542 DEBUG nova.objects.instance [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lazy-loading 'trusted_certs' on Instance uuid 89bd5b48-efcd-45aa-98f5-e9d9f8373467 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:39 compute-0 nova_compute[253538]: 2025-11-25 08:36:39.270 253542 DEBUG nova.compute.manager [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 445 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 7.2 MiB/s wr, 445 op/s
Nov 25 08:36:39 compute-0 nova_compute[253538]: 2025-11-25 08:36:39.437 253542 DEBUG nova.objects.instance [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lazy-loading 'pci_requests' on Instance uuid 89bd5b48-efcd-45aa-98f5-e9d9f8373467 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:39 compute-0 nova_compute[253538]: 2025-11-25 08:36:39.448 253542 DEBUG nova.objects.instance [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lazy-loading 'pci_devices' on Instance uuid 89bd5b48-efcd-45aa-98f5-e9d9f8373467 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:39 compute-0 nova_compute[253538]: 2025-11-25 08:36:39.457 253542 DEBUG nova.objects.instance [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lazy-loading 'resources' on Instance uuid 89bd5b48-efcd-45aa-98f5-e9d9f8373467 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:39 compute-0 nova_compute[253538]: 2025-11-25 08:36:39.467 253542 DEBUG nova.objects.instance [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lazy-loading 'migration_context' on Instance uuid 89bd5b48-efcd-45aa-98f5-e9d9f8373467 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:39 compute-0 nova_compute[253538]: 2025-11-25 08:36:39.475 253542 DEBUG nova.objects.instance [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:36:39 compute-0 nova_compute[253538]: 2025-11-25 08:36:39.482 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:36:39 compute-0 nova_compute[253538]: 2025-11-25 08:36:39.723 253542 DEBUG oslo_concurrency.lockutils [None req-3e9e8eb0-7317-4815-9d6e-e72d67482b1c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:39 compute-0 nova_compute[253538]: 2025-11-25 08:36:39.724 253542 DEBUG oslo_concurrency.lockutils [None req-3e9e8eb0-7317-4815-9d6e-e72d67482b1c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:39 compute-0 nova_compute[253538]: 2025-11-25 08:36:39.724 253542 DEBUG nova.compute.manager [None req-3e9e8eb0-7317-4815-9d6e-e72d67482b1c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:39 compute-0 nova_compute[253538]: 2025-11-25 08:36:39.732 253542 DEBUG nova.compute.manager [None req-3e9e8eb0-7317-4815-9d6e-e72d67482b1c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 08:36:39 compute-0 nova_compute[253538]: 2025-11-25 08:36:39.733 253542 DEBUG nova.objects.instance [None req-3e9e8eb0-7317-4815-9d6e-e72d67482b1c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'flavor' on Instance uuid 6c10a34e-4126-4e88-ad4d-ba7c407a379e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:39 compute-0 nova_compute[253538]: 2025-11-25 08:36:39.750 253542 DEBUG nova.virt.libvirt.driver [None req-3e9e8eb0-7317-4815-9d6e-e72d67482b1c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:36:39 compute-0 nova_compute[253538]: 2025-11-25 08:36:39.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:40 compute-0 ceph-mon[75015]: pgmap v1609: 321 pgs: 321 active+clean; 445 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 7.2 MiB/s wr, 445 op/s
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.135 253542 DEBUG oslo_concurrency.lockutils [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Acquiring lock "774c40d8-01ae-49cc-ad06-05003232491a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.136 253542 DEBUG oslo_concurrency.lockutils [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lock "774c40d8-01ae-49cc-ad06-05003232491a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.136 253542 DEBUG oslo_concurrency.lockutils [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Acquiring lock "774c40d8-01ae-49cc-ad06-05003232491a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.136 253542 DEBUG oslo_concurrency.lockutils [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lock "774c40d8-01ae-49cc-ad06-05003232491a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.136 253542 DEBUG oslo_concurrency.lockutils [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lock "774c40d8-01ae-49cc-ad06-05003232491a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.137 253542 INFO nova.compute.manager [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Terminating instance
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.138 253542 DEBUG nova.compute.manager [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:36:40 compute-0 kernel: tap8d7ef71e-d2 (unregistering): left promiscuous mode
Nov 25 08:36:40 compute-0 NetworkManager[48915]: <info>  [1764059800.3940] device (tap8d7ef71e-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:36:40 compute-0 ovn_controller[152859]: 2025-11-25T08:36:40Z|00708|binding|INFO|Releasing lport 8d7ef71e-d272-4075-b3f5-8a029c7c860f from this chassis (sb_readonly=0)
Nov 25 08:36:40 compute-0 ovn_controller[152859]: 2025-11-25T08:36:40Z|00709|binding|INFO|Setting lport 8d7ef71e-d272-4075-b3f5-8a029c7c860f down in Southbound
Nov 25 08:36:40 compute-0 ovn_controller[152859]: 2025-11-25T08:36:40Z|00710|binding|INFO|Removing iface tap8d7ef71e-d2 ovn-installed in OVS
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.411 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:40.438 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:94:3d 10.100.0.9'], port_security=['fa:16:3e:d1:94:3d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '774c40d8-01ae-49cc-ad06-05003232491a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-46e51a76-3d39-490e-ba9a-1d5ac591a461', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb75cadcaecd4a2fb54df6dd80902908', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ba2bb850-2bd0-481a-a946-d9347bf738fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bcba803-2428-46d3-9e5d-7a3639faf1f8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8d7ef71e-d272-4075-b3f5-8a029c7c860f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:36:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:40.440 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8d7ef71e-d272-4075-b3f5-8a029c7c860f in datapath 46e51a76-3d39-490e-ba9a-1d5ac591a461 unbound from our chassis
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.440 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:40.441 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 46e51a76-3d39-490e-ba9a-1d5ac591a461, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:36:40 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Nov 25 08:36:40 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004b.scope: Consumed 6.513s CPU time.
Nov 25 08:36:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:40.443 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c300b80-8f70-4fc2-b24a-c3e21eab0f0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:40.444 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461 namespace which is not needed anymore
Nov 25 08:36:40 compute-0 systemd-machined[215790]: Machine qemu-88-instance-0000004b terminated.
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.571 253542 INFO nova.virt.libvirt.driver [-] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Instance destroyed successfully.
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.572 253542 DEBUG nova.objects.instance [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lazy-loading 'resources' on Instance uuid 774c40d8-01ae-49cc-ad06-05003232491a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.586 253542 DEBUG nova.virt.libvirt.vif [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:36:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-2049682504',display_name='tempest-ServerMetadataTestJSON-server-2049682504',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-2049682504',id=75,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:36:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fb75cadcaecd4a2fb54df6dd80902908',ramdisk_id='',reservation_id='r-jy21tybp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-470098851',owner_user_name='tempest-ServerMetadataTestJSON-470098851-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:36:39Z,user_data=None,user_id='1aaf041a5d4344a1b22e039b0d22e198',uuid=774c40d8-01ae-49cc-ad06-05003232491a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "address": "fa:16:3e:d1:94:3d", "network": {"id": "46e51a76-3d39-490e-ba9a-1d5ac591a461", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-62640839-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb75cadcaecd4a2fb54df6dd80902908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7ef71e-d2", "ovs_interfaceid": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.587 253542 DEBUG nova.network.os_vif_util [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Converting VIF {"id": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "address": "fa:16:3e:d1:94:3d", "network": {"id": "46e51a76-3d39-490e-ba9a-1d5ac591a461", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-62640839-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb75cadcaecd4a2fb54df6dd80902908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7ef71e-d2", "ovs_interfaceid": "8d7ef71e-d272-4075-b3f5-8a029c7c860f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.587 253542 DEBUG nova.network.os_vif_util [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=8d7ef71e-d272-4075-b3f5-8a029c7c860f,network=Network(46e51a76-3d39-490e-ba9a-1d5ac591a461),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7ef71e-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.587 253542 DEBUG os_vif [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=8d7ef71e-d272-4075-b3f5-8a029c7c860f,network=Network(46e51a76-3d39-490e-ba9a-1d5ac591a461),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7ef71e-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.589 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.591 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d7ef71e-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.593 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.594 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.596 253542 INFO os_vif [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=8d7ef71e-d272-4075-b3f5-8a029c7c860f,network=Network(46e51a76-3d39-490e-ba9a-1d5ac591a461),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7ef71e-d2')
Nov 25 08:36:40 compute-0 neutron-haproxy-ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461[328284]: [NOTICE]   (328288) : haproxy version is 2.8.14-c23fe91
Nov 25 08:36:40 compute-0 neutron-haproxy-ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461[328284]: [NOTICE]   (328288) : path to executable is /usr/sbin/haproxy
Nov 25 08:36:40 compute-0 neutron-haproxy-ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461[328284]: [WARNING]  (328288) : Exiting Master process...
Nov 25 08:36:40 compute-0 neutron-haproxy-ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461[328284]: [ALERT]    (328288) : Current worker (328292) exited with code 143 (Terminated)
Nov 25 08:36:40 compute-0 neutron-haproxy-ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461[328284]: [WARNING]  (328288) : All workers exited. Exiting... (0)
Nov 25 08:36:40 compute-0 systemd[1]: libpod-b72d04da50501247ba607d52f56acc0f865b55eff7b66188b6f7a6a399d45f79.scope: Deactivated successfully.
Nov 25 08:36:40 compute-0 podman[328842]: 2025-11-25 08:36:40.652082599 +0000 UTC m=+0.113120613 container stop b72d04da50501247ba607d52f56acc0f865b55eff7b66188b6f7a6a399d45f79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:36:40 compute-0 conmon[328284]: conmon b72d04da50501247ba60 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b72d04da50501247ba607d52f56acc0f865b55eff7b66188b6f7a6a399d45f79.scope/container/memory.events
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.665 253542 DEBUG nova.compute.manager [req-b6e7c5b0-1470-468b-931e-349c37038cc5 req-6f4acb38-8ce6-4766-86e7-0889d55fb04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Received event network-vif-unplugged-8d7ef71e-d272-4075-b3f5-8a029c7c860f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.665 253542 DEBUG oslo_concurrency.lockutils [req-b6e7c5b0-1470-468b-931e-349c37038cc5 req-6f4acb38-8ce6-4766-86e7-0889d55fb04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "774c40d8-01ae-49cc-ad06-05003232491a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.666 253542 DEBUG oslo_concurrency.lockutils [req-b6e7c5b0-1470-468b-931e-349c37038cc5 req-6f4acb38-8ce6-4766-86e7-0889d55fb04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "774c40d8-01ae-49cc-ad06-05003232491a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:40 compute-0 podman[328842]: 2025-11-25 08:36:40.666493882 +0000 UTC m=+0.127531916 container died b72d04da50501247ba607d52f56acc0f865b55eff7b66188b6f7a6a399d45f79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.666 253542 DEBUG oslo_concurrency.lockutils [req-b6e7c5b0-1470-468b-931e-349c37038cc5 req-6f4acb38-8ce6-4766-86e7-0889d55fb04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "774c40d8-01ae-49cc-ad06-05003232491a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.678 253542 DEBUG nova.compute.manager [req-b6e7c5b0-1470-468b-931e-349c37038cc5 req-6f4acb38-8ce6-4766-86e7-0889d55fb04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] No waiting events found dispatching network-vif-unplugged-8d7ef71e-d272-4075-b3f5-8a029c7c860f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.678 253542 DEBUG nova.compute.manager [req-b6e7c5b0-1470-468b-931e-349c37038cc5 req-6f4acb38-8ce6-4766-86e7-0889d55fb04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Received event network-vif-unplugged-8d7ef71e-d272-4075-b3f5-8a029c7c860f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:36:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:40.737 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:36:40 compute-0 nova_compute[253538]: 2025-11-25 08:36:40.737 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b72d04da50501247ba607d52f56acc0f865b55eff7b66188b6f7a6a399d45f79-userdata-shm.mount: Deactivated successfully.
Nov 25 08:36:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c4a0343413517a24bf4daf3bd714bfbfe09767cc3869688a3f94670942274a4-merged.mount: Deactivated successfully.
Nov 25 08:36:40 compute-0 podman[328880]: 2025-11-25 08:36:40.879026133 +0000 UTC m=+0.201487260 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 08:36:40 compute-0 podman[328842]: 2025-11-25 08:36:40.954163261 +0000 UTC m=+0.415201275 container cleanup b72d04da50501247ba607d52f56acc0f865b55eff7b66188b6f7a6a399d45f79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:36:40 compute-0 systemd[1]: libpod-conmon-b72d04da50501247ba607d52f56acc0f865b55eff7b66188b6f7a6a399d45f79.scope: Deactivated successfully.
Nov 25 08:36:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:41.062 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:41.063 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:41.063 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1610: 321 pgs: 321 active+clean; 446 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 6.3 MiB/s wr, 536 op/s
Nov 25 08:36:41 compute-0 podman[328912]: 2025-11-25 08:36:41.373456778 +0000 UTC m=+0.394580494 container remove b72d04da50501247ba607d52f56acc0f865b55eff7b66188b6f7a6a399d45f79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 08:36:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:41.379 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[44c051e8-ae4a-4d04-aeb2-b0d4022c4ccf]: (4, ('Tue Nov 25 08:36:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461 (b72d04da50501247ba607d52f56acc0f865b55eff7b66188b6f7a6a399d45f79)\nb72d04da50501247ba607d52f56acc0f865b55eff7b66188b6f7a6a399d45f79\nTue Nov 25 08:36:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461 (b72d04da50501247ba607d52f56acc0f865b55eff7b66188b6f7a6a399d45f79)\nb72d04da50501247ba607d52f56acc0f865b55eff7b66188b6f7a6a399d45f79\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:41.380 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2d7f19-9da4-4975-b405-09f36af13fd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:41.381 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap46e51a76-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:41 compute-0 nova_compute[253538]: 2025-11-25 08:36:41.383 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:41 compute-0 kernel: tap46e51a76-30: left promiscuous mode
Nov 25 08:36:41 compute-0 nova_compute[253538]: 2025-11-25 08:36:41.405 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:41.408 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b267d2e8-ae3e-47d5-b860-d91a3eb246f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:41.423 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e2f158-d9b0-4994-b9b7-4c5ec3efcd78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:41.425 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7f720793-5567-40be-887c-fe9e8f999972]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:41.445 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[afad865d-74b4-4f7b-b3f1-c8a89a1b1805]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514549, 'reachable_time': 25161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328924, 'error': None, 'target': 'ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d46e51a76\x2d3d39\x2d490e\x2dba9a\x2d1d5ac591a461.mount: Deactivated successfully.
Nov 25 08:36:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:41.449 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-46e51a76-3d39-490e-ba9a-1d5ac591a461 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:36:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:41.450 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[80a290cc-30bd-4cee-a109-039ad94f3547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:41.450 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:36:41 compute-0 nova_compute[253538]: 2025-11-25 08:36:41.529 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.084 253542 INFO nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Instance shutdown successfully after 25 seconds.
Nov 25 08:36:42 compute-0 ceph-mon[75015]: pgmap v1610: 321 pgs: 321 active+clean; 446 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 6.3 MiB/s wr, 536 op/s
Nov 25 08:36:42 compute-0 kernel: tap88b844d5-71 (unregistering): left promiscuous mode
Nov 25 08:36:42 compute-0 NetworkManager[48915]: <info>  [1764059802.6174] device (tap88b844d5-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:36:42 compute-0 ovn_controller[152859]: 2025-11-25T08:36:42Z|00711|binding|INFO|Releasing lport 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a from this chassis (sb_readonly=0)
Nov 25 08:36:42 compute-0 ovn_controller[152859]: 2025-11-25T08:36:42Z|00712|binding|INFO|Setting lport 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a down in Southbound
Nov 25 08:36:42 compute-0 ovn_controller[152859]: 2025-11-25T08:36:42Z|00713|binding|INFO|Removing iface tap88b844d5-71 ovn-installed in OVS
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:42.631 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:fe:4e 10.100.0.5'], port_security=['fa:16:3e:a4:fe:4e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '884b0bf9-764d-4aa8-8bcb-c9e8644a0dad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b2becc8-2b4e-4727-a105-454acfe44b89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '488c6d53000c47848dba6b7be6b4ff40', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f93c15b0-dfea-420d-abd0-8856b9b0a2b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b19baa8b-ade0-4619-a5ed-df7feb0c6cc0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=88b844d5-7175-4dc1-92cc-d7a4d59e1d1a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:36:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:42.632 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a in datapath 6b2becc8-2b4e-4727-a105-454acfe44b89 unbound from our chassis
Nov 25 08:36:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:42.633 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6b2becc8-2b4e-4727-a105-454acfe44b89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 08:36:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:42.633 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0bac4ce5-3b3d-42bd-b2a8-755da844dc32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.657 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:42 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000047.scope: Deactivated successfully.
Nov 25 08:36:42 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000047.scope: Consumed 13.994s CPU time.
Nov 25 08:36:42 compute-0 systemd-machined[215790]: Machine qemu-84-instance-00000047 terminated.
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.736 253542 DEBUG nova.compute.manager [req-400c56e7-d48d-417c-ac45-11cc2e817a1e req-d8e9733a-734c-4a49-a847-ab8fdf49448b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Received event network-vif-plugged-8d7ef71e-d272-4075-b3f5-8a029c7c860f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.737 253542 DEBUG oslo_concurrency.lockutils [req-400c56e7-d48d-417c-ac45-11cc2e817a1e req-d8e9733a-734c-4a49-a847-ab8fdf49448b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "774c40d8-01ae-49cc-ad06-05003232491a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.737 253542 DEBUG oslo_concurrency.lockutils [req-400c56e7-d48d-417c-ac45-11cc2e817a1e req-d8e9733a-734c-4a49-a847-ab8fdf49448b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "774c40d8-01ae-49cc-ad06-05003232491a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.737 253542 DEBUG oslo_concurrency.lockutils [req-400c56e7-d48d-417c-ac45-11cc2e817a1e req-d8e9733a-734c-4a49-a847-ab8fdf49448b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "774c40d8-01ae-49cc-ad06-05003232491a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.737 253542 DEBUG nova.compute.manager [req-400c56e7-d48d-417c-ac45-11cc2e817a1e req-d8e9733a-734c-4a49-a847-ab8fdf49448b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] No waiting events found dispatching network-vif-plugged-8d7ef71e-d272-4075-b3f5-8a029c7c860f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.738 253542 WARNING nova.compute.manager [req-400c56e7-d48d-417c-ac45-11cc2e817a1e req-d8e9733a-734c-4a49-a847-ab8fdf49448b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Received unexpected event network-vif-plugged-8d7ef71e-d272-4075-b3f5-8a029c7c860f for instance with vm_state active and task_state deleting.
Nov 25 08:36:42 compute-0 NetworkManager[48915]: <info>  [1764059802.9044] manager: (tap88b844d5-71): new Tun device (/org/freedesktop/NetworkManager/Devices/318)
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.940 253542 INFO nova.virt.libvirt.driver [-] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Instance destroyed successfully.
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.941 253542 DEBUG nova.objects.instance [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'numa_topology' on Instance uuid 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.957 253542 INFO nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Attempting rescue
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.958 253542 DEBUG nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.961 253542 DEBUG nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.961 253542 INFO nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Creating image(s)
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.982 253542 DEBUG nova.storage.rbd_utils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:42 compute-0 nova_compute[253538]: 2025-11-25 08:36:42.985 253542 DEBUG nova.objects.instance [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:43 compute-0 nova_compute[253538]: 2025-11-25 08:36:43.016 253542 DEBUG nova.storage.rbd_utils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:43 compute-0 nova_compute[253538]: 2025-11-25 08:36:43.130 253542 DEBUG nova.storage.rbd_utils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:43 compute-0 nova_compute[253538]: 2025-11-25 08:36:43.135 253542 DEBUG oslo_concurrency.processutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:43 compute-0 nova_compute[253538]: 2025-11-25 08:36:43.247 253542 DEBUG oslo_concurrency.processutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:43 compute-0 nova_compute[253538]: 2025-11-25 08:36:43.248 253542 DEBUG oslo_concurrency.lockutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:43 compute-0 nova_compute[253538]: 2025-11-25 08:36:43.249 253542 DEBUG oslo_concurrency.lockutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:43 compute-0 nova_compute[253538]: 2025-11-25 08:36:43.250 253542 DEBUG oslo_concurrency.lockutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:43 compute-0 nova_compute[253538]: 2025-11-25 08:36:43.269 253542 DEBUG nova.storage.rbd_utils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:43 compute-0 nova_compute[253538]: 2025-11-25 08:36:43.272 253542 DEBUG oslo_concurrency.processutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1611: 321 pgs: 321 active+clean; 443 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 5.1 MiB/s wr, 552 op/s
Nov 25 08:36:43 compute-0 nova_compute[253538]: 2025-11-25 08:36:43.560 253542 DEBUG nova.compute.manager [req-17325863-f539-4789-ae5b-83bcdc2b1176 req-332fb707-c816-464f-8779-feb328a3e2b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received event network-vif-unplugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:43 compute-0 nova_compute[253538]: 2025-11-25 08:36:43.561 253542 DEBUG oslo_concurrency.lockutils [req-17325863-f539-4789-ae5b-83bcdc2b1176 req-332fb707-c816-464f-8779-feb328a3e2b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:43 compute-0 nova_compute[253538]: 2025-11-25 08:36:43.562 253542 DEBUG oslo_concurrency.lockutils [req-17325863-f539-4789-ae5b-83bcdc2b1176 req-332fb707-c816-464f-8779-feb328a3e2b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:43 compute-0 nova_compute[253538]: 2025-11-25 08:36:43.562 253542 DEBUG oslo_concurrency.lockutils [req-17325863-f539-4789-ae5b-83bcdc2b1176 req-332fb707-c816-464f-8779-feb328a3e2b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:43 compute-0 nova_compute[253538]: 2025-11-25 08:36:43.563 253542 DEBUG nova.compute.manager [req-17325863-f539-4789-ae5b-83bcdc2b1176 req-332fb707-c816-464f-8779-feb328a3e2b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] No waiting events found dispatching network-vif-unplugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:36:43 compute-0 nova_compute[253538]: 2025-11-25 08:36:43.563 253542 WARNING nova.compute.manager [req-17325863-f539-4789-ae5b-83bcdc2b1176 req-332fb707-c816-464f-8779-feb328a3e2b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received unexpected event network-vif-unplugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a for instance with vm_state active and task_state rescuing.
Nov 25 08:36:44 compute-0 ceph-mon[75015]: pgmap v1611: 321 pgs: 321 active+clean; 443 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 5.1 MiB/s wr, 552 op/s
Nov 25 08:36:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 431 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 4.5 MiB/s wr, 378 op/s
Nov 25 08:36:45 compute-0 nova_compute[253538]: 2025-11-25 08:36:45.594 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:45 compute-0 nova_compute[253538]: 2025-11-25 08:36:45.631 253542 DEBUG nova.compute.manager [req-0581b076-f778-4aae-ab7c-e39d2e22fd54 req-5779e972-eb32-4d82-b6d1-067ddd91117c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received event network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:45 compute-0 nova_compute[253538]: 2025-11-25 08:36:45.632 253542 DEBUG oslo_concurrency.lockutils [req-0581b076-f778-4aae-ab7c-e39d2e22fd54 req-5779e972-eb32-4d82-b6d1-067ddd91117c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:45 compute-0 nova_compute[253538]: 2025-11-25 08:36:45.633 253542 DEBUG oslo_concurrency.lockutils [req-0581b076-f778-4aae-ab7c-e39d2e22fd54 req-5779e972-eb32-4d82-b6d1-067ddd91117c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:45 compute-0 nova_compute[253538]: 2025-11-25 08:36:45.634 253542 DEBUG oslo_concurrency.lockutils [req-0581b076-f778-4aae-ab7c-e39d2e22fd54 req-5779e972-eb32-4d82-b6d1-067ddd91117c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:45 compute-0 nova_compute[253538]: 2025-11-25 08:36:45.634 253542 DEBUG nova.compute.manager [req-0581b076-f778-4aae-ab7c-e39d2e22fd54 req-5779e972-eb32-4d82-b6d1-067ddd91117c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] No waiting events found dispatching network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:36:45 compute-0 nova_compute[253538]: 2025-11-25 08:36:45.635 253542 WARNING nova.compute.manager [req-0581b076-f778-4aae-ab7c-e39d2e22fd54 req-5779e972-eb32-4d82-b6d1-067ddd91117c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received unexpected event network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a for instance with vm_state active and task_state rescuing.
Nov 25 08:36:45 compute-0 ceph-mon[75015]: pgmap v1612: 321 pgs: 321 active+clean; 431 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 4.5 MiB/s wr, 378 op/s
Nov 25 08:36:45 compute-0 nova_compute[253538]: 2025-11-25 08:36:45.976 253542 DEBUG oslo_concurrency.processutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.704s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:45 compute-0 nova_compute[253538]: 2025-11-25 08:36:45.977 253542 DEBUG nova.objects.instance [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'migration_context' on Instance uuid 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:45 compute-0 nova_compute[253538]: 2025-11-25 08:36:45.992 253542 DEBUG nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:36:45 compute-0 nova_compute[253538]: 2025-11-25 08:36:45.995 253542 DEBUG nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Start _get_guest_xml network_info=[{"id": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "address": "fa:16:3e:a4:fe:4e", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-337160511-network", "vif_mac": "fa:16:3e:a4:fe:4e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b844d5-71", "ovs_interfaceid": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:36:45 compute-0 nova_compute[253538]: 2025-11-25 08:36:45.996 253542 DEBUG nova.objects.instance [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'resources' on Instance uuid 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.009 253542 WARNING nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.018 253542 DEBUG nova.virt.libvirt.host [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.019 253542 DEBUG nova.virt.libvirt.host [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.022 253542 DEBUG nova.virt.libvirt.host [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.023 253542 DEBUG nova.virt.libvirt.host [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.024 253542 DEBUG nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.024 253542 DEBUG nova.virt.hardware [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.024 253542 DEBUG nova.virt.hardware [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.025 253542 DEBUG nova.virt.hardware [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.025 253542 DEBUG nova.virt.hardware [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.025 253542 DEBUG nova.virt.hardware [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.026 253542 DEBUG nova.virt.hardware [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.026 253542 DEBUG nova.virt.hardware [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.026 253542 DEBUG nova.virt.hardware [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.026 253542 DEBUG nova.virt.hardware [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.027 253542 DEBUG nova.virt.hardware [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.027 253542 DEBUG nova.virt.hardware [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.027 253542 DEBUG nova.objects.instance [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.042 253542 DEBUG oslo_concurrency.processutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.228 253542 INFO nova.virt.libvirt.driver [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Deleting instance files /var/lib/nova/instances/774c40d8-01ae-49cc-ad06-05003232491a_del
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.229 253542 INFO nova.virt.libvirt.driver [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Deletion of /var/lib/nova/instances/774c40d8-01ae-49cc-ad06-05003232491a_del complete
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.315 253542 INFO nova.compute.manager [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Took 6.18 seconds to destroy the instance on the hypervisor.
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.316 253542 DEBUG oslo.service.loopingcall [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.317 253542 DEBUG nova.compute.manager [-] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.317 253542 DEBUG nova.network.neutron [-] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.532 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:46 compute-0 ovn_controller[152859]: 2025-11-25T08:36:46Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a4:57:05 10.100.0.3
Nov 25 08:36:46 compute-0 ovn_controller[152859]: 2025-11-25T08:36:46Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:57:05 10.100.0.3
Nov 25 08:36:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2957839732' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.565 253542 DEBUG oslo_concurrency.processutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:46 compute-0 nova_compute[253538]: 2025-11-25 08:36:46.566 253542 DEBUG oslo_concurrency.processutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:36:46 compute-0 podman[329079]: 2025-11-25 08:36:46.858882804 +0000 UTC m=+0.111034417 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 08:36:46 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2957839732' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:47 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3305374950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.074 253542 DEBUG oslo_concurrency.processutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.075 253542 DEBUG oslo_concurrency.processutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1613: 321 pgs: 321 active+clean; 472 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 6.4 MiB/s wr, 344 op/s
Nov 25 08:36:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:47 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3170727119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.640 253542 DEBUG oslo_concurrency.processutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.643 253542 DEBUG nova.virt.libvirt.vif [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:36:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1272068251',display_name='tempest-ServerRescueTestJSON-server-1272068251',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1272068251',id=71,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:36:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='488c6d53000c47848dba6b7be6b4ff40',ramdisk_id='',reservation_id='r-eevo0tl5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-324239197',owner_user_name='tempest-ServerRescueTestJSON-324239197-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:36:12Z,user_data=None,user_id='ad675e78b1b34f1c92c57e42532c3c20',uuid=884b0bf9-764d-4aa8-8bcb-c9e8644a0dad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "address": "fa:16:3e:a4:fe:4e", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-337160511-network", "vif_mac": "fa:16:3e:a4:fe:4e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b844d5-71", "ovs_interfaceid": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.644 253542 DEBUG nova.network.os_vif_util [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Converting VIF {"id": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "address": "fa:16:3e:a4:fe:4e", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-337160511-network", "vif_mac": "fa:16:3e:a4:fe:4e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b844d5-71", "ovs_interfaceid": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.646 253542 DEBUG nova.network.os_vif_util [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:fe:4e,bridge_name='br-int',has_traffic_filtering=True,id=88b844d5-7175-4dc1-92cc-d7a4d59e1d1a,network=Network(6b2becc8-2b4e-4727-a105-454acfe44b89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b844d5-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.648 253542 DEBUG nova.objects.instance [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'pci_devices' on Instance uuid 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.679 253542 DEBUG nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:36:47 compute-0 nova_compute[253538]:   <uuid>884b0bf9-764d-4aa8-8bcb-c9e8644a0dad</uuid>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   <name>instance-00000047</name>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerRescueTestJSON-server-1272068251</nova:name>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:36:46</nova:creationTime>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:36:47 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:36:47 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:36:47 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:36:47 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:36:47 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:36:47 compute-0 nova_compute[253538]:         <nova:user uuid="ad675e78b1b34f1c92c57e42532c3c20">tempest-ServerRescueTestJSON-324239197-project-member</nova:user>
Nov 25 08:36:47 compute-0 nova_compute[253538]:         <nova:project uuid="488c6d53000c47848dba6b7be6b4ff40">tempest-ServerRescueTestJSON-324239197</nova:project>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:36:47 compute-0 nova_compute[253538]:         <nova:port uuid="88b844d5-7175-4dc1-92cc-d7a4d59e1d1a">
Nov 25 08:36:47 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <system>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <entry name="serial">884b0bf9-764d-4aa8-8bcb-c9e8644a0dad</entry>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <entry name="uuid">884b0bf9-764d-4aa8-8bcb-c9e8644a0dad</entry>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     </system>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   <os>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   </os>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   <features>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   </features>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.rescue">
Nov 25 08:36:47 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:47 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk">
Nov 25 08:36:47 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:47 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <target dev="vdb" bus="virtio"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.config.rescue">
Nov 25 08:36:47 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:47 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:a4:fe:4e"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <target dev="tap88b844d5-71"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad/console.log" append="off"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <video>
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     </video>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:36:47 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:36:47 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:36:47 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:36:47 compute-0 nova_compute[253538]: </domain>
Nov 25 08:36:47 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.689 253542 INFO nova.virt.libvirt.driver [-] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Instance destroyed successfully.
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.827 253542 DEBUG nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.827 253542 DEBUG nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.828 253542 DEBUG nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.828 253542 DEBUG nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] No VIF found with MAC fa:16:3e:a4:fe:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.829 253542 INFO nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Using config drive
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.861 253542 DEBUG nova.storage.rbd_utils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.877 253542 DEBUG nova.objects.instance [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:47 compute-0 ovn_controller[152859]: 2025-11-25T08:36:47Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:91:21:01 10.100.0.9
Nov 25 08:36:47 compute-0 ovn_controller[152859]: 2025-11-25T08:36:47Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:91:21:01 10.100.0.9
Nov 25 08:36:47 compute-0 nova_compute[253538]: 2025-11-25 08:36:47.900 253542 DEBUG nova.objects.instance [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'keypairs' on Instance uuid 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:47 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3305374950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:47 compute-0 ceph-mon[75015]: pgmap v1613: 321 pgs: 321 active+clean; 472 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 6.4 MiB/s wr, 344 op/s
Nov 25 08:36:47 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3170727119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:47 compute-0 ovn_controller[152859]: 2025-11-25T08:36:47Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:f3:02 10.100.0.12
Nov 25 08:36:47 compute-0 ovn_controller[152859]: 2025-11-25T08:36:47Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:f3:02 10.100.0.12
Nov 25 08:36:48 compute-0 nova_compute[253538]: 2025-11-25 08:36:48.213 253542 DEBUG nova.network.neutron [-] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:48 compute-0 nova_compute[253538]: 2025-11-25 08:36:48.229 253542 INFO nova.compute.manager [-] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Took 1.91 seconds to deallocate network for instance.
Nov 25 08:36:48 compute-0 nova_compute[253538]: 2025-11-25 08:36:48.266 253542 DEBUG nova.compute.manager [req-9d0564cd-f983-4901-a888-abd289ce2dd1 req-18dc9888-c58f-4f4c-b99a-e1a88b063698 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Received event network-vif-deleted-8d7ef71e-d272-4075-b3f5-8a029c7c860f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:48 compute-0 nova_compute[253538]: 2025-11-25 08:36:48.271 253542 DEBUG oslo_concurrency.lockutils [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:48 compute-0 nova_compute[253538]: 2025-11-25 08:36:48.271 253542 DEBUG oslo_concurrency.lockutils [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:48 compute-0 nova_compute[253538]: 2025-11-25 08:36:48.339 253542 INFO nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Creating config drive at /var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad/disk.config.rescue
Nov 25 08:36:48 compute-0 nova_compute[253538]: 2025-11-25 08:36:48.345 253542 DEBUG oslo_concurrency.processutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_21ykijn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:48 compute-0 nova_compute[253538]: 2025-11-25 08:36:48.442 253542 DEBUG oslo_concurrency.processutils [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:48.452 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:48 compute-0 nova_compute[253538]: 2025-11-25 08:36:48.486 253542 DEBUG oslo_concurrency.processutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_21ykijn" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:48 compute-0 nova_compute[253538]: 2025-11-25 08:36:48.511 253542 DEBUG nova.storage.rbd_utils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:48 compute-0 nova_compute[253538]: 2025-11-25 08:36:48.515 253542 DEBUG oslo_concurrency.processutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad/disk.config.rescue 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:36:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/170582391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:48 compute-0 nova_compute[253538]: 2025-11-25 08:36:48.900 253542 DEBUG oslo_concurrency.processutils [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:48 compute-0 nova_compute[253538]: 2025-11-25 08:36:48.906 253542 DEBUG nova.compute.provider_tree [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:36:48 compute-0 nova_compute[253538]: 2025-11-25 08:36:48.923 253542 DEBUG nova.scheduler.client.report [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:36:48 compute-0 nova_compute[253538]: 2025-11-25 08:36:48.948 253542 DEBUG oslo_concurrency.lockutils [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:48 compute-0 nova_compute[253538]: 2025-11-25 08:36:48.982 253542 INFO nova.scheduler.client.report [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Deleted allocations for instance 774c40d8-01ae-49cc-ad06-05003232491a
Nov 25 08:36:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/170582391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:49 compute-0 nova_compute[253538]: 2025-11-25 08:36:49.037 253542 DEBUG oslo_concurrency.lockutils [None req-b2a18f1f-c50d-493c-91fd-214a7bd40830 1aaf041a5d4344a1b22e039b0d22e198 fb75cadcaecd4a2fb54df6dd80902908 - - default default] Lock "774c40d8-01ae-49cc-ad06-05003232491a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 510 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 8.1 MiB/s wr, 336 op/s
Nov 25 08:36:49 compute-0 nova_compute[253538]: 2025-11-25 08:36:49.573 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:36:49 compute-0 nova_compute[253538]: 2025-11-25 08:36:49.793 253542 DEBUG nova.virt.libvirt.driver [None req-3e9e8eb0-7317-4815-9d6e-e72d67482b1c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:36:50 compute-0 ceph-mon[75015]: pgmap v1614: 321 pgs: 321 active+clean; 510 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 8.1 MiB/s wr, 336 op/s
Nov 25 08:36:50 compute-0 nova_compute[253538]: 2025-11-25 08:36:50.382 253542 DEBUG oslo_concurrency.processutils [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad/disk.config.rescue 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.867s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:50 compute-0 nova_compute[253538]: 2025-11-25 08:36:50.382 253542 INFO nova.virt.libvirt.driver [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Deleting local config drive /var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad/disk.config.rescue because it was imported into RBD.
Nov 25 08:36:50 compute-0 kernel: tap88b844d5-71: entered promiscuous mode
Nov 25 08:36:50 compute-0 NetworkManager[48915]: <info>  [1764059810.4616] manager: (tap88b844d5-71): new Tun device (/org/freedesktop/NetworkManager/Devices/319)
Nov 25 08:36:50 compute-0 nova_compute[253538]: 2025-11-25 08:36:50.463 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:50 compute-0 ovn_controller[152859]: 2025-11-25T08:36:50Z|00714|binding|INFO|Claiming lport 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a for this chassis.
Nov 25 08:36:50 compute-0 ovn_controller[152859]: 2025-11-25T08:36:50Z|00715|binding|INFO|88b844d5-7175-4dc1-92cc-d7a4d59e1d1a: Claiming fa:16:3e:a4:fe:4e 10.100.0.5
Nov 25 08:36:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:50.470 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:fe:4e 10.100.0.5'], port_security=['fa:16:3e:a4:fe:4e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '884b0bf9-764d-4aa8-8bcb-c9e8644a0dad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b2becc8-2b4e-4727-a105-454acfe44b89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '488c6d53000c47848dba6b7be6b4ff40', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f93c15b0-dfea-420d-abd0-8856b9b0a2b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b19baa8b-ade0-4619-a5ed-df7feb0c6cc0, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=88b844d5-7175-4dc1-92cc-d7a4d59e1d1a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:36:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:50.472 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a in datapath 6b2becc8-2b4e-4727-a105-454acfe44b89 bound to our chassis
Nov 25 08:36:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:50.472 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6b2becc8-2b4e-4727-a105-454acfe44b89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 08:36:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:50.473 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[78ffbd51-a5e1-4dae-877c-a78be404b94f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:50 compute-0 nova_compute[253538]: 2025-11-25 08:36:50.524 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:50 compute-0 ovn_controller[152859]: 2025-11-25T08:36:50Z|00716|binding|INFO|Setting lport 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a ovn-installed in OVS
Nov 25 08:36:50 compute-0 ovn_controller[152859]: 2025-11-25T08:36:50Z|00717|binding|INFO|Setting lport 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a up in Southbound
Nov 25 08:36:50 compute-0 nova_compute[253538]: 2025-11-25 08:36:50.527 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:50 compute-0 systemd-udevd[329220]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:36:50 compute-0 systemd-machined[215790]: New machine qemu-91-instance-00000047.
Nov 25 08:36:50 compute-0 nova_compute[253538]: 2025-11-25 08:36:50.536 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:50 compute-0 NetworkManager[48915]: <info>  [1764059810.5445] device (tap88b844d5-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:36:50 compute-0 NetworkManager[48915]: <info>  [1764059810.5452] device (tap88b844d5-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:36:50 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-00000047.
Nov 25 08:36:50 compute-0 nova_compute[253538]: 2025-11-25 08:36:50.600 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:50 compute-0 podman[329211]: 2025-11-25 08:36:50.634662591 +0000 UTC m=+0.112280901 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:36:51 compute-0 nova_compute[253538]: 2025-11-25 08:36:51.281 253542 DEBUG nova.compute.manager [req-3145b167-88bf-42cd-9d92-5a3c944687f5 req-7431ead9-1317-4129-aa69-3b2ec38b2cd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received event network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:51 compute-0 nova_compute[253538]: 2025-11-25 08:36:51.282 253542 DEBUG oslo_concurrency.lockutils [req-3145b167-88bf-42cd-9d92-5a3c944687f5 req-7431ead9-1317-4129-aa69-3b2ec38b2cd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:51 compute-0 nova_compute[253538]: 2025-11-25 08:36:51.282 253542 DEBUG oslo_concurrency.lockutils [req-3145b167-88bf-42cd-9d92-5a3c944687f5 req-7431ead9-1317-4129-aa69-3b2ec38b2cd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:51 compute-0 nova_compute[253538]: 2025-11-25 08:36:51.282 253542 DEBUG oslo_concurrency.lockutils [req-3145b167-88bf-42cd-9d92-5a3c944687f5 req-7431ead9-1317-4129-aa69-3b2ec38b2cd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:51 compute-0 nova_compute[253538]: 2025-11-25 08:36:51.283 253542 DEBUG nova.compute.manager [req-3145b167-88bf-42cd-9d92-5a3c944687f5 req-7431ead9-1317-4129-aa69-3b2ec38b2cd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] No waiting events found dispatching network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:36:51 compute-0 nova_compute[253538]: 2025-11-25 08:36:51.283 253542 WARNING nova.compute.manager [req-3145b167-88bf-42cd-9d92-5a3c944687f5 req-7431ead9-1317-4129-aa69-3b2ec38b2cd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received unexpected event network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a for instance with vm_state active and task_state rescuing.
Nov 25 08:36:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1615: 321 pgs: 321 active+clean; 544 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 10 MiB/s wr, 359 op/s
Nov 25 08:36:51 compute-0 nova_compute[253538]: 2025-11-25 08:36:51.535 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:51 compute-0 sudo[329287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:36:51 compute-0 sudo[329287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:36:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:36:51 compute-0 sudo[329287]: pam_unix(sudo:session): session closed for user root
Nov 25 08:36:51 compute-0 sudo[329330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:36:51 compute-0 sudo[329330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:36:51 compute-0 sudo[329330]: pam_unix(sudo:session): session closed for user root
Nov 25 08:36:51 compute-0 sudo[329355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:36:51 compute-0 sudo[329355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:36:51 compute-0 sudo[329355]: pam_unix(sudo:session): session closed for user root
Nov 25 08:36:51 compute-0 sudo[329380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:36:51 compute-0 sudo[329380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:36:52 compute-0 nova_compute[253538]: 2025-11-25 08:36:52.034 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:36:52 compute-0 nova_compute[253538]: 2025-11-25 08:36:52.035 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059812.0344586, 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:52 compute-0 nova_compute[253538]: 2025-11-25 08:36:52.035 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] VM Resumed (Lifecycle Event)
Nov 25 08:36:52 compute-0 nova_compute[253538]: 2025-11-25 08:36:52.039 253542 DEBUG nova.compute.manager [None req-5c700eb3-f4c7-4aa1-a67d-b19918eb8cab ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:52 compute-0 nova_compute[253538]: 2025-11-25 08:36:52.065 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:52 compute-0 nova_compute[253538]: 2025-11-25 08:36:52.069 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:52 compute-0 nova_compute[253538]: 2025-11-25 08:36:52.092 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 25 08:36:52 compute-0 nova_compute[253538]: 2025-11-25 08:36:52.093 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059812.0355935, 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:52 compute-0 nova_compute[253538]: 2025-11-25 08:36:52.093 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] VM Started (Lifecycle Event)
Nov 25 08:36:52 compute-0 nova_compute[253538]: 2025-11-25 08:36:52.116 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:52 compute-0 nova_compute[253538]: 2025-11-25 08:36:52.120 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:36:52 compute-0 sudo[329380]: pam_unix(sudo:session): session closed for user root
Nov 25 08:36:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:36:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:36:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:36:52 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:36:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:36:52 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:36:52 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 9fc8908e-d5cb-4049-b158-c3d1a20fdc50 does not exist
Nov 25 08:36:52 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 956a324b-3425-4654-a4fc-eb4919c19f0a does not exist
Nov 25 08:36:52 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev a0d19b47-0f32-4c89-8a76-10b9bf3eba49 does not exist
Nov 25 08:36:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:36:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:36:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:36:52 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:36:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:36:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:36:52 compute-0 sudo[329442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:36:52 compute-0 sudo[329442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:36:52 compute-0 sudo[329442]: pam_unix(sudo:session): session closed for user root
Nov 25 08:36:52 compute-0 ceph-mon[75015]: pgmap v1615: 321 pgs: 321 active+clean; 544 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 10 MiB/s wr, 359 op/s
Nov 25 08:36:52 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:36:52 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:36:52 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:36:52 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:36:52 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:36:52 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:36:52 compute-0 sudo[329467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:36:52 compute-0 sudo[329467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:36:52 compute-0 sudo[329467]: pam_unix(sudo:session): session closed for user root
Nov 25 08:36:52 compute-0 sudo[329492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:36:52 compute-0 sudo[329492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:36:52 compute-0 sudo[329492]: pam_unix(sudo:session): session closed for user root
Nov 25 08:36:52 compute-0 sudo[329517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:36:52 compute-0 sudo[329517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:36:53 compute-0 podman[329581]: 2025-11-25 08:36:53.108927448 +0000 UTC m=+0.032184508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:36:53
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'images', 'backups', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms']
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1616: 321 pgs: 321 active+clean; 577 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 11 MiB/s wr, 324 op/s
Nov 25 08:36:53 compute-0 nova_compute[253538]: 2025-11-25 08:36:53.359 253542 DEBUG nova.compute.manager [req-a1d8fdfc-22d8-45a6-930a-e39de8529e6b req-c0cc4689-394f-44ba-a16b-59d46bc537b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received event network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:53 compute-0 nova_compute[253538]: 2025-11-25 08:36:53.360 253542 DEBUG oslo_concurrency.lockutils [req-a1d8fdfc-22d8-45a6-930a-e39de8529e6b req-c0cc4689-394f-44ba-a16b-59d46bc537b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:53 compute-0 nova_compute[253538]: 2025-11-25 08:36:53.360 253542 DEBUG oslo_concurrency.lockutils [req-a1d8fdfc-22d8-45a6-930a-e39de8529e6b req-c0cc4689-394f-44ba-a16b-59d46bc537b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:53 compute-0 nova_compute[253538]: 2025-11-25 08:36:53.360 253542 DEBUG oslo_concurrency.lockutils [req-a1d8fdfc-22d8-45a6-930a-e39de8529e6b req-c0cc4689-394f-44ba-a16b-59d46bc537b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:53 compute-0 nova_compute[253538]: 2025-11-25 08:36:53.360 253542 DEBUG nova.compute.manager [req-a1d8fdfc-22d8-45a6-930a-e39de8529e6b req-c0cc4689-394f-44ba-a16b-59d46bc537b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] No waiting events found dispatching network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:36:53 compute-0 nova_compute[253538]: 2025-11-25 08:36:53.361 253542 WARNING nova.compute.manager [req-a1d8fdfc-22d8-45a6-930a-e39de8529e6b req-c0cc4689-394f-44ba-a16b-59d46bc537b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received unexpected event network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a for instance with vm_state rescued and task_state None.
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:36:53 compute-0 podman[329581]: 2025-11-25 08:36:53.508533578 +0000 UTC m=+0.431790618 container create a43da95f09a916bf6a2daba848b6d600f3645d689aef929b0efa01a68c1fa3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chaplygin, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 08:36:53 compute-0 systemd[1]: Started libpod-conmon-a43da95f09a916bf6a2daba848b6d600f3645d689aef929b0efa01a68c1fa3dd.scope.
Nov 25 08:36:53 compute-0 ovn_controller[152859]: 2025-11-25T08:36:53Z|00718|binding|INFO|Releasing lport fb637403-e23d-4de9-9c54-1f8015bf829f from this chassis (sb_readonly=0)
Nov 25 08:36:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:36:53 compute-0 nova_compute[253538]: 2025-11-25 08:36:53.678 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:53 compute-0 podman[329581]: 2025-11-25 08:36:53.693721875 +0000 UTC m=+0.616978915 container init a43da95f09a916bf6a2daba848b6d600f3645d689aef929b0efa01a68c1fa3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chaplygin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 08:36:53 compute-0 podman[329581]: 2025-11-25 08:36:53.70382425 +0000 UTC m=+0.627081290 container start a43da95f09a916bf6a2daba848b6d600f3645d689aef929b0efa01a68c1fa3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:36:53 compute-0 optimistic_chaplygin[329597]: 167 167
Nov 25 08:36:53 compute-0 systemd[1]: libpod-a43da95f09a916bf6a2daba848b6d600f3645d689aef929b0efa01a68c1fa3dd.scope: Deactivated successfully.
Nov 25 08:36:53 compute-0 ceph-mon[75015]: pgmap v1616: 321 pgs: 321 active+clean; 577 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 11 MiB/s wr, 324 op/s
Nov 25 08:36:53 compute-0 podman[329581]: 2025-11-25 08:36:53.765598104 +0000 UTC m=+0.688855164 container attach a43da95f09a916bf6a2daba848b6d600f3645d689aef929b0efa01a68c1fa3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:36:53 compute-0 podman[329581]: 2025-11-25 08:36:53.766490898 +0000 UTC m=+0.689747928 container died a43da95f09a916bf6a2daba848b6d600f3645d689aef929b0efa01a68c1fa3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chaplygin, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:36:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:36:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-243c64bdad56b5beea110f1e2365a612ede46cfdef649a7bc0b61d8f9c0af945-merged.mount: Deactivated successfully.
Nov 25 08:36:54 compute-0 podman[329581]: 2025-11-25 08:36:54.219121542 +0000 UTC m=+1.142378572 container remove a43da95f09a916bf6a2daba848b6d600f3645d689aef929b0efa01a68c1fa3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:36:54 compute-0 systemd[1]: libpod-conmon-a43da95f09a916bf6a2daba848b6d600f3645d689aef929b0efa01a68c1fa3dd.scope: Deactivated successfully.
Nov 25 08:36:54 compute-0 podman[329622]: 2025-11-25 08:36:54.430081932 +0000 UTC m=+0.024126969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:36:54 compute-0 nova_compute[253538]: 2025-11-25 08:36:54.814 253542 INFO nova.virt.libvirt.driver [None req-3e9e8eb0-7317-4815-9d6e-e72d67482b1c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Instance shutdown successfully after 15 seconds.
Nov 25 08:36:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 597 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 11 MiB/s wr, 340 op/s
Nov 25 08:36:55 compute-0 podman[329622]: 2025-11-25 08:36:55.565621877 +0000 UTC m=+1.159666924 container create 1d8a76ee8a5bf1e9ac32552972d7393ea2e03e38c758cd3299a9888bdb188641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:36:55 compute-0 nova_compute[253538]: 2025-11-25 08:36:55.571 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059800.5699718, 774c40d8-01ae-49cc-ad06-05003232491a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:36:55 compute-0 nova_compute[253538]: 2025-11-25 08:36:55.572 253542 INFO nova.compute.manager [-] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] VM Stopped (Lifecycle Event)
Nov 25 08:36:55 compute-0 nova_compute[253538]: 2025-11-25 08:36:55.590 253542 DEBUG nova.compute.manager [None req-3f43377b-e2c3-4fa7-9ebd-647d9d91bccd - - - - - -] [instance: 774c40d8-01ae-49cc-ad06-05003232491a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:55 compute-0 nova_compute[253538]: 2025-11-25 08:36:55.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:55 compute-0 systemd[1]: Started libpod-conmon-1d8a76ee8a5bf1e9ac32552972d7393ea2e03e38c758cd3299a9888bdb188641.scope.
Nov 25 08:36:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e884f6960befe577f92891a61b448d3baa07052630ca3c9864918ffb8902062d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e884f6960befe577f92891a61b448d3baa07052630ca3c9864918ffb8902062d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e884f6960befe577f92891a61b448d3baa07052630ca3c9864918ffb8902062d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e884f6960befe577f92891a61b448d3baa07052630ca3c9864918ffb8902062d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e884f6960befe577f92891a61b448d3baa07052630ca3c9864918ffb8902062d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:36:55 compute-0 ceph-mon[75015]: pgmap v1617: 321 pgs: 321 active+clean; 597 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 11 MiB/s wr, 340 op/s
Nov 25 08:36:55 compute-0 podman[329622]: 2025-11-25 08:36:55.764893627 +0000 UTC m=+1.358938724 container init 1d8a76ee8a5bf1e9ac32552972d7393ea2e03e38c758cd3299a9888bdb188641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_panini, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 08:36:55 compute-0 podman[329622]: 2025-11-25 08:36:55.776161784 +0000 UTC m=+1.370206791 container start 1d8a76ee8a5bf1e9ac32552972d7393ea2e03e38c758cd3299a9888bdb188641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:36:55 compute-0 podman[329622]: 2025-11-25 08:36:55.838431851 +0000 UTC m=+1.432476878 container attach 1d8a76ee8a5bf1e9ac32552972d7393ea2e03e38c758cd3299a9888bdb188641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_panini, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:36:55 compute-0 nova_compute[253538]: 2025-11-25 08:36:55.845 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:55 compute-0 nova_compute[253538]: 2025-11-25 08:36:55.845 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:55 compute-0 nova_compute[253538]: 2025-11-25 08:36:55.857 253542 DEBUG nova.compute.manager [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:36:55 compute-0 kernel: tap8bb4cdd1-80 (unregistering): left promiscuous mode
Nov 25 08:36:55 compute-0 NetworkManager[48915]: <info>  [1764059815.9139] device (tap8bb4cdd1-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:36:55 compute-0 ovn_controller[152859]: 2025-11-25T08:36:55Z|00719|binding|INFO|Releasing lport 8bb4cdd1-8082-4ad3-9350-be7270fb373b from this chassis (sb_readonly=0)
Nov 25 08:36:55 compute-0 ovn_controller[152859]: 2025-11-25T08:36:55Z|00720|binding|INFO|Setting lport 8bb4cdd1-8082-4ad3-9350-be7270fb373b down in Southbound
Nov 25 08:36:55 compute-0 ovn_controller[152859]: 2025-11-25T08:36:55Z|00721|binding|INFO|Removing iface tap8bb4cdd1-80 ovn-installed in OVS
Nov 25 08:36:55 compute-0 nova_compute[253538]: 2025-11-25 08:36:55.925 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:55.933 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:57:05 10.100.0.3'], port_security=['fa:16:3e:a4:57:05 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6c10a34e-4126-4e88-ad4d-ba7c407a379e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa56d31750374b64b67d1be19bb4e989', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8d5f0080-1d0d-4c90-8c84-7d1437ba109d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec4cd11-f7ee-427b-8b83-e8d21ceafdd1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8bb4cdd1-8082-4ad3-9350-be7270fb373b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:36:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:55.934 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8bb4cdd1-8082-4ad3-9350-be7270fb373b in datapath 30bd3cca-f71b-4541-ae95-8d0eb4dfe470 unbound from our chassis
Nov 25 08:36:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:55.937 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30bd3cca-f71b-4541-ae95-8d0eb4dfe470
Nov 25 08:36:55 compute-0 nova_compute[253538]: 2025-11-25 08:36:55.960 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:55 compute-0 nova_compute[253538]: 2025-11-25 08:36:55.962 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:55 compute-0 nova_compute[253538]: 2025-11-25 08:36:55.962 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:55.964 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d471008-6827-4c41-8804-ccd1570deb5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:55 compute-0 nova_compute[253538]: 2025-11-25 08:36:55.968 253542 DEBUG nova.virt.hardware [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:36:55 compute-0 nova_compute[253538]: 2025-11-25 08:36:55.969 253542 INFO nova.compute.claims [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:36:55 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000048.scope: Deactivated successfully.
Nov 25 08:36:55 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000048.scope: Consumed 15.402s CPU time.
Nov 25 08:36:55 compute-0 systemd-machined[215790]: Machine qemu-85-instance-00000048 terminated.
Nov 25 08:36:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:55.995 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[70471b4a-3d74-492b-95fb-c4b3d5aa14ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:55.999 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4412d5-77b6-4c12-a9ca-4c1d9728755e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:56.046 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[26df7b1f-f7cc-47f9-91b4-54666a2cbc29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.045 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.049 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:56.065 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fdcdc479-9390-4fda-bd51-671a4bba48fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30bd3cca-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:18:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 9, 'rx_bytes': 742, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 9, 'rx_bytes': 742, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514256, 'reachable_time': 18429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329662, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.074 253542 INFO nova.virt.libvirt.driver [-] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Instance destroyed successfully.
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.076 253542 DEBUG nova.objects.instance [None req-3e9e8eb0-7317-4815-9d6e-e72d67482b1c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6c10a34e-4126-4e88-ad4d-ba7c407a379e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:56.081 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3db543e4-41c8-4ad8-8e9f-42c8c3d328f6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap30bd3cca-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514276, 'tstamp': 514276}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329665, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap30bd3cca-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514280, 'tstamp': 514280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329665, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:36:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:56.083 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30bd3cca-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.085 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.086 253542 DEBUG nova.compute.manager [None req-3e9e8eb0-7317-4815-9d6e-e72d67482b1c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:36:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:56.088 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30bd3cca-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:56.089 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:36:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:56.089 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30bd3cca-f0, col_values=(('external_ids', {'iface-id': 'fb637403-e23d-4de9-9c54-1f8015bf829f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:36:56.090 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.090 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.138 253542 DEBUG oslo_concurrency.lockutils [None req-3e9e8eb0-7317-4815-9d6e-e72d67482b1c ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 16.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.147 253542 DEBUG oslo_concurrency.processutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.433 253542 DEBUG nova.compute.manager [req-bed4b739-01e8-4f66-9c9f-cff03c6d549f req-8eb50b55-b3e0-4e5a-a67b-279ec8269c53 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received event network-vif-unplugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.435 253542 DEBUG oslo_concurrency.lockutils [req-bed4b739-01e8-4f66-9c9f-cff03c6d549f req-8eb50b55-b3e0-4e5a-a67b-279ec8269c53 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.436 253542 DEBUG oslo_concurrency.lockutils [req-bed4b739-01e8-4f66-9c9f-cff03c6d549f req-8eb50b55-b3e0-4e5a-a67b-279ec8269c53 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.436 253542 DEBUG oslo_concurrency.lockutils [req-bed4b739-01e8-4f66-9c9f-cff03c6d549f req-8eb50b55-b3e0-4e5a-a67b-279ec8269c53 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.437 253542 DEBUG nova.compute.manager [req-bed4b739-01e8-4f66-9c9f-cff03c6d549f req-8eb50b55-b3e0-4e5a-a67b-279ec8269c53 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] No waiting events found dispatching network-vif-unplugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.437 253542 WARNING nova.compute.manager [req-bed4b739-01e8-4f66-9c9f-cff03c6d549f req-8eb50b55-b3e0-4e5a-a67b-279ec8269c53 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received unexpected event network-vif-unplugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b for instance with vm_state stopped and task_state None.
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.536 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:36:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:36:56 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1359091210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.691 253542 DEBUG oslo_concurrency.processutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.700 253542 DEBUG nova.compute.provider_tree [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.715 253542 DEBUG nova.scheduler.client.report [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.733 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.734 253542 DEBUG nova.compute.manager [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.799 253542 DEBUG nova.compute.manager [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.799 253542 DEBUG nova.network.neutron [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.840 253542 INFO nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:36:56 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1359091210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.855 253542 DEBUG nova.compute.manager [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.957 253542 DEBUG nova.compute.manager [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.958 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.958 253542 INFO nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Creating image(s)
Nov 25 08:36:56 compute-0 nova_compute[253538]: 2025-11-25 08:36:56.982 253542 DEBUG nova.storage.rbd_utils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:57 compute-0 nova_compute[253538]: 2025-11-25 08:36:57.013 253542 DEBUG nova.storage.rbd_utils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:57 compute-0 nova_compute[253538]: 2025-11-25 08:36:57.035 253542 DEBUG nova.storage.rbd_utils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:57 compute-0 nova_compute[253538]: 2025-11-25 08:36:57.043 253542 DEBUG oslo_concurrency.processutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:57 compute-0 nova_compute[253538]: 2025-11-25 08:36:57.145 253542 DEBUG nova.policy [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ad675e78b1b34f1c92c57e42532c3c20', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '488c6d53000c47848dba6b7be6b4ff40', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:36:57 compute-0 nova_compute[253538]: 2025-11-25 08:36:57.317 253542 DEBUG nova.objects.instance [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'flavor' on Instance uuid 6c10a34e-4126-4e88-ad4d-ba7c407a379e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1618: 321 pgs: 321 active+clean; 598 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 10 MiB/s wr, 341 op/s
Nov 25 08:36:57 compute-0 nova_compute[253538]: 2025-11-25 08:36:57.340 253542 DEBUG oslo_concurrency.lockutils [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "refresh_cache-6c10a34e-4126-4e88-ad4d-ba7c407a379e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:36:57 compute-0 nova_compute[253538]: 2025-11-25 08:36:57.341 253542 DEBUG oslo_concurrency.lockutils [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquired lock "refresh_cache-6c10a34e-4126-4e88-ad4d-ba7c407a379e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:36:57 compute-0 nova_compute[253538]: 2025-11-25 08:36:57.341 253542 DEBUG nova.network.neutron [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:36:57 compute-0 nova_compute[253538]: 2025-11-25 08:36:57.341 253542 DEBUG nova.objects.instance [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'info_cache' on Instance uuid 6c10a34e-4126-4e88-ad4d-ba7c407a379e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:57 compute-0 nova_compute[253538]: 2025-11-25 08:36:57.795 253542 DEBUG oslo_concurrency.processutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.752s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:57 compute-0 nova_compute[253538]: 2025-11-25 08:36:57.796 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:57 compute-0 nova_compute[253538]: 2025-11-25 08:36:57.797 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:57 compute-0 nova_compute[253538]: 2025-11-25 08:36:57.797 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:57 compute-0 optimistic_panini[329640]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:36:57 compute-0 optimistic_panini[329640]: --> relative data size: 1.0
Nov 25 08:36:57 compute-0 optimistic_panini[329640]: --> All data devices are unavailable
Nov 25 08:36:57 compute-0 systemd[1]: libpod-1d8a76ee8a5bf1e9ac32552972d7393ea2e03e38c758cd3299a9888bdb188641.scope: Deactivated successfully.
Nov 25 08:36:57 compute-0 systemd[1]: libpod-1d8a76ee8a5bf1e9ac32552972d7393ea2e03e38c758cd3299a9888bdb188641.scope: Consumed 1.112s CPU time.
Nov 25 08:36:57 compute-0 nova_compute[253538]: 2025-11-25 08:36:57.845 253542 DEBUG nova.storage.rbd_utils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:36:57 compute-0 nova_compute[253538]: 2025-11-25 08:36:57.849 253542 DEBUG oslo_concurrency.processutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:57 compute-0 nova_compute[253538]: 2025-11-25 08:36:57.899 253542 DEBUG nova.network.neutron [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Successfully created port: f3740887-8427-4858-b3e7-5c15f52a2484 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:36:57 compute-0 podman[329792]: 2025-11-25 08:36:57.950152509 +0000 UTC m=+0.021441305 container died 1d8a76ee8a5bf1e9ac32552972d7393ea2e03e38c758cd3299a9888bdb188641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_panini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:36:58 compute-0 ceph-mon[75015]: pgmap v1618: 321 pgs: 321 active+clean; 598 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 10 MiB/s wr, 341 op/s
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.555 253542 DEBUG nova.compute.manager [req-9e0bc6d5-1f73-42d9-bacb-8a2816a4d01f req-f0498b4f-59c0-4bbb-bed5-791dd04816d2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received event network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.556 253542 DEBUG oslo_concurrency.lockutils [req-9e0bc6d5-1f73-42d9-bacb-8a2816a4d01f req-f0498b4f-59c0-4bbb-bed5-791dd04816d2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.557 253542 DEBUG oslo_concurrency.lockutils [req-9e0bc6d5-1f73-42d9-bacb-8a2816a4d01f req-f0498b4f-59c0-4bbb-bed5-791dd04816d2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.558 253542 DEBUG oslo_concurrency.lockutils [req-9e0bc6d5-1f73-42d9-bacb-8a2816a4d01f req-f0498b4f-59c0-4bbb-bed5-791dd04816d2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.558 253542 DEBUG nova.compute.manager [req-9e0bc6d5-1f73-42d9-bacb-8a2816a4d01f req-f0498b4f-59c0-4bbb-bed5-791dd04816d2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] No waiting events found dispatching network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.559 253542 WARNING nova.compute.manager [req-9e0bc6d5-1f73-42d9-bacb-8a2816a4d01f req-f0498b4f-59c0-4bbb-bed5-791dd04816d2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received unexpected event network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b for instance with vm_state stopped and task_state powering-on.
Nov 25 08:36:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-e884f6960befe577f92891a61b448d3baa07052630ca3c9864918ffb8902062d-merged.mount: Deactivated successfully.
Nov 25 08:36:58 compute-0 podman[329792]: 2025-11-25 08:36:58.710707946 +0000 UTC m=+0.781996742 container remove 1d8a76ee8a5bf1e9ac32552972d7393ea2e03e38c758cd3299a9888bdb188641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 08:36:58 compute-0 systemd[1]: libpod-conmon-1d8a76ee8a5bf1e9ac32552972d7393ea2e03e38c758cd3299a9888bdb188641.scope: Deactivated successfully.
Nov 25 08:36:58 compute-0 sudo[329517]: pam_unix(sudo:session): session closed for user root
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.763 253542 DEBUG nova.network.neutron [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Updating instance_info_cache with network_info: [{"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.781 253542 DEBUG oslo_concurrency.lockutils [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Releasing lock "refresh_cache-6c10a34e-4126-4e88-ad4d-ba7c407a379e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.817 253542 INFO nova.virt.libvirt.driver [-] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Instance destroyed successfully.
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.818 253542 DEBUG nova.objects.instance [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6c10a34e-4126-4e88-ad4d-ba7c407a379e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.836 253542 DEBUG nova.objects.instance [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'resources' on Instance uuid 6c10a34e-4126-4e88-ad4d-ba7c407a379e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:58 compute-0 sudo[329826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:36:58 compute-0 sudo[329826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.846 253542 DEBUG nova.virt.libvirt.vif [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:36:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1875494502',display_name='tempest-ListServerFiltersTestJSON-instance-1875494502',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1875494502',id=72,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:36:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='aa56d31750374b64b67d1be19bb4e989',ramdisk_id='',reservation_id='r-usfwimbp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1878680398',owner_user_name='tempest-ListServerFiltersTestJSON-1878680398-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:36:56Z,user_data=None,user_id='ee3a6261ded642fa9ef617b29b026d86',uuid=6c10a34e-4126-4e88-ad4d-ba7c407a379e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.851 253542 DEBUG nova.network.os_vif_util [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converting VIF {"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.851 253542 DEBUG nova.network.os_vif_util [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:57:05,bridge_name='br-int',has_traffic_filtering=True,id=8bb4cdd1-8082-4ad3-9350-be7270fb373b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bb4cdd1-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.852 253542 DEBUG os_vif [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:57:05,bridge_name='br-int',has_traffic_filtering=True,id=8bb4cdd1-8082-4ad3-9350-be7270fb373b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bb4cdd1-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:36:58 compute-0 sudo[329826]: pam_unix(sudo:session): session closed for user root
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.860 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.861 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8bb4cdd1-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.866 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.869 253542 DEBUG nova.network.neutron [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Successfully updated port: f3740887-8427-4858-b3e7-5c15f52a2484 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.870 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.872 253542 INFO os_vif [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:57:05,bridge_name='br-int',has_traffic_filtering=True,id=8bb4cdd1-8082-4ad3-9350-be7270fb373b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bb4cdd1-80')
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.880 253542 DEBUG nova.virt.libvirt.driver [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Start _get_guest_xml network_info=[{"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.881 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "refresh_cache-02e92d65-4521-4c60-bed4-2e8fc4d243e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.881 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquired lock "refresh_cache-02e92d65-4521-4c60-bed4-2e8fc4d243e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.881 253542 DEBUG nova.network.neutron [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.886 253542 WARNING nova.virt.libvirt.driver [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.890 253542 DEBUG nova.virt.libvirt.host [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.891 253542 DEBUG nova.virt.libvirt.host [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.897 253542 DEBUG nova.virt.libvirt.host [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.898 253542 DEBUG nova.virt.libvirt.host [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.899 253542 DEBUG nova.virt.libvirt.driver [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.899 253542 DEBUG nova.virt.hardware [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.899 253542 DEBUG nova.virt.hardware [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.900 253542 DEBUG nova.virt.hardware [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.900 253542 DEBUG nova.virt.hardware [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.900 253542 DEBUG nova.virt.hardware [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.900 253542 DEBUG nova.virt.hardware [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.900 253542 DEBUG nova.virt.hardware [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.900 253542 DEBUG nova.virt.hardware [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.901 253542 DEBUG nova.virt.hardware [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.901 253542 DEBUG nova.virt.hardware [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.901 253542 DEBUG nova.virt.hardware [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.901 253542 DEBUG nova.objects.instance [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6c10a34e-4126-4e88-ad4d-ba7c407a379e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:58 compute-0 sudo[329851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.917 253542 DEBUG oslo_concurrency.processutils [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:58 compute-0 sudo[329851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:36:58 compute-0 sudo[329851]: pam_unix(sudo:session): session closed for user root
Nov 25 08:36:58 compute-0 nova_compute[253538]: 2025-11-25 08:36:58.953 253542 DEBUG oslo_concurrency.processutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:58 compute-0 sudo[329876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:36:58 compute-0 sudo[329876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:36:58 compute-0 sudo[329876]: pam_unix(sudo:session): session closed for user root
Nov 25 08:36:59 compute-0 sudo[329920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:36:59 compute-0 sudo[329920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.047 253542 DEBUG nova.storage.rbd_utils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] resizing rbd image 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.148 253542 DEBUG nova.objects.instance [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'migration_context' on Instance uuid 02e92d65-4521-4c60-bed4-2e8fc4d243e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.156 253542 DEBUG nova.network.neutron [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.169 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.169 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Ensure instance console log exists: /var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.170 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.170 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.170 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:36:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1619: 321 pgs: 321 active+clean; 602 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.4 MiB/s wr, 316 op/s
Nov 25 08:36:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2374094067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:59 compute-0 podman[330054]: 2025-11-25 08:36:59.385784703 +0000 UTC m=+0.042783628 container create e5da2ed6a690eac982ed14a509bbd337aac1660add8b66bd9c38f712bba15035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.397 253542 DEBUG oslo_concurrency.processutils [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:59 compute-0 systemd[1]: Started libpod-conmon-e5da2ed6a690eac982ed14a509bbd337aac1660add8b66bd9c38f712bba15035.scope.
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.446 253542 DEBUG oslo_concurrency.processutils [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:36:59 compute-0 podman[330054]: 2025-11-25 08:36:59.366839246 +0000 UTC m=+0.023838201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:36:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:36:59 compute-0 podman[330054]: 2025-11-25 08:36:59.477927824 +0000 UTC m=+0.134926779 container init e5da2ed6a690eac982ed14a509bbd337aac1660add8b66bd9c38f712bba15035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mirzakhani, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:36:59 compute-0 podman[330054]: 2025-11-25 08:36:59.484363709 +0000 UTC m=+0.141362634 container start e5da2ed6a690eac982ed14a509bbd337aac1660add8b66bd9c38f712bba15035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 08:36:59 compute-0 zealous_mirzakhani[330089]: 167 167
Nov 25 08:36:59 compute-0 podman[330054]: 2025-11-25 08:36:59.490885056 +0000 UTC m=+0.147884001 container attach e5da2ed6a690eac982ed14a509bbd337aac1660add8b66bd9c38f712bba15035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mirzakhani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:36:59 compute-0 systemd[1]: libpod-e5da2ed6a690eac982ed14a509bbd337aac1660add8b66bd9c38f712bba15035.scope: Deactivated successfully.
Nov 25 08:36:59 compute-0 podman[330054]: 2025-11-25 08:36:59.49320445 +0000 UTC m=+0.150203395 container died e5da2ed6a690eac982ed14a509bbd337aac1660add8b66bd9c38f712bba15035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mirzakhani, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 08:36:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-dae84e4a8c7a8d9d1164f067c80785adcbd1997dddec25edc9b95b6a45e8784c-merged.mount: Deactivated successfully.
Nov 25 08:36:59 compute-0 ceph-mon[75015]: pgmap v1619: 321 pgs: 321 active+clean; 602 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.4 MiB/s wr, 316 op/s
Nov 25 08:36:59 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2374094067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:59 compute-0 podman[330054]: 2025-11-25 08:36:59.557617545 +0000 UTC m=+0.214616500 container remove e5da2ed6a690eac982ed14a509bbd337aac1660add8b66bd9c38f712bba15035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mirzakhani, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:36:59 compute-0 systemd[1]: libpod-conmon-e5da2ed6a690eac982ed14a509bbd337aac1660add8b66bd9c38f712bba15035.scope: Deactivated successfully.
Nov 25 08:36:59 compute-0 podman[330135]: 2025-11-25 08:36:59.737091586 +0000 UTC m=+0.041436160 container create 581a741188d2053e7e9ad59da351b307bb66c1891e88ec1a68071cda346ac933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_zhukovsky, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 08:36:59 compute-0 systemd[1]: Started libpod-conmon-581a741188d2053e7e9ad59da351b307bb66c1891e88ec1a68071cda346ac933.scope.
Nov 25 08:36:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:36:59 compute-0 podman[330135]: 2025-11-25 08:36:59.721958184 +0000 UTC m=+0.026302788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae470462d0fc06c8f2b8c3d8a7cc3a06a5f11d8edaf7478ca851dd859fa8695/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae470462d0fc06c8f2b8c3d8a7cc3a06a5f11d8edaf7478ca851dd859fa8695/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae470462d0fc06c8f2b8c3d8a7cc3a06a5f11d8edaf7478ca851dd859fa8695/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae470462d0fc06c8f2b8c3d8a7cc3a06a5f11d8edaf7478ca851dd859fa8695/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:36:59 compute-0 podman[330135]: 2025-11-25 08:36:59.839436886 +0000 UTC m=+0.143781530 container init 581a741188d2053e7e9ad59da351b307bb66c1891e88ec1a68071cda346ac933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_zhukovsky, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:36:59 compute-0 podman[330135]: 2025-11-25 08:36:59.847232047 +0000 UTC m=+0.151576621 container start 581a741188d2053e7e9ad59da351b307bb66c1891e88ec1a68071cda346ac933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:36:59 compute-0 podman[330135]: 2025-11-25 08:36:59.850874897 +0000 UTC m=+0.155219511 container attach 581a741188d2053e7e9ad59da351b307bb66c1891e88ec1a68071cda346ac933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_zhukovsky, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:36:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:36:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/436553277' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.924 253542 DEBUG oslo_concurrency.processutils [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.928 253542 DEBUG nova.virt.libvirt.vif [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:36:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1875494502',display_name='tempest-ListServerFiltersTestJSON-instance-1875494502',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1875494502',id=72,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:36:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='aa56d31750374b64b67d1be19bb4e989',ramdisk_id='',reservation_id='r-usfwimbp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1878680398',owner_user_name='tempest-ListServerFiltersTestJSON-1878680398-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:36:56Z,user_data=None,user_id='ee3a6261ded642fa9ef617b29b026d86',uuid=6c10a34e-4126-4e88-ad4d-ba7c407a379e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.929 253542 DEBUG nova.network.os_vif_util [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converting VIF {"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.931 253542 DEBUG nova.network.os_vif_util [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:57:05,bridge_name='br-int',has_traffic_filtering=True,id=8bb4cdd1-8082-4ad3-9350-be7270fb373b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bb4cdd1-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.934 253542 DEBUG nova.objects.instance [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c10a34e-4126-4e88-ad4d-ba7c407a379e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.957 253542 DEBUG nova.virt.libvirt.driver [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:36:59 compute-0 nova_compute[253538]:   <uuid>6c10a34e-4126-4e88-ad4d-ba7c407a379e</uuid>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   <name>instance-00000048</name>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1875494502</nova:name>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:36:58</nova:creationTime>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:36:59 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:36:59 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:36:59 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:36:59 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:36:59 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:36:59 compute-0 nova_compute[253538]:         <nova:user uuid="ee3a6261ded642fa9ef617b29b026d86">tempest-ListServerFiltersTestJSON-1878680398-project-member</nova:user>
Nov 25 08:36:59 compute-0 nova_compute[253538]:         <nova:project uuid="aa56d31750374b64b67d1be19bb4e989">tempest-ListServerFiltersTestJSON-1878680398</nova:project>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:36:59 compute-0 nova_compute[253538]:         <nova:port uuid="8bb4cdd1-8082-4ad3-9350-be7270fb373b">
Nov 25 08:36:59 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <system>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <entry name="serial">6c10a34e-4126-4e88-ad4d-ba7c407a379e</entry>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <entry name="uuid">6c10a34e-4126-4e88-ad4d-ba7c407a379e</entry>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     </system>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   <os>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   </os>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   <features>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   </features>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/6c10a34e-4126-4e88-ad4d-ba7c407a379e_disk">
Nov 25 08:36:59 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:59 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/6c10a34e-4126-4e88-ad4d-ba7c407a379e_disk.config">
Nov 25 08:36:59 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       </source>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:36:59 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:a4:57:05"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <target dev="tap8bb4cdd1-80"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/6c10a34e-4126-4e88-ad4d-ba7c407a379e/console.log" append="off"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <video>
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     </video>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <input type="keyboard" bus="usb"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:36:59 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:36:59 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:36:59 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:36:59 compute-0 nova_compute[253538]: </domain>
Nov 25 08:36:59 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.959 253542 DEBUG nova.virt.libvirt.driver [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.959 253542 DEBUG nova.virt.libvirt.driver [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.960 253542 DEBUG nova.virt.libvirt.vif [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:36:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1875494502',display_name='tempest-ListServerFiltersTestJSON-instance-1875494502',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1875494502',id=72,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:36:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='aa56d31750374b64b67d1be19bb4e989',ramdisk_id='',reservation_id='r-usfwimbp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1878680398',owner_user_name='tempest-ListServerFiltersTestJSON-1878680398-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:36:56Z,user_data=None,user_id='ee3a6261ded642fa9ef617b29b026d86',uuid=6c10a34e-4126-4e88-ad4d-ba7c407a379e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.961 253542 DEBUG nova.network.os_vif_util [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converting VIF {"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.961 253542 DEBUG nova.network.os_vif_util [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:57:05,bridge_name='br-int',has_traffic_filtering=True,id=8bb4cdd1-8082-4ad3-9350-be7270fb373b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bb4cdd1-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.962 253542 DEBUG os_vif [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:57:05,bridge_name='br-int',has_traffic_filtering=True,id=8bb4cdd1-8082-4ad3-9350-be7270fb373b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bb4cdd1-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.963 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.963 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.964 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.967 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.967 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8bb4cdd1-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.968 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8bb4cdd1-80, col_values=(('external_ids', {'iface-id': '8bb4cdd1-8082-4ad3-9350-be7270fb373b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:57:05', 'vm-uuid': '6c10a34e-4126-4e88-ad4d-ba7c407a379e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.970 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:59 compute-0 NetworkManager[48915]: <info>  [1764059819.9709] manager: (tap8bb4cdd1-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.977 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:36:59 compute-0 nova_compute[253538]: 2025-11-25 08:36:59.978 253542 INFO os_vif [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:57:05,bridge_name='br-int',has_traffic_filtering=True,id=8bb4cdd1-8082-4ad3-9350-be7270fb373b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bb4cdd1-80')
Nov 25 08:37:00 compute-0 kernel: tap8bb4cdd1-80: entered promiscuous mode
Nov 25 08:37:00 compute-0 NetworkManager[48915]: <info>  [1764059820.0690] manager: (tap8bb4cdd1-80): new Tun device (/org/freedesktop/NetworkManager/Devices/321)
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.068 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.075 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:00 compute-0 ovn_controller[152859]: 2025-11-25T08:37:00Z|00722|binding|INFO|Claiming lport 8bb4cdd1-8082-4ad3-9350-be7270fb373b for this chassis.
Nov 25 08:37:00 compute-0 ovn_controller[152859]: 2025-11-25T08:37:00Z|00723|binding|INFO|8bb4cdd1-8082-4ad3-9350-be7270fb373b: Claiming fa:16:3e:a4:57:05 10.100.0.3
Nov 25 08:37:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:00.088 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:57:05 10.100.0.3'], port_security=['fa:16:3e:a4:57:05 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6c10a34e-4126-4e88-ad4d-ba7c407a379e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa56d31750374b64b67d1be19bb4e989', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8d5f0080-1d0d-4c90-8c84-7d1437ba109d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec4cd11-f7ee-427b-8b83-e8d21ceafdd1, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8bb4cdd1-8082-4ad3-9350-be7270fb373b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:00.089 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8bb4cdd1-8082-4ad3-9350-be7270fb373b in datapath 30bd3cca-f71b-4541-ae95-8d0eb4dfe470 bound to our chassis
Nov 25 08:37:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:00.091 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30bd3cca-f71b-4541-ae95-8d0eb4dfe470
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.092 253542 DEBUG nova.network.neutron [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Updating instance_info_cache with network_info: [{"id": "f3740887-8427-4858-b3e7-5c15f52a2484", "address": "fa:16:3e:f9:5b:b8", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3740887-84", "ovs_interfaceid": "f3740887-8427-4858-b3e7-5c15f52a2484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:00 compute-0 ovn_controller[152859]: 2025-11-25T08:37:00Z|00724|binding|INFO|Setting lport 8bb4cdd1-8082-4ad3-9350-be7270fb373b ovn-installed in OVS
Nov 25 08:37:00 compute-0 ovn_controller[152859]: 2025-11-25T08:37:00Z|00725|binding|INFO|Setting lport 8bb4cdd1-8082-4ad3-9350-be7270fb373b up in Southbound
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:00 compute-0 systemd-udevd[330173]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.114 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Releasing lock "refresh_cache-02e92d65-4521-4c60-bed4-2e8fc4d243e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.115 253542 DEBUG nova.compute.manager [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Instance network_info: |[{"id": "f3740887-8427-4858-b3e7-5c15f52a2484", "address": "fa:16:3e:f9:5b:b8", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3740887-84", "ovs_interfaceid": "f3740887-8427-4858-b3e7-5c15f52a2484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.119 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Start _get_guest_xml network_info=[{"id": "f3740887-8427-4858-b3e7-5c15f52a2484", "address": "fa:16:3e:f9:5b:b8", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3740887-84", "ovs_interfaceid": "f3740887-8427-4858-b3e7-5c15f52a2484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:37:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:00.121 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[571f244e-acc4-4fa8-833c-c80dc6d9347a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:00 compute-0 systemd-machined[215790]: New machine qemu-92-instance-00000048.
Nov 25 08:37:00 compute-0 NetworkManager[48915]: <info>  [1764059820.1271] device (tap8bb4cdd1-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:37:00 compute-0 NetworkManager[48915]: <info>  [1764059820.1286] device (tap8bb4cdd1-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:37:00 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-00000048.
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.136 253542 WARNING nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.144 253542 DEBUG nova.virt.libvirt.host [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.145 253542 DEBUG nova.virt.libvirt.host [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.151 253542 DEBUG nova.virt.libvirt.host [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.151 253542 DEBUG nova.virt.libvirt.host [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.152 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.152 253542 DEBUG nova.virt.hardware [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.153 253542 DEBUG nova.virt.hardware [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.153 253542 DEBUG nova.virt.hardware [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.153 253542 DEBUG nova.virt.hardware [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.153 253542 DEBUG nova.virt.hardware [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.154 253542 DEBUG nova.virt.hardware [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.154 253542 DEBUG nova.virt.hardware [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.154 253542 DEBUG nova.virt.hardware [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.154 253542 DEBUG nova.virt.hardware [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.154 253542 DEBUG nova.virt.hardware [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.155 253542 DEBUG nova.virt.hardware [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.158 253542 DEBUG oslo_concurrency.processutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:00.188 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[14e3a5f8-9975-4912-8f5f-7b30c81dd304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:00.193 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3b3fd34e-b3d4-45ee-a605-0825d4743541]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:00.229 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd9a9fd-e478-44ef-93bd-40473d1de0cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:00.256 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ca70c606-fd0c-4dc1-bb65-bec91557b1c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30bd3cca-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:18:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 11, 'rx_bytes': 742, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 11, 'rx_bytes': 742, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514256, 'reachable_time': 18429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330187, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:00.271 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d786c5d0-d0fd-4796-a7fb-e314e50f1a9f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap30bd3cca-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514276, 'tstamp': 514276}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330189, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap30bd3cca-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514280, 'tstamp': 514280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330189, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:00.273 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30bd3cca-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.287 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:00.289 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30bd3cca-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:00.289 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:37:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:00.289 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30bd3cca-f0, col_values=(('external_ids', {'iface-id': 'fb637403-e23d-4de9-9c54-1f8015bf829f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:00.290 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:37:00 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/436553277' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:37:00 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3812043571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.636 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.655 253542 DEBUG nova.compute.manager [req-d8dc155f-cef0-40df-ba80-c25cfd85008c req-2b6ca8ef-fc78-474b-9e7f-10d39a5db0df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received event network-changed-f3740887-8427-4858-b3e7-5c15f52a2484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.655 253542 DEBUG nova.compute.manager [req-d8dc155f-cef0-40df-ba80-c25cfd85008c req-2b6ca8ef-fc78-474b-9e7f-10d39a5db0df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Refreshing instance network info cache due to event network-changed-f3740887-8427-4858-b3e7-5c15f52a2484. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.655 253542 DEBUG oslo_concurrency.lockutils [req-d8dc155f-cef0-40df-ba80-c25cfd85008c req-2b6ca8ef-fc78-474b-9e7f-10d39a5db0df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-02e92d65-4521-4c60-bed4-2e8fc4d243e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.656 253542 DEBUG oslo_concurrency.lockutils [req-d8dc155f-cef0-40df-ba80-c25cfd85008c req-2b6ca8ef-fc78-474b-9e7f-10d39a5db0df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-02e92d65-4521-4c60-bed4-2e8fc4d243e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.656 253542 DEBUG nova.network.neutron [req-d8dc155f-cef0-40df-ba80-c25cfd85008c req-2b6ca8ef-fc78-474b-9e7f-10d39a5db0df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Refreshing network info cache for port f3740887-8427-4858-b3e7-5c15f52a2484 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.658 253542 DEBUG oslo_concurrency.processutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]: {
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:     "0": [
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:         {
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "devices": [
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "/dev/loop3"
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             ],
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "lv_name": "ceph_lv0",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "lv_size": "21470642176",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "name": "ceph_lv0",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "tags": {
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.cluster_name": "ceph",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.crush_device_class": "",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.encrypted": "0",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.osd_id": "0",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.type": "block",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.vdo": "0"
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             },
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "type": "block",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "vg_name": "ceph_vg0"
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:         }
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:     ],
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:     "1": [
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:         {
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "devices": [
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "/dev/loop4"
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             ],
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "lv_name": "ceph_lv1",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "lv_size": "21470642176",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "name": "ceph_lv1",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "tags": {
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.cluster_name": "ceph",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.crush_device_class": "",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.encrypted": "0",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.osd_id": "1",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.type": "block",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.vdo": "0"
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             },
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "type": "block",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "vg_name": "ceph_vg1"
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:         }
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:     ],
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:     "2": [
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:         {
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "devices": [
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "/dev/loop5"
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             ],
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "lv_name": "ceph_lv2",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "lv_size": "21470642176",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "name": "ceph_lv2",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "tags": {
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.cluster_name": "ceph",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.crush_device_class": "",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.encrypted": "0",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.osd_id": "2",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.type": "block",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:                 "ceph.vdo": "0"
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             },
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "type": "block",
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:             "vg_name": "ceph_vg2"
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:         }
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]:     ]
Nov 25 08:37:00 compute-0 focused_zhukovsky[330151]: }
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.690 253542 DEBUG nova.storage.rbd_utils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:00 compute-0 nova_compute[253538]: 2025-11-25 08:37:00.697 253542 DEBUG oslo_concurrency.processutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:00 compute-0 systemd[1]: libpod-581a741188d2053e7e9ad59da351b307bb66c1891e88ec1a68071cda346ac933.scope: Deactivated successfully.
Nov 25 08:37:00 compute-0 podman[330135]: 2025-11-25 08:37:00.702259899 +0000 UTC m=+1.006604473 container died 581a741188d2053e7e9ad59da351b307bb66c1891e88ec1a68071cda346ac933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:37:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ae470462d0fc06c8f2b8c3d8a7cc3a06a5f11d8edaf7478ca851dd859fa8695-merged.mount: Deactivated successfully.
Nov 25 08:37:00 compute-0 podman[330135]: 2025-11-25 08:37:00.758518172 +0000 UTC m=+1.062862746 container remove 581a741188d2053e7e9ad59da351b307bb66c1891e88ec1a68071cda346ac933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:37:00 compute-0 systemd[1]: libpod-conmon-581a741188d2053e7e9ad59da351b307bb66c1891e88ec1a68071cda346ac933.scope: Deactivated successfully.
Nov 25 08:37:00 compute-0 sudo[329920]: pam_unix(sudo:session): session closed for user root
Nov 25 08:37:00 compute-0 sudo[330247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:37:00 compute-0 sudo[330247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:37:00 compute-0 sudo[330247]: pam_unix(sudo:session): session closed for user root
Nov 25 08:37:00 compute-0 sudo[330291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:37:00 compute-0 sudo[330291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:37:00 compute-0 sudo[330291]: pam_unix(sudo:session): session closed for user root
Nov 25 08:37:01 compute-0 sudo[330316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:37:01 compute-0 sudo[330316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:37:01 compute-0 sudo[330316]: pam_unix(sudo:session): session closed for user root
Nov 25 08:37:01 compute-0 sudo[330341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:37:01 compute-0 sudo[330341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:37:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:37:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1939710167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.204 253542 DEBUG oslo_concurrency.processutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.206 253542 DEBUG nova.virt.libvirt.vif [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:36:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1478856521',display_name='tempest-ServerRescueTestJSON-server-1478856521',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1478856521',id=78,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='488c6d53000c47848dba6b7be6b4ff40',ramdisk_id='',reservation_id='r-zkq6gjzu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-324239197',owner_user_name='tempest-ServerRescueTestJSON-324239
197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:36:56Z,user_data=None,user_id='ad675e78b1b34f1c92c57e42532c3c20',uuid=02e92d65-4521-4c60-bed4-2e8fc4d243e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f3740887-8427-4858-b3e7-5c15f52a2484", "address": "fa:16:3e:f9:5b:b8", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3740887-84", "ovs_interfaceid": "f3740887-8427-4858-b3e7-5c15f52a2484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.207 253542 DEBUG nova.network.os_vif_util [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Converting VIF {"id": "f3740887-8427-4858-b3e7-5c15f52a2484", "address": "fa:16:3e:f9:5b:b8", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3740887-84", "ovs_interfaceid": "f3740887-8427-4858-b3e7-5c15f52a2484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.208 253542 DEBUG nova.network.os_vif_util [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:5b:b8,bridge_name='br-int',has_traffic_filtering=True,id=f3740887-8427-4858-b3e7-5c15f52a2484,network=Network(6b2becc8-2b4e-4727-a105-454acfe44b89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3740887-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.210 253542 DEBUG nova.objects.instance [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'pci_devices' on Instance uuid 02e92d65-4521-4c60-bed4-2e8fc4d243e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.222 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:37:01 compute-0 nova_compute[253538]:   <uuid>02e92d65-4521-4c60-bed4-2e8fc4d243e4</uuid>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   <name>instance-0000004e</name>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerRescueTestJSON-server-1478856521</nova:name>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:37:00</nova:creationTime>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:37:01 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:37:01 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:37:01 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:37:01 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:37:01 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:37:01 compute-0 nova_compute[253538]:         <nova:user uuid="ad675e78b1b34f1c92c57e42532c3c20">tempest-ServerRescueTestJSON-324239197-project-member</nova:user>
Nov 25 08:37:01 compute-0 nova_compute[253538]:         <nova:project uuid="488c6d53000c47848dba6b7be6b4ff40">tempest-ServerRescueTestJSON-324239197</nova:project>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:37:01 compute-0 nova_compute[253538]:         <nova:port uuid="f3740887-8427-4858-b3e7-5c15f52a2484">
Nov 25 08:37:01 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <system>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <entry name="serial">02e92d65-4521-4c60-bed4-2e8fc4d243e4</entry>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <entry name="uuid">02e92d65-4521-4c60-bed4-2e8fc4d243e4</entry>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     </system>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   <os>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   </os>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   <features>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   </features>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk">
Nov 25 08:37:01 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       </source>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:37:01 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.config">
Nov 25 08:37:01 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       </source>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:37:01 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:f9:5b:b8"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <target dev="tapf3740887-84"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4/console.log" append="off"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <video>
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     </video>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:37:01 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:37:01 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:37:01 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:37:01 compute-0 nova_compute[253538]: </domain>
Nov 25 08:37:01 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.223 253542 DEBUG nova.compute.manager [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Preparing to wait for external event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.224 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.224 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.224 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.225 253542 DEBUG nova.virt.libvirt.vif [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:36:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1478856521',display_name='tempest-ServerRescueTestJSON-server-1478856521',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1478856521',id=78,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='488c6d53000c47848dba6b7be6b4ff40',ramdisk_id='',reservation_id='r-zkq6gjzu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-324239197',owner_user_name='tempest-ServerRescueTestJ
SON-324239197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:36:56Z,user_data=None,user_id='ad675e78b1b34f1c92c57e42532c3c20',uuid=02e92d65-4521-4c60-bed4-2e8fc4d243e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f3740887-8427-4858-b3e7-5c15f52a2484", "address": "fa:16:3e:f9:5b:b8", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3740887-84", "ovs_interfaceid": "f3740887-8427-4858-b3e7-5c15f52a2484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.225 253542 DEBUG nova.network.os_vif_util [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Converting VIF {"id": "f3740887-8427-4858-b3e7-5c15f52a2484", "address": "fa:16:3e:f9:5b:b8", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3740887-84", "ovs_interfaceid": "f3740887-8427-4858-b3e7-5c15f52a2484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.226 253542 DEBUG nova.network.os_vif_util [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:5b:b8,bridge_name='br-int',has_traffic_filtering=True,id=f3740887-8427-4858-b3e7-5c15f52a2484,network=Network(6b2becc8-2b4e-4727-a105-454acfe44b89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3740887-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.226 253542 DEBUG os_vif [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:5b:b8,bridge_name='br-int',has_traffic_filtering=True,id=f3740887-8427-4858-b3e7-5c15f52a2484,network=Network(6b2becc8-2b4e-4727-a105-454acfe44b89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3740887-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.227 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.227 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.228 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.230 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.230 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3740887-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.231 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf3740887-84, col_values=(('external_ids', {'iface-id': 'f3740887-8427-4858-b3e7-5c15f52a2484', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:5b:b8', 'vm-uuid': '02e92d65-4521-4c60-bed4-2e8fc4d243e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.232 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.234 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:37:01 compute-0 NetworkManager[48915]: <info>  [1764059821.2349] manager: (tapf3740887-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.239 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.240 253542 INFO os_vif [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:5b:b8,bridge_name='br-int',has_traffic_filtering=True,id=f3740887-8427-4858-b3e7-5c15f52a2484,network=Network(6b2becc8-2b4e-4727-a105-454acfe44b89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3740887-84')
Nov 25 08:37:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 617 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 5.0 MiB/s wr, 277 op/s
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.332 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.332 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.333 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] No VIF found with MAC fa:16:3e:f9:5b:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.333 253542 INFO nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Using config drive
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.363 253542 DEBUG nova.storage.rbd_utils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.449 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 6c10a34e-4126-4e88-ad4d-ba7c407a379e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.450 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059821.4488528, 6c10a34e-4126-4e88-ad4d-ba7c407a379e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.450 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] VM Resumed (Lifecycle Event)
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.453 253542 DEBUG nova.compute.manager [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.458 253542 INFO nova.virt.libvirt.driver [-] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Instance rebooted successfully.
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.459 253542 DEBUG nova.compute.manager [None req-a68e2280-3854-4dd2-888d-026290d0593d ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.470 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.477 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:01 compute-0 podman[330464]: 2025-11-25 08:37:01.477237088 +0000 UTC m=+0.113136234 container create 8866b98c798e1327b6d561bcd5d8a0bda2a427b461e083571ea8b08454c541b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gagarin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 08:37:01 compute-0 podman[330464]: 2025-11-25 08:37:01.385182589 +0000 UTC m=+0.021081755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.506 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.506 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059821.4500113, 6c10a34e-4126-4e88-ad4d-ba7c407a379e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.506 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] VM Started (Lifecycle Event)
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.526 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.535 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.537 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:01 compute-0 systemd[1]: Started libpod-conmon-8866b98c798e1327b6d561bcd5d8a0bda2a427b461e083571ea8b08454c541b8.scope.
Nov 25 08:37:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.685 253542 INFO nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Creating config drive at /var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4/disk.config
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.693 253542 DEBUG oslo_concurrency.processutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjhkivosm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:37:01 compute-0 podman[330464]: 2025-11-25 08:37:01.721281548 +0000 UTC m=+0.357180704 container init 8866b98c798e1327b6d561bcd5d8a0bda2a427b461e083571ea8b08454c541b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 08:37:01 compute-0 podman[330464]: 2025-11-25 08:37:01.730366656 +0000 UTC m=+0.366265802 container start 8866b98c798e1327b6d561bcd5d8a0bda2a427b461e083571ea8b08454c541b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gagarin, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 25 08:37:01 compute-0 festive_gagarin[330484]: 167 167
Nov 25 08:37:01 compute-0 systemd[1]: libpod-8866b98c798e1327b6d561bcd5d8a0bda2a427b461e083571ea8b08454c541b8.scope: Deactivated successfully.
Nov 25 08:37:01 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3812043571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:01 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1939710167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:01 compute-0 ceph-mon[75015]: pgmap v1620: 321 pgs: 321 active+clean; 617 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 5.0 MiB/s wr, 277 op/s
Nov 25 08:37:01 compute-0 podman[330464]: 2025-11-25 08:37:01.84464725 +0000 UTC m=+0.480546406 container attach 8866b98c798e1327b6d561bcd5d8a0bda2a427b461e083571ea8b08454c541b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 08:37:01 compute-0 podman[330464]: 2025-11-25 08:37:01.845011771 +0000 UTC m=+0.480910917 container died 8866b98c798e1327b6d561bcd5d8a0bda2a427b461e083571ea8b08454c541b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gagarin, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.850 253542 DEBUG oslo_concurrency.processutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjhkivosm" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb81d668fd3eed253231732a0e408bad32d864766b61c54f4f52eefc679b3578-merged.mount: Deactivated successfully.
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.890 253542 DEBUG nova.storage.rbd_utils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:01 compute-0 nova_compute[253538]: 2025-11-25 08:37:01.897 253542 DEBUG oslo_concurrency.processutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4/disk.config 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:01 compute-0 podman[330464]: 2025-11-25 08:37:01.897582933 +0000 UTC m=+0.533482079 container remove 8866b98c798e1327b6d561bcd5d8a0bda2a427b461e083571ea8b08454c541b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 08:37:01 compute-0 systemd[1]: libpod-conmon-8866b98c798e1327b6d561bcd5d8a0bda2a427b461e083571ea8b08454c541b8.scope: Deactivated successfully.
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.094 253542 DEBUG oslo_concurrency.processutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4/disk.config 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.095 253542 INFO nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Deleting local config drive /var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4/disk.config because it was imported into RBD.
Nov 25 08:37:02 compute-0 podman[330545]: 2025-11-25 08:37:02.097144382 +0000 UTC m=+0.044628218 container create 0d8f2af81511206883c31a0dcee85a510a43f94b8e1994a2d12c83a00f44d548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ramanujan, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 08:37:02 compute-0 systemd[1]: Started libpod-conmon-0d8f2af81511206883c31a0dcee85a510a43f94b8e1994a2d12c83a00f44d548.scope.
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.147 253542 DEBUG nova.network.neutron [req-d8dc155f-cef0-40df-ba80-c25cfd85008c req-2b6ca8ef-fc78-474b-9e7f-10d39a5db0df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Updated VIF entry in instance network info cache for port f3740887-8427-4858-b3e7-5c15f52a2484. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.148 253542 DEBUG nova.network.neutron [req-d8dc155f-cef0-40df-ba80-c25cfd85008c req-2b6ca8ef-fc78-474b-9e7f-10d39a5db0df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Updating instance_info_cache with network_info: [{"id": "f3740887-8427-4858-b3e7-5c15f52a2484", "address": "fa:16:3e:f9:5b:b8", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3740887-84", "ovs_interfaceid": "f3740887-8427-4858-b3e7-5c15f52a2484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:02 compute-0 kernel: tapf3740887-84: entered promiscuous mode
Nov 25 08:37:02 compute-0 NetworkManager[48915]: <info>  [1764059822.1509] manager: (tapf3740887-84): new Tun device (/org/freedesktop/NetworkManager/Devices/323)
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.152 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:02 compute-0 ovn_controller[152859]: 2025-11-25T08:37:02Z|00726|binding|INFO|Claiming lport f3740887-8427-4858-b3e7-5c15f52a2484 for this chassis.
Nov 25 08:37:02 compute-0 ovn_controller[152859]: 2025-11-25T08:37:02Z|00727|binding|INFO|f3740887-8427-4858-b3e7-5c15f52a2484: Claiming fa:16:3e:f9:5b:b8 10.100.0.8
Nov 25 08:37:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:02.159 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:5b:b8 10.100.0.8'], port_security=['fa:16:3e:f9:5b:b8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '02e92d65-4521-4c60-bed4-2e8fc4d243e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b2becc8-2b4e-4727-a105-454acfe44b89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '488c6d53000c47848dba6b7be6b4ff40', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f93c15b0-dfea-420d-abd0-8856b9b0a2b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b19baa8b-ade0-4619-a5ed-df7feb0c6cc0, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f3740887-8427-4858-b3e7-5c15f52a2484) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:02.161 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f3740887-8427-4858-b3e7-5c15f52a2484 in datapath 6b2becc8-2b4e-4727-a105-454acfe44b89 bound to our chassis
Nov 25 08:37:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:02.161 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6b2becc8-2b4e-4727-a105-454acfe44b89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 08:37:02 compute-0 NetworkManager[48915]: <info>  [1764059822.1638] device (tapf3740887-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:37:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:02.163 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa7bbbef-1ea4-4b29-a61d-fd60846ca278]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:02 compute-0 NetworkManager[48915]: <info>  [1764059822.1649] device (tapf3740887-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.166 253542 DEBUG oslo_concurrency.lockutils [req-d8dc155f-cef0-40df-ba80-c25cfd85008c req-2b6ca8ef-fc78-474b-9e7f-10d39a5db0df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-02e92d65-4521-4c60-bed4-2e8fc4d243e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.166 253542 DEBUG nova.compute.manager [req-d8dc155f-cef0-40df-ba80-c25cfd85008c req-2b6ca8ef-fc78-474b-9e7f-10d39a5db0df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received event network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.166 253542 DEBUG oslo_concurrency.lockutils [req-d8dc155f-cef0-40df-ba80-c25cfd85008c req-2b6ca8ef-fc78-474b-9e7f-10d39a5db0df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.167 253542 DEBUG oslo_concurrency.lockutils [req-d8dc155f-cef0-40df-ba80-c25cfd85008c req-2b6ca8ef-fc78-474b-9e7f-10d39a5db0df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.167 253542 DEBUG oslo_concurrency.lockutils [req-d8dc155f-cef0-40df-ba80-c25cfd85008c req-2b6ca8ef-fc78-474b-9e7f-10d39a5db0df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.167 253542 DEBUG nova.compute.manager [req-d8dc155f-cef0-40df-ba80-c25cfd85008c req-2b6ca8ef-fc78-474b-9e7f-10d39a5db0df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] No waiting events found dispatching network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.167 253542 WARNING nova.compute.manager [req-d8dc155f-cef0-40df-ba80-c25cfd85008c req-2b6ca8ef-fc78-474b-9e7f-10d39a5db0df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received unexpected event network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b for instance with vm_state stopped and task_state powering-on.
Nov 25 08:37:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:37:02 compute-0 podman[330545]: 2025-11-25 08:37:02.078843063 +0000 UTC m=+0.026326929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f18e613c8ac45577008ebaa35f0301cc68f347c1fc0e923190036866c0e0427/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f18e613c8ac45577008ebaa35f0301cc68f347c1fc0e923190036866c0e0427/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f18e613c8ac45577008ebaa35f0301cc68f347c1fc0e923190036866c0e0427/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f18e613c8ac45577008ebaa35f0301cc68f347c1fc0e923190036866c0e0427/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:37:02 compute-0 ovn_controller[152859]: 2025-11-25T08:37:02Z|00728|binding|INFO|Setting lport f3740887-8427-4858-b3e7-5c15f52a2484 up in Southbound
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.178 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:02 compute-0 ovn_controller[152859]: 2025-11-25T08:37:02Z|00729|binding|INFO|Setting lport f3740887-8427-4858-b3e7-5c15f52a2484 ovn-installed in OVS
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.180 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.185 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:02 compute-0 podman[330545]: 2025-11-25 08:37:02.195716758 +0000 UTC m=+0.143200604 container init 0d8f2af81511206883c31a0dcee85a510a43f94b8e1994a2d12c83a00f44d548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 08:37:02 compute-0 systemd-machined[215790]: New machine qemu-93-instance-0000004e.
Nov 25 08:37:02 compute-0 podman[330545]: 2025-11-25 08:37:02.203192821 +0000 UTC m=+0.150676657 container start 0d8f2af81511206883c31a0dcee85a510a43f94b8e1994a2d12c83a00f44d548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:37:02 compute-0 podman[330545]: 2025-11-25 08:37:02.207099328 +0000 UTC m=+0.154583164 container attach 0d8f2af81511206883c31a0dcee85a510a43f94b8e1994a2d12c83a00f44d548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ramanujan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 08:37:02 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-0000004e.
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.758 253542 DEBUG nova.compute.manager [req-cebd3cf0-36b9-4edd-bdcf-41c94b65a652 req-0a5886f3-f3be-49af-aa62-bab5ec981084 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received event network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.759 253542 DEBUG oslo_concurrency.lockutils [req-cebd3cf0-36b9-4edd-bdcf-41c94b65a652 req-0a5886f3-f3be-49af-aa62-bab5ec981084 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.759 253542 DEBUG oslo_concurrency.lockutils [req-cebd3cf0-36b9-4edd-bdcf-41c94b65a652 req-0a5886f3-f3be-49af-aa62-bab5ec981084 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.759 253542 DEBUG oslo_concurrency.lockutils [req-cebd3cf0-36b9-4edd-bdcf-41c94b65a652 req-0a5886f3-f3be-49af-aa62-bab5ec981084 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.760 253542 DEBUG nova.compute.manager [req-cebd3cf0-36b9-4edd-bdcf-41c94b65a652 req-0a5886f3-f3be-49af-aa62-bab5ec981084 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] No waiting events found dispatching network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:02 compute-0 nova_compute[253538]: 2025-11-25 08:37:02.760 253542 WARNING nova.compute.manager [req-cebd3cf0-36b9-4edd-bdcf-41c94b65a652 req-0a5886f3-f3be-49af-aa62-bab5ec981084 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received unexpected event network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b for instance with vm_state active and task_state None.
Nov 25 08:37:03 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Nov 25 08:37:03 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004d.scope: Consumed 16.769s CPU time.
Nov 25 08:37:03 compute-0 systemd-machined[215790]: Machine qemu-90-instance-0000004d terminated.
Nov 25 08:37:03 compute-0 nova_compute[253538]: 2025-11-25 08:37:03.041 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059823.041366, 02e92d65-4521-4c60-bed4-2e8fc4d243e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:03 compute-0 nova_compute[253538]: 2025-11-25 08:37:03.042 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] VM Started (Lifecycle Event)
Nov 25 08:37:03 compute-0 nova_compute[253538]: 2025-11-25 08:37:03.058 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:03 compute-0 nova_compute[253538]: 2025-11-25 08:37:03.061 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059823.0435016, 02e92d65-4521-4c60-bed4-2e8fc4d243e4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:03 compute-0 nova_compute[253538]: 2025-11-25 08:37:03.062 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] VM Paused (Lifecycle Event)
Nov 25 08:37:03 compute-0 nova_compute[253538]: 2025-11-25 08:37:03.074 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:03 compute-0 nova_compute[253538]: 2025-11-25 08:37:03.077 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:03 compute-0 nova_compute[253538]: 2025-11-25 08:37:03.095 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1621: 321 pgs: 321 active+clean; 656 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.2 MiB/s wr, 257 op/s
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]: {
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:         "osd_id": 1,
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:         "type": "bluestore"
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:     },
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:         "osd_id": 2,
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:         "type": "bluestore"
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:     },
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:         "osd_id": 0,
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:         "type": "bluestore"
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]:     }
Nov 25 08:37:03 compute-0 exciting_ramanujan[330571]: }
Nov 25 08:37:03 compute-0 systemd[1]: libpod-0d8f2af81511206883c31a0dcee85a510a43f94b8e1994a2d12c83a00f44d548.scope: Deactivated successfully.
Nov 25 08:37:03 compute-0 systemd[1]: libpod-0d8f2af81511206883c31a0dcee85a510a43f94b8e1994a2d12c83a00f44d548.scope: Consumed 1.149s CPU time.
Nov 25 08:37:03 compute-0 podman[330545]: 2025-11-25 08:37:03.408287643 +0000 UTC m=+1.355771479 container died 0d8f2af81511206883c31a0dcee85a510a43f94b8e1994a2d12c83a00f44d548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ramanujan, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:37:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f18e613c8ac45577008ebaa35f0301cc68f347c1fc0e923190036866c0e0427-merged.mount: Deactivated successfully.
Nov 25 08:37:03 compute-0 podman[330545]: 2025-11-25 08:37:03.466430516 +0000 UTC m=+1.413914352 container remove 0d8f2af81511206883c31a0dcee85a510a43f94b8e1994a2d12c83a00f44d548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:37:03 compute-0 systemd[1]: libpod-conmon-0d8f2af81511206883c31a0dcee85a510a43f94b8e1994a2d12c83a00f44d548.scope: Deactivated successfully.
Nov 25 08:37:03 compute-0 sudo[330341]: pam_unix(sudo:session): session closed for user root
Nov 25 08:37:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:37:03 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:37:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:37:03 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 010d33f3-c2ff-4798-94d4-467f2550f081 does not exist
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 65f59dae-2094-42cd-9dbd-5254a7bd4593 does not exist
Nov 25 08:37:03 compute-0 sudo[330670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:37:03 compute-0 sudo[330670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:37:03 compute-0 sudo[330670]: pam_unix(sudo:session): session closed for user root
Nov 25 08:37:03 compute-0 sudo[330695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:37:03 compute-0 sudo[330695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:37:03 compute-0 sudo[330695]: pam_unix(sudo:session): session closed for user root
Nov 25 08:37:03 compute-0 nova_compute[253538]: 2025-11-25 08:37:03.652 253542 INFO nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Instance shutdown successfully after 24 seconds.
Nov 25 08:37:03 compute-0 nova_compute[253538]: 2025-11-25 08:37:03.657 253542 INFO nova.virt.libvirt.driver [-] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Instance destroyed successfully.
Nov 25 08:37:03 compute-0 nova_compute[253538]: 2025-11-25 08:37:03.662 253542 INFO nova.virt.libvirt.driver [-] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Instance destroyed successfully.
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005244897990330143 of space, bias 1.0, pg target 1.5734693970990428 quantized to 32 (current 32)
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.30262080199583247 quantized to 32 (current 32)
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:37:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 25 08:37:04 compute-0 nova_compute[253538]: 2025-11-25 08:37:04.161 253542 INFO nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Deleting instance files /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467_del
Nov 25 08:37:04 compute-0 nova_compute[253538]: 2025-11-25 08:37:04.161 253542 INFO nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Deletion of /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467_del complete
Nov 25 08:37:04 compute-0 nova_compute[253538]: 2025-11-25 08:37:04.498 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:37:04 compute-0 nova_compute[253538]: 2025-11-25 08:37:04.499 253542 INFO nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Creating image(s)
Nov 25 08:37:04 compute-0 ceph-mon[75015]: pgmap v1621: 321 pgs: 321 active+clean; 656 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.2 MiB/s wr, 257 op/s
Nov 25 08:37:04 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:37:04 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:37:04 compute-0 nova_compute[253538]: 2025-11-25 08:37:04.520 253542 DEBUG nova.storage.rbd_utils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:04 compute-0 nova_compute[253538]: 2025-11-25 08:37:04.545 253542 DEBUG nova.storage.rbd_utils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:04 compute-0 nova_compute[253538]: 2025-11-25 08:37:04.570 253542 DEBUG nova.storage.rbd_utils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:04 compute-0 nova_compute[253538]: 2025-11-25 08:37:04.573 253542 DEBUG oslo_concurrency.processutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:04 compute-0 nova_compute[253538]: 2025-11-25 08:37:04.644 253542 DEBUG oslo_concurrency.processutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:04 compute-0 nova_compute[253538]: 2025-11-25 08:37:04.645 253542 DEBUG oslo_concurrency.lockutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:04 compute-0 nova_compute[253538]: 2025-11-25 08:37:04.645 253542 DEBUG oslo_concurrency.lockutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:04 compute-0 nova_compute[253538]: 2025-11-25 08:37:04.645 253542 DEBUG oslo_concurrency.lockutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:04 compute-0 nova_compute[253538]: 2025-11-25 08:37:04.675 253542 DEBUG nova.storage.rbd_utils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:04 compute-0 nova_compute[253538]: 2025-11-25 08:37:04.678 253542 DEBUG oslo_concurrency.processutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.170 253542 DEBUG oslo_concurrency.processutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.232 253542 DEBUG nova.storage.rbd_utils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] resizing rbd image 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:37:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 321 active+clean; 614 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.3 MiB/s wr, 226 op/s
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.351 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.352 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Ensure instance console log exists: /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.352 253542 DEBUG oslo_concurrency.lockutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.353 253542 DEBUG oslo_concurrency.lockutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.353 253542 DEBUG oslo_concurrency.lockutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.355 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.360 253542 WARNING nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.366 253542 DEBUG nova.virt.libvirt.host [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.367 253542 DEBUG nova.virt.libvirt.host [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.371 253542 DEBUG nova.virt.libvirt.host [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.372 253542 DEBUG nova.virt.libvirt.host [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.372 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.372 253542 DEBUG nova.virt.hardware [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.373 253542 DEBUG nova.virt.hardware [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.373 253542 DEBUG nova.virt.hardware [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.374 253542 DEBUG nova.virt.hardware [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.374 253542 DEBUG nova.virt.hardware [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.374 253542 DEBUG nova.virt.hardware [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.374 253542 DEBUG nova.virt.hardware [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.375 253542 DEBUG nova.virt.hardware [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.375 253542 DEBUG nova.virt.hardware [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.375 253542 DEBUG nova.virt.hardware [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.376 253542 DEBUG nova.virt.hardware [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.376 253542 DEBUG nova.objects.instance [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lazy-loading 'vcpu_model' on Instance uuid 89bd5b48-efcd-45aa-98f5-e9d9f8373467 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.395 253542 DEBUG oslo_concurrency.processutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:37:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4258124961' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.904 253542 DEBUG oslo_concurrency.processutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.930 253542 DEBUG nova.storage.rbd_utils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:05 compute-0 nova_compute[253538]: 2025-11-25 08:37:05.935 253542 DEBUG oslo_concurrency.processutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.112 253542 DEBUG nova.compute.manager [req-a51027e7-ba6e-4979-8599-b72102ea9a55 req-560a6593-77b8-45f7-b6fc-2955493c4366 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.113 253542 DEBUG oslo_concurrency.lockutils [req-a51027e7-ba6e-4979-8599-b72102ea9a55 req-560a6593-77b8-45f7-b6fc-2955493c4366 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.113 253542 DEBUG oslo_concurrency.lockutils [req-a51027e7-ba6e-4979-8599-b72102ea9a55 req-560a6593-77b8-45f7-b6fc-2955493c4366 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.114 253542 DEBUG oslo_concurrency.lockutils [req-a51027e7-ba6e-4979-8599-b72102ea9a55 req-560a6593-77b8-45f7-b6fc-2955493c4366 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.114 253542 DEBUG nova.compute.manager [req-a51027e7-ba6e-4979-8599-b72102ea9a55 req-560a6593-77b8-45f7-b6fc-2955493c4366 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Processing event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.115 253542 DEBUG nova.compute.manager [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.129 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059826.1185384, 02e92d65-4521-4c60-bed4-2e8fc4d243e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.130 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] VM Resumed (Lifecycle Event)
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.134 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.139 253542 INFO nova.virt.libvirt.driver [-] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Instance spawned successfully.
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.139 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.154 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.158 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.159 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.159 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.159 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.160 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.160 253542 DEBUG nova.virt.libvirt.driver [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.163 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.190 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.233 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.260 253542 INFO nova.compute.manager [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Took 9.30 seconds to spawn the instance on the hypervisor.
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.262 253542 DEBUG nova.compute.manager [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:37:06 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2670582271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.377 253542 DEBUG oslo_concurrency.processutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.381 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:37:06 compute-0 nova_compute[253538]:   <uuid>89bd5b48-efcd-45aa-98f5-e9d9f8373467</uuid>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   <name>instance-0000004d</name>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerShowV247Test-server-170954124</nova:name>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:37:05</nova:creationTime>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:37:06 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:37:06 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:37:06 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:37:06 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:37:06 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:37:06 compute-0 nova_compute[253538]:         <nova:user uuid="6e7ed13625c1463da308fbfab28c4541">tempest-ServerShowV247Test-1492743735-project-member</nova:user>
Nov 25 08:37:06 compute-0 nova_compute[253538]:         <nova:project uuid="504c5057c1894355afeafd57cd62b7ab">tempest-ServerShowV247Test-1492743735</nova:project>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <system>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <entry name="serial">89bd5b48-efcd-45aa-98f5-e9d9f8373467</entry>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <entry name="uuid">89bd5b48-efcd-45aa-98f5-e9d9f8373467</entry>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     </system>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   <os>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   </os>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   <features>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   </features>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk">
Nov 25 08:37:06 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       </source>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:37:06 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk.config">
Nov 25 08:37:06 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       </source>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:37:06 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467/console.log" append="off"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <video>
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     </video>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:37:06 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:37:06 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:37:06 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:37:06 compute-0 nova_compute[253538]: </domain>
Nov 25 08:37:06 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.383 253542 INFO nova.compute.manager [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Took 10.46 seconds to build instance.
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.422 253542 DEBUG oslo_concurrency.lockutils [None req-92fa137d-49cb-467f-9317-3d6b0dbbe092 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.435 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.435 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.436 253542 INFO nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Using config drive
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.469 253542 DEBUG nova.storage.rbd_utils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.490 253542 DEBUG nova.objects.instance [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lazy-loading 'ec2_ids' on Instance uuid 89bd5b48-efcd-45aa-98f5-e9d9f8373467 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:06 compute-0 ceph-mon[75015]: pgmap v1622: 321 pgs: 321 active+clean; 614 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.3 MiB/s wr, 226 op/s
Nov 25 08:37:06 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4258124961' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:06 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2670582271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.528 253542 DEBUG nova.objects.instance [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lazy-loading 'keypairs' on Instance uuid 89bd5b48-efcd-45aa-98f5-e9d9f8373467 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:06 compute-0 nova_compute[253538]: 2025-11-25 08:37:06.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:37:06 compute-0 sshd-session[330986]: Invalid user hms from 193.32.162.151 port 36332
Nov 25 08:37:07 compute-0 sshd-session[330986]: Connection closed by invalid user hms 193.32.162.151 port 36332 [preauth]
Nov 25 08:37:07 compute-0 nova_compute[253538]: 2025-11-25 08:37:07.185 253542 INFO nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Creating config drive at /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467/disk.config
Nov 25 08:37:07 compute-0 nova_compute[253538]: 2025-11-25 08:37:07.193 253542 DEBUG oslo_concurrency.processutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnyhabhe5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1623: 321 pgs: 321 active+clean; 610 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.5 MiB/s wr, 232 op/s
Nov 25 08:37:07 compute-0 nova_compute[253538]: 2025-11-25 08:37:07.338 253542 DEBUG oslo_concurrency.processutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnyhabhe5" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:07 compute-0 nova_compute[253538]: 2025-11-25 08:37:07.380 253542 DEBUG nova.storage.rbd_utils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] rbd image 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:07 compute-0 nova_compute[253538]: 2025-11-25 08:37:07.386 253542 DEBUG oslo_concurrency.processutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467/disk.config 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:07 compute-0 nova_compute[253538]: 2025-11-25 08:37:07.550 253542 DEBUG oslo_concurrency.processutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467/disk.config 89bd5b48-efcd-45aa-98f5-e9d9f8373467_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:07 compute-0 nova_compute[253538]: 2025-11-25 08:37:07.552 253542 INFO nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Deleting local config drive /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467/disk.config because it was imported into RBD.
Nov 25 08:37:07 compute-0 systemd-machined[215790]: New machine qemu-94-instance-0000004d.
Nov 25 08:37:07 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-0000004d.
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.024 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 89bd5b48-efcd-45aa-98f5-e9d9f8373467 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.025 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059828.0236347, 89bd5b48-efcd-45aa-98f5-e9d9f8373467 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.025 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] VM Resumed (Lifecycle Event)
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.029 253542 DEBUG nova.compute.manager [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.030 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.034 253542 INFO nova.virt.libvirt.driver [-] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Instance spawned successfully.
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.035 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.043 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.045 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.056 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.056 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.056 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.057 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.057 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.058 253542 DEBUG nova.virt.libvirt.driver [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.067 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.067 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059828.0255115, 89bd5b48-efcd-45aa-98f5-e9d9f8373467 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.067 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] VM Started (Lifecycle Event)
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.096 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.099 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.124 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.165 253542 DEBUG nova.compute.manager [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.279 253542 DEBUG oslo_concurrency.lockutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.280 253542 DEBUG oslo_concurrency.lockutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.280 253542 DEBUG nova.objects.instance [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:37:08 compute-0 nova_compute[253538]: 2025-11-25 08:37:08.349 253542 DEBUG oslo_concurrency.lockutils [None req-943caa4a-791f-4455-8ed3-a9f0538e21ac 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:08 compute-0 ceph-mon[75015]: pgmap v1623: 321 pgs: 321 active+clean; 610 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.5 MiB/s wr, 232 op/s
Nov 25 08:37:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 321 active+clean; 596 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.7 MiB/s wr, 241 op/s
Nov 25 08:37:09 compute-0 ceph-mon[75015]: pgmap v1624: 321 pgs: 321 active+clean; 596 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.7 MiB/s wr, 241 op/s
Nov 25 08:37:10 compute-0 nova_compute[253538]: 2025-11-25 08:37:10.414 253542 DEBUG nova.compute.manager [req-6cb37a08-1f8d-4b72-9918-63b6fd819b9c req-e6f6909a-645f-4856-a901-9e433de0587d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:10 compute-0 nova_compute[253538]: 2025-11-25 08:37:10.415 253542 DEBUG oslo_concurrency.lockutils [req-6cb37a08-1f8d-4b72-9918-63b6fd819b9c req-e6f6909a-645f-4856-a901-9e433de0587d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:10 compute-0 nova_compute[253538]: 2025-11-25 08:37:10.415 253542 DEBUG oslo_concurrency.lockutils [req-6cb37a08-1f8d-4b72-9918-63b6fd819b9c req-e6f6909a-645f-4856-a901-9e433de0587d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:10 compute-0 nova_compute[253538]: 2025-11-25 08:37:10.416 253542 DEBUG oslo_concurrency.lockutils [req-6cb37a08-1f8d-4b72-9918-63b6fd819b9c req-e6f6909a-645f-4856-a901-9e433de0587d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:10 compute-0 nova_compute[253538]: 2025-11-25 08:37:10.416 253542 DEBUG nova.compute.manager [req-6cb37a08-1f8d-4b72-9918-63b6fd819b9c req-e6f6909a-645f-4856-a901-9e433de0587d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] No waiting events found dispatching network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:10 compute-0 nova_compute[253538]: 2025-11-25 08:37:10.417 253542 WARNING nova.compute.manager [req-6cb37a08-1f8d-4b72-9918-63b6fd819b9c req-e6f6909a-645f-4856-a901-9e433de0587d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received unexpected event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 for instance with vm_state active and task_state None.
Nov 25 08:37:10 compute-0 nova_compute[253538]: 2025-11-25 08:37:10.784 253542 INFO nova.compute.manager [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Rescuing
Nov 25 08:37:10 compute-0 nova_compute[253538]: 2025-11-25 08:37:10.785 253542 DEBUG oslo_concurrency.lockutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "refresh_cache-02e92d65-4521-4c60-bed4-2e8fc4d243e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:37:10 compute-0 nova_compute[253538]: 2025-11-25 08:37:10.785 253542 DEBUG oslo_concurrency.lockutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquired lock "refresh_cache-02e92d65-4521-4c60-bed4-2e8fc4d243e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:37:10 compute-0 nova_compute[253538]: 2025-11-25 08:37:10.786 253542 DEBUG nova.network.neutron [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:37:11 compute-0 nova_compute[253538]: 2025-11-25 08:37:11.235 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1625: 321 pgs: 321 active+clean; 623 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.7 MiB/s wr, 321 op/s
Nov 25 08:37:11 compute-0 nova_compute[253538]: 2025-11-25 08:37:11.541 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:11 compute-0 nova_compute[253538]: 2025-11-25 08:37:11.666 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Acquiring lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:11 compute-0 nova_compute[253538]: 2025-11-25 08:37:11.667 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:11 compute-0 nova_compute[253538]: 2025-11-25 08:37:11.686 253542 DEBUG nova.compute.manager [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:37:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:37:11 compute-0 nova_compute[253538]: 2025-11-25 08:37:11.784 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:11 compute-0 nova_compute[253538]: 2025-11-25 08:37:11.785 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:11 compute-0 nova_compute[253538]: 2025-11-25 08:37:11.792 253542 DEBUG nova.virt.hardware [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:37:11 compute-0 nova_compute[253538]: 2025-11-25 08:37:11.792 253542 INFO nova.compute.claims [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:37:11 compute-0 podman[331085]: 2025-11-25 08:37:11.901054193 +0000 UTC m=+0.137166629 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 08:37:11 compute-0 nova_compute[253538]: 2025-11-25 08:37:11.937 253542 DEBUG nova.scheduler.client.report [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 08:37:11 compute-0 nova_compute[253538]: 2025-11-25 08:37:11.958 253542 DEBUG nova.scheduler.client.report [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 08:37:11 compute-0 nova_compute[253538]: 2025-11-25 08:37:11.958 253542 DEBUG nova.compute.provider_tree [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 08:37:11 compute-0 nova_compute[253538]: 2025-11-25 08:37:11.979 253542 DEBUG nova.scheduler.client.report [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.007 253542 DEBUG nova.scheduler.client.report [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.119 253542 DEBUG oslo_concurrency.lockutils [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "89bd5b48-efcd-45aa-98f5-e9d9f8373467" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.119 253542 DEBUG oslo_concurrency.lockutils [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "89bd5b48-efcd-45aa-98f5-e9d9f8373467" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.119 253542 DEBUG oslo_concurrency.lockutils [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "89bd5b48-efcd-45aa-98f5-e9d9f8373467-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.120 253542 DEBUG oslo_concurrency.lockutils [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "89bd5b48-efcd-45aa-98f5-e9d9f8373467-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.120 253542 DEBUG oslo_concurrency.lockutils [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "89bd5b48-efcd-45aa-98f5-e9d9f8373467-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.121 253542 INFO nova.compute.manager [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Terminating instance
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.123 253542 DEBUG oslo_concurrency.lockutils [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "refresh_cache-89bd5b48-efcd-45aa-98f5-e9d9f8373467" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.123 253542 DEBUG oslo_concurrency.lockutils [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquired lock "refresh_cache-89bd5b48-efcd-45aa-98f5-e9d9f8373467" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.123 253542 DEBUG nova.network.neutron [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.184 253542 DEBUG oslo_concurrency.processutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.222 253542 DEBUG nova.network.neutron [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Updating instance_info_cache with network_info: [{"id": "f3740887-8427-4858-b3e7-5c15f52a2484", "address": "fa:16:3e:f9:5b:b8", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3740887-84", "ovs_interfaceid": "f3740887-8427-4858-b3e7-5c15f52a2484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.250 253542 DEBUG nova.network.neutron [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.342 253542 DEBUG oslo_concurrency.lockutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Releasing lock "refresh_cache-02e92d65-4521-4c60-bed4-2e8fc4d243e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:37:12 compute-0 ceph-mon[75015]: pgmap v1625: 321 pgs: 321 active+clean; 623 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.7 MiB/s wr, 321 op/s
Nov 25 08:37:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:37:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3534382050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.649 253542 DEBUG nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.662 253542 DEBUG oslo_concurrency.processutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.667 253542 DEBUG nova.compute.provider_tree [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.687 253542 DEBUG nova.scheduler.client.report [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.759 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.760 253542 DEBUG nova.compute.manager [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.864 253542 DEBUG nova.compute.manager [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.865 253542 DEBUG nova.network.neutron [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.918 253542 INFO nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:37:12 compute-0 nova_compute[253538]: 2025-11-25 08:37:12.940 253542 DEBUG nova.compute.manager [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.009 253542 DEBUG nova.network.neutron [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.020 253542 DEBUG oslo_concurrency.lockutils [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Releasing lock "refresh_cache-89bd5b48-efcd-45aa-98f5-e9d9f8373467" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.020 253542 DEBUG nova.compute.manager [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.041 253542 DEBUG nova.policy [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aa221a12ceb248cbac90a621af09d7fb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9c8abf549f8d47eba559c32e8ed0679c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.098 253542 DEBUG nova.compute.manager [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.102 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.103 253542 INFO nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Creating image(s)
Nov 25 08:37:13 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Nov 25 08:37:13 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d0000004d.scope: Consumed 5.382s CPU time.
Nov 25 08:37:13 compute-0 systemd-machined[215790]: Machine qemu-94-instance-0000004d terminated.
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.141 253542 DEBUG nova.storage.rbd_utils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] rbd image a263c70f-c8ce-4ffc-bd62-595fc2e31593_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.165 253542 DEBUG nova.storage.rbd_utils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] rbd image a263c70f-c8ce-4ffc-bd62-595fc2e31593_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.191 253542 DEBUG nova.storage.rbd_utils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] rbd image a263c70f-c8ce-4ffc-bd62-595fc2e31593_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.196 253542 DEBUG oslo_concurrency.processutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.247 253542 INFO nova.virt.libvirt.driver [-] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Instance destroyed successfully.
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.248 253542 DEBUG nova.objects.instance [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lazy-loading 'resources' on Instance uuid 89bd5b48-efcd-45aa-98f5-e9d9f8373467 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.281 253542 DEBUG oslo_concurrency.processutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.281 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.282 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.282 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.305 253542 DEBUG nova.storage.rbd_utils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] rbd image a263c70f-c8ce-4ffc-bd62-595fc2e31593_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.308 253542 DEBUG oslo_concurrency.processutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a263c70f-c8ce-4ffc-bd62-595fc2e31593_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 625 MiB data, 866 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 3.1 MiB/s wr, 359 op/s
Nov 25 08:37:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3534382050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.612 253542 DEBUG oslo_concurrency.lockutils [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "04d190be-1443-48a9-ad51-3625b65dff6c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.613 253542 DEBUG oslo_concurrency.lockutils [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "04d190be-1443-48a9-ad51-3625b65dff6c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.613 253542 DEBUG oslo_concurrency.lockutils [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "04d190be-1443-48a9-ad51-3625b65dff6c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.614 253542 DEBUG oslo_concurrency.lockutils [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "04d190be-1443-48a9-ad51-3625b65dff6c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.614 253542 DEBUG oslo_concurrency.lockutils [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "04d190be-1443-48a9-ad51-3625b65dff6c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.616 253542 INFO nova.compute.manager [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Terminating instance
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.617 253542 DEBUG nova.compute.manager [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:37:13 compute-0 kernel: tap62ab60cc-67 (unregistering): left promiscuous mode
Nov 25 08:37:13 compute-0 NetworkManager[48915]: <info>  [1764059833.6850] device (tap62ab60cc-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.688 253542 DEBUG oslo_concurrency.processutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a263c70f-c8ce-4ffc-bd62-595fc2e31593_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:13 compute-0 ovn_controller[152859]: 2025-11-25T08:37:13Z|00730|binding|INFO|Releasing lport 62ab60cc-67a7-4f48-a964-8684f3731d02 from this chassis (sb_readonly=0)
Nov 25 08:37:13 compute-0 ovn_controller[152859]: 2025-11-25T08:37:13Z|00731|binding|INFO|Setting lport 62ab60cc-67a7-4f48-a964-8684f3731d02 down in Southbound
Nov 25 08:37:13 compute-0 ovn_controller[152859]: 2025-11-25T08:37:13Z|00732|binding|INFO|Removing iface tap62ab60cc-67 ovn-installed in OVS
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.737 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:13 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Nov 25 08:37:13 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004a.scope: Consumed 15.151s CPU time.
Nov 25 08:37:13 compute-0 systemd-machined[215790]: Machine qemu-87-instance-0000004a terminated.
Nov 25 08:37:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:13.784 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:f3:02 10.100.0.12'], port_security=['fa:16:3e:84:f3:02 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '04d190be-1443-48a9-ad51-3625b65dff6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa56d31750374b64b67d1be19bb4e989', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8d5f0080-1d0d-4c90-8c84-7d1437ba109d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec4cd11-f7ee-427b-8b83-e8d21ceafdd1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=62ab60cc-67a7-4f48-a964-8684f3731d02) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:13.786 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 62ab60cc-67a7-4f48-a964-8684f3731d02 in datapath 30bd3cca-f71b-4541-ae95-8d0eb4dfe470 unbound from our chassis
Nov 25 08:37:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:13.791 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30bd3cca-f71b-4541-ae95-8d0eb4dfe470
Nov 25 08:37:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:13.809 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f40bd3-897e-4586-9d82-99a1c76380b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.811 253542 DEBUG nova.storage.rbd_utils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] resizing rbd image a263c70f-c8ce-4ffc-bd62-595fc2e31593_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:37:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:13.844 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1778e4eb-ec16-4def-b586-0ecbb33fc6ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:13.850 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca76573-c00b-4c76-b78e-3913a384dc48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.862 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.871 253542 INFO nova.virt.libvirt.driver [-] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Instance destroyed successfully.
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.872 253542 DEBUG nova.objects.instance [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'resources' on Instance uuid 04d190be-1443-48a9-ad51-3625b65dff6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.885 253542 DEBUG nova.virt.libvirt.vif [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-488069752',display_name='tempest-ListServerFiltersTestJSON-instance-488069752',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-488069752',id=74,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:36:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa56d31750374b64b67d1be19bb4e989',ramdisk_id='',reservation_id='r-4ha8394r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1878680398',owner_user_name='tempest-ListServerFiltersTestJSON-1878680398-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:36:30Z,user_data=None,user_id='ee3a6261ded642fa9ef617b29b026d86',uuid=04d190be-1443-48a9-ad51-3625b65dff6c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62ab60cc-67a7-4f48-a964-8684f3731d02", "address": "fa:16:3e:84:f3:02", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62ab60cc-67", "ovs_interfaceid": "62ab60cc-67a7-4f48-a964-8684f3731d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.886 253542 DEBUG nova.network.os_vif_util [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converting VIF {"id": "62ab60cc-67a7-4f48-a964-8684f3731d02", "address": "fa:16:3e:84:f3:02", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62ab60cc-67", "ovs_interfaceid": "62ab60cc-67a7-4f48-a964-8684f3731d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.887 253542 DEBUG nova.network.os_vif_util [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:f3:02,bridge_name='br-int',has_traffic_filtering=True,id=62ab60cc-67a7-4f48-a964-8684f3731d02,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62ab60cc-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.887 253542 DEBUG os_vif [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:f3:02,bridge_name='br-int',has_traffic_filtering=True,id=62ab60cc-67a7-4f48-a964-8684f3731d02,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62ab60cc-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.890 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.891 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62ab60cc-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.892 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.895 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:37:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:13.898 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[462e99c6-2b65-494d-a413-9f54f012646b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.905 253542 INFO nova.virt.libvirt.driver [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Deleting instance files /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467_del
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.906 253542 INFO nova.virt.libvirt.driver [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Deletion of /var/lib/nova/instances/89bd5b48-efcd-45aa-98f5-e9d9f8373467_del complete
Nov 25 08:37:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:13.919 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[74cd915e-b664-44b2-bc5f-55ae96a4410a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30bd3cca-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:18:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 13, 'rx_bytes': 742, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 13, 'rx_bytes': 742, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514256, 'reachable_time': 39357, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331312, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:13.933 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce59d7e-5cef-48ba-a588-58a878ff012b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap30bd3cca-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514276, 'tstamp': 514276}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331314, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap30bd3cca-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514280, 'tstamp': 514280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331314, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:13.935 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30bd3cca-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:13.937 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30bd3cca-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:13.938 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:37:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:13.938 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30bd3cca-f0, col_values=(('external_ids', {'iface-id': 'fb637403-e23d-4de9-9c54-1f8015bf829f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:13.938 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.957 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.958 253542 INFO os_vif [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:f3:02,bridge_name='br-int',has_traffic_filtering=True,id=62ab60cc-67a7-4f48-a964-8684f3731d02,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62ab60cc-67')
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.975 253542 DEBUG nova.objects.instance [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lazy-loading 'migration_context' on Instance uuid a263c70f-c8ce-4ffc-bd62-595fc2e31593 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.987 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.987 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Ensure instance console log exists: /var/lib/nova/instances/a263c70f-c8ce-4ffc-bd62-595fc2e31593/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.988 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.988 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:13 compute-0 nova_compute[253538]: 2025-11-25 08:37:13.988 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.049 253542 INFO nova.compute.manager [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Took 1.03 seconds to destroy the instance on the hypervisor.
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.050 253542 DEBUG oslo.service.loopingcall [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.050 253542 DEBUG nova.compute.manager [-] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.050 253542 DEBUG nova.network.neutron [-] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.086 253542 DEBUG nova.network.neutron [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Successfully created port: a48f5b09-487c-4713-a697-b97ef4fc6497 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.180 253542 DEBUG nova.network.neutron [-] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.200 253542 DEBUG nova.network.neutron [-] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.215 253542 INFO nova.compute.manager [-] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Took 0.17 seconds to deallocate network for instance.
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.307 253542 INFO nova.virt.libvirt.driver [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Deleting instance files /var/lib/nova/instances/04d190be-1443-48a9-ad51-3625b65dff6c_del
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.308 253542 INFO nova.virt.libvirt.driver [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Deletion of /var/lib/nova/instances/04d190be-1443-48a9-ad51-3625b65dff6c_del complete
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.374 253542 DEBUG oslo_concurrency.lockutils [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.375 253542 DEBUG oslo_concurrency.lockutils [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.393 253542 INFO nova.compute.manager [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.393 253542 DEBUG oslo.service.loopingcall [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.394 253542 DEBUG nova.compute.manager [-] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.394 253542 DEBUG nova.network.neutron [-] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:37:14 compute-0 ceph-mon[75015]: pgmap v1626: 321 pgs: 321 active+clean; 625 MiB data, 866 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 3.1 MiB/s wr, 359 op/s
Nov 25 08:37:14 compute-0 nova_compute[253538]: 2025-11-25 08:37:14.765 253542 DEBUG oslo_concurrency.processutils [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:37:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1645153108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.211 253542 DEBUG oslo_concurrency.processutils [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.218 253542 DEBUG nova.compute.provider_tree [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.231 253542 DEBUG nova.scheduler.client.report [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.279 253542 DEBUG oslo_concurrency.lockutils [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1627: 321 pgs: 321 active+clean; 576 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.2 MiB/s wr, 354 op/s
Nov 25 08:37:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1645153108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.449 253542 INFO nova.scheduler.client.report [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Deleted allocations for instance 89bd5b48-efcd-45aa-98f5-e9d9f8373467
Nov 25 08:37:15 compute-0 ovn_controller[152859]: 2025-11-25T08:37:15Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:57:05 10.100.0.3
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.524 253542 DEBUG nova.network.neutron [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Successfully updated port: a48f5b09-487c-4713-a697-b97ef4fc6497 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.575 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.575 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.575 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.599 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.599 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.624 253542 DEBUG nova.compute.manager [req-b6309338-0897-4fe9-9f59-96524623b7e1 req-f2f17c70-bd0f-4279-885b-39ac02ef0171 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Received event network-vif-deleted-62ab60cc-67a7-4f48-a964-8684f3731d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.624 253542 INFO nova.compute.manager [req-b6309338-0897-4fe9-9f59-96524623b7e1 req-f2f17c70-bd0f-4279-885b-39ac02ef0171 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Neutron deleted interface 62ab60cc-67a7-4f48-a964-8684f3731d02; detaching it from the instance and deleting it from the info cache
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.624 253542 DEBUG nova.network.neutron [req-b6309338-0897-4fe9-9f59-96524623b7e1 req-f2f17c70-bd0f-4279-885b-39ac02ef0171 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.629 253542 DEBUG oslo_concurrency.lockutils [None req-2818d6ea-b900-449a-b210-c7cf3699528b 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "89bd5b48-efcd-45aa-98f5-e9d9f8373467" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.631 253542 DEBUG nova.compute.manager [req-1397d090-38dd-4513-9118-73ba7a0b511b req-39626e08-02d4-49d3-a1a2-f23e17a55e80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Received event network-changed-a48f5b09-487c-4713-a697-b97ef4fc6497 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.631 253542 DEBUG nova.compute.manager [req-1397d090-38dd-4513-9118-73ba7a0b511b req-39626e08-02d4-49d3-a1a2-f23e17a55e80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Refreshing instance network info cache due to event network-changed-a48f5b09-487c-4713-a697-b97ef4fc6497. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.631 253542 DEBUG oslo_concurrency.lockutils [req-1397d090-38dd-4513-9118-73ba7a0b511b req-39626e08-02d4-49d3-a1a2-f23e17a55e80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a263c70f-c8ce-4ffc-bd62-595fc2e31593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.631 253542 DEBUG oslo_concurrency.lockutils [req-1397d090-38dd-4513-9118-73ba7a0b511b req-39626e08-02d4-49d3-a1a2-f23e17a55e80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a263c70f-c8ce-4ffc-bd62-595fc2e31593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.631 253542 DEBUG nova.network.neutron [req-1397d090-38dd-4513-9118-73ba7a0b511b req-39626e08-02d4-49d3-a1a2-f23e17a55e80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Refreshing network info cache for port a48f5b09-487c-4713-a697-b97ef4fc6497 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.654 253542 DEBUG nova.network.neutron [-] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.655 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Acquiring lock "refresh_cache-a263c70f-c8ce-4ffc-bd62-595fc2e31593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.673 253542 INFO nova.compute.manager [-] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Took 1.28 seconds to deallocate network for instance.
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.681 253542 DEBUG nova.compute.manager [req-b6309338-0897-4fe9-9f59-96524623b7e1 req-f2f17c70-bd0f-4279-885b-39ac02ef0171 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Detach interface failed, port_id=62ab60cc-67a7-4f48-a964-8684f3731d02, reason: Instance 04d190be-1443-48a9-ad51-3625b65dff6c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.784 253542 DEBUG oslo_concurrency.lockutils [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:15 compute-0 nova_compute[253538]: 2025-11-25 08:37:15.784 253542 DEBUG oslo_concurrency.lockutils [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:16 compute-0 nova_compute[253538]: 2025-11-25 08:37:16.451 253542 DEBUG oslo_concurrency.processutils [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:16 compute-0 ceph-mon[75015]: pgmap v1627: 321 pgs: 321 active+clean; 576 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.2 MiB/s wr, 354 op/s
Nov 25 08:37:16 compute-0 nova_compute[253538]: 2025-11-25 08:37:16.502 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:37:16 compute-0 nova_compute[253538]: 2025-11-25 08:37:16.502 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:37:16 compute-0 nova_compute[253538]: 2025-11-25 08:37:16.503 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:37:16 compute-0 nova_compute[253538]: 2025-11-25 08:37:16.503 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:16 compute-0 nova_compute[253538]: 2025-11-25 08:37:16.507 253542 DEBUG nova.network.neutron [req-1397d090-38dd-4513-9118-73ba7a0b511b req-39626e08-02d4-49d3-a1a2-f23e17a55e80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:37:16 compute-0 nova_compute[253538]: 2025-11-25 08:37:16.547 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:37:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:37:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4287753250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:16 compute-0 nova_compute[253538]: 2025-11-25 08:37:16.903 253542 DEBUG oslo_concurrency.processutils [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:16 compute-0 nova_compute[253538]: 2025-11-25 08:37:16.910 253542 DEBUG nova.compute.provider_tree [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:37:16 compute-0 nova_compute[253538]: 2025-11-25 08:37:16.923 253542 DEBUG nova.scheduler.client.report [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:37:16 compute-0 nova_compute[253538]: 2025-11-25 08:37:16.955 253542 DEBUG oslo_concurrency.lockutils [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:16 compute-0 nova_compute[253538]: 2025-11-25 08:37:16.986 253542 INFO nova.scheduler.client.report [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Deleted allocations for instance 04d190be-1443-48a9-ad51-3625b65dff6c
Nov 25 08:37:16 compute-0 nova_compute[253538]: 2025-11-25 08:37:16.998 253542 DEBUG nova.network.neutron [req-1397d090-38dd-4513-9118-73ba7a0b511b req-39626e08-02d4-49d3-a1a2-f23e17a55e80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.031 253542 DEBUG oslo_concurrency.lockutils [req-1397d090-38dd-4513-9118-73ba7a0b511b req-39626e08-02d4-49d3-a1a2-f23e17a55e80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a263c70f-c8ce-4ffc-bd62-595fc2e31593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.032 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Acquired lock "refresh_cache-a263c70f-c8ce-4ffc-bd62-595fc2e31593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.032 253542 DEBUG nova.network.neutron [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.072 253542 DEBUG oslo_concurrency.lockutils [None req-5baafb24-4bbd-4870-b7e1-1a96e344646a ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "04d190be-1443-48a9-ad51-3625b65dff6c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.237 253542 DEBUG nova.network.neutron [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.321 253542 DEBUG oslo_concurrency.lockutils [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "806e081d-6b1a-4909-be7c-5490c631ebfe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.321 253542 DEBUG oslo_concurrency.lockutils [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "806e081d-6b1a-4909-be7c-5490c631ebfe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.322 253542 DEBUG oslo_concurrency.lockutils [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "806e081d-6b1a-4909-be7c-5490c631ebfe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.322 253542 DEBUG oslo_concurrency.lockutils [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "806e081d-6b1a-4909-be7c-5490c631ebfe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.322 253542 DEBUG oslo_concurrency.lockutils [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "806e081d-6b1a-4909-be7c-5490c631ebfe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.323 253542 INFO nova.compute.manager [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Terminating instance
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.325 253542 DEBUG oslo_concurrency.lockutils [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "refresh_cache-806e081d-6b1a-4909-be7c-5490c631ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.325 253542 DEBUG oslo_concurrency.lockutils [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquired lock "refresh_cache-806e081d-6b1a-4909-be7c-5490c631ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.325 253542 DEBUG nova.network.neutron [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:37:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 563 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.0 MiB/s wr, 352 op/s
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.332 253542 DEBUG oslo_concurrency.lockutils [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.333 253542 DEBUG oslo_concurrency.lockutils [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.334 253542 DEBUG oslo_concurrency.lockutils [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.334 253542 DEBUG oslo_concurrency.lockutils [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.334 253542 DEBUG oslo_concurrency.lockutils [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.336 253542 INFO nova.compute.manager [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Terminating instance
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.338 253542 DEBUG nova.compute.manager [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:37:17 compute-0 kernel: tap41bc52e3-37 (unregistering): left promiscuous mode
Nov 25 08:37:17 compute-0 NetworkManager[48915]: <info>  [1764059837.4739] device (tap41bc52e3-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.483 253542 DEBUG nova.network.neutron [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:37:17 compute-0 ovn_controller[152859]: 2025-11-25T08:37:17Z|00733|binding|INFO|Releasing lport 41bc52e3-37ed-4096-9d29-9868b1e29c3b from this chassis (sb_readonly=0)
Nov 25 08:37:17 compute-0 ovn_controller[152859]: 2025-11-25T08:37:17Z|00734|binding|INFO|Setting lport 41bc52e3-37ed-4096-9d29-9868b1e29c3b down in Southbound
Nov 25 08:37:17 compute-0 ovn_controller[152859]: 2025-11-25T08:37:17Z|00735|binding|INFO|Removing iface tap41bc52e3-37 ovn-installed in OVS
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.485 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:17.498 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:21:01 10.100.0.9'], port_security=['fa:16:3e:91:21:01 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b62dacb0-2605-4b3f-b00a-9ecf5d2728f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa56d31750374b64b67d1be19bb4e989', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8d5f0080-1d0d-4c90-8c84-7d1437ba109d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec4cd11-f7ee-427b-8b83-e8d21ceafdd1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=41bc52e3-37ed-4096-9d29-9868b1e29c3b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:17.500 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 41bc52e3-37ed-4096-9d29-9868b1e29c3b in datapath 30bd3cca-f71b-4541-ae95-8d0eb4dfe470 unbound from our chassis
Nov 25 08:37:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:17.502 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30bd3cca-f71b-4541-ae95-8d0eb4dfe470
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.509 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:17.522 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79405187-ccad-4fa8-a4ef-38b69a9cd96b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4287753250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:17 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d00000049.scope: Deactivated successfully.
Nov 25 08:37:17 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d00000049.scope: Consumed 15.535s CPU time.
Nov 25 08:37:17 compute-0 systemd-machined[215790]: Machine qemu-86-instance-00000049 terminated.
Nov 25 08:37:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:17.572 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[20a895e5-acc0-4878-bb1b-97ceda9ae68e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:17.575 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[59cf60ea-f2ca-4ee8-917e-29ed208403a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.595 253542 INFO nova.virt.libvirt.driver [-] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Instance destroyed successfully.
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.595 253542 DEBUG nova.objects.instance [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'resources' on Instance uuid b62dacb0-2605-4b3f-b00a-9ecf5d2728f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:17 compute-0 podman[331397]: 2025-11-25 08:37:17.60667252 +0000 UTC m=+0.093534179 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 08:37:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:17.609 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[18fd06e8-09f2-484f-83ed-129c5104103d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.610 253542 DEBUG nova.virt.libvirt.vif [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-48731400',display_name='tempest-ListServerFiltersTestJSON-instance-48731400',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-48731400',id=73,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:36:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa56d31750374b64b67d1be19bb4e989',ramdisk_id='',reservation_id='r-eg67u00c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1878680398',owner_user_name='tempest-ListServerFiltersTestJSON-1878680398-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:36:29Z,user_data=None,user_id='ee3a6261ded642fa9ef617b29b026d86',uuid=b62dacb0-2605-4b3f-b00a-9ecf5d2728f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "address": "fa:16:3e:91:21:01", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc52e3-37", "ovs_interfaceid": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.611 253542 DEBUG nova.network.os_vif_util [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converting VIF {"id": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "address": "fa:16:3e:91:21:01", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc52e3-37", "ovs_interfaceid": "41bc52e3-37ed-4096-9d29-9868b1e29c3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.611 253542 DEBUG nova.network.os_vif_util [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:21:01,bridge_name='br-int',has_traffic_filtering=True,id=41bc52e3-37ed-4096-9d29-9868b1e29c3b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc52e3-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.612 253542 DEBUG os_vif [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:21:01,bridge_name='br-int',has_traffic_filtering=True,id=41bc52e3-37ed-4096-9d29-9868b1e29c3b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc52e3-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.614 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.614 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41bc52e3-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.620 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.621 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.624 253542 INFO os_vif [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:21:01,bridge_name='br-int',has_traffic_filtering=True,id=41bc52e3-37ed-4096-9d29-9868b1e29c3b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc52e3-37')
Nov 25 08:37:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:17.631 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e362234f-1335-441a-a28a-225174e75e88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30bd3cca-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:18:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 15, 'rx_bytes': 742, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 15, 'rx_bytes': 742, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514256, 'reachable_time': 39357, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331438, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:17.650 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5fcbb6-f866-4e7d-9a44-2a0b684035b8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap30bd3cca-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514276, 'tstamp': 514276}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331454, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap30bd3cca-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514280, 'tstamp': 514280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331454, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:17.651 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30bd3cca-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:17.654 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30bd3cca-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:17.654 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.653 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:17.654 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30bd3cca-f0, col_values=(('external_ids', {'iface-id': 'fb637403-e23d-4de9-9c54-1f8015bf829f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:17.655 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.771 253542 DEBUG nova.compute.manager [req-7ddabbe9-61e8-4640-80f0-52b27f98a045 req-da465037-9a50-4711-87f0-76e4fcf705d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Received event network-vif-unplugged-41bc52e3-37ed-4096-9d29-9868b1e29c3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.771 253542 DEBUG oslo_concurrency.lockutils [req-7ddabbe9-61e8-4640-80f0-52b27f98a045 req-da465037-9a50-4711-87f0-76e4fcf705d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.772 253542 DEBUG oslo_concurrency.lockutils [req-7ddabbe9-61e8-4640-80f0-52b27f98a045 req-da465037-9a50-4711-87f0-76e4fcf705d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.772 253542 DEBUG oslo_concurrency.lockutils [req-7ddabbe9-61e8-4640-80f0-52b27f98a045 req-da465037-9a50-4711-87f0-76e4fcf705d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.772 253542 DEBUG nova.compute.manager [req-7ddabbe9-61e8-4640-80f0-52b27f98a045 req-da465037-9a50-4711-87f0-76e4fcf705d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] No waiting events found dispatching network-vif-unplugged-41bc52e3-37ed-4096-9d29-9868b1e29c3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.773 253542 DEBUG nova.compute.manager [req-7ddabbe9-61e8-4640-80f0-52b27f98a045 req-da465037-9a50-4711-87f0-76e4fcf705d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Received event network-vif-unplugged-41bc52e3-37ed-4096-9d29-9868b1e29c3b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.783 253542 DEBUG nova.network.neutron [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.799 253542 DEBUG oslo_concurrency.lockutils [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Releasing lock "refresh_cache-806e081d-6b1a-4909-be7c-5490c631ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:37:17 compute-0 nova_compute[253538]: 2025-11-25 08:37:17.800 253542 DEBUG nova.compute.manager [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:37:18 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Nov 25 08:37:18 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004c.scope: Consumed 14.408s CPU time.
Nov 25 08:37:18 compute-0 systemd-machined[215790]: Machine qemu-89-instance-0000004c terminated.
Nov 25 08:37:18 compute-0 nova_compute[253538]: 2025-11-25 08:37:18.191 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Updating instance_info_cache with network_info: [{"id": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "address": "fa:16:3e:a4:fe:4e", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b844d5-71", "ovs_interfaceid": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:18 compute-0 nova_compute[253538]: 2025-11-25 08:37:18.203 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:37:18 compute-0 nova_compute[253538]: 2025-11-25 08:37:18.204 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:37:18 compute-0 nova_compute[253538]: 2025-11-25 08:37:18.204 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:37:18 compute-0 nova_compute[253538]: 2025-11-25 08:37:18.204 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:37:18 compute-0 nova_compute[253538]: 2025-11-25 08:37:18.204 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:37:18 compute-0 nova_compute[253538]: 2025-11-25 08:37:18.223 253542 INFO nova.virt.libvirt.driver [-] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Instance destroyed successfully.
Nov 25 08:37:18 compute-0 nova_compute[253538]: 2025-11-25 08:37:18.224 253542 DEBUG nova.objects.instance [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lazy-loading 'resources' on Instance uuid 806e081d-6b1a-4909-be7c-5490c631ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:18 compute-0 ceph-mon[75015]: pgmap v1628: 321 pgs: 321 active+clean; 563 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.0 MiB/s wr, 352 op/s
Nov 25 08:37:18 compute-0 nova_compute[253538]: 2025-11-25 08:37:18.699 253542 INFO nova.virt.libvirt.driver [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Deleting instance files /var/lib/nova/instances/b62dacb0-2605-4b3f-b00a-9ecf5d2728f7_del
Nov 25 08:37:18 compute-0 nova_compute[253538]: 2025-11-25 08:37:18.700 253542 INFO nova.virt.libvirt.driver [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Deletion of /var/lib/nova/instances/b62dacb0-2605-4b3f-b00a-9ecf5d2728f7_del complete
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.052 253542 INFO nova.compute.manager [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Took 1.71 seconds to destroy the instance on the hypervisor.
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.052 253542 DEBUG oslo.service.loopingcall [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.053 253542 DEBUG nova.compute.manager [-] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.053 253542 DEBUG nova.network.neutron [-] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.105 253542 INFO nova.virt.libvirt.driver [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Deleting instance files /var/lib/nova/instances/806e081d-6b1a-4909-be7c-5490c631ebfe_del
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.106 253542 INFO nova.virt.libvirt.driver [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Deletion of /var/lib/nova/instances/806e081d-6b1a-4909-be7c-5490c631ebfe_del complete
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.281 253542 INFO nova.compute.manager [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Took 1.48 seconds to destroy the instance on the hypervisor.
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.281 253542 DEBUG oslo.service.loopingcall [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.282 253542 DEBUG nova.compute.manager [-] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.282 253542 DEBUG nova.network.neutron [-] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:37:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1629: 321 pgs: 321 active+clean; 520 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 323 op/s
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.439 253542 DEBUG nova.network.neutron [-] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.457 253542 DEBUG nova.network.neutron [-] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.471 253542 INFO nova.compute.manager [-] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Took 0.19 seconds to deallocate network for instance.
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.496 253542 DEBUG nova.network.neutron [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Updating instance_info_cache with network_info: [{"id": "a48f5b09-487c-4713-a697-b97ef4fc6497", "address": "fa:16:3e:10:47:69", "network": {"id": "57fa20d4-05ed-4242-9dcc-d6ff478d5568", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1144994260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c8abf549f8d47eba559c32e8ed0679c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48f5b09-48", "ovs_interfaceid": "a48f5b09-487c-4713-a697-b97ef4fc6497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.531 253542 DEBUG oslo_concurrency.lockutils [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.531 253542 DEBUG oslo_concurrency.lockutils [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.537 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Releasing lock "refresh_cache-a263c70f-c8ce-4ffc-bd62-595fc2e31593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.537 253542 DEBUG nova.compute.manager [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Instance network_info: |[{"id": "a48f5b09-487c-4713-a697-b97ef4fc6497", "address": "fa:16:3e:10:47:69", "network": {"id": "57fa20d4-05ed-4242-9dcc-d6ff478d5568", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1144994260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c8abf549f8d47eba559c32e8ed0679c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48f5b09-48", "ovs_interfaceid": "a48f5b09-487c-4713-a697-b97ef4fc6497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.540 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Start _get_guest_xml network_info=[{"id": "a48f5b09-487c-4713-a697-b97ef4fc6497", "address": "fa:16:3e:10:47:69", "network": {"id": "57fa20d4-05ed-4242-9dcc-d6ff478d5568", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1144994260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c8abf549f8d47eba559c32e8ed0679c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48f5b09-48", "ovs_interfaceid": "a48f5b09-487c-4713-a697-b97ef4fc6497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.546 253542 WARNING nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:37:19 compute-0 ceph-mon[75015]: pgmap v1629: 321 pgs: 321 active+clean; 520 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 323 op/s
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.551 253542 DEBUG nova.virt.libvirt.host [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.551 253542 DEBUG nova.virt.libvirt.host [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.554 253542 DEBUG nova.virt.libvirt.host [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.554 253542 DEBUG nova.virt.libvirt.host [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.555 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.555 253542 DEBUG nova.virt.hardware [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.555 253542 DEBUG nova.virt.hardware [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.555 253542 DEBUG nova.virt.hardware [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.556 253542 DEBUG nova.virt.hardware [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.556 253542 DEBUG nova.virt.hardware [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.556 253542 DEBUG nova.virt.hardware [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.556 253542 DEBUG nova.virt.hardware [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.556 253542 DEBUG nova.virt.hardware [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.556 253542 DEBUG nova.virt.hardware [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.556 253542 DEBUG nova.virt.hardware [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.557 253542 DEBUG nova.virt.hardware [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.559 253542 DEBUG oslo_concurrency.processutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.884 253542 DEBUG oslo_concurrency.processutils [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:37:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1330804515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:19 compute-0 nova_compute[253538]: 2025-11-25 08:37:19.995 253542 DEBUG oslo_concurrency.processutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.024 253542 DEBUG nova.storage.rbd_utils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] rbd image a263c70f-c8ce-4ffc-bd62-595fc2e31593_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.028 253542 DEBUG oslo_concurrency.processutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:37:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1039336566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.389 253542 DEBUG oslo_concurrency.processutils [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.398 253542 DEBUG nova.compute.provider_tree [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.413 253542 DEBUG nova.scheduler.client.report [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.440 253542 DEBUG nova.compute.manager [req-1f5216aa-e305-4a48-8ed0-f2de3c16eb56 req-7b91ba39-ba07-4eef-ae84-56cc2843a2f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Received event network-vif-plugged-41bc52e3-37ed-4096-9d29-9868b1e29c3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.441 253542 DEBUG oslo_concurrency.lockutils [req-1f5216aa-e305-4a48-8ed0-f2de3c16eb56 req-7b91ba39-ba07-4eef-ae84-56cc2843a2f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.441 253542 DEBUG oslo_concurrency.lockutils [req-1f5216aa-e305-4a48-8ed0-f2de3c16eb56 req-7b91ba39-ba07-4eef-ae84-56cc2843a2f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.441 253542 DEBUG oslo_concurrency.lockutils [req-1f5216aa-e305-4a48-8ed0-f2de3c16eb56 req-7b91ba39-ba07-4eef-ae84-56cc2843a2f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.442 253542 DEBUG nova.compute.manager [req-1f5216aa-e305-4a48-8ed0-f2de3c16eb56 req-7b91ba39-ba07-4eef-ae84-56cc2843a2f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] No waiting events found dispatching network-vif-plugged-41bc52e3-37ed-4096-9d29-9868b1e29c3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.442 253542 WARNING nova.compute.manager [req-1f5216aa-e305-4a48-8ed0-f2de3c16eb56 req-7b91ba39-ba07-4eef-ae84-56cc2843a2f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Received unexpected event network-vif-plugged-41bc52e3-37ed-4096-9d29-9868b1e29c3b for instance with vm_state active and task_state deleting.
Nov 25 08:37:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:37:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2154559523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.463 253542 DEBUG oslo_concurrency.lockutils [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.467 253542 DEBUG oslo_concurrency.processutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.468 253542 DEBUG nova.virt.libvirt.vif [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:37:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1704447112',display_name='tempest-ServerMetadataNegativeTestJSON-server-1704447112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1704447112',id=79,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c8abf549f8d47eba559c32e8ed0679c',ramdisk_id='',reservation_id='r-8du0e9l9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-376934399',owner_user_name='tempest-ServerMetadataNegativeTestJSON-376934399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:37:12Z,user_data=None,user_id='aa221a12ceb248cbac90a621af09d7fb',uuid=a263c70f-c8ce-4ffc-bd62-595fc2e31593,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a48f5b09-487c-4713-a697-b97ef4fc6497", "address": "fa:16:3e:10:47:69", "network": {"id": "57fa20d4-05ed-4242-9dcc-d6ff478d5568", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1144994260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c8abf549f8d47eba559c32e8ed0679c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48f5b09-48", "ovs_interfaceid": "a48f5b09-487c-4713-a697-b97ef4fc6497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.469 253542 DEBUG nova.network.os_vif_util [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Converting VIF {"id": "a48f5b09-487c-4713-a697-b97ef4fc6497", "address": "fa:16:3e:10:47:69", "network": {"id": "57fa20d4-05ed-4242-9dcc-d6ff478d5568", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1144994260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c8abf549f8d47eba559c32e8ed0679c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48f5b09-48", "ovs_interfaceid": "a48f5b09-487c-4713-a697-b97ef4fc6497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.470 253542 DEBUG nova.network.os_vif_util [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:47:69,bridge_name='br-int',has_traffic_filtering=True,id=a48f5b09-487c-4713-a697-b97ef4fc6497,network=Network(57fa20d4-05ed-4242-9dcc-d6ff478d5568),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa48f5b09-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.472 253542 DEBUG nova.objects.instance [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lazy-loading 'pci_devices' on Instance uuid a263c70f-c8ce-4ffc-bd62-595fc2e31593 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.486 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:37:20 compute-0 nova_compute[253538]:   <uuid>a263c70f-c8ce-4ffc-bd62-595fc2e31593</uuid>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   <name>instance-0000004f</name>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-1704447112</nova:name>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:37:19</nova:creationTime>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:37:20 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:37:20 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:37:20 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:37:20 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:37:20 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:37:20 compute-0 nova_compute[253538]:         <nova:user uuid="aa221a12ceb248cbac90a621af09d7fb">tempest-ServerMetadataNegativeTestJSON-376934399-project-member</nova:user>
Nov 25 08:37:20 compute-0 nova_compute[253538]:         <nova:project uuid="9c8abf549f8d47eba559c32e8ed0679c">tempest-ServerMetadataNegativeTestJSON-376934399</nova:project>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:37:20 compute-0 nova_compute[253538]:         <nova:port uuid="a48f5b09-487c-4713-a697-b97ef4fc6497">
Nov 25 08:37:20 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <system>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <entry name="serial">a263c70f-c8ce-4ffc-bd62-595fc2e31593</entry>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <entry name="uuid">a263c70f-c8ce-4ffc-bd62-595fc2e31593</entry>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     </system>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   <os>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   </os>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   <features>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   </features>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/a263c70f-c8ce-4ffc-bd62-595fc2e31593_disk">
Nov 25 08:37:20 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       </source>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:37:20 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/a263c70f-c8ce-4ffc-bd62-595fc2e31593_disk.config">
Nov 25 08:37:20 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       </source>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:37:20 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:10:47:69"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <target dev="tapa48f5b09-48"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/a263c70f-c8ce-4ffc-bd62-595fc2e31593/console.log" append="off"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <video>
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     </video>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:37:20 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:37:20 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:37:20 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:37:20 compute-0 nova_compute[253538]: </domain>
Nov 25 08:37:20 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.486 253542 DEBUG nova.compute.manager [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Preparing to wait for external event network-vif-plugged-a48f5b09-487c-4713-a697-b97ef4fc6497 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.487 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Acquiring lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.487 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.487 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.488 253542 DEBUG nova.virt.libvirt.vif [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:37:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1704447112',display_name='tempest-ServerMetadataNegativeTestJSON-server-1704447112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1704447112',id=79,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c8abf549f8d47eba559c32e8ed0679c',ramdisk_id='',reservation_id='r-8du0e9l9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-376934399',owner_user_name='tempest-ServerMetadataNegativeTestJSON-376934399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:37:12Z,user_data=None,user_id='aa221a12ceb248cbac90a621af09d7fb',uuid=a263c70f-c8ce-4ffc-bd62-595fc2e31593,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a48f5b09-487c-4713-a697-b97ef4fc6497", "address": "fa:16:3e:10:47:69", "network": {"id": "57fa20d4-05ed-4242-9dcc-d6ff478d5568", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1144994260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c8abf549f8d47eba559c32e8ed0679c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48f5b09-48", "ovs_interfaceid": "a48f5b09-487c-4713-a697-b97ef4fc6497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.489 253542 DEBUG nova.network.os_vif_util [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Converting VIF {"id": "a48f5b09-487c-4713-a697-b97ef4fc6497", "address": "fa:16:3e:10:47:69", "network": {"id": "57fa20d4-05ed-4242-9dcc-d6ff478d5568", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1144994260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c8abf549f8d47eba559c32e8ed0679c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48f5b09-48", "ovs_interfaceid": "a48f5b09-487c-4713-a697-b97ef4fc6497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.489 253542 DEBUG nova.network.os_vif_util [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:47:69,bridge_name='br-int',has_traffic_filtering=True,id=a48f5b09-487c-4713-a697-b97ef4fc6497,network=Network(57fa20d4-05ed-4242-9dcc-d6ff478d5568),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa48f5b09-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.490 253542 DEBUG os_vif [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:47:69,bridge_name='br-int',has_traffic_filtering=True,id=a48f5b09-487c-4713-a697-b97ef4fc6497,network=Network(57fa20d4-05ed-4242-9dcc-d6ff478d5568),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa48f5b09-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.491 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.491 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.492 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.495 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.495 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa48f5b09-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.497 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa48f5b09-48, col_values=(('external_ids', {'iface-id': 'a48f5b09-487c-4713-a697-b97ef4fc6497', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:47:69', 'vm-uuid': 'a263c70f-c8ce-4ffc-bd62-595fc2e31593'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.499 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:20 compute-0 NetworkManager[48915]: <info>  [1764059840.4998] manager: (tapa48f5b09-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.501 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.505 253542 INFO nova.scheduler.client.report [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Deleted allocations for instance 806e081d-6b1a-4909-be7c-5490c631ebfe
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.507 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.507 253542 INFO os_vif [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:47:69,bridge_name='br-int',has_traffic_filtering=True,id=a48f5b09-487c-4713-a697-b97ef4fc6497,network=Network(57fa20d4-05ed-4242-9dcc-d6ff478d5568),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa48f5b09-48')
Nov 25 08:37:20 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1330804515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:20 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1039336566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:20 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2154559523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.591 253542 DEBUG oslo_concurrency.lockutils [None req-cdab134f-597f-47ae-8220-b830149c0aa3 6e7ed13625c1463da308fbfab28c4541 504c5057c1894355afeafd57cd62b7ab - - default default] Lock "806e081d-6b1a-4909-be7c-5490c631ebfe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.610 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.611 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.611 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] No VIF found with MAC fa:16:3e:10:47:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.612 253542 INFO nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Using config drive
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.640 253542 DEBUG nova.storage.rbd_utils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] rbd image a263c70f-c8ce-4ffc-bd62-595fc2e31593_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.718 253542 DEBUG nova.network.neutron [-] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.763 253542 INFO nova.compute.manager [-] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Took 1.71 seconds to deallocate network for instance.
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.829 253542 DEBUG nova.compute.manager [req-4109a95f-52c7-4edc-9693-4c725f9ba3fa req-bc69ee91-98c2-4bfc-874a-2eea480daaa1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Received event network-vif-deleted-41bc52e3-37ed-4096-9d29-9868b1e29c3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.852 253542 DEBUG oslo_concurrency.lockutils [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.853 253542 DEBUG oslo_concurrency.lockutils [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:20 compute-0 podman[331586]: 2025-11-25 08:37:20.889382289 +0000 UTC m=+0.131212066 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:37:20 compute-0 nova_compute[253538]: 2025-11-25 08:37:20.963 253542 DEBUG oslo_concurrency.processutils [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 321 active+clean; 467 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.5 MiB/s wr, 333 op/s
Nov 25 08:37:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:37:21 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2242186248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:21 compute-0 nova_compute[253538]: 2025-11-25 08:37:21.426 253542 DEBUG oslo_concurrency.processutils [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:21 compute-0 nova_compute[253538]: 2025-11-25 08:37:21.431 253542 INFO nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Creating config drive at /var/lib/nova/instances/a263c70f-c8ce-4ffc-bd62-595fc2e31593/disk.config
Nov 25 08:37:21 compute-0 nova_compute[253538]: 2025-11-25 08:37:21.436 253542 DEBUG oslo_concurrency.processutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a263c70f-c8ce-4ffc-bd62-595fc2e31593/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpllg14d2b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:21 compute-0 nova_compute[253538]: 2025-11-25 08:37:21.473 253542 DEBUG nova.compute.provider_tree [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:37:21 compute-0 nova_compute[253538]: 2025-11-25 08:37:21.489 253542 DEBUG nova.scheduler.client.report [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:37:21 compute-0 nova_compute[253538]: 2025-11-25 08:37:21.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:21 compute-0 nova_compute[253538]: 2025-11-25 08:37:21.576 253542 DEBUG oslo_concurrency.lockutils [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:21 compute-0 nova_compute[253538]: 2025-11-25 08:37:21.581 253542 DEBUG oslo_concurrency.processutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a263c70f-c8ce-4ffc-bd62-595fc2e31593/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpllg14d2b" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:21 compute-0 nova_compute[253538]: 2025-11-25 08:37:21.616 253542 DEBUG nova.storage.rbd_utils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] rbd image a263c70f-c8ce-4ffc-bd62-595fc2e31593_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:21 compute-0 nova_compute[253538]: 2025-11-25 08:37:21.620 253542 DEBUG oslo_concurrency.processutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a263c70f-c8ce-4ffc-bd62-595fc2e31593/disk.config a263c70f-c8ce-4ffc-bd62-595fc2e31593_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:21 compute-0 ceph-mon[75015]: pgmap v1630: 321 pgs: 321 active+clean; 467 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.5 MiB/s wr, 333 op/s
Nov 25 08:37:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2242186248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:37:21 compute-0 nova_compute[253538]: 2025-11-25 08:37:21.981 253542 INFO nova.scheduler.client.report [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Deleted allocations for instance b62dacb0-2605-4b3f-b00a-9ecf5d2728f7
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.083 253542 DEBUG oslo_concurrency.lockutils [None req-a8a04c11-d814-434c-9a77-712eb4d15de4 ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "b62dacb0-2605-4b3f-b00a-9ecf5d2728f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.119 253542 DEBUG oslo_concurrency.processutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a263c70f-c8ce-4ffc-bd62-595fc2e31593/disk.config a263c70f-c8ce-4ffc-bd62-595fc2e31593_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.119 253542 INFO nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Deleting local config drive /var/lib/nova/instances/a263c70f-c8ce-4ffc-bd62-595fc2e31593/disk.config because it was imported into RBD.
Nov 25 08:37:22 compute-0 kernel: tapa48f5b09-48: entered promiscuous mode
Nov 25 08:37:22 compute-0 NetworkManager[48915]: <info>  [1764059842.1885] manager: (tapa48f5b09-48): new Tun device (/org/freedesktop/NetworkManager/Devices/325)
Nov 25 08:37:22 compute-0 ovn_controller[152859]: 2025-11-25T08:37:22Z|00736|binding|INFO|Claiming lport a48f5b09-487c-4713-a697-b97ef4fc6497 for this chassis.
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.190 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:22 compute-0 ovn_controller[152859]: 2025-11-25T08:37:22Z|00737|binding|INFO|a48f5b09-487c-4713-a697-b97ef4fc6497: Claiming fa:16:3e:10:47:69 10.100.0.5
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:22 compute-0 systemd-udevd[331688]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.220 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:47:69 10.100.0.5'], port_security=['fa:16:3e:10:47:69 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a263c70f-c8ce-4ffc-bd62-595fc2e31593', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57fa20d4-05ed-4242-9dcc-d6ff478d5568', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9c8abf549f8d47eba559c32e8ed0679c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '128a41eb-4333-4ba4-949b-c13cdfb57f36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=068f6901-0a31-4c35-818b-35b51d41e3f0, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a48f5b09-487c-4713-a697-b97ef4fc6497) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.222 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a48f5b09-487c-4713-a697-b97ef4fc6497 in datapath 57fa20d4-05ed-4242-9dcc-d6ff478d5568 bound to our chassis
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.223 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 57fa20d4-05ed-4242-9dcc-d6ff478d5568
Nov 25 08:37:22 compute-0 systemd-machined[215790]: New machine qemu-95-instance-0000004f.
Nov 25 08:37:22 compute-0 NetworkManager[48915]: <info>  [1764059842.2325] device (tapa48f5b09-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:37:22 compute-0 NetworkManager[48915]: <info>  [1764059842.2338] device (tapa48f5b09-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:37:22 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-0000004f.
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.242 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[99458f19-649b-47fa-a668-b97fa228aa39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.244 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap57fa20d4-01 in ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.246 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap57fa20d4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.246 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc5aa3b-73bf-48cd-bd8a-cbccf6b01939]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.248 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f29599cf-110b-4904-a886-14cab4e95af6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.262 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[549e5c44-f03e-4e4f-b656-e0b2486b4b6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.288 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.288 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ae11c77a-b118-45d5-b3f4-030b2caea2b2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:22 compute-0 ovn_controller[152859]: 2025-11-25T08:37:22Z|00738|binding|INFO|Setting lport a48f5b09-487c-4713-a697-b97ef4fc6497 ovn-installed in OVS
Nov 25 08:37:22 compute-0 ovn_controller[152859]: 2025-11-25T08:37:22Z|00739|binding|INFO|Setting lport a48f5b09-487c-4713-a697-b97ef4fc6497 up in Southbound
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.294 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.318 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[91b7764e-d6b3-4b52-a438-4a358c38efbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:22 compute-0 NetworkManager[48915]: <info>  [1764059842.3262] manager: (tap57fa20d4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/326)
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.326 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc0cc6b-2330-4ca0-a1ae-3f1d57853e93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.384 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[21d427d0-b0e5-43a4-ab8a-da1ff3de5c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.387 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dfbc945a-fec8-4630-ac8e-0d5b35d05378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:22 compute-0 NetworkManager[48915]: <info>  [1764059842.4091] device (tap57fa20d4-00): carrier: link connected
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.414 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fd158ec0-ed96-451c-8c6d-eaf708d4829d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.429 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[076a4a53-a0bd-443b-8c6a-db7834b3be3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap57fa20d4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:91:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519700, 'reachable_time': 44401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331723, 'error': None, 'target': 'ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.442 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[490010a6-2d56-468e-963b-3e5283c87b87]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:91b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519700, 'tstamp': 519700}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331725, 'error': None, 'target': 'ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.456 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2581cc-81b2-4785-b3ce-cc5f03caa366]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap57fa20d4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:91:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519700, 'reachable_time': 44401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331726, 'error': None, 'target': 'ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.486 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1a509e-96c9-4480-acf2-028fcd364e7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.545 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f453597b-aec9-42d7-abaf-c348d5a04fdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.547 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57fa20d4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.547 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.548 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57fa20d4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:22 compute-0 NetworkManager[48915]: <info>  [1764059842.6062] manager: (tap57fa20d4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Nov 25 08:37:22 compute-0 kernel: tap57fa20d4-00: entered promiscuous mode
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.608 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap57fa20d4-00, col_values=(('external_ids', {'iface-id': '98ca1f42-63ed-4687-9578-26bb52c667f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:22 compute-0 ovn_controller[152859]: 2025-11-25T08:37:22Z|00740|binding|INFO|Releasing lport 98ca1f42-63ed-4687-9578-26bb52c667f7 from this chassis (sb_readonly=0)
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.621 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.632 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.633 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/57fa20d4-05ed-4242-9dcc-d6ff478d5568.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/57fa20d4-05ed-4242-9dcc-d6ff478d5568.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.634 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8da38551-c7ba-4258-83f7-d50a542617db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.635 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-57fa20d4-05ed-4242-9dcc-d6ff478d5568
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/57fa20d4-05ed-4242-9dcc-d6ff478d5568.pid.haproxy
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 57fa20d4-05ed-4242-9dcc-d6ff478d5568
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:37:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:22.636 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568', 'env', 'PROCESS_TAG=haproxy-57fa20d4-05ed-4242-9dcc-d6ff478d5568', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/57fa20d4-05ed-4242-9dcc-d6ff478d5568.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.724 253542 DEBUG nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.917 253542 DEBUG nova.compute.manager [req-37086046-c778-4882-8986-7c402d46b07b req-c1ab5217-0aea-45be-bbef-efbb252ef5bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Received event network-vif-plugged-a48f5b09-487c-4713-a697-b97ef4fc6497 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.918 253542 DEBUG oslo_concurrency.lockutils [req-37086046-c778-4882-8986-7c402d46b07b req-c1ab5217-0aea-45be-bbef-efbb252ef5bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.918 253542 DEBUG oslo_concurrency.lockutils [req-37086046-c778-4882-8986-7c402d46b07b req-c1ab5217-0aea-45be-bbef-efbb252ef5bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.918 253542 DEBUG oslo_concurrency.lockutils [req-37086046-c778-4882-8986-7c402d46b07b req-c1ab5217-0aea-45be-bbef-efbb252ef5bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.918 253542 DEBUG nova.compute.manager [req-37086046-c778-4882-8986-7c402d46b07b req-c1ab5217-0aea-45be-bbef-efbb252ef5bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Processing event network-vif-plugged-a48f5b09-487c-4713-a697-b97ef4fc6497 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.919 253542 DEBUG nova.compute.manager [req-37086046-c778-4882-8986-7c402d46b07b req-c1ab5217-0aea-45be-bbef-efbb252ef5bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Received event network-vif-plugged-a48f5b09-487c-4713-a697-b97ef4fc6497 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.919 253542 DEBUG oslo_concurrency.lockutils [req-37086046-c778-4882-8986-7c402d46b07b req-c1ab5217-0aea-45be-bbef-efbb252ef5bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.919 253542 DEBUG oslo_concurrency.lockutils [req-37086046-c778-4882-8986-7c402d46b07b req-c1ab5217-0aea-45be-bbef-efbb252ef5bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.919 253542 DEBUG oslo_concurrency.lockutils [req-37086046-c778-4882-8986-7c402d46b07b req-c1ab5217-0aea-45be-bbef-efbb252ef5bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.920 253542 DEBUG nova.compute.manager [req-37086046-c778-4882-8986-7c402d46b07b req-c1ab5217-0aea-45be-bbef-efbb252ef5bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] No waiting events found dispatching network-vif-plugged-a48f5b09-487c-4713-a697-b97ef4fc6497 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.920 253542 WARNING nova.compute.manager [req-37086046-c778-4882-8986-7c402d46b07b req-c1ab5217-0aea-45be-bbef-efbb252ef5bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Received unexpected event network-vif-plugged-a48f5b09-487c-4713-a697-b97ef4fc6497 for instance with vm_state building and task_state spawning.
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.940 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059842.9401412, a263c70f-c8ce-4ffc-bd62-595fc2e31593 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.941 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] VM Started (Lifecycle Event)
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.943 253542 DEBUG nova.compute.manager [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.946 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.949 253542 INFO nova.virt.libvirt.driver [-] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Instance spawned successfully.
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.949 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.966 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.967 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.967 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.968 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.968 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.968 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.969 253542 DEBUG nova.virt.libvirt.driver [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:22 compute-0 nova_compute[253538]: 2025-11-25 08:37:22.973 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.050 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.050 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059842.9431496, a263c70f-c8ce-4ffc-bd62-595fc2e31593 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.050 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] VM Paused (Lifecycle Event)
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.064 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.068 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059842.9459455, a263c70f-c8ce-4ffc-bd62-595fc2e31593 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.068 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] VM Resumed (Lifecycle Event)
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.080 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.083 253542 INFO nova.compute.manager [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Took 9.98 seconds to spawn the instance on the hypervisor.
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.083 253542 DEBUG nova.compute.manager [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.085 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.108 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:37:23 compute-0 podman[331800]: 2025-11-25 08:37:23.148111443 +0000 UTC m=+0.059367829 container create 7208cd5734dade4ddd47ec7d1a74ad2e2685a95e0d0d002511cc394e38946342 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.174 253542 INFO nova.compute.manager [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Took 11.42 seconds to build instance.
Nov 25 08:37:23 compute-0 systemd[1]: Started libpod-conmon-7208cd5734dade4ddd47ec7d1a74ad2e2685a95e0d0d002511cc394e38946342.scope.
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.196 253542 DEBUG oslo_concurrency.lockutils [None req-805497ca-a987-49dc-b9dd-20c4d99dbbff aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:23 compute-0 podman[331800]: 2025-11-25 08:37:23.11682577 +0000 UTC m=+0.028082166 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:37:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cece53e3308d40f56648f20327c46ff834bf5478a4e1d2c1b619da5a20ea91/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.230 253542 DEBUG oslo_concurrency.lockutils [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.230 253542 DEBUG oslo_concurrency.lockutils [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.231 253542 DEBUG oslo_concurrency.lockutils [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.231 253542 DEBUG oslo_concurrency.lockutils [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.231 253542 DEBUG oslo_concurrency.lockutils [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.232 253542 INFO nova.compute.manager [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Terminating instance
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.233 253542 DEBUG nova.compute.manager [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:37:23 compute-0 podman[331800]: 2025-11-25 08:37:23.249701351 +0000 UTC m=+0.160957747 container init 7208cd5734dade4ddd47ec7d1a74ad2e2685a95e0d0d002511cc394e38946342 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:37:23 compute-0 podman[331800]: 2025-11-25 08:37:23.256009373 +0000 UTC m=+0.167265759 container start 7208cd5734dade4ddd47ec7d1a74ad2e2685a95e0d0d002511cc394e38946342 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:37:23 compute-0 kernel: tap8bb4cdd1-80 (unregistering): left promiscuous mode
Nov 25 08:37:23 compute-0 NetworkManager[48915]: <info>  [1764059843.2811] device (tap8bb4cdd1-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:37:23 compute-0 ovn_controller[152859]: 2025-11-25T08:37:23Z|00741|binding|INFO|Releasing lport 8bb4cdd1-8082-4ad3-9350-be7270fb373b from this chassis (sb_readonly=0)
Nov 25 08:37:23 compute-0 ovn_controller[152859]: 2025-11-25T08:37:23Z|00742|binding|INFO|Setting lport 8bb4cdd1-8082-4ad3-9350-be7270fb373b down in Southbound
Nov 25 08:37:23 compute-0 neutron-haproxy-ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568[331815]: [NOTICE]   (331819) : New worker (331822) forked
Nov 25 08:37:23 compute-0 neutron-haproxy-ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568[331815]: [NOTICE]   (331819) : Loading success.
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.286 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:23 compute-0 ovn_controller[152859]: 2025-11-25T08:37:23Z|00743|binding|INFO|Removing iface tap8bb4cdd1-80 ovn-installed in OVS
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:23.301 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:57:05 10.100.0.3'], port_security=['fa:16:3e:a4:57:05 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6c10a34e-4126-4e88-ad4d-ba7c407a379e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa56d31750374b64b67d1be19bb4e989', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8d5f0080-1d0d-4c90-8c84-7d1437ba109d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec4cd11-f7ee-427b-8b83-e8d21ceafdd1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8bb4cdd1-8082-4ad3-9350-be7270fb373b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.318 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1631: 321 pgs: 321 active+clean; 417 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 292 op/s
Nov 25 08:37:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:23.332 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8bb4cdd1-8082-4ad3-9350-be7270fb373b in datapath 30bd3cca-f71b-4541-ae95-8d0eb4dfe470 unbound from our chassis
Nov 25 08:37:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:23.334 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 30bd3cca-f71b-4541-ae95-8d0eb4dfe470, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:37:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:23.335 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4e50ac85-7ab7-4737-b8c1-bc276eaab677]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:23.335 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470 namespace which is not needed anymore
Nov 25 08:37:23 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d00000048.scope: Deactivated successfully.
Nov 25 08:37:23 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d00000048.scope: Consumed 14.869s CPU time.
Nov 25 08:37:23 compute-0 systemd-machined[215790]: Machine qemu-92-instance-00000048 terminated.
Nov 25 08:37:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:37:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:37:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:37:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:37:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:37:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:37:23 compute-0 NetworkManager[48915]: <info>  [1764059843.4645] manager: (tap8bb4cdd1-80): new Tun device (/org/freedesktop/NetworkManager/Devices/328)
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.468 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.479 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.484 253542 INFO nova.virt.libvirt.driver [-] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Instance destroyed successfully.
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.484 253542 DEBUG nova.objects.instance [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lazy-loading 'resources' on Instance uuid 6c10a34e-4126-4e88-ad4d-ba7c407a379e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.498 253542 DEBUG nova.virt.libvirt.vif [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:36:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1875494502',display_name='tempest-ListServerFiltersTestJSON-instance-1875494502',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1875494502',id=72,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:36:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa56d31750374b64b67d1be19bb4e989',ramdisk_id='',reservation_id='r-usfwimbp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1878680398',owner_user_name='tempest-ListServerFiltersTestJSON-1878680398-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:37:01Z,user_data=None,user_id='ee3a6261ded642fa9ef617b29b026d86',uuid=6c10a34e-4126-4e88-ad4d-ba7c407a379e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.499 253542 DEBUG nova.network.os_vif_util [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converting VIF {"id": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "address": "fa:16:3e:a4:57:05", "network": {"id": "30bd3cca-f71b-4541-ae95-8d0eb4dfe470", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-913488754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa56d31750374b64b67d1be19bb4e989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bb4cdd1-80", "ovs_interfaceid": "8bb4cdd1-8082-4ad3-9350-be7270fb373b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.500 253542 DEBUG nova.network.os_vif_util [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:57:05,bridge_name='br-int',has_traffic_filtering=True,id=8bb4cdd1-8082-4ad3-9350-be7270fb373b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bb4cdd1-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.501 253542 DEBUG os_vif [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:57:05,bridge_name='br-int',has_traffic_filtering=True,id=8bb4cdd1-8082-4ad3-9350-be7270fb373b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bb4cdd1-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.504 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.505 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8bb4cdd1-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.508 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.513 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.516 253542 INFO os_vif [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:57:05,bridge_name='br-int',has_traffic_filtering=True,id=8bb4cdd1-8082-4ad3-9350-be7270fb373b,network=Network(30bd3cca-f71b-4541-ae95-8d0eb4dfe470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bb4cdd1-80')
Nov 25 08:37:23 compute-0 neutron-haproxy-ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470[327701]: [NOTICE]   (327711) : haproxy version is 2.8.14-c23fe91
Nov 25 08:37:23 compute-0 neutron-haproxy-ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470[327701]: [NOTICE]   (327711) : path to executable is /usr/sbin/haproxy
Nov 25 08:37:23 compute-0 neutron-haproxy-ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470[327701]: [WARNING]  (327711) : Exiting Master process...
Nov 25 08:37:23 compute-0 neutron-haproxy-ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470[327701]: [ALERT]    (327711) : Current worker (327714) exited with code 143 (Terminated)
Nov 25 08:37:23 compute-0 neutron-haproxy-ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470[327701]: [WARNING]  (327711) : All workers exited. Exiting... (0)
Nov 25 08:37:23 compute-0 systemd[1]: libpod-53838f3658ab9d32225ce5cb1d0e41cbf77092af4d4545bd0e0dd1501d76a9c0.scope: Deactivated successfully.
Nov 25 08:37:23 compute-0 podman[331851]: 2025-11-25 08:37:23.553826709 +0000 UTC m=+0.127996469 container died 53838f3658ab9d32225ce5cb1d0e41cbf77092af4d4545bd0e0dd1501d76a9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:37:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e28477818bfc2658c000012d9190692087ae09da101bd0facd99ecffda3b6d4-merged.mount: Deactivated successfully.
Nov 25 08:37:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-53838f3658ab9d32225ce5cb1d0e41cbf77092af4d4545bd0e0dd1501d76a9c0-userdata-shm.mount: Deactivated successfully.
Nov 25 08:37:23 compute-0 podman[331851]: 2025-11-25 08:37:23.739992242 +0000 UTC m=+0.314161972 container cleanup 53838f3658ab9d32225ce5cb1d0e41cbf77092af4d4545bd0e0dd1501d76a9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:37:23 compute-0 systemd[1]: libpod-conmon-53838f3658ab9d32225ce5cb1d0e41cbf77092af4d4545bd0e0dd1501d76a9c0.scope: Deactivated successfully.
Nov 25 08:37:23 compute-0 podman[331906]: 2025-11-25 08:37:23.861745171 +0000 UTC m=+0.082563942 container remove 53838f3658ab9d32225ce5cb1d0e41cbf77092af4d4545bd0e0dd1501d76a9c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 08:37:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:23.870 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b1cc0a-2dad-4293-b64c-fbda2aa29605]: (4, ('Tue Nov 25 08:37:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470 (53838f3658ab9d32225ce5cb1d0e41cbf77092af4d4545bd0e0dd1501d76a9c0)\n53838f3658ab9d32225ce5cb1d0e41cbf77092af4d4545bd0e0dd1501d76a9c0\nTue Nov 25 08:37:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470 (53838f3658ab9d32225ce5cb1d0e41cbf77092af4d4545bd0e0dd1501d76a9c0)\n53838f3658ab9d32225ce5cb1d0e41cbf77092af4d4545bd0e0dd1501d76a9c0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:23.871 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[37b51113-192f-4d35-b8b9-9065cdff883d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:23.872 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30bd3cca-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:23 compute-0 kernel: tap30bd3cca-f0: left promiscuous mode
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.885 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:23.885 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e5238bce-2d26-4580-979c-b071c5afba37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:23.898 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1b15de2e-7e20-45a3-bc62-6635a22ef5d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:23.899 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fbb458a0-5e66-4c1a-9941-fe488b816ee0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:23 compute-0 nova_compute[253538]: 2025-11-25 08:37:23.907 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:23.921 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[72792854-9195-4653-9274-1b5866481977]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514250, 'reachable_time': 33451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331922, 'error': None, 'target': 'ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:23.924 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-30bd3cca-f71b-4541-ae95-8d0eb4dfe470 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:37:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:23.924 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[7f781412-1476-482a-9304-068f4643f2aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d30bd3cca\x2df71b\x2d4541\x2dae95\x2d8d0eb4dfe470.mount: Deactivated successfully.
Nov 25 08:37:24 compute-0 nova_compute[253538]: 2025-11-25 08:37:24.104 253542 INFO nova.virt.libvirt.driver [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Deleting instance files /var/lib/nova/instances/6c10a34e-4126-4e88-ad4d-ba7c407a379e_del
Nov 25 08:37:24 compute-0 nova_compute[253538]: 2025-11-25 08:37:24.105 253542 INFO nova.virt.libvirt.driver [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Deletion of /var/lib/nova/instances/6c10a34e-4126-4e88-ad4d-ba7c407a379e_del complete
Nov 25 08:37:24 compute-0 nova_compute[253538]: 2025-11-25 08:37:24.168 253542 INFO nova.compute.manager [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Took 0.93 seconds to destroy the instance on the hypervisor.
Nov 25 08:37:24 compute-0 nova_compute[253538]: 2025-11-25 08:37:24.168 253542 DEBUG oslo.service.loopingcall [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:37:24 compute-0 nova_compute[253538]: 2025-11-25 08:37:24.169 253542 DEBUG nova.compute.manager [-] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:37:24 compute-0 nova_compute[253538]: 2025-11-25 08:37:24.169 253542 DEBUG nova.network.neutron [-] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:37:24 compute-0 ceph-mon[75015]: pgmap v1631: 321 pgs: 321 active+clean; 417 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 292 op/s
Nov 25 08:37:24 compute-0 nova_compute[253538]: 2025-11-25 08:37:24.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:37:24 compute-0 nova_compute[253538]: 2025-11-25 08:37:24.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:37:24 compute-0 nova_compute[253538]: 2025-11-25 08:37:24.869 253542 DEBUG nova.network.neutron [-] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:24 compute-0 nova_compute[253538]: 2025-11-25 08:37:24.886 253542 DEBUG nova.compute.manager [req-d8b8c892-03af-4958-a43e-3920b5c3aaf4 req-3d7d21cb-7b17-4e6b-a124-2eae10448dac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received event network-vif-deleted-8bb4cdd1-8082-4ad3-9350-be7270fb373b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:24 compute-0 nova_compute[253538]: 2025-11-25 08:37:24.887 253542 INFO nova.compute.manager [req-d8b8c892-03af-4958-a43e-3920b5c3aaf4 req-3d7d21cb-7b17-4e6b-a124-2eae10448dac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Neutron deleted interface 8bb4cdd1-8082-4ad3-9350-be7270fb373b; detaching it from the instance and deleting it from the info cache
Nov 25 08:37:24 compute-0 nova_compute[253538]: 2025-11-25 08:37:24.888 253542 DEBUG nova.network.neutron [req-d8b8c892-03af-4958-a43e-3920b5c3aaf4 req-3d7d21cb-7b17-4e6b-a124-2eae10448dac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:24 compute-0 nova_compute[253538]: 2025-11-25 08:37:24.894 253542 INFO nova.compute.manager [-] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Took 0.73 seconds to deallocate network for instance.
Nov 25 08:37:24 compute-0 nova_compute[253538]: 2025-11-25 08:37:24.903 253542 DEBUG nova.compute.manager [req-d8b8c892-03af-4958-a43e-3920b5c3aaf4 req-3d7d21cb-7b17-4e6b-a124-2eae10448dac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Detach interface failed, port_id=8bb4cdd1-8082-4ad3-9350-be7270fb373b, reason: Instance 6c10a34e-4126-4e88-ad4d-ba7c407a379e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 08:37:24 compute-0 nova_compute[253538]: 2025-11-25 08:37:24.934 253542 DEBUG oslo_concurrency.lockutils [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:24 compute-0 nova_compute[253538]: 2025-11-25 08:37:24.935 253542 DEBUG oslo_concurrency.lockutils [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:25 compute-0 kernel: tapf3740887-84 (unregistering): left promiscuous mode
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.172 253542 DEBUG nova.compute.manager [req-283f9650-2a1c-4cf6-898b-f1c99d7622fb req-3549263b-96a6-4dee-a95a-6851e8abc04a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received event network-vif-unplugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:25 compute-0 NetworkManager[48915]: <info>  [1764059845.1743] device (tapf3740887-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.173 253542 DEBUG oslo_concurrency.lockutils [req-283f9650-2a1c-4cf6-898b-f1c99d7622fb req-3549263b-96a6-4dee-a95a-6851e8abc04a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.176 253542 DEBUG oslo_concurrency.lockutils [req-283f9650-2a1c-4cf6-898b-f1c99d7622fb req-3549263b-96a6-4dee-a95a-6851e8abc04a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.176 253542 DEBUG oslo_concurrency.lockutils [req-283f9650-2a1c-4cf6-898b-f1c99d7622fb req-3549263b-96a6-4dee-a95a-6851e8abc04a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.177 253542 DEBUG nova.compute.manager [req-283f9650-2a1c-4cf6-898b-f1c99d7622fb req-3549263b-96a6-4dee-a95a-6851e8abc04a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] No waiting events found dispatching network-vif-unplugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.178 253542 WARNING nova.compute.manager [req-283f9650-2a1c-4cf6-898b-f1c99d7622fb req-3549263b-96a6-4dee-a95a-6851e8abc04a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received unexpected event network-vif-unplugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b for instance with vm_state deleted and task_state None.
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.179 253542 DEBUG nova.compute.manager [req-283f9650-2a1c-4cf6-898b-f1c99d7622fb req-3549263b-96a6-4dee-a95a-6851e8abc04a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received event network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.179 253542 DEBUG oslo_concurrency.lockutils [req-283f9650-2a1c-4cf6-898b-f1c99d7622fb req-3549263b-96a6-4dee-a95a-6851e8abc04a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.180 253542 DEBUG oslo_concurrency.lockutils [req-283f9650-2a1c-4cf6-898b-f1c99d7622fb req-3549263b-96a6-4dee-a95a-6851e8abc04a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.180 253542 DEBUG oslo_concurrency.lockutils [req-283f9650-2a1c-4cf6-898b-f1c99d7622fb req-3549263b-96a6-4dee-a95a-6851e8abc04a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.180 253542 DEBUG nova.compute.manager [req-283f9650-2a1c-4cf6-898b-f1c99d7622fb req-3549263b-96a6-4dee-a95a-6851e8abc04a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] No waiting events found dispatching network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.181 253542 WARNING nova.compute.manager [req-283f9650-2a1c-4cf6-898b-f1c99d7622fb req-3549263b-96a6-4dee-a95a-6851e8abc04a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Received unexpected event network-vif-plugged-8bb4cdd1-8082-4ad3-9350-be7270fb373b for instance with vm_state deleted and task_state None.
Nov 25 08:37:25 compute-0 ovn_controller[152859]: 2025-11-25T08:37:25Z|00744|binding|INFO|Releasing lport f3740887-8427-4858-b3e7-5c15f52a2484 from this chassis (sb_readonly=0)
Nov 25 08:37:25 compute-0 ovn_controller[152859]: 2025-11-25T08:37:25Z|00745|binding|INFO|Setting lport f3740887-8427-4858-b3e7-5c15f52a2484 down in Southbound
Nov 25 08:37:25 compute-0 ovn_controller[152859]: 2025-11-25T08:37:25Z|00746|binding|INFO|Removing iface tapf3740887-84 ovn-installed in OVS
Nov 25 08:37:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:25.193 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:5b:b8 10.100.0.8'], port_security=['fa:16:3e:f9:5b:b8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '02e92d65-4521-4c60-bed4-2e8fc4d243e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b2becc8-2b4e-4727-a105-454acfe44b89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '488c6d53000c47848dba6b7be6b4ff40', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f93c15b0-dfea-420d-abd0-8856b9b0a2b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b19baa8b-ade0-4619-a5ed-df7feb0c6cc0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f3740887-8427-4858-b3e7-5c15f52a2484) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:25.194 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f3740887-8427-4858-b3e7-5c15f52a2484 in datapath 6b2becc8-2b4e-4727-a105-454acfe44b89 unbound from our chassis
Nov 25 08:37:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:25.194 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6b2becc8-2b4e-4727-a105-454acfe44b89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 08:37:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:25.195 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a0687c50-b5c7-4885-9fd3-3026a049a172]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.198 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.209 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.229 253542 DEBUG oslo_concurrency.processutils [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:25 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Nov 25 08:37:25 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004e.scope: Consumed 13.976s CPU time.
Nov 25 08:37:25 compute-0 systemd-machined[215790]: Machine qemu-93-instance-0000004e terminated.
Nov 25 08:37:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 321 active+clean; 393 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 273 op/s
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:37:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3137769014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.713 253542 DEBUG oslo_concurrency.processutils [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.719 253542 DEBUG nova.compute.provider_tree [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.737 253542 DEBUG nova.scheduler.client.report [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.744 253542 INFO nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Instance shutdown successfully after 13 seconds.
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.750 253542 INFO nova.virt.libvirt.driver [-] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Instance destroyed successfully.
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.750 253542 DEBUG nova.objects.instance [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'numa_topology' on Instance uuid 02e92d65-4521-4c60-bed4-2e8fc4d243e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.762 253542 DEBUG oslo_concurrency.lockutils [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.766 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.766 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.766 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.767 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.802 253542 INFO nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Attempting rescue
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.804 253542 DEBUG nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.808 253542 DEBUG nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.808 253542 INFO nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Creating image(s)
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.831 253542 DEBUG nova.storage.rbd_utils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.836 253542 DEBUG nova.objects.instance [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 02e92d65-4521-4c60-bed4-2e8fc4d243e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.838 253542 INFO nova.scheduler.client.report [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Deleted allocations for instance 6c10a34e-4126-4e88-ad4d-ba7c407a379e
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.874 253542 DEBUG nova.storage.rbd_utils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.908 253542 DEBUG nova.storage.rbd_utils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.913 253542 DEBUG oslo_concurrency.processutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.994 253542 DEBUG oslo_concurrency.processutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.995 253542 DEBUG oslo_concurrency.lockutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.996 253542 DEBUG oslo_concurrency.lockutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:25 compute-0 nova_compute[253538]: 2025-11-25 08:37:25.996 253542 DEBUG oslo_concurrency.lockutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.022 253542 DEBUG nova.storage.rbd_utils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.026 253542 DEBUG oslo_concurrency.processutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.065 253542 DEBUG oslo_concurrency.lockutils [None req-035946b2-b87b-4bf8-a9f0-e08caa598a8f ee3a6261ded642fa9ef617b29b026d86 aa56d31750374b64b67d1be19bb4e989 - - default default] Lock "6c10a34e-4126-4e88-ad4d-ba7c407a379e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:37:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2704870207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.196 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.298 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.299 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.303 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.305 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.309 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.309 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.310 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.510 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.512 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3578MB free_disk=59.82711410522461GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.512 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.513 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.548 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:26 compute-0 ceph-mon[75015]: pgmap v1632: 321 pgs: 321 active+clean; 393 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 273 op/s
Nov 25 08:37:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3137769014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2704870207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.584 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.585 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 02e92d65-4521-4c60-bed4-2e8fc4d243e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.585 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance a263c70f-c8ce-4ffc-bd62-595fc2e31593 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.586 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.586 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:37:26 compute-0 nova_compute[253538]: 2025-11-25 08:37:26.660 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:37:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:37:27 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2907043954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.146 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.152 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.166 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.174 253542 DEBUG oslo_concurrency.processutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.174 253542 DEBUG nova.objects.instance [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'migration_context' on Instance uuid 02e92d65-4521-4c60-bed4-2e8fc4d243e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.185 253542 DEBUG nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.186 253542 DEBUG nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Start _get_guest_xml network_info=[{"id": "f3740887-8427-4858-b3e7-5c15f52a2484", "address": "fa:16:3e:f9:5b:b8", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-337160511-network", "vif_mac": "fa:16:3e:f9:5b:b8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3740887-84", "ovs_interfaceid": "f3740887-8427-4858-b3e7-5c15f52a2484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.186 253542 DEBUG nova.objects.instance [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'resources' on Instance uuid 02e92d65-4521-4c60-bed4-2e8fc4d243e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.188 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.189 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.203 253542 WARNING nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.211 253542 DEBUG nova.virt.libvirt.host [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.212 253542 DEBUG nova.virt.libvirt.host [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.215 253542 DEBUG nova.virt.libvirt.host [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.216 253542 DEBUG nova.virt.libvirt.host [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.216 253542 DEBUG nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.217 253542 DEBUG nova.virt.hardware [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.220 253542 DEBUG nova.virt.hardware [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.220 253542 DEBUG nova.virt.hardware [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.220 253542 DEBUG nova.virt.hardware [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.221 253542 DEBUG nova.virt.hardware [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.221 253542 DEBUG nova.virt.hardware [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.221 253542 DEBUG nova.virt.hardware [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.222 253542 DEBUG nova.virt.hardware [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.222 253542 DEBUG nova.virt.hardware [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.222 253542 DEBUG nova.virt.hardware [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.222 253542 DEBUG nova.virt.hardware [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.223 253542 DEBUG nova.objects.instance [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 02e92d65-4521-4c60-bed4-2e8fc4d243e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.236 253542 DEBUG oslo_concurrency.processutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.280 253542 DEBUG nova.compute.manager [req-01f98ba0-934d-45ae-94ee-f7bafc436dc1 req-8348cdea-99cc-441c-8b46-13d622781baf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received event network-vif-unplugged-f3740887-8427-4858-b3e7-5c15f52a2484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.280 253542 DEBUG oslo_concurrency.lockutils [req-01f98ba0-934d-45ae-94ee-f7bafc436dc1 req-8348cdea-99cc-441c-8b46-13d622781baf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.281 253542 DEBUG oslo_concurrency.lockutils [req-01f98ba0-934d-45ae-94ee-f7bafc436dc1 req-8348cdea-99cc-441c-8b46-13d622781baf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.281 253542 DEBUG oslo_concurrency.lockutils [req-01f98ba0-934d-45ae-94ee-f7bafc436dc1 req-8348cdea-99cc-441c-8b46-13d622781baf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.281 253542 DEBUG nova.compute.manager [req-01f98ba0-934d-45ae-94ee-f7bafc436dc1 req-8348cdea-99cc-441c-8b46-13d622781baf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] No waiting events found dispatching network-vif-unplugged-f3740887-8427-4858-b3e7-5c15f52a2484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.282 253542 WARNING nova.compute.manager [req-01f98ba0-934d-45ae-94ee-f7bafc436dc1 req-8348cdea-99cc-441c-8b46-13d622781baf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received unexpected event network-vif-unplugged-f3740887-8427-4858-b3e7-5c15f52a2484 for instance with vm_state active and task_state rescuing.
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.282 253542 DEBUG nova.compute.manager [req-01f98ba0-934d-45ae-94ee-f7bafc436dc1 req-8348cdea-99cc-441c-8b46-13d622781baf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.282 253542 DEBUG oslo_concurrency.lockutils [req-01f98ba0-934d-45ae-94ee-f7bafc436dc1 req-8348cdea-99cc-441c-8b46-13d622781baf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.283 253542 DEBUG oslo_concurrency.lockutils [req-01f98ba0-934d-45ae-94ee-f7bafc436dc1 req-8348cdea-99cc-441c-8b46-13d622781baf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.283 253542 DEBUG oslo_concurrency.lockutils [req-01f98ba0-934d-45ae-94ee-f7bafc436dc1 req-8348cdea-99cc-441c-8b46-13d622781baf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.283 253542 DEBUG nova.compute.manager [req-01f98ba0-934d-45ae-94ee-f7bafc436dc1 req-8348cdea-99cc-441c-8b46-13d622781baf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] No waiting events found dispatching network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.284 253542 WARNING nova.compute.manager [req-01f98ba0-934d-45ae-94ee-f7bafc436dc1 req-8348cdea-99cc-441c-8b46-13d622781baf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received unexpected event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 for instance with vm_state active and task_state rescuing.
Nov 25 08:37:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1633: 321 pgs: 321 active+clean; 374 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 280 op/s
Nov 25 08:37:27 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2907043954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:27 compute-0 ceph-mon[75015]: pgmap v1633: 321 pgs: 321 active+clean; 374 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 280 op/s
Nov 25 08:37:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:37:27 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/912799846' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.687 253542 DEBUG oslo_concurrency.processutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:27 compute-0 nova_compute[253538]: 2025-11-25 08:37:27.689 253542 DEBUG oslo_concurrency.processutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:37:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2577915282' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.121 253542 DEBUG oslo_concurrency.processutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.123 253542 DEBUG oslo_concurrency.processutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.242 253542 DEBUG oslo_concurrency.lockutils [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Acquiring lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.242 253542 DEBUG oslo_concurrency.lockutils [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.243 253542 DEBUG oslo_concurrency.lockutils [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Acquiring lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.243 253542 DEBUG oslo_concurrency.lockutils [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.244 253542 DEBUG oslo_concurrency.lockutils [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.246 253542 INFO nova.compute.manager [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Terminating instance
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.247 253542 DEBUG nova.compute.manager [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.251 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059833.2436523, 89bd5b48-efcd-45aa-98f5-e9d9f8373467 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.252 253542 INFO nova.compute.manager [-] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] VM Stopped (Lifecycle Event)
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.272 253542 DEBUG nova.compute.manager [None req-fcdf9eb8-4d75-456a-abc1-38e054aad96e - - - - - -] [instance: 89bd5b48-efcd-45aa-98f5-e9d9f8373467] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:28 compute-0 kernel: tapa48f5b09-48 (unregistering): left promiscuous mode
Nov 25 08:37:28 compute-0 NetworkManager[48915]: <info>  [1764059848.3000] device (tapa48f5b09-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:37:28 compute-0 ovn_controller[152859]: 2025-11-25T08:37:28Z|00747|binding|INFO|Releasing lport a48f5b09-487c-4713-a697-b97ef4fc6497 from this chassis (sb_readonly=0)
Nov 25 08:37:28 compute-0 ovn_controller[152859]: 2025-11-25T08:37:28Z|00748|binding|INFO|Setting lport a48f5b09-487c-4713-a697-b97ef4fc6497 down in Southbound
Nov 25 08:37:28 compute-0 ovn_controller[152859]: 2025-11-25T08:37:28Z|00749|binding|INFO|Removing iface tapa48f5b09-48 ovn-installed in OVS
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.316 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.318 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:28.322 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:47:69 10.100.0.5'], port_security=['fa:16:3e:10:47:69 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a263c70f-c8ce-4ffc-bd62-595fc2e31593', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57fa20d4-05ed-4242-9dcc-d6ff478d5568', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9c8abf549f8d47eba559c32e8ed0679c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '128a41eb-4333-4ba4-949b-c13cdfb57f36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=068f6901-0a31-4c35-818b-35b51d41e3f0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a48f5b09-487c-4713-a697-b97ef4fc6497) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:28.323 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a48f5b09-487c-4713-a697-b97ef4fc6497 in datapath 57fa20d4-05ed-4242-9dcc-d6ff478d5568 unbound from our chassis
Nov 25 08:37:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:28.324 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 57fa20d4-05ed-4242-9dcc-d6ff478d5568, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:37:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:28.325 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5da6cf64-46fd-4717-92ec-9c23e682d43a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:28.325 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568 namespace which is not needed anymore
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.337 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:28 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Nov 25 08:37:28 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004f.scope: Consumed 6.001s CPU time.
Nov 25 08:37:28 compute-0 systemd-machined[215790]: Machine qemu-95-instance-0000004f terminated.
Nov 25 08:37:28 compute-0 neutron-haproxy-ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568[331815]: [NOTICE]   (331819) : haproxy version is 2.8.14-c23fe91
Nov 25 08:37:28 compute-0 neutron-haproxy-ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568[331815]: [NOTICE]   (331819) : path to executable is /usr/sbin/haproxy
Nov 25 08:37:28 compute-0 neutron-haproxy-ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568[331815]: [WARNING]  (331819) : Exiting Master process...
Nov 25 08:37:28 compute-0 neutron-haproxy-ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568[331815]: [ALERT]    (331819) : Current worker (331822) exited with code 143 (Terminated)
Nov 25 08:37:28 compute-0 neutron-haproxy-ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568[331815]: [WARNING]  (331819) : All workers exited. Exiting... (0)
Nov 25 08:37:28 compute-0 systemd[1]: libpod-7208cd5734dade4ddd47ec7d1a74ad2e2685a95e0d0d002511cc394e38946342.scope: Deactivated successfully.
Nov 25 08:37:28 compute-0 conmon[331815]: conmon 7208cd5734dade4ddd47 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7208cd5734dade4ddd47ec7d1a74ad2e2685a95e0d0d002511cc394e38946342.scope/container/memory.events
Nov 25 08:37:28 compute-0 podman[332189]: 2025-11-25 08:37:28.473874028 +0000 UTC m=+0.050334683 container died 7208cd5734dade4ddd47ec7d1a74ad2e2685a95e0d0d002511cc394e38946342 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.485 253542 INFO nova.virt.libvirt.driver [-] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Instance destroyed successfully.
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.486 253542 DEBUG nova.objects.instance [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lazy-loading 'resources' on Instance uuid a263c70f-c8ce-4ffc-bd62-595fc2e31593 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.498 253542 DEBUG nova.virt.libvirt.vif [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:37:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1704447112',display_name='tempest-ServerMetadataNegativeTestJSON-server-1704447112',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1704447112',id=79,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:37:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9c8abf549f8d47eba559c32e8ed0679c',ramdisk_id='',reservation_id='r-8du0e9l9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-376934399',owner_user_name='tempest-ServerMetadataNegativeTestJSON-376934399-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:37:23Z,user_data=None,user_id='aa221a12ceb248cbac90a621af09d7fb',uuid=a263c70f-c8ce-4ffc-bd62-595fc2e31593,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a48f5b09-487c-4713-a697-b97ef4fc6497", "address": "fa:16:3e:10:47:69", "network": {"id": "57fa20d4-05ed-4242-9dcc-d6ff478d5568", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1144994260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c8abf549f8d47eba559c32e8ed0679c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48f5b09-48", "ovs_interfaceid": "a48f5b09-487c-4713-a697-b97ef4fc6497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.499 253542 DEBUG nova.network.os_vif_util [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Converting VIF {"id": "a48f5b09-487c-4713-a697-b97ef4fc6497", "address": "fa:16:3e:10:47:69", "network": {"id": "57fa20d4-05ed-4242-9dcc-d6ff478d5568", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1144994260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c8abf549f8d47eba559c32e8ed0679c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48f5b09-48", "ovs_interfaceid": "a48f5b09-487c-4713-a697-b97ef4fc6497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.500 253542 DEBUG nova.network.os_vif_util [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:47:69,bridge_name='br-int',has_traffic_filtering=True,id=a48f5b09-487c-4713-a697-b97ef4fc6497,network=Network(57fa20d4-05ed-4242-9dcc-d6ff478d5568),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa48f5b09-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.502 253542 DEBUG os_vif [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:47:69,bridge_name='br-int',has_traffic_filtering=True,id=a48f5b09-487c-4713-a697-b97ef4fc6497,network=Network(57fa20d4-05ed-4242-9dcc-d6ff478d5568),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa48f5b09-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.504 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.505 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa48f5b09-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.506 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7208cd5734dade4ddd47ec7d1a74ad2e2685a95e0d0d002511cc394e38946342-userdata-shm.mount: Deactivated successfully.
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.512 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:37:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-39cece53e3308d40f56648f20327c46ff834bf5478a4e1d2c1b619da5a20ea91-merged.mount: Deactivated successfully.
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.516 253542 INFO os_vif [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:47:69,bridge_name='br-int',has_traffic_filtering=True,id=a48f5b09-487c-4713-a697-b97ef4fc6497,network=Network(57fa20d4-05ed-4242-9dcc-d6ff478d5568),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa48f5b09-48')
Nov 25 08:37:28 compute-0 podman[332189]: 2025-11-25 08:37:28.527506859 +0000 UTC m=+0.103967514 container cleanup 7208cd5734dade4ddd47ec7d1a74ad2e2685a95e0d0d002511cc394e38946342 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 08:37:28 compute-0 systemd[1]: libpod-conmon-7208cd5734dade4ddd47ec7d1a74ad2e2685a95e0d0d002511cc394e38946342.scope: Deactivated successfully.
Nov 25 08:37:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:37:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4144930851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/912799846' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2577915282' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:28 compute-0 ovn_controller[152859]: 2025-11-25T08:37:28Z|00750|binding|INFO|Releasing lport 98ca1f42-63ed-4687-9578-26bb52c667f7 from this chassis (sb_readonly=0)
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.612 253542 DEBUG oslo_concurrency.processutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:28 compute-0 podman[332245]: 2025-11-25 08:37:28.613046421 +0000 UTC m=+0.063840191 container remove 7208cd5734dade4ddd47ec7d1a74ad2e2685a95e0d0d002511cc394e38946342 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.614 253542 DEBUG nova.virt.libvirt.vif [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:36:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1478856521',display_name='tempest-ServerRescueTestJSON-server-1478856521',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1478856521',id=78,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:37:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='488c6d53000c47848dba6b7be6b4ff40',ramdisk_id='',reservation_id='r-zkq6gjzu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-324239197',owner_user_name='tempest-ServerRescueTestJSON-324239197-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:37:06Z,user_data=None,user_id='ad675e78b1b34f1c92c57e42532c3c20',uuid=02e92d65-4521-4c60-bed4-2e8fc4d243e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f3740887-8427-4858-b3e7-5c15f52a2484", "address": "fa:16:3e:f9:5b:b8", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-337160511-network", "vif_mac": "fa:16:3e:f9:5b:b8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3740887-84", "ovs_interfaceid": "f3740887-8427-4858-b3e7-5c15f52a2484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.615 253542 DEBUG nova.network.os_vif_util [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Converting VIF {"id": "f3740887-8427-4858-b3e7-5c15f52a2484", "address": "fa:16:3e:f9:5b:b8", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-337160511-network", "vif_mac": "fa:16:3e:f9:5b:b8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3740887-84", "ovs_interfaceid": "f3740887-8427-4858-b3e7-5c15f52a2484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.615 253542 DEBUG nova.network.os_vif_util [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:5b:b8,bridge_name='br-int',has_traffic_filtering=True,id=f3740887-8427-4858-b3e7-5c15f52a2484,network=Network(6b2becc8-2b4e-4727-a105-454acfe44b89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3740887-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.616 253542 DEBUG nova.objects.instance [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'pci_devices' on Instance uuid 02e92d65-4521-4c60-bed4-2e8fc4d243e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:28.620 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[831d1696-c884-4401-a4a9-a3a684a936f5]: (4, ('Tue Nov 25 08:37:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568 (7208cd5734dade4ddd47ec7d1a74ad2e2685a95e0d0d002511cc394e38946342)\n7208cd5734dade4ddd47ec7d1a74ad2e2685a95e0d0d002511cc394e38946342\nTue Nov 25 08:37:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568 (7208cd5734dade4ddd47ec7d1a74ad2e2685a95e0d0d002511cc394e38946342)\n7208cd5734dade4ddd47ec7d1a74ad2e2685a95e0d0d002511cc394e38946342\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:28.622 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc88fe6-6d75-4a13-8966-251ecdc27cdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:28.623 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57fa20d4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.624 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.628 253542 DEBUG nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:37:28 compute-0 nova_compute[253538]:   <uuid>02e92d65-4521-4c60-bed4-2e8fc4d243e4</uuid>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   <name>instance-0000004e</name>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerRescueTestJSON-server-1478856521</nova:name>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:37:27</nova:creationTime>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:37:28 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:37:28 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:37:28 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:37:28 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:37:28 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:37:28 compute-0 nova_compute[253538]:         <nova:user uuid="ad675e78b1b34f1c92c57e42532c3c20">tempest-ServerRescueTestJSON-324239197-project-member</nova:user>
Nov 25 08:37:28 compute-0 nova_compute[253538]:         <nova:project uuid="488c6d53000c47848dba6b7be6b4ff40">tempest-ServerRescueTestJSON-324239197</nova:project>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:37:28 compute-0 nova_compute[253538]:         <nova:port uuid="f3740887-8427-4858-b3e7-5c15f52a2484">
Nov 25 08:37:28 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <system>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <entry name="serial">02e92d65-4521-4c60-bed4-2e8fc4d243e4</entry>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <entry name="uuid">02e92d65-4521-4c60-bed4-2e8fc4d243e4</entry>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     </system>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   <os>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   </os>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   <features>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   </features>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.rescue">
Nov 25 08:37:28 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       </source>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:37:28 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk">
Nov 25 08:37:28 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       </source>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:37:28 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <target dev="vdb" bus="virtio"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.config.rescue">
Nov 25 08:37:28 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       </source>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:37:28 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:f9:5b:b8"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <target dev="tapf3740887-84"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4/console.log" append="off"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <video>
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     </video>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:37:28 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:37:28 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:37:28 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:37:28 compute-0 nova_compute[253538]: </domain>
Nov 25 08:37:28 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.640 253542 INFO nova.virt.libvirt.driver [-] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Instance destroyed successfully.
Nov 25 08:37:28 compute-0 kernel: tap57fa20d4-00: left promiscuous mode
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.695 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:28.700 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[80276941-27cd-4873-86c0-fce5e240d897]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.702 253542 DEBUG nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.702 253542 DEBUG nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.702 253542 DEBUG nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.702 253542 DEBUG nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] No VIF found with MAC fa:16:3e:f9:5b:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.703 253542 INFO nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Using config drive
Nov 25 08:37:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:28.716 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f04609c3-fe08-4a27-ba62-09c52d98e7de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:28.718 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e55ebf-0065-4cde-a442-e0e9f2ecdf52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.733 253542 DEBUG nova.storage.rbd_utils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:28.737 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f81672e8-ea1a-4864-b6d4-90bf30e73274]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519691, 'reachable_time': 21062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332286, 'error': None, 'target': 'ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.739 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d57fa20d4\x2d05ed\x2d4242\x2d9dcc\x2dd6ff478d5568.mount: Deactivated successfully.
Nov 25 08:37:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:28.742 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-57fa20d4-05ed-4242-9dcc-d6ff478d5568 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:37:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:28.743 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8adc77-de7a-411a-a580-bde3116a8f02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.752 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.758 253542 DEBUG nova.objects.instance [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 02e92d65-4521-4c60-bed4-2e8fc4d243e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.788 253542 DEBUG nova.objects.instance [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'keypairs' on Instance uuid 02e92d65-4521-4c60-bed4-2e8fc4d243e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.863 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059833.8471549, 04d190be-1443-48a9-ad51-3625b65dff6c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.863 253542 INFO nova.compute.manager [-] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] VM Stopped (Lifecycle Event)
Nov 25 08:37:28 compute-0 nova_compute[253538]: 2025-11-25 08:37:28.886 253542 DEBUG nova.compute.manager [None req-8167cc14-1828-406d-b142-5734df67201f - - - - - -] [instance: 04d190be-1443-48a9-ad51-3625b65dff6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:37:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3292569847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:37:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:37:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3292569847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.133 253542 INFO nova.virt.libvirt.driver [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Deleting instance files /var/lib/nova/instances/a263c70f-c8ce-4ffc-bd62-595fc2e31593_del
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.134 253542 INFO nova.virt.libvirt.driver [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Deletion of /var/lib/nova/instances/a263c70f-c8ce-4ffc-bd62-595fc2e31593_del complete
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.197 253542 INFO nova.compute.manager [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Took 0.95 seconds to destroy the instance on the hypervisor.
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.197 253542 DEBUG oslo.service.loopingcall [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.197 253542 DEBUG nova.compute.manager [-] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.198 253542 DEBUG nova.network.neutron [-] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.206 253542 INFO nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Creating config drive at /var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4/disk.config.rescue
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.211 253542 DEBUG oslo_concurrency.processutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1wxeq29d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1634: 321 pgs: 321 active+clean; 355 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 261 op/s
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.359 253542 DEBUG nova.compute.manager [req-ae3d7661-108c-44af-9ddf-eb0379a9de54 req-fc61cdf4-4615-4394-af56-0ede6e9255d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Received event network-vif-unplugged-a48f5b09-487c-4713-a697-b97ef4fc6497 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.359 253542 DEBUG oslo_concurrency.lockutils [req-ae3d7661-108c-44af-9ddf-eb0379a9de54 req-fc61cdf4-4615-4394-af56-0ede6e9255d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.359 253542 DEBUG oslo_concurrency.lockutils [req-ae3d7661-108c-44af-9ddf-eb0379a9de54 req-fc61cdf4-4615-4394-af56-0ede6e9255d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.360 253542 DEBUG oslo_concurrency.lockutils [req-ae3d7661-108c-44af-9ddf-eb0379a9de54 req-fc61cdf4-4615-4394-af56-0ede6e9255d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.360 253542 DEBUG nova.compute.manager [req-ae3d7661-108c-44af-9ddf-eb0379a9de54 req-fc61cdf4-4615-4394-af56-0ede6e9255d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] No waiting events found dispatching network-vif-unplugged-a48f5b09-487c-4713-a697-b97ef4fc6497 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.360 253542 DEBUG nova.compute.manager [req-ae3d7661-108c-44af-9ddf-eb0379a9de54 req-fc61cdf4-4615-4394-af56-0ede6e9255d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Received event network-vif-unplugged-a48f5b09-487c-4713-a697-b97ef4fc6497 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.360 253542 DEBUG nova.compute.manager [req-ae3d7661-108c-44af-9ddf-eb0379a9de54 req-fc61cdf4-4615-4394-af56-0ede6e9255d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Received event network-vif-plugged-a48f5b09-487c-4713-a697-b97ef4fc6497 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.361 253542 DEBUG oslo_concurrency.lockutils [req-ae3d7661-108c-44af-9ddf-eb0379a9de54 req-fc61cdf4-4615-4394-af56-0ede6e9255d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.361 253542 DEBUG oslo_concurrency.lockutils [req-ae3d7661-108c-44af-9ddf-eb0379a9de54 req-fc61cdf4-4615-4394-af56-0ede6e9255d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.361 253542 DEBUG oslo_concurrency.lockutils [req-ae3d7661-108c-44af-9ddf-eb0379a9de54 req-fc61cdf4-4615-4394-af56-0ede6e9255d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.361 253542 DEBUG nova.compute.manager [req-ae3d7661-108c-44af-9ddf-eb0379a9de54 req-fc61cdf4-4615-4394-af56-0ede6e9255d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] No waiting events found dispatching network-vif-plugged-a48f5b09-487c-4713-a697-b97ef4fc6497 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.361 253542 WARNING nova.compute.manager [req-ae3d7661-108c-44af-9ddf-eb0379a9de54 req-fc61cdf4-4615-4394-af56-0ede6e9255d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Received unexpected event network-vif-plugged-a48f5b09-487c-4713-a697-b97ef4fc6497 for instance with vm_state active and task_state deleting.
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.362 253542 DEBUG oslo_concurrency.processutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1wxeq29d" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.388 253542 DEBUG nova.storage.rbd_utils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] rbd image 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.392 253542 DEBUG oslo_concurrency.processutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4/disk.config.rescue 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.549 253542 DEBUG oslo_concurrency.processutils [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4/disk.config.rescue 02e92d65-4521-4c60-bed4-2e8fc4d243e4_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.551 253542 INFO nova.virt.libvirt.driver [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Deleting local config drive /var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4/disk.config.rescue because it was imported into RBD.
Nov 25 08:37:29 compute-0 systemd-udevd[332170]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:37:29 compute-0 kernel: tapf3740887-84: entered promiscuous mode
Nov 25 08:37:29 compute-0 NetworkManager[48915]: <info>  [1764059849.5923] manager: (tapf3740887-84): new Tun device (/org/freedesktop/NetworkManager/Devices/329)
Nov 25 08:37:29 compute-0 ovn_controller[152859]: 2025-11-25T08:37:29Z|00751|binding|INFO|Claiming lport f3740887-8427-4858-b3e7-5c15f52a2484 for this chassis.
Nov 25 08:37:29 compute-0 ovn_controller[152859]: 2025-11-25T08:37:29Z|00752|binding|INFO|f3740887-8427-4858-b3e7-5c15f52a2484: Claiming fa:16:3e:f9:5b:b8 10.100.0.8
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.594 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:29.599 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:5b:b8 10.100.0.8'], port_security=['fa:16:3e:f9:5b:b8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '02e92d65-4521-4c60-bed4-2e8fc4d243e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b2becc8-2b4e-4727-a105-454acfe44b89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '488c6d53000c47848dba6b7be6b4ff40', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f93c15b0-dfea-420d-abd0-8856b9b0a2b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b19baa8b-ade0-4619-a5ed-df7feb0c6cc0, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f3740887-8427-4858-b3e7-5c15f52a2484) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4144930851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3292569847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:37:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3292569847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:37:29 compute-0 ceph-mon[75015]: pgmap v1634: 321 pgs: 321 active+clean; 355 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 261 op/s
Nov 25 08:37:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:29.600 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f3740887-8427-4858-b3e7-5c15f52a2484 in datapath 6b2becc8-2b4e-4727-a105-454acfe44b89 bound to our chassis
Nov 25 08:37:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:29.601 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6b2becc8-2b4e-4727-a105-454acfe44b89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 08:37:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:29.601 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[453fcee3-204b-4277-9d27-d87b5547ade2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:29 compute-0 NetworkManager[48915]: <info>  [1764059849.6031] device (tapf3740887-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:37:29 compute-0 NetworkManager[48915]: <info>  [1764059849.6043] device (tapf3740887-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:37:29 compute-0 systemd-machined[215790]: New machine qemu-96-instance-0000004e.
Nov 25 08:37:29 compute-0 ovn_controller[152859]: 2025-11-25T08:37:29Z|00753|binding|INFO|Setting lport f3740887-8427-4858-b3e7-5c15f52a2484 up in Southbound
Nov 25 08:37:29 compute-0 ovn_controller[152859]: 2025-11-25T08:37:29Z|00754|binding|INFO|Setting lport f3740887-8427-4858-b3e7-5c15f52a2484 ovn-installed in OVS
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.633 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:29 compute-0 nova_compute[253538]: 2025-11-25 08:37:29.638 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:29 compute-0 systemd[1]: Started Virtual Machine qemu-96-instance-0000004e.
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.046 253542 DEBUG nova.network.neutron [-] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.065 253542 INFO nova.compute.manager [-] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Took 0.87 seconds to deallocate network for instance.
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.083 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 02e92d65-4521-4c60-bed4-2e8fc4d243e4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.084 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059850.0825717, 02e92d65-4521-4c60-bed4-2e8fc4d243e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.084 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] VM Resumed (Lifecycle Event)
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.089 253542 DEBUG nova.compute.manager [None req-fc5cfd97-dd86-4bb2-b2e4-61775f981e42 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.122 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.124 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.127 253542 DEBUG oslo_concurrency.lockutils [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.127 253542 DEBUG oslo_concurrency.lockutils [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.150 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.151 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059850.084167, 02e92d65-4521-4c60-bed4-2e8fc4d243e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.151 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] VM Started (Lifecycle Event)
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.167 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.170 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.190 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.217 253542 DEBUG nova.compute.manager [req-7755e520-ff93-4d80-9ea7-d84fbac7b1a8 req-2afd3b89-a862-40c7-ab9b-3d487e175df7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Received event network-vif-deleted-a48f5b09-487c-4713-a697-b97ef4fc6497 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.219 253542 DEBUG oslo_concurrency.processutils [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:37:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3522340185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.737 253542 DEBUG oslo_concurrency.processutils [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.746 253542 DEBUG nova.compute.provider_tree [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.765 253542 DEBUG nova.scheduler.client.report [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:37:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3522340185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.794 253542 DEBUG oslo_concurrency.lockutils [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.828 253542 INFO nova.scheduler.client.report [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Deleted allocations for instance a263c70f-c8ce-4ffc-bd62-595fc2e31593
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.898 253542 DEBUG oslo_concurrency.lockutils [None req-51dbdde3-48d3-4be9-9a01-0a756679dd92 aa221a12ceb248cbac90a621af09d7fb 9c8abf549f8d47eba559c32e8ed0679c - - default default] Lock "a263c70f-c8ce-4ffc-bd62-595fc2e31593" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.966 253542 INFO nova.compute.manager [None req-79172d6a-a3a8-4d9c-a306-09931b8ae528 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Unrescuing
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.967 253542 DEBUG oslo_concurrency.lockutils [None req-79172d6a-a3a8-4d9c-a306-09931b8ae528 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "refresh_cache-02e92d65-4521-4c60-bed4-2e8fc4d243e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.967 253542 DEBUG oslo_concurrency.lockutils [None req-79172d6a-a3a8-4d9c-a306-09931b8ae528 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquired lock "refresh_cache-02e92d65-4521-4c60-bed4-2e8fc4d243e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:37:30 compute-0 nova_compute[253538]: 2025-11-25 08:37:30.968 253542 DEBUG nova.network.neutron [None req-79172d6a-a3a8-4d9c-a306-09931b8ae528 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:37:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 372 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 255 op/s
Nov 25 08:37:31 compute-0 nova_compute[253538]: 2025-11-25 08:37:31.463 253542 DEBUG nova.compute.manager [req-9f83e87b-1e9c-4a15-839a-b2a416e042ed req-e85d4291-93d2-464a-a6fe-9b632242cc20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:31 compute-0 nova_compute[253538]: 2025-11-25 08:37:31.464 253542 DEBUG oslo_concurrency.lockutils [req-9f83e87b-1e9c-4a15-839a-b2a416e042ed req-e85d4291-93d2-464a-a6fe-9b632242cc20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:31 compute-0 nova_compute[253538]: 2025-11-25 08:37:31.464 253542 DEBUG oslo_concurrency.lockutils [req-9f83e87b-1e9c-4a15-839a-b2a416e042ed req-e85d4291-93d2-464a-a6fe-9b632242cc20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:31 compute-0 nova_compute[253538]: 2025-11-25 08:37:31.464 253542 DEBUG oslo_concurrency.lockutils [req-9f83e87b-1e9c-4a15-839a-b2a416e042ed req-e85d4291-93d2-464a-a6fe-9b632242cc20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:31 compute-0 nova_compute[253538]: 2025-11-25 08:37:31.464 253542 DEBUG nova.compute.manager [req-9f83e87b-1e9c-4a15-839a-b2a416e042ed req-e85d4291-93d2-464a-a6fe-9b632242cc20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] No waiting events found dispatching network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:31 compute-0 nova_compute[253538]: 2025-11-25 08:37:31.465 253542 WARNING nova.compute.manager [req-9f83e87b-1e9c-4a15-839a-b2a416e042ed req-e85d4291-93d2-464a-a6fe-9b632242cc20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received unexpected event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 for instance with vm_state rescued and task_state unrescuing.
Nov 25 08:37:31 compute-0 nova_compute[253538]: 2025-11-25 08:37:31.465 253542 DEBUG nova.compute.manager [req-9f83e87b-1e9c-4a15-839a-b2a416e042ed req-e85d4291-93d2-464a-a6fe-9b632242cc20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:31 compute-0 nova_compute[253538]: 2025-11-25 08:37:31.465 253542 DEBUG oslo_concurrency.lockutils [req-9f83e87b-1e9c-4a15-839a-b2a416e042ed req-e85d4291-93d2-464a-a6fe-9b632242cc20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:31 compute-0 nova_compute[253538]: 2025-11-25 08:37:31.465 253542 DEBUG oslo_concurrency.lockutils [req-9f83e87b-1e9c-4a15-839a-b2a416e042ed req-e85d4291-93d2-464a-a6fe-9b632242cc20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:31 compute-0 nova_compute[253538]: 2025-11-25 08:37:31.465 253542 DEBUG oslo_concurrency.lockutils [req-9f83e87b-1e9c-4a15-839a-b2a416e042ed req-e85d4291-93d2-464a-a6fe-9b632242cc20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:31 compute-0 nova_compute[253538]: 2025-11-25 08:37:31.466 253542 DEBUG nova.compute.manager [req-9f83e87b-1e9c-4a15-839a-b2a416e042ed req-e85d4291-93d2-464a-a6fe-9b632242cc20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] No waiting events found dispatching network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:31 compute-0 nova_compute[253538]: 2025-11-25 08:37:31.466 253542 WARNING nova.compute.manager [req-9f83e87b-1e9c-4a15-839a-b2a416e042ed req-e85d4291-93d2-464a-a6fe-9b632242cc20 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received unexpected event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 for instance with vm_state rescued and task_state unrescuing.
Nov 25 08:37:31 compute-0 nova_compute[253538]: 2025-11-25 08:37:31.550 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:37:31 compute-0 ceph-mon[75015]: pgmap v1635: 321 pgs: 321 active+clean; 372 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 255 op/s
Nov 25 08:37:32 compute-0 nova_compute[253538]: 2025-11-25 08:37:32.433 253542 DEBUG nova.network.neutron [None req-79172d6a-a3a8-4d9c-a306-09931b8ae528 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Updating instance_info_cache with network_info: [{"id": "f3740887-8427-4858-b3e7-5c15f52a2484", "address": "fa:16:3e:f9:5b:b8", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3740887-84", "ovs_interfaceid": "f3740887-8427-4858-b3e7-5c15f52a2484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:32 compute-0 nova_compute[253538]: 2025-11-25 08:37:32.444 253542 DEBUG oslo_concurrency.lockutils [None req-79172d6a-a3a8-4d9c-a306-09931b8ae528 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Releasing lock "refresh_cache-02e92d65-4521-4c60-bed4-2e8fc4d243e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:37:32 compute-0 nova_compute[253538]: 2025-11-25 08:37:32.445 253542 DEBUG nova.objects.instance [None req-79172d6a-a3a8-4d9c-a306-09931b8ae528 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'flavor' on Instance uuid 02e92d65-4521-4c60-bed4-2e8fc4d243e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:32 compute-0 kernel: tapf3740887-84 (unregistering): left promiscuous mode
Nov 25 08:37:32 compute-0 NetworkManager[48915]: <info>  [1764059852.5166] device (tapf3740887-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:37:32 compute-0 ovn_controller[152859]: 2025-11-25T08:37:32Z|00755|binding|INFO|Releasing lport f3740887-8427-4858-b3e7-5c15f52a2484 from this chassis (sb_readonly=0)
Nov 25 08:37:32 compute-0 ovn_controller[152859]: 2025-11-25T08:37:32Z|00756|binding|INFO|Setting lport f3740887-8427-4858-b3e7-5c15f52a2484 down in Southbound
Nov 25 08:37:32 compute-0 nova_compute[253538]: 2025-11-25 08:37:32.523 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:32 compute-0 ovn_controller[152859]: 2025-11-25T08:37:32Z|00757|binding|INFO|Removing iface tapf3740887-84 ovn-installed in OVS
Nov 25 08:37:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:32.542 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:5b:b8 10.100.0.8'], port_security=['fa:16:3e:f9:5b:b8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '02e92d65-4521-4c60-bed4-2e8fc4d243e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b2becc8-2b4e-4727-a105-454acfe44b89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '488c6d53000c47848dba6b7be6b4ff40', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f93c15b0-dfea-420d-abd0-8856b9b0a2b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b19baa8b-ade0-4619-a5ed-df7feb0c6cc0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f3740887-8427-4858-b3e7-5c15f52a2484) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:32 compute-0 nova_compute[253538]: 2025-11-25 08:37:32.543 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:32.544 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f3740887-8427-4858-b3e7-5c15f52a2484 in datapath 6b2becc8-2b4e-4727-a105-454acfe44b89 unbound from our chassis
Nov 25 08:37:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:32.544 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6b2becc8-2b4e-4727-a105-454acfe44b89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 08:37:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:32.546 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5c49bec4-d479-441b-ad95-d41a99899697]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:32 compute-0 nova_compute[253538]: 2025-11-25 08:37:32.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:37:32 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Nov 25 08:37:32 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d0000004e.scope: Consumed 2.873s CPU time.
Nov 25 08:37:32 compute-0 systemd-machined[215790]: Machine qemu-96-instance-0000004e terminated.
Nov 25 08:37:32 compute-0 nova_compute[253538]: 2025-11-25 08:37:32.592 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059837.5911794, b62dacb0-2605-4b3f-b00a-9ecf5d2728f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:32 compute-0 nova_compute[253538]: 2025-11-25 08:37:32.593 253542 INFO nova.compute.manager [-] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] VM Stopped (Lifecycle Event)
Nov 25 08:37:32 compute-0 nova_compute[253538]: 2025-11-25 08:37:32.608 253542 DEBUG nova.compute.manager [None req-fa13c997-6089-4a0a-af62-8fcb08c654f6 - - - - - -] [instance: b62dacb0-2605-4b3f-b00a-9ecf5d2728f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:32 compute-0 nova_compute[253538]: 2025-11-25 08:37:32.707 253542 INFO nova.virt.libvirt.driver [-] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Instance destroyed successfully.
Nov 25 08:37:32 compute-0 nova_compute[253538]: 2025-11-25 08:37:32.707 253542 DEBUG nova.objects.instance [None req-79172d6a-a3a8-4d9c-a306-09931b8ae528 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'numa_topology' on Instance uuid 02e92d65-4521-4c60-bed4-2e8fc4d243e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:32 compute-0 systemd-udevd[332434]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:37:32 compute-0 kernel: tapf3740887-84: entered promiscuous mode
Nov 25 08:37:32 compute-0 NetworkManager[48915]: <info>  [1764059852.8015] manager: (tapf3740887-84): new Tun device (/org/freedesktop/NetworkManager/Devices/330)
Nov 25 08:37:32 compute-0 nova_compute[253538]: 2025-11-25 08:37:32.802 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:32 compute-0 ovn_controller[152859]: 2025-11-25T08:37:32Z|00758|binding|INFO|Claiming lport f3740887-8427-4858-b3e7-5c15f52a2484 for this chassis.
Nov 25 08:37:32 compute-0 ovn_controller[152859]: 2025-11-25T08:37:32Z|00759|binding|INFO|f3740887-8427-4858-b3e7-5c15f52a2484: Claiming fa:16:3e:f9:5b:b8 10.100.0.8
Nov 25 08:37:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:32.809 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:5b:b8 10.100.0.8'], port_security=['fa:16:3e:f9:5b:b8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '02e92d65-4521-4c60-bed4-2e8fc4d243e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b2becc8-2b4e-4727-a105-454acfe44b89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '488c6d53000c47848dba6b7be6b4ff40', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f93c15b0-dfea-420d-abd0-8856b9b0a2b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b19baa8b-ade0-4619-a5ed-df7feb0c6cc0, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f3740887-8427-4858-b3e7-5c15f52a2484) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:32.811 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f3740887-8427-4858-b3e7-5c15f52a2484 in datapath 6b2becc8-2b4e-4727-a105-454acfe44b89 bound to our chassis
Nov 25 08:37:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:32.812 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6b2becc8-2b4e-4727-a105-454acfe44b89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 08:37:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:32.812 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dca1ec0a-826e-4b87-aa55-2c5cebf858ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:32 compute-0 NetworkManager[48915]: <info>  [1764059852.8196] device (tapf3740887-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:37:32 compute-0 ovn_controller[152859]: 2025-11-25T08:37:32Z|00760|binding|INFO|Setting lport f3740887-8427-4858-b3e7-5c15f52a2484 ovn-installed in OVS
Nov 25 08:37:32 compute-0 ovn_controller[152859]: 2025-11-25T08:37:32Z|00761|binding|INFO|Setting lport f3740887-8427-4858-b3e7-5c15f52a2484 up in Southbound
Nov 25 08:37:32 compute-0 NetworkManager[48915]: <info>  [1764059852.8215] device (tapf3740887-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:37:32 compute-0 nova_compute[253538]: 2025-11-25 08:37:32.822 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:32 compute-0 nova_compute[253538]: 2025-11-25 08:37:32.825 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:32 compute-0 systemd-machined[215790]: New machine qemu-97-instance-0000004e.
Nov 25 08:37:32 compute-0 systemd[1]: Started Virtual Machine qemu-97-instance-0000004e.
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.222 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059838.2218952, 806e081d-6b1a-4909-be7c-5490c631ebfe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.223 253542 INFO nova.compute.manager [-] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] VM Stopped (Lifecycle Event)
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.240 253542 DEBUG nova.compute.manager [None req-9d40cc3e-25bb-4193-a14b-eebcc2e5bb00 - - - - - -] [instance: 806e081d-6b1a-4909-be7c-5490c631ebfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1636: 321 pgs: 321 active+clean; 341 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.3 MiB/s wr, 244 op/s
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.562 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.571 253542 DEBUG nova.compute.manager [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received event network-vif-unplugged-f3740887-8427-4858-b3e7-5c15f52a2484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.571 253542 DEBUG oslo_concurrency.lockutils [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.571 253542 DEBUG oslo_concurrency.lockutils [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.571 253542 DEBUG oslo_concurrency.lockutils [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.572 253542 DEBUG nova.compute.manager [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] No waiting events found dispatching network-vif-unplugged-f3740887-8427-4858-b3e7-5c15f52a2484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.572 253542 WARNING nova.compute.manager [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received unexpected event network-vif-unplugged-f3740887-8427-4858-b3e7-5c15f52a2484 for instance with vm_state rescued and task_state unrescuing.
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.572 253542 DEBUG nova.compute.manager [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.572 253542 DEBUG oslo_concurrency.lockutils [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.572 253542 DEBUG oslo_concurrency.lockutils [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.573 253542 DEBUG oslo_concurrency.lockutils [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.573 253542 DEBUG nova.compute.manager [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] No waiting events found dispatching network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.573 253542 WARNING nova.compute.manager [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received unexpected event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 for instance with vm_state rescued and task_state unrescuing.
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.573 253542 DEBUG nova.compute.manager [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.573 253542 DEBUG oslo_concurrency.lockutils [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.574 253542 DEBUG oslo_concurrency.lockutils [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.574 253542 DEBUG oslo_concurrency.lockutils [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.574 253542 DEBUG nova.compute.manager [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] No waiting events found dispatching network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.574 253542 WARNING nova.compute.manager [req-ad9bc8bf-52cb-4d66-9d2a-1f68e34e842e req-de7ad211-2210-4de5-98e3-e6935b3bbe49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received unexpected event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 for instance with vm_state rescued and task_state unrescuing.
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.836 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 02e92d65-4521-4c60-bed4-2e8fc4d243e4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.837 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059853.8358922, 02e92d65-4521-4c60-bed4-2e8fc4d243e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.837 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] VM Resumed (Lifecycle Event)
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.868 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.874 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.892 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] During sync_power_state the instance has a pending task (unrescuing). Skip.
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.893 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059853.8398209, 02e92d65-4521-4c60-bed4-2e8fc4d243e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.893 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] VM Started (Lifecycle Event)
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.912 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.916 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:33 compute-0 nova_compute[253538]: 2025-11-25 08:37:33.933 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] During sync_power_state the instance has a pending task (unrescuing). Skip.
Nov 25 08:37:34 compute-0 nova_compute[253538]: 2025-11-25 08:37:34.313 253542 DEBUG nova.compute.manager [None req-79172d6a-a3a8-4d9c-a306-09931b8ae528 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:34 compute-0 ceph-mon[75015]: pgmap v1636: 321 pgs: 321 active+clean; 341 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.3 MiB/s wr, 244 op/s
Nov 25 08:37:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1637: 321 pgs: 321 active+clean; 312 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 250 op/s
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.349 253542 DEBUG oslo_concurrency.lockutils [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.351 253542 DEBUG oslo_concurrency.lockutils [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.351 253542 DEBUG oslo_concurrency.lockutils [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.351 253542 DEBUG oslo_concurrency.lockutils [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.352 253542 DEBUG oslo_concurrency.lockutils [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.353 253542 INFO nova.compute.manager [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Terminating instance
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.355 253542 DEBUG nova.compute.manager [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:37:35 compute-0 kernel: tapf3740887-84 (unregistering): left promiscuous mode
Nov 25 08:37:35 compute-0 NetworkManager[48915]: <info>  [1764059855.5706] device (tapf3740887-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.583 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:35 compute-0 ovn_controller[152859]: 2025-11-25T08:37:35Z|00762|binding|INFO|Releasing lport f3740887-8427-4858-b3e7-5c15f52a2484 from this chassis (sb_readonly=0)
Nov 25 08:37:35 compute-0 ovn_controller[152859]: 2025-11-25T08:37:35Z|00763|binding|INFO|Setting lport f3740887-8427-4858-b3e7-5c15f52a2484 down in Southbound
Nov 25 08:37:35 compute-0 ovn_controller[152859]: 2025-11-25T08:37:35Z|00764|binding|INFO|Removing iface tapf3740887-84 ovn-installed in OVS
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.588 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:35.599 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:5b:b8 10.100.0.8'], port_security=['fa:16:3e:f9:5b:b8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '02e92d65-4521-4c60-bed4-2e8fc4d243e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b2becc8-2b4e-4727-a105-454acfe44b89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '488c6d53000c47848dba6b7be6b4ff40', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f93c15b0-dfea-420d-abd0-8856b9b0a2b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b19baa8b-ade0-4619-a5ed-df7feb0c6cc0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f3740887-8427-4858-b3e7-5c15f52a2484) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:35.601 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f3740887-8427-4858-b3e7-5c15f52a2484 in datapath 6b2becc8-2b4e-4727-a105-454acfe44b89 unbound from our chassis
Nov 25 08:37:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:35.602 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6b2becc8-2b4e-4727-a105-454acfe44b89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 08:37:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:35.604 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6b3c2316-b858-4879-863c-3ee5c97b851a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.625 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:35 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Nov 25 08:37:35 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d0000004e.scope: Consumed 2.621s CPU time.
Nov 25 08:37:35 compute-0 systemd-machined[215790]: Machine qemu-97-instance-0000004e terminated.
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.673 253542 DEBUG nova.compute.manager [req-8f484859-4786-4c5b-a13d-033ad05f56b9 req-77127475-9231-4844-9628-d44e4097987f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.674 253542 DEBUG oslo_concurrency.lockutils [req-8f484859-4786-4c5b-a13d-033ad05f56b9 req-77127475-9231-4844-9628-d44e4097987f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.674 253542 DEBUG oslo_concurrency.lockutils [req-8f484859-4786-4c5b-a13d-033ad05f56b9 req-77127475-9231-4844-9628-d44e4097987f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.674 253542 DEBUG oslo_concurrency.lockutils [req-8f484859-4786-4c5b-a13d-033ad05f56b9 req-77127475-9231-4844-9628-d44e4097987f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.675 253542 DEBUG nova.compute.manager [req-8f484859-4786-4c5b-a13d-033ad05f56b9 req-77127475-9231-4844-9628-d44e4097987f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] No waiting events found dispatching network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.675 253542 WARNING nova.compute.manager [req-8f484859-4786-4c5b-a13d-033ad05f56b9 req-77127475-9231-4844-9628-d44e4097987f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received unexpected event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 for instance with vm_state active and task_state deleting.
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.797 253542 INFO nova.virt.libvirt.driver [-] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Instance destroyed successfully.
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.798 253542 DEBUG nova.objects.instance [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'resources' on Instance uuid 02e92d65-4521-4c60-bed4-2e8fc4d243e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.813 253542 DEBUG nova.virt.libvirt.vif [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:36:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1478856521',display_name='tempest-ServerRescueTestJSON-server-1478856521',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1478856521',id=78,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:37:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='488c6d53000c47848dba6b7be6b4ff40',ramdisk_id='',reservation_id='r-zkq6gjzu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-324239197',owner_user_name='tempest-ServerRescueTestJSON-324239197-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:37:34Z,user_data=None,user_id='ad675e78b1b34f1c92c57e42532c3c20',uuid=02e92d65-4521-4c60-bed4-2e8fc4d243e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f3740887-8427-4858-b3e7-5c15f52a2484", "address": "fa:16:3e:f9:5b:b8", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3740887-84", "ovs_interfaceid": "f3740887-8427-4858-b3e7-5c15f52a2484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.813 253542 DEBUG nova.network.os_vif_util [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Converting VIF {"id": "f3740887-8427-4858-b3e7-5c15f52a2484", "address": "fa:16:3e:f9:5b:b8", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3740887-84", "ovs_interfaceid": "f3740887-8427-4858-b3e7-5c15f52a2484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.815 253542 DEBUG nova.network.os_vif_util [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:5b:b8,bridge_name='br-int',has_traffic_filtering=True,id=f3740887-8427-4858-b3e7-5c15f52a2484,network=Network(6b2becc8-2b4e-4727-a105-454acfe44b89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3740887-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.816 253542 DEBUG os_vif [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:5b:b8,bridge_name='br-int',has_traffic_filtering=True,id=f3740887-8427-4858-b3e7-5c15f52a2484,network=Network(6b2becc8-2b4e-4727-a105-454acfe44b89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3740887-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.819 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.820 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3740887-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.828 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.832 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:37:35 compute-0 nova_compute[253538]: 2025-11-25 08:37:35.841 253542 INFO os_vif [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:5b:b8,bridge_name='br-int',has_traffic_filtering=True,id=f3740887-8427-4858-b3e7-5c15f52a2484,network=Network(6b2becc8-2b4e-4727-a105-454acfe44b89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3740887-84')
Nov 25 08:37:36 compute-0 nova_compute[253538]: 2025-11-25 08:37:36.553 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:36 compute-0 ceph-mon[75015]: pgmap v1637: 321 pgs: 321 active+clean; 312 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 250 op/s
Nov 25 08:37:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:37:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 300 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.8 MiB/s wr, 249 op/s
Nov 25 08:37:37 compute-0 nova_compute[253538]: 2025-11-25 08:37:37.790 253542 DEBUG nova.compute.manager [req-953287a6-3361-499b-96e6-cebda03697c8 req-172e3a8a-ab87-44e3-9008-7b59985a9028 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received event network-vif-unplugged-f3740887-8427-4858-b3e7-5c15f52a2484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:37 compute-0 nova_compute[253538]: 2025-11-25 08:37:37.791 253542 DEBUG oslo_concurrency.lockutils [req-953287a6-3361-499b-96e6-cebda03697c8 req-172e3a8a-ab87-44e3-9008-7b59985a9028 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:37 compute-0 nova_compute[253538]: 2025-11-25 08:37:37.791 253542 DEBUG oslo_concurrency.lockutils [req-953287a6-3361-499b-96e6-cebda03697c8 req-172e3a8a-ab87-44e3-9008-7b59985a9028 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:37 compute-0 nova_compute[253538]: 2025-11-25 08:37:37.792 253542 DEBUG oslo_concurrency.lockutils [req-953287a6-3361-499b-96e6-cebda03697c8 req-172e3a8a-ab87-44e3-9008-7b59985a9028 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:37 compute-0 nova_compute[253538]: 2025-11-25 08:37:37.792 253542 DEBUG nova.compute.manager [req-953287a6-3361-499b-96e6-cebda03697c8 req-172e3a8a-ab87-44e3-9008-7b59985a9028 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] No waiting events found dispatching network-vif-unplugged-f3740887-8427-4858-b3e7-5c15f52a2484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:37 compute-0 nova_compute[253538]: 2025-11-25 08:37:37.792 253542 DEBUG nova.compute.manager [req-953287a6-3361-499b-96e6-cebda03697c8 req-172e3a8a-ab87-44e3-9008-7b59985a9028 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received event network-vif-unplugged-f3740887-8427-4858-b3e7-5c15f52a2484 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:37:37 compute-0 nova_compute[253538]: 2025-11-25 08:37:37.792 253542 DEBUG nova.compute.manager [req-953287a6-3361-499b-96e6-cebda03697c8 req-172e3a8a-ab87-44e3-9008-7b59985a9028 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:37 compute-0 nova_compute[253538]: 2025-11-25 08:37:37.792 253542 DEBUG oslo_concurrency.lockutils [req-953287a6-3361-499b-96e6-cebda03697c8 req-172e3a8a-ab87-44e3-9008-7b59985a9028 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:37 compute-0 nova_compute[253538]: 2025-11-25 08:37:37.793 253542 DEBUG oslo_concurrency.lockutils [req-953287a6-3361-499b-96e6-cebda03697c8 req-172e3a8a-ab87-44e3-9008-7b59985a9028 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:37 compute-0 nova_compute[253538]: 2025-11-25 08:37:37.793 253542 DEBUG oslo_concurrency.lockutils [req-953287a6-3361-499b-96e6-cebda03697c8 req-172e3a8a-ab87-44e3-9008-7b59985a9028 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:37 compute-0 nova_compute[253538]: 2025-11-25 08:37:37.793 253542 DEBUG nova.compute.manager [req-953287a6-3361-499b-96e6-cebda03697c8 req-172e3a8a-ab87-44e3-9008-7b59985a9028 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] No waiting events found dispatching network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:37 compute-0 nova_compute[253538]: 2025-11-25 08:37:37.793 253542 WARNING nova.compute.manager [req-953287a6-3361-499b-96e6-cebda03697c8 req-172e3a8a-ab87-44e3-9008-7b59985a9028 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received unexpected event network-vif-plugged-f3740887-8427-4858-b3e7-5c15f52a2484 for instance with vm_state active and task_state deleting.
Nov 25 08:37:37 compute-0 ceph-mon[75015]: pgmap v1638: 321 pgs: 321 active+clean; 300 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.8 MiB/s wr, 249 op/s
Nov 25 08:37:38 compute-0 nova_compute[253538]: 2025-11-25 08:37:38.384 253542 INFO nova.virt.libvirt.driver [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Deleting instance files /var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4_del
Nov 25 08:37:38 compute-0 nova_compute[253538]: 2025-11-25 08:37:38.385 253542 INFO nova.virt.libvirt.driver [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Deletion of /var/lib/nova/instances/02e92d65-4521-4c60-bed4-2e8fc4d243e4_del complete
Nov 25 08:37:38 compute-0 nova_compute[253538]: 2025-11-25 08:37:38.445 253542 INFO nova.compute.manager [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Took 3.09 seconds to destroy the instance on the hypervisor.
Nov 25 08:37:38 compute-0 nova_compute[253538]: 2025-11-25 08:37:38.446 253542 DEBUG oslo.service.loopingcall [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:37:38 compute-0 nova_compute[253538]: 2025-11-25 08:37:38.446 253542 DEBUG nova.compute.manager [-] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:37:38 compute-0 nova_compute[253538]: 2025-11-25 08:37:38.447 253542 DEBUG nova.network.neutron [-] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:37:38 compute-0 nova_compute[253538]: 2025-11-25 08:37:38.478 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059843.4760087, 6c10a34e-4126-4e88-ad4d-ba7c407a379e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:38 compute-0 nova_compute[253538]: 2025-11-25 08:37:38.481 253542 INFO nova.compute.manager [-] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] VM Stopped (Lifecycle Event)
Nov 25 08:37:38 compute-0 nova_compute[253538]: 2025-11-25 08:37:38.502 253542 DEBUG nova.compute.manager [None req-814e5dbb-7f80-4b2d-bb85-991827c69058 - - - - - -] [instance: 6c10a34e-4126-4e88-ad4d-ba7c407a379e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1639: 321 pgs: 321 active+clean; 274 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.7 MiB/s wr, 205 op/s
Nov 25 08:37:39 compute-0 nova_compute[253538]: 2025-11-25 08:37:39.748 253542 DEBUG nova.network.neutron [-] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:39 compute-0 nova_compute[253538]: 2025-11-25 08:37:39.767 253542 INFO nova.compute.manager [-] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Took 1.32 seconds to deallocate network for instance.
Nov 25 08:37:39 compute-0 nova_compute[253538]: 2025-11-25 08:37:39.813 253542 DEBUG oslo_concurrency.lockutils [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:39 compute-0 nova_compute[253538]: 2025-11-25 08:37:39.814 253542 DEBUG oslo_concurrency.lockutils [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:39 compute-0 nova_compute[253538]: 2025-11-25 08:37:39.882 253542 DEBUG nova.compute.manager [req-727c2da3-2fff-479c-89bb-00b507e4ea44 req-3f91b228-8232-460f-9765-2e6c78782db3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Received event network-vif-deleted-f3740887-8427-4858-b3e7-5c15f52a2484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:39 compute-0 nova_compute[253538]: 2025-11-25 08:37:39.885 253542 DEBUG oslo_concurrency.processutils [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:37:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4011284153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:40 compute-0 nova_compute[253538]: 2025-11-25 08:37:40.373 253542 DEBUG oslo_concurrency.processutils [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:40 compute-0 nova_compute[253538]: 2025-11-25 08:37:40.381 253542 DEBUG nova.compute.provider_tree [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:37:40 compute-0 nova_compute[253538]: 2025-11-25 08:37:40.397 253542 DEBUG nova.scheduler.client.report [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:37:40 compute-0 ceph-mon[75015]: pgmap v1639: 321 pgs: 321 active+clean; 274 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.7 MiB/s wr, 205 op/s
Nov 25 08:37:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4011284153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:40 compute-0 nova_compute[253538]: 2025-11-25 08:37:40.416 253542 DEBUG oslo_concurrency.lockutils [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:40 compute-0 nova_compute[253538]: 2025-11-25 08:37:40.453 253542 INFO nova.scheduler.client.report [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Deleted allocations for instance 02e92d65-4521-4c60-bed4-2e8fc4d243e4
Nov 25 08:37:40 compute-0 nova_compute[253538]: 2025-11-25 08:37:40.522 253542 DEBUG oslo_concurrency.lockutils [None req-4d955e0a-6898-4491-bfa1-408d3b22c6f3 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "02e92d65-4521-4c60-bed4-2e8fc4d243e4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:40 compute-0 nova_compute[253538]: 2025-11-25 08:37:40.829 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:40.874 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:40 compute-0 nova_compute[253538]: 2025-11-25 08:37:40.875 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:40.876 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:37:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:41.063 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:41.064 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:41.064 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.135 253542 DEBUG oslo_concurrency.lockutils [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.136 253542 DEBUG oslo_concurrency.lockutils [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.137 253542 DEBUG oslo_concurrency.lockutils [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.137 253542 DEBUG oslo_concurrency.lockutils [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.138 253542 DEBUG oslo_concurrency.lockutils [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.140 253542 INFO nova.compute.manager [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Terminating instance
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.142 253542 DEBUG nova.compute.manager [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:37:41 compute-0 kernel: tap88b844d5-71 (unregistering): left promiscuous mode
Nov 25 08:37:41 compute-0 NetworkManager[48915]: <info>  [1764059861.2340] device (tap88b844d5-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:37:41 compute-0 ovn_controller[152859]: 2025-11-25T08:37:41Z|00765|binding|INFO|Releasing lport 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a from this chassis (sb_readonly=0)
Nov 25 08:37:41 compute-0 ovn_controller[152859]: 2025-11-25T08:37:41Z|00766|binding|INFO|Setting lport 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a down in Southbound
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.245 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:41 compute-0 ovn_controller[152859]: 2025-11-25T08:37:41Z|00767|binding|INFO|Removing iface tap88b844d5-71 ovn-installed in OVS
Nov 25 08:37:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:41.254 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:fe:4e 10.100.0.5'], port_security=['fa:16:3e:a4:fe:4e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '884b0bf9-764d-4aa8-8bcb-c9e8644a0dad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b2becc8-2b4e-4727-a105-454acfe44b89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '488c6d53000c47848dba6b7be6b4ff40', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f93c15b0-dfea-420d-abd0-8856b9b0a2b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b19baa8b-ade0-4619-a5ed-df7feb0c6cc0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=88b844d5-7175-4dc1-92cc-d7a4d59e1d1a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:41.256 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 88b844d5-7175-4dc1-92cc-d7a4d59e1d1a in datapath 6b2becc8-2b4e-4727-a105-454acfe44b89 unbound from our chassis
Nov 25 08:37:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:41.258 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6b2becc8-2b4e-4727-a105-454acfe44b89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 08:37:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:41.259 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc8f4ca-c146-41ed-ad41-5e3e2eadf029]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.269 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:41 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d00000047.scope: Deactivated successfully.
Nov 25 08:37:41 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d00000047.scope: Consumed 15.888s CPU time.
Nov 25 08:37:41 compute-0 systemd-machined[215790]: Machine qemu-91-instance-00000047 terminated.
Nov 25 08:37:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1640: 321 pgs: 321 active+clean; 253 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 214 op/s
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.403 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.410 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.415 253542 INFO nova.virt.libvirt.driver [-] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Instance destroyed successfully.
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.415 253542 DEBUG nova.objects.instance [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lazy-loading 'resources' on Instance uuid 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.427 253542 DEBUG nova.virt.libvirt.vif [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:36:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1272068251',display_name='tempest-ServerRescueTestJSON-server-1272068251',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1272068251',id=71,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:36:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='488c6d53000c47848dba6b7be6b4ff40',ramdisk_id='',reservation_id='r-eevo0tl5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-324239197',owner_user_name='tempest-ServerRescueTestJSON-324239197-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:36:52Z,user_data=None,user_id='ad675e78b1b34f1c92c57e42532c3c20',uuid=884b0bf9-764d-4aa8-8bcb-c9e8644a0dad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "address": "fa:16:3e:a4:fe:4e", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b844d5-71", "ovs_interfaceid": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.428 253542 DEBUG nova.network.os_vif_util [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Converting VIF {"id": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "address": "fa:16:3e:a4:fe:4e", "network": {"id": "6b2becc8-2b4e-4727-a105-454acfe44b89", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-337160511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "488c6d53000c47848dba6b7be6b4ff40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b844d5-71", "ovs_interfaceid": "88b844d5-7175-4dc1-92cc-d7a4d59e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.428 253542 DEBUG nova.network.os_vif_util [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:fe:4e,bridge_name='br-int',has_traffic_filtering=True,id=88b844d5-7175-4dc1-92cc-d7a4d59e1d1a,network=Network(6b2becc8-2b4e-4727-a105-454acfe44b89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b844d5-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.428 253542 DEBUG os_vif [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:fe:4e,bridge_name='br-int',has_traffic_filtering=True,id=88b844d5-7175-4dc1-92cc-d7a4d59e1d1a,network=Network(6b2becc8-2b4e-4727-a105-454acfe44b89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b844d5-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.430 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.430 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88b844d5-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.432 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.434 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.436 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.438 253542 INFO os_vif [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:fe:4e,bridge_name='br-int',has_traffic_filtering=True,id=88b844d5-7175-4dc1-92cc-d7a4d59e1d1a,network=Network(6b2becc8-2b4e-4727-a105-454acfe44b89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b844d5-71')
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.555 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:37:41 compute-0 ceph-mon[75015]: pgmap v1640: 321 pgs: 321 active+clean; 253 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 214 op/s
Nov 25 08:37:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:41.877 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.954 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "0a240e53-cc4c-463e-9601-41d687d64349" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.955 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "0a240e53-cc4c-463e-9601-41d687d64349" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.959 253542 DEBUG nova.compute.manager [req-dd4a106e-a8b7-4b1e-9f7f-ba4420d53152 req-887131e9-94d7-4de5-a5b0-6f68f3ecaecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received event network-vif-unplugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.959 253542 DEBUG oslo_concurrency.lockutils [req-dd4a106e-a8b7-4b1e-9f7f-ba4420d53152 req-887131e9-94d7-4de5-a5b0-6f68f3ecaecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.960 253542 DEBUG oslo_concurrency.lockutils [req-dd4a106e-a8b7-4b1e-9f7f-ba4420d53152 req-887131e9-94d7-4de5-a5b0-6f68f3ecaecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.960 253542 DEBUG oslo_concurrency.lockutils [req-dd4a106e-a8b7-4b1e-9f7f-ba4420d53152 req-887131e9-94d7-4de5-a5b0-6f68f3ecaecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.960 253542 DEBUG nova.compute.manager [req-dd4a106e-a8b7-4b1e-9f7f-ba4420d53152 req-887131e9-94d7-4de5-a5b0-6f68f3ecaecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] No waiting events found dispatching network-vif-unplugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.960 253542 DEBUG nova.compute.manager [req-dd4a106e-a8b7-4b1e-9f7f-ba4420d53152 req-887131e9-94d7-4de5-a5b0-6f68f3ecaecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received event network-vif-unplugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.960 253542 DEBUG nova.compute.manager [req-dd4a106e-a8b7-4b1e-9f7f-ba4420d53152 req-887131e9-94d7-4de5-a5b0-6f68f3ecaecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received event network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.961 253542 DEBUG oslo_concurrency.lockutils [req-dd4a106e-a8b7-4b1e-9f7f-ba4420d53152 req-887131e9-94d7-4de5-a5b0-6f68f3ecaecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.961 253542 DEBUG oslo_concurrency.lockutils [req-dd4a106e-a8b7-4b1e-9f7f-ba4420d53152 req-887131e9-94d7-4de5-a5b0-6f68f3ecaecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.961 253542 DEBUG oslo_concurrency.lockutils [req-dd4a106e-a8b7-4b1e-9f7f-ba4420d53152 req-887131e9-94d7-4de5-a5b0-6f68f3ecaecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.961 253542 DEBUG nova.compute.manager [req-dd4a106e-a8b7-4b1e-9f7f-ba4420d53152 req-887131e9-94d7-4de5-a5b0-6f68f3ecaecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] No waiting events found dispatching network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.962 253542 WARNING nova.compute.manager [req-dd4a106e-a8b7-4b1e-9f7f-ba4420d53152 req-887131e9-94d7-4de5-a5b0-6f68f3ecaecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received unexpected event network-vif-plugged-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a for instance with vm_state rescued and task_state deleting.
Nov 25 08:37:41 compute-0 nova_compute[253538]: 2025-11-25 08:37:41.976 253542 DEBUG nova.compute.manager [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.037 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.037 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.045 253542 DEBUG nova.virt.hardware [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.045 253542 INFO nova.compute.claims [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.138 253542 DEBUG oslo_concurrency.processutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:37:42 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3876210642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.592 253542 DEBUG oslo_concurrency.processutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.601 253542 DEBUG nova.compute.provider_tree [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.619 253542 DEBUG nova.scheduler.client.report [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.647 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.648 253542 DEBUG nova.compute.manager [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.715 253542 DEBUG nova.compute.manager [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.716 253542 DEBUG nova.network.neutron [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.732 253542 INFO nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.752 253542 DEBUG nova.compute.manager [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:37:42 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3876210642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:42 compute-0 podman[332646]: 2025-11-25 08:37:42.815478272 +0000 UTC m=+0.061253911 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.845 253542 DEBUG nova.compute.manager [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.846 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.846 253542 INFO nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Creating image(s)
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.876 253542 DEBUG nova.storage.rbd_utils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 0a240e53-cc4c-463e-9601-41d687d64349_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.904 253542 DEBUG nova.storage.rbd_utils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 0a240e53-cc4c-463e-9601-41d687d64349_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.932 253542 DEBUG nova.storage.rbd_utils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 0a240e53-cc4c-463e-9601-41d687d64349_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.937 253542 DEBUG oslo_concurrency.processutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:42 compute-0 nova_compute[253538]: 2025-11-25 08:37:42.990 253542 DEBUG nova.policy [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fdcb005cc49a4dfa82152f2c0817cc94', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b730f086c4b94185afab5e10fa2e8181', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.031 253542 DEBUG oslo_concurrency.processutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.032 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.033 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.033 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.057 253542 DEBUG nova.storage.rbd_utils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 0a240e53-cc4c-463e-9601-41d687d64349_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.062 253542 DEBUG oslo_concurrency.processutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0a240e53-cc4c-463e-9601-41d687d64349_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1641: 321 pgs: 321 active+clean; 188 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 359 KiB/s wr, 216 op/s
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.484 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059848.482914, a263c70f-c8ce-4ffc-bd62-595fc2e31593 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.484 253542 INFO nova.compute.manager [-] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] VM Stopped (Lifecycle Event)
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.522 253542 DEBUG nova.compute.manager [None req-add514cb-ef6f-4f2d-9080-4fd7d91061ee - - - - - -] [instance: a263c70f-c8ce-4ffc-bd62-595fc2e31593] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.547 253542 DEBUG oslo_concurrency.processutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0a240e53-cc4c-463e-9601-41d687d64349_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.616 253542 DEBUG nova.storage.rbd_utils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] resizing rbd image 0a240e53-cc4c-463e-9601-41d687d64349_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.768 253542 DEBUG nova.network.neutron [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Successfully created port: 0061cd13-34e3-4156-a4ba-ff9808dc3607 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:37:43 compute-0 ceph-mon[75015]: pgmap v1641: 321 pgs: 321 active+clean; 188 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 359 KiB/s wr, 216 op/s
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.848 253542 DEBUG nova.objects.instance [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'migration_context' on Instance uuid 0a240e53-cc4c-463e-9601-41d687d64349 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.860 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.860 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Ensure instance console log exists: /var/lib/nova/instances/0a240e53-cc4c-463e-9601-41d687d64349/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.860 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.861 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:43 compute-0 nova_compute[253538]: 2025-11-25 08:37:43.861 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:44 compute-0 nova_compute[253538]: 2025-11-25 08:37:44.470 253542 INFO nova.virt.libvirt.driver [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Deleting instance files /var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_del
Nov 25 08:37:44 compute-0 nova_compute[253538]: 2025-11-25 08:37:44.471 253542 INFO nova.virt.libvirt.driver [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Deletion of /var/lib/nova/instances/884b0bf9-764d-4aa8-8bcb-c9e8644a0dad_del complete
Nov 25 08:37:44 compute-0 nova_compute[253538]: 2025-11-25 08:37:44.524 253542 INFO nova.compute.manager [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Took 3.38 seconds to destroy the instance on the hypervisor.
Nov 25 08:37:44 compute-0 nova_compute[253538]: 2025-11-25 08:37:44.525 253542 DEBUG oslo.service.loopingcall [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:37:44 compute-0 nova_compute[253538]: 2025-11-25 08:37:44.525 253542 DEBUG nova.compute.manager [-] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:37:44 compute-0 nova_compute[253538]: 2025-11-25 08:37:44.525 253542 DEBUG nova.network.neutron [-] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:37:44 compute-0 nova_compute[253538]: 2025-11-25 08:37:44.888 253542 DEBUG nova.network.neutron [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Successfully updated port: 0061cd13-34e3-4156-a4ba-ff9808dc3607 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:37:44 compute-0 nova_compute[253538]: 2025-11-25 08:37:44.903 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "refresh_cache-0a240e53-cc4c-463e-9601-41d687d64349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:37:44 compute-0 nova_compute[253538]: 2025-11-25 08:37:44.904 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquired lock "refresh_cache-0a240e53-cc4c-463e-9601-41d687d64349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:37:44 compute-0 nova_compute[253538]: 2025-11-25 08:37:44.905 253542 DEBUG nova.network.neutron [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:37:44 compute-0 nova_compute[253538]: 2025-11-25 08:37:44.962 253542 DEBUG nova.compute.manager [req-16370043-fddf-4d5a-bd1f-9dd707a72e21 req-b1b2807c-c8ca-4ba4-ad1c-aeb7bc3e1f6e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Received event network-changed-0061cd13-34e3-4156-a4ba-ff9808dc3607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:44 compute-0 nova_compute[253538]: 2025-11-25 08:37:44.963 253542 DEBUG nova.compute.manager [req-16370043-fddf-4d5a-bd1f-9dd707a72e21 req-b1b2807c-c8ca-4ba4-ad1c-aeb7bc3e1f6e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Refreshing instance network info cache due to event network-changed-0061cd13-34e3-4156-a4ba-ff9808dc3607. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:37:44 compute-0 nova_compute[253538]: 2025-11-25 08:37:44.963 253542 DEBUG oslo_concurrency.lockutils [req-16370043-fddf-4d5a-bd1f-9dd707a72e21 req-b1b2807c-c8ca-4ba4-ad1c-aeb7bc3e1f6e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0a240e53-cc4c-463e-9601-41d687d64349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:37:45 compute-0 nova_compute[253538]: 2025-11-25 08:37:45.108 253542 DEBUG nova.network.neutron [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:37:45 compute-0 nova_compute[253538]: 2025-11-25 08:37:45.302 253542 DEBUG nova.network.neutron [-] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:45 compute-0 nova_compute[253538]: 2025-11-25 08:37:45.321 253542 INFO nova.compute.manager [-] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Took 0.80 seconds to deallocate network for instance.
Nov 25 08:37:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1642: 321 pgs: 321 active+clean; 166 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 485 KiB/s wr, 198 op/s
Nov 25 08:37:45 compute-0 nova_compute[253538]: 2025-11-25 08:37:45.380 253542 DEBUG oslo_concurrency.lockutils [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:45 compute-0 nova_compute[253538]: 2025-11-25 08:37:45.380 253542 DEBUG oslo_concurrency.lockutils [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:45 compute-0 nova_compute[253538]: 2025-11-25 08:37:45.473 253542 DEBUG oslo_concurrency.processutils [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:37:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1403024796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:45 compute-0 nova_compute[253538]: 2025-11-25 08:37:45.952 253542 DEBUG oslo_concurrency.processutils [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:45 compute-0 nova_compute[253538]: 2025-11-25 08:37:45.960 253542 DEBUG nova.compute.provider_tree [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:37:45 compute-0 nova_compute[253538]: 2025-11-25 08:37:45.975 253542 DEBUG nova.scheduler.client.report [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.002 253542 DEBUG oslo_concurrency.lockutils [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.028 253542 INFO nova.scheduler.client.report [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Deleted allocations for instance 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.090 253542 DEBUG oslo_concurrency.lockutils [None req-319ed0ca-515c-44d3-a6fb-d3b87d2ac086 ad675e78b1b34f1c92c57e42532c3c20 488c6d53000c47848dba6b7be6b4ff40 - - default default] Lock "884b0bf9-764d-4aa8-8bcb-c9e8644a0dad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.212 253542 DEBUG nova.network.neutron [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Updating instance_info_cache with network_info: [{"id": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "address": "fa:16:3e:26:a6:7a", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0061cd13-34", "ovs_interfaceid": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.232 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Releasing lock "refresh_cache-0a240e53-cc4c-463e-9601-41d687d64349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.232 253542 DEBUG nova.compute.manager [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Instance network_info: |[{"id": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "address": "fa:16:3e:26:a6:7a", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0061cd13-34", "ovs_interfaceid": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.234 253542 DEBUG oslo_concurrency.lockutils [req-16370043-fddf-4d5a-bd1f-9dd707a72e21 req-b1b2807c-c8ca-4ba4-ad1c-aeb7bc3e1f6e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0a240e53-cc4c-463e-9601-41d687d64349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.235 253542 DEBUG nova.network.neutron [req-16370043-fddf-4d5a-bd1f-9dd707a72e21 req-b1b2807c-c8ca-4ba4-ad1c-aeb7bc3e1f6e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Refreshing network info cache for port 0061cd13-34e3-4156-a4ba-ff9808dc3607 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.242 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Start _get_guest_xml network_info=[{"id": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "address": "fa:16:3e:26:a6:7a", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0061cd13-34", "ovs_interfaceid": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.246 253542 WARNING nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.250 253542 DEBUG nova.virt.libvirt.host [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.251 253542 DEBUG nova.virt.libvirt.host [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.254 253542 DEBUG nova.virt.libvirt.host [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.255 253542 DEBUG nova.virt.libvirt.host [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.256 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.256 253542 DEBUG nova.virt.hardware [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.256 253542 DEBUG nova.virt.hardware [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.257 253542 DEBUG nova.virt.hardware [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.257 253542 DEBUG nova.virt.hardware [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.257 253542 DEBUG nova.virt.hardware [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.257 253542 DEBUG nova.virt.hardware [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.258 253542 DEBUG nova.virt.hardware [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.258 253542 DEBUG nova.virt.hardware [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.258 253542 DEBUG nova.virt.hardware [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.259 253542 DEBUG nova.virt.hardware [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.259 253542 DEBUG nova.virt.hardware [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.262 253542 DEBUG oslo_concurrency.processutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.311 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "40912950-fedc-405c-bc49-c4a757a422dc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.311 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.328 253542 DEBUG nova.compute.manager [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.392 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.392 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.398 253542 DEBUG nova.virt.hardware [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.398 253542 INFO nova.compute.claims [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:37:46 compute-0 ceph-mon[75015]: pgmap v1642: 321 pgs: 321 active+clean; 166 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 485 KiB/s wr, 198 op/s
Nov 25 08:37:46 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1403024796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.434 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.516 253542 DEBUG oslo_concurrency.processutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.565 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:37:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:37:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3160237518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.789 253542 DEBUG oslo_concurrency.processutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.814 253542 DEBUG nova.storage.rbd_utils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 0a240e53-cc4c-463e-9601-41d687d64349_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.817 253542 DEBUG oslo_concurrency.processutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:37:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033524930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.972 253542 DEBUG oslo_concurrency.processutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.984 253542 DEBUG nova.compute.provider_tree [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:37:46 compute-0 nova_compute[253538]: 2025-11-25 08:37:46.998 253542 DEBUG nova.scheduler.client.report [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.030 253542 DEBUG nova.compute.manager [req-486fce69-631c-4843-bfa2-caf9e048ef33 req-b2cfea73-03b9-4432-939f-2db21c8d08e7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Received event network-vif-deleted-88b844d5-7175-4dc1-92cc-d7a4d59e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.033 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.034 253542 DEBUG nova.compute.manager [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.098 253542 DEBUG nova.compute.manager [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.099 253542 DEBUG nova.network.neutron [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.173 253542 INFO nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.194 253542 DEBUG nova.compute.manager [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:37:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:37:47 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714697345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.248 253542 DEBUG oslo_concurrency.processutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.249 253542 DEBUG nova.virt.libvirt.vif [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:37:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1218767313',display_name='tempest-₡-1218767313',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1218767313',id=80,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-imleqf74',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_cert
s=None,updated_at=2025-11-25T08:37:42Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=0a240e53-cc4c-463e-9601-41d687d64349,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "address": "fa:16:3e:26:a6:7a", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0061cd13-34", "ovs_interfaceid": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.250 253542 DEBUG nova.network.os_vif_util [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "address": "fa:16:3e:26:a6:7a", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0061cd13-34", "ovs_interfaceid": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.251 253542 DEBUG nova.network.os_vif_util [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a6:7a,bridge_name='br-int',has_traffic_filtering=True,id=0061cd13-34e3-4156-a4ba-ff9808dc3607,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0061cd13-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.252 253542 DEBUG nova.objects.instance [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0a240e53-cc4c-463e-9601-41d687d64349 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.275 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:37:47 compute-0 nova_compute[253538]:   <uuid>0a240e53-cc4c-463e-9601-41d687d64349</uuid>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   <name>instance-00000050</name>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <nova:name>tempest-₡-1218767313</nova:name>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:37:46</nova:creationTime>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:37:47 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:37:47 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:37:47 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:37:47 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:37:47 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:37:47 compute-0 nova_compute[253538]:         <nova:user uuid="fdcb005cc49a4dfa82152f2c0817cc94">tempest-ServersTestJSON-1426188226-project-member</nova:user>
Nov 25 08:37:47 compute-0 nova_compute[253538]:         <nova:project uuid="b730f086c4b94185afab5e10fa2e8181">tempest-ServersTestJSON-1426188226</nova:project>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:37:47 compute-0 nova_compute[253538]:         <nova:port uuid="0061cd13-34e3-4156-a4ba-ff9808dc3607">
Nov 25 08:37:47 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <system>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <entry name="serial">0a240e53-cc4c-463e-9601-41d687d64349</entry>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <entry name="uuid">0a240e53-cc4c-463e-9601-41d687d64349</entry>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     </system>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   <os>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   </os>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   <features>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   </features>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0a240e53-cc4c-463e-9601-41d687d64349_disk">
Nov 25 08:37:47 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       </source>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:37:47 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0a240e53-cc4c-463e-9601-41d687d64349_disk.config">
Nov 25 08:37:47 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       </source>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:37:47 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:26:a6:7a"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <target dev="tap0061cd13-34"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/0a240e53-cc4c-463e-9601-41d687d64349/console.log" append="off"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <video>
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     </video>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:37:47 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:37:47 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:37:47 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:37:47 compute-0 nova_compute[253538]: </domain>
Nov 25 08:37:47 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.277 253542 DEBUG nova.compute.manager [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Preparing to wait for external event network-vif-plugged-0061cd13-34e3-4156-a4ba-ff9808dc3607 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.278 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "0a240e53-cc4c-463e-9601-41d687d64349-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.279 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "0a240e53-cc4c-463e-9601-41d687d64349-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.279 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "0a240e53-cc4c-463e-9601-41d687d64349-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.281 253542 DEBUG nova.virt.libvirt.vif [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:37:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1218767313',display_name='tempest-₡-1218767313',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1218767313',id=80,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-imleqf74',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,tr
usted_certs=None,updated_at=2025-11-25T08:37:42Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=0a240e53-cc4c-463e-9601-41d687d64349,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "address": "fa:16:3e:26:a6:7a", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0061cd13-34", "ovs_interfaceid": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.281 253542 DEBUG nova.network.os_vif_util [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "address": "fa:16:3e:26:a6:7a", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0061cd13-34", "ovs_interfaceid": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.283 253542 DEBUG nova.network.os_vif_util [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a6:7a,bridge_name='br-int',has_traffic_filtering=True,id=0061cd13-34e3-4156-a4ba-ff9808dc3607,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0061cd13-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.283 253542 DEBUG os_vif [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a6:7a,bridge_name='br-int',has_traffic_filtering=True,id=0061cd13-34e3-4156-a4ba-ff9808dc3607,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0061cd13-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.286 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.286 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.291 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0061cd13-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.292 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0061cd13-34, col_values=(('external_ids', {'iface-id': '0061cd13-34e3-4156-a4ba-ff9808dc3607', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:a6:7a', 'vm-uuid': '0a240e53-cc4c-463e-9601-41d687d64349'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:47 compute-0 NetworkManager[48915]: <info>  [1764059867.3225] manager: (tap0061cd13-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.323 253542 DEBUG nova.compute.manager [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.325 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.326 253542 INFO nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Creating image(s)
Nov 25 08:37:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1643: 321 pgs: 321 active+clean; 141 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 160 op/s
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.365 253542 DEBUG nova.storage.rbd_utils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 40912950-fedc-405c-bc49-c4a757a422dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.397 253542 DEBUG nova.storage.rbd_utils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 40912950-fedc-405c-bc49-c4a757a422dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.500 253542 DEBUG nova.storage.rbd_utils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 40912950-fedc-405c-bc49-c4a757a422dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.522 253542 DEBUG oslo_concurrency.processutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:47 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3160237518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:47 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3033524930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:47 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3714697345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.579 253542 DEBUG nova.network.neutron [req-16370043-fddf-4d5a-bd1f-9dd707a72e21 req-b1b2807c-c8ca-4ba4-ad1c-aeb7bc3e1f6e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Updated VIF entry in instance network info cache for port 0061cd13-34e3-4156-a4ba-ff9808dc3607. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.580 253542 DEBUG nova.network.neutron [req-16370043-fddf-4d5a-bd1f-9dd707a72e21 req-b1b2807c-c8ca-4ba4-ad1c-aeb7bc3e1f6e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Updating instance_info_cache with network_info: [{"id": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "address": "fa:16:3e:26:a6:7a", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0061cd13-34", "ovs_interfaceid": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.588 253542 DEBUG nova.policy [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '24fa34332e6f4b628514969bbf76e94b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6851917992b149818e8b44146c66bfc3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.592 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.593 253542 INFO os_vif [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a6:7a,bridge_name='br-int',has_traffic_filtering=True,id=0061cd13-34e3-4156-a4ba-ff9808dc3607,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0061cd13-34')
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.641 253542 DEBUG oslo_concurrency.processutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.642 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.643 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.644 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.821 253542 DEBUG nova.storage.rbd_utils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 40912950-fedc-405c-bc49-c4a757a422dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.825 253542 DEBUG oslo_concurrency.processutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 40912950-fedc-405c-bc49-c4a757a422dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:47 compute-0 podman[333006]: 2025-11-25 08:37:47.852994911 +0000 UTC m=+0.093546070 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.884 253542 DEBUG oslo_concurrency.lockutils [req-16370043-fddf-4d5a-bd1f-9dd707a72e21 req-b1b2807c-c8ca-4ba4-ad1c-aeb7bc3e1f6e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0a240e53-cc4c-463e-9601-41d687d64349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.919 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.920 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.920 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No VIF found with MAC fa:16:3e:26:a6:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.921 253542 INFO nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Using config drive
Nov 25 08:37:47 compute-0 nova_compute[253538]: 2025-11-25 08:37:47.943 253542 DEBUG nova.storage.rbd_utils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 0a240e53-cc4c-463e-9601-41d687d64349_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.356 253542 INFO nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Creating config drive at /var/lib/nova/instances/0a240e53-cc4c-463e-9601-41d687d64349/disk.config
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.361 253542 DEBUG oslo_concurrency.processutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0a240e53-cc4c-463e-9601-41d687d64349/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpupht_ut7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.496 253542 DEBUG oslo_concurrency.processutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0a240e53-cc4c-463e-9601-41d687d64349/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpupht_ut7" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.526 253542 DEBUG nova.storage.rbd_utils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 0a240e53-cc4c-463e-9601-41d687d64349_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.537 253542 DEBUG oslo_concurrency.processutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0a240e53-cc4c-463e-9601-41d687d64349/disk.config 0a240e53-cc4c-463e-9601-41d687d64349_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:48 compute-0 ceph-mon[75015]: pgmap v1643: 321 pgs: 321 active+clean; 141 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 160 op/s
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.609 253542 DEBUG nova.network.neutron [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Successfully created port: 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.681 253542 DEBUG oslo_concurrency.processutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 40912950-fedc-405c-bc49-c4a757a422dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.855s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.754 253542 DEBUG nova.storage.rbd_utils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] resizing rbd image 40912950-fedc-405c-bc49-c4a757a422dc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.786 253542 DEBUG oslo_concurrency.processutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0a240e53-cc4c-463e-9601-41d687d64349/disk.config 0a240e53-cc4c-463e-9601-41d687d64349_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.787 253542 INFO nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Deleting local config drive /var/lib/nova/instances/0a240e53-cc4c-463e-9601-41d687d64349/disk.config because it was imported into RBD.
Nov 25 08:37:48 compute-0 kernel: tap0061cd13-34: entered promiscuous mode
Nov 25 08:37:48 compute-0 NetworkManager[48915]: <info>  [1764059868.8573] manager: (tap0061cd13-34): new Tun device (/org/freedesktop/NetworkManager/Devices/332)
Nov 25 08:37:48 compute-0 ovn_controller[152859]: 2025-11-25T08:37:48Z|00768|binding|INFO|Claiming lport 0061cd13-34e3-4156-a4ba-ff9808dc3607 for this chassis.
Nov 25 08:37:48 compute-0 ovn_controller[152859]: 2025-11-25T08:37:48Z|00769|binding|INFO|0061cd13-34e3-4156-a4ba-ff9808dc3607: Claiming fa:16:3e:26:a6:7a 10.100.0.13
Nov 25 08:37:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:48.874 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:a6:7a 10.100.0.13'], port_security=['fa:16:3e:26:a6:7a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0a240e53-cc4c-463e-9601-41d687d64349', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92e26514-5b15-410b-8885-6773bc03c4ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b730f086c4b94185afab5e10fa2e8181', 'neutron:revision_number': '2', 'neutron:security_group_ids': '420b1f20-9bb9-442a-978c-444ed9d0cb04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ae0a3b6-6c0f-4ef3-ae20-c7b5b6ede8ac, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0061cd13-34e3-4156-a4ba-ff9808dc3607) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:48.875 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0061cd13-34e3-4156-a4ba-ff9808dc3607 in datapath 92e26514-5b15-410b-8885-6773bc03c4ce bound to our chassis
Nov 25 08:37:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:48.877 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92e26514-5b15-410b-8885-6773bc03c4ce
Nov 25 08:37:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:48.889 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[14ad5614-d805-4e62-9ffc-fb4179fe2fe9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:48.891 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap92e26514-51 in ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:37:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:48.892 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap92e26514-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:37:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:48.892 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6db3fd-e793-436e-843b-2934524b4b04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:48 compute-0 systemd-udevd[333198]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:37:48 compute-0 systemd-machined[215790]: New machine qemu-98-instance-00000050.
Nov 25 08:37:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:48.896 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[940bcb13-7613-489e-ba53-a23b50e2aeb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.897 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:48 compute-0 NetworkManager[48915]: <info>  [1764059868.9080] device (tap0061cd13-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:37:48 compute-0 NetworkManager[48915]: <info>  [1764059868.9092] device (tap0061cd13-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.909 253542 DEBUG nova.objects.instance [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'migration_context' on Instance uuid 40912950-fedc-405c-bc49-c4a757a422dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:48 compute-0 systemd[1]: Started Virtual Machine qemu-98-instance-00000050.
Nov 25 08:37:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:48.915 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed65a40-f157-4db2-9f2b-8b203d600336]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.923 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.924 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Ensure instance console log exists: /var/lib/nova/instances/40912950-fedc-405c-bc49-c4a757a422dc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.925 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.925 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.925 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:48.943 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0ffa1924-7903-4b16-860b-bb2bdefa268f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.952 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:48 compute-0 ovn_controller[152859]: 2025-11-25T08:37:48Z|00770|binding|INFO|Setting lport 0061cd13-34e3-4156-a4ba-ff9808dc3607 ovn-installed in OVS
Nov 25 08:37:48 compute-0 ovn_controller[152859]: 2025-11-25T08:37:48Z|00771|binding|INFO|Setting lport 0061cd13-34e3-4156-a4ba-ff9808dc3607 up in Southbound
Nov 25 08:37:48 compute-0 nova_compute[253538]: 2025-11-25 08:37:48.957 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:48.979 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[97c69378-5d6c-46d1-b1ff-379240309626]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:48 compute-0 NetworkManager[48915]: <info>  [1764059868.9882] manager: (tap92e26514-50): new Veth device (/org/freedesktop/NetworkManager/Devices/333)
Nov 25 08:37:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:48.987 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c011fdcd-44d6-484d-ba6f-1e4ac125a5d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:48 compute-0 systemd-udevd[333201]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:49.028 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c17a163c-5698-4ab9-b44f-6df7d26bc92b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:49.033 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[740d6f0c-1b03-4561-988e-18c47791492e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:49 compute-0 NetworkManager[48915]: <info>  [1764059869.0626] device (tap92e26514-50): carrier: link connected
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:49.068 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[043fd324-7c27-4186-815e-a30644a031fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:49.087 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[764ce342-14e3-4e4e-b9b1-90e651e9048d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92e26514-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:84:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522366, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333230, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:49.104 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8695aa8b-d927-4d1f-8837-487e74803d35]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:84fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522366, 'tstamp': 522366}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333231, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:49.124 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc4146a-9b7e-4b68-bf8e-34e8f575c346]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92e26514-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:84:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522366, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333232, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.149 253542 DEBUG nova.compute.manager [req-6656c01f-44c5-4af3-b157-558cdca3ff1c req-aee946dd-bc71-4359-aee8-728a047faf87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Received event network-vif-plugged-0061cd13-34e3-4156-a4ba-ff9808dc3607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.150 253542 DEBUG oslo_concurrency.lockutils [req-6656c01f-44c5-4af3-b157-558cdca3ff1c req-aee946dd-bc71-4359-aee8-728a047faf87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0a240e53-cc4c-463e-9601-41d687d64349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.150 253542 DEBUG oslo_concurrency.lockutils [req-6656c01f-44c5-4af3-b157-558cdca3ff1c req-aee946dd-bc71-4359-aee8-728a047faf87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0a240e53-cc4c-463e-9601-41d687d64349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.151 253542 DEBUG oslo_concurrency.lockutils [req-6656c01f-44c5-4af3-b157-558cdca3ff1c req-aee946dd-bc71-4359-aee8-728a047faf87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0a240e53-cc4c-463e-9601-41d687d64349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.151 253542 DEBUG nova.compute.manager [req-6656c01f-44c5-4af3-b157-558cdca3ff1c req-aee946dd-bc71-4359-aee8-728a047faf87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Processing event network-vif-plugged-0061cd13-34e3-4156-a4ba-ff9808dc3607 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:49.163 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ed73eb-aa82-40b2-9c20-35f418c21e55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:49.240 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9345dbb5-5f65-40cf-b2db-30146be3cc7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:49.241 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92e26514-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:49.241 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:49.242 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92e26514-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.244 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:49 compute-0 NetworkManager[48915]: <info>  [1764059869.2450] manager: (tap92e26514-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/334)
Nov 25 08:37:49 compute-0 kernel: tap92e26514-50: entered promiscuous mode
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:49.247 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92e26514-50, col_values=(('external_ids', {'iface-id': '185991f0-acab-400e-baff-76794035d44a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:49 compute-0 ovn_controller[152859]: 2025-11-25T08:37:49Z|00772|binding|INFO|Releasing lport 185991f0-acab-400e-baff-76794035d44a from this chassis (sb_readonly=0)
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.281 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:49.283 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/92e26514-5b15-410b-8885-6773bc03c4ce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/92e26514-5b15-410b-8885-6773bc03c4ce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:49.284 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[52e89681-219f-46db-91b3-178c72b493f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:49.286 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-92e26514-5b15-410b-8885-6773bc03c4ce
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/92e26514-5b15-410b-8885-6773bc03c4ce.pid.haproxy
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 92e26514-5b15-410b-8885-6773bc03c4ce
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:37:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:49.287 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'env', 'PROCESS_TAG=haproxy-92e26514-5b15-410b-8885-6773bc03c4ce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/92e26514-5b15-410b-8885-6773bc03c4ce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:37:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1644: 321 pgs: 321 active+clean; 134 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 147 op/s
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.399 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059869.3988588, 0a240e53-cc4c-463e-9601-41d687d64349 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.399 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] VM Started (Lifecycle Event)
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.402 253542 DEBUG nova.compute.manager [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.410 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.425 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.429 253542 INFO nova.virt.libvirt.driver [-] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Instance spawned successfully.
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.431 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.436 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.459 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.460 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059869.3990827, 0a240e53-cc4c-463e-9601-41d687d64349 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.460 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] VM Paused (Lifecycle Event)
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.467 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.468 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.469 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.469 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.470 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.471 253542 DEBUG nova.virt.libvirt.driver [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.478 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.482 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059869.4087517, 0a240e53-cc4c-463e-9601-41d687d64349 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.482 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] VM Resumed (Lifecycle Event)
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.508 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.512 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.565 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.566 253542 INFO nova.compute.manager [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Took 6.72 seconds to spawn the instance on the hypervisor.
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.566 253542 DEBUG nova.compute.manager [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.644 253542 INFO nova.compute.manager [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Took 7.62 seconds to build instance.
Nov 25 08:37:49 compute-0 nova_compute[253538]: 2025-11-25 08:37:49.659 253542 DEBUG oslo_concurrency.lockutils [None req-5ddbde64-9646-4de8-9352-5b2b1e12db48 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "0a240e53-cc4c-463e-9601-41d687d64349" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:49 compute-0 podman[333306]: 2025-11-25 08:37:49.658101462 +0000 UTC m=+0.022088353 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:37:49 compute-0 ceph-mon[75015]: pgmap v1644: 321 pgs: 321 active+clean; 134 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 147 op/s
Nov 25 08:37:49 compute-0 podman[333306]: 2025-11-25 08:37:49.981479945 +0000 UTC m=+0.345466816 container create 62e9dc7a84c6bb7e065e0a31f46bcf27ebe3aadfd25a445a6368902d44bb5843 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 08:37:50 compute-0 systemd[1]: Started libpod-conmon-62e9dc7a84c6bb7e065e0a31f46bcf27ebe3aadfd25a445a6368902d44bb5843.scope.
Nov 25 08:37:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:37:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/078e19334d694eacfa1f081f851a05cf2c3a6a66ec63ba5c3c8be55feca51c6a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:37:50 compute-0 podman[333306]: 2025-11-25 08:37:50.123638109 +0000 UTC m=+0.487624990 container init 62e9dc7a84c6bb7e065e0a31f46bcf27ebe3aadfd25a445a6368902d44bb5843 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 08:37:50 compute-0 podman[333306]: 2025-11-25 08:37:50.130299221 +0000 UTC m=+0.494286082 container start 62e9dc7a84c6bb7e065e0a31f46bcf27ebe3aadfd25a445a6368902d44bb5843 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:37:50 compute-0 neutron-haproxy-ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce[333321]: [NOTICE]   (333325) : New worker (333327) forked
Nov 25 08:37:50 compute-0 neutron-haproxy-ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce[333321]: [NOTICE]   (333325) : Loading success.
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.487 253542 DEBUG nova.network.neutron [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Successfully updated port: 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.511 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "refresh_cache-40912950-fedc-405c-bc49-c4a757a422dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.511 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquired lock "refresh_cache-40912950-fedc-405c-bc49-c4a757a422dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.512 253542 DEBUG nova.network.neutron [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.538 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Acquiring lock "0e855a86-52f7-47bd-aee9-e88449169aa1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.539 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.571 253542 DEBUG nova.compute.manager [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.656 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.657 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.664 253542 DEBUG nova.virt.hardware [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.664 253542 INFO nova.compute.claims [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.710 253542 DEBUG nova.compute.manager [req-1c7b0542-aba5-494e-b136-fa6d8a60d453 req-f4753875-b0c9-4870-a6f6-3491265d6a5b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received event network-changed-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.711 253542 DEBUG nova.compute.manager [req-1c7b0542-aba5-494e-b136-fa6d8a60d453 req-f4753875-b0c9-4870-a6f6-3491265d6a5b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Refreshing instance network info cache due to event network-changed-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.711 253542 DEBUG oslo_concurrency.lockutils [req-1c7b0542-aba5-494e-b136-fa6d8a60d453 req-f4753875-b0c9-4870-a6f6-3491265d6a5b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-40912950-fedc-405c-bc49-c4a757a422dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.795 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059855.7930124, 02e92d65-4521-4c60-bed4-2e8fc4d243e4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.795 253542 INFO nova.compute.manager [-] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] VM Stopped (Lifecycle Event)
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.808 253542 DEBUG nova.compute.manager [None req-79ee57fb-90b6-4379-9772-0100f9c065cd - - - - - -] [instance: 02e92d65-4521-4c60-bed4-2e8fc4d243e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.834 253542 DEBUG oslo_concurrency.processutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:50 compute-0 nova_compute[253538]: 2025-11-25 08:37:50.870 253542 DEBUG nova.network.neutron [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:37:51 compute-0 ovn_controller[152859]: 2025-11-25T08:37:51Z|00773|binding|INFO|Releasing lport 185991f0-acab-400e-baff-76794035d44a from this chassis (sb_readonly=0)
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.161 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:37:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4288954582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.310 253542 DEBUG oslo_concurrency.processutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.327 253542 DEBUG nova.compute.provider_tree [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:37:51 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4288954582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1645: 321 pgs: 321 active+clean; 142 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 766 KiB/s rd, 2.1 MiB/s wr, 154 op/s
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.346 253542 DEBUG nova.scheduler.client.report [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.365 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.366 253542 DEBUG nova.compute.manager [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.411 253542 DEBUG nova.compute.manager [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.411 253542 DEBUG nova.network.neutron [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.441 253542 INFO nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:37:51 compute-0 podman[333357]: 2025-11-25 08:37:51.4553476 +0000 UTC m=+0.129061848 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.461 253542 DEBUG nova.compute.manager [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.516 253542 DEBUG nova.compute.manager [req-aa7e9384-ecc7-4806-9311-903f0ce74a00 req-6a787e13-3bfc-4235-942f-a771cb46f08c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Received event network-vif-plugged-0061cd13-34e3-4156-a4ba-ff9808dc3607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.516 253542 DEBUG oslo_concurrency.lockutils [req-aa7e9384-ecc7-4806-9311-903f0ce74a00 req-6a787e13-3bfc-4235-942f-a771cb46f08c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0a240e53-cc4c-463e-9601-41d687d64349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.517 253542 DEBUG oslo_concurrency.lockutils [req-aa7e9384-ecc7-4806-9311-903f0ce74a00 req-6a787e13-3bfc-4235-942f-a771cb46f08c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0a240e53-cc4c-463e-9601-41d687d64349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.517 253542 DEBUG oslo_concurrency.lockutils [req-aa7e9384-ecc7-4806-9311-903f0ce74a00 req-6a787e13-3bfc-4235-942f-a771cb46f08c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0a240e53-cc4c-463e-9601-41d687d64349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.517 253542 DEBUG nova.compute.manager [req-aa7e9384-ecc7-4806-9311-903f0ce74a00 req-6a787e13-3bfc-4235-942f-a771cb46f08c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] No waiting events found dispatching network-vif-plugged-0061cd13-34e3-4156-a4ba-ff9808dc3607 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.517 253542 WARNING nova.compute.manager [req-aa7e9384-ecc7-4806-9311-903f0ce74a00 req-6a787e13-3bfc-4235-942f-a771cb46f08c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Received unexpected event network-vif-plugged-0061cd13-34e3-4156-a4ba-ff9808dc3607 for instance with vm_state active and task_state None.
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.542 253542 DEBUG nova.compute.manager [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.544 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.544 253542 INFO nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Creating image(s)
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.570 253542 DEBUG nova.storage.rbd_utils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] rbd image 0e855a86-52f7-47bd-aee9-e88449169aa1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.596 253542 DEBUG nova.storage.rbd_utils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] rbd image 0e855a86-52f7-47bd-aee9-e88449169aa1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.618 253542 DEBUG nova.storage.rbd_utils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] rbd image 0e855a86-52f7-47bd-aee9-e88449169aa1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.621 253542 DEBUG oslo_concurrency.processutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.657 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.663 253542 DEBUG nova.policy [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2c27b17fb49c46f2877860b2f7123ef2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ae570c13ba047bca1859d62faf328cc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.692 253542 DEBUG oslo_concurrency.processutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.693 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.693 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.694 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.711 253542 DEBUG nova.storage.rbd_utils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] rbd image 0e855a86-52f7-47bd-aee9-e88449169aa1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.714 253542 DEBUG oslo_concurrency.processutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0e855a86-52f7-47bd-aee9-e88449169aa1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.874 253542 DEBUG nova.network.neutron [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Updating instance_info_cache with network_info: [{"id": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "address": "fa:16:3e:ac:0e:c1", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66cd8b3d-6d", "ovs_interfaceid": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.890 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Releasing lock "refresh_cache-40912950-fedc-405c-bc49-c4a757a422dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.891 253542 DEBUG nova.compute.manager [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Instance network_info: |[{"id": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "address": "fa:16:3e:ac:0e:c1", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66cd8b3d-6d", "ovs_interfaceid": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.891 253542 DEBUG oslo_concurrency.lockutils [req-1c7b0542-aba5-494e-b136-fa6d8a60d453 req-f4753875-b0c9-4870-a6f6-3491265d6a5b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-40912950-fedc-405c-bc49-c4a757a422dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.892 253542 DEBUG nova.network.neutron [req-1c7b0542-aba5-494e-b136-fa6d8a60d453 req-f4753875-b0c9-4870-a6f6-3491265d6a5b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Refreshing network info cache for port 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.896 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Start _get_guest_xml network_info=[{"id": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "address": "fa:16:3e:ac:0e:c1", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66cd8b3d-6d", "ovs_interfaceid": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.902 253542 WARNING nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.911 253542 DEBUG nova.virt.libvirt.host [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.912 253542 DEBUG nova.virt.libvirt.host [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.915 253542 DEBUG nova.virt.libvirt.host [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.916 253542 DEBUG nova.virt.libvirt.host [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.916 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.917 253542 DEBUG nova.virt.hardware [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.917 253542 DEBUG nova.virt.hardware [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.918 253542 DEBUG nova.virt.hardware [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.918 253542 DEBUG nova.virt.hardware [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.918 253542 DEBUG nova.virt.hardware [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.919 253542 DEBUG nova.virt.hardware [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.919 253542 DEBUG nova.virt.hardware [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.919 253542 DEBUG nova.virt.hardware [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.920 253542 DEBUG nova.virt.hardware [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.920 253542 DEBUG nova.virt.hardware [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.920 253542 DEBUG nova.virt.hardware [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:37:51 compute-0 nova_compute[253538]: 2025-11-25 08:37:51.925 253542 DEBUG oslo_concurrency.processutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:52 compute-0 nova_compute[253538]: 2025-11-25 08:37:52.322 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:37:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2146415322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:52 compute-0 nova_compute[253538]: 2025-11-25 08:37:52.420 253542 DEBUG oslo_concurrency.processutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:52 compute-0 nova_compute[253538]: 2025-11-25 08:37:52.455 253542 DEBUG nova.storage.rbd_utils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 40912950-fedc-405c-bc49-c4a757a422dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:52 compute-0 nova_compute[253538]: 2025-11-25 08:37:52.463 253542 DEBUG oslo_concurrency.processutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:52 compute-0 ceph-mon[75015]: pgmap v1645: 321 pgs: 321 active+clean; 142 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 766 KiB/s rd, 2.1 MiB/s wr, 154 op/s
Nov 25 08:37:52 compute-0 nova_compute[253538]: 2025-11-25 08:37:52.701 253542 DEBUG nova.network.neutron [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Successfully created port: 8f28ea33-80c4-41cb-b191-a1b619b14515 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:37:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:37:53 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165956724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.222 253542 DEBUG oslo_concurrency.processutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0e855a86-52f7-47bd-aee9-e88449169aa1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.251 253542 DEBUG oslo_concurrency.processutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.789s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.253 253542 DEBUG nova.virt.libvirt.vif [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:37:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1323952638',display_name='tempest-ServerActionsTestOtherA-server-1323952638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1323952638',id=81,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMo29J9J/YlqPHlK0QOuxd9u7qavZRLETC6oYiP9ZRxH2YibFLdXMSToi/FhBlCSIfelYckeMDdyi6TGFiwYGSfSqEdkRnVTrWT65qSuA8Lvnahu6Qda7fogQYvU40lxKA==',key_name='tempest-keypair-1723325490',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6851917992b149818e8b44146c66bfc3',ramdisk_id='',reservation_id='r-o8s25783',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-678529119',owner_user_name='tempest-ServerActionsTestOtherA-678529119-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:37:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='24fa34332e6f4b628514969bbf76e94b',uuid=40912950-fedc-405c-bc49-c4a757a422dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "address": "fa:16:3e:ac:0e:c1", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66cd8b3d-6d", "ovs_interfaceid": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.253 253542 DEBUG nova.network.os_vif_util [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converting VIF {"id": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "address": "fa:16:3e:ac:0e:c1", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66cd8b3d-6d", "ovs_interfaceid": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.254 253542 DEBUG nova.network.os_vif_util [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:0e:c1,bridge_name='br-int',has_traffic_filtering=True,id=66cd8b3d-6d9a-4e8c-8487-6f32b15550c2,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66cd8b3d-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.255 253542 DEBUG nova.objects.instance [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 40912950-fedc-405c-bc49-c4a757a422dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:37:53
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'default.rgw.log', 'volumes', '.mgr', 'default.rgw.control', 'vms', '.rgw.root', 'cephfs.cephfs.meta']
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.316 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:37:53 compute-0 nova_compute[253538]:   <uuid>40912950-fedc-405c-bc49-c4a757a422dc</uuid>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   <name>instance-00000051</name>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerActionsTestOtherA-server-1323952638</nova:name>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:37:51</nova:creationTime>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:37:53 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:37:53 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:37:53 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:37:53 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:37:53 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:37:53 compute-0 nova_compute[253538]:         <nova:user uuid="24fa34332e6f4b628514969bbf76e94b">tempest-ServerActionsTestOtherA-678529119-project-member</nova:user>
Nov 25 08:37:53 compute-0 nova_compute[253538]:         <nova:project uuid="6851917992b149818e8b44146c66bfc3">tempest-ServerActionsTestOtherA-678529119</nova:project>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:37:53 compute-0 nova_compute[253538]:         <nova:port uuid="66cd8b3d-6d9a-4e8c-8487-6f32b15550c2">
Nov 25 08:37:53 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <system>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <entry name="serial">40912950-fedc-405c-bc49-c4a757a422dc</entry>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <entry name="uuid">40912950-fedc-405c-bc49-c4a757a422dc</entry>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     </system>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   <os>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   </os>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   <features>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   </features>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/40912950-fedc-405c-bc49-c4a757a422dc_disk">
Nov 25 08:37:53 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       </source>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:37:53 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/40912950-fedc-405c-bc49-c4a757a422dc_disk.config">
Nov 25 08:37:53 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       </source>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:37:53 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:ac:0e:c1"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <target dev="tap66cd8b3d-6d"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/40912950-fedc-405c-bc49-c4a757a422dc/console.log" append="off"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <video>
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     </video>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:37:53 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:37:53 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:37:53 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:37:53 compute-0 nova_compute[253538]: </domain>
Nov 25 08:37:53 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.317 253542 DEBUG nova.compute.manager [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Preparing to wait for external event network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.317 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "40912950-fedc-405c-bc49-c4a757a422dc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.317 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.318 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.318 253542 DEBUG nova.virt.libvirt.vif [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:37:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1323952638',display_name='tempest-ServerActionsTestOtherA-server-1323952638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1323952638',id=81,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMo29J9J/YlqPHlK0QOuxd9u7qavZRLETC6oYiP9ZRxH2YibFLdXMSToi/FhBlCSIfelYckeMDdyi6TGFiwYGSfSqEdkRnVTrWT65qSuA8Lvnahu6Qda7fogQYvU40lxKA==',key_name='tempest-keypair-1723325490',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6851917992b149818e8b44146c66bfc3',ramdisk_id='',reservation_id='r-o8s25783',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-678529119',owner_user_name='tempest-ServerActionsTestOtherA-678529119-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:37:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='24fa34332e6f4b628514969bbf76e94b',uuid=40912950-fedc-405c-bc49-c4a757a422dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "address": "fa:16:3e:ac:0e:c1", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66cd8b3d-6d", "ovs_interfaceid": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.319 253542 DEBUG nova.network.os_vif_util [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converting VIF {"id": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "address": "fa:16:3e:ac:0e:c1", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66cd8b3d-6d", "ovs_interfaceid": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.319 253542 DEBUG nova.network.os_vif_util [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:0e:c1,bridge_name='br-int',has_traffic_filtering=True,id=66cd8b3d-6d9a-4e8c-8487-6f32b15550c2,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66cd8b3d-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.320 253542 DEBUG os_vif [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:0e:c1,bridge_name='br-int',has_traffic_filtering=True,id=66cd8b3d-6d9a-4e8c-8487-6f32b15550c2,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66cd8b3d-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.320 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.321 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.321 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.323 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.323 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66cd8b3d-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.324 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap66cd8b3d-6d, col_values=(('external_ids', {'iface-id': '66cd8b3d-6d9a-4e8c-8487-6f32b15550c2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:0e:c1', 'vm-uuid': '40912950-fedc-405c-bc49-c4a757a422dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.325 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:53 compute-0 NetworkManager[48915]: <info>  [1764059873.3262] manager: (tap66cd8b3d-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.328 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.337 253542 DEBUG nova.storage.rbd_utils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] resizing rbd image 0e855a86-52f7-47bd-aee9-e88449169aa1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1646: 321 pgs: 321 active+clean; 190 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 189 op/s
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.379 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.381 253542 INFO os_vif [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:0e:c1,bridge_name='br-int',has_traffic_filtering=True,id=66cd8b3d-6d9a-4e8c-8487-6f32b15550c2,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66cd8b3d-6d')
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.438 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.438 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.438 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] No VIF found with MAC fa:16:3e:ac:0e:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.439 253542 INFO nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Using config drive
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.458 253542 DEBUG nova.storage.rbd_utils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 40912950-fedc-405c-bc49-c4a757a422dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.482 253542 DEBUG nova.network.neutron [req-1c7b0542-aba5-494e-b136-fa6d8a60d453 req-f4753875-b0c9-4870-a6f6-3491265d6a5b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Updated VIF entry in instance network info cache for port 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.482 253542 DEBUG nova.network.neutron [req-1c7b0542-aba5-494e-b136-fa6d8a60d453 req-f4753875-b0c9-4870-a6f6-3491265d6a5b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Updating instance_info_cache with network_info: [{"id": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "address": "fa:16:3e:ac:0e:c1", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66cd8b3d-6d", "ovs_interfaceid": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.502 253542 DEBUG oslo_concurrency.lockutils [req-1c7b0542-aba5-494e-b136-fa6d8a60d453 req-f4753875-b0c9-4870-a6f6-3491265d6a5b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-40912950-fedc-405c-bc49-c4a757a422dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:37:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.909 253542 DEBUG nova.network.neutron [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Successfully updated port: 8f28ea33-80c4-41cb-b191-a1b619b14515 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:37:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2146415322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1165956724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:53 compute-0 ceph-mon[75015]: pgmap v1646: 321 pgs: 321 active+clean; 190 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 189 op/s
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.923 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Acquiring lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.923 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Acquired lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:37:53 compute-0 nova_compute[253538]: 2025-11-25 08:37:53.924 253542 DEBUG nova.network.neutron [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:37:54 compute-0 nova_compute[253538]: 2025-11-25 08:37:54.060 253542 DEBUG nova.compute.manager [req-ab401a35-60a3-42dd-b006-893710cdc7f1 req-718177f0-76f9-4dae-9a61-564b182f71c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received event network-changed-8f28ea33-80c4-41cb-b191-a1b619b14515 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:54 compute-0 nova_compute[253538]: 2025-11-25 08:37:54.061 253542 DEBUG nova.compute.manager [req-ab401a35-60a3-42dd-b006-893710cdc7f1 req-718177f0-76f9-4dae-9a61-564b182f71c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Refreshing instance network info cache due to event network-changed-8f28ea33-80c4-41cb-b191-a1b619b14515. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:37:54 compute-0 nova_compute[253538]: 2025-11-25 08:37:54.061 253542 DEBUG oslo_concurrency.lockutils [req-ab401a35-60a3-42dd-b006-893710cdc7f1 req-718177f0-76f9-4dae-9a61-564b182f71c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:37:54 compute-0 nova_compute[253538]: 2025-11-25 08:37:54.091 253542 INFO nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Creating config drive at /var/lib/nova/instances/40912950-fedc-405c-bc49-c4a757a422dc/disk.config
Nov 25 08:37:54 compute-0 nova_compute[253538]: 2025-11-25 08:37:54.103 253542 DEBUG oslo_concurrency.processutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/40912950-fedc-405c-bc49-c4a757a422dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbehupf0l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:54 compute-0 nova_compute[253538]: 2025-11-25 08:37:54.149 253542 DEBUG nova.network.neutron [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:37:54 compute-0 nova_compute[253538]: 2025-11-25 08:37:54.257 253542 DEBUG oslo_concurrency.processutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/40912950-fedc-405c-bc49-c4a757a422dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbehupf0l" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:54 compute-0 nova_compute[253538]: 2025-11-25 08:37:54.281 253542 DEBUG nova.storage.rbd_utils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 40912950-fedc-405c-bc49-c4a757a422dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:54 compute-0 nova_compute[253538]: 2025-11-25 08:37:54.285 253542 DEBUG oslo_concurrency.processutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/40912950-fedc-405c-bc49-c4a757a422dc/disk.config 40912950-fedc-405c-bc49-c4a757a422dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:54 compute-0 nova_compute[253538]: 2025-11-25 08:37:54.330 253542 DEBUG nova.objects.instance [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lazy-loading 'migration_context' on Instance uuid 0e855a86-52f7-47bd-aee9-e88449169aa1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:54 compute-0 nova_compute[253538]: 2025-11-25 08:37:54.345 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:37:54 compute-0 nova_compute[253538]: 2025-11-25 08:37:54.346 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Ensure instance console log exists: /var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:37:54 compute-0 nova_compute[253538]: 2025-11-25 08:37:54.346 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:54 compute-0 nova_compute[253538]: 2025-11-25 08:37:54.346 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:54 compute-0 nova_compute[253538]: 2025-11-25 08:37:54.347 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.091 253542 DEBUG oslo_concurrency.processutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/40912950-fedc-405c-bc49-c4a757a422dc/disk.config 40912950-fedc-405c-bc49-c4a757a422dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.805s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.091 253542 INFO nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Deleting local config drive /var/lib/nova/instances/40912950-fedc-405c-bc49-c4a757a422dc/disk.config because it was imported into RBD.
Nov 25 08:37:55 compute-0 kernel: tap66cd8b3d-6d: entered promiscuous mode
Nov 25 08:37:55 compute-0 NetworkManager[48915]: <info>  [1764059875.1597] manager: (tap66cd8b3d-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/336)
Nov 25 08:37:55 compute-0 ovn_controller[152859]: 2025-11-25T08:37:55Z|00774|binding|INFO|Claiming lport 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 for this chassis.
Nov 25 08:37:55 compute-0 ovn_controller[152859]: 2025-11-25T08:37:55Z|00775|binding|INFO|66cd8b3d-6d9a-4e8c-8487-6f32b15550c2: Claiming fa:16:3e:ac:0e:c1 10.100.0.6
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.166 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.186 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:0e:c1 10.100.0.6'], port_security=['fa:16:3e:ac:0e:c1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '40912950-fedc-405c-bc49-c4a757a422dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b676104-a53a-419a-a348-631c409e45c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6851917992b149818e8b44146c66bfc3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0db09afa-021a-4418-8ad3-d5c78354b9bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=062facaf-eaf1-4c53-a009-db8e8d32ad6e, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=66cd8b3d-6d9a-4e8c-8487-6f32b15550c2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.189 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 in datapath 2b676104-a53a-419a-a348-631c409e45c0 bound to our chassis
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.192 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b676104-a53a-419a-a348-631c409e45c0
Nov 25 08:37:55 compute-0 systemd-machined[215790]: New machine qemu-99-instance-00000051.
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.210 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[27d5159a-b72e-4477-9e6c-cf7c986b3d7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.211 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b676104-a1 in ovnmeta-2b676104-a53a-419a-a348-631c409e45c0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.214 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b676104-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.214 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d22c8ae-df1e-44db-bf6e-6a4be820244a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.215 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[146e5bf6-4b87-48b7-9cac-87326f8d4b15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:55 compute-0 systemd[1]: Started Virtual Machine qemu-99-instance-00000051.
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.232 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[acb8ea0d-a09c-47b6-83b0-00e18431f80c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:55 compute-0 systemd-udevd[333688]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.264 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[349c71c3-c8d7-4a46-80f3-f14a1c1ee63c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.273 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:55 compute-0 ovn_controller[152859]: 2025-11-25T08:37:55Z|00776|binding|INFO|Setting lport 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 ovn-installed in OVS
Nov 25 08:37:55 compute-0 ovn_controller[152859]: 2025-11-25T08:37:55Z|00777|binding|INFO|Setting lport 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 up in Southbound
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.276 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:55 compute-0 NetworkManager[48915]: <info>  [1764059875.2824] device (tap66cd8b3d-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:37:55 compute-0 NetworkManager[48915]: <info>  [1764059875.2835] device (tap66cd8b3d-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.308 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[19bd7495-2bef-474a-bd3e-91a9b5fd775e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.319 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a72a7cb-1914-43c8-a305-6d3850e3e52b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:55 compute-0 NetworkManager[48915]: <info>  [1764059875.3206] manager: (tap2b676104-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/337)
Nov 25 08:37:55 compute-0 systemd-udevd[333693]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:37:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1647: 321 pgs: 321 active+clean; 204 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.4 MiB/s wr, 199 op/s
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.361 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[30995245-ba0e-41b1-8f80-9dc32642b40d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.364 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[58c2792d-dd98-44d8-8ed5-8997d5e92476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:55 compute-0 NetworkManager[48915]: <info>  [1764059875.3900] device (tap2b676104-a0): carrier: link connected
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.396 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7c46b5-158f-4b68-b628-66abe0ba4ab0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.414 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2eee36c9-0083-4448-a08e-f8fdfea104a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b676104-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:55:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522998, 'reachable_time': 29715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333718, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.433 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[210fc883-b13d-4c78-8387-fa2aa45d3487]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8d:559b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522998, 'tstamp': 522998}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333719, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.451 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6c64f53d-a9d6-4d78-b181-c978d8da94af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b676104-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:55:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522998, 'reachable_time': 29715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333720, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.490 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c6fcb8dc-9484-47da-89aa-652f1f44cca2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.556 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d663fc4b-0d54-4976-b892-a3b60037560d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.557 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b676104-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.557 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.558 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b676104-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.591 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:55 compute-0 NetworkManager[48915]: <info>  [1764059875.5918] manager: (tap2b676104-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Nov 25 08:37:55 compute-0 kernel: tap2b676104-a0: entered promiscuous mode
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.596 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b676104-a0, col_values=(('external_ids', {'iface-id': 'a70ff8dd-5248-427b-8c9b-80eee3a671f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.597 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:55 compute-0 ovn_controller[152859]: 2025-11-25T08:37:55Z|00778|binding|INFO|Releasing lport a70ff8dd-5248-427b-8c9b-80eee3a671f3 from this chassis (sb_readonly=0)
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.597 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.598 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b676104-a53a-419a-a348-631c409e45c0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b676104-a53a-419a-a348-631c409e45c0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.599 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[113b47c2-dad0-41ef-98f8-0e678fb51e2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.600 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-2b676104-a53a-419a-a348-631c409e45c0
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/2b676104-a53a-419a-a348-631c409e45c0.pid.haproxy
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 2b676104-a53a-419a-a348-631c409e45c0
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:37:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:55.601 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'env', 'PROCESS_TAG=haproxy-2b676104-a53a-419a-a348-631c409e45c0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b676104-a53a-419a-a348-631c409e45c0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.618 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.669 253542 DEBUG nova.network.neutron [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Updating instance_info_cache with network_info: [{"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.694 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059875.693598, 40912950-fedc-405c-bc49-c4a757a422dc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.694 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] VM Started (Lifecycle Event)
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.715 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Releasing lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.716 253542 DEBUG nova.compute.manager [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Instance network_info: |[{"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.716 253542 DEBUG oslo_concurrency.lockutils [req-ab401a35-60a3-42dd-b006-893710cdc7f1 req-718177f0-76f9-4dae-9a61-564b182f71c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.716 253542 DEBUG nova.network.neutron [req-ab401a35-60a3-42dd-b006-893710cdc7f1 req-718177f0-76f9-4dae-9a61-564b182f71c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Refreshing network info cache for port 8f28ea33-80c4-41cb-b191-a1b619b14515 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.719 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Start _get_guest_xml network_info=[{"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.721 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.726 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059875.6937592, 40912950-fedc-405c-bc49-c4a757a422dc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.727 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] VM Paused (Lifecycle Event)
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.728 253542 WARNING nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.745 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.749 253542 DEBUG nova.virt.libvirt.host [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.751 253542 DEBUG nova.virt.libvirt.host [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.752 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.755 253542 DEBUG nova.virt.libvirt.host [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.755 253542 DEBUG nova.virt.libvirt.host [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.756 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.756 253542 DEBUG nova.virt.hardware [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.756 253542 DEBUG nova.virt.hardware [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.756 253542 DEBUG nova.virt.hardware [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.757 253542 DEBUG nova.virt.hardware [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.757 253542 DEBUG nova.virt.hardware [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.757 253542 DEBUG nova.virt.hardware [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.757 253542 DEBUG nova.virt.hardware [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.757 253542 DEBUG nova.virt.hardware [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.758 253542 DEBUG nova.virt.hardware [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.758 253542 DEBUG nova.virt.hardware [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.758 253542 DEBUG nova.virt.hardware [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.760 253542 DEBUG oslo_concurrency.processutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.798 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.956 253542 DEBUG nova.compute.manager [req-b8e0af9c-d993-48cd-96a6-97b3b70ff3ef req-d72662ce-e845-44a2-b721-96adff121c61 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received event network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.957 253542 DEBUG oslo_concurrency.lockutils [req-b8e0af9c-d993-48cd-96a6-97b3b70ff3ef req-d72662ce-e845-44a2-b721-96adff121c61 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "40912950-fedc-405c-bc49-c4a757a422dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.957 253542 DEBUG oslo_concurrency.lockutils [req-b8e0af9c-d993-48cd-96a6-97b3b70ff3ef req-d72662ce-e845-44a2-b721-96adff121c61 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.957 253542 DEBUG oslo_concurrency.lockutils [req-b8e0af9c-d993-48cd-96a6-97b3b70ff3ef req-d72662ce-e845-44a2-b721-96adff121c61 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.958 253542 DEBUG nova.compute.manager [req-b8e0af9c-d993-48cd-96a6-97b3b70ff3ef req-d72662ce-e845-44a2-b721-96adff121c61 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Processing event network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.959 253542 DEBUG nova.compute.manager [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.962 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059875.9623313, 40912950-fedc-405c-bc49-c4a757a422dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.963 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] VM Resumed (Lifecycle Event)
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.965 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.969 253542 INFO nova.virt.libvirt.driver [-] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Instance spawned successfully.
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.969 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.982 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.989 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.995 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.995 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:55 compute-0 nova_compute[253538]: 2025-11-25 08:37:55.996 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.000 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.001 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.001 253542 DEBUG nova.virt.libvirt.driver [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.007 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.066 253542 INFO nova.compute.manager [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Took 8.74 seconds to spawn the instance on the hypervisor.
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.066 253542 DEBUG nova.compute.manager [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:56 compute-0 podman[333812]: 2025-11-25 08:37:55.989200534 +0000 UTC m=+0.043505876 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.122 253542 INFO nova.compute.manager [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Took 9.75 seconds to build instance.
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.137 253542 DEBUG oslo_concurrency.lockutils [None req-341a2362-f97d-46de-9a7b-3d552bac075c 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:56 compute-0 podman[333812]: 2025-11-25 08:37:56.192904766 +0000 UTC m=+0.247210128 container create 2fd87ee2c4824a219687f71c5c0fc46482cbf262bbef3a4d2bb38cf9edb05a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b676104-a53a-419a-a348-631c409e45c0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:37:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:37:56 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1723845267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.233 253542 DEBUG oslo_concurrency.processutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.263 253542 DEBUG nova.storage.rbd_utils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] rbd image 0e855a86-52f7-47bd-aee9-e88449169aa1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:56 compute-0 systemd[1]: Started libpod-conmon-2fd87ee2c4824a219687f71c5c0fc46482cbf262bbef3a4d2bb38cf9edb05a7d.scope.
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.270 253542 DEBUG oslo_concurrency.processutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56e410089cfa091c4670318839e74a67b6225fde95a89bb4287cf8649ba80a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:37:56 compute-0 podman[333812]: 2025-11-25 08:37:56.341172867 +0000 UTC m=+0.395478199 container init 2fd87ee2c4824a219687f71c5c0fc46482cbf262bbef3a4d2bb38cf9edb05a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b676104-a53a-419a-a348-631c409e45c0, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:37:56 compute-0 podman[333812]: 2025-11-25 08:37:56.351824917 +0000 UTC m=+0.406130239 container start 2fd87ee2c4824a219687f71c5c0fc46482cbf262bbef3a4d2bb38cf9edb05a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b676104-a53a-419a-a348-631c409e45c0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 08:37:56 compute-0 neutron-haproxy-ovnmeta-2b676104-a53a-419a-a348-631c409e45c0[333848]: [NOTICE]   (333853) : New worker (333868) forked
Nov 25 08:37:56 compute-0 neutron-haproxy-ovnmeta-2b676104-a53a-419a-a348-631c409e45c0[333848]: [NOTICE]   (333853) : Loading success.
Nov 25 08:37:56 compute-0 ceph-mon[75015]: pgmap v1647: 321 pgs: 321 active+clean; 204 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.4 MiB/s wr, 199 op/s
Nov 25 08:37:56 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1723845267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.414 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059861.4126532, 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.416 253542 INFO nova.compute.manager [-] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] VM Stopped (Lifecycle Event)
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.434 253542 DEBUG nova.compute.manager [None req-763c2b50-7477-4b4b-9db5-0b68c7c24c10 - - - - - -] [instance: 884b0bf9-764d-4aa8-8bcb-c9e8644a0dad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.561 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:37:56 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1096198090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.698 253542 DEBUG oslo_concurrency.processutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.700 253542 DEBUG nova.virt.libvirt.vif [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:37:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1695294575',display_name='tempest-ServerRescueTestJSONUnderV235-server-1695294575',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1695294575',id=82,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ae570c13ba047bca1859d62faf328cc',ramdisk_id='',reservation_id='r-df750kfa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2082720401',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2082720401-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:37:51Z,user_data=None,user_id='2c27b17fb49c46f2877860b2f7123ef2',uuid=0e855a86-52f7-47bd-aee9-e88449169aa1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.701 253542 DEBUG nova.network.os_vif_util [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Converting VIF {"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.702 253542 DEBUG nova.network.os_vif_util [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:2c:1d,bridge_name='br-int',has_traffic_filtering=True,id=8f28ea33-80c4-41cb-b191-a1b619b14515,network=Network(6346f61b-3f62-4471-b87c-676053219f02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f28ea33-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.704 253542 DEBUG nova.objects.instance [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 0e855a86-52f7-47bd-aee9-e88449169aa1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.717 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:37:56 compute-0 nova_compute[253538]:   <uuid>0e855a86-52f7-47bd-aee9-e88449169aa1</uuid>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   <name>instance-00000052</name>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1695294575</nova:name>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:37:55</nova:creationTime>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:37:56 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:37:56 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:37:56 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:37:56 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:37:56 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:37:56 compute-0 nova_compute[253538]:         <nova:user uuid="2c27b17fb49c46f2877860b2f7123ef2">tempest-ServerRescueTestJSONUnderV235-2082720401-project-member</nova:user>
Nov 25 08:37:56 compute-0 nova_compute[253538]:         <nova:project uuid="6ae570c13ba047bca1859d62faf328cc">tempest-ServerRescueTestJSONUnderV235-2082720401</nova:project>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:37:56 compute-0 nova_compute[253538]:         <nova:port uuid="8f28ea33-80c4-41cb-b191-a1b619b14515">
Nov 25 08:37:56 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <system>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <entry name="serial">0e855a86-52f7-47bd-aee9-e88449169aa1</entry>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <entry name="uuid">0e855a86-52f7-47bd-aee9-e88449169aa1</entry>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     </system>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   <os>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   </os>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   <features>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   </features>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0e855a86-52f7-47bd-aee9-e88449169aa1_disk">
Nov 25 08:37:56 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       </source>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:37:56 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0e855a86-52f7-47bd-aee9-e88449169aa1_disk.config">
Nov 25 08:37:56 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       </source>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:37:56 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:51:2c:1d"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <target dev="tap8f28ea33-80"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1/console.log" append="off"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <video>
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     </video>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:37:56 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:37:56 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:37:56 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:37:56 compute-0 nova_compute[253538]: </domain>
Nov 25 08:37:56 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.718 253542 DEBUG nova.compute.manager [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Preparing to wait for external event network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.718 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Acquiring lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.718 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.719 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.719 253542 DEBUG nova.virt.libvirt.vif [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:37:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1695294575',display_name='tempest-ServerRescueTestJSONUnderV235-server-1695294575',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1695294575',id=82,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ae570c13ba047bca1859d62faf328cc',ramdisk_id='',reservation_id='r-df750kfa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2082720401',owner
_user_name='tempest-ServerRescueTestJSONUnderV235-2082720401-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:37:51Z,user_data=None,user_id='2c27b17fb49c46f2877860b2f7123ef2',uuid=0e855a86-52f7-47bd-aee9-e88449169aa1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.720 253542 DEBUG nova.network.os_vif_util [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Converting VIF {"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.720 253542 DEBUG nova.network.os_vif_util [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:2c:1d,bridge_name='br-int',has_traffic_filtering=True,id=8f28ea33-80c4-41cb-b191-a1b619b14515,network=Network(6346f61b-3f62-4471-b87c-676053219f02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f28ea33-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.721 253542 DEBUG os_vif [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:2c:1d,bridge_name='br-int',has_traffic_filtering=True,id=8f28ea33-80c4-41cb-b191-a1b619b14515,network=Network(6346f61b-3f62-4471-b87c-676053219f02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f28ea33-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.721 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.721 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.722 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:37:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.726 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f28ea33-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.726 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f28ea33-80, col_values=(('external_ids', {'iface-id': '8f28ea33-80c4-41cb-b191-a1b619b14515', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:51:2c:1d', 'vm-uuid': '0e855a86-52f7-47bd-aee9-e88449169aa1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:37:56 compute-0 NetworkManager[48915]: <info>  [1764059876.7848] manager: (tap8f28ea33-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/339)
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.786 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.792 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.794 253542 INFO os_vif [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:2c:1d,bridge_name='br-int',has_traffic_filtering=True,id=8f28ea33-80c4-41cb-b191-a1b619b14515,network=Network(6346f61b-3f62-4471-b87c-676053219f02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f28ea33-80')
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.869 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.870 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.878 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.878 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.878 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] No VIF found with MAC fa:16:3e:51:2c:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.879 253542 INFO nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Using config drive
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.920 253542 DEBUG nova.storage.rbd_utils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] rbd image 0e855a86-52f7-47bd-aee9-e88449169aa1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:56 compute-0 nova_compute[253538]: 2025-11-25 08:37:56.931 253542 DEBUG nova.compute.manager [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.056 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.057 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.063 253542 DEBUG nova.virt.hardware [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.063 253542 INFO nova.compute.claims [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.165 253542 DEBUG nova.network.neutron [req-ab401a35-60a3-42dd-b006-893710cdc7f1 req-718177f0-76f9-4dae-9a61-564b182f71c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Updated VIF entry in instance network info cache for port 8f28ea33-80c4-41cb-b191-a1b619b14515. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.166 253542 DEBUG nova.network.neutron [req-ab401a35-60a3-42dd-b006-893710cdc7f1 req-718177f0-76f9-4dae-9a61-564b182f71c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Updating instance_info_cache with network_info: [{"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.181 253542 DEBUG oslo_concurrency.lockutils [req-ab401a35-60a3-42dd-b006-893710cdc7f1 req-718177f0-76f9-4dae-9a61-564b182f71c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.233 253542 INFO nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Creating config drive at /var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1/disk.config
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.240 253542 DEBUG oslo_concurrency.processutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphyq6ml44 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.282 253542 DEBUG oslo_concurrency.processutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1648: 321 pgs: 321 active+clean; 227 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 186 op/s
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.387 253542 DEBUG oslo_concurrency.processutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphyq6ml44" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.418 253542 DEBUG nova.storage.rbd_utils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] rbd image 0e855a86-52f7-47bd-aee9-e88449169aa1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:57 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1096198090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.428 253542 DEBUG oslo_concurrency.processutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1/disk.config 0e855a86-52f7-47bd-aee9-e88449169aa1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.600 253542 DEBUG oslo_concurrency.processutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1/disk.config 0e855a86-52f7-47bd-aee9-e88449169aa1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.601 253542 INFO nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Deleting local config drive /var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1/disk.config because it was imported into RBD.
Nov 25 08:37:57 compute-0 kernel: tap8f28ea33-80: entered promiscuous mode
Nov 25 08:37:57 compute-0 NetworkManager[48915]: <info>  [1764059877.6529] manager: (tap8f28ea33-80): new Tun device (/org/freedesktop/NetworkManager/Devices/340)
Nov 25 08:37:57 compute-0 ovn_controller[152859]: 2025-11-25T08:37:57Z|00779|binding|INFO|Claiming lport 8f28ea33-80c4-41cb-b191-a1b619b14515 for this chassis.
Nov 25 08:37:57 compute-0 systemd-udevd[333712]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.654 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:57 compute-0 ovn_controller[152859]: 2025-11-25T08:37:57Z|00780|binding|INFO|8f28ea33-80c4-41cb-b191-a1b619b14515: Claiming fa:16:3e:51:2c:1d 10.100.0.6
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.660 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:57 compute-0 NetworkManager[48915]: <info>  [1764059877.6710] device (tap8f28ea33-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:37:57 compute-0 NetworkManager[48915]: <info>  [1764059877.6718] device (tap8f28ea33-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:37:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:57.671 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:2c:1d 10.100.0.6'], port_security=['fa:16:3e:51:2c:1d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0e855a86-52f7-47bd-aee9-e88449169aa1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6346f61b-3f62-4471-b87c-676053219f02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ae570c13ba047bca1859d62faf328cc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1118a317-9e94-4c83-9854-1785d0154360', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2aacef0e-2524-4118-9960-da2e22fd24eb, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8f28ea33-80c4-41cb-b191-a1b619b14515) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:37:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:57.673 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8f28ea33-80c4-41cb-b191-a1b619b14515 in datapath 6346f61b-3f62-4471-b87c-676053219f02 bound to our chassis
Nov 25 08:37:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:57.674 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6346f61b-3f62-4471-b87c-676053219f02 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 08:37:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:37:57.675 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[51184b23-2f21-4c16-b383-3f85bebf8e59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:37:57 compute-0 systemd-machined[215790]: New machine qemu-100-instance-00000052.
Nov 25 08:37:57 compute-0 systemd[1]: Started Virtual Machine qemu-100-instance-00000052.
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.749 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:57 compute-0 ovn_controller[152859]: 2025-11-25T08:37:57Z|00781|binding|INFO|Setting lport 8f28ea33-80c4-41cb-b191-a1b619b14515 ovn-installed in OVS
Nov 25 08:37:57 compute-0 ovn_controller[152859]: 2025-11-25T08:37:57Z|00782|binding|INFO|Setting lport 8f28ea33-80c4-41cb-b191-a1b619b14515 up in Southbound
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.753 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:37:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:37:57 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4237955713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.782 253542 DEBUG oslo_concurrency.processutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.789 253542 DEBUG nova.compute.provider_tree [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.803 253542 DEBUG nova.scheduler.client.report [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.827 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.828 253542 DEBUG nova.compute.manager [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.874 253542 DEBUG nova.compute.manager [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.874 253542 DEBUG nova.network.neutron [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.893 253542 INFO nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:37:57 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.908 253542 DEBUG nova.compute.manager [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:57.999 253542 DEBUG nova.compute.manager [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.001 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.002 253542 INFO nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Creating image(s)
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.029 253542 DEBUG nova.storage.rbd_utils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image a5ba6c2a-b5cd-4ced-b648-e54a941cda0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.054 253542 DEBUG nova.storage.rbd_utils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image a5ba6c2a-b5cd-4ced-b648-e54a941cda0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.080 253542 DEBUG nova.storage.rbd_utils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image a5ba6c2a-b5cd-4ced-b648-e54a941cda0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.083 253542 DEBUG oslo_concurrency.processutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.178 253542 DEBUG oslo_concurrency.processutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.180 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.181 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.181 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.201 253542 DEBUG nova.storage.rbd_utils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image a5ba6c2a-b5cd-4ced-b648-e54a941cda0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.204 253542 DEBUG oslo_concurrency.processutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a5ba6c2a-b5cd-4ced-b648-e54a941cda0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.358 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059878.3581586, 0e855a86-52f7-47bd-aee9-e88449169aa1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.359 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] VM Started (Lifecycle Event)
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.369 253542 DEBUG nova.policy [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fdcb005cc49a4dfa82152f2c0817cc94', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b730f086c4b94185afab5e10fa2e8181', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.387 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.395 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059878.3586903, 0e855a86-52f7-47bd-aee9-e88449169aa1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.395 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] VM Paused (Lifecycle Event)
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.413 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.416 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.438 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.450 253542 DEBUG nova.compute.manager [req-9c8782e0-ba97-4149-b5a7-46a3e09fe3c0 req-4f229c67-871f-4153-be09-32d153e13a03 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received event network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.450 253542 DEBUG oslo_concurrency.lockutils [req-9c8782e0-ba97-4149-b5a7-46a3e09fe3c0 req-4f229c67-871f-4153-be09-32d153e13a03 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "40912950-fedc-405c-bc49-c4a757a422dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.451 253542 DEBUG oslo_concurrency.lockutils [req-9c8782e0-ba97-4149-b5a7-46a3e09fe3c0 req-4f229c67-871f-4153-be09-32d153e13a03 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.451 253542 DEBUG oslo_concurrency.lockutils [req-9c8782e0-ba97-4149-b5a7-46a3e09fe3c0 req-4f229c67-871f-4153-be09-32d153e13a03 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.451 253542 DEBUG nova.compute.manager [req-9c8782e0-ba97-4149-b5a7-46a3e09fe3c0 req-4f229c67-871f-4153-be09-32d153e13a03 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] No waiting events found dispatching network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.452 253542 WARNING nova.compute.manager [req-9c8782e0-ba97-4149-b5a7-46a3e09fe3c0 req-4f229c67-871f-4153-be09-32d153e13a03 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received unexpected event network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 for instance with vm_state active and task_state None.
Nov 25 08:37:58 compute-0 ceph-mon[75015]: pgmap v1648: 321 pgs: 321 active+clean; 227 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 186 op/s
Nov 25 08:37:58 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4237955713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.545 253542 DEBUG oslo_concurrency.processutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a5ba6c2a-b5cd-4ced-b648-e54a941cda0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.610 253542 DEBUG nova.storage.rbd_utils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] resizing rbd image a5ba6c2a-b5cd-4ced-b648-e54a941cda0d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.700 253542 DEBUG nova.objects.instance [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'migration_context' on Instance uuid a5ba6c2a-b5cd-4ced-b648-e54a941cda0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.710 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.710 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Ensure instance console log exists: /var/lib/nova/instances/a5ba6c2a-b5cd-4ced-b648-e54a941cda0d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.711 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.711 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:37:58 compute-0 nova_compute[253538]: 2025-11-25 08:37:58.711 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:37:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1649: 321 pgs: 321 active+clean; 227 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 182 op/s
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.006 253542 DEBUG nova.network.neutron [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Successfully created port: c9ccc817-d521-4b07-bb91-ca98938f9d7b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:38:00 compute-0 ovn_controller[152859]: 2025-11-25T08:38:00Z|00783|binding|INFO|Releasing lport a70ff8dd-5248-427b-8c9b-80eee3a671f3 from this chassis (sb_readonly=0)
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.078 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:00 compute-0 ovn_controller[152859]: 2025-11-25T08:38:00Z|00784|binding|INFO|Releasing lport 185991f0-acab-400e-baff-76794035d44a from this chassis (sb_readonly=0)
Nov 25 08:38:00 compute-0 NetworkManager[48915]: <info>  [1764059880.0802] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Nov 25 08:38:00 compute-0 NetworkManager[48915]: <info>  [1764059880.0824] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Nov 25 08:38:00 compute-0 ovn_controller[152859]: 2025-11-25T08:38:00Z|00785|binding|INFO|Releasing lport a70ff8dd-5248-427b-8c9b-80eee3a671f3 from this chassis (sb_readonly=0)
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.222 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:00 compute-0 ovn_controller[152859]: 2025-11-25T08:38:00Z|00786|binding|INFO|Releasing lport 185991f0-acab-400e-baff-76794035d44a from this chassis (sb_readonly=0)
Nov 25 08:38:00 compute-0 ceph-mon[75015]: pgmap v1649: 321 pgs: 321 active+clean; 227 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 182 op/s
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.582 253542 DEBUG nova.compute.manager [req-b1727e48-1644-4e2e-8fbb-0c68c4ba7670 req-442c3c1e-5087-4750-8df9-5125cc6c756e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received event network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.582 253542 DEBUG oslo_concurrency.lockutils [req-b1727e48-1644-4e2e-8fbb-0c68c4ba7670 req-442c3c1e-5087-4750-8df9-5125cc6c756e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.583 253542 DEBUG oslo_concurrency.lockutils [req-b1727e48-1644-4e2e-8fbb-0c68c4ba7670 req-442c3c1e-5087-4750-8df9-5125cc6c756e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.584 253542 DEBUG oslo_concurrency.lockutils [req-b1727e48-1644-4e2e-8fbb-0c68c4ba7670 req-442c3c1e-5087-4750-8df9-5125cc6c756e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.584 253542 DEBUG nova.compute.manager [req-b1727e48-1644-4e2e-8fbb-0c68c4ba7670 req-442c3c1e-5087-4750-8df9-5125cc6c756e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Processing event network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.585 253542 DEBUG nova.compute.manager [req-b1727e48-1644-4e2e-8fbb-0c68c4ba7670 req-442c3c1e-5087-4750-8df9-5125cc6c756e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received event network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.585 253542 DEBUG oslo_concurrency.lockutils [req-b1727e48-1644-4e2e-8fbb-0c68c4ba7670 req-442c3c1e-5087-4750-8df9-5125cc6c756e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.586 253542 DEBUG oslo_concurrency.lockutils [req-b1727e48-1644-4e2e-8fbb-0c68c4ba7670 req-442c3c1e-5087-4750-8df9-5125cc6c756e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.587 253542 DEBUG oslo_concurrency.lockutils [req-b1727e48-1644-4e2e-8fbb-0c68c4ba7670 req-442c3c1e-5087-4750-8df9-5125cc6c756e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.587 253542 DEBUG nova.compute.manager [req-b1727e48-1644-4e2e-8fbb-0c68c4ba7670 req-442c3c1e-5087-4750-8df9-5125cc6c756e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] No waiting events found dispatching network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.588 253542 WARNING nova.compute.manager [req-b1727e48-1644-4e2e-8fbb-0c68c4ba7670 req-442c3c1e-5087-4750-8df9-5125cc6c756e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received unexpected event network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 for instance with vm_state building and task_state spawning.
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.589 253542 DEBUG nova.compute.manager [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.600 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.601 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059880.6011553, 0e855a86-52f7-47bd-aee9-e88449169aa1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.602 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] VM Resumed (Lifecycle Event)
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.608 253542 INFO nova.virt.libvirt.driver [-] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Instance spawned successfully.
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.608 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.635 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.641 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.642 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.643 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.644 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.644 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.645 253542 DEBUG nova.virt.libvirt.driver [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.653 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.679 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.707 253542 INFO nova.compute.manager [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Took 9.16 seconds to spawn the instance on the hypervisor.
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.708 253542 DEBUG nova.compute.manager [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.769 253542 INFO nova.compute.manager [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Took 10.13 seconds to build instance.
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.772 253542 DEBUG nova.network.neutron [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Successfully updated port: c9ccc817-d521-4b07-bb91-ca98938f9d7b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.788 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "refresh_cache-a5ba6c2a-b5cd-4ced-b648-e54a941cda0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.789 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquired lock "refresh_cache-a5ba6c2a-b5cd-4ced-b648-e54a941cda0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.789 253542 DEBUG nova.network.neutron [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.795 253542 DEBUG oslo_concurrency.lockutils [None req-dd2b0da4-e957-4ab2-8560-72387eedbf24 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:00 compute-0 nova_compute[253538]: 2025-11-25 08:38:00.985 253542 DEBUG nova.network.neutron [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:38:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1650: 321 pgs: 321 active+clean; 248 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 187 op/s
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.539 253542 DEBUG nova.compute.manager [req-b0dc7a84-acbc-4a89-b1d5-e50ff9564dd2 req-231383eb-684a-481f-8013-35155b6552ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received event network-changed-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.540 253542 DEBUG nova.compute.manager [req-b0dc7a84-acbc-4a89-b1d5-e50ff9564dd2 req-231383eb-684a-481f-8013-35155b6552ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Refreshing instance network info cache due to event network-changed-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.540 253542 DEBUG oslo_concurrency.lockutils [req-b0dc7a84-acbc-4a89-b1d5-e50ff9564dd2 req-231383eb-684a-481f-8013-35155b6552ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-40912950-fedc-405c-bc49-c4a757a422dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.541 253542 DEBUG oslo_concurrency.lockutils [req-b0dc7a84-acbc-4a89-b1d5-e50ff9564dd2 req-231383eb-684a-481f-8013-35155b6552ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-40912950-fedc-405c-bc49-c4a757a422dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.541 253542 DEBUG nova.network.neutron [req-b0dc7a84-acbc-4a89-b1d5-e50ff9564dd2 req-231383eb-684a-481f-8013-35155b6552ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Refreshing network info cache for port 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.563 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.656 253542 INFO nova.compute.manager [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Rescuing
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.657 253542 DEBUG oslo_concurrency.lockutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Acquiring lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.657 253542 DEBUG oslo_concurrency.lockutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Acquired lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.658 253542 DEBUG nova.network.neutron [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.676 253542 DEBUG nova.network.neutron [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Updating instance_info_cache with network_info: [{"id": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "address": "fa:16:3e:09:ad:91", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9ccc817-d5", "ovs_interfaceid": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.697 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Releasing lock "refresh_cache-a5ba6c2a-b5cd-4ced-b648-e54a941cda0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.698 253542 DEBUG nova.compute.manager [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Instance network_info: |[{"id": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "address": "fa:16:3e:09:ad:91", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9ccc817-d5", "ovs_interfaceid": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.701 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Start _get_guest_xml network_info=[{"id": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "address": "fa:16:3e:09:ad:91", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9ccc817-d5", "ovs_interfaceid": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.706 253542 WARNING nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.711 253542 DEBUG nova.virt.libvirt.host [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.712 253542 DEBUG nova.virt.libvirt.host [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.715 253542 DEBUG nova.virt.libvirt.host [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.716 253542 DEBUG nova.virt.libvirt.host [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.716 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.716 253542 DEBUG nova.virt.hardware [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.717 253542 DEBUG nova.virt.hardware [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.718 253542 DEBUG nova.virt.hardware [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.718 253542 DEBUG nova.virt.hardware [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.718 253542 DEBUG nova.virt.hardware [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.719 253542 DEBUG nova.virt.hardware [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.719 253542 DEBUG nova.virt.hardware [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.719 253542 DEBUG nova.virt.hardware [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.720 253542 DEBUG nova.virt.hardware [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.720 253542 DEBUG nova.virt.hardware [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.721 253542 DEBUG nova.virt.hardware [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.724 253542 DEBUG oslo_concurrency.processutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.784 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:01 compute-0 nova_compute[253538]: 2025-11-25 08:38:01.935 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/787495346' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.185 253542 DEBUG oslo_concurrency.processutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.219 253542 DEBUG nova.storage.rbd_utils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image a5ba6c2a-b5cd-4ced-b648-e54a941cda0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.225 253542 DEBUG oslo_concurrency.processutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:02 compute-0 ceph-mon[75015]: pgmap v1650: 321 pgs: 321 active+clean; 248 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 187 op/s
Nov 25 08:38:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/787495346' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.667 253542 DEBUG nova.compute.manager [req-911dbef2-5ec7-4c76-b71d-b8294a922a9c req-257a4a71-c2e8-420d-9c19-e70cda22284d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Received event network-changed-c9ccc817-d521-4b07-bb91-ca98938f9d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.668 253542 DEBUG nova.compute.manager [req-911dbef2-5ec7-4c76-b71d-b8294a922a9c req-257a4a71-c2e8-420d-9c19-e70cda22284d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Refreshing instance network info cache due to event network-changed-c9ccc817-d521-4b07-bb91-ca98938f9d7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.668 253542 DEBUG oslo_concurrency.lockutils [req-911dbef2-5ec7-4c76-b71d-b8294a922a9c req-257a4a71-c2e8-420d-9c19-e70cda22284d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a5ba6c2a-b5cd-4ced-b648-e54a941cda0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.668 253542 DEBUG oslo_concurrency.lockutils [req-911dbef2-5ec7-4c76-b71d-b8294a922a9c req-257a4a71-c2e8-420d-9c19-e70cda22284d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a5ba6c2a-b5cd-4ced-b648-e54a941cda0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.669 253542 DEBUG nova.network.neutron [req-911dbef2-5ec7-4c76-b71d-b8294a922a9c req-257a4a71-c2e8-420d-9c19-e70cda22284d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Refreshing network info cache for port c9ccc817-d521-4b07-bb91-ca98938f9d7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:38:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2034375704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.702 253542 DEBUG oslo_concurrency.processutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.704 253542 DEBUG nova.virt.libvirt.vif [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:37:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1353589796',display_name='tempest-ServersTestJSON-server-1353589796',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1353589796',id=83,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-dyhmnwkb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:37:57Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=a5ba6c2a-b5cd-4ced-b648-e54a941cda0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "address": "fa:16:3e:09:ad:91", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9ccc817-d5", "ovs_interfaceid": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.705 253542 DEBUG nova.network.os_vif_util [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "address": "fa:16:3e:09:ad:91", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9ccc817-d5", "ovs_interfaceid": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.706 253542 DEBUG nova.network.os_vif_util [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:ad:91,bridge_name='br-int',has_traffic_filtering=True,id=c9ccc817-d521-4b07-bb91-ca98938f9d7b,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9ccc817-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.707 253542 DEBUG nova.objects.instance [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5ba6c2a-b5cd-4ced-b648-e54a941cda0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.720 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:38:02 compute-0 nova_compute[253538]:   <uuid>a5ba6c2a-b5cd-4ced-b648-e54a941cda0d</uuid>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   <name>instance-00000053</name>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersTestJSON-server-1353589796</nova:name>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:38:01</nova:creationTime>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:38:02 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:38:02 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:38:02 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:38:02 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:38:02 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:38:02 compute-0 nova_compute[253538]:         <nova:user uuid="fdcb005cc49a4dfa82152f2c0817cc94">tempest-ServersTestJSON-1426188226-project-member</nova:user>
Nov 25 08:38:02 compute-0 nova_compute[253538]:         <nova:project uuid="b730f086c4b94185afab5e10fa2e8181">tempest-ServersTestJSON-1426188226</nova:project>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:38:02 compute-0 nova_compute[253538]:         <nova:port uuid="c9ccc817-d521-4b07-bb91-ca98938f9d7b">
Nov 25 08:38:02 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <system>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <entry name="serial">a5ba6c2a-b5cd-4ced-b648-e54a941cda0d</entry>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <entry name="uuid">a5ba6c2a-b5cd-4ced-b648-e54a941cda0d</entry>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     </system>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   <os>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   </os>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   <features>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   </features>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/a5ba6c2a-b5cd-4ced-b648-e54a941cda0d_disk">
Nov 25 08:38:02 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:02 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/a5ba6c2a-b5cd-4ced-b648-e54a941cda0d_disk.config">
Nov 25 08:38:02 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:02 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:09:ad:91"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <target dev="tapc9ccc817-d5"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/a5ba6c2a-b5cd-4ced-b648-e54a941cda0d/console.log" append="off"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <video>
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     </video>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:38:02 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:38:02 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:38:02 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:38:02 compute-0 nova_compute[253538]: </domain>
Nov 25 08:38:02 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.731 253542 DEBUG nova.compute.manager [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Preparing to wait for external event network-vif-plugged-c9ccc817-d521-4b07-bb91-ca98938f9d7b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.732 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.732 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.732 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.733 253542 DEBUG nova.virt.libvirt.vif [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:37:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1353589796',display_name='tempest-ServersTestJSON-server-1353589796',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1353589796',id=83,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-dyhmnwkb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:37:57Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=a5ba6c2a-b5cd-4ced-b648-e54a941cda0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "address": "fa:16:3e:09:ad:91", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9ccc817-d5", "ovs_interfaceid": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.733 253542 DEBUG nova.network.os_vif_util [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "address": "fa:16:3e:09:ad:91", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9ccc817-d5", "ovs_interfaceid": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.734 253542 DEBUG nova.network.os_vif_util [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:ad:91,bridge_name='br-int',has_traffic_filtering=True,id=c9ccc817-d521-4b07-bb91-ca98938f9d7b,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9ccc817-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.734 253542 DEBUG os_vif [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:ad:91,bridge_name='br-int',has_traffic_filtering=True,id=c9ccc817-d521-4b07-bb91-ca98938f9d7b,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9ccc817-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.735 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.736 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.736 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.740 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.740 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9ccc817-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.740 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc9ccc817-d5, col_values=(('external_ids', {'iface-id': 'c9ccc817-d521-4b07-bb91-ca98938f9d7b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:ad:91', 'vm-uuid': 'a5ba6c2a-b5cd-4ced-b648-e54a941cda0d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:02 compute-0 NetworkManager[48915]: <info>  [1764059882.7432] manager: (tapc9ccc817-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.744 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.747 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.760 253542 INFO os_vif [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:ad:91,bridge_name='br-int',has_traffic_filtering=True,id=c9ccc817-d521-4b07-bb91-ca98938f9d7b,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9ccc817-d5')
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.813 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.814 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.814 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No VIF found with MAC fa:16:3e:09:ad:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.815 253542 INFO nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Using config drive
Nov 25 08:38:02 compute-0 nova_compute[253538]: 2025-11-25 08:38:02.838 253542 DEBUG nova.storage.rbd_utils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image a5ba6c2a-b5cd-4ced-b648-e54a941cda0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:03 compute-0 nova_compute[253538]: 2025-11-25 08:38:03.037 253542 DEBUG nova.network.neutron [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Updating instance_info_cache with network_info: [{"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:03 compute-0 nova_compute[253538]: 2025-11-25 08:38:03.055 253542 DEBUG oslo_concurrency.lockutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Releasing lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:03 compute-0 ovn_controller[152859]: 2025-11-25T08:38:03Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:a6:7a 10.100.0.13
Nov 25 08:38:03 compute-0 ovn_controller[152859]: 2025-11-25T08:38:03Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:a6:7a 10.100.0.13
Nov 25 08:38:03 compute-0 nova_compute[253538]: 2025-11-25 08:38:03.286 253542 DEBUG nova.network.neutron [req-b0dc7a84-acbc-4a89-b1d5-e50ff9564dd2 req-231383eb-684a-481f-8013-35155b6552ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Updated VIF entry in instance network info cache for port 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:38:03 compute-0 nova_compute[253538]: 2025-11-25 08:38:03.287 253542 DEBUG nova.network.neutron [req-b0dc7a84-acbc-4a89-b1d5-e50ff9564dd2 req-231383eb-684a-481f-8013-35155b6552ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Updating instance_info_cache with network_info: [{"id": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "address": "fa:16:3e:ac:0e:c1", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66cd8b3d-6d", "ovs_interfaceid": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:03 compute-0 nova_compute[253538]: 2025-11-25 08:38:03.305 253542 DEBUG oslo_concurrency.lockutils [req-b0dc7a84-acbc-4a89-b1d5-e50ff9564dd2 req-231383eb-684a-481f-8013-35155b6552ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-40912950-fedc-405c-bc49-c4a757a422dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1651: 321 pgs: 321 active+clean; 283 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.0 MiB/s wr, 258 op/s
Nov 25 08:38:03 compute-0 nova_compute[253538]: 2025-11-25 08:38:03.407 253542 INFO nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Creating config drive at /var/lib/nova/instances/a5ba6c2a-b5cd-4ced-b648-e54a941cda0d/disk.config
Nov 25 08:38:03 compute-0 nova_compute[253538]: 2025-11-25 08:38:03.416 253542 DEBUG oslo_concurrency.processutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5ba6c2a-b5cd-4ced-b648-e54a941cda0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu9iqzmlo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:03 compute-0 nova_compute[253538]: 2025-11-25 08:38:03.500 253542 DEBUG nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:38:03 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2034375704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:03 compute-0 nova_compute[253538]: 2025-11-25 08:38:03.564 253542 DEBUG oslo_concurrency.processutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5ba6c2a-b5cd-4ced-b648-e54a941cda0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu9iqzmlo" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:03 compute-0 nova_compute[253538]: 2025-11-25 08:38:03.596 253542 DEBUG nova.storage.rbd_utils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image a5ba6c2a-b5cd-4ced-b648-e54a941cda0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:03 compute-0 nova_compute[253538]: 2025-11-25 08:38:03.599 253542 DEBUG oslo_concurrency.processutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a5ba6c2a-b5cd-4ced-b648-e54a941cda0d/disk.config a5ba6c2a-b5cd-4ced-b648-e54a941cda0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:03 compute-0 sudo[334302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:38:03 compute-0 sudo[334302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:03 compute-0 sudo[334302]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:03 compute-0 sudo[334345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:38:03 compute-0 sudo[334345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:03 compute-0 sudo[334345]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:03 compute-0 nova_compute[253538]: 2025-11-25 08:38:03.817 253542 DEBUG oslo_concurrency.processutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a5ba6c2a-b5cd-4ced-b648-e54a941cda0d/disk.config a5ba6c2a-b5cd-4ced-b648-e54a941cda0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:03 compute-0 nova_compute[253538]: 2025-11-25 08:38:03.819 253542 INFO nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Deleting local config drive /var/lib/nova/instances/a5ba6c2a-b5cd-4ced-b648-e54a941cda0d/disk.config because it was imported into RBD.
Nov 25 08:38:03 compute-0 sudo[334370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:38:03 compute-0 sudo[334370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:03 compute-0 sudo[334370]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:03 compute-0 kernel: tapc9ccc817-d5: entered promiscuous mode
Nov 25 08:38:03 compute-0 NetworkManager[48915]: <info>  [1764059883.8744] manager: (tapc9ccc817-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/344)
Nov 25 08:38:03 compute-0 nova_compute[253538]: 2025-11-25 08:38:03.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:03 compute-0 ovn_controller[152859]: 2025-11-25T08:38:03Z|00787|binding|INFO|Claiming lport c9ccc817-d521-4b07-bb91-ca98938f9d7b for this chassis.
Nov 25 08:38:03 compute-0 ovn_controller[152859]: 2025-11-25T08:38:03Z|00788|binding|INFO|c9ccc817-d521-4b07-bb91-ca98938f9d7b: Claiming fa:16:3e:09:ad:91 10.100.0.9
Nov 25 08:38:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:03.889 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:ad:91 10.100.0.9'], port_security=['fa:16:3e:09:ad:91 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a5ba6c2a-b5cd-4ced-b648-e54a941cda0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92e26514-5b15-410b-8885-6773bc03c4ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b730f086c4b94185afab5e10fa2e8181', 'neutron:revision_number': '2', 'neutron:security_group_ids': '420b1f20-9bb9-442a-978c-444ed9d0cb04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ae0a3b6-6c0f-4ef3-ae20-c7b5b6ede8ac, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=c9ccc817-d521-4b07-bb91-ca98938f9d7b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:38:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:03.890 162739 INFO neutron.agent.ovn.metadata.agent [-] Port c9ccc817-d521-4b07-bb91-ca98938f9d7b in datapath 92e26514-5b15-410b-8885-6773bc03c4ce bound to our chassis
Nov 25 08:38:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:03.892 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92e26514-5b15-410b-8885-6773bc03c4ce
Nov 25 08:38:03 compute-0 ovn_controller[152859]: 2025-11-25T08:38:03Z|00789|binding|INFO|Setting lport c9ccc817-d521-4b07-bb91-ca98938f9d7b ovn-installed in OVS
Nov 25 08:38:03 compute-0 ovn_controller[152859]: 2025-11-25T08:38:03Z|00790|binding|INFO|Setting lport c9ccc817-d521-4b07-bb91-ca98938f9d7b up in Southbound
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:38:03 compute-0 nova_compute[253538]: 2025-11-25 08:38:03.905 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001579656772969981 of space, bias 1.0, pg target 0.4738970318909943 quantized to 32 (current 32)
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:38:03 compute-0 nova_compute[253538]: 2025-11-25 08:38:03.909 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.3036329117015042 quantized to 32 (current 32)
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:38:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:38:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:03.918 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[42792f19-9eeb-4d89-9ea7-9824868734da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:03 compute-0 systemd-udevd[334426]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:38:03 compute-0 systemd-machined[215790]: New machine qemu-101-instance-00000053.
Nov 25 08:38:03 compute-0 NetworkManager[48915]: <info>  [1764059883.9355] device (tapc9ccc817-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:38:03 compute-0 NetworkManager[48915]: <info>  [1764059883.9365] device (tapc9ccc817-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:38:03 compute-0 systemd[1]: Started Virtual Machine qemu-101-instance-00000053.
Nov 25 08:38:03 compute-0 sudo[334403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:38:03 compute-0 sudo[334403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:03.952 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[230bbe59-8ddd-44cd-a77c-f32b2d853671]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:03.958 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[876bf9d3-6158-4242-b260-a873dc419d1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:03.989 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[29a8966e-145a-41e1-a92e-686a6ca7cb45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:04.010 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[365f0ab1-6274-4207-9f85-ad87f10524d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92e26514-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:84:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522366, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334446, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:04.036 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[14d2602d-ccc9-4829-a7cc-a2775371cf53]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522379, 'tstamp': 522379}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334448, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522383, 'tstamp': 522383}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334448, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:04.038 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92e26514-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:04 compute-0 nova_compute[253538]: 2025-11-25 08:38:04.039 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:04 compute-0 nova_compute[253538]: 2025-11-25 08:38:04.040 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:04.041 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92e26514-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:04.041 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:04.041 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92e26514-50, col_values=(('external_ids', {'iface-id': '185991f0-acab-400e-baff-76794035d44a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:04.041 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:04 compute-0 nova_compute[253538]: 2025-11-25 08:38:04.088 253542 DEBUG nova.network.neutron [req-911dbef2-5ec7-4c76-b71d-b8294a922a9c req-257a4a71-c2e8-420d-9c19-e70cda22284d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Updated VIF entry in instance network info cache for port c9ccc817-d521-4b07-bb91-ca98938f9d7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:38:04 compute-0 nova_compute[253538]: 2025-11-25 08:38:04.089 253542 DEBUG nova.network.neutron [req-911dbef2-5ec7-4c76-b71d-b8294a922a9c req-257a4a71-c2e8-420d-9c19-e70cda22284d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Updating instance_info_cache with network_info: [{"id": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "address": "fa:16:3e:09:ad:91", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9ccc817-d5", "ovs_interfaceid": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:04 compute-0 nova_compute[253538]: 2025-11-25 08:38:04.128 253542 DEBUG oslo_concurrency.lockutils [req-911dbef2-5ec7-4c76-b71d-b8294a922a9c req-257a4a71-c2e8-420d-9c19-e70cda22284d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a5ba6c2a-b5cd-4ced-b648-e54a941cda0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:04 compute-0 ceph-mon[75015]: pgmap v1651: 321 pgs: 321 active+clean; 283 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.0 MiB/s wr, 258 op/s
Nov 25 08:38:04 compute-0 sudo[334403]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:38:04 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:38:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:38:04 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:38:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:38:04 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:38:04 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 6a6a5814-62a2-4730-9a20-a4451eccb2b4 does not exist
Nov 25 08:38:04 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 37c5a235-da5a-4ee1-8c34-2c0548e8873a does not exist
Nov 25 08:38:04 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e9906c0e-2f48-46fb-aed3-828a5dbb5136 does not exist
Nov 25 08:38:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:38:04 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:38:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:38:04 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:38:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:38:04 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:38:04 compute-0 nova_compute[253538]: 2025-11-25 08:38:04.740 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059884.7397678, a5ba6c2a-b5cd-4ced-b648-e54a941cda0d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:04 compute-0 nova_compute[253538]: 2025-11-25 08:38:04.741 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] VM Started (Lifecycle Event)
Nov 25 08:38:04 compute-0 nova_compute[253538]: 2025-11-25 08:38:04.760 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:04 compute-0 nova_compute[253538]: 2025-11-25 08:38:04.769 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059884.7399178, a5ba6c2a-b5cd-4ced-b648-e54a941cda0d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:04 compute-0 nova_compute[253538]: 2025-11-25 08:38:04.770 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] VM Paused (Lifecycle Event)
Nov 25 08:38:04 compute-0 nova_compute[253538]: 2025-11-25 08:38:04.784 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:04 compute-0 nova_compute[253538]: 2025-11-25 08:38:04.800 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:04 compute-0 sudo[334522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:38:04 compute-0 sudo[334522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:04 compute-0 sudo[334522]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:04 compute-0 nova_compute[253538]: 2025-11-25 08:38:04.817 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:38:04 compute-0 sudo[334547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:38:04 compute-0 sudo[334547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:04 compute-0 sudo[334547]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:04 compute-0 sudo[334572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:38:04 compute-0 sudo[334572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:04 compute-0 sudo[334572]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:05 compute-0 sudo[334597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:38:05 compute-0 sudo[334597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1652: 321 pgs: 321 active+clean; 303 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.3 MiB/s wr, 245 op/s
Nov 25 08:38:05 compute-0 podman[334660]: 2025-11-25 08:38:05.364120076 +0000 UTC m=+0.043393304 container create 722098b60e64729187ec1f7dac0c7f24e3c9c025d2d3f7fd3e4ef5a3634495e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:38:05 compute-0 systemd[1]: Started libpod-conmon-722098b60e64729187ec1f7dac0c7f24e3c9c025d2d3f7fd3e4ef5a3634495e4.scope.
Nov 25 08:38:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:38:05 compute-0 podman[334660]: 2025-11-25 08:38:05.341235583 +0000 UTC m=+0.020508841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:38:05 compute-0 podman[334660]: 2025-11-25 08:38:05.452237767 +0000 UTC m=+0.131511025 container init 722098b60e64729187ec1f7dac0c7f24e3c9c025d2d3f7fd3e4ef5a3634495e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swirles, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 08:38:05 compute-0 podman[334660]: 2025-11-25 08:38:05.459637889 +0000 UTC m=+0.138911117 container start 722098b60e64729187ec1f7dac0c7f24e3c9c025d2d3f7fd3e4ef5a3634495e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swirles, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 08:38:05 compute-0 podman[334660]: 2025-11-25 08:38:05.465099808 +0000 UTC m=+0.144373066 container attach 722098b60e64729187ec1f7dac0c7f24e3c9c025d2d3f7fd3e4ef5a3634495e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 08:38:05 compute-0 infallible_swirles[334677]: 167 167
Nov 25 08:38:05 compute-0 systemd[1]: libpod-722098b60e64729187ec1f7dac0c7f24e3c9c025d2d3f7fd3e4ef5a3634495e4.scope: Deactivated successfully.
Nov 25 08:38:05 compute-0 podman[334660]: 2025-11-25 08:38:05.47031859 +0000 UTC m=+0.149591838 container died 722098b60e64729187ec1f7dac0c7f24e3c9c025d2d3f7fd3e4ef5a3634495e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 08:38:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-eae992f14d3a81f760053a899a52e6d1c7cfa815b4511737825021fdcae71242-merged.mount: Deactivated successfully.
Nov 25 08:38:05 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:38:05 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:38:05 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:38:05 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:38:05 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:38:05 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:38:05 compute-0 podman[334660]: 2025-11-25 08:38:05.534041696 +0000 UTC m=+0.213314934 container remove 722098b60e64729187ec1f7dac0c7f24e3c9c025d2d3f7fd3e4ef5a3634495e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swirles, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 08:38:05 compute-0 systemd[1]: libpod-conmon-722098b60e64729187ec1f7dac0c7f24e3c9c025d2d3f7fd3e4ef5a3634495e4.scope: Deactivated successfully.
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.590 253542 DEBUG nova.compute.manager [req-f0e89118-a783-48f1-b6ad-ee3d8f27fffd req-e2d1cbe6-4efe-4e73-8046-e50315d2b9ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Received event network-vif-plugged-c9ccc817-d521-4b07-bb91-ca98938f9d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.592 253542 DEBUG oslo_concurrency.lockutils [req-f0e89118-a783-48f1-b6ad-ee3d8f27fffd req-e2d1cbe6-4efe-4e73-8046-e50315d2b9ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.593 253542 DEBUG oslo_concurrency.lockutils [req-f0e89118-a783-48f1-b6ad-ee3d8f27fffd req-e2d1cbe6-4efe-4e73-8046-e50315d2b9ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.593 253542 DEBUG oslo_concurrency.lockutils [req-f0e89118-a783-48f1-b6ad-ee3d8f27fffd req-e2d1cbe6-4efe-4e73-8046-e50315d2b9ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.594 253542 DEBUG nova.compute.manager [req-f0e89118-a783-48f1-b6ad-ee3d8f27fffd req-e2d1cbe6-4efe-4e73-8046-e50315d2b9ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Processing event network-vif-plugged-c9ccc817-d521-4b07-bb91-ca98938f9d7b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.595 253542 DEBUG nova.compute.manager [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.600 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059885.600384, a5ba6c2a-b5cd-4ced-b648-e54a941cda0d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.601 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] VM Resumed (Lifecycle Event)
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.620 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.622 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.630 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.634 253542 INFO nova.virt.libvirt.driver [-] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Instance spawned successfully.
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.635 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.649 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.662 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.663 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.664 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.665 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.666 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.667 253542 DEBUG nova.virt.libvirt.driver [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.719 253542 INFO nova.compute.manager [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Took 7.72 seconds to spawn the instance on the hypervisor.
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.720 253542 DEBUG nova.compute.manager [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:05 compute-0 podman[334700]: 2025-11-25 08:38:05.747820933 +0000 UTC m=+0.050547529 container create 31ee6fa6c67e447da10b0049c1ccc66738655c59ab0929a74cb945479a0de50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.788 253542 INFO nova.compute.manager [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Took 8.76 seconds to build instance.
Nov 25 08:38:05 compute-0 systemd[1]: Started libpod-conmon-31ee6fa6c67e447da10b0049c1ccc66738655c59ab0929a74cb945479a0de50d.scope.
Nov 25 08:38:05 compute-0 nova_compute[253538]: 2025-11-25 08:38:05.811 253542 DEBUG oslo_concurrency.lockutils [None req-d40c6e3e-35a4-4571-bea7-3e431b93e8ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d1b03da230738c1e52f1f542588aaa0f5bf799f099fed25eb3ca5cb97069a67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:38:05 compute-0 podman[334700]: 2025-11-25 08:38:05.726683186 +0000 UTC m=+0.029409792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d1b03da230738c1e52f1f542588aaa0f5bf799f099fed25eb3ca5cb97069a67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:38:05 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 08:38:05 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 7579 writes, 34K keys, 7579 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 7579 writes, 7579 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1661 writes, 7761 keys, 1661 commit groups, 1.0 writes per commit group, ingest: 9.85 MB, 0.02 MB/s
                                           Interval WAL: 1662 writes, 1662 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     25.2      1.56              0.13        20    0.078       0      0       0.0       0.0
                                             L6      1/0    8.72 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.5     46.4     38.0      3.65              0.44        19    0.192     95K    10K       0.0       0.0
                                            Sum      1/0    8.72 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.5     32.5     34.2      5.21              0.57        39    0.134     95K    10K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4     61.0     63.2      0.77              0.14        10    0.077     30K   3080       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     46.4     38.0      3.65              0.44        19    0.192     95K    10K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     25.3      1.56              0.13        19    0.082       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.038, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.17 GB write, 0.06 MB/s write, 0.17 GB read, 0.06 MB/s read, 5.2 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 304.00 MB usage: 19.81 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000377 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1308,19.08 MB,6.27667%) FilterBlock(40,275.11 KB,0.0883755%) IndexBlock(40,471.58 KB,0.151489%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 08:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d1b03da230738c1e52f1f542588aaa0f5bf799f099fed25eb3ca5cb97069a67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d1b03da230738c1e52f1f542588aaa0f5bf799f099fed25eb3ca5cb97069a67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d1b03da230738c1e52f1f542588aaa0f5bf799f099fed25eb3ca5cb97069a67/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:38:05 compute-0 podman[334700]: 2025-11-25 08:38:05.853478181 +0000 UTC m=+0.156204807 container init 31ee6fa6c67e447da10b0049c1ccc66738655c59ab0929a74cb945479a0de50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:38:05 compute-0 podman[334700]: 2025-11-25 08:38:05.861557652 +0000 UTC m=+0.164284268 container start 31ee6fa6c67e447da10b0049c1ccc66738655c59ab0929a74cb945479a0de50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:38:05 compute-0 podman[334700]: 2025-11-25 08:38:05.869124388 +0000 UTC m=+0.171851014 container attach 31ee6fa6c67e447da10b0049c1ccc66738655c59ab0929a74cb945479a0de50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 08:38:06 compute-0 ceph-mon[75015]: pgmap v1652: 321 pgs: 321 active+clean; 303 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.3 MiB/s wr, 245 op/s
Nov 25 08:38:06 compute-0 nova_compute[253538]: 2025-11-25 08:38:06.564 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:38:07 compute-0 practical_darwin[334714]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:38:07 compute-0 practical_darwin[334714]: --> relative data size: 1.0
Nov 25 08:38:07 compute-0 practical_darwin[334714]: --> All data devices are unavailable
Nov 25 08:38:07 compute-0 systemd[1]: libpod-31ee6fa6c67e447da10b0049c1ccc66738655c59ab0929a74cb945479a0de50d.scope: Deactivated successfully.
Nov 25 08:38:07 compute-0 systemd[1]: libpod-31ee6fa6c67e447da10b0049c1ccc66738655c59ab0929a74cb945479a0de50d.scope: Consumed 1.086s CPU time.
Nov 25 08:38:07 compute-0 podman[334743]: 2025-11-25 08:38:07.113749426 +0000 UTC m=+0.025278680 container died 31ee6fa6c67e447da10b0049c1ccc66738655c59ab0929a74cb945479a0de50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 08:38:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d1b03da230738c1e52f1f542588aaa0f5bf799f099fed25eb3ca5cb97069a67-merged.mount: Deactivated successfully.
Nov 25 08:38:07 compute-0 podman[334743]: 2025-11-25 08:38:07.194512897 +0000 UTC m=+0.106042121 container remove 31ee6fa6c67e447da10b0049c1ccc66738655c59ab0929a74cb945479a0de50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 08:38:07 compute-0 systemd[1]: libpod-conmon-31ee6fa6c67e447da10b0049c1ccc66738655c59ab0929a74cb945479a0de50d.scope: Deactivated successfully.
Nov 25 08:38:07 compute-0 sudo[334597]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:07 compute-0 sudo[334756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:38:07 compute-0 sudo[334756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:07 compute-0 sudo[334756]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1653: 321 pgs: 321 active+clean; 306 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.9 MiB/s wr, 256 op/s
Nov 25 08:38:07 compute-0 sudo[334781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:38:07 compute-0 sudo[334781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:07 compute-0 sudo[334781]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:07 compute-0 sudo[334806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:38:07 compute-0 sudo[334806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:07 compute-0 sudo[334806]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:07 compute-0 sudo[334831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:38:07 compute-0 sudo[334831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:07 compute-0 ceph-mon[75015]: pgmap v1653: 321 pgs: 321 active+clean; 306 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.9 MiB/s wr, 256 op/s
Nov 25 08:38:07 compute-0 nova_compute[253538]: 2025-11-25 08:38:07.743 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:07 compute-0 podman[334894]: 2025-11-25 08:38:07.830137909 +0000 UTC m=+0.042525090 container create b6670a02c06f8726d4e37c0b727d5f07a845ecb671e7a1baf35652eca5c79805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kowalevski, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 08:38:07 compute-0 systemd[1]: Started libpod-conmon-b6670a02c06f8726d4e37c0b727d5f07a845ecb671e7a1baf35652eca5c79805.scope.
Nov 25 08:38:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:38:07 compute-0 podman[334894]: 2025-11-25 08:38:07.811598663 +0000 UTC m=+0.023985864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:38:07 compute-0 podman[334894]: 2025-11-25 08:38:07.912135843 +0000 UTC m=+0.124523054 container init b6670a02c06f8726d4e37c0b727d5f07a845ecb671e7a1baf35652eca5c79805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kowalevski, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 08:38:07 compute-0 podman[334894]: 2025-11-25 08:38:07.921757096 +0000 UTC m=+0.134144277 container start b6670a02c06f8726d4e37c0b727d5f07a845ecb671e7a1baf35652eca5c79805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:38:07 compute-0 goofy_kowalevski[334910]: 167 167
Nov 25 08:38:07 compute-0 systemd[1]: libpod-b6670a02c06f8726d4e37c0b727d5f07a845ecb671e7a1baf35652eca5c79805.scope: Deactivated successfully.
Nov 25 08:38:07 compute-0 podman[334894]: 2025-11-25 08:38:07.930424802 +0000 UTC m=+0.142811983 container attach b6670a02c06f8726d4e37c0b727d5f07a845ecb671e7a1baf35652eca5c79805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kowalevski, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:38:07 compute-0 podman[334894]: 2025-11-25 08:38:07.930945206 +0000 UTC m=+0.143332387 container died b6670a02c06f8726d4e37c0b727d5f07a845ecb671e7a1baf35652eca5c79805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:38:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a2ce2c4227828f959c818ff3ff0f03581bd30da9168753e27879d5a0d2f184f-merged.mount: Deactivated successfully.
Nov 25 08:38:07 compute-0 podman[334894]: 2025-11-25 08:38:07.976328082 +0000 UTC m=+0.188715263 container remove b6670a02c06f8726d4e37c0b727d5f07a845ecb671e7a1baf35652eca5c79805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:38:07 compute-0 systemd[1]: libpod-conmon-b6670a02c06f8726d4e37c0b727d5f07a845ecb671e7a1baf35652eca5c79805.scope: Deactivated successfully.
Nov 25 08:38:08 compute-0 podman[334934]: 2025-11-25 08:38:08.173360873 +0000 UTC m=+0.044702920 container create c9d43cd03ad61e652c192eb4c136340c19d257ec89b7309f12e2dcc94b66330b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:38:08 compute-0 systemd[1]: Started libpod-conmon-c9d43cd03ad61e652c192eb4c136340c19d257ec89b7309f12e2dcc94b66330b.scope.
Nov 25 08:38:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:38:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091bc5ef5dff109dab115a27c7418ee5089f5a9a3a3344ba561b06d16f1d43c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:38:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091bc5ef5dff109dab115a27c7418ee5089f5a9a3a3344ba561b06d16f1d43c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:38:08 compute-0 podman[334934]: 2025-11-25 08:38:08.155063663 +0000 UTC m=+0.026405730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:38:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091bc5ef5dff109dab115a27c7418ee5089f5a9a3a3344ba561b06d16f1d43c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:38:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091bc5ef5dff109dab115a27c7418ee5089f5a9a3a3344ba561b06d16f1d43c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:38:08 compute-0 podman[334934]: 2025-11-25 08:38:08.275371062 +0000 UTC m=+0.146713129 container init c9d43cd03ad61e652c192eb4c136340c19d257ec89b7309f12e2dcc94b66330b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 08:38:08 compute-0 nova_compute[253538]: 2025-11-25 08:38:08.279 253542 DEBUG nova.compute.manager [req-d329d325-399c-411a-85dd-5e76512dbd91 req-5adcbe54-a005-421e-a237-bf6fc6aea27a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Received event network-vif-plugged-c9ccc817-d521-4b07-bb91-ca98938f9d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:08 compute-0 nova_compute[253538]: 2025-11-25 08:38:08.280 253542 DEBUG oslo_concurrency.lockutils [req-d329d325-399c-411a-85dd-5e76512dbd91 req-5adcbe54-a005-421e-a237-bf6fc6aea27a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:08 compute-0 nova_compute[253538]: 2025-11-25 08:38:08.280 253542 DEBUG oslo_concurrency.lockutils [req-d329d325-399c-411a-85dd-5e76512dbd91 req-5adcbe54-a005-421e-a237-bf6fc6aea27a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:08 compute-0 nova_compute[253538]: 2025-11-25 08:38:08.280 253542 DEBUG oslo_concurrency.lockutils [req-d329d325-399c-411a-85dd-5e76512dbd91 req-5adcbe54-a005-421e-a237-bf6fc6aea27a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:08 compute-0 nova_compute[253538]: 2025-11-25 08:38:08.280 253542 DEBUG nova.compute.manager [req-d329d325-399c-411a-85dd-5e76512dbd91 req-5adcbe54-a005-421e-a237-bf6fc6aea27a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] No waiting events found dispatching network-vif-plugged-c9ccc817-d521-4b07-bb91-ca98938f9d7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:08 compute-0 nova_compute[253538]: 2025-11-25 08:38:08.281 253542 WARNING nova.compute.manager [req-d329d325-399c-411a-85dd-5e76512dbd91 req-5adcbe54-a005-421e-a237-bf6fc6aea27a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Received unexpected event network-vif-plugged-c9ccc817-d521-4b07-bb91-ca98938f9d7b for instance with vm_state active and task_state None.
Nov 25 08:38:08 compute-0 podman[334934]: 2025-11-25 08:38:08.287278146 +0000 UTC m=+0.158620193 container start c9d43cd03ad61e652c192eb4c136340c19d257ec89b7309f12e2dcc94b66330b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mendel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 08:38:08 compute-0 podman[334934]: 2025-11-25 08:38:08.311562288 +0000 UTC m=+0.182904345 container attach c9d43cd03ad61e652c192eb4c136340c19d257ec89b7309f12e2dcc94b66330b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mendel, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 08:38:09 compute-0 zen_mendel[334950]: {
Nov 25 08:38:09 compute-0 zen_mendel[334950]:     "0": [
Nov 25 08:38:09 compute-0 zen_mendel[334950]:         {
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "devices": [
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "/dev/loop3"
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             ],
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "lv_name": "ceph_lv0",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "lv_size": "21470642176",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "name": "ceph_lv0",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "tags": {
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.cluster_name": "ceph",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.crush_device_class": "",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.encrypted": "0",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.osd_id": "0",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.type": "block",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.vdo": "0"
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             },
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "type": "block",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "vg_name": "ceph_vg0"
Nov 25 08:38:09 compute-0 zen_mendel[334950]:         }
Nov 25 08:38:09 compute-0 zen_mendel[334950]:     ],
Nov 25 08:38:09 compute-0 zen_mendel[334950]:     "1": [
Nov 25 08:38:09 compute-0 zen_mendel[334950]:         {
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "devices": [
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "/dev/loop4"
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             ],
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "lv_name": "ceph_lv1",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "lv_size": "21470642176",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "name": "ceph_lv1",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "tags": {
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.cluster_name": "ceph",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.crush_device_class": "",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.encrypted": "0",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.osd_id": "1",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.type": "block",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.vdo": "0"
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             },
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "type": "block",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "vg_name": "ceph_vg1"
Nov 25 08:38:09 compute-0 zen_mendel[334950]:         }
Nov 25 08:38:09 compute-0 zen_mendel[334950]:     ],
Nov 25 08:38:09 compute-0 zen_mendel[334950]:     "2": [
Nov 25 08:38:09 compute-0 zen_mendel[334950]:         {
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "devices": [
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "/dev/loop5"
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             ],
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "lv_name": "ceph_lv2",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "lv_size": "21470642176",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "name": "ceph_lv2",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "tags": {
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.cluster_name": "ceph",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.crush_device_class": "",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.encrypted": "0",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.osd_id": "2",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.type": "block",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:                 "ceph.vdo": "0"
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             },
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "type": "block",
Nov 25 08:38:09 compute-0 zen_mendel[334950]:             "vg_name": "ceph_vg2"
Nov 25 08:38:09 compute-0 zen_mendel[334950]:         }
Nov 25 08:38:09 compute-0 zen_mendel[334950]:     ]
Nov 25 08:38:09 compute-0 zen_mendel[334950]: }
Nov 25 08:38:09 compute-0 systemd[1]: libpod-c9d43cd03ad61e652c192eb4c136340c19d257ec89b7309f12e2dcc94b66330b.scope: Deactivated successfully.
Nov 25 08:38:09 compute-0 podman[334959]: 2025-11-25 08:38:09.151116128 +0000 UTC m=+0.023533683 container died c9d43cd03ad61e652c192eb4c136340c19d257ec89b7309f12e2dcc94b66330b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:38:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-091bc5ef5dff109dab115a27c7418ee5089f5a9a3a3344ba561b06d16f1d43c0-merged.mount: Deactivated successfully.
Nov 25 08:38:09 compute-0 podman[334959]: 2025-11-25 08:38:09.223010106 +0000 UTC m=+0.095427651 container remove c9d43cd03ad61e652c192eb4c136340c19d257ec89b7309f12e2dcc94b66330b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mendel, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 08:38:09 compute-0 systemd[1]: libpod-conmon-c9d43cd03ad61e652c192eb4c136340c19d257ec89b7309f12e2dcc94b66330b.scope: Deactivated successfully.
Nov 25 08:38:09 compute-0 sudo[334831]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:09 compute-0 sudo[334973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:38:09 compute-0 sudo[334973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:09 compute-0 sudo[334973]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1654: 321 pgs: 321 active+clean; 306 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 264 op/s
Nov 25 08:38:09 compute-0 sudo[334998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:38:09 compute-0 sudo[334998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:09 compute-0 sudo[334998]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:09 compute-0 sudo[335023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:38:09 compute-0 sudo[335023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:09 compute-0 sudo[335023]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:09 compute-0 ovn_controller[152859]: 2025-11-25T08:38:09Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:0e:c1 10.100.0.6
Nov 25 08:38:09 compute-0 ovn_controller[152859]: 2025-11-25T08:38:09Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:0e:c1 10.100.0.6
Nov 25 08:38:09 compute-0 sudo[335048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:38:09 compute-0 sudo[335048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.628 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.722 253542 DEBUG oslo_concurrency.lockutils [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.722 253542 DEBUG oslo_concurrency.lockutils [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.722 253542 DEBUG oslo_concurrency.lockutils [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.723 253542 DEBUG oslo_concurrency.lockutils [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.723 253542 DEBUG oslo_concurrency.lockutils [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.724 253542 INFO nova.compute.manager [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Terminating instance
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.725 253542 DEBUG nova.compute.manager [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:38:09 compute-0 kernel: tapc9ccc817-d5 (unregistering): left promiscuous mode
Nov 25 08:38:09 compute-0 NetworkManager[48915]: <info>  [1764059889.7945] device (tapc9ccc817-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.807 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:09 compute-0 ovn_controller[152859]: 2025-11-25T08:38:09Z|00791|binding|INFO|Releasing lport c9ccc817-d521-4b07-bb91-ca98938f9d7b from this chassis (sb_readonly=0)
Nov 25 08:38:09 compute-0 ovn_controller[152859]: 2025-11-25T08:38:09Z|00792|binding|INFO|Setting lport c9ccc817-d521-4b07-bb91-ca98938f9d7b down in Southbound
Nov 25 08:38:09 compute-0 ovn_controller[152859]: 2025-11-25T08:38:09Z|00793|binding|INFO|Removing iface tapc9ccc817-d5 ovn-installed in OVS
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.810 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:09.816 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:ad:91 10.100.0.9'], port_security=['fa:16:3e:09:ad:91 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a5ba6c2a-b5cd-4ced-b648-e54a941cda0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92e26514-5b15-410b-8885-6773bc03c4ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b730f086c4b94185afab5e10fa2e8181', 'neutron:revision_number': '4', 'neutron:security_group_ids': '420b1f20-9bb9-442a-978c-444ed9d0cb04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ae0a3b6-6c0f-4ef3-ae20-c7b5b6ede8ac, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=c9ccc817-d521-4b07-bb91-ca98938f9d7b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:38:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:09.817 162739 INFO neutron.agent.ovn.metadata.agent [-] Port c9ccc817-d521-4b07-bb91-ca98938f9d7b in datapath 92e26514-5b15-410b-8885-6773bc03c4ce unbound from our chassis
Nov 25 08:38:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:09.818 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92e26514-5b15-410b-8885-6773bc03c4ce
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.830 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:09.838 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b753c59a-a0e2-43a0-a645-ceca665d38a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:09 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000053.scope: Deactivated successfully.
Nov 25 08:38:09 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000053.scope: Consumed 4.892s CPU time.
Nov 25 08:38:09 compute-0 systemd-machined[215790]: Machine qemu-101-instance-00000053 terminated.
Nov 25 08:38:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:09.874 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4e0da5-fe65-4ef6-81fd-e0f8e04a0ff5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:09.877 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb83af9-1edf-473d-b186-c1cb61ea1487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:09.910 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2138faee-f043-489c-a9cd-0476cc3bb30f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:09.936 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[104adcff-6a6f-49b9-906b-8851110fb38f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92e26514-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:84:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522366, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335131, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.952 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.957 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:09 compute-0 podman[335121]: 2025-11-25 08:38:09.959581779 +0000 UTC m=+0.057413295 container create 03f35d700f9f8252242641777071414c22206bc4e9016fe2661923a93d63f809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 08:38:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:09.960 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b445525e-519e-4ce9-b9e9-9fcb598ce04c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522379, 'tstamp': 522379}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335137, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522383, 'tstamp': 522383}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335137, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:09.962 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92e26514-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.962 253542 INFO nova.virt.libvirt.driver [-] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Instance destroyed successfully.
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.963 253542 DEBUG nova.objects.instance [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'resources' on Instance uuid a5ba6c2a-b5cd-4ced-b648-e54a941cda0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.964 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.972 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:09.972 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92e26514-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:09.973 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:09.974 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92e26514-50, col_values=(('external_ids', {'iface-id': '185991f0-acab-400e-baff-76794035d44a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:09.974 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.974 253542 DEBUG nova.virt.libvirt.vif [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:37:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1353589796',display_name='tempest-ServersTestJSON-server-1353589796',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1353589796',id=83,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:38:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-dyhmnwkb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:38:05Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=a5ba6c2a-b5cd-4ced-b648-e54a941cda0d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "address": "fa:16:3e:09:ad:91", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9ccc817-d5", "ovs_interfaceid": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.975 253542 DEBUG nova.network.os_vif_util [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "address": "fa:16:3e:09:ad:91", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9ccc817-d5", "ovs_interfaceid": "c9ccc817-d521-4b07-bb91-ca98938f9d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.976 253542 DEBUG nova.network.os_vif_util [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:ad:91,bridge_name='br-int',has_traffic_filtering=True,id=c9ccc817-d521-4b07-bb91-ca98938f9d7b,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9ccc817-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.976 253542 DEBUG os_vif [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:ad:91,bridge_name='br-int',has_traffic_filtering=True,id=c9ccc817-d521-4b07-bb91-ca98938f9d7b,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9ccc817-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.978 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.979 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9ccc817-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.980 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.983 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:38:09 compute-0 nova_compute[253538]: 2025-11-25 08:38:09.986 253542 INFO os_vif [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:ad:91,bridge_name='br-int',has_traffic_filtering=True,id=c9ccc817-d521-4b07-bb91-ca98938f9d7b,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9ccc817-d5')
Nov 25 08:38:10 compute-0 systemd[1]: Started libpod-conmon-03f35d700f9f8252242641777071414c22206bc4e9016fe2661923a93d63f809.scope.
Nov 25 08:38:10 compute-0 podman[335121]: 2025-11-25 08:38:09.929979873 +0000 UTC m=+0.027811409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:38:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:38:10 compute-0 podman[335121]: 2025-11-25 08:38:10.051780252 +0000 UTC m=+0.149611798 container init 03f35d700f9f8252242641777071414c22206bc4e9016fe2661923a93d63f809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_diffie, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 08:38:10 compute-0 podman[335121]: 2025-11-25 08:38:10.059338788 +0000 UTC m=+0.157170304 container start 03f35d700f9f8252242641777071414c22206bc4e9016fe2661923a93d63f809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:38:10 compute-0 podman[335121]: 2025-11-25 08:38:10.063237064 +0000 UTC m=+0.161068590 container attach 03f35d700f9f8252242641777071414c22206bc4e9016fe2661923a93d63f809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 08:38:10 compute-0 serene_diffie[335164]: 167 167
Nov 25 08:38:10 compute-0 systemd[1]: libpod-03f35d700f9f8252242641777071414c22206bc4e9016fe2661923a93d63f809.scope: Deactivated successfully.
Nov 25 08:38:10 compute-0 podman[335172]: 2025-11-25 08:38:10.12473895 +0000 UTC m=+0.042074008 container died 03f35d700f9f8252242641777071414c22206bc4e9016fe2661923a93d63f809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 08:38:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-90a1ea95f399a02f8d2488cf38723f7923697d8a8258f30c2a661eef71276902-merged.mount: Deactivated successfully.
Nov 25 08:38:10 compute-0 podman[335172]: 2025-11-25 08:38:10.180661294 +0000 UTC m=+0.097996312 container remove 03f35d700f9f8252242641777071414c22206bc4e9016fe2661923a93d63f809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_diffie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:38:10 compute-0 systemd[1]: libpod-conmon-03f35d700f9f8252242641777071414c22206bc4e9016fe2661923a93d63f809.scope: Deactivated successfully.
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.352 253542 DEBUG nova.compute.manager [req-ad2fd613-90f2-4a72-a73f-6f0f11d24234 req-6bec7439-8f85-403c-b698-1694c90e6006 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Received event network-vif-unplugged-c9ccc817-d521-4b07-bb91-ca98938f9d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.353 253542 DEBUG oslo_concurrency.lockutils [req-ad2fd613-90f2-4a72-a73f-6f0f11d24234 req-6bec7439-8f85-403c-b698-1694c90e6006 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.354 253542 DEBUG oslo_concurrency.lockutils [req-ad2fd613-90f2-4a72-a73f-6f0f11d24234 req-6bec7439-8f85-403c-b698-1694c90e6006 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.354 253542 DEBUG oslo_concurrency.lockutils [req-ad2fd613-90f2-4a72-a73f-6f0f11d24234 req-6bec7439-8f85-403c-b698-1694c90e6006 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.354 253542 DEBUG nova.compute.manager [req-ad2fd613-90f2-4a72-a73f-6f0f11d24234 req-6bec7439-8f85-403c-b698-1694c90e6006 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] No waiting events found dispatching network-vif-unplugged-c9ccc817-d521-4b07-bb91-ca98938f9d7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.356 253542 DEBUG nova.compute.manager [req-ad2fd613-90f2-4a72-a73f-6f0f11d24234 req-6bec7439-8f85-403c-b698-1694c90e6006 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Received event network-vif-unplugged-c9ccc817-d521-4b07-bb91-ca98938f9d7b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.357 253542 DEBUG nova.compute.manager [req-ad2fd613-90f2-4a72-a73f-6f0f11d24234 req-6bec7439-8f85-403c-b698-1694c90e6006 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Received event network-vif-plugged-c9ccc817-d521-4b07-bb91-ca98938f9d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.358 253542 DEBUG oslo_concurrency.lockutils [req-ad2fd613-90f2-4a72-a73f-6f0f11d24234 req-6bec7439-8f85-403c-b698-1694c90e6006 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.360 253542 DEBUG oslo_concurrency.lockutils [req-ad2fd613-90f2-4a72-a73f-6f0f11d24234 req-6bec7439-8f85-403c-b698-1694c90e6006 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.361 253542 DEBUG oslo_concurrency.lockutils [req-ad2fd613-90f2-4a72-a73f-6f0f11d24234 req-6bec7439-8f85-403c-b698-1694c90e6006 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.361 253542 DEBUG nova.compute.manager [req-ad2fd613-90f2-4a72-a73f-6f0f11d24234 req-6bec7439-8f85-403c-b698-1694c90e6006 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] No waiting events found dispatching network-vif-plugged-c9ccc817-d521-4b07-bb91-ca98938f9d7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.362 253542 WARNING nova.compute.manager [req-ad2fd613-90f2-4a72-a73f-6f0f11d24234 req-6bec7439-8f85-403c-b698-1694c90e6006 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Received unexpected event network-vif-plugged-c9ccc817-d521-4b07-bb91-ca98938f9d7b for instance with vm_state active and task_state deleting.
Nov 25 08:38:10 compute-0 podman[335195]: 2025-11-25 08:38:10.412349958 +0000 UTC m=+0.047340171 container create 34efe969fa4e75f3345c780eb30972fc51c1424502c70c972a2f5674a1ce43b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:38:10 compute-0 ceph-mon[75015]: pgmap v1654: 321 pgs: 321 active+clean; 306 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 264 op/s
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.451 253542 INFO nova.virt.libvirt.driver [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Deleting instance files /var/lib/nova/instances/a5ba6c2a-b5cd-4ced-b648-e54a941cda0d_del
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.452 253542 INFO nova.virt.libvirt.driver [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Deletion of /var/lib/nova/instances/a5ba6c2a-b5cd-4ced-b648-e54a941cda0d_del complete
Nov 25 08:38:10 compute-0 systemd[1]: Started libpod-conmon-34efe969fa4e75f3345c780eb30972fc51c1424502c70c972a2f5674a1ce43b0.scope.
Nov 25 08:38:10 compute-0 podman[335195]: 2025-11-25 08:38:10.393173335 +0000 UTC m=+0.028163568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:38:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:38:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe5e41cef69a90330c65163e2dddfff843e9aa3e6af1cff986d4a8c1d5f3ab2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:38:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe5e41cef69a90330c65163e2dddfff843e9aa3e6af1cff986d4a8c1d5f3ab2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:38:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe5e41cef69a90330c65163e2dddfff843e9aa3e6af1cff986d4a8c1d5f3ab2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:38:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe5e41cef69a90330c65163e2dddfff843e9aa3e6af1cff986d4a8c1d5f3ab2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.533 253542 INFO nova.compute.manager [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Took 0.81 seconds to destroy the instance on the hypervisor.
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.533 253542 DEBUG oslo.service.loopingcall [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.534 253542 DEBUG nova.compute.manager [-] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:38:10 compute-0 nova_compute[253538]: 2025-11-25 08:38:10.534 253542 DEBUG nova.network.neutron [-] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:38:10 compute-0 podman[335195]: 2025-11-25 08:38:10.535399202 +0000 UTC m=+0.170389495 container init 34efe969fa4e75f3345c780eb30972fc51c1424502c70c972a2f5674a1ce43b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 08:38:10 compute-0 podman[335195]: 2025-11-25 08:38:10.547062679 +0000 UTC m=+0.182052932 container start 34efe969fa4e75f3345c780eb30972fc51c1424502c70c972a2f5674a1ce43b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 08:38:10 compute-0 podman[335195]: 2025-11-25 08:38:10.553236517 +0000 UTC m=+0.188226770 container attach 34efe969fa4e75f3345c780eb30972fc51c1424502c70c972a2f5674a1ce43b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_allen, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:38:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 321 active+clean; 314 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.2 MiB/s wr, 279 op/s
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]: {
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:         "osd_id": 1,
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:         "type": "bluestore"
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:     },
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:         "osd_id": 2,
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:         "type": "bluestore"
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:     },
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:         "osd_id": 0,
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:         "type": "bluestore"
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]:     }
Nov 25 08:38:11 compute-0 xenodochial_allen[335214]: }
Nov 25 08:38:11 compute-0 systemd[1]: libpod-34efe969fa4e75f3345c780eb30972fc51c1424502c70c972a2f5674a1ce43b0.scope: Deactivated successfully.
Nov 25 08:38:11 compute-0 nova_compute[253538]: 2025-11-25 08:38:11.566 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:11 compute-0 podman[335247]: 2025-11-25 08:38:11.585948331 +0000 UTC m=+0.032349003 container died 34efe969fa4e75f3345c780eb30972fc51c1424502c70c972a2f5674a1ce43b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_allen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 08:38:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbe5e41cef69a90330c65163e2dddfff843e9aa3e6af1cff986d4a8c1d5f3ab2-merged.mount: Deactivated successfully.
Nov 25 08:38:11 compute-0 podman[335247]: 2025-11-25 08:38:11.651570069 +0000 UTC m=+0.097970721 container remove 34efe969fa4e75f3345c780eb30972fc51c1424502c70c972a2f5674a1ce43b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_allen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 08:38:11 compute-0 systemd[1]: libpod-conmon-34efe969fa4e75f3345c780eb30972fc51c1424502c70c972a2f5674a1ce43b0.scope: Deactivated successfully.
Nov 25 08:38:11 compute-0 sudo[335048]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:38:11 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:38:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:38:11 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:38:11 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 9c77a022-4e0e-408a-a107-7c41f867857a does not exist
Nov 25 08:38:11 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev ded67c07-931c-46f5-99b2-f2e7a0f706af does not exist
Nov 25 08:38:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:38:11 compute-0 sudo[335262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:38:11 compute-0 sudo[335262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:11 compute-0 sudo[335262]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:11 compute-0 sudo[335287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:38:11 compute-0 sudo[335287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:38:11 compute-0 sudo[335287]: pam_unix(sudo:session): session closed for user root
Nov 25 08:38:12 compute-0 nova_compute[253538]: 2025-11-25 08:38:12.104 253542 DEBUG nova.network.neutron [-] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:12 compute-0 nova_compute[253538]: 2025-11-25 08:38:12.120 253542 INFO nova.compute.manager [-] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Took 1.59 seconds to deallocate network for instance.
Nov 25 08:38:12 compute-0 nova_compute[253538]: 2025-11-25 08:38:12.163 253542 DEBUG oslo_concurrency.lockutils [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:12 compute-0 nova_compute[253538]: 2025-11-25 08:38:12.164 253542 DEBUG oslo_concurrency.lockutils [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:12 compute-0 nova_compute[253538]: 2025-11-25 08:38:12.256 253542 DEBUG oslo_concurrency.processutils [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:12 compute-0 nova_compute[253538]: 2025-11-25 08:38:12.293 253542 DEBUG nova.compute.manager [req-141a5846-7a2a-4a0b-9871-49f28dd2f6a5 req-13abd7b5-0beb-4c6f-9be0-bf0ff4cac9d4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Received event network-vif-deleted-c9ccc817-d521-4b07-bb91-ca98938f9d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:12 compute-0 ceph-mon[75015]: pgmap v1655: 321 pgs: 321 active+clean; 314 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.2 MiB/s wr, 279 op/s
Nov 25 08:38:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:38:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:38:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:38:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573843578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:12 compute-0 nova_compute[253538]: 2025-11-25 08:38:12.778 253542 DEBUG oslo_concurrency.processutils [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:12 compute-0 nova_compute[253538]: 2025-11-25 08:38:12.785 253542 DEBUG nova.compute.provider_tree [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:38:12 compute-0 nova_compute[253538]: 2025-11-25 08:38:12.800 253542 DEBUG nova.scheduler.client.report [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:38:12 compute-0 nova_compute[253538]: 2025-11-25 08:38:12.896 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:12 compute-0 nova_compute[253538]: 2025-11-25 08:38:12.896 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:12 compute-0 nova_compute[253538]: 2025-11-25 08:38:12.965 253542 DEBUG oslo_concurrency.lockutils [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:13 compute-0 nova_compute[253538]: 2025-11-25 08:38:13.009 253542 DEBUG nova.compute.manager [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:38:13 compute-0 nova_compute[253538]: 2025-11-25 08:38:13.015 253542 INFO nova.scheduler.client.report [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Deleted allocations for instance a5ba6c2a-b5cd-4ced-b648-e54a941cda0d
Nov 25 08:38:13 compute-0 nova_compute[253538]: 2025-11-25 08:38:13.104 253542 DEBUG oslo_concurrency.lockutils [None req-6999b990-a6a7-4389-8c05-23c720db6405 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "a5ba6c2a-b5cd-4ced-b648-e54a941cda0d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.382s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:13 compute-0 nova_compute[253538]: 2025-11-25 08:38:13.108 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:13 compute-0 nova_compute[253538]: 2025-11-25 08:38:13.109 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:13 compute-0 nova_compute[253538]: 2025-11-25 08:38:13.118 253542 DEBUG nova.virt.hardware [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:38:13 compute-0 nova_compute[253538]: 2025-11-25 08:38:13.120 253542 INFO nova.compute.claims [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:38:13 compute-0 nova_compute[253538]: 2025-11-25 08:38:13.266 253542 DEBUG oslo_concurrency.processutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 313 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 5.6 MiB/s wr, 334 op/s
Nov 25 08:38:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/573843578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:13 compute-0 nova_compute[253538]: 2025-11-25 08:38:13.564 253542 DEBUG nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:38:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:38:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1519089877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:13 compute-0 podman[335354]: 2025-11-25 08:38:13.825747278 +0000 UTC m=+0.070236295 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 08:38:13 compute-0 nova_compute[253538]: 2025-11-25 08:38:13.829 253542 DEBUG oslo_concurrency.processutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:13 compute-0 nova_compute[253538]: 2025-11-25 08:38:13.836 253542 DEBUG nova.compute.provider_tree [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:38:13 compute-0 nova_compute[253538]: 2025-11-25 08:38:13.849 253542 DEBUG nova.scheduler.client.report [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:38:13 compute-0 nova_compute[253538]: 2025-11-25 08:38:13.935 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:13 compute-0 nova_compute[253538]: 2025-11-25 08:38:13.937 253542 DEBUG nova.compute.manager [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.043 253542 DEBUG nova.compute.manager [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.043 253542 DEBUG nova.network.neutron [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.066 253542 INFO nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.089 253542 DEBUG nova.compute.manager [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.253 253542 DEBUG nova.compute.manager [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.255 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.255 253542 INFO nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Creating image(s)
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.280 253542 DEBUG nova.storage.rbd_utils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.315 253542 DEBUG nova.storage.rbd_utils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.339 253542 DEBUG nova.storage.rbd_utils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.344 253542 DEBUG oslo_concurrency.processutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.416 253542 DEBUG nova.policy [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3f889d771d484ec8b9b1fff0fbde81fc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '42767d876b844fbd9b53953fb5f664b5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.430 253542 DEBUG oslo_concurrency.processutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.432 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.433 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.434 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.468 253542 DEBUG nova.storage.rbd_utils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:14 compute-0 ceph-mon[75015]: pgmap v1656: 321 pgs: 321 active+clean; 313 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 5.6 MiB/s wr, 334 op/s
Nov 25 08:38:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1519089877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.473 253542 DEBUG oslo_concurrency.processutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.823 253542 DEBUG oslo_concurrency.processutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.881 253542 DEBUG nova.storage.rbd_utils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] resizing rbd image 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.982 253542 DEBUG nova.objects.instance [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lazy-loading 'migration_context' on Instance uuid 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.984 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.997 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.998 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Ensure instance console log exists: /var/lib/nova/instances/5dc14644-cfc4-4e56-91fd-736ee4e3f5ec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.999 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:14 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.999 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:15 compute-0 nova_compute[253538]: 2025-11-25 08:38:14.999 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:15 compute-0 nova_compute[253538]: 2025-11-25 08:38:15.216 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:15 compute-0 nova_compute[253538]: 2025-11-25 08:38:15.216 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:15 compute-0 nova_compute[253538]: 2025-11-25 08:38:15.233 253542 DEBUG nova.compute.manager [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:38:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1657: 321 pgs: 321 active+clean; 310 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.3 MiB/s wr, 281 op/s
Nov 25 08:38:15 compute-0 nova_compute[253538]: 2025-11-25 08:38:15.382 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:15 compute-0 nova_compute[253538]: 2025-11-25 08:38:15.382 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:15 compute-0 nova_compute[253538]: 2025-11-25 08:38:15.389 253542 DEBUG nova.virt.hardware [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:38:15 compute-0 nova_compute[253538]: 2025-11-25 08:38:15.389 253542 INFO nova.compute.claims [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:38:15 compute-0 nova_compute[253538]: 2025-11-25 08:38:15.434 253542 DEBUG nova.network.neutron [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Successfully created port: 7b8f95d6-ffda-4c87-9539-bdd932e1dad5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:38:15 compute-0 nova_compute[253538]: 2025-11-25 08:38:15.628 253542 DEBUG oslo_concurrency.processutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:38:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936696755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.125 253542 DEBUG oslo_concurrency.processutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.131 253542 DEBUG nova.compute.provider_tree [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.145 253542 DEBUG nova.scheduler.client.report [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.221 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.222 253542 DEBUG nova.compute.manager [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.314 253542 DEBUG nova.compute.manager [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.315 253542 DEBUG nova.network.neutron [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:38:16 compute-0 kernel: tap8f28ea33-80 (unregistering): left promiscuous mode
Nov 25 08:38:16 compute-0 NetworkManager[48915]: <info>  [1764059896.3685] device (tap8f28ea33-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:38:16 compute-0 ovn_controller[152859]: 2025-11-25T08:38:16Z|00794|binding|INFO|Releasing lport 8f28ea33-80c4-41cb-b191-a1b619b14515 from this chassis (sb_readonly=0)
Nov 25 08:38:16 compute-0 ovn_controller[152859]: 2025-11-25T08:38:16Z|00795|binding|INFO|Setting lport 8f28ea33-80c4-41cb-b191-a1b619b14515 down in Southbound
Nov 25 08:38:16 compute-0 ovn_controller[152859]: 2025-11-25T08:38:16Z|00796|binding|INFO|Removing iface tap8f28ea33-80 ovn-installed in OVS
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.380 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.384 253542 INFO nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:38:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:16.396 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:2c:1d 10.100.0.6'], port_security=['fa:16:3e:51:2c:1d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0e855a86-52f7-47bd-aee9-e88449169aa1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6346f61b-3f62-4471-b87c-676053219f02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ae570c13ba047bca1859d62faf328cc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1118a317-9e94-4c83-9854-1785d0154360', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2aacef0e-2524-4118-9960-da2e22fd24eb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8f28ea33-80c4-41cb-b191-a1b619b14515) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.396 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:16.398 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8f28ea33-80c4-41cb-b191-a1b619b14515 in datapath 6346f61b-3f62-4471-b87c-676053219f02 unbound from our chassis
Nov 25 08:38:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:16.399 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6346f61b-3f62-4471-b87c-676053219f02 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 08:38:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:16.400 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c1c87623-07bb-46ff-90fd-c8c65cf42720]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.431 253542 DEBUG nova.compute.manager [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:38:16 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000052.scope: Deactivated successfully.
Nov 25 08:38:16 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000052.scope: Consumed 13.428s CPU time.
Nov 25 08:38:16 compute-0 systemd-machined[215790]: Machine qemu-100-instance-00000052 terminated.
Nov 25 08:38:16 compute-0 ceph-mon[75015]: pgmap v1657: 321 pgs: 321 active+clean; 310 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.3 MiB/s wr, 281 op/s
Nov 25 08:38:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1936696755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.568 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.618 253542 INFO nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Instance shutdown successfully after 13 seconds.
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.624 253542 INFO nova.virt.libvirt.driver [-] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Instance destroyed successfully.
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.625 253542 DEBUG nova.objects.instance [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lazy-loading 'numa_topology' on Instance uuid 0e855a86-52f7-47bd-aee9-e88449169aa1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.653 253542 INFO nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Attempting rescue
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.654 253542 DEBUG nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.659 253542 DEBUG nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.659 253542 INFO nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Creating image(s)
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.683 253542 DEBUG nova.storage.rbd_utils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] rbd image 0e855a86-52f7-47bd-aee9-e88449169aa1_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.687 253542 DEBUG nova.objects.instance [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0e855a86-52f7-47bd-aee9-e88449169aa1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.722 253542 DEBUG nova.storage.rbd_utils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] rbd image 0e855a86-52f7-47bd-aee9-e88449169aa1_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.745 253542 DEBUG nova.storage.rbd_utils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] rbd image 0e855a86-52f7-47bd-aee9-e88449169aa1_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.748 253542 DEBUG oslo_concurrency.processutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.824 253542 DEBUG oslo_concurrency.processutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.824 253542 DEBUG oslo_concurrency.lockutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.825 253542 DEBUG oslo_concurrency.lockutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.826 253542 DEBUG oslo_concurrency.lockutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.848 253542 DEBUG nova.storage.rbd_utils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] rbd image 0e855a86-52f7-47bd-aee9-e88449169aa1_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:16 compute-0 nova_compute[253538]: 2025-11-25 08:38:16.851 253542 DEBUG oslo_concurrency.processutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0e855a86-52f7-47bd-aee9-e88449169aa1_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.161 253542 DEBUG oslo_concurrency.processutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0e855a86-52f7-47bd-aee9-e88449169aa1_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.162 253542 DEBUG nova.objects.instance [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lazy-loading 'migration_context' on Instance uuid 0e855a86-52f7-47bd-aee9-e88449169aa1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.177 253542 DEBUG nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.178 253542 DEBUG nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Start _get_guest_xml network_info=[{"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "vif_mac": "fa:16:3e:51:2c:1d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.179 253542 DEBUG nova.objects.instance [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lazy-loading 'resources' on Instance uuid 0e855a86-52f7-47bd-aee9-e88449169aa1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.198 253542 WARNING nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.205 253542 DEBUG nova.virt.libvirt.host [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.206 253542 DEBUG nova.virt.libvirt.host [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.210 253542 DEBUG nova.virt.libvirt.host [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.211 253542 DEBUG nova.virt.libvirt.host [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.212 253542 DEBUG nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.212 253542 DEBUG nova.virt.hardware [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.213 253542 DEBUG nova.virt.hardware [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.214 253542 DEBUG nova.virt.hardware [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.214 253542 DEBUG nova.virt.hardware [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.214 253542 DEBUG nova.virt.hardware [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.215 253542 DEBUG nova.virt.hardware [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.215 253542 DEBUG nova.virt.hardware [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.216 253542 DEBUG nova.virt.hardware [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.216 253542 DEBUG nova.virt.hardware [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.216 253542 DEBUG nova.virt.hardware [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.217 253542 DEBUG nova.virt.hardware [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.217 253542 DEBUG nova.objects.instance [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0e855a86-52f7-47bd-aee9-e88449169aa1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.237 253542 DEBUG oslo_concurrency.processutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 330 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.4 MiB/s wr, 247 op/s
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.425 253542 DEBUG nova.policy [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3f889d771d484ec8b9b1fff0fbde81fc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '42767d876b844fbd9b53953fb5f664b5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.514 253542 DEBUG nova.compute.manager [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.516 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.516 253542 INFO nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Creating image(s)
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.534 253542 DEBUG nova.storage.rbd_utils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.554 253542 DEBUG nova.storage.rbd_utils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.574 253542 DEBUG nova.storage.rbd_utils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.577 253542 DEBUG oslo_concurrency.processutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.606 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.606 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.644 253542 DEBUG oslo_concurrency.processutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.645 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.645 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.646 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.666 253542 DEBUG nova.storage.rbd_utils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.670 253542 DEBUG oslo_concurrency.processutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2251165028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.726 253542 DEBUG oslo_concurrency.processutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.727 253542 DEBUG oslo_concurrency.processutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.790 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.790 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:38:17 compute-0 nova_compute[253538]: 2025-11-25 08:38:17.970 253542 DEBUG oslo_concurrency.processutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.028 253542 DEBUG nova.storage.rbd_utils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] resizing rbd image 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.060 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.060 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.125 253542 DEBUG nova.compute.manager [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.133 253542 DEBUG nova.objects.instance [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lazy-loading 'migration_context' on Instance uuid 3bc210c6-9f67-440b-a11c-0b4e13e74a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.157 253542 DEBUG nova.compute.manager [req-2705f3d6-20b8-4e96-9f54-0661fc9fe9db req-94873554-af62-47cf-b548-ced5f147de2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received event network-vif-unplugged-8f28ea33-80c4-41cb-b191-a1b619b14515 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.157 253542 DEBUG oslo_concurrency.lockutils [req-2705f3d6-20b8-4e96-9f54-0661fc9fe9db req-94873554-af62-47cf-b548-ced5f147de2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.157 253542 DEBUG oslo_concurrency.lockutils [req-2705f3d6-20b8-4e96-9f54-0661fc9fe9db req-94873554-af62-47cf-b548-ced5f147de2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.157 253542 DEBUG oslo_concurrency.lockutils [req-2705f3d6-20b8-4e96-9f54-0661fc9fe9db req-94873554-af62-47cf-b548-ced5f147de2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.158 253542 DEBUG nova.compute.manager [req-2705f3d6-20b8-4e96-9f54-0661fc9fe9db req-94873554-af62-47cf-b548-ced5f147de2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] No waiting events found dispatching network-vif-unplugged-8f28ea33-80c4-41cb-b191-a1b619b14515 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.158 253542 WARNING nova.compute.manager [req-2705f3d6-20b8-4e96-9f54-0661fc9fe9db req-94873554-af62-47cf-b548-ced5f147de2b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received unexpected event network-vif-unplugged-8f28ea33-80c4-41cb-b191-a1b619b14515 for instance with vm_state active and task_state rescuing.
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.159 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.160 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Ensure instance console log exists: /var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.160 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.160 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.161 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/6349406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.194 253542 DEBUG oslo_concurrency.processutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.195 253542 DEBUG oslo_concurrency.processutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.243 253542 DEBUG nova.network.neutron [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Successfully updated port: 7b8f95d6-ffda-4c87-9539-bdd932e1dad5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.257 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "refresh_cache-5dc14644-cfc4-4e56-91fd-736ee4e3f5ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.257 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquired lock "refresh_cache-5dc14644-cfc4-4e56-91fd-736ee4e3f5ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.257 253542 DEBUG nova.network.neutron [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.259 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.260 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.274 253542 DEBUG nova.virt.hardware [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.274 253542 INFO nova.compute.claims [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.372 253542 DEBUG nova.compute.manager [req-54dfea7b-58a8-4baa-a80a-da6498e9d97f req-357961b4-fdf0-4a36-9ab4-23838391e6ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Received event network-changed-7b8f95d6-ffda-4c87-9539-bdd932e1dad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.372 253542 DEBUG nova.compute.manager [req-54dfea7b-58a8-4baa-a80a-da6498e9d97f req-357961b4-fdf0-4a36-9ab4-23838391e6ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Refreshing instance network info cache due to event network-changed-7b8f95d6-ffda-4c87-9539-bdd932e1dad5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.372 253542 DEBUG oslo_concurrency.lockutils [req-54dfea7b-58a8-4baa-a80a-da6498e9d97f req-357961b4-fdf0-4a36-9ab4-23838391e6ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5dc14644-cfc4-4e56-91fd-736ee4e3f5ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.482 253542 DEBUG nova.network.neutron [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:38:18 compute-0 ceph-mon[75015]: pgmap v1658: 321 pgs: 321 active+clean; 330 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.4 MiB/s wr, 247 op/s
Nov 25 08:38:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2251165028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/6349406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.488 253542 DEBUG oslo_concurrency.processutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:38:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3272993870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.638 253542 DEBUG oslo_concurrency.processutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.640 253542 DEBUG nova.virt.libvirt.vif [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:37:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1695294575',display_name='tempest-ServerRescueTestJSONUnderV235-server-1695294575',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1695294575',id=82,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:38:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ae570c13ba047bca1859d62faf328cc',ramdisk_id='',reservation_id='r-df750kfa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2082720401',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2082720401-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:38:00Z,user_data=None,user_id='2c27b17fb49c46f2877860b2f7123ef2',uuid=0e855a86-52f7-47bd-aee9-e88449169aa1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "vif_mac": "fa:16:3e:51:2c:1d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.641 253542 DEBUG nova.network.os_vif_util [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Converting VIF {"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "vif_mac": "fa:16:3e:51:2c:1d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.643 253542 DEBUG nova.network.os_vif_util [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:51:2c:1d,bridge_name='br-int',has_traffic_filtering=True,id=8f28ea33-80c4-41cb-b191-a1b619b14515,network=Network(6346f61b-3f62-4471-b87c-676053219f02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f28ea33-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.645 253542 DEBUG nova.objects.instance [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 0e855a86-52f7-47bd-aee9-e88449169aa1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.665 253542 DEBUG nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:38:18 compute-0 nova_compute[253538]:   <uuid>0e855a86-52f7-47bd-aee9-e88449169aa1</uuid>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   <name>instance-00000052</name>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1695294575</nova:name>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:38:17</nova:creationTime>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:38:18 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:38:18 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:38:18 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:38:18 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:38:18 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:38:18 compute-0 nova_compute[253538]:         <nova:user uuid="2c27b17fb49c46f2877860b2f7123ef2">tempest-ServerRescueTestJSONUnderV235-2082720401-project-member</nova:user>
Nov 25 08:38:18 compute-0 nova_compute[253538]:         <nova:project uuid="6ae570c13ba047bca1859d62faf328cc">tempest-ServerRescueTestJSONUnderV235-2082720401</nova:project>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:38:18 compute-0 nova_compute[253538]:         <nova:port uuid="8f28ea33-80c4-41cb-b191-a1b619b14515">
Nov 25 08:38:18 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <system>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <entry name="serial">0e855a86-52f7-47bd-aee9-e88449169aa1</entry>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <entry name="uuid">0e855a86-52f7-47bd-aee9-e88449169aa1</entry>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     </system>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   <os>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   </os>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   <features>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   </features>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0e855a86-52f7-47bd-aee9-e88449169aa1_disk.rescue">
Nov 25 08:38:18 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:18 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0e855a86-52f7-47bd-aee9-e88449169aa1_disk">
Nov 25 08:38:18 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:18 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <target dev="vdb" bus="virtio"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0e855a86-52f7-47bd-aee9-e88449169aa1_disk.config.rescue">
Nov 25 08:38:18 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:18 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:51:2c:1d"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <target dev="tap8f28ea33-80"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1/console.log" append="off"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <video>
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     </video>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:38:18 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:38:18 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:38:18 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:38:18 compute-0 nova_compute[253538]: </domain>
Nov 25 08:38:18 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.676 253542 INFO nova.virt.libvirt.driver [-] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Instance destroyed successfully.
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.693 253542 DEBUG nova.network.neutron [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Successfully created port: 21a608d7-be38-4d88-902b-2124e5227ae5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.738 253542 DEBUG nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.738 253542 DEBUG nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.739 253542 DEBUG nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.739 253542 DEBUG nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] No VIF found with MAC fa:16:3e:51:2c:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.740 253542 INFO nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Using config drive
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.764 253542 DEBUG nova.storage.rbd_utils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] rbd image 0e855a86-52f7-47bd-aee9-e88449169aa1_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:18 compute-0 podman[335928]: 2025-11-25 08:38:18.773798461 +0000 UTC m=+0.061089546 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.784 253542 DEBUG nova.objects.instance [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0e855a86-52f7-47bd-aee9-e88449169aa1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.815 253542 DEBUG nova.objects.instance [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lazy-loading 'keypairs' on Instance uuid 0e855a86-52f7-47bd-aee9-e88449169aa1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:38:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2940743551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.968 253542 DEBUG oslo_concurrency.processutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.973 253542 DEBUG nova.compute.provider_tree [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:38:18 compute-0 nova_compute[253538]: 2025-11-25 08:38:18.985 253542 DEBUG nova.scheduler.client.report [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.048 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.049 253542 DEBUG nova.compute.manager [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.101 253542 DEBUG nova.compute.manager [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.101 253542 DEBUG nova.network.neutron [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.115 253542 INFO nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.121 253542 INFO nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Creating config drive at /var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1/disk.config.rescue
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.126 253542 DEBUG oslo_concurrency.processutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4tftbghf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.173 253542 DEBUG nova.compute.manager [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.264 253542 DEBUG nova.compute.manager [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.266 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.266 253542 INFO nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Creating image(s)
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.290 253542 DEBUG nova.storage.rbd_utils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 8120a4a8-c326-4f1b-94d5-2c1ffe663959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.317 253542 DEBUG nova.storage.rbd_utils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 8120a4a8-c326-4f1b-94d5-2c1ffe663959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.347 253542 DEBUG nova.storage.rbd_utils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 8120a4a8-c326-4f1b-94d5-2c1ffe663959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.352 253542 DEBUG oslo_concurrency.processutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1659: 321 pgs: 321 active+clean; 369 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.8 MiB/s wr, 242 op/s
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.399 253542 DEBUG oslo_concurrency.processutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4tftbghf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.429 253542 DEBUG nova.storage.rbd_utils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] rbd image 0e855a86-52f7-47bd-aee9-e88449169aa1_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.434 253542 DEBUG oslo_concurrency.processutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1/disk.config.rescue 0e855a86-52f7-47bd-aee9-e88449169aa1_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.477 253542 DEBUG nova.policy [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fdcb005cc49a4dfa82152f2c0817cc94', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b730f086c4b94185afab5e10fa2e8181', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.481 253542 DEBUG oslo_concurrency.processutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.482 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.483 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.483 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:19 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3272993870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:19 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2940743551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.510 253542 DEBUG nova.storage.rbd_utils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 8120a4a8-c326-4f1b-94d5-2c1ffe663959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.515 253542 DEBUG oslo_concurrency.processutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8120a4a8-c326-4f1b-94d5-2c1ffe663959_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.589 253542 DEBUG oslo_concurrency.processutils [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1/disk.config.rescue 0e855a86-52f7-47bd-aee9-e88449169aa1_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.591 253542 INFO nova.virt.libvirt.driver [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Deleting local config drive /var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1/disk.config.rescue because it was imported into RBD.
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.605 253542 DEBUG nova.network.neutron [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Updating instance_info_cache with network_info: [{"id": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "address": "fa:16:3e:62:35:96", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f95d6-ff", "ovs_interfaceid": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.644 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Releasing lock "refresh_cache-5dc14644-cfc4-4e56-91fd-736ee4e3f5ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.645 253542 DEBUG nova.compute.manager [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Instance network_info: |[{"id": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "address": "fa:16:3e:62:35:96", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f95d6-ff", "ovs_interfaceid": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.647 253542 DEBUG oslo_concurrency.lockutils [req-54dfea7b-58a8-4baa-a80a-da6498e9d97f req-357961b4-fdf0-4a36-9ab4-23838391e6ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5dc14644-cfc4-4e56-91fd-736ee4e3f5ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.647 253542 DEBUG nova.network.neutron [req-54dfea7b-58a8-4baa-a80a-da6498e9d97f req-357961b4-fdf0-4a36-9ab4-23838391e6ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Refreshing network info cache for port 7b8f95d6-ffda-4c87-9539-bdd932e1dad5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.654 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Start _get_guest_xml network_info=[{"id": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "address": "fa:16:3e:62:35:96", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f95d6-ff", "ovs_interfaceid": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.662 253542 WARNING nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:38:19 compute-0 kernel: tap8f28ea33-80: entered promiscuous mode
Nov 25 08:38:19 compute-0 NetworkManager[48915]: <info>  [1764059899.6794] manager: (tap8f28ea33-80): new Tun device (/org/freedesktop/NetworkManager/Devices/345)
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.685 253542 DEBUG nova.virt.libvirt.host [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.686 253542 DEBUG nova.virt.libvirt.host [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.686 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:19 compute-0 ovn_controller[152859]: 2025-11-25T08:38:19Z|00797|binding|INFO|Claiming lport 8f28ea33-80c4-41cb-b191-a1b619b14515 for this chassis.
Nov 25 08:38:19 compute-0 ovn_controller[152859]: 2025-11-25T08:38:19Z|00798|binding|INFO|8f28ea33-80c4-41cb-b191-a1b619b14515: Claiming fa:16:3e:51:2c:1d 10.100.0.6
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.692 253542 DEBUG nova.virt.libvirt.host [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.693 253542 DEBUG nova.virt.libvirt.host [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.694 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.694 253542 DEBUG nova.virt.hardware [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.695 253542 DEBUG nova.virt.hardware [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.695 253542 DEBUG nova.virt.hardware [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.695 253542 DEBUG nova.virt.hardware [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.695 253542 DEBUG nova.virt.hardware [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.696 253542 DEBUG nova.virt.hardware [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.696 253542 DEBUG nova.virt.hardware [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.696 253542 DEBUG nova.virt.hardware [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.696 253542 DEBUG nova.virt.hardware [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.697 253542 DEBUG nova.virt.hardware [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.697 253542 DEBUG nova.virt.hardware [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:38:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:19.699 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:2c:1d 10.100.0.6'], port_security=['fa:16:3e:51:2c:1d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0e855a86-52f7-47bd-aee9-e88449169aa1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6346f61b-3f62-4471-b87c-676053219f02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ae570c13ba047bca1859d62faf328cc', 'neutron:revision_number': '5', 'neutron:security_group_ids': '1118a317-9e94-4c83-9854-1785d0154360', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2aacef0e-2524-4118-9960-da2e22fd24eb, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8f28ea33-80c4-41cb-b191-a1b619b14515) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:38:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:19.700 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8f28ea33-80c4-41cb-b191-a1b619b14515 in datapath 6346f61b-3f62-4471-b87c-676053219f02 bound to our chassis
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.701 253542 DEBUG oslo_concurrency.processutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:19.701 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6346f61b-3f62-4471-b87c-676053219f02 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 08:38:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:19.703 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d844804-6b1c-4fe7-9027-727248752c06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:19 compute-0 ovn_controller[152859]: 2025-11-25T08:38:19Z|00799|binding|INFO|Setting lport 8f28ea33-80c4-41cb-b191-a1b619b14515 ovn-installed in OVS
Nov 25 08:38:19 compute-0 ovn_controller[152859]: 2025-11-25T08:38:19Z|00800|binding|INFO|Setting lport 8f28ea33-80c4-41cb-b191-a1b619b14515 up in Southbound
Nov 25 08:38:19 compute-0 systemd-machined[215790]: New machine qemu-102-instance-00000052.
Nov 25 08:38:19 compute-0 systemd[1]: Started Virtual Machine qemu-102-instance-00000052.
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.739 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:19 compute-0 systemd-udevd[336116]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:38:19 compute-0 NetworkManager[48915]: <info>  [1764059899.7658] device (tap8f28ea33-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:38:19 compute-0 NetworkManager[48915]: <info>  [1764059899.7668] device (tap8f28ea33-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.867 253542 DEBUG oslo_concurrency.processutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8120a4a8-c326-4f1b-94d5-2c1ffe663959_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.935 253542 DEBUG nova.storage.rbd_utils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] resizing rbd image 8120a4a8-c326-4f1b-94d5-2c1ffe663959_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:38:19 compute-0 nova_compute[253538]: 2025-11-25 08:38:19.985 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.028 253542 DEBUG nova.objects.instance [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'migration_context' on Instance uuid 8120a4a8-c326-4f1b-94d5-2c1ffe663959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.038 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.039 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Ensure instance console log exists: /var/lib/nova/instances/8120a4a8-c326-4f1b-94d5-2c1ffe663959/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.039 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.040 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.040 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/841670697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.116 253542 DEBUG oslo_concurrency.processutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.135 253542 DEBUG nova.storage.rbd_utils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.141 253542 DEBUG oslo_concurrency.processutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.508 253542 DEBUG nova.compute.manager [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received event network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.511 253542 DEBUG oslo_concurrency.lockutils [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.511 253542 DEBUG oslo_concurrency.lockutils [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.511 253542 DEBUG oslo_concurrency.lockutils [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.512 253542 DEBUG nova.compute.manager [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] No waiting events found dispatching network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.512 253542 WARNING nova.compute.manager [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received unexpected event network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 for instance with vm_state active and task_state rescuing.
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.513 253542 DEBUG nova.compute.manager [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received event network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.513 253542 DEBUG oslo_concurrency.lockutils [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.513 253542 DEBUG oslo_concurrency.lockutils [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.514 253542 DEBUG oslo_concurrency.lockutils [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.514 253542 DEBUG nova.compute.manager [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] No waiting events found dispatching network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.514 253542 WARNING nova.compute.manager [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received unexpected event network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 for instance with vm_state active and task_state rescuing.
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.515 253542 DEBUG nova.compute.manager [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received event network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.515 253542 DEBUG oslo_concurrency.lockutils [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.515 253542 DEBUG oslo_concurrency.lockutils [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.515 253542 DEBUG oslo_concurrency.lockutils [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.516 253542 DEBUG nova.compute.manager [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] No waiting events found dispatching network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.516 253542 WARNING nova.compute.manager [req-da7b244d-85ef-41fb-9c51-ff4c01521e9f req-15351350-72ad-42a8-81f9-bb21072e1cfb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received unexpected event network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 for instance with vm_state active and task_state rescuing.
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:38:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3768797060' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:20 compute-0 ceph-mon[75015]: pgmap v1659: 321 pgs: 321 active+clean; 369 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.8 MiB/s wr, 242 op/s
Nov 25 08:38:20 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/841670697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.580 253542 DEBUG oslo_concurrency.processutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.582 253542 DEBUG nova.virt.libvirt.vif [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:38:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1021848873',display_name='tempest-ServerRescueNegativeTestJSON-server-1021848873',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1021848873',id=84,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='42767d876b844fbd9b53953fb5f664b5',ramdisk_id='',reservation_id='r-0hykkj86',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-561122366',owner_user_name='tempest-ServerRescueNegativeTestJSON-561122366-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:38:14Z,user_data=None,user_id='3f889d771d484ec8b9b1fff0fbde81fc',uuid=5dc14644-cfc4-4e56-91fd-736ee4e3f5ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "address": "fa:16:3e:62:35:96", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f95d6-ff", "ovs_interfaceid": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.583 253542 DEBUG nova.network.os_vif_util [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Converting VIF {"id": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "address": "fa:16:3e:62:35:96", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f95d6-ff", "ovs_interfaceid": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.584 253542 DEBUG nova.network.os_vif_util [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:35:96,bridge_name='br-int',has_traffic_filtering=True,id=7b8f95d6-ffda-4c87-9539-bdd932e1dad5,network=Network(1a4161e2-3dc8-48ab-8204-aaba5cb02136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b8f95d6-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.585 253542 DEBUG nova.objects.instance [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.601 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:38:20 compute-0 nova_compute[253538]:   <uuid>5dc14644-cfc4-4e56-91fd-736ee4e3f5ec</uuid>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   <name>instance-00000054</name>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1021848873</nova:name>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:38:19</nova:creationTime>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:38:20 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:38:20 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:38:20 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:38:20 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:38:20 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:38:20 compute-0 nova_compute[253538]:         <nova:user uuid="3f889d771d484ec8b9b1fff0fbde81fc">tempest-ServerRescueNegativeTestJSON-561122366-project-member</nova:user>
Nov 25 08:38:20 compute-0 nova_compute[253538]:         <nova:project uuid="42767d876b844fbd9b53953fb5f664b5">tempest-ServerRescueNegativeTestJSON-561122366</nova:project>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:38:20 compute-0 nova_compute[253538]:         <nova:port uuid="7b8f95d6-ffda-4c87-9539-bdd932e1dad5">
Nov 25 08:38:20 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <system>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <entry name="serial">5dc14644-cfc4-4e56-91fd-736ee4e3f5ec</entry>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <entry name="uuid">5dc14644-cfc4-4e56-91fd-736ee4e3f5ec</entry>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     </system>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   <os>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   </os>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   <features>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   </features>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/5dc14644-cfc4-4e56-91fd-736ee4e3f5ec_disk">
Nov 25 08:38:20 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:20 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/5dc14644-cfc4-4e56-91fd-736ee4e3f5ec_disk.config">
Nov 25 08:38:20 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:20 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:62:35:96"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <target dev="tap7b8f95d6-ff"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/5dc14644-cfc4-4e56-91fd-736ee4e3f5ec/console.log" append="off"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <video>
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     </video>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:38:20 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:38:20 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:38:20 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:38:20 compute-0 nova_compute[253538]: </domain>
Nov 25 08:38:20 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.608 253542 DEBUG nova.compute.manager [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Preparing to wait for external event network-vif-plugged-7b8f95d6-ffda-4c87-9539-bdd932e1dad5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.608 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.609 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.609 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.610 253542 DEBUG nova.virt.libvirt.vif [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:38:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1021848873',display_name='tempest-ServerRescueNegativeTestJSON-server-1021848873',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1021848873',id=84,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='42767d876b844fbd9b53953fb5f664b5',ramdisk_id='',reservation_id='r-0hykkj86',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-561122366',owner_user_name='tempest-ServerRescueNegativeTestJSON-561122366-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:38:14Z,user_data=None,user_id='3f889d771d484ec8b9b1fff0fbde81fc',uuid=5dc14644-cfc4-4e56-91fd-736ee4e3f5ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "address": "fa:16:3e:62:35:96", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f95d6-ff", "ovs_interfaceid": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.610 253542 DEBUG nova.network.os_vif_util [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Converting VIF {"id": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "address": "fa:16:3e:62:35:96", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f95d6-ff", "ovs_interfaceid": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.611 253542 DEBUG nova.network.os_vif_util [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:35:96,bridge_name='br-int',has_traffic_filtering=True,id=7b8f95d6-ffda-4c87-9539-bdd932e1dad5,network=Network(1a4161e2-3dc8-48ab-8204-aaba5cb02136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b8f95d6-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.612 253542 DEBUG os_vif [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:35:96,bridge_name='br-int',has_traffic_filtering=True,id=7b8f95d6-ffda-4c87-9539-bdd932e1dad5,network=Network(1a4161e2-3dc8-48ab-8204-aaba5cb02136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b8f95d6-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.613 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.613 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.614 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.619 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.620 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b8f95d6-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.620 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7b8f95d6-ff, col_values=(('external_ids', {'iface-id': '7b8f95d6-ffda-4c87-9539-bdd932e1dad5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:35:96', 'vm-uuid': '5dc14644-cfc4-4e56-91fd-736ee4e3f5ec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.622 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:20 compute-0 NetworkManager[48915]: <info>  [1764059900.6230] manager: (tap7b8f95d6-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.631 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.632 253542 INFO os_vif [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:35:96,bridge_name='br-int',has_traffic_filtering=True,id=7b8f95d6-ffda-4c87-9539-bdd932e1dad5,network=Network(1a4161e2-3dc8-48ab-8204-aaba5cb02136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b8f95d6-ff')
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.673 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.673 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.673 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] No VIF found with MAC fa:16:3e:62:35:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.674 253542 INFO nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Using config drive
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.693 253542 DEBUG nova.storage.rbd_utils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.834 253542 DEBUG nova.network.neutron [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Successfully updated port: 21a608d7-be38-4d88-902b-2124e5227ae5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.858 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "refresh_cache-3bc210c6-9f67-440b-a11c-0b4e13e74a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.859 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquired lock "refresh_cache-3bc210c6-9f67-440b-a11c-0b4e13e74a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.859 253542 DEBUG nova.network.neutron [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.955 253542 DEBUG nova.compute.manager [req-a3273386-6e74-4634-b79d-a9f39dfef8f1 req-cf86d6ba-bf91-429d-bd31-d06e21923935 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received event network-changed-21a608d7-be38-4d88-902b-2124e5227ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.955 253542 DEBUG nova.compute.manager [req-a3273386-6e74-4634-b79d-a9f39dfef8f1 req-cf86d6ba-bf91-429d-bd31-d06e21923935 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Refreshing instance network info cache due to event network-changed-21a608d7-be38-4d88-902b-2124e5227ae5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.955 253542 DEBUG oslo_concurrency.lockutils [req-a3273386-6e74-4634-b79d-a9f39dfef8f1 req-cf86d6ba-bf91-429d-bd31-d06e21923935 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3bc210c6-9f67-440b-a11c-0b4e13e74a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:20 compute-0 nova_compute[253538]: 2025-11-25 08:38:20.993 253542 DEBUG nova.network.neutron [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Successfully created port: 5375bd27-22cc-4b77-9aba-984375be602e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.032 253542 INFO nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Creating config drive at /var/lib/nova/instances/5dc14644-cfc4-4e56-91fd-736ee4e3f5ec/disk.config
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.036 253542 DEBUG oslo_concurrency.processutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5dc14644-cfc4-4e56-91fd-736ee4e3f5ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ctnjq3p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.084 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 0e855a86-52f7-47bd-aee9-e88449169aa1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.085 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059901.0515437, 0e855a86-52f7-47bd-aee9-e88449169aa1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.085 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] VM Resumed (Lifecycle Event)
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.088 253542 DEBUG nova.compute.manager [None req-12ffe539-a05a-49f5-90f6-fcb7cfbc6e1d 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.095 253542 DEBUG nova.network.neutron [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.105 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.108 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.132 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.133 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059901.0527413, 0e855a86-52f7-47bd-aee9-e88449169aa1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.133 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] VM Started (Lifecycle Event)
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.151 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.158 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.195 253542 DEBUG oslo_concurrency.processutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5dc14644-cfc4-4e56-91fd-736ee4e3f5ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ctnjq3p" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.220 253542 DEBUG nova.storage.rbd_utils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.224 253542 DEBUG oslo_concurrency.processutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5dc14644-cfc4-4e56-91fd-736ee4e3f5ec/disk.config 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1660: 321 pgs: 321 active+clean; 437 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.5 MiB/s wr, 247 op/s
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.377 253542 DEBUG oslo_concurrency.processutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5dc14644-cfc4-4e56-91fd-736ee4e3f5ec/disk.config 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.378 253542 INFO nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Deleting local config drive /var/lib/nova/instances/5dc14644-cfc4-4e56-91fd-736ee4e3f5ec/disk.config because it was imported into RBD.
Nov 25 08:38:21 compute-0 kernel: tap7b8f95d6-ff: entered promiscuous mode
Nov 25 08:38:21 compute-0 NetworkManager[48915]: <info>  [1764059901.4311] manager: (tap7b8f95d6-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/347)
Nov 25 08:38:21 compute-0 systemd-udevd[336119]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.433 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:21 compute-0 ovn_controller[152859]: 2025-11-25T08:38:21Z|00801|binding|INFO|Claiming lport 7b8f95d6-ffda-4c87-9539-bdd932e1dad5 for this chassis.
Nov 25 08:38:21 compute-0 ovn_controller[152859]: 2025-11-25T08:38:21Z|00802|binding|INFO|7b8f95d6-ffda-4c87-9539-bdd932e1dad5: Claiming fa:16:3e:62:35:96 10.100.0.4
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.442 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:35:96 10.100.0.4'], port_security=['fa:16:3e:62:35:96 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5dc14644-cfc4-4e56-91fd-736ee4e3f5ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '42767d876b844fbd9b53953fb5f664b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '87f4d1a7-e392-4893-98e7-5d05afa4be77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=608490f2-11ad-4e35-9100-843accb3d76b, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7b8f95d6-ffda-4c87-9539-bdd932e1dad5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.443 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7b8f95d6-ffda-4c87-9539-bdd932e1dad5 in datapath 1a4161e2-3dc8-48ab-8204-aaba5cb02136 bound to our chassis
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.445 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1a4161e2-3dc8-48ab-8204-aaba5cb02136
Nov 25 08:38:21 compute-0 NetworkManager[48915]: <info>  [1764059901.4464] device (tap7b8f95d6-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:38:21 compute-0 NetworkManager[48915]: <info>  [1764059901.4475] device (tap7b8f95d6-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.455 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[24296535-4d19-4854-bfbf-310f0cab01d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.456 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1a4161e2-31 in ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.459 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1a4161e2-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.460 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff9d861-6517-4415-9fe2-acdcc95e8739]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.461 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[72e87d08-5eb2-4eae-8d44-fb274dc42ba7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:21 compute-0 ovn_controller[152859]: 2025-11-25T08:38:21Z|00803|binding|INFO|Setting lport 7b8f95d6-ffda-4c87-9539-bdd932e1dad5 ovn-installed in OVS
Nov 25 08:38:21 compute-0 ovn_controller[152859]: 2025-11-25T08:38:21Z|00804|binding|INFO|Setting lport 7b8f95d6-ffda-4c87-9539-bdd932e1dad5 up in Southbound
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.473 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.475 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.475 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef5311d-9200-4e61-affe-5637644a0a88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:21 compute-0 systemd-machined[215790]: New machine qemu-103-instance-00000054.
Nov 25 08:38:21 compute-0 systemd[1]: Started Virtual Machine qemu-103-instance-00000054.
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.501 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[83f5ff14-a082-4339-83e8-f9c8d7ba0abb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.530 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[16058a80-40b3-423e-9a50-4d1eb55363f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:21 compute-0 NetworkManager[48915]: <info>  [1764059901.5384] manager: (tap1a4161e2-30): new Veth device (/org/freedesktop/NetworkManager/Devices/348)
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.537 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9598bfc9-16cf-4e0a-a2e6-65d7ad45ac85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.569 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.580 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[214e8ad9-450c-4e9c-b301-9479ce13c304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.583 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d303f674-2d41-4368-b35b-5e718a766b9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:21 compute-0 podman[336395]: 2025-11-25 08:38:21.595321772 +0000 UTC m=+0.099705299 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 08:38:21 compute-0 NetworkManager[48915]: <info>  [1764059901.6094] device (tap1a4161e2-30): carrier: link connected
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.617 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5918da-4b7f-424e-8655-c023abcd1b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3768797060' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:21 compute-0 ceph-mon[75015]: pgmap v1660: 321 pgs: 321 active+clean; 437 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.5 MiB/s wr, 247 op/s
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.633 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1b352215-a099-48d2-b29b-4f86bffdc443]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1a4161e2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:2f:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525620, 'reachable_time': 42483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336445, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.651 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[942f8000-43d5-409a-acf4-b56bd3c7994c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee2:2ff6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525620, 'tstamp': 525620}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336446, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.665 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[89082be6-c1bd-453a-bea8-97e3a5594aac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1a4161e2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:2f:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525620, 'reachable_time': 42483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336447, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.696 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2243f5f0-2021-4ac2-8642-2e0d92b9c51a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.746 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d63f1f27-0d1c-48b9-bfe4-1694f355597e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.747 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a4161e2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.748 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.748 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a4161e2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.750 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:21 compute-0 kernel: tap1a4161e2-30: entered promiscuous mode
Nov 25 08:38:21 compute-0 NetworkManager[48915]: <info>  [1764059901.7521] manager: (tap1a4161e2-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.753 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1a4161e2-30, col_values=(('external_ids', {'iface-id': 'ee088c9f-327b-47d5-b296-dc11de2d7323'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.754 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:21 compute-0 ovn_controller[152859]: 2025-11-25T08:38:21Z|00805|binding|INFO|Releasing lport ee088c9f-327b-47d5-b296-dc11de2d7323 from this chassis (sb_readonly=0)
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.769 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.771 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1a4161e2-3dc8-48ab-8204-aaba5cb02136.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1a4161e2-3dc8-48ab-8204-aaba5cb02136.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.772 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e824bc93-ccb6-405b-ae33-4298aff4a93d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.773 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-1a4161e2-3dc8-48ab-8204-aaba5cb02136
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/1a4161e2-3dc8-48ab-8204-aaba5cb02136.pid.haproxy
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 1a4161e2-3dc8-48ab-8204-aaba5cb02136
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:38:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:21.774 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'env', 'PROCESS_TAG=haproxy-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1a4161e2-3dc8-48ab-8204-aaba5cb02136.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.835 253542 DEBUG nova.network.neutron [req-54dfea7b-58a8-4baa-a80a-da6498e9d97f req-357961b4-fdf0-4a36-9ab4-23838391e6ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Updated VIF entry in instance network info cache for port 7b8f95d6-ffda-4c87-9539-bdd932e1dad5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.836 253542 DEBUG nova.network.neutron [req-54dfea7b-58a8-4baa-a80a-da6498e9d97f req-357961b4-fdf0-4a36-9ab4-23838391e6ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Updating instance_info_cache with network_info: [{"id": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "address": "fa:16:3e:62:35:96", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f95d6-ff", "ovs_interfaceid": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:21 compute-0 nova_compute[253538]: 2025-11-25 08:38:21.846 253542 DEBUG oslo_concurrency.lockutils [req-54dfea7b-58a8-4baa-a80a-da6498e9d97f req-357961b4-fdf0-4a36-9ab4-23838391e6ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5dc14644-cfc4-4e56-91fd-736ee4e3f5ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:22 compute-0 podman[336478]: 2025-11-25 08:38:22.171264417 +0000 UTC m=+0.058585538 container create a8538da0b5e5de597783122c064d94e3cc80fb7c446521a3e689362f2b3b515b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 08:38:22 compute-0 systemd[1]: Started libpod-conmon-a8538da0b5e5de597783122c064d94e3cc80fb7c446521a3e689362f2b3b515b.scope.
Nov 25 08:38:22 compute-0 podman[336478]: 2025-11-25 08:38:22.140070247 +0000 UTC m=+0.027391458 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:38:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45de20e61aed4a1b05e83338665baea5e22a4e3b5e0ced4a5eecb215f961e99/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:38:22 compute-0 podman[336478]: 2025-11-25 08:38:22.259770859 +0000 UTC m=+0.147092000 container init a8538da0b5e5de597783122c064d94e3cc80fb7c446521a3e689362f2b3b515b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:38:22 compute-0 podman[336478]: 2025-11-25 08:38:22.265170056 +0000 UTC m=+0.152491177 container start a8538da0b5e5de597783122c064d94e3cc80fb7c446521a3e689362f2b3b515b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 08:38:22 compute-0 neutron-haproxy-ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136[336493]: [NOTICE]   (336497) : New worker (336499) forked
Nov 25 08:38:22 compute-0 neutron-haproxy-ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136[336493]: [NOTICE]   (336497) : Loading success.
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.519 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059902.5187914, 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.520 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] VM Started (Lifecycle Event)
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.547 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.552 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059902.5190887, 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.552 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] VM Paused (Lifecycle Event)
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.556 253542 DEBUG nova.network.neutron [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Updating instance_info_cache with network_info: [{"id": "21a608d7-be38-4d88-902b-2124e5227ae5", "address": "fa:16:3e:c2:45:0c", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a608d7-be", "ovs_interfaceid": "21a608d7-be38-4d88-902b-2124e5227ae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.566 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.570 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.572 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Releasing lock "refresh_cache-3bc210c6-9f67-440b-a11c-0b4e13e74a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.572 253542 DEBUG nova.compute.manager [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Instance network_info: |[{"id": "21a608d7-be38-4d88-902b-2124e5227ae5", "address": "fa:16:3e:c2:45:0c", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a608d7-be", "ovs_interfaceid": "21a608d7-be38-4d88-902b-2124e5227ae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.573 253542 DEBUG oslo_concurrency.lockutils [req-a3273386-6e74-4634-b79d-a9f39dfef8f1 req-cf86d6ba-bf91-429d-bd31-d06e21923935 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3bc210c6-9f67-440b-a11c-0b4e13e74a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.573 253542 DEBUG nova.network.neutron [req-a3273386-6e74-4634-b79d-a9f39dfef8f1 req-cf86d6ba-bf91-429d-bd31-d06e21923935 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Refreshing network info cache for port 21a608d7-be38-4d88-902b-2124e5227ae5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.576 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Start _get_guest_xml network_info=[{"id": "21a608d7-be38-4d88-902b-2124e5227ae5", "address": "fa:16:3e:c2:45:0c", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a608d7-be", "ovs_interfaceid": "21a608d7-be38-4d88-902b-2124e5227ae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.580 253542 WARNING nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.586 253542 DEBUG nova.virt.libvirt.host [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.586 253542 DEBUG nova.virt.libvirt.host [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.591 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.592 253542 DEBUG nova.virt.libvirt.host [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.592 253542 DEBUG nova.virt.libvirt.host [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.593 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.593 253542 DEBUG nova.virt.hardware [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.593 253542 DEBUG nova.virt.hardware [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.594 253542 DEBUG nova.virt.hardware [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.594 253542 DEBUG nova.virt.hardware [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.594 253542 DEBUG nova.virt.hardware [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.594 253542 DEBUG nova.virt.hardware [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.595 253542 DEBUG nova.virt.hardware [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.595 253542 DEBUG nova.virt.hardware [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.595 253542 DEBUG nova.virt.hardware [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.595 253542 DEBUG nova.virt.hardware [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.596 253542 DEBUG nova.virt.hardware [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:38:22 compute-0 nova_compute[253538]: 2025-11-25 08:38:22.598 253542 DEBUG oslo_concurrency.processutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1281185643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.032 253542 DEBUG oslo_concurrency.processutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1281185643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.067 253542 DEBUG nova.storage.rbd_utils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.074 253542 DEBUG oslo_concurrency.processutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1661: 321 pgs: 321 active+clean; 491 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 10 MiB/s wr, 272 op/s
Nov 25 08:38:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:38:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.437 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "8a68398d-9640-49e2-a049-3da4f7b371c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.438 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:38:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:38:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:38:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.457 253542 DEBUG nova.compute.manager [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.536 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.537 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.545 253542 DEBUG nova.virt.hardware [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.545 253542 INFO nova.compute.claims [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:38:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3885153772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.576 253542 DEBUG nova.network.neutron [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Successfully updated port: 5375bd27-22cc-4b77-9aba-984375be602e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.584 253542 DEBUG oslo_concurrency.processutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.585 253542 DEBUG nova.virt.libvirt.vif [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:38:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1788909463',display_name='tempest-ServerRescueNegativeTestJSON-server-1788909463',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1788909463',id=85,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='42767d876b844fbd9b53953fb5f664b5',ramdisk_id='',reservation_id='r-53yldfyc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-561122366',owner_user_name='tempest-ServerRescueNegativeTestJSON-561122366-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:38:16Z,user_data=None,user_id='3f889d771d484ec8b9b1fff0fbde81fc',uuid=3bc210c6-9f67-440b-a11c-0b4e13e74a21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21a608d7-be38-4d88-902b-2124e5227ae5", "address": "fa:16:3e:c2:45:0c", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a608d7-be", "ovs_interfaceid": "21a608d7-be38-4d88-902b-2124e5227ae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.586 253542 DEBUG nova.network.os_vif_util [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Converting VIF {"id": "21a608d7-be38-4d88-902b-2124e5227ae5", "address": "fa:16:3e:c2:45:0c", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a608d7-be", "ovs_interfaceid": "21a608d7-be38-4d88-902b-2124e5227ae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.587 253542 DEBUG nova.network.os_vif_util [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:45:0c,bridge_name='br-int',has_traffic_filtering=True,id=21a608d7-be38-4d88-902b-2124e5227ae5,network=Network(1a4161e2-3dc8-48ab-8204-aaba5cb02136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21a608d7-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.588 253542 DEBUG nova.objects.instance [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3bc210c6-9f67-440b-a11c-0b4e13e74a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.603 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "refresh_cache-8120a4a8-c326-4f1b-94d5-2c1ffe663959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.603 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquired lock "refresh_cache-8120a4a8-c326-4f1b-94d5-2c1ffe663959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.603 253542 DEBUG nova.network.neutron [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.607 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:38:23 compute-0 nova_compute[253538]:   <uuid>3bc210c6-9f67-440b-a11c-0b4e13e74a21</uuid>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   <name>instance-00000055</name>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1788909463</nova:name>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:38:22</nova:creationTime>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:38:23 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:38:23 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:38:23 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:38:23 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:38:23 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:38:23 compute-0 nova_compute[253538]:         <nova:user uuid="3f889d771d484ec8b9b1fff0fbde81fc">tempest-ServerRescueNegativeTestJSON-561122366-project-member</nova:user>
Nov 25 08:38:23 compute-0 nova_compute[253538]:         <nova:project uuid="42767d876b844fbd9b53953fb5f664b5">tempest-ServerRescueNegativeTestJSON-561122366</nova:project>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:38:23 compute-0 nova_compute[253538]:         <nova:port uuid="21a608d7-be38-4d88-902b-2124e5227ae5">
Nov 25 08:38:23 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <system>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <entry name="serial">3bc210c6-9f67-440b-a11c-0b4e13e74a21</entry>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <entry name="uuid">3bc210c6-9f67-440b-a11c-0b4e13e74a21</entry>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     </system>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   <os>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   </os>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   <features>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   </features>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk">
Nov 25 08:38:23 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:23 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.config">
Nov 25 08:38:23 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:23 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:c2:45:0c"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <target dev="tap21a608d7-be"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21/console.log" append="off"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <video>
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     </video>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:38:23 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:38:23 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:38:23 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:38:23 compute-0 nova_compute[253538]: </domain>
Nov 25 08:38:23 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.612 253542 DEBUG nova.compute.manager [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Preparing to wait for external event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.612 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.613 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.613 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.613 253542 DEBUG nova.virt.libvirt.vif [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:38:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1788909463',display_name='tempest-ServerRescueNegativeTestJSON-server-1788909463',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1788909463',id=85,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='42767d876b844fbd9b53953fb5f664b5',ramdisk_id='',reservation_id='r-53yldfyc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-561122366',owner_user
_name='tempest-ServerRescueNegativeTestJSON-561122366-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:38:16Z,user_data=None,user_id='3f889d771d484ec8b9b1fff0fbde81fc',uuid=3bc210c6-9f67-440b-a11c-0b4e13e74a21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21a608d7-be38-4d88-902b-2124e5227ae5", "address": "fa:16:3e:c2:45:0c", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a608d7-be", "ovs_interfaceid": "21a608d7-be38-4d88-902b-2124e5227ae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.614 253542 DEBUG nova.network.os_vif_util [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Converting VIF {"id": "21a608d7-be38-4d88-902b-2124e5227ae5", "address": "fa:16:3e:c2:45:0c", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a608d7-be", "ovs_interfaceid": "21a608d7-be38-4d88-902b-2124e5227ae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.614 253542 DEBUG nova.network.os_vif_util [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:45:0c,bridge_name='br-int',has_traffic_filtering=True,id=21a608d7-be38-4d88-902b-2124e5227ae5,network=Network(1a4161e2-3dc8-48ab-8204-aaba5cb02136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21a608d7-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.615 253542 DEBUG os_vif [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:45:0c,bridge_name='br-int',has_traffic_filtering=True,id=21a608d7-be38-4d88-902b-2124e5227ae5,network=Network(1a4161e2-3dc8-48ab-8204-aaba5cb02136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21a608d7-be') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.616 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.617 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.617 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.624 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.624 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21a608d7-be, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.625 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21a608d7-be, col_values=(('external_ids', {'iface-id': '21a608d7-be38-4d88-902b-2124e5227ae5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:45:0c', 'vm-uuid': '3bc210c6-9f67-440b-a11c-0b4e13e74a21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.626 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:23 compute-0 NetworkManager[48915]: <info>  [1764059903.6278] manager: (tap21a608d7-be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.628 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.634 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.635 253542 INFO os_vif [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:45:0c,bridge_name='br-int',has_traffic_filtering=True,id=21a608d7-be38-4d88-902b-2124e5227ae5,network=Network(1a4161e2-3dc8-48ab-8204-aaba5cb02136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21a608d7-be')
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.699 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.699 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.700 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] No VIF found with MAC fa:16:3e:c2:45:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.700 253542 INFO nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Using config drive
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.724 253542 DEBUG nova.storage.rbd_utils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.783 253542 DEBUG oslo_concurrency.processutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.828 253542 DEBUG nova.network.neutron [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.837 253542 DEBUG nova.compute.manager [req-e60d3dd9-c951-40f4-9088-226ddb4c41bf req-3c852520-e75e-431e-b94a-b301472de674 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Received event network-vif-plugged-7b8f95d6-ffda-4c87-9539-bdd932e1dad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.837 253542 DEBUG oslo_concurrency.lockutils [req-e60d3dd9-c951-40f4-9088-226ddb4c41bf req-3c852520-e75e-431e-b94a-b301472de674 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.837 253542 DEBUG oslo_concurrency.lockutils [req-e60d3dd9-c951-40f4-9088-226ddb4c41bf req-3c852520-e75e-431e-b94a-b301472de674 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.838 253542 DEBUG oslo_concurrency.lockutils [req-e60d3dd9-c951-40f4-9088-226ddb4c41bf req-3c852520-e75e-431e-b94a-b301472de674 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.838 253542 DEBUG nova.compute.manager [req-e60d3dd9-c951-40f4-9088-226ddb4c41bf req-3c852520-e75e-431e-b94a-b301472de674 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Processing event network-vif-plugged-7b8f95d6-ffda-4c87-9539-bdd932e1dad5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.842 253542 DEBUG nova.compute.manager [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.860 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059903.8454993, 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.860 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] VM Resumed (Lifecycle Event)
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.863 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.866 253542 INFO nova.virt.libvirt.driver [-] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Instance spawned successfully.
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.867 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.879 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.886 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.889 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.889 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.889 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.890 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.890 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.890 253542 DEBUG nova.virt.libvirt.driver [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.911 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.946 253542 INFO nova.compute.manager [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Took 9.69 seconds to spawn the instance on the hypervisor.
Nov 25 08:38:23 compute-0 nova_compute[253538]: 2025-11-25 08:38:23.947 253542 DEBUG nova.compute.manager [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.001 253542 INFO nova.compute.manager [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Took 10.91 seconds to build instance.
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.013 253542 DEBUG oslo_concurrency.lockutils [None req-58ca0718-1cbf-4f2b-bfa1-d7c781586dc7 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:24 compute-0 ceph-mon[75015]: pgmap v1661: 321 pgs: 321 active+clean; 491 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 10 MiB/s wr, 272 op/s
Nov 25 08:38:24 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3885153772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.095 253542 INFO nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Creating config drive at /var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21/disk.config
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.100 253542 DEBUG oslo_concurrency.processutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu54iwwsp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:38:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/150943965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.249 253542 DEBUG oslo_concurrency.processutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu54iwwsp" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.274 253542 DEBUG nova.storage.rbd_utils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.278 253542 DEBUG oslo_concurrency.processutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21/disk.config 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.305 253542 DEBUG oslo_concurrency.processutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.313 253542 DEBUG nova.compute.provider_tree [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.330 253542 DEBUG nova.scheduler.client.report [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.354 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.355 253542 DEBUG nova.compute.manager [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.410 253542 DEBUG nova.compute.manager [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.411 253542 DEBUG nova.network.neutron [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.426 253542 INFO nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.440 253542 DEBUG nova.compute.manager [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.446 253542 DEBUG oslo_concurrency.processutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21/disk.config 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.447 253542 INFO nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Deleting local config drive /var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21/disk.config because it was imported into RBD.
Nov 25 08:38:24 compute-0 kernel: tap21a608d7-be: entered promiscuous mode
Nov 25 08:38:24 compute-0 NetworkManager[48915]: <info>  [1764059904.5194] manager: (tap21a608d7-be): new Tun device (/org/freedesktop/NetworkManager/Devices/351)
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.520 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.524 253542 DEBUG nova.compute.manager [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.525 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.526 253542 INFO nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Creating image(s)
Nov 25 08:38:24 compute-0 ovn_controller[152859]: 2025-11-25T08:38:24Z|00806|binding|INFO|Claiming lport 21a608d7-be38-4d88-902b-2124e5227ae5 for this chassis.
Nov 25 08:38:24 compute-0 ovn_controller[152859]: 2025-11-25T08:38:24Z|00807|binding|INFO|21a608d7-be38-4d88-902b-2124e5227ae5: Claiming fa:16:3e:c2:45:0c 10.100.0.13
Nov 25 08:38:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:24.537 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:45:0c 10.100.0.13'], port_security=['fa:16:3e:c2:45:0c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3bc210c6-9f67-440b-a11c-0b4e13e74a21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '42767d876b844fbd9b53953fb5f664b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '87f4d1a7-e392-4893-98e7-5d05afa4be77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=608490f2-11ad-4e35-9100-843accb3d76b, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=21a608d7-be38-4d88-902b-2124e5227ae5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:38:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:24.545 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 21a608d7-be38-4d88-902b-2124e5227ae5 in datapath 1a4161e2-3dc8-48ab-8204-aaba5cb02136 bound to our chassis
Nov 25 08:38:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:24.547 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1a4161e2-3dc8-48ab-8204-aaba5cb02136
Nov 25 08:38:24 compute-0 ovn_controller[152859]: 2025-11-25T08:38:24Z|00808|binding|INFO|Setting lport 21a608d7-be38-4d88-902b-2124e5227ae5 ovn-installed in OVS
Nov 25 08:38:24 compute-0 ovn_controller[152859]: 2025-11-25T08:38:24Z|00809|binding|INFO|Setting lport 21a608d7-be38-4d88-902b-2124e5227ae5 up in Southbound
Nov 25 08:38:24 compute-0 systemd-machined[215790]: New machine qemu-104-instance-00000055.
Nov 25 08:38:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:24.572 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[96b556db-b620-4d21-be6d-758048268941]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.573 253542 DEBUG nova.storage.rbd_utils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 8a68398d-9640-49e2-a049-3da4f7b371c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:24 compute-0 systemd[1]: Started Virtual Machine qemu-104-instance-00000055.
Nov 25 08:38:24 compute-0 systemd-udevd[336727]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:38:24 compute-0 NetworkManager[48915]: <info>  [1764059904.6044] device (tap21a608d7-be): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:38:24 compute-0 NetworkManager[48915]: <info>  [1764059904.6056] device (tap21a608d7-be): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:38:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:24.615 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c39d5afd-4f6a-4491-9a2f-56f12623d093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.619 253542 DEBUG nova.storage.rbd_utils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 8a68398d-9640-49e2-a049-3da4f7b371c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:24.621 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fcab1e8f-cc0c-47b1-b4f5-782ced69e4f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:24.655 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[09a429c2-64fa-45fb-9f1e-ed43295bde73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.658 253542 DEBUG nova.storage.rbd_utils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 8a68398d-9640-49e2-a049-3da4f7b371c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.666 253542 DEBUG oslo_concurrency.processutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:24.680 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79214c35-8755-4781-84bd-02b1b4098bce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1a4161e2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:2f:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525620, 'reachable_time': 42483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336774, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.701 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:24.707 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d678ec4e-b849-416d-8c17-9c1f17c48659]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1a4161e2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525631, 'tstamp': 525631}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336777, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1a4161e2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525634, 'tstamp': 525634}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336777, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:24.708 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a4161e2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.710 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.711 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:24.712 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a4161e2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:24.712 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:24.712 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1a4161e2-30, col_values=(('external_ids', {'iface-id': 'ee088c9f-327b-47d5-b296-dc11de2d7323'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:24.712 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.746 253542 DEBUG oslo_concurrency.processutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.747 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.748 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.749 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.774 253542 DEBUG nova.storage.rbd_utils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 8a68398d-9640-49e2-a049-3da4f7b371c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.780 253542 DEBUG oslo_concurrency.processutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8a68398d-9640-49e2-a049-3da4f7b371c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.961 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059889.9599354, a5ba6c2a-b5cd-4ced-b648-e54a941cda0d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.963 253542 INFO nova.compute.manager [-] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] VM Stopped (Lifecycle Event)
Nov 25 08:38:24 compute-0 nova_compute[253538]: 2025-11-25 08:38:24.992 253542 DEBUG nova.compute.manager [None req-59c23434-af51-4abf-a040-6b3b9cdc990d - - - - - -] [instance: a5ba6c2a-b5cd-4ced-b648-e54a941cda0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/150943965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.103 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059905.1030688, 3bc210c6-9f67-440b-a11c-0b4e13e74a21 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.104 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] VM Started (Lifecycle Event)
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.106 253542 DEBUG oslo_concurrency.processutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 8a68398d-9640-49e2-a049-3da4f7b371c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.134 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.164 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059905.1041217, 3bc210c6-9f67-440b-a11c-0b4e13e74a21 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.164 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] VM Paused (Lifecycle Event)
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.169 253542 DEBUG nova.storage.rbd_utils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] resizing rbd image 8a68398d-9640-49e2-a049-3da4f7b371c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.196 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.200 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.219 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.274 253542 DEBUG nova.objects.instance [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'migration_context' on Instance uuid 8a68398d-9640-49e2-a049-3da4f7b371c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.295 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.296 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Ensure instance console log exists: /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.296 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.297 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.297 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 511 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 9.1 MiB/s wr, 238 op/s
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.647 253542 DEBUG nova.network.neutron [req-a3273386-6e74-4634-b79d-a9f39dfef8f1 req-cf86d6ba-bf91-429d-bd31-d06e21923935 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Updated VIF entry in instance network info cache for port 21a608d7-be38-4d88-902b-2124e5227ae5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.647 253542 DEBUG nova.network.neutron [req-a3273386-6e74-4634-b79d-a9f39dfef8f1 req-cf86d6ba-bf91-429d-bd31-d06e21923935 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Updating instance_info_cache with network_info: [{"id": "21a608d7-be38-4d88-902b-2124e5227ae5", "address": "fa:16:3e:c2:45:0c", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a608d7-be", "ovs_interfaceid": "21a608d7-be38-4d88-902b-2124e5227ae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.667 253542 DEBUG nova.policy [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '24fa34332e6f4b628514969bbf76e94b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6851917992b149818e8b44146c66bfc3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:38:25 compute-0 nova_compute[253538]: 2025-11-25 08:38:25.669 253542 DEBUG oslo_concurrency.lockutils [req-a3273386-6e74-4634-b79d-a9f39dfef8f1 req-cf86d6ba-bf91-429d-bd31-d06e21923935 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3bc210c6-9f67-440b-a11c-0b4e13e74a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:26 compute-0 ceph-mon[75015]: pgmap v1662: 321 pgs: 321 active+clean; 511 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 9.1 MiB/s wr, 238 op/s
Nov 25 08:38:26 compute-0 nova_compute[253538]: 2025-11-25 08:38:26.572 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.005 253542 DEBUG nova.network.neutron [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Updating instance_info_cache with network_info: [{"id": "5375bd27-22cc-4b77-9aba-984375be602e", "address": "fa:16:3e:90:9c:37", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5375bd27-22", "ovs_interfaceid": "5375bd27-22cc-4b77-9aba-984375be602e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.025 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Releasing lock "refresh_cache-8120a4a8-c326-4f1b-94d5-2c1ffe663959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.026 253542 DEBUG nova.compute.manager [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Instance network_info: |[{"id": "5375bd27-22cc-4b77-9aba-984375be602e", "address": "fa:16:3e:90:9c:37", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5375bd27-22", "ovs_interfaceid": "5375bd27-22cc-4b77-9aba-984375be602e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.030 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Start _get_guest_xml network_info=[{"id": "5375bd27-22cc-4b77-9aba-984375be602e", "address": "fa:16:3e:90:9c:37", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5375bd27-22", "ovs_interfaceid": "5375bd27-22cc-4b77-9aba-984375be602e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.037 253542 WARNING nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.042 253542 DEBUG nova.virt.libvirt.host [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.043 253542 DEBUG nova.virt.libvirt.host [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.046 253542 DEBUG nova.virt.libvirt.host [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.046 253542 DEBUG nova.virt.libvirt.host [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.047 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.047 253542 DEBUG nova.virt.hardware [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.048 253542 DEBUG nova.virt.hardware [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.049 253542 DEBUG nova.virt.hardware [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.049 253542 DEBUG nova.virt.hardware [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.049 253542 DEBUG nova.virt.hardware [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.050 253542 DEBUG nova.virt.hardware [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.050 253542 DEBUG nova.virt.hardware [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.050 253542 DEBUG nova.virt.hardware [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.051 253542 DEBUG nova.virt.hardware [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.051 253542 DEBUG nova.virt.hardware [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.051 253542 DEBUG nova.virt.hardware [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.056 253542 DEBUG oslo_concurrency.processutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1663: 321 pgs: 321 active+clean; 525 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 9.0 MiB/s wr, 228 op/s
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.483 253542 DEBUG nova.network.neutron [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Successfully created port: 203c150c-9339-4520-8e52-01740854c5ef _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.495 253542 DEBUG nova.compute.manager [req-9cf518de-8a3f-4a97-b2e6-c175c7f56cd6 req-4b833021-84e8-48ff-9a6b-cfe9a6b31a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Received event network-vif-plugged-7b8f95d6-ffda-4c87-9539-bdd932e1dad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.496 253542 DEBUG oslo_concurrency.lockutils [req-9cf518de-8a3f-4a97-b2e6-c175c7f56cd6 req-4b833021-84e8-48ff-9a6b-cfe9a6b31a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.496 253542 DEBUG oslo_concurrency.lockutils [req-9cf518de-8a3f-4a97-b2e6-c175c7f56cd6 req-4b833021-84e8-48ff-9a6b-cfe9a6b31a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.496 253542 DEBUG oslo_concurrency.lockutils [req-9cf518de-8a3f-4a97-b2e6-c175c7f56cd6 req-4b833021-84e8-48ff-9a6b-cfe9a6b31a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.496 253542 DEBUG nova.compute.manager [req-9cf518de-8a3f-4a97-b2e6-c175c7f56cd6 req-4b833021-84e8-48ff-9a6b-cfe9a6b31a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] No waiting events found dispatching network-vif-plugged-7b8f95d6-ffda-4c87-9539-bdd932e1dad5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.496 253542 WARNING nova.compute.manager [req-9cf518de-8a3f-4a97-b2e6-c175c7f56cd6 req-4b833021-84e8-48ff-9a6b-cfe9a6b31a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Received unexpected event network-vif-plugged-7b8f95d6-ffda-4c87-9539-bdd932e1dad5 for instance with vm_state active and task_state None.
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.497 253542 DEBUG nova.compute.manager [req-9cf518de-8a3f-4a97-b2e6-c175c7f56cd6 req-4b833021-84e8-48ff-9a6b-cfe9a6b31a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Received event network-changed-5375bd27-22cc-4b77-9aba-984375be602e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.497 253542 DEBUG nova.compute.manager [req-9cf518de-8a3f-4a97-b2e6-c175c7f56cd6 req-4b833021-84e8-48ff-9a6b-cfe9a6b31a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Refreshing instance network info cache due to event network-changed-5375bd27-22cc-4b77-9aba-984375be602e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.497 253542 DEBUG oslo_concurrency.lockutils [req-9cf518de-8a3f-4a97-b2e6-c175c7f56cd6 req-4b833021-84e8-48ff-9a6b-cfe9a6b31a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-8120a4a8-c326-4f1b-94d5-2c1ffe663959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.497 253542 DEBUG oslo_concurrency.lockutils [req-9cf518de-8a3f-4a97-b2e6-c175c7f56cd6 req-4b833021-84e8-48ff-9a6b-cfe9a6b31a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-8120a4a8-c326-4f1b-94d5-2c1ffe663959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.497 253542 DEBUG nova.network.neutron [req-9cf518de-8a3f-4a97-b2e6-c175c7f56cd6 req-4b833021-84e8-48ff-9a6b-cfe9a6b31a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Refreshing network info cache for port 5375bd27-22cc-4b77-9aba-984375be602e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:38:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:27 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/113041725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.524 253542 DEBUG oslo_concurrency.processutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.543 253542 DEBUG nova.storage.rbd_utils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 8120a4a8-c326-4f1b-94d5-2c1ffe663959_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.546 253542 DEBUG oslo_concurrency.processutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.578 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.603 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.604 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.604 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.605 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:38:27 compute-0 nova_compute[253538]: 2025-11-25 08:38:27.605 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3946841487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.028 253542 DEBUG oslo_concurrency.processutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.032 253542 DEBUG nova.virt.libvirt.vif [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:38:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1385124579',display_name='tempest-ServersTestJSON-server-1385124579',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1385124579',id=86,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF77JveL0BG5wdUPTW6PG1Cp7hECH5Gl79egfwpLvdbjEuG1YZ3Vj0lR/X2i629CgVY6503s+mx9fQVN724QZuHLRyYyBnIZyO1ID5iRCjMXo62SwPuSjzUopvGv9d+KWQ==',key_name='tempest-key-1758938090',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-xusmvxml',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:38:19Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=8120a4a8-c326-4f1b-94d5-2c1ffe663959,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5375bd27-22cc-4b77-9aba-984375be602e", "address": "fa:16:3e:90:9c:37", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5375bd27-22", "ovs_interfaceid": "5375bd27-22cc-4b77-9aba-984375be602e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.033 253542 DEBUG nova.network.os_vif_util [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "5375bd27-22cc-4b77-9aba-984375be602e", "address": "fa:16:3e:90:9c:37", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5375bd27-22", "ovs_interfaceid": "5375bd27-22cc-4b77-9aba-984375be602e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.035 253542 DEBUG nova.network.os_vif_util [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:9c:37,bridge_name='br-int',has_traffic_filtering=True,id=5375bd27-22cc-4b77-9aba-984375be602e,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5375bd27-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.038 253542 DEBUG nova.objects.instance [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8120a4a8-c326-4f1b-94d5-2c1ffe663959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.055 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:38:28 compute-0 nova_compute[253538]:   <uuid>8120a4a8-c326-4f1b-94d5-2c1ffe663959</uuid>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   <name>instance-00000056</name>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersTestJSON-server-1385124579</nova:name>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:38:27</nova:creationTime>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:38:28 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:38:28 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:38:28 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:38:28 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:38:28 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:38:28 compute-0 nova_compute[253538]:         <nova:user uuid="fdcb005cc49a4dfa82152f2c0817cc94">tempest-ServersTestJSON-1426188226-project-member</nova:user>
Nov 25 08:38:28 compute-0 nova_compute[253538]:         <nova:project uuid="b730f086c4b94185afab5e10fa2e8181">tempest-ServersTestJSON-1426188226</nova:project>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:38:28 compute-0 nova_compute[253538]:         <nova:port uuid="5375bd27-22cc-4b77-9aba-984375be602e">
Nov 25 08:38:28 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <system>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <entry name="serial">8120a4a8-c326-4f1b-94d5-2c1ffe663959</entry>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <entry name="uuid">8120a4a8-c326-4f1b-94d5-2c1ffe663959</entry>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     </system>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   <os>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   </os>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   <features>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   </features>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/8120a4a8-c326-4f1b-94d5-2c1ffe663959_disk">
Nov 25 08:38:28 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:28 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/8120a4a8-c326-4f1b-94d5-2c1ffe663959_disk.config">
Nov 25 08:38:28 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:28 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:90:9c:37"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <target dev="tap5375bd27-22"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/8120a4a8-c326-4f1b-94d5-2c1ffe663959/console.log" append="off"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <video>
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     </video>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:38:28 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:38:28 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:38:28 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:38:28 compute-0 nova_compute[253538]: </domain>
Nov 25 08:38:28 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.056 253542 DEBUG nova.compute.manager [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Preparing to wait for external event network-vif-plugged-5375bd27-22cc-4b77-9aba-984375be602e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.057 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.057 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.058 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.059 253542 DEBUG nova.virt.libvirt.vif [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:38:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1385124579',display_name='tempest-ServersTestJSON-server-1385124579',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1385124579',id=86,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF77JveL0BG5wdUPTW6PG1Cp7hECH5Gl79egfwpLvdbjEuG1YZ3Vj0lR/X2i629CgVY6503s+mx9fQVN724QZuHLRyYyBnIZyO1ID5iRCjMXo62SwPuSjzUopvGv9d+KWQ==',key_name='tempest-key-1758938090',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-xusmvxml',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:38:19Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=8120a4a8-c326-4f1b-94d5-2c1ffe663959,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5375bd27-22cc-4b77-9aba-984375be602e", "address": "fa:16:3e:90:9c:37", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5375bd27-22", "ovs_interfaceid": "5375bd27-22cc-4b77-9aba-984375be602e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.059 253542 DEBUG nova.network.os_vif_util [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "5375bd27-22cc-4b77-9aba-984375be602e", "address": "fa:16:3e:90:9c:37", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5375bd27-22", "ovs_interfaceid": "5375bd27-22cc-4b77-9aba-984375be602e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.060 253542 DEBUG nova.network.os_vif_util [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:9c:37,bridge_name='br-int',has_traffic_filtering=True,id=5375bd27-22cc-4b77-9aba-984375be602e,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5375bd27-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.061 253542 DEBUG os_vif [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:9c:37,bridge_name='br-int',has_traffic_filtering=True,id=5375bd27-22cc-4b77-9aba-984375be602e,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5375bd27-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.062 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.063 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.064 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.068 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.069 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5375bd27-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.070 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5375bd27-22, col_values=(('external_ids', {'iface-id': '5375bd27-22cc-4b77-9aba-984375be602e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:9c:37', 'vm-uuid': '8120a4a8-c326-4f1b-94d5-2c1ffe663959'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:38:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3879212926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:28 compute-0 NetworkManager[48915]: <info>  [1764059908.0734] manager: (tap5375bd27-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.074 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.078 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.079 253542 INFO os_vif [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:9c:37,bridge_name='br-int',has_traffic_filtering=True,id=5375bd27-22cc-4b77-9aba-984375be602e,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5375bd27-22')
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.094 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.153 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.154 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.155 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No VIF found with MAC fa:16:3e:90:9c:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.155 253542 INFO nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Using config drive
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.190 253542 DEBUG nova.storage.rbd_utils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 8120a4a8-c326-4f1b-94d5-2c1ffe663959_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.270 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.270 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.273 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.273 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.276 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.276 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.276 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.282 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.282 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.285 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.285 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.288 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.288 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:38:28 compute-0 ceph-mon[75015]: pgmap v1663: 321 pgs: 321 active+clean; 525 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 9.0 MiB/s wr, 228 op/s
Nov 25 08:38:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/113041725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3946841487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3879212926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.500 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.501 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3144MB free_disk=59.759769439697266GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.501 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.502 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.611 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0a240e53-cc4c-463e-9601-41d687d64349 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.612 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 40912950-fedc-405c-bc49-c4a757a422dc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.612 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0e855a86-52f7-47bd-aee9-e88449169aa1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.612 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.612 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 3bc210c6-9f67-440b-a11c-0b4e13e74a21 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.612 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 8120a4a8-c326-4f1b-94d5-2c1ffe663959 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.612 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 8a68398d-9640-49e2-a049-3da4f7b371c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.613 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.613 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1408MB phys_disk=59GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.771 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.927 253542 INFO nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Creating config drive at /var/lib/nova/instances/8120a4a8-c326-4f1b-94d5-2c1ffe663959/disk.config
Nov 25 08:38:28 compute-0 nova_compute[253538]: 2025-11-25 08:38:28.937 253542 DEBUG oslo_concurrency.processutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8120a4a8-c326-4f1b-94d5-2c1ffe663959/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpji85724b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:38:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/621863183' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:38:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:38:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/621863183' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:38:29 compute-0 nova_compute[253538]: 2025-11-25 08:38:29.092 253542 DEBUG oslo_concurrency.processutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8120a4a8-c326-4f1b-94d5-2c1ffe663959/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpji85724b" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:29 compute-0 nova_compute[253538]: 2025-11-25 08:38:29.142 253542 DEBUG nova.storage.rbd_utils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 8120a4a8-c326-4f1b-94d5-2c1ffe663959_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:29 compute-0 nova_compute[253538]: 2025-11-25 08:38:29.150 253542 DEBUG oslo_concurrency.processutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8120a4a8-c326-4f1b-94d5-2c1ffe663959/disk.config 8120a4a8-c326-4f1b-94d5-2c1ffe663959_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:38:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2214768680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:29 compute-0 nova_compute[253538]: 2025-11-25 08:38:29.211 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:29 compute-0 nova_compute[253538]: 2025-11-25 08:38:29.217 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:38:29 compute-0 nova_compute[253538]: 2025-11-25 08:38:29.234 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:38:29 compute-0 nova_compute[253538]: 2025-11-25 08:38:29.254 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:38:29 compute-0 nova_compute[253538]: 2025-11-25 08:38:29.256 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 542 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 8.6 MiB/s wr, 237 op/s
Nov 25 08:38:29 compute-0 nova_compute[253538]: 2025-11-25 08:38:29.389 253542 DEBUG oslo_concurrency.processutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8120a4a8-c326-4f1b-94d5-2c1ffe663959/disk.config 8120a4a8-c326-4f1b-94d5-2c1ffe663959_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:29 compute-0 nova_compute[253538]: 2025-11-25 08:38:29.390 253542 INFO nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Deleting local config drive /var/lib/nova/instances/8120a4a8-c326-4f1b-94d5-2c1ffe663959/disk.config because it was imported into RBD.
Nov 25 08:38:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/621863183' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:38:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/621863183' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:38:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2214768680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:29 compute-0 kernel: tap5375bd27-22: entered promiscuous mode
Nov 25 08:38:29 compute-0 NetworkManager[48915]: <info>  [1764059909.4527] manager: (tap5375bd27-22): new Tun device (/org/freedesktop/NetworkManager/Devices/353)
Nov 25 08:38:29 compute-0 ovn_controller[152859]: 2025-11-25T08:38:29Z|00810|binding|INFO|Claiming lport 5375bd27-22cc-4b77-9aba-984375be602e for this chassis.
Nov 25 08:38:29 compute-0 ovn_controller[152859]: 2025-11-25T08:38:29Z|00811|binding|INFO|5375bd27-22cc-4b77-9aba-984375be602e: Claiming fa:16:3e:90:9c:37 10.100.0.7
Nov 25 08:38:29 compute-0 nova_compute[253538]: 2025-11-25 08:38:29.458 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:29.474 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:9c:37 10.100.0.7'], port_security=['fa:16:3e:90:9c:37 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8120a4a8-c326-4f1b-94d5-2c1ffe663959', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92e26514-5b15-410b-8885-6773bc03c4ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b730f086c4b94185afab5e10fa2e8181', 'neutron:revision_number': '2', 'neutron:security_group_ids': '420b1f20-9bb9-442a-978c-444ed9d0cb04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ae0a3b6-6c0f-4ef3-ae20-c7b5b6ede8ac, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5375bd27-22cc-4b77-9aba-984375be602e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:38:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:29.475 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5375bd27-22cc-4b77-9aba-984375be602e in datapath 92e26514-5b15-410b-8885-6773bc03c4ce bound to our chassis
Nov 25 08:38:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:29.477 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92e26514-5b15-410b-8885-6773bc03c4ce
Nov 25 08:38:29 compute-0 ovn_controller[152859]: 2025-11-25T08:38:29Z|00812|binding|INFO|Setting lport 5375bd27-22cc-4b77-9aba-984375be602e ovn-installed in OVS
Nov 25 08:38:29 compute-0 ovn_controller[152859]: 2025-11-25T08:38:29Z|00813|binding|INFO|Setting lport 5375bd27-22cc-4b77-9aba-984375be602e up in Southbound
Nov 25 08:38:29 compute-0 nova_compute[253538]: 2025-11-25 08:38:29.482 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:29 compute-0 systemd-udevd[337109]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:38:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:29.499 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f4651b-dcc2-4d38-a56f-fe076b68bb38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:29 compute-0 NetworkManager[48915]: <info>  [1764059909.5054] device (tap5375bd27-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:38:29 compute-0 NetworkManager[48915]: <info>  [1764059909.5075] device (tap5375bd27-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:38:29 compute-0 systemd-machined[215790]: New machine qemu-105-instance-00000056.
Nov 25 08:38:29 compute-0 systemd[1]: Started Virtual Machine qemu-105-instance-00000056.
Nov 25 08:38:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:29.542 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8d629a1a-ce8d-4e85-b868-2f333da892da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:29.549 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b541ab-a420-4d1e-9bc0-409bc4d0be38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:29.601 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[da4e7cef-36bd-4cbb-a26f-2528d9979c24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:29.626 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8a612622-dd48-4e84-a91c-6fc0bf21fdb4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92e26514-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:84:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522366, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337125, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:29.655 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[90bf3266-f252-4fc7-b03b-abfbf587da85]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522379, 'tstamp': 522379}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337126, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522383, 'tstamp': 522383}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337126, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:29.658 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92e26514-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:29 compute-0 nova_compute[253538]: 2025-11-25 08:38:29.660 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:29 compute-0 nova_compute[253538]: 2025-11-25 08:38:29.662 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:29.663 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92e26514-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:29.663 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:29.665 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92e26514-50, col_values=(('external_ids', {'iface-id': '185991f0-acab-400e-baff-76794035d44a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:29.665 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.233 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.246 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059910.246228, 8120a4a8-c326-4f1b-94d5-2c1ffe663959 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.247 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] VM Started (Lifecycle Event)
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.276 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.280 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059910.2490287, 8120a4a8-c326-4f1b-94d5-2c1ffe663959 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.280 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] VM Paused (Lifecycle Event)
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.297 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.305 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.328 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:38:30 compute-0 ceph-mon[75015]: pgmap v1664: 321 pgs: 321 active+clean; 542 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 8.6 MiB/s wr, 237 op/s
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.501 253542 DEBUG nova.network.neutron [req-9cf518de-8a3f-4a97-b2e6-c175c7f56cd6 req-4b833021-84e8-48ff-9a6b-cfe9a6b31a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Updated VIF entry in instance network info cache for port 5375bd27-22cc-4b77-9aba-984375be602e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.502 253542 DEBUG nova.network.neutron [req-9cf518de-8a3f-4a97-b2e6-c175c7f56cd6 req-4b833021-84e8-48ff-9a6b-cfe9a6b31a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Updating instance_info_cache with network_info: [{"id": "5375bd27-22cc-4b77-9aba-984375be602e", "address": "fa:16:3e:90:9c:37", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5375bd27-22", "ovs_interfaceid": "5375bd27-22cc-4b77-9aba-984375be602e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.511 253542 DEBUG nova.network.neutron [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Successfully updated port: 203c150c-9339-4520-8e52-01740854c5ef _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.533 253542 DEBUG oslo_concurrency.lockutils [req-9cf518de-8a3f-4a97-b2e6-c175c7f56cd6 req-4b833021-84e8-48ff-9a6b-cfe9a6b31a13 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-8120a4a8-c326-4f1b-94d5-2c1ffe663959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.535 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "refresh_cache-8a68398d-9640-49e2-a049-3da4f7b371c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.536 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquired lock "refresh_cache-8a68398d-9640-49e2-a049-3da4f7b371c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.536 253542 DEBUG nova.network.neutron [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.549 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.555 253542 DEBUG nova.compute.manager [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.556 253542 DEBUG oslo_concurrency.lockutils [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.556 253542 DEBUG oslo_concurrency.lockutils [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.556 253542 DEBUG oslo_concurrency.lockutils [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.557 253542 DEBUG nova.compute.manager [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Processing event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.557 253542 DEBUG nova.compute.manager [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received event network-changed-8f28ea33-80c4-41cb-b191-a1b619b14515 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.557 253542 DEBUG nova.compute.manager [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Refreshing instance network info cache due to event network-changed-8f28ea33-80c4-41cb-b191-a1b619b14515. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.557 253542 DEBUG oslo_concurrency.lockutils [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.558 253542 DEBUG oslo_concurrency.lockutils [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.558 253542 DEBUG nova.network.neutron [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Refreshing network info cache for port 8f28ea33-80c4-41cb-b191-a1b619b14515 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.560 253542 DEBUG nova.compute.manager [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.565 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.567 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059910.5667896, 3bc210c6-9f67-440b-a11c-0b4e13e74a21 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.567 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] VM Resumed (Lifecycle Event)
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.578 253542 INFO nova.virt.libvirt.driver [-] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Instance spawned successfully.
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.578 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.595 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.602 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.607 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.608 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.608 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.609 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.609 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.609 253542 DEBUG nova.virt.libvirt.driver [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.633 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.668 253542 INFO nova.compute.manager [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Took 13.15 seconds to spawn the instance on the hypervisor.
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.669 253542 DEBUG nova.compute.manager [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.749 253542 INFO nova.compute.manager [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Took 15.39 seconds to build instance.
Nov 25 08:38:30 compute-0 nova_compute[253538]: 2025-11-25 08:38:30.773 253542 DEBUG oslo_concurrency.lockutils [None req-e28a2285-4983-4b70-9e51-ba7fdfa2900f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 558 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.5 MiB/s wr, 258 op/s
Nov 25 08:38:31 compute-0 nova_compute[253538]: 2025-11-25 08:38:31.520 253542 DEBUG nova.network.neutron [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:38:31 compute-0 nova_compute[253538]: 2025-11-25 08:38:31.574 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:38:32 compute-0 ceph-mon[75015]: pgmap v1665: 321 pgs: 321 active+clean; 558 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.5 MiB/s wr, 258 op/s
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.131 253542 DEBUG nova.network.neutron [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Updated VIF entry in instance network info cache for port 8f28ea33-80c4-41cb-b191-a1b619b14515. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.132 253542 DEBUG nova.network.neutron [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Updating instance_info_cache with network_info: [{"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.136 253542 DEBUG nova.network.neutron [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Updating instance_info_cache with network_info: [{"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.154 253542 DEBUG oslo_concurrency.lockutils [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.155 253542 DEBUG nova.compute.manager [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.156 253542 DEBUG oslo_concurrency.lockutils [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.157 253542 DEBUG oslo_concurrency.lockutils [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.157 253542 DEBUG oslo_concurrency.lockutils [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.158 253542 DEBUG nova.compute.manager [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] No waiting events found dispatching network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.159 253542 WARNING nova.compute.manager [req-bd6464c0-0e0c-499a-b260-da7e79a14ce7 req-fc231ead-21f8-42ce-9a18-7d27b74a7bf9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received unexpected event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 for instance with vm_state building and task_state spawning.
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.163 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Releasing lock "refresh_cache-8a68398d-9640-49e2-a049-3da4f7b371c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.163 253542 DEBUG nova.compute.manager [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Instance network_info: |[{"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.170 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Start _get_guest_xml network_info=[{"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.176 253542 WARNING nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.182 253542 DEBUG nova.virt.libvirt.host [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.184 253542 DEBUG nova.virt.libvirt.host [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.188 253542 DEBUG nova.virt.libvirt.host [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.189 253542 DEBUG nova.virt.libvirt.host [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.189 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.190 253542 DEBUG nova.virt.hardware [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.191 253542 DEBUG nova.virt.hardware [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.192 253542 DEBUG nova.virt.hardware [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.193 253542 DEBUG nova.virt.hardware [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.193 253542 DEBUG nova.virt.hardware [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.194 253542 DEBUG nova.virt.hardware [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.194 253542 DEBUG nova.virt.hardware [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.195 253542 DEBUG nova.virt.hardware [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.196 253542 DEBUG nova.virt.hardware [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.196 253542 DEBUG nova.virt.hardware [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.197 253542 DEBUG nova.virt.hardware [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.203 253542 DEBUG oslo_concurrency.processutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 558 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.7 MiB/s wr, 267 op/s
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.637 253542 DEBUG nova.compute.manager [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received event network-changed-8f28ea33-80c4-41cb-b191-a1b619b14515 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.638 253542 DEBUG nova.compute.manager [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Refreshing instance network info cache due to event network-changed-8f28ea33-80c4-41cb-b191-a1b619b14515. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.638 253542 DEBUG oslo_concurrency.lockutils [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.639 253542 DEBUG oslo_concurrency.lockutils [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.639 253542 DEBUG nova.network.neutron [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Refreshing network info cache for port 8f28ea33-80c4-41cb-b191-a1b619b14515 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:38:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4269928275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.755 253542 DEBUG oslo_concurrency.processutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.779 253542 DEBUG nova.storage.rbd_utils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 8a68398d-9640-49e2-a049-3da4f7b371c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.783 253542 DEBUG oslo_concurrency.processutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.842 253542 INFO nova.compute.manager [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Rescuing
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.843 253542 DEBUG oslo_concurrency.lockutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "refresh_cache-3bc210c6-9f67-440b-a11c-0b4e13e74a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.843 253542 DEBUG oslo_concurrency.lockutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquired lock "refresh_cache-3bc210c6-9f67-440b-a11c-0b4e13e74a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:33 compute-0 nova_compute[253538]: 2025-11-25 08:38:33.843 253542 DEBUG nova.network.neutron [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:38:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2400467129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.245 253542 DEBUG oslo_concurrency.processutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.246 253542 DEBUG nova.virt.libvirt.vif [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:38:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1159083655',display_name='tempest-tempest.common.compute-instance-1159083655',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1159083655',id=87,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6851917992b149818e8b44146c66bfc3',ramdisk_id='',reservation_id='r-u9oo6n6b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-678529119',owner_user_name='tempest-ServerAction
sTestOtherA-678529119-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:38:24Z,user_data=None,user_id='24fa34332e6f4b628514969bbf76e94b',uuid=8a68398d-9640-49e2-a049-3da4f7b371c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.247 253542 DEBUG nova.network.os_vif_util [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converting VIF {"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.247 253542 DEBUG nova.network.os_vif_util [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:c9:50,bridge_name='br-int',has_traffic_filtering=True,id=203c150c-9339-4520-8e52-01740854c5ef,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap203c150c-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.248 253542 DEBUG nova.objects.instance [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8a68398d-9640-49e2-a049-3da4f7b371c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.267 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:38:34 compute-0 nova_compute[253538]:   <uuid>8a68398d-9640-49e2-a049-3da4f7b371c5</uuid>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   <name>instance-00000057</name>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <nova:name>tempest-tempest.common.compute-instance-1159083655</nova:name>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:38:33</nova:creationTime>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:38:34 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:38:34 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:38:34 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:38:34 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:38:34 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:38:34 compute-0 nova_compute[253538]:         <nova:user uuid="24fa34332e6f4b628514969bbf76e94b">tempest-ServerActionsTestOtherA-678529119-project-member</nova:user>
Nov 25 08:38:34 compute-0 nova_compute[253538]:         <nova:project uuid="6851917992b149818e8b44146c66bfc3">tempest-ServerActionsTestOtherA-678529119</nova:project>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:38:34 compute-0 nova_compute[253538]:         <nova:port uuid="203c150c-9339-4520-8e52-01740854c5ef">
Nov 25 08:38:34 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <system>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <entry name="serial">8a68398d-9640-49e2-a049-3da4f7b371c5</entry>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <entry name="uuid">8a68398d-9640-49e2-a049-3da4f7b371c5</entry>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     </system>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   <os>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   </os>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   <features>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   </features>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/8a68398d-9640-49e2-a049-3da4f7b371c5_disk">
Nov 25 08:38:34 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:34 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/8a68398d-9640-49e2-a049-3da4f7b371c5_disk.config">
Nov 25 08:38:34 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:34 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:30:c9:50"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <target dev="tap203c150c-93"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5/console.log" append="off"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <video>
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     </video>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:38:34 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:38:34 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:38:34 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:38:34 compute-0 nova_compute[253538]: </domain>
Nov 25 08:38:34 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.267 253542 DEBUG nova.compute.manager [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Preparing to wait for external event network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.267 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.268 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.268 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.268 253542 DEBUG nova.virt.libvirt.vif [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:38:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1159083655',display_name='tempest-tempest.common.compute-instance-1159083655',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1159083655',id=87,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6851917992b149818e8b44146c66bfc3',ramdisk_id='',reservation_id='r-u9oo6n6b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-678529119',owner_user_name='tempest-Se
rverActionsTestOtherA-678529119-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:38:24Z,user_data=None,user_id='24fa34332e6f4b628514969bbf76e94b',uuid=8a68398d-9640-49e2-a049-3da4f7b371c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.269 253542 DEBUG nova.network.os_vif_util [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converting VIF {"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.269 253542 DEBUG nova.network.os_vif_util [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:c9:50,bridge_name='br-int',has_traffic_filtering=True,id=203c150c-9339-4520-8e52-01740854c5ef,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap203c150c-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.269 253542 DEBUG os_vif [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:c9:50,bridge_name='br-int',has_traffic_filtering=True,id=203c150c-9339-4520-8e52-01740854c5ef,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap203c150c-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.270 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.270 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.271 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.274 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.274 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap203c150c-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.274 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap203c150c-93, col_values=(('external_ids', {'iface-id': '203c150c-9339-4520-8e52-01740854c5ef', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:c9:50', 'vm-uuid': '8a68398d-9640-49e2-a049-3da4f7b371c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.275 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:34 compute-0 NetworkManager[48915]: <info>  [1764059914.2767] manager: (tap203c150c-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.277 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.281 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.282 253542 INFO os_vif [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:c9:50,bridge_name='br-int',has_traffic_filtering=True,id=203c150c-9339-4520-8e52-01740854c5ef,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap203c150c-93')
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.325 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.325 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.326 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] No VIF found with MAC fa:16:3e:30:c9:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.326 253542 INFO nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Using config drive
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.353 253542 DEBUG nova.storage.rbd_utils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 8a68398d-9640-49e2-a049-3da4f7b371c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:34 compute-0 ceph-mon[75015]: pgmap v1666: 321 pgs: 321 active+clean; 558 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.7 MiB/s wr, 267 op/s
Nov 25 08:38:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4269928275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2400467129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.640 253542 INFO nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Creating config drive at /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5/disk.config
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.649 253542 DEBUG oslo_concurrency.processutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6c5a12jw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.821 253542 DEBUG oslo_concurrency.processutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6c5a12jw" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.859 253542 DEBUG nova.storage.rbd_utils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 8a68398d-9640-49e2-a049-3da4f7b371c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:34 compute-0 nova_compute[253538]: 2025-11-25 08:38:34.864 253542 DEBUG oslo_concurrency.processutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5/disk.config 8a68398d-9640-49e2-a049-3da4f7b371c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.032 253542 DEBUG oslo_concurrency.processutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5/disk.config 8a68398d-9640-49e2-a049-3da4f7b371c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.033 253542 INFO nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Deleting local config drive /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5/disk.config because it was imported into RBD.
Nov 25 08:38:35 compute-0 kernel: tap203c150c-93: entered promiscuous mode
Nov 25 08:38:35 compute-0 NetworkManager[48915]: <info>  [1764059915.0934] manager: (tap203c150c-93): new Tun device (/org/freedesktop/NetworkManager/Devices/355)
Nov 25 08:38:35 compute-0 ovn_controller[152859]: 2025-11-25T08:38:35Z|00814|binding|INFO|Claiming lport 203c150c-9339-4520-8e52-01740854c5ef for this chassis.
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.096 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:35 compute-0 ovn_controller[152859]: 2025-11-25T08:38:35Z|00815|binding|INFO|203c150c-9339-4520-8e52-01740854c5ef: Claiming fa:16:3e:30:c9:50 10.100.0.11
Nov 25 08:38:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:35.108 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:c9:50 10.100.0.11'], port_security=['fa:16:3e:30:c9:50 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8a68398d-9640-49e2-a049-3da4f7b371c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b676104-a53a-419a-a348-631c409e45c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6851917992b149818e8b44146c66bfc3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee3f370c-3523-4fc9-bede-12723b8659c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=062facaf-eaf1-4c53-a009-db8e8d32ad6e, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=203c150c-9339-4520-8e52-01740854c5ef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:38:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:35.109 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 203c150c-9339-4520-8e52-01740854c5ef in datapath 2b676104-a53a-419a-a348-631c409e45c0 bound to our chassis
Nov 25 08:38:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:35.112 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b676104-a53a-419a-a348-631c409e45c0
Nov 25 08:38:35 compute-0 ovn_controller[152859]: 2025-11-25T08:38:35Z|00816|binding|INFO|Setting lport 203c150c-9339-4520-8e52-01740854c5ef ovn-installed in OVS
Nov 25 08:38:35 compute-0 ovn_controller[152859]: 2025-11-25T08:38:35Z|00817|binding|INFO|Setting lport 203c150c-9339-4520-8e52-01740854c5ef up in Southbound
Nov 25 08:38:35 compute-0 systemd-udevd[337305]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.137 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.145 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:35.146 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b89ce75f-de18-4512-93cc-f5837f9119d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:35 compute-0 NetworkManager[48915]: <info>  [1764059915.1539] device (tap203c150c-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:38:35 compute-0 NetworkManager[48915]: <info>  [1764059915.1548] device (tap203c150c-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:38:35 compute-0 systemd-machined[215790]: New machine qemu-106-instance-00000057.
Nov 25 08:38:35 compute-0 systemd[1]: Started Virtual Machine qemu-106-instance-00000057.
Nov 25 08:38:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:35.183 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d90c27ac-c166-4571-b73b-bcb95f63cfe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:35.188 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a278fd8c-be08-48e6-8e5e-91b4891ac175]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:35.221 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bd41d1b4-0a30-4df1-9bfd-b8c4d7376dbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:35.239 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b77ce8-74e4-4bf6-a693-f26ce7ed3d2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b676104-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:55:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522998, 'reachable_time': 29715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337319, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:35.257 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4f740fd5-3729-43b5-af31-0cb758943c4e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2b676104-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523012, 'tstamp': 523012}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337320, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2b676104-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523015, 'tstamp': 523015}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337320, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:35.259 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b676104-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.267 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.268 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:35.268 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b676104-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:35.268 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:35.268 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b676104-a0, col_values=(('external_ids', {'iface-id': 'a70ff8dd-5248-427b-8c9b-80eee3a671f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:35.269 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 558 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 2.5 MiB/s wr, 280 op/s
Nov 25 08:38:35 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.898 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059915.8985364, 8a68398d-9640-49e2-a049-3da4f7b371c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.899 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] VM Started (Lifecycle Event)
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.922 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.926 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059915.9042764, 8a68398d-9640-49e2-a049-3da4f7b371c5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.927 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] VM Paused (Lifecycle Event)
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.947 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.951 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.958 253542 DEBUG nova.network.neutron [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Updated VIF entry in instance network info cache for port 8f28ea33-80c4-41cb-b191-a1b619b14515. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.959 253542 DEBUG nova.network.neutron [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Updating instance_info_cache with network_info: [{"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.972 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.975 253542 DEBUG oslo_concurrency.lockutils [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.976 253542 DEBUG nova.compute.manager [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received event network-changed-203c150c-9339-4520-8e52-01740854c5ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.976 253542 DEBUG nova.compute.manager [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Refreshing instance network info cache due to event network-changed-203c150c-9339-4520-8e52-01740854c5ef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.977 253542 DEBUG oslo_concurrency.lockutils [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-8a68398d-9640-49e2-a049-3da4f7b371c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.978 253542 DEBUG oslo_concurrency.lockutils [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-8a68398d-9640-49e2-a049-3da4f7b371c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:35 compute-0 nova_compute[253538]: 2025-11-25 08:38:35.979 253542 DEBUG nova.network.neutron [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Refreshing network info cache for port 203c150c-9339-4520-8e52-01740854c5ef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.005 253542 DEBUG nova.network.neutron [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Updating instance_info_cache with network_info: [{"id": "21a608d7-be38-4d88-902b-2124e5227ae5", "address": "fa:16:3e:c2:45:0c", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a608d7-be", "ovs_interfaceid": "21a608d7-be38-4d88-902b-2124e5227ae5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.030 253542 DEBUG oslo_concurrency.lockutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Releasing lock "refresh_cache-3bc210c6-9f67-440b-a11c-0b4e13e74a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:36 compute-0 ceph-mon[75015]: pgmap v1667: 321 pgs: 321 active+clean; 558 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 2.5 MiB/s wr, 280 op/s
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.478 253542 DEBUG nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.577 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.736 253542 DEBUG nova.compute.manager [req-625ca01f-0d42-4da6-b9f4-3dfa91fb9766 req-3fb7b920-bdba-4fcf-a7b2-381f9d8b5ca0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received event network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.736 253542 DEBUG oslo_concurrency.lockutils [req-625ca01f-0d42-4da6-b9f4-3dfa91fb9766 req-3fb7b920-bdba-4fcf-a7b2-381f9d8b5ca0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.737 253542 DEBUG oslo_concurrency.lockutils [req-625ca01f-0d42-4da6-b9f4-3dfa91fb9766 req-3fb7b920-bdba-4fcf-a7b2-381f9d8b5ca0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.737 253542 DEBUG oslo_concurrency.lockutils [req-625ca01f-0d42-4da6-b9f4-3dfa91fb9766 req-3fb7b920-bdba-4fcf-a7b2-381f9d8b5ca0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.738 253542 DEBUG nova.compute.manager [req-625ca01f-0d42-4da6-b9f4-3dfa91fb9766 req-3fb7b920-bdba-4fcf-a7b2-381f9d8b5ca0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Processing event network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.739 253542 DEBUG nova.compute.manager [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.751 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059916.7423882, 8a68398d-9640-49e2-a049-3da4f7b371c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.753 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] VM Resumed (Lifecycle Event)
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.757 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.768 253542 INFO nova.virt.libvirt.driver [-] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Instance spawned successfully.
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.769 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.782 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.790 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.794 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.795 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.795 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.796 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.797 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.797 253542 DEBUG nova.virt.libvirt.driver [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.818 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.843 253542 INFO nova.compute.manager [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Took 12.32 seconds to spawn the instance on the hypervisor.
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.844 253542 DEBUG nova.compute.manager [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.921 253542 INFO nova.compute.manager [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Took 13.42 seconds to build instance.
Nov 25 08:38:36 compute-0 nova_compute[253538]: 2025-11-25 08:38:36.937 253542 DEBUG oslo_concurrency.lockutils [None req-e6c3ec85-af37-4027-bb4e-447c38f30570 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1668: 321 pgs: 321 active+clean; 565 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 2.3 MiB/s wr, 257 op/s
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.731 253542 DEBUG nova.network.neutron [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Updated VIF entry in instance network info cache for port 203c150c-9339-4520-8e52-01740854c5ef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.731 253542 DEBUG nova.network.neutron [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Updating instance_info_cache with network_info: [{"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.747 253542 DEBUG oslo_concurrency.lockutils [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-8a68398d-9640-49e2-a049-3da4f7b371c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.748 253542 DEBUG nova.compute.manager [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Received event network-vif-plugged-5375bd27-22cc-4b77-9aba-984375be602e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.748 253542 DEBUG oslo_concurrency.lockutils [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.748 253542 DEBUG oslo_concurrency.lockutils [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.748 253542 DEBUG oslo_concurrency.lockutils [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.749 253542 DEBUG nova.compute.manager [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Processing event network-vif-plugged-5375bd27-22cc-4b77-9aba-984375be602e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.749 253542 DEBUG nova.compute.manager [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Received event network-vif-plugged-5375bd27-22cc-4b77-9aba-984375be602e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.749 253542 DEBUG oslo_concurrency.lockutils [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.749 253542 DEBUG oslo_concurrency.lockutils [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.750 253542 DEBUG oslo_concurrency.lockutils [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.750 253542 DEBUG nova.compute.manager [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] No waiting events found dispatching network-vif-plugged-5375bd27-22cc-4b77-9aba-984375be602e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.750 253542 WARNING nova.compute.manager [req-da90bc10-c5bb-45fc-89ae-4c112a1ec3c1 req-026d9397-e37b-4dd8-b82c-596b679ec4ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Received unexpected event network-vif-plugged-5375bd27-22cc-4b77-9aba-984375be602e for instance with vm_state building and task_state spawning.
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.751 253542 DEBUG nova.compute.manager [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.754 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059917.7540607, 8120a4a8-c326-4f1b-94d5-2c1ffe663959 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.754 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] VM Resumed (Lifecycle Event)
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.757 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.760 253542 INFO nova.virt.libvirt.driver [-] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Instance spawned successfully.
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.760 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.779 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.782 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.791 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.791 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.792 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.792 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.793 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.793 253542 DEBUG nova.virt.libvirt.driver [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.825 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.853 253542 INFO nova.compute.manager [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Took 18.59 seconds to spawn the instance on the hypervisor.
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.853 253542 DEBUG nova.compute.manager [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.909 253542 INFO nova.compute.manager [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Took 19.67 seconds to build instance.
Nov 25 08:38:37 compute-0 nova_compute[253538]: 2025-11-25 08:38:37.933 253542 DEBUG oslo_concurrency.lockutils [None req-6cd070f7-de24-4af7-a519-a2d0f0eed180 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:38 compute-0 ovn_controller[152859]: 2025-11-25T08:38:38Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:62:35:96 10.100.0.4
Nov 25 08:38:38 compute-0 ovn_controller[152859]: 2025-11-25T08:38:38Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:62:35:96 10.100.0.4
Nov 25 08:38:38 compute-0 ceph-mon[75015]: pgmap v1668: 321 pgs: 321 active+clean; 565 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 2.3 MiB/s wr, 257 op/s
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.276 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 570 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 2.0 MiB/s wr, 285 op/s
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.931 253542 DEBUG oslo_concurrency.lockutils [None req-e0d77ed8-e9dc-4b68-b2c8-216a10a27394 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "8a68398d-9640-49e2-a049-3da4f7b371c5" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.933 253542 DEBUG oslo_concurrency.lockutils [None req-e0d77ed8-e9dc-4b68-b2c8-216a10a27394 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.933 253542 DEBUG nova.compute.manager [None req-e0d77ed8-e9dc-4b68-b2c8-216a10a27394 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.939 253542 DEBUG nova.compute.manager [None req-e0d77ed8-e9dc-4b68-b2c8-216a10a27394 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.940 253542 DEBUG nova.objects.instance [None req-e0d77ed8-e9dc-4b68-b2c8-216a10a27394 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'flavor' on Instance uuid 8a68398d-9640-49e2-a049-3da4f7b371c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.981 253542 DEBUG nova.compute.manager [req-f99b06ec-c91d-4907-9dc2-7313e5393142 req-4494b0eb-7b76-4cbc-ac60-110b1cd0f039 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received event network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.982 253542 DEBUG oslo_concurrency.lockutils [req-f99b06ec-c91d-4907-9dc2-7313e5393142 req-4494b0eb-7b76-4cbc-ac60-110b1cd0f039 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.982 253542 DEBUG oslo_concurrency.lockutils [req-f99b06ec-c91d-4907-9dc2-7313e5393142 req-4494b0eb-7b76-4cbc-ac60-110b1cd0f039 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.982 253542 DEBUG oslo_concurrency.lockutils [req-f99b06ec-c91d-4907-9dc2-7313e5393142 req-4494b0eb-7b76-4cbc-ac60-110b1cd0f039 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.983 253542 DEBUG nova.compute.manager [req-f99b06ec-c91d-4907-9dc2-7313e5393142 req-4494b0eb-7b76-4cbc-ac60-110b1cd0f039 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] No waiting events found dispatching network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.983 253542 WARNING nova.compute.manager [req-f99b06ec-c91d-4907-9dc2-7313e5393142 req-4494b0eb-7b76-4cbc-ac60-110b1cd0f039 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received unexpected event network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef for instance with vm_state active and task_state powering-off.
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.983 253542 DEBUG nova.compute.manager [req-f99b06ec-c91d-4907-9dc2-7313e5393142 req-4494b0eb-7b76-4cbc-ac60-110b1cd0f039 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received event network-changed-8f28ea33-80c4-41cb-b191-a1b619b14515 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.983 253542 DEBUG nova.compute.manager [req-f99b06ec-c91d-4907-9dc2-7313e5393142 req-4494b0eb-7b76-4cbc-ac60-110b1cd0f039 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Refreshing instance network info cache due to event network-changed-8f28ea33-80c4-41cb-b191-a1b619b14515. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.984 253542 DEBUG oslo_concurrency.lockutils [req-f99b06ec-c91d-4907-9dc2-7313e5393142 req-4494b0eb-7b76-4cbc-ac60-110b1cd0f039 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.984 253542 DEBUG oslo_concurrency.lockutils [req-f99b06ec-c91d-4907-9dc2-7313e5393142 req-4494b0eb-7b76-4cbc-ac60-110b1cd0f039 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.984 253542 DEBUG nova.network.neutron [req-f99b06ec-c91d-4907-9dc2-7313e5393142 req-4494b0eb-7b76-4cbc-ac60-110b1cd0f039 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Refreshing network info cache for port 8f28ea33-80c4-41cb-b191-a1b619b14515 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:38:39 compute-0 nova_compute[253538]: 2025-11-25 08:38:39.990 253542 DEBUG nova.virt.libvirt.driver [None req-e0d77ed8-e9dc-4b68-b2c8-216a10a27394 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.136 253542 DEBUG oslo_concurrency.lockutils [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.138 253542 DEBUG oslo_concurrency.lockutils [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.138 253542 DEBUG oslo_concurrency.lockutils [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.138 253542 DEBUG oslo_concurrency.lockutils [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.139 253542 DEBUG oslo_concurrency.lockutils [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.140 253542 INFO nova.compute.manager [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Terminating instance
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.141 253542 DEBUG nova.compute.manager [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:38:40 compute-0 ceph-mon[75015]: pgmap v1669: 321 pgs: 321 active+clean; 570 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 2.0 MiB/s wr, 285 op/s
Nov 25 08:38:40 compute-0 kernel: tap5375bd27-22 (unregistering): left promiscuous mode
Nov 25 08:38:40 compute-0 NetworkManager[48915]: <info>  [1764059920.5294] device (tap5375bd27-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.550 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:40 compute-0 ovn_controller[152859]: 2025-11-25T08:38:40Z|00818|binding|INFO|Releasing lport 5375bd27-22cc-4b77-9aba-984375be602e from this chassis (sb_readonly=0)
Nov 25 08:38:40 compute-0 ovn_controller[152859]: 2025-11-25T08:38:40Z|00819|binding|INFO|Setting lport 5375bd27-22cc-4b77-9aba-984375be602e down in Southbound
Nov 25 08:38:40 compute-0 ovn_controller[152859]: 2025-11-25T08:38:40Z|00820|binding|INFO|Removing iface tap5375bd27-22 ovn-installed in OVS
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.557 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:40.560 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:9c:37 10.100.0.7'], port_security=['fa:16:3e:90:9c:37 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8120a4a8-c326-4f1b-94d5-2c1ffe663959', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92e26514-5b15-410b-8885-6773bc03c4ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b730f086c4b94185afab5e10fa2e8181', 'neutron:revision_number': '4', 'neutron:security_group_ids': '420b1f20-9bb9-442a-978c-444ed9d0cb04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ae0a3b6-6c0f-4ef3-ae20-c7b5b6ede8ac, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5375bd27-22cc-4b77-9aba-984375be602e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:38:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:40.561 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5375bd27-22cc-4b77-9aba-984375be602e in datapath 92e26514-5b15-410b-8885-6773bc03c4ce unbound from our chassis
Nov 25 08:38:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:40.563 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92e26514-5b15-410b-8885-6773bc03c4ce
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.573 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:40 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000056.scope: Deactivated successfully.
Nov 25 08:38:40 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000056.scope: Consumed 3.155s CPU time.
Nov 25 08:38:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:40.580 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ed9198-62c5-4377-b5b2-14432cb364ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:40 compute-0 systemd-machined[215790]: Machine qemu-105-instance-00000056 terminated.
Nov 25 08:38:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:40.608 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[96ace09a-7254-4285-a012-c3ac24d89741]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:40.610 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3aa4c0-bd75-49cf-89b6-371cc14620be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:40.632 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[507484a5-93e8-4605-ae80-b1b12ddcf28e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:40.686 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c4142972-a3b8-4f3b-bd5e-033297f99579]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92e26514-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:84:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522366, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337376, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:40.699 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[45be5eb7-c8af-42d8-8fa3-b7e48fb32a26]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522379, 'tstamp': 522379}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337377, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522383, 'tstamp': 522383}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337377, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:40.700 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92e26514-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.702 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:40.708 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92e26514-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:40.709 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:40.709 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92e26514-50, col_values=(('external_ids', {'iface-id': '185991f0-acab-400e-baff-76794035d44a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:40.709 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.783 253542 INFO nova.virt.libvirt.driver [-] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Instance destroyed successfully.
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.783 253542 DEBUG nova.objects.instance [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'resources' on Instance uuid 8120a4a8-c326-4f1b-94d5-2c1ffe663959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.805 253542 DEBUG nova.virt.libvirt.vif [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:38:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1385124579',display_name='tempest-ServersTestJSON-server-1385124579',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1385124579',id=86,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF77JveL0BG5wdUPTW6PG1Cp7hECH5Gl79egfwpLvdbjEuG1YZ3Vj0lR/X2i629CgVY6503s+mx9fQVN724QZuHLRyYyBnIZyO1ID5iRCjMXo62SwPuSjzUopvGv9d+KWQ==',key_name='tempest-key-1758938090',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:38:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-xusmvxml',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:38:37Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=8120a4a8-c326-4f1b-94d5-2c1ffe663959,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5375bd27-22cc-4b77-9aba-984375be602e", "address": "fa:16:3e:90:9c:37", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5375bd27-22", "ovs_interfaceid": "5375bd27-22cc-4b77-9aba-984375be602e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.805 253542 DEBUG nova.network.os_vif_util [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "5375bd27-22cc-4b77-9aba-984375be602e", "address": "fa:16:3e:90:9c:37", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5375bd27-22", "ovs_interfaceid": "5375bd27-22cc-4b77-9aba-984375be602e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.806 253542 DEBUG nova.network.os_vif_util [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:9c:37,bridge_name='br-int',has_traffic_filtering=True,id=5375bd27-22cc-4b77-9aba-984375be602e,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5375bd27-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.807 253542 DEBUG os_vif [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:9c:37,bridge_name='br-int',has_traffic_filtering=True,id=5375bd27-22cc-4b77-9aba-984375be602e,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5375bd27-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.808 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.809 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5375bd27-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.811 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.813 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:40 compute-0 nova_compute[253538]: 2025-11-25 08:38:40.816 253542 INFO os_vif [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:9c:37,bridge_name='br-int',has_traffic_filtering=True,id=5375bd27-22cc-4b77-9aba-984375be602e,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5375bd27-22')
Nov 25 08:38:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:41.064 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:41.065 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:41.066 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:41 compute-0 nova_compute[253538]: 2025-11-25 08:38:41.133 253542 INFO nova.virt.libvirt.driver [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Deleting instance files /var/lib/nova/instances/8120a4a8-c326-4f1b-94d5-2c1ffe663959_del
Nov 25 08:38:41 compute-0 nova_compute[253538]: 2025-11-25 08:38:41.134 253542 INFO nova.virt.libvirt.driver [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Deletion of /var/lib/nova/instances/8120a4a8-c326-4f1b-94d5-2c1ffe663959_del complete
Nov 25 08:38:41 compute-0 nova_compute[253538]: 2025-11-25 08:38:41.192 253542 INFO nova.compute.manager [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Took 1.05 seconds to destroy the instance on the hypervisor.
Nov 25 08:38:41 compute-0 nova_compute[253538]: 2025-11-25 08:38:41.192 253542 DEBUG oslo.service.loopingcall [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:38:41 compute-0 nova_compute[253538]: 2025-11-25 08:38:41.192 253542 DEBUG nova.compute.manager [-] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:38:41 compute-0 nova_compute[253538]: 2025-11-25 08:38:41.193 253542 DEBUG nova.network.neutron [-] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:38:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 321 active+clean; 587 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.5 MiB/s wr, 324 op/s
Nov 25 08:38:41 compute-0 nova_compute[253538]: 2025-11-25 08:38:41.583 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:38:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:42.116 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:38:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:42.117 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:38:42 compute-0 nova_compute[253538]: 2025-11-25 08:38:42.134 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:42 compute-0 nova_compute[253538]: 2025-11-25 08:38:42.372 253542 DEBUG nova.network.neutron [req-f99b06ec-c91d-4907-9dc2-7313e5393142 req-4494b0eb-7b76-4cbc-ac60-110b1cd0f039 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Updated VIF entry in instance network info cache for port 8f28ea33-80c4-41cb-b191-a1b619b14515. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:38:42 compute-0 nova_compute[253538]: 2025-11-25 08:38:42.373 253542 DEBUG nova.network.neutron [req-f99b06ec-c91d-4907-9dc2-7313e5393142 req-4494b0eb-7b76-4cbc-ac60-110b1cd0f039 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Updating instance_info_cache with network_info: [{"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:42 compute-0 nova_compute[253538]: 2025-11-25 08:38:42.389 253542 DEBUG oslo_concurrency.lockutils [req-f99b06ec-c91d-4907-9dc2-7313e5393142 req-4494b0eb-7b76-4cbc-ac60-110b1cd0f039 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:42 compute-0 ceph-mon[75015]: pgmap v1670: 321 pgs: 321 active+clean; 587 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.5 MiB/s wr, 324 op/s
Nov 25 08:38:42 compute-0 nova_compute[253538]: 2025-11-25 08:38:42.529 253542 DEBUG nova.network.neutron [-] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:42 compute-0 nova_compute[253538]: 2025-11-25 08:38:42.543 253542 INFO nova.compute.manager [-] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Took 1.35 seconds to deallocate network for instance.
Nov 25 08:38:42 compute-0 nova_compute[253538]: 2025-11-25 08:38:42.587 253542 DEBUG oslo_concurrency.lockutils [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:42 compute-0 nova_compute[253538]: 2025-11-25 08:38:42.588 253542 DEBUG oslo_concurrency.lockutils [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:42 compute-0 nova_compute[253538]: 2025-11-25 08:38:42.779 253542 DEBUG oslo_concurrency.processutils [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:42 compute-0 nova_compute[253538]: 2025-11-25 08:38:42.831 253542 DEBUG nova.compute.manager [req-7357e530-3bd2-467e-898b-49667117f9c5 req-09f42e56-9007-4072-818d-10e42598de24 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received event network-changed-8f28ea33-80c4-41cb-b191-a1b619b14515 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:42 compute-0 nova_compute[253538]: 2025-11-25 08:38:42.831 253542 DEBUG nova.compute.manager [req-7357e530-3bd2-467e-898b-49667117f9c5 req-09f42e56-9007-4072-818d-10e42598de24 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Refreshing instance network info cache due to event network-changed-8f28ea33-80c4-41cb-b191-a1b619b14515. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:38:42 compute-0 nova_compute[253538]: 2025-11-25 08:38:42.832 253542 DEBUG oslo_concurrency.lockutils [req-7357e530-3bd2-467e-898b-49667117f9c5 req-09f42e56-9007-4072-818d-10e42598de24 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:42 compute-0 nova_compute[253538]: 2025-11-25 08:38:42.832 253542 DEBUG oslo_concurrency.lockutils [req-7357e530-3bd2-467e-898b-49667117f9c5 req-09f42e56-9007-4072-818d-10e42598de24 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:42 compute-0 nova_compute[253538]: 2025-11-25 08:38:42.832 253542 DEBUG nova.network.neutron [req-7357e530-3bd2-467e-898b-49667117f9c5 req-09f42e56-9007-4072-818d-10e42598de24 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Refreshing network info cache for port 8f28ea33-80c4-41cb-b191-a1b619b14515 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:38:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:38:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476998479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:43 compute-0 nova_compute[253538]: 2025-11-25 08:38:43.288 253542 DEBUG oslo_concurrency.processutils [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:43 compute-0 nova_compute[253538]: 2025-11-25 08:38:43.296 253542 DEBUG nova.compute.provider_tree [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:38:43 compute-0 nova_compute[253538]: 2025-11-25 08:38:43.310 253542 DEBUG nova.scheduler.client.report [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:38:43 compute-0 nova_compute[253538]: 2025-11-25 08:38:43.334 253542 DEBUG oslo_concurrency.lockutils [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1671: 321 pgs: 321 active+clean; 569 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 2.3 MiB/s wr, 350 op/s
Nov 25 08:38:43 compute-0 nova_compute[253538]: 2025-11-25 08:38:43.366 253542 INFO nova.scheduler.client.report [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Deleted allocations for instance 8120a4a8-c326-4f1b-94d5-2c1ffe663959
Nov 25 08:38:43 compute-0 nova_compute[253538]: 2025-11-25 08:38:43.430 253542 DEBUG oslo_concurrency.lockutils [None req-e6c069b8-a2da-4c69-8d35-fc6c1c3e3d18 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2476998479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:44 compute-0 nova_compute[253538]: 2025-11-25 08:38:44.043 253542 DEBUG nova.network.neutron [req-7357e530-3bd2-467e-898b-49667117f9c5 req-09f42e56-9007-4072-818d-10e42598de24 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Updated VIF entry in instance network info cache for port 8f28ea33-80c4-41cb-b191-a1b619b14515. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:38:44 compute-0 nova_compute[253538]: 2025-11-25 08:38:44.044 253542 DEBUG nova.network.neutron [req-7357e530-3bd2-467e-898b-49667117f9c5 req-09f42e56-9007-4072-818d-10e42598de24 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Updating instance_info_cache with network_info: [{"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:44 compute-0 nova_compute[253538]: 2025-11-25 08:38:44.058 253542 DEBUG oslo_concurrency.lockutils [req-7357e530-3bd2-467e-898b-49667117f9c5 req-09f42e56-9007-4072-818d-10e42598de24 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0e855a86-52f7-47bd-aee9-e88449169aa1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:44 compute-0 nova_compute[253538]: 2025-11-25 08:38:44.058 253542 DEBUG nova.compute.manager [req-7357e530-3bd2-467e-898b-49667117f9c5 req-09f42e56-9007-4072-818d-10e42598de24 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Received event network-vif-unplugged-5375bd27-22cc-4b77-9aba-984375be602e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:44 compute-0 nova_compute[253538]: 2025-11-25 08:38:44.059 253542 DEBUG oslo_concurrency.lockutils [req-7357e530-3bd2-467e-898b-49667117f9c5 req-09f42e56-9007-4072-818d-10e42598de24 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:44 compute-0 nova_compute[253538]: 2025-11-25 08:38:44.059 253542 DEBUG oslo_concurrency.lockutils [req-7357e530-3bd2-467e-898b-49667117f9c5 req-09f42e56-9007-4072-818d-10e42598de24 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:44 compute-0 nova_compute[253538]: 2025-11-25 08:38:44.059 253542 DEBUG oslo_concurrency.lockutils [req-7357e530-3bd2-467e-898b-49667117f9c5 req-09f42e56-9007-4072-818d-10e42598de24 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:44 compute-0 nova_compute[253538]: 2025-11-25 08:38:44.060 253542 DEBUG nova.compute.manager [req-7357e530-3bd2-467e-898b-49667117f9c5 req-09f42e56-9007-4072-818d-10e42598de24 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] No waiting events found dispatching network-vif-unplugged-5375bd27-22cc-4b77-9aba-984375be602e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:44 compute-0 nova_compute[253538]: 2025-11-25 08:38:44.060 253542 WARNING nova.compute.manager [req-7357e530-3bd2-467e-898b-49667117f9c5 req-09f42e56-9007-4072-818d-10e42598de24 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Received unexpected event network-vif-unplugged-5375bd27-22cc-4b77-9aba-984375be602e for instance with vm_state deleted and task_state None.
Nov 25 08:38:44 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 25 08:38:44 compute-0 ceph-mon[75015]: pgmap v1671: 321 pgs: 321 active+clean; 569 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 2.3 MiB/s wr, 350 op/s
Nov 25 08:38:44 compute-0 podman[337431]: 2025-11-25 08:38:44.834280709 +0000 UTC m=+0.077823821 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 08:38:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1672: 321 pgs: 321 active+clean; 571 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 3.7 MiB/s wr, 377 op/s
Nov 25 08:38:45 compute-0 ovn_controller[152859]: 2025-11-25T08:38:45Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c2:45:0c 10.100.0.13
Nov 25 08:38:45 compute-0 ovn_controller[152859]: 2025-11-25T08:38:45Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c2:45:0c 10.100.0.13
Nov 25 08:38:45 compute-0 nova_compute[253538]: 2025-11-25 08:38:45.629 253542 DEBUG nova.compute.manager [req-6099b6ab-8e76-4792-b6ee-4e7ec911363a req-c8edfaaa-b23f-4c48-a228-013aa7395ef6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Received event network-vif-plugged-5375bd27-22cc-4b77-9aba-984375be602e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:45 compute-0 nova_compute[253538]: 2025-11-25 08:38:45.629 253542 DEBUG oslo_concurrency.lockutils [req-6099b6ab-8e76-4792-b6ee-4e7ec911363a req-c8edfaaa-b23f-4c48-a228-013aa7395ef6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:45 compute-0 nova_compute[253538]: 2025-11-25 08:38:45.630 253542 DEBUG oslo_concurrency.lockutils [req-6099b6ab-8e76-4792-b6ee-4e7ec911363a req-c8edfaaa-b23f-4c48-a228-013aa7395ef6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:45 compute-0 nova_compute[253538]: 2025-11-25 08:38:45.630 253542 DEBUG oslo_concurrency.lockutils [req-6099b6ab-8e76-4792-b6ee-4e7ec911363a req-c8edfaaa-b23f-4c48-a228-013aa7395ef6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8120a4a8-c326-4f1b-94d5-2c1ffe663959-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:45 compute-0 nova_compute[253538]: 2025-11-25 08:38:45.630 253542 DEBUG nova.compute.manager [req-6099b6ab-8e76-4792-b6ee-4e7ec911363a req-c8edfaaa-b23f-4c48-a228-013aa7395ef6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] No waiting events found dispatching network-vif-plugged-5375bd27-22cc-4b77-9aba-984375be602e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:45 compute-0 nova_compute[253538]: 2025-11-25 08:38:45.631 253542 WARNING nova.compute.manager [req-6099b6ab-8e76-4792-b6ee-4e7ec911363a req-c8edfaaa-b23f-4c48-a228-013aa7395ef6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Received unexpected event network-vif-plugged-5375bd27-22cc-4b77-9aba-984375be602e for instance with vm_state deleted and task_state None.
Nov 25 08:38:45 compute-0 nova_compute[253538]: 2025-11-25 08:38:45.631 253542 DEBUG nova.compute.manager [req-6099b6ab-8e76-4792-b6ee-4e7ec911363a req-c8edfaaa-b23f-4c48-a228-013aa7395ef6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Received event network-vif-deleted-5375bd27-22cc-4b77-9aba-984375be602e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:45 compute-0 nova_compute[253538]: 2025-11-25 08:38:45.811 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:46 compute-0 ceph-mon[75015]: pgmap v1672: 321 pgs: 321 active+clean; 571 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 3.7 MiB/s wr, 377 op/s
Nov 25 08:38:46 compute-0 nova_compute[253538]: 2025-11-25 08:38:46.583 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:46 compute-0 nova_compute[253538]: 2025-11-25 08:38:46.589 253542 DEBUG nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:38:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:38:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1673: 321 pgs: 321 active+clean; 575 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.1 MiB/s wr, 304 op/s
Nov 25 08:38:47 compute-0 ceph-mon[75015]: pgmap v1673: 321 pgs: 321 active+clean; 575 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.1 MiB/s wr, 304 op/s
Nov 25 08:38:47 compute-0 nova_compute[253538]: 2025-11-25 08:38:47.894 253542 DEBUG oslo_concurrency.lockutils [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Acquiring lock "0e855a86-52f7-47bd-aee9-e88449169aa1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:47 compute-0 nova_compute[253538]: 2025-11-25 08:38:47.896 253542 DEBUG oslo_concurrency.lockutils [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:47 compute-0 nova_compute[253538]: 2025-11-25 08:38:47.896 253542 DEBUG oslo_concurrency.lockutils [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Acquiring lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:47 compute-0 nova_compute[253538]: 2025-11-25 08:38:47.897 253542 DEBUG oslo_concurrency.lockutils [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:47 compute-0 nova_compute[253538]: 2025-11-25 08:38:47.898 253542 DEBUG oslo_concurrency.lockutils [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:47 compute-0 nova_compute[253538]: 2025-11-25 08:38:47.900 253542 INFO nova.compute.manager [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Terminating instance
Nov 25 08:38:47 compute-0 nova_compute[253538]: 2025-11-25 08:38:47.901 253542 DEBUG nova.compute.manager [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:38:48 compute-0 kernel: tap8f28ea33-80 (unregistering): left promiscuous mode
Nov 25 08:38:48 compute-0 NetworkManager[48915]: <info>  [1764059928.0402] device (tap8f28ea33-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.048 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:48 compute-0 ovn_controller[152859]: 2025-11-25T08:38:48Z|00821|binding|INFO|Releasing lport 8f28ea33-80c4-41cb-b191-a1b619b14515 from this chassis (sb_readonly=0)
Nov 25 08:38:48 compute-0 ovn_controller[152859]: 2025-11-25T08:38:48Z|00822|binding|INFO|Setting lport 8f28ea33-80c4-41cb-b191-a1b619b14515 down in Southbound
Nov 25 08:38:48 compute-0 ovn_controller[152859]: 2025-11-25T08:38:48Z|00823|binding|INFO|Removing iface tap8f28ea33-80 ovn-installed in OVS
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.051 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.069 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:48.118 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:48 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000052.scope: Deactivated successfully.
Nov 25 08:38:48 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000052.scope: Consumed 14.446s CPU time.
Nov 25 08:38:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:48.130 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:2c:1d 10.100.0.6'], port_security=['fa:16:3e:51:2c:1d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0e855a86-52f7-47bd-aee9-e88449169aa1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6346f61b-3f62-4471-b87c-676053219f02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ae570c13ba047bca1859d62faf328cc', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1118a317-9e94-4c83-9854-1785d0154360', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2aacef0e-2524-4118-9960-da2e22fd24eb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8f28ea33-80c4-41cb-b191-a1b619b14515) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:38:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:48.132 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8f28ea33-80c4-41cb-b191-a1b619b14515 in datapath 6346f61b-3f62-4471-b87c-676053219f02 unbound from our chassis
Nov 25 08:38:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:48.134 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6346f61b-3f62-4471-b87c-676053219f02 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 08:38:48 compute-0 systemd-machined[215790]: Machine qemu-102-instance-00000052 terminated.
Nov 25 08:38:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:48.136 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fafeed97-129d-4c15-a300-54c48c887340]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.348 253542 INFO nova.virt.libvirt.driver [-] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Instance destroyed successfully.
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.349 253542 DEBUG nova.objects.instance [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lazy-loading 'resources' on Instance uuid 0e855a86-52f7-47bd-aee9-e88449169aa1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.358 253542 DEBUG nova.compute.manager [req-189bb83b-dbf4-4649-8600-060450361b4a req-abe57633-d8bd-40d9-a50c-97d0d2f3aff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received event network-vif-unplugged-8f28ea33-80c4-41cb-b191-a1b619b14515 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.359 253542 DEBUG oslo_concurrency.lockutils [req-189bb83b-dbf4-4649-8600-060450361b4a req-abe57633-d8bd-40d9-a50c-97d0d2f3aff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.360 253542 DEBUG oslo_concurrency.lockutils [req-189bb83b-dbf4-4649-8600-060450361b4a req-abe57633-d8bd-40d9-a50c-97d0d2f3aff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.360 253542 DEBUG oslo_concurrency.lockutils [req-189bb83b-dbf4-4649-8600-060450361b4a req-abe57633-d8bd-40d9-a50c-97d0d2f3aff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.361 253542 DEBUG nova.compute.manager [req-189bb83b-dbf4-4649-8600-060450361b4a req-abe57633-d8bd-40d9-a50c-97d0d2f3aff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] No waiting events found dispatching network-vif-unplugged-8f28ea33-80c4-41cb-b191-a1b619b14515 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.362 253542 DEBUG nova.compute.manager [req-189bb83b-dbf4-4649-8600-060450361b4a req-abe57633-d8bd-40d9-a50c-97d0d2f3aff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received event network-vif-unplugged-8f28ea33-80c4-41cb-b191-a1b619b14515 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.369 253542 DEBUG nova.virt.libvirt.vif [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:37:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1695294575',display_name='tempest-ServerRescueTestJSONUnderV235-server-1695294575',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1695294575',id=82,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:38:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ae570c13ba047bca1859d62faf328cc',ramdisk_id='',reservation_id='r-df750kfa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2082720401',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2082720401-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:38:21Z,user_data=None,user_id='2c27b17fb49c46f2877860b2f7123ef2',uuid=0e855a86-52f7-47bd-aee9-e88449169aa1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.370 253542 DEBUG nova.network.os_vif_util [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Converting VIF {"id": "8f28ea33-80c4-41cb-b191-a1b619b14515", "address": "fa:16:3e:51:2c:1d", "network": {"id": "6346f61b-3f62-4471-b87c-676053219f02", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-630581155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6ae570c13ba047bca1859d62faf328cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f28ea33-80", "ovs_interfaceid": "8f28ea33-80c4-41cb-b191-a1b619b14515", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.372 253542 DEBUG nova.network.os_vif_util [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:51:2c:1d,bridge_name='br-int',has_traffic_filtering=True,id=8f28ea33-80c4-41cb-b191-a1b619b14515,network=Network(6346f61b-3f62-4471-b87c-676053219f02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f28ea33-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.373 253542 DEBUG os_vif [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:51:2c:1d,bridge_name='br-int',has_traffic_filtering=True,id=8f28ea33-80c4-41cb-b191-a1b619b14515,network=Network(6346f61b-3f62-4471-b87c-676053219f02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f28ea33-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.376 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.377 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f28ea33-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.380 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.382 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.385 253542 INFO os_vif [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:51:2c:1d,bridge_name='br-int',has_traffic_filtering=True,id=8f28ea33-80c4-41cb-b191-a1b619b14515,network=Network(6346f61b-3f62-4471-b87c-676053219f02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f28ea33-80')
Nov 25 08:38:48 compute-0 kernel: tap21a608d7-be (unregistering): left promiscuous mode
Nov 25 08:38:48 compute-0 NetworkManager[48915]: <info>  [1764059928.9472] device (tap21a608d7-be): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:38:48 compute-0 ovn_controller[152859]: 2025-11-25T08:38:48Z|00824|binding|INFO|Releasing lport 21a608d7-be38-4d88-902b-2124e5227ae5 from this chassis (sb_readonly=0)
Nov 25 08:38:48 compute-0 ovn_controller[152859]: 2025-11-25T08:38:48Z|00825|binding|INFO|Setting lport 21a608d7-be38-4d88-902b-2124e5227ae5 down in Southbound
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.984 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:48 compute-0 ovn_controller[152859]: 2025-11-25T08:38:48Z|00826|binding|INFO|Removing iface tap21a608d7-be ovn-installed in OVS
Nov 25 08:38:48 compute-0 nova_compute[253538]: 2025-11-25 08:38:48.989 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:48 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000055.scope: Deactivated successfully.
Nov 25 08:38:48 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000055.scope: Consumed 14.026s CPU time.
Nov 25 08:38:49 compute-0 systemd-machined[215790]: Machine qemu-104-instance-00000055 terminated.
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.003 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.028 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:45:0c 10.100.0.13'], port_security=['fa:16:3e:c2:45:0c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3bc210c6-9f67-440b-a11c-0b4e13e74a21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '42767d876b844fbd9b53953fb5f664b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87f4d1a7-e392-4893-98e7-5d05afa4be77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=608490f2-11ad-4e35-9100-843accb3d76b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=21a608d7-be38-4d88-902b-2124e5227ae5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.030 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 21a608d7-be38-4d88-902b-2124e5227ae5 in datapath 1a4161e2-3dc8-48ab-8204-aaba5cb02136 unbound from our chassis
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.034 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1a4161e2-3dc8-48ab-8204-aaba5cb02136
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.056 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[754dccb1-90eb-47aa-b13f-496818449fe3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.100 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bd963ff7-1beb-415d-beec-a5120192db12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.106 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d24bfb0e-7d9a-4e3f-b15d-d87bba37f48a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 podman[337486]: 2025-11-25 08:38:49.135476784 +0000 UTC m=+0.121515993 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.157 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce94a23-21db-48c0-8abf-6e7dd6f7a793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 systemd-udevd[337451]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:38:49 compute-0 kernel: tap21a608d7-be: entered promiscuous mode
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.176 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:49 compute-0 ovn_controller[152859]: 2025-11-25T08:38:49Z|00827|binding|INFO|Claiming lport 21a608d7-be38-4d88-902b-2124e5227ae5 for this chassis.
Nov 25 08:38:49 compute-0 ovn_controller[152859]: 2025-11-25T08:38:49Z|00828|binding|INFO|21a608d7-be38-4d88-902b-2124e5227ae5: Claiming fa:16:3e:c2:45:0c 10.100.0.13
Nov 25 08:38:49 compute-0 NetworkManager[48915]: <info>  [1764059929.1788] manager: (tap21a608d7-be): new Tun device (/org/freedesktop/NetworkManager/Devices/356)
Nov 25 08:38:49 compute-0 kernel: tap21a608d7-be (unregistering): left promiscuous mode
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.185 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f310e568-f9d2-404a-a317-58d25dd65af4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1a4161e2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:2f:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525620, 'reachable_time': 42483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337513, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.193 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:45:0c 10.100.0.13'], port_security=['fa:16:3e:c2:45:0c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3bc210c6-9f67-440b-a11c-0b4e13e74a21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '42767d876b844fbd9b53953fb5f664b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87f4d1a7-e392-4893-98e7-5d05afa4be77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=608490f2-11ad-4e35-9100-843accb3d76b, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=21a608d7-be38-4d88-902b-2124e5227ae5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.208 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[656fa7b2-89e6-4934-81aa-695c730905e5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1a4161e2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525631, 'tstamp': 525631}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337516, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1a4161e2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525634, 'tstamp': 525634}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337516, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.211 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a4161e2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.215 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:49 compute-0 ovn_controller[152859]: 2025-11-25T08:38:49Z|00829|binding|INFO|Setting lport 21a608d7-be38-4d88-902b-2124e5227ae5 ovn-installed in OVS
Nov 25 08:38:49 compute-0 ovn_controller[152859]: 2025-11-25T08:38:49Z|00830|binding|INFO|Setting lport 21a608d7-be38-4d88-902b-2124e5227ae5 up in Southbound
Nov 25 08:38:49 compute-0 ovn_controller[152859]: 2025-11-25T08:38:49Z|00831|binding|INFO|Releasing lport 21a608d7-be38-4d88-902b-2124e5227ae5 from this chassis (sb_readonly=1)
Nov 25 08:38:49 compute-0 ovn_controller[152859]: 2025-11-25T08:38:49Z|00832|if_status|INFO|Dropped 2 log messages in last 233 seconds (most recently, 233 seconds ago) due to excessive rate
Nov 25 08:38:49 compute-0 ovn_controller[152859]: 2025-11-25T08:38:49Z|00833|if_status|INFO|Not setting lport 21a608d7-be38-4d88-902b-2124e5227ae5 down as sb is readonly
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.219 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:49 compute-0 ovn_controller[152859]: 2025-11-25T08:38:49Z|00834|binding|INFO|Removing iface tap21a608d7-be ovn-installed in OVS
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.222 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.252 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a4161e2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.253 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.252 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.253 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1a4161e2-30, col_values=(('external_ids', {'iface-id': 'ee088c9f-327b-47d5-b296-dc11de2d7323'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.254 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.255 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 21a608d7-be38-4d88-902b-2124e5227ae5 in datapath 1a4161e2-3dc8-48ab-8204-aaba5cb02136 bound to our chassis
Nov 25 08:38:49 compute-0 ovn_controller[152859]: 2025-11-25T08:38:49Z|00835|binding|INFO|Releasing lport 21a608d7-be38-4d88-902b-2124e5227ae5 from this chassis (sb_readonly=0)
Nov 25 08:38:49 compute-0 ovn_controller[152859]: 2025-11-25T08:38:49Z|00836|binding|INFO|Setting lport 21a608d7-be38-4d88-902b-2124e5227ae5 down in Southbound
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.258 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1a4161e2-3dc8-48ab-8204-aaba5cb02136
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.271 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:45:0c 10.100.0.13'], port_security=['fa:16:3e:c2:45:0c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3bc210c6-9f67-440b-a11c-0b4e13e74a21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '42767d876b844fbd9b53953fb5f664b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87f4d1a7-e392-4893-98e7-5d05afa4be77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=608490f2-11ad-4e35-9100-843accb3d76b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=21a608d7-be38-4d88-902b-2124e5227ae5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.277 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0bb634-c008-4649-a8fd-427fba6b720e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.284 253542 INFO nova.virt.libvirt.driver [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Deleting instance files /var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1_del
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.285 253542 INFO nova.virt.libvirt.driver [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Deletion of /var/lib/nova/instances/0e855a86-52f7-47bd-aee9-e88449169aa1_del complete
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.318 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8ad13e-be9f-4f26-acfc-365b4d2e0ec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.322 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[26f1b6a9-8bf2-44a7-846d-51884c14a227]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.335 253542 INFO nova.compute.manager [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Took 1.43 seconds to destroy the instance on the hypervisor.
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.336 253542 DEBUG oslo.service.loopingcall [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.336 253542 DEBUG nova.compute.manager [-] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.336 253542 DEBUG nova.network.neutron [-] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.361 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3e912e3c-8108-4be2-b303-35cb0168b89e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 321 active+clean; 579 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.8 MiB/s wr, 286 op/s
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.391 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bb533a0d-e463-4c19-a2d3-d2363a93ac96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1a4161e2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:2f:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525620, 'reachable_time': 42483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337524, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b426d6f7-314a-4dfb-8c11-c224049de74d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1a4161e2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525631, 'tstamp': 525631}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337525, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1a4161e2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525634, 'tstamp': 525634}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337525, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.420 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a4161e2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.422 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.431 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.432 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a4161e2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.432 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.433 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1a4161e2-30, col_values=(('external_ids', {'iface-id': 'ee088c9f-327b-47d5-b296-dc11de2d7323'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.433 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.434 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 21a608d7-be38-4d88-902b-2124e5227ae5 in datapath 1a4161e2-3dc8-48ab-8204-aaba5cb02136 unbound from our chassis
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.435 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1a4161e2-3dc8-48ab-8204-aaba5cb02136
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.460 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8f4af7-8cfc-41fc-a2e2-2258d63945d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.504 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2d41e2-d680-46d7-9b11-ddcae6a9d30f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.508 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5904230b-d295-4790-8d3d-b4fd05ed04f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.552 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[269ea169-c067-4ec7-88ae-7cf4b6b38a70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.583 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[55ed8e78-d85e-4eab-996d-1434acc7ea99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1a4161e2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:2f:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525620, 'reachable_time': 42483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337532, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.606 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[282ddd60-ee25-4d2c-8099-5e70e4035e91]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1a4161e2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525631, 'tstamp': 525631}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337533, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1a4161e2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525634, 'tstamp': 525634}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337533, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.607 253542 INFO nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Instance shutdown successfully after 13 seconds.
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.609 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a4161e2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.612 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.618 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.619 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a4161e2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.619 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.620 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1a4161e2-30, col_values=(('external_ids', {'iface-id': 'ee088c9f-327b-47d5-b296-dc11de2d7323'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:49.621 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.621 253542 INFO nova.virt.libvirt.driver [-] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Instance destroyed successfully.
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.621 253542 DEBUG nova.objects.instance [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3bc210c6-9f67-440b-a11c-0b4e13e74a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.662 253542 INFO nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Attempting rescue
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.664 253542 DEBUG nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.669 253542 DEBUG nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.669 253542 INFO nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Creating image(s)
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.702 253542 DEBUG nova.storage.rbd_utils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.707 253542 DEBUG nova.objects.instance [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3bc210c6-9f67-440b-a11c-0b4e13e74a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.755 253542 DEBUG nova.storage.rbd_utils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.794 253542 DEBUG nova.storage.rbd_utils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.799 253542 DEBUG oslo_concurrency.processutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.901 253542 DEBUG oslo_concurrency.processutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.902 253542 DEBUG oslo_concurrency.lockutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.903 253542 DEBUG oslo_concurrency.lockutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.904 253542 DEBUG oslo_concurrency.lockutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.941 253542 DEBUG nova.storage.rbd_utils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:49 compute-0 nova_compute[253538]: 2025-11-25 08:38:49.947 253542 DEBUG oslo_concurrency.processutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.075 253542 DEBUG nova.virt.libvirt.driver [None req-e0d77ed8-e9dc-4b68-b2c8-216a10a27394 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.305 253542 DEBUG oslo_concurrency.processutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.306 253542 DEBUG nova.objects.instance [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lazy-loading 'migration_context' on Instance uuid 3bc210c6-9f67-440b-a11c-0b4e13e74a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.321 253542 DEBUG nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.322 253542 DEBUG nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Start _get_guest_xml network_info=[{"id": "21a608d7-be38-4d88-902b-2124e5227ae5", "address": "fa:16:3e:c2:45:0c", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "vif_mac": "fa:16:3e:c2:45:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a608d7-be", "ovs_interfaceid": "21a608d7-be38-4d88-902b-2124e5227ae5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.322 253542 DEBUG nova.objects.instance [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lazy-loading 'resources' on Instance uuid 3bc210c6-9f67-440b-a11c-0b4e13e74a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.349 253542 WARNING nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.358 253542 DEBUG nova.virt.libvirt.host [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.359 253542 DEBUG nova.virt.libvirt.host [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.365 253542 DEBUG nova.virt.libvirt.host [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.365 253542 DEBUG nova.virt.libvirt.host [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.366 253542 DEBUG nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.366 253542 DEBUG nova.virt.hardware [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.366 253542 DEBUG nova.virt.hardware [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.366 253542 DEBUG nova.virt.hardware [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.367 253542 DEBUG nova.virt.hardware [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.367 253542 DEBUG nova.virt.hardware [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.367 253542 DEBUG nova.virt.hardware [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.367 253542 DEBUG nova.virt.hardware [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.368 253542 DEBUG nova.virt.hardware [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.368 253542 DEBUG nova.virt.hardware [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.368 253542 DEBUG nova.virt.hardware [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.368 253542 DEBUG nova.virt.hardware [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.368 253542 DEBUG nova.objects.instance [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3bc210c6-9f67-440b-a11c-0b4e13e74a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.394 253542 DEBUG oslo_concurrency.processutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:50 compute-0 ceph-mon[75015]: pgmap v1674: 321 pgs: 321 active+clean; 579 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.8 MiB/s wr, 286 op/s
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.442 253542 DEBUG nova.compute.manager [req-532f97fe-6fd3-4bc3-8f52-ad07deb73b62 req-526c332d-a208-4e5e-a748-23d459909be8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received event network-vif-unplugged-21a608d7-be38-4d88-902b-2124e5227ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.442 253542 DEBUG oslo_concurrency.lockutils [req-532f97fe-6fd3-4bc3-8f52-ad07deb73b62 req-526c332d-a208-4e5e-a748-23d459909be8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.443 253542 DEBUG oslo_concurrency.lockutils [req-532f97fe-6fd3-4bc3-8f52-ad07deb73b62 req-526c332d-a208-4e5e-a748-23d459909be8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.443 253542 DEBUG oslo_concurrency.lockutils [req-532f97fe-6fd3-4bc3-8f52-ad07deb73b62 req-526c332d-a208-4e5e-a748-23d459909be8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.443 253542 DEBUG nova.compute.manager [req-532f97fe-6fd3-4bc3-8f52-ad07deb73b62 req-526c332d-a208-4e5e-a748-23d459909be8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] No waiting events found dispatching network-vif-unplugged-21a608d7-be38-4d88-902b-2124e5227ae5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.443 253542 WARNING nova.compute.manager [req-532f97fe-6fd3-4bc3-8f52-ad07deb73b62 req-526c332d-a208-4e5e-a748-23d459909be8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received unexpected event network-vif-unplugged-21a608d7-be38-4d88-902b-2124e5227ae5 for instance with vm_state active and task_state rescuing.
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.444 253542 DEBUG nova.network.neutron [-] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.494 253542 DEBUG nova.compute.manager [req-a88627dc-99a2-4a32-b952-2dd6d5839e74 req-37bc9446-6292-486e-8014-f8fb94513979 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received event network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.495 253542 DEBUG oslo_concurrency.lockutils [req-a88627dc-99a2-4a32-b952-2dd6d5839e74 req-37bc9446-6292-486e-8014-f8fb94513979 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.495 253542 DEBUG oslo_concurrency.lockutils [req-a88627dc-99a2-4a32-b952-2dd6d5839e74 req-37bc9446-6292-486e-8014-f8fb94513979 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.495 253542 DEBUG oslo_concurrency.lockutils [req-a88627dc-99a2-4a32-b952-2dd6d5839e74 req-37bc9446-6292-486e-8014-f8fb94513979 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.495 253542 DEBUG nova.compute.manager [req-a88627dc-99a2-4a32-b952-2dd6d5839e74 req-37bc9446-6292-486e-8014-f8fb94513979 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] No waiting events found dispatching network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.495 253542 WARNING nova.compute.manager [req-a88627dc-99a2-4a32-b952-2dd6d5839e74 req-37bc9446-6292-486e-8014-f8fb94513979 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received unexpected event network-vif-plugged-8f28ea33-80c4-41cb-b191-a1b619b14515 for instance with vm_state rescued and task_state deleting.
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.581 253542 INFO nova.compute.manager [-] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Took 1.24 seconds to deallocate network for instance.
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.633 253542 DEBUG oslo_concurrency.lockutils [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.634 253542 DEBUG oslo_concurrency.lockutils [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.850 253542 DEBUG oslo_concurrency.processutils [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.913 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "dd202e7c-474a-42f6-a6a8-5276974c793f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.914 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "dd202e7c-474a-42f6-a6a8-5276974c793f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:50 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3196191983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.937 253542 DEBUG oslo_concurrency.processutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.939 253542 DEBUG oslo_concurrency.processutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:50 compute-0 nova_compute[253538]: 2025-11-25 08:38:50.977 253542 DEBUG nova.compute.manager [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.137 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:38:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2518585339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.353 253542 DEBUG oslo_concurrency.processutils [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.361 253542 DEBUG nova.compute.provider_tree [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:38:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 538 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.4 MiB/s wr, 255 op/s
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.376 253542 DEBUG nova.scheduler.client.report [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:38:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3994211775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.415 253542 DEBUG oslo_concurrency.lockutils [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.418 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.428 253542 DEBUG nova.virt.hardware [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.428 253542 INFO nova.compute.claims [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:38:51 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3196191983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:51 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2518585339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:51 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3994211775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.442 253542 DEBUG oslo_concurrency.processutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.443 253542 DEBUG oslo_concurrency.processutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.493 253542 INFO nova.scheduler.client.report [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Deleted allocations for instance 0e855a86-52f7-47bd-aee9-e88449169aa1
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.584 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.616 253542 DEBUG oslo_concurrency.lockutils [None req-c44fb860-0210-488c-9470-23f4706ac636 2c27b17fb49c46f2877860b2f7123ef2 6ae570c13ba047bca1859d62faf328cc - - default default] Lock "0e855a86-52f7-47bd-aee9-e88449169aa1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.669 253542 DEBUG oslo_concurrency.processutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:38:51 compute-0 podman[337718]: 2025-11-25 08:38:51.843736218 +0000 UTC m=+0.096676775 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:38:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1383451001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.898 253542 DEBUG oslo_concurrency.processutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.903 253542 DEBUG nova.virt.libvirt.vif [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:38:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1788909463',display_name='tempest-ServerRescueNegativeTestJSON-server-1788909463',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1788909463',id=85,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:38:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='42767d876b844fbd9b53953fb5f664b5',ramdisk_id='',reservation_id='r-53yldfyc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-561122366',owner_user_name='tempest-ServerRescueNegativeTestJSON-561122366-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:38:30Z,user_data=None,user_id='3f889d771d484ec8b9b1fff0fbde81fc',uuid=3bc210c6-9f67-440b-a11c-0b4e13e74a21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21a608d7-be38-4d88-902b-2124e5227ae5", "address": "fa:16:3e:c2:45:0c", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "vif_mac": "fa:16:3e:c2:45:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a608d7-be", "ovs_interfaceid": "21a608d7-be38-4d88-902b-2124e5227ae5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.904 253542 DEBUG nova.network.os_vif_util [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Converting VIF {"id": "21a608d7-be38-4d88-902b-2124e5227ae5", "address": "fa:16:3e:c2:45:0c", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "vif_mac": "fa:16:3e:c2:45:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a608d7-be", "ovs_interfaceid": "21a608d7-be38-4d88-902b-2124e5227ae5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.906 253542 DEBUG nova.network.os_vif_util [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:45:0c,bridge_name='br-int',has_traffic_filtering=True,id=21a608d7-be38-4d88-902b-2124e5227ae5,network=Network(1a4161e2-3dc8-48ab-8204-aaba5cb02136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21a608d7-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.911 253542 DEBUG nova.objects.instance [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3bc210c6-9f67-440b-a11c-0b4e13e74a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.929 253542 DEBUG nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:38:51 compute-0 nova_compute[253538]:   <uuid>3bc210c6-9f67-440b-a11c-0b4e13e74a21</uuid>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   <name>instance-00000055</name>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1788909463</nova:name>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:38:50</nova:creationTime>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:38:51 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:38:51 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:38:51 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:38:51 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:38:51 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:38:51 compute-0 nova_compute[253538]:         <nova:user uuid="3f889d771d484ec8b9b1fff0fbde81fc">tempest-ServerRescueNegativeTestJSON-561122366-project-member</nova:user>
Nov 25 08:38:51 compute-0 nova_compute[253538]:         <nova:project uuid="42767d876b844fbd9b53953fb5f664b5">tempest-ServerRescueNegativeTestJSON-561122366</nova:project>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:38:51 compute-0 nova_compute[253538]:         <nova:port uuid="21a608d7-be38-4d88-902b-2124e5227ae5">
Nov 25 08:38:51 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <system>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <entry name="serial">3bc210c6-9f67-440b-a11c-0b4e13e74a21</entry>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <entry name="uuid">3bc210c6-9f67-440b-a11c-0b4e13e74a21</entry>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     </system>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   <os>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   </os>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   <features>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   </features>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.rescue">
Nov 25 08:38:51 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:51 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk">
Nov 25 08:38:51 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:51 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <target dev="vdb" bus="virtio"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.config.rescue">
Nov 25 08:38:51 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:51 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:c2:45:0c"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <target dev="tap21a608d7-be"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21/console.log" append="off"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <video>
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     </video>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:38:51 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:38:51 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:38:51 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:38:51 compute-0 nova_compute[253538]: </domain>
Nov 25 08:38:51 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:38:51 compute-0 nova_compute[253538]: 2025-11-25 08:38:51.945 253542 INFO nova.virt.libvirt.driver [-] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Instance destroyed successfully.
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.004 253542 DEBUG nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.004 253542 DEBUG nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.004 253542 DEBUG nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.005 253542 DEBUG nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] No VIF found with MAC fa:16:3e:c2:45:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.005 253542 INFO nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Using config drive
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.035 253542 DEBUG nova.storage.rbd_utils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:52 compute-0 ovn_controller[152859]: 2025-11-25T08:38:52Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:c9:50 10.100.0.11
Nov 25 08:38:52 compute-0 ovn_controller[152859]: 2025-11-25T08:38:52Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:c9:50 10.100.0.11
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.052 253542 DEBUG nova.objects.instance [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 3bc210c6-9f67-440b-a11c-0b4e13e74a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.088 253542 DEBUG nova.objects.instance [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lazy-loading 'keypairs' on Instance uuid 3bc210c6-9f67-440b-a11c-0b4e13e74a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:38:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1385484222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.188 253542 DEBUG oslo_concurrency.processutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.197 253542 DEBUG nova.compute.provider_tree [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.213 253542 DEBUG nova.scheduler.client.report [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.254 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.255 253542 DEBUG nova.compute.manager [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:38:52 compute-0 ceph-mon[75015]: pgmap v1675: 321 pgs: 321 active+clean; 538 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.4 MiB/s wr, 255 op/s
Nov 25 08:38:52 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1383451001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:52 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1385484222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.454 253542 DEBUG nova.compute.manager [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.455 253542 DEBUG nova.network.neutron [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.487 253542 INFO nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.492 253542 DEBUG nova.compute.manager [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.493 253542 DEBUG oslo_concurrency.lockutils [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.493 253542 DEBUG oslo_concurrency.lockutils [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.493 253542 DEBUG oslo_concurrency.lockutils [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.493 253542 DEBUG nova.compute.manager [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] No waiting events found dispatching network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.494 253542 WARNING nova.compute.manager [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received unexpected event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 for instance with vm_state active and task_state rescuing.
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.494 253542 DEBUG nova.compute.manager [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Received event network-vif-deleted-8f28ea33-80c4-41cb-b191-a1b619b14515 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.494 253542 DEBUG nova.compute.manager [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.494 253542 DEBUG oslo_concurrency.lockutils [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.494 253542 DEBUG oslo_concurrency.lockutils [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.494 253542 DEBUG oslo_concurrency.lockutils [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.495 253542 DEBUG nova.compute.manager [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] No waiting events found dispatching network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.495 253542 WARNING nova.compute.manager [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received unexpected event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 for instance with vm_state active and task_state rescuing.
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.495 253542 DEBUG nova.compute.manager [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.495 253542 DEBUG oslo_concurrency.lockutils [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.495 253542 DEBUG oslo_concurrency.lockutils [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.495 253542 DEBUG oslo_concurrency.lockutils [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.495 253542 DEBUG nova.compute.manager [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] No waiting events found dispatching network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.496 253542 WARNING nova.compute.manager [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received unexpected event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 for instance with vm_state active and task_state rescuing.
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.496 253542 DEBUG nova.compute.manager [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received event network-vif-unplugged-21a608d7-be38-4d88-902b-2124e5227ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.496 253542 DEBUG oslo_concurrency.lockutils [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.496 253542 DEBUG oslo_concurrency.lockutils [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.496 253542 DEBUG oslo_concurrency.lockutils [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.496 253542 DEBUG nova.compute.manager [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] No waiting events found dispatching network-vif-unplugged-21a608d7-be38-4d88-902b-2124e5227ae5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.497 253542 WARNING nova.compute.manager [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received unexpected event network-vif-unplugged-21a608d7-be38-4d88-902b-2124e5227ae5 for instance with vm_state active and task_state rescuing.
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.497 253542 DEBUG nova.compute.manager [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.497 253542 DEBUG oslo_concurrency.lockutils [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.497 253542 DEBUG oslo_concurrency.lockutils [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.497 253542 DEBUG oslo_concurrency.lockutils [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.497 253542 DEBUG nova.compute.manager [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] No waiting events found dispatching network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.498 253542 WARNING nova.compute.manager [req-1a5223a8-8d81-4b53-aa1e-760f3d3e6a60 req-e9fb467f-691d-4f03-a55f-d0d7f6790e78 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received unexpected event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 for instance with vm_state active and task_state rescuing.
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.509 253542 DEBUG nova.compute.manager [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.629 253542 INFO nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Creating config drive at /var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21/disk.config.rescue
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.634 253542 DEBUG oslo_concurrency.processutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdjiuzb3v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.677 253542 DEBUG nova.policy [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fdcb005cc49a4dfa82152f2c0817cc94', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b730f086c4b94185afab5e10fa2e8181', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.754 253542 DEBUG nova.compute.manager [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.755 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.755 253542 INFO nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Creating image(s)
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.776 253542 DEBUG nova.storage.rbd_utils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image dd202e7c-474a-42f6-a6a8-5276974c793f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.800 253542 DEBUG nova.storage.rbd_utils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image dd202e7c-474a-42f6-a6a8-5276974c793f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.823 253542 DEBUG nova.storage.rbd_utils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image dd202e7c-474a-42f6-a6a8-5276974c793f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.826 253542 DEBUG oslo_concurrency.processutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.869 253542 DEBUG oslo_concurrency.processutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdjiuzb3v" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.895 253542 DEBUG nova.storage.rbd_utils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] rbd image 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.900 253542 DEBUG oslo_concurrency.processutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21/disk.config.rescue 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.944 253542 DEBUG oslo_concurrency.processutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.945 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.945 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.945 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.967 253542 DEBUG nova.storage.rbd_utils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image dd202e7c-474a-42f6-a6a8-5276974c793f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:52 compute-0 nova_compute[253538]: 2025-11-25 08:38:52.971 253542 DEBUG oslo_concurrency.processutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc dd202e7c-474a-42f6-a6a8-5276974c793f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.068 253542 DEBUG oslo_concurrency.processutils [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21/disk.config.rescue 3bc210c6-9f67-440b-a11c-0b4e13e74a21_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.069 253542 INFO nova.virt.libvirt.driver [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Deleting local config drive /var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21/disk.config.rescue because it was imported into RBD.
Nov 25 08:38:53 compute-0 kernel: tap21a608d7-be: entered promiscuous mode
Nov 25 08:38:53 compute-0 NetworkManager[48915]: <info>  [1764059933.1176] manager: (tap21a608d7-be): new Tun device (/org/freedesktop/NetworkManager/Devices/357)
Nov 25 08:38:53 compute-0 ovn_controller[152859]: 2025-11-25T08:38:53Z|00837|binding|INFO|Claiming lport 21a608d7-be38-4d88-902b-2124e5227ae5 for this chassis.
Nov 25 08:38:53 compute-0 ovn_controller[152859]: 2025-11-25T08:38:53Z|00838|binding|INFO|21a608d7-be38-4d88-902b-2124e5227ae5: Claiming fa:16:3e:c2:45:0c 10.100.0.13
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.121 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:53 compute-0 ovn_controller[152859]: 2025-11-25T08:38:53Z|00839|binding|INFO|Setting lport 21a608d7-be38-4d88-902b-2124e5227ae5 ovn-installed in OVS
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.141 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.149 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:53 compute-0 systemd-udevd[337932]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:38:53 compute-0 systemd-machined[215790]: New machine qemu-107-instance-00000055.
Nov 25 08:38:53 compute-0 NetworkManager[48915]: <info>  [1764059933.1688] device (tap21a608d7-be): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:38:53 compute-0 NetworkManager[48915]: <info>  [1764059933.1697] device (tap21a608d7-be): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:38:53 compute-0 ovn_controller[152859]: 2025-11-25T08:38:53Z|00840|binding|INFO|Setting lport 21a608d7-be38-4d88-902b-2124e5227ae5 up in Southbound
Nov 25 08:38:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:53.172 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:45:0c 10.100.0.13'], port_security=['fa:16:3e:c2:45:0c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3bc210c6-9f67-440b-a11c-0b4e13e74a21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '42767d876b844fbd9b53953fb5f664b5', 'neutron:revision_number': '7', 'neutron:security_group_ids': '87f4d1a7-e392-4893-98e7-5d05afa4be77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=608490f2-11ad-4e35-9100-843accb3d76b, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=21a608d7-be38-4d88-902b-2124e5227ae5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:38:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:53.173 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 21a608d7-be38-4d88-902b-2124e5227ae5 in datapath 1a4161e2-3dc8-48ab-8204-aaba5cb02136 bound to our chassis
Nov 25 08:38:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:53.174 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1a4161e2-3dc8-48ab-8204-aaba5cb02136
Nov 25 08:38:53 compute-0 systemd[1]: Started Virtual Machine qemu-107-instance-00000055.
Nov 25 08:38:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:53.189 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8c02e7d8-2c23-4b81-b747-d286e18275ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:53.212 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[864da66a-37fc-4f5e-a5e6-14041356e5a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:53.215 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[548c577f-684f-44b9-bb5a-f1cd27249c08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:53.239 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[865c7f00-bb01-4452-9439-d128bacb2faa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:53.252 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d44e7df0-befb-438c-aee8-fa08d52e7937]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1a4161e2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:2f:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525620, 'reachable_time': 42483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337946, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:38:53
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', '.mgr', 'cephfs.cephfs.data', 'volumes', 'backups']
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.273 253542 DEBUG oslo_concurrency.processutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc dd202e7c-474a-42f6-a6a8-5276974c793f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:53.276 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ad6d8281-1ab3-48ed-b659-04c4189e1316]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1a4161e2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525631, 'tstamp': 525631}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337947, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1a4161e2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525634, 'tstamp': 525634}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337947, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:53.277 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a4161e2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:53.280 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a4161e2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:53.280 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:53.281 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1a4161e2-30, col_values=(('external_ids', {'iface-id': 'ee088c9f-327b-47d5-b296-dc11de2d7323'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:53.281 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.300 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.329 253542 DEBUG nova.storage.rbd_utils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] resizing rbd image dd202e7c-474a-42f6-a6a8-5276974c793f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 510 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.9 MiB/s wr, 248 op/s
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.379 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.414 253542 DEBUG nova.objects.instance [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'migration_context' on Instance uuid dd202e7c-474a-42f6-a6a8-5276974c793f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.432 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.432 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Ensure instance console log exists: /var/lib/nova/instances/dd202e7c-474a-42f6-a6a8-5276974c793f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.433 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.433 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.433 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.790 253542 DEBUG nova.compute.manager [None req-1e9a0e6d-f148-4568-afe3-240ee5460dc1 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.791 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 3bc210c6-9f67-440b-a11c-0b4e13e74a21 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.792 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059933.7862182, 3bc210c6-9f67-440b-a11c-0b4e13e74a21 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.793 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] VM Resumed (Lifecycle Event)
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.821 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.825 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.833 253542 DEBUG nova.network.neutron [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Successfully created port: f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.851 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.852 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059933.786357, 3bc210c6-9f67-440b-a11c-0b4e13e74a21 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.853 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] VM Started (Lifecycle Event)
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:38:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.876 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:53 compute-0 nova_compute[253538]: 2025-11-25 08:38:53.880 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:54 compute-0 ceph-mon[75015]: pgmap v1676: 321 pgs: 321 active+clean; 510 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.9 MiB/s wr, 248 op/s
Nov 25 08:38:54 compute-0 nova_compute[253538]: 2025-11-25 08:38:54.817 253542 DEBUG nova.compute.manager [req-ef9fe973-3faf-4feb-8898-db5fab4b7503 req-cbbe5b84-17f1-4e0b-b86b-06711b02de70 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:54 compute-0 nova_compute[253538]: 2025-11-25 08:38:54.817 253542 DEBUG oslo_concurrency.lockutils [req-ef9fe973-3faf-4feb-8898-db5fab4b7503 req-cbbe5b84-17f1-4e0b-b86b-06711b02de70 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:54 compute-0 nova_compute[253538]: 2025-11-25 08:38:54.818 253542 DEBUG oslo_concurrency.lockutils [req-ef9fe973-3faf-4feb-8898-db5fab4b7503 req-cbbe5b84-17f1-4e0b-b86b-06711b02de70 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:54 compute-0 nova_compute[253538]: 2025-11-25 08:38:54.818 253542 DEBUG oslo_concurrency.lockutils [req-ef9fe973-3faf-4feb-8898-db5fab4b7503 req-cbbe5b84-17f1-4e0b-b86b-06711b02de70 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:54 compute-0 nova_compute[253538]: 2025-11-25 08:38:54.818 253542 DEBUG nova.compute.manager [req-ef9fe973-3faf-4feb-8898-db5fab4b7503 req-cbbe5b84-17f1-4e0b-b86b-06711b02de70 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] No waiting events found dispatching network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:54 compute-0 nova_compute[253538]: 2025-11-25 08:38:54.819 253542 WARNING nova.compute.manager [req-ef9fe973-3faf-4feb-8898-db5fab4b7503 req-cbbe5b84-17f1-4e0b-b86b-06711b02de70 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received unexpected event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 for instance with vm_state rescued and task_state None.
Nov 25 08:38:54 compute-0 nova_compute[253538]: 2025-11-25 08:38:54.819 253542 DEBUG nova.compute.manager [req-ef9fe973-3faf-4feb-8898-db5fab4b7503 req-cbbe5b84-17f1-4e0b-b86b-06711b02de70 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:54 compute-0 nova_compute[253538]: 2025-11-25 08:38:54.819 253542 DEBUG oslo_concurrency.lockutils [req-ef9fe973-3faf-4feb-8898-db5fab4b7503 req-cbbe5b84-17f1-4e0b-b86b-06711b02de70 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:54 compute-0 nova_compute[253538]: 2025-11-25 08:38:54.819 253542 DEBUG oslo_concurrency.lockutils [req-ef9fe973-3faf-4feb-8898-db5fab4b7503 req-cbbe5b84-17f1-4e0b-b86b-06711b02de70 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:54 compute-0 nova_compute[253538]: 2025-11-25 08:38:54.820 253542 DEBUG oslo_concurrency.lockutils [req-ef9fe973-3faf-4feb-8898-db5fab4b7503 req-cbbe5b84-17f1-4e0b-b86b-06711b02de70 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:54 compute-0 nova_compute[253538]: 2025-11-25 08:38:54.820 253542 DEBUG nova.compute.manager [req-ef9fe973-3faf-4feb-8898-db5fab4b7503 req-cbbe5b84-17f1-4e0b-b86b-06711b02de70 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] No waiting events found dispatching network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:38:54 compute-0 nova_compute[253538]: 2025-11-25 08:38:54.820 253542 WARNING nova.compute.manager [req-ef9fe973-3faf-4feb-8898-db5fab4b7503 req-cbbe5b84-17f1-4e0b-b86b-06711b02de70 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received unexpected event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 for instance with vm_state rescued and task_state None.
Nov 25 08:38:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1677: 321 pgs: 321 active+clean; 547 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 760 KiB/s rd, 6.6 MiB/s wr, 235 op/s
Nov 25 08:38:55 compute-0 nova_compute[253538]: 2025-11-25 08:38:55.572 253542 DEBUG nova.network.neutron [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Successfully updated port: f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:38:55 compute-0 nova_compute[253538]: 2025-11-25 08:38:55.588 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "refresh_cache-dd202e7c-474a-42f6-a6a8-5276974c793f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:55 compute-0 nova_compute[253538]: 2025-11-25 08:38:55.588 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquired lock "refresh_cache-dd202e7c-474a-42f6-a6a8-5276974c793f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:55 compute-0 nova_compute[253538]: 2025-11-25 08:38:55.589 253542 DEBUG nova.network.neutron [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:38:55 compute-0 nova_compute[253538]: 2025-11-25 08:38:55.756 253542 DEBUG nova.network.neutron [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:38:55 compute-0 nova_compute[253538]: 2025-11-25 08:38:55.772 253542 INFO nova.compute.manager [None req-9217fba9-6b3b-49b5-893c-6d8b5b15d30b 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Pausing
Nov 25 08:38:55 compute-0 nova_compute[253538]: 2025-11-25 08:38:55.773 253542 DEBUG nova.objects.instance [None req-9217fba9-6b3b-49b5-893c-6d8b5b15d30b 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lazy-loading 'flavor' on Instance uuid 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:55 compute-0 nova_compute[253538]: 2025-11-25 08:38:55.782 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059920.7815025, 8120a4a8-c326-4f1b-94d5-2c1ffe663959 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:55 compute-0 nova_compute[253538]: 2025-11-25 08:38:55.782 253542 INFO nova.compute.manager [-] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] VM Stopped (Lifecycle Event)
Nov 25 08:38:55 compute-0 nova_compute[253538]: 2025-11-25 08:38:55.798 253542 DEBUG nova.compute.manager [None req-979576c4-8286-494f-8450-fb31de32524c - - - - - -] [instance: 8120a4a8-c326-4f1b-94d5-2c1ffe663959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:55 compute-0 nova_compute[253538]: 2025-11-25 08:38:55.804 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059935.8044112, 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:55 compute-0 nova_compute[253538]: 2025-11-25 08:38:55.805 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] VM Paused (Lifecycle Event)
Nov 25 08:38:55 compute-0 nova_compute[253538]: 2025-11-25 08:38:55.807 253542 DEBUG nova.compute.manager [None req-9217fba9-6b3b-49b5-893c-6d8b5b15d30b 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:55 compute-0 nova_compute[253538]: 2025-11-25 08:38:55.827 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:55 compute-0 nova_compute[253538]: 2025-11-25 08:38:55.831 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:55 compute-0 nova_compute[253538]: 2025-11-25 08:38:55.857 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 25 08:38:56 compute-0 ceph-mon[75015]: pgmap v1677: 321 pgs: 321 active+clean; 547 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 760 KiB/s rd, 6.6 MiB/s wr, 235 op/s
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:38:56.462863) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059936462902, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1695, "num_deletes": 251, "total_data_size": 2472955, "memory_usage": 2510976, "flush_reason": "Manual Compaction"}
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059936478045, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 2435596, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33550, "largest_seqno": 35244, "table_properties": {"data_size": 2427927, "index_size": 4483, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17138, "raw_average_key_size": 20, "raw_value_size": 2412145, "raw_average_value_size": 2875, "num_data_blocks": 199, "num_entries": 839, "num_filter_entries": 839, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764059777, "oldest_key_time": 1764059777, "file_creation_time": 1764059936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 15222 microseconds, and 7787 cpu microseconds.
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:38:56.478087) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 2435596 bytes OK
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:38:56.478105) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:38:56.480342) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:38:56.480359) EVENT_LOG_v1 {"time_micros": 1764059936480353, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:38:56.480375) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2465519, prev total WAL file size 2465519, number of live WAL files 2.
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:38:56.481192) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2378KB)], [74(8925KB)]
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059936481256, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11575747, "oldest_snapshot_seqno": -1}
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6029 keys, 9904135 bytes, temperature: kUnknown
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059936533933, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 9904135, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9862149, "index_size": 25812, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 152703, "raw_average_key_size": 25, "raw_value_size": 9752286, "raw_average_value_size": 1617, "num_data_blocks": 1050, "num_entries": 6029, "num_filter_entries": 6029, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764059936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:38:56.534143) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 9904135 bytes
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:38:56.535750) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 219.5 rd, 187.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 8.7 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(8.8) write-amplify(4.1) OK, records in: 6543, records dropped: 514 output_compression: NoCompression
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:38:56.535768) EVENT_LOG_v1 {"time_micros": 1764059936535759, "job": 42, "event": "compaction_finished", "compaction_time_micros": 52737, "compaction_time_cpu_micros": 19698, "output_level": 6, "num_output_files": 1, "total_output_size": 9904135, "num_input_records": 6543, "num_output_records": 6029, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059936536292, "job": 42, "event": "table_file_deletion", "file_number": 76}
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059936538418, "job": 42, "event": "table_file_deletion", "file_number": 74}
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:38:56.481107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:38:56.538542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:38:56.538551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:38:56.538555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:38:56.538559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:38:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:38:56.538563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.732 253542 DEBUG nova.network.neutron [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Updating instance_info_cache with network_info: [{"id": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "address": "fa:16:3e:18:f5:a2", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf37e1b29-e9", "ovs_interfaceid": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.752 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Releasing lock "refresh_cache-dd202e7c-474a-42f6-a6a8-5276974c793f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.753 253542 DEBUG nova.compute.manager [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Instance network_info: |[{"id": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "address": "fa:16:3e:18:f5:a2", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf37e1b29-e9", "ovs_interfaceid": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.758 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Start _get_guest_xml network_info=[{"id": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "address": "fa:16:3e:18:f5:a2", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf37e1b29-e9", "ovs_interfaceid": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.763 253542 WARNING nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.769 253542 DEBUG nova.virt.libvirt.host [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.770 253542 DEBUG nova.virt.libvirt.host [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.772 253542 DEBUG nova.virt.libvirt.host [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.773 253542 DEBUG nova.virt.libvirt.host [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.773 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.773 253542 DEBUG nova.virt.hardware [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.774 253542 DEBUG nova.virt.hardware [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.774 253542 DEBUG nova.virt.hardware [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.774 253542 DEBUG nova.virt.hardware [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.774 253542 DEBUG nova.virt.hardware [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.774 253542 DEBUG nova.virt.hardware [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.775 253542 DEBUG nova.virt.hardware [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.775 253542 DEBUG nova.virt.hardware [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.775 253542 DEBUG nova.virt.hardware [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.775 253542 DEBUG nova.virt.hardware [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.776 253542 DEBUG nova.virt.hardware [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.778 253542 DEBUG oslo_concurrency.processutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.887 253542 DEBUG nova.compute.manager [req-764967af-44e0-4580-84bc-953448123b31 req-046f48a2-1c69-42f6-b82e-63ead815b9cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Received event network-changed-f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.887 253542 DEBUG nova.compute.manager [req-764967af-44e0-4580-84bc-953448123b31 req-046f48a2-1c69-42f6-b82e-63ead815b9cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Refreshing instance network info cache due to event network-changed-f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.888 253542 DEBUG oslo_concurrency.lockutils [req-764967af-44e0-4580-84bc-953448123b31 req-046f48a2-1c69-42f6-b82e-63ead815b9cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-dd202e7c-474a-42f6-a6a8-5276974c793f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.888 253542 DEBUG oslo_concurrency.lockutils [req-764967af-44e0-4580-84bc-953448123b31 req-046f48a2-1c69-42f6-b82e-63ead815b9cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-dd202e7c-474a-42f6-a6a8-5276974c793f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:38:56 compute-0 nova_compute[253538]: 2025-11-25 08:38:56.888 253542 DEBUG nova.network.neutron [req-764967af-44e0-4580-84bc-953448123b31 req-046f48a2-1c69-42f6-b82e-63ead815b9cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Refreshing network info cache for port f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:38:57 compute-0 ovn_controller[152859]: 2025-11-25T08:38:57Z|00841|binding|INFO|Releasing lport a70ff8dd-5248-427b-8c9b-80eee3a671f3 from this chassis (sb_readonly=0)
Nov 25 08:38:57 compute-0 ovn_controller[152859]: 2025-11-25T08:38:57Z|00842|binding|INFO|Releasing lport ee088c9f-327b-47d5-b296-dc11de2d7323 from this chassis (sb_readonly=0)
Nov 25 08:38:57 compute-0 ovn_controller[152859]: 2025-11-25T08:38:57Z|00843|binding|INFO|Releasing lport 185991f0-acab-400e-baff-76794035d44a from this chassis (sb_readonly=0)
Nov 25 08:38:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:57 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/932350838' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.232 253542 DEBUG oslo_concurrency.processutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.268 253542 DEBUG nova.storage.rbd_utils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image dd202e7c-474a-42f6-a6a8-5276974c793f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.273 253542 DEBUG oslo_concurrency.processutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.315 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 569 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.7 MiB/s wr, 218 op/s
Nov 25 08:38:57 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/932350838' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:38:57 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1605963520' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.738 253542 DEBUG oslo_concurrency.processutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.739 253542 DEBUG nova.virt.libvirt.vif [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:38:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-818825662',display_name='tempest-ServersTestJSON-server-818825662',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-818825662',id=88,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-dhja0dfi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=
TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:38:52Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=dd202e7c-474a-42f6-a6a8-5276974c793f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "address": "fa:16:3e:18:f5:a2", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf37e1b29-e9", "ovs_interfaceid": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.740 253542 DEBUG nova.network.os_vif_util [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "address": "fa:16:3e:18:f5:a2", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf37e1b29-e9", "ovs_interfaceid": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.741 253542 DEBUG nova.network.os_vif_util [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:f5:a2,bridge_name='br-int',has_traffic_filtering=True,id=f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf37e1b29-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.742 253542 DEBUG nova.objects.instance [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd202e7c-474a-42f6-a6a8-5276974c793f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.757 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:38:57 compute-0 nova_compute[253538]:   <uuid>dd202e7c-474a-42f6-a6a8-5276974c793f</uuid>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   <name>instance-00000058</name>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersTestJSON-server-818825662</nova:name>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:38:56</nova:creationTime>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:38:57 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:38:57 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:38:57 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:38:57 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:38:57 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:38:57 compute-0 nova_compute[253538]:         <nova:user uuid="fdcb005cc49a4dfa82152f2c0817cc94">tempest-ServersTestJSON-1426188226-project-member</nova:user>
Nov 25 08:38:57 compute-0 nova_compute[253538]:         <nova:project uuid="b730f086c4b94185afab5e10fa2e8181">tempest-ServersTestJSON-1426188226</nova:project>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:38:57 compute-0 nova_compute[253538]:         <nova:port uuid="f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6">
Nov 25 08:38:57 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <system>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <entry name="serial">dd202e7c-474a-42f6-a6a8-5276974c793f</entry>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <entry name="uuid">dd202e7c-474a-42f6-a6a8-5276974c793f</entry>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     </system>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   <os>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   </os>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   <features>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   </features>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/dd202e7c-474a-42f6-a6a8-5276974c793f_disk">
Nov 25 08:38:57 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:57 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/dd202e7c-474a-42f6-a6a8-5276974c793f_disk.config">
Nov 25 08:38:57 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       </source>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:38:57 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:18:f5:a2"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <target dev="tapf37e1b29-e9"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/dd202e7c-474a-42f6-a6a8-5276974c793f/console.log" append="off"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <video>
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     </video>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:38:57 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:38:57 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:38:57 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:38:57 compute-0 nova_compute[253538]: </domain>
Nov 25 08:38:57 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.760 253542 DEBUG nova.compute.manager [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Preparing to wait for external event network-vif-plugged-f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.760 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.761 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.761 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.762 253542 DEBUG nova.virt.libvirt.vif [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:38:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-818825662',display_name='tempest-ServersTestJSON-server-818825662',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-818825662',id=88,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-dhja0dfi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:38:52Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=dd202e7c-474a-42f6-a6a8-5276974c793f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "address": "fa:16:3e:18:f5:a2", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf37e1b29-e9", "ovs_interfaceid": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.763 253542 DEBUG nova.network.os_vif_util [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "address": "fa:16:3e:18:f5:a2", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf37e1b29-e9", "ovs_interfaceid": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.764 253542 DEBUG nova.network.os_vif_util [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:f5:a2,bridge_name='br-int',has_traffic_filtering=True,id=f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf37e1b29-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.764 253542 DEBUG os_vif [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:f5:a2,bridge_name='br-int',has_traffic_filtering=True,id=f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf37e1b29-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.765 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.766 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.767 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.772 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.772 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf37e1b29-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.773 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf37e1b29-e9, col_values=(('external_ids', {'iface-id': 'f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:f5:a2', 'vm-uuid': 'dd202e7c-474a-42f6-a6a8-5276974c793f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.775 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:57 compute-0 NetworkManager[48915]: <info>  [1764059937.7763] manager: (tapf37e1b29-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/358)
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.778 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.782 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.783 253542 INFO os_vif [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:f5:a2,bridge_name='br-int',has_traffic_filtering=True,id=f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf37e1b29-e9')
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.828 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.828 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.829 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No VIF found with MAC fa:16:3e:18:f5:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.829 253542 INFO nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Using config drive
Nov 25 08:38:57 compute-0 nova_compute[253538]: 2025-11-25 08:38:57.848 253542 DEBUG nova.storage.rbd_utils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image dd202e7c-474a-42f6-a6a8-5276974c793f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.043 253542 INFO nova.compute.manager [None req-ae84143d-36f1-455a-bb9c-3135af5bef45 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Unpausing
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.044 253542 DEBUG nova.objects.instance [None req-ae84143d-36f1-455a-bb9c-3135af5bef45 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lazy-loading 'flavor' on Instance uuid 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.074 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059938.0744414, 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.074 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] VM Resumed (Lifecycle Event)
Nov 25 08:38:58 compute-0 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.079 253542 DEBUG nova.virt.libvirt.guest [None req-ae84143d-36f1-455a-bb9c-3135af5bef45 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.079 253542 DEBUG nova.compute.manager [None req-ae84143d-36f1-455a-bb9c-3135af5bef45 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.103 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.108 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.145 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] During sync_power_state the instance has a pending task (unpausing). Skip.
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.473 253542 INFO nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Creating config drive at /var/lib/nova/instances/dd202e7c-474a-42f6-a6a8-5276974c793f/disk.config
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.478 253542 DEBUG oslo_concurrency.processutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd202e7c-474a-42f6-a6a8-5276974c793f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9h356y8g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:58 compute-0 ceph-mon[75015]: pgmap v1678: 321 pgs: 321 active+clean; 569 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.7 MiB/s wr, 218 op/s
Nov 25 08:38:58 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1605963520' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.583 253542 DEBUG nova.network.neutron [req-764967af-44e0-4580-84bc-953448123b31 req-046f48a2-1c69-42f6-b82e-63ead815b9cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Updated VIF entry in instance network info cache for port f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.584 253542 DEBUG nova.network.neutron [req-764967af-44e0-4580-84bc-953448123b31 req-046f48a2-1c69-42f6-b82e-63ead815b9cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Updating instance_info_cache with network_info: [{"id": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "address": "fa:16:3e:18:f5:a2", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf37e1b29-e9", "ovs_interfaceid": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.604 253542 DEBUG oslo_concurrency.lockutils [req-764967af-44e0-4580-84bc-953448123b31 req-046f48a2-1c69-42f6-b82e-63ead815b9cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-dd202e7c-474a-42f6-a6a8-5276974c793f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.612 253542 DEBUG oslo_concurrency.processutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd202e7c-474a-42f6-a6a8-5276974c793f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9h356y8g" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.647 253542 DEBUG nova.storage.rbd_utils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image dd202e7c-474a-42f6-a6a8-5276974c793f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.652 253542 DEBUG oslo_concurrency.processutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd202e7c-474a-42f6-a6a8-5276974c793f/disk.config dd202e7c-474a-42f6-a6a8-5276974c793f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.820 253542 DEBUG oslo_concurrency.processutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd202e7c-474a-42f6-a6a8-5276974c793f/disk.config dd202e7c-474a-42f6-a6a8-5276974c793f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.821 253542 INFO nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Deleting local config drive /var/lib/nova/instances/dd202e7c-474a-42f6-a6a8-5276974c793f/disk.config because it was imported into RBD.
Nov 25 08:38:58 compute-0 NetworkManager[48915]: <info>  [1764059938.8634] manager: (tapf37e1b29-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/359)
Nov 25 08:38:58 compute-0 kernel: tapf37e1b29-e9: entered promiscuous mode
Nov 25 08:38:58 compute-0 ovn_controller[152859]: 2025-11-25T08:38:58Z|00844|binding|INFO|Claiming lport f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 for this chassis.
Nov 25 08:38:58 compute-0 ovn_controller[152859]: 2025-11-25T08:38:58Z|00845|binding|INFO|f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6: Claiming fa:16:3e:18:f5:a2 10.100.0.10
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:58.876 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:f5:a2 10.100.0.10'], port_security=['fa:16:3e:18:f5:a2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'dd202e7c-474a-42f6-a6a8-5276974c793f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92e26514-5b15-410b-8885-6773bc03c4ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b730f086c4b94185afab5e10fa2e8181', 'neutron:revision_number': '2', 'neutron:security_group_ids': '420b1f20-9bb9-442a-978c-444ed9d0cb04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ae0a3b6-6c0f-4ef3-ae20-c7b5b6ede8ac, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:38:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:58.878 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 in datapath 92e26514-5b15-410b-8885-6773bc03c4ce bound to our chassis
Nov 25 08:38:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:58.881 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92e26514-5b15-410b-8885-6773bc03c4ce
Nov 25 08:38:58 compute-0 ovn_controller[152859]: 2025-11-25T08:38:58Z|00846|binding|INFO|Setting lport f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 ovn-installed in OVS
Nov 25 08:38:58 compute-0 ovn_controller[152859]: 2025-11-25T08:38:58Z|00847|binding|INFO|Setting lport f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 up in Southbound
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.893 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:58 compute-0 nova_compute[253538]: 2025-11-25 08:38:58.896 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:58.898 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[81116991-68c1-4fd1-a86c-fc619da90f9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:58 compute-0 systemd-machined[215790]: New machine qemu-108-instance-00000058.
Nov 25 08:38:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:58.929 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ad79d36a-0cef-4989-ab2f-1d47efca0c0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:58 compute-0 systemd[1]: Started Virtual Machine qemu-108-instance-00000058.
Nov 25 08:38:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:58.933 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e94de548-1412-4d7e-8bb6-b7a9bdb41471]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:58 compute-0 systemd-udevd[338221]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:38:58 compute-0 NetworkManager[48915]: <info>  [1764059938.9603] device (tapf37e1b29-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:38:58 compute-0 NetworkManager[48915]: <info>  [1764059938.9611] device (tapf37e1b29-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:38:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:58.964 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2e87f9-f62f-42e7-9e94-6cd47e465a3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:58.990 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3f412681-338f-4cdb-9bdb-3b8ff7d5be68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92e26514-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:84:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522366, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338226, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:59.016 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5f7744-9d00-4833-8d9a-826952ca9072]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522379, 'tstamp': 522379}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338231, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522383, 'tstamp': 522383}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338231, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:38:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:59.018 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92e26514-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.020 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.022 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:38:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:59.022 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92e26514-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:59.023 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:59.023 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92e26514-50, col_values=(('external_ids', {'iface-id': '185991f0-acab-400e-baff-76794035d44a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:38:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:38:59.024 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.082 253542 DEBUG nova.compute.manager [req-c436263c-a951-4d26-bb88-5cee8e95a18b req-9a067862-f027-41bd-b0f0-cdfda279892e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Received event network-vif-plugged-f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.084 253542 DEBUG oslo_concurrency.lockutils [req-c436263c-a951-4d26-bb88-5cee8e95a18b req-9a067862-f027-41bd-b0f0-cdfda279892e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.085 253542 DEBUG oslo_concurrency.lockutils [req-c436263c-a951-4d26-bb88-5cee8e95a18b req-9a067862-f027-41bd-b0f0-cdfda279892e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.085 253542 DEBUG oslo_concurrency.lockutils [req-c436263c-a951-4d26-bb88-5cee8e95a18b req-9a067862-f027-41bd-b0f0-cdfda279892e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.086 253542 DEBUG nova.compute.manager [req-c436263c-a951-4d26-bb88-5cee8e95a18b req-9a067862-f027-41bd-b0f0-cdfda279892e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Processing event network-vif-plugged-f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:38:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 577 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.0 MiB/s wr, 235 op/s
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.432 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059939.4317908, dd202e7c-474a-42f6-a6a8-5276974c793f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.432 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] VM Started (Lifecycle Event)
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.436 253542 DEBUG nova.compute.manager [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.441 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.445 253542 INFO nova.virt.libvirt.driver [-] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Instance spawned successfully.
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.445 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.449 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.452 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.462 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.462 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.463 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.464 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.464 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.465 253542 DEBUG nova.virt.libvirt.driver [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.481 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.482 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059939.4363868, dd202e7c-474a-42f6-a6a8-5276974c793f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.482 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] VM Paused (Lifecycle Event)
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.502 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.505 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059939.439935, dd202e7c-474a-42f6-a6a8-5276974c793f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.505 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] VM Resumed (Lifecycle Event)
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.527 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.529 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.540 253542 INFO nova.compute.manager [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Took 6.79 seconds to spawn the instance on the hypervisor.
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.540 253542 DEBUG nova.compute.manager [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.563 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.616 253542 INFO nova.compute.manager [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Took 8.50 seconds to build instance.
Nov 25 08:38:59 compute-0 nova_compute[253538]: 2025-11-25 08:38:59.637 253542 DEBUG oslo_concurrency.lockutils [None req-57761819-ef66-47f2-9d6c-38ce795e8e0a fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "dd202e7c-474a-42f6-a6a8-5276974c793f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:38:59 compute-0 ceph-mon[75015]: pgmap v1679: 321 pgs: 321 active+clean; 577 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.0 MiB/s wr, 235 op/s
Nov 25 08:39:00 compute-0 nova_compute[253538]: 2025-11-25 08:39:00.834 253542 DEBUG oslo_concurrency.lockutils [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:00 compute-0 nova_compute[253538]: 2025-11-25 08:39:00.835 253542 DEBUG oslo_concurrency.lockutils [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:00 compute-0 nova_compute[253538]: 2025-11-25 08:39:00.836 253542 DEBUG oslo_concurrency.lockutils [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:00 compute-0 nova_compute[253538]: 2025-11-25 08:39:00.837 253542 DEBUG oslo_concurrency.lockutils [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:00 compute-0 nova_compute[253538]: 2025-11-25 08:39:00.837 253542 DEBUG oslo_concurrency.lockutils [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:00 compute-0 nova_compute[253538]: 2025-11-25 08:39:00.839 253542 INFO nova.compute.manager [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Terminating instance
Nov 25 08:39:00 compute-0 nova_compute[253538]: 2025-11-25 08:39:00.841 253542 DEBUG nova.compute.manager [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:39:00 compute-0 kernel: tap21a608d7-be (unregistering): left promiscuous mode
Nov 25 08:39:00 compute-0 NetworkManager[48915]: <info>  [1764059940.9042] device (tap21a608d7-be): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:39:00 compute-0 ovn_controller[152859]: 2025-11-25T08:39:00Z|00848|binding|INFO|Releasing lport 21a608d7-be38-4d88-902b-2124e5227ae5 from this chassis (sb_readonly=0)
Nov 25 08:39:00 compute-0 ovn_controller[152859]: 2025-11-25T08:39:00Z|00849|binding|INFO|Setting lport 21a608d7-be38-4d88-902b-2124e5227ae5 down in Southbound
Nov 25 08:39:00 compute-0 ovn_controller[152859]: 2025-11-25T08:39:00Z|00850|binding|INFO|Removing iface tap21a608d7-be ovn-installed in OVS
Nov 25 08:39:00 compute-0 nova_compute[253538]: 2025-11-25 08:39:00.918 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:00.944 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:45:0c 10.100.0.13'], port_security=['fa:16:3e:c2:45:0c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3bc210c6-9f67-440b-a11c-0b4e13e74a21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '42767d876b844fbd9b53953fb5f664b5', 'neutron:revision_number': '8', 'neutron:security_group_ids': '87f4d1a7-e392-4893-98e7-5d05afa4be77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=608490f2-11ad-4e35-9100-843accb3d76b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=21a608d7-be38-4d88-902b-2124e5227ae5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:00 compute-0 nova_compute[253538]: 2025-11-25 08:39:00.945 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:00.947 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 21a608d7-be38-4d88-902b-2124e5227ae5 in datapath 1a4161e2-3dc8-48ab-8204-aaba5cb02136 unbound from our chassis
Nov 25 08:39:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:00.949 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1a4161e2-3dc8-48ab-8204-aaba5cb02136
Nov 25 08:39:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:00.968 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[30579166-1d26-4805-86d2-c723e541c269]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:00 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000055.scope: Deactivated successfully.
Nov 25 08:39:00 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000055.scope: Consumed 7.828s CPU time.
Nov 25 08:39:00 compute-0 systemd-machined[215790]: Machine qemu-107-instance-00000055 terminated.
Nov 25 08:39:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:01.015 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0cf2a9-78f4-46a6-a744-7635be139973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:01.020 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d9727e-7d55-4b79-8e4a-e10b52b519e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:01.060 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[71ccba7e-cda0-4ca2-ac73-2047ddf2b89a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.089 253542 INFO nova.virt.libvirt.driver [-] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Instance destroyed successfully.
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.090 253542 DEBUG nova.objects.instance [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lazy-loading 'resources' on Instance uuid 3bc210c6-9f67-440b-a11c-0b4e13e74a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:01.092 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[202253ac-c8b1-4b4e-9ea6-fb217df49a2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1a4161e2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:2f:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525620, 'reachable_time': 42483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338288, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.104 253542 DEBUG nova.virt.libvirt.vif [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:38:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1788909463',display_name='tempest-ServerRescueNegativeTestJSON-server-1788909463',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1788909463',id=85,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:38:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='42767d876b844fbd9b53953fb5f664b5',ramdisk_id='',reservation_id='r-53yldfyc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-561122366',owner_user_name='tempest-ServerRescueNegativeTestJSON-561122366-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:38:53Z,user_data=None,user_id='3f889d771d484ec8b9b1fff0fbde81fc',uuid=3bc210c6-9f67-440b-a11c-0b4e13e74a21,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "21a608d7-be38-4d88-902b-2124e5227ae5", "address": "fa:16:3e:c2:45:0c", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a608d7-be", "ovs_interfaceid": "21a608d7-be38-4d88-902b-2124e5227ae5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.104 253542 DEBUG nova.network.os_vif_util [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Converting VIF {"id": "21a608d7-be38-4d88-902b-2124e5227ae5", "address": "fa:16:3e:c2:45:0c", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a608d7-be", "ovs_interfaceid": "21a608d7-be38-4d88-902b-2124e5227ae5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.106 253542 DEBUG nova.network.os_vif_util [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:45:0c,bridge_name='br-int',has_traffic_filtering=True,id=21a608d7-be38-4d88-902b-2124e5227ae5,network=Network(1a4161e2-3dc8-48ab-8204-aaba5cb02136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21a608d7-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.106 253542 DEBUG os_vif [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:45:0c,bridge_name='br-int',has_traffic_filtering=True,id=21a608d7-be38-4d88-902b-2124e5227ae5,network=Network(1a4161e2-3dc8-48ab-8204-aaba5cb02136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21a608d7-be') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.108 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.109 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21a608d7-be, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.110 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.113 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.114 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.116 253542 INFO os_vif [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:45:0c,bridge_name='br-int',has_traffic_filtering=True,id=21a608d7-be38-4d88-902b-2124e5227ae5,network=Network(1a4161e2-3dc8-48ab-8204-aaba5cb02136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21a608d7-be')
Nov 25 08:39:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:01.117 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[23ceed47-3d8f-4fdd-94cd-ffd384dab43d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1a4161e2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525631, 'tstamp': 525631}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338296, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1a4161e2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525634, 'tstamp': 525634}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338296, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:01.118 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a4161e2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:01.120 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a4161e2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:01.121 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:01.121 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1a4161e2-30, col_values=(('external_ids', {'iface-id': 'ee088c9f-327b-47d5-b296-dc11de2d7323'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:01.121 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.131 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.135 253542 DEBUG nova.virt.libvirt.driver [None req-e0d77ed8-e9dc-4b68-b2c8-216a10a27394 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:39:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 577 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 245 op/s
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.499 253542 DEBUG nova.compute.manager [req-a456f730-c63c-424b-b5ec-2cc1ecd5a59c req-5e2e4d08-a8d9-4c92-9639-81a3c73442cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Received event network-vif-plugged-f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.500 253542 DEBUG oslo_concurrency.lockutils [req-a456f730-c63c-424b-b5ec-2cc1ecd5a59c req-5e2e4d08-a8d9-4c92-9639-81a3c73442cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.501 253542 DEBUG oslo_concurrency.lockutils [req-a456f730-c63c-424b-b5ec-2cc1ecd5a59c req-5e2e4d08-a8d9-4c92-9639-81a3c73442cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.501 253542 DEBUG oslo_concurrency.lockutils [req-a456f730-c63c-424b-b5ec-2cc1ecd5a59c req-5e2e4d08-a8d9-4c92-9639-81a3c73442cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.502 253542 DEBUG nova.compute.manager [req-a456f730-c63c-424b-b5ec-2cc1ecd5a59c req-5e2e4d08-a8d9-4c92-9639-81a3c73442cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] No waiting events found dispatching network-vif-plugged-f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.502 253542 WARNING nova.compute.manager [req-a456f730-c63c-424b-b5ec-2cc1ecd5a59c req-5e2e4d08-a8d9-4c92-9639-81a3c73442cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Received unexpected event network-vif-plugged-f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 for instance with vm_state active and task_state None.
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.589 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.906 253542 INFO nova.virt.libvirt.driver [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Deleting instance files /var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21_del
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.906 253542 INFO nova.virt.libvirt.driver [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Deletion of /var/lib/nova/instances/3bc210c6-9f67-440b-a11c-0b4e13e74a21_del complete
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.957 253542 INFO nova.compute.manager [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Took 1.12 seconds to destroy the instance on the hypervisor.
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.958 253542 DEBUG oslo.service.loopingcall [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.958 253542 DEBUG nova.compute.manager [-] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:39:01 compute-0 nova_compute[253538]: 2025-11-25 08:39:01.959 253542 DEBUG nova.network.neutron [-] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:39:02 compute-0 ceph-mon[75015]: pgmap v1680: 321 pgs: 321 active+clean; 577 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 245 op/s
Nov 25 08:39:02 compute-0 nova_compute[253538]: 2025-11-25 08:39:02.735 253542 DEBUG nova.network.neutron [-] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:02 compute-0 nova_compute[253538]: 2025-11-25 08:39:02.755 253542 INFO nova.compute.manager [-] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Took 0.80 seconds to deallocate network for instance.
Nov 25 08:39:02 compute-0 nova_compute[253538]: 2025-11-25 08:39:02.796 253542 DEBUG oslo_concurrency.lockutils [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:02 compute-0 nova_compute[253538]: 2025-11-25 08:39:02.797 253542 DEBUG oslo_concurrency.lockutils [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:02 compute-0 nova_compute[253538]: 2025-11-25 08:39:02.919 253542 DEBUG oslo_concurrency.processutils [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.347 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059928.3454394, 0e855a86-52f7-47bd-aee9-e88449169aa1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.347 253542 INFO nova.compute.manager [-] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] VM Stopped (Lifecycle Event)
Nov 25 08:39:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:39:03 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2676024623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1681: 321 pgs: 321 active+clean; 538 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.7 MiB/s wr, 288 op/s
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.379 253542 DEBUG nova.compute.manager [None req-0b18799f-70e3-43ca-8c07-1a72f94d30f1 - - - - - -] [instance: 0e855a86-52f7-47bd-aee9-e88449169aa1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.383 253542 DEBUG oslo_concurrency.processutils [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.391 253542 DEBUG nova.compute.provider_tree [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.406 253542 DEBUG nova.scheduler.client.report [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.424 253542 DEBUG oslo_concurrency.lockutils [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:03 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2676024623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.447 253542 INFO nova.scheduler.client.report [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Deleted allocations for instance 3bc210c6-9f67-440b-a11c-0b4e13e74a21
Nov 25 08:39:03 compute-0 kernel: tap203c150c-93 (unregistering): left promiscuous mode
Nov 25 08:39:03 compute-0 NetworkManager[48915]: <info>  [1764059943.4684] device (tap203c150c-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:39:03 compute-0 ovn_controller[152859]: 2025-11-25T08:39:03Z|00851|binding|INFO|Releasing lport 203c150c-9339-4520-8e52-01740854c5ef from this chassis (sb_readonly=0)
Nov 25 08:39:03 compute-0 ovn_controller[152859]: 2025-11-25T08:39:03Z|00852|binding|INFO|Setting lport 203c150c-9339-4520-8e52-01740854c5ef down in Southbound
Nov 25 08:39:03 compute-0 ovn_controller[152859]: 2025-11-25T08:39:03Z|00853|binding|INFO|Removing iface tap203c150c-93 ovn-installed in OVS
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.488 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:03.490 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:c9:50 10.100.0.11'], port_security=['fa:16:3e:30:c9:50 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8a68398d-9640-49e2-a049-3da4f7b371c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b676104-a53a-419a-a348-631c409e45c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6851917992b149818e8b44146c66bfc3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee3f370c-3523-4fc9-bede-12723b8659c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=062facaf-eaf1-4c53-a009-db8e8d32ad6e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=203c150c-9339-4520-8e52-01740854c5ef) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:03.491 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 203c150c-9339-4520-8e52-01740854c5ef in datapath 2b676104-a53a-419a-a348-631c409e45c0 unbound from our chassis
Nov 25 08:39:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:03.493 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b676104-a53a-419a-a348-631c409e45c0
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.496 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:03.509 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0c749899-1def-4bb0-a2f1-0a74d6a265a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.521 253542 DEBUG oslo_concurrency.lockutils [None req-f252994a-26e8-4b39-83bd-c5fa5361dd82 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:03.543 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0d12a88f-29f5-4beb-921a-4436b36fdbf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:03.546 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f452bd9a-cc14-4ebb-9ed1-b5d1124f9ccb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:03 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000057.scope: Deactivated successfully.
Nov 25 08:39:03 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000057.scope: Consumed 16.551s CPU time.
Nov 25 08:39:03 compute-0 systemd-machined[215790]: Machine qemu-106-instance-00000057 terminated.
Nov 25 08:39:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:03.577 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[753a5ea7-ed8d-4e33-be42-9f991216b035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:03.594 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c260bc5a-901f-4a7c-9d44-73bd29ac5f21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b676104-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:55:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522998, 'reachable_time': 29715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338349, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:03.611 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8ed49d48-db88-4ee4-9328-bab2c735837f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2b676104-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523012, 'tstamp': 523012}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338350, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2b676104-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523015, 'tstamp': 523015}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338350, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:03.613 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b676104-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.614 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.618 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:03.618 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b676104-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:03.619 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:03.619 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b676104-a0, col_values=(('external_ids', {'iface-id': 'a70ff8dd-5248-427b-8c9b-80eee3a671f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:03.620 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.730 253542 DEBUG nova.compute.manager [req-f160331d-5be3-4a2a-bfff-59f917f38742 req-840ddee5-1adb-46e6-9d4a-60f7f53d4778 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received event network-vif-unplugged-21a608d7-be38-4d88-902b-2124e5227ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.731 253542 DEBUG oslo_concurrency.lockutils [req-f160331d-5be3-4a2a-bfff-59f917f38742 req-840ddee5-1adb-46e6-9d4a-60f7f53d4778 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.732 253542 DEBUG oslo_concurrency.lockutils [req-f160331d-5be3-4a2a-bfff-59f917f38742 req-840ddee5-1adb-46e6-9d4a-60f7f53d4778 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.732 253542 DEBUG oslo_concurrency.lockutils [req-f160331d-5be3-4a2a-bfff-59f917f38742 req-840ddee5-1adb-46e6-9d4a-60f7f53d4778 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.733 253542 DEBUG nova.compute.manager [req-f160331d-5be3-4a2a-bfff-59f917f38742 req-840ddee5-1adb-46e6-9d4a-60f7f53d4778 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] No waiting events found dispatching network-vif-unplugged-21a608d7-be38-4d88-902b-2124e5227ae5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.733 253542 WARNING nova.compute.manager [req-f160331d-5be3-4a2a-bfff-59f917f38742 req-840ddee5-1adb-46e6-9d4a-60f7f53d4778 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received unexpected event network-vif-unplugged-21a608d7-be38-4d88-902b-2124e5227ae5 for instance with vm_state deleted and task_state None.
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.734 253542 DEBUG nova.compute.manager [req-f160331d-5be3-4a2a-bfff-59f917f38742 req-840ddee5-1adb-46e6-9d4a-60f7f53d4778 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.734 253542 DEBUG oslo_concurrency.lockutils [req-f160331d-5be3-4a2a-bfff-59f917f38742 req-840ddee5-1adb-46e6-9d4a-60f7f53d4778 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.735 253542 DEBUG oslo_concurrency.lockutils [req-f160331d-5be3-4a2a-bfff-59f917f38742 req-840ddee5-1adb-46e6-9d4a-60f7f53d4778 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.735 253542 DEBUG oslo_concurrency.lockutils [req-f160331d-5be3-4a2a-bfff-59f917f38742 req-840ddee5-1adb-46e6-9d4a-60f7f53d4778 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3bc210c6-9f67-440b-a11c-0b4e13e74a21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.736 253542 DEBUG nova.compute.manager [req-f160331d-5be3-4a2a-bfff-59f917f38742 req-840ddee5-1adb-46e6-9d4a-60f7f53d4778 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] No waiting events found dispatching network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.736 253542 WARNING nova.compute.manager [req-f160331d-5be3-4a2a-bfff-59f917f38742 req-840ddee5-1adb-46e6-9d4a-60f7f53d4778 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received unexpected event network-vif-plugged-21a608d7-be38-4d88-902b-2124e5227ae5 for instance with vm_state deleted and task_state None.
Nov 25 08:39:03 compute-0 nova_compute[253538]: 2025-11-25 08:39:03.737 253542 DEBUG nova.compute.manager [req-f160331d-5be3-4a2a-bfff-59f917f38742 req-840ddee5-1adb-46e6-9d4a-60f7f53d4778 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Received event network-vif-deleted-21a608d7-be38-4d88-902b-2124e5227ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00412500203490265 of space, bias 1.0, pg target 1.237500610470795 quantized to 32 (current 32)
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.30262080199583247 quantized to 32 (current 32)
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:39:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.151 253542 INFO nova.virt.libvirt.driver [None req-e0d77ed8-e9dc-4b68-b2c8-216a10a27394 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Instance shutdown successfully after 24 seconds.
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.156 253542 INFO nova.virt.libvirt.driver [-] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Instance destroyed successfully.
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.157 253542 DEBUG nova.objects.instance [None req-e0d77ed8-e9dc-4b68-b2c8-216a10a27394 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8a68398d-9640-49e2-a049-3da4f7b371c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.169 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "291c3536-48c4-40eb-a910-9494484e8668" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.170 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "291c3536-48c4-40eb-a910-9494484e8668" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.172 253542 DEBUG nova.compute.manager [None req-e0d77ed8-e9dc-4b68-b2c8-216a10a27394 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.191 253542 DEBUG nova.compute.manager [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.226 253542 DEBUG oslo_concurrency.lockutils [None req-e0d77ed8-e9dc-4b68-b2c8-216a10a27394 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.249 253542 DEBUG oslo_concurrency.lockutils [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.250 253542 DEBUG oslo_concurrency.lockutils [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.250 253542 DEBUG oslo_concurrency.lockutils [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.250 253542 DEBUG oslo_concurrency.lockutils [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.251 253542 DEBUG oslo_concurrency.lockutils [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.252 253542 INFO nova.compute.manager [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Terminating instance
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.253 253542 DEBUG nova.compute.manager [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.270 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.271 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.277 253542 DEBUG nova.virt.hardware [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.277 253542 INFO nova.compute.claims [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:39:04 compute-0 kernel: tap7b8f95d6-ff (unregistering): left promiscuous mode
Nov 25 08:39:04 compute-0 NetworkManager[48915]: <info>  [1764059944.3191] device (tap7b8f95d6-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.323 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:04 compute-0 ovn_controller[152859]: 2025-11-25T08:39:04Z|00854|binding|INFO|Releasing lport 7b8f95d6-ffda-4c87-9539-bdd932e1dad5 from this chassis (sb_readonly=0)
Nov 25 08:39:04 compute-0 ovn_controller[152859]: 2025-11-25T08:39:04Z|00855|binding|INFO|Setting lport 7b8f95d6-ffda-4c87-9539-bdd932e1dad5 down in Southbound
Nov 25 08:39:04 compute-0 ovn_controller[152859]: 2025-11-25T08:39:04Z|00856|binding|INFO|Removing iface tap7b8f95d6-ff ovn-installed in OVS
Nov 25 08:39:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:04.333 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:35:96 10.100.0.4'], port_security=['fa:16:3e:62:35:96 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5dc14644-cfc4-4e56-91fd-736ee4e3f5ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '42767d876b844fbd9b53953fb5f664b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87f4d1a7-e392-4893-98e7-5d05afa4be77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=608490f2-11ad-4e35-9100-843accb3d76b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7b8f95d6-ffda-4c87-9539-bdd932e1dad5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:04.334 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7b8f95d6-ffda-4c87-9539-bdd932e1dad5 in datapath 1a4161e2-3dc8-48ab-8204-aaba5cb02136 unbound from our chassis
Nov 25 08:39:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:04.336 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1a4161e2-3dc8-48ab-8204-aaba5cb02136, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:39:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:04.336 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b2524f7b-9779-4e69-bd31-29668d5df003]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:04.337 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136 namespace which is not needed anymore
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.354 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:04 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000054.scope: Deactivated successfully.
Nov 25 08:39:04 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000054.scope: Consumed 14.802s CPU time.
Nov 25 08:39:04 compute-0 systemd-machined[215790]: Machine qemu-103-instance-00000054 terminated.
Nov 25 08:39:04 compute-0 ceph-mon[75015]: pgmap v1681: 321 pgs: 321 active+clean; 538 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.7 MiB/s wr, 288 op/s
Nov 25 08:39:04 compute-0 systemd-udevd[338340]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:39:04 compute-0 NetworkManager[48915]: <info>  [1764059944.4884] manager: (tap7b8f95d6-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/360)
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.491 253542 DEBUG oslo_concurrency.processutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:04 compute-0 neutron-haproxy-ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136[336493]: [NOTICE]   (336497) : haproxy version is 2.8.14-c23fe91
Nov 25 08:39:04 compute-0 neutron-haproxy-ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136[336493]: [NOTICE]   (336497) : path to executable is /usr/sbin/haproxy
Nov 25 08:39:04 compute-0 neutron-haproxy-ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136[336493]: [WARNING]  (336497) : Exiting Master process...
Nov 25 08:39:04 compute-0 neutron-haproxy-ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136[336493]: [WARNING]  (336497) : Exiting Master process...
Nov 25 08:39:04 compute-0 neutron-haproxy-ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136[336493]: [ALERT]    (336497) : Current worker (336499) exited with code 143 (Terminated)
Nov 25 08:39:04 compute-0 neutron-haproxy-ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136[336493]: [WARNING]  (336497) : All workers exited. Exiting... (0)
Nov 25 08:39:04 compute-0 systemd[1]: libpod-a8538da0b5e5de597783122c064d94e3cc80fb7c446521a3e689362f2b3b515b.scope: Deactivated successfully.
Nov 25 08:39:04 compute-0 podman[338387]: 2025-11-25 08:39:04.518630489 +0000 UTC m=+0.055180465 container died a8538da0b5e5de597783122c064d94e3cc80fb7c446521a3e689362f2b3b515b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.542 253542 INFO nova.virt.libvirt.driver [-] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Instance destroyed successfully.
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.543 253542 DEBUG nova.objects.instance [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lazy-loading 'resources' on Instance uuid 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8538da0b5e5de597783122c064d94e3cc80fb7c446521a3e689362f2b3b515b-userdata-shm.mount: Deactivated successfully.
Nov 25 08:39:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e45de20e61aed4a1b05e83338665baea5e22a4e3b5e0ced4a5eecb215f961e99-merged.mount: Deactivated successfully.
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.557 253542 DEBUG nova.virt.libvirt.vif [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:38:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1021848873',display_name='tempest-ServerRescueNegativeTestJSON-server-1021848873',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1021848873',id=84,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:38:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='42767d876b844fbd9b53953fb5f664b5',ramdisk_id='',reservation_id='r-0hykkj86',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-561122366',owner_user_name='tempest-ServerRescueNegativeTestJSON-561122366-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:38:58Z,user_data=None,user_id='3f889d771d484ec8b9b1fff0fbde81fc',uuid=5dc14644-cfc4-4e56-91fd-736ee4e3f5ec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "address": "fa:16:3e:62:35:96", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f95d6-ff", "ovs_interfaceid": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.558 253542 DEBUG nova.network.os_vif_util [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Converting VIF {"id": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "address": "fa:16:3e:62:35:96", "network": {"id": "1a4161e2-3dc8-48ab-8204-aaba5cb02136", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1363862549-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "42767d876b844fbd9b53953fb5f664b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f95d6-ff", "ovs_interfaceid": "7b8f95d6-ffda-4c87-9539-bdd932e1dad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.559 253542 DEBUG nova.network.os_vif_util [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:35:96,bridge_name='br-int',has_traffic_filtering=True,id=7b8f95d6-ffda-4c87-9539-bdd932e1dad5,network=Network(1a4161e2-3dc8-48ab-8204-aaba5cb02136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b8f95d6-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.559 253542 DEBUG os_vif [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:35:96,bridge_name='br-int',has_traffic_filtering=True,id=7b8f95d6-ffda-4c87-9539-bdd932e1dad5,network=Network(1a4161e2-3dc8-48ab-8204-aaba5cb02136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b8f95d6-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:39:04 compute-0 podman[338387]: 2025-11-25 08:39:04.56013623 +0000 UTC m=+0.096686206 container cleanup a8538da0b5e5de597783122c064d94e3cc80fb7c446521a3e689362f2b3b515b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.562 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.562 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b8f95d6-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.564 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.567 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.568 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.571 253542 INFO os_vif [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:35:96,bridge_name='br-int',has_traffic_filtering=True,id=7b8f95d6-ffda-4c87-9539-bdd932e1dad5,network=Network(1a4161e2-3dc8-48ab-8204-aaba5cb02136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b8f95d6-ff')
Nov 25 08:39:04 compute-0 systemd[1]: libpod-conmon-a8538da0b5e5de597783122c064d94e3cc80fb7c446521a3e689362f2b3b515b.scope: Deactivated successfully.
Nov 25 08:39:04 compute-0 podman[338424]: 2025-11-25 08:39:04.639730779 +0000 UTC m=+0.049951503 container remove a8538da0b5e5de597783122c064d94e3cc80fb7c446521a3e689362f2b3b515b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 08:39:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:04.646 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[565fc396-d131-4483-8460-39fdb4106d46]: (4, ('Tue Nov 25 08:39:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136 (a8538da0b5e5de597783122c064d94e3cc80fb7c446521a3e689362f2b3b515b)\na8538da0b5e5de597783122c064d94e3cc80fb7c446521a3e689362f2b3b515b\nTue Nov 25 08:39:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136 (a8538da0b5e5de597783122c064d94e3cc80fb7c446521a3e689362f2b3b515b)\na8538da0b5e5de597783122c064d94e3cc80fb7c446521a3e689362f2b3b515b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:04.648 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[beb37cb7-ce3f-4877-a2fb-9569952edb40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:04.648 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a4161e2-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:04 compute-0 kernel: tap1a4161e2-30: left promiscuous mode
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.650 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:04 compute-0 nova_compute[253538]: 2025-11-25 08:39:04.669 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:04.672 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2a921c0e-7dd9-45f5-b7cf-7c213e1063c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:04.682 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ef5fea80-ed96-4977-9918-5decaa42ef6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:04.683 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f46023-78ab-42bd-841e-9ca4d2fc1383]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:04.698 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[96c5cc2c-dfe7-445e-977b-92020a88c459]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525612, 'reachable_time': 28732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338479, 'error': None, 'target': 'ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d1a4161e2\x2d3dc8\x2d48ab\x2d8204\x2daaba5cb02136.mount: Deactivated successfully.
Nov 25 08:39:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:04.702 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1a4161e2-3dc8-48ab-8204-aaba5cb02136 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:39:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:04.702 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[da86463d-c759-4cee-9448-809a0cd84e18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:39:04 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4185478284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.004 253542 INFO nova.virt.libvirt.driver [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Deleting instance files /var/lib/nova/instances/5dc14644-cfc4-4e56-91fd-736ee4e3f5ec_del
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.005 253542 INFO nova.virt.libvirt.driver [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Deletion of /var/lib/nova/instances/5dc14644-cfc4-4e56-91fd-736ee4e3f5ec_del complete
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.010 253542 DEBUG oslo_concurrency.processutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.019 253542 DEBUG nova.compute.provider_tree [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.033 253542 DEBUG nova.scheduler.client.report [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.063 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.065 253542 DEBUG nova.compute.manager [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.076 253542 INFO nova.compute.manager [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.077 253542 DEBUG oslo.service.loopingcall [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.077 253542 DEBUG nova.compute.manager [-] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.077 253542 DEBUG nova.network.neutron [-] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.122 253542 DEBUG nova.compute.manager [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.123 253542 DEBUG nova.network.neutron [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.147 253542 INFO nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.171 253542 DEBUG nova.compute.manager [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.275 253542 DEBUG nova.compute.manager [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.277 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.277 253542 INFO nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Creating image(s)
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.307 253542 DEBUG nova.storage.rbd_utils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 291c3536-48c4-40eb-a910-9494484e8668_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.332 253542 DEBUG nova.storage.rbd_utils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 291c3536-48c4-40eb-a910-9494484e8668_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.359 253542 DEBUG nova.storage.rbd_utils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 291c3536-48c4-40eb-a910-9494484e8668_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.362 253542 DEBUG oslo_concurrency.processutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 474 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.0 MiB/s wr, 272 op/s
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.400 253542 DEBUG nova.policy [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fdcb005cc49a4dfa82152f2c0817cc94', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b730f086c4b94185afab5e10fa2e8181', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:39:05 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4185478284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.454 253542 DEBUG oslo_concurrency.processutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.455 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.456 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.456 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.485 253542 DEBUG nova.storage.rbd_utils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 291c3536-48c4-40eb-a910-9494484e8668_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.489 253542 DEBUG oslo_concurrency.processutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 291c3536-48c4-40eb-a910-9494484e8668_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.799 253542 DEBUG nova.network.neutron [-] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.820 253542 INFO nova.compute.manager [-] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Took 0.74 seconds to deallocate network for instance.
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.860 253542 DEBUG oslo_concurrency.processutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 291c3536-48c4-40eb-a910-9494484e8668_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.894 253542 DEBUG oslo_concurrency.lockutils [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.895 253542 DEBUG oslo_concurrency.lockutils [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:05 compute-0 nova_compute[253538]: 2025-11-25 08:39:05.944 253542 DEBUG nova.storage.rbd_utils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] resizing rbd image 291c3536-48c4-40eb-a910-9494484e8668_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.007 253542 DEBUG nova.compute.manager [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received event network-vif-unplugged-203c150c-9339-4520-8e52-01740854c5ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.008 253542 DEBUG oslo_concurrency.lockutils [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.008 253542 DEBUG oslo_concurrency.lockutils [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.008 253542 DEBUG oslo_concurrency.lockutils [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.009 253542 DEBUG nova.compute.manager [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] No waiting events found dispatching network-vif-unplugged-203c150c-9339-4520-8e52-01740854c5ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.009 253542 WARNING nova.compute.manager [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received unexpected event network-vif-unplugged-203c150c-9339-4520-8e52-01740854c5ef for instance with vm_state stopped and task_state rebuilding.
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.009 253542 DEBUG nova.compute.manager [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received event network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.009 253542 DEBUG oslo_concurrency.lockutils [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.010 253542 DEBUG oslo_concurrency.lockutils [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.010 253542 DEBUG oslo_concurrency.lockutils [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.010 253542 DEBUG nova.compute.manager [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] No waiting events found dispatching network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.010 253542 WARNING nova.compute.manager [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received unexpected event network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef for instance with vm_state stopped and task_state rebuilding.
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.010 253542 DEBUG nova.compute.manager [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Received event network-vif-unplugged-7b8f95d6-ffda-4c87-9539-bdd932e1dad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.011 253542 DEBUG oslo_concurrency.lockutils [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.011 253542 DEBUG oslo_concurrency.lockutils [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.011 253542 DEBUG oslo_concurrency.lockutils [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.011 253542 DEBUG nova.compute.manager [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] No waiting events found dispatching network-vif-unplugged-7b8f95d6-ffda-4c87-9539-bdd932e1dad5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.012 253542 WARNING nova.compute.manager [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Received unexpected event network-vif-unplugged-7b8f95d6-ffda-4c87-9539-bdd932e1dad5 for instance with vm_state deleted and task_state None.
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.012 253542 DEBUG nova.compute.manager [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Received event network-vif-plugged-7b8f95d6-ffda-4c87-9539-bdd932e1dad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.012 253542 DEBUG oslo_concurrency.lockutils [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.012 253542 DEBUG oslo_concurrency.lockutils [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.012 253542 DEBUG oslo_concurrency.lockutils [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.013 253542 DEBUG nova.compute.manager [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] No waiting events found dispatching network-vif-plugged-7b8f95d6-ffda-4c87-9539-bdd932e1dad5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.013 253542 WARNING nova.compute.manager [req-99a9b34d-0b12-42bf-a9b8-bffff7d9dcd8 req-393a29b7-8fc5-4838-98aa-58f8473cf677 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Received unexpected event network-vif-plugged-7b8f95d6-ffda-4c87-9539-bdd932e1dad5 for instance with vm_state deleted and task_state None.
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.050 253542 DEBUG nova.objects.instance [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'migration_context' on Instance uuid 291c3536-48c4-40eb-a910-9494484e8668 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.078 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.078 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Ensure instance console log exists: /var/lib/nova/instances/291c3536-48c4-40eb-a910-9494484e8668/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.079 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.079 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.079 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.112 253542 DEBUG oslo_concurrency.processutils [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.149 253542 DEBUG nova.network.neutron [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Successfully created port: 0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.179 253542 INFO nova.compute.manager [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Rebuilding instance
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.399 253542 DEBUG nova.objects.instance [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8a68398d-9640-49e2-a049-3da4f7b371c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.417 253542 DEBUG nova.compute.manager [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.469 253542 DEBUG nova.objects.instance [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8a68398d-9640-49e2-a049-3da4f7b371c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.479 253542 DEBUG nova.objects.instance [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8a68398d-9640-49e2-a049-3da4f7b371c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:06 compute-0 ceph-mon[75015]: pgmap v1682: 321 pgs: 321 active+clean; 474 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.0 MiB/s wr, 272 op/s
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.495 253542 DEBUG nova.objects.instance [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'resources' on Instance uuid 8a68398d-9640-49e2-a049-3da4f7b371c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.507 253542 DEBUG nova.objects.instance [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'migration_context' on Instance uuid 8a68398d-9640-49e2-a049-3da4f7b371c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.514 253542 DEBUG nova.objects.instance [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:39:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:39:06 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2582441947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.518 253542 INFO nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Instance already shutdown.
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.525 253542 INFO nova.virt.libvirt.driver [-] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Instance destroyed successfully.
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.530 253542 INFO nova.virt.libvirt.driver [-] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Instance destroyed successfully.
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.530 253542 DEBUG nova.virt.libvirt.vif [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:38:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1159083655',display_name='tempest-tempest.common.compute-instance-1159083655',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1159083655',id=87,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:38:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='6851917992b149818e8b44146c66bfc3',ramdisk_id='',reservation_id='r-u9oo6n6b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-678529119',owner_user_name='tempest-ServerActionsTestOtherA-678529119-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:39:05Z,user_data=None,user_id='24fa34332e6f4b628514969bbf76e94b',uuid=8a68398d-9640-49e2-a049-3da4f7b371c5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.531 253542 DEBUG nova.network.os_vif_util [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converting VIF {"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.532 253542 DEBUG nova.network.os_vif_util [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:50,bridge_name='br-int',has_traffic_filtering=True,id=203c150c-9339-4520-8e52-01740854c5ef,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap203c150c-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.532 253542 DEBUG os_vif [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:50,bridge_name='br-int',has_traffic_filtering=True,id=203c150c-9339-4520-8e52-01740854c5ef,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap203c150c-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.533 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.534 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap203c150c-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.535 253542 DEBUG oslo_concurrency.processutils [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.536 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.537 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.539 253542 INFO os_vif [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:50,bridge_name='br-int',has_traffic_filtering=True,id=203c150c-9339-4520-8e52-01740854c5ef,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap203c150c-93')
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.562 253542 DEBUG nova.compute.provider_tree [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.574 253542 DEBUG nova.scheduler.client.report [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.589 253542 DEBUG oslo_concurrency.lockutils [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.592 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.618 253542 INFO nova.scheduler.client.report [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Deleted allocations for instance 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec
Nov 25 08:39:06 compute-0 nova_compute[253538]: 2025-11-25 08:39:06.679 253542 DEBUG oslo_concurrency.lockutils [None req-b862660a-4845-4e63-80d4-c2283f52492f 3f889d771d484ec8b9b1fff0fbde81fc 42767d876b844fbd9b53953fb5f664b5 - - default default] Lock "5dc14644-cfc4-4e56-91fd-736ee4e3f5ec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.063 253542 INFO nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Deleting instance files /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5_del
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.064 253542 INFO nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Deletion of /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5_del complete
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.138 253542 DEBUG nova.network.neutron [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Successfully updated port: 0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.158 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "refresh_cache-291c3536-48c4-40eb-a910-9494484e8668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.158 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquired lock "refresh_cache-291c3536-48c4-40eb-a910-9494484e8668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.158 253542 DEBUG nova.network.neutron [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.194 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.194 253542 INFO nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Creating image(s)
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.217 253542 DEBUG nova.storage.rbd_utils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 8a68398d-9640-49e2-a049-3da4f7b371c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.242 253542 DEBUG nova.storage.rbd_utils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 8a68398d-9640-49e2-a049-3da4f7b371c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.269 253542 DEBUG nova.storage.rbd_utils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 8a68398d-9640-49e2-a049-3da4f7b371c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.273 253542 DEBUG oslo_concurrency.processutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.321 253542 DEBUG nova.network.neutron [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:39:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 447 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 221 op/s
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.377 253542 DEBUG oslo_concurrency.processutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.378 253542 DEBUG oslo_concurrency.lockutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.378 253542 DEBUG oslo_concurrency.lockutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.379 253542 DEBUG oslo_concurrency.lockutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.403 253542 DEBUG nova.storage.rbd_utils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 8a68398d-9640-49e2-a049-3da4f7b371c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.407 253542 DEBUG oslo_concurrency.processutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 8a68398d-9640-49e2-a049-3da4f7b371c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2582441947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.740 253542 DEBUG oslo_concurrency.processutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 8a68398d-9640-49e2-a049-3da4f7b371c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.809 253542 DEBUG nova.storage.rbd_utils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] resizing rbd image 8a68398d-9640-49e2-a049-3da4f7b371c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.904 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.905 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Ensure instance console log exists: /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.905 253542 DEBUG oslo_concurrency.lockutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.906 253542 DEBUG oslo_concurrency.lockutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.906 253542 DEBUG oslo_concurrency.lockutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.909 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Start _get_guest_xml network_info=[{"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.913 253542 WARNING nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.919 253542 DEBUG nova.virt.libvirt.host [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.919 253542 DEBUG nova.virt.libvirt.host [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.923 253542 DEBUG nova.virt.libvirt.host [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.923 253542 DEBUG nova.virt.libvirt.host [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.923 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.923 253542 DEBUG nova.virt.hardware [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.924 253542 DEBUG nova.virt.hardware [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.924 253542 DEBUG nova.virt.hardware [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.924 253542 DEBUG nova.virt.hardware [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.924 253542 DEBUG nova.virt.hardware [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.925 253542 DEBUG nova.virt.hardware [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.925 253542 DEBUG nova.virt.hardware [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.925 253542 DEBUG nova.virt.hardware [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.925 253542 DEBUG nova.virt.hardware [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.925 253542 DEBUG nova.virt.hardware [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.926 253542 DEBUG nova.virt.hardware [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.926 253542 DEBUG nova.objects.instance [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8a68398d-9640-49e2-a049-3da4f7b371c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:07 compute-0 nova_compute[253538]: 2025-11-25 08:39:07.939 253542 DEBUG oslo_concurrency.processutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:39:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913246976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.374 253542 DEBUG oslo_concurrency.processutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.398 253542 DEBUG nova.storage.rbd_utils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 8a68398d-9640-49e2-a049-3da4f7b371c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.402 253542 DEBUG oslo_concurrency.processutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:08 compute-0 ceph-mon[75015]: pgmap v1683: 321 pgs: 321 active+clean; 447 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 221 op/s
Nov 25 08:39:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/913246976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.595 253542 DEBUG nova.compute.manager [req-5c17d240-6f59-41c3-ad6f-2ce0b93e3627 req-8a14e9fe-5d7c-44c9-b645-e4b3ad710557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Received event network-vif-deleted-7b8f95d6-ffda-4c87-9539-bdd932e1dad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.596 253542 DEBUG nova.compute.manager [req-5c17d240-6f59-41c3-ad6f-2ce0b93e3627 req-8a14e9fe-5d7c-44c9-b645-e4b3ad710557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Received event network-changed-0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.596 253542 DEBUG nova.compute.manager [req-5c17d240-6f59-41c3-ad6f-2ce0b93e3627 req-8a14e9fe-5d7c-44c9-b645-e4b3ad710557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Refreshing instance network info cache due to event network-changed-0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.596 253542 DEBUG oslo_concurrency.lockutils [req-5c17d240-6f59-41c3-ad6f-2ce0b93e3627 req-8a14e9fe-5d7c-44c9-b645-e4b3ad710557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-291c3536-48c4-40eb-a910-9494484e8668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:39:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:39:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1363644362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.814 253542 DEBUG oslo_concurrency.processutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.815 253542 DEBUG nova.virt.libvirt.vif [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:38:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1159083655',display_name='tempest-tempest.common.compute-instance-1159083655',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1159083655',id=87,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:38:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='6851917992b149818e8b44146c66bfc3',ramdisk_id='',reservation_id='r-u9oo6n6b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-678529119',owner_user_name='tempest-S
erverActionsTestOtherA-678529119-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:39:07Z,user_data=None,user_id='24fa34332e6f4b628514969bbf76e94b',uuid=8a68398d-9640-49e2-a049-3da4f7b371c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.816 253542 DEBUG nova.network.os_vif_util [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converting VIF {"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.816 253542 DEBUG nova.network.os_vif_util [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:50,bridge_name='br-int',has_traffic_filtering=True,id=203c150c-9339-4520-8e52-01740854c5ef,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap203c150c-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.818 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:39:08 compute-0 nova_compute[253538]:   <uuid>8a68398d-9640-49e2-a049-3da4f7b371c5</uuid>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   <name>instance-00000057</name>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <nova:name>tempest-tempest.common.compute-instance-1159083655</nova:name>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:39:07</nova:creationTime>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:39:08 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:39:08 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:39:08 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:39:08 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:39:08 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:39:08 compute-0 nova_compute[253538]:         <nova:user uuid="24fa34332e6f4b628514969bbf76e94b">tempest-ServerActionsTestOtherA-678529119-project-member</nova:user>
Nov 25 08:39:08 compute-0 nova_compute[253538]:         <nova:project uuid="6851917992b149818e8b44146c66bfc3">tempest-ServerActionsTestOtherA-678529119</nova:project>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:39:08 compute-0 nova_compute[253538]:         <nova:port uuid="203c150c-9339-4520-8e52-01740854c5ef">
Nov 25 08:39:08 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <system>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <entry name="serial">8a68398d-9640-49e2-a049-3da4f7b371c5</entry>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <entry name="uuid">8a68398d-9640-49e2-a049-3da4f7b371c5</entry>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     </system>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   <os>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:39:08 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   </os>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   <features>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   </features>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/8a68398d-9640-49e2-a049-3da4f7b371c5_disk">
Nov 25 08:39:08 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       </source>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:39:08 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/8a68398d-9640-49e2-a049-3da4f7b371c5_disk.config">
Nov 25 08:39:08 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       </source>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:39:08 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:30:c9:50"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <target dev="tap203c150c-93"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5/console.log" append="off"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <video>
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     </video>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:39:08 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:39:08 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:39:08 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:39:08 compute-0 nova_compute[253538]: </domain>
Nov 25 08:39:08 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.819 253542 DEBUG nova.virt.libvirt.vif [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:38:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1159083655',display_name='tempest-tempest.common.compute-instance-1159083655',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1159083655',id=87,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:38:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='6851917992b149818e8b44146c66bfc3',ramdisk_id='',reservation_id='r-u9oo6n6b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-678529119',owner_user_name='tempest-S
erverActionsTestOtherA-678529119-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:39:07Z,user_data=None,user_id='24fa34332e6f4b628514969bbf76e94b',uuid=8a68398d-9640-49e2-a049-3da4f7b371c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.819 253542 DEBUG nova.network.os_vif_util [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converting VIF {"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.820 253542 DEBUG nova.network.os_vif_util [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:50,bridge_name='br-int',has_traffic_filtering=True,id=203c150c-9339-4520-8e52-01740854c5ef,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap203c150c-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.820 253542 DEBUG os_vif [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:50,bridge_name='br-int',has_traffic_filtering=True,id=203c150c-9339-4520-8e52-01740854c5ef,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap203c150c-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.820 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.821 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.821 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.823 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.823 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap203c150c-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.823 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap203c150c-93, col_values=(('external_ids', {'iface-id': '203c150c-9339-4520-8e52-01740854c5ef', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:c9:50', 'vm-uuid': '8a68398d-9640-49e2-a049-3da4f7b371c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.824 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:08 compute-0 NetworkManager[48915]: <info>  [1764059948.8255] manager: (tap203c150c-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.827 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.829 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.830 253542 INFO os_vif [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:50,bridge_name='br-int',has_traffic_filtering=True,id=203c150c-9339-4520-8e52-01740854c5ef,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap203c150c-93')
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.876 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.876 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.876 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] No VIF found with MAC fa:16:3e:30:c9:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.877 253542 INFO nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Using config drive
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.896 253542 DEBUG nova.storage.rbd_utils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 8a68398d-9640-49e2-a049-3da4f7b371c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.910 253542 DEBUG nova.objects.instance [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 8a68398d-9640-49e2-a049-3da4f7b371c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:08 compute-0 nova_compute[253538]: 2025-11-25 08:39:08.932 253542 DEBUG nova.objects.instance [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'keypairs' on Instance uuid 8a68398d-9640-49e2-a049-3da4f7b371c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 424 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.7 MiB/s wr, 219 op/s
Nov 25 08:39:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1363644362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:10 compute-0 nova_compute[253538]: 2025-11-25 08:39:10.448 253542 INFO nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Creating config drive at /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5/disk.config
Nov 25 08:39:10 compute-0 nova_compute[253538]: 2025-11-25 08:39:10.453 253542 DEBUG oslo_concurrency.processutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp80h7n2_l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:10 compute-0 ceph-mon[75015]: pgmap v1684: 321 pgs: 321 active+clean; 424 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.7 MiB/s wr, 219 op/s
Nov 25 08:39:10 compute-0 nova_compute[253538]: 2025-11-25 08:39:10.591 253542 DEBUG oslo_concurrency.processutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp80h7n2_l" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:10 compute-0 nova_compute[253538]: 2025-11-25 08:39:10.632 253542 DEBUG nova.storage.rbd_utils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 8a68398d-9640-49e2-a049-3da4f7b371c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:10 compute-0 nova_compute[253538]: 2025-11-25 08:39:10.637 253542 DEBUG oslo_concurrency.processutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5/disk.config 8a68398d-9640-49e2-a049-3da4f7b371c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:10 compute-0 nova_compute[253538]: 2025-11-25 08:39:10.811 253542 DEBUG oslo_concurrency.processutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5/disk.config 8a68398d-9640-49e2-a049-3da4f7b371c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:10 compute-0 nova_compute[253538]: 2025-11-25 08:39:10.812 253542 INFO nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Deleting local config drive /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5/disk.config because it was imported into RBD.
Nov 25 08:39:10 compute-0 kernel: tap203c150c-93: entered promiscuous mode
Nov 25 08:39:10 compute-0 NetworkManager[48915]: <info>  [1764059950.8682] manager: (tap203c150c-93): new Tun device (/org/freedesktop/NetworkManager/Devices/362)
Nov 25 08:39:10 compute-0 ovn_controller[152859]: 2025-11-25T08:39:10Z|00857|binding|INFO|Claiming lport 203c150c-9339-4520-8e52-01740854c5ef for this chassis.
Nov 25 08:39:10 compute-0 ovn_controller[152859]: 2025-11-25T08:39:10Z|00858|binding|INFO|203c150c-9339-4520-8e52-01740854c5ef: Claiming fa:16:3e:30:c9:50 10.100.0.11
Nov 25 08:39:10 compute-0 nova_compute[253538]: 2025-11-25 08:39:10.869 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:10 compute-0 ovn_controller[152859]: 2025-11-25T08:39:10Z|00859|binding|INFO|Setting lport 203c150c-9339-4520-8e52-01740854c5ef ovn-installed in OVS
Nov 25 08:39:10 compute-0 nova_compute[253538]: 2025-11-25 08:39:10.891 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:10 compute-0 nova_compute[253538]: 2025-11-25 08:39:10.895 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:10 compute-0 systemd-udevd[338995]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:39:10 compute-0 systemd-machined[215790]: New machine qemu-109-instance-00000057.
Nov 25 08:39:10 compute-0 NetworkManager[48915]: <info>  [1764059950.9181] device (tap203c150c-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:39:10 compute-0 NetworkManager[48915]: <info>  [1764059950.9190] device (tap203c150c-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:39:10 compute-0 systemd[1]: Started Virtual Machine qemu-109-instance-00000057.
Nov 25 08:39:10 compute-0 ovn_controller[152859]: 2025-11-25T08:39:10Z|00860|binding|INFO|Setting lport 203c150c-9339-4520-8e52-01740854c5ef up in Southbound
Nov 25 08:39:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:10.993 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:c9:50 10.100.0.11'], port_security=['fa:16:3e:30:c9:50 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8a68398d-9640-49e2-a049-3da4f7b371c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b676104-a53a-419a-a348-631c409e45c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6851917992b149818e8b44146c66bfc3', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ee3f370c-3523-4fc9-bede-12723b8659c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=062facaf-eaf1-4c53-a009-db8e8d32ad6e, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=203c150c-9339-4520-8e52-01740854c5ef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:10.995 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 203c150c-9339-4520-8e52-01740854c5ef in datapath 2b676104-a53a-419a-a348-631c409e45c0 bound to our chassis
Nov 25 08:39:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:10.996 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b676104-a53a-419a-a348-631c409e45c0
Nov 25 08:39:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:11.011 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9158ea0c-2f43-446b-b502-9116d5db859f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:11.048 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1f8aba17-41f7-4763-92f2-a54b3396a1b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:11.053 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[363c8a57-0195-4da4-826d-dab299a37f19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:11.086 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dc4ab88b-739b-4581-b257-5829dfbc1197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:11.105 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[10000764-266f-4255-b617-dbd11aebd2b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b676104-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:55:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522998, 'reachable_time': 29715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339009, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:11.124 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3a2b4a-9311-425d-bc83-f8683dcd7073]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2b676104-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523012, 'tstamp': 523012}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339010, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2b676104-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523015, 'tstamp': 523015}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339010, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:11.125 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b676104-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.127 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:11.128 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b676104-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:11.128 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:11.128 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b676104-a0, col_values=(('external_ids', {'iface-id': 'a70ff8dd-5248-427b-8c9b-80eee3a671f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:11.129 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.134 253542 DEBUG nova.network.neutron [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Updating instance_info_cache with network_info: [{"id": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "address": "fa:16:3e:96:e3:8d", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c1a57e6-fd", "ovs_interfaceid": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.200 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Releasing lock "refresh_cache-291c3536-48c4-40eb-a910-9494484e8668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.201 253542 DEBUG nova.compute.manager [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Instance network_info: |[{"id": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "address": "fa:16:3e:96:e3:8d", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c1a57e6-fd", "ovs_interfaceid": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.201 253542 DEBUG oslo_concurrency.lockutils [req-5c17d240-6f59-41c3-ad6f-2ce0b93e3627 req-8a14e9fe-5d7c-44c9-b645-e4b3ad710557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-291c3536-48c4-40eb-a910-9494484e8668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.202 253542 DEBUG nova.network.neutron [req-5c17d240-6f59-41c3-ad6f-2ce0b93e3627 req-8a14e9fe-5d7c-44c9-b645-e4b3ad710557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Refreshing network info cache for port 0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.206 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Start _get_guest_xml network_info=[{"id": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "address": "fa:16:3e:96:e3:8d", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c1a57e6-fd", "ovs_interfaceid": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.211 253542 WARNING nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.222 253542 DEBUG nova.virt.libvirt.host [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.223 253542 DEBUG nova.virt.libvirt.host [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.228 253542 DEBUG nova.virt.libvirt.host [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.229 253542 DEBUG nova.virt.libvirt.host [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.229 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.229 253542 DEBUG nova.virt.hardware [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.230 253542 DEBUG nova.virt.hardware [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.230 253542 DEBUG nova.virt.hardware [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.230 253542 DEBUG nova.virt.hardware [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.230 253542 DEBUG nova.virt.hardware [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.231 253542 DEBUG nova.virt.hardware [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.231 253542 DEBUG nova.virt.hardware [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.231 253542 DEBUG nova.virt.hardware [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.232 253542 DEBUG nova.virt.hardware [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.232 253542 DEBUG nova.virt.hardware [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.232 253542 DEBUG nova.virt.hardware [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.236 253542 DEBUG oslo_concurrency.processutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1685: 321 pgs: 321 active+clean; 386 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 227 op/s
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.502 253542 DEBUG nova.compute.manager [req-af9a7af0-2647-4579-a67a-3282fca84b16 req-df4accdd-093b-4a4a-90d8-19a614ff892e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received event network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.503 253542 DEBUG oslo_concurrency.lockutils [req-af9a7af0-2647-4579-a67a-3282fca84b16 req-df4accdd-093b-4a4a-90d8-19a614ff892e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.504 253542 DEBUG oslo_concurrency.lockutils [req-af9a7af0-2647-4579-a67a-3282fca84b16 req-df4accdd-093b-4a4a-90d8-19a614ff892e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.504 253542 DEBUG oslo_concurrency.lockutils [req-af9a7af0-2647-4579-a67a-3282fca84b16 req-df4accdd-093b-4a4a-90d8-19a614ff892e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.505 253542 DEBUG nova.compute.manager [req-af9a7af0-2647-4579-a67a-3282fca84b16 req-df4accdd-093b-4a4a-90d8-19a614ff892e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] No waiting events found dispatching network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.505 253542 WARNING nova.compute.manager [req-af9a7af0-2647-4579-a67a-3282fca84b16 req-df4accdd-093b-4a4a-90d8-19a614ff892e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received unexpected event network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef for instance with vm_state stopped and task_state rebuild_spawning.
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.594 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.706 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 8a68398d-9640-49e2-a049-3da4f7b371c5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.707 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059951.7060761, 8a68398d-9640-49e2-a049-3da4f7b371c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.707 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] VM Resumed (Lifecycle Event)
Nov 25 08:39:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:39:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2383675617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.709 253542 DEBUG nova.compute.manager [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.710 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.718 253542 INFO nova.virt.libvirt.driver [-] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Instance spawned successfully.
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.718 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.727 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.731 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.743 253542 DEBUG oslo_concurrency.processutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.765 253542 DEBUG nova.storage.rbd_utils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 291c3536-48c4-40eb-a910-9494484e8668_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.770 253542 DEBUG oslo_concurrency.processutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.823 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.824 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059951.706229, 8a68398d-9640-49e2-a049-3da4f7b371c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.825 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] VM Started (Lifecycle Event)
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.833 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.833 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.834 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.835 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.835 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.836 253542 DEBUG nova.virt.libvirt.driver [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.844 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.848 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.867 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.904 253542 DEBUG nova.compute.manager [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:11 compute-0 ovn_controller[152859]: 2025-11-25T08:39:11Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:18:f5:a2 10.100.0.10
Nov 25 08:39:11 compute-0 ovn_controller[152859]: 2025-11-25T08:39:11Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:18:f5:a2 10.100.0.10
Nov 25 08:39:11 compute-0 nova_compute[253538]: 2025-11-25 08:39:11.946 253542 INFO nova.compute.manager [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] bringing vm to original state: 'stopped'
Nov 25 08:39:11 compute-0 sudo[339094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:39:11 compute-0 sudo[339094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:11 compute-0 sudo[339094]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.021 253542 DEBUG oslo_concurrency.lockutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "8a68398d-9640-49e2-a049-3da4f7b371c5" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.022 253542 DEBUG oslo_concurrency.lockutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.022 253542 DEBUG nova.compute.manager [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.028 253542 DEBUG nova.compute.manager [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 08:39:12 compute-0 sudo[339138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:39:12 compute-0 sudo[339138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:12 compute-0 sudo[339138]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:12 compute-0 sudo[339163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:39:12 compute-0 sudo[339163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:12 compute-0 sudo[339163]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:12 compute-0 kernel: tap203c150c-93 (unregistering): left promiscuous mode
Nov 25 08:39:12 compute-0 NetworkManager[48915]: <info>  [1764059952.1894] device (tap203c150c-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:39:12 compute-0 ovn_controller[152859]: 2025-11-25T08:39:12Z|00861|binding|INFO|Releasing lport 203c150c-9339-4520-8e52-01740854c5ef from this chassis (sb_readonly=0)
Nov 25 08:39:12 compute-0 ovn_controller[152859]: 2025-11-25T08:39:12Z|00862|binding|INFO|Setting lport 203c150c-9339-4520-8e52-01740854c5ef down in Southbound
Nov 25 08:39:12 compute-0 ovn_controller[152859]: 2025-11-25T08:39:12Z|00863|binding|INFO|Removing iface tap203c150c-93 ovn-installed in OVS
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.202 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.229 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:12 compute-0 sudo[339188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:39:12 compute-0 sudo[339188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:39:12 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000057.scope: Deactivated successfully.
Nov 25 08:39:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4209768645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:12 compute-0 systemd-machined[215790]: Machine qemu-109-instance-00000057 terminated.
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.261 253542 DEBUG oslo_concurrency.processutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.263 253542 DEBUG nova.virt.libvirt.vif [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:39:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-818825662',display_name='tempest-ServersTestJSON-server-818825662',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-818825662',id=89,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-h7ipp38q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:39:05Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=291c3536-48c4-40eb-a910-9494484e8668,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "address": "fa:16:3e:96:e3:8d", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c1a57e6-fd", "ovs_interfaceid": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.264 253542 DEBUG nova.network.os_vif_util [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "address": "fa:16:3e:96:e3:8d", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c1a57e6-fd", "ovs_interfaceid": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.265 253542 DEBUG nova.network.os_vif_util [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:e3:8d,bridge_name='br-int',has_traffic_filtering=True,id=0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c1a57e6-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.266 253542 DEBUG nova.objects.instance [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'pci_devices' on Instance uuid 291c3536-48c4-40eb-a910-9494484e8668 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.278 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:39:12 compute-0 nova_compute[253538]:   <uuid>291c3536-48c4-40eb-a910-9494484e8668</uuid>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   <name>instance-00000059</name>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersTestJSON-server-818825662</nova:name>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:39:11</nova:creationTime>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:39:12 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:39:12 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:39:12 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:39:12 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:39:12 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:39:12 compute-0 nova_compute[253538]:         <nova:user uuid="fdcb005cc49a4dfa82152f2c0817cc94">tempest-ServersTestJSON-1426188226-project-member</nova:user>
Nov 25 08:39:12 compute-0 nova_compute[253538]:         <nova:project uuid="b730f086c4b94185afab5e10fa2e8181">tempest-ServersTestJSON-1426188226</nova:project>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:39:12 compute-0 nova_compute[253538]:         <nova:port uuid="0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e">
Nov 25 08:39:12 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <system>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <entry name="serial">291c3536-48c4-40eb-a910-9494484e8668</entry>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <entry name="uuid">291c3536-48c4-40eb-a910-9494484e8668</entry>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     </system>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   <os>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   </os>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   <features>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   </features>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/291c3536-48c4-40eb-a910-9494484e8668_disk">
Nov 25 08:39:12 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       </source>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:39:12 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/291c3536-48c4-40eb-a910-9494484e8668_disk.config">
Nov 25 08:39:12 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       </source>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:39:12 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:96:e3:8d"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <target dev="tap0c1a57e6-fd"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/291c3536-48c4-40eb-a910-9494484e8668/console.log" append="off"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <video>
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     </video>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:39:12 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:39:12 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:39:12 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:39:12 compute-0 nova_compute[253538]: </domain>
Nov 25 08:39:12 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.279 253542 DEBUG nova.compute.manager [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Preparing to wait for external event network-vif-plugged-0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.280 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "291c3536-48c4-40eb-a910-9494484e8668-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.280 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "291c3536-48c4-40eb-a910-9494484e8668-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.280 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "291c3536-48c4-40eb-a910-9494484e8668-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.281 253542 DEBUG nova.virt.libvirt.vif [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:39:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-818825662',display_name='tempest-ServersTestJSON-server-818825662',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-818825662',id=89,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-h7ipp38q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:39:05Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=291c3536-48c4-40eb-a910-9494484e8668,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "address": "fa:16:3e:96:e3:8d", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c1a57e6-fd", "ovs_interfaceid": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.281 253542 DEBUG nova.network.os_vif_util [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "address": "fa:16:3e:96:e3:8d", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c1a57e6-fd", "ovs_interfaceid": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.282 253542 DEBUG nova.network.os_vif_util [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:e3:8d,bridge_name='br-int',has_traffic_filtering=True,id=0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c1a57e6-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.283 253542 DEBUG os_vif [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:e3:8d,bridge_name='br-int',has_traffic_filtering=True,id=0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c1a57e6-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.283 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.284 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.284 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.286 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.287 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c1a57e6-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.287 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0c1a57e6-fd, col_values=(('external_ids', {'iface-id': '0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:e3:8d', 'vm-uuid': '291c3536-48c4-40eb-a910-9494484e8668'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.288 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:12 compute-0 NetworkManager[48915]: <info>  [1764059952.2896] manager: (tap0c1a57e6-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.295 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.296 253542 INFO os_vif [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:e3:8d,bridge_name='br-int',has_traffic_filtering=True,id=0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c1a57e6-fd')
Nov 25 08:39:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:12.346 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:c9:50 10.100.0.11'], port_security=['fa:16:3e:30:c9:50 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8a68398d-9640-49e2-a049-3da4f7b371c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b676104-a53a-419a-a348-631c409e45c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6851917992b149818e8b44146c66bfc3', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ee3f370c-3523-4fc9-bede-12723b8659c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=062facaf-eaf1-4c53-a009-db8e8d32ad6e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=203c150c-9339-4520-8e52-01740854c5ef) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:12.347 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 203c150c-9339-4520-8e52-01740854c5ef in datapath 2b676104-a53a-419a-a348-631c409e45c0 unbound from our chassis
Nov 25 08:39:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:12.348 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b676104-a53a-419a-a348-631c409e45c0
Nov 25 08:39:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:12.370 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5524ab6e-0a6c-4d17-9745-ecbbd0c0849e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.381 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.381 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.382 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No VIF found with MAC fa:16:3e:96:e3:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.382 253542 INFO nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Using config drive
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.404 253542 DEBUG nova.storage.rbd_utils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 291c3536-48c4-40eb-a910-9494484e8668_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:12.407 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4f7c44c4-a78e-44a0-9179-bf56525734c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:12.410 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[815f07e3-fe16-4980-bd3a-bc5648f6bcef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:12.436 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae9662e-a657-48a4-be41-199be0818803]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.451 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:12.458 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8c9771-49e6-4bd0-af0b-fa552b9f1224]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b676104-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:55:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522998, 'reachable_time': 29715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339259, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.462 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.465 253542 INFO nova.virt.libvirt.driver [-] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Instance destroyed successfully.
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.465 253542 DEBUG nova.compute.manager [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:12.474 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c84244f-10b0-4d4e-980b-b944fbae24ec]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2b676104-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523012, 'tstamp': 523012}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339270, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2b676104-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523015, 'tstamp': 523015}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339270, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:12.476 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b676104-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.477 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:12.488 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b676104-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:12.488 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.488 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:12.489 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b676104-a0, col_values=(('external_ids', {'iface-id': 'a70ff8dd-5248-427b-8c9b-80eee3a671f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:12.489 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.527 253542 DEBUG oslo_concurrency.lockutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.552 253542 DEBUG oslo_concurrency.lockutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.552 253542 DEBUG oslo_concurrency.lockutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.552 253542 DEBUG nova.objects.instance [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:39:12 compute-0 ceph-mon[75015]: pgmap v1685: 321 pgs: 321 active+clean; 386 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 227 op/s
Nov 25 08:39:12 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2383675617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:12 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4209768645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:12 compute-0 nova_compute[253538]: 2025-11-25 08:39:12.603 253542 DEBUG oslo_concurrency.lockutils [None req-4a45df64-2e4a-47f1-addb-b19d36b89ad1 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:12 compute-0 sudo[339188]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:39:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:39:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:39:12 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:39:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:39:12 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:39:12 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 2b9a8483-6a8b-4e68-8d65-a642efb276dd does not exist
Nov 25 08:39:12 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c9ce4a2d-a87a-44e5-b2b0-634a80a8c874 does not exist
Nov 25 08:39:12 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 6862161c-88bd-415e-b926-1604357a491f does not exist
Nov 25 08:39:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:39:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:39:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:39:12 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:39:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:39:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:39:12 compute-0 sudo[339291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:39:12 compute-0 sudo[339291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:12 compute-0 sudo[339291]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:13 compute-0 sudo[339316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:39:13 compute-0 sudo[339316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:13 compute-0 sudo[339316]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:13 compute-0 sudo[339341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:39:13 compute-0 sudo[339341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:13 compute-0 sudo[339341]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:13 compute-0 sudo[339366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:39:13 compute-0 sudo[339366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.344 253542 INFO nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Creating config drive at /var/lib/nova/instances/291c3536-48c4-40eb-a910-9494484e8668/disk.config
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.353 253542 DEBUG oslo_concurrency.processutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/291c3536-48c4-40eb-a910-9494484e8668/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpelln1mhs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 400 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.0 MiB/s wr, 270 op/s
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.514 253542 DEBUG oslo_concurrency.processutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/291c3536-48c4-40eb-a910-9494484e8668/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpelln1mhs" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.544 253542 DEBUG nova.storage.rbd_utils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 291c3536-48c4-40eb-a910-9494484e8668_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.550 253542 DEBUG oslo_concurrency.processutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/291c3536-48c4-40eb-a910-9494484e8668/disk.config 291c3536-48c4-40eb-a910-9494484e8668_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:39:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:39:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:39:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:39:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:39:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:39:13 compute-0 ceph-mon[75015]: pgmap v1686: 321 pgs: 321 active+clean; 400 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.0 MiB/s wr, 270 op/s
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.618 253542 DEBUG nova.compute.manager [req-46a10b00-8c93-46a7-8507-81fc3d3e9eee req-fd589bd4-1807-4f69-9125-563f6bee397b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received event network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.622 253542 DEBUG oslo_concurrency.lockutils [req-46a10b00-8c93-46a7-8507-81fc3d3e9eee req-fd589bd4-1807-4f69-9125-563f6bee397b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.622 253542 DEBUG oslo_concurrency.lockutils [req-46a10b00-8c93-46a7-8507-81fc3d3e9eee req-fd589bd4-1807-4f69-9125-563f6bee397b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.623 253542 DEBUG oslo_concurrency.lockutils [req-46a10b00-8c93-46a7-8507-81fc3d3e9eee req-fd589bd4-1807-4f69-9125-563f6bee397b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.623 253542 DEBUG nova.compute.manager [req-46a10b00-8c93-46a7-8507-81fc3d3e9eee req-fd589bd4-1807-4f69-9125-563f6bee397b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] No waiting events found dispatching network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.623 253542 WARNING nova.compute.manager [req-46a10b00-8c93-46a7-8507-81fc3d3e9eee req-fd589bd4-1807-4f69-9125-563f6bee397b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received unexpected event network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef for instance with vm_state stopped and task_state None.
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.624 253542 DEBUG nova.compute.manager [req-46a10b00-8c93-46a7-8507-81fc3d3e9eee req-fd589bd4-1807-4f69-9125-563f6bee397b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received event network-vif-unplugged-203c150c-9339-4520-8e52-01740854c5ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.624 253542 DEBUG oslo_concurrency.lockutils [req-46a10b00-8c93-46a7-8507-81fc3d3e9eee req-fd589bd4-1807-4f69-9125-563f6bee397b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.624 253542 DEBUG oslo_concurrency.lockutils [req-46a10b00-8c93-46a7-8507-81fc3d3e9eee req-fd589bd4-1807-4f69-9125-563f6bee397b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.625 253542 DEBUG oslo_concurrency.lockutils [req-46a10b00-8c93-46a7-8507-81fc3d3e9eee req-fd589bd4-1807-4f69-9125-563f6bee397b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.625 253542 DEBUG nova.compute.manager [req-46a10b00-8c93-46a7-8507-81fc3d3e9eee req-fd589bd4-1807-4f69-9125-563f6bee397b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] No waiting events found dispatching network-vif-unplugged-203c150c-9339-4520-8e52-01740854c5ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.625 253542 WARNING nova.compute.manager [req-46a10b00-8c93-46a7-8507-81fc3d3e9eee req-fd589bd4-1807-4f69-9125-563f6bee397b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received unexpected event network-vif-unplugged-203c150c-9339-4520-8e52-01740854c5ef for instance with vm_state stopped and task_state None.
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.628 253542 DEBUG nova.network.neutron [req-5c17d240-6f59-41c3-ad6f-2ce0b93e3627 req-8a14e9fe-5d7c-44c9-b645-e4b3ad710557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Updated VIF entry in instance network info cache for port 0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.629 253542 DEBUG nova.network.neutron [req-5c17d240-6f59-41c3-ad6f-2ce0b93e3627 req-8a14e9fe-5d7c-44c9-b645-e4b3ad710557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Updating instance_info_cache with network_info: [{"id": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "address": "fa:16:3e:96:e3:8d", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c1a57e6-fd", "ovs_interfaceid": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.643 253542 DEBUG oslo_concurrency.lockutils [req-5c17d240-6f59-41c3-ad6f-2ce0b93e3627 req-8a14e9fe-5d7c-44c9-b645-e4b3ad710557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-291c3536-48c4-40eb-a910-9494484e8668" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:39:13 compute-0 podman[339453]: 2025-11-25 08:39:13.66605304 +0000 UTC m=+0.035507968 container create 199b01efafc3f9cfcf502be34e88fa202ff5afda76006df89e1223e87ff1eb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 08:39:13 compute-0 systemd[1]: Started libpod-conmon-199b01efafc3f9cfcf502be34e88fa202ff5afda76006df89e1223e87ff1eb9d.scope.
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.735 253542 DEBUG oslo_concurrency.processutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/291c3536-48c4-40eb-a910-9494484e8668/disk.config 291c3536-48c4-40eb-a910-9494484e8668_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.735 253542 INFO nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Deleting local config drive /var/lib/nova/instances/291c3536-48c4-40eb-a910-9494484e8668/disk.config because it was imported into RBD.
Nov 25 08:39:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:39:13 compute-0 podman[339453]: 2025-11-25 08:39:13.649373056 +0000 UTC m=+0.018827984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:39:13 compute-0 podman[339453]: 2025-11-25 08:39:13.757428601 +0000 UTC m=+0.126883539 container init 199b01efafc3f9cfcf502be34e88fa202ff5afda76006df89e1223e87ff1eb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_torvalds, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 08:39:13 compute-0 podman[339453]: 2025-11-25 08:39:13.764591516 +0000 UTC m=+0.134046474 container start 199b01efafc3f9cfcf502be34e88fa202ff5afda76006df89e1223e87ff1eb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:39:13 compute-0 podman[339453]: 2025-11-25 08:39:13.768446971 +0000 UTC m=+0.137901899 container attach 199b01efafc3f9cfcf502be34e88fa202ff5afda76006df89e1223e87ff1eb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_torvalds, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 25 08:39:13 compute-0 beautiful_torvalds[339487]: 167 167
Nov 25 08:39:13 compute-0 systemd[1]: libpod-199b01efafc3f9cfcf502be34e88fa202ff5afda76006df89e1223e87ff1eb9d.scope: Deactivated successfully.
Nov 25 08:39:13 compute-0 podman[339453]: 2025-11-25 08:39:13.77170546 +0000 UTC m=+0.141160378 container died 199b01efafc3f9cfcf502be34e88fa202ff5afda76006df89e1223e87ff1eb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:39:13 compute-0 systemd-udevd[338997]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:39:13 compute-0 NetworkManager[48915]: <info>  [1764059953.7952] manager: (tap0c1a57e6-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/364)
Nov 25 08:39:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7c6ed8957a1209c40c1fb7709b544c3a7b897a0297318c79b50200f3abb1235-merged.mount: Deactivated successfully.
Nov 25 08:39:13 compute-0 kernel: tap0c1a57e6-fd: entered promiscuous mode
Nov 25 08:39:13 compute-0 ovn_controller[152859]: 2025-11-25T08:39:13Z|00864|binding|INFO|Claiming lport 0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e for this chassis.
Nov 25 08:39:13 compute-0 ovn_controller[152859]: 2025-11-25T08:39:13Z|00865|binding|INFO|0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e: Claiming fa:16:3e:96:e3:8d 10.100.0.11
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.809 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:13 compute-0 NetworkManager[48915]: <info>  [1764059953.8134] device (tap0c1a57e6-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:39:13 compute-0 NetworkManager[48915]: <info>  [1764059953.8142] device (tap0c1a57e6-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:39:13 compute-0 podman[339453]: 2025-11-25 08:39:13.81426469 +0000 UTC m=+0.183719608 container remove 199b01efafc3f9cfcf502be34e88fa202ff5afda76006df89e1223e87ff1eb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:39:13 compute-0 systemd[1]: libpod-conmon-199b01efafc3f9cfcf502be34e88fa202ff5afda76006df89e1223e87ff1eb9d.scope: Deactivated successfully.
Nov 25 08:39:13 compute-0 ovn_controller[152859]: 2025-11-25T08:39:13Z|00866|binding|INFO|Setting lport 0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e ovn-installed in OVS
Nov 25 08:39:13 compute-0 nova_compute[253538]: 2025-11-25 08:39:13.837 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:13 compute-0 systemd-machined[215790]: New machine qemu-110-instance-00000059.
Nov 25 08:39:13 compute-0 systemd[1]: Started Virtual Machine qemu-110-instance-00000059.
Nov 25 08:39:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:13.915 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:e3:8d 10.100.0.11'], port_security=['fa:16:3e:96:e3:8d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '291c3536-48c4-40eb-a910-9494484e8668', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92e26514-5b15-410b-8885-6773bc03c4ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b730f086c4b94185afab5e10fa2e8181', 'neutron:revision_number': '2', 'neutron:security_group_ids': '420b1f20-9bb9-442a-978c-444ed9d0cb04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ae0a3b6-6c0f-4ef3-ae20-c7b5b6ede8ac, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:13 compute-0 ovn_controller[152859]: 2025-11-25T08:39:13Z|00867|binding|INFO|Setting lport 0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e up in Southbound
Nov 25 08:39:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:13.916 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e in datapath 92e26514-5b15-410b-8885-6773bc03c4ce bound to our chassis
Nov 25 08:39:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:13.917 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92e26514-5b15-410b-8885-6773bc03c4ce
Nov 25 08:39:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:13.937 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[169d99b2-cfa0-4a81-9786-dfeac7c5a627]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:13.973 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[33bc9d61-58d2-4d84-9227-5b93b0b60ffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:13.976 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[791ce993-743d-49a2-b9a3-574490dd0da0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:14.017 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[026c9246-f2c0-43b1-9a92-ccf4182c29cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:14 compute-0 podman[339534]: 2025-11-25 08:39:14.038410868 +0000 UTC m=+0.055864504 container create e81e0e3c7a7b0afcd39b21d58010baa4ec1004f6447d86d35d2c45aca582b02d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 08:39:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:14.041 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ec11ba89-b340-465d-b3e7-9bf89fd7d3bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92e26514-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:84:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522366, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339550, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:14.060 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[74e38ff0-6607-4be7-9860-ce14827bcb79]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522379, 'tstamp': 522379}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339552, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522383, 'tstamp': 522383}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339552, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:14.062 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92e26514-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:14 compute-0 nova_compute[253538]: 2025-11-25 08:39:14.063 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:14 compute-0 nova_compute[253538]: 2025-11-25 08:39:14.069 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:14.070 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92e26514-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:14.071 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:14.071 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92e26514-50, col_values=(('external_ids', {'iface-id': '185991f0-acab-400e-baff-76794035d44a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:14.071 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:14 compute-0 systemd[1]: Started libpod-conmon-e81e0e3c7a7b0afcd39b21d58010baa4ec1004f6447d86d35d2c45aca582b02d.scope.
Nov 25 08:39:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15b0c0c0d6a28154d9e79a91336a4beab0a0dc50e6e4eb8a551b7730c48ba5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:39:14 compute-0 podman[339534]: 2025-11-25 08:39:14.018905576 +0000 UTC m=+0.036359242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15b0c0c0d6a28154d9e79a91336a4beab0a0dc50e6e4eb8a551b7730c48ba5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15b0c0c0d6a28154d9e79a91336a4beab0a0dc50e6e4eb8a551b7730c48ba5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15b0c0c0d6a28154d9e79a91336a4beab0a0dc50e6e4eb8a551b7730c48ba5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15b0c0c0d6a28154d9e79a91336a4beab0a0dc50e6e4eb8a551b7730c48ba5f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:39:14 compute-0 podman[339534]: 2025-11-25 08:39:14.152482427 +0000 UTC m=+0.169936123 container init e81e0e3c7a7b0afcd39b21d58010baa4ec1004f6447d86d35d2c45aca582b02d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 08:39:14 compute-0 podman[339534]: 2025-11-25 08:39:14.16730145 +0000 UTC m=+0.184755106 container start e81e0e3c7a7b0afcd39b21d58010baa4ec1004f6447d86d35d2c45aca582b02d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:39:14 compute-0 podman[339534]: 2025-11-25 08:39:14.171775203 +0000 UTC m=+0.189228909 container attach e81e0e3c7a7b0afcd39b21d58010baa4ec1004f6447d86d35d2c45aca582b02d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.116 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059955.115628, 291c3536-48c4-40eb-a910-9494484e8668 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.116 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 291c3536-48c4-40eb-a910-9494484e8668] VM Started (Lifecycle Event)
Nov 25 08:39:15 compute-0 podman[339615]: 2025-11-25 08:39:15.127129338 +0000 UTC m=+0.085146092 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.145 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.149 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059955.1195407, 291c3536-48c4-40eb-a910-9494484e8668 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.149 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 291c3536-48c4-40eb-a910-9494484e8668] VM Paused (Lifecycle Event)
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.174 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.177 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.196 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 291c3536-48c4-40eb-a910-9494484e8668] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:39:15 compute-0 determined_kapitsa[339556]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:39:15 compute-0 determined_kapitsa[339556]: --> relative data size: 1.0
Nov 25 08:39:15 compute-0 determined_kapitsa[339556]: --> All data devices are unavailable
Nov 25 08:39:15 compute-0 systemd[1]: libpod-e81e0e3c7a7b0afcd39b21d58010baa4ec1004f6447d86d35d2c45aca582b02d.scope: Deactivated successfully.
Nov 25 08:39:15 compute-0 systemd[1]: libpod-e81e0e3c7a7b0afcd39b21d58010baa4ec1004f6447d86d35d2c45aca582b02d.scope: Consumed 1.026s CPU time.
Nov 25 08:39:15 compute-0 podman[339534]: 2025-11-25 08:39:15.297378727 +0000 UTC m=+1.314832353 container died e81e0e3c7a7b0afcd39b21d58010baa4ec1004f6447d86d35d2c45aca582b02d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a15b0c0c0d6a28154d9e79a91336a4beab0a0dc50e6e4eb8a551b7730c48ba5f-merged.mount: Deactivated successfully.
Nov 25 08:39:15 compute-0 podman[339534]: 2025-11-25 08:39:15.373040989 +0000 UTC m=+1.390494615 container remove e81e0e3c7a7b0afcd39b21d58010baa4ec1004f6447d86d35d2c45aca582b02d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 08:39:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 414 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 845 KiB/s rd, 5.7 MiB/s wr, 215 op/s
Nov 25 08:39:15 compute-0 systemd[1]: libpod-conmon-e81e0e3c7a7b0afcd39b21d58010baa4ec1004f6447d86d35d2c45aca582b02d.scope: Deactivated successfully.
Nov 25 08:39:15 compute-0 sudo[339366]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:15 compute-0 sudo[339666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:39:15 compute-0 sudo[339666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:15 compute-0 sudo[339666]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:15 compute-0 sudo[339691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:39:15 compute-0 sudo[339691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:15 compute-0 sudo[339691]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:15 compute-0 sudo[339716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:39:15 compute-0 sudo[339716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:15 compute-0 sudo[339716]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:15 compute-0 ovn_controller[152859]: 2025-11-25T08:39:15Z|00868|binding|INFO|Releasing lport a70ff8dd-5248-427b-8c9b-80eee3a671f3 from this chassis (sb_readonly=0)
Nov 25 08:39:15 compute-0 ovn_controller[152859]: 2025-11-25T08:39:15Z|00869|binding|INFO|Releasing lport 185991f0-acab-400e-baff-76794035d44a from this chassis (sb_readonly=0)
Nov 25 08:39:15 compute-0 sudo[339741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.785 253542 DEBUG nova.compute.manager [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received event network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.786 253542 DEBUG oslo_concurrency.lockutils [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.786 253542 DEBUG oslo_concurrency.lockutils [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.786 253542 DEBUG oslo_concurrency.lockutils [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.786 253542 DEBUG nova.compute.manager [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] No waiting events found dispatching network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.786 253542 WARNING nova.compute.manager [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received unexpected event network-vif-plugged-203c150c-9339-4520-8e52-01740854c5ef for instance with vm_state stopped and task_state None.
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.786 253542 DEBUG nova.compute.manager [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Received event network-vif-plugged-0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.787 253542 DEBUG oslo_concurrency.lockutils [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "291c3536-48c4-40eb-a910-9494484e8668-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.787 253542 DEBUG oslo_concurrency.lockutils [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "291c3536-48c4-40eb-a910-9494484e8668-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.787 253542 DEBUG oslo_concurrency.lockutils [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "291c3536-48c4-40eb-a910-9494484e8668-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.787 253542 DEBUG nova.compute.manager [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Processing event network-vif-plugged-0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.787 253542 DEBUG nova.compute.manager [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Received event network-vif-plugged-0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.788 253542 DEBUG oslo_concurrency.lockutils [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "291c3536-48c4-40eb-a910-9494484e8668-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.788 253542 DEBUG oslo_concurrency.lockutils [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "291c3536-48c4-40eb-a910-9494484e8668-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.788 253542 DEBUG oslo_concurrency.lockutils [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "291c3536-48c4-40eb-a910-9494484e8668-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.788 253542 DEBUG nova.compute.manager [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] No waiting events found dispatching network-vif-plugged-0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.788 253542 WARNING nova.compute.manager [req-d23aed28-d30c-48d2-a273-15579ca9329e req-a958de20-fc6c-4cfb-a7fd-07f1203ce4a7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Received unexpected event network-vif-plugged-0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e for instance with vm_state building and task_state spawning.
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.789 253542 DEBUG nova.compute.manager [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:39:15 compute-0 sudo[339741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.793 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.794 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059955.7943566, 291c3536-48c4-40eb-a910-9494484e8668 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.794 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 291c3536-48c4-40eb-a910-9494484e8668] VM Resumed (Lifecycle Event)
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.796 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.802 253542 INFO nova.virt.libvirt.driver [-] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Instance spawned successfully.
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.802 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.820 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.825 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.825 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.826 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.826 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.826 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.827 253542 DEBUG nova.virt.libvirt.driver [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.836 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:39:15 compute-0 sshd-session[339664]: Invalid user docker from 193.32.162.151 port 51318
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.866 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 291c3536-48c4-40eb-a910-9494484e8668] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.902 253542 INFO nova.compute.manager [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Took 10.63 seconds to spawn the instance on the hypervisor.
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.902 253542 DEBUG nova.compute.manager [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:15 compute-0 sshd-session[339664]: Connection closed by invalid user docker 193.32.162.151 port 51318 [preauth]
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.972 253542 INFO nova.compute.manager [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Took 11.73 seconds to build instance.
Nov 25 08:39:15 compute-0 nova_compute[253538]: 2025-11-25 08:39:15.992 253542 DEBUG oslo_concurrency.lockutils [None req-4b236451-c137-44af-a9c3-29b4d9fa3c78 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "291c3536-48c4-40eb-a910-9494484e8668" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.085 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059941.0844495, 3bc210c6-9f67-440b-a11c-0b4e13e74a21 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.085 253542 INFO nova.compute.manager [-] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] VM Stopped (Lifecycle Event)
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.101 253542 DEBUG nova.compute.manager [None req-fbac0daa-22f6-42f8-9539-053b631e83a9 - - - - - -] [instance: 3bc210c6-9f67-440b-a11c-0b4e13e74a21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:16 compute-0 podman[339806]: 2025-11-25 08:39:16.221464119 +0000 UTC m=+0.043858045 container create 838935e9946df888d6c656980b8f1e201b22f14d3c09ab8b4b5e64e92d187af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:39:16 compute-0 systemd[1]: Started libpod-conmon-838935e9946df888d6c656980b8f1e201b22f14d3c09ab8b4b5e64e92d187af6.scope.
Nov 25 08:39:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:39:16 compute-0 podman[339806]: 2025-11-25 08:39:16.199682686 +0000 UTC m=+0.022076632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:39:16 compute-0 podman[339806]: 2025-11-25 08:39:16.309119599 +0000 UTC m=+0.131513575 container init 838935e9946df888d6c656980b8f1e201b22f14d3c09ab8b4b5e64e92d187af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilbur, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:39:16 compute-0 podman[339806]: 2025-11-25 08:39:16.316637783 +0000 UTC m=+0.139031709 container start 838935e9946df888d6c656980b8f1e201b22f14d3c09ab8b4b5e64e92d187af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilbur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:39:16 compute-0 podman[339806]: 2025-11-25 08:39:16.319252674 +0000 UTC m=+0.141646620 container attach 838935e9946df888d6c656980b8f1e201b22f14d3c09ab8b4b5e64e92d187af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilbur, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:39:16 compute-0 naughty_wilbur[339822]: 167 167
Nov 25 08:39:16 compute-0 systemd[1]: libpod-838935e9946df888d6c656980b8f1e201b22f14d3c09ab8b4b5e64e92d187af6.scope: Deactivated successfully.
Nov 25 08:39:16 compute-0 conmon[339822]: conmon 838935e9946df888d6c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-838935e9946df888d6c656980b8f1e201b22f14d3c09ab8b4b5e64e92d187af6.scope/container/memory.events
Nov 25 08:39:16 compute-0 podman[339806]: 2025-11-25 08:39:16.323959073 +0000 UTC m=+0.146352989 container died 838935e9946df888d6c656980b8f1e201b22f14d3c09ab8b4b5e64e92d187af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:39:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb7bafe607fa70cb5e279a2a804fbd795d87d96824a25b8b244a7939e1f1a5dc-merged.mount: Deactivated successfully.
Nov 25 08:39:16 compute-0 podman[339806]: 2025-11-25 08:39:16.363443209 +0000 UTC m=+0.185837145 container remove 838935e9946df888d6c656980b8f1e201b22f14d3c09ab8b4b5e64e92d187af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilbur, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.378 253542 DEBUG oslo_concurrency.lockutils [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "8a68398d-9640-49e2-a049-3da4f7b371c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.379 253542 DEBUG oslo_concurrency.lockutils [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.380 253542 DEBUG oslo_concurrency.lockutils [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.380 253542 DEBUG oslo_concurrency.lockutils [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.380 253542 DEBUG oslo_concurrency.lockutils [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.381 253542 INFO nova.compute.manager [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Terminating instance
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.382 253542 DEBUG nova.compute.manager [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:39:16 compute-0 systemd[1]: libpod-conmon-838935e9946df888d6c656980b8f1e201b22f14d3c09ab8b4b5e64e92d187af6.scope: Deactivated successfully.
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.393 253542 INFO nova.virt.libvirt.driver [-] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Instance destroyed successfully.
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.393 253542 DEBUG nova.objects.instance [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'resources' on Instance uuid 8a68398d-9640-49e2-a049-3da4f7b371c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.411 253542 DEBUG nova.virt.libvirt.vif [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:38:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1159083655',display_name='tempest-tempest.common.compute-instance-1159083655',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1159083655',id=87,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:39:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6851917992b149818e8b44146c66bfc3',ramdisk_id='',reservation_id='r-u9oo6n6b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-678529119',owner_user_name='tempest-ServerActionsTestOtherA-678529119-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:39:12Z,user_data=None,user_id='24fa34332e6f4b628514969bbf76e94b',uuid=8a68398d-9640-49e2-a049-3da4f7b371c5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.412 253542 DEBUG nova.network.os_vif_util [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converting VIF {"id": "203c150c-9339-4520-8e52-01740854c5ef", "address": "fa:16:3e:30:c9:50", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap203c150c-93", "ovs_interfaceid": "203c150c-9339-4520-8e52-01740854c5ef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.412 253542 DEBUG nova.network.os_vif_util [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:50,bridge_name='br-int',has_traffic_filtering=True,id=203c150c-9339-4520-8e52-01740854c5ef,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap203c150c-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.413 253542 DEBUG os_vif [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:50,bridge_name='br-int',has_traffic_filtering=True,id=203c150c-9339-4520-8e52-01740854c5ef,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap203c150c-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.415 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.415 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap203c150c-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.417 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.418 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.421 253542 INFO os_vif [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:50,bridge_name='br-int',has_traffic_filtering=True,id=203c150c-9339-4520-8e52-01740854c5ef,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap203c150c-93')
Nov 25 08:39:16 compute-0 ceph-mon[75015]: pgmap v1687: 321 pgs: 321 active+clean; 414 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 845 KiB/s rd, 5.7 MiB/s wr, 215 op/s
Nov 25 08:39:16 compute-0 podman[339863]: 2025-11-25 08:39:16.592291895 +0000 UTC m=+0.051480964 container create 3540857a4bd2226ac921e8ee486a140e06456211c1240a2762b7ebc4d251b78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.597 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:16 compute-0 systemd[1]: Started libpod-conmon-3540857a4bd2226ac921e8ee486a140e06456211c1240a2762b7ebc4d251b78e.scope.
Nov 25 08:39:16 compute-0 podman[339863]: 2025-11-25 08:39:16.57006061 +0000 UTC m=+0.029249719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:39:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54807549855dd975bf6b0de1a971cf4d08c7c400d33a74495c40dc8bd9aabc7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54807549855dd975bf6b0de1a971cf4d08c7c400d33a74495c40dc8bd9aabc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54807549855dd975bf6b0de1a971cf4d08c7c400d33a74495c40dc8bd9aabc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54807549855dd975bf6b0de1a971cf4d08c7c400d33a74495c40dc8bd9aabc7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:39:16 compute-0 podman[339863]: 2025-11-25 08:39:16.680500149 +0000 UTC m=+0.139689358 container init 3540857a4bd2226ac921e8ee486a140e06456211c1240a2762b7ebc4d251b78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:39:16 compute-0 podman[339863]: 2025-11-25 08:39:16.68859539 +0000 UTC m=+0.147784459 container start 3540857a4bd2226ac921e8ee486a140e06456211c1240a2762b7ebc4d251b78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:39:16 compute-0 podman[339863]: 2025-11-25 08:39:16.691718265 +0000 UTC m=+0.150907334 container attach 3540857a4bd2226ac921e8ee486a140e06456211c1240a2762b7ebc4d251b78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.745 253542 INFO nova.virt.libvirt.driver [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Deleting instance files /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5_del
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.746 253542 INFO nova.virt.libvirt.driver [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Deletion of /var/lib/nova/instances/8a68398d-9640-49e2-a049-3da4f7b371c5_del complete
Nov 25 08:39:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.986 253542 INFO nova.compute.manager [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Took 0.60 seconds to destroy the instance on the hypervisor.
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.987 253542 DEBUG oslo.service.loopingcall [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.987 253542 DEBUG nova.compute.manager [-] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:39:16 compute-0 nova_compute[253538]: 2025-11-25 08:39:16.987 253542 DEBUG nova.network.neutron [-] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:39:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 394 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 504 KiB/s rd, 5.7 MiB/s wr, 202 op/s
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]: {
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:     "0": [
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:         {
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "devices": [
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "/dev/loop3"
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             ],
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "lv_name": "ceph_lv0",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "lv_size": "21470642176",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "name": "ceph_lv0",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "tags": {
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.cluster_name": "ceph",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.crush_device_class": "",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.encrypted": "0",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.osd_id": "0",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.type": "block",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.vdo": "0"
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             },
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "type": "block",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "vg_name": "ceph_vg0"
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:         }
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:     ],
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:     "1": [
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:         {
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "devices": [
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "/dev/loop4"
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             ],
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "lv_name": "ceph_lv1",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "lv_size": "21470642176",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "name": "ceph_lv1",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "tags": {
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.cluster_name": "ceph",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.crush_device_class": "",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.encrypted": "0",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.osd_id": "1",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.type": "block",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.vdo": "0"
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             },
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "type": "block",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "vg_name": "ceph_vg1"
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:         }
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:     ],
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:     "2": [
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:         {
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "devices": [
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "/dev/loop5"
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             ],
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "lv_name": "ceph_lv2",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "lv_size": "21470642176",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "name": "ceph_lv2",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "tags": {
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.cluster_name": "ceph",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.crush_device_class": "",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.encrypted": "0",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.osd_id": "2",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.type": "block",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:                 "ceph.vdo": "0"
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             },
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "type": "block",
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:             "vg_name": "ceph_vg2"
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:         }
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]:     ]
Nov 25 08:39:17 compute-0 inspiring_ptolemy[339880]: }
Nov 25 08:39:17 compute-0 systemd[1]: libpod-3540857a4bd2226ac921e8ee486a140e06456211c1240a2762b7ebc4d251b78e.scope: Deactivated successfully.
Nov 25 08:39:17 compute-0 podman[339863]: 2025-11-25 08:39:17.569941158 +0000 UTC m=+1.029130257 container died 3540857a4bd2226ac921e8ee486a140e06456211c1240a2762b7ebc4d251b78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:39:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-b54807549855dd975bf6b0de1a971cf4d08c7c400d33a74495c40dc8bd9aabc7-merged.mount: Deactivated successfully.
Nov 25 08:39:17 compute-0 podman[339863]: 2025-11-25 08:39:17.826765077 +0000 UTC m=+1.285954156 container remove 3540857a4bd2226ac921e8ee486a140e06456211c1240a2762b7ebc4d251b78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 08:39:17 compute-0 nova_compute[253538]: 2025-11-25 08:39:17.855 253542 DEBUG nova.network.neutron [-] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:17 compute-0 systemd[1]: libpod-conmon-3540857a4bd2226ac921e8ee486a140e06456211c1240a2762b7ebc4d251b78e.scope: Deactivated successfully.
Nov 25 08:39:17 compute-0 nova_compute[253538]: 2025-11-25 08:39:17.881 253542 INFO nova.compute.manager [-] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Took 0.89 seconds to deallocate network for instance.
Nov 25 08:39:17 compute-0 sudo[339741]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:17 compute-0 nova_compute[253538]: 2025-11-25 08:39:17.923 253542 DEBUG oslo_concurrency.lockutils [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:17 compute-0 nova_compute[253538]: 2025-11-25 08:39:17.924 253542 DEBUG oslo_concurrency.lockutils [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:17 compute-0 sudo[339900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:39:17 compute-0 sudo[339900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:17 compute-0 sudo[339900]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:17 compute-0 nova_compute[253538]: 2025-11-25 08:39:17.985 253542 DEBUG nova.compute.manager [req-f1eb9987-8b90-4d60-8eb6-f15eddc7544d req-a1e81f5c-acf6-44cb-92a6-ca3fcdacdb40 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Received event network-vif-deleted-203c150c-9339-4520-8e52-01740854c5ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:18 compute-0 sudo[339925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:39:18 compute-0 sudo[339925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:18 compute-0 sudo[339925]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.043 253542 DEBUG oslo_concurrency.processutils [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:18 compute-0 sudo[339950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:39:18 compute-0 sudo[339950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:18 compute-0 sudo[339950]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:18 compute-0 sudo[339976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:39:18 compute-0 sudo[339976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:39:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3079264443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.488 253542 DEBUG oslo_concurrency.processutils [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.495 253542 DEBUG nova.compute.provider_tree [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.510 253542 DEBUG nova.scheduler.client.report [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.537 253542 DEBUG oslo_concurrency.lockutils [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:18 compute-0 ceph-mon[75015]: pgmap v1688: 321 pgs: 321 active+clean; 394 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 504 KiB/s rd, 5.7 MiB/s wr, 202 op/s
Nov 25 08:39:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3079264443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.569 253542 INFO nova.scheduler.client.report [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Deleted allocations for instance 8a68398d-9640-49e2-a049-3da4f7b371c5
Nov 25 08:39:18 compute-0 podman[340062]: 2025-11-25 08:39:18.625991217 +0000 UTC m=+0.068146989 container create b3b807793c3b39cbb1d7192cd29cd4aceea3bfeb1df71762633a98dd5326d603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.631 253542 DEBUG oslo_concurrency.lockutils [None req-7459adfa-b12a-43c9-8eae-2eccd31cce29 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "8a68398d-9640-49e2-a049-3da4f7b371c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.660 253542 DEBUG oslo_concurrency.lockutils [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "291c3536-48c4-40eb-a910-9494484e8668" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.661 253542 DEBUG oslo_concurrency.lockutils [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "291c3536-48c4-40eb-a910-9494484e8668" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.661 253542 DEBUG oslo_concurrency.lockutils [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "291c3536-48c4-40eb-a910-9494484e8668-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.662 253542 DEBUG oslo_concurrency.lockutils [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "291c3536-48c4-40eb-a910-9494484e8668-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.662 253542 DEBUG oslo_concurrency.lockutils [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "291c3536-48c4-40eb-a910-9494484e8668-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.663 253542 INFO nova.compute.manager [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Terminating instance
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.664 253542 DEBUG nova.compute.manager [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:39:18 compute-0 systemd[1]: Started libpod-conmon-b3b807793c3b39cbb1d7192cd29cd4aceea3bfeb1df71762633a98dd5326d603.scope.
Nov 25 08:39:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:39:18 compute-0 podman[340062]: 2025-11-25 08:39:18.601648053 +0000 UTC m=+0.043803865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:39:18 compute-0 kernel: tap0c1a57e6-fd (unregistering): left promiscuous mode
Nov 25 08:39:18 compute-0 podman[340062]: 2025-11-25 08:39:18.711706042 +0000 UTC m=+0.153861904 container init b3b807793c3b39cbb1d7192cd29cd4aceea3bfeb1df71762633a98dd5326d603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 08:39:18 compute-0 NetworkManager[48915]: <info>  [1764059958.7122] device (tap0c1a57e6-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:39:18 compute-0 ovn_controller[152859]: 2025-11-25T08:39:18Z|00870|binding|INFO|Releasing lport 0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e from this chassis (sb_readonly=0)
Nov 25 08:39:18 compute-0 ovn_controller[152859]: 2025-11-25T08:39:18Z|00871|binding|INFO|Setting lport 0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e down in Southbound
Nov 25 08:39:18 compute-0 podman[340062]: 2025-11-25 08:39:18.726075965 +0000 UTC m=+0.168231767 container start b3b807793c3b39cbb1d7192cd29cd4aceea3bfeb1df71762633a98dd5326d603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:18 compute-0 ovn_controller[152859]: 2025-11-25T08:39:18Z|00872|binding|INFO|Removing iface tap0c1a57e6-fd ovn-installed in OVS
Nov 25 08:39:18 compute-0 podman[340062]: 2025-11-25 08:39:18.731563793 +0000 UTC m=+0.173719595 container attach b3b807793c3b39cbb1d7192cd29cd4aceea3bfeb1df71762633a98dd5326d603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:39:18 compute-0 eager_perlman[340076]: 167 167
Nov 25 08:39:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:18.739 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:e3:8d 10.100.0.11'], port_security=['fa:16:3e:96:e3:8d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '291c3536-48c4-40eb-a910-9494484e8668', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92e26514-5b15-410b-8885-6773bc03c4ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b730f086c4b94185afab5e10fa2e8181', 'neutron:revision_number': '4', 'neutron:security_group_ids': '420b1f20-9bb9-442a-978c-444ed9d0cb04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ae0a3b6-6c0f-4ef3-ae20-c7b5b6ede8ac, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:18 compute-0 systemd[1]: libpod-b3b807793c3b39cbb1d7192cd29cd4aceea3bfeb1df71762633a98dd5326d603.scope: Deactivated successfully.
Nov 25 08:39:18 compute-0 podman[340062]: 2025-11-25 08:39:18.740378915 +0000 UTC m=+0.182534677 container died b3b807793c3b39cbb1d7192cd29cd4aceea3bfeb1df71762633a98dd5326d603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:39:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:18.740 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e in datapath 92e26514-5b15-410b-8885-6773bc03c4ce unbound from our chassis
Nov 25 08:39:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:18.742 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92e26514-5b15-410b-8885-6773bc03c4ce
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.755 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:18.761 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5d62b1-3a39-4793-9ce0-b514bde5e94b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb0054ab0d9cd84347d3b179f01f811cf21c91bce05f539538b241a74588f7c1-merged.mount: Deactivated successfully.
Nov 25 08:39:18 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000059.scope: Deactivated successfully.
Nov 25 08:39:18 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000059.scope: Consumed 4.053s CPU time.
Nov 25 08:39:18 compute-0 systemd-machined[215790]: Machine qemu-110-instance-00000059 terminated.
Nov 25 08:39:18 compute-0 podman[340062]: 2025-11-25 08:39:18.77876621 +0000 UTC m=+0.220921972 container remove b3b807793c3b39cbb1d7192cd29cd4aceea3bfeb1df71762633a98dd5326d603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:39:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:18.801 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f4c61e-df5e-443a-b5ce-4758c47e7840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:18.805 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[74e27c3d-1582-4b89-bc30-303a8b490fd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:18 compute-0 systemd[1]: libpod-conmon-b3b807793c3b39cbb1d7192cd29cd4aceea3bfeb1df71762633a98dd5326d603.scope: Deactivated successfully.
Nov 25 08:39:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:18.834 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f8bf6b9a-f88d-437d-b818-b229f2c12ca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:18.854 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[61e38274-6bc5-40d3-b90b-36ee8e01edaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92e26514-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:84:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522366, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340107, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:18.875 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[caab50ef-407e-4d8e-a9b7-9c0564bc051b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522379, 'tstamp': 522379}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340108, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522383, 'tstamp': 522383}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340108, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:18.877 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92e26514-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.879 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.884 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:18.884 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92e26514-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:18.885 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:18.885 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92e26514-50, col_values=(('external_ids', {'iface-id': '185991f0-acab-400e-baff-76794035d44a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:18.886 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.896 253542 INFO nova.virt.libvirt.driver [-] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Instance destroyed successfully.
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.896 253542 DEBUG nova.objects.instance [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'resources' on Instance uuid 291c3536-48c4-40eb-a910-9494484e8668 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.908 253542 DEBUG nova.virt.libvirt.vif [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:39:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-818825662',display_name='tempest-ServersTestJSON-server-818825662',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-818825662',id=89,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:39:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-h7ipp38q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:39:15Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=291c3536-48c4-40eb-a910-9494484e8668,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "address": "fa:16:3e:96:e3:8d", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c1a57e6-fd", "ovs_interfaceid": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.908 253542 DEBUG nova.network.os_vif_util [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "address": "fa:16:3e:96:e3:8d", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c1a57e6-fd", "ovs_interfaceid": "0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.909 253542 DEBUG nova.network.os_vif_util [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:e3:8d,bridge_name='br-int',has_traffic_filtering=True,id=0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c1a57e6-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.909 253542 DEBUG os_vif [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:e3:8d,bridge_name='br-int',has_traffic_filtering=True,id=0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c1a57e6-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.911 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c1a57e6-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.914 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.915 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:18 compute-0 nova_compute[253538]: 2025-11-25 08:39:18.917 253542 INFO os_vif [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:e3:8d,bridge_name='br-int',has_traffic_filtering=True,id=0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c1a57e6-fd')
Nov 25 08:39:19 compute-0 podman[340132]: 2025-11-25 08:39:19.01544123 +0000 UTC m=+0.063154692 container create 3bf775f337749b82a111105dbb5c933b946f9551ae0e995636de3b40e472fe27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 25 08:39:19 compute-0 systemd[1]: Started libpod-conmon-3bf775f337749b82a111105dbb5c933b946f9551ae0e995636de3b40e472fe27.scope.
Nov 25 08:39:19 compute-0 podman[340132]: 2025-11-25 08:39:18.994194241 +0000 UTC m=+0.041907733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:39:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efade2c49f5f9b201c7f4a8c3eac8c0547525d6b922480d74c78f9203d9abaf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efade2c49f5f9b201c7f4a8c3eac8c0547525d6b922480d74c78f9203d9abaf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efade2c49f5f9b201c7f4a8c3eac8c0547525d6b922480d74c78f9203d9abaf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efade2c49f5f9b201c7f4a8c3eac8c0547525d6b922480d74c78f9203d9abaf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:39:19 compute-0 podman[340132]: 2025-11-25 08:39:19.126454005 +0000 UTC m=+0.174167477 container init 3bf775f337749b82a111105dbb5c933b946f9551ae0e995636de3b40e472fe27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 08:39:19 compute-0 podman[340132]: 2025-11-25 08:39:19.152953928 +0000 UTC m=+0.200667390 container start 3bf775f337749b82a111105dbb5c933b946f9551ae0e995636de3b40e472fe27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 08:39:19 compute-0 podman[340132]: 2025-11-25 08:39:19.1577937 +0000 UTC m=+0.205507252 container attach 3bf775f337749b82a111105dbb5c933b946f9551ae0e995636de3b40e472fe27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_visvesvaraya, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.292 253542 INFO nova.virt.libvirt.driver [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Deleting instance files /var/lib/nova/instances/291c3536-48c4-40eb-a910-9494484e8668_del
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.294 253542 INFO nova.virt.libvirt.driver [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Deletion of /var/lib/nova/instances/291c3536-48c4-40eb-a910-9494484e8668_del complete
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.350 253542 INFO nova.compute.manager [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Took 0.68 seconds to destroy the instance on the hypervisor.
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.351 253542 DEBUG oslo.service.loopingcall [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.351 253542 DEBUG nova.compute.manager [-] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.352 253542 DEBUG nova.network.neutron [-] [instance: 291c3536-48c4-40eb-a910-9494484e8668] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:39:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1689: 321 pgs: 321 active+clean; 387 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 5.4 MiB/s wr, 220 op/s
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.539 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059944.5151484, 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.539 253542 INFO nova.compute.manager [-] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] VM Stopped (Lifecycle Event)
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.560 253542 DEBUG nova.compute.manager [None req-4c0c082a-0968-4985-bce1-0cb7b861eacd - - - - - -] [instance: 5dc14644-cfc4-4e56-91fd-736ee4e3f5ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.578 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.814 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-0a240e53-cc4c-463e-9601-41d687d64349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.815 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-0a240e53-cc4c-463e-9601-41d687d64349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.815 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:39:19 compute-0 nova_compute[253538]: 2025-11-25 08:39:19.815 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a240e53-cc4c-463e-9601-41d687d64349 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:19 compute-0 podman[340165]: 2025-11-25 08:39:19.844085262 +0000 UTC m=+0.090944810 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.133 253542 DEBUG nova.compute.manager [req-d7dfba9e-e72d-45ab-8672-671ca8060831 req-2e072097-42b1-44f8-ac2c-ac33cf7f03b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Received event network-vif-unplugged-0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.133 253542 DEBUG oslo_concurrency.lockutils [req-d7dfba9e-e72d-45ab-8672-671ca8060831 req-2e072097-42b1-44f8-ac2c-ac33cf7f03b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "291c3536-48c4-40eb-a910-9494484e8668-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.134 253542 DEBUG oslo_concurrency.lockutils [req-d7dfba9e-e72d-45ab-8672-671ca8060831 req-2e072097-42b1-44f8-ac2c-ac33cf7f03b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "291c3536-48c4-40eb-a910-9494484e8668-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.134 253542 DEBUG oslo_concurrency.lockutils [req-d7dfba9e-e72d-45ab-8672-671ca8060831 req-2e072097-42b1-44f8-ac2c-ac33cf7f03b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "291c3536-48c4-40eb-a910-9494484e8668-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.134 253542 DEBUG nova.compute.manager [req-d7dfba9e-e72d-45ab-8672-671ca8060831 req-2e072097-42b1-44f8-ac2c-ac33cf7f03b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] No waiting events found dispatching network-vif-unplugged-0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.135 253542 DEBUG nova.compute.manager [req-d7dfba9e-e72d-45ab-8672-671ca8060831 req-2e072097-42b1-44f8-ac2c-ac33cf7f03b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Received event network-vif-unplugged-0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.135 253542 DEBUG nova.compute.manager [req-d7dfba9e-e72d-45ab-8672-671ca8060831 req-2e072097-42b1-44f8-ac2c-ac33cf7f03b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Received event network-vif-plugged-0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.135 253542 DEBUG oslo_concurrency.lockutils [req-d7dfba9e-e72d-45ab-8672-671ca8060831 req-2e072097-42b1-44f8-ac2c-ac33cf7f03b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "291c3536-48c4-40eb-a910-9494484e8668-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.136 253542 DEBUG oslo_concurrency.lockutils [req-d7dfba9e-e72d-45ab-8672-671ca8060831 req-2e072097-42b1-44f8-ac2c-ac33cf7f03b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "291c3536-48c4-40eb-a910-9494484e8668-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.136 253542 DEBUG oslo_concurrency.lockutils [req-d7dfba9e-e72d-45ab-8672-671ca8060831 req-2e072097-42b1-44f8-ac2c-ac33cf7f03b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "291c3536-48c4-40eb-a910-9494484e8668-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.136 253542 DEBUG nova.compute.manager [req-d7dfba9e-e72d-45ab-8672-671ca8060831 req-2e072097-42b1-44f8-ac2c-ac33cf7f03b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] No waiting events found dispatching network-vif-plugged-0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.137 253542 WARNING nova.compute.manager [req-d7dfba9e-e72d-45ab-8672-671ca8060831 req-2e072097-42b1-44f8-ac2c-ac33cf7f03b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Received unexpected event network-vif-plugged-0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e for instance with vm_state active and task_state deleting.
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.304 253542 DEBUG nova.network.neutron [-] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]: {
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:         "osd_id": 1,
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:         "type": "bluestore"
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:     },
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:         "osd_id": 2,
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:         "type": "bluestore"
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:     },
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:         "osd_id": 0,
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:         "type": "bluestore"
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]:     }
Nov 25 08:39:20 compute-0 distracted_visvesvaraya[340159]: }
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.326 253542 INFO nova.compute.manager [-] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Took 0.97 seconds to deallocate network for instance.
Nov 25 08:39:20 compute-0 systemd[1]: libpod-3bf775f337749b82a111105dbb5c933b946f9551ae0e995636de3b40e472fe27.scope: Deactivated successfully.
Nov 25 08:39:20 compute-0 podman[340132]: 2025-11-25 08:39:20.339185634 +0000 UTC m=+1.386899086 container died 3bf775f337749b82a111105dbb5c933b946f9551ae0e995636de3b40e472fe27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_visvesvaraya, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.364 253542 DEBUG oslo_concurrency.lockutils [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.364 253542 DEBUG oslo_concurrency.lockutils [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-efade2c49f5f9b201c7f4a8c3eac8c0547525d6b922480d74c78f9203d9abaf7-merged.mount: Deactivated successfully.
Nov 25 08:39:20 compute-0 podman[340132]: 2025-11-25 08:39:20.41277546 +0000 UTC m=+1.460488902 container remove 3bf775f337749b82a111105dbb5c933b946f9551ae0e995636de3b40e472fe27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:39:20 compute-0 systemd[1]: libpod-conmon-3bf775f337749b82a111105dbb5c933b946f9551ae0e995636de3b40e472fe27.scope: Deactivated successfully.
Nov 25 08:39:20 compute-0 sudo[339976]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.458 253542 DEBUG oslo_concurrency.processutils [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:39:20 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:39:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:39:20 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:39:20 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 22abb0cc-86f4-4a83-8a11-f88b402932fc does not exist
Nov 25 08:39:20 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev ea6aca04-bcfa-4c21-959e-506513cbf445 does not exist
Nov 25 08:39:20 compute-0 sudo[340228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:39:20 compute-0 ceph-mon[75015]: pgmap v1689: 321 pgs: 321 active+clean; 387 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 5.4 MiB/s wr, 220 op/s
Nov 25 08:39:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:39:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:39:20 compute-0 sudo[340228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:20 compute-0 sudo[340228]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:20 compute-0 sudo[340253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:39:20 compute-0 sudo[340253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:39:20 compute-0 sudo[340253]: pam_unix(sudo:session): session closed for user root
Nov 25 08:39:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:39:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2887785571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.932 253542 DEBUG oslo_concurrency.processutils [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.940 253542 DEBUG nova.compute.provider_tree [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.959 253542 DEBUG nova.scheduler.client.report [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:39:20 compute-0 nova_compute[253538]: 2025-11-25 08:39:20.980 253542 DEBUG oslo_concurrency.lockutils [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:21 compute-0 nova_compute[253538]: 2025-11-25 08:39:21.008 253542 INFO nova.scheduler.client.report [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Deleted allocations for instance 291c3536-48c4-40eb-a910-9494484e8668
Nov 25 08:39:21 compute-0 nova_compute[253538]: 2025-11-25 08:39:21.066 253542 DEBUG oslo_concurrency.lockutils [None req-e3ad9f0f-d417-4695-86c7-b4a54abbe65f fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "291c3536-48c4-40eb-a910-9494484e8668" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:21 compute-0 nova_compute[253538]: 2025-11-25 08:39:21.179 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Updating instance_info_cache with network_info: [{"id": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "address": "fa:16:3e:26:a6:7a", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0061cd13-34", "ovs_interfaceid": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:21 compute-0 nova_compute[253538]: 2025-11-25 08:39:21.198 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-0a240e53-cc4c-463e-9601-41d687d64349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:39:21 compute-0 nova_compute[253538]: 2025-11-25 08:39:21.199 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:39:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 352 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 229 op/s
Nov 25 08:39:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:21.475 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:21.476 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:39:21 compute-0 nova_compute[253538]: 2025-11-25 08:39:21.493 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:21 compute-0 nova_compute[253538]: 2025-11-25 08:39:21.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:39:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2887785571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:21 compute-0 ceph-mon[75015]: pgmap v1690: 321 pgs: 321 active+clean; 352 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 229 op/s
Nov 25 08:39:21 compute-0 nova_compute[253538]: 2025-11-25 08:39:21.599 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.258 253542 DEBUG nova.compute.manager [req-93421b12-6879-450e-b0b9-769ccb719d4f req-0cbb608d-11b5-4361-8534-b0e66d14780c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Received event network-vif-deleted-0c1a57e6-fdb4-495a-a7a0-ab97f2dd244e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.375 253542 DEBUG oslo_concurrency.lockutils [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "dd202e7c-474a-42f6-a6a8-5276974c793f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.375 253542 DEBUG oslo_concurrency.lockutils [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "dd202e7c-474a-42f6-a6a8-5276974c793f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.376 253542 DEBUG oslo_concurrency.lockutils [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.376 253542 DEBUG oslo_concurrency.lockutils [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.377 253542 DEBUG oslo_concurrency.lockutils [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.378 253542 INFO nova.compute.manager [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Terminating instance
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.380 253542 DEBUG nova.compute.manager [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:39:22 compute-0 kernel: tapf37e1b29-e9 (unregistering): left promiscuous mode
Nov 25 08:39:22 compute-0 NetworkManager[48915]: <info>  [1764059962.4403] device (tapf37e1b29-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.451 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:22 compute-0 ovn_controller[152859]: 2025-11-25T08:39:22Z|00873|binding|INFO|Releasing lport f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 from this chassis (sb_readonly=0)
Nov 25 08:39:22 compute-0 ovn_controller[152859]: 2025-11-25T08:39:22Z|00874|binding|INFO|Setting lport f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 down in Southbound
Nov 25 08:39:22 compute-0 ovn_controller[152859]: 2025-11-25T08:39:22Z|00875|binding|INFO|Removing iface tapf37e1b29-e9 ovn-installed in OVS
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.454 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:22.458 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:f5:a2 10.100.0.10'], port_security=['fa:16:3e:18:f5:a2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'dd202e7c-474a-42f6-a6a8-5276974c793f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92e26514-5b15-410b-8885-6773bc03c4ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b730f086c4b94185afab5e10fa2e8181', 'neutron:revision_number': '4', 'neutron:security_group_ids': '420b1f20-9bb9-442a-978c-444ed9d0cb04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ae0a3b6-6c0f-4ef3-ae20-c7b5b6ede8ac, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:22.460 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 in datapath 92e26514-5b15-410b-8885-6773bc03c4ce unbound from our chassis
Nov 25 08:39:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:22.461 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92e26514-5b15-410b-8885-6773bc03c4ce
Nov 25 08:39:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:22.479 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0d18a3-eb45-4095-ad40-24f001aadb40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.480 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:22.509 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7d7243a2-9f6c-4353-8ad3-5b7f8b97fb5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:22 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000058.scope: Deactivated successfully.
Nov 25 08:39:22 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000058.scope: Consumed 13.622s CPU time.
Nov 25 08:39:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:22.512 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[38b4c9f7-78b4-48f6-9ebc-734bcde3710e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:22 compute-0 systemd-machined[215790]: Machine qemu-108-instance-00000058 terminated.
Nov 25 08:39:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:22.540 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[97cb7500-dcc5-4388-9f7b-2d82d0c84006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:22.557 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5221ec23-b509-40ad-9e3e-a10f17e68b67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92e26514-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:84:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522366, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340333, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:22.574 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c904b945-39ad-44d2-bc7d-014fa8318001]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522379, 'tstamp': 522379}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340338, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522383, 'tstamp': 522383}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340338, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:22.576 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92e26514-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.577 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.583 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:22.583 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92e26514-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:22.584 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:22.584 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92e26514-50, col_values=(('external_ids', {'iface-id': '185991f0-acab-400e-baff-76794035d44a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:22.584 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:22 compute-0 podman[340301]: 2025-11-25 08:39:22.591802891 +0000 UTC m=+0.119039715 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.617 253542 INFO nova.virt.libvirt.driver [-] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Instance destroyed successfully.
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.617 253542 DEBUG nova.objects.instance [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'resources' on Instance uuid dd202e7c-474a-42f6-a6a8-5276974c793f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.628 253542 DEBUG nova.virt.libvirt.vif [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:38:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-818825662',display_name='tempest-ServersTestJSON-server-818825662',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-818825662',id=88,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:38:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-dhja0dfi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:38:59Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=dd202e7c-474a-42f6-a6a8-5276974c793f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "address": "fa:16:3e:18:f5:a2", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf37e1b29-e9", "ovs_interfaceid": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.628 253542 DEBUG nova.network.os_vif_util [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "address": "fa:16:3e:18:f5:a2", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf37e1b29-e9", "ovs_interfaceid": "f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.629 253542 DEBUG nova.network.os_vif_util [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:f5:a2,bridge_name='br-int',has_traffic_filtering=True,id=f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf37e1b29-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.630 253542 DEBUG os_vif [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:f5:a2,bridge_name='br-int',has_traffic_filtering=True,id=f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf37e1b29-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.634 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.635 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf37e1b29-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.637 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:22 compute-0 nova_compute[253538]: 2025-11-25 08:39:22.639 253542 INFO os_vif [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:f5:a2,bridge_name='br-int',has_traffic_filtering=True,id=f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf37e1b29-e9')
Nov 25 08:39:23 compute-0 nova_compute[253538]: 2025-11-25 08:39:23.036 253542 INFO nova.virt.libvirt.driver [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Deleting instance files /var/lib/nova/instances/dd202e7c-474a-42f6-a6a8-5276974c793f_del
Nov 25 08:39:23 compute-0 nova_compute[253538]: 2025-11-25 08:39:23.037 253542 INFO nova.virt.libvirt.driver [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Deletion of /var/lib/nova/instances/dd202e7c-474a-42f6-a6a8-5276974c793f_del complete
Nov 25 08:39:23 compute-0 nova_compute[253538]: 2025-11-25 08:39:23.085 253542 INFO nova.compute.manager [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Took 0.70 seconds to destroy the instance on the hypervisor.
Nov 25 08:39:23 compute-0 nova_compute[253538]: 2025-11-25 08:39:23.086 253542 DEBUG oslo.service.loopingcall [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:39:23 compute-0 nova_compute[253538]: 2025-11-25 08:39:23.087 253542 DEBUG nova.compute.manager [-] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:39:23 compute-0 nova_compute[253538]: 2025-11-25 08:39:23.087 253542 DEBUG nova.network.neutron [-] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:39:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1691: 321 pgs: 321 active+clean; 292 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 227 op/s
Nov 25 08:39:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:39:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:39:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:39:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:39:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:39:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:39:23 compute-0 nova_compute[253538]: 2025-11-25 08:39:23.591 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:23 compute-0 nova_compute[253538]: 2025-11-25 08:39:23.829 253542 DEBUG nova.network.neutron [-] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:23 compute-0 nova_compute[253538]: 2025-11-25 08:39:23.846 253542 INFO nova.compute.manager [-] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Took 0.76 seconds to deallocate network for instance.
Nov 25 08:39:23 compute-0 nova_compute[253538]: 2025-11-25 08:39:23.894 253542 DEBUG oslo_concurrency.lockutils [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:23 compute-0 nova_compute[253538]: 2025-11-25 08:39:23.894 253542 DEBUG oslo_concurrency.lockutils [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:23 compute-0 nova_compute[253538]: 2025-11-25 08:39:23.984 253542 DEBUG oslo_concurrency.processutils [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:24 compute-0 ceph-mon[75015]: pgmap v1691: 321 pgs: 321 active+clean; 292 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 227 op/s
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.455 253542 DEBUG nova.compute.manager [req-f7b6157f-d80b-4bb6-9782-9cb105c1668d req-51790e81-dee0-4bda-91c0-eef1ae614065 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Received event network-vif-unplugged-f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.456 253542 DEBUG oslo_concurrency.lockutils [req-f7b6157f-d80b-4bb6-9782-9cb105c1668d req-51790e81-dee0-4bda-91c0-eef1ae614065 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.456 253542 DEBUG oslo_concurrency.lockutils [req-f7b6157f-d80b-4bb6-9782-9cb105c1668d req-51790e81-dee0-4bda-91c0-eef1ae614065 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.457 253542 DEBUG oslo_concurrency.lockutils [req-f7b6157f-d80b-4bb6-9782-9cb105c1668d req-51790e81-dee0-4bda-91c0-eef1ae614065 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.457 253542 DEBUG nova.compute.manager [req-f7b6157f-d80b-4bb6-9782-9cb105c1668d req-51790e81-dee0-4bda-91c0-eef1ae614065 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] No waiting events found dispatching network-vif-unplugged-f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.457 253542 WARNING nova.compute.manager [req-f7b6157f-d80b-4bb6-9782-9cb105c1668d req-51790e81-dee0-4bda-91c0-eef1ae614065 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Received unexpected event network-vif-unplugged-f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 for instance with vm_state deleted and task_state None.
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.457 253542 DEBUG nova.compute.manager [req-f7b6157f-d80b-4bb6-9782-9cb105c1668d req-51790e81-dee0-4bda-91c0-eef1ae614065 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Received event network-vif-plugged-f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.458 253542 DEBUG oslo_concurrency.lockutils [req-f7b6157f-d80b-4bb6-9782-9cb105c1668d req-51790e81-dee0-4bda-91c0-eef1ae614065 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.458 253542 DEBUG oslo_concurrency.lockutils [req-f7b6157f-d80b-4bb6-9782-9cb105c1668d req-51790e81-dee0-4bda-91c0-eef1ae614065 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.458 253542 DEBUG oslo_concurrency.lockutils [req-f7b6157f-d80b-4bb6-9782-9cb105c1668d req-51790e81-dee0-4bda-91c0-eef1ae614065 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "dd202e7c-474a-42f6-a6a8-5276974c793f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.458 253542 DEBUG nova.compute.manager [req-f7b6157f-d80b-4bb6-9782-9cb105c1668d req-51790e81-dee0-4bda-91c0-eef1ae614065 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] No waiting events found dispatching network-vif-plugged-f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.458 253542 WARNING nova.compute.manager [req-f7b6157f-d80b-4bb6-9782-9cb105c1668d req-51790e81-dee0-4bda-91c0-eef1ae614065 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Received unexpected event network-vif-plugged-f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 for instance with vm_state deleted and task_state None.
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.459 253542 DEBUG nova.compute.manager [req-f7b6157f-d80b-4bb6-9782-9cb105c1668d req-51790e81-dee0-4bda-91c0-eef1ae614065 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Received event network-vif-deleted-f37e1b29-e9b9-4cbe-8fab-483e60e4e4e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:39:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2096435864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.490 253542 DEBUG oslo_concurrency.processutils [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.500 253542 DEBUG nova.compute.provider_tree [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.518 253542 DEBUG nova.scheduler.client.report [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.556 253542 DEBUG oslo_concurrency.lockutils [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.581 253542 INFO nova.scheduler.client.report [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Deleted allocations for instance dd202e7c-474a-42f6-a6a8-5276974c793f
Nov 25 08:39:24 compute-0 nova_compute[253538]: 2025-11-25 08:39:24.626 253542 DEBUG oslo_concurrency.lockutils [None req-3ae16af5-6c79-4639-9204-c5419f750530 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "dd202e7c-474a-42f6-a6a8-5276974c793f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1692: 321 pgs: 321 active+clean; 263 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 781 KiB/s wr, 187 op/s
Nov 25 08:39:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2096435864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:25 compute-0 nova_compute[253538]: 2025-11-25 08:39:25.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:39:26 compute-0 ceph-mon[75015]: pgmap v1692: 321 pgs: 321 active+clean; 263 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 781 KiB/s wr, 187 op/s
Nov 25 08:39:26 compute-0 nova_compute[253538]: 2025-11-25 08:39:26.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:39:26 compute-0 nova_compute[253538]: 2025-11-25 08:39:26.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.214 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.232 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.232 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.265 253542 DEBUG nova.compute.manager [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.338 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.340 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.348 253542 DEBUG nova.virt.hardware [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.349 253542 INFO nova.compute.claims [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:39:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 246 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 61 KiB/s wr, 169 op/s
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.460 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059952.459401, 8a68398d-9640-49e2-a049-3da4f7b371c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.460 253542 INFO nova.compute.manager [-] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] VM Stopped (Lifecycle Event)
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.482 253542 DEBUG nova.compute.manager [None req-f0aee4f1-ee66-46c5-bc56-b45713d7be2c - - - - - -] [instance: 8a68398d-9640-49e2-a049-3da4f7b371c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.493 253542 DEBUG oslo_concurrency.processutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.636 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:39:27 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3977857107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.955 253542 DEBUG oslo_concurrency.processutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.962 253542 DEBUG nova.compute.provider_tree [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:39:27 compute-0 nova_compute[253538]: 2025-11-25 08:39:27.975 253542 DEBUG nova.scheduler.client.report [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.007 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.008 253542 DEBUG nova.compute.manager [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.013 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.014 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.014 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.015 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.102 253542 DEBUG nova.compute.manager [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.103 253542 DEBUG nova.network.neutron [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.123 253542 INFO nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.143 253542 DEBUG nova.compute.manager [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.240 253542 DEBUG nova.compute.manager [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.242 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.242 253542 INFO nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Creating image(s)
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.268 253542 DEBUG nova.storage.rbd_utils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 508a1bcc-5cc4-480e-b329-b0b02cf785d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.294 253542 DEBUG nova.storage.rbd_utils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 508a1bcc-5cc4-480e-b329-b0b02cf785d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.324 253542 DEBUG nova.storage.rbd_utils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 508a1bcc-5cc4-480e-b329-b0b02cf785d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.329 253542 DEBUG oslo_concurrency.processutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.380 253542 DEBUG nova.policy [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '24fa34332e6f4b628514969bbf76e94b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6851917992b149818e8b44146c66bfc3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.431 253542 DEBUG oslo_concurrency.processutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.432 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.433 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.433 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:39:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2824813340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.458 253542 DEBUG nova.storage.rbd_utils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 508a1bcc-5cc4-480e-b329-b0b02cf785d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.462 253542 DEBUG oslo_concurrency.processutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 508a1bcc-5cc4-480e-b329-b0b02cf785d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.489 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:28 compute-0 ceph-mon[75015]: pgmap v1693: 321 pgs: 321 active+clean; 246 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 61 KiB/s wr, 169 op/s
Nov 25 08:39:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3977857107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2824813340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.579 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.580 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.585 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.585 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.757 253542 DEBUG oslo_concurrency.processutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 508a1bcc-5cc4-480e-b329-b0b02cf785d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.805 253542 DEBUG nova.storage.rbd_utils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] resizing rbd image 508a1bcc-5cc4-480e-b329-b0b02cf785d2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.839 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.840 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3522MB free_disk=59.89717483520508GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.841 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.841 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.903 253542 DEBUG nova.objects.instance [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'migration_context' on Instance uuid 508a1bcc-5cc4-480e-b329-b0b02cf785d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.914 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.914 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Ensure instance console log exists: /var/lib/nova/instances/508a1bcc-5cc4-480e-b329-b0b02cf785d2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.914 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.915 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.915 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.920 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0a240e53-cc4c-463e-9601-41d687d64349 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.920 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 40912950-fedc-405c-bc49-c4a757a422dc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.921 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 508a1bcc-5cc4-480e-b329-b0b02cf785d2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.921 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:39:28 compute-0 nova_compute[253538]: 2025-11-25 08:39:28.921 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:39:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:39:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1192322066' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:39:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:39:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1192322066' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:39:29 compute-0 nova_compute[253538]: 2025-11-25 08:39:29.017 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:29 compute-0 nova_compute[253538]: 2025-11-25 08:39:29.111 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:29 compute-0 nova_compute[253538]: 2025-11-25 08:39:29.111 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:29 compute-0 nova_compute[253538]: 2025-11-25 08:39:29.130 253542 DEBUG nova.compute.manager [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:39:29 compute-0 nova_compute[253538]: 2025-11-25 08:39:29.198 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:29 compute-0 nova_compute[253538]: 2025-11-25 08:39:29.227 253542 DEBUG nova.network.neutron [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Successfully created port: d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:39:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1694: 321 pgs: 321 active+clean; 246 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 57 KiB/s wr, 138 op/s
Nov 25 08:39:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:39:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2275156581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1192322066' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:39:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1192322066' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:39:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2275156581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:29 compute-0 nova_compute[253538]: 2025-11-25 08:39:29.517 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:29 compute-0 nova_compute[253538]: 2025-11-25 08:39:29.523 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:39:29 compute-0 nova_compute[253538]: 2025-11-25 08:39:29.538 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:39:29 compute-0 nova_compute[253538]: 2025-11-25 08:39:29.566 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:39:29 compute-0 nova_compute[253538]: 2025-11-25 08:39:29.567 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:29 compute-0 nova_compute[253538]: 2025-11-25 08:39:29.567 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.369s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:29 compute-0 nova_compute[253538]: 2025-11-25 08:39:29.574 253542 DEBUG nova.virt.hardware [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:39:29 compute-0 nova_compute[253538]: 2025-11-25 08:39:29.574 253542 INFO nova.compute.claims [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:39:29 compute-0 nova_compute[253538]: 2025-11-25 08:39:29.744 253542 DEBUG oslo_concurrency.processutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.171 253542 DEBUG nova.network.neutron [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Successfully updated port: d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:39:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:39:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3320823603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.190 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "refresh_cache-508a1bcc-5cc4-480e-b329-b0b02cf785d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.191 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquired lock "refresh_cache-508a1bcc-5cc4-480e-b329-b0b02cf785d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.191 253542 DEBUG nova.network.neutron [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.192 253542 DEBUG oslo_concurrency.processutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.197 253542 DEBUG nova.compute.provider_tree [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.206 253542 DEBUG nova.scheduler.client.report [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.222 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.223 253542 DEBUG nova.compute.manager [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.268 253542 DEBUG nova.compute.manager [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.269 253542 DEBUG nova.network.neutron [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.283 253542 INFO nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.299 253542 DEBUG nova.compute.manager [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.397 253542 DEBUG nova.compute.manager [req-4ab22be0-7892-46eb-8093-eb72468e9113 req-be31525e-d2be-4eb9-9044-0e0e50acdda4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Received event network-changed-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.397 253542 DEBUG nova.compute.manager [req-4ab22be0-7892-46eb-8093-eb72468e9113 req-be31525e-d2be-4eb9-9044-0e0e50acdda4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Refreshing instance network info cache due to event network-changed-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.398 253542 DEBUG oslo_concurrency.lockutils [req-4ab22be0-7892-46eb-8093-eb72468e9113 req-be31525e-d2be-4eb9-9044-0e0e50acdda4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-508a1bcc-5cc4-480e-b329-b0b02cf785d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.401 253542 DEBUG nova.compute.manager [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.402 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.403 253542 INFO nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Creating image(s)
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.421 253542 DEBUG nova.storage.rbd_utils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 3b6a2122-85ba-42b9-9eed-7d58e10b9b98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.443 253542 DEBUG nova.storage.rbd_utils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 3b6a2122-85ba-42b9-9eed-7d58e10b9b98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.465 253542 DEBUG nova.storage.rbd_utils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 3b6a2122-85ba-42b9-9eed-7d58e10b9b98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.469 253542 DEBUG oslo_concurrency.processutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.507 253542 DEBUG nova.network.neutron [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:39:30 compute-0 ceph-mon[75015]: pgmap v1694: 321 pgs: 321 active+clean; 246 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 57 KiB/s wr, 138 op/s
Nov 25 08:39:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3320823603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.530 253542 DEBUG nova.policy [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fdcb005cc49a4dfa82152f2c0817cc94', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b730f086c4b94185afab5e10fa2e8181', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.548 253542 DEBUG oslo_concurrency.processutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.549 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.549 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.550 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.572 253542 DEBUG nova.storage.rbd_utils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 3b6a2122-85ba-42b9-9eed-7d58e10b9b98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.576 253542 DEBUG oslo_concurrency.processutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 3b6a2122-85ba-42b9-9eed-7d58e10b9b98_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.617 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:39:30 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.878 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Acquiring lock "685ce923-4b91-41a7-9a13-d62077b95839" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.878 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lock "685ce923-4b91-41a7-9a13-d62077b95839" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.884 253542 DEBUG oslo_concurrency.processutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 3b6a2122-85ba-42b9-9eed-7d58e10b9b98_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.926 253542 DEBUG nova.compute.manager [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:39:30 compute-0 nova_compute[253538]: 2025-11-25 08:39:30.971 253542 DEBUG nova.storage.rbd_utils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] resizing rbd image 3b6a2122-85ba-42b9-9eed-7d58e10b9b98_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.049 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.049 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.053 253542 DEBUG nova.objects.instance [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'migration_context' on Instance uuid 3b6a2122-85ba-42b9-9eed-7d58e10b9b98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.057 253542 DEBUG nova.virt.hardware [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.057 253542 INFO nova.compute.claims [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.075 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.075 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Ensure instance console log exists: /var/lib/nova/instances/3b6a2122-85ba-42b9-9eed-7d58e10b9b98/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.076 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.076 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.076 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.213 253542 DEBUG oslo_concurrency.processutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.299 253542 DEBUG nova.network.neutron [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Updating instance_info_cache with network_info: [{"id": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "address": "fa:16:3e:6c:d4:87", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c0b3a1-42", "ovs_interfaceid": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.319 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Releasing lock "refresh_cache-508a1bcc-5cc4-480e-b329-b0b02cf785d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.319 253542 DEBUG nova.compute.manager [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Instance network_info: |[{"id": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "address": "fa:16:3e:6c:d4:87", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c0b3a1-42", "ovs_interfaceid": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.322 253542 DEBUG oslo_concurrency.lockutils [req-4ab22be0-7892-46eb-8093-eb72468e9113 req-be31525e-d2be-4eb9-9044-0e0e50acdda4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-508a1bcc-5cc4-480e-b329-b0b02cf785d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.322 253542 DEBUG nova.network.neutron [req-4ab22be0-7892-46eb-8093-eb72468e9113 req-be31525e-d2be-4eb9-9044-0e0e50acdda4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Refreshing network info cache for port d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.328 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Start _get_guest_xml network_info=[{"id": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "address": "fa:16:3e:6c:d4:87", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c0b3a1-42", "ovs_interfaceid": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.335 253542 WARNING nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.343 253542 DEBUG nova.virt.libvirt.host [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.343 253542 DEBUG nova.virt.libvirt.host [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.347 253542 DEBUG nova.virt.libvirt.host [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.348 253542 DEBUG nova.virt.libvirt.host [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.348 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.348 253542 DEBUG nova.virt.hardware [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.349 253542 DEBUG nova.virt.hardware [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.349 253542 DEBUG nova.virt.hardware [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.349 253542 DEBUG nova.virt.hardware [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.349 253542 DEBUG nova.virt.hardware [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.349 253542 DEBUG nova.virt.hardware [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.349 253542 DEBUG nova.virt.hardware [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.350 253542 DEBUG nova.virt.hardware [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.350 253542 DEBUG nova.virt.hardware [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.350 253542 DEBUG nova.virt.hardware [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.350 253542 DEBUG nova.virt.hardware [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.353 253542 DEBUG oslo_concurrency.processutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 266 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 453 KiB/s wr, 114 op/s
Nov 25 08:39:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:31.477 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.552 253542 DEBUG nova.network.neutron [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Successfully created port: 21a76d99-06f7-421c-a4ca-766984c20ab4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.604 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:39:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1816019626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.756 253542 DEBUG oslo_concurrency.processutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.763 253542 DEBUG nova.compute.provider_tree [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.776 253542 DEBUG nova.scheduler.client.report [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.806 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:39:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1064773699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.808 253542 DEBUG nova.compute.manager [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.825 253542 DEBUG oslo_concurrency.processutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.846 253542 DEBUG nova.storage.rbd_utils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 508a1bcc-5cc4-480e-b329-b0b02cf785d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.849 253542 DEBUG oslo_concurrency.processutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.884 253542 DEBUG nova.compute.manager [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.885 253542 DEBUG nova.network.neutron [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.898 253542 INFO nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:39:31 compute-0 nova_compute[253538]: 2025-11-25 08:39:31.914 253542 DEBUG nova.compute.manager [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.180 253542 DEBUG nova.compute.manager [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.181 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.181 253542 INFO nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Creating image(s)
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.206 253542 DEBUG nova.storage.rbd_utils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] rbd image 685ce923-4b91-41a7-9a13-d62077b95839_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.237 253542 DEBUG nova.storage.rbd_utils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] rbd image 685ce923-4b91-41a7-9a13-d62077b95839_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:39:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2639876857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.268 253542 DEBUG nova.storage.rbd_utils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] rbd image 685ce923-4b91-41a7-9a13-d62077b95839_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.273 253542 DEBUG oslo_concurrency.processutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.332 253542 DEBUG nova.policy [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'df942547d2cf4befb2c6041e9912c52b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '666e694f009b457d9a9432a920faa14b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.334 253542 DEBUG oslo_concurrency.processutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.336 253542 DEBUG nova.virt.libvirt.vif [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:39:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-409966451',display_name='tempest-ServerActionsTestOtherA-server-409966451',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-409966451',id=90,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6851917992b149818e8b44146c66bfc3',ramdisk_id='',reservation_id='r-sb8kdxjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-678529119',owner_user_name='tempest-ServerActionsTestOtherA-678529119-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:39:28Z,user_data=None,user_id='24fa34332e6f4b628514969bbf76e94b',uuid=508a1bcc-5cc4-480e-b329-b0b02cf785d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "address": "fa:16:3e:6c:d4:87", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c0b3a1-42", "ovs_interfaceid": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.337 253542 DEBUG nova.network.os_vif_util [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converting VIF {"id": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "address": "fa:16:3e:6c:d4:87", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c0b3a1-42", "ovs_interfaceid": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.358 253542 DEBUG nova.network.os_vif_util [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:d4:87,bridge_name='br-int',has_traffic_filtering=True,id=d0c0b3a1-42ce-4f0a-bb99-20a4472f9626,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0c0b3a1-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.359 253542 DEBUG nova.objects.instance [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 508a1bcc-5cc4-480e-b329-b0b02cf785d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.374 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:39:32 compute-0 nova_compute[253538]:   <uuid>508a1bcc-5cc4-480e-b329-b0b02cf785d2</uuid>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   <name>instance-0000005a</name>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerActionsTestOtherA-server-409966451</nova:name>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:39:31</nova:creationTime>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:39:32 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:39:32 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:39:32 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:39:32 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:39:32 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:39:32 compute-0 nova_compute[253538]:         <nova:user uuid="24fa34332e6f4b628514969bbf76e94b">tempest-ServerActionsTestOtherA-678529119-project-member</nova:user>
Nov 25 08:39:32 compute-0 nova_compute[253538]:         <nova:project uuid="6851917992b149818e8b44146c66bfc3">tempest-ServerActionsTestOtherA-678529119</nova:project>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:39:32 compute-0 nova_compute[253538]:         <nova:port uuid="d0c0b3a1-42ce-4f0a-bb99-20a4472f9626">
Nov 25 08:39:32 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <system>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <entry name="serial">508a1bcc-5cc4-480e-b329-b0b02cf785d2</entry>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <entry name="uuid">508a1bcc-5cc4-480e-b329-b0b02cf785d2</entry>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     </system>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   <os>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   </os>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   <features>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   </features>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/508a1bcc-5cc4-480e-b329-b0b02cf785d2_disk">
Nov 25 08:39:32 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       </source>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:39:32 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/508a1bcc-5cc4-480e-b329-b0b02cf785d2_disk.config">
Nov 25 08:39:32 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       </source>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:39:32 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:6c:d4:87"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <target dev="tapd0c0b3a1-42"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/508a1bcc-5cc4-480e-b329-b0b02cf785d2/console.log" append="off"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <video>
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     </video>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:39:32 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:39:32 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:39:32 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:39:32 compute-0 nova_compute[253538]: </domain>
Nov 25 08:39:32 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.374 253542 DEBUG nova.compute.manager [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Preparing to wait for external event network-vif-plugged-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.374 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.375 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.375 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.375 253542 DEBUG nova.virt.libvirt.vif [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:39:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-409966451',display_name='tempest-ServerActionsTestOtherA-server-409966451',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-409966451',id=90,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6851917992b149818e8b44146c66bfc3',ramdisk_id='',reservation_id='r-sb8kdxjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-678529119',owner_user_name='tempest-ServerActionsTestOtherA-678529119-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:39:28Z,user_data=None,user_id='24fa34332e6f4b628514969bbf76e94b',uuid=508a1bcc-5cc4-480e-b329-b0b02cf785d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "address": "fa:16:3e:6c:d4:87", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c0b3a1-42", "ovs_interfaceid": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.376 253542 DEBUG nova.network.os_vif_util [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converting VIF {"id": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "address": "fa:16:3e:6c:d4:87", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c0b3a1-42", "ovs_interfaceid": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.376 253542 DEBUG nova.network.os_vif_util [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:d4:87,bridge_name='br-int',has_traffic_filtering=True,id=d0c0b3a1-42ce-4f0a-bb99-20a4472f9626,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0c0b3a1-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.376 253542 DEBUG os_vif [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:d4:87,bridge_name='br-int',has_traffic_filtering=True,id=d0c0b3a1-42ce-4f0a-bb99-20a4472f9626,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0c0b3a1-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.377 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.378 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.380 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.381 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0c0b3a1-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.381 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd0c0b3a1-42, col_values=(('external_ids', {'iface-id': 'd0c0b3a1-42ce-4f0a-bb99-20a4472f9626', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:d4:87', 'vm-uuid': '508a1bcc-5cc4-480e-b329-b0b02cf785d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.382 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:32 compute-0 NetworkManager[48915]: <info>  [1764059972.3834] manager: (tapd0c0b3a1-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.384 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.389 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.390 253542 INFO os_vif [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:d4:87,bridge_name='br-int',has_traffic_filtering=True,id=d0c0b3a1-42ce-4f0a-bb99-20a4472f9626,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0c0b3a1-42')
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.397 253542 DEBUG oslo_concurrency.processutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.397 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.397 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.397 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.417 253542 DEBUG nova.storage.rbd_utils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] rbd image 685ce923-4b91-41a7-9a13-d62077b95839_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.420 253542 DEBUG oslo_concurrency.processutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 685ce923-4b91-41a7-9a13-d62077b95839_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.502 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.503 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.503 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] No VIF found with MAC fa:16:3e:6c:d4:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.505 253542 INFO nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Using config drive
Nov 25 08:39:32 compute-0 ceph-mon[75015]: pgmap v1695: 321 pgs: 321 active+clean; 266 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 453 KiB/s wr, 114 op/s
Nov 25 08:39:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1816019626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1064773699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2639876857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.535 253542 DEBUG nova.storage.rbd_utils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 508a1bcc-5cc4-480e-b329-b0b02cf785d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.735 253542 DEBUG oslo_concurrency.processutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 685ce923-4b91-41a7-9a13-d62077b95839_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.793 253542 DEBUG nova.storage.rbd_utils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] resizing rbd image 685ce923-4b91-41a7-9a13-d62077b95839_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.883 253542 DEBUG nova.objects.instance [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lazy-loading 'migration_context' on Instance uuid 685ce923-4b91-41a7-9a13-d62077b95839 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.895 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.895 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Ensure instance console log exists: /var/lib/nova/instances/685ce923-4b91-41a7-9a13-d62077b95839/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.896 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.896 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.896 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:32 compute-0 nova_compute[253538]: 2025-11-25 08:39:32.984 253542 DEBUG nova.network.neutron [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Successfully created port: 482cdb88-12ff-42dd-8670-288f0f655f0a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.050 253542 INFO nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Creating config drive at /var/lib/nova/instances/508a1bcc-5cc4-480e-b329-b0b02cf785d2/disk.config
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.055 253542 DEBUG oslo_concurrency.processutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/508a1bcc-5cc4-480e-b329-b0b02cf785d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdxboe4jt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.189 253542 DEBUG oslo_concurrency.processutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/508a1bcc-5cc4-480e-b329-b0b02cf785d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdxboe4jt" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.211 253542 DEBUG nova.storage.rbd_utils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] rbd image 508a1bcc-5cc4-480e-b329-b0b02cf785d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.214 253542 DEBUG oslo_concurrency.processutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/508a1bcc-5cc4-480e-b329-b0b02cf785d2/disk.config 508a1bcc-5cc4-480e-b329-b0b02cf785d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.250 253542 DEBUG nova.network.neutron [req-4ab22be0-7892-46eb-8093-eb72468e9113 req-be31525e-d2be-4eb9-9044-0e0e50acdda4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Updated VIF entry in instance network info cache for port d0c0b3a1-42ce-4f0a-bb99-20a4472f9626. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.251 253542 DEBUG nova.network.neutron [req-4ab22be0-7892-46eb-8093-eb72468e9113 req-be31525e-d2be-4eb9-9044-0e0e50acdda4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Updating instance_info_cache with network_info: [{"id": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "address": "fa:16:3e:6c:d4:87", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c0b3a1-42", "ovs_interfaceid": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.259 253542 DEBUG nova.network.neutron [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Successfully updated port: 21a76d99-06f7-421c-a4ca-766984c20ab4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.278 253542 DEBUG oslo_concurrency.lockutils [req-4ab22be0-7892-46eb-8093-eb72468e9113 req-be31525e-d2be-4eb9-9044-0e0e50acdda4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-508a1bcc-5cc4-480e-b329-b0b02cf785d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.283 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "refresh_cache-3b6a2122-85ba-42b9-9eed-7d58e10b9b98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.283 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquired lock "refresh_cache-3b6a2122-85ba-42b9-9eed-7d58e10b9b98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.283 253542 DEBUG nova.network.neutron [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.345 253542 DEBUG nova.compute.manager [req-318aaec2-05c9-46ab-b97b-c7c3574de530 req-909269db-2250-4110-b3d1-0190af28aa05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Received event network-changed-21a76d99-06f7-421c-a4ca-766984c20ab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.345 253542 DEBUG nova.compute.manager [req-318aaec2-05c9-46ab-b97b-c7c3574de530 req-909269db-2250-4110-b3d1-0190af28aa05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Refreshing instance network info cache due to event network-changed-21a76d99-06f7-421c-a4ca-766984c20ab4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.346 253542 DEBUG oslo_concurrency.lockutils [req-318aaec2-05c9-46ab-b97b-c7c3574de530 req-909269db-2250-4110-b3d1-0190af28aa05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3b6a2122-85ba-42b9-9eed-7d58e10b9b98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:39:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1696: 321 pgs: 321 active+clean; 329 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.9 MiB/s wr, 108 op/s
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.409 253542 DEBUG oslo_concurrency.processutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/508a1bcc-5cc4-480e-b329-b0b02cf785d2/disk.config 508a1bcc-5cc4-480e-b329-b0b02cf785d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.410 253542 INFO nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Deleting local config drive /var/lib/nova/instances/508a1bcc-5cc4-480e-b329-b0b02cf785d2/disk.config because it was imported into RBD.
Nov 25 08:39:33 compute-0 kernel: tapd0c0b3a1-42: entered promiscuous mode
Nov 25 08:39:33 compute-0 NetworkManager[48915]: <info>  [1764059973.4682] manager: (tapd0c0b3a1-42): new Tun device (/org/freedesktop/NetworkManager/Devices/366)
Nov 25 08:39:33 compute-0 ovn_controller[152859]: 2025-11-25T08:39:33Z|00876|binding|INFO|Claiming lport d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 for this chassis.
Nov 25 08:39:33 compute-0 ovn_controller[152859]: 2025-11-25T08:39:33Z|00877|binding|INFO|d0c0b3a1-42ce-4f0a-bb99-20a4472f9626: Claiming fa:16:3e:6c:d4:87 10.100.0.8
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.469 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:33.476 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:d4:87 10.100.0.8'], port_security=['fa:16:3e:6c:d4:87 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '508a1bcc-5cc4-480e-b329-b0b02cf785d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b676104-a53a-419a-a348-631c409e45c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6851917992b149818e8b44146c66bfc3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee3f370c-3523-4fc9-bede-12723b8659c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=062facaf-eaf1-4c53-a009-db8e8d32ad6e, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d0c0b3a1-42ce-4f0a-bb99-20a4472f9626) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:33.477 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 in datapath 2b676104-a53a-419a-a348-631c409e45c0 bound to our chassis
Nov 25 08:39:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:33.478 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b676104-a53a-419a-a348-631c409e45c0
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.500 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:33.502 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d667a48-428d-4a56-82ca-bd4818034875]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:33 compute-0 ovn_controller[152859]: 2025-11-25T08:39:33Z|00878|binding|INFO|Setting lport d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 ovn-installed in OVS
Nov 25 08:39:33 compute-0 ovn_controller[152859]: 2025-11-25T08:39:33Z|00879|binding|INFO|Setting lport d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 up in Southbound
Nov 25 08:39:33 compute-0 systemd-udevd[341136]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.509 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:33 compute-0 systemd-machined[215790]: New machine qemu-111-instance-0000005a.
Nov 25 08:39:33 compute-0 NetworkManager[48915]: <info>  [1764059973.5228] device (tapd0c0b3a1-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:39:33 compute-0 NetworkManager[48915]: <info>  [1764059973.5239] device (tapd0c0b3a1-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:39:33 compute-0 systemd[1]: Started Virtual Machine qemu-111-instance-0000005a.
Nov 25 08:39:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:33.541 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f5196093-d894-428f-b80c-d5b68edeec8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:33.545 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2f09ad-7c08-4a1b-b939-943eae7d4568]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:33.575 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2d38d280-766d-4179-844b-9f938d7a1db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:33.599 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[018b4e65-35d8-4450-8f84-5c7613770a99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b676104-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:55:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522998, 'reachable_time': 29715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341149, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:33.619 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd13d86-f487-4d21-8ca7-f92d8034bed3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2b676104-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523012, 'tstamp': 523012}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341151, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2b676104-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523015, 'tstamp': 523015}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341151, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:33.621 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b676104-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.622 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.624 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:33.624 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b676104-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:33.624 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:33.624 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b676104-a0, col_values=(('external_ids', {'iface-id': 'a70ff8dd-5248-427b-8c9b-80eee3a671f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:33.625 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.657 253542 DEBUG nova.network.neutron [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.662 253542 DEBUG nova.network.neutron [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Successfully updated port: 482cdb88-12ff-42dd-8670-288f0f655f0a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.678 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Acquiring lock "refresh_cache-685ce923-4b91-41a7-9a13-d62077b95839" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.679 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Acquired lock "refresh_cache-685ce923-4b91-41a7-9a13-d62077b95839" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.679 253542 DEBUG nova.network.neutron [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.824 253542 DEBUG nova.network.neutron [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.894 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059958.89363, 291c3536-48c4-40eb-a910-9494484e8668 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.895 253542 INFO nova.compute.manager [-] [instance: 291c3536-48c4-40eb-a910-9494484e8668] VM Stopped (Lifecycle Event)
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.912 253542 DEBUG nova.compute.manager [None req-d2e745d6-98a1-47b3-a70e-f00a7e34993f - - - - - -] [instance: 291c3536-48c4-40eb-a910-9494484e8668] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.986 253542 DEBUG nova.compute.manager [req-15f5d5d0-4ece-426a-aa81-a16fa1bd1a2e req-7f04f131-ad87-4c62-9c96-e4600aa529df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Received event network-changed-482cdb88-12ff-42dd-8670-288f0f655f0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.986 253542 DEBUG nova.compute.manager [req-15f5d5d0-4ece-426a-aa81-a16fa1bd1a2e req-7f04f131-ad87-4c62-9c96-e4600aa529df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Refreshing instance network info cache due to event network-changed-482cdb88-12ff-42dd-8670-288f0f655f0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:39:33 compute-0 nova_compute[253538]: 2025-11-25 08:39:33.986 253542 DEBUG oslo_concurrency.lockutils [req-15f5d5d0-4ece-426a-aa81-a16fa1bd1a2e req-7f04f131-ad87-4c62-9c96-e4600aa529df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-685ce923-4b91-41a7-9a13-d62077b95839" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.028 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059974.0276115, 508a1bcc-5cc4-480e-b329-b0b02cf785d2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.028 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] VM Started (Lifecycle Event)
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.046 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.049 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059974.0280638, 508a1bcc-5cc4-480e-b329-b0b02cf785d2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.049 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] VM Paused (Lifecycle Event)
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.066 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.068 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.090 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.573 253542 DEBUG nova.network.neutron [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Updating instance_info_cache with network_info: [{"id": "21a76d99-06f7-421c-a4ca-766984c20ab4", "address": "fa:16:3e:a5:bc:ec", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a76d99-06", "ovs_interfaceid": "21a76d99-06f7-421c-a4ca-766984c20ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.600 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Releasing lock "refresh_cache-3b6a2122-85ba-42b9-9eed-7d58e10b9b98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.600 253542 DEBUG nova.compute.manager [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Instance network_info: |[{"id": "21a76d99-06f7-421c-a4ca-766984c20ab4", "address": "fa:16:3e:a5:bc:ec", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a76d99-06", "ovs_interfaceid": "21a76d99-06f7-421c-a4ca-766984c20ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.601 253542 DEBUG oslo_concurrency.lockutils [req-318aaec2-05c9-46ab-b97b-c7c3574de530 req-909269db-2250-4110-b3d1-0190af28aa05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3b6a2122-85ba-42b9-9eed-7d58e10b9b98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.601 253542 DEBUG nova.network.neutron [req-318aaec2-05c9-46ab-b97b-c7c3574de530 req-909269db-2250-4110-b3d1-0190af28aa05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Refreshing network info cache for port 21a76d99-06f7-421c-a4ca-766984c20ab4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.604 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Start _get_guest_xml network_info=[{"id": "21a76d99-06f7-421c-a4ca-766984c20ab4", "address": "fa:16:3e:a5:bc:ec", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a76d99-06", "ovs_interfaceid": "21a76d99-06f7-421c-a4ca-766984c20ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.608 253542 WARNING nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:39:34 compute-0 ceph-mon[75015]: pgmap v1696: 321 pgs: 321 active+clean; 329 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.9 MiB/s wr, 108 op/s
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.615 253542 DEBUG nova.virt.libvirt.host [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.616 253542 DEBUG nova.virt.libvirt.host [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.619 253542 DEBUG nova.virt.libvirt.host [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.619 253542 DEBUG nova.virt.libvirt.host [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.620 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.620 253542 DEBUG nova.virt.hardware [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.620 253542 DEBUG nova.virt.hardware [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.620 253542 DEBUG nova.virt.hardware [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.621 253542 DEBUG nova.virt.hardware [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.621 253542 DEBUG nova.virt.hardware [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.621 253542 DEBUG nova.virt.hardware [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.621 253542 DEBUG nova.virt.hardware [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.621 253542 DEBUG nova.virt.hardware [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.621 253542 DEBUG nova.virt.hardware [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.622 253542 DEBUG nova.virt.hardware [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.622 253542 DEBUG nova.virt.hardware [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.624 253542 DEBUG oslo_concurrency.processutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.660 253542 DEBUG nova.network.neutron [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Updating instance_info_cache with network_info: [{"id": "482cdb88-12ff-42dd-8670-288f0f655f0a", "address": "fa:16:3e:43:b2:5f", "network": {"id": "d80a18f6-ba96-424a-bd3b-8ee30aa40e91", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-828174901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666e694f009b457d9a9432a920faa14b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482cdb88-12", "ovs_interfaceid": "482cdb88-12ff-42dd-8670-288f0f655f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.678 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Releasing lock "refresh_cache-685ce923-4b91-41a7-9a13-d62077b95839" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.678 253542 DEBUG nova.compute.manager [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Instance network_info: |[{"id": "482cdb88-12ff-42dd-8670-288f0f655f0a", "address": "fa:16:3e:43:b2:5f", "network": {"id": "d80a18f6-ba96-424a-bd3b-8ee30aa40e91", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-828174901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666e694f009b457d9a9432a920faa14b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482cdb88-12", "ovs_interfaceid": "482cdb88-12ff-42dd-8670-288f0f655f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.679 253542 DEBUG oslo_concurrency.lockutils [req-15f5d5d0-4ece-426a-aa81-a16fa1bd1a2e req-7f04f131-ad87-4c62-9c96-e4600aa529df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-685ce923-4b91-41a7-9a13-d62077b95839" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.679 253542 DEBUG nova.network.neutron [req-15f5d5d0-4ece-426a-aa81-a16fa1bd1a2e req-7f04f131-ad87-4c62-9c96-e4600aa529df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Refreshing network info cache for port 482cdb88-12ff-42dd-8670-288f0f655f0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.681 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Start _get_guest_xml network_info=[{"id": "482cdb88-12ff-42dd-8670-288f0f655f0a", "address": "fa:16:3e:43:b2:5f", "network": {"id": "d80a18f6-ba96-424a-bd3b-8ee30aa40e91", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-828174901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666e694f009b457d9a9432a920faa14b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482cdb88-12", "ovs_interfaceid": "482cdb88-12ff-42dd-8670-288f0f655f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.686 253542 WARNING nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.689 253542 DEBUG nova.virt.libvirt.host [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.690 253542 DEBUG nova.virt.libvirt.host [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.696 253542 DEBUG nova.virt.libvirt.host [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.696 253542 DEBUG nova.virt.libvirt.host [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.697 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.697 253542 DEBUG nova.virt.hardware [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.697 253542 DEBUG nova.virt.hardware [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.697 253542 DEBUG nova.virt.hardware [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.697 253542 DEBUG nova.virt.hardware [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.697 253542 DEBUG nova.virt.hardware [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.698 253542 DEBUG nova.virt.hardware [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.698 253542 DEBUG nova.virt.hardware [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.698 253542 DEBUG nova.virt.hardware [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.698 253542 DEBUG nova.virt.hardware [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.698 253542 DEBUG nova.virt.hardware [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.698 253542 DEBUG nova.virt.hardware [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:39:34 compute-0 nova_compute[253538]: 2025-11-25 08:39:34.701 253542 DEBUG oslo_concurrency.processutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:39:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3864388606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.089 253542 DEBUG oslo_concurrency.processutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.124 253542 DEBUG nova.storage.rbd_utils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 3b6a2122-85ba-42b9-9eed-7d58e10b9b98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.130 253542 DEBUG oslo_concurrency.processutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:39:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2996352167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.179 253542 DEBUG oslo_concurrency.processutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.220 253542 DEBUG nova.storage.rbd_utils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] rbd image 685ce923-4b91-41a7-9a13-d62077b95839_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.225 253542 DEBUG oslo_concurrency.processutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 362 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 4.2 MiB/s wr, 97 op/s
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.557 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.561 253542 DEBUG nova.compute.manager [req-a4c006c7-acb2-4ec8-877d-d4c3b748c9b6 req-827b3fab-f188-4115-b992-3e3ade7b6418 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Received event network-vif-plugged-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.562 253542 DEBUG oslo_concurrency.lockutils [req-a4c006c7-acb2-4ec8-877d-d4c3b748c9b6 req-827b3fab-f188-4115-b992-3e3ade7b6418 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.562 253542 DEBUG oslo_concurrency.lockutils [req-a4c006c7-acb2-4ec8-877d-d4c3b748c9b6 req-827b3fab-f188-4115-b992-3e3ade7b6418 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.563 253542 DEBUG oslo_concurrency.lockutils [req-a4c006c7-acb2-4ec8-877d-d4c3b748c9b6 req-827b3fab-f188-4115-b992-3e3ade7b6418 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.563 253542 DEBUG nova.compute.manager [req-a4c006c7-acb2-4ec8-877d-d4c3b748c9b6 req-827b3fab-f188-4115-b992-3e3ade7b6418 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Processing event network-vif-plugged-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.563 253542 DEBUG nova.compute.manager [req-a4c006c7-acb2-4ec8-877d-d4c3b748c9b6 req-827b3fab-f188-4115-b992-3e3ade7b6418 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Received event network-vif-plugged-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.564 253542 DEBUG oslo_concurrency.lockutils [req-a4c006c7-acb2-4ec8-877d-d4c3b748c9b6 req-827b3fab-f188-4115-b992-3e3ade7b6418 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.564 253542 DEBUG oslo_concurrency.lockutils [req-a4c006c7-acb2-4ec8-877d-d4c3b748c9b6 req-827b3fab-f188-4115-b992-3e3ade7b6418 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.564 253542 DEBUG oslo_concurrency.lockutils [req-a4c006c7-acb2-4ec8-877d-d4c3b748c9b6 req-827b3fab-f188-4115-b992-3e3ade7b6418 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.565 253542 DEBUG nova.compute.manager [req-a4c006c7-acb2-4ec8-877d-d4c3b748c9b6 req-827b3fab-f188-4115-b992-3e3ade7b6418 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] No waiting events found dispatching network-vif-plugged-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.565 253542 WARNING nova.compute.manager [req-a4c006c7-acb2-4ec8-877d-d4c3b748c9b6 req-827b3fab-f188-4115-b992-3e3ade7b6418 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Received unexpected event network-vif-plugged-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 for instance with vm_state building and task_state spawning.
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.567 253542 DEBUG nova.compute.manager [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:39:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:39:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3566757484' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.582 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059975.5738814, 508a1bcc-5cc4-480e-b329-b0b02cf785d2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.582 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] VM Resumed (Lifecycle Event)
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.586 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.591 253542 INFO nova.virt.libvirt.driver [-] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Instance spawned successfully.
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.592 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.594 253542 DEBUG oslo_concurrency.processutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.595 253542 DEBUG nova.virt.libvirt.vif [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:39:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1820271936',display_name='tempest-ServersTestJSON-server-1820271936',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1820271936',id=91,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-5fxo0nce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:39:30Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=3b6a2122-85ba-42b9-9eed-7d58e10b9b98,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21a76d99-06f7-421c-a4ca-766984c20ab4", "address": "fa:16:3e:a5:bc:ec", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a76d99-06", "ovs_interfaceid": "21a76d99-06f7-421c-a4ca-766984c20ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.595 253542 DEBUG nova.network.os_vif_util [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "21a76d99-06f7-421c-a4ca-766984c20ab4", "address": "fa:16:3e:a5:bc:ec", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a76d99-06", "ovs_interfaceid": "21a76d99-06f7-421c-a4ca-766984c20ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.596 253542 DEBUG nova.network.os_vif_util [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:bc:ec,bridge_name='br-int',has_traffic_filtering=True,id=21a76d99-06f7-421c-a4ca-766984c20ab4,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21a76d99-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.597 253542 DEBUG nova.objects.instance [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3b6a2122-85ba-42b9-9eed-7d58e10b9b98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.608 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.611 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.619 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <uuid>3b6a2122-85ba-42b9-9eed-7d58e10b9b98</uuid>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <name>instance-0000005b</name>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersTestJSON-server-1820271936</nova:name>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:39:34</nova:creationTime>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <nova:user uuid="fdcb005cc49a4dfa82152f2c0817cc94">tempest-ServersTestJSON-1426188226-project-member</nova:user>
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <nova:project uuid="b730f086c4b94185afab5e10fa2e8181">tempest-ServersTestJSON-1426188226</nova:project>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <nova:port uuid="21a76d99-06f7-421c-a4ca-766984c20ab4">
Nov 25 08:39:35 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <system>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <entry name="serial">3b6a2122-85ba-42b9-9eed-7d58e10b9b98</entry>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <entry name="uuid">3b6a2122-85ba-42b9-9eed-7d58e10b9b98</entry>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </system>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <os>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   </os>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <features>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   </features>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/3b6a2122-85ba-42b9-9eed-7d58e10b9b98_disk">
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       </source>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/3b6a2122-85ba-42b9-9eed-7d58e10b9b98_disk.config">
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       </source>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:a5:bc:ec"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <target dev="tap21a76d99-06"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/3b6a2122-85ba-42b9-9eed-7d58e10b9b98/console.log" append="off"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <video>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </video>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:39:35 compute-0 nova_compute[253538]: </domain>
Nov 25 08:39:35 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.619 253542 DEBUG nova.compute.manager [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Preparing to wait for external event network-vif-plugged-21a76d99-06f7-421c-a4ca-766984c20ab4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.620 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.620 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.620 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.621 253542 DEBUG nova.virt.libvirt.vif [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:39:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1820271936',display_name='tempest-ServersTestJSON-server-1820271936',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1820271936',id=91,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-5fxo0nce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:39:30Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=3b6a2122-85ba-42b9-9eed-7d58e10b9b98,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21a76d99-06f7-421c-a4ca-766984c20ab4", "address": "fa:16:3e:a5:bc:ec", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a76d99-06", "ovs_interfaceid": "21a76d99-06f7-421c-a4ca-766984c20ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.621 253542 DEBUG nova.network.os_vif_util [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "21a76d99-06f7-421c-a4ca-766984c20ab4", "address": "fa:16:3e:a5:bc:ec", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a76d99-06", "ovs_interfaceid": "21a76d99-06f7-421c-a4ca-766984c20ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.622 253542 DEBUG nova.network.os_vif_util [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:bc:ec,bridge_name='br-int',has_traffic_filtering=True,id=21a76d99-06f7-421c-a4ca-766984c20ab4,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21a76d99-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.622 253542 DEBUG os_vif [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:bc:ec,bridge_name='br-int',has_traffic_filtering=True,id=21a76d99-06f7-421c-a4ca-766984c20ab4,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21a76d99-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.624 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.624 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.628 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.628 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21a76d99-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.629 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21a76d99-06, col_values=(('external_ids', {'iface-id': '21a76d99-06f7-421c-a4ca-766984c20ab4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:bc:ec', 'vm-uuid': '3b6a2122-85ba-42b9-9eed-7d58e10b9b98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:35 compute-0 NetworkManager[48915]: <info>  [1764059975.6319] manager: (tap21a76d99-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.635 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.638 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.638 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.639 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.639 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.640 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.640 253542 DEBUG nova.virt.libvirt.driver [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.644 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.645 253542 INFO os_vif [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:bc:ec,bridge_name='br-int',has_traffic_filtering=True,id=21a76d99-06f7-421c-a4ca-766984c20ab4,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21a76d99-06')
Nov 25 08:39:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:39:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/312234598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.675 253542 DEBUG oslo_concurrency.processutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.676 253542 DEBUG nova.virt.libvirt.vif [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:39:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1394984162',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1394984162',id=92,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='666e694f009b457d9a9432a920faa14b',ramdisk_id='',reservation_id='r-ra9mh12f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1488975917',owner_user_name='tempest-ServerTagsTestJSON-1488975917-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:39:31Z,user_data=None,user_id='df942547d2cf4befb2c6041e9912c52b',uuid=685ce923-4b91-41a7-9a13-d62077b95839,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "482cdb88-12ff-42dd-8670-288f0f655f0a", "address": "fa:16:3e:43:b2:5f", "network": {"id": "d80a18f6-ba96-424a-bd3b-8ee30aa40e91", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-828174901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666e694f009b457d9a9432a920faa14b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482cdb88-12", "ovs_interfaceid": "482cdb88-12ff-42dd-8670-288f0f655f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.677 253542 DEBUG nova.network.os_vif_util [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Converting VIF {"id": "482cdb88-12ff-42dd-8670-288f0f655f0a", "address": "fa:16:3e:43:b2:5f", "network": {"id": "d80a18f6-ba96-424a-bd3b-8ee30aa40e91", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-828174901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666e694f009b457d9a9432a920faa14b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482cdb88-12", "ovs_interfaceid": "482cdb88-12ff-42dd-8670-288f0f655f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.678 253542 DEBUG nova.network.os_vif_util [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:b2:5f,bridge_name='br-int',has_traffic_filtering=True,id=482cdb88-12ff-42dd-8670-288f0f655f0a,network=Network(d80a18f6-ba96-424a-bd3b-8ee30aa40e91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482cdb88-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.679 253542 DEBUG nova.objects.instance [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lazy-loading 'pci_devices' on Instance uuid 685ce923-4b91-41a7-9a13-d62077b95839 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3864388606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2996352167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:35 compute-0 ceph-mon[75015]: pgmap v1697: 321 pgs: 321 active+clean; 362 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 4.2 MiB/s wr, 97 op/s
Nov 25 08:39:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3566757484' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.718 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <uuid>685ce923-4b91-41a7-9a13-d62077b95839</uuid>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <name>instance-0000005c</name>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerTagsTestJSON-server-1394984162</nova:name>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:39:34</nova:creationTime>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <nova:user uuid="df942547d2cf4befb2c6041e9912c52b">tempest-ServerTagsTestJSON-1488975917-project-member</nova:user>
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <nova:project uuid="666e694f009b457d9a9432a920faa14b">tempest-ServerTagsTestJSON-1488975917</nova:project>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <nova:port uuid="482cdb88-12ff-42dd-8670-288f0f655f0a">
Nov 25 08:39:35 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <system>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <entry name="serial">685ce923-4b91-41a7-9a13-d62077b95839</entry>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <entry name="uuid">685ce923-4b91-41a7-9a13-d62077b95839</entry>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </system>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <os>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   </os>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <features>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   </features>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/685ce923-4b91-41a7-9a13-d62077b95839_disk">
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       </source>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/685ce923-4b91-41a7-9a13-d62077b95839_disk.config">
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       </source>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:39:35 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:43:b2:5f"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <target dev="tap482cdb88-12"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/685ce923-4b91-41a7-9a13-d62077b95839/console.log" append="off"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <video>
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </video>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:39:35 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:39:35 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:39:35 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:39:35 compute-0 nova_compute[253538]: </domain>
Nov 25 08:39:35 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.720 253542 DEBUG nova.compute.manager [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Preparing to wait for external event network-vif-plugged-482cdb88-12ff-42dd-8670-288f0f655f0a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.720 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Acquiring lock "685ce923-4b91-41a7-9a13-d62077b95839-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.720 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lock "685ce923-4b91-41a7-9a13-d62077b95839-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.720 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lock "685ce923-4b91-41a7-9a13-d62077b95839-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.721 253542 DEBUG nova.virt.libvirt.vif [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:39:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1394984162',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1394984162',id=92,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='666e694f009b457d9a9432a920faa14b',ramdisk_id='',reservation_id='r-ra9mh12f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1488975917',owner_user_name='tempest-ServerTagsTestJSON-1488975917-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:39:31Z,user_data=None,user_id='df942547d2cf4befb2c6041e9912c52b',uuid=685ce923-4b91-41a7-9a13-d62077b95839,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "482cdb88-12ff-42dd-8670-288f0f655f0a", "address": "fa:16:3e:43:b2:5f", "network": {"id": "d80a18f6-ba96-424a-bd3b-8ee30aa40e91", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-828174901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666e694f009b457d9a9432a920faa14b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482cdb88-12", "ovs_interfaceid": "482cdb88-12ff-42dd-8670-288f0f655f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.722 253542 DEBUG nova.network.os_vif_util [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Converting VIF {"id": "482cdb88-12ff-42dd-8670-288f0f655f0a", "address": "fa:16:3e:43:b2:5f", "network": {"id": "d80a18f6-ba96-424a-bd3b-8ee30aa40e91", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-828174901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666e694f009b457d9a9432a920faa14b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482cdb88-12", "ovs_interfaceid": "482cdb88-12ff-42dd-8670-288f0f655f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.722 253542 DEBUG nova.network.os_vif_util [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:b2:5f,bridge_name='br-int',has_traffic_filtering=True,id=482cdb88-12ff-42dd-8670-288f0f655f0a,network=Network(d80a18f6-ba96-424a-bd3b-8ee30aa40e91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482cdb88-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.723 253542 DEBUG os_vif [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:b2:5f,bridge_name='br-int',has_traffic_filtering=True,id=482cdb88-12ff-42dd-8670-288f0f655f0a,network=Network(d80a18f6-ba96-424a-bd3b-8ee30aa40e91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482cdb88-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.723 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.724 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.724 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.728 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.728 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap482cdb88-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.729 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap482cdb88-12, col_values=(('external_ids', {'iface-id': '482cdb88-12ff-42dd-8670-288f0f655f0a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:b2:5f', 'vm-uuid': '685ce923-4b91-41a7-9a13-d62077b95839'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.730 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:35 compute-0 NetworkManager[48915]: <info>  [1764059975.7315] manager: (tap482cdb88-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.734 253542 INFO nova.compute.manager [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Took 7.49 seconds to spawn the instance on the hypervisor.
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.735 253542 DEBUG nova.compute.manager [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.735 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.737 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.738 253542 INFO os_vif [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:b2:5f,bridge_name='br-int',has_traffic_filtering=True,id=482cdb88-12ff-42dd-8670-288f0f655f0a,network=Network(d80a18f6-ba96-424a-bd3b-8ee30aa40e91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482cdb88-12')
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.747 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.748 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.748 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No VIF found with MAC fa:16:3e:a5:bc:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.748 253542 INFO nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Using config drive
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.766 253542 DEBUG nova.storage.rbd_utils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 3b6a2122-85ba-42b9-9eed-7d58e10b9b98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.809 253542 INFO nova.compute.manager [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Took 8.49 seconds to build instance.
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.831 253542 DEBUG oslo_concurrency.lockutils [None req-35ee4caa-83ca-4484-88ca-011bc0331292 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.834 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.834 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.834 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] No VIF found with MAC fa:16:3e:43:b2:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.835 253542 INFO nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Using config drive
Nov 25 08:39:35 compute-0 nova_compute[253538]: 2025-11-25 08:39:35.854 253542 DEBUG nova.storage.rbd_utils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] rbd image 685ce923-4b91-41a7-9a13-d62077b95839_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.439 253542 INFO nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Creating config drive at /var/lib/nova/instances/3b6a2122-85ba-42b9-9eed-7d58e10b9b98/disk.config
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.446 253542 DEBUG oslo_concurrency.processutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3b6a2122-85ba-42b9-9eed-7d58e10b9b98/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpatp890fx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.588 253542 DEBUG oslo_concurrency.processutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3b6a2122-85ba-42b9-9eed-7d58e10b9b98/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpatp890fx" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.619 253542 DEBUG nova.storage.rbd_utils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 3b6a2122-85ba-42b9-9eed-7d58e10b9b98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.623 253542 DEBUG oslo_concurrency.processutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3b6a2122-85ba-42b9-9eed-7d58e10b9b98/disk.config 3b6a2122-85ba-42b9-9eed-7d58e10b9b98_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.659 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.669 253542 INFO nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Creating config drive at /var/lib/nova/instances/685ce923-4b91-41a7-9a13-d62077b95839/disk.config
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.675 253542 DEBUG oslo_concurrency.processutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/685ce923-4b91-41a7-9a13-d62077b95839/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5v3qsbt7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/312234598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.757 253542 DEBUG nova.network.neutron [req-15f5d5d0-4ece-426a-aa81-a16fa1bd1a2e req-7f04f131-ad87-4c62-9c96-e4600aa529df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Updated VIF entry in instance network info cache for port 482cdb88-12ff-42dd-8670-288f0f655f0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.759 253542 DEBUG nova.network.neutron [req-15f5d5d0-4ece-426a-aa81-a16fa1bd1a2e req-7f04f131-ad87-4c62-9c96-e4600aa529df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Updating instance_info_cache with network_info: [{"id": "482cdb88-12ff-42dd-8670-288f0f655f0a", "address": "fa:16:3e:43:b2:5f", "network": {"id": "d80a18f6-ba96-424a-bd3b-8ee30aa40e91", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-828174901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666e694f009b457d9a9432a920faa14b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482cdb88-12", "ovs_interfaceid": "482cdb88-12ff-42dd-8670-288f0f655f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.779 253542 DEBUG oslo_concurrency.lockutils [req-15f5d5d0-4ece-426a-aa81-a16fa1bd1a2e req-7f04f131-ad87-4c62-9c96-e4600aa529df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-685ce923-4b91-41a7-9a13-d62077b95839" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.833 253542 DEBUG oslo_concurrency.processutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/685ce923-4b91-41a7-9a13-d62077b95839/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5v3qsbt7" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.860 253542 DEBUG nova.storage.rbd_utils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] rbd image 685ce923-4b91-41a7-9a13-d62077b95839_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.865 253542 DEBUG oslo_concurrency.processutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/685ce923-4b91-41a7-9a13-d62077b95839/disk.config 685ce923-4b91-41a7-9a13-d62077b95839_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.921 253542 DEBUG oslo_concurrency.processutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3b6a2122-85ba-42b9-9eed-7d58e10b9b98/disk.config 3b6a2122-85ba-42b9-9eed-7d58e10b9b98_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.922 253542 INFO nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Deleting local config drive /var/lib/nova/instances/3b6a2122-85ba-42b9-9eed-7d58e10b9b98/disk.config because it was imported into RBD.
Nov 25 08:39:36 compute-0 NetworkManager[48915]: <info>  [1764059976.9750] manager: (tap21a76d99-06): new Tun device (/org/freedesktop/NetworkManager/Devices/369)
Nov 25 08:39:36 compute-0 kernel: tap21a76d99-06: entered promiscuous mode
Nov 25 08:39:36 compute-0 ovn_controller[152859]: 2025-11-25T08:39:36Z|00880|binding|INFO|Claiming lport 21a76d99-06f7-421c-a4ca-766984c20ab4 for this chassis.
Nov 25 08:39:36 compute-0 ovn_controller[152859]: 2025-11-25T08:39:36Z|00881|binding|INFO|21a76d99-06f7-421c-a4ca-766984c20ab4: Claiming fa:16:3e:a5:bc:ec 10.100.0.9
Nov 25 08:39:36 compute-0 nova_compute[253538]: 2025-11-25 08:39:36.983 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:37 compute-0 systemd-machined[215790]: New machine qemu-112-instance-0000005b.
Nov 25 08:39:37 compute-0 ovn_controller[152859]: 2025-11-25T08:39:37Z|00882|binding|INFO|Setting lport 21a76d99-06f7-421c-a4ca-766984c20ab4 ovn-installed in OVS
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.013 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.014 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:37 compute-0 systemd[1]: Started Virtual Machine qemu-112-instance-0000005b.
Nov 25 08:39:37 compute-0 ovn_controller[152859]: 2025-11-25T08:39:37Z|00883|binding|INFO|Setting lport 21a76d99-06f7-421c-a4ca-766984c20ab4 up in Southbound
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.034 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:bc:ec 10.100.0.9'], port_security=['fa:16:3e:a5:bc:ec 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '3b6a2122-85ba-42b9-9eed-7d58e10b9b98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92e26514-5b15-410b-8885-6773bc03c4ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b730f086c4b94185afab5e10fa2e8181', 'neutron:revision_number': '2', 'neutron:security_group_ids': '420b1f20-9bb9-442a-978c-444ed9d0cb04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ae0a3b6-6c0f-4ef3-ae20-c7b5b6ede8ac, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=21a76d99-06f7-421c-a4ca-766984c20ab4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.035 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 21a76d99-06f7-421c-a4ca-766984c20ab4 in datapath 92e26514-5b15-410b-8885-6773bc03c4ce bound to our chassis
Nov 25 08:39:37 compute-0 systemd-udevd[341451]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.036 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92e26514-5b15-410b-8885-6773bc03c4ce
Nov 25 08:39:37 compute-0 NetworkManager[48915]: <info>  [1764059977.0484] device (tap21a76d99-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:39:37 compute-0 NetworkManager[48915]: <info>  [1764059977.0496] device (tap21a76d99-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.054 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c349172-178b-48e0-8a35-87b17ec8febf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.081 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe2291f-ac84-4be7-a2eb-9db03b8d5e7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.085 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[092124c3-3978-41e9-97e0-e3dc2525d84e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.109 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[aa6fcac5-5994-4554-a10c-cb5b4a1a3af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.122 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f9340ff3-a9a9-4118-9453-69475e6daa2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92e26514-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:84:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522366, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341465, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.138 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ca99b9-9834-49d0-a5c6-474aec237fef]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522379, 'tstamp': 522379}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341466, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522383, 'tstamp': 522383}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341466, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.139 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92e26514-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.141 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.145 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.145 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92e26514-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.146 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.146 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92e26514-50, col_values=(('external_ids', {'iface-id': '185991f0-acab-400e-baff-76794035d44a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.146 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.207 253542 DEBUG nova.network.neutron [req-318aaec2-05c9-46ab-b97b-c7c3574de530 req-909269db-2250-4110-b3d1-0190af28aa05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Updated VIF entry in instance network info cache for port 21a76d99-06f7-421c-a4ca-766984c20ab4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.207 253542 DEBUG nova.network.neutron [req-318aaec2-05c9-46ab-b97b-c7c3574de530 req-909269db-2250-4110-b3d1-0190af28aa05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Updating instance_info_cache with network_info: [{"id": "21a76d99-06f7-421c-a4ca-766984c20ab4", "address": "fa:16:3e:a5:bc:ec", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a76d99-06", "ovs_interfaceid": "21a76d99-06f7-421c-a4ca-766984c20ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.226 253542 DEBUG oslo_concurrency.lockutils [req-318aaec2-05c9-46ab-b97b-c7c3574de530 req-909269db-2250-4110-b3d1-0190af28aa05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3b6a2122-85ba-42b9-9eed-7d58e10b9b98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:39:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1698: 321 pgs: 321 active+clean; 385 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 671 KiB/s rd, 5.3 MiB/s wr, 112 op/s
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.592 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059977.591636, 3b6a2122-85ba-42b9-9eed-7d58e10b9b98 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.592 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] VM Started (Lifecycle Event)
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.616 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059962.614639, dd202e7c-474a-42f6-a6a8-5276974c793f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.616 253542 INFO nova.compute.manager [-] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] VM Stopped (Lifecycle Event)
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.619 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.623 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059977.5917592, 3b6a2122-85ba-42b9-9eed-7d58e10b9b98 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.624 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] VM Paused (Lifecycle Event)
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.632 253542 DEBUG nova.compute.manager [None req-0ca060e8-3d04-4139-a379-fa6d9a1ef91e - - - - - -] [instance: dd202e7c-474a-42f6-a6a8-5276974c793f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.647 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.654 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.678 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.785 253542 DEBUG nova.compute.manager [req-2e1a7988-76ab-49e8-aaae-ed484b87a431 req-a8ee5a96-b980-44a1-99c8-90d7314a8cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Received event network-vif-plugged-21a76d99-06f7-421c-a4ca-766984c20ab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.786 253542 DEBUG oslo_concurrency.lockutils [req-2e1a7988-76ab-49e8-aaae-ed484b87a431 req-a8ee5a96-b980-44a1-99c8-90d7314a8cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.787 253542 DEBUG oslo_concurrency.lockutils [req-2e1a7988-76ab-49e8-aaae-ed484b87a431 req-a8ee5a96-b980-44a1-99c8-90d7314a8cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.787 253542 DEBUG oslo_concurrency.lockutils [req-2e1a7988-76ab-49e8-aaae-ed484b87a431 req-a8ee5a96-b980-44a1-99c8-90d7314a8cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.787 253542 DEBUG nova.compute.manager [req-2e1a7988-76ab-49e8-aaae-ed484b87a431 req-a8ee5a96-b980-44a1-99c8-90d7314a8cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Processing event network-vif-plugged-21a76d99-06f7-421c-a4ca-766984c20ab4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.788 253542 DEBUG nova.compute.manager [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.795 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059977.7950137, 3b6a2122-85ba-42b9-9eed-7d58e10b9b98 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.795 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] VM Resumed (Lifecycle Event)
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.797 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.804 253542 INFO nova.virt.libvirt.driver [-] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Instance spawned successfully.
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.805 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:39:37 compute-0 ceph-mon[75015]: pgmap v1698: 321 pgs: 321 active+clean; 385 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 671 KiB/s rd, 5.3 MiB/s wr, 112 op/s
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.829 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.845 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.849 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.850 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.851 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.851 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.851 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.852 253542 DEBUG nova.virt.libvirt.driver [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.855 253542 DEBUG oslo_concurrency.processutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/685ce923-4b91-41a7-9a13-d62077b95839/disk.config 685ce923-4b91-41a7-9a13-d62077b95839_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.990s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.856 253542 INFO nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Deleting local config drive /var/lib/nova/instances/685ce923-4b91-41a7-9a13-d62077b95839/disk.config because it was imported into RBD.
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.873 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:39:37 compute-0 kernel: tap482cdb88-12: entered promiscuous mode
Nov 25 08:39:37 compute-0 NetworkManager[48915]: <info>  [1764059977.9035] manager: (tap482cdb88-12): new Tun device (/org/freedesktop/NetworkManager/Devices/370)
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.905 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:37 compute-0 ovn_controller[152859]: 2025-11-25T08:39:37Z|00884|binding|INFO|Claiming lport 482cdb88-12ff-42dd-8670-288f0f655f0a for this chassis.
Nov 25 08:39:37 compute-0 ovn_controller[152859]: 2025-11-25T08:39:37Z|00885|binding|INFO|482cdb88-12ff-42dd-8670-288f0f655f0a: Claiming fa:16:3e:43:b2:5f 10.100.0.10
Nov 25 08:39:37 compute-0 NetworkManager[48915]: <info>  [1764059977.9204] device (tap482cdb88-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:39:37 compute-0 NetworkManager[48915]: <info>  [1764059977.9218] device (tap482cdb88-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.923 253542 INFO nova.compute.manager [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Took 7.52 seconds to spawn the instance on the hypervisor.
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.924 253542 DEBUG nova.compute.manager [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:37 compute-0 ovn_controller[152859]: 2025-11-25T08:39:37Z|00886|binding|INFO|Setting lport 482cdb88-12ff-42dd-8670-288f0f655f0a ovn-installed in OVS
Nov 25 08:39:37 compute-0 ovn_controller[152859]: 2025-11-25T08:39:37Z|00887|binding|INFO|Setting lport 482cdb88-12ff-42dd-8670-288f0f655f0a up in Southbound
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.931 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.933 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:b2:5f 10.100.0.10'], port_security=['fa:16:3e:43:b2:5f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '685ce923-4b91-41a7-9a13-d62077b95839', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d80a18f6-ba96-424a-bd3b-8ee30aa40e91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '666e694f009b457d9a9432a920faa14b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5681076e-4aa8-4a43-a63c-4a387621e593', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f93cacc-b418-40d0-8b8c-583dba06cf3c, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=482cdb88-12ff-42dd-8670-288f0f655f0a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.935 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 482cdb88-12ff-42dd-8670-288f0f655f0a in datapath d80a18f6-ba96-424a-bd3b-8ee30aa40e91 bound to our chassis
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.936 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d80a18f6-ba96-424a-bd3b-8ee30aa40e91
Nov 25 08:39:37 compute-0 nova_compute[253538]: 2025-11-25 08:39:37.939 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:37 compute-0 systemd-machined[215790]: New machine qemu-113-instance-0000005c.
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.948 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d23ce8-3c57-446a-886c-da12069919d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.949 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd80a18f6-b1 in ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.951 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd80a18f6-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.951 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6eff334a-a0f5-4055-ab41-cde9fab837a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.952 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4acb9638-5f54-46fb-87cf-90f8b1dbf6ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:37 compute-0 systemd[1]: Started Virtual Machine qemu-113-instance-0000005c.
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.963 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[18271190-dee9-40b4-9a2c-698ce5cc7f53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:37.985 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[35892c13-d7ad-427c-a7c5-ee9696bdc280]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.001 253542 INFO nova.compute.manager [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Took 8.83 seconds to build instance.
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.019 253542 DEBUG oslo_concurrency.lockutils [None req-cee0648e-9585-4316-827c-4f67d44670be fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.020 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[cf30e372-de86-4810-94b3-60562f543aae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:38 compute-0 NetworkManager[48915]: <info>  [1764059978.0280] manager: (tapd80a18f6-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/371)
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.026 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[02603dbe-22fe-4caf-a61a-d8424e8b31a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.065 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3c98a5-1d7d-471a-9af7-92c093e0e912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.069 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[85dc402b-fe99-4e78-8038-4dbf63a6adcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:38 compute-0 NetworkManager[48915]: <info>  [1764059978.1023] device (tapd80a18f6-b0): carrier: link connected
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.115 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ca141505-d432-475d-a3b1-876f6c489bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.135 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d606ec3-c0ef-43b2-9c03-ca3ebccbc728]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd80a18f6-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:dc:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533270, 'reachable_time': 20600, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341559, 'error': None, 'target': 'ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.151 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0eefa7ae-a354-4bb7-b948-056c3d695d77]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee2:dc71'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 533270, 'tstamp': 533270}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341560, 'error': None, 'target': 'ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.168 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5a692651-8f16-45e0-bd46-ef778088f2b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd80a18f6-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:dc:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533270, 'reachable_time': 20600, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 341561, 'error': None, 'target': 'ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.203 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4584e220-c069-4e85-b1b5-edd6ad104314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.278 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[31a5247e-4085-4de7-9e63-bdc33d3eb8e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.280 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd80a18f6-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.280 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.281 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd80a18f6-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:38 compute-0 kernel: tapd80a18f6-b0: entered promiscuous mode
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.284 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:38 compute-0 NetworkManager[48915]: <info>  [1764059978.2864] manager: (tapd80a18f6-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.286 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.294 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd80a18f6-b0, col_values=(('external_ids', {'iface-id': 'ab93adc8-edaf-48bd-bb02-84d4e589d56b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.296 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:38 compute-0 ovn_controller[152859]: 2025-11-25T08:39:38Z|00888|binding|INFO|Releasing lport ab93adc8-edaf-48bd-bb02-84d4e589d56b from this chassis (sb_readonly=0)
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.297 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.298 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d80a18f6-ba96-424a-bd3b-8ee30aa40e91.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d80a18f6-ba96-424a-bd3b-8ee30aa40e91.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.299 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1e41d729-1632-490f-9d8e-cd37d044f227]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.300 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-d80a18f6-ba96-424a-bd3b-8ee30aa40e91
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/d80a18f6-ba96-424a-bd3b-8ee30aa40e91.pid.haproxy
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID d80a18f6-ba96-424a-bd3b-8ee30aa40e91
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:39:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:38.301 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91', 'env', 'PROCESS_TAG=haproxy-d80a18f6-ba96-424a-bd3b-8ee30aa40e91', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d80a18f6-ba96-424a-bd3b-8ee30aa40e91.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.329 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.392 253542 DEBUG nova.compute.manager [req-3a939fad-181d-4931-b9b4-62169b21d604 req-95448076-a7dc-4f99-9fec-58d0a645260d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Received event network-vif-plugged-482cdb88-12ff-42dd-8670-288f0f655f0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.393 253542 DEBUG oslo_concurrency.lockutils [req-3a939fad-181d-4931-b9b4-62169b21d604 req-95448076-a7dc-4f99-9fec-58d0a645260d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "685ce923-4b91-41a7-9a13-d62077b95839-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.393 253542 DEBUG oslo_concurrency.lockutils [req-3a939fad-181d-4931-b9b4-62169b21d604 req-95448076-a7dc-4f99-9fec-58d0a645260d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "685ce923-4b91-41a7-9a13-d62077b95839-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.394 253542 DEBUG oslo_concurrency.lockutils [req-3a939fad-181d-4931-b9b4-62169b21d604 req-95448076-a7dc-4f99-9fec-58d0a645260d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "685ce923-4b91-41a7-9a13-d62077b95839-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.394 253542 DEBUG nova.compute.manager [req-3a939fad-181d-4931-b9b4-62169b21d604 req-95448076-a7dc-4f99-9fec-58d0a645260d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Processing event network-vif-plugged-482cdb88-12ff-42dd-8670-288f0f655f0a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.408 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059978.4075904, 685ce923-4b91-41a7-9a13-d62077b95839 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.408 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] VM Started (Lifecycle Event)
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.410 253542 DEBUG nova.compute.manager [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.414 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.417 253542 INFO nova.virt.libvirt.driver [-] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Instance spawned successfully.
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.418 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.427 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.435 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.441 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.442 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.442 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.442 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.443 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.444 253542 DEBUG nova.virt.libvirt.driver [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.453 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.453 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059978.4077327, 685ce923-4b91-41a7-9a13-d62077b95839 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.453 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] VM Paused (Lifecycle Event)
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.478 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.482 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059978.4128737, 685ce923-4b91-41a7-9a13-d62077b95839 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.482 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] VM Resumed (Lifecycle Event)
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.502 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.506 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.509 253542 INFO nova.compute.manager [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Took 6.33 seconds to spawn the instance on the hypervisor.
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.510 253542 DEBUG nova.compute.manager [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.537 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.574 253542 INFO nova.compute.manager [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Took 7.57 seconds to build instance.
Nov 25 08:39:38 compute-0 nova_compute[253538]: 2025-11-25 08:39:38.589 253542 DEBUG oslo_concurrency.lockutils [None req-d1c3f5af-c417-4fa2-ab64-251480b68175 df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lock "685ce923-4b91-41a7-9a13-d62077b95839" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:38 compute-0 podman[341635]: 2025-11-25 08:39:38.696230759 +0000 UTC m=+0.024396105 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:39:38 compute-0 podman[341635]: 2025-11-25 08:39:38.806475964 +0000 UTC m=+0.134641280 container create c16e2ba1375d7f6bc0237721bafc5135d81c36e53feb07063f53dbf8bdbea6e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 08:39:38 compute-0 systemd[1]: Started libpod-conmon-c16e2ba1375d7f6bc0237721bafc5135d81c36e53feb07063f53dbf8bdbea6e5.scope.
Nov 25 08:39:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4121d83af4069342deecb5a7256a7b0a8eea95ecdac247ef8763d8c304ee9a6b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:39:38 compute-0 podman[341635]: 2025-11-25 08:39:38.991687161 +0000 UTC m=+0.319852507 container init c16e2ba1375d7f6bc0237721bafc5135d81c36e53feb07063f53dbf8bdbea6e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:39:38 compute-0 podman[341635]: 2025-11-25 08:39:38.99936925 +0000 UTC m=+0.327534566 container start c16e2ba1375d7f6bc0237721bafc5135d81c36e53feb07063f53dbf8bdbea6e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:39:39 compute-0 neutron-haproxy-ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91[341650]: [NOTICE]   (341654) : New worker (341656) forked
Nov 25 08:39:39 compute-0 neutron-haproxy-ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91[341650]: [NOTICE]   (341654) : Loading success.
Nov 25 08:39:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 385 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 947 KiB/s rd, 5.3 MiB/s wr, 125 op/s
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.018 253542 DEBUG nova.compute.manager [req-ae92ad35-515d-40ca-ad34-0a46f60d034d req-bdda97c9-5309-4d5c-b57d-c81253d5c56f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Received event network-vif-plugged-21a76d99-06f7-421c-a4ca-766984c20ab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.018 253542 DEBUG oslo_concurrency.lockutils [req-ae92ad35-515d-40ca-ad34-0a46f60d034d req-bdda97c9-5309-4d5c-b57d-c81253d5c56f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.019 253542 DEBUG oslo_concurrency.lockutils [req-ae92ad35-515d-40ca-ad34-0a46f60d034d req-bdda97c9-5309-4d5c-b57d-c81253d5c56f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.019 253542 DEBUG oslo_concurrency.lockutils [req-ae92ad35-515d-40ca-ad34-0a46f60d034d req-bdda97c9-5309-4d5c-b57d-c81253d5c56f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.019 253542 DEBUG nova.compute.manager [req-ae92ad35-515d-40ca-ad34-0a46f60d034d req-bdda97c9-5309-4d5c-b57d-c81253d5c56f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] No waiting events found dispatching network-vif-plugged-21a76d99-06f7-421c-a4ca-766984c20ab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.019 253542 WARNING nova.compute.manager [req-ae92ad35-515d-40ca-ad34-0a46f60d034d req-bdda97c9-5309-4d5c-b57d-c81253d5c56f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Received unexpected event network-vif-plugged-21a76d99-06f7-421c-a4ca-766984c20ab4 for instance with vm_state active and task_state None.
Nov 25 08:39:40 compute-0 ceph-mon[75015]: pgmap v1699: 321 pgs: 321 active+clean; 385 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 947 KiB/s rd, 5.3 MiB/s wr, 125 op/s
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.730 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.784 253542 DEBUG nova.compute.manager [req-b7defbf3-bb3b-4c26-add0-a13bc58e5dfc req-8e22f057-bd3d-4ad3-a20a-a08d035fc349 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Received event network-vif-plugged-482cdb88-12ff-42dd-8670-288f0f655f0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.785 253542 DEBUG oslo_concurrency.lockutils [req-b7defbf3-bb3b-4c26-add0-a13bc58e5dfc req-8e22f057-bd3d-4ad3-a20a-a08d035fc349 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "685ce923-4b91-41a7-9a13-d62077b95839-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.785 253542 DEBUG oslo_concurrency.lockutils [req-b7defbf3-bb3b-4c26-add0-a13bc58e5dfc req-8e22f057-bd3d-4ad3-a20a-a08d035fc349 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "685ce923-4b91-41a7-9a13-d62077b95839-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.786 253542 DEBUG oslo_concurrency.lockutils [req-b7defbf3-bb3b-4c26-add0-a13bc58e5dfc req-8e22f057-bd3d-4ad3-a20a-a08d035fc349 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "685ce923-4b91-41a7-9a13-d62077b95839-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.786 253542 DEBUG nova.compute.manager [req-b7defbf3-bb3b-4c26-add0-a13bc58e5dfc req-8e22f057-bd3d-4ad3-a20a-a08d035fc349 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] No waiting events found dispatching network-vif-plugged-482cdb88-12ff-42dd-8670-288f0f655f0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.787 253542 WARNING nova.compute.manager [req-b7defbf3-bb3b-4c26-add0-a13bc58e5dfc req-8e22f057-bd3d-4ad3-a20a-a08d035fc349 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Received unexpected event network-vif-plugged-482cdb88-12ff-42dd-8670-288f0f655f0a for instance with vm_state active and task_state None.
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.787 253542 DEBUG nova.compute.manager [req-b7defbf3-bb3b-4c26-add0-a13bc58e5dfc req-8e22f057-bd3d-4ad3-a20a-a08d035fc349 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Received event network-changed-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.787 253542 DEBUG nova.compute.manager [req-b7defbf3-bb3b-4c26-add0-a13bc58e5dfc req-8e22f057-bd3d-4ad3-a20a-a08d035fc349 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Refreshing instance network info cache due to event network-changed-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.788 253542 DEBUG oslo_concurrency.lockutils [req-b7defbf3-bb3b-4c26-add0-a13bc58e5dfc req-8e22f057-bd3d-4ad3-a20a-a08d035fc349 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-508a1bcc-5cc4-480e-b329-b0b02cf785d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.788 253542 DEBUG oslo_concurrency.lockutils [req-b7defbf3-bb3b-4c26-add0-a13bc58e5dfc req-8e22f057-bd3d-4ad3-a20a-a08d035fc349 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-508a1bcc-5cc4-480e-b329-b0b02cf785d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:39:40 compute-0 nova_compute[253538]: 2025-11-25 08:39:40.788 253542 DEBUG nova.network.neutron [req-b7defbf3-bb3b-4c26-add0-a13bc58e5dfc req-8e22f057-bd3d-4ad3-a20a-a08d035fc349 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Refreshing network info cache for port d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.053 253542 DEBUG oslo_concurrency.lockutils [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.054 253542 DEBUG oslo_concurrency.lockutils [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.054 253542 DEBUG oslo_concurrency.lockutils [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.054 253542 DEBUG oslo_concurrency.lockutils [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.054 253542 DEBUG oslo_concurrency.lockutils [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.056 253542 INFO nova.compute.manager [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Terminating instance
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.056 253542 DEBUG nova.compute.manager [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.065 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.066 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.067 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:41 compute-0 kernel: tapd0c0b3a1-42 (unregistering): left promiscuous mode
Nov 25 08:39:41 compute-0 NetworkManager[48915]: <info>  [1764059981.1098] device (tapd0c0b3a1-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:39:41 compute-0 ovn_controller[152859]: 2025-11-25T08:39:41Z|00889|binding|INFO|Releasing lport d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 from this chassis (sb_readonly=0)
Nov 25 08:39:41 compute-0 ovn_controller[152859]: 2025-11-25T08:39:41Z|00890|binding|INFO|Setting lport d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 down in Southbound
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.122 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:41 compute-0 ovn_controller[152859]: 2025-11-25T08:39:41Z|00891|binding|INFO|Removing iface tapd0c0b3a1-42 ovn-installed in OVS
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.125 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.179 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:d4:87 10.100.0.8'], port_security=['fa:16:3e:6c:d4:87 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '508a1bcc-5cc4-480e-b329-b0b02cf785d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b676104-a53a-419a-a348-631c409e45c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6851917992b149818e8b44146c66bfc3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=062facaf-eaf1-4c53-a009-db8e8d32ad6e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d0c0b3a1-42ce-4f0a-bb99-20a4472f9626) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:41 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.180 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 in datapath 2b676104-a53a-419a-a348-631c409e45c0 unbound from our chassis
Nov 25 08:39:41 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005a.scope: Consumed 5.904s CPU time.
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.181 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b676104-a53a-419a-a348-631c409e45c0
Nov 25 08:39:41 compute-0 systemd-machined[215790]: Machine qemu-111-instance-0000005a terminated.
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.194 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8708d6-bd42-4bc7-821b-27ecefd04928]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.222 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e883a5ae-93e7-4f71-ac08-31e7f40af019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.224 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[25819b9c-39f3-4b3a-8038-ac230c419e0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.251 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a4740e15-38ad-4ca8-997d-4103f7542114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.268 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff1504c-cbbd-4b81-aefb-27db227765f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b676104-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:55:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522998, 'reachable_time': 29715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341677, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.286 253542 INFO nova.virt.libvirt.driver [-] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Instance destroyed successfully.
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.286 253542 DEBUG nova.objects.instance [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'resources' on Instance uuid 508a1bcc-5cc4-480e-b329-b0b02cf785d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.297 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a0393a8e-473d-4048-81e6-623891f12ebf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2b676104-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523012, 'tstamp': 523012}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341683, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2b676104-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523015, 'tstamp': 523015}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341683, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.299 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b676104-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.300 253542 DEBUG nova.virt.libvirt.vif [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:39:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-409966451',display_name='tempest-ServerActionsTestOtherA-server-409966451',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-409966451',id=90,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:39:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6851917992b149818e8b44146c66bfc3',ramdisk_id='',reservation_id='r-sb8kdxjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-678529119',owner_user_name='tempest-ServerActionsTestOtherA-678529119-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:39:35Z,user_data=None,user_id='24fa34332e6f4b628514969bbf76e94b',uuid=508a1bcc-5cc4-480e-b329-b0b02cf785d2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "address": "fa:16:3e:6c:d4:87", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c0b3a1-42", "ovs_interfaceid": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.301 253542 DEBUG nova.network.os_vif_util [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converting VIF {"id": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "address": "fa:16:3e:6c:d4:87", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c0b3a1-42", "ovs_interfaceid": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.302 253542 DEBUG nova.network.os_vif_util [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:d4:87,bridge_name='br-int',has_traffic_filtering=True,id=d0c0b3a1-42ce-4f0a-bb99-20a4472f9626,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0c0b3a1-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.303 253542 DEBUG os_vif [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:d4:87,bridge_name='br-int',has_traffic_filtering=True,id=d0c0b3a1-42ce-4f0a-bb99-20a4472f9626,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0c0b3a1-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.304 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.304 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b676104-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.305 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0c0b3a1-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.306 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.304 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.305 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b676104-a0, col_values=(('external_ids', {'iface-id': 'a70ff8dd-5248-427b-8c9b-80eee3a671f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.305 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.307 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.307 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.309 253542 INFO os_vif [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:d4:87,bridge_name='br-int',has_traffic_filtering=True,id=d0c0b3a1-42ce-4f0a-bb99-20a4472f9626,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0c0b3a1-42')
Nov 25 08:39:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 386 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.4 MiB/s wr, 184 op/s
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.608 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.677 253542 INFO nova.virt.libvirt.driver [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Deleting instance files /var/lib/nova/instances/508a1bcc-5cc4-480e-b329-b0b02cf785d2_del
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.679 253542 INFO nova.virt.libvirt.driver [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Deletion of /var/lib/nova/instances/508a1bcc-5cc4-480e-b329-b0b02cf785d2_del complete
Nov 25 08:39:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.759 253542 DEBUG oslo_concurrency.lockutils [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.760 253542 DEBUG oslo_concurrency.lockutils [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.760 253542 DEBUG oslo_concurrency.lockutils [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.761 253542 DEBUG oslo_concurrency.lockutils [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.761 253542 DEBUG oslo_concurrency.lockutils [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.763 253542 INFO nova.compute.manager [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Terminating instance
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.764 253542 DEBUG nova.compute.manager [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.765 253542 INFO nova.compute.manager [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Took 0.71 seconds to destroy the instance on the hypervisor.
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.765 253542 DEBUG oslo.service.loopingcall [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.766 253542 DEBUG nova.compute.manager [-] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.766 253542 DEBUG nova.network.neutron [-] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:39:41 compute-0 kernel: tap21a76d99-06 (unregistering): left promiscuous mode
Nov 25 08:39:41 compute-0 NetworkManager[48915]: <info>  [1764059981.8178] device (tap21a76d99-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:39:41 compute-0 ovn_controller[152859]: 2025-11-25T08:39:41Z|00892|binding|INFO|Releasing lport 21a76d99-06f7-421c-a4ca-766984c20ab4 from this chassis (sb_readonly=0)
Nov 25 08:39:41 compute-0 ovn_controller[152859]: 2025-11-25T08:39:41Z|00893|binding|INFO|Setting lport 21a76d99-06f7-421c-a4ca-766984c20ab4 down in Southbound
Nov 25 08:39:41 compute-0 ovn_controller[152859]: 2025-11-25T08:39:41Z|00894|binding|INFO|Removing iface tap21a76d99-06 ovn-installed in OVS
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.846 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.861 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:bc:ec 10.100.0.9'], port_security=['fa:16:3e:a5:bc:ec 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '3b6a2122-85ba-42b9-9eed-7d58e10b9b98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92e26514-5b15-410b-8885-6773bc03c4ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b730f086c4b94185afab5e10fa2e8181', 'neutron:revision_number': '4', 'neutron:security_group_ids': '420b1f20-9bb9-442a-978c-444ed9d0cb04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ae0a3b6-6c0f-4ef3-ae20-c7b5b6ede8ac, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=21a76d99-06f7-421c-a4ca-766984c20ab4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.863 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 21a76d99-06f7-421c-a4ca-766984c20ab4 in datapath 92e26514-5b15-410b-8885-6773bc03c4ce unbound from our chassis
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.866 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92e26514-5b15-410b-8885-6773bc03c4ce
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.866 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:41 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Nov 25 08:39:41 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005b.scope: Consumed 4.309s CPU time.
Nov 25 08:39:41 compute-0 systemd-machined[215790]: Machine qemu-112-instance-0000005b terminated.
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.887 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[db001fd7-092c-467d-b41a-a29492014309]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.925 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d97746-867a-4519-8240-1eab68e8a25d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.928 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e52da8a2-7c16-4c73-94c5-8e453f107460]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.969 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d66d533f-bab6-40e0-b2a2-7feff4f49d06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:41.990 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3985679c-7241-4771-94f9-91ceee60b41b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92e26514-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:84:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522366, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341718, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:41 compute-0 nova_compute[253538]: 2025-11-25 08:39:41.995 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.004 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.011 253542 INFO nova.virt.libvirt.driver [-] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Instance destroyed successfully.
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.011 253542 DEBUG nova.objects.instance [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'resources' on Instance uuid 3b6a2122-85ba-42b9-9eed-7d58e10b9b98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:42.023 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dda5a687-91e2-4b1f-909d-698038063d14]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522379, 'tstamp': 522379}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341723, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522383, 'tstamp': 522383}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341723, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:42.026 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92e26514-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.024 253542 DEBUG nova.virt.libvirt.vif [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:39:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1820271936',display_name='tempest-ServersTestJSON-server-1820271936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1820271936',id=91,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:39:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-5fxo0nce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:39:40Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=3b6a2122-85ba-42b9-9eed-7d58e10b9b98,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21a76d99-06f7-421c-a4ca-766984c20ab4", "address": "fa:16:3e:a5:bc:ec", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a76d99-06", "ovs_interfaceid": "21a76d99-06f7-421c-a4ca-766984c20ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.024 253542 DEBUG nova.network.os_vif_util [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "21a76d99-06f7-421c-a4ca-766984c20ab4", "address": "fa:16:3e:a5:bc:ec", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21a76d99-06", "ovs_interfaceid": "21a76d99-06f7-421c-a4ca-766984c20ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.025 253542 DEBUG nova.network.os_vif_util [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:bc:ec,bridge_name='br-int',has_traffic_filtering=True,id=21a76d99-06f7-421c-a4ca-766984c20ab4,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21a76d99-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.025 253542 DEBUG os_vif [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:bc:ec,bridge_name='br-int',has_traffic_filtering=True,id=21a76d99-06f7-421c-a4ca-766984c20ab4,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21a76d99-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.026 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.026 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21a76d99-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.028 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.029 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.030 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.031 253542 INFO os_vif [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:bc:ec,bridge_name='br-int',has_traffic_filtering=True,id=21a76d99-06f7-421c-a4ca-766984c20ab4,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21a76d99-06')
Nov 25 08:39:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:42.033 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92e26514-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:42.034 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:42.035 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92e26514-50, col_values=(('external_ids', {'iface-id': '185991f0-acab-400e-baff-76794035d44a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:42.039 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.461 253542 INFO nova.virt.libvirt.driver [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Deleting instance files /var/lib/nova/instances/3b6a2122-85ba-42b9-9eed-7d58e10b9b98_del
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.462 253542 INFO nova.virt.libvirt.driver [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Deletion of /var/lib/nova/instances/3b6a2122-85ba-42b9-9eed-7d58e10b9b98_del complete
Nov 25 08:39:42 compute-0 ceph-mon[75015]: pgmap v1700: 321 pgs: 321 active+clean; 386 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.4 MiB/s wr, 184 op/s
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.479 253542 DEBUG nova.compute.manager [req-0bb916aa-9a40-4b6f-beb9-5a784dc09543 req-325b0372-f3da-40f9-a5cd-ecd5856b383b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Received event network-vif-unplugged-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.480 253542 DEBUG oslo_concurrency.lockutils [req-0bb916aa-9a40-4b6f-beb9-5a784dc09543 req-325b0372-f3da-40f9-a5cd-ecd5856b383b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.480 253542 DEBUG oslo_concurrency.lockutils [req-0bb916aa-9a40-4b6f-beb9-5a784dc09543 req-325b0372-f3da-40f9-a5cd-ecd5856b383b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.480 253542 DEBUG oslo_concurrency.lockutils [req-0bb916aa-9a40-4b6f-beb9-5a784dc09543 req-325b0372-f3da-40f9-a5cd-ecd5856b383b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.481 253542 DEBUG nova.compute.manager [req-0bb916aa-9a40-4b6f-beb9-5a784dc09543 req-325b0372-f3da-40f9-a5cd-ecd5856b383b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] No waiting events found dispatching network-vif-unplugged-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.481 253542 DEBUG nova.compute.manager [req-0bb916aa-9a40-4b6f-beb9-5a784dc09543 req-325b0372-f3da-40f9-a5cd-ecd5856b383b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Received event network-vif-unplugged-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.481 253542 DEBUG nova.compute.manager [req-0bb916aa-9a40-4b6f-beb9-5a784dc09543 req-325b0372-f3da-40f9-a5cd-ecd5856b383b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Received event network-vif-plugged-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.481 253542 DEBUG oslo_concurrency.lockutils [req-0bb916aa-9a40-4b6f-beb9-5a784dc09543 req-325b0372-f3da-40f9-a5cd-ecd5856b383b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.482 253542 DEBUG oslo_concurrency.lockutils [req-0bb916aa-9a40-4b6f-beb9-5a784dc09543 req-325b0372-f3da-40f9-a5cd-ecd5856b383b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.482 253542 DEBUG oslo_concurrency.lockutils [req-0bb916aa-9a40-4b6f-beb9-5a784dc09543 req-325b0372-f3da-40f9-a5cd-ecd5856b383b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.482 253542 DEBUG nova.compute.manager [req-0bb916aa-9a40-4b6f-beb9-5a784dc09543 req-325b0372-f3da-40f9-a5cd-ecd5856b383b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] No waiting events found dispatching network-vif-plugged-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.483 253542 WARNING nova.compute.manager [req-0bb916aa-9a40-4b6f-beb9-5a784dc09543 req-325b0372-f3da-40f9-a5cd-ecd5856b383b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Received unexpected event network-vif-plugged-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 for instance with vm_state active and task_state deleting.
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.580 253542 DEBUG nova.network.neutron [req-b7defbf3-bb3b-4c26-add0-a13bc58e5dfc req-8e22f057-bd3d-4ad3-a20a-a08d035fc349 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Updated VIF entry in instance network info cache for port d0c0b3a1-42ce-4f0a-bb99-20a4472f9626. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.581 253542 DEBUG nova.network.neutron [req-b7defbf3-bb3b-4c26-add0-a13bc58e5dfc req-8e22f057-bd3d-4ad3-a20a-a08d035fc349 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Updating instance_info_cache with network_info: [{"id": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "address": "fa:16:3e:6c:d4:87", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c0b3a1-42", "ovs_interfaceid": "d0c0b3a1-42ce-4f0a-bb99-20a4472f9626", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.631 253542 DEBUG oslo_concurrency.lockutils [req-b7defbf3-bb3b-4c26-add0-a13bc58e5dfc req-8e22f057-bd3d-4ad3-a20a-a08d035fc349 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-508a1bcc-5cc4-480e-b329-b0b02cf785d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.642 253542 INFO nova.compute.manager [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Took 0.88 seconds to destroy the instance on the hypervisor.
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.642 253542 DEBUG oslo.service.loopingcall [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.643 253542 DEBUG nova.compute.manager [-] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:39:42 compute-0 nova_compute[253538]: 2025-11-25 08:39:42.643 253542 DEBUG nova.network.neutron [-] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:39:43 compute-0 nova_compute[253538]: 2025-11-25 08:39:43.060 253542 DEBUG nova.network.neutron [-] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:43 compute-0 nova_compute[253538]: 2025-11-25 08:39:43.088 253542 INFO nova.compute.manager [-] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Took 1.32 seconds to deallocate network for instance.
Nov 25 08:39:43 compute-0 nova_compute[253538]: 2025-11-25 08:39:43.154 253542 DEBUG oslo_concurrency.lockutils [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:43 compute-0 nova_compute[253538]: 2025-11-25 08:39:43.155 253542 DEBUG oslo_concurrency.lockutils [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:43 compute-0 nova_compute[253538]: 2025-11-25 08:39:43.305 253542 DEBUG oslo_concurrency.processutils [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 347 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 4.9 MiB/s wr, 299 op/s
Nov 25 08:39:43 compute-0 nova_compute[253538]: 2025-11-25 08:39:43.421 253542 DEBUG nova.network.neutron [-] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:43 compute-0 nova_compute[253538]: 2025-11-25 08:39:43.434 253542 INFO nova.compute.manager [-] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Took 0.79 seconds to deallocate network for instance.
Nov 25 08:39:43 compute-0 nova_compute[253538]: 2025-11-25 08:39:43.476 253542 DEBUG oslo_concurrency.lockutils [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:39:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3025877931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:43 compute-0 nova_compute[253538]: 2025-11-25 08:39:43.765 253542 DEBUG oslo_concurrency.processutils [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:43 compute-0 nova_compute[253538]: 2025-11-25 08:39:43.772 253542 DEBUG nova.compute.provider_tree [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:39:43 compute-0 nova_compute[253538]: 2025-11-25 08:39:43.789 253542 DEBUG nova.scheduler.client.report [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:39:43 compute-0 nova_compute[253538]: 2025-11-25 08:39:43.813 253542 DEBUG oslo_concurrency.lockutils [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:43 compute-0 nova_compute[253538]: 2025-11-25 08:39:43.817 253542 DEBUG oslo_concurrency.lockutils [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:43 compute-0 nova_compute[253538]: 2025-11-25 08:39:43.844 253542 INFO nova.scheduler.client.report [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Deleted allocations for instance 508a1bcc-5cc4-480e-b329-b0b02cf785d2
Nov 25 08:39:43 compute-0 nova_compute[253538]: 2025-11-25 08:39:43.907 253542 DEBUG oslo_concurrency.lockutils [None req-43d13fa3-7919-45d6-becc-fb4bfcafe522 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "508a1bcc-5cc4-480e-b329-b0b02cf785d2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:43 compute-0 nova_compute[253538]: 2025-11-25 08:39:43.924 253542 DEBUG oslo_concurrency.processutils [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.039 253542 DEBUG oslo_concurrency.lockutils [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Acquiring lock "685ce923-4b91-41a7-9a13-d62077b95839" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.040 253542 DEBUG oslo_concurrency.lockutils [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lock "685ce923-4b91-41a7-9a13-d62077b95839" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.040 253542 DEBUG oslo_concurrency.lockutils [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Acquiring lock "685ce923-4b91-41a7-9a13-d62077b95839-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.040 253542 DEBUG oslo_concurrency.lockutils [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lock "685ce923-4b91-41a7-9a13-d62077b95839-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.041 253542 DEBUG oslo_concurrency.lockutils [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lock "685ce923-4b91-41a7-9a13-d62077b95839-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.042 253542 INFO nova.compute.manager [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Terminating instance
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.043 253542 DEBUG nova.compute.manager [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:39:44 compute-0 kernel: tap482cdb88-12 (unregistering): left promiscuous mode
Nov 25 08:39:44 compute-0 NetworkManager[48915]: <info>  [1764059984.0799] device (tap482cdb88-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:39:44 compute-0 ovn_controller[152859]: 2025-11-25T08:39:44Z|00895|binding|INFO|Releasing lport 482cdb88-12ff-42dd-8670-288f0f655f0a from this chassis (sb_readonly=0)
Nov 25 08:39:44 compute-0 ovn_controller[152859]: 2025-11-25T08:39:44Z|00896|binding|INFO|Setting lport 482cdb88-12ff-42dd-8670-288f0f655f0a down in Southbound
Nov 25 08:39:44 compute-0 ovn_controller[152859]: 2025-11-25T08:39:44Z|00897|binding|INFO|Removing iface tap482cdb88-12 ovn-installed in OVS
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.087 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.094 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:b2:5f 10.100.0.10'], port_security=['fa:16:3e:43:b2:5f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '685ce923-4b91-41a7-9a13-d62077b95839', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d80a18f6-ba96-424a-bd3b-8ee30aa40e91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '666e694f009b457d9a9432a920faa14b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5681076e-4aa8-4a43-a63c-4a387621e593', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f93cacc-b418-40d0-8b8c-583dba06cf3c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=482cdb88-12ff-42dd-8670-288f0f655f0a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.095 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 482cdb88-12ff-42dd-8670-288f0f655f0a in datapath d80a18f6-ba96-424a-bd3b-8ee30aa40e91 unbound from our chassis
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.096 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d80a18f6-ba96-424a-bd3b-8ee30aa40e91, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.097 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7f795404-42fd-49d3-a3f1-b583549c3acc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.097 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91 namespace which is not needed anymore
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.107 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:44 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Nov 25 08:39:44 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005c.scope: Consumed 6.082s CPU time.
Nov 25 08:39:44 compute-0 systemd-machined[215790]: Machine qemu-113-instance-0000005c terminated.
Nov 25 08:39:44 compute-0 neutron-haproxy-ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91[341650]: [NOTICE]   (341654) : haproxy version is 2.8.14-c23fe91
Nov 25 08:39:44 compute-0 neutron-haproxy-ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91[341650]: [NOTICE]   (341654) : path to executable is /usr/sbin/haproxy
Nov 25 08:39:44 compute-0 neutron-haproxy-ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91[341650]: [WARNING]  (341654) : Exiting Master process...
Nov 25 08:39:44 compute-0 neutron-haproxy-ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91[341650]: [WARNING]  (341654) : Exiting Master process...
Nov 25 08:39:44 compute-0 neutron-haproxy-ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91[341650]: [ALERT]    (341654) : Current worker (341656) exited with code 143 (Terminated)
Nov 25 08:39:44 compute-0 neutron-haproxy-ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91[341650]: [WARNING]  (341654) : All workers exited. Exiting... (0)
Nov 25 08:39:44 compute-0 systemd[1]: libpod-c16e2ba1375d7f6bc0237721bafc5135d81c36e53feb07063f53dbf8bdbea6e5.scope: Deactivated successfully.
Nov 25 08:39:44 compute-0 conmon[341650]: conmon c16e2ba1375d7f6bc023 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c16e2ba1375d7f6bc0237721bafc5135d81c36e53feb07063f53dbf8bdbea6e5.scope/container/memory.events
Nov 25 08:39:44 compute-0 podman[341814]: 2025-11-25 08:39:44.246914514 +0000 UTC m=+0.044866623 container died c16e2ba1375d7f6bc0237721bafc5135d81c36e53feb07063f53dbf8bdbea6e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.278 253542 INFO nova.virt.libvirt.driver [-] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Instance destroyed successfully.
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.279 253542 DEBUG nova.objects.instance [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lazy-loading 'resources' on Instance uuid 685ce923-4b91-41a7-9a13-d62077b95839 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c16e2ba1375d7f6bc0237721bafc5135d81c36e53feb07063f53dbf8bdbea6e5-userdata-shm.mount: Deactivated successfully.
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.305 253542 DEBUG nova.virt.libvirt.vif [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:39:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1394984162',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1394984162',id=92,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:39:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='666e694f009b457d9a9432a920faa14b',ramdisk_id='',reservation_id='r-ra9mh12f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-1488975917',owner_user_name='tempest-ServerTagsTestJSON-1488975917-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:39:38Z,user_data=None,user_id='df942547d2cf4befb2c6041e9912c52b',uuid=685ce923-4b91-41a7-9a13-d62077b95839,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "482cdb88-12ff-42dd-8670-288f0f655f0a", "address": "fa:16:3e:43:b2:5f", "network": {"id": "d80a18f6-ba96-424a-bd3b-8ee30aa40e91", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-828174901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666e694f009b457d9a9432a920faa14b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482cdb88-12", "ovs_interfaceid": "482cdb88-12ff-42dd-8670-288f0f655f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.310 253542 DEBUG nova.network.os_vif_util [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Converting VIF {"id": "482cdb88-12ff-42dd-8670-288f0f655f0a", "address": "fa:16:3e:43:b2:5f", "network": {"id": "d80a18f6-ba96-424a-bd3b-8ee30aa40e91", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-828174901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "666e694f009b457d9a9432a920faa14b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap482cdb88-12", "ovs_interfaceid": "482cdb88-12ff-42dd-8670-288f0f655f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.311 253542 DEBUG nova.network.os_vif_util [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:b2:5f,bridge_name='br-int',has_traffic_filtering=True,id=482cdb88-12ff-42dd-8670-288f0f655f0a,network=Network(d80a18f6-ba96-424a-bd3b-8ee30aa40e91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482cdb88-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.311 253542 DEBUG os_vif [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:b2:5f,bridge_name='br-int',has_traffic_filtering=True,id=482cdb88-12ff-42dd-8670-288f0f655f0a,network=Network(d80a18f6-ba96-424a-bd3b-8ee30aa40e91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482cdb88-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:39:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4121d83af4069342deecb5a7256a7b0a8eea95ecdac247ef8763d8c304ee9a6b-merged.mount: Deactivated successfully.
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.315 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.315 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap482cdb88-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.317 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.318 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.323 253542 INFO os_vif [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:b2:5f,bridge_name='br-int',has_traffic_filtering=True,id=482cdb88-12ff-42dd-8670-288f0f655f0a,network=Network(d80a18f6-ba96-424a-bd3b-8ee30aa40e91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap482cdb88-12')
Nov 25 08:39:44 compute-0 podman[341814]: 2025-11-25 08:39:44.335735564 +0000 UTC m=+0.133687673 container cleanup c16e2ba1375d7f6bc0237721bafc5135d81c36e53feb07063f53dbf8bdbea6e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:39:44 compute-0 systemd[1]: libpod-conmon-c16e2ba1375d7f6bc0237721bafc5135d81c36e53feb07063f53dbf8bdbea6e5.scope: Deactivated successfully.
Nov 25 08:39:44 compute-0 podman[341867]: 2025-11-25 08:39:44.422150939 +0000 UTC m=+0.064284693 container remove c16e2ba1375d7f6bc0237721bafc5135d81c36e53feb07063f53dbf8bdbea6e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.428 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e84b8823-0aea-4563-abbe-3b5d454775ed]: (4, ('Tue Nov 25 08:39:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91 (c16e2ba1375d7f6bc0237721bafc5135d81c36e53feb07063f53dbf8bdbea6e5)\nc16e2ba1375d7f6bc0237721bafc5135d81c36e53feb07063f53dbf8bdbea6e5\nTue Nov 25 08:39:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91 (c16e2ba1375d7f6bc0237721bafc5135d81c36e53feb07063f53dbf8bdbea6e5)\nc16e2ba1375d7f6bc0237721bafc5135d81c36e53feb07063f53dbf8bdbea6e5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:39:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1084073971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.431 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[70114654-19d0-48b0-9ece-aeafa7bc56c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.432 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd80a18f6-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.433 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:44 compute-0 kernel: tapd80a18f6-b0: left promiscuous mode
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.449 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.453 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.453 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fa25b054-889c-41bf-8d57-cec7d20ddb69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.474 253542 DEBUG oslo_concurrency.processutils [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.477 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[27e249e1-17ce-45cd-9575-a2cca06a6afa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.480 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae5f150-f250-4242-9f3f-2f55ded179d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.480 253542 DEBUG nova.compute.provider_tree [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.495 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[19a4b0c5-2414-4bff-801f-e5ec7075d786]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533261, 'reachable_time': 44222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341887, 'error': None, 'target': 'ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.498 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d80a18f6-ba96-424a-bd3b-8ee30aa40e91 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.498 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[1e667ed6-7671-421f-a1d9-cdce0a4c3989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.498 253542 DEBUG nova.scheduler.client.report [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:39:44 compute-0 systemd[1]: run-netns-ovnmeta\x2dd80a18f6\x2dba96\x2d424a\x2dbd3b\x2d8ee30aa40e91.mount: Deactivated successfully.
Nov 25 08:39:44 compute-0 ceph-mon[75015]: pgmap v1701: 321 pgs: 321 active+clean; 347 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 4.9 MiB/s wr, 299 op/s
Nov 25 08:39:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3025877931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1084073971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.521 253542 DEBUG oslo_concurrency.lockutils [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.541 253542 DEBUG nova.compute.manager [req-e110e9e3-1bd4-479d-9002-073eebdd3967 req-bc03cd4d-f75b-4c59-852f-da9721af728a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Received event network-vif-unplugged-482cdb88-12ff-42dd-8670-288f0f655f0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.541 253542 DEBUG oslo_concurrency.lockutils [req-e110e9e3-1bd4-479d-9002-073eebdd3967 req-bc03cd4d-f75b-4c59-852f-da9721af728a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "685ce923-4b91-41a7-9a13-d62077b95839-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.541 253542 DEBUG oslo_concurrency.lockutils [req-e110e9e3-1bd4-479d-9002-073eebdd3967 req-bc03cd4d-f75b-4c59-852f-da9721af728a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "685ce923-4b91-41a7-9a13-d62077b95839-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.542 253542 DEBUG oslo_concurrency.lockutils [req-e110e9e3-1bd4-479d-9002-073eebdd3967 req-bc03cd4d-f75b-4c59-852f-da9721af728a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "685ce923-4b91-41a7-9a13-d62077b95839-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.542 253542 DEBUG nova.compute.manager [req-e110e9e3-1bd4-479d-9002-073eebdd3967 req-bc03cd4d-f75b-4c59-852f-da9721af728a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] No waiting events found dispatching network-vif-unplugged-482cdb88-12ff-42dd-8670-288f0f655f0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.542 253542 DEBUG nova.compute.manager [req-e110e9e3-1bd4-479d-9002-073eebdd3967 req-bc03cd4d-f75b-4c59-852f-da9721af728a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Received event network-vif-unplugged-482cdb88-12ff-42dd-8670-288f0f655f0a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.561 253542 INFO nova.scheduler.client.report [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Deleted allocations for instance 3b6a2122-85ba-42b9-9eed-7d58e10b9b98
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.632 253542 DEBUG oslo_concurrency.lockutils [None req-5d5d009c-b87e-444b-b92c-ee60c1f6ed5c fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.713 253542 DEBUG oslo_concurrency.lockutils [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "40912950-fedc-405c-bc49-c4a757a422dc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.714 253542 DEBUG oslo_concurrency.lockutils [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.715 253542 DEBUG oslo_concurrency.lockutils [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "40912950-fedc-405c-bc49-c4a757a422dc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.715 253542 DEBUG oslo_concurrency.lockutils [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.716 253542 DEBUG oslo_concurrency.lockutils [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.718 253542 INFO nova.compute.manager [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Terminating instance
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.719 253542 DEBUG nova.compute.manager [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.788 253542 DEBUG nova.compute.manager [req-590d69bc-c470-408f-b3e0-a196e8b6f2ed req-bf1d5da3-497a-4c09-881e-85c0394e9581 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Received event network-vif-unplugged-21a76d99-06f7-421c-a4ca-766984c20ab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.790 253542 DEBUG oslo_concurrency.lockutils [req-590d69bc-c470-408f-b3e0-a196e8b6f2ed req-bf1d5da3-497a-4c09-881e-85c0394e9581 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.790 253542 DEBUG oslo_concurrency.lockutils [req-590d69bc-c470-408f-b3e0-a196e8b6f2ed req-bf1d5da3-497a-4c09-881e-85c0394e9581 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.790 253542 DEBUG oslo_concurrency.lockutils [req-590d69bc-c470-408f-b3e0-a196e8b6f2ed req-bf1d5da3-497a-4c09-881e-85c0394e9581 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.791 253542 DEBUG nova.compute.manager [req-590d69bc-c470-408f-b3e0-a196e8b6f2ed req-bf1d5da3-497a-4c09-881e-85c0394e9581 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] No waiting events found dispatching network-vif-unplugged-21a76d99-06f7-421c-a4ca-766984c20ab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.791 253542 WARNING nova.compute.manager [req-590d69bc-c470-408f-b3e0-a196e8b6f2ed req-bf1d5da3-497a-4c09-881e-85c0394e9581 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Received unexpected event network-vif-unplugged-21a76d99-06f7-421c-a4ca-766984c20ab4 for instance with vm_state deleted and task_state None.
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.791 253542 DEBUG nova.compute.manager [req-590d69bc-c470-408f-b3e0-a196e8b6f2ed req-bf1d5da3-497a-4c09-881e-85c0394e9581 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Received event network-vif-plugged-21a76d99-06f7-421c-a4ca-766984c20ab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.791 253542 DEBUG oslo_concurrency.lockutils [req-590d69bc-c470-408f-b3e0-a196e8b6f2ed req-bf1d5da3-497a-4c09-881e-85c0394e9581 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.792 253542 DEBUG oslo_concurrency.lockutils [req-590d69bc-c470-408f-b3e0-a196e8b6f2ed req-bf1d5da3-497a-4c09-881e-85c0394e9581 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.792 253542 DEBUG oslo_concurrency.lockutils [req-590d69bc-c470-408f-b3e0-a196e8b6f2ed req-bf1d5da3-497a-4c09-881e-85c0394e9581 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3b6a2122-85ba-42b9-9eed-7d58e10b9b98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.792 253542 DEBUG nova.compute.manager [req-590d69bc-c470-408f-b3e0-a196e8b6f2ed req-bf1d5da3-497a-4c09-881e-85c0394e9581 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] No waiting events found dispatching network-vif-plugged-21a76d99-06f7-421c-a4ca-766984c20ab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.792 253542 WARNING nova.compute.manager [req-590d69bc-c470-408f-b3e0-a196e8b6f2ed req-bf1d5da3-497a-4c09-881e-85c0394e9581 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Received unexpected event network-vif-plugged-21a76d99-06f7-421c-a4ca-766984c20ab4 for instance with vm_state deleted and task_state None.
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.792 253542 DEBUG nova.compute.manager [req-590d69bc-c470-408f-b3e0-a196e8b6f2ed req-bf1d5da3-497a-4c09-881e-85c0394e9581 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Received event network-vif-deleted-d0c0b3a1-42ce-4f0a-bb99-20a4472f9626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.793 253542 DEBUG nova.compute.manager [req-590d69bc-c470-408f-b3e0-a196e8b6f2ed req-bf1d5da3-497a-4c09-881e-85c0394e9581 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Received event network-vif-deleted-21a76d99-06f7-421c-a4ca-766984c20ab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:44 compute-0 kernel: tap66cd8b3d-6d (unregistering): left promiscuous mode
Nov 25 08:39:44 compute-0 NetworkManager[48915]: <info>  [1764059984.8284] device (tap66cd8b3d-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:39:44 compute-0 ovn_controller[152859]: 2025-11-25T08:39:44Z|00898|binding|INFO|Releasing lport 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 from this chassis (sb_readonly=0)
Nov 25 08:39:44 compute-0 ovn_controller[152859]: 2025-11-25T08:39:44Z|00899|binding|INFO|Setting lport 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 down in Southbound
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.836 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:44 compute-0 ovn_controller[152859]: 2025-11-25T08:39:44Z|00900|binding|INFO|Removing iface tap66cd8b3d-6d ovn-installed in OVS
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.840 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.844 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:0e:c1 10.100.0.6'], port_security=['fa:16:3e:ac:0e:c1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '40912950-fedc-405c-bc49-c4a757a422dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b676104-a53a-419a-a348-631c409e45c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6851917992b149818e8b44146c66bfc3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0db09afa-021a-4418-8ad3-d5c78354b9bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=062facaf-eaf1-4c53-a009-db8e8d32ad6e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=66cd8b3d-6d9a-4e8c-8487-6f32b15550c2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.845 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 in datapath 2b676104-a53a-419a-a348-631c409e45c0 unbound from our chassis
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.847 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b676104-a53a-419a-a348-631c409e45c0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.848 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[191447f4-ffd8-4fb3-963c-26c75b0eae3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.848 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b676104-a53a-419a-a348-631c409e45c0 namespace which is not needed anymore
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.880 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:44 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000051.scope: Deactivated successfully.
Nov 25 08:39:44 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000051.scope: Consumed 17.782s CPU time.
Nov 25 08:39:44 compute-0 systemd-machined[215790]: Machine qemu-99-instance-00000051 terminated.
Nov 25 08:39:44 compute-0 kernel: tap66cd8b3d-6d: entered promiscuous mode
Nov 25 08:39:44 compute-0 kernel: tap66cd8b3d-6d (unregistering): left promiscuous mode
Nov 25 08:39:44 compute-0 NetworkManager[48915]: <info>  [1764059984.9476] manager: (tap66cd8b3d-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/373)
Nov 25 08:39:44 compute-0 ovn_controller[152859]: 2025-11-25T08:39:44Z|00901|binding|INFO|Claiming lport 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 for this chassis.
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.984 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:44 compute-0 ovn_controller[152859]: 2025-11-25T08:39:44Z|00902|binding|INFO|66cd8b3d-6d9a-4e8c-8487-6f32b15550c2: Claiming fa:16:3e:ac:0e:c1 10.100.0.6
Nov 25 08:39:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:44.995 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:0e:c1 10.100.0.6'], port_security=['fa:16:3e:ac:0e:c1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '40912950-fedc-405c-bc49-c4a757a422dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b676104-a53a-419a-a348-631c409e45c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6851917992b149818e8b44146c66bfc3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0db09afa-021a-4418-8ad3-d5c78354b9bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=062facaf-eaf1-4c53-a009-db8e8d32ad6e, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=66cd8b3d-6d9a-4e8c-8487-6f32b15550c2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.994 253542 INFO nova.virt.libvirt.driver [-] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Instance destroyed successfully.
Nov 25 08:39:44 compute-0 nova_compute[253538]: 2025-11-25 08:39:44.995 253542 DEBUG nova.objects.instance [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lazy-loading 'resources' on Instance uuid 40912950-fedc-405c-bc49-c4a757a422dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:45 compute-0 ovn_controller[152859]: 2025-11-25T08:39:45Z|00903|binding|INFO|Setting lport 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 ovn-installed in OVS
Nov 25 08:39:45 compute-0 ovn_controller[152859]: 2025-11-25T08:39:45Z|00904|binding|INFO|Setting lport 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 up in Southbound
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.005 253542 DEBUG nova.virt.libvirt.vif [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:37:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1323952638',display_name='tempest-ServerActionsTestOtherA-server-1323952638',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1323952638',id=81,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMo29J9J/YlqPHlK0QOuxd9u7qavZRLETC6oYiP9ZRxH2YibFLdXMSToi/FhBlCSIfelYckeMDdyi6TGFiwYGSfSqEdkRnVTrWT65qSuA8Lvnahu6Qda7fogQYvU40lxKA==',key_name='tempest-keypair-1723325490',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:37:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6851917992b149818e8b44146c66bfc3',ramdisk_id='',reservation_id='r-o8s25783',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-678529119',owner_user_name='tempest-ServerActionsTestOtherA-678529119-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:37:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='24fa34332e6f4b628514969bbf76e94b',uuid=40912950-fedc-405c-bc49-c4a757a422dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "address": "fa:16:3e:ac:0e:c1", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66cd8b3d-6d", "ovs_interfaceid": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.006 253542 DEBUG nova.network.os_vif_util [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converting VIF {"id": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "address": "fa:16:3e:ac:0e:c1", "network": {"id": "2b676104-a53a-419a-a348-631c409e45c0", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2027283710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6851917992b149818e8b44146c66bfc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66cd8b3d-6d", "ovs_interfaceid": "66cd8b3d-6d9a-4e8c-8487-6f32b15550c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.006 253542 DEBUG nova.network.os_vif_util [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:0e:c1,bridge_name='br-int',has_traffic_filtering=True,id=66cd8b3d-6d9a-4e8c-8487-6f32b15550c2,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66cd8b3d-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.007 253542 DEBUG os_vif [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:0e:c1,bridge_name='br-int',has_traffic_filtering=True,id=66cd8b3d-6d9a-4e8c-8487-6f32b15550c2,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66cd8b3d-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.008 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.008 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66cd8b3d-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.010 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:45 compute-0 ovn_controller[152859]: 2025-11-25T08:39:45Z|00905|binding|INFO|Releasing lport 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 from this chassis (sb_readonly=1)
Nov 25 08:39:45 compute-0 ovn_controller[152859]: 2025-11-25T08:39:45Z|00906|if_status|INFO|Dropped 5 log messages in last 56 seconds (most recently, 56 seconds ago) due to excessive rate
Nov 25 08:39:45 compute-0 ovn_controller[152859]: 2025-11-25T08:39:45Z|00907|if_status|INFO|Not setting lport 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 down as sb is readonly
Nov 25 08:39:45 compute-0 ovn_controller[152859]: 2025-11-25T08:39:45Z|00908|binding|INFO|Releasing lport 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 from this chassis (sb_readonly=0)
Nov 25 08:39:45 compute-0 ovn_controller[152859]: 2025-11-25T08:39:45Z|00909|binding|INFO|Setting lport 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 down in Southbound
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.013 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.014 253542 INFO os_vif [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:0e:c1,bridge_name='br-int',has_traffic_filtering=True,id=66cd8b3d-6d9a-4e8c-8487-6f32b15550c2,network=Network(2b676104-a53a-419a-a348-631c409e45c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66cd8b3d-6d')
Nov 25 08:39:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:45.019 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:0e:c1 10.100.0.6'], port_security=['fa:16:3e:ac:0e:c1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '40912950-fedc-405c-bc49-c4a757a422dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b676104-a53a-419a-a348-631c409e45c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6851917992b149818e8b44146c66bfc3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0db09afa-021a-4418-8ad3-d5c78354b9bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=062facaf-eaf1-4c53-a009-db8e8d32ad6e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=66cd8b3d-6d9a-4e8c-8487-6f32b15550c2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.034 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:45 compute-0 neutron-haproxy-ovnmeta-2b676104-a53a-419a-a348-631c409e45c0[333848]: [NOTICE]   (333853) : haproxy version is 2.8.14-c23fe91
Nov 25 08:39:45 compute-0 neutron-haproxy-ovnmeta-2b676104-a53a-419a-a348-631c409e45c0[333848]: [NOTICE]   (333853) : path to executable is /usr/sbin/haproxy
Nov 25 08:39:45 compute-0 neutron-haproxy-ovnmeta-2b676104-a53a-419a-a348-631c409e45c0[333848]: [WARNING]  (333853) : Exiting Master process...
Nov 25 08:39:45 compute-0 neutron-haproxy-ovnmeta-2b676104-a53a-419a-a348-631c409e45c0[333848]: [ALERT]    (333853) : Current worker (333868) exited with code 143 (Terminated)
Nov 25 08:39:45 compute-0 neutron-haproxy-ovnmeta-2b676104-a53a-419a-a348-631c409e45c0[333848]: [WARNING]  (333853) : All workers exited. Exiting... (0)
Nov 25 08:39:45 compute-0 systemd[1]: libpod-2fd87ee2c4824a219687f71c5c0fc46482cbf262bbef3a4d2bb38cf9edb05a7d.scope: Deactivated successfully.
Nov 25 08:39:45 compute-0 podman[341915]: 2025-11-25 08:39:45.09333757 +0000 UTC m=+0.076267419 container died 2fd87ee2c4824a219687f71c5c0fc46482cbf262bbef3a4d2bb38cf9edb05a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b676104-a53a-419a-a348-631c409e45c0, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:39:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d56e410089cfa091c4670318839e74a67b6225fde95a89bb4287cf8649ba80a8-merged.mount: Deactivated successfully.
Nov 25 08:39:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2fd87ee2c4824a219687f71c5c0fc46482cbf262bbef3a4d2bb38cf9edb05a7d-userdata-shm.mount: Deactivated successfully.
Nov 25 08:39:45 compute-0 podman[341915]: 2025-11-25 08:39:45.182638104 +0000 UTC m=+0.165567943 container cleanup 2fd87ee2c4824a219687f71c5c0fc46482cbf262bbef3a4d2bb38cf9edb05a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b676104-a53a-419a-a348-631c409e45c0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 08:39:45 compute-0 systemd[1]: libpod-conmon-2fd87ee2c4824a219687f71c5c0fc46482cbf262bbef3a4d2bb38cf9edb05a7d.scope: Deactivated successfully.
Nov 25 08:39:45 compute-0 podman[341958]: 2025-11-25 08:39:45.252795846 +0000 UTC m=+0.079443177 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 08:39:45 compute-0 podman[341968]: 2025-11-25 08:39:45.261125113 +0000 UTC m=+0.055074042 container remove 2fd87ee2c4824a219687f71c5c0fc46482cbf262bbef3a4d2bb38cf9edb05a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b676104-a53a-419a-a348-631c409e45c0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:39:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:45.268 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5532e756-33ed-4468-8078-a531ae319ae6]: (4, ('Tue Nov 25 08:39:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b676104-a53a-419a-a348-631c409e45c0 (2fd87ee2c4824a219687f71c5c0fc46482cbf262bbef3a4d2bb38cf9edb05a7d)\n2fd87ee2c4824a219687f71c5c0fc46482cbf262bbef3a4d2bb38cf9edb05a7d\nTue Nov 25 08:39:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b676104-a53a-419a-a348-631c409e45c0 (2fd87ee2c4824a219687f71c5c0fc46482cbf262bbef3a4d2bb38cf9edb05a7d)\n2fd87ee2c4824a219687f71c5c0fc46482cbf262bbef3a4d2bb38cf9edb05a7d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:45.270 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6d45de84-fa12-4691-a0cd-6f1e83e15303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:45.271 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b676104-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.273 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:45 compute-0 kernel: tap2b676104-a0: left promiscuous mode
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.292 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:45.295 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[deed5109-69e8-4498-9539-68ad0db6ec04]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:45.313 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d486bf0-8ebb-4477-a14d-913d83e97092]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:45.316 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd9dcf8-c722-44f4-8a7f-03e2dedde7aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.337 253542 INFO nova.virt.libvirt.driver [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Deleting instance files /var/lib/nova/instances/685ce923-4b91-41a7-9a13-d62077b95839_del
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.338 253542 INFO nova.virt.libvirt.driver [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Deletion of /var/lib/nova/instances/685ce923-4b91-41a7-9a13-d62077b95839_del complete
Nov 25 08:39:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:45.337 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ffae5cb2-6c80-4390-8414-437de2f8149c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522989, 'reachable_time': 41923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341991, 'error': None, 'target': 'ovnmeta-2b676104-a53a-419a-a348-631c409e45c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d2b676104\x2da53a\x2d419a\x2da348\x2d631c409e45c0.mount: Deactivated successfully.
Nov 25 08:39:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:45.340 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b676104-a53a-419a-a348-631c409e45c0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:39:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:45.341 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[6b75ab1e-694e-4833-99f5-31c380f0a042]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:45.341 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 in datapath 2b676104-a53a-419a-a348-631c409e45c0 unbound from our chassis
Nov 25 08:39:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:45.343 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b676104-a53a-419a-a348-631c409e45c0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:39:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:45.343 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[029b000d-f9ca-4a66-a3b1-4577ac165e47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:45.344 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 in datapath 2b676104-a53a-419a-a348-631c409e45c0 unbound from our chassis
Nov 25 08:39:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:45.345 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b676104-a53a-419a-a348-631c409e45c0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:39:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:45.346 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[95096429-c0c3-4ee2-aa50-5d9e8b1e8d68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 310 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.4 MiB/s wr, 288 op/s
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.407 253542 INFO nova.compute.manager [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Took 1.36 seconds to destroy the instance on the hypervisor.
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.408 253542 DEBUG oslo.service.loopingcall [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.409 253542 DEBUG nova.compute.manager [-] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:39:45 compute-0 nova_compute[253538]: 2025-11-25 08:39:45.409 253542 DEBUG nova.network.neutron [-] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:39:46 compute-0 nova_compute[253538]: 2025-11-25 08:39:46.496 253542 INFO nova.virt.libvirt.driver [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Deleting instance files /var/lib/nova/instances/40912950-fedc-405c-bc49-c4a757a422dc_del
Nov 25 08:39:46 compute-0 nova_compute[253538]: 2025-11-25 08:39:46.496 253542 INFO nova.virt.libvirt.driver [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Deletion of /var/lib/nova/instances/40912950-fedc-405c-bc49-c4a757a422dc_del complete
Nov 25 08:39:46 compute-0 nova_compute[253538]: 2025-11-25 08:39:46.551 253542 INFO nova.compute.manager [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Took 1.83 seconds to destroy the instance on the hypervisor.
Nov 25 08:39:46 compute-0 nova_compute[253538]: 2025-11-25 08:39:46.552 253542 DEBUG oslo.service.loopingcall [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:39:46 compute-0 nova_compute[253538]: 2025-11-25 08:39:46.553 253542 DEBUG nova.compute.manager [-] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:39:46 compute-0 nova_compute[253538]: 2025-11-25 08:39:46.553 253542 DEBUG nova.network.neutron [-] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:39:46 compute-0 ceph-mon[75015]: pgmap v1702: 321 pgs: 321 active+clean; 310 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.4 MiB/s wr, 288 op/s
Nov 25 08:39:46 compute-0 nova_compute[253538]: 2025-11-25 08:39:46.612 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.138 253542 DEBUG nova.network.neutron [-] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.214 253542 INFO nova.compute.manager [-] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Took 1.80 seconds to deallocate network for instance.
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.328 253542 DEBUG oslo_concurrency.lockutils [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.329 253542 DEBUG oslo_concurrency.lockutils [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 233 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.2 MiB/s wr, 301 op/s
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.443 253542 DEBUG oslo_concurrency.processutils [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.499 253542 DEBUG nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Received event network-vif-plugged-482cdb88-12ff-42dd-8670-288f0f655f0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.500 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "685ce923-4b91-41a7-9a13-d62077b95839-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.500 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "685ce923-4b91-41a7-9a13-d62077b95839-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.500 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "685ce923-4b91-41a7-9a13-d62077b95839-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.500 253542 DEBUG nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] No waiting events found dispatching network-vif-plugged-482cdb88-12ff-42dd-8670-288f0f655f0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.501 253542 WARNING nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Received unexpected event network-vif-plugged-482cdb88-12ff-42dd-8670-288f0f655f0a for instance with vm_state deleted and task_state None.
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.501 253542 DEBUG nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received event network-vif-unplugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.501 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "40912950-fedc-405c-bc49-c4a757a422dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.501 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.501 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.502 253542 DEBUG nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] No waiting events found dispatching network-vif-unplugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.502 253542 DEBUG nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received event network-vif-unplugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.502 253542 DEBUG nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received event network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.502 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "40912950-fedc-405c-bc49-c4a757a422dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.502 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.503 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.503 253542 DEBUG nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] No waiting events found dispatching network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.503 253542 WARNING nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received unexpected event network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 for instance with vm_state active and task_state deleting.
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.503 253542 DEBUG nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received event network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.503 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "40912950-fedc-405c-bc49-c4a757a422dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.504 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.504 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.504 253542 DEBUG nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] No waiting events found dispatching network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.504 253542 WARNING nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received unexpected event network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 for instance with vm_state active and task_state deleting.
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.505 253542 DEBUG nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received event network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.505 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "40912950-fedc-405c-bc49-c4a757a422dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.505 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.505 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.505 253542 DEBUG nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] No waiting events found dispatching network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.505 253542 WARNING nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received unexpected event network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 for instance with vm_state active and task_state deleting.
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.506 253542 DEBUG nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received event network-vif-unplugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.506 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "40912950-fedc-405c-bc49-c4a757a422dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.506 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.506 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.506 253542 DEBUG nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] No waiting events found dispatching network-vif-unplugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.507 253542 DEBUG nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received event network-vif-unplugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.507 253542 DEBUG nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received event network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.507 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "40912950-fedc-405c-bc49-c4a757a422dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.507 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.507 253542 DEBUG oslo_concurrency.lockutils [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.508 253542 DEBUG nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] No waiting events found dispatching network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.508 253542 WARNING nova.compute.manager [req-06235893-48e3-4abb-922c-c9f69ee8bf85 req-f81d21bc-f006-47f2-bca3-fe84439238df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received unexpected event network-vif-plugged-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 for instance with vm_state active and task_state deleting.
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.530 253542 DEBUG nova.compute.manager [req-1262fc7e-b242-4890-a693-945616df3911 req-121cd4dc-0889-4780-b9e5-10cac43e4b06 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Received event network-vif-deleted-482cdb88-12ff-42dd-8670-288f0f655f0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:47 compute-0 ceph-mon[75015]: pgmap v1703: 321 pgs: 321 active+clean; 233 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.2 MiB/s wr, 301 op/s
Nov 25 08:39:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:39:47 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/663724771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.935 253542 DEBUG oslo_concurrency.processutils [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.946 253542 DEBUG nova.compute.provider_tree [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.961 253542 DEBUG nova.scheduler.client.report [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:39:47 compute-0 nova_compute[253538]: 2025-11-25 08:39:47.991 253542 DEBUG oslo_concurrency.lockutils [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.020 253542 DEBUG nova.network.neutron [-] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.049 253542 INFO nova.scheduler.client.report [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Deleted allocations for instance 685ce923-4b91-41a7-9a13-d62077b95839
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.060 253542 INFO nova.compute.manager [-] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Took 1.51 seconds to deallocate network for instance.
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.127 253542 DEBUG oslo_concurrency.lockutils [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.127 253542 DEBUG oslo_concurrency.lockutils [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.136 253542 DEBUG oslo_concurrency.lockutils [None req-6e3c24e7-2059-474d-a691-af2f5cf4d43d df942547d2cf4befb2c6041e9912c52b 666e694f009b457d9a9432a920faa14b - - default default] Lock "685ce923-4b91-41a7-9a13-d62077b95839" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.190 253542 DEBUG oslo_concurrency.processutils [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.483 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.484 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.498 253542 DEBUG nova.compute.manager [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.557 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:48 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/663724771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:39:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1641590554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.659 253542 DEBUG oslo_concurrency.processutils [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.665 253542 DEBUG nova.compute.provider_tree [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.685 253542 DEBUG nova.scheduler.client.report [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.701 253542 DEBUG oslo_concurrency.lockutils [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.703 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.710 253542 DEBUG nova.virt.hardware [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.710 253542 INFO nova.compute.claims [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.739 253542 INFO nova.scheduler.client.report [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Deleted allocations for instance 40912950-fedc-405c-bc49-c4a757a422dc
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.809 253542 DEBUG oslo_concurrency.lockutils [None req-f9f8082c-d0fe-422a-b3d3-c5cfc30485e2 24fa34332e6f4b628514969bbf76e94b 6851917992b149818e8b44146c66bfc3 - - default default] Lock "40912950-fedc-405c-bc49-c4a757a422dc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:48 compute-0 nova_compute[253538]: 2025-11-25 08:39:48.860 253542 DEBUG oslo_concurrency.processutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:39:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3525433885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.338 253542 DEBUG oslo_concurrency.processutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.348 253542 DEBUG nova.compute.provider_tree [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.367 253542 DEBUG nova.scheduler.client.report [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:39:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 200 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 41 KiB/s wr, 277 op/s
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.395 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.397 253542 DEBUG nova.compute.manager [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.439 253542 DEBUG nova.compute.manager [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.440 253542 DEBUG nova.network.neutron [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.459 253542 INFO nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.484 253542 DEBUG nova.compute.manager [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.603 253542 DEBUG nova.compute.manager [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.604 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.605 253542 INFO nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Creating image(s)
Nov 25 08:39:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1641590554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3525433885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:39:49 compute-0 ceph-mon[75015]: pgmap v1704: 321 pgs: 321 active+clean; 200 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 41 KiB/s wr, 277 op/s
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.647 253542 DEBUG nova.storage.rbd_utils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.674 253542 DEBUG nova.storage.rbd_utils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.703 253542 DEBUG nova.storage.rbd_utils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.709 253542 DEBUG oslo_concurrency.processutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.762 253542 DEBUG nova.policy [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fdcb005cc49a4dfa82152f2c0817cc94', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b730f086c4b94185afab5e10fa2e8181', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.769 253542 DEBUG nova.compute.manager [req-0baa9355-167b-4f1f-9064-d848fff7f57e req-7684e5b6-002d-47db-8036-bf6f15c01ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Received event network-vif-deleted-66cd8b3d-6d9a-4e8c-8487-6f32b15550c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.809 253542 DEBUG oslo_concurrency.processutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.810 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.811 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.812 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.843 253542 DEBUG nova.storage.rbd_utils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:49 compute-0 nova_compute[253538]: 2025-11-25 08:39:49.848 253542 DEBUG oslo_concurrency.processutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:50 compute-0 nova_compute[253538]: 2025-11-25 08:39:50.061 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:50 compute-0 nova_compute[253538]: 2025-11-25 08:39:50.446 253542 DEBUG oslo_concurrency.processutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:50 compute-0 nova_compute[253538]: 2025-11-25 08:39:50.491 253542 DEBUG nova.network.neutron [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Successfully created port: f5f675c0-8463-49b6-8351-06d13fa2cb29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:39:50 compute-0 nova_compute[253538]: 2025-11-25 08:39:50.547 253542 DEBUG nova.storage.rbd_utils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] resizing rbd image 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:39:50 compute-0 nova_compute[253538]: 2025-11-25 08:39:50.665 253542 DEBUG nova.objects.instance [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'migration_context' on Instance uuid 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:50 compute-0 nova_compute[253538]: 2025-11-25 08:39:50.678 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:39:50 compute-0 nova_compute[253538]: 2025-11-25 08:39:50.679 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Ensure instance console log exists: /var/lib/nova/instances/4d9b26b3-d7f6-44d6-8e83-24e9adb5d994/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:39:50 compute-0 nova_compute[253538]: 2025-11-25 08:39:50.679 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:50 compute-0 nova_compute[253538]: 2025-11-25 08:39:50.680 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:50 compute-0 nova_compute[253538]: 2025-11-25 08:39:50.680 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:50 compute-0 podman[342225]: 2025-11-25 08:39:50.80495538 +0000 UTC m=+0.063072870 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 08:39:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1705: 321 pgs: 321 active+clean; 167 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 18 KiB/s wr, 284 op/s
Nov 25 08:39:51 compute-0 nova_compute[253538]: 2025-11-25 08:39:51.614 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:39:51 compute-0 nova_compute[253538]: 2025-11-25 08:39:51.763 253542 DEBUG nova.network.neutron [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Successfully updated port: f5f675c0-8463-49b6-8351-06d13fa2cb29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:39:51 compute-0 nova_compute[253538]: 2025-11-25 08:39:51.826 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "refresh_cache-4d9b26b3-d7f6-44d6-8e83-24e9adb5d994" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:39:51 compute-0 nova_compute[253538]: 2025-11-25 08:39:51.827 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquired lock "refresh_cache-4d9b26b3-d7f6-44d6-8e83-24e9adb5d994" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:39:51 compute-0 nova_compute[253538]: 2025-11-25 08:39:51.827 253542 DEBUG nova.network.neutron [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:39:52 compute-0 nova_compute[253538]: 2025-11-25 08:39:52.166 253542 DEBUG nova.compute.manager [req-4d9668ad-d458-47eb-b8d9-6ca19ddcdcf3 req-1e5f4fbb-dda6-4804-9a61-dddf9fe17ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Received event network-changed-f5f675c0-8463-49b6-8351-06d13fa2cb29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:52 compute-0 nova_compute[253538]: 2025-11-25 08:39:52.166 253542 DEBUG nova.compute.manager [req-4d9668ad-d458-47eb-b8d9-6ca19ddcdcf3 req-1e5f4fbb-dda6-4804-9a61-dddf9fe17ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Refreshing instance network info cache due to event network-changed-f5f675c0-8463-49b6-8351-06d13fa2cb29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:39:52 compute-0 nova_compute[253538]: 2025-11-25 08:39:52.167 253542 DEBUG oslo_concurrency.lockutils [req-4d9668ad-d458-47eb-b8d9-6ca19ddcdcf3 req-1e5f4fbb-dda6-4804-9a61-dddf9fe17ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-4d9b26b3-d7f6-44d6-8e83-24e9adb5d994" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:39:52 compute-0 nova_compute[253538]: 2025-11-25 08:39:52.242 253542 DEBUG nova.network.neutron [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:39:52 compute-0 ceph-mon[75015]: pgmap v1705: 321 pgs: 321 active+clean; 167 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 18 KiB/s wr, 284 op/s
Nov 25 08:39:52 compute-0 ovn_controller[152859]: 2025-11-25T08:39:52Z|00910|binding|INFO|Releasing lport 185991f0-acab-400e-baff-76794035d44a from this chassis (sb_readonly=0)
Nov 25 08:39:52 compute-0 nova_compute[253538]: 2025-11-25 08:39:52.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:53 compute-0 ovn_controller[152859]: 2025-11-25T08:39:53Z|00911|binding|INFO|Releasing lport 185991f0-acab-400e-baff-76794035d44a from this chassis (sb_readonly=0)
Nov 25 08:39:53 compute-0 podman[342245]: 2025-11-25 08:39:53.030210652 +0000 UTC m=+0.275082128 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 08:39:53 compute-0 nova_compute[253538]: 2025-11-25 08:39:53.031 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:39:53
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'vms', 'default.rgw.log', 'volumes', '.mgr']
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 200 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.5 MiB/s wr, 228 op/s
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:39:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.185 253542 DEBUG nova.network.neutron [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Updating instance_info_cache with network_info: [{"id": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "address": "fa:16:3e:30:e0:e3", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f675c0-84", "ovs_interfaceid": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.213 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Releasing lock "refresh_cache-4d9b26b3-d7f6-44d6-8e83-24e9adb5d994" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.213 253542 DEBUG nova.compute.manager [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Instance network_info: |[{"id": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "address": "fa:16:3e:30:e0:e3", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f675c0-84", "ovs_interfaceid": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.214 253542 DEBUG oslo_concurrency.lockutils [req-4d9668ad-d458-47eb-b8d9-6ca19ddcdcf3 req-1e5f4fbb-dda6-4804-9a61-dddf9fe17ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-4d9b26b3-d7f6-44d6-8e83-24e9adb5d994" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.215 253542 DEBUG nova.network.neutron [req-4d9668ad-d458-47eb-b8d9-6ca19ddcdcf3 req-1e5f4fbb-dda6-4804-9a61-dddf9fe17ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Refreshing network info cache for port f5f675c0-8463-49b6-8351-06d13fa2cb29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.221 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Start _get_guest_xml network_info=[{"id": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "address": "fa:16:3e:30:e0:e3", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f675c0-84", "ovs_interfaceid": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.228 253542 WARNING nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.237 253542 DEBUG nova.virt.libvirt.host [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.238 253542 DEBUG nova.virt.libvirt.host [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.243 253542 DEBUG nova.virt.libvirt.host [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.243 253542 DEBUG nova.virt.libvirt.host [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.244 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.245 253542 DEBUG nova.virt.hardware [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.246 253542 DEBUG nova.virt.hardware [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.246 253542 DEBUG nova.virt.hardware [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.247 253542 DEBUG nova.virt.hardware [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.247 253542 DEBUG nova.virt.hardware [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.248 253542 DEBUG nova.virt.hardware [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.248 253542 DEBUG nova.virt.hardware [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.249 253542 DEBUG nova.virt.hardware [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.249 253542 DEBUG nova.virt.hardware [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.250 253542 DEBUG nova.virt.hardware [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.250 253542 DEBUG nova.virt.hardware [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.255 253542 DEBUG oslo_concurrency.processutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:54 compute-0 ceph-mon[75015]: pgmap v1706: 321 pgs: 321 active+clean; 200 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.5 MiB/s wr, 228 op/s
Nov 25 08:39:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:39:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2169499227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.777 253542 DEBUG oslo_concurrency.processutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.815 253542 DEBUG nova.storage.rbd_utils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:54 compute-0 nova_compute[253538]: 2025-11-25 08:39:54.820 253542 DEBUG oslo_concurrency.processutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.065 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:39:55 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/270474217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.248 253542 DEBUG oslo_concurrency.processutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.251 253542 DEBUG nova.virt.libvirt.vif [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:39:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1179390849',display_name='tempest-ServersTestJSON-server-1179390849',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1179390849',id=93,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-ix8jfg0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:39:49Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=4d9b26b3-d7f6-44d6-8e83-24e9adb5d994,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "address": "fa:16:3e:30:e0:e3", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f675c0-84", "ovs_interfaceid": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.252 253542 DEBUG nova.network.os_vif_util [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "address": "fa:16:3e:30:e0:e3", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f675c0-84", "ovs_interfaceid": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.254 253542 DEBUG nova.network.os_vif_util [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:e0:e3,bridge_name='br-int',has_traffic_filtering=True,id=f5f675c0-8463-49b6-8351-06d13fa2cb29,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5f675c0-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.256 253542 DEBUG nova.objects.instance [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.274 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:39:55 compute-0 nova_compute[253538]:   <uuid>4d9b26b3-d7f6-44d6-8e83-24e9adb5d994</uuid>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   <name>instance-0000005d</name>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersTestJSON-server-1179390849</nova:name>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:39:54</nova:creationTime>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:39:55 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:39:55 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:39:55 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:39:55 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:39:55 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:39:55 compute-0 nova_compute[253538]:         <nova:user uuid="fdcb005cc49a4dfa82152f2c0817cc94">tempest-ServersTestJSON-1426188226-project-member</nova:user>
Nov 25 08:39:55 compute-0 nova_compute[253538]:         <nova:project uuid="b730f086c4b94185afab5e10fa2e8181">tempest-ServersTestJSON-1426188226</nova:project>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:39:55 compute-0 nova_compute[253538]:         <nova:port uuid="f5f675c0-8463-49b6-8351-06d13fa2cb29">
Nov 25 08:39:55 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <system>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <entry name="serial">4d9b26b3-d7f6-44d6-8e83-24e9adb5d994</entry>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <entry name="uuid">4d9b26b3-d7f6-44d6-8e83-24e9adb5d994</entry>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     </system>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   <os>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   </os>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   <features>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   </features>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4d9b26b3-d7f6-44d6-8e83-24e9adb5d994_disk">
Nov 25 08:39:55 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       </source>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:39:55 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4d9b26b3-d7f6-44d6-8e83-24e9adb5d994_disk.config">
Nov 25 08:39:55 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       </source>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:39:55 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:30:e0:e3"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <target dev="tapf5f675c0-84"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/4d9b26b3-d7f6-44d6-8e83-24e9adb5d994/console.log" append="off"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <video>
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     </video>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:39:55 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:39:55 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:39:55 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:39:55 compute-0 nova_compute[253538]: </domain>
Nov 25 08:39:55 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.276 253542 DEBUG nova.compute.manager [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Preparing to wait for external event network-vif-plugged-f5f675c0-8463-49b6-8351-06d13fa2cb29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.276 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.277 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.277 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.279 253542 DEBUG nova.virt.libvirt.vif [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:39:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1179390849',display_name='tempest-ServersTestJSON-server-1179390849',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1179390849',id=93,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-ix8jfg0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:39:49Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=4d9b26b3-d7f6-44d6-8e83-24e9adb5d994,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "address": "fa:16:3e:30:e0:e3", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f675c0-84", "ovs_interfaceid": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.279 253542 DEBUG nova.network.os_vif_util [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "address": "fa:16:3e:30:e0:e3", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f675c0-84", "ovs_interfaceid": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.280 253542 DEBUG nova.network.os_vif_util [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:e0:e3,bridge_name='br-int',has_traffic_filtering=True,id=f5f675c0-8463-49b6-8351-06d13fa2cb29,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5f675c0-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.281 253542 DEBUG os_vif [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:e0:e3,bridge_name='br-int',has_traffic_filtering=True,id=f5f675c0-8463-49b6-8351-06d13fa2cb29,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5f675c0-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.282 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.283 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.284 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.290 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5f675c0-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.291 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf5f675c0-84, col_values=(('external_ids', {'iface-id': 'f5f675c0-8463-49b6-8351-06d13fa2cb29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:e0:e3', 'vm-uuid': '4d9b26b3-d7f6-44d6-8e83-24e9adb5d994'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.293 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:55 compute-0 NetworkManager[48915]: <info>  [1764059995.2938] manager: (tapf5f675c0-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.297 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.302 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.303 253542 INFO os_vif [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:e0:e3,bridge_name='br-int',has_traffic_filtering=True,id=f5f675c0-8463-49b6-8351-06d13fa2cb29,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5f675c0-84')
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.357 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.358 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.359 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] No VIF found with MAC fa:16:3e:30:e0:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.360 253542 INFO nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Using config drive
Nov 25 08:39:55 compute-0 nova_compute[253538]: 2025-11-25 08:39:55.390 253542 DEBUG nova.storage.rbd_utils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 239 KiB/s rd, 1.8 MiB/s wr, 123 op/s
Nov 25 08:39:55 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2169499227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:55 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/270474217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:39:56 compute-0 nova_compute[253538]: 2025-11-25 08:39:56.257 253542 INFO nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Creating config drive at /var/lib/nova/instances/4d9b26b3-d7f6-44d6-8e83-24e9adb5d994/disk.config
Nov 25 08:39:56 compute-0 nova_compute[253538]: 2025-11-25 08:39:56.263 253542 DEBUG oslo_concurrency.processutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d9b26b3-d7f6-44d6-8e83-24e9adb5d994/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7w226u_y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:56 compute-0 nova_compute[253538]: 2025-11-25 08:39:56.312 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059981.2839005, 508a1bcc-5cc4-480e-b329-b0b02cf785d2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:56 compute-0 nova_compute[253538]: 2025-11-25 08:39:56.313 253542 INFO nova.compute.manager [-] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] VM Stopped (Lifecycle Event)
Nov 25 08:39:56 compute-0 nova_compute[253538]: 2025-11-25 08:39:56.340 253542 DEBUG nova.compute.manager [None req-a28ff973-faa1-497e-8e98-160361e08d65 - - - - - -] [instance: 508a1bcc-5cc4-480e-b329-b0b02cf785d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:56 compute-0 nova_compute[253538]: 2025-11-25 08:39:56.422 253542 DEBUG oslo_concurrency.processutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d9b26b3-d7f6-44d6-8e83-24e9adb5d994/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7w226u_y" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:56 compute-0 nova_compute[253538]: 2025-11-25 08:39:56.453 253542 DEBUG nova.storage.rbd_utils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] rbd image 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:39:56 compute-0 nova_compute[253538]: 2025-11-25 08:39:56.459 253542 DEBUG oslo_concurrency.processutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d9b26b3-d7f6-44d6-8e83-24e9adb5d994/disk.config 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:39:56 compute-0 ceph-mon[75015]: pgmap v1707: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 239 KiB/s rd, 1.8 MiB/s wr, 123 op/s
Nov 25 08:39:56 compute-0 nova_compute[253538]: 2025-11-25 08:39:56.617 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:39:56.770949) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059996770994, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 806, "num_deletes": 250, "total_data_size": 946788, "memory_usage": 960848, "flush_reason": "Manual Compaction"}
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059996832425, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 620903, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35245, "largest_seqno": 36050, "table_properties": {"data_size": 617445, "index_size": 1174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9645, "raw_average_key_size": 20, "raw_value_size": 609963, "raw_average_value_size": 1323, "num_data_blocks": 52, "num_entries": 461, "num_filter_entries": 461, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764059937, "oldest_key_time": 1764059937, "file_creation_time": 1764059996, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 61558 microseconds, and 6617 cpu microseconds.
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:39:56.832503) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 620903 bytes OK
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:39:56.832530) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:39:56.880782) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:39:56.880831) EVENT_LOG_v1 {"time_micros": 1764059996880819, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:39:56.880860) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 942696, prev total WAL file size 942696, number of live WAL files 2.
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:39:56.882089) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323535' seq:72057594037927935, type:22 .. '6D6772737461740031353036' seq:0, type:0; will stop at (end)
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(606KB)], [77(9672KB)]
Nov 25 08:39:56 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059996882260, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 10525038, "oldest_snapshot_seqno": -1}
Nov 25 08:39:56 compute-0 nova_compute[253538]: 2025-11-25 08:39:56.895 253542 DEBUG oslo_concurrency.processutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d9b26b3-d7f6-44d6-8e83-24e9adb5d994/disk.config 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:39:56 compute-0 nova_compute[253538]: 2025-11-25 08:39:56.897 253542 INFO nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Deleting local config drive /var/lib/nova/instances/4d9b26b3-d7f6-44d6-8e83-24e9adb5d994/disk.config because it was imported into RBD.
Nov 25 08:39:56 compute-0 kernel: tapf5f675c0-84: entered promiscuous mode
Nov 25 08:39:56 compute-0 ovn_controller[152859]: 2025-11-25T08:39:56Z|00912|binding|INFO|Claiming lport f5f675c0-8463-49b6-8351-06d13fa2cb29 for this chassis.
Nov 25 08:39:56 compute-0 ovn_controller[152859]: 2025-11-25T08:39:56Z|00913|binding|INFO|f5f675c0-8463-49b6-8351-06d13fa2cb29: Claiming fa:16:3e:30:e0:e3 10.100.0.4
Nov 25 08:39:56 compute-0 NetworkManager[48915]: <info>  [1764059996.9731] manager: (tapf5f675c0-84): new Tun device (/org/freedesktop/NetworkManager/Devices/375)
Nov 25 08:39:56 compute-0 nova_compute[253538]: 2025-11-25 08:39:56.972 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:56.998 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:e0:e3 10.100.0.4'], port_security=['fa:16:3e:30:e0:e3 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4d9b26b3-d7f6-44d6-8e83-24e9adb5d994', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92e26514-5b15-410b-8885-6773bc03c4ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b730f086c4b94185afab5e10fa2e8181', 'neutron:revision_number': '2', 'neutron:security_group_ids': '420b1f20-9bb9-442a-978c-444ed9d0cb04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ae0a3b6-6c0f-4ef3-ae20-c7b5b6ede8ac, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f5f675c0-8463-49b6-8351-06d13fa2cb29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:39:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:56.999 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f5f675c0-8463-49b6-8351-06d13fa2cb29 in datapath 92e26514-5b15-410b-8885-6773bc03c4ce bound to our chassis
Nov 25 08:39:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:57.000 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92e26514-5b15-410b-8885-6773bc03c4ce
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.010 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059982.0093665, 3b6a2122-85ba-42b9-9eed-7d58e10b9b98 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.010 253542 INFO nova.compute.manager [-] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] VM Stopped (Lifecycle Event)
Nov 25 08:39:57 compute-0 ovn_controller[152859]: 2025-11-25T08:39:57Z|00914|binding|INFO|Setting lport f5f675c0-8463-49b6-8351-06d13fa2cb29 ovn-installed in OVS
Nov 25 08:39:57 compute-0 ovn_controller[152859]: 2025-11-25T08:39:57Z|00915|binding|INFO|Setting lport f5f675c0-8463-49b6-8351-06d13fa2cb29 up in Southbound
Nov 25 08:39:57 compute-0 systemd-udevd[342410]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:39:57 compute-0 systemd-machined[215790]: New machine qemu-114-instance-0000005d.
Nov 25 08:39:57 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 5999 keys, 7534232 bytes, temperature: kUnknown
Nov 25 08:39:57 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059997023650, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 7534232, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7496575, "index_size": 21547, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15045, "raw_key_size": 152362, "raw_average_key_size": 25, "raw_value_size": 7391410, "raw_average_value_size": 1232, "num_data_blocks": 874, "num_entries": 5999, "num_filter_entries": 5999, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764059996, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:39:57 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:39:57 compute-0 NetworkManager[48915]: <info>  [1764059997.0321] device (tapf5f675c0-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.032 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:57 compute-0 NetworkManager[48915]: <info>  [1764059997.0344] device (tapf5f675c0-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:39:57 compute-0 systemd[1]: Started Virtual Machine qemu-114-instance-0000005d.
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.038 253542 DEBUG nova.compute.manager [None req-28c42aa0-8d02-4967-be0d-aff3d859810f - - - - - -] [instance: 3b6a2122-85ba-42b9-9eed-7d58e10b9b98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:57 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:39:57.024286) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 7534232 bytes
Nov 25 08:39:57 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:39:57.040835) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.2 rd, 53.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.4 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(29.1) write-amplify(12.1) OK, records in: 6490, records dropped: 491 output_compression: NoCompression
Nov 25 08:39:57 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:39:57.040872) EVENT_LOG_v1 {"time_micros": 1764059997040858, "job": 44, "event": "compaction_finished", "compaction_time_micros": 141848, "compaction_time_cpu_micros": 40765, "output_level": 6, "num_output_files": 1, "total_output_size": 7534232, "num_input_records": 6490, "num_output_records": 5999, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:39:57 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:39:57 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059997041281, "job": 44, "event": "table_file_deletion", "file_number": 79}
Nov 25 08:39:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:57.043 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bf699cf8-479f-4657-a740-dd4b157d4500]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:57 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:39:57 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764059997045434, "job": 44, "event": "table_file_deletion", "file_number": 77}
Nov 25 08:39:57 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:39:56.881957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:39:57 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:39:57.045555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:39:57 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:39:57.045563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:39:57 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:39:57.045566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:39:57 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:39:57.045570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:39:57 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:39:57.045573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:39:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:57.091 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[513d7225-8e98-4473-a77e-3ccc01685607]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:57.097 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[838ce0a8-4f23-4ead-891e-034d96929b3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:57.138 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[827860a8-48f5-4527-a14a-6994053fd258]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:57.161 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[55532768-bc23-4d0e-915d-9f779e1b4969]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92e26514-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:84:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522366, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342424, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:57.190 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8b12c8f3-54c2-45ac-98a6-225c9ac74db5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522379, 'tstamp': 522379}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342425, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522383, 'tstamp': 522383}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342425, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:39:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:57.192 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92e26514-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.194 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:57.197 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92e26514-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:39:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:57.197 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:57.198 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92e26514-50, col_values=(('external_ids', {'iface-id': '185991f0-acab-400e-baff-76794035d44a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:39:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:39:57.199 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:39:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.407 253542 DEBUG nova.compute.manager [req-ce862fb6-0912-43d1-82d0-631f2ee4de19 req-5a66712b-5b68-4123-96c6-a559b2764646 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Received event network-vif-plugged-f5f675c0-8463-49b6-8351-06d13fa2cb29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.409 253542 DEBUG oslo_concurrency.lockutils [req-ce862fb6-0912-43d1-82d0-631f2ee4de19 req-5a66712b-5b68-4123-96c6-a559b2764646 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.409 253542 DEBUG oslo_concurrency.lockutils [req-ce862fb6-0912-43d1-82d0-631f2ee4de19 req-5a66712b-5b68-4123-96c6-a559b2764646 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.409 253542 DEBUG oslo_concurrency.lockutils [req-ce862fb6-0912-43d1-82d0-631f2ee4de19 req-5a66712b-5b68-4123-96c6-a559b2764646 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.409 253542 DEBUG nova.compute.manager [req-ce862fb6-0912-43d1-82d0-631f2ee4de19 req-5a66712b-5b68-4123-96c6-a559b2764646 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Processing event network-vif-plugged-f5f675c0-8463-49b6-8351-06d13fa2cb29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.554 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059997.5544007, 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.555 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] VM Started (Lifecycle Event)
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.557 253542 DEBUG nova.compute.manager [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.560 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.565 253542 INFO nova.virt.libvirt.driver [-] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Instance spawned successfully.
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.565 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.572 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.577 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.592 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.592 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.593 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.594 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.594 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.595 253542 DEBUG nova.virt.libvirt.driver [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.601 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.601 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059997.556574, 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.602 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] VM Paused (Lifecycle Event)
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.650 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.654 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764059997.5591652, 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.655 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] VM Resumed (Lifecycle Event)
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.708 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.714 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.717 253542 DEBUG nova.network.neutron [req-4d9668ad-d458-47eb-b8d9-6ca19ddcdcf3 req-1e5f4fbb-dda6-4804-9a61-dddf9fe17ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Updated VIF entry in instance network info cache for port f5f675c0-8463-49b6-8351-06d13fa2cb29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.718 253542 DEBUG nova.network.neutron [req-4d9668ad-d458-47eb-b8d9-6ca19ddcdcf3 req-1e5f4fbb-dda6-4804-9a61-dddf9fe17ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Updating instance_info_cache with network_info: [{"id": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "address": "fa:16:3e:30:e0:e3", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f675c0-84", "ovs_interfaceid": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.753 253542 INFO nova.compute.manager [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Took 8.15 seconds to spawn the instance on the hypervisor.
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.753 253542 DEBUG nova.compute.manager [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:57 compute-0 ceph-mon[75015]: pgmap v1708: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.790 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.790 253542 DEBUG oslo_concurrency.lockutils [req-4d9668ad-d458-47eb-b8d9-6ca19ddcdcf3 req-1e5f4fbb-dda6-4804-9a61-dddf9fe17ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4d9b26b3-d7f6-44d6-8e83-24e9adb5d994" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.860 253542 INFO nova.compute.manager [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Took 9.32 seconds to build instance.
Nov 25 08:39:57 compute-0 nova_compute[253538]: 2025-11-25 08:39:57.878 253542 DEBUG oslo_concurrency.lockutils [None req-62534dd1-b9f6-4e6b-a1ee-35f81e278dd5 fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:39:59 compute-0 nova_compute[253538]: 2025-11-25 08:39:59.276 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059984.2757854, 685ce923-4b91-41a7-9a13-d62077b95839 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:59 compute-0 nova_compute[253538]: 2025-11-25 08:39:59.277 253542 INFO nova.compute.manager [-] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] VM Stopped (Lifecycle Event)
Nov 25 08:39:59 compute-0 nova_compute[253538]: 2025-11-25 08:39:59.305 253542 DEBUG nova.compute.manager [None req-d66a1e1a-50f4-443b-a71d-c1f3860ce5ff - - - - - -] [instance: 685ce923-4b91-41a7-9a13-d62077b95839] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:39:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Nov 25 08:39:59 compute-0 nova_compute[253538]: 2025-11-25 08:39:59.993 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764059984.9924135, 40912950-fedc-405c-bc49-c4a757a422dc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:39:59 compute-0 nova_compute[253538]: 2025-11-25 08:39:59.993 253542 INFO nova.compute.manager [-] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] VM Stopped (Lifecycle Event)
Nov 25 08:40:00 compute-0 nova_compute[253538]: 2025-11-25 08:40:00.012 253542 DEBUG nova.compute.manager [None req-7780c55b-4907-46d4-a78d-3f32aa124692 - - - - - -] [instance: 40912950-fedc-405c-bc49-c4a757a422dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:00 compute-0 nova_compute[253538]: 2025-11-25 08:40:00.294 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:00 compute-0 nova_compute[253538]: 2025-11-25 08:40:00.432 253542 DEBUG nova.compute.manager [req-7d971d0b-7db0-4dd1-8d94-37bc2c06fe9b req-447f6371-ce45-409b-b218-b163791e63ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Received event network-vif-plugged-f5f675c0-8463-49b6-8351-06d13fa2cb29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:40:00 compute-0 nova_compute[253538]: 2025-11-25 08:40:00.433 253542 DEBUG oslo_concurrency.lockutils [req-7d971d0b-7db0-4dd1-8d94-37bc2c06fe9b req-447f6371-ce45-409b-b218-b163791e63ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:00 compute-0 nova_compute[253538]: 2025-11-25 08:40:00.433 253542 DEBUG oslo_concurrency.lockutils [req-7d971d0b-7db0-4dd1-8d94-37bc2c06fe9b req-447f6371-ce45-409b-b218-b163791e63ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:00 compute-0 nova_compute[253538]: 2025-11-25 08:40:00.433 253542 DEBUG oslo_concurrency.lockutils [req-7d971d0b-7db0-4dd1-8d94-37bc2c06fe9b req-447f6371-ce45-409b-b218-b163791e63ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:00 compute-0 nova_compute[253538]: 2025-11-25 08:40:00.434 253542 DEBUG nova.compute.manager [req-7d971d0b-7db0-4dd1-8d94-37bc2c06fe9b req-447f6371-ce45-409b-b218-b163791e63ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] No waiting events found dispatching network-vif-plugged-f5f675c0-8463-49b6-8351-06d13fa2cb29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:40:00 compute-0 nova_compute[253538]: 2025-11-25 08:40:00.434 253542 WARNING nova.compute.manager [req-7d971d0b-7db0-4dd1-8d94-37bc2c06fe9b req-447f6371-ce45-409b-b218-b163791e63ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Received unexpected event network-vif-plugged-f5f675c0-8463-49b6-8351-06d13fa2cb29 for instance with vm_state active and task_state None.
Nov 25 08:40:00 compute-0 ceph-mon[75015]: pgmap v1709: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Nov 25 08:40:01 compute-0 nova_compute[253538]: 2025-11-25 08:40:01.184 253542 DEBUG oslo_concurrency.lockutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Acquiring lock "067512b3-cc61-478e-b705-71fbd18f9fb5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:01 compute-0 nova_compute[253538]: 2025-11-25 08:40:01.185 253542 DEBUG oslo_concurrency.lockutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "067512b3-cc61-478e-b705-71fbd18f9fb5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:01 compute-0 nova_compute[253538]: 2025-11-25 08:40:01.208 253542 DEBUG nova.compute.manager [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:40:01 compute-0 nova_compute[253538]: 2025-11-25 08:40:01.314 253542 DEBUG oslo_concurrency.lockutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:01 compute-0 nova_compute[253538]: 2025-11-25 08:40:01.314 253542 DEBUG oslo_concurrency.lockutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:01 compute-0 nova_compute[253538]: 2025-11-25 08:40:01.327 253542 DEBUG nova.virt.hardware [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:40:01 compute-0 nova_compute[253538]: 2025-11-25 08:40:01.328 253542 INFO nova.compute.claims [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:40:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Nov 25 08:40:01 compute-0 nova_compute[253538]: 2025-11-25 08:40:01.501 253542 DEBUG oslo_concurrency.processutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:01 compute-0 nova_compute[253538]: 2025-11-25 08:40:01.619 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:40:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:40:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/762640977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.027 253542 DEBUG oslo_concurrency.processutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.033 253542 DEBUG nova.compute.provider_tree [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.053 253542 DEBUG nova.scheduler.client.report [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.072 253542 DEBUG oslo_concurrency.lockutils [None req-205906e9-8070-42a3-ba24-16f5c3dc86ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.072 253542 DEBUG oslo_concurrency.lockutils [None req-205906e9-8070-42a3-ba24-16f5c3dc86ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.073 253542 DEBUG nova.compute.manager [None req-205906e9-8070-42a3-ba24-16f5c3dc86ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.075 253542 DEBUG oslo_concurrency.lockutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.076 253542 DEBUG nova.compute.manager [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.084 253542 DEBUG nova.compute.manager [None req-205906e9-8070-42a3-ba24-16f5c3dc86ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.085 253542 DEBUG nova.objects.instance [None req-205906e9-8070-42a3-ba24-16f5c3dc86ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'flavor' on Instance uuid 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.122 253542 DEBUG nova.virt.libvirt.driver [None req-205906e9-8070-42a3-ba24-16f5c3dc86ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.138 253542 DEBUG nova.compute.manager [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.153 253542 INFO nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.170 253542 DEBUG nova.compute.manager [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.286 253542 DEBUG nova.compute.manager [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.289 253542 DEBUG nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.291 253542 INFO nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Creating image(s)
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.325 253542 DEBUG nova.storage.rbd_utils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] rbd image 067512b3-cc61-478e-b705-71fbd18f9fb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.356 253542 DEBUG nova.storage.rbd_utils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] rbd image 067512b3-cc61-478e-b705-71fbd18f9fb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.381 253542 DEBUG nova.storage.rbd_utils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] rbd image 067512b3-cc61-478e-b705-71fbd18f9fb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.385 253542 DEBUG oslo_concurrency.processutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.489 253542 DEBUG oslo_concurrency.processutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.491 253542 DEBUG oslo_concurrency.lockutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.492 253542 DEBUG oslo_concurrency.lockutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.493 253542 DEBUG oslo_concurrency.lockutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:02 compute-0 ceph-mon[75015]: pgmap v1710: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Nov 25 08:40:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/762640977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.528 253542 DEBUG nova.storage.rbd_utils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] rbd image 067512b3-cc61-478e-b705-71fbd18f9fb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:02 compute-0 nova_compute[253538]: 2025-11-25 08:40:02.533 253542 DEBUG oslo_concurrency.processutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 067512b3-cc61-478e-b705-71fbd18f9fb5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1711: 321 pgs: 321 active+clean; 237 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 105 op/s
Nov 25 08:40:03 compute-0 ceph-mon[75015]: pgmap v1711: 321 pgs: 321 active+clean; 237 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 105 op/s
Nov 25 08:40:03 compute-0 nova_compute[253538]: 2025-11-25 08:40:03.747 253542 DEBUG oslo_concurrency.processutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 067512b3-cc61-478e-b705-71fbd18f9fb5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012951519461808947 of space, bias 1.0, pg target 0.3885455838542684 quantized to 32 (current 32)
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.3036329117015042 quantized to 32 (current 32)
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:40:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:40:03 compute-0 nova_compute[253538]: 2025-11-25 08:40:03.954 253542 DEBUG nova.storage.rbd_utils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] resizing rbd image 067512b3-cc61-478e-b705-71fbd18f9fb5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:40:05 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 08:40:05 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.5 total, 600.0 interval
                                           Cumulative writes: 27K writes, 112K keys, 27K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.04 MB/s
                                           Cumulative WAL: 27K writes, 9254 syncs, 2.99 writes per sync, written: 0.10 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 45.88 MB, 0.08 MB/s
                                           Interval WAL: 11K writes, 4150 syncs, 2.68 writes per sync, written: 0.04 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.236 253542 DEBUG nova.objects.instance [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lazy-loading 'migration_context' on Instance uuid 067512b3-cc61-478e-b705-71fbd18f9fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.252 253542 DEBUG nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.253 253542 DEBUG nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Ensure instance console log exists: /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.254 253542 DEBUG oslo_concurrency.lockutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.254 253542 DEBUG oslo_concurrency.lockutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.254 253542 DEBUG oslo_concurrency.lockutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.255 253542 DEBUG nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.259 253542 WARNING nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.264 253542 DEBUG nova.virt.libvirt.host [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.264 253542 DEBUG nova.virt.libvirt.host [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.268 253542 DEBUG nova.virt.libvirt.host [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.268 253542 DEBUG nova.virt.libvirt.host [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.269 253542 DEBUG nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.269 253542 DEBUG nova.virt.hardware [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.269 253542 DEBUG nova.virt.hardware [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.270 253542 DEBUG nova.virt.hardware [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.270 253542 DEBUG nova.virt.hardware [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.270 253542 DEBUG nova.virt.hardware [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.270 253542 DEBUG nova.virt.hardware [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.271 253542 DEBUG nova.virt.hardware [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.271 253542 DEBUG nova.virt.hardware [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.271 253542 DEBUG nova.virt.hardware [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.272 253542 DEBUG nova.virt.hardware [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.272 253542 DEBUG nova.virt.hardware [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.274 253542 DEBUG oslo_concurrency.processutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.316 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 242 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 111 op/s
Nov 25 08:40:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:40:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3941760349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.885 253542 DEBUG oslo_concurrency.processutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:05 compute-0 ceph-mon[75015]: pgmap v1712: 321 pgs: 321 active+clean; 242 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 111 op/s
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.969 253542 DEBUG nova.storage.rbd_utils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] rbd image 067512b3-cc61-478e-b705-71fbd18f9fb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:05 compute-0 nova_compute[253538]: 2025-11-25 08:40:05.975 253542 DEBUG oslo_concurrency.processutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:40:06 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1130957740' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:40:06 compute-0 nova_compute[253538]: 2025-11-25 08:40:06.454 253542 DEBUG oslo_concurrency.processutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:06 compute-0 nova_compute[253538]: 2025-11-25 08:40:06.458 253542 DEBUG nova.objects.instance [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lazy-loading 'pci_devices' on Instance uuid 067512b3-cc61-478e-b705-71fbd18f9fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:06 compute-0 nova_compute[253538]: 2025-11-25 08:40:06.478 253542 DEBUG nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:40:06 compute-0 nova_compute[253538]:   <uuid>067512b3-cc61-478e-b705-71fbd18f9fb5</uuid>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   <name>instance-0000005e</name>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerShowV257Test-server-437649416</nova:name>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:40:05</nova:creationTime>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:40:06 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:40:06 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:40:06 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:40:06 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:40:06 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:40:06 compute-0 nova_compute[253538]:         <nova:user uuid="15553157cb064aacaf8dafafa7d9e54c">tempest-ServerShowV257Test-836964466-project-member</nova:user>
Nov 25 08:40:06 compute-0 nova_compute[253538]:         <nova:project uuid="b863647443fe4e7eb44c1b694072ca91">tempest-ServerShowV257Test-836964466</nova:project>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <system>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <entry name="serial">067512b3-cc61-478e-b705-71fbd18f9fb5</entry>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <entry name="uuid">067512b3-cc61-478e-b705-71fbd18f9fb5</entry>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     </system>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   <os>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   </os>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   <features>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   </features>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/067512b3-cc61-478e-b705-71fbd18f9fb5_disk">
Nov 25 08:40:06 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       </source>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:40:06 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/067512b3-cc61-478e-b705-71fbd18f9fb5_disk.config">
Nov 25 08:40:06 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       </source>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:40:06 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5/console.log" append="off"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <video>
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     </video>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:40:06 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:40:06 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:40:06 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:40:06 compute-0 nova_compute[253538]: </domain>
Nov 25 08:40:06 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:40:06 compute-0 nova_compute[253538]: 2025-11-25 08:40:06.578 253542 DEBUG nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:40:06 compute-0 nova_compute[253538]: 2025-11-25 08:40:06.579 253542 DEBUG nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:40:06 compute-0 nova_compute[253538]: 2025-11-25 08:40:06.580 253542 INFO nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Using config drive
Nov 25 08:40:06 compute-0 nova_compute[253538]: 2025-11-25 08:40:06.611 253542 DEBUG nova.storage.rbd_utils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] rbd image 067512b3-cc61-478e-b705-71fbd18f9fb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:06 compute-0 nova_compute[253538]: 2025-11-25 08:40:06.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:40:06 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3941760349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:40:06 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1130957740' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:40:06 compute-0 nova_compute[253538]: 2025-11-25 08:40:06.978 253542 INFO nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Creating config drive at /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5/disk.config
Nov 25 08:40:06 compute-0 nova_compute[253538]: 2025-11-25 08:40:06.990 253542 DEBUG oslo_concurrency.processutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdg0d_af7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:07 compute-0 nova_compute[253538]: 2025-11-25 08:40:07.156 253542 DEBUG oslo_concurrency.processutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdg0d_af7" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:07 compute-0 nova_compute[253538]: 2025-11-25 08:40:07.188 253542 DEBUG nova.storage.rbd_utils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] rbd image 067512b3-cc61-478e-b705-71fbd18f9fb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:07 compute-0 nova_compute[253538]: 2025-11-25 08:40:07.192 253542 DEBUG oslo_concurrency.processutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5/disk.config 067512b3-cc61-478e-b705-71fbd18f9fb5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1713: 321 pgs: 321 active+clean; 256 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 90 op/s
Nov 25 08:40:07 compute-0 nova_compute[253538]: 2025-11-25 08:40:07.500 253542 DEBUG oslo_concurrency.processutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5/disk.config 067512b3-cc61-478e-b705-71fbd18f9fb5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:07 compute-0 nova_compute[253538]: 2025-11-25 08:40:07.502 253542 INFO nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Deleting local config drive /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5/disk.config because it was imported into RBD.
Nov 25 08:40:07 compute-0 systemd-machined[215790]: New machine qemu-115-instance-0000005e.
Nov 25 08:40:07 compute-0 systemd[1]: Started Virtual Machine qemu-115-instance-0000005e.
Nov 25 08:40:07 compute-0 ceph-mon[75015]: pgmap v1713: 321 pgs: 321 active+clean; 256 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 90 op/s
Nov 25 08:40:07 compute-0 nova_compute[253538]: 2025-11-25 08:40:07.988 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060007.9875956, 067512b3-cc61-478e-b705-71fbd18f9fb5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:40:07 compute-0 nova_compute[253538]: 2025-11-25 08:40:07.989 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] VM Resumed (Lifecycle Event)
Nov 25 08:40:07 compute-0 nova_compute[253538]: 2025-11-25 08:40:07.993 253542 DEBUG nova.compute.manager [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:40:07 compute-0 nova_compute[253538]: 2025-11-25 08:40:07.993 253542 DEBUG nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:40:07 compute-0 nova_compute[253538]: 2025-11-25 08:40:07.997 253542 INFO nova.virt.libvirt.driver [-] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Instance spawned successfully.
Nov 25 08:40:07 compute-0 nova_compute[253538]: 2025-11-25 08:40:07.998 253542 DEBUG nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.017 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.026 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.031 253542 DEBUG nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.032 253542 DEBUG nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.032 253542 DEBUG nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.033 253542 DEBUG nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.034 253542 DEBUG nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.035 253542 DEBUG nova.virt.libvirt.driver [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.062 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.062 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060007.9890249, 067512b3-cc61-478e-b705-71fbd18f9fb5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.062 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] VM Started (Lifecycle Event)
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.093 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.098 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.104 253542 INFO nova.compute.manager [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Took 5.82 seconds to spawn the instance on the hypervisor.
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.105 253542 DEBUG nova.compute.manager [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.116 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.167 253542 INFO nova.compute.manager [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Took 6.90 seconds to build instance.
Nov 25 08:40:08 compute-0 nova_compute[253538]: 2025-11-25 08:40:08.181 253542 DEBUG oslo_concurrency.lockutils [None req-4c9049c6-7878-4ef9-ab95-922a15debf13 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "067512b3-cc61-478e-b705-71fbd18f9fb5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 260 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 25 08:40:10 compute-0 nova_compute[253538]: 2025-11-25 08:40:10.319 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:10 compute-0 ceph-mon[75015]: pgmap v1714: 321 pgs: 321 active+clean; 260 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 25 08:40:10 compute-0 nova_compute[253538]: 2025-11-25 08:40:10.607 253542 INFO nova.compute.manager [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Rebuilding instance
Nov 25 08:40:10 compute-0 nova_compute[253538]: 2025-11-25 08:40:10.996 253542 DEBUG nova.objects.instance [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 067512b3-cc61-478e-b705-71fbd18f9fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:11 compute-0 nova_compute[253538]: 2025-11-25 08:40:11.008 253542 DEBUG nova.compute.manager [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:11 compute-0 nova_compute[253538]: 2025-11-25 08:40:11.047 253542 DEBUG nova.objects.instance [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lazy-loading 'pci_requests' on Instance uuid 067512b3-cc61-478e-b705-71fbd18f9fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:11 compute-0 nova_compute[253538]: 2025-11-25 08:40:11.055 253542 DEBUG nova.objects.instance [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lazy-loading 'pci_devices' on Instance uuid 067512b3-cc61-478e-b705-71fbd18f9fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:11 compute-0 nova_compute[253538]: 2025-11-25 08:40:11.067 253542 DEBUG nova.objects.instance [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lazy-loading 'resources' on Instance uuid 067512b3-cc61-478e-b705-71fbd18f9fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:11 compute-0 nova_compute[253538]: 2025-11-25 08:40:11.076 253542 DEBUG nova.objects.instance [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lazy-loading 'migration_context' on Instance uuid 067512b3-cc61-478e-b705-71fbd18f9fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:11 compute-0 nova_compute[253538]: 2025-11-25 08:40:11.086 253542 DEBUG nova.objects.instance [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:40:11 compute-0 nova_compute[253538]: 2025-11-25 08:40:11.092 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:40:11 compute-0 ovn_controller[152859]: 2025-11-25T08:40:11Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:e0:e3 10.100.0.4
Nov 25 08:40:11 compute-0 ovn_controller[152859]: 2025-11-25T08:40:11Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:e0:e3 10.100.0.4
Nov 25 08:40:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 268 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 126 op/s
Nov 25 08:40:11 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 08:40:11 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.4 total, 600.0 interval
                                           Cumulative writes: 27K writes, 107K keys, 27K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 27K writes, 9007 syncs, 3.00 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 39.78 MB, 0.07 MB/s
                                           Interval WAL: 10K writes, 4016 syncs, 2.54 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 08:40:11 compute-0 nova_compute[253538]: 2025-11-25 08:40:11.622 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:11 compute-0 ceph-mon[75015]: pgmap v1715: 321 pgs: 321 active+clean; 268 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 126 op/s
Nov 25 08:40:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:40:12 compute-0 nova_compute[253538]: 2025-11-25 08:40:12.273 253542 DEBUG nova.virt.libvirt.driver [None req-205906e9-8070-42a3-ba24-16f5c3dc86ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:40:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1716: 321 pgs: 321 active+clean; 282 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.7 MiB/s wr, 195 op/s
Nov 25 08:40:14 compute-0 ceph-mon[75015]: pgmap v1716: 321 pgs: 321 active+clean; 282 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.7 MiB/s wr, 195 op/s
Nov 25 08:40:15 compute-0 kernel: tapf5f675c0-84 (unregistering): left promiscuous mode
Nov 25 08:40:15 compute-0 NetworkManager[48915]: <info>  [1764060015.2936] device (tapf5f675c0-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:40:15 compute-0 nova_compute[253538]: 2025-11-25 08:40:15.301 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:15 compute-0 ovn_controller[152859]: 2025-11-25T08:40:15Z|00916|binding|INFO|Releasing lport f5f675c0-8463-49b6-8351-06d13fa2cb29 from this chassis (sb_readonly=0)
Nov 25 08:40:15 compute-0 ovn_controller[152859]: 2025-11-25T08:40:15Z|00917|binding|INFO|Setting lport f5f675c0-8463-49b6-8351-06d13fa2cb29 down in Southbound
Nov 25 08:40:15 compute-0 ovn_controller[152859]: 2025-11-25T08:40:15Z|00918|binding|INFO|Removing iface tapf5f675c0-84 ovn-installed in OVS
Nov 25 08:40:15 compute-0 nova_compute[253538]: 2025-11-25 08:40:15.305 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:15.311 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:e0:e3 10.100.0.4'], port_security=['fa:16:3e:30:e0:e3 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4d9b26b3-d7f6-44d6-8e83-24e9adb5d994', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92e26514-5b15-410b-8885-6773bc03c4ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b730f086c4b94185afab5e10fa2e8181', 'neutron:revision_number': '4', 'neutron:security_group_ids': '420b1f20-9bb9-442a-978c-444ed9d0cb04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ae0a3b6-6c0f-4ef3-ae20-c7b5b6ede8ac, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f5f675c0-8463-49b6-8351-06d13fa2cb29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:40:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:15.312 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f5f675c0-8463-49b6-8351-06d13fa2cb29 in datapath 92e26514-5b15-410b-8885-6773bc03c4ce unbound from our chassis
Nov 25 08:40:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:15.313 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92e26514-5b15-410b-8885-6773bc03c4ce
Nov 25 08:40:15 compute-0 nova_compute[253538]: 2025-11-25 08:40:15.321 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:15.330 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6b01ec-4175-4427-876a-c141067d3a1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:15 compute-0 nova_compute[253538]: 2025-11-25 08:40:15.342 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:15 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Nov 25 08:40:15 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005d.scope: Consumed 13.412s CPU time.
Nov 25 08:40:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:15.367 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[083dbd1f-67d2-44a7-ac5d-98b3e2e24f3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:15 compute-0 systemd-machined[215790]: Machine qemu-114-instance-0000005d terminated.
Nov 25 08:40:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:15.370 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[518857ac-7e18-41eb-9e4c-11760538749d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:15 compute-0 podman[342835]: 2025-11-25 08:40:15.386145045 +0000 UTC m=+0.071565780 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:40:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 289 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 155 op/s
Nov 25 08:40:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:15.408 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[826a7164-8dbd-45eb-9c56-171e46e1419a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:15.429 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ece5fbcf-bae8-438e-8153-7deac57c4dce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92e26514-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:84:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522366, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342864, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:15.453 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[298600c9-65e3-4c89-9f06-c5080d9c4cac]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522379, 'tstamp': 522379}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342866, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap92e26514-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522383, 'tstamp': 522383}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342866, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:15.455 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92e26514-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:40:15 compute-0 nova_compute[253538]: 2025-11-25 08:40:15.457 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:15 compute-0 nova_compute[253538]: 2025-11-25 08:40:15.462 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:15.463 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92e26514-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:40:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:15.463 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:40:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:15.464 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92e26514-50, col_values=(('external_ids', {'iface-id': '185991f0-acab-400e-baff-76794035d44a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:40:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:15.464 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:40:15 compute-0 virtqemud[253839]: cannot parse process status data
Nov 25 08:40:15 compute-0 nova_compute[253538]: 2025-11-25 08:40:15.499 253542 INFO nova.virt.libvirt.driver [None req-205906e9-8070-42a3-ba24-16f5c3dc86ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Instance shutdown successfully after 13 seconds.
Nov 25 08:40:15 compute-0 nova_compute[253538]: 2025-11-25 08:40:15.513 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:15 compute-0 nova_compute[253538]: 2025-11-25 08:40:15.518 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:15 compute-0 nova_compute[253538]: 2025-11-25 08:40:15.530 253542 INFO nova.virt.libvirt.driver [-] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Instance destroyed successfully.
Nov 25 08:40:15 compute-0 nova_compute[253538]: 2025-11-25 08:40:15.530 253542 DEBUG nova.objects.instance [None req-205906e9-8070-42a3-ba24-16f5c3dc86ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:15 compute-0 nova_compute[253538]: 2025-11-25 08:40:15.546 253542 DEBUG nova.compute.manager [None req-205906e9-8070-42a3-ba24-16f5c3dc86ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:15 compute-0 nova_compute[253538]: 2025-11-25 08:40:15.623 253542 DEBUG oslo_concurrency.lockutils [None req-205906e9-8070-42a3-ba24-16f5c3dc86ca fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:16 compute-0 ceph-mon[75015]: pgmap v1717: 321 pgs: 321 active+clean; 289 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 155 op/s
Nov 25 08:40:16 compute-0 nova_compute[253538]: 2025-11-25 08:40:16.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:40:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 293 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 148 op/s
Nov 25 08:40:17 compute-0 nova_compute[253538]: 2025-11-25 08:40:17.752 253542 DEBUG nova.compute.manager [req-2516fd7b-4ca7-4dab-8a56-84b3a45eaf6a req-ae1ae3fc-6885-41d3-9af7-22c280f0e5a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Received event network-vif-unplugged-f5f675c0-8463-49b6-8351-06d13fa2cb29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:40:17 compute-0 nova_compute[253538]: 2025-11-25 08:40:17.753 253542 DEBUG oslo_concurrency.lockutils [req-2516fd7b-4ca7-4dab-8a56-84b3a45eaf6a req-ae1ae3fc-6885-41d3-9af7-22c280f0e5a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:17 compute-0 nova_compute[253538]: 2025-11-25 08:40:17.754 253542 DEBUG oslo_concurrency.lockutils [req-2516fd7b-4ca7-4dab-8a56-84b3a45eaf6a req-ae1ae3fc-6885-41d3-9af7-22c280f0e5a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:17 compute-0 nova_compute[253538]: 2025-11-25 08:40:17.755 253542 DEBUG oslo_concurrency.lockutils [req-2516fd7b-4ca7-4dab-8a56-84b3a45eaf6a req-ae1ae3fc-6885-41d3-9af7-22c280f0e5a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:17 compute-0 nova_compute[253538]: 2025-11-25 08:40:17.755 253542 DEBUG nova.compute.manager [req-2516fd7b-4ca7-4dab-8a56-84b3a45eaf6a req-ae1ae3fc-6885-41d3-9af7-22c280f0e5a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] No waiting events found dispatching network-vif-unplugged-f5f675c0-8463-49b6-8351-06d13fa2cb29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:40:17 compute-0 nova_compute[253538]: 2025-11-25 08:40:17.756 253542 WARNING nova.compute.manager [req-2516fd7b-4ca7-4dab-8a56-84b3a45eaf6a req-ae1ae3fc-6885-41d3-9af7-22c280f0e5a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Received unexpected event network-vif-unplugged-f5f675c0-8463-49b6-8351-06d13fa2cb29 for instance with vm_state stopped and task_state None.
Nov 25 08:40:18 compute-0 ceph-mon[75015]: pgmap v1718: 321 pgs: 321 active+clean; 293 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 148 op/s
Nov 25 08:40:18 compute-0 nova_compute[253538]: 2025-11-25 08:40:18.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:40:18 compute-0 nova_compute[253538]: 2025-11-25 08:40:18.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:40:18 compute-0 nova_compute[253538]: 2025-11-25 08:40:18.983 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:18 compute-0 nova_compute[253538]: 2025-11-25 08:40:18.983 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.021 253542 DEBUG nova.compute.manager [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.142 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.143 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.152 253542 DEBUG nova.virt.hardware [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.152 253542 INFO nova.compute.claims [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.296 253542 DEBUG oslo_concurrency.processutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1719: 321 pgs: 321 active+clean; 293 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 146 op/s
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:40:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:40:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2189341936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.744 253542 DEBUG oslo_concurrency.processutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.751 253542 DEBUG nova.compute.provider_tree [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.783 253542 DEBUG nova.scheduler.client.report [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.828 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.829 253542 DEBUG nova.compute.manager [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.874 253542 DEBUG nova.compute.manager [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.875 253542 DEBUG nova.network.neutron [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.896 253542 INFO nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.916 253542 DEBUG nova.compute.manager [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.994 253542 DEBUG nova.compute.manager [req-465d4977-d7a2-4503-90a7-9ebd5ee15b8b req-b7699425-3050-4039-b7ff-099d4a6cdf44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Received event network-vif-plugged-f5f675c0-8463-49b6-8351-06d13fa2cb29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.995 253542 DEBUG oslo_concurrency.lockutils [req-465d4977-d7a2-4503-90a7-9ebd5ee15b8b req-b7699425-3050-4039-b7ff-099d4a6cdf44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.996 253542 DEBUG oslo_concurrency.lockutils [req-465d4977-d7a2-4503-90a7-9ebd5ee15b8b req-b7699425-3050-4039-b7ff-099d4a6cdf44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.996 253542 DEBUG oslo_concurrency.lockutils [req-465d4977-d7a2-4503-90a7-9ebd5ee15b8b req-b7699425-3050-4039-b7ff-099d4a6cdf44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.996 253542 DEBUG nova.compute.manager [req-465d4977-d7a2-4503-90a7-9ebd5ee15b8b req-b7699425-3050-4039-b7ff-099d4a6cdf44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] No waiting events found dispatching network-vif-plugged-f5f675c0-8463-49b6-8351-06d13fa2cb29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:40:19 compute-0 nova_compute[253538]: 2025-11-25 08:40:19.996 253542 WARNING nova.compute.manager [req-465d4977-d7a2-4503-90a7-9ebd5ee15b8b req-b7699425-3050-4039-b7ff-099d4a6cdf44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Received unexpected event network-vif-plugged-f5f675c0-8463-49b6-8351-06d13fa2cb29 for instance with vm_state stopped and task_state deleting.
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.003 253542 DEBUG oslo_concurrency.lockutils [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.003 253542 DEBUG oslo_concurrency.lockutils [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.003 253542 DEBUG oslo_concurrency.lockutils [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.003 253542 DEBUG oslo_concurrency.lockutils [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.004 253542 DEBUG oslo_concurrency.lockutils [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.005 253542 INFO nova.compute.manager [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Terminating instance
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.006 253542 DEBUG nova.compute.manager [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.008 253542 DEBUG nova.compute.manager [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.009 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.009 253542 INFO nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Creating image(s)
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.031 253542 DEBUG nova.storage.rbd_utils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.054 253542 DEBUG nova.storage.rbd_utils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.081 253542 DEBUG nova.storage.rbd_utils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.086 253542 DEBUG oslo_concurrency.processutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.138 253542 INFO nova.virt.libvirt.driver [-] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Instance destroyed successfully.
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.139 253542 DEBUG nova.objects.instance [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'resources' on Instance uuid 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.166 253542 DEBUG nova.virt.libvirt.vif [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:39:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1179390849',display_name='tempest-Íñstáñcé-1023330959',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1179390849',id=93,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:39:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-ix8jfg0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1426188226',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:40:17Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=4d9b26b3-d7f6-44d6-8e83-24e9adb5d994,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "address": "fa:16:3e:30:e0:e3", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f675c0-84", "ovs_interfaceid": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.167 253542 DEBUG nova.network.os_vif_util [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "address": "fa:16:3e:30:e0:e3", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f675c0-84", "ovs_interfaceid": "f5f675c0-8463-49b6-8351-06d13fa2cb29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.169 253542 DEBUG nova.network.os_vif_util [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:e0:e3,bridge_name='br-int',has_traffic_filtering=True,id=f5f675c0-8463-49b6-8351-06d13fa2cb29,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5f675c0-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.170 253542 DEBUG os_vif [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:e0:e3,bridge_name='br-int',has_traffic_filtering=True,id=f5f675c0-8463-49b6-8351-06d13fa2cb29,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5f675c0-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.173 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.174 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5f675c0-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.176 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.177 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.181 253542 DEBUG oslo_concurrency.processutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.181 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.182 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.183 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.210 253542 DEBUG nova.storage.rbd_utils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.215 253542 DEBUG oslo_concurrency.processutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 3e75d0af-c514-42c5-aa05-88ae5552f196_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.265 253542 INFO os_vif [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:e0:e3,bridge_name='br-int',has_traffic_filtering=True,id=f5f675c0-8463-49b6-8351-06d13fa2cb29,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5f675c0-84')
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.368 253542 DEBUG nova.policy [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '66e1f27ea22d4ee08a0a470a8c18135e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '295fcc758cf24ab4b01eb393f4863e36', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:40:20 compute-0 ceph-mon[75015]: pgmap v1719: 321 pgs: 321 active+clean; 293 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 146 op/s
Nov 25 08:40:20 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2189341936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.699 253542 DEBUG oslo_concurrency.processutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 3e75d0af-c514-42c5-aa05-88ae5552f196_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:20 compute-0 sudo[343012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:40:20 compute-0 sudo[343012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:20 compute-0 sudo[343012]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.794 253542 DEBUG nova.storage.rbd_utils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] resizing rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:40:20 compute-0 sudo[343069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:40:20 compute-0 sudo[343069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:20 compute-0 sudo[343069]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.945 253542 DEBUG nova.objects.instance [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'migration_context' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:20 compute-0 sudo[343118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:40:20 compute-0 sudo[343118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:20 compute-0 podman[343116]: 2025-11-25 08:40:20.953470414 +0000 UTC m=+0.067453529 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 08:40:20 compute-0 sudo[343118]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.955 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.955 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Ensure instance console log exists: /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.955 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.956 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:20 compute-0 nova_compute[253538]: 2025-11-25 08:40:20.956 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:21 compute-0 nova_compute[253538]: 2025-11-25 08:40:21.015 253542 INFO nova.virt.libvirt.driver [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Deleting instance files /var/lib/nova/instances/4d9b26b3-d7f6-44d6-8e83-24e9adb5d994_del
Nov 25 08:40:21 compute-0 nova_compute[253538]: 2025-11-25 08:40:21.016 253542 INFO nova.virt.libvirt.driver [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Deletion of /var/lib/nova/instances/4d9b26b3-d7f6-44d6-8e83-24e9adb5d994_del complete
Nov 25 08:40:21 compute-0 sudo[343180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:40:21 compute-0 sudo[343180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:21 compute-0 nova_compute[253538]: 2025-11-25 08:40:21.065 253542 INFO nova.compute.manager [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Took 1.06 seconds to destroy the instance on the hypervisor.
Nov 25 08:40:21 compute-0 nova_compute[253538]: 2025-11-25 08:40:21.066 253542 DEBUG oslo.service.loopingcall [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:40:21 compute-0 nova_compute[253538]: 2025-11-25 08:40:21.066 253542 DEBUG nova.compute.manager [-] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:40:21 compute-0 nova_compute[253538]: 2025-11-25 08:40:21.066 253542 DEBUG nova.network.neutron [-] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:40:21 compute-0 nova_compute[253538]: 2025-11-25 08:40:21.148 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:40:21 compute-0 nova_compute[253538]: 2025-11-25 08:40:21.231 253542 DEBUG nova.network.neutron [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Successfully created port: f7c4b9b0-3445-468a-a19a-8b19b2d029a2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:40:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 304 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 144 op/s
Nov 25 08:40:21 compute-0 sudo[343180]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:40:21 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:40:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:40:21 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:40:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:40:21 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:40:21 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 2feb978c-7507-498c-a4e9-600063912d9d does not exist
Nov 25 08:40:21 compute-0 nova_compute[253538]: 2025-11-25 08:40:21.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:40:21 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev db1f9e2e-2ede-4191-8f2d-5307a71a136b does not exist
Nov 25 08:40:21 compute-0 nova_compute[253538]: 2025-11-25 08:40:21.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:40:21 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 4b3c4925-ba8a-460a-8434-1397982ff48f does not exist
Nov 25 08:40:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:40:21 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:40:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:40:21 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:40:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:40:21 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:40:21 compute-0 nova_compute[253538]: 2025-11-25 08:40:21.615 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 08:40:21 compute-0 sudo[343236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:40:21 compute-0 nova_compute[253538]: 2025-11-25 08:40:21.624 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:21 compute-0 sudo[343236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:21 compute-0 sudo[343236]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:21 compute-0 sudo[343261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:40:21 compute-0 sudo[343261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:21 compute-0 sudo[343261]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:40:21 compute-0 sudo[343286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:40:21 compute-0 sudo[343286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:21 compute-0 sudo[343286]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:21 compute-0 sudo[343311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:40:21 compute-0 sudo[343311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 08:40:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3002.4 total, 600.0 interval
                                           Cumulative writes: 22K writes, 89K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 22K writes, 7507 syncs, 3.01 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8104 writes, 31K keys, 8104 commit groups, 1.0 writes per commit group, ingest: 34.90 MB, 0.06 MB/s
                                           Interval WAL: 8104 writes, 3222 syncs, 2.52 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 08:40:21 compute-0 nova_compute[253538]: 2025-11-25 08:40:21.933 253542 DEBUG nova.network.neutron [-] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:40:21 compute-0 nova_compute[253538]: 2025-11-25 08:40:21.946 253542 INFO nova.compute.manager [-] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Took 0.88 seconds to deallocate network for instance.
Nov 25 08:40:22 compute-0 nova_compute[253538]: 2025-11-25 08:40:22.004 253542 DEBUG oslo_concurrency.lockutils [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:22 compute-0 nova_compute[253538]: 2025-11-25 08:40:22.004 253542 DEBUG oslo_concurrency.lockutils [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:22 compute-0 nova_compute[253538]: 2025-11-25 08:40:22.092 253542 DEBUG nova.compute.manager [req-f6e9304b-df0a-4874-9933-f4def16f282e req-afed3327-1e51-4f45-8a47-b37ca3ff357a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Received event network-vif-deleted-f5f675c0-8463-49b6-8351-06d13fa2cb29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:40:22 compute-0 nova_compute[253538]: 2025-11-25 08:40:22.242 253542 DEBUG oslo_concurrency.processutils [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:22 compute-0 podman[343376]: 2025-11-25 08:40:22.259350891 +0000 UTC m=+0.055231156 container create 5c4b610e383f479c3582cff3a1cd9fe69ed1485d04e11c76bba525c215dfbeaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_tharp, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 08:40:22 compute-0 nova_compute[253538]: 2025-11-25 08:40:22.288 253542 DEBUG nova.network.neutron [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Successfully updated port: f7c4b9b0-3445-468a-a19a-8b19b2d029a2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:40:22 compute-0 systemd[1]: Started libpod-conmon-5c4b610e383f479c3582cff3a1cd9fe69ed1485d04e11c76bba525c215dfbeaa.scope.
Nov 25 08:40:22 compute-0 nova_compute[253538]: 2025-11-25 08:40:22.305 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:40:22 compute-0 nova_compute[253538]: 2025-11-25 08:40:22.306 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquired lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:40:22 compute-0 nova_compute[253538]: 2025-11-25 08:40:22.306 253542 DEBUG nova.network.neutron [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:40:22 compute-0 podman[343376]: 2025-11-25 08:40:22.232994132 +0000 UTC m=+0.028874407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:40:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:40:22 compute-0 podman[343376]: 2025-11-25 08:40:22.363463688 +0000 UTC m=+0.159343953 container init 5c4b610e383f479c3582cff3a1cd9fe69ed1485d04e11c76bba525c215dfbeaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_tharp, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 08:40:22 compute-0 podman[343376]: 2025-11-25 08:40:22.371165668 +0000 UTC m=+0.167045903 container start 5c4b610e383f479c3582cff3a1cd9fe69ed1485d04e11c76bba525c215dfbeaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_tharp, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:40:22 compute-0 podman[343376]: 2025-11-25 08:40:22.375010663 +0000 UTC m=+0.170890888 container attach 5c4b610e383f479c3582cff3a1cd9fe69ed1485d04e11c76bba525c215dfbeaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_tharp, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 08:40:22 compute-0 friendly_tharp[343393]: 167 167
Nov 25 08:40:22 compute-0 systemd[1]: libpod-5c4b610e383f479c3582cff3a1cd9fe69ed1485d04e11c76bba525c215dfbeaa.scope: Deactivated successfully.
Nov 25 08:40:22 compute-0 conmon[343393]: conmon 5c4b610e383f479c3582 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c4b610e383f479c3582cff3a1cd9fe69ed1485d04e11c76bba525c215dfbeaa.scope/container/memory.events
Nov 25 08:40:22 compute-0 podman[343376]: 2025-11-25 08:40:22.381234532 +0000 UTC m=+0.177114767 container died 5c4b610e383f479c3582cff3a1cd9fe69ed1485d04e11c76bba525c215dfbeaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_tharp, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:40:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d2bbf9a058ad84116a0acb8c548dea098ecefdde54d5656370ed31692ee5c61-merged.mount: Deactivated successfully.
Nov 25 08:40:22 compute-0 podman[343376]: 2025-11-25 08:40:22.428537451 +0000 UTC m=+0.224417676 container remove 5c4b610e383f479c3582cff3a1cd9fe69ed1485d04e11c76bba525c215dfbeaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_tharp, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:40:22 compute-0 systemd[1]: libpod-conmon-5c4b610e383f479c3582cff3a1cd9fe69ed1485d04e11c76bba525c215dfbeaa.scope: Deactivated successfully.
Nov 25 08:40:22 compute-0 nova_compute[253538]: 2025-11-25 08:40:22.443 253542 DEBUG nova.network.neutron [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:40:22 compute-0 ceph-mon[75015]: pgmap v1720: 321 pgs: 321 active+clean; 304 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 144 op/s
Nov 25 08:40:22 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:40:22 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:40:22 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:40:22 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:40:22 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:40:22 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:40:22 compute-0 podman[343434]: 2025-11-25 08:40:22.675502941 +0000 UTC m=+0.076636609 container create 5528305fbb216d098bb425b1641f23b2d01b2c0f95a27bc74d42ebf2f648dfda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_villani, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 08:40:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:40:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174443808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:40:22 compute-0 nova_compute[253538]: 2025-11-25 08:40:22.711 253542 DEBUG oslo_concurrency.processutils [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:22 compute-0 nova_compute[253538]: 2025-11-25 08:40:22.716 253542 DEBUG nova.compute.provider_tree [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:40:22 compute-0 systemd[1]: Started libpod-conmon-5528305fbb216d098bb425b1641f23b2d01b2c0f95a27bc74d42ebf2f648dfda.scope.
Nov 25 08:40:22 compute-0 nova_compute[253538]: 2025-11-25 08:40:22.728 253542 DEBUG nova.scheduler.client.report [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:40:22 compute-0 podman[343434]: 2025-11-25 08:40:22.642288606 +0000 UTC m=+0.043422354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:40:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ccbc81040bd96784c56c754991d14128f3e21338cd13d7d42320685332daa29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ccbc81040bd96784c56c754991d14128f3e21338cd13d7d42320685332daa29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ccbc81040bd96784c56c754991d14128f3e21338cd13d7d42320685332daa29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ccbc81040bd96784c56c754991d14128f3e21338cd13d7d42320685332daa29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ccbc81040bd96784c56c754991d14128f3e21338cd13d7d42320685332daa29/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:40:22 compute-0 nova_compute[253538]: 2025-11-25 08:40:22.746 253542 DEBUG oslo_concurrency.lockutils [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:22 compute-0 podman[343434]: 2025-11-25 08:40:22.757374783 +0000 UTC m=+0.158508481 container init 5528305fbb216d098bb425b1641f23b2d01b2c0f95a27bc74d42ebf2f648dfda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_villani, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 08:40:22 compute-0 podman[343434]: 2025-11-25 08:40:22.76536418 +0000 UTC m=+0.166497848 container start 5528305fbb216d098bb425b1641f23b2d01b2c0f95a27bc74d42ebf2f648dfda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:40:22 compute-0 podman[343434]: 2025-11-25 08:40:22.768655841 +0000 UTC m=+0.169789519 container attach 5528305fbb216d098bb425b1641f23b2d01b2c0f95a27bc74d42ebf2f648dfda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:40:22 compute-0 nova_compute[253538]: 2025-11-25 08:40:22.768 253542 INFO nova.scheduler.client.report [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Deleted allocations for instance 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994
Nov 25 08:40:22 compute-0 nova_compute[253538]: 2025-11-25 08:40:22.831 253542 DEBUG oslo_concurrency.lockutils [None req-c399cfd8-ea29-4e00-9ea1-3fa543ac213e fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "4d9b26b3-d7f6-44d6-8e83-24e9adb5d994" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1721: 321 pgs: 321 active+clean; 291 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 169 op/s
Nov 25 08:40:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:40:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:40:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:40:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:40:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:40:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:40:23 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Nov 25 08:40:23 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005e.scope: Consumed 12.453s CPU time.
Nov 25 08:40:23 compute-0 systemd-machined[215790]: Machine qemu-115-instance-0000005e terminated.
Nov 25 08:40:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2174443808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:40:23 compute-0 podman[343465]: 2025-11-25 08:40:23.645105114 +0000 UTC m=+0.128954595 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller)
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.697 253542 DEBUG nova.network.neutron [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating instance_info_cache with network_info: [{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.716 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Releasing lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.716 253542 DEBUG nova.compute.manager [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance network_info: |[{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.719 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Start _get_guest_xml network_info=[{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.724 253542 WARNING nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.730 253542 DEBUG nova.virt.libvirt.host [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.730 253542 DEBUG nova.virt.libvirt.host [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.734 253542 DEBUG nova.virt.libvirt.host [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.734 253542 DEBUG nova.virt.libvirt.host [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.735 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.735 253542 DEBUG nova.virt.hardware [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.735 253542 DEBUG nova.virt.hardware [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.736 253542 DEBUG nova.virt.hardware [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.736 253542 DEBUG nova.virt.hardware [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.736 253542 DEBUG nova.virt.hardware [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.736 253542 DEBUG nova.virt.hardware [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.736 253542 DEBUG nova.virt.hardware [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.737 253542 DEBUG nova.virt.hardware [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.737 253542 DEBUG nova.virt.hardware [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.737 253542 DEBUG nova.virt.hardware [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.737 253542 DEBUG nova.virt.hardware [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:40:23 compute-0 nova_compute[253538]: 2025-11-25 08:40:23.741 253542 DEBUG oslo_concurrency.processutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:23 compute-0 dazzling_villani[343452]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:40:23 compute-0 dazzling_villani[343452]: --> relative data size: 1.0
Nov 25 08:40:23 compute-0 dazzling_villani[343452]: --> All data devices are unavailable
Nov 25 08:40:23 compute-0 systemd[1]: libpod-5528305fbb216d098bb425b1641f23b2d01b2c0f95a27bc74d42ebf2f648dfda.scope: Deactivated successfully.
Nov 25 08:40:23 compute-0 systemd[1]: libpod-5528305fbb216d098bb425b1641f23b2d01b2c0f95a27bc74d42ebf2f648dfda.scope: Consumed 1.067s CPU time.
Nov 25 08:40:23 compute-0 podman[343434]: 2025-11-25 08:40:23.888674922 +0000 UTC m=+1.289808590 container died 5528305fbb216d098bb425b1641f23b2d01b2c0f95a27bc74d42ebf2f648dfda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_villani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 08:40:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ccbc81040bd96784c56c754991d14128f3e21338cd13d7d42320685332daa29-merged.mount: Deactivated successfully.
Nov 25 08:40:23 compute-0 podman[343434]: 2025-11-25 08:40:23.971948872 +0000 UTC m=+1.373082540 container remove 5528305fbb216d098bb425b1641f23b2d01b2c0f95a27bc74d42ebf2f648dfda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_villani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:40:24 compute-0 systemd[1]: libpod-conmon-5528305fbb216d098bb425b1641f23b2d01b2c0f95a27bc74d42ebf2f648dfda.scope: Deactivated successfully.
Nov 25 08:40:24 compute-0 sudo[343311]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:24 compute-0 sudo[343541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:40:24 compute-0 sudo[343541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:24 compute-0 sudo[343541]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:24 compute-0 sudo[343566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:40:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:40:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4230540730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:40:24 compute-0 sudo[343566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.169 253542 INFO nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Instance shutdown successfully after 13 seconds.
Nov 25 08:40:24 compute-0 sudo[343566]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.177 253542 INFO nova.virt.libvirt.driver [-] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Instance destroyed successfully.
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.185 253542 INFO nova.virt.libvirt.driver [-] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Instance destroyed successfully.
Nov 25 08:40:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:24.201 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:40:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:24.202 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:40:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:24.203 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.209 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.211 253542 DEBUG oslo_concurrency.processutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:24 compute-0 sudo[343593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.235 253542 DEBUG nova.storage.rbd_utils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:24 compute-0 sudo[343593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:24 compute-0 sudo[343593]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.241 253542 DEBUG oslo_concurrency.processutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.279 253542 DEBUG nova.compute.manager [req-9265260b-10ed-4ea5-b8d5-700f52a35b2f req-4775e6e0-68ab-4821-a1a0-441e15b1fd33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-changed-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.279 253542 DEBUG nova.compute.manager [req-9265260b-10ed-4ea5-b8d5-700f52a35b2f req-4775e6e0-68ab-4821-a1a0-441e15b1fd33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Refreshing instance network info cache due to event network-changed-f7c4b9b0-3445-468a-a19a-8b19b2d029a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.279 253542 DEBUG oslo_concurrency.lockutils [req-9265260b-10ed-4ea5-b8d5-700f52a35b2f req-4775e6e0-68ab-4821-a1a0-441e15b1fd33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.279 253542 DEBUG oslo_concurrency.lockutils [req-9265260b-10ed-4ea5-b8d5-700f52a35b2f req-4775e6e0-68ab-4821-a1a0-441e15b1fd33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.280 253542 DEBUG nova.network.neutron [req-9265260b-10ed-4ea5-b8d5-700f52a35b2f req-4775e6e0-68ab-4821-a1a0-441e15b1fd33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Refreshing network info cache for port f7c4b9b0-3445-468a-a19a-8b19b2d029a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:40:24 compute-0 sudo[343651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:40:24 compute-0 sudo[343651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:24 compute-0 ceph-mon[75015]: pgmap v1721: 321 pgs: 321 active+clean; 291 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 169 op/s
Nov 25 08:40:24 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4230540730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:40:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:40:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2786570894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:40:24 compute-0 podman[343735]: 2025-11-25 08:40:24.70430893 +0000 UTC m=+0.084077312 container create b2df06ec4126c36b657cfac47f81d77b4170629a2222b8673fed60b67376322c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.713 253542 DEBUG oslo_concurrency.processutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.715 253542 DEBUG nova.virt.libvirt.vif [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:40:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-439401428',display_name='tempest-ServerActionsTestOtherB-server-439401428',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-439401428',id=95,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJg9vWbWDQUJP+5O2Ge5sP4yW+A5RDbOBkV9U0C3hvxoWu1yQZFyI5Vs8mvdnljTrZSXJgG69Yru9lsQdThAcjefMLvUo4eUx6Akjue1XjQsVfgM0pq0/Z3uC1qyMxn0Ew==',key_name='tempest-keypair-1642602877',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-iqkw2vgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestOtherB-587178207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:40:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=3e75d0af-c514-42c5-aa05-88ae5552f196,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.716 253542 DEBUG nova.network.os_vif_util [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.717 253542 DEBUG nova.network.os_vif_util [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.718 253542 DEBUG nova.objects.instance [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:24 compute-0 podman[343735]: 2025-11-25 08:40:24.646502425 +0000 UTC m=+0.026270857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.744 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:40:24 compute-0 nova_compute[253538]:   <uuid>3e75d0af-c514-42c5-aa05-88ae5552f196</uuid>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   <name>instance-0000005f</name>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerActionsTestOtherB-server-439401428</nova:name>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:40:23</nova:creationTime>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:40:24 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:40:24 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:40:24 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:40:24 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:40:24 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:40:24 compute-0 nova_compute[253538]:         <nova:user uuid="66e1f27ea22d4ee08a0a470a8c18135e">tempest-ServerActionsTestOtherB-587178207-project-member</nova:user>
Nov 25 08:40:24 compute-0 nova_compute[253538]:         <nova:project uuid="295fcc758cf24ab4b01eb393f4863e36">tempest-ServerActionsTestOtherB-587178207</nova:project>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:40:24 compute-0 nova_compute[253538]:         <nova:port uuid="f7c4b9b0-3445-468a-a19a-8b19b2d029a2">
Nov 25 08:40:24 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <system>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <entry name="serial">3e75d0af-c514-42c5-aa05-88ae5552f196</entry>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <entry name="uuid">3e75d0af-c514-42c5-aa05-88ae5552f196</entry>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     </system>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   <os>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   </os>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   <features>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   </features>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/3e75d0af-c514-42c5-aa05-88ae5552f196_disk">
Nov 25 08:40:24 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       </source>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:40:24 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config">
Nov 25 08:40:24 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       </source>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:40:24 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:83:e9:db"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <target dev="tapf7c4b9b0-34"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/console.log" append="off"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <video>
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     </video>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:40:24 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:40:24 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:40:24 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:40:24 compute-0 nova_compute[253538]: </domain>
Nov 25 08:40:24 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.745 253542 DEBUG nova.compute.manager [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Preparing to wait for external event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.746 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.746 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.746 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.747 253542 DEBUG nova.virt.libvirt.vif [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:40:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-439401428',display_name='tempest-ServerActionsTestOtherB-server-439401428',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-439401428',id=95,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJg9vWbWDQUJP+5O2Ge5sP4yW+A5RDbOBkV9U0C3hvxoWu1yQZFyI5Vs8mvdnljTrZSXJgG69Yru9lsQdThAcjefMLvUo4eUx6Akjue1XjQsVfgM0pq0/Z3uC1qyMxn0Ew==',key_name='tempest-keypair-1642602877',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-iqkw2vgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestOtherB-587178207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:40:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=3e75d0af-c514-42c5-aa05-88ae5552f196,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.748 253542 DEBUG nova.network.os_vif_util [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.749 253542 DEBUG nova.network.os_vif_util [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.749 253542 DEBUG os_vif [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.750 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.751 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.752 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:40:24 compute-0 systemd[1]: Started libpod-conmon-b2df06ec4126c36b657cfac47f81d77b4170629a2222b8673fed60b67376322c.scope.
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.761 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf7c4b9b0-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.763 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf7c4b9b0-34, col_values=(('external_ids', {'iface-id': 'f7c4b9b0-3445-468a-a19a-8b19b2d029a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:e9:db', 'vm-uuid': '3e75d0af-c514-42c5-aa05-88ae5552f196'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:40:24 compute-0 NetworkManager[48915]: <info>  [1764060024.7661] manager: (tapf7c4b9b0-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.766 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.769 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.773 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.774 253542 INFO os_vif [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34')
Nov 25 08:40:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:40:24 compute-0 podman[343735]: 2025-11-25 08:40:24.820738023 +0000 UTC m=+0.200506405 container init b2df06ec4126c36b657cfac47f81d77b4170629a2222b8673fed60b67376322c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:40:24 compute-0 podman[343735]: 2025-11-25 08:40:24.833601774 +0000 UTC m=+0.213370126 container start b2df06ec4126c36b657cfac47f81d77b4170629a2222b8673fed60b67376322c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mcnulty, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 08:40:24 compute-0 crazy_mcnulty[343754]: 167 167
Nov 25 08:40:24 compute-0 systemd[1]: libpod-b2df06ec4126c36b657cfac47f81d77b4170629a2222b8673fed60b67376322c.scope: Deactivated successfully.
Nov 25 08:40:24 compute-0 podman[343735]: 2025-11-25 08:40:24.847683417 +0000 UTC m=+0.227451889 container attach b2df06ec4126c36b657cfac47f81d77b4170629a2222b8673fed60b67376322c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:40:24 compute-0 podman[343735]: 2025-11-25 08:40:24.848219842 +0000 UTC m=+0.227988244 container died b2df06ec4126c36b657cfac47f81d77b4170629a2222b8673fed60b67376322c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.865 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.867 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.867 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No VIF found with MAC fa:16:3e:83:e9:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.867 253542 INFO nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Using config drive
Nov 25 08:40:24 compute-0 nova_compute[253538]: 2025-11-25 08:40:24.894 253542 DEBUG nova.storage.rbd_utils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ed75f7e78d9eaefaef16b46c9050cf52f1c2fb5037a70ce90ea44b37069087f-merged.mount: Deactivated successfully.
Nov 25 08:40:24 compute-0 podman[343735]: 2025-11-25 08:40:24.957926401 +0000 UTC m=+0.337694753 container remove b2df06ec4126c36b657cfac47f81d77b4170629a2222b8673fed60b67376322c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mcnulty, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:40:24 compute-0 systemd[1]: libpod-conmon-b2df06ec4126c36b657cfac47f81d77b4170629a2222b8673fed60b67376322c.scope: Deactivated successfully.
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.071 253542 INFO nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Deleting instance files /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5_del
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.071 253542 INFO nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Deletion of /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5_del complete
Nov 25 08:40:25 compute-0 podman[343801]: 2025-11-25 08:40:25.159831344 +0000 UTC m=+0.062163775 container create 4f04ec17c2994e82f539ea129a72726a13f6d799cab81954e82f26e307d8633e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 08:40:25 compute-0 systemd[1]: Started libpod-conmon-4f04ec17c2994e82f539ea129a72726a13f6d799cab81954e82f26e307d8633e.scope.
Nov 25 08:40:25 compute-0 podman[343801]: 2025-11-25 08:40:25.138112251 +0000 UTC m=+0.040444672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.234 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.234 253542 INFO nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Creating image(s)
Nov 25 08:40:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c890ea883ead728012281ae9ae3dce8fa6a17106738cd23271ab89a7d08aeae1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c890ea883ead728012281ae9ae3dce8fa6a17106738cd23271ab89a7d08aeae1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c890ea883ead728012281ae9ae3dce8fa6a17106738cd23271ab89a7d08aeae1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c890ea883ead728012281ae9ae3dce8fa6a17106738cd23271ab89a7d08aeae1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.268 253542 DEBUG nova.storage.rbd_utils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] rbd image 067512b3-cc61-478e-b705-71fbd18f9fb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:25 compute-0 podman[343801]: 2025-11-25 08:40:25.28081534 +0000 UTC m=+0.183147841 container init 4f04ec17c2994e82f539ea129a72726a13f6d799cab81954e82f26e307d8633e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bose, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:40:25 compute-0 podman[343801]: 2025-11-25 08:40:25.289039974 +0000 UTC m=+0.191372425 container start 4f04ec17c2994e82f539ea129a72726a13f6d799cab81954e82f26e307d8633e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 08:40:25 compute-0 podman[343801]: 2025-11-25 08:40:25.29582095 +0000 UTC m=+0.198153391 container attach 4f04ec17c2994e82f539ea129a72726a13f6d799cab81954e82f26e307d8633e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bose, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.302 253542 DEBUG nova.storage.rbd_utils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] rbd image 067512b3-cc61-478e-b705-71fbd18f9fb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.329 253542 DEBUG nova.storage.rbd_utils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] rbd image 067512b3-cc61-478e-b705-71fbd18f9fb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.333 253542 DEBUG oslo_concurrency.processutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 292 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 508 KiB/s rd, 4.2 MiB/s wr, 142 op/s
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.418 253542 DEBUG oslo_concurrency.processutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.419 253542 DEBUG oslo_concurrency.lockutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.419 253542 DEBUG oslo_concurrency.lockutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.419 253542 DEBUG oslo_concurrency.lockutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.442 253542 DEBUG nova.storage.rbd_utils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] rbd image 067512b3-cc61-478e-b705-71fbd18f9fb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.445 253542 DEBUG oslo_concurrency.processutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 067512b3-cc61-478e-b705-71fbd18f9fb5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2786570894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.721 253542 DEBUG oslo_concurrency.lockutils [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "0a240e53-cc4c-463e-9601-41d687d64349" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.721 253542 DEBUG oslo_concurrency.lockutils [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "0a240e53-cc4c-463e-9601-41d687d64349" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.722 253542 DEBUG oslo_concurrency.lockutils [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "0a240e53-cc4c-463e-9601-41d687d64349-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.722 253542 DEBUG oslo_concurrency.lockutils [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "0a240e53-cc4c-463e-9601-41d687d64349-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.722 253542 DEBUG oslo_concurrency.lockutils [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "0a240e53-cc4c-463e-9601-41d687d64349-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.723 253542 INFO nova.compute.manager [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Terminating instance
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.724 253542 DEBUG nova.compute.manager [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.728 253542 INFO nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Creating config drive at /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.735 253542 DEBUG oslo_concurrency.processutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp00h8vzyd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:25 compute-0 kernel: tap0061cd13-34 (unregistering): left promiscuous mode
Nov 25 08:40:25 compute-0 NetworkManager[48915]: <info>  [1764060025.8799] device (tap0061cd13-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.885 253542 DEBUG oslo_concurrency.processutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 067512b3-cc61-478e-b705-71fbd18f9fb5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:25 compute-0 ovn_controller[152859]: 2025-11-25T08:40:25Z|00919|binding|INFO|Releasing lport 0061cd13-34e3-4156-a4ba-ff9808dc3607 from this chassis (sb_readonly=0)
Nov 25 08:40:25 compute-0 ovn_controller[152859]: 2025-11-25T08:40:25Z|00920|binding|INFO|Setting lport 0061cd13-34e3-4156-a4ba-ff9808dc3607 down in Southbound
Nov 25 08:40:25 compute-0 ovn_controller[152859]: 2025-11-25T08:40:25Z|00921|binding|INFO|Removing iface tap0061cd13-34 ovn-installed in OVS
Nov 25 08:40:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:25.910 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:a6:7a 10.100.0.13'], port_security=['fa:16:3e:26:a6:7a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0a240e53-cc4c-463e-9601-41d687d64349', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92e26514-5b15-410b-8885-6773bc03c4ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b730f086c4b94185afab5e10fa2e8181', 'neutron:revision_number': '4', 'neutron:security_group_ids': '420b1f20-9bb9-442a-978c-444ed9d0cb04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ae0a3b6-6c0f-4ef3-ae20-c7b5b6ede8ac, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0061cd13-34e3-4156-a4ba-ff9808dc3607) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:40:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:25.911 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0061cd13-34e3-4156-a4ba-ff9808dc3607 in datapath 92e26514-5b15-410b-8885-6773bc03c4ce unbound from our chassis
Nov 25 08:40:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:25.912 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 92e26514-5b15-410b-8885-6773bc03c4ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:40:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:25.914 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[26f87699-5955-43af-8625-d5f15f7584e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:25.915 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce namespace which is not needed anymore
Nov 25 08:40:25 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000050.scope: Deactivated successfully.
Nov 25 08:40:25 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000050.scope: Consumed 18.998s CPU time.
Nov 25 08:40:25 compute-0 systemd-machined[215790]: Machine qemu-98-instance-00000050 terminated.
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.964 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.967 253542 DEBUG oslo_concurrency.processutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp00h8vzyd" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.990 253542 DEBUG nova.storage.rbd_utils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:25 compute-0 nova_compute[253538]: 2025-11-25 08:40:25.997 253542 DEBUG oslo_concurrency.processutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config 3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.043 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:26 compute-0 neutron-haproxy-ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce[333321]: [NOTICE]   (333325) : haproxy version is 2.8.14-c23fe91
Nov 25 08:40:26 compute-0 neutron-haproxy-ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce[333321]: [NOTICE]   (333325) : path to executable is /usr/sbin/haproxy
Nov 25 08:40:26 compute-0 neutron-haproxy-ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce[333321]: [WARNING]  (333325) : Exiting Master process...
Nov 25 08:40:26 compute-0 neutron-haproxy-ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce[333321]: [WARNING]  (333325) : Exiting Master process...
Nov 25 08:40:26 compute-0 neutron-haproxy-ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce[333321]: [ALERT]    (333325) : Current worker (333327) exited with code 143 (Terminated)
Nov 25 08:40:26 compute-0 neutron-haproxy-ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce[333321]: [WARNING]  (333325) : All workers exited. Exiting... (0)
Nov 25 08:40:26 compute-0 systemd[1]: libpod-62e9dc7a84c6bb7e065e0a31f46bcf27ebe3aadfd25a445a6368902d44bb5843.scope: Deactivated successfully.
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.093 253542 DEBUG nova.storage.rbd_utils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] resizing rbd image 067512b3-cc61-478e-b705-71fbd18f9fb5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:40:26 compute-0 podman[343994]: 2025-11-25 08:40:26.096096248 +0000 UTC m=+0.058494325 container died 62e9dc7a84c6bb7e065e0a31f46bcf27ebe3aadfd25a445a6368902d44bb5843 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:40:26 compute-0 quizzical_bose[343817]: {
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:     "0": [
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:         {
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "devices": [
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "/dev/loop3"
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             ],
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "lv_name": "ceph_lv0",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "lv_size": "21470642176",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "name": "ceph_lv0",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "tags": {
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.cluster_name": "ceph",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.crush_device_class": "",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.encrypted": "0",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.osd_id": "0",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.type": "block",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.vdo": "0"
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             },
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "type": "block",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "vg_name": "ceph_vg0"
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:         }
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:     ],
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:     "1": [
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:         {
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "devices": [
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "/dev/loop4"
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             ],
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "lv_name": "ceph_lv1",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "lv_size": "21470642176",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "name": "ceph_lv1",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "tags": {
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.cluster_name": "ceph",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.crush_device_class": "",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.encrypted": "0",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.osd_id": "1",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.type": "block",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.vdo": "0"
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             },
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "type": "block",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "vg_name": "ceph_vg1"
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:         }
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:     ],
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:     "2": [
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:         {
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "devices": [
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "/dev/loop5"
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             ],
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "lv_name": "ceph_lv2",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "lv_size": "21470642176",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "name": "ceph_lv2",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "tags": {
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.cluster_name": "ceph",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.crush_device_class": "",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.encrypted": "0",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.osd_id": "2",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.type": "block",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:                 "ceph.vdo": "0"
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             },
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "type": "block",
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:             "vg_name": "ceph_vg2"
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:         }
Nov 25 08:40:26 compute-0 quizzical_bose[343817]:     ]
Nov 25 08:40:26 compute-0 quizzical_bose[343817]: }
Nov 25 08:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-078e19334d694eacfa1f081f851a05cf2c3a6a66ec63ba5c3c8be55feca51c6a-merged.mount: Deactivated successfully.
Nov 25 08:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-62e9dc7a84c6bb7e065e0a31f46bcf27ebe3aadfd25a445a6368902d44bb5843-userdata-shm.mount: Deactivated successfully.
Nov 25 08:40:26 compute-0 systemd[1]: libpod-4f04ec17c2994e82f539ea129a72726a13f6d799cab81954e82f26e307d8633e.scope: Deactivated successfully.
Nov 25 08:40:26 compute-0 podman[343801]: 2025-11-25 08:40:26.142245036 +0000 UTC m=+1.044577457 container died 4f04ec17c2994e82f539ea129a72726a13f6d799cab81954e82f26e307d8633e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bose, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.185 253542 INFO nova.virt.libvirt.driver [-] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Instance destroyed successfully.
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.185 253542 DEBUG nova.objects.instance [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lazy-loading 'resources' on Instance uuid 0a240e53-cc4c-463e-9601-41d687d64349 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.222 253542 DEBUG nova.virt.libvirt.vif [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:37:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-1218767313',display_name='tempest-₡-1218767313',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1218767313',id=80,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:37:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b730f086c4b94185afab5e10fa2e8181',ramdisk_id='',reservation_id='r-imleqf74',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1426188226
',owner_user_name='tempest-ServersTestJSON-1426188226-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:37:49Z,user_data=None,user_id='fdcb005cc49a4dfa82152f2c0817cc94',uuid=0a240e53-cc4c-463e-9601-41d687d64349,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "address": "fa:16:3e:26:a6:7a", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0061cd13-34", "ovs_interfaceid": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.223 253542 DEBUG nova.network.os_vif_util [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converting VIF {"id": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "address": "fa:16:3e:26:a6:7a", "network": {"id": "92e26514-5b15-410b-8885-6773bc03c4ce", "bridge": "br-int", "label": "tempest-ServersTestJSON-1720893026-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b730f086c4b94185afab5e10fa2e8181", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0061cd13-34", "ovs_interfaceid": "0061cd13-34e3-4156-a4ba-ff9808dc3607", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.224 253542 DEBUG nova.network.os_vif_util [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:a6:7a,bridge_name='br-int',has_traffic_filtering=True,id=0061cd13-34e3-4156-a4ba-ff9808dc3607,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0061cd13-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.224 253542 DEBUG os_vif [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:a6:7a,bridge_name='br-int',has_traffic_filtering=True,id=0061cd13-34e3-4156-a4ba-ff9808dc3607,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0061cd13-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.226 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.227 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0061cd13-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.228 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.249 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.252 253542 INFO os_vif [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:a6:7a,bridge_name='br-int',has_traffic_filtering=True,id=0061cd13-34e3-4156-a4ba-ff9808dc3607,network=Network(92e26514-5b15-410b-8885-6773bc03c4ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0061cd13-34')
Nov 25 08:40:26 compute-0 podman[343994]: 2025-11-25 08:40:26.332146492 +0000 UTC m=+0.294544569 container cleanup 62e9dc7a84c6bb7e065e0a31f46bcf27ebe3aadfd25a445a6368902d44bb5843 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 08:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c890ea883ead728012281ae9ae3dce8fa6a17106738cd23271ab89a7d08aeae1-merged.mount: Deactivated successfully.
Nov 25 08:40:26 compute-0 systemd[1]: libpod-conmon-62e9dc7a84c6bb7e065e0a31f46bcf27ebe3aadfd25a445a6368902d44bb5843.scope: Deactivated successfully.
Nov 25 08:40:26 compute-0 podman[343801]: 2025-11-25 08:40:26.405267754 +0000 UTC m=+1.307600165 container remove 4f04ec17c2994e82f539ea129a72726a13f6d799cab81954e82f26e307d8633e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bose, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:40:26 compute-0 systemd[1]: libpod-conmon-4f04ec17c2994e82f539ea129a72726a13f6d799cab81954e82f26e307d8633e.scope: Deactivated successfully.
Nov 25 08:40:26 compute-0 podman[344109]: 2025-11-25 08:40:26.446686233 +0000 UTC m=+0.081274707 container remove 62e9dc7a84c6bb7e065e0a31f46bcf27ebe3aadfd25a445a6368902d44bb5843 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 08:40:26 compute-0 sudo[343651]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.454 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.455 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Ensure instance console log exists: /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.454 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[05d6dede-6181-4b41-b8d8-68960adc848a]: (4, ('Tue Nov 25 08:40:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce (62e9dc7a84c6bb7e065e0a31f46bcf27ebe3aadfd25a445a6368902d44bb5843)\n62e9dc7a84c6bb7e065e0a31f46bcf27ebe3aadfd25a445a6368902d44bb5843\nTue Nov 25 08:40:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce (62e9dc7a84c6bb7e065e0a31f46bcf27ebe3aadfd25a445a6368902d44bb5843)\n62e9dc7a84c6bb7e065e0a31f46bcf27ebe3aadfd25a445a6368902d44bb5843\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.456 253542 DEBUG oslo_concurrency.lockutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.456 253542 DEBUG oslo_concurrency.lockutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.456 253542 DEBUG oslo_concurrency.lockutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.457 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b05ce6a9-305a-4e68-98b2-8d5b3a0134d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.459 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92e26514-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.459 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:40:26 compute-0 kernel: tap92e26514-50: left promiscuous mode
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.463 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.466 253542 DEBUG oslo_concurrency.processutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config 3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.467 253542 INFO nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Deleting local config drive /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config because it was imported into RBD.
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.473 253542 WARNING nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.485 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.489 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c7faa82b-9cc7-4dff-bb5b-359dfbd3a940]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.504 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[40539309-1c53-4b56-ad2f-61d755549ac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.506 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[234851c9-b8e9-4e61-becf-bfd0586a826e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.511 253542 DEBUG nova.virt.libvirt.host [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.513 253542 DEBUG nova.virt.libvirt.host [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.519 253542 DEBUG nova.virt.libvirt.host [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.520 253542 DEBUG nova.virt.libvirt.host [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.521 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.521 253542 DEBUG nova.virt.hardware [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.521 253542 DEBUG nova.virt.hardware [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.522 253542 DEBUG nova.virt.hardware [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.522 253542 DEBUG nova.virt.hardware [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.522 253542 DEBUG nova.virt.hardware [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.522 253542 DEBUG nova.virt.hardware [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.522 253542 DEBUG nova.virt.hardware [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.523 253542 DEBUG nova.virt.hardware [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.523 253542 DEBUG nova.virt.hardware [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.523 253542 DEBUG nova.virt.hardware [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.523 253542 DEBUG nova.virt.hardware [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.523 253542 DEBUG nova.objects.instance [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 067512b3-cc61-478e-b705-71fbd18f9fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.525 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[058b60be-6ff2-4b27-8639-ab6458dc173a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522357, 'reachable_time': 43184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344168, 'error': None, 'target': 'ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d92e26514\x2d5b15\x2d410b\x2d8885\x2d6773bc03c4ce.mount: Deactivated successfully.
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.528 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-92e26514-5b15-410b-8885-6773bc03c4ce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.528 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b916ac-96a3-4e65-9062-ef80e94d2d8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.539 253542 DEBUG oslo_concurrency.processutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:26 compute-0 sudo[344142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:40:26 compute-0 sudo[344142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:26 compute-0 sudo[344142]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:26 compute-0 kernel: tapf7c4b9b0-34: entered promiscuous mode
Nov 25 08:40:26 compute-0 ovn_controller[152859]: 2025-11-25T08:40:26Z|00922|binding|INFO|Claiming lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 for this chassis.
Nov 25 08:40:26 compute-0 ovn_controller[152859]: 2025-11-25T08:40:26Z|00923|binding|INFO|f7c4b9b0-3445-468a-a19a-8b19b2d029a2: Claiming fa:16:3e:83:e9:db 10.100.0.12
Nov 25 08:40:26 compute-0 systemd-udevd[344175]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:40:26 compute-0 NetworkManager[48915]: <info>  [1764060026.5619] manager: (tapf7c4b9b0-34): new Tun device (/org/freedesktop/NetworkManager/Devices/377)
Nov 25 08:40:26 compute-0 NetworkManager[48915]: <info>  [1764060026.5776] device (tapf7c4b9b0-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:40:26 compute-0 NetworkManager[48915]: <info>  [1764060026.5791] device (tapf7c4b9b0-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:40:26 compute-0 ceph-mon[75015]: pgmap v1722: 321 pgs: 321 active+clean; 292 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 508 KiB/s rd, 4.2 MiB/s wr, 142 op/s
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.581 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.585 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.606 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:e9:db 10.100.0.12'], port_security=['fa:16:3e:83:e9:db 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3e75d0af-c514-42c5-aa05-88ae5552f196', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '295fcc758cf24ab4b01eb393f4863e36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4662b63b-c8aa-4161-b270-71466cebee15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31935ac8-a5f7-4fad-9e0b-ee28fade4ce4, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f7c4b9b0-3445-468a-a19a-8b19b2d029a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.608 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f7c4b9b0-3445-468a-a19a-8b19b2d029a2 in datapath 6e77a51d-2695-4e70-8b9d-c02ec0c62f35 bound to our chassis
Nov 25 08:40:26 compute-0 systemd-machined[215790]: New machine qemu-116-instance-0000005f.
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.609 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e77a51d-2695-4e70-8b9d-c02ec0c62f35
Nov 25 08:40:26 compute-0 systemd[1]: Started Virtual Machine qemu-116-instance-0000005f.
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.622 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[424a5a87-77df-4714-8baa-9ee4713fdad4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.623 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6e77a51d-21 in ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.626 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6e77a51d-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.626 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[14036e2b-be56-4a60-9aa1-19d3eaf716e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.629 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[12a4ba69-3011-4c05-96ef-cee4fda161fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 sudo[344187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:40:26 compute-0 sudo[344187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:26 compute-0 sudo[344187]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.635 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.644 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.645 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[585394d9-e0b2-4f98-8841-eeed2b952d1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 ovn_controller[152859]: 2025-11-25T08:40:26Z|00924|binding|INFO|Setting lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 ovn-installed in OVS
Nov 25 08:40:26 compute-0 ovn_controller[152859]: 2025-11-25T08:40:26Z|00925|binding|INFO|Setting lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 up in Southbound
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.651 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.659 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[51bd3281-e28c-4912-941a-9a6b7e328f26]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 sudo[344218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.698 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[53a322da-c89d-4c4b-b2b0-63b070c4a1bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 sudo[344218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:26 compute-0 sudo[344218]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.706 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fdee962a-e59c-4ca4-843b-bc3a43b574ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 NetworkManager[48915]: <info>  [1764060026.7079] manager: (tap6e77a51d-20): new Veth device (/org/freedesktop/NetworkManager/Devices/378)
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.737 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[db5b7da5-7217-4dcd-b137-6dc4ee77a7b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.741 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a96309bc-a157-430b-943e-698d419be812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.744 253542 DEBUG nova.network.neutron [req-9265260b-10ed-4ea5-b8d5-700f52a35b2f req-4775e6e0-68ab-4821-a1a0-441e15b1fd33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updated VIF entry in instance network info cache for port f7c4b9b0-3445-468a-a19a-8b19b2d029a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.745 253542 DEBUG nova.network.neutron [req-9265260b-10ed-4ea5-b8d5-700f52a35b2f req-4775e6e0-68ab-4821-a1a0-441e15b1fd33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating instance_info_cache with network_info: [{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:40:26 compute-0 nova_compute[253538]: 2025-11-25 08:40:26.760 253542 DEBUG oslo_concurrency.lockutils [req-9265260b-10ed-4ea5-b8d5-700f52a35b2f req-4775e6e0-68ab-4821-a1a0-441e15b1fd33 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:40:26 compute-0 sudo[344273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:40:26 compute-0 sudo[344273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:40:26 compute-0 NetworkManager[48915]: <info>  [1764060026.7740] device (tap6e77a51d-20): carrier: link connected
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.778 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5de53a1f-d510-4d89-aaa5-1ebfdb9ee028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.800 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[427bcb39-6e15-403c-b729-33ad160c16ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e77a51d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:6f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538137, 'reachable_time': 29158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344316, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.824 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6a86ad30-53a7-4c8c-b64c-f1e7b1b893e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb8:6ffd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538137, 'tstamp': 538137}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344317, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.846 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e29e34-ed2f-4abb-a33b-697b73e38ac3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e77a51d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:6f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538137, 'reachable_time': 29158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 344318, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.892 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[238e5797-b602-44ea-85e3-0c9cd2b181a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.976 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae91c52-70ed-4455-ab53-b98afe45344a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.979 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e77a51d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.980 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:40:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:26.981 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e77a51d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:40:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:40:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/876991210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.029 253542 DEBUG oslo_concurrency.processutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:27 compute-0 kernel: tap6e77a51d-20: entered promiscuous mode
Nov 25 08:40:27 compute-0 NetworkManager[48915]: <info>  [1764060027.0333] manager: (tap6e77a51d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:27.038 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e77a51d-20, col_values=(('external_ids', {'iface-id': '66275e2b-0197-461a-9be3-ae2fe1aec502'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:40:27 compute-0 ovn_controller[152859]: 2025-11-25T08:40:27Z|00926|binding|INFO|Releasing lport 66275e2b-0197-461a-9be3-ae2fe1aec502 from this chassis (sb_readonly=0)
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:27.042 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6e77a51d-2695-4e70-8b9d-c02ec0c62f35.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6e77a51d-2695-4e70-8b9d-c02ec0c62f35.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:27.043 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8228ef45-cefa-498f-b8bb-87c1aface19b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:27.044 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-6e77a51d-2695-4e70-8b9d-c02ec0c62f35
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/6e77a51d-2695-4e70-8b9d-c02ec0c62f35.pid.haproxy
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 6e77a51d-2695-4e70-8b9d-c02ec0c62f35
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:40:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:27.045 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'env', 'PROCESS_TAG=haproxy-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6e77a51d-2695-4e70-8b9d-c02ec0c62f35.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.074 253542 DEBUG nova.storage.rbd_utils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] rbd image 067512b3-cc61-478e-b705-71fbd18f9fb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.081 253542 DEBUG oslo_concurrency.processutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.121 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.135 253542 INFO nova.virt.libvirt.driver [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Deleting instance files /var/lib/nova/instances/0a240e53-cc4c-463e-9601-41d687d64349_del
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.137 253542 INFO nova.virt.libvirt.driver [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Deletion of /var/lib/nova/instances/0a240e53-cc4c-463e-9601-41d687d64349_del complete
Nov 25 08:40:27 compute-0 podman[344427]: 2025-11-25 08:40:27.170001494 +0000 UTC m=+0.046115838 container create 344b81ad2d8444d65ffd26f863dd83bfd2f33fac7f58b8d79f1e7b19ca2a5b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_tharp, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.197 253542 INFO nova.compute.manager [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Took 1.47 seconds to destroy the instance on the hypervisor.
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.198 253542 DEBUG oslo.service.loopingcall [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.198 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060027.1966293, 3e75d0af-c514-42c5-aa05-88ae5552f196 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.198 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] VM Started (Lifecycle Event)
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.200 253542 DEBUG nova.compute.manager [-] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.200 253542 DEBUG nova.network.neutron [-] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:40:27 compute-0 systemd[1]: Started libpod-conmon-344b81ad2d8444d65ffd26f863dd83bfd2f33fac7f58b8d79f1e7b19ca2a5b21.scope.
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.219 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.226 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060027.196707, 3e75d0af-c514-42c5-aa05-88ae5552f196 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.226 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] VM Paused (Lifecycle Event)
Nov 25 08:40:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:40:27 compute-0 podman[344427]: 2025-11-25 08:40:27.149193637 +0000 UTC m=+0.025308001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.246 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.253 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:40:27 compute-0 podman[344427]: 2025-11-25 08:40:27.273035902 +0000 UTC m=+0.149150266 container init 344b81ad2d8444d65ffd26f863dd83bfd2f33fac7f58b8d79f1e7b19ca2a5b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 08:40:27 compute-0 podman[344427]: 2025-11-25 08:40:27.282176511 +0000 UTC m=+0.158290865 container start 344b81ad2d8444d65ffd26f863dd83bfd2f33fac7f58b8d79f1e7b19ca2a5b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.285 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.289 253542 DEBUG nova.compute.manager [req-0a6d0ed6-4a42-4035-b384-04748ae52461 req-b5603dc2-a82f-42a2-853f-14b64173a786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Received event network-vif-unplugged-0061cd13-34e3-4156-a4ba-ff9808dc3607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.289 253542 DEBUG oslo_concurrency.lockutils [req-0a6d0ed6-4a42-4035-b384-04748ae52461 req-b5603dc2-a82f-42a2-853f-14b64173a786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0a240e53-cc4c-463e-9601-41d687d64349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.289 253542 DEBUG oslo_concurrency.lockutils [req-0a6d0ed6-4a42-4035-b384-04748ae52461 req-b5603dc2-a82f-42a2-853f-14b64173a786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0a240e53-cc4c-463e-9601-41d687d64349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.290 253542 DEBUG oslo_concurrency.lockutils [req-0a6d0ed6-4a42-4035-b384-04748ae52461 req-b5603dc2-a82f-42a2-853f-14b64173a786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0a240e53-cc4c-463e-9601-41d687d64349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.290 253542 DEBUG nova.compute.manager [req-0a6d0ed6-4a42-4035-b384-04748ae52461 req-b5603dc2-a82f-42a2-853f-14b64173a786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] No waiting events found dispatching network-vif-unplugged-0061cd13-34e3-4156-a4ba-ff9808dc3607 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:40:27 compute-0 goofy_tharp[344444]: 167 167
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.290 253542 DEBUG nova.compute.manager [req-0a6d0ed6-4a42-4035-b384-04748ae52461 req-b5603dc2-a82f-42a2-853f-14b64173a786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Received event network-vif-unplugged-0061cd13-34e3-4156-a4ba-ff9808dc3607 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:40:27 compute-0 systemd[1]: libpod-344b81ad2d8444d65ffd26f863dd83bfd2f33fac7f58b8d79f1e7b19ca2a5b21.scope: Deactivated successfully.
Nov 25 08:40:27 compute-0 podman[344427]: 2025-11-25 08:40:27.297767156 +0000 UTC m=+0.173881520 container attach 344b81ad2d8444d65ffd26f863dd83bfd2f33fac7f58b8d79f1e7b19ca2a5b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_tharp, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:40:27 compute-0 podman[344427]: 2025-11-25 08:40:27.298239688 +0000 UTC m=+0.174354052 container died 344b81ad2d8444d65ffd26f863dd83bfd2f33fac7f58b8d79f1e7b19ca2a5b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_tharp, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:40:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc52a1e1f1b04f494bd0197b0275f7c4ef328e632b0efcc4e5af4f7553d542ae-merged.mount: Deactivated successfully.
Nov 25 08:40:27 compute-0 podman[344427]: 2025-11-25 08:40:27.369031198 +0000 UTC m=+0.245145582 container remove 344b81ad2d8444d65ffd26f863dd83bfd2f33fac7f58b8d79f1e7b19ca2a5b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_tharp, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:40:27 compute-0 systemd[1]: libpod-conmon-344b81ad2d8444d65ffd26f863dd83bfd2f33fac7f58b8d79f1e7b19ca2a5b21.scope: Deactivated successfully.
Nov 25 08:40:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1723: 321 pgs: 321 active+clean; 270 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 4.2 MiB/s wr, 143 op/s
Nov 25 08:40:27 compute-0 podman[344504]: 2025-11-25 08:40:27.480970229 +0000 UTC m=+0.064064677 container create f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 08:40:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:40:27 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/154391109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:40:27 compute-0 podman[344504]: 2025-11-25 08:40:27.440723521 +0000 UTC m=+0.023817959 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:40:27 compute-0 systemd[1]: Started libpod-conmon-f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b.scope.
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.552 253542 DEBUG oslo_concurrency.processutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.557 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:40:27 compute-0 nova_compute[253538]:   <uuid>067512b3-cc61-478e-b705-71fbd18f9fb5</uuid>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   <name>instance-0000005e</name>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerShowV257Test-server-437649416</nova:name>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:40:26</nova:creationTime>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:40:27 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:40:27 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:40:27 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:40:27 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:40:27 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:40:27 compute-0 nova_compute[253538]:         <nova:user uuid="15553157cb064aacaf8dafafa7d9e54c">tempest-ServerShowV257Test-836964466-project-member</nova:user>
Nov 25 08:40:27 compute-0 nova_compute[253538]:         <nova:project uuid="b863647443fe4e7eb44c1b694072ca91">tempest-ServerShowV257Test-836964466</nova:project>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <system>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <entry name="serial">067512b3-cc61-478e-b705-71fbd18f9fb5</entry>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <entry name="uuid">067512b3-cc61-478e-b705-71fbd18f9fb5</entry>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     </system>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   <os>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   </os>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   <features>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   </features>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/067512b3-cc61-478e-b705-71fbd18f9fb5_disk">
Nov 25 08:40:27 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       </source>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:40:27 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/067512b3-cc61-478e-b705-71fbd18f9fb5_disk.config">
Nov 25 08:40:27 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       </source>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:40:27 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5/console.log" append="off"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <video>
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     </video>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:40:27 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:40:27 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:40:27 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:40:27 compute-0 nova_compute[253538]: </domain>
Nov 25 08:40:27 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:40:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6947475d689df6f820a434a2c1f0b94b91a492171023f6878dbc9d586341b108/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.641 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.642 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.642 253542 INFO nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Using config drive
Nov 25 08:40:27 compute-0 podman[344522]: 2025-11-25 08:40:27.556512027 +0000 UTC m=+0.031008416 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.660 253542 DEBUG nova.storage.rbd_utils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] rbd image 067512b3-cc61-478e-b705-71fbd18f9fb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:27 compute-0 podman[344522]: 2025-11-25 08:40:27.665439846 +0000 UTC m=+0.139936135 container create 1075abc6d59e85bb17770daae47dd24f68050e2c6e9286e04b6b89205a9ee99c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.680 253542 DEBUG nova.objects.instance [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 067512b3-cc61-478e-b705-71fbd18f9fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.708 253542 DEBUG nova.objects.instance [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lazy-loading 'keypairs' on Instance uuid 067512b3-cc61-478e-b705-71fbd18f9fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:27 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/876991210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:40:27 compute-0 ceph-mon[75015]: pgmap v1723: 321 pgs: 321 active+clean; 270 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 4.2 MiB/s wr, 143 op/s
Nov 25 08:40:27 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/154391109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:40:27 compute-0 podman[344504]: 2025-11-25 08:40:27.779152864 +0000 UTC m=+0.362247362 container init f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:40:27 compute-0 podman[344504]: 2025-11-25 08:40:27.793048573 +0000 UTC m=+0.376142981 container start f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:40:27 compute-0 neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35[344535]: [NOTICE]   (344562) : New worker (344564) forked
Nov 25 08:40:27 compute-0 neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35[344535]: [NOTICE]   (344562) : Loading success.
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.856 253542 INFO nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Creating config drive at /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5/disk.config
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.863 253542 DEBUG oslo_concurrency.processutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpce3od3cz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:27 compute-0 systemd[1]: Started libpod-conmon-1075abc6d59e85bb17770daae47dd24f68050e2c6e9286e04b6b89205a9ee99c.scope.
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.923 253542 DEBUG nova.network.neutron [-] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:40:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a838177e5679229d6cfc941a273b4e8c991f50d6b762c84e16f4405347db0a33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a838177e5679229d6cfc941a273b4e8c991f50d6b762c84e16f4405347db0a33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a838177e5679229d6cfc941a273b4e8c991f50d6b762c84e16f4405347db0a33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a838177e5679229d6cfc941a273b4e8c991f50d6b762c84e16f4405347db0a33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.948 253542 INFO nova.compute.manager [-] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Took 0.75 seconds to deallocate network for instance.
Nov 25 08:40:27 compute-0 podman[344522]: 2025-11-25 08:40:27.974715894 +0000 UTC m=+0.449212193 container init 1075abc6d59e85bb17770daae47dd24f68050e2c6e9286e04b6b89205a9ee99c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:40:27 compute-0 podman[344522]: 2025-11-25 08:40:27.985347734 +0000 UTC m=+0.459844033 container start 1075abc6d59e85bb17770daae47dd24f68050e2c6e9286e04b6b89205a9ee99c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 08:40:27 compute-0 podman[344522]: 2025-11-25 08:40:27.994876513 +0000 UTC m=+0.469372842 container attach 1075abc6d59e85bb17770daae47dd24f68050e2c6e9286e04b6b89205a9ee99c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_brahmagupta, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.994 253542 DEBUG nova.compute.manager [req-2ca5b66c-0b2a-44d7-bf43-4dafe15baff4 req-6f531214-ca2d-4866-b816-4e03d92487aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Received event network-vif-deleted-0061cd13-34e3-4156-a4ba-ff9808dc3607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:40:27 compute-0 nova_compute[253538]: 2025-11-25 08:40:27.999 253542 DEBUG oslo_concurrency.lockutils [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:28 compute-0 nova_compute[253538]: 2025-11-25 08:40:28.000 253542 DEBUG oslo_concurrency.lockutils [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:28 compute-0 nova_compute[253538]: 2025-11-25 08:40:28.016 253542 DEBUG oslo_concurrency.processutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpce3od3cz" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:28 compute-0 nova_compute[253538]: 2025-11-25 08:40:28.044 253542 DEBUG nova.storage.rbd_utils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] rbd image 067512b3-cc61-478e-b705-71fbd18f9fb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:40:28 compute-0 nova_compute[253538]: 2025-11-25 08:40:28.051 253542 DEBUG oslo_concurrency.processutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5/disk.config 067512b3-cc61-478e-b705-71fbd18f9fb5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:28 compute-0 nova_compute[253538]: 2025-11-25 08:40:28.253 253542 DEBUG oslo_concurrency.processutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5/disk.config 067512b3-cc61-478e-b705-71fbd18f9fb5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:28 compute-0 nova_compute[253538]: 2025-11-25 08:40:28.254 253542 INFO nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Deleting local config drive /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5/disk.config because it was imported into RBD.
Nov 25 08:40:28 compute-0 nova_compute[253538]: 2025-11-25 08:40:28.257 253542 DEBUG oslo_concurrency.processutils [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:28 compute-0 systemd-machined[215790]: New machine qemu-117-instance-0000005e.
Nov 25 08:40:28 compute-0 systemd[1]: Started Virtual Machine qemu-117-instance-0000005e.
Nov 25 08:40:28 compute-0 nova_compute[253538]: 2025-11-25 08:40:28.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:40:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:40:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1790263635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:40:28 compute-0 nova_compute[253538]: 2025-11-25 08:40:28.741 253542 DEBUG oslo_concurrency.processutils [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:28 compute-0 nova_compute[253538]: 2025-11-25 08:40:28.748 253542 DEBUG nova.compute.provider_tree [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:40:28 compute-0 nova_compute[253538]: 2025-11-25 08:40:28.768 253542 DEBUG nova.scheduler.client.report [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:40:28 compute-0 nova_compute[253538]: 2025-11-25 08:40:28.793 253542 DEBUG oslo_concurrency.lockutils [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:28 compute-0 nova_compute[253538]: 2025-11-25 08:40:28.818 253542 INFO nova.scheduler.client.report [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Deleted allocations for instance 0a240e53-cc4c-463e-9601-41d687d64349
Nov 25 08:40:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1790263635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:40:28 compute-0 nova_compute[253538]: 2025-11-25 08:40:28.869 253542 DEBUG oslo_concurrency.lockutils [None req-bb0a7a09-06d7-48f7-b643-73b70ca5441b fdcb005cc49a4dfa82152f2c0817cc94 b730f086c4b94185afab5e10fa2e8181 - - default default] Lock "0a240e53-cc4c-463e-9601-41d687d64349" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:40:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2239074947' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:40:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:40:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2239074947' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]: {
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:         "osd_id": 1,
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:         "type": "bluestore"
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:     },
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:         "osd_id": 2,
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:         "type": "bluestore"
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:     },
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:         "osd_id": 0,
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:         "type": "bluestore"
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]:     }
Nov 25 08:40:29 compute-0 eloquent_brahmagupta[344577]: }
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.124 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 067512b3-cc61-478e-b705-71fbd18f9fb5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.125 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060029.124274, 067512b3-cc61-478e-b705-71fbd18f9fb5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.125 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] VM Resumed (Lifecycle Event)
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.129 253542 DEBUG nova.compute.manager [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.129 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.134 253542 INFO nova.virt.libvirt.driver [-] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Instance spawned successfully.
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.135 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:40:29 compute-0 systemd[1]: libpod-1075abc6d59e85bb17770daae47dd24f68050e2c6e9286e04b6b89205a9ee99c.scope: Deactivated successfully.
Nov 25 08:40:29 compute-0 systemd[1]: libpod-1075abc6d59e85bb17770daae47dd24f68050e2c6e9286e04b6b89205a9ee99c.scope: Consumed 1.046s CPU time.
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.166 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.173 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.173 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.174 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.175 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.175 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.176 253542 DEBUG nova.virt.libvirt.driver [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.183 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:40:29 compute-0 podman[344727]: 2025-11-25 08:40:29.196005446 +0000 UTC m=+0.033434812 container died 1075abc6d59e85bb17770daae47dd24f68050e2c6e9286e04b6b89205a9ee99c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.207 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.207 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060029.128917, 067512b3-cc61-478e-b705-71fbd18f9fb5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.207 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] VM Started (Lifecycle Event)
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.234 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.239 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.243 253542 DEBUG nova.compute.manager [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.257 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.298 253542 DEBUG oslo_concurrency.lockutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.299 253542 DEBUG oslo_concurrency.lockutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.299 253542 DEBUG nova.objects.instance [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.350 253542 DEBUG oslo_concurrency.lockutils [None req-5c674b97-34b7-404c-abea-4cf68fd860cc 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.392 253542 DEBUG nova.compute.manager [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Received event network-vif-plugged-0061cd13-34e3-4156-a4ba-ff9808dc3607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.393 253542 DEBUG oslo_concurrency.lockutils [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0a240e53-cc4c-463e-9601-41d687d64349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.393 253542 DEBUG oslo_concurrency.lockutils [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0a240e53-cc4c-463e-9601-41d687d64349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.394 253542 DEBUG oslo_concurrency.lockutils [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0a240e53-cc4c-463e-9601-41d687d64349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.394 253542 DEBUG nova.compute.manager [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] No waiting events found dispatching network-vif-plugged-0061cd13-34e3-4156-a4ba-ff9808dc3607 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.395 253542 WARNING nova.compute.manager [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Received unexpected event network-vif-plugged-0061cd13-34e3-4156-a4ba-ff9808dc3607 for instance with vm_state deleted and task_state None.
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.395 253542 DEBUG nova.compute.manager [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.396 253542 DEBUG oslo_concurrency.lockutils [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.396 253542 DEBUG oslo_concurrency.lockutils [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.396 253542 DEBUG oslo_concurrency.lockutils [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.397 253542 DEBUG nova.compute.manager [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Processing event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.397 253542 DEBUG nova.compute.manager [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.398 253542 DEBUG oslo_concurrency.lockutils [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.398 253542 DEBUG oslo_concurrency.lockutils [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.399 253542 DEBUG oslo_concurrency.lockutils [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.399 253542 DEBUG nova.compute.manager [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] No waiting events found dispatching network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.399 253542 WARNING nova.compute.manager [req-1f09a38d-0fd3-4583-b9e1-47df87264f69 req-7ccb5359-6c0c-4909-a6e8-cc3a70f7af11 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received unexpected event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 for instance with vm_state building and task_state spawning.
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.400 253542 DEBUG nova.compute.manager [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:40:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 246 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 4.7 MiB/s wr, 156 op/s
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.408 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060029.4085019, 3e75d0af-c514-42c5-aa05-88ae5552f196 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.409 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] VM Resumed (Lifecycle Event)
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.411 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.417 253542 INFO nova.virt.libvirt.driver [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance spawned successfully.
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.418 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.451 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.457 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.458 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.458 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.459 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.460 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.460 253542 DEBUG nova.virt.libvirt.driver [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.467 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.489 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:40:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a838177e5679229d6cfc941a273b4e8c991f50d6b762c84e16f4405347db0a33-merged.mount: Deactivated successfully.
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.527 253542 INFO nova.compute.manager [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Took 9.52 seconds to spawn the instance on the hypervisor.
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.527 253542 DEBUG nova.compute.manager [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.595 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.595 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.596 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.596 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.596 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.665 253542 INFO nova.compute.manager [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Took 10.55 seconds to build instance.
Nov 25 08:40:29 compute-0 nova_compute[253538]: 2025-11-25 08:40:29.702 253542 DEBUG oslo_concurrency.lockutils [None req-54caea47-3710-4570-bddb-994861272622 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:29 compute-0 podman[344727]: 2025-11-25 08:40:29.944634997 +0000 UTC m=+0.782064333 container remove 1075abc6d59e85bb17770daae47dd24f68050e2c6e9286e04b6b89205a9ee99c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_brahmagupta, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 08:40:29 compute-0 systemd[1]: libpod-conmon-1075abc6d59e85bb17770daae47dd24f68050e2c6e9286e04b6b89205a9ee99c.scope: Deactivated successfully.
Nov 25 08:40:29 compute-0 sudo[344273]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.027 253542 DEBUG oslo_concurrency.lockutils [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Acquiring lock "067512b3-cc61-478e-b705-71fbd18f9fb5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.028 253542 DEBUG oslo_concurrency.lockutils [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "067512b3-cc61-478e-b705-71fbd18f9fb5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.028 253542 DEBUG oslo_concurrency.lockutils [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Acquiring lock "067512b3-cc61-478e-b705-71fbd18f9fb5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.028 253542 DEBUG oslo_concurrency.lockutils [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "067512b3-cc61-478e-b705-71fbd18f9fb5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.028 253542 DEBUG oslo_concurrency.lockutils [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "067512b3-cc61-478e-b705-71fbd18f9fb5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.029 253542 INFO nova.compute.manager [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Terminating instance
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.030 253542 DEBUG oslo_concurrency.lockutils [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Acquiring lock "refresh_cache-067512b3-cc61-478e-b705-71fbd18f9fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.030 253542 DEBUG oslo_concurrency.lockutils [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Acquired lock "refresh_cache-067512b3-cc61-478e-b705-71fbd18f9fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.030 253542 DEBUG nova.network.neutron [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:40:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:40:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3672516618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.078 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2239074947' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:40:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2239074947' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:40:30 compute-0 ceph-mon[75015]: pgmap v1724: 321 pgs: 321 active+clean; 246 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 4.7 MiB/s wr, 156 op/s
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.172 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.172 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.177 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.178 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.280 253542 DEBUG nova.network.neutron [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:40:30 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:40:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.361 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.362 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3777MB free_disk=59.90663528442383GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.363 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.363 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:30 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:40:30 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 78cf010a-41bf-4f05-96be-30b0fec805d5 does not exist
Nov 25 08:40:30 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 21dd587b-d08b-471e-9aba-9f4cf1b9900a does not exist
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.416 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 067512b3-cc61-478e-b705-71fbd18f9fb5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.417 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 3e75d0af-c514-42c5-aa05-88ae5552f196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.417 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.417 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:40:30 compute-0 sudo[344766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:40:30 compute-0 sudo[344766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:30 compute-0 sudo[344766]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.468 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:30 compute-0 sudo[344791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.526 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060015.5248504, 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.527 253542 INFO nova.compute.manager [-] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] VM Stopped (Lifecycle Event)
Nov 25 08:40:30 compute-0 sudo[344791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:40:30 compute-0 sudo[344791]: pam_unix(sudo:session): session closed for user root
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.546 253542 DEBUG nova.compute.manager [None req-188be2b2-5c28-4a21-a282-9ca851867fc2 - - - - - -] [instance: 4d9b26b3-d7f6-44d6-8e83-24e9adb5d994] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.578 253542 DEBUG nova.network.neutron [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.589 253542 DEBUG oslo_concurrency.lockutils [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Releasing lock "refresh_cache-067512b3-cc61-478e-b705-71fbd18f9fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.589 253542 DEBUG nova.compute.manager [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:40:30 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Nov 25 08:40:30 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005e.scope: Consumed 2.067s CPU time.
Nov 25 08:40:30 compute-0 systemd-machined[215790]: Machine qemu-117-instance-0000005e terminated.
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.811 253542 INFO nova.virt.libvirt.driver [-] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Instance destroyed successfully.
Nov 25 08:40:30 compute-0 nova_compute[253538]: 2025-11-25 08:40:30.812 253542 DEBUG nova.objects.instance [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lazy-loading 'resources' on Instance uuid 067512b3-cc61-478e-b705-71fbd18f9fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:40:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1480843919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:40:31 compute-0 nova_compute[253538]: 2025-11-25 08:40:31.005 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:31 compute-0 nova_compute[253538]: 2025-11-25 08:40:31.012 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:40:31 compute-0 nova_compute[253538]: 2025-11-25 08:40:31.022 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:40:31 compute-0 nova_compute[253538]: 2025-11-25 08:40:31.046 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:40:31 compute-0 nova_compute[253538]: 2025-11-25 08:40:31.046 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:31 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3672516618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:40:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:40:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:40:31 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1480843919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:40:31 compute-0 nova_compute[253538]: 2025-11-25 08:40:31.230 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1725: 321 pgs: 321 active+clean; 211 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 429 KiB/s rd, 5.7 MiB/s wr, 200 op/s
Nov 25 08:40:31 compute-0 nova_compute[253538]: 2025-11-25 08:40:31.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:40:31 compute-0 nova_compute[253538]: 2025-11-25 08:40:31.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 08:40:31 compute-0 nova_compute[253538]: 2025-11-25 08:40:31.566 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 08:40:31 compute-0 nova_compute[253538]: 2025-11-25 08:40:31.646 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:40:31 compute-0 nova_compute[253538]: 2025-11-25 08:40:31.959 253542 INFO nova.virt.libvirt.driver [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Deleting instance files /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5_del
Nov 25 08:40:31 compute-0 nova_compute[253538]: 2025-11-25 08:40:31.961 253542 INFO nova.virt.libvirt.driver [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Deletion of /var/lib/nova/instances/067512b3-cc61-478e-b705-71fbd18f9fb5_del complete
Nov 25 08:40:31 compute-0 ovn_controller[152859]: 2025-11-25T08:40:31Z|00927|binding|INFO|Releasing lport 66275e2b-0197-461a-9be3-ae2fe1aec502 from this chassis (sb_readonly=0)
Nov 25 08:40:31 compute-0 nova_compute[253538]: 2025-11-25 08:40:31.971 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:31 compute-0 NetworkManager[48915]: <info>  [1764060031.9721] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Nov 25 08:40:31 compute-0 NetworkManager[48915]: <info>  [1764060031.9735] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.036 253542 INFO nova.compute.manager [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Took 1.45 seconds to destroy the instance on the hypervisor.
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.037 253542 DEBUG oslo.service.loopingcall [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.039 253542 DEBUG nova.compute.manager [-] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.039 253542 DEBUG nova.network.neutron [-] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:40:32 compute-0 ovn_controller[152859]: 2025-11-25T08:40:32Z|00928|binding|INFO|Releasing lport 66275e2b-0197-461a-9be3-ae2fe1aec502 from this chassis (sb_readonly=0)
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.049 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.058 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.081 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:32 compute-0 ceph-mon[75015]: pgmap v1725: 321 pgs: 321 active+clean; 211 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 429 KiB/s rd, 5.7 MiB/s wr, 200 op/s
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.390 253542 DEBUG nova.network.neutron [-] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.404 253542 DEBUG nova.network.neutron [-] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:40:32 compute-0 ovn_controller[152859]: 2025-11-25T08:40:32Z|00929|binding|INFO|Releasing lport 66275e2b-0197-461a-9be3-ae2fe1aec502 from this chassis (sb_readonly=0)
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.416 253542 INFO nova.compute.manager [-] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Took 0.38 seconds to deallocate network for instance.
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.458 253542 DEBUG nova.compute.manager [req-6092e6e9-01fe-40b9-8079-47efa3c0a19e req-74f491a4-f712-4fa8-909c-474c67fb222d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-changed-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.458 253542 DEBUG nova.compute.manager [req-6092e6e9-01fe-40b9-8079-47efa3c0a19e req-74f491a4-f712-4fa8-909c-474c67fb222d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Refreshing instance network info cache due to event network-changed-f7c4b9b0-3445-468a-a19a-8b19b2d029a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.459 253542 DEBUG oslo_concurrency.lockutils [req-6092e6e9-01fe-40b9-8079-47efa3c0a19e req-74f491a4-f712-4fa8-909c-474c67fb222d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.459 253542 DEBUG oslo_concurrency.lockutils [req-6092e6e9-01fe-40b9-8079-47efa3c0a19e req-74f491a4-f712-4fa8-909c-474c67fb222d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.460 253542 DEBUG nova.network.neutron [req-6092e6e9-01fe-40b9-8079-47efa3c0a19e req-74f491a4-f712-4fa8-909c-474c67fb222d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Refreshing network info cache for port f7c4b9b0-3445-468a-a19a-8b19b2d029a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.463 253542 DEBUG oslo_concurrency.lockutils [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.464 253542 DEBUG oslo_concurrency.lockutils [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.474 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.526 253542 DEBUG oslo_concurrency.processutils [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:40:32 compute-0 nova_compute[253538]: 2025-11-25 08:40:32.570 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:40:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:40:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2171661140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:40:33 compute-0 nova_compute[253538]: 2025-11-25 08:40:33.024 253542 DEBUG oslo_concurrency.processutils [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:40:33 compute-0 nova_compute[253538]: 2025-11-25 08:40:33.032 253542 DEBUG nova.compute.provider_tree [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:40:33 compute-0 nova_compute[253538]: 2025-11-25 08:40:33.047 253542 DEBUG nova.scheduler.client.report [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:40:33 compute-0 nova_compute[253538]: 2025-11-25 08:40:33.072 253542 DEBUG oslo_concurrency.lockutils [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:33 compute-0 nova_compute[253538]: 2025-11-25 08:40:33.103 253542 INFO nova.scheduler.client.report [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Deleted allocations for instance 067512b3-cc61-478e-b705-71fbd18f9fb5
Nov 25 08:40:33 compute-0 nova_compute[253538]: 2025-11-25 08:40:33.172 253542 DEBUG oslo_concurrency.lockutils [None req-be3e154c-999e-4052-9529-081d980049aa 15553157cb064aacaf8dafafa7d9e54c b863647443fe4e7eb44c1b694072ca91 - - default default] Lock "067512b3-cc61-478e-b705-71fbd18f9fb5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2171661140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:40:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 156 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.0 MiB/s wr, 264 op/s
Nov 25 08:40:33 compute-0 nova_compute[253538]: 2025-11-25 08:40:33.965 253542 DEBUG nova.network.neutron [req-6092e6e9-01fe-40b9-8079-47efa3c0a19e req-74f491a4-f712-4fa8-909c-474c67fb222d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updated VIF entry in instance network info cache for port f7c4b9b0-3445-468a-a19a-8b19b2d029a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:40:33 compute-0 nova_compute[253538]: 2025-11-25 08:40:33.966 253542 DEBUG nova.network.neutron [req-6092e6e9-01fe-40b9-8079-47efa3c0a19e req-74f491a4-f712-4fa8-909c-474c67fb222d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating instance_info_cache with network_info: [{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:40:33 compute-0 nova_compute[253538]: 2025-11-25 08:40:33.996 253542 DEBUG oslo_concurrency.lockutils [req-6092e6e9-01fe-40b9-8079-47efa3c0a19e req-74f491a4-f712-4fa8-909c-474c67fb222d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:40:34 compute-0 ceph-mon[75015]: pgmap v1726: 321 pgs: 321 active+clean; 156 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.0 MiB/s wr, 264 op/s
Nov 25 08:40:34 compute-0 nova_compute[253538]: 2025-11-25 08:40:34.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:40:34 compute-0 nova_compute[253538]: 2025-11-25 08:40:34.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 08:40:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 321 active+clean; 134 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.9 MiB/s wr, 293 op/s
Nov 25 08:40:35 compute-0 nova_compute[253538]: 2025-11-25 08:40:35.574 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:40:36 compute-0 nova_compute[253538]: 2025-11-25 08:40:36.233 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:36 compute-0 ceph-mon[75015]: pgmap v1727: 321 pgs: 321 active+clean; 134 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.9 MiB/s wr, 293 op/s
Nov 25 08:40:36 compute-0 nova_compute[253538]: 2025-11-25 08:40:36.648 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:40:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 134 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 247 op/s
Nov 25 08:40:38 compute-0 ceph-mon[75015]: pgmap v1728: 321 pgs: 321 active+clean; 134 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 247 op/s
Nov 25 08:40:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1729: 321 pgs: 321 active+clean; 134 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.6 MiB/s wr, 228 op/s
Nov 25 08:40:39 compute-0 nova_compute[253538]: 2025-11-25 08:40:39.766 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:40 compute-0 ceph-mon[75015]: pgmap v1729: 321 pgs: 321 active+clean; 134 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.6 MiB/s wr, 228 op/s
Nov 25 08:40:41 compute-0 nova_compute[253538]: 2025-11-25 08:40:41.046 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060026.0178242, 0a240e53-cc4c-463e-9601-41d687d64349 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:40:41 compute-0 nova_compute[253538]: 2025-11-25 08:40:41.046 253542 INFO nova.compute.manager [-] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] VM Stopped (Lifecycle Event)
Nov 25 08:40:41 compute-0 nova_compute[253538]: 2025-11-25 08:40:41.064 253542 DEBUG nova.compute.manager [None req-551fafd0-89e8-483a-b628-6bd10e16ab74 - - - - - -] [instance: 0a240e53-cc4c-463e-9601-41d687d64349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:41.065 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:40:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:41.066 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:40:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:40:41.067 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:40:41 compute-0 nova_compute[253538]: 2025-11-25 08:40:41.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 134 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.0 MiB/s wr, 213 op/s
Nov 25 08:40:41 compute-0 nova_compute[253538]: 2025-11-25 08:40:41.651 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:40:41 compute-0 ovn_controller[152859]: 2025-11-25T08:40:41Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:83:e9:db 10.100.0.12
Nov 25 08:40:41 compute-0 ovn_controller[152859]: 2025-11-25T08:40:41Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:83:e9:db 10.100.0.12
Nov 25 08:40:42 compute-0 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 08:40:42 compute-0 ceph-mon[75015]: pgmap v1730: 321 pgs: 321 active+clean; 134 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.0 MiB/s wr, 213 op/s
Nov 25 08:40:42 compute-0 nova_compute[253538]: 2025-11-25 08:40:42.689 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 148 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 195 op/s
Nov 25 08:40:43 compute-0 nova_compute[253538]: 2025-11-25 08:40:43.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:40:43 compute-0 ceph-mon[75015]: pgmap v1731: 321 pgs: 321 active+clean; 148 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 195 op/s
Nov 25 08:40:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 164 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 156 op/s
Nov 25 08:40:45 compute-0 nova_compute[253538]: 2025-11-25 08:40:45.810 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060030.8089674, 067512b3-cc61-478e-b705-71fbd18f9fb5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:40:45 compute-0 nova_compute[253538]: 2025-11-25 08:40:45.811 253542 INFO nova.compute.manager [-] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] VM Stopped (Lifecycle Event)
Nov 25 08:40:45 compute-0 podman[344883]: 2025-11-25 08:40:45.865753193 +0000 UTC m=+0.094891356 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 08:40:45 compute-0 nova_compute[253538]: 2025-11-25 08:40:45.893 253542 DEBUG nova.compute.manager [None req-db186658-284a-4bdf-a88e-3e153f521365 - - - - - -] [instance: 067512b3-cc61-478e-b705-71fbd18f9fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:46 compute-0 nova_compute[253538]: 2025-11-25 08:40:46.239 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:46 compute-0 ceph-mon[75015]: pgmap v1732: 321 pgs: 321 active+clean; 164 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 156 op/s
Nov 25 08:40:46 compute-0 nova_compute[253538]: 2025-11-25 08:40:46.654 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:40:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Nov 25 08:40:48 compute-0 nova_compute[253538]: 2025-11-25 08:40:48.239 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:48 compute-0 ceph-mon[75015]: pgmap v1733: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Nov 25 08:40:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 08:40:50 compute-0 ceph-mon[75015]: pgmap v1734: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 08:40:51 compute-0 nova_compute[253538]: 2025-11-25 08:40:51.242 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 167 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 08:40:51 compute-0 podman[344904]: 2025-11-25 08:40:51.567056763 +0000 UTC m=+0.102815673 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:40:51 compute-0 nova_compute[253538]: 2025-11-25 08:40:51.659 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:40:52 compute-0 nova_compute[253538]: 2025-11-25 08:40:52.509 253542 DEBUG nova.compute.manager [None req-ee848d9b-1629-4b4c-beb2-5549ed273d11 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:40:52 compute-0 ceph-mon[75015]: pgmap v1735: 321 pgs: 321 active+clean; 167 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 08:40:52 compute-0 nova_compute[253538]: 2025-11-25 08:40:52.552 253542 INFO nova.compute.manager [None req-ee848d9b-1629-4b4c-beb2-5549ed273d11 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] instance snapshotting
Nov 25 08:40:52 compute-0 nova_compute[253538]: 2025-11-25 08:40:52.553 253542 DEBUG nova.objects.instance [None req-ee848d9b-1629-4b4c-beb2-5549ed273d11 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'flavor' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:40:52 compute-0 nova_compute[253538]: 2025-11-25 08:40:52.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:53 compute-0 nova_compute[253538]: 2025-11-25 08:40:53.112 253542 INFO nova.virt.libvirt.driver [None req-ee848d9b-1629-4b4c-beb2-5549ed273d11 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Beginning live snapshot process
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:40:53
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.rgw.root', 'images', 'vms', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 167 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:40:53 compute-0 nova_compute[253538]: 2025-11-25 08:40:53.441 253542 DEBUG nova.virt.libvirt.imagebackend [None req-ee848d9b-1629-4b4c-beb2-5549ed273d11 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:40:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:40:53 compute-0 podman[344958]: 2025-11-25 08:40:53.893803909 +0000 UTC m=+0.135172575 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 25 08:40:54 compute-0 nova_compute[253538]: 2025-11-25 08:40:54.077 253542 DEBUG nova.storage.rbd_utils [None req-ee848d9b-1629-4b4c-beb2-5549ed273d11 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] creating snapshot(dfecb7e134474753a653a259e32834d7) on rbd image(3e75d0af-c514-42c5-aa05-88ae5552f196_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:40:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Nov 25 08:40:54 compute-0 ceph-mon[75015]: pgmap v1736: 321 pgs: 321 active+clean; 167 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 08:40:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Nov 25 08:40:54 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Nov 25 08:40:54 compute-0 nova_compute[253538]: 2025-11-25 08:40:54.623 253542 DEBUG nova.storage.rbd_utils [None req-ee848d9b-1629-4b4c-beb2-5549ed273d11 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] cloning vms/3e75d0af-c514-42c5-aa05-88ae5552f196_disk@dfecb7e134474753a653a259e32834d7 to images/89ede02e-6d56-403f-a661-2e56029c7b26 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:40:54 compute-0 nova_compute[253538]: 2025-11-25 08:40:54.759 253542 DEBUG nova.storage.rbd_utils [None req-ee848d9b-1629-4b4c-beb2-5549ed273d11 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] flattening images/89ede02e-6d56-403f-a661-2e56029c7b26 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:40:55 compute-0 nova_compute[253538]: 2025-11-25 08:40:55.236 253542 DEBUG nova.storage.rbd_utils [None req-ee848d9b-1629-4b4c-beb2-5549ed273d11 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] removing snapshot(dfecb7e134474753a653a259e32834d7) on rbd image(3e75d0af-c514-42c5-aa05-88ae5552f196_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:40:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 167 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 21 KiB/s wr, 9 op/s
Nov 25 08:40:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Nov 25 08:40:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Nov 25 08:40:55 compute-0 ceph-mon[75015]: osdmap e187: 3 total, 3 up, 3 in
Nov 25 08:40:55 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Nov 25 08:40:55 compute-0 nova_compute[253538]: 2025-11-25 08:40:55.634 253542 DEBUG nova.storage.rbd_utils [None req-ee848d9b-1629-4b4c-beb2-5549ed273d11 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] creating snapshot(snap) on rbd image(89ede02e-6d56-403f-a661-2e56029c7b26) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:40:56 compute-0 nova_compute[253538]: 2025-11-25 08:40:56.245 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Nov 25 08:40:56 compute-0 ceph-mon[75015]: pgmap v1738: 321 pgs: 321 active+clean; 167 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 21 KiB/s wr, 9 op/s
Nov 25 08:40:56 compute-0 ceph-mon[75015]: osdmap e188: 3 total, 3 up, 3 in
Nov 25 08:40:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Nov 25 08:40:56 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Nov 25 08:40:56 compute-0 nova_compute[253538]: 2025-11-25 08:40:56.663 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:40:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:40:56 compute-0 nova_compute[253538]: 2025-11-25 08:40:56.872 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:40:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 202 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 25 op/s
Nov 25 08:40:57 compute-0 ceph-mon[75015]: osdmap e189: 3 total, 3 up, 3 in
Nov 25 08:40:57 compute-0 ceph-mon[75015]: pgmap v1741: 321 pgs: 321 active+clean; 202 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 25 op/s
Nov 25 08:40:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 222 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.8 MiB/s wr, 98 op/s
Nov 25 08:41:00 compute-0 ceph-mon[75015]: pgmap v1742: 321 pgs: 321 active+clean; 222 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.8 MiB/s wr, 98 op/s
Nov 25 08:41:00 compute-0 nova_compute[253538]: 2025-11-25 08:41:00.562 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Acquiring lock "f08c05a1-b18c-46bc-bf8a-d3694a045584" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:00 compute-0 nova_compute[253538]: 2025-11-25 08:41:00.563 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lock "f08c05a1-b18c-46bc-bf8a-d3694a045584" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:00 compute-0 nova_compute[253538]: 2025-11-25 08:41:00.582 253542 DEBUG nova.compute.manager [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:41:00 compute-0 nova_compute[253538]: 2025-11-25 08:41:00.673 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:00 compute-0 nova_compute[253538]: 2025-11-25 08:41:00.674 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:00 compute-0 nova_compute[253538]: 2025-11-25 08:41:00.685 253542 DEBUG nova.virt.hardware [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:41:00 compute-0 nova_compute[253538]: 2025-11-25 08:41:00.686 253542 INFO nova.compute.claims [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:41:00 compute-0 nova_compute[253538]: 2025-11-25 08:41:00.880 253542 DEBUG oslo_concurrency.processutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.248 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:41:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2639095552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.369 253542 DEBUG oslo_concurrency.processutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.375 253542 DEBUG nova.compute.provider_tree [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.388 253542 DEBUG nova.scheduler.client.report [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.416 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.416 253542 DEBUG nova.compute.manager [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:41:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 246 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 6.8 MiB/s wr, 127 op/s
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.464 253542 INFO nova.virt.libvirt.driver [None req-ee848d9b-1629-4b4c-beb2-5549ed273d11 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Snapshot image upload complete
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.464 253542 INFO nova.compute.manager [None req-ee848d9b-1629-4b4c-beb2-5549ed273d11 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Took 8.89 seconds to snapshot the instance on the hypervisor.
Nov 25 08:41:01 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2639095552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.488 253542 DEBUG nova.compute.manager [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.489 253542 DEBUG nova.network.neutron [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.517 253542 INFO nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.551 253542 DEBUG nova.compute.manager [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.657 253542 DEBUG nova.compute.manager [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.659 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.660 253542 INFO nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Creating image(s)
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.694 253542 DEBUG nova.storage.rbd_utils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] rbd image f08c05a1-b18c-46bc-bf8a-d3694a045584_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.728 253542 DEBUG nova.storage.rbd_utils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] rbd image f08c05a1-b18c-46bc-bf8a-d3694a045584_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.765 253542 DEBUG nova.storage.rbd_utils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] rbd image f08c05a1-b18c-46bc-bf8a-d3694a045584_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.770 253542 DEBUG oslo_concurrency.processutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:41:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Nov 25 08:41:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Nov 25 08:41:01 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.820 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.875 253542 DEBUG oslo_concurrency.processutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.876 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.877 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.877 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.912 253542 DEBUG nova.storage.rbd_utils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] rbd image f08c05a1-b18c-46bc-bf8a-d3694a045584_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:41:01 compute-0 nova_compute[253538]: 2025-11-25 08:41:01.918 253542 DEBUG oslo_concurrency.processutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc f08c05a1-b18c-46bc-bf8a-d3694a045584_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:02 compute-0 nova_compute[253538]: 2025-11-25 08:41:02.098 253542 DEBUG nova.compute.manager [None req-ee848d9b-1629-4b4c-beb2-5549ed273d11 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 25 08:41:02 compute-0 nova_compute[253538]: 2025-11-25 08:41:02.183 253542 DEBUG nova.policy [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1c82d462fb7a4f14b4d14246ebb45df5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1de8a67238e04cb69478fbbed61e53e5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:41:02 compute-0 nova_compute[253538]: 2025-11-25 08:41:02.303 253542 DEBUG oslo_concurrency.processutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc f08c05a1-b18c-46bc-bf8a-d3694a045584_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:02 compute-0 nova_compute[253538]: 2025-11-25 08:41:02.398 253542 DEBUG nova.storage.rbd_utils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] resizing rbd image f08c05a1-b18c-46bc-bf8a-d3694a045584_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:41:02 compute-0 ceph-mon[75015]: pgmap v1743: 321 pgs: 321 active+clean; 246 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 6.8 MiB/s wr, 127 op/s
Nov 25 08:41:02 compute-0 ceph-mon[75015]: osdmap e190: 3 total, 3 up, 3 in
Nov 25 08:41:02 compute-0 nova_compute[253538]: 2025-11-25 08:41:02.529 253542 DEBUG nova.objects.instance [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lazy-loading 'migration_context' on Instance uuid f08c05a1-b18c-46bc-bf8a-d3694a045584 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:41:02 compute-0 nova_compute[253538]: 2025-11-25 08:41:02.548 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:41:02 compute-0 nova_compute[253538]: 2025-11-25 08:41:02.549 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Ensure instance console log exists: /var/lib/nova/instances/f08c05a1-b18c-46bc-bf8a-d3694a045584/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:41:02 compute-0 nova_compute[253538]: 2025-11-25 08:41:02.550 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:02 compute-0 nova_compute[253538]: 2025-11-25 08:41:02.550 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:02 compute-0 nova_compute[253538]: 2025-11-25 08:41:02.551 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:03 compute-0 nova_compute[253538]: 2025-11-25 08:41:03.156 253542 DEBUG nova.compute.manager [None req-562485f7-7aee-4308-b18b-0eb3b324ff33 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:41:03 compute-0 nova_compute[253538]: 2025-11-25 08:41:03.219 253542 INFO nova.compute.manager [None req-562485f7-7aee-4308-b18b-0eb3b324ff33 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] instance snapshotting
Nov 25 08:41:03 compute-0 nova_compute[253538]: 2025-11-25 08:41:03.220 253542 DEBUG nova.objects.instance [None req-562485f7-7aee-4308-b18b-0eb3b324ff33 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'flavor' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:41:03 compute-0 nova_compute[253538]: 2025-11-25 08:41:03.302 253542 DEBUG nova.network.neutron [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Successfully created port: 66e4f077-afa1-4407-b56e-30666cf03b08 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 261 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 6.4 MiB/s wr, 143 op/s
Nov 25 08:41:03 compute-0 nova_compute[253538]: 2025-11-25 08:41:03.829 253542 INFO nova.virt.libvirt.driver [None req-562485f7-7aee-4308-b18b-0eb3b324ff33 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Beginning live snapshot process
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008172677769095526 of space, bias 1.0, pg target 0.2451803330728658 quantized to 32 (current 32)
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001769157082275184 of space, bias 1.0, pg target 0.5307471246825552 quantized to 32 (current 32)
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:41:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:41:04 compute-0 nova_compute[253538]: 2025-11-25 08:41:04.046 253542 DEBUG nova.virt.libvirt.imagebackend [None req-562485f7-7aee-4308-b18b-0eb3b324ff33 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:41:04 compute-0 nova_compute[253538]: 2025-11-25 08:41:04.407 253542 DEBUG nova.storage.rbd_utils [None req-562485f7-7aee-4308-b18b-0eb3b324ff33 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] creating snapshot(634abf9cdefd47d4845281a1a504b976) on rbd image(3e75d0af-c514-42c5-aa05-88ae5552f196_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:41:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Nov 25 08:41:04 compute-0 ceph-mon[75015]: pgmap v1745: 321 pgs: 321 active+clean; 261 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 6.4 MiB/s wr, 143 op/s
Nov 25 08:41:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Nov 25 08:41:04 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Nov 25 08:41:04 compute-0 nova_compute[253538]: 2025-11-25 08:41:04.583 253542 DEBUG nova.storage.rbd_utils [None req-562485f7-7aee-4308-b18b-0eb3b324ff33 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] cloning vms/3e75d0af-c514-42c5-aa05-88ae5552f196_disk@634abf9cdefd47d4845281a1a504b976 to images/4d38a3f9-022a-4a35-b78d-9dc9dc9bb789 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:41:04 compute-0 nova_compute[253538]: 2025-11-25 08:41:04.728 253542 DEBUG nova.storage.rbd_utils [None req-562485f7-7aee-4308-b18b-0eb3b324ff33 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] flattening images/4d38a3f9-022a-4a35-b78d-9dc9dc9bb789 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:41:04 compute-0 nova_compute[253538]: 2025-11-25 08:41:04.840 253542 DEBUG nova.network.neutron [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Successfully updated port: 66e4f077-afa1-4407-b56e-30666cf03b08 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:41:04 compute-0 nova_compute[253538]: 2025-11-25 08:41:04.858 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Acquiring lock "refresh_cache-f08c05a1-b18c-46bc-bf8a-d3694a045584" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:41:04 compute-0 nova_compute[253538]: 2025-11-25 08:41:04.859 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Acquired lock "refresh_cache-f08c05a1-b18c-46bc-bf8a-d3694a045584" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:41:04 compute-0 nova_compute[253538]: 2025-11-25 08:41:04.860 253542 DEBUG nova.network.neutron [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:41:04 compute-0 nova_compute[253538]: 2025-11-25 08:41:04.988 253542 DEBUG nova.compute.manager [req-53659b73-fdb4-4585-bddd-f3ba976498a1 req-73c80afb-2d22-46cb-a5b2-ec9df4297a4e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Received event network-changed-66e4f077-afa1-4407-b56e-30666cf03b08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:41:04 compute-0 nova_compute[253538]: 2025-11-25 08:41:04.989 253542 DEBUG nova.compute.manager [req-53659b73-fdb4-4585-bddd-f3ba976498a1 req-73c80afb-2d22-46cb-a5b2-ec9df4297a4e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Refreshing instance network info cache due to event network-changed-66e4f077-afa1-4407-b56e-30666cf03b08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:41:04 compute-0 nova_compute[253538]: 2025-11-25 08:41:04.989 253542 DEBUG oslo_concurrency.lockutils [req-53659b73-fdb4-4585-bddd-f3ba976498a1 req-73c80afb-2d22-46cb-a5b2-ec9df4297a4e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f08c05a1-b18c-46bc-bf8a-d3694a045584" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:41:05 compute-0 nova_compute[253538]: 2025-11-25 08:41:05.148 253542 DEBUG nova.network.neutron [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:41:05 compute-0 nova_compute[253538]: 2025-11-25 08:41:05.256 253542 DEBUG nova.storage.rbd_utils [None req-562485f7-7aee-4308-b18b-0eb3b324ff33 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] removing snapshot(634abf9cdefd47d4845281a1a504b976) on rbd image(3e75d0af-c514-42c5-aa05-88ae5552f196_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:41:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 284 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.6 MiB/s wr, 159 op/s
Nov 25 08:41:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Nov 25 08:41:05 compute-0 ceph-mon[75015]: osdmap e191: 3 total, 3 up, 3 in
Nov 25 08:41:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Nov 25 08:41:05 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Nov 25 08:41:05 compute-0 nova_compute[253538]: 2025-11-25 08:41:05.569 253542 DEBUG nova.storage.rbd_utils [None req-562485f7-7aee-4308-b18b-0eb3b324ff33 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] creating snapshot(snap) on rbd image(4d38a3f9-022a-4a35-b78d-9dc9dc9bb789) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.252 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Nov 25 08:41:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Nov 25 08:41:06 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Nov 25 08:41:06 compute-0 ceph-mon[75015]: pgmap v1747: 321 pgs: 321 active+clean; 284 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.6 MiB/s wr, 159 op/s
Nov 25 08:41:06 compute-0 ceph-mon[75015]: osdmap e192: 3 total, 3 up, 3 in
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.664 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.742 253542 DEBUG nova.network.neutron [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Updating instance_info_cache with network_info: [{"id": "66e4f077-afa1-4407-b56e-30666cf03b08", "address": "fa:16:3e:9b:47:c6", "network": {"id": "e67f2484-3d92-4e21-9a85-57791d9c9703", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1445815200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1de8a67238e04cb69478fbbed61e53e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66e4f077-af", "ovs_interfaceid": "66e4f077-afa1-4407-b56e-30666cf03b08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.774 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Releasing lock "refresh_cache-f08c05a1-b18c-46bc-bf8a-d3694a045584" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.775 253542 DEBUG nova.compute.manager [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Instance network_info: |[{"id": "66e4f077-afa1-4407-b56e-30666cf03b08", "address": "fa:16:3e:9b:47:c6", "network": {"id": "e67f2484-3d92-4e21-9a85-57791d9c9703", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1445815200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1de8a67238e04cb69478fbbed61e53e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66e4f077-af", "ovs_interfaceid": "66e4f077-afa1-4407-b56e-30666cf03b08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.775 253542 DEBUG oslo_concurrency.lockutils [req-53659b73-fdb4-4585-bddd-f3ba976498a1 req-73c80afb-2d22-46cb-a5b2-ec9df4297a4e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f08c05a1-b18c-46bc-bf8a-d3694a045584" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.776 253542 DEBUG nova.network.neutron [req-53659b73-fdb4-4585-bddd-f3ba976498a1 req-73c80afb-2d22-46cb-a5b2-ec9df4297a4e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Refreshing network info cache for port 66e4f077-afa1-4407-b56e-30666cf03b08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:41:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.782 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Start _get_guest_xml network_info=[{"id": "66e4f077-afa1-4407-b56e-30666cf03b08", "address": "fa:16:3e:9b:47:c6", "network": {"id": "e67f2484-3d92-4e21-9a85-57791d9c9703", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1445815200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1de8a67238e04cb69478fbbed61e53e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66e4f077-af", "ovs_interfaceid": "66e4f077-afa1-4407-b56e-30666cf03b08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.790 253542 WARNING nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.805 253542 DEBUG nova.virt.libvirt.host [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.806 253542 DEBUG nova.virt.libvirt.host [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.810 253542 DEBUG nova.virt.libvirt.host [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.810 253542 DEBUG nova.virt.libvirt.host [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.811 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.811 253542 DEBUG nova.virt.hardware [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.811 253542 DEBUG nova.virt.hardware [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.812 253542 DEBUG nova.virt.hardware [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.812 253542 DEBUG nova.virt.hardware [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.812 253542 DEBUG nova.virt.hardware [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.812 253542 DEBUG nova.virt.hardware [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.812 253542 DEBUG nova.virt.hardware [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.813 253542 DEBUG nova.virt.hardware [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.813 253542 DEBUG nova.virt.hardware [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.813 253542 DEBUG nova.virt.hardware [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.813 253542 DEBUG nova.virt.hardware [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:41:06 compute-0 nova_compute[253538]: 2025-11-25 08:41:06.816 253542 DEBUG oslo_concurrency.processutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:41:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3751795491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.288 253542 DEBUG oslo_concurrency.processutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.313 253542 DEBUG nova.storage.rbd_utils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] rbd image f08c05a1-b18c-46bc-bf8a-d3694a045584_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.318 253542 DEBUG oslo_concurrency.processutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 313 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.6 MiB/s wr, 123 op/s
Nov 25 08:41:07 compute-0 ceph-mon[75015]: osdmap e193: 3 total, 3 up, 3 in
Nov 25 08:41:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3751795491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:41:07 compute-0 ceph-mon[75015]: pgmap v1750: 321 pgs: 321 active+clean; 313 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.6 MiB/s wr, 123 op/s
Nov 25 08:41:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:41:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2987468204' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.796 253542 DEBUG oslo_concurrency.processutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.799 253542 DEBUG nova.virt.libvirt.vif [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:40:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1641527469',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1641527469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1641527469',id=96,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1de8a67238e04cb69478fbbed61e53e5',ramdisk_id='',reservation_id='r-5z07ymu7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-2082284079',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-2082284079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:41:01Z,user_data=None,user_id='1c82d462fb7a4f14b4d14246ebb45df5',uuid=f08c05a1-b18c-46bc-bf8a-d3694a045584,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66e4f077-afa1-4407-b56e-30666cf03b08", "address": "fa:16:3e:9b:47:c6", "network": {"id": "e67f2484-3d92-4e21-9a85-57791d9c9703", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1445815200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1de8a67238e04cb69478fbbed61e53e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66e4f077-af", "ovs_interfaceid": "66e4f077-afa1-4407-b56e-30666cf03b08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.802 253542 DEBUG nova.network.os_vif_util [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Converting VIF {"id": "66e4f077-afa1-4407-b56e-30666cf03b08", "address": "fa:16:3e:9b:47:c6", "network": {"id": "e67f2484-3d92-4e21-9a85-57791d9c9703", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1445815200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1de8a67238e04cb69478fbbed61e53e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66e4f077-af", "ovs_interfaceid": "66e4f077-afa1-4407-b56e-30666cf03b08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.804 253542 DEBUG nova.network.os_vif_util [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:47:c6,bridge_name='br-int',has_traffic_filtering=True,id=66e4f077-afa1-4407-b56e-30666cf03b08,network=Network(e67f2484-3d92-4e21-9a85-57791d9c9703),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66e4f077-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.806 253542 DEBUG nova.objects.instance [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid f08c05a1-b18c-46bc-bf8a-d3694a045584 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.824 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:41:07 compute-0 nova_compute[253538]:   <uuid>f08c05a1-b18c-46bc-bf8a-d3694a045584</uuid>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   <name>instance-00000060</name>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-1641527469</nova:name>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:41:06</nova:creationTime>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:41:07 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:41:07 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:41:07 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:41:07 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:41:07 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:41:07 compute-0 nova_compute[253538]:         <nova:user uuid="1c82d462fb7a4f14b4d14246ebb45df5">tempest-ServersNegativeTestMultiTenantJSON-2082284079-project-member</nova:user>
Nov 25 08:41:07 compute-0 nova_compute[253538]:         <nova:project uuid="1de8a67238e04cb69478fbbed61e53e5">tempest-ServersNegativeTestMultiTenantJSON-2082284079</nova:project>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:41:07 compute-0 nova_compute[253538]:         <nova:port uuid="66e4f077-afa1-4407-b56e-30666cf03b08">
Nov 25 08:41:07 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <system>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <entry name="serial">f08c05a1-b18c-46bc-bf8a-d3694a045584</entry>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <entry name="uuid">f08c05a1-b18c-46bc-bf8a-d3694a045584</entry>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     </system>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   <os>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   </os>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   <features>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   </features>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/f08c05a1-b18c-46bc-bf8a-d3694a045584_disk">
Nov 25 08:41:07 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       </source>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:41:07 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/f08c05a1-b18c-46bc-bf8a-d3694a045584_disk.config">
Nov 25 08:41:07 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       </source>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:41:07 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:9b:47:c6"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <target dev="tap66e4f077-af"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/f08c05a1-b18c-46bc-bf8a-d3694a045584/console.log" append="off"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <video>
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     </video>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:41:07 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:41:07 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:41:07 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:41:07 compute-0 nova_compute[253538]: </domain>
Nov 25 08:41:07 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.825 253542 DEBUG nova.compute.manager [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Preparing to wait for external event network-vif-plugged-66e4f077-afa1-4407-b56e-30666cf03b08 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.825 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Acquiring lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.826 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.827 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.828 253542 DEBUG nova.virt.libvirt.vif [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:40:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1641527469',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1641527469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1641527469',id=96,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1de8a67238e04cb69478fbbed61e53e5',ramdisk_id='',reservation_id='r-5z07ymu7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-2082284079',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-2082284079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:41:01Z,user_data=None,user_id='1c82d462fb7a4f14b4d14246ebb45df5',uuid=f08c05a1-b18c-46bc-bf8a-d3694a045584,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66e4f077-afa1-4407-b56e-30666cf03b08", "address": "fa:16:3e:9b:47:c6", "network": {"id": "e67f2484-3d92-4e21-9a85-57791d9c9703", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1445815200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1de8a67238e04cb69478fbbed61e53e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66e4f077-af", "ovs_interfaceid": "66e4f077-afa1-4407-b56e-30666cf03b08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.829 253542 DEBUG nova.network.os_vif_util [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Converting VIF {"id": "66e4f077-afa1-4407-b56e-30666cf03b08", "address": "fa:16:3e:9b:47:c6", "network": {"id": "e67f2484-3d92-4e21-9a85-57791d9c9703", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1445815200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1de8a67238e04cb69478fbbed61e53e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66e4f077-af", "ovs_interfaceid": "66e4f077-afa1-4407-b56e-30666cf03b08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.830 253542 DEBUG nova.network.os_vif_util [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:47:c6,bridge_name='br-int',has_traffic_filtering=True,id=66e4f077-afa1-4407-b56e-30666cf03b08,network=Network(e67f2484-3d92-4e21-9a85-57791d9c9703),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66e4f077-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.831 253542 DEBUG os_vif [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:47:c6,bridge_name='br-int',has_traffic_filtering=True,id=66e4f077-afa1-4407-b56e-30666cf03b08,network=Network(e67f2484-3d92-4e21-9a85-57791d9c9703),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66e4f077-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.832 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.833 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.834 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.841 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.842 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66e4f077-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.843 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap66e4f077-af, col_values=(('external_ids', {'iface-id': '66e4f077-afa1-4407-b56e-30666cf03b08', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:47:c6', 'vm-uuid': 'f08c05a1-b18c-46bc-bf8a-d3694a045584'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:07 compute-0 NetworkManager[48915]: <info>  [1764060067.8474] manager: (tap66e4f077-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.853 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.855 253542 INFO os_vif [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:47:c6,bridge_name='br-int',has_traffic_filtering=True,id=66e4f077-afa1-4407-b56e-30666cf03b08,network=Network(e67f2484-3d92-4e21-9a85-57791d9c9703),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66e4f077-af')
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.905 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.905 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.906 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] No VIF found with MAC fa:16:3e:9b:47:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.907 253542 INFO nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Using config drive
Nov 25 08:41:07 compute-0 nova_compute[253538]: 2025-11-25 08:41:07.944 253542 DEBUG nova.storage.rbd_utils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] rbd image f08c05a1-b18c-46bc-bf8a-d3694a045584_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:41:08 compute-0 nova_compute[253538]: 2025-11-25 08:41:08.454 253542 INFO nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Creating config drive at /var/lib/nova/instances/f08c05a1-b18c-46bc-bf8a-d3694a045584/disk.config
Nov 25 08:41:08 compute-0 nova_compute[253538]: 2025-11-25 08:41:08.465 253542 DEBUG oslo_concurrency.processutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f08c05a1-b18c-46bc-bf8a-d3694a045584/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw9be8k79 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:08 compute-0 nova_compute[253538]: 2025-11-25 08:41:08.633 253542 DEBUG oslo_concurrency.processutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f08c05a1-b18c-46bc-bf8a-d3694a045584/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw9be8k79" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2987468204' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:41:08 compute-0 nova_compute[253538]: 2025-11-25 08:41:08.666 253542 DEBUG nova.storage.rbd_utils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] rbd image f08c05a1-b18c-46bc-bf8a-d3694a045584_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:41:08 compute-0 nova_compute[253538]: 2025-11-25 08:41:08.671 253542 DEBUG oslo_concurrency.processutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f08c05a1-b18c-46bc-bf8a-d3694a045584/disk.config f08c05a1-b18c-46bc-bf8a-d3694a045584_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:08 compute-0 nova_compute[253538]: 2025-11-25 08:41:08.846 253542 DEBUG oslo_concurrency.processutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f08c05a1-b18c-46bc-bf8a-d3694a045584/disk.config f08c05a1-b18c-46bc-bf8a-d3694a045584_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:08 compute-0 nova_compute[253538]: 2025-11-25 08:41:08.847 253542 INFO nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Deleting local config drive /var/lib/nova/instances/f08c05a1-b18c-46bc-bf8a-d3694a045584/disk.config because it was imported into RBD.
Nov 25 08:41:08 compute-0 kernel: tap66e4f077-af: entered promiscuous mode
Nov 25 08:41:08 compute-0 NetworkManager[48915]: <info>  [1764060068.9102] manager: (tap66e4f077-af): new Tun device (/org/freedesktop/NetworkManager/Devices/383)
Nov 25 08:41:08 compute-0 ovn_controller[152859]: 2025-11-25T08:41:08Z|00930|binding|INFO|Claiming lport 66e4f077-afa1-4407-b56e-30666cf03b08 for this chassis.
Nov 25 08:41:08 compute-0 ovn_controller[152859]: 2025-11-25T08:41:08Z|00931|binding|INFO|66e4f077-afa1-4407-b56e-30666cf03b08: Claiming fa:16:3e:9b:47:c6 10.100.0.10
Nov 25 08:41:08 compute-0 nova_compute[253538]: 2025-11-25 08:41:08.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:08.920 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:47:c6 10.100.0.10'], port_security=['fa:16:3e:9b:47:c6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f08c05a1-b18c-46bc-bf8a-d3694a045584', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e67f2484-3d92-4e21-9a85-57791d9c9703', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1de8a67238e04cb69478fbbed61e53e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ce71fcf3-43ae-4e62-aec3-a140426e0408', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee65f8d9-100d-464e-b6bb-a758b5cb4808, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=66e4f077-afa1-4407-b56e-30666cf03b08) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:41:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:08.922 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 66e4f077-afa1-4407-b56e-30666cf03b08 in datapath e67f2484-3d92-4e21-9a85-57791d9c9703 bound to our chassis
Nov 25 08:41:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:08.924 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e67f2484-3d92-4e21-9a85-57791d9c9703
Nov 25 08:41:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:08.946 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aebbcdce-47b5-436a-bdb0-8c2c47203cd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:08.947 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape67f2484-31 in ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:41:08 compute-0 ovn_controller[152859]: 2025-11-25T08:41:08Z|00932|binding|INFO|Setting lport 66e4f077-afa1-4407-b56e-30666cf03b08 ovn-installed in OVS
Nov 25 08:41:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:08.950 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape67f2484-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:41:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:08.950 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[87e02623-f4e7-4a4d-b306-0a524c92fd3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:08 compute-0 ovn_controller[152859]: 2025-11-25T08:41:08Z|00933|binding|INFO|Setting lport 66e4f077-afa1-4407-b56e-30666cf03b08 up in Southbound
Nov 25 08:41:08 compute-0 nova_compute[253538]: 2025-11-25 08:41:08.952 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:08.952 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8adb0cd3-3431-4e31-aef8-b3135171a0bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:08 compute-0 nova_compute[253538]: 2025-11-25 08:41:08.958 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:08 compute-0 systemd-machined[215790]: New machine qemu-118-instance-00000060.
Nov 25 08:41:08 compute-0 systemd-udevd[345559]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:41:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:08.975 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7a1776-c587-4c58-93c8-5dca6d0a27b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:08 compute-0 systemd[1]: Started Virtual Machine qemu-118-instance-00000060.
Nov 25 08:41:08 compute-0 NetworkManager[48915]: <info>  [1764060068.9959] device (tap66e4f077-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:41:08 compute-0 NetworkManager[48915]: <info>  [1764060068.9978] device (tap66e4f077-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:08.999 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8a0852d4-0d13-4101-b747-a3a66628b04a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.047 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fc48b1db-83f9-48c3-a46f-30ef4eca1b81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:09 compute-0 NetworkManager[48915]: <info>  [1764060069.0561] manager: (tape67f2484-30): new Veth device (/org/freedesktop/NetworkManager/Devices/384)
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.055 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6aaedc0-354e-4d22-90d3-b5d4e4803ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:09 compute-0 systemd-udevd[345562]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.098 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6abebdd3-333b-4401-9e48-d78df20d3144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.103 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb11ad1-dccf-448e-9195-4bc17ed57085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:09 compute-0 NetworkManager[48915]: <info>  [1764060069.1355] device (tape67f2484-30): carrier: link connected
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.143 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8d98e39e-a7e7-4542-b5d0-84b812f644d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.163 253542 INFO nova.virt.libvirt.driver [None req-562485f7-7aee-4308-b18b-0eb3b324ff33 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Snapshot image upload complete
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.164 253542 INFO nova.compute.manager [None req-562485f7-7aee-4308-b18b-0eb3b324ff33 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Took 5.92 seconds to snapshot the instance on the hypervisor.
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.171 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0fca37a4-002d-457e-80d4-6f9c864fefe2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape67f2484-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:a1:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542373, 'reachable_time': 17897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345590, 'error': None, 'target': 'ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.174 253542 DEBUG nova.network.neutron [req-53659b73-fdb4-4585-bddd-f3ba976498a1 req-73c80afb-2d22-46cb-a5b2-ec9df4297a4e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Updated VIF entry in instance network info cache for port 66e4f077-afa1-4407-b56e-30666cf03b08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.175 253542 DEBUG nova.network.neutron [req-53659b73-fdb4-4585-bddd-f3ba976498a1 req-73c80afb-2d22-46cb-a5b2-ec9df4297a4e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Updating instance_info_cache with network_info: [{"id": "66e4f077-afa1-4407-b56e-30666cf03b08", "address": "fa:16:3e:9b:47:c6", "network": {"id": "e67f2484-3d92-4e21-9a85-57791d9c9703", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1445815200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1de8a67238e04cb69478fbbed61e53e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66e4f077-af", "ovs_interfaceid": "66e4f077-afa1-4407-b56e-30666cf03b08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.193 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a9137637-b672-41c6-b340-04058e52d436]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:a135'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 542373, 'tstamp': 542373}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345591, 'error': None, 'target': 'ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.197 253542 DEBUG oslo_concurrency.lockutils [req-53659b73-fdb4-4585-bddd-f3ba976498a1 req-73c80afb-2d22-46cb-a5b2-ec9df4297a4e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f08c05a1-b18c-46bc-bf8a-d3694a045584" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.219 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b969cc-e5c7-4c3e-a522-3c0ada9d0c63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape67f2484-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:a1:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542373, 'reachable_time': 17897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 345592, 'error': None, 'target': 'ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.272 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eafa2526-8ebd-4ce9-8800-b0155a36adbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.359 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[96dc32a5-e149-46b2-b39b-222edcc3c5c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.361 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape67f2484-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.362 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.363 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape67f2484-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:41:09 compute-0 kernel: tape67f2484-30: entered promiscuous mode
Nov 25 08:41:09 compute-0 NetworkManager[48915]: <info>  [1764060069.3667] manager: (tape67f2484-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.366 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.371 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape67f2484-30, col_values=(('external_ids', {'iface-id': '3085c0de-de0c-47a8-99d3-854810eb112c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:41:09 compute-0 ovn_controller[152859]: 2025-11-25T08:41:09Z|00934|binding|INFO|Releasing lport 3085c0de-de0c-47a8-99d3-854810eb112c from this chassis (sb_readonly=0)
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.376 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e67f2484-3d92-4e21-9a85-57791d9c9703.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e67f2484-3d92-4e21-9a85-57791d9c9703.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.377 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[159e3018-b2d0-49e5-88e6-fe8bc84fd8cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.378 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-e67f2484-3d92-4e21-9a85-57791d9c9703
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/e67f2484-3d92-4e21-9a85-57791d9c9703.pid.haproxy
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID e67f2484-3d92-4e21-9a85-57791d9c9703
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:41:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:09.379 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703', 'env', 'PROCESS_TAG=haproxy-e67f2484-3d92-4e21-9a85-57791d9c9703', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e67f2484-3d92-4e21-9a85-57791d9c9703.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.389 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 336 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.4 MiB/s wr, 86 op/s
Nov 25 08:41:09 compute-0 ceph-mon[75015]: pgmap v1751: 321 pgs: 321 active+clean; 336 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.4 MiB/s wr, 86 op/s
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.653 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060069.652165, f08c05a1-b18c-46bc-bf8a-d3694a045584 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.653 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] VM Started (Lifecycle Event)
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.657 253542 DEBUG nova.compute.manager [req-ba2a8c86-2509-4fd4-89d8-d79c30315370 req-9d8110bf-9516-44e4-ad39-8095178a3a5a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Received event network-vif-plugged-66e4f077-afa1-4407-b56e-30666cf03b08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.657 253542 DEBUG oslo_concurrency.lockutils [req-ba2a8c86-2509-4fd4-89d8-d79c30315370 req-9d8110bf-9516-44e4-ad39-8095178a3a5a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.658 253542 DEBUG oslo_concurrency.lockutils [req-ba2a8c86-2509-4fd4-89d8-d79c30315370 req-9d8110bf-9516-44e4-ad39-8095178a3a5a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.658 253542 DEBUG oslo_concurrency.lockutils [req-ba2a8c86-2509-4fd4-89d8-d79c30315370 req-9d8110bf-9516-44e4-ad39-8095178a3a5a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.658 253542 DEBUG nova.compute.manager [req-ba2a8c86-2509-4fd4-89d8-d79c30315370 req-9d8110bf-9516-44e4-ad39-8095178a3a5a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Processing event network-vif-plugged-66e4f077-afa1-4407-b56e-30666cf03b08 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.659 253542 DEBUG nova.compute.manager [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.664 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.669 253542 INFO nova.virt.libvirt.driver [-] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Instance spawned successfully.
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.670 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.692 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.697 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.716 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.717 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.718 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.719 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.720 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.721 253542 DEBUG nova.virt.libvirt.driver [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.728 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.729 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060069.6527948, f08c05a1-b18c-46bc-bf8a-d3694a045584 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.730 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] VM Paused (Lifecycle Event)
Nov 25 08:41:09 compute-0 podman[345666]: 2025-11-25 08:41:09.768395466 +0000 UTC m=+0.052627585 container create 151ec4521b4dea4a2577ee9597145451bd2c546e981085e5f9fb8af759def46e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.798 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.804 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060069.6625054, f08c05a1-b18c-46bc-bf8a-d3694a045584 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.804 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] VM Resumed (Lifecycle Event)
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.813 253542 DEBUG nova.compute.manager [None req-562485f7-7aee-4308-b18b-0eb3b324ff33 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 25 08:41:09 compute-0 systemd[1]: Started libpod-conmon-151ec4521b4dea4a2577ee9597145451bd2c546e981085e5f9fb8af759def46e.scope.
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.830 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:41:09 compute-0 nova_compute[253538]: 2025-11-25 08:41:09.834 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:41:09 compute-0 podman[345666]: 2025-11-25 08:41:09.743168078 +0000 UTC m=+0.027400217 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:41:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a3b24ada937f8d5ec6ec4e4e8343e0e4dd51282b9b15f52bf7fb00c17a5538/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:41:09 compute-0 podman[345666]: 2025-11-25 08:41:09.883246176 +0000 UTC m=+0.167478305 container init 151ec4521b4dea4a2577ee9597145451bd2c546e981085e5f9fb8af759def46e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:41:09 compute-0 podman[345666]: 2025-11-25 08:41:09.893970238 +0000 UTC m=+0.178202357 container start 151ec4521b4dea4a2577ee9597145451bd2c546e981085e5f9fb8af759def46e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:41:09 compute-0 neutron-haproxy-ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703[345682]: [NOTICE]   (345686) : New worker (345688) forked
Nov 25 08:41:09 compute-0 neutron-haproxy-ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703[345682]: [NOTICE]   (345686) : Loading success.
Nov 25 08:41:10 compute-0 nova_compute[253538]: 2025-11-25 08:41:10.376 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:41:10 compute-0 nova_compute[253538]: 2025-11-25 08:41:10.527 253542 INFO nova.compute.manager [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Took 8.87 seconds to spawn the instance on the hypervisor.
Nov 25 08:41:10 compute-0 nova_compute[253538]: 2025-11-25 08:41:10.528 253542 DEBUG nova.compute.manager [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:41:10 compute-0 nova_compute[253538]: 2025-11-25 08:41:10.662 253542 INFO nova.compute.manager [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Took 10.03 seconds to build instance.
Nov 25 08:41:10 compute-0 nova_compute[253538]: 2025-11-25 08:41:10.686 253542 DEBUG oslo_concurrency.lockutils [None req-6eb04798-e6f6-486f-9ba6-7884f34177ed 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lock "f08c05a1-b18c-46bc-bf8a-d3694a045584" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 372 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 7.5 MiB/s wr, 130 op/s
Nov 25 08:41:11 compute-0 nova_compute[253538]: 2025-11-25 08:41:11.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:41:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Nov 25 08:41:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Nov 25 08:41:11 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Nov 25 08:41:11 compute-0 nova_compute[253538]: 2025-11-25 08:41:11.981 253542 DEBUG nova.compute.manager [req-00e8e530-40e4-4d87-b5c3-4fed95e011e4 req-f9f54d58-60d2-4593-8311-2dc443e9cf23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Received event network-vif-plugged-66e4f077-afa1-4407-b56e-30666cf03b08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:41:11 compute-0 nova_compute[253538]: 2025-11-25 08:41:11.982 253542 DEBUG oslo_concurrency.lockutils [req-00e8e530-40e4-4d87-b5c3-4fed95e011e4 req-f9f54d58-60d2-4593-8311-2dc443e9cf23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:11 compute-0 nova_compute[253538]: 2025-11-25 08:41:11.983 253542 DEBUG oslo_concurrency.lockutils [req-00e8e530-40e4-4d87-b5c3-4fed95e011e4 req-f9f54d58-60d2-4593-8311-2dc443e9cf23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:11 compute-0 nova_compute[253538]: 2025-11-25 08:41:11.983 253542 DEBUG oslo_concurrency.lockutils [req-00e8e530-40e4-4d87-b5c3-4fed95e011e4 req-f9f54d58-60d2-4593-8311-2dc443e9cf23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:11 compute-0 nova_compute[253538]: 2025-11-25 08:41:11.984 253542 DEBUG nova.compute.manager [req-00e8e530-40e4-4d87-b5c3-4fed95e011e4 req-f9f54d58-60d2-4593-8311-2dc443e9cf23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] No waiting events found dispatching network-vif-plugged-66e4f077-afa1-4407-b56e-30666cf03b08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:41:11 compute-0 nova_compute[253538]: 2025-11-25 08:41:11.984 253542 WARNING nova.compute.manager [req-00e8e530-40e4-4d87-b5c3-4fed95e011e4 req-f9f54d58-60d2-4593-8311-2dc443e9cf23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Received unexpected event network-vif-plugged-66e4f077-afa1-4407-b56e-30666cf03b08 for instance with vm_state active and task_state None.
Nov 25 08:41:12 compute-0 nova_compute[253538]: 2025-11-25 08:41:12.299 253542 DEBUG nova.compute.manager [None req-c985478f-aa51-48a9-b0fc-f8033fc3d4f0 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:41:12 compute-0 nova_compute[253538]: 2025-11-25 08:41:12.353 253542 INFO nova.compute.manager [None req-c985478f-aa51-48a9-b0fc-f8033fc3d4f0 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] instance snapshotting
Nov 25 08:41:12 compute-0 nova_compute[253538]: 2025-11-25 08:41:12.354 253542 DEBUG nova.objects.instance [None req-c985478f-aa51-48a9-b0fc-f8033fc3d4f0 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'flavor' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:41:12 compute-0 ceph-mon[75015]: pgmap v1752: 321 pgs: 321 active+clean; 372 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 7.5 MiB/s wr, 130 op/s
Nov 25 08:41:12 compute-0 ceph-mon[75015]: osdmap e194: 3 total, 3 up, 3 in
Nov 25 08:41:12 compute-0 nova_compute[253538]: 2025-11-25 08:41:12.684 253542 INFO nova.virt.libvirt.driver [None req-c985478f-aa51-48a9-b0fc-f8033fc3d4f0 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Beginning live snapshot process
Nov 25 08:41:12 compute-0 nova_compute[253538]: 2025-11-25 08:41:12.826 253542 DEBUG nova.virt.libvirt.imagebackend [None req-c985478f-aa51-48a9-b0fc-f8033fc3d4f0 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:41:12 compute-0 nova_compute[253538]: 2025-11-25 08:41:12.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:13 compute-0 nova_compute[253538]: 2025-11-25 08:41:13.125 253542 DEBUG nova.storage.rbd_utils [None req-c985478f-aa51-48a9-b0fc-f8033fc3d4f0 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] creating snapshot(4d7cca2bb6a94883aa7fee68ccd01e30) on rbd image(3e75d0af-c514-42c5-aa05-88ae5552f196_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:41:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 372 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 6.6 MiB/s wr, 181 op/s
Nov 25 08:41:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Nov 25 08:41:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Nov 25 08:41:13 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Nov 25 08:41:13 compute-0 nova_compute[253538]: 2025-11-25 08:41:13.599 253542 DEBUG nova.storage.rbd_utils [None req-c985478f-aa51-48a9-b0fc-f8033fc3d4f0 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] cloning vms/3e75d0af-c514-42c5-aa05-88ae5552f196_disk@4d7cca2bb6a94883aa7fee68ccd01e30 to images/0091be03-c4d2-4f48-bc11-a89c5095c208 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:41:13 compute-0 nova_compute[253538]: 2025-11-25 08:41:13.735 253542 DEBUG nova.storage.rbd_utils [None req-c985478f-aa51-48a9-b0fc-f8033fc3d4f0 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] flattening images/0091be03-c4d2-4f48-bc11-a89c5095c208 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.109 253542 DEBUG oslo_concurrency.lockutils [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Acquiring lock "f08c05a1-b18c-46bc-bf8a-d3694a045584" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.110 253542 DEBUG oslo_concurrency.lockutils [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lock "f08c05a1-b18c-46bc-bf8a-d3694a045584" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.111 253542 DEBUG oslo_concurrency.lockutils [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Acquiring lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.111 253542 DEBUG oslo_concurrency.lockutils [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.112 253542 DEBUG oslo_concurrency.lockutils [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.114 253542 INFO nova.compute.manager [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Terminating instance
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.116 253542 DEBUG nova.compute.manager [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:41:14 compute-0 kernel: tap66e4f077-af (unregistering): left promiscuous mode
Nov 25 08:41:14 compute-0 NetworkManager[48915]: <info>  [1764060074.3582] device (tap66e4f077-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:41:14 compute-0 ovn_controller[152859]: 2025-11-25T08:41:14Z|00935|binding|INFO|Releasing lport 66e4f077-afa1-4407-b56e-30666cf03b08 from this chassis (sb_readonly=0)
Nov 25 08:41:14 compute-0 ovn_controller[152859]: 2025-11-25T08:41:14Z|00936|binding|INFO|Setting lport 66e4f077-afa1-4407-b56e-30666cf03b08 down in Southbound
Nov 25 08:41:14 compute-0 ovn_controller[152859]: 2025-11-25T08:41:14Z|00937|binding|INFO|Removing iface tap66e4f077-af ovn-installed in OVS
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.380 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.409 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:14 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000060.scope: Deactivated successfully.
Nov 25 08:41:14 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000060.scope: Consumed 5.185s CPU time.
Nov 25 08:41:14 compute-0 systemd-machined[215790]: Machine qemu-118-instance-00000060 terminated.
Nov 25 08:41:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:14.424 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:47:c6 10.100.0.10'], port_security=['fa:16:3e:9b:47:c6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f08c05a1-b18c-46bc-bf8a-d3694a045584', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e67f2484-3d92-4e21-9a85-57791d9c9703', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1de8a67238e04cb69478fbbed61e53e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ce71fcf3-43ae-4e62-aec3-a140426e0408', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee65f8d9-100d-464e-b6bb-a758b5cb4808, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=66e4f077-afa1-4407-b56e-30666cf03b08) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:41:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:14.427 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 66e4f077-afa1-4407-b56e-30666cf03b08 in datapath e67f2484-3d92-4e21-9a85-57791d9c9703 unbound from our chassis
Nov 25 08:41:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:14.429 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e67f2484-3d92-4e21-9a85-57791d9c9703, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:41:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:14.430 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c50b6a51-f112-4bc2-bb59-a187c0e81696]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:14.431 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703 namespace which is not needed anymore
Nov 25 08:41:14 compute-0 ceph-mon[75015]: pgmap v1754: 321 pgs: 321 active+clean; 372 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 6.6 MiB/s wr, 181 op/s
Nov 25 08:41:14 compute-0 ceph-mon[75015]: osdmap e195: 3 total, 3 up, 3 in
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.561 253542 INFO nova.virt.libvirt.driver [-] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Instance destroyed successfully.
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.562 253542 DEBUG nova.objects.instance [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lazy-loading 'resources' on Instance uuid f08c05a1-b18c-46bc-bf8a-d3694a045584 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.579 253542 DEBUG nova.virt.libvirt.vif [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:40:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1641527469',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1641527469',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1641527469',id=96,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:41:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1de8a67238e04cb69478fbbed61e53e5',ramdisk_id='',reservation_id='r-5z07ymu7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio
',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-2082284079',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-2082284079-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:41:10Z,user_data=None,user_id='1c82d462fb7a4f14b4d14246ebb45df5',uuid=f08c05a1-b18c-46bc-bf8a-d3694a045584,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "66e4f077-afa1-4407-b56e-30666cf03b08", "address": "fa:16:3e:9b:47:c6", "network": {"id": "e67f2484-3d92-4e21-9a85-57791d9c9703", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1445815200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1de8a67238e04cb69478fbbed61e53e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66e4f077-af", "ovs_interfaceid": "66e4f077-afa1-4407-b56e-30666cf03b08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.581 253542 DEBUG nova.network.os_vif_util [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Converting VIF {"id": "66e4f077-afa1-4407-b56e-30666cf03b08", "address": "fa:16:3e:9b:47:c6", "network": {"id": "e67f2484-3d92-4e21-9a85-57791d9c9703", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1445815200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1de8a67238e04cb69478fbbed61e53e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66e4f077-af", "ovs_interfaceid": "66e4f077-afa1-4407-b56e-30666cf03b08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.581 253542 DEBUG nova.network.os_vif_util [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:47:c6,bridge_name='br-int',has_traffic_filtering=True,id=66e4f077-afa1-4407-b56e-30666cf03b08,network=Network(e67f2484-3d92-4e21-9a85-57791d9c9703),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66e4f077-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.582 253542 DEBUG os_vif [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:47:c6,bridge_name='br-int',has_traffic_filtering=True,id=66e4f077-afa1-4407-b56e-30666cf03b08,network=Network(e67f2484-3d92-4e21-9a85-57791d9c9703),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66e4f077-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.583 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.584 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66e4f077-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.585 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.589 253542 INFO os_vif [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:47:c6,bridge_name='br-int',has_traffic_filtering=True,id=66e4f077-afa1-4407-b56e-30666cf03b08,network=Network(e67f2484-3d92-4e21-9a85-57791d9c9703),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66e4f077-af')
Nov 25 08:41:14 compute-0 neutron-haproxy-ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703[345682]: [NOTICE]   (345686) : haproxy version is 2.8.14-c23fe91
Nov 25 08:41:14 compute-0 neutron-haproxy-ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703[345682]: [NOTICE]   (345686) : path to executable is /usr/sbin/haproxy
Nov 25 08:41:14 compute-0 neutron-haproxy-ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703[345682]: [WARNING]  (345686) : Exiting Master process...
Nov 25 08:41:14 compute-0 neutron-haproxy-ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703[345682]: [WARNING]  (345686) : Exiting Master process...
Nov 25 08:41:14 compute-0 neutron-haproxy-ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703[345682]: [ALERT]    (345686) : Current worker (345688) exited with code 143 (Terminated)
Nov 25 08:41:14 compute-0 neutron-haproxy-ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703[345682]: [WARNING]  (345686) : All workers exited. Exiting... (0)
Nov 25 08:41:14 compute-0 systemd[1]: libpod-151ec4521b4dea4a2577ee9597145451bd2c546e981085e5f9fb8af759def46e.scope: Deactivated successfully.
Nov 25 08:41:14 compute-0 conmon[345682]: conmon 151ec4521b4dea4a2577 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-151ec4521b4dea4a2577ee9597145451bd2c546e981085e5f9fb8af759def46e.scope/container/memory.events
Nov 25 08:41:14 compute-0 podman[345835]: 2025-11-25 08:41:14.654268063 +0000 UTC m=+0.081534142 container died 151ec4521b4dea4a2577ee9597145451bd2c546e981085e5f9fb8af759def46e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:41:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-151ec4521b4dea4a2577ee9597145451bd2c546e981085e5f9fb8af759def46e-userdata-shm.mount: Deactivated successfully.
Nov 25 08:41:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4a3b24ada937f8d5ec6ec4e4e8343e0e4dd51282b9b15f52bf7fb00c17a5538-merged.mount: Deactivated successfully.
Nov 25 08:41:14 compute-0 podman[345835]: 2025-11-25 08:41:14.739698191 +0000 UTC m=+0.166964260 container cleanup 151ec4521b4dea4a2577ee9597145451bd2c546e981085e5f9fb8af759def46e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 08:41:14 compute-0 systemd[1]: libpod-conmon-151ec4521b4dea4a2577ee9597145451bd2c546e981085e5f9fb8af759def46e.scope: Deactivated successfully.
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.958 253542 DEBUG nova.storage.rbd_utils [None req-c985478f-aa51-48a9-b0fc-f8033fc3d4f0 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] removing snapshot(4d7cca2bb6a94883aa7fee68ccd01e30) on rbd image(3e75d0af-c514-42c5-aa05-88ae5552f196_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:41:14 compute-0 podman[345901]: 2025-11-25 08:41:14.978545051 +0000 UTC m=+0.200624199 container remove 151ec4521b4dea4a2577ee9597145451bd2c546e981085e5f9fb8af759def46e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:41:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:14.985 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bef0dda2-d1ac-4197-822c-811e29b8d076]: (4, ('Tue Nov 25 08:41:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703 (151ec4521b4dea4a2577ee9597145451bd2c546e981085e5f9fb8af759def46e)\n151ec4521b4dea4a2577ee9597145451bd2c546e981085e5f9fb8af759def46e\nTue Nov 25 08:41:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703 (151ec4521b4dea4a2577ee9597145451bd2c546e981085e5f9fb8af759def46e)\n151ec4521b4dea4a2577ee9597145451bd2c546e981085e5f9fb8af759def46e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:14.987 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[23f7ea92-3668-4399-971e-90db02e997e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:14.988 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape67f2484-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:41:14 compute-0 nova_compute[253538]: 2025-11-25 08:41:14.991 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:14 compute-0 kernel: tape67f2484-30: left promiscuous mode
Nov 25 08:41:15 compute-0 nova_compute[253538]: 2025-11-25 08:41:15.026 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:15.031 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[be1bbb87-056c-4183-9611-ef6721942ae1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:15 compute-0 nova_compute[253538]: 2025-11-25 08:41:15.035 253542 DEBUG nova.compute.manager [req-90bb35e3-ef2f-4bc8-b88f-cbbe1034cbe7 req-84f36503-9457-4824-8375-6969c1102915 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Received event network-vif-unplugged-66e4f077-afa1-4407-b56e-30666cf03b08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:41:15 compute-0 nova_compute[253538]: 2025-11-25 08:41:15.036 253542 DEBUG oslo_concurrency.lockutils [req-90bb35e3-ef2f-4bc8-b88f-cbbe1034cbe7 req-84f36503-9457-4824-8375-6969c1102915 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:15 compute-0 nova_compute[253538]: 2025-11-25 08:41:15.036 253542 DEBUG oslo_concurrency.lockutils [req-90bb35e3-ef2f-4bc8-b88f-cbbe1034cbe7 req-84f36503-9457-4824-8375-6969c1102915 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:15 compute-0 nova_compute[253538]: 2025-11-25 08:41:15.037 253542 DEBUG oslo_concurrency.lockutils [req-90bb35e3-ef2f-4bc8-b88f-cbbe1034cbe7 req-84f36503-9457-4824-8375-6969c1102915 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:15 compute-0 nova_compute[253538]: 2025-11-25 08:41:15.037 253542 DEBUG nova.compute.manager [req-90bb35e3-ef2f-4bc8-b88f-cbbe1034cbe7 req-84f36503-9457-4824-8375-6969c1102915 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] No waiting events found dispatching network-vif-unplugged-66e4f077-afa1-4407-b56e-30666cf03b08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:41:15 compute-0 nova_compute[253538]: 2025-11-25 08:41:15.037 253542 DEBUG nova.compute.manager [req-90bb35e3-ef2f-4bc8-b88f-cbbe1034cbe7 req-84f36503-9457-4824-8375-6969c1102915 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Received event network-vif-unplugged-66e4f077-afa1-4407-b56e-30666cf03b08 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:41:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:15.049 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[864e46b1-1e21-49a4-91d2-855f282af45d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:15.051 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[182d5009-342b-478c-9898-4d1bb319ff17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:15.079 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b545e1-8850-4ffd-b0c6-f3ad82842a7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542363, 'reachable_time': 44520, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345918, 'error': None, 'target': 'ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:15.085 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e67f2484-3d92-4e21-9a85-57791d9c9703 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:41:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:15.085 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b608afa9-0db3-4992-8aa0-2f33916549ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:15 compute-0 systemd[1]: run-netns-ovnmeta\x2de67f2484\x2d3d92\x2d4e21\x2d9a85\x2d57791d9c9703.mount: Deactivated successfully.
Nov 25 08:41:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 376 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 5.0 MiB/s wr, 244 op/s
Nov 25 08:41:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Nov 25 08:41:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Nov 25 08:41:15 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Nov 25 08:41:15 compute-0 ceph-mon[75015]: pgmap v1756: 321 pgs: 321 active+clean; 376 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 5.0 MiB/s wr, 244 op/s
Nov 25 08:41:15 compute-0 nova_compute[253538]: 2025-11-25 08:41:15.950 253542 DEBUG nova.storage.rbd_utils [None req-c985478f-aa51-48a9-b0fc-f8033fc3d4f0 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] creating snapshot(snap) on rbd image(0091be03-c4d2-4f48-bc11-a89c5095c208) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:41:16 compute-0 nova_compute[253538]: 2025-11-25 08:41:16.420 253542 INFO nova.virt.libvirt.driver [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Deleting instance files /var/lib/nova/instances/f08c05a1-b18c-46bc-bf8a-d3694a045584_del
Nov 25 08:41:16 compute-0 nova_compute[253538]: 2025-11-25 08:41:16.421 253542 INFO nova.virt.libvirt.driver [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Deletion of /var/lib/nova/instances/f08c05a1-b18c-46bc-bf8a-d3694a045584_del complete
Nov 25 08:41:16 compute-0 nova_compute[253538]: 2025-11-25 08:41:16.485 253542 INFO nova.compute.manager [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Took 2.37 seconds to destroy the instance on the hypervisor.
Nov 25 08:41:16 compute-0 nova_compute[253538]: 2025-11-25 08:41:16.485 253542 DEBUG oslo.service.loopingcall [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:41:16 compute-0 nova_compute[253538]: 2025-11-25 08:41:16.486 253542 DEBUG nova.compute.manager [-] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:41:16 compute-0 nova_compute[253538]: 2025-11-25 08:41:16.486 253542 DEBUG nova.network.neutron [-] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:41:16 compute-0 nova_compute[253538]: 2025-11-25 08:41:16.669 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Nov 25 08:41:16 compute-0 ceph-mon[75015]: osdmap e196: 3 total, 3 up, 3 in
Nov 25 08:41:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Nov 25 08:41:16 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Nov 25 08:41:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:41:16 compute-0 podman[345938]: 2025-11-25 08:41:16.835739592 +0000 UTC m=+0.081400329 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:41:17 compute-0 nova_compute[253538]: 2025-11-25 08:41:17.349 253542 DEBUG nova.compute.manager [req-4ec8bcc3-e52f-49a8-b305-a50f5fcfbf21 req-a04efbf4-9239-426c-b76a-63ea774427df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Received event network-vif-plugged-66e4f077-afa1-4407-b56e-30666cf03b08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:41:17 compute-0 nova_compute[253538]: 2025-11-25 08:41:17.350 253542 DEBUG oslo_concurrency.lockutils [req-4ec8bcc3-e52f-49a8-b305-a50f5fcfbf21 req-a04efbf4-9239-426c-b76a-63ea774427df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:17 compute-0 nova_compute[253538]: 2025-11-25 08:41:17.350 253542 DEBUG oslo_concurrency.lockutils [req-4ec8bcc3-e52f-49a8-b305-a50f5fcfbf21 req-a04efbf4-9239-426c-b76a-63ea774427df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:17 compute-0 nova_compute[253538]: 2025-11-25 08:41:17.351 253542 DEBUG oslo_concurrency.lockutils [req-4ec8bcc3-e52f-49a8-b305-a50f5fcfbf21 req-a04efbf4-9239-426c-b76a-63ea774427df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f08c05a1-b18c-46bc-bf8a-d3694a045584-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:17 compute-0 nova_compute[253538]: 2025-11-25 08:41:17.351 253542 DEBUG nova.compute.manager [req-4ec8bcc3-e52f-49a8-b305-a50f5fcfbf21 req-a04efbf4-9239-426c-b76a-63ea774427df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] No waiting events found dispatching network-vif-plugged-66e4f077-afa1-4407-b56e-30666cf03b08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:41:17 compute-0 nova_compute[253538]: 2025-11-25 08:41:17.352 253542 WARNING nova.compute.manager [req-4ec8bcc3-e52f-49a8-b305-a50f5fcfbf21 req-a04efbf4-9239-426c-b76a-63ea774427df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Received unexpected event network-vif-plugged-66e4f077-afa1-4407-b56e-30666cf03b08 for instance with vm_state active and task_state deleting.
Nov 25 08:41:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 411 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 5.7 MiB/s wr, 226 op/s
Nov 25 08:41:17 compute-0 nova_compute[253538]: 2025-11-25 08:41:17.778 253542 DEBUG nova.network.neutron [-] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:41:17 compute-0 nova_compute[253538]: 2025-11-25 08:41:17.790 253542 INFO nova.compute.manager [-] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Took 1.30 seconds to deallocate network for instance.
Nov 25 08:41:17 compute-0 nova_compute[253538]: 2025-11-25 08:41:17.847 253542 DEBUG oslo_concurrency.lockutils [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:17 compute-0 nova_compute[253538]: 2025-11-25 08:41:17.847 253542 DEBUG oslo_concurrency.lockutils [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:17 compute-0 nova_compute[253538]: 2025-11-25 08:41:17.867 253542 DEBUG nova.compute.manager [req-a3fa61de-f1b8-4b46-96d1-f785ba07d77d req-4eae289c-87d2-47e7-93b4-da749df40616 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Received event network-vif-deleted-66e4f077-afa1-4407-b56e-30666cf03b08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:41:17 compute-0 nova_compute[253538]: 2025-11-25 08:41:17.901 253542 DEBUG oslo_concurrency.processutils [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:17 compute-0 ceph-mon[75015]: osdmap e197: 3 total, 3 up, 3 in
Nov 25 08:41:17 compute-0 ceph-mon[75015]: pgmap v1759: 321 pgs: 321 active+clean; 411 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 5.7 MiB/s wr, 226 op/s
Nov 25 08:41:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:41:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1430360280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:41:18 compute-0 nova_compute[253538]: 2025-11-25 08:41:18.377 253542 DEBUG oslo_concurrency.processutils [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:18 compute-0 nova_compute[253538]: 2025-11-25 08:41:18.385 253542 DEBUG nova.compute.provider_tree [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:41:18 compute-0 nova_compute[253538]: 2025-11-25 08:41:18.410 253542 DEBUG nova.scheduler.client.report [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:41:18 compute-0 nova_compute[253538]: 2025-11-25 08:41:18.433 253542 DEBUG oslo_concurrency.lockutils [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:18 compute-0 nova_compute[253538]: 2025-11-25 08:41:18.899 253542 INFO nova.scheduler.client.report [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Deleted allocations for instance f08c05a1-b18c-46bc-bf8a-d3694a045584
Nov 25 08:41:19 compute-0 nova_compute[253538]: 2025-11-25 08:41:19.016 253542 DEBUG oslo_concurrency.lockutils [None req-fbc39e25-2115-4d7b-a564-db03d68c1f3a 1c82d462fb7a4f14b4d14246ebb45df5 1de8a67238e04cb69478fbbed61e53e5 - - default default] Lock "f08c05a1-b18c-46bc-bf8a-d3694a045584" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:19 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1430360280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:41:19 compute-0 nova_compute[253538]: 2025-11-25 08:41:19.039 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 422 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 7.6 MiB/s wr, 215 op/s
Nov 25 08:41:19 compute-0 nova_compute[253538]: 2025-11-25 08:41:19.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:19 compute-0 nova_compute[253538]: 2025-11-25 08:41:19.593 253542 INFO nova.virt.libvirt.driver [None req-c985478f-aa51-48a9-b0fc-f8033fc3d4f0 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Snapshot image upload complete
Nov 25 08:41:19 compute-0 nova_compute[253538]: 2025-11-25 08:41:19.594 253542 INFO nova.compute.manager [None req-c985478f-aa51-48a9-b0fc-f8033fc3d4f0 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Took 7.20 seconds to snapshot the instance on the hypervisor.
Nov 25 08:41:19 compute-0 nova_compute[253538]: 2025-11-25 08:41:19.598 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:41:20 compute-0 ceph-mon[75015]: pgmap v1760: 321 pgs: 321 active+clean; 422 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 7.6 MiB/s wr, 215 op/s
Nov 25 08:41:20 compute-0 nova_compute[253538]: 2025-11-25 08:41:20.221 253542 DEBUG nova.compute.manager [None req-c985478f-aa51-48a9-b0fc-f8033fc3d4f0 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 25 08:41:20 compute-0 nova_compute[253538]: 2025-11-25 08:41:20.221 253542 DEBUG nova.compute.manager [None req-c985478f-aa51-48a9-b0fc-f8033fc3d4f0 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Nov 25 08:41:20 compute-0 nova_compute[253538]: 2025-11-25 08:41:20.222 253542 DEBUG nova.compute.manager [None req-c985478f-aa51-48a9-b0fc-f8033fc3d4f0 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Deleting image 89ede02e-6d56-403f-a661-2e56029c7b26 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Nov 25 08:41:20 compute-0 sshd-session[345979]: Invalid user admin from 193.32.162.151 port 38058
Nov 25 08:41:20 compute-0 sshd-session[345979]: Connection closed by invalid user admin 193.32.162.151 port 38058 [preauth]
Nov 25 08:41:20 compute-0 nova_compute[253538]: 2025-11-25 08:41:20.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:41:20 compute-0 nova_compute[253538]: 2025-11-25 08:41:20.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:41:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 405 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 5.9 MiB/s wr, 228 op/s
Nov 25 08:41:21 compute-0 nova_compute[253538]: 2025-11-25 08:41:21.672 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:41:22 compute-0 ceph-mon[75015]: pgmap v1761: 321 pgs: 321 active+clean; 405 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 5.9 MiB/s wr, 228 op/s
Nov 25 08:41:22 compute-0 podman[345981]: 2025-11-25 08:41:22.127212804 +0000 UTC m=+0.087079535 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:41:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Nov 25 08:41:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Nov 25 08:41:23 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Nov 25 08:41:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 385 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.6 MiB/s wr, 172 op/s
Nov 25 08:41:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:41:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:41:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:41:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:41:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:41:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:41:23 compute-0 nova_compute[253538]: 2025-11-25 08:41:23.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:41:23 compute-0 nova_compute[253538]: 2025-11-25 08:41:23.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:41:23 compute-0 nova_compute[253538]: 2025-11-25 08:41:23.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:41:24 compute-0 ceph-mon[75015]: osdmap e198: 3 total, 3 up, 3 in
Nov 25 08:41:24 compute-0 ceph-mon[75015]: pgmap v1763: 321 pgs: 321 active+clean; 385 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.6 MiB/s wr, 172 op/s
Nov 25 08:41:24 compute-0 nova_compute[253538]: 2025-11-25 08:41:24.050 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:41:24 compute-0 nova_compute[253538]: 2025-11-25 08:41:24.050 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:41:24 compute-0 nova_compute[253538]: 2025-11-25 08:41:24.051 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:41:24 compute-0 nova_compute[253538]: 2025-11-25 08:41:24.051 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:41:24 compute-0 nova_compute[253538]: 2025-11-25 08:41:24.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:24 compute-0 podman[346001]: 2025-11-25 08:41:24.873215328 +0000 UTC m=+0.107441930 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:41:24 compute-0 nova_compute[253538]: 2025-11-25 08:41:24.996 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:24.996 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:41:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:24.998 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:41:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 360 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 112 op/s
Nov 25 08:41:25 compute-0 nova_compute[253538]: 2025-11-25 08:41:25.970 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Nov 25 08:41:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Nov 25 08:41:26 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Nov 25 08:41:26 compute-0 ceph-mon[75015]: pgmap v1764: 321 pgs: 321 active+clean; 360 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 112 op/s
Nov 25 08:41:26 compute-0 nova_compute[253538]: 2025-11-25 08:41:26.673 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:41:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Nov 25 08:41:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Nov 25 08:41:27 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Nov 25 08:41:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 325 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.8 KiB/s wr, 53 op/s
Nov 25 08:41:27 compute-0 ceph-mon[75015]: osdmap e199: 3 total, 3 up, 3 in
Nov 25 08:41:27 compute-0 ceph-mon[75015]: osdmap e200: 3 total, 3 up, 3 in
Nov 25 08:41:27 compute-0 nova_compute[253538]: 2025-11-25 08:41:27.961 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating instance_info_cache with network_info: [{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:41:27 compute-0 nova_compute[253538]: 2025-11-25 08:41:27.976 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:41:27 compute-0 nova_compute[253538]: 2025-11-25 08:41:27.977 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:41:27 compute-0 nova_compute[253538]: 2025-11-25 08:41:27.978 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:41:27 compute-0 nova_compute[253538]: 2025-11-25 08:41:27.978 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:41:27 compute-0 nova_compute[253538]: 2025-11-25 08:41:27.979 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:41:27 compute-0 nova_compute[253538]: 2025-11-25 08:41:27.997 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Triggering sync for uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 08:41:27 compute-0 nova_compute[253538]: 2025-11-25 08:41:27.998 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:27 compute-0 nova_compute[253538]: 2025-11-25 08:41:27.998 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:28 compute-0 nova_compute[253538]: 2025-11-25 08:41:28.023 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:28 compute-0 ovn_controller[152859]: 2025-11-25T08:41:28Z|00938|binding|INFO|Releasing lport 66275e2b-0197-461a-9be3-ae2fe1aec502 from this chassis (sb_readonly=0)
Nov 25 08:41:28 compute-0 nova_compute[253538]: 2025-11-25 08:41:28.185 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Nov 25 08:41:28 compute-0 ceph-mon[75015]: pgmap v1767: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 325 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.8 KiB/s wr, 53 op/s
Nov 25 08:41:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Nov 25 08:41:28 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Nov 25 08:41:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:41:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2192187691' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:41:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:41:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2192187691' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:41:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 300 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 682 B/s wr, 14 op/s
Nov 25 08:41:29 compute-0 ceph-mon[75015]: osdmap e201: 3 total, 3 up, 3 in
Nov 25 08:41:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2192187691' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:41:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2192187691' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:41:29 compute-0 nova_compute[253538]: 2025-11-25 08:41:29.559 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060074.5585058, f08c05a1-b18c-46bc-bf8a-d3694a045584 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:41:29 compute-0 nova_compute[253538]: 2025-11-25 08:41:29.560 253542 INFO nova.compute.manager [-] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] VM Stopped (Lifecycle Event)
Nov 25 08:41:29 compute-0 nova_compute[253538]: 2025-11-25 08:41:29.573 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:41:29 compute-0 nova_compute[253538]: 2025-11-25 08:41:29.592 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:29 compute-0 nova_compute[253538]: 2025-11-25 08:41:29.601 253542 DEBUG nova.compute.manager [None req-44f9ecd3-5162-4cdc-88a4-cc06bdcd8661 - - - - - -] [instance: f08c05a1-b18c-46bc-bf8a-d3694a045584] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:41:30 compute-0 nova_compute[253538]: 2025-11-25 08:41:30.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:41:30 compute-0 ovn_controller[152859]: 2025-11-25T08:41:30Z|00939|binding|INFO|Releasing lport 66275e2b-0197-461a-9be3-ae2fe1aec502 from this chassis (sb_readonly=0)
Nov 25 08:41:30 compute-0 ceph-mon[75015]: pgmap v1769: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 300 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 682 B/s wr, 14 op/s
Nov 25 08:41:30 compute-0 nova_compute[253538]: 2025-11-25 08:41:30.628 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:30 compute-0 sudo[346027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:41:30 compute-0 sudo[346027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:30 compute-0 sudo[346027]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:30 compute-0 sudo[346052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:41:30 compute-0 sudo[346052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:30 compute-0 sudo[346052]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:30 compute-0 sudo[346077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:41:30 compute-0 sudo[346077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:30 compute-0 sudo[346077]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:30 compute-0 sudo[346102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 08:41:30 compute-0 sudo[346102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:31 compute-0 sudo[346102]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:41:31 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:41:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:41:31 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:41:31 compute-0 sudo[346147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:41:31 compute-0 sudo[346147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:31 compute-0 sudo[346147]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:31 compute-0 sudo[346172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:41:31 compute-0 sudo[346172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:31 compute-0 sudo[346172]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:31 compute-0 sudo[346197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:41:31 compute-0 sudo[346197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:31 compute-0 sudo[346197]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:31 compute-0 sudo[346222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:41:31 compute-0 sudo[346222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 254 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 3.0 KiB/s wr, 53 op/s
Nov 25 08:41:31 compute-0 nova_compute[253538]: 2025-11-25 08:41:31.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:41:31 compute-0 nova_compute[253538]: 2025-11-25 08:41:31.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:31 compute-0 nova_compute[253538]: 2025-11-25 08:41:31.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:31 compute-0 nova_compute[253538]: 2025-11-25 08:41:31.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:31 compute-0 nova_compute[253538]: 2025-11-25 08:41:31.578 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:41:31 compute-0 nova_compute[253538]: 2025-11-25 08:41:31.578 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:31 compute-0 nova_compute[253538]: 2025-11-25 08:41:31.675 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:31 compute-0 sudo[346222]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:41:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:41:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:41:31 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:41:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:41:32 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:41:32 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 928aa61d-7332-4b03-946b-031751c0328e does not exist
Nov 25 08:41:32 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 784f0a85-22f2-49dd-b86a-031ef84a24b3 does not exist
Nov 25 08:41:32 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c0c89df1-9d79-483e-be4e-88ea356ac2e2 does not exist
Nov 25 08:41:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:41:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:41:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:41:32 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:41:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:41:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:41:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:41:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Nov 25 08:41:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:41:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/240911554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:41:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Nov 25 08:41:32 compute-0 nova_compute[253538]: 2025-11-25 08:41:32.046 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:32 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Nov 25 08:41:32 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:41:32 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:41:32 compute-0 ceph-mon[75015]: pgmap v1770: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 254 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 3.0 KiB/s wr, 53 op/s
Nov 25 08:41:32 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:41:32 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:41:32 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:41:32 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:41:32 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:41:32 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:41:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/240911554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:41:32 compute-0 ceph-mon[75015]: osdmap e202: 3 total, 3 up, 3 in
Nov 25 08:41:32 compute-0 sudo[346298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:41:32 compute-0 sudo[346298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:32 compute-0 sudo[346298]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:32 compute-0 nova_compute[253538]: 2025-11-25 08:41:32.129 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:41:32 compute-0 nova_compute[253538]: 2025-11-25 08:41:32.130 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:41:32 compute-0 sudo[346326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:41:32 compute-0 sudo[346326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:32 compute-0 sudo[346326]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:32 compute-0 sudo[346351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:41:32 compute-0 sudo[346351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:32 compute-0 sudo[346351]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:32 compute-0 nova_compute[253538]: 2025-11-25 08:41:32.349 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:41:32 compute-0 nova_compute[253538]: 2025-11-25 08:41:32.350 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3689MB free_disk=59.942684173583984GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:41:32 compute-0 nova_compute[253538]: 2025-11-25 08:41:32.350 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:32 compute-0 nova_compute[253538]: 2025-11-25 08:41:32.351 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:32 compute-0 sudo[346376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:41:32 compute-0 sudo[346376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:32 compute-0 nova_compute[253538]: 2025-11-25 08:41:32.423 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 3e75d0af-c514-42c5-aa05-88ae5552f196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:41:32 compute-0 nova_compute[253538]: 2025-11-25 08:41:32.424 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:41:32 compute-0 nova_compute[253538]: 2025-11-25 08:41:32.424 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:41:32 compute-0 nova_compute[253538]: 2025-11-25 08:41:32.461 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:32 compute-0 podman[346462]: 2025-11-25 08:41:32.787400107 +0000 UTC m=+0.069964558 container create 1a7b91e621fca9f20782b4c071879b3719d63219df5692903d655350220d95f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:41:32 compute-0 systemd[1]: Started libpod-conmon-1a7b91e621fca9f20782b4c071879b3719d63219df5692903d655350220d95f1.scope.
Nov 25 08:41:32 compute-0 podman[346462]: 2025-11-25 08:41:32.760455062 +0000 UTC m=+0.043019503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:41:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:41:32 compute-0 podman[346462]: 2025-11-25 08:41:32.87595132 +0000 UTC m=+0.158515821 container init 1a7b91e621fca9f20782b4c071879b3719d63219df5692903d655350220d95f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:41:32 compute-0 podman[346462]: 2025-11-25 08:41:32.884823882 +0000 UTC m=+0.167388313 container start 1a7b91e621fca9f20782b4c071879b3719d63219df5692903d655350220d95f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:41:32 compute-0 podman[346462]: 2025-11-25 08:41:32.88878575 +0000 UTC m=+0.171350201 container attach 1a7b91e621fca9f20782b4c071879b3719d63219df5692903d655350220d95f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 08:41:32 compute-0 dreamy_swanson[346478]: 167 167
Nov 25 08:41:32 compute-0 systemd[1]: libpod-1a7b91e621fca9f20782b4c071879b3719d63219df5692903d655350220d95f1.scope: Deactivated successfully.
Nov 25 08:41:32 compute-0 podman[346462]: 2025-11-25 08:41:32.89393143 +0000 UTC m=+0.176495871 container died 1a7b91e621fca9f20782b4c071879b3719d63219df5692903d655350220d95f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 08:41:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:41:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/379214012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:41:32 compute-0 nova_compute[253538]: 2025-11-25 08:41:32.922 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-fde31f2ccddfc735773cda41bd84fb1d8b9ae3b51b8cf403e0459b5dd45e5468-merged.mount: Deactivated successfully.
Nov 25 08:41:32 compute-0 nova_compute[253538]: 2025-11-25 08:41:32.934 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:41:32 compute-0 nova_compute[253538]: 2025-11-25 08:41:32.951 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:41:32 compute-0 podman[346462]: 2025-11-25 08:41:32.952668271 +0000 UTC m=+0.235232712 container remove 1a7b91e621fca9f20782b4c071879b3719d63219df5692903d655350220d95f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 08:41:32 compute-0 systemd[1]: libpod-conmon-1a7b91e621fca9f20782b4c071879b3719d63219df5692903d655350220d95f1.scope: Deactivated successfully.
Nov 25 08:41:32 compute-0 nova_compute[253538]: 2025-11-25 08:41:32.976 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:41:32 compute-0 nova_compute[253538]: 2025-11-25 08:41:32.977 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/379214012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:41:33 compute-0 podman[346503]: 2025-11-25 08:41:33.214825984 +0000 UTC m=+0.073271857 container create 74116acc04f070aff718e383475448364baef38ee1e805dc50d0c9fcd48a2391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 08:41:33 compute-0 systemd[1]: Started libpod-conmon-74116acc04f070aff718e383475448364baef38ee1e805dc50d0c9fcd48a2391.scope.
Nov 25 08:41:33 compute-0 podman[346503]: 2025-11-25 08:41:33.185411673 +0000 UTC m=+0.043857606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:41:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e01ca4fdc19d4a8d78a61802c3bfd91a956f5dc70fd2b52404f5fd163abca7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e01ca4fdc19d4a8d78a61802c3bfd91a956f5dc70fd2b52404f5fd163abca7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e01ca4fdc19d4a8d78a61802c3bfd91a956f5dc70fd2b52404f5fd163abca7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e01ca4fdc19d4a8d78a61802c3bfd91a956f5dc70fd2b52404f5fd163abca7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e01ca4fdc19d4a8d78a61802c3bfd91a956f5dc70fd2b52404f5fd163abca7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:41:33 compute-0 podman[346503]: 2025-11-25 08:41:33.319006804 +0000 UTC m=+0.177452697 container init 74116acc04f070aff718e383475448364baef38ee1e805dc50d0c9fcd48a2391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ellis, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:41:33 compute-0 podman[346503]: 2025-11-25 08:41:33.333956381 +0000 UTC m=+0.192402254 container start 74116acc04f070aff718e383475448364baef38ee1e805dc50d0c9fcd48a2391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ellis, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 08:41:33 compute-0 podman[346503]: 2025-11-25 08:41:33.338525326 +0000 UTC m=+0.196971189 container attach 74116acc04f070aff718e383475448364baef38ee1e805dc50d0c9fcd48a2391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ellis, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:41:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 167 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 4.2 KiB/s wr, 83 op/s
Nov 25 08:41:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:34.001 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:41:34 compute-0 ceph-mon[75015]: pgmap v1772: 321 pgs: 321 active+clean; 167 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 4.2 KiB/s wr, 83 op/s
Nov 25 08:41:34 compute-0 angry_ellis[346519]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:41:34 compute-0 angry_ellis[346519]: --> relative data size: 1.0
Nov 25 08:41:34 compute-0 angry_ellis[346519]: --> All data devices are unavailable
Nov 25 08:41:34 compute-0 systemd[1]: libpod-74116acc04f070aff718e383475448364baef38ee1e805dc50d0c9fcd48a2391.scope: Deactivated successfully.
Nov 25 08:41:34 compute-0 systemd[1]: libpod-74116acc04f070aff718e383475448364baef38ee1e805dc50d0c9fcd48a2391.scope: Consumed 1.118s CPU time.
Nov 25 08:41:34 compute-0 podman[346503]: 2025-11-25 08:41:34.544763298 +0000 UTC m=+1.403209191 container died 74116acc04f070aff718e383475448364baef38ee1e805dc50d0c9fcd48a2391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ellis, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:41:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9e01ca4fdc19d4a8d78a61802c3bfd91a956f5dc70fd2b52404f5fd163abca7-merged.mount: Deactivated successfully.
Nov 25 08:41:34 compute-0 nova_compute[253538]: 2025-11-25 08:41:34.596 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:34 compute-0 podman[346503]: 2025-11-25 08:41:34.62703931 +0000 UTC m=+1.485485173 container remove 74116acc04f070aff718e383475448364baef38ee1e805dc50d0c9fcd48a2391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ellis, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 08:41:34 compute-0 systemd[1]: libpod-conmon-74116acc04f070aff718e383475448364baef38ee1e805dc50d0c9fcd48a2391.scope: Deactivated successfully.
Nov 25 08:41:34 compute-0 sudo[346376]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:34 compute-0 sudo[346561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:41:34 compute-0 sudo[346561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:34 compute-0 sudo[346561]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:34 compute-0 sudo[346586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:41:34 compute-0 sudo[346586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:34 compute-0 sudo[346586]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:34 compute-0 sudo[346611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:41:34 compute-0 sudo[346611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:34 compute-0 sudo[346611]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:35 compute-0 sudo[346636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:41:35 compute-0 sudo[346636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:35 compute-0 podman[346700]: 2025-11-25 08:41:35.400457177 +0000 UTC m=+0.040428003 container create 50e82e82ab11fce210db95c45e925bd47d7bb722604ba51c6254358c12aa32f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_almeida, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 08:41:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 167 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.4 KiB/s wr, 66 op/s
Nov 25 08:41:35 compute-0 systemd[1]: Started libpod-conmon-50e82e82ab11fce210db95c45e925bd47d7bb722604ba51c6254358c12aa32f5.scope.
Nov 25 08:41:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:41:35 compute-0 podman[346700]: 2025-11-25 08:41:35.38517359 +0000 UTC m=+0.025144426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:41:35 compute-0 podman[346700]: 2025-11-25 08:41:35.510015993 +0000 UTC m=+0.149986919 container init 50e82e82ab11fce210db95c45e925bd47d7bb722604ba51c6254358c12aa32f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_almeida, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:41:35 compute-0 podman[346700]: 2025-11-25 08:41:35.52205108 +0000 UTC m=+0.162021946 container start 50e82e82ab11fce210db95c45e925bd47d7bb722604ba51c6254358c12aa32f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_almeida, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:41:35 compute-0 podman[346700]: 2025-11-25 08:41:35.526472021 +0000 UTC m=+0.166442927 container attach 50e82e82ab11fce210db95c45e925bd47d7bb722604ba51c6254358c12aa32f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:41:35 compute-0 exciting_almeida[346716]: 167 167
Nov 25 08:41:35 compute-0 systemd[1]: libpod-50e82e82ab11fce210db95c45e925bd47d7bb722604ba51c6254358c12aa32f5.scope: Deactivated successfully.
Nov 25 08:41:35 compute-0 podman[346700]: 2025-11-25 08:41:35.529594396 +0000 UTC m=+0.169565302 container died 50e82e82ab11fce210db95c45e925bd47d7bb722604ba51c6254358c12aa32f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 08:41:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5f5ba049da92e8d321a2f7d3352cf2710a68c5e717d6df38aedb504e8017d78-merged.mount: Deactivated successfully.
Nov 25 08:41:35 compute-0 podman[346700]: 2025-11-25 08:41:35.592276434 +0000 UTC m=+0.232247300 container remove 50e82e82ab11fce210db95c45e925bd47d7bb722604ba51c6254358c12aa32f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 08:41:35 compute-0 systemd[1]: libpod-conmon-50e82e82ab11fce210db95c45e925bd47d7bb722604ba51c6254358c12aa32f5.scope: Deactivated successfully.
Nov 25 08:41:35 compute-0 podman[346743]: 2025-11-25 08:41:35.82845311 +0000 UTC m=+0.076554807 container create 002a25a650d1570d7a9559980ce6cacc8558d3460f57060b9532aa5c62b83479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 25 08:41:35 compute-0 podman[346743]: 2025-11-25 08:41:35.775537009 +0000 UTC m=+0.023638726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:41:35 compute-0 systemd[1]: Started libpod-conmon-002a25a650d1570d7a9559980ce6cacc8558d3460f57060b9532aa5c62b83479.scope.
Nov 25 08:41:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284acefed62e0f85be1b7cf07f662b47e79af9b4fde558a507f0156fcca2e5a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284acefed62e0f85be1b7cf07f662b47e79af9b4fde558a507f0156fcca2e5a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284acefed62e0f85be1b7cf07f662b47e79af9b4fde558a507f0156fcca2e5a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284acefed62e0f85be1b7cf07f662b47e79af9b4fde558a507f0156fcca2e5a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:41:35 compute-0 podman[346743]: 2025-11-25 08:41:35.969852074 +0000 UTC m=+0.217953831 container init 002a25a650d1570d7a9559980ce6cacc8558d3460f57060b9532aa5c62b83479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 08:41:35 compute-0 podman[346743]: 2025-11-25 08:41:35.981739748 +0000 UTC m=+0.229841455 container start 002a25a650d1570d7a9559980ce6cacc8558d3460f57060b9532aa5c62b83479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 08:41:36 compute-0 podman[346743]: 2025-11-25 08:41:36.00382921 +0000 UTC m=+0.251930917 container attach 002a25a650d1570d7a9559980ce6cacc8558d3460f57060b9532aa5c62b83479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 08:41:36 compute-0 ceph-mon[75015]: pgmap v1773: 321 pgs: 321 active+clean; 167 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.4 KiB/s wr, 66 op/s
Nov 25 08:41:36 compute-0 ovn_controller[152859]: 2025-11-25T08:41:36Z|00940|binding|INFO|Releasing lport 66275e2b-0197-461a-9be3-ae2fe1aec502 from this chassis (sb_readonly=0)
Nov 25 08:41:36 compute-0 nice_joliot[346759]: {
Nov 25 08:41:36 compute-0 nice_joliot[346759]:     "0": [
Nov 25 08:41:36 compute-0 nice_joliot[346759]:         {
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "devices": [
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "/dev/loop3"
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             ],
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "lv_name": "ceph_lv0",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "lv_size": "21470642176",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "name": "ceph_lv0",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "tags": {
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.cluster_name": "ceph",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.crush_device_class": "",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.encrypted": "0",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.osd_id": "0",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.type": "block",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.vdo": "0"
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             },
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "type": "block",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "vg_name": "ceph_vg0"
Nov 25 08:41:36 compute-0 nice_joliot[346759]:         }
Nov 25 08:41:36 compute-0 nice_joliot[346759]:     ],
Nov 25 08:41:36 compute-0 nice_joliot[346759]:     "1": [
Nov 25 08:41:36 compute-0 nice_joliot[346759]:         {
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "devices": [
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "/dev/loop4"
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             ],
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "lv_name": "ceph_lv1",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "lv_size": "21470642176",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "name": "ceph_lv1",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "tags": {
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.cluster_name": "ceph",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.crush_device_class": "",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.encrypted": "0",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.osd_id": "1",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.type": "block",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.vdo": "0"
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             },
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "type": "block",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "vg_name": "ceph_vg1"
Nov 25 08:41:36 compute-0 nice_joliot[346759]:         }
Nov 25 08:41:36 compute-0 nice_joliot[346759]:     ],
Nov 25 08:41:36 compute-0 nice_joliot[346759]:     "2": [
Nov 25 08:41:36 compute-0 nice_joliot[346759]:         {
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "devices": [
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "/dev/loop5"
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             ],
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "lv_name": "ceph_lv2",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "lv_size": "21470642176",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "name": "ceph_lv2",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "tags": {
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.cluster_name": "ceph",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.crush_device_class": "",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.encrypted": "0",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.osd_id": "2",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.type": "block",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:                 "ceph.vdo": "0"
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             },
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "type": "block",
Nov 25 08:41:36 compute-0 nice_joliot[346759]:             "vg_name": "ceph_vg2"
Nov 25 08:41:36 compute-0 nice_joliot[346759]:         }
Nov 25 08:41:36 compute-0 nice_joliot[346759]:     ]
Nov 25 08:41:36 compute-0 nice_joliot[346759]: }
Nov 25 08:41:36 compute-0 nova_compute[253538]: 2025-11-25 08:41:36.743 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:36 compute-0 systemd[1]: libpod-002a25a650d1570d7a9559980ce6cacc8558d3460f57060b9532aa5c62b83479.scope: Deactivated successfully.
Nov 25 08:41:36 compute-0 podman[346743]: 2025-11-25 08:41:36.755912986 +0000 UTC m=+1.004014663 container died 002a25a650d1570d7a9559980ce6cacc8558d3460f57060b9532aa5c62b83479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-284acefed62e0f85be1b7cf07f662b47e79af9b4fde558a507f0156fcca2e5a0-merged.mount: Deactivated successfully.
Nov 25 08:41:36 compute-0 podman[346743]: 2025-11-25 08:41:36.85221073 +0000 UTC m=+1.100312417 container remove 002a25a650d1570d7a9559980ce6cacc8558d3460f57060b9532aa5c62b83479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:41:36 compute-0 systemd[1]: libpod-conmon-002a25a650d1570d7a9559980ce6cacc8558d3460f57060b9532aa5c62b83479.scope: Deactivated successfully.
Nov 25 08:41:36 compute-0 sudo[346636]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:37 compute-0 sudo[346780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:41:37 compute-0 sudo[346780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:37 compute-0 sudo[346780]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:41:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Nov 25 08:41:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Nov 25 08:41:37 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Nov 25 08:41:37 compute-0 sudo[346805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:41:37 compute-0 sudo[346805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:37 compute-0 sudo[346805]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:37 compute-0 sudo[346830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:41:37 compute-0 sudo[346830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:37 compute-0 sudo[346830]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:37 compute-0 sudo[346855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:41:37 compute-0 sudo[346855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1775: 321 pgs: 321 active+clean; 167 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 3.4 KiB/s wr, 63 op/s
Nov 25 08:41:37 compute-0 podman[346922]: 2025-11-25 08:41:37.777935687 +0000 UTC m=+0.054930718 container create cda8eba58420d725dbb4e325f3ae74aa3696b274bb2c64274119ce6ff4365df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcnulty, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:41:37 compute-0 systemd[1]: Started libpod-conmon-cda8eba58420d725dbb4e325f3ae74aa3696b274bb2c64274119ce6ff4365df2.scope.
Nov 25 08:41:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:41:37 compute-0 podman[346922]: 2025-11-25 08:41:37.754591751 +0000 UTC m=+0.031586812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:41:37 compute-0 podman[346922]: 2025-11-25 08:41:37.851897513 +0000 UTC m=+0.128892564 container init cda8eba58420d725dbb4e325f3ae74aa3696b274bb2c64274119ce6ff4365df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcnulty, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:41:37 compute-0 podman[346922]: 2025-11-25 08:41:37.863034416 +0000 UTC m=+0.140029437 container start cda8eba58420d725dbb4e325f3ae74aa3696b274bb2c64274119ce6ff4365df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 08:41:37 compute-0 great_mcnulty[346938]: 167 167
Nov 25 08:41:37 compute-0 systemd[1]: libpod-cda8eba58420d725dbb4e325f3ae74aa3696b274bb2c64274119ce6ff4365df2.scope: Deactivated successfully.
Nov 25 08:41:37 compute-0 podman[346922]: 2025-11-25 08:41:37.871137027 +0000 UTC m=+0.148132048 container attach cda8eba58420d725dbb4e325f3ae74aa3696b274bb2c64274119ce6ff4365df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcnulty, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 08:41:37 compute-0 podman[346922]: 2025-11-25 08:41:37.871656921 +0000 UTC m=+0.148651942 container died cda8eba58420d725dbb4e325f3ae74aa3696b274bb2c64274119ce6ff4365df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcnulty, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 08:41:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3400e187ede98c3b51e4533baf129f3120d2a15582816ead59a4674c698e683e-merged.mount: Deactivated successfully.
Nov 25 08:41:37 compute-0 podman[346922]: 2025-11-25 08:41:37.979607243 +0000 UTC m=+0.256602284 container remove cda8eba58420d725dbb4e325f3ae74aa3696b274bb2c64274119ce6ff4365df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 08:41:37 compute-0 systemd[1]: libpod-conmon-cda8eba58420d725dbb4e325f3ae74aa3696b274bb2c64274119ce6ff4365df2.scope: Deactivated successfully.
Nov 25 08:41:38 compute-0 ceph-mon[75015]: osdmap e203: 3 total, 3 up, 3 in
Nov 25 08:41:38 compute-0 ceph-mon[75015]: pgmap v1775: 321 pgs: 321 active+clean; 167 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 3.4 KiB/s wr, 63 op/s
Nov 25 08:41:38 compute-0 podman[346962]: 2025-11-25 08:41:38.268078744 +0000 UTC m=+0.068942859 container create c841ad7c752172f384524eeffb41123f6f9219bb3601e36df0d977d13c3a5cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:41:38 compute-0 podman[346962]: 2025-11-25 08:41:38.238953061 +0000 UTC m=+0.039817246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:41:38 compute-0 systemd[1]: Started libpod-conmon-c841ad7c752172f384524eeffb41123f6f9219bb3601e36df0d977d13c3a5cef.scope.
Nov 25 08:41:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32be621f60741422b1f731e7ba45267822000131742089d35f485bb2b223f009/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32be621f60741422b1f731e7ba45267822000131742089d35f485bb2b223f009/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32be621f60741422b1f731e7ba45267822000131742089d35f485bb2b223f009/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32be621f60741422b1f731e7ba45267822000131742089d35f485bb2b223f009/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:41:38 compute-0 podman[346962]: 2025-11-25 08:41:38.395283271 +0000 UTC m=+0.196147476 container init c841ad7c752172f384524eeffb41123f6f9219bb3601e36df0d977d13c3a5cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_almeida, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:41:38 compute-0 podman[346962]: 2025-11-25 08:41:38.407872814 +0000 UTC m=+0.208736969 container start c841ad7c752172f384524eeffb41123f6f9219bb3601e36df0d977d13c3a5cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_almeida, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 08:41:38 compute-0 podman[346962]: 2025-11-25 08:41:38.429133034 +0000 UTC m=+0.229997229 container attach c841ad7c752172f384524eeffb41123f6f9219bb3601e36df0d977d13c3a5cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_almeida, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:41:38 compute-0 nova_compute[253538]: 2025-11-25 08:41:38.978 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:41:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 167 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 25 08:41:39 compute-0 quirky_almeida[346978]: {
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:         "osd_id": 1,
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:         "type": "bluestore"
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:     },
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:         "osd_id": 2,
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:         "type": "bluestore"
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:     },
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:         "osd_id": 0,
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:         "type": "bluestore"
Nov 25 08:41:39 compute-0 quirky_almeida[346978]:     }
Nov 25 08:41:39 compute-0 quirky_almeida[346978]: }
Nov 25 08:41:39 compute-0 systemd[1]: libpod-c841ad7c752172f384524eeffb41123f6f9219bb3601e36df0d977d13c3a5cef.scope: Deactivated successfully.
Nov 25 08:41:39 compute-0 systemd[1]: libpod-c841ad7c752172f384524eeffb41123f6f9219bb3601e36df0d977d13c3a5cef.scope: Consumed 1.198s CPU time.
Nov 25 08:41:39 compute-0 nova_compute[253538]: 2025-11-25 08:41:39.601 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:39 compute-0 podman[347011]: 2025-11-25 08:41:39.667290905 +0000 UTC m=+0.047869975 container died c841ad7c752172f384524eeffb41123f6f9219bb3601e36df0d977d13c3a5cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_almeida, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:41:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-32be621f60741422b1f731e7ba45267822000131742089d35f485bb2b223f009-merged.mount: Deactivated successfully.
Nov 25 08:41:39 compute-0 podman[347011]: 2025-11-25 08:41:39.767215048 +0000 UTC m=+0.147794038 container remove c841ad7c752172f384524eeffb41123f6f9219bb3601e36df0d977d13c3a5cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 08:41:39 compute-0 systemd[1]: libpod-conmon-c841ad7c752172f384524eeffb41123f6f9219bb3601e36df0d977d13c3a5cef.scope: Deactivated successfully.
Nov 25 08:41:39 compute-0 sudo[346855]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:41:39 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:41:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:41:39 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:41:39 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 2025ee0a-5941-4bf8-a436-17a696d3690f does not exist
Nov 25 08:41:39 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 06021d8f-af8b-4c78-820b-76df2e373890 does not exist
Nov 25 08:41:39 compute-0 sudo[347026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:41:39 compute-0 sudo[347026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:39 compute-0 sudo[347026]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:40 compute-0 sudo[347051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:41:40 compute-0 sudo[347051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:41:40 compute-0 sudo[347051]: pam_unix(sudo:session): session closed for user root
Nov 25 08:41:40 compute-0 ceph-mon[75015]: pgmap v1776: 321 pgs: 321 active+clean; 167 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 25 08:41:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:41:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:41:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:41.066 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:41.066 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:41.067 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 167 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 545 B/s rd, 0 B/s wr, 1 op/s
Nov 25 08:41:41 compute-0 nova_compute[253538]: 2025-11-25 08:41:41.745 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:41:42 compute-0 nova_compute[253538]: 2025-11-25 08:41:42.312 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "5f960b00-a365-4665-8a74-50d2e7b7f940" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:42 compute-0 nova_compute[253538]: 2025-11-25 08:41:42.313 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "5f960b00-a365-4665-8a74-50d2e7b7f940" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:42 compute-0 nova_compute[253538]: 2025-11-25 08:41:42.330 253542 DEBUG nova.compute.manager [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:41:42 compute-0 nova_compute[253538]: 2025-11-25 08:41:42.414 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:42 compute-0 nova_compute[253538]: 2025-11-25 08:41:42.415 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:42 compute-0 nova_compute[253538]: 2025-11-25 08:41:42.424 253542 DEBUG nova.virt.hardware [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:41:42 compute-0 nova_compute[253538]: 2025-11-25 08:41:42.425 253542 INFO nova.compute.claims [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:41:42 compute-0 ceph-mon[75015]: pgmap v1777: 321 pgs: 321 active+clean; 167 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 545 B/s rd, 0 B/s wr, 1 op/s
Nov 25 08:41:42 compute-0 nova_compute[253538]: 2025-11-25 08:41:42.571 253542 DEBUG oslo_concurrency.processutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:41:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1813177894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.057 253542 DEBUG oslo_concurrency.processutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.066 253542 DEBUG nova.compute.provider_tree [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.079 253542 DEBUG nova.scheduler.client.report [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.100 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.101 253542 DEBUG nova.compute.manager [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.143 253542 DEBUG nova.compute.manager [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.144 253542 DEBUG nova.network.neutron [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.161 253542 INFO nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.177 253542 DEBUG nova.compute.manager [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.299 253542 DEBUG nova.compute.manager [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.301 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.302 253542 INFO nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Creating image(s)
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.338 253542 DEBUG nova.storage.rbd_utils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 5f960b00-a365-4665-8a74-50d2e7b7f940_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.379 253542 DEBUG nova.storage.rbd_utils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 5f960b00-a365-4665-8a74-50d2e7b7f940_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.425 253542 DEBUG nova.storage.rbd_utils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 5f960b00-a365-4665-8a74-50d2e7b7f940_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.431 253542 DEBUG oslo_concurrency.processutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 167 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 op/s
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.485 253542 DEBUG nova.policy [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '66e1f27ea22d4ee08a0a470a8c18135e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '295fcc758cf24ab4b01eb393f4863e36', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.539 253542 DEBUG oslo_concurrency.processutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.540 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.541 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.541 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1813177894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.582 253542 DEBUG nova.storage.rbd_utils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 5f960b00-a365-4665-8a74-50d2e7b7f940_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:41:43 compute-0 nova_compute[253538]: 2025-11-25 08:41:43.589 253542 DEBUG oslo_concurrency.processutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5f960b00-a365-4665-8a74-50d2e7b7f940_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:44 compute-0 nova_compute[253538]: 2025-11-25 08:41:44.028 253542 DEBUG oslo_concurrency.processutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5f960b00-a365-4665-8a74-50d2e7b7f940_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:44 compute-0 nova_compute[253538]: 2025-11-25 08:41:44.120 253542 DEBUG nova.storage.rbd_utils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] resizing rbd image 5f960b00-a365-4665-8a74-50d2e7b7f940_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:41:44 compute-0 nova_compute[253538]: 2025-11-25 08:41:44.222 253542 DEBUG nova.objects.instance [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'migration_context' on Instance uuid 5f960b00-a365-4665-8a74-50d2e7b7f940 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:41:44 compute-0 nova_compute[253538]: 2025-11-25 08:41:44.237 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:41:44 compute-0 nova_compute[253538]: 2025-11-25 08:41:44.237 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Ensure instance console log exists: /var/lib/nova/instances/5f960b00-a365-4665-8a74-50d2e7b7f940/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:41:44 compute-0 nova_compute[253538]: 2025-11-25 08:41:44.237 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:44 compute-0 nova_compute[253538]: 2025-11-25 08:41:44.238 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:44 compute-0 nova_compute[253538]: 2025-11-25 08:41:44.238 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:44 compute-0 nova_compute[253538]: 2025-11-25 08:41:44.335 253542 DEBUG nova.network.neutron [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Successfully created port: 957fffc1-ba49-42af-b933-a544944131aa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:41:44 compute-0 ceph-mon[75015]: pgmap v1778: 321 pgs: 321 active+clean; 167 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 op/s
Nov 25 08:41:44 compute-0 nova_compute[253538]: 2025-11-25 08:41:44.605 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 195 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 1.7 MiB/s wr, 5 op/s
Nov 25 08:41:45 compute-0 nova_compute[253538]: 2025-11-25 08:41:45.830 253542 DEBUG nova.network.neutron [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Successfully updated port: 957fffc1-ba49-42af-b933-a544944131aa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:41:45 compute-0 nova_compute[253538]: 2025-11-25 08:41:45.850 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "refresh_cache-5f960b00-a365-4665-8a74-50d2e7b7f940" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:41:45 compute-0 nova_compute[253538]: 2025-11-25 08:41:45.850 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquired lock "refresh_cache-5f960b00-a365-4665-8a74-50d2e7b7f940" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:41:45 compute-0 nova_compute[253538]: 2025-11-25 08:41:45.851 253542 DEBUG nova.network.neutron [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:41:46 compute-0 nova_compute[253538]: 2025-11-25 08:41:46.079 253542 DEBUG nova.compute.manager [req-63721a68-9ce7-4a41-89fa-625bc5534c94 req-d50b9631-8c68-4d3b-a0d7-5e91e11517d6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Received event network-changed-957fffc1-ba49-42af-b933-a544944131aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:41:46 compute-0 nova_compute[253538]: 2025-11-25 08:41:46.080 253542 DEBUG nova.compute.manager [req-63721a68-9ce7-4a41-89fa-625bc5534c94 req-d50b9631-8c68-4d3b-a0d7-5e91e11517d6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Refreshing instance network info cache due to event network-changed-957fffc1-ba49-42af-b933-a544944131aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:41:46 compute-0 nova_compute[253538]: 2025-11-25 08:41:46.080 253542 DEBUG oslo_concurrency.lockutils [req-63721a68-9ce7-4a41-89fa-625bc5534c94 req-d50b9631-8c68-4d3b-a0d7-5e91e11517d6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5f960b00-a365-4665-8a74-50d2e7b7f940" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:41:46 compute-0 ceph-mon[75015]: pgmap v1779: 321 pgs: 321 active+clean; 195 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 1.7 MiB/s wr, 5 op/s
Nov 25 08:41:46 compute-0 nova_compute[253538]: 2025-11-25 08:41:46.721 253542 DEBUG nova.network.neutron [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:41:46 compute-0 nova_compute[253538]: 2025-11-25 08:41:46.748 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:41:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 206 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.9 MiB/s wr, 19 op/s
Nov 25 08:41:47 compute-0 ceph-mon[75015]: pgmap v1780: 321 pgs: 321 active+clean; 206 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.9 MiB/s wr, 19 op/s
Nov 25 08:41:47 compute-0 podman[347264]: 2025-11-25 08:41:47.866679142 +0000 UTC m=+0.089812069 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.177 253542 DEBUG nova.network.neutron [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Updating instance_info_cache with network_info: [{"id": "957fffc1-ba49-42af-b933-a544944131aa", "address": "fa:16:3e:5b:b1:e0", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap957fffc1-ba", "ovs_interfaceid": "957fffc1-ba49-42af-b933-a544944131aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.197 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Releasing lock "refresh_cache-5f960b00-a365-4665-8a74-50d2e7b7f940" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.197 253542 DEBUG nova.compute.manager [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Instance network_info: |[{"id": "957fffc1-ba49-42af-b933-a544944131aa", "address": "fa:16:3e:5b:b1:e0", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap957fffc1-ba", "ovs_interfaceid": "957fffc1-ba49-42af-b933-a544944131aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.198 253542 DEBUG oslo_concurrency.lockutils [req-63721a68-9ce7-4a41-89fa-625bc5534c94 req-d50b9631-8c68-4d3b-a0d7-5e91e11517d6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5f960b00-a365-4665-8a74-50d2e7b7f940" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.198 253542 DEBUG nova.network.neutron [req-63721a68-9ce7-4a41-89fa-625bc5534c94 req-d50b9631-8c68-4d3b-a0d7-5e91e11517d6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Refreshing network info cache for port 957fffc1-ba49-42af-b933-a544944131aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.201 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Start _get_guest_xml network_info=[{"id": "957fffc1-ba49-42af-b933-a544944131aa", "address": "fa:16:3e:5b:b1:e0", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap957fffc1-ba", "ovs_interfaceid": "957fffc1-ba49-42af-b933-a544944131aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.206 253542 WARNING nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.218 253542 DEBUG nova.virt.libvirt.host [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.219 253542 DEBUG nova.virt.libvirt.host [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.264 253542 DEBUG nova.virt.libvirt.host [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.265 253542 DEBUG nova.virt.libvirt.host [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.265 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.265 253542 DEBUG nova.virt.hardware [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.266 253542 DEBUG nova.virt.hardware [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.266 253542 DEBUG nova.virt.hardware [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.267 253542 DEBUG nova.virt.hardware [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.267 253542 DEBUG nova.virt.hardware [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.267 253542 DEBUG nova.virt.hardware [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.267 253542 DEBUG nova.virt.hardware [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.268 253542 DEBUG nova.virt.hardware [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.268 253542 DEBUG nova.virt.hardware [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.268 253542 DEBUG nova.virt.hardware [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.269 253542 DEBUG nova.virt.hardware [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.272 253542 DEBUG oslo_concurrency.processutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:41:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3260726527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.744 253542 DEBUG oslo_concurrency.processutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.771 253542 DEBUG nova.storage.rbd_utils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 5f960b00-a365-4665-8a74-50d2e7b7f940_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:41:48 compute-0 nova_compute[253538]: 2025-11-25 08:41:48.775 253542 DEBUG oslo_concurrency.processutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:48 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3260726527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:41:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:41:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1462868593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.250 253542 DEBUG oslo_concurrency.processutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.253 253542 DEBUG nova.virt.libvirt.vif [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:41:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-402804146',display_name='tempest-ServerActionsTestOtherB-server-402804146',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-402804146',id=97,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-totndcwk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestO
therB-587178207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:41:43Z,user_data=None,user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=5f960b00-a365-4665-8a74-50d2e7b7f940,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "957fffc1-ba49-42af-b933-a544944131aa", "address": "fa:16:3e:5b:b1:e0", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap957fffc1-ba", "ovs_interfaceid": "957fffc1-ba49-42af-b933-a544944131aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.254 253542 DEBUG nova.network.os_vif_util [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "957fffc1-ba49-42af-b933-a544944131aa", "address": "fa:16:3e:5b:b1:e0", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap957fffc1-ba", "ovs_interfaceid": "957fffc1-ba49-42af-b933-a544944131aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.256 253542 DEBUG nova.network.os_vif_util [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b1:e0,bridge_name='br-int',has_traffic_filtering=True,id=957fffc1-ba49-42af-b933-a544944131aa,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap957fffc1-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.258 253542 DEBUG nova.objects.instance [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f960b00-a365-4665-8a74-50d2e7b7f940 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.276 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:41:49 compute-0 nova_compute[253538]:   <uuid>5f960b00-a365-4665-8a74-50d2e7b7f940</uuid>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   <name>instance-00000061</name>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerActionsTestOtherB-server-402804146</nova:name>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:41:48</nova:creationTime>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:41:49 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:41:49 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:41:49 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:41:49 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:41:49 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:41:49 compute-0 nova_compute[253538]:         <nova:user uuid="66e1f27ea22d4ee08a0a470a8c18135e">tempest-ServerActionsTestOtherB-587178207-project-member</nova:user>
Nov 25 08:41:49 compute-0 nova_compute[253538]:         <nova:project uuid="295fcc758cf24ab4b01eb393f4863e36">tempest-ServerActionsTestOtherB-587178207</nova:project>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:41:49 compute-0 nova_compute[253538]:         <nova:port uuid="957fffc1-ba49-42af-b933-a544944131aa">
Nov 25 08:41:49 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <system>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <entry name="serial">5f960b00-a365-4665-8a74-50d2e7b7f940</entry>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <entry name="uuid">5f960b00-a365-4665-8a74-50d2e7b7f940</entry>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     </system>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   <os>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   </os>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   <features>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   </features>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/5f960b00-a365-4665-8a74-50d2e7b7f940_disk">
Nov 25 08:41:49 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       </source>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:41:49 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/5f960b00-a365-4665-8a74-50d2e7b7f940_disk.config">
Nov 25 08:41:49 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       </source>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:41:49 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:5b:b1:e0"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <target dev="tap957fffc1-ba"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/5f960b00-a365-4665-8a74-50d2e7b7f940/console.log" append="off"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <video>
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     </video>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:41:49 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:41:49 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:41:49 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:41:49 compute-0 nova_compute[253538]: </domain>
Nov 25 08:41:49 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.278 253542 DEBUG nova.compute.manager [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Preparing to wait for external event network-vif-plugged-957fffc1-ba49-42af-b933-a544944131aa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.279 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "5f960b00-a365-4665-8a74-50d2e7b7f940-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.280 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "5f960b00-a365-4665-8a74-50d2e7b7f940-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.280 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "5f960b00-a365-4665-8a74-50d2e7b7f940-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.281 253542 DEBUG nova.virt.libvirt.vif [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:41:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-402804146',display_name='tempest-ServerActionsTestOtherB-server-402804146',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-402804146',id=97,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-totndcwk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerAc
tionsTestOtherB-587178207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:41:43Z,user_data=None,user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=5f960b00-a365-4665-8a74-50d2e7b7f940,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "957fffc1-ba49-42af-b933-a544944131aa", "address": "fa:16:3e:5b:b1:e0", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap957fffc1-ba", "ovs_interfaceid": "957fffc1-ba49-42af-b933-a544944131aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.282 253542 DEBUG nova.network.os_vif_util [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "957fffc1-ba49-42af-b933-a544944131aa", "address": "fa:16:3e:5b:b1:e0", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap957fffc1-ba", "ovs_interfaceid": "957fffc1-ba49-42af-b933-a544944131aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.283 253542 DEBUG nova.network.os_vif_util [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b1:e0,bridge_name='br-int',has_traffic_filtering=True,id=957fffc1-ba49-42af-b933-a544944131aa,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap957fffc1-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.284 253542 DEBUG os_vif [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b1:e0,bridge_name='br-int',has_traffic_filtering=True,id=957fffc1-ba49-42af-b933-a544944131aa,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap957fffc1-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.286 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.287 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.292 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.292 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap957fffc1-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.293 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap957fffc1-ba, col_values=(('external_ids', {'iface-id': '957fffc1-ba49-42af-b933-a544944131aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:b1:e0', 'vm-uuid': '5f960b00-a365-4665-8a74-50d2e7b7f940'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.296 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:49 compute-0 NetworkManager[48915]: <info>  [1764060109.2976] manager: (tap957fffc1-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.300 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.306 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.307 253542 INFO os_vif [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b1:e0,bridge_name='br-int',has_traffic_filtering=True,id=957fffc1-ba49-42af-b933-a544944131aa,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap957fffc1-ba')
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.361 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.361 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.362 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No VIF found with MAC fa:16:3e:5b:b1:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.362 253542 INFO nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Using config drive
Nov 25 08:41:49 compute-0 nova_compute[253538]: 2025-11-25 08:41:49.391 253542 DEBUG nova.storage.rbd_utils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 5f960b00-a365-4665-8a74-50d2e7b7f940_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:41:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 213 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 25 08:41:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1462868593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:41:49 compute-0 ceph-mon[75015]: pgmap v1781: 321 pgs: 321 active+clean; 213 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 25 08:41:50 compute-0 nova_compute[253538]: 2025-11-25 08:41:50.289 253542 INFO nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Creating config drive at /var/lib/nova/instances/5f960b00-a365-4665-8a74-50d2e7b7f940/disk.config
Nov 25 08:41:50 compute-0 nova_compute[253538]: 2025-11-25 08:41:50.297 253542 DEBUG oslo_concurrency.processutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5f960b00-a365-4665-8a74-50d2e7b7f940/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp14alyzh3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:50 compute-0 nova_compute[253538]: 2025-11-25 08:41:50.462 253542 DEBUG oslo_concurrency.processutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5f960b00-a365-4665-8a74-50d2e7b7f940/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp14alyzh3" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:50 compute-0 nova_compute[253538]: 2025-11-25 08:41:50.507 253542 DEBUG nova.storage.rbd_utils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 5f960b00-a365-4665-8a74-50d2e7b7f940_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:41:50 compute-0 nova_compute[253538]: 2025-11-25 08:41:50.512 253542 DEBUG oslo_concurrency.processutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5f960b00-a365-4665-8a74-50d2e7b7f940/disk.config 5f960b00-a365-4665-8a74-50d2e7b7f940_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:41:50 compute-0 nova_compute[253538]: 2025-11-25 08:41:50.714 253542 DEBUG oslo_concurrency.processutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5f960b00-a365-4665-8a74-50d2e7b7f940/disk.config 5f960b00-a365-4665-8a74-50d2e7b7f940_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:41:50 compute-0 nova_compute[253538]: 2025-11-25 08:41:50.714 253542 INFO nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Deleting local config drive /var/lib/nova/instances/5f960b00-a365-4665-8a74-50d2e7b7f940/disk.config because it was imported into RBD.
Nov 25 08:41:50 compute-0 kernel: tap957fffc1-ba: entered promiscuous mode
Nov 25 08:41:50 compute-0 NetworkManager[48915]: <info>  [1764060110.7902] manager: (tap957fffc1-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/387)
Nov 25 08:41:50 compute-0 ovn_controller[152859]: 2025-11-25T08:41:50Z|00941|binding|INFO|Claiming lport 957fffc1-ba49-42af-b933-a544944131aa for this chassis.
Nov 25 08:41:50 compute-0 ovn_controller[152859]: 2025-11-25T08:41:50Z|00942|binding|INFO|957fffc1-ba49-42af-b933-a544944131aa: Claiming fa:16:3e:5b:b1:e0 10.100.0.8
Nov 25 08:41:50 compute-0 nova_compute[253538]: 2025-11-25 08:41:50.791 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:50.805 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:b1:e0 10.100.0.8'], port_security=['fa:16:3e:5b:b1:e0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5f960b00-a365-4665-8a74-50d2e7b7f940', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '295fcc758cf24ab4b01eb393f4863e36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f53cda67-e087-4973-b43b-027ef8b57bb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31935ac8-a5f7-4fad-9e0b-ee28fade4ce4, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=957fffc1-ba49-42af-b933-a544944131aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:41:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:50.807 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 957fffc1-ba49-42af-b933-a544944131aa in datapath 6e77a51d-2695-4e70-8b9d-c02ec0c62f35 bound to our chassis
Nov 25 08:41:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:50.809 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e77a51d-2695-4e70-8b9d-c02ec0c62f35
Nov 25 08:41:50 compute-0 ovn_controller[152859]: 2025-11-25T08:41:50Z|00943|binding|INFO|Setting lport 957fffc1-ba49-42af-b933-a544944131aa ovn-installed in OVS
Nov 25 08:41:50 compute-0 ovn_controller[152859]: 2025-11-25T08:41:50Z|00944|binding|INFO|Setting lport 957fffc1-ba49-42af-b933-a544944131aa up in Southbound
Nov 25 08:41:50 compute-0 nova_compute[253538]: 2025-11-25 08:41:50.827 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:50 compute-0 nova_compute[253538]: 2025-11-25 08:41:50.830 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:50.831 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae53566-3260-4c87-bff8-497966928b26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:50 compute-0 systemd-machined[215790]: New machine qemu-119-instance-00000061.
Nov 25 08:41:50 compute-0 systemd-udevd[347421]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:41:50 compute-0 systemd[1]: Started Virtual Machine qemu-119-instance-00000061.
Nov 25 08:41:50 compute-0 NetworkManager[48915]: <info>  [1764060110.8681] device (tap957fffc1-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:41:50 compute-0 NetworkManager[48915]: <info>  [1764060110.8705] device (tap957fffc1-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:41:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:50.883 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[42706d62-bb98-4a0c-ba09-07661e9c30d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:50.887 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[440e311c-94c7-4b0d-9ce4-3436fcb2d9fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:50.936 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6f161972-99ff-40ac-a17b-046bc375b560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:50.957 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1991af00-9b67-4c2a-8d3e-8f4f1f3e068b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e77a51d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:6f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538137, 'reachable_time': 29158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347432, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:50.978 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[702cfbfa-2de6-43d8-863e-804e3fa19870]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538152, 'tstamp': 538152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347434, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538156, 'tstamp': 538156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347434, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:41:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:50.980 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e77a51d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:41:50 compute-0 nova_compute[253538]: 2025-11-25 08:41:50.982 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:50 compute-0 nova_compute[253538]: 2025-11-25 08:41:50.983 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:50.984 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e77a51d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:41:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:50.984 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:41:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:50.985 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e77a51d-20, col_values=(('external_ids', {'iface-id': '66275e2b-0197-461a-9be3-ae2fe1aec502'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:41:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:41:50.985 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.121 253542 DEBUG nova.network.neutron [req-63721a68-9ce7-4a41-89fa-625bc5534c94 req-d50b9631-8c68-4d3b-a0d7-5e91e11517d6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Updated VIF entry in instance network info cache for port 957fffc1-ba49-42af-b933-a544944131aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.122 253542 DEBUG nova.network.neutron [req-63721a68-9ce7-4a41-89fa-625bc5534c94 req-d50b9631-8c68-4d3b-a0d7-5e91e11517d6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Updating instance_info_cache with network_info: [{"id": "957fffc1-ba49-42af-b933-a544944131aa", "address": "fa:16:3e:5b:b1:e0", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap957fffc1-ba", "ovs_interfaceid": "957fffc1-ba49-42af-b933-a544944131aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.137 253542 DEBUG oslo_concurrency.lockutils [req-63721a68-9ce7-4a41-89fa-625bc5534c94 req-d50b9631-8c68-4d3b-a0d7-5e91e11517d6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5f960b00-a365-4665-8a74-50d2e7b7f940" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.404 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060111.4044337, 5f960b00-a365-4665-8a74-50d2e7b7f940 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.405 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] VM Started (Lifecycle Event)
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.422 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.428 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060111.409004, 5f960b00-a365-4665-8a74-50d2e7b7f940 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.429 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] VM Paused (Lifecycle Event)
Nov 25 08:41:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 213 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.449 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.453 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.482 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.744 253542 DEBUG nova.compute.manager [req-7beaa3e5-87c9-4741-9cdf-2dda73994c6f req-ffdf00f1-2107-425b-8ae4-01df3ecfc402 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Received event network-vif-plugged-957fffc1-ba49-42af-b933-a544944131aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.745 253542 DEBUG oslo_concurrency.lockutils [req-7beaa3e5-87c9-4741-9cdf-2dda73994c6f req-ffdf00f1-2107-425b-8ae4-01df3ecfc402 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5f960b00-a365-4665-8a74-50d2e7b7f940-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.745 253542 DEBUG oslo_concurrency.lockutils [req-7beaa3e5-87c9-4741-9cdf-2dda73994c6f req-ffdf00f1-2107-425b-8ae4-01df3ecfc402 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5f960b00-a365-4665-8a74-50d2e7b7f940-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.746 253542 DEBUG oslo_concurrency.lockutils [req-7beaa3e5-87c9-4741-9cdf-2dda73994c6f req-ffdf00f1-2107-425b-8ae4-01df3ecfc402 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5f960b00-a365-4665-8a74-50d2e7b7f940-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.746 253542 DEBUG nova.compute.manager [req-7beaa3e5-87c9-4741-9cdf-2dda73994c6f req-ffdf00f1-2107-425b-8ae4-01df3ecfc402 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Processing event network-vif-plugged-957fffc1-ba49-42af-b933-a544944131aa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.747 253542 DEBUG nova.compute.manager [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.750 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.753 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060111.753022, 5f960b00-a365-4665-8a74-50d2e7b7f940 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.754 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] VM Resumed (Lifecycle Event)
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.756 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.760 253542 INFO nova.virt.libvirt.driver [-] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Instance spawned successfully.
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.761 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.772 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.779 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.783 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.783 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.784 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.784 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.785 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.785 253542 DEBUG nova.virt.libvirt.driver [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:41:51 compute-0 nova_compute[253538]: 2025-11-25 08:41:51.809 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:41:52 compute-0 nova_compute[253538]: 2025-11-25 08:41:52.017 253542 INFO nova.compute.manager [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Took 8.72 seconds to spawn the instance on the hypervisor.
Nov 25 08:41:52 compute-0 nova_compute[253538]: 2025-11-25 08:41:52.017 253542 DEBUG nova.compute.manager [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:41:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:41:52 compute-0 nova_compute[253538]: 2025-11-25 08:41:52.097 253542 INFO nova.compute.manager [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Took 9.72 seconds to build instance.
Nov 25 08:41:52 compute-0 nova_compute[253538]: 2025-11-25 08:41:52.148 253542 DEBUG oslo_concurrency.lockutils [None req-b8a47de0-6d2d-4858-8a33-8dad644523ea 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "5f960b00-a365-4665-8a74-50d2e7b7f940" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:52 compute-0 ceph-mon[75015]: pgmap v1782: 321 pgs: 321 active+clean; 213 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 25 08:41:52 compute-0 podman[347477]: 2025-11-25 08:41:52.86523847 +0000 UTC m=+0.104939120 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:41:53
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'vms', '.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.control', '.mgr']
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 213 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 172 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:41:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:41:54 compute-0 nova_compute[253538]: 2025-11-25 08:41:54.023 253542 DEBUG nova.compute.manager [req-6795285c-d413-4f9f-ba94-7e3d593b3dbd req-c6b7a7ed-969b-40d2-aaf3-51b028c12f65 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Received event network-vif-plugged-957fffc1-ba49-42af-b933-a544944131aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:41:54 compute-0 nova_compute[253538]: 2025-11-25 08:41:54.025 253542 DEBUG oslo_concurrency.lockutils [req-6795285c-d413-4f9f-ba94-7e3d593b3dbd req-c6b7a7ed-969b-40d2-aaf3-51b028c12f65 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5f960b00-a365-4665-8a74-50d2e7b7f940-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:41:54 compute-0 nova_compute[253538]: 2025-11-25 08:41:54.025 253542 DEBUG oslo_concurrency.lockutils [req-6795285c-d413-4f9f-ba94-7e3d593b3dbd req-c6b7a7ed-969b-40d2-aaf3-51b028c12f65 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5f960b00-a365-4665-8a74-50d2e7b7f940-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:41:54 compute-0 nova_compute[253538]: 2025-11-25 08:41:54.026 253542 DEBUG oslo_concurrency.lockutils [req-6795285c-d413-4f9f-ba94-7e3d593b3dbd req-c6b7a7ed-969b-40d2-aaf3-51b028c12f65 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5f960b00-a365-4665-8a74-50d2e7b7f940-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:41:54 compute-0 nova_compute[253538]: 2025-11-25 08:41:54.026 253542 DEBUG nova.compute.manager [req-6795285c-d413-4f9f-ba94-7e3d593b3dbd req-c6b7a7ed-969b-40d2-aaf3-51b028c12f65 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] No waiting events found dispatching network-vif-plugged-957fffc1-ba49-42af-b933-a544944131aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:41:54 compute-0 nova_compute[253538]: 2025-11-25 08:41:54.026 253542 WARNING nova.compute.manager [req-6795285c-d413-4f9f-ba94-7e3d593b3dbd req-c6b7a7ed-969b-40d2-aaf3-51b028c12f65 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Received unexpected event network-vif-plugged-957fffc1-ba49-42af-b933-a544944131aa for instance with vm_state active and task_state None.
Nov 25 08:41:54 compute-0 nova_compute[253538]: 2025-11-25 08:41:54.298 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:54 compute-0 nova_compute[253538]: 2025-11-25 08:41:54.373 253542 INFO nova.compute.manager [None req-9eed2e6a-89b6-4424-a102-da7dfcf702f1 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Get console output
Nov 25 08:41:54 compute-0 nova_compute[253538]: 2025-11-25 08:41:54.378 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:41:54 compute-0 ceph-mon[75015]: pgmap v1783: 321 pgs: 321 active+clean; 213 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 172 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 25 08:41:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 213 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Nov 25 08:41:55 compute-0 podman[347497]: 2025-11-25 08:41:55.843231015 +0000 UTC m=+0.098686190 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 08:41:56 compute-0 ceph-mon[75015]: pgmap v1784: 321 pgs: 321 active+clean; 213 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Nov 25 08:41:56 compute-0 nova_compute[253538]: 2025-11-25 08:41:56.753 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:41:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1785: 321 pgs: 321 active+clean; 213 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 396 KiB/s wr, 99 op/s
Nov 25 08:41:57 compute-0 ceph-mon[75015]: pgmap v1785: 321 pgs: 321 active+clean; 213 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 396 KiB/s wr, 99 op/s
Nov 25 08:41:59 compute-0 nova_compute[253538]: 2025-11-25 08:41:59.302 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:41:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 213 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 150 KiB/s wr, 86 op/s
Nov 25 08:42:00 compute-0 ceph-mon[75015]: pgmap v1786: 321 pgs: 321 active+clean; 213 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 150 KiB/s wr, 86 op/s
Nov 25 08:42:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1787: 321 pgs: 321 active+clean; 213 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 08:42:01 compute-0 nova_compute[253538]: 2025-11-25 08:42:01.755 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.046 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "85d7ff38-1884-4942-82fe-fb79122afe63" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.047 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.066 253542 DEBUG nova.compute.manager [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.155 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.156 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.165 253542 DEBUG nova.virt.hardware [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.165 253542 INFO nova.compute.claims [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.298 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:02 compute-0 ceph-mon[75015]: pgmap v1787: 321 pgs: 321 active+clean; 213 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 08:42:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:42:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2576745492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.784 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.794 253542 DEBUG nova.compute.provider_tree [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.809 253542 DEBUG nova.scheduler.client.report [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.836 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.837 253542 DEBUG nova.compute.manager [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.892 253542 DEBUG nova.compute.manager [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.892 253542 DEBUG nova.network.neutron [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.919 253542 INFO nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:42:02 compute-0 nova_compute[253538]: 2025-11-25 08:42:02.942 253542 DEBUG nova.compute.manager [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.036 253542 DEBUG nova.compute.manager [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.038 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.038 253542 INFO nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Creating image(s)
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.089 253542 DEBUG nova.storage.rbd_utils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 85d7ff38-1884-4942-82fe-fb79122afe63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.135 253542 DEBUG nova.storage.rbd_utils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 85d7ff38-1884-4942-82fe-fb79122afe63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.171 253542 DEBUG nova.storage.rbd_utils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 85d7ff38-1884-4942-82fe-fb79122afe63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.175 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.220 253542 DEBUG nova.policy [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '66e1f27ea22d4ee08a0a470a8c18135e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '295fcc758cf24ab4b01eb393f4863e36', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.263 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.263 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.264 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.264 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.285 253542 DEBUG nova.storage.rbd_utils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 85d7ff38-1884-4942-82fe-fb79122afe63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.290 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 85d7ff38-1884-4942-82fe-fb79122afe63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 218 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 481 KiB/s wr, 83 op/s
Nov 25 08:42:03 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2576745492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.636 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 85d7ff38-1884-4942-82fe-fb79122afe63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.705 253542 DEBUG nova.storage.rbd_utils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] resizing rbd image 85d7ff38-1884-4942-82fe-fb79122afe63_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.807 253542 DEBUG nova.objects.instance [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'migration_context' on Instance uuid 85d7ff38-1884-4942-82fe-fb79122afe63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.822 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.822 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Ensure instance console log exists: /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.823 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.823 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:03 compute-0 nova_compute[253538]: 2025-11-25 08:42:03.824 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001197413028260728 of space, bias 1.0, pg target 0.3592239084782184 quantized to 32 (current 32)
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.3036329117015042 quantized to 32 (current 32)
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:42:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:42:04 compute-0 nova_compute[253538]: 2025-11-25 08:42:04.219 253542 DEBUG nova.network.neutron [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Successfully created port: 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:42:04 compute-0 nova_compute[253538]: 2025-11-25 08:42:04.306 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:04 compute-0 ceph-mon[75015]: pgmap v1788: 321 pgs: 321 active+clean; 218 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 481 KiB/s wr, 83 op/s
Nov 25 08:42:04 compute-0 ovn_controller[152859]: 2025-11-25T08:42:04Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:b1:e0 10.100.0.8
Nov 25 08:42:04 compute-0 ovn_controller[152859]: 2025-11-25T08:42:04Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:b1:e0 10.100.0.8
Nov 25 08:42:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 241 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 98 op/s
Nov 25 08:42:06 compute-0 nova_compute[253538]: 2025-11-25 08:42:06.014 253542 DEBUG nova.network.neutron [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Successfully updated port: 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:42:06 compute-0 nova_compute[253538]: 2025-11-25 08:42:06.034 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:42:06 compute-0 nova_compute[253538]: 2025-11-25 08:42:06.034 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquired lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:42:06 compute-0 nova_compute[253538]: 2025-11-25 08:42:06.034 253542 DEBUG nova.network.neutron [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:42:06 compute-0 nova_compute[253538]: 2025-11-25 08:42:06.253 253542 DEBUG nova.compute.manager [req-0f668a86-ce33-4ea3-a585-4f7b44fc20c1 req-76074080-f06f-4a25-9294-5ee50fb9faba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received event network-changed-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:42:06 compute-0 nova_compute[253538]: 2025-11-25 08:42:06.254 253542 DEBUG nova.compute.manager [req-0f668a86-ce33-4ea3-a585-4f7b44fc20c1 req-76074080-f06f-4a25-9294-5ee50fb9faba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Refreshing instance network info cache due to event network-changed-3ed26b91-50ed-4d4d-ad1a-a63df94e9607. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:42:06 compute-0 nova_compute[253538]: 2025-11-25 08:42:06.255 253542 DEBUG oslo_concurrency.lockutils [req-0f668a86-ce33-4ea3-a585-4f7b44fc20c1 req-76074080-f06f-4a25-9294-5ee50fb9faba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:42:06 compute-0 nova_compute[253538]: 2025-11-25 08:42:06.322 253542 DEBUG nova.network.neutron [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:42:06 compute-0 ceph-mon[75015]: pgmap v1789: 321 pgs: 321 active+clean; 241 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 98 op/s
Nov 25 08:42:06 compute-0 nova_compute[253538]: 2025-11-25 08:42:06.758 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:42:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 273 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 3.3 MiB/s wr, 103 op/s
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.909 253542 DEBUG nova.network.neutron [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Updating instance_info_cache with network_info: [{"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ed26b91-50", "ovs_interfaceid": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.926 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Releasing lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.926 253542 DEBUG nova.compute.manager [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Instance network_info: |[{"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ed26b91-50", "ovs_interfaceid": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.927 253542 DEBUG oslo_concurrency.lockutils [req-0f668a86-ce33-4ea3-a585-4f7b44fc20c1 req-76074080-f06f-4a25-9294-5ee50fb9faba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.927 253542 DEBUG nova.network.neutron [req-0f668a86-ce33-4ea3-a585-4f7b44fc20c1 req-76074080-f06f-4a25-9294-5ee50fb9faba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Refreshing network info cache for port 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.933 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Start _get_guest_xml network_info=[{"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ed26b91-50", "ovs_interfaceid": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.940 253542 WARNING nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.946 253542 DEBUG nova.virt.libvirt.host [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.947 253542 DEBUG nova.virt.libvirt.host [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.959 253542 DEBUG nova.virt.libvirt.host [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.960 253542 DEBUG nova.virt.libvirt.host [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.960 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.961 253542 DEBUG nova.virt.hardware [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.961 253542 DEBUG nova.virt.hardware [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.962 253542 DEBUG nova.virt.hardware [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.962 253542 DEBUG nova.virt.hardware [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.962 253542 DEBUG nova.virt.hardware [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.963 253542 DEBUG nova.virt.hardware [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.963 253542 DEBUG nova.virt.hardware [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.963 253542 DEBUG nova.virt.hardware [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.964 253542 DEBUG nova.virt.hardware [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.964 253542 DEBUG nova.virt.hardware [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.965 253542 DEBUG nova.virt.hardware [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:42:07 compute-0 nova_compute[253538]: 2025-11-25 08:42:07.969 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:42:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2677051566' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.464 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.492 253542 DEBUG nova.storage.rbd_utils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 85d7ff38-1884-4942-82fe-fb79122afe63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.499 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:08 compute-0 ceph-mon[75015]: pgmap v1790: 321 pgs: 321 active+clean; 273 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 3.3 MiB/s wr, 103 op/s
Nov 25 08:42:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2677051566' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:42:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:42:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1050776929' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.948 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.951 253542 DEBUG nova.virt.libvirt.vif [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:42:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-732970817',display_name='tempest-ServerActionsTestOtherB-server-732970817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-732970817',id=98,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-0hxuw32b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestO
therB-587178207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:42:02Z,user_data=None,user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=85d7ff38-1884-4942-82fe-fb79122afe63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ed26b91-50", "ovs_interfaceid": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.952 253542 DEBUG nova.network.os_vif_util [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ed26b91-50", "ovs_interfaceid": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.953 253542 DEBUG nova.network.os_vif_util [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:bf:83,bridge_name='br-int',has_traffic_filtering=True,id=3ed26b91-50ed-4d4d-ad1a-a63df94e9607,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ed26b91-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.956 253542 DEBUG nova.objects.instance [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 85d7ff38-1884-4942-82fe-fb79122afe63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.981 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:42:08 compute-0 nova_compute[253538]:   <uuid>85d7ff38-1884-4942-82fe-fb79122afe63</uuid>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   <name>instance-00000062</name>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerActionsTestOtherB-server-732970817</nova:name>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:42:07</nova:creationTime>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:42:08 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:42:08 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:42:08 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:42:08 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:42:08 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:42:08 compute-0 nova_compute[253538]:         <nova:user uuid="66e1f27ea22d4ee08a0a470a8c18135e">tempest-ServerActionsTestOtherB-587178207-project-member</nova:user>
Nov 25 08:42:08 compute-0 nova_compute[253538]:         <nova:project uuid="295fcc758cf24ab4b01eb393f4863e36">tempest-ServerActionsTestOtherB-587178207</nova:project>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:42:08 compute-0 nova_compute[253538]:         <nova:port uuid="3ed26b91-50ed-4d4d-ad1a-a63df94e9607">
Nov 25 08:42:08 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <system>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <entry name="serial">85d7ff38-1884-4942-82fe-fb79122afe63</entry>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <entry name="uuid">85d7ff38-1884-4942-82fe-fb79122afe63</entry>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     </system>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   <os>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   </os>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   <features>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   </features>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/85d7ff38-1884-4942-82fe-fb79122afe63_disk">
Nov 25 08:42:08 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       </source>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:42:08 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/85d7ff38-1884-4942-82fe-fb79122afe63_disk.config">
Nov 25 08:42:08 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       </source>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:42:08 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:44:bf:83"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <target dev="tap3ed26b91-50"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63/console.log" append="off"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <video>
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     </video>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:42:08 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:42:08 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:42:08 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:42:08 compute-0 nova_compute[253538]: </domain>
Nov 25 08:42:08 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.983 253542 DEBUG nova.compute.manager [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Preparing to wait for external event network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.984 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.985 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.985 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.987 253542 DEBUG nova.virt.libvirt.vif [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:42:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-732970817',display_name='tempest-ServerActionsTestOtherB-server-732970817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-732970817',id=98,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-0hxuw32b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestOtherB-587178207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:42:02Z,user_data=None,user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=85d7ff38-1884-4942-82fe-fb79122afe63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ed26b91-50", "ovs_interfaceid": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.988 253542 DEBUG nova.network.os_vif_util [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ed26b91-50", "ovs_interfaceid": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.989 253542 DEBUG nova.network.os_vif_util [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:bf:83,bridge_name='br-int',has_traffic_filtering=True,id=3ed26b91-50ed-4d4d-ad1a-a63df94e9607,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ed26b91-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.990 253542 DEBUG os_vif [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:bf:83,bridge_name='br-int',has_traffic_filtering=True,id=3ed26b91-50ed-4d4d-ad1a-a63df94e9607,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ed26b91-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.991 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.992 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.993 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.997 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.998 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ed26b91-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:42:08 compute-0 nova_compute[253538]: 2025-11-25 08:42:08.999 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3ed26b91-50, col_values=(('external_ids', {'iface-id': '3ed26b91-50ed-4d4d-ad1a-a63df94e9607', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:bf:83', 'vm-uuid': '85d7ff38-1884-4942-82fe-fb79122afe63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:42:09 compute-0 nova_compute[253538]: 2025-11-25 08:42:09.001 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:09 compute-0 NetworkManager[48915]: <info>  [1764060129.0034] manager: (tap3ed26b91-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Nov 25 08:42:09 compute-0 nova_compute[253538]: 2025-11-25 08:42:09.006 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:42:09 compute-0 nova_compute[253538]: 2025-11-25 08:42:09.008 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:09 compute-0 nova_compute[253538]: 2025-11-25 08:42:09.010 253542 INFO os_vif [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:bf:83,bridge_name='br-int',has_traffic_filtering=True,id=3ed26b91-50ed-4d4d-ad1a-a63df94e9607,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ed26b91-50')
Nov 25 08:42:09 compute-0 nova_compute[253538]: 2025-11-25 08:42:09.073 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:42:09 compute-0 nova_compute[253538]: 2025-11-25 08:42:09.073 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:42:09 compute-0 nova_compute[253538]: 2025-11-25 08:42:09.074 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No VIF found with MAC fa:16:3e:44:bf:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:42:09 compute-0 nova_compute[253538]: 2025-11-25 08:42:09.074 253542 INFO nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Using config drive
Nov 25 08:42:09 compute-0 nova_compute[253538]: 2025-11-25 08:42:09.093 253542 DEBUG nova.storage.rbd_utils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 85d7ff38-1884-4942-82fe-fb79122afe63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 289 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Nov 25 08:42:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1050776929' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:42:09 compute-0 nova_compute[253538]: 2025-11-25 08:42:09.942 253542 INFO nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Creating config drive at /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63/disk.config
Nov 25 08:42:09 compute-0 nova_compute[253538]: 2025-11-25 08:42:09.952 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv2u06uy8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.116 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv2u06uy8" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.158 253542 DEBUG nova.storage.rbd_utils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 85d7ff38-1884-4942-82fe-fb79122afe63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.162 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63/disk.config 85d7ff38-1884-4942-82fe-fb79122afe63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:10 compute-0 rsyslogd[1007]: imjournal: 17176 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.283 253542 DEBUG nova.network.neutron [req-0f668a86-ce33-4ea3-a585-4f7b44fc20c1 req-76074080-f06f-4a25-9294-5ee50fb9faba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Updated VIF entry in instance network info cache for port 3ed26b91-50ed-4d4d-ad1a-a63df94e9607. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.286 253542 DEBUG nova.network.neutron [req-0f668a86-ce33-4ea3-a585-4f7b44fc20c1 req-76074080-f06f-4a25-9294-5ee50fb9faba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Updating instance_info_cache with network_info: [{"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ed26b91-50", "ovs_interfaceid": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.312 253542 DEBUG oslo_concurrency.lockutils [req-0f668a86-ce33-4ea3-a585-4f7b44fc20c1 req-76074080-f06f-4a25-9294-5ee50fb9faba b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.357 253542 DEBUG oslo_concurrency.processutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63/disk.config 85d7ff38-1884-4942-82fe-fb79122afe63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.358 253542 INFO nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Deleting local config drive /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63/disk.config because it was imported into RBD.
Nov 25 08:42:10 compute-0 kernel: tap3ed26b91-50: entered promiscuous mode
Nov 25 08:42:10 compute-0 NetworkManager[48915]: <info>  [1764060130.4325] manager: (tap3ed26b91-50): new Tun device (/org/freedesktop/NetworkManager/Devices/389)
Nov 25 08:42:10 compute-0 ovn_controller[152859]: 2025-11-25T08:42:10Z|00945|binding|INFO|Claiming lport 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 for this chassis.
Nov 25 08:42:10 compute-0 ovn_controller[152859]: 2025-11-25T08:42:10Z|00946|binding|INFO|3ed26b91-50ed-4d4d-ad1a-a63df94e9607: Claiming fa:16:3e:44:bf:83 10.100.0.13
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.434 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.446 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:bf:83 10.100.0.13'], port_security=['fa:16:3e:44:bf:83 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '85d7ff38-1884-4942-82fe-fb79122afe63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '295fcc758cf24ab4b01eb393f4863e36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f53cda67-e087-4973-b43b-027ef8b57bb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31935ac8-a5f7-4fad-9e0b-ee28fade4ce4, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3ed26b91-50ed-4d4d-ad1a-a63df94e9607) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:42:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.449 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 in datapath 6e77a51d-2695-4e70-8b9d-c02ec0c62f35 bound to our chassis
Nov 25 08:42:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.453 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e77a51d-2695-4e70-8b9d-c02ec0c62f35
Nov 25 08:42:10 compute-0 ovn_controller[152859]: 2025-11-25T08:42:10Z|00947|binding|INFO|Setting lport 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 ovn-installed in OVS
Nov 25 08:42:10 compute-0 ovn_controller[152859]: 2025-11-25T08:42:10Z|00948|binding|INFO|Setting lport 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 up in Southbound
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.460 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:10 compute-0 systemd-machined[215790]: New machine qemu-120-instance-00000062.
Nov 25 08:42:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.482 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d8026efe-f6d7-4c47-8312-a24b81623c12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:10 compute-0 systemd[1]: Started Virtual Machine qemu-120-instance-00000062.
Nov 25 08:42:10 compute-0 systemd-udevd[347852]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:42:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.537 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a3154567-0fe8-42d2-a5da-51a0187cd7d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:10 compute-0 NetworkManager[48915]: <info>  [1764060130.5418] device (tap3ed26b91-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:42:10 compute-0 NetworkManager[48915]: <info>  [1764060130.5432] device (tap3ed26b91-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:42:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.544 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dcbacd1f-d9c3-483c-a1ba-00c231568d21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:10 compute-0 ceph-mon[75015]: pgmap v1791: 321 pgs: 321 active+clean; 289 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Nov 25 08:42:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.594 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef09eea-1c9b-49a7-99ae-e76f7b0d3dc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.606528) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060130606574, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1612, "num_deletes": 257, "total_data_size": 2345290, "memory_usage": 2391632, "flush_reason": "Manual Compaction"}
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060130623629, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2296546, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36051, "largest_seqno": 37662, "table_properties": {"data_size": 2289001, "index_size": 4488, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16111, "raw_average_key_size": 20, "raw_value_size": 2273758, "raw_average_value_size": 2900, "num_data_blocks": 199, "num_entries": 784, "num_filter_entries": 784, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764059997, "oldest_key_time": 1764059997, "file_creation_time": 1764060130, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 17159 microseconds, and 9427 cpu microseconds.
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:42:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.622 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7ff6d0-ee9c-45a7-b2a3-1af01768274c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e77a51d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:6f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538137, 'reachable_time': 29181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347863, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.623687) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2296546 bytes OK
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.623711) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.625405) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.625428) EVENT_LOG_v1 {"time_micros": 1764060130625421, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.625448) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2338177, prev total WAL file size 2338177, number of live WAL files 2.
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.626475) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2242KB)], [80(7357KB)]
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060130626515, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 9830778, "oldest_snapshot_seqno": -1}
Nov 25 08:42:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.646 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[52ac0c75-4608-4981-a5cc-dcad0f169ee9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538152, 'tstamp': 538152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347864, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538156, 'tstamp': 538156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347864, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.648 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e77a51d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.650 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.652 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.653 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e77a51d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:42:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.654 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:42:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.655 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e77a51d-20, col_values=(('external_ids', {'iface-id': '66275e2b-0197-461a-9be3-ae2fe1aec502'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:42:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:10.656 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6258 keys, 8230205 bytes, temperature: kUnknown
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060130692378, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8230205, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8189787, "index_size": 23668, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 158609, "raw_average_key_size": 25, "raw_value_size": 8078995, "raw_average_value_size": 1290, "num_data_blocks": 958, "num_entries": 6258, "num_filter_entries": 6258, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060130, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.692587) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8230205 bytes
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.693894) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.1 rd, 124.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 7.2 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(7.9) write-amplify(3.6) OK, records in: 6783, records dropped: 525 output_compression: NoCompression
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.693926) EVENT_LOG_v1 {"time_micros": 1764060130693911, "job": 46, "event": "compaction_finished", "compaction_time_micros": 65929, "compaction_time_cpu_micros": 36970, "output_level": 6, "num_output_files": 1, "total_output_size": 8230205, "num_input_records": 6783, "num_output_records": 6258, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060130694788, "job": 46, "event": "table_file_deletion", "file_number": 82}
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060130697479, "job": 46, "event": "table_file_deletion", "file_number": 80}
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.626401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.697553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.697559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.697563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.697566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:42:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:10.697569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.890 253542 DEBUG nova.compute.manager [req-ae466bb2-ceeb-4671-9630-baf0ccb4a760 req-ef0d218f-7253-4cc0-af7b-5900061c7354 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received event network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.891 253542 DEBUG oslo_concurrency.lockutils [req-ae466bb2-ceeb-4671-9630-baf0ccb4a760 req-ef0d218f-7253-4cc0-af7b-5900061c7354 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.892 253542 DEBUG oslo_concurrency.lockutils [req-ae466bb2-ceeb-4671-9630-baf0ccb4a760 req-ef0d218f-7253-4cc0-af7b-5900061c7354 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.894 253542 DEBUG oslo_concurrency.lockutils [req-ae466bb2-ceeb-4671-9630-baf0ccb4a760 req-ef0d218f-7253-4cc0-af7b-5900061c7354 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:10 compute-0 nova_compute[253538]: 2025-11-25 08:42:10.896 253542 DEBUG nova.compute.manager [req-ae466bb2-ceeb-4671-9630-baf0ccb4a760 req-ef0d218f-7253-4cc0-af7b-5900061c7354 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Processing event network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.063 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060131.0630364, 85d7ff38-1884-4942-82fe-fb79122afe63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.064 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] VM Started (Lifecycle Event)
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.068 253542 DEBUG nova.compute.manager [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.072 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.076 253542 INFO nova.virt.libvirt.driver [-] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Instance spawned successfully.
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.077 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.139 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.148 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.149 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.150 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.151 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.152 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.152 253542 DEBUG nova.virt.libvirt.driver [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.159 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.201 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.202 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060131.0632844, 85d7ff38-1884-4942-82fe-fb79122afe63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.202 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] VM Paused (Lifecycle Event)
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.234 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.239 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060131.0719643, 85d7ff38-1884-4942-82fe-fb79122afe63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.240 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] VM Resumed (Lifecycle Event)
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.250 253542 INFO nova.compute.manager [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Took 8.21 seconds to spawn the instance on the hypervisor.
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.251 253542 DEBUG nova.compute.manager [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.262 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.267 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.306 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.344 253542 INFO nova.compute.manager [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Took 9.23 seconds to build instance.
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.364 253542 DEBUG oslo_concurrency.lockutils [None req-9de3f94f-4b87-42fb-8bc7-7611a4f8e9fd 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1792: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 3.9 MiB/s wr, 149 op/s
Nov 25 08:42:11 compute-0 ceph-mon[75015]: pgmap v1792: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 3.9 MiB/s wr, 149 op/s
Nov 25 08:42:11 compute-0 nova_compute[253538]: 2025-11-25 08:42:11.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:42:13 compute-0 nova_compute[253538]: 2025-11-25 08:42:13.085 253542 DEBUG nova.compute.manager [req-0f90b276-35ef-41c6-8842-7356a36021c1 req-297f5679-b932-4648-81c8-a20be6f21ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received event network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:42:13 compute-0 nova_compute[253538]: 2025-11-25 08:42:13.086 253542 DEBUG oslo_concurrency.lockutils [req-0f90b276-35ef-41c6-8842-7356a36021c1 req-297f5679-b932-4648-81c8-a20be6f21ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:13 compute-0 nova_compute[253538]: 2025-11-25 08:42:13.086 253542 DEBUG oslo_concurrency.lockutils [req-0f90b276-35ef-41c6-8842-7356a36021c1 req-297f5679-b932-4648-81c8-a20be6f21ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:13 compute-0 nova_compute[253538]: 2025-11-25 08:42:13.087 253542 DEBUG oslo_concurrency.lockutils [req-0f90b276-35ef-41c6-8842-7356a36021c1 req-297f5679-b932-4648-81c8-a20be6f21ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:13 compute-0 nova_compute[253538]: 2025-11-25 08:42:13.087 253542 DEBUG nova.compute.manager [req-0f90b276-35ef-41c6-8842-7356a36021c1 req-297f5679-b932-4648-81c8-a20be6f21ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] No waiting events found dispatching network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:42:13 compute-0 nova_compute[253538]: 2025-11-25 08:42:13.088 253542 WARNING nova.compute.manager [req-0f90b276-35ef-41c6-8842-7356a36021c1 req-297f5679-b932-4648-81c8-a20be6f21ad7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received unexpected event network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 for instance with vm_state active and task_state None.
Nov 25 08:42:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 175 op/s
Nov 25 08:42:13 compute-0 nova_compute[253538]: 2025-11-25 08:42:13.724 253542 INFO nova.compute.manager [None req-64d7b8de-0dbe-4e5b-b187-25541d8eaa6f 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Pausing
Nov 25 08:42:13 compute-0 nova_compute[253538]: 2025-11-25 08:42:13.725 253542 DEBUG nova.objects.instance [None req-64d7b8de-0dbe-4e5b-b187-25541d8eaa6f 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'flavor' on Instance uuid 85d7ff38-1884-4942-82fe-fb79122afe63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:13 compute-0 nova_compute[253538]: 2025-11-25 08:42:13.751 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060133.7508879, 85d7ff38-1884-4942-82fe-fb79122afe63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:42:13 compute-0 nova_compute[253538]: 2025-11-25 08:42:13.751 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] VM Paused (Lifecycle Event)
Nov 25 08:42:13 compute-0 nova_compute[253538]: 2025-11-25 08:42:13.753 253542 DEBUG nova.compute.manager [None req-64d7b8de-0dbe-4e5b-b187-25541d8eaa6f 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:13 compute-0 nova_compute[253538]: 2025-11-25 08:42:13.781 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:13 compute-0 nova_compute[253538]: 2025-11-25 08:42:13.785 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:42:13 compute-0 nova_compute[253538]: 2025-11-25 08:42:13.817 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 25 08:42:14 compute-0 nova_compute[253538]: 2025-11-25 08:42:14.002 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:14 compute-0 ceph-mon[75015]: pgmap v1793: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 175 op/s
Nov 25 08:42:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.5 MiB/s wr, 195 op/s
Nov 25 08:42:16 compute-0 ceph-mon[75015]: pgmap v1794: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.5 MiB/s wr, 195 op/s
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.576 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "85d7ff38-1884-4942-82fe-fb79122afe63" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.576 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.577 253542 INFO nova.compute.manager [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Shelving
Nov 25 08:42:16 compute-0 kernel: tap3ed26b91-50 (unregistering): left promiscuous mode
Nov 25 08:42:16 compute-0 NetworkManager[48915]: <info>  [1764060136.6494] device (tap3ed26b91-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:42:16 compute-0 ovn_controller[152859]: 2025-11-25T08:42:16Z|00949|binding|INFO|Releasing lport 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 from this chassis (sb_readonly=0)
Nov 25 08:42:16 compute-0 ovn_controller[152859]: 2025-11-25T08:42:16Z|00950|binding|INFO|Setting lport 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 down in Southbound
Nov 25 08:42:16 compute-0 ovn_controller[152859]: 2025-11-25T08:42:16Z|00951|binding|INFO|Removing iface tap3ed26b91-50 ovn-installed in OVS
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.663 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.674 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:bf:83 10.100.0.13'], port_security=['fa:16:3e:44:bf:83 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '85d7ff38-1884-4942-82fe-fb79122afe63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '295fcc758cf24ab4b01eb393f4863e36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f53cda67-e087-4973-b43b-027ef8b57bb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31935ac8-a5f7-4fad-9e0b-ee28fade4ce4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3ed26b91-50ed-4d4d-ad1a-a63df94e9607) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:42:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.677 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 in datapath 6e77a51d-2695-4e70-8b9d-c02ec0c62f35 unbound from our chassis
Nov 25 08:42:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.679 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e77a51d-2695-4e70-8b9d-c02ec0c62f35
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.697 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.710 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e5d610-e00f-4833-b14a-36a25547e1a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:16 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000062.scope: Deactivated successfully.
Nov 25 08:42:16 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000062.scope: Consumed 3.334s CPU time.
Nov 25 08:42:16 compute-0 systemd-machined[215790]: Machine qemu-120-instance-00000062 terminated.
Nov 25 08:42:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.756 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[aa847270-b30d-4335-bec0-88045b92d596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.761 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e28856-c893-4b89-b3bc-6d4a62b26cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.803 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[47ed5818-56b5-497b-a7fc-e854972c5c4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.823 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b696d728-3122-4c0a-93d3-3e1b8b5affff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e77a51d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:6f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538137, 'reachable_time': 29181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347921, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.839 253542 INFO nova.virt.libvirt.driver [-] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Instance destroyed successfully.
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.839 253542 DEBUG nova.objects.instance [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'numa_topology' on Instance uuid 85d7ff38-1884-4942-82fe-fb79122afe63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.844 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59e1cb55-3b1d-4be2-9d57-fa772b76c110]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538152, 'tstamp': 538152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347929, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538156, 'tstamp': 538156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347929, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.845 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e77a51d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.847 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.852 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e77a51d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:42:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.853 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:42:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.853 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e77a51d-20, col_values=(('external_ids', {'iface-id': '66275e2b-0197-461a-9be3-ae2fe1aec502'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:42:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:16.854 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.979 253542 DEBUG nova.compute.manager [req-5e3545a1-3317-49c4-b40b-356174a434ee req-98e02369-0fe9-47c4-bfc9-e457d8f7fadb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received event network-vif-unplugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.979 253542 DEBUG oslo_concurrency.lockutils [req-5e3545a1-3317-49c4-b40b-356174a434ee req-98e02369-0fe9-47c4-bfc9-e457d8f7fadb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.980 253542 DEBUG oslo_concurrency.lockutils [req-5e3545a1-3317-49c4-b40b-356174a434ee req-98e02369-0fe9-47c4-bfc9-e457d8f7fadb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.980 253542 DEBUG oslo_concurrency.lockutils [req-5e3545a1-3317-49c4-b40b-356174a434ee req-98e02369-0fe9-47c4-bfc9-e457d8f7fadb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.980 253542 DEBUG nova.compute.manager [req-5e3545a1-3317-49c4-b40b-356174a434ee req-98e02369-0fe9-47c4-bfc9-e457d8f7fadb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] No waiting events found dispatching network-vif-unplugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:42:16 compute-0 nova_compute[253538]: 2025-11-25 08:42:16.981 253542 WARNING nova.compute.manager [req-5e3545a1-3317-49c4-b40b-356174a434ee req-98e02369-0fe9-47c4-bfc9-e457d8f7fadb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received unexpected event network-vif-unplugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 for instance with vm_state paused and task_state shelving.
Nov 25 08:42:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:42:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 190 op/s
Nov 25 08:42:17 compute-0 nova_compute[253538]: 2025-11-25 08:42:17.865 253542 INFO nova.virt.libvirt.driver [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Beginning cold snapshot process
Nov 25 08:42:18 compute-0 nova_compute[253538]: 2025-11-25 08:42:18.034 253542 DEBUG nova.virt.libvirt.imagebackend [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:42:18 compute-0 ceph-mon[75015]: pgmap v1795: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 190 op/s
Nov 25 08:42:18 compute-0 nova_compute[253538]: 2025-11-25 08:42:18.584 253542 DEBUG nova.storage.rbd_utils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] creating snapshot(f5d1a2552dbe40718b77fec433a1f35c) on rbd image(85d7ff38-1884-4942-82fe-fb79122afe63_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:42:18 compute-0 podman[347984]: 2025-11-25 08:42:18.805060067 +0000 UTC m=+0.054594450 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 08:42:19 compute-0 nova_compute[253538]: 2025-11-25 08:42:19.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:19 compute-0 nova_compute[253538]: 2025-11-25 08:42:19.440 253542 DEBUG nova.compute.manager [req-83b29455-6e0b-4a16-b470-a3e8897d88c4 req-fde52005-ef0c-4827-8112-1926e8281e92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received event network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:42:19 compute-0 nova_compute[253538]: 2025-11-25 08:42:19.441 253542 DEBUG oslo_concurrency.lockutils [req-83b29455-6e0b-4a16-b470-a3e8897d88c4 req-fde52005-ef0c-4827-8112-1926e8281e92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:19 compute-0 nova_compute[253538]: 2025-11-25 08:42:19.442 253542 DEBUG oslo_concurrency.lockutils [req-83b29455-6e0b-4a16-b470-a3e8897d88c4 req-fde52005-ef0c-4827-8112-1926e8281e92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:19 compute-0 nova_compute[253538]: 2025-11-25 08:42:19.442 253542 DEBUG oslo_concurrency.lockutils [req-83b29455-6e0b-4a16-b470-a3e8897d88c4 req-fde52005-ef0c-4827-8112-1926e8281e92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:19 compute-0 nova_compute[253538]: 2025-11-25 08:42:19.443 253542 DEBUG nova.compute.manager [req-83b29455-6e0b-4a16-b470-a3e8897d88c4 req-fde52005-ef0c-4827-8112-1926e8281e92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] No waiting events found dispatching network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:42:19 compute-0 nova_compute[253538]: 2025-11-25 08:42:19.443 253542 WARNING nova.compute.manager [req-83b29455-6e0b-4a16-b470-a3e8897d88c4 req-fde52005-ef0c-4827-8112-1926e8281e92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received unexpected event network-vif-plugged-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 for instance with vm_state paused and task_state shelving_image_uploading.
Nov 25 08:42:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 681 KiB/s wr, 131 op/s
Nov 25 08:42:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Nov 25 08:42:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Nov 25 08:42:19 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Nov 25 08:42:19 compute-0 ceph-mon[75015]: pgmap v1796: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 681 KiB/s wr, 131 op/s
Nov 25 08:42:19 compute-0 ceph-mon[75015]: osdmap e204: 3 total, 3 up, 3 in
Nov 25 08:42:20 compute-0 nova_compute[253538]: 2025-11-25 08:42:20.220 253542 DEBUG nova.storage.rbd_utils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] cloning vms/85d7ff38-1884-4942-82fe-fb79122afe63_disk@f5d1a2552dbe40718b77fec433a1f35c to images/394afc84-ee61-4a8e-9e6c-3501a71601dc clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:42:20 compute-0 nova_compute[253538]: 2025-11-25 08:42:20.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:42:20 compute-0 nova_compute[253538]: 2025-11-25 08:42:20.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:42:20 compute-0 nova_compute[253538]: 2025-11-25 08:42:20.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:42:20 compute-0 nova_compute[253538]: 2025-11-25 08:42:20.602 253542 DEBUG nova.storage.rbd_utils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] flattening images/394afc84-ee61-4a8e-9e6c-3501a71601dc flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:42:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Nov 25 08:42:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Nov 25 08:42:20 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Nov 25 08:42:20 compute-0 nova_compute[253538]: 2025-11-25 08:42:20.934 253542 DEBUG nova.storage.rbd_utils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] removing snapshot(f5d1a2552dbe40718b77fec433a1f35c) on rbd image(85d7ff38-1884-4942-82fe-fb79122afe63_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:42:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 72 op/s
Nov 25 08:42:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Nov 25 08:42:21 compute-0 ceph-mon[75015]: osdmap e205: 3 total, 3 up, 3 in
Nov 25 08:42:21 compute-0 ceph-mon[75015]: pgmap v1799: 321 pgs: 321 active+clean; 293 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 72 op/s
Nov 25 08:42:21 compute-0 nova_compute[253538]: 2025-11-25 08:42:21.766 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Nov 25 08:42:21 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Nov 25 08:42:21 compute-0 nova_compute[253538]: 2025-11-25 08:42:21.819 253542 DEBUG nova.storage.rbd_utils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] creating snapshot(snap) on rbd image(394afc84-ee61-4a8e-9e6c-3501a71601dc) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:42:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:42:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Nov 25 08:42:22 compute-0 ceph-mon[75015]: osdmap e206: 3 total, 3 up, 3 in
Nov 25 08:42:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Nov 25 08:42:22 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Nov 25 08:42:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:42:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:42:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:42:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:42:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:42:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:42:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 327 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.3 MiB/s wr, 179 op/s
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.506 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquiring lock "0774dd07-d931-40b5-8590-915c0611277d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.507 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "0774dd07-d931-40b5-8590-915c0611277d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.527 253542 DEBUG nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.624 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.625 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.635 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.635 253542 INFO nova.compute.claims [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.710 253542 DEBUG nova.scheduler.client.report [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.736 253542 DEBUG nova.scheduler.client.report [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.736 253542 DEBUG nova.compute.provider_tree [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.757 253542 DEBUG nova.scheduler.client.report [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.781 253542 DEBUG nova.scheduler.client.report [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 08:42:23 compute-0 ceph-mon[75015]: osdmap e207: 3 total, 3 up, 3 in
Nov 25 08:42:23 compute-0 ceph-mon[75015]: pgmap v1802: 321 pgs: 321 active+clean; 327 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.3 MiB/s wr, 179 op/s
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.798 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.798 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.799 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.799 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:23 compute-0 podman[348096]: 2025-11-25 08:42:23.85883599 +0000 UTC m=+0.107399928 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 08:42:23 compute-0 nova_compute[253538]: 2025-11-25 08:42:23.883 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:42:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3515899805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.390 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.401 253542 DEBUG nova.compute.provider_tree [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.416 253542 INFO nova.virt.libvirt.driver [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Snapshot image upload complete
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.417 253542 DEBUG nova.compute.manager [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.419 253542 DEBUG nova.scheduler.client.report [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.473 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.474 253542 DEBUG nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.523 253542 INFO nova.compute.manager [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Shelve offloading
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.534 253542 INFO nova.virt.libvirt.driver [-] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Instance destroyed successfully.
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.535 253542 DEBUG nova.compute.manager [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.538 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.538 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquired lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.539 253542 DEBUG nova.network.neutron [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.548 253542 DEBUG nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.548 253542 DEBUG nova.network.neutron [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.567 253542 INFO nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.591 253542 DEBUG nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.694 253542 DEBUG nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.696 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.697 253542 INFO nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Creating image(s)
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.733 253542 DEBUG nova.storage.rbd_utils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] rbd image 0774dd07-d931-40b5-8590-915c0611277d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.772 253542 DEBUG nova.storage.rbd_utils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] rbd image 0774dd07-d931-40b5-8590-915c0611277d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:24 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3515899805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.811 253542 DEBUG nova.storage.rbd_utils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] rbd image 0774dd07-d931-40b5-8590-915c0611277d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.818 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquiring lock "de4dfd283254203262189b67ccae3be59d25acb8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.819 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "de4dfd283254203262189b67ccae3be59d25acb8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.828 253542 DEBUG nova.network.neutron [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 08:42:24 compute-0 nova_compute[253538]: 2025-11-25 08:42:24.829 253542 DEBUG nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.138 253542 DEBUG nova.virt.libvirt.imagebackend [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Image locations are: [{'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/46122271-b2f8-4ae8-a1b5-9573f094bebe/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/46122271-b2f8-4ae8-a1b5-9573f094bebe/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.221 253542 DEBUG nova.virt.libvirt.imagebackend [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Selected location: {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/46122271-b2f8-4ae8-a1b5-9573f094bebe/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.222 253542 DEBUG nova.storage.rbd_utils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] cloning images/46122271-b2f8-4ae8-a1b5-9573f094bebe@snap to None/0774dd07-d931-40b5-8590-915c0611277d_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.375 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "de4dfd283254203262189b67ccae3be59d25acb8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 339 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.7 MiB/s wr, 135 op/s
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.525 253542 DEBUG nova.storage.rbd_utils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] resizing rbd image 0774dd07-d931-40b5-8590-915c0611277d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.646 253542 DEBUG nova.objects.instance [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 0774dd07-d931-40b5-8590-915c0611277d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.659 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.660 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Ensure instance console log exists: /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.660 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.661 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.661 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.662 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c6a42cd5b932233a822877975aa7662d',container_format='bare',created_at=2025-11-25T08:42:19Z,direct_url=<?>,disk_format='raw',id=46122271-b2f8-4ae8-a1b5-9573f094bebe,min_disk=0,min_ram=0,name='tempest-image-dependency-test-593485743',owner='3c6a99942fff45b7809546d76f7d9c36',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-25T08:42:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '46122271-b2f8-4ae8-a1b5-9573f094bebe'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.666 253542 WARNING nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.671 253542 DEBUG nova.virt.libvirt.host [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.672 253542 DEBUG nova.virt.libvirt.host [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.675 253542 DEBUG nova.virt.libvirt.host [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.676 253542 DEBUG nova.virt.libvirt.host [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.676 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.676 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c6a42cd5b932233a822877975aa7662d',container_format='bare',created_at=2025-11-25T08:42:19Z,direct_url=<?>,disk_format='raw',id=46122271-b2f8-4ae8-a1b5-9573f094bebe,min_disk=0,min_ram=0,name='tempest-image-dependency-test-593485743',owner='3c6a99942fff45b7809546d76f7d9c36',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-25T08:42:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.677 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.678 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.679 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.679 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.679 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.679 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.679 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.680 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.680 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.680 253542 DEBUG nova.virt.hardware [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.683 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:25 compute-0 ceph-mon[75015]: pgmap v1803: 321 pgs: 321 active+clean; 339 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.7 MiB/s wr, 135 op/s
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.949 253542 DEBUG nova.network.neutron [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Updating instance_info_cache with network_info: [{"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ed26b91-50", "ovs_interfaceid": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:42:25 compute-0 nova_compute[253538]: 2025-11-25 08:42:25.965 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Releasing lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:42:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:42:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3024165051' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.137 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.164 253542 DEBUG nova.storage.rbd_utils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] rbd image 0774dd07-d931-40b5-8590-915c0611277d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.169 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.538 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating instance_info_cache with network_info: [{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.558 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.559 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.560 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:42:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:42:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3224901811' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.589 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.591 253542 DEBUG nova.objects.instance [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0774dd07-d931-40b5-8590-915c0611277d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.605 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:42:26 compute-0 nova_compute[253538]:   <uuid>0774dd07-d931-40b5-8590-915c0611277d</uuid>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   <name>instance-00000063</name>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <nova:name>instance-depend-image</nova:name>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:42:25</nova:creationTime>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:42:26 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:42:26 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:42:26 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:42:26 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:42:26 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:42:26 compute-0 nova_compute[253538]:         <nova:user uuid="55be30cefe8b4a10b26c37d845e9e2fa">tempest-ImageDependencyTests-1858023939-project-member</nova:user>
Nov 25 08:42:26 compute-0 nova_compute[253538]:         <nova:project uuid="3c6a99942fff45b7809546d76f7d9c36">tempest-ImageDependencyTests-1858023939</nova:project>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="46122271-b2f8-4ae8-a1b5-9573f094bebe"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <system>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <entry name="serial">0774dd07-d931-40b5-8590-915c0611277d</entry>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <entry name="uuid">0774dd07-d931-40b5-8590-915c0611277d</entry>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     </system>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   <os>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   </os>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   <features>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   </features>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0774dd07-d931-40b5-8590-915c0611277d_disk">
Nov 25 08:42:26 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       </source>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:42:26 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0774dd07-d931-40b5-8590-915c0611277d_disk.config">
Nov 25 08:42:26 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       </source>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:42:26 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d/console.log" append="off"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <video>
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     </video>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:42:26 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:42:26 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:42:26 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:42:26 compute-0 nova_compute[253538]: </domain>
Nov 25 08:42:26 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.654 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.655 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.655 253542 INFO nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Using config drive
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.678 253542 DEBUG nova.storage.rbd_utils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] rbd image 0774dd07-d931-40b5-8590-915c0611277d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.769 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:26 compute-0 podman[348397]: 2025-11-25 08:42:26.78822081 +0000 UTC m=+0.134184278 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 08:42:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3024165051' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:42:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3224901811' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.887 253542 INFO nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Creating config drive at /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d/disk.config
Nov 25 08:42:26 compute-0 nova_compute[253538]: 2025-11-25 08:42:26.891 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk2x_suxn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.053 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk2x_suxn" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.075 253542 DEBUG nova.storage.rbd_utils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] rbd image 0774dd07-d931-40b5-8590-915c0611277d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.079 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d/disk.config 0774dd07-d931-40b5-8590-915c0611277d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:27.136 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.136 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:27.138 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.175 253542 INFO nova.virt.libvirt.driver [-] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Instance destroyed successfully.
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.176 253542 DEBUG nova.objects.instance [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'resources' on Instance uuid 85d7ff38-1884-4942-82fe-fb79122afe63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.188 253542 DEBUG nova.virt.libvirt.vif [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:42:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-732970817',display_name='tempest-ServerActionsTestOtherB-server-732970817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-732970817',id=98,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:42:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-0hxuw32b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestOtherB-587178207-project-member',shelved_at='2025-11-25T08:42:24.417271',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='394afc84-ee61-4a8e-9e6c-3501a71601dc'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:42:17Z,user_data=None,user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=85d7ff38-1884-4942-82fe-fb79122afe63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ed26b91-50", "ovs_interfaceid": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.188 253542 DEBUG nova.network.os_vif_util [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ed26b91-50", "ovs_interfaceid": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.189 253542 DEBUG nova.network.os_vif_util [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:bf:83,bridge_name='br-int',has_traffic_filtering=True,id=3ed26b91-50ed-4d4d-ad1a-a63df94e9607,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ed26b91-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.189 253542 DEBUG os_vif [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:bf:83,bridge_name='br-int',has_traffic_filtering=True,id=3ed26b91-50ed-4d4d-ad1a-a63df94e9607,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ed26b91-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.191 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.192 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ed26b91-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.193 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.194 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.197 253542 INFO os_vif [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:bf:83,bridge_name='br-int',has_traffic_filtering=True,id=3ed26b91-50ed-4d4d-ad1a-a63df94e9607,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ed26b91-50')
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.251 253542 DEBUG oslo_concurrency.processutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d/disk.config 0774dd07-d931-40b5-8590-915c0611277d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.252 253542 INFO nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Deleting local config drive /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d/disk.config because it was imported into RBD.
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.305 253542 DEBUG nova.compute.manager [req-410d373e-f72f-40f0-974a-94ad47aa2def req-b8927ba8-d7b5-49b1-be31-5857c6bc6787 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Received event network-changed-3ed26b91-50ed-4d4d-ad1a-a63df94e9607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.306 253542 DEBUG nova.compute.manager [req-410d373e-f72f-40f0-974a-94ad47aa2def req-b8927ba8-d7b5-49b1-be31-5857c6bc6787 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Refreshing instance network info cache due to event network-changed-3ed26b91-50ed-4d4d-ad1a-a63df94e9607. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.306 253542 DEBUG oslo_concurrency.lockutils [req-410d373e-f72f-40f0-974a-94ad47aa2def req-b8927ba8-d7b5-49b1-be31-5857c6bc6787 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.306 253542 DEBUG oslo_concurrency.lockutils [req-410d373e-f72f-40f0-974a-94ad47aa2def req-b8927ba8-d7b5-49b1-be31-5857c6bc6787 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.306 253542 DEBUG nova.network.neutron [req-410d373e-f72f-40f0-974a-94ad47aa2def req-b8927ba8-d7b5-49b1-be31-5857c6bc6787 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Refreshing network info cache for port 3ed26b91-50ed-4d4d-ad1a-a63df94e9607 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:42:27 compute-0 systemd-machined[215790]: New machine qemu-121-instance-00000063.
Nov 25 08:42:27 compute-0 systemd[1]: Started Virtual Machine qemu-121-instance-00000063.
Nov 25 08:42:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 339 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.2 MiB/s wr, 173 op/s
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.617 253542 INFO nova.virt.libvirt.driver [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Deleting instance files /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63_del
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.617 253542 INFO nova.virt.libvirt.driver [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Deletion of /var/lib/nova/instances/85d7ff38-1884-4942-82fe-fb79122afe63_del complete
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.706 253542 INFO nova.scheduler.client.report [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Deleted allocations for instance 85d7ff38-1884-4942-82fe-fb79122afe63
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.756 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.756 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:27 compute-0 ceph-mon[75015]: pgmap v1804: 321 pgs: 321 active+clean; 339 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.2 MiB/s wr, 173 op/s
Nov 25 08:42:27 compute-0 nova_compute[253538]: 2025-11-25 08:42:27.845 253542 DEBUG oslo_concurrency.processutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:42:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1059818539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.317 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060148.3164241, 0774dd07-d931-40b5-8590-915c0611277d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.318 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] VM Resumed (Lifecycle Event)
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.322 253542 DEBUG nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.323 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.325 253542 DEBUG oslo_concurrency.processutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.330 253542 INFO nova.virt.libvirt.driver [-] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Instance spawned successfully.
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.330 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.335 253542 DEBUG nova.compute.provider_tree [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.341 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.343 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.354 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.354 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.354 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.355 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.355 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.355 253542 DEBUG nova.virt.libvirt.driver [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.358 253542 DEBUG nova.scheduler.client.report [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.363 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.363 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060148.3167772, 0774dd07-d931-40b5-8590-915c0611277d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.363 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] VM Started (Lifecycle Event)
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.395 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.397 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.406 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.432 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.444 253542 INFO nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Took 3.75 seconds to spawn the instance on the hypervisor.
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.445 253542 DEBUG nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.462 253542 DEBUG oslo_concurrency.lockutils [None req-9a51a591-baff-4c8d-8d97-79d7506b979e 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "85d7ff38-1884-4942-82fe-fb79122afe63" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 11.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.517 253542 INFO nova.compute.manager [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Took 4.94 seconds to build instance.
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.545 253542 DEBUG oslo_concurrency.lockutils [None req-e77195f2-215e-4911-889c-8f883fa50b11 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "0774dd07-d931-40b5-8590-915c0611277d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.837 253542 DEBUG nova.network.neutron [req-410d373e-f72f-40f0-974a-94ad47aa2def req-b8927ba8-d7b5-49b1-be31-5857c6bc6787 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Updated VIF entry in instance network info cache for port 3ed26b91-50ed-4d4d-ad1a-a63df94e9607. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.838 253542 DEBUG nova.network.neutron [req-410d373e-f72f-40f0-974a-94ad47aa2def req-b8927ba8-d7b5-49b1-be31-5857c6bc6787 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Updating instance_info_cache with network_info: [{"id": "3ed26b91-50ed-4d4d-ad1a-a63df94e9607", "address": "fa:16:3e:44:bf:83", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap3ed26b91-50", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:42:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1059818539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:42:28 compute-0 nova_compute[253538]: 2025-11-25 08:42:28.859 253542 DEBUG oslo_concurrency.lockutils [req-410d373e-f72f-40f0-974a-94ad47aa2def req-b8927ba8-d7b5-49b1-be31-5857c6bc6787 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-85d7ff38-1884-4942-82fe-fb79122afe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:42:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:42:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3629812643' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:42:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:42:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3629812643' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:42:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 324 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 163 op/s
Nov 25 08:42:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3629812643' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:42:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3629812643' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:42:29 compute-0 ceph-mon[75015]: pgmap v1805: 321 pgs: 321 active+clean; 324 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 163 op/s
Nov 25 08:42:30 compute-0 nova_compute[253538]: 2025-11-25 08:42:30.029 253542 DEBUG nova.compute.manager [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:30 compute-0 nova_compute[253538]: 2025-11-25 08:42:30.078 253542 INFO nova.compute.manager [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] instance snapshotting
Nov 25 08:42:30 compute-0 nova_compute[253538]: 2025-11-25 08:42:30.370 253542 INFO nova.virt.libvirt.driver [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Beginning live snapshot process
Nov 25 08:42:30 compute-0 nova_compute[253538]: 2025-11-25 08:42:30.599 253542 DEBUG nova.storage.rbd_utils [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] creating snapshot(26df58f0b1d24626a08974dbe2780fb6) on rbd image(0774dd07-d931-40b5-8590-915c0611277d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:42:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Nov 25 08:42:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Nov 25 08:42:30 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Nov 25 08:42:30 compute-0 nova_compute[253538]: 2025-11-25 08:42:30.986 253542 DEBUG nova.storage.rbd_utils [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] cloning vms/0774dd07-d931-40b5-8590-915c0611277d_disk@26df58f0b1d24626a08974dbe2780fb6 to images/90509b26-5374-45ef-ab5c-c852fb5dfe98 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:42:31 compute-0 nova_compute[253538]: 2025-11-25 08:42:31.119 253542 DEBUG nova.storage.rbd_utils [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] flattening images/90509b26-5374-45ef-ab5c-c852fb5dfe98 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:42:31 compute-0 nova_compute[253538]: 2025-11-25 08:42:31.317 253542 DEBUG nova.storage.rbd_utils [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] removing snapshot(26df58f0b1d24626a08974dbe2780fb6) on rbd image(0774dd07-d931-40b5-8590-915c0611277d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:42:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 309 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 119 op/s
Nov 25 08:42:31 compute-0 nova_compute[253538]: 2025-11-25 08:42:31.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:42:31 compute-0 nova_compute[253538]: 2025-11-25 08:42:31.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:42:31 compute-0 nova_compute[253538]: 2025-11-25 08:42:31.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:42:31 compute-0 nova_compute[253538]: 2025-11-25 08:42:31.572 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:31 compute-0 nova_compute[253538]: 2025-11-25 08:42:31.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:31 compute-0 nova_compute[253538]: 2025-11-25 08:42:31.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:31 compute-0 nova_compute[253538]: 2025-11-25 08:42:31.574 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:42:31 compute-0 nova_compute[253538]: 2025-11-25 08:42:31.574 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:31 compute-0 nova_compute[253538]: 2025-11-25 08:42:31.771 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:31 compute-0 nova_compute[253538]: 2025-11-25 08:42:31.838 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060136.8360877, 85d7ff38-1884-4942-82fe-fb79122afe63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:42:31 compute-0 nova_compute[253538]: 2025-11-25 08:42:31.838 253542 INFO nova.compute.manager [-] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] VM Stopped (Lifecycle Event)
Nov 25 08:42:31 compute-0 nova_compute[253538]: 2025-11-25 08:42:31.857 253542 DEBUG nova.compute.manager [None req-3dd7a9ee-2223-425b-b271-52bc41ed0731 - - - - - -] [instance: 85d7ff38-1884-4942-82fe-fb79122afe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Nov 25 08:42:31 compute-0 ceph-mon[75015]: osdmap e208: 3 total, 3 up, 3 in
Nov 25 08:42:31 compute-0 ceph-mon[75015]: pgmap v1807: 321 pgs: 321 active+clean; 309 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 119 op/s
Nov 25 08:42:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Nov 25 08:42:31 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Nov 25 08:42:31 compute-0 nova_compute[253538]: 2025-11-25 08:42:31.967 253542 DEBUG nova.storage.rbd_utils [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] creating snapshot(snap) on rbd image(90509b26-5374-45ef-ab5c-c852fb5dfe98) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:42:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:42:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Nov 25 08:42:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Nov 25 08:42:32 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Nov 25 08:42:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:42:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1700371143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.090 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.178 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.178 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.183 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.183 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.189 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.190 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.193 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.477 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.478 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3394MB free_disk=59.88978576660156GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.479 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.479 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.540 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 3e75d0af-c514-42c5-aa05-88ae5552f196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.541 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 5f960b00-a365-4665-8a74-50d2e7b7f940 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.541 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0774dd07-d931-40b5-8590-915c0611277d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.541 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.542 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.609 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:32 compute-0 ceph-mon[75015]: osdmap e209: 3 total, 3 up, 3 in
Nov 25 08:42:32 compute-0 ceph-mon[75015]: osdmap e210: 3 total, 3 up, 3 in
Nov 25 08:42:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1700371143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.945 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.945 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.946 253542 INFO nova.compute.manager [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Shelving
Nov 25 08:42:32 compute-0 nova_compute[253538]: 2025-11-25 08:42:32.963 253542 DEBUG nova.virt.libvirt.driver [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:42:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Nov 25 08:42:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Nov 25 08:42:33 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Nov 25 08:42:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:42:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/499480278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:42:33 compute-0 nova_compute[253538]: 2025-11-25 08:42:33.135 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:33 compute-0 nova_compute[253538]: 2025-11-25 08:42:33.143 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:42:33 compute-0 nova_compute[253538]: 2025-11-25 08:42:33.161 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:42:33 compute-0 nova_compute[253538]: 2025-11-25 08:42:33.209 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:42:33 compute-0 nova_compute[253538]: 2025-11-25 08:42:33.209 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 293 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 208 KiB/s rd, 48 KiB/s wr, 273 op/s
Nov 25 08:42:34 compute-0 ceph-mon[75015]: osdmap e211: 3 total, 3 up, 3 in
Nov 25 08:42:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/499480278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:42:34 compute-0 ceph-mon[75015]: pgmap v1811: 321 pgs: 321 active+clean; 293 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 208 KiB/s rd, 48 KiB/s wr, 273 op/s
Nov 25 08:42:35 compute-0 kernel: tapf7c4b9b0-34 (unregistering): left promiscuous mode
Nov 25 08:42:35 compute-0 NetworkManager[48915]: <info>  [1764060155.3220] device (tapf7c4b9b0-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.325 253542 INFO nova.virt.libvirt.driver [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Snapshot image upload complete
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.326 253542 INFO nova.compute.manager [None req-b80dea55-ac8a-4e8b-b2b7-bf7386efb256 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Took 5.25 seconds to snapshot the instance on the hypervisor.
Nov 25 08:42:35 compute-0 ovn_controller[152859]: 2025-11-25T08:42:35Z|00952|binding|INFO|Releasing lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 from this chassis (sb_readonly=0)
Nov 25 08:42:35 compute-0 ovn_controller[152859]: 2025-11-25T08:42:35Z|00953|binding|INFO|Setting lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 down in Southbound
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.339 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:35 compute-0 ovn_controller[152859]: 2025-11-25T08:42:35Z|00954|binding|INFO|Removing iface tapf7c4b9b0-34 ovn-installed in OVS
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.342 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.348 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:e9:db 10.100.0.12'], port_security=['fa:16:3e:83:e9:db 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3e75d0af-c514-42c5-aa05-88ae5552f196', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '295fcc758cf24ab4b01eb393f4863e36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4662b63b-c8aa-4161-b270-71466cebee15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31935ac8-a5f7-4fad-9e0b-ee28fade4ce4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f7c4b9b0-3445-468a-a19a-8b19b2d029a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:42:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.350 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f7c4b9b0-3445-468a-a19a-8b19b2d029a2 in datapath 6e77a51d-2695-4e70-8b9d-c02ec0c62f35 unbound from our chassis
Nov 25 08:42:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.352 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e77a51d-2695-4e70-8b9d-c02ec0c62f35
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.375 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:35 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Nov 25 08:42:35 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005f.scope: Consumed 19.170s CPU time.
Nov 25 08:42:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.384 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4e3056-7246-4220-a28a-e765e1e8834d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:35 compute-0 systemd-machined[215790]: Machine qemu-116-instance-0000005f terminated.
Nov 25 08:42:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.426 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[60022fc6-a9aa-41e4-a72f-3ca931533f33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.429 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[42c2ab19-dceb-4968-b283-91584b402343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.455 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0110be4b-0651-47f9-9927-a364794d47f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 293 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 153 KiB/s rd, 63 KiB/s wr, 200 op/s
Nov 25 08:42:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.477 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b72e8d-e396-4e17-8561-6bbf9b49ab2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e77a51d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:6f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538137, 'reachable_time': 29181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348776, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.501 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6061d5-4acb-4124-93f8-f79c4e2f1518]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538152, 'tstamp': 538152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348777, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538156, 'tstamp': 538156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348777, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:42:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.504 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e77a51d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.506 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.511 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.513 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e77a51d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:42:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.513 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:42:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.514 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e77a51d-20, col_values=(('external_ids', {'iface-id': '66275e2b-0197-461a-9be3-ae2fe1aec502'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:42:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:35.515 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.553 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.677 253542 DEBUG nova.compute.manager [req-a70688ac-b91a-459a-bfb6-d85810aa67fd req-c3d0bb99-0e41-4c02-b884-27dd2ef4d07f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-unplugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.678 253542 DEBUG oslo_concurrency.lockutils [req-a70688ac-b91a-459a-bfb6-d85810aa67fd req-c3d0bb99-0e41-4c02-b884-27dd2ef4d07f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.678 253542 DEBUG oslo_concurrency.lockutils [req-a70688ac-b91a-459a-bfb6-d85810aa67fd req-c3d0bb99-0e41-4c02-b884-27dd2ef4d07f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.678 253542 DEBUG oslo_concurrency.lockutils [req-a70688ac-b91a-459a-bfb6-d85810aa67fd req-c3d0bb99-0e41-4c02-b884-27dd2ef4d07f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.679 253542 DEBUG nova.compute.manager [req-a70688ac-b91a-459a-bfb6-d85810aa67fd req-c3d0bb99-0e41-4c02-b884-27dd2ef4d07f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] No waiting events found dispatching network-vif-unplugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.679 253542 WARNING nova.compute.manager [req-a70688ac-b91a-459a-bfb6-d85810aa67fd req-c3d0bb99-0e41-4c02-b884-27dd2ef4d07f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received unexpected event network-vif-unplugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 for instance with vm_state active and task_state shelving.
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.986 253542 INFO nova.virt.libvirt.driver [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance shutdown successfully after 3 seconds.
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.993 253542 INFO nova.virt.libvirt.driver [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance destroyed successfully.
Nov 25 08:42:35 compute-0 nova_compute[253538]: 2025-11-25 08:42:35.994 253542 DEBUG nova.objects.instance [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:36 compute-0 nova_compute[253538]: 2025-11-25 08:42:36.487 253542 INFO nova.virt.libvirt.driver [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Beginning cold snapshot process
Nov 25 08:42:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Nov 25 08:42:36 compute-0 ceph-mon[75015]: pgmap v1812: 321 pgs: 321 active+clean; 293 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 153 KiB/s rd, 63 KiB/s wr, 200 op/s
Nov 25 08:42:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Nov 25 08:42:36 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Nov 25 08:42:36 compute-0 nova_compute[253538]: 2025-11-25 08:42:36.881 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:36 compute-0 nova_compute[253538]: 2025-11-25 08:42:36.890 253542 DEBUG nova.virt.libvirt.imagebackend [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:42:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.087825) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060157087872, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 578, "num_deletes": 258, "total_data_size": 526448, "memory_usage": 538808, "flush_reason": "Manual Compaction"}
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060157113812, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 520248, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37663, "largest_seqno": 38240, "table_properties": {"data_size": 517043, "index_size": 1113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7372, "raw_average_key_size": 18, "raw_value_size": 510585, "raw_average_value_size": 1309, "num_data_blocks": 49, "num_entries": 390, "num_filter_entries": 390, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060131, "oldest_key_time": 1764060131, "file_creation_time": 1764060157, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 26119 microseconds, and 2828 cpu microseconds.
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:42:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:37.141 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.113927) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 520248 bytes OK
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.113973) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.150742) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.150794) EVENT_LOG_v1 {"time_micros": 1764060157150782, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.150826) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 523188, prev total WAL file size 523188, number of live WAL files 2.
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.151615) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323538' seq:72057594037927935, type:22 .. '6C6F676D0031353131' seq:0, type:0; will stop at (end)
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(508KB)], [83(8037KB)]
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060157151660, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 8750453, "oldest_snapshot_seqno": -1}
Nov 25 08:42:37 compute-0 nova_compute[253538]: 2025-11-25 08:42:37.196 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:37 compute-0 nova_compute[253538]: 2025-11-25 08:42:37.204 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6121 keys, 8625972 bytes, temperature: kUnknown
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060157363819, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8625972, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8585317, "index_size": 24225, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 156809, "raw_average_key_size": 25, "raw_value_size": 8475843, "raw_average_value_size": 1384, "num_data_blocks": 977, "num_entries": 6121, "num_filter_entries": 6121, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060157, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.364268) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8625972 bytes
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.383845) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 41.2 rd, 40.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 7.8 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(33.4) write-amplify(16.6) OK, records in: 6648, records dropped: 527 output_compression: NoCompression
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.384019) EVENT_LOG_v1 {"time_micros": 1764060157383881, "job": 48, "event": "compaction_finished", "compaction_time_micros": 212346, "compaction_time_cpu_micros": 33954, "output_level": 6, "num_output_files": 1, "total_output_size": 8625972, "num_input_records": 6648, "num_output_records": 6121, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060157384817, "job": 48, "event": "table_file_deletion", "file_number": 85}
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060157388251, "job": 48, "event": "table_file_deletion", "file_number": 83}
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.151478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.388431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.388439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.388443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.388446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:42:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:42:37.388450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:42:37 compute-0 nova_compute[253538]: 2025-11-25 08:42:37.390 253542 DEBUG nova.storage.rbd_utils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] creating snapshot(16445a25cfbf4f2aa316f34132ebb55f) on rbd image(3e75d0af-c514-42c5-aa05-88ae5552f196_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:42:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1814: 321 pgs: 321 active+clean; 293 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 24 KiB/s wr, 86 op/s
Nov 25 08:42:37 compute-0 nova_compute[253538]: 2025-11-25 08:42:37.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:42:37 compute-0 ceph-mon[75015]: osdmap e212: 3 total, 3 up, 3 in
Nov 25 08:42:37 compute-0 ceph-mon[75015]: pgmap v1814: 321 pgs: 321 active+clean; 293 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 24 KiB/s wr, 86 op/s
Nov 25 08:42:37 compute-0 nova_compute[253538]: 2025-11-25 08:42:37.897 253542 DEBUG nova.compute.manager [req-21932724-0f26-4d54-8734-d7c5ad7d3fd5 req-dc48f4f3-c9c9-4885-a17b-0983c0ce248e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:42:37 compute-0 nova_compute[253538]: 2025-11-25 08:42:37.898 253542 DEBUG oslo_concurrency.lockutils [req-21932724-0f26-4d54-8734-d7c5ad7d3fd5 req-dc48f4f3-c9c9-4885-a17b-0983c0ce248e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:37 compute-0 nova_compute[253538]: 2025-11-25 08:42:37.898 253542 DEBUG oslo_concurrency.lockutils [req-21932724-0f26-4d54-8734-d7c5ad7d3fd5 req-dc48f4f3-c9c9-4885-a17b-0983c0ce248e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:37 compute-0 nova_compute[253538]: 2025-11-25 08:42:37.898 253542 DEBUG oslo_concurrency.lockutils [req-21932724-0f26-4d54-8734-d7c5ad7d3fd5 req-dc48f4f3-c9c9-4885-a17b-0983c0ce248e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:37 compute-0 nova_compute[253538]: 2025-11-25 08:42:37.898 253542 DEBUG nova.compute.manager [req-21932724-0f26-4d54-8734-d7c5ad7d3fd5 req-dc48f4f3-c9c9-4885-a17b-0983c0ce248e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] No waiting events found dispatching network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:42:37 compute-0 nova_compute[253538]: 2025-11-25 08:42:37.899 253542 WARNING nova.compute.manager [req-21932724-0f26-4d54-8734-d7c5ad7d3fd5 req-dc48f4f3-c9c9-4885-a17b-0983c0ce248e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received unexpected event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 for instance with vm_state active and task_state shelving_image_uploading.
Nov 25 08:42:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Nov 25 08:42:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Nov 25 08:42:38 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Nov 25 08:42:38 compute-0 nova_compute[253538]: 2025-11-25 08:42:38.618 253542 DEBUG nova.storage.rbd_utils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] cloning vms/3e75d0af-c514-42c5-aa05-88ae5552f196_disk@16445a25cfbf4f2aa316f34132ebb55f to images/34e9b311-13e0-4ffd-bc6f-64a46ba4b491 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:42:39 compute-0 nova_compute[253538]: 2025-11-25 08:42:39.096 253542 DEBUG nova.storage.rbd_utils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] flattening images/34e9b311-13e0-4ffd-bc6f-64a46ba4b491 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:42:39 compute-0 ceph-mon[75015]: osdmap e213: 3 total, 3 up, 3 in
Nov 25 08:42:39 compute-0 nova_compute[253538]: 2025-11-25 08:42:39.204 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquiring lock "0774dd07-d931-40b5-8590-915c0611277d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:39 compute-0 nova_compute[253538]: 2025-11-25 08:42:39.204 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "0774dd07-d931-40b5-8590-915c0611277d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:39 compute-0 nova_compute[253538]: 2025-11-25 08:42:39.205 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquiring lock "0774dd07-d931-40b5-8590-915c0611277d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:39 compute-0 nova_compute[253538]: 2025-11-25 08:42:39.205 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "0774dd07-d931-40b5-8590-915c0611277d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:39 compute-0 nova_compute[253538]: 2025-11-25 08:42:39.206 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "0774dd07-d931-40b5-8590-915c0611277d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:39 compute-0 nova_compute[253538]: 2025-11-25 08:42:39.208 253542 INFO nova.compute.manager [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Terminating instance
Nov 25 08:42:39 compute-0 nova_compute[253538]: 2025-11-25 08:42:39.210 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquiring lock "refresh_cache-0774dd07-d931-40b5-8590-915c0611277d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:42:39 compute-0 nova_compute[253538]: 2025-11-25 08:42:39.211 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquired lock "refresh_cache-0774dd07-d931-40b5-8590-915c0611277d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:42:39 compute-0 nova_compute[253538]: 2025-11-25 08:42:39.211 253542 DEBUG nova.network.neutron [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:42:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 293 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 23 KiB/s wr, 122 op/s
Nov 25 08:42:39 compute-0 nova_compute[253538]: 2025-11-25 08:42:39.616 253542 DEBUG nova.network.neutron [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:42:40 compute-0 ceph-mon[75015]: pgmap v1816: 321 pgs: 321 active+clean; 293 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 23 KiB/s wr, 122 op/s
Nov 25 08:42:40 compute-0 sudo[348895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:42:40 compute-0 sudo[348895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:40 compute-0 sudo[348895]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:40 compute-0 nova_compute[253538]: 2025-11-25 08:42:40.247 253542 DEBUG nova.storage.rbd_utils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] removing snapshot(16445a25cfbf4f2aa316f34132ebb55f) on rbd image(3e75d0af-c514-42c5-aa05-88ae5552f196_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:42:40 compute-0 sudo[348935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:42:40 compute-0 sudo[348935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:40 compute-0 sudo[348935]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:40 compute-0 nova_compute[253538]: 2025-11-25 08:42:40.294 253542 DEBUG nova.network.neutron [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:42:40 compute-0 nova_compute[253538]: 2025-11-25 08:42:40.304 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Releasing lock "refresh_cache-0774dd07-d931-40b5-8590-915c0611277d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:42:40 compute-0 nova_compute[253538]: 2025-11-25 08:42:40.305 253542 DEBUG nova.compute.manager [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:42:40 compute-0 sudo[348963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:42:40 compute-0 sudo[348963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:40 compute-0 sudo[348963]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:40 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000063.scope: Deactivated successfully.
Nov 25 08:42:40 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000063.scope: Consumed 1.498s CPU time.
Nov 25 08:42:40 compute-0 systemd-machined[215790]: Machine qemu-121-instance-00000063 terminated.
Nov 25 08:42:40 compute-0 sudo[348988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:42:40 compute-0 sudo[348988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:40 compute-0 nova_compute[253538]: 2025-11-25 08:42:40.535 253542 INFO nova.virt.libvirt.driver [-] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Instance destroyed successfully.
Nov 25 08:42:40 compute-0 nova_compute[253538]: 2025-11-25 08:42:40.537 253542 DEBUG nova.objects.instance [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lazy-loading 'resources' on Instance uuid 0774dd07-d931-40b5-8590-915c0611277d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:41.067 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:41.071 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:42:41.073 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:41 compute-0 sudo[348988]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 08:42:41 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 08:42:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:42:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:42:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:42:41 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:42:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:42:41 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:42:41 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 0b9be7ac-c19f-46ae-82fb-02a0feaa8617 does not exist
Nov 25 08:42:41 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev fc3a0c5f-c2bf-4fff-bdc3-033c8763e273 does not exist
Nov 25 08:42:41 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 0c8492b2-956b-473c-aaf7-f3146ae218c3 does not exist
Nov 25 08:42:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:42:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:42:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Nov 25 08:42:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:42:41 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:42:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:42:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:42:41 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 08:42:41 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:42:41 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:42:41 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:42:41 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:42:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Nov 25 08:42:41 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Nov 25 08:42:41 compute-0 sudo[349065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:42:41 compute-0 sudo[349065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:41 compute-0 sudo[349065]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:41 compute-0 nova_compute[253538]: 2025-11-25 08:42:41.269 253542 DEBUG nova.storage.rbd_utils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] creating snapshot(snap) on rbd image(34e9b311-13e0-4ffd-bc6f-64a46ba4b491) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:42:41 compute-0 sudo[349090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:42:41 compute-0 sudo[349090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:41 compute-0 sudo[349090]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:41 compute-0 sudo[349134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:42:41 compute-0 sudo[349134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:41 compute-0 sudo[349134]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1818: 321 pgs: 321 active+clean; 297 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 692 KiB/s wr, 128 op/s
Nov 25 08:42:41 compute-0 sudo[349159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:42:41 compute-0 sudo[349159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:41 compute-0 nova_compute[253538]: 2025-11-25 08:42:41.614 253542 INFO nova.virt.libvirt.driver [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Deleting instance files /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d_del
Nov 25 08:42:41 compute-0 nova_compute[253538]: 2025-11-25 08:42:41.615 253542 INFO nova.virt.libvirt.driver [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Deletion of /var/lib/nova/instances/0774dd07-d931-40b5-8590-915c0611277d_del complete
Nov 25 08:42:41 compute-0 nova_compute[253538]: 2025-11-25 08:42:41.677 253542 INFO nova.compute.manager [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Took 1.37 seconds to destroy the instance on the hypervisor.
Nov 25 08:42:41 compute-0 nova_compute[253538]: 2025-11-25 08:42:41.678 253542 DEBUG oslo.service.loopingcall [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:42:41 compute-0 nova_compute[253538]: 2025-11-25 08:42:41.679 253542 DEBUG nova.compute.manager [-] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:42:41 compute-0 nova_compute[253538]: 2025-11-25 08:42:41.679 253542 DEBUG nova.network.neutron [-] [instance: 0774dd07-d931-40b5-8590-915c0611277d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:42:41 compute-0 nova_compute[253538]: 2025-11-25 08:42:41.775 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:41 compute-0 podman[349225]: 2025-11-25 08:42:41.995296312 +0000 UTC m=+0.058347071 container create 8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:42:42 compute-0 systemd[1]: Started libpod-conmon-8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8.scope.
Nov 25 08:42:42 compute-0 podman[349225]: 2025-11-25 08:42:41.964057741 +0000 UTC m=+0.027108590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:42:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:42:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Nov 25 08:42:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Nov 25 08:42:42 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Nov 25 08:42:42 compute-0 nova_compute[253538]: 2025-11-25 08:42:42.092 253542 DEBUG nova.network.neutron [-] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:42:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:42:42 compute-0 nova_compute[253538]: 2025-11-25 08:42:42.105 253542 DEBUG nova.network.neutron [-] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:42:42 compute-0 nova_compute[253538]: 2025-11-25 08:42:42.117 253542 INFO nova.compute.manager [-] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Took 0.44 seconds to deallocate network for instance.
Nov 25 08:42:42 compute-0 podman[349225]: 2025-11-25 08:42:42.128300956 +0000 UTC m=+0.191351805 container init 8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 08:42:42 compute-0 podman[349225]: 2025-11-25 08:42:42.141035114 +0000 UTC m=+0.204085913 container start 8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 08:42:42 compute-0 podman[349225]: 2025-11-25 08:42:42.147578672 +0000 UTC m=+0.210629531 container attach 8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:42:42 compute-0 festive_ardinghelli[349242]: 167 167
Nov 25 08:42:42 compute-0 systemd[1]: libpod-8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8.scope: Deactivated successfully.
Nov 25 08:42:42 compute-0 podman[349225]: 2025-11-25 08:42:42.151155599 +0000 UTC m=+0.214206398 container died 8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 08:42:42 compute-0 nova_compute[253538]: 2025-11-25 08:42:42.161 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:42 compute-0 nova_compute[253538]: 2025-11-25 08:42:42.162 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ee65a6019a8c9a25dd2e6d46478134f051e7448330ec6a5060321ef2aa7fa52-merged.mount: Deactivated successfully.
Nov 25 08:42:42 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:42:42 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:42:42 compute-0 ceph-mon[75015]: osdmap e214: 3 total, 3 up, 3 in
Nov 25 08:42:42 compute-0 ceph-mon[75015]: pgmap v1818: 321 pgs: 321 active+clean; 297 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 692 KiB/s wr, 128 op/s
Nov 25 08:42:42 compute-0 ceph-mon[75015]: osdmap e215: 3 total, 3 up, 3 in
Nov 25 08:42:42 compute-0 nova_compute[253538]: 2025-11-25 08:42:42.199 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:42 compute-0 podman[349225]: 2025-11-25 08:42:42.226765549 +0000 UTC m=+0.289816318 container remove 8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 25 08:42:42 compute-0 systemd[1]: libpod-conmon-8a7c7e00019e9c01596a864eb0cf6d71873db466940f496981374b3d5909baf8.scope: Deactivated successfully.
Nov 25 08:42:42 compute-0 nova_compute[253538]: 2025-11-25 08:42:42.258 253542 DEBUG oslo_concurrency.processutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:42 compute-0 podman[349265]: 2025-11-25 08:42:42.457136298 +0000 UTC m=+0.067280895 container create 1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:42:42 compute-0 systemd[1]: Started libpod-conmon-1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5.scope.
Nov 25 08:42:42 compute-0 podman[349265]: 2025-11-25 08:42:42.428180798 +0000 UTC m=+0.038325445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:42:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2fc3f2d22f0df06236a4c9effcfec717c68f3985e5d42cd5d7c3e6ff86d4326/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2fc3f2d22f0df06236a4c9effcfec717c68f3985e5d42cd5d7c3e6ff86d4326/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2fc3f2d22f0df06236a4c9effcfec717c68f3985e5d42cd5d7c3e6ff86d4326/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2fc3f2d22f0df06236a4c9effcfec717c68f3985e5d42cd5d7c3e6ff86d4326/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2fc3f2d22f0df06236a4c9effcfec717c68f3985e5d42cd5d7c3e6ff86d4326/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:42:42 compute-0 podman[349265]: 2025-11-25 08:42:42.577832037 +0000 UTC m=+0.187976634 container init 1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:42:42 compute-0 podman[349265]: 2025-11-25 08:42:42.589545396 +0000 UTC m=+0.199689963 container start 1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ritchie, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 08:42:42 compute-0 podman[349265]: 2025-11-25 08:42:42.59671167 +0000 UTC m=+0.206856237 container attach 1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 08:42:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:42:42 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982093874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:42:42 compute-0 nova_compute[253538]: 2025-11-25 08:42:42.754 253542 DEBUG oslo_concurrency.processutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:42 compute-0 nova_compute[253538]: 2025-11-25 08:42:42.764 253542 DEBUG nova.compute.provider_tree [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:42:42 compute-0 nova_compute[253538]: 2025-11-25 08:42:42.785 253542 DEBUG nova.scheduler.client.report [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:42:42 compute-0 nova_compute[253538]: 2025-11-25 08:42:42.814 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:42 compute-0 nova_compute[253538]: 2025-11-25 08:42:42.847 253542 INFO nova.scheduler.client.report [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Deleted allocations for instance 0774dd07-d931-40b5-8590-915c0611277d
Nov 25 08:42:42 compute-0 nova_compute[253538]: 2025-11-25 08:42:42.937 253542 DEBUG oslo_concurrency.lockutils [None req-bac1f4f8-1b2d-49e0-a36b-c8741dca8b4d 55be30cefe8b4a10b26c37d845e9e2fa 3c6a99942fff45b7809546d76f7d9c36 - - default default] Lock "0774dd07-d931-40b5-8590-915c0611277d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3982093874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:42:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1820: 321 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 311 active+clean; 344 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.6 MiB/s wr, 209 op/s
Nov 25 08:42:43 compute-0 recursing_ritchie[349300]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:42:43 compute-0 recursing_ritchie[349300]: --> relative data size: 1.0
Nov 25 08:42:43 compute-0 recursing_ritchie[349300]: --> All data devices are unavailable
Nov 25 08:42:43 compute-0 systemd[1]: libpod-1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5.scope: Deactivated successfully.
Nov 25 08:42:43 compute-0 podman[349265]: 2025-11-25 08:42:43.802162982 +0000 UTC m=+1.412307599 container died 1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 08:42:43 compute-0 systemd[1]: libpod-1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5.scope: Consumed 1.147s CPU time.
Nov 25 08:42:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2fc3f2d22f0df06236a4c9effcfec717c68f3985e5d42cd5d7c3e6ff86d4326-merged.mount: Deactivated successfully.
Nov 25 08:42:43 compute-0 podman[349265]: 2025-11-25 08:42:43.894644862 +0000 UTC m=+1.504789459 container remove 1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ritchie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:42:43 compute-0 systemd[1]: libpod-conmon-1f2e6d0972f8e45ba4a3992e35792504fbe875bc8dc2617cf39b9800a2cd69c5.scope: Deactivated successfully.
Nov 25 08:42:43 compute-0 nova_compute[253538]: 2025-11-25 08:42:43.921 253542 INFO nova.virt.libvirt.driver [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Snapshot image upload complete
Nov 25 08:42:43 compute-0 nova_compute[253538]: 2025-11-25 08:42:43.922 253542 DEBUG nova.compute.manager [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:43 compute-0 sudo[349159]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:44 compute-0 nova_compute[253538]: 2025-11-25 08:42:44.031 253542 INFO nova.compute.manager [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Shelve offloading
Nov 25 08:42:44 compute-0 sudo[349343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:42:44 compute-0 nova_compute[253538]: 2025-11-25 08:42:44.045 253542 INFO nova.virt.libvirt.driver [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance destroyed successfully.
Nov 25 08:42:44 compute-0 nova_compute[253538]: 2025-11-25 08:42:44.046 253542 DEBUG nova.compute.manager [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:44 compute-0 nova_compute[253538]: 2025-11-25 08:42:44.051 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:42:44 compute-0 nova_compute[253538]: 2025-11-25 08:42:44.051 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquired lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:42:44 compute-0 sudo[349343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:44 compute-0 nova_compute[253538]: 2025-11-25 08:42:44.052 253542 DEBUG nova.network.neutron [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:42:44 compute-0 sudo[349343]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:44 compute-0 sudo[349368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:42:44 compute-0 sudo[349368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:44 compute-0 sudo[349368]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:44 compute-0 sudo[349393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:42:44 compute-0 sudo[349393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:44 compute-0 ceph-mon[75015]: pgmap v1820: 321 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 311 active+clean; 344 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.6 MiB/s wr, 209 op/s
Nov 25 08:42:44 compute-0 sudo[349393]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:44 compute-0 sudo[349418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:42:44 compute-0 sudo[349418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:44 compute-0 podman[349483]: 2025-11-25 08:42:44.712189142 +0000 UTC m=+0.055994097 container create 8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 08:42:44 compute-0 systemd[1]: Started libpod-conmon-8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103.scope.
Nov 25 08:42:44 compute-0 podman[349483]: 2025-11-25 08:42:44.684055785 +0000 UTC m=+0.027860790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:42:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:42:44 compute-0 podman[349483]: 2025-11-25 08:42:44.822023385 +0000 UTC m=+0.165828320 container init 8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 08:42:44 compute-0 podman[349483]: 2025-11-25 08:42:44.829066976 +0000 UTC m=+0.172871901 container start 8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chatelet, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:42:44 compute-0 podman[349483]: 2025-11-25 08:42:44.832644924 +0000 UTC m=+0.176449859 container attach 8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:42:44 compute-0 systemd[1]: libpod-8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103.scope: Deactivated successfully.
Nov 25 08:42:44 compute-0 jolly_chatelet[349500]: 167 167
Nov 25 08:42:44 compute-0 conmon[349500]: conmon 8d627db7af9257606716 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103.scope/container/memory.events
Nov 25 08:42:44 compute-0 podman[349483]: 2025-11-25 08:42:44.837366862 +0000 UTC m=+0.181171857 container died 8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chatelet, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:42:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c61b86257a5fdf306f3af9ac048da22fbfa076aa0483122d6f46c9f8f08ba96-merged.mount: Deactivated successfully.
Nov 25 08:42:44 compute-0 podman[349483]: 2025-11-25 08:42:44.905105578 +0000 UTC m=+0.248910523 container remove 8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:42:44 compute-0 systemd[1]: libpod-conmon-8d627db7af9257606716a4e5ccb8ba3f51e299267e70d309fef3813c242c0103.scope: Deactivated successfully.
Nov 25 08:42:45 compute-0 podman[349524]: 2025-11-25 08:42:45.126595895 +0000 UTC m=+0.043583819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:42:45 compute-0 podman[349524]: 2025-11-25 08:42:45.238036061 +0000 UTC m=+0.155023955 container create 00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:42:45 compute-0 systemd[1]: Started libpod-conmon-00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611.scope.
Nov 25 08:42:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e8578496aa0fcfac7042614cdbcaa7c1a2e42138b365765e4248995b397439/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e8578496aa0fcfac7042614cdbcaa7c1a2e42138b365765e4248995b397439/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e8578496aa0fcfac7042614cdbcaa7c1a2e42138b365765e4248995b397439/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e8578496aa0fcfac7042614cdbcaa7c1a2e42138b365765e4248995b397439/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:42:45 compute-0 podman[349524]: 2025-11-25 08:42:45.389266452 +0000 UTC m=+0.306254356 container init 00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:42:45 compute-0 podman[349524]: 2025-11-25 08:42:45.398990658 +0000 UTC m=+0.315978542 container start 00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:42:45 compute-0 podman[349524]: 2025-11-25 08:42:45.404797216 +0000 UTC m=+0.321785080 container attach 00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:42:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 311 active+clean; 372 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 6.4 MiB/s wr, 223 op/s
Nov 25 08:42:45 compute-0 nova_compute[253538]: 2025-11-25 08:42:45.867 253542 DEBUG nova.network.neutron [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating instance_info_cache with network_info: [{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:42:45 compute-0 nova_compute[253538]: 2025-11-25 08:42:45.908 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Releasing lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:42:46 compute-0 elegant_booth[349541]: {
Nov 25 08:42:46 compute-0 elegant_booth[349541]:     "0": [
Nov 25 08:42:46 compute-0 elegant_booth[349541]:         {
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "devices": [
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "/dev/loop3"
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             ],
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "lv_name": "ceph_lv0",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "lv_size": "21470642176",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "name": "ceph_lv0",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "tags": {
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.cluster_name": "ceph",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.crush_device_class": "",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.encrypted": "0",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.osd_id": "0",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.type": "block",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.vdo": "0"
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             },
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "type": "block",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "vg_name": "ceph_vg0"
Nov 25 08:42:46 compute-0 elegant_booth[349541]:         }
Nov 25 08:42:46 compute-0 elegant_booth[349541]:     ],
Nov 25 08:42:46 compute-0 elegant_booth[349541]:     "1": [
Nov 25 08:42:46 compute-0 elegant_booth[349541]:         {
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "devices": [
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "/dev/loop4"
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             ],
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "lv_name": "ceph_lv1",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "lv_size": "21470642176",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "name": "ceph_lv1",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "tags": {
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.cluster_name": "ceph",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.crush_device_class": "",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.encrypted": "0",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.osd_id": "1",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.type": "block",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.vdo": "0"
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             },
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "type": "block",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "vg_name": "ceph_vg1"
Nov 25 08:42:46 compute-0 elegant_booth[349541]:         }
Nov 25 08:42:46 compute-0 elegant_booth[349541]:     ],
Nov 25 08:42:46 compute-0 elegant_booth[349541]:     "2": [
Nov 25 08:42:46 compute-0 elegant_booth[349541]:         {
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "devices": [
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "/dev/loop5"
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             ],
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "lv_name": "ceph_lv2",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "lv_size": "21470642176",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "name": "ceph_lv2",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "tags": {
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.cluster_name": "ceph",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.crush_device_class": "",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.encrypted": "0",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.osd_id": "2",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.type": "block",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:                 "ceph.vdo": "0"
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             },
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "type": "block",
Nov 25 08:42:46 compute-0 elegant_booth[349541]:             "vg_name": "ceph_vg2"
Nov 25 08:42:46 compute-0 elegant_booth[349541]:         }
Nov 25 08:42:46 compute-0 elegant_booth[349541]:     ]
Nov 25 08:42:46 compute-0 elegant_booth[349541]: }
Nov 25 08:42:46 compute-0 systemd[1]: libpod-00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611.scope: Deactivated successfully.
Nov 25 08:42:46 compute-0 podman[349524]: 2025-11-25 08:42:46.180557797 +0000 UTC m=+1.097545711 container died 00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 08:42:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-97e8578496aa0fcfac7042614cdbcaa7c1a2e42138b365765e4248995b397439-merged.mount: Deactivated successfully.
Nov 25 08:42:46 compute-0 podman[349524]: 2025-11-25 08:42:46.302995053 +0000 UTC m=+1.219982947 container remove 00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:42:46 compute-0 systemd[1]: libpod-conmon-00328178fd9473017aba9d46db8586de05ac788f800a55d51844910b1d51d611.scope: Deactivated successfully.
Nov 25 08:42:46 compute-0 sudo[349418]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:46 compute-0 sudo[349564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:42:46 compute-0 sudo[349564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:46 compute-0 sudo[349564]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:46 compute-0 ceph-mon[75015]: pgmap v1821: 321 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 311 active+clean; 372 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 6.4 MiB/s wr, 223 op/s
Nov 25 08:42:46 compute-0 sudo[349589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:42:46 compute-0 sudo[349589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:46 compute-0 sudo[349589]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:46 compute-0 sudo[349614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:42:46 compute-0 sudo[349614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:46 compute-0 sudo[349614]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:46 compute-0 sudo[349639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:42:46 compute-0 sudo[349639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:46 compute-0 nova_compute[253538]: 2025-11-25 08:42:46.777 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:46 compute-0 nova_compute[253538]: 2025-11-25 08:42:46.912 253542 INFO nova.virt.libvirt.driver [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance destroyed successfully.
Nov 25 08:42:46 compute-0 nova_compute[253538]: 2025-11-25 08:42:46.913 253542 DEBUG nova.objects.instance [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'resources' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:46 compute-0 nova_compute[253538]: 2025-11-25 08:42:46.925 253542 DEBUG nova.virt.libvirt.vif [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:40:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-439401428',display_name='tempest-ServerActionsTestOtherB-server-439401428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-439401428',id=95,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJg9vWbWDQUJP+5O2Ge5sP4yW+A5RDbOBkV9U0C3hvxoWu1yQZFyI5Vs8mvdnljTrZSXJgG69Yru9lsQdThAcjefMLvUo4eUx6Akjue1XjQsVfgM0pq0/Z3uC1qyMxn0Ew==',key_name='tempest-keypair-1642602877',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:40:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-iqkw2vgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestOtherB-587178207-project-member',shelved_at='2025-11-25T08:42:43.922397',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='34e9b311-13e0-4ffd-bc6f-64a46ba4b491'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:42:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=3e75d0af-c514-42c5-aa05-88ae5552f196,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": 
"6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:42:46 compute-0 nova_compute[253538]: 2025-11-25 08:42:46.926 253542 DEBUG nova.network.os_vif_util [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:42:46 compute-0 nova_compute[253538]: 2025-11-25 08:42:46.927 253542 DEBUG nova.network.os_vif_util [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:42:46 compute-0 nova_compute[253538]: 2025-11-25 08:42:46.927 253542 DEBUG os_vif [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:42:46 compute-0 nova_compute[253538]: 2025-11-25 08:42:46.929 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:46 compute-0 nova_compute[253538]: 2025-11-25 08:42:46.930 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf7c4b9b0-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:42:46 compute-0 nova_compute[253538]: 2025-11-25 08:42:46.931 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:46 compute-0 nova_compute[253538]: 2025-11-25 08:42:46.933 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:46 compute-0 nova_compute[253538]: 2025-11-25 08:42:46.937 253542 INFO os_vif [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34')
Nov 25 08:42:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:42:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Nov 25 08:42:47 compute-0 nova_compute[253538]: 2025-11-25 08:42:47.086 253542 DEBUG nova.compute.manager [req-06d30168-3877-4d90-91f3-bb84d61239e9 req-29b64ce9-ef11-4a23-9474-606e0ca5c566 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-changed-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:42:47 compute-0 nova_compute[253538]: 2025-11-25 08:42:47.087 253542 DEBUG nova.compute.manager [req-06d30168-3877-4d90-91f3-bb84d61239e9 req-29b64ce9-ef11-4a23-9474-606e0ca5c566 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Refreshing instance network info cache due to event network-changed-f7c4b9b0-3445-468a-a19a-8b19b2d029a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:42:47 compute-0 nova_compute[253538]: 2025-11-25 08:42:47.088 253542 DEBUG oslo_concurrency.lockutils [req-06d30168-3877-4d90-91f3-bb84d61239e9 req-29b64ce9-ef11-4a23-9474-606e0ca5c566 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:42:47 compute-0 nova_compute[253538]: 2025-11-25 08:42:47.088 253542 DEBUG oslo_concurrency.lockutils [req-06d30168-3877-4d90-91f3-bb84d61239e9 req-29b64ce9-ef11-4a23-9474-606e0ca5c566 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:42:47 compute-0 nova_compute[253538]: 2025-11-25 08:42:47.088 253542 DEBUG nova.network.neutron [req-06d30168-3877-4d90-91f3-bb84d61239e9 req-29b64ce9-ef11-4a23-9474-606e0ca5c566 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Refreshing network info cache for port f7c4b9b0-3445-468a-a19a-8b19b2d029a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:42:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Nov 25 08:42:47 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Nov 25 08:42:47 compute-0 podman[349724]: 2025-11-25 08:42:47.144337181 +0000 UTC m=+0.062991538 container create 790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bouman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:42:47 compute-0 systemd[1]: Started libpod-conmon-790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8.scope.
Nov 25 08:42:47 compute-0 podman[349724]: 2025-11-25 08:42:47.111884827 +0000 UTC m=+0.030539204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:42:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:42:47 compute-0 podman[349724]: 2025-11-25 08:42:47.230533121 +0000 UTC m=+0.149187488 container init 790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bouman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:42:47 compute-0 podman[349724]: 2025-11-25 08:42:47.244238953 +0000 UTC m=+0.162893290 container start 790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bouman, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 08:42:47 compute-0 sleepy_bouman[349740]: 167 167
Nov 25 08:42:47 compute-0 systemd[1]: libpod-790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8.scope: Deactivated successfully.
Nov 25 08:42:47 compute-0 podman[349724]: 2025-11-25 08:42:47.25145554 +0000 UTC m=+0.170109877 container attach 790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:42:47 compute-0 podman[349724]: 2025-11-25 08:42:47.25181943 +0000 UTC m=+0.170473787 container died 790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 08:42:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-931829aa5af67a8842217d8c23573fa4521208b21e856625ffc10155b8b50ece-merged.mount: Deactivated successfully.
Nov 25 08:42:47 compute-0 podman[349724]: 2025-11-25 08:42:47.307793815 +0000 UTC m=+0.226448152 container remove 790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bouman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 08:42:47 compute-0 systemd[1]: libpod-conmon-790f094a52d7e384a4771dfef2fa3bb6796923e5000be8ed71fbba660daa16f8.scope: Deactivated successfully.
Nov 25 08:42:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 372 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 6.8 MiB/s wr, 196 op/s
Nov 25 08:42:47 compute-0 nova_compute[253538]: 2025-11-25 08:42:47.522 253542 INFO nova.virt.libvirt.driver [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Deleting instance files /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196_del
Nov 25 08:42:47 compute-0 nova_compute[253538]: 2025-11-25 08:42:47.524 253542 INFO nova.virt.libvirt.driver [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Deletion of /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196_del complete
Nov 25 08:42:47 compute-0 podman[349765]: 2025-11-25 08:42:47.53363372 +0000 UTC m=+0.069356671 container create 89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:42:47 compute-0 systemd[1]: Started libpod-conmon-89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a.scope.
Nov 25 08:42:47 compute-0 podman[349765]: 2025-11-25 08:42:47.502022558 +0000 UTC m=+0.037745559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:42:47 compute-0 nova_compute[253538]: 2025-11-25 08:42:47.603 253542 INFO nova.scheduler.client.report [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Deleted allocations for instance 3e75d0af-c514-42c5-aa05-88ae5552f196
Nov 25 08:42:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:42:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f073fcb3654e5766bd8cc15cce12fbe36dcbdbc51d64c836efdb886aba003534/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:42:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f073fcb3654e5766bd8cc15cce12fbe36dcbdbc51d64c836efdb886aba003534/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:42:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f073fcb3654e5766bd8cc15cce12fbe36dcbdbc51d64c836efdb886aba003534/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:42:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f073fcb3654e5766bd8cc15cce12fbe36dcbdbc51d64c836efdb886aba003534/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:42:47 compute-0 podman[349765]: 2025-11-25 08:42:47.661283748 +0000 UTC m=+0.197006749 container init 89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 08:42:47 compute-0 nova_compute[253538]: 2025-11-25 08:42:47.662 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:47 compute-0 nova_compute[253538]: 2025-11-25 08:42:47.663 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:47 compute-0 podman[349765]: 2025-11-25 08:42:47.67415519 +0000 UTC m=+0.209878131 container start 89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dubinsky, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:42:47 compute-0 nova_compute[253538]: 2025-11-25 08:42:47.709 253542 DEBUG oslo_concurrency.processutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:47 compute-0 podman[349765]: 2025-11-25 08:42:47.78865333 +0000 UTC m=+0.324376281 container attach 89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dubinsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:42:48 compute-0 ceph-mon[75015]: osdmap e216: 3 total, 3 up, 3 in
Nov 25 08:42:48 compute-0 ceph-mon[75015]: pgmap v1823: 321 pgs: 321 active+clean; 372 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 6.8 MiB/s wr, 196 op/s
Nov 25 08:42:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:42:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4238448093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:42:48 compute-0 nova_compute[253538]: 2025-11-25 08:42:48.215 253542 DEBUG oslo_concurrency.processutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:48 compute-0 nova_compute[253538]: 2025-11-25 08:42:48.229 253542 DEBUG nova.compute.provider_tree [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:42:48 compute-0 nova_compute[253538]: 2025-11-25 08:42:48.245 253542 DEBUG nova.scheduler.client.report [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:42:48 compute-0 nova_compute[253538]: 2025-11-25 08:42:48.272 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:48 compute-0 nova_compute[253538]: 2025-11-25 08:42:48.311 253542 DEBUG oslo_concurrency.lockutils [None req-2694c2ae-293c-4f40-9b7d-17cf8cceb678 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 15.365s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]: {
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:         "osd_id": 1,
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:         "type": "bluestore"
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:     },
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:         "osd_id": 2,
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:         "type": "bluestore"
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:     },
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:         "osd_id": 0,
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:         "type": "bluestore"
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]:     }
Nov 25 08:42:48 compute-0 musing_dubinsky[349781]: }
Nov 25 08:42:48 compute-0 systemd[1]: libpod-89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a.scope: Deactivated successfully.
Nov 25 08:42:48 compute-0 systemd[1]: libpod-89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a.scope: Consumed 1.102s CPU time.
Nov 25 08:42:48 compute-0 podman[349765]: 2025-11-25 08:42:48.786680517 +0000 UTC m=+1.322403498 container died 89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 08:42:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f073fcb3654e5766bd8cc15cce12fbe36dcbdbc51d64c836efdb886aba003534-merged.mount: Deactivated successfully.
Nov 25 08:42:48 compute-0 nova_compute[253538]: 2025-11-25 08:42:48.845 253542 DEBUG nova.network.neutron [req-06d30168-3877-4d90-91f3-bb84d61239e9 req-29b64ce9-ef11-4a23-9474-606e0ca5c566 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updated VIF entry in instance network info cache for port f7c4b9b0-3445-468a-a19a-8b19b2d029a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:42:48 compute-0 nova_compute[253538]: 2025-11-25 08:42:48.845 253542 DEBUG nova.network.neutron [req-06d30168-3877-4d90-91f3-bb84d61239e9 req-29b64ce9-ef11-4a23-9474-606e0ca5c566 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating instance_info_cache with network_info: [{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:42:48 compute-0 nova_compute[253538]: 2025-11-25 08:42:48.863 253542 DEBUG oslo_concurrency.lockutils [req-06d30168-3877-4d90-91f3-bb84d61239e9 req-29b64ce9-ef11-4a23-9474-606e0ca5c566 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:42:48 compute-0 podman[349765]: 2025-11-25 08:42:48.868434805 +0000 UTC m=+1.404157756 container remove 89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:42:48 compute-0 systemd[1]: libpod-conmon-89976c1ebb13c2015aef29fa0012cc60e360f7e8281b42cd23171920fdfca48a.scope: Deactivated successfully.
Nov 25 08:42:48 compute-0 sudo[349639]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:42:48 compute-0 podman[349846]: 2025-11-25 08:42:48.921891352 +0000 UTC m=+0.071720225 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 08:42:48 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:42:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:42:48 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:42:48 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 9d7b91ca-7127-452e-87be-20fe5aabb5fb does not exist
Nov 25 08:42:48 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 236f71f6-fc38-45d3-899c-fc5c7613c370 does not exist
Nov 25 08:42:49 compute-0 sudo[349867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:42:49 compute-0 sudo[349867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:49 compute-0 sudo[349867]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:49 compute-0 sudo[349892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:42:49 compute-0 sudo[349892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:42:49 compute-0 sudo[349892]: pam_unix(sudo:session): session closed for user root
Nov 25 08:42:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4238448093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:42:49 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:42:49 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:42:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1824: 321 pgs: 321 active+clean; 355 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.3 MiB/s wr, 156 op/s
Nov 25 08:42:50 compute-0 ceph-mon[75015]: pgmap v1824: 321 pgs: 321 active+clean; 355 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.3 MiB/s wr, 156 op/s
Nov 25 08:42:50 compute-0 nova_compute[253538]: 2025-11-25 08:42:50.560 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060155.558094, 3e75d0af-c514-42c5-aa05-88ae5552f196 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:42:50 compute-0 nova_compute[253538]: 2025-11-25 08:42:50.561 253542 INFO nova.compute.manager [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] VM Stopped (Lifecycle Event)
Nov 25 08:42:50 compute-0 nova_compute[253538]: 2025-11-25 08:42:50.593 253542 DEBUG nova.compute.manager [None req-cc8c31b2-1168-4f1c-8f1d-dd1a234fc641 - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 313 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 108 op/s
Nov 25 08:42:51 compute-0 nova_compute[253538]: 2025-11-25 08:42:51.780 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:51 compute-0 nova_compute[253538]: 2025-11-25 08:42:51.932 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:42:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Nov 25 08:42:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Nov 25 08:42:52 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Nov 25 08:42:52 compute-0 ceph-mon[75015]: pgmap v1825: 321 pgs: 321 active+clean; 313 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 108 op/s
Nov 25 08:42:52 compute-0 ceph-mon[75015]: osdmap e217: 3 total, 3 up, 3 in
Nov 25 08:42:52 compute-0 sshd-session[349917]: Invalid user cesar from 45.202.211.6 port 44398
Nov 25 08:42:52 compute-0 sshd-session[349917]: Received disconnect from 45.202.211.6 port 44398:11: Bye Bye [preauth]
Nov 25 08:42:52 compute-0 sshd-session[349917]: Disconnected from invalid user cesar 45.202.211.6 port 44398 [preauth]
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:42:53
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'images', '.rgw.root', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.log']
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:42:53 compute-0 nova_compute[253538]: 2025-11-25 08:42:53.400 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:53 compute-0 nova_compute[253538]: 2025-11-25 08:42:53.402 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:53 compute-0 nova_compute[253538]: 2025-11-25 08:42:53.402 253542 INFO nova.compute.manager [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Unshelving
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 292 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 14 KiB/s wr, 60 op/s
Nov 25 08:42:53 compute-0 nova_compute[253538]: 2025-11-25 08:42:53.488 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:53 compute-0 nova_compute[253538]: 2025-11-25 08:42:53.489 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:53 compute-0 nova_compute[253538]: 2025-11-25 08:42:53.495 253542 DEBUG nova.objects.instance [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:53 compute-0 nova_compute[253538]: 2025-11-25 08:42:53.507 253542 DEBUG nova.objects.instance [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:53 compute-0 nova_compute[253538]: 2025-11-25 08:42:53.519 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:42:53 compute-0 nova_compute[253538]: 2025-11-25 08:42:53.519 253542 INFO nova.compute.claims [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:42:53 compute-0 nova_compute[253538]: 2025-11-25 08:42:53.633 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:53 compute-0 ceph-mon[75015]: pgmap v1827: 321 pgs: 321 active+clean; 292 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 14 KiB/s wr, 60 op/s
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:42:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:42:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:42:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2362125678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:42:54 compute-0 nova_compute[253538]: 2025-11-25 08:42:54.109 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:54 compute-0 nova_compute[253538]: 2025-11-25 08:42:54.116 253542 DEBUG nova.compute.provider_tree [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:42:54 compute-0 nova_compute[253538]: 2025-11-25 08:42:54.131 253542 DEBUG nova.scheduler.client.report [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:42:54 compute-0 nova_compute[253538]: 2025-11-25 08:42:54.162 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:54 compute-0 nova_compute[253538]: 2025-11-25 08:42:54.549 253542 INFO nova.network.neutron [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating port f7c4b9b0-3445-468a-a19a-8b19b2d029a2 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 25 08:42:54 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2362125678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:42:54 compute-0 podman[349941]: 2025-11-25 08:42:54.877440789 +0000 UTC m=+0.118449428 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 08:42:55 compute-0 nova_compute[253538]: 2025-11-25 08:42:55.294 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:42:55 compute-0 nova_compute[253538]: 2025-11-25 08:42:55.294 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquired lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:42:55 compute-0 nova_compute[253538]: 2025-11-25 08:42:55.294 253542 DEBUG nova.network.neutron [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:42:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 292 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 13 KiB/s wr, 42 op/s
Nov 25 08:42:55 compute-0 nova_compute[253538]: 2025-11-25 08:42:55.481 253542 DEBUG nova.compute.manager [req-838168d9-b9c7-4c84-ab0d-db60d7c0fdd0 req-84d56c32-0195-4af8-b2f5-ed039942cd94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-changed-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:42:55 compute-0 nova_compute[253538]: 2025-11-25 08:42:55.482 253542 DEBUG nova.compute.manager [req-838168d9-b9c7-4c84-ab0d-db60d7c0fdd0 req-84d56c32-0195-4af8-b2f5-ed039942cd94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Refreshing instance network info cache due to event network-changed-f7c4b9b0-3445-468a-a19a-8b19b2d029a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:42:55 compute-0 nova_compute[253538]: 2025-11-25 08:42:55.483 253542 DEBUG oslo_concurrency.lockutils [req-838168d9-b9c7-4c84-ab0d-db60d7c0fdd0 req-84d56c32-0195-4af8-b2f5-ed039942cd94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:42:55 compute-0 nova_compute[253538]: 2025-11-25 08:42:55.534 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060160.532469, 0774dd07-d931-40b5-8590-915c0611277d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:42:55 compute-0 nova_compute[253538]: 2025-11-25 08:42:55.534 253542 INFO nova.compute.manager [-] [instance: 0774dd07-d931-40b5-8590-915c0611277d] VM Stopped (Lifecycle Event)
Nov 25 08:42:55 compute-0 nova_compute[253538]: 2025-11-25 08:42:55.553 253542 DEBUG nova.compute.manager [None req-d4722c92-5b78-4994-8a8b-3c62d78a4efe - - - - - -] [instance: 0774dd07-d931-40b5-8590-915c0611277d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:42:55 compute-0 ceph-mon[75015]: pgmap v1828: 321 pgs: 321 active+clean; 292 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 13 KiB/s wr, 42 op/s
Nov 25 08:42:56 compute-0 nova_compute[253538]: 2025-11-25 08:42:56.782 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:56 compute-0 nova_compute[253538]: 2025-11-25 08:42:56.934 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:42:57 compute-0 nova_compute[253538]: 2025-11-25 08:42:57.213 253542 DEBUG nova.network.neutron [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating instance_info_cache with network_info: [{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:42:57 compute-0 nova_compute[253538]: 2025-11-25 08:42:57.230 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Releasing lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:42:57 compute-0 nova_compute[253538]: 2025-11-25 08:42:57.232 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:42:57 compute-0 nova_compute[253538]: 2025-11-25 08:42:57.233 253542 INFO nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Creating image(s)
Nov 25 08:42:57 compute-0 nova_compute[253538]: 2025-11-25 08:42:57.260 253542 DEBUG nova.storage.rbd_utils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:57 compute-0 nova_compute[253538]: 2025-11-25 08:42:57.264 253542 DEBUG nova.objects.instance [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:57 compute-0 nova_compute[253538]: 2025-11-25 08:42:57.267 253542 DEBUG oslo_concurrency.lockutils [req-838168d9-b9c7-4c84-ab0d-db60d7c0fdd0 req-84d56c32-0195-4af8-b2f5-ed039942cd94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:42:57 compute-0 nova_compute[253538]: 2025-11-25 08:42:57.267 253542 DEBUG nova.network.neutron [req-838168d9-b9c7-4c84-ab0d-db60d7c0fdd0 req-84d56c32-0195-4af8-b2f5-ed039942cd94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Refreshing network info cache for port f7c4b9b0-3445-468a-a19a-8b19b2d029a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:42:57 compute-0 nova_compute[253538]: 2025-11-25 08:42:57.313 253542 DEBUG nova.storage.rbd_utils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:57 compute-0 nova_compute[253538]: 2025-11-25 08:42:57.343 253542 DEBUG nova.storage.rbd_utils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:57 compute-0 nova_compute[253538]: 2025-11-25 08:42:57.347 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "88c5d518cc852b54df0f546077c13ae28485063d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:57 compute-0 nova_compute[253538]: 2025-11-25 08:42:57.348 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "88c5d518cc852b54df0f546077c13ae28485063d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 292 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 11 KiB/s wr, 35 op/s
Nov 25 08:42:57 compute-0 nova_compute[253538]: 2025-11-25 08:42:57.802 253542 DEBUG nova.virt.libvirt.imagebackend [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Image locations are: [{'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/34e9b311-13e0-4ffd-bc6f-64a46ba4b491/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/34e9b311-13e0-4ffd-bc6f-64a46ba4b491/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 08:42:57 compute-0 podman[350015]: 2025-11-25 08:42:57.867200356 +0000 UTC m=+0.112506948 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:42:57 compute-0 nova_compute[253538]: 2025-11-25 08:42:57.881 253542 DEBUG nova.virt.libvirt.imagebackend [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Selected location: {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/34e9b311-13e0-4ffd-bc6f-64a46ba4b491/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 08:42:57 compute-0 nova_compute[253538]: 2025-11-25 08:42:57.882 253542 DEBUG nova.storage.rbd_utils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] cloning images/34e9b311-13e0-4ffd-bc6f-64a46ba4b491@snap to None/3e75d0af-c514-42c5-aa05-88ae5552f196_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:42:57 compute-0 nova_compute[253538]: 2025-11-25 08:42:57.989 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "88c5d518cc852b54df0f546077c13ae28485063d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.137 253542 DEBUG nova.objects.instance [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'migration_context' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.205 253542 DEBUG nova.storage.rbd_utils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] flattening vms/3e75d0af-c514-42c5-aa05-88ae5552f196_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:42:58 compute-0 ceph-mon[75015]: pgmap v1829: 321 pgs: 321 active+clean; 292 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 11 KiB/s wr, 35 op/s
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.876 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Image rbd:vms/3e75d0af-c514-42c5-aa05-88ae5552f196_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.878 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.878 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Ensure instance console log exists: /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.879 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.879 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.879 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.883 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Start _get_guest_xml network_info=[{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T08:42:32Z,direct_url=<?>,disk_format='raw',id=34e9b311-13e0-4ffd-bc6f-64a46ba4b491,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-439401428-shelved',owner='295fcc758cf24ab4b01eb393f4863e36',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T08:42:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.890 253542 WARNING nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.897 253542 DEBUG nova.virt.libvirt.host [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.898 253542 DEBUG nova.virt.libvirt.host [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.902 253542 DEBUG nova.virt.libvirt.host [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.902 253542 DEBUG nova.virt.libvirt.host [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.903 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.903 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T08:42:32Z,direct_url=<?>,disk_format='raw',id=34e9b311-13e0-4ffd-bc6f-64a46ba4b491,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-439401428-shelved',owner='295fcc758cf24ab4b01eb393f4863e36',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T08:42:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.904 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.905 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.905 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.905 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.906 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.906 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.906 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.907 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.907 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.907 253542 DEBUG nova.virt.hardware [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.908 253542 DEBUG nova.objects.instance [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:58 compute-0 nova_compute[253538]: 2025-11-25 08:42:58.924 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:42:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1265293251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.395 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.430 253542 DEBUG nova.storage.rbd_utils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.438 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:42:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 296 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 684 KiB/s rd, 267 KiB/s wr, 37 op/s
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.649 253542 DEBUG nova.network.neutron [req-838168d9-b9c7-4c84-ab0d-db60d7c0fdd0 req-84d56c32-0195-4af8-b2f5-ed039942cd94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updated VIF entry in instance network info cache for port f7c4b9b0-3445-468a-a19a-8b19b2d029a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.650 253542 DEBUG nova.network.neutron [req-838168d9-b9c7-4c84-ab0d-db60d7c0fdd0 req-84d56c32-0195-4af8-b2f5-ed039942cd94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating instance_info_cache with network_info: [{"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.672 253542 DEBUG oslo_concurrency.lockutils [req-838168d9-b9c7-4c84-ab0d-db60d7c0fdd0 req-84d56c32-0195-4af8-b2f5-ed039942cd94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3e75d0af-c514-42c5-aa05-88ae5552f196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:42:59 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1265293251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:42:59 compute-0 ceph-mon[75015]: pgmap v1830: 321 pgs: 321 active+clean; 296 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 684 KiB/s rd, 267 KiB/s wr, 37 op/s
Nov 25 08:42:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:42:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982515680' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.964 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.967 253542 DEBUG nova.virt.libvirt.vif [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:40:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-439401428',display_name='tempest-ServerActionsTestOtherB-server-439401428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-439401428',id=95,image_ref='34e9b311-13e0-4ffd-bc6f-64a46ba4b491',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1642602877',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:40:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-iqkw2vgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image
_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestOtherB-587178207-project-member',shelved_at='2025-11-25T08:42:43.922397',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='34e9b311-13e0-4ffd-bc6f-64a46ba4b491'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:42:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=3e75d0af-c514-42c5-aa05-88ae5552f196,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.968 253542 DEBUG nova.network.os_vif_util [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.970 253542 DEBUG nova.network.os_vif_util [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.973 253542 DEBUG nova.objects.instance [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.992 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:42:59 compute-0 nova_compute[253538]:   <uuid>3e75d0af-c514-42c5-aa05-88ae5552f196</uuid>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   <name>instance-0000005f</name>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerActionsTestOtherB-server-439401428</nova:name>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:42:58</nova:creationTime>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:42:59 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:42:59 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:42:59 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:42:59 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:42:59 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:42:59 compute-0 nova_compute[253538]:         <nova:user uuid="66e1f27ea22d4ee08a0a470a8c18135e">tempest-ServerActionsTestOtherB-587178207-project-member</nova:user>
Nov 25 08:42:59 compute-0 nova_compute[253538]:         <nova:project uuid="295fcc758cf24ab4b01eb393f4863e36">tempest-ServerActionsTestOtherB-587178207</nova:project>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="34e9b311-13e0-4ffd-bc6f-64a46ba4b491"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:42:59 compute-0 nova_compute[253538]:         <nova:port uuid="f7c4b9b0-3445-468a-a19a-8b19b2d029a2">
Nov 25 08:42:59 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <system>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <entry name="serial">3e75d0af-c514-42c5-aa05-88ae5552f196</entry>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <entry name="uuid">3e75d0af-c514-42c5-aa05-88ae5552f196</entry>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     </system>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   <os>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   </os>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   <features>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   </features>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/3e75d0af-c514-42c5-aa05-88ae5552f196_disk">
Nov 25 08:42:59 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       </source>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:42:59 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config">
Nov 25 08:42:59 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       </source>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:42:59 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:83:e9:db"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <target dev="tapf7c4b9b0-34"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/console.log" append="off"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <video>
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     </video>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <input type="keyboard" bus="usb"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:42:59 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:42:59 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:42:59 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:42:59 compute-0 nova_compute[253538]: </domain>
Nov 25 08:42:59 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.993 253542 DEBUG nova.compute.manager [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Preparing to wait for external event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.993 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.993 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.994 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.994 253542 DEBUG nova.virt.libvirt.vif [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:40:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-439401428',display_name='tempest-ServerActionsTestOtherB-server-439401428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-439401428',id=95,image_ref='34e9b311-13e0-4ffd-bc6f-64a46ba4b491',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1642602877',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:40:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-iqkw2vgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='vir
tio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestOtherB-587178207-project-member',shelved_at='2025-11-25T08:42:43.922397',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='34e9b311-13e0-4ffd-bc6f-64a46ba4b491'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:42:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=3e75d0af-c514-42c5-aa05-88ae5552f196,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.995 253542 DEBUG nova.network.os_vif_util [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.995 253542 DEBUG nova.network.os_vif_util [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.996 253542 DEBUG os_vif [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.997 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.997 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:42:59 compute-0 nova_compute[253538]: 2025-11-25 08:42:59.998 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.001 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.001 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf7c4b9b0-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.002 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf7c4b9b0-34, col_values=(('external_ids', {'iface-id': 'f7c4b9b0-3445-468a-a19a-8b19b2d029a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:e9:db', 'vm-uuid': '3e75d0af-c514-42c5-aa05-88ae5552f196'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.059 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:00 compute-0 NetworkManager[48915]: <info>  [1764060180.0604] manager: (tapf7c4b9b0-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.063 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.068 253542 INFO os_vif [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34')
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.124 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.125 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.125 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] No VIF found with MAC fa:16:3e:83:e9:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.127 253542 INFO nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Using config drive
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.161 253542 DEBUG nova.storage.rbd_utils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.181 253542 DEBUG nova.objects.instance [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.213 253542 DEBUG nova.objects.instance [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'keypairs' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.557 253542 INFO nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Creating config drive at /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.570 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn_ot580u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.719 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn_ot580u" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:43:00 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3982515680' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.768 253542 DEBUG nova.storage.rbd_utils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] rbd image 3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:43:00 compute-0 nova_compute[253538]: 2025-11-25 08:43:00.776 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config 3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.021 253542 DEBUG oslo_concurrency.processutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config 3e75d0af-c514-42c5-aa05-88ae5552f196_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.244s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.022 253542 INFO nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Deleting local config drive /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196/disk.config because it was imported into RBD.
Nov 25 08:43:01 compute-0 kernel: tapf7c4b9b0-34: entered promiscuous mode
Nov 25 08:43:01 compute-0 NetworkManager[48915]: <info>  [1764060181.0946] manager: (tapf7c4b9b0-34): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.094 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:01 compute-0 ovn_controller[152859]: 2025-11-25T08:43:01Z|00955|binding|INFO|Claiming lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 for this chassis.
Nov 25 08:43:01 compute-0 ovn_controller[152859]: 2025-11-25T08:43:01Z|00956|binding|INFO|f7c4b9b0-3445-468a-a19a-8b19b2d029a2: Claiming fa:16:3e:83:e9:db 10.100.0.12
Nov 25 08:43:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.106 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:e9:db 10.100.0.12'], port_security=['fa:16:3e:83:e9:db 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3e75d0af-c514-42c5-aa05-88ae5552f196', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '295fcc758cf24ab4b01eb393f4863e36', 'neutron:revision_number': '7', 'neutron:security_group_ids': '4662b63b-c8aa-4161-b270-71466cebee15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31935ac8-a5f7-4fad-9e0b-ee28fade4ce4, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f7c4b9b0-3445-468a-a19a-8b19b2d029a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:43:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.107 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f7c4b9b0-3445-468a-a19a-8b19b2d029a2 in datapath 6e77a51d-2695-4e70-8b9d-c02ec0c62f35 bound to our chassis
Nov 25 08:43:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.109 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e77a51d-2695-4e70-8b9d-c02ec0c62f35
Nov 25 08:43:01 compute-0 ovn_controller[152859]: 2025-11-25T08:43:01Z|00957|binding|INFO|Setting lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 ovn-installed in OVS
Nov 25 08:43:01 compute-0 ovn_controller[152859]: 2025-11-25T08:43:01Z|00958|binding|INFO|Setting lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 up in Southbound
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.121 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.124 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.131 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2ea6c4-fc12-441e-b2b0-ce5dc87b48d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:01 compute-0 systemd-udevd[350337]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:43:01 compute-0 systemd-machined[215790]: New machine qemu-122-instance-0000005f.
Nov 25 08:43:01 compute-0 NetworkManager[48915]: <info>  [1764060181.1569] device (tapf7c4b9b0-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:43:01 compute-0 NetworkManager[48915]: <info>  [1764060181.1584] device (tapf7c4b9b0-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:43:01 compute-0 systemd[1]: Started Virtual Machine qemu-122-instance-0000005f.
Nov 25 08:43:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.166 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[52780ba8-c756-476c-891b-851944d1c59a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.170 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a516c0de-2c89-40e8-b9a3-8d03c528276f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.206 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f8574ea3-1eac-475a-8ace-5586d0f1172f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.226 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9b20ab35-deec-4055-bcdb-ca320108909f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e77a51d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:6f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538137, 'reachable_time': 29181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350348, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.249 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2ac9ce-f99f-4f8c-a6e1-6950feaa5ecd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538152, 'tstamp': 538152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350350, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538156, 'tstamp': 538156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350350, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.252 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e77a51d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.255 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.256 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.257 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e77a51d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.257 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:43:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.257 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e77a51d-20, col_values=(('external_ids', {'iface-id': '66275e2b-0197-461a-9be3-ae2fe1aec502'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:01.258 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.388 253542 DEBUG nova.compute.manager [req-4996d02a-acd2-42d6-9b1c-944ea93eab7d req-2a66fd64-208c-4434-95f8-f505e3b7dff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.389 253542 DEBUG oslo_concurrency.lockutils [req-4996d02a-acd2-42d6-9b1c-944ea93eab7d req-2a66fd64-208c-4434-95f8-f505e3b7dff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.396 253542 DEBUG oslo_concurrency.lockutils [req-4996d02a-acd2-42d6-9b1c-944ea93eab7d req-2a66fd64-208c-4434-95f8-f505e3b7dff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.396 253542 DEBUG oslo_concurrency.lockutils [req-4996d02a-acd2-42d6-9b1c-944ea93eab7d req-2a66fd64-208c-4434-95f8-f505e3b7dff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.397 253542 DEBUG nova.compute.manager [req-4996d02a-acd2-42d6-9b1c-944ea93eab7d req-2a66fd64-208c-4434-95f8-f505e3b7dff2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Processing event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:43:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 334 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.9 MiB/s wr, 29 op/s
Nov 25 08:43:01 compute-0 ceph-mon[75015]: pgmap v1831: 321 pgs: 321 active+clean; 334 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.9 MiB/s wr, 29 op/s
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.785 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.793 253542 DEBUG nova.compute.manager [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.794 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060181.792646, 3e75d0af-c514-42c5-aa05-88ae5552f196 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.794 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] VM Started (Lifecycle Event)
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.798 253542 DEBUG nova.virt.libvirt.driver [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.803 253542 INFO nova.virt.libvirt.driver [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance spawned successfully.
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.813 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.817 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.834 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.834 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060181.7937539, 3e75d0af-c514-42c5-aa05-88ae5552f196 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.834 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] VM Paused (Lifecycle Event)
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.850 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.855 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060181.7971272, 3e75d0af-c514-42c5-aa05-88ae5552f196 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.856 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] VM Resumed (Lifecycle Event)
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.874 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.878 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:43:01 compute-0 nova_compute[253538]: 2025-11-25 08:43:01.896 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:43:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:43:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Nov 25 08:43:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Nov 25 08:43:02 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Nov 25 08:43:03 compute-0 nova_compute[253538]: 2025-11-25 08:43:03.236 253542 DEBUG nova.compute.manager [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:43:03 compute-0 nova_compute[253538]: 2025-11-25 08:43:03.320 253542 DEBUG oslo_concurrency.lockutils [None req-2be8c927-37f3-42bf-8fd4-6e589def3687 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 9.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 351 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.7 MiB/s wr, 110 op/s
Nov 25 08:43:03 compute-0 nova_compute[253538]: 2025-11-25 08:43:03.660 253542 DEBUG nova.compute.manager [req-b2acd8c4-94ff-464e-bf30-1b8895629119 req-e95a60d8-93c1-42f6-bd28-14c7b3e595ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:43:03 compute-0 nova_compute[253538]: 2025-11-25 08:43:03.660 253542 DEBUG oslo_concurrency.lockutils [req-b2acd8c4-94ff-464e-bf30-1b8895629119 req-e95a60d8-93c1-42f6-bd28-14c7b3e595ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:03 compute-0 nova_compute[253538]: 2025-11-25 08:43:03.662 253542 DEBUG oslo_concurrency.lockutils [req-b2acd8c4-94ff-464e-bf30-1b8895629119 req-e95a60d8-93c1-42f6-bd28-14c7b3e595ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:03 compute-0 nova_compute[253538]: 2025-11-25 08:43:03.663 253542 DEBUG oslo_concurrency.lockutils [req-b2acd8c4-94ff-464e-bf30-1b8895629119 req-e95a60d8-93c1-42f6-bd28-14c7b3e595ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:03 compute-0 nova_compute[253538]: 2025-11-25 08:43:03.663 253542 DEBUG nova.compute.manager [req-b2acd8c4-94ff-464e-bf30-1b8895629119 req-e95a60d8-93c1-42f6-bd28-14c7b3e595ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] No waiting events found dispatching network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:43:03 compute-0 nova_compute[253538]: 2025-11-25 08:43:03.663 253542 WARNING nova.compute.manager [req-b2acd8c4-94ff-464e-bf30-1b8895629119 req-e95a60d8-93c1-42f6-bd28-14c7b3e595ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received unexpected event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 for instance with vm_state active and task_state None.
Nov 25 08:43:03 compute-0 ceph-mon[75015]: osdmap e218: 3 total, 3 up, 3 in
Nov 25 08:43:03 compute-0 ceph-mon[75015]: pgmap v1833: 321 pgs: 321 active+clean; 351 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.7 MiB/s wr, 110 op/s
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015193727819561111 of space, bias 1.0, pg target 0.45581183458683333 quantized to 32 (current 32)
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019389442721560201 of space, bias 1.0, pg target 0.581683281646806 quantized to 32 (current 32)
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:43:03 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:43:05 compute-0 nova_compute[253538]: 2025-11-25 08:43:05.061 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 319 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 4.7 MiB/s wr, 135 op/s
Nov 25 08:43:06 compute-0 ceph-mon[75015]: pgmap v1834: 321 pgs: 321 active+clean; 319 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 4.7 MiB/s wr, 135 op/s
Nov 25 08:43:06 compute-0 nova_compute[253538]: 2025-11-25 08:43:06.788 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:43:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 293 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Nov 25 08:43:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Nov 25 08:43:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Nov 25 08:43:08 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Nov 25 08:43:08 compute-0 ceph-mon[75015]: pgmap v1835: 321 pgs: 321 active+clean; 293 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Nov 25 08:43:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 293 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.5 MiB/s wr, 250 op/s
Nov 25 08:43:09 compute-0 ceph-mon[75015]: osdmap e219: 3 total, 3 up, 3 in
Nov 25 08:43:10 compute-0 nova_compute[253538]: 2025-11-25 08:43:10.110 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:10 compute-0 ceph-mon[75015]: pgmap v1837: 321 pgs: 321 active+clean; 293 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.5 MiB/s wr, 250 op/s
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.469 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "5f960b00-a365-4665-8a74-50d2e7b7f940" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.470 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "5f960b00-a365-4665-8a74-50d2e7b7f940" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.470 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "5f960b00-a365-4665-8a74-50d2e7b7f940-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.470 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "5f960b00-a365-4665-8a74-50d2e7b7f940-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.471 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "5f960b00-a365-4665-8a74-50d2e7b7f940-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.472 253542 INFO nova.compute.manager [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Terminating instance
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.473 253542 DEBUG nova.compute.manager [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:43:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 280 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 817 KiB/s wr, 139 op/s
Nov 25 08:43:11 compute-0 kernel: tap957fffc1-ba (unregistering): left promiscuous mode
Nov 25 08:43:11 compute-0 NetworkManager[48915]: <info>  [1764060191.5373] device (tap957fffc1-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.550 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:11 compute-0 ovn_controller[152859]: 2025-11-25T08:43:11Z|00959|binding|INFO|Releasing lport 957fffc1-ba49-42af-b933-a544944131aa from this chassis (sb_readonly=0)
Nov 25 08:43:11 compute-0 ovn_controller[152859]: 2025-11-25T08:43:11Z|00960|binding|INFO|Setting lport 957fffc1-ba49-42af-b933-a544944131aa down in Southbound
Nov 25 08:43:11 compute-0 ovn_controller[152859]: 2025-11-25T08:43:11Z|00961|binding|INFO|Removing iface tap957fffc1-ba ovn-installed in OVS
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.553 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.591 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:b1:e0 10.100.0.8'], port_security=['fa:16:3e:5b:b1:e0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5f960b00-a365-4665-8a74-50d2e7b7f940', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '295fcc758cf24ab4b01eb393f4863e36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f53cda67-e087-4973-b43b-027ef8b57bb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31935ac8-a5f7-4fad-9e0b-ee28fade4ce4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=957fffc1-ba49-42af-b933-a544944131aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:43:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.593 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 957fffc1-ba49-42af-b933-a544944131aa in datapath 6e77a51d-2695-4e70-8b9d-c02ec0c62f35 unbound from our chassis
Nov 25 08:43:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.595 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e77a51d-2695-4e70-8b9d-c02ec0c62f35
Nov 25 08:43:11 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000061.scope: Deactivated successfully.
Nov 25 08:43:11 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000061.scope: Consumed 16.614s CPU time.
Nov 25 08:43:11 compute-0 systemd-machined[215790]: Machine qemu-119-instance-00000061 terminated.
Nov 25 08:43:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.612 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d542d4d-a5af-47cb-b191-087d026e08f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.647 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5d83f0c4-7262-4937-8a4a-6c05722398bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.651 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1da6f318-475f-4f53-bca7-468a6698e946]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.679 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[253c3163-56cf-4389-b2b1-41eea9a43db4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.700 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[056108d3-1cb9-4a3b-bc7e-c3d9779a5045]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e77a51d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:6f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538137, 'reachable_time': 29181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350406, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.706 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.714 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.722 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0bfa9947-ba81-4d90-91ab-3ba7854c8fe5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538152, 'tstamp': 538152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350412, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e77a51d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538156, 'tstamp': 538156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350412, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.723 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e77a51d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.725 253542 INFO nova.virt.libvirt.driver [-] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Instance destroyed successfully.
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.726 253542 DEBUG nova.objects.instance [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'resources' on Instance uuid 5f960b00-a365-4665-8a74-50d2e7b7f940 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.734 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.737 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e77a51d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.738 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:43:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.738 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e77a51d-20, col_values=(('external_ids', {'iface-id': '66275e2b-0197-461a-9be3-ae2fe1aec502'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:11.739 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.741 253542 DEBUG nova.virt.libvirt.vif [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:41:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-402804146',display_name='tempest-ServerActionsTestOtherB-server-402804146',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-402804146',id=97,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:41:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-totndcwk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestOtherB-587178207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:41:52Z,user_data=None,user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=5f960b00-a365-4665-8a74-50d2e7b7f940,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "957fffc1-ba49-42af-b933-a544944131aa", "address": "fa:16:3e:5b:b1:e0", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap957fffc1-ba", "ovs_interfaceid": "957fffc1-ba49-42af-b933-a544944131aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.741 253542 DEBUG nova.network.os_vif_util [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "957fffc1-ba49-42af-b933-a544944131aa", "address": "fa:16:3e:5b:b1:e0", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap957fffc1-ba", "ovs_interfaceid": "957fffc1-ba49-42af-b933-a544944131aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.742 253542 DEBUG nova.network.os_vif_util [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b1:e0,bridge_name='br-int',has_traffic_filtering=True,id=957fffc1-ba49-42af-b933-a544944131aa,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap957fffc1-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.743 253542 DEBUG os_vif [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b1:e0,bridge_name='br-int',has_traffic_filtering=True,id=957fffc1-ba49-42af-b933-a544944131aa,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap957fffc1-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.745 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.745 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap957fffc1-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.747 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.750 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.752 253542 INFO os_vif [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b1:e0,bridge_name='br-int',has_traffic_filtering=True,id=957fffc1-ba49-42af-b933-a544944131aa,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap957fffc1-ba')
Nov 25 08:43:11 compute-0 nova_compute[253538]: 2025-11-25 08:43:11.790 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:43:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Nov 25 08:43:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Nov 25 08:43:12 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Nov 25 08:43:12 compute-0 nova_compute[253538]: 2025-11-25 08:43:12.248 253542 INFO nova.virt.libvirt.driver [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Deleting instance files /var/lib/nova/instances/5f960b00-a365-4665-8a74-50d2e7b7f940_del
Nov 25 08:43:12 compute-0 nova_compute[253538]: 2025-11-25 08:43:12.249 253542 INFO nova.virt.libvirt.driver [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Deletion of /var/lib/nova/instances/5f960b00-a365-4665-8a74-50d2e7b7f940_del complete
Nov 25 08:43:12 compute-0 nova_compute[253538]: 2025-11-25 08:43:12.345 253542 INFO nova.compute.manager [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Took 0.87 seconds to destroy the instance on the hypervisor.
Nov 25 08:43:12 compute-0 nova_compute[253538]: 2025-11-25 08:43:12.346 253542 DEBUG oslo.service.loopingcall [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:43:12 compute-0 nova_compute[253538]: 2025-11-25 08:43:12.347 253542 DEBUG nova.compute.manager [-] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:43:12 compute-0 nova_compute[253538]: 2025-11-25 08:43:12.347 253542 DEBUG nova.network.neutron [-] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:43:12 compute-0 ceph-mon[75015]: pgmap v1838: 321 pgs: 321 active+clean; 280 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 817 KiB/s wr, 139 op/s
Nov 25 08:43:12 compute-0 ceph-mon[75015]: osdmap e220: 3 total, 3 up, 3 in
Nov 25 08:43:13 compute-0 nova_compute[253538]: 2025-11-25 08:43:13.467 253542 DEBUG nova.network.neutron [-] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:43:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1840: 321 pgs: 321 active+clean; 234 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.5 KiB/s wr, 139 op/s
Nov 25 08:43:13 compute-0 nova_compute[253538]: 2025-11-25 08:43:13.485 253542 INFO nova.compute.manager [-] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Took 1.14 seconds to deallocate network for instance.
Nov 25 08:43:13 compute-0 nova_compute[253538]: 2025-11-25 08:43:13.549 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:13 compute-0 nova_compute[253538]: 2025-11-25 08:43:13.550 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:13 compute-0 nova_compute[253538]: 2025-11-25 08:43:13.596 253542 DEBUG nova.compute.manager [req-ade884f5-bd65-43fa-b5fa-6efd6d5c5317 req-9e8c1aa4-2bf5-4a57-b5b6-0061d44b10fb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Received event network-vif-deleted-957fffc1-ba49-42af-b933-a544944131aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:43:13 compute-0 nova_compute[253538]: 2025-11-25 08:43:13.644 253542 DEBUG oslo_concurrency.processutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:43:13 compute-0 ceph-mon[75015]: pgmap v1840: 321 pgs: 321 active+clean; 234 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.5 KiB/s wr, 139 op/s
Nov 25 08:43:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:43:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192620321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:43:14 compute-0 nova_compute[253538]: 2025-11-25 08:43:14.097 253542 DEBUG oslo_concurrency.processutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:43:14 compute-0 nova_compute[253538]: 2025-11-25 08:43:14.106 253542 DEBUG nova.compute.provider_tree [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:43:14 compute-0 nova_compute[253538]: 2025-11-25 08:43:14.130 253542 DEBUG nova.scheduler.client.report [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:43:14 compute-0 nova_compute[253538]: 2025-11-25 08:43:14.150 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:14 compute-0 nova_compute[253538]: 2025-11-25 08:43:14.186 253542 INFO nova.scheduler.client.report [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Deleted allocations for instance 5f960b00-a365-4665-8a74-50d2e7b7f940
Nov 25 08:43:14 compute-0 nova_compute[253538]: 2025-11-25 08:43:14.261 253542 DEBUG oslo_concurrency.lockutils [None req-f6da964d-bc0e-4e5a-bf71-7948afd029de 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "5f960b00-a365-4665-8a74-50d2e7b7f940" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1192620321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.139 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.140 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.140 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.140 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.141 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.142 253542 INFO nova.compute.manager [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Terminating instance
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.143 253542 DEBUG nova.compute.manager [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:43:15 compute-0 kernel: tapf7c4b9b0-34 (unregistering): left promiscuous mode
Nov 25 08:43:15 compute-0 NetworkManager[48915]: <info>  [1764060195.2022] device (tapf7c4b9b0-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:43:15 compute-0 ovn_controller[152859]: 2025-11-25T08:43:15Z|00962|binding|INFO|Releasing lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 from this chassis (sb_readonly=0)
Nov 25 08:43:15 compute-0 ovn_controller[152859]: 2025-11-25T08:43:15Z|00963|binding|INFO|Setting lport f7c4b9b0-3445-468a-a19a-8b19b2d029a2 down in Southbound
Nov 25 08:43:15 compute-0 ovn_controller[152859]: 2025-11-25T08:43:15Z|00964|binding|INFO|Removing iface tapf7c4b9b0-34 ovn-installed in OVS
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.223 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:e9:db 10.100.0.12'], port_security=['fa:16:3e:83:e9:db 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3e75d0af-c514-42c5-aa05-88ae5552f196', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '295fcc758cf24ab4b01eb393f4863e36', 'neutron:revision_number': '9', 'neutron:security_group_ids': '4662b63b-c8aa-4161-b270-71466cebee15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.248', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31935ac8-a5f7-4fad-9e0b-ee28fade4ce4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f7c4b9b0-3445-468a-a19a-8b19b2d029a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:43:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.227 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f7c4b9b0-3445-468a-a19a-8b19b2d029a2 in datapath 6e77a51d-2695-4e70-8b9d-c02ec0c62f35 unbound from our chassis
Nov 25 08:43:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.230 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6e77a51d-2695-4e70-8b9d-c02ec0c62f35, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:43:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.231 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[55d8aadf-ef7c-473f-82f8-ba89c95f2039]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.232 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35 namespace which is not needed anymore
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.239 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:15 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Nov 25 08:43:15 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d0000005f.scope: Consumed 14.099s CPU time.
Nov 25 08:43:15 compute-0 systemd-machined[215790]: Machine qemu-122-instance-0000005f terminated.
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.421 253542 INFO nova.virt.libvirt.driver [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Instance destroyed successfully.
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.422 253542 DEBUG nova.objects.instance [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lazy-loading 'resources' on Instance uuid 3e75d0af-c514-42c5-aa05-88ae5552f196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:43:15 compute-0 neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35[344535]: [NOTICE]   (344562) : haproxy version is 2.8.14-c23fe91
Nov 25 08:43:15 compute-0 neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35[344535]: [NOTICE]   (344562) : path to executable is /usr/sbin/haproxy
Nov 25 08:43:15 compute-0 neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35[344535]: [WARNING]  (344562) : Exiting Master process...
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.435 253542 DEBUG nova.virt.libvirt.vif [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:40:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-439401428',display_name='tempest-ServerActionsTestOtherB-server-439401428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-439401428',id=95,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJg9vWbWDQUJP+5O2Ge5sP4yW+A5RDbOBkV9U0C3hvxoWu1yQZFyI5Vs8mvdnljTrZSXJgG69Yru9lsQdThAcjefMLvUo4eUx6Akjue1XjQsVfgM0pq0/Z3uC1qyMxn0Ew==',key_name='tempest-keypair-1642602877',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:43:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='295fcc758cf24ab4b01eb393f4863e36',ramdisk_id='',reservation_id='r-iqkw2vgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-587178207',owner_user_name='tempest-ServerActionsTestOtherB-587178207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:43:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='66e1f27ea22d4ee08a0a470a8c18135e',uuid=3e75d0af-c514-42c5-aa05-88ae5552f196,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.435 253542 DEBUG nova.network.os_vif_util [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converting VIF {"id": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "address": "fa:16:3e:83:e9:db", "network": {"id": "6e77a51d-2695-4e70-8b9d-c02ec0c62f35", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1129579753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "295fcc758cf24ab4b01eb393f4863e36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7c4b9b0-34", "ovs_interfaceid": "f7c4b9b0-3445-468a-a19a-8b19b2d029a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.436 253542 DEBUG nova.network.os_vif_util [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.436 253542 DEBUG os_vif [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:43:15 compute-0 neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35[344535]: [ALERT]    (344562) : Current worker (344564) exited with code 143 (Terminated)
Nov 25 08:43:15 compute-0 neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35[344535]: [WARNING]  (344562) : All workers exited. Exiting... (0)
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.438 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.438 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf7c4b9b0-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:15 compute-0 systemd[1]: libpod-f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b.scope: Deactivated successfully.
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.443 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:43:15 compute-0 podman[350485]: 2025-11-25 08:43:15.445929284 +0000 UTC m=+0.100659684 container died f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.446 253542 INFO os_vif [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=f7c4b9b0-3445-468a-a19a-8b19b2d029a2,network=Network(6e77a51d-2695-4e70-8b9d-c02ec0c62f35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7c4b9b0-34')
Nov 25 08:43:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 194 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 236 KiB/s rd, 19 KiB/s wr, 88 op/s
Nov 25 08:43:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b-userdata-shm.mount: Deactivated successfully.
Nov 25 08:43:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6947475d689df6f820a434a2c1f0b94b91a492171023f6878dbc9d586341b108-merged.mount: Deactivated successfully.
Nov 25 08:43:15 compute-0 podman[350485]: 2025-11-25 08:43:15.520631399 +0000 UTC m=+0.175361799 container cleanup f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 08:43:15 compute-0 systemd[1]: libpod-conmon-f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b.scope: Deactivated successfully.
Nov 25 08:43:15 compute-0 podman[350538]: 2025-11-25 08:43:15.666561806 +0000 UTC m=+0.117541924 container remove f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:43:15 compute-0 ceph-mon[75015]: pgmap v1841: 321 pgs: 321 active+clean; 194 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 236 KiB/s rd, 19 KiB/s wr, 88 op/s
Nov 25 08:43:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.677 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bc633608-0346-4b12-84ed-3721ad216f44]: (4, ('Tue Nov 25 08:43:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35 (f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b)\nf3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b\nTue Nov 25 08:43:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35 (f3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b)\nf3400fe71aa393ec401ab352cccfc649c5d4596cc58f98e493af76e9ecd4cc4b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.679 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1f9ba656-c9e9-4f4f-b29a-724bb274e640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.682 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e77a51d-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.685 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:15 compute-0 kernel: tap6e77a51d-20: left promiscuous mode
Nov 25 08:43:15 compute-0 nova_compute[253538]: 2025-11-25 08:43:15.717 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.720 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[78b00812-2f1a-462e-8b6c-aabaee7db9ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.733 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[54924742-e7df-4d9a-a814-a4333e4cf4b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.734 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[53f396df-f206-4da1-85f1-e5cdfa411680]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.754 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[de1568c3-e6db-454d-979d-375a2cd5ecb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538129, 'reachable_time': 18513, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350555, 'error': None, 'target': 'ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d6e77a51d\x2d2695\x2d4e70\x2d8b9d\x2dc02ec0c62f35.mount: Deactivated successfully.
Nov 25 08:43:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.758 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6e77a51d-2695-4e70-8b9d-c02ec0c62f35 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:43:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:15.759 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[7f41a51e-62e6-4c5c-a31e-e4d8ab83c517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:16 compute-0 nova_compute[253538]: 2025-11-25 08:43:16.226 253542 INFO nova.virt.libvirt.driver [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Deleting instance files /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196_del
Nov 25 08:43:16 compute-0 nova_compute[253538]: 2025-11-25 08:43:16.227 253542 INFO nova.virt.libvirt.driver [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Deletion of /var/lib/nova/instances/3e75d0af-c514-42c5-aa05-88ae5552f196_del complete
Nov 25 08:43:16 compute-0 nova_compute[253538]: 2025-11-25 08:43:16.268 253542 INFO nova.compute.manager [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Took 1.12 seconds to destroy the instance on the hypervisor.
Nov 25 08:43:16 compute-0 nova_compute[253538]: 2025-11-25 08:43:16.268 253542 DEBUG oslo.service.loopingcall [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:43:16 compute-0 nova_compute[253538]: 2025-11-25 08:43:16.269 253542 DEBUG nova.compute.manager [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:43:16 compute-0 nova_compute[253538]: 2025-11-25 08:43:16.269 253542 DEBUG nova.network.neutron [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:43:16 compute-0 nova_compute[253538]: 2025-11-25 08:43:16.792 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:16 compute-0 nova_compute[253538]: 2025-11-25 08:43:16.858 253542 DEBUG nova.compute.manager [req-9155c7fb-0e4b-49bd-8ad1-4d667b6999a6 req-2ce25ed4-72c5-49bf-b043-7f90dfb163d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-unplugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:43:16 compute-0 nova_compute[253538]: 2025-11-25 08:43:16.858 253542 DEBUG oslo_concurrency.lockutils [req-9155c7fb-0e4b-49bd-8ad1-4d667b6999a6 req-2ce25ed4-72c5-49bf-b043-7f90dfb163d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:16 compute-0 nova_compute[253538]: 2025-11-25 08:43:16.859 253542 DEBUG oslo_concurrency.lockutils [req-9155c7fb-0e4b-49bd-8ad1-4d667b6999a6 req-2ce25ed4-72c5-49bf-b043-7f90dfb163d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:16 compute-0 nova_compute[253538]: 2025-11-25 08:43:16.859 253542 DEBUG oslo_concurrency.lockutils [req-9155c7fb-0e4b-49bd-8ad1-4d667b6999a6 req-2ce25ed4-72c5-49bf-b043-7f90dfb163d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:16 compute-0 nova_compute[253538]: 2025-11-25 08:43:16.860 253542 DEBUG nova.compute.manager [req-9155c7fb-0e4b-49bd-8ad1-4d667b6999a6 req-2ce25ed4-72c5-49bf-b043-7f90dfb163d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] No waiting events found dispatching network-vif-unplugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:43:16 compute-0 nova_compute[253538]: 2025-11-25 08:43:16.860 253542 DEBUG nova.compute.manager [req-9155c7fb-0e4b-49bd-8ad1-4d667b6999a6 req-2ce25ed4-72c5-49bf-b043-7f90dfb163d3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-unplugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:43:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:43:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Nov 25 08:43:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Nov 25 08:43:17 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Nov 25 08:43:17 compute-0 nova_compute[253538]: 2025-11-25 08:43:17.411 253542 DEBUG nova.network.neutron [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:43:17 compute-0 nova_compute[253538]: 2025-11-25 08:43:17.435 253542 INFO nova.compute.manager [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Took 1.17 seconds to deallocate network for instance.
Nov 25 08:43:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 141 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 484 KiB/s rd, 23 KiB/s wr, 131 op/s
Nov 25 08:43:17 compute-0 nova_compute[253538]: 2025-11-25 08:43:17.506 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:17 compute-0 nova_compute[253538]: 2025-11-25 08:43:17.506 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:17 compute-0 nova_compute[253538]: 2025-11-25 08:43:17.554 253542 DEBUG oslo_concurrency.processutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:43:17 compute-0 nova_compute[253538]: 2025-11-25 08:43:17.602 253542 DEBUG nova.compute.manager [req-ad0623fa-3212-4403-8597-e3afc31d3fe5 req-bb736b34-6c3c-45b0-93b1-01185bf88f55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-deleted-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:43:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:43:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/643779945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:43:17 compute-0 nova_compute[253538]: 2025-11-25 08:43:17.988 253542 DEBUG oslo_concurrency.processutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:43:17 compute-0 nova_compute[253538]: 2025-11-25 08:43:17.994 253542 DEBUG nova.compute.provider_tree [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:43:18 compute-0 nova_compute[253538]: 2025-11-25 08:43:18.012 253542 DEBUG nova.scheduler.client.report [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:43:18 compute-0 nova_compute[253538]: 2025-11-25 08:43:18.042 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:18 compute-0 nova_compute[253538]: 2025-11-25 08:43:18.073 253542 INFO nova.scheduler.client.report [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Deleted allocations for instance 3e75d0af-c514-42c5-aa05-88ae5552f196
Nov 25 08:43:18 compute-0 nova_compute[253538]: 2025-11-25 08:43:18.141 253542 DEBUG oslo_concurrency.lockutils [None req-a6412edf-1536-403d-86f1-80a20e2d46f6 66e1f27ea22d4ee08a0a470a8c18135e 295fcc758cf24ab4b01eb393f4863e36 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:18 compute-0 ceph-mon[75015]: osdmap e221: 3 total, 3 up, 3 in
Nov 25 08:43:18 compute-0 ceph-mon[75015]: pgmap v1843: 321 pgs: 321 active+clean; 141 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 484 KiB/s rd, 23 KiB/s wr, 131 op/s
Nov 25 08:43:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/643779945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:43:19 compute-0 nova_compute[253538]: 2025-11-25 08:43:19.057 253542 DEBUG nova.compute.manager [req-e28b6101-49fe-4693-9d55-f43944974aa0 req-358b267f-240f-4eb5-b604-9515080455af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:43:19 compute-0 nova_compute[253538]: 2025-11-25 08:43:19.057 253542 DEBUG oslo_concurrency.lockutils [req-e28b6101-49fe-4693-9d55-f43944974aa0 req-358b267f-240f-4eb5-b604-9515080455af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:19 compute-0 nova_compute[253538]: 2025-11-25 08:43:19.058 253542 DEBUG oslo_concurrency.lockutils [req-e28b6101-49fe-4693-9d55-f43944974aa0 req-358b267f-240f-4eb5-b604-9515080455af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:19 compute-0 nova_compute[253538]: 2025-11-25 08:43:19.058 253542 DEBUG oslo_concurrency.lockutils [req-e28b6101-49fe-4693-9d55-f43944974aa0 req-358b267f-240f-4eb5-b604-9515080455af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3e75d0af-c514-42c5-aa05-88ae5552f196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:19 compute-0 nova_compute[253538]: 2025-11-25 08:43:19.058 253542 DEBUG nova.compute.manager [req-e28b6101-49fe-4693-9d55-f43944974aa0 req-358b267f-240f-4eb5-b604-9515080455af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] No waiting events found dispatching network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:43:19 compute-0 nova_compute[253538]: 2025-11-25 08:43:19.058 253542 WARNING nova.compute.manager [req-e28b6101-49fe-4693-9d55-f43944974aa0 req-358b267f-240f-4eb5-b604-9515080455af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Received unexpected event network-vif-plugged-f7c4b9b0-3445-468a-a19a-8b19b2d029a2 for instance with vm_state deleted and task_state None.
Nov 25 08:43:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 125 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 578 KiB/s rd, 23 KiB/s wr, 149 op/s
Nov 25 08:43:20 compute-0 ceph-mon[75015]: pgmap v1844: 321 pgs: 321 active+clean; 125 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 578 KiB/s rd, 23 KiB/s wr, 149 op/s
Nov 25 08:43:20 compute-0 podman[350579]: 2025-11-25 08:43:20.198453697 +0000 UTC m=+0.066859263 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:43:20 compute-0 nova_compute[253538]: 2025-11-25 08:43:20.466 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:20 compute-0 nova_compute[253538]: 2025-11-25 08:43:20.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:43:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 496 KiB/s rd, 19 KiB/s wr, 125 op/s
Nov 25 08:43:21 compute-0 nova_compute[253538]: 2025-11-25 08:43:21.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:43:22 compute-0 ceph-mon[75015]: pgmap v1845: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 496 KiB/s rd, 19 KiB/s wr, 125 op/s
Nov 25 08:43:22 compute-0 nova_compute[253538]: 2025-11-25 08:43:22.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:43:22 compute-0 nova_compute[253538]: 2025-11-25 08:43:22.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:43:23 compute-0 nova_compute[253538]: 2025-11-25 08:43:23.436 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:43:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:43:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:43:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:43:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:43:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:43:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1846: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 438 KiB/s rd, 18 KiB/s wr, 93 op/s
Nov 25 08:43:23 compute-0 nova_compute[253538]: 2025-11-25 08:43:23.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:24 compute-0 ceph-mon[75015]: pgmap v1846: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 438 KiB/s rd, 18 KiB/s wr, 93 op/s
Nov 25 08:43:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 4.0 KiB/s wr, 60 op/s
Nov 25 08:43:25 compute-0 nova_compute[253538]: 2025-11-25 08:43:25.504 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:25 compute-0 nova_compute[253538]: 2025-11-25 08:43:25.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:43:25 compute-0 nova_compute[253538]: 2025-11-25 08:43:25.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:43:25 compute-0 nova_compute[253538]: 2025-11-25 08:43:25.601 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 08:43:25 compute-0 nova_compute[253538]: 2025-11-25 08:43:25.602 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:43:25 compute-0 podman[350602]: 2025-11-25 08:43:25.831382293 +0000 UTC m=+0.074632725 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 08:43:26 compute-0 sshd-session[350600]: Invalid user admin from 193.32.162.151 port 53048
Nov 25 08:43:26 compute-0 sshd-session[350600]: Connection closed by invalid user admin 193.32.162.151 port 53048 [preauth]
Nov 25 08:43:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Nov 25 08:43:26 compute-0 ceph-mon[75015]: pgmap v1847: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 4.0 KiB/s wr, 60 op/s
Nov 25 08:43:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Nov 25 08:43:26 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Nov 25 08:43:26 compute-0 nova_compute[253538]: 2025-11-25 08:43:26.721 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060191.719975, 5f960b00-a365-4665-8a74-50d2e7b7f940 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:43:26 compute-0 nova_compute[253538]: 2025-11-25 08:43:26.721 253542 INFO nova.compute.manager [-] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] VM Stopped (Lifecycle Event)
Nov 25 08:43:26 compute-0 nova_compute[253538]: 2025-11-25 08:43:26.745 253542 DEBUG nova.compute.manager [None req-d948998c-dde8-4137-8337-8db65ddd58ff - - - - - -] [instance: 5f960b00-a365-4665-8a74-50d2e7b7f940] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:43:26 compute-0 nova_compute[253538]: 2025-11-25 08:43:26.799 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:43:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 1.5 KiB/s wr, 29 op/s
Nov 25 08:43:27 compute-0 ceph-mon[75015]: osdmap e222: 3 total, 3 up, 3 in
Nov 25 08:43:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:28.208 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:43:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:28.209 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:43:28 compute-0 nova_compute[253538]: 2025-11-25 08:43:28.210 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:28 compute-0 ceph-mon[75015]: pgmap v1849: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 1.5 KiB/s wr, 29 op/s
Nov 25 08:43:28 compute-0 podman[350622]: 2025-11-25 08:43:28.883716942 +0000 UTC m=+0.130268280 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 08:43:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:43:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3769272791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:43:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:43:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3769272791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:43:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 1.4 KiB/s wr, 15 op/s
Nov 25 08:43:29 compute-0 nova_compute[253538]: 2025-11-25 08:43:29.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:43:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Nov 25 08:43:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3769272791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:43:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3769272791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:43:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Nov 25 08:43:29 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Nov 25 08:43:30 compute-0 nova_compute[253538]: 2025-11-25 08:43:30.421 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060195.420136, 3e75d0af-c514-42c5-aa05-88ae5552f196 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:43:30 compute-0 nova_compute[253538]: 2025-11-25 08:43:30.422 253542 INFO nova.compute.manager [-] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] VM Stopped (Lifecycle Event)
Nov 25 08:43:30 compute-0 nova_compute[253538]: 2025-11-25 08:43:30.454 253542 DEBUG nova.compute.manager [None req-d70cc2fb-970d-4ec4-942c-f046682c686e - - - - - -] [instance: 3e75d0af-c514-42c5-aa05-88ae5552f196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:43:30 compute-0 nova_compute[253538]: 2025-11-25 08:43:30.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:30 compute-0 ceph-mon[75015]: pgmap v1850: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 1.4 KiB/s wr, 15 op/s
Nov 25 08:43:30 compute-0 ceph-mon[75015]: osdmap e223: 3 total, 3 up, 3 in
Nov 25 08:43:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:31.211 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.9 KiB/s wr, 27 op/s
Nov 25 08:43:31 compute-0 nova_compute[253538]: 2025-11-25 08:43:31.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:43:31 compute-0 nova_compute[253538]: 2025-11-25 08:43:31.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:31 compute-0 nova_compute[253538]: 2025-11-25 08:43:31.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:31 compute-0 nova_compute[253538]: 2025-11-25 08:43:31.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:31 compute-0 nova_compute[253538]: 2025-11-25 08:43:31.582 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:43:31 compute-0 nova_compute[253538]: 2025-11-25 08:43:31.582 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:43:31 compute-0 nova_compute[253538]: 2025-11-25 08:43:31.801 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:43:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1789811154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:43:32 compute-0 nova_compute[253538]: 2025-11-25 08:43:32.072 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:43:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:43:32 compute-0 nova_compute[253538]: 2025-11-25 08:43:32.267 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:43:32 compute-0 nova_compute[253538]: 2025-11-25 08:43:32.269 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3961MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:43:32 compute-0 nova_compute[253538]: 2025-11-25 08:43:32.270 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:32 compute-0 nova_compute[253538]: 2025-11-25 08:43:32.270 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:32 compute-0 nova_compute[253538]: 2025-11-25 08:43:32.334 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:43:32 compute-0 nova_compute[253538]: 2025-11-25 08:43:32.335 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:43:32 compute-0 nova_compute[253538]: 2025-11-25 08:43:32.356 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:43:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Nov 25 08:43:32 compute-0 ceph-mon[75015]: pgmap v1852: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.9 KiB/s wr, 27 op/s
Nov 25 08:43:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1789811154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:43:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Nov 25 08:43:32 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Nov 25 08:43:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:43:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/203456022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:43:32 compute-0 nova_compute[253538]: 2025-11-25 08:43:32.842 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:43:32 compute-0 nova_compute[253538]: 2025-11-25 08:43:32.848 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:43:32 compute-0 nova_compute[253538]: 2025-11-25 08:43:32.865 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:43:33 compute-0 nova_compute[253538]: 2025-11-25 08:43:33.095 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:43:33 compute-0 nova_compute[253538]: 2025-11-25 08:43:33.096 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1854: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 50 op/s
Nov 25 08:43:33 compute-0 ceph-mon[75015]: osdmap e224: 3 total, 3 up, 3 in
Nov 25 08:43:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/203456022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:43:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Nov 25 08:43:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Nov 25 08:43:34 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Nov 25 08:43:34 compute-0 ceph-mon[75015]: pgmap v1854: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 50 op/s
Nov 25 08:43:35 compute-0 nova_compute[253538]: 2025-11-25 08:43:35.090 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:43:35 compute-0 nova_compute[253538]: 2025-11-25 08:43:35.091 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:43:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 5.3 KiB/s wr, 96 op/s
Nov 25 08:43:35 compute-0 nova_compute[253538]: 2025-11-25 08:43:35.570 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:35 compute-0 ceph-mon[75015]: osdmap e225: 3 total, 3 up, 3 in
Nov 25 08:43:36 compute-0 ceph-mon[75015]: pgmap v1856: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 5.3 KiB/s wr, 96 op/s
Nov 25 08:43:36 compute-0 nova_compute[253538]: 2025-11-25 08:43:36.822 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:43:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 4.3 KiB/s wr, 91 op/s
Nov 25 08:43:38 compute-0 nova_compute[253538]: 2025-11-25 08:43:38.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:43:38 compute-0 ceph-mon[75015]: pgmap v1857: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 4.3 KiB/s wr, 91 op/s
Nov 25 08:43:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 4.2 KiB/s wr, 91 op/s
Nov 25 08:43:40 compute-0 nova_compute[253538]: 2025-11-25 08:43:40.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Nov 25 08:43:40 compute-0 ceph-mon[75015]: pgmap v1858: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 4.2 KiB/s wr, 91 op/s
Nov 25 08:43:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Nov 25 08:43:40 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Nov 25 08:43:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:41.068 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:41.068 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:41.068 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:41 compute-0 nova_compute[253538]: 2025-11-25 08:43:41.217 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "a5dba47f-da80-465e-9659-1897b7d8b1dc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:41 compute-0 nova_compute[253538]: 2025-11-25 08:43:41.217 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:41 compute-0 nova_compute[253538]: 2025-11-25 08:43:41.231 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:43:41 compute-0 nova_compute[253538]: 2025-11-25 08:43:41.295 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:41 compute-0 nova_compute[253538]: 2025-11-25 08:43:41.295 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:41 compute-0 nova_compute[253538]: 2025-11-25 08:43:41.300 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:43:41 compute-0 nova_compute[253538]: 2025-11-25 08:43:41.300 253542 INFO nova.compute.claims [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:43:41 compute-0 nova_compute[253538]: 2025-11-25 08:43:41.411 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:43:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 4.1 KiB/s wr, 76 op/s
Nov 25 08:43:41 compute-0 ceph-mon[75015]: osdmap e226: 3 total, 3 up, 3 in
Nov 25 08:43:41 compute-0 nova_compute[253538]: 2025-11-25 08:43:41.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:43:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2670874538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:43:41 compute-0 nova_compute[253538]: 2025-11-25 08:43:41.907 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:43:41 compute-0 nova_compute[253538]: 2025-11-25 08:43:41.917 253542 DEBUG nova.compute.provider_tree [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:43:41 compute-0 nova_compute[253538]: 2025-11-25 08:43:41.933 253542 DEBUG nova.scheduler.client.report [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.047 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.048 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.097 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.097 253542 DEBUG nova.network.neutron [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.118 253542 INFO nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.143 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:43:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:43:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Nov 25 08:43:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Nov 25 08:43:42 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.277 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.278 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.279 253542 INFO nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Creating image(s)
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.307 253542 DEBUG nova.storage.rbd_utils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] rbd image a5dba47f-da80-465e-9659-1897b7d8b1dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.332 253542 DEBUG nova.storage.rbd_utils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] rbd image a5dba47f-da80-465e-9659-1897b7d8b1dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.361 253542 DEBUG nova.storage.rbd_utils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] rbd image a5dba47f-da80-465e-9659-1897b7d8b1dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.365 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.408 253542 DEBUG nova.policy [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3d8339b871b34cf5bbf797eb592ec74e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31e0445675494c73be5eb4d1a6ec9597', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.462 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.464 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.465 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.465 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.497 253542 DEBUG nova.storage.rbd_utils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] rbd image a5dba47f-da80-465e-9659-1897b7d8b1dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.501 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a5dba47f-da80-465e-9659-1897b7d8b1dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:43:42 compute-0 ceph-mon[75015]: pgmap v1860: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 4.1 KiB/s wr, 76 op/s
Nov 25 08:43:42 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2670874538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:43:42 compute-0 ceph-mon[75015]: osdmap e227: 3 total, 3 up, 3 in
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.865 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a5dba47f-da80-465e-9659-1897b7d8b1dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.916 253542 DEBUG nova.storage.rbd_utils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] resizing rbd image a5dba47f-da80-465e-9659-1897b7d8b1dc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:43:42 compute-0 nova_compute[253538]: 2025-11-25 08:43:42.999 253542 DEBUG nova.objects.instance [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lazy-loading 'migration_context' on Instance uuid a5dba47f-da80-465e-9659-1897b7d8b1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:43:43 compute-0 nova_compute[253538]: 2025-11-25 08:43:43.016 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:43:43 compute-0 nova_compute[253538]: 2025-11-25 08:43:43.016 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Ensure instance console log exists: /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:43:43 compute-0 nova_compute[253538]: 2025-11-25 08:43:43.017 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:43 compute-0 nova_compute[253538]: 2025-11-25 08:43:43.017 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:43 compute-0 nova_compute[253538]: 2025-11-25 08:43:43.017 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:43 compute-0 nova_compute[253538]: 2025-11-25 08:43:43.174 253542 DEBUG nova.network.neutron [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Successfully created port: 5de7e6a0-0b2c-4247-8314-ebb08913a220 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:43:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 96 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1005 KiB/s wr, 89 op/s
Nov 25 08:43:44 compute-0 ceph-mon[75015]: pgmap v1862: 321 pgs: 321 active+clean; 96 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1005 KiB/s wr, 89 op/s
Nov 25 08:43:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1863: 321 pgs: 321 active+clean; 132 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.9 MiB/s wr, 67 op/s
Nov 25 08:43:45 compute-0 nova_compute[253538]: 2025-11-25 08:43:45.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Nov 25 08:43:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Nov 25 08:43:45 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Nov 25 08:43:45 compute-0 nova_compute[253538]: 2025-11-25 08:43:45.957 253542 DEBUG nova.network.neutron [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Successfully updated port: 5de7e6a0-0b2c-4247-8314-ebb08913a220 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:43:46 compute-0 nova_compute[253538]: 2025-11-25 08:43:46.008 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "refresh_cache-a5dba47f-da80-465e-9659-1897b7d8b1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:43:46 compute-0 nova_compute[253538]: 2025-11-25 08:43:46.008 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquired lock "refresh_cache-a5dba47f-da80-465e-9659-1897b7d8b1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:43:46 compute-0 nova_compute[253538]: 2025-11-25 08:43:46.009 253542 DEBUG nova.network.neutron [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:43:46 compute-0 nova_compute[253538]: 2025-11-25 08:43:46.071 253542 DEBUG nova.compute.manager [req-f2042fb7-ac06-493d-a6ea-e4ceb028d411 req-942ffc40-6e2c-4ce3-86a0-5a7dd510c0c1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Received event network-changed-5de7e6a0-0b2c-4247-8314-ebb08913a220 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:43:46 compute-0 nova_compute[253538]: 2025-11-25 08:43:46.072 253542 DEBUG nova.compute.manager [req-f2042fb7-ac06-493d-a6ea-e4ceb028d411 req-942ffc40-6e2c-4ce3-86a0-5a7dd510c0c1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Refreshing instance network info cache due to event network-changed-5de7e6a0-0b2c-4247-8314-ebb08913a220. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:43:46 compute-0 nova_compute[253538]: 2025-11-25 08:43:46.072 253542 DEBUG oslo_concurrency.lockutils [req-f2042fb7-ac06-493d-a6ea-e4ceb028d411 req-942ffc40-6e2c-4ce3-86a0-5a7dd510c0c1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a5dba47f-da80-465e-9659-1897b7d8b1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:43:46 compute-0 nova_compute[253538]: 2025-11-25 08:43:46.177 253542 DEBUG nova.network.neutron [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:43:46 compute-0 ceph-mon[75015]: pgmap v1863: 321 pgs: 321 active+clean; 132 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.9 MiB/s wr, 67 op/s
Nov 25 08:43:46 compute-0 ceph-mon[75015]: osdmap e228: 3 total, 3 up, 3 in
Nov 25 08:43:46 compute-0 nova_compute[253538]: 2025-11-25 08:43:46.915 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.104 253542 DEBUG nova.network.neutron [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Updating instance_info_cache with network_info: [{"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.175 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Releasing lock "refresh_cache-a5dba47f-da80-465e-9659-1897b7d8b1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.175 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Instance network_info: |[{"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.176 253542 DEBUG oslo_concurrency.lockutils [req-f2042fb7-ac06-493d-a6ea-e4ceb028d411 req-942ffc40-6e2c-4ce3-86a0-5a7dd510c0c1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a5dba47f-da80-465e-9659-1897b7d8b1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.176 253542 DEBUG nova.network.neutron [req-f2042fb7-ac06-493d-a6ea-e4ceb028d411 req-942ffc40-6e2c-4ce3-86a0-5a7dd510c0c1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Refreshing network info cache for port 5de7e6a0-0b2c-4247-8314-ebb08913a220 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.182 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Start _get_guest_xml network_info=[{"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.189 253542 WARNING nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:43:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.197 253542 DEBUG nova.virt.libvirt.host [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.198 253542 DEBUG nova.virt.libvirt.host [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.204 253542 DEBUG nova.virt.libvirt.host [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.205 253542 DEBUG nova.virt.libvirt.host [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.206 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.206 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.207 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.207 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.208 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.208 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.208 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.209 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.209 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.210 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.210 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.211 253542 DEBUG nova.virt.hardware [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.215 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:43:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 182 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 10 MiB/s wr, 83 op/s
Nov 25 08:43:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:43:47 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2657472670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.742 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.777 253542 DEBUG nova.storage.rbd_utils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] rbd image a5dba47f-da80-465e-9659-1897b7d8b1dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:43:47 compute-0 nova_compute[253538]: 2025-11-25 08:43:47.782 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:43:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Nov 25 08:43:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Nov 25 08:43:47 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2657472670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:43:47 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Nov 25 08:43:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:43:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/367120127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.301 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.304 253542 DEBUG nova.virt.libvirt.vif [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:43:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1260419212',display_name='tempest-ServerAddressesTestJSON-server-1260419212',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1260419212',id=100,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31e0445675494c73be5eb4d1a6ec9597',ramdisk_id='',reservation_id='r-b20g17nb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1579214935',owner_user_name='tempest-ServerAddressesTestJSON-1579214935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:43:42Z,user_data=None,user_id='3d8339b871b34cf5bbf797eb592ec74e',uuid=a5dba47f-da80-465e-9659-1897b7d8b1dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.305 253542 DEBUG nova.network.os_vif_util [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Converting VIF {"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.306 253542 DEBUG nova.network.os_vif_util [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:61:a9,bridge_name='br-int',has_traffic_filtering=True,id=5de7e6a0-0b2c-4247-8314-ebb08913a220,network=Network(ff042a1d-571e-4cfe-b6e8-5931f017fb97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de7e6a0-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.308 253542 DEBUG nova.objects.instance [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5dba47f-da80-465e-9659-1897b7d8b1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.331 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:43:48 compute-0 nova_compute[253538]:   <uuid>a5dba47f-da80-465e-9659-1897b7d8b1dc</uuid>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   <name>instance-00000064</name>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerAddressesTestJSON-server-1260419212</nova:name>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:43:47</nova:creationTime>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:43:48 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:43:48 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:43:48 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:43:48 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:43:48 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:43:48 compute-0 nova_compute[253538]:         <nova:user uuid="3d8339b871b34cf5bbf797eb592ec74e">tempest-ServerAddressesTestJSON-1579214935-project-member</nova:user>
Nov 25 08:43:48 compute-0 nova_compute[253538]:         <nova:project uuid="31e0445675494c73be5eb4d1a6ec9597">tempest-ServerAddressesTestJSON-1579214935</nova:project>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:43:48 compute-0 nova_compute[253538]:         <nova:port uuid="5de7e6a0-0b2c-4247-8314-ebb08913a220">
Nov 25 08:43:48 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <system>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <entry name="serial">a5dba47f-da80-465e-9659-1897b7d8b1dc</entry>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <entry name="uuid">a5dba47f-da80-465e-9659-1897b7d8b1dc</entry>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     </system>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   <os>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   </os>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   <features>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   </features>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/a5dba47f-da80-465e-9659-1897b7d8b1dc_disk">
Nov 25 08:43:48 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       </source>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:43:48 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/a5dba47f-da80-465e-9659-1897b7d8b1dc_disk.config">
Nov 25 08:43:48 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       </source>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:43:48 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:68:61:a9"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <target dev="tap5de7e6a0-0b"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc/console.log" append="off"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <video>
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     </video>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:43:48 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:43:48 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:43:48 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:43:48 compute-0 nova_compute[253538]: </domain>
Nov 25 08:43:48 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.332 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Preparing to wait for external event network-vif-plugged-5de7e6a0-0b2c-4247-8314-ebb08913a220 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.333 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.333 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.334 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.334 253542 DEBUG nova.virt.libvirt.vif [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:43:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1260419212',display_name='tempest-ServerAddressesTestJSON-server-1260419212',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1260419212',id=100,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31e0445675494c73be5eb4d1a6ec9597',ramdisk_id='',reservation_id='r-b20g17nb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1579214935',owner_user_name='tempest-ServerAddressesTestJSON-1579214935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:43:42Z,user_data=None,user_id='3d8339b871b34cf5bbf797eb592ec74e',uuid=a5dba47f-da80-465e-9659-1897b7d8b1dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.335 253542 DEBUG nova.network.os_vif_util [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Converting VIF {"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.336 253542 DEBUG nova.network.os_vif_util [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:61:a9,bridge_name='br-int',has_traffic_filtering=True,id=5de7e6a0-0b2c-4247-8314-ebb08913a220,network=Network(ff042a1d-571e-4cfe-b6e8-5931f017fb97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de7e6a0-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.336 253542 DEBUG os_vif [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:61:a9,bridge_name='br-int',has_traffic_filtering=True,id=5de7e6a0-0b2c-4247-8314-ebb08913a220,network=Network(ff042a1d-571e-4cfe-b6e8-5931f017fb97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de7e6a0-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.337 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.338 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.339 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.342 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.343 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5de7e6a0-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.343 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5de7e6a0-0b, col_values=(('external_ids', {'iface-id': '5de7e6a0-0b2c-4247-8314-ebb08913a220', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:61:a9', 'vm-uuid': 'a5dba47f-da80-465e-9659-1897b7d8b1dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.345 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:48 compute-0 NetworkManager[48915]: <info>  [1764060228.3471] manager: (tap5de7e6a0-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.348 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.354 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.355 253542 INFO os_vif [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:61:a9,bridge_name='br-int',has_traffic_filtering=True,id=5de7e6a0-0b2c-4247-8314-ebb08913a220,network=Network(ff042a1d-571e-4cfe-b6e8-5931f017fb97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de7e6a0-0b')
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.400 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.400 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.401 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] No VIF found with MAC fa:16:3e:68:61:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.401 253542 INFO nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Using config drive
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.426 253542 DEBUG nova.storage.rbd_utils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] rbd image a5dba47f-da80-465e-9659-1897b7d8b1dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.619 253542 DEBUG nova.network.neutron [req-f2042fb7-ac06-493d-a6ea-e4ceb028d411 req-942ffc40-6e2c-4ce3-86a0-5a7dd510c0c1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Updated VIF entry in instance network info cache for port 5de7e6a0-0b2c-4247-8314-ebb08913a220. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.619 253542 DEBUG nova.network.neutron [req-f2042fb7-ac06-493d-a6ea-e4ceb028d411 req-942ffc40-6e2c-4ce3-86a0-5a7dd510c0c1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Updating instance_info_cache with network_info: [{"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.635 253542 DEBUG oslo_concurrency.lockutils [req-f2042fb7-ac06-493d-a6ea-e4ceb028d411 req-942ffc40-6e2c-4ce3-86a0-5a7dd510c0c1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a5dba47f-da80-465e-9659-1897b7d8b1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:43:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Nov 25 08:43:48 compute-0 ceph-mon[75015]: pgmap v1865: 321 pgs: 321 active+clean; 182 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 10 MiB/s wr, 83 op/s
Nov 25 08:43:48 compute-0 ceph-mon[75015]: osdmap e229: 3 total, 3 up, 3 in
Nov 25 08:43:48 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/367120127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:43:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Nov 25 08:43:48 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.894 253542 INFO nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Creating config drive at /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc/disk.config
Nov 25 08:43:48 compute-0 nova_compute[253538]: 2025-11-25 08:43:48.904 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp80pndh_r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:43:49 compute-0 nova_compute[253538]: 2025-11-25 08:43:49.074 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp80pndh_r" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:43:49 compute-0 nova_compute[253538]: 2025-11-25 08:43:49.118 253542 DEBUG nova.storage.rbd_utils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] rbd image a5dba47f-da80-465e-9659-1897b7d8b1dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:43:49 compute-0 nova_compute[253538]: 2025-11-25 08:43:49.125 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc/disk.config a5dba47f-da80-465e-9659-1897b7d8b1dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:43:49 compute-0 sudo[350984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:43:49 compute-0 sudo[350984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:49 compute-0 sudo[350984]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:49 compute-0 sudo[351017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:43:49 compute-0 sudo[351017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:49 compute-0 sudo[351017]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:49 compute-0 sudo[351050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:43:49 compute-0 sudo[351050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:49 compute-0 sudo[351050]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:49 compute-0 sudo[351078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:43:49 compute-0 sudo[351078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 182 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 18 MiB/s wr, 96 op/s
Nov 25 08:43:49 compute-0 nova_compute[253538]: 2025-11-25 08:43:49.521 253542 DEBUG oslo_concurrency.processutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc/disk.config a5dba47f-da80-465e-9659-1897b7d8b1dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:43:49 compute-0 nova_compute[253538]: 2025-11-25 08:43:49.522 253542 INFO nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Deleting local config drive /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc/disk.config because it was imported into RBD.
Nov 25 08:43:49 compute-0 kernel: tap5de7e6a0-0b: entered promiscuous mode
Nov 25 08:43:49 compute-0 NetworkManager[48915]: <info>  [1764060229.6008] manager: (tap5de7e6a0-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/393)
Nov 25 08:43:49 compute-0 ovn_controller[152859]: 2025-11-25T08:43:49Z|00965|binding|INFO|Claiming lport 5de7e6a0-0b2c-4247-8314-ebb08913a220 for this chassis.
Nov 25 08:43:49 compute-0 ovn_controller[152859]: 2025-11-25T08:43:49Z|00966|binding|INFO|5de7e6a0-0b2c-4247-8314-ebb08913a220: Claiming fa:16:3e:68:61:a9 10.100.0.12
Nov 25 08:43:49 compute-0 nova_compute[253538]: 2025-11-25 08:43:49.601 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:49 compute-0 nova_compute[253538]: 2025-11-25 08:43:49.605 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:49 compute-0 nova_compute[253538]: 2025-11-25 08:43:49.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.616 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:61:a9 10.100.0.12'], port_security=['fa:16:3e:68:61:a9 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a5dba47f-da80-465e-9659-1897b7d8b1dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff042a1d-571e-4cfe-b6e8-5931f017fb97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31e0445675494c73be5eb4d1a6ec9597', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c438a4c1-c66c-4de5-aec1-723426528db4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d691f172-35d6-40c9-b66c-3f38462fa73a, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5de7e6a0-0b2c-4247-8314-ebb08913a220) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.618 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5de7e6a0-0b2c-4247-8314-ebb08913a220 in datapath ff042a1d-571e-4cfe-b6e8-5931f017fb97 bound to our chassis
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.619 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ff042a1d-571e-4cfe-b6e8-5931f017fb97
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.633 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6acb2e5c-1845-466f-b77f-8a778546745c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.634 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapff042a1d-51 in ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.636 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapff042a1d-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.637 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0e045c9b-f7ab-4167-9de9-724614a00cf2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.638 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7b2763b7-5599-425f-ac4f-352709816fef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:49 compute-0 systemd-machined[215790]: New machine qemu-123-instance-00000064.
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.652 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d704204f-2c73-4f10-b369-1dbd19ef35df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:49 compute-0 systemd[1]: Started Virtual Machine qemu-123-instance-00000064.
Nov 25 08:43:49 compute-0 nova_compute[253538]: 2025-11-25 08:43:49.678 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.678 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a15b030e-e492-401e-b57a-85b9db973eaa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:49 compute-0 ovn_controller[152859]: 2025-11-25T08:43:49Z|00967|binding|INFO|Setting lport 5de7e6a0-0b2c-4247-8314-ebb08913a220 ovn-installed in OVS
Nov 25 08:43:49 compute-0 ovn_controller[152859]: 2025-11-25T08:43:49Z|00968|binding|INFO|Setting lport 5de7e6a0-0b2c-4247-8314-ebb08913a220 up in Southbound
Nov 25 08:43:49 compute-0 nova_compute[253538]: 2025-11-25 08:43:49.682 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:49 compute-0 systemd-udevd[351132]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:43:49 compute-0 NetworkManager[48915]: <info>  [1764060229.7015] device (tap5de7e6a0-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:43:49 compute-0 NetworkManager[48915]: <info>  [1764060229.7029] device (tap5de7e6a0-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.718 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fa853ec2-382b-4a57-a9c2-6fc55ac70d45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.724 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[58d47d1b-0e17-4275-a7ca-5ec8689ae637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:49 compute-0 NetworkManager[48915]: <info>  [1764060229.7267] manager: (tapff042a1d-50): new Veth device (/org/freedesktop/NetworkManager/Devices/394)
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.762 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[00014d28-37a5-4315-9a9c-c23d91f35cac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.765 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3b92cdde-8ff0-4058-a576-a978f99115b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:49 compute-0 NetworkManager[48915]: <info>  [1764060229.7936] device (tapff042a1d-50): carrier: link connected
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.799 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[312a5206-7700-4fdf-9340-21f0ed8a2f0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.823 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b285309f-435b-4d3b-b9c9-6587bccdc086]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff042a1d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:19:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558439, 'reachable_time': 36076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351165, 'error': None, 'target': 'ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.848 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1effee9-e4c5-4573-babe-999673e20253]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:195b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558439, 'tstamp': 558439}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351168, 'error': None, 'target': 'ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.871 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8ffc2d2e-3729-4d8f-93ef-7eda847b5a3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff042a1d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:19:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558439, 'reachable_time': 36076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351169, 'error': None, 'target': 'ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:49 compute-0 ceph-mon[75015]: osdmap e230: 3 total, 3 up, 3 in
Nov 25 08:43:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:49.924 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa5d4dd-f93c-4b8e-9f86-723061fabb71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:49 compute-0 sudo[351078]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:43:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:43:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:43:49 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:43:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.005 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[060524da-a9c2-4908-86a6-f30a1370b131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.007 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff042a1d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.008 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.009 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff042a1d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:50 compute-0 NetworkManager[48915]: <info>  [1764060230.0122] manager: (tapff042a1d-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/395)
Nov 25 08:43:50 compute-0 kernel: tapff042a1d-50: entered promiscuous mode
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.013 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.017 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapff042a1d-50, col_values=(('external_ids', {'iface-id': 'de55f523-30c0-4da1-b06c-16fb81cd9b07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.018 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:50 compute-0 ovn_controller[152859]: 2025-11-25T08:43:50Z|00969|binding|INFO|Releasing lport de55f523-30c0-4da1-b06c-16fb81cd9b07 from this chassis (sb_readonly=0)
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.019 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.020 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ff042a1d-571e-4cfe-b6e8-5931f017fb97.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ff042a1d-571e-4cfe-b6e8-5931f017fb97.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.021 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd50ef5-e1a2-480d-873b-793f7ee2159a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.021 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-ff042a1d-571e-4cfe-b6e8-5931f017fb97
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/ff042a1d-571e-4cfe-b6e8-5931f017fb97.pid.haproxy
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID ff042a1d-571e-4cfe-b6e8-5931f017fb97
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:43:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:50.022 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97', 'env', 'PROCESS_TAG=haproxy-ff042a1d-571e-4cfe-b6e8-5931f017fb97', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ff042a1d-571e-4cfe-b6e8-5931f017fb97.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.032 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:50 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:43:50 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e5e5a70e-114a-4cd6-8e1a-45d5c8c831c4 does not exist
Nov 25 08:43:50 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 5c51b251-b367-440f-974f-3a898733076d does not exist
Nov 25 08:43:50 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 2b5a232c-d973-4422-8447-5272c210ff61 does not exist
Nov 25 08:43:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:43:50 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:43:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:43:50 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:43:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:43:50 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.065 253542 DEBUG nova.compute.manager [req-42d1cf6f-e8dd-4b86-a869-6c1bb3e281b5 req-ab4a04c6-a3db-424e-840d-f0ca9031341c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Received event network-vif-plugged-5de7e6a0-0b2c-4247-8314-ebb08913a220 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.065 253542 DEBUG oslo_concurrency.lockutils [req-42d1cf6f-e8dd-4b86-a869-6c1bb3e281b5 req-ab4a04c6-a3db-424e-840d-f0ca9031341c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.066 253542 DEBUG oslo_concurrency.lockutils [req-42d1cf6f-e8dd-4b86-a869-6c1bb3e281b5 req-ab4a04c6-a3db-424e-840d-f0ca9031341c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.066 253542 DEBUG oslo_concurrency.lockutils [req-42d1cf6f-e8dd-4b86-a869-6c1bb3e281b5 req-ab4a04c6-a3db-424e-840d-f0ca9031341c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.066 253542 DEBUG nova.compute.manager [req-42d1cf6f-e8dd-4b86-a869-6c1bb3e281b5 req-ab4a04c6-a3db-424e-840d-f0ca9031341c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Processing event network-vif-plugged-5de7e6a0-0b2c-4247-8314-ebb08913a220 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:43:50 compute-0 sudo[351224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:43:50 compute-0 sudo[351224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:50 compute-0 sudo[351224]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:50 compute-0 sudo[351257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:43:50 compute-0 sudo[351257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:50 compute-0 sudo[351257]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.246 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.248 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060230.2460926, a5dba47f-da80-465e-9659-1897b7d8b1dc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.250 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] VM Started (Lifecycle Event)
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.259 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.263 253542 INFO nova.virt.libvirt.driver [-] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Instance spawned successfully.
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.264 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.272 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.278 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.291 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.292 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.293 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.294 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.295 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.296 253542 DEBUG nova.virt.libvirt.driver [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.303 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.304 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060230.2476506, a5dba47f-da80-465e-9659-1897b7d8b1dc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.305 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] VM Paused (Lifecycle Event)
Nov 25 08:43:50 compute-0 podman[351282]: 2025-11-25 08:43:50.314735834 +0000 UTC m=+0.065694802 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:43:50 compute-0 sudo[351285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:43:50 compute-0 sudo[351285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:50 compute-0 sudo[351285]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.323 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.329 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060230.253739, a5dba47f-da80-465e-9659-1897b7d8b1dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.329 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] VM Resumed (Lifecycle Event)
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.341 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.345 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.358 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:43:50 compute-0 sudo[351341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:43:50 compute-0 sudo[351341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:50 compute-0 podman[351352]: 2025-11-25 08:43:50.417061852 +0000 UTC m=+0.053120988 container create 4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:43:50 compute-0 systemd[1]: Started libpod-conmon-4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16.scope.
Nov 25 08:43:50 compute-0 podman[351352]: 2025-11-25 08:43:50.387953329 +0000 UTC m=+0.024012475 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:43:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25bac1800189459c5bbd21b0537bc43218693610d30ca13e122f4e702894dc9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.495 253542 INFO nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Took 8.22 seconds to spawn the instance on the hypervisor.
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.496 253542 DEBUG nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:43:50 compute-0 podman[351352]: 2025-11-25 08:43:50.509873181 +0000 UTC m=+0.145932397 container init 4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 08:43:50 compute-0 podman[351352]: 2025-11-25 08:43:50.521013085 +0000 UTC m=+0.157072241 container start 4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 08:43:50 compute-0 neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97[351386]: [NOTICE]   (351397) : New worker (351403) forked
Nov 25 08:43:50 compute-0 neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97[351386]: [NOTICE]   (351397) : Loading success.
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.572 253542 INFO nova.compute.manager [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Took 9.30 seconds to build instance.
Nov 25 08:43:50 compute-0 nova_compute[253538]: 2025-11-25 08:43:50.588 253542 DEBUG oslo_concurrency.lockutils [None req-c32a9ba2-ab75-4948-b380-addcdd4656a2 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:50 compute-0 podman[351440]: 2025-11-25 08:43:50.748564316 +0000 UTC m=+0.052501071 container create 75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:43:50 compute-0 systemd[1]: Started libpod-conmon-75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c.scope.
Nov 25 08:43:50 compute-0 podman[351440]: 2025-11-25 08:43:50.724124391 +0000 UTC m=+0.028061166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:43:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:43:50 compute-0 podman[351440]: 2025-11-25 08:43:50.866621534 +0000 UTC m=+0.170558309 container init 75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:43:50 compute-0 podman[351440]: 2025-11-25 08:43:50.876584445 +0000 UTC m=+0.180521200 container start 75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_matsumoto, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 08:43:50 compute-0 podman[351440]: 2025-11-25 08:43:50.881417687 +0000 UTC m=+0.185354442 container attach 75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:43:50 compute-0 systemd[1]: libpod-75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c.scope: Deactivated successfully.
Nov 25 08:43:50 compute-0 strange_matsumoto[351456]: 167 167
Nov 25 08:43:50 compute-0 conmon[351456]: conmon 75c41a190a6cb71868c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c.scope/container/memory.events
Nov 25 08:43:50 compute-0 podman[351440]: 2025-11-25 08:43:50.885125378 +0000 UTC m=+0.189062123 container died 75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_matsumoto, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:43:50 compute-0 ceph-mon[75015]: pgmap v1868: 321 pgs: 321 active+clean; 182 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 18 MiB/s wr, 96 op/s
Nov 25 08:43:50 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:43:50 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:43:50 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:43:50 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:43:50 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:43:50 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1fca7a212772035d5792b863d52dbeb5a61cec29600586aea766c8dc9b2e325-merged.mount: Deactivated successfully.
Nov 25 08:43:50 compute-0 podman[351440]: 2025-11-25 08:43:50.929138127 +0000 UTC m=+0.233074852 container remove 75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_matsumoto, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:43:50 compute-0 systemd[1]: libpod-conmon-75c41a190a6cb71868c2fe90205c3c1d2deaf4e6e15dfa3d4fce4b7cc16b009c.scope: Deactivated successfully.
Nov 25 08:43:51 compute-0 podman[351482]: 2025-11-25 08:43:51.092888169 +0000 UTC m=+0.042734425 container create 8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:43:51 compute-0 systemd[1]: Started libpod-conmon-8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846.scope.
Nov 25 08:43:51 compute-0 podman[351482]: 2025-11-25 08:43:51.073956234 +0000 UTC m=+0.023802480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:43:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcbc420c1146610adc4c348bab9816e7bc2b9687508ee3dec75985ebf7b1b44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcbc420c1146610adc4c348bab9816e7bc2b9687508ee3dec75985ebf7b1b44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcbc420c1146610adc4c348bab9816e7bc2b9687508ee3dec75985ebf7b1b44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcbc420c1146610adc4c348bab9816e7bc2b9687508ee3dec75985ebf7b1b44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcbc420c1146610adc4c348bab9816e7bc2b9687508ee3dec75985ebf7b1b44/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:43:51 compute-0 podman[351482]: 2025-11-25 08:43:51.197859461 +0000 UTC m=+0.147705717 container init 8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:43:51 compute-0 podman[351482]: 2025-11-25 08:43:51.214045362 +0000 UTC m=+0.163891648 container start 8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:43:51 compute-0 podman[351482]: 2025-11-25 08:43:51.219084458 +0000 UTC m=+0.168930734 container attach 8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:43:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 321 active+clean; 166 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 17 MiB/s wr, 108 op/s
Nov 25 08:43:51 compute-0 nova_compute[253538]: 2025-11-25 08:43:51.948 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:43:52 compute-0 nova_compute[253538]: 2025-11-25 08:43:52.217 253542 DEBUG nova.compute.manager [req-5345a655-4649-444f-a605-ae65f97db990 req-5d6fc922-49f2-4fd3-be70-a249e7761839 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Received event network-vif-plugged-5de7e6a0-0b2c-4247-8314-ebb08913a220 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:43:52 compute-0 nova_compute[253538]: 2025-11-25 08:43:52.218 253542 DEBUG oslo_concurrency.lockutils [req-5345a655-4649-444f-a605-ae65f97db990 req-5d6fc922-49f2-4fd3-be70-a249e7761839 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:52 compute-0 nova_compute[253538]: 2025-11-25 08:43:52.218 253542 DEBUG oslo_concurrency.lockutils [req-5345a655-4649-444f-a605-ae65f97db990 req-5d6fc922-49f2-4fd3-be70-a249e7761839 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:52 compute-0 nova_compute[253538]: 2025-11-25 08:43:52.220 253542 DEBUG oslo_concurrency.lockutils [req-5345a655-4649-444f-a605-ae65f97db990 req-5d6fc922-49f2-4fd3-be70-a249e7761839 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:52 compute-0 nova_compute[253538]: 2025-11-25 08:43:52.221 253542 DEBUG nova.compute.manager [req-5345a655-4649-444f-a605-ae65f97db990 req-5d6fc922-49f2-4fd3-be70-a249e7761839 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] No waiting events found dispatching network-vif-plugged-5de7e6a0-0b2c-4247-8314-ebb08913a220 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:43:52 compute-0 nova_compute[253538]: 2025-11-25 08:43:52.222 253542 WARNING nova.compute.manager [req-5345a655-4649-444f-a605-ae65f97db990 req-5d6fc922-49f2-4fd3-be70-a249e7761839 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Received unexpected event network-vif-plugged-5de7e6a0-0b2c-4247-8314-ebb08913a220 for instance with vm_state active and task_state None.
Nov 25 08:43:52 compute-0 agitated_faraday[351499]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:43:52 compute-0 agitated_faraday[351499]: --> relative data size: 1.0
Nov 25 08:43:52 compute-0 agitated_faraday[351499]: --> All data devices are unavailable
Nov 25 08:43:52 compute-0 systemd[1]: libpod-8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846.scope: Deactivated successfully.
Nov 25 08:43:52 compute-0 systemd[1]: libpod-8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846.scope: Consumed 1.005s CPU time.
Nov 25 08:43:52 compute-0 podman[351482]: 2025-11-25 08:43:52.296236973 +0000 UTC m=+1.246083239 container died 8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 08:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfcbc420c1146610adc4c348bab9816e7bc2b9687508ee3dec75985ebf7b1b44-merged.mount: Deactivated successfully.
Nov 25 08:43:52 compute-0 podman[351482]: 2025-11-25 08:43:52.466813572 +0000 UTC m=+1.416659838 container remove 8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:43:52 compute-0 systemd[1]: libpod-conmon-8d8a101f84de1baf141e7bc27c1c40abf6ce2712eba6593057dedf697a2d8846.scope: Deactivated successfully.
Nov 25 08:43:52 compute-0 sudo[351341]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:52 compute-0 sudo[351542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:43:52 compute-0 sudo[351542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:52 compute-0 sudo[351542]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:52 compute-0 sudo[351567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:43:52 compute-0 sudo[351567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:52 compute-0 sudo[351567]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:52 compute-0 sudo[351592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:43:52 compute-0 sudo[351592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:52 compute-0 sudo[351592]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:52 compute-0 sudo[351617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:43:52 compute-0 sudo[351617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:52 compute-0 ceph-mon[75015]: pgmap v1869: 321 pgs: 321 active+clean; 166 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 17 MiB/s wr, 108 op/s
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:43:53
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'backups', 'volumes', '.rgw.root', 'images']
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:43:53 compute-0 podman[351682]: 2025-11-25 08:43:53.303636756 +0000 UTC m=+0.084336580 container create bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.346 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:53 compute-0 systemd[1]: Started libpod-conmon-bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff.scope.
Nov 25 08:43:53 compute-0 podman[351682]: 2025-11-25 08:43:53.276091536 +0000 UTC m=+0.056791410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:43:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.393 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "a5dba47f-da80-465e-9659-1897b7d8b1dc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.394 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.395 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.395 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.395 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.396 253542 INFO nova.compute.manager [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Terminating instance
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.397 253542 DEBUG nova.compute.manager [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:43:53 compute-0 podman[351682]: 2025-11-25 08:43:53.415479504 +0000 UTC m=+0.196179318 container init bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_torvalds, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:43:53 compute-0 podman[351682]: 2025-11-25 08:43:53.428411167 +0000 UTC m=+0.209110951 container start bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 08:43:53 compute-0 kernel: tap5de7e6a0-0b (unregistering): left promiscuous mode
Nov 25 08:43:53 compute-0 podman[351682]: 2025-11-25 08:43:53.433268639 +0000 UTC m=+0.213968423 container attach bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 25 08:43:53 compute-0 compassionate_torvalds[351699]: 167 167
Nov 25 08:43:53 compute-0 NetworkManager[48915]: <info>  [1764060233.4368] device (tap5de7e6a0-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:43:53 compute-0 podman[351682]: 2025-11-25 08:43:53.439205371 +0000 UTC m=+0.219905165 container died bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_torvalds, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 08:43:53 compute-0 systemd[1]: libpod-bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff.scope: Deactivated successfully.
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.450 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:53 compute-0 ovn_controller[152859]: 2025-11-25T08:43:53Z|00970|binding|INFO|Releasing lport 5de7e6a0-0b2c-4247-8314-ebb08913a220 from this chassis (sb_readonly=0)
Nov 25 08:43:53 compute-0 ovn_controller[152859]: 2025-11-25T08:43:53Z|00971|binding|INFO|Setting lport 5de7e6a0-0b2c-4247-8314-ebb08913a220 down in Southbound
Nov 25 08:43:53 compute-0 ovn_controller[152859]: 2025-11-25T08:43:53Z|00972|binding|INFO|Removing iface tap5de7e6a0-0b ovn-installed in OVS
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.453 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:53.462 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:61:a9 10.100.0.12'], port_security=['fa:16:3e:68:61:a9 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a5dba47f-da80-465e-9659-1897b7d8b1dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff042a1d-571e-4cfe-b6e8-5931f017fb97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31e0445675494c73be5eb4d1a6ec9597', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c438a4c1-c66c-4de5-aec1-723426528db4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d691f172-35d6-40c9-b66c-3f38462fa73a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5de7e6a0-0b2c-4247-8314-ebb08913a220) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:43:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:53.464 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5de7e6a0-0b2c-4247-8314-ebb08913a220 in datapath ff042a1d-571e-4cfe-b6e8-5931f017fb97 unbound from our chassis
Nov 25 08:43:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:53.466 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ff042a1d-571e-4cfe-b6e8-5931f017fb97, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:43:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:53.469 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c4cd5c-d8a9-482b-8b42-dd249395e813]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:53.472 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97 namespace which is not needed anymore
Nov 25 08:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-592d93dc4749e2c5202be24bdd51d8b899a1fbdb34fa48c053075301d7347642-merged.mount: Deactivated successfully.
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.482 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:53 compute-0 podman[351682]: 2025-11-25 08:43:53.491969529 +0000 UTC m=+0.272669333 container remove bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:43:53 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Deactivated successfully.
Nov 25 08:43:53 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Consumed 3.626s CPU time.
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 13 MiB/s wr, 184 op/s
Nov 25 08:43:53 compute-0 systemd-machined[215790]: Machine qemu-123-instance-00000064 terminated.
Nov 25 08:43:53 compute-0 systemd[1]: libpod-conmon-bb58c7f3005b568581db992baa824589fd2ff42b70fbb24cc2f24c36b5d82cff.scope: Deactivated successfully.
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.634 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.644 253542 INFO nova.virt.libvirt.driver [-] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Instance destroyed successfully.
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.644 253542 DEBUG nova.objects.instance [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lazy-loading 'resources' on Instance uuid a5dba47f-da80-465e-9659-1897b7d8b1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:43:53 compute-0 neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97[351386]: [NOTICE]   (351397) : haproxy version is 2.8.14-c23fe91
Nov 25 08:43:53 compute-0 neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97[351386]: [NOTICE]   (351397) : path to executable is /usr/sbin/haproxy
Nov 25 08:43:53 compute-0 neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97[351386]: [WARNING]  (351397) : Exiting Master process...
Nov 25 08:43:53 compute-0 neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97[351386]: [ALERT]    (351397) : Current worker (351403) exited with code 143 (Terminated)
Nov 25 08:43:53 compute-0 neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97[351386]: [WARNING]  (351397) : All workers exited. Exiting... (0)
Nov 25 08:43:53 compute-0 systemd[1]: libpod-4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16.scope: Deactivated successfully.
Nov 25 08:43:53 compute-0 conmon[351386]: conmon 4a41c7c055871c76bd92 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16.scope/container/memory.events
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.658 253542 DEBUG nova.virt.libvirt.vif [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:43:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1260419212',display_name='tempest-ServerAddressesTestJSON-server-1260419212',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1260419212',id=100,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:43:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31e0445675494c73be5eb4d1a6ec9597',ramdisk_id='',reservation_id='r-b20g17nb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1579214935',owner_user_name='tempest-ServerAddressesTestJSON-1579214935-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:43:50Z,user_data=None,user_id='3d8339b871b34cf5bbf797eb592ec74e',uuid=a5dba47f-da80-465e-9659-1897b7d8b1dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.658 253542 DEBUG nova.network.os_vif_util [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Converting VIF {"id": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "address": "fa:16:3e:68:61:a9", "network": {"id": "ff042a1d-571e-4cfe-b6e8-5931f017fb97", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2146982642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e0445675494c73be5eb4d1a6ec9597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de7e6a0-0b", "ovs_interfaceid": "5de7e6a0-0b2c-4247-8314-ebb08913a220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:43:53 compute-0 podman[351743]: 2025-11-25 08:43:53.660991195 +0000 UTC m=+0.093583591 container died 4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.660 253542 DEBUG nova.network.os_vif_util [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:61:a9,bridge_name='br-int',has_traffic_filtering=True,id=5de7e6a0-0b2c-4247-8314-ebb08913a220,network=Network(ff042a1d-571e-4cfe-b6e8-5931f017fb97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de7e6a0-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.663 253542 DEBUG os_vif [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:61:a9,bridge_name='br-int',has_traffic_filtering=True,id=5de7e6a0-0b2c-4247-8314-ebb08913a220,network=Network(ff042a1d-571e-4cfe-b6e8-5931f017fb97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de7e6a0-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.665 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.666 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5de7e6a0-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.670 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:43:53 compute-0 nova_compute[253538]: 2025-11-25 08:43:53.673 253542 INFO os_vif [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:61:a9,bridge_name='br-int',has_traffic_filtering=True,id=5de7e6a0-0b2c-4247-8314-ebb08913a220,network=Network(ff042a1d-571e-4cfe-b6e8-5931f017fb97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de7e6a0-0b')
Nov 25 08:43:53 compute-0 podman[351762]: 2025-11-25 08:43:53.656605825 +0000 UTC m=+0.023719197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:43:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16-userdata-shm.mount: Deactivated successfully.
Nov 25 08:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-25bac1800189459c5bbd21b0537bc43218693610d30ca13e122f4e702894dc9e-merged.mount: Deactivated successfully.
Nov 25 08:43:54 compute-0 podman[351762]: 2025-11-25 08:43:54.045605836 +0000 UTC m=+0.412719228 container create ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 08:43:54 compute-0 podman[351743]: 2025-11-25 08:43:54.050764907 +0000 UTC m=+0.483357333 container cleanup 4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 08:43:54 compute-0 systemd[1]: libpod-conmon-4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16.scope: Deactivated successfully.
Nov 25 08:43:54 compute-0 systemd[1]: Started libpod-conmon-ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71.scope.
Nov 25 08:43:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b277a5b8cad874e155653a18414fb1cf9b9335df84dcf2df73f8bf3711af37f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b277a5b8cad874e155653a18414fb1cf9b9335df84dcf2df73f8bf3711af37f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b277a5b8cad874e155653a18414fb1cf9b9335df84dcf2df73f8bf3711af37f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b277a5b8cad874e155653a18414fb1cf9b9335df84dcf2df73f8bf3711af37f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:43:54 compute-0 podman[351762]: 2025-11-25 08:43:54.181101959 +0000 UTC m=+0.548215411 container init ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:43:54 compute-0 podman[351817]: 2025-11-25 08:43:54.188850939 +0000 UTC m=+0.086455226 container remove 4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:43:54 compute-0 podman[351762]: 2025-11-25 08:43:54.196486348 +0000 UTC m=+0.563599720 container start ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 08:43:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.196 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9aefc608-76cc-4acb-aad8-a7c15d33de3f]: (4, ('Tue Nov 25 08:43:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97 (4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16)\n4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16\nTue Nov 25 08:43:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97 (4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16)\n4a41c7c055871c76bd9207cca29fe04716cffc16bba1cdedf7fb9e7d00ec1f16\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.199 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f020c1-99eb-453b-99b2-719709d92a54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.200 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff042a1d-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:43:54 compute-0 podman[351762]: 2025-11-25 08:43:54.201047062 +0000 UTC m=+0.568160454 container attach ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:43:54 compute-0 nova_compute[253538]: 2025-11-25 08:43:54.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:54 compute-0 kernel: tapff042a1d-50: left promiscuous mode
Nov 25 08:43:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.211 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a31dfd-3e35-48c8-a401-d3cabfddb09b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:54 compute-0 nova_compute[253538]: 2025-11-25 08:43:54.226 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.228 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[28bb16a8-f921-4818-99eb-175320205eb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.229 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7a986f49-934d-4653-b18f-fe9053120e03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.251 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e8303cc0-d461-4bf3-8714-b0c2fea72882]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558430, 'reachable_time': 18010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351839, 'error': None, 'target': 'ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:54 compute-0 systemd[1]: run-netns-ovnmeta\x2dff042a1d\x2d571e\x2d4cfe\x2db6e8\x2d5931f017fb97.mount: Deactivated successfully.
Nov 25 08:43:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.256 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ff042a1d-571e-4cfe-b6e8-5931f017fb97 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:43:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:43:54.256 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3c7606-6658-4648-88a4-65cc081a3bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:43:54 compute-0 nova_compute[253538]: 2025-11-25 08:43:54.459 253542 INFO nova.virt.libvirt.driver [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Deleting instance files /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc_del
Nov 25 08:43:54 compute-0 nova_compute[253538]: 2025-11-25 08:43:54.460 253542 INFO nova.virt.libvirt.driver [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Deletion of /var/lib/nova/instances/a5dba47f-da80-465e-9659-1897b7d8b1dc_del complete
Nov 25 08:43:54 compute-0 nova_compute[253538]: 2025-11-25 08:43:54.512 253542 INFO nova.compute.manager [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Took 1.11 seconds to destroy the instance on the hypervisor.
Nov 25 08:43:54 compute-0 nova_compute[253538]: 2025-11-25 08:43:54.513 253542 DEBUG oslo.service.loopingcall [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:43:54 compute-0 nova_compute[253538]: 2025-11-25 08:43:54.513 253542 DEBUG nova.compute.manager [-] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:43:54 compute-0 nova_compute[253538]: 2025-11-25 08:43:54.514 253542 DEBUG nova.network.neutron [-] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]: {
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:     "0": [
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:         {
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "devices": [
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "/dev/loop3"
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             ],
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "lv_name": "ceph_lv0",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "lv_size": "21470642176",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "name": "ceph_lv0",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "tags": {
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.cluster_name": "ceph",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.crush_device_class": "",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.encrypted": "0",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.osd_id": "0",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.type": "block",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.vdo": "0"
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             },
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "type": "block",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "vg_name": "ceph_vg0"
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:         }
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:     ],
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:     "1": [
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:         {
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "devices": [
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "/dev/loop4"
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             ],
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "lv_name": "ceph_lv1",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "lv_size": "21470642176",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "name": "ceph_lv1",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "tags": {
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.cluster_name": "ceph",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.crush_device_class": "",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.encrypted": "0",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.osd_id": "1",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.type": "block",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.vdo": "0"
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             },
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "type": "block",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "vg_name": "ceph_vg1"
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:         }
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:     ],
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:     "2": [
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:         {
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "devices": [
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "/dev/loop5"
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             ],
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "lv_name": "ceph_lv2",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "lv_size": "21470642176",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "name": "ceph_lv2",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "tags": {
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.cluster_name": "ceph",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.crush_device_class": "",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.encrypted": "0",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.osd_id": "2",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.type": "block",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:                 "ceph.vdo": "0"
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             },
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "type": "block",
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:             "vg_name": "ceph_vg2"
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:         }
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]:     ]
Nov 25 08:43:54 compute-0 quirky_mahavira[351825]: }
Nov 25 08:43:54 compute-0 systemd[1]: libpod-ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71.scope: Deactivated successfully.
Nov 25 08:43:54 compute-0 podman[351762]: 2025-11-25 08:43:54.97448539 +0000 UTC m=+1.341598762 container died ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 08:43:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b277a5b8cad874e155653a18414fb1cf9b9335df84dcf2df73f8bf3711af37f-merged.mount: Deactivated successfully.
Nov 25 08:43:55 compute-0 ceph-mon[75015]: pgmap v1870: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 13 MiB/s wr, 184 op/s
Nov 25 08:43:55 compute-0 podman[351762]: 2025-11-25 08:43:55.132603558 +0000 UTC m=+1.499716920 container remove ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:43:55 compute-0 systemd[1]: libpod-conmon-ac5273ef9ae56c3344bd833dc6585094f7c4a715f7167625208fb000ac992d71.scope: Deactivated successfully.
Nov 25 08:43:55 compute-0 sudo[351617]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:55 compute-0 sudo[351855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:43:55 compute-0 sudo[351855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:55 compute-0 sudo[351855]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:55 compute-0 sudo[351880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:43:55 compute-0 sudo[351880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:55 compute-0 sudo[351880]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:55 compute-0 sudo[351905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:43:55 compute-0 sudo[351905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:55 compute-0 sudo[351905]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 114 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 8.0 MiB/s wr, 223 op/s
Nov 25 08:43:55 compute-0 sudo[351930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:43:55 compute-0 sudo[351930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:55 compute-0 nova_compute[253538]: 2025-11-25 08:43:55.503 253542 DEBUG nova.network.neutron [-] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:43:55 compute-0 nova_compute[253538]: 2025-11-25 08:43:55.519 253542 INFO nova.compute.manager [-] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Took 1.01 seconds to deallocate network for instance.
Nov 25 08:43:55 compute-0 nova_compute[253538]: 2025-11-25 08:43:55.566 253542 DEBUG nova.compute.manager [req-88607c3d-0aa9-4da4-96ac-3b578a7935c5 req-ad026df2-39bf-4768-9db6-af0ca59f31ea b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Received event network-vif-deleted-5de7e6a0-0b2c-4247-8314-ebb08913a220 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:43:55 compute-0 nova_compute[253538]: 2025-11-25 08:43:55.569 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:43:55 compute-0 nova_compute[253538]: 2025-11-25 08:43:55.570 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:43:55 compute-0 nova_compute[253538]: 2025-11-25 08:43:55.639 253542 DEBUG oslo_concurrency.processutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:43:55 compute-0 podman[352006]: 2025-11-25 08:43:55.834906928 +0000 UTC m=+0.047970319 container create 742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_volhard, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 25 08:43:55 compute-0 systemd[1]: Started libpod-conmon-742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7.scope.
Nov 25 08:43:55 compute-0 podman[352006]: 2025-11-25 08:43:55.81260165 +0000 UTC m=+0.025665071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:43:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:43:55 compute-0 podman[352006]: 2025-11-25 08:43:55.953009836 +0000 UTC m=+0.166073237 container init 742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_volhard, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 08:43:55 compute-0 podman[352006]: 2025-11-25 08:43:55.960416938 +0000 UTC m=+0.173480349 container start 742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_volhard, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:43:55 compute-0 beautiful_volhard[352041]: 167 167
Nov 25 08:43:55 compute-0 systemd[1]: libpod-742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7.scope: Deactivated successfully.
Nov 25 08:43:55 compute-0 podman[352006]: 2025-11-25 08:43:55.971966653 +0000 UTC m=+0.185030114 container attach 742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_volhard, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 08:43:55 compute-0 podman[352006]: 2025-11-25 08:43:55.97259809 +0000 UTC m=+0.185661491 container died 742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_volhard, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:43:56 compute-0 podman[352030]: 2025-11-25 08:43:56.020614468 +0000 UTC m=+0.137090377 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 25 08:43:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e8a224085af0e24fa38d0e5781a5be4407abbbccc5b25956a6cae3f64dc80ab-merged.mount: Deactivated successfully.
Nov 25 08:43:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:43:56 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3928886384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:43:56 compute-0 nova_compute[253538]: 2025-11-25 08:43:56.100 253542 DEBUG oslo_concurrency.processutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:43:56 compute-0 nova_compute[253538]: 2025-11-25 08:43:56.108 253542 DEBUG nova.compute.provider_tree [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:43:56 compute-0 nova_compute[253538]: 2025-11-25 08:43:56.128 253542 DEBUG nova.scheduler.client.report [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:43:56 compute-0 nova_compute[253538]: 2025-11-25 08:43:56.155 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:56 compute-0 podman[352006]: 2025-11-25 08:43:56.168011495 +0000 UTC m=+0.381074886 container remove 742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:43:56 compute-0 systemd[1]: libpod-conmon-742fe1498db01c575f22ff5e5447e46d6554fd65f453a86c401818dacc397ef7.scope: Deactivated successfully.
Nov 25 08:43:56 compute-0 nova_compute[253538]: 2025-11-25 08:43:56.197 253542 INFO nova.scheduler.client.report [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Deleted allocations for instance a5dba47f-da80-465e-9659-1897b7d8b1dc
Nov 25 08:43:56 compute-0 nova_compute[253538]: 2025-11-25 08:43:56.270 253542 DEBUG oslo_concurrency.lockutils [None req-1fe5a5ed-fa78-490a-8b4b-cc53adb653da 3d8339b871b34cf5bbf797eb592ec74e 31e0445675494c73be5eb4d1a6ec9597 - - default default] Lock "a5dba47f-da80-465e-9659-1897b7d8b1dc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:43:56 compute-0 podman[352079]: 2025-11-25 08:43:56.391529997 +0000 UTC m=+0.076410044 container create f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 08:43:56 compute-0 podman[352079]: 2025-11-25 08:43:56.343105876 +0000 UTC m=+0.027986003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:43:56 compute-0 systemd[1]: Started libpod-conmon-f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b.scope.
Nov 25 08:43:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e01dec20bd8d1c6b693ad7e7b8ce31746f2110cf622006e29cad8280a8c67fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e01dec20bd8d1c6b693ad7e7b8ce31746f2110cf622006e29cad8280a8c67fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e01dec20bd8d1c6b693ad7e7b8ce31746f2110cf622006e29cad8280a8c67fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e01dec20bd8d1c6b693ad7e7b8ce31746f2110cf622006e29cad8280a8c67fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:43:56 compute-0 podman[352079]: 2025-11-25 08:43:56.524818228 +0000 UTC m=+0.209698315 container init f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 08:43:56 compute-0 podman[352079]: 2025-11-25 08:43:56.533936647 +0000 UTC m=+0.218816694 container start f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_swirles, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 25 08:43:56 compute-0 podman[352079]: 2025-11-25 08:43:56.551693651 +0000 UTC m=+0.236573688 container attach f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:43:57 compute-0 nova_compute[253538]: 2025-11-25 08:43:57.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:57 compute-0 ceph-mon[75015]: pgmap v1871: 321 pgs: 321 active+clean; 114 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 8.0 MiB/s wr, 223 op/s
Nov 25 08:43:57 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3928886384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:43:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:43:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Nov 25 08:43:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Nov 25 08:43:57 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Nov 25 08:43:57 compute-0 busy_swirles[352096]: {
Nov 25 08:43:57 compute-0 busy_swirles[352096]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:43:57 compute-0 busy_swirles[352096]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:43:57 compute-0 busy_swirles[352096]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:43:57 compute-0 busy_swirles[352096]:         "osd_id": 1,
Nov 25 08:43:57 compute-0 busy_swirles[352096]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:43:57 compute-0 busy_swirles[352096]:         "type": "bluestore"
Nov 25 08:43:57 compute-0 busy_swirles[352096]:     },
Nov 25 08:43:57 compute-0 busy_swirles[352096]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:43:57 compute-0 busy_swirles[352096]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:43:57 compute-0 busy_swirles[352096]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:43:57 compute-0 busy_swirles[352096]:         "osd_id": 2,
Nov 25 08:43:57 compute-0 busy_swirles[352096]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:43:57 compute-0 busy_swirles[352096]:         "type": "bluestore"
Nov 25 08:43:57 compute-0 busy_swirles[352096]:     },
Nov 25 08:43:57 compute-0 busy_swirles[352096]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:43:57 compute-0 busy_swirles[352096]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:43:57 compute-0 busy_swirles[352096]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:43:57 compute-0 busy_swirles[352096]:         "osd_id": 0,
Nov 25 08:43:57 compute-0 busy_swirles[352096]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:43:57 compute-0 busy_swirles[352096]:         "type": "bluestore"
Nov 25 08:43:57 compute-0 busy_swirles[352096]:     }
Nov 25 08:43:57 compute-0 busy_swirles[352096]: }
Nov 25 08:43:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 96 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.9 MiB/s wr, 155 op/s
Nov 25 08:43:57 compute-0 systemd[1]: libpod-f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b.scope: Deactivated successfully.
Nov 25 08:43:57 compute-0 podman[352079]: 2025-11-25 08:43:57.505625077 +0000 UTC m=+1.190505144 container died f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_swirles, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 08:43:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e01dec20bd8d1c6b693ad7e7b8ce31746f2110cf622006e29cad8280a8c67fe-merged.mount: Deactivated successfully.
Nov 25 08:43:57 compute-0 podman[352079]: 2025-11-25 08:43:57.577392213 +0000 UTC m=+1.262272290 container remove f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 08:43:57 compute-0 systemd[1]: libpod-conmon-f7d116ec682a3933ddd939fde1b681b88475be5ca5201b65e7f54e3fe18ef95b.scope: Deactivated successfully.
Nov 25 08:43:57 compute-0 sudo[351930]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:43:57 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:43:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:43:57 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:43:57 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev be81313d-8599-41c6-a411-19a47d16bf95 does not exist
Nov 25 08:43:57 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 0b4f9d4b-7dac-4a6a-b2ea-bb0541c947ca does not exist
Nov 25 08:43:57 compute-0 sudo[352140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:43:57 compute-0 sudo[352140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:57 compute-0 sudo[352140]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:57 compute-0 sudo[352165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:43:57 compute-0 sudo[352165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:43:57 compute-0 sudo[352165]: pam_unix(sudo:session): session closed for user root
Nov 25 08:43:58 compute-0 ceph-mon[75015]: osdmap e231: 3 total, 3 up, 3 in
Nov 25 08:43:58 compute-0 ceph-mon[75015]: pgmap v1873: 321 pgs: 321 active+clean; 96 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.9 MiB/s wr, 155 op/s
Nov 25 08:43:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:43:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:43:58 compute-0 nova_compute[253538]: 2025-11-25 08:43:58.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:43:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 88 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 147 op/s
Nov 25 08:43:59 compute-0 podman[352190]: 2025-11-25 08:43:59.910336029 +0000 UTC m=+0.143550033 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 08:44:00 compute-0 nova_compute[253538]: 2025-11-25 08:44:00.414 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:00 compute-0 ceph-mon[75015]: pgmap v1874: 321 pgs: 321 active+clean; 88 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 147 op/s
Nov 25 08:44:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 135 op/s
Nov 25 08:44:02 compute-0 nova_compute[253538]: 2025-11-25 08:44:02.006 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:44:02 compute-0 ceph-mon[75015]: pgmap v1875: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 135 op/s
Nov 25 08:44:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 757 KiB/s rd, 1.4 KiB/s wr, 59 op/s
Nov 25 08:44:03 compute-0 nova_compute[253538]: 2025-11-25 08:44:03.669 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:44:04 compute-0 ceph-mon[75015]: pgmap v1876: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 757 KiB/s rd, 1.4 KiB/s wr, 59 op/s
Nov 25 08:44:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1023 B/s wr, 17 op/s
Nov 25 08:44:06 compute-0 ceph-mon[75015]: pgmap v1877: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1023 B/s wr, 17 op/s
Nov 25 08:44:07 compute-0 nova_compute[253538]: 2025-11-25 08:44:07.061 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:44:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 397 B/s wr, 13 op/s
Nov 25 08:44:08 compute-0 nova_compute[253538]: 2025-11-25 08:44:08.643 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060233.6417558, a5dba47f-da80-465e-9659-1897b7d8b1dc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:44:08 compute-0 nova_compute[253538]: 2025-11-25 08:44:08.643 253542 INFO nova.compute.manager [-] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] VM Stopped (Lifecycle Event)
Nov 25 08:44:08 compute-0 nova_compute[253538]: 2025-11-25 08:44:08.671 253542 DEBUG nova.compute.manager [None req-355a25f5-f478-46ba-ba76-ca3c584f810a - - - - - -] [instance: a5dba47f-da80-465e-9659-1897b7d8b1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:44:08 compute-0 nova_compute[253538]: 2025-11-25 08:44:08.672 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:08 compute-0 ceph-mon[75015]: pgmap v1878: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 397 B/s wr, 13 op/s
Nov 25 08:44:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 341 B/s wr, 11 op/s
Nov 25 08:44:10 compute-0 ceph-mon[75015]: pgmap v1879: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 341 B/s wr, 11 op/s
Nov 25 08:44:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:12 compute-0 nova_compute[253538]: 2025-11-25 08:44:12.064 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:44:12 compute-0 ceph-mon[75015]: pgmap v1880: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:13 compute-0 nova_compute[253538]: 2025-11-25 08:44:13.675 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:14 compute-0 ceph-mon[75015]: pgmap v1881: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:16 compute-0 ceph-mon[75015]: pgmap v1882: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:17 compute-0 nova_compute[253538]: 2025-11-25 08:44:17.104 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:44:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:18 compute-0 nova_compute[253538]: 2025-11-25 08:44:18.391 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "65e7119e-238b-426c-9e9d-67b4c38c61b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:44:18 compute-0 nova_compute[253538]: 2025-11-25 08:44:18.391 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:44:18 compute-0 nova_compute[253538]: 2025-11-25 08:44:18.404 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:44:18 compute-0 nova_compute[253538]: 2025-11-25 08:44:18.475 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:44:18 compute-0 nova_compute[253538]: 2025-11-25 08:44:18.476 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:44:18 compute-0 nova_compute[253538]: 2025-11-25 08:44:18.487 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:44:18 compute-0 nova_compute[253538]: 2025-11-25 08:44:18.488 253542 INFO nova.compute.claims [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:44:18 compute-0 nova_compute[253538]: 2025-11-25 08:44:18.601 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:44:18 compute-0 nova_compute[253538]: 2025-11-25 08:44:18.677 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:18 compute-0 ceph-mon[75015]: pgmap v1883: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:44:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3897038534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.143 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.150 253542 DEBUG nova.compute.provider_tree [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.170 253542 DEBUG nova.scheduler.client.report [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.193 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.194 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.240 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.241 253542 DEBUG nova.network.neutron [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.264 253542 INFO nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.287 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.380 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.381 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.382 253542 INFO nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Creating image(s)
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.406 253542 DEBUG nova.storage.rbd_utils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] rbd image 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:44:19 compute-0 sshd-session[352238]: Connection closed by 186.86.52.137 port 8990 [preauth]
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.435 253542 DEBUG nova.storage.rbd_utils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] rbd image 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.460 253542 DEBUG nova.storage.rbd_utils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] rbd image 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.464 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:44:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1884: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.527 253542 DEBUG nova.policy [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9e24446c871b4d7ca816a3833d05daa9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '95ca8ccb4dca4f58b3896b0533bab879', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.553 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.554 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.555 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.555 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.579 253542 DEBUG nova.storage.rbd_utils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] rbd image 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.583 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:44:19 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3897038534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:44:19 compute-0 nova_compute[253538]: 2025-11-25 08:44:19.924 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:44:20 compute-0 nova_compute[253538]: 2025-11-25 08:44:20.005 253542 DEBUG nova.storage.rbd_utils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] resizing rbd image 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:44:20 compute-0 nova_compute[253538]: 2025-11-25 08:44:20.115 253542 DEBUG nova.objects.instance [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lazy-loading 'migration_context' on Instance uuid 65e7119e-238b-426c-9e9d-67b4c38c61b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:44:20 compute-0 nova_compute[253538]: 2025-11-25 08:44:20.129 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:44:20 compute-0 nova_compute[253538]: 2025-11-25 08:44:20.129 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Ensure instance console log exists: /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:44:20 compute-0 nova_compute[253538]: 2025-11-25 08:44:20.130 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:44:20 compute-0 nova_compute[253538]: 2025-11-25 08:44:20.130 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:44:20 compute-0 nova_compute[253538]: 2025-11-25 08:44:20.131 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:44:20 compute-0 nova_compute[253538]: 2025-11-25 08:44:20.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:44:20 compute-0 nova_compute[253538]: 2025-11-25 08:44:20.671 253542 DEBUG nova.network.neutron [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Successfully created port: 82c25621-80b3-4927-957d-aec0a653a4f8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:44:20 compute-0 ceph-mon[75015]: pgmap v1884: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:20 compute-0 podman[352409]: 2025-11-25 08:44:20.854851462 +0000 UTC m=+0.097062696 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 08:44:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1885: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:21 compute-0 nova_compute[253538]: 2025-11-25 08:44:21.817 253542 DEBUG nova.network.neutron [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Successfully updated port: 82c25621-80b3-4927-957d-aec0a653a4f8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:44:21 compute-0 nova_compute[253538]: 2025-11-25 08:44:21.833 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "refresh_cache-65e7119e-238b-426c-9e9d-67b4c38c61b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:44:21 compute-0 nova_compute[253538]: 2025-11-25 08:44:21.833 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquired lock "refresh_cache-65e7119e-238b-426c-9e9d-67b4c38c61b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:44:21 compute-0 nova_compute[253538]: 2025-11-25 08:44:21.833 253542 DEBUG nova.network.neutron [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:44:21 compute-0 nova_compute[253538]: 2025-11-25 08:44:21.983 253542 DEBUG nova.compute.manager [req-909cc263-008e-4d4e-9dc8-c38707afdb84 req-70fa9ed3-b196-4254-843e-d65b1dc34fc5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received event network-changed-82c25621-80b3-4927-957d-aec0a653a4f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:44:21 compute-0 nova_compute[253538]: 2025-11-25 08:44:21.984 253542 DEBUG nova.compute.manager [req-909cc263-008e-4d4e-9dc8-c38707afdb84 req-70fa9ed3-b196-4254-843e-d65b1dc34fc5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Refreshing instance network info cache due to event network-changed-82c25621-80b3-4927-957d-aec0a653a4f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:44:21 compute-0 nova_compute[253538]: 2025-11-25 08:44:21.985 253542 DEBUG oslo_concurrency.lockutils [req-909cc263-008e-4d4e-9dc8-c38707afdb84 req-70fa9ed3-b196-4254-843e-d65b1dc34fc5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-65e7119e-238b-426c-9e9d-67b4c38c61b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:44:22 compute-0 nova_compute[253538]: 2025-11-25 08:44:22.054 253542 DEBUG nova.network.neutron [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:44:22 compute-0 nova_compute[253538]: 2025-11-25 08:44:22.145 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:44:22 compute-0 ceph-mon[75015]: pgmap v1885: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:44:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:44:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:44:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:44:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:44:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:44:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 108 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 892 KiB/s wr, 25 op/s
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.654 253542 DEBUG nova.network.neutron [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Updating instance_info_cache with network_info: [{"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.677 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Releasing lock "refresh_cache-65e7119e-238b-426c-9e9d-67b4c38c61b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.678 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Instance network_info: |[{"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.679 253542 DEBUG oslo_concurrency.lockutils [req-909cc263-008e-4d4e-9dc8-c38707afdb84 req-70fa9ed3-b196-4254-843e-d65b1dc34fc5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-65e7119e-238b-426c-9e9d-67b4c38c61b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.680 253542 DEBUG nova.network.neutron [req-909cc263-008e-4d4e-9dc8-c38707afdb84 req-70fa9ed3-b196-4254-843e-d65b1dc34fc5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Refreshing network info cache for port 82c25621-80b3-4927-957d-aec0a653a4f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.690 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Start _get_guest_xml network_info=[{"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.691 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.699 253542 WARNING nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.710 253542 DEBUG nova.virt.libvirt.host [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.710 253542 DEBUG nova.virt.libvirt.host [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.715 253542 DEBUG nova.virt.libvirt.host [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.715 253542 DEBUG nova.virt.libvirt.host [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.716 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.717 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.717 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.718 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.718 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.719 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.719 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.720 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.720 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.721 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.721 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.722 253542 DEBUG nova.virt.hardware [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:44:23 compute-0 nova_compute[253538]: 2025-11-25 08:44:23.726 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:44:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:44:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3321357064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.298 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.328 253542 DEBUG nova.storage.rbd_utils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] rbd image 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.332 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:44:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:44:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455607539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.840 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.842 253542 DEBUG nova.virt.libvirt.vif [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:44:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1971153713',display_name='tempest-ServerAddressesNegativeTestJSON-server-1971153713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1971153713',id=101,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95ca8ccb4dca4f58b3896b0533bab879',ramdisk_id='',reservation_id='r-eybd3brc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1856355067',owner_
user_name='tempest-ServerAddressesNegativeTestJSON-1856355067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:44:19Z,user_data=None,user_id='9e24446c871b4d7ca816a3833d05daa9',uuid=65e7119e-238b-426c-9e9d-67b4c38c61b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.842 253542 DEBUG nova.network.os_vif_util [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Converting VIF {"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.843 253542 DEBUG nova.network.os_vif_util [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b4:91,bridge_name='br-int',has_traffic_filtering=True,id=82c25621-80b3-4927-957d-aec0a653a4f8,network=Network(5091e160-cd87-462f-b734-443b7a3a08ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c25621-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.844 253542 DEBUG nova.objects.instance [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lazy-loading 'pci_devices' on Instance uuid 65e7119e-238b-426c-9e9d-67b4c38c61b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.860 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:44:24 compute-0 nova_compute[253538]:   <uuid>65e7119e-238b-426c-9e9d-67b4c38c61b7</uuid>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   <name>instance-00000065</name>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerAddressesNegativeTestJSON-server-1971153713</nova:name>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:44:23</nova:creationTime>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:44:24 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:44:24 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:44:24 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:44:24 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:44:24 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:44:24 compute-0 nova_compute[253538]:         <nova:user uuid="9e24446c871b4d7ca816a3833d05daa9">tempest-ServerAddressesNegativeTestJSON-1856355067-project-member</nova:user>
Nov 25 08:44:24 compute-0 nova_compute[253538]:         <nova:project uuid="95ca8ccb4dca4f58b3896b0533bab879">tempest-ServerAddressesNegativeTestJSON-1856355067</nova:project>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:44:24 compute-0 nova_compute[253538]:         <nova:port uuid="82c25621-80b3-4927-957d-aec0a653a4f8">
Nov 25 08:44:24 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <system>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <entry name="serial">65e7119e-238b-426c-9e9d-67b4c38c61b7</entry>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <entry name="uuid">65e7119e-238b-426c-9e9d-67b4c38c61b7</entry>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     </system>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   <os>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   </os>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   <features>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   </features>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/65e7119e-238b-426c-9e9d-67b4c38c61b7_disk">
Nov 25 08:44:24 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       </source>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:44:24 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/65e7119e-238b-426c-9e9d-67b4c38c61b7_disk.config">
Nov 25 08:44:24 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       </source>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:44:24 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:c8:b4:91"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <target dev="tap82c25621-80"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7/console.log" append="off"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <video>
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     </video>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:44:24 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:44:24 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:44:24 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:44:24 compute-0 nova_compute[253538]: </domain>
Nov 25 08:44:24 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.862 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Preparing to wait for external event network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.862 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.863 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.863 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.864 253542 DEBUG nova.virt.libvirt.vif [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:44:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1971153713',display_name='tempest-ServerAddressesNegativeTestJSON-server-1971153713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1971153713',id=101,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95ca8ccb4dca4f58b3896b0533bab879',ramdisk_id='',reservation_id='r-eybd3brc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1856355067',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1856355067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:44:19Z,user_data=None,user_id='9e24446c871b4d7ca816a3833d05daa9',uuid=65e7119e-238b-426c-9e9d-67b4c38c61b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.864 253542 DEBUG nova.network.os_vif_util [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Converting VIF {"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.866 253542 DEBUG nova.network.os_vif_util [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b4:91,bridge_name='br-int',has_traffic_filtering=True,id=82c25621-80b3-4927-957d-aec0a653a4f8,network=Network(5091e160-cd87-462f-b734-443b7a3a08ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c25621-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.866 253542 DEBUG os_vif [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b4:91,bridge_name='br-int',has_traffic_filtering=True,id=82c25621-80b3-4927-957d-aec0a653a4f8,network=Network(5091e160-cd87-462f-b734-443b7a3a08ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c25621-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.868 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.868 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.872 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.872 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82c25621-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.873 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap82c25621-80, col_values=(('external_ids', {'iface-id': '82c25621-80b3-4927-957d-aec0a653a4f8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:b4:91', 'vm-uuid': '65e7119e-238b-426c-9e9d-67b4c38c61b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.874 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:24 compute-0 NetworkManager[48915]: <info>  [1764060264.8755] manager: (tap82c25621-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.881 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.881 253542 INFO os_vif [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b4:91,bridge_name='br-int',has_traffic_filtering=True,id=82c25621-80b3-4927-957d-aec0a653a4f8,network=Network(5091e160-cd87-462f-b734-443b7a3a08ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c25621-80')
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.950 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.951 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.952 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] No VIF found with MAC fa:16:3e:c8:b4:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.953 253542 INFO nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Using config drive
Nov 25 08:44:24 compute-0 nova_compute[253538]: 2025-11-25 08:44:24.984 253542 DEBUG nova.storage.rbd_utils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] rbd image 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:44:25 compute-0 ceph-mon[75015]: pgmap v1886: 321 pgs: 321 active+clean; 108 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 892 KiB/s wr, 25 op/s
Nov 25 08:44:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3321357064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:44:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1455607539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:44:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 08:44:25 compute-0 nova_compute[253538]: 2025-11-25 08:44:25.791 253542 INFO nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Creating config drive at /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7/disk.config
Nov 25 08:44:25 compute-0 nova_compute[253538]: 2025-11-25 08:44:25.800 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgq8mjtha execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:44:25 compute-0 nova_compute[253538]: 2025-11-25 08:44:25.891 253542 DEBUG nova.network.neutron [req-909cc263-008e-4d4e-9dc8-c38707afdb84 req-70fa9ed3-b196-4254-843e-d65b1dc34fc5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Updated VIF entry in instance network info cache for port 82c25621-80b3-4927-957d-aec0a653a4f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:44:25 compute-0 nova_compute[253538]: 2025-11-25 08:44:25.892 253542 DEBUG nova.network.neutron [req-909cc263-008e-4d4e-9dc8-c38707afdb84 req-70fa9ed3-b196-4254-843e-d65b1dc34fc5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Updating instance_info_cache with network_info: [{"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:44:25 compute-0 nova_compute[253538]: 2025-11-25 08:44:25.909 253542 DEBUG oslo_concurrency.lockutils [req-909cc263-008e-4d4e-9dc8-c38707afdb84 req-70fa9ed3-b196-4254-843e-d65b1dc34fc5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-65e7119e-238b-426c-9e9d-67b4c38c61b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:44:25 compute-0 nova_compute[253538]: 2025-11-25 08:44:25.943 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgq8mjtha" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:44:25 compute-0 nova_compute[253538]: 2025-11-25 08:44:25.967 253542 DEBUG nova.storage.rbd_utils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] rbd image 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:44:25 compute-0 nova_compute[253538]: 2025-11-25 08:44:25.970 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7/disk.config 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:44:26 compute-0 nova_compute[253538]: 2025-11-25 08:44:26.284 253542 DEBUG oslo_concurrency.processutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7/disk.config 65e7119e-238b-426c-9e9d-67b4c38c61b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:44:26 compute-0 nova_compute[253538]: 2025-11-25 08:44:26.285 253542 INFO nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Deleting local config drive /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7/disk.config because it was imported into RBD.
Nov 25 08:44:26 compute-0 kernel: tap82c25621-80: entered promiscuous mode
Nov 25 08:44:26 compute-0 NetworkManager[48915]: <info>  [1764060266.3665] manager: (tap82c25621-80): new Tun device (/org/freedesktop/NetworkManager/Devices/397)
Nov 25 08:44:26 compute-0 ovn_controller[152859]: 2025-11-25T08:44:26Z|00973|binding|INFO|Claiming lport 82c25621-80b3-4927-957d-aec0a653a4f8 for this chassis.
Nov 25 08:44:26 compute-0 ovn_controller[152859]: 2025-11-25T08:44:26Z|00974|binding|INFO|82c25621-80b3-4927-957d-aec0a653a4f8: Claiming fa:16:3e:c8:b4:91 10.100.0.12
Nov 25 08:44:26 compute-0 nova_compute[253538]: 2025-11-25 08:44:26.371 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:26 compute-0 nova_compute[253538]: 2025-11-25 08:44:26.379 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.397 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:b4:91 10.100.0.12'], port_security=['fa:16:3e:c8:b4:91 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '65e7119e-238b-426c-9e9d-67b4c38c61b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5091e160-cd87-462f-b734-443b7a3a08ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95ca8ccb4dca4f58b3896b0533bab879', 'neutron:revision_number': '2', 'neutron:security_group_ids': '15c498d9-36a3-47cc-8194-8875167d8fa1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86fe2083-70bb-4b93-9773-f8b27481b1ca, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=82c25621-80b3-4927-957d-aec0a653a4f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.399 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 82c25621-80b3-4927-957d-aec0a653a4f8 in datapath 5091e160-cd87-462f-b734-443b7a3a08ee bound to our chassis
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.401 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5091e160-cd87-462f-b734-443b7a3a08ee
Nov 25 08:44:26 compute-0 systemd-machined[215790]: New machine qemu-124-instance-00000065.
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ce7778-bb24-4ccd-a3d2-ba05574950f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.418 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5091e160-c1 in ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.421 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5091e160-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.421 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8afcfb20-bacf-461d-9c79-0e660682be84]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.422 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9c089d6b-1c47-4564-ae9b-ba1d74401a05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.438 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[98641f84-0c4f-4070-9270-f3d6fde6f955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:26 compute-0 systemd[1]: Started Virtual Machine qemu-124-instance-00000065.
Nov 25 08:44:26 compute-0 nova_compute[253538]: 2025-11-25 08:44:26.465 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.465 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1a357c37-38ec-42bb-8104-074bfe7ea2a7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:26 compute-0 ovn_controller[152859]: 2025-11-25T08:44:26Z|00975|binding|INFO|Setting lport 82c25621-80b3-4927-957d-aec0a653a4f8 ovn-installed in OVS
Nov 25 08:44:26 compute-0 ovn_controller[152859]: 2025-11-25T08:44:26Z|00976|binding|INFO|Setting lport 82c25621-80b3-4927-957d-aec0a653a4f8 up in Southbound
Nov 25 08:44:26 compute-0 nova_compute[253538]: 2025-11-25 08:44:26.470 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:26 compute-0 podman[352561]: 2025-11-25 08:44:26.485751643 +0000 UTC m=+0.080122244 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 25 08:44:26 compute-0 systemd-udevd[352589]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:44:26 compute-0 NetworkManager[48915]: <info>  [1764060266.5011] device (tap82c25621-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:44:26 compute-0 NetworkManager[48915]: <info>  [1764060266.5020] device (tap82c25621-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.509 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6863c0-1840-4d25-9040-61e34bdc164d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:26 compute-0 NetworkManager[48915]: <info>  [1764060266.5155] manager: (tap5091e160-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/398)
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.514 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a7509ee7-bc30-495a-8540-c6b8f35cdb80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:26 compute-0 systemd-udevd[352594]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.548 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b9839b2a-ac38-4427-8ae4-ebb7f3aa1ab0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.554 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[40d73f4e-4361-478e-968e-79588a6e0729]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:26 compute-0 NetworkManager[48915]: <info>  [1764060266.5736] device (tap5091e160-c0): carrier: link connected
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.580 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6d909d-02e8-41f9-87e0-658a15fd13f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.597 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[669fa496-7b3d-44d9-ab40-60d8019af672]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5091e160-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:e2:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562117, 'reachable_time': 30122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352618, 'error': None, 'target': 'ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.616 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cf26dc62-3f8c-4584-86cb-9d60487bdc7a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:e281'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562117, 'tstamp': 562117}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352619, 'error': None, 'target': 'ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.641 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2846e32a-857e-4fae-ab2d-611cf912887a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5091e160-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:e2:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562117, 'reachable_time': 30122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352620, 'error': None, 'target': 'ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.678 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0edc9e-aa77-4781-aab9-9a17e4598a05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.744 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d32900ec-a4cd-4074-9833-d6370c512d24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.745 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5091e160-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.746 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.746 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5091e160-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:44:26 compute-0 kernel: tap5091e160-c0: entered promiscuous mode
Nov 25 08:44:26 compute-0 NetworkManager[48915]: <info>  [1764060266.7493] manager: (tap5091e160-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.753 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5091e160-c0, col_values=(('external_ids', {'iface-id': '09d23056-ba0e-4cb0-ae84-5e88f5553ae3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:44:26 compute-0 nova_compute[253538]: 2025-11-25 08:44:26.754 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:26 compute-0 ovn_controller[152859]: 2025-11-25T08:44:26Z|00977|binding|INFO|Releasing lport 09d23056-ba0e-4cb0-ae84-5e88f5553ae3 from this chassis (sb_readonly=0)
Nov 25 08:44:26 compute-0 nova_compute[253538]: 2025-11-25 08:44:26.776 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:26 compute-0 nova_compute[253538]: 2025-11-25 08:44:26.779 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.779 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5091e160-cd87-462f-b734-443b7a3a08ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5091e160-cd87-462f-b734-443b7a3a08ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.781 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0a82a159-c204-4ee6-8457-9fb912e75627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.781 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-5091e160-cd87-462f-b734-443b7a3a08ee
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/5091e160-cd87-462f-b734-443b7a3a08ee.pid.haproxy
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 5091e160-cd87-462f-b734-443b7a3a08ee
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:44:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:26.782 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee', 'env', 'PROCESS_TAG=haproxy-5091e160-cd87-462f-b734-443b7a3a08ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5091e160-cd87-462f-b734-443b7a3a08ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:44:26 compute-0 nova_compute[253538]: 2025-11-25 08:44:26.796 253542 DEBUG nova.compute.manager [req-b19560db-667f-40c6-b2a3-1fc137702f27 req-a2346e44-7bb1-4799-bbe6-1d15d2cd0a92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received event network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:44:26 compute-0 nova_compute[253538]: 2025-11-25 08:44:26.797 253542 DEBUG oslo_concurrency.lockutils [req-b19560db-667f-40c6-b2a3-1fc137702f27 req-a2346e44-7bb1-4799-bbe6-1d15d2cd0a92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:44:26 compute-0 nova_compute[253538]: 2025-11-25 08:44:26.797 253542 DEBUG oslo_concurrency.lockutils [req-b19560db-667f-40c6-b2a3-1fc137702f27 req-a2346e44-7bb1-4799-bbe6-1d15d2cd0a92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:44:26 compute-0 nova_compute[253538]: 2025-11-25 08:44:26.797 253542 DEBUG oslo_concurrency.lockutils [req-b19560db-667f-40c6-b2a3-1fc137702f27 req-a2346e44-7bb1-4799-bbe6-1d15d2cd0a92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:44:26 compute-0 nova_compute[253538]: 2025-11-25 08:44:26.798 253542 DEBUG nova.compute.manager [req-b19560db-667f-40c6-b2a3-1fc137702f27 req-a2346e44-7bb1-4799-bbe6-1d15d2cd0a92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Processing event network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:44:27 compute-0 ceph-mon[75015]: pgmap v1887: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.148 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:27 compute-0 podman[352653]: 2025-11-25 08:44:27.180159057 +0000 UTC m=+0.047106635 container create ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:44:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:44:27 compute-0 systemd[1]: Started libpod-conmon-ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb.scope.
Nov 25 08:44:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:44:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d54a18379cf3aa265a6136d343a21576bceff9585d581766ed779c3b455434/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:44:27 compute-0 podman[352653]: 2025-11-25 08:44:27.154006815 +0000 UTC m=+0.020954423 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:44:27 compute-0 podman[352653]: 2025-11-25 08:44:27.264070074 +0000 UTC m=+0.131017632 container init ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 08:44:27 compute-0 podman[352653]: 2025-11-25 08:44:27.271531957 +0000 UTC m=+0.138479525 container start ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 08:44:27 compute-0 neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee[352685]: [NOTICE]   (352710) : New worker (352713) forked
Nov 25 08:44:27 compute-0 neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee[352685]: [NOTICE]   (352710) : Loading success.
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.400 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060267.3997738, 65e7119e-238b-426c-9e9d-67b4c38c61b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.400 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] VM Started (Lifecycle Event)
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.404 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.408 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.413 253542 INFO nova.virt.libvirt.driver [-] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Instance spawned successfully.
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.413 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.430 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.436 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.441 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.442 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.442 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.443 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.443 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.444 253542 DEBUG nova.virt.libvirt.driver [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.484 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.484 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060267.4042397, 65e7119e-238b-426c-9e9d-67b4c38c61b7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.485 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] VM Paused (Lifecycle Event)
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.511 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.516 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060267.407532, 65e7119e-238b-426c-9e9d-67b4c38c61b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.516 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] VM Resumed (Lifecycle Event)
Nov 25 08:44:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1888: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.536 253542 INFO nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Took 8.16 seconds to spawn the instance on the hypervisor.
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.537 253542 DEBUG nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.539 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.549 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.586 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.595 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.595 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.596 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.629 253542 INFO nova.compute.manager [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Took 9.18 seconds to build instance.
Nov 25 08:44:27 compute-0 nova_compute[253538]: 2025-11-25 08:44:27.656 253542 DEBUG oslo_concurrency.lockutils [None req-2e8ff9d8-ad48-4ca6-9375-f328afc8dbb2 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:44:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:28.358 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:44:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:28.359 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:44:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:28.360 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:44:28 compute-0 nova_compute[253538]: 2025-11-25 08:44:28.406 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:28 compute-0 nova_compute[253538]: 2025-11-25 08:44:28.928 253542 DEBUG nova.compute.manager [req-bf4f38b1-f3d8-4e9b-ba12-b3b5d0f904bb req-7412ae7a-4b37-4c6e-beb8-49509933eb4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received event network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:44:28 compute-0 nova_compute[253538]: 2025-11-25 08:44:28.929 253542 DEBUG oslo_concurrency.lockutils [req-bf4f38b1-f3d8-4e9b-ba12-b3b5d0f904bb req-7412ae7a-4b37-4c6e-beb8-49509933eb4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:44:28 compute-0 nova_compute[253538]: 2025-11-25 08:44:28.929 253542 DEBUG oslo_concurrency.lockutils [req-bf4f38b1-f3d8-4e9b-ba12-b3b5d0f904bb req-7412ae7a-4b37-4c6e-beb8-49509933eb4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:44:28 compute-0 nova_compute[253538]: 2025-11-25 08:44:28.929 253542 DEBUG oslo_concurrency.lockutils [req-bf4f38b1-f3d8-4e9b-ba12-b3b5d0f904bb req-7412ae7a-4b37-4c6e-beb8-49509933eb4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:44:28 compute-0 nova_compute[253538]: 2025-11-25 08:44:28.930 253542 DEBUG nova.compute.manager [req-bf4f38b1-f3d8-4e9b-ba12-b3b5d0f904bb req-7412ae7a-4b37-4c6e-beb8-49509933eb4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] No waiting events found dispatching network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:44:28 compute-0 nova_compute[253538]: 2025-11-25 08:44:28.930 253542 WARNING nova.compute.manager [req-bf4f38b1-f3d8-4e9b-ba12-b3b5d0f904bb req-7412ae7a-4b37-4c6e-beb8-49509933eb4f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received unexpected event network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 for instance with vm_state active and task_state None.
Nov 25 08:44:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:44:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/562701283' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:44:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:44:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/562701283' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:44:29 compute-0 ceph-mon[75015]: pgmap v1888: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 08:44:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/562701283' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:44:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/562701283' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:44:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 08:44:29 compute-0 nova_compute[253538]: 2025-11-25 08:44:29.582 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "65e7119e-238b-426c-9e9d-67b4c38c61b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:44:29 compute-0 nova_compute[253538]: 2025-11-25 08:44:29.583 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:44:29 compute-0 nova_compute[253538]: 2025-11-25 08:44:29.584 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:44:29 compute-0 nova_compute[253538]: 2025-11-25 08:44:29.584 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:44:29 compute-0 nova_compute[253538]: 2025-11-25 08:44:29.584 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:44:29 compute-0 nova_compute[253538]: 2025-11-25 08:44:29.585 253542 INFO nova.compute.manager [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Terminating instance
Nov 25 08:44:29 compute-0 nova_compute[253538]: 2025-11-25 08:44:29.586 253542 DEBUG nova.compute.manager [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:44:29 compute-0 kernel: tap82c25621-80 (unregistering): left promiscuous mode
Nov 25 08:44:29 compute-0 NetworkManager[48915]: <info>  [1764060269.7825] device (tap82c25621-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:44:29 compute-0 ovn_controller[152859]: 2025-11-25T08:44:29Z|00978|binding|INFO|Releasing lport 82c25621-80b3-4927-957d-aec0a653a4f8 from this chassis (sb_readonly=0)
Nov 25 08:44:29 compute-0 ovn_controller[152859]: 2025-11-25T08:44:29Z|00979|binding|INFO|Setting lport 82c25621-80b3-4927-957d-aec0a653a4f8 down in Southbound
Nov 25 08:44:29 compute-0 ovn_controller[152859]: 2025-11-25T08:44:29Z|00980|binding|INFO|Removing iface tap82c25621-80 ovn-installed in OVS
Nov 25 08:44:29 compute-0 nova_compute[253538]: 2025-11-25 08:44:29.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:29.801 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:b4:91 10.100.0.12'], port_security=['fa:16:3e:c8:b4:91 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '65e7119e-238b-426c-9e9d-67b4c38c61b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5091e160-cd87-462f-b734-443b7a3a08ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95ca8ccb4dca4f58b3896b0533bab879', 'neutron:revision_number': '4', 'neutron:security_group_ids': '15c498d9-36a3-47cc-8194-8875167d8fa1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86fe2083-70bb-4b93-9773-f8b27481b1ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=82c25621-80b3-4927-957d-aec0a653a4f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:44:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:29.802 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 82c25621-80b3-4927-957d-aec0a653a4f8 in datapath 5091e160-cd87-462f-b734-443b7a3a08ee unbound from our chassis
Nov 25 08:44:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:29.803 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5091e160-cd87-462f-b734-443b7a3a08ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:44:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:29.804 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cba9f19e-9ef6-47e0-8270-f0d464804aab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:29.804 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee namespace which is not needed anymore
Nov 25 08:44:29 compute-0 nova_compute[253538]: 2025-11-25 08:44:29.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:29 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000065.scope: Deactivated successfully.
Nov 25 08:44:29 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000065.scope: Consumed 3.163s CPU time.
Nov 25 08:44:29 compute-0 systemd-machined[215790]: Machine qemu-124-instance-00000065 terminated.
Nov 25 08:44:29 compute-0 nova_compute[253538]: 2025-11-25 08:44:29.875 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:30 compute-0 nova_compute[253538]: 2025-11-25 08:44:30.030 253542 INFO nova.virt.libvirt.driver [-] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Instance destroyed successfully.
Nov 25 08:44:30 compute-0 nova_compute[253538]: 2025-11-25 08:44:30.031 253542 DEBUG nova.objects.instance [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lazy-loading 'resources' on Instance uuid 65e7119e-238b-426c-9e9d-67b4c38c61b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:44:30 compute-0 neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee[352685]: [NOTICE]   (352710) : haproxy version is 2.8.14-c23fe91
Nov 25 08:44:30 compute-0 neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee[352685]: [NOTICE]   (352710) : path to executable is /usr/sbin/haproxy
Nov 25 08:44:30 compute-0 neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee[352685]: [WARNING]  (352710) : Exiting Master process...
Nov 25 08:44:30 compute-0 neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee[352685]: [WARNING]  (352710) : Exiting Master process...
Nov 25 08:44:30 compute-0 neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee[352685]: [ALERT]    (352710) : Current worker (352713) exited with code 143 (Terminated)
Nov 25 08:44:30 compute-0 neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee[352685]: [WARNING]  (352710) : All workers exited. Exiting... (0)
Nov 25 08:44:30 compute-0 nova_compute[253538]: 2025-11-25 08:44:30.047 253542 DEBUG nova.virt.libvirt.vif [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:44:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1971153713',display_name='tempest-ServerAddressesNegativeTestJSON-server-1971153713',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1971153713',id=101,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:44:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='95ca8ccb4dca4f58b3896b0533bab879',ramdisk_id='',reservation_id='r-eybd3brc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1856355067',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1856355067-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:44:27Z,user_data=None,user_id='9e24446c871b4d7ca816a3833d05daa9',uuid=65e7119e-238b-426c-9e9d-67b4c38c61b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:44:30 compute-0 nova_compute[253538]: 2025-11-25 08:44:30.048 253542 DEBUG nova.network.os_vif_util [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Converting VIF {"id": "82c25621-80b3-4927-957d-aec0a653a4f8", "address": "fa:16:3e:c8:b4:91", "network": {"id": "5091e160-cd87-462f-b734-443b7a3a08ee", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1608437613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95ca8ccb4dca4f58b3896b0533bab879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c25621-80", "ovs_interfaceid": "82c25621-80b3-4927-957d-aec0a653a4f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:44:30 compute-0 nova_compute[253538]: 2025-11-25 08:44:30.049 253542 DEBUG nova.network.os_vif_util [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b4:91,bridge_name='br-int',has_traffic_filtering=True,id=82c25621-80b3-4927-957d-aec0a653a4f8,network=Network(5091e160-cd87-462f-b734-443b7a3a08ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c25621-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:44:30 compute-0 systemd[1]: libpod-ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb.scope: Deactivated successfully.
Nov 25 08:44:30 compute-0 nova_compute[253538]: 2025-11-25 08:44:30.050 253542 DEBUG os_vif [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b4:91,bridge_name='br-int',has_traffic_filtering=True,id=82c25621-80b3-4927-957d-aec0a653a4f8,network=Network(5091e160-cd87-462f-b734-443b7a3a08ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c25621-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:44:30 compute-0 nova_compute[253538]: 2025-11-25 08:44:30.051 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:30 compute-0 nova_compute[253538]: 2025-11-25 08:44:30.052 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82c25621-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:44:30 compute-0 nova_compute[253538]: 2025-11-25 08:44:30.053 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:30 compute-0 nova_compute[253538]: 2025-11-25 08:44:30.055 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:30 compute-0 podman[352752]: 2025-11-25 08:44:30.057665364 +0000 UTC m=+0.167131786 container died ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:44:30 compute-0 nova_compute[253538]: 2025-11-25 08:44:30.058 253542 INFO os_vif [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b4:91,bridge_name='br-int',has_traffic_filtering=True,id=82c25621-80b3-4927-957d-aec0a653a4f8,network=Network(5091e160-cd87-462f-b734-443b7a3a08ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c25621-80')
Nov 25 08:44:30 compute-0 podman[352778]: 2025-11-25 08:44:30.423300258 +0000 UTC m=+0.363354073 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:44:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb-userdata-shm.mount: Deactivated successfully.
Nov 25 08:44:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3d54a18379cf3aa265a6136d343a21576bceff9585d581766ed779c3b455434-merged.mount: Deactivated successfully.
Nov 25 08:44:30 compute-0 ceph-mon[75015]: pgmap v1889: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 08:44:30 compute-0 podman[352752]: 2025-11-25 08:44:30.474423101 +0000 UTC m=+0.583889533 container cleanup ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:44:30 compute-0 nova_compute[253538]: 2025-11-25 08:44:30.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:44:30 compute-0 podman[352838]: 2025-11-25 08:44:30.562680677 +0000 UTC m=+0.061822536 container remove ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 08:44:30 compute-0 systemd[1]: libpod-conmon-ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb.scope: Deactivated successfully.
Nov 25 08:44:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.568 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a43185c2-07d7-4e67-9f6b-32c7975f4e9d]: (4, ('Tue Nov 25 08:44:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee (ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb)\nba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb\nTue Nov 25 08:44:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee (ba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb)\nba8588a56368e542da74ef2c59bb52ed114faf17bb5f012202e8d0458088eefb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.570 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[387cff21-f137-4711-a11b-35e60515b654]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.571 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5091e160-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:44:30 compute-0 nova_compute[253538]: 2025-11-25 08:44:30.574 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:30 compute-0 kernel: tap5091e160-c0: left promiscuous mode
Nov 25 08:44:30 compute-0 nova_compute[253538]: 2025-11-25 08:44:30.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.590 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f3117b71-a824-4e17-9c4b-8d34f6bfb847]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.601 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e68f837b-a124-4380-9b58-a61ccd2d13fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.603 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7beaf457-b9eb-4d95-bc06-9b47d5c600ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.616 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fba7735a-d7a3-4da8-846f-41a0331e71f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562110, 'reachable_time': 42824, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352853, 'error': None, 'target': 'ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.619 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5091e160-cd87-462f-b734-443b7a3a08ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:44:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:30.619 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[f991180d-5902-4206-a9a4-baf85292e422]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:44:30 compute-0 systemd[1]: run-netns-ovnmeta\x2d5091e160\x2dcd87\x2d462f\x2db734\x2d443b7a3a08ee.mount: Deactivated successfully.
Nov 25 08:44:30 compute-0 nova_compute[253538]: 2025-11-25 08:44:30.980 253542 INFO nova.virt.libvirt.driver [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Deleting instance files /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7_del
Nov 25 08:44:30 compute-0 nova_compute[253538]: 2025-11-25 08:44:30.981 253542 INFO nova.virt.libvirt.driver [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Deletion of /var/lib/nova/instances/65e7119e-238b-426c-9e9d-67b4c38c61b7_del complete
Nov 25 08:44:31 compute-0 nova_compute[253538]: 2025-11-25 08:44:31.009 253542 DEBUG nova.compute.manager [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received event network-vif-unplugged-82c25621-80b3-4927-957d-aec0a653a4f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:44:31 compute-0 nova_compute[253538]: 2025-11-25 08:44:31.009 253542 DEBUG oslo_concurrency.lockutils [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:44:31 compute-0 nova_compute[253538]: 2025-11-25 08:44:31.009 253542 DEBUG oslo_concurrency.lockutils [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:44:31 compute-0 nova_compute[253538]: 2025-11-25 08:44:31.010 253542 DEBUG oslo_concurrency.lockutils [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:44:31 compute-0 nova_compute[253538]: 2025-11-25 08:44:31.010 253542 DEBUG nova.compute.manager [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] No waiting events found dispatching network-vif-unplugged-82c25621-80b3-4927-957d-aec0a653a4f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:44:31 compute-0 nova_compute[253538]: 2025-11-25 08:44:31.010 253542 DEBUG nova.compute.manager [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received event network-vif-unplugged-82c25621-80b3-4927-957d-aec0a653a4f8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:44:31 compute-0 nova_compute[253538]: 2025-11-25 08:44:31.010 253542 DEBUG nova.compute.manager [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received event network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:44:31 compute-0 nova_compute[253538]: 2025-11-25 08:44:31.011 253542 DEBUG oslo_concurrency.lockutils [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:44:31 compute-0 nova_compute[253538]: 2025-11-25 08:44:31.011 253542 DEBUG oslo_concurrency.lockutils [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:44:31 compute-0 nova_compute[253538]: 2025-11-25 08:44:31.011 253542 DEBUG oslo_concurrency.lockutils [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:44:31 compute-0 nova_compute[253538]: 2025-11-25 08:44:31.011 253542 DEBUG nova.compute.manager [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] No waiting events found dispatching network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:44:31 compute-0 nova_compute[253538]: 2025-11-25 08:44:31.012 253542 WARNING nova.compute.manager [req-5727c05e-e076-4661-8555-1cbf30cbaa6d req-899293d3-defa-47d4-ba4d-37166c14bd4c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received unexpected event network-vif-plugged-82c25621-80b3-4927-957d-aec0a653a4f8 for instance with vm_state active and task_state deleting.
Nov 25 08:44:31 compute-0 nova_compute[253538]: 2025-11-25 08:44:31.024 253542 INFO nova.compute.manager [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Took 1.44 seconds to destroy the instance on the hypervisor.
Nov 25 08:44:31 compute-0 nova_compute[253538]: 2025-11-25 08:44:31.024 253542 DEBUG oslo.service.loopingcall [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:44:31 compute-0 nova_compute[253538]: 2025-11-25 08:44:31.025 253542 DEBUG nova.compute.manager [-] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:44:31 compute-0 nova_compute[253538]: 2025-11-25 08:44:31.025 253542 DEBUG nova.network.neutron [-] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:44:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.111 253542 DEBUG nova.network.neutron [-] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.132 253542 INFO nova.compute.manager [-] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Took 1.11 seconds to deallocate network for instance.
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.149 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.178 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.179 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:44:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.225 253542 DEBUG oslo_concurrency.processutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:44:32 compute-0 ceph-mon[75015]: pgmap v1890: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 25 08:44:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:44:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/679818289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.654 253542 DEBUG oslo_concurrency.processutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.660 253542 DEBUG nova.compute.provider_tree [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.675 253542 DEBUG nova.scheduler.client.report [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.693 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.695 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.695 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.696 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.696 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.745 253542 INFO nova.scheduler.client.report [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Deleted allocations for instance 65e7119e-238b-426c-9e9d-67b4c38c61b7
Nov 25 08:44:32 compute-0 nova_compute[253538]: 2025-11-25 08:44:32.824 253542 DEBUG oslo_concurrency.lockutils [None req-6265c21e-f30f-4d24-8cb1-a928c8cdba8d 9e24446c871b4d7ca816a3833d05daa9 95ca8ccb4dca4f58b3896b0533bab879 - - default default] Lock "65e7119e-238b-426c-9e9d-67b4c38c61b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:44:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:44:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4045952111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:44:33 compute-0 nova_compute[253538]: 2025-11-25 08:44:33.108 253542 DEBUG nova.compute.manager [req-1692b5a5-88f7-4ff5-b9d9-d25ad729eba8 req-942da066-845e-4fc8-8b48-1f0cfc462b83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Received event network-vif-deleted-82c25621-80b3-4927-957d-aec0a653a4f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:44:33 compute-0 nova_compute[253538]: 2025-11-25 08:44:33.110 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:44:33 compute-0 nova_compute[253538]: 2025-11-25 08:44:33.301 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:44:33 compute-0 nova_compute[253538]: 2025-11-25 08:44:33.302 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3939MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:44:33 compute-0 nova_compute[253538]: 2025-11-25 08:44:33.303 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:44:33 compute-0 nova_compute[253538]: 2025-11-25 08:44:33.303 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:44:33 compute-0 nova_compute[253538]: 2025-11-25 08:44:33.350 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:44:33 compute-0 nova_compute[253538]: 2025-11-25 08:44:33.352 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:44:33 compute-0 nova_compute[253538]: 2025-11-25 08:44:33.385 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:44:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1891: 321 pgs: 321 active+clean; 114 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Nov 25 08:44:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/679818289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:44:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4045952111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:44:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:44:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3976193311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:44:33 compute-0 nova_compute[253538]: 2025-11-25 08:44:33.944 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:44:33 compute-0 nova_compute[253538]: 2025-11-25 08:44:33.951 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:44:33 compute-0 nova_compute[253538]: 2025-11-25 08:44:33.966 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:44:33 compute-0 nova_compute[253538]: 2025-11-25 08:44:33.988 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:44:33 compute-0 nova_compute[253538]: 2025-11-25 08:44:33.988 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:44:34 compute-0 ceph-mon[75015]: pgmap v1891: 321 pgs: 321 active+clean; 114 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Nov 25 08:44:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3976193311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:44:34 compute-0 nova_compute[253538]: 2025-11-25 08:44:34.988 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:44:35 compute-0 nova_compute[253538]: 2025-11-25 08:44:35.056 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 936 KiB/s wr, 101 op/s
Nov 25 08:44:35 compute-0 nova_compute[253538]: 2025-11-25 08:44:35.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:44:35 compute-0 nova_compute[253538]: 2025-11-25 08:44:35.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:44:36 compute-0 ceph-mon[75015]: pgmap v1892: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 936 KiB/s wr, 101 op/s
Nov 25 08:44:37 compute-0 nova_compute[253538]: 2025-11-25 08:44:37.151 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:44:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1893: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 08:44:38 compute-0 nova_compute[253538]: 2025-11-25 08:44:38.016 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:38 compute-0 ceph-mon[75015]: pgmap v1893: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 08:44:38 compute-0 nova_compute[253538]: 2025-11-25 08:44:38.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:44:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Nov 25 08:44:40 compute-0 nova_compute[253538]: 2025-11-25 08:44:40.058 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:40 compute-0 ceph-mon[75015]: pgmap v1894: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Nov 25 08:44:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:41.069 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:44:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:41.070 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:44:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:44:41.070 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:44:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 96 op/s
Nov 25 08:44:42 compute-0 nova_compute[253538]: 2025-11-25 08:44:42.153 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:44:43 compute-0 ceph-mon[75015]: pgmap v1895: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 96 op/s
Nov 25 08:44:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 933 KiB/s rd, 1.2 KiB/s wr, 59 op/s
Nov 25 08:44:44 compute-0 ceph-mon[75015]: pgmap v1896: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 933 KiB/s rd, 1.2 KiB/s wr, 59 op/s
Nov 25 08:44:45 compute-0 nova_compute[253538]: 2025-11-25 08:44:45.029 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060270.0267487, 65e7119e-238b-426c-9e9d-67b4c38c61b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:44:45 compute-0 nova_compute[253538]: 2025-11-25 08:44:45.030 253542 INFO nova.compute.manager [-] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] VM Stopped (Lifecycle Event)
Nov 25 08:44:45 compute-0 nova_compute[253538]: 2025-11-25 08:44:45.055 253542 DEBUG nova.compute.manager [None req-9b05518c-4af7-49dd-91d0-672d42ce37ba - - - - - -] [instance: 65e7119e-238b-426c-9e9d-67b4c38c61b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:44:45 compute-0 nova_compute[253538]: 2025-11-25 08:44:45.062 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Nov 25 08:44:46 compute-0 ceph-mon[75015]: pgmap v1897: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Nov 25 08:44:47 compute-0 nova_compute[253538]: 2025-11-25 08:44:47.155 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:44:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:49 compute-0 ceph-mon[75015]: pgmap v1898: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:50 compute-0 nova_compute[253538]: 2025-11-25 08:44:50.065 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:50 compute-0 ceph-mon[75015]: pgmap v1899: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1900: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:51 compute-0 podman[352922]: 2025-11-25 08:44:51.826657305 +0000 UTC m=+0.078016817 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:44:52 compute-0 nova_compute[253538]: 2025-11-25 08:44:52.157 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:44:52 compute-0 ceph-mon[75015]: pgmap v1900: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:44:53
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'images', '.rgw.root', 'vms', 'backups', 'default.rgw.meta']
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:44:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:44:54 compute-0 ceph-mon[75015]: pgmap v1901: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:55 compute-0 nova_compute[253538]: 2025-11-25 08:44:55.068 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:56 compute-0 podman[352942]: 2025-11-25 08:44:56.822234122 +0000 UTC m=+0.072776754 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 08:44:57 compute-0 ceph-mon[75015]: pgmap v1902: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:57 compute-0 nova_compute[253538]: 2025-11-25 08:44:57.159 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:44:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:44:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1903: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:57 compute-0 sudo[352962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:44:57 compute-0 sudo[352962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:44:57 compute-0 sudo[352962]: pam_unix(sudo:session): session closed for user root
Nov 25 08:44:57 compute-0 sudo[352987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:44:57 compute-0 sudo[352987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:44:57 compute-0 sudo[352987]: pam_unix(sudo:session): session closed for user root
Nov 25 08:44:58 compute-0 sudo[353012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:44:58 compute-0 sudo[353012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:44:58 compute-0 sudo[353012]: pam_unix(sudo:session): session closed for user root
Nov 25 08:44:58 compute-0 sudo[353037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 08:44:58 compute-0 sudo[353037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:44:59 compute-0 podman[353135]: 2025-11-25 08:44:59.000086232 +0000 UTC m=+0.342330470 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 08:44:59 compute-0 ceph-mon[75015]: pgmap v1903: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:44:59 compute-0 podman[353135]: 2025-11-25 08:44:59.143737077 +0000 UTC m=+0.485981255 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:44:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:00 compute-0 nova_compute[253538]: 2025-11-25 08:45:00.071 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:00 compute-0 sudo[353037]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:45:00 compute-0 ceph-mon[75015]: pgmap v1904: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:00 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:45:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:45:00 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:45:00 compute-0 sudo[353295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:45:00 compute-0 sudo[353295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:00 compute-0 sudo[353295]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:00 compute-0 sudo[353321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:45:00 compute-0 sudo[353321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:00 compute-0 sudo[353321]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:00 compute-0 sudo[353361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:45:00 compute-0 podman[353320]: 2025-11-25 08:45:00.600833535 +0000 UTC m=+0.106960306 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 25 08:45:00 compute-0 sudo[353361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:00 compute-0 sudo[353361]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:00 compute-0 sudo[353395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:45:00 compute-0 sudo[353395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:01 compute-0 sudo[353395]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:45:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:45:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:45:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:45:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:45:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:45:01 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev cb7b7dc4-3bad-4ac9-9c1c-c9ff2261dfb2 does not exist
Nov 25 08:45:01 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 1ea8571f-f945-4836-9c83-b0d91a87e9e6 does not exist
Nov 25 08:45:01 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 2f7cc1f0-954d-4bf5-8ab1-293ee578109b does not exist
Nov 25 08:45:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:45:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:45:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:45:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:45:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:45:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:45:01 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:45:01 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:45:01 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:45:01 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:45:01 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:45:01 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:45:01 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:45:01 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:45:01 compute-0 sudo[353449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:45:01 compute-0 sudo[353449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:01 compute-0 sudo[353449]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:01 compute-0 sudo[353474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:45:01 compute-0 sudo[353474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:01 compute-0 sudo[353474]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:01 compute-0 sudo[353499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:45:01 compute-0 sudo[353499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:01 compute-0 sudo[353499]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:01 compute-0 sudo[353524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:45:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:01 compute-0 sudo[353524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:01 compute-0 podman[353590]: 2025-11-25 08:45:01.978367565 +0000 UTC m=+0.071173320 container create cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:45:02 compute-0 systemd[1]: Started libpod-conmon-cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d.scope.
Nov 25 08:45:02 compute-0 podman[353590]: 2025-11-25 08:45:01.946453305 +0000 UTC m=+0.039259070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:45:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:45:02 compute-0 podman[353590]: 2025-11-25 08:45:02.090798308 +0000 UTC m=+0.183604073 container init cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 08:45:02 compute-0 podman[353590]: 2025-11-25 08:45:02.09817866 +0000 UTC m=+0.190984375 container start cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 08:45:02 compute-0 systemd[1]: libpod-cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d.scope: Deactivated successfully.
Nov 25 08:45:02 compute-0 priceless_banzai[353606]: 167 167
Nov 25 08:45:02 compute-0 conmon[353606]: conmon cc831d43ed639e14140f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d.scope/container/memory.events
Nov 25 08:45:02 compute-0 podman[353590]: 2025-11-25 08:45:02.107881094 +0000 UTC m=+0.200686809 container attach cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 08:45:02 compute-0 podman[353590]: 2025-11-25 08:45:02.108375497 +0000 UTC m=+0.201181242 container died cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 08:45:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ec8aec0c7c42f370d76ed540009118e8eae8335caafb165bef8ec257db7c2a6-merged.mount: Deactivated successfully.
Nov 25 08:45:02 compute-0 nova_compute[253538]: 2025-11-25 08:45:02.161 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:02 compute-0 podman[353590]: 2025-11-25 08:45:02.172966287 +0000 UTC m=+0.265772002 container remove cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:45:02 compute-0 systemd[1]: libpod-conmon-cc831d43ed639e14140ff5f7896fe6020fd1aaef78370ac601949e9ff1eac84d.scope: Deactivated successfully.
Nov 25 08:45:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:45:02 compute-0 ceph-mon[75015]: pgmap v1905: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:02 compute-0 podman[353630]: 2025-11-25 08:45:02.346577619 +0000 UTC m=+0.046818357 container create 08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 08:45:02 compute-0 systemd[1]: Started libpod-conmon-08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7.scope.
Nov 25 08:45:02 compute-0 podman[353630]: 2025-11-25 08:45:02.324701103 +0000 UTC m=+0.024941861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:45:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6d5e90267e7c69e262fb87213892f955fa4feead9cef4a6391c3757f2a3e8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6d5e90267e7c69e262fb87213892f955fa4feead9cef4a6391c3757f2a3e8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6d5e90267e7c69e262fb87213892f955fa4feead9cef4a6391c3757f2a3e8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6d5e90267e7c69e262fb87213892f955fa4feead9cef4a6391c3757f2a3e8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6d5e90267e7c69e262fb87213892f955fa4feead9cef4a6391c3757f2a3e8b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:45:02 compute-0 podman[353630]: 2025-11-25 08:45:02.466355203 +0000 UTC m=+0.166595971 container init 08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 08:45:02 compute-0 podman[353630]: 2025-11-25 08:45:02.478557086 +0000 UTC m=+0.178797824 container start 08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:45:02 compute-0 podman[353630]: 2025-11-25 08:45:02.487624953 +0000 UTC m=+0.187865741 container attach 08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 08:45:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:03 compute-0 fervent_easley[353647]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:45:03 compute-0 fervent_easley[353647]: --> relative data size: 1.0
Nov 25 08:45:03 compute-0 fervent_easley[353647]: --> All data devices are unavailable
Nov 25 08:45:03 compute-0 systemd[1]: libpod-08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7.scope: Deactivated successfully.
Nov 25 08:45:03 compute-0 systemd[1]: libpod-08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7.scope: Consumed 1.090s CPU time.
Nov 25 08:45:03 compute-0 podman[353630]: 2025-11-25 08:45:03.625293836 +0000 UTC m=+1.325534624 container died 08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:45:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb6d5e90267e7c69e262fb87213892f955fa4feead9cef4a6391c3757f2a3e8b-merged.mount: Deactivated successfully.
Nov 25 08:45:03 compute-0 podman[353630]: 2025-11-25 08:45:03.967301756 +0000 UTC m=+1.667542494 container remove 08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 08:45:03 compute-0 systemd[1]: libpod-conmon-08f75f34ba37be07a950a8b1209de75fef487fb2b7226e40ed0efa08f70f4fc7.scope: Deactivated successfully.
Nov 25 08:45:04 compute-0 sudo[353524]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:45:04 compute-0 sudo[353689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:45:04 compute-0 sudo[353689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:04 compute-0 sudo[353689]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:04 compute-0 sudo[353714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:45:04 compute-0 sudo[353714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:04 compute-0 sudo[353714]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:04 compute-0 sudo[353739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:45:04 compute-0 sudo[353739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:04 compute-0 sudo[353739]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:04 compute-0 sudo[353764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:45:04 compute-0 sudo[353764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:04 compute-0 podman[353831]: 2025-11-25 08:45:04.723998317 +0000 UTC m=+0.042149679 container create 99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 08:45:04 compute-0 systemd[1]: Started libpod-conmon-99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2.scope.
Nov 25 08:45:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:45:04 compute-0 podman[353831]: 2025-11-25 08:45:04.705152143 +0000 UTC m=+0.023303535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:45:04 compute-0 podman[353831]: 2025-11-25 08:45:04.810752932 +0000 UTC m=+0.128904324 container init 99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 08:45:04 compute-0 podman[353831]: 2025-11-25 08:45:04.817257109 +0000 UTC m=+0.135408481 container start 99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 08:45:04 compute-0 podman[353831]: 2025-11-25 08:45:04.820875308 +0000 UTC m=+0.139026710 container attach 99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 08:45:04 compute-0 nice_beaver[353847]: 167 167
Nov 25 08:45:04 compute-0 systemd[1]: libpod-99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2.scope: Deactivated successfully.
Nov 25 08:45:04 compute-0 podman[353831]: 2025-11-25 08:45:04.822904192 +0000 UTC m=+0.141055574 container died 99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 08:45:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ae638707330f2c7c18558eaefb2042e7c04063007e7ab32ce4355c22d7775da-merged.mount: Deactivated successfully.
Nov 25 08:45:04 compute-0 podman[353831]: 2025-11-25 08:45:04.863290064 +0000 UTC m=+0.181441436 container remove 99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 08:45:04 compute-0 systemd[1]: libpod-conmon-99ad72fb87c9c6f9ebc9070d5f07e728ffa6c6f4baa8fa544b705dea7c192db2.scope: Deactivated successfully.
Nov 25 08:45:04 compute-0 ceph-mon[75015]: pgmap v1906: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:05 compute-0 podman[353871]: 2025-11-25 08:45:05.027636292 +0000 UTC m=+0.045346007 container create acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sutherland, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:45:05 compute-0 systemd[1]: Started libpod-conmon-acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49.scope.
Nov 25 08:45:05 compute-0 podman[353871]: 2025-11-25 08:45:05.004057109 +0000 UTC m=+0.021766764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:45:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:45:05 compute-0 nova_compute[253538]: 2025-11-25 08:45:05.113 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75bfb344a9c30b0e24d6e6557d74c59cfb7841f8a3065212bb52b1c1abd2c005/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75bfb344a9c30b0e24d6e6557d74c59cfb7841f8a3065212bb52b1c1abd2c005/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75bfb344a9c30b0e24d6e6557d74c59cfb7841f8a3065212bb52b1c1abd2c005/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75bfb344a9c30b0e24d6e6557d74c59cfb7841f8a3065212bb52b1c1abd2c005/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:45:05 compute-0 podman[353871]: 2025-11-25 08:45:05.141266429 +0000 UTC m=+0.158976424 container init acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 08:45:05 compute-0 podman[353871]: 2025-11-25 08:45:05.150392417 +0000 UTC m=+0.168102092 container start acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:45:05 compute-0 podman[353871]: 2025-11-25 08:45:05.155946378 +0000 UTC m=+0.173656043 container attach acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sutherland, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:45:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:05 compute-0 angry_sutherland[353888]: {
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:     "0": [
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:         {
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "devices": [
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "/dev/loop3"
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             ],
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "lv_name": "ceph_lv0",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "lv_size": "21470642176",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "name": "ceph_lv0",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "tags": {
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.cluster_name": "ceph",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.crush_device_class": "",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.encrypted": "0",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.osd_id": "0",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.type": "block",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.vdo": "0"
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             },
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "type": "block",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "vg_name": "ceph_vg0"
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:         }
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:     ],
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:     "1": [
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:         {
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "devices": [
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "/dev/loop4"
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             ],
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "lv_name": "ceph_lv1",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "lv_size": "21470642176",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "name": "ceph_lv1",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "tags": {
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.cluster_name": "ceph",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.crush_device_class": "",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.encrypted": "0",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.osd_id": "1",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.type": "block",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.vdo": "0"
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             },
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "type": "block",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "vg_name": "ceph_vg1"
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:         }
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:     ],
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:     "2": [
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:         {
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "devices": [
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "/dev/loop5"
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             ],
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "lv_name": "ceph_lv2",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "lv_size": "21470642176",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "name": "ceph_lv2",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "tags": {
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.cluster_name": "ceph",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.crush_device_class": "",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.encrypted": "0",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.osd_id": "2",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.type": "block",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:                 "ceph.vdo": "0"
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             },
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "type": "block",
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:             "vg_name": "ceph_vg2"
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:         }
Nov 25 08:45:05 compute-0 angry_sutherland[353888]:     ]
Nov 25 08:45:05 compute-0 angry_sutherland[353888]: }
Nov 25 08:45:05 compute-0 systemd[1]: libpod-acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49.scope: Deactivated successfully.
Nov 25 08:45:05 compute-0 podman[353871]: 2025-11-25 08:45:05.992863366 +0000 UTC m=+1.010573011 container died acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sutherland, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:45:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-75bfb344a9c30b0e24d6e6557d74c59cfb7841f8a3065212bb52b1c1abd2c005-merged.mount: Deactivated successfully.
Nov 25 08:45:06 compute-0 podman[353871]: 2025-11-25 08:45:06.062992957 +0000 UTC m=+1.080702592 container remove acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sutherland, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:45:06 compute-0 systemd[1]: libpod-conmon-acad2594dc67a008e32c63d855042d56df5c5d0e8e549f8780ce020eebc2bd49.scope: Deactivated successfully.
Nov 25 08:45:06 compute-0 sudo[353764]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:06 compute-0 sudo[353911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:45:06 compute-0 sudo[353911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:06 compute-0 sudo[353911]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:06 compute-0 sudo[353936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:45:06 compute-0 sudo[353936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:06 compute-0 sudo[353936]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:06 compute-0 sudo[353961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:45:06 compute-0 sudo[353961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:06 compute-0 sudo[353961]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:06 compute-0 sudo[353986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:45:06 compute-0 sudo[353986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:06 compute-0 podman[354051]: 2025-11-25 08:45:06.791408267 +0000 UTC m=+0.035401266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:45:06 compute-0 podman[354051]: 2025-11-25 08:45:06.995808057 +0000 UTC m=+0.239801026 container create 3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 25 08:45:07 compute-0 ceph-mon[75015]: pgmap v1907: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:07 compute-0 systemd[1]: Started libpod-conmon-3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8.scope.
Nov 25 08:45:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:45:07 compute-0 podman[354051]: 2025-11-25 08:45:07.134295772 +0000 UTC m=+0.378288821 container init 3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hertz, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 08:45:07 compute-0 podman[354051]: 2025-11-25 08:45:07.143083701 +0000 UTC m=+0.387076660 container start 3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hertz, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 08:45:07 compute-0 stoic_hertz[354067]: 167 167
Nov 25 08:45:07 compute-0 systemd[1]: libpod-3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8.scope: Deactivated successfully.
Nov 25 08:45:07 compute-0 conmon[354067]: conmon 3c603dc69eb7593e015d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8.scope/container/memory.events
Nov 25 08:45:07 compute-0 nova_compute[253538]: 2025-11-25 08:45:07.162 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:07 compute-0 podman[354051]: 2025-11-25 08:45:07.17573075 +0000 UTC m=+0.419723759 container attach 3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 08:45:07 compute-0 podman[354051]: 2025-11-25 08:45:07.176354988 +0000 UTC m=+0.420347977 container died 3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hertz, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 08:45:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ca01bfd5b09f7d66871e288471a81483624665255c2bb9a3ca56df670a17bba-merged.mount: Deactivated successfully.
Nov 25 08:45:07 compute-0 podman[354051]: 2025-11-25 08:45:07.263301368 +0000 UTC m=+0.507294337 container remove 3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hertz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 08:45:07 compute-0 systemd[1]: libpod-conmon-3c603dc69eb7593e015de9ccacb2ef046f77b2839427b6de20087171e1e0d6a8.scope: Deactivated successfully.
Nov 25 08:45:07 compute-0 podman[354092]: 2025-11-25 08:45:07.498391203 +0000 UTC m=+0.046968380 container create 7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:45:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:07 compute-0 systemd[1]: Started libpod-conmon-7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6.scope.
Nov 25 08:45:07 compute-0 podman[354092]: 2025-11-25 08:45:07.478603474 +0000 UTC m=+0.027180651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:45:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc452f7ab2563e9c7c830661297e87a5909c2ae6b46f35ea5491a7b86ac382/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc452f7ab2563e9c7c830661297e87a5909c2ae6b46f35ea5491a7b86ac382/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc452f7ab2563e9c7c830661297e87a5909c2ae6b46f35ea5491a7b86ac382/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc452f7ab2563e9c7c830661297e87a5909c2ae6b46f35ea5491a7b86ac382/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:45:07 compute-0 podman[354092]: 2025-11-25 08:45:07.605088872 +0000 UTC m=+0.153666099 container init 7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 08:45:07 compute-0 podman[354092]: 2025-11-25 08:45:07.616167674 +0000 UTC m=+0.164744821 container start 7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:45:07 compute-0 podman[354092]: 2025-11-25 08:45:07.622420024 +0000 UTC m=+0.170997201 container attach 7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 08:45:08 compute-0 heuristic_colden[354109]: {
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:         "osd_id": 1,
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:         "type": "bluestore"
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:     },
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:         "osd_id": 2,
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:         "type": "bluestore"
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:     },
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:         "osd_id": 0,
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:         "type": "bluestore"
Nov 25 08:45:08 compute-0 heuristic_colden[354109]:     }
Nov 25 08:45:08 compute-0 heuristic_colden[354109]: }
Nov 25 08:45:08 compute-0 systemd[1]: libpod-7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6.scope: Deactivated successfully.
Nov 25 08:45:08 compute-0 podman[354092]: 2025-11-25 08:45:08.676026346 +0000 UTC m=+1.224603523 container died 7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 08:45:08 compute-0 systemd[1]: libpod-7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6.scope: Consumed 1.064s CPU time.
Nov 25 08:45:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-28dc452f7ab2563e9c7c830661297e87a5909c2ae6b46f35ea5491a7b86ac382-merged.mount: Deactivated successfully.
Nov 25 08:45:08 compute-0 podman[354092]: 2025-11-25 08:45:08.735446756 +0000 UTC m=+1.284023883 container remove 7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:45:08 compute-0 systemd[1]: libpod-conmon-7a806559cc37a49877f10a1e0c6c3ab378f0e9a2d756a924ed1406cb4dfc56e6.scope: Deactivated successfully.
Nov 25 08:45:08 compute-0 sudo[353986]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:45:08 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:45:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:45:08 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:45:08 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev f13e6c5f-67c6-4f8d-98cc-deb4a1e43135 does not exist
Nov 25 08:45:08 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 7ce54203-8e03-4c5a-908b-e9e5541e7af3 does not exist
Nov 25 08:45:08 compute-0 ovn_controller[152859]: 2025-11-25T08:45:08Z|00981|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Nov 25 08:45:08 compute-0 sudo[354154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:45:08 compute-0 sudo[354154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:08 compute-0 sudo[354154]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:08 compute-0 sudo[354179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:45:08 compute-0 sudo[354179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:45:08 compute-0 sudo[354179]: pam_unix(sudo:session): session closed for user root
Nov 25 08:45:09 compute-0 ceph-mon[75015]: pgmap v1908: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:09 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:45:09 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:45:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1909: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:10 compute-0 nova_compute[253538]: 2025-11-25 08:45:10.158 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:11 compute-0 ceph-mon[75015]: pgmap v1909: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:12 compute-0 nova_compute[253538]: 2025-11-25 08:45:12.164 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:45:13 compute-0 ceph-mon[75015]: pgmap v1910: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:15 compute-0 ceph-mon[75015]: pgmap v1911: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:15 compute-0 nova_compute[253538]: 2025-11-25 08:45:15.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:17 compute-0 ceph-mon[75015]: pgmap v1912: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:17 compute-0 nova_compute[253538]: 2025-11-25 08:45:17.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:45:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:19 compute-0 ceph-mon[75015]: pgmap v1913: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:19 compute-0 nova_compute[253538]: 2025-11-25 08:45:19.556 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "5bcfbf47-fa2f-4580-9ace-f946f5683844" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:19 compute-0 nova_compute[253538]: 2025-11-25 08:45:19.557 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:19 compute-0 nova_compute[253538]: 2025-11-25 08:45:19.591 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:45:19 compute-0 nova_compute[253538]: 2025-11-25 08:45:19.682 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:19 compute-0 nova_compute[253538]: 2025-11-25 08:45:19.683 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:19 compute-0 nova_compute[253538]: 2025-11-25 08:45:19.696 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:45:19 compute-0 nova_compute[253538]: 2025-11-25 08:45:19.697 253542 INFO nova.compute.claims [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:45:19 compute-0 nova_compute[253538]: 2025-11-25 08:45:19.783 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.199 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:45:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/407019856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.270 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.280 253542 DEBUG nova.compute.provider_tree [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.303 253542 DEBUG nova.scheduler.client.report [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.339 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.403 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "dddd2e74-d0f8-4cdf-9b9c-58f4a6307432" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.404 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "dddd2e74-d0f8-4cdf-9b9c-58f4a6307432" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.414 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "dddd2e74-d0f8-4cdf-9b9c-58f4a6307432" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.415 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.464 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.464 253542 DEBUG nova.network.neutron [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.486 253542 INFO nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.516 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.595 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.596 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.597 253542 INFO nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Creating image(s)
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.621 253542 DEBUG nova.storage.rbd_utils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] rbd image 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.646 253542 DEBUG nova.storage.rbd_utils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] rbd image 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.678 253542 DEBUG nova.storage.rbd_utils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] rbd image 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.684 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.777 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.779 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.779 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.780 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.803 253542 DEBUG nova.storage.rbd_utils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] rbd image 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.807 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:20 compute-0 nova_compute[253538]: 2025-11-25 08:45:20.902 253542 DEBUG nova.policy [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0f86af4601044b11b6ad679db30c1c0a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f476c88f03140fb8498fc62d3d783b0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:45:21 compute-0 ceph-mon[75015]: pgmap v1914: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/407019856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:45:21 compute-0 nova_compute[253538]: 2025-11-25 08:45:21.159 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:21 compute-0 nova_compute[253538]: 2025-11-25 08:45:21.226 253542 DEBUG nova.storage.rbd_utils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] resizing rbd image 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:45:21 compute-0 nova_compute[253538]: 2025-11-25 08:45:21.323 253542 DEBUG nova.objects.instance [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lazy-loading 'migration_context' on Instance uuid 5bcfbf47-fa2f-4580-9ace-f946f5683844 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:45:21 compute-0 nova_compute[253538]: 2025-11-25 08:45:21.334 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:45:21 compute-0 nova_compute[253538]: 2025-11-25 08:45:21.335 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Ensure instance console log exists: /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:45:21 compute-0 nova_compute[253538]: 2025-11-25 08:45:21.335 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:21 compute-0 nova_compute[253538]: 2025-11-25 08:45:21.336 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:21 compute-0 nova_compute[253538]: 2025-11-25 08:45:21.336 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:21 compute-0 nova_compute[253538]: 2025-11-25 08:45:21.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:45:21 compute-0 nova_compute[253538]: 2025-11-25 08:45:21.661 253542 DEBUG nova.network.neutron [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Successfully created port: 284bf2a3-cc8c-4631-81d0-b2eda45727f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.125096) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060322125171, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1887, "num_deletes": 262, "total_data_size": 2869941, "memory_usage": 2907920, "flush_reason": "Manual Compaction"}
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060322142410, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2804690, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38241, "largest_seqno": 40127, "table_properties": {"data_size": 2795971, "index_size": 5405, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18382, "raw_average_key_size": 20, "raw_value_size": 2778369, "raw_average_value_size": 3135, "num_data_blocks": 238, "num_entries": 886, "num_filter_entries": 886, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060157, "oldest_key_time": 1764060157, "file_creation_time": 1764060322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 17361 microseconds, and 7098 cpu microseconds.
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.142463) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2804690 bytes OK
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.142487) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.144447) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.144459) EVENT_LOG_v1 {"time_micros": 1764060322144454, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.144478) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2861786, prev total WAL file size 2861786, number of live WAL files 2.
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.145462) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2738KB)], [86(8423KB)]
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060322145513, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11430662, "oldest_snapshot_seqno": -1}
Nov 25 08:45:22 compute-0 nova_compute[253538]: 2025-11-25 08:45:22.167 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6474 keys, 9802179 bytes, temperature: kUnknown
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060322202281, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9802179, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9757783, "index_size": 27134, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164951, "raw_average_key_size": 25, "raw_value_size": 9640545, "raw_average_value_size": 1489, "num_data_blocks": 1096, "num_entries": 6474, "num_filter_entries": 6474, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.202556) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9802179 bytes
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.203896) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.9 rd, 172.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 8.2 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 7007, records dropped: 533 output_compression: NoCompression
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.203910) EVENT_LOG_v1 {"time_micros": 1764060322203903, "job": 50, "event": "compaction_finished", "compaction_time_micros": 56887, "compaction_time_cpu_micros": 25683, "output_level": 6, "num_output_files": 1, "total_output_size": 9802179, "num_input_records": 7007, "num_output_records": 6474, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060322204405, "job": 50, "event": "table_file_deletion", "file_number": 88}
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060322206000, "job": 50, "event": "table_file_deletion", "file_number": 86}
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.145398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.206084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.206090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.206092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.206094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:45:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:45:22.206096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:45:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:45:22 compute-0 nova_compute[253538]: 2025-11-25 08:45:22.606 253542 DEBUG nova.network.neutron [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Successfully updated port: 284bf2a3-cc8c-4631-81d0-b2eda45727f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:45:22 compute-0 nova_compute[253538]: 2025-11-25 08:45:22.621 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "refresh_cache-5bcfbf47-fa2f-4580-9ace-f946f5683844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:45:22 compute-0 nova_compute[253538]: 2025-11-25 08:45:22.621 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquired lock "refresh_cache-5bcfbf47-fa2f-4580-9ace-f946f5683844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:45:22 compute-0 nova_compute[253538]: 2025-11-25 08:45:22.621 253542 DEBUG nova.network.neutron [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:45:22 compute-0 nova_compute[253538]: 2025-11-25 08:45:22.728 253542 DEBUG nova.compute.manager [req-e19cf46c-3168-4f19-88b9-5e71d9add750 req-9c891613-0f2d-4423-ba35-a35b4c1d1012 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received event network-changed-284bf2a3-cc8c-4631-81d0-b2eda45727f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:45:22 compute-0 nova_compute[253538]: 2025-11-25 08:45:22.729 253542 DEBUG nova.compute.manager [req-e19cf46c-3168-4f19-88b9-5e71d9add750 req-9c891613-0f2d-4423-ba35-a35b4c1d1012 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Refreshing instance network info cache due to event network-changed-284bf2a3-cc8c-4631-81d0-b2eda45727f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:45:22 compute-0 nova_compute[253538]: 2025-11-25 08:45:22.729 253542 DEBUG oslo_concurrency.lockutils [req-e19cf46c-3168-4f19-88b9-5e71d9add750 req-9c891613-0f2d-4423-ba35-a35b4c1d1012 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5bcfbf47-fa2f-4580-9ace-f946f5683844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:45:22 compute-0 podman[354393]: 2025-11-25 08:45:22.815613906 +0000 UTC m=+0.060429708 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:45:23 compute-0 nova_compute[253538]: 2025-11-25 08:45:23.046 253542 DEBUG nova.network.neutron [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:45:23 compute-0 ceph-mon[75015]: pgmap v1915: 321 pgs: 321 active+clean; 88 MiB data, 677 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:45:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:45:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:45:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:45:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:45:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:45:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:45:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 95 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s wr, 0 op/s
Nov 25 08:45:24 compute-0 ceph-mon[75015]: pgmap v1916: 321 pgs: 321 active+clean; 95 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s wr, 0 op/s
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.481 253542 DEBUG nova.network.neutron [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Updating instance_info_cache with network_info: [{"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.499 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Releasing lock "refresh_cache-5bcfbf47-fa2f-4580-9ace-f946f5683844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.499 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Instance network_info: |[{"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.500 253542 DEBUG oslo_concurrency.lockutils [req-e19cf46c-3168-4f19-88b9-5e71d9add750 req-9c891613-0f2d-4423-ba35-a35b4c1d1012 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5bcfbf47-fa2f-4580-9ace-f946f5683844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.500 253542 DEBUG nova.network.neutron [req-e19cf46c-3168-4f19-88b9-5e71d9add750 req-9c891613-0f2d-4423-ba35-a35b4c1d1012 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Refreshing network info cache for port 284bf2a3-cc8c-4631-81d0-b2eda45727f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.502 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Start _get_guest_xml network_info=[{"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.507 253542 WARNING nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.516 253542 DEBUG nova.virt.libvirt.host [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.516 253542 DEBUG nova.virt.libvirt.host [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.519 253542 DEBUG nova.virt.libvirt.host [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.519 253542 DEBUG nova.virt.libvirt.host [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.519 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.519 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.520 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.520 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.520 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.520 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.521 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.521 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.521 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.521 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.521 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.521 253542 DEBUG nova.virt.hardware [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.524 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:45:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3619621212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.973 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:24 compute-0 nova_compute[253538]: 2025-11-25 08:45:24.996 253542 DEBUG nova.storage.rbd_utils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] rbd image 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.000 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3619621212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:45:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4109364491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.543 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 134 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.545 253542 DEBUG nova.virt.libvirt.vif [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1939536355',display_name='tempest-ServerGroupTestJSON-server-1939536355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1939536355',id=102,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f476c88f03140fb8498fc62d3d783b0',ramdisk_id='',reservation_id='r-qblg0f2q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-166485107',owner_user_name='tempest-ServerGroupTestJSON-166485107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:45:20Z,user_data=None,user_id='0f86af4601044b11b6ad679db30c1c0a',uuid=5bcfbf47-fa2f-4580-9ace-f946f5683844,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.546 253542 DEBUG nova.network.os_vif_util [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Converting VIF {"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.547 253542 DEBUG nova.network.os_vif_util [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=284bf2a3-cc8c-4631-81d0-b2eda45727f9,network=Network(e802e356-a113-4a9a-a825-0a16ca2eb73c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap284bf2a3-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.548 253542 DEBUG nova.objects.instance [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5bcfbf47-fa2f-4580-9ace-f946f5683844 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.563 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:45:25 compute-0 nova_compute[253538]:   <uuid>5bcfbf47-fa2f-4580-9ace-f946f5683844</uuid>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   <name>instance-00000066</name>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerGroupTestJSON-server-1939536355</nova:name>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:45:24</nova:creationTime>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:45:25 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:45:25 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:45:25 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:45:25 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:45:25 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:45:25 compute-0 nova_compute[253538]:         <nova:user uuid="0f86af4601044b11b6ad679db30c1c0a">tempest-ServerGroupTestJSON-166485107-project-member</nova:user>
Nov 25 08:45:25 compute-0 nova_compute[253538]:         <nova:project uuid="5f476c88f03140fb8498fc62d3d783b0">tempest-ServerGroupTestJSON-166485107</nova:project>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:45:25 compute-0 nova_compute[253538]:         <nova:port uuid="284bf2a3-cc8c-4631-81d0-b2eda45727f9">
Nov 25 08:45:25 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <system>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <entry name="serial">5bcfbf47-fa2f-4580-9ace-f946f5683844</entry>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <entry name="uuid">5bcfbf47-fa2f-4580-9ace-f946f5683844</entry>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     </system>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   <os>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   </os>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   <features>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   </features>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/5bcfbf47-fa2f-4580-9ace-f946f5683844_disk">
Nov 25 08:45:25 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       </source>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:45:25 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/5bcfbf47-fa2f-4580-9ace-f946f5683844_disk.config">
Nov 25 08:45:25 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       </source>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:45:25 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:22:ee:87"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <target dev="tap284bf2a3-cc"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844/console.log" append="off"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <video>
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     </video>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:45:25 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:45:25 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:45:25 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:45:25 compute-0 nova_compute[253538]: </domain>
Nov 25 08:45:25 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.564 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Preparing to wait for external event network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.564 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.565 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.565 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.566 253542 DEBUG nova.virt.libvirt.vif [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1939536355',display_name='tempest-ServerGroupTestJSON-server-1939536355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1939536355',id=102,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f476c88f03140fb8498fc62d3d783b0',ramdisk_id='',reservation_id='r-qblg0f2q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-166485107',owner_user_name='tempest-ServerGroupTestJSON-166485107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:45:20Z,user_data=None,user_id='0f86af4601044b11b6ad679db30c1c0a',uuid=5bcfbf47-fa2f-4580-9ace-f946f5683844,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.566 253542 DEBUG nova.network.os_vif_util [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Converting VIF {"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.567 253542 DEBUG nova.network.os_vif_util [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=284bf2a3-cc8c-4631-81d0-b2eda45727f9,network=Network(e802e356-a113-4a9a-a825-0a16ca2eb73c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap284bf2a3-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.567 253542 DEBUG os_vif [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=284bf2a3-cc8c-4631-81d0-b2eda45727f9,network=Network(e802e356-a113-4a9a-a825-0a16ca2eb73c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap284bf2a3-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.568 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.568 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.569 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.573 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.573 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap284bf2a3-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.574 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap284bf2a3-cc, col_values=(('external_ids', {'iface-id': '284bf2a3-cc8c-4631-81d0-b2eda45727f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:ee:87', 'vm-uuid': '5bcfbf47-fa2f-4580-9ace-f946f5683844'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.611 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:25 compute-0 NetworkManager[48915]: <info>  [1764060325.6125] manager: (tap284bf2a3-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.616 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.618 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.619 253542 INFO os_vif [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=284bf2a3-cc8c-4631-81d0-b2eda45727f9,network=Network(e802e356-a113-4a9a-a825-0a16ca2eb73c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap284bf2a3-cc')
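Annotation: the `AddPortCommand`/`DbSetCommand` entries above show os_vif adding `tap284bf2a3-cc` to `br-int` and stamping the Interface record with Neutron metadata. A minimal sketch of how the logged values relate to the port UUID; the `[:11]` truncation for the tap name is an assumption inferred from the observed names, not taken from the Nova/os_vif source, and `interface_external_ids` is a hypothetical helper.

```python
# Illustration only (not the os_vif implementation). The external_ids keys
# are copied from the DbSetCommand logged above; the tap-name truncation
# rule is inferred from "284bf2a3-cc8c-..." -> "tap284bf2a3-cc".

def tap_device_name(port_id: str) -> str:
    # "tap" + first 11 characters of the Neutron port UUID.
    return "tap" + port_id[:11]

def interface_external_ids(port_id: str, mac: str, vm_uuid: str) -> dict:
    # Keys as they appear on the OVS Interface record in the log.
    return {
        "iface-id": port_id,
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": vm_uuid,
    }

print(tap_device_name("284bf2a3-cc8c-4631-81d0-b2eda45727f9"))  # tap284bf2a3-cc
```

OVN later matches `external_ids:iface-id` against its logical port to claim the binding, which is why the same UUID reappears in the ovn-controller "Claiming lport" lines below.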
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.671 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.672 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.672 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] No VIF found with MAC fa:16:3e:22:ee:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.672 253542 INFO nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Using config drive
Nov 25 08:45:25 compute-0 nova_compute[253538]: 2025-11-25 08:45:25.696 253542 DEBUG nova.storage.rbd_utils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] rbd image 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:45:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4109364491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:45:26 compute-0 ceph-mon[75015]: pgmap v1917: 321 pgs: 321 active+clean; 134 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 08:45:26 compute-0 nova_compute[253538]: 2025-11-25 08:45:26.265 253542 INFO nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Creating config drive at /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844/disk.config
Nov 25 08:45:26 compute-0 nova_compute[253538]: 2025-11-25 08:45:26.276 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwss7npke execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:26 compute-0 nova_compute[253538]: 2025-11-25 08:45:26.430 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwss7npke" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:26 compute-0 nova_compute[253538]: 2025-11-25 08:45:26.456 253542 DEBUG nova.storage.rbd_utils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] rbd image 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:45:26 compute-0 nova_compute[253538]: 2025-11-25 08:45:26.459 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844/disk.config 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:26 compute-0 nova_compute[253538]: 2025-11-25 08:45:26.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:45:26 compute-0 nova_compute[253538]: 2025-11-25 08:45:26.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:45:26 compute-0 nova_compute[253538]: 2025-11-25 08:45:26.622 253542 DEBUG oslo_concurrency.processutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844/disk.config 5bcfbf47-fa2f-4580-9ace-f946f5683844_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:26 compute-0 nova_compute[253538]: 2025-11-25 08:45:26.623 253542 INFO nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Deleting local config drive /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844/disk.config because it was imported into RBD.
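Annotation: the sequence above is the RBD-backed config-drive flow: Nova builds the ISO locally with `mkisofs` (volume label `config-2`), imports it into the Ceph `vms` pool as `<instance_uuid>_disk.config`, then deletes the local copy. A sketch of the naming and the import argv, both transcribed from the log; the helper names are hypothetical.

```python
# Sketch based on the commands logged by oslo_concurrency.processutils above;
# not the Nova source. Helper names are made up for illustration.

def config_drive_rbd_image(instance_uuid: str) -> str:
    # RBD image name used for the config drive, e.g.
    # "5bcfbf47-..._disk.config" in the "vms" pool.
    return f"{instance_uuid}_disk.config"

def rbd_import_cmd(pool: str, local_path: str, image: str,
                   ceph_user: str = "openstack",
                   conf: str = "/etc/ceph/ceph.conf") -> list:
    # Mirrors the "rbd import" argv seen in the log.
    return ["rbd", "import", "--pool", pool, local_path, image,
            "--image-format=2", "--id", ceph_user, "--conf", conf]
```

The earlier "rbd image ... does not exist" DEBUG lines are the pre-import existence checks, not errors.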
Nov 25 08:45:26 compute-0 kernel: tap284bf2a3-cc: entered promiscuous mode
Nov 25 08:45:26 compute-0 NetworkManager[48915]: <info>  [1764060326.6917] manager: (tap284bf2a3-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/401)
Nov 25 08:45:26 compute-0 ovn_controller[152859]: 2025-11-25T08:45:26Z|00982|binding|INFO|Claiming lport 284bf2a3-cc8c-4631-81d0-b2eda45727f9 for this chassis.
Nov 25 08:45:26 compute-0 ovn_controller[152859]: 2025-11-25T08:45:26Z|00983|binding|INFO|284bf2a3-cc8c-4631-81d0-b2eda45727f9: Claiming fa:16:3e:22:ee:87 10.100.0.13
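Annotation: the two ovn-controller `binding|INFO` lines above record the chassis claiming the logical port and its MAC/IP. A hypothetical parsing helper for pulling those fields out of such lines, tested against the message text from the log.

```python
import re

# Hypothetical log-scraping sketch, not an OVN tool. The sample line is
# copied from the ovn_controller entry above.
line = ("2025-11-25T08:45:26Z|00983|binding|INFO|"
        "284bf2a3-cc8c-4631-81d0-b2eda45727f9: "
        "Claiming fa:16:3e:22:ee:87 10.100.0.13")

CLAIM_RE = re.compile(
    r"\|(?P<lport>[0-9a-f-]{36}): Claiming "
    r"(?P<mac>(?:[0-9a-f]{2}:){5}[0-9a-f]{2}) (?P<ip>[\d.]+)")

m = CLAIM_RE.search(line)
print(m.groupdict() if m else "no match")
```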
Nov 25 08:45:26 compute-0 systemd-udevd[354545]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:45:26 compute-0 nova_compute[253538]: 2025-11-25 08:45:26.743 253542 DEBUG nova.network.neutron [req-e19cf46c-3168-4f19-88b9-5e71d9add750 req-9c891613-0f2d-4423-ba35-a35b4c1d1012 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Updated VIF entry in instance network info cache for port 284bf2a3-cc8c-4631-81d0-b2eda45727f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:45:26 compute-0 nova_compute[253538]: 2025-11-25 08:45:26.744 253542 DEBUG nova.network.neutron [req-e19cf46c-3168-4f19-88b9-5e71d9add750 req-9c891613-0f2d-4423-ba35-a35b4c1d1012 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Updating instance_info_cache with network_info: [{"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
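Annotation: the `instance_info_cache` update above carries the full VIF structure as JSON. A minimal sketch of walking that structure to recover the fixed IP and MTU; the snippet below is a trimmed copy of the logged entry, keeping only the fields the sketch touches.

```python
import json

# Trimmed excerpt of the network_info entry logged above (structure and
# values copied from the log; unrelated fields omitted).
vif = json.loads("""{
  "id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9",
  "address": "fa:16:3e:22:ee:87",
  "network": {
    "subnets": [{"cidr": "10.100.0.0/28",
                 "ips": [{"address": "10.100.0.13", "type": "fixed"}]}],
    "meta": {"mtu": 1442}
  }
}""")

# Collect fixed IPs across all subnets and read the network MTU.
fixed_ips = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"] if ip["type"] == "fixed"]
mtu = vif["network"]["meta"]["mtu"]
print(fixed_ips, mtu)  # ['10.100.0.13'] 1442
```

Note `"active": false` in the cached entry: the port is plugged but OVN has not yet reported it up; the Southbound "Setting lport ... up" line appears further down.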
Nov 25 08:45:26 compute-0 nova_compute[253538]: 2025-11-25 08:45:26.746 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:26 compute-0 nova_compute[253538]: 2025-11-25 08:45:26.751 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:26 compute-0 NetworkManager[48915]: <info>  [1764060326.7578] device (tap284bf2a3-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:45:26 compute-0 NetworkManager[48915]: <info>  [1764060326.7585] device (tap284bf2a3-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:45:26 compute-0 nova_compute[253538]: 2025-11-25 08:45:26.767 253542 DEBUG oslo_concurrency.lockutils [req-e19cf46c-3168-4f19-88b9-5e71d9add750 req-9c891613-0f2d-4423-ba35-a35b4c1d1012 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5bcfbf47-fa2f-4580-9ace-f946f5683844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:45:26 compute-0 systemd-machined[215790]: New machine qemu-125-instance-00000066.
Nov 25 08:45:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.797 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:ee:87 10.100.0.13'], port_security=['fa:16:3e:22:ee:87 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5bcfbf47-fa2f-4580-9ace-f946f5683844', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e802e356-a113-4a9a-a825-0a16ca2eb73c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f476c88f03140fb8498fc62d3d783b0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fe50aa03-4d4a-43b6-a9bc-79cbda787ef2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe5d4171-3312-481d-8e49-f074dc7ca346, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=284bf2a3-cc8c-4631-81d0-b2eda45727f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:45:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.799 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 284bf2a3-cc8c-4631-81d0-b2eda45727f9 in datapath e802e356-a113-4a9a-a825-0a16ca2eb73c bound to our chassis
Nov 25 08:45:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.800 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e802e356-a113-4a9a-a825-0a16ca2eb73c
Nov 25 08:45:26 compute-0 systemd[1]: Started Virtual Machine qemu-125-instance-00000066.
Nov 25 08:45:26 compute-0 ovn_controller[152859]: 2025-11-25T08:45:26Z|00984|binding|INFO|Setting lport 284bf2a3-cc8c-4631-81d0-b2eda45727f9 ovn-installed in OVS
Nov 25 08:45:26 compute-0 ovn_controller[152859]: 2025-11-25T08:45:26Z|00985|binding|INFO|Setting lport 284bf2a3-cc8c-4631-81d0-b2eda45727f9 up in Southbound
Nov 25 08:45:26 compute-0 nova_compute[253538]: 2025-11-25 08:45:26.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.817 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dc4d9f2d-6c00-4a53-87b6-f7e9b778ce2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.818 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape802e356-a1 in ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
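Annotation: the metadata agent is provisioning a per-network namespace with a veth pair, one end on `br-int` and one end inside the namespace. A sketch of the naming pattern; the `[:10]` slicing is an assumption inferred purely from the observed names (`ovnmeta-e802e356-...`, `tape802e356-a0`/`-a1`), not from the Neutron source.

```python
# Naming pattern inferred from the log lines above; illustration only.
NET_ID = "e802e356-a113-4a9a-a825-0a16ca2eb73c"

namespace = f"ovnmeta-{NET_ID}"       # per-network metadata namespace
veth_outside = f"tap{NET_ID[:10]}0"   # end plugged into br-int
veth_inside = f"tap{NET_ID[:10]}1"    # end moved into the namespace

print(namespace, veth_outside, veth_inside)
```

This is why the later transaction adds `tape802e356-a0` to `br-int` with `iface-id` set, while `tape802e356-a1` shows up in the netlink dumps targeted at the `ovnmeta-...` namespace.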
Nov 25 08:45:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.824 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape802e356-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:45:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.824 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[39e5ba6e-be19-4d37-9da9-8f6e08187de4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.827 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3e22bd54-49ed-4b50-a8fa-e35a4535cb64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.845 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[0d50647c-0ad7-4b52-b865-da5bd1de18de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.873 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9bae36b3-8247-4c90-8e1d-5c146ac57acd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.906 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[971ee084-443f-4917-b94e-db84445d6816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.913 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e9660e-70d4-4c3d-9cef-e0f64261e3da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:26 compute-0 NetworkManager[48915]: <info>  [1764060326.9154] manager: (tape802e356-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/402)
Nov 25 08:45:26 compute-0 systemd-udevd[354549]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:45:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.946 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4a113db9-9d31-4c5d-b0ed-de86720bfc1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.950 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[913a6ea9-ad13-4754-b8ca-2cdc3f3e8bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:26 compute-0 podman[354553]: 2025-11-25 08:45:26.96342408 +0000 UTC m=+0.101227262 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:45:26 compute-0 NetworkManager[48915]: <info>  [1764060326.9695] device (tape802e356-a0): carrier: link connected
Nov 25 08:45:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.973 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[14a124ba-26da-45f6-9cc4-b53b9370fb97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:26.988 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a3497022-6cc5-4515-a86f-b612e269d09a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape802e356-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:01:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568156, 'reachable_time': 40673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354601, 'error': None, 'target': 'ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.000 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9891c253-ebfa-469b-8ecd-e4529aca95d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:192'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568156, 'tstamp': 568156}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354602, 'error': None, 'target': 'ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.014 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b442ab71-f3a0-419e-8714-f7b5a9196904]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape802e356-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:01:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568156, 'reachable_time': 40673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354603, 'error': None, 'target': 'ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.051 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7ee884-7cd3-465a-a765-7c9f4d5298f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.109 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc2ab8c-5b7d-482e-be53-a460b52d3514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.111 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape802e356-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.112 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.113 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape802e356-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.114 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:27 compute-0 NetworkManager[48915]: <info>  [1764060327.1151] manager: (tape802e356-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Nov 25 08:45:27 compute-0 kernel: tape802e356-a0: entered promiscuous mode
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.116 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.117 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape802e356-a0, col_values=(('external_ids', {'iface-id': 'd883574e-d66a-473c-a9da-0bd8ff7e0311'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.118 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:27 compute-0 ovn_controller[152859]: 2025-11-25T08:45:27Z|00986|binding|INFO|Releasing lport d883574e-d66a-473c-a9da-0bd8ff7e0311 from this chassis (sb_readonly=0)
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.133 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.134 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e802e356-a113-4a9a-a825-0a16ca2eb73c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e802e356-a113-4a9a-a825-0a16ca2eb73c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.135 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1af98fd-69c1-4f6e-aae1-7fdaf1f04c65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.137 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-e802e356-a113-4a9a-a825-0a16ca2eb73c
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/e802e356-a113-4a9a-a825-0a16ca2eb73c.pid.haproxy
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID e802e356-a113-4a9a-a825-0a16ca2eb73c
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:45:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:27.138 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c', 'env', 'PROCESS_TAG=haproxy-e802e356-a113-4a9a-a825-0a16ca2eb73c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e802e356-a113-4a9a-a825-0a16ca2eb73c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.170 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.255 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060327.25455, 5bcfbf47-fa2f-4580-9ace-f946f5683844 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.255 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] VM Started (Lifecycle Event)
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.279 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.284 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060327.2556005, 5bcfbf47-fa2f-4580-9ace-f946f5683844 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.284 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] VM Paused (Lifecycle Event)
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.303 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.306 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.325 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.375 253542 DEBUG nova.compute.manager [req-5127318b-fabf-4939-86ac-c0ff98ddd8df req-8a73912c-155e-4fa3-825c-121b9e3cd9f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received event network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.376 253542 DEBUG oslo_concurrency.lockutils [req-5127318b-fabf-4939-86ac-c0ff98ddd8df req-8a73912c-155e-4fa3-825c-121b9e3cd9f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.377 253542 DEBUG oslo_concurrency.lockutils [req-5127318b-fabf-4939-86ac-c0ff98ddd8df req-8a73912c-155e-4fa3-825c-121b9e3cd9f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.377 253542 DEBUG oslo_concurrency.lockutils [req-5127318b-fabf-4939-86ac-c0ff98ddd8df req-8a73912c-155e-4fa3-825c-121b9e3cd9f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.377 253542 DEBUG nova.compute.manager [req-5127318b-fabf-4939-86ac-c0ff98ddd8df req-8a73912c-155e-4fa3-825c-121b9e3cd9f0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Processing event network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.378 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.382 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060327.382117, 5bcfbf47-fa2f-4580-9ace-f946f5683844 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.382 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] VM Resumed (Lifecycle Event)
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.384 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.388 253542 INFO nova.virt.libvirt.driver [-] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Instance spawned successfully.
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.388 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.405 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.410 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.413 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.414 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.414 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.414 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.415 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.415 253542 DEBUG nova.virt.libvirt.driver [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.445 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.474 253542 INFO nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Took 6.88 seconds to spawn the instance on the hypervisor.
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.474 253542 DEBUG nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:45:27 compute-0 podman[354678]: 2025-11-25 08:45:27.511290082 +0000 UTC m=+0.049867585 container create 8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.540 253542 INFO nova.compute.manager [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Took 7.90 seconds to build instance.
Nov 25 08:45:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 25 08:45:27 compute-0 systemd[1]: Started libpod-conmon-8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b.scope.
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:45:27 compute-0 nova_compute[253538]: 2025-11-25 08:45:27.570 253542 DEBUG oslo_concurrency.lockutils [None req-bc1b3ee1-26cf-4578-8a5c-9ff2f10fe318 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:45:27 compute-0 podman[354678]: 2025-11-25 08:45:27.482948124 +0000 UTC m=+0.021525657 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb33698eadac245ca4ccca1fab19fcc25dc1326559f66c577ff040cd5948b3a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:45:27 compute-0 podman[354678]: 2025-11-25 08:45:27.600288626 +0000 UTC m=+0.138866159 container init 8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 08:45:27 compute-0 podman[354678]: 2025-11-25 08:45:27.60641811 +0000 UTC m=+0.144995613 container start 8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 08:45:27 compute-0 neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c[354693]: [NOTICE]   (354697) : New worker (354699) forked
Nov 25 08:45:27 compute-0 neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c[354693]: [NOTICE]   (354697) : Loading success.
Nov 25 08:45:28 compute-0 ceph-mon[75015]: pgmap v1918: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 25 08:45:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:45:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4155403741' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:45:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:45:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4155403741' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.537 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "5bcfbf47-fa2f-4580-9ace-f946f5683844" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.538 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.538 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.538 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.539 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.540 253542 INFO nova.compute.manager [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Terminating instance
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.541 253542 DEBUG nova.compute.manager [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:45:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 644 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.572 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.572 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.647 253542 DEBUG nova.compute.manager [req-4a9955f7-72b8-46ac-83a6-aef94513abd7 req-df9288e5-0bf1-41ce-b04a-b0c1f1053bbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received event network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.647 253542 DEBUG oslo_concurrency.lockutils [req-4a9955f7-72b8-46ac-83a6-aef94513abd7 req-df9288e5-0bf1-41ce-b04a-b0c1f1053bbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.648 253542 DEBUG oslo_concurrency.lockutils [req-4a9955f7-72b8-46ac-83a6-aef94513abd7 req-df9288e5-0bf1-41ce-b04a-b0c1f1053bbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.648 253542 DEBUG oslo_concurrency.lockutils [req-4a9955f7-72b8-46ac-83a6-aef94513abd7 req-df9288e5-0bf1-41ce-b04a-b0c1f1053bbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.648 253542 DEBUG nova.compute.manager [req-4a9955f7-72b8-46ac-83a6-aef94513abd7 req-df9288e5-0bf1-41ce-b04a-b0c1f1053bbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] No waiting events found dispatching network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.649 253542 WARNING nova.compute.manager [req-4a9955f7-72b8-46ac-83a6-aef94513abd7 req-df9288e5-0bf1-41ce-b04a-b0c1f1053bbe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received unexpected event network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 for instance with vm_state active and task_state deleting.
Nov 25 08:45:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/4155403741' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:45:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/4155403741' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:45:29 compute-0 kernel: tap284bf2a3-cc (unregistering): left promiscuous mode
Nov 25 08:45:29 compute-0 NetworkManager[48915]: <info>  [1764060329.6877] device (tap284bf2a3-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.698 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:29 compute-0 ovn_controller[152859]: 2025-11-25T08:45:29Z|00987|binding|INFO|Releasing lport 284bf2a3-cc8c-4631-81d0-b2eda45727f9 from this chassis (sb_readonly=0)
Nov 25 08:45:29 compute-0 ovn_controller[152859]: 2025-11-25T08:45:29Z|00988|binding|INFO|Setting lport 284bf2a3-cc8c-4631-81d0-b2eda45727f9 down in Southbound
Nov 25 08:45:29 compute-0 ovn_controller[152859]: 2025-11-25T08:45:29Z|00989|binding|INFO|Removing iface tap284bf2a3-cc ovn-installed in OVS
Nov 25 08:45:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:29.709 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:ee:87 10.100.0.13'], port_security=['fa:16:3e:22:ee:87 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5bcfbf47-fa2f-4580-9ace-f946f5683844', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e802e356-a113-4a9a-a825-0a16ca2eb73c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f476c88f03140fb8498fc62d3d783b0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fe50aa03-4d4a-43b6-a9bc-79cbda787ef2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe5d4171-3312-481d-8e49-f074dc7ca346, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=284bf2a3-cc8c-4631-81d0-b2eda45727f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:45:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:29.711 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 284bf2a3-cc8c-4631-81d0-b2eda45727f9 in datapath e802e356-a113-4a9a-a825-0a16ca2eb73c unbound from our chassis
Nov 25 08:45:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:29.713 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e802e356-a113-4a9a-a825-0a16ca2eb73c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:45:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:29.714 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[37416b1e-8503-43d5-9452-f95bf2168b33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:29.714 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c namespace which is not needed anymore
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.721 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:29 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000066.scope: Deactivated successfully.
Nov 25 08:45:29 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000066.scope: Consumed 2.650s CPU time.
Nov 25 08:45:29 compute-0 systemd-machined[215790]: Machine qemu-125-instance-00000066 terminated.
Nov 25 08:45:29 compute-0 NetworkManager[48915]: <info>  [1764060329.7567] manager: (tap284bf2a3-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.771 253542 INFO nova.virt.libvirt.driver [-] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Instance destroyed successfully.
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.772 253542 DEBUG nova.objects.instance [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lazy-loading 'resources' on Instance uuid 5bcfbf47-fa2f-4580-9ace-f946f5683844 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.794 253542 DEBUG nova.virt.libvirt.vif [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1939536355',display_name='tempest-ServerGroupTestJSON-server-1939536355',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1939536355',id=102,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:45:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f476c88f03140fb8498fc62d3d783b0',ramdisk_id='',reservation_id='r-qblg0f2q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-166485107',owner_user_name='tempest-ServerGroupTestJSON-166485107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:45:27Z,user_data=None,user_id='0f86af4601044b11b6ad679db30c1c0a',uuid=5bcfbf47-fa2f-4580-9ace-f946f5683844,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.795 253542 DEBUG nova.network.os_vif_util [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Converting VIF {"id": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "address": "fa:16:3e:22:ee:87", "network": {"id": "e802e356-a113-4a9a-a825-0a16ca2eb73c", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1741820126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f476c88f03140fb8498fc62d3d783b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap284bf2a3-cc", "ovs_interfaceid": "284bf2a3-cc8c-4631-81d0-b2eda45727f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.796 253542 DEBUG nova.network.os_vif_util [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=284bf2a3-cc8c-4631-81d0-b2eda45727f9,network=Network(e802e356-a113-4a9a-a825-0a16ca2eb73c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap284bf2a3-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.796 253542 DEBUG os_vif [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=284bf2a3-cc8c-4631-81d0-b2eda45727f9,network=Network(e802e356-a113-4a9a-a825-0a16ca2eb73c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap284bf2a3-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.798 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap284bf2a3-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.800 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.803 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:45:29 compute-0 nova_compute[253538]: 2025-11-25 08:45:29.805 253542 INFO os_vif [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=284bf2a3-cc8c-4631-81d0-b2eda45727f9,network=Network(e802e356-a113-4a9a-a825-0a16ca2eb73c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap284bf2a3-cc')
Nov 25 08:45:29 compute-0 neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c[354693]: [NOTICE]   (354697) : haproxy version is 2.8.14-c23fe91
Nov 25 08:45:29 compute-0 neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c[354693]: [NOTICE]   (354697) : path to executable is /usr/sbin/haproxy
Nov 25 08:45:29 compute-0 neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c[354693]: [WARNING]  (354697) : Exiting Master process...
Nov 25 08:45:29 compute-0 neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c[354693]: [ALERT]    (354697) : Current worker (354699) exited with code 143 (Terminated)
Nov 25 08:45:29 compute-0 neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c[354693]: [WARNING]  (354697) : All workers exited. Exiting... (0)
Nov 25 08:45:29 compute-0 systemd[1]: libpod-8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b.scope: Deactivated successfully.
Nov 25 08:45:29 compute-0 podman[354743]: 2025-11-25 08:45:29.848960359 +0000 UTC m=+0.045299034 container died 8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:45:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b-userdata-shm.mount: Deactivated successfully.
Nov 25 08:45:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-beb33698eadac245ca4ccca1fab19fcc25dc1326559f66c577ff040cd5948b3a-merged.mount: Deactivated successfully.
Nov 25 08:45:29 compute-0 podman[354743]: 2025-11-25 08:45:29.899674118 +0000 UTC m=+0.096012803 container cleanup 8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 08:45:29 compute-0 systemd[1]: libpod-conmon-8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b.scope: Deactivated successfully.
Nov 25 08:45:29 compute-0 sshd-session[354708]: Invalid user oracle from 193.32.162.151 port 39774
Nov 25 08:45:29 compute-0 podman[354793]: 2025-11-25 08:45:29.980712818 +0000 UTC m=+0.056462333 container remove 8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:45:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:29.987 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b5595ca5-f5af-41de-9746-b133617ec956]: (4, ('Tue Nov 25 08:45:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c (8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b)\n8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b\nTue Nov 25 08:45:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c (8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b)\n8ad3315585977bf4d76840b382ec350859c95df106cfedba694ea9ed375df84b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:29.989 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[42fdcad8-a46f-4656-b66e-e104e0ba5d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:29.990 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape802e356-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:45:30 compute-0 sshd-session[354708]: Connection closed by invalid user oracle 193.32.162.151 port 39774 [preauth]
Nov 25 08:45:30 compute-0 nova_compute[253538]: 2025-11-25 08:45:30.033 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:30 compute-0 kernel: tape802e356-a0: left promiscuous mode
Nov 25 08:45:30 compute-0 nova_compute[253538]: 2025-11-25 08:45:30.035 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:30.038 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ad33721-1e90-444d-93db-62e2c57055a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:30 compute-0 nova_compute[253538]: 2025-11-25 08:45:30.047 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:30.053 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e98f6c85-0287-4d5e-aecb-7c499a634d2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:30.055 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[43bf9bb4-c400-40d0-a7bf-49eead470bf5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:30.071 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[35d34317-b579-48bb-b87b-954a83758eff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568150, 'reachable_time': 18652, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354809, 'error': None, 'target': 'ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:30 compute-0 systemd[1]: run-netns-ovnmeta\x2de802e356\x2da113\x2d4a9a\x2da825\x2d0a16ca2eb73c.mount: Deactivated successfully.
Nov 25 08:45:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:30.076 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e802e356-a113-4a9a-a825-0a16ca2eb73c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:45:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:30.077 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[26bc9d73-cd02-4940-9105-f947ad7389a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:30 compute-0 nova_compute[253538]: 2025-11-25 08:45:30.151 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:30.150 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:45:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:30.152 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:45:30 compute-0 nova_compute[253538]: 2025-11-25 08:45:30.210 253542 INFO nova.virt.libvirt.driver [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Deleting instance files /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844_del
Nov 25 08:45:30 compute-0 nova_compute[253538]: 2025-11-25 08:45:30.211 253542 INFO nova.virt.libvirt.driver [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Deletion of /var/lib/nova/instances/5bcfbf47-fa2f-4580-9ace-f946f5683844_del complete
Nov 25 08:45:30 compute-0 nova_compute[253538]: 2025-11-25 08:45:30.271 253542 INFO nova.compute.manager [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 25 08:45:30 compute-0 nova_compute[253538]: 2025-11-25 08:45:30.272 253542 DEBUG oslo.service.loopingcall [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:45:30 compute-0 nova_compute[253538]: 2025-11-25 08:45:30.272 253542 DEBUG nova.compute.manager [-] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:45:30 compute-0 nova_compute[253538]: 2025-11-25 08:45:30.273 253542 DEBUG nova.network.neutron [-] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:45:30 compute-0 ceph-mon[75015]: pgmap v1919: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 644 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 25 08:45:30 compute-0 podman[354810]: 2025-11-25 08:45:30.844028269 +0000 UTC m=+0.094149283 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 08:45:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 25 08:45:31 compute-0 nova_compute[253538]: 2025-11-25 08:45:31.738 253542 DEBUG nova.compute.manager [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received event network-vif-unplugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:45:31 compute-0 nova_compute[253538]: 2025-11-25 08:45:31.739 253542 DEBUG oslo_concurrency.lockutils [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:31 compute-0 nova_compute[253538]: 2025-11-25 08:45:31.739 253542 DEBUG oslo_concurrency.lockutils [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:31 compute-0 nova_compute[253538]: 2025-11-25 08:45:31.740 253542 DEBUG oslo_concurrency.lockutils [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:31 compute-0 nova_compute[253538]: 2025-11-25 08:45:31.740 253542 DEBUG nova.compute.manager [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] No waiting events found dispatching network-vif-unplugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:45:31 compute-0 nova_compute[253538]: 2025-11-25 08:45:31.741 253542 DEBUG nova.compute.manager [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received event network-vif-unplugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:45:31 compute-0 nova_compute[253538]: 2025-11-25 08:45:31.741 253542 DEBUG nova.compute.manager [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received event network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:45:31 compute-0 nova_compute[253538]: 2025-11-25 08:45:31.742 253542 DEBUG oslo_concurrency.lockutils [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:31 compute-0 nova_compute[253538]: 2025-11-25 08:45:31.742 253542 DEBUG oslo_concurrency.lockutils [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:31 compute-0 nova_compute[253538]: 2025-11-25 08:45:31.743 253542 DEBUG oslo_concurrency.lockutils [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:31 compute-0 nova_compute[253538]: 2025-11-25 08:45:31.743 253542 DEBUG nova.compute.manager [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] No waiting events found dispatching network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:45:31 compute-0 nova_compute[253538]: 2025-11-25 08:45:31.743 253542 WARNING nova.compute.manager [req-51320292-220b-46ad-b8db-83303f265f23 req-0cae2e2d-8243-4626-9654-1e767b14dc56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received unexpected event network-vif-plugged-284bf2a3-cc8c-4631-81d0-b2eda45727f9 for instance with vm_state active and task_state deleting.
Nov 25 08:45:31 compute-0 nova_compute[253538]: 2025-11-25 08:45:31.921 253542 DEBUG nova.network.neutron [-] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:45:31 compute-0 nova_compute[253538]: 2025-11-25 08:45:31.972 253542 INFO nova.compute.manager [-] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Took 1.70 seconds to deallocate network for instance.
Nov 25 08:45:32 compute-0 nova_compute[253538]: 2025-11-25 08:45:32.027 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:32 compute-0 nova_compute[253538]: 2025-11-25 08:45:32.028 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:32 compute-0 nova_compute[253538]: 2025-11-25 08:45:32.030 253542 DEBUG nova.compute.manager [req-3818273b-383b-4c7f-b619-8e03db95eff4 req-2c288f2c-c003-4590-beb2-27a7b4aabca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Received event network-vif-deleted-284bf2a3-cc8c-4631-81d0-b2eda45727f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:45:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:45:32 compute-0 nova_compute[253538]: 2025-11-25 08:45:32.223 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:32 compute-0 nova_compute[253538]: 2025-11-25 08:45:32.374 253542 DEBUG oslo_concurrency.processutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:32 compute-0 nova_compute[253538]: 2025-11-25 08:45:32.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:45:32 compute-0 ceph-mon[75015]: pgmap v1920: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 25 08:45:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:45:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2885918888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:45:32 compute-0 nova_compute[253538]: 2025-11-25 08:45:32.834 253542 DEBUG oslo_concurrency.processutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:32 compute-0 nova_compute[253538]: 2025-11-25 08:45:32.841 253542 DEBUG nova.compute.provider_tree [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:45:32 compute-0 nova_compute[253538]: 2025-11-25 08:45:32.866 253542 DEBUG nova.scheduler.client.report [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:45:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:32.874 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:78:41 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8d7b16e8-5cab-4430-95a4-049834bc562c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8d7b16e8-5cab-4430-95a4-049834bc562c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36c8f2a4d2284593a0dc0fa30aef95d1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b8af023-ed29-4fce-8188-06112851a9e5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8e8e3470-47b7-47a3-bc0e-2a46677d3377) old=Port_Binding(mac=['fa:16:3e:ba:78:41 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8d7b16e8-5cab-4430-95a4-049834bc562c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8d7b16e8-5cab-4430-95a4-049834bc562c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36c8f2a4d2284593a0dc0fa30aef95d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:45:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:32.875 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8e8e3470-47b7-47a3-bc0e-2a46677d3377 in datapath 8d7b16e8-5cab-4430-95a4-049834bc562c updated
Nov 25 08:45:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:32.876 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8d7b16e8-5cab-4430-95a4-049834bc562c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:45:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:32.877 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4acd1086-1c70-4ebc-837e-f14ffc307d61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:45:32 compute-0 nova_compute[253538]: 2025-11-25 08:45:32.883 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:32 compute-0 nova_compute[253538]: 2025-11-25 08:45:32.940 253542 INFO nova.scheduler.client.report [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Deleted allocations for instance 5bcfbf47-fa2f-4580-9ace-f946f5683844
Nov 25 08:45:33 compute-0 nova_compute[253538]: 2025-11-25 08:45:33.014 253542 DEBUG oslo_concurrency.lockutils [None req-30253b64-33e8-4c1f-8cdf-f15e04bf0928 0f86af4601044b11b6ad679db30c1c0a 5f476c88f03140fb8498fc62d3d783b0 - - default default] Lock "5bcfbf47-fa2f-4580-9ace-f946f5683844" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 104 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Nov 25 08:45:33 compute-0 nova_compute[253538]: 2025-11-25 08:45:33.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:45:33 compute-0 nova_compute[253538]: 2025-11-25 08:45:33.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 08:45:33 compute-0 nova_compute[253538]: 2025-11-25 08:45:33.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 08:45:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2885918888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:45:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:34.154 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:45:34 compute-0 nova_compute[253538]: 2025-11-25 08:45:34.567 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:45:34 compute-0 nova_compute[253538]: 2025-11-25 08:45:34.568 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:45:34 compute-0 nova_compute[253538]: 2025-11-25 08:45:34.594 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:34 compute-0 nova_compute[253538]: 2025-11-25 08:45:34.594 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:34 compute-0 nova_compute[253538]: 2025-11-25 08:45:34.594 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:34 compute-0 nova_compute[253538]: 2025-11-25 08:45:34.594 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:45:34 compute-0 nova_compute[253538]: 2025-11-25 08:45:34.594 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:34 compute-0 nova_compute[253538]: 2025-11-25 08:45:34.801 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:34 compute-0 ceph-mon[75015]: pgmap v1921: 321 pgs: 321 active+clean; 104 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Nov 25 08:45:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:45:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4114571231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:45:35 compute-0 nova_compute[253538]: 2025-11-25 08:45:35.203 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:35 compute-0 nova_compute[253538]: 2025-11-25 08:45:35.395 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:45:35 compute-0 nova_compute[253538]: 2025-11-25 08:45:35.396 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3932MB free_disk=59.980873107910156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:45:35 compute-0 nova_compute[253538]: 2025-11-25 08:45:35.396 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:35 compute-0 nova_compute[253538]: 2025-11-25 08:45:35.397 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 88 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 126 op/s
Nov 25 08:45:35 compute-0 nova_compute[253538]: 2025-11-25 08:45:35.549 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:45:35 compute-0 nova_compute[253538]: 2025-11-25 08:45:35.549 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:45:35 compute-0 nova_compute[253538]: 2025-11-25 08:45:35.565 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4114571231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:45:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:45:36 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3902485140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:45:36 compute-0 nova_compute[253538]: 2025-11-25 08:45:36.040 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:36 compute-0 nova_compute[253538]: 2025-11-25 08:45:36.046 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:45:36 compute-0 nova_compute[253538]: 2025-11-25 08:45:36.062 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:45:36 compute-0 nova_compute[253538]: 2025-11-25 08:45:36.088 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:45:36 compute-0 nova_compute[253538]: 2025-11-25 08:45:36.088 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:36 compute-0 nova_compute[253538]: 2025-11-25 08:45:36.358 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:36 compute-0 nova_compute[253538]: 2025-11-25 08:45:36.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:45:36 compute-0 nova_compute[253538]: 2025-11-25 08:45:36.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:45:36 compute-0 nova_compute[253538]: 2025-11-25 08:45:36.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 08:45:36 compute-0 ceph-mon[75015]: pgmap v1922: 321 pgs: 321 active+clean; 88 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 126 op/s
Nov 25 08:45:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3902485140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:45:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:45:37 compute-0 nova_compute[253538]: 2025-11-25 08:45:37.225 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 88 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 08:45:38 compute-0 sshd-session[354881]: Invalid user db2inst1 from 45.78.222.2 port 47804
Nov 25 08:45:38 compute-0 nova_compute[253538]: 2025-11-25 08:45:38.567 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:45:38 compute-0 ceph-mon[75015]: pgmap v1923: 321 pgs: 321 active+clean; 88 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 08:45:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 88 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 97 op/s
Nov 25 08:45:39 compute-0 nova_compute[253538]: 2025-11-25 08:45:39.804 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:40 compute-0 sshd-session[354881]: Received disconnect from 45.78.222.2 port 47804:11: Bye Bye [preauth]
Nov 25 08:45:40 compute-0 sshd-session[354881]: Disconnected from invalid user db2inst1 45.78.222.2 port 47804 [preauth]
Nov 25 08:45:40 compute-0 ceph-mon[75015]: pgmap v1924: 321 pgs: 321 active+clean; 88 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 97 op/s
Nov 25 08:45:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:41.070 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:41.071 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:45:41.071 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 88 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 08:45:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:45:42 compute-0 nova_compute[253538]: 2025-11-25 08:45:42.228 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:42 compute-0 ceph-mon[75015]: pgmap v1925: 321 pgs: 321 active+clean; 88 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.077 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "7d74f950-951c-4cea-99f7-71a915c6a21c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.077 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "7d74f950-951c-4cea-99f7-71a915c6a21c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.094 253542 DEBUG nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.187 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.188 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.195 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.196 253542 INFO nova.compute.claims [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.318 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 88 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 593 KiB/s rd, 1.2 KiB/s wr, 46 op/s
Nov 25 08:45:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:45:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1984082783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.752 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.761 253542 DEBUG nova.compute.provider_tree [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.780 253542 DEBUG nova.scheduler.client.report [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.805 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.806 253542 DEBUG nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.847 253542 DEBUG nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.860 253542 INFO nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.875 253542 DEBUG nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:45:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1984082783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.952 253542 DEBUG nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.954 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.954 253542 INFO nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Creating image(s)
Nov 25 08:45:43 compute-0 nova_compute[253538]: 2025-11-25 08:45:43.979 253542 DEBUG nova.storage.rbd_utils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.006 253542 DEBUG nova.storage.rbd_utils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.028 253542 DEBUG nova.storage.rbd_utils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.031 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.104 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.105 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.105 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.106 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.124 253542 DEBUG nova.storage.rbd_utils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.127 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7d74f950-951c-4cea-99f7-71a915c6a21c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.425 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7d74f950-951c-4cea-99f7-71a915c6a21c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.485 253542 DEBUG nova.storage.rbd_utils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] resizing rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.587 253542 DEBUG nova.objects.instance [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'migration_context' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.602 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.603 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Ensure instance console log exists: /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.603 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.604 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.604 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.606 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.611 253542 WARNING nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.615 253542 DEBUG nova.virt.libvirt.host [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.616 253542 DEBUG nova.virt.libvirt.host [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.624 253542 DEBUG nova.virt.libvirt.host [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.625 253542 DEBUG nova.virt.libvirt.host [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.625 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.626 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.626 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.626 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.627 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.627 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.627 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.627 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.628 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.628 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.628 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.628 253542 DEBUG nova.virt.hardware [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.632 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.771 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060329.7701275, 5bcfbf47-fa2f-4580-9ace-f946f5683844 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.771 253542 INFO nova.compute.manager [-] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] VM Stopped (Lifecycle Event)
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.795 253542 DEBUG nova.compute.manager [None req-6426877f-db23-4ea2-8196-165e937de9b8 - - - - - -] [instance: 5bcfbf47-fa2f-4580-9ace-f946f5683844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:45:44 compute-0 nova_compute[253538]: 2025-11-25 08:45:44.806 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:44 compute-0 ceph-mon[75015]: pgmap v1926: 321 pgs: 321 active+clean; 88 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 593 KiB/s rd, 1.2 KiB/s wr, 46 op/s
Nov 25 08:45:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:45:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2695249858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:45:45 compute-0 nova_compute[253538]: 2025-11-25 08:45:45.201 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:45 compute-0 nova_compute[253538]: 2025-11-25 08:45:45.224 253542 DEBUG nova.storage.rbd_utils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:45:45 compute-0 nova_compute[253538]: 2025-11-25 08:45:45.227 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 98 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 715 KiB/s wr, 25 op/s
Nov 25 08:45:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:45:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3269541838' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:45:45 compute-0 nova_compute[253538]: 2025-11-25 08:45:45.655 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:45 compute-0 nova_compute[253538]: 2025-11-25 08:45:45.657 253542 DEBUG nova.objects.instance [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:45:45 compute-0 nova_compute[253538]: 2025-11-25 08:45:45.687 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:45:45 compute-0 nova_compute[253538]:   <uuid>7d74f950-951c-4cea-99f7-71a915c6a21c</uuid>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   <name>instance-00000067</name>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerShowV254Test-server-320589125</nova:name>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:45:44</nova:creationTime>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:45:45 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:45:45 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:45:45 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:45:45 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:45:45 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:45:45 compute-0 nova_compute[253538]:         <nova:user uuid="61a0c433a20242eeae2b074ca5cce0fb">tempest-ServerShowV254Test-584114205-project-member</nova:user>
Nov 25 08:45:45 compute-0 nova_compute[253538]:         <nova:project uuid="eab64550373b4d6fa996c94c6ad06846">tempest-ServerShowV254Test-584114205</nova:project>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <system>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <entry name="serial">7d74f950-951c-4cea-99f7-71a915c6a21c</entry>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <entry name="uuid">7d74f950-951c-4cea-99f7-71a915c6a21c</entry>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     </system>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   <os>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   </os>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   <features>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   </features>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7d74f950-951c-4cea-99f7-71a915c6a21c_disk">
Nov 25 08:45:45 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       </source>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:45:45 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config">
Nov 25 08:45:45 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       </source>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:45:45 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/console.log" append="off"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <video>
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     </video>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:45:45 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:45:45 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:45:45 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:45:45 compute-0 nova_compute[253538]: </domain>
Nov 25 08:45:45 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:45:45 compute-0 nova_compute[253538]: 2025-11-25 08:45:45.737 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:45:45 compute-0 nova_compute[253538]: 2025-11-25 08:45:45.737 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:45:45 compute-0 nova_compute[253538]: 2025-11-25 08:45:45.738 253542 INFO nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Using config drive
Nov 25 08:45:45 compute-0 nova_compute[253538]: 2025-11-25 08:45:45.756 253542 DEBUG nova.storage.rbd_utils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:45:45 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2695249858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:45:45 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3269541838' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:45:46 compute-0 nova_compute[253538]: 2025-11-25 08:45:46.001 253542 INFO nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Creating config drive at /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config
Nov 25 08:45:46 compute-0 nova_compute[253538]: 2025-11-25 08:45:46.006 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjy_euid_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:46 compute-0 nova_compute[253538]: 2025-11-25 08:45:46.160 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjy_euid_" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:46 compute-0 nova_compute[253538]: 2025-11-25 08:45:46.187 253542 DEBUG nova.storage.rbd_utils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:45:46 compute-0 nova_compute[253538]: 2025-11-25 08:45:46.193 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:45:46 compute-0 nova_compute[253538]: 2025-11-25 08:45:46.386 253542 DEBUG oslo_concurrency.processutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:45:46 compute-0 nova_compute[253538]: 2025-11-25 08:45:46.388 253542 INFO nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Deleting local config drive /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config because it was imported into RBD.
Nov 25 08:45:46 compute-0 systemd-machined[215790]: New machine qemu-126-instance-00000067.
Nov 25 08:45:46 compute-0 systemd[1]: Started Virtual Machine qemu-126-instance-00000067.
Nov 25 08:45:46 compute-0 ceph-mon[75015]: pgmap v1927: 321 pgs: 321 active+clean; 98 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 715 KiB/s wr, 25 op/s
Nov 25 08:45:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.229 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.546 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060347.545099, 7d74f950-951c-4cea-99f7-71a915c6a21c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.546 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] VM Resumed (Lifecycle Event)
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.549 253542 DEBUG nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.549 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.552 253542 INFO nova.virt.libvirt.driver [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance spawned successfully.
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.552 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:45:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 123 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.578 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.586 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.587 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.587 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.587 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.588 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.588 253542 DEBUG nova.virt.libvirt.driver [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.593 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.624 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.625 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060347.5487173, 7d74f950-951c-4cea-99f7-71a915c6a21c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.625 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] VM Started (Lifecycle Event)
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.652 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.656 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.676 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.686 253542 INFO nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Took 3.73 seconds to spawn the instance on the hypervisor.
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.687 253542 DEBUG nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.748 253542 INFO nova.compute.manager [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Took 4.60 seconds to build instance.
Nov 25 08:45:47 compute-0 nova_compute[253538]: 2025-11-25 08:45:47.765 253542 DEBUG oslo_concurrency.lockutils [None req-172343fa-55e9-4310-9527-fa12301679c6 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "7d74f950-951c-4cea-99f7-71a915c6a21c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:45:48 compute-0 ceph-mon[75015]: pgmap v1928: 321 pgs: 321 active+clean; 123 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Nov 25 08:45:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 134 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 25 08:45:49 compute-0 nova_compute[253538]: 2025-11-25 08:45:49.809 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:50 compute-0 nova_compute[253538]: 2025-11-25 08:45:50.243 253542 INFO nova.compute.manager [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Rebuilding instance
Nov 25 08:45:50 compute-0 nova_compute[253538]: 2025-11-25 08:45:50.482 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:45:50 compute-0 nova_compute[253538]: 2025-11-25 08:45:50.496 253542 DEBUG nova.compute.manager [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:45:50 compute-0 nova_compute[253538]: 2025-11-25 08:45:50.542 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'pci_requests' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:45:50 compute-0 nova_compute[253538]: 2025-11-25 08:45:50.551 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:45:50 compute-0 nova_compute[253538]: 2025-11-25 08:45:50.564 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'resources' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:45:50 compute-0 nova_compute[253538]: 2025-11-25 08:45:50.573 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'migration_context' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:45:50 compute-0 nova_compute[253538]: 2025-11-25 08:45:50.583 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:45:50 compute-0 nova_compute[253538]: 2025-11-25 08:45:50.588 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:45:50 compute-0 ceph-mon[75015]: pgmap v1929: 321 pgs: 321 active+clean; 134 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 25 08:45:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Nov 25 08:45:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:45:52 compute-0 nova_compute[253538]: 2025-11-25 08:45:52.232 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:53 compute-0 ceph-mon[75015]: pgmap v1930: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:45:53
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.log', '.rgw.root', 'vms']
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:45:53 compute-0 nova_compute[253538]: 2025-11-25 08:45:53.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 08:45:53 compute-0 podman[355273]: 2025-11-25 08:45:53.821390277 +0000 UTC m=+0.062461533 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:45:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:45:54 compute-0 nova_compute[253538]: 2025-11-25 08:45:54.811 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:55 compute-0 ceph-mon[75015]: pgmap v1931: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 08:45:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 08:45:57 compute-0 ceph-mon[75015]: pgmap v1932: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 08:45:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:45:57 compute-0 nova_compute[253538]: 2025-11-25 08:45:57.233 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:45:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 87 op/s
Nov 25 08:45:57 compute-0 podman[355292]: 2025-11-25 08:45:57.842167301 +0000 UTC m=+0.082127691 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 08:45:58 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 25 08:45:59 compute-0 ceph-mon[75015]: pgmap v1933: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 87 op/s
Nov 25 08:45:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 669 KiB/s wr, 86 op/s
Nov 25 08:45:59 compute-0 nova_compute[253538]: 2025-11-25 08:45:59.815 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:00 compute-0 nova_compute[253538]: 2025-11-25 08:46:00.637 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 08:46:01 compute-0 ceph-mon[75015]: pgmap v1934: 321 pgs: 321 active+clean; 134 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 669 KiB/s wr, 86 op/s
Nov 25 08:46:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 148 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 908 KiB/s wr, 73 op/s
Nov 25 08:46:01 compute-0 podman[355314]: 2025-11-25 08:46:01.872426537 +0000 UTC m=+0.121483064 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 08:46:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:46:02 compute-0 nova_compute[253538]: 2025-11-25 08:46:02.234 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:03 compute-0 ceph-mon[75015]: pgmap v1935: 321 pgs: 321 active+clean; 148 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 908 KiB/s wr, 73 op/s
Nov 25 08:46:03 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000067.scope: Deactivated successfully.
Nov 25 08:46:03 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000067.scope: Consumed 13.520s CPU time.
Nov 25 08:46:03 compute-0 systemd-machined[215790]: Machine qemu-126-instance-00000067 terminated.
Nov 25 08:46:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 165 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1016 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Nov 25 08:46:03 compute-0 nova_compute[253538]: 2025-11-25 08:46:03.653 253542 INFO nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance shutdown successfully after 13 seconds.
Nov 25 08:46:03 compute-0 nova_compute[253538]: 2025-11-25 08:46:03.661 253542 INFO nova.virt.libvirt.driver [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance destroyed successfully.
Nov 25 08:46:03 compute-0 nova_compute[253538]: 2025-11-25 08:46:03.668 253542 INFO nova.virt.libvirt.driver [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance destroyed successfully.
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.033 253542 INFO nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Deleting instance files /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c_del
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.034 253542 INFO nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Deletion of /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c_del complete
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007553940182001693 of space, bias 1.0, pg target 0.2266182054600508 quantized to 32 (current 32)
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.172 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.173 253542 INFO nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Creating image(s)
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.204 253542 DEBUG nova.storage.rbd_utils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.241 253542 DEBUG nova.storage.rbd_utils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.278 253542 DEBUG nova.storage.rbd_utils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.283 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.380 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.382 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.383 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.384 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.418 253542 DEBUG nova.storage.rbd_utils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.423 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 7d74f950-951c-4cea-99f7-71a915c6a21c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.769 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 7d74f950-951c-4cea-99f7-71a915c6a21c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.831 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.838 253542 DEBUG nova.storage.rbd_utils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] resizing rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.943 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.944 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Ensure instance console log exists: /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.945 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.945 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.945 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.947 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.951 253542 WARNING nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.979 253542 DEBUG nova.virt.libvirt.host [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.980 253542 DEBUG nova.virt.libvirt.host [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.984 253542 DEBUG nova.virt.libvirt.host [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.984 253542 DEBUG nova.virt.libvirt.host [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.984 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.985 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.985 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.985 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.986 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.986 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.986 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.986 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.986 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.987 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.987 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.987 253542 DEBUG nova.virt.hardware [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:46:04 compute-0 nova_compute[253538]: 2025-11-25 08:46:04.987 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:46:05 compute-0 nova_compute[253538]: 2025-11-25 08:46:05.009 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:05 compute-0 ceph-mon[75015]: pgmap v1936: 321 pgs: 321 active+clean; 165 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1016 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Nov 25 08:46:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:46:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/947521671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:46:05 compute-0 nova_compute[253538]: 2025-11-25 08:46:05.420 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:05 compute-0 nova_compute[253538]: 2025-11-25 08:46:05.438 253542 DEBUG nova.storage.rbd_utils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:05 compute-0 nova_compute[253538]: 2025-11-25 08:46:05.441 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 149 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 2.2 MiB/s wr, 85 op/s
Nov 25 08:46:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:46:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3947023449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:46:05 compute-0 nova_compute[253538]: 2025-11-25 08:46:05.871 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:05 compute-0 nova_compute[253538]: 2025-11-25 08:46:05.875 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:46:05 compute-0 nova_compute[253538]:   <uuid>7d74f950-951c-4cea-99f7-71a915c6a21c</uuid>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   <name>instance-00000067</name>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <nova:name>tempest-ServerShowV254Test-server-320589125</nova:name>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:46:04</nova:creationTime>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:46:05 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:46:05 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:46:05 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:46:05 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:46:05 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:46:05 compute-0 nova_compute[253538]:         <nova:user uuid="61a0c433a20242eeae2b074ca5cce0fb">tempest-ServerShowV254Test-584114205-project-member</nova:user>
Nov 25 08:46:05 compute-0 nova_compute[253538]:         <nova:project uuid="eab64550373b4d6fa996c94c6ad06846">tempest-ServerShowV254Test-584114205</nova:project>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <system>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <entry name="serial">7d74f950-951c-4cea-99f7-71a915c6a21c</entry>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <entry name="uuid">7d74f950-951c-4cea-99f7-71a915c6a21c</entry>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     </system>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   <os>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   </os>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   <features>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   </features>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7d74f950-951c-4cea-99f7-71a915c6a21c_disk">
Nov 25 08:46:05 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       </source>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:46:05 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config">
Nov 25 08:46:05 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       </source>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:46:05 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/console.log" append="off"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <video>
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     </video>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:46:05 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:46:05 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:46:05 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:46:05 compute-0 nova_compute[253538]: </domain>
Nov 25 08:46:05 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:46:05 compute-0 nova_compute[253538]: 2025-11-25 08:46:05.926 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:46:05 compute-0 nova_compute[253538]: 2025-11-25 08:46:05.926 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:46:05 compute-0 nova_compute[253538]: 2025-11-25 08:46:05.927 253542 INFO nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Using config drive
Nov 25 08:46:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:05.936 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:ff:26 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3b58c00c-f900-493f-9371-00803cd7f82a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b58c00c-f900-493f-9371-00803cd7f82a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36c8f2a4d2284593a0dc0fa30aef95d1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=524d73b0-969b-46fc-b4ff-7468eaa76344, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d4082b80-0c53-4461-b3c7-56420ca50a2b) old=Port_Binding(mac=['fa:16:3e:8b:ff:26 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3b58c00c-f900-493f-9371-00803cd7f82a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b58c00c-f900-493f-9371-00803cd7f82a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36c8f2a4d2284593a0dc0fa30aef95d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:46:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:05.938 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d4082b80-0c53-4461-b3c7-56420ca50a2b in datapath 3b58c00c-f900-493f-9371-00803cd7f82a updated
Nov 25 08:46:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:05.939 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b58c00c-f900-493f-9371-00803cd7f82a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:46:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:05.940 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[014a3f22-d828-444f-860f-539b97eb043e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:05 compute-0 nova_compute[253538]: 2025-11-25 08:46:05.951 253542 DEBUG nova.storage.rbd_utils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:05 compute-0 nova_compute[253538]: 2025-11-25 08:46:05.968 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:46:06 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/947521671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:46:06 compute-0 ceph-mon[75015]: pgmap v1937: 321 pgs: 321 active+clean; 149 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 2.2 MiB/s wr, 85 op/s
Nov 25 08:46:06 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3947023449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:46:06 compute-0 nova_compute[253538]: 2025-11-25 08:46:06.241 253542 INFO nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Creating config drive at /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config
Nov 25 08:46:06 compute-0 nova_compute[253538]: 2025-11-25 08:46:06.250 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpis31mg61 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:06 compute-0 nova_compute[253538]: 2025-11-25 08:46:06.412 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpis31mg61" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:06 compute-0 nova_compute[253538]: 2025-11-25 08:46:06.448 253542 DEBUG nova.storage.rbd_utils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] rbd image 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:06 compute-0 nova_compute[253538]: 2025-11-25 08:46:06.453 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:06 compute-0 nova_compute[253538]: 2025-11-25 08:46:06.706 253542 DEBUG oslo_concurrency.processutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config 7d74f950-951c-4cea-99f7-71a915c6a21c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:06 compute-0 nova_compute[253538]: 2025-11-25 08:46:06.708 253542 INFO nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Deleting local config drive /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c/disk.config because it was imported into RBD.
Nov 25 08:46:06 compute-0 systemd-machined[215790]: New machine qemu-127-instance-00000067.
Nov 25 08:46:06 compute-0 systemd[1]: Started Virtual Machine qemu-127-instance-00000067.
Nov 25 08:46:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.237 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.291 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 7d74f950-951c-4cea-99f7-71a915c6a21c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.292 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060367.2913284, 7d74f950-951c-4cea-99f7-71a915c6a21c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.292 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] VM Resumed (Lifecycle Event)
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.295 253542 DEBUG nova.compute.manager [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.295 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.300 253542 INFO nova.virt.libvirt.driver [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance spawned successfully.
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.301 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.334 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.339 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.349 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.350 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.350 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.350 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.351 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.351 253542 DEBUG nova.virt.libvirt.driver [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.373 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.374 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060367.2957478, 7d74f950-951c-4cea-99f7-71a915c6a21c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.374 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] VM Started (Lifecycle Event)
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.404 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.408 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.414 253542 DEBUG nova.compute.manager [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.423 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.462 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.463 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.463 253542 DEBUG nova.objects.instance [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:46:07 compute-0 nova_compute[253538]: 2025-11-25 08:46:07.517 253542 DEBUG oslo_concurrency.lockutils [None req-a6fb5086-8250-421b-b016-cb44267cbd6e 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 127 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 276 KiB/s rd, 2.6 MiB/s wr, 100 op/s
Nov 25 08:46:08 compute-0 ceph-mon[75015]: pgmap v1938: 321 pgs: 321 active+clean; 127 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 276 KiB/s rd, 2.6 MiB/s wr, 100 op/s
Nov 25 08:46:08 compute-0 nova_compute[253538]: 2025-11-25 08:46:08.804 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "7d74f950-951c-4cea-99f7-71a915c6a21c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:08 compute-0 nova_compute[253538]: 2025-11-25 08:46:08.804 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "7d74f950-951c-4cea-99f7-71a915c6a21c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:08 compute-0 nova_compute[253538]: 2025-11-25 08:46:08.804 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "7d74f950-951c-4cea-99f7-71a915c6a21c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:08 compute-0 nova_compute[253538]: 2025-11-25 08:46:08.805 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "7d74f950-951c-4cea-99f7-71a915c6a21c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:08 compute-0 nova_compute[253538]: 2025-11-25 08:46:08.805 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "7d74f950-951c-4cea-99f7-71a915c6a21c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:08 compute-0 nova_compute[253538]: 2025-11-25 08:46:08.806 253542 INFO nova.compute.manager [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Terminating instance
Nov 25 08:46:08 compute-0 nova_compute[253538]: 2025-11-25 08:46:08.807 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "refresh_cache-7d74f950-951c-4cea-99f7-71a915c6a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:46:08 compute-0 nova_compute[253538]: 2025-11-25 08:46:08.807 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquired lock "refresh_cache-7d74f950-951c-4cea-99f7-71a915c6a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:46:08 compute-0 nova_compute[253538]: 2025-11-25 08:46:08.807 253542 DEBUG nova.network.neutron [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:46:09 compute-0 nova_compute[253538]: 2025-11-25 08:46:09.001 253542 DEBUG nova.network.neutron [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:46:09 compute-0 sudo[355706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:46:09 compute-0 sudo[355706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:09 compute-0 sudo[355706]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:09 compute-0 sudo[355731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:46:09 compute-0 sudo[355731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:09 compute-0 sudo[355731]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:09 compute-0 sudo[355756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:46:09 compute-0 sudo[355756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:09 compute-0 sudo[355756]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:09 compute-0 sudo[355781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:46:09 compute-0 sudo[355781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:09 compute-0 nova_compute[253538]: 2025-11-25 08:46:09.319 253542 DEBUG nova.network.neutron [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:46:09 compute-0 nova_compute[253538]: 2025-11-25 08:46:09.346 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Releasing lock "refresh_cache-7d74f950-951c-4cea-99f7-71a915c6a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:46:09 compute-0 nova_compute[253538]: 2025-11-25 08:46:09.346 253542 DEBUG nova.compute.manager [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:46:09 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000067.scope: Deactivated successfully.
Nov 25 08:46:09 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000067.scope: Consumed 2.641s CPU time.
Nov 25 08:46:09 compute-0 systemd-machined[215790]: Machine qemu-127-instance-00000067 terminated.
Nov 25 08:46:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 119 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 808 KiB/s rd, 3.2 MiB/s wr, 129 op/s
Nov 25 08:46:09 compute-0 nova_compute[253538]: 2025-11-25 08:46:09.570 253542 INFO nova.virt.libvirt.driver [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance destroyed successfully.
Nov 25 08:46:09 compute-0 nova_compute[253538]: 2025-11-25 08:46:09.571 253542 DEBUG nova.objects.instance [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lazy-loading 'resources' on Instance uuid 7d74f950-951c-4cea-99f7-71a915c6a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:46:09 compute-0 sudo[355781]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:46:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:46:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:46:09 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:46:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:46:09 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:46:09 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 7c8f7b1e-67a8-4e31-95e0-fa3ea2f20e55 does not exist
Nov 25 08:46:09 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d40661d3-566f-48f1-99b0-ff02f1b3a083 does not exist
Nov 25 08:46:09 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev ba1d45b7-b105-411e-a2a7-4fe4ae298d87 does not exist
Nov 25 08:46:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:46:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:46:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:46:09 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:46:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:46:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:46:09 compute-0 sudo[355857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:46:09 compute-0 sudo[355857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:09 compute-0 sudo[355857]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:09 compute-0 nova_compute[253538]: 2025-11-25 08:46:09.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:09 compute-0 sudo[355883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:46:09 compute-0 sudo[355883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:09 compute-0 sudo[355883]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:09 compute-0 sudo[355910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:46:09 compute-0 sudo[355910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:09 compute-0 sudo[355910]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:09 compute-0 nova_compute[253538]: 2025-11-25 08:46:09.969 253542 INFO nova.virt.libvirt.driver [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Deleting instance files /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c_del
Nov 25 08:46:09 compute-0 nova_compute[253538]: 2025-11-25 08:46:09.970 253542 INFO nova.virt.libvirt.driver [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Deletion of /var/lib/nova/instances/7d74f950-951c-4cea-99f7-71a915c6a21c_del complete
Nov 25 08:46:10 compute-0 nova_compute[253538]: 2025-11-25 08:46:10.016 253542 INFO nova.compute.manager [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Took 0.67 seconds to destroy the instance on the hypervisor.
Nov 25 08:46:10 compute-0 nova_compute[253538]: 2025-11-25 08:46:10.017 253542 DEBUG oslo.service.loopingcall [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:46:10 compute-0 nova_compute[253538]: 2025-11-25 08:46:10.018 253542 DEBUG nova.compute.manager [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:46:10 compute-0 nova_compute[253538]: 2025-11-25 08:46:10.018 253542 DEBUG nova.network.neutron [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:46:10 compute-0 sudo[355935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:46:10 compute-0 sudo[355935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:10 compute-0 ovn_controller[152859]: 2025-11-25T08:46:10Z|00990|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 08:46:10 compute-0 nova_compute[253538]: 2025-11-25 08:46:10.207 253542 DEBUG nova.network.neutron [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:46:10 compute-0 nova_compute[253538]: 2025-11-25 08:46:10.222 253542 DEBUG nova.network.neutron [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:46:10 compute-0 nova_compute[253538]: 2025-11-25 08:46:10.240 253542 INFO nova.compute.manager [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Took 0.22 seconds to deallocate network for instance.
Nov 25 08:46:10 compute-0 podman[356001]: 2025-11-25 08:46:10.373531761 +0000 UTC m=+0.044033830 container create 8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:46:10 compute-0 systemd[1]: Started libpod-conmon-8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74.scope.
Nov 25 08:46:10 compute-0 nova_compute[253538]: 2025-11-25 08:46:10.426 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:10 compute-0 nova_compute[253538]: 2025-11-25 08:46:10.426 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:10 compute-0 podman[356001]: 2025-11-25 08:46:10.352340363 +0000 UTC m=+0.022842432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:46:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:46:10 compute-0 podman[356001]: 2025-11-25 08:46:10.46608623 +0000 UTC m=+0.136588339 container init 8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 08:46:10 compute-0 podman[356001]: 2025-11-25 08:46:10.47544648 +0000 UTC m=+0.145948569 container start 8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bassi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 08:46:10 compute-0 podman[356001]: 2025-11-25 08:46:10.480360752 +0000 UTC m=+0.150862831 container attach 8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bassi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:46:10 compute-0 blissful_bassi[356017]: 167 167
Nov 25 08:46:10 compute-0 systemd[1]: libpod-8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74.scope: Deactivated successfully.
Nov 25 08:46:10 compute-0 conmon[356017]: conmon 8e2908b66aaa9f9a78f5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74.scope/container/memory.events
Nov 25 08:46:10 compute-0 nova_compute[253538]: 2025-11-25 08:46:10.485 253542 DEBUG oslo_concurrency.processutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:10 compute-0 podman[356001]: 2025-11-25 08:46:10.486976139 +0000 UTC m=+0.157478188 container died 8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bassi, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 25 08:46:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c290bb5bd5fdf500ac93d4d0a2c832cf142614c4ffcd6438edd3a98338878404-merged.mount: Deactivated successfully.
Nov 25 08:46:10 compute-0 podman[356001]: 2025-11-25 08:46:10.528006278 +0000 UTC m=+0.198508357 container remove 8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bassi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 08:46:10 compute-0 systemd[1]: libpod-conmon-8e2908b66aaa9f9a78f53de9f1eeb4a115da55703df22f3e33be6b6b33883b74.scope: Deactivated successfully.
Nov 25 08:46:10 compute-0 ceph-mon[75015]: pgmap v1939: 321 pgs: 321 active+clean; 119 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 808 KiB/s rd, 3.2 MiB/s wr, 129 op/s
Nov 25 08:46:10 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:46:10 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:46:10 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:46:10 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:46:10 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:46:10 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:46:10 compute-0 podman[356061]: 2025-11-25 08:46:10.716766463 +0000 UTC m=+0.064002975 container create c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:46:10 compute-0 systemd[1]: Started libpod-conmon-c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78.scope.
Nov 25 08:46:10 compute-0 podman[356061]: 2025-11-25 08:46:10.68340487 +0000 UTC m=+0.030641422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:46:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:46:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf998b94a794eba61a8732f0122e73af3e3af31a5fb8ef58690feb09755f5f24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:46:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf998b94a794eba61a8732f0122e73af3e3af31a5fb8ef58690feb09755f5f24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:46:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf998b94a794eba61a8732f0122e73af3e3af31a5fb8ef58690feb09755f5f24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:46:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf998b94a794eba61a8732f0122e73af3e3af31a5fb8ef58690feb09755f5f24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:46:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf998b94a794eba61a8732f0122e73af3e3af31a5fb8ef58690feb09755f5f24/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:46:10 compute-0 podman[356061]: 2025-11-25 08:46:10.838916775 +0000 UTC m=+0.186153337 container init c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:46:10 compute-0 podman[356061]: 2025-11-25 08:46:10.847920056 +0000 UTC m=+0.195156528 container start c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 08:46:10 compute-0 podman[356061]: 2025-11-25 08:46:10.854083001 +0000 UTC m=+0.201319523 container attach c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:46:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:46:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/642968321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:46:10 compute-0 nova_compute[253538]: 2025-11-25 08:46:10.943 253542 DEBUG oslo_concurrency.processutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:10 compute-0 nova_compute[253538]: 2025-11-25 08:46:10.951 253542 DEBUG nova.compute.provider_tree [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:46:10 compute-0 nova_compute[253538]: 2025-11-25 08:46:10.975 253542 DEBUG nova.scheduler.client.report [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:46:11 compute-0 nova_compute[253538]: 2025-11-25 08:46:11.110 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:11 compute-0 nova_compute[253538]: 2025-11-25 08:46:11.172 253542 INFO nova.scheduler.client.report [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Deleted allocations for instance 7d74f950-951c-4cea-99f7-71a915c6a21c
Nov 25 08:46:11 compute-0 nova_compute[253538]: 2025-11-25 08:46:11.272 253542 DEBUG oslo_concurrency.lockutils [None req-5fd87975-9760-42f0-8db0-837451033232 61a0c433a20242eeae2b074ca5cce0fb eab64550373b4d6fa996c94c6ad06846 - - default default] Lock "7d74f950-951c-4cea-99f7-71a915c6a21c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.468s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Nov 25 08:46:11 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/642968321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:46:11 compute-0 sshd-session[355900]: Invalid user ts2 from 45.78.217.205 port 38068
Nov 25 08:46:11 compute-0 inspiring_driscoll[356078]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:46:11 compute-0 inspiring_driscoll[356078]: --> relative data size: 1.0
Nov 25 08:46:11 compute-0 inspiring_driscoll[356078]: --> All data devices are unavailable
Nov 25 08:46:11 compute-0 systemd[1]: libpod-c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78.scope: Deactivated successfully.
Nov 25 08:46:11 compute-0 systemd[1]: libpod-c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78.scope: Consumed 1.060s CPU time.
Nov 25 08:46:12 compute-0 podman[356109]: 2025-11-25 08:46:12.012971498 +0000 UTC m=+0.027674613 container died c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 08:46:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf998b94a794eba61a8732f0122e73af3e3af31a5fb8ef58690feb09755f5f24-merged.mount: Deactivated successfully.
Nov 25 08:46:12 compute-0 podman[356109]: 2025-11-25 08:46:12.067182639 +0000 UTC m=+0.081885734 container remove c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:46:12 compute-0 systemd[1]: libpod-conmon-c3302ae8b89d6da317da96cd3e2913eb2ec91d4a785908e0459799116af9bd78.scope: Deactivated successfully.
Nov 25 08:46:12 compute-0 sudo[355935]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:12 compute-0 sshd-session[355900]: Received disconnect from 45.78.217.205 port 38068:11: Bye Bye [preauth]
Nov 25 08:46:12 compute-0 sshd-session[355900]: Disconnected from invalid user ts2 45.78.217.205 port 38068 [preauth]
Nov 25 08:46:12 compute-0 sudo[356124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:46:12 compute-0 sudo[356124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:12 compute-0 sudo[356124]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:46:12 compute-0 nova_compute[253538]: 2025-11-25 08:46:12.239 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:12 compute-0 sudo[356149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:46:12 compute-0 sudo[356149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:12 compute-0 sudo[356149]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:12 compute-0 sudo[356174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:46:12 compute-0 sudo[356174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:12 compute-0 sudo[356174]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:12 compute-0 sudo[356199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:46:12 compute-0 sudo[356199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:12 compute-0 podman[356265]: 2025-11-25 08:46:12.709478202 +0000 UTC m=+0.035703528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:46:12 compute-0 ceph-mon[75015]: pgmap v1940: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Nov 25 08:46:12 compute-0 podman[356265]: 2025-11-25 08:46:12.919811285 +0000 UTC m=+0.246036561 container create f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:46:12 compute-0 systemd[1]: Started libpod-conmon-f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9.scope.
Nov 25 08:46:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:46:13 compute-0 podman[356265]: 2025-11-25 08:46:13.048615234 +0000 UTC m=+0.374840500 container init f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:46:13 compute-0 podman[356265]: 2025-11-25 08:46:13.055980091 +0000 UTC m=+0.382205327 container start f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elion, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 08:46:13 compute-0 unruffled_elion[356281]: 167 167
Nov 25 08:46:13 compute-0 systemd[1]: libpod-f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9.scope: Deactivated successfully.
Nov 25 08:46:13 compute-0 podman[356265]: 2025-11-25 08:46:13.064433687 +0000 UTC m=+0.390658973 container attach f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 08:46:13 compute-0 podman[356265]: 2025-11-25 08:46:13.065976499 +0000 UTC m=+0.392201805 container died f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 08:46:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c81e26d87a0a1b0d4f4a49f8f43543840f84f7a90403757ff393003dc0d6347-merged.mount: Deactivated successfully.
Nov 25 08:46:13 compute-0 podman[356265]: 2025-11-25 08:46:13.15715189 +0000 UTC m=+0.483377166 container remove f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elion, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:46:13 compute-0 systemd[1]: libpod-conmon-f12ea809448563538a909f1e445a88413d72eeef2fbe72025190e1a9ee6190a9.scope: Deactivated successfully.
Nov 25 08:46:13 compute-0 podman[356305]: 2025-11-25 08:46:13.377527503 +0000 UTC m=+0.045611153 container create f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:46:13 compute-0 systemd[1]: Started libpod-conmon-f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd.scope.
Nov 25 08:46:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:46:13 compute-0 podman[356305]: 2025-11-25 08:46:13.361330959 +0000 UTC m=+0.029414619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cdc2bd1bce1bc5d920d0fed2203f5932cc232e119754600811026523f685212/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cdc2bd1bce1bc5d920d0fed2203f5932cc232e119754600811026523f685212/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cdc2bd1bce1bc5d920d0fed2203f5932cc232e119754600811026523f685212/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cdc2bd1bce1bc5d920d0fed2203f5932cc232e119754600811026523f685212/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:46:13 compute-0 podman[356305]: 2025-11-25 08:46:13.470990895 +0000 UTC m=+0.139074575 container init f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chebyshev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 08:46:13 compute-0 podman[356305]: 2025-11-25 08:46:13.485196567 +0000 UTC m=+0.153280217 container start f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 08:46:13 compute-0 podman[356305]: 2025-11-25 08:46:13.490009595 +0000 UTC m=+0.158093245 container attach f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chebyshev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:46:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 111 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 190 op/s
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]: {
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:     "0": [
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:         {
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "devices": [
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "/dev/loop3"
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             ],
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "lv_name": "ceph_lv0",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "lv_size": "21470642176",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "name": "ceph_lv0",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "tags": {
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.cluster_name": "ceph",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.crush_device_class": "",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.encrypted": "0",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.osd_id": "0",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.type": "block",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.vdo": "0"
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             },
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "type": "block",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "vg_name": "ceph_vg0"
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:         }
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:     ],
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:     "1": [
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:         {
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "devices": [
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "/dev/loop4"
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             ],
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "lv_name": "ceph_lv1",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "lv_size": "21470642176",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "name": "ceph_lv1",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "tags": {
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.cluster_name": "ceph",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.crush_device_class": "",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.encrypted": "0",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.osd_id": "1",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.type": "block",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.vdo": "0"
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             },
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "type": "block",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "vg_name": "ceph_vg1"
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:         }
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:     ],
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:     "2": [
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:         {
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "devices": [
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "/dev/loop5"
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             ],
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "lv_name": "ceph_lv2",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "lv_size": "21470642176",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "name": "ceph_lv2",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "tags": {
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.cluster_name": "ceph",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.crush_device_class": "",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.encrypted": "0",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.osd_id": "2",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.type": "block",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:                 "ceph.vdo": "0"
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             },
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "type": "block",
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:             "vg_name": "ceph_vg2"
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:         }
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]:     ]
Nov 25 08:46:14 compute-0 laughing_chebyshev[356321]: }
Nov 25 08:46:14 compute-0 systemd[1]: libpod-f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd.scope: Deactivated successfully.
Nov 25 08:46:14 compute-0 podman[356305]: 2025-11-25 08:46:14.287172885 +0000 UTC m=+0.955256525 container died f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chebyshev, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:46:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cdc2bd1bce1bc5d920d0fed2203f5932cc232e119754600811026523f685212-merged.mount: Deactivated successfully.
Nov 25 08:46:14 compute-0 podman[356305]: 2025-11-25 08:46:14.343597466 +0000 UTC m=+1.011681116 container remove f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 08:46:14 compute-0 systemd[1]: libpod-conmon-f9bb6dfb7b70f2f67cad900e0796e7368fafd8d518c248897c7d0134b7b7f5dd.scope: Deactivated successfully.
Nov 25 08:46:14 compute-0 sudo[356199]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:14 compute-0 sudo[356343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:46:14 compute-0 sudo[356343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:14 compute-0 sudo[356343]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:14 compute-0 sudo[356368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:46:14 compute-0 sudo[356368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:14 compute-0 sudo[356368]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:14 compute-0 sudo[356393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:46:14 compute-0 sudo[356393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:14 compute-0 sudo[356393]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:14 compute-0 sudo[356418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:46:14 compute-0 sudo[356418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:14 compute-0 nova_compute[253538]: 2025-11-25 08:46:14.836 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:14 compute-0 ceph-mon[75015]: pgmap v1941: 321 pgs: 321 active+clean; 111 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 190 op/s
Nov 25 08:46:14 compute-0 podman[356483]: 2025-11-25 08:46:14.989873264 +0000 UTC m=+0.049696332 container create 1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_golick, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 08:46:15 compute-0 systemd[1]: Started libpod-conmon-1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2.scope.
Nov 25 08:46:15 compute-0 podman[356483]: 2025-11-25 08:46:14.968754028 +0000 UTC m=+0.028577106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:46:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:46:15 compute-0 podman[356483]: 2025-11-25 08:46:15.112389765 +0000 UTC m=+0.172212913 container init 1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 08:46:15 compute-0 podman[356483]: 2025-11-25 08:46:15.125155637 +0000 UTC m=+0.184978745 container start 1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 25 08:46:15 compute-0 podman[356483]: 2025-11-25 08:46:15.130067329 +0000 UTC m=+0.189890497 container attach 1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_golick, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 08:46:15 compute-0 eager_golick[356499]: 167 167
Nov 25 08:46:15 compute-0 systemd[1]: libpod-1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2.scope: Deactivated successfully.
Nov 25 08:46:15 compute-0 conmon[356499]: conmon 1abdd1f5c70e597abf54 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2.scope/container/memory.events
Nov 25 08:46:15 compute-0 podman[356483]: 2025-11-25 08:46:15.133022668 +0000 UTC m=+0.192845736 container died 1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_golick, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 25 08:46:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf113db45288d783046d69b3f525f98d67828bb88cbb8743d0e916669ca47d6f-merged.mount: Deactivated successfully.
Nov 25 08:46:15 compute-0 podman[356483]: 2025-11-25 08:46:15.169531456 +0000 UTC m=+0.229354524 container remove 1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_golick, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 08:46:15 compute-0 systemd[1]: libpod-conmon-1abdd1f5c70e597abf54abe2d017f9fc66b027e91bcb1e983ddda921f3b2c5e2.scope: Deactivated successfully.
Nov 25 08:46:15 compute-0 podman[356524]: 2025-11-25 08:46:15.345959651 +0000 UTC m=+0.043109376 container create ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:46:15 compute-0 systemd[1]: Started libpod-conmon-ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c.scope.
Nov 25 08:46:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470262dccf30f895cec56bbb6c6ecddec14ac030047199f9910c87611e463951/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470262dccf30f895cec56bbb6c6ecddec14ac030047199f9910c87611e463951/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470262dccf30f895cec56bbb6c6ecddec14ac030047199f9910c87611e463951/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470262dccf30f895cec56bbb6c6ecddec14ac030047199f9910c87611e463951/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:46:15 compute-0 podman[356524]: 2025-11-25 08:46:15.326235732 +0000 UTC m=+0.023385457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:46:15 compute-0 podman[356524]: 2025-11-25 08:46:15.43292528 +0000 UTC m=+0.130074975 container init ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 08:46:15 compute-0 podman[356524]: 2025-11-25 08:46:15.438333925 +0000 UTC m=+0.135483620 container start ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 25 08:46:15 compute-0 podman[356524]: 2025-11-25 08:46:15.443607016 +0000 UTC m=+0.140756741 container attach ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:46:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1942: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Nov 25 08:46:16 compute-0 gracious_pare[356540]: {
Nov 25 08:46:16 compute-0 gracious_pare[356540]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:46:16 compute-0 gracious_pare[356540]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:46:16 compute-0 gracious_pare[356540]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:46:16 compute-0 gracious_pare[356540]:         "osd_id": 1,
Nov 25 08:46:16 compute-0 gracious_pare[356540]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:46:16 compute-0 gracious_pare[356540]:         "type": "bluestore"
Nov 25 08:46:16 compute-0 gracious_pare[356540]:     },
Nov 25 08:46:16 compute-0 gracious_pare[356540]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:46:16 compute-0 gracious_pare[356540]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:46:16 compute-0 gracious_pare[356540]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:46:16 compute-0 gracious_pare[356540]:         "osd_id": 2,
Nov 25 08:46:16 compute-0 gracious_pare[356540]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:46:16 compute-0 gracious_pare[356540]:         "type": "bluestore"
Nov 25 08:46:16 compute-0 gracious_pare[356540]:     },
Nov 25 08:46:16 compute-0 gracious_pare[356540]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:46:16 compute-0 gracious_pare[356540]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:46:16 compute-0 gracious_pare[356540]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:46:16 compute-0 gracious_pare[356540]:         "osd_id": 0,
Nov 25 08:46:16 compute-0 gracious_pare[356540]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:46:16 compute-0 gracious_pare[356540]:         "type": "bluestore"
Nov 25 08:46:16 compute-0 gracious_pare[356540]:     }
Nov 25 08:46:16 compute-0 gracious_pare[356540]: }
Nov 25 08:46:16 compute-0 systemd[1]: libpod-ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c.scope: Deactivated successfully.
Nov 25 08:46:16 compute-0 podman[356524]: 2025-11-25 08:46:16.460083678 +0000 UTC m=+1.157233413 container died ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:46:16 compute-0 systemd[1]: libpod-ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c.scope: Consumed 1.035s CPU time.
Nov 25 08:46:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-470262dccf30f895cec56bbb6c6ecddec14ac030047199f9910c87611e463951-merged.mount: Deactivated successfully.
Nov 25 08:46:16 compute-0 podman[356524]: 2025-11-25 08:46:16.519598243 +0000 UTC m=+1.216747948 container remove ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 08:46:16 compute-0 systemd[1]: libpod-conmon-ba4cff7cd5a6deac67377245eb949bbbea6da00ea6cab3cc8d6a9b514117b10c.scope: Deactivated successfully.
Nov 25 08:46:16 compute-0 sudo[356418]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:46:16 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:46:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:46:16 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:46:16 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev dcee8d98-a42d-45d5-a325-379b51e9fe38 does not exist
Nov 25 08:46:16 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev ddcbc32c-7895-4c25-8377-a0dec9697939 does not exist
Nov 25 08:46:16 compute-0 sudo[356587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:46:16 compute-0 sudo[356587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:16 compute-0 sudo[356587]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:16 compute-0 sudo[356612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:46:16 compute-0 sudo[356612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:46:16 compute-0 sudo[356612]: pam_unix(sudo:session): session closed for user root
Nov 25 08:46:16 compute-0 ceph-mon[75015]: pgmap v1942: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Nov 25 08:46:16 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:46:16 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:46:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:46:17 compute-0 nova_compute[253538]: 2025-11-25 08:46:17.241 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 130 op/s
Nov 25 08:46:18 compute-0 ceph-mon[75015]: pgmap v1943: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 130 op/s
Nov 25 08:46:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:19.301 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:ff:26 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-3b58c00c-f900-493f-9371-00803cd7f82a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b58c00c-f900-493f-9371-00803cd7f82a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36c8f2a4d2284593a0dc0fa30aef95d1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=524d73b0-969b-46fc-b4ff-7468eaa76344, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d4082b80-0c53-4461-b3c7-56420ca50a2b) old=Port_Binding(mac=['fa:16:3e:8b:ff:26 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3b58c00c-f900-493f-9371-00803cd7f82a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b58c00c-f900-493f-9371-00803cd7f82a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36c8f2a4d2284593a0dc0fa30aef95d1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:46:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:19.303 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d4082b80-0c53-4461-b3c7-56420ca50a2b in datapath 3b58c00c-f900-493f-9371-00803cd7f82a updated
Nov 25 08:46:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:19.304 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b58c00c-f900-493f-9371-00803cd7f82a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:46:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:19.305 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b6659b-25ec-4caa-9836-5ee0252adbf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Nov 25 08:46:19 compute-0 nova_compute[253538]: 2025-11-25 08:46:19.840 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:21 compute-0 ceph-mon[75015]: pgmap v1944: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Nov 25 08:46:21 compute-0 nova_compute[253538]: 2025-11-25 08:46:21.565 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:46:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 725 KiB/s wr, 86 op/s
Nov 25 08:46:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:46:22 compute-0 nova_compute[253538]: 2025-11-25 08:46:22.243 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:23 compute-0 ceph-mon[75015]: pgmap v1945: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 725 KiB/s wr, 86 op/s
Nov 25 08:46:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:46:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:46:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:46:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:46:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:46:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:46:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 530 KiB/s rd, 1.2 KiB/s wr, 44 op/s
Nov 25 08:46:24 compute-0 ceph-mon[75015]: pgmap v1946: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 530 KiB/s rd, 1.2 KiB/s wr, 44 op/s
Nov 25 08:46:24 compute-0 nova_compute[253538]: 2025-11-25 08:46:24.569 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060369.5679095, 7d74f950-951c-4cea-99f7-71a915c6a21c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:46:24 compute-0 nova_compute[253538]: 2025-11-25 08:46:24.570 253542 INFO nova.compute.manager [-] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] VM Stopped (Lifecycle Event)
Nov 25 08:46:24 compute-0 nova_compute[253538]: 2025-11-25 08:46:24.588 253542 DEBUG nova.compute.manager [None req-d6b179ab-c939-4f81-a596-8e3b767e7eb1 - - - - - -] [instance: 7d74f950-951c-4cea-99f7-71a915c6a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:46:24 compute-0 podman[356637]: 2025-11-25 08:46:24.831988553 +0000 UTC m=+0.076225862 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 08:46:24 compute-0 nova_compute[253538]: 2025-11-25 08:46:24.842 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1947: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 8.8 KiB/s rd, 341 B/s wr, 12 op/s
Nov 25 08:46:26 compute-0 ceph-mon[75015]: pgmap v1947: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 8.8 KiB/s rd, 341 B/s wr, 12 op/s
Nov 25 08:46:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:46:27 compute-0 nova_compute[253538]: 2025-11-25 08:46:27.246 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:27 compute-0 nova_compute[253538]: 2025-11-25 08:46:27.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:46:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:46:28 compute-0 nova_compute[253538]: 2025-11-25 08:46:28.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:46:28 compute-0 nova_compute[253538]: 2025-11-25 08:46:28.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:46:28 compute-0 ceph-mon[75015]: pgmap v1948: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:46:28 compute-0 podman[356657]: 2025-11-25 08:46:28.820362317 +0000 UTC m=+0.066254266 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:46:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:46:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3770457833' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:46:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:46:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3770457833' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:46:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:46:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3770457833' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:46:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3770457833' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:46:29 compute-0 nova_compute[253538]: 2025-11-25 08:46:29.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:30 compute-0 ceph-mon[75015]: pgmap v1949: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:46:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:31.527 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:46:31 compute-0 nova_compute[253538]: 2025-11-25 08:46:31.527 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:31.529 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:46:31 compute-0 nova_compute[253538]: 2025-11-25 08:46:31.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:46:31 compute-0 nova_compute[253538]: 2025-11-25 08:46:31.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:46:31 compute-0 nova_compute[253538]: 2025-11-25 08:46:31.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:46:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:46:31 compute-0 nova_compute[253538]: 2025-11-25 08:46:31.578 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 08:46:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:46:32 compute-0 nova_compute[253538]: 2025-11-25 08:46:32.294 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:32 compute-0 nova_compute[253538]: 2025-11-25 08:46:32.581 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:32 compute-0 nova_compute[253538]: 2025-11-25 08:46:32.582 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:32 compute-0 nova_compute[253538]: 2025-11-25 08:46:32.598 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:46:32 compute-0 nova_compute[253538]: 2025-11-25 08:46:32.678 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:32 compute-0 nova_compute[253538]: 2025-11-25 08:46:32.679 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:32 compute-0 nova_compute[253538]: 2025-11-25 08:46:32.688 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:46:32 compute-0 nova_compute[253538]: 2025-11-25 08:46:32.689 253542 INFO nova.compute.claims [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:46:32 compute-0 ceph-mon[75015]: pgmap v1950: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:46:32 compute-0 podman[356677]: 2025-11-25 08:46:32.843940455 +0000 UTC m=+0.092809167 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:46:32 compute-0 nova_compute[253538]: 2025-11-25 08:46:32.926 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:46:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1588211025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.366 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.372 253542 DEBUG nova.compute.provider_tree [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.386 253542 DEBUG nova.scheduler.client.report [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.403 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.404 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.453 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.454 253542 DEBUG nova.network.neutron [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.471 253542 INFO nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.484 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.563 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.564 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.565 253542 INFO nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Creating image(s)
Nov 25 08:46:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.586 253542 DEBUG nova.storage.rbd_utils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.603 253542 DEBUG nova.storage.rbd_utils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.622 253542 DEBUG nova.storage.rbd_utils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.624 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.689 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.690 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.691 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.691 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.710 253542 DEBUG nova.storage.rbd_utils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.715 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:33 compute-0 nova_compute[253538]: 2025-11-25 08:46:33.795 253542 DEBUG nova.policy [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f02d89c9848d8aaaaab070ce4d179', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '947f731219de435196429037dc94fd56', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:46:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1588211025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:46:34 compute-0 nova_compute[253538]: 2025-11-25 08:46:34.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:46:34 compute-0 nova_compute[253538]: 2025-11-25 08:46:34.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:46:34 compute-0 nova_compute[253538]: 2025-11-25 08:46:34.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:34 compute-0 nova_compute[253538]: 2025-11-25 08:46:34.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:34 compute-0 nova_compute[253538]: 2025-11-25 08:46:34.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:34 compute-0 nova_compute[253538]: 2025-11-25 08:46:34.578 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:46:34 compute-0 nova_compute[253538]: 2025-11-25 08:46:34.578 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:34 compute-0 nova_compute[253538]: 2025-11-25 08:46:34.850 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.030 253542 DEBUG nova.network.neutron [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Successfully created port: 7246ed42-6ec3-42e8-9b9d-12606aeeb43c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:46:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:46:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/222939804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.143 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:35 compute-0 ceph-mon[75015]: pgmap v1951: 321 pgs: 321 active+clean; 88 MiB data, 699 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.343 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.406 253542 DEBUG nova.storage.rbd_utils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] resizing rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.483 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.484 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3899MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.484 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.485 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.553 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.554 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.554 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:46:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 88 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 6.1 KiB/s rd, 2.5 KiB/s wr, 10 op/s
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.607 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.834 253542 DEBUG nova.objects.instance [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'migration_context' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.843 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.844 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Ensure instance console log exists: /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.844 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.844 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:35 compute-0 nova_compute[253538]: 2025-11-25 08:46:35.845 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:46:36 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1841820826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:46:36 compute-0 nova_compute[253538]: 2025-11-25 08:46:36.061 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:36 compute-0 nova_compute[253538]: 2025-11-25 08:46:36.068 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:46:36 compute-0 nova_compute[253538]: 2025-11-25 08:46:36.084 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:46:36 compute-0 nova_compute[253538]: 2025-11-25 08:46:36.105 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:46:36 compute-0 nova_compute[253538]: 2025-11-25 08:46:36.106 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/222939804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:46:36 compute-0 ceph-mon[75015]: pgmap v1952: 321 pgs: 321 active+clean; 88 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 6.1 KiB/s rd, 2.5 KiB/s wr, 10 op/s
Nov 25 08:46:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1841820826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:46:36 compute-0 nova_compute[253538]: 2025-11-25 08:46:36.679 253542 DEBUG nova.network.neutron [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Successfully updated port: 7246ed42-6ec3-42e8-9b9d-12606aeeb43c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:46:36 compute-0 nova_compute[253538]: 2025-11-25 08:46:36.736 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:46:36 compute-0 nova_compute[253538]: 2025-11-25 08:46:36.737 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:46:36 compute-0 nova_compute[253538]: 2025-11-25 08:46:36.737 253542 DEBUG nova.network.neutron [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:46:36 compute-0 nova_compute[253538]: 2025-11-25 08:46:36.830 253542 DEBUG nova.compute.manager [req-c11e99ff-d585-46f7-a3d3-fc11f150ab14 req-3721c3a8-fdb1-4c70-b187-1ce9eb079667 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-changed-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:46:36 compute-0 nova_compute[253538]: 2025-11-25 08:46:36.831 253542 DEBUG nova.compute.manager [req-c11e99ff-d585-46f7-a3d3-fc11f150ab14 req-3721c3a8-fdb1-4c70-b187-1ce9eb079667 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Refreshing instance network info cache due to event network-changed-7246ed42-6ec3-42e8-9b9d-12606aeeb43c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:46:36 compute-0 nova_compute[253538]: 2025-11-25 08:46:36.831 253542 DEBUG oslo_concurrency.lockutils [req-c11e99ff-d585-46f7-a3d3-fc11f150ab14 req-3721c3a8-fdb1-4c70-b187-1ce9eb079667 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:46:37 compute-0 nova_compute[253538]: 2025-11-25 08:46:37.100 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:46:37 compute-0 nova_compute[253538]: 2025-11-25 08:46:37.101 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:46:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:46:37 compute-0 nova_compute[253538]: 2025-11-25 08:46:37.338 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 101 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 774 KiB/s wr, 21 op/s
Nov 25 08:46:37 compute-0 nova_compute[253538]: 2025-11-25 08:46:37.783 253542 DEBUG nova.network.neutron [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:46:38 compute-0 nova_compute[253538]: 2025-11-25 08:46:38.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:46:38 compute-0 nova_compute[253538]: 2025-11-25 08:46:38.604 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:46:38 compute-0 ceph-mon[75015]: pgmap v1953: 321 pgs: 321 active+clean; 101 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 774 KiB/s wr, 21 op/s
Nov 25 08:46:38 compute-0 nova_compute[253538]: 2025-11-25 08:46:38.989 253542 DEBUG nova.network.neutron [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.068 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.068 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance network_info: |[{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.068 253542 DEBUG oslo_concurrency.lockutils [req-c11e99ff-d585-46f7-a3d3-fc11f150ab14 req-3721c3a8-fdb1-4c70-b187-1ce9eb079667 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.069 253542 DEBUG nova.network.neutron [req-c11e99ff-d585-46f7-a3d3-fc11f150ab14 req-3721c3a8-fdb1-4c70-b187-1ce9eb079667 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Refreshing network info cache for port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.072 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Start _get_guest_xml network_info=[{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.077 253542 WARNING nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.091 253542 DEBUG nova.virt.libvirt.host [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.092 253542 DEBUG nova.virt.libvirt.host [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.097 253542 DEBUG nova.virt.libvirt.host [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.098 253542 DEBUG nova.virt.libvirt.host [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.099 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.099 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.100 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.100 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.100 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.101 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.101 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.101 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.101 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.102 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.102 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.102 253542 DEBUG nova.virt.hardware [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.107 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:46:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/808212849' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.575 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 103 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 866 KiB/s wr, 24 op/s
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.598 253542 DEBUG nova.storage.rbd_utils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.604 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:39 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/808212849' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:46:39 compute-0 nova_compute[253538]: 2025-11-25 08:46:39.853 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:46:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139228368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.031 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.033 253542 DEBUG nova.virt.libvirt.vif [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-673040864',display_name='tempest-ServersNegativeTestJSON-server-673040864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-673040864',id=104,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-lfthxaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativeTestJSON-740481153-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:46:33Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=0f5e68e6-8f02-4a3a-ac0c-322d82950d98,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.034 253542 DEBUG nova.network.os_vif_util [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.035 253542 DEBUG nova.network.os_vif_util [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.037 253542 DEBUG nova.objects.instance [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.052 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:46:40 compute-0 nova_compute[253538]:   <uuid>0f5e68e6-8f02-4a3a-ac0c-322d82950d98</uuid>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   <name>instance-00000068</name>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersNegativeTestJSON-server-673040864</nova:name>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:46:39</nova:creationTime>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:46:40 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:46:40 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:46:40 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:46:40 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:46:40 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:46:40 compute-0 nova_compute[253538]:         <nova:user uuid="229f02d89c9848d8aaaaab070ce4d179">tempest-ServersNegativeTestJSON-740481153-project-member</nova:user>
Nov 25 08:46:40 compute-0 nova_compute[253538]:         <nova:project uuid="947f731219de435196429037dc94fd56">tempest-ServersNegativeTestJSON-740481153</nova:project>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:46:40 compute-0 nova_compute[253538]:         <nova:port uuid="7246ed42-6ec3-42e8-9b9d-12606aeeb43c">
Nov 25 08:46:40 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <system>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <entry name="serial">0f5e68e6-8f02-4a3a-ac0c-322d82950d98</entry>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <entry name="uuid">0f5e68e6-8f02-4a3a-ac0c-322d82950d98</entry>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     </system>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   <os>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   </os>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   <features>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   </features>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk">
Nov 25 08:46:40 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       </source>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:46:40 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config">
Nov 25 08:46:40 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       </source>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:46:40 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:8c:96:cd"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <target dev="tap7246ed42-6e"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/console.log" append="off"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <video>
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     </video>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:46:40 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:46:40 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:46:40 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:46:40 compute-0 nova_compute[253538]: </domain>
Nov 25 08:46:40 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.054 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Preparing to wait for external event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.054 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.054 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.054 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.055 253542 DEBUG nova.virt.libvirt.vif [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-673040864',display_name='tempest-ServersNegativeTestJSON-server-673040864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-673040864',id=104,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-lfthxaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-Servers
NegativeTestJSON-740481153-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:46:33Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=0f5e68e6-8f02-4a3a-ac0c-322d82950d98,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.055 253542 DEBUG nova.network.os_vif_util [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.056 253542 DEBUG nova.network.os_vif_util [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.056 253542 DEBUG os_vif [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.057 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.057 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.057 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.060 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.061 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7246ed42-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.061 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7246ed42-6e, col_values=(('external_ids', {'iface-id': '7246ed42-6ec3-42e8-9b9d-12606aeeb43c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:96:cd', 'vm-uuid': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.063 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:40 compute-0 NetworkManager[48915]: <info>  [1764060400.0640] manager: (tap7246ed42-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.065 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.070 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.070 253542 INFO os_vif [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e')
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.114 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.114 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.114 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No VIF found with MAC fa:16:3e:8c:96:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.115 253542 INFO nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Using config drive
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.139 253542 DEBUG nova.storage.rbd_utils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:40.531 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.691 253542 INFO nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Creating config drive at /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.695 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphtifg9cm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.745 253542 DEBUG nova.network.neutron [req-c11e99ff-d585-46f7-a3d3-fc11f150ab14 req-3721c3a8-fdb1-4c70-b187-1ce9eb079667 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updated VIF entry in instance network info cache for port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.747 253542 DEBUG nova.network.neutron [req-c11e99ff-d585-46f7-a3d3-fc11f150ab14 req-3721c3a8-fdb1-4c70-b187-1ce9eb079667 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.768 253542 DEBUG oslo_concurrency.lockutils [req-c11e99ff-d585-46f7-a3d3-fc11f150ab14 req-3721c3a8-fdb1-4c70-b187-1ce9eb079667 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:46:40 compute-0 ceph-mon[75015]: pgmap v1954: 321 pgs: 321 active+clean; 103 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 866 KiB/s wr, 24 op/s
Nov 25 08:46:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4139228368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.835 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphtifg9cm" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.873 253542 DEBUG nova.storage.rbd_utils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:40 compute-0 nova_compute[253538]: 2025-11-25 08:46:40.876 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.069 253542 DEBUG oslo_concurrency.processutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.071 253542 INFO nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Deleting local config drive /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config because it was imported into RBD.
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.071 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.071 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.072 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:41 compute-0 kernel: tap7246ed42-6e: entered promiscuous mode
Nov 25 08:46:41 compute-0 NetworkManager[48915]: <info>  [1764060401.1359] manager: (tap7246ed42-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/406)
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.140 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:41 compute-0 ovn_controller[152859]: 2025-11-25T08:46:41Z|00991|binding|INFO|Claiming lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c for this chassis.
Nov 25 08:46:41 compute-0 ovn_controller[152859]: 2025-11-25T08:46:41Z|00992|binding|INFO|7246ed42-6ec3-42e8-9b9d-12606aeeb43c: Claiming fa:16:3e:8c:96:cd 10.100.0.3
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.144 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.158 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:96:cd 10.100.0.3'], port_security=['fa:16:3e:8c:96:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7246ed42-6ec3-42e8-9b9d-12606aeeb43c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.159 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 bound to our chassis
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.161 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 08:46:41 compute-0 systemd-machined[215790]: New machine qemu-128-instance-00000068.
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.177 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[013b6b67-0571-4a29-85a1-6828704011aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.178 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa04d86f-71 in ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.180 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa04d86f-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.180 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[78bb0179-c4b6-4513-a7df-062391567210]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.181 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f08286f9-4af0-44bf-85dc-c6304c5a49f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.197 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[fdcb2576-de8b-4734-9d9b-35baa3871cba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.203 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:41 compute-0 systemd[1]: Started Virtual Machine qemu-128-instance-00000068.
Nov 25 08:46:41 compute-0 ovn_controller[152859]: 2025-11-25T08:46:41Z|00993|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c ovn-installed in OVS
Nov 25 08:46:41 compute-0 ovn_controller[152859]: 2025-11-25T08:46:41Z|00994|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c up in Southbound
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.208 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:41 compute-0 systemd-udevd[357076]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.218 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5d885432-af7f-47cb-b75e-a6e1b8ff09af]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:41 compute-0 NetworkManager[48915]: <info>  [1764060401.2419] device (tap7246ed42-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:46:41 compute-0 NetworkManager[48915]: <info>  [1764060401.2428] device (tap7246ed42-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.256 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[328ba119-9ca9-48bf-949d-716306419899]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.261 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4deaa3-cd5c-4145-a385-8da32f22ba5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:41 compute-0 NetworkManager[48915]: <info>  [1764060401.2631] manager: (tapaa04d86f-70): new Veth device (/org/freedesktop/NetworkManager/Devices/407)
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.298 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[36870340-57fb-4736-b14e-ab1fa49d77a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.301 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[47f31f0c-e9d3-4f5e-a7bc-de0c059c0068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:41 compute-0 NetworkManager[48915]: <info>  [1764060401.3229] device (tapaa04d86f-70): carrier: link connected
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.328 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a1fb19fa-dd0b-4ffe-b782-0d0cdbe55226]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.344 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[114d45d0-4e9d-4e26-b036-09619672ba5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa04d86f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:d2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575592, 'reachable_time': 15292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357106, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.359 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e188277f-26bf-4f48-84bb-e8925bd0d281]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:d2af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575592, 'tstamp': 575592}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357107, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.377 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8315cd77-63da-469b-9553-ab4319e194da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa04d86f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:d2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575592, 'reachable_time': 15292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357108, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.407 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3306347f-0412-4f94-a74f-c2fb43686976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.473 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e30ec9a3-0c92-4dd0-8753-241227d5fa51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.474 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa04d86f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.474 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.475 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa04d86f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.476 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:41 compute-0 kernel: tapaa04d86f-70: entered promiscuous mode
Nov 25 08:46:41 compute-0 NetworkManager[48915]: <info>  [1764060401.4771] manager: (tapaa04d86f-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.478 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.479 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa04d86f-70, col_values=(('external_ids', {'iface-id': 'bedad8d3-cb44-47dd-87a4-c24448506880'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.480 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:41 compute-0 ovn_controller[152859]: 2025-11-25T08:46:41Z|00995|binding|INFO|Releasing lport bedad8d3-cb44-47dd-87a4-c24448506880 from this chassis (sb_readonly=0)
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.498 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.499 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.500 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7331bb-5ac4-4fa8-8f6c-d9ef5c30011f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.501 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:46:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:41.501 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'env', 'PROCESS_TAG=haproxy-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa04d86f-73a3-4b24-9c95-8ec29aa39064.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.569 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060401.5686646, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.570 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Started (Lifecycle Event)
Nov 25 08:46:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.587 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.590 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060401.568901, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.590 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Paused (Lifecycle Event)
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.605 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.608 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.619 253542 DEBUG nova.compute.manager [req-bba16e52-ea9a-49a6-9756-131849133e03 req-87d8e408-f1c5-465e-8f1a-ac5dc901fcb3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.619 253542 DEBUG oslo_concurrency.lockutils [req-bba16e52-ea9a-49a6-9756-131849133e03 req-87d8e408-f1c5-465e-8f1a-ac5dc901fcb3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.620 253542 DEBUG oslo_concurrency.lockutils [req-bba16e52-ea9a-49a6-9756-131849133e03 req-87d8e408-f1c5-465e-8f1a-ac5dc901fcb3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.620 253542 DEBUG oslo_concurrency.lockutils [req-bba16e52-ea9a-49a6-9756-131849133e03 req-87d8e408-f1c5-465e-8f1a-ac5dc901fcb3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.620 253542 DEBUG nova.compute.manager [req-bba16e52-ea9a-49a6-9756-131849133e03 req-87d8e408-f1c5-465e-8f1a-ac5dc901fcb3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Processing event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.621 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.624 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.625 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060401.6240888, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.625 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Resumed (Lifecycle Event)
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.627 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.630 253542 INFO nova.virt.libvirt.driver [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance spawned successfully.
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.630 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.648 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.652 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.653 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.653 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.654 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.654 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.654 253542 DEBUG nova.virt.libvirt.driver [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.658 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.686 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.713 253542 INFO nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Took 8.15 seconds to spawn the instance on the hypervisor.
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.714 253542 DEBUG nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.777 253542 INFO nova.compute.manager [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Took 9.13 seconds to build instance.
Nov 25 08:46:41 compute-0 nova_compute[253538]: 2025-11-25 08:46:41.791 253542 DEBUG oslo_concurrency.lockutils [None req-5b5eacf4-c6da-49cf-9bde-b1ad24f0d0e3 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:41 compute-0 podman[357183]: 2025-11-25 08:46:41.924426933 +0000 UTC m=+0.058518737 container create 7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 08:46:41 compute-0 systemd[1]: Started libpod-conmon-7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf.scope.
Nov 25 08:46:41 compute-0 podman[357183]: 2025-11-25 08:46:41.890718941 +0000 UTC m=+0.024810765 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:46:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5392b1f619a47826c2c6fed409eebb3fb96b83ac7eec8e199c10f4e6b9e7199c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:46:42 compute-0 podman[357183]: 2025-11-25 08:46:42.010519549 +0000 UTC m=+0.144611373 container init 7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 08:46:42 compute-0 podman[357183]: 2025-11-25 08:46:42.021341399 +0000 UTC m=+0.155433203 container start 7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 08:46:42 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[357198]: [NOTICE]   (357204) : New worker (357206) forked
Nov 25 08:46:42 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[357198]: [NOTICE]   (357204) : Loading success.
Nov 25 08:46:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:46:42 compute-0 nova_compute[253538]: 2025-11-25 08:46:42.339 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:42 compute-0 ceph-mon[75015]: pgmap v1955: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 08:46:43 compute-0 sshd-session[357201]: Invalid user csgoserver from 45.202.211.6 port 59366
Nov 25 08:46:43 compute-0 sshd-session[357201]: Received disconnect from 45.202.211.6 port 59366:11: Bye Bye [preauth]
Nov 25 08:46:43 compute-0 sshd-session[357201]: Disconnected from invalid user csgoserver 45.202.211.6 port 59366 [preauth]
Nov 25 08:46:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 08:46:43 compute-0 nova_compute[253538]: 2025-11-25 08:46:43.809 253542 DEBUG nova.compute.manager [req-a2a157d0-7e35-4d69-b341-8dd82fe063f5 req-f6255b50-63fa-4d29-a17c-c158bbcc41bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:46:43 compute-0 nova_compute[253538]: 2025-11-25 08:46:43.810 253542 DEBUG oslo_concurrency.lockutils [req-a2a157d0-7e35-4d69-b341-8dd82fe063f5 req-f6255b50-63fa-4d29-a17c-c158bbcc41bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:43 compute-0 nova_compute[253538]: 2025-11-25 08:46:43.810 253542 DEBUG oslo_concurrency.lockutils [req-a2a157d0-7e35-4d69-b341-8dd82fe063f5 req-f6255b50-63fa-4d29-a17c-c158bbcc41bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:43 compute-0 nova_compute[253538]: 2025-11-25 08:46:43.810 253542 DEBUG oslo_concurrency.lockutils [req-a2a157d0-7e35-4d69-b341-8dd82fe063f5 req-f6255b50-63fa-4d29-a17c-c158bbcc41bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:43 compute-0 nova_compute[253538]: 2025-11-25 08:46:43.810 253542 DEBUG nova.compute.manager [req-a2a157d0-7e35-4d69-b341-8dd82fe063f5 req-f6255b50-63fa-4d29-a17c-c158bbcc41bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:46:43 compute-0 nova_compute[253538]: 2025-11-25 08:46:43.810 253542 WARNING nova.compute.manager [req-a2a157d0-7e35-4d69-b341-8dd82fe063f5 req-f6255b50-63fa-4d29-a17c-c158bbcc41bd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state active and task_state None.
Nov 25 08:46:44 compute-0 ceph-mon[75015]: pgmap v1956: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 08:46:45 compute-0 nova_compute[253538]: 2025-11-25 08:46:45.065 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 75 op/s
Nov 25 08:46:45 compute-0 nova_compute[253538]: 2025-11-25 08:46:45.799 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:45 compute-0 nova_compute[253538]: 2025-11-25 08:46:45.800 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:45 compute-0 nova_compute[253538]: 2025-11-25 08:46:45.818 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:46:45 compute-0 nova_compute[253538]: 2025-11-25 08:46:45.894 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:45 compute-0 nova_compute[253538]: 2025-11-25 08:46:45.894 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:45 compute-0 nova_compute[253538]: 2025-11-25 08:46:45.906 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:46:45 compute-0 nova_compute[253538]: 2025-11-25 08:46:45.906 253542 INFO nova.compute.claims [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.044 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:46:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3307040062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.469 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.476 253542 DEBUG nova.compute.provider_tree [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.492 253542 DEBUG nova.scheduler.client.report [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.514 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.515 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.559 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.560 253542 DEBUG nova.network.neutron [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.604 253542 INFO nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.639 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.764 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.766 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.767 253542 INFO nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Creating image(s)
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.809 253542 DEBUG nova.storage.rbd_utils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.834 253542 DEBUG nova.storage.rbd_utils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.857 253542 DEBUG nova.storage.rbd_utils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.861 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:46 compute-0 ceph-mon[75015]: pgmap v1957: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 75 op/s
Nov 25 08:46:46 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3307040062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.900 253542 DEBUG nova.policy [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f02d89c9848d8aaaaab070ce4d179', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '947f731219de435196429037dc94fd56', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.938 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.939 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.939 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.940 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.963 253542 DEBUG nova.storage.rbd_utils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:46 compute-0 nova_compute[253538]: 2025-11-25 08:46:46.968 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:46:47 compute-0 nova_compute[253538]: 2025-11-25 08:46:47.281 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:47 compute-0 nova_compute[253538]: 2025-11-25 08:46:47.343 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:47 compute-0 nova_compute[253538]: 2025-11-25 08:46:47.349 253542 DEBUG nova.storage.rbd_utils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] resizing rbd image 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:46:47 compute-0 nova_compute[253538]: 2025-11-25 08:46:47.448 253542 DEBUG nova.objects.instance [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'migration_context' on Instance uuid 46491e7b-1f61-45bf-a185-2a6b9dfb7258 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:46:47 compute-0 nova_compute[253538]: 2025-11-25 08:46:47.462 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:46:47 compute-0 nova_compute[253538]: 2025-11-25 08:46:47.462 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Ensure instance console log exists: /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:46:47 compute-0 nova_compute[253538]: 2025-11-25 08:46:47.463 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:47 compute-0 nova_compute[253538]: 2025-11-25 08:46:47.463 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:47 compute-0 nova_compute[253538]: 2025-11-25 08:46:47.463 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Nov 25 08:46:47 compute-0 nova_compute[253538]: 2025-11-25 08:46:47.885 253542 DEBUG nova.network.neutron [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Successfully created port: 51030d00-0656-4e18-a844-07210dd53c67 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:46:48 compute-0 ceph-mon[75015]: pgmap v1958: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Nov 25 08:46:49 compute-0 nova_compute[253538]: 2025-11-25 08:46:49.578 253542 DEBUG nova.network.neutron [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Successfully updated port: 51030d00-0656-4e18-a844-07210dd53c67 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:46:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 140 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 80 op/s
Nov 25 08:46:49 compute-0 nova_compute[253538]: 2025-11-25 08:46:49.600 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "refresh_cache-46491e7b-1f61-45bf-a185-2a6b9dfb7258" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:46:49 compute-0 nova_compute[253538]: 2025-11-25 08:46:49.601 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquired lock "refresh_cache-46491e7b-1f61-45bf-a185-2a6b9dfb7258" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:46:49 compute-0 nova_compute[253538]: 2025-11-25 08:46:49.601 253542 DEBUG nova.network.neutron [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:46:49 compute-0 nova_compute[253538]: 2025-11-25 08:46:49.679 253542 DEBUG nova.compute.manager [req-a7847f34-a297-4579-86f9-16d1d9fede53 req-788158b7-045f-479e-b108-9251caad3d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received event network-changed-51030d00-0656-4e18-a844-07210dd53c67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:46:49 compute-0 nova_compute[253538]: 2025-11-25 08:46:49.679 253542 DEBUG nova.compute.manager [req-a7847f34-a297-4579-86f9-16d1d9fede53 req-788158b7-045f-479e-b108-9251caad3d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Refreshing instance network info cache due to event network-changed-51030d00-0656-4e18-a844-07210dd53c67. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:46:49 compute-0 nova_compute[253538]: 2025-11-25 08:46:49.680 253542 DEBUG oslo_concurrency.lockutils [req-a7847f34-a297-4579-86f9-16d1d9fede53 req-788158b7-045f-479e-b108-9251caad3d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-46491e7b-1f61-45bf-a185-2a6b9dfb7258" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.025 253542 DEBUG nova.network.neutron [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.068 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.790 253542 DEBUG nova.network.neutron [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Updating instance_info_cache with network_info: [{"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.808 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Releasing lock "refresh_cache-46491e7b-1f61-45bf-a185-2a6b9dfb7258" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.808 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Instance network_info: |[{"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.809 253542 DEBUG oslo_concurrency.lockutils [req-a7847f34-a297-4579-86f9-16d1d9fede53 req-788158b7-045f-479e-b108-9251caad3d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-46491e7b-1f61-45bf-a185-2a6b9dfb7258" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.810 253542 DEBUG nova.network.neutron [req-a7847f34-a297-4579-86f9-16d1d9fede53 req-788158b7-045f-479e-b108-9251caad3d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Refreshing network info cache for port 51030d00-0656-4e18-a844-07210dd53c67 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.815 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Start _get_guest_xml network_info=[{"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.823 253542 WARNING nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.836 253542 DEBUG nova.virt.libvirt.host [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.837 253542 DEBUG nova.virt.libvirt.host [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.842 253542 DEBUG nova.virt.libvirt.host [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.843 253542 DEBUG nova.virt.libvirt.host [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.844 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.845 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.846 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.847 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.847 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.848 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.849 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.849 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.850 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.851 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.851 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.852 253542 DEBUG nova.virt.hardware [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:46:50 compute-0 nova_compute[253538]: 2025-11-25 08:46:50.858 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:50 compute-0 ceph-mon[75015]: pgmap v1959: 321 pgs: 321 active+clean; 140 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 80 op/s
Nov 25 08:46:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:46:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3266228349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.350 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.370 253542 DEBUG nova.storage.rbd_utils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.373 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 151 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 90 op/s
Nov 25 08:46:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:46:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3222846228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.823 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.827 253542 DEBUG nova.virt.libvirt.vif [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:46:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1354440524',display_name='tempest-ServersNegativeTestJSON-server-1354440524',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1354440524',id=105,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-94mtar4m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativ
eTestJSON-740481153-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:46:46Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=46491e7b-1f61-45bf-a185-2a6b9dfb7258,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.828 253542 DEBUG nova.network.os_vif_util [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.830 253542 DEBUG nova.network.os_vif_util [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:8f:76,bridge_name='br-int',has_traffic_filtering=True,id=51030d00-0656-4e18-a844-07210dd53c67,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51030d00-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.833 253542 DEBUG nova.objects.instance [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'pci_devices' on Instance uuid 46491e7b-1f61-45bf-a185-2a6b9dfb7258 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.863 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:46:51 compute-0 nova_compute[253538]:   <uuid>46491e7b-1f61-45bf-a185-2a6b9dfb7258</uuid>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   <name>instance-00000069</name>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersNegativeTestJSON-server-1354440524</nova:name>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:46:50</nova:creationTime>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:46:51 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:46:51 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:46:51 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:46:51 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:46:51 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:46:51 compute-0 nova_compute[253538]:         <nova:user uuid="229f02d89c9848d8aaaaab070ce4d179">tempest-ServersNegativeTestJSON-740481153-project-member</nova:user>
Nov 25 08:46:51 compute-0 nova_compute[253538]:         <nova:project uuid="947f731219de435196429037dc94fd56">tempest-ServersNegativeTestJSON-740481153</nova:project>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:46:51 compute-0 nova_compute[253538]:         <nova:port uuid="51030d00-0656-4e18-a844-07210dd53c67">
Nov 25 08:46:51 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <system>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <entry name="serial">46491e7b-1f61-45bf-a185-2a6b9dfb7258</entry>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <entry name="uuid">46491e7b-1f61-45bf-a185-2a6b9dfb7258</entry>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     </system>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   <os>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   </os>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   <features>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   </features>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk">
Nov 25 08:46:51 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       </source>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:46:51 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk.config">
Nov 25 08:46:51 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       </source>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:46:51 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:71:8f:76"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <target dev="tap51030d00-06"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258/console.log" append="off"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <video>
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     </video>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:46:51 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:46:51 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:46:51 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:46:51 compute-0 nova_compute[253538]: </domain>
Nov 25 08:46:51 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.865 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Preparing to wait for external event network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.865 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.865 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.866 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.866 253542 DEBUG nova.virt.libvirt.vif [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:46:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1354440524',display_name='tempest-ServersNegativeTestJSON-server-1354440524',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1354440524',id=105,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-94mtar4m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativeTestJSON-740481153-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:46:46Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=46491e7b-1f61-45bf-a185-2a6b9dfb7258,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.867 253542 DEBUG nova.network.os_vif_util [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.867 253542 DEBUG nova.network.os_vif_util [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:8f:76,bridge_name='br-int',has_traffic_filtering=True,id=51030d00-0656-4e18-a844-07210dd53c67,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51030d00-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.868 253542 DEBUG os_vif [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:8f:76,bridge_name='br-int',has_traffic_filtering=True,id=51030d00-0656-4e18-a844-07210dd53c67,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51030d00-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.869 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.869 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.875 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.876 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51030d00-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.877 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap51030d00-06, col_values=(('external_ids', {'iface-id': '51030d00-0656-4e18-a844-07210dd53c67', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:8f:76', 'vm-uuid': '46491e7b-1f61-45bf-a185-2a6b9dfb7258'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.879 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:51 compute-0 NetworkManager[48915]: <info>  [1764060411.8800] manager: (tap51030d00-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.884 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.887 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.888 253542 INFO os_vif [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:8f:76,bridge_name='br-int',has_traffic_filtering=True,id=51030d00-0656-4e18-a844-07210dd53c67,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51030d00-06')
Nov 25 08:46:51 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3266228349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:46:51 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3222846228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.965 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.966 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.967 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No VIF found with MAC fa:16:3e:71:8f:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.968 253542 INFO nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Using config drive
Nov 25 08:46:51 compute-0 nova_compute[253538]: 2025-11-25 08:46:51.994 253542 DEBUG nova.storage.rbd_utils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:52 compute-0 nova_compute[253538]: 2025-11-25 08:46:52.168 253542 DEBUG nova.network.neutron [req-a7847f34-a297-4579-86f9-16d1d9fede53 req-788158b7-045f-479e-b108-9251caad3d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Updated VIF entry in instance network info cache for port 51030d00-0656-4e18-a844-07210dd53c67. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:46:52 compute-0 nova_compute[253538]: 2025-11-25 08:46:52.169 253542 DEBUG nova.network.neutron [req-a7847f34-a297-4579-86f9-16d1d9fede53 req-788158b7-045f-479e-b108-9251caad3d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Updating instance_info_cache with network_info: [{"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:46:52 compute-0 nova_compute[253538]: 2025-11-25 08:46:52.181 253542 DEBUG oslo_concurrency.lockutils [req-a7847f34-a297-4579-86f9-16d1d9fede53 req-788158b7-045f-479e-b108-9251caad3d67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-46491e7b-1f61-45bf-a185-2a6b9dfb7258" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:46:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:46:52 compute-0 nova_compute[253538]: 2025-11-25 08:46:52.343 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:52 compute-0 nova_compute[253538]: 2025-11-25 08:46:52.407 253542 INFO nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Creating config drive at /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258/disk.config
Nov 25 08:46:52 compute-0 nova_compute[253538]: 2025-11-25 08:46:52.416 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5cv_v4pt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:52 compute-0 nova_compute[253538]: 2025-11-25 08:46:52.583 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5cv_v4pt" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:52 compute-0 nova_compute[253538]: 2025-11-25 08:46:52.610 253542 DEBUG nova.storage.rbd_utils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:46:52 compute-0 nova_compute[253538]: 2025-11-25 08:46:52.665 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258/disk.config 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:46:53 compute-0 ceph-mon[75015]: pgmap v1960: 321 pgs: 321 active+clean; 151 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 90 op/s
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:46:53
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'images', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.mgr', 'vms', 'cephfs.cephfs.data']
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 180 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 25 08:46:53 compute-0 nova_compute[253538]: 2025-11-25 08:46:53.624 253542 DEBUG oslo_concurrency.processutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258/disk.config 46491e7b-1f61-45bf-a185-2a6b9dfb7258_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.959s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:46:53 compute-0 nova_compute[253538]: 2025-11-25 08:46:53.626 253542 INFO nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Deleting local config drive /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258/disk.config because it was imported into RBD.
Nov 25 08:46:53 compute-0 NetworkManager[48915]: <info>  [1764060413.6861] manager: (tap51030d00-06): new Tun device (/org/freedesktop/NetworkManager/Devices/410)
Nov 25 08:46:53 compute-0 kernel: tap51030d00-06: entered promiscuous mode
Nov 25 08:46:53 compute-0 ovn_controller[152859]: 2025-11-25T08:46:53Z|00996|binding|INFO|Claiming lport 51030d00-0656-4e18-a844-07210dd53c67 for this chassis.
Nov 25 08:46:53 compute-0 ovn_controller[152859]: 2025-11-25T08:46:53Z|00997|binding|INFO|51030d00-0656-4e18-a844-07210dd53c67: Claiming fa:16:3e:71:8f:76 10.100.0.8
Nov 25 08:46:53 compute-0 nova_compute[253538]: 2025-11-25 08:46:53.688 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.695 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:8f:76 10.100.0.8'], port_security=['fa:16:3e:71:8f:76 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '46491e7b-1f61-45bf-a185-2a6b9dfb7258', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=51030d00-0656-4e18-a844-07210dd53c67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:46:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.696 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 51030d00-0656-4e18-a844-07210dd53c67 in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 bound to our chassis
Nov 25 08:46:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.698 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 08:46:53 compute-0 ovn_controller[152859]: 2025-11-25T08:46:53Z|00998|binding|INFO|Setting lport 51030d00-0656-4e18-a844-07210dd53c67 ovn-installed in OVS
Nov 25 08:46:53 compute-0 ovn_controller[152859]: 2025-11-25T08:46:53Z|00999|binding|INFO|Setting lport 51030d00-0656-4e18-a844-07210dd53c67 up in Southbound
Nov 25 08:46:53 compute-0 nova_compute[253538]: 2025-11-25 08:46:53.705 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:53 compute-0 nova_compute[253538]: 2025-11-25 08:46:53.707 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.712 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb39712-3f31-4de9-83b9-00955b656f76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:53 compute-0 systemd-udevd[357543]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:46:53 compute-0 systemd-machined[215790]: New machine qemu-129-instance-00000069.
Nov 25 08:46:53 compute-0 NetworkManager[48915]: <info>  [1764060413.7391] device (tap51030d00-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:46:53 compute-0 NetworkManager[48915]: <info>  [1764060413.7402] device (tap51030d00-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:46:53 compute-0 systemd[1]: Started Virtual Machine qemu-129-instance-00000069.
Nov 25 08:46:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.746 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[41080deb-4311-4e1a-80bc-96af4baa2d36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.750 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[530b8a6e-3935-4910-8bfb-8f61df6fd13b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.777 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[557290e2-a00e-420c-8253-810e251e90df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.797 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[04b55135-67b1-46d0-aeaa-20ed663f1a0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa04d86f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:d2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575592, 'reachable_time': 15292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357552, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.813 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd13505-2b43-4c56-9ddc-5b6d60cb7101]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaa04d86f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575604, 'tstamp': 575604}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357555, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaa04d86f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575606, 'tstamp': 575606}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357555, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.815 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa04d86f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:53 compute-0 nova_compute[253538]: 2025-11-25 08:46:53.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:53 compute-0 nova_compute[253538]: 2025-11-25 08:46:53.818 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.818 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa04d86f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.818 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:46:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.819 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa04d86f-70, col_values=(('external_ids', {'iface-id': 'bedad8d3-cb44-47dd-87a4-c24448506880'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:53.819 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:46:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:46:53 compute-0 ovn_controller[152859]: 2025-11-25T08:46:53Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:96:cd 10.100.0.3
Nov 25 08:46:53 compute-0 ovn_controller[152859]: 2025-11-25T08:46:53Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:96:cd 10.100.0.3
Nov 25 08:46:54 compute-0 nova_compute[253538]: 2025-11-25 08:46:54.164 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060414.1637063, 46491e7b-1f61-45bf-a185-2a6b9dfb7258 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:46:54 compute-0 nova_compute[253538]: 2025-11-25 08:46:54.164 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] VM Started (Lifecycle Event)
Nov 25 08:46:54 compute-0 nova_compute[253538]: 2025-11-25 08:46:54.182 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:46:54 compute-0 nova_compute[253538]: 2025-11-25 08:46:54.185 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060414.1638577, 46491e7b-1f61-45bf-a185-2a6b9dfb7258 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:46:54 compute-0 nova_compute[253538]: 2025-11-25 08:46:54.185 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] VM Paused (Lifecycle Event)
Nov 25 08:46:54 compute-0 nova_compute[253538]: 2025-11-25 08:46:54.199 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:46:54 compute-0 nova_compute[253538]: 2025-11-25 08:46:54.202 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:46:54 compute-0 nova_compute[253538]: 2025-11-25 08:46:54.221 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:46:55 compute-0 ceph-mon[75015]: pgmap v1961: 321 pgs: 321 active+clean; 180 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 25 08:46:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 192 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 133 op/s
Nov 25 08:46:55 compute-0 podman[357599]: 2025-11-25 08:46:55.824581071 +0000 UTC m=+0.062768501 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.094 253542 DEBUG nova.compute.manager [req-0e8adc8c-22b6-4162-93f0-37ec3f7f9b27 req-ad0faddb-a365-4a1e-a2c0-4e257879d11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received event network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.095 253542 DEBUG oslo_concurrency.lockutils [req-0e8adc8c-22b6-4162-93f0-37ec3f7f9b27 req-ad0faddb-a365-4a1e-a2c0-4e257879d11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.095 253542 DEBUG oslo_concurrency.lockutils [req-0e8adc8c-22b6-4162-93f0-37ec3f7f9b27 req-ad0faddb-a365-4a1e-a2c0-4e257879d11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.095 253542 DEBUG oslo_concurrency.lockutils [req-0e8adc8c-22b6-4162-93f0-37ec3f7f9b27 req-ad0faddb-a365-4a1e-a2c0-4e257879d11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.095 253542 DEBUG nova.compute.manager [req-0e8adc8c-22b6-4162-93f0-37ec3f7f9b27 req-ad0faddb-a365-4a1e-a2c0-4e257879d11a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Processing event network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.096 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.099 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060416.0996416, 46491e7b-1f61-45bf-a185-2a6b9dfb7258 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.099 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] VM Resumed (Lifecycle Event)
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.102 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.105 253542 INFO nova.virt.libvirt.driver [-] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Instance spawned successfully.
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.106 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.126 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.132 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.135 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.136 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.136 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.137 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.137 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.137 253542 DEBUG nova.virt.libvirt.driver [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.164 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.192 253542 INFO nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Took 9.43 seconds to spawn the instance on the hypervisor.
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.192 253542 DEBUG nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.254 253542 INFO nova.compute.manager [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Took 10.39 seconds to build instance.
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.267 253542 DEBUG oslo_concurrency.lockutils [None req-792be2b9-75a6-4252-8518-0add3a6e87f8 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.468s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:56 compute-0 nova_compute[253538]: 2025-11-25 08:46:56.919 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:57 compute-0 ceph-mon[75015]: pgmap v1962: 321 pgs: 321 active+clean; 192 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 133 op/s
Nov 25 08:46:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:46:57 compute-0 nova_compute[253538]: 2025-11-25 08:46:57.346 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 208 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 114 op/s
Nov 25 08:46:58 compute-0 ceph-mon[75015]: pgmap v1963: 321 pgs: 321 active+clean; 208 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 114 op/s
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.190 253542 DEBUG nova.compute.manager [req-2514c12d-85fa-45e0-90fd-acb6c48907e3 req-fefbea91-014a-4d39-972f-b534ee377e05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received event network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.191 253542 DEBUG oslo_concurrency.lockutils [req-2514c12d-85fa-45e0-90fd-acb6c48907e3 req-fefbea91-014a-4d39-972f-b534ee377e05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.191 253542 DEBUG oslo_concurrency.lockutils [req-2514c12d-85fa-45e0-90fd-acb6c48907e3 req-fefbea91-014a-4d39-972f-b534ee377e05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.192 253542 DEBUG oslo_concurrency.lockutils [req-2514c12d-85fa-45e0-90fd-acb6c48907e3 req-fefbea91-014a-4d39-972f-b534ee377e05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.192 253542 DEBUG nova.compute.manager [req-2514c12d-85fa-45e0-90fd-acb6c48907e3 req-fefbea91-014a-4d39-972f-b534ee377e05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] No waiting events found dispatching network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.192 253542 WARNING nova.compute.manager [req-2514c12d-85fa-45e0-90fd-acb6c48907e3 req-fefbea91-014a-4d39-972f-b534ee377e05 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received unexpected event network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 for instance with vm_state active and task_state None.
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.454 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.455 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.455 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.456 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.456 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.457 253542 INFO nova.compute.manager [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Terminating instance
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.458 253542 DEBUG nova.compute.manager [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:46:58 compute-0 kernel: tap51030d00-06 (unregistering): left promiscuous mode
Nov 25 08:46:58 compute-0 NetworkManager[48915]: <info>  [1764060418.4968] device (tap51030d00-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.508 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:58 compute-0 ovn_controller[152859]: 2025-11-25T08:46:58Z|01000|binding|INFO|Releasing lport 51030d00-0656-4e18-a844-07210dd53c67 from this chassis (sb_readonly=0)
Nov 25 08:46:58 compute-0 ovn_controller[152859]: 2025-11-25T08:46:58Z|01001|binding|INFO|Setting lport 51030d00-0656-4e18-a844-07210dd53c67 down in Southbound
Nov 25 08:46:58 compute-0 ovn_controller[152859]: 2025-11-25T08:46:58Z|01002|binding|INFO|Removing iface tap51030d00-06 ovn-installed in OVS
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.510 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.515 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:8f:76 10.100.0.8'], port_security=['fa:16:3e:71:8f:76 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '46491e7b-1f61-45bf-a185-2a6b9dfb7258', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=51030d00-0656-4e18-a844-07210dd53c67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:46:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.516 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 51030d00-0656-4e18-a844-07210dd53c67 in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 unbound from our chassis
Nov 25 08:46:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.517 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.523 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.549 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f388c3c3-3acc-419c-b499-2d77fcebd72b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:58 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000069.scope: Deactivated successfully.
Nov 25 08:46:58 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000069.scope: Consumed 2.871s CPU time.
Nov 25 08:46:58 compute-0 systemd-machined[215790]: Machine qemu-129-instance-00000069 terminated.
Nov 25 08:46:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.587 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[01fe2583-eff8-4b27-805c-b76be726ed59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.589 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a1567a5f-121b-4d45-bb43-0dfcaada992e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.620 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[087c58b1-a107-48dd-abe0-8afa243227cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.643 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[83b12c23-6c31-4c07-9ebb-14f1527013ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa04d86f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:d2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575592, 'reachable_time': 15292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357630, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.664 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1809fcfd-fdbb-4f88-95ad-89441c920d82]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaa04d86f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575604, 'tstamp': 575604}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357631, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaa04d86f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575606, 'tstamp': 575606}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357631, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:46:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.666 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa04d86f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.668 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.671 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.672 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa04d86f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.672 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:46:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.673 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa04d86f-70, col_values=(('external_ids', {'iface-id': 'bedad8d3-cb44-47dd-87a4-c24448506880'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:46:58.673 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.706 253542 INFO nova.virt.libvirt.driver [-] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Instance destroyed successfully.
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.706 253542 DEBUG nova.objects.instance [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'resources' on Instance uuid 46491e7b-1f61-45bf-a185-2a6b9dfb7258 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.718 253542 DEBUG nova.virt.libvirt.vif [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:46:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1354440524',display_name='tempest-ServersNegativeTestJSON-server-1354440524',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1354440524',id=105,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:46:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-94mtar4m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativeTestJSON-740481153-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:46:56Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=46491e7b-1f61-45bf-a185-2a6b9dfb7258,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.718 253542 DEBUG nova.network.os_vif_util [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "51030d00-0656-4e18-a844-07210dd53c67", "address": "fa:16:3e:71:8f:76", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51030d00-06", "ovs_interfaceid": "51030d00-0656-4e18-a844-07210dd53c67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.719 253542 DEBUG nova.network.os_vif_util [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:8f:76,bridge_name='br-int',has_traffic_filtering=True,id=51030d00-0656-4e18-a844-07210dd53c67,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51030d00-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.720 253542 DEBUG os_vif [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:8f:76,bridge_name='br-int',has_traffic_filtering=True,id=51030d00-0656-4e18-a844-07210dd53c67,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51030d00-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.721 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.722 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51030d00-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.724 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:46:58 compute-0 nova_compute[253538]: 2025-11-25 08:46:58.729 253542 INFO os_vif [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:8f:76,bridge_name='br-int',has_traffic_filtering=True,id=51030d00-0656-4e18-a844-07210dd53c67,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51030d00-06')
Nov 25 08:46:59 compute-0 nova_compute[253538]: 2025-11-25 08:46:59.165 253542 INFO nova.virt.libvirt.driver [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Deleting instance files /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258_del
Nov 25 08:46:59 compute-0 nova_compute[253538]: 2025-11-25 08:46:59.166 253542 INFO nova.virt.libvirt.driver [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Deletion of /var/lib/nova/instances/46491e7b-1f61-45bf-a185-2a6b9dfb7258_del complete
Nov 25 08:46:59 compute-0 nova_compute[253538]: 2025-11-25 08:46:59.248 253542 INFO nova.compute.manager [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 25 08:46:59 compute-0 nova_compute[253538]: 2025-11-25 08:46:59.249 253542 DEBUG oslo.service.loopingcall [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:46:59 compute-0 nova_compute[253538]: 2025-11-25 08:46:59.249 253542 DEBUG nova.compute.manager [-] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:46:59 compute-0 nova_compute[253538]: 2025-11-25 08:46:59.249 253542 DEBUG nova.network.neutron [-] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:46:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 208 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 25 08:46:59 compute-0 podman[357663]: 2025-11-25 08:46:59.839185828 +0000 UTC m=+0.076982012 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.299 253542 DEBUG nova.compute.manager [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received event network-vif-unplugged-51030d00-0656-4e18-a844-07210dd53c67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.300 253542 DEBUG oslo_concurrency.lockutils [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.300 253542 DEBUG oslo_concurrency.lockutils [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.300 253542 DEBUG oslo_concurrency.lockutils [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.300 253542 DEBUG nova.compute.manager [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] No waiting events found dispatching network-vif-unplugged-51030d00-0656-4e18-a844-07210dd53c67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.301 253542 DEBUG nova.compute.manager [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received event network-vif-unplugged-51030d00-0656-4e18-a844-07210dd53c67 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.301 253542 DEBUG nova.compute.manager [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received event network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.301 253542 DEBUG oslo_concurrency.lockutils [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.301 253542 DEBUG oslo_concurrency.lockutils [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.301 253542 DEBUG oslo_concurrency.lockutils [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.301 253542 DEBUG nova.compute.manager [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] No waiting events found dispatching network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.301 253542 WARNING nova.compute.manager [req-05033038-ba5b-479b-9a33-3471d099c027 req-c1a9f9e4-894d-401c-bcf1-04693587a820 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received unexpected event network-vif-plugged-51030d00-0656-4e18-a844-07210dd53c67 for instance with vm_state active and task_state deleting.
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.424 253542 DEBUG nova.network.neutron [-] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.438 253542 INFO nova.compute.manager [-] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Took 1.19 seconds to deallocate network for instance.
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.475 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.475 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.488 253542 DEBUG nova.compute.manager [req-b0a575b9-535f-46e9-9990-d0f9e6cbe346 req-7f07f70f-e124-4c5f-a4f3-26f0dfb13033 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Received event network-vif-deleted-51030d00-0656-4e18-a844-07210dd53c67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:47:00 compute-0 nova_compute[253538]: 2025-11-25 08:47:00.574 253542 DEBUG oslo_concurrency.processutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:47:00 compute-0 ceph-mon[75015]: pgmap v1964: 321 pgs: 321 active+clean; 208 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 25 08:47:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:47:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2261886896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:47:01 compute-0 nova_compute[253538]: 2025-11-25 08:47:01.082 253542 DEBUG oslo_concurrency.processutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:47:01 compute-0 nova_compute[253538]: 2025-11-25 08:47:01.090 253542 DEBUG nova.compute.provider_tree [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:47:01 compute-0 nova_compute[253538]: 2025-11-25 08:47:01.105 253542 DEBUG nova.scheduler.client.report [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:47:01 compute-0 nova_compute[253538]: 2025-11-25 08:47:01.128 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:47:01 compute-0 nova_compute[253538]: 2025-11-25 08:47:01.160 253542 INFO nova.scheduler.client.report [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Deleted allocations for instance 46491e7b-1f61-45bf-a185-2a6b9dfb7258
Nov 25 08:47:01 compute-0 nova_compute[253538]: 2025-11-25 08:47:01.217 253542 DEBUG oslo_concurrency.lockutils [None req-d7327ad1-4a1f-44da-a984-e6a13ef73fe4 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "46491e7b-1f61-45bf-a185-2a6b9dfb7258" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:47:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 202 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 860 KiB/s rd, 3.8 MiB/s wr, 133 op/s
Nov 25 08:47:01 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2261886896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:47:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:47:02 compute-0 nova_compute[253538]: 2025-11-25 08:47:02.348 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:02 compute-0 ceph-mon[75015]: pgmap v1965: 321 pgs: 321 active+clean; 202 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 860 KiB/s rd, 3.8 MiB/s wr, 133 op/s
Nov 25 08:47:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 176 op/s
Nov 25 08:47:03 compute-0 nova_compute[253538]: 2025-11-25 08:47:03.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:03 compute-0 podman[357706]: 2025-11-25 08:47:03.851532027 +0000 UTC m=+0.097817921 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007559663345705542 of space, bias 1.0, pg target 0.22678990037116625 quantized to 32 (current 32)
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:47:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:47:04 compute-0 ceph-mon[75015]: pgmap v1966: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 176 op/s
Nov 25 08:47:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 161 op/s
Nov 25 08:47:06 compute-0 ceph-mon[75015]: pgmap v1967: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 161 op/s
Nov 25 08:47:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:47:07 compute-0 nova_compute[253538]: 2025-11-25 08:47:07.349 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 127 op/s
Nov 25 08:47:08 compute-0 ceph-mon[75015]: pgmap v1968: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 127 op/s
Nov 25 08:47:08 compute-0 nova_compute[253538]: 2025-11-25 08:47:08.728 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 KiB/s wr, 102 op/s
Nov 25 08:47:10 compute-0 ceph-mon[75015]: pgmap v1969: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 KiB/s wr, 102 op/s
Nov 25 08:47:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 67 KiB/s wr, 96 op/s
Nov 25 08:47:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:47:12 compute-0 nova_compute[253538]: 2025-11-25 08:47:12.351 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:13 compute-0 ceph-mon[75015]: pgmap v1970: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 67 KiB/s wr, 96 op/s
Nov 25 08:47:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 56 op/s
Nov 25 08:47:13 compute-0 nova_compute[253538]: 2025-11-25 08:47:13.705 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060418.7043505, 46491e7b-1f61-45bf-a185-2a6b9dfb7258 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:47:13 compute-0 nova_compute[253538]: 2025-11-25 08:47:13.706 253542 INFO nova.compute.manager [-] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] VM Stopped (Lifecycle Event)
Nov 25 08:47:13 compute-0 nova_compute[253538]: 2025-11-25 08:47:13.722 253542 DEBUG nova.compute.manager [None req-6c397927-9bbc-48f3-82f4-5cd8fab880c2 - - - - - -] [instance: 46491e7b-1f61-45bf-a185-2a6b9dfb7258] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:47:13 compute-0 nova_compute[253538]: 2025-11-25 08:47:13.747 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:15 compute-0 ceph-mon[75015]: pgmap v1971: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 56 op/s
Nov 25 08:47:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 25 08:47:16 compute-0 sudo[357735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:47:16 compute-0 sudo[357735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:16 compute-0 sudo[357735]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:16 compute-0 sudo[357760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:47:16 compute-0 sudo[357760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:16 compute-0 sudo[357760]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:16 compute-0 sudo[357785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:47:16 compute-0 sudo[357785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:16 compute-0 sudo[357785]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:17 compute-0 sudo[357810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:47:17 compute-0 sudo[357810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:17 compute-0 ceph-mon[75015]: pgmap v1972: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 25 08:47:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:47:17 compute-0 nova_compute[253538]: 2025-11-25 08:47:17.354 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:17 compute-0 sudo[357810]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:47:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:47:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:47:17 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:47:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:47:17 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:47:17 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c77da09d-3b00-4d8d-abad-c88895c2ecf3 does not exist
Nov 25 08:47:17 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev b63b025c-8960-4dfa-8bec-ad279b45aa06 does not exist
Nov 25 08:47:17 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c5effa08-5f89-4848-9d34-57b223cc7fc0 does not exist
Nov 25 08:47:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:47:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:47:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:47:17 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:47:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:47:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:47:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 08:47:17 compute-0 sudo[357867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:47:17 compute-0 sudo[357867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:17 compute-0 sudo[357867]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:17 compute-0 sudo[357892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:47:17 compute-0 sudo[357892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:17 compute-0 sudo[357892]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:17 compute-0 sudo[357917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:47:17 compute-0 sudo[357917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:17 compute-0 sudo[357917]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:17 compute-0 sudo[357942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:47:17 compute-0 sudo[357942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:18 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:47:18 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:47:18 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:47:18 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:47:18 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:47:18 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:47:18 compute-0 podman[358008]: 2025-11-25 08:47:18.217610452 +0000 UTC m=+0.051772848 container create 542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williams, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 08:47:18 compute-0 systemd[1]: Started libpod-conmon-542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674.scope.
Nov 25 08:47:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:47:18 compute-0 podman[358008]: 2025-11-25 08:47:18.192670034 +0000 UTC m=+0.026832460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:47:18 compute-0 podman[358008]: 2025-11-25 08:47:18.293917376 +0000 UTC m=+0.128079802 container init 542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williams, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:47:18 compute-0 podman[358008]: 2025-11-25 08:47:18.302841515 +0000 UTC m=+0.137003921 container start 542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williams, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:47:18 compute-0 podman[358008]: 2025-11-25 08:47:18.305861345 +0000 UTC m=+0.140023751 container attach 542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 25 08:47:18 compute-0 elastic_williams[358022]: 167 167
Nov 25 08:47:18 compute-0 systemd[1]: libpod-542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674.scope: Deactivated successfully.
Nov 25 08:47:18 compute-0 podman[358008]: 2025-11-25 08:47:18.309834542 +0000 UTC m=+0.143996948 container died 542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:47:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e1a19815b431335eb246fac309316f9715cd86e8ba2194b67d6d035798373e1-merged.mount: Deactivated successfully.
Nov 25 08:47:18 compute-0 podman[358008]: 2025-11-25 08:47:18.353160442 +0000 UTC m=+0.187322848 container remove 542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williams, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 08:47:18 compute-0 systemd[1]: libpod-conmon-542a5678a17648af72c28d7911bfe35d78240fee2aa772bd9e0713bed98ee674.scope: Deactivated successfully.
Nov 25 08:47:18 compute-0 podman[358046]: 2025-11-25 08:47:18.511892814 +0000 UTC m=+0.038051451 container create 285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lovelace, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 08:47:18 compute-0 systemd[1]: Started libpod-conmon-285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c.scope.
Nov 25 08:47:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:47:18 compute-0 podman[358046]: 2025-11-25 08:47:18.49570863 +0000 UTC m=+0.021867297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49871b6162d184fa7fbaae731f90cd07d8afc050a8114f33c51f56b1d5822d4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49871b6162d184fa7fbaae731f90cd07d8afc050a8114f33c51f56b1d5822d4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49871b6162d184fa7fbaae731f90cd07d8afc050a8114f33c51f56b1d5822d4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49871b6162d184fa7fbaae731f90cd07d8afc050a8114f33c51f56b1d5822d4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49871b6162d184fa7fbaae731f90cd07d8afc050a8114f33c51f56b1d5822d4a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:47:18 compute-0 podman[358046]: 2025-11-25 08:47:18.607008991 +0000 UTC m=+0.133167618 container init 285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lovelace, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 08:47:18 compute-0 podman[358046]: 2025-11-25 08:47:18.620374009 +0000 UTC m=+0.146532636 container start 285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 08:47:18 compute-0 podman[358046]: 2025-11-25 08:47:18.623522153 +0000 UTC m=+0.149680780 container attach 285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lovelace, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 08:47:18 compute-0 nova_compute[253538]: 2025-11-25 08:47:18.749 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:19 compute-0 ceph-mon[75015]: pgmap v1973: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 08:47:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 08:47:19 compute-0 reverent_lovelace[358063]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:47:19 compute-0 reverent_lovelace[358063]: --> relative data size: 1.0
Nov 25 08:47:19 compute-0 reverent_lovelace[358063]: --> All data devices are unavailable
Nov 25 08:47:19 compute-0 systemd[1]: libpod-285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c.scope: Deactivated successfully.
Nov 25 08:47:19 compute-0 podman[358046]: 2025-11-25 08:47:19.725168197 +0000 UTC m=+1.251326824 container died 285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lovelace, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:47:19 compute-0 systemd[1]: libpod-285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c.scope: Consumed 1.051s CPU time.
Nov 25 08:47:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-49871b6162d184fa7fbaae731f90cd07d8afc050a8114f33c51f56b1d5822d4a-merged.mount: Deactivated successfully.
Nov 25 08:47:19 compute-0 podman[358046]: 2025-11-25 08:47:19.789906461 +0000 UTC m=+1.316065088 container remove 285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 08:47:19 compute-0 systemd[1]: libpod-conmon-285a7482557b6c895a65405550bfe05bd69235f27b035018536786fa162a761c.scope: Deactivated successfully.
Nov 25 08:47:19 compute-0 sudo[357942]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:19 compute-0 sudo[358104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:47:19 compute-0 sudo[358104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:19 compute-0 sudo[358104]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:19 compute-0 sudo[358129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:47:19 compute-0 sudo[358129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:19 compute-0 sudo[358129]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:20 compute-0 sudo[358154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:47:20 compute-0 sudo[358154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:20 compute-0 sudo[358154]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:20 compute-0 sudo[358179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:47:20 compute-0 sudo[358179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:20 compute-0 podman[358244]: 2025-11-25 08:47:20.57139629 +0000 UTC m=+0.056770811 container create 65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bhaskara, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:47:20 compute-0 systemd[1]: Started libpod-conmon-65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd.scope.
Nov 25 08:47:20 compute-0 podman[358244]: 2025-11-25 08:47:20.545097626 +0000 UTC m=+0.030472227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:47:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:47:20 compute-0 podman[358244]: 2025-11-25 08:47:20.677012269 +0000 UTC m=+0.162386830 container init 65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 08:47:20 compute-0 podman[358244]: 2025-11-25 08:47:20.685330792 +0000 UTC m=+0.170705303 container start 65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bhaskara, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 08:47:20 compute-0 modest_bhaskara[358261]: 167 167
Nov 25 08:47:20 compute-0 systemd[1]: libpod-65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd.scope: Deactivated successfully.
Nov 25 08:47:20 compute-0 podman[358244]: 2025-11-25 08:47:20.691213729 +0000 UTC m=+0.176588280 container attach 65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:47:20 compute-0 podman[358244]: 2025-11-25 08:47:20.691670571 +0000 UTC m=+0.177045092 container died 65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:47:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7a541d9857c589bf2bf6ed978a48c6aa6f2d6fb5cf102cf7f88784232aaf503-merged.mount: Deactivated successfully.
Nov 25 08:47:20 compute-0 podman[358244]: 2025-11-25 08:47:20.734093207 +0000 UTC m=+0.219467718 container remove 65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bhaskara, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 08:47:20 compute-0 systemd[1]: libpod-conmon-65475d31d40e1e622fcf176daa092dae24a91217ea693b5a82556166ac939bfd.scope: Deactivated successfully.
Nov 25 08:47:20 compute-0 podman[358285]: 2025-11-25 08:47:20.977361973 +0000 UTC m=+0.077602550 container create e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 08:47:21 compute-0 podman[358285]: 2025-11-25 08:47:20.938108501 +0000 UTC m=+0.038349068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:47:21 compute-0 systemd[1]: Started libpod-conmon-e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613.scope.
Nov 25 08:47:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc654a88ed71455e11c02369ca85ef6c238ffcc166c004b27cedfdf72b7109b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc654a88ed71455e11c02369ca85ef6c238ffcc166c004b27cedfdf72b7109b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc654a88ed71455e11c02369ca85ef6c238ffcc166c004b27cedfdf72b7109b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc654a88ed71455e11c02369ca85ef6c238ffcc166c004b27cedfdf72b7109b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:47:21 compute-0 podman[358285]: 2025-11-25 08:47:21.085060277 +0000 UTC m=+0.185300904 container init e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:47:21 compute-0 podman[358285]: 2025-11-25 08:47:21.100654305 +0000 UTC m=+0.200894842 container start e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 08:47:21 compute-0 podman[358285]: 2025-11-25 08:47:21.108158396 +0000 UTC m=+0.208399033 container attach e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lederberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:47:21 compute-0 ceph-mon[75015]: pgmap v1974: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 08:47:21 compute-0 nova_compute[253538]: 2025-11-25 08:47:21.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:47:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]: {
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:     "0": [
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:         {
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "devices": [
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "/dev/loop3"
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             ],
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "lv_name": "ceph_lv0",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "lv_size": "21470642176",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "name": "ceph_lv0",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "tags": {
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.cluster_name": "ceph",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.crush_device_class": "",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.encrypted": "0",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.osd_id": "0",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.type": "block",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.vdo": "0"
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             },
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "type": "block",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "vg_name": "ceph_vg0"
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:         }
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:     ],
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:     "1": [
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:         {
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "devices": [
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "/dev/loop4"
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             ],
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "lv_name": "ceph_lv1",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "lv_size": "21470642176",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "name": "ceph_lv1",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "tags": {
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.cluster_name": "ceph",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.crush_device_class": "",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.encrypted": "0",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.osd_id": "1",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.type": "block",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.vdo": "0"
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             },
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "type": "block",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "vg_name": "ceph_vg1"
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:         }
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:     ],
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:     "2": [
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:         {
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "devices": [
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "/dev/loop5"
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             ],
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "lv_name": "ceph_lv2",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "lv_size": "21470642176",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "name": "ceph_lv2",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "tags": {
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.cluster_name": "ceph",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.crush_device_class": "",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.encrypted": "0",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.osd_id": "2",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.type": "block",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:                 "ceph.vdo": "0"
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             },
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "type": "block",
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:             "vg_name": "ceph_vg2"
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:         }
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]:     ]
Nov 25 08:47:21 compute-0 eloquent_lederberg[358301]: }
Nov 25 08:47:21 compute-0 systemd[1]: libpod-e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613.scope: Deactivated successfully.
Nov 25 08:47:21 compute-0 podman[358285]: 2025-11-25 08:47:21.936246733 +0000 UTC m=+1.036487280 container died e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 25 08:47:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fc654a88ed71455e11c02369ca85ef6c238ffcc166c004b27cedfdf72b7109b-merged.mount: Deactivated successfully.
Nov 25 08:47:22 compute-0 podman[358285]: 2025-11-25 08:47:22.050838692 +0000 UTC m=+1.151079239 container remove e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:47:22 compute-0 systemd[1]: libpod-conmon-e44ffea12e263bb6c20c56e1dee14b99c1bed34e04c3953f905d6450b25ee613.scope: Deactivated successfully.
Nov 25 08:47:22 compute-0 sudo[358179]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:22 compute-0 sudo[358321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:47:22 compute-0 sudo[358321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:22 compute-0 sudo[358321]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:22 compute-0 sudo[358346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:47:22 compute-0 sudo[358346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:47:22 compute-0 sudo[358346]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:22 compute-0 sudo[358371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:47:22 compute-0 sudo[358371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:22 compute-0 sudo[358371]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:22 compute-0 nova_compute[253538]: 2025-11-25 08:47:22.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:22 compute-0 sudo[358396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:47:22 compute-0 sudo[358396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:22 compute-0 podman[358462]: 2025-11-25 08:47:22.815280805 +0000 UTC m=+0.064542089 container create 11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_agnesi, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 08:47:22 compute-0 systemd[1]: Started libpod-conmon-11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad.scope.
Nov 25 08:47:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:47:22 compute-0 podman[358462]: 2025-11-25 08:47:22.788623242 +0000 UTC m=+0.037884606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:47:22 compute-0 podman[358462]: 2025-11-25 08:47:22.894928358 +0000 UTC m=+0.144189742 container init 11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 08:47:22 compute-0 podman[358462]: 2025-11-25 08:47:22.901809453 +0000 UTC m=+0.151070737 container start 11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_agnesi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 08:47:22 compute-0 podman[358462]: 2025-11-25 08:47:22.905148202 +0000 UTC m=+0.154409496 container attach 11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 08:47:22 compute-0 affectionate_agnesi[358478]: 167 167
Nov 25 08:47:22 compute-0 systemd[1]: libpod-11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad.scope: Deactivated successfully.
Nov 25 08:47:22 compute-0 podman[358462]: 2025-11-25 08:47:22.909959911 +0000 UTC m=+0.159221225 container died 11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_agnesi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:47:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e64e9be6df04ffdeb7b98598942aa01935c85ecf0a134d2c983a3b73f09af5b7-merged.mount: Deactivated successfully.
Nov 25 08:47:22 compute-0 podman[358462]: 2025-11-25 08:47:22.9454028 +0000 UTC m=+0.194664084 container remove 11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:47:22 compute-0 systemd[1]: libpod-conmon-11354ff55bd5b42ea0c862eb544290c290dd12a13405fdad4affe2cc62a79aad.scope: Deactivated successfully.
Nov 25 08:47:23 compute-0 ceph-mon[75015]: pgmap v1975: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 08:47:23 compute-0 podman[358501]: 2025-11-25 08:47:23.153031971 +0000 UTC m=+0.059910046 container create 9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 25 08:47:23 compute-0 systemd[1]: Started libpod-conmon-9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80.scope.
Nov 25 08:47:23 compute-0 podman[358501]: 2025-11-25 08:47:23.124209458 +0000 UTC m=+0.031087583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:47:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d125c475ea58b36811403f83de960be1bb4a9f8b44d4d7a10252caf829d4851d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d125c475ea58b36811403f83de960be1bb4a9f8b44d4d7a10252caf829d4851d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d125c475ea58b36811403f83de960be1bb4a9f8b44d4d7a10252caf829d4851d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d125c475ea58b36811403f83de960be1bb4a9f8b44d4d7a10252caf829d4851d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:47:23 compute-0 podman[358501]: 2025-11-25 08:47:23.272534781 +0000 UTC m=+0.179412896 container init 9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 25 08:47:23 compute-0 podman[358501]: 2025-11-25 08:47:23.279163579 +0000 UTC m=+0.186041654 container start 9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 08:47:23 compute-0 podman[358501]: 2025-11-25 08:47:23.286842344 +0000 UTC m=+0.193720469 container attach 9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 08:47:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:47:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:47:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:47:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:47:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:47:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:47:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 08:47:23 compute-0 nova_compute[253538]: 2025-11-25 08:47:23.751 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:24 compute-0 quirky_wright[358517]: {
Nov 25 08:47:24 compute-0 quirky_wright[358517]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:47:24 compute-0 quirky_wright[358517]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:47:24 compute-0 quirky_wright[358517]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:47:24 compute-0 quirky_wright[358517]:         "osd_id": 1,
Nov 25 08:47:24 compute-0 quirky_wright[358517]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:47:24 compute-0 quirky_wright[358517]:         "type": "bluestore"
Nov 25 08:47:24 compute-0 quirky_wright[358517]:     },
Nov 25 08:47:24 compute-0 quirky_wright[358517]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:47:24 compute-0 quirky_wright[358517]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:47:24 compute-0 quirky_wright[358517]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:47:24 compute-0 quirky_wright[358517]:         "osd_id": 2,
Nov 25 08:47:24 compute-0 quirky_wright[358517]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:47:24 compute-0 quirky_wright[358517]:         "type": "bluestore"
Nov 25 08:47:24 compute-0 quirky_wright[358517]:     },
Nov 25 08:47:24 compute-0 quirky_wright[358517]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:47:24 compute-0 quirky_wright[358517]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:47:24 compute-0 quirky_wright[358517]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:47:24 compute-0 quirky_wright[358517]:         "osd_id": 0,
Nov 25 08:47:24 compute-0 quirky_wright[358517]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:47:24 compute-0 quirky_wright[358517]:         "type": "bluestore"
Nov 25 08:47:24 compute-0 quirky_wright[358517]:     }
Nov 25 08:47:24 compute-0 quirky_wright[358517]: }
Nov 25 08:47:24 compute-0 systemd[1]: libpod-9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80.scope: Deactivated successfully.
Nov 25 08:47:24 compute-0 systemd[1]: libpod-9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80.scope: Consumed 1.125s CPU time.
Nov 25 08:47:24 compute-0 podman[358501]: 2025-11-25 08:47:24.537298164 +0000 UTC m=+1.444176249 container died 9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 08:47:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-d125c475ea58b36811403f83de960be1bb4a9f8b44d4d7a10252caf829d4851d-merged.mount: Deactivated successfully.
Nov 25 08:47:24 compute-0 podman[358501]: 2025-11-25 08:47:24.597077824 +0000 UTC m=+1.503955899 container remove 9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:47:24 compute-0 systemd[1]: libpod-conmon-9bec94195a47e137c4482be98900037a374c2005e2b3a49cd5a21eb5d4cecf80.scope: Deactivated successfully.
Nov 25 08:47:24 compute-0 sudo[358396]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:47:24 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:47:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:47:24 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:47:24 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 48120499-6576-4cff-82dd-9bfa8ec7444b does not exist
Nov 25 08:47:24 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev f17f3596-a74d-47d1-866b-4c21259da89a does not exist
Nov 25 08:47:24 compute-0 sudo[358562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:47:24 compute-0 sudo[358562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:24 compute-0 sudo[358562]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:24 compute-0 sudo[358587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:47:24 compute-0 sudo[358587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:47:24 compute-0 sudo[358587]: pam_unix(sudo:session): session closed for user root
Nov 25 08:47:25 compute-0 ceph-mon[75015]: pgmap v1976: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 08:47:25 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:47:25 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:47:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 08:47:26 compute-0 ceph-mon[75015]: pgmap v1977: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 08:47:26 compute-0 podman[358612]: 2025-11-25 08:47:26.832578635 +0000 UTC m=+0.076087769 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 25 08:47:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.242103) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060447242166, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1292, "num_deletes": 250, "total_data_size": 1879368, "memory_usage": 1905504, "flush_reason": "Manual Compaction"}
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060447256714, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 1850201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40128, "largest_seqno": 41419, "table_properties": {"data_size": 1844180, "index_size": 3288, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 11935, "raw_average_key_size": 18, "raw_value_size": 1832112, "raw_average_value_size": 2797, "num_data_blocks": 147, "num_entries": 655, "num_filter_entries": 655, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060323, "oldest_key_time": 1764060323, "file_creation_time": 1764060447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 14673 microseconds, and 8891 cpu microseconds.
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.256775) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 1850201 bytes OK
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.256800) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.258630) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.258654) EVENT_LOG_v1 {"time_micros": 1764060447258647, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.258677) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 1873554, prev total WAL file size 1873554, number of live WAL files 2.
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.259758) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(1806KB)], [89(9572KB)]
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060447259887, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11652380, "oldest_snapshot_seqno": -1}
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6617 keys, 10933566 bytes, temperature: kUnknown
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060447346370, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 10933566, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10886716, "index_size": 29202, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 169714, "raw_average_key_size": 25, "raw_value_size": 10765498, "raw_average_value_size": 1626, "num_data_blocks": 1166, "num_entries": 6617, "num_filter_entries": 6617, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.346586) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 10933566 bytes
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.347981) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.7 rd, 126.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.3 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(12.2) write-amplify(5.9) OK, records in: 7129, records dropped: 512 output_compression: NoCompression
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.347999) EVENT_LOG_v1 {"time_micros": 1764060447347990, "job": 52, "event": "compaction_finished", "compaction_time_micros": 86529, "compaction_time_cpu_micros": 40105, "output_level": 6, "num_output_files": 1, "total_output_size": 10933566, "num_input_records": 7129, "num_output_records": 6617, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060447348436, "job": 52, "event": "table_file_deletion", "file_number": 91}
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060447350128, "job": 52, "event": "table_file_deletion", "file_number": 89}
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.259540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.350210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.350216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.350219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.350221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:47:27 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:47:27.350223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:47:27 compute-0 nova_compute[253538]: 2025-11-25 08:47:27.408 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:47:28 compute-0 ceph-mon[75015]: pgmap v1978: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:47:28 compute-0 nova_compute[253538]: 2025-11-25 08:47:28.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:47:28 compute-0 nova_compute[253538]: 2025-11-25 08:47:28.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:47:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2182159265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:47:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:47:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2182159265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:47:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2182159265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:47:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2182159265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:47:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:47:30 compute-0 ceph-mon[75015]: pgmap v1979: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:47:30 compute-0 nova_compute[253538]: 2025-11-25 08:47:30.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:47:30 compute-0 nova_compute[253538]: 2025-11-25 08:47:30.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:47:30 compute-0 podman[358633]: 2025-11-25 08:47:30.807203172 +0000 UTC m=+0.062032792 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 08:47:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 08:47:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:47:31.684 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:47:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:47:31.685 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:47:31 compute-0 nova_compute[253538]: 2025-11-25 08:47:31.685 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:47:32 compute-0 nova_compute[253538]: 2025-11-25 08:47:32.410 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:32 compute-0 sshd-session[358653]: Invalid user Admin from 193.32.162.151 port 54748
Nov 25 08:47:32 compute-0 nova_compute[253538]: 2025-11-25 08:47:32.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:47:32 compute-0 nova_compute[253538]: 2025-11-25 08:47:32.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:47:32 compute-0 nova_compute[253538]: 2025-11-25 08:47:32.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:47:32 compute-0 ceph-mon[75015]: pgmap v1980: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 08:47:32 compute-0 sshd-session[358653]: Connection closed by invalid user Admin 193.32.162.151 port 54748 [preauth]
Nov 25 08:47:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:47:32.687 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:47:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 08:47:33 compute-0 nova_compute[253538]: 2025-11-25 08:47:33.758 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:33 compute-0 nova_compute[253538]: 2025-11-25 08:47:33.785 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:47:33 compute-0 nova_compute[253538]: 2025-11-25 08:47:33.786 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:47:33 compute-0 nova_compute[253538]: 2025-11-25 08:47:33.786 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:47:33 compute-0 nova_compute[253538]: 2025-11-25 08:47:33.786 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:47:34 compute-0 ceph-mon[75015]: pgmap v1981: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 08:47:34 compute-0 podman[358655]: 2025-11-25 08:47:34.881379895 +0000 UTC m=+0.125313167 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:47:34 compute-0 nova_compute[253538]: 2025-11-25 08:47:34.972 253542 INFO nova.compute.manager [None req-0b9eab54-e03d-440c-85a1-2fd2c0492551 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Pausing
Nov 25 08:47:34 compute-0 nova_compute[253538]: 2025-11-25 08:47:34.973 253542 DEBUG nova.objects.instance [None req-0b9eab54-e03d-440c-85a1-2fd2c0492551 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'flavor' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:47:35 compute-0 nova_compute[253538]: 2025-11-25 08:47:35.007 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060455.0074933, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:47:35 compute-0 nova_compute[253538]: 2025-11-25 08:47:35.008 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Paused (Lifecycle Event)
Nov 25 08:47:35 compute-0 nova_compute[253538]: 2025-11-25 08:47:35.009 253542 DEBUG nova.compute.manager [None req-0b9eab54-e03d-440c-85a1-2fd2c0492551 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:47:35 compute-0 nova_compute[253538]: 2025-11-25 08:47:35.030 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:47:35 compute-0 nova_compute[253538]: 2025-11-25 08:47:35.035 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:47:35 compute-0 nova_compute[253538]: 2025-11-25 08:47:35.045 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:47:35 compute-0 nova_compute[253538]: 2025-11-25 08:47:35.049 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 25 08:47:35 compute-0 nova_compute[253538]: 2025-11-25 08:47:35.064 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:47:35 compute-0 nova_compute[253538]: 2025-11-25 08:47:35.064 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:47:35 compute-0 nova_compute[253538]: 2025-11-25 08:47:35.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:47:35 compute-0 nova_compute[253538]: 2025-11-25 08:47:35.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:47:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 08:47:36 compute-0 nova_compute[253538]: 2025-11-25 08:47:36.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:47:36 compute-0 nova_compute[253538]: 2025-11-25 08:47:36.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:47:36 compute-0 nova_compute[253538]: 2025-11-25 08:47:36.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:47:36 compute-0 nova_compute[253538]: 2025-11-25 08:47:36.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:47:36 compute-0 nova_compute[253538]: 2025-11-25 08:47:36.575 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:47:36 compute-0 nova_compute[253538]: 2025-11-25 08:47:36.576 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:47:36 compute-0 ceph-mon[75015]: pgmap v1982: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 08:47:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:47:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/602891899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.027 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.124 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.125 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:47:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.315 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.316 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3696MB free_disk=59.942779541015625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.317 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.317 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.378 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.378 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.379 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.393 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.410 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.410 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.412 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.425 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.445 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.476 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:47:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 08:47:37 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/602891899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.777 253542 INFO nova.compute.manager [None req-e51841f2-2494-4049-9023-d82542654b2c 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Unpausing
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.778 253542 DEBUG nova.objects.instance [None req-e51841f2-2494-4049-9023-d82542654b2c 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'flavor' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.805 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060457.8048139, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.805 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Resumed (Lifecycle Event)
Nov 25 08:47:37 compute-0 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.808 253542 DEBUG nova.virt.libvirt.guest [None req-e51841f2-2494-4049-9023-d82542654b2c 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.809 253542 DEBUG nova.compute.manager [None req-e51841f2-2494-4049-9023-d82542654b2c 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.828 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.832 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.854 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] During sync_power_state the instance has a pending task (unpausing). Skip.
Nov 25 08:47:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:47:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2798887891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.972 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.980 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:47:37 compute-0 nova_compute[253538]: 2025-11-25 08:47:37.992 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:47:38 compute-0 nova_compute[253538]: 2025-11-25 08:47:38.014 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:47:38 compute-0 nova_compute[253538]: 2025-11-25 08:47:38.015 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:47:38 compute-0 ceph-mon[75015]: pgmap v1983: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 08:47:38 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2798887891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:47:38 compute-0 nova_compute[253538]: 2025-11-25 08:47:38.760 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:39 compute-0 nova_compute[253538]: 2025-11-25 08:47:39.008 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:47:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 08:47:40 compute-0 nova_compute[253538]: 2025-11-25 08:47:40.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:47:40 compute-0 ceph-mon[75015]: pgmap v1984: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 08:47:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:47:41.072 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:47:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:47:41.072 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:47:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:47:41.073 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:47:41 compute-0 sshd-session[358726]: Received disconnect from 119.96.131.8 port 50784:11:  [preauth]
Nov 25 08:47:41 compute-0 sshd-session[358726]: Disconnected from authenticating user root 119.96.131.8 port 50784 [preauth]
Nov 25 08:47:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 08:47:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:47:42 compute-0 nova_compute[253538]: 2025-11-25 08:47:42.415 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:42 compute-0 ceph-mon[75015]: pgmap v1985: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Nov 25 08:47:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:47:43 compute-0 nova_compute[253538]: 2025-11-25 08:47:43.763 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:44 compute-0 ceph-mon[75015]: pgmap v1986: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:47:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s wr, 1 op/s
Nov 25 08:47:46 compute-0 ceph-mon[75015]: pgmap v1987: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s wr, 1 op/s
Nov 25 08:47:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:47:47 compute-0 nova_compute[253538]: 2025-11-25 08:47:47.417 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Nov 25 08:47:48 compute-0 nova_compute[253538]: 2025-11-25 08:47:48.765 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:48 compute-0 ceph-mon[75015]: pgmap v1988: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Nov 25 08:47:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Nov 25 08:47:50 compute-0 ceph-mon[75015]: pgmap v1989: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Nov 25 08:47:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Nov 25 08:47:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:47:52 compute-0 nova_compute[253538]: 2025-11-25 08:47:52.419 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:52 compute-0 ceph-mon[75015]: pgmap v1990: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:47:53
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'volumes', 'backups', '.mgr', 'default.rgw.meta', 'vms', 'default.rgw.control', 'default.rgw.log']
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Nov 25 08:47:53 compute-0 nova_compute[253538]: 2025-11-25 08:47:53.767 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:47:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:47:54 compute-0 ceph-mon[75015]: pgmap v1991: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Nov 25 08:47:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Nov 25 08:47:56 compute-0 ceph-mon[75015]: pgmap v1992: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Nov 25 08:47:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:47:57 compute-0 nova_compute[253538]: 2025-11-25 08:47:57.420 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1993: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Nov 25 08:47:57 compute-0 podman[358730]: 2025-11-25 08:47:57.827156245 +0000 UTC m=+0.077975079 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 08:47:58 compute-0 nova_compute[253538]: 2025-11-25 08:47:58.078 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:47:58 compute-0 nova_compute[253538]: 2025-11-25 08:47:58.078 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:47:58 compute-0 nova_compute[253538]: 2025-11-25 08:47:58.078 253542 INFO nova.compute.manager [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Shelving
Nov 25 08:47:58 compute-0 nova_compute[253538]: 2025-11-25 08:47:58.093 253542 DEBUG nova.virt.libvirt.driver [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:47:58 compute-0 nova_compute[253538]: 2025-11-25 08:47:58.769 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:47:58 compute-0 ceph-mon[75015]: pgmap v1993: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Nov 25 08:47:59 compute-0 sshd-session[358728]: Received disconnect from 45.202.211.6 port 53498:11: Bye Bye [preauth]
Nov 25 08:47:59 compute-0 sshd-session[358728]: Disconnected from authenticating user root 45.202.211.6 port 53498 [preauth]
Nov 25 08:47:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:48:00 compute-0 kernel: tap7246ed42-6e (unregistering): left promiscuous mode
Nov 25 08:48:00 compute-0 NetworkManager[48915]: <info>  [1764060480.3647] device (tap7246ed42-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:48:00 compute-0 nova_compute[253538]: 2025-11-25 08:48:00.375 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:00 compute-0 ovn_controller[152859]: 2025-11-25T08:48:00Z|01003|binding|INFO|Releasing lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c from this chassis (sb_readonly=0)
Nov 25 08:48:00 compute-0 ovn_controller[152859]: 2025-11-25T08:48:00Z|01004|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c down in Southbound
Nov 25 08:48:00 compute-0 ovn_controller[152859]: 2025-11-25T08:48:00Z|01005|binding|INFO|Removing iface tap7246ed42-6e ovn-installed in OVS
Nov 25 08:48:00 compute-0 nova_compute[253538]: 2025-11-25 08:48:00.378 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:00 compute-0 nova_compute[253538]: 2025-11-25 08:48:00.395 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:00 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000068.scope: Deactivated successfully.
Nov 25 08:48:00 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000068.scope: Consumed 15.717s CPU time.
Nov 25 08:48:00 compute-0 systemd-machined[215790]: Machine qemu-128-instance-00000068 terminated.
Nov 25 08:48:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.464 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:96:cd 10.100.0.3'], port_security=['fa:16:3e:8c:96:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7246ed42-6ec3-42e8-9b9d-12606aeeb43c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:48:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.466 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 unbound from our chassis
Nov 25 08:48:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.468 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa04d86f-73a3-4b24-9c95-8ec29aa39064, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:48:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.469 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b126234e-82cd-49e5-9b28-c32f37514807]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.470 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 namespace which is not needed anymore
Nov 25 08:48:00 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[357198]: [NOTICE]   (357204) : haproxy version is 2.8.14-c23fe91
Nov 25 08:48:00 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[357198]: [NOTICE]   (357204) : path to executable is /usr/sbin/haproxy
Nov 25 08:48:00 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[357198]: [WARNING]  (357204) : Exiting Master process...
Nov 25 08:48:00 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[357198]: [WARNING]  (357204) : Exiting Master process...
Nov 25 08:48:00 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[357198]: [ALERT]    (357204) : Current worker (357206) exited with code 143 (Terminated)
Nov 25 08:48:00 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[357198]: [WARNING]  (357204) : All workers exited. Exiting... (0)
Nov 25 08:48:00 compute-0 systemd[1]: libpod-7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf.scope: Deactivated successfully.
Nov 25 08:48:00 compute-0 podman[358774]: 2025-11-25 08:48:00.630914635 +0000 UTC m=+0.060808950 container died 7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 08:48:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf-userdata-shm.mount: Deactivated successfully.
Nov 25 08:48:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5392b1f619a47826c2c6fed409eebb3fb96b83ac7eec8e199c10f4e6b9e7199c-merged.mount: Deactivated successfully.
Nov 25 08:48:00 compute-0 podman[358774]: 2025-11-25 08:48:00.688563249 +0000 UTC m=+0.118457544 container cleanup 7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 08:48:00 compute-0 systemd[1]: libpod-conmon-7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf.scope: Deactivated successfully.
Nov 25 08:48:00 compute-0 podman[358816]: 2025-11-25 08:48:00.776795032 +0000 UTC m=+0.065287480 container remove 7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 08:48:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.785 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f37f1650-ff9f-4b3e-bd91-5d6a5ecfcba4]: (4, ('Tue Nov 25 08:48:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 (7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf)\n7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf\nTue Nov 25 08:48:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 (7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf)\n7670f239ad1da4d9969c96f4d7b382e932cc33f3700d83bfd2af5d25adafffaf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.787 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b2f437-acd0-4303-bf43-3c84780b545f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.788 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa04d86f-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:00 compute-0 nova_compute[253538]: 2025-11-25 08:48:00.791 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:00 compute-0 kernel: tapaa04d86f-70: left promiscuous mode
Nov 25 08:48:00 compute-0 nova_compute[253538]: 2025-11-25 08:48:00.810 253542 DEBUG nova.compute.manager [req-ee30c98f-aa1f-4416-b9d2-788427a9bdd1 req-9b731722-310d-403a-8475-c24d49cfb9da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:00 compute-0 nova_compute[253538]: 2025-11-25 08:48:00.811 253542 DEBUG oslo_concurrency.lockutils [req-ee30c98f-aa1f-4416-b9d2-788427a9bdd1 req-9b731722-310d-403a-8475-c24d49cfb9da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:00 compute-0 nova_compute[253538]: 2025-11-25 08:48:00.812 253542 DEBUG oslo_concurrency.lockutils [req-ee30c98f-aa1f-4416-b9d2-788427a9bdd1 req-9b731722-310d-403a-8475-c24d49cfb9da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:00 compute-0 nova_compute[253538]: 2025-11-25 08:48:00.812 253542 DEBUG oslo_concurrency.lockutils [req-ee30c98f-aa1f-4416-b9d2-788427a9bdd1 req-9b731722-310d-403a-8475-c24d49cfb9da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:00 compute-0 nova_compute[253538]: 2025-11-25 08:48:00.813 253542 DEBUG nova.compute.manager [req-ee30c98f-aa1f-4416-b9d2-788427a9bdd1 req-9b731722-310d-403a-8475-c24d49cfb9da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:48:00 compute-0 nova_compute[253538]: 2025-11-25 08:48:00.813 253542 WARNING nova.compute.manager [req-ee30c98f-aa1f-4416-b9d2-788427a9bdd1 req-9b731722-310d-403a-8475-c24d49cfb9da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state active and task_state shelving.
Nov 25 08:48:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.814 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2df285cd-c786-46bf-9a81-964a4e668155]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:00 compute-0 nova_compute[253538]: 2025-11-25 08:48:00.814 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.838 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[029e7655-3026-4b1a-96c7-f9b8a8780c2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.840 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[83c845af-e6d4-4204-ade8-133a1639f041]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.867 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[77a406c3-5ab0-422b-bdd0-d8a8995d17a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575584, 'reachable_time': 24835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358836, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:00 compute-0 systemd[1]: run-netns-ovnmeta\x2daa04d86f\x2d73a3\x2d4b24\x2d9c95\x2d8ec29aa39064.mount: Deactivated successfully.
Nov 25 08:48:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.872 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:48:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:00.872 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d04e6fa9-5490-48c6-86e0-be83949c6435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:00 compute-0 ceph-mon[75015]: pgmap v1994: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail
Nov 25 08:48:00 compute-0 podman[358835]: 2025-11-25 08:48:00.962447844 +0000 UTC m=+0.106743510 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 25 08:48:01 compute-0 nova_compute[253538]: 2025-11-25 08:48:01.112 253542 INFO nova.virt.libvirt.driver [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance shutdown successfully after 3 seconds.
Nov 25 08:48:01 compute-0 nova_compute[253538]: 2025-11-25 08:48:01.119 253542 INFO nova.virt.libvirt.driver [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance destroyed successfully.
Nov 25 08:48:01 compute-0 nova_compute[253538]: 2025-11-25 08:48:01.120 253542 DEBUG nova.objects.instance [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:01 compute-0 nova_compute[253538]: 2025-11-25 08:48:01.421 253542 INFO nova.virt.libvirt.driver [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Beginning cold snapshot process
Nov 25 08:48:01 compute-0 nova_compute[253538]: 2025-11-25 08:48:01.540 253542 DEBUG nova.virt.libvirt.imagebackend [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 08:48:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1995: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 8.3 KiB/s wr, 1 op/s
Nov 25 08:48:01 compute-0 nova_compute[253538]: 2025-11-25 08:48:01.731 253542 DEBUG nova.storage.rbd_utils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] creating snapshot(8057da270e024476acbd6ce05785685a) on rbd image(0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:48:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Nov 25 08:48:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Nov 25 08:48:01 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Nov 25 08:48:01 compute-0 nova_compute[253538]: 2025-11-25 08:48:01.945 253542 DEBUG nova.storage.rbd_utils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] cloning vms/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk@8057da270e024476acbd6ce05785685a to images/515c4bcf-552c-4c04-8c0d-ad03b9e9133d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:48:02 compute-0 nova_compute[253538]: 2025-11-25 08:48:02.053 253542 DEBUG nova.storage.rbd_utils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] flattening images/515c4bcf-552c-4c04-8c0d-ad03b9e9133d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:48:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:48:02 compute-0 nova_compute[253538]: 2025-11-25 08:48:02.423 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:02 compute-0 nova_compute[253538]: 2025-11-25 08:48:02.751 253542 DEBUG nova.storage.rbd_utils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] removing snapshot(8057da270e024476acbd6ce05785685a) on rbd image(0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 08:48:02 compute-0 ceph-mon[75015]: pgmap v1995: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 8.3 KiB/s wr, 1 op/s
Nov 25 08:48:02 compute-0 ceph-mon[75015]: osdmap e232: 3 total, 3 up, 3 in
Nov 25 08:48:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Nov 25 08:48:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Nov 25 08:48:02 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Nov 25 08:48:02 compute-0 nova_compute[253538]: 2025-11-25 08:48:02.923 253542 DEBUG nova.compute.manager [req-bd138e37-b9ab-44d6-b9cf-8266fd225f35 req-02cd81cc-9daa-4df8-90b1-8f7699dbb25c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:02 compute-0 nova_compute[253538]: 2025-11-25 08:48:02.924 253542 DEBUG oslo_concurrency.lockutils [req-bd138e37-b9ab-44d6-b9cf-8266fd225f35 req-02cd81cc-9daa-4df8-90b1-8f7699dbb25c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:02 compute-0 nova_compute[253538]: 2025-11-25 08:48:02.924 253542 DEBUG oslo_concurrency.lockutils [req-bd138e37-b9ab-44d6-b9cf-8266fd225f35 req-02cd81cc-9daa-4df8-90b1-8f7699dbb25c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:02 compute-0 nova_compute[253538]: 2025-11-25 08:48:02.925 253542 DEBUG oslo_concurrency.lockutils [req-bd138e37-b9ab-44d6-b9cf-8266fd225f35 req-02cd81cc-9daa-4df8-90b1-8f7699dbb25c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:02 compute-0 nova_compute[253538]: 2025-11-25 08:48:02.925 253542 DEBUG nova.compute.manager [req-bd138e37-b9ab-44d6-b9cf-8266fd225f35 req-02cd81cc-9daa-4df8-90b1-8f7699dbb25c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:48:02 compute-0 nova_compute[253538]: 2025-11-25 08:48:02.926 253542 WARNING nova.compute.manager [req-bd138e37-b9ab-44d6-b9cf-8266fd225f35 req-02cd81cc-9daa-4df8-90b1-8f7699dbb25c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state active and task_state shelving_image_uploading.
Nov 25 08:48:02 compute-0 nova_compute[253538]: 2025-11-25 08:48:02.952 253542 DEBUG nova.storage.rbd_utils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] creating snapshot(snap) on rbd image(515c4bcf-552c-4c04-8c0d-ad03b9e9133d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 08:48:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 188 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Nov 25 08:48:03 compute-0 nova_compute[253538]: 2025-11-25 08:48:03.771 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Nov 25 08:48:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Nov 25 08:48:03 compute-0 ceph-mon[75015]: osdmap e233: 3 total, 3 up, 3 in
Nov 25 08:48:03 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007594002327928631 of space, bias 1.0, pg target 0.22782006983785894 quantized to 32 (current 32)
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0011925165437585467 of space, bias 1.0, pg target 0.357754963127564 quantized to 32 (current 32)
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:48:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:48:04 compute-0 ceph-mon[75015]: pgmap v1998: 321 pgs: 321 active+clean; 188 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Nov 25 08:48:04 compute-0 ceph-mon[75015]: osdmap e234: 3 total, 3 up, 3 in
Nov 25 08:48:05 compute-0 nova_compute[253538]: 2025-11-25 08:48:05.239 253542 INFO nova.virt.libvirt.driver [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Snapshot image upload complete
Nov 25 08:48:05 compute-0 nova_compute[253538]: 2025-11-25 08:48:05.240 253542 DEBUG nova.compute.manager [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:48:05 compute-0 nova_compute[253538]: 2025-11-25 08:48:05.322 253542 INFO nova.compute.manager [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Shelve offloading
Nov 25 08:48:05 compute-0 nova_compute[253538]: 2025-11-25 08:48:05.334 253542 INFO nova.virt.libvirt.driver [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance destroyed successfully.
Nov 25 08:48:05 compute-0 nova_compute[253538]: 2025-11-25 08:48:05.335 253542 DEBUG nova.compute.manager [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:48:05 compute-0 nova_compute[253538]: 2025-11-25 08:48:05.338 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:48:05 compute-0 nova_compute[253538]: 2025-11-25 08:48:05.339 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:48:05 compute-0 nova_compute[253538]: 2025-11-25 08:48:05.339 253542 DEBUG nova.network.neutron [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:48:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 210 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.5 MiB/s wr, 75 op/s
Nov 25 08:48:05 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 08:48:05 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 9129 writes, 41K keys, 9129 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 9129 writes, 9129 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1550 writes, 7194 keys, 1550 commit groups, 1.0 writes per commit group, ingest: 9.45 MB, 0.02 MB/s
                                           Interval WAL: 1550 writes, 1550 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     28.9      1.71              0.17        26    0.066       0      0       0.0       0.0
                                             L6      1/0   10.43 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.9     54.0     44.8      4.26              0.63        25    0.171    135K    14K       0.0       0.0
                                            Sum      1/0   10.43 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.9     38.5     40.3      5.98              0.81        51    0.117    135K    14K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.2     79.1     81.4      0.77              0.24        12    0.064     40K   3102       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     54.0     44.8      4.26              0.63        25    0.171    135K    14K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.9      1.71              0.17        25    0.068       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.048, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.23 GB write, 0.07 MB/s write, 0.22 GB read, 0.06 MB/s read, 6.0 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 304.00 MB usage: 26.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000221 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1750,25.58 MB,8.41591%) FilterBlock(52,386.11 KB,0.124033%) IndexBlock(52,655.14 KB,0.210456%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 08:48:05 compute-0 podman[358999]: 2025-11-25 08:48:05.884797262 +0000 UTC m=+0.134255197 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 08:48:06 compute-0 ceph-mon[75015]: pgmap v2000: 321 pgs: 321 active+clean; 210 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.5 MiB/s wr, 75 op/s
Nov 25 08:48:06 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 08:48:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:48:07 compute-0 nova_compute[253538]: 2025-11-25 08:48:07.425 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 246 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 170 op/s
Nov 25 08:48:08 compute-0 nova_compute[253538]: 2025-11-25 08:48:08.190 253542 DEBUG nova.network.neutron [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:48:08 compute-0 nova_compute[253538]: 2025-11-25 08:48:08.227 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:48:08 compute-0 nova_compute[253538]: 2025-11-25 08:48:08.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:08 compute-0 ceph-mon[75015]: pgmap v2001: 321 pgs: 321 active+clean; 246 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 170 op/s
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.591 253542 INFO nova.virt.libvirt.driver [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance destroyed successfully.
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.592 253542 DEBUG nova.objects.instance [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'resources' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.604 253542 DEBUG nova.virt.libvirt.vif [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-673040864',display_name='tempest-ServersNegativeTestJSON-server-673040864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-673040864',id=104,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:46:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-lfthxaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativeTestJSON-740481153-project-member',shelved_at='2025-11-25T08:48:05.240608',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='515c4bcf-552c-4c04-8c0d-ad03b9e9133d'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:48:01Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=0f5e68e6-8f02-4a3a-ac0c-322d82950d98,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.605 253542 DEBUG nova.network.os_vif_util [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.606 253542 DEBUG nova.network.os_vif_util [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.606 253542 DEBUG os_vif [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.609 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7246ed42-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.612 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.615 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.617 253542 INFO os_vif [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e')
Nov 25 08:48:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 246 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.0 MiB/s wr, 129 op/s
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.713 253542 DEBUG nova.compute.manager [req-c36c9d84-4ec5-44ef-8053-6e278432db47 req-fdb9a605-27ee-452d-92c1-81528b2138f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-changed-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.714 253542 DEBUG nova.compute.manager [req-c36c9d84-4ec5-44ef-8053-6e278432db47 req-fdb9a605-27ee-452d-92c1-81528b2138f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Refreshing instance network info cache due to event network-changed-7246ed42-6ec3-42e8-9b9d-12606aeeb43c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.715 253542 DEBUG oslo_concurrency.lockutils [req-c36c9d84-4ec5-44ef-8053-6e278432db47 req-fdb9a605-27ee-452d-92c1-81528b2138f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.715 253542 DEBUG oslo_concurrency.lockutils [req-c36c9d84-4ec5-44ef-8053-6e278432db47 req-fdb9a605-27ee-452d-92c1-81528b2138f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.715 253542 DEBUG nova.network.neutron [req-c36c9d84-4ec5-44ef-8053-6e278432db47 req-fdb9a605-27ee-452d-92c1-81528b2138f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Refreshing network info cache for port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.910 253542 INFO nova.virt.libvirt.driver [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Deleting instance files /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_del
Nov 25 08:48:09 compute-0 nova_compute[253538]: 2025-11-25 08:48:09.911 253542 INFO nova.virt.libvirt.driver [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Deletion of /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_del complete
Nov 25 08:48:10 compute-0 nova_compute[253538]: 2025-11-25 08:48:10.068 253542 INFO nova.scheduler.client.report [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Deleted allocations for instance 0f5e68e6-8f02-4a3a-ac0c-322d82950d98
Nov 25 08:48:10 compute-0 nova_compute[253538]: 2025-11-25 08:48:10.121 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:10 compute-0 nova_compute[253538]: 2025-11-25 08:48:10.122 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:10 compute-0 nova_compute[253538]: 2025-11-25 08:48:10.154 253542 DEBUG oslo_concurrency.processutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:48:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:48:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2207137970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:48:10 compute-0 nova_compute[253538]: 2025-11-25 08:48:10.581 253542 DEBUG oslo_concurrency.processutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:48:10 compute-0 nova_compute[253538]: 2025-11-25 08:48:10.591 253542 DEBUG nova.compute.provider_tree [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:48:10 compute-0 nova_compute[253538]: 2025-11-25 08:48:10.610 253542 DEBUG nova.scheduler.client.report [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:48:10 compute-0 nova_compute[253538]: 2025-11-25 08:48:10.702 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:10 compute-0 nova_compute[253538]: 2025-11-25 08:48:10.753 253542 DEBUG oslo_concurrency.lockutils [None req-f3e2ea2b-df6b-4b6c-90d2-fed2fa078aa2 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 12.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:10 compute-0 ceph-mon[75015]: pgmap v2002: 321 pgs: 321 active+clean; 246 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.0 MiB/s wr, 129 op/s
Nov 25 08:48:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2207137970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:48:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 246 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 5.3 MiB/s wr, 117 op/s
Nov 25 08:48:12 compute-0 nova_compute[253538]: 2025-11-25 08:48:12.174 253542 DEBUG nova.network.neutron [req-c36c9d84-4ec5-44ef-8053-6e278432db47 req-fdb9a605-27ee-452d-92c1-81528b2138f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updated VIF entry in instance network info cache for port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:48:12 compute-0 nova_compute[253538]: 2025-11-25 08:48:12.175 253542 DEBUG nova.network.neutron [req-c36c9d84-4ec5-44ef-8053-6e278432db47 req-fdb9a605-27ee-452d-92c1-81528b2138f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": null, "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap7246ed42-6e", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:48:12 compute-0 nova_compute[253538]: 2025-11-25 08:48:12.197 253542 DEBUG oslo_concurrency.lockutils [req-c36c9d84-4ec5-44ef-8053-6e278432db47 req-fdb9a605-27ee-452d-92c1-81528b2138f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:48:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:48:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Nov 25 08:48:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Nov 25 08:48:12 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Nov 25 08:48:12 compute-0 nova_compute[253538]: 2025-11-25 08:48:12.428 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:12 compute-0 ceph-mon[75015]: pgmap v2003: 321 pgs: 321 active+clean; 246 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 5.3 MiB/s wr, 117 op/s
Nov 25 08:48:12 compute-0 ceph-mon[75015]: osdmap e235: 3 total, 3 up, 3 in
Nov 25 08:48:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 214 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 89 op/s
Nov 25 08:48:14 compute-0 nova_compute[253538]: 2025-11-25 08:48:14.612 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:14 compute-0 ceph-mon[75015]: pgmap v2005: 321 pgs: 321 active+clean; 214 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 89 op/s
Nov 25 08:48:15 compute-0 nova_compute[253538]: 2025-11-25 08:48:15.611 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060480.6104796, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:48:15 compute-0 nova_compute[253538]: 2025-11-25 08:48:15.612 253542 INFO nova.compute.manager [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Stopped (Lifecycle Event)
Nov 25 08:48:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 93 op/s
Nov 25 08:48:15 compute-0 nova_compute[253538]: 2025-11-25 08:48:15.633 253542 DEBUG nova.compute.manager [None req-47b23a12-9dfd-4059-9c72-87c0180ba1af - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:48:16 compute-0 nova_compute[253538]: 2025-11-25 08:48:16.165 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:16 compute-0 nova_compute[253538]: 2025-11-25 08:48:16.165 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:16 compute-0 nova_compute[253538]: 2025-11-25 08:48:16.165 253542 INFO nova.compute.manager [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Unshelving
Nov 25 08:48:16 compute-0 nova_compute[253538]: 2025-11-25 08:48:16.364 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:16 compute-0 nova_compute[253538]: 2025-11-25 08:48:16.365 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:16 compute-0 nova_compute[253538]: 2025-11-25 08:48:16.371 253542 DEBUG nova.objects.instance [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:16 compute-0 nova_compute[253538]: 2025-11-25 08:48:16.380 253542 DEBUG nova.objects.instance [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:16 compute-0 nova_compute[253538]: 2025-11-25 08:48:16.389 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:48:16 compute-0 nova_compute[253538]: 2025-11-25 08:48:16.390 253542 INFO nova.compute.claims [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:48:16 compute-0 nova_compute[253538]: 2025-11-25 08:48:16.523 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:48:16 compute-0 ceph-mon[75015]: pgmap v2006: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 93 op/s
Nov 25 08:48:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:48:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2771433555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:48:17 compute-0 nova_compute[253538]: 2025-11-25 08:48:17.029 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:48:17 compute-0 nova_compute[253538]: 2025-11-25 08:48:17.038 253542 DEBUG nova.compute.provider_tree [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:48:17 compute-0 nova_compute[253538]: 2025-11-25 08:48:17.058 253542 DEBUG nova.scheduler.client.report [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:48:17 compute-0 nova_compute[253538]: 2025-11-25 08:48:17.124 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.255433) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060497255473, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 674, "num_deletes": 251, "total_data_size": 781704, "memory_usage": 793488, "flush_reason": "Manual Compaction"}
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060497260030, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 523980, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41420, "largest_seqno": 42093, "table_properties": {"data_size": 520872, "index_size": 1016, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8233, "raw_average_key_size": 20, "raw_value_size": 514368, "raw_average_value_size": 1289, "num_data_blocks": 46, "num_entries": 399, "num_filter_entries": 399, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060448, "oldest_key_time": 1764060448, "file_creation_time": 1764060497, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 4624 microseconds, and 1731 cpu microseconds.
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.260060) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 523980 bytes OK
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.260074) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.261775) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.261795) EVENT_LOG_v1 {"time_micros": 1764060497261789, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.261815) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 778151, prev total WAL file size 778151, number of live WAL files 2.
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.262534) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353035' seq:72057594037927935, type:22 .. '6D6772737461740031373536' seq:0, type:0; will stop at (end)
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(511KB)], [92(10MB)]
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060497262590, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 11457546, "oldest_snapshot_seqno": -1}
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6521 keys, 8417251 bytes, temperature: kUnknown
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060497337728, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 8417251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8375163, "index_size": 24698, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 167916, "raw_average_key_size": 25, "raw_value_size": 8259662, "raw_average_value_size": 1266, "num_data_blocks": 979, "num_entries": 6521, "num_filter_entries": 6521, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060497, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.338019) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 8417251 bytes
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.339789) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.3 rd, 111.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(37.9) write-amplify(16.1) OK, records in: 7016, records dropped: 495 output_compression: NoCompression
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.339819) EVENT_LOG_v1 {"time_micros": 1764060497339806, "job": 54, "event": "compaction_finished", "compaction_time_micros": 75219, "compaction_time_cpu_micros": 40412, "output_level": 6, "num_output_files": 1, "total_output_size": 8417251, "num_input_records": 7016, "num_output_records": 6521, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060497340165, "job": 54, "event": "table_file_deletion", "file_number": 94}
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060497344067, "job": 54, "event": "table_file_deletion", "file_number": 92}
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.262399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.344114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.344119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.344122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.344125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:48:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:17.344127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:48:17 compute-0 nova_compute[253538]: 2025-11-25 08:48:17.430 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:17 compute-0 nova_compute[253538]: 2025-11-25 08:48:17.538 253542 INFO nova.network.neutron [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 25 08:48:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Nov 25 08:48:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2771433555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:48:18 compute-0 ceph-mon[75015]: pgmap v2007: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Nov 25 08:48:19 compute-0 nova_compute[253538]: 2025-11-25 08:48:19.315 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:48:19 compute-0 nova_compute[253538]: 2025-11-25 08:48:19.315 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:48:19 compute-0 nova_compute[253538]: 2025-11-25 08:48:19.315 253542 DEBUG nova.network.neutron [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:48:19 compute-0 nova_compute[253538]: 2025-11-25 08:48:19.439 253542 DEBUG nova.compute.manager [req-dc7b8288-94e0-48b2-a0b2-3af5d4e540b5 req-706ec3ab-65ed-45ca-9297-fdbafd7b270c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-changed-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:19 compute-0 nova_compute[253538]: 2025-11-25 08:48:19.439 253542 DEBUG nova.compute.manager [req-dc7b8288-94e0-48b2-a0b2-3af5d4e540b5 req-706ec3ab-65ed-45ca-9297-fdbafd7b270c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Refreshing instance network info cache due to event network-changed-7246ed42-6ec3-42e8-9b9d-12606aeeb43c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:48:19 compute-0 nova_compute[253538]: 2025-11-25 08:48:19.440 253542 DEBUG oslo_concurrency.lockutils [req-dc7b8288-94e0-48b2-a0b2-3af5d4e540b5 req-706ec3ab-65ed-45ca-9297-fdbafd7b270c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:48:19 compute-0 nova_compute[253538]: 2025-11-25 08:48:19.615 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Nov 25 08:48:20 compute-0 ceph-mon[75015]: pgmap v2008: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.107 253542 DEBUG nova.network.neutron [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.125 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.128 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.128 253542 INFO nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Creating image(s)
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.160 253542 DEBUG nova.storage.rbd_utils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.165 253542 DEBUG nova.objects.instance [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.167 253542 DEBUG oslo_concurrency.lockutils [req-dc7b8288-94e0-48b2-a0b2-3af5d4e540b5 req-706ec3ab-65ed-45ca-9297-fdbafd7b270c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.168 253542 DEBUG nova.network.neutron [req-dc7b8288-94e0-48b2-a0b2-3af5d4e540b5 req-706ec3ab-65ed-45ca-9297-fdbafd7b270c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Refreshing network info cache for port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.201 253542 DEBUG nova.storage.rbd_utils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.224 253542 DEBUG nova.storage.rbd_utils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.228 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "be59cdadf3c3d9b1c643597c1bdc7dc8b2c4cd9c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.229 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "be59cdadf3c3d9b1c643597c1bdc7dc8b2c4cd9c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.494 253542 DEBUG nova.virt.libvirt.imagebackend [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Image locations are: [{'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/515c4bcf-552c-4c04-8c0d-ad03b9e9133d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/515c4bcf-552c-4c04-8c0d-ad03b9e9133d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.560 253542 DEBUG nova.virt.libvirt.imagebackend [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Selected location: {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/515c4bcf-552c-4c04-8c0d-ad03b9e9133d/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.561 253542 DEBUG nova.storage.rbd_utils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] cloning images/515c4bcf-552c-4c04-8c0d-ad03b9e9133d@snap to None/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 08:48:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 31 op/s
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.674 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "be59cdadf3c3d9b1c643597c1bdc7dc8b2c4cd9c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.446s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.796 253542 DEBUG nova.objects.instance [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'migration_context' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:21 compute-0 nova_compute[253538]: 2025-11-25 08:48:21.862 253542 DEBUG nova.storage.rbd_utils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] flattening vms/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 08:48:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.294 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Image rbd:vms/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.295 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.296 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Ensure instance console log exists: /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.296 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.297 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.297 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.299 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Start _get_guest_xml network_info=[{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T08:47:57Z,direct_url=<?>,disk_format='raw',id=515c4bcf-552c-4c04-8c0d-ad03b9e9133d,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-673040864-shelved',owner='947f731219de435196429037dc94fd56',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T08:48:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.303 253542 WARNING nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.307 253542 DEBUG nova.virt.libvirt.host [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.308 253542 DEBUG nova.virt.libvirt.host [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.311 253542 DEBUG nova.virt.libvirt.host [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.311 253542 DEBUG nova.virt.libvirt.host [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.312 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.312 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T08:47:57Z,direct_url=<?>,disk_format='raw',id=515c4bcf-552c-4c04-8c0d-ad03b9e9133d,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-673040864-shelved',owner='947f731219de435196429037dc94fd56',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T08:48:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.312 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.312 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.312 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.313 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.313 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.313 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.313 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.313 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.313 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.314 253542 DEBUG nova.virt.hardware [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.314 253542 DEBUG nova.objects.instance [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.328 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.431 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.595 253542 DEBUG nova.network.neutron [req-dc7b8288-94e0-48b2-a0b2-3af5d4e540b5 req-706ec3ab-65ed-45ca-9297-fdbafd7b270c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updated VIF entry in instance network info cache for port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.595 253542 DEBUG nova.network.neutron [req-dc7b8288-94e0-48b2-a0b2-3af5d4e540b5 req-706ec3ab-65ed-45ca-9297-fdbafd7b270c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.606 253542 DEBUG oslo_concurrency.lockutils [req-dc7b8288-94e0-48b2-a0b2-3af5d4e540b5 req-706ec3ab-65ed-45ca-9297-fdbafd7b270c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:48:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:48:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3891797224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.785 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.817 253542 DEBUG nova.storage.rbd_utils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:48:22 compute-0 nova_compute[253538]: 2025-11-25 08:48:22.821 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.009 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.009 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.027 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:48:23 compute-0 ceph-mon[75015]: pgmap v2009: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 31 op/s
Nov 25 08:48:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3891797224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.098 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.098 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.105 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.106 253542 INFO nova.compute.claims [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.222 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:48:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:48:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/871314407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.286 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.288 253542 DEBUG nova.virt.libvirt.vif [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-673040864',display_name='tempest-ServersNegativeTestJSON-server-673040864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-673040864',id=104,image_ref='515c4bcf-552c-4c04-8c0d-ad03b9e9133d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:46:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-lfthxaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio
',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativeTestJSON-740481153-project-member',shelved_at='2025-11-25T08:48:05.240608',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='515c4bcf-552c-4c04-8c0d-ad03b9e9133d'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:48:16Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=0f5e68e6-8f02-4a3a-ac0c-322d82950d98,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.288 253542 DEBUG nova.network.os_vif_util [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.289 253542 DEBUG nova.network.os_vif_util [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.290 253542 DEBUG nova.objects.instance [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.301 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:48:23 compute-0 nova_compute[253538]:   <uuid>0f5e68e6-8f02-4a3a-ac0c-322d82950d98</uuid>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   <name>instance-00000068</name>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <nova:name>tempest-ServersNegativeTestJSON-server-673040864</nova:name>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:48:22</nova:creationTime>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:48:23 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:48:23 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:48:23 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:48:23 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:48:23 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:48:23 compute-0 nova_compute[253538]:         <nova:user uuid="229f02d89c9848d8aaaaab070ce4d179">tempest-ServersNegativeTestJSON-740481153-project-member</nova:user>
Nov 25 08:48:23 compute-0 nova_compute[253538]:         <nova:project uuid="947f731219de435196429037dc94fd56">tempest-ServersNegativeTestJSON-740481153</nova:project>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="515c4bcf-552c-4c04-8c0d-ad03b9e9133d"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:48:23 compute-0 nova_compute[253538]:         <nova:port uuid="7246ed42-6ec3-42e8-9b9d-12606aeeb43c">
Nov 25 08:48:23 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <system>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <entry name="serial">0f5e68e6-8f02-4a3a-ac0c-322d82950d98</entry>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <entry name="uuid">0f5e68e6-8f02-4a3a-ac0c-322d82950d98</entry>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     </system>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   <os>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   </os>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   <features>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   </features>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk">
Nov 25 08:48:23 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       </source>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:48:23 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config">
Nov 25 08:48:23 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       </source>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:48:23 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:8c:96:cd"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <target dev="tap7246ed42-6e"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/console.log" append="off"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <video>
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     </video>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <input type="keyboard" bus="usb"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:48:23 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:48:23 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:48:23 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:48:23 compute-0 nova_compute[253538]: </domain>
Nov 25 08:48:23 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.302 253542 DEBUG nova.compute.manager [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Preparing to wait for external event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.303 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.304 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.305 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.306 253542 DEBUG nova.virt.libvirt.vif [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-673040864',display_name='tempest-ServersNegativeTestJSON-server-673040864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-673040864',id=104,image_ref='515c4bcf-552c-4c04-8c0d-ad03b9e9133d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:46:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-lfthxaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativeTestJSON-740481153-project-member',shelved_at='2025-11-25T08:48:05.240608',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='515c4bcf-552c-4c04-8c0d-ad03b9e9133d'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:48:16Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=0f5e68e6-8f02-4a3a-ac0c-322d82950d98,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.307 253542 DEBUG nova.network.os_vif_util [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.308 253542 DEBUG nova.network.os_vif_util [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.309 253542 DEBUG os_vif [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.310 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.311 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.312 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.315 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.316 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7246ed42-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.317 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7246ed42-6e, col_values=(('external_ids', {'iface-id': '7246ed42-6ec3-42e8-9b9d-12606aeeb43c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:96:cd', 'vm-uuid': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:23 compute-0 NetworkManager[48915]: <info>  [1764060503.3207] manager: (tap7246ed42-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.322 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.324 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.325 253542 INFO os_vif [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e')
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.390 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.390 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.391 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] No VIF found with MAC fa:16:3e:8c:96:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.391 253542 INFO nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Using config drive
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.416 253542 DEBUG nova.storage.rbd_utils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.432 253542 DEBUG nova.objects.instance [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:48:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:48:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:48:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:48:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:48:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.491 253542 DEBUG nova.objects.instance [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'keypairs' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:48:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 182 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 1017 KiB/s rd, 618 KiB/s wr, 35 op/s
Nov 25 08:48:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:48:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376442700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.671 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.677 253542 DEBUG nova.compute.provider_tree [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.693 253542 DEBUG nova.scheduler.client.report [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.714 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.715 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.775 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.775 253542 DEBUG nova.network.neutron [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.795 253542 INFO nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.814 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.907 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.909 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.909 253542 INFO nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Creating image(s)
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.935 253542 DEBUG nova.storage.rbd_utils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.958 253542 DEBUG nova.storage.rbd_utils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.989 253542 DEBUG nova.storage.rbd_utils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:48:23 compute-0 nova_compute[253538]: 2025-11-25 08:48:23.992 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.025 253542 INFO nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Creating config drive at /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.030 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7pggu0r3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:48:24 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/871314407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:48:24 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2376442700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.064 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.065 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.066 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.066 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.088 253542 DEBUG nova.storage.rbd_utils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.091 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.132 253542 DEBUG nova.policy [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.170 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7pggu0r3" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.194 253542 DEBUG nova.storage.rbd_utils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] rbd image 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.198 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.407 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.464 253542 DEBUG oslo_concurrency.processutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config 0f5e68e6-8f02-4a3a-ac0c-322d82950d98_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.465 253542 INFO nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Deleting local config drive /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98/disk.config because it was imported into RBD.
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.470 253542 DEBUG nova.storage.rbd_utils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:48:24 compute-0 kernel: tap7246ed42-6e: entered promiscuous mode
Nov 25 08:48:24 compute-0 NetworkManager[48915]: <info>  [1764060504.5218] manager: (tap7246ed42-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.521 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:24 compute-0 ovn_controller[152859]: 2025-11-25T08:48:24Z|01006|binding|INFO|Claiming lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c for this chassis.
Nov 25 08:48:24 compute-0 ovn_controller[152859]: 2025-11-25T08:48:24Z|01007|binding|INFO|7246ed42-6ec3-42e8-9b9d-12606aeeb43c: Claiming fa:16:3e:8c:96:cd 10.100.0.3
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.533 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:96:cd 10.100.0.3'], port_security=['fa:16:3e:8c:96:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '7', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7246ed42-6ec3-42e8-9b9d-12606aeeb43c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.535 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 bound to our chassis
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.537 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 08:48:24 compute-0 ovn_controller[152859]: 2025-11-25T08:48:24Z|01008|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c ovn-installed in OVS
Nov 25 08:48:24 compute-0 ovn_controller[152859]: 2025-11-25T08:48:24Z|01009|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c up in Southbound
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.538 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.548 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[17721f9a-0ed7-4d43-bb35-c438f58adba9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.549 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa04d86f-71 in ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.551 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa04d86f-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.551 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5194cf-3aa3-4ba1-ac43-3004e7073e1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.551 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.552 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2d83b3e8-bda9-4d5a-bf36-f9ac07410f94]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:24 compute-0 systemd-udevd[359610]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:48:24 compute-0 systemd-machined[215790]: New machine qemu-130-instance-00000068.
Nov 25 08:48:24 compute-0 NetworkManager[48915]: <info>  [1764060504.5682] device (tap7246ed42-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:48:24 compute-0 NetworkManager[48915]: <info>  [1764060504.5693] device (tap7246ed42-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.570 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[ee54e7ae-873d-45b8-849f-cc5da2b58f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:24 compute-0 systemd[1]: Started Virtual Machine qemu-130-instance-00000068.
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.586 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cad40d86-b3b4-4040-a5be-be05c6acc760]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.600 253542 DEBUG nova.objects.instance [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.612 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.613 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Ensure instance console log exists: /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.613 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.613 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.614 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.619 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff7e1c7-334a-4d9e-b4f0-e962626dc4db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.624 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c8453c91-1487-4be1-abd7-d6898ac18a9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:24 compute-0 NetworkManager[48915]: <info>  [1764060504.6258] manager: (tapaa04d86f-70): new Veth device (/org/freedesktop/NetworkManager/Devices/413)
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.658 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[854568d3-3eff-45fc-a389-e17c66f7e487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.662 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3706ce14-122a-4cf3-874a-3cc6ca862212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:24 compute-0 NetworkManager[48915]: <info>  [1764060504.6844] device (tapaa04d86f-70): carrier: link connected
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.688 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[05295ca2-e7e8-4fae-b524-3e7df2780fa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.704 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fd294fc8-e393-4e7c-beb5-4a60d783bf41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa04d86f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:d2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585928, 'reachable_time': 25222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359662, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.720 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2aed3438-9e24-420a-b05e-5ceddc653753]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:d2af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585928, 'tstamp': 585928}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359663, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.736 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[957fc964-0755-4e5d-ab48-68fab7210e6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa04d86f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:d2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585928, 'reachable_time': 25222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 359664, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.766 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1178df11-3b7e-4a50-8f3e-bb384c0df590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.833 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d2f247-e8d1-4e6f-8256-24526aae8576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.834 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa04d86f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.834 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.834 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa04d86f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:24 compute-0 NetworkManager[48915]: <info>  [1764060504.8367] manager: (tapaa04d86f-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Nov 25 08:48:24 compute-0 kernel: tapaa04d86f-70: entered promiscuous mode
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.837 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.838 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa04d86f-70, col_values=(('external_ids', {'iface-id': 'bedad8d3-cb44-47dd-87a4-c24448506880'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.839 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:24 compute-0 ovn_controller[152859]: 2025-11-25T08:48:24Z|01010|binding|INFO|Releasing lport bedad8d3-cb44-47dd-87a4-c24448506880 from this chassis (sb_readonly=0)
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.841 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.841 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.842 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b524119e-eef3-4307-9212-5819f1d2ce10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.843 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:48:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:24.843 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'env', 'PROCESS_TAG=haproxy-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa04d86f-73a3-4b24-9c95-8ec29aa39064.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:24 compute-0 sudo[359710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:48:24 compute-0 sudo[359710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.935 253542 DEBUG nova.network.neutron [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Successfully created port: 5b999504-81af-4e3d-9707-b0a72b902669 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:48:24 compute-0 sudo[359710]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.985 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060504.984773, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:48:24 compute-0 nova_compute[253538]: 2025-11-25 08:48:24.986 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Started (Lifecycle Event)
Nov 25 08:48:24 compute-0 sudo[359740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:48:24 compute-0 sudo[359740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:25 compute-0 sudo[359740]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:25 compute-0 nova_compute[253538]: 2025-11-25 08:48:25.008 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:48:25 compute-0 nova_compute[253538]: 2025-11-25 08:48:25.016 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060504.9848933, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:48:25 compute-0 nova_compute[253538]: 2025-11-25 08:48:25.017 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Paused (Lifecycle Event)
Nov 25 08:48:25 compute-0 nova_compute[253538]: 2025-11-25 08:48:25.036 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:48:25 compute-0 nova_compute[253538]: 2025-11-25 08:48:25.040 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:48:25 compute-0 ceph-mon[75015]: pgmap v2010: 321 pgs: 321 active+clean; 182 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 1017 KiB/s rd, 618 KiB/s wr, 35 op/s
Nov 25 08:48:25 compute-0 sudo[359766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:48:25 compute-0 nova_compute[253538]: 2025-11-25 08:48:25.056 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:48:25 compute-0 sudo[359766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:25 compute-0 sudo[359766]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:25 compute-0 sudo[359791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:48:25 compute-0 sudo[359791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:25 compute-0 podman[359838]: 2025-11-25 08:48:25.231563782 +0000 UTC m=+0.052013325 container create 1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 08:48:25 compute-0 systemd[1]: Started libpod-conmon-1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d.scope.
Nov 25 08:48:25 compute-0 podman[359838]: 2025-11-25 08:48:25.204961749 +0000 UTC m=+0.025411322 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:48:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18e74c7d913d4d8440112d04e1a5043b5748bae7519c2709c1298e5997bcd39c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:48:25 compute-0 podman[359838]: 2025-11-25 08:48:25.328242121 +0000 UTC m=+0.148691684 container init 1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 08:48:25 compute-0 podman[359838]: 2025-11-25 08:48:25.333861971 +0000 UTC m=+0.154311504 container start 1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 08:48:25 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[359865]: [NOTICE]   (359870) : New worker (359872) forked
Nov 25 08:48:25 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[359865]: [NOTICE]   (359870) : Loading success.
Nov 25 08:48:25 compute-0 sudo[359791]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:48:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:48:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 220 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 61 op/s
Nov 25 08:48:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:48:25 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:48:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:48:25 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:48:25 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 47fdf725-20d0-4765-91f9-a2ba5350be93 does not exist
Nov 25 08:48:25 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev f6d6e071-ca3d-487c-9737-d0b78d1a52d9 does not exist
Nov 25 08:48:25 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 4ebcfa09-2b55-4f9b-8dcc-6f8dc013c723 does not exist
Nov 25 08:48:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:48:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:48:25 compute-0 nova_compute[253538]: 2025-11-25 08:48:25.641 253542 DEBUG nova.network.neutron [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Successfully updated port: 5b999504-81af-4e3d-9707-b0a72b902669 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:48:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:48:25 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:48:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:48:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:48:25 compute-0 nova_compute[253538]: 2025-11-25 08:48:25.654 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:48:25 compute-0 nova_compute[253538]: 2025-11-25 08:48:25.654 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:48:25 compute-0 nova_compute[253538]: 2025-11-25 08:48:25.654 253542 DEBUG nova.network.neutron [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:48:25 compute-0 sudo[359898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:48:25 compute-0 sudo[359898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:25 compute-0 sudo[359898]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:25 compute-0 sudo[359923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:48:25 compute-0 sudo[359923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:25 compute-0 sudo[359923]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:25 compute-0 nova_compute[253538]: 2025-11-25 08:48:25.784 253542 DEBUG nova.compute.manager [req-0be2142a-30d2-439b-aa6e-b70ce0c8ecbb req-d3ee8ccd-6049-4075-ab3e-b6578c36cd5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received event network-changed-5b999504-81af-4e3d-9707-b0a72b902669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:25 compute-0 nova_compute[253538]: 2025-11-25 08:48:25.784 253542 DEBUG nova.compute.manager [req-0be2142a-30d2-439b-aa6e-b70ce0c8ecbb req-d3ee8ccd-6049-4075-ab3e-b6578c36cd5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Refreshing instance network info cache due to event network-changed-5b999504-81af-4e3d-9707-b0a72b902669. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:48:25 compute-0 nova_compute[253538]: 2025-11-25 08:48:25.784 253542 DEBUG oslo_concurrency.lockutils [req-0be2142a-30d2-439b-aa6e-b70ce0c8ecbb req-d3ee8ccd-6049-4075-ab3e-b6578c36cd5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:48:25 compute-0 sudo[359948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:48:25 compute-0 sudo[359948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:25 compute-0 sudo[359948]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:25 compute-0 nova_compute[253538]: 2025-11-25 08:48:25.856 253542 DEBUG nova.network.neutron [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:48:25 compute-0 sudo[359973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:48:25 compute-0 sudo[359973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:26 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:48:26 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:48:26 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:48:26 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:48:26 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:48:26 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:48:26 compute-0 podman[360038]: 2025-11-25 08:48:26.27851477 +0000 UTC m=+0.061997041 container create fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:48:26 compute-0 systemd[1]: Started libpod-conmon-fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9.scope.
Nov 25 08:48:26 compute-0 podman[360038]: 2025-11-25 08:48:26.245397363 +0000 UTC m=+0.028879634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:48:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:48:26 compute-0 podman[360038]: 2025-11-25 08:48:26.381220361 +0000 UTC m=+0.164702632 container init fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 08:48:26 compute-0 podman[360038]: 2025-11-25 08:48:26.392847992 +0000 UTC m=+0.176330243 container start fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 08:48:26 compute-0 podman[360038]: 2025-11-25 08:48:26.397403895 +0000 UTC m=+0.180886146 container attach fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:48:26 compute-0 silly_cannon[360055]: 167 167
Nov 25 08:48:26 compute-0 systemd[1]: libpod-fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9.scope: Deactivated successfully.
Nov 25 08:48:26 compute-0 conmon[360055]: conmon fb3cd3bf81bf55b3c3aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9.scope/container/memory.events
Nov 25 08:48:26 compute-0 podman[360038]: 2025-11-25 08:48:26.403897778 +0000 UTC m=+0.187380029 container died fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:48:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9ed06d03727b6ab3876a5e9632a457105a0d411b9ccf58edb2aa26a35e35959-merged.mount: Deactivated successfully.
Nov 25 08:48:26 compute-0 podman[360038]: 2025-11-25 08:48:26.456884307 +0000 UTC m=+0.240366568 container remove fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:48:26 compute-0 systemd[1]: libpod-conmon-fb3cd3bf81bf55b3c3aad8fcafb9aa8eed984f45b339d5dafaefcdb1b24274f9.scope: Deactivated successfully.
Nov 25 08:48:26 compute-0 podman[360078]: 2025-11-25 08:48:26.643990278 +0000 UTC m=+0.037603118 container create be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_panini, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 08:48:26 compute-0 systemd[1]: Started libpod-conmon-be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c.scope.
Nov 25 08:48:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8cae20c8f0bd9bf4e532c325bcd8326c40b428f438c27be3d39d074ba98d866/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8cae20c8f0bd9bf4e532c325bcd8326c40b428f438c27be3d39d074ba98d866/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8cae20c8f0bd9bf4e532c325bcd8326c40b428f438c27be3d39d074ba98d866/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8cae20c8f0bd9bf4e532c325bcd8326c40b428f438c27be3d39d074ba98d866/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8cae20c8f0bd9bf4e532c325bcd8326c40b428f438c27be3d39d074ba98d866/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:48:26 compute-0 podman[360078]: 2025-11-25 08:48:26.724178365 +0000 UTC m=+0.117791215 container init be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 08:48:26 compute-0 podman[360078]: 2025-11-25 08:48:26.629409848 +0000 UTC m=+0.023022708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:48:26 compute-0 podman[360078]: 2025-11-25 08:48:26.732101378 +0000 UTC m=+0.125714258 container start be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:48:26 compute-0 podman[360078]: 2025-11-25 08:48:26.735921951 +0000 UTC m=+0.129534791 container attach be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.774 253542 DEBUG nova.network.neutron [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updating instance_info_cache with network_info: [{"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.794 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.794 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Instance network_info: |[{"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.795 253542 DEBUG oslo_concurrency.lockutils [req-0be2142a-30d2-439b-aa6e-b70ce0c8ecbb req-d3ee8ccd-6049-4075-ab3e-b6578c36cd5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.795 253542 DEBUG nova.network.neutron [req-0be2142a-30d2-439b-aa6e-b70ce0c8ecbb req-d3ee8ccd-6049-4075-ab3e-b6578c36cd5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Refreshing network info cache for port 5b999504-81af-4e3d-9707-b0a72b902669 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.799 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Start _get_guest_xml network_info=[{"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.804 253542 WARNING nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.813 253542 DEBUG nova.virt.libvirt.host [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.814 253542 DEBUG nova.virt.libvirt.host [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.817 253542 DEBUG nova.virt.libvirt.host [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.817 253542 DEBUG nova.virt.libvirt.host [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.818 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.818 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.819 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.819 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.819 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.819 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.820 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.820 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.820 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.820 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.821 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.821 253542 DEBUG nova.virt.hardware [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:48:26 compute-0 nova_compute[253538]: 2025-11-25 08:48:26.824 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:48:27 compute-0 ceph-mon[75015]: pgmap v2011: 321 pgs: 321 active+clean; 220 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 61 op/s
Nov 25 08:48:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:48:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:48:27 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1129538565' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.321 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.353 253542 DEBUG nova.storage.rbd_utils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.358 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.477 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 288 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.6 MiB/s wr, 100 op/s
Nov 25 08:48:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:48:27 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/637292873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:48:27 compute-0 great_panini[360095]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.852 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:48:27 compute-0 great_panini[360095]: --> relative data size: 1.0
Nov 25 08:48:27 compute-0 great_panini[360095]: --> All data devices are unavailable
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.854 253542 DEBUG nova.virt.libvirt.vif [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:48:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-500014830',display_name='tempest-TestNetworkBasicOps-server-500014830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-500014830',id=106,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFCCvuRR9teEjk+xhoL/dPXtSbMEI/QvMm2XyYfTKUyOXE8qn7R4eNZpb9TezDBvzTLIaZuuD77pyfzIuaqqEBF8FLx+5feWI/X0iULdgxVeu0o4nXU62owugHwOXwCyOg==',key_name='tempest-TestNetworkBasicOps-1624027369',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-nidctccp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:48:23Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.855 253542 DEBUG nova.network.os_vif_util [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.856 253542 DEBUG nova.network.os_vif_util [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:68:34,bridge_name='br-int',has_traffic_filtering=True,id=5b999504-81af-4e3d-9707-b0a72b902669,network=Network(41ed78ca-e8a4-4daf-884b-6b7b763e272f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b999504-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.859 253542 DEBUG nova.objects.instance [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:27 compute-0 systemd[1]: libpod-be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c.scope: Deactivated successfully.
Nov 25 08:48:27 compute-0 systemd[1]: libpod-be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c.scope: Consumed 1.046s CPU time.
Nov 25 08:48:27 compute-0 podman[360078]: 2025-11-25 08:48:27.878917972 +0000 UTC m=+1.272530812 container died be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_panini, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 08:48:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8cae20c8f0bd9bf4e532c325bcd8326c40b428f438c27be3d39d074ba98d866-merged.mount: Deactivated successfully.
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.908 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:48:27 compute-0 nova_compute[253538]:   <uuid>fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe</uuid>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   <name>instance-0000006a</name>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkBasicOps-server-500014830</nova:name>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:48:26</nova:creationTime>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:48:27 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:48:27 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:48:27 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:48:27 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:48:27 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:48:27 compute-0 nova_compute[253538]:         <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:48:27 compute-0 nova_compute[253538]:         <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:48:27 compute-0 nova_compute[253538]:         <nova:port uuid="5b999504-81af-4e3d-9707-b0a72b902669">
Nov 25 08:48:27 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <system>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <entry name="serial">fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe</entry>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <entry name="uuid">fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe</entry>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     </system>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   <os>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   </os>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   <features>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   </features>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk">
Nov 25 08:48:27 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       </source>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:48:27 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk.config">
Nov 25 08:48:27 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       </source>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:48:27 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:cf:68:34"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <target dev="tap5b999504-81"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe/console.log" append="off"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <video>
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     </video>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:48:27 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:48:27 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:48:27 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:48:27 compute-0 nova_compute[253538]: </domain>
Nov 25 08:48:27 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.909 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Preparing to wait for external event network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.910 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.910 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.910 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.911 253542 DEBUG nova.virt.libvirt.vif [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:48:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-500014830',display_name='tempest-TestNetworkBasicOps-server-500014830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-500014830',id=106,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFCCvuRR9teEjk+xhoL/dPXtSbMEI/QvMm2XyYfTKUyOXE8qn7R4eNZpb9TezDBvzTLIaZuuD77pyfzIuaqqEBF8FLx+5feWI/X0iULdgxVeu0o4nXU62owugHwOXwCyOg==',key_name='tempest-TestNetworkBasicOps-1624027369',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-nidctccp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:48:23Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.911 253542 DEBUG nova.network.os_vif_util [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.912 253542 DEBUG nova.network.os_vif_util [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:68:34,bridge_name='br-int',has_traffic_filtering=True,id=5b999504-81af-4e3d-9707-b0a72b902669,network=Network(41ed78ca-e8a4-4daf-884b-6b7b763e272f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b999504-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.912 253542 DEBUG os_vif [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:68:34,bridge_name='br-int',has_traffic_filtering=True,id=5b999504-81af-4e3d-9707-b0a72b902669,network=Network(41ed78ca-e8a4-4daf-884b-6b7b763e272f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b999504-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.913 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.913 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.914 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.918 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.919 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b999504-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.919 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5b999504-81, col_values=(('external_ids', {'iface-id': '5b999504-81af-4e3d-9707-b0a72b902669', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:68:34', 'vm-uuid': 'fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:27 compute-0 NetworkManager[48915]: <info>  [1764060507.9224] manager: (tap5b999504-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.922 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.924 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.928 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.929 253542 INFO os_vif [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:68:34,bridge_name='br-int',has_traffic_filtering=True,id=5b999504-81af-4e3d-9707-b0a72b902669,network=Network(41ed78ca-e8a4-4daf-884b-6b7b763e272f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b999504-81')
Nov 25 08:48:27 compute-0 podman[360078]: 2025-11-25 08:48:27.940773909 +0000 UTC m=+1.334386749 container remove be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_panini, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:48:27 compute-0 systemd[1]: libpod-conmon-be9d6df7c4e289107a1b234ef2e0a9836bf565f9822766fd21b55569af09da2c.scope: Deactivated successfully.
Nov 25 08:48:27 compute-0 sudo[359973]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:27 compute-0 podman[360186]: 2025-11-25 08:48:27.979456024 +0000 UTC m=+0.071860275 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.983 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.984 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.984 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:cf:68:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:48:27 compute-0 nova_compute[253538]: 2025-11-25 08:48:27.984 253542 INFO nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Using config drive
Nov 25 08:48:28 compute-0 nova_compute[253538]: 2025-11-25 08:48:28.007 253542 DEBUG nova.storage.rbd_utils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:48:28 compute-0 sudo[360217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:48:28 compute-0 sudo[360217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:28 compute-0 sudo[360217]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:28 compute-0 sudo[360260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:48:28 compute-0 sudo[360260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:28 compute-0 sudo[360260]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:28 compute-0 sudo[360285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:48:28 compute-0 sudo[360285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:28 compute-0 sudo[360285]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1129538565' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:48:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/637292873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:48:28 compute-0 sudo[360310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:48:28 compute-0 sudo[360310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:28 compute-0 nova_compute[253538]: 2025-11-25 08:48:28.555 253542 INFO nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Creating config drive at /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe/disk.config
Nov 25 08:48:28 compute-0 nova_compute[253538]: 2025-11-25 08:48:28.560 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4oi_w9ch execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:48:28 compute-0 podman[360376]: 2025-11-25 08:48:28.560447634 +0000 UTC m=+0.021330932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:48:28 compute-0 nova_compute[253538]: 2025-11-25 08:48:28.707 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4oi_w9ch" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:48:28 compute-0 nova_compute[253538]: 2025-11-25 08:48:28.734 253542 DEBUG nova.storage.rbd_utils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:48:28 compute-0 nova_compute[253538]: 2025-11-25 08:48:28.738 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe/disk.config fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:48:28 compute-0 podman[360376]: 2025-11-25 08:48:28.914477945 +0000 UTC m=+0.375361263 container create 67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 25 08:48:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:48:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2564928069' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:48:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:48:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2564928069' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:48:29 compute-0 systemd[1]: Started libpod-conmon-67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd.scope.
Nov 25 08:48:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:48:29 compute-0 podman[360376]: 2025-11-25 08:48:29.134251282 +0000 UTC m=+0.595134640 container init 67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 08:48:29 compute-0 podman[360376]: 2025-11-25 08:48:29.147073025 +0000 UTC m=+0.607956343 container start 67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 08:48:29 compute-0 podman[360376]: 2025-11-25 08:48:29.15210779 +0000 UTC m=+0.612991088 container attach 67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:48:29 compute-0 nostalgic_nash[360430]: 167 167
Nov 25 08:48:29 compute-0 systemd[1]: libpod-67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd.scope: Deactivated successfully.
Nov 25 08:48:29 compute-0 podman[360376]: 2025-11-25 08:48:29.15808507 +0000 UTC m=+0.618968448 container died 67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.175 253542 DEBUG oslo_concurrency.processutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe/disk.config fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.177 253542 INFO nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Deleting local config drive /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe/disk.config because it was imported into RBD.
Nov 25 08:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c0432f1067bc43c487cee263258d3c4fe163defcbc34c339935a087115412b0-merged.mount: Deactivated successfully.
Nov 25 08:48:29 compute-0 podman[360376]: 2025-11-25 08:48:29.204869013 +0000 UTC m=+0.665752301 container remove 67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:48:29 compute-0 systemd[1]: libpod-conmon-67c2bdc2585a1246b7a8871a8e8805a04bca6193a51b2f4b948ad943d1b35efd.scope: Deactivated successfully.
Nov 25 08:48:29 compute-0 ceph-mon[75015]: pgmap v2012: 321 pgs: 321 active+clean; 288 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.6 MiB/s wr, 100 op/s
Nov 25 08:48:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2564928069' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:48:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2564928069' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:48:29 compute-0 kernel: tap5b999504-81: entered promiscuous mode
Nov 25 08:48:29 compute-0 NetworkManager[48915]: <info>  [1764060509.2389] manager: (tap5b999504-81): new Tun device (/org/freedesktop/NetworkManager/Devices/416)
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.238 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:29 compute-0 ovn_controller[152859]: 2025-11-25T08:48:29Z|01011|binding|INFO|Claiming lport 5b999504-81af-4e3d-9707-b0a72b902669 for this chassis.
Nov 25 08:48:29 compute-0 ovn_controller[152859]: 2025-11-25T08:48:29Z|01012|binding|INFO|5b999504-81af-4e3d-9707-b0a72b902669: Claiming fa:16:3e:cf:68:34 10.100.0.9
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.243 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.257 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:68:34 10.100.0.9'], port_security=['fa:16:3e:cf:68:34 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41ed78ca-e8a4-4daf-884b-6b7b763e272f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4a5a2de2-f65d-4e79-a42e-c5ccdc573b10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a84b49-c79a-4804-945b-0e3005e5ab18, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5b999504-81af-4e3d-9707-b0a72b902669) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.260 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5b999504-81af-4e3d-9707-b0a72b902669 in datapath 41ed78ca-e8a4-4daf-884b-6b7b763e272f bound to our chassis
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.264 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41ed78ca-e8a4-4daf-884b-6b7b763e272f
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.278 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fe405515-5d9a-42fb-b8de-b53c143491dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.279 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41ed78ca-e1 in ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:48:29 compute-0 systemd-machined[215790]: New machine qemu-131-instance-0000006a.
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.282 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41ed78ca-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.282 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3baa8c77-fe8d-406f-b01e-114fc830b506]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.283 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[628eea26-e776-4212-9b73-b078cc29ec04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:29 compute-0 systemd-udevd[360467]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:48:29 compute-0 systemd[1]: Started Virtual Machine qemu-131-instance-0000006a.
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.300 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[86898f55-8ff7-423a-a82b-9b15622882e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:29 compute-0 NetworkManager[48915]: <info>  [1764060509.3027] device (tap5b999504-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:48:29 compute-0 NetworkManager[48915]: <info>  [1764060509.3080] device (tap5b999504-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:48:29 compute-0 ovn_controller[152859]: 2025-11-25T08:48:29Z|01013|binding|INFO|Setting lport 5b999504-81af-4e3d-9707-b0a72b902669 ovn-installed in OVS
Nov 25 08:48:29 compute-0 ovn_controller[152859]: 2025-11-25T08:48:29Z|01014|binding|INFO|Setting lport 5b999504-81af-4e3d-9707-b0a72b902669 up in Southbound
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.303 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.311 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.319 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4e53c9a5-4397-49c1-8bbf-12538131378e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.346 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[12135bc8-8f23-451a-bddb-741aec46386a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.351 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd8fd55-0509-4926-b176-f2b7fd51fba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:29 compute-0 NetworkManager[48915]: <info>  [1764060509.3528] manager: (tap41ed78ca-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/417)
Nov 25 08:48:29 compute-0 systemd-udevd[360470]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.385 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[21d23abe-3560-40e2-8688-12930250aebf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:29 compute-0 podman[360479]: 2025-11-25 08:48:29.387604327 +0000 UTC m=+0.041121253 container create c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.388 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[986e52d9-bd6a-4c5e-a7dd-1391ec9d5d5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:29 compute-0 NetworkManager[48915]: <info>  [1764060509.4085] device (tap41ed78ca-e0): carrier: link connected
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.413 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcd3b95-0e90-4e9f-92c4-7c6a5757c46d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:29 compute-0 systemd[1]: Started libpod-conmon-c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6.scope.
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.429 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3f970479-40fa-416a-b7e0-a7b2944de157]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41ed78ca-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:60:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 301], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586400, 'reachable_time': 38666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360521, 'error': None, 'target': 'ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.444 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3032d2-b886-4284-8137-a5de4ed0b058]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:609b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586400, 'tstamp': 586400}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360524, 'error': None, 'target': 'ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed5815105189e7d8616e25d145024352c74833678e3620fdb6a521b333c4756/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:48:29 compute-0 podman[360479]: 2025-11-25 08:48:29.373255283 +0000 UTC m=+0.026772229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed5815105189e7d8616e25d145024352c74833678e3620fdb6a521b333c4756/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.465 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4f524342-3110-4d51-a142-12bff50926f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41ed78ca-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:60:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 301], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586400, 'reachable_time': 38666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360526, 'error': None, 'target': 'ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed5815105189e7d8616e25d145024352c74833678e3620fdb6a521b333c4756/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed5815105189e7d8616e25d145024352c74833678e3620fdb6a521b333c4756/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:48:29 compute-0 podman[360479]: 2025-11-25 08:48:29.484691717 +0000 UTC m=+0.138208683 container init c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 08:48:29 compute-0 podman[360479]: 2025-11-25 08:48:29.494678904 +0000 UTC m=+0.148195830 container start c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_stonebraker, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:48:29 compute-0 podman[360479]: 2025-11-25 08:48:29.49824864 +0000 UTC m=+0.151765586 container attach c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.504 253542 DEBUG nova.network.neutron [req-0be2142a-30d2-439b-aa6e-b70ce0c8ecbb req-d3ee8ccd-6049-4075-ab3e-b6578c36cd5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updated VIF entry in instance network info cache for port 5b999504-81af-4e3d-9707-b0a72b902669. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.505 253542 DEBUG nova.network.neutron [req-0be2142a-30d2-439b-aa6e-b70ce0c8ecbb req-d3ee8ccd-6049-4075-ab3e-b6578c36cd5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updating instance_info_cache with network_info: [{"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.519 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2e994b-1c2e-43e7-a5e4-3a667888bad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.520 253542 DEBUG oslo_concurrency.lockutils [req-0be2142a-30d2-439b-aa6e-b70ce0c8ecbb req-d3ee8ccd-6049-4075-ab3e-b6578c36cd5c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.586 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7b93d627-493a-4676-95af-b8871d57f359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.588 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41ed78ca-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.588 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.589 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41ed78ca-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:29 compute-0 NetworkManager[48915]: <info>  [1764060509.5916] manager: (tap41ed78ca-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Nov 25 08:48:29 compute-0 kernel: tap41ed78ca-e0: entered promiscuous mode
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.595 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.596 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41ed78ca-e0, col_values=(('external_ids', {'iface-id': '2a6dde1f-8745-4374-9d65-dba32b48db06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.598 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:29 compute-0 ovn_controller[152859]: 2025-11-25T08:48:29Z|01015|binding|INFO|Releasing lport 2a6dde1f-8745-4374-9d65-dba32b48db06 from this chassis (sb_readonly=0)
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.620 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.621 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41ed78ca-e8a4-4daf-884b-6b7b763e272f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41ed78ca-e8a4-4daf-884b-6b7b763e272f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.622 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[29c46812-80c3-49e2-8809-11ccb2069b6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.623 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-41ed78ca-e8a4-4daf-884b-6b7b763e272f
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/41ed78ca-e8a4-4daf-884b-6b7b763e272f.pid.haproxy
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 41ed78ca-e8a4-4daf-884b-6b7b763e272f
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:48:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:29.623 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f', 'env', 'PROCESS_TAG=haproxy-41ed78ca-e8a4-4daf-884b-6b7b763e272f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41ed78ca-e8a4-4daf-884b-6b7b763e272f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:48:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 292 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 113 op/s
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.770 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060509.7696543, fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.770 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] VM Started (Lifecycle Event)
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.785 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.789 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060509.771985, fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.789 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] VM Paused (Lifecycle Event)
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.806 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.809 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:48:29 compute-0 nova_compute[253538]: 2025-11-25 08:48:29.838 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:48:30 compute-0 podman[360601]: 2025-11-25 08:48:30.057653891 +0000 UTC m=+0.063376687 container create bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 08:48:30 compute-0 systemd[1]: Started libpod-conmon-bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f.scope.
Nov 25 08:48:30 compute-0 podman[360601]: 2025-11-25 08:48:30.020504387 +0000 UTC m=+0.026227173 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:48:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1849c6e6683bfaf37cc6be930869317f1e1d8455b3540b1f7e705a5ef3ca71a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:48:30 compute-0 podman[360601]: 2025-11-25 08:48:30.165598863 +0000 UTC m=+0.171321629 container init bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:48:30 compute-0 podman[360601]: 2025-11-25 08:48:30.176748432 +0000 UTC m=+0.182471188 container start bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:48:30 compute-0 neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f[360616]: [NOTICE]   (360622) : New worker (360626) forked
Nov 25 08:48:30 compute-0 neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f[360616]: [NOTICE]   (360622) : Loading success.
Nov 25 08:48:30 compute-0 ceph-mon[75015]: pgmap v2013: 321 pgs: 321 active+clean; 292 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 113 op/s
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]: {
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:     "0": [
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:         {
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "devices": [
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "/dev/loop3"
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             ],
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "lv_name": "ceph_lv0",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "lv_size": "21470642176",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "name": "ceph_lv0",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "tags": {
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.cluster_name": "ceph",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.crush_device_class": "",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.encrypted": "0",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.osd_id": "0",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.type": "block",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.vdo": "0"
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             },
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "type": "block",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "vg_name": "ceph_vg0"
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:         }
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:     ],
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:     "1": [
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:         {
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "devices": [
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "/dev/loop4"
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             ],
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "lv_name": "ceph_lv1",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "lv_size": "21470642176",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "name": "ceph_lv1",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "tags": {
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.cluster_name": "ceph",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.crush_device_class": "",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.encrypted": "0",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.osd_id": "1",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.type": "block",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.vdo": "0"
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             },
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "type": "block",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "vg_name": "ceph_vg1"
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:         }
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:     ],
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:     "2": [
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:         {
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "devices": [
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "/dev/loop5"
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             ],
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "lv_name": "ceph_lv2",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "lv_size": "21470642176",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "name": "ceph_lv2",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "tags": {
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.cluster_name": "ceph",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.crush_device_class": "",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.encrypted": "0",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.osd_id": "2",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.type": "block",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:                 "ceph.vdo": "0"
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             },
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "type": "block",
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:             "vg_name": "ceph_vg2"
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:         }
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]:     ]
Nov 25 08:48:30 compute-0 eager_stonebraker[360522]: }
Nov 25 08:48:30 compute-0 systemd[1]: libpod-c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6.scope: Deactivated successfully.
Nov 25 08:48:30 compute-0 podman[360479]: 2025-11-25 08:48:30.275832645 +0000 UTC m=+0.929349571 container died c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_stonebraker, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 08:48:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ed5815105189e7d8616e25d145024352c74833678e3620fdb6a521b333c4756-merged.mount: Deactivated successfully.
Nov 25 08:48:30 compute-0 podman[360479]: 2025-11-25 08:48:30.340466546 +0000 UTC m=+0.993983472 container remove c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_stonebraker, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 08:48:30 compute-0 systemd[1]: libpod-conmon-c4104425d6f906714e87250f2709158778ee08c6a259dc1cf4333d4f0b154ab6.scope: Deactivated successfully.
Nov 25 08:48:30 compute-0 sudo[360310]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:30 compute-0 sudo[360646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:48:30 compute-0 sudo[360646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:30 compute-0 sudo[360646]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:30 compute-0 sudo[360671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:48:30 compute-0 sudo[360671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:30 compute-0 sudo[360671]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:30 compute-0 sudo[360696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:48:30 compute-0 sudo[360696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:30 compute-0 sudo[360696]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:30 compute-0 sudo[360721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:48:30 compute-0 sudo[360721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:30 compute-0 podman[360786]: 2025-11-25 08:48:30.976289544 +0000 UTC m=+0.042930430 container create fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_aryabhata, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:48:31 compute-0 systemd[1]: Started libpod-conmon-fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c.scope.
Nov 25 08:48:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:48:31 compute-0 podman[360786]: 2025-11-25 08:48:30.957728738 +0000 UTC m=+0.024369654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:48:31 compute-0 podman[360786]: 2025-11-25 08:48:31.054243022 +0000 UTC m=+0.120883978 container init fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 08:48:31 compute-0 podman[360786]: 2025-11-25 08:48:31.066351536 +0000 UTC m=+0.132992412 container start fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_aryabhata, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:48:31 compute-0 podman[360786]: 2025-11-25 08:48:31.070147198 +0000 UTC m=+0.136788074 container attach fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 08:48:31 compute-0 systemd[1]: libpod-fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c.scope: Deactivated successfully.
Nov 25 08:48:31 compute-0 suspicious_aryabhata[360804]: 167 167
Nov 25 08:48:31 compute-0 conmon[360804]: conmon fb2781fa68584be05b2e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c.scope/container/memory.events
Nov 25 08:48:31 compute-0 podman[360786]: 2025-11-25 08:48:31.077136265 +0000 UTC m=+0.143777171 container died fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_aryabhata, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 25 08:48:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f5f69eee9ad3dcf3a615ec601b0722d9b458cabc86f8788f4cbab61346f961f-merged.mount: Deactivated successfully.
Nov 25 08:48:31 compute-0 podman[360786]: 2025-11-25 08:48:31.11614009 +0000 UTC m=+0.182780966 container remove fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_aryabhata, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:48:31 compute-0 podman[360800]: 2025-11-25 08:48:31.121915114 +0000 UTC m=+0.095296123 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:48:31 compute-0 systemd[1]: libpod-conmon-fb2781fa68584be05b2ec9fdb399c3f71b9aa60dd38d882c745c50f3b07fdd4c.scope: Deactivated successfully.
Nov 25 08:48:31 compute-0 podman[360847]: 2025-11-25 08:48:31.304430392 +0000 UTC m=+0.049332421 container create 5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:48:31 compute-0 systemd[1]: Started libpod-conmon-5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a.scope.
Nov 25 08:48:31 compute-0 podman[360847]: 2025-11-25 08:48:31.278606381 +0000 UTC m=+0.023508490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:48:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e161e6472f1112ff5d4f86ab502a4b2841c88a971979218dd933f2378ad0e10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e161e6472f1112ff5d4f86ab502a4b2841c88a971979218dd933f2378ad0e10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e161e6472f1112ff5d4f86ab502a4b2841c88a971979218dd933f2378ad0e10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e161e6472f1112ff5d4f86ab502a4b2841c88a971979218dd933f2378ad0e10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:48:31 compute-0 nova_compute[253538]: 2025-11-25 08:48:31.387 253542 DEBUG nova.compute.manager [req-b07b73a1-b562-4c71-b2e3-457456e7c97a req-eceabf1a-52e7-4188-a1fd-deb7b6eca57a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:31 compute-0 nova_compute[253538]: 2025-11-25 08:48:31.388 253542 DEBUG oslo_concurrency.lockutils [req-b07b73a1-b562-4c71-b2e3-457456e7c97a req-eceabf1a-52e7-4188-a1fd-deb7b6eca57a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:31 compute-0 nova_compute[253538]: 2025-11-25 08:48:31.388 253542 DEBUG oslo_concurrency.lockutils [req-b07b73a1-b562-4c71-b2e3-457456e7c97a req-eceabf1a-52e7-4188-a1fd-deb7b6eca57a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:31 compute-0 nova_compute[253538]: 2025-11-25 08:48:31.389 253542 DEBUG oslo_concurrency.lockutils [req-b07b73a1-b562-4c71-b2e3-457456e7c97a req-eceabf1a-52e7-4188-a1fd-deb7b6eca57a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:31 compute-0 nova_compute[253538]: 2025-11-25 08:48:31.389 253542 DEBUG nova.compute.manager [req-b07b73a1-b562-4c71-b2e3-457456e7c97a req-eceabf1a-52e7-4188-a1fd-deb7b6eca57a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Processing event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:48:31 compute-0 nova_compute[253538]: 2025-11-25 08:48:31.390 253542 DEBUG nova.compute.manager [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:48:31 compute-0 podman[360847]: 2025-11-25 08:48:31.394870125 +0000 UTC m=+0.139772264 container init 5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hertz, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 08:48:31 compute-0 nova_compute[253538]: 2025-11-25 08:48:31.396 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060511.3959463, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:48:31 compute-0 nova_compute[253538]: 2025-11-25 08:48:31.396 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Resumed (Lifecycle Event)
Nov 25 08:48:31 compute-0 nova_compute[253538]: 2025-11-25 08:48:31.399 253542 DEBUG nova.virt.libvirt.driver [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:48:31 compute-0 nova_compute[253538]: 2025-11-25 08:48:31.403 253542 INFO nova.virt.libvirt.driver [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance spawned successfully.
Nov 25 08:48:31 compute-0 podman[360847]: 2025-11-25 08:48:31.404894123 +0000 UTC m=+0.149796152 container start 5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hertz, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 08:48:31 compute-0 podman[360847]: 2025-11-25 08:48:31.409956419 +0000 UTC m=+0.154858448 container attach 5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 08:48:31 compute-0 nova_compute[253538]: 2025-11-25 08:48:31.415 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:48:31 compute-0 nova_compute[253538]: 2025-11-25 08:48:31.420 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:48:31 compute-0 nova_compute[253538]: 2025-11-25 08:48:31.450 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:48:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 292 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 118 op/s
Nov 25 08:48:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:31.965 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:48:31 compute-0 nova_compute[253538]: 2025-11-25 08:48:31.965 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:31.966 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:48:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:48:32 compute-0 brave_hertz[360864]: {
Nov 25 08:48:32 compute-0 brave_hertz[360864]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:48:32 compute-0 brave_hertz[360864]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:48:32 compute-0 brave_hertz[360864]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:48:32 compute-0 brave_hertz[360864]:         "osd_id": 1,
Nov 25 08:48:32 compute-0 brave_hertz[360864]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:48:32 compute-0 brave_hertz[360864]:         "type": "bluestore"
Nov 25 08:48:32 compute-0 brave_hertz[360864]:     },
Nov 25 08:48:32 compute-0 brave_hertz[360864]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:48:32 compute-0 brave_hertz[360864]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:48:32 compute-0 brave_hertz[360864]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:48:32 compute-0 brave_hertz[360864]:         "osd_id": 2,
Nov 25 08:48:32 compute-0 brave_hertz[360864]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:48:32 compute-0 brave_hertz[360864]:         "type": "bluestore"
Nov 25 08:48:32 compute-0 brave_hertz[360864]:     },
Nov 25 08:48:32 compute-0 brave_hertz[360864]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:48:32 compute-0 brave_hertz[360864]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:48:32 compute-0 brave_hertz[360864]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:48:32 compute-0 brave_hertz[360864]:         "osd_id": 0,
Nov 25 08:48:32 compute-0 brave_hertz[360864]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:48:32 compute-0 brave_hertz[360864]:         "type": "bluestore"
Nov 25 08:48:32 compute-0 brave_hertz[360864]:     }
Nov 25 08:48:32 compute-0 brave_hertz[360864]: }
Nov 25 08:48:32 compute-0 systemd[1]: libpod-5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a.scope: Deactivated successfully.
Nov 25 08:48:32 compute-0 podman[360847]: 2025-11-25 08:48:32.415891059 +0000 UTC m=+1.160793098 container died 5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 08:48:32 compute-0 nova_compute[253538]: 2025-11-25 08:48:32.478 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e161e6472f1112ff5d4f86ab502a4b2841c88a971979218dd933f2378ad0e10-merged.mount: Deactivated successfully.
Nov 25 08:48:32 compute-0 nova_compute[253538]: 2025-11-25 08:48:32.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:48:32 compute-0 nova_compute[253538]: 2025-11-25 08:48:32.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:48:32 compute-0 podman[360847]: 2025-11-25 08:48:32.662617027 +0000 UTC m=+1.407519096 container remove 5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hertz, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:48:32 compute-0 systemd[1]: libpod-conmon-5c9800e43168f6c04198eb5b41063717f6eefa13c9cc042637a702fdda36609a.scope: Deactivated successfully.
Nov 25 08:48:32 compute-0 sudo[360721]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Nov 25 08:48:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:48:32 compute-0 ceph-mon[75015]: pgmap v2014: 321 pgs: 321 active+clean; 292 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 118 op/s
Nov 25 08:48:32 compute-0 nova_compute[253538]: 2025-11-25 08:48:32.922 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Nov 25 08:48:33 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:48:33 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Nov 25 08:48:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:48:33 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:48:33 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 55f6932e-5d8a-41f1-94c5-c5fe7264fcf4 does not exist
Nov 25 08:48:33 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c4d665d6-d460-47c5-bae2-3529dfa60c46 does not exist
Nov 25 08:48:33 compute-0 sudo[360911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:48:33 compute-0 sudo[360911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:33 compute-0 sudo[360911]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:33 compute-0 sudo[360936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:48:33 compute-0 sudo[360936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:48:33 compute-0 sudo[360936]: pam_unix(sudo:session): session closed for user root
Nov 25 08:48:33 compute-0 nova_compute[253538]: 2025-11-25 08:48:33.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:48:33 compute-0 nova_compute[253538]: 2025-11-25 08:48:33.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:48:33 compute-0 nova_compute[253538]: 2025-11-25 08:48:33.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:48:33 compute-0 nova_compute[253538]: 2025-11-25 08:48:33.576 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 08:48:33 compute-0 nova_compute[253538]: 2025-11-25 08:48:33.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:48:33 compute-0 nova_compute[253538]: 2025-11-25 08:48:33.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:48:33 compute-0 nova_compute[253538]: 2025-11-25 08:48:33.577 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:48:33 compute-0 nova_compute[253538]: 2025-11-25 08:48:33.577 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 321 active+clean; 292 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.1 MiB/s wr, 138 op/s
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.128 253542 DEBUG nova.compute.manager [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.128 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.129 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.129 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.129 253542 DEBUG nova.compute.manager [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.130 253542 WARNING nova.compute.manager [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state shelved_offloaded and task_state spawning.
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.130 253542 DEBUG nova.compute.manager [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received event network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.131 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.131 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.131 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.132 253542 DEBUG nova.compute.manager [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Processing event network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.132 253542 DEBUG nova.compute.manager [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received event network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.132 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.133 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.133 253542 DEBUG oslo_concurrency.lockutils [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.133 253542 DEBUG nova.compute.manager [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] No waiting events found dispatching network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.134 253542 WARNING nova.compute.manager [req-a4433b7e-8b94-4bda-8171-ab1742a1268f req-97036637-39f6-4d09-8852-f050313cbe87 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received unexpected event network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 for instance with vm_state building and task_state spawning.
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.135 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.147 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060514.1473668, fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.148 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] VM Resumed (Lifecycle Event)
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.150 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.155 253542 INFO nova.virt.libvirt.driver [-] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Instance spawned successfully.
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.156 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.175 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.180 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.181 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.182 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.182 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.183 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.184 253542 DEBUG nova.virt.libvirt.driver [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.189 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.213 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.249 253542 INFO nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Took 10.34 seconds to spawn the instance on the hypervisor.
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.250 253542 DEBUG nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:48:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:48:34 compute-0 ceph-mon[75015]: osdmap e236: 3 total, 3 up, 3 in
Nov 25 08:48:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.316 253542 INFO nova.compute.manager [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Took 11.25 seconds to build instance.
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.331 253542 DEBUG oslo_concurrency.lockutils [None req-9ca56f56-3c41-4463-8196-1fa2ff9b5bed 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.587 253542 DEBUG nova.compute.manager [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.653 253542 DEBUG oslo_concurrency.lockutils [None req-ef9476da-4879-492c-9ae5-d68db7389824 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 18.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:34 compute-0 nova_compute[253538]: 2025-11-25 08:48:34.979 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:48:35 compute-0 nova_compute[253538]: 2025-11-25 08:48:35.007 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:48:35 compute-0 nova_compute[253538]: 2025-11-25 08:48:35.008 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:48:35 compute-0 ceph-mon[75015]: pgmap v2016: 321 pgs: 321 active+clean; 292 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.1 MiB/s wr, 138 op/s
Nov 25 08:48:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2017: 321 pgs: 321 active+clean; 270 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.4 MiB/s wr, 119 op/s
Nov 25 08:48:36 compute-0 ceph-mon[75015]: pgmap v2017: 321 pgs: 321 active+clean; 270 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.4 MiB/s wr, 119 op/s
Nov 25 08:48:36 compute-0 nova_compute[253538]: 2025-11-25 08:48:36.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:48:36 compute-0 podman[360961]: 2025-11-25 08:48:36.861079999 +0000 UTC m=+0.098419007 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 08:48:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:48:37 compute-0 nova_compute[253538]: 2025-11-25 08:48:37.480 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:37 compute-0 nova_compute[253538]: 2025-11-25 08:48:37.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:48:37 compute-0 nova_compute[253538]: 2025-11-25 08:48:37.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:48:37 compute-0 nova_compute[253538]: 2025-11-25 08:48:37.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:37 compute-0 nova_compute[253538]: 2025-11-25 08:48:37.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:37 compute-0 nova_compute[253538]: 2025-11-25 08:48:37.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:37 compute-0 nova_compute[253538]: 2025-11-25 08:48:37.583 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:48:37 compute-0 nova_compute[253538]: 2025-11-25 08:48:37.585 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:48:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 321 active+clean; 235 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 86 KiB/s wr, 178 op/s
Nov 25 08:48:37 compute-0 nova_compute[253538]: 2025-11-25 08:48:37.924 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:48:38 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1845244451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:48:38 compute-0 nova_compute[253538]: 2025-11-25 08:48:38.046 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:48:38 compute-0 nova_compute[253538]: 2025-11-25 08:48:38.142 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:48:38 compute-0 nova_compute[253538]: 2025-11-25 08:48:38.143 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:48:38 compute-0 nova_compute[253538]: 2025-11-25 08:48:38.149 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:48:38 compute-0 nova_compute[253538]: 2025-11-25 08:48:38.149 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:48:38 compute-0 nova_compute[253538]: 2025-11-25 08:48:38.364 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:48:38 compute-0 nova_compute[253538]: 2025-11-25 08:48:38.368 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3520MB free_disk=59.92182922363281GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:48:38 compute-0 nova_compute[253538]: 2025-11-25 08:48:38.369 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:38 compute-0 nova_compute[253538]: 2025-11-25 08:48:38.370 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:38 compute-0 nova_compute[253538]: 2025-11-25 08:48:38.464 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:48:38 compute-0 nova_compute[253538]: 2025-11-25 08:48:38.464 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:48:38 compute-0 nova_compute[253538]: 2025-11-25 08:48:38.464 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:48:38 compute-0 nova_compute[253538]: 2025-11-25 08:48:38.465 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:48:38 compute-0 nova_compute[253538]: 2025-11-25 08:48:38.520 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:48:38 compute-0 ceph-mon[75015]: pgmap v2018: 321 pgs: 321 active+clean; 235 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 86 KiB/s wr, 178 op/s
Nov 25 08:48:38 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1845244451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:48:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:38.968 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:48:38 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3272144953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:48:38 compute-0 nova_compute[253538]: 2025-11-25 08:48:38.997 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:48:39 compute-0 nova_compute[253538]: 2025-11-25 08:48:39.003 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:48:39 compute-0 nova_compute[253538]: 2025-11-25 08:48:39.017 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:48:39 compute-0 nova_compute[253538]: 2025-11-25 08:48:39.044 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:48:39 compute-0 nova_compute[253538]: 2025-11-25 08:48:39.045 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:39 compute-0 ovn_controller[152859]: 2025-11-25T08:48:39Z|01016|binding|INFO|Releasing lport 2a6dde1f-8745-4374-9d65-dba32b48db06 from this chassis (sb_readonly=0)
Nov 25 08:48:39 compute-0 nova_compute[253538]: 2025-11-25 08:48:39.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:39 compute-0 NetworkManager[48915]: <info>  [1764060519.0476] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/419)
Nov 25 08:48:39 compute-0 NetworkManager[48915]: <info>  [1764060519.0494] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Nov 25 08:48:39 compute-0 ovn_controller[152859]: 2025-11-25T08:48:39Z|01017|binding|INFO|Releasing lport bedad8d3-cb44-47dd-87a4-c24448506880 from this chassis (sb_readonly=0)
Nov 25 08:48:39 compute-0 ovn_controller[152859]: 2025-11-25T08:48:39Z|01018|binding|INFO|Releasing lport 2a6dde1f-8745-4374-9d65-dba32b48db06 from this chassis (sb_readonly=0)
Nov 25 08:48:39 compute-0 ovn_controller[152859]: 2025-11-25T08:48:39Z|01019|binding|INFO|Releasing lport bedad8d3-cb44-47dd-87a4-c24448506880 from this chassis (sb_readonly=0)
Nov 25 08:48:39 compute-0 nova_compute[253538]: 2025-11-25 08:48:39.108 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:39 compute-0 nova_compute[253538]: 2025-11-25 08:48:39.118 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 321 active+clean; 213 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 16 KiB/s wr, 170 op/s
Nov 25 08:48:39 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3272144953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:48:40 compute-0 nova_compute[253538]: 2025-11-25 08:48:40.035 253542 DEBUG nova.compute.manager [req-080ae981-c6d1-49a3-8d24-620f42ee65e3 req-c93c8a9d-863f-4a5a-8351-cfe9e889bef4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received event network-changed-5b999504-81af-4e3d-9707-b0a72b902669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:40 compute-0 nova_compute[253538]: 2025-11-25 08:48:40.035 253542 DEBUG nova.compute.manager [req-080ae981-c6d1-49a3-8d24-620f42ee65e3 req-c93c8a9d-863f-4a5a-8351-cfe9e889bef4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Refreshing instance network info cache due to event network-changed-5b999504-81af-4e3d-9707-b0a72b902669. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:48:40 compute-0 nova_compute[253538]: 2025-11-25 08:48:40.036 253542 DEBUG oslo_concurrency.lockutils [req-080ae981-c6d1-49a3-8d24-620f42ee65e3 req-c93c8a9d-863f-4a5a-8351-cfe9e889bef4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:48:40 compute-0 nova_compute[253538]: 2025-11-25 08:48:40.036 253542 DEBUG oslo_concurrency.lockutils [req-080ae981-c6d1-49a3-8d24-620f42ee65e3 req-c93c8a9d-863f-4a5a-8351-cfe9e889bef4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:48:40 compute-0 nova_compute[253538]: 2025-11-25 08:48:40.036 253542 DEBUG nova.network.neutron [req-080ae981-c6d1-49a3-8d24-620f42ee65e3 req-c93c8a9d-863f-4a5a-8351-cfe9e889bef4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Refreshing network info cache for port 5b999504-81af-4e3d-9707-b0a72b902669 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:48:40 compute-0 nova_compute[253538]: 2025-11-25 08:48:40.039 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:48:40 compute-0 nova_compute[253538]: 2025-11-25 08:48:40.039 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:48:40 compute-0 nova_compute[253538]: 2025-11-25 08:48:40.584 253542 DEBUG nova.objects.instance [None req-2d4b191e-064f-42de-9532-c08f279cf4b0 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:40 compute-0 nova_compute[253538]: 2025-11-25 08:48:40.610 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060520.610518, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:48:40 compute-0 nova_compute[253538]: 2025-11-25 08:48:40.611 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Paused (Lifecycle Event)
Nov 25 08:48:40 compute-0 nova_compute[253538]: 2025-11-25 08:48:40.634 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:48:40 compute-0 nova_compute[253538]: 2025-11-25 08:48:40.638 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:48:40 compute-0 nova_compute[253538]: 2025-11-25 08:48:40.659 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 08:48:40 compute-0 ceph-mon[75015]: pgmap v2019: 321 pgs: 321 active+clean; 213 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 16 KiB/s wr, 170 op/s
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.072 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.073 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.073 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:41 compute-0 kernel: tap7246ed42-6e (unregistering): left promiscuous mode
Nov 25 08:48:41 compute-0 NetworkManager[48915]: <info>  [1764060521.1353] device (tap7246ed42-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:48:41 compute-0 ovn_controller[152859]: 2025-11-25T08:48:41Z|01020|binding|INFO|Releasing lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c from this chassis (sb_readonly=0)
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.139 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:41 compute-0 ovn_controller[152859]: 2025-11-25T08:48:41Z|01021|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c down in Southbound
Nov 25 08:48:41 compute-0 ovn_controller[152859]: 2025-11-25T08:48:41Z|01022|binding|INFO|Removing iface tap7246ed42-6e ovn-installed in OVS
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.143 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.151 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:96:cd 10.100.0.3'], port_security=['fa:16:3e:8c:96:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '9', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7246ed42-6ec3-42e8-9b9d-12606aeeb43c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.152 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 unbound from our chassis
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.153 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa04d86f-73a3-4b24-9c95-8ec29aa39064, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.154 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a08c0da2-ac33-4280-9a58-f345e51389ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.157 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 namespace which is not needed anymore
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.169 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:41 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000068.scope: Deactivated successfully.
Nov 25 08:48:41 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000068.scope: Consumed 10.181s CPU time.
Nov 25 08:48:41 compute-0 systemd-machined[215790]: Machine qemu-130-instance-00000068 terminated.
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.244 253542 DEBUG nova.network.neutron [req-080ae981-c6d1-49a3-8d24-620f42ee65e3 req-c93c8a9d-863f-4a5a-8351-cfe9e889bef4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updated VIF entry in instance network info cache for port 5b999504-81af-4e3d-9707-b0a72b902669. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.245 253542 DEBUG nova.network.neutron [req-080ae981-c6d1-49a3-8d24-620f42ee65e3 req-c93c8a9d-863f-4a5a-8351-cfe9e889bef4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updating instance_info_cache with network_info: [{"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.262 253542 DEBUG oslo_concurrency.lockutils [req-080ae981-c6d1-49a3-8d24-620f42ee65e3 req-c93c8a9d-863f-4a5a-8351-cfe9e889bef4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:48:41 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[359865]: [NOTICE]   (359870) : haproxy version is 2.8.14-c23fe91
Nov 25 08:48:41 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[359865]: [NOTICE]   (359870) : path to executable is /usr/sbin/haproxy
Nov 25 08:48:41 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[359865]: [WARNING]  (359870) : Exiting Master process...
Nov 25 08:48:41 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[359865]: [ALERT]    (359870) : Current worker (359872) exited with code 143 (Terminated)
Nov 25 08:48:41 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[359865]: [WARNING]  (359870) : All workers exited. Exiting... (0)
Nov 25 08:48:41 compute-0 systemd[1]: libpod-1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d.scope: Deactivated successfully.
Nov 25 08:48:41 compute-0 podman[361060]: 2025-11-25 08:48:41.289628492 +0000 UTC m=+0.048653354 container died 1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 08:48:41 compute-0 kernel: tap7246ed42-6e: entered promiscuous mode
Nov 25 08:48:41 compute-0 systemd-udevd[361039]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:48:41 compute-0 NetworkManager[48915]: <info>  [1764060521.3075] manager: (tap7246ed42-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/421)
Nov 25 08:48:41 compute-0 ovn_controller[152859]: 2025-11-25T08:48:41Z|01023|binding|INFO|Claiming lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c for this chassis.
Nov 25 08:48:41 compute-0 ovn_controller[152859]: 2025-11-25T08:48:41Z|01024|binding|INFO|7246ed42-6ec3-42e8-9b9d-12606aeeb43c: Claiming fa:16:3e:8c:96:cd 10.100.0.3
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.310 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.316 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:96:cd 10.100.0.3'], port_security=['fa:16:3e:8c:96:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '9', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7246ed42-6ec3-42e8-9b9d-12606aeeb43c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:48:41 compute-0 kernel: tap7246ed42-6e (unregistering): left promiscuous mode
Nov 25 08:48:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d-userdata-shm.mount: Deactivated successfully.
Nov 25 08:48:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-18e74c7d913d4d8440112d04e1a5043b5748bae7519c2709c1298e5997bcd39c-merged.mount: Deactivated successfully.
Nov 25 08:48:41 compute-0 ovn_controller[152859]: 2025-11-25T08:48:41Z|01025|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c ovn-installed in OVS
Nov 25 08:48:41 compute-0 ovn_controller[152859]: 2025-11-25T08:48:41Z|01026|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c up in Southbound
Nov 25 08:48:41 compute-0 virtnodedevd[239165]: ethtool ioctl error on tap7246ed42-6e: No such device
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.334 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:41 compute-0 ovn_controller[152859]: 2025-11-25T08:48:41Z|01027|binding|INFO|Releasing lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c from this chassis (sb_readonly=0)
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.341 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:41 compute-0 ovn_controller[152859]: 2025-11-25T08:48:41Z|01028|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c down in Southbound
Nov 25 08:48:41 compute-0 ovn_controller[152859]: 2025-11-25T08:48:41Z|01029|binding|INFO|Removing iface tap7246ed42-6e ovn-installed in OVS
Nov 25 08:48:41 compute-0 podman[361060]: 2025-11-25 08:48:41.343618818 +0000 UTC m=+0.102643680 container cleanup 1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 08:48:41 compute-0 virtnodedevd[239165]: ethtool ioctl error on tap7246ed42-6e: No such device
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.346 253542 DEBUG nova.compute.manager [None req-2d4b191e-064f-42de-9532-c08f279cf4b0 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.348 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:96:cd 10.100.0.3'], port_security=['fa:16:3e:8c:96:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '9', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7246ed42-6ec3-42e8-9b9d-12606aeeb43c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:48:41 compute-0 virtnodedevd[239165]: ethtool ioctl error on tap7246ed42-6e: No such device
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.356 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:41 compute-0 virtnodedevd[239165]: ethtool ioctl error on tap7246ed42-6e: No such device
Nov 25 08:48:41 compute-0 systemd[1]: libpod-conmon-1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d.scope: Deactivated successfully.
Nov 25 08:48:41 compute-0 virtnodedevd[239165]: ethtool ioctl error on tap7246ed42-6e: No such device
Nov 25 08:48:41 compute-0 virtnodedevd[239165]: ethtool ioctl error on tap7246ed42-6e: No such device
Nov 25 08:48:41 compute-0 virtnodedevd[239165]: ethtool ioctl error on tap7246ed42-6e: No such device
Nov 25 08:48:41 compute-0 virtnodedevd[239165]: ethtool ioctl error on tap7246ed42-6e: No such device
Nov 25 08:48:41 compute-0 podman[361097]: 2025-11-25 08:48:41.411896486 +0000 UTC m=+0.043242549 container remove 1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9c564000-2b7b-4eb7-8f49-417792ea48e2]: (4, ('Tue Nov 25 08:48:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 (1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d)\n1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d\nTue Nov 25 08:48:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 (1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d)\n1ef76cfbd14ecfe3c363f9d691011408549afde8970f34e26eb70eec780e434d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.419 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce0a48a-b246-4c00-beed-aa7308329dff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.420 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa04d86f-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.422 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:41 compute-0 kernel: tapaa04d86f-70: left promiscuous mode
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.439 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.442 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fb684fee-1ffe-468d-b66d-62206c7c9291]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.457 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7b0f3f-c18f-4b28-bc7c-7dba7cbb82f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.458 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c12e62-b718-4254-a9ba-4b4ab7531d54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.474 253542 DEBUG nova.compute.manager [req-6f88cfc7-6bb6-4e1a-a3da-a865876fafa8 req-be4e026f-5537-4af8-b350-e49a92c39214 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.475 253542 DEBUG oslo_concurrency.lockutils [req-6f88cfc7-6bb6-4e1a-a3da-a865876fafa8 req-be4e026f-5537-4af8-b350-e49a92c39214 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.475 253542 DEBUG oslo_concurrency.lockutils [req-6f88cfc7-6bb6-4e1a-a3da-a865876fafa8 req-be4e026f-5537-4af8-b350-e49a92c39214 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.475 253542 DEBUG oslo_concurrency.lockutils [req-6f88cfc7-6bb6-4e1a-a3da-a865876fafa8 req-be4e026f-5537-4af8-b350-e49a92c39214 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.476 253542 DEBUG nova.compute.manager [req-6f88cfc7-6bb6-4e1a-a3da-a865876fafa8 req-be4e026f-5537-4af8-b350-e49a92c39214 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:48:41 compute-0 nova_compute[253538]: 2025-11-25 08:48:41.476 253542 WARNING nova.compute.manager [req-6f88cfc7-6bb6-4e1a-a3da-a865876fafa8 req-be4e026f-5537-4af8-b350-e49a92c39214 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state suspended and task_state None.
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.476 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b6cd79c7-64a5-474a-9c0f-e5a9a5c3299d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585921, 'reachable_time': 17273, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361128, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.482 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.482 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[8797e6d9-4840-41ae-ab2f-286ff9049d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.483 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 unbound from our chassis
Nov 25 08:48:41 compute-0 systemd[1]: run-netns-ovnmeta\x2daa04d86f\x2d73a3\x2d4b24\x2d9c95\x2d8ec29aa39064.mount: Deactivated successfully.
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.484 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa04d86f-73a3-4b24-9c95-8ec29aa39064, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.485 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[51102c6c-8c3d-487f-989a-6e6bb3f9adb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.486 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 unbound from our chassis
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.487 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa04d86f-73a3-4b24-9c95-8ec29aa39064, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:48:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:41.487 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0036a8-ae28-41ad-99cb-b73b82400668]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2020: 321 pgs: 321 active+clean; 213 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 186 op/s
Nov 25 08:48:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:48:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Nov 25 08:48:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Nov 25 08:48:42 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Nov 25 08:48:42 compute-0 nova_compute[253538]: 2025-11-25 08:48:42.481 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:42 compute-0 nova_compute[253538]: 2025-11-25 08:48:42.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:48:42 compute-0 ceph-mon[75015]: pgmap v2020: 321 pgs: 321 active+clean; 213 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 186 op/s
Nov 25 08:48:42 compute-0 ceph-mon[75015]: osdmap e237: 3 total, 3 up, 3 in
Nov 25 08:48:42 compute-0 nova_compute[253538]: 2025-11-25 08:48:42.809 253542 INFO nova.compute.manager [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Resuming
Nov 25 08:48:42 compute-0 nova_compute[253538]: 2025-11-25 08:48:42.810 253542 DEBUG nova.objects.instance [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'flavor' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:42 compute-0 nova_compute[253538]: 2025-11-25 08:48:42.836 253542 DEBUG oslo_concurrency.lockutils [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:48:42 compute-0 nova_compute[253538]: 2025-11-25 08:48:42.837 253542 DEBUG oslo_concurrency.lockutils [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquired lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:48:42 compute-0 nova_compute[253538]: 2025-11-25 08:48:42.837 253542 DEBUG nova.network.neutron [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:48:42 compute-0 nova_compute[253538]: 2025-11-25 08:48:42.930 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.581 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.581 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.581 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.582 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.582 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.582 253542 WARNING nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state suspended and task_state resuming.
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.582 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.582 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.583 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.583 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.583 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.583 253542 WARNING nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state suspended and task_state resuming.
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.584 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.584 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.584 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.584 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.585 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.585 253542 WARNING nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state suspended and task_state resuming.
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.585 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.585 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.586 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.586 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.586 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.586 253542 WARNING nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state suspended and task_state resuming.
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.586 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.587 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.587 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.587 253542 DEBUG oslo_concurrency.lockutils [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.587 253542 DEBUG nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:48:43 compute-0 nova_compute[253538]: 2025-11-25 08:48:43.588 253542 WARNING nova.compute.manager [req-2fa8ccb6-1ac1-427f-8c3a-0363df00ea18 req-6c38c6e6-59b7-4eb3-8840-86827375665a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state suspended and task_state resuming.
Nov 25 08:48:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2022: 321 pgs: 321 active+clean; 213 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.3 KiB/s wr, 170 op/s
Nov 25 08:48:44 compute-0 ceph-mon[75015]: pgmap v2022: 321 pgs: 321 active+clean; 213 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.3 KiB/s wr, 170 op/s
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.821 253542 DEBUG nova.network.neutron [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [{"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.846 253542 DEBUG oslo_concurrency.lockutils [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Releasing lock "refresh_cache-0f5e68e6-8f02-4a3a-ac0c-322d82950d98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.851 253542 DEBUG nova.virt.libvirt.vif [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-673040864',display_name='tempest-ServersNegativeTestJSON-server-673040864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-673040864',id=104,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:48:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-lfthxaw9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativeTestJSON-740481153-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:48:41Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=0f5e68e6-8f02-4a3a-ac0c-322d82950d98,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.852 253542 DEBUG nova.network.os_vif_util [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.852 253542 DEBUG nova.network.os_vif_util [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.853 253542 DEBUG os_vif [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.853 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.854 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.854 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.857 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.857 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7246ed42-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.857 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7246ed42-6e, col_values=(('external_ids', {'iface-id': '7246ed42-6ec3-42e8-9b9d-12606aeeb43c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:96:cd', 'vm-uuid': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.858 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.858 253542 INFO os_vif [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e')
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.881 253542 DEBUG nova.objects.instance [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:44 compute-0 kernel: tap7246ed42-6e: entered promiscuous mode
Nov 25 08:48:44 compute-0 ovn_controller[152859]: 2025-11-25T08:48:44Z|01030|binding|INFO|Claiming lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c for this chassis.
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.949 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:44 compute-0 ovn_controller[152859]: 2025-11-25T08:48:44Z|01031|binding|INFO|7246ed42-6ec3-42e8-9b9d-12606aeeb43c: Claiming fa:16:3e:8c:96:cd 10.100.0.3
Nov 25 08:48:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.957 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:96:cd 10.100.0.3'], port_security=['fa:16:3e:8c:96:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '12', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7246ed42-6ec3-42e8-9b9d-12606aeeb43c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:48:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.958 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 bound to our chassis
Nov 25 08:48:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.959 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 08:48:44 compute-0 NetworkManager[48915]: <info>  [1764060524.9636] manager: (tap7246ed42-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/422)
Nov 25 08:48:44 compute-0 ovn_controller[152859]: 2025-11-25T08:48:44Z|01032|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c ovn-installed in OVS
Nov 25 08:48:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.971 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c66fe68e-d391-4fba-9549-df3b077175fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.972 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa04d86f-71 in ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:48:44 compute-0 ovn_controller[152859]: 2025-11-25T08:48:44Z|01033|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c up in Southbound
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.974 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa04d86f-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:48:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.974 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ea699c-742d-460a-9b9e-653500cb7727]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.975 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9d97dd-87a7-4d0c-9058-79f16db6f2d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:44 compute-0 systemd-udevd[361142]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:48:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:44.986 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[bf48e926-c2d7-49e3-a176-91046f4fb5d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:44 compute-0 nova_compute[253538]: 2025-11-25 08:48:44.986 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:44 compute-0 NetworkManager[48915]: <info>  [1764060524.9978] device (tap7246ed42-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:48:45 compute-0 systemd-machined[215790]: New machine qemu-132-instance-00000068.
Nov 25 08:48:45 compute-0 NetworkManager[48915]: <info>  [1764060525.0028] device (tap7246ed42-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.010 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[91aa5166-4ce5-4e56-b063-6cdb33e1c7e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:45 compute-0 systemd[1]: Started Virtual Machine qemu-132-instance-00000068.
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.038 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9be68a92-4de1-4b31-8981-8525fc6d3b70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.043 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d508633a-a61a-4cfc-847e-6465541e6b64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:45 compute-0 NetworkManager[48915]: <info>  [1764060525.0438] manager: (tapaa04d86f-70): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Nov 25 08:48:45 compute-0 systemd-udevd[361148]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.079 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bc99cff6-4902-4e1b-bb97-faca9cbed997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.081 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5feead4b-ddae-415f-9998-595bef0eb7c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:45 compute-0 NetworkManager[48915]: <info>  [1764060525.1042] device (tapaa04d86f-70): carrier: link connected
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.109 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5be971-c791-4923-8ffc-62bb894235d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.127 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a3be46-c5df-46cb-a8ee-cf386d59cb7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa04d86f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:d2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 304], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587970, 'reachable_time': 16497, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361176, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.141 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[52d96dea-7fca-4413-a159-571eb1710b02]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:d2af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587970, 'tstamp': 587970}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361177, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.159 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ab54f03f-df66-4223-9bd3-f47c3f350b26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa04d86f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:d2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 304], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587970, 'reachable_time': 16497, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361178, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.189 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c57fd2-65e4-4bcf-9709-2fd9c9e54f88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.262 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[139f1a7c-da6d-45fd-ae6c-138e16275895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.263 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa04d86f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.264 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.264 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa04d86f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.266 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:45 compute-0 NetworkManager[48915]: <info>  [1764060525.2668] manager: (tapaa04d86f-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Nov 25 08:48:45 compute-0 kernel: tapaa04d86f-70: entered promiscuous mode
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.270 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa04d86f-70, col_values=(('external_ids', {'iface-id': 'bedad8d3-cb44-47dd-87a4-c24448506880'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.271 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:45 compute-0 ovn_controller[152859]: 2025-11-25T08:48:45Z|01034|binding|INFO|Releasing lport bedad8d3-cb44-47dd-87a4-c24448506880 from this chassis (sb_readonly=0)
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.272 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.273 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.274 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[538a30f4-b27b-4a55-b6ca-ee22d8483e16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.275 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/aa04d86f-73a3-4b24-9c95-8ec29aa39064.pid.haproxy
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID aa04d86f-73a3-4b24-9c95-8ec29aa39064
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:48:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:45.275 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'env', 'PROCESS_TAG=haproxy-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa04d86f-73a3-4b24-9c95-8ec29aa39064.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.478 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.479 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060525.478455, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.479 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Started (Lifecycle Event)
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.493 253542 DEBUG nova.compute.manager [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.494 253542 DEBUG nova.objects.instance [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.503 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.506 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.518 253542 INFO nova.virt.libvirt.driver [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance running successfully.
Nov 25 08:48:45 compute-0 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.521 253542 DEBUG nova.virt.libvirt.guest [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.522 253542 DEBUG nova.compute.manager [None req-4df8f282-a01e-4d82-aa1b-b3b18fe6f346 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.527 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.527 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060525.4817843, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.527 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Resumed (Lifecycle Event)
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.568 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.574 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:48:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2023: 321 pgs: 321 active+clean; 213 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1023 B/s wr, 154 op/s
Nov 25 08:48:45 compute-0 podman[361252]: 2025-11-25 08:48:45.674711661 +0000 UTC m=+0.057147381 container create a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.704 253542 DEBUG nova.compute.manager [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.705 253542 DEBUG oslo_concurrency.lockutils [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.705 253542 DEBUG oslo_concurrency.lockutils [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.705 253542 DEBUG oslo_concurrency.lockutils [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.705 253542 DEBUG nova.compute.manager [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.706 253542 WARNING nova.compute.manager [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state active and task_state None.
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.706 253542 DEBUG nova.compute.manager [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.706 253542 DEBUG oslo_concurrency.lockutils [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.707 253542 DEBUG oslo_concurrency.lockutils [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.707 253542 DEBUG oslo_concurrency.lockutils [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.707 253542 DEBUG nova.compute.manager [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:48:45 compute-0 nova_compute[253538]: 2025-11-25 08:48:45.707 253542 WARNING nova.compute.manager [req-392ce04e-e0ac-4c35-b92f-f22f5846cd81 req-af05835e-078f-4d21-8876-f629abe8af1e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state active and task_state None.
Nov 25 08:48:45 compute-0 systemd[1]: Started libpod-conmon-a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a.scope.
Nov 25 08:48:45 compute-0 podman[361252]: 2025-11-25 08:48:45.647074551 +0000 UTC m=+0.029510301 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:48:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27c4637736e1dcbe4428132fd374615171f835052d57e6b8b7096684d764b77e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:48:45 compute-0 podman[361252]: 2025-11-25 08:48:45.764411404 +0000 UTC m=+0.146847154 container init a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:48:45 compute-0 podman[361252]: 2025-11-25 08:48:45.770861046 +0000 UTC m=+0.153296776 container start a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:48:45 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[361267]: [NOTICE]   (361271) : New worker (361273) forked
Nov 25 08:48:45 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[361267]: [NOTICE]   (361271) : Loading success.
Nov 25 08:48:46 compute-0 ceph-mon[75015]: pgmap v2023: 321 pgs: 321 active+clean; 213 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1023 B/s wr, 154 op/s
Nov 25 08:48:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:48:47 compute-0 nova_compute[253538]: 2025-11-25 08:48:47.484 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 221 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 933 KiB/s rd, 1.4 MiB/s wr, 55 op/s
Nov 25 08:48:47 compute-0 nova_compute[253538]: 2025-11-25 08:48:47.932 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:48 compute-0 ovn_controller[152859]: 2025-11-25T08:48:48Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:68:34 10.100.0.9
Nov 25 08:48:48 compute-0 ovn_controller[152859]: 2025-11-25T08:48:48Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:68:34 10.100.0.9
Nov 25 08:48:48 compute-0 ceph-mon[75015]: pgmap v2024: 321 pgs: 321 active+clean; 221 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 933 KiB/s rd, 1.4 MiB/s wr, 55 op/s
Nov 25 08:48:49 compute-0 ovn_controller[152859]: 2025-11-25T08:48:49Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:96:cd 10.100.0.3
Nov 25 08:48:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 233 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 916 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Nov 25 08:48:50 compute-0 ceph-mon[75015]: pgmap v2025: 321 pgs: 321 active+clean; 233 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 916 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Nov 25 08:48:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2026: 321 pgs: 321 active+clean; 243 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 685 KiB/s rd, 2.5 MiB/s wr, 101 op/s
Nov 25 08:48:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:48:52 compute-0 nova_compute[253538]: 2025-11-25 08:48:52.497 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:52 compute-0 ceph-mon[75015]: pgmap v2026: 321 pgs: 321 active+clean; 243 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 685 KiB/s rd, 2.5 MiB/s wr, 101 op/s
Nov 25 08:48:52 compute-0 nova_compute[253538]: 2025-11-25 08:48:52.935 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:48:53
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.meta', 'vms', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'backups']
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 246 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 905 KiB/s rd, 2.3 MiB/s wr, 116 op/s
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:48:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:48:54 compute-0 ceph-mon[75015]: pgmap v2027: 321 pgs: 321 active+clean; 246 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 905 KiB/s rd, 2.3 MiB/s wr, 116 op/s
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.784580) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060534784682, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 619, "num_deletes": 252, "total_data_size": 640566, "memory_usage": 651384, "flush_reason": "Manual Compaction"}
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Nov 25 08:48:54 compute-0 nova_compute[253538]: 2025-11-25 08:48:54.787 253542 INFO nova.compute.manager [None req-c0924b2b-b51e-468d-b422-c90007fdb370 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Get console output
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060534793406, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 634118, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42094, "largest_seqno": 42712, "table_properties": {"data_size": 630790, "index_size": 1236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8031, "raw_average_key_size": 19, "raw_value_size": 623927, "raw_average_value_size": 1529, "num_data_blocks": 54, "num_entries": 408, "num_filter_entries": 408, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060497, "oldest_key_time": 1764060497, "file_creation_time": 1764060534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 8871 microseconds, and 3589 cpu microseconds.
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:48:54 compute-0 nova_compute[253538]: 2025-11-25 08:48:54.794 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.793468) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 634118 bytes OK
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.793496) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.796284) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.796350) EVENT_LOG_v1 {"time_micros": 1764060534796338, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.796384) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 637181, prev total WAL file size 637181, number of live WAL files 2.
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.797057) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(619KB)], [95(8219KB)]
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060534797106, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 9051369, "oldest_snapshot_seqno": -1}
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6410 keys, 7383917 bytes, temperature: kUnknown
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060534852397, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 7383917, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7343599, "index_size": 23184, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 166342, "raw_average_key_size": 25, "raw_value_size": 7231117, "raw_average_value_size": 1128, "num_data_blocks": 907, "num_entries": 6410, "num_filter_entries": 6410, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.852820) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 7383917 bytes
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.854658) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.4 rd, 133.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 8.0 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(25.9) write-amplify(11.6) OK, records in: 6929, records dropped: 519 output_compression: NoCompression
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.854677) EVENT_LOG_v1 {"time_micros": 1764060534854667, "job": 56, "event": "compaction_finished", "compaction_time_micros": 55405, "compaction_time_cpu_micros": 33116, "output_level": 6, "num_output_files": 1, "total_output_size": 7383917, "num_input_records": 6929, "num_output_records": 6410, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060534854973, "job": 56, "event": "table_file_deletion", "file_number": 97}
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060534857126, "job": 56, "event": "table_file_deletion", "file_number": 95}
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.796957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.857218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.857224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.857226) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.857228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:48:54 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:48:54.857229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:48:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2028: 321 pgs: 321 active+clean; 248 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 859 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Nov 25 08:48:56 compute-0 ceph-mon[75015]: pgmap v2028: 321 pgs: 321 active+clean; 248 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 859 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Nov 25 08:48:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:48:57 compute-0 nova_compute[253538]: 2025-11-25 08:48:57.499 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2029: 321 pgs: 321 active+clean; 248 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 860 KiB/s rd, 2.2 MiB/s wr, 112 op/s
Nov 25 08:48:57 compute-0 nova_compute[253538]: 2025-11-25 08:48:57.938 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:58 compute-0 podman[361284]: 2025-11-25 08:48:58.821388439 +0000 UTC m=+0.065677090 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 25 08:48:58 compute-0 ceph-mon[75015]: pgmap v2029: 321 pgs: 321 active+clean; 248 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 860 KiB/s rd, 2.2 MiB/s wr, 112 op/s
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.225 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.226 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.227 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.228 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.228 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.230 253542 INFO nova.compute.manager [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Terminating instance
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.232 253542 DEBUG nova.compute.manager [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:48:59 compute-0 kernel: tap7246ed42-6e (unregistering): left promiscuous mode
Nov 25 08:48:59 compute-0 NetworkManager[48915]: <info>  [1764060539.3180] device (tap7246ed42-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:48:59 compute-0 ovn_controller[152859]: 2025-11-25T08:48:59Z|01035|binding|INFO|Releasing lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c from this chassis (sb_readonly=0)
Nov 25 08:48:59 compute-0 ovn_controller[152859]: 2025-11-25T08:48:59Z|01036|binding|INFO|Setting lport 7246ed42-6ec3-42e8-9b9d-12606aeeb43c down in Southbound
Nov 25 08:48:59 compute-0 ovn_controller[152859]: 2025-11-25T08:48:59Z|01037|binding|INFO|Removing iface tap7246ed42-6e ovn-installed in OVS
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.330 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.333 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.338 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:96:cd 10.100.0.3'], port_security=['fa:16:3e:8c:96:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f5e68e6-8f02-4a3a-ac0c-322d82950d98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '947f731219de435196429037dc94fd56', 'neutron:revision_number': '13', 'neutron:security_group_ids': '6084cb22-e3b6-414c-801b-b7e992786a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4043632-827b-4717-bb5e-38582ebfacad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=7246ed42-6ec3-42e8-9b9d-12606aeeb43c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:48:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.341 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7246ed42-6ec3-42e8-9b9d-12606aeeb43c in datapath aa04d86f-73a3-4b24-9c95-8ec29aa39064 unbound from our chassis
Nov 25 08:48:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.342 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa04d86f-73a3-4b24-9c95-8ec29aa39064, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:48:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.344 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4bdd8429-4ea8-470f-885c-0b844c808e12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.344 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 namespace which is not needed anymore
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.366 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:59 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000068.scope: Deactivated successfully.
Nov 25 08:48:59 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000068.scope: Consumed 4.733s CPU time.
Nov 25 08:48:59 compute-0 systemd-machined[215790]: Machine qemu-132-instance-00000068 terminated.
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.461 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:59 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[361267]: [NOTICE]   (361271) : haproxy version is 2.8.14-c23fe91
Nov 25 08:48:59 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[361267]: [NOTICE]   (361271) : path to executable is /usr/sbin/haproxy
Nov 25 08:48:59 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[361267]: [ALERT]    (361271) : Current worker (361273) exited with code 143 (Terminated)
Nov 25 08:48:59 compute-0 neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064[361267]: [WARNING]  (361271) : All workers exited. Exiting... (0)
Nov 25 08:48:59 compute-0 systemd[1]: libpod-a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a.scope: Deactivated successfully.
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.468 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:59 compute-0 podman[361329]: 2025-11-25 08:48:59.475668302 +0000 UTC m=+0.052338983 container died a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.476 253542 INFO nova.virt.libvirt.driver [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Instance destroyed successfully.
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.476 253542 DEBUG nova.objects.instance [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lazy-loading 'resources' on Instance uuid 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.490 253542 DEBUG nova.virt.libvirt.vif [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-673040864',display_name='tempest-ServersNegativeTestJSON-server-673040864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-673040864',id=104,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:48:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='947f731219de435196429037dc94fd56',ramdisk_id='',reservation_id='r-lfthxaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-740481153',owner_user_name='tempest-ServersNegativeTestJSON-740481153-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:48:45Z,user_data=None,user_id='229f02d89c9848d8aaaaab070ce4d179',uuid=0f5e68e6-8f02-4a3a-ac0c-322d82950d98,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.491 253542 DEBUG nova.network.os_vif_util [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converting VIF {"id": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "address": "fa:16:3e:8c:96:cd", "network": {"id": "aa04d86f-73a3-4b24-9c95-8ec29aa39064", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1377856389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "947f731219de435196429037dc94fd56", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7246ed42-6e", "ovs_interfaceid": "7246ed42-6ec3-42e8-9b9d-12606aeeb43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.491 253542 DEBUG nova.network.os_vif_util [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.492 253542 DEBUG os_vif [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.493 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.494 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7246ed42-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.496 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.497 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.498 253542 INFO os_vif [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:96:cd,bridge_name='br-int',has_traffic_filtering=True,id=7246ed42-6ec3-42e8-9b9d-12606aeeb43c,network=Network(aa04d86f-73a3-4b24-9c95-8ec29aa39064),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7246ed42-6e')
Nov 25 08:48:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a-userdata-shm.mount: Deactivated successfully.
Nov 25 08:48:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-27c4637736e1dcbe4428132fd374615171f835052d57e6b8b7096684d764b77e-merged.mount: Deactivated successfully.
Nov 25 08:48:59 compute-0 podman[361329]: 2025-11-25 08:48:59.552853858 +0000 UTC m=+0.129524519 container cleanup a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:48:59 compute-0 systemd[1]: libpod-conmon-a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a.scope: Deactivated successfully.
Nov 25 08:48:59 compute-0 podman[361387]: 2025-11-25 08:48:59.622382451 +0000 UTC m=+0.046164728 container remove a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 08:48:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.631 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[107b1716-079d-44ed-b273-ad3e9cc004a5]: (4, ('Tue Nov 25 08:48:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 (a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a)\na559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a\nTue Nov 25 08:48:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 (a559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a)\na559c132001b3659bbd3cf2fe2e10523a8a704c8e542ff28e0667476905b656a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.633 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bc966103-c07a-4c93-b012-26092e147202]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.634 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa04d86f-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.636 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:59 compute-0 kernel: tapaa04d86f-70: left promiscuous mode
Nov 25 08:48:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 248 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 710 KiB/s rd, 1.0 MiB/s wr, 90 op/s
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.650 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:48:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.654 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eb34e242-17c0-445b-8dea-462496b548fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.674 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[86f7403d-2f41-4501-8905-e58ede1d6c33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.676 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[08b62d5e-8d7e-4fdb-82c9-ca944ded141f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.697 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[73ce3328-c1b9-476a-b337-9e0d60a7216f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587963, 'reachable_time': 44558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361403, 'error': None, 'target': 'ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.700 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa04d86f-73a3-4b24-9c95-8ec29aa39064 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:48:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:48:59.700 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[09e260e7-04f2-4a88-881f-043690682133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:48:59 compute-0 systemd[1]: run-netns-ovnmeta\x2daa04d86f\x2d73a3\x2d4b24\x2d9c95\x2d8ec29aa39064.mount: Deactivated successfully.
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.917 253542 INFO nova.virt.libvirt.driver [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Deleting instance files /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_del
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.918 253542 INFO nova.virt.libvirt.driver [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Deletion of /var/lib/nova/instances/0f5e68e6-8f02-4a3a-ac0c-322d82950d98_del complete
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.991 253542 INFO nova.compute.manager [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Took 0.76 seconds to destroy the instance on the hypervisor.
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.992 253542 DEBUG oslo.service.loopingcall [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.993 253542 DEBUG nova.compute.manager [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:48:59 compute-0 nova_compute[253538]: 2025-11-25 08:48:59.993 253542 DEBUG nova.network.neutron [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:49:00 compute-0 nova_compute[253538]: 2025-11-25 08:49:00.114 253542 DEBUG nova.compute.manager [req-1eeb0f55-d83a-477d-8020-f4f7e0176f1f req-500be00e-f872-4ea7-962d-6737220fae48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:00 compute-0 nova_compute[253538]: 2025-11-25 08:49:00.115 253542 DEBUG oslo_concurrency.lockutils [req-1eeb0f55-d83a-477d-8020-f4f7e0176f1f req-500be00e-f872-4ea7-962d-6737220fae48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:00 compute-0 nova_compute[253538]: 2025-11-25 08:49:00.115 253542 DEBUG oslo_concurrency.lockutils [req-1eeb0f55-d83a-477d-8020-f4f7e0176f1f req-500be00e-f872-4ea7-962d-6737220fae48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:00 compute-0 nova_compute[253538]: 2025-11-25 08:49:00.115 253542 DEBUG oslo_concurrency.lockutils [req-1eeb0f55-d83a-477d-8020-f4f7e0176f1f req-500be00e-f872-4ea7-962d-6737220fae48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:00 compute-0 nova_compute[253538]: 2025-11-25 08:49:00.116 253542 DEBUG nova.compute.manager [req-1eeb0f55-d83a-477d-8020-f4f7e0176f1f req-500be00e-f872-4ea7-962d-6737220fae48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:49:00 compute-0 nova_compute[253538]: 2025-11-25 08:49:00.116 253542 DEBUG nova.compute.manager [req-1eeb0f55-d83a-477d-8020-f4f7e0176f1f req-500be00e-f872-4ea7-962d-6737220fae48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-unplugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:49:00 compute-0 nova_compute[253538]: 2025-11-25 08:49:00.774 253542 DEBUG nova.network.neutron [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:49:00 compute-0 nova_compute[253538]: 2025-11-25 08:49:00.801 253542 INFO nova.compute.manager [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Took 0.81 seconds to deallocate network for instance.
Nov 25 08:49:00 compute-0 nova_compute[253538]: 2025-11-25 08:49:00.861 253542 DEBUG nova.compute.manager [req-a2477d76-dee8-4bd5-8d0c-61626c505780 req-fe2358db-9c3a-41b4-aabb-c6b4fc5eba0e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-deleted-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:00 compute-0 nova_compute[253538]: 2025-11-25 08:49:00.892 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:00 compute-0 nova_compute[253538]: 2025-11-25 08:49:00.893 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:00 compute-0 ceph-mon[75015]: pgmap v2030: 321 pgs: 321 active+clean; 248 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 710 KiB/s rd, 1.0 MiB/s wr, 90 op/s
Nov 25 08:49:00 compute-0 nova_compute[253538]: 2025-11-25 08:49:00.977 253542 DEBUG oslo_concurrency.processutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:49:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1124057411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:49:01 compute-0 nova_compute[253538]: 2025-11-25 08:49:01.419 253542 DEBUG oslo_concurrency.processutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:01 compute-0 nova_compute[253538]: 2025-11-25 08:49:01.428 253542 DEBUG nova.compute.provider_tree [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:49:01 compute-0 nova_compute[253538]: 2025-11-25 08:49:01.444 253542 DEBUG nova.scheduler.client.report [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:49:01 compute-0 nova_compute[253538]: 2025-11-25 08:49:01.463 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:01 compute-0 nova_compute[253538]: 2025-11-25 08:49:01.490 253542 INFO nova.scheduler.client.report [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Deleted allocations for instance 0f5e68e6-8f02-4a3a-ac0c-322d82950d98
Nov 25 08:49:01 compute-0 nova_compute[253538]: 2025-11-25 08:49:01.554 253542 DEBUG oslo_concurrency.lockutils [None req-446c21e8-43f5-418f-a525-16e7c35e584d 229f02d89c9848d8aaaaab070ce4d179 947f731219de435196429037dc94fd56 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 218 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 684 KiB/s rd, 341 KiB/s wr, 72 op/s
Nov 25 08:49:01 compute-0 podman[361429]: 2025-11-25 08:49:01.815761452 +0000 UTC m=+0.064843037 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:49:01 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1124057411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:49:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:49:02 compute-0 nova_compute[253538]: 2025-11-25 08:49:02.378 253542 DEBUG nova.compute.manager [req-d0a440df-36f1-4958-a4ac-c3554ff05ae6 req-f5b70ea4-55fa-4648-9e49-efaec511f3fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:02 compute-0 nova_compute[253538]: 2025-11-25 08:49:02.379 253542 DEBUG oslo_concurrency.lockutils [req-d0a440df-36f1-4958-a4ac-c3554ff05ae6 req-f5b70ea4-55fa-4648-9e49-efaec511f3fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:02 compute-0 nova_compute[253538]: 2025-11-25 08:49:02.379 253542 DEBUG oslo_concurrency.lockutils [req-d0a440df-36f1-4958-a4ac-c3554ff05ae6 req-f5b70ea4-55fa-4648-9e49-efaec511f3fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:02 compute-0 nova_compute[253538]: 2025-11-25 08:49:02.381 253542 DEBUG oslo_concurrency.lockutils [req-d0a440df-36f1-4958-a4ac-c3554ff05ae6 req-f5b70ea4-55fa-4648-9e49-efaec511f3fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "0f5e68e6-8f02-4a3a-ac0c-322d82950d98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:02 compute-0 nova_compute[253538]: 2025-11-25 08:49:02.381 253542 DEBUG nova.compute.manager [req-d0a440df-36f1-4958-a4ac-c3554ff05ae6 req-f5b70ea4-55fa-4648-9e49-efaec511f3fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] No waiting events found dispatching network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:49:02 compute-0 nova_compute[253538]: 2025-11-25 08:49:02.382 253542 WARNING nova.compute.manager [req-d0a440df-36f1-4958-a4ac-c3554ff05ae6 req-f5b70ea4-55fa-4648-9e49-efaec511f3fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Received unexpected event network-vif-plugged-7246ed42-6ec3-42e8-9b9d-12606aeeb43c for instance with vm_state deleted and task_state None.
Nov 25 08:49:02 compute-0 nova_compute[253538]: 2025-11-25 08:49:02.502 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:02 compute-0 ceph-mon[75015]: pgmap v2031: 321 pgs: 321 active+clean; 218 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 684 KiB/s rd, 341 KiB/s wr, 72 op/s
Nov 25 08:49:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 167 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 88 KiB/s wr, 52 op/s
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000759972549163248 of space, bias 1.0, pg target 0.2279917647489744 quantized to 32 (current 32)
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:49:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:49:04 compute-0 nova_compute[253538]: 2025-11-25 08:49:04.495 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:04 compute-0 ceph-mon[75015]: pgmap v2032: 321 pgs: 321 active+clean; 167 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 88 KiB/s wr, 52 op/s
Nov 25 08:49:05 compute-0 ovn_controller[152859]: 2025-11-25T08:49:05Z|01038|binding|INFO|Releasing lport 2a6dde1f-8745-4374-9d65-dba32b48db06 from this chassis (sb_readonly=0)
Nov 25 08:49:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 167 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 27 KiB/s wr, 29 op/s
Nov 25 08:49:05 compute-0 nova_compute[253538]: 2025-11-25 08:49:05.746 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:06 compute-0 sshd-session[361405]: Received disconnect from 45.78.222.2 port 57786:11: Bye Bye [preauth]
Nov 25 08:49:06 compute-0 sshd-session[361405]: Disconnected from authenticating user root 45.78.222.2 port 57786 [preauth]
Nov 25 08:49:06 compute-0 nova_compute[253538]: 2025-11-25 08:49:06.830 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:06 compute-0 nova_compute[253538]: 2025-11-25 08:49:06.830 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:06 compute-0 nova_compute[253538]: 2025-11-25 08:49:06.842 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:49:06 compute-0 nova_compute[253538]: 2025-11-25 08:49:06.910 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:06 compute-0 nova_compute[253538]: 2025-11-25 08:49:06.911 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:06 compute-0 nova_compute[253538]: 2025-11-25 08:49:06.918 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:49:06 compute-0 nova_compute[253538]: 2025-11-25 08:49:06.918 253542 INFO nova.compute.claims [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:49:07 compute-0 ceph-mon[75015]: pgmap v2033: 321 pgs: 321 active+clean; 167 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 27 KiB/s wr, 29 op/s
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.019 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:49:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:49:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1469260779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.480 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.485 253542 DEBUG nova.compute.provider_tree [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.502 253542 DEBUG nova.scheduler.client.report [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.507 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.525 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.526 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.568 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.568 253542 DEBUG nova.network.neutron [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.583 253542 INFO nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.599 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:49:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2034: 321 pgs: 321 active+clean; 167 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 26 KiB/s wr, 29 op/s
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.665 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.667 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.668 253542 INFO nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Creating image(s)
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.695 253542 DEBUG nova.storage.rbd_utils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image ec34a574-9c78-43d8-a65a-aa4052a5d452_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.715 253542 DEBUG nova.storage.rbd_utils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image ec34a574-9c78-43d8-a65a-aa4052a5d452_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.739 253542 DEBUG nova.storage.rbd_utils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image ec34a574-9c78-43d8-a65a-aa4052a5d452_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.744 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.817 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.818 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.818 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.819 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.839 253542 DEBUG nova.storage.rbd_utils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image ec34a574-9c78-43d8-a65a-aa4052a5d452_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.842 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ec34a574-9c78-43d8-a65a-aa4052a5d452_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:07 compute-0 podman[361513]: 2025-11-25 08:49:07.856183303 +0000 UTC m=+0.108344523 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:49:07 compute-0 nova_compute[253538]: 2025-11-25 08:49:07.886 253542 DEBUG nova.policy [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:49:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1469260779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:49:08 compute-0 nova_compute[253538]: 2025-11-25 08:49:08.273 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ec34a574-9c78-43d8-a65a-aa4052a5d452_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:08 compute-0 nova_compute[253538]: 2025-11-25 08:49:08.337 253542 DEBUG nova.storage.rbd_utils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image ec34a574-9c78-43d8-a65a-aa4052a5d452_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:49:08 compute-0 nova_compute[253538]: 2025-11-25 08:49:08.440 253542 DEBUG nova.objects.instance [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid ec34a574-9c78-43d8-a65a-aa4052a5d452 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:49:08 compute-0 nova_compute[253538]: 2025-11-25 08:49:08.453 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:49:08 compute-0 nova_compute[253538]: 2025-11-25 08:49:08.454 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Ensure instance console log exists: /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:49:08 compute-0 nova_compute[253538]: 2025-11-25 08:49:08.454 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:08 compute-0 nova_compute[253538]: 2025-11-25 08:49:08.454 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:08 compute-0 nova_compute[253538]: 2025-11-25 08:49:08.455 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:08 compute-0 nova_compute[253538]: 2025-11-25 08:49:08.957 253542 DEBUG nova.network.neutron [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Successfully created port: bc66e4f2-ce2b-49ca-ba89-a86607d56e5a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:49:09 compute-0 ceph-mon[75015]: pgmap v2034: 321 pgs: 321 active+clean; 167 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 26 KiB/s wr, 29 op/s
Nov 25 08:49:09 compute-0 nova_compute[253538]: 2025-11-25 08:49:09.497 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 181 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 604 KiB/s wr, 41 op/s
Nov 25 08:49:10 compute-0 nova_compute[253538]: 2025-11-25 08:49:10.193 253542 DEBUG nova.network.neutron [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Successfully updated port: bc66e4f2-ce2b-49ca-ba89-a86607d56e5a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:49:10 compute-0 nova_compute[253538]: 2025-11-25 08:49:10.205 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-ec34a574-9c78-43d8-a65a-aa4052a5d452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:49:10 compute-0 nova_compute[253538]: 2025-11-25 08:49:10.206 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-ec34a574-9c78-43d8-a65a-aa4052a5d452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:49:10 compute-0 nova_compute[253538]: 2025-11-25 08:49:10.206 253542 DEBUG nova.network.neutron [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:49:10 compute-0 ceph-mon[75015]: pgmap v2035: 321 pgs: 321 active+clean; 181 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 604 KiB/s wr, 41 op/s
Nov 25 08:49:10 compute-0 nova_compute[253538]: 2025-11-25 08:49:10.363 253542 DEBUG nova.compute.manager [req-d63b1dad-4547-4e35-b2a9-8da69771b43f req-4eacee53-84a7-4b9a-8f1c-e41af8ec8a97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-changed-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:10 compute-0 nova_compute[253538]: 2025-11-25 08:49:10.364 253542 DEBUG nova.compute.manager [req-d63b1dad-4547-4e35-b2a9-8da69771b43f req-4eacee53-84a7-4b9a-8f1c-e41af8ec8a97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Refreshing instance network info cache due to event network-changed-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:49:10 compute-0 nova_compute[253538]: 2025-11-25 08:49:10.364 253542 DEBUG oslo_concurrency.lockutils [req-d63b1dad-4547-4e35-b2a9-8da69771b43f req-4eacee53-84a7-4b9a-8f1c-e41af8ec8a97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ec34a574-9c78-43d8-a65a-aa4052a5d452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:49:10 compute-0 nova_compute[253538]: 2025-11-25 08:49:10.403 253542 DEBUG nova.network.neutron [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:49:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 194 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 835 KiB/s wr, 54 op/s
Nov 25 08:49:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.628 253542 DEBUG nova.network.neutron [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Updating instance_info_cache with network_info: [{"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.645 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-ec34a574-9c78-43d8-a65a-aa4052a5d452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.646 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Instance network_info: |[{"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.646 253542 DEBUG oslo_concurrency.lockutils [req-d63b1dad-4547-4e35-b2a9-8da69771b43f req-4eacee53-84a7-4b9a-8f1c-e41af8ec8a97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ec34a574-9c78-43d8-a65a-aa4052a5d452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.647 253542 DEBUG nova.network.neutron [req-d63b1dad-4547-4e35-b2a9-8da69771b43f req-4eacee53-84a7-4b9a-8f1c-e41af8ec8a97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Refreshing network info cache for port bc66e4f2-ce2b-49ca-ba89-a86607d56e5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.649 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Start _get_guest_xml network_info=[{"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.653 253542 WARNING nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.657 253542 DEBUG nova.virt.libvirt.host [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.658 253542 DEBUG nova.virt.libvirt.host [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.662 253542 DEBUG nova.virt.libvirt.host [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.662 253542 DEBUG nova.virt.libvirt.host [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.663 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.663 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.664 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.664 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.664 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.664 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.664 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.665 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.665 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.665 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.665 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.666 253542 DEBUG nova.virt.hardware [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:49:12 compute-0 nova_compute[253538]: 2025-11-25 08:49:12.668 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:12 compute-0 ceph-mon[75015]: pgmap v2036: 321 pgs: 321 active+clean; 194 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 835 KiB/s wr, 54 op/s
Nov 25 08:49:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:49:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/124645347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.113 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.135 253542 DEBUG nova.storage.rbd_utils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image ec34a574-9c78-43d8-a65a-aa4052a5d452_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.139 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:49:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3637525819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.578 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.580 253542 DEBUG nova.virt.libvirt.vif [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:49:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-286927500',display_name='tempest-TestNetworkBasicOps-server-286927500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-286927500',id=107,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpx2ujcMDfZDTmdH6itq/rcGct31eJ9TyaupJxEMjoZjvc+WgDrLkUbHxw8m+QMC78njJvM+fOPyWv9TETxSR2Le+lHvoJLnW/RQzdZT3SocZ8dY0e2xdmGW9jZNSUf0g==',key_name='tempest-TestNetworkBasicOps-1732848641',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-tlyua8ve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:49:07Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=ec34a574-9c78-43d8-a65a-aa4052a5d452,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.581 253542 DEBUG nova.network.os_vif_util [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.581 253542 DEBUG nova.network.os_vif_util [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:98:59,bridge_name='br-int',has_traffic_filtering=True,id=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a,network=Network(34b2f9d4-63fb-44ac-b745-d86825df1c61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc66e4f2-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.583 253542 DEBUG nova.objects.instance [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid ec34a574-9c78-43d8-a65a-aa4052a5d452 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.604 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:49:13 compute-0 nova_compute[253538]:   <uuid>ec34a574-9c78-43d8-a65a-aa4052a5d452</uuid>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   <name>instance-0000006b</name>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkBasicOps-server-286927500</nova:name>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:49:12</nova:creationTime>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:49:13 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:49:13 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:49:13 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:49:13 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:49:13 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:49:13 compute-0 nova_compute[253538]:         <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:49:13 compute-0 nova_compute[253538]:         <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:49:13 compute-0 nova_compute[253538]:         <nova:port uuid="bc66e4f2-ce2b-49ca-ba89-a86607d56e5a">
Nov 25 08:49:13 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <system>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <entry name="serial">ec34a574-9c78-43d8-a65a-aa4052a5d452</entry>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <entry name="uuid">ec34a574-9c78-43d8-a65a-aa4052a5d452</entry>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     </system>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   <os>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   </os>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   <features>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   </features>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/ec34a574-9c78-43d8-a65a-aa4052a5d452_disk">
Nov 25 08:49:13 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       </source>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:49:13 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/ec34a574-9c78-43d8-a65a-aa4052a5d452_disk.config">
Nov 25 08:49:13 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       </source>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:49:13 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:d3:98:59"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <target dev="tapbc66e4f2-ce"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452/console.log" append="off"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <video>
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     </video>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:49:13 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:49:13 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:49:13 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:49:13 compute-0 nova_compute[253538]: </domain>
Nov 25 08:49:13 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.605 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Preparing to wait for external event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.606 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.606 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.606 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.607 253542 DEBUG nova.virt.libvirt.vif [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:49:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-286927500',display_name='tempest-TestNetworkBasicOps-server-286927500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-286927500',id=107,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpx2ujcMDfZDTmdH6itq/rcGct31eJ9TyaupJxEMjoZjvc+WgDrLkUbHxw8m+QMC78njJvM+fOPyWv9TETxSR2Le+lHvoJLnW/RQzdZT3SocZ8dY0e2xdmGW9jZNSUf0g==',key_name='tempest-TestNetworkBasicOps-1732848641',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-tlyua8ve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:49:07Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=ec34a574-9c78-43d8-a65a-aa4052a5d452,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.607 253542 DEBUG nova.network.os_vif_util [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.608 253542 DEBUG nova.network.os_vif_util [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:98:59,bridge_name='br-int',has_traffic_filtering=True,id=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a,network=Network(34b2f9d4-63fb-44ac-b745-d86825df1c61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc66e4f2-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.608 253542 DEBUG os_vif [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:98:59,bridge_name='br-int',has_traffic_filtering=True,id=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a,network=Network(34b2f9d4-63fb-44ac-b745-d86825df1c61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc66e4f2-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.610 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.610 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.613 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.613 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc66e4f2-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.614 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc66e4f2-ce, col_values=(('external_ids', {'iface-id': 'bc66e4f2-ce2b-49ca-ba89-a86607d56e5a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:98:59', 'vm-uuid': 'ec34a574-9c78-43d8-a65a-aa4052a5d452'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:13 compute-0 NetworkManager[48915]: <info>  [1764060553.6503] manager: (tapbc66e4f2-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Nov 25 08:49:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2037: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.649 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.653 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.655 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.655 253542 INFO os_vif [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:98:59,bridge_name='br-int',has_traffic_filtering=True,id=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a,network=Network(34b2f9d4-63fb-44ac-b745-d86825df1c61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc66e4f2-ce')
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.715 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.715 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.716 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:d3:98:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.716 253542 INFO nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Using config drive
Nov 25 08:49:13 compute-0 nova_compute[253538]: 2025-11-25 08:49:13.739 253542 DEBUG nova.storage.rbd_utils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image ec34a574-9c78-43d8-a65a-aa4052a5d452_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:49:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/124645347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:49:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3637525819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:49:14 compute-0 nova_compute[253538]: 2025-11-25 08:49:14.473 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060539.4726338, 0f5e68e6-8f02-4a3a-ac0c-322d82950d98 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:49:14 compute-0 nova_compute[253538]: 2025-11-25 08:49:14.474 253542 INFO nova.compute.manager [-] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] VM Stopped (Lifecycle Event)
Nov 25 08:49:14 compute-0 nova_compute[253538]: 2025-11-25 08:49:14.490 253542 DEBUG nova.compute.manager [None req-31915953-ace4-495d-9a22-bb1cd8eceb17 - - - - - -] [instance: 0f5e68e6-8f02-4a3a-ac0c-322d82950d98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:49:14 compute-0 nova_compute[253538]: 2025-11-25 08:49:14.559 253542 INFO nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Creating config drive at /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452/disk.config
Nov 25 08:49:14 compute-0 nova_compute[253538]: 2025-11-25 08:49:14.564 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp17n7bimt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:14 compute-0 nova_compute[253538]: 2025-11-25 08:49:14.707 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp17n7bimt" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:14 compute-0 nova_compute[253538]: 2025-11-25 08:49:14.735 253542 DEBUG nova.storage.rbd_utils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image ec34a574-9c78-43d8-a65a-aa4052a5d452_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:49:14 compute-0 nova_compute[253538]: 2025-11-25 08:49:14.739 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452/disk.config ec34a574-9c78-43d8-a65a-aa4052a5d452_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:14 compute-0 nova_compute[253538]: 2025-11-25 08:49:14.958 253542 DEBUG nova.network.neutron [req-d63b1dad-4547-4e35-b2a9-8da69771b43f req-4eacee53-84a7-4b9a-8f1c-e41af8ec8a97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Updated VIF entry in instance network info cache for port bc66e4f2-ce2b-49ca-ba89-a86607d56e5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:49:14 compute-0 nova_compute[253538]: 2025-11-25 08:49:14.959 253542 DEBUG nova.network.neutron [req-d63b1dad-4547-4e35-b2a9-8da69771b43f req-4eacee53-84a7-4b9a-8f1c-e41af8ec8a97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Updating instance_info_cache with network_info: [{"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:49:14 compute-0 nova_compute[253538]: 2025-11-25 08:49:14.979 253542 DEBUG oslo_concurrency.lockutils [req-d63b1dad-4547-4e35-b2a9-8da69771b43f req-4eacee53-84a7-4b9a-8f1c-e41af8ec8a97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ec34a574-9c78-43d8-a65a-aa4052a5d452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:49:14 compute-0 ceph-mon[75015]: pgmap v2037: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.132 253542 DEBUG oslo_concurrency.processutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452/disk.config ec34a574-9c78-43d8-a65a-aa4052a5d452_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.133 253542 INFO nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Deleting local config drive /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452/disk.config because it was imported into RBD.
Nov 25 08:49:15 compute-0 kernel: tapbc66e4f2-ce: entered promiscuous mode
Nov 25 08:49:15 compute-0 NetworkManager[48915]: <info>  [1764060555.1804] manager: (tapbc66e4f2-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/426)
Nov 25 08:49:15 compute-0 ovn_controller[152859]: 2025-11-25T08:49:15Z|01039|binding|INFO|Claiming lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for this chassis.
Nov 25 08:49:15 compute-0 ovn_controller[152859]: 2025-11-25T08:49:15Z|01040|binding|INFO|bc66e4f2-ce2b-49ca-ba89-a86607d56e5a: Claiming fa:16:3e:d3:98:59 10.100.0.23
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.181 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.191 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:98:59 10.100.0.23'], port_security=['fa:16:3e:d3:98:59 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'ec34a574-9c78-43d8-a65a-aa4052a5d452', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f6e83494-bf67-403f-bbb9-2d14a77705aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99275f51-5c9e-402a-8f14-8705646486ea, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.192 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc66e4f2-ce2b-49ca-ba89-a86607d56e5a in datapath 34b2f9d4-63fb-44ac-b745-d86825df1c61 bound to our chassis
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.193 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34b2f9d4-63fb-44ac-b745-d86825df1c61
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.205 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[add58370-ba19-40db-b15b-518aec5251f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.206 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap34b2f9d4-61 in ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.208 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap34b2f9d4-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.208 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[53bdcdb4-167a-4c9a-aa90-2abef5632b93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.209 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[24f5e807-8e17-4427-b9cf-312a972875de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:15 compute-0 systemd-machined[215790]: New machine qemu-133-instance-0000006b.
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.222 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[2b485511-423f-4efb-9a41-2b2066629cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.226 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:15 compute-0 ovn_controller[152859]: 2025-11-25T08:49:15Z|01041|binding|INFO|Setting lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a ovn-installed in OVS
Nov 25 08:49:15 compute-0 ovn_controller[152859]: 2025-11-25T08:49:15Z|01042|binding|INFO|Setting lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a up in Southbound
Nov 25 08:49:15 compute-0 systemd[1]: Started Virtual Machine qemu-133-instance-0000006b.
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.229 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:15 compute-0 systemd-udevd[361803]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.247 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[479adfcc-83db-4bbf-8db4-14e6a1631cc4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:15 compute-0 NetworkManager[48915]: <info>  [1764060555.2532] device (tapbc66e4f2-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:49:15 compute-0 NetworkManager[48915]: <info>  [1764060555.2541] device (tapbc66e4f2-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.275 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[846abea7-0796-4804-845b-2c3727dcbfd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.281 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[38587ca0-a519-4786-b33d-40760078c197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:15 compute-0 NetworkManager[48915]: <info>  [1764060555.2829] manager: (tap34b2f9d4-60): new Veth device (/org/freedesktop/NetworkManager/Devices/427)
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.313 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e83aec7c-fbf7-4026-9e0d-0c8e102f1877]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.317 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[810ddae8-31e3-4290-bf3f-179646013604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:15 compute-0 NetworkManager[48915]: <info>  [1764060555.3454] device (tap34b2f9d4-60): carrier: link connected
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.351 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[78fbeab3-8c81-4b96-9af0-fc5d274e621e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.370 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ab856813-8e54-4bd1-87af-bba6ec3a2213]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b2f9d4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:3b:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590994, 'reachable_time': 30016, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361833, 'error': None, 'target': 'ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.394 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1823f16c-fab9-42bb-b69e-0df924083fcb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:3bef'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 590994, 'tstamp': 590994}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361834, 'error': None, 'target': 'ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.424 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[901a7577-9579-4404-a8b4-7c0969c7dd9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b2f9d4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:3b:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590994, 'reachable_time': 30016, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361835, 'error': None, 'target': 'ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.467 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a3435674-4c2c-4bc2-83b7-89fbd7381e5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.534 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cda45b2c-33ff-4e68-a429-a2748dd99857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.536 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b2f9d4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.537 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.537 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b2f9d4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:15 compute-0 NetworkManager[48915]: <info>  [1764060555.5405] manager: (tap34b2f9d4-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Nov 25 08:49:15 compute-0 kernel: tap34b2f9d4-60: entered promiscuous mode
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.543 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34b2f9d4-60, col_values=(('external_ids', {'iface-id': '07382928-ba7a-4550-9d32-44d1241668cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:15 compute-0 ovn_controller[152859]: 2025-11-25T08:49:15Z|01043|binding|INFO|Releasing lport 07382928-ba7a-4550-9d32-44d1241668cf from this chassis (sb_readonly=0)
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.546 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/34b2f9d4-63fb-44ac-b745-d86825df1c61.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/34b2f9d4-63fb-44ac-b745-d86825df1c61.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.558 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9727fec1-d9ce-4c2e-87ac-fbbdcc6f76d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.560 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.560 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-34b2f9d4-63fb-44ac-b745-d86825df1c61
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/34b2f9d4-63fb-44ac-b745-d86825df1c61.pid.haproxy
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 34b2f9d4-63fb-44ac-b745-d86825df1c61
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:49:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:15.561 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'env', 'PROCESS_TAG=haproxy-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/34b2f9d4-63fb-44ac-b745-d86825df1c61.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:49:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.812 253542 DEBUG nova.compute.manager [req-7949981e-aa9d-4d62-bce2-8ac23f7a87f3 req-d873ba04-1185-44c9-bad1-d3ae0f75a979 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.812 253542 DEBUG oslo_concurrency.lockutils [req-7949981e-aa9d-4d62-bce2-8ac23f7a87f3 req-d873ba04-1185-44c9-bad1-d3ae0f75a979 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.817 253542 DEBUG oslo_concurrency.lockutils [req-7949981e-aa9d-4d62-bce2-8ac23f7a87f3 req-d873ba04-1185-44c9-bad1-d3ae0f75a979 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.818 253542 DEBUG oslo_concurrency.lockutils [req-7949981e-aa9d-4d62-bce2-8ac23f7a87f3 req-d873ba04-1185-44c9-bad1-d3ae0f75a979 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.818 253542 DEBUG nova.compute.manager [req-7949981e-aa9d-4d62-bce2-8ac23f7a87f3 req-d873ba04-1185-44c9-bad1-d3ae0f75a979 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Processing event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.950 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.953 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060555.9491007, ec34a574-9c78-43d8-a65a-aa4052a5d452 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.954 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] VM Started (Lifecycle Event)
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.957 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.963 253542 INFO nova.virt.libvirt.driver [-] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Instance spawned successfully.
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.963 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.975 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:49:15 compute-0 sshd-session[361749]: Received disconnect from 45.202.211.6 port 52368:11: Bye Bye [preauth]
Nov 25 08:49:15 compute-0 sshd-session[361749]: Disconnected from authenticating user root 45.202.211.6 port 52368 [preauth]
Nov 25 08:49:15 compute-0 podman[361906]: 2025-11-25 08:49:15.986872906 +0000 UTC m=+0.076825929 container create 50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.987 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.996 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.997 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.998 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:49:15 compute-0 nova_compute[253538]: 2025-11-25 08:49:15.999 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:49:16 compute-0 nova_compute[253538]: 2025-11-25 08:49:16.000 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:49:16 compute-0 nova_compute[253538]: 2025-11-25 08:49:16.001 253542 DEBUG nova.virt.libvirt.driver [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:49:16 compute-0 nova_compute[253538]: 2025-11-25 08:49:16.007 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:49:16 compute-0 nova_compute[253538]: 2025-11-25 08:49:16.008 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060555.9513018, ec34a574-9c78-43d8-a65a-aa4052a5d452 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:49:16 compute-0 nova_compute[253538]: 2025-11-25 08:49:16.008 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] VM Paused (Lifecycle Event)
Nov 25 08:49:16 compute-0 podman[361906]: 2025-11-25 08:49:15.935385386 +0000 UTC m=+0.025338239 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:49:16 compute-0 nova_compute[253538]: 2025-11-25 08:49:16.034 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:49:16 compute-0 nova_compute[253538]: 2025-11-25 08:49:16.039 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060555.9572742, ec34a574-9c78-43d8-a65a-aa4052a5d452 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:49:16 compute-0 nova_compute[253538]: 2025-11-25 08:49:16.039 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] VM Resumed (Lifecycle Event)
Nov 25 08:49:16 compute-0 systemd[1]: Started libpod-conmon-50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241.scope.
Nov 25 08:49:16 compute-0 nova_compute[253538]: 2025-11-25 08:49:16.064 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:49:16 compute-0 nova_compute[253538]: 2025-11-25 08:49:16.067 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:49:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:49:16 compute-0 nova_compute[253538]: 2025-11-25 08:49:16.075 253542 INFO nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Took 8.41 seconds to spawn the instance on the hypervisor.
Nov 25 08:49:16 compute-0 nova_compute[253538]: 2025-11-25 08:49:16.075 253542 DEBUG nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2d709033092afbe32e7301de9ce77134dfb1c21bcf75d55f1f66a87f8dc2459/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:49:16 compute-0 nova_compute[253538]: 2025-11-25 08:49:16.085 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:49:16 compute-0 podman[361906]: 2025-11-25 08:49:16.093647585 +0000 UTC m=+0.183600458 container init 50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 08:49:16 compute-0 podman[361906]: 2025-11-25 08:49:16.099103891 +0000 UTC m=+0.189056754 container start 50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:49:16 compute-0 neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61[361922]: [NOTICE]   (361926) : New worker (361928) forked
Nov 25 08:49:16 compute-0 neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61[361922]: [NOTICE]   (361926) : Loading success.
Nov 25 08:49:16 compute-0 nova_compute[253538]: 2025-11-25 08:49:16.131 253542 INFO nova.compute.manager [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Took 9.25 seconds to build instance.
Nov 25 08:49:16 compute-0 nova_compute[253538]: 2025-11-25 08:49:16.142 253542 DEBUG oslo_concurrency.lockutils [None req-ada1986d-f723-4490-9503-77afa8d8cdd4 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:17 compute-0 ceph-mon[75015]: pgmap v2038: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 25 08:49:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:49:17 compute-0 nova_compute[253538]: 2025-11-25 08:49:17.541 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 08:49:17 compute-0 nova_compute[253538]: 2025-11-25 08:49:17.905 253542 DEBUG nova.compute.manager [req-fa9d9c0a-766d-44f9-9573-0dc12327814a req-af2572c8-834d-40e6-a37b-38808bab7e66 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:17 compute-0 nova_compute[253538]: 2025-11-25 08:49:17.905 253542 DEBUG oslo_concurrency.lockutils [req-fa9d9c0a-766d-44f9-9573-0dc12327814a req-af2572c8-834d-40e6-a37b-38808bab7e66 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:17 compute-0 nova_compute[253538]: 2025-11-25 08:49:17.905 253542 DEBUG oslo_concurrency.lockutils [req-fa9d9c0a-766d-44f9-9573-0dc12327814a req-af2572c8-834d-40e6-a37b-38808bab7e66 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:17 compute-0 nova_compute[253538]: 2025-11-25 08:49:17.905 253542 DEBUG oslo_concurrency.lockutils [req-fa9d9c0a-766d-44f9-9573-0dc12327814a req-af2572c8-834d-40e6-a37b-38808bab7e66 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:17 compute-0 nova_compute[253538]: 2025-11-25 08:49:17.906 253542 DEBUG nova.compute.manager [req-fa9d9c0a-766d-44f9-9573-0dc12327814a req-af2572c8-834d-40e6-a37b-38808bab7e66 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] No waiting events found dispatching network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:49:17 compute-0 nova_compute[253538]: 2025-11-25 08:49:17.906 253542 WARNING nova.compute.manager [req-fa9d9c0a-766d-44f9-9573-0dc12327814a req-af2572c8-834d-40e6-a37b-38808bab7e66 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received unexpected event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for instance with vm_state active and task_state None.
Nov 25 08:49:18 compute-0 nova_compute[253538]: 2025-11-25 08:49:18.691 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:19 compute-0 ceph-mon[75015]: pgmap v2039: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 08:49:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 749 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Nov 25 08:49:21 compute-0 ceph-mon[75015]: pgmap v2040: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 749 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Nov 25 08:49:21 compute-0 nova_compute[253538]: 2025-11-25 08:49:21.520 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 982 KiB/s rd, 1.2 MiB/s wr, 56 op/s
Nov 25 08:49:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:49:22 compute-0 nova_compute[253538]: 2025-11-25 08:49:22.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:23 compute-0 ceph-mon[75015]: pgmap v2041: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 982 KiB/s rd, 1.2 MiB/s wr, 56 op/s
Nov 25 08:49:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:49:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:49:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:49:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:49:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:49:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:49:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1003 KiB/s wr, 75 op/s
Nov 25 08:49:23 compute-0 nova_compute[253538]: 2025-11-25 08:49:23.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:24 compute-0 nova_compute[253538]: 2025-11-25 08:49:24.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:49:25 compute-0 ceph-mon[75015]: pgmap v2042: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1003 KiB/s wr, 75 op/s
Nov 25 08:49:25 compute-0 nova_compute[253538]: 2025-11-25 08:49:25.471 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 08:49:26 compute-0 ceph-mon[75015]: pgmap v2043: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 08:49:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:49:27 compute-0 nova_compute[253538]: 2025-11-25 08:49:27.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Nov 25 08:49:28 compute-0 nova_compute[253538]: 2025-11-25 08:49:28.695 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:28 compute-0 ceph-mon[75015]: pgmap v2044: 321 pgs: 321 active+clean; 213 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Nov 25 08:49:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:49:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3817985458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:49:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:49:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3817985458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:49:29 compute-0 ovn_controller[152859]: 2025-11-25T08:49:29Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:98:59 10.100.0.23
Nov 25 08:49:29 compute-0 ovn_controller[152859]: 2025-11-25T08:49:29Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:98:59 10.100.0.23
Nov 25 08:49:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 215 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 346 KiB/s wr, 75 op/s
Nov 25 08:49:29 compute-0 podman[361939]: 2025-11-25 08:49:29.836669354 +0000 UTC m=+0.078173384 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 08:49:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3817985458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:49:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3817985458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:49:30 compute-0 nova_compute[253538]: 2025-11-25 08:49:30.474 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:30 compute-0 nova_compute[253538]: 2025-11-25 08:49:30.474 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:30 compute-0 nova_compute[253538]: 2025-11-25 08:49:30.496 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:49:30 compute-0 nova_compute[253538]: 2025-11-25 08:49:30.564 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:30 compute-0 nova_compute[253538]: 2025-11-25 08:49:30.565 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:30 compute-0 nova_compute[253538]: 2025-11-25 08:49:30.573 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:49:30 compute-0 nova_compute[253538]: 2025-11-25 08:49:30.573 253542 INFO nova.compute.claims [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:49:30 compute-0 nova_compute[253538]: 2025-11-25 08:49:30.701 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:49:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1996448545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.180 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.188 253542 DEBUG nova.compute.provider_tree [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.206 253542 DEBUG nova.scheduler.client.report [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:49:31 compute-0 ceph-mon[75015]: pgmap v2045: 321 pgs: 321 active+clean; 215 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 346 KiB/s wr, 75 op/s
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.229 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.230 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.275 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.276 253542 DEBUG nova.network.neutron [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.301 253542 INFO nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.322 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.422 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.423 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.424 253542 INFO nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Creating image(s)
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.452 253542 DEBUG nova.storage.rbd_utils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.483 253542 DEBUG nova.storage.rbd_utils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.517 253542 DEBUG nova.storage.rbd_utils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.523 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.563 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.607 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.609 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.610 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.611 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.640 253542 DEBUG nova.storage.rbd_utils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.645 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 233 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 66 op/s
Nov 25 08:49:31 compute-0 nova_compute[253538]: 2025-11-25 08:49:31.734 253542 DEBUG nova.policy [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '009378dc36154271ba5b4590ce67ddde', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:49:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:49:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1996448545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:49:32 compute-0 ceph-mon[75015]: pgmap v2046: 321 pgs: 321 active+clean; 233 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 66 op/s
Nov 25 08:49:32 compute-0 nova_compute[253538]: 2025-11-25 08:49:32.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:32 compute-0 nova_compute[253538]: 2025-11-25 08:49:32.554 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.909s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:32 compute-0 nova_compute[253538]: 2025-11-25 08:49:32.609 253542 DEBUG nova.storage.rbd_utils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] resizing rbd image a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:49:32 compute-0 nova_compute[253538]: 2025-11-25 08:49:32.728 253542 DEBUG nova.objects.instance [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'migration_context' on Instance uuid a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:49:32 compute-0 nova_compute[253538]: 2025-11-25 08:49:32.751 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:49:32 compute-0 nova_compute[253538]: 2025-11-25 08:49:32.751 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Ensure instance console log exists: /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:49:32 compute-0 nova_compute[253538]: 2025-11-25 08:49:32.752 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:32 compute-0 nova_compute[253538]: 2025-11-25 08:49:32.752 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:32 compute-0 nova_compute[253538]: 2025-11-25 08:49:32.752 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:32 compute-0 nova_compute[253538]: 2025-11-25 08:49:32.778 253542 DEBUG nova.network.neutron [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Successfully created port: 67b278e0-034e-4bb1-8cba-035ab2a72de3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:49:32 compute-0 podman[362148]: 2025-11-25 08:49:32.804908677 +0000 UTC m=+0.057425709 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:49:33 compute-0 sudo[362168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:49:33 compute-0 sudo[362168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:33 compute-0 sudo[362168]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:33 compute-0 sudo[362193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:49:33 compute-0 sudo[362193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:33 compute-0 sudo[362193]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:33 compute-0 sudo[362218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:49:33 compute-0 sudo[362218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:33 compute-0 sudo[362218]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:33 compute-0 nova_compute[253538]: 2025-11-25 08:49:33.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:49:33 compute-0 nova_compute[253538]: 2025-11-25 08:49:33.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:49:33 compute-0 nova_compute[253538]: 2025-11-25 08:49:33.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:49:33 compute-0 nova_compute[253538]: 2025-11-25 08:49:33.572 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 08:49:33 compute-0 sudo[362243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:49:33 compute-0 sudo[362243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 264 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.9 MiB/s wr, 88 op/s
Nov 25 08:49:33 compute-0 nova_compute[253538]: 2025-11-25 08:49:33.697 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:33 compute-0 nova_compute[253538]: 2025-11-25 08:49:33.827 253542 DEBUG nova.network.neutron [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Successfully updated port: 67b278e0-034e-4bb1-8cba-035ab2a72de3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:49:33 compute-0 nova_compute[253538]: 2025-11-25 08:49:33.833 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:49:33 compute-0 nova_compute[253538]: 2025-11-25 08:49:33.833 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:49:33 compute-0 nova_compute[253538]: 2025-11-25 08:49:33.834 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:49:33 compute-0 nova_compute[253538]: 2025-11-25 08:49:33.834 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:49:33 compute-0 nova_compute[253538]: 2025-11-25 08:49:33.839 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:49:33 compute-0 nova_compute[253538]: 2025-11-25 08:49:33.839 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquired lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:49:33 compute-0 nova_compute[253538]: 2025-11-25 08:49:33.839 253542 DEBUG nova.network.neutron [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:49:33 compute-0 nova_compute[253538]: 2025-11-25 08:49:33.955 253542 DEBUG nova.compute.manager [req-549cbfdb-e01d-44bb-bccb-93d992cdc4fe req-98761f87-8987-40ae-b14e-fae4aa4aa41b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-changed-67b278e0-034e-4bb1-8cba-035ab2a72de3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:33 compute-0 nova_compute[253538]: 2025-11-25 08:49:33.955 253542 DEBUG nova.compute.manager [req-549cbfdb-e01d-44bb-bccb-93d992cdc4fe req-98761f87-8987-40ae-b14e-fae4aa4aa41b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Refreshing instance network info cache due to event network-changed-67b278e0-034e-4bb1-8cba-035ab2a72de3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:49:33 compute-0 nova_compute[253538]: 2025-11-25 08:49:33.956 253542 DEBUG oslo_concurrency.lockutils [req-549cbfdb-e01d-44bb-bccb-93d992cdc4fe req-98761f87-8987-40ae-b14e-fae4aa4aa41b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:49:34 compute-0 nova_compute[253538]: 2025-11-25 08:49:34.026 253542 DEBUG nova.network.neutron [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:49:34 compute-0 sudo[362243]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:34 compute-0 sudo[362299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:49:34 compute-0 sudo[362299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:34 compute-0 sudo[362299]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:34 compute-0 sudo[362324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:49:34 compute-0 sudo[362324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:34 compute-0 sudo[362324]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:34 compute-0 sudo[362349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:49:34 compute-0 sudo[362349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:34 compute-0 sudo[362349]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:34 compute-0 sudo[362374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 25 08:49:34 compute-0 sudo[362374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:34 compute-0 sudo[362374]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:49:34 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:49:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:49:34 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:49:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:49:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:49:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:49:34 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:49:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:49:34 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:49:34 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 1b01b991-9cde-41c8-9d0c-09e8311a2d01 does not exist
Nov 25 08:49:34 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c21a5308-e4ff-415d-acb8-76a9fb62479a does not exist
Nov 25 08:49:34 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 0c792828-9384-4086-852f-71f111eb13f1 does not exist
Nov 25 08:49:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:49:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:49:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:49:34 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:49:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:49:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:49:34 compute-0 ceph-mon[75015]: pgmap v2047: 321 pgs: 321 active+clean; 264 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.9 MiB/s wr, 88 op/s
Nov 25 08:49:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:49:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:49:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:49:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:49:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:49:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:49:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:49:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:49:34 compute-0 sudo[362417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:49:34 compute-0 sudo[362417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:34 compute-0 sudo[362417]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:34 compute-0 sudo[362442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:49:34 compute-0 sudo[362442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:34 compute-0 sudo[362442]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:34 compute-0 sudo[362467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:49:34 compute-0 sudo[362467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:34 compute-0 sudo[362467]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:34 compute-0 sudo[362492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:49:34 compute-0 sudo[362492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:34 compute-0 nova_compute[253538]: 2025-11-25 08:49:34.953 253542 DEBUG nova.network.neutron [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updating instance_info_cache with network_info: [{"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:49:34 compute-0 nova_compute[253538]: 2025-11-25 08:49:34.976 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Releasing lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:49:34 compute-0 nova_compute[253538]: 2025-11-25 08:49:34.977 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Instance network_info: |[{"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:49:34 compute-0 nova_compute[253538]: 2025-11-25 08:49:34.977 253542 DEBUG oslo_concurrency.lockutils [req-549cbfdb-e01d-44bb-bccb-93d992cdc4fe req-98761f87-8987-40ae-b14e-fae4aa4aa41b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:49:34 compute-0 nova_compute[253538]: 2025-11-25 08:49:34.978 253542 DEBUG nova.network.neutron [req-549cbfdb-e01d-44bb-bccb-93d992cdc4fe req-98761f87-8987-40ae-b14e-fae4aa4aa41b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Refreshing network info cache for port 67b278e0-034e-4bb1-8cba-035ab2a72de3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:49:34 compute-0 nova_compute[253538]: 2025-11-25 08:49:34.980 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Start _get_guest_xml network_info=[{"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:49:34 compute-0 nova_compute[253538]: 2025-11-25 08:49:34.988 253542 WARNING nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:49:34 compute-0 nova_compute[253538]: 2025-11-25 08:49:34.997 253542 DEBUG nova.virt.libvirt.host [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:49:34 compute-0 nova_compute[253538]: 2025-11-25 08:49:34.998 253542 DEBUG nova.virt.libvirt.host [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.001 253542 DEBUG nova.virt.libvirt.host [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.002 253542 DEBUG nova.virt.libvirt.host [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.002 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.003 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.003 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.004 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.004 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.004 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.004 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.005 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.005 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.005 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.006 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.006 253542 DEBUG nova.virt.hardware [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.010 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:35 compute-0 podman[362578]: 2025-11-25 08:49:35.332028005 +0000 UTC m=+0.067944800 container create b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:49:35 compute-0 systemd[1]: Started libpod-conmon-b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51.scope.
Nov 25 08:49:35 compute-0 podman[362578]: 2025-11-25 08:49:35.301782925 +0000 UTC m=+0.037699740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:49:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:49:35 compute-0 podman[362578]: 2025-11-25 08:49:35.432112236 +0000 UTC m=+0.168029031 container init b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 08:49:35 compute-0 podman[362578]: 2025-11-25 08:49:35.439117503 +0000 UTC m=+0.175034298 container start b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:49:35 compute-0 optimistic_bell[362594]: 167 167
Nov 25 08:49:35 compute-0 systemd[1]: libpod-b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51.scope: Deactivated successfully.
Nov 25 08:49:35 compute-0 podman[362578]: 2025-11-25 08:49:35.450783785 +0000 UTC m=+0.186700660 container attach b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:49:35 compute-0 podman[362578]: 2025-11-25 08:49:35.451392662 +0000 UTC m=+0.187309487 container died b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:49:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:49:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/191068514' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.491 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.528 253542 DEBUG nova.storage.rbd_utils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.534 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc5264ad69715d30f95aea435c50859cf7292d8da1ec24cf8c60ab718585ed33-merged.mount: Deactivated successfully.
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.589 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updating instance_info_cache with network_info: [{"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.608 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.608 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.609 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:49:35 compute-0 nova_compute[253538]: 2025-11-25 08:49:35.609 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:49:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 285 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 3.4 MiB/s wr, 66 op/s
Nov 25 08:49:35 compute-0 podman[362578]: 2025-11-25 08:49:35.699389144 +0000 UTC m=+0.435305969 container remove b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 08:49:35 compute-0 systemd[1]: libpod-conmon-b0a378b81fbe23d4a0905facc4fb7b40750821f30598ca47416b92da4fe35d51.scope: Deactivated successfully.
Nov 25 08:49:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/191068514' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:49:35 compute-0 podman[362660]: 2025-11-25 08:49:35.940064239 +0000 UTC m=+0.066392289 container create 7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:49:35 compute-0 podman[362660]: 2025-11-25 08:49:35.895385963 +0000 UTC m=+0.021714003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:49:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:49:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2438672530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.019 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.023 253542 DEBUG nova.virt.libvirt.vif [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:49:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2097673480',display_name='tempest-TestNetworkAdvancedServerOps-server-2097673480',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2097673480',id=108,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8CJ+LAazH0nh/fs57lUBduvcorPeFDtVdwZY7U/GDNTdvdOvwS2k2O3rjC5dikEP5slLAsdzOE76Bw4/4L12X0ArhaClfawfYB19breOk8NW05uifXWs22TjYOgG1XfA==',key_name='tempest-TestNetworkAdvancedServerOps-659140204',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-140u7iq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:49:31Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.024 253542 DEBUG nova.network.os_vif_util [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.025 253542 DEBUG nova.network.os_vif_util [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:f1:de,bridge_name='br-int',has_traffic_filtering=True,id=67b278e0-034e-4bb1-8cba-035ab2a72de3,network=Network(6cdb9ec3-6144-4767-9719-ddbf0a68bf7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67b278e0-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.028 253542 DEBUG nova.objects.instance [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:49:36 compute-0 systemd[1]: Started libpod-conmon-7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013.scope.
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.043 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:49:36 compute-0 nova_compute[253538]:   <uuid>a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5</uuid>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   <name>instance-0000006c</name>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-2097673480</nova:name>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:49:34</nova:creationTime>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:49:36 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:49:36 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:49:36 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:49:36 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:49:36 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:49:36 compute-0 nova_compute[253538]:         <nova:user uuid="009378dc36154271ba5b4590ce67ddde">tempest-TestNetworkAdvancedServerOps-1132090577-project-member</nova:user>
Nov 25 08:49:36 compute-0 nova_compute[253538]:         <nova:project uuid="7dcaf3c96bfc4db3a41291debd385c67">tempest-TestNetworkAdvancedServerOps-1132090577</nova:project>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:49:36 compute-0 nova_compute[253538]:         <nova:port uuid="67b278e0-034e-4bb1-8cba-035ab2a72de3">
Nov 25 08:49:36 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <system>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <entry name="serial">a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5</entry>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <entry name="uuid">a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5</entry>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     </system>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   <os>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   </os>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   <features>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   </features>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk">
Nov 25 08:49:36 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       </source>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:49:36 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk.config">
Nov 25 08:49:36 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       </source>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:49:36 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:d5:f1:de"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <target dev="tap67b278e0-03"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5/console.log" append="off"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <video>
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     </video>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:49:36 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:49:36 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:49:36 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:49:36 compute-0 nova_compute[253538]: </domain>
Nov 25 08:49:36 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.043 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Preparing to wait for external event network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.044 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.044 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.045 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.046 253542 DEBUG nova.virt.libvirt.vif [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:49:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2097673480',display_name='tempest-TestNetworkAdvancedServerOps-server-2097673480',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2097673480',id=108,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8CJ+LAazH0nh/fs57lUBduvcorPeFDtVdwZY7U/GDNTdvdOvwS2k2O3rjC5dikEP5slLAsdzOE76Bw4/4L12X0ArhaClfawfYB19breOk8NW05uifXWs22TjYOgG1XfA==',key_name='tempest-TestNetworkAdvancedServerOps-659140204',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-140u7iq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:49:31Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.047 253542 DEBUG nova.network.os_vif_util [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.047 253542 DEBUG nova.network.os_vif_util [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:f1:de,bridge_name='br-int',has_traffic_filtering=True,id=67b278e0-034e-4bb1-8cba-035ab2a72de3,network=Network(6cdb9ec3-6144-4767-9719-ddbf0a68bf7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67b278e0-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.048 253542 DEBUG os_vif [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:f1:de,bridge_name='br-int',has_traffic_filtering=True,id=67b278e0-034e-4bb1-8cba-035ab2a72de3,network=Network(6cdb9ec3-6144-4767-9719-ddbf0a68bf7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67b278e0-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.049 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.050 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.051 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.059 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.059 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67b278e0-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.060 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap67b278e0-03, col_values=(('external_ids', {'iface-id': '67b278e0-034e-4bb1-8cba-035ab2a72de3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:f1:de', 'vm-uuid': 'a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.063 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:36 compute-0 NetworkManager[48915]: <info>  [1764060576.0645] manager: (tap67b278e0-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.067 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:49:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.071 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181c67ffbc920c269885be2cfb351c29e96a406420cab0835f8d406c343e5fd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181c67ffbc920c269885be2cfb351c29e96a406420cab0835f8d406c343e5fd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.073 253542 INFO os_vif [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:f1:de,bridge_name='br-int',has_traffic_filtering=True,id=67b278e0-034e-4bb1-8cba-035ab2a72de3,network=Network(6cdb9ec3-6144-4767-9719-ddbf0a68bf7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67b278e0-03')
Nov 25 08:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181c67ffbc920c269885be2cfb351c29e96a406420cab0835f8d406c343e5fd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181c67ffbc920c269885be2cfb351c29e96a406420cab0835f8d406c343e5fd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181c67ffbc920c269885be2cfb351c29e96a406420cab0835f8d406c343e5fd3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:49:36 compute-0 podman[362660]: 2025-11-25 08:49:36.184978209 +0000 UTC m=+0.311306289 container init 7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:49:36 compute-0 podman[362660]: 2025-11-25 08:49:36.197702559 +0000 UTC m=+0.324030609 container start 7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.215 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.216 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.216 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No VIF found with MAC fa:16:3e:d5:f1:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.216 253542 INFO nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Using config drive
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.240 253542 DEBUG nova.storage.rbd_utils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:49:36 compute-0 podman[362660]: 2025-11-25 08:49:36.251762387 +0000 UTC m=+0.378090447 container attach 7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.628 253542 DEBUG nova.network.neutron [req-549cbfdb-e01d-44bb-bccb-93d992cdc4fe req-98761f87-8987-40ae-b14e-fae4aa4aa41b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updated VIF entry in instance network info cache for port 67b278e0-034e-4bb1-8cba-035ab2a72de3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.629 253542 DEBUG nova.network.neutron [req-549cbfdb-e01d-44bb-bccb-93d992cdc4fe req-98761f87-8987-40ae-b14e-fae4aa4aa41b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updating instance_info_cache with network_info: [{"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.677 253542 DEBUG oslo_concurrency.lockutils [req-549cbfdb-e01d-44bb-bccb-93d992cdc4fe req-98761f87-8987-40ae-b14e-fae4aa4aa41b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.820 253542 INFO nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Creating config drive at /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5/disk.config
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.829 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqktfooxn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:36 compute-0 ceph-mon[75015]: pgmap v2048: 321 pgs: 321 active+clean; 285 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 3.4 MiB/s wr, 66 op/s
Nov 25 08:49:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2438672530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:49:36 compute-0 nova_compute[253538]: 2025-11-25 08:49:36.983 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqktfooxn" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:37 compute-0 nova_compute[253538]: 2025-11-25 08:49:37.009 253542 DEBUG nova.storage.rbd_utils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:49:37 compute-0 nova_compute[253538]: 2025-11-25 08:49:37.013 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5/disk.config a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:37 compute-0 vigorous_curie[362678]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:49:37 compute-0 vigorous_curie[362678]: --> relative data size: 1.0
Nov 25 08:49:37 compute-0 vigorous_curie[362678]: --> All data devices are unavailable
Nov 25 08:49:37 compute-0 nova_compute[253538]: 2025-11-25 08:49:37.228 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:37 compute-0 nova_compute[253538]: 2025-11-25 08:49:37.230 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:37 compute-0 nova_compute[253538]: 2025-11-25 08:49:37.231 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:37 compute-0 nova_compute[253538]: 2025-11-25 08:49:37.232 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:37 compute-0 nova_compute[253538]: 2025-11-25 08:49:37.232 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:37 compute-0 nova_compute[253538]: 2025-11-25 08:49:37.235 253542 INFO nova.compute.manager [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Terminating instance
Nov 25 08:49:37 compute-0 nova_compute[253538]: 2025-11-25 08:49:37.237 253542 DEBUG nova.compute.manager [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:49:37 compute-0 systemd[1]: libpod-7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013.scope: Deactivated successfully.
Nov 25 08:49:37 compute-0 podman[362765]: 2025-11-25 08:49:37.2812914 +0000 UTC m=+0.025089843 container died 7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:49:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.502956) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060577502997, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 629, "num_deletes": 260, "total_data_size": 667157, "memory_usage": 680312, "flush_reason": "Manual Compaction"}
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060577534971, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 660917, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42713, "largest_seqno": 43341, "table_properties": {"data_size": 657542, "index_size": 1219, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7747, "raw_average_key_size": 18, "raw_value_size": 650719, "raw_average_value_size": 1579, "num_data_blocks": 54, "num_entries": 412, "num_filter_entries": 412, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060535, "oldest_key_time": 1764060535, "file_creation_time": 1764060577, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 32073 microseconds, and 3420 cpu microseconds.
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:49:37 compute-0 nova_compute[253538]: 2025-11-25 08:49:37.549 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.535026) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 660917 bytes OK
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.535049) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.560611) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.560649) EVENT_LOG_v1 {"time_micros": 1764060577560641, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.560671) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 663718, prev total WAL file size 663718, number of live WAL files 2.
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.561250) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353130' seq:72057594037927935, type:22 .. '6C6F676D0031373636' seq:0, type:0; will stop at (end)
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(645KB)], [98(7210KB)]
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060577561275, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 8044834, "oldest_snapshot_seqno": -1}
Nov 25 08:49:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-181c67ffbc920c269885be2cfb351c29e96a406420cab0835f8d406c343e5fd3-merged.mount: Deactivated successfully.
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6290 keys, 7911635 bytes, temperature: kUnknown
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060577636292, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 7911635, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7871020, "index_size": 23801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 164809, "raw_average_key_size": 26, "raw_value_size": 7759444, "raw_average_value_size": 1233, "num_data_blocks": 930, "num_entries": 6290, "num_filter_entries": 6290, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060577, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.636565) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 7911635 bytes
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.653281) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.1 rd, 105.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 7.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(24.1) write-amplify(12.0) OK, records in: 6822, records dropped: 532 output_compression: NoCompression
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.653349) EVENT_LOG_v1 {"time_micros": 1764060577653331, "job": 58, "event": "compaction_finished", "compaction_time_micros": 75124, "compaction_time_cpu_micros": 19393, "output_level": 6, "num_output_files": 1, "total_output_size": 7911635, "num_input_records": 6822, "num_output_records": 6290, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060577653612, "job": 58, "event": "table_file_deletion", "file_number": 100}
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060577654793, "job": 58, "event": "table_file_deletion", "file_number": 98}
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.561159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.654865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.654869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.654871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.654872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:49:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:49:37.654874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:49:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 293 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Nov 25 08:49:37 compute-0 podman[362765]: 2025-11-25 08:49:37.733560261 +0000 UTC m=+0.477358684 container remove 7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_curie, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 08:49:37 compute-0 systemd[1]: libpod-conmon-7cb80059c3f9497bfb9274fda802542c6b4e211568f74aea4278f73d739be013.scope: Deactivated successfully.
Nov 25 08:49:37 compute-0 sudo[362492]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:37 compute-0 kernel: tapbc66e4f2-ce (unregistering): left promiscuous mode
Nov 25 08:49:37 compute-0 NetworkManager[48915]: <info>  [1764060577.7896] device (tapbc66e4f2-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:49:37 compute-0 ovn_controller[152859]: 2025-11-25T08:49:37Z|01044|binding|INFO|Releasing lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a from this chassis (sb_readonly=0)
Nov 25 08:49:37 compute-0 ovn_controller[152859]: 2025-11-25T08:49:37Z|01045|binding|INFO|Setting lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a down in Southbound
Nov 25 08:49:37 compute-0 ovn_controller[152859]: 2025-11-25T08:49:37Z|01046|binding|INFO|Removing iface tapbc66e4f2-ce ovn-installed in OVS
Nov 25 08:49:37 compute-0 nova_compute[253538]: 2025-11-25 08:49:37.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:37.857 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:98:59 10.100.0.23'], port_security=['fa:16:3e:d3:98:59 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'ec34a574-9c78-43d8-a65a-aa4052a5d452', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f6e83494-bf67-403f-bbb9-2d14a77705aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99275f51-5c9e-402a-8f14-8705646486ea, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:49:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:37.858 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc66e4f2-ce2b-49ca-ba89-a86607d56e5a in datapath 34b2f9d4-63fb-44ac-b745-d86825df1c61 unbound from our chassis
Nov 25 08:49:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:37.859 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34b2f9d4-63fb-44ac-b745-d86825df1c61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:49:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:37.860 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa69bcc2-61d7-434f-9023-e741ebc5b87a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:37.860 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61 namespace which is not needed anymore
Nov 25 08:49:37 compute-0 nova_compute[253538]: 2025-11-25 08:49:37.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:37 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Nov 25 08:49:37 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006b.scope: Consumed 13.484s CPU time.
Nov 25 08:49:37 compute-0 systemd-machined[215790]: Machine qemu-133-instance-0000006b terminated.
Nov 25 08:49:37 compute-0 sudo[362784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:49:37 compute-0 sudo[362784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:37 compute-0 sudo[362784]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:37 compute-0 sudo[362833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:49:37 compute-0 sudo[362833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:37 compute-0 sudo[362833]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:37 compute-0 nova_compute[253538]: 2025-11-25 08:49:37.966 253542 DEBUG oslo_concurrency.processutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5/disk.config a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.952s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:37 compute-0 nova_compute[253538]: 2025-11-25 08:49:37.966 253542 INFO nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Deleting local config drive /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5/disk.config because it was imported into RBD.
Nov 25 08:49:38 compute-0 sudo[362890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:49:38 compute-0 sudo[362890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:38 compute-0 sudo[362890]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:38 compute-0 kernel: tap67b278e0-03: entered promiscuous mode
Nov 25 08:49:38 compute-0 NetworkManager[48915]: <info>  [1764060578.0242] manager: (tap67b278e0-03): new Tun device (/org/freedesktop/NetworkManager/Devices/430)
Nov 25 08:49:38 compute-0 systemd-udevd[362818]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:49:38 compute-0 NetworkManager[48915]: <info>  [1764060578.0352] device (tap67b278e0-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:49:38 compute-0 NetworkManager[48915]: <info>  [1764060578.0360] device (tap67b278e0-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:49:38 compute-0 ovn_controller[152859]: 2025-11-25T08:49:38Z|01047|binding|INFO|Claiming lport 67b278e0-034e-4bb1-8cba-035ab2a72de3 for this chassis.
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.038 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:38 compute-0 ovn_controller[152859]: 2025-11-25T08:49:38Z|01048|binding|INFO|67b278e0-034e-4bb1-8cba-035ab2a72de3: Claiming fa:16:3e:d5:f1:de 10.100.0.12
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.043 253542 DEBUG nova.compute.manager [req-23ab3e89-9735-4ce5-9a32-2e5d445d6209 req-ee137a08-2074-4c78-af8e-6fdd88af6e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-unplugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.043 253542 DEBUG oslo_concurrency.lockutils [req-23ab3e89-9735-4ce5-9a32-2e5d445d6209 req-ee137a08-2074-4c78-af8e-6fdd88af6e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.044 253542 DEBUG oslo_concurrency.lockutils [req-23ab3e89-9735-4ce5-9a32-2e5d445d6209 req-ee137a08-2074-4c78-af8e-6fdd88af6e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.044 253542 DEBUG oslo_concurrency.lockutils [req-23ab3e89-9735-4ce5-9a32-2e5d445d6209 req-ee137a08-2074-4c78-af8e-6fdd88af6e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.044 253542 DEBUG nova.compute.manager [req-23ab3e89-9735-4ce5-9a32-2e5d445d6209 req-ee137a08-2074-4c78-af8e-6fdd88af6e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] No waiting events found dispatching network-vif-unplugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.044 253542 DEBUG nova.compute.manager [req-23ab3e89-9735-4ce5-9a32-2e5d445d6209 req-ee137a08-2074-4c78-af8e-6fdd88af6e89 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-unplugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:49:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.045 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:f1:de 10.100.0.12'], port_security=['fa:16:3e:d5:f1:de 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76a697a7-7255-4dde-bfd3-a4f7c520b32e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddbaf93a-8581-4a98-b32a-d829e79ecbfd, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=67b278e0-034e-4bb1-8cba-035ab2a72de3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:49:38 compute-0 ovn_controller[152859]: 2025-11-25T08:49:38Z|01049|binding|INFO|Setting lport 67b278e0-034e-4bb1-8cba-035ab2a72de3 ovn-installed in OVS
Nov 25 08:49:38 compute-0 ovn_controller[152859]: 2025-11-25T08:49:38Z|01050|binding|INFO|Setting lport 67b278e0-034e-4bb1-8cba-035ab2a72de3 up in Southbound
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.059 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.065 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:38 compute-0 podman[362812]: 2025-11-25 08:49:38.066322613 +0000 UTC m=+0.173715783 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:49:38 compute-0 neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61[361922]: [NOTICE]   (361926) : haproxy version is 2.8.14-c23fe91
Nov 25 08:49:38 compute-0 neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61[361922]: [NOTICE]   (361926) : path to executable is /usr/sbin/haproxy
Nov 25 08:49:38 compute-0 neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61[361922]: [WARNING]  (361926) : Exiting Master process...
Nov 25 08:49:38 compute-0 neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61[361922]: [ALERT]    (361926) : Current worker (361928) exited with code 143 (Terminated)
Nov 25 08:49:38 compute-0 neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61[361922]: [WARNING]  (361926) : All workers exited. Exiting... (0)
Nov 25 08:49:38 compute-0 NetworkManager[48915]: <info>  [1764060578.0685] manager: (tapbc66e4f2-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/431)
Nov 25 08:49:38 compute-0 systemd[1]: libpod-50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241.scope: Deactivated successfully.
Nov 25 08:49:38 compute-0 kernel: tapbc66e4f2-ce: entered promiscuous mode
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:38 compute-0 ovn_controller[152859]: 2025-11-25T08:49:38Z|01051|binding|INFO|Claiming lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for this chassis.
Nov 25 08:49:38 compute-0 ovn_controller[152859]: 2025-11-25T08:49:38Z|01052|binding|INFO|bc66e4f2-ce2b-49ca-ba89-a86607d56e5a: Claiming fa:16:3e:d3:98:59 10.100.0.23
Nov 25 08:49:38 compute-0 kernel: tapbc66e4f2-ce (unregistering): left promiscuous mode
Nov 25 08:49:38 compute-0 sudo[362929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:49:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.077 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:98:59 10.100.0.23'], port_security=['fa:16:3e:d3:98:59 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'ec34a574-9c78-43d8-a65a-aa4052a5d452', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f6e83494-bf67-403f-bbb9-2d14a77705aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99275f51-5c9e-402a-8f14-8705646486ea, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:49:38 compute-0 podman[362874]: 2025-11-25 08:49:38.079884337 +0000 UTC m=+0.132049558 container died 50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 08:49:38 compute-0 sudo[362929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:38 compute-0 systemd-machined[215790]: New machine qemu-134-instance-0000006c.
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.099 253542 INFO nova.virt.libvirt.driver [-] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Instance destroyed successfully.
Nov 25 08:49:38 compute-0 systemd[1]: Started Virtual Machine qemu-134-instance-0000006c.
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.100 253542 DEBUG nova.objects.instance [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid ec34a574-9c78-43d8-a65a-aa4052a5d452 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:49:38 compute-0 ovn_controller[152859]: 2025-11-25T08:49:38Z|01053|binding|INFO|Setting lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a ovn-installed in OVS
Nov 25 08:49:38 compute-0 ovn_controller[152859]: 2025-11-25T08:49:38Z|01054|binding|INFO|Setting lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a up in Southbound
Nov 25 08:49:38 compute-0 ovn_controller[152859]: 2025-11-25T08:49:38Z|01055|binding|INFO|Releasing lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a from this chassis (sb_readonly=1)
Nov 25 08:49:38 compute-0 ovn_controller[152859]: 2025-11-25T08:49:38Z|01056|binding|INFO|Removing iface tapbc66e4f2-ce ovn-installed in OVS
Nov 25 08:49:38 compute-0 ovn_controller[152859]: 2025-11-25T08:49:38Z|01057|if_status|INFO|Not setting lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a down as sb is readonly
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.102 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:38 compute-0 ovn_controller[152859]: 2025-11-25T08:49:38Z|01058|binding|INFO|Releasing lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a from this chassis (sb_readonly=0)
Nov 25 08:49:38 compute-0 ovn_controller[152859]: 2025-11-25T08:49:38Z|01059|binding|INFO|Setting lport bc66e4f2-ce2b-49ca-ba89-a86607d56e5a down in Southbound
Nov 25 08:49:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.112 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:98:59 10.100.0.23'], port_security=['fa:16:3e:d3:98:59 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'ec34a574-9c78-43d8-a65a-aa4052a5d452', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f6e83494-bf67-403f-bbb9-2d14a77705aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99275f51-5c9e-402a-8f14-8705646486ea, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.113 253542 DEBUG nova.virt.libvirt.vif [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:49:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-286927500',display_name='tempest-TestNetworkBasicOps-server-286927500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-286927500',id=107,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpx2ujcMDfZDTmdH6itq/rcGct31eJ9TyaupJxEMjoZjvc+WgDrLkUbHxw8m+QMC78njJvM+fOPyWv9TETxSR2Le+lHvoJLnW/RQzdZT3SocZ8dY0e2xdmGW9jZNSUf0g==',key_name='tempest-TestNetworkBasicOps-1732848641',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:49:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-tlyua8ve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:49:16Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=ec34a574-9c78-43d8-a65a-aa4052a5d452,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.113 253542 DEBUG nova.network.os_vif_util [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "address": "fa:16:3e:d3:98:59", "network": {"id": "34b2f9d4-63fb-44ac-b745-d86825df1c61", "bridge": "br-int", "label": "tempest-network-smoke--1835414434", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc66e4f2-ce", "ovs_interfaceid": "bc66e4f2-ce2b-49ca-ba89-a86607d56e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.114 253542 DEBUG nova.network.os_vif_util [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:98:59,bridge_name='br-int',has_traffic_filtering=True,id=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a,network=Network(34b2f9d4-63fb-44ac-b745-d86825df1c61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc66e4f2-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.114 253542 DEBUG os_vif [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:98:59,bridge_name='br-int',has_traffic_filtering=True,id=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a,network=Network(34b2f9d4-63fb-44ac-b745-d86825df1c61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc66e4f2-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.117 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.117 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc66e4f2-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.118 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.120 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.122 253542 INFO os_vif [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:98:59,bridge_name='br-int',has_traffic_filtering=True,id=bc66e4f2-ce2b-49ca-ba89-a86607d56e5a,network=Network(34b2f9d4-63fb-44ac-b745-d86825df1c61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc66e4f2-ce')
Nov 25 08:49:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.155 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.155 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.365 253542 DEBUG nova.compute.manager [req-dcd82882-91c2-4a1e-aae7-ba7b491b311a req-64706819-548d-4ff3-988c-85778814c304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.367 253542 DEBUG oslo_concurrency.lockutils [req-dcd82882-91c2-4a1e-aae7-ba7b491b311a req-64706819-548d-4ff3-988c-85778814c304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.367 253542 DEBUG oslo_concurrency.lockutils [req-dcd82882-91c2-4a1e-aae7-ba7b491b311a req-64706819-548d-4ff3-988c-85778814c304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.368 253542 DEBUG oslo_concurrency.lockutils [req-dcd82882-91c2-4a1e-aae7-ba7b491b311a req-64706819-548d-4ff3-988c-85778814c304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.368 253542 DEBUG nova.compute.manager [req-dcd82882-91c2-4a1e-aae7-ba7b491b311a req-64706819-548d-4ff3-988c-85778814c304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Processing event network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:49:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2d709033092afbe32e7301de9ce77134dfb1c21bcf75d55f1f66a87f8dc2459-merged.mount: Deactivated successfully.
Nov 25 08:49:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241-userdata-shm.mount: Deactivated successfully.
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.578 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.578 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:38 compute-0 podman[362874]: 2025-11-25 08:49:38.663936598 +0000 UTC m=+0.716101819 container cleanup 50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 08:49:38 compute-0 ceph-mon[75015]: pgmap v2049: 321 pgs: 321 active+clean; 293 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Nov 25 08:49:38 compute-0 systemd[1]: libpod-conmon-50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241.scope: Deactivated successfully.
Nov 25 08:49:38 compute-0 podman[363052]: 2025-11-25 08:49:38.853330411 +0000 UTC m=+0.155731222 container remove 50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 08:49:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.864 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[15ea3d1d-e5e1-4d71-8dcf-68dc00a984df]: (4, ('Tue Nov 25 08:49:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61 (50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241)\n50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241\nTue Nov 25 08:49:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61 (50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241)\n50e62fc8631cbda48a1b969df0030a18056f96d0308a64f2b67e79dab3389241\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.867 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59c26321-5321-4d58-ae4e-763f82cf6d71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.868 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b2f9d4-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:38 compute-0 kernel: tap34b2f9d4-60: left promiscuous mode
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.940 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.955 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e39681f1-9c0c-4b0d-ab4a-25f9d099969a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.956 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.977 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[12c68f42-47fc-419c-9304-3816f62de4ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.978 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cc23548d-ccda-49b6-98f2-f0c07e9dedf2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.994 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e6511d47-37c7-4241-850c-8ce3ff15f5b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590986, 'reachable_time': 35568, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363135, 'error': None, 'target': 'ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.994 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060578.9941876, a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.995 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] VM Started (Lifecycle Event)
Nov 25 08:49:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.996 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-34b2f9d4-63fb-44ac-b745-d86825df1c61 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:49:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.996 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[14646793-fa81-4a0a-af01-b7716bd604e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:38.997 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 67b278e0-034e-4bb1-8cba-035ab2a72de3 in datapath 6cdb9ec3-6144-4767-9719-ddbf0a68bf7c unbound from our chassis
Nov 25 08:49:38 compute-0 podman[363119]: 2025-11-25 08:49:38.997213214 +0000 UTC m=+0.093711571 container create e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nobel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 08:49:38 compute-0 nova_compute[253538]: 2025-11-25 08:49:38.997 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:49:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d34b2f9d4\x2d63fb\x2d44ac\x2db745\x2dd86825df1c61.mount: Deactivated successfully.
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.000 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6cdb9ec3-6144-4767-9719-ddbf0a68bf7c
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.004 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.009 253542 INFO nova.virt.libvirt.driver [-] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Instance spawned successfully.
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.009 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.012 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ec390b-f2ad-401f-8b76-563b527abb06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.012 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6cdb9ec3-61 in ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.013 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.015 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6cdb9ec3-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.015 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec60da9-870a-494e-af81-427bc3f6498f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.015 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.016 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[43d32f67-496d-4403-a760-966815e770a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.026 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.027 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.027 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.027 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.028 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.028 253542 DEBUG nova.virt.libvirt.driver [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.031 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e43c1a88-6086-4e7b-bb67-ffcfecdaf9c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:39 compute-0 podman[363119]: 2025-11-25 08:49:38.944171253 +0000 UTC m=+0.040669640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.040 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.041 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060578.994403, a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.041 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] VM Paused (Lifecycle Event)
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.053 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8469cb6c-4a31-44ef-a33f-338a98899b63]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:39 compute-0 systemd[1]: Started libpod-conmon-e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef.scope.
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.062 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.065 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060579.001391, a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.065 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] VM Resumed (Lifecycle Event)
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.081 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:49:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.084 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.088 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0b1e84-2638-4089-9ec3-9236bd5e877d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.091 253542 INFO nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Took 7.67 seconds to spawn the instance on the hypervisor.
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.092 253542 DEBUG nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.097 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ea6ceef6-c522-4d72-ad4e-15115a795a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:39 compute-0 NetworkManager[48915]: <info>  [1764060579.0988] manager: (tap6cdb9ec3-60): new Veth device (/org/freedesktop/NetworkManager/Devices/432)
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.119 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.133 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b8ce3893-bcbc-4e0f-ab7f-a05bf5245816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:49:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1886141674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.138 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9aeff855-dae3-4ea6-afce-b6bfd8ad672c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:39 compute-0 podman[363119]: 2025-11-25 08:49:39.14343013 +0000 UTC m=+0.239928507 container init e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:49:39 compute-0 podman[363119]: 2025-11-25 08:49:39.151651661 +0000 UTC m=+0.248150018 container start e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nobel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 08:49:39 compute-0 happy_nobel[363142]: 167 167
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.155 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:39 compute-0 systemd[1]: libpod-e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef.scope: Deactivated successfully.
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.159 253542 INFO nova.compute.manager [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Took 8.62 seconds to build instance.
Nov 25 08:49:39 compute-0 NetworkManager[48915]: <info>  [1764060579.1646] device (tap6cdb9ec3-60): carrier: link connected
Nov 25 08:49:39 compute-0 podman[363119]: 2025-11-25 08:49:39.170753321 +0000 UTC m=+0.267251708 container attach e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:49:39 compute-0 podman[363119]: 2025-11-25 08:49:39.172073857 +0000 UTC m=+0.268572224 container died e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.172 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ba55f619-56dd-4cc5-acec-0995dedc2e50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.178 253542 DEBUG oslo_concurrency.lockutils [None req-19216ac5-cfe1-4bf8-b52b-a146e40d4512 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.192 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e183afd9-f3e4-4246-8e00-5e2a4e1fc9e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6cdb9ec3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:ec:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593376, 'reachable_time': 27825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363174, 'error': None, 'target': 'ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.211 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6e457073-ac4c-4b3d-90ba-0c848cabffb4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:ec5b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 593376, 'tstamp': 593376}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363179, 'error': None, 'target': 'ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.228 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4571ae45-bbcd-49f8-8b3c-cbbfadfcc5f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6cdb9ec3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:ec:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593376, 'reachable_time': 27825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 363180, 'error': None, 'target': 'ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.255 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.255 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.257 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[740def7c-6af5-42ac-a9a1-495330b61f83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.258 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.259 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.268 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.268 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.328 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9569e6a7-0436-4546-b979-4b8eff55988e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.329 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6cdb9ec3-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.329 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.329 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6cdb9ec3-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.331 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:39 compute-0 NetworkManager[48915]: <info>  [1764060579.3319] manager: (tap6cdb9ec3-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/433)
Nov 25 08:49:39 compute-0 kernel: tap6cdb9ec3-60: entered promiscuous mode
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.345 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6cdb9ec3-60, col_values=(('external_ids', {'iface-id': 'a1fb9b8f-24d1-400b-bfc7-6d0e30e1e254'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:39 compute-0 ovn_controller[152859]: 2025-11-25T08:49:39Z|01060|binding|INFO|Releasing lport a1fb9b8f-24d1-400b-bfc7-6d0e30e1e254 from this chassis (sb_readonly=0)
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.346 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.347 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.349 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6cdb9ec3-6144-4767-9719-ddbf0a68bf7c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6cdb9ec3-6144-4767-9719-ddbf0a68bf7c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.350 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bf04abee-f7e2-4704-80b9-4453e0d44e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.351 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/6cdb9ec3-6144-4767-9719-ddbf0a68bf7c.pid.haproxy
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 6cdb9ec3-6144-4767-9719-ddbf0a68bf7c
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:49:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:39.352 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c', 'env', 'PROCESS_TAG=haproxy-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6cdb9ec3-6144-4767-9719-ddbf0a68bf7c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.361 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d4b8906c251b2b7e57b0e9e5272232cbdb6f9837bc1e0c0de887bbceee89836-merged.mount: Deactivated successfully.
Nov 25 08:49:39 compute-0 podman[363119]: 2025-11-25 08:49:39.530018174 +0000 UTC m=+0.626516521 container remove e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nobel, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.538 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.539 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3513MB free_disk=59.87651062011719GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.539 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.540 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.617 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.618 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance ec34a574-9c78-43d8-a65a-aa4052a5d452 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.618 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.618 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.618 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:49:39 compute-0 systemd[1]: libpod-conmon-e0eb2105062c253ec69225fe41d8d391f962594bea5a9b2a5e4478517f1520ef.scope: Deactivated successfully.
Nov 25 08:49:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 293 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 25 08:49:39 compute-0 nova_compute[253538]: 2025-11-25 08:49:39.680 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:39 compute-0 podman[363211]: 2025-11-25 08:49:39.771145282 +0000 UTC m=+0.079087460 container create f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mendel, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 08:49:39 compute-0 podman[363211]: 2025-11-25 08:49:39.720892566 +0000 UTC m=+0.028834614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:49:39 compute-0 systemd[1]: Started libpod-conmon-f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d.scope.
Nov 25 08:49:39 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1886141674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:49:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac1284a1ccf97edfbea65953620e524c6cd053344512cf31e249083ff348e13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:49:39 compute-0 podman[363230]: 2025-11-25 08:49:39.867214414 +0000 UTC m=+0.133418794 container create c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 08:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac1284a1ccf97edfbea65953620e524c6cd053344512cf31e249083ff348e13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac1284a1ccf97edfbea65953620e524c6cd053344512cf31e249083ff348e13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:49:39 compute-0 podman[363230]: 2025-11-25 08:49:39.783556664 +0000 UTC m=+0.049761074 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac1284a1ccf97edfbea65953620e524c6cd053344512cf31e249083ff348e13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:49:39 compute-0 podman[363211]: 2025-11-25 08:49:39.919220637 +0000 UTC m=+0.227162655 container init f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mendel, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:49:39 compute-0 podman[363211]: 2025-11-25 08:49:39.927793487 +0000 UTC m=+0.235735485 container start f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mendel, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:49:39 compute-0 systemd[1]: Started libpod-conmon-c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b.scope.
Nov 25 08:49:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51884f3ed72ef29fe201d1c8b86187d773b15028358389bb3a90b47cd47948b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:49:40 compute-0 podman[363211]: 2025-11-25 08:49:40.017128429 +0000 UTC m=+0.325070477 container attach f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mendel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 08:49:40 compute-0 podman[363230]: 2025-11-25 08:49:40.071910366 +0000 UTC m=+0.338114776 container init c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 08:49:40 compute-0 podman[363230]: 2025-11-25 08:49:40.081659538 +0000 UTC m=+0.347863918 container start c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:49:40 compute-0 neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c[363273]: [NOTICE]   (363277) : New worker (363279) forked
Nov 25 08:49:40 compute-0 neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c[363273]: [NOTICE]   (363277) : Loading success.
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.142 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.143 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.143 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.144 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.144 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] No waiting events found dispatching network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.145 253542 WARNING nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received unexpected event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for instance with vm_state active and task_state deleting.
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.145 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.145 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.146 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.146 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.146 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] No waiting events found dispatching network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.146 253542 WARNING nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received unexpected event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for instance with vm_state active and task_state deleting.
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.147 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.147 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.148 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.148 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.148 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] No waiting events found dispatching network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.149 253542 WARNING nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received unexpected event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for instance with vm_state active and task_state deleting.
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.149 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-unplugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.149 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.150 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.150 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:40.150 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc66e4f2-ce2b-49ca-ba89-a86607d56e5a in datapath 34b2f9d4-63fb-44ac-b745-d86825df1c61 unbound from our chassis
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.150 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] No waiting events found dispatching network-vif-unplugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.150 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-unplugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.151 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.151 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:40.151 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34b2f9d4-63fb-44ac-b745-d86825df1c61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.151 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:40.152 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[85e7c194-8c80-4815-b372-2b38564e7c97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.152 253542 DEBUG oslo_concurrency.lockutils [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:40.153 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc66e4f2-ce2b-49ca-ba89-a86607d56e5a in datapath 34b2f9d4-63fb-44ac-b745-d86825df1c61 unbound from our chassis
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.153 253542 DEBUG nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] No waiting events found dispatching network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.153 253542 WARNING nova.compute.manager [req-23c5af2d-b57a-40c0-b17d-9a598d155ba4 req-24be5093-a61e-41e7-bf4f-a3250ab60e94 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received unexpected event network-vif-plugged-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a for instance with vm_state active and task_state deleting.
Nov 25 08:49:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:40.153 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34b2f9d4-63fb-44ac-b745-d86825df1c61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:49:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:40.154 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[93084fa8-4dd8-429b-9e4d-8023eea56a05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:40.154 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:49:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:49:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1651653293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.225 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.233 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.257 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.277 253542 INFO nova.virt.libvirt.driver [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Deleting instance files /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452_del
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.278 253542 INFO nova.virt.libvirt.driver [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Deletion of /var/lib/nova/instances/ec34a574-9c78-43d8-a65a-aa4052a5d452_del complete
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.282 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.282 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.338 253542 INFO nova.compute.manager [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Took 3.10 seconds to destroy the instance on the hypervisor.
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.338 253542 DEBUG oslo.service.loopingcall [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.339 253542 DEBUG nova.compute.manager [-] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.339 253542 DEBUG nova.network.neutron [-] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.516 253542 DEBUG nova.compute.manager [req-5a34007d-2fc7-4b8f-8c33-2d1d8537ad45 req-cf8f828b-e6d5-4eeb-96a0-7b5078e159be b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.517 253542 DEBUG oslo_concurrency.lockutils [req-5a34007d-2fc7-4b8f-8c33-2d1d8537ad45 req-cf8f828b-e6d5-4eeb-96a0-7b5078e159be b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.517 253542 DEBUG oslo_concurrency.lockutils [req-5a34007d-2fc7-4b8f-8c33-2d1d8537ad45 req-cf8f828b-e6d5-4eeb-96a0-7b5078e159be b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.517 253542 DEBUG oslo_concurrency.lockutils [req-5a34007d-2fc7-4b8f-8c33-2d1d8537ad45 req-cf8f828b-e6d5-4eeb-96a0-7b5078e159be b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.517 253542 DEBUG nova.compute.manager [req-5a34007d-2fc7-4b8f-8c33-2d1d8537ad45 req-cf8f828b-e6d5-4eeb-96a0-7b5078e159be b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] No waiting events found dispatching network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:49:40 compute-0 nova_compute[253538]: 2025-11-25 08:49:40.518 253542 WARNING nova.compute.manager [req-5a34007d-2fc7-4b8f-8c33-2d1d8537ad45 req-cf8f828b-e6d5-4eeb-96a0-7b5078e159be b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received unexpected event network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 for instance with vm_state active and task_state None.
Nov 25 08:49:40 compute-0 gracious_mendel[363263]: {
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:     "0": [
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:         {
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "devices": [
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "/dev/loop3"
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             ],
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "lv_name": "ceph_lv0",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "lv_size": "21470642176",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "name": "ceph_lv0",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "tags": {
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.cluster_name": "ceph",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.crush_device_class": "",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.encrypted": "0",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.osd_id": "0",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.type": "block",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.vdo": "0"
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             },
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "type": "block",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "vg_name": "ceph_vg0"
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:         }
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:     ],
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:     "1": [
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:         {
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "devices": [
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "/dev/loop4"
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             ],
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "lv_name": "ceph_lv1",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "lv_size": "21470642176",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "name": "ceph_lv1",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "tags": {
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.cluster_name": "ceph",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.crush_device_class": "",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.encrypted": "0",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.osd_id": "1",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.type": "block",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.vdo": "0"
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             },
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "type": "block",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "vg_name": "ceph_vg1"
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:         }
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:     ],
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:     "2": [
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:         {
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "devices": [
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "/dev/loop5"
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             ],
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "lv_name": "ceph_lv2",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "lv_size": "21470642176",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "name": "ceph_lv2",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "tags": {
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.cluster_name": "ceph",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.crush_device_class": "",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.encrypted": "0",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.osd_id": "2",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.type": "block",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:                 "ceph.vdo": "0"
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             },
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "type": "block",
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:             "vg_name": "ceph_vg2"
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:         }
Nov 25 08:49:40 compute-0 gracious_mendel[363263]:     ]
Nov 25 08:49:40 compute-0 gracious_mendel[363263]: }
Nov 25 08:49:40 compute-0 systemd[1]: libpod-f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d.scope: Deactivated successfully.
Nov 25 08:49:40 compute-0 podman[363211]: 2025-11-25 08:49:40.756084539 +0000 UTC m=+1.064026547 container died f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mendel, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:49:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ac1284a1ccf97edfbea65953620e524c6cd053344512cf31e249083ff348e13-merged.mount: Deactivated successfully.
Nov 25 08:49:40 compute-0 ceph-mon[75015]: pgmap v2050: 321 pgs: 321 active+clean; 293 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 25 08:49:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1651653293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:49:40 compute-0 podman[363211]: 2025-11-25 08:49:40.897933808 +0000 UTC m=+1.205875807 container remove f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:49:40 compute-0 systemd[1]: libpod-conmon-f8284ac3054758f93e1956d42b529dd53200b28cb691fcc27237c71804f0995d.scope: Deactivated successfully.
Nov 25 08:49:40 compute-0 sudo[362929]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:41 compute-0 sudo[363310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:49:41 compute-0 sudo[363310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:41 compute-0 sudo[363310]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:41 compute-0 sshd-session[363290]: Invalid user admin from 193.32.162.151 port 41500
Nov 25 08:49:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:41.073 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:41.074 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:41.075 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:41 compute-0 sudo[363335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:49:41 compute-0 sudo[363335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:41 compute-0 sudo[363335]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:41 compute-0 sudo[363360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:49:41 compute-0 sudo[363360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:41 compute-0 sudo[363360]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:41 compute-0 sshd-session[363290]: Connection closed by invalid user admin 193.32.162.151 port 41500 [preauth]
Nov 25 08:49:41 compute-0 sudo[363385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:49:41 compute-0 sudo[363385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:41 compute-0 nova_compute[253538]: 2025-11-25 08:49:41.223 253542 DEBUG nova.network.neutron [-] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:49:41 compute-0 nova_compute[253538]: 2025-11-25 08:49:41.251 253542 INFO nova.compute.manager [-] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Took 0.91 seconds to deallocate network for instance.
Nov 25 08:49:41 compute-0 nova_compute[253538]: 2025-11-25 08:49:41.276 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:49:41 compute-0 nova_compute[253538]: 2025-11-25 08:49:41.303 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:41 compute-0 nova_compute[253538]: 2025-11-25 08:49:41.303 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:41 compute-0 nova_compute[253538]: 2025-11-25 08:49:41.429 253542 DEBUG oslo_concurrency.processutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:41 compute-0 podman[363449]: 2025-11-25 08:49:41.545298056 +0000 UTC m=+0.045972332 container create fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_albattani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:49:41 compute-0 systemd[1]: Started libpod-conmon-fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567.scope.
Nov 25 08:49:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:49:41 compute-0 podman[363449]: 2025-11-25 08:49:41.522125166 +0000 UTC m=+0.022799462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:49:41 compute-0 podman[363449]: 2025-11-25 08:49:41.636781466 +0000 UTC m=+0.137460542 container init fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:49:41 compute-0 podman[363449]: 2025-11-25 08:49:41.643943158 +0000 UTC m=+0.144617434 container start fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 08:49:41 compute-0 modest_albattani[363484]: 167 167
Nov 25 08:49:41 compute-0 systemd[1]: libpod-fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567.scope: Deactivated successfully.
Nov 25 08:49:41 compute-0 podman[363449]: 2025-11-25 08:49:41.654151081 +0000 UTC m=+0.154825397 container attach fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_albattani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 08:49:41 compute-0 podman[363449]: 2025-11-25 08:49:41.654584963 +0000 UTC m=+0.155259249 container died fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_albattani, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 08:49:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 260 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 3.6 MiB/s wr, 96 op/s
Nov 25 08:49:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-88440674ec54114c41870e7212d11b3016c757d5aff6a0b7a670128015928746-merged.mount: Deactivated successfully.
Nov 25 08:49:41 compute-0 podman[363449]: 2025-11-25 08:49:41.727464825 +0000 UTC m=+0.228139111 container remove fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_albattani, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 08:49:41 compute-0 systemd[1]: libpod-conmon-fee426d57fc6ef4567bf77d85a18cb810c4ab69390016fdc7f72ce07c34c1567.scope: Deactivated successfully.
Nov 25 08:49:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:49:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/739853544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:49:41 compute-0 nova_compute[253538]: 2025-11-25 08:49:41.919 253542 DEBUG oslo_concurrency.processutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:41 compute-0 nova_compute[253538]: 2025-11-25 08:49:41.930 253542 DEBUG nova.compute.provider_tree [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:49:41 compute-0 podman[363508]: 2025-11-25 08:49:41.94795621 +0000 UTC m=+0.060087651 container create 4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 08:49:41 compute-0 nova_compute[253538]: 2025-11-25 08:49:41.949 253542 DEBUG nova.scheduler.client.report [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:49:41 compute-0 nova_compute[253538]: 2025-11-25 08:49:41.976 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:42 compute-0 systemd[1]: Started libpod-conmon-4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f.scope.
Nov 25 08:49:42 compute-0 podman[363508]: 2025-11-25 08:49:41.921215863 +0000 UTC m=+0.033347314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:49:42 compute-0 nova_compute[253538]: 2025-11-25 08:49:42.024 253542 INFO nova.scheduler.client.report [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance ec34a574-9c78-43d8-a65a-aa4052a5d452
Nov 25 08:49:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0be47031c7368003ff63301176b0c92c4a6918301a566fcfad4ed6d2941c286e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0be47031c7368003ff63301176b0c92c4a6918301a566fcfad4ed6d2941c286e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0be47031c7368003ff63301176b0c92c4a6918301a566fcfad4ed6d2941c286e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0be47031c7368003ff63301176b0c92c4a6918301a566fcfad4ed6d2941c286e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:49:42 compute-0 podman[363508]: 2025-11-25 08:49:42.073588675 +0000 UTC m=+0.185720206 container init 4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 08:49:42 compute-0 podman[363508]: 2025-11-25 08:49:42.081703341 +0000 UTC m=+0.193834812 container start 4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 08:49:42 compute-0 podman[363508]: 2025-11-25 08:49:42.090652402 +0000 UTC m=+0.202783843 container attach 4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:49:42 compute-0 nova_compute[253538]: 2025-11-25 08:49:42.100 253542 DEBUG oslo_concurrency.lockutils [None req-66cbd094-af78-48a9-b936-09c8259d7202 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ec34a574-9c78-43d8-a65a-aa4052a5d452" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:42 compute-0 nova_compute[253538]: 2025-11-25 08:49:42.310 253542 DEBUG nova.compute.manager [req-a59d175c-ed76-4fc8-9552-88001618296c req-ab356573-6698-407b-a439-236be186e439 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Received event network-vif-deleted-bc66e4f2-ce2b-49ca-ba89-a86607d56e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:49:42 compute-0 nova_compute[253538]: 2025-11-25 08:49:42.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:49:42 compute-0 nova_compute[253538]: 2025-11-25 08:49:42.575 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:42 compute-0 ceph-mon[75015]: pgmap v2051: 321 pgs: 321 active+clean; 260 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 3.6 MiB/s wr, 96 op/s
Nov 25 08:49:42 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/739853544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:49:43 compute-0 jovial_colden[363524]: {
Nov 25 08:49:43 compute-0 jovial_colden[363524]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:49:43 compute-0 jovial_colden[363524]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:49:43 compute-0 jovial_colden[363524]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:49:43 compute-0 jovial_colden[363524]:         "osd_id": 1,
Nov 25 08:49:43 compute-0 jovial_colden[363524]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:49:43 compute-0 jovial_colden[363524]:         "type": "bluestore"
Nov 25 08:49:43 compute-0 jovial_colden[363524]:     },
Nov 25 08:49:43 compute-0 jovial_colden[363524]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:49:43 compute-0 jovial_colden[363524]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:49:43 compute-0 jovial_colden[363524]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:49:43 compute-0 jovial_colden[363524]:         "osd_id": 2,
Nov 25 08:49:43 compute-0 jovial_colden[363524]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:49:43 compute-0 jovial_colden[363524]:         "type": "bluestore"
Nov 25 08:49:43 compute-0 jovial_colden[363524]:     },
Nov 25 08:49:43 compute-0 jovial_colden[363524]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:49:43 compute-0 jovial_colden[363524]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:49:43 compute-0 jovial_colden[363524]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:49:43 compute-0 jovial_colden[363524]:         "osd_id": 0,
Nov 25 08:49:43 compute-0 jovial_colden[363524]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:49:43 compute-0 jovial_colden[363524]:         "type": "bluestore"
Nov 25 08:49:43 compute-0 jovial_colden[363524]:     }
Nov 25 08:49:43 compute-0 jovial_colden[363524]: }
Nov 25 08:49:43 compute-0 systemd[1]: libpod-4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f.scope: Deactivated successfully.
Nov 25 08:49:43 compute-0 podman[363508]: 2025-11-25 08:49:43.111754538 +0000 UTC m=+1.223885969 container died 4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:49:43 compute-0 systemd[1]: libpod-4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f.scope: Consumed 1.036s CPU time.
Nov 25 08:49:43 compute-0 nova_compute[253538]: 2025-11-25 08:49:43.118 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0be47031c7368003ff63301176b0c92c4a6918301a566fcfad4ed6d2941c286e-merged.mount: Deactivated successfully.
Nov 25 08:49:43 compute-0 podman[363508]: 2025-11-25 08:49:43.188382991 +0000 UTC m=+1.300514432 container remove 4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 08:49:43 compute-0 systemd[1]: libpod-conmon-4b311b7e42d36cff0b7f1fe977b7a5a97a6fe4764af02039fd97b5a24c49bc6f.scope: Deactivated successfully.
Nov 25 08:49:43 compute-0 sudo[363385]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:49:43 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:49:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:49:43 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:49:43 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 39305fec-3473-42a5-8618-3f487b7272c3 does not exist
Nov 25 08:49:43 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 02efaacc-18a0-4659-857e-4a0d65f69301 does not exist
Nov 25 08:49:43 compute-0 sudo[363571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:49:43 compute-0 sudo[363571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:43 compute-0 sudo[363571]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:43 compute-0 sudo[363596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:49:43 compute-0 sudo[363596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:49:43 compute-0 sudo[363596]: pam_unix(sudo:session): session closed for user root
Nov 25 08:49:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 213 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 147 op/s
Nov 25 08:49:44 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:49:44 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:49:44 compute-0 ceph-mon[75015]: pgmap v2052: 321 pgs: 321 active+clean; 213 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 147 op/s
Nov 25 08:49:44 compute-0 ovn_controller[152859]: 2025-11-25T08:49:44Z|01061|binding|INFO|Releasing lport a1fb9b8f-24d1-400b-bfc7-6d0e30e1e254 from this chassis (sb_readonly=0)
Nov 25 08:49:44 compute-0 ovn_controller[152859]: 2025-11-25T08:49:44Z|01062|binding|INFO|Releasing lport 2a6dde1f-8745-4374-9d65-dba32b48db06 from this chassis (sb_readonly=0)
Nov 25 08:49:44 compute-0 nova_compute[253538]: 2025-11-25 08:49:44.407 253542 DEBUG nova.compute.manager [req-64c3ac17-05ac-4784-b80d-b0a22aefc7eb req-f9790e13-0e7e-4c40-96ee-30d27c89bcd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-changed-67b278e0-034e-4bb1-8cba-035ab2a72de3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:44 compute-0 nova_compute[253538]: 2025-11-25 08:49:44.408 253542 DEBUG nova.compute.manager [req-64c3ac17-05ac-4784-b80d-b0a22aefc7eb req-f9790e13-0e7e-4c40-96ee-30d27c89bcd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Refreshing instance network info cache due to event network-changed-67b278e0-034e-4bb1-8cba-035ab2a72de3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:49:44 compute-0 nova_compute[253538]: 2025-11-25 08:49:44.408 253542 DEBUG oslo_concurrency.lockutils [req-64c3ac17-05ac-4784-b80d-b0a22aefc7eb req-f9790e13-0e7e-4c40-96ee-30d27c89bcd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:49:44 compute-0 nova_compute[253538]: 2025-11-25 08:49:44.409 253542 DEBUG oslo_concurrency.lockutils [req-64c3ac17-05ac-4784-b80d-b0a22aefc7eb req-f9790e13-0e7e-4c40-96ee-30d27c89bcd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:49:44 compute-0 nova_compute[253538]: 2025-11-25 08:49:44.409 253542 DEBUG nova.network.neutron [req-64c3ac17-05ac-4784-b80d-b0a22aefc7eb req-f9790e13-0e7e-4c40-96ee-30d27c89bcd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Refreshing network info cache for port 67b278e0-034e-4bb1-8cba-035ab2a72de3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:49:44 compute-0 nova_compute[253538]: 2025-11-25 08:49:44.428 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.434 253542 DEBUG nova.compute.manager [req-357b6125-7503-40fd-b463-0a454c8918c2 req-eaa37c7e-a716-4126-a620-1c7af9a4bc55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received event network-changed-5b999504-81af-4e3d-9707-b0a72b902669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.435 253542 DEBUG nova.compute.manager [req-357b6125-7503-40fd-b463-0a454c8918c2 req-eaa37c7e-a716-4126-a620-1c7af9a4bc55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Refreshing instance network info cache due to event network-changed-5b999504-81af-4e3d-9707-b0a72b902669. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.436 253542 DEBUG oslo_concurrency.lockutils [req-357b6125-7503-40fd-b463-0a454c8918c2 req-eaa37c7e-a716-4126-a620-1c7af9a4bc55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.436 253542 DEBUG oslo_concurrency.lockutils [req-357b6125-7503-40fd-b463-0a454c8918c2 req-eaa37c7e-a716-4126-a620-1c7af9a4bc55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.436 253542 DEBUG nova.network.neutron [req-357b6125-7503-40fd-b463-0a454c8918c2 req-eaa37c7e-a716-4126-a620-1c7af9a4bc55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Refreshing network info cache for port 5b999504-81af-4e3d-9707-b0a72b902669 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.591 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.591 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.592 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.593 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.593 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.595 253542 INFO nova.compute.manager [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Terminating instance
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.596 253542 DEBUG nova.compute.manager [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:49:45 compute-0 kernel: tap5b999504-81 (unregistering): left promiscuous mode
Nov 25 08:49:45 compute-0 NetworkManager[48915]: <info>  [1764060585.6607] device (tap5b999504-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:49:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 213 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 134 op/s
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.669 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:45 compute-0 ovn_controller[152859]: 2025-11-25T08:49:45Z|01063|binding|INFO|Releasing lport 5b999504-81af-4e3d-9707-b0a72b902669 from this chassis (sb_readonly=0)
Nov 25 08:49:45 compute-0 ovn_controller[152859]: 2025-11-25T08:49:45Z|01064|binding|INFO|Setting lport 5b999504-81af-4e3d-9707-b0a72b902669 down in Southbound
Nov 25 08:49:45 compute-0 ovn_controller[152859]: 2025-11-25T08:49:45Z|01065|binding|INFO|Removing iface tap5b999504-81 ovn-installed in OVS
Nov 25 08:49:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:45.678 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:68:34 10.100.0.9'], port_security=['fa:16:3e:cf:68:34 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41ed78ca-e8a4-4daf-884b-6b7b763e272f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4a5a2de2-f65d-4e79-a42e-c5ccdc573b10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a84b49-c79a-4804-945b-0e3005e5ab18, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5b999504-81af-4e3d-9707-b0a72b902669) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:49:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:45.679 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5b999504-81af-4e3d-9707-b0a72b902669 in datapath 41ed78ca-e8a4-4daf-884b-6b7b763e272f unbound from our chassis
Nov 25 08:49:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:45.680 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41ed78ca-e8a4-4daf-884b-6b7b763e272f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:49:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:45.681 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3da42f4b-a13e-41fb-a2aa-92fc5347c850]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:45.686 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f namespace which is not needed anymore
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.706 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:45 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Nov 25 08:49:45 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d0000006a.scope: Consumed 15.912s CPU time.
Nov 25 08:49:45 compute-0 systemd-machined[215790]: Machine qemu-131-instance-0000006a terminated.
Nov 25 08:49:45 compute-0 neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f[360616]: [NOTICE]   (360622) : haproxy version is 2.8.14-c23fe91
Nov 25 08:49:45 compute-0 neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f[360616]: [NOTICE]   (360622) : path to executable is /usr/sbin/haproxy
Nov 25 08:49:45 compute-0 neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f[360616]: [WARNING]  (360622) : Exiting Master process...
Nov 25 08:49:45 compute-0 neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f[360616]: [ALERT]    (360622) : Current worker (360626) exited with code 143 (Terminated)
Nov 25 08:49:45 compute-0 neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f[360616]: [WARNING]  (360622) : All workers exited. Exiting... (0)
Nov 25 08:49:45 compute-0 systemd[1]: libpod-bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f.scope: Deactivated successfully.
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.830 253542 INFO nova.virt.libvirt.driver [-] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Instance destroyed successfully.
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.831 253542 DEBUG nova.objects.instance [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:49:45 compute-0 podman[363644]: 2025-11-25 08:49:45.833429719 +0000 UTC m=+0.046564888 container died bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.842 253542 DEBUG nova.virt.libvirt.vif [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:48:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-500014830',display_name='tempest-TestNetworkBasicOps-server-500014830',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-500014830',id=106,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFCCvuRR9teEjk+xhoL/dPXtSbMEI/QvMm2XyYfTKUyOXE8qn7R4eNZpb9TezDBvzTLIaZuuD77pyfzIuaqqEBF8FLx+5feWI/X0iULdgxVeu0o4nXU62owugHwOXwCyOg==',key_name='tempest-TestNetworkBasicOps-1624027369',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:48:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-nidctccp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:48:34Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.845 253542 DEBUG nova.network.os_vif_util [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.846 253542 DEBUG nova.network.os_vif_util [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:68:34,bridge_name='br-int',has_traffic_filtering=True,id=5b999504-81af-4e3d-9707-b0a72b902669,network=Network(41ed78ca-e8a4-4daf-884b-6b7b763e272f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b999504-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.846 253542 DEBUG os_vif [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:68:34,bridge_name='br-int',has_traffic_filtering=True,id=5b999504-81af-4e3d-9707-b0a72b902669,network=Network(41ed78ca-e8a4-4daf-884b-6b7b763e272f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b999504-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.848 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.848 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b999504-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.850 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.852 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.853 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:45 compute-0 nova_compute[253538]: 2025-11-25 08:49:45.855 253542 INFO os_vif [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:68:34,bridge_name='br-int',has_traffic_filtering=True,id=5b999504-81af-4e3d-9707-b0a72b902669,network=Network(41ed78ca-e8a4-4daf-884b-6b7b763e272f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b999504-81')
Nov 25 08:49:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f-userdata-shm.mount: Deactivated successfully.
Nov 25 08:49:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1849c6e6683bfaf37cc6be930869317f1e1d8455b3540b1f7e705a5ef3ca71a-merged.mount: Deactivated successfully.
Nov 25 08:49:45 compute-0 podman[363644]: 2025-11-25 08:49:45.936989423 +0000 UTC m=+0.150124572 container cleanup bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:49:45 compute-0 systemd[1]: libpod-conmon-bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f.scope: Deactivated successfully.
Nov 25 08:49:46 compute-0 podman[363704]: 2025-11-25 08:49:46.01529413 +0000 UTC m=+0.048925742 container remove bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 08:49:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.024 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd91fff-e742-451f-b4e5-f2e4e5fe866c]: (4, ('Tue Nov 25 08:49:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f (bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f)\nbbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f\nTue Nov 25 08:49:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f (bbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f)\nbbe7a88bce4ee5fd9c18ecf2e88ab17f45240f07c5add1dacd1bb1ee8f137c9f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.027 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00afde97-bcfc-4b71-ba96-6a9280bc6358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.029 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41ed78ca-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:46 compute-0 nova_compute[253538]: 2025-11-25 08:49:46.031 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:46 compute-0 kernel: tap41ed78ca-e0: left promiscuous mode
Nov 25 08:49:46 compute-0 nova_compute[253538]: 2025-11-25 08:49:46.044 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.049 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d99d73ba-1766-4a11-90f2-86a3a0671a33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.068 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[55ea96c7-2f6f-4238-90bb-afb5ad89488b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.069 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c137e0-2f02-43e7-90cd-0df0e3f71c5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.087 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc0171e-8f92-4c98-a63b-92a6335b7c31]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586394, 'reachable_time': 16603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363719, 'error': None, 'target': 'ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d41ed78ca\x2de8a4\x2d4daf\x2d884b\x2d6b7b763e272f.mount: Deactivated successfully.
Nov 25 08:49:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.091 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41ed78ca-e8a4-4daf-884b-6b7b763e272f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:49:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.091 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a501b747-6a31-4b7f-945e-3cd05fb48c4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:49:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:49:46.155 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:49:46 compute-0 nova_compute[253538]: 2025-11-25 08:49:46.203 253542 INFO nova.virt.libvirt.driver [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Deleting instance files /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_del
Nov 25 08:49:46 compute-0 nova_compute[253538]: 2025-11-25 08:49:46.206 253542 INFO nova.virt.libvirt.driver [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Deletion of /var/lib/nova/instances/fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe_del complete
Nov 25 08:49:46 compute-0 nova_compute[253538]: 2025-11-25 08:49:46.278 253542 INFO nova.compute.manager [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Took 0.68 seconds to destroy the instance on the hypervisor.
Nov 25 08:49:46 compute-0 nova_compute[253538]: 2025-11-25 08:49:46.279 253542 DEBUG oslo.service.loopingcall [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:49:46 compute-0 nova_compute[253538]: 2025-11-25 08:49:46.279 253542 DEBUG nova.compute.manager [-] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:49:46 compute-0 nova_compute[253538]: 2025-11-25 08:49:46.280 253542 DEBUG nova.network.neutron [-] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:49:46 compute-0 ceph-mon[75015]: pgmap v2053: 321 pgs: 321 active+clean; 213 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 134 op/s
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.127 253542 DEBUG nova.network.neutron [req-64c3ac17-05ac-4784-b80d-b0a22aefc7eb req-f9790e13-0e7e-4c40-96ee-30d27c89bcd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updated VIF entry in instance network info cache for port 67b278e0-034e-4bb1-8cba-035ab2a72de3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.129 253542 DEBUG nova.network.neutron [req-64c3ac17-05ac-4784-b80d-b0a22aefc7eb req-f9790e13-0e7e-4c40-96ee-30d27c89bcd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updating instance_info_cache with network_info: [{"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.147 253542 DEBUG oslo_concurrency.lockutils [req-64c3ac17-05ac-4784-b80d-b0a22aefc7eb req-f9790e13-0e7e-4c40-96ee-30d27c89bcd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.262 253542 DEBUG nova.network.neutron [-] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.280 253542 INFO nova.compute.manager [-] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Took 1.00 seconds to deallocate network for instance.
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.319 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.320 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.395 253542 DEBUG oslo_concurrency.processutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:49:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.577 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.647 253542 DEBUG nova.compute.manager [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received event network-vif-unplugged-5b999504-81af-4e3d-9707-b0a72b902669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.649 253542 DEBUG oslo_concurrency.lockutils [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.649 253542 DEBUG oslo_concurrency.lockutils [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.649 253542 DEBUG oslo_concurrency.lockutils [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.650 253542 DEBUG nova.compute.manager [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] No waiting events found dispatching network-vif-unplugged-5b999504-81af-4e3d-9707-b0a72b902669 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.650 253542 WARNING nova.compute.manager [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received unexpected event network-vif-unplugged-5b999504-81af-4e3d-9707-b0a72b902669 for instance with vm_state deleted and task_state None.
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.650 253542 DEBUG nova.compute.manager [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received event network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.651 253542 DEBUG oslo_concurrency.lockutils [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.652 253542 DEBUG oslo_concurrency.lockutils [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.652 253542 DEBUG oslo_concurrency.lockutils [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.653 253542 DEBUG nova.compute.manager [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] No waiting events found dispatching network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.653 253542 WARNING nova.compute.manager [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received unexpected event network-vif-plugged-5b999504-81af-4e3d-9707-b0a72b902669 for instance with vm_state deleted and task_state None.
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.653 253542 DEBUG nova.compute.manager [req-aaec318e-7c76-4c50-9291-5991ae5407f6 req-e2d1ba84-6c86-4ce8-b041-d8e392193ebb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Received event network-vif-deleted-5b999504-81af-4e3d-9707-b0a72b902669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:49:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 171 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 531 KiB/s wr, 138 op/s
Nov 25 08:49:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:49:47 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/523222807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.869 253542 DEBUG oslo_concurrency.processutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.875 253542 DEBUG nova.compute.provider_tree [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.893 253542 DEBUG nova.scheduler.client.report [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.919 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:47 compute-0 nova_compute[253538]: 2025-11-25 08:49:47.961 253542 INFO nova.scheduler.client.report [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe
Nov 25 08:49:48 compute-0 nova_compute[253538]: 2025-11-25 08:49:48.297 253542 DEBUG oslo_concurrency.lockutils [None req-cd41ae2e-df99-4795-b08d-38b967982846 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:48 compute-0 nova_compute[253538]: 2025-11-25 08:49:48.350 253542 DEBUG nova.network.neutron [req-357b6125-7503-40fd-b463-0a454c8918c2 req-eaa37c7e-a716-4126-a620-1c7af9a4bc55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updated VIF entry in instance network info cache for port 5b999504-81af-4e3d-9707-b0a72b902669. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:49:48 compute-0 nova_compute[253538]: 2025-11-25 08:49:48.351 253542 DEBUG nova.network.neutron [req-357b6125-7503-40fd-b463-0a454c8918c2 req-eaa37c7e-a716-4126-a620-1c7af9a4bc55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Updating instance_info_cache with network_info: [{"id": "5b999504-81af-4e3d-9707-b0a72b902669", "address": "fa:16:3e:cf:68:34", "network": {"id": "41ed78ca-e8a4-4daf-884b-6b7b763e272f", "bridge": "br-int", "label": "tempest-network-smoke--1240799795", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b999504-81", "ovs_interfaceid": "5b999504-81af-4e3d-9707-b0a72b902669", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:49:48 compute-0 nova_compute[253538]: 2025-11-25 08:49:48.369 253542 DEBUG oslo_concurrency.lockutils [req-357b6125-7503-40fd-b463-0a454c8918c2 req-eaa37c7e-a716-4126-a620-1c7af9a4bc55 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:49:48 compute-0 ceph-mon[75015]: pgmap v2054: 321 pgs: 321 active+clean; 171 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 531 KiB/s wr, 138 op/s
Nov 25 08:49:48 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/523222807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:49:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 154 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 120 op/s
Nov 25 08:49:50 compute-0 ceph-mon[75015]: pgmap v2055: 321 pgs: 321 active+clean; 154 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 120 op/s
Nov 25 08:49:50 compute-0 nova_compute[253538]: 2025-11-25 08:49:50.852 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 134 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 126 op/s
Nov 25 08:49:51 compute-0 ovn_controller[152859]: 2025-11-25T08:49:51Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:f1:de 10.100.0.12
Nov 25 08:49:51 compute-0 ovn_controller[152859]: 2025-11-25T08:49:51Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:f1:de 10.100.0.12
Nov 25 08:49:52 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 25 08:49:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:49:52 compute-0 nova_compute[253538]: 2025-11-25 08:49:52.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:52 compute-0 ceph-mon[75015]: pgmap v2056: 321 pgs: 321 active+clean; 134 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 126 op/s
Nov 25 08:49:53 compute-0 nova_compute[253538]: 2025-11-25 08:49:53.097 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060578.0957832, ec34a574-9c78-43d8-a65a-aa4052a5d452 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:49:53 compute-0 nova_compute[253538]: 2025-11-25 08:49:53.097 253542 INFO nova.compute.manager [-] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] VM Stopped (Lifecycle Event)
Nov 25 08:49:53 compute-0 nova_compute[253538]: 2025-11-25 08:49:53.116 253542 DEBUG nova.compute.manager [None req-4bad9868-c77d-4cc7-a328-40ab7f05430d - - - - - -] [instance: ec34a574-9c78-43d8-a65a-aa4052a5d452] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:49:53
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr']
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 143 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 961 KiB/s wr, 151 op/s
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:49:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:49:53 compute-0 ovn_controller[152859]: 2025-11-25T08:49:53Z|01066|binding|INFO|Releasing lport a1fb9b8f-24d1-400b-bfc7-6d0e30e1e254 from this chassis (sb_readonly=0)
Nov 25 08:49:54 compute-0 nova_compute[253538]: 2025-11-25 08:49:54.034 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:54 compute-0 ceph-mon[75015]: pgmap v2057: 321 pgs: 321 active+clean; 143 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 961 KiB/s wr, 151 op/s
Nov 25 08:49:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 164 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 871 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Nov 25 08:49:55 compute-0 nova_compute[253538]: 2025-11-25 08:49:55.856 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:56 compute-0 ceph-mon[75015]: pgmap v2058: 321 pgs: 321 active+clean; 164 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 871 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.556 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.557 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.558 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.558 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.559 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.559 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.580 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.583 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.599 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.600 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Image id 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e yields fingerprint ad982bd9427c86feb49d0b60fa1a5b2511227adc _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.600 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] image 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e at (/var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc): checking
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.600 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] image 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e at (/var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.603 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.604 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.604 253542 WARNING nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.604 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Active base files: /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.604 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Removable base files: /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.605 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.605 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.605 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.606 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 25 08:49:57 compute-0 nova_compute[253538]: 2025-11-25 08:49:57.606 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Nov 25 08:49:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 167 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 08:49:58 compute-0 nova_compute[253538]: 2025-11-25 08:49:58.575 253542 INFO nova.compute.manager [None req-71708210-6bc6-4ee9-ae8e-307a0bafd9ab 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Get console output
Nov 25 08:49:58 compute-0 nova_compute[253538]: 2025-11-25 08:49:58.581 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:49:58 compute-0 nova_compute[253538]: 2025-11-25 08:49:58.860 253542 INFO nova.compute.manager [None req-27801bca-78f3-4b87-aa11-3def6d8566fe 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Pausing
Nov 25 08:49:58 compute-0 nova_compute[253538]: 2025-11-25 08:49:58.861 253542 DEBUG nova.objects.instance [None req-27801bca-78f3-4b87-aa11-3def6d8566fe 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'flavor' on Instance uuid a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:49:58 compute-0 nova_compute[253538]: 2025-11-25 08:49:58.884 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060598.8844662, a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:49:58 compute-0 nova_compute[253538]: 2025-11-25 08:49:58.884 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] VM Paused (Lifecycle Event)
Nov 25 08:49:58 compute-0 nova_compute[253538]: 2025-11-25 08:49:58.886 253542 DEBUG nova.compute.manager [None req-27801bca-78f3-4b87-aa11-3def6d8566fe 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:49:58 compute-0 ceph-mon[75015]: pgmap v2059: 321 pgs: 321 active+clean; 167 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 08:49:58 compute-0 nova_compute[253538]: 2025-11-25 08:49:58.907 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:49:58 compute-0 nova_compute[253538]: 2025-11-25 08:49:58.911 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:49:58 compute-0 nova_compute[253538]: 2025-11-25 08:49:58.933 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 25 08:49:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 167 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Nov 25 08:50:00 compute-0 podman[363742]: 2025-11-25 08:50:00.819059287 +0000 UTC m=+0.073406437 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:50:00 compute-0 nova_compute[253538]: 2025-11-25 08:50:00.827 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060585.8265743, fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:50:00 compute-0 nova_compute[253538]: 2025-11-25 08:50:00.828 253542 INFO nova.compute.manager [-] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] VM Stopped (Lifecycle Event)
Nov 25 08:50:00 compute-0 nova_compute[253538]: 2025-11-25 08:50:00.844 253542 DEBUG nova.compute.manager [None req-1e14336a-ee55-467d-917f-79e8a6ac8e11 - - - - - -] [instance: fc289da4-ce2a-46ee-bf8c-bffa1e0e75fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:50:00 compute-0 nova_compute[253538]: 2025-11-25 08:50:00.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:00 compute-0 ceph-mon[75015]: pgmap v2060: 321 pgs: 321 active+clean; 167 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Nov 25 08:50:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Nov 25 08:50:01 compute-0 nova_compute[253538]: 2025-11-25 08:50:01.809 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:50:02 compute-0 nova_compute[253538]: 2025-11-25 08:50:02.584 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:02 compute-0 ceph-mon[75015]: pgmap v2061: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Nov 25 08:50:03 compute-0 nova_compute[253538]: 2025-11-25 08:50:03.414 253542 INFO nova.compute.manager [None req-b27c68a6-1b9d-474e-b49b-1ba52da493d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Get console output
Nov 25 08:50:03 compute-0 nova_compute[253538]: 2025-11-25 08:50:03.421 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:50:03 compute-0 nova_compute[253538]: 2025-11-25 08:50:03.562 253542 INFO nova.compute.manager [None req-dd55d156-6abd-477a-94bf-33b49f6f470d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Unpausing
Nov 25 08:50:03 compute-0 nova_compute[253538]: 2025-11-25 08:50:03.563 253542 DEBUG nova.objects.instance [None req-dd55d156-6abd-477a-94bf-33b49f6f470d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'flavor' on Instance uuid a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:50:03 compute-0 nova_compute[253538]: 2025-11-25 08:50:03.590 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060603.5904915, a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:50:03 compute-0 nova_compute[253538]: 2025-11-25 08:50:03.591 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] VM Resumed (Lifecycle Event)
Nov 25 08:50:03 compute-0 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 08:50:03 compute-0 nova_compute[253538]: 2025-11-25 08:50:03.595 253542 DEBUG nova.virt.libvirt.guest [None req-dd55d156-6abd-477a-94bf-33b49f6f470d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 08:50:03 compute-0 nova_compute[253538]: 2025-11-25 08:50:03.595 253542 DEBUG nova.compute.manager [None req-dd55d156-6abd-477a-94bf-33b49f6f470d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:50:03 compute-0 nova_compute[253538]: 2025-11-25 08:50:03.611 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:50:03 compute-0 nova_compute[253538]: 2025-11-25 08:50:03.621 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:50:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 08:50:03 compute-0 podman[363762]: 2025-11-25 08:50:03.841680417 +0000 UTC m=+0.084434133 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:50:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:50:04 compute-0 nova_compute[253538]: 2025-11-25 08:50:04.467 253542 INFO nova.compute.manager [None req-172c20c4-70c0-4bdf-bb0f-0568f5238bef 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Get console output
Nov 25 08:50:04 compute-0 nova_compute[253538]: 2025-11-25 08:50:04.473 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:50:04 compute-0 ceph-mon[75015]: pgmap v2062: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 08:50:05 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 08:50:05 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.5 total, 600.0 interval
                                           Cumulative writes: 33K writes, 133K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s
                                           Cumulative WAL: 33K writes, 11K syncs, 2.90 writes per sync, written: 0.13 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5652 writes, 21K keys, 5652 commit groups, 1.0 writes per commit group, ingest: 22.96 MB, 0.04 MB/s
                                           Interval WAL: 5653 writes, 2226 syncs, 2.54 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.537 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.538 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.538 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.538 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.539 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.540 253542 INFO nova.compute.manager [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Terminating instance
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.541 253542 DEBUG nova.compute.manager [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:50:05 compute-0 kernel: tap67b278e0-03 (unregistering): left promiscuous mode
Nov 25 08:50:05 compute-0 NetworkManager[48915]: <info>  [1764060605.5969] device (tap67b278e0-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:05 compute-0 ovn_controller[152859]: 2025-11-25T08:50:05Z|01067|binding|INFO|Releasing lport 67b278e0-034e-4bb1-8cba-035ab2a72de3 from this chassis (sb_readonly=0)
Nov 25 08:50:05 compute-0 ovn_controller[152859]: 2025-11-25T08:50:05Z|01068|binding|INFO|Setting lport 67b278e0-034e-4bb1-8cba-035ab2a72de3 down in Southbound
Nov 25 08:50:05 compute-0 ovn_controller[152859]: 2025-11-25T08:50:05Z|01069|binding|INFO|Removing iface tap67b278e0-03 ovn-installed in OVS
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.612 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.619 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:f1:de 10.100.0.12'], port_security=['fa:16:3e:d5:f1:de 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76a697a7-7255-4dde-bfd3-a4f7c520b32e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddbaf93a-8581-4a98-b32a-d829e79ecbfd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=67b278e0-034e-4bb1-8cba-035ab2a72de3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:50:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.622 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 67b278e0-034e-4bb1-8cba-035ab2a72de3 in datapath 6cdb9ec3-6144-4767-9719-ddbf0a68bf7c unbound from our chassis
Nov 25 08:50:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.624 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6cdb9ec3-6144-4767-9719-ddbf0a68bf7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:50:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.626 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eb222281-3834-4a4b-bd86-7251812b7a98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.627 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c namespace which is not needed anymore
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.627 253542 DEBUG nova.compute.manager [req-d2b2b4fc-27b8-4001-9a72-23ead2519d37 req-4d65b1b9-1809-42b1-85ea-e16a5d1e46b1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-changed-67b278e0-034e-4bb1-8cba-035ab2a72de3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.628 253542 DEBUG nova.compute.manager [req-d2b2b4fc-27b8-4001-9a72-23ead2519d37 req-4d65b1b9-1809-42b1-85ea-e16a5d1e46b1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Refreshing instance network info cache due to event network-changed-67b278e0-034e-4bb1-8cba-035ab2a72de3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.628 253542 DEBUG oslo_concurrency.lockutils [req-d2b2b4fc-27b8-4001-9a72-23ead2519d37 req-4d65b1b9-1809-42b1-85ea-e16a5d1e46b1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.628 253542 DEBUG oslo_concurrency.lockutils [req-d2b2b4fc-27b8-4001-9a72-23ead2519d37 req-4d65b1b9-1809-42b1-85ea-e16a5d1e46b1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.629 253542 DEBUG nova.network.neutron [req-d2b2b4fc-27b8-4001-9a72-23ead2519d37 req-4d65b1b9-1809-42b1-85ea-e16a5d1e46b1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Refreshing network info cache for port 67b278e0-034e-4bb1-8cba-035ab2a72de3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.646 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Nov 25 08:50:05 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Nov 25 08:50:05 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006c.scope: Consumed 13.733s CPU time.
Nov 25 08:50:05 compute-0 systemd-machined[215790]: Machine qemu-134-instance-0000006c terminated.
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.787 253542 INFO nova.virt.libvirt.driver [-] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Instance destroyed successfully.
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.788 253542 DEBUG nova.objects.instance [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'resources' on Instance uuid a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:50:05 compute-0 neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c[363273]: [NOTICE]   (363277) : haproxy version is 2.8.14-c23fe91
Nov 25 08:50:05 compute-0 neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c[363273]: [NOTICE]   (363277) : path to executable is /usr/sbin/haproxy
Nov 25 08:50:05 compute-0 neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c[363273]: [WARNING]  (363277) : Exiting Master process...
Nov 25 08:50:05 compute-0 neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c[363273]: [WARNING]  (363277) : Exiting Master process...
Nov 25 08:50:05 compute-0 neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c[363273]: [ALERT]    (363277) : Current worker (363279) exited with code 143 (Terminated)
Nov 25 08:50:05 compute-0 neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c[363273]: [WARNING]  (363277) : All workers exited. Exiting... (0)
Nov 25 08:50:05 compute-0 systemd[1]: libpod-c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b.scope: Deactivated successfully.
Nov 25 08:50:05 compute-0 podman[363808]: 2025-11-25 08:50:05.822653089 +0000 UTC m=+0.067713794 container died c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.849 253542 DEBUG nova.virt.libvirt.vif [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:49:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2097673480',display_name='tempest-TestNetworkAdvancedServerOps-server-2097673480',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2097673480',id=108,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8CJ+LAazH0nh/fs57lUBduvcorPeFDtVdwZY7U/GDNTdvdOvwS2k2O3rjC5dikEP5slLAsdzOE76Bw4/4L12X0ArhaClfawfYB19breOk8NW05uifXWs22TjYOgG1XfA==',key_name='tempest-TestNetworkAdvancedServerOps-659140204',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:49:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-140u7iq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:50:03Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.849 253542 DEBUG nova.network.os_vif_util [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.852 253542 DEBUG nova.network.os_vif_util [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:f1:de,bridge_name='br-int',has_traffic_filtering=True,id=67b278e0-034e-4bb1-8cba-035ab2a72de3,network=Network(6cdb9ec3-6144-4767-9719-ddbf0a68bf7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67b278e0-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.853 253542 DEBUG os_vif [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:f1:de,bridge_name='br-int',has_traffic_filtering=True,id=67b278e0-034e-4bb1-8cba-035ab2a72de3,network=Network(6cdb9ec3-6144-4767-9719-ddbf0a68bf7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67b278e0-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:50:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b-userdata-shm.mount: Deactivated successfully.
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.858 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67b278e0-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-51884f3ed72ef29fe201d1c8b86187d773b15028358389bb3a90b47cd47948b0-merged.mount: Deactivated successfully.
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.864 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.867 253542 INFO os_vif [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:f1:de,bridge_name='br-int',has_traffic_filtering=True,id=67b278e0-034e-4bb1-8cba-035ab2a72de3,network=Network(6cdb9ec3-6144-4767-9719-ddbf0a68bf7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67b278e0-03')
Nov 25 08:50:05 compute-0 podman[363808]: 2025-11-25 08:50:05.872048403 +0000 UTC m=+0.117109118 container cleanup c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:50:05 compute-0 systemd[1]: libpod-conmon-c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b.scope: Deactivated successfully.
Nov 25 08:50:05 compute-0 podman[363855]: 2025-11-25 08:50:05.945339045 +0000 UTC m=+0.047748719 container remove c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:50:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.951 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2d68fd-ca2e-49b2-a496-841191866c60]: (4, ('Tue Nov 25 08:50:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c (c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b)\nc76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b\nTue Nov 25 08:50:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c (c76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b)\nc76772cc6aabf600f4ec379b74bcf99747fba8c7f3ee0a94c71e948710ff658b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.953 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7af7179f-8b08-48ac-8382-ca297ac34a64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.955 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6cdb9ec3-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.956 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:05 compute-0 kernel: tap6cdb9ec3-60: left promiscuous mode
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.980 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:05 compute-0 nova_compute[253538]: 2025-11-25 08:50:05.981 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.984 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d312f268-f719-4a1b-8723-d3cc012f821f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:05.999 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[acc9bf1b-e7a8-448a-afec-d94ce86194a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:06.001 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[75754567-253a-4667-bcc6-3acdb3da1607]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:06.023 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc16e26-cba5-4437-a53e-ed427b71493f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593368, 'reachable_time': 21018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363881, 'error': None, 'target': 'ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d6cdb9ec3\x2d6144\x2d4767\x2d9719\x2dddbf0a68bf7c.mount: Deactivated successfully.
Nov 25 08:50:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:06.027 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6cdb9ec3-6144-4767-9719-ddbf0a68bf7c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:50:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:06.027 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[7f302018-54d9-41fd-b611-f88338301a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:06 compute-0 nova_compute[253538]: 2025-11-25 08:50:06.207 253542 INFO nova.virt.libvirt.driver [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Deleting instance files /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_del
Nov 25 08:50:06 compute-0 nova_compute[253538]: 2025-11-25 08:50:06.208 253542 INFO nova.virt.libvirt.driver [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Deletion of /var/lib/nova/instances/a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5_del complete
Nov 25 08:50:06 compute-0 nova_compute[253538]: 2025-11-25 08:50:06.282 253542 INFO nova.compute.manager [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Took 0.74 seconds to destroy the instance on the hypervisor.
Nov 25 08:50:06 compute-0 nova_compute[253538]: 2025-11-25 08:50:06.282 253542 DEBUG oslo.service.loopingcall [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:50:06 compute-0 nova_compute[253538]: 2025-11-25 08:50:06.283 253542 DEBUG nova.compute.manager [-] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:50:06 compute-0 nova_compute[253538]: 2025-11-25 08:50:06.283 253542 DEBUG nova.network.neutron [-] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:50:06 compute-0 ceph-mon[75015]: pgmap v2063: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Nov 25 08:50:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:50:07 compute-0 nova_compute[253538]: 2025-11-25 08:50:07.585 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 150 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 18 KiB/s wr, 22 op/s
Nov 25 08:50:08 compute-0 nova_compute[253538]: 2025-11-25 08:50:08.465 253542 DEBUG nova.compute.manager [req-28f7df6e-0601-4a05-bd52-b50139969290 req-7596ecd4-7607-4db3-93bd-14d4d9ac77f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-vif-unplugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:08 compute-0 nova_compute[253538]: 2025-11-25 08:50:08.466 253542 DEBUG oslo_concurrency.lockutils [req-28f7df6e-0601-4a05-bd52-b50139969290 req-7596ecd4-7607-4db3-93bd-14d4d9ac77f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:08 compute-0 nova_compute[253538]: 2025-11-25 08:50:08.466 253542 DEBUG oslo_concurrency.lockutils [req-28f7df6e-0601-4a05-bd52-b50139969290 req-7596ecd4-7607-4db3-93bd-14d4d9ac77f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:08 compute-0 nova_compute[253538]: 2025-11-25 08:50:08.466 253542 DEBUG oslo_concurrency.lockutils [req-28f7df6e-0601-4a05-bd52-b50139969290 req-7596ecd4-7607-4db3-93bd-14d4d9ac77f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:08 compute-0 nova_compute[253538]: 2025-11-25 08:50:08.466 253542 DEBUG nova.compute.manager [req-28f7df6e-0601-4a05-bd52-b50139969290 req-7596ecd4-7607-4db3-93bd-14d4d9ac77f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] No waiting events found dispatching network-vif-unplugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:50:08 compute-0 nova_compute[253538]: 2025-11-25 08:50:08.466 253542 DEBUG nova.compute.manager [req-28f7df6e-0601-4a05-bd52-b50139969290 req-7596ecd4-7607-4db3-93bd-14d4d9ac77f6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-vif-unplugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:50:08 compute-0 podman[363883]: 2025-11-25 08:50:08.877210717 +0000 UTC m=+0.099730713 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:50:08 compute-0 ceph-mon[75015]: pgmap v2064: 321 pgs: 321 active+clean; 150 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 18 KiB/s wr, 22 op/s
Nov 25 08:50:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 129 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 13 KiB/s wr, 18 op/s
Nov 25 08:50:09 compute-0 nova_compute[253538]: 2025-11-25 08:50:09.894 253542 DEBUG nova.network.neutron [-] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:50:09 compute-0 nova_compute[253538]: 2025-11-25 08:50:09.932 253542 INFO nova.compute.manager [-] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Took 3.65 seconds to deallocate network for instance.
Nov 25 08:50:09 compute-0 nova_compute[253538]: 2025-11-25 08:50:09.996 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:09 compute-0 nova_compute[253538]: 2025-11-25 08:50:09.996 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.085 253542 DEBUG oslo_concurrency.processutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.187 253542 DEBUG nova.network.neutron [req-d2b2b4fc-27b8-4001-9a72-23ead2519d37 req-4d65b1b9-1809-42b1-85ea-e16a5d1e46b1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updated VIF entry in instance network info cache for port 67b278e0-034e-4bb1-8cba-035ab2a72de3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.188 253542 DEBUG nova.network.neutron [req-d2b2b4fc-27b8-4001-9a72-23ead2519d37 req-4d65b1b9-1809-42b1-85ea-e16a5d1e46b1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updating instance_info_cache with network_info: [{"id": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "address": "fa:16:3e:d5:f1:de", "network": {"id": "6cdb9ec3-6144-4767-9719-ddbf0a68bf7c", "bridge": "br-int", "label": "tempest-network-smoke--1425209404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67b278e0-03", "ovs_interfaceid": "67b278e0-034e-4bb1-8cba-035ab2a72de3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.204 253542 DEBUG oslo_concurrency.lockutils [req-d2b2b4fc-27b8-4001-9a72-23ead2519d37 req-4d65b1b9-1809-42b1-85ea-e16a5d1e46b1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.215 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.216 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.234 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.291 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:50:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1851252405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.557 253542 DEBUG oslo_concurrency.processutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.565 253542 DEBUG nova.compute.provider_tree [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.583 253542 DEBUG nova.scheduler.client.report [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.606 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.611 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.620 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.620 253542 INFO nova.compute.claims [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.643 253542 INFO nova.scheduler.client.report [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Deleted allocations for instance a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.684 253542 DEBUG nova.compute.manager [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.685 253542 DEBUG oslo_concurrency.lockutils [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.685 253542 DEBUG oslo_concurrency.lockutils [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.686 253542 DEBUG oslo_concurrency.lockutils [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.686 253542 DEBUG nova.compute.manager [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] No waiting events found dispatching network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.687 253542 WARNING nova.compute.manager [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received unexpected event network-vif-plugged-67b278e0-034e-4bb1-8cba-035ab2a72de3 for instance with vm_state deleted and task_state None.
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.687 253542 DEBUG nova.compute.manager [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Received event network-vif-deleted-67b278e0-034e-4bb1-8cba-035ab2a72de3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.688 253542 INFO nova.compute.manager [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Neutron deleted interface 67b278e0-034e-4bb1-8cba-035ab2a72de3; detaching it from the instance and deleting it from the info cache
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.688 253542 DEBUG nova.network.neutron [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.724 253542 DEBUG nova.compute.manager [req-3f0a1bdc-5fdb-4348-b360-538e3eec4fd6 req-7fc8f680-dfab-49f3-b2b2-6656821c3b3c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Detach interface failed, port_id=67b278e0-034e-4bb1-8cba-035ab2a72de3, reason: Instance a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.747 253542 DEBUG oslo_concurrency.lockutils [None req-2e8b81d8-68aa-4e66-8a71-fc27daf5eff3 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.761 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:10 compute-0 nova_compute[253538]: 2025-11-25 08:50:10.864 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:10 compute-0 ceph-mon[75015]: pgmap v2065: 321 pgs: 321 active+clean; 129 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 13 KiB/s wr, 18 op/s
Nov 25 08:50:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1851252405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:50:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:50:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3301658766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.254 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.260 253542 DEBUG nova.compute.provider_tree [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.277 253542 DEBUG nova.scheduler.client.report [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.302 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.303 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.357 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.358 253542 DEBUG nova.network.neutron [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.393 253542 INFO nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.407 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.507 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.509 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.510 253542 INFO nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Creating image(s)
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.544 253542 DEBUG nova.storage.rbd_utils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.568 253542 DEBUG nova.storage.rbd_utils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.596 253542 DEBUG nova.storage.rbd_utils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.599 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:11 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 08:50:11 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.4 total, 600.0 interval
                                           Cumulative writes: 32K writes, 126K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 32K writes, 11K syncs, 2.89 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5256 writes, 19K keys, 5256 commit groups, 1.0 writes per commit group, ingest: 17.05 MB, 0.03 MB/s
                                           Interval WAL: 5256 writes, 2172 syncs, 2.42 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.637 253542 DEBUG nova.policy [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:50:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.683 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.684 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.685 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.686 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.729 253542 DEBUG nova.storage.rbd_utils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:50:11 compute-0 nova_compute[253538]: 2025-11-25 08:50:11.733 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:12 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3301658766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:50:12 compute-0 nova_compute[253538]: 2025-11-25 08:50:12.066 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:12 compute-0 nova_compute[253538]: 2025-11-25 08:50:12.124 253542 DEBUG nova.storage.rbd_utils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:50:12 compute-0 nova_compute[253538]: 2025-11-25 08:50:12.208 253542 DEBUG nova.objects.instance [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 49b75125-0ca4-438d-9f2a-1d130a6b5632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:50:12 compute-0 nova_compute[253538]: 2025-11-25 08:50:12.222 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:50:12 compute-0 nova_compute[253538]: 2025-11-25 08:50:12.222 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Ensure instance console log exists: /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:50:12 compute-0 nova_compute[253538]: 2025-11-25 08:50:12.223 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:12 compute-0 nova_compute[253538]: 2025-11-25 08:50:12.223 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:12 compute-0 nova_compute[253538]: 2025-11-25 08:50:12.224 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:12 compute-0 nova_compute[253538]: 2025-11-25 08:50:12.306 253542 DEBUG nova.network.neutron [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Successfully created port: d0945383-2a0d-4019-9b60-eea96d667c69 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:50:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:50:12 compute-0 nova_compute[253538]: 2025-11-25 08:50:12.622 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:13 compute-0 ceph-mon[75015]: pgmap v2066: 321 pgs: 321 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Nov 25 08:50:13 compute-0 nova_compute[253538]: 2025-11-25 08:50:13.246 253542 DEBUG nova.network.neutron [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Successfully updated port: d0945383-2a0d-4019-9b60-eea96d667c69 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:50:13 compute-0 nova_compute[253538]: 2025-11-25 08:50:13.265 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:50:13 compute-0 nova_compute[253538]: 2025-11-25 08:50:13.266 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:50:13 compute-0 nova_compute[253538]: 2025-11-25 08:50:13.266 253542 DEBUG nova.network.neutron [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:50:13 compute-0 nova_compute[253538]: 2025-11-25 08:50:13.411 253542 DEBUG nova.compute.manager [req-fd6bbb25-595a-4447-9a58-bb21575e279b req-2b7172cc-0d68-40df-b004-5890e26a3c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-changed-d0945383-2a0d-4019-9b60-eea96d667c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:13 compute-0 nova_compute[253538]: 2025-11-25 08:50:13.412 253542 DEBUG nova.compute.manager [req-fd6bbb25-595a-4447-9a58-bb21575e279b req-2b7172cc-0d68-40df-b004-5890e26a3c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Refreshing instance network info cache due to event network-changed-d0945383-2a0d-4019-9b60-eea96d667c69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:50:13 compute-0 nova_compute[253538]: 2025-11-25 08:50:13.413 253542 DEBUG oslo_concurrency.lockutils [req-fd6bbb25-595a-4447-9a58-bb21575e279b req-2b7172cc-0d68-40df-b004-5890e26a3c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:50:13 compute-0 nova_compute[253538]: 2025-11-25 08:50:13.511 253542 DEBUG nova.network.neutron [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:50:13 compute-0 nova_compute[253538]: 2025-11-25 08:50:13.577 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 94 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 205 KiB/s wr, 28 op/s
Nov 25 08:50:13 compute-0 nova_compute[253538]: 2025-11-25 08:50:13.793 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.711 253542 DEBUG nova.network.neutron [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.738 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.738 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Instance network_info: |[{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.739 253542 DEBUG oslo_concurrency.lockutils [req-fd6bbb25-595a-4447-9a58-bb21575e279b req-2b7172cc-0d68-40df-b004-5890e26a3c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.739 253542 DEBUG nova.network.neutron [req-fd6bbb25-595a-4447-9a58-bb21575e279b req-2b7172cc-0d68-40df-b004-5890e26a3c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Refreshing network info cache for port d0945383-2a0d-4019-9b60-eea96d667c69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.745 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Start _get_guest_xml network_info=[{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.751 253542 WARNING nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.761 253542 DEBUG nova.virt.libvirt.host [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.762 253542 DEBUG nova.virt.libvirt.host [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.767 253542 DEBUG nova.virt.libvirt.host [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.767 253542 DEBUG nova.virt.libvirt.host [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.768 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.769 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.769 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.770 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.770 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.771 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.771 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.772 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.772 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.772 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.773 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.773 253542 DEBUG nova.virt.hardware [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:50:14 compute-0 nova_compute[253538]: 2025-11-25 08:50:14.778 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:15 compute-0 ceph-mon[75015]: pgmap v2067: 321 pgs: 321 active+clean; 94 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 205 KiB/s wr, 28 op/s
Nov 25 08:50:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:50:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1706721461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.326 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.357 253542 DEBUG nova.storage.rbd_utils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.360 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 110 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 679 KiB/s wr, 42 op/s
Nov 25 08:50:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:50:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3376361003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.823 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.826 253542 DEBUG nova.virt.libvirt.vif [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:50:11Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.827 253542 DEBUG nova.network.os_vif_util [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.829 253542 DEBUG nova.network.os_vif_util [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:13:51,bridge_name='br-int',has_traffic_filtering=True,id=d0945383-2a0d-4019-9b60-eea96d667c69,network=Network(c833a599-5a18-44d2-82ad-b16f7476c220),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0945383-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.831 253542 DEBUG nova.objects.instance [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49b75125-0ca4-438d-9f2a-1d130a6b5632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.852 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:50:15 compute-0 nova_compute[253538]:   <uuid>49b75125-0ca4-438d-9f2a-1d130a6b5632</uuid>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   <name>instance-0000006d</name>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkBasicOps-server-1448452289</nova:name>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:50:14</nova:creationTime>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:50:15 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:50:15 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:50:15 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:50:15 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:50:15 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:50:15 compute-0 nova_compute[253538]:         <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:50:15 compute-0 nova_compute[253538]:         <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:50:15 compute-0 nova_compute[253538]:         <nova:port uuid="d0945383-2a0d-4019-9b60-eea96d667c69">
Nov 25 08:50:15 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <system>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <entry name="serial">49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <entry name="uuid">49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     </system>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   <os>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   </os>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   <features>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   </features>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk">
Nov 25 08:50:15 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       </source>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:50:15 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config">
Nov 25 08:50:15 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       </source>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:50:15 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:88:13:51"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <target dev="tapd0945383-2a"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log" append="off"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <video>
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     </video>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:50:15 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:50:15 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:50:15 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:50:15 compute-0 nova_compute[253538]: </domain>
Nov 25 08:50:15 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.854 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Preparing to wait for external event network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.855 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.856 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.856 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.857 253542 DEBUG nova.virt.libvirt.vif [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:50:11Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.858 253542 DEBUG nova.network.os_vif_util [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.859 253542 DEBUG nova.network.os_vif_util [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:13:51,bridge_name='br-int',has_traffic_filtering=True,id=d0945383-2a0d-4019-9b60-eea96d667c69,network=Network(c833a599-5a18-44d2-82ad-b16f7476c220),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0945383-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.860 253542 DEBUG os_vif [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:13:51,bridge_name='br-int',has_traffic_filtering=True,id=d0945383-2a0d-4019-9b60-eea96d667c69,network=Network(c833a599-5a18-44d2-82ad-b16f7476c220),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0945383-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.862 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.863 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.868 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0945383-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.869 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd0945383-2a, col_values=(('external_ids', {'iface-id': 'd0945383-2a0d-4019-9b60-eea96d667c69', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:13:51', 'vm-uuid': '49b75125-0ca4-438d-9f2a-1d130a6b5632'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.871 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:15 compute-0 NetworkManager[48915]: <info>  [1764060615.8730] manager: (tapd0945383-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.874 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.879 253542 INFO os_vif [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:13:51,bridge_name='br-int',has_traffic_filtering=True,id=d0945383-2a0d-4019-9b60-eea96d667c69,network=Network(c833a599-5a18-44d2-82ad-b16f7476c220),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0945383-2a')
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.961 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.962 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.962 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:88:13:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.963 253542 INFO nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Using config drive
Nov 25 08:50:15 compute-0 nova_compute[253538]: 2025-11-25 08:50:15.989 253542 DEBUG nova.storage.rbd_utils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:50:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1706721461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:50:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3376361003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:50:16 compute-0 nova_compute[253538]: 2025-11-25 08:50:16.754 253542 DEBUG nova.network.neutron [req-fd6bbb25-595a-4447-9a58-bb21575e279b req-2b7172cc-0d68-40df-b004-5890e26a3c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updated VIF entry in instance network info cache for port d0945383-2a0d-4019-9b60-eea96d667c69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:50:16 compute-0 nova_compute[253538]: 2025-11-25 08:50:16.755 253542 DEBUG nova.network.neutron [req-fd6bbb25-595a-4447-9a58-bb21575e279b req-2b7172cc-0d68-40df-b004-5890e26a3c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:50:16 compute-0 nova_compute[253538]: 2025-11-25 08:50:16.773 253542 DEBUG oslo_concurrency.lockutils [req-fd6bbb25-595a-4447-9a58-bb21575e279b req-2b7172cc-0d68-40df-b004-5890e26a3c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:50:16 compute-0 nova_compute[253538]: 2025-11-25 08:50:16.780 253542 INFO nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Creating config drive at /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/disk.config
Nov 25 08:50:16 compute-0 nova_compute[253538]: 2025-11-25 08:50:16.789 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4r65ayrp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:16 compute-0 nova_compute[253538]: 2025-11-25 08:50:16.938 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4r65ayrp" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:16 compute-0 nova_compute[253538]: 2025-11-25 08:50:16.963 253542 DEBUG nova.storage.rbd_utils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:50:16 compute-0 nova_compute[253538]: 2025-11-25 08:50:16.966 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/disk.config 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:17 compute-0 ceph-mon[75015]: pgmap v2068: 321 pgs: 321 active+clean; 110 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 679 KiB/s wr, 42 op/s
Nov 25 08:50:17 compute-0 nova_compute[253538]: 2025-11-25 08:50:17.133 253542 DEBUG oslo_concurrency.processutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/disk.config 49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:17 compute-0 nova_compute[253538]: 2025-11-25 08:50:17.134 253542 INFO nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Deleting local config drive /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/disk.config because it was imported into RBD.
Nov 25 08:50:17 compute-0 kernel: tapd0945383-2a: entered promiscuous mode
Nov 25 08:50:17 compute-0 NetworkManager[48915]: <info>  [1764060617.1781] manager: (tapd0945383-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/435)
Nov 25 08:50:17 compute-0 nova_compute[253538]: 2025-11-25 08:50:17.178 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:17 compute-0 ovn_controller[152859]: 2025-11-25T08:50:17Z|01070|binding|INFO|Claiming lport d0945383-2a0d-4019-9b60-eea96d667c69 for this chassis.
Nov 25 08:50:17 compute-0 ovn_controller[152859]: 2025-11-25T08:50:17Z|01071|binding|INFO|d0945383-2a0d-4019-9b60-eea96d667c69: Claiming fa:16:3e:88:13:51 10.100.0.3
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.192 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:13:51 10.100.0.3'], port_security=['fa:16:3e:88:13:51 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '49b75125-0ca4-438d-9f2a-1d130a6b5632', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c833a599-5a18-44d2-82ad-b16f7476c220', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4dacd795-ee8f-4895-b3fe-aaa7865132b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d0e444d-0863-4949-9e8d-d9b0bfd89ac2, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d0945383-2a0d-4019-9b60-eea96d667c69) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.194 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d0945383-2a0d-4019-9b60-eea96d667c69 in datapath c833a599-5a18-44d2-82ad-b16f7476c220 bound to our chassis
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.195 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c833a599-5a18-44d2-82ad-b16f7476c220
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.207 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[19c94124-6b33-41a3-96f0-d1d6013d4797]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.208 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc833a599-51 in ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:50:17 compute-0 systemd-machined[215790]: New machine qemu-135-instance-0000006d.
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.209 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc833a599-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.210 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[93b3a383-147c-406d-9db9-bc8868d172ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.211 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fad87e76-9cd7-4cbb-8306-e0104b8cfa75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.221 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[799608cb-47db-4fcc-bcee-8c4f823303e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:17 compute-0 nova_compute[253538]: 2025-11-25 08:50:17.239 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:17 compute-0 systemd[1]: Started Virtual Machine qemu-135-instance-0000006d.
Nov 25 08:50:17 compute-0 nova_compute[253538]: 2025-11-25 08:50:17.244 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.244 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cb7225d4-ef3e-4315-a01a-98ef8c6ba0e2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:17 compute-0 ovn_controller[152859]: 2025-11-25T08:50:17Z|01072|binding|INFO|Setting lport d0945383-2a0d-4019-9b60-eea96d667c69 ovn-installed in OVS
Nov 25 08:50:17 compute-0 ovn_controller[152859]: 2025-11-25T08:50:17Z|01073|binding|INFO|Setting lport d0945383-2a0d-4019-9b60-eea96d667c69 up in Southbound
Nov 25 08:50:17 compute-0 systemd-udevd[364257]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:50:17 compute-0 nova_compute[253538]: 2025-11-25 08:50:17.249 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:17 compute-0 NetworkManager[48915]: <info>  [1764060617.2649] device (tapd0945383-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:50:17 compute-0 NetworkManager[48915]: <info>  [1764060617.2659] device (tapd0945383-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.278 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[874c7dd1-8a9d-4c97-aee1-942388ac35c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:17 compute-0 systemd-udevd[364261]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.284 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0eaa80b4-0c19-4c82-82df-2f25bd23a0b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:17 compute-0 NetworkManager[48915]: <info>  [1764060617.2850] manager: (tapc833a599-50): new Veth device (/org/freedesktop/NetworkManager/Devices/436)
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.338 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c7fe936c-e930-435c-9a8c-875868371b7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.340 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[488385dc-2219-4f76-9f82-f94d3eea6378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:17 compute-0 NetworkManager[48915]: <info>  [1764060617.3636] device (tapc833a599-50): carrier: link connected
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.370 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2f1cd1-83a0-450a-b98a-6ebf0c8ff12f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.385 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc1c043-54ae-4e81-83a5-7091845ac92f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc833a599-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:30:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597196, 'reachable_time': 41066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364287, 'error': None, 'target': 'ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.401 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d3325b6d-6aa7-4d26-8fa8-6a090995fdfc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaa:300d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597196, 'tstamp': 597196}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364288, 'error': None, 'target': 'ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.415 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc7957b-e771-48f8-a4ac-6bf691d883d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc833a599-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:30:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597196, 'reachable_time': 41066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364289, 'error': None, 'target': 'ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.440 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[96fe0728-ca14-4285-ab01-b74618586956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.498 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[320f2e2a-0aae-46f3-83e9-65a9d623cf96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.499 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc833a599-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.500 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.500 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc833a599-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:17 compute-0 nova_compute[253538]: 2025-11-25 08:50:17.509 253542 DEBUG nova.compute.manager [req-53c7a8bf-825a-484c-8260-3d6757176aef req-48892bfa-b819-49b3-9564-46d2f0f54c91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:17 compute-0 nova_compute[253538]: 2025-11-25 08:50:17.509 253542 DEBUG oslo_concurrency.lockutils [req-53c7a8bf-825a-484c-8260-3d6757176aef req-48892bfa-b819-49b3-9564-46d2f0f54c91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:17 compute-0 nova_compute[253538]: 2025-11-25 08:50:17.510 253542 DEBUG oslo_concurrency.lockutils [req-53c7a8bf-825a-484c-8260-3d6757176aef req-48892bfa-b819-49b3-9564-46d2f0f54c91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:17 compute-0 nova_compute[253538]: 2025-11-25 08:50:17.510 253542 DEBUG oslo_concurrency.lockutils [req-53c7a8bf-825a-484c-8260-3d6757176aef req-48892bfa-b819-49b3-9564-46d2f0f54c91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:17 compute-0 nova_compute[253538]: 2025-11-25 08:50:17.510 253542 DEBUG nova.compute.manager [req-53c7a8bf-825a-484c-8260-3d6757176aef req-48892bfa-b819-49b3-9564-46d2f0f54c91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Processing event network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:50:17 compute-0 nova_compute[253538]: 2025-11-25 08:50:17.552 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:17 compute-0 NetworkManager[48915]: <info>  [1764060617.5528] manager: (tapc833a599-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Nov 25 08:50:17 compute-0 kernel: tapc833a599-50: entered promiscuous mode
Nov 25 08:50:17 compute-0 nova_compute[253538]: 2025-11-25 08:50:17.555 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.556 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc833a599-50, col_values=(('external_ids', {'iface-id': 'bf9779d5-46bc-415b-b1d2-7b9d4a76754d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:17 compute-0 nova_compute[253538]: 2025-11-25 08:50:17.557 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:17 compute-0 ovn_controller[152859]: 2025-11-25T08:50:17Z|01074|binding|INFO|Releasing lport bf9779d5-46bc-415b-b1d2-7b9d4a76754d from this chassis (sb_readonly=0)
Nov 25 08:50:17 compute-0 nova_compute[253538]: 2025-11-25 08:50:17.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.590 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c833a599-5a18-44d2-82ad-b16f7476c220.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c833a599-5a18-44d2-82ad-b16f7476c220.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.593 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5bfad831-9b77-447b-8c47-8f5cb6b9368f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.594 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-c833a599-5a18-44d2-82ad-b16f7476c220
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/c833a599-5a18-44d2-82ad-b16f7476c220.pid.haproxy
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID c833a599-5a18-44d2-82ad-b16f7476c220
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:50:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:17.596 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220', 'env', 'PROCESS_TAG=haproxy-c833a599-5a18-44d2-82ad-b16f7476c220', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c833a599-5a18-44d2-82ad-b16f7476c220.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:50:17 compute-0 nova_compute[253538]: 2025-11-25 08:50:17.624 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2069: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Nov 25 08:50:17 compute-0 podman[364319]: 2025-11-25 08:50:17.985891259 +0000 UTC m=+0.049321203 container create 56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 08:50:18 compute-0 systemd[1]: Started libpod-conmon-56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a.scope.
Nov 25 08:50:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c75c05cfa6c1a51f5b9e1bfff96754528c2b327c0c5c0ba20f6d03e2e0d2380a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:50:18 compute-0 podman[364319]: 2025-11-25 08:50:17.959248556 +0000 UTC m=+0.022678520 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:50:18 compute-0 podman[364319]: 2025-11-25 08:50:18.066552389 +0000 UTC m=+0.129982433 container init 56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 08:50:18 compute-0 podman[364319]: 2025-11-25 08:50:18.079753233 +0000 UTC m=+0.143183187 container start 56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:50:18 compute-0 neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220[364373]: [NOTICE]   (364379) : New worker (364382) forked
Nov 25 08:50:18 compute-0 neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220[364373]: [NOTICE]   (364379) : Loading success.
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.129 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060618.1290493, 49b75125-0ca4-438d-9f2a-1d130a6b5632 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.130 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] VM Started (Lifecycle Event)
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.132 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.137 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.140 253542 INFO nova.virt.libvirt.driver [-] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Instance spawned successfully.
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.140 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.145 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.148 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.156 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.157 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.157 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.157 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.158 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.158 253542 DEBUG nova.virt.libvirt.driver [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.163 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.163 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060618.1294785, 49b75125-0ca4-438d-9f2a-1d130a6b5632 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.163 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] VM Paused (Lifecycle Event)
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.183 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.187 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060618.1367126, 49b75125-0ca4-438d-9f2a-1d130a6b5632 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.188 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] VM Resumed (Lifecycle Event)
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.209 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.212 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.217 253542 INFO nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Took 6.71 seconds to spawn the instance on the hypervisor.
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.218 253542 DEBUG nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.226 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.286 253542 INFO nova.compute.manager [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Took 8.01 seconds to build instance.
Nov 25 08:50:18 compute-0 nova_compute[253538]: 2025-11-25 08:50:18.320 253542 DEBUG oslo_concurrency.lockutils [None req-ed0e43f2-f757-4d6e-bc6d-994891196834 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:19 compute-0 ceph-mon[75015]: pgmap v2069: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Nov 25 08:50:19 compute-0 nova_compute[253538]: 2025-11-25 08:50:19.638 253542 DEBUG nova.compute.manager [req-4f409768-a925-4b77-9f52-cd2d6e6bcd31 req-166c00ee-184a-4043-8637-b59c4d338922 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:19 compute-0 nova_compute[253538]: 2025-11-25 08:50:19.638 253542 DEBUG oslo_concurrency.lockutils [req-4f409768-a925-4b77-9f52-cd2d6e6bcd31 req-166c00ee-184a-4043-8637-b59c4d338922 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:19 compute-0 nova_compute[253538]: 2025-11-25 08:50:19.639 253542 DEBUG oslo_concurrency.lockutils [req-4f409768-a925-4b77-9f52-cd2d6e6bcd31 req-166c00ee-184a-4043-8637-b59c4d338922 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:19 compute-0 nova_compute[253538]: 2025-11-25 08:50:19.639 253542 DEBUG oslo_concurrency.lockutils [req-4f409768-a925-4b77-9f52-cd2d6e6bcd31 req-166c00ee-184a-4043-8637-b59c4d338922 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:19 compute-0 nova_compute[253538]: 2025-11-25 08:50:19.639 253542 DEBUG nova.compute.manager [req-4f409768-a925-4b77-9f52-cd2d6e6bcd31 req-166c00ee-184a-4043-8637-b59c4d338922 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] No waiting events found dispatching network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:50:19 compute-0 nova_compute[253538]: 2025-11-25 08:50:19.639 253542 WARNING nova.compute.manager [req-4f409768-a925-4b77-9f52-cd2d6e6bcd31 req-166c00ee-184a-4043-8637-b59c4d338922 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received unexpected event network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 for instance with vm_state active and task_state None.
Nov 25 08:50:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 518 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Nov 25 08:50:20 compute-0 nova_compute[253538]: 2025-11-25 08:50:20.786 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060605.7854335, a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:50:20 compute-0 nova_compute[253538]: 2025-11-25 08:50:20.787 253542 INFO nova.compute.manager [-] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] VM Stopped (Lifecycle Event)
Nov 25 08:50:20 compute-0 nova_compute[253538]: 2025-11-25 08:50:20.805 253542 DEBUG nova.compute.manager [None req-4683676e-c4e9-40e8-b1e5-0601c1bf2528 - - - - - -] [instance: a3e97a6a-f96c-46ee-ba5a-2729f07dd5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:50:20 compute-0 nova_compute[253538]: 2025-11-25 08:50:20.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:21 compute-0 ceph-mon[75015]: pgmap v2070: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 518 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Nov 25 08:50:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 517 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 08:50:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 08:50:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3602.4 total, 600.0 interval
                                           Cumulative writes: 26K writes, 102K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 26K writes, 8894 syncs, 2.94 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3508 writes, 13K keys, 3508 commit groups, 1.0 writes per commit group, ingest: 13.36 MB, 0.02 MB/s
                                           Interval WAL: 3508 writes, 1387 syncs, 2.53 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 08:50:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:50:22 compute-0 nova_compute[253538]: 2025-11-25 08:50:22.625 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:22 compute-0 nova_compute[253538]: 2025-11-25 08:50:22.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:22 compute-0 NetworkManager[48915]: <info>  [1764060622.6693] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/438)
Nov 25 08:50:22 compute-0 NetworkManager[48915]: <info>  [1764060622.6709] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/439)
Nov 25 08:50:22 compute-0 nova_compute[253538]: 2025-11-25 08:50:22.755 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:22 compute-0 ovn_controller[152859]: 2025-11-25T08:50:22Z|01075|binding|INFO|Releasing lport bf9779d5-46bc-415b-b1d2-7b9d4a76754d from this chassis (sb_readonly=0)
Nov 25 08:50:22 compute-0 nova_compute[253538]: 2025-11-25 08:50:22.763 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:23 compute-0 ceph-mon[75015]: pgmap v2071: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 517 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 08:50:23 compute-0 nova_compute[253538]: 2025-11-25 08:50:23.306 253542 DEBUG nova.compute.manager [req-13d90790-992d-4613-a97d-5848e5c1b970 req-ac34986c-0ef4-477c-a9e8-4ea89ac7b0f4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-changed-d0945383-2a0d-4019-9b60-eea96d667c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:23 compute-0 nova_compute[253538]: 2025-11-25 08:50:23.307 253542 DEBUG nova.compute.manager [req-13d90790-992d-4613-a97d-5848e5c1b970 req-ac34986c-0ef4-477c-a9e8-4ea89ac7b0f4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Refreshing instance network info cache due to event network-changed-d0945383-2a0d-4019-9b60-eea96d667c69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:50:23 compute-0 nova_compute[253538]: 2025-11-25 08:50:23.307 253542 DEBUG oslo_concurrency.lockutils [req-13d90790-992d-4613-a97d-5848e5c1b970 req-ac34986c-0ef4-477c-a9e8-4ea89ac7b0f4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:50:23 compute-0 nova_compute[253538]: 2025-11-25 08:50:23.308 253542 DEBUG oslo_concurrency.lockutils [req-13d90790-992d-4613-a97d-5848e5c1b970 req-ac34986c-0ef4-477c-a9e8-4ea89ac7b0f4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:50:23 compute-0 nova_compute[253538]: 2025-11-25 08:50:23.308 253542 DEBUG nova.network.neutron [req-13d90790-992d-4613-a97d-5848e5c1b970 req-ac34986c-0ef4-477c-a9e8-4ea89ac7b0f4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Refreshing network info cache for port d0945383-2a0d-4019-9b60-eea96d667c69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:50:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:50:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:50:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:50:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:50:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:50:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:50:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 08:50:24 compute-0 nova_compute[253538]: 2025-11-25 08:50:24.606 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:50:25 compute-0 nova_compute[253538]: 2025-11-25 08:50:25.047 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:25 compute-0 ceph-mon[75015]: pgmap v2072: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 08:50:25 compute-0 nova_compute[253538]: 2025-11-25 08:50:25.427 253542 DEBUG nova.network.neutron [req-13d90790-992d-4613-a97d-5848e5c1b970 req-ac34986c-0ef4-477c-a9e8-4ea89ac7b0f4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updated VIF entry in instance network info cache for port d0945383-2a0d-4019-9b60-eea96d667c69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:50:25 compute-0 nova_compute[253538]: 2025-11-25 08:50:25.429 253542 DEBUG nova.network.neutron [req-13d90790-992d-4613-a97d-5848e5c1b970 req-ac34986c-0ef4-477c-a9e8-4ea89ac7b0f4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:50:25 compute-0 nova_compute[253538]: 2025-11-25 08:50:25.447 253542 DEBUG oslo_concurrency.lockutils [req-13d90790-992d-4613-a97d-5848e5c1b970 req-ac34986c-0ef4-477c-a9e8-4ea89ac7b0f4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:50:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2073: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 99 op/s
Nov 25 08:50:25 compute-0 nova_compute[253538]: 2025-11-25 08:50:25.875 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:27 compute-0 ceph-mon[75015]: pgmap v2073: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 99 op/s
Nov 25 08:50:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:50:27 compute-0 nova_compute[253538]: 2025-11-25 08:50:27.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Nov 25 08:50:28 compute-0 nova_compute[253538]: 2025-11-25 08:50:28.994 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:28 compute-0 nova_compute[253538]: 2025-11-25 08:50:28.997 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:29 compute-0 sshd-session[364392]: Invalid user ekp from 45.202.211.6 port 52022
Nov 25 08:50:29 compute-0 nova_compute[253538]: 2025-11-25 08:50:29.023 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:50:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:50:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1671249306' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:50:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:50:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1671249306' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:50:29 compute-0 nova_compute[253538]: 2025-11-25 08:50:29.123 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:29 compute-0 nova_compute[253538]: 2025-11-25 08:50:29.124 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:29 compute-0 ceph-mon[75015]: pgmap v2074: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Nov 25 08:50:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1671249306' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:50:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1671249306' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:50:29 compute-0 nova_compute[253538]: 2025-11-25 08:50:29.133 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:50:29 compute-0 nova_compute[253538]: 2025-11-25 08:50:29.133 253542 INFO nova.compute.claims [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:50:29 compute-0 sshd-session[364392]: Received disconnect from 45.202.211.6 port 52022:11: Bye Bye [preauth]
Nov 25 08:50:29 compute-0 sshd-session[364392]: Disconnected from invalid user ekp 45.202.211.6 port 52022 [preauth]
Nov 25 08:50:29 compute-0 nova_compute[253538]: 2025-11-25 08:50:29.304 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2075: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 08:50:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:50:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801715794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:50:29 compute-0 nova_compute[253538]: 2025-11-25 08:50:29.899 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:29 compute-0 nova_compute[253538]: 2025-11-25 08:50:29.908 253542 DEBUG nova.compute.provider_tree [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:50:29 compute-0 nova_compute[253538]: 2025-11-25 08:50:29.929 253542 DEBUG nova.scheduler.client.report [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:50:29 compute-0 nova_compute[253538]: 2025-11-25 08:50:29.952 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:29 compute-0 nova_compute[253538]: 2025-11-25 08:50:29.954 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.005 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.006 253542 DEBUG nova.network.neutron [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.038 253542 INFO nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.065 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:50:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/801715794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.163 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.165 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.165 253542 INFO nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Creating image(s)
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.189 253542 DEBUG nova.storage.rbd_utils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image b4f98996-3a98-41ad-af66-af37066515d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.219 253542 DEBUG nova.storage.rbd_utils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image b4f98996-3a98-41ad-af66-af37066515d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.242 253542 DEBUG nova.storage.rbd_utils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image b4f98996-3a98-41ad-af66-af37066515d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.245 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.309 253542 DEBUG nova.policy [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '009378dc36154271ba5b4590ce67ddde', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.350 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.351 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.351 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.352 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.370 253542 DEBUG nova.storage.rbd_utils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image b4f98996-3a98-41ad-af66-af37066515d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.373 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc b4f98996-3a98-41ad-af66-af37066515d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.728 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc b4f98996-3a98-41ad-af66-af37066515d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.804 253542 DEBUG nova.storage.rbd_utils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] resizing rbd image b4f98996-3a98-41ad-af66-af37066515d3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.902 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.909 253542 DEBUG nova.objects.instance [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'migration_context' on Instance uuid b4f98996-3a98-41ad-af66-af37066515d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.969 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.970 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Ensure instance console log exists: /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.970 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.971 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:30 compute-0 nova_compute[253538]: 2025-11-25 08:50:30.971 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:31 compute-0 ceph-mon[75015]: pgmap v2075: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 08:50:31 compute-0 nova_compute[253538]: 2025-11-25 08:50:31.177 253542 DEBUG nova.network.neutron [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Successfully created port: 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:50:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 55 op/s
Nov 25 08:50:31 compute-0 ovn_controller[152859]: 2025-11-25T08:50:31Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:13:51 10.100.0.3
Nov 25 08:50:31 compute-0 ovn_controller[152859]: 2025-11-25T08:50:31Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:13:51 10.100.0.3
Nov 25 08:50:31 compute-0 podman[364582]: 2025-11-25 08:50:31.860490773 +0000 UTC m=+0.093523917 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:50:32 compute-0 nova_compute[253538]: 2025-11-25 08:50:32.257 253542 DEBUG nova.network.neutron [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Successfully updated port: 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:50:32 compute-0 nova_compute[253538]: 2025-11-25 08:50:32.271 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:50:32 compute-0 nova_compute[253538]: 2025-11-25 08:50:32.271 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquired lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:50:32 compute-0 nova_compute[253538]: 2025-11-25 08:50:32.272 253542 DEBUG nova.network.neutron [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:50:32 compute-0 nova_compute[253538]: 2025-11-25 08:50:32.356 253542 DEBUG nova.compute.manager [req-20db4852-10c6-47d5-b22a-bfc792e5e012 req-9b1a01f6-23e5-4af1-8963-685df28ccad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-changed-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:32 compute-0 nova_compute[253538]: 2025-11-25 08:50:32.357 253542 DEBUG nova.compute.manager [req-20db4852-10c6-47d5-b22a-bfc792e5e012 req-9b1a01f6-23e5-4af1-8963-685df28ccad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Refreshing instance network info cache due to event network-changed-0547929c-86ba-4aaa-869f-c7e2b5ea7e67. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:50:32 compute-0 nova_compute[253538]: 2025-11-25 08:50:32.358 253542 DEBUG oslo_concurrency.lockutils [req-20db4852-10c6-47d5-b22a-bfc792e5e012 req-9b1a01f6-23e5-4af1-8963-685df28ccad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:50:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:50:32 compute-0 nova_compute[253538]: 2025-11-25 08:50:32.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:50:32 compute-0 nova_compute[253538]: 2025-11-25 08:50:32.629 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:32 compute-0 nova_compute[253538]: 2025-11-25 08:50:32.790 253542 DEBUG nova.network.neutron [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:50:33 compute-0 ceph-mon[75015]: pgmap v2076: 321 pgs: 321 active+clean; 134 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 55 op/s
Nov 25 08:50:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 192 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.9 MiB/s wr, 118 op/s
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.060 253542 DEBUG nova.network.neutron [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updating instance_info_cache with network_info: [{"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.079 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Releasing lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.080 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Instance network_info: |[{"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.080 253542 DEBUG oslo_concurrency.lockutils [req-20db4852-10c6-47d5-b22a-bfc792e5e012 req-9b1a01f6-23e5-4af1-8963-685df28ccad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.081 253542 DEBUG nova.network.neutron [req-20db4852-10c6-47d5-b22a-bfc792e5e012 req-9b1a01f6-23e5-4af1-8963-685df28ccad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Refreshing network info cache for port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.086 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Start _get_guest_xml network_info=[{"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.093 253542 WARNING nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.099 253542 DEBUG nova.virt.libvirt.host [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.099 253542 DEBUG nova.virt.libvirt.host [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.106 253542 DEBUG nova.virt.libvirt.host [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.106 253542 DEBUG nova.virt.libvirt.host [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.106 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.107 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.107 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.107 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.107 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.108 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.108 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.108 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.108 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.108 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.109 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.109 253542 DEBUG nova.virt.hardware [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.111 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:50:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2018625777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.550 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.576 253542 DEBUG nova.storage.rbd_utils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image b4f98996-3a98-41ad-af66-af37066515d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.583 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.638 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.639 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.667 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.668 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:50:34 compute-0 nova_compute[253538]: 2025-11-25 08:50:34.669 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:50:34 compute-0 podman[364645]: 2025-11-25 08:50:34.836609847 +0000 UTC m=+0.078145614 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 08:50:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:50:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/174282057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.083 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.086 253542 DEBUG nova.virt.libvirt.vif [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-642924609',display_name='tempest-TestNetworkAdvancedServerOps-server-642924609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-642924609',id=110,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKQwQUMdH1inJEZNQ9tUR+z/kDiUab1e20h5rm6qDlszZoYoLqt3pa8Fary6MYkj2oJVBphpUWW4+oVR02Nvg0VNSZNNzWHbc601Ac4/2sW+DdmilXo7ZngfOc7+6JMZJw==',key_name='tempest-TestNetworkAdvancedServerOps-1971435409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-r1bh0apf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:50:30Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=b4f98996-3a98-41ad-af66-af37066515d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.087 253542 DEBUG nova.network.os_vif_util [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.088 253542 DEBUG nova.network.os_vif_util [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:20:35,bridge_name='br-int',has_traffic_filtering=True,id=0547929c-86ba-4aaa-869f-c7e2b5ea7e67,network=Network(c0613062-c56d-4f59-a1bd-5487b9cae905),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0547929c-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.090 253542 DEBUG nova.objects.instance [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4f98996-3a98-41ad-af66-af37066515d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.119 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:50:35 compute-0 nova_compute[253538]:   <uuid>b4f98996-3a98-41ad-af66-af37066515d3</uuid>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   <name>instance-0000006e</name>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-642924609</nova:name>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:50:34</nova:creationTime>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:50:35 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:50:35 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:50:35 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:50:35 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:50:35 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:50:35 compute-0 nova_compute[253538]:         <nova:user uuid="009378dc36154271ba5b4590ce67ddde">tempest-TestNetworkAdvancedServerOps-1132090577-project-member</nova:user>
Nov 25 08:50:35 compute-0 nova_compute[253538]:         <nova:project uuid="7dcaf3c96bfc4db3a41291debd385c67">tempest-TestNetworkAdvancedServerOps-1132090577</nova:project>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:50:35 compute-0 nova_compute[253538]:         <nova:port uuid="0547929c-86ba-4aaa-869f-c7e2b5ea7e67">
Nov 25 08:50:35 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <system>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <entry name="serial">b4f98996-3a98-41ad-af66-af37066515d3</entry>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <entry name="uuid">b4f98996-3a98-41ad-af66-af37066515d3</entry>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     </system>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   <os>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   </os>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   <features>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   </features>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/b4f98996-3a98-41ad-af66-af37066515d3_disk">
Nov 25 08:50:35 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       </source>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:50:35 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/b4f98996-3a98-41ad-af66-af37066515d3_disk.config">
Nov 25 08:50:35 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       </source>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:50:35 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:86:20:35"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <target dev="tap0547929c-86"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3/console.log" append="off"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <video>
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     </video>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:50:35 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:50:35 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:50:35 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:50:35 compute-0 nova_compute[253538]: </domain>
Nov 25 08:50:35 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.121 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Preparing to wait for external event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.122 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.123 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.123 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.124 253542 DEBUG nova.virt.libvirt.vif [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-642924609',display_name='tempest-TestNetworkAdvancedServerOps-server-642924609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-642924609',id=110,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKQwQUMdH1inJEZNQ9tUR+z/kDiUab1e20h5rm6qDlszZoYoLqt3pa8Fary6MYkj2oJVBphpUWW4+oVR02Nvg0VNSZNNzWHbc601Ac4/2sW+DdmilXo7ZngfOc7+6JMZJw==',key_name='tempest-TestNetworkAdvancedServerOps-1971435409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-r1bh0apf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:50:30Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=b4f98996-3a98-41ad-af66-af37066515d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.124 253542 DEBUG nova.network.os_vif_util [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.125 253542 DEBUG nova.network.os_vif_util [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:20:35,bridge_name='br-int',has_traffic_filtering=True,id=0547929c-86ba-4aaa-869f-c7e2b5ea7e67,network=Network(c0613062-c56d-4f59-a1bd-5487b9cae905),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0547929c-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.126 253542 DEBUG os_vif [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:20:35,bridge_name='br-int',has_traffic_filtering=True,id=0547929c-86ba-4aaa-869f-c7e2b5ea7e67,network=Network(c0613062-c56d-4f59-a1bd-5487b9cae905),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0547929c-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.126 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.127 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.128 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.135 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0547929c-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.136 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0547929c-86, col_values=(('external_ids', {'iface-id': '0547929c-86ba-4aaa-869f-c7e2b5ea7e67', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:20:35', 'vm-uuid': 'b4f98996-3a98-41ad-af66-af37066515d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.138 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:35 compute-0 NetworkManager[48915]: <info>  [1764060635.1391] manager: (tap0547929c-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/440)
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.140 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.148 253542 INFO os_vif [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:20:35,bridge_name='br-int',has_traffic_filtering=True,id=0547929c-86ba-4aaa-869f-c7e2b5ea7e67,network=Network(c0613062-c56d-4f59-a1bd-5487b9cae905),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0547929c-86')
Nov 25 08:50:35 compute-0 ceph-mon[75015]: pgmap v2077: 321 pgs: 321 active+clean; 192 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.9 MiB/s wr, 118 op/s
Nov 25 08:50:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2018625777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:50:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/174282057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.196 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.196 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.197 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No VIF found with MAC fa:16:3e:86:20:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.197 253542 INFO nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Using config drive
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.221 253542 DEBUG nova.storage.rbd_utils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image b4f98996-3a98-41ad-af66-af37066515d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:50:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.902 253542 INFO nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Creating config drive at /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3/disk.config
Nov 25 08:50:35 compute-0 nova_compute[253538]: 2025-11-25 08:50:35.911 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5sios1hf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.075 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5sios1hf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.109 253542 DEBUG nova.storage.rbd_utils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image b4f98996-3a98-41ad-af66-af37066515d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.114 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3/disk.config b4f98996-3a98-41ad-af66-af37066515d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.283 253542 DEBUG oslo_concurrency.processutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3/disk.config b4f98996-3a98-41ad-af66-af37066515d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.284 253542 INFO nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Deleting local config drive /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3/disk.config because it was imported into RBD.
Nov 25 08:50:36 compute-0 kernel: tap0547929c-86: entered promiscuous mode
Nov 25 08:50:36 compute-0 NetworkManager[48915]: <info>  [1764060636.3544] manager: (tap0547929c-86): new Tun device (/org/freedesktop/NetworkManager/Devices/441)
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.356 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:36 compute-0 ovn_controller[152859]: 2025-11-25T08:50:36Z|01076|binding|INFO|Claiming lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for this chassis.
Nov 25 08:50:36 compute-0 ovn_controller[152859]: 2025-11-25T08:50:36Z|01077|binding|INFO|0547929c-86ba-4aaa-869f-c7e2b5ea7e67: Claiming fa:16:3e:86:20:35 10.100.0.11
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.364 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:20:35 10.100.0.11'], port_security=['fa:16:3e:86:20:35 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b4f98996-3a98-41ad-af66-af37066515d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0613062-c56d-4f59-a1bd-5487b9cae905', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'db6ee56c-6009-455c-94cc-12101966dbd4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eca56c04-944f-49ab-8f25-403bf5089348, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0547929c-86ba-4aaa-869f-c7e2b5ea7e67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.365 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 in datapath c0613062-c56d-4f59-a1bd-5487b9cae905 bound to our chassis
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.366 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c0613062-c56d-4f59-a1bd-5487b9cae905
Nov 25 08:50:36 compute-0 ovn_controller[152859]: 2025-11-25T08:50:36Z|01078|binding|INFO|Setting lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 ovn-installed in OVS
Nov 25 08:50:36 compute-0 ovn_controller[152859]: 2025-11-25T08:50:36Z|01079|binding|INFO|Setting lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 up in Southbound
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.376 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1e32b9-276e-4bbb-996a-d37f57875901]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.377 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc0613062-c1 in ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.379 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc0613062-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.379 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[76a326e1-99a5-4ab5-99bd-2a417b57d7cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.380 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3d15baec-2b59-45a8-b23f-0904e16491a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.382 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:36 compute-0 systemd-udevd[364758]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.391 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[da8d02cf-3105-41b4-84cc-26c9fae72ef3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:36 compute-0 systemd-machined[215790]: New machine qemu-136-instance-0000006e.
Nov 25 08:50:36 compute-0 NetworkManager[48915]: <info>  [1764060636.3967] device (tap0547929c-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:50:36 compute-0 NetworkManager[48915]: <info>  [1764060636.3976] device (tap0547929c-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:50:36 compute-0 systemd[1]: Started Virtual Machine qemu-136-instance-0000006e.
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a00d46-a3c3-4e6c-9f88-9661855b86f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.441 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f66d7efb-05f4-4256-92b5-368e48843558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.446 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[745c2219-2632-43f0-8d48-7d6d1d8cc7ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:36 compute-0 NetworkManager[48915]: <info>  [1764060636.4470] manager: (tapc0613062-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/442)
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.483 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e9cbf7-e4ce-4da0-8f73-13d617812f6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.486 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[de44e7d0-6846-4c58-8ecc-0a518beee5c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:36 compute-0 NetworkManager[48915]: <info>  [1764060636.5123] device (tapc0613062-c0): carrier: link connected
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.517 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[085625e5-7755-4336-8206-f3c5dca8f3ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.537 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d561b019-ca23-4866-90f8-5c2607288421]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc0613062-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a3:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599111, 'reachable_time': 17002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364791, 'error': None, 'target': 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.553 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[458f2929-b4b4-4aa9-94dd-91864d3304eb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:a31e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599111, 'tstamp': 599111}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364792, 'error': None, 'target': 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.572 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d8627d53-e5b7-4c63-be46-40e5be69bf6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc0613062-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a3:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599111, 'reachable_time': 17002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364793, 'error': None, 'target': 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.587 253542 DEBUG nova.network.neutron [req-20db4852-10c6-47d5-b22a-bfc792e5e012 req-9b1a01f6-23e5-4af1-8963-685df28ccad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updated VIF entry in instance network info cache for port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.587 253542 DEBUG nova.network.neutron [req-20db4852-10c6-47d5-b22a-bfc792e5e012 req-9b1a01f6-23e5-4af1-8963-685df28ccad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updating instance_info_cache with network_info: [{"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.600 253542 DEBUG oslo_concurrency.lockutils [req-20db4852-10c6-47d5-b22a-bfc792e5e012 req-9b1a01f6-23e5-4af1-8963-685df28ccad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.601 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c150579c-7a42-4f54-9086-489af16ee07f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.652 253542 DEBUG nova.compute.manager [req-af4d1bf3-575f-436a-968d-f2add4e16aee req-70ac3d1c-271e-46e5-b06d-18dcedcb0d51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.653 253542 DEBUG oslo_concurrency.lockutils [req-af4d1bf3-575f-436a-968d-f2add4e16aee req-70ac3d1c-271e-46e5-b06d-18dcedcb0d51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.653 253542 DEBUG oslo_concurrency.lockutils [req-af4d1bf3-575f-436a-968d-f2add4e16aee req-70ac3d1c-271e-46e5-b06d-18dcedcb0d51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.654 253542 DEBUG oslo_concurrency.lockutils [req-af4d1bf3-575f-436a-968d-f2add4e16aee req-70ac3d1c-271e-46e5-b06d-18dcedcb0d51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.655 253542 DEBUG nova.compute.manager [req-af4d1bf3-575f-436a-968d-f2add4e16aee req-70ac3d1c-271e-46e5-b06d-18dcedcb0d51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Processing event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.714 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8222e1a3-f66f-43b6-b59d-4e4a76e40f6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.716 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0613062-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.717 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.718 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0613062-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.720 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:36 compute-0 NetworkManager[48915]: <info>  [1764060636.7215] manager: (tapc0613062-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/443)
Nov 25 08:50:36 compute-0 kernel: tapc0613062-c0: entered promiscuous mode
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.732 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc0613062-c0, col_values=(('external_ids', {'iface-id': '0ffa1e66-a7b3-404c-afe1-b89a3d2ec28f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:36 compute-0 ovn_controller[152859]: 2025-11-25T08:50:36Z|01080|binding|INFO|Releasing lport 0ffa1e66-a7b3-404c-afe1-b89a3d2ec28f from this chassis (sb_readonly=0)
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.734 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.736 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c0613062-c56d-4f59-a1bd-5487b9cae905.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c0613062-c56d-4f59-a1bd-5487b9cae905.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.737 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[37d94f66-1147-4dc8-ad19-17c42de0a083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.738 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-c0613062-c56d-4f59-a1bd-5487b9cae905
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/c0613062-c56d-4f59-a1bd-5487b9cae905.pid.haproxy
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID c0613062-c56d-4f59-a1bd-5487b9cae905
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:50:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:36.739 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'env', 'PROCESS_TAG=haproxy-c0613062-c56d-4f59-a1bd-5487b9cae905', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c0613062-c56d-4f59-a1bd-5487b9cae905.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:50:36 compute-0 nova_compute[253538]: 2025-11-25 08:50:36.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.120 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060637.119723, b4f98996-3a98-41ad-af66-af37066515d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.121 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] VM Started (Lifecycle Event)
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.123 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.127 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.132 253542 INFO nova.virt.libvirt.driver [-] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Instance spawned successfully.
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.132 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.146 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.152 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:50:37 compute-0 podman[364866]: 2025-11-25 08:50:37.154613197 +0000 UTC m=+0.067022776 container create 12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.158 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.159 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.159 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.160 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.160 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.160 253542 DEBUG nova.virt.libvirt.driver [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.170 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.170 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060637.1198974, b4f98996-3a98-41ad-af66-af37066515d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.170 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] VM Paused (Lifecycle Event)
Nov 25 08:50:37 compute-0 ceph-mon[75015]: pgmap v2078: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 08:50:37 compute-0 systemd[1]: Started libpod-conmon-12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af.scope.
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.198 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.203 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060637.1258154, b4f98996-3a98-41ad-af66-af37066515d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.203 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] VM Resumed (Lifecycle Event)
Nov 25 08:50:37 compute-0 podman[364866]: 2025-11-25 08:50:37.118533061 +0000 UTC m=+0.030942700 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.220 253542 INFO nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Took 7.06 seconds to spawn the instance on the hypervisor.
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.220 253542 DEBUG nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.222 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.229 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:50:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf67b50908d3dd49b2c0a0481b4509ed99ab81eb1cd3289fe65b839d4061e7d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.258 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:50:37 compute-0 podman[364866]: 2025-11-25 08:50:37.260244896 +0000 UTC m=+0.172654475 container init 12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:50:37 compute-0 podman[364866]: 2025-11-25 08:50:37.265903787 +0000 UTC m=+0.178313366 container start 12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 08:50:37 compute-0 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[364882]: [NOTICE]   (364886) : New worker (364888) forked
Nov 25 08:50:37 compute-0 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[364882]: [NOTICE]   (364886) : Loading success.
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.304 253542 INFO nova.compute.manager [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Took 8.21 seconds to build instance.
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.321 253542 DEBUG oslo_concurrency.lockutils [None req-65781c2f-9e36-4f52-9d92-a072abcd87c5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.324s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:50:37 compute-0 nova_compute[253538]: 2025-11-25 08:50:37.633 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 08:50:38 compute-0 nova_compute[253538]: 2025-11-25 08:50:38.268 253542 INFO nova.compute.manager [None req-2eafb20a-8fee-4b08-adf5-c44d34dd792d 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Get console output
Nov 25 08:50:38 compute-0 nova_compute[253538]: 2025-11-25 08:50:38.273 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:50:38 compute-0 nova_compute[253538]: 2025-11-25 08:50:38.577 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:50:38 compute-0 nova_compute[253538]: 2025-11-25 08:50:38.921 253542 DEBUG nova.compute.manager [req-d8ac8ac8-a3c4-4c61-b411-ff208fe5080a req-1dc29505-e6e9-449f-bf95-f6ee45af8814 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:38 compute-0 nova_compute[253538]: 2025-11-25 08:50:38.922 253542 DEBUG oslo_concurrency.lockutils [req-d8ac8ac8-a3c4-4c61-b411-ff208fe5080a req-1dc29505-e6e9-449f-bf95-f6ee45af8814 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:38 compute-0 nova_compute[253538]: 2025-11-25 08:50:38.922 253542 DEBUG oslo_concurrency.lockutils [req-d8ac8ac8-a3c4-4c61-b411-ff208fe5080a req-1dc29505-e6e9-449f-bf95-f6ee45af8814 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:38 compute-0 nova_compute[253538]: 2025-11-25 08:50:38.923 253542 DEBUG oslo_concurrency.lockutils [req-d8ac8ac8-a3c4-4c61-b411-ff208fe5080a req-1dc29505-e6e9-449f-bf95-f6ee45af8814 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:38 compute-0 nova_compute[253538]: 2025-11-25 08:50:38.923 253542 DEBUG nova.compute.manager [req-d8ac8ac8-a3c4-4c61-b411-ff208fe5080a req-1dc29505-e6e9-449f-bf95-f6ee45af8814 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] No waiting events found dispatching network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:50:38 compute-0 nova_compute[253538]: 2025-11-25 08:50:38.923 253542 WARNING nova.compute.manager [req-d8ac8ac8-a3c4-4c61-b411-ff208fe5080a req-1dc29505-e6e9-449f-bf95-f6ee45af8814 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received unexpected event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for instance with vm_state active and task_state None.
Nov 25 08:50:39 compute-0 ceph-mon[75015]: pgmap v2079: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 08:50:39 compute-0 nova_compute[253538]: 2025-11-25 08:50:39.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:50:39 compute-0 nova_compute[253538]: 2025-11-25 08:50:39.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:50:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 139 op/s
Nov 25 08:50:39 compute-0 podman[364897]: 2025-11-25 08:50:39.871676235 +0000 UTC m=+0.124107936 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 08:50:40 compute-0 nova_compute[253538]: 2025-11-25 08:50:40.138 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:40 compute-0 nova_compute[253538]: 2025-11-25 08:50:40.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:50:40 compute-0 nova_compute[253538]: 2025-11-25 08:50:40.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:50:40 compute-0 nova_compute[253538]: 2025-11-25 08:50:40.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:40 compute-0 nova_compute[253538]: 2025-11-25 08:50:40.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:40 compute-0 nova_compute[253538]: 2025-11-25 08:50:40.584 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:40 compute-0 nova_compute[253538]: 2025-11-25 08:50:40.584 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:50:40 compute-0 nova_compute[253538]: 2025-11-25 08:50:40.585 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:40.685 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:50:40 compute-0 nova_compute[253538]: 2025-11-25 08:50:40.686 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:40.687 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:50:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:40.687 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:50:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4289358426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:50:41 compute-0 nova_compute[253538]: 2025-11-25 08:50:41.053 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:41.074 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:41.075 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:41.076 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:41 compute-0 nova_compute[253538]: 2025-11-25 08:50:41.139 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:50:41 compute-0 nova_compute[253538]: 2025-11-25 08:50:41.139 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:50:41 compute-0 nova_compute[253538]: 2025-11-25 08:50:41.142 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:50:41 compute-0 nova_compute[253538]: 2025-11-25 08:50:41.142 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:50:41 compute-0 ceph-mon[75015]: pgmap v2080: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 139 op/s
Nov 25 08:50:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4289358426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:50:41 compute-0 nova_compute[253538]: 2025-11-25 08:50:41.336 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:50:41 compute-0 nova_compute[253538]: 2025-11-25 08:50:41.339 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3477MB free_disk=59.92185974121094GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:50:41 compute-0 nova_compute[253538]: 2025-11-25 08:50:41.339 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:41 compute-0 nova_compute[253538]: 2025-11-25 08:50:41.339 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:41 compute-0 nova_compute[253538]: 2025-11-25 08:50:41.516 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 49b75125-0ca4-438d-9f2a-1d130a6b5632 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:50:41 compute-0 nova_compute[253538]: 2025-11-25 08:50:41.516 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance b4f98996-3a98-41ad-af66-af37066515d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:50:41 compute-0 nova_compute[253538]: 2025-11-25 08:50:41.517 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:50:41 compute-0 nova_compute[253538]: 2025-11-25 08:50:41.517 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:50:41 compute-0 nova_compute[253538]: 2025-11-25 08:50:41.633 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 139 op/s
Nov 25 08:50:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:50:42 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3570297059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.200 253542 DEBUG nova.compute.manager [req-facbe85b-5c08-474e-ae68-ee2db01af8b1 req-ff8d7fd8-3acc-47c5-b8d6-8fa1e61b6562 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-changed-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.201 253542 DEBUG nova.compute.manager [req-facbe85b-5c08-474e-ae68-ee2db01af8b1 req-ff8d7fd8-3acc-47c5-b8d6-8fa1e61b6562 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Refreshing instance network info cache due to event network-changed-0547929c-86ba-4aaa-869f-c7e2b5ea7e67. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.202 253542 DEBUG oslo_concurrency.lockutils [req-facbe85b-5c08-474e-ae68-ee2db01af8b1 req-ff8d7fd8-3acc-47c5-b8d6-8fa1e61b6562 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.202 253542 DEBUG oslo_concurrency.lockutils [req-facbe85b-5c08-474e-ae68-ee2db01af8b1 req-ff8d7fd8-3acc-47c5-b8d6-8fa1e61b6562 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.203 253542 DEBUG nova.network.neutron [req-facbe85b-5c08-474e-ae68-ee2db01af8b1 req-ff8d7fd8-3acc-47c5-b8d6-8fa1e61b6562 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Refreshing network info cache for port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.204 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.212 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:50:42 compute-0 ceph-mon[75015]: pgmap v2081: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 139 op/s
Nov 25 08:50:42 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3570297059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.233 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:50:42 compute-0 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.270 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.271 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.272 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.272 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.288 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.327 253542 DEBUG oslo_concurrency.lockutils [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "interface-49b75125-0ca4-438d-9f2a-1d130a6b5632-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.327 253542 DEBUG oslo_concurrency.lockutils [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "interface-49b75125-0ca4-438d-9f2a-1d130a6b5632-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.328 253542 DEBUG nova.objects.instance [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'flavor' on Instance uuid 49b75125-0ca4-438d-9f2a-1d130a6b5632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:50:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.634 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.961 253542 DEBUG nova.objects.instance [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_requests' on Instance uuid 49b75125-0ca4-438d-9f2a-1d130a6b5632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:50:42 compute-0 nova_compute[253538]: 2025-11-25 08:50:42.979 253542 DEBUG nova.network.neutron [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:50:43 compute-0 nova_compute[253538]: 2025-11-25 08:50:43.211 253542 DEBUG nova.policy [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:50:43 compute-0 sudo[364969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:50:43 compute-0 sudo[364969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:43 compute-0 sudo[364969]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:43 compute-0 sudo[364994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:50:43 compute-0 sudo[364994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:43 compute-0 sudo[364994]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:43 compute-0 sudo[365019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:50:43 compute-0 sudo[365019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:43 compute-0 sudo[365019]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Nov 25 08:50:43 compute-0 sudo[365044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:50:43 compute-0 sudo[365044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:43 compute-0 nova_compute[253538]: 2025-11-25 08:50:43.882 253542 DEBUG nova.network.neutron [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Successfully created port: 340ce0e3-8b72-4b40-afcb-53f30e6cc961 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:50:44 compute-0 nova_compute[253538]: 2025-11-25 08:50:44.030 253542 DEBUG nova.network.neutron [req-facbe85b-5c08-474e-ae68-ee2db01af8b1 req-ff8d7fd8-3acc-47c5-b8d6-8fa1e61b6562 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updated VIF entry in instance network info cache for port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:50:44 compute-0 nova_compute[253538]: 2025-11-25 08:50:44.031 253542 DEBUG nova.network.neutron [req-facbe85b-5c08-474e-ae68-ee2db01af8b1 req-ff8d7fd8-3acc-47c5-b8d6-8fa1e61b6562 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updating instance_info_cache with network_info: [{"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:50:44 compute-0 nova_compute[253538]: 2025-11-25 08:50:44.046 253542 DEBUG oslo_concurrency.lockutils [req-facbe85b-5c08-474e-ae68-ee2db01af8b1 req-ff8d7fd8-3acc-47c5-b8d6-8fa1e61b6562 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:50:44 compute-0 sudo[365044]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:50:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:50:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:50:44 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:50:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:50:44 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:50:44 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 43ad9e0a-d835-412f-b19c-390bb7c66bd8 does not exist
Nov 25 08:50:44 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 9215fbf3-59fe-4379-ab3a-4cfd39fc3581 does not exist
Nov 25 08:50:44 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev faef36d8-2c4a-416d-b213-56cb66b224b6 does not exist
Nov 25 08:50:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:50:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:50:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:50:44 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:50:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:50:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:50:44 compute-0 sudo[365101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:50:44 compute-0 sudo[365101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:44 compute-0 sudo[365101]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:44 compute-0 sudo[365126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:50:44 compute-0 sudo[365126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:44 compute-0 sudo[365126]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:44 compute-0 sudo[365151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:50:44 compute-0 sudo[365151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:44 compute-0 sudo[365151]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:44 compute-0 sudo[365176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:50:44 compute-0 sudo[365176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:44 compute-0 ceph-mon[75015]: pgmap v2082: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Nov 25 08:50:44 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:50:44 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:50:44 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:50:44 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:50:44 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:50:44 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:50:45 compute-0 nova_compute[253538]: 2025-11-25 08:50:45.051 253542 DEBUG nova.network.neutron [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Successfully updated port: 340ce0e3-8b72-4b40-afcb-53f30e6cc961 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:50:45 compute-0 nova_compute[253538]: 2025-11-25 08:50:45.184 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:45 compute-0 nova_compute[253538]: 2025-11-25 08:50:45.201 253542 DEBUG oslo_concurrency.lockutils [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:50:45 compute-0 nova_compute[253538]: 2025-11-25 08:50:45.202 253542 DEBUG oslo_concurrency.lockutils [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:50:45 compute-0 nova_compute[253538]: 2025-11-25 08:50:45.202 253542 DEBUG nova.network.neutron [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:50:45 compute-0 podman[365244]: 2025-11-25 08:50:45.142202148 +0000 UTC m=+0.034120405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:50:45 compute-0 podman[365244]: 2025-11-25 08:50:45.312856888 +0000 UTC m=+0.204775075 container create 36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 08:50:45 compute-0 nova_compute[253538]: 2025-11-25 08:50:45.374 253542 DEBUG nova.compute.manager [req-0b48e77c-5c37-4c1a-ada9-9e99047b6fe5 req-a665ee81-929a-40e1-bc9a-9932c18cbcb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-changed-340ce0e3-8b72-4b40-afcb-53f30e6cc961 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:45 compute-0 nova_compute[253538]: 2025-11-25 08:50:45.375 253542 DEBUG nova.compute.manager [req-0b48e77c-5c37-4c1a-ada9-9e99047b6fe5 req-a665ee81-929a-40e1-bc9a-9932c18cbcb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Refreshing instance network info cache due to event network-changed-340ce0e3-8b72-4b40-afcb-53f30e6cc961. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:50:45 compute-0 nova_compute[253538]: 2025-11-25 08:50:45.375 253542 DEBUG oslo_concurrency.lockutils [req-0b48e77c-5c37-4c1a-ada9-9e99047b6fe5 req-a665ee81-929a-40e1-bc9a-9932c18cbcb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:50:45 compute-0 systemd[1]: Started libpod-conmon-36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98.scope.
Nov 25 08:50:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:50:45 compute-0 podman[365244]: 2025-11-25 08:50:45.501839829 +0000 UTC m=+0.393758056 container init 36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 25 08:50:45 compute-0 podman[365244]: 2025-11-25 08:50:45.51417766 +0000 UTC m=+0.406095807 container start 36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:50:45 compute-0 elastic_mccarthy[365260]: 167 167
Nov 25 08:50:45 compute-0 systemd[1]: libpod-36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98.scope: Deactivated successfully.
Nov 25 08:50:45 compute-0 podman[365244]: 2025-11-25 08:50:45.61951375 +0000 UTC m=+0.511431937 container attach 36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:50:45 compute-0 podman[365244]: 2025-11-25 08:50:45.620366394 +0000 UTC m=+0.512284571 container died 36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 08:50:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 101 op/s
Nov 25 08:50:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c74f875615c9d312dd008f4958c530c8ed7006d7919e6917d9b470930b88962-merged.mount: Deactivated successfully.
Nov 25 08:50:45 compute-0 podman[365244]: 2025-11-25 08:50:45.78379758 +0000 UTC m=+0.675715747 container remove 36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:50:45 compute-0 systemd[1]: libpod-conmon-36b6052625caada1f1eee4b4c65c8281b887435cd33497a6a4101b76a88f8e98.scope: Deactivated successfully.
Nov 25 08:50:45 compute-0 podman[365284]: 2025-11-25 08:50:45.970044758 +0000 UTC m=+0.045235942 container create 6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:50:46 compute-0 systemd[1]: Started libpod-conmon-6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858.scope.
Nov 25 08:50:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf05399c35cf2bc9da94f342c558490e36ee5c466a45b865134cffca6078c36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf05399c35cf2bc9da94f342c558490e36ee5c466a45b865134cffca6078c36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf05399c35cf2bc9da94f342c558490e36ee5c466a45b865134cffca6078c36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf05399c35cf2bc9da94f342c558490e36ee5c466a45b865134cffca6078c36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf05399c35cf2bc9da94f342c558490e36ee5c466a45b865134cffca6078c36/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:50:46 compute-0 podman[365284]: 2025-11-25 08:50:45.951851411 +0000 UTC m=+0.027042595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:50:46 compute-0 podman[365284]: 2025-11-25 08:50:46.065918866 +0000 UTC m=+0.141110110 container init 6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:50:46 compute-0 podman[365284]: 2025-11-25 08:50:46.076258293 +0000 UTC m=+0.151449507 container start 6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 08:50:46 compute-0 podman[365284]: 2025-11-25 08:50:46.080698572 +0000 UTC m=+0.155889836 container attach 6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 08:50:47 compute-0 ceph-mon[75015]: pgmap v2083: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 101 op/s
Nov 25 08:50:47 compute-0 musing_davinci[365300]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:50:47 compute-0 musing_davinci[365300]: --> relative data size: 1.0
Nov 25 08:50:47 compute-0 musing_davinci[365300]: --> All data devices are unavailable
Nov 25 08:50:47 compute-0 systemd[1]: libpod-6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858.scope: Deactivated successfully.
Nov 25 08:50:47 compute-0 systemd[1]: libpod-6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858.scope: Consumed 1.044s CPU time.
Nov 25 08:50:47 compute-0 conmon[365300]: conmon 6c9c0d3f27aa7494f240 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858.scope/container/memory.events
Nov 25 08:50:47 compute-0 podman[365284]: 2025-11-25 08:50:47.179768076 +0000 UTC m=+1.254959290 container died 6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 08:50:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.588 253542 DEBUG nova.network.neutron [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.609 253542 DEBUG oslo_concurrency.lockutils [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.611 253542 DEBUG oslo_concurrency.lockutils [req-0b48e77c-5c37-4c1a-ada9-9e99047b6fe5 req-a665ee81-929a-40e1-bc9a-9932c18cbcb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.611 253542 DEBUG nova.network.neutron [req-0b48e77c-5c37-4c1a-ada9-9e99047b6fe5 req-a665ee81-929a-40e1-bc9a-9932c18cbcb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Refreshing network info cache for port 340ce0e3-8b72-4b40-afcb-53f30e6cc961 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.616 253542 DEBUG nova.virt.libvirt.vif [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:50:18Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.616 253542 DEBUG nova.network.os_vif_util [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.618 253542 DEBUG nova.network.os_vif_util [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.618 253542 DEBUG os_vif [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.620 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.620 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.621 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.631 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.632 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap340ce0e3-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.633 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap340ce0e3-8b, col_values=(('external_ids', {'iface-id': '340ce0e3-8b72-4b40-afcb-53f30e6cc961', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:f0:9c', 'vm-uuid': '49b75125-0ca4-438d-9f2a-1d130a6b5632'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.636 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:47 compute-0 NetworkManager[48915]: <info>  [1764060647.6377] manager: (tap340ce0e3-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.641 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.644 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.645 253542 INFO os_vif [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b')
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.646 253542 DEBUG nova.virt.libvirt.vif [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:50:18Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.647 253542 DEBUG nova.network.os_vif_util [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.649 253542 DEBUG nova.network.os_vif_util [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.653 253542 DEBUG nova.virt.libvirt.guest [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] attach device xml: <interface type="ethernet">
Nov 25 08:50:47 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:39:f0:9c"/>
Nov 25 08:50:47 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:50:47 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:50:47 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:50:47 compute-0 nova_compute[253538]:   <target dev="tap340ce0e3-8b"/>
Nov 25 08:50:47 compute-0 nova_compute[253538]: </interface>
Nov 25 08:50:47 compute-0 nova_compute[253538]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 08:50:47 compute-0 kernel: tap340ce0e3-8b: entered promiscuous mode
Nov 25 08:50:47 compute-0 ovn_controller[152859]: 2025-11-25T08:50:47Z|01081|binding|INFO|Claiming lport 340ce0e3-8b72-4b40-afcb-53f30e6cc961 for this chassis.
Nov 25 08:50:47 compute-0 ovn_controller[152859]: 2025-11-25T08:50:47Z|01082|binding|INFO|340ce0e3-8b72-4b40-afcb-53f30e6cc961: Claiming fa:16:3e:39:f0:9c 10.100.0.27
Nov 25 08:50:47 compute-0 NetworkManager[48915]: <info>  [1764060647.6693] manager: (tap340ce0e3-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/445)
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.680 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:f0:9c 10.100.0.27'], port_security=['fa:16:3e:39:f0:9c 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '49b75125-0ca4-438d-9f2a-1d130a6b5632', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e204e23-2391-4196-9262-d69db603285d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd480e0b0-4ff3-496c-bb44-a333ae5d1ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dda51c2a-7354-4889-adc9-442043cc4089, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=340ce0e3-8b72-4b40-afcb-53f30e6cc961) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.681 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 340ce0e3-8b72-4b40-afcb-53f30e6cc961 in datapath 1e204e23-2391-4196-9262-d69db603285d bound to our chassis
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.683 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e204e23-2391-4196-9262-d69db603285d
Nov 25 08:50:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2084: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.703 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[81a6bc13-d81a-49cb-bcaa-8ae1e2e92538]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.703 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1e204e23-21 in ovnmeta-1e204e23-2391-4196-9262-d69db603285d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.705 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1e204e23-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.705 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e272aad6-05b8-4aba-a560-626c06d7498f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.706 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[919a74f0-1651-4501-98dd-bef9ef5b9b6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:47 compute-0 systemd-udevd[365346]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:50:47 compute-0 ovn_controller[152859]: 2025-11-25T08:50:47Z|01083|binding|INFO|Setting lport 340ce0e3-8b72-4b40-afcb-53f30e6cc961 ovn-installed in OVS
Nov 25 08:50:47 compute-0 ovn_controller[152859]: 2025-11-25T08:50:47Z|01084|binding|INFO|Setting lport 340ce0e3-8b72-4b40-afcb-53f30e6cc961 up in Southbound
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.734 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[49cf8429-4a9f-4bb3-9d9a-8d0c94e8a02e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:47 compute-0 NetworkManager[48915]: <info>  [1764060647.7412] device (tap340ce0e3-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:50:47 compute-0 NetworkManager[48915]: <info>  [1764060647.7426] device (tap340ce0e3-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.753 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b31a01-062d-4080-8c80-088548767fed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.788 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7b740db5-b0b2-4f4f-a1b1-e78d676603bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.796 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eb994e04-8fca-4449-9ca5-3302e053c051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:47 compute-0 NetworkManager[48915]: <info>  [1764060647.7971] manager: (tap1e204e23-20): new Veth device (/org/freedesktop/NetworkManager/Devices/446)
Nov 25 08:50:47 compute-0 systemd-udevd[365349]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:50:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cf05399c35cf2bc9da94f342c558490e36ee5c466a45b865134cffca6078c36-merged.mount: Deactivated successfully.
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.846 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[568d642f-6abb-4f63-8850-6d16b520f091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.849 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2d7d65d6-ed7d-4720-9535-2b0829c84d2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.876 253542 DEBUG nova.virt.libvirt.driver [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.876 253542 DEBUG nova.virt.libvirt.driver [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.877 253542 DEBUG nova.virt.libvirt.driver [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:88:13:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.877 253542 DEBUG nova.virt.libvirt.driver [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:39:f0:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:50:47 compute-0 NetworkManager[48915]: <info>  [1764060647.8800] device (tap1e204e23-20): carrier: link connected
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.885 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[07cb6670-19c4-4215-a097-77e42f8faa79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.897 253542 DEBUG nova.virt.libvirt.guest [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:50:47 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:50:47 compute-0 nova_compute[253538]:   <nova:name>tempest-TestNetworkBasicOps-server-1448452289</nova:name>
Nov 25 08:50:47 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:50:47</nova:creationTime>
Nov 25 08:50:47 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:50:47 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:50:47 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:50:47 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:50:47 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:50:47 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:50:47 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:50:47 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:50:47 compute-0 nova_compute[253538]:     <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:50:47 compute-0 nova_compute[253538]:     <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:50:47 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:50:47 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:50:47 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:50:47 compute-0 nova_compute[253538]:     <nova:port uuid="d0945383-2a0d-4019-9b60-eea96d667c69">
Nov 25 08:50:47 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:50:47 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:50:47 compute-0 nova_compute[253538]:     <nova:port uuid="340ce0e3-8b72-4b40-afcb-53f30e6cc961">
Nov 25 08:50:47 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Nov 25 08:50:47 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:50:47 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:50:47 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:50:47 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.910 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d03c396-43be-4220-9299-c434f94351eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e204e23-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:32:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 318], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600247, 'reachable_time': 25621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365372, 'error': None, 'target': 'ovnmeta-1e204e23-2391-4196-9262-d69db603285d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:47 compute-0 nova_compute[253538]: 2025-11-25 08:50:47.919 253542 DEBUG oslo_concurrency.lockutils [None req-38cbd88f-9fda-4f44-ab67-acd4e2366ab1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "interface-49b75125-0ca4-438d-9f2a-1d130a6b5632-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.925 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0f9fa631-c64a-41d8-9751-3664517fa5b0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:3256'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600247, 'tstamp': 600247}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365373, 'error': None, 'target': 'ovnmeta-1e204e23-2391-4196-9262-d69db603285d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.943 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c78ac0f7-e13c-4471-a956-9a428867e831]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e204e23-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:32:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 318], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600247, 'reachable_time': 25621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 365374, 'error': None, 'target': 'ovnmeta-1e204e23-2391-4196-9262-d69db603285d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:47.977 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79ad4580-f050-4d3e-a82c-47377a40cade]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.052 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[10c6e1e7-e92a-4cab-8da1-7002d1922f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.054 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e204e23-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.055 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.055 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e204e23-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:48 compute-0 NetworkManager[48915]: <info>  [1764060648.0578] manager: (tap1e204e23-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Nov 25 08:50:48 compute-0 kernel: tap1e204e23-20: entered promiscuous mode
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.063 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e204e23-20, col_values=(('external_ids', {'iface-id': '6e90b682-b94d-4606-bcb2-f7667ba8c85c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:48 compute-0 ovn_controller[152859]: 2025-11-25T08:50:48Z|01085|binding|INFO|Releasing lport 6e90b682-b94d-4606-bcb2-f7667ba8c85c from this chassis (sb_readonly=0)
Nov 25 08:50:48 compute-0 nova_compute[253538]: 2025-11-25 08:50:48.064 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:48 compute-0 nova_compute[253538]: 2025-11-25 08:50:48.085 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.086 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e204e23-2391-4196-9262-d69db603285d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e204e23-2391-4196-9262-d69db603285d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.088 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[97d9f0a3-f46a-4239-81f7-350579700a04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.089 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-1e204e23-2391-4196-9262-d69db603285d
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/1e204e23-2391-4196-9262-d69db603285d.pid.haproxy
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 1e204e23-2391-4196-9262-d69db603285d
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:50:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:48.089 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1e204e23-2391-4196-9262-d69db603285d', 'env', 'PROCESS_TAG=haproxy-1e204e23-2391-4196-9262-d69db603285d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1e204e23-2391-4196-9262-d69db603285d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:50:48 compute-0 nova_compute[253538]: 2025-11-25 08:50:48.161 253542 DEBUG nova.compute.manager [req-4bfa353a-45ac-4dc7-aa5c-99f6dafdfd83 req-a177a61c-15b2-40bf-a747-639fffa7f665 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:48 compute-0 nova_compute[253538]: 2025-11-25 08:50:48.162 253542 DEBUG oslo_concurrency.lockutils [req-4bfa353a-45ac-4dc7-aa5c-99f6dafdfd83 req-a177a61c-15b2-40bf-a747-639fffa7f665 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:48 compute-0 nova_compute[253538]: 2025-11-25 08:50:48.162 253542 DEBUG oslo_concurrency.lockutils [req-4bfa353a-45ac-4dc7-aa5c-99f6dafdfd83 req-a177a61c-15b2-40bf-a747-639fffa7f665 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:48 compute-0 nova_compute[253538]: 2025-11-25 08:50:48.163 253542 DEBUG oslo_concurrency.lockutils [req-4bfa353a-45ac-4dc7-aa5c-99f6dafdfd83 req-a177a61c-15b2-40bf-a747-639fffa7f665 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:48 compute-0 nova_compute[253538]: 2025-11-25 08:50:48.163 253542 DEBUG nova.compute.manager [req-4bfa353a-45ac-4dc7-aa5c-99f6dafdfd83 req-a177a61c-15b2-40bf-a747-639fffa7f665 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] No waiting events found dispatching network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:50:48 compute-0 nova_compute[253538]: 2025-11-25 08:50:48.163 253542 WARNING nova.compute.manager [req-4bfa353a-45ac-4dc7-aa5c-99f6dafdfd83 req-a177a61c-15b2-40bf-a747-639fffa7f665 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received unexpected event network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 for instance with vm_state active and task_state None.
Nov 25 08:50:48 compute-0 podman[365284]: 2025-11-25 08:50:48.355439883 +0000 UTC m=+2.430631057 container remove 6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 25 08:50:48 compute-0 sudo[365176]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:48 compute-0 systemd[1]: libpod-conmon-6c9c0d3f27aa7494f24081cd0fbdd13661925a0ba01dee19e8276578267b6858.scope: Deactivated successfully.
Nov 25 08:50:48 compute-0 sudo[365401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:50:48 compute-0 sudo[365401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:48 compute-0 sudo[365401]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:48 compute-0 podman[365415]: 2025-11-25 08:50:48.48264548 +0000 UTC m=+0.030550370 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:50:48 compute-0 sudo[365444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:50:48 compute-0 sudo[365444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:48 compute-0 sudo[365444]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:48 compute-0 sudo[365469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:50:48 compute-0 sudo[365469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:48 compute-0 sudo[365469]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:48 compute-0 sudo[365494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:50:48 compute-0 sudo[365494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:48 compute-0 nova_compute[253538]: 2025-11-25 08:50:48.898 253542 DEBUG nova.network.neutron [req-0b48e77c-5c37-4c1a-ada9-9e99047b6fe5 req-a665ee81-929a-40e1-bc9a-9932c18cbcb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updated VIF entry in instance network info cache for port 340ce0e3-8b72-4b40-afcb-53f30e6cc961. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:50:48 compute-0 nova_compute[253538]: 2025-11-25 08:50:48.899 253542 DEBUG nova.network.neutron [req-0b48e77c-5c37-4c1a-ada9-9e99047b6fe5 req-a665ee81-929a-40e1-bc9a-9932c18cbcb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:50:48 compute-0 nova_compute[253538]: 2025-11-25 08:50:48.913 253542 DEBUG oslo_concurrency.lockutils [req-0b48e77c-5c37-4c1a-ada9-9e99047b6fe5 req-a665ee81-929a-40e1-bc9a-9932c18cbcb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:50:49 compute-0 podman[365415]: 2025-11-25 08:50:49.232827121 +0000 UTC m=+0.780731971 container create 6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 08:50:49 compute-0 systemd[1]: Started libpod-conmon-6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547.scope.
Nov 25 08:50:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcebdde85a3fd006ad7591ab9e8a651e0c91f4e265451ae67fa9db8eed4eafeb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:50:49 compute-0 ovn_controller[152859]: 2025-11-25T08:50:49Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:39:f0:9c 10.100.0.27
Nov 25 08:50:49 compute-0 ovn_controller[152859]: 2025-11-25T08:50:49Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:39:f0:9c 10.100.0.27
Nov 25 08:50:49 compute-0 ceph-mon[75015]: pgmap v2084: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Nov 25 08:50:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 76 op/s
Nov 25 08:50:49 compute-0 podman[365415]: 2025-11-25 08:50:49.710272959 +0000 UTC m=+1.258177829 container init 6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 08:50:49 compute-0 podman[365415]: 2025-11-25 08:50:49.71707865 +0000 UTC m=+1.264983500 container start 6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:50:49 compute-0 neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d[365536]: [NOTICE]   (365551) : New worker (365553) forked
Nov 25 08:50:49 compute-0 neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d[365536]: [NOTICE]   (365551) : Loading success.
Nov 25 08:50:50 compute-0 podman[365576]: 2025-11-25 08:50:50.249055467 +0000 UTC m=+0.084119524 container create 2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.282 253542 DEBUG nova.compute.manager [req-9943ac55-ce78-4f15-aa08-00086e801da9 req-f21460cf-760c-4596-a720-69c145adc6fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.282 253542 DEBUG oslo_concurrency.lockutils [req-9943ac55-ce78-4f15-aa08-00086e801da9 req-f21460cf-760c-4596-a720-69c145adc6fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.284 253542 DEBUG oslo_concurrency.lockutils [req-9943ac55-ce78-4f15-aa08-00086e801da9 req-f21460cf-760c-4596-a720-69c145adc6fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.284 253542 DEBUG oslo_concurrency.lockutils [req-9943ac55-ce78-4f15-aa08-00086e801da9 req-f21460cf-760c-4596-a720-69c145adc6fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.285 253542 DEBUG nova.compute.manager [req-9943ac55-ce78-4f15-aa08-00086e801da9 req-f21460cf-760c-4596-a720-69c145adc6fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] No waiting events found dispatching network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.286 253542 WARNING nova.compute.manager [req-9943ac55-ce78-4f15-aa08-00086e801da9 req-f21460cf-760c-4596-a720-69c145adc6fd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received unexpected event network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 for instance with vm_state active and task_state None.
Nov 25 08:50:50 compute-0 podman[365576]: 2025-11-25 08:50:50.201664507 +0000 UTC m=+0.036728644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:50:50 compute-0 systemd[1]: Started libpod-conmon-2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e.scope.
Nov 25 08:50:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.347 253542 DEBUG oslo_concurrency.lockutils [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "interface-49b75125-0ca4-438d-9f2a-1d130a6b5632-340ce0e3-8b72-4b40-afcb-53f30e6cc961" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.348 253542 DEBUG oslo_concurrency.lockutils [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "interface-49b75125-0ca4-438d-9f2a-1d130a6b5632-340ce0e3-8b72-4b40-afcb-53f30e6cc961" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.371 253542 DEBUG nova.objects.instance [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'flavor' on Instance uuid 49b75125-0ca4-438d-9f2a-1d130a6b5632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:50:50 compute-0 podman[365576]: 2025-11-25 08:50:50.373289583 +0000 UTC m=+0.208353730 container init 2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 08:50:50 compute-0 podman[365576]: 2025-11-25 08:50:50.38696457 +0000 UTC m=+0.222028657 container start 2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.395 253542 DEBUG nova.virt.libvirt.vif [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:50:18Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:50:50 compute-0 brave_varahamihira[365592]: 167 167
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.396 253542 DEBUG nova.network.os_vif_util [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:50:50 compute-0 systemd[1]: libpod-2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e.scope: Deactivated successfully.
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.396 253542 DEBUG nova.network.os_vif_util [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:50:50 compute-0 podman[365576]: 2025-11-25 08:50:50.402884936 +0000 UTC m=+0.237949023 container attach 2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 08:50:50 compute-0 podman[365576]: 2025-11-25 08:50:50.403706478 +0000 UTC m=+0.238770535 container died 2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.402 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.408 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.412 253542 DEBUG nova.virt.libvirt.driver [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Attempting to detach device tap340ce0e3-8b from instance 49b75125-0ca4-438d-9f2a-1d130a6b5632 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.413 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] detach device xml: <interface type="ethernet">
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:39:f0:9c"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <target dev="tap340ce0e3-8b"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]: </interface>
Nov 25 08:50:50 compute-0 nova_compute[253538]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.438 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.442 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface>not found in domain: <domain type='kvm' id='135'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <name>instance-0000006d</name>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <uuid>49b75125-0ca4-438d-9f2a-1d130a6b5632</uuid>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:name>tempest-TestNetworkBasicOps-server-1448452289</nova:name>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:50:47</nova:creationTime>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:port uuid="d0945383-2a0d-4019-9b60-eea96d667c69">
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:port uuid="340ce0e3-8b72-4b40-afcb-53f30e6cc961">
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:50:50 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <system>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <entry name='serial'>49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <entry name='uuid'>49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </system>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <os>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </os>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <features>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </features>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk' index='2'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       </source>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config' index='1'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       </source>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:88:13:51'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target dev='tapd0945383-2a'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:39:f0:9c'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target dev='tap340ce0e3-8b'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='net1'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log' append='off'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       </target>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log' append='off'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </console>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </input>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </input>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </input>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <video>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </video>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c434,c684</label>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c434,c684</imagelabel>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:50:50 compute-0 nova_compute[253538]: </domain>
Nov 25 08:50:50 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.442 253542 INFO nova.virt.libvirt.driver [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully detached device tap340ce0e3-8b from instance 49b75125-0ca4-438d-9f2a-1d130a6b5632 from the persistent domain config.
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.446 253542 DEBUG nova.virt.libvirt.driver [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] (1/8): Attempting to detach device tap340ce0e3-8b with device alias net1 from instance 49b75125-0ca4-438d-9f2a-1d130a6b5632 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.449 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] detach device xml: <interface type="ethernet">
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:39:f0:9c"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <target dev="tap340ce0e3-8b"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]: </interface>
Nov 25 08:50:50 compute-0 nova_compute[253538]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 08:50:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c5b357e7aa5e1e82865129946a0f8f4702059266ec20bc518aed620db2e52d5-merged.mount: Deactivated successfully.
Nov 25 08:50:50 compute-0 kernel: tap340ce0e3-8b (unregistering): left promiscuous mode
Nov 25 08:50:50 compute-0 NetworkManager[48915]: <info>  [1764060650.5710] device (tap340ce0e3-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:50:50 compute-0 ovn_controller[152859]: 2025-11-25T08:50:50Z|01086|binding|INFO|Releasing lport 340ce0e3-8b72-4b40-afcb-53f30e6cc961 from this chassis (sb_readonly=0)
Nov 25 08:50:50 compute-0 ovn_controller[152859]: 2025-11-25T08:50:50Z|01087|binding|INFO|Setting lport 340ce0e3-8b72-4b40-afcb-53f30e6cc961 down in Southbound
Nov 25 08:50:50 compute-0 ovn_controller[152859]: 2025-11-25T08:50:50Z|01088|binding|INFO|Removing iface tap340ce0e3-8b ovn-installed in OVS
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.584 253542 DEBUG nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Received event <DeviceRemovedEvent: 1764060650.5844204, 49b75125-0ca4-438d-9f2a-1d130a6b5632 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.585 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:50.589 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:f0:9c 10.100.0.27'], port_security=['fa:16:3e:39:f0:9c 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '49b75125-0ca4-438d-9f2a-1d130a6b5632', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e204e23-2391-4196-9262-d69db603285d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd480e0b0-4ff3-496c-bb44-a333ae5d1ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dda51c2a-7354-4889-adc9-442043cc4089, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=340ce0e3-8b72-4b40-afcb-53f30e6cc961) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:50:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:50.591 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 340ce0e3-8b72-4b40-afcb-53f30e6cc961 in datapath 1e204e23-2391-4196-9262-d69db603285d unbound from our chassis
Nov 25 08:50:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:50.594 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1e204e23-2391-4196-9262-d69db603285d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.587 253542 DEBUG nova.virt.libvirt.driver [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Start waiting for the detach event from libvirt for device tap340ce0e3-8b with device alias net1 for instance 49b75125-0ca4-438d-9f2a-1d130a6b5632 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.588 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:50:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:50.596 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[87bc1c9f-74cc-4603-8469-88353288045e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:50.597 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1e204e23-2391-4196-9262-d69db603285d namespace which is not needed anymore
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.600 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface>not found in domain: <domain type='kvm' id='135'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <name>instance-0000006d</name>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <uuid>49b75125-0ca4-438d-9f2a-1d130a6b5632</uuid>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:name>tempest-TestNetworkBasicOps-server-1448452289</nova:name>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:50:47</nova:creationTime>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:port uuid="d0945383-2a0d-4019-9b60-eea96d667c69">
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:port uuid="340ce0e3-8b72-4b40-afcb-53f30e6cc961">
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:50:50 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <system>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <entry name='serial'>49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <entry name='uuid'>49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </system>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <os>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </os>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <features>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </features>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk' index='2'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       </source>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config' index='1'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       </source>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:88:13:51'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target dev='tapd0945383-2a'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log' append='off'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       </target>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log' append='off'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </console>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </input>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </input>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </input>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <video>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </video>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c434,c684</label>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c434,c684</imagelabel>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:50:50 compute-0 nova_compute[253538]: </domain>
Nov 25 08:50:50 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.601 253542 INFO nova.virt.libvirt.driver [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully detached device tap340ce0e3-8b from instance 49b75125-0ca4-438d-9f2a-1d130a6b5632 from the live domain config.
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.602 253542 DEBUG nova.virt.libvirt.vif [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:50:18Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.602 253542 DEBUG nova.network.os_vif_util [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.603 253542 DEBUG nova.network.os_vif_util [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.604 253542 DEBUG os_vif [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.606 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.607 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap340ce0e3-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.611 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.616 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.622 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.625 253542 INFO os_vif [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b')
Nov 25 08:50:50 compute-0 nova_compute[253538]: 2025-11-25 08:50:50.626 253542 DEBUG nova.virt.libvirt.guest [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:name>tempest-TestNetworkBasicOps-server-1448452289</nova:name>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:50:50</nova:creationTime>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     <nova:port uuid="d0945383-2a0d-4019-9b60-eea96d667c69">
Nov 25 08:50:50 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:50:50 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:50:50 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:50:50 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:50:50 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:50:50 compute-0 podman[365576]: 2025-11-25 08:50:50.653177759 +0000 UTC m=+0.488241846 container remove 2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 08:50:50 compute-0 systemd[1]: libpod-conmon-2148d0fb1a969b16a5480ca4afb5df7b220952ca79471d37bc9cd83adb44969e.scope: Deactivated successfully.
Nov 25 08:50:50 compute-0 ceph-mon[75015]: pgmap v2085: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 76 op/s
Nov 25 08:50:50 compute-0 neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d[365536]: [NOTICE]   (365551) : haproxy version is 2.8.14-c23fe91
Nov 25 08:50:50 compute-0 neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d[365536]: [NOTICE]   (365551) : path to executable is /usr/sbin/haproxy
Nov 25 08:50:50 compute-0 neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d[365536]: [WARNING]  (365551) : Exiting Master process...
Nov 25 08:50:50 compute-0 neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d[365536]: [ALERT]    (365551) : Current worker (365553) exited with code 143 (Terminated)
Nov 25 08:50:50 compute-0 neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d[365536]: [WARNING]  (365551) : All workers exited. Exiting... (0)
Nov 25 08:50:50 compute-0 systemd[1]: libpod-6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547.scope: Deactivated successfully.
Nov 25 08:50:50 compute-0 podman[365630]: 2025-11-25 08:50:50.804927483 +0000 UTC m=+0.084928475 container died 6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:50:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcebdde85a3fd006ad7591ab9e8a651e0c91f4e265451ae67fa9db8eed4eafeb-merged.mount: Deactivated successfully.
Nov 25 08:50:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547-userdata-shm.mount: Deactivated successfully.
Nov 25 08:50:50 compute-0 podman[365651]: 2025-11-25 08:50:50.884457024 +0000 UTC m=+0.046449266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:50:51 compute-0 podman[365630]: 2025-11-25 08:50:51.094084568 +0000 UTC m=+0.374085490 container cleanup 6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:50:51 compute-0 systemd[1]: libpod-conmon-6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547.scope: Deactivated successfully.
Nov 25 08:50:51 compute-0 podman[365651]: 2025-11-25 08:50:51.145247007 +0000 UTC m=+0.307239269 container create 7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 08:50:51 compute-0 systemd[1]: Started libpod-conmon-7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac.scope.
Nov 25 08:50:51 compute-0 podman[365675]: 2025-11-25 08:50:51.234172299 +0000 UTC m=+0.114535198 container remove 6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:50:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.244 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cea430a7-4ee3-4189-98f6-032534167cc4]: (4, ('Tue Nov 25 08:50:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d (6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547)\n6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547\nTue Nov 25 08:50:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1e204e23-2391-4196-9262-d69db603285d (6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547)\n6c1d10e7495f789af54765ada4c97b3ece832943715dfcca0194921a1bf7f547\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.246 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d935e2-e206-42ab-bdf6-96503eec19ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.247 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e204e23-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.249 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:51 compute-0 kernel: tap1e204e23-20: left promiscuous mode
Nov 25 08:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdcb7116fe58680f24853c0dee11caca31899fce1df8c89627d37ce659d313f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdcb7116fe58680f24853c0dee11caca31899fce1df8c89627d37ce659d313f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdcb7116fe58680f24853c0dee11caca31899fce1df8c89627d37ce659d313f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdcb7116fe58680f24853c0dee11caca31899fce1df8c89627d37ce659d313f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:50:51 compute-0 podman[365651]: 2025-11-25 08:50:51.28273741 +0000 UTC m=+0.444729652 container init 7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.296 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.300 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[69e4bdcc-ed46-4774-b42c-99e220d2d83b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:51 compute-0 podman[365651]: 2025-11-25 08:50:51.301814481 +0000 UTC m=+0.463806723 container start 7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 08:50:51 compute-0 podman[365651]: 2025-11-25 08:50:51.317135711 +0000 UTC m=+0.479127963 container attach 7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 25 08:50:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.318 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[34e9d573-863b-4145-aaab-2fd1af56348f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.320 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad300bd-cf2e-4545-90e5-d1dca342b9c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.339 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56060baa-ebe3-4c40-9da9-3a242622273f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600238, 'reachable_time': 15002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365696, 'error': None, 'target': 'ovnmeta-1e204e23-2391-4196-9262-d69db603285d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d1e204e23\x2d2391\x2d4196\x2d9262\x2dd69db603285d.mount: Deactivated successfully.
Nov 25 08:50:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.346 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1e204e23-2391-4196-9262-d69db603285d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:50:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:51.346 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[6a33ca9e-010e-494a-b997-ce910123da90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.493 253542 DEBUG oslo_concurrency.lockutils [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.493 253542 DEBUG oslo_concurrency.lockutils [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.494 253542 DEBUG nova.network.neutron [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.562 253542 DEBUG nova.compute.manager [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-deleted-340ce0e3-8b72-4b40-afcb-53f30e6cc961 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.563 253542 INFO nova.compute.manager [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Neutron deleted interface 340ce0e3-8b72-4b40-afcb-53f30e6cc961; detaching it from the instance and deleting it from the info cache
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.563 253542 DEBUG nova.network.neutron [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.582 253542 DEBUG nova.objects.instance [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lazy-loading 'system_metadata' on Instance uuid 49b75125-0ca4-438d-9f2a-1d130a6b5632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.605 253542 DEBUG nova.objects.instance [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lazy-loading 'flavor' on Instance uuid 49b75125-0ca4-438d-9f2a-1d130a6b5632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.635 253542 DEBUG nova.virt.libvirt.vif [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:50:18Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.636 253542 DEBUG nova.network.os_vif_util [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converting VIF {"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.637 253542 DEBUG nova.network.os_vif_util [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.643 253542 DEBUG nova.virt.libvirt.guest [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.648 253542 DEBUG nova.virt.libvirt.guest [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface>not found in domain: <domain type='kvm' id='135'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <name>instance-0000006d</name>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <uuid>49b75125-0ca4-438d-9f2a-1d130a6b5632</uuid>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:name>tempest-TestNetworkBasicOps-server-1448452289</nova:name>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:50:50</nova:creationTime>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:port uuid="d0945383-2a0d-4019-9b60-eea96d667c69">
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:50:51 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <system>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <entry name='serial'>49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <entry name='uuid'>49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </system>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <os>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </os>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <features>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </features>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk' index='2'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       </source>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config' index='1'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       </source>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:88:13:51'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target dev='tapd0945383-2a'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log' append='off'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       </target>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log' append='off'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </console>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </input>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </input>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </input>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <video>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </video>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c434,c684</label>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c434,c684</imagelabel>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:50:51 compute-0 nova_compute[253538]: </domain>
Nov 25 08:50:51 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.657 253542 DEBUG nova.virt.libvirt.guest [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.662 253542 DEBUG nova.virt.libvirt.guest [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:39:f0:9c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap340ce0e3-8b"/></interface>not found in domain: <domain type='kvm' id='135'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <name>instance-0000006d</name>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <uuid>49b75125-0ca4-438d-9f2a-1d130a6b5632</uuid>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:name>tempest-TestNetworkBasicOps-server-1448452289</nova:name>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:50:50</nova:creationTime>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:port uuid="d0945383-2a0d-4019-9b60-eea96d667c69">
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:50:51 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <system>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <entry name='serial'>49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <entry name='uuid'>49b75125-0ca4-438d-9f2a-1d130a6b5632</entry>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </system>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <os>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </os>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <features>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </features>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk' index='2'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       </source>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/49b75125-0ca4-438d-9f2a-1d130a6b5632_disk.config' index='1'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       </source>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:88:13:51'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target dev='tapd0945383-2a'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log' append='off'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       </target>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632/console.log' append='off'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </console>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </input>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </input>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </input>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <video>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </video>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c434,c684</label>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c434,c684</imagelabel>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:50:51 compute-0 nova_compute[253538]: </domain>
Nov 25 08:50:51 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.662 253542 WARNING nova.virt.libvirt.driver [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Detaching interface fa:16:3e:39:f0:9c failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap340ce0e3-8b' not found.
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.663 253542 DEBUG nova.virt.libvirt.vif [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:50:18Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.663 253542 DEBUG nova.network.os_vif_util [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converting VIF {"id": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "address": "fa:16:3e:39:f0:9c", "network": {"id": "1e204e23-2391-4196-9262-d69db603285d", "bridge": "br-int", "label": "tempest-network-smoke--297673335", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340ce0e3-8b", "ovs_interfaceid": "340ce0e3-8b72-4b40-afcb-53f30e6cc961", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.664 253542 DEBUG nova.network.os_vif_util [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.664 253542 DEBUG os_vif [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.665 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.665 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap340ce0e3-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.666 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.667 253542 INFO os_vif [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:f0:9c,bridge_name='br-int',has_traffic_filtering=True,id=340ce0e3-8b72-4b40-afcb-53f30e6cc961,network=Network(1e204e23-2391-4196-9262-d69db603285d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340ce0e3-8b')
Nov 25 08:50:51 compute-0 nova_compute[253538]: 2025-11-25 08:50:51.668 253542 DEBUG nova.virt.libvirt.guest [req-669a3c3e-d6ef-4c8c-a255-63914030b9ad req-79d97acf-5f96-4dcc-9820-1d08086f1111 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:name>tempest-TestNetworkBasicOps-server-1448452289</nova:name>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:50:51</nova:creationTime>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     <nova:port uuid="d0945383-2a0d-4019-9b60-eea96d667c69">
Nov 25 08:50:51 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:50:51 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:50:51 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:50:51 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:50:51 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:50:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 775 KiB/s rd, 2.3 KiB/s wr, 27 op/s
Nov 25 08:50:52 compute-0 pensive_tharp[365691]: {
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:     "0": [
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:         {
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "devices": [
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "/dev/loop3"
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             ],
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "lv_name": "ceph_lv0",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "lv_size": "21470642176",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "name": "ceph_lv0",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "tags": {
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.cluster_name": "ceph",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.crush_device_class": "",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.encrypted": "0",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.osd_id": "0",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.type": "block",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.vdo": "0"
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             },
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "type": "block",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "vg_name": "ceph_vg0"
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:         }
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:     ],
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:     "1": [
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:         {
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "devices": [
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "/dev/loop4"
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             ],
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "lv_name": "ceph_lv1",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "lv_size": "21470642176",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "name": "ceph_lv1",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "tags": {
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.cluster_name": "ceph",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.crush_device_class": "",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.encrypted": "0",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.osd_id": "1",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.type": "block",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.vdo": "0"
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             },
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "type": "block",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "vg_name": "ceph_vg1"
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:         }
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:     ],
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:     "2": [
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:         {
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "devices": [
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "/dev/loop5"
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             ],
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "lv_name": "ceph_lv2",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "lv_size": "21470642176",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "name": "ceph_lv2",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "tags": {
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.cluster_name": "ceph",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.crush_device_class": "",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.encrypted": "0",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.osd_id": "2",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.type": "block",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:                 "ceph.vdo": "0"
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             },
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "type": "block",
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:             "vg_name": "ceph_vg2"
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:         }
Nov 25 08:50:52 compute-0 pensive_tharp[365691]:     ]
Nov 25 08:50:52 compute-0 pensive_tharp[365691]: }
Nov 25 08:50:52 compute-0 systemd[1]: libpod-7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac.scope: Deactivated successfully.
Nov 25 08:50:52 compute-0 conmon[365691]: conmon 7fe575f0d8cd2a2c305a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac.scope/container/memory.events
Nov 25 08:50:52 compute-0 podman[365651]: 2025-11-25 08:50:52.116135429 +0000 UTC m=+1.278127661 container died 7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:50:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-efdcb7116fe58680f24853c0dee11caca31899fce1df8c89627d37ce659d313f-merged.mount: Deactivated successfully.
Nov 25 08:50:52 compute-0 podman[365651]: 2025-11-25 08:50:52.175346345 +0000 UTC m=+1.337338577 container remove 7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:50:52 compute-0 systemd[1]: libpod-conmon-7fe575f0d8cd2a2c305adab67c87c00d0aa2380a5bd50d0554497cde18e308ac.scope: Deactivated successfully.
Nov 25 08:50:52 compute-0 sudo[365494]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:52 compute-0 sudo[365718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:50:52 compute-0 sudo[365718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:52 compute-0 sudo[365718]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:52 compute-0 sudo[365743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:50:52 compute-0 sudo[365743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:52 compute-0 sudo[365743]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:52 compute-0 nova_compute[253538]: 2025-11-25 08:50:52.405 253542 DEBUG nova.compute.manager [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-unplugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:52 compute-0 nova_compute[253538]: 2025-11-25 08:50:52.406 253542 DEBUG oslo_concurrency.lockutils [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:52 compute-0 nova_compute[253538]: 2025-11-25 08:50:52.406 253542 DEBUG oslo_concurrency.lockutils [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:52 compute-0 nova_compute[253538]: 2025-11-25 08:50:52.406 253542 DEBUG oslo_concurrency.lockutils [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:52 compute-0 nova_compute[253538]: 2025-11-25 08:50:52.407 253542 DEBUG nova.compute.manager [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] No waiting events found dispatching network-vif-unplugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:50:52 compute-0 nova_compute[253538]: 2025-11-25 08:50:52.407 253542 WARNING nova.compute.manager [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received unexpected event network-vif-unplugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 for instance with vm_state active and task_state None.
Nov 25 08:50:52 compute-0 nova_compute[253538]: 2025-11-25 08:50:52.407 253542 DEBUG nova.compute.manager [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:52 compute-0 nova_compute[253538]: 2025-11-25 08:50:52.407 253542 DEBUG oslo_concurrency.lockutils [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:52 compute-0 nova_compute[253538]: 2025-11-25 08:50:52.408 253542 DEBUG oslo_concurrency.lockutils [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:52 compute-0 nova_compute[253538]: 2025-11-25 08:50:52.408 253542 DEBUG oslo_concurrency.lockutils [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:52 compute-0 nova_compute[253538]: 2025-11-25 08:50:52.408 253542 DEBUG nova.compute.manager [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] No waiting events found dispatching network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:50:52 compute-0 nova_compute[253538]: 2025-11-25 08:50:52.408 253542 WARNING nova.compute.manager [req-57fa048d-afdb-4eb4-8d92-8535247a5792 req-7015037a-fa63-477b-a63e-039d7c0cb1a1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received unexpected event network-vif-plugged-340ce0e3-8b72-4b40-afcb-53f30e6cc961 for instance with vm_state active and task_state None.
Nov 25 08:50:52 compute-0 sudo[365768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:50:52 compute-0 sudo[365768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:52 compute-0 sudo[365768]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:52 compute-0 ovn_controller[152859]: 2025-11-25T08:50:52Z|01089|binding|INFO|Releasing lport 0ffa1e66-a7b3-404c-afe1-b89a3d2ec28f from this chassis (sb_readonly=0)
Nov 25 08:50:52 compute-0 ovn_controller[152859]: 2025-11-25T08:50:52Z|01090|binding|INFO|Releasing lport bf9779d5-46bc-415b-b1d2-7b9d4a76754d from this chassis (sb_readonly=0)
Nov 25 08:50:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:50:52 compute-0 nova_compute[253538]: 2025-11-25 08:50:52.517 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:52 compute-0 sudo[365793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:50:52 compute-0 sudo[365793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:52 compute-0 nova_compute[253538]: 2025-11-25 08:50:52.646 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:52 compute-0 ceph-mon[75015]: pgmap v2086: 321 pgs: 321 active+clean; 213 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 775 KiB/s rd, 2.3 KiB/s wr, 27 op/s
Nov 25 08:50:52 compute-0 ovn_controller[152859]: 2025-11-25T08:50:52Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:86:20:35 10.100.0.11
Nov 25 08:50:52 compute-0 ovn_controller[152859]: 2025-11-25T08:50:52Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:20:35 10.100.0.11
Nov 25 08:50:53 compute-0 podman[365854]: 2025-11-25 08:50:52.913517514 +0000 UTC m=+0.029282644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:50:53 compute-0 podman[365854]: 2025-11-25 08:50:53.030739604 +0000 UTC m=+0.146504754 container create 0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 08:50:53 compute-0 systemd[1]: Started libpod-conmon-0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed.scope.
Nov 25 08:50:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:50:53 compute-0 nova_compute[253538]: 2025-11-25 08:50:53.253 253542 INFO nova.network.neutron [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Port 340ce0e3-8b72-4b40-afcb-53f30e6cc961 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 08:50:53 compute-0 nova_compute[253538]: 2025-11-25 08:50:53.254 253542 DEBUG nova.network.neutron [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:50:53 compute-0 nova_compute[253538]: 2025-11-25 08:50:53.269 253542 DEBUG oslo_concurrency.lockutils [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:50:53 compute-0 nova_compute[253538]: 2025-11-25 08:50:53.287 253542 DEBUG oslo_concurrency.lockutils [None req-2d7710fb-da8b-49bd-bdba-95d60bef6db1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "interface-49b75125-0ca4-438d-9f2a-1d130a6b5632-340ce0e3-8b72-4b40-afcb-53f30e6cc961" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:50:53
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'default.rgw.meta', 'images', 'vms', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:50:53 compute-0 podman[365854]: 2025-11-25 08:50:53.330479551 +0000 UTC m=+0.446244701 container init 0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:50:53 compute-0 podman[365854]: 2025-11-25 08:50:53.347162228 +0000 UTC m=+0.462927348 container start 0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:50:53 compute-0 systemd[1]: libpod-0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed.scope: Deactivated successfully.
Nov 25 08:50:53 compute-0 beautiful_germain[365870]: 167 167
Nov 25 08:50:53 compute-0 conmon[365870]: conmon 0ce92cba8547b96d9280 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed.scope/container/memory.events
Nov 25 08:50:53 compute-0 nova_compute[253538]: 2025-11-25 08:50:53.419 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:53 compute-0 nova_compute[253538]: 2025-11-25 08:50:53.419 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:53 compute-0 nova_compute[253538]: 2025-11-25 08:50:53.420 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:53 compute-0 nova_compute[253538]: 2025-11-25 08:50:53.420 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:53 compute-0 nova_compute[253538]: 2025-11-25 08:50:53.420 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:53 compute-0 nova_compute[253538]: 2025-11-25 08:50:53.422 253542 INFO nova.compute.manager [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Terminating instance
Nov 25 08:50:53 compute-0 nova_compute[253538]: 2025-11-25 08:50:53.423 253542 DEBUG nova.compute.manager [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:50:53 compute-0 podman[365854]: 2025-11-25 08:50:53.602780533 +0000 UTC m=+0.718545733 container attach 0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 08:50:53 compute-0 podman[365854]: 2025-11-25 08:50:53.603635026 +0000 UTC m=+0.719400166 container died 0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 219 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 915 KiB/s rd, 752 KiB/s wr, 58 op/s
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:50:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:50:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e17eeb591c5998cef01394ac33c1bc1fc030fd5e1a8425c01f6d9c2d38218d8f-merged.mount: Deactivated successfully.
Nov 25 08:50:54 compute-0 kernel: tapd0945383-2a (unregistering): left promiscuous mode
Nov 25 08:50:54 compute-0 NetworkManager[48915]: <info>  [1764060654.0487] device (tapd0945383-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:50:54 compute-0 ovn_controller[152859]: 2025-11-25T08:50:54Z|01091|binding|INFO|Releasing lport d0945383-2a0d-4019-9b60-eea96d667c69 from this chassis (sb_readonly=0)
Nov 25 08:50:54 compute-0 ovn_controller[152859]: 2025-11-25T08:50:54Z|01092|binding|INFO|Setting lport d0945383-2a0d-4019-9b60-eea96d667c69 down in Southbound
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.064 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:54 compute-0 ovn_controller[152859]: 2025-11-25T08:50:54Z|01093|binding|INFO|Removing iface tapd0945383-2a ovn-installed in OVS
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:54.074 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:13:51 10.100.0.3'], port_security=['fa:16:3e:88:13:51 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '49b75125-0ca4-438d-9f2a-1d130a6b5632', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c833a599-5a18-44d2-82ad-b16f7476c220', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4dacd795-ee8f-4895-b3fe-aaa7865132b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d0e444d-0863-4949-9e8d-d9b0bfd89ac2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d0945383-2a0d-4019-9b60-eea96d667c69) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:50:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:54.075 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d0945383-2a0d-4019-9b60-eea96d667c69 in datapath c833a599-5a18-44d2-82ad-b16f7476c220 unbound from our chassis
Nov 25 08:50:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:54.076 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c833a599-5a18-44d2-82ad-b16f7476c220, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.077 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:54.077 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8a2ff5-4b38-4b44-a6db-068d1309a68e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:54.078 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220 namespace which is not needed anymore
Nov 25 08:50:54 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Nov 25 08:50:54 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006d.scope: Consumed 15.495s CPU time.
Nov 25 08:50:54 compute-0 systemd-machined[215790]: Machine qemu-135-instance-0000006d terminated.
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.270 253542 INFO nova.virt.libvirt.driver [-] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Instance destroyed successfully.
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.271 253542 DEBUG nova.objects.instance [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 49b75125-0ca4-438d-9f2a-1d130a6b5632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.290 253542 DEBUG nova.virt.libvirt.vif [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:50:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1448452289',display_name='tempest-TestNetworkBasicOps-server-1448452289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1448452289',id=109,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ+ODzmwXlP8InQP3rckT8crJoUn4xNbOC1krI29pEqx8UoV+guV7Zli9feI6GoWEAqOv9PTHopksYzuWw3Q3guPlCcrWq73qjwaBwfCdvh1/iUEG/sFH1cKmRJAuAfBlw==',key_name='tempest-TestNetworkBasicOps-406479871',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-3nsjtln8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:50:18Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=49b75125-0ca4-438d-9f2a-1d130a6b5632,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.290 253542 DEBUG nova.network.os_vif_util [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.291 253542 DEBUG nova.network.os_vif_util [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:13:51,bridge_name='br-int',has_traffic_filtering=True,id=d0945383-2a0d-4019-9b60-eea96d667c69,network=Network(c833a599-5a18-44d2-82ad-b16f7476c220),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0945383-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.292 253542 DEBUG os_vif [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:13:51,bridge_name='br-int',has_traffic_filtering=True,id=d0945383-2a0d-4019-9b60-eea96d667c69,network=Network(c833a599-5a18-44d2-82ad-b16f7476c220),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0945383-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.294 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.294 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0945383-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.296 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.300 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.302 253542 INFO os_vif [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:13:51,bridge_name='br-int',has_traffic_filtering=True,id=d0945383-2a0d-4019-9b60-eea96d667c69,network=Network(c833a599-5a18-44d2-82ad-b16f7476c220),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0945383-2a')
Nov 25 08:50:54 compute-0 podman[365854]: 2025-11-25 08:50:54.375510728 +0000 UTC m=+1.491275838 container remove 0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 25 08:50:54 compute-0 systemd[1]: libpod-conmon-0ce92cba8547b96d92801884d7d5f0ff5ad314591d6c3ff37c0cdbef900f54ed.scope: Deactivated successfully.
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.676 253542 DEBUG nova.compute.manager [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-changed-d0945383-2a0d-4019-9b60-eea96d667c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.677 253542 DEBUG nova.compute.manager [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Refreshing instance network info cache due to event network-changed-d0945383-2a0d-4019-9b60-eea96d667c69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.679 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.680 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:50:54 compute-0 nova_compute[253538]: 2025-11-25 08:50:54.681 253542 DEBUG nova.network.neutron [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Refreshing network info cache for port d0945383-2a0d-4019-9b60-eea96d667c69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:50:54 compute-0 neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220[364373]: [NOTICE]   (364379) : haproxy version is 2.8.14-c23fe91
Nov 25 08:50:54 compute-0 neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220[364373]: [NOTICE]   (364379) : path to executable is /usr/sbin/haproxy
Nov 25 08:50:54 compute-0 neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220[364373]: [WARNING]  (364379) : Exiting Master process...
Nov 25 08:50:54 compute-0 neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220[364373]: [WARNING]  (364379) : Exiting Master process...
Nov 25 08:50:54 compute-0 neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220[364373]: [ALERT]    (364379) : Current worker (364382) exited with code 143 (Terminated)
Nov 25 08:50:54 compute-0 neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220[364373]: [WARNING]  (364379) : All workers exited. Exiting... (0)
Nov 25 08:50:54 compute-0 systemd[1]: libpod-56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a.scope: Deactivated successfully.
Nov 25 08:50:54 compute-0 podman[365939]: 2025-11-25 08:50:54.807649531 +0000 UTC m=+0.360203777 container died 56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:50:54 compute-0 podman[365956]: 2025-11-25 08:50:54.797845429 +0000 UTC m=+0.277736389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:50:55 compute-0 ceph-mon[75015]: pgmap v2087: 321 pgs: 321 active+clean; 219 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 915 KiB/s rd, 752 KiB/s wr, 58 op/s
Nov 25 08:50:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a-userdata-shm.mount: Deactivated successfully.
Nov 25 08:50:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c75c05cfa6c1a51f5b9e1bfff96754528c2b327c0c5c0ba20f6d03e2e0d2380a-merged.mount: Deactivated successfully.
Nov 25 08:50:55 compute-0 nova_compute[253538]: 2025-11-25 08:50:55.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:50:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 239 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 243 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Nov 25 08:50:55 compute-0 nova_compute[253538]: 2025-11-25 08:50:55.844 253542 DEBUG nova.network.neutron [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updated VIF entry in instance network info cache for port d0945383-2a0d-4019-9b60-eea96d667c69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:50:55 compute-0 nova_compute[253538]: 2025-11-25 08:50:55.844 253542 DEBUG nova.network.neutron [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [{"id": "d0945383-2a0d-4019-9b60-eea96d667c69", "address": "fa:16:3e:88:13:51", "network": {"id": "c833a599-5a18-44d2-82ad-b16f7476c220", "bridge": "br-int", "label": "tempest-network-smoke--480494293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0945383-2a", "ovs_interfaceid": "d0945383-2a0d-4019-9b60-eea96d667c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:50:55 compute-0 nova_compute[253538]: 2025-11-25 08:50:55.864 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-49b75125-0ca4-438d-9f2a-1d130a6b5632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:50:55 compute-0 nova_compute[253538]: 2025-11-25 08:50:55.864 253542 DEBUG nova.compute.manager [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-unplugged-d0945383-2a0d-4019-9b60-eea96d667c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:55 compute-0 nova_compute[253538]: 2025-11-25 08:50:55.864 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:55 compute-0 nova_compute[253538]: 2025-11-25 08:50:55.864 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:55 compute-0 nova_compute[253538]: 2025-11-25 08:50:55.864 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:55 compute-0 nova_compute[253538]: 2025-11-25 08:50:55.864 253542 DEBUG nova.compute.manager [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] No waiting events found dispatching network-vif-unplugged-d0945383-2a0d-4019-9b60-eea96d667c69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:50:55 compute-0 nova_compute[253538]: 2025-11-25 08:50:55.865 253542 DEBUG nova.compute.manager [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-unplugged-d0945383-2a0d-4019-9b60-eea96d667c69 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:50:55 compute-0 nova_compute[253538]: 2025-11-25 08:50:55.865 253542 DEBUG nova.compute.manager [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:55 compute-0 nova_compute[253538]: 2025-11-25 08:50:55.865 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:55 compute-0 nova_compute[253538]: 2025-11-25 08:50:55.865 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:55 compute-0 nova_compute[253538]: 2025-11-25 08:50:55.865 253542 DEBUG oslo_concurrency.lockutils [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:55 compute-0 nova_compute[253538]: 2025-11-25 08:50:55.865 253542 DEBUG nova.compute.manager [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] No waiting events found dispatching network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:50:55 compute-0 nova_compute[253538]: 2025-11-25 08:50:55.866 253542 WARNING nova.compute.manager [req-99bfc4e8-0491-46f3-b87a-763c7ce5fdff req-53da77a6-7183-4855-b82f-8ef4b2c3c01a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received unexpected event network-vif-plugged-d0945383-2a0d-4019-9b60-eea96d667c69 for instance with vm_state active and task_state deleting.
Nov 25 08:50:55 compute-0 podman[365956]: 2025-11-25 08:50:55.886252138 +0000 UTC m=+1.366143068 container create 1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 08:50:55 compute-0 podman[365939]: 2025-11-25 08:50:55.915741508 +0000 UTC m=+1.468295744 container cleanup 56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 08:50:55 compute-0 systemd[1]: Started libpod-conmon-1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76.scope.
Nov 25 08:50:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c08fa59ea80ba0097df1768a7820a88f02490a03e5d195c2e16a1a30e8d5ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:50:56 compute-0 systemd[1]: libpod-conmon-56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a.scope: Deactivated successfully.
Nov 25 08:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c08fa59ea80ba0097df1768a7820a88f02490a03e5d195c2e16a1a30e8d5ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c08fa59ea80ba0097df1768a7820a88f02490a03e5d195c2e16a1a30e8d5ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c08fa59ea80ba0097df1768a7820a88f02490a03e5d195c2e16a1a30e8d5ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:50:56 compute-0 podman[365956]: 2025-11-25 08:50:56.040568741 +0000 UTC m=+1.520459651 container init 1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 08:50:56 compute-0 podman[365956]: 2025-11-25 08:50:56.053688492 +0000 UTC m=+1.533579382 container start 1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 25 08:50:56 compute-0 podman[365988]: 2025-11-25 08:50:56.057398902 +0000 UTC m=+0.107294345 container remove 56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:50:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.067 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2e315e-9999-49dc-9b8c-0495ef4fac75]: (4, ('Tue Nov 25 08:50:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220 (56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a)\n56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a\nTue Nov 25 08:50:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220 (56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a)\n56678547c53f8e4e6576d65570c14bac55f5947eb901a1ae850e23bbe040142a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.069 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb2a851-efe6-4bd0-8da6-6fc6bb2589ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.070 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc833a599-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:50:56 compute-0 podman[365956]: 2025-11-25 08:50:56.07263095 +0000 UTC m=+1.552521840 container attach 1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:50:56 compute-0 kernel: tapc833a599-50: left promiscuous mode
Nov 25 08:50:56 compute-0 nova_compute[253538]: 2025-11-25 08:50:56.073 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:56 compute-0 nova_compute[253538]: 2025-11-25 08:50:56.087 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.091 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bb16900d-a3d8-4146-b275-d87d95fd51a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.118 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ae600429-b93e-446e-9dfd-c3f403fe6d83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.120 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9686ff0b-ec77-4ac7-ad00-9d68ce314fa4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.138 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[91bb9c11-a0eb-46db-b184-5a1a4562e6ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597187, 'reachable_time': 41576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366009, 'error': None, 'target': 'ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:56 compute-0 systemd[1]: run-netns-ovnmeta\x2dc833a599\x2d5a18\x2d44d2\x2d82ad\x2db16f7476c220.mount: Deactivated successfully.
Nov 25 08:50:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.143 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c833a599-5a18-44d2-82ad-b16f7476c220 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:50:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:50:56.143 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[78803b91-ed67-49a4-b280-5abd50cc12e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:50:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Nov 25 08:50:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Nov 25 08:50:56 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Nov 25 08:50:56 compute-0 nova_compute[253538]: 2025-11-25 08:50:56.398 253542 INFO nova.virt.libvirt.driver [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Deleting instance files /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632_del
Nov 25 08:50:56 compute-0 nova_compute[253538]: 2025-11-25 08:50:56.399 253542 INFO nova.virt.libvirt.driver [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Deletion of /var/lib/nova/instances/49b75125-0ca4-438d-9f2a-1d130a6b5632_del complete
Nov 25 08:50:56 compute-0 nova_compute[253538]: 2025-11-25 08:50:56.498 253542 INFO nova.compute.manager [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Took 3.08 seconds to destroy the instance on the hypervisor.
Nov 25 08:50:56 compute-0 nova_compute[253538]: 2025-11-25 08:50:56.499 253542 DEBUG oslo.service.loopingcall [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:50:56 compute-0 nova_compute[253538]: 2025-11-25 08:50:56.500 253542 DEBUG nova.compute.manager [-] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:50:56 compute-0 nova_compute[253538]: 2025-11-25 08:50:56.500 253542 DEBUG nova.network.neutron [-] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:50:57 compute-0 ceph-mon[75015]: pgmap v2088: 321 pgs: 321 active+clean; 239 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 243 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Nov 25 08:50:57 compute-0 ceph-mon[75015]: osdmap e238: 3 total, 3 up, 3 in
Nov 25 08:50:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Nov 25 08:50:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Nov 25 08:50:57 compute-0 adoring_faraday[365989]: {
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:         "osd_id": 1,
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:         "type": "bluestore"
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:     },
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:         "osd_id": 2,
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:         "type": "bluestore"
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:     },
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:         "osd_id": 0,
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:         "type": "bluestore"
Nov 25 08:50:57 compute-0 adoring_faraday[365989]:     }
Nov 25 08:50:57 compute-0 adoring_faraday[365989]: }
Nov 25 08:50:57 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Nov 25 08:50:57 compute-0 systemd[1]: libpod-1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76.scope: Deactivated successfully.
Nov 25 08:50:57 compute-0 podman[365956]: 2025-11-25 08:50:57.234670061 +0000 UTC m=+2.714560951 container died 1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:50:57 compute-0 systemd[1]: libpod-1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76.scope: Consumed 1.152s CPU time.
Nov 25 08:50:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-45c08fa59ea80ba0097df1768a7820a88f02490a03e5d195c2e16a1a30e8d5ca-merged.mount: Deactivated successfully.
Nov 25 08:50:57 compute-0 podman[365956]: 2025-11-25 08:50:57.295768747 +0000 UTC m=+2.775659637 container remove 1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:50:57 compute-0 systemd[1]: libpod-conmon-1dd6295ba9701bc3f9c74e132c6fa64bde4f40cb63f75dcaa25dbbdff29adc76.scope: Deactivated successfully.
Nov 25 08:50:57 compute-0 sudo[365793]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:50:57 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:50:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:50:57 compute-0 nova_compute[253538]: 2025-11-25 08:50:57.358 253542 DEBUG nova.network.neutron [-] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:50:57 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:50:57 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 00a433db-0208-4058-ab2c-5b88c0c04df6 does not exist
Nov 25 08:50:57 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 906afcbe-3e35-4330-9d29-0f9a1d073551 does not exist
Nov 25 08:50:57 compute-0 nova_compute[253538]: 2025-11-25 08:50:57.372 253542 INFO nova.compute.manager [-] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Took 0.87 seconds to deallocate network for instance.
Nov 25 08:50:57 compute-0 nova_compute[253538]: 2025-11-25 08:50:57.416 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:57 compute-0 nova_compute[253538]: 2025-11-25 08:50:57.417 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:57 compute-0 nova_compute[253538]: 2025-11-25 08:50:57.442 253542 DEBUG nova.compute.manager [req-c53d841e-b80c-49c3-a820-2c9c24502234 req-03439cd3-faa4-4710-9680-9bfe2d35926e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Received event network-vif-deleted-d0945383-2a0d-4019-9b60-eea96d667c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:50:57 compute-0 sudo[366053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:50:57 compute-0 sudo[366053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:57 compute-0 sudo[366053]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:50:57 compute-0 nova_compute[253538]: 2025-11-25 08:50:57.491 253542 DEBUG oslo_concurrency.processutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:50:57 compute-0 sudo[366078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:50:57 compute-0 sudo[366078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:50:57 compute-0 sudo[366078]: pam_unix(sudo:session): session closed for user root
Nov 25 08:50:57 compute-0 nova_compute[253538]: 2025-11-25 08:50:57.651 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 199 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 456 KiB/s rd, 3.2 MiB/s wr, 118 op/s
Nov 25 08:50:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:50:57 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/198989968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:50:57 compute-0 nova_compute[253538]: 2025-11-25 08:50:57.990 253542 DEBUG oslo_concurrency.processutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:50:58 compute-0 nova_compute[253538]: 2025-11-25 08:50:58.000 253542 DEBUG nova.compute.provider_tree [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:50:58 compute-0 nova_compute[253538]: 2025-11-25 08:50:58.025 253542 DEBUG nova.scheduler.client.report [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:50:58 compute-0 nova_compute[253538]: 2025-11-25 08:50:58.044 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:58 compute-0 nova_compute[253538]: 2025-11-25 08:50:58.069 253542 INFO nova.scheduler.client.report [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 49b75125-0ca4-438d-9f2a-1d130a6b5632
Nov 25 08:50:58 compute-0 nova_compute[253538]: 2025-11-25 08:50:58.077 253542 INFO nova.compute.manager [None req-75344707-4b45-4ca5-bd88-ba2015dc36f5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Get console output
Nov 25 08:50:58 compute-0 nova_compute[253538]: 2025-11-25 08:50:58.088 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:50:58 compute-0 nova_compute[253538]: 2025-11-25 08:50:58.145 253542 DEBUG oslo_concurrency.lockutils [None req-fde44410-1549-494f-a72d-57b9cfd054cd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "49b75125-0ca4-438d-9f2a-1d130a6b5632" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:50:58 compute-0 ceph-mon[75015]: osdmap e239: 3 total, 3 up, 3 in
Nov 25 08:50:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:50:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:50:58 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/198989968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:50:58 compute-0 nova_compute[253538]: 2025-11-25 08:50:58.401 253542 DEBUG oslo_concurrency.lockutils [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:50:58 compute-0 nova_compute[253538]: 2025-11-25 08:50:58.402 253542 DEBUG oslo_concurrency.lockutils [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:50:58 compute-0 nova_compute[253538]: 2025-11-25 08:50:58.402 253542 INFO nova.compute.manager [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Rebooting instance
Nov 25 08:50:58 compute-0 nova_compute[253538]: 2025-11-25 08:50:58.415 253542 DEBUG oslo_concurrency.lockutils [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:50:58 compute-0 nova_compute[253538]: 2025-11-25 08:50:58.416 253542 DEBUG oslo_concurrency.lockutils [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquired lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:50:58 compute-0 nova_compute[253538]: 2025-11-25 08:50:58.416 253542 DEBUG nova.network.neutron [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:50:59 compute-0 ceph-mon[75015]: pgmap v2091: 321 pgs: 321 active+clean; 199 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 456 KiB/s rd, 3.2 MiB/s wr, 118 op/s
Nov 25 08:50:59 compute-0 nova_compute[253538]: 2025-11-25 08:50:59.297 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:50:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 497 KiB/s rd, 3.2 MiB/s wr, 180 op/s
Nov 25 08:51:00 compute-0 ceph-mon[75015]: pgmap v2092: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 497 KiB/s rd, 3.2 MiB/s wr, 180 op/s
Nov 25 08:51:00 compute-0 sshd[189888]: Timeout before authentication for connection from 45.78.217.205 to 38.102.83.169, pid = 361282
Nov 25 08:51:00 compute-0 nova_compute[253538]: 2025-11-25 08:51:00.959 253542 DEBUG nova.network.neutron [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updating instance_info_cache with network_info: [{"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:51:00 compute-0 nova_compute[253538]: 2025-11-25 08:51:00.988 253542 DEBUG oslo_concurrency.lockutils [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Releasing lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:51:00 compute-0 nova_compute[253538]: 2025-11-25 08:51:00.991 253542 DEBUG nova.compute.manager [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 133 op/s
Nov 25 08:51:02 compute-0 ovn_controller[152859]: 2025-11-25T08:51:02Z|01094|binding|INFO|Releasing lport 0ffa1e66-a7b3-404c-afe1-b89a3d2ec28f from this chassis (sb_readonly=0)
Nov 25 08:51:02 compute-0 nova_compute[253538]: 2025-11-25 08:51:02.373 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:51:02 compute-0 nova_compute[253538]: 2025-11-25 08:51:02.653 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:02 compute-0 ceph-mon[75015]: pgmap v2093: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 133 op/s
Nov 25 08:51:02 compute-0 podman[366125]: 2025-11-25 08:51:02.85051767 +0000 UTC m=+0.081460523 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 08:51:03 compute-0 kernel: tap0547929c-86 (unregistering): left promiscuous mode
Nov 25 08:51:03 compute-0 NetworkManager[48915]: <info>  [1764060663.4906] device (tap0547929c-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:51:03 compute-0 ovn_controller[152859]: 2025-11-25T08:51:03Z|01095|binding|INFO|Releasing lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 from this chassis (sb_readonly=0)
Nov 25 08:51:03 compute-0 ovn_controller[152859]: 2025-11-25T08:51:03Z|01096|binding|INFO|Setting lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 down in Southbound
Nov 25 08:51:03 compute-0 ovn_controller[152859]: 2025-11-25T08:51:03Z|01097|binding|INFO|Removing iface tap0547929c-86 ovn-installed in OVS
Nov 25 08:51:03 compute-0 nova_compute[253538]: 2025-11-25 08:51:03.506 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.513 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:20:35 10.100.0.11'], port_security=['fa:16:3e:86:20:35 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b4f98996-3a98-41ad-af66-af37066515d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0613062-c56d-4f59-a1bd-5487b9cae905', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'db6ee56c-6009-455c-94cc-12101966dbd4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eca56c04-944f-49ab-8f25-403bf5089348, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0547929c-86ba-4aaa-869f-c7e2b5ea7e67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:51:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.515 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 in datapath c0613062-c56d-4f59-a1bd-5487b9cae905 unbound from our chassis
Nov 25 08:51:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.515 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c0613062-c56d-4f59-a1bd-5487b9cae905, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:51:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.517 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[582c7d5b-ec68-4866-b01c-031219518b6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.518 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 namespace which is not needed anymore
Nov 25 08:51:03 compute-0 nova_compute[253538]: 2025-11-25 08:51:03.528 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:03 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Nov 25 08:51:03 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006e.scope: Consumed 14.817s CPU time.
Nov 25 08:51:03 compute-0 systemd-machined[215790]: Machine qemu-136-instance-0000006e terminated.
Nov 25 08:51:03 compute-0 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[364882]: [NOTICE]   (364886) : haproxy version is 2.8.14-c23fe91
Nov 25 08:51:03 compute-0 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[364882]: [NOTICE]   (364886) : path to executable is /usr/sbin/haproxy
Nov 25 08:51:03 compute-0 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[364882]: [WARNING]  (364886) : Exiting Master process...
Nov 25 08:51:03 compute-0 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[364882]: [ALERT]    (364886) : Current worker (364888) exited with code 143 (Terminated)
Nov 25 08:51:03 compute-0 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[364882]: [WARNING]  (364886) : All workers exited. Exiting... (0)
Nov 25 08:51:03 compute-0 systemd[1]: libpod-12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af.scope: Deactivated successfully.
Nov 25 08:51:03 compute-0 podman[366170]: 2025-11-25 08:51:03.685474031 +0000 UTC m=+0.053775441 container died 12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:51:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 115 KiB/s wr, 110 op/s
Nov 25 08:51:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af-userdata-shm.mount: Deactivated successfully.
Nov 25 08:51:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cf67b50908d3dd49b2c0a0481b4509ed99ab81eb1cd3289fe65b839d4061e7d-merged.mount: Deactivated successfully.
Nov 25 08:51:03 compute-0 nova_compute[253538]: 2025-11-25 08:51:03.732 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:03 compute-0 nova_compute[253538]: 2025-11-25 08:51:03.736 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:03 compute-0 podman[366170]: 2025-11-25 08:51:03.737408452 +0000 UTC m=+0.105709852 container cleanup 12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:51:03 compute-0 systemd[1]: libpod-conmon-12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af.scope: Deactivated successfully.
Nov 25 08:51:03 compute-0 podman[366205]: 2025-11-25 08:51:03.816632013 +0000 UTC m=+0.052576679 container remove 12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 08:51:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.827 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1752eac3-6274-4b05-bffb-5300166fd1fb]: (4, ('Tue Nov 25 08:51:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 (12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af)\n12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af\nTue Nov 25 08:51:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 (12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af)\n12d6a8bafef2877a643e0b7a66dd30102487d9445daa265a370bb0402a6110af\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.829 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[110d7db9-aff3-43f8-b21d-dcdb78de8858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.831 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0613062-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:03 compute-0 nova_compute[253538]: 2025-11-25 08:51:03.833 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:03 compute-0 kernel: tapc0613062-c0: left promiscuous mode
Nov 25 08:51:03 compute-0 nova_compute[253538]: 2025-11-25 08:51:03.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:03 compute-0 nova_compute[253538]: 2025-11-25 08:51:03.863 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.866 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a6e6f2-5fcc-41a1-bf24-288f19a9fd14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.881 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[621bf3bb-5e8c-44b9-b8da-21cb337888a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.883 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[38384457-dd65-4930-afa1-5267586314cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.904 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6c182968-a11f-415a-81a4-1a7033af170c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599103, 'reachable_time': 39433, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366223, 'error': None, 'target': 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.906 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:51:03 compute-0 systemd[1]: run-netns-ovnmeta\x2dc0613062\x2dc56d\x2d4f59\x2da1bd\x2d5487b9cae905.mount: Deactivated successfully.
Nov 25 08:51:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:03.907 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[235b6405-31fb-4d25-bd12-3ed8220d5bc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.125 253542 DEBUG nova.compute.manager [req-4ec39d64-09b8-48b0-9c7e-7a16f3852e27 req-42104617-5041-4f99-9342-ebb97dc58557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-unplugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.127 253542 DEBUG oslo_concurrency.lockutils [req-4ec39d64-09b8-48b0-9c7e-7a16f3852e27 req-42104617-5041-4f99-9342-ebb97dc58557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.127 253542 DEBUG oslo_concurrency.lockutils [req-4ec39d64-09b8-48b0-9c7e-7a16f3852e27 req-42104617-5041-4f99-9342-ebb97dc58557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.128 253542 DEBUG oslo_concurrency.lockutils [req-4ec39d64-09b8-48b0-9c7e-7a16f3852e27 req-42104617-5041-4f99-9342-ebb97dc58557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.128 253542 DEBUG nova.compute.manager [req-4ec39d64-09b8-48b0-9c7e-7a16f3852e27 req-42104617-5041-4f99-9342-ebb97dc58557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] No waiting events found dispatching network-vif-unplugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.129 253542 WARNING nova.compute.manager [req-4ec39d64-09b8-48b0-9c7e-7a16f3852e27 req-42104617-5041-4f99-9342-ebb97dc58557 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received unexpected event network-vif-unplugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for instance with vm_state active and task_state reboot_started.
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.142 253542 INFO nova.virt.libvirt.driver [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Instance shutdown successfully.
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007600361398710686 of space, bias 1.0, pg target 0.22801084196132057 quantized to 32 (current 32)
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:51:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:51:04 compute-0 kernel: tap0547929c-86: entered promiscuous mode
Nov 25 08:51:04 compute-0 systemd-udevd[366153]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:51:04 compute-0 NetworkManager[48915]: <info>  [1764060664.2299] manager: (tap0547929c-86): new Tun device (/org/freedesktop/NetworkManager/Devices/448)
Nov 25 08:51:04 compute-0 ovn_controller[152859]: 2025-11-25T08:51:04Z|01098|binding|INFO|Claiming lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for this chassis.
Nov 25 08:51:04 compute-0 ovn_controller[152859]: 2025-11-25T08:51:04Z|01099|binding|INFO|0547929c-86ba-4aaa-869f-c7e2b5ea7e67: Claiming fa:16:3e:86:20:35 10.100.0.11
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.228 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.239 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:20:35 10.100.0.11'], port_security=['fa:16:3e:86:20:35 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b4f98996-3a98-41ad-af66-af37066515d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0613062-c56d-4f59-a1bd-5487b9cae905', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'db6ee56c-6009-455c-94cc-12101966dbd4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eca56c04-944f-49ab-8f25-403bf5089348, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0547929c-86ba-4aaa-869f-c7e2b5ea7e67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.241 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 in datapath c0613062-c56d-4f59-a1bd-5487b9cae905 bound to our chassis
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.243 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c0613062-c56d-4f59-a1bd-5487b9cae905
Nov 25 08:51:04 compute-0 NetworkManager[48915]: <info>  [1764060664.2445] device (tap0547929c-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:51:04 compute-0 NetworkManager[48915]: <info>  [1764060664.2463] device (tap0547929c-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.252 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:04 compute-0 ovn_controller[152859]: 2025-11-25T08:51:04Z|01100|binding|INFO|Setting lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 ovn-installed in OVS
Nov 25 08:51:04 compute-0 ovn_controller[152859]: 2025-11-25T08:51:04Z|01101|binding|INFO|Setting lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 up in Southbound
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.255 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.258 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.262 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd0a604-875a-43e0-874f-07fc9ffdef6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.263 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc0613062-c1 in ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.266 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc0613062-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.266 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cf122442-511a-43b7-9053-e9296d57b06e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.267 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[74f97bf6-eca1-4cc4-9256-f0f29fb31049]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.281 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[0d36343b-db2e-43e2-bee9-c2482b8ef950]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 systemd-machined[215790]: New machine qemu-137-instance-0000006e.
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.296 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec903d7-5181-4534-a3db-20403b2914f0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.298 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:04 compute-0 systemd[1]: Started Virtual Machine qemu-137-instance-0000006e.
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.348 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3e067c61-fdd2-4580-a9fe-562e97288a6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.356 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef0ba18-172b-452c-ae17-21ca55dfe967]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 NetworkManager[48915]: <info>  [1764060664.3575] manager: (tapc0613062-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/449)
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.407 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b4df58c3-cb8b-4bfd-bef5-48e42e86104c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.411 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[65c82f1c-0641-47ca-b1b5-17d21eaff36a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 NetworkManager[48915]: <info>  [1764060664.4506] device (tapc0613062-c0): carrier: link connected
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.461 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ad389578-1bc6-4e39-98d8-4db036b89d0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.491 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e915cf-44dc-4801-a75a-58ee340d9d20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc0613062-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a3:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 322], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601904, 'reachable_time': 23683, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366267, 'error': None, 'target': 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.512 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf41db1-8bc9-43a8-b774-31592e1f3375]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:a31e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601904, 'tstamp': 601904}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366268, 'error': None, 'target': 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.530 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[255cf0be-f100-4371-8bd0-848f9ab7aa06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc0613062-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a3:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 322], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601904, 'reachable_time': 23683, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366283, 'error': None, 'target': 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.583 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[51f6a5fe-670e-4788-9eca-66c6808e2a16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.666 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dee934d6-1776-47fc-b0fc-c3afc6bfba75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.667 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0613062-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.667 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.667 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0613062-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:04 compute-0 NetworkManager[48915]: <info>  [1764060664.6698] manager: (tapc0613062-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/450)
Nov 25 08:51:04 compute-0 kernel: tapc0613062-c0: entered promiscuous mode
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.669 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.671 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.673 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc0613062-c0, col_values=(('external_ids', {'iface-id': '0ffa1e66-a7b3-404c-afe1-b89a3d2ec28f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.673 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:04 compute-0 ovn_controller[152859]: 2025-11-25T08:51:04Z|01102|binding|INFO|Releasing lport 0ffa1e66-a7b3-404c-afe1-b89a3d2ec28f from this chassis (sb_readonly=0)
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.675 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.676 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c0613062-c56d-4f59-a1bd-5487b9cae905.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c0613062-c56d-4f59-a1bd-5487b9cae905.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.678 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9f45f5-de79-4032-97e1-15f320797043]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.679 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-c0613062-c56d-4f59-a1bd-5487b9cae905
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/c0613062-c56d-4f59-a1bd-5487b9cae905.pid.haproxy
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID c0613062-c56d-4f59-a1bd-5487b9cae905
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:51:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:04.679 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'env', 'PROCESS_TAG=haproxy-c0613062-c56d-4f59-a1bd-5487b9cae905', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c0613062-c56d-4f59-a1bd-5487b9cae905.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.688 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.733 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for b4f98996-3a98-41ad-af66-af37066515d3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.734 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060664.7330246, b4f98996-3a98-41ad-af66-af37066515d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.734 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] VM Resumed (Lifecycle Event)
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.739 253542 INFO nova.virt.libvirt.driver [-] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Instance running successfully.
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.739 253542 INFO nova.virt.libvirt.driver [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Instance soft rebooted successfully.
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.740 253542 DEBUG nova.compute.manager [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.763 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.766 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:51:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.796 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] During sync_power_state the instance has a pending task (reboot_started). Skip.
Nov 25 08:51:04 compute-0 ceph-mon[75015]: pgmap v2094: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 115 KiB/s wr, 110 op/s
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.797 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060664.7357535, b4f98996-3a98-41ad-af66-af37066515d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.797 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] VM Started (Lifecycle Event)
Nov 25 08:51:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Nov 25 08:51:04 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.805 253542 DEBUG oslo_concurrency.lockutils [None req-7f8cb972-1a7e-499f-a306-931fec67a817 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.818 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:04 compute-0 nova_compute[253538]: 2025-11-25 08:51:04.823 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:51:05 compute-0 podman[366343]: 2025-11-25 08:51:05.125788824 +0000 UTC m=+0.054198622 container create dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:51:05 compute-0 systemd[1]: Started libpod-conmon-dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b.scope.
Nov 25 08:51:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06636a18fe499615bcb16111056ae7d170b090417624338dfdbe17fd20509fcd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:51:05 compute-0 podman[366343]: 2025-11-25 08:51:05.191295799 +0000 UTC m=+0.119705617 container init dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:51:05 compute-0 podman[366343]: 2025-11-25 08:51:05.101249778 +0000 UTC m=+0.029659606 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:51:05 compute-0 podman[366343]: 2025-11-25 08:51:05.200108175 +0000 UTC m=+0.128517983 container start dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 08:51:05 compute-0 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[366360]: [NOTICE]   (366378) : New worker (366383) forked
Nov 25 08:51:05 compute-0 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[366360]: [NOTICE]   (366378) : Loading success.
Nov 25 08:51:05 compute-0 podman[366356]: 2025-11-25 08:51:05.232652227 +0000 UTC m=+0.070489849 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:51:05 compute-0 nova_compute[253538]: 2025-11-25 08:51:05.595 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 46 KiB/s wr, 64 op/s
Nov 25 08:51:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Nov 25 08:51:05 compute-0 ceph-mon[75015]: osdmap e240: 3 total, 3 up, 3 in
Nov 25 08:51:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Nov 25 08:51:05 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.236 253542 DEBUG nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.236 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.237 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.237 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.238 253542 DEBUG nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] No waiting events found dispatching network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.238 253542 WARNING nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received unexpected event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for instance with vm_state active and task_state None.
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.238 253542 DEBUG nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.238 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.239 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.239 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.239 253542 DEBUG nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] No waiting events found dispatching network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.240 253542 WARNING nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received unexpected event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for instance with vm_state active and task_state None.
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.240 253542 DEBUG nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.240 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.241 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.241 253542 DEBUG oslo_concurrency.lockutils [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.241 253542 DEBUG nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] No waiting events found dispatching network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:51:06 compute-0 nova_compute[253538]: 2025-11-25 08:51:06.241 253542 WARNING nova.compute.manager [req-6bd1ecdd-c448-449e-a77c-b74339069f1f req-da5aebe8-8cc1-42d0-8b8f-a8a91c373ad1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received unexpected event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for instance with vm_state active and task_state None.
Nov 25 08:51:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Nov 25 08:51:06 compute-0 ceph-mon[75015]: pgmap v2096: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 46 KiB/s wr, 64 op/s
Nov 25 08:51:06 compute-0 ceph-mon[75015]: osdmap e241: 3 total, 3 up, 3 in
Nov 25 08:51:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Nov 25 08:51:06 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Nov 25 08:51:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:51:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Nov 25 08:51:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Nov 25 08:51:07 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Nov 25 08:51:07 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 25 08:51:07 compute-0 nova_compute[253538]: 2025-11-25 08:51:07.656 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.7 KiB/s wr, 112 op/s
Nov 25 08:51:07 compute-0 ceph-mon[75015]: osdmap e242: 3 total, 3 up, 3 in
Nov 25 08:51:07 compute-0 ceph-mon[75015]: osdmap e243: 3 total, 3 up, 3 in
Nov 25 08:51:08 compute-0 nova_compute[253538]: 2025-11-25 08:51:08.566 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Nov 25 08:51:08 compute-0 ceph-mon[75015]: pgmap v2100: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.7 KiB/s wr, 112 op/s
Nov 25 08:51:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Nov 25 08:51:08 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Nov 25 08:51:09 compute-0 nova_compute[253538]: 2025-11-25 08:51:09.267 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060654.2659361, 49b75125-0ca4-438d-9f2a-1d130a6b5632 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:51:09 compute-0 nova_compute[253538]: 2025-11-25 08:51:09.267 253542 INFO nova.compute.manager [-] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] VM Stopped (Lifecycle Event)
Nov 25 08:51:09 compute-0 nova_compute[253538]: 2025-11-25 08:51:09.288 253542 DEBUG nova.compute.manager [None req-acc0dda8-28df-4d66-9910-c1a5067b3c62 - - - - - -] [instance: 49b75125-0ca4-438d-9f2a-1d130a6b5632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:09 compute-0 nova_compute[253538]: 2025-11-25 08:51:09.344 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 18 KiB/s wr, 347 op/s
Nov 25 08:51:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Nov 25 08:51:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Nov 25 08:51:09 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Nov 25 08:51:09 compute-0 ceph-mon[75015]: osdmap e244: 3 total, 3 up, 3 in
Nov 25 08:51:10 compute-0 podman[366392]: 2025-11-25 08:51:10.896241366 +0000 UTC m=+0.135126799 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 08:51:10 compute-0 ceph-mon[75015]: pgmap v2102: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 18 KiB/s wr, 347 op/s
Nov 25 08:51:10 compute-0 ceph-mon[75015]: osdmap e245: 3 total, 3 up, 3 in
Nov 25 08:51:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 12 KiB/s wr, 193 op/s
Nov 25 08:51:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:51:12 compute-0 nova_compute[253538]: 2025-11-25 08:51:12.659 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Nov 25 08:51:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Nov 25 08:51:12 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Nov 25 08:51:12 compute-0 ceph-mon[75015]: pgmap v2104: 321 pgs: 321 active+clean; 167 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 12 KiB/s wr, 193 op/s
Nov 25 08:51:13 compute-0 nova_compute[253538]: 2025-11-25 08:51:13.489 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:13 compute-0 nova_compute[253538]: 2025-11-25 08:51:13.490 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:13 compute-0 nova_compute[253538]: 2025-11-25 08:51:13.507 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:51:13 compute-0 nova_compute[253538]: 2025-11-25 08:51:13.590 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:13 compute-0 nova_compute[253538]: 2025-11-25 08:51:13.591 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:13 compute-0 nova_compute[253538]: 2025-11-25 08:51:13.600 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:51:13 compute-0 nova_compute[253538]: 2025-11-25 08:51:13.600 253542 INFO nova.compute.claims [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:51:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 15 KiB/s wr, 220 op/s
Nov 25 08:51:13 compute-0 nova_compute[253538]: 2025-11-25 08:51:13.729 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Nov 25 08:51:13 compute-0 ceph-mon[75015]: osdmap e246: 3 total, 3 up, 3 in
Nov 25 08:51:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Nov 25 08:51:14 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Nov 25 08:51:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:51:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2891708322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.245 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.251 253542 DEBUG nova.compute.provider_tree [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.272 253542 DEBUG nova.scheduler.client.report [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.302 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.303 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.347 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.379 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.380 253542 DEBUG nova.network.neutron [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.404 253542 INFO nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.424 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.521 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.523 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.523 253542 INFO nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Creating image(s)
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.552 253542 DEBUG nova.storage.rbd_utils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.582 253542 DEBUG nova.storage.rbd_utils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.607 253542 DEBUG nova.storage.rbd_utils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.611 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.651 253542 DEBUG nova.policy [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.697 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.698 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.698 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.699 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.722 253542 DEBUG nova.storage.rbd_utils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:14 compute-0 nova_compute[253538]: 2025-11-25 08:51:14.727 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Nov 25 08:51:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Nov 25 08:51:15 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Nov 25 08:51:15 compute-0 ceph-mon[75015]: pgmap v2106: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 15 KiB/s wr, 220 op/s
Nov 25 08:51:15 compute-0 ceph-mon[75015]: osdmap e247: 3 total, 3 up, 3 in
Nov 25 08:51:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2891708322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:51:15 compute-0 nova_compute[253538]: 2025-11-25 08:51:15.069 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:15 compute-0 nova_compute[253538]: 2025-11-25 08:51:15.172 253542 DEBUG nova.storage.rbd_utils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:51:15 compute-0 nova_compute[253538]: 2025-11-25 08:51:15.311 253542 DEBUG nova.objects.instance [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:51:15 compute-0 nova_compute[253538]: 2025-11-25 08:51:15.362 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:51:15 compute-0 nova_compute[253538]: 2025-11-25 08:51:15.363 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Ensure instance console log exists: /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:51:15 compute-0 nova_compute[253538]: 2025-11-25 08:51:15.363 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:15 compute-0 nova_compute[253538]: 2025-11-25 08:51:15.363 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:15 compute-0 nova_compute[253538]: 2025-11-25 08:51:15.363 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:15 compute-0 nova_compute[253538]: 2025-11-25 08:51:15.435 253542 DEBUG nova.network.neutron [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Successfully created port: 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:51:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 8.0 KiB/s wr, 78 op/s
Nov 25 08:51:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Nov 25 08:51:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Nov 25 08:51:16 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Nov 25 08:51:16 compute-0 ceph-mon[75015]: osdmap e248: 3 total, 3 up, 3 in
Nov 25 08:51:16 compute-0 nova_compute[253538]: 2025-11-25 08:51:16.125 253542 DEBUG nova.network.neutron [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Successfully updated port: 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:51:16 compute-0 nova_compute[253538]: 2025-11-25 08:51:16.145 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:51:16 compute-0 nova_compute[253538]: 2025-11-25 08:51:16.146 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:51:16 compute-0 nova_compute[253538]: 2025-11-25 08:51:16.146 253542 DEBUG nova.network.neutron [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:51:16 compute-0 nova_compute[253538]: 2025-11-25 08:51:16.234 253542 DEBUG nova.compute.manager [req-34cc0621-2a58-4e58-be34-0d4de92a80f5 req-c2a9f91f-026d-4745-b002-f999de995816 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-changed-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:16 compute-0 nova_compute[253538]: 2025-11-25 08:51:16.235 253542 DEBUG nova.compute.manager [req-34cc0621-2a58-4e58-be34-0d4de92a80f5 req-c2a9f91f-026d-4745-b002-f999de995816 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Refreshing instance network info cache due to event network-changed-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:51:16 compute-0 nova_compute[253538]: 2025-11-25 08:51:16.235 253542 DEBUG oslo_concurrency.lockutils [req-34cc0621-2a58-4e58-be34-0d4de92a80f5 req-c2a9f91f-026d-4745-b002-f999de995816 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:51:16 compute-0 nova_compute[253538]: 2025-11-25 08:51:16.358 253542 DEBUG nova.network.neutron [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:51:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Nov 25 08:51:17 compute-0 ceph-mon[75015]: pgmap v2109: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 8.0 KiB/s wr, 78 op/s
Nov 25 08:51:17 compute-0 ceph-mon[75015]: osdmap e249: 3 total, 3 up, 3 in
Nov 25 08:51:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Nov 25 08:51:17 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Nov 25 08:51:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.661 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.694 253542 DEBUG nova.network.neutron [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updating instance_info_cache with network_info: [{"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:51:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 186 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 411 KiB/s rd, 619 KiB/s wr, 228 op/s
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.716 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.716 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Instance network_info: |[{"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.717 253542 DEBUG oslo_concurrency.lockutils [req-34cc0621-2a58-4e58-be34-0d4de92a80f5 req-c2a9f91f-026d-4745-b002-f999de995816 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.717 253542 DEBUG nova.network.neutron [req-34cc0621-2a58-4e58-be34-0d4de92a80f5 req-c2a9f91f-026d-4745-b002-f999de995816 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Refreshing network info cache for port 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.722 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Start _get_guest_xml network_info=[{"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:51:17 compute-0 ovn_controller[152859]: 2025-11-25T08:51:17Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:20:35 10.100.0.11
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.729 253542 WARNING nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.740 253542 DEBUG nova.virt.libvirt.host [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.741 253542 DEBUG nova.virt.libvirt.host [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.746 253542 DEBUG nova.virt.libvirt.host [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.746 253542 DEBUG nova.virt.libvirt.host [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.747 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.747 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.748 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.749 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.749 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.750 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.750 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.750 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.751 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.751 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.752 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.752 253542 DEBUG nova.virt.hardware [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:51:17 compute-0 nova_compute[253538]: 2025-11-25 08:51:17.757 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Nov 25 08:51:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Nov 25 08:51:18 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Nov 25 08:51:18 compute-0 ceph-mon[75015]: osdmap e250: 3 total, 3 up, 3 in
Nov 25 08:51:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:51:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3801386082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.253 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.292 253542 DEBUG nova.storage.rbd_utils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.297 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:51:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256411726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.813 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.814 253542 DEBUG nova.virt.libvirt.vif [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:51:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-199412432',display_name='tempest-TestNetworkBasicOps-server-199412432',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-199412432',id=111,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMuJR0+MVpFtNHEH/qBMUEI9mE13UW6GrULa+2972JvBZYqj7jCYYsMmZITZ+SM7QQhK9eTjWP2J5imfxbLYOM0couLFe8mdKS/uhBmTvd2vRYexSjbqdhkaRLs1gfDUJQ==',key_name='tempest-TestNetworkBasicOps-1778650473',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-hcn7lok0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:51:14Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.815 253542 DEBUG nova.network.os_vif_util [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.815 253542 DEBUG nova.network.os_vif_util [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:2a:a3,bridge_name='br-int',has_traffic_filtering=True,id=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad9a2b7-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.817 253542 DEBUG nova.objects.instance [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.830 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:51:18 compute-0 nova_compute[253538]:   <uuid>e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb</uuid>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   <name>instance-0000006f</name>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkBasicOps-server-199412432</nova:name>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:51:17</nova:creationTime>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:51:18 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:51:18 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:51:18 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:51:18 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:51:18 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:51:18 compute-0 nova_compute[253538]:         <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:51:18 compute-0 nova_compute[253538]:         <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:51:18 compute-0 nova_compute[253538]:         <nova:port uuid="2ad9a2b7-f59d-49fd-aaa3-5253dd637f18">
Nov 25 08:51:18 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <system>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <entry name="serial">e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb</entry>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <entry name="uuid">e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb</entry>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     </system>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   <os>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   </os>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   <features>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   </features>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk">
Nov 25 08:51:18 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       </source>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:51:18 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk.config">
Nov 25 08:51:18 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       </source>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:51:18 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:3e:2a:a3"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <target dev="tap2ad9a2b7-f5"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb/console.log" append="off"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <video>
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     </video>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:51:18 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:51:18 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:51:18 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:51:18 compute-0 nova_compute[253538]: </domain>
Nov 25 08:51:18 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.832 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Preparing to wait for external event network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.832 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.832 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.833 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.833 253542 DEBUG nova.virt.libvirt.vif [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:51:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-199412432',display_name='tempest-TestNetworkBasicOps-server-199412432',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-199412432',id=111,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMuJR0+MVpFtNHEH/qBMUEI9mE13UW6GrULa+2972JvBZYqj7jCYYsMmZITZ+SM7QQhK9eTjWP2J5imfxbLYOM0couLFe8mdKS/uhBmTvd2vRYexSjbqdhkaRLs1gfDUJQ==',key_name='tempest-TestNetworkBasicOps-1778650473',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-hcn7lok0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:51:14Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.834 253542 DEBUG nova.network.os_vif_util [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.834 253542 DEBUG nova.network.os_vif_util [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:2a:a3,bridge_name='br-int',has_traffic_filtering=True,id=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad9a2b7-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.834 253542 DEBUG os_vif [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:2a:a3,bridge_name='br-int',has_traffic_filtering=True,id=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad9a2b7-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.835 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.835 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.836 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.838 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.838 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ad9a2b7-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.838 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ad9a2b7-f5, col_values=(('external_ids', {'iface-id': '2ad9a2b7-f59d-49fd-aaa3-5253dd637f18', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:2a:a3', 'vm-uuid': 'e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.840 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:18 compute-0 NetworkManager[48915]: <info>  [1764060678.8422] manager: (tap2ad9a2b7-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.842 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.847 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.848 253542 INFO os_vif [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:2a:a3,bridge_name='br-int',has_traffic_filtering=True,id=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad9a2b7-f5')
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.901 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.901 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.902 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:3e:2a:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.903 253542 INFO nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Using config drive
Nov 25 08:51:18 compute-0 nova_compute[253538]: 2025-11-25 08:51:18.937 253542 DEBUG nova.storage.rbd_utils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:19 compute-0 ceph-mon[75015]: pgmap v2112: 321 pgs: 321 active+clean; 186 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 411 KiB/s rd, 619 KiB/s wr, 228 op/s
Nov 25 08:51:19 compute-0 ceph-mon[75015]: osdmap e251: 3 total, 3 up, 3 in
Nov 25 08:51:19 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3801386082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:51:19 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1256411726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:51:19 compute-0 nova_compute[253538]: 2025-11-25 08:51:19.180 253542 DEBUG nova.network.neutron [req-34cc0621-2a58-4e58-be34-0d4de92a80f5 req-c2a9f91f-026d-4745-b002-f999de995816 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updated VIF entry in instance network info cache for port 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:51:19 compute-0 nova_compute[253538]: 2025-11-25 08:51:19.181 253542 DEBUG nova.network.neutron [req-34cc0621-2a58-4e58-be34-0d4de92a80f5 req-c2a9f91f-026d-4745-b002-f999de995816 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updating instance_info_cache with network_info: [{"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:51:19 compute-0 nova_compute[253538]: 2025-11-25 08:51:19.194 253542 DEBUG oslo_concurrency.lockutils [req-34cc0621-2a58-4e58-be34-0d4de92a80f5 req-c2a9f91f-026d-4745-b002-f999de995816 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:51:19 compute-0 nova_compute[253538]: 2025-11-25 08:51:19.294 253542 INFO nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Creating config drive at /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb/disk.config
Nov 25 08:51:19 compute-0 nova_compute[253538]: 2025-11-25 08:51:19.299 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn521ovv7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:19 compute-0 nova_compute[253538]: 2025-11-25 08:51:19.448 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn521ovv7" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:19 compute-0 nova_compute[253538]: 2025-11-25 08:51:19.474 253542 DEBUG nova.storage.rbd_utils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:19 compute-0 nova_compute[253538]: 2025-11-25 08:51:19.477 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb/disk.config e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:19 compute-0 nova_compute[253538]: 2025-11-25 08:51:19.675 253542 DEBUG oslo_concurrency.processutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb/disk.config e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:19 compute-0 nova_compute[253538]: 2025-11-25 08:51:19.676 253542 INFO nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Deleting local config drive /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb/disk.config because it was imported into RBD.
Nov 25 08:51:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 213 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.6 MiB/s wr, 432 op/s
Nov 25 08:51:19 compute-0 kernel: tap2ad9a2b7-f5: entered promiscuous mode
Nov 25 08:51:19 compute-0 NetworkManager[48915]: <info>  [1764060679.7454] manager: (tap2ad9a2b7-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/452)
Nov 25 08:51:19 compute-0 nova_compute[253538]: 2025-11-25 08:51:19.746 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:19 compute-0 ovn_controller[152859]: 2025-11-25T08:51:19Z|01103|binding|INFO|Claiming lport 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 for this chassis.
Nov 25 08:51:19 compute-0 ovn_controller[152859]: 2025-11-25T08:51:19Z|01104|binding|INFO|2ad9a2b7-f59d-49fd-aaa3-5253dd637f18: Claiming fa:16:3e:3e:2a:a3 10.100.0.12
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.756 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:2a:a3 10.100.0.12'], port_security=['fa:16:3e:3e:2a:a3 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': '371b8f16-6d0a-48c6-b770-1fa4712eb5f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b7f6d3d-0fdc-46fc-8ec7-6805fb9ea29c, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.758 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 in datapath 8daad2e3-552f-4ebe-8fa4-01c68ec704b1 bound to our chassis
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.760 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8daad2e3-552f-4ebe-8fa4-01c68ec704b1
Nov 25 08:51:19 compute-0 ovn_controller[152859]: 2025-11-25T08:51:19Z|01105|binding|INFO|Setting lport 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 ovn-installed in OVS
Nov 25 08:51:19 compute-0 ovn_controller[152859]: 2025-11-25T08:51:19Z|01106|binding|INFO|Setting lport 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 up in Southbound
Nov 25 08:51:19 compute-0 nova_compute[253538]: 2025-11-25 08:51:19.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:19 compute-0 nova_compute[253538]: 2025-11-25 08:51:19.770 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.773 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3344920b-2dad-4971-94f3-989b3d8b72f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.774 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8daad2e3-51 in ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.777 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8daad2e3-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.777 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e20961-8d20-4473-a685-ae0f323a93ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.778 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7250311d-69e7-4c31-ac06-b75c90ef0ae1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:19 compute-0 systemd-udevd[366745]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.787 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[36e5ecf9-e0d3-4072-8d75-1bdc78fdd5ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:19 compute-0 systemd-machined[215790]: New machine qemu-138-instance-0000006f.
Nov 25 08:51:19 compute-0 NetworkManager[48915]: <info>  [1764060679.8012] device (tap2ad9a2b7-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:51:19 compute-0 NetworkManager[48915]: <info>  [1764060679.8024] device (tap2ad9a2b7-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.805 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3f19af19-71d8-48f5-8dd7-95e18bca5820]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:19 compute-0 systemd[1]: Started Virtual Machine qemu-138-instance-0000006f.
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.836 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2232733e-cf31-4a1c-9f8a-f3d93af48af3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:19 compute-0 systemd-udevd[366748]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:51:19 compute-0 NetworkManager[48915]: <info>  [1764060679.8432] manager: (tap8daad2e3-50): new Veth device (/org/freedesktop/NetworkManager/Devices/453)
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.843 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6637afda-4259-4f0a-8f88-0f81ebfe11e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.876 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[29af44d6-f5a8-44b8-bb60-a407c95defd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.879 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[cea1fe43-5230-4c05-8e22-cca5fe990c1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:19 compute-0 NetworkManager[48915]: <info>  [1764060679.9026] device (tap8daad2e3-50): carrier: link connected
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.908 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7d4198-4c7b-4bf0-9e18-4b57ee37a0e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.923 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[312f6e7a-a5d0-4ef8-ae6f-d5ef92f7f90f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8daad2e3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:fb:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603450, 'reachable_time': 38589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366776, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.941 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[80541db5-200d-4e35-a735-6a8cc0a7fb3d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:fbf7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603450, 'tstamp': 603450}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366777, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.956 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b6bd1222-eb84-4bef-bcc5-a6226fc6db97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8daad2e3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:fb:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603450, 'reachable_time': 38589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366778, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:19.993 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[30cd24b1-b375-44df-90c5-c5b5aa07fcd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.076 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc8c945-f602-43ca-9d51-76599d3e297a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.078 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8daad2e3-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.078 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.078 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8daad2e3-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.080 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:20 compute-0 kernel: tap8daad2e3-50: entered promiscuous mode
Nov 25 08:51:20 compute-0 NetworkManager[48915]: <info>  [1764060680.0811] manager: (tap8daad2e3-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/454)
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.084 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8daad2e3-50, col_values=(('external_ids', {'iface-id': 'e844dcfd-3730-493f-b401-25ee7b281b7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.085 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:20 compute-0 ovn_controller[152859]: 2025-11-25T08:51:20Z|01107|binding|INFO|Releasing lport e844dcfd-3730-493f-b401-25ee7b281b7b from this chassis (sb_readonly=0)
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.108 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8daad2e3-552f-4ebe-8fa4-01c68ec704b1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8daad2e3-552f-4ebe-8fa4-01c68ec704b1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.107 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.109 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aff53141-2a0a-41b6-a0b9-4757c8b1438c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.110 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-8daad2e3-552f-4ebe-8fa4-01c68ec704b1
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/8daad2e3-552f-4ebe-8fa4-01c68ec704b1.pid.haproxy
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 8daad2e3-552f-4ebe-8fa4-01c68ec704b1
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:51:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:20.110 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'env', 'PROCESS_TAG=haproxy-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8daad2e3-552f-4ebe-8fa4-01c68ec704b1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.126 253542 DEBUG nova.compute.manager [req-d74417db-c3e3-4f84-bc8d-21e4949b5f16 req-8e305039-0155-4e74-a90f-d56c123b154a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.127 253542 DEBUG oslo_concurrency.lockutils [req-d74417db-c3e3-4f84-bc8d-21e4949b5f16 req-8e305039-0155-4e74-a90f-d56c123b154a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.127 253542 DEBUG oslo_concurrency.lockutils [req-d74417db-c3e3-4f84-bc8d-21e4949b5f16 req-8e305039-0155-4e74-a90f-d56c123b154a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.128 253542 DEBUG oslo_concurrency.lockutils [req-d74417db-c3e3-4f84-bc8d-21e4949b5f16 req-8e305039-0155-4e74-a90f-d56c123b154a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.128 253542 DEBUG nova.compute.manager [req-d74417db-c3e3-4f84-bc8d-21e4949b5f16 req-8e305039-0155-4e74-a90f-d56c123b154a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Processing event network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.257 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060680.257192, e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.258 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] VM Started (Lifecycle Event)
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.262 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.266 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.270 253542 INFO nova.virt.libvirt.driver [-] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Instance spawned successfully.
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.271 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.286 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.289 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.299 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.299 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.300 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.301 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.302 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.302 253542 DEBUG nova.virt.libvirt.driver [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.314 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.314 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060680.2573326, e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.314 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] VM Paused (Lifecycle Event)
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.341 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.343 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060680.2649112, e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.343 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] VM Resumed (Lifecycle Event)
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.370 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.374 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.381 253542 INFO nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Took 5.86 seconds to spawn the instance on the hypervisor.
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.381 253542 DEBUG nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.391 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.433 253542 INFO nova.compute.manager [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Took 6.87 seconds to build instance.
Nov 25 08:51:20 compute-0 nova_compute[253538]: 2025-11-25 08:51:20.446 253542 DEBUG oslo_concurrency.lockutils [None req-73791b0c-fec3-40af-918c-8d0544886cd1 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:20 compute-0 podman[366852]: 2025-11-25 08:51:20.470408568 +0000 UTC m=+0.049718212 container create b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 08:51:20 compute-0 systemd[1]: Started libpod-conmon-b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529.scope.
Nov 25 08:51:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2849636997d3f41f71e0b1f9f347c5bb88d5e1c968dd84aab3bbb4c45a6a526/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:51:20 compute-0 podman[366852]: 2025-11-25 08:51:20.44469124 +0000 UTC m=+0.024000944 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:51:20 compute-0 podman[366852]: 2025-11-25 08:51:20.542849909 +0000 UTC m=+0.122159563 container init b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:51:20 compute-0 podman[366852]: 2025-11-25 08:51:20.548363857 +0000 UTC m=+0.127673511 container start b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:51:20 compute-0 neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1[366867]: [NOTICE]   (366871) : New worker (366873) forked
Nov 25 08:51:20 compute-0 neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1[366867]: [NOTICE]   (366871) : Loading success.
Nov 25 08:51:21 compute-0 ceph-mon[75015]: pgmap v2114: 321 pgs: 321 active+clean; 213 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.6 MiB/s wr, 432 op/s
Nov 25 08:51:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 213 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 337 op/s
Nov 25 08:51:22 compute-0 nova_compute[253538]: 2025-11-25 08:51:22.350 253542 DEBUG nova.compute.manager [req-3910b8d2-78a0-46f5-9d07-446d0e4f3f13 req-9dcc57d1-4ac0-476f-98e2-30242cd3e775 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:22 compute-0 nova_compute[253538]: 2025-11-25 08:51:22.350 253542 DEBUG oslo_concurrency.lockutils [req-3910b8d2-78a0-46f5-9d07-446d0e4f3f13 req-9dcc57d1-4ac0-476f-98e2-30242cd3e775 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:22 compute-0 nova_compute[253538]: 2025-11-25 08:51:22.351 253542 DEBUG oslo_concurrency.lockutils [req-3910b8d2-78a0-46f5-9d07-446d0e4f3f13 req-9dcc57d1-4ac0-476f-98e2-30242cd3e775 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:22 compute-0 nova_compute[253538]: 2025-11-25 08:51:22.351 253542 DEBUG oslo_concurrency.lockutils [req-3910b8d2-78a0-46f5-9d07-446d0e4f3f13 req-9dcc57d1-4ac0-476f-98e2-30242cd3e775 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:22 compute-0 nova_compute[253538]: 2025-11-25 08:51:22.352 253542 DEBUG nova.compute.manager [req-3910b8d2-78a0-46f5-9d07-446d0e4f3f13 req-9dcc57d1-4ac0-476f-98e2-30242cd3e775 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] No waiting events found dispatching network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:51:22 compute-0 nova_compute[253538]: 2025-11-25 08:51:22.352 253542 WARNING nova.compute.manager [req-3910b8d2-78a0-46f5-9d07-446d0e4f3f13 req-9dcc57d1-4ac0-476f-98e2-30242cd3e775 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received unexpected event network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 for instance with vm_state active and task_state None.
Nov 25 08:51:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:51:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Nov 25 08:51:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Nov 25 08:51:22 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Nov 25 08:51:22 compute-0 nova_compute[253538]: 2025-11-25 08:51:22.699 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:23 compute-0 ceph-mon[75015]: pgmap v2115: 321 pgs: 321 active+clean; 213 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 337 op/s
Nov 25 08:51:23 compute-0 ceph-mon[75015]: osdmap e252: 3 total, 3 up, 3 in
Nov 25 08:51:23 compute-0 nova_compute[253538]: 2025-11-25 08:51:23.243 253542 INFO nova.compute.manager [None req-0c53f06c-d7ad-484c-9397-161052b07dd4 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Get console output
Nov 25 08:51:23 compute-0 nova_compute[253538]: 2025-11-25 08:51:23.249 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:51:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:51:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:51:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:51:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:51:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:51:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:51:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 213 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 272 op/s
Nov 25 08:51:23 compute-0 nova_compute[253538]: 2025-11-25 08:51:23.841 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.082 253542 DEBUG nova.compute.manager [req-fb766494-7a89-437f-8fb1-686c94cc07c0 req-9e61290e-7ef7-4911-b1ac-a61f95b488aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-changed-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.082 253542 DEBUG nova.compute.manager [req-fb766494-7a89-437f-8fb1-686c94cc07c0 req-9e61290e-7ef7-4911-b1ac-a61f95b488aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Refreshing instance network info cache due to event network-changed-0547929c-86ba-4aaa-869f-c7e2b5ea7e67. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.082 253542 DEBUG oslo_concurrency.lockutils [req-fb766494-7a89-437f-8fb1-686c94cc07c0 req-9e61290e-7ef7-4911-b1ac-a61f95b488aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.083 253542 DEBUG oslo_concurrency.lockutils [req-fb766494-7a89-437f-8fb1-686c94cc07c0 req-9e61290e-7ef7-4911-b1ac-a61f95b488aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.083 253542 DEBUG nova.network.neutron [req-fb766494-7a89-437f-8fb1-686c94cc07c0 req-9e61290e-7ef7-4911-b1ac-a61f95b488aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Refreshing network info cache for port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.145 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.146 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.146 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.146 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.146 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.147 253542 INFO nova.compute.manager [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Terminating instance
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.148 253542 DEBUG nova.compute.manager [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:51:24 compute-0 kernel: tap0547929c-86 (unregistering): left promiscuous mode
Nov 25 08:51:24 compute-0 NetworkManager[48915]: <info>  [1764060684.2002] device (tap0547929c-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:51:24 compute-0 ovn_controller[152859]: 2025-11-25T08:51:24Z|01108|binding|INFO|Releasing lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 from this chassis (sb_readonly=0)
Nov 25 08:51:24 compute-0 ovn_controller[152859]: 2025-11-25T08:51:24Z|01109|binding|INFO|Setting lport 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 down in Southbound
Nov 25 08:51:24 compute-0 ovn_controller[152859]: 2025-11-25T08:51:24Z|01110|binding|INFO|Removing iface tap0547929c-86 ovn-installed in OVS
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.222 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.230 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:20:35 10.100.0.11'], port_security=['fa:16:3e:86:20:35 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b4f98996-3a98-41ad-af66-af37066515d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0613062-c56d-4f59-a1bd-5487b9cae905', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'db6ee56c-6009-455c-94cc-12101966dbd4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eca56c04-944f-49ab-8f25-403bf5089348, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0547929c-86ba-4aaa-869f-c7e2b5ea7e67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:51:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.234 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67 in datapath c0613062-c56d-4f59-a1bd-5487b9cae905 unbound from our chassis
Nov 25 08:51:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.236 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c0613062-c56d-4f59-a1bd-5487b9cae905, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:51:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.238 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[181365b4-b812-49eb-a1c1-e06ba7e16d85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.239 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 namespace which is not needed anymore
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.246 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:24 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Nov 25 08:51:24 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006e.scope: Consumed 13.310s CPU time.
Nov 25 08:51:24 compute-0 systemd-machined[215790]: Machine qemu-137-instance-0000006e terminated.
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.394 253542 INFO nova.virt.libvirt.driver [-] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Instance destroyed successfully.
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.395 253542 DEBUG nova.objects.instance [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'resources' on Instance uuid b4f98996-3a98-41ad-af66-af37066515d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.405 253542 DEBUG nova.virt.libvirt.vif [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-642924609',display_name='tempest-TestNetworkAdvancedServerOps-server-642924609',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-642924609',id=110,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKQwQUMdH1inJEZNQ9tUR+z/kDiUab1e20h5rm6qDlszZoYoLqt3pa8Fary6MYkj2oJVBphpUWW4+oVR02Nvg0VNSZNNzWHbc601Ac4/2sW+DdmilXo7ZngfOc7+6JMZJw==',key_name='tempest-TestNetworkAdvancedServerOps-1971435409',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:50:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-r1bh0apf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:51:04Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=b4f98996-3a98-41ad-af66-af37066515d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.405 253542 DEBUG nova.network.os_vif_util [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.406 253542 DEBUG nova.network.os_vif_util [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:20:35,bridge_name='br-int',has_traffic_filtering=True,id=0547929c-86ba-4aaa-869f-c7e2b5ea7e67,network=Network(c0613062-c56d-4f59-a1bd-5487b9cae905),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0547929c-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.407 253542 DEBUG os_vif [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:20:35,bridge_name='br-int',has_traffic_filtering=True,id=0547929c-86ba-4aaa-869f-c7e2b5ea7e67,network=Network(c0613062-c56d-4f59-a1bd-5487b9cae905),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0547929c-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.409 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.409 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0547929c-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.411 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.413 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.420 253542 INFO os_vif [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:20:35,bridge_name='br-int',has_traffic_filtering=True,id=0547929c-86ba-4aaa-869f-c7e2b5ea7e67,network=Network(c0613062-c56d-4f59-a1bd-5487b9cae905),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0547929c-86')
Nov 25 08:51:24 compute-0 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[366360]: [NOTICE]   (366378) : haproxy version is 2.8.14-c23fe91
Nov 25 08:51:24 compute-0 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[366360]: [NOTICE]   (366378) : path to executable is /usr/sbin/haproxy
Nov 25 08:51:24 compute-0 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[366360]: [WARNING]  (366378) : Exiting Master process...
Nov 25 08:51:24 compute-0 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[366360]: [ALERT]    (366378) : Current worker (366383) exited with code 143 (Terminated)
Nov 25 08:51:24 compute-0 neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905[366360]: [WARNING]  (366378) : All workers exited. Exiting... (0)
Nov 25 08:51:24 compute-0 systemd[1]: libpod-dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b.scope: Deactivated successfully.
Nov 25 08:51:24 compute-0 podman[366905]: 2025-11-25 08:51:24.434876803 +0000 UTC m=+0.071416943 container died dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 08:51:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b-userdata-shm.mount: Deactivated successfully.
Nov 25 08:51:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-06636a18fe499615bcb16111056ae7d170b090417624338dfdbe17fd20509fcd-merged.mount: Deactivated successfully.
Nov 25 08:51:24 compute-0 podman[366905]: 2025-11-25 08:51:24.490890953 +0000 UTC m=+0.127431053 container cleanup dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 08:51:24 compute-0 systemd[1]: libpod-conmon-dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b.scope: Deactivated successfully.
Nov 25 08:51:24 compute-0 podman[366964]: 2025-11-25 08:51:24.568363668 +0000 UTC m=+0.050581945 container remove dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 08:51:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.577 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[43494fc6-f2f8-4a99-8ba8-1846f4fe64a7]: (4, ('Tue Nov 25 08:51:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 (dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b)\ndda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b\nTue Nov 25 08:51:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 (dda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b)\ndda7a335c431b3b6ccd804e4a60d89574b0669704d04b1923f4b0c950d3bca3b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.579 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8084a923-c887-4465-bc48-78547d281b72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0613062-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.584 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:24 compute-0 kernel: tapc0613062-c0: left promiscuous mode
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.591 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2568e9-5928-4873-917d-923c6a4a834e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.607 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.608 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[db8151c8-62e5-4659-97c9-3c93bbc7d146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.610 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[284451fa-3b03-4833-a9bd-839c65e154d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.628 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ef99021b-99d6-49b7-9abd-6f2c5520cc89]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601893, 'reachable_time': 40088, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366979, 'error': None, 'target': 'ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.630 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c0613062-c56d-4f59-a1bd-5487b9cae905 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:51:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:24.630 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a020c0b9-1c7c-4144-87cd-e5bf0bd0fa17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:24 compute-0 systemd[1]: run-netns-ovnmeta\x2dc0613062\x2dc56d\x2d4f59\x2da1bd\x2d5487b9cae905.mount: Deactivated successfully.
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.738 253542 DEBUG nova.compute.manager [req-0308a17c-396b-4c67-a49e-869127eb407f req-c47b7472-fd34-4c77-b48f-d062b3f1b3e2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-changed-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.739 253542 DEBUG nova.compute.manager [req-0308a17c-396b-4c67-a49e-869127eb407f req-c47b7472-fd34-4c77-b48f-d062b3f1b3e2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Refreshing instance network info cache due to event network-changed-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.739 253542 DEBUG oslo_concurrency.lockutils [req-0308a17c-396b-4c67-a49e-869127eb407f req-c47b7472-fd34-4c77-b48f-d062b3f1b3e2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.739 253542 DEBUG oslo_concurrency.lockutils [req-0308a17c-396b-4c67-a49e-869127eb407f req-c47b7472-fd34-4c77-b48f-d062b3f1b3e2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.740 253542 DEBUG nova.network.neutron [req-0308a17c-396b-4c67-a49e-869127eb407f req-c47b7472-fd34-4c77-b48f-d062b3f1b3e2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Refreshing network info cache for port 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.831 253542 INFO nova.virt.libvirt.driver [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Deleting instance files /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3_del
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.834 253542 INFO nova.virt.libvirt.driver [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Deletion of /var/lib/nova/instances/b4f98996-3a98-41ad-af66-af37066515d3_del complete
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.896 253542 INFO nova.compute.manager [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Took 0.75 seconds to destroy the instance on the hypervisor.
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.897 253542 DEBUG oslo.service.loopingcall [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.898 253542 DEBUG nova.compute.manager [-] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:51:24 compute-0 nova_compute[253538]: 2025-11-25 08:51:24.898 253542 DEBUG nova.network.neutron [-] [instance: b4f98996-3a98-41ad-af66-af37066515d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:51:25 compute-0 ceph-mon[75015]: pgmap v2117: 321 pgs: 321 active+clean; 213 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 272 op/s
Nov 25 08:51:25 compute-0 nova_compute[253538]: 2025-11-25 08:51:25.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:51:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 215 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.4 MiB/s wr, 287 op/s
Nov 25 08:51:26 compute-0 nova_compute[253538]: 2025-11-25 08:51:26.861 253542 DEBUG nova.compute.manager [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-unplugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:26 compute-0 nova_compute[253538]: 2025-11-25 08:51:26.862 253542 DEBUG oslo_concurrency.lockutils [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:26 compute-0 nova_compute[253538]: 2025-11-25 08:51:26.862 253542 DEBUG oslo_concurrency.lockutils [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:26 compute-0 nova_compute[253538]: 2025-11-25 08:51:26.863 253542 DEBUG oslo_concurrency.lockutils [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:26 compute-0 nova_compute[253538]: 2025-11-25 08:51:26.863 253542 DEBUG nova.compute.manager [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] No waiting events found dispatching network-vif-unplugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:51:26 compute-0 nova_compute[253538]: 2025-11-25 08:51:26.863 253542 DEBUG nova.compute.manager [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-unplugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:51:26 compute-0 nova_compute[253538]: 2025-11-25 08:51:26.863 253542 DEBUG nova.compute.manager [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:26 compute-0 nova_compute[253538]: 2025-11-25 08:51:26.864 253542 DEBUG oslo_concurrency.lockutils [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "b4f98996-3a98-41ad-af66-af37066515d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:26 compute-0 nova_compute[253538]: 2025-11-25 08:51:26.864 253542 DEBUG oslo_concurrency.lockutils [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:26 compute-0 nova_compute[253538]: 2025-11-25 08:51:26.865 253542 DEBUG oslo_concurrency.lockutils [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:26 compute-0 nova_compute[253538]: 2025-11-25 08:51:26.865 253542 DEBUG nova.compute.manager [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] No waiting events found dispatching network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:51:26 compute-0 nova_compute[253538]: 2025-11-25 08:51:26.865 253542 WARNING nova.compute.manager [req-555a50df-21f8-437a-8e3e-2b6ac8c3bece req-12bd5ad0-1c02-4545-9813-5a5b18614f1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received unexpected event network-vif-plugged-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 for instance with vm_state active and task_state deleting.
Nov 25 08:51:27 compute-0 ceph-mon[75015]: pgmap v2118: 321 pgs: 321 active+clean; 215 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.4 MiB/s wr, 287 op/s
Nov 25 08:51:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:51:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Nov 25 08:51:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Nov 25 08:51:27 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Nov 25 08:51:27 compute-0 nova_compute[253538]: 2025-11-25 08:51:27.703 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 189 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 36 KiB/s wr, 143 op/s
Nov 25 08:51:28 compute-0 sshd[189888]: drop connection #0 from [45.78.217.205]:45522 on [38.102.83.169]:22 penalty: exceeded LoginGraceTime
Nov 25 08:51:28 compute-0 nova_compute[253538]: 2025-11-25 08:51:28.459 253542 DEBUG nova.network.neutron [req-fb766494-7a89-437f-8fb1-686c94cc07c0 req-9e61290e-7ef7-4911-b1ac-a61f95b488aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updated VIF entry in instance network info cache for port 0547929c-86ba-4aaa-869f-c7e2b5ea7e67. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:51:28 compute-0 nova_compute[253538]: 2025-11-25 08:51:28.460 253542 DEBUG nova.network.neutron [req-fb766494-7a89-437f-8fb1-686c94cc07c0 req-9e61290e-7ef7-4911-b1ac-a61f95b488aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updating instance_info_cache with network_info: [{"id": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "address": "fa:16:3e:86:20:35", "network": {"id": "c0613062-c56d-4f59-a1bd-5487b9cae905", "bridge": "br-int", "label": "tempest-network-smoke--1258419912", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0547929c-86", "ovs_interfaceid": "0547929c-86ba-4aaa-869f-c7e2b5ea7e67", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:51:28 compute-0 nova_compute[253538]: 2025-11-25 08:51:28.481 253542 DEBUG oslo_concurrency.lockutils [req-fb766494-7a89-437f-8fb1-686c94cc07c0 req-9e61290e-7ef7-4911-b1ac-a61f95b488aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-b4f98996-3a98-41ad-af66-af37066515d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:51:28 compute-0 nova_compute[253538]: 2025-11-25 08:51:28.510 253542 DEBUG nova.network.neutron [-] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:51:28 compute-0 ceph-mon[75015]: osdmap e253: 3 total, 3 up, 3 in
Nov 25 08:51:28 compute-0 ceph-mon[75015]: pgmap v2120: 321 pgs: 321 active+clean; 189 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 36 KiB/s wr, 143 op/s
Nov 25 08:51:28 compute-0 nova_compute[253538]: 2025-11-25 08:51:28.532 253542 INFO nova.compute.manager [-] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Took 3.63 seconds to deallocate network for instance.
Nov 25 08:51:28 compute-0 nova_compute[253538]: 2025-11-25 08:51:28.576 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:28 compute-0 nova_compute[253538]: 2025-11-25 08:51:28.577 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:28 compute-0 nova_compute[253538]: 2025-11-25 08:51:28.636 253542 DEBUG nova.compute.manager [req-33392c5f-ccfe-4e56-b5cc-6ae11e07e2e8 req-2b778316-36d7-4a82-8c2a-b1faf4e609cd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Received event network-vif-deleted-0547929c-86ba-4aaa-869f-c7e2b5ea7e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:28 compute-0 nova_compute[253538]: 2025-11-25 08:51:28.686 253542 DEBUG oslo_concurrency.processutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:51:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2801981059' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:51:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:51:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2801981059' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:51:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:51:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/529027540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:51:29 compute-0 nova_compute[253538]: 2025-11-25 08:51:29.108 253542 DEBUG oslo_concurrency.processutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:29 compute-0 nova_compute[253538]: 2025-11-25 08:51:29.114 253542 DEBUG nova.compute.provider_tree [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:51:29 compute-0 nova_compute[253538]: 2025-11-25 08:51:29.131 253542 DEBUG nova.scheduler.client.report [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:51:29 compute-0 nova_compute[253538]: 2025-11-25 08:51:29.151 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:29 compute-0 nova_compute[253538]: 2025-11-25 08:51:29.180 253542 INFO nova.scheduler.client.report [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Deleted allocations for instance b4f98996-3a98-41ad-af66-af37066515d3
Nov 25 08:51:29 compute-0 nova_compute[253538]: 2025-11-25 08:51:29.241 253542 DEBUG nova.network.neutron [req-0308a17c-396b-4c67-a49e-869127eb407f req-c47b7472-fd34-4c77-b48f-d062b3f1b3e2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updated VIF entry in instance network info cache for port 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:51:29 compute-0 nova_compute[253538]: 2025-11-25 08:51:29.242 253542 DEBUG nova.network.neutron [req-0308a17c-396b-4c67-a49e-869127eb407f req-c47b7472-fd34-4c77-b48f-d062b3f1b3e2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updating instance_info_cache with network_info: [{"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:51:29 compute-0 nova_compute[253538]: 2025-11-25 08:51:29.263 253542 DEBUG oslo_concurrency.lockutils [None req-4de13c00-17b3-4ea1-939d-eac5a4c2cddd 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "b4f98996-3a98-41ad-af66-af37066515d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:29 compute-0 nova_compute[253538]: 2025-11-25 08:51:29.274 253542 DEBUG oslo_concurrency.lockutils [req-0308a17c-396b-4c67-a49e-869127eb407f req-c47b7472-fd34-4c77-b48f-d062b3f1b3e2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:51:29 compute-0 nova_compute[253538]: 2025-11-25 08:51:29.413 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2801981059' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:51:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2801981059' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:51:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/529027540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:51:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 134 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 37 KiB/s wr, 180 op/s
Nov 25 08:51:30 compute-0 ceph-mon[75015]: pgmap v2121: 321 pgs: 321 active+clean; 134 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 37 KiB/s wr, 180 op/s
Nov 25 08:51:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 134 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 31 KiB/s wr, 106 op/s
Nov 25 08:51:31 compute-0 nova_compute[253538]: 2025-11-25 08:51:31.843 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:51:31 compute-0 nova_compute[253538]: 2025-11-25 08:51:31.879 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Triggering sync for uuid e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 08:51:31 compute-0 nova_compute[253538]: 2025-11-25 08:51:31.880 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:31 compute-0 nova_compute[253538]: 2025-11-25 08:51:31.880 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:31 compute-0 nova_compute[253538]: 2025-11-25 08:51:31.911 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:51:32 compute-0 nova_compute[253538]: 2025-11-25 08:51:32.702 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:32 compute-0 ceph-mon[75015]: pgmap v2122: 321 pgs: 321 active+clean; 134 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 31 KiB/s wr, 106 op/s
Nov 25 08:51:33 compute-0 nova_compute[253538]: 2025-11-25 08:51:33.592 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:51:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 146 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 938 KiB/s wr, 119 op/s
Nov 25 08:51:33 compute-0 podman[367003]: 2025-11-25 08:51:33.818370006 +0000 UTC m=+0.064197270 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:51:33 compute-0 ovn_controller[152859]: 2025-11-25T08:51:33Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3e:2a:a3 10.100.0.12
Nov 25 08:51:33 compute-0 ovn_controller[152859]: 2025-11-25T08:51:33Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:2a:a3 10.100.0.12
Nov 25 08:51:34 compute-0 ovn_controller[152859]: 2025-11-25T08:51:34Z|01111|binding|INFO|Releasing lport e844dcfd-3730-493f-b401-25ee7b281b7b from this chassis (sb_readonly=0)
Nov 25 08:51:34 compute-0 nova_compute[253538]: 2025-11-25 08:51:34.108 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:34 compute-0 nova_compute[253538]: 2025-11-25 08:51:34.415 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:34 compute-0 nova_compute[253538]: 2025-11-25 08:51:34.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:51:34 compute-0 nova_compute[253538]: 2025-11-25 08:51:34.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:51:34 compute-0 ceph-mon[75015]: pgmap v2123: 321 pgs: 321 active+clean; 146 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 938 KiB/s wr, 119 op/s
Nov 25 08:51:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 157 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 1.7 MiB/s wr, 88 op/s
Nov 25 08:51:35 compute-0 podman[367022]: 2025-11-25 08:51:35.845971238 +0000 UTC m=+0.083926868 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 25 08:51:36 compute-0 nova_compute[253538]: 2025-11-25 08:51:36.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:51:36 compute-0 nova_compute[253538]: 2025-11-25 08:51:36.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:51:36 compute-0 nova_compute[253538]: 2025-11-25 08:51:36.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:51:36 compute-0 ceph-mon[75015]: pgmap v2124: 321 pgs: 321 active+clean; 157 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 1.7 MiB/s wr, 88 op/s
Nov 25 08:51:37 compute-0 nova_compute[253538]: 2025-11-25 08:51:37.121 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:51:37 compute-0 nova_compute[253538]: 2025-11-25 08:51:37.122 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:51:37 compute-0 nova_compute[253538]: 2025-11-25 08:51:37.122 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:51:37 compute-0 nova_compute[253538]: 2025-11-25 08:51:37.122 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:51:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:51:37 compute-0 nova_compute[253538]: 2025-11-25 08:51:37.705 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 2.5 MiB/s wr, 104 op/s
Nov 25 08:51:38 compute-0 nova_compute[253538]: 2025-11-25 08:51:38.538 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updating instance_info_cache with network_info: [{"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:51:38 compute-0 nova_compute[253538]: 2025-11-25 08:51:38.550 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:51:38 compute-0 nova_compute[253538]: 2025-11-25 08:51:38.550 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:51:38 compute-0 ceph-mon[75015]: pgmap v2125: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 2.5 MiB/s wr, 104 op/s
Nov 25 08:51:39 compute-0 sshd-session[367042]: Received disconnect from 45.202.211.6 port 35170:11: Bye Bye [preauth]
Nov 25 08:51:39 compute-0 sshd-session[367042]: Disconnected from authenticating user adm 45.202.211.6 port 35170 [preauth]
Nov 25 08:51:39 compute-0 nova_compute[253538]: 2025-11-25 08:51:39.390 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060684.3889587, b4f98996-3a98-41ad-af66-af37066515d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:51:39 compute-0 nova_compute[253538]: 2025-11-25 08:51:39.391 253542 INFO nova.compute.manager [-] [instance: b4f98996-3a98-41ad-af66-af37066515d3] VM Stopped (Lifecycle Event)
Nov 25 08:51:39 compute-0 nova_compute[253538]: 2025-11-25 08:51:39.405 253542 DEBUG nova.compute.manager [None req-f2ea9c83-649c-44b5-adfe-68e96ec671de - - - - - -] [instance: b4f98996-3a98-41ad-af66-af37066515d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:39 compute-0 nova_compute[253538]: 2025-11-25 08:51:39.445 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:39 compute-0 nova_compute[253538]: 2025-11-25 08:51:39.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:51:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Nov 25 08:51:40 compute-0 nova_compute[253538]: 2025-11-25 08:51:40.170 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:40 compute-0 nova_compute[253538]: 2025-11-25 08:51:40.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:51:40 compute-0 nova_compute[253538]: 2025-11-25 08:51:40.550 253542 INFO nova.compute.manager [None req-a8eedef9-005f-445c-bac8-56e75535fb1a 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Get console output
Nov 25 08:51:40 compute-0 nova_compute[253538]: 2025-11-25 08:51:40.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:51:40 compute-0 nova_compute[253538]: 2025-11-25 08:51:40.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:51:40 compute-0 nova_compute[253538]: 2025-11-25 08:51:40.559 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:51:40 compute-0 nova_compute[253538]: 2025-11-25 08:51:40.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:40 compute-0 nova_compute[253538]: 2025-11-25 08:51:40.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:40 compute-0 nova_compute[253538]: 2025-11-25 08:51:40.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:40 compute-0 nova_compute[253538]: 2025-11-25 08:51:40.576 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:51:40 compute-0 nova_compute[253538]: 2025-11-25 08:51:40.577 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:40 compute-0 ceph-mon[75015]: pgmap v2126: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Nov 25 08:51:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:51:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3969646687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:51:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:41.075 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:41.076 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:41.077 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:41 compute-0 nova_compute[253538]: 2025-11-25 08:51:41.080 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:41 compute-0 nova_compute[253538]: 2025-11-25 08:51:41.092 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:41.094 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:51:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:41.095 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:51:41 compute-0 nova_compute[253538]: 2025-11-25 08:51:41.172 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:51:41 compute-0 nova_compute[253538]: 2025-11-25 08:51:41.172 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:51:41 compute-0 podman[367067]: 2025-11-25 08:51:41.253437607 +0000 UTC m=+0.121796993 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 08:51:41 compute-0 nova_compute[253538]: 2025-11-25 08:51:41.386 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:51:41 compute-0 nova_compute[253538]: 2025-11-25 08:51:41.388 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3682MB free_disk=59.9428825378418GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:51:41 compute-0 nova_compute[253538]: 2025-11-25 08:51:41.388 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:41 compute-0 nova_compute[253538]: 2025-11-25 08:51:41.389 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:41 compute-0 nova_compute[253538]: 2025-11-25 08:51:41.462 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:51:41 compute-0 nova_compute[253538]: 2025-11-25 08:51:41.463 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:51:41 compute-0 nova_compute[253538]: 2025-11-25 08:51:41.463 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:51:41 compute-0 nova_compute[253538]: 2025-11-25 08:51:41.667 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 08:51:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3969646687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:51:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:51:42 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2445386032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:51:42 compute-0 nova_compute[253538]: 2025-11-25 08:51:42.198 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:42 compute-0 nova_compute[253538]: 2025-11-25 08:51:42.205 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:51:42 compute-0 nova_compute[253538]: 2025-11-25 08:51:42.220 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:51:42 compute-0 nova_compute[253538]: 2025-11-25 08:51:42.239 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:51:42 compute-0 nova_compute[253538]: 2025-11-25 08:51:42.240 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:42 compute-0 nova_compute[253538]: 2025-11-25 08:51:42.398 253542 DEBUG nova.compute.manager [req-dbf80395-0c38-4e1f-9bba-bb8267550931 req-bcd5bed4-05be-4e29-8f23-0f926c046f58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-changed-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:42 compute-0 nova_compute[253538]: 2025-11-25 08:51:42.399 253542 DEBUG nova.compute.manager [req-dbf80395-0c38-4e1f-9bba-bb8267550931 req-bcd5bed4-05be-4e29-8f23-0f926c046f58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Refreshing instance network info cache due to event network-changed-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:51:42 compute-0 nova_compute[253538]: 2025-11-25 08:51:42.399 253542 DEBUG oslo_concurrency.lockutils [req-dbf80395-0c38-4e1f-9bba-bb8267550931 req-bcd5bed4-05be-4e29-8f23-0f926c046f58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:51:42 compute-0 nova_compute[253538]: 2025-11-25 08:51:42.400 253542 DEBUG oslo_concurrency.lockutils [req-dbf80395-0c38-4e1f-9bba-bb8267550931 req-bcd5bed4-05be-4e29-8f23-0f926c046f58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:51:42 compute-0 nova_compute[253538]: 2025-11-25 08:51:42.400 253542 DEBUG nova.network.neutron [req-dbf80395-0c38-4e1f-9bba-bb8267550931 req-bcd5bed4-05be-4e29-8f23-0f926c046f58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Refreshing network info cache for port 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:51:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:51:42 compute-0 nova_compute[253538]: 2025-11-25 08:51:42.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:42 compute-0 ceph-mon[75015]: pgmap v2127: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 08:51:42 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2445386032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:51:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 08:51:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:44.097 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:44 compute-0 nova_compute[253538]: 2025-11-25 08:51:44.452 253542 DEBUG nova.network.neutron [req-dbf80395-0c38-4e1f-9bba-bb8267550931 req-bcd5bed4-05be-4e29-8f23-0f926c046f58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updated VIF entry in instance network info cache for port 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:51:44 compute-0 nova_compute[253538]: 2025-11-25 08:51:44.453 253542 DEBUG nova.network.neutron [req-dbf80395-0c38-4e1f-9bba-bb8267550931 req-bcd5bed4-05be-4e29-8f23-0f926c046f58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updating instance_info_cache with network_info: [{"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:51:44 compute-0 nova_compute[253538]: 2025-11-25 08:51:44.483 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:44 compute-0 nova_compute[253538]: 2025-11-25 08:51:44.497 253542 DEBUG oslo_concurrency.lockutils [req-dbf80395-0c38-4e1f-9bba-bb8267550931 req-bcd5bed4-05be-4e29-8f23-0f926c046f58 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:51:44 compute-0 ceph-mon[75015]: pgmap v2128: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 08:51:45 compute-0 nova_compute[253538]: 2025-11-25 08:51:45.207 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:45 compute-0 nova_compute[253538]: 2025-11-25 08:51:45.240 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:51:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 1.4 MiB/s wr, 32 op/s
Nov 25 08:51:46 compute-0 ceph-mon[75015]: pgmap v2129: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 1.4 MiB/s wr, 32 op/s
Nov 25 08:51:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:51:47 compute-0 nova_compute[253538]: 2025-11-25 08:51:47.711 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 175 KiB/s rd, 735 KiB/s wr, 18 op/s
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.033 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.033 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.049 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.129 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.129 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.136 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.136 253542 INFO nova.compute.claims [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.257 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:51:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121873357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.675 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.683 253542 DEBUG nova.compute.provider_tree [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.698 253542 DEBUG nova.scheduler.client.report [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.726 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.727 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.786 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.787 253542 DEBUG nova.network.neutron [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.810 253542 INFO nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.826 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:51:48 compute-0 ceph-mon[75015]: pgmap v2130: 321 pgs: 321 active+clean; 167 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 175 KiB/s rd, 735 KiB/s wr, 18 op/s
Nov 25 08:51:48 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2121873357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.957 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.958 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.959 253542 INFO nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Creating image(s)
Nov 25 08:51:48 compute-0 nova_compute[253538]: 2025-11-25 08:51:48.984 253542 DEBUG nova.storage.rbd_utils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.014 253542 DEBUG nova.storage.rbd_utils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.049 253542 DEBUG nova.storage.rbd_utils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.053 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.100 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "47279d1c-3634-4ea6-a752-99950cd5ce6c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.100 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.117 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.135 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.136 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.136 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.137 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.159 253542 DEBUG nova.storage.rbd_utils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.163 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 9fed0304-736a-4739-9e78-a95c676d1206_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.226 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.227 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.234 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.234 253542 INFO nova.compute.claims [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.334 253542 DEBUG nova.policy [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '009378dc36154271ba5b4590ce67ddde', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.401 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.485 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 9fed0304-736a-4739-9e78-a95c676d1206_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.512 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.554 253542 DEBUG nova.storage.rbd_utils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] resizing rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.658 253542 DEBUG nova.objects.instance [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'migration_context' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.674 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.674 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Ensure instance console log exists: /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.675 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.675 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.675 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 183 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 715 KiB/s wr, 15 op/s
Nov 25 08:51:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:51:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2932646589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.871 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.879 253542 DEBUG nova.compute.provider_tree [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.895 253542 DEBUG nova.scheduler.client.report [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:51:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2932646589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.942 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:49 compute-0 nova_compute[253538]: 2025-11-25 08:51:49.943 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.005 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.006 253542 DEBUG nova.network.neutron [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.024 253542 INFO nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.043 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.148 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.149 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.150 253542 INFO nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Creating image(s)
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.170 253542 DEBUG nova.storage.rbd_utils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.193 253542 DEBUG nova.storage.rbd_utils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.217 253542 DEBUG nova.storage.rbd_utils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.221 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.269 253542 DEBUG nova.policy [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.313 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.314 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.315 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.315 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.339 253542 DEBUG nova.storage.rbd_utils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:50 compute-0 sshd-session[367325]: Invalid user postgres from 193.32.162.151 port 56478
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.343 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:50 compute-0 sshd-session[367325]: Connection closed by invalid user postgres 193.32.162.151 port 56478 [preauth]
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.503 253542 DEBUG nova.network.neutron [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Successfully created port: 5518ee18-fcb4-4885-8bc6-a3daba84baff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.697 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.769 253542 DEBUG nova.storage.rbd_utils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.884 253542 DEBUG nova.objects.instance [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 47279d1c-3634-4ea6-a752-99950cd5ce6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.901 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.902 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Ensure instance console log exists: /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.902 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.903 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:50 compute-0 nova_compute[253538]: 2025-11-25 08:51:50.903 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:50 compute-0 ceph-mon[75015]: pgmap v2131: 321 pgs: 321 active+clean; 183 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 715 KiB/s wr, 15 op/s
Nov 25 08:51:51 compute-0 nova_compute[253538]: 2025-11-25 08:51:51.060 253542 DEBUG nova.network.neutron [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Successfully created port: a535be3a-db4d-4a49-9772-867e101290fa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:51:51 compute-0 nova_compute[253538]: 2025-11-25 08:51:51.575 253542 DEBUG nova.network.neutron [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Successfully updated port: 5518ee18-fcb4-4885-8bc6-a3daba84baff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:51:51 compute-0 nova_compute[253538]: 2025-11-25 08:51:51.595 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:51:51 compute-0 nova_compute[253538]: 2025-11-25 08:51:51.595 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquired lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:51:51 compute-0 nova_compute[253538]: 2025-11-25 08:51:51.595 253542 DEBUG nova.network.neutron [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:51:51 compute-0 nova_compute[253538]: 2025-11-25 08:51:51.679 253542 DEBUG nova.compute.manager [req-8cf36846-8f7d-4432-b1dc-71913f8930ad req-0700f0e1-da46-4e6f-938d-6de05c8fd6c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-changed-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:51 compute-0 nova_compute[253538]: 2025-11-25 08:51:51.680 253542 DEBUG nova.compute.manager [req-8cf36846-8f7d-4432-b1dc-71913f8930ad req-0700f0e1-da46-4e6f-938d-6de05c8fd6c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Refreshing instance network info cache due to event network-changed-5518ee18-fcb4-4885-8bc6-a3daba84baff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:51:51 compute-0 nova_compute[253538]: 2025-11-25 08:51:51.680 253542 DEBUG oslo_concurrency.lockutils [req-8cf36846-8f7d-4432-b1dc-71913f8930ad req-0700f0e1-da46-4e6f-938d-6de05c8fd6c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:51:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 183 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 714 KiB/s wr, 15 op/s
Nov 25 08:51:51 compute-0 nova_compute[253538]: 2025-11-25 08:51:51.761 253542 DEBUG nova.network.neutron [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:51:51 compute-0 nova_compute[253538]: 2025-11-25 08:51:51.997 253542 DEBUG nova.network.neutron [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Successfully updated port: a535be3a-db4d-4a49-9772-867e101290fa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.008 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.009 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.009 253542 DEBUG nova.network.neutron [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.082 253542 DEBUG nova.compute.manager [req-ca986abd-1769-4c31-bdb7-22787f9cdc03 req-f6df62a0-c7dd-4716-abbc-13899e66de2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received event network-changed-a535be3a-db4d-4a49-9772-867e101290fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.083 253542 DEBUG nova.compute.manager [req-ca986abd-1769-4c31-bdb7-22787f9cdc03 req-f6df62a0-c7dd-4716-abbc-13899e66de2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Refreshing instance network info cache due to event network-changed-a535be3a-db4d-4a49-9772-867e101290fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.083 253542 DEBUG oslo_concurrency.lockutils [req-ca986abd-1769-4c31-bdb7-22787f9cdc03 req-f6df62a0-c7dd-4716-abbc-13899e66de2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.160 253542 DEBUG nova.network.neutron [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:51:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.603 253542 DEBUG nova.network.neutron [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updating instance_info_cache with network_info: [{"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.625 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Releasing lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.626 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance network_info: |[{"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.626 253542 DEBUG oslo_concurrency.lockutils [req-8cf36846-8f7d-4432-b1dc-71913f8930ad req-0700f0e1-da46-4e6f-938d-6de05c8fd6c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.627 253542 DEBUG nova.network.neutron [req-8cf36846-8f7d-4432-b1dc-71913f8930ad req-0700f0e1-da46-4e6f-938d-6de05c8fd6c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Refreshing network info cache for port 5518ee18-fcb4-4885-8bc6-a3daba84baff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.630 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Start _get_guest_xml network_info=[{"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.644 253542 WARNING nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.649 253542 DEBUG nova.virt.libvirt.host [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.649 253542 DEBUG nova.virt.libvirt.host [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.653 253542 DEBUG nova.virt.libvirt.host [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.653 253542 DEBUG nova.virt.libvirt.host [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.654 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.654 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.654 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.655 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.655 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.655 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.655 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.656 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.656 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.656 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.656 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.657 253542 DEBUG nova.virt.hardware [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.660 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:52 compute-0 nova_compute[253538]: 2025-11-25 08:51:52.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:52 compute-0 ceph-mon[75015]: pgmap v2132: 321 pgs: 321 active+clean; 183 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 714 KiB/s wr, 15 op/s
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.024 253542 DEBUG nova.network.neutron [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Updating instance_info_cache with network_info: [{"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.062 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.062 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Instance network_info: |[{"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.063 253542 DEBUG oslo_concurrency.lockutils [req-ca986abd-1769-4c31-bdb7-22787f9cdc03 req-f6df62a0-c7dd-4716-abbc-13899e66de2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.063 253542 DEBUG nova.network.neutron [req-ca986abd-1769-4c31-bdb7-22787f9cdc03 req-f6df62a0-c7dd-4716-abbc-13899e66de2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Refreshing network info cache for port a535be3a-db4d-4a49-9772-867e101290fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.065 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Start _get_guest_xml network_info=[{"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.070 253542 WARNING nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:51:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:51:53 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1404244749' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.074 253542 DEBUG nova.virt.libvirt.host [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.074 253542 DEBUG nova.virt.libvirt.host [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.080 253542 DEBUG nova.virt.libvirt.host [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.081 253542 DEBUG nova.virt.libvirt.host [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.081 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.081 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.082 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.082 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.082 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.082 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.082 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.083 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.083 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.083 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.083 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.083 253542 DEBUG nova.virt.hardware [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.086 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.120 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.148 253542 DEBUG nova.storage.rbd_utils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.153 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:53 compute-0 sshd-session[367115]: Received disconnect from 45.78.222.2 port 35032:11: Bye Bye [preauth]
Nov 25 08:51:53 compute-0 sshd-session[367115]: Disconnected from authenticating user root 45.78.222.2 port 35032 [preauth]
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:51:53
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'images', '.mgr', 'volumes']
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:51:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:51:53 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/35121730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.580 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:51:53 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/233193304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.603 253542 DEBUG nova.storage.rbd_utils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.608 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.658 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.661 253542 DEBUG nova.virt.libvirt.vif [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-327371372',display_name='tempest-TestNetworkAdvancedServerOps-server-327371372',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-327371372',id=112,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMmeM8p5EJ/GlX5lcj2KCLf3TZ7G2ER5A8cVNuOIrPFCSxO8qm+HkDs1IeLM92Y+nFqkA7zCF0FOinPZWhiCPngd1CuHc0FrMJjIXIIvisspammiOznjpk4rLsu9K/sxg==',key_name='tempest-TestNetworkAdvancedServerOps-2049221666',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-2o0w2mgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:51:48Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=9fed0304-736a-4739-9e78-a95c676d1206,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.662 253542 DEBUG nova.network.os_vif_util [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.663 253542 DEBUG nova.network.os_vif_util [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.665 253542 DEBUG nova.objects.instance [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.678 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:51:53 compute-0 nova_compute[253538]:   <uuid>9fed0304-736a-4739-9e78-a95c676d1206</uuid>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   <name>instance-00000070</name>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-327371372</nova:name>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:51:52</nova:creationTime>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:51:53 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:51:53 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:51:53 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:51:53 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:51:53 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:51:53 compute-0 nova_compute[253538]:         <nova:user uuid="009378dc36154271ba5b4590ce67ddde">tempest-TestNetworkAdvancedServerOps-1132090577-project-member</nova:user>
Nov 25 08:51:53 compute-0 nova_compute[253538]:         <nova:project uuid="7dcaf3c96bfc4db3a41291debd385c67">tempest-TestNetworkAdvancedServerOps-1132090577</nova:project>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:51:53 compute-0 nova_compute[253538]:         <nova:port uuid="5518ee18-fcb4-4885-8bc6-a3daba84baff">
Nov 25 08:51:53 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <system>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <entry name="serial">9fed0304-736a-4739-9e78-a95c676d1206</entry>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <entry name="uuid">9fed0304-736a-4739-9e78-a95c676d1206</entry>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     </system>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   <os>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   </os>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   <features>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   </features>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/9fed0304-736a-4739-9e78-a95c676d1206_disk">
Nov 25 08:51:53 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       </source>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:51:53 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/9fed0304-736a-4739-9e78-a95c676d1206_disk.config">
Nov 25 08:51:53 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       </source>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:51:53 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:aa:ad:17"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <target dev="tap5518ee18-fc"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/console.log" append="off"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <video>
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     </video>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:51:53 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:51:53 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:51:53 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:51:53 compute-0 nova_compute[253538]: </domain>
Nov 25 08:51:53 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.680 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Preparing to wait for external event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.681 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.682 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.682 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.684 253542 DEBUG nova.virt.libvirt.vif [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-327371372',display_name='tempest-TestNetworkAdvancedServerOps-server-327371372',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-327371372',id=112,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMmeM8p5EJ/GlX5lcj2KCLf3TZ7G2ER5A8cVNuOIrPFCSxO8qm+HkDs1IeLM92Y+nFqkA7zCF0FOinPZWhiCPngd1CuHc0FrMJjIXIIvisspammiOznjpk4rLsu9K/sxg==',key_name='tempest-TestNetworkAdvancedServerOps-2049221666',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-2o0w2mgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:51:48Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=9fed0304-736a-4739-9e78-a95c676d1206,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.684 253542 DEBUG nova.network.os_vif_util [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.685 253542 DEBUG nova.network.os_vif_util [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.686 253542 DEBUG os_vif [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.687 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.688 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.689 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.695 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.695 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5518ee18-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.696 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5518ee18-fc, col_values=(('external_ids', {'iface-id': '5518ee18-fcb4-4885-8bc6-a3daba84baff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:ad:17', 'vm-uuid': '9fed0304-736a-4739-9e78-a95c676d1206'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.699 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:53 compute-0 NetworkManager[48915]: <info>  [1764060713.7000] manager: (tap5518ee18-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/455)
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.702 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.709 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.710 253542 INFO os_vif [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc')
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2133: 321 pgs: 321 active+clean; 230 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.4 MiB/s wr, 42 op/s
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.754 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.754 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.755 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No VIF found with MAC fa:16:3e:aa:ad:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.755 253542 INFO nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Using config drive
Nov 25 08:51:53 compute-0 nova_compute[253538]: 2025-11-25 08:51:53.788 253542 DEBUG nova.storage.rbd_utils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:51:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1404244749' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:51:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/35121730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:51:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/233193304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:51:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:51:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:51:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228729770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.064 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.065 253542 DEBUG nova.virt.libvirt.vif [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:51:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-737057420',display_name='tempest-TestNetworkBasicOps-server-737057420',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-737057420',id=113,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0V13VdtFjjfuJa+A9AY8vVYQrlDp8VmR/zDbnMpoRaniytKdXYDv2ooGFtOXnD87APiPgGqKaLDSkFHV94Z3CrjduwX8FjMfno6fvPaCxDikVs3WLPJK+CBmQ5ToXLLA==',key_name='tempest-TestNetworkBasicOps-301366135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-xuar01pl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:51:50Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=47279d1c-3634-4ea6-a752-99950cd5ce6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.066 253542 DEBUG nova.network.os_vif_util [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.066 253542 DEBUG nova.network.os_vif_util [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:1f:f1,bridge_name='br-int',has_traffic_filtering=True,id=a535be3a-db4d-4a49-9772-867e101290fa,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa535be3a-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.067 253542 DEBUG nova.objects.instance [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 47279d1c-3634-4ea6-a752-99950cd5ce6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.069 253542 DEBUG nova.network.neutron [req-ca986abd-1769-4c31-bdb7-22787f9cdc03 req-f6df62a0-c7dd-4716-abbc-13899e66de2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Updated VIF entry in instance network info cache for port a535be3a-db4d-4a49-9772-867e101290fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.069 253542 DEBUG nova.network.neutron [req-ca986abd-1769-4c31-bdb7-22787f9cdc03 req-f6df62a0-c7dd-4716-abbc-13899e66de2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Updating instance_info_cache with network_info: [{"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.084 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:51:54 compute-0 nova_compute[253538]:   <uuid>47279d1c-3634-4ea6-a752-99950cd5ce6c</uuid>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   <name>instance-00000071</name>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkBasicOps-server-737057420</nova:name>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:51:53</nova:creationTime>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:51:54 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:51:54 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:51:54 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:51:54 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:51:54 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:51:54 compute-0 nova_compute[253538]:         <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:51:54 compute-0 nova_compute[253538]:         <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:51:54 compute-0 nova_compute[253538]:         <nova:port uuid="a535be3a-db4d-4a49-9772-867e101290fa">
Nov 25 08:51:54 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <system>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <entry name="serial">47279d1c-3634-4ea6-a752-99950cd5ce6c</entry>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <entry name="uuid">47279d1c-3634-4ea6-a752-99950cd5ce6c</entry>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     </system>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   <os>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   </os>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   <features>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   </features>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/47279d1c-3634-4ea6-a752-99950cd5ce6c_disk">
Nov 25 08:51:54 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       </source>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:51:54 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/47279d1c-3634-4ea6-a752-99950cd5ce6c_disk.config">
Nov 25 08:51:54 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       </source>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:51:54 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:5f:1f:f1"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <target dev="tapa535be3a-db"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c/console.log" append="off"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <video>
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     </video>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:51:54 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:51:54 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:51:54 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:51:54 compute-0 nova_compute[253538]: </domain>
Nov 25 08:51:54 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.085 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Preparing to wait for external event network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.085 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.085 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.085 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.086 253542 DEBUG nova.virt.libvirt.vif [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:51:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-737057420',display_name='tempest-TestNetworkBasicOps-server-737057420',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-737057420',id=113,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0V13VdtFjjfuJa+A9AY8vVYQrlDp8VmR/zDbnMpoRaniytKdXYDv2ooGFtOXnD87APiPgGqKaLDSkFHV94Z3CrjduwX8FjMfno6fvPaCxDikVs3WLPJK+CBmQ5ToXLLA==',key_name='tempest-TestNetworkBasicOps-301366135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-xuar01pl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:51:50Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=47279d1c-3634-4ea6-a752-99950cd5ce6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.086 253542 DEBUG nova.network.os_vif_util [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.087 253542 DEBUG nova.network.os_vif_util [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:1f:f1,bridge_name='br-int',has_traffic_filtering=True,id=a535be3a-db4d-4a49-9772-867e101290fa,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa535be3a-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.087 253542 DEBUG os_vif [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:1f:f1,bridge_name='br-int',has_traffic_filtering=True,id=a535be3a-db4d-4a49-9772-867e101290fa,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa535be3a-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.088 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.088 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.088 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.089 253542 DEBUG oslo_concurrency.lockutils [req-ca986abd-1769-4c31-bdb7-22787f9cdc03 req-f6df62a0-c7dd-4716-abbc-13899e66de2c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.091 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.091 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa535be3a-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.091 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa535be3a-db, col_values=(('external_ids', {'iface-id': 'a535be3a-db4d-4a49-9772-867e101290fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:1f:f1', 'vm-uuid': '47279d1c-3634-4ea6-a752-99950cd5ce6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.093 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:54 compute-0 NetworkManager[48915]: <info>  [1764060714.0940] manager: (tapa535be3a-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.095 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.099 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.100 253542 INFO os_vif [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:1f:f1,bridge_name='br-int',has_traffic_filtering=True,id=a535be3a-db4d-4a49-9772-867e101290fa,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa535be3a-db')
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.160 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.160 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.160 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:5f:1f:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.161 253542 INFO nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Using config drive
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.181 253542 DEBUG nova.storage.rbd_utils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.466 253542 DEBUG nova.network.neutron [req-8cf36846-8f7d-4432-b1dc-71913f8930ad req-0700f0e1-da46-4e6f-938d-6de05c8fd6c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updated VIF entry in instance network info cache for port 5518ee18-fcb4-4885-8bc6-a3daba84baff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.467 253542 DEBUG nova.network.neutron [req-8cf36846-8f7d-4432-b1dc-71913f8930ad req-0700f0e1-da46-4e6f-938d-6de05c8fd6c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updating instance_info_cache with network_info: [{"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.488 253542 DEBUG oslo_concurrency.lockutils [req-8cf36846-8f7d-4432-b1dc-71913f8930ad req-0700f0e1-da46-4e6f-938d-6de05c8fd6c2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.531 253542 INFO nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Creating config drive at /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.537 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpee75fmk3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.702 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpee75fmk3" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.724 253542 DEBUG nova.storage.rbd_utils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.726 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config 9fed0304-736a-4739-9e78-a95c676d1206_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.792 253542 INFO nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Creating config drive at /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c/disk.config
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.797 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaktgeuh6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.922 253542 DEBUG oslo_concurrency.processutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config 9fed0304-736a-4739-9e78-a95c676d1206_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.923 253542 INFO nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Deleting local config drive /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config because it was imported into RBD.
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.937 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaktgeuh6" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.961 253542 DEBUG nova.storage.rbd_utils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:51:54 compute-0 ceph-mon[75015]: pgmap v2133: 321 pgs: 321 active+clean; 230 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.4 MiB/s wr, 42 op/s
Nov 25 08:51:54 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4228729770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:51:54 compute-0 nova_compute[253538]: 2025-11-25 08:51:54.964 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c/disk.config 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:51:54 compute-0 kernel: tap5518ee18-fc: entered promiscuous mode
Nov 25 08:51:54 compute-0 NetworkManager[48915]: <info>  [1764060714.9685] manager: (tap5518ee18-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/457)
Nov 25 08:51:54 compute-0 ovn_controller[152859]: 2025-11-25T08:51:54Z|01112|binding|INFO|Claiming lport 5518ee18-fcb4-4885-8bc6-a3daba84baff for this chassis.
Nov 25 08:51:54 compute-0 ovn_controller[152859]: 2025-11-25T08:51:54Z|01113|binding|INFO|5518ee18-fcb4-4885-8bc6-a3daba84baff: Claiming fa:16:3e:aa:ad:17 10.100.0.11
Nov 25 08:51:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:54.980 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:ad:17 10.100.0.11'], port_security=['fa:16:3e:aa:ad:17 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9fed0304-736a-4739-9e78-a95c676d1206', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-78cbfb83-5eb2-43b6-8132-ed291918f722', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3be0e54f-d1af-493d-9b04-9b0789174c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2873e9cb-ad93-4607-afe3-5e46bdf03cb7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5518ee18-fcb4-4885-8bc6-a3daba84baff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:51:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:54.983 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5518ee18-fcb4-4885-8bc6-a3daba84baff in datapath 78cbfb83-5eb2-43b6-8132-ed291918f722 bound to our chassis
Nov 25 08:51:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:54.985 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 78cbfb83-5eb2-43b6-8132-ed291918f722
Nov 25 08:51:54 compute-0 ovn_controller[152859]: 2025-11-25T08:51:54Z|01114|binding|INFO|Setting lport 5518ee18-fcb4-4885-8bc6-a3daba84baff ovn-installed in OVS
Nov 25 08:51:54 compute-0 ovn_controller[152859]: 2025-11-25T08:51:54Z|01115|binding|INFO|Setting lport 5518ee18-fcb4-4885-8bc6-a3daba84baff up in Southbound
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:54.998 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1675316-4b1e-47d0-ae46-cff9fdec791d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:54.999 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap78cbfb83-51 in ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:51:55 compute-0 systemd-machined[215790]: New machine qemu-139-instance-00000070.
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.002 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.001 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap78cbfb83-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.001 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[759becdb-88b8-4887-a837-c2dac60228bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.004 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[94b92ab0-f7b1-4790-9e81-d5af333d4b00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:55 compute-0 systemd[1]: Started Virtual Machine qemu-139-instance-00000070.
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.018 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[9c753d41-16ea-4ca0-a63f-e8a0eca9b7cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:55 compute-0 systemd-udevd[367739]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:51:55 compute-0 NetworkManager[48915]: <info>  [1764060715.0412] device (tap5518ee18-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:51:55 compute-0 NetworkManager[48915]: <info>  [1764060715.0421] device (tap5518ee18-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.045 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bddf972d-e65b-454c-a1ff-330870304af3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.076 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[29143171-ca1e-4cb5-b573-2838022dbf5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:55 compute-0 NetworkManager[48915]: <info>  [1764060715.0832] manager: (tap78cbfb83-50): new Veth device (/org/freedesktop/NetworkManager/Devices/458)
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.082 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a641f973-62b6-4b4c-a740-65bb146619fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:55 compute-0 systemd-udevd[367748]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.120 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2d92d6bc-d771-46a9-b285-c9e378727ed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.123 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0e23a90b-1df1-4745-a4c3-f2b227d2e24b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.150 253542 DEBUG oslo_concurrency.processutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c/disk.config 47279d1c-3634-4ea6-a752-99950cd5ce6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.151 253542 INFO nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Deleting local config drive /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c/disk.config because it was imported into RBD.
Nov 25 08:51:55 compute-0 NetworkManager[48915]: <info>  [1764060715.1539] device (tap78cbfb83-50): carrier: link connected
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.157 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[76ba6643-af1e-4bf8-a947-fcd92c730434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.178 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f2744787-a89d-41b0-ada5-b380eff35a83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap78cbfb83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:20:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 327], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606975, 'reachable_time': 30188, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367790, 'error': None, 'target': 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.196 253542 DEBUG nova.compute.manager [req-f55a8e5d-d576-44c7-8fdd-fc226d4c8d38 req-4469d890-d158-48d2-a15d-20f2dcfc9872 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.197 253542 DEBUG oslo_concurrency.lockutils [req-f55a8e5d-d576-44c7-8fdd-fc226d4c8d38 req-4469d890-d158-48d2-a15d-20f2dcfc9872 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.197 253542 DEBUG oslo_concurrency.lockutils [req-f55a8e5d-d576-44c7-8fdd-fc226d4c8d38 req-4469d890-d158-48d2-a15d-20f2dcfc9872 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.197 253542 DEBUG oslo_concurrency.lockutils [req-f55a8e5d-d576-44c7-8fdd-fc226d4c8d38 req-4469d890-d158-48d2-a15d-20f2dcfc9872 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.198 253542 DEBUG nova.compute.manager [req-f55a8e5d-d576-44c7-8fdd-fc226d4c8d38 req-4469d890-d158-48d2-a15d-20f2dcfc9872 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Processing event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:51:55 compute-0 kernel: tapa535be3a-db: entered promiscuous mode
Nov 25 08:51:55 compute-0 NetworkManager[48915]: <info>  [1764060715.1997] manager: (tapa535be3a-db): new Tun device (/org/freedesktop/NetworkManager/Devices/459)
Nov 25 08:51:55 compute-0 systemd-udevd[367782]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:51:55 compute-0 NetworkManager[48915]: <info>  [1764060715.2196] device (tapa535be3a-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:51:55 compute-0 NetworkManager[48915]: <info>  [1764060715.2210] device (tapa535be3a-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.209 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5b65bc-ac0f-437e-a6d5-432c0e1af6a1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:20be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606975, 'tstamp': 606975}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367796, 'error': None, 'target': 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:55 compute-0 ovn_controller[152859]: 2025-11-25T08:51:55Z|01116|binding|INFO|Claiming lport a535be3a-db4d-4a49-9772-867e101290fa for this chassis.
Nov 25 08:51:55 compute-0 ovn_controller[152859]: 2025-11-25T08:51:55Z|01117|binding|INFO|a535be3a-db4d-4a49-9772-867e101290fa: Claiming fa:16:3e:5f:1f:f1 10.100.0.5
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.264 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.272 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:1f:f1 10.100.0.5'], port_security=['fa:16:3e:5f:1f:f1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '47279d1c-3634-4ea6-a752-99950cd5ce6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'de7d7e23-cab0-4e13-9965-ee46854760f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b7f6d3d-0fdc-46fc-8ec7-6805fb9ea29c, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a535be3a-db4d-4a49-9772-867e101290fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:51:55 compute-0 ovn_controller[152859]: 2025-11-25T08:51:55Z|01118|binding|INFO|Setting lport a535be3a-db4d-4a49-9772-867e101290fa ovn-installed in OVS
Nov 25 08:51:55 compute-0 ovn_controller[152859]: 2025-11-25T08:51:55Z|01119|binding|INFO|Setting lport a535be3a-db4d-4a49-9772-867e101290fa up in Southbound
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.283 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.285 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5e6e774a-9c3c-4ef2-994c-6989f275e804]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap78cbfb83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:20:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 327], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606975, 'reachable_time': 30188, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 367802, 'error': None, 'target': 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:55 compute-0 systemd-machined[215790]: New machine qemu-140-instance-00000071.
Nov 25 08:51:55 compute-0 systemd[1]: Started Virtual Machine qemu-140-instance-00000071.
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.323 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc3cbb7-03b9-4852-aa82-2b786f8d9357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.387 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8d5f23-f2c3-4bc4-9cfa-5b96ebc59d10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.389 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78cbfb83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.389 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.390 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78cbfb83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:55 compute-0 kernel: tap78cbfb83-50: entered promiscuous mode
Nov 25 08:51:55 compute-0 NetworkManager[48915]: <info>  [1764060715.3923] manager: (tap78cbfb83-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.393 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.395 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap78cbfb83-50, col_values=(('external_ids', {'iface-id': '7a0c677f-94d5-4688-b88c-93d2fa378198'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:55 compute-0 ovn_controller[152859]: 2025-11-25T08:51:55Z|01120|binding|INFO|Releasing lport 7a0c677f-94d5-4688-b88c-93d2fa378198 from this chassis (sb_readonly=0)
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.412 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/78cbfb83-5eb2-43b6-8132-ed291918f722.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/78cbfb83-5eb2-43b6-8132-ed291918f722.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.412 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.413 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0f74b1c0-60cd-45ba-849d-b8c85594a257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.414 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-78cbfb83-5eb2-43b6-8132-ed291918f722
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/78cbfb83-5eb2-43b6-8132-ed291918f722.pid.haproxy
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 78cbfb83-5eb2-43b6-8132-ed291918f722
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.414 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'env', 'PROCESS_TAG=haproxy-78cbfb83-5eb2-43b6-8132-ed291918f722', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/78cbfb83-5eb2-43b6-8132-ed291918f722.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.477 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060715.476883, 9fed0304-736a-4739-9e78-a95c676d1206 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.477 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] VM Started (Lifecycle Event)
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.480 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.484 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.493 253542 DEBUG nova.compute.manager [req-5cfc0dfa-d3f1-47d9-90c0-6366ffb4c833 req-ff900022-237a-457b-a9e9-312807a8041a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received event network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.493 253542 DEBUG oslo_concurrency.lockutils [req-5cfc0dfa-d3f1-47d9-90c0-6366ffb4c833 req-ff900022-237a-457b-a9e9-312807a8041a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.493 253542 DEBUG oslo_concurrency.lockutils [req-5cfc0dfa-d3f1-47d9-90c0-6366ffb4c833 req-ff900022-237a-457b-a9e9-312807a8041a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.493 253542 DEBUG oslo_concurrency.lockutils [req-5cfc0dfa-d3f1-47d9-90c0-6366ffb4c833 req-ff900022-237a-457b-a9e9-312807a8041a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.494 253542 DEBUG nova.compute.manager [req-5cfc0dfa-d3f1-47d9-90c0-6366ffb4c833 req-ff900022-237a-457b-a9e9-312807a8041a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Processing event network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.495 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.499 253542 INFO nova.virt.libvirt.driver [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance spawned successfully.
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.500 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.501 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.531 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.532 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060715.4771008, 9fed0304-736a-4739-9e78-a95c676d1206 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.532 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] VM Paused (Lifecycle Event)
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.546 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.547 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.547 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.547 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.548 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.548 253542 DEBUG nova.virt.libvirt.driver [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.551 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.563 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060715.48286, 9fed0304-736a-4739-9e78-a95c676d1206 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.564 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] VM Resumed (Lifecycle Event)
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.586 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.589 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.606 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.615 253542 INFO nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Took 6.66 seconds to spawn the instance on the hypervisor.
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.616 253542 DEBUG nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.672 253542 INFO nova.compute.manager [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Took 7.57 seconds to build instance.
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.699 253542 DEBUG oslo_concurrency.lockutils [None req-2ceca59f-39a8-4b14-8061-2836c9cbd1d5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 259 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Nov 25 08:51:55 compute-0 podman[367903]: 2025-11-25 08:51:55.824166344 +0000 UTC m=+0.052958170 container create 354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:51:55 compute-0 systemd[1]: Started libpod-conmon-354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f.scope.
Nov 25 08:51:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:51:55 compute-0 podman[367903]: 2025-11-25 08:51:55.79265027 +0000 UTC m=+0.021442116 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6f13050eb7b6987378aae86fa19090b68d0867d3d270269cd362f317d03ce43/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.900 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.901 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060715.900513, 47279d1c-3634-4ea6-a752-99950cd5ce6c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.901 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] VM Started (Lifecycle Event)
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.905 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:51:55 compute-0 podman[367903]: 2025-11-25 08:51:55.914204575 +0000 UTC m=+0.142996421 container init 354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.915 253542 INFO nova.virt.libvirt.driver [-] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Instance spawned successfully.
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.917 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:51:55 compute-0 podman[367903]: 2025-11-25 08:51:55.92071043 +0000 UTC m=+0.149502256 container start 354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.922 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.932 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.939 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.940 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.940 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.941 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.941 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.942 253542 DEBUG nova.virt.libvirt.driver [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:51:55 compute-0 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[367943]: [NOTICE]   (367947) : New worker (367949) forked
Nov 25 08:51:55 compute-0 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[367943]: [NOTICE]   (367947) : Loading success.
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.966 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.966 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060715.9014297, 47279d1c-3634-4ea6-a752-99950cd5ce6c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.966 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] VM Paused (Lifecycle Event)
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.985 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a535be3a-db4d-4a49-9772-867e101290fa in datapath 8daad2e3-552f-4ebe-8fa4-01c68ec704b1 unbound from our chassis
Nov 25 08:51:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:55.987 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8daad2e3-552f-4ebe-8fa4-01c68ec704b1
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.992 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.995 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060715.9112117, 47279d1c-3634-4ea6-a752-99950cd5ce6c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.996 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] VM Resumed (Lifecycle Event)
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.998 253542 INFO nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Took 5.85 seconds to spawn the instance on the hypervisor.
Nov 25 08:51:55 compute-0 nova_compute[253538]: 2025-11-25 08:51:55.998 253542 DEBUG nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.001 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f5ce0f-27d2-47bb-a93e-c1e48e202117]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:56 compute-0 nova_compute[253538]: 2025-11-25 08:51:56.020 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:51:56 compute-0 nova_compute[253538]: 2025-11-25 08:51:56.023 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:51:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.029 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee2c581-eced-4311-bd6e-7e04837857ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.032 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[499a49b2-3d68-4c6d-a033-6d33fc8400ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:56 compute-0 nova_compute[253538]: 2025-11-25 08:51:56.047 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:51:56 compute-0 nova_compute[253538]: 2025-11-25 08:51:56.058 253542 INFO nova.compute.manager [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Took 6.85 seconds to build instance.
Nov 25 08:51:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.059 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ce3f07-fe63-463e-8e42-96da08f6165c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:56 compute-0 nova_compute[253538]: 2025-11-25 08:51:56.072 253542 DEBUG oslo_concurrency.lockutils [None req-761a63e2-f508-433d-8783-c11001e83e53 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.075 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[97c242c1-c655-46da-9758-b0acaea06518]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8daad2e3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:fb:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603450, 'reachable_time': 38589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367963, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.097 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[64b15398-2333-4173-aee1-9fe39123f319]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8daad2e3-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603462, 'tstamp': 603462}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367964, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8daad2e3-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603466, 'tstamp': 603466}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367964, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:51:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.099 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8daad2e3-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:56 compute-0 nova_compute[253538]: 2025-11-25 08:51:56.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:56 compute-0 nova_compute[253538]: 2025-11-25 08:51:56.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.102 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8daad2e3-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.102 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:51:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.102 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8daad2e3-50, col_values=(('external_ids', {'iface-id': 'e844dcfd-3730-493f-b401-25ee7b281b7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:51:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:51:56.103 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:51:56 compute-0 ceph-mon[75015]: pgmap v2134: 321 pgs: 321 active+clean; 259 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Nov 25 08:51:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:51:57 compute-0 sudo[367965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:51:57 compute-0 sudo[367965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:51:57 compute-0 sudo[367965]: pam_unix(sudo:session): session closed for user root
Nov 25 08:51:57 compute-0 sudo[367990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:51:57 compute-0 sudo[367990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:51:57 compute-0 sudo[367990]: pam_unix(sudo:session): session closed for user root
Nov 25 08:51:57 compute-0 nova_compute[253538]: 2025-11-25 08:51:57.715 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2135: 321 pgs: 321 active+clean; 259 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 105 op/s
Nov 25 08:51:57 compute-0 sudo[368015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:51:57 compute-0 sudo[368015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:51:57 compute-0 sudo[368015]: pam_unix(sudo:session): session closed for user root
Nov 25 08:51:57 compute-0 sudo[368040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 08:51:57 compute-0 sudo[368040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:51:58 compute-0 nova_compute[253538]: 2025-11-25 08:51:58.049 253542 DEBUG nova.compute.manager [req-6a217210-28ce-4903-9324-ffa9824c2ed8 req-046379fc-f848-4b47-8d60-2303373f1dd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:58 compute-0 nova_compute[253538]: 2025-11-25 08:51:58.050 253542 DEBUG oslo_concurrency.lockutils [req-6a217210-28ce-4903-9324-ffa9824c2ed8 req-046379fc-f848-4b47-8d60-2303373f1dd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:58 compute-0 nova_compute[253538]: 2025-11-25 08:51:58.050 253542 DEBUG oslo_concurrency.lockutils [req-6a217210-28ce-4903-9324-ffa9824c2ed8 req-046379fc-f848-4b47-8d60-2303373f1dd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:58 compute-0 nova_compute[253538]: 2025-11-25 08:51:58.050 253542 DEBUG oslo_concurrency.lockutils [req-6a217210-28ce-4903-9324-ffa9824c2ed8 req-046379fc-f848-4b47-8d60-2303373f1dd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:58 compute-0 nova_compute[253538]: 2025-11-25 08:51:58.050 253542 DEBUG nova.compute.manager [req-6a217210-28ce-4903-9324-ffa9824c2ed8 req-046379fc-f848-4b47-8d60-2303373f1dd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] No waiting events found dispatching network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:51:58 compute-0 nova_compute[253538]: 2025-11-25 08:51:58.050 253542 WARNING nova.compute.manager [req-6a217210-28ce-4903-9324-ffa9824c2ed8 req-046379fc-f848-4b47-8d60-2303373f1dd4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received unexpected event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff for instance with vm_state active and task_state None.
Nov 25 08:51:58 compute-0 nova_compute[253538]: 2025-11-25 08:51:58.114 253542 DEBUG nova.compute.manager [req-0a40afba-d827-4c93-8b4d-5b83610614bf req-61340031-9ba8-464e-9041-bbfb6ba02c75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received event network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:51:58 compute-0 nova_compute[253538]: 2025-11-25 08:51:58.115 253542 DEBUG oslo_concurrency.lockutils [req-0a40afba-d827-4c93-8b4d-5b83610614bf req-61340031-9ba8-464e-9041-bbfb6ba02c75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:51:58 compute-0 nova_compute[253538]: 2025-11-25 08:51:58.115 253542 DEBUG oslo_concurrency.lockutils [req-0a40afba-d827-4c93-8b4d-5b83610614bf req-61340031-9ba8-464e-9041-bbfb6ba02c75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:51:58 compute-0 nova_compute[253538]: 2025-11-25 08:51:58.116 253542 DEBUG oslo_concurrency.lockutils [req-0a40afba-d827-4c93-8b4d-5b83610614bf req-61340031-9ba8-464e-9041-bbfb6ba02c75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:51:58 compute-0 nova_compute[253538]: 2025-11-25 08:51:58.116 253542 DEBUG nova.compute.manager [req-0a40afba-d827-4c93-8b4d-5b83610614bf req-61340031-9ba8-464e-9041-bbfb6ba02c75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] No waiting events found dispatching network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:51:58 compute-0 nova_compute[253538]: 2025-11-25 08:51:58.116 253542 WARNING nova.compute.manager [req-0a40afba-d827-4c93-8b4d-5b83610614bf req-61340031-9ba8-464e-9041-bbfb6ba02c75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received unexpected event network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa for instance with vm_state active and task_state None.
Nov 25 08:51:58 compute-0 sudo[368040]: pam_unix(sudo:session): session closed for user root
Nov 25 08:51:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:51:58 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:51:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:51:58 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:51:58 compute-0 sudo[368084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:51:58 compute-0 sudo[368084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:51:58 compute-0 sudo[368084]: pam_unix(sudo:session): session closed for user root
Nov 25 08:51:58 compute-0 sudo[368109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:51:58 compute-0 sudo[368109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:51:58 compute-0 sudo[368109]: pam_unix(sudo:session): session closed for user root
Nov 25 08:51:58 compute-0 sudo[368134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:51:58 compute-0 sudo[368134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:51:58 compute-0 sudo[368134]: pam_unix(sudo:session): session closed for user root
Nov 25 08:51:58 compute-0 sudo[368159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:51:58 compute-0 sudo[368159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:51:58 compute-0 sudo[368159]: pam_unix(sudo:session): session closed for user root
Nov 25 08:51:59 compute-0 nova_compute[253538]: 2025-11-25 08:51:59.095 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:51:59 compute-0 sudo[368213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:51:59 compute-0 sudo[368213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:51:59 compute-0 sudo[368213]: pam_unix(sudo:session): session closed for user root
Nov 25 08:51:59 compute-0 ceph-mon[75015]: pgmap v2135: 321 pgs: 321 active+clean; 259 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 105 op/s
Nov 25 08:51:59 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:51:59 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:51:59 compute-0 sudo[368238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:51:59 compute-0 sudo[368238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:51:59 compute-0 sudo[368238]: pam_unix(sudo:session): session closed for user root
Nov 25 08:51:59 compute-0 sudo[368263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:51:59 compute-0 sudo[368263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:51:59 compute-0 sudo[368263]: pam_unix(sudo:session): session closed for user root
Nov 25 08:51:59 compute-0 sudo[368288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- inventory --format=json-pretty --filter-for-batch
Nov 25 08:51:59 compute-0 sudo[368288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:51:59 compute-0 podman[368354]: 2025-11-25 08:51:59.71588275 +0000 UTC m=+0.062732181 container create bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pasteur, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:51:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 260 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Nov 25 08:51:59 compute-0 systemd[1]: Started libpod-conmon-bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35.scope.
Nov 25 08:51:59 compute-0 podman[368354]: 2025-11-25 08:51:59.690538591 +0000 UTC m=+0.037388092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:51:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:51:59 compute-0 podman[368354]: 2025-11-25 08:51:59.83159798 +0000 UTC m=+0.178447421 container init bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pasteur, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:51:59 compute-0 podman[368354]: 2025-11-25 08:51:59.840798206 +0000 UTC m=+0.187647687 container start bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:51:59 compute-0 podman[368354]: 2025-11-25 08:51:59.847359241 +0000 UTC m=+0.194208692 container attach bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 08:51:59 compute-0 systemd[1]: libpod-bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35.scope: Deactivated successfully.
Nov 25 08:51:59 compute-0 kind_pasteur[368371]: 167 167
Nov 25 08:51:59 compute-0 conmon[368371]: conmon bb38fea992b459763201 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35.scope/container/memory.events
Nov 25 08:51:59 compute-0 podman[368354]: 2025-11-25 08:51:59.852736345 +0000 UTC m=+0.199585766 container died bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:51:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d523399cf4cc2c3c97b53d61c54bfd02b4d940f1a209a363e7aa33a89f676f6e-merged.mount: Deactivated successfully.
Nov 25 08:51:59 compute-0 podman[368354]: 2025-11-25 08:51:59.918201018 +0000 UTC m=+0.265050439 container remove bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pasteur, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 08:51:59 compute-0 systemd[1]: libpod-conmon-bb38fea992b4597632019f06dab2c5285d592a7cc5d99a9431a22162887faf35.scope: Deactivated successfully.
Nov 25 08:52:00 compute-0 podman[368396]: 2025-11-25 08:52:00.144658303 +0000 UTC m=+0.053470153 container create 17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_noether, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:52:00 compute-0 systemd[1]: Started libpod-conmon-17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a.scope.
Nov 25 08:52:00 compute-0 podman[368396]: 2025-11-25 08:52:00.119352265 +0000 UTC m=+0.028164125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:52:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c546deb9b6a912f50dd29fd3cef778dc9c9fcc911a10a6b81b9cb4f6d824da1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c546deb9b6a912f50dd29fd3cef778dc9c9fcc911a10a6b81b9cb4f6d824da1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c546deb9b6a912f50dd29fd3cef778dc9c9fcc911a10a6b81b9cb4f6d824da1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c546deb9b6a912f50dd29fd3cef778dc9c9fcc911a10a6b81b9cb4f6d824da1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:00 compute-0 nova_compute[253538]: 2025-11-25 08:52:00.228 253542 DEBUG nova.compute.manager [req-68294fe3-cb58-425d-8295-ab28300e67aa req-5aad538f-192a-4020-9df1-400dc296698c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received event network-changed-a535be3a-db4d-4a49-9772-867e101290fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:00 compute-0 nova_compute[253538]: 2025-11-25 08:52:00.230 253542 DEBUG nova.compute.manager [req-68294fe3-cb58-425d-8295-ab28300e67aa req-5aad538f-192a-4020-9df1-400dc296698c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Refreshing instance network info cache due to event network-changed-a535be3a-db4d-4a49-9772-867e101290fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:52:00 compute-0 nova_compute[253538]: 2025-11-25 08:52:00.230 253542 DEBUG oslo_concurrency.lockutils [req-68294fe3-cb58-425d-8295-ab28300e67aa req-5aad538f-192a-4020-9df1-400dc296698c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:52:00 compute-0 nova_compute[253538]: 2025-11-25 08:52:00.231 253542 DEBUG oslo_concurrency.lockutils [req-68294fe3-cb58-425d-8295-ab28300e67aa req-5aad538f-192a-4020-9df1-400dc296698c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:52:00 compute-0 nova_compute[253538]: 2025-11-25 08:52:00.231 253542 DEBUG nova.network.neutron [req-68294fe3-cb58-425d-8295-ab28300e67aa req-5aad538f-192a-4020-9df1-400dc296698c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Refreshing network info cache for port a535be3a-db4d-4a49-9772-867e101290fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:52:00 compute-0 podman[368396]: 2025-11-25 08:52:00.243263275 +0000 UTC m=+0.152075165 container init 17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 08:52:00 compute-0 podman[368396]: 2025-11-25 08:52:00.251757952 +0000 UTC m=+0.160569782 container start 17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_noether, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 08:52:00 compute-0 podman[368396]: 2025-11-25 08:52:00.264226636 +0000 UTC m=+0.173038506 container attach 17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_noether, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 08:52:00 compute-0 nova_compute[253538]: 2025-11-25 08:52:00.481 253542 DEBUG nova.compute.manager [req-39c32d01-fca9-4f6d-bbe0-f777ee183e8b req-6fa4999c-a2a2-4b62-9072-bf0b2374c769 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-changed-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:00 compute-0 nova_compute[253538]: 2025-11-25 08:52:00.482 253542 DEBUG nova.compute.manager [req-39c32d01-fca9-4f6d-bbe0-f777ee183e8b req-6fa4999c-a2a2-4b62-9072-bf0b2374c769 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Refreshing instance network info cache due to event network-changed-5518ee18-fcb4-4885-8bc6-a3daba84baff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:52:00 compute-0 nova_compute[253538]: 2025-11-25 08:52:00.482 253542 DEBUG oslo_concurrency.lockutils [req-39c32d01-fca9-4f6d-bbe0-f777ee183e8b req-6fa4999c-a2a2-4b62-9072-bf0b2374c769 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:52:00 compute-0 nova_compute[253538]: 2025-11-25 08:52:00.483 253542 DEBUG oslo_concurrency.lockutils [req-39c32d01-fca9-4f6d-bbe0-f777ee183e8b req-6fa4999c-a2a2-4b62-9072-bf0b2374c769 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:52:00 compute-0 nova_compute[253538]: 2025-11-25 08:52:00.483 253542 DEBUG nova.network.neutron [req-39c32d01-fca9-4f6d-bbe0-f777ee183e8b req-6fa4999c-a2a2-4b62-9072-bf0b2374c769 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Refreshing network info cache for port 5518ee18-fcb4-4885-8bc6-a3daba84baff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:52:01 compute-0 ceph-mon[75015]: pgmap v2136: 321 pgs: 321 active+clean; 260 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Nov 25 08:52:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 260 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.9 MiB/s wr, 185 op/s
Nov 25 08:52:01 compute-0 nifty_noether[368412]: [
Nov 25 08:52:01 compute-0 nifty_noether[368412]:     {
Nov 25 08:52:01 compute-0 nifty_noether[368412]:         "available": false,
Nov 25 08:52:01 compute-0 nifty_noether[368412]:         "ceph_device": false,
Nov 25 08:52:01 compute-0 nifty_noether[368412]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:         "lsm_data": {},
Nov 25 08:52:01 compute-0 nifty_noether[368412]:         "lvs": [],
Nov 25 08:52:01 compute-0 nifty_noether[368412]:         "path": "/dev/sr0",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:         "rejected_reasons": [
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "Insufficient space (<5GB)",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "Has a FileSystem"
Nov 25 08:52:01 compute-0 nifty_noether[368412]:         ],
Nov 25 08:52:01 compute-0 nifty_noether[368412]:         "sys_api": {
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "actuators": null,
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "device_nodes": "sr0",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "devname": "sr0",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "human_readable_size": "482.00 KB",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "id_bus": "ata",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "model": "QEMU DVD-ROM",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "nr_requests": "2",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "parent": "/dev/sr0",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "partitions": {},
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "path": "/dev/sr0",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "removable": "1",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "rev": "2.5+",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "ro": "0",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "rotational": "1",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "sas_address": "",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "sas_device_handle": "",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "scheduler_mode": "mq-deadline",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "sectors": 0,
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "sectorsize": "2048",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "size": 493568.0,
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "support_discard": "2048",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "type": "disk",
Nov 25 08:52:01 compute-0 nifty_noether[368412]:             "vendor": "QEMU"
Nov 25 08:52:01 compute-0 nifty_noether[368412]:         }
Nov 25 08:52:01 compute-0 nifty_noether[368412]:     }
Nov 25 08:52:01 compute-0 nifty_noether[368412]: ]
Nov 25 08:52:01 compute-0 nova_compute[253538]: 2025-11-25 08:52:01.746 253542 DEBUG nova.network.neutron [req-68294fe3-cb58-425d-8295-ab28300e67aa req-5aad538f-192a-4020-9df1-400dc296698c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Updated VIF entry in instance network info cache for port a535be3a-db4d-4a49-9772-867e101290fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:52:01 compute-0 nova_compute[253538]: 2025-11-25 08:52:01.749 253542 DEBUG nova.network.neutron [req-68294fe3-cb58-425d-8295-ab28300e67aa req-5aad538f-192a-4020-9df1-400dc296698c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Updating instance_info_cache with network_info: [{"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:52:01 compute-0 nova_compute[253538]: 2025-11-25 08:52:01.767 253542 DEBUG oslo_concurrency.lockutils [req-68294fe3-cb58-425d-8295-ab28300e67aa req-5aad538f-192a-4020-9df1-400dc296698c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-47279d1c-3634-4ea6-a752-99950cd5ce6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:52:01 compute-0 systemd[1]: libpod-17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a.scope: Deactivated successfully.
Nov 25 08:52:01 compute-0 systemd[1]: libpod-17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a.scope: Consumed 1.492s CPU time.
Nov 25 08:52:01 compute-0 podman[370400]: 2025-11-25 08:52:01.814132214 +0000 UTC m=+0.030744454 container died 17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 08:52:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c546deb9b6a912f50dd29fd3cef778dc9c9fcc911a10a6b81b9cb4f6d824da1f-merged.mount: Deactivated successfully.
Nov 25 08:52:01 compute-0 podman[370400]: 2025-11-25 08:52:01.899854361 +0000 UTC m=+0.116466601 container remove 17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_noether, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:52:01 compute-0 systemd[1]: libpod-conmon-17b5d11e16d68ea7b9edb00041d52000dfbc18b13311a6c8c9b33e194bef125a.scope: Deactivated successfully.
Nov 25 08:52:01 compute-0 sudo[368288]: pam_unix(sudo:session): session closed for user root
Nov 25 08:52:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:52:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:52:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:52:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:52:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:52:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:52:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:52:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:52:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:52:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:52:01 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 6f615931-3e1c-42cb-b218-9ea9a16b4f9d does not exist
Nov 25 08:52:01 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 780bb90f-8739-4266-85b3-78d307608b68 does not exist
Nov 25 08:52:01 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 9788e609-56cf-4cb7-b904-32bb3ba360ec does not exist
Nov 25 08:52:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:52:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:52:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:52:02 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:52:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:52:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:52:02 compute-0 sudo[370414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:52:02 compute-0 sudo[370414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:52:02 compute-0 sudo[370414]: pam_unix(sudo:session): session closed for user root
Nov 25 08:52:02 compute-0 nova_compute[253538]: 2025-11-25 08:52:02.083 253542 DEBUG nova.network.neutron [req-39c32d01-fca9-4f6d-bbe0-f777ee183e8b req-6fa4999c-a2a2-4b62-9072-bf0b2374c769 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updated VIF entry in instance network info cache for port 5518ee18-fcb4-4885-8bc6-a3daba84baff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:52:02 compute-0 nova_compute[253538]: 2025-11-25 08:52:02.085 253542 DEBUG nova.network.neutron [req-39c32d01-fca9-4f6d-bbe0-f777ee183e8b req-6fa4999c-a2a2-4b62-9072-bf0b2374c769 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updating instance_info_cache with network_info: [{"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:52:02 compute-0 nova_compute[253538]: 2025-11-25 08:52:02.103 253542 DEBUG oslo_concurrency.lockutils [req-39c32d01-fca9-4f6d-bbe0-f777ee183e8b req-6fa4999c-a2a2-4b62-9072-bf0b2374c769 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:52:02 compute-0 sudo[370439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:52:02 compute-0 sudo[370439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:52:02 compute-0 sudo[370439]: pam_unix(sudo:session): session closed for user root
Nov 25 08:52:02 compute-0 sudo[370464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:52:02 compute-0 sudo[370464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:52:02 compute-0 sudo[370464]: pam_unix(sudo:session): session closed for user root
Nov 25 08:52:02 compute-0 sudo[370489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:52:02 compute-0 sudo[370489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:52:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:52:02 compute-0 podman[370554]: 2025-11-25 08:52:02.606618308 +0000 UTC m=+0.038840681 container create 7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 08:52:02 compute-0 systemd[1]: Started libpod-conmon-7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e.scope.
Nov 25 08:52:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:52:02 compute-0 podman[370554]: 2025-11-25 08:52:02.588789921 +0000 UTC m=+0.021012314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:52:02 compute-0 podman[370554]: 2025-11-25 08:52:02.696808324 +0000 UTC m=+0.129030727 container init 7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:52:02 compute-0 podman[370554]: 2025-11-25 08:52:02.703467653 +0000 UTC m=+0.135690026 container start 7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 08:52:02 compute-0 gifted_meninsky[370571]: 167 167
Nov 25 08:52:02 compute-0 systemd[1]: libpod-7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e.scope: Deactivated successfully.
Nov 25 08:52:02 compute-0 podman[370554]: 2025-11-25 08:52:02.710706806 +0000 UTC m=+0.142929179 container attach 7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 08:52:02 compute-0 conmon[370571]: conmon 7c09e8733af9f00dd40d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e.scope/container/memory.events
Nov 25 08:52:02 compute-0 podman[370554]: 2025-11-25 08:52:02.712037952 +0000 UTC m=+0.144260325 container died 7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:52:02 compute-0 nova_compute[253538]: 2025-11-25 08:52:02.716 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd160ed41af57d781304cbacc2ed6a0d95397042c17de0783f3518d3c1033286-merged.mount: Deactivated successfully.
Nov 25 08:52:02 compute-0 podman[370554]: 2025-11-25 08:52:02.778013719 +0000 UTC m=+0.210236092 container remove 7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 08:52:02 compute-0 systemd[1]: libpod-conmon-7c09e8733af9f00dd40d482de2f2b377541b817bbbe384f11725769475719d4e.scope: Deactivated successfully.
Nov 25 08:52:02 compute-0 ceph-mon[75015]: pgmap v2137: 321 pgs: 321 active+clean; 260 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.9 MiB/s wr, 185 op/s
Nov 25 08:52:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:52:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:52:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:52:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:52:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:52:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:52:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:52:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:52:02 compute-0 podman[370596]: 2025-11-25 08:52:02.993413587 +0000 UTC m=+0.052530608 container create 4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 08:52:03 compute-0 systemd[1]: Started libpod-conmon-4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc.scope.
Nov 25 08:52:03 compute-0 podman[370596]: 2025-11-25 08:52:02.962003656 +0000 UTC m=+0.021120667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:52:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19ad65772af566d697367e7a63863cf44733f3655084e5bf391b2fc12c6634/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19ad65772af566d697367e7a63863cf44733f3655084e5bf391b2fc12c6634/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19ad65772af566d697367e7a63863cf44733f3655084e5bf391b2fc12c6634/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19ad65772af566d697367e7a63863cf44733f3655084e5bf391b2fc12c6634/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19ad65772af566d697367e7a63863cf44733f3655084e5bf391b2fc12c6634/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:03 compute-0 podman[370596]: 2025-11-25 08:52:03.093359504 +0000 UTC m=+0.152476495 container init 4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 08:52:03 compute-0 podman[370596]: 2025-11-25 08:52:03.103151606 +0000 UTC m=+0.162268587 container start 4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 08:52:03 compute-0 podman[370596]: 2025-11-25 08:52:03.108845109 +0000 UTC m=+0.167962090 container attach 4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:52:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 260 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.9 MiB/s wr, 190 op/s
Nov 25 08:52:04 compute-0 nova_compute[253538]: 2025-11-25 08:52:04.098 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014566723440450609 of space, bias 1.0, pg target 0.4370017032135183 quantized to 32 (current 32)
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:52:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:52:04 compute-0 intelligent_swartz[370612]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:52:04 compute-0 intelligent_swartz[370612]: --> relative data size: 1.0
Nov 25 08:52:04 compute-0 intelligent_swartz[370612]: --> All data devices are unavailable
Nov 25 08:52:04 compute-0 systemd[1]: libpod-4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc.scope: Deactivated successfully.
Nov 25 08:52:04 compute-0 systemd[1]: libpod-4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc.scope: Consumed 1.036s CPU time.
Nov 25 08:52:04 compute-0 podman[370641]: 2025-11-25 08:52:04.302648491 +0000 UTC m=+0.029913492 container died 4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:52:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e19ad65772af566d697367e7a63863cf44733f3655084e5bf391b2fc12c6634-merged.mount: Deactivated successfully.
Nov 25 08:52:04 compute-0 podman[370642]: 2025-11-25 08:52:04.349643599 +0000 UTC m=+0.057031018 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 25 08:52:04 compute-0 podman[370641]: 2025-11-25 08:52:04.363182812 +0000 UTC m=+0.090447793 container remove 4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:52:04 compute-0 systemd[1]: libpod-conmon-4794ca68449583edaa52a4acacdd0afde508379d89d6d41a4e0724338cfd94fc.scope: Deactivated successfully.
Nov 25 08:52:04 compute-0 sudo[370489]: pam_unix(sudo:session): session closed for user root
Nov 25 08:52:04 compute-0 sudo[370669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:52:04 compute-0 sudo[370669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:52:04 compute-0 sudo[370669]: pam_unix(sudo:session): session closed for user root
Nov 25 08:52:04 compute-0 sudo[370694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:52:04 compute-0 sudo[370694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:52:04 compute-0 sudo[370694]: pam_unix(sudo:session): session closed for user root
Nov 25 08:52:04 compute-0 sudo[370719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:52:04 compute-0 sudo[370719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:52:04 compute-0 sudo[370719]: pam_unix(sudo:session): session closed for user root
Nov 25 08:52:04 compute-0 sudo[370744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:52:04 compute-0 sudo[370744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:52:05 compute-0 ceph-mon[75015]: pgmap v2138: 321 pgs: 321 active+clean; 260 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.9 MiB/s wr, 190 op/s
Nov 25 08:52:05 compute-0 podman[370808]: 2025-11-25 08:52:05.075062846 +0000 UTC m=+0.084363269 container create 93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:52:05 compute-0 podman[370808]: 2025-11-25 08:52:05.020022583 +0000 UTC m=+0.029323006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:52:05 compute-0 systemd[1]: Started libpod-conmon-93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd.scope.
Nov 25 08:52:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:52:05 compute-0 podman[370808]: 2025-11-25 08:52:05.319581345 +0000 UTC m=+0.328881778 container init 93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:52:05 compute-0 podman[370808]: 2025-11-25 08:52:05.329203902 +0000 UTC m=+0.338504315 container start 93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 08:52:05 compute-0 sleepy_lalande[370824]: 167 167
Nov 25 08:52:05 compute-0 systemd[1]: libpod-93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd.scope: Deactivated successfully.
Nov 25 08:52:05 compute-0 podman[370808]: 2025-11-25 08:52:05.570026712 +0000 UTC m=+0.579327155 container attach 93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 08:52:05 compute-0 podman[370808]: 2025-11-25 08:52:05.571868682 +0000 UTC m=+0.581169095 container died 93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 08:52:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-6653c9d1d70de4a691e965d8cba549567a0bf15fb185a643fc470f187d947e4c-merged.mount: Deactivated successfully.
Nov 25 08:52:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 260 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 168 op/s
Nov 25 08:52:05 compute-0 podman[370808]: 2025-11-25 08:52:05.98138641 +0000 UTC m=+0.990686853 container remove 93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:52:06 compute-0 systemd[1]: libpod-conmon-93de3221f0f1e068dd555444b1f2bc47371a888d0bc9fa5cf59da227fe6fe2fd.scope: Deactivated successfully.
Nov 25 08:52:06 compute-0 podman[370843]: 2025-11-25 08:52:06.19538215 +0000 UTC m=+0.125096000 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 25 08:52:06 compute-0 podman[370864]: 2025-11-25 08:52:06.190049867 +0000 UTC m=+0.031471943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:52:06 compute-0 podman[370864]: 2025-11-25 08:52:06.360754509 +0000 UTC m=+0.202176535 container create f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:52:06 compute-0 systemd[1]: Started libpod-conmon-f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585.scope.
Nov 25 08:52:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:52:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820aead6f93f99be965cc392a43097cb8a3262e7afff66020d5e15bc83ab40f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820aead6f93f99be965cc392a43097cb8a3262e7afff66020d5e15bc83ab40f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820aead6f93f99be965cc392a43097cb8a3262e7afff66020d5e15bc83ab40f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820aead6f93f99be965cc392a43097cb8a3262e7afff66020d5e15bc83ab40f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:06 compute-0 podman[370864]: 2025-11-25 08:52:06.661020601 +0000 UTC m=+0.502442627 container init f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 08:52:06 compute-0 podman[370864]: 2025-11-25 08:52:06.669480877 +0000 UTC m=+0.510902903 container start f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_colden, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:52:06 compute-0 podman[370864]: 2025-11-25 08:52:06.806022934 +0000 UTC m=+0.647444970 container attach f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_colden, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:52:07 compute-0 ceph-mon[75015]: pgmap v2139: 321 pgs: 321 active+clean; 260 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 168 op/s
Nov 25 08:52:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:52:07 compute-0 optimistic_colden[370882]: {
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:     "0": [
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:         {
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "devices": [
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "/dev/loop3"
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             ],
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "lv_name": "ceph_lv0",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "lv_size": "21470642176",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "name": "ceph_lv0",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "tags": {
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.cluster_name": "ceph",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.crush_device_class": "",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.encrypted": "0",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.osd_id": "0",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.type": "block",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.vdo": "0"
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             },
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "type": "block",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "vg_name": "ceph_vg0"
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:         }
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:     ],
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:     "1": [
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:         {
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "devices": [
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "/dev/loop4"
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             ],
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "lv_name": "ceph_lv1",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "lv_size": "21470642176",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "name": "ceph_lv1",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "tags": {
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.cluster_name": "ceph",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.crush_device_class": "",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.encrypted": "0",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.osd_id": "1",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.type": "block",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.vdo": "0"
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             },
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "type": "block",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "vg_name": "ceph_vg1"
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:         }
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:     ],
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:     "2": [
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:         {
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "devices": [
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "/dev/loop5"
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             ],
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "lv_name": "ceph_lv2",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "lv_size": "21470642176",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "name": "ceph_lv2",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "tags": {
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.cluster_name": "ceph",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.crush_device_class": "",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.encrypted": "0",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.osd_id": "2",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.type": "block",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:                 "ceph.vdo": "0"
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             },
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "type": "block",
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:             "vg_name": "ceph_vg2"
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:         }
Nov 25 08:52:07 compute-0 optimistic_colden[370882]:     ]
Nov 25 08:52:07 compute-0 optimistic_colden[370882]: }
Nov 25 08:52:07 compute-0 systemd[1]: libpod-f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585.scope: Deactivated successfully.
Nov 25 08:52:07 compute-0 podman[370864]: 2025-11-25 08:52:07.540904676 +0000 UTC m=+1.382326692 container died f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 08:52:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-820aead6f93f99be965cc392a43097cb8a3262e7afff66020d5e15bc83ab40f1-merged.mount: Deactivated successfully.
Nov 25 08:52:07 compute-0 nova_compute[253538]: 2025-11-25 08:52:07.719 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 260 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 161 op/s
Nov 25 08:52:08 compute-0 podman[370864]: 2025-11-25 08:52:08.035233994 +0000 UTC m=+1.876656020 container remove f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_colden, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 08:52:08 compute-0 sudo[370744]: pam_unix(sudo:session): session closed for user root
Nov 25 08:52:08 compute-0 systemd[1]: libpod-conmon-f8d8ae674e876a19cee0457d23f9354637c2174422f2040cd948fb7225901585.scope: Deactivated successfully.
Nov 25 08:52:08 compute-0 sudo[370903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:52:08 compute-0 sudo[370903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:52:08 compute-0 sudo[370903]: pam_unix(sudo:session): session closed for user root
Nov 25 08:52:08 compute-0 sudo[370928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:52:08 compute-0 sudo[370928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:52:08 compute-0 sudo[370928]: pam_unix(sudo:session): session closed for user root
Nov 25 08:52:08 compute-0 sudo[370953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:52:08 compute-0 sudo[370953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:52:08 compute-0 sudo[370953]: pam_unix(sudo:session): session closed for user root
Nov 25 08:52:08 compute-0 sudo[370978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:52:08 compute-0 sudo[370978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:52:08 compute-0 podman[371042]: 2025-11-25 08:52:08.83524985 +0000 UTC m=+0.057986245 container create fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 08:52:08 compute-0 systemd[1]: Started libpod-conmon-fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb.scope.
Nov 25 08:52:08 compute-0 podman[371042]: 2025-11-25 08:52:08.812181291 +0000 UTC m=+0.034917736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:52:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:52:08 compute-0 podman[371042]: 2025-11-25 08:52:08.931377504 +0000 UTC m=+0.154113939 container init fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 08:52:08 compute-0 podman[371042]: 2025-11-25 08:52:08.939651436 +0000 UTC m=+0.162387841 container start fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:52:08 compute-0 podman[371042]: 2025-11-25 08:52:08.943814236 +0000 UTC m=+0.166550691 container attach fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:52:08 compute-0 dazzling_nash[371058]: 167 167
Nov 25 08:52:08 compute-0 systemd[1]: libpod-fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb.scope: Deactivated successfully.
Nov 25 08:52:08 compute-0 conmon[371058]: conmon fe2827d738c01c389eb8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb.scope/container/memory.events
Nov 25 08:52:08 compute-0 podman[371042]: 2025-11-25 08:52:08.946589551 +0000 UTC m=+0.169325956 container died fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:52:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-cab6055323cbeb4fe2219d840867d1d0b8d7516049c4f00aee61e5b6bbad131f-merged.mount: Deactivated successfully.
Nov 25 08:52:08 compute-0 podman[371042]: 2025-11-25 08:52:08.984461565 +0000 UTC m=+0.207197971 container remove fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:52:09 compute-0 systemd[1]: libpod-conmon-fe2827d738c01c389eb848c05ab854b933b643a3fc8495584b74fd2654e3f6fb.scope: Deactivated successfully.
Nov 25 08:52:09 compute-0 nova_compute[253538]: 2025-11-25 08:52:09.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:09 compute-0 podman[371081]: 2025-11-25 08:52:09.17727418 +0000 UTC m=+0.042275324 container create 1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 08:52:09 compute-0 ceph-mon[75015]: pgmap v2140: 321 pgs: 321 active+clean; 260 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 161 op/s
Nov 25 08:52:09 compute-0 systemd[1]: Started libpod-conmon-1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139.scope.
Nov 25 08:52:09 compute-0 podman[371081]: 2025-11-25 08:52:09.157448799 +0000 UTC m=+0.022449973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:52:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4324980b22023e9d7fe5d3c9eb22c39753bad4ee9c0cb5ac4e8a5628511539a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4324980b22023e9d7fe5d3c9eb22c39753bad4ee9c0cb5ac4e8a5628511539a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4324980b22023e9d7fe5d3c9eb22c39753bad4ee9c0cb5ac4e8a5628511539a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4324980b22023e9d7fe5d3c9eb22c39753bad4ee9c0cb5ac4e8a5628511539a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:09 compute-0 podman[371081]: 2025-11-25 08:52:09.27100812 +0000 UTC m=+0.136009274 container init 1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermat, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 08:52:09 compute-0 podman[371081]: 2025-11-25 08:52:09.278524111 +0000 UTC m=+0.143525255 container start 1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermat, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 25 08:52:09 compute-0 podman[371081]: 2025-11-25 08:52:09.284812479 +0000 UTC m=+0.149813623 container attach 1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermat, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 08:52:09 compute-0 ovn_controller[152859]: 2025-11-25T08:52:09Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:1f:f1 10.100.0.5
Nov 25 08:52:09 compute-0 ovn_controller[152859]: 2025-11-25T08:52:09Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:1f:f1 10.100.0.5
Nov 25 08:52:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2141: 321 pgs: 321 active+clean; 284 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.5 MiB/s wr, 189 op/s
Nov 25 08:52:10 compute-0 distracted_fermat[371097]: {
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:         "osd_id": 1,
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:         "type": "bluestore"
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:     },
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:         "osd_id": 2,
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:         "type": "bluestore"
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:     },
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:         "osd_id": 0,
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:         "type": "bluestore"
Nov 25 08:52:10 compute-0 distracted_fermat[371097]:     }
Nov 25 08:52:10 compute-0 distracted_fermat[371097]: }
Nov 25 08:52:10 compute-0 systemd[1]: libpod-1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139.scope: Deactivated successfully.
Nov 25 08:52:10 compute-0 conmon[371097]: conmon 1d2ffaec223dc746e98b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139.scope/container/memory.events
Nov 25 08:52:10 compute-0 podman[371081]: 2025-11-25 08:52:10.283345221 +0000 UTC m=+1.148346385 container died 1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:52:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4324980b22023e9d7fe5d3c9eb22c39753bad4ee9c0cb5ac4e8a5628511539a-merged.mount: Deactivated successfully.
Nov 25 08:52:10 compute-0 podman[371081]: 2025-11-25 08:52:10.343273837 +0000 UTC m=+1.208274981 container remove 1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermat, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:52:10 compute-0 systemd[1]: libpod-conmon-1d2ffaec223dc746e98b88031cdd867915757904473dcdde45849a2fe1341139.scope: Deactivated successfully.
Nov 25 08:52:10 compute-0 sudo[370978]: pam_unix(sudo:session): session closed for user root
Nov 25 08:52:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:52:10 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:52:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:52:10 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:52:10 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 2a2d1d77-28da-4fd6-adfb-09615c07e00e does not exist
Nov 25 08:52:10 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 5e2f9f6c-db54-41a1-b0cf-463ea072696b does not exist
Nov 25 08:52:10 compute-0 sudo[371143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:52:10 compute-0 sudo[371143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:52:10 compute-0 sudo[371143]: pam_unix(sudo:session): session closed for user root
Nov 25 08:52:10 compute-0 ovn_controller[152859]: 2025-11-25T08:52:10Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:ad:17 10.100.0.11
Nov 25 08:52:10 compute-0 ovn_controller[152859]: 2025-11-25T08:52:10Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:ad:17 10.100.0.11
Nov 25 08:52:10 compute-0 sudo[371168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:52:10 compute-0 sudo[371168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:52:10 compute-0 sudo[371168]: pam_unix(sudo:session): session closed for user root
Nov 25 08:52:11 compute-0 ceph-mon[75015]: pgmap v2141: 321 pgs: 321 active+clean; 284 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.5 MiB/s wr, 189 op/s
Nov 25 08:52:11 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:52:11 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:52:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 284 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 434 KiB/s rd, 2.5 MiB/s wr, 95 op/s
Nov 25 08:52:11 compute-0 podman[371193]: 2025-11-25 08:52:11.888561802 +0000 UTC m=+0.130508756 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:52:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:52:12 compute-0 nova_compute[253538]: 2025-11-25 08:52:12.721 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:13 compute-0 ceph-mon[75015]: pgmap v2142: 321 pgs: 321 active+clean; 284 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 434 KiB/s rd, 2.5 MiB/s wr, 95 op/s
Nov 25 08:52:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 320 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 684 KiB/s rd, 4.2 MiB/s wr, 182 op/s
Nov 25 08:52:14 compute-0 nova_compute[253538]: 2025-11-25 08:52:14.104 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:14 compute-0 nova_compute[253538]: 2025-11-25 08:52:14.884 253542 INFO nova.compute.manager [None req-f8a57218-974c-44b2-989c-6b1b7fca6cfb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Get console output
Nov 25 08:52:14 compute-0 nova_compute[253538]: 2025-11-25 08:52:14.891 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:52:15 compute-0 ceph-mon[75015]: pgmap v2143: 321 pgs: 321 active+clean; 320 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 684 KiB/s rd, 4.2 MiB/s wr, 182 op/s
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.228 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "47279d1c-3634-4ea6-a752-99950cd5ce6c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.228 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.229 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.229 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.229 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.230 253542 INFO nova.compute.manager [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Terminating instance
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.231 253542 DEBUG nova.compute.manager [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:52:15 compute-0 kernel: tapa535be3a-db (unregistering): left promiscuous mode
Nov 25 08:52:15 compute-0 NetworkManager[48915]: <info>  [1764060735.2960] device (tapa535be3a-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:52:15 compute-0 ovn_controller[152859]: 2025-11-25T08:52:15Z|01121|binding|INFO|Releasing lport a535be3a-db4d-4a49-9772-867e101290fa from this chassis (sb_readonly=0)
Nov 25 08:52:15 compute-0 ovn_controller[152859]: 2025-11-25T08:52:15Z|01122|binding|INFO|Setting lport a535be3a-db4d-4a49-9772-867e101290fa down in Southbound
Nov 25 08:52:15 compute-0 ovn_controller[152859]: 2025-11-25T08:52:15Z|01123|binding|INFO|Removing iface tapa535be3a-db ovn-installed in OVS
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.309 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.318 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:1f:f1 10.100.0.5'], port_security=['fa:16:3e:5f:1f:f1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '47279d1c-3634-4ea6-a752-99950cd5ce6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'de7d7e23-cab0-4e13-9965-ee46854760f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b7f6d3d-0fdc-46fc-8ec7-6805fb9ea29c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a535be3a-db4d-4a49-9772-867e101290fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:52:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.319 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a535be3a-db4d-4a49-9772-867e101290fa in datapath 8daad2e3-552f-4ebe-8fa4-01c68ec704b1 unbound from our chassis
Nov 25 08:52:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.320 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8daad2e3-552f-4ebe-8fa4-01c68ec704b1
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.329 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.346 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c5da7d-ba87-4c41-aa48-354196296b93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:15 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000071.scope: Deactivated successfully.
Nov 25 08:52:15 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000071.scope: Consumed 13.607s CPU time.
Nov 25 08:52:15 compute-0 systemd-machined[215790]: Machine qemu-140-instance-00000071 terminated.
Nov 25 08:52:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.375 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9417b609-9c42-46eb-baab-6db04f1d6cae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.378 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0546267f-ecb3-477b-a9ef-432eff5edff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.405 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[74152124-abfd-4014-8078-dc0fafa45161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.421 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6908a1-5a57-4f7d-bf8d-59c3bcfab420]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8daad2e3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:fb:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603450, 'reachable_time': 38589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371231, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.439 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[387bea50-8b09-4ce5-8192-55e3990209b7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8daad2e3-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603462, 'tstamp': 603462}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371232, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8daad2e3-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603466, 'tstamp': 603466}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371232, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.441 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8daad2e3-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.442 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.448 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.449 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8daad2e3-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.450 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:52:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.450 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8daad2e3-50, col_values=(('external_ids', {'iface-id': 'e844dcfd-3730-493f-b401-25ee7b281b7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:15.451 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.456 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.461 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.475 253542 INFO nova.virt.libvirt.driver [-] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Instance destroyed successfully.
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.475 253542 DEBUG nova.objects.instance [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 47279d1c-3634-4ea6-a752-99950cd5ce6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.489 253542 DEBUG nova.virt.libvirt.vif [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:51:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-737057420',display_name='tempest-TestNetworkBasicOps-server-737057420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-737057420',id=113,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0V13VdtFjjfuJa+A9AY8vVYQrlDp8VmR/zDbnMpoRaniytKdXYDv2ooGFtOXnD87APiPgGqKaLDSkFHV94Z3CrjduwX8FjMfno6fvPaCxDikVs3WLPJK+CBmQ5ToXLLA==',key_name='tempest-TestNetworkBasicOps-301366135',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:51:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-xuar01pl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:51:56Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=47279d1c-3634-4ea6-a752-99950cd5ce6c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.489 253542 DEBUG nova.network.os_vif_util [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "a535be3a-db4d-4a49-9772-867e101290fa", "address": "fa:16:3e:5f:1f:f1", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa535be3a-db", "ovs_interfaceid": "a535be3a-db4d-4a49-9772-867e101290fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.490 253542 DEBUG nova.network.os_vif_util [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5f:1f:f1,bridge_name='br-int',has_traffic_filtering=True,id=a535be3a-db4d-4a49-9772-867e101290fa,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa535be3a-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.490 253542 DEBUG os_vif [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:1f:f1,bridge_name='br-int',has_traffic_filtering=True,id=a535be3a-db4d-4a49-9772-867e101290fa,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa535be3a-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.493 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.494 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa535be3a-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.497 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.500 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.503 253542 INFO os_vif [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:1f:f1,bridge_name='br-int',has_traffic_filtering=True,id=a535be3a-db4d-4a49-9772-867e101290fa,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa535be3a-db')
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.531 253542 DEBUG nova.compute.manager [req-a303b7c2-563d-433e-9ad6-99cb6773da26 req-033f0dae-18c3-4916-82e9-613e301b278c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received event network-vif-unplugged-a535be3a-db4d-4a49-9772-867e101290fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.532 253542 DEBUG oslo_concurrency.lockutils [req-a303b7c2-563d-433e-9ad6-99cb6773da26 req-033f0dae-18c3-4916-82e9-613e301b278c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.532 253542 DEBUG oslo_concurrency.lockutils [req-a303b7c2-563d-433e-9ad6-99cb6773da26 req-033f0dae-18c3-4916-82e9-613e301b278c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.533 253542 DEBUG oslo_concurrency.lockutils [req-a303b7c2-563d-433e-9ad6-99cb6773da26 req-033f0dae-18c3-4916-82e9-613e301b278c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.533 253542 DEBUG nova.compute.manager [req-a303b7c2-563d-433e-9ad6-99cb6773da26 req-033f0dae-18c3-4916-82e9-613e301b278c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] No waiting events found dispatching network-vif-unplugged-a535be3a-db4d-4a49-9772-867e101290fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.533 253542 DEBUG nova.compute.manager [req-a303b7c2-563d-433e-9ad6-99cb6773da26 req-033f0dae-18c3-4916-82e9-613e301b278c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received event network-vif-unplugged-a535be3a-db4d-4a49-9772-867e101290fa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:52:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 326 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 692 KiB/s rd, 4.3 MiB/s wr, 198 op/s
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.987 253542 INFO nova.virt.libvirt.driver [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Deleting instance files /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c_del
Nov 25 08:52:15 compute-0 nova_compute[253538]: 2025-11-25 08:52:15.988 253542 INFO nova.virt.libvirt.driver [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Deletion of /var/lib/nova/instances/47279d1c-3634-4ea6-a752-99950cd5ce6c_del complete
Nov 25 08:52:17 compute-0 nova_compute[253538]: 2025-11-25 08:52:17.048 253542 INFO nova.compute.manager [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Took 1.82 seconds to destroy the instance on the hypervisor.
Nov 25 08:52:17 compute-0 nova_compute[253538]: 2025-11-25 08:52:17.049 253542 DEBUG oslo.service.loopingcall [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:52:17 compute-0 nova_compute[253538]: 2025-11-25 08:52:17.050 253542 DEBUG nova.compute.manager [-] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:52:17 compute-0 nova_compute[253538]: 2025-11-25 08:52:17.050 253542 DEBUG nova.network.neutron [-] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:52:17 compute-0 ceph-mon[75015]: pgmap v2144: 321 pgs: 321 active+clean; 326 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 692 KiB/s rd, 4.3 MiB/s wr, 198 op/s
Nov 25 08:52:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:52:17 compute-0 nova_compute[253538]: 2025-11-25 08:52:17.614 253542 DEBUG nova.compute.manager [req-2bd63a49-dfcf-4b49-bdb5-31e90efb7cad req-6b36d64f-46f4-474a-8c23-c49b240e0029 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received event network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:17 compute-0 nova_compute[253538]: 2025-11-25 08:52:17.615 253542 DEBUG oslo_concurrency.lockutils [req-2bd63a49-dfcf-4b49-bdb5-31e90efb7cad req-6b36d64f-46f4-474a-8c23-c49b240e0029 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:17 compute-0 nova_compute[253538]: 2025-11-25 08:52:17.615 253542 DEBUG oslo_concurrency.lockutils [req-2bd63a49-dfcf-4b49-bdb5-31e90efb7cad req-6b36d64f-46f4-474a-8c23-c49b240e0029 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:17 compute-0 nova_compute[253538]: 2025-11-25 08:52:17.616 253542 DEBUG oslo_concurrency.lockutils [req-2bd63a49-dfcf-4b49-bdb5-31e90efb7cad req-6b36d64f-46f4-474a-8c23-c49b240e0029 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:17 compute-0 nova_compute[253538]: 2025-11-25 08:52:17.616 253542 DEBUG nova.compute.manager [req-2bd63a49-dfcf-4b49-bdb5-31e90efb7cad req-6b36d64f-46f4-474a-8c23-c49b240e0029 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] No waiting events found dispatching network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:52:17 compute-0 nova_compute[253538]: 2025-11-25 08:52:17.616 253542 WARNING nova.compute.manager [req-2bd63a49-dfcf-4b49-bdb5-31e90efb7cad req-6b36d64f-46f4-474a-8c23-c49b240e0029 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received unexpected event network-vif-plugged-a535be3a-db4d-4a49-9772-867e101290fa for instance with vm_state active and task_state deleting.
Nov 25 08:52:17 compute-0 nova_compute[253538]: 2025-11-25 08:52:17.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 299 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 699 KiB/s rd, 4.3 MiB/s wr, 209 op/s
Nov 25 08:52:18 compute-0 nova_compute[253538]: 2025-11-25 08:52:18.118 253542 INFO nova.compute.manager [None req-f3878c9c-63c5-4ae1-b21b-c1ea62a9a88e 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Get console output
Nov 25 08:52:18 compute-0 nova_compute[253538]: 2025-11-25 08:52:18.123 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:52:18 compute-0 ceph-mon[75015]: pgmap v2145: 321 pgs: 321 active+clean; 299 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 699 KiB/s rd, 4.3 MiB/s wr, 209 op/s
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.233455) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060738233482, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1792, "num_deletes": 258, "total_data_size": 2816350, "memory_usage": 2856296, "flush_reason": "Manual Compaction"}
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060738253452, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2741872, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43342, "largest_seqno": 45133, "table_properties": {"data_size": 2733469, "index_size": 5153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17444, "raw_average_key_size": 20, "raw_value_size": 2716733, "raw_average_value_size": 3196, "num_data_blocks": 228, "num_entries": 850, "num_filter_entries": 850, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060578, "oldest_key_time": 1764060578, "file_creation_time": 1764060738, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 20035 microseconds, and 11605 cpu microseconds.
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.253488) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2741872 bytes OK
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.253506) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.255154) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.255168) EVENT_LOG_v1 {"time_micros": 1764060738255163, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.255184) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2808540, prev total WAL file size 2808540, number of live WAL files 2.
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.256189) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2677KB)], [101(7726KB)]
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060738256219, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10653507, "oldest_snapshot_seqno": -1}
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6614 keys, 8933452 bytes, temperature: kUnknown
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060738322020, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8933452, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8889601, "index_size": 26203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 172342, "raw_average_key_size": 26, "raw_value_size": 8771329, "raw_average_value_size": 1326, "num_data_blocks": 1027, "num_entries": 6614, "num_filter_entries": 6614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060738, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.322260) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8933452 bytes
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.323800) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.7 rd, 135.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 7.5 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 7140, records dropped: 526 output_compression: NoCompression
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.323819) EVENT_LOG_v1 {"time_micros": 1764060738323809, "job": 60, "event": "compaction_finished", "compaction_time_micros": 65870, "compaction_time_cpu_micros": 20839, "output_level": 6, "num_output_files": 1, "total_output_size": 8933452, "num_input_records": 7140, "num_output_records": 6614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060738324544, "job": 60, "event": "table_file_deletion", "file_number": 103}
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060738326060, "job": 60, "event": "table_file_deletion", "file_number": 101}
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.256124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.326107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.326111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.326113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.326115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:52:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:52:18.326117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:52:19 compute-0 nova_compute[253538]: 2025-11-25 08:52:19.170 253542 DEBUG nova.network.neutron [-] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:52:19 compute-0 nova_compute[253538]: 2025-11-25 08:52:19.217 253542 INFO nova.compute.manager [-] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Took 2.17 seconds to deallocate network for instance.
Nov 25 08:52:19 compute-0 nova_compute[253538]: 2025-11-25 08:52:19.256 253542 DEBUG nova.compute.manager [req-9e7c019a-1fcd-4404-9f4a-eacfaa1ea1ac req-c99b3aa3-1c75-426a-9b51-185c9b489121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Received event network-vif-deleted-a535be3a-db4d-4a49-9772-867e101290fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:19 compute-0 nova_compute[253538]: 2025-11-25 08:52:19.274 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:19 compute-0 nova_compute[253538]: 2025-11-25 08:52:19.274 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:19 compute-0 nova_compute[253538]: 2025-11-25 08:52:19.419 253542 DEBUG oslo_concurrency.processutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:52:19 compute-0 nova_compute[253538]: 2025-11-25 08:52:19.679 253542 INFO nova.compute.manager [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Rebuilding instance
Nov 25 08:52:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 246 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 705 KiB/s rd, 4.3 MiB/s wr, 215 op/s
Nov 25 08:52:19 compute-0 nova_compute[253538]: 2025-11-25 08:52:19.897 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:52:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:52:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4226396814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:52:19 compute-0 nova_compute[253538]: 2025-11-25 08:52:19.910 253542 DEBUG nova.compute.manager [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:52:19 compute-0 nova_compute[253538]: 2025-11-25 08:52:19.924 253542 DEBUG oslo_concurrency.processutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:52:19 compute-0 nova_compute[253538]: 2025-11-25 08:52:19.931 253542 DEBUG nova.compute.provider_tree [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:52:19 compute-0 nova_compute[253538]: 2025-11-25 08:52:19.964 253542 DEBUG nova.scheduler.client.report [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:52:19 compute-0 nova_compute[253538]: 2025-11-25 08:52:19.969 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:52:19 compute-0 nova_compute[253538]: 2025-11-25 08:52:19.983 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:52:19 compute-0 nova_compute[253538]: 2025-11-25 08:52:19.997 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'resources' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:52:20 compute-0 nova_compute[253538]: 2025-11-25 08:52:20.006 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'migration_context' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:52:20 compute-0 nova_compute[253538]: 2025-11-25 08:52:20.014 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:52:20 compute-0 nova_compute[253538]: 2025-11-25 08:52:20.020 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:52:20 compute-0 nova_compute[253538]: 2025-11-25 08:52:20.029 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:20 compute-0 nova_compute[253538]: 2025-11-25 08:52:20.091 253542 INFO nova.scheduler.client.report [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 47279d1c-3634-4ea6-a752-99950cd5ce6c
Nov 25 08:52:20 compute-0 nova_compute[253538]: 2025-11-25 08:52:20.184 253542 DEBUG oslo_concurrency.lockutils [None req-caf90b56-93c1-4042-8b13-7f23668a4f7f 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "47279d1c-3634-4ea6-a752-99950cd5ce6c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:20 compute-0 nova_compute[253538]: 2025-11-25 08:52:20.498 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:20 compute-0 ceph-mon[75015]: pgmap v2146: 321 pgs: 321 active+clean; 246 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 705 KiB/s rd, 4.3 MiB/s wr, 215 op/s
Nov 25 08:52:20 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4226396814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:52:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 246 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 1.8 MiB/s wr, 136 op/s
Nov 25 08:52:22 compute-0 kernel: tap5518ee18-fc (unregistering): left promiscuous mode
Nov 25 08:52:22 compute-0 NetworkManager[48915]: <info>  [1764060742.2766] device (tap5518ee18-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:52:22 compute-0 ovn_controller[152859]: 2025-11-25T08:52:22Z|01124|binding|INFO|Releasing lport 5518ee18-fcb4-4885-8bc6-a3daba84baff from this chassis (sb_readonly=0)
Nov 25 08:52:22 compute-0 ovn_controller[152859]: 2025-11-25T08:52:22Z|01125|binding|INFO|Setting lport 5518ee18-fcb4-4885-8bc6-a3daba84baff down in Southbound
Nov 25 08:52:22 compute-0 ovn_controller[152859]: 2025-11-25T08:52:22Z|01126|binding|INFO|Removing iface tap5518ee18-fc ovn-installed in OVS
Nov 25 08:52:22 compute-0 nova_compute[253538]: 2025-11-25 08:52:22.290 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:22 compute-0 nova_compute[253538]: 2025-11-25 08:52:22.293 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.306 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:ad:17 10.100.0.11'], port_security=['fa:16:3e:aa:ad:17 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9fed0304-736a-4739-9e78-a95c676d1206', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-78cbfb83-5eb2-43b6-8132-ed291918f722', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3be0e54f-d1af-493d-9b04-9b0789174c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2873e9cb-ad93-4607-afe3-5e46bdf03cb7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5518ee18-fcb4-4885-8bc6-a3daba84baff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:52:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.307 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5518ee18-fcb4-4885-8bc6-a3daba84baff in datapath 78cbfb83-5eb2-43b6-8132-ed291918f722 unbound from our chassis
Nov 25 08:52:22 compute-0 nova_compute[253538]: 2025-11-25 08:52:22.308 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.308 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 78cbfb83-5eb2-43b6-8132-ed291918f722, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:52:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.309 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[31d12b23-fcde-4540-b4e2-a5ffcdb72126]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.310 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 namespace which is not needed anymore
Nov 25 08:52:22 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d00000070.scope: Deactivated successfully.
Nov 25 08:52:22 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d00000070.scope: Consumed 14.163s CPU time.
Nov 25 08:52:22 compute-0 systemd-machined[215790]: Machine qemu-139-instance-00000070 terminated.
Nov 25 08:52:22 compute-0 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[367943]: [NOTICE]   (367947) : haproxy version is 2.8.14-c23fe91
Nov 25 08:52:22 compute-0 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[367943]: [NOTICE]   (367947) : path to executable is /usr/sbin/haproxy
Nov 25 08:52:22 compute-0 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[367943]: [WARNING]  (367947) : Exiting Master process...
Nov 25 08:52:22 compute-0 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[367943]: [WARNING]  (367947) : Exiting Master process...
Nov 25 08:52:22 compute-0 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[367943]: [ALERT]    (367947) : Current worker (367949) exited with code 143 (Terminated)
Nov 25 08:52:22 compute-0 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[367943]: [WARNING]  (367947) : All workers exited. Exiting... (0)
Nov 25 08:52:22 compute-0 systemd[1]: libpod-354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f.scope: Deactivated successfully.
Nov 25 08:52:22 compute-0 conmon[367943]: conmon 354c4c5ebb81452fe5e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f.scope/container/memory.events
Nov 25 08:52:22 compute-0 podman[371306]: 2025-11-25 08:52:22.452635646 +0000 UTC m=+0.044770970 container died 354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:52:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f-userdata-shm.mount: Deactivated successfully.
Nov 25 08:52:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6f13050eb7b6987378aae86fa19090b68d0867d3d270269cd362f317d03ce43-merged.mount: Deactivated successfully.
Nov 25 08:52:22 compute-0 podman[371306]: 2025-11-25 08:52:22.507898036 +0000 UTC m=+0.100033310 container cleanup 354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:52:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:52:22 compute-0 systemd[1]: libpod-conmon-354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f.scope: Deactivated successfully.
Nov 25 08:52:22 compute-0 podman[371345]: 2025-11-25 08:52:22.581177518 +0000 UTC m=+0.046484885 container remove 354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:52:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.587 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5072c573-aa1c-462e-9052-40acfdf20ba7]: (4, ('Tue Nov 25 08:52:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 (354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f)\n354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f\nTue Nov 25 08:52:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 (354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f)\n354c4c5ebb81452fe5e409e73a536bd402c008eefa5f7f63106b0b5501227d2f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.590 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c6477721-71f3-4d9e-85f5-5fa62cd355c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.590 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78cbfb83-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:22 compute-0 nova_compute[253538]: 2025-11-25 08:52:22.592 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:22 compute-0 kernel: tap78cbfb83-50: left promiscuous mode
Nov 25 08:52:22 compute-0 nova_compute[253538]: 2025-11-25 08:52:22.611 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.614 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4a1b7684-54b2-4bfe-8be4-d97c5d238cb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.630 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[871da7e6-3d07-495e-8a3a-3dd4d2b96f18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.631 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e5621982-820b-4951-81e9-5e3c6b896d15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.647 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c22c2c-4e5a-46d5-a4be-0d80be280876]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606966, 'reachable_time': 25934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371363, 'error': None, 'target': 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.649 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:52:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:22.650 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[da0dd7f6-e935-48d8-8af0-2f00ac57538d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d78cbfb83\x2d5eb2\x2d43b6\x2d8132\x2ded291918f722.mount: Deactivated successfully.
Nov 25 08:52:22 compute-0 nova_compute[253538]: 2025-11-25 08:52:22.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:22 compute-0 ceph-mon[75015]: pgmap v2147: 321 pgs: 321 active+clean; 246 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 1.8 MiB/s wr, 136 op/s
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.048 253542 INFO nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance shutdown successfully after 3 seconds.
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.056 253542 INFO nova.virt.libvirt.driver [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance destroyed successfully.
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.061 253542 INFO nova.virt.libvirt.driver [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance destroyed successfully.
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.062 253542 DEBUG nova.virt.libvirt.vif [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-327371372',display_name='tempest-TestNetworkAdvancedServerOps-server-327371372',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-327371372',id=112,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMmeM8p5EJ/GlX5lcj2KCLf3TZ7G2ER5A8cVNuOIrPFCSxO8qm+HkDs1IeLM92Y+nFqkA7zCF0FOinPZWhiCPngd1CuHc0FrMJjIXIIvisspammiOznjpk4rLsu9K/sxg==',key_name='tempest-TestNetworkAdvancedServerOps-2049221666',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:51:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-2o0w2mgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:52:19Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=9fed0304-736a-4739-9e78-a95c676d1206,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.063 253542 DEBUG nova.network.os_vif_util [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.064 253542 DEBUG nova.network.os_vif_util [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.064 253542 DEBUG os_vif [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.066 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5518ee18-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.071 253542 DEBUG nova.compute.manager [req-c745bb40-78ef-4ff0-8462-240c8a99dc84 req-48f0a54f-5729-41f4-9082-a22c4b4aad4d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-unplugged-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.072 253542 DEBUG oslo_concurrency.lockutils [req-c745bb40-78ef-4ff0-8462-240c8a99dc84 req-48f0a54f-5729-41f4-9082-a22c4b4aad4d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.072 253542 DEBUG oslo_concurrency.lockutils [req-c745bb40-78ef-4ff0-8462-240c8a99dc84 req-48f0a54f-5729-41f4-9082-a22c4b4aad4d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.072 253542 DEBUG oslo_concurrency.lockutils [req-c745bb40-78ef-4ff0-8462-240c8a99dc84 req-48f0a54f-5729-41f4-9082-a22c4b4aad4d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.072 253542 DEBUG nova.compute.manager [req-c745bb40-78ef-4ff0-8462-240c8a99dc84 req-48f0a54f-5729-41f4-9082-a22c4b4aad4d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] No waiting events found dispatching network-vif-unplugged-5518ee18-fcb4-4885-8bc6-a3daba84baff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.072 253542 WARNING nova.compute.manager [req-c745bb40-78ef-4ff0-8462-240c8a99dc84 req-48f0a54f-5729-41f4-9082-a22c4b4aad4d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received unexpected event network-vif-unplugged-5518ee18-fcb4-4885-8bc6-a3daba84baff for instance with vm_state active and task_state rebuilding.
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.073 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.074 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.075 253542 INFO os_vif [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc')
Nov 25 08:52:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:52:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:52:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:52:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:52:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:52:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.500 253542 INFO nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Deleting instance files /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206_del
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.502 253542 INFO nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Deletion of /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206_del complete
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.714 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.715 253542 INFO nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Creating image(s)
Nov 25 08:52:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 232 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 1.8 MiB/s wr, 139 op/s
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.745 253542 DEBUG nova.storage.rbd_utils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.770 253542 DEBUG nova.storage.rbd_utils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.796 253542 DEBUG nova.storage.rbd_utils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.801 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.879 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.880 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.880 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.881 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.881 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.883 253542 INFO nova.compute.manager [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Terminating instance
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.884 253542 DEBUG nova.compute.manager [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.907 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.907 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.908 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.909 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "a6a8bade0130d94f6fc91c6e483cc9b6db836676" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:23 compute-0 kernel: tap2ad9a2b7-f5 (unregistering): left promiscuous mode
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.946 253542 DEBUG nova.storage.rbd_utils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:52:23 compute-0 NetworkManager[48915]: <info>  [1764060743.9477] device (tap2ad9a2b7-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:52:23 compute-0 ovn_controller[152859]: 2025-11-25T08:52:23Z|01127|binding|INFO|Releasing lport 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 from this chassis (sb_readonly=0)
Nov 25 08:52:23 compute-0 ovn_controller[152859]: 2025-11-25T08:52:23Z|01128|binding|INFO|Setting lport 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 down in Southbound
Nov 25 08:52:23 compute-0 ovn_controller[152859]: 2025-11-25T08:52:23Z|01129|binding|INFO|Removing iface tap2ad9a2b7-f5 ovn-installed in OVS
Nov 25 08:52:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:23.961 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:2a:a3 10.100.0.12'], port_security=['fa:16:3e:3e:2a:a3 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '371b8f16-6d0a-48c6-b770-1fa4712eb5f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b7f6d3d-0fdc-46fc-8ec7-6805fb9ea29c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:52:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:23.963 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 in datapath 8daad2e3-552f-4ebe-8fa4-01c68ec704b1 unbound from our chassis
Nov 25 08:52:23 compute-0 nova_compute[253538]: 2025-11-25 08:52:23.962 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 9fed0304-736a-4739-9e78-a95c676d1206_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:52:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:23.964 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8daad2e3-552f-4ebe-8fa4-01c68ec704b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:52:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:23.965 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a28fad-ac65-4b53-9b6d-12241abb6d20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:23.965 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1 namespace which is not needed anymore
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.006 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:24 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Nov 25 08:52:24 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006f.scope: Consumed 15.200s CPU time.
Nov 25 08:52:24 compute-0 systemd-machined[215790]: Machine qemu-138-instance-0000006f terminated.
Nov 25 08:52:24 compute-0 NetworkManager[48915]: <info>  [1764060744.1055] manager: (tap2ad9a2b7-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/461)
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.131 253542 INFO nova.virt.libvirt.driver [-] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Instance destroyed successfully.
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.132 253542 DEBUG nova.objects.instance [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:52:24 compute-0 neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1[366867]: [NOTICE]   (366871) : haproxy version is 2.8.14-c23fe91
Nov 25 08:52:24 compute-0 neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1[366867]: [NOTICE]   (366871) : path to executable is /usr/sbin/haproxy
Nov 25 08:52:24 compute-0 neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1[366867]: [WARNING]  (366871) : Exiting Master process...
Nov 25 08:52:24 compute-0 neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1[366867]: [WARNING]  (366871) : Exiting Master process...
Nov 25 08:52:24 compute-0 neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1[366867]: [ALERT]    (366871) : Current worker (366873) exited with code 143 (Terminated)
Nov 25 08:52:24 compute-0 neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1[366867]: [WARNING]  (366871) : All workers exited. Exiting... (0)
Nov 25 08:52:24 compute-0 systemd[1]: libpod-b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529.scope: Deactivated successfully.
Nov 25 08:52:24 compute-0 podman[371496]: 2025-11-25 08:52:24.152971384 +0000 UTC m=+0.073526360 container died b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.152 253542 DEBUG nova.virt.libvirt.vif [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:51:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-199412432',display_name='tempest-TestNetworkBasicOps-server-199412432',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-199412432',id=111,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMuJR0+MVpFtNHEH/qBMUEI9mE13UW6GrULa+2972JvBZYqj7jCYYsMmZITZ+SM7QQhK9eTjWP2J5imfxbLYOM0couLFe8mdKS/uhBmTvd2vRYexSjbqdhkaRLs1gfDUJQ==',key_name='tempest-TestNetworkBasicOps-1778650473',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:51:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-hcn7lok0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:51:20Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.155 253542 DEBUG nova.network.os_vif_util [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "address": "fa:16:3e:3e:2a:a3", "network": {"id": "8daad2e3-552f-4ebe-8fa4-01c68ec704b1", "bridge": "br-int", "label": "tempest-network-smoke--961197321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad9a2b7-f5", "ovs_interfaceid": "2ad9a2b7-f59d-49fd-aaa3-5253dd637f18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.156 253542 DEBUG nova.network.os_vif_util [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:2a:a3,bridge_name='br-int',has_traffic_filtering=True,id=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad9a2b7-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.156 253542 DEBUG os_vif [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:2a:a3,bridge_name='br-int',has_traffic_filtering=True,id=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad9a2b7-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.159 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.160 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ad9a2b7-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.211 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.213 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.215 253542 INFO os_vif [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:2a:a3,bridge_name='br-int',has_traffic_filtering=True,id=2ad9a2b7-f59d-49fd-aaa3-5253dd637f18,network=Network(8daad2e3-552f-4ebe-8fa4-01c68ec704b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad9a2b7-f5')
Nov 25 08:52:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529-userdata-shm.mount: Deactivated successfully.
Nov 25 08:52:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2849636997d3f41f71e0b1f9f347c5bb88d5e1c968dd84aab3bbb4c45a6a526-merged.mount: Deactivated successfully.
Nov 25 08:52:24 compute-0 podman[371496]: 2025-11-25 08:52:24.278079054 +0000 UTC m=+0.198634050 container cleanup b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 08:52:24 compute-0 systemd[1]: libpod-conmon-b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529.scope: Deactivated successfully.
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.306 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676 9fed0304-736a-4739-9e78-a95c676d1206_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:52:24 compute-0 podman[371558]: 2025-11-25 08:52:24.344884344 +0000 UTC m=+0.043084965 container remove b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:52:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.354 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6a867c55-6b0d-4d0c-a506-9632f475cae7]: (4, ('Tue Nov 25 08:52:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1 (b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529)\nb20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529\nTue Nov 25 08:52:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1 (b20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529)\nb20ece1dee115c004ce5ad488b43e4172f6f506e449fe87286dd0509ecda3529\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.356 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ef1a28-c9c8-42c5-8092-47c7ca2dac4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.356 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8daad2e3-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:24 compute-0 kernel: tap8daad2e3-50: left promiscuous mode
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.367 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.371 253542 DEBUG nova.compute.manager [req-b0d22655-7050-4c1b-8399-f94ad5ba2dc0 req-9eea4f90-c034-4f44-9f52-a5066f878786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-vif-unplugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.371 253542 DEBUG oslo_concurrency.lockutils [req-b0d22655-7050-4c1b-8399-f94ad5ba2dc0 req-9eea4f90-c034-4f44-9f52-a5066f878786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.372 253542 DEBUG oslo_concurrency.lockutils [req-b0d22655-7050-4c1b-8399-f94ad5ba2dc0 req-9eea4f90-c034-4f44-9f52-a5066f878786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.372 253542 DEBUG oslo_concurrency.lockutils [req-b0d22655-7050-4c1b-8399-f94ad5ba2dc0 req-9eea4f90-c034-4f44-9f52-a5066f878786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.372 253542 DEBUG nova.compute.manager [req-b0d22655-7050-4c1b-8399-f94ad5ba2dc0 req-9eea4f90-c034-4f44-9f52-a5066f878786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] No waiting events found dispatching network-vif-unplugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.372 253542 DEBUG nova.compute.manager [req-b0d22655-7050-4c1b-8399-f94ad5ba2dc0 req-9eea4f90-c034-4f44-9f52-a5066f878786 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-vif-unplugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.377 253542 DEBUG nova.storage.rbd_utils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] resizing rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:52:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.391 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56102816-58da-4525-bcc2-bc4c7047e07a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.405 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1a890ece-4b52-4cf1-99c9-a678143ebfc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.407 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c067a3a3-78a2-4f2a-836c-5cbd0094c978]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.412 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.427 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8beffa-2641-4982-babe-75b9935d01f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603442, 'reachable_time': 21287, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371626, 'error': None, 'target': 'ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.429 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8daad2e3-552f-4ebe-8fa4-01c68ec704b1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:52:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:24.429 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[889338ad-3cf5-41d5-97d4-6dc1a95cad4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d8daad2e3\x2d552f\x2d4ebe\x2d8fa4\x2d01c68ec704b1.mount: Deactivated successfully.
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.468 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.468 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Ensure instance console log exists: /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.469 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.469 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.469 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.472 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Start _get_guest_xml network_info=[{"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.476 253542 WARNING nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.482 253542 DEBUG nova.virt.libvirt.host [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.483 253542 DEBUG nova.virt.libvirt.host [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.486 253542 DEBUG nova.virt.libvirt.host [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.486 253542 DEBUG nova.virt.libvirt.host [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.487 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.487 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:56Z,direct_url=<?>,disk_format='qcow2',id=64385127-d622-49bb-be38-b33beb2692d1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.488 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.488 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.488 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.488 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.489 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.489 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.489 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.490 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.490 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.490 253542 DEBUG nova.virt.hardware [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.491 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.502 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.583 253542 INFO nova.virt.libvirt.driver [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Deleting instance files /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_del
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.584 253542 INFO nova.virt.libvirt.driver [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Deletion of /var/lib/nova/instances/e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb_del complete
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.641 253542 INFO nova.compute.manager [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Took 0.76 seconds to destroy the instance on the hypervisor.
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.641 253542 DEBUG oslo.service.loopingcall [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.642 253542 DEBUG nova.compute.manager [-] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:52:24 compute-0 nova_compute[253538]: 2025-11-25 08:52:24.642 253542 DEBUG nova.network.neutron [-] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:52:24 compute-0 ceph-mon[75015]: pgmap v2148: 321 pgs: 321 active+clean; 232 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 1.8 MiB/s wr, 139 op/s
Nov 25 08:52:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:52:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/991475864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.000 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.022 253542 DEBUG nova.storage.rbd_utils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.026 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.347 253542 DEBUG nova.compute.manager [req-eca482a7-de80-426c-b8ac-cc767ab4c4e4 req-6e7b5288-5673-44d0-9b56-630f28d6fb54 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.347 253542 DEBUG oslo_concurrency.lockutils [req-eca482a7-de80-426c-b8ac-cc767ab4c4e4 req-6e7b5288-5673-44d0-9b56-630f28d6fb54 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.348 253542 DEBUG oslo_concurrency.lockutils [req-eca482a7-de80-426c-b8ac-cc767ab4c4e4 req-6e7b5288-5673-44d0-9b56-630f28d6fb54 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.351 253542 DEBUG oslo_concurrency.lockutils [req-eca482a7-de80-426c-b8ac-cc767ab4c4e4 req-6e7b5288-5673-44d0-9b56-630f28d6fb54 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.351 253542 DEBUG nova.compute.manager [req-eca482a7-de80-426c-b8ac-cc767ab4c4e4 req-6e7b5288-5673-44d0-9b56-630f28d6fb54 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] No waiting events found dispatching network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.352 253542 WARNING nova.compute.manager [req-eca482a7-de80-426c-b8ac-cc767ab4c4e4 req-6e7b5288-5673-44d0-9b56-630f28d6fb54 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received unexpected event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff for instance with vm_state active and task_state rebuild_spawning.
Nov 25 08:52:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:52:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1917122493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.482 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.483 253542 DEBUG nova.virt.libvirt.vif [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-327371372',display_name='tempest-TestNetworkAdvancedServerOps-server-327371372',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-327371372',id=112,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMmeM8p5EJ/GlX5lcj2KCLf3TZ7G2ER5A8cVNuOIrPFCSxO8qm+HkDs1IeLM92Y+nFqkA7zCF0FOinPZWhiCPngd1CuHc0FrMJjIXIIvisspammiOznjpk4rLsu9K/sxg==',key_name='tempest-TestNetworkAdvancedServerOps-2049221666',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:51:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-2o0w2mgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:52:23Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=9fed0304-736a-4739-9e78-a95c676d1206,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.484 253542 DEBUG nova.network.os_vif_util [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.485 253542 DEBUG nova.network.os_vif_util [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.488 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:52:25 compute-0 nova_compute[253538]:   <uuid>9fed0304-736a-4739-9e78-a95c676d1206</uuid>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   <name>instance-00000070</name>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-327371372</nova:name>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:52:24</nova:creationTime>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:52:25 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:52:25 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:52:25 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:52:25 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:52:25 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:52:25 compute-0 nova_compute[253538]:         <nova:user uuid="009378dc36154271ba5b4590ce67ddde">tempest-TestNetworkAdvancedServerOps-1132090577-project-member</nova:user>
Nov 25 08:52:25 compute-0 nova_compute[253538]:         <nova:project uuid="7dcaf3c96bfc4db3a41291debd385c67">tempest-TestNetworkAdvancedServerOps-1132090577</nova:project>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="64385127-d622-49bb-be38-b33beb2692d1"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:52:25 compute-0 nova_compute[253538]:         <nova:port uuid="5518ee18-fcb4-4885-8bc6-a3daba84baff">
Nov 25 08:52:25 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <system>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <entry name="serial">9fed0304-736a-4739-9e78-a95c676d1206</entry>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <entry name="uuid">9fed0304-736a-4739-9e78-a95c676d1206</entry>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     </system>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   <os>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   </os>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   <features>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   </features>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/9fed0304-736a-4739-9e78-a95c676d1206_disk">
Nov 25 08:52:25 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       </source>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:52:25 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/9fed0304-736a-4739-9e78-a95c676d1206_disk.config">
Nov 25 08:52:25 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       </source>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:52:25 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:aa:ad:17"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <target dev="tap5518ee18-fc"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/console.log" append="off"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <video>
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     </video>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:52:25 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:52:25 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:52:25 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:52:25 compute-0 nova_compute[253538]: </domain>
Nov 25 08:52:25 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.491 253542 DEBUG nova.virt.libvirt.vif [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-327371372',display_name='tempest-TestNetworkAdvancedServerOps-server-327371372',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-327371372',id=112,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMmeM8p5EJ/GlX5lcj2KCLf3TZ7G2ER5A8cVNuOIrPFCSxO8qm+HkDs1IeLM92Y+nFqkA7zCF0FOinPZWhiCPngd1CuHc0FrMJjIXIIvisspammiOznjpk4rLsu9K/sxg==',key_name='tempest-TestNetworkAdvancedServerOps-2049221666',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:51:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-2o0w2mgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:52:23Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=9fed0304-736a-4739-9e78-a95c676d1206,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.492 253542 DEBUG nova.network.os_vif_util [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.494 253542 DEBUG nova.network.os_vif_util [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.495 253542 DEBUG os_vif [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.496 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.496 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.497 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.500 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.500 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5518ee18-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.501 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5518ee18-fc, col_values=(('external_ids', {'iface-id': '5518ee18-fcb4-4885-8bc6-a3daba84baff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:ad:17', 'vm-uuid': '9fed0304-736a-4739-9e78-a95c676d1206'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:25 compute-0 NetworkManager[48915]: <info>  [1764060745.5037] manager: (tap5518ee18-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/462)
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.507 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.509 253542 INFO os_vif [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc')
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.564 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.564 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.565 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No VIF found with MAC fa:16:3e:aa:ad:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.565 253542 INFO nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Using config drive
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.591 253542 DEBUG nova.storage.rbd_utils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.605 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.647 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'keypairs' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.722 253542 DEBUG nova.network.neutron [-] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:52:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 200 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 120 KiB/s rd, 1.3 MiB/s wr, 85 op/s
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.749 253542 INFO nova.compute.manager [-] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Took 1.11 seconds to deallocate network for instance.
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.793 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.793 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/991475864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:52:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1917122493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:52:25 compute-0 nova_compute[253538]: 2025-11-25 08:52:25.868 253542 DEBUG oslo_concurrency.processutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.105 253542 INFO nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Creating config drive at /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.110 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkb8b4hnl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.263 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkb8b4hnl" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.296 253542 DEBUG nova.storage.rbd_utils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 9fed0304-736a-4739-9e78-a95c676d1206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.300 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config 9fed0304-736a-4739-9e78-a95c676d1206_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:52:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:52:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/157297453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.371 253542 DEBUG oslo_concurrency.processutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.378 253542 DEBUG nova.compute.provider_tree [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.399 253542 DEBUG nova.scheduler.client.report [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.426 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.459 253542 INFO nova.scheduler.client.report [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.479 253542 DEBUG nova.compute.manager [req-55a759dd-b511-4a05-ade6-6187a913afc9 req-bfcee7c5-002c-4579-b378-d60dc24ac206 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.480 253542 DEBUG oslo_concurrency.lockutils [req-55a759dd-b511-4a05-ade6-6187a913afc9 req-bfcee7c5-002c-4579-b378-d60dc24ac206 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.481 253542 DEBUG oslo_concurrency.lockutils [req-55a759dd-b511-4a05-ade6-6187a913afc9 req-bfcee7c5-002c-4579-b378-d60dc24ac206 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.481 253542 DEBUG oslo_concurrency.lockutils [req-55a759dd-b511-4a05-ade6-6187a913afc9 req-bfcee7c5-002c-4579-b378-d60dc24ac206 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.482 253542 DEBUG nova.compute.manager [req-55a759dd-b511-4a05-ade6-6187a913afc9 req-bfcee7c5-002c-4579-b378-d60dc24ac206 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] No waiting events found dispatching network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.482 253542 WARNING nova.compute.manager [req-55a759dd-b511-4a05-ade6-6187a913afc9 req-bfcee7c5-002c-4579-b378-d60dc24ac206 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received unexpected event network-vif-plugged-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 for instance with vm_state deleted and task_state None.
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.482 253542 DEBUG nova.compute.manager [req-55a759dd-b511-4a05-ade6-6187a913afc9 req-bfcee7c5-002c-4579-b378-d60dc24ac206 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Received event network-vif-deleted-2ad9a2b7-f59d-49fd-aaa3-5253dd637f18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.486 253542 DEBUG oslo_concurrency.processutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config 9fed0304-736a-4739-9e78-a95c676d1206_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.487 253542 INFO nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Deleting local config drive /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206/disk.config because it was imported into RBD.
Nov 25 08:52:26 compute-0 kernel: tap5518ee18-fc: entered promiscuous mode
Nov 25 08:52:26 compute-0 NetworkManager[48915]: <info>  [1764060746.5569] manager: (tap5518ee18-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/463)
Nov 25 08:52:26 compute-0 systemd-udevd[371801]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:52:26 compute-0 ovn_controller[152859]: 2025-11-25T08:52:26Z|01130|binding|INFO|Claiming lport 5518ee18-fcb4-4885-8bc6-a3daba84baff for this chassis.
Nov 25 08:52:26 compute-0 ovn_controller[152859]: 2025-11-25T08:52:26Z|01131|binding|INFO|5518ee18-fcb4-4885-8bc6-a3daba84baff: Claiming fa:16:3e:aa:ad:17 10.100.0.11
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.605 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.621 253542 DEBUG oslo_concurrency.lockutils [None req-e1fa327e-2a0d-458a-8bb4-96d2ef3d6fc0 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:26 compute-0 NetworkManager[48915]: <info>  [1764060746.6244] device (tap5518ee18-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:52:26 compute-0 NetworkManager[48915]: <info>  [1764060746.6261] device (tap5518ee18-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:52:26 compute-0 systemd-machined[215790]: New machine qemu-141-instance-00000070.
Nov 25 08:52:26 compute-0 ovn_controller[152859]: 2025-11-25T08:52:26Z|01132|binding|INFO|Setting lport 5518ee18-fcb4-4885-8bc6-a3daba84baff ovn-installed in OVS
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.643 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:26 compute-0 nova_compute[253538]: 2025-11-25 08:52:26.645 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:26 compute-0 systemd[1]: Started Virtual Machine qemu-141-instance-00000070.
Nov 25 08:52:26 compute-0 ovn_controller[152859]: 2025-11-25T08:52:26Z|01133|binding|INFO|Setting lport 5518ee18-fcb4-4885-8bc6-a3daba84baff up in Southbound
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.725 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:ad:17 10.100.0.11'], port_security=['fa:16:3e:aa:ad:17 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9fed0304-736a-4739-9e78-a95c676d1206', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-78cbfb83-5eb2-43b6-8132-ed291918f722', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '5', 'neutron:security_group_ids': '3be0e54f-d1af-493d-9b04-9b0789174c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2873e9cb-ad93-4607-afe3-5e46bdf03cb7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5518ee18-fcb4-4885-8bc6-a3daba84baff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.726 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5518ee18-fcb4-4885-8bc6-a3daba84baff in datapath 78cbfb83-5eb2-43b6-8132-ed291918f722 bound to our chassis
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.728 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 78cbfb83-5eb2-43b6-8132-ed291918f722
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.742 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[99d6a78d-0a4d-4466-857a-ccd074d78ca7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.744 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap78cbfb83-51 in ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.746 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap78cbfb83-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.747 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe6781c-ddbd-4f29-80fc-d1f69f7ed005]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.748 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ed662358-bc2f-40ac-adf0-fa64f3b94e1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.760 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[2fcb8776-5a30-422d-b9b7-670edbceb8c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.776 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[01d65654-8ec5-4e78-ba28-5065d7603499]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.806 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6a014f26-28b3-41fe-b58d-435a567ea851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.813 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[32e81b52-3ed2-4ff7-b4ff-463666348b6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:26 compute-0 NetworkManager[48915]: <info>  [1764060746.8143] manager: (tap78cbfb83-50): new Veth device (/org/freedesktop/NetworkManager/Devices/464)
Nov 25 08:52:26 compute-0 systemd-udevd[371805]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.844 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4502674e-f9e6-40d5-8cba-9ca6ca6bf718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.848 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[61d851ab-9a0a-45d3-a7ce-c834a3663eeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:26 compute-0 ceph-mon[75015]: pgmap v2149: 321 pgs: 321 active+clean; 200 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 120 KiB/s rd, 1.3 MiB/s wr, 85 op/s
Nov 25 08:52:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/157297453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:52:26 compute-0 NetworkManager[48915]: <info>  [1764060746.8725] device (tap78cbfb83-50): carrier: link connected
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.879 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[33a731ef-7a7a-47f2-90ae-0ddaab991afd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.901 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56c477e8-6f7b-4e67-9314-fe26ba18786e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap78cbfb83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:20:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 333], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610147, 'reachable_time': 34145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371837, 'error': None, 'target': 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.917 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5e73f3e1-00a0-4348-8955-293d718ed88f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:20be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 610147, 'tstamp': 610147}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371838, 'error': None, 'target': 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.934 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c0651a5b-c104-4a14-9063-38bd240d8918]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap78cbfb83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:20:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 333], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610147, 'reachable_time': 34145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371839, 'error': None, 'target': 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:26.970 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2a0af7-ff30-4055-a222-6da3e45aa235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.055 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c06782c7-18e3-48d6-986a-e99ff4f66a29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.056 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78cbfb83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.056 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.057 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78cbfb83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:27 compute-0 NetworkManager[48915]: <info>  [1764060747.0601] manager: (tap78cbfb83-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/465)
Nov 25 08:52:27 compute-0 kernel: tap78cbfb83-50: entered promiscuous mode
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.059 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.064 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap78cbfb83-50, col_values=(('external_ids', {'iface-id': '7a0c677f-94d5-4688-b88c-93d2fa378198'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:27 compute-0 ovn_controller[152859]: 2025-11-25T08:52:27Z|01134|binding|INFO|Releasing lport 7a0c677f-94d5-4688-b88c-93d2fa378198 from this chassis (sb_readonly=0)
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.097 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.098 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/78cbfb83-5eb2-43b6-8132-ed291918f722.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/78cbfb83-5eb2-43b6-8132-ed291918f722.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.100 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[82cd838a-056d-4438-8f41-36bf05690731]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.100 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-78cbfb83-5eb2-43b6-8132-ed291918f722
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/78cbfb83-5eb2-43b6-8132-ed291918f722.pid.haproxy
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 78cbfb83-5eb2-43b6-8132-ed291918f722
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:52:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:27.102 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'env', 'PROCESS_TAG=haproxy-78cbfb83-5eb2-43b6-8132-ed291918f722', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/78cbfb83-5eb2-43b6-8132-ed291918f722.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.270 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 9fed0304-736a-4739-9e78-a95c676d1206 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.270 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060747.2697103, 9fed0304-736a-4739-9e78-a95c676d1206 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.271 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] VM Resumed (Lifecycle Event)
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.272 253542 DEBUG nova.compute.manager [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.273 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.277 253542 INFO nova.virt.libvirt.driver [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance spawned successfully.
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.278 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.293 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.299 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.303 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.303 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.304 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.304 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.304 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.305 253542 DEBUG nova.virt.libvirt.driver [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.332 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.332 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060747.2707255, 9fed0304-736a-4739-9e78-a95c676d1206 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.333 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] VM Started (Lifecycle Event)
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.347 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.350 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.381 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.405 253542 DEBUG nova.compute.manager [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.429 253542 DEBUG nova.compute.manager [req-423a4341-3a70-4aad-8ab1-76f1b3221bc7 req-7ad78eda-9780-43b0-94ab-177b638cf87f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.429 253542 DEBUG oslo_concurrency.lockutils [req-423a4341-3a70-4aad-8ab1-76f1b3221bc7 req-7ad78eda-9780-43b0-94ab-177b638cf87f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.430 253542 DEBUG oslo_concurrency.lockutils [req-423a4341-3a70-4aad-8ab1-76f1b3221bc7 req-7ad78eda-9780-43b0-94ab-177b638cf87f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.430 253542 DEBUG oslo_concurrency.lockutils [req-423a4341-3a70-4aad-8ab1-76f1b3221bc7 req-7ad78eda-9780-43b0-94ab-177b638cf87f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.431 253542 DEBUG nova.compute.manager [req-423a4341-3a70-4aad-8ab1-76f1b3221bc7 req-7ad78eda-9780-43b0-94ab-177b638cf87f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] No waiting events found dispatching network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.431 253542 WARNING nova.compute.manager [req-423a4341-3a70-4aad-8ab1-76f1b3221bc7 req-7ad78eda-9780-43b0-94ab-177b638cf87f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received unexpected event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff for instance with vm_state active and task_state rebuild_spawning.
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.450 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.450 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.451 253542 DEBUG nova.objects.instance [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.505 253542 DEBUG oslo_concurrency.lockutils [None req-68fe08f7-1912-4300-9a43-35c82c8b3b9a 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:52:27 compute-0 podman[371913]: 2025-11-25 08:52:27.527089918 +0000 UTC m=+0.048689534 container create 8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:52:27 compute-0 systemd[1]: Started libpod-conmon-8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3.scope.
Nov 25 08:52:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:52:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e55ccd711f80e0874846fa98c3c9a5f347c2a29b653b3ef7eacea6b4a1c5bc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:52:27 compute-0 podman[371913]: 2025-11-25 08:52:27.503389493 +0000 UTC m=+0.024989129 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:52:27 compute-0 podman[371913]: 2025-11-25 08:52:27.61454312 +0000 UTC m=+0.136142826 container init 8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:52:27 compute-0 podman[371913]: 2025-11-25 08:52:27.62123271 +0000 UTC m=+0.142832366 container start 8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:52:27 compute-0 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[371927]: [NOTICE]   (371931) : New worker (371933) forked
Nov 25 08:52:27 compute-0 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[371927]: [NOTICE]   (371931) : Loading success.
Nov 25 08:52:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 140 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.5 MiB/s wr, 99 op/s
Nov 25 08:52:27 compute-0 nova_compute[253538]: 2025-11-25 08:52:27.766 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:28 compute-0 ceph-mon[75015]: pgmap v2150: 321 pgs: 321 active+clean; 140 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.5 MiB/s wr, 99 op/s
Nov 25 08:52:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:52:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2641560818' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:52:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:52:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2641560818' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:52:29 compute-0 nova_compute[253538]: 2025-11-25 08:52:29.520 253542 DEBUG nova.compute.manager [req-7ae73501-e155-4e7c-9277-579c07c48689 req-2ddeb27b-b666-4690-9d03-a178203eef21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:29 compute-0 nova_compute[253538]: 2025-11-25 08:52:29.521 253542 DEBUG oslo_concurrency.lockutils [req-7ae73501-e155-4e7c-9277-579c07c48689 req-2ddeb27b-b666-4690-9d03-a178203eef21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:29 compute-0 nova_compute[253538]: 2025-11-25 08:52:29.522 253542 DEBUG oslo_concurrency.lockutils [req-7ae73501-e155-4e7c-9277-579c07c48689 req-2ddeb27b-b666-4690-9d03-a178203eef21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:29 compute-0 nova_compute[253538]: 2025-11-25 08:52:29.523 253542 DEBUG oslo_concurrency.lockutils [req-7ae73501-e155-4e7c-9277-579c07c48689 req-2ddeb27b-b666-4690-9d03-a178203eef21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:29 compute-0 nova_compute[253538]: 2025-11-25 08:52:29.523 253542 DEBUG nova.compute.manager [req-7ae73501-e155-4e7c-9277-579c07c48689 req-2ddeb27b-b666-4690-9d03-a178203eef21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] No waiting events found dispatching network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:52:29 compute-0 nova_compute[253538]: 2025-11-25 08:52:29.524 253542 WARNING nova.compute.manager [req-7ae73501-e155-4e7c-9277-579c07c48689 req-2ddeb27b-b666-4690-9d03-a178203eef21 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received unexpected event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff for instance with vm_state active and task_state None.
Nov 25 08:52:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 134 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 158 op/s
Nov 25 08:52:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2641560818' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:52:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2641560818' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:52:30 compute-0 nova_compute[253538]: 2025-11-25 08:52:30.472 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060735.4713962, 47279d1c-3634-4ea6-a752-99950cd5ce6c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:52:30 compute-0 nova_compute[253538]: 2025-11-25 08:52:30.473 253542 INFO nova.compute.manager [-] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] VM Stopped (Lifecycle Event)
Nov 25 08:52:30 compute-0 nova_compute[253538]: 2025-11-25 08:52:30.494 253542 DEBUG nova.compute.manager [None req-62faa6f0-fb7e-4ad2-ab29-84510384c51c - - - - - -] [instance: 47279d1c-3634-4ea6-a752-99950cd5ce6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:52:30 compute-0 ovn_controller[152859]: 2025-11-25T08:52:30Z|01135|binding|INFO|Releasing lport 7a0c677f-94d5-4688-b88c-93d2fa378198 from this chassis (sb_readonly=0)
Nov 25 08:52:30 compute-0 nova_compute[253538]: 2025-11-25 08:52:30.505 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:30 compute-0 nova_compute[253538]: 2025-11-25 08:52:30.552 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:30 compute-0 ceph-mon[75015]: pgmap v2151: 321 pgs: 321 active+clean; 134 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 158 op/s
Nov 25 08:52:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 134 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Nov 25 08:52:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:52:32 compute-0 nova_compute[253538]: 2025-11-25 08:52:32.769 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:32 compute-0 ceph-mon[75015]: pgmap v2152: 321 pgs: 321 active+clean; 134 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Nov 25 08:52:33 compute-0 nova_compute[253538]: 2025-11-25 08:52:33.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:52:33 compute-0 nova_compute[253538]: 2025-11-25 08:52:33.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 134 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Nov 25 08:52:34 compute-0 podman[371942]: 2025-11-25 08:52:34.797169732 +0000 UTC m=+0.048462168 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 25 08:52:34 compute-0 ceph-mon[75015]: pgmap v2153: 321 pgs: 321 active+clean; 134 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Nov 25 08:52:35 compute-0 nova_compute[253538]: 2025-11-25 08:52:35.508 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 134 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Nov 25 08:52:36 compute-0 nova_compute[253538]: 2025-11-25 08:52:36.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:52:36 compute-0 nova_compute[253538]: 2025-11-25 08:52:36.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:52:36 compute-0 nova_compute[253538]: 2025-11-25 08:52:36.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:52:36 compute-0 podman[371960]: 2025-11-25 08:52:36.823247744 +0000 UTC m=+0.075577425 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 08:52:36 compute-0 ceph-mon[75015]: pgmap v2154: 321 pgs: 321 active+clean; 134 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Nov 25 08:52:36 compute-0 nova_compute[253538]: 2025-11-25 08:52:36.961 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:52:36 compute-0 nova_compute[253538]: 2025-11-25 08:52:36.962 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:52:36 compute-0 nova_compute[253538]: 2025-11-25 08:52:36.962 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:52:36 compute-0 nova_compute[253538]: 2025-11-25 08:52:36.962 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:52:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:52:37 compute-0 nova_compute[253538]: 2025-11-25 08:52:37.772 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 134 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 602 KiB/s wr, 122 op/s
Nov 25 08:52:38 compute-0 ceph-mon[75015]: pgmap v2155: 321 pgs: 321 active+clean; 134 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 602 KiB/s wr, 122 op/s
Nov 25 08:52:39 compute-0 nova_compute[253538]: 2025-11-25 08:52:39.129 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060744.127684, e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:52:39 compute-0 nova_compute[253538]: 2025-11-25 08:52:39.129 253542 INFO nova.compute.manager [-] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] VM Stopped (Lifecycle Event)
Nov 25 08:52:39 compute-0 nova_compute[253538]: 2025-11-25 08:52:39.155 253542 DEBUG nova.compute.manager [None req-8908c9dc-1128-48af-bdf8-6ebab6cd4371 - - - - - -] [instance: e7bccd38-8444-4a5d-8a51-a99fa2e7d4fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:52:39 compute-0 ovn_controller[152859]: 2025-11-25T08:52:39Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:ad:17 10.100.0.11
Nov 25 08:52:39 compute-0 ovn_controller[152859]: 2025-11-25T08:52:39Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:ad:17 10.100.0.11
Nov 25 08:52:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 142 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 112 op/s
Nov 25 08:52:40 compute-0 nova_compute[253538]: 2025-11-25 08:52:40.510 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:40 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 25 08:52:41 compute-0 ceph-mon[75015]: pgmap v2156: 321 pgs: 321 active+clean; 142 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 112 op/s
Nov 25 08:52:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:41.076 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:41.077 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:41.077 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:41 compute-0 nova_compute[253538]: 2025-11-25 08:52:41.174 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updating instance_info_cache with network_info: [{"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:52:41 compute-0 nova_compute[253538]: 2025-11-25 08:52:41.191 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:52:41 compute-0 nova_compute[253538]: 2025-11-25 08:52:41.191 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:52:41 compute-0 nova_compute[253538]: 2025-11-25 08:52:41.192 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:52:41 compute-0 nova_compute[253538]: 2025-11-25 08:52:41.192 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:52:41 compute-0 nova_compute[253538]: 2025-11-25 08:52:41.193 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:52:41 compute-0 nova_compute[253538]: 2025-11-25 08:52:41.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:52:41 compute-0 nova_compute[253538]: 2025-11-25 08:52:41.573 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:52:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 152 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 459 KiB/s rd, 1.2 MiB/s wr, 48 op/s
Nov 25 08:52:42 compute-0 nova_compute[253538]: 2025-11-25 08:52:42.350 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:42.350 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:52:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:42.352 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:52:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:52:42 compute-0 nova_compute[253538]: 2025-11-25 08:52:42.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:52:42 compute-0 nova_compute[253538]: 2025-11-25 08:52:42.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:52:42 compute-0 nova_compute[253538]: 2025-11-25 08:52:42.589 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:42 compute-0 nova_compute[253538]: 2025-11-25 08:52:42.589 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:42 compute-0 nova_compute[253538]: 2025-11-25 08:52:42.589 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:42 compute-0 nova_compute[253538]: 2025-11-25 08:52:42.590 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:52:42 compute-0 nova_compute[253538]: 2025-11-25 08:52:42.590 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:52:42 compute-0 nova_compute[253538]: 2025-11-25 08:52:42.775 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:42 compute-0 podman[372001]: 2025-11-25 08:52:42.866709136 +0000 UTC m=+0.111359074 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:52:42 compute-0 nova_compute[253538]: 2025-11-25 08:52:42.949 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:43 compute-0 ceph-mon[75015]: pgmap v2157: 321 pgs: 321 active+clean; 152 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 459 KiB/s rd, 1.2 MiB/s wr, 48 op/s
Nov 25 08:52:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:52:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2903804474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:52:43 compute-0 nova_compute[253538]: 2025-11-25 08:52:43.053 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:52:43 compute-0 nova_compute[253538]: 2025-11-25 08:52:43.138 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:52:43 compute-0 nova_compute[253538]: 2025-11-25 08:52:43.139 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:52:43 compute-0 nova_compute[253538]: 2025-11-25 08:52:43.340 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:52:43 compute-0 nova_compute[253538]: 2025-11-25 08:52:43.341 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3647MB free_disk=59.95318603515625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:52:43 compute-0 nova_compute[253538]: 2025-11-25 08:52:43.342 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:43 compute-0 nova_compute[253538]: 2025-11-25 08:52:43.342 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:43 compute-0 nova_compute[253538]: 2025-11-25 08:52:43.444 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 9fed0304-736a-4739-9e78-a95c676d1206 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:52:43 compute-0 nova_compute[253538]: 2025-11-25 08:52:43.444 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:52:43 compute-0 nova_compute[253538]: 2025-11-25 08:52:43.444 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:52:43 compute-0 nova_compute[253538]: 2025-11-25 08:52:43.461 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 08:52:43 compute-0 nova_compute[253538]: 2025-11-25 08:52:43.481 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 08:52:43 compute-0 nova_compute[253538]: 2025-11-25 08:52:43.481 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 08:52:43 compute-0 nova_compute[253538]: 2025-11-25 08:52:43.495 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 08:52:43 compute-0 nova_compute[253538]: 2025-11-25 08:52:43.518 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 08:52:43 compute-0 nova_compute[253538]: 2025-11-25 08:52:43.573 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:52:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 161 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Nov 25 08:52:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2903804474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:52:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:52:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/115009720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:52:44 compute-0 nova_compute[253538]: 2025-11-25 08:52:44.054 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:52:44 compute-0 nova_compute[253538]: 2025-11-25 08:52:44.060 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:52:44 compute-0 nova_compute[253538]: 2025-11-25 08:52:44.077 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:52:44 compute-0 nova_compute[253538]: 2025-11-25 08:52:44.250 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:52:44 compute-0 nova_compute[253538]: 2025-11-25 08:52:44.250 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:45 compute-0 ceph-mon[75015]: pgmap v2158: 321 pgs: 321 active+clean; 161 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Nov 25 08:52:45 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/115009720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:52:45 compute-0 nova_compute[253538]: 2025-11-25 08:52:45.251 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:52:45 compute-0 nova_compute[253538]: 2025-11-25 08:52:45.511 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 08:52:45 compute-0 nova_compute[253538]: 2025-11-25 08:52:45.896 253542 INFO nova.compute.manager [None req-7b54f463-9fdc-48ff-975e-b0b6c2a0caff 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Get console output
Nov 25 08:52:45 compute-0 nova_compute[253538]: 2025-11-25 08:52:45.907 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:52:46 compute-0 sshd-session[372053]: Invalid user taiga from 45.202.211.6 port 60690
Nov 25 08:52:46 compute-0 sshd-session[372053]: Received disconnect from 45.202.211.6 port 60690:11: Bye Bye [preauth]
Nov 25 08:52:46 compute-0 sshd-session[372053]: Disconnected from invalid user taiga 45.202.211.6 port 60690 [preauth]
Nov 25 08:52:47 compute-0 ceph-mon[75015]: pgmap v2159: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.403 253542 DEBUG nova.compute.manager [req-6e2fa655-5e5e-4ff3-9ff7-3871a7648573 req-8e44571b-3960-49ca-9bda-43c52b1dffd6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-changed-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.405 253542 DEBUG nova.compute.manager [req-6e2fa655-5e5e-4ff3-9ff7-3871a7648573 req-8e44571b-3960-49ca-9bda-43c52b1dffd6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Refreshing instance network info cache due to event network-changed-5518ee18-fcb4-4885-8bc6-a3daba84baff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.405 253542 DEBUG oslo_concurrency.lockutils [req-6e2fa655-5e5e-4ff3-9ff7-3871a7648573 req-8e44571b-3960-49ca-9bda-43c52b1dffd6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.405 253542 DEBUG oslo_concurrency.lockutils [req-6e2fa655-5e5e-4ff3-9ff7-3871a7648573 req-8e44571b-3960-49ca-9bda-43c52b1dffd6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.406 253542 DEBUG nova.network.neutron [req-6e2fa655-5e5e-4ff3-9ff7-3871a7648573 req-8e44571b-3960-49ca-9bda-43c52b1dffd6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Refreshing network info cache for port 5518ee18-fcb4-4885-8bc6-a3daba84baff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.467 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.468 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.468 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.468 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.469 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.470 253542 INFO nova.compute.manager [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Terminating instance
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.471 253542 DEBUG nova.compute.manager [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:52:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:52:47 compute-0 kernel: tap5518ee18-fc (unregistering): left promiscuous mode
Nov 25 08:52:47 compute-0 NetworkManager[48915]: <info>  [1764060767.5283] device (tap5518ee18-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:52:47 compute-0 ovn_controller[152859]: 2025-11-25T08:52:47Z|01136|binding|INFO|Releasing lport 5518ee18-fcb4-4885-8bc6-a3daba84baff from this chassis (sb_readonly=0)
Nov 25 08:52:47 compute-0 ovn_controller[152859]: 2025-11-25T08:52:47Z|01137|binding|INFO|Setting lport 5518ee18-fcb4-4885-8bc6-a3daba84baff down in Southbound
Nov 25 08:52:47 compute-0 ovn_controller[152859]: 2025-11-25T08:52:47Z|01138|binding|INFO|Removing iface tap5518ee18-fc ovn-installed in OVS
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.534 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.540 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:ad:17 10.100.0.11'], port_security=['fa:16:3e:aa:ad:17 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9fed0304-736a-4739-9e78-a95c676d1206', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-78cbfb83-5eb2-43b6-8132-ed291918f722', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3be0e54f-d1af-493d-9b04-9b0789174c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2873e9cb-ad93-4607-afe3-5e46bdf03cb7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=5518ee18-fcb4-4885-8bc6-a3daba84baff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:52:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.541 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 5518ee18-fcb4-4885-8bc6-a3daba84baff in datapath 78cbfb83-5eb2-43b6-8132-ed291918f722 unbound from our chassis
Nov 25 08:52:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.542 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 78cbfb83-5eb2-43b6-8132-ed291918f722, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:52:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.544 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8279534f-d0e6-41f9-9b01-ea9960a4fc83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.545 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 namespace which is not needed anymore
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.553 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:47 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000070.scope: Deactivated successfully.
Nov 25 08:52:47 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000070.scope: Consumed 14.131s CPU time.
Nov 25 08:52:47 compute-0 systemd-machined[215790]: Machine qemu-141-instance-00000070 terminated.
Nov 25 08:52:47 compute-0 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[371927]: [NOTICE]   (371931) : haproxy version is 2.8.14-c23fe91
Nov 25 08:52:47 compute-0 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[371927]: [NOTICE]   (371931) : path to executable is /usr/sbin/haproxy
Nov 25 08:52:47 compute-0 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[371927]: [WARNING]  (371931) : Exiting Master process...
Nov 25 08:52:47 compute-0 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[371927]: [ALERT]    (371931) : Current worker (371933) exited with code 143 (Terminated)
Nov 25 08:52:47 compute-0 neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722[371927]: [WARNING]  (371931) : All workers exited. Exiting... (0)
Nov 25 08:52:47 compute-0 systemd[1]: libpod-8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3.scope: Deactivated successfully.
Nov 25 08:52:47 compute-0 conmon[371927]: conmon 8a36c5bc963ac3ed122e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3.scope/container/memory.events
Nov 25 08:52:47 compute-0 podman[372077]: 2025-11-25 08:52:47.692385571 +0000 UTC m=+0.052097905 container died 8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.712 253542 INFO nova.virt.libvirt.driver [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Instance destroyed successfully.
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.712 253542 DEBUG nova.objects.instance [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'resources' on Instance uuid 9fed0304-736a-4739-9e78-a95c676d1206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:52:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3-userdata-shm.mount: Deactivated successfully.
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.728 253542 DEBUG nova.virt.libvirt.vif [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T08:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-327371372',display_name='tempest-TestNetworkAdvancedServerOps-server-327371372',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-327371372',id=112,image_ref='64385127-d622-49bb-be38-b33beb2692d1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMmeM8p5EJ/GlX5lcj2KCLf3TZ7G2ER5A8cVNuOIrPFCSxO8qm+HkDs1IeLM92Y+nFqkA7zCF0FOinPZWhiCPngd1CuHc0FrMJjIXIIvisspammiOznjpk4rLsu9K/sxg==',key_name='tempest-TestNetworkAdvancedServerOps-2049221666',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:52:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-2o0w2mgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='64385127-d622-49bb-be38-b33beb2692d1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:52:27Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=9fed0304-736a-4739-9e78-a95c676d1206,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.729 253542 DEBUG nova.network.os_vif_util [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.730 253542 DEBUG nova.network.os_vif_util [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.731 253542 DEBUG os_vif [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:52:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7e55ccd711f80e0874846fa98c3c9a5f347c2a29b653b3ef7eacea6b4a1c5bc-merged.mount: Deactivated successfully.
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.733 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.734 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5518ee18-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.736 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.737 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.739 253542 INFO os_vif [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:ad:17,bridge_name='br-int',has_traffic_filtering=True,id=5518ee18-fcb4-4885-8bc6-a3daba84baff,network=Network(78cbfb83-5eb2-43b6-8132-ed291918f722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5518ee18-fc')
Nov 25 08:52:47 compute-0 podman[372077]: 2025-11-25 08:52:47.745243567 +0000 UTC m=+0.104955911 container cleanup 8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:52:47 compute-0 systemd[1]: libpod-conmon-8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3.scope: Deactivated successfully.
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.776 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 08:52:47 compute-0 podman[372131]: 2025-11-25 08:52:47.822666211 +0000 UTC m=+0.044542954 container remove 8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 08:52:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.830 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c5dcb056-51e6-4800-be94-9657e739ffe5]: (4, ('Tue Nov 25 08:52:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 (8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3)\n8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3\nTue Nov 25 08:52:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 (8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3)\n8a36c5bc963ac3ed122ee2b8648c4534c0b5f0f9e8e5d564f8a25822a95499d3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.832 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[76777055-b6dd-491f-a149-6c24170c5166]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.832 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78cbfb83-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:47 compute-0 kernel: tap78cbfb83-50: left promiscuous mode
Nov 25 08:52:47 compute-0 nova_compute[253538]: 2025-11-25 08:52:47.848 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.850 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[25460703-45eb-4af1-95de-363286988d5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.866 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e83458da-9fea-4aca-8da4-e0d26474f559]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.868 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff955e5-f5e9-4049-a4db-b71187d583cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.885 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[864e3d29-2318-4215-b84b-aa480201884c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610139, 'reachable_time': 16993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372149, 'error': None, 'target': 'ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.888 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-78cbfb83-5eb2-43b6-8132-ed291918f722 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:52:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:47.888 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d31f5c-63bd-4f89-8add-2ecccdc79b2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:52:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d78cbfb83\x2d5eb2\x2d43b6\x2d8132\x2ded291918f722.mount: Deactivated successfully.
Nov 25 08:52:48 compute-0 nova_compute[253538]: 2025-11-25 08:52:48.352 253542 INFO nova.virt.libvirt.driver [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Deleting instance files /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206_del
Nov 25 08:52:48 compute-0 nova_compute[253538]: 2025-11-25 08:52:48.353 253542 INFO nova.virt.libvirt.driver [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Deletion of /var/lib/nova/instances/9fed0304-736a-4739-9e78-a95c676d1206_del complete
Nov 25 08:52:48 compute-0 nova_compute[253538]: 2025-11-25 08:52:48.436 253542 INFO nova.compute.manager [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Took 0.96 seconds to destroy the instance on the hypervisor.
Nov 25 08:52:48 compute-0 nova_compute[253538]: 2025-11-25 08:52:48.437 253542 DEBUG oslo.service.loopingcall [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:52:48 compute-0 nova_compute[253538]: 2025-11-25 08:52:48.437 253542 DEBUG nova.compute.manager [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:52:48 compute-0 nova_compute[253538]: 2025-11-25 08:52:48.438 253542 DEBUG nova.network.neutron [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:52:49 compute-0 ceph-mon[75015]: pgmap v2160: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 08:52:49 compute-0 nova_compute[253538]: 2025-11-25 08:52:49.286 253542 DEBUG nova.compute.manager [req-5c63fd74-1fd9-4a2f-95ed-8c1d1b6e7415 req-54ffc602-2743-4305-aad3-8e5f6732b9c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-unplugged-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:49 compute-0 nova_compute[253538]: 2025-11-25 08:52:49.286 253542 DEBUG oslo_concurrency.lockutils [req-5c63fd74-1fd9-4a2f-95ed-8c1d1b6e7415 req-54ffc602-2743-4305-aad3-8e5f6732b9c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:49 compute-0 nova_compute[253538]: 2025-11-25 08:52:49.287 253542 DEBUG oslo_concurrency.lockutils [req-5c63fd74-1fd9-4a2f-95ed-8c1d1b6e7415 req-54ffc602-2743-4305-aad3-8e5f6732b9c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:49 compute-0 nova_compute[253538]: 2025-11-25 08:52:49.287 253542 DEBUG oslo_concurrency.lockutils [req-5c63fd74-1fd9-4a2f-95ed-8c1d1b6e7415 req-54ffc602-2743-4305-aad3-8e5f6732b9c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:49 compute-0 nova_compute[253538]: 2025-11-25 08:52:49.287 253542 DEBUG nova.compute.manager [req-5c63fd74-1fd9-4a2f-95ed-8c1d1b6e7415 req-54ffc602-2743-4305-aad3-8e5f6732b9c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] No waiting events found dispatching network-vif-unplugged-5518ee18-fcb4-4885-8bc6-a3daba84baff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:52:49 compute-0 nova_compute[253538]: 2025-11-25 08:52:49.287 253542 DEBUG nova.compute.manager [req-5c63fd74-1fd9-4a2f-95ed-8c1d1b6e7415 req-54ffc602-2743-4305-aad3-8e5f6732b9c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-unplugged-5518ee18-fcb4-4885-8bc6-a3daba84baff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:52:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 105 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 25 08:52:49 compute-0 nova_compute[253538]: 2025-11-25 08:52:49.843 253542 DEBUG nova.network.neutron [req-6e2fa655-5e5e-4ff3-9ff7-3871a7648573 req-8e44571b-3960-49ca-9bda-43c52b1dffd6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updated VIF entry in instance network info cache for port 5518ee18-fcb4-4885-8bc6-a3daba84baff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:52:49 compute-0 nova_compute[253538]: 2025-11-25 08:52:49.844 253542 DEBUG nova.network.neutron [req-6e2fa655-5e5e-4ff3-9ff7-3871a7648573 req-8e44571b-3960-49ca-9bda-43c52b1dffd6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updating instance_info_cache with network_info: [{"id": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "address": "fa:16:3e:aa:ad:17", "network": {"id": "78cbfb83-5eb2-43b6-8132-ed291918f722", "bridge": "br-int", "label": "tempest-network-smoke--234361531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5518ee18-fc", "ovs_interfaceid": "5518ee18-fcb4-4885-8bc6-a3daba84baff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:52:49 compute-0 nova_compute[253538]: 2025-11-25 08:52:49.861 253542 DEBUG oslo_concurrency.lockutils [req-6e2fa655-5e5e-4ff3-9ff7-3871a7648573 req-8e44571b-3960-49ca-9bda-43c52b1dffd6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9fed0304-736a-4739-9e78-a95c676d1206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:52:49 compute-0 nova_compute[253538]: 2025-11-25 08:52:49.982 253542 DEBUG nova.network.neutron [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:52:50 compute-0 nova_compute[253538]: 2025-11-25 08:52:50.000 253542 INFO nova.compute.manager [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Took 1.56 seconds to deallocate network for instance.
Nov 25 08:52:50 compute-0 nova_compute[253538]: 2025-11-25 08:52:50.052 253542 DEBUG nova.compute.manager [req-c173ee66-ee08-4cf8-b67a-b963ff29fd61 req-4605baa8-cf88-4195-9439-8a679222c72a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-deleted-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:50 compute-0 nova_compute[253538]: 2025-11-25 08:52:50.055 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:50 compute-0 nova_compute[253538]: 2025-11-25 08:52:50.055 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:50 compute-0 nova_compute[253538]: 2025-11-25 08:52:50.148 253542 DEBUG oslo_concurrency.processutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:52:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:52:50.355 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:52:50 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2006487482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:52:50 compute-0 nova_compute[253538]: 2025-11-25 08:52:50.630 253542 DEBUG oslo_concurrency.processutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:52:50 compute-0 nova_compute[253538]: 2025-11-25 08:52:50.642 253542 DEBUG nova.compute.provider_tree [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:52:50 compute-0 nova_compute[253538]: 2025-11-25 08:52:50.661 253542 DEBUG nova.scheduler.client.report [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:52:50 compute-0 nova_compute[253538]: 2025-11-25 08:52:50.690 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:50 compute-0 nova_compute[253538]: 2025-11-25 08:52:50.726 253542 INFO nova.scheduler.client.report [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Deleted allocations for instance 9fed0304-736a-4739-9e78-a95c676d1206
Nov 25 08:52:50 compute-0 nova_compute[253538]: 2025-11-25 08:52:50.829 253542 DEBUG oslo_concurrency.lockutils [None req-d74486d6-1794-4aab-af7b-ae08cd7a99b5 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:51 compute-0 ceph-mon[75015]: pgmap v2161: 321 pgs: 321 active+clean; 105 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 25 08:52:51 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2006487482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:52:51 compute-0 nova_compute[253538]: 2025-11-25 08:52:51.417 253542 DEBUG nova.compute.manager [req-9b4c5bb6-c031-479e-b93a-e29edb405b35 req-4d2a9390-be4b-44cc-8344-20c64a871315 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:51 compute-0 nova_compute[253538]: 2025-11-25 08:52:51.417 253542 DEBUG oslo_concurrency.lockutils [req-9b4c5bb6-c031-479e-b93a-e29edb405b35 req-4d2a9390-be4b-44cc-8344-20c64a871315 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9fed0304-736a-4739-9e78-a95c676d1206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:51 compute-0 nova_compute[253538]: 2025-11-25 08:52:51.418 253542 DEBUG oslo_concurrency.lockutils [req-9b4c5bb6-c031-479e-b93a-e29edb405b35 req-4d2a9390-be4b-44cc-8344-20c64a871315 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:51 compute-0 nova_compute[253538]: 2025-11-25 08:52:51.418 253542 DEBUG oslo_concurrency.lockutils [req-9b4c5bb6-c031-479e-b93a-e29edb405b35 req-4d2a9390-be4b-44cc-8344-20c64a871315 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9fed0304-736a-4739-9e78-a95c676d1206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:51 compute-0 nova_compute[253538]: 2025-11-25 08:52:51.419 253542 DEBUG nova.compute.manager [req-9b4c5bb6-c031-479e-b93a-e29edb405b35 req-4d2a9390-be4b-44cc-8344-20c64a871315 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] No waiting events found dispatching network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:52:51 compute-0 nova_compute[253538]: 2025-11-25 08:52:51.419 253542 WARNING nova.compute.manager [req-9b4c5bb6-c031-479e-b93a-e29edb405b35 req-4d2a9390-be4b-44cc-8344-20c64a871315 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Received unexpected event network-vif-plugged-5518ee18-fcb4-4885-8bc6-a3daba84baff for instance with vm_state deleted and task_state None.
Nov 25 08:52:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 88 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 246 KiB/s rd, 1.4 MiB/s wr, 68 op/s
Nov 25 08:52:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:52:52 compute-0 nova_compute[253538]: 2025-11-25 08:52:52.735 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:52 compute-0 nova_compute[253538]: 2025-11-25 08:52:52.778 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:53 compute-0 ceph-mon[75015]: pgmap v2162: 321 pgs: 321 active+clean; 88 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 246 KiB/s rd, 1.4 MiB/s wr, 68 op/s
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:52:53
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', '.mgr']
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:52:53 compute-0 nova_compute[253538]: 2025-11-25 08:52:53.439 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:53 compute-0 nova_compute[253538]: 2025-11-25 08:52:53.439 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:52:53 compute-0 nova_compute[253538]: 2025-11-25 08:52:53.454 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:52:53 compute-0 nova_compute[253538]: 2025-11-25 08:52:53.540 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:53 compute-0 nova_compute[253538]: 2025-11-25 08:52:53.540 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:53 compute-0 nova_compute[253538]: 2025-11-25 08:52:53.547 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:52:53 compute-0 nova_compute[253538]: 2025-11-25 08:52:53.548 253542 INFO nova.compute.claims [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:52:53 compute-0 nova_compute[253538]: 2025-11-25 08:52:53.663 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 88 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 202 KiB/s rd, 941 KiB/s wr, 55 op/s
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:52:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:52:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:52:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1354322793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.213 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.221 253542 DEBUG nova.compute.provider_tree [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.235 253542 DEBUG nova.scheduler.client.report [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.259 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.260 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.308 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.309 253542 DEBUG nova.network.neutron [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.346 253542 INFO nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.370 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.462 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.464 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.464 253542 INFO nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Creating image(s)
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.485 253542 DEBUG nova.storage.rbd_utils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 76611b0b-db06-4903-a22a-59b23a1e0d48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.507 253542 DEBUG nova.storage.rbd_utils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 76611b0b-db06-4903-a22a-59b23a1e0d48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.529 253542 DEBUG nova.storage.rbd_utils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 76611b0b-db06-4903-a22a-59b23a1e0d48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.533 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.633 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.635 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.636 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.637 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.670 253542 DEBUG nova.storage.rbd_utils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 76611b0b-db06-4903-a22a-59b23a1e0d48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.675 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 76611b0b-db06-4903-a22a-59b23a1e0d48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:52:54 compute-0 nova_compute[253538]: 2025-11-25 08:52:54.723 253542 DEBUG nova.policy [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:52:55 compute-0 ceph-mon[75015]: pgmap v2163: 321 pgs: 321 active+clean; 88 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 202 KiB/s rd, 941 KiB/s wr, 55 op/s
Nov 25 08:52:55 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1354322793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:52:55 compute-0 nova_compute[253538]: 2025-11-25 08:52:55.224 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 76611b0b-db06-4903-a22a-59b23a1e0d48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:52:55 compute-0 nova_compute[253538]: 2025-11-25 08:52:55.296 253542 DEBUG nova.storage.rbd_utils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 76611b0b-db06-4903-a22a-59b23a1e0d48_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:52:55 compute-0 nova_compute[253538]: 2025-11-25 08:52:55.389 253542 DEBUG nova.objects.instance [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 76611b0b-db06-4903-a22a-59b23a1e0d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:52:55 compute-0 nova_compute[253538]: 2025-11-25 08:52:55.392 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:55 compute-0 nova_compute[253538]: 2025-11-25 08:52:55.430 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:52:55 compute-0 nova_compute[253538]: 2025-11-25 08:52:55.431 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Ensure instance console log exists: /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:52:55 compute-0 nova_compute[253538]: 2025-11-25 08:52:55.432 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:55 compute-0 nova_compute[253538]: 2025-11-25 08:52:55.433 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:55 compute-0 nova_compute[253538]: 2025-11-25 08:52:55.433 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:55 compute-0 nova_compute[253538]: 2025-11-25 08:52:55.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 88 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 100 KiB/s wr, 45 op/s
Nov 25 08:52:56 compute-0 nova_compute[253538]: 2025-11-25 08:52:56.329 253542 DEBUG nova.network.neutron [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Successfully created port: 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:52:57 compute-0 ceph-mon[75015]: pgmap v2164: 321 pgs: 321 active+clean; 88 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 100 KiB/s wr, 45 op/s
Nov 25 08:52:57 compute-0 nova_compute[253538]: 2025-11-25 08:52:57.399 253542 DEBUG nova.network.neutron [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Successfully updated port: 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:52:57 compute-0 nova_compute[253538]: 2025-11-25 08:52:57.420 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:52:57 compute-0 nova_compute[253538]: 2025-11-25 08:52:57.420 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:52:57 compute-0 nova_compute[253538]: 2025-11-25 08:52:57.421 253542 DEBUG nova.network.neutron [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:52:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:52:57 compute-0 nova_compute[253538]: 2025-11-25 08:52:57.535 253542 DEBUG nova.compute.manager [req-5d5fde80-d0ee-45d4-a8da-69ed275ee82c req-2046a213-6aa3-48d1-bc8a-8928134b6e7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-changed-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:52:57 compute-0 nova_compute[253538]: 2025-11-25 08:52:57.535 253542 DEBUG nova.compute.manager [req-5d5fde80-d0ee-45d4-a8da-69ed275ee82c req-2046a213-6aa3-48d1-bc8a-8928134b6e7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing instance network info cache due to event network-changed-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:52:57 compute-0 nova_compute[253538]: 2025-11-25 08:52:57.535 253542 DEBUG oslo_concurrency.lockutils [req-5d5fde80-d0ee-45d4-a8da-69ed275ee82c req-2046a213-6aa3-48d1-bc8a-8928134b6e7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:52:57 compute-0 nova_compute[253538]: 2025-11-25 08:52:57.737 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:57 compute-0 nova_compute[253538]: 2025-11-25 08:52:57.779 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 101 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.0 MiB/s wr, 41 op/s
Nov 25 08:52:57 compute-0 nova_compute[253538]: 2025-11-25 08:52:57.996 253542 DEBUG nova.network.neutron [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.886 253542 DEBUG nova.network.neutron [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.902 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.902 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Instance network_info: |[{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.902 253542 DEBUG oslo_concurrency.lockutils [req-5d5fde80-d0ee-45d4-a8da-69ed275ee82c req-2046a213-6aa3-48d1-bc8a-8928134b6e7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.903 253542 DEBUG nova.network.neutron [req-5d5fde80-d0ee-45d4-a8da-69ed275ee82c req-2046a213-6aa3-48d1-bc8a-8928134b6e7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing network info cache for port 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.905 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Start _get_guest_xml network_info=[{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.909 253542 WARNING nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.917 253542 DEBUG nova.virt.libvirt.host [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.917 253542 DEBUG nova.virt.libvirt.host [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.921 253542 DEBUG nova.virt.libvirt.host [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.922 253542 DEBUG nova.virt.libvirt.host [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.922 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.922 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.923 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.923 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.923 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.923 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.923 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.923 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.924 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.924 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.924 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.924 253542 DEBUG nova.virt.hardware [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:52:58 compute-0 nova_compute[253538]: 2025-11-25 08:52:58.926 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:52:59 compute-0 ceph-mon[75015]: pgmap v2165: 321 pgs: 321 active+clean; 101 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.0 MiB/s wr, 41 op/s
Nov 25 08:52:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:52:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028327335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.403 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.428 253542 DEBUG nova.storage.rbd_utils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 76611b0b-db06-4903-a22a-59b23a1e0d48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.433 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:52:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 08:52:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:52:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/993957718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.881 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.884 253542 DEBUG nova.virt.libvirt.vif [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:52:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1241900398',display_name='tempest-TestNetworkBasicOps-server-1241900398',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1241900398',id=114,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQnIncww1mRwX9O2GT7bKPsyEkgA1/4oI48uojAID9Vm7PrMk+vKtiibHK4EoiYH2/wbPYy0HX20b3AH2Q3Q4yusIjxaAQq6AUEbJthvns45ZkdO1WCW3z0AmP1FYwxQ==',key_name='tempest-TestNetworkBasicOps-922693977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-1z0lb28a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:52:54Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=76611b0b-db06-4903-a22a-59b23a1e0d48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.885 253542 DEBUG nova.network.os_vif_util [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.886 253542 DEBUG nova.network.os_vif_util [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:b2:21,bridge_name='br-int',has_traffic_filtering=True,id=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb,network=Network(60f2641c-f03e-4ef3-a462-4bd54e93c59c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f1fcc3c-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.888 253542 DEBUG nova.objects.instance [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 76611b0b-db06-4903-a22a-59b23a1e0d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.907 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:52:59 compute-0 nova_compute[253538]:   <uuid>76611b0b-db06-4903-a22a-59b23a1e0d48</uuid>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   <name>instance-00000072</name>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkBasicOps-server-1241900398</nova:name>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:52:58</nova:creationTime>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:52:59 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:52:59 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:52:59 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:52:59 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:52:59 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:52:59 compute-0 nova_compute[253538]:         <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:52:59 compute-0 nova_compute[253538]:         <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:52:59 compute-0 nova_compute[253538]:         <nova:port uuid="8f1fcc3c-5f46-4272-be9b-4d5213b3aceb">
Nov 25 08:52:59 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <system>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <entry name="serial">76611b0b-db06-4903-a22a-59b23a1e0d48</entry>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <entry name="uuid">76611b0b-db06-4903-a22a-59b23a1e0d48</entry>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     </system>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   <os>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   </os>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   <features>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   </features>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/76611b0b-db06-4903-a22a-59b23a1e0d48_disk">
Nov 25 08:52:59 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       </source>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:52:59 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/76611b0b-db06-4903-a22a-59b23a1e0d48_disk.config">
Nov 25 08:52:59 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       </source>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:52:59 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:36:b2:21"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <target dev="tap8f1fcc3c-5f"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/console.log" append="off"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <video>
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     </video>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:52:59 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:52:59 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:52:59 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:52:59 compute-0 nova_compute[253538]: </domain>
Nov 25 08:52:59 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.909 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Preparing to wait for external event network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.910 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.910 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.910 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.912 253542 DEBUG nova.virt.libvirt.vif [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:52:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1241900398',display_name='tempest-TestNetworkBasicOps-server-1241900398',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1241900398',id=114,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQnIncww1mRwX9O2GT7bKPsyEkgA1/4oI48uojAID9Vm7PrMk+vKtiibHK4EoiYH2/wbPYy0HX20b3AH2Q3Q4yusIjxaAQq6AUEbJthvns45ZkdO1WCW3z0AmP1FYwxQ==',key_name='tempest-TestNetworkBasicOps-922693977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-1z0lb28a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:52:54Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=76611b0b-db06-4903-a22a-59b23a1e0d48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.912 253542 DEBUG nova.network.os_vif_util [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.913 253542 DEBUG nova.network.os_vif_util [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:b2:21,bridge_name='br-int',has_traffic_filtering=True,id=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb,network=Network(60f2641c-f03e-4ef3-a462-4bd54e93c59c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f1fcc3c-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.914 253542 DEBUG os_vif [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:b2:21,bridge_name='br-int',has_traffic_filtering=True,id=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb,network=Network(60f2641c-f03e-4ef3-a462-4bd54e93c59c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f1fcc3c-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.915 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.916 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.916 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.920 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.921 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f1fcc3c-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.921 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f1fcc3c-5f, col_values=(('external_ids', {'iface-id': '8f1fcc3c-5f46-4272-be9b-4d5213b3aceb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:b2:21', 'vm-uuid': '76611b0b-db06-4903-a22a-59b23a1e0d48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.923 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:59 compute-0 NetworkManager[48915]: <info>  [1764060779.9241] manager: (tap8f1fcc3c-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/466)
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.927 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.930 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.931 253542 INFO os_vif [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:b2:21,bridge_name='br-int',has_traffic_filtering=True,id=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb,network=Network(60f2641c-f03e-4ef3-a462-4bd54e93c59c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f1fcc3c-5f')
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.985 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.986 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.986 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:36:b2:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:52:59 compute-0 nova_compute[253538]: 2025-11-25 08:52:59.986 253542 INFO nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Using config drive
Nov 25 08:53:00 compute-0 nova_compute[253538]: 2025-11-25 08:53:00.010 253542 DEBUG nova.storage.rbd_utils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 76611b0b-db06-4903-a22a-59b23a1e0d48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:53:00 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1028327335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:53:00 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/993957718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:53:00 compute-0 nova_compute[253538]: 2025-11-25 08:53:00.391 253542 INFO nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Creating config drive at /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/disk.config
Nov 25 08:53:00 compute-0 nova_compute[253538]: 2025-11-25 08:53:00.396 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz2ubngmk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:00 compute-0 nova_compute[253538]: 2025-11-25 08:53:00.440 253542 DEBUG nova.network.neutron [req-5d5fde80-d0ee-45d4-a8da-69ed275ee82c req-2046a213-6aa3-48d1-bc8a-8928134b6e7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updated VIF entry in instance network info cache for port 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:53:00 compute-0 nova_compute[253538]: 2025-11-25 08:53:00.441 253542 DEBUG nova.network.neutron [req-5d5fde80-d0ee-45d4-a8da-69ed275ee82c req-2046a213-6aa3-48d1-bc8a-8928134b6e7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:53:00 compute-0 nova_compute[253538]: 2025-11-25 08:53:00.454 253542 DEBUG oslo_concurrency.lockutils [req-5d5fde80-d0ee-45d4-a8da-69ed275ee82c req-2046a213-6aa3-48d1-bc8a-8928134b6e7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:53:00 compute-0 nova_compute[253538]: 2025-11-25 08:53:00.547 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz2ubngmk" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:00 compute-0 nova_compute[253538]: 2025-11-25 08:53:00.573 253542 DEBUG nova.storage.rbd_utils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 76611b0b-db06-4903-a22a-59b23a1e0d48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:53:00 compute-0 nova_compute[253538]: 2025-11-25 08:53:00.576 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/disk.config 76611b0b-db06-4903-a22a-59b23a1e0d48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:00 compute-0 nova_compute[253538]: 2025-11-25 08:53:00.731 253542 DEBUG oslo_concurrency.processutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/disk.config 76611b0b-db06-4903-a22a-59b23a1e0d48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:00 compute-0 nova_compute[253538]: 2025-11-25 08:53:00.732 253542 INFO nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Deleting local config drive /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/disk.config because it was imported into RBD.
Nov 25 08:53:00 compute-0 kernel: tap8f1fcc3c-5f: entered promiscuous mode
Nov 25 08:53:00 compute-0 NetworkManager[48915]: <info>  [1764060780.7843] manager: (tap8f1fcc3c-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/467)
Nov 25 08:53:00 compute-0 nova_compute[253538]: 2025-11-25 08:53:00.783 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:00 compute-0 ovn_controller[152859]: 2025-11-25T08:53:00Z|01139|binding|INFO|Claiming lport 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb for this chassis.
Nov 25 08:53:00 compute-0 ovn_controller[152859]: 2025-11-25T08:53:00Z|01140|binding|INFO|8f1fcc3c-5f46-4272-be9b-4d5213b3aceb: Claiming fa:16:3e:36:b2:21 10.100.0.3
Nov 25 08:53:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.801 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:b2:21 10.100.0.3'], port_security=['fa:16:3e:36:b2:21 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '76611b0b-db06-4903-a22a-59b23a1e0d48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60f2641c-f03e-4ef3-a462-4bd54e93c59c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': '70c2e597-b59d-412f-a7ad-333ba7cbd35e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b027220-e81c-4ac9-90ba-6c25793cc1d8, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:53:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.803 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb in datapath 60f2641c-f03e-4ef3-a462-4bd54e93c59c bound to our chassis
Nov 25 08:53:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.803 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60f2641c-f03e-4ef3-a462-4bd54e93c59c
Nov 25 08:53:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.816 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c3daf525-1d85-4c4b-9356-ee50584a9740]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.817 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60f2641c-f1 in ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:53:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.818 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60f2641c-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:53:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.819 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[27a40688-c661-43cc-86b1-788c1092903d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:00 compute-0 systemd-machined[215790]: New machine qemu-142-instance-00000072.
Nov 25 08:53:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.820 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2efc22-664b-4a72-af4a-78fb09745acb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.832 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[8e55b737-df94-43d5-b8a8-5f382cc20421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:00 compute-0 systemd[1]: Started Virtual Machine qemu-142-instance-00000072.
Nov 25 08:53:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.856 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a8519504-a6b4-4a2f-be3e-afdb9ff76ed8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:00 compute-0 nova_compute[253538]: 2025-11-25 08:53:00.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:00 compute-0 systemd-udevd[372500]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:53:00 compute-0 ovn_controller[152859]: 2025-11-25T08:53:00Z|01141|binding|INFO|Setting lport 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb ovn-installed in OVS
Nov 25 08:53:00 compute-0 ovn_controller[152859]: 2025-11-25T08:53:00Z|01142|binding|INFO|Setting lport 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb up in Southbound
Nov 25 08:53:00 compute-0 nova_compute[253538]: 2025-11-25 08:53:00.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:00 compute-0 NetworkManager[48915]: <info>  [1764060780.8797] device (tap8f1fcc3c-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:53:00 compute-0 NetworkManager[48915]: <info>  [1764060780.8816] device (tap8f1fcc3c-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:53:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.892 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf2bac1-2ad3-46a0-9c15-551ed096ac00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.898 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[035e419c-6708-446a-9330-c956a4bda8f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:00 compute-0 NetworkManager[48915]: <info>  [1764060780.8994] manager: (tap60f2641c-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/468)
Nov 25 08:53:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.939 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9ba79c-1321-4774-811c-f89aff29046d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.943 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[df40f483-7bdd-4d9d-a2d4-292771f24054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:00 compute-0 NetworkManager[48915]: <info>  [1764060780.9723] device (tap60f2641c-f0): carrier: link connected
Nov 25 08:53:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:00.981 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[453cb564-4891-4e56-aaaf-02f83a332a86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.002 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d3197cac-04bf-45c4-ae8a-6d51b171fcee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60f2641c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:65:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613557, 'reachable_time': 35003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372530, 'error': None, 'target': 'ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.019 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59877384-4653-4bb7-8170-e5aa1898bf4b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:650e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613557, 'tstamp': 613557}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372531, 'error': None, 'target': 'ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.038 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[82e5a50a-f998-4183-b03d-d688ad6b3e24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60f2641c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:65:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613557, 'reachable_time': 35003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 372532, 'error': None, 'target': 'ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.070 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8b779864-1e64-42c3-b10a-01815a852090]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:01 compute-0 ceph-mon[75015]: pgmap v2166: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.143 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[86af22c9-475b-45ce-8dca-da3dc15cc223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.144 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60f2641c-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.145 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.145 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60f2641c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:01 compute-0 NetworkManager[48915]: <info>  [1764060781.1478] manager: (tap60f2641c-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/469)
Nov 25 08:53:01 compute-0 kernel: tap60f2641c-f0: entered promiscuous mode
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.149 253542 DEBUG nova.compute.manager [req-2fcab513-bc6d-411f-834c-02ab70773cbd req-d7f9d82d-88ba-443e-ac21-b916c952e907 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.150 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60f2641c-f0, col_values=(('external_ids', {'iface-id': 'baf4584e-8381-4bcf-9f75-0a7b69cd8212'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.150 253542 DEBUG oslo_concurrency.lockutils [req-2fcab513-bc6d-411f-834c-02ab70773cbd req-d7f9d82d-88ba-443e-ac21-b916c952e907 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.150 253542 DEBUG oslo_concurrency.lockutils [req-2fcab513-bc6d-411f-834c-02ab70773cbd req-d7f9d82d-88ba-443e-ac21-b916c952e907 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.150 253542 DEBUG oslo_concurrency.lockutils [req-2fcab513-bc6d-411f-834c-02ab70773cbd req-d7f9d82d-88ba-443e-ac21-b916c952e907 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.151 253542 DEBUG nova.compute.manager [req-2fcab513-bc6d-411f-834c-02ab70773cbd req-d7f9d82d-88ba-443e-ac21-b916c952e907 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Processing event network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.151 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:01 compute-0 ovn_controller[152859]: 2025-11-25T08:53:01Z|01143|binding|INFO|Releasing lport baf4584e-8381-4bcf-9f75-0a7b69cd8212 from this chassis (sb_readonly=0)
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.180 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.181 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60f2641c-f03e-4ef3-a462-4bd54e93c59c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60f2641c-f03e-4ef3-a462-4bd54e93c59c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.182 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef428e6-cc2d-4676-ae2d-39507fa14c38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.183 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-60f2641c-f03e-4ef3-a462-4bd54e93c59c
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/60f2641c-f03e-4ef3-a462-4bd54e93c59c.pid.haproxy
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 60f2641c-f03e-4ef3-a462-4bd54e93c59c
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:53:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:01.185 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c', 'env', 'PROCESS_TAG=haproxy-60f2641c-f03e-4ef3-a462-4bd54e93c59c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60f2641c-f03e-4ef3-a462-4bd54e93c59c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:53:01 compute-0 podman[372564]: 2025-11-25 08:53:01.597859372 +0000 UTC m=+0.052972169 container create f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 08:53:01 compute-0 systemd[1]: Started libpod-conmon-f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e.scope.
Nov 25 08:53:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:53:01 compute-0 podman[372564]: 2025-11-25 08:53:01.57013071 +0000 UTC m=+0.025243497 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:53:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3552cae6011fac03a9da2375e711d654941020090cecb0f909b8a0ba241d8ae3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:01 compute-0 podman[372564]: 2025-11-25 08:53:01.68066641 +0000 UTC m=+0.135779207 container init f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 08:53:01 compute-0 podman[372564]: 2025-11-25 08:53:01.69152157 +0000 UTC m=+0.146634337 container start f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 08:53:01 compute-0 neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c[372579]: [NOTICE]   (372609) : New worker (372623) forked
Nov 25 08:53:01 compute-0 neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c[372579]: [NOTICE]   (372609) : Loading success.
Nov 25 08:53:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.826 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.827 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060781.825934, 76611b0b-db06-4903-a22a-59b23a1e0d48 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.827 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] VM Started (Lifecycle Event)
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.830 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.834 253542 INFO nova.virt.libvirt.driver [-] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Instance spawned successfully.
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.834 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.871 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.877 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.877 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.878 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.878 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.878 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.879 253542 DEBUG nova.virt.libvirt.driver [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.884 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.914 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.914 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060781.826323, 76611b0b-db06-4903-a22a-59b23a1e0d48 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.915 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] VM Paused (Lifecycle Event)
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.930 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.935 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060781.829803, 76611b0b-db06-4903-a22a-59b23a1e0d48 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.936 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] VM Resumed (Lifecycle Event)
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.962 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.966 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:53:01 compute-0 nova_compute[253538]: 2025-11-25 08:53:01.978 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:53:02 compute-0 nova_compute[253538]: 2025-11-25 08:53:02.155 253542 INFO nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Took 7.69 seconds to spawn the instance on the hypervisor.
Nov 25 08:53:02 compute-0 nova_compute[253538]: 2025-11-25 08:53:02.156 253542 DEBUG nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:02 compute-0 nova_compute[253538]: 2025-11-25 08:53:02.231 253542 INFO nova.compute.manager [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Took 8.73 seconds to build instance.
Nov 25 08:53:02 compute-0 nova_compute[253538]: 2025-11-25 08:53:02.251 253542 DEBUG oslo_concurrency.lockutils [None req-643270b7-dd7c-4c64-a96c-a1a312013354 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:53:02 compute-0 nova_compute[253538]: 2025-11-25 08:53:02.710 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060767.7089639, 9fed0304-736a-4739-9e78-a95c676d1206 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:53:02 compute-0 nova_compute[253538]: 2025-11-25 08:53:02.711 253542 INFO nova.compute.manager [-] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] VM Stopped (Lifecycle Event)
Nov 25 08:53:02 compute-0 nova_compute[253538]: 2025-11-25 08:53:02.728 253542 DEBUG nova.compute.manager [None req-9355f7f9-45ed-434f-a7b4-c3fdfd5fd711 - - - - - -] [instance: 9fed0304-736a-4739-9e78-a95c676d1206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:02 compute-0 nova_compute[253538]: 2025-11-25 08:53:02.781 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:03 compute-0 ceph-mon[75015]: pgmap v2167: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 25 08:53:03 compute-0 nova_compute[253538]: 2025-11-25 08:53:03.239 253542 DEBUG nova.compute.manager [req-d882b52c-0b88-491a-bd61-1627293b3460 req-d02e61c2-6e13-4fa4-b9a2-502fc8db2417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:03 compute-0 nova_compute[253538]: 2025-11-25 08:53:03.239 253542 DEBUG oslo_concurrency.lockutils [req-d882b52c-0b88-491a-bd61-1627293b3460 req-d02e61c2-6e13-4fa4-b9a2-502fc8db2417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:03 compute-0 nova_compute[253538]: 2025-11-25 08:53:03.240 253542 DEBUG oslo_concurrency.lockutils [req-d882b52c-0b88-491a-bd61-1627293b3460 req-d02e61c2-6e13-4fa4-b9a2-502fc8db2417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:03 compute-0 nova_compute[253538]: 2025-11-25 08:53:03.241 253542 DEBUG oslo_concurrency.lockutils [req-d882b52c-0b88-491a-bd61-1627293b3460 req-d02e61c2-6e13-4fa4-b9a2-502fc8db2417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:03 compute-0 nova_compute[253538]: 2025-11-25 08:53:03.241 253542 DEBUG nova.compute.manager [req-d882b52c-0b88-491a-bd61-1627293b3460 req-d02e61c2-6e13-4fa4-b9a2-502fc8db2417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] No waiting events found dispatching network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:53:03 compute-0 nova_compute[253538]: 2025-11-25 08:53:03.242 253542 WARNING nova.compute.manager [req-d882b52c-0b88-491a-bd61-1627293b3460 req-d02e61c2-6e13-4fa4-b9a2-502fc8db2417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received unexpected event network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb for instance with vm_state active and task_state None.
Nov 25 08:53:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 587 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003461242226671876 of space, bias 1.0, pg target 0.10383726680015627 quantized to 32 (current 32)
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:53:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:53:04 compute-0 nova_compute[253538]: 2025-11-25 08:53:04.923 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:05 compute-0 ceph-mon[75015]: pgmap v2168: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 587 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 25 08:53:05 compute-0 nova_compute[253538]: 2025-11-25 08:53:05.567 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:05 compute-0 NetworkManager[48915]: <info>  [1764060785.5684] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/470)
Nov 25 08:53:05 compute-0 NetworkManager[48915]: <info>  [1764060785.5695] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/471)
Nov 25 08:53:05 compute-0 ovn_controller[152859]: 2025-11-25T08:53:05Z|01144|binding|INFO|Releasing lport baf4584e-8381-4bcf-9f75-0a7b69cd8212 from this chassis (sb_readonly=0)
Nov 25 08:53:05 compute-0 nova_compute[253538]: 2025-11-25 08:53:05.664 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:05 compute-0 ovn_controller[152859]: 2025-11-25T08:53:05Z|01145|binding|INFO|Releasing lport baf4584e-8381-4bcf-9f75-0a7b69cd8212 from this chassis (sb_readonly=0)
Nov 25 08:53:05 compute-0 nova_compute[253538]: 2025-11-25 08:53:05.676 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 884 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Nov 25 08:53:05 compute-0 podman[372638]: 2025-11-25 08:53:05.878294809 +0000 UTC m=+0.117746124 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 08:53:05 compute-0 nova_compute[253538]: 2025-11-25 08:53:05.877 253542 DEBUG nova.compute.manager [req-e349520d-40a5-4ae1-b20d-2dd5935c5007 req-a00f0c38-f146-418a-bbe2-a143208bd304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-changed-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:05 compute-0 nova_compute[253538]: 2025-11-25 08:53:05.877 253542 DEBUG nova.compute.manager [req-e349520d-40a5-4ae1-b20d-2dd5935c5007 req-a00f0c38-f146-418a-bbe2-a143208bd304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing instance network info cache due to event network-changed-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:53:05 compute-0 nova_compute[253538]: 2025-11-25 08:53:05.877 253542 DEBUG oslo_concurrency.lockutils [req-e349520d-40a5-4ae1-b20d-2dd5935c5007 req-a00f0c38-f146-418a-bbe2-a143208bd304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:53:05 compute-0 nova_compute[253538]: 2025-11-25 08:53:05.877 253542 DEBUG oslo_concurrency.lockutils [req-e349520d-40a5-4ae1-b20d-2dd5935c5007 req-a00f0c38-f146-418a-bbe2-a143208bd304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:53:05 compute-0 nova_compute[253538]: 2025-11-25 08:53:05.878 253542 DEBUG nova.network.neutron [req-e349520d-40a5-4ae1-b20d-2dd5935c5007 req-a00f0c38-f146-418a-bbe2-a143208bd304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing network info cache for port 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:53:07 compute-0 ceph-mon[75015]: pgmap v2169: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 884 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Nov 25 08:53:07 compute-0 nova_compute[253538]: 2025-11-25 08:53:07.232 253542 DEBUG nova.network.neutron [req-e349520d-40a5-4ae1-b20d-2dd5935c5007 req-a00f0c38-f146-418a-bbe2-a143208bd304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updated VIF entry in instance network info cache for port 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:53:07 compute-0 nova_compute[253538]: 2025-11-25 08:53:07.233 253542 DEBUG nova.network.neutron [req-e349520d-40a5-4ae1-b20d-2dd5935c5007 req-a00f0c38-f146-418a-bbe2-a143208bd304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:53:07 compute-0 nova_compute[253538]: 2025-11-25 08:53:07.402 253542 DEBUG oslo_concurrency.lockutils [req-e349520d-40a5-4ae1-b20d-2dd5935c5007 req-a00f0c38-f146-418a-bbe2-a143208bd304 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:53:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:53:07 compute-0 nova_compute[253538]: 2025-11-25 08:53:07.592 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:07 compute-0 nova_compute[253538]: 2025-11-25 08:53:07.783 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 08:53:07 compute-0 podman[372657]: 2025-11-25 08:53:07.8139716 +0000 UTC m=+0.071888147 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 08:53:09 compute-0 ceph-mon[75015]: pgmap v2170: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 08:53:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 775 KiB/s wr, 87 op/s
Nov 25 08:53:09 compute-0 nova_compute[253538]: 2025-11-25 08:53:09.925 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:10 compute-0 sudo[372677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:53:10 compute-0 sudo[372677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:10 compute-0 sudo[372677]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:10 compute-0 sudo[372702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:53:10 compute-0 sudo[372702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:10 compute-0 sudo[372702]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:10 compute-0 sudo[372727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:53:10 compute-0 sudo[372727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:10 compute-0 sudo[372727]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:10 compute-0 sudo[372752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:53:10 compute-0 sudo[372752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:11 compute-0 ceph-mon[75015]: pgmap v2171: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 775 KiB/s wr, 87 op/s
Nov 25 08:53:11 compute-0 sudo[372752]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 08:53:11 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 08:53:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:53:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:53:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:53:11 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:53:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:53:11 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:53:11 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev f1572cef-6b91-4b1a-bd6c-96cb4eff0c55 does not exist
Nov 25 08:53:11 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 8399d3c8-dd57-4d30-92fe-03501066f83a does not exist
Nov 25 08:53:11 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev ade82a42-a067-4256-9a53-eb2846d3cb9a does not exist
Nov 25 08:53:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:53:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:53:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:53:11 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:53:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:53:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:53:11 compute-0 sudo[372811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:53:11 compute-0 sudo[372811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:11 compute-0 sudo[372811]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:11 compute-0 sudo[372836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:53:11 compute-0 sudo[372836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:11 compute-0 sudo[372836]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:11 compute-0 sudo[372861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:53:11 compute-0 sudo[372861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:11 compute-0 sudo[372861]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:11 compute-0 sudo[372886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:53:11 compute-0 sudo[372886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.044 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.045 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:12 compute-0 podman[372947]: 2025-11-25 08:53:12.058095764 +0000 UTC m=+0.050523704 container create b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldwasser, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.059 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:53:12 compute-0 systemd[1]: Started libpod-conmon-b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9.scope.
Nov 25 08:53:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:53:12 compute-0 podman[372947]: 2025-11-25 08:53:12.033977098 +0000 UTC m=+0.026405018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:53:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 08:53:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:53:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:53:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:53:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:53:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:53:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:53:12 compute-0 podman[372947]: 2025-11-25 08:53:12.146482062 +0000 UTC m=+0.138909972 container init b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 08:53:12 compute-0 podman[372947]: 2025-11-25 08:53:12.157808565 +0000 UTC m=+0.150236465 container start b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 08:53:12 compute-0 sweet_goldwasser[372964]: 167 167
Nov 25 08:53:12 compute-0 podman[372947]: 2025-11-25 08:53:12.165079779 +0000 UTC m=+0.157507719 container attach b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldwasser, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:53:12 compute-0 systemd[1]: libpod-b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9.scope: Deactivated successfully.
Nov 25 08:53:12 compute-0 podman[372947]: 2025-11-25 08:53:12.184425257 +0000 UTC m=+0.176853237 container died b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldwasser, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.188 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.190 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.198 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.199 253542 INFO nova.compute.claims [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:53:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-397771b7242a7a653ee77c1099af808974bc553922fd215896b96a8843380842-merged.mount: Deactivated successfully.
Nov 25 08:53:12 compute-0 podman[372947]: 2025-11-25 08:53:12.2341769 +0000 UTC m=+0.226604800 container remove b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 08:53:12 compute-0 systemd[1]: libpod-conmon-b48e593f5cd7d66415e844e8a8120f6c793dc51cb38c29d9ca494e97823329c9.scope: Deactivated successfully.
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.330 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:12 compute-0 podman[372991]: 2025-11-25 08:53:12.427294332 +0000 UTC m=+0.051585953 container create f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 08:53:12 compute-0 systemd[1]: Started libpod-conmon-f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9.scope.
Nov 25 08:53:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:53:12 compute-0 podman[372991]: 2025-11-25 08:53:12.403615867 +0000 UTC m=+0.027907488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d26f6d7e472d03d32efcbcc832fed07109dda0b830f2b7b80f6656aa2a61ef4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d26f6d7e472d03d32efcbcc832fed07109dda0b830f2b7b80f6656aa2a61ef4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d26f6d7e472d03d32efcbcc832fed07109dda0b830f2b7b80f6656aa2a61ef4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d26f6d7e472d03d32efcbcc832fed07109dda0b830f2b7b80f6656aa2a61ef4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d26f6d7e472d03d32efcbcc832fed07109dda0b830f2b7b80f6656aa2a61ef4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:53:12 compute-0 podman[372991]: 2025-11-25 08:53:12.516904602 +0000 UTC m=+0.141196203 container init f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 08:53:12 compute-0 podman[372991]: 2025-11-25 08:53:12.525893162 +0000 UTC m=+0.150184773 container start f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:53:12 compute-0 podman[372991]: 2025-11-25 08:53:12.529126889 +0000 UTC m=+0.153418480 container attach f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.785 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:53:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2010193917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.816 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.822 253542 DEBUG nova.compute.provider_tree [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.844 253542 DEBUG nova.scheduler.client.report [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.872 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.873 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.932 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.933 253542 DEBUG nova.network.neutron [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.959 253542 INFO nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:53:12 compute-0 nova_compute[253538]: 2025-11-25 08:53:12.979 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.087 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.089 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.090 253542 INFO nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Creating image(s)
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.128 253542 DEBUG nova.storage.rbd_utils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 2da7049d-715e-4209-8c17-dda96ff6a192_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:53:13 compute-0 ceph-mon[75015]: pgmap v2172: 321 pgs: 321 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 08:53:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2010193917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.169 253542 DEBUG nova.storage.rbd_utils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 2da7049d-715e-4209-8c17-dda96ff6a192_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.201 253542 DEBUG nova.storage.rbd_utils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 2da7049d-715e-4209-8c17-dda96ff6a192_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.204 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.243 253542 DEBUG nova.policy [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '009378dc36154271ba5b4590ce67ddde', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.285 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.285 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.286 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.286 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.309 253542 DEBUG nova.storage.rbd_utils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 2da7049d-715e-4209-8c17-dda96ff6a192_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.314 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2da7049d-715e-4209-8c17-dda96ff6a192_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.645 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2da7049d-715e-4209-8c17-dda96ff6a192_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:13 compute-0 adoring_swanson[373017]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:53:13 compute-0 adoring_swanson[373017]: --> relative data size: 1.0
Nov 25 08:53:13 compute-0 adoring_swanson[373017]: --> All data devices are unavailable
Nov 25 08:53:13 compute-0 systemd[1]: libpod-f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9.scope: Deactivated successfully.
Nov 25 08:53:13 compute-0 systemd[1]: libpod-f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9.scope: Consumed 1.035s CPU time.
Nov 25 08:53:13 compute-0 conmon[373017]: conmon f58fd611877415f9a4d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9.scope/container/memory.events
Nov 25 08:53:13 compute-0 podman[372991]: 2025-11-25 08:53:13.703048568 +0000 UTC m=+1.327340189 container died f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.734 253542 DEBUG nova.storage.rbd_utils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] resizing rbd image 2da7049d-715e-4209-8c17-dda96ff6a192_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:53:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d26f6d7e472d03d32efcbcc832fed07109dda0b830f2b7b80f6656aa2a61ef4-merged.mount: Deactivated successfully.
Nov 25 08:53:13 compute-0 podman[372991]: 2025-11-25 08:53:13.775461288 +0000 UTC m=+1.399752879 container remove f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:53:13 compute-0 systemd[1]: libpod-conmon-f58fd611877415f9a4d3ad2bb288a33c5b11ccf8630fa129902c3e426d172cc9.scope: Deactivated successfully.
Nov 25 08:53:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 150 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 80 op/s
Nov 25 08:53:13 compute-0 sudo[372886]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:13 compute-0 ovn_controller[152859]: 2025-11-25T08:53:13Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:b2:21 10.100.0.3
Nov 25 08:53:13 compute-0 ovn_controller[152859]: 2025-11-25T08:53:13Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:b2:21 10.100.0.3
Nov 25 08:53:13 compute-0 podman[373185]: 2025-11-25 08:53:13.839833062 +0000 UTC m=+0.110128120 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.860 253542 DEBUG nova.objects.instance [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'migration_context' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.872 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.873 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Ensure instance console log exists: /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:53:13 compute-0 sudo[373238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.874 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.874 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:13 compute-0 nova_compute[253538]: 2025-11-25 08:53:13.875 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:13 compute-0 sudo[373238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:13 compute-0 sudo[373238]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:13 compute-0 sudo[373284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:53:13 compute-0 sudo[373284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:13 compute-0 sudo[373284]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:13 compute-0 sudo[373309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:53:13 compute-0 sudo[373309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:13 compute-0 sudo[373309]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:14 compute-0 sudo[373334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:53:14 compute-0 sudo[373334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:14 compute-0 podman[373402]: 2025-11-25 08:53:14.476604335 +0000 UTC m=+0.058924579 container create 1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 25 08:53:14 compute-0 systemd[1]: Started libpod-conmon-1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784.scope.
Nov 25 08:53:14 compute-0 podman[373402]: 2025-11-25 08:53:14.452673685 +0000 UTC m=+0.034993969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:53:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:53:14 compute-0 nova_compute[253538]: 2025-11-25 08:53:14.558 253542 DEBUG nova.network.neutron [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Successfully created port: e0eb0246-9869-4c10-b45b-bd0799ae0c95 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:53:14 compute-0 podman[373402]: 2025-11-25 08:53:14.56414124 +0000 UTC m=+0.146461564 container init 1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:53:14 compute-0 podman[373402]: 2025-11-25 08:53:14.572630898 +0000 UTC m=+0.154951142 container start 1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:53:14 compute-0 podman[373402]: 2025-11-25 08:53:14.575561676 +0000 UTC m=+0.157881950 container attach 1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:53:14 compute-0 thirsty_euclid[373419]: 167 167
Nov 25 08:53:14 compute-0 systemd[1]: libpod-1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784.scope: Deactivated successfully.
Nov 25 08:53:14 compute-0 podman[373402]: 2025-11-25 08:53:14.618077695 +0000 UTC m=+0.200397969 container died 1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:53:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b223c47ae5b7cbbca42189143355e77ef2e0e9e7b00544d74752f6719725a2c5-merged.mount: Deactivated successfully.
Nov 25 08:53:14 compute-0 podman[373402]: 2025-11-25 08:53:14.679275724 +0000 UTC m=+0.261595958 container remove 1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 08:53:14 compute-0 systemd[1]: libpod-conmon-1e2504f0c78dece38ffa09207deb7b39ffab637fa47a33b91ea926067fa0e784.scope: Deactivated successfully.
Nov 25 08:53:14 compute-0 nova_compute[253538]: 2025-11-25 08:53:14.928 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:14 compute-0 podman[373443]: 2025-11-25 08:53:14.837929332 +0000 UTC m=+0.023688885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:53:14 compute-0 podman[373443]: 2025-11-25 08:53:14.948245177 +0000 UTC m=+0.134004710 container create 162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mahavira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 25 08:53:15 compute-0 systemd[1]: Started libpod-conmon-162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29.scope.
Nov 25 08:53:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c72796ac21f34f63c04016751c25950b8d8b0faa47d3804553d9bdcf042c7f8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c72796ac21f34f63c04016751c25950b8d8b0faa47d3804553d9bdcf042c7f8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c72796ac21f34f63c04016751c25950b8d8b0faa47d3804553d9bdcf042c7f8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c72796ac21f34f63c04016751c25950b8d8b0faa47d3804553d9bdcf042c7f8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:15 compute-0 podman[373443]: 2025-11-25 08:53:15.086230332 +0000 UTC m=+0.271989865 container init 162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:53:15 compute-0 podman[373443]: 2025-11-25 08:53:15.093553339 +0000 UTC m=+0.279312882 container start 162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mahavira, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 08:53:15 compute-0 podman[373443]: 2025-11-25 08:53:15.097515545 +0000 UTC m=+0.283275118 container attach 162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:53:15 compute-0 ceph-mon[75015]: pgmap v2173: 321 pgs: 321 active+clean; 150 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 80 op/s
Nov 25 08:53:15 compute-0 nova_compute[253538]: 2025-11-25 08:53:15.536 253542 DEBUG nova.network.neutron [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Successfully updated port: e0eb0246-9869-4c10-b45b-bd0799ae0c95 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:53:15 compute-0 nova_compute[253538]: 2025-11-25 08:53:15.553 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:53:15 compute-0 nova_compute[253538]: 2025-11-25 08:53:15.554 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquired lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:53:15 compute-0 nova_compute[253538]: 2025-11-25 08:53:15.555 253542 DEBUG nova.network.neutron [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:53:15 compute-0 nova_compute[253538]: 2025-11-25 08:53:15.626 253542 DEBUG nova.compute.manager [req-d7f0cac9-e8f4-4d28-b4d7-e818e52e2d44 req-cc8a2eeb-e00e-4b75-b3c3-8fc020d2ceff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-changed-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:15 compute-0 nova_compute[253538]: 2025-11-25 08:53:15.626 253542 DEBUG nova.compute.manager [req-d7f0cac9-e8f4-4d28-b4d7-e818e52e2d44 req-cc8a2eeb-e00e-4b75-b3c3-8fc020d2ceff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Refreshing instance network info cache due to event network-changed-e0eb0246-9869-4c10-b45b-bd0799ae0c95. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:53:15 compute-0 nova_compute[253538]: 2025-11-25 08:53:15.627 253542 DEBUG oslo_concurrency.lockutils [req-d7f0cac9-e8f4-4d28-b4d7-e818e52e2d44 req-cc8a2eeb-e00e-4b75-b3c3-8fc020d2ceff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:53:15 compute-0 nova_compute[253538]: 2025-11-25 08:53:15.743 253542 DEBUG nova.network.neutron [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:53:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 183 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 88 op/s
Nov 25 08:53:15 compute-0 charming_mahavira[373460]: {
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:     "0": [
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:         {
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "devices": [
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "/dev/loop3"
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             ],
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "lv_name": "ceph_lv0",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "lv_size": "21470642176",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "name": "ceph_lv0",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "tags": {
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.cluster_name": "ceph",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.crush_device_class": "",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.encrypted": "0",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.osd_id": "0",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.type": "block",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.vdo": "0"
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             },
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "type": "block",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "vg_name": "ceph_vg0"
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:         }
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:     ],
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:     "1": [
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:         {
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "devices": [
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "/dev/loop4"
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             ],
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "lv_name": "ceph_lv1",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "lv_size": "21470642176",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "name": "ceph_lv1",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "tags": {
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.cluster_name": "ceph",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.crush_device_class": "",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.encrypted": "0",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.osd_id": "1",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.type": "block",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.vdo": "0"
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             },
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "type": "block",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "vg_name": "ceph_vg1"
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:         }
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:     ],
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:     "2": [
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:         {
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "devices": [
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "/dev/loop5"
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             ],
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "lv_name": "ceph_lv2",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "lv_size": "21470642176",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "name": "ceph_lv2",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "tags": {
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.cluster_name": "ceph",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.crush_device_class": "",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.encrypted": "0",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.osd_id": "2",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.type": "block",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:                 "ceph.vdo": "0"
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             },
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "type": "block",
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:             "vg_name": "ceph_vg2"
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:         }
Nov 25 08:53:15 compute-0 charming_mahavira[373460]:     ]
Nov 25 08:53:15 compute-0 charming_mahavira[373460]: }
Nov 25 08:53:15 compute-0 systemd[1]: libpod-162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29.scope: Deactivated successfully.
Nov 25 08:53:15 compute-0 podman[373469]: 2025-11-25 08:53:15.899048971 +0000 UTC m=+0.029082370 container died 162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:53:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c72796ac21f34f63c04016751c25950b8d8b0faa47d3804553d9bdcf042c7f8e-merged.mount: Deactivated successfully.
Nov 25 08:53:15 compute-0 podman[373469]: 2025-11-25 08:53:15.957385233 +0000 UTC m=+0.087418612 container remove 162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mahavira, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:53:15 compute-0 systemd[1]: libpod-conmon-162a2806d3100b69eca1688499ed5398e5713ed3b744b1ecfd8f5d56397c2a29.scope: Deactivated successfully.
Nov 25 08:53:15 compute-0 sudo[373334]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:16 compute-0 sudo[373484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:53:16 compute-0 sudo[373484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:16 compute-0 sudo[373484]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:16 compute-0 sudo[373509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:53:16 compute-0 sudo[373509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:16 compute-0 sudo[373509]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:16 compute-0 sudo[373534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:53:16 compute-0 sudo[373534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:16 compute-0 sudo[373534]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:16 compute-0 sudo[373559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:53:16 compute-0 sudo[373559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:16 compute-0 podman[373625]: 2025-11-25 08:53:16.681064724 +0000 UTC m=+0.047871893 container create 11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 08:53:16 compute-0 systemd[1]: Started libpod-conmon-11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98.scope.
Nov 25 08:53:16 compute-0 podman[373625]: 2025-11-25 08:53:16.653675691 +0000 UTC m=+0.020482850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:53:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:53:16 compute-0 podman[373625]: 2025-11-25 08:53:16.781983447 +0000 UTC m=+0.148790596 container init 11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 08:53:16 compute-0 podman[373625]: 2025-11-25 08:53:16.796168037 +0000 UTC m=+0.162975206 container start 11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wilson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 08:53:16 compute-0 podman[373625]: 2025-11-25 08:53:16.80037433 +0000 UTC m=+0.167181469 container attach 11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wilson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:53:16 compute-0 flamboyant_wilson[373641]: 167 167
Nov 25 08:53:16 compute-0 systemd[1]: libpod-11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98.scope: Deactivated successfully.
Nov 25 08:53:16 compute-0 podman[373625]: 2025-11-25 08:53:16.80298697 +0000 UTC m=+0.169794139 container died 11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wilson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:53:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-00173d2313ee2c64b0b61b8ff3441b38a9841d6f5323735886a6c34c001b9078-merged.mount: Deactivated successfully.
Nov 25 08:53:16 compute-0 podman[373625]: 2025-11-25 08:53:16.848001545 +0000 UTC m=+0.214808674 container remove 11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 08:53:16 compute-0 systemd[1]: libpod-conmon-11349692952baaf9d526bd9be97413833b5df302e458f9aeeef26f311d158a98.scope: Deactivated successfully.
Nov 25 08:53:17 compute-0 podman[373665]: 2025-11-25 08:53:17.085374073 +0000 UTC m=+0.096795404 container create 62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 08:53:17 compute-0 systemd[1]: Started libpod-conmon-62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34.scope.
Nov 25 08:53:17 compute-0 podman[373665]: 2025-11-25 08:53:17.057919377 +0000 UTC m=+0.069340798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:53:17 compute-0 ceph-mon[75015]: pgmap v2174: 321 pgs: 321 active+clean; 183 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 88 op/s
Nov 25 08:53:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07340675ca375edf6cac3afd873b540a2970346d60e3abd726c8af966d2f623/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07340675ca375edf6cac3afd873b540a2970346d60e3abd726c8af966d2f623/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07340675ca375edf6cac3afd873b540a2970346d60e3abd726c8af966d2f623/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07340675ca375edf6cac3afd873b540a2970346d60e3abd726c8af966d2f623/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:17 compute-0 podman[373665]: 2025-11-25 08:53:17.195568174 +0000 UTC m=+0.206989535 container init 62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:53:17 compute-0 podman[373665]: 2025-11-25 08:53:17.20551161 +0000 UTC m=+0.216932941 container start 62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:53:17 compute-0 podman[373665]: 2025-11-25 08:53:17.209444525 +0000 UTC m=+0.220865856 container attach 62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.365 253542 DEBUG nova.network.neutron [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updating instance_info_cache with network_info: [{"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.401 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Releasing lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.402 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance network_info: |[{"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.402 253542 DEBUG oslo_concurrency.lockutils [req-d7f0cac9-e8f4-4d28-b4d7-e818e52e2d44 req-cc8a2eeb-e00e-4b75-b3c3-8fc020d2ceff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.402 253542 DEBUG nova.network.neutron [req-d7f0cac9-e8f4-4d28-b4d7-e818e52e2d44 req-cc8a2eeb-e00e-4b75-b3c3-8fc020d2ceff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Refreshing network info cache for port e0eb0246-9869-4c10-b45b-bd0799ae0c95 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.405 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Start _get_guest_xml network_info=[{"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.412 253542 WARNING nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.419 253542 DEBUG nova.virt.libvirt.host [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.419 253542 DEBUG nova.virt.libvirt.host [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.423 253542 DEBUG nova.virt.libvirt.host [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.423 253542 DEBUG nova.virt.libvirt.host [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.424 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.424 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.424 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.424 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.425 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.425 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.425 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.425 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.425 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.426 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.426 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.426 253542 DEBUG nova.virt.hardware [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.429 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.786 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 207 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.7 MiB/s wr, 109 op/s
Nov 25 08:53:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:53:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043780554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.928 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.955 253542 DEBUG nova.storage.rbd_utils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 2da7049d-715e-4209-8c17-dda96ff6a192_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:53:17 compute-0 nova_compute[253538]: 2025-11-25 08:53:17.960 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]: {
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:         "osd_id": 1,
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:         "type": "bluestore"
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:     },
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:         "osd_id": 2,
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:         "type": "bluestore"
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:     },
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:         "osd_id": 0,
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:         "type": "bluestore"
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]:     }
Nov 25 08:53:18 compute-0 compassionate_driscoll[373681]: }
Nov 25 08:53:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1043780554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:53:18 compute-0 systemd[1]: libpod-62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34.scope: Deactivated successfully.
Nov 25 08:53:18 compute-0 podman[373665]: 2025-11-25 08:53:18.196735836 +0000 UTC m=+1.208157187 container died 62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:53:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b07340675ca375edf6cac3afd873b540a2970346d60e3abd726c8af966d2f623-merged.mount: Deactivated successfully.
Nov 25 08:53:18 compute-0 podman[373665]: 2025-11-25 08:53:18.258772618 +0000 UTC m=+1.270193949 container remove 62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:53:18 compute-0 systemd[1]: libpod-conmon-62fc6a628c6184f16f2582c4f247f12b0e7c3e3e782841df98fc6452163dec34.scope: Deactivated successfully.
Nov 25 08:53:18 compute-0 sudo[373559]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:53:18 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:53:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:53:18 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:53:18 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev f5967b3e-72fd-44fc-9490-6791292b9d83 does not exist
Nov 25 08:53:18 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 5c7d998b-8ad5-4c02-b93d-4376418d0428 does not exist
Nov 25 08:53:18 compute-0 sudo[373786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:53:18 compute-0 sudo[373786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:18 compute-0 sudo[373786]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:53:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1005190395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.456 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.458 253542 DEBUG nova.virt.libvirt.vif [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1007468219',display_name='tempest-TestNetworkAdvancedServerOps-server-1007468219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1007468219',id=115,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU68q8J+sEr75b4404lN3PmtYldQRjuuyP6SvWU7eV24JrfwADcpjaDPQik5pfkJbRZMvyTiPsHIXeXsaypDt7sTW7yrvZEo+4ZU8LLmZuoq5gv1NHdtvyWg13jQU9yUw==',key_name='tempest-TestNetworkAdvancedServerOps-1510547977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-020ctw8m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:53:13Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=2da7049d-715e-4209-8c17-dda96ff6a192,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.458 253542 DEBUG nova.network.os_vif_util [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.459 253542 DEBUG nova.network.os_vif_util [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.461 253542 DEBUG nova.objects.instance [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:53:18 compute-0 sudo[373811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:53:18 compute-0 sudo[373811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.476 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:53:18 compute-0 nova_compute[253538]:   <uuid>2da7049d-715e-4209-8c17-dda96ff6a192</uuid>
Nov 25 08:53:18 compute-0 nova_compute[253538]:   <name>instance-00000073</name>
Nov 25 08:53:18 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:53:18 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:53:18 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1007468219</nova:name>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:53:17</nova:creationTime>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:53:18 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:53:18 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:53:18 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:53:18 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:53:18 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:53:18 compute-0 nova_compute[253538]:         <nova:user uuid="009378dc36154271ba5b4590ce67ddde">tempest-TestNetworkAdvancedServerOps-1132090577-project-member</nova:user>
Nov 25 08:53:18 compute-0 nova_compute[253538]:         <nova:project uuid="7dcaf3c96bfc4db3a41291debd385c67">tempest-TestNetworkAdvancedServerOps-1132090577</nova:project>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:53:18 compute-0 nova_compute[253538]:         <nova:port uuid="e0eb0246-9869-4c10-b45b-bd0799ae0c95">
Nov 25 08:53:18 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:53:18 compute-0 sudo[373811]: pam_unix(sudo:session): session closed for user root
Nov 25 08:53:18 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:53:18 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <system>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <entry name="serial">2da7049d-715e-4209-8c17-dda96ff6a192</entry>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <entry name="uuid">2da7049d-715e-4209-8c17-dda96ff6a192</entry>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     </system>
Nov 25 08:53:18 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:53:18 compute-0 nova_compute[253538]:   <os>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:   </os>
Nov 25 08:53:18 compute-0 nova_compute[253538]:   <features>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:   </features>
Nov 25 08:53:18 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:53:18 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:53:18 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/2da7049d-715e-4209-8c17-dda96ff6a192_disk">
Nov 25 08:53:18 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       </source>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:53:18 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/2da7049d-715e-4209-8c17-dda96ff6a192_disk.config">
Nov 25 08:53:18 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       </source>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:53:18 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:ea:4d:89"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <target dev="tape0eb0246-98"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/console.log" append="off"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <video>
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     </video>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:53:18 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:53:18 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:53:18 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:53:18 compute-0 nova_compute[253538]: </domain>
Nov 25 08:53:18 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.477 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Preparing to wait for external event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.477 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.477 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.477 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.478 253542 DEBUG nova.virt.libvirt.vif [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1007468219',display_name='tempest-TestNetworkAdvancedServerOps-server-1007468219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1007468219',id=115,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU68q8J+sEr75b4404lN3PmtYldQRjuuyP6SvWU7eV24JrfwADcpjaDPQik5pfkJbRZMvyTiPsHIXeXsaypDt7sTW7yrvZEo+4ZU8LLmZuoq5gv1NHdtvyWg13jQU9yUw==',key_name='tempest-TestNetworkAdvancedServerOps-1510547977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-020ctw8m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:53:13Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=2da7049d-715e-4209-8c17-dda96ff6a192,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.478 253542 DEBUG nova.network.os_vif_util [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.479 253542 DEBUG nova.network.os_vif_util [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.479 253542 DEBUG os_vif [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.480 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.480 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.481 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.485 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.485 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0eb0246-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.485 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0eb0246-98, col_values=(('external_ids', {'iface-id': 'e0eb0246-9869-4c10-b45b-bd0799ae0c95', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:4d:89', 'vm-uuid': '2da7049d-715e-4209-8c17-dda96ff6a192'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.487 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:18 compute-0 NetworkManager[48915]: <info>  [1764060798.4885] manager: (tape0eb0246-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.488 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.494 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.496 253542 INFO os_vif [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98')
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.548 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.549 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.550 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No VIF found with MAC fa:16:3e:ea:4d:89, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.550 253542 INFO nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Using config drive
Nov 25 08:53:18 compute-0 nova_compute[253538]: 2025-11-25 08:53:18.576 253542 DEBUG nova.storage.rbd_utils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 2da7049d-715e-4209-8c17-dda96ff6a192_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:53:19 compute-0 nova_compute[253538]: 2025-11-25 08:53:19.161 253542 DEBUG nova.network.neutron [req-d7f0cac9-e8f4-4d28-b4d7-e818e52e2d44 req-cc8a2eeb-e00e-4b75-b3c3-8fc020d2ceff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updated VIF entry in instance network info cache for port e0eb0246-9869-4c10-b45b-bd0799ae0c95. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:53:19 compute-0 nova_compute[253538]: 2025-11-25 08:53:19.161 253542 DEBUG nova.network.neutron [req-d7f0cac9-e8f4-4d28-b4d7-e818e52e2d44 req-cc8a2eeb-e00e-4b75-b3c3-8fc020d2ceff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updating instance_info_cache with network_info: [{"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:53:19 compute-0 nova_compute[253538]: 2025-11-25 08:53:19.171 253542 INFO nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Creating config drive at /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/disk.config
Nov 25 08:53:19 compute-0 ceph-mon[75015]: pgmap v2175: 321 pgs: 321 active+clean; 207 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.7 MiB/s wr, 109 op/s
Nov 25 08:53:19 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:53:19 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:53:19 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1005190395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:53:19 compute-0 nova_compute[253538]: 2025-11-25 08:53:19.181 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5lngydeb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:19 compute-0 nova_compute[253538]: 2025-11-25 08:53:19.229 253542 DEBUG oslo_concurrency.lockutils [req-d7f0cac9-e8f4-4d28-b4d7-e818e52e2d44 req-cc8a2eeb-e00e-4b75-b3c3-8fc020d2ceff b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:53:19 compute-0 nova_compute[253538]: 2025-11-25 08:53:19.335 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5lngydeb" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:19 compute-0 nova_compute[253538]: 2025-11-25 08:53:19.361 253542 DEBUG nova.storage.rbd_utils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 2da7049d-715e-4209-8c17-dda96ff6a192_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:53:19 compute-0 nova_compute[253538]: 2025-11-25 08:53:19.365 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/disk.config 2da7049d-715e-4209-8c17-dda96ff6a192_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:19 compute-0 nova_compute[253538]: 2025-11-25 08:53:19.542 253542 DEBUG oslo_concurrency.processutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/disk.config 2da7049d-715e-4209-8c17-dda96ff6a192_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:19 compute-0 nova_compute[253538]: 2025-11-25 08:53:19.543 253542 INFO nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Deleting local config drive /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/disk.config because it was imported into RBD.
Nov 25 08:53:19 compute-0 kernel: tape0eb0246-98: entered promiscuous mode
Nov 25 08:53:19 compute-0 NetworkManager[48915]: <info>  [1764060799.6051] manager: (tape0eb0246-98): new Tun device (/org/freedesktop/NetworkManager/Devices/473)
Nov 25 08:53:19 compute-0 ovn_controller[152859]: 2025-11-25T08:53:19Z|01146|binding|INFO|Claiming lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 for this chassis.
Nov 25 08:53:19 compute-0 ovn_controller[152859]: 2025-11-25T08:53:19Z|01147|binding|INFO|e0eb0246-9869-4c10-b45b-bd0799ae0c95: Claiming fa:16:3e:ea:4d:89 10.100.0.6
Nov 25 08:53:19 compute-0 nova_compute[253538]: 2025-11-25 08:53:19.606 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.614 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:4d:89 10.100.0.6'], port_security=['fa:16:3e:ea:4d:89 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2da7049d-715e-4209-8c17-dda96ff6a192', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '2', 'neutron:security_group_ids': '622b72dc-477c-41ae-9a43-d0ddc5df890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbdce223-39b2-47e0-8888-53244c8f0c36, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=e0eb0246-9869-4c10-b45b-bd0799ae0c95) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.617 162739 INFO neutron.agent.ovn.metadata.agent [-] Port e0eb0246-9869-4c10-b45b-bd0799ae0c95 in datapath ab581a21-5712-4b8e-87f9-b943349fbfcb bound to our chassis
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.619 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ab581a21-5712-4b8e-87f9-b943349fbfcb
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.633 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5d7b55-1031-4331-9f51-1d793fddc228]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.634 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapab581a21-51 in ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.637 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapab581a21-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.638 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5453d83-cd19-4bf8-8ec5-ac8cc2be2dba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.638 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cc127147-c344-4d3d-9983-4c54d93b4865]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:19 compute-0 ovn_controller[152859]: 2025-11-25T08:53:19Z|01148|binding|INFO|Setting lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 ovn-installed in OVS
Nov 25 08:53:19 compute-0 ovn_controller[152859]: 2025-11-25T08:53:19Z|01149|binding|INFO|Setting lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 up in Southbound
Nov 25 08:53:19 compute-0 nova_compute[253538]: 2025-11-25 08:53:19.643 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:19 compute-0 systemd-machined[215790]: New machine qemu-143-instance-00000073.
Nov 25 08:53:19 compute-0 nova_compute[253538]: 2025-11-25 08:53:19.648 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.656 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[3820ce2f-889d-4a87-8416-0ff6d9f7d80b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:19 compute-0 systemd[1]: Started Virtual Machine qemu-143-instance-00000073.
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.677 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c6362c18-f75a-4ba1-9f88-1ffb73c2f857]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:19 compute-0 systemd-udevd[373915]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:53:19 compute-0 NetworkManager[48915]: <info>  [1764060799.7009] device (tape0eb0246-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:53:19 compute-0 NetworkManager[48915]: <info>  [1764060799.7017] device (tape0eb0246-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.721 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b43caa17-8af5-48d1-abe0-2bb464f60d30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.726 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0f68907c-0090-45df-a926-6754a8de25b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:19 compute-0 NetworkManager[48915]: <info>  [1764060799.7277] manager: (tapab581a21-50): new Veth device (/org/freedesktop/NetworkManager/Devices/474)
Nov 25 08:53:19 compute-0 systemd-udevd[373919]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.764 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d2994d5a-02dd-4f35-8d2b-97fbaaa4ec62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.767 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[152de971-7bbc-4691-85a2-9ca61bf1f0d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:19 compute-0 NetworkManager[48915]: <info>  [1764060799.7911] device (tapab581a21-50): carrier: link connected
Nov 25 08:53:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.798 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7874146d-e78b-4d4c-9e08-de3c69592211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.821 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9a1867-ae34-496c-b482-7af215322829]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab581a21-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:79:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615439, 'reachable_time': 30428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373945, 'error': None, 'target': 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.837 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[518ada18-4876-4360-82c3-f9a99916c9bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:797a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 615439, 'tstamp': 615439}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373946, 'error': None, 'target': 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.859 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b06cdee9-6a8d-4a67-b937-4931cfa049d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab581a21-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:79:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615439, 'reachable_time': 30428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373947, 'error': None, 'target': 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.892 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[088d2636-c3ec-4794-bc2a-3c9788b80d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.970 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[23b245cd-cfec-471d-97cd-6b377ad481d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.973 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab581a21-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.973 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.974 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab581a21-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:19 compute-0 kernel: tapab581a21-50: entered promiscuous mode
Nov 25 08:53:19 compute-0 nova_compute[253538]: 2025-11-25 08:53:19.976 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:19 compute-0 NetworkManager[48915]: <info>  [1764060799.9799] manager: (tapab581a21-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/475)
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.985 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapab581a21-50, col_values=(('external_ids', {'iface-id': 'b956a451-af5c-4f4e-b3b8-704d71686765'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:19 compute-0 nova_compute[253538]: 2025-11-25 08:53:19.987 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:19 compute-0 ovn_controller[152859]: 2025-11-25T08:53:19Z|01150|binding|INFO|Releasing lport b956a451-af5c-4f4e-b3b8-704d71686765 from this chassis (sb_readonly=0)
Nov 25 08:53:19 compute-0 nova_compute[253538]: 2025-11-25 08:53:19.988 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.991 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ab581a21-5712-4b8e-87f9-b943349fbfcb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ab581a21-5712-4b8e-87f9-b943349fbfcb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.992 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0206fb18-65bb-4036-ab34-c4d70bf2ed92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.993 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-ab581a21-5712-4b8e-87f9-b943349fbfcb
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/ab581a21-5712-4b8e-87f9-b943349fbfcb.pid.haproxy
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID ab581a21-5712-4b8e-87f9-b943349fbfcb
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:53:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:19.994 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'env', 'PROCESS_TAG=haproxy-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ab581a21-5712-4b8e-87f9-b943349fbfcb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.007 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.201 253542 DEBUG nova.compute.manager [req-c353fe6a-f78c-4118-bac8-c86755ec20cd req-bf1200db-2cc2-4e5f-b944-e7b73bc05523 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.201 253542 DEBUG oslo_concurrency.lockutils [req-c353fe6a-f78c-4118-bac8-c86755ec20cd req-bf1200db-2cc2-4e5f-b944-e7b73bc05523 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.202 253542 DEBUG oslo_concurrency.lockutils [req-c353fe6a-f78c-4118-bac8-c86755ec20cd req-bf1200db-2cc2-4e5f-b944-e7b73bc05523 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.203 253542 DEBUG oslo_concurrency.lockutils [req-c353fe6a-f78c-4118-bac8-c86755ec20cd req-bf1200db-2cc2-4e5f-b944-e7b73bc05523 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.203 253542 DEBUG nova.compute.manager [req-c353fe6a-f78c-4118-bac8-c86755ec20cd req-bf1200db-2cc2-4e5f-b944-e7b73bc05523 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Processing event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.244 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060800.2439125, 2da7049d-715e-4209-8c17-dda96ff6a192 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.245 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] VM Started (Lifecycle Event)
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.248 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.257 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.261 253542 INFO nova.virt.libvirt.driver [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance spawned successfully.
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.262 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.265 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.271 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.288 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.289 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.290 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.290 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.291 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.291 253542 DEBUG nova.virt.libvirt.driver [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.318 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.318 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060800.2441359, 2da7049d-715e-4209-8c17-dda96ff6a192 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.319 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] VM Paused (Lifecycle Event)
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.353 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.357 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060800.2565794, 2da7049d-715e-4209-8c17-dda96ff6a192 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.358 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] VM Resumed (Lifecycle Event)
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.379 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.381 253542 INFO nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Took 7.29 seconds to spawn the instance on the hypervisor.
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.381 253542 DEBUG nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.384 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.413 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:53:20 compute-0 podman[374021]: 2025-11-25 08:53:20.417995693 +0000 UTC m=+0.048586942 container create 4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.444 253542 INFO nova.compute.manager [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Took 8.34 seconds to build instance.
Nov 25 08:53:20 compute-0 systemd[1]: Started libpod-conmon-4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd.scope.
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.472 253542 DEBUG oslo_concurrency.lockutils [None req-df790dd1-a03f-490c-a5fd-544bc83166c6 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:53:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53e3214adfe5885526c3929acf65ff9e3b10430b7f7fcf526d2c5c055a47429/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:20 compute-0 podman[374021]: 2025-11-25 08:53:20.397063033 +0000 UTC m=+0.027654302 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:53:20 compute-0 podman[374021]: 2025-11-25 08:53:20.502129416 +0000 UTC m=+0.132720715 container init 4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 08:53:20 compute-0 podman[374021]: 2025-11-25 08:53:20.511074415 +0000 UTC m=+0.141665664 container start 4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:53:20 compute-0 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374036]: [NOTICE]   (374040) : New worker (374042) forked
Nov 25 08:53:20 compute-0 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374036]: [NOTICE]   (374040) : Loading success.
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.601 253542 INFO nova.compute.manager [None req-68c25e1f-d24f-479c-af3b-2edbd6a3be38 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Get console output
Nov 25 08:53:20 compute-0 nova_compute[253538]: 2025-11-25 08:53:20.608 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:53:21 compute-0 ceph-mon[75015]: pgmap v2176: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 25 08:53:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 132 op/s
Nov 25 08:53:22 compute-0 nova_compute[253538]: 2025-11-25 08:53:22.265 253542 DEBUG nova.compute.manager [req-d159983d-a9c5-407f-8fe7-8540d3cced2e req-0dc6ed96-7fe2-4f14-9242-1838a8652be0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:22 compute-0 nova_compute[253538]: 2025-11-25 08:53:22.266 253542 DEBUG oslo_concurrency.lockutils [req-d159983d-a9c5-407f-8fe7-8540d3cced2e req-0dc6ed96-7fe2-4f14-9242-1838a8652be0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:22 compute-0 nova_compute[253538]: 2025-11-25 08:53:22.266 253542 DEBUG oslo_concurrency.lockutils [req-d159983d-a9c5-407f-8fe7-8540d3cced2e req-0dc6ed96-7fe2-4f14-9242-1838a8652be0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:22 compute-0 nova_compute[253538]: 2025-11-25 08:53:22.267 253542 DEBUG oslo_concurrency.lockutils [req-d159983d-a9c5-407f-8fe7-8540d3cced2e req-0dc6ed96-7fe2-4f14-9242-1838a8652be0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:22 compute-0 nova_compute[253538]: 2025-11-25 08:53:22.267 253542 DEBUG nova.compute.manager [req-d159983d-a9c5-407f-8fe7-8540d3cced2e req-0dc6ed96-7fe2-4f14-9242-1838a8652be0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] No waiting events found dispatching network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:53:22 compute-0 nova_compute[253538]: 2025-11-25 08:53:22.267 253542 WARNING nova.compute.manager [req-d159983d-a9c5-407f-8fe7-8540d3cced2e req-0dc6ed96-7fe2-4f14-9242-1838a8652be0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received unexpected event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 for instance with vm_state active and task_state None.
Nov 25 08:53:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:53:22 compute-0 nova_compute[253538]: 2025-11-25 08:53:22.788 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:23 compute-0 ceph-mon[75015]: pgmap v2177: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 132 op/s
Nov 25 08:53:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:53:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:53:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:53:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:53:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:53:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:53:23 compute-0 nova_compute[253538]: 2025-11-25 08:53:23.487 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Nov 25 08:53:25 compute-0 ceph-mon[75015]: pgmap v2178: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Nov 25 08:53:25 compute-0 nova_compute[253538]: 2025-11-25 08:53:25.387 253542 DEBUG oslo_concurrency.lockutils [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "interface-76611b0b-db06-4903-a22a-59b23a1e0d48-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:25 compute-0 nova_compute[253538]: 2025-11-25 08:53:25.388 253542 DEBUG oslo_concurrency.lockutils [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "interface-76611b0b-db06-4903-a22a-59b23a1e0d48-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:25 compute-0 nova_compute[253538]: 2025-11-25 08:53:25.389 253542 DEBUG nova.objects.instance [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'flavor' on Instance uuid 76611b0b-db06-4903-a22a-59b23a1e0d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:53:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 151 op/s
Nov 25 08:53:25 compute-0 nova_compute[253538]: 2025-11-25 08:53:25.806 253542 DEBUG nova.objects.instance [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_requests' on Instance uuid 76611b0b-db06-4903-a22a-59b23a1e0d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:53:25 compute-0 nova_compute[253538]: 2025-11-25 08:53:25.821 253542 DEBUG nova.network.neutron [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:53:25 compute-0 nova_compute[253538]: 2025-11-25 08:53:25.856 253542 DEBUG nova.compute.manager [req-778a5f67-bd48-4dcc-bd8b-d5244b731a1e req-38ac3873-07df-435d-83b0-2d012ec170e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-changed-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:25 compute-0 nova_compute[253538]: 2025-11-25 08:53:25.856 253542 DEBUG nova.compute.manager [req-778a5f67-bd48-4dcc-bd8b-d5244b731a1e req-38ac3873-07df-435d-83b0-2d012ec170e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Refreshing instance network info cache due to event network-changed-e0eb0246-9869-4c10-b45b-bd0799ae0c95. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:53:25 compute-0 nova_compute[253538]: 2025-11-25 08:53:25.857 253542 DEBUG oslo_concurrency.lockutils [req-778a5f67-bd48-4dcc-bd8b-d5244b731a1e req-38ac3873-07df-435d-83b0-2d012ec170e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:53:25 compute-0 nova_compute[253538]: 2025-11-25 08:53:25.857 253542 DEBUG oslo_concurrency.lockutils [req-778a5f67-bd48-4dcc-bd8b-d5244b731a1e req-38ac3873-07df-435d-83b0-2d012ec170e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:53:25 compute-0 nova_compute[253538]: 2025-11-25 08:53:25.858 253542 DEBUG nova.network.neutron [req-778a5f67-bd48-4dcc-bd8b-d5244b731a1e req-38ac3873-07df-435d-83b0-2d012ec170e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Refreshing network info cache for port e0eb0246-9869-4c10-b45b-bd0799ae0c95 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:53:26 compute-0 nova_compute[253538]: 2025-11-25 08:53:26.004 253542 DEBUG nova.policy [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:53:26 compute-0 nova_compute[253538]: 2025-11-25 08:53:26.805 253542 DEBUG nova.network.neutron [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Successfully created port: 1959aca7-b25c-4fe5-b59a-70db352af78b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:53:27 compute-0 ceph-mon[75015]: pgmap v2179: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 151 op/s
Nov 25 08:53:27 compute-0 nova_compute[253538]: 2025-11-25 08:53:27.289 253542 DEBUG nova.network.neutron [req-778a5f67-bd48-4dcc-bd8b-d5244b731a1e req-38ac3873-07df-435d-83b0-2d012ec170e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updated VIF entry in instance network info cache for port e0eb0246-9869-4c10-b45b-bd0799ae0c95. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:53:27 compute-0 nova_compute[253538]: 2025-11-25 08:53:27.289 253542 DEBUG nova.network.neutron [req-778a5f67-bd48-4dcc-bd8b-d5244b731a1e req-38ac3873-07df-435d-83b0-2d012ec170e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updating instance_info_cache with network_info: [{"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:53:27 compute-0 nova_compute[253538]: 2025-11-25 08:53:27.306 253542 DEBUG oslo_concurrency.lockutils [req-778a5f67-bd48-4dcc-bd8b-d5244b731a1e req-38ac3873-07df-435d-83b0-2d012ec170e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:53:27 compute-0 nova_compute[253538]: 2025-11-25 08:53:27.423 253542 DEBUG nova.network.neutron [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Successfully updated port: 1959aca7-b25c-4fe5-b59a-70db352af78b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:53:27 compute-0 nova_compute[253538]: 2025-11-25 08:53:27.435 253542 DEBUG oslo_concurrency.lockutils [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:53:27 compute-0 nova_compute[253538]: 2025-11-25 08:53:27.436 253542 DEBUG oslo_concurrency.lockutils [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:53:27 compute-0 nova_compute[253538]: 2025-11-25 08:53:27.436 253542 DEBUG nova.network.neutron [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:53:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:53:27 compute-0 nova_compute[253538]: 2025-11-25 08:53:27.528 253542 DEBUG nova.compute.manager [req-d524d148-208e-426f-9087-6b37fcc3db0a req-fb04bd9c-375d-4630-ae69-8c93da1c6cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-changed-1959aca7-b25c-4fe5-b59a-70db352af78b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:27 compute-0 nova_compute[253538]: 2025-11-25 08:53:27.529 253542 DEBUG nova.compute.manager [req-d524d148-208e-426f-9087-6b37fcc3db0a req-fb04bd9c-375d-4630-ae69-8c93da1c6cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing instance network info cache due to event network-changed-1959aca7-b25c-4fe5-b59a-70db352af78b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:53:27 compute-0 nova_compute[253538]: 2025-11-25 08:53:27.529 253542 DEBUG oslo_concurrency.lockutils [req-d524d148-208e-426f-9087-6b37fcc3db0a req-fb04bd9c-375d-4630-ae69-8c93da1c6cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:53:27 compute-0 nova_compute[253538]: 2025-11-25 08:53:27.791 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 123 op/s
Nov 25 08:53:28 compute-0 nova_compute[253538]: 2025-11-25 08:53:28.489 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:28 compute-0 nova_compute[253538]: 2025-11-25 08:53:28.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:53:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:53:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2543939752' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:53:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:53:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2543939752' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:53:29 compute-0 ceph-mon[75015]: pgmap v2180: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 123 op/s
Nov 25 08:53:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2543939752' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:53:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2543939752' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:53:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 245 KiB/s wr, 91 op/s
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.311 253542 DEBUG nova.network.neutron [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.341 253542 DEBUG oslo_concurrency.lockutils [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.342 253542 DEBUG oslo_concurrency.lockutils [req-d524d148-208e-426f-9087-6b37fcc3db0a req-fb04bd9c-375d-4630-ae69-8c93da1c6cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.343 253542 DEBUG nova.network.neutron [req-d524d148-208e-426f-9087-6b37fcc3db0a req-fb04bd9c-375d-4630-ae69-8c93da1c6cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing network info cache for port 1959aca7-b25c-4fe5-b59a-70db352af78b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.346 253542 DEBUG nova.virt.libvirt.vif [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:52:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1241900398',display_name='tempest-TestNetworkBasicOps-server-1241900398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1241900398',id=114,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQnIncww1mRwX9O2GT7bKPsyEkgA1/4oI48uojAID9Vm7PrMk+vKtiibHK4EoiYH2/wbPYy0HX20b3AH2Q3Q4yusIjxaAQq6AUEbJthvns45ZkdO1WCW3z0AmP1FYwxQ==',key_name='tempest-TestNetworkBasicOps-922693977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-1z0lb28a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:02Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=76611b0b-db06-4903-a22a-59b23a1e0d48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.346 253542 DEBUG nova.network.os_vif_util [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.347 253542 DEBUG nova.network.os_vif_util [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:c5:52,bridge_name='br-int',has_traffic_filtering=True,id=1959aca7-b25c-4fe5-b59a-70db352af78b,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1959aca7-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.348 253542 DEBUG os_vif [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:c5:52,bridge_name='br-int',has_traffic_filtering=True,id=1959aca7-b25c-4fe5-b59a-70db352af78b,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1959aca7-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.348 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.349 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.349 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.353 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.353 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1959aca7-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.354 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1959aca7-b2, col_values=(('external_ids', {'iface-id': '1959aca7-b25c-4fe5-b59a-70db352af78b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:c5:52', 'vm-uuid': '76611b0b-db06-4903-a22a-59b23a1e0d48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:30 compute-0 NetworkManager[48915]: <info>  [1764060810.3564] manager: (tap1959aca7-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/476)
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.358 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.369 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.370 253542 INFO os_vif [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:c5:52,bridge_name='br-int',has_traffic_filtering=True,id=1959aca7-b25c-4fe5-b59a-70db352af78b,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1959aca7-b2')
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.371 253542 DEBUG nova.virt.libvirt.vif [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:52:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1241900398',display_name='tempest-TestNetworkBasicOps-server-1241900398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1241900398',id=114,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQnIncww1mRwX9O2GT7bKPsyEkgA1/4oI48uojAID9Vm7PrMk+vKtiibHK4EoiYH2/wbPYy0HX20b3AH2Q3Q4yusIjxaAQq6AUEbJthvns45ZkdO1WCW3z0AmP1FYwxQ==',key_name='tempest-TestNetworkBasicOps-922693977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-1z0lb28a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:02Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=76611b0b-db06-4903-a22a-59b23a1e0d48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.371 253542 DEBUG nova.network.os_vif_util [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.372 253542 DEBUG nova.network.os_vif_util [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:c5:52,bridge_name='br-int',has_traffic_filtering=True,id=1959aca7-b25c-4fe5-b59a-70db352af78b,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1959aca7-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.375 253542 DEBUG nova.virt.libvirt.guest [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] attach device xml: <interface type="ethernet">
Nov 25 08:53:30 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:d6:c5:52"/>
Nov 25 08:53:30 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:53:30 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:53:30 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:53:30 compute-0 nova_compute[253538]:   <target dev="tap1959aca7-b2"/>
Nov 25 08:53:30 compute-0 nova_compute[253538]: </interface>
Nov 25 08:53:30 compute-0 nova_compute[253538]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 08:53:30 compute-0 NetworkManager[48915]: <info>  [1764060810.4029] manager: (tap1959aca7-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/477)
Nov 25 08:53:30 compute-0 kernel: tap1959aca7-b2: entered promiscuous mode
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.405 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:30 compute-0 ovn_controller[152859]: 2025-11-25T08:53:30Z|01151|binding|INFO|Claiming lport 1959aca7-b25c-4fe5-b59a-70db352af78b for this chassis.
Nov 25 08:53:30 compute-0 ovn_controller[152859]: 2025-11-25T08:53:30Z|01152|binding|INFO|1959aca7-b25c-4fe5-b59a-70db352af78b: Claiming fa:16:3e:d6:c5:52 10.100.0.26
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.421 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:c5:52 10.100.0.26'], port_security=['fa:16:3e:d6:c5:52 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '76611b0b-db06-4903-a22a-59b23a1e0d48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd480e0b0-4ff3-496c-bb44-a333ae5d1ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927e21d4-cff1-4c45-a730-0e5e5d57b4cb, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1959aca7-b25c-4fe5-b59a-70db352af78b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.423 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1959aca7-b25c-4fe5-b59a-70db352af78b in datapath 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 bound to our chassis
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.425 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37
Nov 25 08:53:30 compute-0 systemd-udevd[374058]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.442 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a167b9b6-16ab-4a36-9fab-d726a2389334]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.443 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8f0f7d83-b1 in ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.446 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8f0f7d83-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.446 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8e243095-614d-44b1-b6e7-b9ad1352ff95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.448 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed9c0ae-2b7c-46ed-a81b-1aff17481e6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.466 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.466 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcc2495-8151-4b1e-b4fa-868c9a3cbec8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:30 compute-0 ovn_controller[152859]: 2025-11-25T08:53:30Z|01153|binding|INFO|Setting lport 1959aca7-b25c-4fe5-b59a-70db352af78b ovn-installed in OVS
Nov 25 08:53:30 compute-0 ovn_controller[152859]: 2025-11-25T08:53:30Z|01154|binding|INFO|Setting lport 1959aca7-b25c-4fe5-b59a-70db352af78b up in Southbound
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.471 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:30 compute-0 NetworkManager[48915]: <info>  [1764060810.4736] device (tap1959aca7-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:53:30 compute-0 NetworkManager[48915]: <info>  [1764060810.4750] device (tap1959aca7-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.503 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[41e0dcfd-e9c9-463f-bcc6-eb78394677d6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.517 253542 DEBUG nova.virt.libvirt.driver [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.517 253542 DEBUG nova.virt.libvirt.driver [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.517 253542 DEBUG nova.virt.libvirt.driver [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:36:b2:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.517 253542 DEBUG nova.virt.libvirt.driver [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:d6:c5:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.543 253542 DEBUG nova.virt.libvirt.guest [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:53:30 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:53:30 compute-0 nova_compute[253538]:   <nova:name>tempest-TestNetworkBasicOps-server-1241900398</nova:name>
Nov 25 08:53:30 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:53:30</nova:creationTime>
Nov 25 08:53:30 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:53:30 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:53:30 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:53:30 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:53:30 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:53:30 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:53:30 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:53:30 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:53:30 compute-0 nova_compute[253538]:     <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:53:30 compute-0 nova_compute[253538]:     <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:53:30 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:53:30 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:53:30 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:53:30 compute-0 nova_compute[253538]:     <nova:port uuid="8f1fcc3c-5f46-4272-be9b-4d5213b3aceb">
Nov 25 08:53:30 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:53:30 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:53:30 compute-0 nova_compute[253538]:     <nova:port uuid="1959aca7-b25c-4fe5-b59a-70db352af78b">
Nov 25 08:53:30 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Nov 25 08:53:30 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:53:30 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:53:30 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:53:30 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.547 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c62f272a-1424-4aaf-bcc1-446b6ba1b462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:30 compute-0 NetworkManager[48915]: <info>  [1764060810.5558] manager: (tap8f0f7d83-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/478)
Nov 25 08:53:30 compute-0 systemd-udevd[374061]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.556 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[91ef1049-1106-4798-8782-8a9f3f0470fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.582 253542 DEBUG oslo_concurrency.lockutils [None req-b64d71f1-b88b-41a0-91c0-6bcfe4f8ef88 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "interface-76611b0b-db06-4903-a22a-59b23a1e0d48-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.593 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[62bb72dc-4361-477e-94b1-705a0c60e9b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.596 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3e27edfd-ee18-4075-b3bc-90b97dc8dc73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:30 compute-0 NetworkManager[48915]: <info>  [1764060810.6194] device (tap8f0f7d83-b0): carrier: link connected
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.623 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7f51ac-73d2-4600-8b01-ad73b9b22d3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.640 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e3e4233c-75b8-446f-97bf-3cdcaed31aa6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f0f7d83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:fd:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616521, 'reachable_time': 30317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374084, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.656 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[30600fff-e3ce-456c-bf65-384816a736dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:fd91'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616521, 'tstamp': 616521}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374085, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.671 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[51737ee8-a821-44f0-9bc7-75c5716cafd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f0f7d83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:fd:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616521, 'reachable_time': 30317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374086, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.722 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ad30bcb7-991c-4d67-b2b6-5e1dc7ed52dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.810 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c91c59d5-c4fe-43f7-8b63-99ba576efddf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.811 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f0f7d83-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.812 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.812 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f0f7d83-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.814 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:30 compute-0 kernel: tap8f0f7d83-b0: entered promiscuous mode
Nov 25 08:53:30 compute-0 NetworkManager[48915]: <info>  [1764060810.8154] manager: (tap8f0f7d83-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/479)
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.817 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.819 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f0f7d83-b0, col_values=(('external_ids', {'iface-id': '4bc48b70-3942-46d1-ac71-5fa19e5d9ae3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.820 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:30 compute-0 ovn_controller[152859]: 2025-11-25T08:53:30Z|01155|binding|INFO|Releasing lport 4bc48b70-3942-46d1-ac71-5fa19e5d9ae3 from this chassis (sb_readonly=0)
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.824 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8f0f7d83-b45f-4a49-9b0c-4eced5b56b37.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8f0f7d83-b45f-4a49-9b0c-4eced5b56b37.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.837 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c61e9462-829e-4cbb-b996-eee8a68d0f2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.838 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/8f0f7d83-b45f-4a49-9b0c-4eced5b56b37.pid.haproxy
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:53:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:30.839 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'env', 'PROCESS_TAG=haproxy-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8f0f7d83-b45f-4a49-9b0c-4eced5b56b37.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:53:30 compute-0 nova_compute[253538]: 2025-11-25 08:53:30.843 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:31 compute-0 nova_compute[253538]: 2025-11-25 08:53:31.160 253542 DEBUG nova.compute.manager [req-c7f3a4ec-a9d8-4055-9f6a-0ee6ac0d1780 req-c6d08ffc-28fb-4dc5-822a-efb455fb9713 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:31 compute-0 nova_compute[253538]: 2025-11-25 08:53:31.161 253542 DEBUG oslo_concurrency.lockutils [req-c7f3a4ec-a9d8-4055-9f6a-0ee6ac0d1780 req-c6d08ffc-28fb-4dc5-822a-efb455fb9713 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:31 compute-0 nova_compute[253538]: 2025-11-25 08:53:31.161 253542 DEBUG oslo_concurrency.lockutils [req-c7f3a4ec-a9d8-4055-9f6a-0ee6ac0d1780 req-c6d08ffc-28fb-4dc5-822a-efb455fb9713 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:31 compute-0 nova_compute[253538]: 2025-11-25 08:53:31.161 253542 DEBUG oslo_concurrency.lockutils [req-c7f3a4ec-a9d8-4055-9f6a-0ee6ac0d1780 req-c6d08ffc-28fb-4dc5-822a-efb455fb9713 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:31 compute-0 nova_compute[253538]: 2025-11-25 08:53:31.161 253542 DEBUG nova.compute.manager [req-c7f3a4ec-a9d8-4055-9f6a-0ee6ac0d1780 req-c6d08ffc-28fb-4dc5-822a-efb455fb9713 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] No waiting events found dispatching network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:53:31 compute-0 nova_compute[253538]: 2025-11-25 08:53:31.162 253542 WARNING nova.compute.manager [req-c7f3a4ec-a9d8-4055-9f6a-0ee6ac0d1780 req-c6d08ffc-28fb-4dc5-822a-efb455fb9713 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received unexpected event network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b for instance with vm_state active and task_state None.
Nov 25 08:53:31 compute-0 ceph-mon[75015]: pgmap v2181: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 245 KiB/s wr, 91 op/s
Nov 25 08:53:31 compute-0 podman[374118]: 2025-11-25 08:53:31.298351517 +0000 UTC m=+0.069358770 container create a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:53:31 compute-0 systemd[1]: Started libpod-conmon-a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c.scope.
Nov 25 08:53:31 compute-0 podman[374118]: 2025-11-25 08:53:31.266723651 +0000 UTC m=+0.037730924 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:53:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ca46848af656bb6a002ab5710a7e5831c6c96bc3a4e29b10c26e75820c7aca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:31 compute-0 podman[374118]: 2025-11-25 08:53:31.396905984 +0000 UTC m=+0.167913237 container init a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:53:31 compute-0 podman[374118]: 2025-11-25 08:53:31.409732076 +0000 UTC m=+0.180739389 container start a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:53:31 compute-0 neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37[374133]: [NOTICE]   (374137) : New worker (374139) forked
Nov 25 08:53:31 compute-0 neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37[374133]: [NOTICE]   (374137) : Loading success.
Nov 25 08:53:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 215 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 123 KiB/s wr, 80 op/s
Nov 25 08:53:32 compute-0 nova_compute[253538]: 2025-11-25 08:53:32.056 253542 DEBUG nova.network.neutron [req-d524d148-208e-426f-9087-6b37fcc3db0a req-fb04bd9c-375d-4630-ae69-8c93da1c6cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updated VIF entry in instance network info cache for port 1959aca7-b25c-4fe5-b59a-70db352af78b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:53:32 compute-0 nova_compute[253538]: 2025-11-25 08:53:32.056 253542 DEBUG nova.network.neutron [req-d524d148-208e-426f-9087-6b37fcc3db0a req-fb04bd9c-375d-4630-ae69-8c93da1c6cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:53:32 compute-0 nova_compute[253538]: 2025-11-25 08:53:32.075 253542 DEBUG oslo_concurrency.lockutils [req-d524d148-208e-426f-9087-6b37fcc3db0a req-fb04bd9c-375d-4630-ae69-8c93da1c6cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:53:32 compute-0 ovn_controller[152859]: 2025-11-25T08:53:32Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ea:4d:89 10.100.0.6
Nov 25 08:53:32 compute-0 ovn_controller[152859]: 2025-11-25T08:53:32Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ea:4d:89 10.100.0.6
Nov 25 08:53:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:53:32 compute-0 nova_compute[253538]: 2025-11-25 08:53:32.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:33 compute-0 ceph-mon[75015]: pgmap v2182: 321 pgs: 321 active+clean; 215 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 123 KiB/s wr, 80 op/s
Nov 25 08:53:33 compute-0 nova_compute[253538]: 2025-11-25 08:53:33.250 253542 DEBUG nova.compute.manager [req-ca5d8caf-3103-4a1a-9e03-b3bded05e784 req-36f0948c-5208-4a33-8b8b-22a79c8f0fd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:33 compute-0 nova_compute[253538]: 2025-11-25 08:53:33.251 253542 DEBUG oslo_concurrency.lockutils [req-ca5d8caf-3103-4a1a-9e03-b3bded05e784 req-36f0948c-5208-4a33-8b8b-22a79c8f0fd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:33 compute-0 nova_compute[253538]: 2025-11-25 08:53:33.251 253542 DEBUG oslo_concurrency.lockutils [req-ca5d8caf-3103-4a1a-9e03-b3bded05e784 req-36f0948c-5208-4a33-8b8b-22a79c8f0fd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:33 compute-0 nova_compute[253538]: 2025-11-25 08:53:33.251 253542 DEBUG oslo_concurrency.lockutils [req-ca5d8caf-3103-4a1a-9e03-b3bded05e784 req-36f0948c-5208-4a33-8b8b-22a79c8f0fd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:33 compute-0 nova_compute[253538]: 2025-11-25 08:53:33.252 253542 DEBUG nova.compute.manager [req-ca5d8caf-3103-4a1a-9e03-b3bded05e784 req-36f0948c-5208-4a33-8b8b-22a79c8f0fd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] No waiting events found dispatching network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:53:33 compute-0 nova_compute[253538]: 2025-11-25 08:53:33.252 253542 WARNING nova.compute.manager [req-ca5d8caf-3103-4a1a-9e03-b3bded05e784 req-36f0948c-5208-4a33-8b8b-22a79c8f0fd8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received unexpected event network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b for instance with vm_state active and task_state None.
Nov 25 08:53:33 compute-0 ovn_controller[152859]: 2025-11-25T08:53:33Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:c5:52 10.100.0.26
Nov 25 08:53:33 compute-0 ovn_controller[152859]: 2025-11-25T08:53:33Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:c5:52 10.100.0.26
Nov 25 08:53:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 227 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 808 KiB/s wr, 51 op/s
Nov 25 08:53:34 compute-0 ceph-mon[75015]: pgmap v2183: 321 pgs: 321 active+clean; 227 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 808 KiB/s wr, 51 op/s
Nov 25 08:53:34 compute-0 nova_compute[253538]: 2025-11-25 08:53:34.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:53:35 compute-0 nova_compute[253538]: 2025-11-25 08:53:35.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 232 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 849 KiB/s rd, 1.5 MiB/s wr, 59 op/s
Nov 25 08:53:36 compute-0 nova_compute[253538]: 2025-11-25 08:53:36.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:53:36 compute-0 nova_compute[253538]: 2025-11-25 08:53:36.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:53:36 compute-0 nova_compute[253538]: 2025-11-25 08:53:36.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:53:36 compute-0 podman[374149]: 2025-11-25 08:53:36.811246661 +0000 UTC m=+0.063343575 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 08:53:36 compute-0 ceph-mon[75015]: pgmap v2184: 321 pgs: 321 active+clean; 232 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 849 KiB/s rd, 1.5 MiB/s wr, 59 op/s
Nov 25 08:53:36 compute-0 nova_compute[253538]: 2025-11-25 08:53:36.969 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:53:36 compute-0 nova_compute[253538]: 2025-11-25 08:53:36.970 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:53:36 compute-0 nova_compute[253538]: 2025-11-25 08:53:36.970 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:53:36 compute-0 nova_compute[253538]: 2025-11-25 08:53:36.970 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 76611b0b-db06-4903-a22a-59b23a1e0d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:53:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:53:37 compute-0 nova_compute[253538]: 2025-11-25 08:53:37.797 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 244 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 08:53:38 compute-0 podman[374168]: 2025-11-25 08:53:38.812130202 +0000 UTC m=+0.062363188 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:53:38 compute-0 ceph-mon[75015]: pgmap v2185: 321 pgs: 321 active+clean; 244 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 08:53:39 compute-0 nova_compute[253538]: 2025-11-25 08:53:39.385 253542 INFO nova.compute.manager [None req-cc4d202c-01ab-4e8b-a455-6903f57f3cc0 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Get console output
Nov 25 08:53:39 compute-0 nova_compute[253538]: 2025-11-25 08:53:39.390 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:53:39 compute-0 nova_compute[253538]: 2025-11-25 08:53:39.637 253542 DEBUG oslo_concurrency.lockutils [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:39 compute-0 nova_compute[253538]: 2025-11-25 08:53:39.638 253542 DEBUG oslo_concurrency.lockutils [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:39 compute-0 nova_compute[253538]: 2025-11-25 08:53:39.639 253542 DEBUG nova.compute.manager [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:39 compute-0 nova_compute[253538]: 2025-11-25 08:53:39.644 253542 DEBUG nova.compute.manager [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 08:53:39 compute-0 nova_compute[253538]: 2025-11-25 08:53:39.645 253542 DEBUG nova.objects.instance [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'flavor' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:53:39 compute-0 nova_compute[253538]: 2025-11-25 08:53:39.662 253542 DEBUG nova.virt.libvirt.driver [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 08:53:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 246 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 08:53:40 compute-0 nova_compute[253538]: 2025-11-25 08:53:40.407 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:40 compute-0 ceph-mon[75015]: pgmap v2186: 321 pgs: 321 active+clean; 246 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 08:53:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:41.077 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:41.077 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:41.078 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 246 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Nov 25 08:53:41 compute-0 nova_compute[253538]: 2025-11-25 08:53:41.819 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:53:41 compute-0 nova_compute[253538]: 2025-11-25 08:53:41.856 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:53:41 compute-0 nova_compute[253538]: 2025-11-25 08:53:41.856 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:53:41 compute-0 nova_compute[253538]: 2025-11-25 08:53:41.857 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:53:41 compute-0 nova_compute[253538]: 2025-11-25 08:53:41.857 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:53:41 compute-0 nova_compute[253538]: 2025-11-25 08:53:41.858 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:53:41 compute-0 kernel: tape0eb0246-98 (unregistering): left promiscuous mode
Nov 25 08:53:41 compute-0 NetworkManager[48915]: <info>  [1764060821.9766] device (tape0eb0246-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:53:41 compute-0 nova_compute[253538]: 2025-11-25 08:53:41.986 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:41 compute-0 ovn_controller[152859]: 2025-11-25T08:53:41Z|01156|binding|INFO|Releasing lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 from this chassis (sb_readonly=0)
Nov 25 08:53:41 compute-0 ovn_controller[152859]: 2025-11-25T08:53:41Z|01157|binding|INFO|Setting lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 down in Southbound
Nov 25 08:53:41 compute-0 ovn_controller[152859]: 2025-11-25T08:53:41Z|01158|binding|INFO|Removing iface tape0eb0246-98 ovn-installed in OVS
Nov 25 08:53:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:41.998 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:4d:89 10.100.0.6'], port_security=['fa:16:3e:ea:4d:89 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2da7049d-715e-4209-8c17-dda96ff6a192', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '622b72dc-477c-41ae-9a43-d0ddc5df890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbdce223-39b2-47e0-8888-53244c8f0c36, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=e0eb0246-9869-4c10-b45b-bd0799ae0c95) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:53:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.000 162739 INFO neutron.agent.ovn.metadata.agent [-] Port e0eb0246-9869-4c10-b45b-bd0799ae0c95 in datapath ab581a21-5712-4b8e-87f9-b943349fbfcb unbound from our chassis
Nov 25 08:53:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.003 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ab581a21-5712-4b8e-87f9-b943349fbfcb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:53:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.004 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a966bcb8-573b-4873-b3ca-3dab46b29fc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.005 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb namespace which is not needed anymore
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.019 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:42 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000073.scope: Deactivated successfully.
Nov 25 08:53:42 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000073.scope: Consumed 14.361s CPU time.
Nov 25 08:53:42 compute-0 systemd-machined[215790]: Machine qemu-143-instance-00000073 terminated.
Nov 25 08:53:42 compute-0 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374036]: [NOTICE]   (374040) : haproxy version is 2.8.14-c23fe91
Nov 25 08:53:42 compute-0 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374036]: [NOTICE]   (374040) : path to executable is /usr/sbin/haproxy
Nov 25 08:53:42 compute-0 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374036]: [WARNING]  (374040) : Exiting Master process...
Nov 25 08:53:42 compute-0 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374036]: [WARNING]  (374040) : Exiting Master process...
Nov 25 08:53:42 compute-0 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374036]: [ALERT]    (374040) : Current worker (374042) exited with code 143 (Terminated)
Nov 25 08:53:42 compute-0 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374036]: [WARNING]  (374040) : All workers exited. Exiting... (0)
Nov 25 08:53:42 compute-0 systemd[1]: libpod-4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd.scope: Deactivated successfully.
Nov 25 08:53:42 compute-0 podman[374212]: 2025-11-25 08:53:42.224924311 +0000 UTC m=+0.072469805 container died 4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 08:53:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd-userdata-shm.mount: Deactivated successfully.
Nov 25 08:53:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d53e3214adfe5885526c3929acf65ff9e3b10430b7f7fcf526d2c5c055a47429-merged.mount: Deactivated successfully.
Nov 25 08:53:42 compute-0 podman[374212]: 2025-11-25 08:53:42.282132297 +0000 UTC m=+0.129677761 container cleanup 4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:53:42 compute-0 systemd[1]: libpod-conmon-4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd.scope: Deactivated successfully.
Nov 25 08:53:42 compute-0 podman[374250]: 2025-11-25 08:53:42.372790438 +0000 UTC m=+0.055749807 container remove 4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 08:53:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.379 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[45a8831d-8f03-42f7-b65e-4cd416ac924c]: (4, ('Tue Nov 25 08:53:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb (4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd)\n4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd\nTue Nov 25 08:53:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb (4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd)\n4589d8b885927712e3e4ff7370f0a7b7fd524b8512fabdebcd406b222e11eefd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.381 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f9db0a86-5b34-4261-934f-c4e0d3d6d8ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.382 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab581a21-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.384 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:42 compute-0 kernel: tapab581a21-50: left promiscuous mode
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.410 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.413 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e203bbd7-abad-4a8c-8dde-133fb9e789da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.431 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac45020-4aca-40e0-ab4e-13b5707674dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.434 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8a887f43-8ffb-4329-98a3-1423374b08bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.450 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0bf427cf-ee95-4028-848d-215fc19f7f15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615431, 'reachable_time': 18459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374267, 'error': None, 'target': 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:42 compute-0 systemd[1]: run-netns-ovnmeta\x2dab581a21\x2d5712\x2d4b8e\x2d87f9\x2db943349fbfcb.mount: Deactivated successfully.
Nov 25 08:53:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.453 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:53:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.454 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[014b3946-7bfb-46dc-8a67-ced0cc5b367b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.507 253542 DEBUG nova.compute.manager [req-deba0c59-f095-415b-90e3-cb3ee6a4c60a req-ef700aa4-b722-431a-a8d9-dab1bc1b55fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-unplugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.507 253542 DEBUG oslo_concurrency.lockutils [req-deba0c59-f095-415b-90e3-cb3ee6a4c60a req-ef700aa4-b722-431a-a8d9-dab1bc1b55fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.508 253542 DEBUG oslo_concurrency.lockutils [req-deba0c59-f095-415b-90e3-cb3ee6a4c60a req-ef700aa4-b722-431a-a8d9-dab1bc1b55fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.508 253542 DEBUG oslo_concurrency.lockutils [req-deba0c59-f095-415b-90e3-cb3ee6a4c60a req-ef700aa4-b722-431a-a8d9-dab1bc1b55fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.508 253542 DEBUG nova.compute.manager [req-deba0c59-f095-415b-90e3-cb3ee6a4c60a req-ef700aa4-b722-431a-a8d9-dab1bc1b55fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] No waiting events found dispatching network-vif-unplugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.509 253542 WARNING nova.compute.manager [req-deba0c59-f095-415b-90e3-cb3ee6a4c60a req-ef700aa4-b722-431a-a8d9-dab1bc1b55fa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received unexpected event network-vif-unplugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 for instance with vm_state active and task_state powering-off.
Nov 25 08:53:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:53:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.538 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:42.539 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.678 253542 INFO nova.virt.libvirt.driver [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance shutdown successfully after 3 seconds.
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.685 253542 INFO nova.virt.libvirt.driver [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance destroyed successfully.
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.685 253542 DEBUG nova.objects.instance [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.694 253542 DEBUG nova.compute.manager [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.771 253542 DEBUG oslo_concurrency.lockutils [None req-d42ff47c-450b-4b3d-9bee-c90199d3b13c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:42 compute-0 nova_compute[253538]: 2025-11-25 08:53:42.799 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:42 compute-0 ceph-mon[75015]: pgmap v2187: 321 pgs: 321 active+clean; 246 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Nov 25 08:53:43 compute-0 nova_compute[253538]: 2025-11-25 08:53:43.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:53:43 compute-0 nova_compute[253538]: 2025-11-25 08:53:43.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:53:43 compute-0 nova_compute[253538]: 2025-11-25 08:53:43.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:53:43 compute-0 nova_compute[253538]: 2025-11-25 08:53:43.584 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:43 compute-0 nova_compute[253538]: 2025-11-25 08:53:43.585 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:43 compute-0 nova_compute[253538]: 2025-11-25 08:53:43.586 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:43 compute-0 nova_compute[253538]: 2025-11-25 08:53:43.586 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:53:43 compute-0 nova_compute[253538]: 2025-11-25 08:53:43.587 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 246 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 08:53:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:53:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2971015882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.104 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.218 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.219 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.221 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.221 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.228 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.229 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.236 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:53:44 compute-0 podman[374290]: 2025-11-25 08:53:44.306136699 +0000 UTC m=+0.138133642 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.333 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.334 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.342 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.343 253542 INFO nova.compute.claims [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.485 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.567 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.569 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3646MB free_disk=59.89712142944336GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.569 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.596 253542 DEBUG nova.compute.manager [req-7f1dbc04-98cc-4c81-a9fe-832dfd006d4a req-ae922ddb-3ec1-4edb-8ba4-73d54a45a248 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.596 253542 DEBUG oslo_concurrency.lockutils [req-7f1dbc04-98cc-4c81-a9fe-832dfd006d4a req-ae922ddb-3ec1-4edb-8ba4-73d54a45a248 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.596 253542 DEBUG oslo_concurrency.lockutils [req-7f1dbc04-98cc-4c81-a9fe-832dfd006d4a req-ae922ddb-3ec1-4edb-8ba4-73d54a45a248 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.597 253542 DEBUG oslo_concurrency.lockutils [req-7f1dbc04-98cc-4c81-a9fe-832dfd006d4a req-ae922ddb-3ec1-4edb-8ba4-73d54a45a248 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.597 253542 DEBUG nova.compute.manager [req-7f1dbc04-98cc-4c81-a9fe-832dfd006d4a req-ae922ddb-3ec1-4edb-8ba4-73d54a45a248 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] No waiting events found dispatching network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.597 253542 WARNING nova.compute.manager [req-7f1dbc04-98cc-4c81-a9fe-832dfd006d4a req-ae922ddb-3ec1-4edb-8ba4-73d54a45a248 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received unexpected event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 for instance with vm_state stopped and task_state None.
Nov 25 08:53:44 compute-0 ceph-mon[75015]: pgmap v2188: 321 pgs: 321 active+clean; 246 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 08:53:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2971015882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:53:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:53:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1128690214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.969 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.977 253542 DEBUG nova.compute.provider_tree [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:53:44 compute-0 nova_compute[253538]: 2025-11-25 08:53:44.997 253542 DEBUG nova.scheduler.client.report [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.019 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.020 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.026 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.102 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.103 253542 DEBUG nova.network.neutron [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.126 253542 INFO nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.130 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 76611b0b-db06-4903-a22a-59b23a1e0d48 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.131 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 2da7049d-715e-4209-8c17-dda96ff6a192 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.131 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.131 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.131 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.164 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.200 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.312 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.314 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.315 253542 INFO nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Creating image(s)
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.346 253542 DEBUG nova.storage.rbd_utils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.380 253542 DEBUG nova.storage.rbd_utils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.410 253542 DEBUG nova.storage.rbd_utils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.416 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.466 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.518 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.520 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.521 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.521 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:45.541 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.551 253542 DEBUG nova.storage.rbd_utils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.556 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:53:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/163139968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.664 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.671 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.686 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.712 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.713 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 246 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 1.4 MiB/s wr, 51 op/s
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.880 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:45 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1128690214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:53:45 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/163139968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.958 253542 DEBUG nova.storage.rbd_utils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:53:45 compute-0 nova_compute[253538]: 2025-11-25 08:53:45.994 253542 DEBUG nova.policy [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:53:46 compute-0 nova_compute[253538]: 2025-11-25 08:53:46.063 253542 DEBUG nova.objects.instance [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:53:46 compute-0 nova_compute[253538]: 2025-11-25 08:53:46.076 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:53:46 compute-0 nova_compute[253538]: 2025-11-25 08:53:46.077 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Ensure instance console log exists: /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:53:46 compute-0 nova_compute[253538]: 2025-11-25 08:53:46.077 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:46 compute-0 nova_compute[253538]: 2025-11-25 08:53:46.078 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:46 compute-0 nova_compute[253538]: 2025-11-25 08:53:46.078 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:46 compute-0 nova_compute[253538]: 2025-11-25 08:53:46.228 253542 INFO nova.compute.manager [None req-80114ece-5bfa-4ba3-981a-b469714094ff 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Get console output
Nov 25 08:53:46 compute-0 nova_compute[253538]: 2025-11-25 08:53:46.637 253542 DEBUG nova.objects.instance [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'flavor' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:53:46 compute-0 nova_compute[253538]: 2025-11-25 08:53:46.660 253542 DEBUG oslo_concurrency.lockutils [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:53:46 compute-0 nova_compute[253538]: 2025-11-25 08:53:46.660 253542 DEBUG oslo_concurrency.lockutils [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquired lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:53:46 compute-0 nova_compute[253538]: 2025-11-25 08:53:46.661 253542 DEBUG nova.network.neutron [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:53:46 compute-0 nova_compute[253538]: 2025-11-25 08:53:46.662 253542 DEBUG nova.objects.instance [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'info_cache' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:53:46 compute-0 ceph-mon[75015]: pgmap v2189: 321 pgs: 321 active+clean; 246 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 1.4 MiB/s wr, 51 op/s
Nov 25 08:53:47 compute-0 nova_compute[253538]: 2025-11-25 08:53:47.176 253542 DEBUG nova.network.neutron [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Successfully created port: 0797e76b-3f15-4c7e-ae0d-0f4813d59967 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:53:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:53:47 compute-0 nova_compute[253538]: 2025-11-25 08:53:47.802 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 258 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 1.3 MiB/s wr, 31 op/s
Nov 25 08:53:48 compute-0 ceph-mon[75015]: pgmap v2190: 321 pgs: 321 active+clean; 258 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 1.3 MiB/s wr, 31 op/s
Nov 25 08:53:48 compute-0 nova_compute[253538]: 2025-11-25 08:53:48.919 253542 DEBUG nova.network.neutron [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updating instance_info_cache with network_info: [{"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:53:48 compute-0 nova_compute[253538]: 2025-11-25 08:53:48.939 253542 DEBUG oslo_concurrency.lockutils [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Releasing lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:53:48 compute-0 nova_compute[253538]: 2025-11-25 08:53:48.962 253542 DEBUG nova.network.neutron [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Successfully updated port: 0797e76b-3f15-4c7e-ae0d-0f4813d59967 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:53:48 compute-0 nova_compute[253538]: 2025-11-25 08:53:48.970 253542 INFO nova.virt.libvirt.driver [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance destroyed successfully.
Nov 25 08:53:48 compute-0 nova_compute[253538]: 2025-11-25 08:53:48.971 253542 DEBUG nova.objects.instance [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:53:48 compute-0 nova_compute[253538]: 2025-11-25 08:53:48.985 253542 DEBUG nova.objects.instance [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'resources' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:53:48 compute-0 nova_compute[253538]: 2025-11-25 08:53:48.991 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-5e14b791-8860-44a3-87e0-5c7fcc1dcf12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:53:48 compute-0 nova_compute[253538]: 2025-11-25 08:53:48.991 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-5e14b791-8860-44a3-87e0-5c7fcc1dcf12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:53:48 compute-0 nova_compute[253538]: 2025-11-25 08:53:48.991 253542 DEBUG nova.network.neutron [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.006 253542 DEBUG nova.virt.libvirt.vif [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1007468219',display_name='tempest-TestNetworkAdvancedServerOps-server-1007468219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1007468219',id=115,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU68q8J+sEr75b4404lN3PmtYldQRjuuyP6SvWU7eV24JrfwADcpjaDPQik5pfkJbRZMvyTiPsHIXeXsaypDt7sTW7yrvZEo+4ZU8LLmZuoq5gv1NHdtvyWg13jQU9yUw==',key_name='tempest-TestNetworkAdvancedServerOps-1510547977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-020ctw8m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:42Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=2da7049d-715e-4209-8c17-dda96ff6a192,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.006 253542 DEBUG nova.network.os_vif_util [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.007 253542 DEBUG nova.network.os_vif_util [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.008 253542 DEBUG os_vif [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.011 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.011 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0eb0246-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.018 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.021 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.026 253542 INFO os_vif [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98')
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.039 253542 DEBUG nova.virt.libvirt.driver [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Start _get_guest_xml network_info=[{"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.043 253542 DEBUG nova.compute.manager [req-5633a7d5-6a60-4453-8d8c-862bb082b7dd req-33bc177e-60d4-46d7-aa5d-0357806ccfe1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-changed-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.044 253542 DEBUG nova.compute.manager [req-5633a7d5-6a60-4453-8d8c-862bb082b7dd req-33bc177e-60d4-46d7-aa5d-0357806ccfe1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Refreshing instance network info cache due to event network-changed-0797e76b-3f15-4c7e-ae0d-0f4813d59967. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.044 253542 DEBUG oslo_concurrency.lockutils [req-5633a7d5-6a60-4453-8d8c-862bb082b7dd req-33bc177e-60d4-46d7-aa5d-0357806ccfe1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5e14b791-8860-44a3-87e0-5c7fcc1dcf12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.048 253542 WARNING nova.virt.libvirt.driver [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.052 253542 DEBUG nova.virt.libvirt.host [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.053 253542 DEBUG nova.virt.libvirt.host [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.056 253542 DEBUG nova.virt.libvirt.host [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.057 253542 DEBUG nova.virt.libvirt.host [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.058 253542 DEBUG nova.virt.libvirt.driver [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.058 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.059 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.060 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.060 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.061 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.061 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.061 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.062 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.062 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.062 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.063 253542 DEBUG nova.virt.hardware [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.063 253542 DEBUG nova.objects.instance [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.077 253542 DEBUG oslo_concurrency.processutils [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.204 253542 DEBUG nova.network.neutron [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:53:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:53:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2687222348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.542 253542 DEBUG oslo_concurrency.processutils [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:49 compute-0 nova_compute[253538]: 2025-11-25 08:53:49.575 253542 DEBUG oslo_concurrency.processutils [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 08:53:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2687222348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:53:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:53:50 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2791381028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.052 253542 DEBUG oslo_concurrency.processutils [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.054 253542 DEBUG nova.virt.libvirt.vif [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1007468219',display_name='tempest-TestNetworkAdvancedServerOps-server-1007468219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1007468219',id=115,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU68q8J+sEr75b4404lN3PmtYldQRjuuyP6SvWU7eV24JrfwADcpjaDPQik5pfkJbRZMvyTiPsHIXeXsaypDt7sTW7yrvZEo+4ZU8LLmZuoq5gv1NHdtvyWg13jQU9yUw==',key_name='tempest-TestNetworkAdvancedServerOps-1510547977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-020ctw8m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:42Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=2da7049d-715e-4209-8c17-dda96ff6a192,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.054 253542 DEBUG nova.network.os_vif_util [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.055 253542 DEBUG nova.network.os_vif_util [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.056 253542 DEBUG nova.objects.instance [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.072 253542 DEBUG nova.virt.libvirt.driver [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:53:50 compute-0 nova_compute[253538]:   <uuid>2da7049d-715e-4209-8c17-dda96ff6a192</uuid>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   <name>instance-00000073</name>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1007468219</nova:name>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:53:49</nova:creationTime>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:53:50 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:53:50 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:53:50 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:53:50 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:53:50 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:53:50 compute-0 nova_compute[253538]:         <nova:user uuid="009378dc36154271ba5b4590ce67ddde">tempest-TestNetworkAdvancedServerOps-1132090577-project-member</nova:user>
Nov 25 08:53:50 compute-0 nova_compute[253538]:         <nova:project uuid="7dcaf3c96bfc4db3a41291debd385c67">tempest-TestNetworkAdvancedServerOps-1132090577</nova:project>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:53:50 compute-0 nova_compute[253538]:         <nova:port uuid="e0eb0246-9869-4c10-b45b-bd0799ae0c95">
Nov 25 08:53:50 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <system>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <entry name="serial">2da7049d-715e-4209-8c17-dda96ff6a192</entry>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <entry name="uuid">2da7049d-715e-4209-8c17-dda96ff6a192</entry>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     </system>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   <os>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   </os>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   <features>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   </features>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/2da7049d-715e-4209-8c17-dda96ff6a192_disk">
Nov 25 08:53:50 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       </source>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:53:50 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/2da7049d-715e-4209-8c17-dda96ff6a192_disk.config">
Nov 25 08:53:50 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       </source>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:53:50 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:ea:4d:89"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <target dev="tape0eb0246-98"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192/console.log" append="off"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <video>
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     </video>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <input type="keyboard" bus="usb"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:53:50 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:53:50 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:53:50 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:53:50 compute-0 nova_compute[253538]: </domain>
Nov 25 08:53:50 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.075 253542 DEBUG nova.virt.libvirt.driver [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.075 253542 DEBUG nova.virt.libvirt.driver [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.076 253542 DEBUG nova.virt.libvirt.vif [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1007468219',display_name='tempest-TestNetworkAdvancedServerOps-server-1007468219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1007468219',id=115,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU68q8J+sEr75b4404lN3PmtYldQRjuuyP6SvWU7eV24JrfwADcpjaDPQik5pfkJbRZMvyTiPsHIXeXsaypDt7sTW7yrvZEo+4ZU8LLmZuoq5gv1NHdtvyWg13jQU9yUw==',key_name='tempest-TestNetworkAdvancedServerOps-1510547977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-020ctw8m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:42Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=2da7049d-715e-4209-8c17-dda96ff6a192,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.077 253542 DEBUG nova.network.os_vif_util [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.078 253542 DEBUG nova.network.os_vif_util [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.078 253542 DEBUG os_vif [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.079 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.080 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.081 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.085 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.085 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0eb0246-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.086 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0eb0246-98, col_values=(('external_ids', {'iface-id': 'e0eb0246-9869-4c10-b45b-bd0799ae0c95', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:4d:89', 'vm-uuid': '2da7049d-715e-4209-8c17-dda96ff6a192'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.087 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:50 compute-0 NetworkManager[48915]: <info>  [1764060830.0885] manager: (tape0eb0246-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/480)
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.091 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.094 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.095 253542 INFO os_vif [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98')
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.140 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2 2001:db8::f816:3eff:fec9:408d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 2001:db8::f816:3eff:fec9:408d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.141 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 updated
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.143 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.144 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d50bcd8c-a813-4d0d-8f7e-d38a1dfe5eb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 kernel: tape0eb0246-98: entered promiscuous mode
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.182 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:50 compute-0 NetworkManager[48915]: <info>  [1764060830.1833] manager: (tape0eb0246-98): new Tun device (/org/freedesktop/NetworkManager/Devices/481)
Nov 25 08:53:50 compute-0 ovn_controller[152859]: 2025-11-25T08:53:50Z|01159|binding|INFO|Claiming lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 for this chassis.
Nov 25 08:53:50 compute-0 ovn_controller[152859]: 2025-11-25T08:53:50Z|01160|binding|INFO|e0eb0246-9869-4c10-b45b-bd0799ae0c95: Claiming fa:16:3e:ea:4d:89 10.100.0.6
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.192 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:4d:89 10.100.0.6'], port_security=['fa:16:3e:ea:4d:89 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2da7049d-715e-4209-8c17-dda96ff6a192', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '5', 'neutron:security_group_ids': '622b72dc-477c-41ae-9a43-d0ddc5df890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbdce223-39b2-47e0-8888-53244c8f0c36, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=e0eb0246-9869-4c10-b45b-bd0799ae0c95) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.193 162739 INFO neutron.agent.ovn.metadata.agent [-] Port e0eb0246-9869-4c10-b45b-bd0799ae0c95 in datapath ab581a21-5712-4b8e-87f9-b943349fbfcb bound to our chassis
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.194 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ab581a21-5712-4b8e-87f9-b943349fbfcb
Nov 25 08:53:50 compute-0 ovn_controller[152859]: 2025-11-25T08:53:50Z|01161|binding|INFO|Setting lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 ovn-installed in OVS
Nov 25 08:53:50 compute-0 ovn_controller[152859]: 2025-11-25T08:53:50Z|01162|binding|INFO|Setting lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 up in Southbound
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.208 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.211 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[070b9ec9-b205-4cf9-a371-93aacea29a95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.226 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapab581a21-51 in ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.228 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapab581a21-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.228 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[67cfda9f-b2a4-4769-a679-2de6544aa5c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.230 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.230 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[538994f5-3a7c-4c3e-9613-1e59a08c4a8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 systemd-machined[215790]: New machine qemu-144-instance-00000073.
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.246 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[937ba49f-b93a-4ff5-b6db-a5c19f2ecec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 systemd[1]: Started Virtual Machine qemu-144-instance-00000073.
Nov 25 08:53:50 compute-0 systemd-udevd[374607]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:53:50 compute-0 NetworkManager[48915]: <info>  [1764060830.2775] device (tape0eb0246-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:53:50 compute-0 NetworkManager[48915]: <info>  [1764060830.2787] device (tape0eb0246-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.278 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[040d138c-6e06-4ed9-9374-4a77edd5b06a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.313 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ad730a4b-6803-409f-b3ad-de5b76073976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 NetworkManager[48915]: <info>  [1764060830.3203] manager: (tapab581a21-50): new Veth device (/org/freedesktop/NetworkManager/Devices/482)
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.319 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[86c3aa34-1ad9-482e-b212-b360019fa8f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 systemd-udevd[374610]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.360 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c28977bc-401f-4130-b66e-7fca89d636a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.364 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8410c992-1eef-4fbf-b301-a1978d3b474d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 NetworkManager[48915]: <info>  [1764060830.3988] device (tapab581a21-50): carrier: link connected
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.406 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[aacb59d2-e03f-4bdf-8aea-03e2e9bb8c01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.428 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f230d77-4c37-46b3-bb9e-cf71bd7d772c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab581a21-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:79:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618499, 'reachable_time': 26439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374637, 'error': None, 'target': 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.446 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[09771235-a8cc-4290-83e4-ef21b52a863b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:797a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618499, 'tstamp': 618499}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374638, 'error': None, 'target': 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.470 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[01e239f5-58bf-4c5a-bf9a-580eef201909]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab581a21-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:79:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618499, 'reachable_time': 26439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374639, 'error': None, 'target': 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.484 253542 DEBUG nova.compute.manager [req-0834014d-4f27-446a-a233-4b6649626c63 req-2dec6cc7-103d-4237-8dae-a94c6d544c98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.485 253542 DEBUG oslo_concurrency.lockutils [req-0834014d-4f27-446a-a233-4b6649626c63 req-2dec6cc7-103d-4237-8dae-a94c6d544c98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.486 253542 DEBUG oslo_concurrency.lockutils [req-0834014d-4f27-446a-a233-4b6649626c63 req-2dec6cc7-103d-4237-8dae-a94c6d544c98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.486 253542 DEBUG oslo_concurrency.lockutils [req-0834014d-4f27-446a-a233-4b6649626c63 req-2dec6cc7-103d-4237-8dae-a94c6d544c98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.487 253542 DEBUG nova.compute.manager [req-0834014d-4f27-446a-a233-4b6649626c63 req-2dec6cc7-103d-4237-8dae-a94c6d544c98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] No waiting events found dispatching network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.488 253542 WARNING nova.compute.manager [req-0834014d-4f27-446a-a233-4b6649626c63 req-2dec6cc7-103d-4237-8dae-a94c6d544c98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received unexpected event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 for instance with vm_state stopped and task_state powering-on.
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.515 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a66a8a0-7fb5-4a3a-a991-72bbb173f465]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.575 253542 DEBUG nova.network.neutron [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Updating instance_info_cache with network_info: [{"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.609 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56747a97-e240-48c2-ae84-652efd43dd26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.611 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab581a21-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.611 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.612 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab581a21-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.614 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:50 compute-0 NetworkManager[48915]: <info>  [1764060830.6152] manager: (tapab581a21-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/483)
Nov 25 08:53:50 compute-0 kernel: tapab581a21-50: entered promiscuous mode
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.618 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.618 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapab581a21-50, col_values=(('external_ids', {'iface-id': 'b956a451-af5c-4f4e-b3b8-704d71686765'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:50 compute-0 ovn_controller[152859]: 2025-11-25T08:53:50Z|01163|binding|INFO|Releasing lport b956a451-af5c-4f4e-b3b8-704d71686765 from this chassis (sb_readonly=0)
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.648 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.649 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ab581a21-5712-4b8e-87f9-b943349fbfcb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ab581a21-5712-4b8e-87f9-b943349fbfcb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.650 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e17c153a-5fdb-4de5-8368-66f7b45c57ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.651 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-ab581a21-5712-4b8e-87f9-b943349fbfcb
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/ab581a21-5712-4b8e-87f9-b943349fbfcb.pid.haproxy
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID ab581a21-5712-4b8e-87f9-b943349fbfcb
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:53:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:50.651 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'env', 'PROCESS_TAG=haproxy-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ab581a21-5712-4b8e-87f9-b943349fbfcb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.700 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-5e14b791-8860-44a3-87e0-5c7fcc1dcf12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.701 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Instance network_info: |[{"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.701 253542 DEBUG oslo_concurrency.lockutils [req-5633a7d5-6a60-4453-8d8c-862bb082b7dd req-33bc177e-60d4-46d7-aa5d-0357806ccfe1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5e14b791-8860-44a3-87e0-5c7fcc1dcf12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.702 253542 DEBUG nova.network.neutron [req-5633a7d5-6a60-4453-8d8c-862bb082b7dd req-33bc177e-60d4-46d7-aa5d-0357806ccfe1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Refreshing network info cache for port 0797e76b-3f15-4c7e-ae0d-0f4813d59967 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.705 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Start _get_guest_xml network_info=[{"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.713 253542 WARNING nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.723 253542 DEBUG nova.virt.libvirt.host [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.724 253542 DEBUG nova.virt.libvirt.host [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.728 253542 DEBUG nova.virt.libvirt.host [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.729 253542 DEBUG nova.virt.libvirt.host [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.730 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.730 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.731 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.731 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.731 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.732 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.732 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.732 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.732 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.733 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.733 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.733 253542 DEBUG nova.virt.hardware [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.737 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.833 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 2da7049d-715e-4209-8c17-dda96ff6a192 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.835 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060830.8317401, 2da7049d-715e-4209-8c17-dda96ff6a192 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.836 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] VM Resumed (Lifecycle Event)
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.841 253542 DEBUG nova.compute.manager [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.848 253542 INFO nova.virt.libvirt.driver [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance rebooted successfully.
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.849 253542 DEBUG nova.compute.manager [None req-516d2bae-345f-4add-b8df-da3179b8a89d 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.857 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.863 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.882 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.882 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060830.8340163, 2da7049d-715e-4209-8c17-dda96ff6a192 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.883 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] VM Started (Lifecycle Event)
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.904 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.907 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:53:50 compute-0 nova_compute[253538]: 2025-11-25 08:53:50.951 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 25 08:53:50 compute-0 ceph-mon[75015]: pgmap v2191: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 08:53:50 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2791381028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:53:51 compute-0 podman[374730]: 2025-11-25 08:53:51.000949509 +0000 UTC m=+0.024023324 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:53:51 compute-0 podman[374730]: 2025-11-25 08:53:51.132296035 +0000 UTC m=+0.155369820 container create ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 08:53:51 compute-0 systemd[1]: Started libpod-conmon-ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f.scope.
Nov 25 08:53:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:53:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:53:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3500140972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:53:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a887c0faf322aa9a07d1834b6a53fb5f904fd4dfee4a01fe03dcd58a1b43320c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.252 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:51 compute-0 podman[374730]: 2025-11-25 08:53:51.267877086 +0000 UTC m=+0.290950901 container init ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:53:51 compute-0 podman[374730]: 2025-11-25 08:53:51.274816314 +0000 UTC m=+0.297890099 container start ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.276 253542 DEBUG nova.storage.rbd_utils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.280 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:51 compute-0 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374743]: [NOTICE]   (374770) : New worker (374773) forked
Nov 25 08:53:51 compute-0 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374743]: [NOTICE]   (374770) : Loading success.
Nov 25 08:53:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:53:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/968671767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.785 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.787 253542 DEBUG nova.virt.libvirt.vif [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-160796765',display_name='tempest-TestNetworkBasicOps-server-160796765',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-160796765',id=116,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA0f+XBfXdSU4e2+02qYGk42nbRwIu1Vshv2fAHcU2M9HY4bsiawDBYsAh0BiTPD2qOg4I+4cye8z+LuwXaU2+YwQ92/nUDN4SrklXs8+Sfqmmth2xZ1VW9badcZ/6ZoHg==',key_name='tempest-TestNetworkBasicOps-278129425',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-8e5h58cs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:53:45Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=5e14b791-8860-44a3-87e0-5c7fcc1dcf12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.787 253542 DEBUG nova.network.os_vif_util [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.789 253542 DEBUG nova.network.os_vif_util [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:4a:e2,bridge_name='br-int',has_traffic_filtering=True,id=0797e76b-3f15-4c7e-ae0d-0f4813d59967,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0797e76b-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.791 253542 DEBUG nova.objects.instance [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.805 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:53:51 compute-0 nova_compute[253538]:   <uuid>5e14b791-8860-44a3-87e0-5c7fcc1dcf12</uuid>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   <name>instance-00000074</name>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkBasicOps-server-160796765</nova:name>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:53:50</nova:creationTime>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:53:51 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:53:51 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:53:51 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:53:51 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:53:51 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:53:51 compute-0 nova_compute[253538]:         <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:53:51 compute-0 nova_compute[253538]:         <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:53:51 compute-0 nova_compute[253538]:         <nova:port uuid="0797e76b-3f15-4c7e-ae0d-0f4813d59967">
Nov 25 08:53:51 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <system>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <entry name="serial">5e14b791-8860-44a3-87e0-5c7fcc1dcf12</entry>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <entry name="uuid">5e14b791-8860-44a3-87e0-5c7fcc1dcf12</entry>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     </system>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   <os>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   </os>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   <features>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   </features>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk">
Nov 25 08:53:51 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       </source>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:53:51 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk.config">
Nov 25 08:53:51 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       </source>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:53:51 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:21:4a:e2"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <target dev="tap0797e76b-3f"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12/console.log" append="off"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <video>
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     </video>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:53:51 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:53:51 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:53:51 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:53:51 compute-0 nova_compute[253538]: </domain>
Nov 25 08:53:51 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.807 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Preparing to wait for external event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:53:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.807 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.807 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.807 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.808 253542 DEBUG nova.virt.libvirt.vif [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-160796765',display_name='tempest-TestNetworkBasicOps-server-160796765',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-160796765',id=116,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA0f+XBfXdSU4e2+02qYGk42nbRwIu1Vshv2fAHcU2M9HY4bsiawDBYsAh0BiTPD2qOg4I+4cye8z+LuwXaU2+YwQ92/nUDN4SrklXs8+Sfqmmth2xZ1VW9badcZ/6ZoHg==',key_name='tempest-TestNetworkBasicOps-278129425',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-8e5h58cs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:53:45Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=5e14b791-8860-44a3-87e0-5c7fcc1dcf12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.808 253542 DEBUG nova.network.os_vif_util [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.809 253542 DEBUG nova.network.os_vif_util [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:4a:e2,bridge_name='br-int',has_traffic_filtering=True,id=0797e76b-3f15-4c7e-ae0d-0f4813d59967,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0797e76b-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.809 253542 DEBUG os_vif [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:4a:e2,bridge_name='br-int',has_traffic_filtering=True,id=0797e76b-3f15-4c7e-ae0d-0f4813d59967,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0797e76b-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.812 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.813 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.813 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.818 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.819 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0797e76b-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.819 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0797e76b-3f, col_values=(('external_ids', {'iface-id': '0797e76b-3f15-4c7e-ae0d-0f4813d59967', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:4a:e2', 'vm-uuid': '5e14b791-8860-44a3-87e0-5c7fcc1dcf12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:51 compute-0 NetworkManager[48915]: <info>  [1764060831.8221] manager: (tap0797e76b-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/484)
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.825 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.829 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.830 253542 INFO os_vif [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:4a:e2,bridge_name='br-int',has_traffic_filtering=True,id=0797e76b-3f15-4c7e-ae0d-0f4813d59967,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0797e76b-3f')
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.879 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.879 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.880 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:21:4a:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.880 253542 INFO nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Using config drive
Nov 25 08:53:51 compute-0 nova_compute[253538]: 2025-11-25 08:53:51.900 253542 DEBUG nova.storage.rbd_utils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:53:52 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3500140972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:53:52 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/968671767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.405 253542 INFO nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Creating config drive at /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12/disk.config
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.414 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_f46sz4b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.459 253542 DEBUG nova.network.neutron [req-5633a7d5-6a60-4453-8d8c-862bb082b7dd req-33bc177e-60d4-46d7-aa5d-0357806ccfe1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Updated VIF entry in instance network info cache for port 0797e76b-3f15-4c7e-ae0d-0f4813d59967. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.460 253542 DEBUG nova.network.neutron [req-5633a7d5-6a60-4453-8d8c-862bb082b7dd req-33bc177e-60d4-46d7-aa5d-0357806ccfe1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Updating instance_info_cache with network_info: [{"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.479 253542 DEBUG oslo_concurrency.lockutils [req-5633a7d5-6a60-4453-8d8c-862bb082b7dd req-33bc177e-60d4-46d7-aa5d-0357806ccfe1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5e14b791-8860-44a3-87e0-5c7fcc1dcf12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:53:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.550 253542 DEBUG nova.compute.manager [req-0b814fe1-367b-4e0b-a931-141258a2ede2 req-9527ee40-c442-42d4-b885-a041b47655b8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.551 253542 DEBUG oslo_concurrency.lockutils [req-0b814fe1-367b-4e0b-a931-141258a2ede2 req-9527ee40-c442-42d4-b885-a041b47655b8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.551 253542 DEBUG oslo_concurrency.lockutils [req-0b814fe1-367b-4e0b-a931-141258a2ede2 req-9527ee40-c442-42d4-b885-a041b47655b8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.552 253542 DEBUG oslo_concurrency.lockutils [req-0b814fe1-367b-4e0b-a931-141258a2ede2 req-9527ee40-c442-42d4-b885-a041b47655b8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.552 253542 DEBUG nova.compute.manager [req-0b814fe1-367b-4e0b-a931-141258a2ede2 req-9527ee40-c442-42d4-b885-a041b47655b8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] No waiting events found dispatching network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.552 253542 WARNING nova.compute.manager [req-0b814fe1-367b-4e0b-a931-141258a2ede2 req-9527ee40-c442-42d4-b885-a041b47655b8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received unexpected event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 for instance with vm_state active and task_state None.
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.563 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_f46sz4b" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.587 253542 DEBUG nova.storage.rbd_utils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.590 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12/disk.config 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.806 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.994 253542 DEBUG oslo_concurrency.processutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12/disk.config 5e14b791-8860-44a3-87e0-5c7fcc1dcf12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:53:52 compute-0 nova_compute[253538]: 2025-11-25 08:53:52.995 253542 INFO nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Deleting local config drive /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12/disk.config because it was imported into RBD.
Nov 25 08:53:53 compute-0 ceph-mon[75015]: pgmap v2192: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Nov 25 08:53:53 compute-0 NetworkManager[48915]: <info>  [1764060833.0681] manager: (tap0797e76b-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/485)
Nov 25 08:53:53 compute-0 kernel: tap0797e76b-3f: entered promiscuous mode
Nov 25 08:53:53 compute-0 ovn_controller[152859]: 2025-11-25T08:53:53Z|01164|binding|INFO|Claiming lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 for this chassis.
Nov 25 08:53:53 compute-0 ovn_controller[152859]: 2025-11-25T08:53:53Z|01165|binding|INFO|0797e76b-3f15-4c7e-ae0d-0f4813d59967: Claiming fa:16:3e:21:4a:e2 10.100.0.30
Nov 25 08:53:53 compute-0 nova_compute[253538]: 2025-11-25 08:53:53.074 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.083 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:4a:e2 10.100.0.30'], port_security=['fa:16:3e:21:4a:e2 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '5e14b791-8860-44a3-87e0-5c7fcc1dcf12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': '96035e03-5ed4-4905-a41d-a5f2952043fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927e21d4-cff1-4c45-a730-0e5e5d57b4cb, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0797e76b-3f15-4c7e-ae0d-0f4813d59967) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:53:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.084 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0797e76b-3f15-4c7e-ae0d-0f4813d59967 in datapath 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 bound to our chassis
Nov 25 08:53:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.086 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37
Nov 25 08:53:53 compute-0 ovn_controller[152859]: 2025-11-25T08:53:53Z|01166|binding|INFO|Setting lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 ovn-installed in OVS
Nov 25 08:53:53 compute-0 ovn_controller[152859]: 2025-11-25T08:53:53Z|01167|binding|INFO|Setting lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 up in Southbound
Nov 25 08:53:53 compute-0 NetworkManager[48915]: <info>  [1764060833.0919] device (tap0797e76b-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:53:53 compute-0 nova_compute[253538]: 2025-11-25 08:53:53.092 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:53 compute-0 NetworkManager[48915]: <info>  [1764060833.0931] device (tap0797e76b-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:53:53 compute-0 nova_compute[253538]: 2025-11-25 08:53:53.098 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.105 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a42ee767-f798-4f91-8806-31ffe2305cb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:53 compute-0 systemd-machined[215790]: New machine qemu-145-instance-00000074.
Nov 25 08:53:53 compute-0 systemd[1]: Started Virtual Machine qemu-145-instance-00000074.
Nov 25 08:53:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.143 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[37484728-cea6-4447-bb1f-494511d09975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.147 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[06fadc81-0445-4bd8-a058-e1c4c3e435dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.173 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce0caf0-f582-4a09-b62c-77575fb5657d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.191 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[428b7952-e8e2-458b-bf09-eaf0e50539f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f0f7d83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:fd:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616521, 'reachable_time': 30317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374884, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.237 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[644c76bc-0f32-4a4e-bf3c-0ed006154bc5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap8f0f7d83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616535, 'tstamp': 616535}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374888, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f0f7d83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616539, 'tstamp': 616539}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374888, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.239 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f0f7d83-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:53 compute-0 nova_compute[253538]: 2025-11-25 08:53:53.240 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.243 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f0f7d83-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.243 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:53:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.243 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f0f7d83-b0, col_values=(('external_ids', {'iface-id': '4bc48b70-3942-46d1-ac71-5fa19e5d9ae3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:53:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:53.243 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:53:53 compute-0 nova_compute[253538]: 2025-11-25 08:53:53.307 253542 DEBUG nova.compute.manager [req-2c6f8a8b-5504-43a4-aa41-77e4547c5da8 req-a99a1500-e487-485f-b986-1fbd1a913ee6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:53 compute-0 nova_compute[253538]: 2025-11-25 08:53:53.307 253542 DEBUG oslo_concurrency.lockutils [req-2c6f8a8b-5504-43a4-aa41-77e4547c5da8 req-a99a1500-e487-485f-b986-1fbd1a913ee6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:53 compute-0 nova_compute[253538]: 2025-11-25 08:53:53.308 253542 DEBUG oslo_concurrency.lockutils [req-2c6f8a8b-5504-43a4-aa41-77e4547c5da8 req-a99a1500-e487-485f-b986-1fbd1a913ee6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:53 compute-0 nova_compute[253538]: 2025-11-25 08:53:53.309 253542 DEBUG oslo_concurrency.lockutils [req-2c6f8a8b-5504-43a4-aa41-77e4547c5da8 req-a99a1500-e487-485f-b986-1fbd1a913ee6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:53 compute-0 nova_compute[253538]: 2025-11-25 08:53:53.309 253542 DEBUG nova.compute.manager [req-2c6f8a8b-5504-43a4-aa41-77e4547c5da8 req-a99a1500-e487-485f-b986-1fbd1a913ee6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Processing event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:53:53
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', 'images', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 745 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:53:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:53:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:54.414 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 2001:db8::f816:3eff:fec9:408d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2 2001:db8::f816:3eff:fec9:408d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) 
matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:53:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:54.416 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 updated
Nov 25 08:53:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:54.419 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:53:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:54.421 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e81db3f6-e5b2-4538-a861-2c2e396c4298]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:54 compute-0 sshd-session[374891]: Invalid user operador from 45.202.211.6 port 57340
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.566 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.567 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060834.5656435, 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.567 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] VM Started (Lifecycle Event)
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.570 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.575 253542 INFO nova.virt.libvirt.driver [-] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Instance spawned successfully.
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.576 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.589 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.596 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.602 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.603 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.603 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.604 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.604 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.604 253542 DEBUG nova.virt.libvirt.driver [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.627 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.627 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060834.5678415, 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.628 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] VM Paused (Lifecycle Event)
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.698 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.703 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060834.5699406, 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.703 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] VM Resumed (Lifecycle Event)
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.743 253542 INFO nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Took 9.43 seconds to spawn the instance on the hypervisor.
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.744 253542 DEBUG nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:54 compute-0 sshd-session[374891]: Received disconnect from 45.202.211.6 port 57340:11: Bye Bye [preauth]
Nov 25 08:53:54 compute-0 sshd-session[374891]: Disconnected from invalid user operador 45.202.211.6 port 57340 [preauth]
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.790 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.793 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.825 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.836 253542 INFO nova.compute.manager [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Took 10.55 seconds to build instance.
Nov 25 08:53:54 compute-0 nova_compute[253538]: 2025-11-25 08:53:54.850 253542 DEBUG oslo_concurrency.lockutils [None req-dd7c073b-5906-452e-a821-d734a5b66929 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:54 compute-0 sshd-session[374893]: Invalid user hduser from 193.32.162.151 port 43210
Nov 25 08:53:55 compute-0 ceph-mon[75015]: pgmap v2193: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 745 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 25 08:53:55 compute-0 sshd-session[374893]: Connection closed by invalid user hduser 193.32.162.151 port 43210 [preauth]
Nov 25 08:53:55 compute-0 nova_compute[253538]: 2025-11-25 08:53:55.431 253542 DEBUG nova.compute.manager [req-af0fc61e-a871-46aa-966e-1d6c32f416ad req-908e7f20-88ff-4258-b028-996e84fd205e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:53:55 compute-0 nova_compute[253538]: 2025-11-25 08:53:55.432 253542 DEBUG oslo_concurrency.lockutils [req-af0fc61e-a871-46aa-966e-1d6c32f416ad req-908e7f20-88ff-4258-b028-996e84fd205e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:53:55 compute-0 nova_compute[253538]: 2025-11-25 08:53:55.432 253542 DEBUG oslo_concurrency.lockutils [req-af0fc61e-a871-46aa-966e-1d6c32f416ad req-908e7f20-88ff-4258-b028-996e84fd205e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:53:55 compute-0 nova_compute[253538]: 2025-11-25 08:53:55.432 253542 DEBUG oslo_concurrency.lockutils [req-af0fc61e-a871-46aa-966e-1d6c32f416ad req-908e7f20-88ff-4258-b028-996e84fd205e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:53:55 compute-0 nova_compute[253538]: 2025-11-25 08:53:55.433 253542 DEBUG nova.compute.manager [req-af0fc61e-a871-46aa-966e-1d6c32f416ad req-908e7f20-88ff-4258-b028-996e84fd205e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] No waiting events found dispatching network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:53:55 compute-0 nova_compute[253538]: 2025-11-25 08:53:55.433 253542 WARNING nova.compute.manager [req-af0fc61e-a871-46aa-966e-1d6c32f416ad req-908e7f20-88ff-4258-b028-996e84fd205e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received unexpected event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 for instance with vm_state active and task_state None.
Nov 25 08:53:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Nov 25 08:53:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:56.395 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2 2001:db8::f816:3eff:fec9:408d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 2001:db8::f816:3eff:fec9:408d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:53:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:56.396 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 updated
Nov 25 08:53:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:56.398 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:53:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:53:56.399 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ca8dd362-dbce-4a1a-88b5-16ad33777c32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:53:56 compute-0 sshd-session[374938]: Received disconnect from 45.78.217.205 port 41752:11: Bye Bye [preauth]
Nov 25 08:53:56 compute-0 sshd-session[374938]: Disconnected from authenticating user root 45.78.217.205 port 41752 [preauth]
Nov 25 08:53:56 compute-0 nova_compute[253538]: 2025-11-25 08:53:56.823 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:57 compute-0 ceph-mon[75015]: pgmap v2194: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Nov 25 08:53:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:53:57 compute-0 nova_compute[253538]: 2025-11-25 08:53:57.808 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:53:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 08:53:59 compute-0 ceph-mon[75015]: pgmap v2195: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 08:53:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 172 op/s
Nov 25 08:54:01 compute-0 ceph-mon[75015]: pgmap v2196: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 172 op/s
Nov 25 08:54:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 145 op/s
Nov 25 08:54:01 compute-0 nova_compute[253538]: 2025-11-25 08:54:01.827 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:02.352 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2 2001:db8::f816:3eff:fec9:408d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:54:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:02.356 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 updated
Nov 25 08:54:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:02.358 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:54:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:02.359 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1752d2d1-eb23-4be0-b88f-16ae1d5d8646]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:02 compute-0 ceph-mon[75015]: pgmap v2197: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 145 op/s
Nov 25 08:54:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:54:02 compute-0 nova_compute[253538]: 2025-11-25 08:54:02.809 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:03.107 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2 2001:db8::f816:3eff:fec9:408d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:54:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:03.108 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 updated
Nov 25 08:54:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:03.110 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:54:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:03.110 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[54b9514f-d060-4cb4-b665-ec5ffeb5e9e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 25 KiB/s wr, 139 op/s
Nov 25 08:54:04 compute-0 ovn_controller[152859]: 2025-11-25T08:54:04Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ea:4d:89 10.100.0.6
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018696304006316337 of space, bias 1.0, pg target 0.5608891201894901 quantized to 32 (current 32)
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:54:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:54:04 compute-0 ceph-mon[75015]: pgmap v2198: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 25 KiB/s wr, 139 op/s
Nov 25 08:54:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 26 KiB/s wr, 141 op/s
Nov 25 08:54:06 compute-0 nova_compute[253538]: 2025-11-25 08:54:06.832 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:06 compute-0 ceph-mon[75015]: pgmap v2199: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 26 KiB/s wr, 141 op/s
Nov 25 08:54:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:07.141 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2 2001:db8::f816:3eff:fec9:408d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:54:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:07.142 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 updated
Nov 25 08:54:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:07.143 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:54:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:07.144 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9e1641-ed63-4c13-92fc-dfe31a659a83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:54:07 compute-0 nova_compute[253538]: 2025-11-25 08:54:07.812 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 296 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 466 KiB/s wr, 139 op/s
Nov 25 08:54:07 compute-0 podman[374941]: 2025-11-25 08:54:07.82807465 +0000 UTC m=+0.066903532 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:54:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:08.128 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2 2001:db8::f816:3eff:fec9:408d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:54:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:08.129 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 updated
Nov 25 08:54:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:08.130 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:54:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:08.131 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0577f64e-5fd8-48a5-af1a-79038e46b231]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:08 compute-0 ovn_controller[152859]: 2025-11-25T08:54:08Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:21:4a:e2 10.100.0.30
Nov 25 08:54:08 compute-0 ovn_controller[152859]: 2025-11-25T08:54:08Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:21:4a:e2 10.100.0.30
Nov 25 08:54:08 compute-0 ceph-mon[75015]: pgmap v2200: 321 pgs: 321 active+clean; 296 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 466 KiB/s wr, 139 op/s
Nov 25 08:54:09 compute-0 nova_compute[253538]: 2025-11-25 08:54:09.430 253542 INFO nova.compute.manager [None req-cd96e6ac-1fda-4d66-9ebc-cd22ff3c24df 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Get console output
Nov 25 08:54:09 compute-0 nova_compute[253538]: 2025-11-25 08:54:09.434 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:54:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 317 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Nov 25 08:54:09 compute-0 podman[374960]: 2025-11-25 08:54:09.823058561 +0000 UTC m=+0.071573368 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.193 253542 DEBUG nova.compute.manager [req-8cb2affc-3661-4157-b5fd-88a57bc97d47 req-97661487-2f26-4889-8611-a373ffe4e07d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-changed-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.193 253542 DEBUG nova.compute.manager [req-8cb2affc-3661-4157-b5fd-88a57bc97d47 req-97661487-2f26-4889-8611-a373ffe4e07d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Refreshing instance network info cache due to event network-changed-e0eb0246-9869-4c10-b45b-bd0799ae0c95. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.194 253542 DEBUG oslo_concurrency.lockutils [req-8cb2affc-3661-4157-b5fd-88a57bc97d47 req-97661487-2f26-4889-8611-a373ffe4e07d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.194 253542 DEBUG oslo_concurrency.lockutils [req-8cb2affc-3661-4157-b5fd-88a57bc97d47 req-97661487-2f26-4889-8611-a373ffe4e07d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.194 253542 DEBUG nova.network.neutron [req-8cb2affc-3661-4157-b5fd-88a57bc97d47 req-97661487-2f26-4889-8611-a373ffe4e07d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Refreshing network info cache for port e0eb0246-9869-4c10-b45b-bd0799ae0c95 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.310 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.311 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.312 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.312 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.313 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.314 253542 INFO nova.compute.manager [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Terminating instance
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.317 253542 DEBUG nova.compute.manager [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:54:10 compute-0 kernel: tape0eb0246-98 (unregistering): left promiscuous mode
Nov 25 08:54:10 compute-0 NetworkManager[48915]: <info>  [1764060850.3800] device (tape0eb0246-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.390 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:10 compute-0 ovn_controller[152859]: 2025-11-25T08:54:10Z|01168|binding|INFO|Releasing lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 from this chassis (sb_readonly=0)
Nov 25 08:54:10 compute-0 ovn_controller[152859]: 2025-11-25T08:54:10Z|01169|binding|INFO|Setting lport e0eb0246-9869-4c10-b45b-bd0799ae0c95 down in Southbound
Nov 25 08:54:10 compute-0 ovn_controller[152859]: 2025-11-25T08:54:10Z|01170|binding|INFO|Removing iface tape0eb0246-98 ovn-installed in OVS
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.394 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.407 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:10 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000073.scope: Deactivated successfully.
Nov 25 08:54:10 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000073.scope: Consumed 13.689s CPU time.
Nov 25 08:54:10 compute-0 systemd-machined[215790]: Machine qemu-144-instance-00000073 terminated.
Nov 25 08:54:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.496 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:4d:89 10.100.0.6'], port_security=['fa:16:3e:ea:4d:89 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2da7049d-715e-4209-8c17-dda96ff6a192', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '6', 'neutron:security_group_ids': '622b72dc-477c-41ae-9a43-d0ddc5df890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbdce223-39b2-47e0-8888-53244c8f0c36, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=e0eb0246-9869-4c10-b45b-bd0799ae0c95) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:54:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.499 162739 INFO neutron.agent.ovn.metadata.agent [-] Port e0eb0246-9869-4c10-b45b-bd0799ae0c95 in datapath ab581a21-5712-4b8e-87f9-b943349fbfcb unbound from our chassis
Nov 25 08:54:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.501 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ab581a21-5712-4b8e-87f9-b943349fbfcb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:54:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.503 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d191c10-b7f0-4c39-a55d-26742aa33a06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.504 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb namespace which is not needed anymore
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.562 253542 INFO nova.virt.libvirt.driver [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Instance destroyed successfully.
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.563 253542 DEBUG nova.objects.instance [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'resources' on Instance uuid 2da7049d-715e-4209-8c17-dda96ff6a192 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.579 253542 DEBUG nova.virt.libvirt.vif [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1007468219',display_name='tempest-TestNetworkAdvancedServerOps-server-1007468219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1007468219',id=115,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU68q8J+sEr75b4404lN3PmtYldQRjuuyP6SvWU7eV24JrfwADcpjaDPQik5pfkJbRZMvyTiPsHIXeXsaypDt7sTW7yrvZEo+4ZU8LLmZuoq5gv1NHdtvyWg13jQU9yUw==',key_name='tempest-TestNetworkAdvancedServerOps-1510547977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-020ctw8m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:50Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=2da7049d-715e-4209-8c17-dda96ff6a192,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.581 253542 DEBUG nova.network.os_vif_util [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.582 253542 DEBUG nova.network.os_vif_util [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.582 253542 DEBUG os_vif [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.585 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.585 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0eb0246-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.589 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.592 253542 INFO os_vif [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:4d:89,bridge_name='br-int',has_traffic_filtering=True,id=e0eb0246-9869-4c10-b45b-bd0799ae0c95,network=Network(ab581a21-5712-4b8e-87f9-b943349fbfcb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0eb0246-98')
Nov 25 08:54:10 compute-0 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374743]: [NOTICE]   (374770) : haproxy version is 2.8.14-c23fe91
Nov 25 08:54:10 compute-0 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374743]: [NOTICE]   (374770) : path to executable is /usr/sbin/haproxy
Nov 25 08:54:10 compute-0 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374743]: [WARNING]  (374770) : Exiting Master process...
Nov 25 08:54:10 compute-0 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374743]: [WARNING]  (374770) : Exiting Master process...
Nov 25 08:54:10 compute-0 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374743]: [ALERT]    (374770) : Current worker (374773) exited with code 143 (Terminated)
Nov 25 08:54:10 compute-0 neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb[374743]: [WARNING]  (374770) : All workers exited. Exiting... (0)
Nov 25 08:54:10 compute-0 systemd[1]: libpod-ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f.scope: Deactivated successfully.
Nov 25 08:54:10 compute-0 podman[375013]: 2025-11-25 08:54:10.664959397 +0000 UTC m=+0.059364047 container died ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:54:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f-userdata-shm.mount: Deactivated successfully.
Nov 25 08:54:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a887c0faf322aa9a07d1834b6a53fb5f904fd4dfee4a01fe03dcd58a1b43320c-merged.mount: Deactivated successfully.
Nov 25 08:54:10 compute-0 podman[375013]: 2025-11-25 08:54:10.740897254 +0000 UTC m=+0.135301904 container cleanup ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:54:10 compute-0 systemd[1]: libpod-conmon-ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f.scope: Deactivated successfully.
Nov 25 08:54:10 compute-0 podman[375063]: 2025-11-25 08:54:10.820324786 +0000 UTC m=+0.055714358 container remove ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 08:54:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.830 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[84f7a9e6-4399-484d-9273-c113f3e1353d]: (4, ('Tue Nov 25 08:54:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb (ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f)\nac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f\nTue Nov 25 08:54:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb (ac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f)\nac4143c5654b3f2d26e52800ec0179efe78b75cd7b70bac827c41ee2ea6faa2f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.833 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9b28a108-899d-48a6-a452-6d3208a352ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.834 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab581a21-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.837 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:10 compute-0 kernel: tapab581a21-50: left promiscuous mode
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.866 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.870 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[383f4016-f80d-4b34-ab70-1bbf137ec4e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.882 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[27d5f58b-aa6a-4281-a3e0-93e46c0785ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.884 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e383ea9f-35b7-4df5-8077-4835c5edf706]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.889 253542 DEBUG nova.compute.manager [req-621efb7e-1c71-4ca1-9707-939952fb6b90 req-7cbc8932-88cf-40f9-aec0-7ed2bde4a2f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-unplugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.889 253542 DEBUG oslo_concurrency.lockutils [req-621efb7e-1c71-4ca1-9707-939952fb6b90 req-7cbc8932-88cf-40f9-aec0-7ed2bde4a2f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.890 253542 DEBUG oslo_concurrency.lockutils [req-621efb7e-1c71-4ca1-9707-939952fb6b90 req-7cbc8932-88cf-40f9-aec0-7ed2bde4a2f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.890 253542 DEBUG oslo_concurrency.lockutils [req-621efb7e-1c71-4ca1-9707-939952fb6b90 req-7cbc8932-88cf-40f9-aec0-7ed2bde4a2f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.891 253542 DEBUG nova.compute.manager [req-621efb7e-1c71-4ca1-9707-939952fb6b90 req-7cbc8932-88cf-40f9-aec0-7ed2bde4a2f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] No waiting events found dispatching network-vif-unplugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:54:10 compute-0 nova_compute[253538]: 2025-11-25 08:54:10.891 253542 DEBUG nova.compute.manager [req-621efb7e-1c71-4ca1-9707-939952fb6b90 req-7cbc8932-88cf-40f9-aec0-7ed2bde4a2f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-unplugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:54:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.902 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5de881-aa0f-4c41-b38a-701fdc8510a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618490, 'reachable_time': 16296, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375078, 'error': None, 'target': 'ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.904 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ab581a21-5712-4b8e-87f9-b943349fbfcb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:54:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:10.904 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a1801047-96cc-4696-ae9f-a48cf6676ed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:10 compute-0 ceph-mon[75015]: pgmap v2201: 321 pgs: 321 active+clean; 317 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Nov 25 08:54:10 compute-0 systemd[1]: run-netns-ovnmeta\x2dab581a21\x2d5712\x2d4b8e\x2d87f9\x2db943349fbfcb.mount: Deactivated successfully.
Nov 25 08:54:11 compute-0 nova_compute[253538]: 2025-11-25 08:54:11.253 253542 INFO nova.virt.libvirt.driver [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Deleting instance files /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192_del
Nov 25 08:54:11 compute-0 nova_compute[253538]: 2025-11-25 08:54:11.254 253542 INFO nova.virt.libvirt.driver [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Deletion of /var/lib/nova/instances/2da7049d-715e-4209-8c17-dda96ff6a192_del complete
Nov 25 08:54:11 compute-0 nova_compute[253538]: 2025-11-25 08:54:11.321 253542 INFO nova.compute.manager [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Took 1.00 seconds to destroy the instance on the hypervisor.
Nov 25 08:54:11 compute-0 nova_compute[253538]: 2025-11-25 08:54:11.322 253542 DEBUG oslo.service.loopingcall [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:54:11 compute-0 nova_compute[253538]: 2025-11-25 08:54:11.322 253542 DEBUG nova.compute.manager [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:54:11 compute-0 nova_compute[253538]: 2025-11-25 08:54:11.322 253542 DEBUG nova.network.neutron [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:54:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 299 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 902 KiB/s rd, 2.2 MiB/s wr, 134 op/s
Nov 25 08:54:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:54:12 compute-0 nova_compute[253538]: 2025-11-25 08:54:12.657 253542 DEBUG nova.network.neutron [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:54:12 compute-0 nova_compute[253538]: 2025-11-25 08:54:12.681 253542 INFO nova.compute.manager [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Took 1.36 seconds to deallocate network for instance.
Nov 25 08:54:12 compute-0 nova_compute[253538]: 2025-11-25 08:54:12.720 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:12 compute-0 nova_compute[253538]: 2025-11-25 08:54:12.721 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:12 compute-0 nova_compute[253538]: 2025-11-25 08:54:12.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:12 compute-0 nova_compute[253538]: 2025-11-25 08:54:12.822 253542 DEBUG oslo_concurrency.processutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:54:12 compute-0 ceph-mon[75015]: pgmap v2202: 321 pgs: 321 active+clean; 299 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 902 KiB/s rd, 2.2 MiB/s wr, 134 op/s
Nov 25 08:54:13 compute-0 nova_compute[253538]: 2025-11-25 08:54:12.999 253542 DEBUG nova.compute.manager [req-b6639c09-bd11-4b2f-b85b-69fa90362ba3 req-66502262-4f41-438a-bfe2-9bbd442fad56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:54:13 compute-0 nova_compute[253538]: 2025-11-25 08:54:13.000 253542 DEBUG oslo_concurrency.lockutils [req-b6639c09-bd11-4b2f-b85b-69fa90362ba3 req-66502262-4f41-438a-bfe2-9bbd442fad56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:13 compute-0 nova_compute[253538]: 2025-11-25 08:54:13.001 253542 DEBUG oslo_concurrency.lockutils [req-b6639c09-bd11-4b2f-b85b-69fa90362ba3 req-66502262-4f41-438a-bfe2-9bbd442fad56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:13 compute-0 nova_compute[253538]: 2025-11-25 08:54:13.001 253542 DEBUG oslo_concurrency.lockutils [req-b6639c09-bd11-4b2f-b85b-69fa90362ba3 req-66502262-4f41-438a-bfe2-9bbd442fad56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:13 compute-0 nova_compute[253538]: 2025-11-25 08:54:13.001 253542 DEBUG nova.compute.manager [req-b6639c09-bd11-4b2f-b85b-69fa90362ba3 req-66502262-4f41-438a-bfe2-9bbd442fad56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] No waiting events found dispatching network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:54:13 compute-0 nova_compute[253538]: 2025-11-25 08:54:13.002 253542 WARNING nova.compute.manager [req-b6639c09-bd11-4b2f-b85b-69fa90362ba3 req-66502262-4f41-438a-bfe2-9bbd442fad56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received unexpected event network-vif-plugged-e0eb0246-9869-4c10-b45b-bd0799ae0c95 for instance with vm_state deleted and task_state None.
Nov 25 08:54:13 compute-0 nova_compute[253538]: 2025-11-25 08:54:13.002 253542 DEBUG nova.compute.manager [req-b6639c09-bd11-4b2f-b85b-69fa90362ba3 req-66502262-4f41-438a-bfe2-9bbd442fad56 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Received event network-vif-deleted-e0eb0246-9869-4c10-b45b-bd0799ae0c95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:54:13 compute-0 nova_compute[253538]: 2025-11-25 08:54:13.205 253542 DEBUG nova.network.neutron [req-8cb2affc-3661-4157-b5fd-88a57bc97d47 req-97661487-2f26-4889-8611-a373ffe4e07d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updated VIF entry in instance network info cache for port e0eb0246-9869-4c10-b45b-bd0799ae0c95. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:54:13 compute-0 nova_compute[253538]: 2025-11-25 08:54:13.205 253542 DEBUG nova.network.neutron [req-8cb2affc-3661-4157-b5fd-88a57bc97d47 req-97661487-2f26-4889-8611-a373ffe4e07d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Updating instance_info_cache with network_info: [{"id": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "address": "fa:16:3e:ea:4d:89", "network": {"id": "ab581a21-5712-4b8e-87f9-b943349fbfcb", "bridge": "br-int", "label": "tempest-network-smoke--1196736477", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0eb0246-98", "ovs_interfaceid": "e0eb0246-9869-4c10-b45b-bd0799ae0c95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:54:13 compute-0 nova_compute[253538]: 2025-11-25 08:54:13.224 253542 DEBUG oslo_concurrency.lockutils [req-8cb2affc-3661-4157-b5fd-88a57bc97d47 req-97661487-2f26-4889-8611-a373ffe4e07d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2da7049d-715e-4209-8c17-dda96ff6a192" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:54:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:54:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/533077817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:54:13 compute-0 nova_compute[253538]: 2025-11-25 08:54:13.293 253542 DEBUG oslo_concurrency.processutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:54:13 compute-0 nova_compute[253538]: 2025-11-25 08:54:13.304 253542 DEBUG nova.compute.provider_tree [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:54:13 compute-0 nova_compute[253538]: 2025-11-25 08:54:13.325 253542 DEBUG nova.scheduler.client.report [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:54:13 compute-0 nova_compute[253538]: 2025-11-25 08:54:13.364 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:13 compute-0 nova_compute[253538]: 2025-11-25 08:54:13.422 253542 INFO nova.scheduler.client.report [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Deleted allocations for instance 2da7049d-715e-4209-8c17-dda96ff6a192
Nov 25 08:54:13 compute-0 nova_compute[253538]: 2025-11-25 08:54:13.491 253542 DEBUG oslo_concurrency.lockutils [None req-9a61d765-1b7e-48be-bf50-91425e9de404 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "2da7049d-715e-4209-8c17-dda96ff6a192" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 273 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 898 KiB/s rd, 2.2 MiB/s wr, 135 op/s
Nov 25 08:54:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/533077817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:54:14 compute-0 podman[375103]: 2025-11-25 08:54:14.864471493 +0000 UTC m=+0.112873014 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 08:54:14 compute-0 ceph-mon[75015]: pgmap v2203: 321 pgs: 321 active+clean; 273 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 898 KiB/s rd, 2.2 MiB/s wr, 135 op/s
Nov 25 08:54:15 compute-0 nova_compute[253538]: 2025-11-25 08:54:15.588 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 765 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 25 08:54:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:16.223 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 2001:db8::f816:3eff:fec9:408d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 10.100.0.2 2001:db8::f816:3eff:fec9:408d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:54:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:16.224 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 updated
Nov 25 08:54:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:16.225 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:54:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:16.226 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb9c72b-93c9-41e4-bac8-24240d6ef951]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:16 compute-0 ceph-mon[75015]: pgmap v2204: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 765 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 25 08:54:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:54:17 compute-0 nova_compute[253538]: 2025-11-25 08:54:17.777 253542 DEBUG nova.compute.manager [req-e1b41852-c347-41ee-8941-913e1d54bfe5 req-beb79459-f1af-4a5a-9283-d161b63c2a51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-changed-1959aca7-b25c-4fe5-b59a-70db352af78b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:54:17 compute-0 nova_compute[253538]: 2025-11-25 08:54:17.777 253542 DEBUG nova.compute.manager [req-e1b41852-c347-41ee-8941-913e1d54bfe5 req-beb79459-f1af-4a5a-9283-d161b63c2a51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing instance network info cache due to event network-changed-1959aca7-b25c-4fe5-b59a-70db352af78b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:54:17 compute-0 nova_compute[253538]: 2025-11-25 08:54:17.778 253542 DEBUG oslo_concurrency.lockutils [req-e1b41852-c347-41ee-8941-913e1d54bfe5 req-beb79459-f1af-4a5a-9283-d161b63c2a51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:54:17 compute-0 nova_compute[253538]: 2025-11-25 08:54:17.778 253542 DEBUG oslo_concurrency.lockutils [req-e1b41852-c347-41ee-8941-913e1d54bfe5 req-beb79459-f1af-4a5a-9283-d161b63c2a51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:54:17 compute-0 nova_compute[253538]: 2025-11-25 08:54:17.778 253542 DEBUG nova.network.neutron [req-e1b41852-c347-41ee-8941-913e1d54bfe5 req-beb79459-f1af-4a5a-9283-d161b63c2a51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing network info cache for port 1959aca7-b25c-4fe5-b59a-70db352af78b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:54:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Nov 25 08:54:17 compute-0 nova_compute[253538]: 2025-11-25 08:54:17.819 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:18 compute-0 sudo[375129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:54:18 compute-0 sudo[375129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:18 compute-0 sudo[375129]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:18 compute-0 sudo[375154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:54:18 compute-0 sudo[375154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:18 compute-0 sudo[375154]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:18 compute-0 sudo[375179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:54:18 compute-0 sudo[375179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:18 compute-0 sudo[375179]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:18 compute-0 sudo[375204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:54:18 compute-0 sudo[375204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:18 compute-0 ceph-mon[75015]: pgmap v2205: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Nov 25 08:54:19 compute-0 sudo[375204]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:54:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:54:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:54:19 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:54:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:54:19 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:54:19 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 41318a4f-00ed-4c34-8700-0c603a4e2c15 does not exist
Nov 25 08:54:19 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e138be15-3e66-49b8-820f-da43b86b2c2a does not exist
Nov 25 08:54:19 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 6c5287ec-572e-4aad-9f4e-59a72c9e65c4 does not exist
Nov 25 08:54:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:54:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:54:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:54:19 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:54:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:54:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:54:19 compute-0 sudo[375261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:54:19 compute-0 sudo[375261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:19 compute-0 sudo[375261]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:19 compute-0 sudo[375286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:54:19 compute-0 sudo[375286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:19 compute-0 ovn_controller[152859]: 2025-11-25T08:54:19Z|01171|binding|INFO|Releasing lport 4bc48b70-3942-46d1-ac71-5fa19e5d9ae3 from this chassis (sb_readonly=0)
Nov 25 08:54:19 compute-0 ovn_controller[152859]: 2025-11-25T08:54:19Z|01172|binding|INFO|Releasing lport baf4584e-8381-4bcf-9f75-0a7b69cd8212 from this chassis (sb_readonly=0)
Nov 25 08:54:19 compute-0 sudo[375286]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:19 compute-0 nova_compute[253538]: 2025-11-25 08:54:19.722 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:19 compute-0 sudo[375311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:54:19 compute-0 sudo[375311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:19 compute-0 sudo[375311]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 1.7 MiB/s wr, 91 op/s
Nov 25 08:54:19 compute-0 sudo[375336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:54:19 compute-0 sudo[375336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:19 compute-0 nova_compute[253538]: 2025-11-25 08:54:19.840 253542 DEBUG nova.network.neutron [req-e1b41852-c347-41ee-8941-913e1d54bfe5 req-beb79459-f1af-4a5a-9283-d161b63c2a51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updated VIF entry in instance network info cache for port 1959aca7-b25c-4fe5-b59a-70db352af78b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:54:19 compute-0 nova_compute[253538]: 2025-11-25 08:54:19.841 253542 DEBUG nova.network.neutron [req-e1b41852-c347-41ee-8941-913e1d54bfe5 req-beb79459-f1af-4a5a-9283-d161b63c2a51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:54:19 compute-0 nova_compute[253538]: 2025-11-25 08:54:19.854 253542 DEBUG oslo_concurrency.lockutils [req-e1b41852-c347-41ee-8941-913e1d54bfe5 req-beb79459-f1af-4a5a-9283-d161b63c2a51 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:54:19 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:54:19 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:54:19 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:54:19 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:54:19 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:54:19 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:54:20 compute-0 podman[375404]: 2025-11-25 08:54:20.192760862 +0000 UTC m=+0.047784152 container create 7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:54:20 compute-0 systemd[1]: Started libpod-conmon-7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359.scope.
Nov 25 08:54:20 compute-0 podman[375404]: 2025-11-25 08:54:20.172530622 +0000 UTC m=+0.027553952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:54:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:54:20 compute-0 podman[375404]: 2025-11-25 08:54:20.297323468 +0000 UTC m=+0.152346778 container init 7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:54:20 compute-0 podman[375404]: 2025-11-25 08:54:20.309989713 +0000 UTC m=+0.165013043 container start 7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 08:54:20 compute-0 podman[375404]: 2025-11-25 08:54:20.313681983 +0000 UTC m=+0.168705283 container attach 7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 08:54:20 compute-0 affectionate_fermat[375419]: 167 167
Nov 25 08:54:20 compute-0 systemd[1]: libpod-7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359.scope: Deactivated successfully.
Nov 25 08:54:20 compute-0 podman[375404]: 2025-11-25 08:54:20.319183713 +0000 UTC m=+0.174207053 container died 7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:54:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-335d52f3583ee6922a8fc22fb8cf7bc018025bf073b9c7b46a4ccdb037c04064-merged.mount: Deactivated successfully.
Nov 25 08:54:20 compute-0 podman[375404]: 2025-11-25 08:54:20.366793189 +0000 UTC m=+0.221816519 container remove 7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 08:54:20 compute-0 systemd[1]: libpod-conmon-7f4de7591f021a28ef14d18113f9fee714e44f128fe1d4beecfbb62783314359.scope: Deactivated successfully.
Nov 25 08:54:20 compute-0 nova_compute[253538]: 2025-11-25 08:54:20.591 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:20 compute-0 podman[375442]: 2025-11-25 08:54:20.628534713 +0000 UTC m=+0.056902870 container create 0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 25 08:54:20 compute-0 systemd[1]: Started libpod-conmon-0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33.scope.
Nov 25 08:54:20 compute-0 podman[375442]: 2025-11-25 08:54:20.602920666 +0000 UTC m=+0.031288843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:54:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961e005e26c165fb058e545803a43a0b357a398dfad8ed5330d26df707c843a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961e005e26c165fb058e545803a43a0b357a398dfad8ed5330d26df707c843a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961e005e26c165fb058e545803a43a0b357a398dfad8ed5330d26df707c843a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961e005e26c165fb058e545803a43a0b357a398dfad8ed5330d26df707c843a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961e005e26c165fb058e545803a43a0b357a398dfad8ed5330d26df707c843a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:54:20 compute-0 podman[375442]: 2025-11-25 08:54:20.738445785 +0000 UTC m=+0.166814022 container init 0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 08:54:20 compute-0 podman[375442]: 2025-11-25 08:54:20.746765102 +0000 UTC m=+0.175133259 container start 0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:54:20 compute-0 podman[375442]: 2025-11-25 08:54:20.751722336 +0000 UTC m=+0.180090493 container attach 0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 08:54:20 compute-0 ceph-mon[75015]: pgmap v2206: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 1.7 MiB/s wr, 91 op/s
Nov 25 08:54:21 compute-0 eager_aryabhata[375459]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:54:21 compute-0 eager_aryabhata[375459]: --> relative data size: 1.0
Nov 25 08:54:21 compute-0 eager_aryabhata[375459]: --> All data devices are unavailable
Nov 25 08:54:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 22 KiB/s wr, 30 op/s
Nov 25 08:54:21 compute-0 systemd[1]: libpod-0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33.scope: Deactivated successfully.
Nov 25 08:54:21 compute-0 systemd[1]: libpod-0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33.scope: Consumed 1.017s CPU time.
Nov 25 08:54:21 compute-0 podman[375442]: 2025-11-25 08:54:21.862525061 +0000 UTC m=+1.290893218 container died 0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 08:54:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-961e005e26c165fb058e545803a43a0b357a398dfad8ed5330d26df707c843a1-merged.mount: Deactivated successfully.
Nov 25 08:54:21 compute-0 podman[375442]: 2025-11-25 08:54:21.921365452 +0000 UTC m=+1.349733629 container remove 0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:54:21 compute-0 systemd[1]: libpod-conmon-0068d0f50011a2e4c64362a70f8661473d9c6680b62d2bfe9846a577d626cf33.scope: Deactivated successfully.
Nov 25 08:54:21 compute-0 sudo[375336]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:22 compute-0 sudo[375502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:54:22 compute-0 sudo[375502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:22 compute-0 sudo[375502]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:22 compute-0 sudo[375527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:54:22 compute-0 sudo[375527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:22 compute-0 sudo[375527]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:22 compute-0 sudo[375552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:54:22 compute-0 sudo[375552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:22 compute-0 sudo[375552]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:22 compute-0 sudo[375577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:54:22 compute-0 sudo[375577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:54:22 compute-0 podman[375641]: 2025-11-25 08:54:22.632223171 +0000 UTC m=+0.049781746 container create 270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 08:54:22 compute-0 systemd[1]: Started libpod-conmon-270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d.scope.
Nov 25 08:54:22 compute-0 podman[375641]: 2025-11-25 08:54:22.6079581 +0000 UTC m=+0.025516755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:54:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:54:22 compute-0 podman[375641]: 2025-11-25 08:54:22.721981074 +0000 UTC m=+0.139539669 container init 270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:54:22 compute-0 podman[375641]: 2025-11-25 08:54:22.730077515 +0000 UTC m=+0.147636090 container start 270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:54:22 compute-0 serene_sanderson[375659]: 167 167
Nov 25 08:54:22 compute-0 systemd[1]: libpod-270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d.scope: Deactivated successfully.
Nov 25 08:54:22 compute-0 podman[375641]: 2025-11-25 08:54:22.748839885 +0000 UTC m=+0.166398480 container attach 270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:54:22 compute-0 podman[375641]: 2025-11-25 08:54:22.749954396 +0000 UTC m=+0.167512971 container died 270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:54:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1609451b0be2cdfb2634def33fe4d7fa2f9f9e310a628fa0e30e19750fd263b-merged.mount: Deactivated successfully.
Nov 25 08:54:22 compute-0 podman[375641]: 2025-11-25 08:54:22.801016135 +0000 UTC m=+0.218574710 container remove 270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 08:54:22 compute-0 systemd[1]: libpod-conmon-270c57f77eb8c477783d0037a7fbdc35d2fccf53f62298d06c11ef970bb4901d.scope: Deactivated successfully.
Nov 25 08:54:22 compute-0 nova_compute[253538]: 2025-11-25 08:54:22.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:22 compute-0 podman[375683]: 2025-11-25 08:54:22.985838826 +0000 UTC m=+0.041029338 container create 386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:54:23 compute-0 ceph-mon[75015]: pgmap v2207: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 22 KiB/s wr, 30 op/s
Nov 25 08:54:23 compute-0 systemd[1]: Started libpod-conmon-386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016.scope.
Nov 25 08:54:23 compute-0 podman[375683]: 2025-11-25 08:54:22.968874894 +0000 UTC m=+0.024065426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:54:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a110b35e0cc81ed823941d9a31a95ddb08363b129f71c78152d3ffbb73af77f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a110b35e0cc81ed823941d9a31a95ddb08363b129f71c78152d3ffbb73af77f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a110b35e0cc81ed823941d9a31a95ddb08363b129f71c78152d3ffbb73af77f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a110b35e0cc81ed823941d9a31a95ddb08363b129f71c78152d3ffbb73af77f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:54:23 compute-0 podman[375683]: 2025-11-25 08:54:23.081043227 +0000 UTC m=+0.136233769 container init 386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:54:23 compute-0 podman[375683]: 2025-11-25 08:54:23.088002276 +0000 UTC m=+0.143192798 container start 386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:54:23 compute-0 podman[375683]: 2025-11-25 08:54:23.091184273 +0000 UTC m=+0.146374785 container attach 386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 08:54:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:54:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:54:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:54:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:54:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:54:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:54:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 16 KiB/s wr, 4 op/s
Nov 25 08:54:23 compute-0 adoring_dirac[375700]: {
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:     "0": [
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:         {
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "devices": [
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "/dev/loop3"
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             ],
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "lv_name": "ceph_lv0",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "lv_size": "21470642176",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "name": "ceph_lv0",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "tags": {
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.cluster_name": "ceph",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.crush_device_class": "",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.encrypted": "0",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.osd_id": "0",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.type": "block",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.vdo": "0"
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             },
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "type": "block",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "vg_name": "ceph_vg0"
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:         }
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:     ],
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:     "1": [
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:         {
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "devices": [
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "/dev/loop4"
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             ],
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "lv_name": "ceph_lv1",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "lv_size": "21470642176",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "name": "ceph_lv1",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "tags": {
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.cluster_name": "ceph",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.crush_device_class": "",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.encrypted": "0",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.osd_id": "1",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.type": "block",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.vdo": "0"
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             },
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "type": "block",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "vg_name": "ceph_vg1"
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:         }
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:     ],
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:     "2": [
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:         {
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "devices": [
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "/dev/loop5"
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             ],
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "lv_name": "ceph_lv2",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "lv_size": "21470642176",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "name": "ceph_lv2",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "tags": {
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.cluster_name": "ceph",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.crush_device_class": "",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.encrypted": "0",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.osd_id": "2",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.type": "block",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:                 "ceph.vdo": "0"
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             },
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "type": "block",
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:             "vg_name": "ceph_vg2"
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:         }
Nov 25 08:54:23 compute-0 adoring_dirac[375700]:     ]
Nov 25 08:54:23 compute-0 adoring_dirac[375700]: }
Nov 25 08:54:23 compute-0 systemd[1]: libpod-386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016.scope: Deactivated successfully.
Nov 25 08:54:23 compute-0 podman[375683]: 2025-11-25 08:54:23.881779182 +0000 UTC m=+0.936969704 container died 386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 08:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a110b35e0cc81ed823941d9a31a95ddb08363b129f71c78152d3ffbb73af77f-merged.mount: Deactivated successfully.
Nov 25 08:54:23 compute-0 podman[375683]: 2025-11-25 08:54:23.945745902 +0000 UTC m=+1.000936414 container remove 386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 08:54:23 compute-0 systemd[1]: libpod-conmon-386dd646460c915514bea0f35f0a434debcd005a51dab4245298524e8fe30016.scope: Deactivated successfully.
Nov 25 08:54:23 compute-0 sudo[375577]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:24 compute-0 sudo[375721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:54:24 compute-0 sudo[375721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:24 compute-0 sudo[375721]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:24 compute-0 sudo[375746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:54:24 compute-0 sudo[375746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:24 compute-0 sudo[375746]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:24 compute-0 sudo[375771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:54:24 compute-0 sudo[375771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:24 compute-0 sudo[375771]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:24 compute-0 sudo[375796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:54:24 compute-0 sudo[375796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:24 compute-0 podman[375861]: 2025-11-25 08:54:24.683452532 +0000 UTC m=+0.049525649 container create 9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curran, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 08:54:24 compute-0 systemd[1]: Started libpod-conmon-9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555.scope.
Nov 25 08:54:24 compute-0 podman[375861]: 2025-11-25 08:54:24.664986539 +0000 UTC m=+0.031059606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:54:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:54:24 compute-0 podman[375861]: 2025-11-25 08:54:24.793122707 +0000 UTC m=+0.159195824 container init 9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curran, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:54:24 compute-0 podman[375861]: 2025-11-25 08:54:24.811060645 +0000 UTC m=+0.177133662 container start 9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curran, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:54:24 compute-0 exciting_curran[375877]: 167 167
Nov 25 08:54:24 compute-0 podman[375861]: 2025-11-25 08:54:24.81747153 +0000 UTC m=+0.183544567 container attach 9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 08:54:24 compute-0 systemd[1]: libpod-9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555.scope: Deactivated successfully.
Nov 25 08:54:24 compute-0 podman[375861]: 2025-11-25 08:54:24.818290742 +0000 UTC m=+0.184363759 container died 9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 08:54:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a33d0b57af9dd363113d18594d7a4245bc24e7ba31584de762a5209c1b10a160-merged.mount: Deactivated successfully.
Nov 25 08:54:24 compute-0 podman[375861]: 2025-11-25 08:54:24.863663957 +0000 UTC m=+0.229736974 container remove 9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 08:54:24 compute-0 systemd[1]: libpod-conmon-9a58ac960e67a3b06258fd883890cc5dafee79332e358733bd97825095730555.scope: Deactivated successfully.
Nov 25 08:54:25 compute-0 ceph-mon[75015]: pgmap v2208: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 16 KiB/s wr, 4 op/s
Nov 25 08:54:25 compute-0 podman[375902]: 2025-11-25 08:54:25.085934007 +0000 UTC m=+0.042769165 container create 4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 08:54:25 compute-0 systemd[1]: Started libpod-conmon-4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788.scope.
Nov 25 08:54:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:54:25 compute-0 podman[375902]: 2025-11-25 08:54:25.069722536 +0000 UTC m=+0.026557714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6e337355e3d0323a5f98a7e10d20eccbe3c333a8190e073517217c17146ad9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6e337355e3d0323a5f98a7e10d20eccbe3c333a8190e073517217c17146ad9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6e337355e3d0323a5f98a7e10d20eccbe3c333a8190e073517217c17146ad9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6e337355e3d0323a5f98a7e10d20eccbe3c333a8190e073517217c17146ad9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:54:25 compute-0 podman[375902]: 2025-11-25 08:54:25.18741711 +0000 UTC m=+0.144252308 container init 4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 08:54:25 compute-0 podman[375902]: 2025-11-25 08:54:25.198467031 +0000 UTC m=+0.155302219 container start 4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moore, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:54:25 compute-0 podman[375902]: 2025-11-25 08:54:25.20289136 +0000 UTC m=+0.159726508 container attach 4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moore, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 08:54:25 compute-0 nova_compute[253538]: 2025-11-25 08:54:25.558 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060850.557127, 2da7049d-715e-4209-8c17-dda96ff6a192 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:54:25 compute-0 nova_compute[253538]: 2025-11-25 08:54:25.559 253542 INFO nova.compute.manager [-] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] VM Stopped (Lifecycle Event)
Nov 25 08:54:25 compute-0 nova_compute[253538]: 2025-11-25 08:54:25.574 253542 DEBUG nova.compute.manager [None req-58fc3531-b33d-468a-b94f-566b5baaaed7 - - - - - -] [instance: 2da7049d-715e-4209-8c17-dda96ff6a192] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:54:25 compute-0 nova_compute[253538]: 2025-11-25 08:54:25.595 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 5.0 KiB/s wr, 3 op/s
Nov 25 08:54:26 compute-0 jovial_moore[375919]: {
Nov 25 08:54:26 compute-0 jovial_moore[375919]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:54:26 compute-0 jovial_moore[375919]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:54:26 compute-0 jovial_moore[375919]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:54:26 compute-0 jovial_moore[375919]:         "osd_id": 1,
Nov 25 08:54:26 compute-0 jovial_moore[375919]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:54:26 compute-0 jovial_moore[375919]:         "type": "bluestore"
Nov 25 08:54:26 compute-0 jovial_moore[375919]:     },
Nov 25 08:54:26 compute-0 jovial_moore[375919]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:54:26 compute-0 jovial_moore[375919]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:54:26 compute-0 jovial_moore[375919]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:54:26 compute-0 jovial_moore[375919]:         "osd_id": 2,
Nov 25 08:54:26 compute-0 jovial_moore[375919]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:54:26 compute-0 jovial_moore[375919]:         "type": "bluestore"
Nov 25 08:54:26 compute-0 jovial_moore[375919]:     },
Nov 25 08:54:26 compute-0 jovial_moore[375919]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:54:26 compute-0 jovial_moore[375919]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:54:26 compute-0 jovial_moore[375919]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:54:26 compute-0 jovial_moore[375919]:         "osd_id": 0,
Nov 25 08:54:26 compute-0 jovial_moore[375919]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:54:26 compute-0 jovial_moore[375919]:         "type": "bluestore"
Nov 25 08:54:26 compute-0 jovial_moore[375919]:     }
Nov 25 08:54:26 compute-0 jovial_moore[375919]: }
Nov 25 08:54:26 compute-0 systemd[1]: libpod-4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788.scope: Deactivated successfully.
Nov 25 08:54:26 compute-0 systemd[1]: libpod-4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788.scope: Consumed 1.037s CPU time.
Nov 25 08:54:26 compute-0 podman[375902]: 2025-11-25 08:54:26.226734849 +0000 UTC m=+1.183570017 container died 4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:54:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba6e337355e3d0323a5f98a7e10d20eccbe3c333a8190e073517217c17146ad9-merged.mount: Deactivated successfully.
Nov 25 08:54:26 compute-0 podman[375902]: 2025-11-25 08:54:26.289094116 +0000 UTC m=+1.245929274 container remove 4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_moore, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 08:54:26 compute-0 systemd[1]: libpod-conmon-4837d63a07529fc50cde55d3cbf53191bed97eab99bacbbe5eabd130679e6788.scope: Deactivated successfully.
Nov 25 08:54:26 compute-0 nova_compute[253538]: 2025-11-25 08:54:26.314 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:26 compute-0 sudo[375796]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:54:26 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:54:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:54:26 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:54:26 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 05a83e71-60ef-4169-b208-36978a2250e0 does not exist
Nov 25 08:54:26 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 6306d883-3669-479c-8db8-624fe51bfe5b does not exist
Nov 25 08:54:26 compute-0 sudo[375965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:54:26 compute-0 sudo[375965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:26 compute-0 sudo[375965]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:26 compute-0 sudo[375990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:54:26 compute-0 sudo[375990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:54:26 compute-0 sudo[375990]: pam_unix(sudo:session): session closed for user root
Nov 25 08:54:27 compute-0 ceph-mon[75015]: pgmap v2209: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 5.0 KiB/s wr, 3 op/s
Nov 25 08:54:27 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:54:27 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:54:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:54:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 25 08:54:27 compute-0 nova_compute[253538]: 2025-11-25 08:54:27.823 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:54:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1638025801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:54:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:54:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1638025801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:54:29 compute-0 ceph-mon[75015]: pgmap v2210: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 25 08:54:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1638025801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:54:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1638025801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:54:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Nov 25 08:54:30 compute-0 sshd-session[375896]: Received disconnect from 45.78.222.2 port 35992:11: Bye Bye [preauth]
Nov 25 08:54:30 compute-0 sshd-session[375896]: Disconnected from authenticating user root 45.78.222.2 port 35992 [preauth]
Nov 25 08:54:30 compute-0 nova_compute[253538]: 2025-11-25 08:54:30.598 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:30 compute-0 nova_compute[253538]: 2025-11-25 08:54:30.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:31 compute-0 ceph-mon[75015]: pgmap v2211: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Nov 25 08:54:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Nov 25 08:54:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:54:32 compute-0 nova_compute[253538]: 2025-11-25 08:54:32.713 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:54:32 compute-0 nova_compute[253538]: 2025-11-25 08:54:32.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:33 compute-0 ceph-mon[75015]: pgmap v2212: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Nov 25 08:54:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Nov 25 08:54:34 compute-0 nova_compute[253538]: 2025-11-25 08:54:34.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:54:35 compute-0 ceph-mon[75015]: pgmap v2213: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Nov 25 08:54:35 compute-0 nova_compute[253538]: 2025-11-25 08:54:35.363 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "7e82fa8c-6663-439c-833c-2b28f22282a8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:35 compute-0 nova_compute[253538]: 2025-11-25 08:54:35.364 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:35 compute-0 nova_compute[253538]: 2025-11-25 08:54:35.381 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:54:35 compute-0 nova_compute[253538]: 2025-11-25 08:54:35.472 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:35 compute-0 nova_compute[253538]: 2025-11-25 08:54:35.473 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:35 compute-0 nova_compute[253538]: 2025-11-25 08:54:35.484 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:54:35 compute-0 nova_compute[253538]: 2025-11-25 08:54:35.485 253542 INFO nova.compute.claims [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:54:35 compute-0 nova_compute[253538]: 2025-11-25 08:54:35.601 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:35 compute-0 nova_compute[253538]: 2025-11-25 08:54:35.669 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:54:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s wr, 0 op/s
Nov 25 08:54:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:54:36 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3600970949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.178 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.185 253542 DEBUG nova.compute.provider_tree [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:54:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3600970949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.196 253542 DEBUG nova.scheduler.client.report [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.227 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.228 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.315 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.315 253542 DEBUG nova.network.neutron [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.336 253542 INFO nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.354 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.440 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.441 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.442 253542 INFO nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Creating image(s)
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.466 253542 DEBUG nova.storage.rbd_utils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 7e82fa8c-6663-439c-833c-2b28f22282a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.496 253542 DEBUG nova.storage.rbd_utils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 7e82fa8c-6663-439c-833c-2b28f22282a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.527 253542 DEBUG nova.storage.rbd_utils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 7e82fa8c-6663-439c-833c-2b28f22282a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.531 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.637 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.638 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.639 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.639 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.660 253542 DEBUG nova.storage.rbd_utils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 7e82fa8c-6663-439c-833c-2b28f22282a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.663 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7e82fa8c-6663-439c-833c-2b28f22282a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:54:36 compute-0 nova_compute[253538]: 2025-11-25 08:54:36.966 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7e82fa8c-6663-439c-833c-2b28f22282a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:54:37 compute-0 nova_compute[253538]: 2025-11-25 08:54:37.041 253542 DEBUG nova.storage.rbd_utils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] resizing rbd image 7e82fa8c-6663-439c-833c-2b28f22282a8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:54:37 compute-0 nova_compute[253538]: 2025-11-25 08:54:37.078 253542 DEBUG nova.policy [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '009378dc36154271ba5b4590ce67ddde', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:54:37 compute-0 nova_compute[253538]: 2025-11-25 08:54:37.123 253542 DEBUG nova.objects.instance [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'migration_context' on Instance uuid 7e82fa8c-6663-439c-833c-2b28f22282a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:54:37 compute-0 nova_compute[253538]: 2025-11-25 08:54:37.144 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:54:37 compute-0 nova_compute[253538]: 2025-11-25 08:54:37.145 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Ensure instance console log exists: /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:54:37 compute-0 nova_compute[253538]: 2025-11-25 08:54:37.145 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:37 compute-0 nova_compute[253538]: 2025-11-25 08:54:37.145 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:37 compute-0 nova_compute[253538]: 2025-11-25 08:54:37.146 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:37 compute-0 ceph-mon[75015]: pgmap v2214: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s wr, 0 op/s
Nov 25 08:54:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:54:37 compute-0 nova_compute[253538]: 2025-11-25 08:54:37.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:54:37 compute-0 nova_compute[253538]: 2025-11-25 08:54:37.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:54:37 compute-0 nova_compute[253538]: 2025-11-25 08:54:37.596 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 08:54:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 5.4 KiB/s wr, 0 op/s
Nov 25 08:54:37 compute-0 nova_compute[253538]: 2025-11-25 08:54:37.829 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:38 compute-0 nova_compute[253538]: 2025-11-25 08:54:38.753 253542 DEBUG nova.network.neutron [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Successfully created port: 52157627-d75e-4670-9215-6471bda94ba6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:54:38 compute-0 podman[376203]: 2025-11-25 08:54:38.801163205 +0000 UTC m=+0.054431202 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 08:54:39 compute-0 ceph-mon[75015]: pgmap v2215: 321 pgs: 321 active+clean; 246 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 5.4 KiB/s wr, 0 op/s
Nov 25 08:54:39 compute-0 nova_compute[253538]: 2025-11-25 08:54:39.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:54:39 compute-0 nova_compute[253538]: 2025-11-25 08:54:39.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:54:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 273 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 1.2 MiB/s wr, 16 op/s
Nov 25 08:54:40 compute-0 nova_compute[253538]: 2025-11-25 08:54:40.603 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:40 compute-0 podman[376223]: 2025-11-25 08:54:40.812966775 +0000 UTC m=+0.066391368 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 08:54:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:41.077 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:41.079 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:41.079 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:41 compute-0 ceph-mon[75015]: pgmap v2216: 321 pgs: 321 active+clean; 273 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 1.2 MiB/s wr, 16 op/s
Nov 25 08:54:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 08:54:42 compute-0 ceph-mon[75015]: pgmap v2217: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 08:54:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:54:42 compute-0 nova_compute[253538]: 2025-11-25 08:54:42.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:54:42 compute-0 nova_compute[253538]: 2025-11-25 08:54:42.831 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:43 compute-0 nova_compute[253538]: 2025-11-25 08:54:43.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:54:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 08:54:44 compute-0 nova_compute[253538]: 2025-11-25 08:54:44.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:54:44 compute-0 ceph-mon[75015]: pgmap v2218: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 08:54:45 compute-0 nova_compute[253538]: 2025-11-25 08:54:45.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:54:45 compute-0 nova_compute[253538]: 2025-11-25 08:54:45.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:54:45 compute-0 nova_compute[253538]: 2025-11-25 08:54:45.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:45 compute-0 nova_compute[253538]: 2025-11-25 08:54:45.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:45 compute-0 nova_compute[253538]: 2025-11-25 08:54:45.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:45 compute-0 nova_compute[253538]: 2025-11-25 08:54:45.577 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:54:45 compute-0 nova_compute[253538]: 2025-11-25 08:54:45.578 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:54:45 compute-0 nova_compute[253538]: 2025-11-25 08:54:45.619 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 08:54:45 compute-0 podman[376254]: 2025-11-25 08:54:45.853362568 +0000 UTC m=+0.106580902 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 08:54:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:54:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637140863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:54:46 compute-0 nova_compute[253538]: 2025-11-25 08:54:46.042 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:54:46 compute-0 nova_compute[253538]: 2025-11-25 08:54:46.137 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:54:46 compute-0 nova_compute[253538]: 2025-11-25 08:54:46.138 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:54:46 compute-0 nova_compute[253538]: 2025-11-25 08:54:46.144 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:54:46 compute-0 nova_compute[253538]: 2025-11-25 08:54:46.145 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:54:46 compute-0 nova_compute[253538]: 2025-11-25 08:54:46.409 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:54:46 compute-0 nova_compute[253538]: 2025-11-25 08:54:46.411 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3370MB free_disk=59.876190185546875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:54:46 compute-0 nova_compute[253538]: 2025-11-25 08:54:46.411 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:46 compute-0 nova_compute[253538]: 2025-11-25 08:54:46.411 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:47 compute-0 ceph-mon[75015]: pgmap v2219: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 08:54:47 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2637140863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:54:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:54:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 08:54:47 compute-0 nova_compute[253538]: 2025-11-25 08:54:47.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:48 compute-0 nova_compute[253538]: 2025-11-25 08:54:48.418 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 76611b0b-db06-4903-a22a-59b23a1e0d48 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:54:48 compute-0 nova_compute[253538]: 2025-11-25 08:54:48.419 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:54:48 compute-0 nova_compute[253538]: 2025-11-25 08:54:48.419 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 7e82fa8c-6663-439c-833c-2b28f22282a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:54:48 compute-0 nova_compute[253538]: 2025-11-25 08:54:48.420 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:54:48 compute-0 nova_compute[253538]: 2025-11-25 08:54:48.420 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:54:48 compute-0 nova_compute[253538]: 2025-11-25 08:54:48.713 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:54:48 compute-0 nova_compute[253538]: 2025-11-25 08:54:48.875 253542 DEBUG nova.network.neutron [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Successfully updated port: 52157627-d75e-4670-9215-6471bda94ba6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:54:48 compute-0 nova_compute[253538]: 2025-11-25 08:54:48.884 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:48.884 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:54:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:48.885 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:54:48 compute-0 nova_compute[253538]: 2025-11-25 08:54:48.910 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:54:48 compute-0 nova_compute[253538]: 2025-11-25 08:54:48.911 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquired lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:54:48 compute-0 nova_compute[253538]: 2025-11-25 08:54:48.911 253542 DEBUG nova.network.neutron [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:54:48 compute-0 nova_compute[253538]: 2025-11-25 08:54:48.991 253542 DEBUG nova.compute.manager [req-679a13d9-d6c6-4e67-9a89-b6bdd6dd4135 req-3d06c655-57c3-4e71-a7db-8c0ccca42559 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received event network-changed-52157627-d75e-4670-9215-6471bda94ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:54:48 compute-0 nova_compute[253538]: 2025-11-25 08:54:48.991 253542 DEBUG nova.compute.manager [req-679a13d9-d6c6-4e67-9a89-b6bdd6dd4135 req-3d06c655-57c3-4e71-a7db-8c0ccca42559 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Refreshing instance network info cache due to event network-changed-52157627-d75e-4670-9215-6471bda94ba6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:54:48 compute-0 nova_compute[253538]: 2025-11-25 08:54:48.992 253542 DEBUG oslo_concurrency.lockutils [req-679a13d9-d6c6-4e67-9a89-b6bdd6dd4135 req-3d06c655-57c3-4e71-a7db-8c0ccca42559 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:54:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:54:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4060622671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:54:49 compute-0 nova_compute[253538]: 2025-11-25 08:54:49.176 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:54:49 compute-0 nova_compute[253538]: 2025-11-25 08:54:49.181 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:54:49 compute-0 nova_compute[253538]: 2025-11-25 08:54:49.195 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:54:49 compute-0 nova_compute[253538]: 2025-11-25 08:54:49.233 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:54:49 compute-0 nova_compute[253538]: 2025-11-25 08:54:49.234 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:49 compute-0 ceph-mon[75015]: pgmap v2220: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 08:54:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4060622671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:54:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 08:54:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:49.888 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:50 compute-0 nova_compute[253538]: 2025-11-25 08:54:50.061 253542 DEBUG nova.network.neutron [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:54:50 compute-0 nova_compute[253538]: 2025-11-25 08:54:50.229 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:54:50 compute-0 nova_compute[253538]: 2025-11-25 08:54:50.622 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:50 compute-0 ceph-mon[75015]: pgmap v2221: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.829 253542 DEBUG nova.network.neutron [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updating instance_info_cache with network_info: [{"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:54:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 575 KiB/s wr, 12 op/s
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.884 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Releasing lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.885 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Instance network_info: |[{"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.885 253542 DEBUG oslo_concurrency.lockutils [req-679a13d9-d6c6-4e67-9a89-b6bdd6dd4135 req-3d06c655-57c3-4e71-a7db-8c0ccca42559 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.886 253542 DEBUG nova.network.neutron [req-679a13d9-d6c6-4e67-9a89-b6bdd6dd4135 req-3d06c655-57c3-4e71-a7db-8c0ccca42559 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Refreshing network info cache for port 52157627-d75e-4670-9215-6471bda94ba6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.888 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Start _get_guest_xml network_info=[{"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.892 253542 WARNING nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.897 253542 DEBUG nova.virt.libvirt.host [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.898 253542 DEBUG nova.virt.libvirt.host [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.905 253542 DEBUG nova.virt.libvirt.host [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.906 253542 DEBUG nova.virt.libvirt.host [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.906 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.906 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.907 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.907 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.907 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.907 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.907 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.907 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.908 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.908 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.908 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.908 253542 DEBUG nova.virt.hardware [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:54:51 compute-0 nova_compute[253538]: 2025-11-25 08:54:51.912 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:54:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:54:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3324861545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.349 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.373 253542 DEBUG nova.storage.rbd_utils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 7e82fa8c-6663-439c-833c-2b28f22282a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.376 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:54:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:54:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:54:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4053244308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.831 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.834 253542 DEBUG nova.virt.libvirt.vif [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:54:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-360976027',display_name='tempest-TestNetworkAdvancedServerOps-server-360976027',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-360976027',id=117,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMP7QXPZDPXrUql830fJueudXHoXZ2zww6EaNe3PFgcJ0/Sar4viBph4zMnjSpS0mXHOOoM16QZ2fgUMXO9FFKEQnCoRRYMYlR8af4rdgpILBOKGdgI2okSUsyg3suBciA==',key_name='tempest-TestNetworkAdvancedServerOps-1316847301',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-p9ynh0fp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:54:36Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=7e82fa8c-6663-439c-833c-2b28f22282a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.834 253542 DEBUG nova.network.os_vif_util [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.836 253542 DEBUG nova.network.os_vif_util [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.838 253542 DEBUG nova.objects.instance [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e82fa8c-6663-439c-833c-2b28f22282a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.840 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.858 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:54:52 compute-0 nova_compute[253538]:   <uuid>7e82fa8c-6663-439c-833c-2b28f22282a8</uuid>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   <name>instance-00000075</name>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-360976027</nova:name>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:54:51</nova:creationTime>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:54:52 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:54:52 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:54:52 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:54:52 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:54:52 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:54:52 compute-0 nova_compute[253538]:         <nova:user uuid="009378dc36154271ba5b4590ce67ddde">tempest-TestNetworkAdvancedServerOps-1132090577-project-member</nova:user>
Nov 25 08:54:52 compute-0 nova_compute[253538]:         <nova:project uuid="7dcaf3c96bfc4db3a41291debd385c67">tempest-TestNetworkAdvancedServerOps-1132090577</nova:project>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:54:52 compute-0 nova_compute[253538]:         <nova:port uuid="52157627-d75e-4670-9215-6471bda94ba6">
Nov 25 08:54:52 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <system>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <entry name="serial">7e82fa8c-6663-439c-833c-2b28f22282a8</entry>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <entry name="uuid">7e82fa8c-6663-439c-833c-2b28f22282a8</entry>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     </system>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   <os>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   </os>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   <features>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   </features>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7e82fa8c-6663-439c-833c-2b28f22282a8_disk">
Nov 25 08:54:52 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       </source>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:54:52 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7e82fa8c-6663-439c-833c-2b28f22282a8_disk.config">
Nov 25 08:54:52 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       </source>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:54:52 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:a5:b5:38"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <target dev="tap52157627-d7"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8/console.log" append="off"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <video>
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     </video>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:54:52 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:54:52 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:54:52 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:54:52 compute-0 nova_compute[253538]: </domain>
Nov 25 08:54:52 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.859 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Preparing to wait for external event network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.859 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.860 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.860 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.861 253542 DEBUG nova.virt.libvirt.vif [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:54:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-360976027',display_name='tempest-TestNetworkAdvancedServerOps-server-360976027',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-360976027',id=117,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMP7QXPZDPXrUql830fJueudXHoXZ2zww6EaNe3PFgcJ0/Sar4viBph4zMnjSpS0mXHOOoM16QZ2fgUMXO9FFKEQnCoRRYMYlR8af4rdgpILBOKGdgI2okSUsyg3suBciA==',key_name='tempest-TestNetworkAdvancedServerOps-1316847301',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-p9ynh0fp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:54:36Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=7e82fa8c-6663-439c-833c-2b28f22282a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.861 253542 DEBUG nova.network.os_vif_util [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.862 253542 DEBUG nova.network.os_vif_util [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.862 253542 DEBUG os_vif [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.863 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.864 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.864 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.868 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52157627-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.869 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52157627-d7, col_values=(('external_ids', {'iface-id': '52157627-d75e-4670-9215-6471bda94ba6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:b5:38', 'vm-uuid': '7e82fa8c-6663-439c-833c-2b28f22282a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.870 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:52 compute-0 NetworkManager[48915]: <info>  [1764060892.8714] manager: (tap52157627-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/486)
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:52 compute-0 nova_compute[253538]: 2025-11-25 08:54:52.878 253542 INFO os_vif [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7')
Nov 25 08:54:53 compute-0 nova_compute[253538]: 2025-11-25 08:54:53.155 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:54:53 compute-0 nova_compute[253538]: 2025-11-25 08:54:53.156 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:54:53 compute-0 nova_compute[253538]: 2025-11-25 08:54:53.156 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] No VIF found with MAC fa:16:3e:a5:b5:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:54:53 compute-0 nova_compute[253538]: 2025-11-25 08:54:53.156 253542 INFO nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Using config drive
Nov 25 08:54:53 compute-0 nova_compute[253538]: 2025-11-25 08:54:53.194 253542 DEBUG nova.storage.rbd_utils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 7e82fa8c-6663-439c-833c-2b28f22282a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:54:53 compute-0 ceph-mon[75015]: pgmap v2222: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 575 KiB/s wr, 12 op/s
Nov 25 08:54:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3324861545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:54:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4053244308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:54:53
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'backups', 'images', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'vms']
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:54:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:54:54 compute-0 nova_compute[253538]: 2025-11-25 08:54:54.073 253542 INFO nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Creating config drive at /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8/disk.config
Nov 25 08:54:54 compute-0 nova_compute[253538]: 2025-11-25 08:54:54.079 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplkqel1zb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:54:54 compute-0 nova_compute[253538]: 2025-11-25 08:54:54.232 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplkqel1zb" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:54:54 compute-0 nova_compute[253538]: 2025-11-25 08:54:54.269 253542 DEBUG nova.storage.rbd_utils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] rbd image 7e82fa8c-6663-439c-833c-2b28f22282a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:54:54 compute-0 nova_compute[253538]: 2025-11-25 08:54:54.274 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8/disk.config 7e82fa8c-6663-439c-833c-2b28f22282a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:54:54 compute-0 ceph-mon[75015]: pgmap v2223: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Nov 25 08:54:54 compute-0 nova_compute[253538]: 2025-11-25 08:54:54.709 253542 DEBUG nova.network.neutron [req-679a13d9-d6c6-4e67-9a89-b6bdd6dd4135 req-3d06c655-57c3-4e71-a7db-8c0ccca42559 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updated VIF entry in instance network info cache for port 52157627-d75e-4670-9215-6471bda94ba6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:54:54 compute-0 nova_compute[253538]: 2025-11-25 08:54:54.710 253542 DEBUG nova.network.neutron [req-679a13d9-d6c6-4e67-9a89-b6bdd6dd4135 req-3d06c655-57c3-4e71-a7db-8c0ccca42559 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updating instance_info_cache with network_info: [{"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:54:54 compute-0 nova_compute[253538]: 2025-11-25 08:54:54.725 253542 DEBUG oslo_concurrency.lockutils [req-679a13d9-d6c6-4e67-9a89-b6bdd6dd4135 req-3d06c655-57c3-4e71-a7db-8c0ccca42559 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:54:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s wr, 1 op/s
Nov 25 08:54:56 compute-0 nova_compute[253538]: 2025-11-25 08:54:56.455 253542 DEBUG oslo_concurrency.processutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8/disk.config 7e82fa8c-6663-439c-833c-2b28f22282a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:54:56 compute-0 nova_compute[253538]: 2025-11-25 08:54:56.455 253542 INFO nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Deleting local config drive /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8/disk.config because it was imported into RBD.
Nov 25 08:54:56 compute-0 kernel: tap52157627-d7: entered promiscuous mode
Nov 25 08:54:56 compute-0 nova_compute[253538]: 2025-11-25 08:54:56.522 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:56 compute-0 ovn_controller[152859]: 2025-11-25T08:54:56Z|01173|binding|INFO|Claiming lport 52157627-d75e-4670-9215-6471bda94ba6 for this chassis.
Nov 25 08:54:56 compute-0 ovn_controller[152859]: 2025-11-25T08:54:56Z|01174|binding|INFO|52157627-d75e-4670-9215-6471bda94ba6: Claiming fa:16:3e:a5:b5:38 10.100.0.6
Nov 25 08:54:56 compute-0 NetworkManager[48915]: <info>  [1764060896.5249] manager: (tap52157627-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/487)
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.530 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:b5:38 10.100.0.6'], port_security=['fa:16:3e:a5:b5:38 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e82fa8c-6663-439c-833c-2b28f22282a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8861bf3c-0dd7-44a1-b3d5-4af8cd78fda2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1772063d-dae9-4681-ab29-d1a2754b4cf7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=52157627-d75e-4670-9215-6471bda94ba6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.531 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 52157627-d75e-4670-9215-6471bda94ba6 in datapath 703bdacb-53cd-40a1-9c2c-c632a29e049b bound to our chassis
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.532 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 703bdacb-53cd-40a1-9c2c-c632a29e049b
Nov 25 08:54:56 compute-0 ovn_controller[152859]: 2025-11-25T08:54:56Z|01175|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 ovn-installed in OVS
Nov 25 08:54:56 compute-0 ovn_controller[152859]: 2025-11-25T08:54:56Z|01176|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 up in Southbound
Nov 25 08:54:56 compute-0 nova_compute[253538]: 2025-11-25 08:54:56.543 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.548 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bc42ec74-dd98-45a1-a8d0-9027e632f0f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.549 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap703bdacb-51 in ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.552 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap703bdacb-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.552 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2662a858-ec87-49c7-a988-ec5feb7a242f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.553 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[13ccb0bd-41a3-4c79-9a5c-43ffc0f2cf72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:56 compute-0 nova_compute[253538]: 2025-11-25 08:54:56.559 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:56 compute-0 systemd-udevd[376448]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.568 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[703fc4af-05c2-4da4-9233-b1efd593b5b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:56 compute-0 NetworkManager[48915]: <info>  [1764060896.5814] device (tap52157627-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:54:56 compute-0 NetworkManager[48915]: <info>  [1764060896.5827] device (tap52157627-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:54:56 compute-0 systemd-machined[215790]: New machine qemu-146-instance-00000075.
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.582 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[446a638b-a76b-4c7b-96dd-c35c472c898c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:56 compute-0 systemd[1]: Started Virtual Machine qemu-146-instance-00000075.
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.625 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ffe411-8b3a-4bfc-9327-98a5e17a71f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.632 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2adb51e3-6a4a-4905-8427-889fb6663782]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:56 compute-0 systemd-udevd[376454]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:54:56 compute-0 NetworkManager[48915]: <info>  [1764060896.6342] manager: (tap703bdacb-50): new Veth device (/org/freedesktop/NetworkManager/Devices/488)
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.686 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[43b772a6-1ade-45b3-af00-2783cc06b094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.691 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b84c5b-4f24-41fd-baa5-a8a3ae3bd956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:56 compute-0 NetworkManager[48915]: <info>  [1764060896.7251] device (tap703bdacb-50): carrier: link connected
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.734 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2d87f492-8b70-4832-ba66-9405e954f7ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.760 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5caf2357-7e37-4093-9886-0e50fe6eff38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap703bdacb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:81:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 347], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625132, 'reachable_time': 20172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376482, 'error': None, 'target': 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.787 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1086c4-96e6-4df2-b1ef-7cdea9e3c93f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:81bd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 625132, 'tstamp': 625132}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376483, 'error': None, 'target': 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:56 compute-0 nova_compute[253538]: 2025-11-25 08:54:56.797 253542 DEBUG nova.compute.manager [req-96fc5201-313e-466a-a925-d73bc8ae1057 req-d6b3cfc4-0efc-4fde-b559-bcf00a280a49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received event network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:54:56 compute-0 nova_compute[253538]: 2025-11-25 08:54:56.798 253542 DEBUG oslo_concurrency.lockutils [req-96fc5201-313e-466a-a925-d73bc8ae1057 req-d6b3cfc4-0efc-4fde-b559-bcf00a280a49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:56 compute-0 nova_compute[253538]: 2025-11-25 08:54:56.798 253542 DEBUG oslo_concurrency.lockutils [req-96fc5201-313e-466a-a925-d73bc8ae1057 req-d6b3cfc4-0efc-4fde-b559-bcf00a280a49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:56 compute-0 nova_compute[253538]: 2025-11-25 08:54:56.799 253542 DEBUG oslo_concurrency.lockutils [req-96fc5201-313e-466a-a925-d73bc8ae1057 req-d6b3cfc4-0efc-4fde-b559-bcf00a280a49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:56 compute-0 nova_compute[253538]: 2025-11-25 08:54:56.799 253542 DEBUG nova.compute.manager [req-96fc5201-313e-466a-a925-d73bc8ae1057 req-d6b3cfc4-0efc-4fde-b559-bcf00a280a49 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Processing event network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.816 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[73b7b0da-7c32-4517-a08f-f78fba4bf168]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap703bdacb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:81:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 347], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625132, 'reachable_time': 20172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376484, 'error': None, 'target': 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.866 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[85fd4858-cae6-4ede-a9e1-312e4bd3f122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.938 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ff426b0d-5015-4823-b17e-77402984b5c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.939 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap703bdacb-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.939 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.940 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap703bdacb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:56 compute-0 kernel: tap703bdacb-50: entered promiscuous mode
Nov 25 08:54:56 compute-0 nova_compute[253538]: 2025-11-25 08:54:56.942 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:56 compute-0 NetworkManager[48915]: <info>  [1764060896.9432] manager: (tap703bdacb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/489)
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.944 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap703bdacb-50, col_values=(('external_ids', {'iface-id': '583866cf-82da-4259-9189-db9f58620872'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:56 compute-0 ovn_controller[152859]: 2025-11-25T08:54:56Z|01177|binding|INFO|Releasing lport 583866cf-82da-4259-9189-db9f58620872 from this chassis (sb_readonly=0)
Nov 25 08:54:56 compute-0 nova_compute[253538]: 2025-11-25 08:54:56.945 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:56 compute-0 nova_compute[253538]: 2025-11-25 08:54:56.958 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.959 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/703bdacb-53cd-40a1-9c2c-c632a29e049b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/703bdacb-53cd-40a1-9c2c-c632a29e049b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.960 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d752028-5509-4655-aa1d-16b008b002c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.961 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-703bdacb-53cd-40a1-9c2c-c632a29e049b
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/703bdacb-53cd-40a1-9c2c-c632a29e049b.pid.haproxy
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 703bdacb-53cd-40a1-9c2c-c632a29e049b
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:54:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:56.961 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'env', 'PROCESS_TAG=haproxy-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/703bdacb-53cd-40a1-9c2c-c632a29e049b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:54:57 compute-0 ceph-mon[75015]: pgmap v2224: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s wr, 1 op/s
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.320 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.321 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060897.3191752, 7e82fa8c-6663-439c-833c-2b28f22282a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.321 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] VM Started (Lifecycle Event)
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.329 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.334 253542 INFO nova.virt.libvirt.driver [-] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Instance spawned successfully.
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.335 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.341 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.346 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.360 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.360 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.361 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.361 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.361 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.362 253542 DEBUG nova.virt.libvirt.driver [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.366 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.366 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060897.3237886, 7e82fa8c-6663-439c-833c-2b28f22282a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.366 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] VM Paused (Lifecycle Event)
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.395 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.399 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060897.3268743, 7e82fa8c-6663-439c-833c-2b28f22282a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.399 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] VM Resumed (Lifecycle Event)
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.419 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.423 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.428 253542 INFO nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Took 20.99 seconds to spawn the instance on the hypervisor.
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.429 253542 DEBUG nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.447 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:54:57 compute-0 podman[376559]: 2025-11-25 08:54:57.373299106 +0000 UTC m=+0.039702951 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:54:57 compute-0 podman[376559]: 2025-11-25 08:54:57.485184472 +0000 UTC m=+0.151588217 container create f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.485 253542 INFO nova.compute.manager [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Took 22.05 seconds to build instance.
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.498 253542 DEBUG oslo_concurrency.lockutils [None req-d945cceb-25fb-483b-a9c5-26730e041c66 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:54:57 compute-0 systemd[1]: Started libpod-conmon-f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3.scope.
Nov 25 08:54:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099a140f52146212a9c7c93bc8139169246af3392159627f1d84ff95a83d956b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:54:57 compute-0 podman[376559]: 2025-11-25 08:54:57.613551906 +0000 UTC m=+0.279955651 container init f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:54:57 compute-0 podman[376559]: 2025-11-25 08:54:57.621853172 +0000 UTC m=+0.288256917 container start f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:54:57 compute-0 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[376574]: [NOTICE]   (376578) : New worker (376580) forked
Nov 25 08:54:57 compute-0 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[376574]: [NOTICE]   (376578) : Loading success.
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.839 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 596 B/s rd, 21 KiB/s wr, 2 op/s
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.870 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.975 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.976 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.976 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.976 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.977 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.978 253542 INFO nova.compute.manager [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Terminating instance
Nov 25 08:54:57 compute-0 nova_compute[253538]: 2025-11-25 08:54:57.979 253542 DEBUG nova.compute.manager [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:54:58 compute-0 kernel: tap0797e76b-3f (unregistering): left promiscuous mode
Nov 25 08:54:58 compute-0 NetworkManager[48915]: <info>  [1764060898.0824] device (tap0797e76b-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.096 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:58 compute-0 ovn_controller[152859]: 2025-11-25T08:54:58Z|01178|binding|INFO|Releasing lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 from this chassis (sb_readonly=0)
Nov 25 08:54:58 compute-0 ovn_controller[152859]: 2025-11-25T08:54:58Z|01179|binding|INFO|Setting lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 down in Southbound
Nov 25 08:54:58 compute-0 ovn_controller[152859]: 2025-11-25T08:54:58Z|01180|binding|INFO|Removing iface tap0797e76b-3f ovn-installed in OVS
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.103 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.111 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:4a:e2 10.100.0.30'], port_security=['fa:16:3e:21:4a:e2 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '5e14b791-8860-44a3-87e0-5c7fcc1dcf12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96035e03-5ed4-4905-a41d-a5f2952043fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927e21d4-cff1-4c45-a730-0e5e5d57b4cb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0797e76b-3f15-4c7e-ae0d-0f4813d59967) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.112 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0797e76b-3f15-4c7e-ae0d-0f4813d59967 in datapath 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 unbound from our chassis
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.115 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:40:8d 2001:db8:0:1:f816:3eff:fec9:408d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adce94c1-2312-402b-8ec0-ddb674bf630f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7654d474-edb7-40de-964c-94dde0499c7f) old=Port_Binding(mac=['fa:16:3e:c9:40:8d 2001:db8::f816:3eff:fec9:408d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fec9:408d/64', 'neutron:device_id': 'ovnmeta-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3def65f0-e242-4a46-9e8e-162652f023c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5d3235b532e4e7e87aab12bda12e650', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.117 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.145 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5287f052-8700-47a7-9ca4-f9413eb2b009]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000074.scope: Deactivated successfully.
Nov 25 08:54:58 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000074.scope: Consumed 16.816s CPU time.
Nov 25 08:54:58 compute-0 systemd-machined[215790]: Machine qemu-145-instance-00000074 terminated.
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.186 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0d226cdf-5763-47c5-9115-198940fa2dcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.191 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[043db0e2-7ff9-41e5-aea1-105c5089bf2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 kernel: tap0797e76b-3f: entered promiscuous mode
Nov 25 08:54:58 compute-0 ovn_controller[152859]: 2025-11-25T08:54:58Z|01181|binding|INFO|Claiming lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 for this chassis.
Nov 25 08:54:58 compute-0 ovn_controller[152859]: 2025-11-25T08:54:58Z|01182|binding|INFO|0797e76b-3f15-4c7e-ae0d-0f4813d59967: Claiming fa:16:3e:21:4a:e2 10.100.0.30
Nov 25 08:54:58 compute-0 NetworkManager[48915]: <info>  [1764060898.2031] manager: (tap0797e76b-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/490)
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.202 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:58 compute-0 systemd-udevd[376462]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.211 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:4a:e2 10.100.0.30'], port_security=['fa:16:3e:21:4a:e2 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '5e14b791-8860-44a3-87e0-5c7fcc1dcf12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96035e03-5ed4-4905-a41d-a5f2952043fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927e21d4-cff1-4c45-a730-0e5e5d57b4cb, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0797e76b-3f15-4c7e-ae0d-0f4813d59967) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:54:58 compute-0 kernel: tap0797e76b-3f (unregistering): left promiscuous mode
Nov 25 08:54:58 compute-0 ovn_controller[152859]: 2025-11-25T08:54:58Z|01183|binding|INFO|Setting lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 ovn-installed in OVS
Nov 25 08:54:58 compute-0 ovn_controller[152859]: 2025-11-25T08:54:58Z|01184|binding|INFO|Setting lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 up in Southbound
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:58 compute-0 ovn_controller[152859]: 2025-11-25T08:54:58Z|01185|binding|INFO|Releasing lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 from this chassis (sb_readonly=1)
Nov 25 08:54:58 compute-0 ovn_controller[152859]: 2025-11-25T08:54:58Z|01186|binding|INFO|Removing iface tap0797e76b-3f ovn-installed in OVS
Nov 25 08:54:58 compute-0 ovn_controller[152859]: 2025-11-25T08:54:58Z|01187|if_status|INFO|Dropped 1 log messages in last 320 seconds (most recently, 320 seconds ago) due to excessive rate
Nov 25 08:54:58 compute-0 ovn_controller[152859]: 2025-11-25T08:54:58Z|01188|if_status|INFO|Not setting lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 down as sb is readonly
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.234 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:58 compute-0 ovn_controller[152859]: 2025-11-25T08:54:58Z|01189|binding|INFO|Releasing lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 from this chassis (sb_readonly=0)
Nov 25 08:54:58 compute-0 ovn_controller[152859]: 2025-11-25T08:54:58Z|01190|binding|INFO|Setting lport 0797e76b-3f15-4c7e-ae0d-0f4813d59967 down in Southbound
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.245 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:4a:e2 10.100.0.30'], port_security=['fa:16:3e:21:4a:e2 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '5e14b791-8860-44a3-87e0-5c7fcc1dcf12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96035e03-5ed4-4905-a41d-a5f2952043fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927e21d4-cff1-4c45-a730-0e5e5d57b4cb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0797e76b-3f15-4c7e-ae0d-0f4813d59967) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.243 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[683a06c5-4978-4603-b65b-a2c407854e5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.246 253542 INFO nova.virt.libvirt.driver [-] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Instance destroyed successfully.
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.247 253542 DEBUG nova.objects.instance [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.253 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.257 253542 DEBUG nova.virt.libvirt.vif [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-160796765',display_name='tempest-TestNetworkBasicOps-server-160796765',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-160796765',id=116,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA0f+XBfXdSU4e2+02qYGk42nbRwIu1Vshv2fAHcU2M9HY4bsiawDBYsAh0BiTPD2qOg4I+4cye8z+LuwXaU2+YwQ92/nUDN4SrklXs8+Sfqmmth2xZ1VW9badcZ/6ZoHg==',key_name='tempest-TestNetworkBasicOps-278129425',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-8e5h58cs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:54Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=5e14b791-8860-44a3-87e0-5c7fcc1dcf12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.258 253542 DEBUG nova.network.os_vif_util [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "address": "fa:16:3e:21:4a:e2", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0797e76b-3f", "ovs_interfaceid": "0797e76b-3f15-4c7e-ae0d-0f4813d59967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.258 253542 DEBUG nova.network.os_vif_util [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:4a:e2,bridge_name='br-int',has_traffic_filtering=True,id=0797e76b-3f15-4c7e-ae0d-0f4813d59967,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0797e76b-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.259 253542 DEBUG os_vif [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:4a:e2,bridge_name='br-int',has_traffic_filtering=True,id=0797e76b-3f15-4c7e-ae0d-0f4813d59967,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0797e76b-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.260 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.260 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0797e76b-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.264 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.266 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.269 253542 INFO os_vif [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:4a:e2,bridge_name='br-int',has_traffic_filtering=True,id=0797e76b-3f15-4c7e-ae0d-0f4813d59967,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0797e76b-3f')
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.278 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[17f6c713-4b79-4d5f-a597-348d1b340418]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f0f7d83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:fd:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 742, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 742, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616521, 'reachable_time': 30317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376603, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.299 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3f631bed-42aa-4d7e-896e-44627edab412]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap8f0f7d83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616535, 'tstamp': 616535}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376618, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f0f7d83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616539, 'tstamp': 616539}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376618, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.300 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f0f7d83-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.302 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.303 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f0f7d83-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.303 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.304 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f0f7d83-b0, col_values=(('external_ids', {'iface-id': '4bc48b70-3942-46d1-ac71-5fa19e5d9ae3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.304 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.306 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 7654d474-edb7-40de-964c-94dde0499c7f in datapath 3def65f0-e242-4a46-9e8e-162652f023c1 unbound from our chassis
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.307 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3def65f0-e242-4a46-9e8e-162652f023c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.308 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bffe5059-c1f6-4934-a3bf-85dea9e560f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.310 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0797e76b-3f15-4c7e-ae0d-0f4813d59967 in datapath 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 unbound from our chassis
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.311 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.326 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[738e0517-5014-41cd-b09e-bed374a90e48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.361 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[02a5fcd7-884e-415a-a84f-3c6dd83527c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.365 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6444d208-28e3-4c11-a9e0-77c7abe110d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.400 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a037fd-922c-4588-b9bc-fe7a9594a7a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.418 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[159eec9e-216e-4781-ad64-6d8ccfe8d0c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f0f7d83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:fd:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 9, 'rx_bytes': 742, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 9, 'rx_bytes': 742, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616521, 'reachable_time': 30317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376628, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.432 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c40ec250-36bc-4a4e-a0df-6909cb7ade47]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap8f0f7d83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616535, 'tstamp': 616535}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376629, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f0f7d83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616539, 'tstamp': 616539}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376629, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.434 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f0f7d83-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.435 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.438 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.439 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f0f7d83-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.439 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.440 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f0f7d83-b0, col_values=(('external_ids', {'iface-id': '4bc48b70-3942-46d1-ac71-5fa19e5d9ae3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.440 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.441 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0797e76b-3f15-4c7e-ae0d-0f4813d59967 in datapath 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 unbound from our chassis
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.443 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.463 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a4286d7c-4527-460e-88be-edb0c606a2f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.495 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6577bed7-d4c0-42e1-9d7a-40fdbf8b46d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.498 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[321754e9-12f9-4ee9-8fb3-4c040dc37dec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.534 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a433dd5f-32b4-4c90-ae81-e337f44de5ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.557 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb4eab5-8f55-4fb7-a68a-f7d183331729]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f0f7d83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:fd:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 11, 'rx_bytes': 742, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 11, 'rx_bytes': 742, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616521, 'reachable_time': 30317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376635, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.581 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e9b9a7-7d49-47f4-b392-52e72a83e25f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap8f0f7d83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616535, 'tstamp': 616535}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376636, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f0f7d83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616539, 'tstamp': 616539}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376636, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.583 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f0f7d83-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.585 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.586 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f0f7d83-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.586 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.587 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f0f7d83-b0, col_values=(('external_ids', {'iface-id': '4bc48b70-3942-46d1-ac71-5fa19e5d9ae3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:54:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:54:58.587 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.895 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received event network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.896 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.896 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.896 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.897 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] No waiting events found dispatching network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.897 253542 WARNING nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received unexpected event network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 for instance with vm_state active and task_state None.
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.897 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-unplugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.897 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.898 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.898 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.898 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] No waiting events found dispatching network-vif-unplugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.898 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-unplugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.899 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.899 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.900 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.900 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.900 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] No waiting events found dispatching network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.901 253542 WARNING nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received unexpected event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 for instance with vm_state active and task_state deleting.
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.901 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.901 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.902 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.902 253542 DEBUG oslo_concurrency.lockutils [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.903 253542 DEBUG nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] No waiting events found dispatching network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:54:58 compute-0 nova_compute[253538]: 2025-11-25 08:54:58.903 253542 WARNING nova.compute.manager [req-66b7386c-3dca-4f59-baae-52782ba28094 req-665cd3a8-7025-4f94-b4b5-eec501b0d63c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received unexpected event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 for instance with vm_state active and task_state deleting.
Nov 25 08:54:59 compute-0 ceph-mon[75015]: pgmap v2225: 321 pgs: 321 active+clean; 293 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 596 B/s rd, 21 KiB/s wr, 2 op/s
Nov 25 08:54:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 259 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 23 KiB/s wr, 54 op/s
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.241 253542 INFO nova.virt.libvirt.driver [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Deleting instance files /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12_del
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.243 253542 INFO nova.virt.libvirt.driver [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Deletion of /var/lib/nova/instances/5e14b791-8860-44a3-87e0-5c7fcc1dcf12_del complete
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.303 253542 INFO nova.compute.manager [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Took 2.32 seconds to destroy the instance on the hypervisor.
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.304 253542 DEBUG oslo.service.loopingcall [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.305 253542 DEBUG nova.compute.manager [-] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.305 253542 DEBUG nova.network.neutron [-] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.992 253542 DEBUG nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.993 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.993 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.994 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.994 253542 DEBUG nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] No waiting events found dispatching network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.994 253542 WARNING nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received unexpected event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 for instance with vm_state active and task_state deleting.
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.994 253542 DEBUG nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-unplugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.995 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.995 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.995 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.996 253542 DEBUG nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] No waiting events found dispatching network-vif-unplugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.996 253542 DEBUG nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-unplugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.996 253542 DEBUG nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.997 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.997 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.997 253542 DEBUG oslo_concurrency.lockutils [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.997 253542 DEBUG nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] No waiting events found dispatching network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:55:00 compute-0 nova_compute[253538]: 2025-11-25 08:55:00.998 253542 WARNING nova.compute.manager [req-738c4fd5-77b4-4ee4-894f-541311c117ad req-4ec5de02-214d-4846-8535-d4e99fb77966 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received unexpected event network-vif-plugged-0797e76b-3f15-4c7e-ae0d-0f4813d59967 for instance with vm_state active and task_state deleting.
Nov 25 08:55:01 compute-0 ceph-mon[75015]: pgmap v2226: 321 pgs: 321 active+clean; 259 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 23 KiB/s wr, 54 op/s
Nov 25 08:55:01 compute-0 nova_compute[253538]: 2025-11-25 08:55:01.129 253542 DEBUG nova.network.neutron [-] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:55:01 compute-0 nova_compute[253538]: 2025-11-25 08:55:01.145 253542 INFO nova.compute.manager [-] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Took 0.84 seconds to deallocate network for instance.
Nov 25 08:55:01 compute-0 nova_compute[253538]: 2025-11-25 08:55:01.204 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:01 compute-0 nova_compute[253538]: 2025-11-25 08:55:01.205 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:01 compute-0 nova_compute[253538]: 2025-11-25 08:55:01.264 253542 DEBUG nova.compute.manager [req-2bca0e44-50b1-4c27-8d42-adf29616893b req-c826736a-d3b6-4868-a196-2c439a06afee b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Received event network-vif-deleted-0797e76b-3f15-4c7e-ae0d-0f4813d59967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:01 compute-0 nova_compute[253538]: 2025-11-25 08:55:01.302 253542 DEBUG oslo_concurrency.processutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:55:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:55:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347868721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:55:01 compute-0 nova_compute[253538]: 2025-11-25 08:55:01.779 253542 DEBUG oslo_concurrency.processutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:55:01 compute-0 nova_compute[253538]: 2025-11-25 08:55:01.786 253542 DEBUG nova.compute.provider_tree [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:55:01 compute-0 nova_compute[253538]: 2025-11-25 08:55:01.803 253542 DEBUG nova.scheduler.client.report [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:55:01 compute-0 nova_compute[253538]: 2025-11-25 08:55:01.821 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 235 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 23 KiB/s wr, 91 op/s
Nov 25 08:55:01 compute-0 nova_compute[253538]: 2025-11-25 08:55:01.853 253542 INFO nova.scheduler.client.report [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 5e14b791-8860-44a3-87e0-5c7fcc1dcf12
Nov 25 08:55:01 compute-0 nova_compute[253538]: 2025-11-25 08:55:01.908 253542 DEBUG oslo_concurrency.lockutils [None req-cc824f98-43a0-4581-947b-65ade4def281 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e14b791-8860-44a3-87e0-5c7fcc1dcf12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3347868721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:55:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:55:02 compute-0 nova_compute[253538]: 2025-11-25 08:55:02.841 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:03 compute-0 ceph-mon[75015]: pgmap v2227: 321 pgs: 321 active+clean; 235 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 23 KiB/s wr, 91 op/s
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.170 253542 DEBUG nova.compute.manager [req-433556bd-b9b5-4e0b-be37-a3f4ac0bb0c1 req-d49c2388-0fd1-4bb7-96d5-a3894c0f8606 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received event network-changed-52157627-d75e-4670-9215-6471bda94ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.171 253542 DEBUG nova.compute.manager [req-433556bd-b9b5-4e0b-be37-a3f4ac0bb0c1 req-d49c2388-0fd1-4bb7-96d5-a3894c0f8606 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Refreshing instance network info cache due to event network-changed-52157627-d75e-4670-9215-6471bda94ba6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.171 253542 DEBUG oslo_concurrency.lockutils [req-433556bd-b9b5-4e0b-be37-a3f4ac0bb0c1 req-d49c2388-0fd1-4bb7-96d5-a3894c0f8606 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.171 253542 DEBUG oslo_concurrency.lockutils [req-433556bd-b9b5-4e0b-be37-a3f4ac0bb0c1 req-d49c2388-0fd1-4bb7-96d5-a3894c0f8606 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.172 253542 DEBUG nova.network.neutron [req-433556bd-b9b5-4e0b-be37-a3f4ac0bb0c1 req-d49c2388-0fd1-4bb7-96d5-a3894c0f8606 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Refreshing network info cache for port 52157627-d75e-4670-9215-6471bda94ba6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.263 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.368 253542 DEBUG oslo_concurrency.lockutils [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "interface-76611b0b-db06-4903-a22a-59b23a1e0d48-1959aca7-b25c-4fe5-b59a-70db352af78b" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.369 253542 DEBUG oslo_concurrency.lockutils [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "interface-76611b0b-db06-4903-a22a-59b23a1e0d48-1959aca7-b25c-4fe5-b59a-70db352af78b" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.384 253542 DEBUG nova.objects.instance [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'flavor' on Instance uuid 76611b0b-db06-4903-a22a-59b23a1e0d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.404 253542 DEBUG nova.virt.libvirt.vif [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:52:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1241900398',display_name='tempest-TestNetworkBasicOps-server-1241900398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1241900398',id=114,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQnIncww1mRwX9O2GT7bKPsyEkgA1/4oI48uojAID9Vm7PrMk+vKtiibHK4EoiYH2/wbPYy0HX20b3AH2Q3Q4yusIjxaAQq6AUEbJthvns45ZkdO1WCW3z0AmP1FYwxQ==',key_name='tempest-TestNetworkBasicOps-922693977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-1z0lb28a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:02Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=76611b0b-db06-4903-a22a-59b23a1e0d48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.405 253542 DEBUG nova.network.os_vif_util [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.406 253542 DEBUG nova.network.os_vif_util [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:c5:52,bridge_name='br-int',has_traffic_filtering=True,id=1959aca7-b25c-4fe5-b59a-70db352af78b,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1959aca7-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.410 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:c5:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1959aca7-b2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.413 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:c5:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1959aca7-b2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.416 253542 DEBUG nova.virt.libvirt.driver [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Attempting to detach device tap1959aca7-b2 from instance 76611b0b-db06-4903-a22a-59b23a1e0d48 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.416 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] detach device xml: <interface type="ethernet">
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:d6:c5:52"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <target dev="tap1959aca7-b2"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]: </interface>
Nov 25 08:55:03 compute-0 nova_compute[253538]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.425 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:c5:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1959aca7-b2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.430 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d6:c5:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1959aca7-b2"/></interface>not found in domain: <domain type='kvm' id='142'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <name>instance-00000072</name>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <uuid>76611b0b-db06-4903-a22a-59b23a1e0d48</uuid>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:name>tempest-TestNetworkBasicOps-server-1241900398</nova:name>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:53:30</nova:creationTime>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:port uuid="8f1fcc3c-5f46-4272-be9b-4d5213b3aceb">
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:port uuid="1959aca7-b25c-4fe5-b59a-70db352af78b">
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:55:03 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <system>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <entry name='serial'>76611b0b-db06-4903-a22a-59b23a1e0d48</entry>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <entry name='uuid'>76611b0b-db06-4903-a22a-59b23a1e0d48</entry>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </system>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <os>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </os>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <features>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </features>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/76611b0b-db06-4903-a22a-59b23a1e0d48_disk' index='2'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       </source>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/76611b0b-db06-4903-a22a-59b23a1e0d48_disk.config' index='1'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       </source>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:36:b2:21'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target dev='tap8f1fcc3c-5f'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:d6:c5:52'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target dev='tap1959aca7-b2'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='net1'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/console.log' append='off'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       </target>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/console.log' append='off'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </console>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </input>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </input>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </input>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <video>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </video>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c344,c569</label>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c344,c569</imagelabel>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:55:03 compute-0 nova_compute[253538]: </domain>
Nov 25 08:55:03 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.435 253542 INFO nova.virt.libvirt.driver [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully detached device tap1959aca7-b2 from instance 76611b0b-db06-4903-a22a-59b23a1e0d48 from the persistent domain config.
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.436 253542 DEBUG nova.virt.libvirt.driver [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] (1/8): Attempting to detach device tap1959aca7-b2 with device alias net1 from instance 76611b0b-db06-4903-a22a-59b23a1e0d48 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.436 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] detach device xml: <interface type="ethernet">
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <mac address="fa:16:3e:d6:c5:52"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <model type="virtio"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <mtu size="1442"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <target dev="tap1959aca7-b2"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]: </interface>
Nov 25 08:55:03 compute-0 nova_compute[253538]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 08:55:03 compute-0 kernel: tap1959aca7-b2 (unregistering): left promiscuous mode
Nov 25 08:55:03 compute-0 ovn_controller[152859]: 2025-11-25T08:55:03Z|01191|binding|INFO|Releasing lport 1959aca7-b25c-4fe5-b59a-70db352af78b from this chassis (sb_readonly=0)
Nov 25 08:55:03 compute-0 ovn_controller[152859]: 2025-11-25T08:55:03Z|01192|binding|INFO|Setting lport 1959aca7-b25c-4fe5-b59a-70db352af78b down in Southbound
Nov 25 08:55:03 compute-0 ovn_controller[152859]: 2025-11-25T08:55:03Z|01193|binding|INFO|Removing iface tap1959aca7-b2 ovn-installed in OVS
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.549 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:03 compute-0 NetworkManager[48915]: <info>  [1764060903.5538] device (tap1959aca7-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.557 253542 DEBUG nova.virt.libvirt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Received event <DeviceRemovedEvent: 1764060903.5567229, 76611b0b-db06-4903-a22a-59b23a1e0d48 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 25 08:55:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.558 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:c5:52 10.100.0.26', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '76611b0b-db06-4903-a22a-59b23a1e0d48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927e21d4-cff1-4c45-a730-0e5e5d57b4cb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=1959aca7-b25c-4fe5-b59a-70db352af78b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:55:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.560 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 1959aca7-b25c-4fe5-b59a-70db352af78b in datapath 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 unbound from our chassis
Nov 25 08:55:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.562 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f0f7d83-b45f-4a49-9b0c-4eced5b56b37, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.563 253542 DEBUG nova.virt.libvirt.driver [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Start waiting for the detach event from libvirt for device tap1959aca7-b2 with device alias net1 for instance 76611b0b-db06-4903-a22a-59b23a1e0d48 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.564 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:c5:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1959aca7-b2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 08:55:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.563 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f4463a-d487-4a9a-94b3-541719baf2f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.564 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 namespace which is not needed anymore
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.567 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.569 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d6:c5:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1959aca7-b2"/></interface>not found in domain: <domain type='kvm' id='142'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <name>instance-00000072</name>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <uuid>76611b0b-db06-4903-a22a-59b23a1e0d48</uuid>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:name>tempest-TestNetworkBasicOps-server-1241900398</nova:name>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:53:30</nova:creationTime>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:port uuid="8f1fcc3c-5f46-4272-be9b-4d5213b3aceb">
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:port uuid="1959aca7-b25c-4fe5-b59a-70db352af78b">
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:55:03 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <memory unit='KiB'>131072</memory>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <vcpu placement='static'>1</vcpu>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <resource>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <partition>/machine</partition>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </resource>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <sysinfo type='smbios'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <system>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <entry name='manufacturer'>RDO</entry>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <entry name='serial'>76611b0b-db06-4903-a22a-59b23a1e0d48</entry>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <entry name='uuid'>76611b0b-db06-4903-a22a-59b23a1e0d48</entry>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <entry name='family'>Virtual Machine</entry>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </system>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <os>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <boot dev='hd'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <smbios mode='sysinfo'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </os>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <features>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <vmcoreinfo state='on'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </features>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <vendor>AMD</vendor>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='x2apic'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='hypervisor'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='stibp'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='ssbd'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='overflow-recov'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='succor'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='ibrs'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='lbrv'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='pause-filter'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='xsaves'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='svm'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='require' name='topoext'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='npt'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <feature policy='disable' name='nrip-save'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <clock offset='utc'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <timer name='hpet' present='no'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <on_poweroff>destroy</on_poweroff>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <on_reboot>restart</on_reboot>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <on_crash>destroy</on_crash>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <disk type='network' device='disk'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/76611b0b-db06-4903-a22a-59b23a1e0d48_disk' index='2'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       </source>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target dev='vda' bus='virtio'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='virtio-disk0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <disk type='network' device='cdrom'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <auth username='openstack'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:         <secret type='ceph' uuid='a058ea16-8b73-51e1-b172-ed66107102bf'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <source protocol='rbd' name='vms/76611b0b-db06-4903-a22a-59b23a1e0d48_disk.config' index='1'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:         <host name='192.168.122.100' port='6789'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       </source>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target dev='sda' bus='sata'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <readonly/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='sata0-0-0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pcie.0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='1' port='0x10'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.1'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='2' port='0x11'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.2'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='3' port='0x12'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.3'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='4' port='0x13'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.4'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='5' port='0x14'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.5'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='6' port='0x15'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.6'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='7' port='0x16'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.7'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='8' port='0x17'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.8'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='9' port='0x18'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.9'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='10' port='0x19'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.10'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='11' port='0x1a'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.11'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='12' port='0x1b'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.12'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='13' port='0x1c'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.13'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='14' port='0x1d'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.14'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='15' port='0x1e'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.15'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='16' port='0x1f'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.16'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='17' port='0x20'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.17'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='18' port='0x21'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.18'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='19' port='0x22'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.19'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='20' port='0x23'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.20'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='21' port='0x24'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.21'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='22' port='0x25'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.22'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='23' port='0x26'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.23'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='24' port='0x27'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.24'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-root-port'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target chassis='25' port='0x28'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.25'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model name='pcie-pci-bridge'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='pci.26'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='usb'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <controller type='sata' index='0'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='ide'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </controller>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <interface type='ethernet'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <mac address='fa:16:3e:36:b2:21'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target dev='tap8f1fcc3c-5f'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model type='virtio'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <mtu size='1442'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='net0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <serial type='pty'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/console.log' append='off'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target type='isa-serial' port='0'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:         <model name='isa-serial'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       </target>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <source path='/dev/pts/0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <log file='/var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48/console.log' append='off'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <target type='serial' port='0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='serial0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </console>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <input type='tablet' bus='usb'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='input0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='usb' bus='0' port='1'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </input>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <input type='mouse' bus='ps2'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='input1'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </input>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <input type='keyboard' bus='ps2'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='input2'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </input>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <listen type='address' address='::0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </graphics>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <audio id='1' type='none'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <video>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='video0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </video>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <watchdog model='itco' action='reset'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='watchdog0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </watchdog>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <memballoon model='virtio'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <stats period='10'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='balloon0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <rng model='virtio'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <backend model='random'>/dev/urandom</backend>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <alias name='rng0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <label>system_u:system_r:svirt_t:s0:c344,c569</label>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c344,c569</imagelabel>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <label>+107:+107</label>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <imagelabel>+107:+107</imagelabel>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </seclabel>
Nov 25 08:55:03 compute-0 nova_compute[253538]: </domain>
Nov 25 08:55:03 compute-0 nova_compute[253538]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.569 253542 INFO nova.virt.libvirt.driver [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully detached device tap1959aca7-b2 from instance 76611b0b-db06-4903-a22a-59b23a1e0d48 from the live domain config.
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.570 253542 DEBUG nova.virt.libvirt.vif [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:52:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1241900398',display_name='tempest-TestNetworkBasicOps-server-1241900398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1241900398',id=114,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQnIncww1mRwX9O2GT7bKPsyEkgA1/4oI48uojAID9Vm7PrMk+vKtiibHK4EoiYH2/wbPYy0HX20b3AH2Q3Q4yusIjxaAQq6AUEbJthvns45ZkdO1WCW3z0AmP1FYwxQ==',key_name='tempest-TestNetworkBasicOps-922693977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-1z0lb28a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:02Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=76611b0b-db06-4903-a22a-59b23a1e0d48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.570 253542 DEBUG nova.network.os_vif_util [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "1959aca7-b25c-4fe5-b59a-70db352af78b", "address": "fa:16:3e:d6:c5:52", "network": {"id": "8f0f7d83-b45f-4a49-9b0c-4eced5b56b37", "bridge": "br-int", "label": "tempest-network-smoke--701944320", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1959aca7-b2", "ovs_interfaceid": "1959aca7-b25c-4fe5-b59a-70db352af78b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.571 253542 DEBUG nova.network.os_vif_util [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:c5:52,bridge_name='br-int',has_traffic_filtering=True,id=1959aca7-b25c-4fe5-b59a-70db352af78b,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1959aca7-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.571 253542 DEBUG os_vif [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:c5:52,bridge_name='br-int',has_traffic_filtering=True,id=1959aca7-b25c-4fe5-b59a-70db352af78b,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1959aca7-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.576 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1959aca7-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.578 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.584 253542 INFO os_vif [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:c5:52,bridge_name='br-int',has_traffic_filtering=True,id=1959aca7-b25c-4fe5-b59a-70db352af78b,network=Network(8f0f7d83-b45f-4a49-9b0c-4eced5b56b37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1959aca7-b2')
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.585 253542 DEBUG nova.virt.libvirt.guest [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:name>tempest-TestNetworkBasicOps-server-1241900398</nova:name>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:creationTime>2025-11-25 08:55:03</nova:creationTime>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:flavor name="m1.nano">
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:memory>128</nova:memory>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:disk>1</nova:disk>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:swap>0</nova:swap>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:vcpus>1</nova:vcpus>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </nova:flavor>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:owner>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </nova:owner>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   <nova:ports>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     <nova:port uuid="8f1fcc3c-5f46-4272-be9b-4d5213b3aceb">
Nov 25 08:55:03 compute-0 nova_compute[253538]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:55:03 compute-0 nova_compute[253538]:     </nova:port>
Nov 25 08:55:03 compute-0 nova_compute[253538]:   </nova:ports>
Nov 25 08:55:03 compute-0 nova_compute[253538]: </nova:instance>
Nov 25 08:55:03 compute-0 nova_compute[253538]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 08:55:03 compute-0 neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37[374133]: [NOTICE]   (374137) : haproxy version is 2.8.14-c23fe91
Nov 25 08:55:03 compute-0 neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37[374133]: [NOTICE]   (374137) : path to executable is /usr/sbin/haproxy
Nov 25 08:55:03 compute-0 neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37[374133]: [WARNING]  (374137) : Exiting Master process...
Nov 25 08:55:03 compute-0 neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37[374133]: [WARNING]  (374137) : Exiting Master process...
Nov 25 08:55:03 compute-0 neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37[374133]: [ALERT]    (374137) : Current worker (374139) exited with code 143 (Terminated)
Nov 25 08:55:03 compute-0 neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37[374133]: [WARNING]  (374137) : All workers exited. Exiting... (0)
Nov 25 08:55:03 compute-0 systemd[1]: libpod-a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c.scope: Deactivated successfully.
Nov 25 08:55:03 compute-0 podman[376686]: 2025-11-25 08:55:03.745949301 +0000 UTC m=+0.060156999 container died a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:55:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c-userdata-shm.mount: Deactivated successfully.
Nov 25 08:55:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9ca46848af656bb6a002ab5710a7e5831c6c96bc3a4e29b10c26e75820c7aca-merged.mount: Deactivated successfully.
Nov 25 08:55:03 compute-0 podman[376686]: 2025-11-25 08:55:03.795512389 +0000 UTC m=+0.109720087 container cleanup a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:55:03 compute-0 systemd[1]: libpod-conmon-a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c.scope: Deactivated successfully.
Nov 25 08:55:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 214 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 103 op/s
Nov 25 08:55:03 compute-0 podman[376718]: 2025-11-25 08:55:03.866396029 +0000 UTC m=+0.044464521 container remove a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 08:55:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.871 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad8d968-13a0-4b6d-aefa-471e6502a14a]: (4, ('Tue Nov 25 08:55:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 (a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c)\na13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c\nTue Nov 25 08:55:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 (a13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c)\na13d5ce423b857682add3ea01f6435ad249dc4a50e6b1f6fbcb9a1810b5f0c2c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.873 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dfded48b-308a-4108-896e-58e52b65b8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.874 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f0f7d83-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:03 compute-0 kernel: tap8f0f7d83-b0: left promiscuous mode
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.880 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cde4d765-e1f0-4473-86bd-7dffca54ef2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:03 compute-0 nova_compute[253538]: 2025-11-25 08:55:03.896 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.898 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1f686c2a-1f69-4ad4-bdf5-3bc1079bf853]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.899 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6f28313f-767d-4ff1-9c7b-0c089fea7d88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.918 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[61eebccf-e9c4-4e58-939f-d44ed9028397]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616513, 'reachable_time': 40428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376731, 'error': None, 'target': 'ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:03 compute-0 systemd[1]: run-netns-ovnmeta\x2d8f0f7d83\x2db45f\x2d4a49\x2d9b0c\x2d4eced5b56b37.mount: Deactivated successfully.
Nov 25 08:55:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.921 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8f0f7d83-b45f-4a49-9b0c-4eced5b56b37 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:55:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:03.921 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[da457046-5d57-42e9-a86b-9a3f4c28ae63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:03 compute-0 sshd-session[376660]: Invalid user splunk from 45.202.211.6 port 42348
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011146179266783877 of space, bias 1.0, pg target 0.3343853780035163 quantized to 32 (current 32)
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:55:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:55:04 compute-0 nova_compute[253538]: 2025-11-25 08:55:04.207 253542 DEBUG nova.compute.manager [req-19d1e93e-a965-4c6d-93b3-1c9f50716884 req-6c5d3981-f086-462b-ba17-8761c84f4832 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-unplugged-1959aca7-b25c-4fe5-b59a-70db352af78b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:04 compute-0 nova_compute[253538]: 2025-11-25 08:55:04.207 253542 DEBUG oslo_concurrency.lockutils [req-19d1e93e-a965-4c6d-93b3-1c9f50716884 req-6c5d3981-f086-462b-ba17-8761c84f4832 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:04 compute-0 nova_compute[253538]: 2025-11-25 08:55:04.208 253542 DEBUG oslo_concurrency.lockutils [req-19d1e93e-a965-4c6d-93b3-1c9f50716884 req-6c5d3981-f086-462b-ba17-8761c84f4832 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:04 compute-0 nova_compute[253538]: 2025-11-25 08:55:04.208 253542 DEBUG oslo_concurrency.lockutils [req-19d1e93e-a965-4c6d-93b3-1c9f50716884 req-6c5d3981-f086-462b-ba17-8761c84f4832 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:04 compute-0 nova_compute[253538]: 2025-11-25 08:55:04.208 253542 DEBUG nova.compute.manager [req-19d1e93e-a965-4c6d-93b3-1c9f50716884 req-6c5d3981-f086-462b-ba17-8761c84f4832 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] No waiting events found dispatching network-vif-unplugged-1959aca7-b25c-4fe5-b59a-70db352af78b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:55:04 compute-0 nova_compute[253538]: 2025-11-25 08:55:04.208 253542 WARNING nova.compute.manager [req-19d1e93e-a965-4c6d-93b3-1c9f50716884 req-6c5d3981-f086-462b-ba17-8761c84f4832 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received unexpected event network-vif-unplugged-1959aca7-b25c-4fe5-b59a-70db352af78b for instance with vm_state active and task_state None.
Nov 25 08:55:04 compute-0 sshd-session[376660]: Received disconnect from 45.202.211.6 port 42348:11: Bye Bye [preauth]
Nov 25 08:55:04 compute-0 sshd-session[376660]: Disconnected from invalid user splunk 45.202.211.6 port 42348 [preauth]
Nov 25 08:55:04 compute-0 nova_compute[253538]: 2025-11-25 08:55:04.463 253542 DEBUG oslo_concurrency.lockutils [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:55:04 compute-0 nova_compute[253538]: 2025-11-25 08:55:04.463 253542 DEBUG oslo_concurrency.lockutils [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:55:04 compute-0 nova_compute[253538]: 2025-11-25 08:55:04.464 253542 DEBUG nova.network.neutron [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:55:05 compute-0 nova_compute[253538]: 2025-11-25 08:55:05.122 253542 DEBUG nova.network.neutron [req-433556bd-b9b5-4e0b-be37-a3f4ac0bb0c1 req-d49c2388-0fd1-4bb7-96d5-a3894c0f8606 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updated VIF entry in instance network info cache for port 52157627-d75e-4670-9215-6471bda94ba6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:55:05 compute-0 nova_compute[253538]: 2025-11-25 08:55:05.123 253542 DEBUG nova.network.neutron [req-433556bd-b9b5-4e0b-be37-a3f4ac0bb0c1 req-d49c2388-0fd1-4bb7-96d5-a3894c0f8606 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updating instance_info_cache with network_info: [{"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:55:05 compute-0 nova_compute[253538]: 2025-11-25 08:55:05.142 253542 DEBUG oslo_concurrency.lockutils [req-433556bd-b9b5-4e0b-be37-a3f4ac0bb0c1 req-d49c2388-0fd1-4bb7-96d5-a3894c0f8606 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:55:05 compute-0 ceph-mon[75015]: pgmap v2228: 321 pgs: 321 active+clean; 214 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 103 op/s
Nov 25 08:55:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 103 op/s
Nov 25 08:55:06 compute-0 nova_compute[253538]: 2025-11-25 08:55:06.132 253542 INFO nova.network.neutron [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Port 1959aca7-b25c-4fe5-b59a-70db352af78b from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 08:55:06 compute-0 nova_compute[253538]: 2025-11-25 08:55:06.132 253542 DEBUG nova.network.neutron [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:55:06 compute-0 nova_compute[253538]: 2025-11-25 08:55:06.151 253542 DEBUG oslo_concurrency.lockutils [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:55:06 compute-0 nova_compute[253538]: 2025-11-25 08:55:06.171 253542 DEBUG oslo_concurrency.lockutils [None req-0f2da78b-3e4c-45a0-8c24-80ffc71596f8 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "interface-76611b0b-db06-4903-a22a-59b23a1e0d48-1959aca7-b25c-4fe5-b59a-70db352af78b" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:06 compute-0 ovn_controller[152859]: 2025-11-25T08:55:06Z|01194|binding|INFO|Releasing lport baf4584e-8381-4bcf-9f75-0a7b69cd8212 from this chassis (sb_readonly=0)
Nov 25 08:55:06 compute-0 ovn_controller[152859]: 2025-11-25T08:55:06Z|01195|binding|INFO|Releasing lport 583866cf-82da-4259-9189-db9f58620872 from this chassis (sb_readonly=0)
Nov 25 08:55:06 compute-0 nova_compute[253538]: 2025-11-25 08:55:06.350 253542 DEBUG nova.compute.manager [req-7cda733b-0b0d-4577-ad78-8c5e69ece9ec req-2ec7d47d-bc0d-4856-a2a5-a3bb79bda1da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:06 compute-0 nova_compute[253538]: 2025-11-25 08:55:06.351 253542 DEBUG oslo_concurrency.lockutils [req-7cda733b-0b0d-4577-ad78-8c5e69ece9ec req-2ec7d47d-bc0d-4856-a2a5-a3bb79bda1da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:06 compute-0 nova_compute[253538]: 2025-11-25 08:55:06.351 253542 DEBUG oslo_concurrency.lockutils [req-7cda733b-0b0d-4577-ad78-8c5e69ece9ec req-2ec7d47d-bc0d-4856-a2a5-a3bb79bda1da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:06 compute-0 nova_compute[253538]: 2025-11-25 08:55:06.351 253542 DEBUG oslo_concurrency.lockutils [req-7cda733b-0b0d-4577-ad78-8c5e69ece9ec req-2ec7d47d-bc0d-4856-a2a5-a3bb79bda1da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:06 compute-0 nova_compute[253538]: 2025-11-25 08:55:06.351 253542 DEBUG nova.compute.manager [req-7cda733b-0b0d-4577-ad78-8c5e69ece9ec req-2ec7d47d-bc0d-4856-a2a5-a3bb79bda1da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] No waiting events found dispatching network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:55:06 compute-0 nova_compute[253538]: 2025-11-25 08:55:06.351 253542 WARNING nova.compute.manager [req-7cda733b-0b0d-4577-ad78-8c5e69ece9ec req-2ec7d47d-bc0d-4856-a2a5-a3bb79bda1da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received unexpected event network-vif-plugged-1959aca7-b25c-4fe5-b59a-70db352af78b for instance with vm_state active and task_state None.
Nov 25 08:55:06 compute-0 nova_compute[253538]: 2025-11-25 08:55:06.352 253542 DEBUG nova.compute.manager [req-7cda733b-0b0d-4577-ad78-8c5e69ece9ec req-2ec7d47d-bc0d-4856-a2a5-a3bb79bda1da b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-deleted-1959aca7-b25c-4fe5-b59a-70db352af78b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:06 compute-0 nova_compute[253538]: 2025-11-25 08:55:06.379 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:07 compute-0 ceph-mon[75015]: pgmap v2229: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 103 op/s
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.278 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.279 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.280 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.281 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.282 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.284 253542 INFO nova.compute.manager [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Terminating instance
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.286 253542 DEBUG nova.compute.manager [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:55:07 compute-0 kernel: tap8f1fcc3c-5f (unregistering): left promiscuous mode
Nov 25 08:55:07 compute-0 NetworkManager[48915]: <info>  [1764060907.3454] device (tap8f1fcc3c-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:55:07 compute-0 ovn_controller[152859]: 2025-11-25T08:55:07Z|01196|binding|INFO|Releasing lport 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb from this chassis (sb_readonly=0)
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.373 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:07 compute-0 ovn_controller[152859]: 2025-11-25T08:55:07Z|01197|binding|INFO|Setting lport 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb down in Southbound
Nov 25 08:55:07 compute-0 ovn_controller[152859]: 2025-11-25T08:55:07Z|01198|binding|INFO|Removing iface tap8f1fcc3c-5f ovn-installed in OVS
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.388 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:b2:21 10.100.0.3'], port_security=['fa:16:3e:36:b2:21 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '76611b0b-db06-4903-a22a-59b23a1e0d48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60f2641c-f03e-4ef3-a462-4bd54e93c59c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '70c2e597-b59d-412f-a7ad-333ba7cbd35e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b027220-e81c-4ac9-90ba-6c25793cc1d8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:55:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.390 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb in datapath 60f2641c-f03e-4ef3-a462-4bd54e93c59c unbound from our chassis
Nov 25 08:55:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.392 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60f2641c-f03e-4ef3-a462-4bd54e93c59c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:55:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.394 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7243d39f-b1ed-4338-906d-962ba0625156]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.395 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c namespace which is not needed anymore
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.409 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:07 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000072.scope: Deactivated successfully.
Nov 25 08:55:07 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000072.scope: Consumed 20.250s CPU time.
Nov 25 08:55:07 compute-0 systemd-machined[215790]: Machine qemu-142-instance-00000072 terminated.
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.529 253542 INFO nova.virt.libvirt.driver [-] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Instance destroyed successfully.
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.530 253542 DEBUG nova.objects.instance [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 76611b0b-db06-4903-a22a-59b23a1e0d48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:55:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.543 253542 DEBUG nova.virt.libvirt.vif [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:52:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1241900398',display_name='tempest-TestNetworkBasicOps-server-1241900398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1241900398',id=114,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQnIncww1mRwX9O2GT7bKPsyEkgA1/4oI48uojAID9Vm7PrMk+vKtiibHK4EoiYH2/wbPYy0HX20b3AH2Q3Q4yusIjxaAQq6AUEbJthvns45ZkdO1WCW3z0AmP1FYwxQ==',key_name='tempest-TestNetworkBasicOps-922693977',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:53:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-1z0lb28a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:53:02Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=76611b0b-db06-4903-a22a-59b23a1e0d48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.543 253542 DEBUG nova.network.os_vif_util [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.544 253542 DEBUG nova.network.os_vif_util [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:b2:21,bridge_name='br-int',has_traffic_filtering=True,id=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb,network=Network(60f2641c-f03e-4ef3-a462-4bd54e93c59c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f1fcc3c-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.545 253542 DEBUG os_vif [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:b2:21,bridge_name='br-int',has_traffic_filtering=True,id=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb,network=Network(60f2641c-f03e-4ef3-a462-4bd54e93c59c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f1fcc3c-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.547 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f1fcc3c-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.551 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.554 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.556 253542 INFO os_vif [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:b2:21,bridge_name='br-int',has_traffic_filtering=True,id=8f1fcc3c-5f46-4272-be9b-4d5213b3aceb,network=Network(60f2641c-f03e-4ef3-a462-4bd54e93c59c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f1fcc3c-5f')
Nov 25 08:55:07 compute-0 neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c[372579]: [NOTICE]   (372609) : haproxy version is 2.8.14-c23fe91
Nov 25 08:55:07 compute-0 neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c[372579]: [NOTICE]   (372609) : path to executable is /usr/sbin/haproxy
Nov 25 08:55:07 compute-0 neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c[372579]: [WARNING]  (372609) : Exiting Master process...
Nov 25 08:55:07 compute-0 neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c[372579]: [WARNING]  (372609) : Exiting Master process...
Nov 25 08:55:07 compute-0 neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c[372579]: [ALERT]    (372609) : Current worker (372623) exited with code 143 (Terminated)
Nov 25 08:55:07 compute-0 neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c[372579]: [WARNING]  (372609) : All workers exited. Exiting... (0)
Nov 25 08:55:07 compute-0 systemd[1]: libpod-f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e.scope: Deactivated successfully.
Nov 25 08:55:07 compute-0 podman[376761]: 2025-11-25 08:55:07.602639044 +0000 UTC m=+0.052390706 container died f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 08:55:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e-userdata-shm.mount: Deactivated successfully.
Nov 25 08:55:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-3552cae6011fac03a9da2375e711d654941020090cecb0f909b8a0ba241d8ae3-merged.mount: Deactivated successfully.
Nov 25 08:55:07 compute-0 podman[376761]: 2025-11-25 08:55:07.648450651 +0000 UTC m=+0.098202333 container cleanup f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:55:07 compute-0 systemd[1]: libpod-conmon-f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e.scope: Deactivated successfully.
Nov 25 08:55:07 compute-0 podman[376814]: 2025-11-25 08:55:07.725409676 +0000 UTC m=+0.051255746 container remove f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 08:55:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.731 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0116d301-58c6-4d02-b548-b7c7a57fa1ed]: (4, ('Tue Nov 25 08:55:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c (f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e)\nf93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e\nTue Nov 25 08:55:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c (f93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e)\nf93db09c6af719a38bb678e200d6d4705ea435f5ca121747b3ebd1a98fd7f49e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.733 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3d212334-7e18-409d-9962-595f2b45ba69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.735 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60f2641c-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:07 compute-0 kernel: tap60f2641c-f0: left promiscuous mode
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.740 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.753 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.756 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[814ef78b-9bb9-4e4c-a762-97c8a3aef2b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:07 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 08:55:07 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 08:55:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.776 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3d43b191-93a5-412b-86ec-e768a7a337d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.782 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5fbb3e-6999-4e25-ad55-256c2189ab6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.799 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f668401c-6053-4229-8291-dba7febef235]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613548, 'reachable_time': 19392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376827, 'error': None, 'target': 'ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d60f2641c\x2df03e\x2d4ef3\x2da462\x2d4bd54e93c59c.mount: Deactivated successfully.
Nov 25 08:55:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.801 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60f2641c-f03e-4ef3-a462-4bd54e93c59c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:55:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:07.802 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb64680-0302-4019-aba0-7334f43ef75a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.843 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 102 op/s
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.966 253542 INFO nova.virt.libvirt.driver [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Deleting instance files /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48_del
Nov 25 08:55:07 compute-0 nova_compute[253538]: 2025-11-25 08:55:07.967 253542 INFO nova.virt.libvirt.driver [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Deletion of /var/lib/nova/instances/76611b0b-db06-4903-a22a-59b23a1e0d48_del complete
Nov 25 08:55:08 compute-0 nova_compute[253538]: 2025-11-25 08:55:08.026 253542 INFO nova.compute.manager [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Took 0.74 seconds to destroy the instance on the hypervisor.
Nov 25 08:55:08 compute-0 nova_compute[253538]: 2025-11-25 08:55:08.028 253542 DEBUG oslo.service.loopingcall [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:55:08 compute-0 nova_compute[253538]: 2025-11-25 08:55:08.028 253542 DEBUG nova.compute.manager [-] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:55:08 compute-0 nova_compute[253538]: 2025-11-25 08:55:08.029 253542 DEBUG nova.network.neutron [-] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:55:08 compute-0 nova_compute[253538]: 2025-11-25 08:55:08.477 253542 DEBUG nova.compute.manager [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-changed-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:08 compute-0 nova_compute[253538]: 2025-11-25 08:55:08.478 253542 DEBUG nova.compute.manager [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing instance network info cache due to event network-changed-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:55:08 compute-0 nova_compute[253538]: 2025-11-25 08:55:08.478 253542 DEBUG oslo_concurrency.lockutils [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:55:08 compute-0 nova_compute[253538]: 2025-11-25 08:55:08.478 253542 DEBUG oslo_concurrency.lockutils [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:55:08 compute-0 nova_compute[253538]: 2025-11-25 08:55:08.478 253542 DEBUG nova.network.neutron [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Refreshing network info cache for port 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:55:09 compute-0 ceph-mon[75015]: pgmap v2230: 321 pgs: 321 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 102 op/s
Nov 25 08:55:09 compute-0 nova_compute[253538]: 2025-11-25 08:55:09.807 253542 DEBUG nova.network.neutron [-] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:55:09 compute-0 nova_compute[253538]: 2025-11-25 08:55:09.824 253542 INFO nova.compute.manager [-] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Took 1.80 seconds to deallocate network for instance.
Nov 25 08:55:09 compute-0 podman[376830]: 2025-11-25 08:55:09.841187965 +0000 UTC m=+0.082810546 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 08:55:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 164 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 394 KiB/s wr, 115 op/s
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.025 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.026 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.097 253542 DEBUG oslo_concurrency.processutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.547 253542 DEBUG nova.compute.manager [req-454db832-d013-42ac-a2d0-6626550a2deb req-4133bf90-38e2-4a2f-9949-ecbc30365deb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.547 253542 DEBUG oslo_concurrency.lockutils [req-454db832-d013-42ac-a2d0-6626550a2deb req-4133bf90-38e2-4a2f-9949-ecbc30365deb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.548 253542 DEBUG oslo_concurrency.lockutils [req-454db832-d013-42ac-a2d0-6626550a2deb req-4133bf90-38e2-4a2f-9949-ecbc30365deb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.548 253542 DEBUG oslo_concurrency.lockutils [req-454db832-d013-42ac-a2d0-6626550a2deb req-4133bf90-38e2-4a2f-9949-ecbc30365deb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.548 253542 DEBUG nova.compute.manager [req-454db832-d013-42ac-a2d0-6626550a2deb req-4133bf90-38e2-4a2f-9949-ecbc30365deb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] No waiting events found dispatching network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.548 253542 WARNING nova.compute.manager [req-454db832-d013-42ac-a2d0-6626550a2deb req-4133bf90-38e2-4a2f-9949-ecbc30365deb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received unexpected event network-vif-plugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb for instance with vm_state deleted and task_state None.
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.549 253542 DEBUG nova.compute.manager [req-454db832-d013-42ac-a2d0-6626550a2deb req-4133bf90-38e2-4a2f-9949-ecbc30365deb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-deleted-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:55:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455161361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.593 253542 DEBUG oslo_concurrency.processutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.598 253542 DEBUG nova.network.neutron [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updated VIF entry in instance network info cache for port 8f1fcc3c-5f46-4272-be9b-4d5213b3aceb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.598 253542 DEBUG nova.network.neutron [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Updating instance_info_cache with network_info: [{"id": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "address": "fa:16:3e:36:b2:21", "network": {"id": "60f2641c-f03e-4ef3-a462-4bd54e93c59c", "bridge": "br-int", "label": "tempest-network-smoke--706068333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f1fcc3c-5f", "ovs_interfaceid": "8f1fcc3c-5f46-4272-be9b-4d5213b3aceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.605 253542 DEBUG nova.compute.provider_tree [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.622 253542 DEBUG nova.scheduler.client.report [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.626 253542 DEBUG oslo_concurrency.lockutils [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-76611b0b-db06-4903-a22a-59b23a1e0d48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.627 253542 DEBUG nova.compute.manager [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-unplugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.627 253542 DEBUG oslo_concurrency.lockutils [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.627 253542 DEBUG oslo_concurrency.lockutils [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.628 253542 DEBUG oslo_concurrency.lockutils [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.628 253542 DEBUG nova.compute.manager [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] No waiting events found dispatching network-vif-unplugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.628 253542 DEBUG nova.compute.manager [req-420cf99b-0203-4462-97a3-90ae2b379910 req-72851e51-9993-49f0-8ad0-2d4c3533c87d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Received event network-vif-unplugged-8f1fcc3c-5f46-4272-be9b-4d5213b3aceb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.655 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.694 253542 INFO nova.scheduler.client.report [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 76611b0b-db06-4903-a22a-59b23a1e0d48
Nov 25 08:55:10 compute-0 nova_compute[253538]: 2025-11-25 08:55:10.796 253542 DEBUG oslo_concurrency.lockutils [None req-bf35c612-f29c-4b39-9c44-a88cbc38ae68 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "76611b0b-db06-4903-a22a-59b23a1e0d48" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:11 compute-0 ceph-mon[75015]: pgmap v2231: 321 pgs: 321 active+clean; 164 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 394 KiB/s wr, 115 op/s
Nov 25 08:55:11 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1455161361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:55:11 compute-0 ovn_controller[152859]: 2025-11-25T08:55:11Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:b5:38 10.100.0.6
Nov 25 08:55:11 compute-0 ovn_controller[152859]: 2025-11-25T08:55:11Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:b5:38 10.100.0.6
Nov 25 08:55:11 compute-0 podman[376871]: 2025-11-25 08:55:11.810903408 +0000 UTC m=+0.065338810 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 08:55:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 150 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 116 op/s
Nov 25 08:55:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:55:12 compute-0 nova_compute[253538]: 2025-11-25 08:55:12.551 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:12 compute-0 nova_compute[253538]: 2025-11-25 08:55:12.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:13 compute-0 ceph-mon[75015]: pgmap v2232: 321 pgs: 321 active+clean; 150 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 116 op/s
Nov 25 08:55:13 compute-0 nova_compute[253538]: 2025-11-25 08:55:13.243 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060898.2377136, 5e14b791-8860-44a3-87e0-5c7fcc1dcf12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:55:13 compute-0 nova_compute[253538]: 2025-11-25 08:55:13.244 253542 INFO nova.compute.manager [-] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] VM Stopped (Lifecycle Event)
Nov 25 08:55:13 compute-0 nova_compute[253538]: 2025-11-25 08:55:13.258 253542 DEBUG nova.compute.manager [None req-af17d690-bcec-4aa2-87d9-0da6f0b4c3d5 - - - - - -] [instance: 5e14b791-8860-44a3-87e0-5c7fcc1dcf12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:55:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 159 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 454 KiB/s rd, 1.7 MiB/s wr, 91 op/s
Nov 25 08:55:15 compute-0 ovn_controller[152859]: 2025-11-25T08:55:15Z|01199|binding|INFO|Releasing lport 583866cf-82da-4259-9189-db9f58620872 from this chassis (sb_readonly=0)
Nov 25 08:55:15 compute-0 nova_compute[253538]: 2025-11-25 08:55:15.184 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:15 compute-0 ceph-mon[75015]: pgmap v2233: 321 pgs: 321 active+clean; 159 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 454 KiB/s rd, 1.7 MiB/s wr, 91 op/s
Nov 25 08:55:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 08:55:16 compute-0 ceph-mon[75015]: pgmap v2234: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 08:55:16 compute-0 podman[376892]: 2025-11-25 08:55:16.904821349 +0000 UTC m=+0.154829375 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller)
Nov 25 08:55:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:55:17 compute-0 nova_compute[253538]: 2025-11-25 08:55:17.553 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:17 compute-0 nova_compute[253538]: 2025-11-25 08:55:17.847 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 08:55:18 compute-0 nova_compute[253538]: 2025-11-25 08:55:18.824 253542 INFO nova.compute.manager [None req-c794c82d-9f08-481b-9dcc-1b462cdfb009 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Get console output
Nov 25 08:55:18 compute-0 nova_compute[253538]: 2025-11-25 08:55:18.832 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:55:18 compute-0 ceph-mon[75015]: pgmap v2235: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 08:55:19 compute-0 nova_compute[253538]: 2025-11-25 08:55:19.326 253542 DEBUG nova.objects.instance [None req-a6afed9d-ecd5-4636-9a7b-65cbdcefd586 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e82fa8c-6663-439c-833c-2b28f22282a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:55:19 compute-0 nova_compute[253538]: 2025-11-25 08:55:19.350 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060919.3498385, 7e82fa8c-6663-439c-833c-2b28f22282a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:55:19 compute-0 nova_compute[253538]: 2025-11-25 08:55:19.350 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] VM Paused (Lifecycle Event)
Nov 25 08:55:19 compute-0 nova_compute[253538]: 2025-11-25 08:55:19.375 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:55:19 compute-0 nova_compute[253538]: 2025-11-25 08:55:19.379 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:55:19 compute-0 nova_compute[253538]: 2025-11-25 08:55:19.397 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 08:55:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 08:55:19 compute-0 kernel: tap52157627-d7 (unregistering): left promiscuous mode
Nov 25 08:55:19 compute-0 NetworkManager[48915]: <info>  [1764060919.9246] device (tap52157627-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:55:19 compute-0 ovn_controller[152859]: 2025-11-25T08:55:19Z|01200|binding|INFO|Releasing lport 52157627-d75e-4670-9215-6471bda94ba6 from this chassis (sb_readonly=0)
Nov 25 08:55:19 compute-0 ovn_controller[152859]: 2025-11-25T08:55:19Z|01201|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 down in Southbound
Nov 25 08:55:19 compute-0 ovn_controller[152859]: 2025-11-25T08:55:19Z|01202|binding|INFO|Removing iface tap52157627-d7 ovn-installed in OVS
Nov 25 08:55:19 compute-0 nova_compute[253538]: 2025-11-25 08:55:19.942 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:19 compute-0 nova_compute[253538]: 2025-11-25 08:55:19.964 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:19.969 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:b5:38 10.100.0.6'], port_security=['fa:16:3e:a5:b5:38 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e82fa8c-6663-439c-833c-2b28f22282a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8861bf3c-0dd7-44a1-b3d5-4af8cd78fda2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1772063d-dae9-4681-ab29-d1a2754b4cf7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=52157627-d75e-4670-9215-6471bda94ba6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:55:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:19.971 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 52157627-d75e-4670-9215-6471bda94ba6 in datapath 703bdacb-53cd-40a1-9c2c-c632a29e049b unbound from our chassis
Nov 25 08:55:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:19.973 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 703bdacb-53cd-40a1-9c2c-c632a29e049b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:55:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:19.975 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[19748c90-9ee6-4034-8761-712e84260129]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:19.975 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b namespace which is not needed anymore
Nov 25 08:55:19 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000075.scope: Deactivated successfully.
Nov 25 08:55:19 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000075.scope: Consumed 14.238s CPU time.
Nov 25 08:55:19 compute-0 systemd-machined[215790]: Machine qemu-146-instance-00000075 terminated.
Nov 25 08:55:20 compute-0 kernel: tap52157627-d7: entered promiscuous mode
Nov 25 08:55:20 compute-0 kernel: tap52157627-d7 (unregistering): left promiscuous mode
Nov 25 08:55:20 compute-0 nova_compute[253538]: 2025-11-25 08:55:20.093 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:20 compute-0 ovn_controller[152859]: 2025-11-25T08:55:20Z|01203|binding|INFO|Claiming lport 52157627-d75e-4670-9215-6471bda94ba6 for this chassis.
Nov 25 08:55:20 compute-0 ovn_controller[152859]: 2025-11-25T08:55:20Z|01204|binding|INFO|52157627-d75e-4670-9215-6471bda94ba6: Claiming fa:16:3e:a5:b5:38 10.100.0.6
Nov 25 08:55:20 compute-0 nova_compute[253538]: 2025-11-25 08:55:20.115 253542 DEBUG nova.compute.manager [None req-a6afed9d-ecd5-4636-9a7b-65cbdcefd586 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:55:20 compute-0 ovn_controller[152859]: 2025-11-25T08:55:20Z|01205|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 ovn-installed in OVS
Nov 25 08:55:20 compute-0 ovn_controller[152859]: 2025-11-25T08:55:20Z|01206|if_status|INFO|Dropped 1 log messages in last 22 seconds (most recently, 22 seconds ago) due to excessive rate
Nov 25 08:55:20 compute-0 ovn_controller[152859]: 2025-11-25T08:55:20Z|01207|if_status|INFO|Not setting lport 52157627-d75e-4670-9215-6471bda94ba6 down as sb is readonly
Nov 25 08:55:20 compute-0 nova_compute[253538]: 2025-11-25 08:55:20.118 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:20 compute-0 ovn_controller[152859]: 2025-11-25T08:55:20Z|01208|binding|INFO|Releasing lport 52157627-d75e-4670-9215-6471bda94ba6 from this chassis (sb_readonly=0)
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.125 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:b5:38 10.100.0.6'], port_security=['fa:16:3e:a5:b5:38 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e82fa8c-6663-439c-833c-2b28f22282a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8861bf3c-0dd7-44a1-b3d5-4af8cd78fda2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1772063d-dae9-4681-ab29-d1a2754b4cf7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=52157627-d75e-4670-9215-6471bda94ba6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:55:20 compute-0 nova_compute[253538]: 2025-11-25 08:55:20.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:20 compute-0 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[376574]: [NOTICE]   (376578) : haproxy version is 2.8.14-c23fe91
Nov 25 08:55:20 compute-0 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[376574]: [NOTICE]   (376578) : path to executable is /usr/sbin/haproxy
Nov 25 08:55:20 compute-0 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[376574]: [WARNING]  (376578) : Exiting Master process...
Nov 25 08:55:20 compute-0 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[376574]: [ALERT]    (376578) : Current worker (376580) exited with code 143 (Terminated)
Nov 25 08:55:20 compute-0 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[376574]: [WARNING]  (376578) : All workers exited. Exiting... (0)
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.144 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:b5:38 10.100.0.6'], port_security=['fa:16:3e:a5:b5:38 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e82fa8c-6663-439c-833c-2b28f22282a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8861bf3c-0dd7-44a1-b3d5-4af8cd78fda2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1772063d-dae9-4681-ab29-d1a2754b4cf7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=52157627-d75e-4670-9215-6471bda94ba6) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:55:20 compute-0 systemd[1]: libpod-f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3.scope: Deactivated successfully.
Nov 25 08:55:20 compute-0 podman[376947]: 2025-11-25 08:55:20.153655798 +0000 UTC m=+0.055008369 container died f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:55:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3-userdata-shm.mount: Deactivated successfully.
Nov 25 08:55:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-099a140f52146212a9c7c93bc8139169246af3392159627f1d84ff95a83d956b-merged.mount: Deactivated successfully.
Nov 25 08:55:20 compute-0 podman[376947]: 2025-11-25 08:55:20.193854242 +0000 UTC m=+0.095206813 container cleanup f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:55:20 compute-0 systemd[1]: libpod-conmon-f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3.scope: Deactivated successfully.
Nov 25 08:55:20 compute-0 podman[376981]: 2025-11-25 08:55:20.266722095 +0000 UTC m=+0.046791515 container remove f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.275 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[24837c32-c345-49ec-8214-e5bd11ea6690]: (4, ('Tue Nov 25 08:55:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b (f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3)\nf8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3\nTue Nov 25 08:55:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b (f8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3)\nf8a3bc495476bea27fc86f25c2d67e1acf3426aa0302bf5642894ee244335eb3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.278 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[76d2b0c0-f995-4ac9-a546-e962e4e384be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.279 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap703bdacb-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:20 compute-0 nova_compute[253538]: 2025-11-25 08:55:20.281 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:20 compute-0 kernel: tap703bdacb-50: left promiscuous mode
Nov 25 08:55:20 compute-0 nova_compute[253538]: 2025-11-25 08:55:20.310 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.313 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[78c68840-4bdf-4223-9ec4-796c2224f039]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.332 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5f5c19b8-2f0f-4074-bbed-0f37519cc295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.334 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[554bc9c5-a10f-4bba-a5af-6bf35dccf0a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:20 compute-0 nova_compute[253538]: 2025-11-25 08:55:20.349 253542 DEBUG nova.compute.manager [req-cb344dcd-d5b2-46d2-90fa-8570edc09b49 req-5fa1826b-7f92-476f-af40-da5a6cc294ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received event network-vif-unplugged-52157627-d75e-4670-9215-6471bda94ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:20 compute-0 nova_compute[253538]: 2025-11-25 08:55:20.350 253542 DEBUG oslo_concurrency.lockutils [req-cb344dcd-d5b2-46d2-90fa-8570edc09b49 req-5fa1826b-7f92-476f-af40-da5a6cc294ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:20 compute-0 nova_compute[253538]: 2025-11-25 08:55:20.351 253542 DEBUG oslo_concurrency.lockutils [req-cb344dcd-d5b2-46d2-90fa-8570edc09b49 req-5fa1826b-7f92-476f-af40-da5a6cc294ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:20 compute-0 nova_compute[253538]: 2025-11-25 08:55:20.352 253542 DEBUG oslo_concurrency.lockutils [req-cb344dcd-d5b2-46d2-90fa-8570edc09b49 req-5fa1826b-7f92-476f-af40-da5a6cc294ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:20 compute-0 nova_compute[253538]: 2025-11-25 08:55:20.352 253542 DEBUG nova.compute.manager [req-cb344dcd-d5b2-46d2-90fa-8570edc09b49 req-5fa1826b-7f92-476f-af40-da5a6cc294ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] No waiting events found dispatching network-vif-unplugged-52157627-d75e-4670-9215-6471bda94ba6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:55:20 compute-0 nova_compute[253538]: 2025-11-25 08:55:20.353 253542 WARNING nova.compute.manager [req-cb344dcd-d5b2-46d2-90fa-8570edc09b49 req-5fa1826b-7f92-476f-af40-da5a6cc294ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received unexpected event network-vif-unplugged-52157627-d75e-4670-9215-6471bda94ba6 for instance with vm_state suspended and task_state None.
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.354 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1e495895-8fef-40bc-9138-1702a6b6b471]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625121, 'reachable_time': 31613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376999, 'error': None, 'target': 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.358 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.359 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[c6387ab4-a1f4-4e51-bef7-94ac72580db5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.360 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 52157627-d75e-4670-9215-6471bda94ba6 in datapath 703bdacb-53cd-40a1-9c2c-c632a29e049b unbound from our chassis
Nov 25 08:55:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d703bdacb\x2d53cd\x2d40a1\x2d9c2c\x2dc632a29e049b.mount: Deactivated successfully.
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.362 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 703bdacb-53cd-40a1-9c2c-c632a29e049b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.363 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2969eac1-fffd-45d2-b6e8-e1bcc18ab1d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.364 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 52157627-d75e-4670-9215-6471bda94ba6 in datapath 703bdacb-53cd-40a1-9c2c-c632a29e049b unbound from our chassis
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.365 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 703bdacb-53cd-40a1-9c2c-c632a29e049b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:55:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:20.366 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[96c8a696-d2b5-40ae-b117-2c324165ed88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:20 compute-0 ceph-mon[75015]: pgmap v2236: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 08:55:21 compute-0 nova_compute[253538]: 2025-11-25 08:55:21.453 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 25 08:55:22 compute-0 nova_compute[253538]: 2025-11-25 08:55:22.527 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060907.5259435, 76611b0b-db06-4903-a22a-59b23a1e0d48 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:55:22 compute-0 nova_compute[253538]: 2025-11-25 08:55:22.528 253542 INFO nova.compute.manager [-] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] VM Stopped (Lifecycle Event)
Nov 25 08:55:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:55:22 compute-0 nova_compute[253538]: 2025-11-25 08:55:22.545 253542 DEBUG nova.compute.manager [None req-59a63d91-652c-4975-8ecf-b9d76e39a190 - - - - - -] [instance: 76611b0b-db06-4903-a22a-59b23a1e0d48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:55:22 compute-0 nova_compute[253538]: 2025-11-25 08:55:22.555 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:22 compute-0 nova_compute[253538]: 2025-11-25 08:55:22.791 253542 DEBUG nova.compute.manager [req-6a231e1b-2edf-423a-8d90-d204010d5814 req-08e5e5e0-545d-4f4b-823f-12d2c8ce022e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received event network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:22 compute-0 nova_compute[253538]: 2025-11-25 08:55:22.791 253542 DEBUG oslo_concurrency.lockutils [req-6a231e1b-2edf-423a-8d90-d204010d5814 req-08e5e5e0-545d-4f4b-823f-12d2c8ce022e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:22 compute-0 nova_compute[253538]: 2025-11-25 08:55:22.792 253542 DEBUG oslo_concurrency.lockutils [req-6a231e1b-2edf-423a-8d90-d204010d5814 req-08e5e5e0-545d-4f4b-823f-12d2c8ce022e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:22 compute-0 nova_compute[253538]: 2025-11-25 08:55:22.792 253542 DEBUG oslo_concurrency.lockutils [req-6a231e1b-2edf-423a-8d90-d204010d5814 req-08e5e5e0-545d-4f4b-823f-12d2c8ce022e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:22 compute-0 nova_compute[253538]: 2025-11-25 08:55:22.793 253542 DEBUG nova.compute.manager [req-6a231e1b-2edf-423a-8d90-d204010d5814 req-08e5e5e0-545d-4f4b-823f-12d2c8ce022e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] No waiting events found dispatching network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:55:22 compute-0 nova_compute[253538]: 2025-11-25 08:55:22.793 253542 WARNING nova.compute.manager [req-6a231e1b-2edf-423a-8d90-d204010d5814 req-08e5e5e0-545d-4f4b-823f-12d2c8ce022e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received unexpected event network-vif-plugged-52157627-d75e-4670-9215-6471bda94ba6 for instance with vm_state suspended and task_state None.
Nov 25 08:55:22 compute-0 nova_compute[253538]: 2025-11-25 08:55:22.849 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:22 compute-0 ceph-mon[75015]: pgmap v2237: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 25 08:55:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:55:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:55:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:55:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:55:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:55:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:55:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 105 KiB/s rd, 888 KiB/s wr, 24 op/s
Nov 25 08:55:24 compute-0 nova_compute[253538]: 2025-11-25 08:55:24.912 253542 INFO nova.compute.manager [None req-5e861d71-549c-4e19-b35d-83826b042cfe 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Get console output
Nov 25 08:55:24 compute-0 ceph-mon[75015]: pgmap v2238: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 105 KiB/s rd, 888 KiB/s wr, 24 op/s
Nov 25 08:55:25 compute-0 nova_compute[253538]: 2025-11-25 08:55:25.140 253542 INFO nova.compute.manager [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Resuming
Nov 25 08:55:25 compute-0 nova_compute[253538]: 2025-11-25 08:55:25.141 253542 DEBUG nova.objects.instance [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'flavor' on Instance uuid 7e82fa8c-6663-439c-833c-2b28f22282a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:55:25 compute-0 nova_compute[253538]: 2025-11-25 08:55:25.177 253542 DEBUG oslo_concurrency.lockutils [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:55:25 compute-0 nova_compute[253538]: 2025-11-25 08:55:25.178 253542 DEBUG oslo_concurrency.lockutils [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquired lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:55:25 compute-0 nova_compute[253538]: 2025-11-25 08:55:25.178 253542 DEBUG nova.network.neutron [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:55:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 466 KiB/s wr, 12 op/s
Nov 25 08:55:26 compute-0 sudo[377000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:55:26 compute-0 sudo[377000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:26 compute-0 sudo[377000]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:26 compute-0 sudo[377025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:55:26 compute-0 sudo[377025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:26 compute-0 sudo[377025]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.709 253542 DEBUG nova.network.neutron [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updating instance_info_cache with network_info: [{"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.734 253542 DEBUG oslo_concurrency.lockutils [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Releasing lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:55:26 compute-0 sudo[377050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.741 253542 DEBUG nova.virt.libvirt.vif [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:54:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-360976027',display_name='tempest-TestNetworkAdvancedServerOps-server-360976027',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-360976027',id=117,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMP7QXPZDPXrUql830fJueudXHoXZ2zww6EaNe3PFgcJ0/Sar4viBph4zMnjSpS0mXHOOoM16QZ2fgUMXO9FFKEQnCoRRYMYlR8af4rdgpILBOKGdgI2okSUsyg3suBciA==',key_name='tempest-TestNetworkAdvancedServerOps-1316847301',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:54:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-p9ynh0fp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:55:20Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=7e82fa8c-6663-439c-833c-2b28f22282a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.742 253542 DEBUG nova.network.os_vif_util [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:55:26 compute-0 sudo[377050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.743 253542 DEBUG nova.network.os_vif_util [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.743 253542 DEBUG os_vif [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.744 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.744 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.745 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:55:26 compute-0 sudo[377050]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.749 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.749 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52157627-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.749 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52157627-d7, col_values=(('external_ids', {'iface-id': '52157627-d75e-4670-9215-6471bda94ba6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:b5:38', 'vm-uuid': '7e82fa8c-6663-439c-833c-2b28f22282a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.751 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.751 253542 INFO os_vif [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7')
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.783 253542 DEBUG nova.objects.instance [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7e82fa8c-6663-439c-833c-2b28f22282a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:55:26 compute-0 sudo[377075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 08:55:26 compute-0 sudo[377075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:26 compute-0 kernel: tap52157627-d7: entered promiscuous mode
Nov 25 08:55:26 compute-0 NetworkManager[48915]: <info>  [1764060926.8717] manager: (tap52157627-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/491)
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:26 compute-0 ovn_controller[152859]: 2025-11-25T08:55:26Z|01209|binding|INFO|Claiming lport 52157627-d75e-4670-9215-6471bda94ba6 for this chassis.
Nov 25 08:55:26 compute-0 ovn_controller[152859]: 2025-11-25T08:55:26Z|01210|binding|INFO|52157627-d75e-4670-9215-6471bda94ba6: Claiming fa:16:3e:a5:b5:38 10.100.0.6
Nov 25 08:55:26 compute-0 ovn_controller[152859]: 2025-11-25T08:55:26Z|01211|binding|INFO|Removing lport 52157627-d75e-4670-9215-6471bda94ba6 ovn-installed in OVS
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.884 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:b5:38 10.100.0.6'], port_security=['fa:16:3e:a5:b5:38 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e82fa8c-6663-439c-833c-2b28f22282a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8861bf3c-0dd7-44a1-b3d5-4af8cd78fda2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1772063d-dae9-4681-ab29-d1a2754b4cf7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=52157627-d75e-4670-9215-6471bda94ba6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:55:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.885 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 52157627-d75e-4670-9215-6471bda94ba6 in datapath 703bdacb-53cd-40a1-9c2c-c632a29e049b bound to our chassis
Nov 25 08:55:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.887 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 703bdacb-53cd-40a1-9c2c-c632a29e049b
Nov 25 08:55:26 compute-0 ovn_controller[152859]: 2025-11-25T08:55:26Z|01212|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 ovn-installed in OVS
Nov 25 08:55:26 compute-0 ovn_controller[152859]: 2025-11-25T08:55:26Z|01213|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 up in Southbound
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.890 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:26 compute-0 nova_compute[253538]: 2025-11-25 08:55:26.894 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.901 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[352860f1-669e-4f2e-b688-0ff4aa65f5ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.902 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap703bdacb-51 in ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:55:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.904 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap703bdacb-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:55:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.904 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[db9948f6-4835-4d2c-bf92-59f85b8c9e9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.905 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1bff56f9-8598-454c-8a87-e967c035fff2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:26 compute-0 systemd-udevd[377116]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:55:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.921 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[33be21ad-db7f-4885-99bf-3ce3169b8e86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:26 compute-0 systemd-machined[215790]: New machine qemu-147-instance-00000075.
Nov 25 08:55:26 compute-0 NetworkManager[48915]: <info>  [1764060926.9347] device (tap52157627-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:55:26 compute-0 systemd[1]: Started Virtual Machine qemu-147-instance-00000075.
Nov 25 08:55:26 compute-0 NetworkManager[48915]: <info>  [1764060926.9371] device (tap52157627-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:55:26 compute-0 ceph-mon[75015]: pgmap v2239: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 466 KiB/s wr, 12 op/s
Nov 25 08:55:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.945 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bafe9c22-3c9b-47d9-bfe7-72dd6ba6366d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.981 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[903d1cf8-365d-443c-8ae0-ee6a9895f393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:26.987 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3f55ac-a1a6-47a3-8b6f-4a26e73e701b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:26 compute-0 systemd-udevd[377119]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:55:26 compute-0 NetworkManager[48915]: <info>  [1764060926.9888] manager: (tap703bdacb-50): new Veth device (/org/freedesktop/NetworkManager/Devices/492)
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.024 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[59294259-6a91-48eb-8d34-f07a12d2e268]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.028 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[80ebf1f4-bae8-4194-8b05-60bd5997b157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:27 compute-0 NetworkManager[48915]: <info>  [1764060927.0549] device (tap703bdacb-50): carrier: link connected
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.063 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e8aa565c-057e-46f4-9f81-5d903152c5b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.080 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6b6cb0-0118-4fce-b131-c1893a000c27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap703bdacb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:81:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628165, 'reachable_time': 24356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377169, 'error': None, 'target': 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.097 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[18cee531-c5fd-44e5-8f3c-ddc69c2291e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:81bd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 628165, 'tstamp': 628165}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377172, 'error': None, 'target': 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.116 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[19a62ef1-56b1-4012-ba93-4fb1d2427c3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap703bdacb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:81:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628165, 'reachable_time': 24356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377186, 'error': None, 'target': 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.153 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[44543f23-aa1e-4819-812c-f8e53e2f0ae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.226 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2c418b3b-f475-485b-b7c0-3341866efa13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.228 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap703bdacb-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.228 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.228 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap703bdacb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:27 compute-0 NetworkManager[48915]: <info>  [1764060927.2306] manager: (tap703bdacb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/493)
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.230 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:27 compute-0 kernel: tap703bdacb-50: entered promiscuous mode
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.232 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.235 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap703bdacb-50, col_values=(('external_ids', {'iface-id': '583866cf-82da-4259-9189-db9f58620872'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:27 compute-0 ovn_controller[152859]: 2025-11-25T08:55:27Z|01214|binding|INFO|Releasing lport 583866cf-82da-4259-9189-db9f58620872 from this chassis (sb_readonly=0)
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.237 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.238 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/703bdacb-53cd-40a1-9c2c-c632a29e049b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/703bdacb-53cd-40a1-9c2c-c632a29e049b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.239 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8403c0be-9b6f-4976-9767-2c060056b2f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.240 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-703bdacb-53cd-40a1-9c2c-c632a29e049b
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/703bdacb-53cd-40a1-9c2c-c632a29e049b.pid.haproxy
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 703bdacb-53cd-40a1-9c2c-c632a29e049b
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:55:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:27.240 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'env', 'PROCESS_TAG=haproxy-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/703bdacb-53cd-40a1-9c2c-c632a29e049b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.250 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:27 compute-0 podman[377266]: 2025-11-25 08:55:27.379144115 +0000 UTC m=+0.075015923 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.425 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for 7e82fa8c-6663-439c-833c-2b28f22282a8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.425 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060927.424899, 7e82fa8c-6663-439c-833c-2b28f22282a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.426 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] VM Started (Lifecycle Event)
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.450 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.463 253542 DEBUG nova.compute.manager [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.464 253542 DEBUG nova.objects.instance [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e82fa8c-6663-439c-833c-2b28f22282a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.470 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.488 253542 INFO nova.virt.libvirt.driver [-] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Instance running successfully.
Nov 25 08:55:27 compute-0 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.492 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.493 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060927.4329023, 7e82fa8c-6663-439c-833c-2b28f22282a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.493 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] VM Resumed (Lifecycle Event)
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.497 253542 DEBUG nova.virt.libvirt.guest [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.498 253542 DEBUG nova.compute.manager [None req-5616caea-af6a-47a1-8baa-f6897e154db8 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:55:27 compute-0 podman[377266]: 2025-11-25 08:55:27.515058235 +0000 UTC m=+0.210930043 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.520 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.524 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:55:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.546 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.557 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:27 compute-0 podman[377320]: 2025-11-25 08:55:27.640528869 +0000 UTC m=+0.054372621 container create 489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 08:55:27 compute-0 systemd[1]: Started libpod-conmon-489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296.scope.
Nov 25 08:55:27 compute-0 podman[377320]: 2025-11-25 08:55:27.612954049 +0000 UTC m=+0.026797801 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.711 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802d7b725468ff6e6315af7a30e30a1e715910a6a92e1c856150d350c2e2cd52/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:55:27 compute-0 podman[377320]: 2025-11-25 08:55:27.746026791 +0000 UTC m=+0.159870523 container init 489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:55:27 compute-0 podman[377320]: 2025-11-25 08:55:27.752172988 +0000 UTC m=+0.166016720 container start 489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 08:55:27 compute-0 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[377350]: [NOTICE]   (377366) : New worker (377372) forked
Nov 25 08:55:27 compute-0 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[377350]: [NOTICE]   (377366) : Loading success.
Nov 25 08:55:27 compute-0 nova_compute[253538]: 2025-11-25 08:55:27.852 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 11 KiB/s wr, 0 op/s
Nov 25 08:55:28 compute-0 sudo[377075]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:55:28 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:55:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:55:28 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:55:28 compute-0 sudo[377464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:55:28 compute-0 sudo[377464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:28 compute-0 sudo[377464]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:28 compute-0 sudo[377489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:55:28 compute-0 sudo[377489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:28 compute-0 sudo[377489]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:28 compute-0 sudo[377514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:55:28 compute-0 sudo[377514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:28 compute-0 sudo[377514]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:28 compute-0 sudo[377539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:55:28 compute-0 sudo[377539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:28 compute-0 ceph-mon[75015]: pgmap v2240: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 11 KiB/s wr, 0 op/s
Nov 25 08:55:28 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:55:28 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:55:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:55:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4016422686' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:55:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:55:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4016422686' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:55:29 compute-0 sudo[377539]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:55:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:55:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:55:29 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:55:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:55:29 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:55:29 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 746380e6-f276-4262-beb1-d716bc2b22b0 does not exist
Nov 25 08:55:29 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 1f0ef677-faf7-4ba5-8a32-688d709cbc22 does not exist
Nov 25 08:55:29 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 72df9519-4e17-4e03-9142-06ddb3f35e8b does not exist
Nov 25 08:55:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:55:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:55:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:55:29 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:55:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:55:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:55:29 compute-0 sudo[377595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:55:29 compute-0 sudo[377595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:29 compute-0 sudo[377595]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:29 compute-0 sudo[377620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:55:29 compute-0 sudo[377620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:29 compute-0 sudo[377620]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:29 compute-0 sudo[377645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:55:29 compute-0 sudo[377645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:29 compute-0 sudo[377645]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:29 compute-0 sudo[377670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:55:29 compute-0 sudo[377670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 341 B/s wr, 3 op/s
Nov 25 08:55:29 compute-0 nova_compute[253538]: 2025-11-25 08:55:29.894 253542 INFO nova.compute.manager [None req-9b8323bd-9b4a-4093-b1af-72c9e6a8836c 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Get console output
Nov 25 08:55:29 compute-0 nova_compute[253538]: 2025-11-25 08:55:29.899 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:55:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/4016422686' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:55:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/4016422686' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:55:29 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:55:29 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:55:29 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:55:29 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:55:29 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:55:29 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:55:29 compute-0 podman[377737]: 2025-11-25 08:55:29.974081085 +0000 UTC m=+0.044249615 container create e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_franklin, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 08:55:30 compute-0 systemd[1]: Started libpod-conmon-e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029.scope.
Nov 25 08:55:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:55:30 compute-0 podman[377737]: 2025-11-25 08:55:29.95218128 +0000 UTC m=+0.022349820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:55:30 compute-0 podman[377737]: 2025-11-25 08:55:30.056856909 +0000 UTC m=+0.127025459 container init e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_franklin, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:55:30 compute-0 podman[377737]: 2025-11-25 08:55:30.064529978 +0000 UTC m=+0.134698548 container start e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_franklin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 25 08:55:30 compute-0 epic_franklin[377753]: 167 167
Nov 25 08:55:30 compute-0 systemd[1]: libpod-e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029.scope: Deactivated successfully.
Nov 25 08:55:30 compute-0 podman[377737]: 2025-11-25 08:55:30.097782713 +0000 UTC m=+0.167951273 container attach e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_franklin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 08:55:30 compute-0 podman[377737]: 2025-11-25 08:55:30.100260219 +0000 UTC m=+0.170428759 container died e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_franklin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 08:55:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c6d1441326e258abdca401ea51af6d9ba8cb7410f9dec55b8a4a69dc63fb00a-merged.mount: Deactivated successfully.
Nov 25 08:55:30 compute-0 podman[377737]: 2025-11-25 08:55:30.170839401 +0000 UTC m=+0.241007931 container remove e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_franklin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 25 08:55:30 compute-0 systemd[1]: libpod-conmon-e8a39aedd54f40e1c5309c5dd5b94a1d25123524a164ba15ddf001afed619029.scope: Deactivated successfully.
Nov 25 08:55:30 compute-0 podman[377779]: 2025-11-25 08:55:30.361323336 +0000 UTC m=+0.040254897 container create 4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 08:55:30 compute-0 systemd[1]: Started libpod-conmon-4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779.scope.
Nov 25 08:55:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075f72d6cccc2c84f3169140b8c8e368ac2cef046a3e53978c8c180eafacaffa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075f72d6cccc2c84f3169140b8c8e368ac2cef046a3e53978c8c180eafacaffa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075f72d6cccc2c84f3169140b8c8e368ac2cef046a3e53978c8c180eafacaffa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075f72d6cccc2c84f3169140b8c8e368ac2cef046a3e53978c8c180eafacaffa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075f72d6cccc2c84f3169140b8c8e368ac2cef046a3e53978c8c180eafacaffa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:55:30 compute-0 podman[377779]: 2025-11-25 08:55:30.343588323 +0000 UTC m=+0.022519904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:55:30 compute-0 podman[377779]: 2025-11-25 08:55:30.448214711 +0000 UTC m=+0.127146302 container init 4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 08:55:30 compute-0 podman[377779]: 2025-11-25 08:55:30.461486882 +0000 UTC m=+0.140418433 container start 4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 08:55:30 compute-0 podman[377779]: 2025-11-25 08:55:30.464351129 +0000 UTC m=+0.143282690 container attach 4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 08:55:30 compute-0 nova_compute[253538]: 2025-11-25 08:55:30.823 253542 DEBUG nova.compute.manager [req-2d5b4246-3438-4077-92e3-26a7fc8d9ee3 req-a0cd1187-9c6d-4402-9852-badb63f5b42b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received event network-changed-52157627-d75e-4670-9215-6471bda94ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:30 compute-0 nova_compute[253538]: 2025-11-25 08:55:30.824 253542 DEBUG nova.compute.manager [req-2d5b4246-3438-4077-92e3-26a7fc8d9ee3 req-a0cd1187-9c6d-4402-9852-badb63f5b42b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Refreshing instance network info cache due to event network-changed-52157627-d75e-4670-9215-6471bda94ba6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:55:30 compute-0 nova_compute[253538]: 2025-11-25 08:55:30.824 253542 DEBUG oslo_concurrency.lockutils [req-2d5b4246-3438-4077-92e3-26a7fc8d9ee3 req-a0cd1187-9c6d-4402-9852-badb63f5b42b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:55:30 compute-0 nova_compute[253538]: 2025-11-25 08:55:30.824 253542 DEBUG oslo_concurrency.lockutils [req-2d5b4246-3438-4077-92e3-26a7fc8d9ee3 req-a0cd1187-9c6d-4402-9852-badb63f5b42b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:55:30 compute-0 nova_compute[253538]: 2025-11-25 08:55:30.825 253542 DEBUG nova.network.neutron [req-2d5b4246-3438-4077-92e3-26a7fc8d9ee3 req-a0cd1187-9c6d-4402-9852-badb63f5b42b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Refreshing network info cache for port 52157627-d75e-4670-9215-6471bda94ba6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:55:30 compute-0 nova_compute[253538]: 2025-11-25 08:55:30.922 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "7e82fa8c-6663-439c-833c-2b28f22282a8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:30 compute-0 nova_compute[253538]: 2025-11-25 08:55:30.923 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:30 compute-0 nova_compute[253538]: 2025-11-25 08:55:30.923 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:30 compute-0 nova_compute[253538]: 2025-11-25 08:55:30.923 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:30 compute-0 nova_compute[253538]: 2025-11-25 08:55:30.923 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:30 compute-0 nova_compute[253538]: 2025-11-25 08:55:30.924 253542 INFO nova.compute.manager [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Terminating instance
Nov 25 08:55:30 compute-0 nova_compute[253538]: 2025-11-25 08:55:30.925 253542 DEBUG nova.compute.manager [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:55:30 compute-0 ceph-mon[75015]: pgmap v2241: 321 pgs: 321 active+clean; 167 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 341 B/s wr, 3 op/s
Nov 25 08:55:30 compute-0 kernel: tap52157627-d7 (unregistering): left promiscuous mode
Nov 25 08:55:30 compute-0 NetworkManager[48915]: <info>  [1764060930.9759] device (tap52157627-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:55:30 compute-0 ovn_controller[152859]: 2025-11-25T08:55:30Z|01215|binding|INFO|Releasing lport 52157627-d75e-4670-9215-6471bda94ba6 from this chassis (sb_readonly=0)
Nov 25 08:55:30 compute-0 nova_compute[253538]: 2025-11-25 08:55:30.992 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:30 compute-0 ovn_controller[152859]: 2025-11-25T08:55:30Z|01216|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 down in Southbound
Nov 25 08:55:30 compute-0 ovn_controller[152859]: 2025-11-25T08:55:30Z|01217|binding|INFO|Removing iface tap52157627-d7 ovn-installed in OVS
Nov 25 08:55:30 compute-0 nova_compute[253538]: 2025-11-25 08:55:30.995 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:30 compute-0 nova_compute[253538]: 2025-11-25 08:55:30.996 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:30.999 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:b5:38 10.100.0.6'], port_security=['fa:16:3e:a5:b5:38 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e82fa8c-6663-439c-833c-2b28f22282a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8861bf3c-0dd7-44a1-b3d5-4af8cd78fda2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1772063d-dae9-4681-ab29-d1a2754b4cf7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=52157627-d75e-4670-9215-6471bda94ba6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.000 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 52157627-d75e-4670-9215-6471bda94ba6 in datapath 703bdacb-53cd-40a1-9c2c-c632a29e049b unbound from our chassis
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.002 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 703bdacb-53cd-40a1-9c2c-c632a29e049b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.003 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[be6c3fcf-86fc-4004-9013-25cbda53c175]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.004 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b namespace which is not needed anymore
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.031 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:31 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000075.scope: Deactivated successfully.
Nov 25 08:55:31 compute-0 systemd-machined[215790]: Machine qemu-147-instance-00000075 terminated.
Nov 25 08:55:31 compute-0 kernel: tap52157627-d7: entered promiscuous mode
Nov 25 08:55:31 compute-0 systemd-udevd[377805]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.190 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:31 compute-0 ovn_controller[152859]: 2025-11-25T08:55:31Z|01218|binding|INFO|Claiming lport 52157627-d75e-4670-9215-6471bda94ba6 for this chassis.
Nov 25 08:55:31 compute-0 kernel: tap52157627-d7 (unregistering): left promiscuous mode
Nov 25 08:55:31 compute-0 ovn_controller[152859]: 2025-11-25T08:55:31Z|01219|binding|INFO|52157627-d75e-4670-9215-6471bda94ba6: Claiming fa:16:3e:a5:b5:38 10.100.0.6
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.206 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:b5:38 10.100.0.6'], port_security=['fa:16:3e:a5:b5:38 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e82fa8c-6663-439c-833c-2b28f22282a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8861bf3c-0dd7-44a1-b3d5-4af8cd78fda2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1772063d-dae9-4681-ab29-d1a2754b4cf7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=52157627-d75e-4670-9215-6471bda94ba6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:55:31 compute-0 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[377350]: [NOTICE]   (377366) : haproxy version is 2.8.14-c23fe91
Nov 25 08:55:31 compute-0 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[377350]: [NOTICE]   (377366) : path to executable is /usr/sbin/haproxy
Nov 25 08:55:31 compute-0 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[377350]: [WARNING]  (377366) : Exiting Master process...
Nov 25 08:55:31 compute-0 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[377350]: [WARNING]  (377366) : Exiting Master process...
Nov 25 08:55:31 compute-0 ovn_controller[152859]: 2025-11-25T08:55:31Z|01220|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 ovn-installed in OVS
Nov 25 08:55:31 compute-0 ovn_controller[152859]: 2025-11-25T08:55:31Z|01221|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 up in Southbound
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.221 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:31 compute-0 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[377350]: [ALERT]    (377366) : Current worker (377372) exited with code 143 (Terminated)
Nov 25 08:55:31 compute-0 neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b[377350]: [WARNING]  (377366) : All workers exited. Exiting... (0)
Nov 25 08:55:31 compute-0 ovn_controller[152859]: 2025-11-25T08:55:31Z|01222|binding|INFO|Releasing lport 52157627-d75e-4670-9215-6471bda94ba6 from this chassis (sb_readonly=1)
Nov 25 08:55:31 compute-0 ovn_controller[152859]: 2025-11-25T08:55:31Z|01223|binding|INFO|Removing iface tap52157627-d7 ovn-installed in OVS
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.226 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:31 compute-0 systemd[1]: libpod-489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296.scope: Deactivated successfully.
Nov 25 08:55:31 compute-0 ovn_controller[152859]: 2025-11-25T08:55:31Z|01224|binding|INFO|Releasing lport 52157627-d75e-4670-9215-6471bda94ba6 from this chassis (sb_readonly=0)
Nov 25 08:55:31 compute-0 ovn_controller[152859]: 2025-11-25T08:55:31Z|01225|binding|INFO|Setting lport 52157627-d75e-4670-9215-6471bda94ba6 down in Southbound
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.234 253542 INFO nova.virt.libvirt.driver [-] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Instance destroyed successfully.
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.235 253542 DEBUG nova.objects.instance [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lazy-loading 'resources' on Instance uuid 7e82fa8c-6663-439c-833c-2b28f22282a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.238 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:b5:38 10.100.0.6'], port_security=['fa:16:3e:a5:b5:38 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e82fa8c-6663-439c-833c-2b28f22282a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dcaf3c96bfc4db3a41291debd385c67', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8861bf3c-0dd7-44a1-b3d5-4af8cd78fda2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1772063d-dae9-4681-ab29-d1a2754b4cf7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=52157627-d75e-4670-9215-6471bda94ba6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.240 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:31 compute-0 podman[377824]: 2025-11-25 08:55:31.240791205 +0000 UTC m=+0.120436710 container died 489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.249 253542 DEBUG nova.virt.libvirt.vif [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:54:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-360976027',display_name='tempest-TestNetworkAdvancedServerOps-server-360976027',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-360976027',id=117,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMP7QXPZDPXrUql830fJueudXHoXZ2zww6EaNe3PFgcJ0/Sar4viBph4zMnjSpS0mXHOOoM16QZ2fgUMXO9FFKEQnCoRRYMYlR8af4rdgpILBOKGdgI2okSUsyg3suBciA==',key_name='tempest-TestNetworkAdvancedServerOps-1316847301',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:54:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7dcaf3c96bfc4db3a41291debd385c67',ramdisk_id='',reservation_id='r-p9ynh0fp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1132090577',owner_user_name='tempest-TestNetworkAdvancedServerOps-1132090577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:55:27Z,user_data=None,user_id='009378dc36154271ba5b4590ce67ddde',uuid=7e82fa8c-6663-439c-833c-2b28f22282a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.249 253542 DEBUG nova.network.os_vif_util [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converting VIF {"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.250 253542 DEBUG nova.network.os_vif_util [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.251 253542 DEBUG os_vif [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.252 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.253 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52157627-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.254 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.258 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.260 253542 INFO os_vif [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:b5:38,bridge_name='br-int',has_traffic_filtering=True,id=52157627-d75e-4670-9215-6471bda94ba6,network=Network(703bdacb-53cd-40a1-9c2c-c632a29e049b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52157627-d7')
Nov 25 08:55:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296-userdata-shm.mount: Deactivated successfully.
Nov 25 08:55:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-802d7b725468ff6e6315af7a30e30a1e715910a6a92e1c856150d350c2e2cd52-merged.mount: Deactivated successfully.
Nov 25 08:55:31 compute-0 podman[377824]: 2025-11-25 08:55:31.313798661 +0000 UTC m=+0.193444146 container cleanup 489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 08:55:31 compute-0 systemd[1]: libpod-conmon-489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296.scope: Deactivated successfully.
Nov 25 08:55:31 compute-0 podman[377878]: 2025-11-25 08:55:31.37843805 +0000 UTC m=+0.039065034 container remove 489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.387 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[efb5c976-27d1-4e33-91c1-2739770bf5af]: (4, ('Tue Nov 25 08:55:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b (489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296)\n489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296\nTue Nov 25 08:55:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b (489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296)\n489b2c5f19d16e4e5592b32594c1a060b2f357fccef1644f8ff566a9c5ff2296\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.389 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cc8bc21d-504e-4eb5-9871-55111beb5784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.390 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap703bdacb-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:31 compute-0 kernel: tap703bdacb-50: left promiscuous mode
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.391 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.397 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c369b3f-bfd5-447c-bea0-cce669e0bc45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.409 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.413 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[523bb802-21c9-4982-a270-759f23657f4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.414 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b973c891-6e91-4aa3-9f74-0af9e4d4a6d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.431 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe47f0f-1fe0-49a5-879d-af32b05dfef4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628157, 'reachable_time': 15191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377897, 'error': None, 'target': 'ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d703bdacb\x2d53cd\x2d40a1\x2d9c2c\x2dc632a29e049b.mount: Deactivated successfully.
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.435 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-703bdacb-53cd-40a1-9c2c-c632a29e049b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.435 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b5980a43-40cf-48df-aa34-20e4d60c5afb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.436 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 52157627-d75e-4670-9215-6471bda94ba6 in datapath 703bdacb-53cd-40a1-9c2c-c632a29e049b unbound from our chassis
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.438 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 703bdacb-53cd-40a1-9c2c-c632a29e049b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.439 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[32354173-89ba-4cd1-827c-a56384849a6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.440 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 52157627-d75e-4670-9215-6471bda94ba6 in datapath 703bdacb-53cd-40a1-9c2c-c632a29e049b unbound from our chassis
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.441 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 703bdacb-53cd-40a1-9c2c-c632a29e049b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:55:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:31.442 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c050cf50-757d-45d0-9feb-6a71a0526805]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:55:31 compute-0 gracious_kirch[377796]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:55:31 compute-0 gracious_kirch[377796]: --> relative data size: 1.0
Nov 25 08:55:31 compute-0 gracious_kirch[377796]: --> All data devices are unavailable
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.613 253542 INFO nova.virt.libvirt.driver [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Deleting instance files /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8_del
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.614 253542 INFO nova.virt.libvirt.driver [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Deletion of /var/lib/nova/instances/7e82fa8c-6663-439c-833c-2b28f22282a8_del complete
Nov 25 08:55:31 compute-0 systemd[1]: libpod-4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779.scope: Deactivated successfully.
Nov 25 08:55:31 compute-0 systemd[1]: libpod-4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779.scope: Consumed 1.084s CPU time.
Nov 25 08:55:31 compute-0 podman[377779]: 2025-11-25 08:55:31.64517516 +0000 UTC m=+1.324106771 container died 4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.668 253542 INFO nova.compute.manager [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Took 0.74 seconds to destroy the instance on the hypervisor.
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.669 253542 DEBUG oslo.service.loopingcall [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.670 253542 DEBUG nova.compute.manager [-] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:55:31 compute-0 nova_compute[253538]: 2025-11-25 08:55:31.670 253542 DEBUG nova.network.neutron [-] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:55:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-075f72d6cccc2c84f3169140b8c8e368ac2cef046a3e53978c8c180eafacaffa-merged.mount: Deactivated successfully.
Nov 25 08:55:31 compute-0 podman[377779]: 2025-11-25 08:55:31.721739394 +0000 UTC m=+1.400670965 container remove 4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 08:55:31 compute-0 systemd[1]: libpod-conmon-4044db565a9239e07a477716389854abc1b78069214975560d796bdcc2930779.scope: Deactivated successfully.
Nov 25 08:55:31 compute-0 sudo[377670]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:31 compute-0 sudo[377923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:55:31 compute-0 sudo[377923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:31 compute-0 sudo[377923]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 137 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 5.5 KiB/s wr, 19 op/s
Nov 25 08:55:31 compute-0 sudo[377948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:55:31 compute-0 sudo[377948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:31 compute-0 sudo[377948]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:31 compute-0 sudo[377973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:55:31 compute-0 sudo[377973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:32 compute-0 sudo[377973]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:32 compute-0 sudo[377998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:55:32 compute-0 sudo[377998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:32 compute-0 nova_compute[253538]: 2025-11-25 08:55:32.292 253542 DEBUG nova.network.neutron [req-2d5b4246-3438-4077-92e3-26a7fc8d9ee3 req-a0cd1187-9c6d-4402-9852-badb63f5b42b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updated VIF entry in instance network info cache for port 52157627-d75e-4670-9215-6471bda94ba6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:55:32 compute-0 nova_compute[253538]: 2025-11-25 08:55:32.293 253542 DEBUG nova.network.neutron [req-2d5b4246-3438-4077-92e3-26a7fc8d9ee3 req-a0cd1187-9c6d-4402-9852-badb63f5b42b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updating instance_info_cache with network_info: [{"id": "52157627-d75e-4670-9215-6471bda94ba6", "address": "fa:16:3e:a5:b5:38", "network": {"id": "703bdacb-53cd-40a1-9c2c-c632a29e049b", "bridge": "br-int", "label": "tempest-network-smoke--919155047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7dcaf3c96bfc4db3a41291debd385c67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52157627-d7", "ovs_interfaceid": "52157627-d75e-4670-9215-6471bda94ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:55:32 compute-0 nova_compute[253538]: 2025-11-25 08:55:32.318 253542 DEBUG oslo_concurrency.lockutils [req-2d5b4246-3438-4077-92e3-26a7fc8d9ee3 req-a0cd1187-9c6d-4402-9852-badb63f5b42b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7e82fa8c-6663-439c-833c-2b28f22282a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:55:32 compute-0 nova_compute[253538]: 2025-11-25 08:55:32.456 253542 DEBUG nova.network.neutron [-] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:55:32 compute-0 nova_compute[253538]: 2025-11-25 08:55:32.473 253542 INFO nova.compute.manager [-] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Took 0.80 seconds to deallocate network for instance.
Nov 25 08:55:32 compute-0 podman[378063]: 2025-11-25 08:55:32.523156897 +0000 UTC m=+0.062746048 container create 0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:55:32 compute-0 nova_compute[253538]: 2025-11-25 08:55:32.526 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:32 compute-0 nova_compute[253538]: 2025-11-25 08:55:32.527 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:55:32 compute-0 systemd[1]: Started libpod-conmon-0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba.scope.
Nov 25 08:55:32 compute-0 podman[378063]: 2025-11-25 08:55:32.487516868 +0000 UTC m=+0.027106069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:55:32 compute-0 nova_compute[253538]: 2025-11-25 08:55:32.586 253542 DEBUG oslo_concurrency.processutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:55:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:55:32 compute-0 podman[378063]: 2025-11-25 08:55:32.647113852 +0000 UTC m=+0.186703023 container init 0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:55:32 compute-0 podman[378063]: 2025-11-25 08:55:32.655391466 +0000 UTC m=+0.194980577 container start 0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:55:32 compute-0 podman[378063]: 2025-11-25 08:55:32.658943504 +0000 UTC m=+0.198532655 container attach 0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:55:32 compute-0 nova_compute[253538]: 2025-11-25 08:55:32.661 253542 DEBUG nova.compute.manager [req-487fc684-66d0-44ad-a5f5-f58783fed6a5 req-aa252e42-b4bc-42ee-b310-c7213fa461cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Received event network-vif-deleted-52157627-d75e-4670-9215-6471bda94ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:32 compute-0 silly_babbage[378080]: 167 167
Nov 25 08:55:32 compute-0 systemd[1]: libpod-0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba.scope: Deactivated successfully.
Nov 25 08:55:32 compute-0 podman[378063]: 2025-11-25 08:55:32.662728806 +0000 UTC m=+0.202317917 container died 0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 08:55:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc9e8cd4f76f074d2393934a0da7aa1821b868e576bab5c767031027235dc8f5-merged.mount: Deactivated successfully.
Nov 25 08:55:32 compute-0 podman[378063]: 2025-11-25 08:55:32.706669113 +0000 UTC m=+0.246258234 container remove 0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:55:32 compute-0 systemd[1]: libpod-conmon-0bb03fa762d470311aec58582d9cd0fdab98827c907738e92cebee0453e492ba.scope: Deactivated successfully.
Nov 25 08:55:32 compute-0 nova_compute[253538]: 2025-11-25 08:55:32.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:32 compute-0 podman[378123]: 2025-11-25 08:55:32.890720952 +0000 UTC m=+0.051564724 container create aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 08:55:32 compute-0 systemd[1]: Started libpod-conmon-aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6.scope.
Nov 25 08:55:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617d664dd5a4266051b8724908abee2621787256c00c6030f38cfceff57a50d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617d664dd5a4266051b8724908abee2621787256c00c6030f38cfceff57a50d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617d664dd5a4266051b8724908abee2621787256c00c6030f38cfceff57a50d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617d664dd5a4266051b8724908abee2621787256c00c6030f38cfceff57a50d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:55:32 compute-0 podman[378123]: 2025-11-25 08:55:32.958048335 +0000 UTC m=+0.118892107 container init aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 08:55:32 compute-0 podman[378123]: 2025-11-25 08:55:32.865244898 +0000 UTC m=+0.026088700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:55:32 compute-0 podman[378123]: 2025-11-25 08:55:32.970784391 +0000 UTC m=+0.131628163 container start aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 08:55:32 compute-0 podman[378123]: 2025-11-25 08:55:32.973896506 +0000 UTC m=+0.134740278 container attach aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 08:55:32 compute-0 ceph-mon[75015]: pgmap v2242: 321 pgs: 321 active+clean; 137 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 5.5 KiB/s wr, 19 op/s
Nov 25 08:55:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:55:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/305180320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:55:33 compute-0 nova_compute[253538]: 2025-11-25 08:55:33.070 253542 DEBUG oslo_concurrency.processutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:55:33 compute-0 nova_compute[253538]: 2025-11-25 08:55:33.079 253542 DEBUG nova.compute.provider_tree [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:55:33 compute-0 nova_compute[253538]: 2025-11-25 08:55:33.095 253542 DEBUG nova.scheduler.client.report [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:55:33 compute-0 nova_compute[253538]: 2025-11-25 08:55:33.118 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:33 compute-0 nova_compute[253538]: 2025-11-25 08:55:33.149 253542 INFO nova.scheduler.client.report [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Deleted allocations for instance 7e82fa8c-6663-439c-833c-2b28f22282a8
Nov 25 08:55:33 compute-0 nova_compute[253538]: 2025-11-25 08:55:33.219 253542 DEBUG oslo_concurrency.lockutils [None req-909947b0-2a81-4cc4-a5d9-5fb476fbcc1b 009378dc36154271ba5b4590ce67ddde 7dcaf3c96bfc4db3a41291debd385c67 - - default default] Lock "7e82fa8c-6663-439c-833c-2b28f22282a8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]: {
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:     "0": [
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:         {
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "devices": [
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "/dev/loop3"
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             ],
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "lv_name": "ceph_lv0",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "lv_size": "21470642176",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "name": "ceph_lv0",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "tags": {
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.cluster_name": "ceph",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.crush_device_class": "",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.encrypted": "0",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.osd_id": "0",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.type": "block",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.vdo": "0"
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             },
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "type": "block",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "vg_name": "ceph_vg0"
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:         }
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:     ],
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:     "1": [
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:         {
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "devices": [
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "/dev/loop4"
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             ],
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "lv_name": "ceph_lv1",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "lv_size": "21470642176",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "name": "ceph_lv1",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "tags": {
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.cluster_name": "ceph",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.crush_device_class": "",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.encrypted": "0",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.osd_id": "1",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.type": "block",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.vdo": "0"
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             },
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "type": "block",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "vg_name": "ceph_vg1"
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:         }
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:     ],
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:     "2": [
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:         {
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "devices": [
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "/dev/loop5"
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             ],
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "lv_name": "ceph_lv2",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "lv_size": "21470642176",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "name": "ceph_lv2",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "tags": {
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.cluster_name": "ceph",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.crush_device_class": "",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.encrypted": "0",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.osd_id": "2",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.type": "block",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:                 "ceph.vdo": "0"
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             },
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "type": "block",
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:             "vg_name": "ceph_vg2"
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:         }
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]:     ]
Nov 25 08:55:33 compute-0 lucid_bhabha[378140]: }
Nov 25 08:55:33 compute-0 systemd[1]: libpod-aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6.scope: Deactivated successfully.
Nov 25 08:55:33 compute-0 podman[378123]: 2025-11-25 08:55:33.792388184 +0000 UTC m=+0.953231966 container died aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 08:55:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2243: 321 pgs: 321 active+clean; 115 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 5.5 KiB/s wr, 28 op/s
Nov 25 08:55:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-617d664dd5a4266051b8724908abee2621787256c00c6030f38cfceff57a50d6-merged.mount: Deactivated successfully.
Nov 25 08:55:33 compute-0 podman[378123]: 2025-11-25 08:55:33.932064796 +0000 UTC m=+1.092908568 container remove aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:55:33 compute-0 systemd[1]: libpod-conmon-aedd00449e37ef40af327302d19ee779aad2eb2171263dc064bcb98c090108a6.scope: Deactivated successfully.
Nov 25 08:55:33 compute-0 sudo[377998]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/305180320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:55:34 compute-0 sudo[378165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:55:34 compute-0 sudo[378165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:34 compute-0 sudo[378165]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:34 compute-0 sudo[378190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:55:34 compute-0 sudo[378190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:34 compute-0 sudo[378190]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:34 compute-0 sudo[378215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:55:34 compute-0 sudo[378215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:34 compute-0 sudo[378215]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:34 compute-0 sudo[378240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:55:34 compute-0 sudo[378240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:34 compute-0 podman[378306]: 2025-11-25 08:55:34.612879197 +0000 UTC m=+0.061039783 container create 301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 08:55:34 compute-0 systemd[1]: Started libpod-conmon-301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98.scope.
Nov 25 08:55:34 compute-0 podman[378306]: 2025-11-25 08:55:34.586644443 +0000 UTC m=+0.034805069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:55:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:55:34 compute-0 podman[378306]: 2025-11-25 08:55:34.727551428 +0000 UTC m=+0.175712074 container init 301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:55:34 compute-0 podman[378306]: 2025-11-25 08:55:34.736720887 +0000 UTC m=+0.184881443 container start 301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:55:34 compute-0 podman[378306]: 2025-11-25 08:55:34.741531049 +0000 UTC m=+0.189691645 container attach 301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 08:55:34 compute-0 agitated_tesla[378322]: 167 167
Nov 25 08:55:34 compute-0 systemd[1]: libpod-301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98.scope: Deactivated successfully.
Nov 25 08:55:34 compute-0 podman[378306]: 2025-11-25 08:55:34.745148667 +0000 UTC m=+0.193309223 container died 301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 08:55:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-024863d8799286d81c7e0ca6ae448e8700382ff96269c25c34de0cd2e05dd4b4-merged.mount: Deactivated successfully.
Nov 25 08:55:34 compute-0 podman[378306]: 2025-11-25 08:55:34.816095718 +0000 UTC m=+0.264256284 container remove 301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 08:55:34 compute-0 systemd[1]: libpod-conmon-301cafe31c9fdb375c0766445a47055db3e02bc6ee3ae1b23878eed45c0b1a98.scope: Deactivated successfully.
Nov 25 08:55:34 compute-0 ceph-mon[75015]: pgmap v2243: 321 pgs: 321 active+clean; 115 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 5.5 KiB/s wr, 28 op/s
Nov 25 08:55:35 compute-0 podman[378345]: 2025-11-25 08:55:35.009616226 +0000 UTC m=+0.051805092 container create af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 08:55:35 compute-0 systemd[1]: Started libpod-conmon-af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0.scope.
Nov 25 08:55:35 compute-0 podman[378345]: 2025-11-25 08:55:34.981370887 +0000 UTC m=+0.023559753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:55:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db210f23a82398e427ca754024134826b5e2d7b7a8d12a11af2c30aa6e60993b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db210f23a82398e427ca754024134826b5e2d7b7a8d12a11af2c30aa6e60993b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db210f23a82398e427ca754024134826b5e2d7b7a8d12a11af2c30aa6e60993b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db210f23a82398e427ca754024134826b5e2d7b7a8d12a11af2c30aa6e60993b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:55:35 compute-0 podman[378345]: 2025-11-25 08:55:35.12365918 +0000 UTC m=+0.165848076 container init af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 08:55:35 compute-0 podman[378345]: 2025-11-25 08:55:35.13286068 +0000 UTC m=+0.175049546 container start af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 08:55:35 compute-0 podman[378345]: 2025-11-25 08:55:35.137594349 +0000 UTC m=+0.179783226 container attach af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:55:35 compute-0 nova_compute[253538]: 2025-11-25 08:55:35.413 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "f028149d-de9a-49c3-8805-49336474a101" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:35 compute-0 nova_compute[253538]: 2025-11-25 08:55:35.416 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:35 compute-0 nova_compute[253538]: 2025-11-25 08:55:35.435 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:55:35 compute-0 nova_compute[253538]: 2025-11-25 08:55:35.511 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:35 compute-0 nova_compute[253538]: 2025-11-25 08:55:35.512 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:35 compute-0 nova_compute[253538]: 2025-11-25 08:55:35.522 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:55:35 compute-0 nova_compute[253538]: 2025-11-25 08:55:35.522 253542 INFO nova.compute.claims [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:55:35 compute-0 nova_compute[253538]: 2025-11-25 08:55:35.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:55:35 compute-0 nova_compute[253538]: 2025-11-25 08:55:35.610 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:55:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2244: 321 pgs: 321 active+clean; 88 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 5.8 KiB/s wr, 33 op/s
Nov 25 08:55:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:55:36 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3149975109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]: {
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:         "osd_id": 1,
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:         "type": "bluestore"
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:     },
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:         "osd_id": 2,
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:         "type": "bluestore"
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:     },
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:         "osd_id": 0,
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:         "type": "bluestore"
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]:     }
Nov 25 08:55:36 compute-0 peaceful_banzai[378361]: }
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.078 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.086 253542 DEBUG nova.compute.provider_tree [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:55:36 compute-0 systemd[1]: libpod-af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0.scope: Deactivated successfully.
Nov 25 08:55:36 compute-0 podman[378345]: 2025-11-25 08:55:36.095914474 +0000 UTC m=+1.138103300 container died af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.101 253542 DEBUG nova.scheduler.client.report [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:55:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-db210f23a82398e427ca754024134826b5e2d7b7a8d12a11af2c30aa6e60993b-merged.mount: Deactivated successfully.
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.133 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.134 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.184 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.185 253542 DEBUG nova.network.neutron [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:55:36 compute-0 podman[378345]: 2025-11-25 08:55:36.188947206 +0000 UTC m=+1.231136032 container remove af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 08:55:36 compute-0 systemd[1]: libpod-conmon-af3d0f87e223b5caf5603be23af9017886c230bab2645b1d36d33a1e530970c0.scope: Deactivated successfully.
Nov 25 08:55:36 compute-0 sudo[378240]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:55:36 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:55:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:55:36 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:55:36 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev aa319047-607a-432e-b886-5541a71b79d6 does not exist
Nov 25 08:55:36 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 9bfefde3-1c74-4ae0-8352-3e06f3e0bdfd does not exist
Nov 25 08:55:36 compute-0 sudo[378431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:55:36 compute-0 sudo[378431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:36 compute-0 sudo[378431]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.349 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.371 253542 INFO nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.374 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.387 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:55:36 compute-0 sudo[378456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:55:36 compute-0 sudo[378456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:55:36 compute-0 sudo[378456]: pam_unix(sudo:session): session closed for user root
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.524 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.525 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.526 253542 INFO nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Creating image(s)
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.547 253542 DEBUG nova.storage.rbd_utils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image f028149d-de9a-49c3-8805-49336474a101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.569 253542 DEBUG nova.storage.rbd_utils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image f028149d-de9a-49c3-8805-49336474a101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.595 253542 DEBUG nova.storage.rbd_utils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image f028149d-de9a-49c3-8805-49336474a101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.600 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.724 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.726 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.727 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.728 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.757 253542 DEBUG nova.storage.rbd_utils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image f028149d-de9a-49c3-8805-49336474a101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:55:36 compute-0 nova_compute[253538]: 2025-11-25 08:55:36.761 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc f028149d-de9a-49c3-8805-49336474a101_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:55:37 compute-0 ceph-mon[75015]: pgmap v2244: 321 pgs: 321 active+clean; 88 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 5.8 KiB/s wr, 33 op/s
Nov 25 08:55:37 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3149975109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:55:37 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:55:37 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:55:37 compute-0 nova_compute[253538]: 2025-11-25 08:55:37.025 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc f028149d-de9a-49c3-8805-49336474a101_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:55:37 compute-0 nova_compute[253538]: 2025-11-25 08:55:37.090 253542 DEBUG nova.storage.rbd_utils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image f028149d-de9a-49c3-8805-49336474a101_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:55:37 compute-0 nova_compute[253538]: 2025-11-25 08:55:37.192 253542 DEBUG nova.objects.instance [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid f028149d-de9a-49c3-8805-49336474a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:55:37 compute-0 nova_compute[253538]: 2025-11-25 08:55:37.208 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:55:37 compute-0 nova_compute[253538]: 2025-11-25 08:55:37.209 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Ensure instance console log exists: /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:55:37 compute-0 nova_compute[253538]: 2025-11-25 08:55:37.210 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:37 compute-0 nova_compute[253538]: 2025-11-25 08:55:37.210 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:37 compute-0 nova_compute[253538]: 2025-11-25 08:55:37.210 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:37 compute-0 nova_compute[253538]: 2025-11-25 08:55:37.265 253542 DEBUG nova.policy [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:55:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:55:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 321 active+clean; 88 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 5.8 KiB/s wr, 33 op/s
Nov 25 08:55:37 compute-0 nova_compute[253538]: 2025-11-25 08:55:37.883 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:38 compute-0 nova_compute[253538]: 2025-11-25 08:55:38.162 253542 DEBUG nova.network.neutron [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Successfully updated port: 3df2cc50-c6c1-476a-a12a-0d02fae91559 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:55:38 compute-0 nova_compute[253538]: 2025-11-25 08:55:38.191 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:55:38 compute-0 nova_compute[253538]: 2025-11-25 08:55:38.191 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:55:38 compute-0 nova_compute[253538]: 2025-11-25 08:55:38.191 253542 DEBUG nova.network.neutron [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:55:38 compute-0 nova_compute[253538]: 2025-11-25 08:55:38.355 253542 DEBUG nova.compute.manager [req-df588e23-2dca-41a8-a2fd-a035719b9c75 req-b01e9bef-321c-44ba-9801-26fd45b66884 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Received event network-changed-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:38 compute-0 nova_compute[253538]: 2025-11-25 08:55:38.356 253542 DEBUG nova.compute.manager [req-df588e23-2dca-41a8-a2fd-a035719b9c75 req-b01e9bef-321c-44ba-9801-26fd45b66884 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Refreshing instance network info cache due to event network-changed-3df2cc50-c6c1-476a-a12a-0d02fae91559. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:55:38 compute-0 nova_compute[253538]: 2025-11-25 08:55:38.356 253542 DEBUG oslo_concurrency.lockutils [req-df588e23-2dca-41a8-a2fd-a035719b9c75 req-b01e9bef-321c-44ba-9801-26fd45b66884 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:55:39 compute-0 ceph-mon[75015]: pgmap v2245: 321 pgs: 321 active+clean; 88 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 5.8 KiB/s wr, 33 op/s
Nov 25 08:55:39 compute-0 nova_compute[253538]: 2025-11-25 08:55:39.018 253542 DEBUG nova.network.neutron [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:55:39 compute-0 nova_compute[253538]: 2025-11-25 08:55:39.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:55:39 compute-0 nova_compute[253538]: 2025-11-25 08:55:39.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:55:39 compute-0 nova_compute[253538]: 2025-11-25 08:55:39.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:55:39 compute-0 nova_compute[253538]: 2025-11-25 08:55:39.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 08:55:39 compute-0 nova_compute[253538]: 2025-11-25 08:55:39.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 08:55:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 122 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 MiB/s wr, 36 op/s
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.304 253542 DEBUG nova.network.neutron [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Updating instance_info_cache with network_info: [{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.318 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.319 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Instance network_info: |[{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.319 253542 DEBUG oslo_concurrency.lockutils [req-df588e23-2dca-41a8-a2fd-a035719b9c75 req-b01e9bef-321c-44ba-9801-26fd45b66884 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.320 253542 DEBUG nova.network.neutron [req-df588e23-2dca-41a8-a2fd-a035719b9c75 req-b01e9bef-321c-44ba-9801-26fd45b66884 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Refreshing network info cache for port 3df2cc50-c6c1-476a-a12a-0d02fae91559 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.323 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Start _get_guest_xml network_info=[{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.328 253542 WARNING nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.337 253542 DEBUG nova.virt.libvirt.host [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.337 253542 DEBUG nova.virt.libvirt.host [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.341 253542 DEBUG nova.virt.libvirt.host [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.342 253542 DEBUG nova.virt.libvirt.host [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.342 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.343 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.343 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.344 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.344 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.344 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.345 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.345 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.345 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.346 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.346 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.346 253542 DEBUG nova.virt.hardware [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.350 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:55:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:55:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3352452760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.840 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:55:40 compute-0 podman[378668]: 2025-11-25 08:55:40.850953779 +0000 UTC m=+0.086431414 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.875 253542 DEBUG nova.storage.rbd_utils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image f028149d-de9a-49c3-8805-49336474a101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:55:40 compute-0 nova_compute[253538]: 2025-11-25 08:55:40.882 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:55:41 compute-0 ceph-mon[75015]: pgmap v2246: 321 pgs: 321 active+clean; 122 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 MiB/s wr, 36 op/s
Nov 25 08:55:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3352452760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:55:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:41.078 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:41.079 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:41.079 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:55:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2715981006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.319 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.321 253542 DEBUG nova.virt.libvirt.vif [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:55:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-443459317',display_name='tempest-TestNetworkBasicOps-server-443459317',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-443459317',id=118,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHwIQI4smh03agRxaCUyBdpTkZuGWd18KsbSAlDGRTalp6+OaIXJV7ErpMU5iOukAfWckmlqdBb7cA7hp/AAowmL6erSk1AV13d1Hs/ktP4LutA1fVkErwMJ9ccrFBLyEA==',key_name='tempest-TestNetworkBasicOps-292858725',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-0xuf3mxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:55:36Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=f028149d-de9a-49c3-8805-49336474a101,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.321 253542 DEBUG nova.network.os_vif_util [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.322 253542 DEBUG nova.network.os_vif_util [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.323 253542 DEBUG nova.objects.instance [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid f028149d-de9a-49c3-8805-49336474a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.341 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:55:41 compute-0 nova_compute[253538]:   <uuid>f028149d-de9a-49c3-8805-49336474a101</uuid>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   <name>instance-00000076</name>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkBasicOps-server-443459317</nova:name>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:55:40</nova:creationTime>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:55:41 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:55:41 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:55:41 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:55:41 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:55:41 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:55:41 compute-0 nova_compute[253538]:         <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:55:41 compute-0 nova_compute[253538]:         <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:55:41 compute-0 nova_compute[253538]:         <nova:port uuid="3df2cc50-c6c1-476a-a12a-0d02fae91559">
Nov 25 08:55:41 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <system>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <entry name="serial">f028149d-de9a-49c3-8805-49336474a101</entry>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <entry name="uuid">f028149d-de9a-49c3-8805-49336474a101</entry>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     </system>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   <os>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   </os>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   <features>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   </features>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/f028149d-de9a-49c3-8805-49336474a101_disk">
Nov 25 08:55:41 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       </source>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:55:41 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/f028149d-de9a-49c3-8805-49336474a101_disk.config">
Nov 25 08:55:41 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       </source>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:55:41 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:54:4e:c2"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <target dev="tap3df2cc50-c6"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101/console.log" append="off"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <video>
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     </video>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:55:41 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:55:41 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:55:41 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:55:41 compute-0 nova_compute[253538]: </domain>
Nov 25 08:55:41 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.341 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Preparing to wait for external event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.342 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "f028149d-de9a-49c3-8805-49336474a101-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.342 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.342 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.343 253542 DEBUG nova.virt.libvirt.vif [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:55:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-443459317',display_name='tempest-TestNetworkBasicOps-server-443459317',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-443459317',id=118,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHwIQI4smh03agRxaCUyBdpTkZuGWd18KsbSAlDGRTalp6+OaIXJV7ErpMU5iOukAfWckmlqdBb7cA7hp/AAowmL6erSk1AV13d1Hs/ktP4LutA1fVkErwMJ9ccrFBLyEA==',key_name='tempest-TestNetworkBasicOps-292858725',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-0xuf3mxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:55:36Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=f028149d-de9a-49c3-8805-49336474a101,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.343 253542 DEBUG nova.network.os_vif_util [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.344 253542 DEBUG nova.network.os_vif_util [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.344 253542 DEBUG os_vif [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.345 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.345 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.346 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.350 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.350 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3df2cc50-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.351 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3df2cc50-c6, col_values=(('external_ids', {'iface-id': '3df2cc50-c6c1-476a-a12a-0d02fae91559', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:4e:c2', 'vm-uuid': 'f028149d-de9a-49c3-8805-49336474a101'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.353 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:41 compute-0 NetworkManager[48915]: <info>  [1764060941.3547] manager: (tap3df2cc50-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/494)
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.361 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.362 253542 INFO os_vif [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6')
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.421 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.422 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.422 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:54:4e:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.423 253542 INFO nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Using config drive
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.445 253542 DEBUG nova.storage.rbd_utils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image f028149d-de9a-49c3-8805-49336474a101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:55:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.931 253542 INFO nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Creating config drive at /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101/disk.config
Nov 25 08:55:41 compute-0 nova_compute[253538]: 2025-11-25 08:55:41.935 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8p3ow2_x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:55:42 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2715981006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.077 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8p3ow2_x" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.115 253542 DEBUG nova.storage.rbd_utils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image f028149d-de9a-49c3-8805-49336474a101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.120 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101/disk.config f028149d-de9a-49c3-8805-49336474a101_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.301 253542 DEBUG oslo_concurrency.processutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101/disk.config f028149d-de9a-49c3-8805-49336474a101_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.302 253542 INFO nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Deleting local config drive /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101/disk.config because it was imported into RBD.
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.340 253542 DEBUG nova.network.neutron [req-df588e23-2dca-41a8-a2fd-a035719b9c75 req-b01e9bef-321c-44ba-9801-26fd45b66884 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Updated VIF entry in instance network info cache for port 3df2cc50-c6c1-476a-a12a-0d02fae91559. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.341 253542 DEBUG nova.network.neutron [req-df588e23-2dca-41a8-a2fd-a035719b9c75 req-b01e9bef-321c-44ba-9801-26fd45b66884 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Updating instance_info_cache with network_info: [{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.356 253542 DEBUG oslo_concurrency.lockutils [req-df588e23-2dca-41a8-a2fd-a035719b9c75 req-b01e9bef-321c-44ba-9801-26fd45b66884 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:55:42 compute-0 kernel: tap3df2cc50-c6: entered promiscuous mode
Nov 25 08:55:42 compute-0 NetworkManager[48915]: <info>  [1764060942.3770] manager: (tap3df2cc50-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/495)
Nov 25 08:55:42 compute-0 ovn_controller[152859]: 2025-11-25T08:55:42Z|01226|binding|INFO|Claiming lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 for this chassis.
Nov 25 08:55:42 compute-0 ovn_controller[152859]: 2025-11-25T08:55:42Z|01227|binding|INFO|3df2cc50-c6c1-476a-a12a-0d02fae91559: Claiming fa:16:3e:54:4e:c2 10.100.0.14
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.392 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:4e:c2 10.100.0.14'], port_security=['fa:16:3e:54:4e:c2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1598437000', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f028149d-de9a-49c3-8805-49336474a101', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05073ace-d35c-48d1-9399-5c8964c484d2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1598437000', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd480e0b0-4ff3-496c-bb44-a333ae5d1ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d00a1a17-1a1d-406f-9e8c-c8e478c002e9, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3df2cc50-c6c1-476a-a12a-0d02fae91559) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.394 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3df2cc50-c6c1-476a-a12a-0d02fae91559 in datapath 05073ace-d35c-48d1-9399-5c8964c484d2 bound to our chassis
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.396 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 05073ace-d35c-48d1-9399-5c8964c484d2
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.411 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1356ce0d-9927-455b-a137-90e6fe8a31b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.412 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap05073ace-d1 in ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.415 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap05073ace-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.415 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e21ae9-f901-479d-ad27-d54fe04af978]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[14133ac0-b20b-4dd3-997a-a1bf44613df0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.432 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4e96da64-c335-4ff0-8a0b-1837064288cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:42 compute-0 systemd[1]: Started Virtual Machine qemu-148-instance-00000076.
Nov 25 08:55:42 compute-0 systemd-machined[215790]: New machine qemu-148-instance-00000076.
Nov 25 08:55:42 compute-0 systemd-udevd[378812]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:55:42 compute-0 NetworkManager[48915]: <info>  [1764060942.4584] device (tap3df2cc50-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.458 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[607cd86f-a64d-4703-a388-2584e98ee66d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:42 compute-0 NetworkManager[48915]: <info>  [1764060942.4614] device (tap3df2cc50-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.522 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.533 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:42 compute-0 ovn_controller[152859]: 2025-11-25T08:55:42Z|01228|binding|INFO|Setting lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 ovn-installed in OVS
Nov 25 08:55:42 compute-0 ovn_controller[152859]: 2025-11-25T08:55:42Z|01229|binding|INFO|Setting lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 up in Southbound
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.537 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.544 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1d05a8fa-7aff-4a00-b02f-22964ceb6500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.550 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4e51fe1f-9963-4a10-ae12-d57662af4449]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:42 compute-0 systemd-udevd[378819]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:55:42 compute-0 NetworkManager[48915]: <info>  [1764060942.5525] manager: (tap05073ace-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/496)
Nov 25 08:55:42 compute-0 podman[378801]: 2025-11-25 08:55:42.587550057 +0000 UTC m=+0.160507480 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.590 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0fbfa1b5-71cc-44b2-9390-89667a325853]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.593 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e515bb6f-ba93-4275-bfea-7041786b0a3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:42 compute-0 NetworkManager[48915]: <info>  [1764060942.6219] device (tap05073ace-d0): carrier: link connected
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.629 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[eb964c24-170f-414a-85c5-7395527c95f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.655 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[85751b2c-cd98-4e14-b92f-f87a7a03d6b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap05073ace-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:25:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 629722, 'reachable_time': 18808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378854, 'error': None, 'target': 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.673 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ac7d6d-ec18-48c0-8818-541b2405a342]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee9:25c6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 629722, 'tstamp': 629722}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378855, 'error': None, 'target': 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.694 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cec05173-34b2-48c8-afbb-e073759a6ae3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap05073ace-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:25:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 629722, 'reachable_time': 18808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378856, 'error': None, 'target': 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.730 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e5f875-9633-4009-9544-3d7e693a0d5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.810 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0db55a2d-024f-4ffc-ad64-e2313abbf553]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.812 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05073ace-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.812 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.812 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05073ace-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.814 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:42 compute-0 NetworkManager[48915]: <info>  [1764060942.8149] manager: (tap05073ace-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/497)
Nov 25 08:55:42 compute-0 kernel: tap05073ace-d0: entered promiscuous mode
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.817 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.818 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap05073ace-d0, col_values=(('external_ids', {'iface-id': '38363726-6a82-410a-a283-1a7b285deea5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.819 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:42 compute-0 ovn_controller[152859]: 2025-11-25T08:55:42Z|01230|binding|INFO|Releasing lport 38363726-6a82-410a-a283-1a7b285deea5 from this chassis (sb_readonly=0)
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.820 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.821 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/05073ace-d35c-48d1-9399-5c8964c484d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/05073ace-d35c-48d1-9399-5c8964c484d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.822 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[522c0b2b-848d-4500-94c8-407e4ff9c67b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.822 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-05073ace-d35c-48d1-9399-5c8964c484d2
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/05073ace-d35c-48d1-9399-5c8964c484d2.pid.haproxy
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 05073ace-d35c-48d1-9399-5c8964c484d2
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:55:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:42.823 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'env', 'PROCESS_TAG=haproxy-05073ace-d35c-48d1-9399-5c8964c484d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/05073ace-d35c-48d1-9399-5c8964c484d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.864 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060942.8637898, f028149d-de9a-49c3-8805-49336474a101 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.865 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] VM Started (Lifecycle Event)
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.884 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.884 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.900 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060942.8639305, f028149d-de9a-49c3-8805-49336474a101 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.900 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] VM Paused (Lifecycle Event)
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.907 253542 DEBUG nova.compute.manager [req-b910b7bf-148d-4646-ac05-b957f96c4554 req-ce5f7ade-97a1-4772-8cec-a0001aac71ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Received event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.907 253542 DEBUG oslo_concurrency.lockutils [req-b910b7bf-148d-4646-ac05-b957f96c4554 req-ce5f7ade-97a1-4772-8cec-a0001aac71ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f028149d-de9a-49c3-8805-49336474a101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.908 253542 DEBUG oslo_concurrency.lockutils [req-b910b7bf-148d-4646-ac05-b957f96c4554 req-ce5f7ade-97a1-4772-8cec-a0001aac71ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.909 253542 DEBUG oslo_concurrency.lockutils [req-b910b7bf-148d-4646-ac05-b957f96c4554 req-ce5f7ade-97a1-4772-8cec-a0001aac71ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.909 253542 DEBUG nova.compute.manager [req-b910b7bf-148d-4646-ac05-b957f96c4554 req-ce5f7ade-97a1-4772-8cec-a0001aac71ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Processing event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.910 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.928 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.928 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.935 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060942.915906, f028149d-de9a-49c3-8805-49336474a101 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.935 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] VM Resumed (Lifecycle Event)
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.937 253542 INFO nova.virt.libvirt.driver [-] [instance: f028149d-de9a-49c3-8805-49336474a101] Instance spawned successfully.
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.939 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.973 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.978 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.978 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.978 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.979 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.979 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.979 253542 DEBUG nova.virt.libvirt.driver [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:55:42 compute-0 nova_compute[253538]: 2025-11-25 08:55:42.983 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:55:43 compute-0 nova_compute[253538]: 2025-11-25 08:55:43.009 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:55:43 compute-0 nova_compute[253538]: 2025-11-25 08:55:43.031 253542 INFO nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Took 6.51 seconds to spawn the instance on the hypervisor.
Nov 25 08:55:43 compute-0 nova_compute[253538]: 2025-11-25 08:55:43.031 253542 DEBUG nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:55:43 compute-0 ceph-mon[75015]: pgmap v2247: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Nov 25 08:55:43 compute-0 nova_compute[253538]: 2025-11-25 08:55:43.087 253542 INFO nova.compute.manager [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Took 7.59 seconds to build instance.
Nov 25 08:55:43 compute-0 nova_compute[253538]: 2025-11-25 08:55:43.106 253542 DEBUG oslo_concurrency.lockutils [None req-59e6579b-ccc5-4f68-852e-e3bd22f9798e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:43 compute-0 podman[378930]: 2025-11-25 08:55:43.212598129 +0000 UTC m=+0.047710089 container create d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 08:55:43 compute-0 systemd[1]: Started libpod-conmon-d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e.scope.
Nov 25 08:55:43 compute-0 podman[378930]: 2025-11-25 08:55:43.1894643 +0000 UTC m=+0.024576290 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:55:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c502fd0fc85e055f1ddf2aec0772fba519495d438e9ce6a804788847f7e5e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:55:43 compute-0 podman[378930]: 2025-11-25 08:55:43.316677792 +0000 UTC m=+0.151789772 container init d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 08:55:43 compute-0 podman[378930]: 2025-11-25 08:55:43.32392519 +0000 UTC m=+0.159037150 container start d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:55:43 compute-0 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[378944]: [NOTICE]   (378949) : New worker (378951) forked
Nov 25 08:55:43 compute-0 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[378944]: [NOTICE]   (378949) : Loading success.
Nov 25 08:55:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 25 08:55:44 compute-0 nova_compute[253538]: 2025-11-25 08:55:44.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:55:44 compute-0 nova_compute[253538]: 2025-11-25 08:55:44.557 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:55:45 compute-0 nova_compute[253538]: 2025-11-25 08:55:45.005 253542 DEBUG nova.compute.manager [req-ac6dc1c7-8597-4e11-b305-0a5fd684b57b req-ef03b73f-2606-4620-a5d4-585022dc3303 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Received event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:45 compute-0 nova_compute[253538]: 2025-11-25 08:55:45.006 253542 DEBUG oslo_concurrency.lockutils [req-ac6dc1c7-8597-4e11-b305-0a5fd684b57b req-ef03b73f-2606-4620-a5d4-585022dc3303 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f028149d-de9a-49c3-8805-49336474a101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:45 compute-0 nova_compute[253538]: 2025-11-25 08:55:45.006 253542 DEBUG oslo_concurrency.lockutils [req-ac6dc1c7-8597-4e11-b305-0a5fd684b57b req-ef03b73f-2606-4620-a5d4-585022dc3303 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:45 compute-0 nova_compute[253538]: 2025-11-25 08:55:45.007 253542 DEBUG oslo_concurrency.lockutils [req-ac6dc1c7-8597-4e11-b305-0a5fd684b57b req-ef03b73f-2606-4620-a5d4-585022dc3303 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:45 compute-0 nova_compute[253538]: 2025-11-25 08:55:45.007 253542 DEBUG nova.compute.manager [req-ac6dc1c7-8597-4e11-b305-0a5fd684b57b req-ef03b73f-2606-4620-a5d4-585022dc3303 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] No waiting events found dispatching network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:55:45 compute-0 nova_compute[253538]: 2025-11-25 08:55:45.008 253542 WARNING nova.compute.manager [req-ac6dc1c7-8597-4e11-b305-0a5fd684b57b req-ef03b73f-2606-4620-a5d4-585022dc3303 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Received unexpected event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 for instance with vm_state active and task_state None.
Nov 25 08:55:45 compute-0 ceph-mon[75015]: pgmap v2248: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 25 08:55:45 compute-0 nova_compute[253538]: 2025-11-25 08:55:45.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:55:45 compute-0 nova_compute[253538]: 2025-11-25 08:55:45.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:45 compute-0 nova_compute[253538]: 2025-11-25 08:55:45.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:45 compute-0 nova_compute[253538]: 2025-11-25 08:55:45.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:45 compute-0 nova_compute[253538]: 2025-11-25 08:55:45.580 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:55:45 compute-0 nova_compute[253538]: 2025-11-25 08:55:45.580 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:55:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 25 08:55:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:55:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2456873636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:55:46 compute-0 nova_compute[253538]: 2025-11-25 08:55:46.029 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:55:46 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2456873636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:55:46 compute-0 nova_compute[253538]: 2025-11-25 08:55:46.092 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:55:46 compute-0 nova_compute[253538]: 2025-11-25 08:55:46.093 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:55:46 compute-0 nova_compute[253538]: 2025-11-25 08:55:46.230 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060931.2297456, 7e82fa8c-6663-439c-833c-2b28f22282a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:55:46 compute-0 nova_compute[253538]: 2025-11-25 08:55:46.231 253542 INFO nova.compute.manager [-] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] VM Stopped (Lifecycle Event)
Nov 25 08:55:46 compute-0 nova_compute[253538]: 2025-11-25 08:55:46.247 253542 DEBUG nova.compute.manager [None req-ac113287-e704-499c-b72b-1cfe9e33ae74 - - - - - -] [instance: 7e82fa8c-6663-439c-833c-2b28f22282a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:55:46 compute-0 nova_compute[253538]: 2025-11-25 08:55:46.298 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:55:46 compute-0 nova_compute[253538]: 2025-11-25 08:55:46.300 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3579MB free_disk=59.967384338378906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:55:46 compute-0 nova_compute[253538]: 2025-11-25 08:55:46.300 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:46 compute-0 nova_compute[253538]: 2025-11-25 08:55:46.300 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:46 compute-0 nova_compute[253538]: 2025-11-25 08:55:46.354 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:46 compute-0 nova_compute[253538]: 2025-11-25 08:55:46.547 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance f028149d-de9a-49c3-8805-49336474a101 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:55:46 compute-0 nova_compute[253538]: 2025-11-25 08:55:46.548 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:55:46 compute-0 nova_compute[253538]: 2025-11-25 08:55:46.548 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:55:46 compute-0 nova_compute[253538]: 2025-11-25 08:55:46.718 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:55:47 compute-0 ceph-mon[75015]: pgmap v2249: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 25 08:55:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:55:47 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071085532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.170 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.177 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.199 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.231 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.232 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:47 compute-0 ovn_controller[152859]: 2025-11-25T08:55:47Z|01231|binding|INFO|Releasing lport 38363726-6a82-410a-a283-1a7b285deea5 from this chassis (sb_readonly=0)
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.459 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:47 compute-0 NetworkManager[48915]: <info>  [1764060947.4605] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Nov 25 08:55:47 compute-0 NetworkManager[48915]: <info>  [1764060947.4620] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/499)
Nov 25 08:55:47 compute-0 ovn_controller[152859]: 2025-11-25T08:55:47Z|01232|binding|INFO|Releasing lport 38363726-6a82-410a-a283-1a7b285deea5 from this chassis (sb_readonly=0)
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.488 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.493 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.884 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:47 compute-0 podman[379006]: 2025-11-25 08:55:47.895288546 +0000 UTC m=+0.136248410 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.925 253542 DEBUG nova.compute.manager [req-e5fad395-e797-43bd-89f7-229cfada0e2d req-1bb0e161-8aed-42bb-869f-d2cb148c91cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Received event network-changed-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.926 253542 DEBUG nova.compute.manager [req-e5fad395-e797-43bd-89f7-229cfada0e2d req-1bb0e161-8aed-42bb-869f-d2cb148c91cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Refreshing instance network info cache due to event network-changed-3df2cc50-c6c1-476a-a12a-0d02fae91559. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.926 253542 DEBUG oslo_concurrency.lockutils [req-e5fad395-e797-43bd-89f7-229cfada0e2d req-1bb0e161-8aed-42bb-869f-d2cb148c91cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.926 253542 DEBUG oslo_concurrency.lockutils [req-e5fad395-e797-43bd-89f7-229cfada0e2d req-1bb0e161-8aed-42bb-869f-d2cb148c91cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:55:47 compute-0 nova_compute[253538]: 2025-11-25 08:55:47.926 253542 DEBUG nova.network.neutron [req-e5fad395-e797-43bd-89f7-229cfada0e2d req-1bb0e161-8aed-42bb-869f-d2cb148c91cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Refreshing network info cache for port 3df2cc50-c6c1-476a-a12a-0d02fae91559 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:55:48 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4071085532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.136 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "f028149d-de9a-49c3-8805-49336474a101" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.137 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.138 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "f028149d-de9a-49c3-8805-49336474a101-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.138 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.139 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.141 253542 INFO nova.compute.manager [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Terminating instance
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.144 253542 DEBUG nova.compute.manager [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:55:48 compute-0 kernel: tap3df2cc50-c6 (unregistering): left promiscuous mode
Nov 25 08:55:48 compute-0 NetworkManager[48915]: <info>  [1764060948.1856] device (tap3df2cc50-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:48 compute-0 ovn_controller[152859]: 2025-11-25T08:55:48Z|01233|binding|INFO|Releasing lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 from this chassis (sb_readonly=0)
Nov 25 08:55:48 compute-0 ovn_controller[152859]: 2025-11-25T08:55:48Z|01234|binding|INFO|Setting lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 down in Southbound
Nov 25 08:55:48 compute-0 ovn_controller[152859]: 2025-11-25T08:55:48Z|01235|binding|INFO|Removing iface tap3df2cc50-c6 ovn-installed in OVS
Nov 25 08:55:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.205 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:4e:c2 10.100.0.14'], port_security=['fa:16:3e:54:4e:c2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1598437000', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f028149d-de9a-49c3-8805-49336474a101', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05073ace-d35c-48d1-9399-5c8964c484d2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1598437000', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd480e0b0-4ff3-496c-bb44-a333ae5d1ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d00a1a17-1a1d-406f-9e8c-c8e478c002e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3df2cc50-c6c1-476a-a12a-0d02fae91559) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:55:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.208 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3df2cc50-c6c1-476a-a12a-0d02fae91559 in datapath 05073ace-d35c-48d1-9399-5c8964c484d2 unbound from our chassis
Nov 25 08:55:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.210 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 05073ace-d35c-48d1-9399-5c8964c484d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:55:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.212 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fa608817-01b9-48d3-81cb-97cf12dc7edb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.212 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 namespace which is not needed anymore
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.217 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:48 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000076.scope: Deactivated successfully.
Nov 25 08:55:48 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000076.scope: Consumed 5.724s CPU time.
Nov 25 08:55:48 compute-0 systemd-machined[215790]: Machine qemu-148-instance-00000076 terminated.
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.372 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:48 compute-0 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[378944]: [NOTICE]   (378949) : haproxy version is 2.8.14-c23fe91
Nov 25 08:55:48 compute-0 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[378944]: [NOTICE]   (378949) : path to executable is /usr/sbin/haproxy
Nov 25 08:55:48 compute-0 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[378944]: [WARNING]  (378949) : Exiting Master process...
Nov 25 08:55:48 compute-0 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[378944]: [ALERT]    (378949) : Current worker (378951) exited with code 143 (Terminated)
Nov 25 08:55:48 compute-0 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[378944]: [WARNING]  (378949) : All workers exited. Exiting... (0)
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.380 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:48 compute-0 systemd[1]: libpod-d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e.scope: Deactivated successfully.
Nov 25 08:55:48 compute-0 podman[379054]: 2025-11-25 08:55:48.388207473 +0000 UTC m=+0.066911583 container died d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.391 253542 INFO nova.virt.libvirt.driver [-] [instance: f028149d-de9a-49c3-8805-49336474a101] Instance destroyed successfully.
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.391 253542 DEBUG nova.objects.instance [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid f028149d-de9a-49c3-8805-49336474a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.412 253542 DEBUG nova.virt.libvirt.vif [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:55:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-443459317',display_name='tempest-TestNetworkBasicOps-server-443459317',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-443459317',id=118,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHwIQI4smh03agRxaCUyBdpTkZuGWd18KsbSAlDGRTalp6+OaIXJV7ErpMU5iOukAfWckmlqdBb7cA7hp/AAowmL6erSk1AV13d1Hs/ktP4LutA1fVkErwMJ9ccrFBLyEA==',key_name='tempest-TestNetworkBasicOps-292858725',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:55:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-0xuf3mxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:55:43Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=f028149d-de9a-49c3-8805-49336474a101,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.413 253542 DEBUG nova.network.os_vif_util [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.416 253542 DEBUG nova.network.os_vif_util [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.416 253542 DEBUG os_vif [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.418 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.419 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3df2cc50-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.420 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.424 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:55:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e-userdata-shm.mount: Deactivated successfully.
Nov 25 08:55:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-83c502fd0fc85e055f1ddf2aec0772fba519495d438e9ce6a804788847f7e5e0-merged.mount: Deactivated successfully.
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.428 253542 INFO os_vif [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6')
Nov 25 08:55:48 compute-0 podman[379054]: 2025-11-25 08:55:48.436810735 +0000 UTC m=+0.115514855 container cleanup d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:55:48 compute-0 systemd[1]: libpod-conmon-d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e.scope: Deactivated successfully.
Nov 25 08:55:48 compute-0 podman[379106]: 2025-11-25 08:55:48.512986689 +0000 UTC m=+0.048302086 container remove d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 08:55:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.521 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6b91a19f-efc5-43b4-89cf-c3c69c34f25b]: (4, ('Tue Nov 25 08:55:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 (d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e)\nd14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e\nTue Nov 25 08:55:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 (d14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e)\nd14bc14d7e6c47297bc507502f09a9fb5fe7ad742154a140e7a05ba54f8ca10e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.523 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[67ecb0da-dbe6-42ef-8354-92b8baceffd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.524 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05073ace-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.526 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:48 compute-0 kernel: tap05073ace-d0: left promiscuous mode
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.538 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.541 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e0bb01c1-6d8b-4c26-8fa4-9b97c8513bb9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.557 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8f5d47-10f6-40b5-8e9e-1a6840bb1abd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.559 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[94c93587-9cac-437b-962a-a362712f09aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.584 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[287fae9b-36c0-4184-bdb2-f26ba8afade1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 629713, 'reachable_time': 27686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379125, 'error': None, 'target': 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.589 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:55:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:48.589 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[73a48f67-e28a-41e8-8bc0-998706aef3c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:55:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d05073ace\x2dd35c\x2d48d1\x2d9399\x2d5c8964c484d2.mount: Deactivated successfully.
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.737 253542 INFO nova.virt.libvirt.driver [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Deleting instance files /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101_del
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.738 253542 INFO nova.virt.libvirt.driver [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Deletion of /var/lib/nova/instances/f028149d-de9a-49c3-8805-49336474a101_del complete
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.790 253542 INFO nova.compute.manager [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Took 0.65 seconds to destroy the instance on the hypervisor.
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.791 253542 DEBUG oslo.service.loopingcall [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.792 253542 DEBUG nova.compute.manager [-] [instance: f028149d-de9a-49c3-8805-49336474a101] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:55:48 compute-0 nova_compute[253538]: 2025-11-25 08:55:48.792 253542 DEBUG nova.network.neutron [-] [instance: f028149d-de9a-49c3-8805-49336474a101] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:55:49 compute-0 ceph-mon[75015]: pgmap v2250: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Nov 25 08:55:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:49.241 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:55:49 compute-0 nova_compute[253538]: 2025-11-25 08:55:49.242 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:49.243 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:55:49 compute-0 nova_compute[253538]: 2025-11-25 08:55:49.277 253542 DEBUG nova.network.neutron [req-e5fad395-e797-43bd-89f7-229cfada0e2d req-1bb0e161-8aed-42bb-869f-d2cb148c91cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Updated VIF entry in instance network info cache for port 3df2cc50-c6c1-476a-a12a-0d02fae91559. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:55:49 compute-0 nova_compute[253538]: 2025-11-25 08:55:49.278 253542 DEBUG nova.network.neutron [req-e5fad395-e797-43bd-89f7-229cfada0e2d req-1bb0e161-8aed-42bb-869f-d2cb148c91cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Updating instance_info_cache with network_info: [{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:55:49 compute-0 nova_compute[253538]: 2025-11-25 08:55:49.299 253542 DEBUG oslo_concurrency.lockutils [req-e5fad395-e797-43bd-89f7-229cfada0e2d req-1bb0e161-8aed-42bb-869f-d2cb148c91cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f028149d-de9a-49c3-8805-49336474a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:55:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2251: 321 pgs: 321 active+clean; 100 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 25 08:55:50 compute-0 nova_compute[253538]: 2025-11-25 08:55:50.281 253542 DEBUG nova.compute.manager [req-979d496d-b8f7-40f9-9242-09458906e405 req-63f3ab21-5ff9-4e66-b3c8-8a7a763f12f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Received event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:55:50 compute-0 nova_compute[253538]: 2025-11-25 08:55:50.281 253542 DEBUG oslo_concurrency.lockutils [req-979d496d-b8f7-40f9-9242-09458906e405 req-63f3ab21-5ff9-4e66-b3c8-8a7a763f12f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f028149d-de9a-49c3-8805-49336474a101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:50 compute-0 nova_compute[253538]: 2025-11-25 08:55:50.281 253542 DEBUG oslo_concurrency.lockutils [req-979d496d-b8f7-40f9-9242-09458906e405 req-63f3ab21-5ff9-4e66-b3c8-8a7a763f12f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:50 compute-0 nova_compute[253538]: 2025-11-25 08:55:50.282 253542 DEBUG oslo_concurrency.lockutils [req-979d496d-b8f7-40f9-9242-09458906e405 req-63f3ab21-5ff9-4e66-b3c8-8a7a763f12f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:50 compute-0 nova_compute[253538]: 2025-11-25 08:55:50.282 253542 DEBUG nova.compute.manager [req-979d496d-b8f7-40f9-9242-09458906e405 req-63f3ab21-5ff9-4e66-b3c8-8a7a763f12f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] No waiting events found dispatching network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:55:50 compute-0 nova_compute[253538]: 2025-11-25 08:55:50.282 253542 WARNING nova.compute.manager [req-979d496d-b8f7-40f9-9242-09458906e405 req-63f3ab21-5ff9-4e66-b3c8-8a7a763f12f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f028149d-de9a-49c3-8805-49336474a101] Received unexpected event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 for instance with vm_state active and task_state deleting.
Nov 25 08:55:51 compute-0 nova_compute[253538]: 2025-11-25 08:55:51.068 253542 DEBUG nova.network.neutron [-] [instance: f028149d-de9a-49c3-8805-49336474a101] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:55:51 compute-0 nova_compute[253538]: 2025-11-25 08:55:51.085 253542 INFO nova.compute.manager [-] [instance: f028149d-de9a-49c3-8805-49336474a101] Took 2.29 seconds to deallocate network for instance.
Nov 25 08:55:51 compute-0 ceph-mon[75015]: pgmap v2251: 321 pgs: 321 active+clean; 100 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 25 08:55:51 compute-0 nova_compute[253538]: 2025-11-25 08:55:51.135 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:55:51 compute-0 nova_compute[253538]: 2025-11-25 08:55:51.136 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:55:51 compute-0 nova_compute[253538]: 2025-11-25 08:55:51.187 253542 DEBUG oslo_concurrency.processutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:55:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:55:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784495931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:55:51 compute-0 nova_compute[253538]: 2025-11-25 08:55:51.635 253542 DEBUG oslo_concurrency.processutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:55:51 compute-0 nova_compute[253538]: 2025-11-25 08:55:51.642 253542 DEBUG nova.compute.provider_tree [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:55:51 compute-0 nova_compute[253538]: 2025-11-25 08:55:51.660 253542 DEBUG nova.scheduler.client.report [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:55:51 compute-0 nova_compute[253538]: 2025-11-25 08:55:51.684 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:51 compute-0 nova_compute[253538]: 2025-11-25 08:55:51.712 253542 INFO nova.scheduler.client.report [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance f028149d-de9a-49c3-8805-49336474a101
Nov 25 08:55:51 compute-0 nova_compute[253538]: 2025-11-25 08:55:51.791 253542 DEBUG oslo_concurrency.lockutils [None req-88b52273-158b-4235-b219-e18391a39d62 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "f028149d-de9a-49c3-8805-49336474a101" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:55:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 88 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 KiB/s wr, 123 op/s
Nov 25 08:55:51 compute-0 nova_compute[253538]: 2025-11-25 08:55:51.933 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:52 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2784495931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:55:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:55:52 compute-0 nova_compute[253538]: 2025-11-25 08:55:52.889 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:53 compute-0 ceph-mon[75015]: pgmap v2252: 321 pgs: 321 active+clean; 88 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 KiB/s wr, 123 op/s
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:55:53
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'vms', 'backups', '.mgr']
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:55:53 compute-0 nova_compute[253538]: 2025-11-25 08:55:53.424 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2253: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:55:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:55:55 compute-0 ceph-mon[75015]: pgmap v2253: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Nov 25 08:55:55 compute-0 nova_compute[253538]: 2025-11-25 08:55:55.568 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:55:55 compute-0 nova_compute[253538]: 2025-11-25 08:55:55.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 08:55:55 compute-0 nova_compute[253538]: 2025-11-25 08:55:55.586 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 08:55:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 96 op/s
Nov 25 08:55:57 compute-0 ceph-mon[75015]: pgmap v2254: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 96 op/s
Nov 25 08:55:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:55:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2255: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 838 KiB/s rd, 1.4 KiB/s wr, 56 op/s
Nov 25 08:55:57 compute-0 nova_compute[253538]: 2025-11-25 08:55:57.891 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:58 compute-0 nova_compute[253538]: 2025-11-25 08:55:58.191 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:58 compute-0 nova_compute[253538]: 2025-11-25 08:55:58.318 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:58 compute-0 nova_compute[253538]: 2025-11-25 08:55:58.428 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:55:59 compute-0 ceph-mon[75015]: pgmap v2255: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 838 KiB/s rd, 1.4 KiB/s wr, 56 op/s
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.149298) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060959149346, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2061, "num_deletes": 251, "total_data_size": 3348479, "memory_usage": 3394752, "flush_reason": "Manual Compaction"}
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060959162994, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3281579, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45134, "largest_seqno": 47194, "table_properties": {"data_size": 3272291, "index_size": 5846, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19098, "raw_average_key_size": 20, "raw_value_size": 3253739, "raw_average_value_size": 3439, "num_data_blocks": 259, "num_entries": 946, "num_filter_entries": 946, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060739, "oldest_key_time": 1764060739, "file_creation_time": 1764060959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 13713 microseconds, and 6759 cpu microseconds.
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.163032) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3281579 bytes OK
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.163050) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.164299) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.164327) EVENT_LOG_v1 {"time_micros": 1764060959164321, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.164344) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3339825, prev total WAL file size 3339825, number of live WAL files 2.
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.165360) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3204KB)], [104(8724KB)]
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060959165397, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 12215031, "oldest_snapshot_seqno": -1}
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7046 keys, 10541886 bytes, temperature: kUnknown
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060959227009, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10541886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10493687, "index_size": 29463, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 182013, "raw_average_key_size": 25, "raw_value_size": 10366448, "raw_average_value_size": 1471, "num_data_blocks": 1162, "num_entries": 7046, "num_filter_entries": 7046, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.228433) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10541886 bytes
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.230125) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.6 rd, 167.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 7560, records dropped: 514 output_compression: NoCompression
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.230157) EVENT_LOG_v1 {"time_micros": 1764060959230144, "job": 62, "event": "compaction_finished", "compaction_time_micros": 62785, "compaction_time_cpu_micros": 34971, "output_level": 6, "num_output_files": 1, "total_output_size": 10541886, "num_input_records": 7560, "num_output_records": 7046, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060959230966, "job": 62, "event": "table_file_deletion", "file_number": 106}
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060959233259, "job": 62, "event": "table_file_deletion", "file_number": 104}
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.165219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.233293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.233297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.233300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.233302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:55:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:55:59.233307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:55:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:55:59.246 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:55:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 1.2 KiB/s wr, 38 op/s
Nov 25 08:56:00 compute-0 nova_compute[253538]: 2025-11-25 08:56:00.872 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:00 compute-0 nova_compute[253538]: 2025-11-25 08:56:00.873 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:00 compute-0 nova_compute[253538]: 2025-11-25 08:56:00.890 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:56:00 compute-0 nova_compute[253538]: 2025-11-25 08:56:00.979 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:00 compute-0 nova_compute[253538]: 2025-11-25 08:56:00.980 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:00 compute-0 nova_compute[253538]: 2025-11-25 08:56:00.986 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:56:00 compute-0 nova_compute[253538]: 2025-11-25 08:56:00.986 253542 INFO nova.compute.claims [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.076 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:01 compute-0 anacron[157083]: Job `cron.monthly' started
Nov 25 08:56:01 compute-0 ceph-mon[75015]: pgmap v2256: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 1.2 KiB/s wr, 38 op/s
Nov 25 08:56:01 compute-0 anacron[157083]: Job `cron.monthly' terminated
Nov 25 08:56:01 compute-0 anacron[157083]: Normal exit (3 jobs run)
Nov 25 08:56:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:56:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2900014663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.546 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.552 253542 DEBUG nova.compute.provider_tree [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.568 253542 DEBUG nova.scheduler.client.report [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.592 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.593 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.634 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.635 253542 DEBUG nova.network.neutron [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.649 253542 INFO nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.668 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.740 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.741 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.742 253542 INFO nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Creating image(s)
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.761 253542 DEBUG nova.storage.rbd_utils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.781 253542 DEBUG nova.storage.rbd_utils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.800 253542 DEBUG nova.storage.rbd_utils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.804 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.882 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.883 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.884 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.885 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.906 253542 DEBUG nova.storage.rbd_utils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:01 compute-0 nova_compute[253538]: 2025-11-25 08:56:01.909 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:02 compute-0 nova_compute[253538]: 2025-11-25 08:56:02.080 253542 DEBUG nova.policy [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:56:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2900014663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:56:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:56:02 compute-0 nova_compute[253538]: 2025-11-25 08:56:02.559 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:02 compute-0 nova_compute[253538]: 2025-11-25 08:56:02.638 253542 DEBUG nova.storage.rbd_utils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:56:02 compute-0 nova_compute[253538]: 2025-11-25 08:56:02.920 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:02 compute-0 nova_compute[253538]: 2025-11-25 08:56:02.926 253542 DEBUG nova.objects.instance [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 4cb9b212-92f6-4b10-ac69-ba251266bfd2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:56:02 compute-0 nova_compute[253538]: 2025-11-25 08:56:02.941 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:56:02 compute-0 nova_compute[253538]: 2025-11-25 08:56:02.941 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Ensure instance console log exists: /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:56:02 compute-0 nova_compute[253538]: 2025-11-25 08:56:02.942 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:02 compute-0 nova_compute[253538]: 2025-11-25 08:56:02.942 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:02 compute-0 nova_compute[253538]: 2025-11-25 08:56:02.943 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:03 compute-0 ceph-mon[75015]: pgmap v2257: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Nov 25 08:56:03 compute-0 nova_compute[253538]: 2025-11-25 08:56:03.389 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060948.3882115, f028149d-de9a-49c3-8805-49336474a101 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:56:03 compute-0 nova_compute[253538]: 2025-11-25 08:56:03.389 253542 INFO nova.compute.manager [-] [instance: f028149d-de9a-49c3-8805-49336474a101] VM Stopped (Lifecycle Event)
Nov 25 08:56:03 compute-0 nova_compute[253538]: 2025-11-25 08:56:03.407 253542 DEBUG nova.compute.manager [None req-8be8f295-0aab-420e-9e7b-3ec2d53d9182 - - - - - -] [instance: f028149d-de9a-49c3-8805-49336474a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:56:03 compute-0 nova_compute[253538]: 2025-11-25 08:56:03.430 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 95 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 53 KiB/s wr, 2 op/s
Nov 25 08:56:03 compute-0 nova_compute[253538]: 2025-11-25 08:56:03.954 253542 DEBUG nova.network.neutron [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Successfully updated port: 3df2cc50-c6c1-476a-a12a-0d02fae91559 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:56:03 compute-0 nova_compute[253538]: 2025-11-25 08:56:03.967 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-4cb9b212-92f6-4b10-ac69-ba251266bfd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:56:03 compute-0 nova_compute[253538]: 2025-11-25 08:56:03.967 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-4cb9b212-92f6-4b10-ac69-ba251266bfd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:56:03 compute-0 nova_compute[253538]: 2025-11-25 08:56:03.968 253542 DEBUG nova.network.neutron [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:56:04 compute-0 nova_compute[253538]: 2025-11-25 08:56:04.124 253542 DEBUG nova.compute.manager [req-e530de18-b282-4f58-82df-1c24602f92a6 req-a6695f05-63de-49cc-a177-c4dcd1d2c6c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Received event network-changed-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:56:04 compute-0 nova_compute[253538]: 2025-11-25 08:56:04.126 253542 DEBUG nova.compute.manager [req-e530de18-b282-4f58-82df-1c24602f92a6 req-a6695f05-63de-49cc-a177-c4dcd1d2c6c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Refreshing instance network info cache due to event network-changed-3df2cc50-c6c1-476a-a12a-0d02fae91559. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:56:04 compute-0 nova_compute[253538]: 2025-11-25 08:56:04.127 253542 DEBUG oslo_concurrency.lockutils [req-e530de18-b282-4f58-82df-1c24602f92a6 req-a6695f05-63de-49cc-a177-c4dcd1d2c6c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-4cb9b212-92f6-4b10-ac69-ba251266bfd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.0174513251286059e-05 of space, bias 1.0, pg target 0.0030523539753858175 quantized to 32 (current 32)
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:56:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:56:04 compute-0 nova_compute[253538]: 2025-11-25 08:56:04.300 253542 DEBUG nova.network.neutron [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:56:04 compute-0 sshd-session[379342]: Invalid user hduser from 193.32.162.151 port 58170
Nov 25 08:56:04 compute-0 sshd-session[379342]: Connection closed by invalid user hduser 193.32.162.151 port 58170 [preauth]
Nov 25 08:56:05 compute-0 ceph-mon[75015]: pgmap v2258: 321 pgs: 321 active+clean; 95 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 53 KiB/s wr, 2 op/s
Nov 25 08:56:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 111 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 501 KiB/s wr, 25 op/s
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.130 253542 DEBUG nova.network.neutron [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Updating instance_info_cache with network_info: [{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.151 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-4cb9b212-92f6-4b10-ac69-ba251266bfd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.151 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Instance network_info: |[{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.152 253542 DEBUG oslo_concurrency.lockutils [req-e530de18-b282-4f58-82df-1c24602f92a6 req-a6695f05-63de-49cc-a177-c4dcd1d2c6c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-4cb9b212-92f6-4b10-ac69-ba251266bfd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.152 253542 DEBUG nova.network.neutron [req-e530de18-b282-4f58-82df-1c24602f92a6 req-a6695f05-63de-49cc-a177-c4dcd1d2c6c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Refreshing network info cache for port 3df2cc50-c6c1-476a-a12a-0d02fae91559 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.155 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Start _get_guest_xml network_info=[{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.161 253542 WARNING nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.171 253542 DEBUG nova.virt.libvirt.host [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.171 253542 DEBUG nova.virt.libvirt.host [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.188 253542 DEBUG nova.virt.libvirt.host [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.188 253542 DEBUG nova.virt.libvirt.host [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.189 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.189 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.190 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.190 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.190 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.190 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.190 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.191 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.191 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.191 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.191 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.191 253542 DEBUG nova.virt.hardware [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.194 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:06 compute-0 ceph-mon[75015]: pgmap v2259: 321 pgs: 321 active+clean; 111 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 501 KiB/s wr, 25 op/s
Nov 25 08:56:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:56:06 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2673339189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.664 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.689 253542 DEBUG nova.storage.rbd_utils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:06 compute-0 nova_compute[253538]: 2025-11-25 08:56:06.695 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:56:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1224367298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.158 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.159 253542 DEBUG nova.virt.libvirt.vif [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:55:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1751152359',display_name='tempest-TestNetworkBasicOps-server-1751152359',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1751152359',id=119,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJubiPmrXfIUrmgC1xrGBFg1DApHg3WIsLuifdfOtIZqe386aSym92+Q91jDIGAFqlPVRU4cUyjEEd/Wo80iuNqs/Lk8M1iBken5yj9tIQXPukgxgH/HSGQZNiNB4Q+Uyg==',key_name='tempest-TestNetworkBasicOps-141024212',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-i7jk6l9x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:56:01Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=4cb9b212-92f6-4b10-ac69-ba251266bfd2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.160 253542 DEBUG nova.network.os_vif_util [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.161 253542 DEBUG nova.network.os_vif_util [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.162 253542 DEBUG nova.objects.instance [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4cb9b212-92f6-4b10-ac69-ba251266bfd2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.177 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:56:07 compute-0 nova_compute[253538]:   <uuid>4cb9b212-92f6-4b10-ac69-ba251266bfd2</uuid>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   <name>instance-00000077</name>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkBasicOps-server-1751152359</nova:name>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:56:06</nova:creationTime>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:56:07 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:56:07 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:56:07 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:56:07 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:56:07 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:56:07 compute-0 nova_compute[253538]:         <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:56:07 compute-0 nova_compute[253538]:         <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:56:07 compute-0 nova_compute[253538]:         <nova:port uuid="3df2cc50-c6c1-476a-a12a-0d02fae91559">
Nov 25 08:56:07 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <system>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <entry name="serial">4cb9b212-92f6-4b10-ac69-ba251266bfd2</entry>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <entry name="uuid">4cb9b212-92f6-4b10-ac69-ba251266bfd2</entry>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     </system>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   <os>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   </os>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   <features>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   </features>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk">
Nov 25 08:56:07 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       </source>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:56:07 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk.config">
Nov 25 08:56:07 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       </source>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:56:07 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:54:4e:c2"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <target dev="tap3df2cc50-c6"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2/console.log" append="off"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <video>
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     </video>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:56:07 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:56:07 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:56:07 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:56:07 compute-0 nova_compute[253538]: </domain>
Nov 25 08:56:07 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.180 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Preparing to wait for external event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.180 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.181 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.181 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.183 253542 DEBUG nova.virt.libvirt.vif [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:55:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1751152359',display_name='tempest-TestNetworkBasicOps-server-1751152359',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1751152359',id=119,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJubiPmrXfIUrmgC1xrGBFg1DApHg3WIsLuifdfOtIZqe386aSym92+Q91jDIGAFqlPVRU4cUyjEEd/Wo80iuNqs/Lk8M1iBken5yj9tIQXPukgxgH/HSGQZNiNB4Q+Uyg==',key_name='tempest-TestNetworkBasicOps-141024212',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-i7jk6l9x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:56:01Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=4cb9b212-92f6-4b10-ac69-ba251266bfd2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.184 253542 DEBUG nova.network.os_vif_util [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.185 253542 DEBUG nova.network.os_vif_util [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.185 253542 DEBUG os_vif [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.186 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.189 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.190 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.194 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.194 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3df2cc50-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.195 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3df2cc50-c6, col_values=(('external_ids', {'iface-id': '3df2cc50-c6c1-476a-a12a-0d02fae91559', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:4e:c2', 'vm-uuid': '4cb9b212-92f6-4b10-ac69-ba251266bfd2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:07 compute-0 NetworkManager[48915]: <info>  [1764060967.1978] manager: (tap3df2cc50-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/500)
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.199 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.205 253542 INFO os_vif [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6')
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.271 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.272 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.272 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:54:4e:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.273 253542 INFO nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Using config drive
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.298 253542 DEBUG nova.storage.rbd_utils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2673339189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:56:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1224367298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.415 253542 DEBUG nova.network.neutron [req-e530de18-b282-4f58-82df-1c24602f92a6 req-a6695f05-63de-49cc-a177-c4dcd1d2c6c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Updated VIF entry in instance network info cache for port 3df2cc50-c6c1-476a-a12a-0d02fae91559. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.415 253542 DEBUG nova.network.neutron [req-e530de18-b282-4f58-82df-1c24602f92a6 req-a6695f05-63de-49cc-a177-c4dcd1d2c6c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Updating instance_info_cache with network_info: [{"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.426 253542 DEBUG oslo_concurrency.lockutils [req-e530de18-b282-4f58-82df-1c24602f92a6 req-a6695f05-63de-49cc-a177-c4dcd1d2c6c0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4cb9b212-92f6-4b10-ac69-ba251266bfd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:56:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.712 253542 INFO nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Creating config drive at /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2/disk.config
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.717 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqe_bnj2c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.867 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqe_bnj2c" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2260: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.894 253542 DEBUG nova.storage.rbd_utils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.898 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2/disk.config 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:07 compute-0 nova_compute[253538]: 2025-11-25 08:56:07.933 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.055 253542 DEBUG oslo_concurrency.processutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2/disk.config 4cb9b212-92f6-4b10-ac69-ba251266bfd2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.056 253542 INFO nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Deleting local config drive /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2/disk.config because it was imported into RBD.
Nov 25 08:56:08 compute-0 kernel: tap3df2cc50-c6: entered promiscuous mode
Nov 25 08:56:08 compute-0 NetworkManager[48915]: <info>  [1764060968.1214] manager: (tap3df2cc50-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/501)
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.158 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:08 compute-0 ovn_controller[152859]: 2025-11-25T08:56:08Z|01236|binding|INFO|Claiming lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 for this chassis.
Nov 25 08:56:08 compute-0 ovn_controller[152859]: 2025-11-25T08:56:08Z|01237|binding|INFO|3df2cc50-c6c1-476a-a12a-0d02fae91559: Claiming fa:16:3e:54:4e:c2 10.100.0.14
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:08 compute-0 NetworkManager[48915]: <info>  [1764060968.1663] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/502)
Nov 25 08:56:08 compute-0 NetworkManager[48915]: <info>  [1764060968.1675] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.169 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:4e:c2 10.100.0.14'], port_security=['fa:16:3e:54:4e:c2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1598437000', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4cb9b212-92f6-4b10-ac69-ba251266bfd2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05073ace-d35c-48d1-9399-5c8964c484d2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1598437000', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd480e0b0-4ff3-496c-bb44-a333ae5d1ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d00a1a17-1a1d-406f-9e8c-c8e478c002e9, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3df2cc50-c6c1-476a-a12a-0d02fae91559) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.171 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3df2cc50-c6c1-476a-a12a-0d02fae91559 in datapath 05073ace-d35c-48d1-9399-5c8964c484d2 bound to our chassis
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.172 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 05073ace-d35c-48d1-9399-5c8964c484d2
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.187 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c701c4-27a3-4e8c-a28c-5daf5dee3eea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.189 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap05073ace-d1 in ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:56:08 compute-0 systemd-udevd[379479]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.191 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap05073ace-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.191 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3b12e699-c6e5-4134-b656-ca3286aee7bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.193 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8b72aa7a-10bb-4f13-bd02-c19b5d0656d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:08 compute-0 systemd-machined[215790]: New machine qemu-149-instance-00000077.
Nov 25 08:56:08 compute-0 NetworkManager[48915]: <info>  [1764060968.2054] device (tap3df2cc50-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:56:08 compute-0 NetworkManager[48915]: <info>  [1764060968.2063] device (tap3df2cc50-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.210 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[aa54ce3c-b822-46b3-a956-228e15b9afeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:08 compute-0 systemd[1]: Started Virtual Machine qemu-149-instance-00000077.
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.244 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8a268769-cfe6-4d6b-bb47-1d5fc70994e5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.258 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.265 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:08 compute-0 ovn_controller[152859]: 2025-11-25T08:56:08Z|01238|binding|INFO|Setting lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 ovn-installed in OVS
Nov 25 08:56:08 compute-0 ovn_controller[152859]: 2025-11-25T08:56:08Z|01239|binding|INFO|Setting lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 up in Southbound
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.277 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.281 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba5ba61-d343-4ea4-8c93-bc69fe6f2a6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:08 compute-0 systemd-udevd[379483]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:56:08 compute-0 NetworkManager[48915]: <info>  [1764060968.2895] manager: (tap05073ace-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/504)
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.288 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dca4a725-69c2-4ff1-b203-31053948f797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.333 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8066a18e-b1f2-4253-a44a-249eaca23b7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.336 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7c2576ba-cd2b-4d32-bd22-01f897eddd36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:08 compute-0 NetworkManager[48915]: <info>  [1764060968.3630] device (tap05073ace-d0): carrier: link connected
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.370 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b94d5c-7554-4ca9-9773-2aa22fb6acba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:08 compute-0 ceph-mon[75015]: pgmap v2260: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.389 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[44388534-fc7c-42e4-a8ed-fd1cec6611bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap05073ace-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:25:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 358], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632296, 'reachable_time': 21388, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379512, 'error': None, 'target': 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.406 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[01ae4a3f-8f9d-4641-b076-fca3660eefed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee9:25c6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632296, 'tstamp': 632296}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379513, 'error': None, 'target': 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.426 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cf29180f-f134-4ce2-837c-69c9f6f1e4f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap05073ace-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:25:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 358], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632296, 'reachable_time': 21388, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379514, 'error': None, 'target': 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.454 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[70b1a560-8871-4ec1-8454-18fada95185d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.516 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a7d05e6-1fb5-4f4e-b5a2-af1e973cf222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.517 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05073ace-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.517 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.518 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05073ace-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.519 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:08 compute-0 NetworkManager[48915]: <info>  [1764060968.5199] manager: (tap05073ace-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/505)
Nov 25 08:56:08 compute-0 kernel: tap05073ace-d0: entered promiscuous mode
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.521 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.523 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap05073ace-d0, col_values=(('external_ids', {'iface-id': '38363726-6a82-410a-a283-1a7b285deea5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.524 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:08 compute-0 ovn_controller[152859]: 2025-11-25T08:56:08Z|01240|binding|INFO|Releasing lport 38363726-6a82-410a-a283-1a7b285deea5 from this chassis (sb_readonly=0)
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.525 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.527 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/05073ace-d35c-48d1-9399-5c8964c484d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/05073ace-d35c-48d1-9399-5c8964c484d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.527 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[61f02a7d-6f1e-4338-a6b7-dbc326b7cafd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.528 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-05073ace-d35c-48d1-9399-5c8964c484d2
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/05073ace-d35c-48d1-9399-5c8964c484d2.pid.haproxy
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 05073ace-d35c-48d1-9399-5c8964c484d2
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:56:08 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:08.529 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'env', 'PROCESS_TAG=haproxy-05073ace-d35c-48d1-9399-5c8964c484d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/05073ace-d35c-48d1-9399-5c8964c484d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.538 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.556 253542 DEBUG nova.compute.manager [req-eb871ff4-d365-4076-a860-df0210c8ba96 req-047d9037-b50a-4aec-bd4a-83be98bf1070 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Received event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.556 253542 DEBUG oslo_concurrency.lockutils [req-eb871ff4-d365-4076-a860-df0210c8ba96 req-047d9037-b50a-4aec-bd4a-83be98bf1070 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.557 253542 DEBUG oslo_concurrency.lockutils [req-eb871ff4-d365-4076-a860-df0210c8ba96 req-047d9037-b50a-4aec-bd4a-83be98bf1070 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.557 253542 DEBUG oslo_concurrency.lockutils [req-eb871ff4-d365-4076-a860-df0210c8ba96 req-047d9037-b50a-4aec-bd4a-83be98bf1070 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:08 compute-0 nova_compute[253538]: 2025-11-25 08:56:08.557 253542 DEBUG nova.compute.manager [req-eb871ff4-d365-4076-a860-df0210c8ba96 req-047d9037-b50a-4aec-bd4a-83be98bf1070 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Processing event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:56:08 compute-0 podman[379546]: 2025-11-25 08:56:08.925635244 +0000 UTC m=+0.054848815 container create f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 08:56:08 compute-0 systemd[1]: Started libpod-conmon-f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f.scope.
Nov 25 08:56:08 compute-0 podman[379546]: 2025-11-25 08:56:08.896988894 +0000 UTC m=+0.026202515 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:56:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:56:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e19a01a3f2d2ec0b2a11352993021e8379576d9a8252569db20d988f648bd5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:56:09 compute-0 podman[379546]: 2025-11-25 08:56:09.009668421 +0000 UTC m=+0.138882012 container init f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 08:56:09 compute-0 podman[379546]: 2025-11-25 08:56:09.014572284 +0000 UTC m=+0.143785855 container start f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:56:09 compute-0 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[379561]: [NOTICE]   (379565) : New worker (379567) forked
Nov 25 08:56:09 compute-0 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[379561]: [NOTICE]   (379565) : Loading success.
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.409 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060969.4086294, 4cb9b212-92f6-4b10-ac69-ba251266bfd2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.409 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] VM Started (Lifecycle Event)
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.411 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.415 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.418 253542 INFO nova.virt.libvirt.driver [-] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Instance spawned successfully.
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.418 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.434 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.438 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.450 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.450 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.451 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.451 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.451 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.452 253542 DEBUG nova.virt.libvirt.driver [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.473 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.473 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060969.4087436, 4cb9b212-92f6-4b10-ac69-ba251266bfd2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.474 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] VM Paused (Lifecycle Event)
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.497 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.500 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060969.4139462, 4cb9b212-92f6-4b10-ac69-ba251266bfd2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.501 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] VM Resumed (Lifecycle Event)
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.518 253542 INFO nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Took 7.78 seconds to spawn the instance on the hypervisor.
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.518 253542 DEBUG nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.520 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.525 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.554 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.586 253542 INFO nova.compute.manager [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Took 8.64 seconds to build instance.
Nov 25 08:56:09 compute-0 nova_compute[253538]: 2025-11-25 08:56:09.602 253542 DEBUG oslo_concurrency.lockutils [None req-6c92f596-e137-450a-bddc-fc7a7472dbe6 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 25 08:56:10 compute-0 nova_compute[253538]: 2025-11-25 08:56:10.637 253542 DEBUG nova.compute.manager [req-481f8935-5e0b-40e1-b8d6-5459bb6ef1da req-e32b1067-5d3c-473e-9941-a4a6c83cfea1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Received event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:56:10 compute-0 nova_compute[253538]: 2025-11-25 08:56:10.637 253542 DEBUG oslo_concurrency.lockutils [req-481f8935-5e0b-40e1-b8d6-5459bb6ef1da req-e32b1067-5d3c-473e-9941-a4a6c83cfea1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:10 compute-0 nova_compute[253538]: 2025-11-25 08:56:10.638 253542 DEBUG oslo_concurrency.lockutils [req-481f8935-5e0b-40e1-b8d6-5459bb6ef1da req-e32b1067-5d3c-473e-9941-a4a6c83cfea1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:10 compute-0 nova_compute[253538]: 2025-11-25 08:56:10.638 253542 DEBUG oslo_concurrency.lockutils [req-481f8935-5e0b-40e1-b8d6-5459bb6ef1da req-e32b1067-5d3c-473e-9941-a4a6c83cfea1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:10 compute-0 nova_compute[253538]: 2025-11-25 08:56:10.638 253542 DEBUG nova.compute.manager [req-481f8935-5e0b-40e1-b8d6-5459bb6ef1da req-e32b1067-5d3c-473e-9941-a4a6c83cfea1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] No waiting events found dispatching network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:56:10 compute-0 nova_compute[253538]: 2025-11-25 08:56:10.638 253542 WARNING nova.compute.manager [req-481f8935-5e0b-40e1-b8d6-5459bb6ef1da req-e32b1067-5d3c-473e-9941-a4a6c83cfea1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Received unexpected event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 for instance with vm_state active and task_state None.
Nov 25 08:56:10 compute-0 ceph-mon[75015]: pgmap v2261: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 25 08:56:11 compute-0 podman[379618]: 2025-11-25 08:56:11.844525022 +0000 UTC m=+0.087697029 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 08:56:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 25 08:56:12 compute-0 nova_compute[253538]: 2025-11-25 08:56:12.200 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:12 compute-0 nova_compute[253538]: 2025-11-25 08:56:12.313 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:56:12 compute-0 podman[379637]: 2025-11-25 08:56:12.818679926 +0000 UTC m=+0.068494144 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 08:56:12 compute-0 nova_compute[253538]: 2025-11-25 08:56:12.897 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:12 compute-0 nova_compute[253538]: 2025-11-25 08:56:12.898 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:12 compute-0 nova_compute[253538]: 2025-11-25 08:56:12.898 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:12 compute-0 nova_compute[253538]: 2025-11-25 08:56:12.898 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:12 compute-0 nova_compute[253538]: 2025-11-25 08:56:12.898 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:12 compute-0 nova_compute[253538]: 2025-11-25 08:56:12.900 253542 INFO nova.compute.manager [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Terminating instance
Nov 25 08:56:12 compute-0 nova_compute[253538]: 2025-11-25 08:56:12.901 253542 DEBUG nova.compute.manager [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:56:12 compute-0 nova_compute[253538]: 2025-11-25 08:56:12.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:12 compute-0 kernel: tap3df2cc50-c6 (unregistering): left promiscuous mode
Nov 25 08:56:12 compute-0 ceph-mon[75015]: pgmap v2262: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 25 08:56:12 compute-0 NetworkManager[48915]: <info>  [1764060972.9532] device (tap3df2cc50-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:56:12 compute-0 nova_compute[253538]: 2025-11-25 08:56:12.965 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:12 compute-0 ovn_controller[152859]: 2025-11-25T08:56:12Z|01241|binding|INFO|Releasing lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 from this chassis (sb_readonly=0)
Nov 25 08:56:12 compute-0 ovn_controller[152859]: 2025-11-25T08:56:12Z|01242|binding|INFO|Setting lport 3df2cc50-c6c1-476a-a12a-0d02fae91559 down in Southbound
Nov 25 08:56:12 compute-0 ovn_controller[152859]: 2025-11-25T08:56:12Z|01243|binding|INFO|Removing iface tap3df2cc50-c6 ovn-installed in OVS
Nov 25 08:56:12 compute-0 nova_compute[253538]: 2025-11-25 08:56:12.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:12.982 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:4e:c2 10.100.0.14'], port_security=['fa:16:3e:54:4e:c2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1598437000', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4cb9b212-92f6-4b10-ac69-ba251266bfd2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05073ace-d35c-48d1-9399-5c8964c484d2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1598437000', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd480e0b0-4ff3-496c-bb44-a333ae5d1ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.217', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d00a1a17-1a1d-406f-9e8c-c8e478c002e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3df2cc50-c6c1-476a-a12a-0d02fae91559) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:56:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:12.984 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3df2cc50-c6c1-476a-a12a-0d02fae91559 in datapath 05073ace-d35c-48d1-9399-5c8964c484d2 unbound from our chassis
Nov 25 08:56:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:12.985 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 05073ace-d35c-48d1-9399-5c8964c484d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:56:12 compute-0 nova_compute[253538]: 2025-11-25 08:56:12.985 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:12.987 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1361355f-b934-4565-9953-5916232f2276]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:12.987 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 namespace which is not needed anymore
Nov 25 08:56:13 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000077.scope: Deactivated successfully.
Nov 25 08:56:13 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000077.scope: Consumed 4.751s CPU time.
Nov 25 08:56:13 compute-0 systemd-machined[215790]: Machine qemu-149-instance-00000077 terminated.
Nov 25 08:56:13 compute-0 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[379561]: [NOTICE]   (379565) : haproxy version is 2.8.14-c23fe91
Nov 25 08:56:13 compute-0 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[379561]: [NOTICE]   (379565) : path to executable is /usr/sbin/haproxy
Nov 25 08:56:13 compute-0 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[379561]: [WARNING]  (379565) : Exiting Master process...
Nov 25 08:56:13 compute-0 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[379561]: [ALERT]    (379565) : Current worker (379567) exited with code 143 (Terminated)
Nov 25 08:56:13 compute-0 neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2[379561]: [WARNING]  (379565) : All workers exited. Exiting... (0)
Nov 25 08:56:13 compute-0 systemd[1]: libpod-f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f.scope: Deactivated successfully.
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.142 253542 INFO nova.virt.libvirt.driver [-] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Instance destroyed successfully.
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.142 253542 DEBUG nova.objects.instance [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 4cb9b212-92f6-4b10-ac69-ba251266bfd2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:56:13 compute-0 podman[379681]: 2025-11-25 08:56:13.144596708 +0000 UTC m=+0.047384961 container died f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.169 253542 DEBUG nova.virt.libvirt.vif [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:55:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1751152359',display_name='tempest-TestNetworkBasicOps-server-1751152359',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1751152359',id=119,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJubiPmrXfIUrmgC1xrGBFg1DApHg3WIsLuifdfOtIZqe386aSym92+Q91jDIGAFqlPVRU4cUyjEEd/Wo80iuNqs/Lk8M1iBken5yj9tIQXPukgxgH/HSGQZNiNB4Q+Uyg==',key_name='tempest-TestNetworkBasicOps-141024212',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:56:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-i7jk6l9x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:56:09Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=4cb9b212-92f6-4b10-ac69-ba251266bfd2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.170 253542 DEBUG nova.network.os_vif_util [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "address": "fa:16:3e:54:4e:c2", "network": {"id": "05073ace-d35c-48d1-9399-5c8964c484d2", "bridge": "br-int", "label": "tempest-network-smoke--1386100427", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df2cc50-c6", "ovs_interfaceid": "3df2cc50-c6c1-476a-a12a-0d02fae91559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.170 253542 DEBUG nova.network.os_vif_util [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.171 253542 DEBUG os_vif [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.172 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.173 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3df2cc50-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.174 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f-userdata-shm.mount: Deactivated successfully.
Nov 25 08:56:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2e19a01a3f2d2ec0b2a11352993021e8379576d9a8252569db20d988f648bd5-merged.mount: Deactivated successfully.
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.180 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.182 253542 INFO os_vif [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:4e:c2,bridge_name='br-int',has_traffic_filtering=True,id=3df2cc50-c6c1-476a-a12a-0d02fae91559,network=Network(05073ace-d35c-48d1-9399-5c8964c484d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3df2cc50-c6')
Nov 25 08:56:13 compute-0 podman[379681]: 2025-11-25 08:56:13.189063259 +0000 UTC m=+0.091851522 container cleanup f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:56:13 compute-0 systemd[1]: libpod-conmon-f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f.scope: Deactivated successfully.
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.237 253542 DEBUG nova.compute.manager [req-7ed0e0c5-2c59-4b33-bc76-9c183897bb63 req-95ba9ff8-d9e7-4c9d-93a3-77d62ba0b049 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Received event network-vif-unplugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.237 253542 DEBUG oslo_concurrency.lockutils [req-7ed0e0c5-2c59-4b33-bc76-9c183897bb63 req-95ba9ff8-d9e7-4c9d-93a3-77d62ba0b049 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.238 253542 DEBUG oslo_concurrency.lockutils [req-7ed0e0c5-2c59-4b33-bc76-9c183897bb63 req-95ba9ff8-d9e7-4c9d-93a3-77d62ba0b049 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.238 253542 DEBUG oslo_concurrency.lockutils [req-7ed0e0c5-2c59-4b33-bc76-9c183897bb63 req-95ba9ff8-d9e7-4c9d-93a3-77d62ba0b049 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.238 253542 DEBUG nova.compute.manager [req-7ed0e0c5-2c59-4b33-bc76-9c183897bb63 req-95ba9ff8-d9e7-4c9d-93a3-77d62ba0b049 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] No waiting events found dispatching network-vif-unplugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.238 253542 DEBUG nova.compute.manager [req-7ed0e0c5-2c59-4b33-bc76-9c183897bb63 req-95ba9ff8-d9e7-4c9d-93a3-77d62ba0b049 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Received event network-vif-unplugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:56:13 compute-0 podman[379732]: 2025-11-25 08:56:13.253836672 +0000 UTC m=+0.044579075 container remove f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 08:56:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.263 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[69c91dba-ea39-4011-b798-85461b0fac82]: (4, ('Tue Nov 25 08:56:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 (f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f)\nf3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f\nTue Nov 25 08:56:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 (f3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f)\nf3b692900a0094d0f52a9c0871748349df624ab8032c765c771f1ff5e8fee42f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.265 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb639e3-34bc-41ab-80fb-30cd274d85be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.266 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05073ace-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.268 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:13 compute-0 kernel: tap05073ace-d0: left promiscuous mode
Nov 25 08:56:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.272 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2139ae-4b60-4e35-aa65-a5d11ccab341]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.282 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.289 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[23647399-9b1d-4120-93c6-06286bd65c89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.291 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[204d9758-f50a-4b82-af63-2c2c87bff0e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.308 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[68c2181e-73f0-46d1-aaa3-d7345a2b104f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632287, 'reachable_time': 30742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379750, 'error': None, 'target': 'ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.311 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-05073ace-d35c-48d1-9399-5c8964c484d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:56:13 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:13.311 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7afc4a-6319-42ee-a4a2-627d8a9b4c34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d05073ace\x2dd35c\x2d48d1\x2d9399\x2d5c8964c484d2.mount: Deactivated successfully.
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.520 253542 INFO nova.virt.libvirt.driver [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Deleting instance files /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2_del
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.521 253542 INFO nova.virt.libvirt.driver [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Deletion of /var/lib/nova/instances/4cb9b212-92f6-4b10-ac69-ba251266bfd2_del complete
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.572 253542 INFO nova.compute.manager [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Took 0.67 seconds to destroy the instance on the hypervisor.
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.572 253542 DEBUG oslo.service.loopingcall [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.572 253542 DEBUG nova.compute.manager [-] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:56:13 compute-0 nova_compute[253538]: 2025-11-25 08:56:13.573 253542 DEBUG nova.network.neutron [-] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:56:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 127 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Nov 25 08:56:14 compute-0 ceph-mon[75015]: pgmap v2263: 321 pgs: 321 active+clean; 127 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Nov 25 08:56:15 compute-0 nova_compute[253538]: 2025-11-25 08:56:15.571 253542 DEBUG nova.compute.manager [req-b8cc945e-712e-4969-8afa-3837a924e5be req-3dafbbca-c5b1-478d-b19a-29a8c40891dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Received event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:56:15 compute-0 nova_compute[253538]: 2025-11-25 08:56:15.572 253542 DEBUG oslo_concurrency.lockutils [req-b8cc945e-712e-4969-8afa-3837a924e5be req-3dafbbca-c5b1-478d-b19a-29a8c40891dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:15 compute-0 nova_compute[253538]: 2025-11-25 08:56:15.572 253542 DEBUG oslo_concurrency.lockutils [req-b8cc945e-712e-4969-8afa-3837a924e5be req-3dafbbca-c5b1-478d-b19a-29a8c40891dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:15 compute-0 nova_compute[253538]: 2025-11-25 08:56:15.572 253542 DEBUG oslo_concurrency.lockutils [req-b8cc945e-712e-4969-8afa-3837a924e5be req-3dafbbca-c5b1-478d-b19a-29a8c40891dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:15 compute-0 nova_compute[253538]: 2025-11-25 08:56:15.572 253542 DEBUG nova.compute.manager [req-b8cc945e-712e-4969-8afa-3837a924e5be req-3dafbbca-c5b1-478d-b19a-29a8c40891dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] No waiting events found dispatching network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:56:15 compute-0 nova_compute[253538]: 2025-11-25 08:56:15.572 253542 WARNING nova.compute.manager [req-b8cc945e-712e-4969-8afa-3837a924e5be req-3dafbbca-c5b1-478d-b19a-29a8c40891dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Received unexpected event network-vif-plugged-3df2cc50-c6c1-476a-a12a-0d02fae91559 for instance with vm_state active and task_state deleting.
Nov 25 08:56:15 compute-0 nova_compute[253538]: 2025-11-25 08:56:15.600 253542 DEBUG nova.network.neutron [-] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:56:15 compute-0 nova_compute[253538]: 2025-11-25 08:56:15.621 253542 INFO nova.compute.manager [-] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Took 2.05 seconds to deallocate network for instance.
Nov 25 08:56:15 compute-0 nova_compute[253538]: 2025-11-25 08:56:15.661 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:15 compute-0 nova_compute[253538]: 2025-11-25 08:56:15.662 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:15 compute-0 nova_compute[253538]: 2025-11-25 08:56:15.711 253542 DEBUG oslo_concurrency.processutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:15 compute-0 sshd-session[379753]: Invalid user q from 45.202.211.6 port 56932
Nov 25 08:56:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2264: 321 pgs: 321 active+clean; 111 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 114 op/s
Nov 25 08:56:16 compute-0 sshd-session[379753]: Received disconnect from 45.202.211.6 port 56932:11: Bye Bye [preauth]
Nov 25 08:56:16 compute-0 sshd-session[379753]: Disconnected from invalid user q 45.202.211.6 port 56932 [preauth]
Nov 25 08:56:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:56:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/31950200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:56:16 compute-0 nova_compute[253538]: 2025-11-25 08:56:16.167 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:16 compute-0 nova_compute[253538]: 2025-11-25 08:56:16.184 253542 DEBUG oslo_concurrency.processutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:16 compute-0 nova_compute[253538]: 2025-11-25 08:56:16.192 253542 DEBUG nova.compute.provider_tree [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:56:16 compute-0 nova_compute[253538]: 2025-11-25 08:56:16.205 253542 DEBUG nova.scheduler.client.report [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:56:16 compute-0 nova_compute[253538]: 2025-11-25 08:56:16.223 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:16 compute-0 nova_compute[253538]: 2025-11-25 08:56:16.246 253542 INFO nova.scheduler.client.report [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 4cb9b212-92f6-4b10-ac69-ba251266bfd2
Nov 25 08:56:16 compute-0 nova_compute[253538]: 2025-11-25 08:56:16.311 253542 DEBUG oslo_concurrency.lockutils [None req-fd2e3a8c-2f98-40b7-b331-a312efe41bbd 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "4cb9b212-92f6-4b10-ac69-ba251266bfd2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:16 compute-0 ceph-mon[75015]: pgmap v2264: 321 pgs: 321 active+clean; 111 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 114 op/s
Nov 25 08:56:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/31950200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:56:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:56:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 101 op/s
Nov 25 08:56:17 compute-0 nova_compute[253538]: 2025-11-25 08:56:17.898 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:18 compute-0 nova_compute[253538]: 2025-11-25 08:56:18.232 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:18 compute-0 podman[379777]: 2025-11-25 08:56:18.863003046 +0000 UTC m=+0.107020384 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 08:56:18 compute-0 ceph-mon[75015]: pgmap v2265: 321 pgs: 321 active+clean; 88 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 101 op/s
Nov 25 08:56:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 88 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 08:56:20 compute-0 nova_compute[253538]: 2025-11-25 08:56:20.778 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ce8c3428-f7e4-49aa-9978-faaf5d514663" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:20 compute-0 nova_compute[253538]: 2025-11-25 08:56:20.779 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:20 compute-0 nova_compute[253538]: 2025-11-25 08:56:20.803 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:56:20 compute-0 nova_compute[253538]: 2025-11-25 08:56:20.871 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:20 compute-0 nova_compute[253538]: 2025-11-25 08:56:20.871 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:20 compute-0 nova_compute[253538]: 2025-11-25 08:56:20.881 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:56:20 compute-0 nova_compute[253538]: 2025-11-25 08:56:20.881 253542 INFO nova.compute.claims [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:56:21 compute-0 nova_compute[253538]: 2025-11-25 08:56:21.003 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:21 compute-0 ceph-mon[75015]: pgmap v2266: 321 pgs: 321 active+clean; 88 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 08:56:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:56:21 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/190466925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:56:21 compute-0 nova_compute[253538]: 2025-11-25 08:56:21.830 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.827s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:21 compute-0 nova_compute[253538]: 2025-11-25 08:56:21.836 253542 DEBUG nova.compute.provider_tree [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:56:21 compute-0 nova_compute[253538]: 2025-11-25 08:56:21.853 253542 DEBUG nova.scheduler.client.report [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:56:21 compute-0 nova_compute[253538]: 2025-11-25 08:56:21.882 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:21 compute-0 nova_compute[253538]: 2025-11-25 08:56:21.883 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:56:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2267: 321 pgs: 321 active+clean; 88 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 95 op/s
Nov 25 08:56:21 compute-0 nova_compute[253538]: 2025-11-25 08:56:21.961 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:56:21 compute-0 nova_compute[253538]: 2025-11-25 08:56:21.962 253542 DEBUG nova.network.neutron [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:56:21 compute-0 nova_compute[253538]: 2025-11-25 08:56:21.982 253542 INFO nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.009 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.125 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.127 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.128 253542 INFO nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Creating image(s)
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.159 253542 DEBUG nova.storage.rbd_utils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image ce8c3428-f7e4-49aa-9978-faaf5d514663_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.194 253542 DEBUG nova.storage.rbd_utils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image ce8c3428-f7e4-49aa-9978-faaf5d514663_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.221 253542 DEBUG nova.storage.rbd_utils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image ce8c3428-f7e4-49aa-9978-faaf5d514663_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.224 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.299 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.300 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.301 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.301 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.324 253542 DEBUG nova.storage.rbd_utils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image ce8c3428-f7e4-49aa-9978-faaf5d514663_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.328 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ce8c3428-f7e4-49aa-9978-faaf5d514663_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.382 253542 DEBUG nova.policy [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283b89dbe3284e8ea2019b797673108b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfffff2c57a442a59b202d368d49bf00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:56:22 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/190466925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:56:22 compute-0 ceph-mon[75015]: pgmap v2267: 321 pgs: 321 active+clean; 88 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 95 op/s
Nov 25 08:56:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.656 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc ce8c3428-f7e4-49aa-9978-faaf5d514663_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.743 253542 DEBUG nova.storage.rbd_utils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] resizing rbd image ce8c3428-f7e4-49aa-9978-faaf5d514663_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.865 253542 DEBUG nova.objects.instance [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'migration_context' on Instance uuid ce8c3428-f7e4-49aa-9978-faaf5d514663 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.884 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.884 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Ensure instance console log exists: /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.885 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.885 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.885 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:22 compute-0 nova_compute[253538]: 2025-11-25 08:56:22.900 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:23 compute-0 nova_compute[253538]: 2025-11-25 08:56:23.133 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:23 compute-0 nova_compute[253538]: 2025-11-25 08:56:23.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:23 compute-0 nova_compute[253538]: 2025-11-25 08:56:23.234 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:56:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:56:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:56:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:56:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:56:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:56:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 99 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 602 KiB/s rd, 523 KiB/s wr, 48 op/s
Nov 25 08:56:24 compute-0 nova_compute[253538]: 2025-11-25 08:56:24.119 253542 DEBUG nova.network.neutron [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Successfully created port: d92eef96-9bbe-4743-96d0-393e7e6de4ee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:56:25 compute-0 ceph-mon[75015]: pgmap v2268: 321 pgs: 321 active+clean; 99 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 602 KiB/s rd, 523 KiB/s wr, 48 op/s
Nov 25 08:56:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 321 active+clean; 119 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 1.3 MiB/s wr, 47 op/s
Nov 25 08:56:26 compute-0 nova_compute[253538]: 2025-11-25 08:56:26.877 253542 DEBUG nova.network.neutron [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Successfully updated port: d92eef96-9bbe-4743-96d0-393e7e6de4ee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:56:26 compute-0 nova_compute[253538]: 2025-11-25 08:56:26.889 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:56:26 compute-0 nova_compute[253538]: 2025-11-25 08:56:26.889 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquired lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:56:26 compute-0 nova_compute[253538]: 2025-11-25 08:56:26.889 253542 DEBUG nova.network.neutron [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:56:26 compute-0 nova_compute[253538]: 2025-11-25 08:56:26.973 253542 DEBUG nova.compute.manager [req-9f14fbe2-142f-47bb-9580-38dc27f689af req-3227d531-8c5a-49bd-99cb-f8923709080c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received event network-changed-d92eef96-9bbe-4743-96d0-393e7e6de4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:56:26 compute-0 nova_compute[253538]: 2025-11-25 08:56:26.973 253542 DEBUG nova.compute.manager [req-9f14fbe2-142f-47bb-9580-38dc27f689af req-3227d531-8c5a-49bd-99cb-f8923709080c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Refreshing instance network info cache due to event network-changed-d92eef96-9bbe-4743-96d0-393e7e6de4ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:56:26 compute-0 nova_compute[253538]: 2025-11-25 08:56:26.973 253542 DEBUG oslo_concurrency.lockutils [req-9f14fbe2-142f-47bb-9580-38dc27f689af req-3227d531-8c5a-49bd-99cb-f8923709080c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.030 253542 DEBUG nova.network.neutron [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:56:27 compute-0 ceph-mon[75015]: pgmap v2269: 321 pgs: 321 active+clean; 119 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 1.3 MiB/s wr, 47 op/s
Nov 25 08:56:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:56:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.903 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.908 253542 DEBUG nova.network.neutron [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updating instance_info_cache with network_info: [{"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.923 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Releasing lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.923 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Instance network_info: |[{"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.923 253542 DEBUG oslo_concurrency.lockutils [req-9f14fbe2-142f-47bb-9580-38dc27f689af req-3227d531-8c5a-49bd-99cb-f8923709080c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.924 253542 DEBUG nova.network.neutron [req-9f14fbe2-142f-47bb-9580-38dc27f689af req-3227d531-8c5a-49bd-99cb-f8923709080c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Refreshing network info cache for port d92eef96-9bbe-4743-96d0-393e7e6de4ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.926 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Start _get_guest_xml network_info=[{"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.932 253542 WARNING nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.941 253542 DEBUG nova.virt.libvirt.host [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.941 253542 DEBUG nova.virt.libvirt.host [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.945 253542 DEBUG nova.virt.libvirt.host [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.946 253542 DEBUG nova.virt.libvirt.host [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.946 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.946 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.947 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.947 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.947 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.947 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.947 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.947 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.948 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.948 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.948 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.948 253542 DEBUG nova.virt.hardware [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:56:27 compute-0 nova_compute[253538]: 2025-11-25 08:56:27.951 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.140 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764060973.1393652, 4cb9b212-92f6-4b10-ac69-ba251266bfd2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.140 253542 INFO nova.compute.manager [-] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] VM Stopped (Lifecycle Event)
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.159 253542 DEBUG nova.compute.manager [None req-69f77c51-f368-452b-af2a-bd475e0ea4f1 - - - - - -] [instance: 4cb9b212-92f6-4b10-ac69-ba251266bfd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:56:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1275504292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.402 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.427 253542 DEBUG nova.storage.rbd_utils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image ce8c3428-f7e4-49aa-9978-faaf5d514663_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.432 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:56:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4134776803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.890 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.892 253542 DEBUG nova.virt.libvirt.vif [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:56:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1402608537',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1402608537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=120,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMzihuAIn/3zUYmC89IQHlRQOFsDPQXmR2lUEIBbP/zsJ4Wb7ryhi2Z+PoqeUCEWAj2u1hLvngwGPYFPPFVKkLQWsKMEmPgeFVkFH2scsb2/c4cLoNH5bP+xcccrYAT8g==',key_name='tempest-TestSecurityGroupsBasicOps-373950628',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-hsvmofjq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:56:22Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=ce8c3428-f7e4-49aa-9978-faaf5d514663,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.893 253542 DEBUG nova.network.os_vif_util [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.894 253542 DEBUG nova.network.os_vif_util [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:26:ec,bridge_name='br-int',has_traffic_filtering=True,id=d92eef96-9bbe-4743-96d0-393e7e6de4ee,network=Network(58e30486-fde6-46bb-8263-c463bd38a1f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92eef96-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.895 253542 DEBUG nova.objects.instance [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'pci_devices' on Instance uuid ce8c3428-f7e4-49aa-9978-faaf5d514663 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.912 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:56:28 compute-0 nova_compute[253538]:   <uuid>ce8c3428-f7e4-49aa-9978-faaf5d514663</uuid>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   <name>instance-00000078</name>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1402608537</nova:name>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:56:27</nova:creationTime>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:56:28 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:56:28 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:56:28 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:56:28 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:56:28 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:56:28 compute-0 nova_compute[253538]:         <nova:user uuid="283b89dbe3284e8ea2019b797673108b">tempest-TestSecurityGroupsBasicOps-1495447964-project-member</nova:user>
Nov 25 08:56:28 compute-0 nova_compute[253538]:         <nova:project uuid="cfffff2c57a442a59b202d368d49bf00">tempest-TestSecurityGroupsBasicOps-1495447964</nova:project>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:56:28 compute-0 nova_compute[253538]:         <nova:port uuid="d92eef96-9bbe-4743-96d0-393e7e6de4ee">
Nov 25 08:56:28 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <system>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <entry name="serial">ce8c3428-f7e4-49aa-9978-faaf5d514663</entry>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <entry name="uuid">ce8c3428-f7e4-49aa-9978-faaf5d514663</entry>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     </system>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   <os>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   </os>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   <features>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   </features>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/ce8c3428-f7e4-49aa-9978-faaf5d514663_disk">
Nov 25 08:56:28 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       </source>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:56:28 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/ce8c3428-f7e4-49aa-9978-faaf5d514663_disk.config">
Nov 25 08:56:28 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       </source>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:56:28 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:0d:26:ec"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <target dev="tapd92eef96-9b"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663/console.log" append="off"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <video>
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     </video>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:56:28 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:56:28 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:56:28 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:56:28 compute-0 nova_compute[253538]: </domain>
Nov 25 08:56:28 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.913 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Preparing to wait for external event network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.913 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.913 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.914 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.915 253542 DEBUG nova.virt.libvirt.vif [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:56:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1402608537',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1402608537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=120,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMzihuAIn/3zUYmC89IQHlRQOFsDPQXmR2lUEIBbP/zsJ4Wb7ryhi2Z+PoqeUCEWAj2u1hLvngwGPYFPPFVKkLQWsKMEmPgeFVkFH2scsb2/c4cLoNH5bP+xcccrYAT8g==',key_name='tempest-TestSecurityGroupsBasicOps-373950628',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-hsvmofjq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:56:22Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=ce8c3428-f7e4-49aa-9978-faaf5d514663,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.915 253542 DEBUG nova.network.os_vif_util [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.916 253542 DEBUG nova.network.os_vif_util [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:26:ec,bridge_name='br-int',has_traffic_filtering=True,id=d92eef96-9bbe-4743-96d0-393e7e6de4ee,network=Network(58e30486-fde6-46bb-8263-c463bd38a1f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92eef96-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.916 253542 DEBUG os_vif [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:26:ec,bridge_name='br-int',has_traffic_filtering=True,id=d92eef96-9bbe-4743-96d0-393e7e6de4ee,network=Network(58e30486-fde6-46bb-8263-c463bd38a1f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92eef96-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.917 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.917 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.918 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.920 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.921 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd92eef96-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.921 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd92eef96-9b, col_values=(('external_ids', {'iface-id': 'd92eef96-9bbe-4743-96d0-393e7e6de4ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:26:ec', 'vm-uuid': 'ce8c3428-f7e4-49aa-9978-faaf5d514663'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.923 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:28 compute-0 NetworkManager[48915]: <info>  [1764060988.9250] manager: (tapd92eef96-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/506)
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.927 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.930 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.931 253542 INFO os_vif [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:26:ec,bridge_name='br-int',has_traffic_filtering=True,id=d92eef96-9bbe-4743-96d0-393e7e6de4ee,network=Network(58e30486-fde6-46bb-8263-c463bd38a1f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92eef96-9b')
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.993 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.994 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.994 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No VIF found with MAC fa:16:3e:0d:26:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:56:28 compute-0 nova_compute[253538]: 2025-11-25 08:56:28.994 253542 INFO nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Using config drive
Nov 25 08:56:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:56:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1979291121' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:56:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:56:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1979291121' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:56:29 compute-0 nova_compute[253538]: 2025-11-25 08:56:29.020 253542 DEBUG nova.storage.rbd_utils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image ce8c3428-f7e4-49aa-9978-faaf5d514663_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:29 compute-0 ceph-mon[75015]: pgmap v2270: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 25 08:56:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1275504292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:56:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4134776803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:56:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1979291121' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:56:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1979291121' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:56:29 compute-0 nova_compute[253538]: 2025-11-25 08:56:29.314 253542 DEBUG nova.network.neutron [req-9f14fbe2-142f-47bb-9580-38dc27f689af req-3227d531-8c5a-49bd-99cb-f8923709080c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updated VIF entry in instance network info cache for port d92eef96-9bbe-4743-96d0-393e7e6de4ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:56:29 compute-0 nova_compute[253538]: 2025-11-25 08:56:29.315 253542 DEBUG nova.network.neutron [req-9f14fbe2-142f-47bb-9580-38dc27f689af req-3227d531-8c5a-49bd-99cb-f8923709080c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updating instance_info_cache with network_info: [{"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:56:29 compute-0 nova_compute[253538]: 2025-11-25 08:56:29.334 253542 DEBUG oslo_concurrency.lockutils [req-9f14fbe2-142f-47bb-9580-38dc27f689af req-3227d531-8c5a-49bd-99cb-f8923709080c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:56:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.048 253542 INFO nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Creating config drive at /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663/disk.config
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.054 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnkdchm5r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.199 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnkdchm5r" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.229 253542 DEBUG nova.storage.rbd_utils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image ce8c3428-f7e4-49aa-9978-faaf5d514663_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.233 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663/disk.config ce8c3428-f7e4-49aa-9978-faaf5d514663_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.428 253542 DEBUG oslo_concurrency.processutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663/disk.config ce8c3428-f7e4-49aa-9978-faaf5d514663_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.430 253542 INFO nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Deleting local config drive /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663/disk.config because it was imported into RBD.
Nov 25 08:56:30 compute-0 kernel: tapd92eef96-9b: entered promiscuous mode
Nov 25 08:56:30 compute-0 NetworkManager[48915]: <info>  [1764060990.5106] manager: (tapd92eef96-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/507)
Nov 25 08:56:30 compute-0 ovn_controller[152859]: 2025-11-25T08:56:30Z|01244|binding|INFO|Claiming lport d92eef96-9bbe-4743-96d0-393e7e6de4ee for this chassis.
Nov 25 08:56:30 compute-0 ovn_controller[152859]: 2025-11-25T08:56:30Z|01245|binding|INFO|d92eef96-9bbe-4743-96d0-393e7e6de4ee: Claiming fa:16:3e:0d:26:ec 10.100.0.3
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.510 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.528 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:26:ec 10.100.0.3'], port_security=['fa:16:3e:0d:26:ec 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ce8c3428-f7e4-49aa-9978-faaf5d514663', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58e30486-fde6-46bb-8263-c463bd38a1f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '84e8c954-c3f0-4a6c-88b0-2dc68f7ce745 aec330ab-8d77-47ae-8de6-bec0741c3114', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8915864-93a9-4ad1-b7bb-a11d22ed3f29, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d92eef96-9bbe-4743-96d0-393e7e6de4ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.531 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d92eef96-9bbe-4743-96d0-393e7e6de4ee in datapath 58e30486-fde6-46bb-8263-c463bd38a1f9 bound to our chassis
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.533 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58e30486-fde6-46bb-8263-c463bd38a1f9
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.553 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[181db5c8-2d5a-44c2-bff9-0af8aeccded2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.555 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58e30486-f1 in ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.558 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58e30486-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.558 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0c849eaf-7363-4a41-bad8-e7454ba893fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.560 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5b36e553-c417-4652-aa7c-74bf524d3a41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:30 compute-0 systemd-udevd[380132]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:56:30 compute-0 systemd-machined[215790]: New machine qemu-150-instance-00000078.
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.574 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b9027864-6297-4950-82bb-970a7dc49601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:30 compute-0 NetworkManager[48915]: <info>  [1764060990.5828] device (tapd92eef96-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:56:30 compute-0 NetworkManager[48915]: <info>  [1764060990.5835] device (tapd92eef96-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:56:30 compute-0 systemd[1]: Started Virtual Machine qemu-150-instance-00000078.
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.606 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.608 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[943e2306-3206-4712-9c47-017c96e8216a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:30 compute-0 ovn_controller[152859]: 2025-11-25T08:56:30Z|01246|binding|INFO|Setting lport d92eef96-9bbe-4743-96d0-393e7e6de4ee ovn-installed in OVS
Nov 25 08:56:30 compute-0 ovn_controller[152859]: 2025-11-25T08:56:30Z|01247|binding|INFO|Setting lport d92eef96-9bbe-4743-96d0-393e7e6de4ee up in Southbound
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.613 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:30 compute-0 auditd[703]: Audit daemon rotating log files
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.647 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e30866b7-bf10-494d-9278-c1111ed4d567]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.652 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6f78bb86-aad0-46ff-9ab2-9ddd6fca04a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:30 compute-0 NetworkManager[48915]: <info>  [1764060990.6533] manager: (tap58e30486-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/508)
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.682 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8c003bb9-56e7-4f8a-a14a-9b83c9d61051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.685 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8bf430-56e6-4b74-a012-b690b80cabd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:30 compute-0 NetworkManager[48915]: <info>  [1764060990.7056] device (tap58e30486-f0): carrier: link connected
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.710 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[120425b1-c75b-411f-96b7-d7f93ef475c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.732 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e03be5-450d-4a7e-97e6-a17b0e122b06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58e30486-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:d5:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634530, 'reachable_time': 24303, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380164, 'error': None, 'target': 'ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.751 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a33c26be-a27c-4509-b753-42723d17cdf2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:d5e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634530, 'tstamp': 634530}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380165, 'error': None, 'target': 'ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.772 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cb39c793-816f-4427-837b-f8304200c8b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58e30486-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:d5:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634530, 'reachable_time': 24303, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380166, 'error': None, 'target': 'ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.821 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2045f3-d046-408e-b73d-d6f1ba0c4c7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.907 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[da193fd3-b83b-4730-9241-fb95bc4f75d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.909 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58e30486-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.910 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.910 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58e30486-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.913 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:30 compute-0 kernel: tap58e30486-f0: entered promiscuous mode
Nov 25 08:56:30 compute-0 NetworkManager[48915]: <info>  [1764060990.9139] manager: (tap58e30486-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/509)
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.915 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.917 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58e30486-f0, col_values=(('external_ids', {'iface-id': 'f9837d3c-3aa7-48de-b240-549f6bf978b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.918 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:30 compute-0 ovn_controller[152859]: 2025-11-25T08:56:30Z|01248|binding|INFO|Releasing lport f9837d3c-3aa7-48de-b240-549f6bf978b3 from this chassis (sb_readonly=0)
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.931 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.932 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58e30486-fde6-46bb-8263-c463bd38a1f9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58e30486-fde6-46bb-8263-c463bd38a1f9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.933 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dd3aa950-fa8e-4f65-8dae-8a969c2604a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.933 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-58e30486-fde6-46bb-8263-c463bd38a1f9
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/58e30486-fde6-46bb-8263-c463bd38a1f9.pid.haproxy
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 58e30486-fde6-46bb-8263-c463bd38a1f9
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:56:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:30.935 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9', 'env', 'PROCESS_TAG=haproxy-58e30486-fde6-46bb-8263-c463bd38a1f9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58e30486-fde6-46bb-8263-c463bd38a1f9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.952 253542 DEBUG nova.compute.manager [req-ce80dfa8-f373-452b-a287-1b293d61773b req-a00bf98a-596d-4590-af7b-79163c607ca3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received event network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.953 253542 DEBUG oslo_concurrency.lockutils [req-ce80dfa8-f373-452b-a287-1b293d61773b req-a00bf98a-596d-4590-af7b-79163c607ca3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.953 253542 DEBUG oslo_concurrency.lockutils [req-ce80dfa8-f373-452b-a287-1b293d61773b req-a00bf98a-596d-4590-af7b-79163c607ca3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.953 253542 DEBUG oslo_concurrency.lockutils [req-ce80dfa8-f373-452b-a287-1b293d61773b req-a00bf98a-596d-4590-af7b-79163c607ca3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.953 253542 DEBUG nova.compute.manager [req-ce80dfa8-f373-452b-a287-1b293d61773b req-a00bf98a-596d-4590-af7b-79163c607ca3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Processing event network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.960 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060990.9598863, ce8c3428-f7e4-49aa-9978-faaf5d514663 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.960 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] VM Started (Lifecycle Event)
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.963 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.966 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.969 253542 INFO nova.virt.libvirt.driver [-] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Instance spawned successfully.
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.969 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:56:30 compute-0 nova_compute[253538]: 2025-11-25 08:56:30.991 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.004 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.005 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.006 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.007 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.008 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.009 253542 DEBUG nova.virt.libvirt.driver [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.014 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.058 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.066 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060990.9608629, ce8c3428-f7e4-49aa-9978-faaf5d514663 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.067 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] VM Paused (Lifecycle Event)
Nov 25 08:56:31 compute-0 ceph-mon[75015]: pgmap v2271: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.104 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.108 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764060990.9654958, ce8c3428-f7e4-49aa-9978-faaf5d514663 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.109 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] VM Resumed (Lifecycle Event)
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.132 253542 INFO nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Took 9.01 seconds to spawn the instance on the hypervisor.
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.133 253542 DEBUG nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.134 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.140 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.183 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.205 253542 INFO nova.compute.manager [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Took 10.35 seconds to build instance.
Nov 25 08:56:31 compute-0 nova_compute[253538]: 2025-11-25 08:56:31.217 253542 DEBUG oslo_concurrency.lockutils [None req-ea27c167-309c-42ef-9057-32ae2ead652c 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:31 compute-0 podman[380240]: 2025-11-25 08:56:31.358049173 +0000 UTC m=+0.059854550 container create 3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:56:31 compute-0 systemd[1]: Started libpod-conmon-3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a.scope.
Nov 25 08:56:31 compute-0 podman[380240]: 2025-11-25 08:56:31.323624246 +0000 UTC m=+0.025429633 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:56:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97985d9ed470fd0792a7d93bf1b618a370e10b24c8a74fa4f0901da8068fed82/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:56:31 compute-0 podman[380240]: 2025-11-25 08:56:31.449599575 +0000 UTC m=+0.151405002 container init 3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 08:56:31 compute-0 podman[380240]: 2025-11-25 08:56:31.457280754 +0000 UTC m=+0.159086131 container start 3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 08:56:31 compute-0 neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9[380255]: [NOTICE]   (380259) : New worker (380261) forked
Nov 25 08:56:31 compute-0 neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9[380255]: [NOTICE]   (380259) : Loading success.
Nov 25 08:56:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 25 08:56:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:56:32 compute-0 nova_compute[253538]: 2025-11-25 08:56:32.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:33 compute-0 nova_compute[253538]: 2025-11-25 08:56:33.043 253542 DEBUG nova.compute.manager [req-d2e006af-aca7-43b8-97e1-e7fa6da998ce req-58ce4911-2e6d-4c2b-87ed-f85942c97862 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received event network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:56:33 compute-0 nova_compute[253538]: 2025-11-25 08:56:33.044 253542 DEBUG oslo_concurrency.lockutils [req-d2e006af-aca7-43b8-97e1-e7fa6da998ce req-58ce4911-2e6d-4c2b-87ed-f85942c97862 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:33 compute-0 nova_compute[253538]: 2025-11-25 08:56:33.044 253542 DEBUG oslo_concurrency.lockutils [req-d2e006af-aca7-43b8-97e1-e7fa6da998ce req-58ce4911-2e6d-4c2b-87ed-f85942c97862 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:33 compute-0 nova_compute[253538]: 2025-11-25 08:56:33.044 253542 DEBUG oslo_concurrency.lockutils [req-d2e006af-aca7-43b8-97e1-e7fa6da998ce req-58ce4911-2e6d-4c2b-87ed-f85942c97862 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:33 compute-0 nova_compute[253538]: 2025-11-25 08:56:33.045 253542 DEBUG nova.compute.manager [req-d2e006af-aca7-43b8-97e1-e7fa6da998ce req-58ce4911-2e6d-4c2b-87ed-f85942c97862 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] No waiting events found dispatching network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:56:33 compute-0 nova_compute[253538]: 2025-11-25 08:56:33.045 253542 WARNING nova.compute.manager [req-d2e006af-aca7-43b8-97e1-e7fa6da998ce req-58ce4911-2e6d-4c2b-87ed-f85942c97862 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received unexpected event network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee for instance with vm_state active and task_state None.
Nov 25 08:56:33 compute-0 ceph-mon[75015]: pgmap v2272: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 25 08:56:33 compute-0 nova_compute[253538]: 2025-11-25 08:56:33.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:56:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 914 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Nov 25 08:56:33 compute-0 nova_compute[253538]: 2025-11-25 08:56:33.982 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:35 compute-0 ceph-mon[75015]: pgmap v2273: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 914 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Nov 25 08:56:35 compute-0 NetworkManager[48915]: <info>  [1764060995.4044] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/510)
Nov 25 08:56:35 compute-0 NetworkManager[48915]: <info>  [1764060995.4053] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Nov 25 08:56:35 compute-0 nova_compute[253538]: 2025-11-25 08:56:35.403 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:35 compute-0 ovn_controller[152859]: 2025-11-25T08:56:35Z|01249|binding|INFO|Releasing lport f9837d3c-3aa7-48de-b240-549f6bf978b3 from this chassis (sb_readonly=0)
Nov 25 08:56:35 compute-0 nova_compute[253538]: 2025-11-25 08:56:35.518 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:35 compute-0 nova_compute[253538]: 2025-11-25 08:56:35.524 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:35 compute-0 nova_compute[253538]: 2025-11-25 08:56:35.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:56:35 compute-0 sshd-session[379993]: Connection closed by 45.78.217.205 port 36066 [preauth]
Nov 25 08:56:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 92 op/s
Nov 25 08:56:36 compute-0 nova_compute[253538]: 2025-11-25 08:56:36.336 253542 DEBUG nova.compute.manager [req-c181a1d7-54c0-4b96-bbaa-15215493d832 req-f0fa2f96-a178-4819-9d86-cb1b4fd90cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received event network-changed-d92eef96-9bbe-4743-96d0-393e7e6de4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:56:36 compute-0 nova_compute[253538]: 2025-11-25 08:56:36.337 253542 DEBUG nova.compute.manager [req-c181a1d7-54c0-4b96-bbaa-15215493d832 req-f0fa2f96-a178-4819-9d86-cb1b4fd90cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Refreshing instance network info cache due to event network-changed-d92eef96-9bbe-4743-96d0-393e7e6de4ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:56:36 compute-0 nova_compute[253538]: 2025-11-25 08:56:36.337 253542 DEBUG oslo_concurrency.lockutils [req-c181a1d7-54c0-4b96-bbaa-15215493d832 req-f0fa2f96-a178-4819-9d86-cb1b4fd90cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:56:36 compute-0 nova_compute[253538]: 2025-11-25 08:56:36.338 253542 DEBUG oslo_concurrency.lockutils [req-c181a1d7-54c0-4b96-bbaa-15215493d832 req-f0fa2f96-a178-4819-9d86-cb1b4fd90cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:56:36 compute-0 nova_compute[253538]: 2025-11-25 08:56:36.338 253542 DEBUG nova.network.neutron [req-c181a1d7-54c0-4b96-bbaa-15215493d832 req-f0fa2f96-a178-4819-9d86-cb1b4fd90cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Refreshing network info cache for port d92eef96-9bbe-4743-96d0-393e7e6de4ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:56:36 compute-0 sudo[380271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:56:36 compute-0 sudo[380271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:36 compute-0 sudo[380271]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:36 compute-0 sudo[380296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:56:36 compute-0 sudo[380296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:36 compute-0 sudo[380296]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:36 compute-0 sudo[380321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:56:36 compute-0 sudo[380321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:36 compute-0 sudo[380321]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:36 compute-0 sudo[380346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:56:36 compute-0 sudo[380346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:37 compute-0 ceph-mon[75015]: pgmap v2274: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 92 op/s
Nov 25 08:56:37 compute-0 sudo[380346]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:56:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:56:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:56:37 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:56:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:56:37 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:56:37 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 58649c52-1fac-4559-a770-9e0283e359a0 does not exist
Nov 25 08:56:37 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 614daa20-b0e0-4788-91c4-05ab3de665ff does not exist
Nov 25 08:56:37 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 38cfaaee-32b2-4997-b778-8672e3b3be46 does not exist
Nov 25 08:56:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:56:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:56:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:56:37 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:56:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:56:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:56:37 compute-0 sudo[380402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:56:37 compute-0 sudo[380402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:37 compute-0 sudo[380402]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:37 compute-0 sudo[380427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:56:37 compute-0 sudo[380427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:37 compute-0 sudo[380427]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:37 compute-0 sudo[380452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:56:37 compute-0 sudo[380452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:37 compute-0 sudo[380452]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:37 compute-0 sudo[380477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:56:37 compute-0 sudo[380477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.557000) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060997557074, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 551, "num_deletes": 250, "total_data_size": 557611, "memory_usage": 568616, "flush_reason": "Manual Compaction"}
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060997564386, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 381187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47195, "largest_seqno": 47745, "table_properties": {"data_size": 378478, "index_size": 681, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7348, "raw_average_key_size": 20, "raw_value_size": 372913, "raw_average_value_size": 1038, "num_data_blocks": 31, "num_entries": 359, "num_filter_entries": 359, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060960, "oldest_key_time": 1764060960, "file_creation_time": 1764060997, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 7453 microseconds, and 3657 cpu microseconds.
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.564453) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 381187 bytes OK
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.564479) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.565636) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.565661) EVENT_LOG_v1 {"time_micros": 1764060997565654, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.565683) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 554512, prev total WAL file size 554512, number of live WAL files 2.
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.566343) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373535' seq:72057594037927935, type:22 .. '6D6772737461740032303036' seq:0, type:0; will stop at (end)
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(372KB)], [107(10MB)]
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060997566387, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 10923073, "oldest_snapshot_seqno": -1}
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6908 keys, 7804615 bytes, temperature: kUnknown
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060997615548, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 7804615, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7761787, "index_size": 24476, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17285, "raw_key_size": 179336, "raw_average_key_size": 25, "raw_value_size": 7641363, "raw_average_value_size": 1106, "num_data_blocks": 955, "num_entries": 6908, "num_filter_entries": 6908, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764060997, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.615842) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 7804615 bytes
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.617005) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.7 rd, 158.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(49.1) write-amplify(20.5) OK, records in: 7405, records dropped: 497 output_compression: NoCompression
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.617033) EVENT_LOG_v1 {"time_micros": 1764060997617019, "job": 64, "event": "compaction_finished", "compaction_time_micros": 49276, "compaction_time_cpu_micros": 21822, "output_level": 6, "num_output_files": 1, "total_output_size": 7804615, "num_input_records": 7405, "num_output_records": 6908, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060997617546, "job": 64, "event": "table_file_deletion", "file_number": 109}
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764060997620269, "job": 64, "event": "table_file_deletion", "file_number": 107}
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.566241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.620361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.620365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.620368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.620370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:56:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:37.620372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:56:37 compute-0 podman[380542]: 2025-11-25 08:56:37.887728552 +0000 UTC m=+0.047093893 container create b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 08:56:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 484 KiB/s wr, 96 op/s
Nov 25 08:56:37 compute-0 nova_compute[253538]: 2025-11-25 08:56:37.907 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:37 compute-0 systemd[1]: Started libpod-conmon-b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3.scope.
Nov 25 08:56:37 compute-0 podman[380542]: 2025-11-25 08:56:37.866991898 +0000 UTC m=+0.026357279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:56:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:56:37 compute-0 podman[380542]: 2025-11-25 08:56:37.987977381 +0000 UTC m=+0.147342752 container init b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_thompson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 08:56:37 compute-0 podman[380542]: 2025-11-25 08:56:37.999284198 +0000 UTC m=+0.158649559 container start b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_thompson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:56:38 compute-0 podman[380542]: 2025-11-25 08:56:38.002952298 +0000 UTC m=+0.162317659 container attach b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 08:56:38 compute-0 tender_thompson[380559]: 167 167
Nov 25 08:56:38 compute-0 systemd[1]: libpod-b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3.scope: Deactivated successfully.
Nov 25 08:56:38 compute-0 podman[380542]: 2025-11-25 08:56:38.009927298 +0000 UTC m=+0.169292659 container died b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_thompson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 08:56:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-8afdd2643f1e2a657c9eec1a4f5cc5586bb4f214e7388a35ac4159bb17d18d0d-merged.mount: Deactivated successfully.
Nov 25 08:56:38 compute-0 podman[380542]: 2025-11-25 08:56:38.068210115 +0000 UTC m=+0.227575456 container remove b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_thompson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 08:56:38 compute-0 nova_compute[253538]: 2025-11-25 08:56:38.085 253542 DEBUG nova.network.neutron [req-c181a1d7-54c0-4b96-bbaa-15215493d832 req-f0fa2f96-a178-4819-9d86-cb1b4fd90cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updated VIF entry in instance network info cache for port d92eef96-9bbe-4743-96d0-393e7e6de4ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:56:38 compute-0 nova_compute[253538]: 2025-11-25 08:56:38.086 253542 DEBUG nova.network.neutron [req-c181a1d7-54c0-4b96-bbaa-15215493d832 req-f0fa2f96-a178-4819-9d86-cb1b4fd90cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updating instance_info_cache with network_info: [{"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:56:38 compute-0 systemd[1]: libpod-conmon-b1dcc863a652e3a9937ebcc970ed49718872b2923aa233091110aa9d4d5e76a3.scope: Deactivated successfully.
Nov 25 08:56:38 compute-0 nova_compute[253538]: 2025-11-25 08:56:38.115 253542 DEBUG oslo_concurrency.lockutils [req-c181a1d7-54c0-4b96-bbaa-15215493d832 req-f0fa2f96-a178-4819-9d86-cb1b4fd90cdf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:56:38 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:56:38 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:56:38 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:56:38 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:56:38 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:56:38 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:56:38 compute-0 podman[380582]: 2025-11-25 08:56:38.28958913 +0000 UTC m=+0.052364086 container create 2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curran, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:56:38 compute-0 systemd[1]: Started libpod-conmon-2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e.scope.
Nov 25 08:56:38 compute-0 podman[380582]: 2025-11-25 08:56:38.268528387 +0000 UTC m=+0.031303413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:56:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1bf04a6353f380c9ba796f3a8d205aa4af89ced26f45b4afa294aa124fa9f12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1bf04a6353f380c9ba796f3a8d205aa4af89ced26f45b4afa294aa124fa9f12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1bf04a6353f380c9ba796f3a8d205aa4af89ced26f45b4afa294aa124fa9f12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1bf04a6353f380c9ba796f3a8d205aa4af89ced26f45b4afa294aa124fa9f12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1bf04a6353f380c9ba796f3a8d205aa4af89ced26f45b4afa294aa124fa9f12/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:56:38 compute-0 podman[380582]: 2025-11-25 08:56:38.385887961 +0000 UTC m=+0.148662947 container init 2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:56:38 compute-0 podman[380582]: 2025-11-25 08:56:38.40311813 +0000 UTC m=+0.165893106 container start 2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 08:56:38 compute-0 podman[380582]: 2025-11-25 08:56:38.409535544 +0000 UTC m=+0.172310550 container attach 2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 08:56:38 compute-0 nova_compute[253538]: 2025-11-25 08:56:38.986 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:39 compute-0 ceph-mon[75015]: pgmap v2275: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 484 KiB/s wr, 96 op/s
Nov 25 08:56:39 compute-0 quirky_curran[380599]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:56:39 compute-0 quirky_curran[380599]: --> relative data size: 1.0
Nov 25 08:56:39 compute-0 quirky_curran[380599]: --> All data devices are unavailable
Nov 25 08:56:39 compute-0 systemd[1]: libpod-2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e.scope: Deactivated successfully.
Nov 25 08:56:39 compute-0 systemd[1]: libpod-2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e.scope: Consumed 1.139s CPU time.
Nov 25 08:56:39 compute-0 podman[380582]: 2025-11-25 08:56:39.591437254 +0000 UTC m=+1.354212250 container died 2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curran, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:56:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1bf04a6353f380c9ba796f3a8d205aa4af89ced26f45b4afa294aa124fa9f12-merged.mount: Deactivated successfully.
Nov 25 08:56:39 compute-0 podman[380582]: 2025-11-25 08:56:39.662808517 +0000 UTC m=+1.425583463 container remove 2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:56:39 compute-0 systemd[1]: libpod-conmon-2887f1e01a4ef86378e94bf33ff6526c2d3c26e6d6c1ba2409ba55a5f8dcfc1e.scope: Deactivated successfully.
Nov 25 08:56:39 compute-0 sudo[380477]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:39 compute-0 sudo[380642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:56:39 compute-0 sudo[380642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:39 compute-0 sudo[380642]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:39 compute-0 sudo[380667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:56:39 compute-0 sudo[380667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:39 compute-0 sudo[380667]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 08:56:39 compute-0 sudo[380692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:56:39 compute-0 sudo[380692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:39 compute-0 sudo[380692]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:39 compute-0 sudo[380717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:56:39 compute-0 sudo[380717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:40 compute-0 podman[380781]: 2025-11-25 08:56:40.29926847 +0000 UTC m=+0.048329206 container create 0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 08:56:40 compute-0 podman[380781]: 2025-11-25 08:56:40.280945182 +0000 UTC m=+0.030005918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:56:40 compute-0 systemd[1]: Started libpod-conmon-0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549.scope.
Nov 25 08:56:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:56:40 compute-0 podman[380781]: 2025-11-25 08:56:40.442245252 +0000 UTC m=+0.191305988 container init 0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 08:56:40 compute-0 podman[380781]: 2025-11-25 08:56:40.450518417 +0000 UTC m=+0.199579153 container start 0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:56:40 compute-0 distracted_heisenberg[380796]: 167 167
Nov 25 08:56:40 compute-0 systemd[1]: libpod-0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549.scope: Deactivated successfully.
Nov 25 08:56:40 compute-0 podman[380781]: 2025-11-25 08:56:40.457469697 +0000 UTC m=+0.206530473 container attach 0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 08:56:40 compute-0 podman[380781]: 2025-11-25 08:56:40.457907339 +0000 UTC m=+0.206968085 container died 0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 08:56:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e259fa2e3ff4a26f1a697643623ff41ca4510560f88d0802becd22120dd1701-merged.mount: Deactivated successfully.
Nov 25 08:56:40 compute-0 podman[380781]: 2025-11-25 08:56:40.534669518 +0000 UTC m=+0.283730254 container remove 0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:56:40 compute-0 systemd[1]: libpod-conmon-0c4562bc9520a296f803dfca446c7ebb071e52ab497238cdbf300fe73cff8549.scope: Deactivated successfully.
Nov 25 08:56:40 compute-0 nova_compute[253538]: 2025-11-25 08:56:40.559 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:56:40 compute-0 nova_compute[253538]: 2025-11-25 08:56:40.564 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:56:40 compute-0 nova_compute[253538]: 2025-11-25 08:56:40.564 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:56:40 compute-0 nova_compute[253538]: 2025-11-25 08:56:40.748 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:56:40 compute-0 nova_compute[253538]: 2025-11-25 08:56:40.748 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:56:40 compute-0 nova_compute[253538]: 2025-11-25 08:56:40.749 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:56:40 compute-0 nova_compute[253538]: 2025-11-25 08:56:40.749 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ce8c3428-f7e4-49aa-9978-faaf5d514663 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:56:40 compute-0 podman[380819]: 2025-11-25 08:56:40.783142571 +0000 UTC m=+0.118709083 container create 4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 25 08:56:40 compute-0 podman[380819]: 2025-11-25 08:56:40.692294158 +0000 UTC m=+0.027860690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:56:40 compute-0 nova_compute[253538]: 2025-11-25 08:56:40.898 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "43128a42-ed0f-42ff-8282-4ef978e7c43c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:40 compute-0 nova_compute[253538]: 2025-11-25 08:56:40.902 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:40 compute-0 nova_compute[253538]: 2025-11-25 08:56:40.917 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:56:40 compute-0 systemd[1]: Started libpod-conmon-4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8.scope.
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.002 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.003 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.013 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.013 253542 INFO nova.compute.claims [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:56:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fad02d2b11501102f1352bcab2bf857424bf7ecf1b5625d57167ca6b0714ec6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fad02d2b11501102f1352bcab2bf857424bf7ecf1b5625d57167ca6b0714ec6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fad02d2b11501102f1352bcab2bf857424bf7ecf1b5625d57167ca6b0714ec6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fad02d2b11501102f1352bcab2bf857424bf7ecf1b5625d57167ca6b0714ec6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:56:41 compute-0 podman[380819]: 2025-11-25 08:56:41.054184769 +0000 UTC m=+0.389751281 container init 4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 08:56:41 compute-0 podman[380819]: 2025-11-25 08:56:41.062576687 +0000 UTC m=+0.398143189 container start 4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 08:56:41 compute-0 podman[380819]: 2025-11-25 08:56:41.068331914 +0000 UTC m=+0.403898496 container attach 4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:56:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:41.079 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:41.080 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:41.081 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:41 compute-0 ceph-mon[75015]: pgmap v2276: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.234 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:56:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980668265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.675 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.683 253542 DEBUG nova.compute.provider_tree [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.714 253542 DEBUG nova.scheduler.client.report [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.742 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.743 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.811 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.812 253542 DEBUG nova.network.neutron [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.829 253542 INFO nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:56:41 compute-0 magical_leavitt[380836]: {
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:     "0": [
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:         {
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "devices": [
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "/dev/loop3"
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             ],
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "lv_name": "ceph_lv0",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "lv_size": "21470642176",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "name": "ceph_lv0",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "tags": {
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.cluster_name": "ceph",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.crush_device_class": "",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.encrypted": "0",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.osd_id": "0",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.type": "block",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.vdo": "0"
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             },
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "type": "block",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "vg_name": "ceph_vg0"
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:         }
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:     ],
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:     "1": [
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:         {
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "devices": [
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "/dev/loop4"
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             ],
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "lv_name": "ceph_lv1",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "lv_size": "21470642176",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "name": "ceph_lv1",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "tags": {
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.cluster_name": "ceph",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.crush_device_class": "",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.encrypted": "0",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.osd_id": "1",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.type": "block",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.vdo": "0"
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             },
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "type": "block",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "vg_name": "ceph_vg1"
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:         }
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:     ],
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:     "2": [
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:         {
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "devices": [
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "/dev/loop5"
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             ],
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "lv_name": "ceph_lv2",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "lv_size": "21470642176",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "name": "ceph_lv2",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "tags": {
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.cluster_name": "ceph",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.crush_device_class": "",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.encrypted": "0",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.osd_id": "2",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.type": "block",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:                 "ceph.vdo": "0"
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             },
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "type": "block",
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:             "vg_name": "ceph_vg2"
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:         }
Nov 25 08:56:41 compute-0 magical_leavitt[380836]:     ]
Nov 25 08:56:41 compute-0 magical_leavitt[380836]: }
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.882 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:56:41 compute-0 systemd[1]: libpod-4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8.scope: Deactivated successfully.
Nov 25 08:56:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 08:56:41 compute-0 podman[380868]: 2025-11-25 08:56:41.967549439 +0000 UTC m=+0.053701533 container died 4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.989 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.991 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:56:41 compute-0 nova_compute[253538]: 2025-11-25 08:56:41.991 253542 INFO nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Creating image(s)
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.025 253542 DEBUG nova.storage.rbd_utils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.053 253542 DEBUG nova.storage.rbd_utils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.076 253542 DEBUG nova.storage.rbd_utils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.079 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.117 253542 DEBUG nova.policy [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.121 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updating instance_info_cache with network_info: [{"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.141 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.142 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.162 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.163 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.163 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.164 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.190 253542 DEBUG nova.storage.rbd_utils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.195 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:42 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2980668265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:56:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-fad02d2b11501102f1352bcab2bf857424bf7ecf1b5625d57167ca6b0714ec6e-merged.mount: Deactivated successfully.
Nov 25 08:56:42 compute-0 podman[380868]: 2025-11-25 08:56:42.394486219 +0000 UTC m=+0.480638303 container remove 4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:56:42 compute-0 systemd[1]: libpod-conmon-4775811d3abd53beb9662b847f8aeda8c4de43590b40a4086db173ac11dafcf8.scope: Deactivated successfully.
Nov 25 08:56:42 compute-0 podman[380867]: 2025-11-25 08:56:42.428641329 +0000 UTC m=+0.499389873 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 25 08:56:42 compute-0 sudo[380717]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:42 compute-0 sudo[380991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:56:42 compute-0 sudo[380991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:42 compute-0 sudo[380991]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:56:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.556 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.569058) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061002569086, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 314, "num_deletes": 255, "total_data_size": 103986, "memory_usage": 111488, "flush_reason": "Manual Compaction"}
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061002579710, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 103384, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47746, "largest_seqno": 48059, "table_properties": {"data_size": 101349, "index_size": 199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5061, "raw_average_key_size": 17, "raw_value_size": 97335, "raw_average_value_size": 340, "num_data_blocks": 9, "num_entries": 286, "num_filter_entries": 286, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764060998, "oldest_key_time": 1764060998, "file_creation_time": 1764061002, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 10692 microseconds, and 818 cpu microseconds.
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.579748) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 103384 bytes OK
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.579765) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.583214) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.583239) EVENT_LOG_v1 {"time_micros": 1764061002583233, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.583257) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 101725, prev total WAL file size 101725, number of live WAL files 2.
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.583880) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373635' seq:72057594037927935, type:22 .. '6C6F676D0032303136' seq:0, type:0; will stop at (end)
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(100KB)], [110(7621KB)]
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061002583911, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 7907999, "oldest_snapshot_seqno": -1}
Nov 25 08:56:42 compute-0 sudo[381017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:56:42 compute-0 sudo[381017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:42 compute-0 sudo[381017]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.623 253542 DEBUG nova.storage.rbd_utils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:56:42 compute-0 sudo[381060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:56:42 compute-0 sudo[381060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:42 compute-0 sudo[381060]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:42 compute-0 sudo[381121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:56:42 compute-0 sudo[381121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6677 keys, 7787573 bytes, temperature: kUnknown
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061002723811, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 7787573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7745695, "index_size": 24102, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 175495, "raw_average_key_size": 26, "raw_value_size": 7628682, "raw_average_value_size": 1142, "num_data_blocks": 935, "num_entries": 6677, "num_filter_entries": 6677, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061002, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.724026) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 7787573 bytes
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.731089) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 56.5 rd, 55.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 7.4 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(151.8) write-amplify(75.3) OK, records in: 7194, records dropped: 517 output_compression: NoCompression
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.731121) EVENT_LOG_v1 {"time_micros": 1764061002731107, "job": 66, "event": "compaction_finished", "compaction_time_micros": 139968, "compaction_time_cpu_micros": 20490, "output_level": 6, "num_output_files": 1, "total_output_size": 7787573, "num_input_records": 7194, "num_output_records": 6677, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061002731280, "job": 66, "event": "table_file_deletion", "file_number": 112}
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061002733597, "job": 66, "event": "table_file_deletion", "file_number": 110}
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.583830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.733673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.733678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.733679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.733681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:56:42 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:56:42.733682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:56:42 compute-0 nova_compute[253538]: 2025-11-25 08:56:42.909 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:43 compute-0 nova_compute[253538]: 2025-11-25 08:56:43.040 253542 DEBUG nova.objects.instance [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 43128a42-ed0f-42ff-8282-4ef978e7c43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:56:43 compute-0 nova_compute[253538]: 2025-11-25 08:56:43.060 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:56:43 compute-0 nova_compute[253538]: 2025-11-25 08:56:43.061 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Ensure instance console log exists: /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:56:43 compute-0 nova_compute[253538]: 2025-11-25 08:56:43.061 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:43 compute-0 nova_compute[253538]: 2025-11-25 08:56:43.062 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:43 compute-0 nova_compute[253538]: 2025-11-25 08:56:43.062 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:43 compute-0 nova_compute[253538]: 2025-11-25 08:56:43.172 253542 DEBUG nova.network.neutron [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Successfully created port: 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:56:43 compute-0 podman[381204]: 2025-11-25 08:56:43.174474959 +0000 UTC m=+0.071713572 container create 0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 08:56:43 compute-0 systemd[1]: Started libpod-conmon-0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341.scope.
Nov 25 08:56:43 compute-0 podman[381204]: 2025-11-25 08:56:43.127341807 +0000 UTC m=+0.024580410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:56:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:56:43 compute-0 podman[381204]: 2025-11-25 08:56:43.262900326 +0000 UTC m=+0.160138919 container init 0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 08:56:43 compute-0 podman[381204]: 2025-11-25 08:56:43.27147071 +0000 UTC m=+0.168709283 container start 0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 08:56:43 compute-0 youthful_darwin[381222]: 167 167
Nov 25 08:56:43 compute-0 systemd[1]: libpod-0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341.scope: Deactivated successfully.
Nov 25 08:56:43 compute-0 podman[381204]: 2025-11-25 08:56:43.281632157 +0000 UTC m=+0.178870750 container attach 0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 08:56:43 compute-0 podman[381204]: 2025-11-25 08:56:43.282970193 +0000 UTC m=+0.180208796 container died 0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:56:43 compute-0 podman[381218]: 2025-11-25 08:56:43.296276165 +0000 UTC m=+0.082758444 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 08:56:43 compute-0 ceph-mon[75015]: pgmap v2277: 321 pgs: 321 active+clean; 134 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 08:56:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-12df25f68876a8eab5310ab331d30ec98c138b94564c857defc2858343924eb7-merged.mount: Deactivated successfully.
Nov 25 08:56:43 compute-0 podman[381204]: 2025-11-25 08:56:43.345941927 +0000 UTC m=+0.243180520 container remove 0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_darwin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:56:43 compute-0 systemd[1]: libpod-conmon-0c269861ea50c67e8f5fcb24be821124b0b24c615932419c7200389dc59a5341.scope: Deactivated successfully.
Nov 25 08:56:43 compute-0 podman[381264]: 2025-11-25 08:56:43.561944516 +0000 UTC m=+0.069960055 container create 8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:56:43 compute-0 systemd[1]: Started libpod-conmon-8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea.scope.
Nov 25 08:56:43 compute-0 podman[381264]: 2025-11-25 08:56:43.533192223 +0000 UTC m=+0.041207782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:56:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:56:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fda1ceb53621411682938b8635b928d4053befdb573ae8275ea43292e55191/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:56:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fda1ceb53621411682938b8635b928d4053befdb573ae8275ea43292e55191/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:56:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fda1ceb53621411682938b8635b928d4053befdb573ae8275ea43292e55191/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:56:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fda1ceb53621411682938b8635b928d4053befdb573ae8275ea43292e55191/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:56:43 compute-0 podman[381264]: 2025-11-25 08:56:43.872717115 +0000 UTC m=+0.380732654 container init 8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 08:56:43 compute-0 podman[381264]: 2025-11-25 08:56:43.884194367 +0000 UTC m=+0.392209876 container start 8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 08:56:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 146 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 640 KiB/s wr, 71 op/s
Nov 25 08:56:43 compute-0 nova_compute[253538]: 2025-11-25 08:56:43.918 253542 DEBUG nova.network.neutron [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Successfully updated port: 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:56:43 compute-0 nova_compute[253538]: 2025-11-25 08:56:43.937 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:56:43 compute-0 nova_compute[253538]: 2025-11-25 08:56:43.937 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:56:43 compute-0 nova_compute[253538]: 2025-11-25 08:56:43.937 253542 DEBUG nova.network.neutron [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:56:43 compute-0 nova_compute[253538]: 2025-11-25 08:56:43.991 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.021 253542 DEBUG nova.compute.manager [req-482d5728-6747-4f61-8a45-43cb392bfbb7 req-e902f5a5-4da3-4287-be61-d361660192aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-changed-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.022 253542 DEBUG nova.compute.manager [req-482d5728-6747-4f61-8a45-43cb392bfbb7 req-e902f5a5-4da3-4287-be61-d361660192aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Refreshing instance network info cache due to event network-changed-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.022 253542 DEBUG oslo_concurrency.lockutils [req-482d5728-6747-4f61-8a45-43cb392bfbb7 req-e902f5a5-4da3-4287-be61-d361660192aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.089 253542 DEBUG nova.network.neutron [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:56:44 compute-0 podman[381264]: 2025-11-25 08:56:44.223000999 +0000 UTC m=+0.731016508 container attach 8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:56:44 compute-0 ceph-mon[75015]: pgmap v2278: 321 pgs: 321 active+clean; 146 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 640 KiB/s wr, 71 op/s
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:56:44 compute-0 ovn_controller[152859]: 2025-11-25T08:56:44Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0d:26:ec 10.100.0.3
Nov 25 08:56:44 compute-0 ovn_controller[152859]: 2025-11-25T08:56:44Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0d:26:ec 10.100.0.3
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.733 253542 DEBUG nova.network.neutron [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Updating instance_info_cache with network_info: [{"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.750 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.750 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Instance network_info: |[{"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.751 253542 DEBUG oslo_concurrency.lockutils [req-482d5728-6747-4f61-8a45-43cb392bfbb7 req-e902f5a5-4da3-4287-be61-d361660192aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.751 253542 DEBUG nova.network.neutron [req-482d5728-6747-4f61-8a45-43cb392bfbb7 req-e902f5a5-4da3-4287-be61-d361660192aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Refreshing network info cache for port 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.755 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Start _get_guest_xml network_info=[{"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.761 253542 WARNING nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.768 253542 DEBUG nova.virt.libvirt.host [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.770 253542 DEBUG nova.virt.libvirt.host [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.778 253542 DEBUG nova.virt.libvirt.host [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.779 253542 DEBUG nova.virt.libvirt.host [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.779 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.779 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.780 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.780 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.780 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.781 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.781 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.781 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.781 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.781 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.781 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.781 253542 DEBUG nova.virt.hardware [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:56:44 compute-0 nova_compute[253538]: 2025-11-25 08:56:44.784 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:44 compute-0 nice_joliot[381280]: {
Nov 25 08:56:44 compute-0 nice_joliot[381280]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:56:44 compute-0 nice_joliot[381280]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:56:44 compute-0 nice_joliot[381280]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:56:44 compute-0 nice_joliot[381280]:         "osd_id": 1,
Nov 25 08:56:44 compute-0 nice_joliot[381280]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:56:44 compute-0 nice_joliot[381280]:         "type": "bluestore"
Nov 25 08:56:44 compute-0 nice_joliot[381280]:     },
Nov 25 08:56:44 compute-0 nice_joliot[381280]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:56:44 compute-0 nice_joliot[381280]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:56:44 compute-0 nice_joliot[381280]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:56:44 compute-0 nice_joliot[381280]:         "osd_id": 2,
Nov 25 08:56:44 compute-0 nice_joliot[381280]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:56:44 compute-0 nice_joliot[381280]:         "type": "bluestore"
Nov 25 08:56:44 compute-0 nice_joliot[381280]:     },
Nov 25 08:56:44 compute-0 nice_joliot[381280]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:56:44 compute-0 nice_joliot[381280]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:56:44 compute-0 nice_joliot[381280]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:56:44 compute-0 nice_joliot[381280]:         "osd_id": 0,
Nov 25 08:56:44 compute-0 nice_joliot[381280]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:56:44 compute-0 nice_joliot[381280]:         "type": "bluestore"
Nov 25 08:56:44 compute-0 nice_joliot[381280]:     }
Nov 25 08:56:44 compute-0 nice_joliot[381280]: }
Nov 25 08:56:44 compute-0 systemd[1]: libpod-8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea.scope: Deactivated successfully.
Nov 25 08:56:44 compute-0 podman[381264]: 2025-11-25 08:56:44.896946993 +0000 UTC m=+1.404962502 container died 8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:56:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-62fda1ceb53621411682938b8635b928d4053befdb573ae8275ea43292e55191-merged.mount: Deactivated successfully.
Nov 25 08:56:44 compute-0 podman[381264]: 2025-11-25 08:56:44.997008946 +0000 UTC m=+1.505024455 container remove 8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 08:56:45 compute-0 systemd[1]: libpod-conmon-8ee5c8f3487744f723da9cdc41dbe9bdf7d68613041b145e2c618cb34d80ccea.scope: Deactivated successfully.
Nov 25 08:56:45 compute-0 sudo[381121]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:56:45 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:56:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:56:45 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:56:45 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c4c00c97-d109-4459-ac6f-973872f879cf does not exist
Nov 25 08:56:45 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 945e6483-3da6-41d7-82c7-5485613d822c does not exist
Nov 25 08:56:45 compute-0 sudo[381345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:56:45 compute-0 sudo[381345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:45 compute-0 sudo[381345]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:45 compute-0 sudo[381370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:56:45 compute-0 sudo[381370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:56:45 compute-0 sudo[381370]: pam_unix(sudo:session): session closed for user root
Nov 25 08:56:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:56:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/43139613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.228 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.261 253542 DEBUG nova.storage.rbd_utils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.265 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:56:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/186944104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.727 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.729 253542 DEBUG nova.virt.libvirt.vif [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1161720817',display_name='tempest-TestNetworkBasicOps-server-1161720817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1161720817',id=121,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMygd7bgjp7je066rs+JSqi7wDw8mZA8bTJqZMTdVQ59AGIvWGIfB++nH0hDU9JXJAgSqR6ykwwbMc5hRBsfmnOJwLqxckNDbUsZU2WcEt8EN+Pk8Qs/v8+WIfKw25whKw==',key_name='tempest-TestNetworkBasicOps-341234383',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-8fli2zzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:56:41Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=43128a42-ed0f-42ff-8282-4ef978e7c43c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.729 253542 DEBUG nova.network.os_vif_util [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.730 253542 DEBUG nova.network.os_vif_util [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7c:8c,bridge_name='br-int',has_traffic_filtering=True,id=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6,network=Network(6a3361c5-4f78-4935-9e24-d43d47b272af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f8cf5bd-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.732 253542 DEBUG nova.objects.instance [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 43128a42-ed0f-42ff-8282-4ef978e7c43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.745 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:56:45 compute-0 nova_compute[253538]:   <uuid>43128a42-ed0f-42ff-8282-4ef978e7c43c</uuid>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   <name>instance-00000079</name>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkBasicOps-server-1161720817</nova:name>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:56:44</nova:creationTime>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:56:45 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:56:45 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:56:45 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:56:45 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:56:45 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:56:45 compute-0 nova_compute[253538]:         <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:56:45 compute-0 nova_compute[253538]:         <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:56:45 compute-0 nova_compute[253538]:         <nova:port uuid="3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6">
Nov 25 08:56:45 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <system>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <entry name="serial">43128a42-ed0f-42ff-8282-4ef978e7c43c</entry>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <entry name="uuid">43128a42-ed0f-42ff-8282-4ef978e7c43c</entry>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     </system>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   <os>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   </os>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   <features>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   </features>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/43128a42-ed0f-42ff-8282-4ef978e7c43c_disk">
Nov 25 08:56:45 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       </source>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:56:45 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/43128a42-ed0f-42ff-8282-4ef978e7c43c_disk.config">
Nov 25 08:56:45 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       </source>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:56:45 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:ec:7c:8c"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <target dev="tap3f8cf5bd-c7"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c/console.log" append="off"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <video>
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     </video>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:56:45 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:56:45 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:56:45 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:56:45 compute-0 nova_compute[253538]: </domain>
Nov 25 08:56:45 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.745 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Preparing to wait for external event network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.746 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.746 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.746 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.747 253542 DEBUG nova.virt.libvirt.vif [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1161720817',display_name='tempest-TestNetworkBasicOps-server-1161720817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1161720817',id=121,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMygd7bgjp7je066rs+JSqi7wDw8mZA8bTJqZMTdVQ59AGIvWGIfB++nH0hDU9JXJAgSqR6ykwwbMc5hRBsfmnOJwLqxckNDbUsZU2WcEt8EN+Pk8Qs/v8+WIfKw25whKw==',key_name='tempest-TestNetworkBasicOps-341234383',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-8fli2zzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:56:41Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=43128a42-ed0f-42ff-8282-4ef978e7c43c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.748 253542 DEBUG nova.network.os_vif_util [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.748 253542 DEBUG nova.network.os_vif_util [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7c:8c,bridge_name='br-int',has_traffic_filtering=True,id=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6,network=Network(6a3361c5-4f78-4935-9e24-d43d47b272af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f8cf5bd-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.749 253542 DEBUG os_vif [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7c:8c,bridge_name='br-int',has_traffic_filtering=True,id=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6,network=Network(6a3361c5-4f78-4935-9e24-d43d47b272af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f8cf5bd-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.749 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.750 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.751 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.754 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.754 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f8cf5bd-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.754 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3f8cf5bd-c7, col_values=(('external_ids', {'iface-id': '3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:7c:8c', 'vm-uuid': '43128a42-ed0f-42ff-8282-4ef978e7c43c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:45 compute-0 NetworkManager[48915]: <info>  [1764061005.7583] manager: (tap3f8cf5bd-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.759 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.765 253542 INFO os_vif [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7c:8c,bridge_name='br-int',has_traffic_filtering=True,id=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6,network=Network(6a3361c5-4f78-4935-9e24-d43d47b272af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f8cf5bd-c7')
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.812 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.813 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.813 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:ec:7c:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.814 253542 INFO nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Using config drive
Nov 25 08:56:45 compute-0 nova_compute[253538]: 2025-11-25 08:56:45.841 253542 DEBUG nova.storage.rbd_utils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 172 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 08:56:46 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:56:46 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:56:46 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/43139613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:56:46 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/186944104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:56:46 compute-0 nova_compute[253538]: 2025-11-25 08:56:46.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:56:47 compute-0 nova_compute[253538]: 2025-11-25 08:56:47.065 253542 DEBUG nova.network.neutron [req-482d5728-6747-4f61-8a45-43cb392bfbb7 req-e902f5a5-4da3-4287-be61-d361660192aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Updated VIF entry in instance network info cache for port 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:56:47 compute-0 nova_compute[253538]: 2025-11-25 08:56:47.065 253542 DEBUG nova.network.neutron [req-482d5728-6747-4f61-8a45-43cb392bfbb7 req-e902f5a5-4da3-4287-be61-d361660192aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Updating instance_info_cache with network_info: [{"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:56:47 compute-0 nova_compute[253538]: 2025-11-25 08:56:47.078 253542 DEBUG oslo_concurrency.lockutils [req-482d5728-6747-4f61-8a45-43cb392bfbb7 req-e902f5a5-4da3-4287-be61-d361660192aa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:56:47 compute-0 ceph-mon[75015]: pgmap v2279: 321 pgs: 321 active+clean; 172 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 08:56:47 compute-0 nova_compute[253538]: 2025-11-25 08:56:47.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:56:47 compute-0 nova_compute[253538]: 2025-11-25 08:56:47.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:56:47 compute-0 nova_compute[253538]: 2025-11-25 08:56:47.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:56:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:56:47 compute-0 nova_compute[253538]: 2025-11-25 08:56:47.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:47 compute-0 nova_compute[253538]: 2025-11-25 08:56:47.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:47 compute-0 nova_compute[253538]: 2025-11-25 08:56:47.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:47 compute-0 nova_compute[253538]: 2025-11-25 08:56:47.582 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:56:47 compute-0 nova_compute[253538]: 2025-11-25 08:56:47.582 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 203 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 527 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 25 08:56:47 compute-0 nova_compute[253538]: 2025-11-25 08:56:47.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:56:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3037464691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.073 253542 INFO nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Creating config drive at /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c/disk.config
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.077 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqcp8r4un execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.108 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.217 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.218 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.220 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqcp8r4un" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:48 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3037464691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.338 253542 DEBUG nova.storage.rbd_utils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.345 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c/disk.config 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.416 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.417 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.614 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.616 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3582MB free_disk=59.94587326049805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.616 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.616 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.726 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance ce8c3428-f7e4-49aa-9978-faaf5d514663 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.726 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 43128a42-ed0f-42ff-8282-4ef978e7c43c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.727 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.727 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:56:48 compute-0 nova_compute[253538]: 2025-11-25 08:56:48.810 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:56:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:56:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3487965594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:56:49 compute-0 nova_compute[253538]: 2025-11-25 08:56:49.261 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:49 compute-0 nova_compute[253538]: 2025-11-25 08:56:49.269 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:56:49 compute-0 nova_compute[253538]: 2025-11-25 08:56:49.286 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:56:49 compute-0 nova_compute[253538]: 2025-11-25 08:56:49.318 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:56:49 compute-0 nova_compute[253538]: 2025-11-25 08:56:49.319 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:49 compute-0 ceph-mon[75015]: pgmap v2280: 321 pgs: 321 active+clean; 203 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 527 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 25 08:56:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3487965594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:56:49 compute-0 nova_compute[253538]: 2025-11-25 08:56:49.406 253542 DEBUG oslo_concurrency.processutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c/disk.config 43128a42-ed0f-42ff-8282-4ef978e7c43c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:56:49 compute-0 nova_compute[253538]: 2025-11-25 08:56:49.407 253542 INFO nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Deleting local config drive /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c/disk.config because it was imported into RBD.
Nov 25 08:56:49 compute-0 kernel: tap3f8cf5bd-c7: entered promiscuous mode
Nov 25 08:56:49 compute-0 NetworkManager[48915]: <info>  [1764061009.4941] manager: (tap3f8cf5bd-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/513)
Nov 25 08:56:49 compute-0 nova_compute[253538]: 2025-11-25 08:56:49.494 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:49 compute-0 ovn_controller[152859]: 2025-11-25T08:56:49Z|01250|binding|INFO|Claiming lport 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 for this chassis.
Nov 25 08:56:49 compute-0 ovn_controller[152859]: 2025-11-25T08:56:49Z|01251|binding|INFO|3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6: Claiming fa:16:3e:ec:7c:8c 10.100.0.11
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.505 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:7c:8c 10.100.0.11'], port_security=['fa:16:3e:ec:7c:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '43128a42-ed0f-42ff-8282-4ef978e7c43c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a3361c5-4f78-4935-9e24-d43d47b272af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aa7587ee-e656-41ca-b100-9a0da067d1dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75d3afb1-0c55-49ec-b7d6-fd301cdfea08, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.508 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 in datapath 6a3361c5-4f78-4935-9e24-d43d47b272af bound to our chassis
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.511 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a3361c5-4f78-4935-9e24-d43d47b272af
Nov 25 08:56:49 compute-0 ovn_controller[152859]: 2025-11-25T08:56:49Z|01252|binding|INFO|Setting lport 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 ovn-installed in OVS
Nov 25 08:56:49 compute-0 ovn_controller[152859]: 2025-11-25T08:56:49Z|01253|binding|INFO|Setting lport 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 up in Southbound
Nov 25 08:56:49 compute-0 nova_compute[253538]: 2025-11-25 08:56:49.520 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:49 compute-0 nova_compute[253538]: 2025-11-25 08:56:49.524 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.530 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b46a92d1-820e-4459-bf0d-4056866c82a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.531 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a3361c5-41 in ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.534 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a3361c5-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.535 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[880f7156-e399-4e01-a90d-b9bc44ad0eed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.538 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d62e8bed-0bfa-42f5-a409-3250b355f134]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.555 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[5751e282-17cd-4ced-bf74-50519a603416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:49 compute-0 systemd-machined[215790]: New machine qemu-151-instance-00000079.
Nov 25 08:56:49 compute-0 systemd[1]: Started Virtual Machine qemu-151-instance-00000079.
Nov 25 08:56:49 compute-0 systemd-udevd[381572]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.586 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c623ed1f-8320-4edc-b9c7-978df7fd1df7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:49 compute-0 NetworkManager[48915]: <info>  [1764061009.6055] device (tap3f8cf5bd-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:56:49 compute-0 NetworkManager[48915]: <info>  [1764061009.6069] device (tap3f8cf5bd-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.624 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3e845e13-88c9-45aa-b1c6-6118c1149d4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.629 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[249a776d-a28e-440a-b519-eecb0bef8783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:49 compute-0 NetworkManager[48915]: <info>  [1764061009.6302] manager: (tap6a3361c5-40): new Veth device (/org/freedesktop/NetworkManager/Devices/514)
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.670 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a81204fe-a794-4235-9eee-0d356785f6db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.673 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4c0ede-1b46-4827-9d31-31506d67c524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:49 compute-0 podman[381552]: 2025-11-25 08:56:49.688289338 +0000 UTC m=+0.142151271 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:56:49 compute-0 NetworkManager[48915]: <info>  [1764061009.7035] device (tap6a3361c5-40): carrier: link connected
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.708 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3fd366-ea52-41c9-bf59-56ff6363e4ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.730 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[71e95015-49ee-4eff-af25-758e596c8da1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a3361c5-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:4e:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636430, 'reachable_time': 26935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381615, 'error': None, 'target': 'ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.749 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9ffca75f-d5f4-4f3c-a693-3840dd359371]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:4e9f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 636430, 'tstamp': 636430}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381616, 'error': None, 'target': 'ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.772 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff36863-34ec-4f7c-a17a-47b4b53ea2ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a3361c5-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:4e:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636430, 'reachable_time': 26935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 381617, 'error': None, 'target': 'ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.803 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[94a41d70-de9c-4fa9-9253-d8f9b0a7a976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.871 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[84905fa4-3710-4a59-883c-a21b0b52a9a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.873 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a3361c5-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.873 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.874 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a3361c5-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:49 compute-0 nova_compute[253538]: 2025-11-25 08:56:49.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:49 compute-0 kernel: tap6a3361c5-40: entered promiscuous mode
Nov 25 08:56:49 compute-0 NetworkManager[48915]: <info>  [1764061009.8772] manager: (tap6a3361c5-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/515)
Nov 25 08:56:49 compute-0 nova_compute[253538]: 2025-11-25 08:56:49.878 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.881 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a3361c5-40, col_values=(('external_ids', {'iface-id': '83822cb3-5f48-4869-a929-1fd9fc361865'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:49 compute-0 nova_compute[253538]: 2025-11-25 08:56:49.882 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:49 compute-0 ovn_controller[152859]: 2025-11-25T08:56:49Z|01254|binding|INFO|Releasing lport 83822cb3-5f48-4869-a929-1fd9fc361865 from this chassis (sb_readonly=0)
Nov 25 08:56:49 compute-0 nova_compute[253538]: 2025-11-25 08:56:49.902 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 213 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.905 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a3361c5-4f78-4935-9e24-d43d47b272af.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a3361c5-4f78-4935-9e24-d43d47b272af.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.906 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a864ec30-ec73-4c4a-b155-176cc608d20d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.907 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-6a3361c5-4f78-4935-9e24-d43d47b272af
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/6a3361c5-4f78-4935-9e24-d43d47b272af.pid.haproxy
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 6a3361c5-4f78-4935-9e24-d43d47b272af
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:56:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:49.908 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af', 'env', 'PROCESS_TAG=haproxy-6a3361c5-4f78-4935-9e24-d43d47b272af', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a3361c5-4f78-4935-9e24-d43d47b272af.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.100 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061010.100013, 43128a42-ed0f-42ff-8282-4ef978e7c43c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.101 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] VM Started (Lifecycle Event)
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.129 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.133 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061010.100198, 43128a42-ed0f-42ff-8282-4ef978e7c43c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.134 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] VM Paused (Lifecycle Event)
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.163 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.167 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.201 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.271 253542 DEBUG nova.compute.manager [req-acba5256-2c76-4785-b592-8a2143a1ba41 req-70cde542-6608-4038-96de-832f15596afa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.272 253542 DEBUG oslo_concurrency.lockutils [req-acba5256-2c76-4785-b592-8a2143a1ba41 req-70cde542-6608-4038-96de-832f15596afa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.272 253542 DEBUG oslo_concurrency.lockutils [req-acba5256-2c76-4785-b592-8a2143a1ba41 req-70cde542-6608-4038-96de-832f15596afa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.272 253542 DEBUG oslo_concurrency.lockutils [req-acba5256-2c76-4785-b592-8a2143a1ba41 req-70cde542-6608-4038-96de-832f15596afa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.273 253542 DEBUG nova.compute.manager [req-acba5256-2c76-4785-b592-8a2143a1ba41 req-70cde542-6608-4038-96de-832f15596afa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Processing event network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.273 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.276 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061010.2768142, 43128a42-ed0f-42ff-8282-4ef978e7c43c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.277 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] VM Resumed (Lifecycle Event)
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.278 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.282 253542 INFO nova.virt.libvirt.driver [-] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Instance spawned successfully.
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.283 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.313 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.316 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.323 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.327 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.328 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.328 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.329 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.329 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.330 253542 DEBUG nova.virt.libvirt.driver [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.356 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:56:50 compute-0 podman[381689]: 2025-11-25 08:56:50.282961034 +0000 UTC m=+0.034951153 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.385 253542 INFO nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Took 8.40 seconds to spawn the instance on the hypervisor.
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.386 253542 DEBUG nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.448 253542 INFO nova.compute.manager [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Took 9.47 seconds to build instance.
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.464 253542 DEBUG oslo_concurrency.lockutils [None req-442d7c35-8ef3-4365-870d-736cdeba020e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:50 compute-0 ceph-mon[75015]: pgmap v2281: 321 pgs: 321 active+clean; 213 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 25 08:56:50 compute-0 nova_compute[253538]: 2025-11-25 08:56:50.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:50 compute-0 podman[381689]: 2025-11-25 08:56:50.888153716 +0000 UTC m=+0.640143795 container create de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 08:56:51 compute-0 systemd[1]: Started libpod-conmon-de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da.scope.
Nov 25 08:56:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:56:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b842b0da7b301077b2be32a4ebc9c2c07bb6ef40e3f596e187ca5e2f7084ab4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:56:51 compute-0 podman[381689]: 2025-11-25 08:56:51.16274009 +0000 UTC m=+0.914730239 container init de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:56:51 compute-0 podman[381689]: 2025-11-25 08:56:51.175196398 +0000 UTC m=+0.927186487 container start de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:56:51 compute-0 neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af[381704]: [NOTICE]   (381708) : New worker (381710) forked
Nov 25 08:56:51 compute-0 neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af[381704]: [NOTICE]   (381708) : Loading success.
Nov 25 08:56:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 213 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 130 op/s
Nov 25 08:56:52 compute-0 nova_compute[253538]: 2025-11-25 08:56:52.385 253542 DEBUG nova.compute.manager [req-4a70d342-0aa3-424f-a8ce-0177a9f7929b req-dc904973-3464-4560-ad29-e5e744615406 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:56:52 compute-0 nova_compute[253538]: 2025-11-25 08:56:52.386 253542 DEBUG oslo_concurrency.lockutils [req-4a70d342-0aa3-424f-a8ce-0177a9f7929b req-dc904973-3464-4560-ad29-e5e744615406 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:56:52 compute-0 nova_compute[253538]: 2025-11-25 08:56:52.386 253542 DEBUG oslo_concurrency.lockutils [req-4a70d342-0aa3-424f-a8ce-0177a9f7929b req-dc904973-3464-4560-ad29-e5e744615406 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:56:52 compute-0 nova_compute[253538]: 2025-11-25 08:56:52.386 253542 DEBUG oslo_concurrency.lockutils [req-4a70d342-0aa3-424f-a8ce-0177a9f7929b req-dc904973-3464-4560-ad29-e5e744615406 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:56:52 compute-0 nova_compute[253538]: 2025-11-25 08:56:52.386 253542 DEBUG nova.compute.manager [req-4a70d342-0aa3-424f-a8ce-0177a9f7929b req-dc904973-3464-4560-ad29-e5e744615406 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] No waiting events found dispatching network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:56:52 compute-0 nova_compute[253538]: 2025-11-25 08:56:52.387 253542 WARNING nova.compute.manager [req-4a70d342-0aa3-424f-a8ce-0177a9f7929b req-dc904973-3464-4560-ad29-e5e744615406 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received unexpected event network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 for instance with vm_state active and task_state None.
Nov 25 08:56:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:56:52 compute-0 nova_compute[253538]: 2025-11-25 08:56:52.914 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:53 compute-0 ceph-mon[75015]: pgmap v2282: 321 pgs: 321 active+clean; 213 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 130 op/s
Nov 25 08:56:53 compute-0 nova_compute[253538]: 2025-11-25 08:56:53.305 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:53.305 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:56:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:53.308 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:56:53
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'backups']
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 321 active+clean; 213 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:56:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:56:54 compute-0 ceph-mon[75015]: pgmap v2283: 321 pgs: 321 active+clean; 213 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Nov 25 08:56:54 compute-0 nova_compute[253538]: 2025-11-25 08:56:54.658 253542 DEBUG nova.compute.manager [req-c26c3644-e83e-4fb9-9ac4-bb5f331839d4 req-7a8a1a83-c847-40fa-9d75-322d5b4a3119 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-changed-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:56:54 compute-0 nova_compute[253538]: 2025-11-25 08:56:54.659 253542 DEBUG nova.compute.manager [req-c26c3644-e83e-4fb9-9ac4-bb5f331839d4 req-7a8a1a83-c847-40fa-9d75-322d5b4a3119 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Refreshing instance network info cache due to event network-changed-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:56:54 compute-0 nova_compute[253538]: 2025-11-25 08:56:54.660 253542 DEBUG oslo_concurrency.lockutils [req-c26c3644-e83e-4fb9-9ac4-bb5f331839d4 req-7a8a1a83-c847-40fa-9d75-322d5b4a3119 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:56:54 compute-0 nova_compute[253538]: 2025-11-25 08:56:54.661 253542 DEBUG oslo_concurrency.lockutils [req-c26c3644-e83e-4fb9-9ac4-bb5f331839d4 req-7a8a1a83-c847-40fa-9d75-322d5b4a3119 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:56:54 compute-0 nova_compute[253538]: 2025-11-25 08:56:54.662 253542 DEBUG nova.network.neutron [req-c26c3644-e83e-4fb9-9ac4-bb5f331839d4 req-7a8a1a83-c847-40fa-9d75-322d5b4a3119 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Refreshing network info cache for port 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:56:55 compute-0 nova_compute[253538]: 2025-11-25 08:56:55.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 213 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 151 op/s
Nov 25 08:56:56 compute-0 nova_compute[253538]: 2025-11-25 08:56:56.375 253542 DEBUG nova.network.neutron [req-c26c3644-e83e-4fb9-9ac4-bb5f331839d4 req-7a8a1a83-c847-40fa-9d75-322d5b4a3119 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Updated VIF entry in instance network info cache for port 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:56:56 compute-0 nova_compute[253538]: 2025-11-25 08:56:56.375 253542 DEBUG nova.network.neutron [req-c26c3644-e83e-4fb9-9ac4-bb5f331839d4 req-7a8a1a83-c847-40fa-9d75-322d5b4a3119 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Updating instance_info_cache with network_info: [{"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:56:56 compute-0 nova_compute[253538]: 2025-11-25 08:56:56.408 253542 DEBUG oslo_concurrency.lockutils [req-c26c3644-e83e-4fb9-9ac4-bb5f331839d4 req-7a8a1a83-c847-40fa-9d75-322d5b4a3119 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:56:57 compute-0 ceph-mon[75015]: pgmap v2284: 321 pgs: 321 active+clean; 213 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 151 op/s
Nov 25 08:56:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:56:57.312 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:56:57 compute-0 nova_compute[253538]: 2025-11-25 08:56:57.385 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:56:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 213 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 25 08:56:57 compute-0 nova_compute[253538]: 2025-11-25 08:56:57.916 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:56:59 compute-0 ceph-mon[75015]: pgmap v2285: 321 pgs: 321 active+clean; 213 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 25 08:56:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 213 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 48 KiB/s wr, 78 op/s
Nov 25 08:57:00 compute-0 nova_compute[253538]: 2025-11-25 08:57:00.800 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:01 compute-0 ceph-mon[75015]: pgmap v2286: 321 pgs: 321 active+clean; 213 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 48 KiB/s wr, 78 op/s
Nov 25 08:57:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 216 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 172 KiB/s wr, 75 op/s
Nov 25 08:57:02 compute-0 nova_compute[253538]: 2025-11-25 08:57:02.153 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:57:02 compute-0 nova_compute[253538]: 2025-11-25 08:57:02.918 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:03 compute-0 ceph-mon[75015]: pgmap v2287: 321 pgs: 321 active+clean; 216 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 172 KiB/s wr, 75 op/s
Nov 25 08:57:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 224 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 681 KiB/s wr, 54 op/s
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012537543953897246 of space, bias 1.0, pg target 0.37612631861691737 quantized to 32 (current 32)
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:57:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:57:04 compute-0 ovn_controller[152859]: 2025-11-25T08:57:04Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:7c:8c 10.100.0.11
Nov 25 08:57:04 compute-0 ovn_controller[152859]: 2025-11-25T08:57:04Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:7c:8c 10.100.0.11
Nov 25 08:57:05 compute-0 ceph-mon[75015]: pgmap v2288: 321 pgs: 321 active+clean; 224 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 681 KiB/s wr, 54 op/s
Nov 25 08:57:05 compute-0 nova_compute[253538]: 2025-11-25 08:57:05.803 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 235 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 592 KiB/s rd, 1.7 MiB/s wr, 57 op/s
Nov 25 08:57:07 compute-0 ceph-mon[75015]: pgmap v2289: 321 pgs: 321 active+clean; 235 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 592 KiB/s rd, 1.7 MiB/s wr, 57 op/s
Nov 25 08:57:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:57:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 243 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 25 08:57:07 compute-0 nova_compute[253538]: 2025-11-25 08:57:07.920 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:08 compute-0 ceph-mon[75015]: pgmap v2290: 321 pgs: 321 active+clean; 243 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 25 08:57:08 compute-0 nova_compute[253538]: 2025-11-25 08:57:08.447 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "9b511004-21d7-4867-aa46-4e7219827b6e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:08 compute-0 nova_compute[253538]: 2025-11-25 08:57:08.448 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:08 compute-0 nova_compute[253538]: 2025-11-25 08:57:08.468 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:57:08 compute-0 nova_compute[253538]: 2025-11-25 08:57:08.566 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:08 compute-0 nova_compute[253538]: 2025-11-25 08:57:08.567 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:08 compute-0 nova_compute[253538]: 2025-11-25 08:57:08.578 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:57:08 compute-0 nova_compute[253538]: 2025-11-25 08:57:08.578 253542 INFO nova.compute.claims [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:57:08 compute-0 nova_compute[253538]: 2025-11-25 08:57:08.777 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:57:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3466261657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.256 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.263 253542 DEBUG nova.compute.provider_tree [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.278 253542 DEBUG nova.scheduler.client.report [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.310 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.311 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.352 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.353 253542 DEBUG nova.network.neutron [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.385 253542 INFO nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:57:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3466261657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.444 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.534 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.536 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.537 253542 INFO nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Creating image(s)
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.572 253542 DEBUG nova.storage.rbd_utils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] rbd image 9b511004-21d7-4867-aa46-4e7219827b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.611 253542 DEBUG nova.storage.rbd_utils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] rbd image 9b511004-21d7-4867-aa46-4e7219827b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.644 253542 DEBUG nova.storage.rbd_utils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] rbd image 9b511004-21d7-4867-aa46-4e7219827b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.650 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.763 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.764 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.765 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.765 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.794 253542 DEBUG nova.storage.rbd_utils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] rbd image 9b511004-21d7-4867-aa46-4e7219827b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:57:09 compute-0 nova_compute[253538]: 2025-11-25 08:57:09.820 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 9b511004-21d7-4867-aa46-4e7219827b6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 246 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 08:57:10 compute-0 sshd-session[381719]: Connection closed by 45.78.222.2 port 43498 [preauth]
Nov 25 08:57:10 compute-0 nova_compute[253538]: 2025-11-25 08:57:10.095 253542 DEBUG nova.policy [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8ce5e2935141427a90707c14e4a73ad9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '54ab33f9507e43fca43c45e6fc57f565', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:57:10 compute-0 nova_compute[253538]: 2025-11-25 08:57:10.184 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 9b511004-21d7-4867-aa46-4e7219827b6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:10 compute-0 nova_compute[253538]: 2025-11-25 08:57:10.272 253542 DEBUG nova.storage.rbd_utils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] resizing rbd image 9b511004-21d7-4867-aa46-4e7219827b6e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:57:10 compute-0 nova_compute[253538]: 2025-11-25 08:57:10.361 253542 DEBUG nova.objects.instance [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b511004-21d7-4867-aa46-4e7219827b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:57:10 compute-0 nova_compute[253538]: 2025-11-25 08:57:10.373 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:57:10 compute-0 nova_compute[253538]: 2025-11-25 08:57:10.374 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Ensure instance console log exists: /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:57:10 compute-0 nova_compute[253538]: 2025-11-25 08:57:10.374 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:10 compute-0 nova_compute[253538]: 2025-11-25 08:57:10.375 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:10 compute-0 nova_compute[253538]: 2025-11-25 08:57:10.375 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:10 compute-0 ceph-mon[75015]: pgmap v2291: 321 pgs: 321 active+clean; 246 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 08:57:10 compute-0 nova_compute[253538]: 2025-11-25 08:57:10.807 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:10 compute-0 nova_compute[253538]: 2025-11-25 08:57:10.876 253542 DEBUG nova.network.neutron [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Successfully created port: 88814764-016b-4232-82f8-72dbbb384932 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:57:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 266 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 3.1 MiB/s wr, 83 op/s
Nov 25 08:57:11 compute-0 nova_compute[253538]: 2025-11-25 08:57:11.967 253542 INFO nova.compute.manager [None req-c2a560f4-8cf1-4e84-a6b6-90c6ff3e17c3 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Get console output
Nov 25 08:57:11 compute-0 nova_compute[253538]: 2025-11-25 08:57:11.974 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:57:12 compute-0 nova_compute[253538]: 2025-11-25 08:57:12.009 253542 DEBUG nova.network.neutron [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Successfully updated port: 88814764-016b-4232-82f8-72dbbb384932 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:57:12 compute-0 nova_compute[253538]: 2025-11-25 08:57:12.023 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:57:12 compute-0 nova_compute[253538]: 2025-11-25 08:57:12.023 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquired lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:57:12 compute-0 nova_compute[253538]: 2025-11-25 08:57:12.024 253542 DEBUG nova.network.neutron [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:57:12 compute-0 nova_compute[253538]: 2025-11-25 08:57:12.117 253542 DEBUG nova.compute.manager [req-4cb82b57-45d2-4e5c-9ddb-bd30169d3395 req-158ea824-d9ef-4d29-ba2f-e8774fce8338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-changed-88814764-016b-4232-82f8-72dbbb384932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:12 compute-0 nova_compute[253538]: 2025-11-25 08:57:12.118 253542 DEBUG nova.compute.manager [req-4cb82b57-45d2-4e5c-9ddb-bd30169d3395 req-158ea824-d9ef-4d29-ba2f-e8774fce8338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Refreshing instance network info cache due to event network-changed-88814764-016b-4232-82f8-72dbbb384932. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:57:12 compute-0 nova_compute[253538]: 2025-11-25 08:57:12.119 253542 DEBUG oslo_concurrency.lockutils [req-4cb82b57-45d2-4e5c-9ddb-bd30169d3395 req-158ea824-d9ef-4d29-ba2f-e8774fce8338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:57:12 compute-0 nova_compute[253538]: 2025-11-25 08:57:12.187 253542 DEBUG nova.network.neutron [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:57:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:57:12 compute-0 podman[381909]: 2025-11-25 08:57:12.844410469 +0000 UTC m=+0.086752643 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true)
Nov 25 08:57:12 compute-0 nova_compute[253538]: 2025-11-25 08:57:12.925 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:13 compute-0 ceph-mon[75015]: pgmap v2292: 321 pgs: 321 active+clean; 266 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 3.1 MiB/s wr, 83 op/s
Nov 25 08:57:13 compute-0 ovn_controller[152859]: 2025-11-25T08:57:13Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:7c:8c 10.100.0.11
Nov 25 08:57:13 compute-0 podman[381929]: 2025-11-25 08:57:13.842603677 +0000 UTC m=+0.086510185 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:57:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 280 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 3.5 MiB/s wr, 82 op/s
Nov 25 08:57:13 compute-0 nova_compute[253538]: 2025-11-25 08:57:13.959 253542 DEBUG nova.network.neutron [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Updating instance_info_cache with network_info: [{"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:57:13 compute-0 nova_compute[253538]: 2025-11-25 08:57:13.982 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Releasing lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:57:13 compute-0 nova_compute[253538]: 2025-11-25 08:57:13.983 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Instance network_info: |[{"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:57:13 compute-0 nova_compute[253538]: 2025-11-25 08:57:13.983 253542 DEBUG oslo_concurrency.lockutils [req-4cb82b57-45d2-4e5c-9ddb-bd30169d3395 req-158ea824-d9ef-4d29-ba2f-e8774fce8338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:57:13 compute-0 nova_compute[253538]: 2025-11-25 08:57:13.984 253542 DEBUG nova.network.neutron [req-4cb82b57-45d2-4e5c-9ddb-bd30169d3395 req-158ea824-d9ef-4d29-ba2f-e8774fce8338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Refreshing network info cache for port 88814764-016b-4232-82f8-72dbbb384932 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:57:13 compute-0 nova_compute[253538]: 2025-11-25 08:57:13.989 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Start _get_guest_xml network_info=[{"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:57:13 compute-0 nova_compute[253538]: 2025-11-25 08:57:13.996 253542 WARNING nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.002 253542 DEBUG nova.virt.libvirt.host [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.003 253542 DEBUG nova.virt.libvirt.host [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.013 253542 DEBUG nova.virt.libvirt.host [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.014 253542 DEBUG nova.virt.libvirt.host [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.015 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.015 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.016 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.016 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.017 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.017 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.017 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.018 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.018 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.019 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.019 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.020 253542 DEBUG nova.virt.hardware [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.024 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:57:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947024011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.526 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.556 253542 DEBUG nova.storage.rbd_utils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] rbd image 9b511004-21d7-4867-aa46-4e7219827b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:57:14 compute-0 nova_compute[253538]: 2025-11-25 08:57:14.562 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:15 compute-0 ovn_controller[152859]: 2025-11-25T08:57:15Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:7c:8c 10.100.0.11
Nov 25 08:57:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:57:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3301800043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.141 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.145 253542 DEBUG nova.virt.libvirt.vif [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:57:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-429479529-access_point-266233725',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-429479529-access_point-266233725',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-429479529-acc',id=122,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOtszPERwqLbWIYPkzCz4rQadM31V2GuDIdwxeEnuQlWaJ4QslduyQHZMa0L1KML6aXdq0ZmWZAYQ/HsmjaUOuAcVDO3uLPU+Gh2V/Iwtg0WToybeZDWhsFovRGb4ZBh3Q==',key_name='tempest-TestSecurityGroupsBasicOps-652156064',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54ab33f9507e43fca43c45e6fc57f565',ramdisk_id='',reservation_id='r-pvks0odp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-429479529',owner_user_name='tempest-TestSecurityGroupsBasicOps-429479529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:57:09Z,user_data=None,user_id='8ce5e2935141427a90707c14e4a73ad9',uuid=9b511004-21d7-4867-aa46-4e7219827b6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.146 253542 DEBUG nova.network.os_vif_util [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Converting VIF {"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.148 253542 DEBUG nova.network.os_vif_util [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:8a:98,bridge_name='br-int',has_traffic_filtering=True,id=88814764-016b-4232-82f8-72dbbb384932,network=Network(55543b53-ee52-41ca-ba2e-341088afdcaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88814764-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.151 253542 DEBUG nova.objects.instance [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b511004-21d7-4867-aa46-4e7219827b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.172 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "43128a42-ed0f-42ff-8282-4ef978e7c43c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.172 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.173 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.173 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.173 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.175 253542 INFO nova.compute.manager [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Terminating instance
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.176 253542 DEBUG nova.compute.manager [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.179 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:57:15 compute-0 nova_compute[253538]:   <uuid>9b511004-21d7-4867-aa46-4e7219827b6e</uuid>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   <name>instance-0000007a</name>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-429479529-access_point-266233725</nova:name>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:57:13</nova:creationTime>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:57:15 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:57:15 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:57:15 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:57:15 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:57:15 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:57:15 compute-0 nova_compute[253538]:         <nova:user uuid="8ce5e2935141427a90707c14e4a73ad9">tempest-TestSecurityGroupsBasicOps-429479529-project-member</nova:user>
Nov 25 08:57:15 compute-0 nova_compute[253538]:         <nova:project uuid="54ab33f9507e43fca43c45e6fc57f565">tempest-TestSecurityGroupsBasicOps-429479529</nova:project>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:57:15 compute-0 nova_compute[253538]:         <nova:port uuid="88814764-016b-4232-82f8-72dbbb384932">
Nov 25 08:57:15 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <system>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <entry name="serial">9b511004-21d7-4867-aa46-4e7219827b6e</entry>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <entry name="uuid">9b511004-21d7-4867-aa46-4e7219827b6e</entry>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     </system>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   <os>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   </os>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   <features>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   </features>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/9b511004-21d7-4867-aa46-4e7219827b6e_disk">
Nov 25 08:57:15 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       </source>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:57:15 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/9b511004-21d7-4867-aa46-4e7219827b6e_disk.config">
Nov 25 08:57:15 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       </source>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:57:15 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:4c:8a:98"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <target dev="tap88814764-01"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e/console.log" append="off"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <video>
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     </video>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:57:15 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:57:15 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:57:15 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:57:15 compute-0 nova_compute[253538]: </domain>
Nov 25 08:57:15 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.181 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Preparing to wait for external event network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.182 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.183 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.183 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.184 253542 DEBUG nova.virt.libvirt.vif [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:57:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-429479529-access_point-266233725',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-429479529-access_point-266233725',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-429479529-acc',id=122,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOtszPERwqLbWIYPkzCz4rQadM31V2GuDIdwxeEnuQlWaJ4QslduyQHZMa0L1KML6aXdq0ZmWZAYQ/HsmjaUOuAcVDO3uLPU+Gh2V/Iwtg0WToybeZDWhsFovRGb4ZBh3Q==',key_name='tempest-TestSecurityGroupsBasicOps-652156064',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54ab33f9507e43fca43c45e6fc57f565',ramdisk_id='',reservation_id='r-pvks0odp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-429479529',owner_user_name='tempest-TestSecurityGroupsBasicOps-429479529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:57:09Z,user_data=None,user_id='8ce5e2935141427a90707c14e4a73ad9',uuid=9b511004-21d7-4867-aa46-4e7219827b6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.185 253542 DEBUG nova.network.os_vif_util [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Converting VIF {"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.186 253542 DEBUG nova.network.os_vif_util [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:8a:98,bridge_name='br-int',has_traffic_filtering=True,id=88814764-016b-4232-82f8-72dbbb384932,network=Network(55543b53-ee52-41ca-ba2e-341088afdcaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88814764-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.187 253542 DEBUG os_vif [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:8a:98,bridge_name='br-int',has_traffic_filtering=True,id=88814764-016b-4232-82f8-72dbbb384932,network=Network(55543b53-ee52-41ca-ba2e-341088afdcaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88814764-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.190 253542 DEBUG nova.compute.manager [req-d4f4634d-03f7-42ef-bec5-39ada549cd50 req-250aa808-e02c-412f-96d3-2db495da7499 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-changed-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.190 253542 DEBUG nova.compute.manager [req-d4f4634d-03f7-42ef-bec5-39ada549cd50 req-250aa808-e02c-412f-96d3-2db495da7499 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Refreshing instance network info cache due to event network-changed-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.191 253542 DEBUG oslo_concurrency.lockutils [req-d4f4634d-03f7-42ef-bec5-39ada549cd50 req-250aa808-e02c-412f-96d3-2db495da7499 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.191 253542 DEBUG oslo_concurrency.lockutils [req-d4f4634d-03f7-42ef-bec5-39ada549cd50 req-250aa808-e02c-412f-96d3-2db495da7499 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.192 253542 DEBUG nova.network.neutron [req-d4f4634d-03f7-42ef-bec5-39ada549cd50 req-250aa808-e02c-412f-96d3-2db495da7499 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Refreshing network info cache for port 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.195 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.196 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.197 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.205 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88814764-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.206 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88814764-01, col_values=(('external_ids', {'iface-id': '88814764-016b-4232-82f8-72dbbb384932', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4c:8a:98', 'vm-uuid': '9b511004-21d7-4867-aa46-4e7219827b6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:15 compute-0 NetworkManager[48915]: <info>  [1764061035.2092] manager: (tap88814764-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/516)
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.217 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.219 253542 INFO os_vif [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:8a:98,bridge_name='br-int',has_traffic_filtering=True,id=88814764-016b-4232-82f8-72dbbb384932,network=Network(55543b53-ee52-41ca-ba2e-341088afdcaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88814764-01')
Nov 25 08:57:15 compute-0 ceph-mon[75015]: pgmap v2293: 321 pgs: 321 active+clean; 280 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 3.5 MiB/s wr, 82 op/s
Nov 25 08:57:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2947024011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:57:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3301800043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.387 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.388 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.388 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] No VIF found with MAC fa:16:3e:4c:8a:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.388 253542 INFO nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Using config drive
Nov 25 08:57:15 compute-0 kernel: tap3f8cf5bd-c7 (unregistering): left promiscuous mode
Nov 25 08:57:15 compute-0 NetworkManager[48915]: <info>  [1764061035.4120] device (tap3f8cf5bd-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:57:15 compute-0 ovn_controller[152859]: 2025-11-25T08:57:15Z|01255|binding|INFO|Releasing lport 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 from this chassis (sb_readonly=0)
Nov 25 08:57:15 compute-0 ovn_controller[152859]: 2025-11-25T08:57:15Z|01256|binding|INFO|Setting lport 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 down in Southbound
Nov 25 08:57:15 compute-0 ovn_controller[152859]: 2025-11-25T08:57:15Z|01257|binding|INFO|Removing iface tap3f8cf5bd-c7 ovn-installed in OVS
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.438 253542 DEBUG nova.storage.rbd_utils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] rbd image 9b511004-21d7-4867-aa46-4e7219827b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:57:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:15.444 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:7c:8c 10.100.0.11'], port_security=['fa:16:3e:ec:7c:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '43128a42-ed0f-42ff-8282-4ef978e7c43c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a3361c5-4f78-4935-9e24-d43d47b272af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aa7587ee-e656-41ca-b100-9a0da067d1dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75d3afb1-0c55-49ec-b7d6-fd301cdfea08, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:57:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:15.445 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 in datapath 6a3361c5-4f78-4935-9e24-d43d47b272af unbound from our chassis
Nov 25 08:57:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:15.447 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a3361c5-4f78-4935-9e24-d43d47b272af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.448 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:15.448 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[28432021-166b-4c5f-82e7-c80eeb7675bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:15 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:15.449 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af namespace which is not needed anymore
Nov 25 08:57:15 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000079.scope: Deactivated successfully.
Nov 25 08:57:15 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000079.scope: Consumed 13.709s CPU time.
Nov 25 08:57:15 compute-0 systemd-machined[215790]: Machine qemu-151-instance-00000079 terminated.
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.648 253542 INFO nova.virt.libvirt.driver [-] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Instance destroyed successfully.
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.649 253542 DEBUG nova.objects.instance [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 43128a42-ed0f-42ff-8282-4ef978e7c43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.666 253542 DEBUG nova.virt.libvirt.vif [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1161720817',display_name='tempest-TestNetworkBasicOps-server-1161720817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1161720817',id=121,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMygd7bgjp7je066rs+JSqi7wDw8mZA8bTJqZMTdVQ59AGIvWGIfB++nH0hDU9JXJAgSqR6ykwwbMc5hRBsfmnOJwLqxckNDbUsZU2WcEt8EN+Pk8Qs/v8+WIfKw25whKw==',key_name='tempest-TestNetworkBasicOps-341234383',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:56:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-8fli2zzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:56:50Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=43128a42-ed0f-42ff-8282-4ef978e7c43c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.666 253542 DEBUG nova.network.os_vif_util [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.667 253542 DEBUG nova.network.os_vif_util [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:7c:8c,bridge_name='br-int',has_traffic_filtering=True,id=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6,network=Network(6a3361c5-4f78-4935-9e24-d43d47b272af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f8cf5bd-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.668 253542 DEBUG os_vif [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:7c:8c,bridge_name='br-int',has_traffic_filtering=True,id=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6,network=Network(6a3361c5-4f78-4935-9e24-d43d47b272af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f8cf5bd-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.669 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.670 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f8cf5bd-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.673 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.675 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.677 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:15 compute-0 nova_compute[253538]: 2025-11-25 08:57:15.680 253542 INFO os_vif [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:7c:8c,bridge_name='br-int',has_traffic_filtering=True,id=3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6,network=Network(6a3361c5-4f78-4935-9e24-d43d47b272af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f8cf5bd-c7')
Nov 25 08:57:15 compute-0 neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af[381704]: [NOTICE]   (381708) : haproxy version is 2.8.14-c23fe91
Nov 25 08:57:15 compute-0 neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af[381704]: [NOTICE]   (381708) : path to executable is /usr/sbin/haproxy
Nov 25 08:57:15 compute-0 neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af[381704]: [WARNING]  (381708) : Exiting Master process...
Nov 25 08:57:15 compute-0 neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af[381704]: [WARNING]  (381708) : Exiting Master process...
Nov 25 08:57:15 compute-0 neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af[381704]: [ALERT]    (381708) : Current worker (381710) exited with code 143 (Terminated)
Nov 25 08:57:15 compute-0 neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af[381704]: [WARNING]  (381708) : All workers exited. Exiting... (0)
Nov 25 08:57:15 compute-0 systemd[1]: libpod-de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da.scope: Deactivated successfully.
Nov 25 08:57:15 compute-0 podman[382059]: 2025-11-25 08:57:15.714535388 +0000 UTC m=+0.177774009 container died de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:57:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da-userdata-shm.mount: Deactivated successfully.
Nov 25 08:57:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b842b0da7b301077b2be32a4ebc9c2c07bb6ef40e3f596e187ca5e2f7084ab4-merged.mount: Deactivated successfully.
Nov 25 08:57:15 compute-0 podman[382059]: 2025-11-25 08:57:15.849390819 +0000 UTC m=+0.312629440 container cleanup de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 08:57:15 compute-0 systemd[1]: libpod-conmon-de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da.scope: Deactivated successfully.
Nov 25 08:57:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 293 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 178 KiB/s rd, 3.3 MiB/s wr, 67 op/s
Nov 25 08:57:16 compute-0 podman[382120]: 2025-11-25 08:57:16.023637992 +0000 UTC m=+0.148212775 container remove de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.030 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ab51aaec-5cb4-4975-8488-fe79ff0b1519]: (4, ('Tue Nov 25 08:57:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af (de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da)\nde4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da\nTue Nov 25 08:57:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af (de4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da)\nde4d6031ca24a438be3d92b1dab083865d78ca9073b06086aceb0b95605dc5da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.033 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bd7fea18-994f-4b12-b641-a76b9794092f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.035 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a3361c5-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:16 compute-0 kernel: tap6a3361c5-40: left promiscuous mode
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.039 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.074 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5c8880-86d0-48fd-99a2-ab3da66549cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.092 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[411fc128-7e82-49a1-8f98-49341af569da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.093 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[29bd70c9-1537-481a-bb07-472bfdc2c34d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.110 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[acf64a33-706d-4304-a85a-214b51b17f40]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636421, 'reachable_time': 40953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382138, 'error': None, 'target': 'ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.114 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a3361c5-4f78-4935-9e24-d43d47b272af deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.114 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[c431361b-7f51-4874-9db5-fdf529b0f4bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d6a3361c5\x2d4f78\x2d4935\x2d9e24\x2dd43d47b272af.mount: Deactivated successfully.
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.236 253542 INFO nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Creating config drive at /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e/disk.config
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.248 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_k7ex2jg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.313 253542 INFO nova.virt.libvirt.driver [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Deleting instance files /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c_del
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.315 253542 INFO nova.virt.libvirt.driver [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Deletion of /var/lib/nova/instances/43128a42-ed0f-42ff-8282-4ef978e7c43c_del complete
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.377 253542 INFO nova.compute.manager [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Took 1.20 seconds to destroy the instance on the hypervisor.
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.378 253542 DEBUG oslo.service.loopingcall [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.378 253542 DEBUG nova.compute.manager [-] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.378 253542 DEBUG nova.network.neutron [-] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.396 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_k7ex2jg" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.421 253542 DEBUG nova.storage.rbd_utils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] rbd image 9b511004-21d7-4867-aa46-4e7219827b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.424 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e/disk.config 9b511004-21d7-4867-aa46-4e7219827b6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.494 253542 DEBUG nova.network.neutron [req-4cb82b57-45d2-4e5c-9ddb-bd30169d3395 req-158ea824-d9ef-4d29-ba2f-e8774fce8338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Updated VIF entry in instance network info cache for port 88814764-016b-4232-82f8-72dbbb384932. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.496 253542 DEBUG nova.network.neutron [req-4cb82b57-45d2-4e5c-9ddb-bd30169d3395 req-158ea824-d9ef-4d29-ba2f-e8774fce8338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Updating instance_info_cache with network_info: [{"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.515 253542 DEBUG oslo_concurrency.lockutils [req-4cb82b57-45d2-4e5c-9ddb-bd30169d3395 req-158ea824-d9ef-4d29-ba2f-e8774fce8338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.593 253542 DEBUG oslo_concurrency.processutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e/disk.config 9b511004-21d7-4867-aa46-4e7219827b6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.594 253542 INFO nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Deleting local config drive /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e/disk.config because it was imported into RBD.
Nov 25 08:57:16 compute-0 kernel: tap88814764-01: entered promiscuous mode
Nov 25 08:57:16 compute-0 NetworkManager[48915]: <info>  [1764061036.6684] manager: (tap88814764-01): new Tun device (/org/freedesktop/NetworkManager/Devices/517)
Nov 25 08:57:16 compute-0 ovn_controller[152859]: 2025-11-25T08:57:16Z|01258|binding|INFO|Claiming lport 88814764-016b-4232-82f8-72dbbb384932 for this chassis.
Nov 25 08:57:16 compute-0 systemd-udevd[382037]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:57:16 compute-0 ovn_controller[152859]: 2025-11-25T08:57:16Z|01259|binding|INFO|88814764-016b-4232-82f8-72dbbb384932: Claiming fa:16:3e:4c:8a:98 10.100.0.4
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.668 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.678 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:8a:98 10.100.0.4'], port_security=['fa:16:3e:4c:8a:98 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9b511004-21d7-4867-aa46-4e7219827b6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55543b53-ee52-41ca-ba2e-341088afdcaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54ab33f9507e43fca43c45e6fc57f565', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6831af45-79a5-4ed4-afa6-6b43609f2269 cfebd55d-d59e-4c7b-966a-1919c4910d21', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90ba94bc-a7de-4a4f-bb6b-69f6919bb708, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=88814764-016b-4232-82f8-72dbbb384932) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.680 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 88814764-016b-4232-82f8-72dbbb384932 in datapath 55543b53-ee52-41ca-ba2e-341088afdcaa bound to our chassis
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.681 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55543b53-ee52-41ca-ba2e-341088afdcaa
Nov 25 08:57:16 compute-0 NetworkManager[48915]: <info>  [1764061036.6847] device (tap88814764-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:57:16 compute-0 ovn_controller[152859]: 2025-11-25T08:57:16Z|01260|binding|INFO|Setting lport 88814764-016b-4232-82f8-72dbbb384932 ovn-installed in OVS
Nov 25 08:57:16 compute-0 ovn_controller[152859]: 2025-11-25T08:57:16Z|01261|binding|INFO|Setting lport 88814764-016b-4232-82f8-72dbbb384932 up in Southbound
Nov 25 08:57:16 compute-0 NetworkManager[48915]: <info>  [1764061036.6859] device (tap88814764-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.687 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:16 compute-0 nova_compute[253538]: 2025-11-25 08:57:16.690 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.694 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[05cae298-dbd7-4e5b-bbf1-613f3864ccd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.694 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap55543b53-e1 in ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.697 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap55543b53-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.697 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d816fea1-3298-4ec7-aff8-7e401d1a9d20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.698 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee5210d-654d-4aa9-9631-a645251ae41b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:16 compute-0 systemd-machined[215790]: New machine qemu-152-instance-0000007a.
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.712 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[50be2c86-54bd-4891-b1cc-2cab95369f4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:16 compute-0 systemd[1]: Started Virtual Machine qemu-152-instance-0000007a.
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.736 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e18702be-890c-4a44-9291-6c40cb0beb2c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.785 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[202833c2-187a-41d7-a27c-20fda2f68da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.792 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[75d4a8f0-f649-4d6a-b311-8e62917a9041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:16 compute-0 NetworkManager[48915]: <info>  [1764061036.7939] manager: (tap55543b53-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/518)
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.955 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d817a81e-b028-4de1-9fe4-6b2c5c83fd31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:16.959 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d8705904-687b-44e7-bb90-2f4dad156cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:16 compute-0 NetworkManager[48915]: <info>  [1764061036.9894] device (tap55543b53-e0): carrier: link connected
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.000 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d611a926-0990-4021-8fe8-b4af46ca53f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.031 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d8e0d942-5d9e-4228-933e-a84ec60045d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55543b53-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:79:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639158, 'reachable_time': 28515, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382222, 'error': None, 'target': 'ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.050 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2b2884-28ca-422f-b332-86b0d2578bfd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:794a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639158, 'tstamp': 639158}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382223, 'error': None, 'target': 'ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.066 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[24c35fc5-cef8-47cb-b01a-a8876c91b5ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55543b53-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:79:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639158, 'reachable_time': 28515, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382224, 'error': None, 'target': 'ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.104 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[961bb28f-60c3-449e-ad37-211c0ec88920]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.182 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2302ae09-47e5-42ab-8971-f15a995bf84d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.183 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55543b53-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.183 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.184 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55543b53-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:17 compute-0 kernel: tap55543b53-e0: entered promiscuous mode
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.185 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:17 compute-0 NetworkManager[48915]: <info>  [1764061037.1871] manager: (tap55543b53-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/519)
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.187 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55543b53-e0, col_values=(('external_ids', {'iface-id': '6aef391e-f696-4074-917b-8b0c7a47b4b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:17 compute-0 ovn_controller[152859]: 2025-11-25T08:57:17Z|01262|binding|INFO|Releasing lport 6aef391e-f696-4074-917b-8b0c7a47b4b6 from this chassis (sb_readonly=0)
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.188 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.202 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.204 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/55543b53-ee52-41ca-ba2e-341088afdcaa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/55543b53-ee52-41ca-ba2e-341088afdcaa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.205 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[387b963b-ccb1-4d5f-8160-a174baa52697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.206 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-55543b53-ee52-41ca-ba2e-341088afdcaa
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/55543b53-ee52-41ca-ba2e-341088afdcaa.pid.haproxy
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 55543b53-ee52-41ca-ba2e-341088afdcaa
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:57:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:17.207 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa', 'env', 'PROCESS_TAG=haproxy-55543b53-ee52-41ca-ba2e-341088afdcaa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/55543b53-ee52-41ca-ba2e-341088afdcaa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:57:17 compute-0 ceph-mon[75015]: pgmap v2294: 321 pgs: 321 active+clean; 293 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 178 KiB/s rd, 3.3 MiB/s wr, 67 op/s
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.302 253542 DEBUG nova.compute.manager [req-19b78f91-ea08-4b83-be58-3e2c91d9b6c3 req-ec2c21a0-6f27-438d-9737-6b3817576a2a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-vif-unplugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.304 253542 DEBUG oslo_concurrency.lockutils [req-19b78f91-ea08-4b83-be58-3e2c91d9b6c3 req-ec2c21a0-6f27-438d-9737-6b3817576a2a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.304 253542 DEBUG oslo_concurrency.lockutils [req-19b78f91-ea08-4b83-be58-3e2c91d9b6c3 req-ec2c21a0-6f27-438d-9737-6b3817576a2a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.305 253542 DEBUG oslo_concurrency.lockutils [req-19b78f91-ea08-4b83-be58-3e2c91d9b6c3 req-ec2c21a0-6f27-438d-9737-6b3817576a2a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.305 253542 DEBUG nova.compute.manager [req-19b78f91-ea08-4b83-be58-3e2c91d9b6c3 req-ec2c21a0-6f27-438d-9737-6b3817576a2a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] No waiting events found dispatching network-vif-unplugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.305 253542 DEBUG nova.compute.manager [req-19b78f91-ea08-4b83-be58-3e2c91d9b6c3 req-ec2c21a0-6f27-438d-9737-6b3817576a2a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-vif-unplugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.306 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061037.298575, 9b511004-21d7-4867-aa46-4e7219827b6e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.307 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] VM Started (Lifecycle Event)
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.341 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.345 253542 DEBUG nova.network.neutron [-] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.350 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061037.3035667, 9b511004-21d7-4867-aa46-4e7219827b6e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.350 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] VM Paused (Lifecycle Event)
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.372 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.374 253542 INFO nova.compute.manager [-] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Took 1.00 seconds to deallocate network for instance.
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.381 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.409 253542 DEBUG nova.compute.manager [req-4bda0695-a94d-469b-8fda-b575ca1fe5e9 req-99af7b91-9863-4b98-bb19-b6be1c855461 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-vif-deleted-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.411 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.434 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.435 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.548 253542 DEBUG oslo_concurrency.processutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:57:17 compute-0 podman[382298]: 2025-11-25 08:57:17.588198487 +0000 UTC m=+0.058639907 container create c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:57:17 compute-0 systemd[1]: Started libpod-conmon-c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866.scope.
Nov 25 08:57:17 compute-0 podman[382298]: 2025-11-25 08:57:17.557401159 +0000 UTC m=+0.027842619 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:57:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:57:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf8b26fcd98a92c913e834a912a7b97668705f1f438f00d41eb5de431fe4dbaf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:57:17 compute-0 podman[382298]: 2025-11-25 08:57:17.688654442 +0000 UTC m=+0.159095882 container init c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:57:17 compute-0 podman[382298]: 2025-11-25 08:57:17.698064108 +0000 UTC m=+0.168505518 container start c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:57:17 compute-0 neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa[382314]: [NOTICE]   (382318) : New worker (382330) forked
Nov 25 08:57:17 compute-0 neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa[382314]: [NOTICE]   (382318) : Loading success.
Nov 25 08:57:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 265 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 105 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Nov 25 08:57:17 compute-0 nova_compute[253538]: 2025-11-25 08:57:17.927 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:57:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2905058715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:57:18 compute-0 nova_compute[253538]: 2025-11-25 08:57:18.058 253542 DEBUG oslo_concurrency.processutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:18 compute-0 nova_compute[253538]: 2025-11-25 08:57:18.066 253542 DEBUG nova.compute.provider_tree [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:57:18 compute-0 nova_compute[253538]: 2025-11-25 08:57:18.089 253542 DEBUG nova.scheduler.client.report [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:57:18 compute-0 nova_compute[253538]: 2025-11-25 08:57:18.132 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:18 compute-0 nova_compute[253538]: 2025-11-25 08:57:18.179 253542 INFO nova.scheduler.client.report [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 43128a42-ed0f-42ff-8282-4ef978e7c43c
Nov 25 08:57:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2905058715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:57:18 compute-0 nova_compute[253538]: 2025-11-25 08:57:18.271 253542 DEBUG oslo_concurrency.lockutils [None req-1b614170-21cc-49c4-842b-7f28eb643f6e 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:18 compute-0 nova_compute[253538]: 2025-11-25 08:57:18.394 253542 DEBUG nova.network.neutron [req-d4f4634d-03f7-42ef-bec5-39ada549cd50 req-250aa808-e02c-412f-96d3-2db495da7499 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Updated VIF entry in instance network info cache for port 3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:57:18 compute-0 nova_compute[253538]: 2025-11-25 08:57:18.394 253542 DEBUG nova.network.neutron [req-d4f4634d-03f7-42ef-bec5-39ada549cd50 req-250aa808-e02c-412f-96d3-2db495da7499 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Updating instance_info_cache with network_info: [{"id": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "address": "fa:16:3e:ec:7c:8c", "network": {"id": "6a3361c5-4f78-4935-9e24-d43d47b272af", "bridge": "br-int", "label": "tempest-network-smoke--1071758038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f8cf5bd-c7", "ovs_interfaceid": "3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:57:18 compute-0 nova_compute[253538]: 2025-11-25 08:57:18.423 253542 DEBUG oslo_concurrency.lockutils [req-d4f4634d-03f7-42ef-bec5-39ada549cd50 req-250aa808-e02c-412f-96d3-2db495da7499 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-43128a42-ed0f-42ff-8282-4ef978e7c43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:57:19 compute-0 ceph-mon[75015]: pgmap v2295: 321 pgs: 321 active+clean; 265 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 105 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.417 253542 DEBUG nova.compute.manager [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received event network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.418 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.418 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.418 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "43128a42-ed0f-42ff-8282-4ef978e7c43c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.418 253542 DEBUG nova.compute.manager [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] No waiting events found dispatching network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.419 253542 WARNING nova.compute.manager [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Received unexpected event network-vif-plugged-3f8cf5bd-c7bb-44c3-a8bf-a8dd8e78c2c6 for instance with vm_state deleted and task_state None.
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.419 253542 DEBUG nova.compute.manager [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.419 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.420 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.420 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.420 253542 DEBUG nova.compute.manager [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Processing event network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.420 253542 DEBUG nova.compute.manager [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.421 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.421 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.422 253542 DEBUG oslo_concurrency.lockutils [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.422 253542 DEBUG nova.compute.manager [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] No waiting events found dispatching network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.422 253542 WARNING nova.compute.manager [req-4edc396f-8d89-44a2-b43d-0265928419fa req-e0dafa2c-5ac2-405e-9d37-31c70b2b7358 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received unexpected event network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 for instance with vm_state building and task_state spawning.
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.423 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.427 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061039.427386, 9b511004-21d7-4867-aa46-4e7219827b6e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.428 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] VM Resumed (Lifecycle Event)
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.429 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.432 253542 INFO nova.virt.libvirt.driver [-] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Instance spawned successfully.
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.432 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.443 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.448 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.451 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.452 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.452 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.453 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.453 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.454 253542 DEBUG nova.virt.libvirt.driver [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.477 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.517 253542 INFO nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Took 9.98 seconds to spawn the instance on the hypervisor.
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.518 253542 DEBUG nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.592 253542 INFO nova.compute.manager [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Took 11.07 seconds to build instance.
Nov 25 08:57:19 compute-0 nova_compute[253538]: 2025-11-25 08:57:19.606 253542 DEBUG oslo_concurrency.lockutils [None req-37647ab5-bd9f-4491-8acb-d42f86c9642b 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:19 compute-0 podman[382350]: 2025-11-25 08:57:19.845393895 +0000 UTC m=+0.092356584 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 08:57:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 213 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Nov 25 08:57:20 compute-0 ceph-mon[75015]: pgmap v2296: 321 pgs: 321 active+clean; 213 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Nov 25 08:57:20 compute-0 nova_compute[253538]: 2025-11-25 08:57:20.672 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 213 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 25 08:57:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:57:22 compute-0 ovn_controller[152859]: 2025-11-25T08:57:22Z|01263|binding|INFO|Releasing lport f9837d3c-3aa7-48de-b240-549f6bf978b3 from this chassis (sb_readonly=0)
Nov 25 08:57:22 compute-0 ovn_controller[152859]: 2025-11-25T08:57:22Z|01264|binding|INFO|Releasing lport 6aef391e-f696-4074-917b-8b0c7a47b4b6 from this chassis (sb_readonly=0)
Nov 25 08:57:22 compute-0 nova_compute[253538]: 2025-11-25 08:57:22.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:22 compute-0 nova_compute[253538]: 2025-11-25 08:57:22.929 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:22 compute-0 ceph-mon[75015]: pgmap v2297: 321 pgs: 321 active+clean; 213 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 25 08:57:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:57:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:57:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:57:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:57:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:57:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:57:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 213 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 832 KiB/s wr, 98 op/s
Nov 25 08:57:24 compute-0 ceph-mon[75015]: pgmap v2298: 321 pgs: 321 active+clean; 213 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 832 KiB/s wr, 98 op/s
Nov 25 08:57:25 compute-0 nova_compute[253538]: 2025-11-25 08:57:25.709 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:25 compute-0 nova_compute[253538]: 2025-11-25 08:57:25.859 253542 DEBUG nova.compute.manager [req-dea0fe30-0dad-41c2-96bf-75a9e7339e2c req-5ba0c0d0-e8e3-49ef-8058-939ca920df48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-changed-88814764-016b-4232-82f8-72dbbb384932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:25 compute-0 nova_compute[253538]: 2025-11-25 08:57:25.860 253542 DEBUG nova.compute.manager [req-dea0fe30-0dad-41c2-96bf-75a9e7339e2c req-5ba0c0d0-e8e3-49ef-8058-939ca920df48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Refreshing instance network info cache due to event network-changed-88814764-016b-4232-82f8-72dbbb384932. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:57:25 compute-0 nova_compute[253538]: 2025-11-25 08:57:25.860 253542 DEBUG oslo_concurrency.lockutils [req-dea0fe30-0dad-41c2-96bf-75a9e7339e2c req-5ba0c0d0-e8e3-49ef-8058-939ca920df48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:57:25 compute-0 nova_compute[253538]: 2025-11-25 08:57:25.861 253542 DEBUG oslo_concurrency.lockutils [req-dea0fe30-0dad-41c2-96bf-75a9e7339e2c req-5ba0c0d0-e8e3-49ef-8058-939ca920df48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:57:25 compute-0 nova_compute[253538]: 2025-11-25 08:57:25.861 253542 DEBUG nova.network.neutron [req-dea0fe30-0dad-41c2-96bf-75a9e7339e2c req-5ba0c0d0-e8e3-49ef-8058-939ca920df48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Refreshing network info cache for port 88814764-016b-4232-82f8-72dbbb384932 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:57:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 213 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 337 KiB/s wr, 104 op/s
Nov 25 08:57:27 compute-0 ceph-mon[75015]: pgmap v2299: 321 pgs: 321 active+clean; 213 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 337 KiB/s wr, 104 op/s
Nov 25 08:57:27 compute-0 sshd-session[382376]: Received disconnect from 45.202.211.6 port 46388:11: Bye Bye [preauth]
Nov 25 08:57:27 compute-0 sshd-session[382376]: Disconnected from authenticating user root 45.202.211.6 port 46388 [preauth]
Nov 25 08:57:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:57:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 213 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 101 op/s
Nov 25 08:57:27 compute-0 nova_compute[253538]: 2025-11-25 08:57:27.951 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:57:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1336926226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:57:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:57:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1336926226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:57:29 compute-0 ceph-mon[75015]: pgmap v2300: 321 pgs: 321 active+clean; 213 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 101 op/s
Nov 25 08:57:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1336926226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:57:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1336926226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:57:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 213 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 80 op/s
Nov 25 08:57:30 compute-0 nova_compute[253538]: 2025-11-25 08:57:30.547 253542 DEBUG nova.network.neutron [req-dea0fe30-0dad-41c2-96bf-75a9e7339e2c req-5ba0c0d0-e8e3-49ef-8058-939ca920df48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Updated VIF entry in instance network info cache for port 88814764-016b-4232-82f8-72dbbb384932. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:57:30 compute-0 nova_compute[253538]: 2025-11-25 08:57:30.548 253542 DEBUG nova.network.neutron [req-dea0fe30-0dad-41c2-96bf-75a9e7339e2c req-5ba0c0d0-e8e3-49ef-8058-939ca920df48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Updating instance_info_cache with network_info: [{"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:57:30 compute-0 nova_compute[253538]: 2025-11-25 08:57:30.573 253542 DEBUG oslo_concurrency.lockutils [req-dea0fe30-0dad-41c2-96bf-75a9e7339e2c req-5ba0c0d0-e8e3-49ef-8058-939ca920df48 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:57:30 compute-0 nova_compute[253538]: 2025-11-25 08:57:30.646 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061035.6448607, 43128a42-ed0f-42ff-8282-4ef978e7c43c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:57:30 compute-0 nova_compute[253538]: 2025-11-25 08:57:30.646 253542 INFO nova.compute.manager [-] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] VM Stopped (Lifecycle Event)
Nov 25 08:57:30 compute-0 nova_compute[253538]: 2025-11-25 08:57:30.666 253542 DEBUG nova.compute.manager [None req-21b6eb11-ef14-4acb-b771-31350e23435b - - - - - -] [instance: 43128a42-ed0f-42ff-8282-4ef978e7c43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:57:30 compute-0 nova_compute[253538]: 2025-11-25 08:57:30.711 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:30 compute-0 nova_compute[253538]: 2025-11-25 08:57:30.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:31 compute-0 ceph-mon[75015]: pgmap v2301: 321 pgs: 321 active+clean; 213 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 80 op/s
Nov 25 08:57:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 218 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 485 KiB/s wr, 79 op/s
Nov 25 08:57:32 compute-0 ovn_controller[152859]: 2025-11-25T08:57:32Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4c:8a:98 10.100.0.4
Nov 25 08:57:32 compute-0 ovn_controller[152859]: 2025-11-25T08:57:32Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4c:8a:98 10.100.0.4
Nov 25 08:57:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:57:32 compute-0 nova_compute[253538]: 2025-11-25 08:57:32.953 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:33 compute-0 ceph-mon[75015]: pgmap v2302: 321 pgs: 321 active+clean; 218 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 485 KiB/s wr, 79 op/s
Nov 25 08:57:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 229 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 63 op/s
Nov 25 08:57:34 compute-0 nova_compute[253538]: 2025-11-25 08:57:34.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:57:35 compute-0 ceph-mon[75015]: pgmap v2303: 321 pgs: 321 active+clean; 229 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 63 op/s
Nov 25 08:57:35 compute-0 nova_compute[253538]: 2025-11-25 08:57:35.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:57:35 compute-0 nova_compute[253538]: 2025-11-25 08:57:35.561 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:35 compute-0 nova_compute[253538]: 2025-11-25 08:57:35.714 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 244 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 562 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 08:57:37 compute-0 ceph-mon[75015]: pgmap v2304: 321 pgs: 321 active+clean; 244 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 562 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 08:57:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:57:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 246 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 25 08:57:37 compute-0 nova_compute[253538]: 2025-11-25 08:57:37.956 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:39 compute-0 ceph-mon[75015]: pgmap v2305: 321 pgs: 321 active+clean; 246 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 25 08:57:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 246 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 25 08:57:40 compute-0 nova_compute[253538]: 2025-11-25 08:57:40.717 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:41.080 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:41.081 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:41.081 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:41 compute-0 ceph-mon[75015]: pgmap v2306: 321 pgs: 321 active+clean; 246 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 25 08:57:41 compute-0 nova_compute[253538]: 2025-11-25 08:57:41.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:57:41 compute-0 nova_compute[253538]: 2025-11-25 08:57:41.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:57:41 compute-0 nova_compute[253538]: 2025-11-25 08:57:41.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:57:41 compute-0 nova_compute[253538]: 2025-11-25 08:57:41.789 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:57:41 compute-0 nova_compute[253538]: 2025-11-25 08:57:41.789 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:57:41 compute-0 nova_compute[253538]: 2025-11-25 08:57:41.789 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 08:57:41 compute-0 nova_compute[253538]: 2025-11-25 08:57:41.790 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ce8c3428-f7e4-49aa-9978-faaf5d514663 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:57:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 246 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 25 08:57:42 compute-0 ceph-mon[75015]: pgmap v2307: 321 pgs: 321 active+clean; 246 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 25 08:57:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:57:42 compute-0 nova_compute[253538]: 2025-11-25 08:57:42.958 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.447 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.447 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.461 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.538 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.538 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.547 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.547 253542 INFO nova.compute.claims [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.629 253542 DEBUG nova.scheduler.client.report [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.654 253542 DEBUG nova.scheduler.client.report [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.655 253542 DEBUG nova.compute.provider_tree [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.671 253542 DEBUG nova.scheduler.client.report [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.704 253542 DEBUG nova.scheduler.client.report [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.737 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updating instance_info_cache with network_info: [{"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.756 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.756 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.757 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.758 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:57:43 compute-0 nova_compute[253538]: 2025-11-25 08:57:43.778 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:43 compute-0 podman[382378]: 2025-11-25 08:57:43.833347911 +0000 UTC m=+0.066160992 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 08:57:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 246 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 1.7 MiB/s wr, 50 op/s
Nov 25 08:57:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:57:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1974948934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.277 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.287 253542 DEBUG nova.compute.provider_tree [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.304 253542 DEBUG nova.scheduler.client.report [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.341 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.342 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.409 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.409 253542 DEBUG nova.network.neutron [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.431 253542 INFO nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.448 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.542 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.544 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.545 253542 INFO nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Creating image(s)
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.580 253542 DEBUG nova.storage.rbd_utils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.613 253542 DEBUG nova.storage.rbd_utils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.639 253542 DEBUG nova.storage.rbd_utils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.643 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.688 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.737 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.737 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.738 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.738 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.757 253542 DEBUG nova.storage.rbd_utils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:57:44 compute-0 nova_compute[253538]: 2025-11-25 08:57:44.761 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:44 compute-0 podman[382475]: 2025-11-25 08:57:44.804565346 +0000 UTC m=+0.058651797 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 08:57:45 compute-0 ceph-mon[75015]: pgmap v2308: 321 pgs: 321 active+clean; 246 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 1.7 MiB/s wr, 50 op/s
Nov 25 08:57:45 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1974948934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:57:45 compute-0 nova_compute[253538]: 2025-11-25 08:57:45.100 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:45 compute-0 nova_compute[253538]: 2025-11-25 08:57:45.131 253542 DEBUG nova.policy [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:57:45 compute-0 nova_compute[253538]: 2025-11-25 08:57:45.167 253542 DEBUG nova.storage.rbd_utils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:57:45 compute-0 sudo[382587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:57:45 compute-0 sudo[382587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:45 compute-0 sudo[382587]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:45 compute-0 nova_compute[253538]: 2025-11-25 08:57:45.286 253542 DEBUG nova.objects.instance [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 5e55aa91-2aa5-4443-b976-0f3e4409e8ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:57:45 compute-0 nova_compute[253538]: 2025-11-25 08:57:45.303 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:57:45 compute-0 nova_compute[253538]: 2025-11-25 08:57:45.304 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Ensure instance console log exists: /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:57:45 compute-0 nova_compute[253538]: 2025-11-25 08:57:45.305 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:45 compute-0 nova_compute[253538]: 2025-11-25 08:57:45.306 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:45 compute-0 nova_compute[253538]: 2025-11-25 08:57:45.306 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:45 compute-0 sudo[382630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:57:45 compute-0 sudo[382630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:45 compute-0 sudo[382630]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:45 compute-0 sudo[382655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:57:45 compute-0 sudo[382655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:45 compute-0 sudo[382655]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:45 compute-0 sudo[382680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:57:45 compute-0 sudo[382680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:45 compute-0 nova_compute[253538]: 2025-11-25 08:57:45.720 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:45 compute-0 nova_compute[253538]: 2025-11-25 08:57:45.870 253542 DEBUG nova.network.neutron [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Successfully created port: 027edfd6-09a6-4bf4-88df-8a19e59d1f72 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:57:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 246 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 1.1 MiB/s wr, 29 op/s
Nov 25 08:57:46 compute-0 sudo[382680]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:57:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:57:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:57:46 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:57:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:57:46 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:57:46 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev da6d54f2-07cd-4708-ae01-0afea329c729 does not exist
Nov 25 08:57:46 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 1b45df8e-ae1f-4507-8a28-ce5bfb36f342 does not exist
Nov 25 08:57:46 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev af0116fb-88eb-4d06-b211-2a39b9ea5135 does not exist
Nov 25 08:57:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:57:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:57:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:57:46 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:57:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:57:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:57:46 compute-0 sudo[382735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:57:46 compute-0 sudo[382735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:46 compute-0 sudo[382735]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:46 compute-0 sudo[382760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:57:46 compute-0 sudo[382760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:46 compute-0 sudo[382760]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:46 compute-0 sudo[382785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:57:46 compute-0 sudo[382785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:46 compute-0 sudo[382785]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:46 compute-0 sudo[382810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:57:46 compute-0 sudo[382810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:46 compute-0 nova_compute[253538]: 2025-11-25 08:57:46.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:57:46 compute-0 podman[382875]: 2025-11-25 08:57:46.90326911 +0000 UTC m=+0.068489865 container create 8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 08:57:46 compute-0 systemd[1]: Started libpod-conmon-8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9.scope.
Nov 25 08:57:46 compute-0 podman[382875]: 2025-11-25 08:57:46.87423427 +0000 UTC m=+0.039455075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:57:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:57:47 compute-0 ceph-mon[75015]: pgmap v2309: 321 pgs: 321 active+clean; 246 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 1.1 MiB/s wr, 29 op/s
Nov 25 08:57:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:57:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:57:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:57:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:57:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:57:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:57:47 compute-0 podman[382875]: 2025-11-25 08:57:47.022523176 +0000 UTC m=+0.187743981 container init 8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 08:57:47 compute-0 podman[382875]: 2025-11-25 08:57:47.036151497 +0000 UTC m=+0.201372262 container start 8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 08:57:47 compute-0 podman[382875]: 2025-11-25 08:57:47.041041321 +0000 UTC m=+0.206262106 container attach 8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_robinson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:57:47 compute-0 naughty_robinson[382891]: 167 167
Nov 25 08:57:47 compute-0 systemd[1]: libpod-8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9.scope: Deactivated successfully.
Nov 25 08:57:47 compute-0 conmon[382891]: conmon 8e361f66d041016362b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9.scope/container/memory.events
Nov 25 08:57:47 compute-0 podman[382875]: 2025-11-25 08:57:47.048553424 +0000 UTC m=+0.213774199 container died 8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_robinson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:57:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-92d12538079d78a05f067a0d6ade0c71e5266dfef5c0f8443eeb5ce42854d25d-merged.mount: Deactivated successfully.
Nov 25 08:57:47 compute-0 podman[382875]: 2025-11-25 08:57:47.101019593 +0000 UTC m=+0.266240348 container remove 8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 08:57:47 compute-0 systemd[1]: libpod-conmon-8e361f66d041016362b80f980d14f400c180dcfa80416f95cafac2ef38f43ae9.scope: Deactivated successfully.
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.164 253542 DEBUG nova.network.neutron [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Successfully updated port: 027edfd6-09a6-4bf4-88df-8a19e59d1f72 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.179 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.179 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.180 253542 DEBUG nova.network.neutron [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.256 253542 DEBUG nova.compute.manager [req-a1c77e16-d3d1-4c46-8c49-0585f19f0df7 req-59ef1be7-f012-4815-95d9-9aeecf551f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.257 253542 DEBUG nova.compute.manager [req-a1c77e16-d3d1-4c46-8c49-0585f19f0df7 req-59ef1be7-f012-4815-95d9-9aeecf551f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing instance network info cache due to event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.257 253542 DEBUG oslo_concurrency.lockutils [req-a1c77e16-d3d1-4c46-8c49-0585f19f0df7 req-59ef1be7-f012-4815-95d9-9aeecf551f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:57:47 compute-0 podman[382915]: 2025-11-25 08:57:47.319096779 +0000 UTC m=+0.048563153 container create f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.335 253542 DEBUG nova.network.neutron [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:57:47 compute-0 systemd[1]: Started libpod-conmon-f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b.scope.
Nov 25 08:57:47 compute-0 podman[382915]: 2025-11-25 08:57:47.298858458 +0000 UTC m=+0.028324882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:57:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be339c62705c97828622372a4f66b82f19ed5acb57a26246a94e18db53e50b85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be339c62705c97828622372a4f66b82f19ed5acb57a26246a94e18db53e50b85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be339c62705c97828622372a4f66b82f19ed5acb57a26246a94e18db53e50b85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be339c62705c97828622372a4f66b82f19ed5acb57a26246a94e18db53e50b85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be339c62705c97828622372a4f66b82f19ed5acb57a26246a94e18db53e50b85/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:57:47 compute-0 podman[382915]: 2025-11-25 08:57:47.409082868 +0000 UTC m=+0.138549262 container init f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:57:47 compute-0 podman[382915]: 2025-11-25 08:57:47.426385489 +0000 UTC m=+0.155851903 container start f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:57:47 compute-0 podman[382915]: 2025-11-25 08:57:47.431027655 +0000 UTC m=+0.160494049 container attach f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.575 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.576 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 263 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 914 KiB/s wr, 8 op/s
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.962 253542 DEBUG nova.compute.manager [req-a8edc7ba-f2f1-4885-a616-0ff86e2cfc07 req-3a478600-1659-4fdf-8c0a-cac6468c9d72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-changed-88814764-016b-4232-82f8-72dbbb384932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.963 253542 DEBUG nova.compute.manager [req-a8edc7ba-f2f1-4885-a616-0ff86e2cfc07 req-3a478600-1659-4fdf-8c0a-cac6468c9d72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Refreshing instance network info cache due to event network-changed-88814764-016b-4232-82f8-72dbbb384932. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.963 253542 DEBUG oslo_concurrency.lockutils [req-a8edc7ba-f2f1-4885-a616-0ff86e2cfc07 req-3a478600-1659-4fdf-8c0a-cac6468c9d72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.963 253542 DEBUG oslo_concurrency.lockutils [req-a8edc7ba-f2f1-4885-a616-0ff86e2cfc07 req-3a478600-1659-4fdf-8c0a-cac6468c9d72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.964 253542 DEBUG nova.network.neutron [req-a8edc7ba-f2f1-4885-a616-0ff86e2cfc07 req-3a478600-1659-4fdf-8c0a-cac6468c9d72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Refreshing network info cache for port 88814764-016b-4232-82f8-72dbbb384932 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:57:47 compute-0 nova_compute[253538]: 2025-11-25 08:57:47.965 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:57:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/166584326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:57:48 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/166584326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.028 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.094 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "9b511004-21d7-4867-aa46-4e7219827b6e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.096 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.096 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.097 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.097 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.100 253542 INFO nova.compute.manager [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Terminating instance
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.102 253542 DEBUG nova.compute.manager [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:57:48 compute-0 kernel: tap88814764-01 (unregistering): left promiscuous mode
Nov 25 08:57:48 compute-0 NetworkManager[48915]: <info>  [1764061068.1998] device (tap88814764-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:57:48 compute-0 ovn_controller[152859]: 2025-11-25T08:57:48Z|01265|binding|INFO|Releasing lport 88814764-016b-4232-82f8-72dbbb384932 from this chassis (sb_readonly=0)
Nov 25 08:57:48 compute-0 ovn_controller[152859]: 2025-11-25T08:57:48Z|01266|binding|INFO|Setting lport 88814764-016b-4232-82f8-72dbbb384932 down in Southbound
Nov 25 08:57:48 compute-0 ovn_controller[152859]: 2025-11-25T08:57:48Z|01267|binding|INFO|Removing iface tap88814764-01 ovn-installed in OVS
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.249 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:48.259 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:8a:98 10.100.0.4'], port_security=['fa:16:3e:4c:8a:98 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9b511004-21d7-4867-aa46-4e7219827b6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55543b53-ee52-41ca-ba2e-341088afdcaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54ab33f9507e43fca43c45e6fc57f565', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6831af45-79a5-4ed4-afa6-6b43609f2269 cfebd55d-d59e-4c7b-966a-1919c4910d21', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90ba94bc-a7de-4a4f-bb6b-69f6919bb708, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=88814764-016b-4232-82f8-72dbbb384932) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:57:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:48.260 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 88814764-016b-4232-82f8-72dbbb384932 in datapath 55543b53-ee52-41ca-ba2e-341088afdcaa unbound from our chassis
Nov 25 08:57:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:48.261 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55543b53-ee52-41ca-ba2e-341088afdcaa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:57:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:48.263 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9df05a-b3a4-4b13-88ff-20b0a3ad7849]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:48.263 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa namespace which is not needed anymore
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.268 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:48 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Nov 25 08:57:48 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d0000007a.scope: Consumed 13.510s CPU time.
Nov 25 08:57:48 compute-0 systemd-machined[215790]: Machine qemu-152-instance-0000007a terminated.
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.323 253542 DEBUG nova.network.neutron [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updating instance_info_cache with network_info: [{"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.341 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.341 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Instance network_info: |[{"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.342 253542 DEBUG oslo_concurrency.lockutils [req-a1c77e16-d3d1-4c46-8c49-0585f19f0df7 req-59ef1be7-f012-4815-95d9-9aeecf551f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.342 253542 DEBUG nova.network.neutron [req-a1c77e16-d3d1-4c46-8c49-0585f19f0df7 req-59ef1be7-f012-4815-95d9-9aeecf551f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.346 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Start _get_guest_xml network_info=[{"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.351 253542 WARNING nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.356 253542 DEBUG nova.virt.libvirt.host [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.357 253542 DEBUG nova.virt.libvirt.host [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.360 253542 DEBUG nova.virt.libvirt.host [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.360 253542 DEBUG nova.virt.libvirt.host [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.361 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.361 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.362 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.362 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.362 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.363 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.363 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.363 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.363 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.364 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.364 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.364 253542 DEBUG nova.virt.hardware [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.369 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:48 compute-0 neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa[382314]: [NOTICE]   (382318) : haproxy version is 2.8.14-c23fe91
Nov 25 08:57:48 compute-0 neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa[382314]: [NOTICE]   (382318) : path to executable is /usr/sbin/haproxy
Nov 25 08:57:48 compute-0 neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa[382314]: [WARNING]  (382318) : Exiting Master process...
Nov 25 08:57:48 compute-0 neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa[382314]: [ALERT]    (382318) : Current worker (382330) exited with code 143 (Terminated)
Nov 25 08:57:48 compute-0 neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa[382314]: [WARNING]  (382318) : All workers exited. Exiting... (0)
Nov 25 08:57:48 compute-0 systemd[1]: libpod-c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866.scope: Deactivated successfully.
Nov 25 08:57:48 compute-0 podman[382997]: 2025-11-25 08:57:48.47132333 +0000 UTC m=+0.093120765 container died c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:57:48 compute-0 vibrant_shockley[382932]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:57:48 compute-0 vibrant_shockley[382932]: --> relative data size: 1.0
Nov 25 08:57:48 compute-0 vibrant_shockley[382932]: --> All data devices are unavailable
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.560 253542 INFO nova.virt.libvirt.driver [-] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Instance destroyed successfully.
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.561 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.565 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.566 253542 DEBUG nova.objects.instance [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lazy-loading 'resources' on Instance uuid 9b511004-21d7-4867-aa46-4e7219827b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:57:48 compute-0 systemd[1]: libpod-f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b.scope: Deactivated successfully.
Nov 25 08:57:48 compute-0 systemd[1]: libpod-f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b.scope: Consumed 1.040s CPU time.
Nov 25 08:57:48 compute-0 conmon[382932]: conmon f751eb85115f95617175 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b.scope/container/memory.events
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.575 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.576 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.581 253542 DEBUG nova.virt.libvirt.vif [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:57:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-429479529-access_point-266233725',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-429479529-access_point-266233725',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-429479529-acc',id=122,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOtszPERwqLbWIYPkzCz4rQadM31V2GuDIdwxeEnuQlWaJ4QslduyQHZMa0L1KML6aXdq0ZmWZAYQ/HsmjaUOuAcVDO3uLPU+Gh2V/Iwtg0WToybeZDWhsFovRGb4ZBh3Q==',key_name='tempest-TestSecurityGroupsBasicOps-652156064',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:57:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='54ab33f9507e43fca43c45e6fc57f565',ramdisk_id='',reservation_id='r-pvks0odp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-429479529',owner_user_name='tempest-TestSecurityGroupsBasicOps-429479529-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:57:19Z,user_data=None,user_id='8ce5e2935141427a90707c14e4a73ad9',uuid=9b511004-21d7-4867-aa46-4e7219827b6e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.581 253542 DEBUG nova.network.os_vif_util [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Converting VIF {"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.583 253542 DEBUG nova.network.os_vif_util [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4c:8a:98,bridge_name='br-int',has_traffic_filtering=True,id=88814764-016b-4232-82f8-72dbbb384932,network=Network(55543b53-ee52-41ca-ba2e-341088afdcaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88814764-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.583 253542 DEBUG os_vif [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4c:8a:98,bridge_name='br-int',has_traffic_filtering=True,id=88814764-016b-4232-82f8-72dbbb384932,network=Network(55543b53-ee52-41ca-ba2e-341088afdcaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88814764-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.586 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88814764-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.591 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.594 253542 INFO os_vif [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4c:8a:98,bridge_name='br-int',has_traffic_filtering=True,id=88814764-016b-4232-82f8-72dbbb384932,network=Network(55543b53-ee52-41ca-ba2e-341088afdcaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88814764-01')
Nov 25 08:57:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866-userdata-shm.mount: Deactivated successfully.
Nov 25 08:57:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf8b26fcd98a92c913e834a912a7b97668705f1f438f00d41eb5de431fe4dbaf-merged.mount: Deactivated successfully.
Nov 25 08:57:48 compute-0 podman[383065]: 2025-11-25 08:57:48.635233612 +0000 UTC m=+0.037327857 container died f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.813 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.815 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3304MB free_disk=59.89720153808594GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.815 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.815 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:57:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1631145107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.864 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.891 253542 DEBUG nova.storage.rbd_utils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.896 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.941 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance ce8c3428-f7e4-49aa-9978-faaf5d514663 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.942 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 9b511004-21d7-4867-aa46-4e7219827b6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.942 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 5e55aa91-2aa5-4443-b976-0f3e4409e8ec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.943 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:57:48 compute-0 nova_compute[253538]: 2025-11-25 08:57:48.943 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:57:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-be339c62705c97828622372a4f66b82f19ed5acb57a26246a94e18db53e50b85-merged.mount: Deactivated successfully.
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.038 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:49 compute-0 ceph-mon[75015]: pgmap v2310: 321 pgs: 321 active+clean; 263 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 914 KiB/s wr, 8 op/s
Nov 25 08:57:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1631145107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:57:49 compute-0 podman[383065]: 2025-11-25 08:57:49.239776796 +0000 UTC m=+0.641871051 container remove f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 08:57:49 compute-0 systemd[1]: libpod-conmon-f751eb85115f956171758b6641bf133fce1dad3ab8f30a809ff9c3fc17a1046b.scope: Deactivated successfully.
Nov 25 08:57:49 compute-0 sudo[382810]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:57:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/907602818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.359 253542 DEBUG nova.compute.manager [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-vif-unplugged-88814764-016b-4232-82f8-72dbbb384932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.360 253542 DEBUG oslo_concurrency.lockutils [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.360 253542 DEBUG oslo_concurrency.lockutils [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.360 253542 DEBUG oslo_concurrency.lockutils [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.360 253542 DEBUG nova.compute.manager [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] No waiting events found dispatching network-vif-unplugged-88814764-016b-4232-82f8-72dbbb384932 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.360 253542 DEBUG nova.compute.manager [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-vif-unplugged-88814764-016b-4232-82f8-72dbbb384932 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.361 253542 DEBUG nova.compute.manager [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.361 253542 DEBUG oslo_concurrency.lockutils [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.361 253542 DEBUG oslo_concurrency.lockutils [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.361 253542 DEBUG oslo_concurrency.lockutils [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.361 253542 DEBUG nova.compute.manager [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] No waiting events found dispatching network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.361 253542 WARNING nova.compute.manager [req-4baa14cc-8b90-4169-84cd-e5556b6b837b req-ada0a325-e442-4080-9ba9-f2f76d2dbc92 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received unexpected event network-vif-plugged-88814764-016b-4232-82f8-72dbbb384932 for instance with vm_state active and task_state deleting.
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.362 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.363 253542 DEBUG nova.virt.libvirt.vif [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:57:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2129911111',display_name='tempest-TestNetworkBasicOps-server-2129911111',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2129911111',id=123,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN+yQyEQntLRhVgjlecOd8kZgcuJccGeN4XUmVTtZkXatG89jriyrNCx89aMj4+ppyzZUWi3hVDPwYltwxWsUBkgwfbxG4JDKfBHeP0jrr3H+wCGmTdCkbNarpgdJwyag==',key_name='tempest-TestNetworkBasicOps-619601627',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-vzp7o245',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:57:44Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=5e55aa91-2aa5-4443-b976-0f3e4409e8ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.363 253542 DEBUG nova.network.os_vif_util [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.364 253542 DEBUG nova.network.os_vif_util [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e8:8d,bridge_name='br-int',has_traffic_filtering=True,id=027edfd6-09a6-4bf4-88df-8a19e59d1f72,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap027edfd6-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.366 253542 DEBUG nova.objects.instance [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e55aa91-2aa5-4443-b976-0f3e4409e8ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:57:49 compute-0 podman[382997]: 2025-11-25 08:57:49.369849377 +0000 UTC m=+0.991646802 container cleanup c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:57:49 compute-0 systemd[1]: libpod-conmon-c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866.scope: Deactivated successfully.
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.388 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:57:49 compute-0 nova_compute[253538]:   <uuid>5e55aa91-2aa5-4443-b976-0f3e4409e8ec</uuid>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   <name>instance-0000007b</name>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkBasicOps-server-2129911111</nova:name>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:57:48</nova:creationTime>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:57:49 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:57:49 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:57:49 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:57:49 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:57:49 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:57:49 compute-0 nova_compute[253538]:         <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:57:49 compute-0 nova_compute[253538]:         <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:57:49 compute-0 nova_compute[253538]:         <nova:port uuid="027edfd6-09a6-4bf4-88df-8a19e59d1f72">
Nov 25 08:57:49 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <system>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <entry name="serial">5e55aa91-2aa5-4443-b976-0f3e4409e8ec</entry>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <entry name="uuid">5e55aa91-2aa5-4443-b976-0f3e4409e8ec</entry>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     </system>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   <os>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   </os>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   <features>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   </features>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk">
Nov 25 08:57:49 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       </source>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:57:49 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk.config">
Nov 25 08:57:49 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       </source>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:57:49 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:d9:e8:8d"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <target dev="tap027edfd6-09"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec/console.log" append="off"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <video>
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     </video>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:57:49 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:57:49 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:57:49 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:57:49 compute-0 nova_compute[253538]: </domain>
Nov 25 08:57:49 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.389 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Preparing to wait for external event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.389 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.389 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.389 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.390 253542 DEBUG nova.virt.libvirt.vif [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:57:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2129911111',display_name='tempest-TestNetworkBasicOps-server-2129911111',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2129911111',id=123,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN+yQyEQntLRhVgjlecOd8kZgcuJccGeN4XUmVTtZkXatG89jriyrNCx89aMj4+ppyzZUWi3hVDPwYltwxWsUBkgwfbxG4JDKfBHeP0jrr3H+wCGmTdCkbNarpgdJwyag==',key_name='tempest-TestNetworkBasicOps-619601627',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-vzp7o245',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:57:44Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=5e55aa91-2aa5-4443-b976-0f3e4409e8ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.390 253542 DEBUG nova.network.os_vif_util [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.391 253542 DEBUG nova.network.os_vif_util [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e8:8d,bridge_name='br-int',has_traffic_filtering=True,id=027edfd6-09a6-4bf4-88df-8a19e59d1f72,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap027edfd6-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.391 253542 DEBUG os_vif [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e8:8d,bridge_name='br-int',has_traffic_filtering=True,id=027edfd6-09a6-4bf4-88df-8a19e59d1f72,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap027edfd6-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.391 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.392 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.393 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.395 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.396 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap027edfd6-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.396 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap027edfd6-09, col_values=(('external_ids', {'iface-id': '027edfd6-09a6-4bf4-88df-8a19e59d1f72', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:e8:8d', 'vm-uuid': '5e55aa91-2aa5-4443-b976-0f3e4409e8ec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.435 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:49 compute-0 sudo[383160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:57:49 compute-0 NetworkManager[48915]: <info>  [1764061069.4365] manager: (tap027edfd6-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/520)
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.438 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:57:49 compute-0 sudo[383160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.446 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.447 253542 INFO os_vif [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:e8:8d,bridge_name='br-int',has_traffic_filtering=True,id=027edfd6-09a6-4bf4-88df-8a19e59d1f72,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap027edfd6-09')
Nov 25 08:57:49 compute-0 sudo[383160]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:49 compute-0 podman[383186]: 2025-11-25 08:57:49.497302916 +0000 UTC m=+0.062257976 container remove c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:57:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.513 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[66b62e78-ac29-4494-958c-0730975c4091]: (4, ('Tue Nov 25 08:57:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa (c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866)\nc5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866\nTue Nov 25 08:57:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa (c5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866)\nc5711778772d41910161078c587892ac983a62ecffb93e967a20e59df055e866\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.518 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fdb31cc7-3921-4c70-af80-756665af7c9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.517 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.518 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.519 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:d9:e8:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:57:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.519 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55543b53-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.520 253542 INFO nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Using config drive
Nov 25 08:57:49 compute-0 kernel: tap55543b53-e0: left promiscuous mode
Nov 25 08:57:49 compute-0 sudo[383199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:57:49 compute-0 sudo[383199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:49 compute-0 sudo[383199]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:57:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2483430628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:57:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.549 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d8da167f-2431-4999-bc12-7027a94d1d66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.561 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dc869958-aadb-4fa6-b795-53289a1764e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.563 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d7ea93-28d6-40f3-a507-45e964c9fe39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.571 253542 DEBUG nova.storage.rbd_utils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.578 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.580 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c0128292-54d0-44d3-b727-16dfba4c7e7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639137, 'reachable_time': 27178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383258, 'error': None, 'target': 'ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.584 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-55543b53-ee52-41ca-ba2e-341088afdcaa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:57:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:49.584 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[ca7a18cf-b1be-4921-aaf4-bc6202e837d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.584 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:57:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d55543b53\x2dee52\x2d41ca\x2dba2e\x2d341088afdcaa.mount: Deactivated successfully.
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.601 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:57:49 compute-0 sudo[383244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:57:49 compute-0 sudo[383244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:49 compute-0 sudo[383244]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.624 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.625 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:49 compute-0 sudo[383279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:57:49 compute-0 sudo[383279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.754 253542 INFO nova.virt.libvirt.driver [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Deleting instance files /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e_del
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.755 253542 INFO nova.virt.libvirt.driver [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Deletion of /var/lib/nova/instances/9b511004-21d7-4867-aa46-4e7219827b6e_del complete
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.807 253542 INFO nova.compute.manager [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Took 1.70 seconds to destroy the instance on the hypervisor.
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.808 253542 DEBUG oslo.service.loopingcall [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.808 253542 DEBUG nova.compute.manager [-] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:57:49 compute-0 nova_compute[253538]: 2025-11-25 08:57:49.809 253542 DEBUG nova.network.neutron [-] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:57:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 265 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 25 08:57:50 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/907602818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:57:50 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2483430628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:57:50 compute-0 podman[383345]: 2025-11-25 08:57:50.127654663 +0000 UTC m=+0.068961797 container create 2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 08:57:50 compute-0 systemd[1]: Started libpod-conmon-2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2.scope.
Nov 25 08:57:50 compute-0 podman[383345]: 2025-11-25 08:57:50.098893151 +0000 UTC m=+0.040200325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.196 253542 DEBUG nova.network.neutron [req-a8edc7ba-f2f1-4885-a616-0ff86e2cfc07 req-3a478600-1659-4fdf-8c0a-cac6468c9d72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Updated VIF entry in instance network info cache for port 88814764-016b-4232-82f8-72dbbb384932. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.197 253542 DEBUG nova.network.neutron [req-a8edc7ba-f2f1-4885-a616-0ff86e2cfc07 req-3a478600-1659-4fdf-8c0a-cac6468c9d72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Updating instance_info_cache with network_info: [{"id": "88814764-016b-4232-82f8-72dbbb384932", "address": "fa:16:3e:4c:8a:98", "network": {"id": "55543b53-ee52-41ca-ba2e-341088afdcaa", "bridge": "br-int", "label": "tempest-network-smoke--246766377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54ab33f9507e43fca43c45e6fc57f565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88814764-01", "ovs_interfaceid": "88814764-016b-4232-82f8-72dbbb384932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:57:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.222 253542 DEBUG oslo_concurrency.lockutils [req-a8edc7ba-f2f1-4885-a616-0ff86e2cfc07 req-3a478600-1659-4fdf-8c0a-cac6468c9d72 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9b511004-21d7-4867-aa46-4e7219827b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:57:50 compute-0 podman[383345]: 2025-11-25 08:57:50.227965054 +0000 UTC m=+0.169272248 container init 2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:57:50 compute-0 podman[383345]: 2025-11-25 08:57:50.24028269 +0000 UTC m=+0.181589824 container start 2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 08:57:50 compute-0 podman[383345]: 2025-11-25 08:57:50.245046389 +0000 UTC m=+0.186353573 container attach 2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:57:50 compute-0 eloquent_kirch[383362]: 167 167
Nov 25 08:57:50 compute-0 systemd[1]: libpod-2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2.scope: Deactivated successfully.
Nov 25 08:57:50 compute-0 podman[383345]: 2025-11-25 08:57:50.249291255 +0000 UTC m=+0.190598349 container died 2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.277 253542 INFO nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Creating config drive at /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec/disk.config
Nov 25 08:57:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-075fa444125eae345e8aaadbfeb4286d847d6d7fe6ecee7692abf608b566dc29-merged.mount: Deactivated successfully.
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.287 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpudr_f3iy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:50 compute-0 podman[383345]: 2025-11-25 08:57:50.30238818 +0000 UTC m=+0.243695304 container remove 2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:57:50 compute-0 systemd[1]: libpod-conmon-2b8407cd436740bc0ed15dea868d88ebbc48ac0a7d0546bae4a456e7410425c2.scope: Deactivated successfully.
Nov 25 08:57:50 compute-0 podman[383359]: 2025-11-25 08:57:50.328197432 +0000 UTC m=+0.145024888 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller)
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.441 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpudr_f3iy" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.482 253542 DEBUG nova.storage.rbd_utils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.486 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec/disk.config 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.526 253542 DEBUG nova.network.neutron [req-a1c77e16-d3d1-4c46-8c49-0585f19f0df7 req-59ef1be7-f012-4815-95d9-9aeecf551f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updated VIF entry in instance network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.528 253542 DEBUG nova.network.neutron [req-a1c77e16-d3d1-4c46-8c49-0585f19f0df7 req-59ef1be7-f012-4815-95d9-9aeecf551f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updating instance_info_cache with network_info: [{"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.545 253542 DEBUG oslo_concurrency.lockutils [req-a1c77e16-d3d1-4c46-8c49-0585f19f0df7 req-59ef1be7-f012-4815-95d9-9aeecf551f31 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:57:50 compute-0 podman[383416]: 2025-11-25 08:57:50.547401348 +0000 UTC m=+0.063610751 container create fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hopper, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 08:57:50 compute-0 systemd[1]: Started libpod-conmon-fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62.scope.
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.609 253542 DEBUG nova.network.neutron [-] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:57:50 compute-0 podman[383416]: 2025-11-25 08:57:50.518858682 +0000 UTC m=+0.035068105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.625 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.627 253542 INFO nova.compute.manager [-] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Took 0.82 seconds to deallocate network for instance.
Nov 25 08:57:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f97c5d228d9de2f80ae6d93692016dc66f50aef332a35e2aeb7414bd19a657f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f97c5d228d9de2f80ae6d93692016dc66f50aef332a35e2aeb7414bd19a657f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f97c5d228d9de2f80ae6d93692016dc66f50aef332a35e2aeb7414bd19a657f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f97c5d228d9de2f80ae6d93692016dc66f50aef332a35e2aeb7414bd19a657f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:57:50 compute-0 podman[383416]: 2025-11-25 08:57:50.670622292 +0000 UTC m=+0.186831695 container init fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hopper, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.677 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.678 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:50 compute-0 podman[383416]: 2025-11-25 08:57:50.687954804 +0000 UTC m=+0.204164187 container start fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 08:57:50 compute-0 podman[383416]: 2025-11-25 08:57:50.691789409 +0000 UTC m=+0.207998802 container attach fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hopper, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.694 253542 DEBUG oslo_concurrency.processutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec/disk.config 5e55aa91-2aa5-4443-b976-0f3e4409e8ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.694 253542 INFO nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Deleting local config drive /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec/disk.config because it was imported into RBD.
Nov 25 08:57:50 compute-0 kernel: tap027edfd6-09: entered promiscuous mode
Nov 25 08:57:50 compute-0 NetworkManager[48915]: <info>  [1764061070.7600] manager: (tap027edfd6-09): new Tun device (/org/freedesktop/NetworkManager/Devices/521)
Nov 25 08:57:50 compute-0 systemd-udevd[382970]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:57:50 compute-0 ovn_controller[152859]: 2025-11-25T08:57:50Z|01268|binding|INFO|Claiming lport 027edfd6-09a6-4bf4-88df-8a19e59d1f72 for this chassis.
Nov 25 08:57:50 compute-0 ovn_controller[152859]: 2025-11-25T08:57:50Z|01269|binding|INFO|027edfd6-09a6-4bf4-88df-8a19e59d1f72: Claiming fa:16:3e:d9:e8:8d 10.100.0.9
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.766 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.771 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:e8:8d 10.100.0.9'], port_security=['fa:16:3e:d9:e8:8d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5e55aa91-2aa5-4443-b976-0f3e4409e8ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf619d00-d285-4b9e-9996-77997075375e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d7a9182-641d-4ca5-a3ab-361222a77391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8f4cf68-54bc-4d0b-a896-1ef280f60a0a, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=027edfd6-09a6-4bf4-88df-8a19e59d1f72) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.772 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 027edfd6-09a6-4bf4-88df-8a19e59d1f72 in datapath bf619d00-d285-4b9e-9996-77997075375e bound to our chassis
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.774 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf619d00-d285-4b9e-9996-77997075375e
Nov 25 08:57:50 compute-0 NetworkManager[48915]: <info>  [1764061070.7797] device (tap027edfd6-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:57:50 compute-0 NetworkManager[48915]: <info>  [1764061070.7806] device (tap027edfd6-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.785 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[123d1eee-e45b-4efa-9277-8e97f9cc2b0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.786 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbf619d00-d1 in ovnmeta-bf619d00-d285-4b9e-9996-77997075375e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.790 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbf619d00-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.790 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ee34ec05-7c7b-46ac-8a71-8afa3af24a74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:50 compute-0 ovn_controller[152859]: 2025-11-25T08:57:50Z|01270|binding|INFO|Setting lport 027edfd6-09a6-4bf4-88df-8a19e59d1f72 ovn-installed in OVS
Nov 25 08:57:50 compute-0 ovn_controller[152859]: 2025-11-25T08:57:50Z|01271|binding|INFO|Setting lport 027edfd6-09a6-4bf4-88df-8a19e59d1f72 up in Southbound
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.795 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8db51c1b-de41-4681-b898-1a64a232cdd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.800 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:50 compute-0 nova_compute[253538]: 2025-11-25 08:57:50.802 253542 DEBUG oslo_concurrency.processutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.811 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d91435ee-afae-443d-a0cf-922be8f24858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:50 compute-0 systemd-machined[215790]: New machine qemu-153-instance-0000007b.
Nov 25 08:57:50 compute-0 systemd[1]: Started Virtual Machine qemu-153-instance-0000007b.
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.828 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1b869b56-a1ff-427d-8b6f-11e83878f6a7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.859 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[727fae92-6d8e-4829-b9a7-f42c6480cb6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:50 compute-0 NetworkManager[48915]: <info>  [1764061070.8657] manager: (tapbf619d00-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/522)
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.864 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[03a96821-fad6-467d-9d93-c98e32abe30e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.907 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[be5267df-9d08-408a-b8e5-d0b3c5084766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.911 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[50965635-bc0d-4938-9e48-9c7e3529ab92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:50 compute-0 NetworkManager[48915]: <info>  [1764061070.9336] device (tapbf619d00-d0): carrier: link connected
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.941 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d84d9e-2db5-4fe0-a5a4-2b8ce5dc51b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.961 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[af6e1ae3-dcf3-4ca6-884b-3c3db040d4ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf619d00-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:be:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 369], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642553, 'reachable_time': 19033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383534, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.980 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8833099c-a374-4d65-a971-6da3dfddeb88]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:be0b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642553, 'tstamp': 642553}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383535, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:50.997 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0919a9-f5ee-4501-a1b8-b681fc7e4c19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf619d00-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:be:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 369], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642553, 'reachable_time': 19033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383536, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.033 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4023bd-1e31-4621-9040-5caf113bd05c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:51 compute-0 ceph-mon[75015]: pgmap v2311: 321 pgs: 321 active+clean; 265 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.099 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[47b8491b-7b3f-4647-9eb4-912ba65a1ad9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.101 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf619d00-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.101 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.101 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf619d00-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:51 compute-0 kernel: tapbf619d00-d0: entered promiscuous mode
Nov 25 08:57:51 compute-0 NetworkManager[48915]: <info>  [1764061071.1044] manager: (tapbf619d00-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/523)
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.104 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.113 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf619d00-d0, col_values=(('external_ids', {'iface-id': 'c544ed70-c59f-4fbe-97c5-a521f548f971'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:51 compute-0 ovn_controller[152859]: 2025-11-25T08:57:51Z|01272|binding|INFO|Releasing lport c544ed70-c59f-4fbe-97c5-a521f548f971 from this chassis (sb_readonly=0)
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.116 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.140 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.143 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bf619d00-d285-4b9e-9996-77997075375e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bf619d00-d285-4b9e-9996-77997075375e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.144 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[90239e25-1c6b-424b-8b44-bbe4a547583f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.144 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-bf619d00-d285-4b9e-9996-77997075375e
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/bf619d00-d285-4b9e-9996-77997075375e.pid.haproxy
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID bf619d00-d285-4b9e-9996-77997075375e
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:57:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:51.145 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'env', 'PROCESS_TAG=haproxy-bf619d00-d285-4b9e-9996-77997075375e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bf619d00-d285-4b9e-9996-77997075375e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:57:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:57:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3521971948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.266 253542 DEBUG oslo_concurrency.processutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.273 253542 DEBUG nova.compute.provider_tree [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.294 253542 DEBUG nova.scheduler.client.report [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.319 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.331 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061071.3305693, 5e55aa91-2aa5-4443-b976-0f3e4409e8ec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.332 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] VM Started (Lifecycle Event)
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.349 253542 INFO nova.scheduler.client.report [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Deleted allocations for instance 9b511004-21d7-4867-aa46-4e7219827b6e
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.351 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.355 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061071.335, 5e55aa91-2aa5-4443-b976-0f3e4409e8ec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.356 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] VM Paused (Lifecycle Event)
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.395 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.404 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.432 253542 DEBUG nova.compute.manager [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Received event network-vif-deleted-88814764-016b-4232-82f8-72dbbb384932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.433 253542 DEBUG nova.compute.manager [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.433 253542 DEBUG oslo_concurrency.lockutils [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.434 253542 DEBUG oslo_concurrency.lockutils [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.434 253542 DEBUG oslo_concurrency.lockutils [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.435 253542 DEBUG nova.compute.manager [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Processing event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.436 253542 DEBUG nova.compute.manager [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.437 253542 DEBUG oslo_concurrency.lockutils [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.437 253542 DEBUG oslo_concurrency.lockutils [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.438 253542 DEBUG oslo_concurrency.lockutils [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.438 253542 DEBUG nova.compute.manager [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] No waiting events found dispatching network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.439 253542 WARNING nova.compute.manager [req-eceda9a1-6a77-4ba2-b59e-e77d47be816b req-f2dd6384-26ed-4de7-8855-8e03bf514d7c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received unexpected event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 for instance with vm_state building and task_state spawning.
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.441 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.442 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.447 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061071.4468732, 5e55aa91-2aa5-4443-b976-0f3e4409e8ec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.447 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] VM Resumed (Lifecycle Event)
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.455 253542 DEBUG oslo_concurrency.lockutils [None req-8b77574d-7777-4a2e-a943-44bc36002e5e 8ce5e2935141427a90707c14e4a73ad9 54ab33f9507e43fca43c45e6fc57f565 - - default default] Lock "9b511004-21d7-4867-aa46-4e7219827b6e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.459 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.466 253542 INFO nova.virt.libvirt.driver [-] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Instance spawned successfully.
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.468 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.471 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.473 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.491 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.492 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.492 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.492 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.493 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.493 253542 DEBUG nova.virt.libvirt.driver [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.498 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:57:51 compute-0 youthful_hopper[383462]: {
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:     "0": [
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:         {
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "devices": [
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "/dev/loop3"
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             ],
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "lv_name": "ceph_lv0",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "lv_size": "21470642176",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "name": "ceph_lv0",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "tags": {
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.cluster_name": "ceph",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.crush_device_class": "",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.encrypted": "0",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.osd_id": "0",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.type": "block",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.vdo": "0"
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             },
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "type": "block",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "vg_name": "ceph_vg0"
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:         }
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:     ],
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:     "1": [
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:         {
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "devices": [
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "/dev/loop4"
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             ],
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "lv_name": "ceph_lv1",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "lv_size": "21470642176",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "name": "ceph_lv1",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "tags": {
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.cluster_name": "ceph",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.crush_device_class": "",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.encrypted": "0",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.osd_id": "1",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.type": "block",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.vdo": "0"
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             },
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "type": "block",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "vg_name": "ceph_vg1"
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:         }
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:     ],
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:     "2": [
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:         {
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "devices": [
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "/dev/loop5"
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             ],
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "lv_name": "ceph_lv2",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "lv_size": "21470642176",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "name": "ceph_lv2",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "tags": {
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.cluster_name": "ceph",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.crush_device_class": "",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.encrypted": "0",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.osd_id": "2",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.type": "block",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:                 "ceph.vdo": "0"
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             },
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "type": "block",
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:             "vg_name": "ceph_vg2"
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:         }
Nov 25 08:57:51 compute-0 youthful_hopper[383462]:     ]
Nov 25 08:57:51 compute-0 youthful_hopper[383462]: }
Nov 25 08:57:51 compute-0 podman[383617]: 2025-11-25 08:57:51.535468302 +0000 UTC m=+0.046821075 container create 2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 08:57:51 compute-0 systemd[1]: libpod-fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62.scope: Deactivated successfully.
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.546 253542 INFO nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Took 7.00 seconds to spawn the instance on the hypervisor.
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.547 253542 DEBUG nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:57:51 compute-0 podman[383416]: 2025-11-25 08:57:51.550371018 +0000 UTC m=+1.066580421 container died fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hopper, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:57:51 compute-0 systemd[1]: Started libpod-conmon-2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94.scope.
Nov 25 08:57:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f97c5d228d9de2f80ae6d93692016dc66f50aef332a35e2aeb7414bd19a657f6-merged.mount: Deactivated successfully.
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.596 253542 INFO nova.compute.manager [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Took 8.09 seconds to build instance.
Nov 25 08:57:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:57:51 compute-0 podman[383617]: 2025-11-25 08:57:51.510795861 +0000 UTC m=+0.022148654 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4540958df62a9ce7e0b75ff06f1bfd795b20e2ca8437d0035d39c10d955612c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:57:51 compute-0 nova_compute[253538]: 2025-11-25 08:57:51.614 253542 DEBUG oslo_concurrency.lockutils [None req-f20549d3-bfb1-4bd8-978c-3adeec9d7569 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:51 compute-0 podman[383617]: 2025-11-25 08:57:51.621857024 +0000 UTC m=+0.133209817 container init 2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:57:51 compute-0 podman[383617]: 2025-11-25 08:57:51.630647143 +0000 UTC m=+0.141999906 container start 2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 08:57:51 compute-0 podman[383416]: 2025-11-25 08:57:51.63237181 +0000 UTC m=+1.148581203 container remove fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hopper, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:57:51 compute-0 systemd[1]: libpod-conmon-fcbbb70a0d243168e8771fe6868df6d0b278b52f5ada96bdc58f06af7f89cf62.scope: Deactivated successfully.
Nov 25 08:57:51 compute-0 neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e[383642]: [NOTICE]   (383647) : New worker (383649) forked
Nov 25 08:57:51 compute-0 neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e[383642]: [NOTICE]   (383647) : Loading success.
Nov 25 08:57:51 compute-0 sudo[383279]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:51 compute-0 sudo[383658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:57:51 compute-0 sudo[383658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:51 compute-0 sudo[383658]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:51 compute-0 sudo[383683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:57:51 compute-0 sudo[383683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:51 compute-0 sudo[383683]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:51 compute-0 sudo[383708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:57:51 compute-0 sudo[383708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:51 compute-0 sudo[383708]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 238 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Nov 25 08:57:51 compute-0 sudo[383733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:57:51 compute-0 sudo[383733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:52 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3521971948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:57:52 compute-0 podman[383797]: 2025-11-25 08:57:52.308475253 +0000 UTC m=+0.050948758 container create 4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 08:57:52 compute-0 systemd[1]: Started libpod-conmon-4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf.scope.
Nov 25 08:57:52 compute-0 podman[383797]: 2025-11-25 08:57:52.280612104 +0000 UTC m=+0.023085629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:57:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:57:52 compute-0 podman[383797]: 2025-11-25 08:57:52.424450389 +0000 UTC m=+0.166923904 container init 4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 08:57:52 compute-0 podman[383797]: 2025-11-25 08:57:52.432809356 +0000 UTC m=+0.175282861 container start 4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 08:57:52 compute-0 podman[383797]: 2025-11-25 08:57:52.436915419 +0000 UTC m=+0.179388914 container attach 4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 08:57:52 compute-0 friendly_hopper[383813]: 167 167
Nov 25 08:57:52 compute-0 systemd[1]: libpod-4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf.scope: Deactivated successfully.
Nov 25 08:57:52 compute-0 conmon[383813]: conmon 4c6f23bdbb271279c068 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf.scope/container/memory.events
Nov 25 08:57:52 compute-0 podman[383797]: 2025-11-25 08:57:52.441447011 +0000 UTC m=+0.183920506 container died 4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:57:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f145041488365d4a77ff28b2ae9cd508b93550684605cb3f543443de51deda3-merged.mount: Deactivated successfully.
Nov 25 08:57:52 compute-0 podman[383797]: 2025-11-25 08:57:52.480517295 +0000 UTC m=+0.222990790 container remove 4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 08:57:52 compute-0 systemd[1]: libpod-conmon-4c6f23bdbb271279c0682eb3b5fb95b5654ad8e628c6d5884ab4d468cb7070cf.scope: Deactivated successfully.
Nov 25 08:57:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:57:52 compute-0 podman[383837]: 2025-11-25 08:57:52.67360064 +0000 UTC m=+0.051048780 container create 851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:57:52 compute-0 systemd[1]: Started libpod-conmon-851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95.scope.
Nov 25 08:57:52 compute-0 podman[383837]: 2025-11-25 08:57:52.649632939 +0000 UTC m=+0.027081109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:57:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:57:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ca8650db8467a5956d45ae08b17a10035deee8dfc70097d5be3204ddaf258/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:57:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ca8650db8467a5956d45ae08b17a10035deee8dfc70097d5be3204ddaf258/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:57:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ca8650db8467a5956d45ae08b17a10035deee8dfc70097d5be3204ddaf258/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:57:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ca8650db8467a5956d45ae08b17a10035deee8dfc70097d5be3204ddaf258/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:57:52 compute-0 podman[383837]: 2025-11-25 08:57:52.776849071 +0000 UTC m=+0.154297211 container init 851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:57:52 compute-0 podman[383837]: 2025-11-25 08:57:52.785102636 +0000 UTC m=+0.162550756 container start 851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:57:52 compute-0 podman[383837]: 2025-11-25 08:57:52.788256831 +0000 UTC m=+0.165704981 container attach 851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 08:57:52 compute-0 nova_compute[253538]: 2025-11-25 08:57:52.963 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:53 compute-0 ceph-mon[75015]: pgmap v2312: 321 pgs: 321 active+clean; 238 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:57:53
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'images', '.rgw.root']
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]: {
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:         "osd_id": 1,
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:         "type": "bluestore"
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:     },
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:         "osd_id": 2,
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:         "type": "bluestore"
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:     },
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:         "osd_id": 0,
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:         "type": "bluestore"
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]:     }
Nov 25 08:57:53 compute-0 lucid_goldstine[383854]: }
Nov 25 08:57:53 compute-0 systemd[1]: libpod-851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95.scope: Deactivated successfully.
Nov 25 08:57:53 compute-0 podman[383887]: 2025-11-25 08:57:53.835799844 +0000 UTC m=+0.033668337 container died 851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 08:57:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f5ca8650db8467a5956d45ae08b17a10035deee8dfc70097d5be3204ddaf258-merged.mount: Deactivated successfully.
Nov 25 08:57:53 compute-0 podman[383887]: 2025-11-25 08:57:53.907625169 +0000 UTC m=+0.105493662 container remove 851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 08:57:53 compute-0 systemd[1]: libpod-conmon-851c337aea400ddee13455985729ede7c2af1db7b983df6380022a8345f65d95.scope: Deactivated successfully.
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 214 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 979 KiB/s rd, 1.8 MiB/s wr, 97 op/s
Nov 25 08:57:53 compute-0 sudo[383733]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:57:53 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:57:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:57:53 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 6d68413d-4660-4103-9cd1-583098033ae1 does not exist
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev bad59913-bdd8-486a-bdbd-7222b7fa76f4 does not exist
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:57:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:57:54 compute-0 sudo[383901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:57:54 compute-0 sudo[383901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:54 compute-0 sudo[383901]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:54.122 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:57:54 compute-0 nova_compute[253538]: 2025-11-25 08:57:54.122 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:54.124 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:57:54 compute-0 sudo[383926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:57:54 compute-0 sudo[383926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:57:54 compute-0 sudo[383926]: pam_unix(sudo:session): session closed for user root
Nov 25 08:57:54 compute-0 nova_compute[253538]: 2025-11-25 08:57:54.259 253542 DEBUG nova.compute.manager [req-8ec47ad1-bda1-4b3b-ad46-5a56b7c750ff req-aa4c4559-ba62-465a-873f-db99bf3877b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:54 compute-0 nova_compute[253538]: 2025-11-25 08:57:54.260 253542 DEBUG nova.compute.manager [req-8ec47ad1-bda1-4b3b-ad46-5a56b7c750ff req-aa4c4559-ba62-465a-873f-db99bf3877b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing instance network info cache due to event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:57:54 compute-0 nova_compute[253538]: 2025-11-25 08:57:54.261 253542 DEBUG oslo_concurrency.lockutils [req-8ec47ad1-bda1-4b3b-ad46-5a56b7c750ff req-aa4c4559-ba62-465a-873f-db99bf3877b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:57:54 compute-0 nova_compute[253538]: 2025-11-25 08:57:54.261 253542 DEBUG oslo_concurrency.lockutils [req-8ec47ad1-bda1-4b3b-ad46-5a56b7c750ff req-aa4c4559-ba62-465a-873f-db99bf3877b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:57:54 compute-0 nova_compute[253538]: 2025-11-25 08:57:54.262 253542 DEBUG nova.network.neutron [req-8ec47ad1-bda1-4b3b-ad46-5a56b7c750ff req-aa4c4559-ba62-465a-873f-db99bf3877b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:57:54 compute-0 nova_compute[253538]: 2025-11-25 08:57:54.435 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:54 compute-0 ceph-mon[75015]: pgmap v2313: 321 pgs: 321 active+clean; 214 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 979 KiB/s rd, 1.8 MiB/s wr, 97 op/s
Nov 25 08:57:54 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:57:54 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:57:55 compute-0 nova_compute[253538]: 2025-11-25 08:57:55.509 253542 DEBUG nova.network.neutron [req-8ec47ad1-bda1-4b3b-ad46-5a56b7c750ff req-aa4c4559-ba62-465a-873f-db99bf3877b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updated VIF entry in instance network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:57:55 compute-0 nova_compute[253538]: 2025-11-25 08:57:55.509 253542 DEBUG nova.network.neutron [req-8ec47ad1-bda1-4b3b-ad46-5a56b7c750ff req-aa4c4559-ba62-465a-873f-db99bf3877b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updating instance_info_cache with network_info: [{"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:57:55 compute-0 nova_compute[253538]: 2025-11-25 08:57:55.526 253542 DEBUG oslo_concurrency.lockutils [req-8ec47ad1-bda1-4b3b-ad46-5a56b7c750ff req-aa4c4559-ba62-465a-873f-db99bf3877b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:57:55 compute-0 ovn_controller[152859]: 2025-11-25T08:57:55Z|01273|binding|INFO|Releasing lport f9837d3c-3aa7-48de-b240-549f6bf978b3 from this chassis (sb_readonly=0)
Nov 25 08:57:55 compute-0 ovn_controller[152859]: 2025-11-25T08:57:55Z|01274|binding|INFO|Releasing lport c544ed70-c59f-4fbe-97c5-a521f548f971 from this chassis (sb_readonly=0)
Nov 25 08:57:55 compute-0 nova_compute[253538]: 2025-11-25 08:57:55.600 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 25 08:57:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:56.127 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:56 compute-0 nova_compute[253538]: 2025-11-25 08:57:56.680 253542 DEBUG nova.compute.manager [req-1679c61b-85f9-4955-91e5-c8125fb212cc req-0709f118-3a13-4b8a-812c-63bdae5cd440 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received event network-changed-d92eef96-9bbe-4743-96d0-393e7e6de4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:56 compute-0 nova_compute[253538]: 2025-11-25 08:57:56.681 253542 DEBUG nova.compute.manager [req-1679c61b-85f9-4955-91e5-c8125fb212cc req-0709f118-3a13-4b8a-812c-63bdae5cd440 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Refreshing instance network info cache due to event network-changed-d92eef96-9bbe-4743-96d0-393e7e6de4ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:57:56 compute-0 nova_compute[253538]: 2025-11-25 08:57:56.682 253542 DEBUG oslo_concurrency.lockutils [req-1679c61b-85f9-4955-91e5-c8125fb212cc req-0709f118-3a13-4b8a-812c-63bdae5cd440 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:57:56 compute-0 nova_compute[253538]: 2025-11-25 08:57:56.682 253542 DEBUG oslo_concurrency.lockutils [req-1679c61b-85f9-4955-91e5-c8125fb212cc req-0709f118-3a13-4b8a-812c-63bdae5cd440 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:57:56 compute-0 nova_compute[253538]: 2025-11-25 08:57:56.683 253542 DEBUG nova.network.neutron [req-1679c61b-85f9-4955-91e5-c8125fb212cc req-0709f118-3a13-4b8a-812c-63bdae5cd440 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Refreshing network info cache for port d92eef96-9bbe-4743-96d0-393e7e6de4ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:57:56 compute-0 nova_compute[253538]: 2025-11-25 08:57:56.774 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ce8c3428-f7e4-49aa-9978-faaf5d514663" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:56 compute-0 nova_compute[253538]: 2025-11-25 08:57:56.775 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:56 compute-0 nova_compute[253538]: 2025-11-25 08:57:56.776 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:56 compute-0 nova_compute[253538]: 2025-11-25 08:57:56.776 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:56 compute-0 nova_compute[253538]: 2025-11-25 08:57:56.777 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:56 compute-0 nova_compute[253538]: 2025-11-25 08:57:56.780 253542 INFO nova.compute.manager [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Terminating instance
Nov 25 08:57:56 compute-0 nova_compute[253538]: 2025-11-25 08:57:56.782 253542 DEBUG nova.compute.manager [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:57:56 compute-0 kernel: tapd92eef96-9b (unregistering): left promiscuous mode
Nov 25 08:57:56 compute-0 NetworkManager[48915]: <info>  [1764061076.8521] device (tapd92eef96-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:57:56 compute-0 ovn_controller[152859]: 2025-11-25T08:57:56Z|01275|binding|INFO|Releasing lport d92eef96-9bbe-4743-96d0-393e7e6de4ee from this chassis (sb_readonly=0)
Nov 25 08:57:56 compute-0 ovn_controller[152859]: 2025-11-25T08:57:56Z|01276|binding|INFO|Setting lport d92eef96-9bbe-4743-96d0-393e7e6de4ee down in Southbound
Nov 25 08:57:56 compute-0 ovn_controller[152859]: 2025-11-25T08:57:56Z|01277|binding|INFO|Removing iface tapd92eef96-9b ovn-installed in OVS
Nov 25 08:57:56 compute-0 nova_compute[253538]: 2025-11-25 08:57:56.874 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:56.878 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:26:ec 10.100.0.3'], port_security=['fa:16:3e:0d:26:ec 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ce8c3428-f7e4-49aa-9978-faaf5d514663', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58e30486-fde6-46bb-8263-c463bd38a1f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '84e8c954-c3f0-4a6c-88b0-2dc68f7ce745 aec330ab-8d77-47ae-8de6-bec0741c3114', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8915864-93a9-4ad1-b7bb-a11d22ed3f29, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d92eef96-9bbe-4743-96d0-393e7e6de4ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:57:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:56.879 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d92eef96-9bbe-4743-96d0-393e7e6de4ee in datapath 58e30486-fde6-46bb-8263-c463bd38a1f9 unbound from our chassis
Nov 25 08:57:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:56.880 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58e30486-fde6-46bb-8263-c463bd38a1f9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:57:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:56.884 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3073298e-5604-4f1c-988a-3cf2867aa3f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:56.885 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9 namespace which is not needed anymore
Nov 25 08:57:56 compute-0 nova_compute[253538]: 2025-11-25 08:57:56.903 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:56 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000078.scope: Deactivated successfully.
Nov 25 08:57:56 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000078.scope: Consumed 17.128s CPU time.
Nov 25 08:57:56 compute-0 systemd-machined[215790]: Machine qemu-150-instance-00000078 terminated.
Nov 25 08:57:56 compute-0 ceph-mon[75015]: pgmap v2314: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.013 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.017 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.030 253542 INFO nova.virt.libvirt.driver [-] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Instance destroyed successfully.
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.030 253542 DEBUG nova.objects.instance [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'resources' on Instance uuid ce8c3428-f7e4-49aa-9978-faaf5d514663 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.041 253542 DEBUG nova.virt.libvirt.vif [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:56:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1402608537',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1402608537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=120,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMzihuAIn/3zUYmC89IQHlRQOFsDPQXmR2lUEIBbP/zsJ4Wb7ryhi2Z+PoqeUCEWAj2u1hLvngwGPYFPPFVKkLQWsKMEmPgeFVkFH2scsb2/c4cLoNH5bP+xcccrYAT8g==',key_name='tempest-TestSecurityGroupsBasicOps-373950628',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:56:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-hsvmofjq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:56:31Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=ce8c3428-f7e4-49aa-9978-faaf5d514663,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.041 253542 DEBUG nova.network.os_vif_util [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.042 253542 DEBUG nova.network.os_vif_util [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0d:26:ec,bridge_name='br-int',has_traffic_filtering=True,id=d92eef96-9bbe-4743-96d0-393e7e6de4ee,network=Network(58e30486-fde6-46bb-8263-c463bd38a1f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92eef96-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.042 253542 DEBUG os_vif [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:26:ec,bridge_name='br-int',has_traffic_filtering=True,id=d92eef96-9bbe-4743-96d0-393e7e6de4ee,network=Network(58e30486-fde6-46bb-8263-c463bd38a1f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92eef96-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.044 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.045 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd92eef96-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.049 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:57:57 compute-0 neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9[380255]: [NOTICE]   (380259) : haproxy version is 2.8.14-c23fe91
Nov 25 08:57:57 compute-0 neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9[380255]: [NOTICE]   (380259) : path to executable is /usr/sbin/haproxy
Nov 25 08:57:57 compute-0 neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9[380255]: [WARNING]  (380259) : Exiting Master process...
Nov 25 08:57:57 compute-0 neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9[380255]: [WARNING]  (380259) : Exiting Master process...
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.054 253542 INFO os_vif [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:26:ec,bridge_name='br-int',has_traffic_filtering=True,id=d92eef96-9bbe-4743-96d0-393e7e6de4ee,network=Network(58e30486-fde6-46bb-8263-c463bd38a1f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92eef96-9b')
Nov 25 08:57:57 compute-0 neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9[380255]: [ALERT]    (380259) : Current worker (380261) exited with code 143 (Terminated)
Nov 25 08:57:57 compute-0 neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9[380255]: [WARNING]  (380259) : All workers exited. Exiting... (0)
Nov 25 08:57:57 compute-0 systemd[1]: libpod-3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a.scope: Deactivated successfully.
Nov 25 08:57:57 compute-0 podman[383975]: 2025-11-25 08:57:57.065259686 +0000 UTC m=+0.060548620 container died 3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:57:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a-userdata-shm.mount: Deactivated successfully.
Nov 25 08:57:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-97985d9ed470fd0792a7d93bf1b618a370e10b24c8a74fa4f0901da8068fed82-merged.mount: Deactivated successfully.
Nov 25 08:57:57 compute-0 podman[383975]: 2025-11-25 08:57:57.111848854 +0000 UTC m=+0.107137788 container cleanup 3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 08:57:57 compute-0 systemd[1]: libpod-conmon-3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a.scope: Deactivated successfully.
Nov 25 08:57:57 compute-0 podman[384028]: 2025-11-25 08:57:57.21272399 +0000 UTC m=+0.068775454 container remove 3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:57:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.220 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b703ce56-9d99-4666-b426-5e7ca06dfe0c]: (4, ('Tue Nov 25 08:57:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9 (3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a)\n3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a\nTue Nov 25 08:57:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9 (3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a)\n3a5e054c3e2922afccdd6a996a2f3812b74fb741e37e9172311278ab6fa9774a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.222 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1311ce-24dc-46c5-b820-3588a3265c79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.223 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58e30486-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.225 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:57 compute-0 kernel: tap58e30486-f0: left promiscuous mode
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.238 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.242 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.244 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[074a3ff8-445a-42cc-8d2c-02c360a6c973]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.260 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2361aa0f-dcb1-444e-a5a5-74cae6729394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.261 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[36168b02-700c-4ccf-b6cc-881821153070]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.293 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1e47d2-8d38-4887-b172-af25c454be38]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634523, 'reachable_time': 35682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384043, 'error': None, 'target': 'ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d58e30486\x2dfde6\x2d46bb\x2d8263\x2dc463bd38a1f9.mount: Deactivated successfully.
Nov 25 08:57:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.296 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58e30486-fde6-46bb-8263-c463bd38a1f9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:57:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:57:57.296 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4eac92bb-8337-413c-8ab1-b51437dc6166]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.570 253542 INFO nova.virt.libvirt.driver [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Deleting instance files /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663_del
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.571 253542 INFO nova.virt.libvirt.driver [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Deletion of /var/lib/nova/instances/ce8c3428-f7e4-49aa-9978-faaf5d514663_del complete
Nov 25 08:57:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.643 253542 INFO nova.compute.manager [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Took 0.86 seconds to destroy the instance on the hypervisor.
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.643 253542 DEBUG oslo.service.loopingcall [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.646 253542 DEBUG nova.compute.manager [-] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.646 253542 DEBUG nova.network.neutron [-] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:57:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2315: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Nov 25 08:57:57 compute-0 nova_compute[253538]: 2025-11-25 08:57:57.966 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.411 253542 DEBUG nova.network.neutron [req-1679c61b-85f9-4955-91e5-c8125fb212cc req-0709f118-3a13-4b8a-812c-63bdae5cd440 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updated VIF entry in instance network info cache for port d92eef96-9bbe-4743-96d0-393e7e6de4ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.412 253542 DEBUG nova.network.neutron [req-1679c61b-85f9-4955-91e5-c8125fb212cc req-0709f118-3a13-4b8a-812c-63bdae5cd440 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updating instance_info_cache with network_info: [{"id": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "address": "fa:16:3e:0d:26:ec", "network": {"id": "58e30486-fde6-46bb-8263-c463bd38a1f9", "bridge": "br-int", "label": "tempest-network-smoke--1964902798", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92eef96-9b", "ovs_interfaceid": "d92eef96-9bbe-4743-96d0-393e7e6de4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.446 253542 DEBUG oslo_concurrency.lockutils [req-1679c61b-85f9-4955-91e5-c8125fb212cc req-0709f118-3a13-4b8a-812c-63bdae5cd440 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-ce8c3428-f7e4-49aa-9978-faaf5d514663" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.637 253542 DEBUG nova.network.neutron [-] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.657 253542 INFO nova.compute.manager [-] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Took 1.01 seconds to deallocate network for instance.
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.710 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.711 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.796 253542 DEBUG oslo_concurrency.processutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.839 253542 DEBUG nova.compute.manager [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received event network-vif-unplugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.839 253542 DEBUG oslo_concurrency.lockutils [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.840 253542 DEBUG oslo_concurrency.lockutils [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.840 253542 DEBUG oslo_concurrency.lockutils [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.840 253542 DEBUG nova.compute.manager [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] No waiting events found dispatching network-vif-unplugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.841 253542 WARNING nova.compute.manager [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received unexpected event network-vif-unplugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee for instance with vm_state deleted and task_state None.
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.841 253542 DEBUG nova.compute.manager [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received event network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.841 253542 DEBUG oslo_concurrency.lockutils [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.842 253542 DEBUG oslo_concurrency.lockutils [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.842 253542 DEBUG oslo_concurrency.lockutils [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.842 253542 DEBUG nova.compute.manager [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] No waiting events found dispatching network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.843 253542 WARNING nova.compute.manager [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received unexpected event network-vif-plugged-d92eef96-9bbe-4743-96d0-393e7e6de4ee for instance with vm_state deleted and task_state None.
Nov 25 08:57:58 compute-0 nova_compute[253538]: 2025-11-25 08:57:58.843 253542 DEBUG nova.compute.manager [req-cd893fa3-28dd-47d5-95aa-27204a07ea01 req-f733e173-c2ad-40fd-a32c-911f6b60de64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Received event network-vif-deleted-d92eef96-9bbe-4743-96d0-393e7e6de4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:57:58 compute-0 ceph-mon[75015]: pgmap v2315: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Nov 25 08:57:59 compute-0 nova_compute[253538]: 2025-11-25 08:57:59.058 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "7be1983b-1609-4155-b634-d14fc92539e8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:59 compute-0 nova_compute[253538]: 2025-11-25 08:57:59.058 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:59 compute-0 nova_compute[253538]: 2025-11-25 08:57:59.082 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:57:59 compute-0 nova_compute[253538]: 2025-11-25 08:57:59.167 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:57:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:57:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/29797299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:57:59 compute-0 nova_compute[253538]: 2025-11-25 08:57:59.265 253542 DEBUG oslo_concurrency.processutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:59 compute-0 nova_compute[253538]: 2025-11-25 08:57:59.274 253542 DEBUG nova.compute.provider_tree [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:57:59 compute-0 nova_compute[253538]: 2025-11-25 08:57:59.289 253542 DEBUG nova.scheduler.client.report [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:57:59 compute-0 nova_compute[253538]: 2025-11-25 08:57:59.312 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:59 compute-0 nova_compute[253538]: 2025-11-25 08:57:59.315 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:57:59 compute-0 nova_compute[253538]: 2025-11-25 08:57:59.325 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:57:59 compute-0 nova_compute[253538]: 2025-11-25 08:57:59.325 253542 INFO nova.compute.claims [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:57:59 compute-0 nova_compute[253538]: 2025-11-25 08:57:59.354 253542 INFO nova.scheduler.client.report [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Deleted allocations for instance ce8c3428-f7e4-49aa-9978-faaf5d514663
Nov 25 08:57:59 compute-0 nova_compute[253538]: 2025-11-25 08:57:59.430 253542 DEBUG oslo_concurrency.lockutils [None req-ed3f8497-365c-4404-a5bf-0bfe08645606 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ce8c3428-f7e4-49aa-9978-faaf5d514663" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:57:59 compute-0 nova_compute[253538]: 2025-11-25 08:57:59.467 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:57:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 164 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 941 KiB/s wr, 142 op/s
Nov 25 08:57:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:57:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2367924863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:57:59 compute-0 nova_compute[253538]: 2025-11-25 08:57:59.972 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:57:59 compute-0 nova_compute[253538]: 2025-11-25 08:57:59.978 253542 DEBUG nova.compute.provider_tree [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:57:59 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/29797299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:57:59 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2367924863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.003 253542 DEBUG nova.scheduler.client.report [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.026 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.027 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.082 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.083 253542 DEBUG nova.network.neutron [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.103 253542 INFO nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.122 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.208 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.210 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.211 253542 INFO nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Creating image(s)
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.243 253542 DEBUG nova.storage.rbd_utils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 7be1983b-1609-4155-b634-d14fc92539e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.271 253542 DEBUG nova.storage.rbd_utils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 7be1983b-1609-4155-b634-d14fc92539e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.297 253542 DEBUG nova.storage.rbd_utils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 7be1983b-1609-4155-b634-d14fc92539e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.301 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.373 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.374 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.374 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.375 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.403 253542 DEBUG nova.storage.rbd_utils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 7be1983b-1609-4155-b634-d14fc92539e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.408 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7be1983b-1609-4155-b634-d14fc92539e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.603 253542 DEBUG nova.policy [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.708 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7be1983b-1609-4155-b634-d14fc92539e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.800 253542 DEBUG nova.storage.rbd_utils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 7be1983b-1609-4155-b634-d14fc92539e8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.909 253542 DEBUG nova.objects.instance [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 7be1983b-1609-4155-b634-d14fc92539e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.927 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.928 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Ensure instance console log exists: /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.929 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.929 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:00 compute-0 nova_compute[253538]: 2025-11-25 08:58:00.930 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:00 compute-0 ceph-mon[75015]: pgmap v2316: 321 pgs: 321 active+clean; 164 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 941 KiB/s wr, 142 op/s
Nov 25 08:58:01 compute-0 nova_compute[253538]: 2025-11-25 08:58:01.851 253542 DEBUG nova.network.neutron [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Successfully created port: 91977aa8-6282-46cb-bc4f-42567be639f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:58:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 163 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 151 op/s
Nov 25 08:58:02 compute-0 nova_compute[253538]: 2025-11-25 08:58:02.050 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:58:02 compute-0 nova_compute[253538]: 2025-11-25 08:58:02.642 253542 DEBUG nova.network.neutron [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Successfully updated port: 91977aa8-6282-46cb-bc4f-42567be639f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:58:02 compute-0 nova_compute[253538]: 2025-11-25 08:58:02.659 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:58:02 compute-0 nova_compute[253538]: 2025-11-25 08:58:02.660 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:58:02 compute-0 nova_compute[253538]: 2025-11-25 08:58:02.660 253542 DEBUG nova.network.neutron [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:58:02 compute-0 nova_compute[253538]: 2025-11-25 08:58:02.722 253542 DEBUG nova.compute.manager [req-1b1722d2-ec6f-40b6-bbef-a50d7ce0d3cc req-285c439f-4130-4a1c-bfea-6be690a681ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-changed-91977aa8-6282-46cb-bc4f-42567be639f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:02 compute-0 nova_compute[253538]: 2025-11-25 08:58:02.723 253542 DEBUG nova.compute.manager [req-1b1722d2-ec6f-40b6-bbef-a50d7ce0d3cc req-285c439f-4130-4a1c-bfea-6be690a681ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Refreshing instance network info cache due to event network-changed-91977aa8-6282-46cb-bc4f-42567be639f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:58:02 compute-0 nova_compute[253538]: 2025-11-25 08:58:02.723 253542 DEBUG oslo_concurrency.lockutils [req-1b1722d2-ec6f-40b6-bbef-a50d7ce0d3cc req-285c439f-4130-4a1c-bfea-6be690a681ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:58:02 compute-0 nova_compute[253538]: 2025-11-25 08:58:02.825 253542 DEBUG nova.network.neutron [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:58:02 compute-0 nova_compute[253538]: 2025-11-25 08:58:02.967 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:03 compute-0 ceph-mon[75015]: pgmap v2317: 321 pgs: 321 active+clean; 163 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 151 op/s
Nov 25 08:58:03 compute-0 ovn_controller[152859]: 2025-11-25T08:58:03Z|01278|binding|INFO|Releasing lport c544ed70-c59f-4fbe-97c5-a521f548f971 from this chassis (sb_readonly=0)
Nov 25 08:58:03 compute-0 nova_compute[253538]: 2025-11-25 08:58:03.339 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:03 compute-0 nova_compute[253538]: 2025-11-25 08:58:03.556 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061068.552513, 9b511004-21d7-4867-aa46-4e7219827b6e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:58:03 compute-0 nova_compute[253538]: 2025-11-25 08:58:03.556 253542 INFO nova.compute.manager [-] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] VM Stopped (Lifecycle Event)
Nov 25 08:58:03 compute-0 nova_compute[253538]: 2025-11-25 08:58:03.579 253542 DEBUG nova.compute.manager [None req-014f0b8c-9555-4ecb-ba76-e0b55af00c4b - - - - - -] [instance: 9b511004-21d7-4867-aa46-4e7219827b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:58:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2318: 321 pgs: 321 active+clean; 175 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 138 op/s
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007180662727095136 of space, bias 1.0, pg target 0.21541988181285407 quantized to 32 (current 32)
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:58:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:58:05 compute-0 ceph-mon[75015]: pgmap v2318: 321 pgs: 321 active+clean; 175 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 138 op/s
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.114 253542 DEBUG nova.network.neutron [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Updating instance_info_cache with network_info: [{"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.135 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.136 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Instance network_info: |[{"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.136 253542 DEBUG oslo_concurrency.lockutils [req-1b1722d2-ec6f-40b6-bbef-a50d7ce0d3cc req-285c439f-4130-4a1c-bfea-6be690a681ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.136 253542 DEBUG nova.network.neutron [req-1b1722d2-ec6f-40b6-bbef-a50d7ce0d3cc req-285c439f-4130-4a1c-bfea-6be690a681ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Refreshing network info cache for port 91977aa8-6282-46cb-bc4f-42567be639f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.139 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Start _get_guest_xml network_info=[{"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.143 253542 WARNING nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.150 253542 DEBUG nova.virt.libvirt.host [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.150 253542 DEBUG nova.virt.libvirt.host [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.153 253542 DEBUG nova.virt.libvirt.host [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.154 253542 DEBUG nova.virt.libvirt.host [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.154 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.154 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.155 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.155 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.155 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.155 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.156 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.156 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.156 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.156 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.157 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.157 253542 DEBUG nova.virt.hardware [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.160 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:05 compute-0 ovn_controller[152859]: 2025-11-25T08:58:05Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:e8:8d 10.100.0.9
Nov 25 08:58:05 compute-0 ovn_controller[152859]: 2025-11-25T08:58:05Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:e8:8d 10.100.0.9
Nov 25 08:58:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:58:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2680538638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.583 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.610 253542 DEBUG nova.storage.rbd_utils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 7be1983b-1609-4155-b634-d14fc92539e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:05 compute-0 nova_compute[253538]: 2025-11-25 08:58:05.615 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:05 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 08:58:05 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 48K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1441 writes, 7014 keys, 1441 commit groups, 1.0 writes per commit group, ingest: 9.13 MB, 0.02 MB/s
                                           Interval WAL: 1441 writes, 1441 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     31.7      1.81              0.20        33    0.055       0      0       0.0       0.0
                                             L6      1/0    7.43 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.3     62.1     51.6      4.79              0.82        32    0.150    185K    17K       0.0       0.0
                                            Sum      1/0    7.43 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3     45.0     46.2      6.60              1.03        65    0.101    185K    17K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1    107.9    103.0      0.62              0.22        14    0.044     50K   3600       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     62.1     51.6      4.79              0.82        32    0.150    185K    17K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     31.7      1.81              0.20        32    0.056       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.056, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.30 GB write, 0.07 MB/s write, 0.29 GB read, 0.07 MB/s read, 6.6 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 304.00 MB usage: 33.79 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.00027 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2226,32.44 MB,10.6714%) FilterBlock(66,524.55 KB,0.168504%) IndexBlock(66,852.62 KB,0.273895%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 08:58:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 190 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.8 MiB/s wr, 114 op/s
Nov 25 08:58:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:58:06 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3502249698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:58:06 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2680538638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.065 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.069 253542 DEBUG nova.virt.libvirt.vif [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:57:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1380054515',display_name='tempest-TestNetworkBasicOps-server-1380054515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1380054515',id=124,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC0onj7xYRJ9u2qwke0uzdeZmqzf6dyWSNUBde1bKNBXCsKy64L0Qx4G4FAfzNe1upG08i2qlETnDI+nze7Y9Zy5eH5tzWkkjtBeM6yGgDQ+VcsL5Xix937kJOB4ium+2w==',key_name='tempest-TestNetworkBasicOps-618890367',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-81q88fvv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:58:00Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=7be1983b-1609-4155-b634-d14fc92539e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.069 253542 DEBUG nova.network.os_vif_util [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.072 253542 DEBUG nova.network.os_vif_util [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:48:54,bridge_name='br-int',has_traffic_filtering=True,id=91977aa8-6282-46cb-bc4f-42567be639f9,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91977aa8-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.074 253542 DEBUG nova.objects.instance [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7be1983b-1609-4155-b634-d14fc92539e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.098 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:58:06 compute-0 nova_compute[253538]:   <uuid>7be1983b-1609-4155-b634-d14fc92539e8</uuid>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   <name>instance-0000007c</name>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkBasicOps-server-1380054515</nova:name>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:58:05</nova:creationTime>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:58:06 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:58:06 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:58:06 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:58:06 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:58:06 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:58:06 compute-0 nova_compute[253538]:         <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:58:06 compute-0 nova_compute[253538]:         <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:58:06 compute-0 nova_compute[253538]:         <nova:port uuid="91977aa8-6282-46cb-bc4f-42567be639f9">
Nov 25 08:58:06 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <system>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <entry name="serial">7be1983b-1609-4155-b634-d14fc92539e8</entry>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <entry name="uuid">7be1983b-1609-4155-b634-d14fc92539e8</entry>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     </system>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   <os>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   </os>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   <features>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   </features>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7be1983b-1609-4155-b634-d14fc92539e8_disk">
Nov 25 08:58:06 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       </source>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:58:06 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7be1983b-1609-4155-b634-d14fc92539e8_disk.config">
Nov 25 08:58:06 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       </source>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:58:06 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:60:48:54"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <target dev="tap91977aa8-62"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8/console.log" append="off"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <video>
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     </video>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:58:06 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:58:06 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:58:06 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:58:06 compute-0 nova_compute[253538]: </domain>
Nov 25 08:58:06 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.101 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Preparing to wait for external event network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.102 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "7be1983b-1609-4155-b634-d14fc92539e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.102 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.103 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.104 253542 DEBUG nova.virt.libvirt.vif [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:57:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1380054515',display_name='tempest-TestNetworkBasicOps-server-1380054515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1380054515',id=124,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC0onj7xYRJ9u2qwke0uzdeZmqzf6dyWSNUBde1bKNBXCsKy64L0Qx4G4FAfzNe1upG08i2qlETnDI+nze7Y9Zy5eH5tzWkkjtBeM6yGgDQ+VcsL5Xix937kJOB4ium+2w==',key_name='tempest-TestNetworkBasicOps-618890367',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-81q88fvv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:58:00Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=7be1983b-1609-4155-b634-d14fc92539e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.105 253542 DEBUG nova.network.os_vif_util [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.106 253542 DEBUG nova.network.os_vif_util [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:48:54,bridge_name='br-int',has_traffic_filtering=True,id=91977aa8-6282-46cb-bc4f-42567be639f9,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91977aa8-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.106 253542 DEBUG os_vif [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:48:54,bridge_name='br-int',has_traffic_filtering=True,id=91977aa8-6282-46cb-bc4f-42567be639f9,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91977aa8-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.107 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.108 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.109 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.112 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.113 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91977aa8-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.114 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap91977aa8-62, col_values=(('external_ids', {'iface-id': '91977aa8-6282-46cb-bc4f-42567be639f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:48:54', 'vm-uuid': '7be1983b-1609-4155-b634-d14fc92539e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:06 compute-0 NetworkManager[48915]: <info>  [1764061086.1169] manager: (tap91977aa8-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/524)
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.115 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.121 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.122 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.124 253542 INFO os_vif [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:48:54,bridge_name='br-int',has_traffic_filtering=True,id=91977aa8-6282-46cb-bc4f-42567be639f9,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91977aa8-62')
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.189 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.190 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.190 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:60:48:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.191 253542 INFO nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Using config drive
Nov 25 08:58:06 compute-0 nova_compute[253538]: 2025-11-25 08:58:06.218 253542 DEBUG nova.storage.rbd_utils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 7be1983b-1609-4155-b634-d14fc92539e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:07 compute-0 ceph-mon[75015]: pgmap v2319: 321 pgs: 321 active+clean; 190 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.8 MiB/s wr, 114 op/s
Nov 25 08:58:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3502249698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.305 253542 INFO nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Creating config drive at /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8/disk.config
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.309 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvi1gferr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.449 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvi1gferr" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.487 253542 DEBUG nova.storage.rbd_utils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 7be1983b-1609-4155-b634-d14fc92539e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.491 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8/disk.config 7be1983b-1609-4155-b634-d14fc92539e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.629 253542 DEBUG nova.network.neutron [req-1b1722d2-ec6f-40b6-bbef-a50d7ce0d3cc req-285c439f-4130-4a1c-bfea-6be690a681ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Updated VIF entry in instance network info cache for port 91977aa8-6282-46cb-bc4f-42567be639f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.630 253542 DEBUG nova.network.neutron [req-1b1722d2-ec6f-40b6-bbef-a50d7ce0d3cc req-285c439f-4130-4a1c-bfea-6be690a681ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Updating instance_info_cache with network_info: [{"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.645 253542 DEBUG oslo_concurrency.lockutils [req-1b1722d2-ec6f-40b6-bbef-a50d7ce0d3cc req-285c439f-4130-4a1c-bfea-6be690a681ec b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.665 253542 DEBUG oslo_concurrency.processutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8/disk.config 7be1983b-1609-4155-b634-d14fc92539e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.665 253542 INFO nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Deleting local config drive /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8/disk.config because it was imported into RBD.
Nov 25 08:58:07 compute-0 NetworkManager[48915]: <info>  [1764061087.7185] manager: (tap91977aa8-62): new Tun device (/org/freedesktop/NetworkManager/Devices/525)
Nov 25 08:58:07 compute-0 kernel: tap91977aa8-62: entered promiscuous mode
Nov 25 08:58:07 compute-0 ovn_controller[152859]: 2025-11-25T08:58:07Z|01279|binding|INFO|Claiming lport 91977aa8-6282-46cb-bc4f-42567be639f9 for this chassis.
Nov 25 08:58:07 compute-0 ovn_controller[152859]: 2025-11-25T08:58:07Z|01280|binding|INFO|91977aa8-6282-46cb-bc4f-42567be639f9: Claiming fa:16:3e:60:48:54 10.100.0.10
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.723 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:07 compute-0 ovn_controller[152859]: 2025-11-25T08:58:07Z|01281|binding|INFO|Setting lport 91977aa8-6282-46cb-bc4f-42567be639f9 ovn-installed in OVS
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.740 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:07 compute-0 ovn_controller[152859]: 2025-11-25T08:58:07Z|01282|binding|INFO|Setting lport 91977aa8-6282-46cb-bc4f-42567be639f9 up in Southbound
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.745 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.744 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:48:54 10.100.0.10'], port_security=['fa:16:3e:60:48:54 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7be1983b-1609-4155-b634-d14fc92539e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf619d00-d285-4b9e-9996-77997075375e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3d244d5b-1cd0-48b4-a9a9-c4313e58642b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8f4cf68-54bc-4d0b-a896-1ef280f60a0a, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=91977aa8-6282-46cb-bc4f-42567be639f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:58:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.746 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 91977aa8-6282-46cb-bc4f-42567be639f9 in datapath bf619d00-d285-4b9e-9996-77997075375e bound to our chassis
Nov 25 08:58:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.747 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf619d00-d285-4b9e-9996-77997075375e
Nov 25 08:58:07 compute-0 systemd-udevd[384392]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:58:07 compute-0 systemd-machined[215790]: New machine qemu-154-instance-0000007c.
Nov 25 08:58:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.764 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[264b6d12-ffd6-4dc3-bb5b-345d4a40f055]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:07 compute-0 NetworkManager[48915]: <info>  [1764061087.7710] device (tap91977aa8-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:58:07 compute-0 NetworkManager[48915]: <info>  [1764061087.7723] device (tap91977aa8-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:58:07 compute-0 systemd[1]: Started Virtual Machine qemu-154-instance-0000007c.
Nov 25 08:58:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.794 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[48fbc33b-563b-44ca-a68a-62e1b9851b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.796 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[773609a7-c57c-4406-96ae-141c38bd577d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.822 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7715f221-f2c0-4473-b1a2-49f06a08228a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.838 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3a30d5e5-8e06-4dad-88ce-dc4e50b7bdcd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf619d00-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:be:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 369], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642553, 'reachable_time': 19033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384404, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.852 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[688402a8-2830-4c2d-96ec-c6d5d59bc090]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbf619d00-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642566, 'tstamp': 642566}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384406, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbf619d00-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642569, 'tstamp': 642569}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384406, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.854 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf619d00-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.855 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.856 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.856 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf619d00-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.856 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:58:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.856 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf619d00-d0, col_values=(('external_ids', {'iface-id': 'c544ed70-c59f-4fbe-97c5-a521f548f971'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:07 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:07.857 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:58:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2320: 321 pgs: 321 active+clean; 203 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.8 MiB/s wr, 121 op/s
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.968 253542 DEBUG nova.compute.manager [req-1a5e7d2a-1978-4261-849b-d03400d8063b req-b372be22-bbf4-455c-b9d6-fd433a108bbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.968 253542 DEBUG oslo_concurrency.lockutils [req-1a5e7d2a-1978-4261-849b-d03400d8063b req-b372be22-bbf4-455c-b9d6-fd433a108bbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7be1983b-1609-4155-b634-d14fc92539e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.968 253542 DEBUG oslo_concurrency.lockutils [req-1a5e7d2a-1978-4261-849b-d03400d8063b req-b372be22-bbf4-455c-b9d6-fd433a108bbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.969 253542 DEBUG oslo_concurrency.lockutils [req-1a5e7d2a-1978-4261-849b-d03400d8063b req-b372be22-bbf4-455c-b9d6-fd433a108bbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.969 253542 DEBUG nova.compute.manager [req-1a5e7d2a-1978-4261-849b-d03400d8063b req-b372be22-bbf4-455c-b9d6-fd433a108bbf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Processing event network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:58:07 compute-0 nova_compute[253538]: 2025-11-25 08:58:07.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.443 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.557 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061088.5568595, 7be1983b-1609-4155-b634-d14fc92539e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.557 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] VM Started (Lifecycle Event)
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.561 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.564 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.568 253542 INFO nova.virt.libvirt.driver [-] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Instance spawned successfully.
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.568 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.586 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.596 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.601 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.602 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.603 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.603 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.604 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.604 253542 DEBUG nova.virt.libvirt.driver [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.657 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.657 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061088.557015, 7be1983b-1609-4155-b634-d14fc92539e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.658 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] VM Paused (Lifecycle Event)
Nov 25 08:58:08 compute-0 sshd-session[384407]: Invalid user hduser from 193.32.162.151 port 44916
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.705 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.715 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061088.5633826, 7be1983b-1609-4155-b634-d14fc92539e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.715 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] VM Resumed (Lifecycle Event)
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.736 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.739 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.746 253542 INFO nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Took 8.54 seconds to spawn the instance on the hypervisor.
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.747 253542 DEBUG nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.755 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:58:08 compute-0 sshd-session[384407]: Connection closed by invalid user hduser 193.32.162.151 port 44916 [preauth]
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.803 253542 INFO nova.compute.manager [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Took 9.66 seconds to build instance.
Nov 25 08:58:08 compute-0 nova_compute[253538]: 2025-11-25 08:58:08.818 253542 DEBUG oslo_concurrency.lockutils [None req-7ad4c5e8-daeb-48e5-949c-59a86a73b69b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:09 compute-0 ceph-mon[75015]: pgmap v2320: 321 pgs: 321 active+clean; 203 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.8 MiB/s wr, 121 op/s
Nov 25 08:58:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2321: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 457 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Nov 25 08:58:10 compute-0 nova_compute[253538]: 2025-11-25 08:58:10.112 253542 DEBUG nova.compute.manager [req-ee5c7c8e-678c-445e-b7ad-ba1c6f9ef9b6 req-28ea3b4a-2731-4a23-8fe3-6c1240c0f644 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:10 compute-0 nova_compute[253538]: 2025-11-25 08:58:10.113 253542 DEBUG oslo_concurrency.lockutils [req-ee5c7c8e-678c-445e-b7ad-ba1c6f9ef9b6 req-28ea3b4a-2731-4a23-8fe3-6c1240c0f644 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7be1983b-1609-4155-b634-d14fc92539e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:10 compute-0 nova_compute[253538]: 2025-11-25 08:58:10.113 253542 DEBUG oslo_concurrency.lockutils [req-ee5c7c8e-678c-445e-b7ad-ba1c6f9ef9b6 req-28ea3b4a-2731-4a23-8fe3-6c1240c0f644 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:10 compute-0 nova_compute[253538]: 2025-11-25 08:58:10.113 253542 DEBUG oslo_concurrency.lockutils [req-ee5c7c8e-678c-445e-b7ad-ba1c6f9ef9b6 req-28ea3b4a-2731-4a23-8fe3-6c1240c0f644 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:10 compute-0 nova_compute[253538]: 2025-11-25 08:58:10.114 253542 DEBUG nova.compute.manager [req-ee5c7c8e-678c-445e-b7ad-ba1c6f9ef9b6 req-28ea3b4a-2731-4a23-8fe3-6c1240c0f644 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] No waiting events found dispatching network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:58:10 compute-0 nova_compute[253538]: 2025-11-25 08:58:10.114 253542 WARNING nova.compute.manager [req-ee5c7c8e-678c-445e-b7ad-ba1c6f9ef9b6 req-28ea3b4a-2731-4a23-8fe3-6c1240c0f644 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received unexpected event network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 for instance with vm_state active and task_state None.
Nov 25 08:58:11 compute-0 ceph-mon[75015]: pgmap v2321: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 457 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Nov 25 08:58:11 compute-0 nova_compute[253538]: 2025-11-25 08:58:11.117 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 152 op/s
Nov 25 08:58:12 compute-0 nova_compute[253538]: 2025-11-25 08:58:12.028 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061077.0272484, ce8c3428-f7e4-49aa-9978-faaf5d514663 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:58:12 compute-0 nova_compute[253538]: 2025-11-25 08:58:12.028 253542 INFO nova.compute.manager [-] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] VM Stopped (Lifecycle Event)
Nov 25 08:58:12 compute-0 nova_compute[253538]: 2025-11-25 08:58:12.052 253542 DEBUG nova.compute.manager [None req-7dbb174a-4358-4b50-b226-ee0d2f210010 - - - - - -] [instance: ce8c3428-f7e4-49aa-9978-faaf5d514663] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:58:12 compute-0 nova_compute[253538]: 2025-11-25 08:58:12.208 253542 DEBUG nova.compute.manager [req-2061cbd6-f518-417a-9a0f-5a05848ec74c req-b32d5822-0ff6-4444-a460-d0d4efc7e02f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-changed-91977aa8-6282-46cb-bc4f-42567be639f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:12 compute-0 nova_compute[253538]: 2025-11-25 08:58:12.211 253542 DEBUG nova.compute.manager [req-2061cbd6-f518-417a-9a0f-5a05848ec74c req-b32d5822-0ff6-4444-a460-d0d4efc7e02f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Refreshing instance network info cache due to event network-changed-91977aa8-6282-46cb-bc4f-42567be639f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:58:12 compute-0 nova_compute[253538]: 2025-11-25 08:58:12.211 253542 DEBUG oslo_concurrency.lockutils [req-2061cbd6-f518-417a-9a0f-5a05848ec74c req-b32d5822-0ff6-4444-a460-d0d4efc7e02f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:58:12 compute-0 nova_compute[253538]: 2025-11-25 08:58:12.212 253542 DEBUG oslo_concurrency.lockutils [req-2061cbd6-f518-417a-9a0f-5a05848ec74c req-b32d5822-0ff6-4444-a460-d0d4efc7e02f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:58:12 compute-0 nova_compute[253538]: 2025-11-25 08:58:12.212 253542 DEBUG nova.network.neutron [req-2061cbd6-f518-417a-9a0f-5a05848ec74c req-b32d5822-0ff6-4444-a460-d0d4efc7e02f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Refreshing network info cache for port 91977aa8-6282-46cb-bc4f-42567be639f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:58:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:58:12 compute-0 nova_compute[253538]: 2025-11-25 08:58:12.971 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:13 compute-0 ceph-mon[75015]: pgmap v2322: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 152 op/s
Nov 25 08:58:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.9 MiB/s wr, 124 op/s
Nov 25 08:58:14 compute-0 nova_compute[253538]: 2025-11-25 08:58:14.159 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:14 compute-0 podman[384452]: 2025-11-25 08:58:14.806986821 +0000 UTC m=+0.052270044 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 08:58:14 compute-0 podman[384471]: 2025-11-25 08:58:14.889276322 +0000 UTC m=+0.047228277 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 08:58:15 compute-0 ceph-mon[75015]: pgmap v2323: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.9 MiB/s wr, 124 op/s
Nov 25 08:58:15 compute-0 nova_compute[253538]: 2025-11-25 08:58:15.121 253542 DEBUG nova.network.neutron [req-2061cbd6-f518-417a-9a0f-5a05848ec74c req-b32d5822-0ff6-4444-a460-d0d4efc7e02f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Updated VIF entry in instance network info cache for port 91977aa8-6282-46cb-bc4f-42567be639f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:58:15 compute-0 nova_compute[253538]: 2025-11-25 08:58:15.122 253542 DEBUG nova.network.neutron [req-2061cbd6-f518-417a-9a0f-5a05848ec74c req-b32d5822-0ff6-4444-a460-d0d4efc7e02f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Updating instance_info_cache with network_info: [{"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:58:15 compute-0 nova_compute[253538]: 2025-11-25 08:58:15.143 253542 DEBUG oslo_concurrency.lockutils [req-2061cbd6-f518-417a-9a0f-5a05848ec74c req-b32d5822-0ff6-4444-a460-d0d4efc7e02f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:58:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 125 op/s
Nov 25 08:58:16 compute-0 nova_compute[253538]: 2025-11-25 08:58:16.161 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:17 compute-0 ceph-mon[75015]: pgmap v2324: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 125 op/s
Nov 25 08:58:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:58:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 111 op/s
Nov 25 08:58:17 compute-0 nova_compute[253538]: 2025-11-25 08:58:17.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:19 compute-0 ceph-mon[75015]: pgmap v2325: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 111 op/s
Nov 25 08:58:19 compute-0 nova_compute[253538]: 2025-11-25 08:58:19.387 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "9447890d-1fff-4536-a0cd-b889c23f7479" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:19 compute-0 nova_compute[253538]: 2025-11-25 08:58:19.387 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:19 compute-0 nova_compute[253538]: 2025-11-25 08:58:19.406 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:58:19 compute-0 nova_compute[253538]: 2025-11-25 08:58:19.483 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:19 compute-0 nova_compute[253538]: 2025-11-25 08:58:19.484 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:19 compute-0 nova_compute[253538]: 2025-11-25 08:58:19.497 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:58:19 compute-0 nova_compute[253538]: 2025-11-25 08:58:19.498 253542 INFO nova.compute.claims [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:58:19 compute-0 nova_compute[253538]: 2025-11-25 08:58:19.731 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2326: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 105 KiB/s wr, 97 op/s
Nov 25 08:58:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:58:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/179490891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.209 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.220 253542 DEBUG nova.compute.provider_tree [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.237 253542 DEBUG nova.scheduler.client.report [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.275 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.276 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.363 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.363 253542 DEBUG nova.network.neutron [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.384 253542 INFO nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.407 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.501 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.502 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.502 253542 INFO nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Creating image(s)
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.522 253542 DEBUG nova.storage.rbd_utils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 9447890d-1fff-4536-a0cd-b889c23f7479_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.544 253542 DEBUG nova.storage.rbd_utils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 9447890d-1fff-4536-a0cd-b889c23f7479_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.566 253542 DEBUG nova.storage.rbd_utils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 9447890d-1fff-4536-a0cd-b889c23f7479_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.570 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.616 253542 DEBUG nova.policy [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283b89dbe3284e8ea2019b797673108b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfffff2c57a442a59b202d368d49bf00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.661 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.662 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.662 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.663 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.686 253542 DEBUG nova.storage.rbd_utils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 9447890d-1fff-4536-a0cd-b889c23f7479_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:20 compute-0 nova_compute[253538]: 2025-11-25 08:58:20.690 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 9447890d-1fff-4536-a0cd-b889c23f7479_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:20 compute-0 podman[384589]: 2025-11-25 08:58:20.85827723 +0000 UTC m=+0.109490181 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:58:21 compute-0 nova_compute[253538]: 2025-11-25 08:58:21.164 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:21 compute-0 ovn_controller[152859]: 2025-11-25T08:58:21Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:60:48:54 10.100.0.10
Nov 25 08:58:21 compute-0 ovn_controller[152859]: 2025-11-25T08:58:21Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:60:48:54 10.100.0.10
Nov 25 08:58:21 compute-0 ceph-mon[75015]: pgmap v2326: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 105 KiB/s wr, 97 op/s
Nov 25 08:58:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/179490891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:58:21 compute-0 nova_compute[253538]: 2025-11-25 08:58:21.923 253542 DEBUG nova.network.neutron [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Successfully created port: 6e0acf79-7148-4555-9265-b449f234806e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:58:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 225 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 98 op/s
Nov 25 08:58:22 compute-0 nova_compute[253538]: 2025-11-25 08:58:22.052 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 9447890d-1fff-4536-a0cd-b889c23f7479_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:22 compute-0 nova_compute[253538]: 2025-11-25 08:58:22.135 253542 DEBUG nova.storage.rbd_utils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] resizing rbd image 9447890d-1fff-4536-a0cd-b889c23f7479_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:58:22 compute-0 nova_compute[253538]: 2025-11-25 08:58:22.241 253542 DEBUG nova.objects.instance [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'migration_context' on Instance uuid 9447890d-1fff-4536-a0cd-b889c23f7479 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:58:22 compute-0 nova_compute[253538]: 2025-11-25 08:58:22.257 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:58:22 compute-0 nova_compute[253538]: 2025-11-25 08:58:22.257 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Ensure instance console log exists: /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:58:22 compute-0 nova_compute[253538]: 2025-11-25 08:58:22.258 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:22 compute-0 nova_compute[253538]: 2025-11-25 08:58:22.258 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:22 compute-0 nova_compute[253538]: 2025-11-25 08:58:22.258 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:58:22 compute-0 ceph-mon[75015]: pgmap v2327: 321 pgs: 321 active+clean; 225 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 98 op/s
Nov 25 08:58:23 compute-0 nova_compute[253538]: 2025-11-25 08:58:23.028 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:23 compute-0 nova_compute[253538]: 2025-11-25 08:58:23.386 253542 DEBUG nova.network.neutron [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Successfully updated port: 6e0acf79-7148-4555-9265-b449f234806e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:58:23 compute-0 nova_compute[253538]: 2025-11-25 08:58:23.399 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:58:23 compute-0 nova_compute[253538]: 2025-11-25 08:58:23.399 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquired lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:58:23 compute-0 nova_compute[253538]: 2025-11-25 08:58:23.400 253542 DEBUG nova.network.neutron [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:58:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:58:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:58:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:58:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:58:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:58:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:58:23 compute-0 nova_compute[253538]: 2025-11-25 08:58:23.474 253542 DEBUG nova.compute.manager [req-fe773c2a-af5f-438b-884b-bbf448e94031 req-007958fc-0cbe-4110-80c7-e1d419d0ae10 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-changed-6e0acf79-7148-4555-9265-b449f234806e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:23 compute-0 nova_compute[253538]: 2025-11-25 08:58:23.475 253542 DEBUG nova.compute.manager [req-fe773c2a-af5f-438b-884b-bbf448e94031 req-007958fc-0cbe-4110-80c7-e1d419d0ae10 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Refreshing instance network info cache due to event network-changed-6e0acf79-7148-4555-9265-b449f234806e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:58:23 compute-0 nova_compute[253538]: 2025-11-25 08:58:23.475 253542 DEBUG oslo_concurrency.lockutils [req-fe773c2a-af5f-438b-884b-bbf448e94031 req-007958fc-0cbe-4110-80c7-e1d419d0ae10 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:58:23 compute-0 nova_compute[253538]: 2025-11-25 08:58:23.920 253542 DEBUG nova.network.neutron [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:58:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 248 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 949 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 08:58:25 compute-0 ceph-mon[75015]: pgmap v2328: 321 pgs: 321 active+clean; 248 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 949 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.230 253542 DEBUG nova.network.neutron [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updating instance_info_cache with network_info: [{"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.340 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Releasing lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.340 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Instance network_info: |[{"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.340 253542 DEBUG oslo_concurrency.lockutils [req-fe773c2a-af5f-438b-884b-bbf448e94031 req-007958fc-0cbe-4110-80c7-e1d419d0ae10 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.341 253542 DEBUG nova.network.neutron [req-fe773c2a-af5f-438b-884b-bbf448e94031 req-007958fc-0cbe-4110-80c7-e1d419d0ae10 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Refreshing network info cache for port 6e0acf79-7148-4555-9265-b449f234806e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.344 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Start _get_guest_xml network_info=[{"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.349 253542 WARNING nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.358 253542 DEBUG nova.virt.libvirt.host [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.360 253542 DEBUG nova.virt.libvirt.host [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.364 253542 DEBUG nova.virt.libvirt.host [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.365 253542 DEBUG nova.virt.libvirt.host [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.366 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.367 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.368 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.368 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.369 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.369 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.369 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.370 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.370 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.371 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.371 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.372 253542 DEBUG nova.virt.hardware [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.377 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:58:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/732512523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.870 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.891 253542 DEBUG nova.storage.rbd_utils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 9447890d-1fff-4536-a0cd-b889c23f7479_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:25 compute-0 nova_compute[253538]: 2025-11-25 08:58:25.895 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 271 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 762 KiB/s rd, 3.4 MiB/s wr, 78 op/s
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.168 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/732512523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:58:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:58:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3299988978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.326 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.328 253542 DEBUG nova.virt.libvirt.vif [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:58:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-711993251',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-711993251',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=125,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAsKC+lTfCuui0Z3hitR52PvpfumhaLVjQ69ujiuIIevy7+I1pIFYPSg9LVKZgq98SCLPdhgyGDo4sdCyIoEofU+K0/ToXdERCiT/ZZOJi+hnfAHkBLfHK2RUxUGr/PGKw==',key_name='tempest-TestSecurityGroupsBasicOps-214386099',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-llehtawf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:58:20Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=9447890d-1fff-4536-a0cd-b889c23f7479,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.328 253542 DEBUG nova.network.os_vif_util [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.329 253542 DEBUG nova.network.os_vif_util [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:96:34,bridge_name='br-int',has_traffic_filtering=True,id=6e0acf79-7148-4555-9265-b449f234806e,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0acf79-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.330 253542 DEBUG nova.objects.instance [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9447890d-1fff-4536-a0cd-b889c23f7479 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.360 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:58:26 compute-0 nova_compute[253538]:   <uuid>9447890d-1fff-4536-a0cd-b889c23f7479</uuid>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   <name>instance-0000007d</name>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-711993251</nova:name>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:58:25</nova:creationTime>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:58:26 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:58:26 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:58:26 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:58:26 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:58:26 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:58:26 compute-0 nova_compute[253538]:         <nova:user uuid="283b89dbe3284e8ea2019b797673108b">tempest-TestSecurityGroupsBasicOps-1495447964-project-member</nova:user>
Nov 25 08:58:26 compute-0 nova_compute[253538]:         <nova:project uuid="cfffff2c57a442a59b202d368d49bf00">tempest-TestSecurityGroupsBasicOps-1495447964</nova:project>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:58:26 compute-0 nova_compute[253538]:         <nova:port uuid="6e0acf79-7148-4555-9265-b449f234806e">
Nov 25 08:58:26 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <system>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <entry name="serial">9447890d-1fff-4536-a0cd-b889c23f7479</entry>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <entry name="uuid">9447890d-1fff-4536-a0cd-b889c23f7479</entry>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     </system>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   <os>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   </os>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   <features>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   </features>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/9447890d-1fff-4536-a0cd-b889c23f7479_disk">
Nov 25 08:58:26 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       </source>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:58:26 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/9447890d-1fff-4536-a0cd-b889c23f7479_disk.config">
Nov 25 08:58:26 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       </source>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:58:26 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:f7:96:34"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <target dev="tap6e0acf79-71"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479/console.log" append="off"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <video>
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     </video>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:58:26 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:58:26 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:58:26 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:58:26 compute-0 nova_compute[253538]: </domain>
Nov 25 08:58:26 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.362 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Preparing to wait for external event network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.362 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.363 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.363 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.365 253542 DEBUG nova.virt.libvirt.vif [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:58:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-711993251',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-711993251',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=125,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAsKC+lTfCuui0Z3hitR52PvpfumhaLVjQ69ujiuIIevy7+I1pIFYPSg9LVKZgq98SCLPdhgyGDo4sdCyIoEofU+K0/ToXdERCiT/ZZOJi+hnfAHkBLfHK2RUxUGr/PGKw==',key_name='tempest-TestSecurityGroupsBasicOps-214386099',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-llehtawf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:58:20Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=9447890d-1fff-4536-a0cd-b889c23f7479,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.365 253542 DEBUG nova.network.os_vif_util [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.366 253542 DEBUG nova.network.os_vif_util [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:96:34,bridge_name='br-int',has_traffic_filtering=True,id=6e0acf79-7148-4555-9265-b449f234806e,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0acf79-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.367 253542 DEBUG os_vif [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:96:34,bridge_name='br-int',has_traffic_filtering=True,id=6e0acf79-7148-4555-9265-b449f234806e,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0acf79-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.368 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.369 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.370 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.374 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.374 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e0acf79-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.376 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e0acf79-71, col_values=(('external_ids', {'iface-id': '6e0acf79-7148-4555-9265-b449f234806e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:96:34', 'vm-uuid': '9447890d-1fff-4536-a0cd-b889c23f7479'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.378 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:26 compute-0 NetworkManager[48915]: <info>  [1764061106.3802] manager: (tap6e0acf79-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/526)
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.380 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.386 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.387 253542 INFO os_vif [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:96:34,bridge_name='br-int',has_traffic_filtering=True,id=6e0acf79-7148-4555-9265-b449f234806e,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0acf79-71')
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.449 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.450 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.450 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No VIF found with MAC fa:16:3e:f7:96:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.451 253542 INFO nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Using config drive
Nov 25 08:58:26 compute-0 nova_compute[253538]: 2025-11-25 08:58:26.476 253542 DEBUG nova.storage.rbd_utils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 9447890d-1fff-4536-a0cd-b889c23f7479_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:27 compute-0 ceph-mon[75015]: pgmap v2329: 321 pgs: 321 active+clean; 271 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 762 KiB/s rd, 3.4 MiB/s wr, 78 op/s
Nov 25 08:58:27 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3299988978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:58:27 compute-0 nova_compute[253538]: 2025-11-25 08:58:27.359 253542 INFO nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Creating config drive at /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479/disk.config
Nov 25 08:58:27 compute-0 nova_compute[253538]: 2025-11-25 08:58:27.370 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwuzaq3xh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:27 compute-0 nova_compute[253538]: 2025-11-25 08:58:27.440 253542 DEBUG nova.network.neutron [req-fe773c2a-af5f-438b-884b-bbf448e94031 req-007958fc-0cbe-4110-80c7-e1d419d0ae10 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updated VIF entry in instance network info cache for port 6e0acf79-7148-4555-9265-b449f234806e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:58:27 compute-0 nova_compute[253538]: 2025-11-25 08:58:27.442 253542 DEBUG nova.network.neutron [req-fe773c2a-af5f-438b-884b-bbf448e94031 req-007958fc-0cbe-4110-80c7-e1d419d0ae10 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updating instance_info_cache with network_info: [{"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:58:27 compute-0 nova_compute[253538]: 2025-11-25 08:58:27.461 253542 DEBUG oslo_concurrency.lockutils [req-fe773c2a-af5f-438b-884b-bbf448e94031 req-007958fc-0cbe-4110-80c7-e1d419d0ae10 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:58:27 compute-0 nova_compute[253538]: 2025-11-25 08:58:27.560 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwuzaq3xh" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:58:27 compute-0 nova_compute[253538]: 2025-11-25 08:58:27.595 253542 DEBUG nova.storage.rbd_utils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 9447890d-1fff-4536-a0cd-b889c23f7479_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:27 compute-0 nova_compute[253538]: 2025-11-25 08:58:27.599 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479/disk.config 9447890d-1fff-4536-a0cd-b889c23f7479_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:27 compute-0 nova_compute[253538]: 2025-11-25 08:58:27.773 253542 INFO nova.compute.manager [None req-2f61dd6f-f29c-452d-8b89-91800300b0ff 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Get console output
Nov 25 08:58:27 compute-0 nova_compute[253538]: 2025-11-25 08:58:27.780 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:58:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 292 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.031 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.222 253542 DEBUG oslo_concurrency.processutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479/disk.config 9447890d-1fff-4536-a0cd-b889c23f7479_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.223 253542 INFO nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Deleting local config drive /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479/disk.config because it was imported into RBD.
Nov 25 08:58:28 compute-0 kernel: tap6e0acf79-71: entered promiscuous mode
Nov 25 08:58:28 compute-0 NetworkManager[48915]: <info>  [1764061108.2997] manager: (tap6e0acf79-71): new Tun device (/org/freedesktop/NetworkManager/Devices/527)
Nov 25 08:58:28 compute-0 ovn_controller[152859]: 2025-11-25T08:58:28Z|01283|binding|INFO|Claiming lport 6e0acf79-7148-4555-9265-b449f234806e for this chassis.
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.300 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:28 compute-0 ovn_controller[152859]: 2025-11-25T08:58:28Z|01284|binding|INFO|6e0acf79-7148-4555-9265-b449f234806e: Claiming fa:16:3e:f7:96:34 10.100.0.5
Nov 25 08:58:28 compute-0 ovn_controller[152859]: 2025-11-25T08:58:28Z|01285|binding|INFO|Setting lport 6e0acf79-7148-4555-9265-b449f234806e ovn-installed in OVS
Nov 25 08:58:28 compute-0 ovn_controller[152859]: 2025-11-25T08:58:28Z|01286|binding|INFO|Setting lport 6e0acf79-7148-4555-9265-b449f234806e up in Southbound
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.325 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:96:34 10.100.0.5'], port_security=['fa:16:3e:f7:96:34 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9447890d-1fff-4536-a0cd-b889c23f7479', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20edaba8-b789-425d-a55d-3fa69c803e14 31fd3dba-a142-469b-a6ad-eb14c55eb5d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=acab9bf8-f301-413f-8774-b60482e3db77, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6e0acf79-7148-4555-9265-b449f234806e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.326 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.328 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6e0acf79-7148-4555-9265-b449f234806e in datapath bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 bound to our chassis
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.331 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.331 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bfb2a9da-10d5-4cf0-a585-a59d66a02fa0
Nov 25 08:58:28 compute-0 systemd-machined[215790]: New machine qemu-155-instance-0000007d.
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.351 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d64e0e64-23cf-4228-8d0d-ff4dc65c99f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.352 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbfb2a9da-11 in ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:58:28 compute-0 systemd-udevd[384844]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.355 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbfb2a9da-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.355 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1ce302-d60b-4249-9c40-28a8de296e66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.357 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[441d8f61-095d-4514-9d92-04ccf1dab754]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:28 compute-0 systemd[1]: Started Virtual Machine qemu-155-instance-0000007d.
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.373 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[63027f1e-0516-4057-af90-ceff8cdf3afc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:28 compute-0 NetworkManager[48915]: <info>  [1764061108.3765] device (tap6e0acf79-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:58:28 compute-0 NetworkManager[48915]: <info>  [1764061108.3778] device (tap6e0acf79-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.391 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0c97f1-b362-46b6-8715-27428448e13d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.425 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7fcf25dd-76a6-45d1-9b96-674a7b40d57c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:28 compute-0 systemd-udevd[384847]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.430 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[35ab4140-be03-488d-abd7-b92e7c1a0dd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:28 compute-0 NetworkManager[48915]: <info>  [1764061108.4326] manager: (tapbfb2a9da-10): new Veth device (/org/freedesktop/NetworkManager/Devices/528)
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.466 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d03df461-c461-46f1-8e43-9ad427dd0973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.469 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[35e2fe99-b596-4171-a966-1dacba4f5b1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:28 compute-0 NetworkManager[48915]: <info>  [1764061108.4937] device (tapbfb2a9da-10): carrier: link connected
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.498 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3875290f-5695-4b5e-b77c-fa92551b1555]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.522 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc61ba6-f079-40b8-993b-f4739d07f6f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfb2a9da-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:95:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646309, 'reachable_time': 34452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384876, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.537 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd00221-fed1-4d92-bc96-df06f99bfe10]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:9582'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646309, 'tstamp': 646309}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384877, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.552 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[08ffdb82-3713-40b8-92f7-af828d1dbba1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfb2a9da-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:95:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646309, 'reachable_time': 34452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384878, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.583 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8c999816-2e14-4b02-aa2a-b68f0c8444f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.650 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[24050fd2-8a35-4e32-816f-2fd7c47c1ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.652 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfb2a9da-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.652 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.652 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfb2a9da-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.705 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:28 compute-0 NetworkManager[48915]: <info>  [1764061108.7057] manager: (tapbfb2a9da-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/529)
Nov 25 08:58:28 compute-0 kernel: tapbfb2a9da-10: entered promiscuous mode
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.709 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbfb2a9da-10, col_values=(('external_ids', {'iface-id': '0b3aeb91-b51d-4bb6-a8fa-9a80bd12b96f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.710 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:28 compute-0 ovn_controller[152859]: 2025-11-25T08:58:28Z|01287|binding|INFO|Releasing lport 0b3aeb91-b51d-4bb6-a8fa-9a80bd12b96f from this chassis (sb_readonly=0)
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.716 253542 DEBUG nova.compute.manager [req-ada1435e-ccb5-4f2f-8c8e-55b5a91c74d6 req-f9057b7a-46ef-4eea-95bf-389cdf019a8f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.717 253542 DEBUG oslo_concurrency.lockutils [req-ada1435e-ccb5-4f2f-8c8e-55b5a91c74d6 req-f9057b7a-46ef-4eea-95bf-389cdf019a8f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.717 253542 DEBUG oslo_concurrency.lockutils [req-ada1435e-ccb5-4f2f-8c8e-55b5a91c74d6 req-f9057b7a-46ef-4eea-95bf-389cdf019a8f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.718 253542 DEBUG oslo_concurrency.lockutils [req-ada1435e-ccb5-4f2f-8c8e-55b5a91c74d6 req-f9057b7a-46ef-4eea-95bf-389cdf019a8f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.718 253542 DEBUG nova.compute.manager [req-ada1435e-ccb5-4f2f-8c8e-55b5a91c74d6 req-f9057b7a-46ef-4eea-95bf-389cdf019a8f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Processing event network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.727 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bfb2a9da-10d5-4cf0-a585-a59d66a02fa0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bfb2a9da-10d5-4cf0-a585-a59d66a02fa0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.728 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8bb9ae-5432-4629-aedc-1d837741856c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.729 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/bfb2a9da-10d5-4cf0-a585-a59d66a02fa0.pid.haproxy
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID bfb2a9da-10d5-4cf0-a585-a59d66a02fa0
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:58:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:28.730 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'env', 'PROCESS_TAG=haproxy-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bfb2a9da-10d5-4cf0-a585-a59d66a02fa0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.894 253542 DEBUG nova.compute.manager [req-6a178f23-fd9f-40d5-8591-4f04e7c1aa23 req-cd62ac00-8c9b-4316-be02-c043e122f543 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-unplugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.895 253542 DEBUG oslo_concurrency.lockutils [req-6a178f23-fd9f-40d5-8591-4f04e7c1aa23 req-cd62ac00-8c9b-4316-be02-c043e122f543 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.895 253542 DEBUG oslo_concurrency.lockutils [req-6a178f23-fd9f-40d5-8591-4f04e7c1aa23 req-cd62ac00-8c9b-4316-be02-c043e122f543 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.896 253542 DEBUG oslo_concurrency.lockutils [req-6a178f23-fd9f-40d5-8591-4f04e7c1aa23 req-cd62ac00-8c9b-4316-be02-c043e122f543 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.896 253542 DEBUG nova.compute.manager [req-6a178f23-fd9f-40d5-8591-4f04e7c1aa23 req-cd62ac00-8c9b-4316-be02-c043e122f543 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] No waiting events found dispatching network-vif-unplugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:58:28 compute-0 nova_compute[253538]: 2025-11-25 08:58:28.897 253542 WARNING nova.compute.manager [req-6a178f23-fd9f-40d5-8591-4f04e7c1aa23 req-cd62ac00-8c9b-4316-be02-c043e122f543 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received unexpected event network-vif-unplugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 for instance with vm_state active and task_state None.
Nov 25 08:58:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:58:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3080222673' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:58:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:58:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3080222673' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.142 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.144 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061109.1413898, 9447890d-1fff-4536-a0cd-b889c23f7479 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.145 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] VM Started (Lifecycle Event)
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.155 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:58:29 compute-0 podman[384951]: 2025-11-25 08:58:29.156839348 +0000 UTC m=+0.071662272 container create 6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.166 253542 INFO nova.virt.libvirt.driver [-] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Instance spawned successfully.
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.167 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.191 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.201 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:58:29 compute-0 systemd[1]: Started libpod-conmon-6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31.scope.
Nov 25 08:58:29 compute-0 podman[384951]: 2025-11-25 08:58:29.112111801 +0000 UTC m=+0.026934815 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.205 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.208 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.209 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.209 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.209 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.210 253542 DEBUG nova.virt.libvirt.driver [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:58:29 compute-0 ceph-mon[75015]: pgmap v2330: 321 pgs: 321 active+clean; 292 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 08:58:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3080222673' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:58:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3080222673' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:58:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e04d9311f2566d0bdda347b5c2a05beefee95a9cbfe91f3a4e7e52afc379d7d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.247 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.248 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061109.1418667, 9447890d-1fff-4536-a0cd-b889c23f7479 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.248 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] VM Paused (Lifecycle Event)
Nov 25 08:58:29 compute-0 podman[384951]: 2025-11-25 08:58:29.256049158 +0000 UTC m=+0.170872102 container init 6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 08:58:29 compute-0 podman[384951]: 2025-11-25 08:58:29.261578089 +0000 UTC m=+0.176401013 container start 6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.278 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.282 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061109.1538963, 9447890d-1fff-4536-a0cd-b889c23f7479 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.282 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] VM Resumed (Lifecycle Event)
Nov 25 08:58:29 compute-0 neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0[384967]: [NOTICE]   (384971) : New worker (384973) forked
Nov 25 08:58:29 compute-0 neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0[384967]: [NOTICE]   (384971) : Loading success.
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.293 253542 INFO nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Took 8.79 seconds to spawn the instance on the hypervisor.
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.293 253542 DEBUG nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.316 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.320 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.349 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.365 253542 INFO nova.compute.manager [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Took 9.91 seconds to build instance.
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.402 253542 DEBUG oslo_concurrency.lockutils [None req-85f9b72b-2a81-4b90-a114-2e48d85b1357 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.935 253542 INFO nova.compute.manager [None req-ea625294-b460-4f30-b6b1-2374ee7e0de2 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Get console output
Nov 25 08:58:29 compute-0 nova_compute[253538]: 2025-11-25 08:58:29.942 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:58:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2331: 321 pgs: 321 active+clean; 293 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Nov 25 08:58:30 compute-0 nova_compute[253538]: 2025-11-25 08:58:30.814 253542 DEBUG nova.compute.manager [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:30 compute-0 nova_compute[253538]: 2025-11-25 08:58:30.815 253542 DEBUG nova.compute.manager [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing instance network info cache due to event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:58:30 compute-0 nova_compute[253538]: 2025-11-25 08:58:30.815 253542 DEBUG oslo_concurrency.lockutils [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:58:30 compute-0 nova_compute[253538]: 2025-11-25 08:58:30.816 253542 DEBUG oslo_concurrency.lockutils [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:58:30 compute-0 nova_compute[253538]: 2025-11-25 08:58:30.816 253542 DEBUG nova.network.neutron [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:58:30 compute-0 nova_compute[253538]: 2025-11-25 08:58:30.997 253542 DEBUG nova.compute.manager [req-aa9fa364-0b16-4fb0-8afe-b43826732643 req-e310efea-8c56-45d1-95d3-43e839c59ecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:30 compute-0 nova_compute[253538]: 2025-11-25 08:58:30.998 253542 DEBUG oslo_concurrency.lockutils [req-aa9fa364-0b16-4fb0-8afe-b43826732643 req-e310efea-8c56-45d1-95d3-43e839c59ecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:30 compute-0 nova_compute[253538]: 2025-11-25 08:58:30.998 253542 DEBUG oslo_concurrency.lockutils [req-aa9fa364-0b16-4fb0-8afe-b43826732643 req-e310efea-8c56-45d1-95d3-43e839c59ecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:30 compute-0 nova_compute[253538]: 2025-11-25 08:58:30.999 253542 DEBUG oslo_concurrency.lockutils [req-aa9fa364-0b16-4fb0-8afe-b43826732643 req-e310efea-8c56-45d1-95d3-43e839c59ecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:31 compute-0 nova_compute[253538]: 2025-11-25 08:58:31.000 253542 DEBUG nova.compute.manager [req-aa9fa364-0b16-4fb0-8afe-b43826732643 req-e310efea-8c56-45d1-95d3-43e839c59ecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] No waiting events found dispatching network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:58:31 compute-0 nova_compute[253538]: 2025-11-25 08:58:31.000 253542 WARNING nova.compute.manager [req-aa9fa364-0b16-4fb0-8afe-b43826732643 req-e310efea-8c56-45d1-95d3-43e839c59ecb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received unexpected event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 for instance with vm_state active and task_state None.
Nov 25 08:58:31 compute-0 ceph-mon[75015]: pgmap v2331: 321 pgs: 321 active+clean; 293 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Nov 25 08:58:31 compute-0 nova_compute[253538]: 2025-11-25 08:58:31.379 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 293 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 902 KiB/s rd, 3.9 MiB/s wr, 118 op/s
Nov 25 08:58:32 compute-0 nova_compute[253538]: 2025-11-25 08:58:32.340 253542 INFO nova.compute.manager [None req-f8e5d89b-e181-41ce-944e-e89a3ee1341a 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Get console output
Nov 25 08:58:32 compute-0 nova_compute[253538]: 2025-11-25 08:58:32.347 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:58:32 compute-0 nova_compute[253538]: 2025-11-25 08:58:32.533 253542 DEBUG nova.network.neutron [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updated VIF entry in instance network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:58:32 compute-0 nova_compute[253538]: 2025-11-25 08:58:32.534 253542 DEBUG nova.network.neutron [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updating instance_info_cache with network_info: [{"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:58:32 compute-0 nova_compute[253538]: 2025-11-25 08:58:32.549 253542 DEBUG oslo_concurrency.lockutils [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:58:32 compute-0 nova_compute[253538]: 2025-11-25 08:58:32.550 253542 DEBUG nova.compute.manager [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:32 compute-0 nova_compute[253538]: 2025-11-25 08:58:32.550 253542 DEBUG oslo_concurrency.lockutils [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:32 compute-0 nova_compute[253538]: 2025-11-25 08:58:32.550 253542 DEBUG oslo_concurrency.lockutils [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:32 compute-0 nova_compute[253538]: 2025-11-25 08:58:32.550 253542 DEBUG oslo_concurrency.lockutils [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:32 compute-0 nova_compute[253538]: 2025-11-25 08:58:32.551 253542 DEBUG nova.compute.manager [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] No waiting events found dispatching network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:58:32 compute-0 nova_compute[253538]: 2025-11-25 08:58:32.551 253542 WARNING nova.compute.manager [req-ddc78a16-693c-4c33-b982-bec61904289b req-87d8ae24-61c2-4fb0-9855-2849655cba83 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received unexpected event network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e for instance with vm_state active and task_state None.
Nov 25 08:58:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:58:33 compute-0 nova_compute[253538]: 2025-11-25 08:58:33.033 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:33 compute-0 nova_compute[253538]: 2025-11-25 08:58:33.125 253542 DEBUG nova.compute.manager [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:33 compute-0 nova_compute[253538]: 2025-11-25 08:58:33.126 253542 DEBUG nova.compute.manager [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing instance network info cache due to event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:58:33 compute-0 nova_compute[253538]: 2025-11-25 08:58:33.127 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:58:33 compute-0 nova_compute[253538]: 2025-11-25 08:58:33.128 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:58:33 compute-0 nova_compute[253538]: 2025-11-25 08:58:33.129 253542 DEBUG nova.network.neutron [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:58:33 compute-0 ceph-mon[75015]: pgmap v2332: 321 pgs: 321 active+clean; 293 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 902 KiB/s rd, 3.9 MiB/s wr, 118 op/s
Nov 25 08:58:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2333: 321 pgs: 321 active+clean; 293 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 104 op/s
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.373 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "7be1983b-1609-4155-b634-d14fc92539e8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.375 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.376 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "7be1983b-1609-4155-b634-d14fc92539e8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.376 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.377 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.379 253542 INFO nova.compute.manager [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Terminating instance
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.381 253542 DEBUG nova.compute.manager [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:58:34 compute-0 kernel: tap91977aa8-62 (unregistering): left promiscuous mode
Nov 25 08:58:34 compute-0 NetworkManager[48915]: <info>  [1764061114.4463] device (tap91977aa8-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:58:34 compute-0 ovn_controller[152859]: 2025-11-25T08:58:34Z|01288|binding|INFO|Releasing lport 91977aa8-6282-46cb-bc4f-42567be639f9 from this chassis (sb_readonly=0)
Nov 25 08:58:34 compute-0 ovn_controller[152859]: 2025-11-25T08:58:34Z|01289|binding|INFO|Setting lport 91977aa8-6282-46cb-bc4f-42567be639f9 down in Southbound
Nov 25 08:58:34 compute-0 ovn_controller[152859]: 2025-11-25T08:58:34Z|01290|binding|INFO|Removing iface tap91977aa8-62 ovn-installed in OVS
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.457 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.484 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:48:54 10.100.0.10'], port_security=['fa:16:3e:60:48:54 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7be1983b-1609-4155-b634-d14fc92539e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf619d00-d285-4b9e-9996-77997075375e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3d244d5b-1cd0-48b4-a9a9-c4313e58642b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8f4cf68-54bc-4d0b-a896-1ef280f60a0a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=91977aa8-6282-46cb-bc4f-42567be639f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.488 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.489 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 91977aa8-6282-46cb-bc4f-42567be639f9 in datapath bf619d00-d285-4b9e-9996-77997075375e unbound from our chassis
Nov 25 08:58:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.491 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf619d00-d285-4b9e-9996-77997075375e
Nov 25 08:58:34 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Nov 25 08:58:34 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007c.scope: Consumed 14.350s CPU time.
Nov 25 08:58:34 compute-0 systemd-machined[215790]: Machine qemu-154-instance-0000007c terminated.
Nov 25 08:58:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.504 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f49c466d-43ea-4b22-a079-d271b07d1c09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.538 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8658ac62-1bcf-476b-ac47-c8a088576dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.544 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2dfed9c9-61d7-4918-b7ac-5aa0e69dfa56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.575 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[486c0961-160f-4487-bee4-0f35f38b5475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.595 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3e3beb41-4329-43b1-9f81-7b2ad093d759]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf619d00-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:be:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 369], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642553, 'reachable_time': 19033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384994, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.623 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a4028733-9dd8-475e-85ca-9ec7923d8daf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbf619d00-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642566, 'tstamp': 642566}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384997, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbf619d00-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642569, 'tstamp': 642569}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384997, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.624 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf619d00-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.669 253542 INFO nova.virt.libvirt.driver [-] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Instance destroyed successfully.
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.669 253542 DEBUG nova.objects.instance [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 7be1983b-1609-4155-b634-d14fc92539e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.670 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.673 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.674 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf619d00-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.675 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:58:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.675 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf619d00-d0, col_values=(('external_ids', {'iface-id': 'c544ed70-c59f-4fbe-97c5-a521f548f971'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:34.675 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.684 253542 DEBUG nova.virt.libvirt.vif [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:57:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1380054515',display_name='tempest-TestNetworkBasicOps-server-1380054515',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1380054515',id=124,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC0onj7xYRJ9u2qwke0uzdeZmqzf6dyWSNUBde1bKNBXCsKy64L0Qx4G4FAfzNe1upG08i2qlETnDI+nze7Y9Zy5eH5tzWkkjtBeM6yGgDQ+VcsL5Xix937kJOB4ium+2w==',key_name='tempest-TestNetworkBasicOps-618890367',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:58:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-81q88fvv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:58:08Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=7be1983b-1609-4155-b634-d14fc92539e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.685 253542 DEBUG nova.network.os_vif_util [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.686 253542 DEBUG nova.network.os_vif_util [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:48:54,bridge_name='br-int',has_traffic_filtering=True,id=91977aa8-6282-46cb-bc4f-42567be639f9,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91977aa8-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.687 253542 DEBUG os_vif [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:48:54,bridge_name='br-int',has_traffic_filtering=True,id=91977aa8-6282-46cb-bc4f-42567be639f9,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91977aa8-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.689 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.689 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91977aa8-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.691 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.694 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:58:34 compute-0 nova_compute[253538]: 2025-11-25 08:58:34.696 253542 INFO os_vif [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:48:54,bridge_name='br-int',has_traffic_filtering=True,id=91977aa8-6282-46cb-bc4f-42567be639f9,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91977aa8-62')
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.200 253542 INFO nova.virt.libvirt.driver [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Deleting instance files /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8_del
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.202 253542 INFO nova.virt.libvirt.driver [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Deletion of /var/lib/nova/instances/7be1983b-1609-4155-b634-d14fc92539e8_del complete
Nov 25 08:58:35 compute-0 ceph-mon[75015]: pgmap v2333: 321 pgs: 321 active+clean; 293 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 104 op/s
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.335 253542 DEBUG nova.compute.manager [req-a0863f22-8c8e-424d-80c4-357582aff581 req-0d334204-feaa-41a6-8fd4-7fbda4375481 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-vif-unplugged-91977aa8-6282-46cb-bc4f-42567be639f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.336 253542 DEBUG oslo_concurrency.lockutils [req-a0863f22-8c8e-424d-80c4-357582aff581 req-0d334204-feaa-41a6-8fd4-7fbda4375481 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7be1983b-1609-4155-b634-d14fc92539e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.336 253542 DEBUG oslo_concurrency.lockutils [req-a0863f22-8c8e-424d-80c4-357582aff581 req-0d334204-feaa-41a6-8fd4-7fbda4375481 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.337 253542 DEBUG oslo_concurrency.lockutils [req-a0863f22-8c8e-424d-80c4-357582aff581 req-0d334204-feaa-41a6-8fd4-7fbda4375481 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.338 253542 DEBUG nova.compute.manager [req-a0863f22-8c8e-424d-80c4-357582aff581 req-0d334204-feaa-41a6-8fd4-7fbda4375481 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] No waiting events found dispatching network-vif-unplugged-91977aa8-6282-46cb-bc4f-42567be639f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.338 253542 DEBUG nova.compute.manager [req-a0863f22-8c8e-424d-80c4-357582aff581 req-0d334204-feaa-41a6-8fd4-7fbda4375481 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-vif-unplugged-91977aa8-6282-46cb-bc4f-42567be639f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.343 253542 INFO nova.compute.manager [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Took 0.96 seconds to destroy the instance on the hypervisor.
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.344 253542 DEBUG oslo.service.loopingcall [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.345 253542 DEBUG nova.compute.manager [-] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.345 253542 DEBUG nova.network.neutron [-] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.422 253542 DEBUG nova.compute.manager [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-changed-91977aa8-6282-46cb-bc4f-42567be639f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.423 253542 DEBUG nova.compute.manager [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Refreshing instance network info cache due to event network-changed-91977aa8-6282-46cb-bc4f-42567be639f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.424 253542 DEBUG oslo_concurrency.lockutils [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.424 253542 DEBUG oslo_concurrency.lockutils [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:58:35 compute-0 nova_compute[253538]: 2025-11-25 08:58:35.425 253542 DEBUG nova.network.neutron [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Refreshing network info cache for port 91977aa8-6282-46cb-bc4f-42567be639f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:58:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2334: 321 pgs: 321 active+clean; 293 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.441 253542 DEBUG nova.network.neutron [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updated VIF entry in instance network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.443 253542 DEBUG nova.network.neutron [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updating instance_info_cache with network_info: [{"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.463 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.467 253542 DEBUG nova.compute.manager [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.468 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.469 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.469 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.470 253542 DEBUG nova.compute.manager [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] No waiting events found dispatching network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.470 253542 WARNING nova.compute.manager [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received unexpected event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 for instance with vm_state active and task_state None.
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.471 253542 DEBUG nova.compute.manager [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.472 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.472 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.473 253542 DEBUG oslo_concurrency.lockutils [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.473 253542 DEBUG nova.compute.manager [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] No waiting events found dispatching network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.474 253542 WARNING nova.compute.manager [req-586291b1-8def-440e-a140-d6f436bc9a84 req-de1ee20a-960c-4c7f-bb19-93f1e3fff779 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received unexpected event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 for instance with vm_state active and task_state None.
Nov 25 08:58:36 compute-0 ceph-mon[75015]: pgmap v2334: 321 pgs: 321 active+clean; 293 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.835 253542 DEBUG nova.network.neutron [-] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:58:36 compute-0 nova_compute[253538]: 2025-11-25 08:58:36.869 253542 INFO nova.compute.manager [-] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Took 1.52 seconds to deallocate network for instance.
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.049 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.050 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.187 253542 DEBUG oslo_concurrency.processutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:37 compute-0 sshd-session[385027]: Invalid user sonarqube from 45.202.211.6 port 55918
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.441 253542 DEBUG nova.network.neutron [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Updated VIF entry in instance network info cache for port 91977aa8-6282-46cb-bc4f-42567be639f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.443 253542 DEBUG nova.network.neutron [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Updating instance_info_cache with network_info: [{"id": "91977aa8-6282-46cb-bc4f-42567be639f9", "address": "fa:16:3e:60:48:54", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91977aa8-62", "ovs_interfaceid": "91977aa8-6282-46cb-bc4f-42567be639f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:58:37 compute-0 sshd-session[385027]: Received disconnect from 45.202.211.6 port 55918:11: Bye Bye [preauth]
Nov 25 08:58:37 compute-0 sshd-session[385027]: Disconnected from invalid user sonarqube 45.202.211.6 port 55918 [preauth]
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.556 253542 DEBUG nova.compute.manager [req-29ece294-e5a2-40cb-bb57-d11b517832ab req-44ad8be3-f8bc-4f7f-88bf-f6d3fa3b08cf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-vif-deleted-91977aa8-6282-46cb-bc4f-42567be639f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.592 253542 DEBUG oslo_concurrency.lockutils [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7be1983b-1609-4155-b634-d14fc92539e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.593 253542 DEBUG nova.compute.manager [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-changed-6e0acf79-7148-4555-9265-b449f234806e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.593 253542 DEBUG nova.compute.manager [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Refreshing instance network info cache due to event network-changed-6e0acf79-7148-4555-9265-b449f234806e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.593 253542 DEBUG oslo_concurrency.lockutils [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.594 253542 DEBUG oslo_concurrency.lockutils [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.594 253542 DEBUG nova.network.neutron [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Refreshing network info cache for port 6e0acf79-7148-4555-9265-b449f234806e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.637 253542 DEBUG nova.compute.manager [req-de2f0bbe-add0-45a4-aa44-a69ce9182534 req-b542062e-58b6-459a-856a-4bb90ec64ca7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received event network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.638 253542 DEBUG oslo_concurrency.lockutils [req-de2f0bbe-add0-45a4-aa44-a69ce9182534 req-b542062e-58b6-459a-856a-4bb90ec64ca7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7be1983b-1609-4155-b634-d14fc92539e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.638 253542 DEBUG oslo_concurrency.lockutils [req-de2f0bbe-add0-45a4-aa44-a69ce9182534 req-b542062e-58b6-459a-856a-4bb90ec64ca7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.639 253542 DEBUG oslo_concurrency.lockutils [req-de2f0bbe-add0-45a4-aa44-a69ce9182534 req-b542062e-58b6-459a-856a-4bb90ec64ca7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.639 253542 DEBUG nova.compute.manager [req-de2f0bbe-add0-45a4-aa44-a69ce9182534 req-b542062e-58b6-459a-856a-4bb90ec64ca7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] No waiting events found dispatching network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.640 253542 WARNING nova.compute.manager [req-de2f0bbe-add0-45a4-aa44-a69ce9182534 req-b542062e-58b6-459a-856a-4bb90ec64ca7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Received unexpected event network-vif-plugged-91977aa8-6282-46cb-bc4f-42567be639f9 for instance with vm_state deleted and task_state None.
Nov 25 08:58:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:58:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2276302590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.677 253542 DEBUG oslo_concurrency.processutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.685 253542 DEBUG nova.compute.provider_tree [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.709 253542 DEBUG nova.scheduler.client.report [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.778 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:37 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2276302590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:58:37 compute-0 nova_compute[253538]: 2025-11-25 08:58:37.908 253542 INFO nova.scheduler.client.report [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 7be1983b-1609-4155-b634-d14fc92539e8
Nov 25 08:58:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 245 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 514 KiB/s wr, 125 op/s
Nov 25 08:58:38 compute-0 nova_compute[253538]: 2025-11-25 08:58:38.015 253542 DEBUG oslo_concurrency.lockutils [None req-77a66044-8283-44d2-8074-30349f14cadf 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "7be1983b-1609-4155-b634-d14fc92539e8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:38 compute-0 nova_compute[253538]: 2025-11-25 08:58:38.039 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:38 compute-0 ceph-mon[75015]: pgmap v2335: 321 pgs: 321 active+clean; 245 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 514 KiB/s wr, 125 op/s
Nov 25 08:58:39 compute-0 nova_compute[253538]: 2025-11-25 08:58:39.691 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 213 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 103 op/s
Nov 25 08:58:40 compute-0 nova_compute[253538]: 2025-11-25 08:58:40.145 253542 DEBUG nova.network.neutron [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updated VIF entry in instance network info cache for port 6e0acf79-7148-4555-9265-b449f234806e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:58:40 compute-0 nova_compute[253538]: 2025-11-25 08:58:40.146 253542 DEBUG nova.network.neutron [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updating instance_info_cache with network_info: [{"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:58:40 compute-0 nova_compute[253538]: 2025-11-25 08:58:40.168 253542 DEBUG oslo_concurrency.lockutils [req-e9909664-05b2-4ecd-806e-585d941829ac req-154f9cc2-45b6-4491-91b8-dc4978dbd242 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:58:40 compute-0 nova_compute[253538]: 2025-11-25 08:58:40.607 253542 DEBUG nova.compute.manager [req-909e201a-740e-4baf-8a18-f64eedd9253e req-1773e90d-1899-4600-9a76-d0dfda02bd3f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:40 compute-0 nova_compute[253538]: 2025-11-25 08:58:40.607 253542 DEBUG nova.compute.manager [req-909e201a-740e-4baf-8a18-f64eedd9253e req-1773e90d-1899-4600-9a76-d0dfda02bd3f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing instance network info cache due to event network-changed-027edfd6-09a6-4bf4-88df-8a19e59d1f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:58:40 compute-0 nova_compute[253538]: 2025-11-25 08:58:40.608 253542 DEBUG oslo_concurrency.lockutils [req-909e201a-740e-4baf-8a18-f64eedd9253e req-1773e90d-1899-4600-9a76-d0dfda02bd3f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:58:40 compute-0 nova_compute[253538]: 2025-11-25 08:58:40.608 253542 DEBUG oslo_concurrency.lockutils [req-909e201a-740e-4baf-8a18-f64eedd9253e req-1773e90d-1899-4600-9a76-d0dfda02bd3f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:58:40 compute-0 nova_compute[253538]: 2025-11-25 08:58:40.609 253542 DEBUG nova.network.neutron [req-909e201a-740e-4baf-8a18-f64eedd9253e req-1773e90d-1899-4600-9a76-d0dfda02bd3f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Refreshing network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:58:40 compute-0 nova_compute[253538]: 2025-11-25 08:58:40.871 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:40 compute-0 nova_compute[253538]: 2025-11-25 08:58:40.872 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:40 compute-0 nova_compute[253538]: 2025-11-25 08:58:40.872 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:40 compute-0 nova_compute[253538]: 2025-11-25 08:58:40.872 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:40 compute-0 nova_compute[253538]: 2025-11-25 08:58:40.873 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:40 compute-0 nova_compute[253538]: 2025-11-25 08:58:40.874 253542 INFO nova.compute.manager [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Terminating instance
Nov 25 08:58:40 compute-0 nova_compute[253538]: 2025-11-25 08:58:40.875 253542 DEBUG nova.compute.manager [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:58:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:41.081 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:41.081 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:41.082 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:41 compute-0 ceph-mon[75015]: pgmap v2336: 321 pgs: 321 active+clean; 213 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 103 op/s
Nov 25 08:58:41 compute-0 kernel: tap027edfd6-09 (unregistering): left promiscuous mode
Nov 25 08:58:41 compute-0 NetworkManager[48915]: <info>  [1764061121.4334] device (tap027edfd6-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:58:41 compute-0 ovn_controller[152859]: 2025-11-25T08:58:41Z|01291|binding|INFO|Releasing lport 027edfd6-09a6-4bf4-88df-8a19e59d1f72 from this chassis (sb_readonly=0)
Nov 25 08:58:41 compute-0 ovn_controller[152859]: 2025-11-25T08:58:41Z|01292|binding|INFO|Setting lport 027edfd6-09a6-4bf4-88df-8a19e59d1f72 down in Southbound
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.443 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:41 compute-0 ovn_controller[152859]: 2025-11-25T08:58:41Z|01293|binding|INFO|Removing iface tap027edfd6-09 ovn-installed in OVS
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.447 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.465 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:41.487 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:e8:8d 10.100.0.9'], port_security=['fa:16:3e:d9:e8:8d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5e55aa91-2aa5-4443-b976-0f3e4409e8ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf619d00-d285-4b9e-9996-77997075375e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9d7a9182-641d-4ca5-a3ab-361222a77391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8f4cf68-54bc-4d0b-a896-1ef280f60a0a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=027edfd6-09a6-4bf4-88df-8a19e59d1f72) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:58:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:41.489 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 027edfd6-09a6-4bf4-88df-8a19e59d1f72 in datapath bf619d00-d285-4b9e-9996-77997075375e unbound from our chassis
Nov 25 08:58:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:41.490 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf619d00-d285-4b9e-9996-77997075375e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:58:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:41.491 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b8a73b-861d-49dc-9ea5-da044b4ede54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:41.492 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bf619d00-d285-4b9e-9996-77997075375e namespace which is not needed anymore
Nov 25 08:58:41 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Nov 25 08:58:41 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007b.scope: Consumed 15.511s CPU time.
Nov 25 08:58:41 compute-0 systemd-machined[215790]: Machine qemu-153-instance-0000007b terminated.
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.577 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 08:58:41 compute-0 neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e[383642]: [NOTICE]   (383647) : haproxy version is 2.8.14-c23fe91
Nov 25 08:58:41 compute-0 neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e[383642]: [NOTICE]   (383647) : path to executable is /usr/sbin/haproxy
Nov 25 08:58:41 compute-0 neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e[383642]: [WARNING]  (383647) : Exiting Master process...
Nov 25 08:58:41 compute-0 neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e[383642]: [ALERT]    (383647) : Current worker (383649) exited with code 143 (Terminated)
Nov 25 08:58:41 compute-0 neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e[383642]: [WARNING]  (383647) : All workers exited. Exiting... (0)
Nov 25 08:58:41 compute-0 systemd[1]: libpod-2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94.scope: Deactivated successfully.
Nov 25 08:58:41 compute-0 podman[385076]: 2025-11-25 08:58:41.688653256 +0000 UTC m=+0.083077472 container died 2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.727 253542 INFO nova.virt.libvirt.driver [-] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Instance destroyed successfully.
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.730 253542 DEBUG nova.objects.instance [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 5e55aa91-2aa5-4443-b976-0f3e4409e8ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.742 253542 DEBUG nova.virt.libvirt.vif [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:57:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2129911111',display_name='tempest-TestNetworkBasicOps-server-2129911111',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2129911111',id=123,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN+yQyEQntLRhVgjlecOd8kZgcuJccGeN4XUmVTtZkXatG89jriyrNCx89aMj4+ppyzZUWi3hVDPwYltwxWsUBkgwfbxG4JDKfBHeP0jrr3H+wCGmTdCkbNarpgdJwyag==',key_name='tempest-TestNetworkBasicOps-619601627',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:57:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-vzp7o245',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:57:51Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=5e55aa91-2aa5-4443-b976-0f3e4409e8ec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.745 253542 DEBUG nova.network.os_vif_util [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.747 253542 DEBUG nova.network.os_vif_util [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:e8:8d,bridge_name='br-int',has_traffic_filtering=True,id=027edfd6-09a6-4bf4-88df-8a19e59d1f72,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap027edfd6-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.748 253542 DEBUG os_vif [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:e8:8d,bridge_name='br-int',has_traffic_filtering=True,id=027edfd6-09a6-4bf4-88df-8a19e59d1f72,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap027edfd6-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.750 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.751 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap027edfd6-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.752 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.755 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:58:41 compute-0 nova_compute[253538]: 2025-11-25 08:58:41.758 253542 INFO os_vif [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:e8:8d,bridge_name='br-int',has_traffic_filtering=True,id=027edfd6-09a6-4bf4-88df-8a19e59d1f72,network=Network(bf619d00-d285-4b9e-9996-77997075375e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap027edfd6-09')
Nov 25 08:58:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4540958df62a9ce7e0b75ff06f1bfd795b20e2ca8437d0035d39c10d955612c-merged.mount: Deactivated successfully.
Nov 25 08:58:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94-userdata-shm.mount: Deactivated successfully.
Nov 25 08:58:41 compute-0 podman[385076]: 2025-11-25 08:58:41.933610303 +0000 UTC m=+0.328034479 container cleanup 2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 08:58:41 compute-0 systemd[1]: libpod-conmon-2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94.scope: Deactivated successfully.
Nov 25 08:58:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2337: 321 pgs: 321 active+clean; 224 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 884 KiB/s wr, 106 op/s
Nov 25 08:58:42 compute-0 podman[385136]: 2025-11-25 08:58:42.244579538 +0000 UTC m=+0.280906987 container remove 2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 08:58:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.253 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[527e4013-ea1c-4db4-93a5-131b40a1173f]: (4, ('Tue Nov 25 08:58:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e (2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94)\n2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94\nTue Nov 25 08:58:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bf619d00-d285-4b9e-9996-77997075375e (2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94)\n2e741201d07043758356ba915932a1ecc9e834e5043588acafeb96a919795c94\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.256 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c1953a-60d5-475d-98f7-8bbf22ecc2aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.257 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf619d00-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.259 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:42 compute-0 kernel: tapbf619d00-d0: left promiscuous mode
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.265 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.265 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[81324436-b5de-4d43-b231-05d1d4d3c12f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.279 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.283 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2d5d28-abe6-4bd0-a724-d3391bac558b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.285 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[854e1cf2-9ad7-4a29-b422-86a41d5da7d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.305 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a0b870-d8e4-401b-b7a1-99d357fa5eda]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642545, 'reachable_time': 19642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385152, 'error': None, 'target': 'ovnmeta-bf619d00-d285-4b9e-9996-77997075375e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:42 compute-0 systemd[1]: run-netns-ovnmeta\x2dbf619d00\x2dd285\x2d4b9e\x2d9996\x2d77997075375e.mount: Deactivated successfully.
Nov 25 08:58:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.311 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bf619d00-d285-4b9e-9996-77997075375e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:58:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:42.311 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[759ea5ec-818b-4b5a-b531-8d692e9fb199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.411 253542 DEBUG nova.network.neutron [req-909e201a-740e-4baf-8a18-f64eedd9253e req-1773e90d-1899-4600-9a76-d0dfda02bd3f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updated VIF entry in instance network info cache for port 027edfd6-09a6-4bf4-88df-8a19e59d1f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.411 253542 DEBUG nova.network.neutron [req-909e201a-740e-4baf-8a18-f64eedd9253e req-1773e90d-1899-4600-9a76-d0dfda02bd3f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updating instance_info_cache with network_info: [{"id": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "address": "fa:16:3e:d9:e8:8d", "network": {"id": "bf619d00-d285-4b9e-9996-77997075375e", "bridge": "br-int", "label": "tempest-network-smoke--598319533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap027edfd6-09", "ovs_interfaceid": "027edfd6-09a6-4bf4-88df-8a19e59d1f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:58:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.727 253542 DEBUG nova.compute.manager [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-unplugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.727 253542 DEBUG oslo_concurrency.lockutils [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.728 253542 DEBUG oslo_concurrency.lockutils [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.728 253542 DEBUG oslo_concurrency.lockutils [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.728 253542 DEBUG nova.compute.manager [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] No waiting events found dispatching network-vif-unplugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.728 253542 DEBUG nova.compute.manager [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-unplugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.729 253542 DEBUG nova.compute.manager [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.729 253542 DEBUG oslo_concurrency.lockutils [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.729 253542 DEBUG oslo_concurrency.lockutils [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.729 253542 DEBUG oslo_concurrency.lockutils [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.729 253542 DEBUG nova.compute.manager [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] No waiting events found dispatching network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.730 253542 WARNING nova.compute.manager [req-29d82191-a9f1-4446-8725-9728b8797021 req-4362fd0e-1994-4d49-ac2e-bcb65f5cd51d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received unexpected event network-vif-plugged-027edfd6-09a6-4bf4-88df-8a19e59d1f72 for instance with vm_state active and task_state deleting.
Nov 25 08:58:42 compute-0 nova_compute[253538]: 2025-11-25 08:58:42.826 253542 DEBUG oslo_concurrency.lockutils [req-909e201a-740e-4baf-8a18-f64eedd9253e req-1773e90d-1899-4600-9a76-d0dfda02bd3f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-5e55aa91-2aa5-4443-b976-0f3e4409e8ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:58:43 compute-0 nova_compute[253538]: 2025-11-25 08:58:43.039 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:43 compute-0 nova_compute[253538]: 2025-11-25 08:58:43.065 253542 INFO nova.virt.libvirt.driver [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Deleting instance files /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec_del
Nov 25 08:58:43 compute-0 nova_compute[253538]: 2025-11-25 08:58:43.066 253542 INFO nova.virt.libvirt.driver [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Deletion of /var/lib/nova/instances/5e55aa91-2aa5-4443-b976-0f3e4409e8ec_del complete
Nov 25 08:58:43 compute-0 ovn_controller[152859]: 2025-11-25T08:58:43Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:96:34 10.100.0.5
Nov 25 08:58:43 compute-0 ovn_controller[152859]: 2025-11-25T08:58:43Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:96:34 10.100.0.5
Nov 25 08:58:43 compute-0 ceph-mon[75015]: pgmap v2337: 321 pgs: 321 active+clean; 224 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 884 KiB/s wr, 106 op/s
Nov 25 08:58:43 compute-0 nova_compute[253538]: 2025-11-25 08:58:43.278 253542 INFO nova.compute.manager [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Took 2.40 seconds to destroy the instance on the hypervisor.
Nov 25 08:58:43 compute-0 nova_compute[253538]: 2025-11-25 08:58:43.278 253542 DEBUG oslo.service.loopingcall [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:58:43 compute-0 nova_compute[253538]: 2025-11-25 08:58:43.279 253542 DEBUG nova.compute.manager [-] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:58:43 compute-0 nova_compute[253538]: 2025-11-25 08:58:43.279 253542 DEBUG nova.network.neutron [-] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:58:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2338: 321 pgs: 321 active+clean; 197 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 113 op/s
Nov 25 08:58:44 compute-0 nova_compute[253538]: 2025-11-25 08:58:44.304 253542 DEBUG nova.network.neutron [-] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:58:44 compute-0 nova_compute[253538]: 2025-11-25 08:58:44.344 253542 INFO nova.compute.manager [-] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Took 1.07 seconds to deallocate network for instance.
Nov 25 08:58:44 compute-0 nova_compute[253538]: 2025-11-25 08:58:44.446 253542 DEBUG nova.compute.manager [req-18605d96-9067-4a5a-b84e-2d4a5c3a519c req-b815ea7f-66ab-44d1-aac9-ac8ed8d491cd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Received event network-vif-deleted-027edfd6-09a6-4bf4-88df-8a19e59d1f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:44 compute-0 nova_compute[253538]: 2025-11-25 08:58:44.449 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:44 compute-0 nova_compute[253538]: 2025-11-25 08:58:44.449 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:44 compute-0 nova_compute[253538]: 2025-11-25 08:58:44.536 253542 DEBUG oslo_concurrency.processutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:44 compute-0 nova_compute[253538]: 2025-11-25 08:58:44.581 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:58:44 compute-0 nova_compute[253538]: 2025-11-25 08:58:44.584 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:58:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:58:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2767464915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:58:44 compute-0 nova_compute[253538]: 2025-11-25 08:58:44.994 253542 DEBUG oslo_concurrency.processutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:45 compute-0 nova_compute[253538]: 2025-11-25 08:58:45.001 253542 DEBUG nova.compute.provider_tree [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:58:45 compute-0 nova_compute[253538]: 2025-11-25 08:58:45.017 253542 DEBUG nova.scheduler.client.report [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:58:45 compute-0 nova_compute[253538]: 2025-11-25 08:58:45.062 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:45 compute-0 nova_compute[253538]: 2025-11-25 08:58:45.146 253542 INFO nova.scheduler.client.report [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 5e55aa91-2aa5-4443-b976-0f3e4409e8ec
Nov 25 08:58:45 compute-0 nova_compute[253538]: 2025-11-25 08:58:45.232 253542 DEBUG oslo_concurrency.lockutils [None req-268b8892-ed17-439d-8a6d-ef1f296429bb 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "5e55aa91-2aa5-4443-b976-0f3e4409e8ec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:45 compute-0 ceph-mon[75015]: pgmap v2338: 321 pgs: 321 active+clean; 197 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 113 op/s
Nov 25 08:58:45 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2767464915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:58:45 compute-0 nova_compute[253538]: 2025-11-25 08:58:45.559 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:58:45 compute-0 podman[385178]: 2025-11-25 08:58:45.849141039 +0000 UTC m=+0.091391778 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 08:58:45 compute-0 podman[385179]: 2025-11-25 08:58:45.86278834 +0000 UTC m=+0.105786040 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 08:58:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2339: 321 pgs: 321 active+clean; 192 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 114 op/s
Nov 25 08:58:46 compute-0 nova_compute[253538]: 2025-11-25 08:58:46.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:58:46 compute-0 nova_compute[253538]: 2025-11-25 08:58:46.754 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:47 compute-0 ceph-mon[75015]: pgmap v2339: 321 pgs: 321 active+clean; 192 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 114 op/s
Nov 25 08:58:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:58:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2340: 321 pgs: 321 active+clean; 164 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 422 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Nov 25 08:58:48 compute-0 nova_compute[253538]: 2025-11-25 08:58:48.041 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:48 compute-0 ceph-mon[75015]: pgmap v2340: 321 pgs: 321 active+clean; 164 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 422 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Nov 25 08:58:48 compute-0 nova_compute[253538]: 2025-11-25 08:58:48.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:58:48 compute-0 nova_compute[253538]: 2025-11-25 08:58:48.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:58:48 compute-0 nova_compute[253538]: 2025-11-25 08:58:48.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:58:48 compute-0 nova_compute[253538]: 2025-11-25 08:58:48.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:48 compute-0 nova_compute[253538]: 2025-11-25 08:58:48.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:48 compute-0 nova_compute[253538]: 2025-11-25 08:58:48.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:48 compute-0 nova_compute[253538]: 2025-11-25 08:58:48.575 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:58:48 compute-0 nova_compute[253538]: 2025-11-25 08:58:48.575 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:58:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2798145365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.015 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.088 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.088 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.283 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.284 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3529MB free_disk=59.94315719604492GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.284 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.285 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.492 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 9447890d-1fff-4536-a0cd-b889c23f7479 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.493 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.494 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:58:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2798145365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.539 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.668 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061114.6664689, 7be1983b-1609-4155-b634-d14fc92539e8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.669 253542 INFO nova.compute.manager [-] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] VM Stopped (Lifecycle Event)
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.694 253542 DEBUG nova.compute.manager [None req-76e8fb6b-ed11-47b4-8c18-f26d967503ad - - - - - -] [instance: 7be1983b-1609-4155-b634-d14fc92539e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:58:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2341: 321 pgs: 321 active+clean; 167 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 425 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 25 08:58:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:58:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/299233274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.977 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.984 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:58:49 compute-0 nova_compute[253538]: 2025-11-25 08:58:49.998 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:58:50 compute-0 nova_compute[253538]: 2025-11-25 08:58:50.037 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:58:50 compute-0 nova_compute[253538]: 2025-11-25 08:58:50.038 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:50 compute-0 ceph-mon[75015]: pgmap v2341: 321 pgs: 321 active+clean; 167 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 425 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 25 08:58:50 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/299233274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:58:51 compute-0 nova_compute[253538]: 2025-11-25 08:58:51.758 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2342: 321 pgs: 321 active+clean; 167 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 422 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Nov 25 08:58:52 compute-0 ovn_controller[152859]: 2025-11-25T08:58:52Z|01294|binding|INFO|Releasing lport 0b3aeb91-b51d-4bb6-a8fa-9a80bd12b96f from this chassis (sb_readonly=0)
Nov 25 08:58:52 compute-0 nova_compute[253538]: 2025-11-25 08:58:52.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:58:52 compute-0 podman[385262]: 2025-11-25 08:58:52.850216187 +0000 UTC m=+0.100852006 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 08:58:53 compute-0 ceph-mon[75015]: pgmap v2342: 321 pgs: 321 active+clean; 167 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 422 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Nov 25 08:58:53 compute-0 nova_compute[253538]: 2025-11-25 08:58:53.043 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:58:53
Nov 25 08:58:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:58:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:58:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'vms', 'backups', 'cephfs.cephfs.data', 'images']
Nov 25 08:58:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:58:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:58:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:58:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:58:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:58:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:58:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:58:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2343: 321 pgs: 321 active+clean; 167 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 405 KiB/s rd, 1.3 MiB/s wr, 82 op/s
Nov 25 08:58:53 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:58:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:58:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:58:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:58:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:58:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:58:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:58:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:58:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:58:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:58:54 compute-0 sudo[385288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:58:54 compute-0 sudo[385288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:58:54 compute-0 sudo[385288]: pam_unix(sudo:session): session closed for user root
Nov 25 08:58:54 compute-0 sudo[385313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:58:54 compute-0 sudo[385313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:58:54 compute-0 sudo[385313]: pam_unix(sudo:session): session closed for user root
Nov 25 08:58:54 compute-0 sudo[385338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:58:54 compute-0 sudo[385338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:58:54 compute-0 sudo[385338]: pam_unix(sudo:session): session closed for user root
Nov 25 08:58:54 compute-0 sudo[385363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 08:58:54 compute-0 sudo[385363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:58:54 compute-0 sudo[385363]: pam_unix(sudo:session): session closed for user root
Nov 25 08:58:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:58:55 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:58:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 08:58:55 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:58:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 08:58:55 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:58:55 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev da4b1229-30f9-4445-9f66-2eb4d44743e8 does not exist
Nov 25 08:58:55 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 5af46c16-af4c-429d-85e8-b73b94d2dfa1 does not exist
Nov 25 08:58:55 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 8d6a4438-54d8-4f10-939f-5ea9632673e6 does not exist
Nov 25 08:58:55 compute-0 ceph-mon[75015]: pgmap v2343: 321 pgs: 321 active+clean; 167 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 405 KiB/s rd, 1.3 MiB/s wr, 82 op/s
Nov 25 08:58:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:58:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 08:58:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 08:58:55 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:58:55 compute-0 nova_compute[253538]: 2025-11-25 08:58:55.031 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:58:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 08:58:55 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:58:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 08:58:55 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:58:55 compute-0 sudo[385419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:58:55 compute-0 sudo[385419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:58:55 compute-0 sudo[385419]: pam_unix(sudo:session): session closed for user root
Nov 25 08:58:55 compute-0 sudo[385444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:58:55 compute-0 sudo[385444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:58:55 compute-0 sudo[385444]: pam_unix(sudo:session): session closed for user root
Nov 25 08:58:55 compute-0 sudo[385469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:58:55 compute-0 sudo[385469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:58:55 compute-0 sudo[385469]: pam_unix(sudo:session): session closed for user root
Nov 25 08:58:55 compute-0 sudo[385494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 08:58:55 compute-0 sudo[385494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:58:55 compute-0 podman[385558]: 2025-11-25 08:58:55.647241399 +0000 UTC m=+0.041500730 container create da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 08:58:55 compute-0 podman[385558]: 2025-11-25 08:58:55.627135162 +0000 UTC m=+0.021394523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:58:55 compute-0 systemd[1]: Started libpod-conmon-da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58.scope.
Nov 25 08:58:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2344: 321 pgs: 321 active+clean; 167 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 244 KiB/s rd, 641 KiB/s wr, 55 op/s
Nov 25 08:58:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:58:56 compute-0 podman[385558]: 2025-11-25 08:58:56.049100708 +0000 UTC m=+0.443360069 container init da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:58:56 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:58:56 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 08:58:56 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 08:58:56 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 08:58:56 compute-0 podman[385558]: 2025-11-25 08:58:56.058356119 +0000 UTC m=+0.452615460 container start da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 08:58:56 compute-0 podman[385558]: 2025-11-25 08:58:56.062114082 +0000 UTC m=+0.456373423 container attach da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:58:56 compute-0 sad_morse[385574]: 167 167
Nov 25 08:58:56 compute-0 systemd[1]: libpod-da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58.scope: Deactivated successfully.
Nov 25 08:58:56 compute-0 podman[385558]: 2025-11-25 08:58:56.066229234 +0000 UTC m=+0.460488615 container died da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 08:58:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-62db4a025b71b211f6fd7d76d7e892faaad23e1fcfe2f324738f59dbd14d6697-merged.mount: Deactivated successfully.
Nov 25 08:58:56 compute-0 podman[385558]: 2025-11-25 08:58:56.117946082 +0000 UTC m=+0.512205423 container remove da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 08:58:56 compute-0 systemd[1]: libpod-conmon-da50987d75d4106e19d5686d113d082bbe253d955d58072d55057429c725fe58.scope: Deactivated successfully.
Nov 25 08:58:56 compute-0 podman[385599]: 2025-11-25 08:58:56.318011587 +0000 UTC m=+0.046855347 container create c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 08:58:56 compute-0 systemd[1]: Started libpod-conmon-c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e.scope.
Nov 25 08:58:56 compute-0 podman[385599]: 2025-11-25 08:58:56.302157096 +0000 UTC m=+0.031000876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:58:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f2ddfce02a80f6433a7fd7712f33de9fe1f314c3c200627070a35fd77fd7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f2ddfce02a80f6433a7fd7712f33de9fe1f314c3c200627070a35fd77fd7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f2ddfce02a80f6433a7fd7712f33de9fe1f314c3c200627070a35fd77fd7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f2ddfce02a80f6433a7fd7712f33de9fe1f314c3c200627070a35fd77fd7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f2ddfce02a80f6433a7fd7712f33de9fe1f314c3c200627070a35fd77fd7a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 08:58:56 compute-0 nova_compute[253538]: 2025-11-25 08:58:56.433 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:56 compute-0 nova_compute[253538]: 2025-11-25 08:58:56.435 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:56 compute-0 podman[385599]: 2025-11-25 08:58:56.441454067 +0000 UTC m=+0.170297847 container init c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:58:56 compute-0 podman[385599]: 2025-11-25 08:58:56.453765023 +0000 UTC m=+0.182608813 container start c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 08:58:56 compute-0 podman[385599]: 2025-11-25 08:58:56.457098364 +0000 UTC m=+0.185942134 container attach c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 08:58:56 compute-0 nova_compute[253538]: 2025-11-25 08:58:56.456 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:58:56 compute-0 nova_compute[253538]: 2025-11-25 08:58:56.581 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:56 compute-0 nova_compute[253538]: 2025-11-25 08:58:56.582 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:56 compute-0 nova_compute[253538]: 2025-11-25 08:58:56.593 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:58:56 compute-0 nova_compute[253538]: 2025-11-25 08:58:56.593 253542 INFO nova.compute.claims [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:58:56 compute-0 nova_compute[253538]: 2025-11-25 08:58:56.724 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061121.7238796, 5e55aa91-2aa5-4443-b976-0f3e4409e8ec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:58:56 compute-0 nova_compute[253538]: 2025-11-25 08:58:56.725 253542 INFO nova.compute.manager [-] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] VM Stopped (Lifecycle Event)
Nov 25 08:58:56 compute-0 nova_compute[253538]: 2025-11-25 08:58:56.740 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:56 compute-0 nova_compute[253538]: 2025-11-25 08:58:56.791 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:56 compute-0 nova_compute[253538]: 2025-11-25 08:58:56.794 253542 DEBUG nova.compute.manager [None req-d616372a-d88a-4b1b-972b-b00b38dec989 - - - - - -] [instance: 5e55aa91-2aa5-4443-b976-0f3e4409e8ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:58:57 compute-0 ceph-mon[75015]: pgmap v2344: 321 pgs: 321 active+clean; 167 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 244 KiB/s rd, 641 KiB/s wr, 55 op/s
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.235 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:58:57 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1521499799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.292 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.299 253542 DEBUG nova.compute.provider_tree [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.317 253542 DEBUG nova.scheduler.client.report [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.337 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.338 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.387 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.388 253542 DEBUG nova.network.neutron [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.409 253542 INFO nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.435 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:58:57 compute-0 crazy_mahavira[385616]: --> passed data devices: 0 physical, 3 LVM
Nov 25 08:58:57 compute-0 crazy_mahavira[385616]: --> relative data size: 1.0
Nov 25 08:58:57 compute-0 crazy_mahavira[385616]: --> All data devices are unavailable
Nov 25 08:58:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:57.520 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.520 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:58:57.522 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 08:58:57 compute-0 systemd[1]: libpod-c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e.scope: Deactivated successfully.
Nov 25 08:58:57 compute-0 systemd[1]: libpod-c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e.scope: Consumed 1.023s CPU time.
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.550 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.552 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.553 253542 INFO nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Creating image(s)
Nov 25 08:58:57 compute-0 podman[385667]: 2025-11-25 08:58:57.569024209 +0000 UTC m=+0.025825924 container died c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.582 253542 DEBUG nova.storage.rbd_utils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e9f2ddfce02a80f6433a7fd7712f33de9fe1f314c3c200627070a35fd77fd7a-merged.mount: Deactivated successfully.
Nov 25 08:58:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.617 253542 DEBUG nova.storage.rbd_utils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:57 compute-0 podman[385667]: 2025-11-25 08:58:57.637855282 +0000 UTC m=+0.094656977 container remove c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mahavira, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.643 253542 DEBUG nova.storage.rbd_utils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:57 compute-0 systemd[1]: libpod-conmon-c125562cb943c45c8913eae2f6cfcdfb53d7576ee0d585e1c9530b65ca4bc03e.scope: Deactivated successfully.
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.649 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:57 compute-0 sudo[385494]: pam_unix(sudo:session): session closed for user root
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.698 253542 DEBUG nova.policy [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283b89dbe3284e8ea2019b797673108b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfffff2c57a442a59b202d368d49bf00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.739 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.740 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.740 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.740 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:57 compute-0 sudo[385738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:58:57 compute-0 sudo[385738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:58:57 compute-0 sudo[385738]: pam_unix(sudo:session): session closed for user root
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.763 253542 DEBUG nova.storage.rbd_utils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:58:57 compute-0 nova_compute[253538]: 2025-11-25 08:58:57.770 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:58:57 compute-0 sudo[385783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:58:57 compute-0 sudo[385783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:58:57 compute-0 sudo[385783]: pam_unix(sudo:session): session closed for user root
Nov 25 08:58:57 compute-0 sudo[385809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:58:57 compute-0 sudo[385809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:58:57 compute-0 sudo[385809]: pam_unix(sudo:session): session closed for user root
Nov 25 08:58:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2345: 321 pgs: 321 active+clean; 167 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 209 KiB/s rd, 112 KiB/s wr, 38 op/s
Nov 25 08:58:57 compute-0 sudo[385852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 08:58:57 compute-0 sudo[385852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:58:58 compute-0 nova_compute[253538]: 2025-11-25 08:58:58.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:58:58 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1521499799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:58:58 compute-0 nova_compute[253538]: 2025-11-25 08:58:58.119 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:58:58 compute-0 nova_compute[253538]: 2025-11-25 08:58:58.188 253542 DEBUG nova.storage.rbd_utils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] resizing rbd image 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:58:58 compute-0 nova_compute[253538]: 2025-11-25 08:58:58.330 253542 DEBUG nova.objects.instance [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'migration_context' on Instance uuid 68c6ff41-ad19-4b3d-947d-0a5d72e4042c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:58:58 compute-0 nova_compute[253538]: 2025-11-25 08:58:58.341 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:58:58 compute-0 nova_compute[253538]: 2025-11-25 08:58:58.342 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Ensure instance console log exists: /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:58:58 compute-0 nova_compute[253538]: 2025-11-25 08:58:58.342 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:58:58 compute-0 nova_compute[253538]: 2025-11-25 08:58:58.343 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:58:58 compute-0 nova_compute[253538]: 2025-11-25 08:58:58.343 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:58:58 compute-0 podman[385971]: 2025-11-25 08:58:58.384881605 +0000 UTC m=+0.064899857 container create 7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gauss, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:58:58 compute-0 systemd[1]: Started libpod-conmon-7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5.scope.
Nov 25 08:58:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:58:58 compute-0 podman[385971]: 2025-11-25 08:58:58.35640008 +0000 UTC m=+0.036418382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:58:58 compute-0 podman[385971]: 2025-11-25 08:58:58.460996717 +0000 UTC m=+0.141014999 container init 7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gauss, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 08:58:58 compute-0 podman[385971]: 2025-11-25 08:58:58.468642335 +0000 UTC m=+0.148660557 container start 7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gauss, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:58:58 compute-0 podman[385971]: 2025-11-25 08:58:58.471576785 +0000 UTC m=+0.151595047 container attach 7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 08:58:58 compute-0 elegant_gauss[386004]: 167 167
Nov 25 08:58:58 compute-0 systemd[1]: libpod-7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5.scope: Deactivated successfully.
Nov 25 08:58:58 compute-0 podman[385971]: 2025-11-25 08:58:58.473875607 +0000 UTC m=+0.153893839 container died 7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gauss, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 08:58:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a052771c98df8563db783cd7b54dd061ceadf1d71206a81c1033bd162ae34e20-merged.mount: Deactivated successfully.
Nov 25 08:58:58 compute-0 podman[385971]: 2025-11-25 08:58:58.508222312 +0000 UTC m=+0.188240534 container remove 7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 08:58:58 compute-0 systemd[1]: libpod-conmon-7b45892b6fb9d141f1aecbf4c09f58ae93d785f6e2110bf1f9b55abb5832fcf5.scope: Deactivated successfully.
Nov 25 08:58:58 compute-0 nova_compute[253538]: 2025-11-25 08:58:58.605 253542 DEBUG nova.network.neutron [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Successfully created port: eba714df-d5db-464e-b5b6-6d56c52d33fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:58:58 compute-0 podman[386027]: 2025-11-25 08:58:58.772263539 +0000 UTC m=+0.079437823 container create acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:58:58 compute-0 systemd[1]: Started libpod-conmon-acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09.scope.
Nov 25 08:58:58 compute-0 podman[386027]: 2025-11-25 08:58:58.732912638 +0000 UTC m=+0.040087002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:58:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424c645a66e07900c11cbb0b08ef80aaad1cf2de381e0d67caaf7220e9d9465/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424c645a66e07900c11cbb0b08ef80aaad1cf2de381e0d67caaf7220e9d9465/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424c645a66e07900c11cbb0b08ef80aaad1cf2de381e0d67caaf7220e9d9465/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424c645a66e07900c11cbb0b08ef80aaad1cf2de381e0d67caaf7220e9d9465/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:58:58 compute-0 podman[386027]: 2025-11-25 08:58:58.885779579 +0000 UTC m=+0.192953903 container init acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:58:58 compute-0 podman[386027]: 2025-11-25 08:58:58.893168499 +0000 UTC m=+0.200342793 container start acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:58:58 compute-0 podman[386027]: 2025-11-25 08:58:58.897470847 +0000 UTC m=+0.204645211 container attach acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 08:58:59 compute-0 ceph-mon[75015]: pgmap v2345: 321 pgs: 321 active+clean; 167 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 209 KiB/s rd, 112 KiB/s wr, 38 op/s
Nov 25 08:58:59 compute-0 nova_compute[253538]: 2025-11-25 08:58:59.273 253542 DEBUG nova.network.neutron [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Successfully updated port: eba714df-d5db-464e-b5b6-6d56c52d33fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:58:59 compute-0 nova_compute[253538]: 2025-11-25 08:58:59.310 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "refresh_cache-68c6ff41-ad19-4b3d-947d-0a5d72e4042c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:58:59 compute-0 nova_compute[253538]: 2025-11-25 08:58:59.311 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquired lock "refresh_cache-68c6ff41-ad19-4b3d-947d-0a5d72e4042c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:58:59 compute-0 nova_compute[253538]: 2025-11-25 08:58:59.312 253542 DEBUG nova.network.neutron [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:58:59 compute-0 nova_compute[253538]: 2025-11-25 08:58:59.512 253542 DEBUG nova.compute.manager [req-c51cf682-e22e-479a-ae64-fe2af2ac581a req-65759925-b4a3-415c-9f8b-0bcbacecf6a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received event network-changed-eba714df-d5db-464e-b5b6-6d56c52d33fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:58:59 compute-0 nova_compute[253538]: 2025-11-25 08:58:59.512 253542 DEBUG nova.compute.manager [req-c51cf682-e22e-479a-ae64-fe2af2ac581a req-65759925-b4a3-415c-9f8b-0bcbacecf6a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Refreshing instance network info cache due to event network-changed-eba714df-d5db-464e-b5b6-6d56c52d33fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:58:59 compute-0 nova_compute[253538]: 2025-11-25 08:58:59.512 253542 DEBUG oslo_concurrency.lockutils [req-c51cf682-e22e-479a-ae64-fe2af2ac581a req-65759925-b4a3-415c-9f8b-0bcbacecf6a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-68c6ff41-ad19-4b3d-947d-0a5d72e4042c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:58:59 compute-0 nova_compute[253538]: 2025-11-25 08:58:59.590 253542 DEBUG nova.network.neutron [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:58:59 compute-0 vibrant_ride[386044]: {
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:     "0": [
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:         {
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "devices": [
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "/dev/loop3"
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             ],
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "lv_name": "ceph_lv0",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "lv_size": "21470642176",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "name": "ceph_lv0",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "tags": {
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.cluster_name": "ceph",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.crush_device_class": "",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.encrypted": "0",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.osd_id": "0",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.type": "block",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.vdo": "0"
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             },
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "type": "block",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "vg_name": "ceph_vg0"
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:         }
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:     ],
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:     "1": [
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:         {
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "devices": [
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "/dev/loop4"
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             ],
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "lv_name": "ceph_lv1",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "lv_size": "21470642176",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "name": "ceph_lv1",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "tags": {
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.cluster_name": "ceph",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.crush_device_class": "",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.encrypted": "0",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.osd_id": "1",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.type": "block",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.vdo": "0"
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             },
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "type": "block",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "vg_name": "ceph_vg1"
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:         }
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:     ],
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:     "2": [
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:         {
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "devices": [
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "/dev/loop5"
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             ],
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "lv_name": "ceph_lv2",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "lv_size": "21470642176",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "name": "ceph_lv2",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "tags": {
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.cluster_name": "ceph",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.crush_device_class": "",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.encrypted": "0",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.osd_id": "2",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.type": "block",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:                 "ceph.vdo": "0"
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             },
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "type": "block",
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:             "vg_name": "ceph_vg2"
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:         }
Nov 25 08:58:59 compute-0 vibrant_ride[386044]:     ]
Nov 25 08:58:59 compute-0 vibrant_ride[386044]: }
Nov 25 08:58:59 compute-0 systemd[1]: libpod-acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09.scope: Deactivated successfully.
Nov 25 08:58:59 compute-0 podman[386027]: 2025-11-25 08:58:59.768840934 +0000 UTC m=+1.076015238 container died acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 08:58:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-3424c645a66e07900c11cbb0b08ef80aaad1cf2de381e0d67caaf7220e9d9465-merged.mount: Deactivated successfully.
Nov 25 08:58:59 compute-0 podman[386027]: 2025-11-25 08:58:59.846211191 +0000 UTC m=+1.153385465 container remove acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 08:58:59 compute-0 systemd[1]: libpod-conmon-acb3e83852462d7d755e1046fb7479ba3ae7bd9c9720149dbb21336d7b7e5e09.scope: Deactivated successfully.
Nov 25 08:58:59 compute-0 sudo[385852]: pam_unix(sudo:session): session closed for user root
Nov 25 08:58:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2346: 321 pgs: 321 active+clean; 209 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Nov 25 08:58:59 compute-0 sudo[386066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:58:59 compute-0 sudo[386066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:58:59 compute-0 sudo[386066]: pam_unix(sudo:session): session closed for user root
Nov 25 08:59:00 compute-0 sudo[386091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 08:59:00 compute-0 sudo[386091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:59:00 compute-0 sudo[386091]: pam_unix(sudo:session): session closed for user root
Nov 25 08:59:00 compute-0 sudo[386116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:59:00 compute-0 sudo[386116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:59:00 compute-0 sudo[386116]: pam_unix(sudo:session): session closed for user root
Nov 25 08:59:00 compute-0 sudo[386141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 08:59:00 compute-0 sudo[386141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:59:00 compute-0 podman[386206]: 2025-11-25 08:59:00.586564192 +0000 UTC m=+0.065669559 container create 3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:59:00 compute-0 systemd[1]: Started libpod-conmon-3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e.scope.
Nov 25 08:59:00 compute-0 podman[386206]: 2025-11-25 08:59:00.558538889 +0000 UTC m=+0.037644306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:59:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:59:00 compute-0 podman[386206]: 2025-11-25 08:59:00.668846191 +0000 UTC m=+0.147951638 container init 3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 08:59:00 compute-0 podman[386206]: 2025-11-25 08:59:00.67688342 +0000 UTC m=+0.155988827 container start 3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 08:59:00 compute-0 podman[386206]: 2025-11-25 08:59:00.67982046 +0000 UTC m=+0.158925867 container attach 3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haslett, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 08:59:00 compute-0 sleepy_haslett[386222]: 167 167
Nov 25 08:59:00 compute-0 systemd[1]: libpod-3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e.scope: Deactivated successfully.
Nov 25 08:59:00 compute-0 podman[386206]: 2025-11-25 08:59:00.685680909 +0000 UTC m=+0.164786306 container died 3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haslett, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.694 253542 DEBUG nova.network.neutron [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Updating instance_info_cache with network_info: [{"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.709 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Releasing lock "refresh_cache-68c6ff41-ad19-4b3d-947d-0a5d72e4042c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:59:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dafd25d5898352859509f39a402ce96d59cf4468d9cfe317266c8d25b033853-merged.mount: Deactivated successfully.
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.710 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Instance network_info: |[{"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.710 253542 DEBUG oslo_concurrency.lockutils [req-c51cf682-e22e-479a-ae64-fe2af2ac581a req-65759925-b4a3-415c-9f8b-0bcbacecf6a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-68c6ff41-ad19-4b3d-947d-0a5d72e4042c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.711 253542 DEBUG nova.network.neutron [req-c51cf682-e22e-479a-ae64-fe2af2ac581a req-65759925-b4a3-415c-9f8b-0bcbacecf6a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Refreshing network info cache for port eba714df-d5db-464e-b5b6-6d56c52d33fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.714 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Start _get_guest_xml network_info=[{"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.721 253542 WARNING nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:59:00 compute-0 podman[386206]: 2025-11-25 08:59:00.725145033 +0000 UTC m=+0.204250410 container remove 3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.728 253542 DEBUG nova.virt.libvirt.host [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.729 253542 DEBUG nova.virt.libvirt.host [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.733 253542 DEBUG nova.virt.libvirt.host [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.733 253542 DEBUG nova.virt.libvirt.host [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.734 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.734 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.735 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.735 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.735 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.736 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.736 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.736 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.736 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.736 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.737 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.737 253542 DEBUG nova.virt.hardware [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:59:00 compute-0 nova_compute[253538]: 2025-11-25 08:59:00.740 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:00 compute-0 systemd[1]: libpod-conmon-3a9ee4cca17c5cf32fbd39f3b46526a755f3766ddf29f19650e3dc6393602f1e.scope: Deactivated successfully.
Nov 25 08:59:00 compute-0 podman[386257]: 2025-11-25 08:59:00.985424998 +0000 UTC m=+0.074395215 container create eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_herschel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 08:59:01 compute-0 systemd[1]: Started libpod-conmon-eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929.scope.
Nov 25 08:59:01 compute-0 podman[386257]: 2025-11-25 08:59:00.95278418 +0000 UTC m=+0.041754477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 08:59:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c348421604df6d0682b54110a8895878a79ac0bbe2171aa572db4ebfd8b9f9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 08:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c348421604df6d0682b54110a8895878a79ac0bbe2171aa572db4ebfd8b9f9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 08:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c348421604df6d0682b54110a8895878a79ac0bbe2171aa572db4ebfd8b9f9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 08:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c348421604df6d0682b54110a8895878a79ac0bbe2171aa572db4ebfd8b9f9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 08:59:01 compute-0 podman[386257]: 2025-11-25 08:59:01.076113437 +0000 UTC m=+0.165083684 container init eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 08:59:01 compute-0 podman[386257]: 2025-11-25 08:59:01.088293788 +0000 UTC m=+0.177264035 container start eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_herschel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 08:59:01 compute-0 podman[386257]: 2025-11-25 08:59:01.093619563 +0000 UTC m=+0.182589830 container attach eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_herschel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 08:59:01 compute-0 ceph-mon[75015]: pgmap v2346: 321 pgs: 321 active+clean; 209 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Nov 25 08:59:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:59:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16663584' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.238 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.260 253542 DEBUG nova.storage.rbd_utils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.263 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:59:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068820527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.673 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.676 253542 DEBUG nova.virt.libvirt.vif [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:58:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-0-1184974683',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-0-1184974683',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=126,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAsKC+lTfCuui0Z3hitR52PvpfumhaLVjQ69ujiuIIevy7+I1pIFYPSg9LVKZgq98SCLPdhgyGDo4sdCyIoEofU+K0/ToXdERCiT/ZZOJi+hnfAHkBLfHK2RUxUGr/PGKw==',key_name='tempest-TestSecurityGroupsBasicOps-214386099',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-1i7wo9f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:58:57Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=68c6ff41-ad19-4b3d-947d-0a5d72e4042c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.677 253542 DEBUG nova.network.os_vif_util [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.679 253542 DEBUG nova.network.os_vif_util [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:3b:fd,bridge_name='br-int',has_traffic_filtering=True,id=eba714df-d5db-464e-b5b6-6d56c52d33fd,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeba714df-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.681 253542 DEBUG nova.objects.instance [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 68c6ff41-ad19-4b3d-947d-0a5d72e4042c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.697 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:59:01 compute-0 nova_compute[253538]:   <uuid>68c6ff41-ad19-4b3d-947d-0a5d72e4042c</uuid>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   <name>instance-0000007e</name>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-0-1184974683</nova:name>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:59:00</nova:creationTime>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:59:01 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:59:01 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:59:01 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:59:01 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:59:01 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:59:01 compute-0 nova_compute[253538]:         <nova:user uuid="283b89dbe3284e8ea2019b797673108b">tempest-TestSecurityGroupsBasicOps-1495447964-project-member</nova:user>
Nov 25 08:59:01 compute-0 nova_compute[253538]:         <nova:project uuid="cfffff2c57a442a59b202d368d49bf00">tempest-TestSecurityGroupsBasicOps-1495447964</nova:project>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:59:01 compute-0 nova_compute[253538]:         <nova:port uuid="eba714df-d5db-464e-b5b6-6d56c52d33fd">
Nov 25 08:59:01 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <system>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <entry name="serial">68c6ff41-ad19-4b3d-947d-0a5d72e4042c</entry>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <entry name="uuid">68c6ff41-ad19-4b3d-947d-0a5d72e4042c</entry>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     </system>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   <os>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   </os>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   <features>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   </features>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk">
Nov 25 08:59:01 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       </source>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:59:01 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk.config">
Nov 25 08:59:01 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       </source>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:59:01 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:55:3b:fd"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <target dev="tapeba714df-d5"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c/console.log" append="off"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <video>
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     </video>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:59:01 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:59:01 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:59:01 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:59:01 compute-0 nova_compute[253538]: </domain>
Nov 25 08:59:01 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.700 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Preparing to wait for external event network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.701 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.701 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.702 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.703 253542 DEBUG nova.virt.libvirt.vif [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:58:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-0-1184974683',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-0-1184974683',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=126,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAsKC+lTfCuui0Z3hitR52PvpfumhaLVjQ69ujiuIIevy7+I1pIFYPSg9LVKZgq98SCLPdhgyGDo4sdCyIoEofU+K0/ToXdERCiT/ZZOJi+hnfAHkBLfHK2RUxUGr/PGKw==',key_name='tempest-TestSecurityGroupsBasicOps-214386099',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-1i7wo9f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:58:57Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=68c6ff41-ad19-4b3d-947d-0a5d72e4042c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.703 253542 DEBUG nova.network.os_vif_util [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.704 253542 DEBUG nova.network.os_vif_util [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:3b:fd,bridge_name='br-int',has_traffic_filtering=True,id=eba714df-d5db-464e-b5b6-6d56c52d33fd,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeba714df-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.705 253542 DEBUG os_vif [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:3b:fd,bridge_name='br-int',has_traffic_filtering=True,id=eba714df-d5db-464e-b5b6-6d56c52d33fd,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeba714df-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.706 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.707 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.708 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.714 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeba714df-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.715 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeba714df-d5, col_values=(('external_ids', {'iface-id': 'eba714df-d5db-464e-b5b6-6d56c52d33fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:55:3b:fd', 'vm-uuid': '68c6ff41-ad19-4b3d-947d-0a5d72e4042c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.717 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:01 compute-0 NetworkManager[48915]: <info>  [1764061141.7184] manager: (tapeba714df-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/530)
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.720 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.726 253542 INFO os_vif [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:3b:fd,bridge_name='br-int',has_traffic_filtering=True,id=eba714df-d5db-464e-b5b6-6d56c52d33fd,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeba714df-d5')
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.782 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.782 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.783 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No VIF found with MAC fa:16:3e:55:3b:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.784 253542 INFO nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Using config drive
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.815 253542 DEBUG nova.storage.rbd_utils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:59:01 compute-0 nova_compute[253538]: 2025-11-25 08:59:01.822 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2347: 321 pgs: 321 active+clean; 213 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 08:59:02 compute-0 objective_herschel[386283]: {
Nov 25 08:59:02 compute-0 objective_herschel[386283]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 08:59:02 compute-0 objective_herschel[386283]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:59:02 compute-0 objective_herschel[386283]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 08:59:02 compute-0 objective_herschel[386283]:         "osd_id": 1,
Nov 25 08:59:02 compute-0 objective_herschel[386283]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 08:59:02 compute-0 objective_herschel[386283]:         "type": "bluestore"
Nov 25 08:59:02 compute-0 objective_herschel[386283]:     },
Nov 25 08:59:02 compute-0 objective_herschel[386283]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 08:59:02 compute-0 objective_herschel[386283]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:59:02 compute-0 objective_herschel[386283]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 08:59:02 compute-0 objective_herschel[386283]:         "osd_id": 2,
Nov 25 08:59:02 compute-0 objective_herschel[386283]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 08:59:02 compute-0 objective_herschel[386283]:         "type": "bluestore"
Nov 25 08:59:02 compute-0 objective_herschel[386283]:     },
Nov 25 08:59:02 compute-0 objective_herschel[386283]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 08:59:02 compute-0 objective_herschel[386283]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 08:59:02 compute-0 objective_herschel[386283]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 08:59:02 compute-0 objective_herschel[386283]:         "osd_id": 0,
Nov 25 08:59:02 compute-0 objective_herschel[386283]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 08:59:02 compute-0 objective_herschel[386283]:         "type": "bluestore"
Nov 25 08:59:02 compute-0 objective_herschel[386283]:     }
Nov 25 08:59:02 compute-0 objective_herschel[386283]: }
Nov 25 08:59:02 compute-0 systemd[1]: libpod-eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929.scope: Deactivated successfully.
Nov 25 08:59:02 compute-0 podman[386378]: 2025-11-25 08:59:02.09722059 +0000 UTC m=+0.020987342 container died eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.107 253542 DEBUG nova.network.neutron [req-c51cf682-e22e-479a-ae64-fe2af2ac581a req-65759925-b4a3-415c-9f8b-0bcbacecf6a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Updated VIF entry in instance network info cache for port eba714df-d5db-464e-b5b6-6d56c52d33fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.108 253542 DEBUG nova.network.neutron [req-c51cf682-e22e-479a-ae64-fe2af2ac581a req-65759925-b4a3-415c-9f8b-0bcbacecf6a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Updating instance_info_cache with network_info: [{"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:59:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/16663584' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:59:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4068820527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.119 253542 DEBUG oslo_concurrency.lockutils [req-c51cf682-e22e-479a-ae64-fe2af2ac581a req-65759925-b4a3-415c-9f8b-0bcbacecf6a5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-68c6ff41-ad19-4b3d-947d-0a5d72e4042c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:59:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c348421604df6d0682b54110a8895878a79ac0bbe2171aa572db4ebfd8b9f9f-merged.mount: Deactivated successfully.
Nov 25 08:59:02 compute-0 podman[386378]: 2025-11-25 08:59:02.171867392 +0000 UTC m=+0.095634114 container remove eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_herschel, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 08:59:02 compute-0 systemd[1]: libpod-conmon-eae9e6dcf7224dbf09ffe41430950f8dde1043774e5e92a26c00d280b7733929.scope: Deactivated successfully.
Nov 25 08:59:02 compute-0 sudo[386141]: pam_unix(sudo:session): session closed for user root
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.214 253542 INFO nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Creating config drive at /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c/disk.config
Nov 25 08:59:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.219 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcdzg0rwe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:02 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:59:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 08:59:02 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:59:02 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev dc077c3a-9fbe-4feb-95a0-835b50048c04 does not exist
Nov 25 08:59:02 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 41f705ae-e521-4a0a-a2d3-ddb6b07b7078 does not exist
Nov 25 08:59:02 compute-0 sudo[386395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 08:59:02 compute-0 sudo[386395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:59:02 compute-0 sudo[386395]: pam_unix(sudo:session): session closed for user root
Nov 25 08:59:02 compute-0 sudo[386422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 08:59:02 compute-0 sudo[386422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.357 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcdzg0rwe" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:02 compute-0 sudo[386422]: pam_unix(sudo:session): session closed for user root
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.383 253542 DEBUG nova.storage.rbd_utils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.387 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c/disk.config 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.625 253542 DEBUG oslo_concurrency.processutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c/disk.config 68c6ff41-ad19-4b3d-947d-0a5d72e4042c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.626 253542 INFO nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Deleting local config drive /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c/disk.config because it was imported into RBD.
Nov 25 08:59:02 compute-0 kernel: tapeba714df-d5: entered promiscuous mode
Nov 25 08:59:02 compute-0 ovn_controller[152859]: 2025-11-25T08:59:02Z|01295|binding|INFO|Claiming lport eba714df-d5db-464e-b5b6-6d56c52d33fd for this chassis.
Nov 25 08:59:02 compute-0 ovn_controller[152859]: 2025-11-25T08:59:02Z|01296|binding|INFO|eba714df-d5db-464e-b5b6-6d56c52d33fd: Claiming fa:16:3e:55:3b:fd 10.100.0.6
Nov 25 08:59:02 compute-0 NetworkManager[48915]: <info>  [1764061142.7018] manager: (tapeba714df-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/531)
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.701 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.711 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:3b:fd 10.100.0.6'], port_security=['fa:16:3e:55:3b:fd 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '68c6ff41-ad19-4b3d-947d-0a5d72e4042c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20edaba8-b789-425d-a55d-3fa69c803e14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=acab9bf8-f301-413f-8774-b60482e3db77, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=eba714df-d5db-464e-b5b6-6d56c52d33fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:59:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.714 162739 INFO neutron.agent.ovn.metadata.agent [-] Port eba714df-d5db-464e-b5b6-6d56c52d33fd in datapath bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 bound to our chassis
Nov 25 08:59:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.717 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bfb2a9da-10d5-4cf0-a585-a59d66a02fa0
Nov 25 08:59:02 compute-0 ovn_controller[152859]: 2025-11-25T08:59:02Z|01297|binding|INFO|Setting lport eba714df-d5db-464e-b5b6-6d56c52d33fd ovn-installed in OVS
Nov 25 08:59:02 compute-0 ovn_controller[152859]: 2025-11-25T08:59:02Z|01298|binding|INFO|Setting lport eba714df-d5db-464e-b5b6-6d56c52d33fd up in Southbound
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.741 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.741 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c643d5-85bf-49c1-a9c8-65dffb8bee2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:02 compute-0 systemd-machined[215790]: New machine qemu-156-instance-0000007e.
Nov 25 08:59:02 compute-0 systemd-udevd[386498]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:59:02 compute-0 NetworkManager[48915]: <info>  [1764061142.7694] device (tapeba714df-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:59:02 compute-0 systemd[1]: Started Virtual Machine qemu-156-instance-0000007e.
Nov 25 08:59:02 compute-0 NetworkManager[48915]: <info>  [1764061142.7707] device (tapeba714df-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:59:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.777 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[01906bc5-d692-4070-a483-d42b1311182f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.781 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b36f2ac5-6097-4e2a-a9c3-d99300b77723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.811 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1799dadf-3f0d-4e6e-931f-fbb84e8c1eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.832 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fd05cc-ec85-4a83-9dff-919b2ef17905]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfb2a9da-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:95:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646309, 'reachable_time': 34452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386505, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.849 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[99527602-66e2-4faf-8a4e-e2deb9efbdd7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbfb2a9da-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646321, 'tstamp': 646321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386510, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbfb2a9da-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646324, 'tstamp': 646324}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386510, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.851 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfb2a9da-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.855 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfb2a9da-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.856 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:59:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.857 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbfb2a9da-10, col_values=(('external_ids', {'iface-id': '0b3aeb91-b51d-4bb6-a8fa-9a80bd12b96f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:02.858 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.927 253542 DEBUG nova.compute.manager [req-b5ea68df-1e86-4843-9cfa-a0d1e503af93 req-7cca8697-88ec-4544-8516-079c8015dc91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received event network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.928 253542 DEBUG oslo_concurrency.lockutils [req-b5ea68df-1e86-4843-9cfa-a0d1e503af93 req-7cca8697-88ec-4544-8516-079c8015dc91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.929 253542 DEBUG oslo_concurrency.lockutils [req-b5ea68df-1e86-4843-9cfa-a0d1e503af93 req-7cca8697-88ec-4544-8516-079c8015dc91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.930 253542 DEBUG oslo_concurrency.lockutils [req-b5ea68df-1e86-4843-9cfa-a0d1e503af93 req-7cca8697-88ec-4544-8516-079c8015dc91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:02 compute-0 nova_compute[253538]: 2025-11-25 08:59:02.931 253542 DEBUG nova.compute.manager [req-b5ea68df-1e86-4843-9cfa-a0d1e503af93 req-7cca8697-88ec-4544-8516-079c8015dc91 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Processing event network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.050 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:03 compute-0 ceph-mon[75015]: pgmap v2347: 321 pgs: 321 active+clean; 213 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 08:59:03 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:59:03 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.270 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061143.270425, 68c6ff41-ad19-4b3d-947d-0a5d72e4042c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.271 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] VM Started (Lifecycle Event)
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.274 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.278 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.281 253542 INFO nova.virt.libvirt.driver [-] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Instance spawned successfully.
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.282 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.441 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.448 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.449 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.449 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.450 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.451 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.451 253542 DEBUG nova.virt.libvirt.driver [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.459 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.488 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.488 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061143.2706258, 68c6ff41-ad19-4b3d-947d-0a5d72e4042c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.489 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] VM Paused (Lifecycle Event)
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.505 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.509 253542 INFO nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Took 5.96 seconds to spawn the instance on the hypervisor.
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.509 253542 DEBUG nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.513 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061143.2769334, 68c6ff41-ad19-4b3d-947d-0a5d72e4042c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.513 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] VM Resumed (Lifecycle Event)
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.548 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.551 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.564 253542 INFO nova.compute.manager [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Took 7.03 seconds to build instance.
Nov 25 08:59:03 compute-0 nova_compute[253538]: 2025-11-25 08:59:03.576 253542 DEBUG oslo_concurrency.lockutils [None req-788cdb73-26ad-4e61-9df0-725af05c8335 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2348: 321 pgs: 321 active+clean; 213 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011052700926287686 of space, bias 1.0, pg target 0.3315810277886306 quantized to 32 (current 32)
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 08:59:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 08:59:05 compute-0 ceph-mon[75015]: pgmap v2348: 321 pgs: 321 active+clean; 213 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 08:59:05 compute-0 nova_compute[253538]: 2025-11-25 08:59:05.385 253542 DEBUG nova.compute.manager [req-94b00f07-beec-4864-9cdd-133f126bcec7 req-9af4c5c2-003f-4101-b54e-3ecbeebba8c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received event network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:05 compute-0 nova_compute[253538]: 2025-11-25 08:59:05.386 253542 DEBUG oslo_concurrency.lockutils [req-94b00f07-beec-4864-9cdd-133f126bcec7 req-9af4c5c2-003f-4101-b54e-3ecbeebba8c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:05 compute-0 nova_compute[253538]: 2025-11-25 08:59:05.386 253542 DEBUG oslo_concurrency.lockutils [req-94b00f07-beec-4864-9cdd-133f126bcec7 req-9af4c5c2-003f-4101-b54e-3ecbeebba8c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:05 compute-0 nova_compute[253538]: 2025-11-25 08:59:05.387 253542 DEBUG oslo_concurrency.lockutils [req-94b00f07-beec-4864-9cdd-133f126bcec7 req-9af4c5c2-003f-4101-b54e-3ecbeebba8c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:05 compute-0 nova_compute[253538]: 2025-11-25 08:59:05.387 253542 DEBUG nova.compute.manager [req-94b00f07-beec-4864-9cdd-133f126bcec7 req-9af4c5c2-003f-4101-b54e-3ecbeebba8c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] No waiting events found dispatching network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:59:05 compute-0 nova_compute[253538]: 2025-11-25 08:59:05.387 253542 WARNING nova.compute.manager [req-94b00f07-beec-4864-9cdd-133f126bcec7 req-9af4c5c2-003f-4101-b54e-3ecbeebba8c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received unexpected event network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd for instance with vm_state active and task_state None.
Nov 25 08:59:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2349: 321 pgs: 321 active+clean; 213 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 845 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 25 08:59:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:06.524 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:06 compute-0 nova_compute[253538]: 2025-11-25 08:59:06.718 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:07 compute-0 ceph-mon[75015]: pgmap v2349: 321 pgs: 321 active+clean; 213 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 845 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 25 08:59:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:59:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2350: 321 pgs: 321 active+clean; 213 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Nov 25 08:59:08 compute-0 nova_compute[253538]: 2025-11-25 08:59:08.052 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:08 compute-0 nova_compute[253538]: 2025-11-25 08:59:08.547 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:08 compute-0 nova_compute[253538]: 2025-11-25 08:59:08.548 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:08 compute-0 nova_compute[253538]: 2025-11-25 08:59:08.590 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:59:08 compute-0 nova_compute[253538]: 2025-11-25 08:59:08.790 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:08 compute-0 nova_compute[253538]: 2025-11-25 08:59:08.790 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:08 compute-0 nova_compute[253538]: 2025-11-25 08:59:08.797 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:59:08 compute-0 nova_compute[253538]: 2025-11-25 08:59:08.797 253542 INFO nova.compute.claims [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:59:08 compute-0 nova_compute[253538]: 2025-11-25 08:59:08.957 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:09 compute-0 sshd-session[385176]: Received disconnect from 45.78.217.205 port 32888:11: Bye Bye [preauth]
Nov 25 08:59:09 compute-0 sshd-session[385176]: Disconnected from 45.78.217.205 port 32888 [preauth]
Nov 25 08:59:09 compute-0 ceph-mon[75015]: pgmap v2350: 321 pgs: 321 active+clean; 213 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Nov 25 08:59:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:59:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947983238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.420 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.426 253542 DEBUG nova.compute.provider_tree [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.439 253542 DEBUG nova.scheduler.client.report [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.459 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.461 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.505 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.505 253542 DEBUG nova.network.neutron [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.522 253542 INFO nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.545 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.632 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.633 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.634 253542 INFO nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Creating image(s)
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.660 253542 DEBUG nova.storage.rbd_utils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.687 253542 DEBUG nova.storage.rbd_utils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.713 253542 DEBUG nova.storage.rbd_utils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.718 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.757 253542 DEBUG nova.policy [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4211995133cc45db8e38c47f747fb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92faeb767e7a423586eaaf32661ce771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.799 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.800 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.800 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.801 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.826 253542 DEBUG nova.storage.rbd_utils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:59:09 compute-0 nova_compute[253538]: 2025-11-25 08:59:09.830 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2351: 321 pgs: 321 active+clean; 213 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 08:59:10 compute-0 nova_compute[253538]: 2025-11-25 08:59:10.165 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:10 compute-0 nova_compute[253538]: 2025-11-25 08:59:10.217 253542 DEBUG nova.storage.rbd_utils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] resizing rbd image 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:59:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2947983238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:59:10 compute-0 nova_compute[253538]: 2025-11-25 08:59:10.337 253542 DEBUG nova.objects.instance [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'migration_context' on Instance uuid 6a6a3230-e005-48d6-b758-3cf5d4f9410f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:59:10 compute-0 nova_compute[253538]: 2025-11-25 08:59:10.357 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:59:10 compute-0 nova_compute[253538]: 2025-11-25 08:59:10.358 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Ensure instance console log exists: /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:59:10 compute-0 nova_compute[253538]: 2025-11-25 08:59:10.358 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:10 compute-0 nova_compute[253538]: 2025-11-25 08:59:10.359 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:10 compute-0 nova_compute[253538]: 2025-11-25 08:59:10.359 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:10 compute-0 nova_compute[253538]: 2025-11-25 08:59:10.661 253542 DEBUG nova.network.neutron [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Successfully created port: 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:59:11 compute-0 ceph-mon[75015]: pgmap v2351: 321 pgs: 321 active+clean; 213 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 08:59:11 compute-0 nova_compute[253538]: 2025-11-25 08:59:11.721 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2352: 321 pgs: 321 active+clean; 226 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 494 KiB/s wr, 76 op/s
Nov 25 08:59:12 compute-0 nova_compute[253538]: 2025-11-25 08:59:12.060 253542 DEBUG nova.network.neutron [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Successfully updated port: 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:59:12 compute-0 nova_compute[253538]: 2025-11-25 08:59:12.075 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:59:12 compute-0 nova_compute[253538]: 2025-11-25 08:59:12.076 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquired lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:59:12 compute-0 nova_compute[253538]: 2025-11-25 08:59:12.076 253542 DEBUG nova.network.neutron [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:59:12 compute-0 nova_compute[253538]: 2025-11-25 08:59:12.149 253542 DEBUG nova.compute.manager [req-cf679e49-f47a-4d2d-9d23-268d22e141cf req-926f2c1e-a30b-41f0-8252-54564a96b5a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received event network-changed-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:12 compute-0 nova_compute[253538]: 2025-11-25 08:59:12.149 253542 DEBUG nova.compute.manager [req-cf679e49-f47a-4d2d-9d23-268d22e141cf req-926f2c1e-a30b-41f0-8252-54564a96b5a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Refreshing instance network info cache due to event network-changed-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:59:12 compute-0 nova_compute[253538]: 2025-11-25 08:59:12.150 253542 DEBUG oslo_concurrency.lockutils [req-cf679e49-f47a-4d2d-9d23-268d22e141cf req-926f2c1e-a30b-41f0-8252-54564a96b5a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:59:12 compute-0 nova_compute[253538]: 2025-11-25 08:59:12.333 253542 DEBUG nova.network.neutron [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:59:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.092 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.235 253542 DEBUG nova.network.neutron [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Updating instance_info_cache with network_info: [{"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.260 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Releasing lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.261 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Instance network_info: |[{"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.261 253542 DEBUG oslo_concurrency.lockutils [req-cf679e49-f47a-4d2d-9d23-268d22e141cf req-926f2c1e-a30b-41f0-8252-54564a96b5a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.261 253542 DEBUG nova.network.neutron [req-cf679e49-f47a-4d2d-9d23-268d22e141cf req-926f2c1e-a30b-41f0-8252-54564a96b5a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Refreshing network info cache for port 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.264 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Start _get_guest_xml network_info=[{"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.268 253542 WARNING nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.274 253542 DEBUG nova.virt.libvirt.host [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.275 253542 DEBUG nova.virt.libvirt.host [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.278 253542 DEBUG nova.virt.libvirt.host [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.278 253542 DEBUG nova.virt.libvirt.host [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.278 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.279 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.279 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.279 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.280 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.280 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.280 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.280 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.281 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.281 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.281 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.281 253542 DEBUG nova.virt.hardware [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.284 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:13 compute-0 ceph-mon[75015]: pgmap v2352: 321 pgs: 321 active+clean; 226 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 494 KiB/s wr, 76 op/s
Nov 25 08:59:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:59:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3555515879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.751 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.781 253542 DEBUG nova.storage.rbd_utils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:59:13 compute-0 nova_compute[253538]: 2025-11-25 08:59:13.786 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2353: 321 pgs: 321 active+clean; 246 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 87 op/s
Nov 25 08:59:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 08:59:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1438497949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.243 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.245 253542 DEBUG nova.virt.libvirt.vif [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:59:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-571013725',display_name='tempest-TestNetworkBasicOps-server-571013725',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-571013725',id=127,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9vDi5E2E+ywOEJjDZ0PQ8FPpnwynuIL6i1c9SarSwpCQ7QnXbRw7n+Ck5BMm/3gHxZu4fef569DYJ0xiHgyqCAtkk+E+7ZMYtBKG+VyGO33faTg/ful5ZkeC+zSQwIDw==',key_name='tempest-TestNetworkBasicOps-363283039',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-kwd70nuj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:59:09Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=6a6a3230-e005-48d6-b758-3cf5d4f9410f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.245 253542 DEBUG nova.network.os_vif_util [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.246 253542 DEBUG nova.network.os_vif_util [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:6b:c3,bridge_name='br-int',has_traffic_filtering=True,id=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29,network=Network(d0402a09-5c1d-4dec-b1c6-38e77edc4409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b7bd252-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.247 253542 DEBUG nova.objects.instance [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a6a3230-e005-48d6-b758-3cf5d4f9410f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.263 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 08:59:14 compute-0 nova_compute[253538]:   <uuid>6a6a3230-e005-48d6-b758-3cf5d4f9410f</uuid>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   <name>instance-0000007f</name>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   <metadata>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <nova:name>tempest-TestNetworkBasicOps-server-571013725</nova:name>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 08:59:13</nova:creationTime>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 08:59:14 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 08:59:14 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 08:59:14 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 08:59:14 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 08:59:14 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 08:59:14 compute-0 nova_compute[253538]:         <nova:user uuid="4211995133cc45db8e38c47f747fb092">tempest-TestNetworkBasicOps-2019122229-project-member</nova:user>
Nov 25 08:59:14 compute-0 nova_compute[253538]:         <nova:project uuid="92faeb767e7a423586eaaf32661ce771">tempest-TestNetworkBasicOps-2019122229</nova:project>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 08:59:14 compute-0 nova_compute[253538]:         <nova:port uuid="0b7bd252-0c0e-43fd-b9ae-27d615ec9c29">
Nov 25 08:59:14 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   </metadata>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <system>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <entry name="serial">6a6a3230-e005-48d6-b758-3cf5d4f9410f</entry>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <entry name="uuid">6a6a3230-e005-48d6-b758-3cf5d4f9410f</entry>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     </system>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   <os>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   </os>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   <features>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <apic/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   </features>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   </clock>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   </cpu>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   <devices>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk">
Nov 25 08:59:14 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       </source>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:59:14 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk.config">
Nov 25 08:59:14 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       </source>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 08:59:14 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       </auth>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     </disk>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:61:6b:c3"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <target dev="tap0b7bd252-0c"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     </interface>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f/console.log" append="off"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     </serial>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <video>
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     </video>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     </rng>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 08:59:14 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 08:59:14 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 08:59:14 compute-0 nova_compute[253538]:   </devices>
Nov 25 08:59:14 compute-0 nova_compute[253538]: </domain>
Nov 25 08:59:14 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.264 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Preparing to wait for external event network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.265 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.266 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.266 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.267 253542 DEBUG nova.virt.libvirt.vif [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:59:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-571013725',display_name='tempest-TestNetworkBasicOps-server-571013725',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-571013725',id=127,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9vDi5E2E+ywOEJjDZ0PQ8FPpnwynuIL6i1c9SarSwpCQ7QnXbRw7n+Ck5BMm/3gHxZu4fef569DYJ0xiHgyqCAtkk+E+7ZMYtBKG+VyGO33faTg/ful5ZkeC+zSQwIDw==',key_name='tempest-TestNetworkBasicOps-363283039',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-kwd70nuj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:59:09Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=6a6a3230-e005-48d6-b758-3cf5d4f9410f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.267 253542 DEBUG nova.network.os_vif_util [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.268 253542 DEBUG nova.network.os_vif_util [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:6b:c3,bridge_name='br-int',has_traffic_filtering=True,id=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29,network=Network(d0402a09-5c1d-4dec-b1c6-38e77edc4409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b7bd252-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.269 253542 DEBUG os_vif [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:6b:c3,bridge_name='br-int',has_traffic_filtering=True,id=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29,network=Network(d0402a09-5c1d-4dec-b1c6-38e77edc4409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b7bd252-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.269 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.270 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.271 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.276 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.276 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b7bd252-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.277 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b7bd252-0c, col_values=(('external_ids', {'iface-id': '0b7bd252-0c0e-43fd-b9ae-27d615ec9c29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:6b:c3', 'vm-uuid': '6a6a3230-e005-48d6-b758-3cf5d4f9410f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.278 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:14 compute-0 NetworkManager[48915]: <info>  [1764061154.2796] manager: (tap0b7bd252-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/532)
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.281 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.286 253542 INFO os_vif [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:6b:c3,bridge_name='br-int',has_traffic_filtering=True,id=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29,network=Network(d0402a09-5c1d-4dec-b1c6-38e77edc4409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b7bd252-0c')
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.359 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.360 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.360 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] No VIF found with MAC fa:16:3e:61:6b:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.361 253542 INFO nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Using config drive
Nov 25 08:59:14 compute-0 nova_compute[253538]: 2025-11-25 08:59:14.385 253542 DEBUG nova.storage.rbd_utils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:59:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3555515879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:59:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1438497949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 08:59:14 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 25 08:59:15 compute-0 ceph-mon[75015]: pgmap v2353: 321 pgs: 321 active+clean; 246 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 87 op/s
Nov 25 08:59:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2354: 321 pgs: 321 active+clean; 260 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 08:59:16 compute-0 nova_compute[253538]: 2025-11-25 08:59:16.125 253542 INFO nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Creating config drive at /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f/disk.config
Nov 25 08:59:16 compute-0 nova_compute[253538]: 2025-11-25 08:59:16.131 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuy8mhbhj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:16 compute-0 nova_compute[253538]: 2025-11-25 08:59:16.290 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuy8mhbhj" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:16 compute-0 nova_compute[253538]: 2025-11-25 08:59:16.333 253542 DEBUG nova.storage.rbd_utils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] rbd image 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:59:16 compute-0 nova_compute[253538]: 2025-11-25 08:59:16.338 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f/disk.config 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:16 compute-0 nova_compute[253538]: 2025-11-25 08:59:16.383 253542 DEBUG nova.network.neutron [req-cf679e49-f47a-4d2d-9d23-268d22e141cf req-926f2c1e-a30b-41f0-8252-54564a96b5a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Updated VIF entry in instance network info cache for port 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:59:16 compute-0 nova_compute[253538]: 2025-11-25 08:59:16.385 253542 DEBUG nova.network.neutron [req-cf679e49-f47a-4d2d-9d23-268d22e141cf req-926f2c1e-a30b-41f0-8252-54564a96b5a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Updating instance_info_cache with network_info: [{"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:59:16 compute-0 nova_compute[253538]: 2025-11-25 08:59:16.402 253542 DEBUG oslo_concurrency.lockutils [req-cf679e49-f47a-4d2d-9d23-268d22e141cf req-926f2c1e-a30b-41f0-8252-54564a96b5a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:59:16 compute-0 ceph-mon[75015]: pgmap v2354: 321 pgs: 321 active+clean; 260 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 08:59:16 compute-0 nova_compute[253538]: 2025-11-25 08:59:16.663 253542 DEBUG oslo_concurrency.processutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f/disk.config 6a6a3230-e005-48d6-b758-3cf5d4f9410f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:16 compute-0 nova_compute[253538]: 2025-11-25 08:59:16.663 253542 INFO nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Deleting local config drive /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f/disk.config because it was imported into RBD.
Nov 25 08:59:16 compute-0 ovn_controller[152859]: 2025-11-25T08:59:16Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:55:3b:fd 10.100.0.6
Nov 25 08:59:16 compute-0 ovn_controller[152859]: 2025-11-25T08:59:16Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:3b:fd 10.100.0.6
Nov 25 08:59:16 compute-0 kernel: tap0b7bd252-0c: entered promiscuous mode
Nov 25 08:59:16 compute-0 NetworkManager[48915]: <info>  [1764061156.7238] manager: (tap0b7bd252-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/533)
Nov 25 08:59:16 compute-0 ovn_controller[152859]: 2025-11-25T08:59:16Z|01299|binding|INFO|Claiming lport 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 for this chassis.
Nov 25 08:59:16 compute-0 ovn_controller[152859]: 2025-11-25T08:59:16Z|01300|binding|INFO|0b7bd252-0c0e-43fd-b9ae-27d615ec9c29: Claiming fa:16:3e:61:6b:c3 10.100.0.8
Nov 25 08:59:16 compute-0 nova_compute[253538]: 2025-11-25 08:59:16.746 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.755 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:6b:c3 10.100.0.8'], port_security=['fa:16:3e:61:6b:c3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6a6a3230-e005-48d6-b758-3cf5d4f9410f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0402a09-5c1d-4dec-b1c6-38e77edc4409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fbcd9381-e965-435b-8fa1-373c60075d6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff74e0b1-b375-483e-985d-fce7814dd7fc, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:59:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.757 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 in datapath d0402a09-5c1d-4dec-b1c6-38e77edc4409 bound to our chassis
Nov 25 08:59:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.759 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0402a09-5c1d-4dec-b1c6-38e77edc4409
Nov 25 08:59:16 compute-0 ovn_controller[152859]: 2025-11-25T08:59:16Z|01301|binding|INFO|Setting lport 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 ovn-installed in OVS
Nov 25 08:59:16 compute-0 ovn_controller[152859]: 2025-11-25T08:59:16Z|01302|binding|INFO|Setting lport 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 up in Southbound
Nov 25 08:59:16 compute-0 nova_compute[253538]: 2025-11-25 08:59:16.776 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.783 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a7621050-37f1-4844-a7be-dd0e82406349]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.784 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd0402a09-51 in ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 08:59:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.787 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd0402a09-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 08:59:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.787 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8392d444-b89a-4370-b699-04e5e8c81229]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.790 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[57bba2ea-0f30-45c0-8f97-b1d91000dfea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:16 compute-0 systemd-udevd[386891]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:59:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.803 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ac89dd-6c76-4092-b916-960517dd9b37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:16 compute-0 systemd-machined[215790]: New machine qemu-157-instance-0000007f.
Nov 25 08:59:16 compute-0 NetworkManager[48915]: <info>  [1764061156.8084] device (tap0b7bd252-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 08:59:16 compute-0 NetworkManager[48915]: <info>  [1764061156.8099] device (tap0b7bd252-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 08:59:16 compute-0 systemd[1]: Started Virtual Machine qemu-157-instance-0000007f.
Nov 25 08:59:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.828 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2e3f1e-ad04-415b-8e44-427b599a684e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:16 compute-0 podman[386875]: 2025-11-25 08:59:16.847286566 +0000 UTC m=+0.076273947 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 08:59:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.865 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e64ed48f-72dd-4a0a-b748-924452be6136]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:16 compute-0 NetworkManager[48915]: <info>  [1764061156.8715] manager: (tapd0402a09-50): new Veth device (/org/freedesktop/NetworkManager/Devices/534)
Nov 25 08:59:16 compute-0 systemd-udevd[386903]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 08:59:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.873 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0e509a-4df9-41e7-946c-1a7a88b7e6d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:16 compute-0 podman[386876]: 2025-11-25 08:59:16.888544779 +0000 UTC m=+0.105046150 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 08:59:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.915 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[61d2d5db-1a72-4e8a-ac4a-864af39e5f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.924 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb94b71-88ab-49a4-b1db-0f287be1c85b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:16 compute-0 NetworkManager[48915]: <info>  [1764061156.9477] device (tapd0402a09-50): carrier: link connected
Nov 25 08:59:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.958 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d35d6de0-0ece-45e9-8ff5-a49ffea96d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:16.987 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59cc16af-fc12-4983-87b9-5ade18d9ed6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0402a09-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:9d:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651154, 'reachable_time': 26559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386949, 'error': None, 'target': 'ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.012 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8d688a-81c0-4122-b0ac-351a72026e84]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:9dde'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651154, 'tstamp': 651154}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386950, 'error': None, 'target': 'ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.030 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c53a19ef-0286-4d76-84bd-181c20d6696c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0402a09-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:9d:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651154, 'reachable_time': 26559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 386951, 'error': None, 'target': 'ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.067 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2850f8a7-991f-4ec9-8bf4-74cdda115853]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.125 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c44bd1f4-848d-4d56-92c3-bc7129a72041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.126 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0402a09-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.127 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.127 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0402a09-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:17 compute-0 NetworkManager[48915]: <info>  [1764061157.1292] manager: (tapd0402a09-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/535)
Nov 25 08:59:17 compute-0 kernel: tapd0402a09-50: entered promiscuous mode
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.133 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0402a09-50, col_values=(('external_ids', {'iface-id': '936f7aae-c7b9-4c9a-a88d-93ca5394771e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:17 compute-0 ovn_controller[152859]: 2025-11-25T08:59:17Z|01303|binding|INFO|Releasing lport 936f7aae-c7b9-4c9a-a88d-93ca5394771e from this chassis (sb_readonly=0)
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.138 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d0402a09-5c1d-4dec-b1c6-38e77edc4409.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d0402a09-5c1d-4dec-b1c6-38e77edc4409.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.140 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2adfc2c7-377f-4066-8764-94d94fcc7f1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.141 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: global
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-d0402a09-5c1d-4dec-b1c6-38e77edc4409
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/d0402a09-5c1d-4dec-b1c6-38e77edc4409.pid.haproxy
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: 
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID d0402a09-5c1d-4dec-b1c6-38e77edc4409
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 08:59:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:17.142 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409', 'env', 'PROCESS_TAG=haproxy-d0402a09-5c1d-4dec-b1c6-38e77edc4409', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d0402a09-5c1d-4dec-b1c6-38e77edc4409.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.282 253542 DEBUG nova.compute.manager [req-6e842302-5a25-4750-91d6-26e03254dd71 req-61f4b62b-8ffb-4d35-8e86-3ca4637e27cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received event network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.283 253542 DEBUG oslo_concurrency.lockutils [req-6e842302-5a25-4750-91d6-26e03254dd71 req-61f4b62b-8ffb-4d35-8e86-3ca4637e27cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.283 253542 DEBUG oslo_concurrency.lockutils [req-6e842302-5a25-4750-91d6-26e03254dd71 req-61f4b62b-8ffb-4d35-8e86-3ca4637e27cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.284 253542 DEBUG oslo_concurrency.lockutils [req-6e842302-5a25-4750-91d6-26e03254dd71 req-61f4b62b-8ffb-4d35-8e86-3ca4637e27cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.284 253542 DEBUG nova.compute.manager [req-6e842302-5a25-4750-91d6-26e03254dd71 req-61f4b62b-8ffb-4d35-8e86-3ca4637e27cc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Processing event network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.369 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061157.36897, 6a6a3230-e005-48d6-b758-3cf5d4f9410f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.370 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] VM Started (Lifecycle Event)
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.372 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.376 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.380 253542 INFO nova.virt.libvirt.driver [-] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Instance spawned successfully.
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.380 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.401 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.407 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.407 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.407 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.408 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.408 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.409 253542 DEBUG nova.virt.libvirt.driver [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.413 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.447 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.448 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061157.3692114, 6a6a3230-e005-48d6-b758-3cf5d4f9410f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.448 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] VM Paused (Lifecycle Event)
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.468 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.471 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061157.375583, 6a6a3230-e005-48d6-b758-3cf5d4f9410f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.472 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] VM Resumed (Lifecycle Event)
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.480 253542 INFO nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Took 7.85 seconds to spawn the instance on the hypervisor.
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.480 253542 DEBUG nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.505 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.509 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.547 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.552 253542 INFO nova.compute.manager [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Took 8.80 seconds to build instance.
Nov 25 08:59:17 compute-0 nova_compute[253538]: 2025-11-25 08:59:17.565 253542 DEBUG oslo_concurrency.lockutils [None req-ef5a73b8-ce48-4958-aadb-1ae0fdb7d727 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:17 compute-0 podman[387025]: 2025-11-25 08:59:17.56717034 +0000 UTC m=+0.090103893 container create a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:59:17 compute-0 podman[387025]: 2025-11-25 08:59:17.51240891 +0000 UTC m=+0.035342493 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 08:59:17 compute-0 systemd[1]: Started libpod-conmon-a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407.scope.
Nov 25 08:59:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:59:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 08:59:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d2bcefa150aa50a2485c101e427760b62e7f7acb3630901c864073572e7a19/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 08:59:17 compute-0 podman[387025]: 2025-11-25 08:59:17.661426026 +0000 UTC m=+0.184359599 container init a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 08:59:17 compute-0 podman[387025]: 2025-11-25 08:59:17.669221229 +0000 UTC m=+0.192154782 container start a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 08:59:17 compute-0 neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409[387040]: [NOTICE]   (387044) : New worker (387046) forked
Nov 25 08:59:17 compute-0 neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409[387040]: [NOTICE]   (387044) : Loading success.
Nov 25 08:59:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2355: 321 pgs: 321 active+clean; 264 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 70 op/s
Nov 25 08:59:18 compute-0 nova_compute[253538]: 2025-11-25 08:59:18.219 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:19 compute-0 ceph-mon[75015]: pgmap v2355: 321 pgs: 321 active+clean; 264 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 70 op/s
Nov 25 08:59:19 compute-0 nova_compute[253538]: 2025-11-25 08:59:19.279 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:19 compute-0 nova_compute[253538]: 2025-11-25 08:59:19.400 253542 DEBUG nova.compute.manager [req-700368b8-b221-411f-b173-0a04474cd10f req-d3e0716d-7df4-4415-b43a-26577744f2c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received event network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:19 compute-0 nova_compute[253538]: 2025-11-25 08:59:19.401 253542 DEBUG oslo_concurrency.lockutils [req-700368b8-b221-411f-b173-0a04474cd10f req-d3e0716d-7df4-4415-b43a-26577744f2c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:19 compute-0 nova_compute[253538]: 2025-11-25 08:59:19.401 253542 DEBUG oslo_concurrency.lockutils [req-700368b8-b221-411f-b173-0a04474cd10f req-d3e0716d-7df4-4415-b43a-26577744f2c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:19 compute-0 nova_compute[253538]: 2025-11-25 08:59:19.401 253542 DEBUG oslo_concurrency.lockutils [req-700368b8-b221-411f-b173-0a04474cd10f req-d3e0716d-7df4-4415-b43a-26577744f2c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:19 compute-0 nova_compute[253538]: 2025-11-25 08:59:19.402 253542 DEBUG nova.compute.manager [req-700368b8-b221-411f-b173-0a04474cd10f req-d3e0716d-7df4-4415-b43a-26577744f2c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] No waiting events found dispatching network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:59:19 compute-0 nova_compute[253538]: 2025-11-25 08:59:19.402 253542 WARNING nova.compute.manager [req-700368b8-b221-411f-b173-0a04474cd10f req-d3e0716d-7df4-4415-b43a-26577744f2c7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received unexpected event network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 for instance with vm_state active and task_state None.
Nov 25 08:59:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2356: 321 pgs: 321 active+clean; 292 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 175 op/s
Nov 25 08:59:21 compute-0 ceph-mon[75015]: pgmap v2356: 321 pgs: 321 active+clean; 292 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 175 op/s
Nov 25 08:59:21 compute-0 nova_compute[253538]: 2025-11-25 08:59:21.756 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:21 compute-0 nova_compute[253538]: 2025-11-25 08:59:21.757 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:21 compute-0 nova_compute[253538]: 2025-11-25 08:59:21.757 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:21 compute-0 nova_compute[253538]: 2025-11-25 08:59:21.757 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:21 compute-0 nova_compute[253538]: 2025-11-25 08:59:21.757 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:21 compute-0 nova_compute[253538]: 2025-11-25 08:59:21.759 253542 INFO nova.compute.manager [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Terminating instance
Nov 25 08:59:21 compute-0 nova_compute[253538]: 2025-11-25 08:59:21.760 253542 DEBUG nova.compute.manager [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:59:21 compute-0 kernel: tapeba714df-d5 (unregistering): left promiscuous mode
Nov 25 08:59:21 compute-0 NetworkManager[48915]: <info>  [1764061161.8057] device (tapeba714df-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:59:21 compute-0 ovn_controller[152859]: 2025-11-25T08:59:21Z|01304|binding|INFO|Releasing lport eba714df-d5db-464e-b5b6-6d56c52d33fd from this chassis (sb_readonly=0)
Nov 25 08:59:21 compute-0 ovn_controller[152859]: 2025-11-25T08:59:21Z|01305|binding|INFO|Setting lport eba714df-d5db-464e-b5b6-6d56c52d33fd down in Southbound
Nov 25 08:59:21 compute-0 ovn_controller[152859]: 2025-11-25T08:59:21Z|01306|binding|INFO|Removing iface tapeba714df-d5 ovn-installed in OVS
Nov 25 08:59:21 compute-0 nova_compute[253538]: 2025-11-25 08:59:21.870 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:21 compute-0 nova_compute[253538]: 2025-11-25 08:59:21.880 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:21.881 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:3b:fd 10.100.0.6'], port_security=['fa:16:3e:55:3b:fd 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '68c6ff41-ad19-4b3d-947d-0a5d72e4042c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20edaba8-b789-425d-a55d-3fa69c803e14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=acab9bf8-f301-413f-8774-b60482e3db77, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=eba714df-d5db-464e-b5b6-6d56c52d33fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:59:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:21.883 162739 INFO neutron.agent.ovn.metadata.agent [-] Port eba714df-d5db-464e-b5b6-6d56c52d33fd in datapath bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 unbound from our chassis
Nov 25 08:59:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:21.885 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bfb2a9da-10d5-4cf0-a585-a59d66a02fa0
Nov 25 08:59:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:21.902 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3aa89b-9876-4ebb-b6e6-a41835d336ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:21 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Nov 25 08:59:21 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007e.scope: Consumed 14.262s CPU time.
Nov 25 08:59:21 compute-0 systemd-machined[215790]: Machine qemu-156-instance-0000007e terminated.
Nov 25 08:59:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:21.936 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad1445d-42ea-49a2-8573-2b4f3d18f141]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:21.940 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a297c185-efc8-42ff-b3e3-23fbdfa53ebf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2357: 321 pgs: 321 active+clean; 293 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 160 op/s
Nov 25 08:59:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:21.989 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[863774c9-155c-4ee9-bee6-1f57a3455275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.004 253542 INFO nova.virt.libvirt.driver [-] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Instance destroyed successfully.
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.005 253542 DEBUG nova.objects.instance [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'resources' on Instance uuid 68c6ff41-ad19-4b3d-947d-0a5d72e4042c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:59:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:22.016 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[76ea3350-f438-40de-a365-d286903996bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfb2a9da-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:95:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646309, 'reachable_time': 34452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387075, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.018 253542 DEBUG nova.virt.libvirt.vif [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:58:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-0-1184974683',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-0-1184974683',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=126,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAsKC+lTfCuui0Z3hitR52PvpfumhaLVjQ69ujiuIIevy7+I1pIFYPSg9LVKZgq98SCLPdhgyGDo4sdCyIoEofU+K0/ToXdERCiT/ZZOJi+hnfAHkBLfHK2RUxUGr/PGKw==',key_name='tempest-TestSecurityGroupsBasicOps-214386099',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:59:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-1i7wo9f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:59:03Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=68c6ff41-ad19-4b3d-947d-0a5d72e4042c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.019 253542 DEBUG nova.network.os_vif_util [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "address": "fa:16:3e:55:3b:fd", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeba714df-d5", "ovs_interfaceid": "eba714df-d5db-464e-b5b6-6d56c52d33fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.020 253542 DEBUG nova.network.os_vif_util [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:3b:fd,bridge_name='br-int',has_traffic_filtering=True,id=eba714df-d5db-464e-b5b6-6d56c52d33fd,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeba714df-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.020 253542 DEBUG os_vif [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:3b:fd,bridge_name='br-int',has_traffic_filtering=True,id=eba714df-d5db-464e-b5b6-6d56c52d33fd,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeba714df-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.024 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.024 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeba714df-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.026 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.030 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.031 253542 INFO os_vif [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:3b:fd,bridge_name='br-int',has_traffic_filtering=True,id=eba714df-d5db-464e-b5b6-6d56c52d33fd,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeba714df-d5')
Nov 25 08:59:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:22.031 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[90379492-0f7a-48dd-ade5-2ad0d676f686]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbfb2a9da-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646321, 'tstamp': 646321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387078, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbfb2a9da-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646324, 'tstamp': 646324}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387078, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:22.042 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfb2a9da-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:22.044 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfb2a9da-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:22.045 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:59:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:22.045 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbfb2a9da-10, col_values=(('external_ids', {'iface-id': '0b3aeb91-b51d-4bb6-a8fa-9a80bd12b96f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:22 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:22.045 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.052 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.062 253542 DEBUG nova.compute.manager [req-d5c2b644-bfd0-40ea-9b66-b3c87b4783c8 req-2d558549-b065-4c18-a1be-7fa6eda7417a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received event network-changed-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.062 253542 DEBUG nova.compute.manager [req-d5c2b644-bfd0-40ea-9b66-b3c87b4783c8 req-2d558549-b065-4c18-a1be-7fa6eda7417a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Refreshing instance network info cache due to event network-changed-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.063 253542 DEBUG oslo_concurrency.lockutils [req-d5c2b644-bfd0-40ea-9b66-b3c87b4783c8 req-2d558549-b065-4c18-a1be-7fa6eda7417a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.063 253542 DEBUG oslo_concurrency.lockutils [req-d5c2b644-bfd0-40ea-9b66-b3c87b4783c8 req-2d558549-b065-4c18-a1be-7fa6eda7417a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.063 253542 DEBUG nova.network.neutron [req-d5c2b644-bfd0-40ea-9b66-b3c87b4783c8 req-2d558549-b065-4c18-a1be-7fa6eda7417a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Refreshing network info cache for port 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:59:22 compute-0 ceph-mon[75015]: pgmap v2357: 321 pgs: 321 active+clean; 293 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 160 op/s
Nov 25 08:59:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.639 253542 INFO nova.virt.libvirt.driver [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Deleting instance files /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c_del
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.641 253542 INFO nova.virt.libvirt.driver [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Deletion of /var/lib/nova/instances/68c6ff41-ad19-4b3d-947d-0a5d72e4042c_del complete
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.808 253542 INFO nova.compute.manager [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Took 1.05 seconds to destroy the instance on the hypervisor.
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.808 253542 DEBUG oslo.service.loopingcall [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.809 253542 DEBUG nova.compute.manager [-] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:59:22 compute-0 nova_compute[253538]: 2025-11-25 08:59:22.809 253542 DEBUG nova.network.neutron [-] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:59:23 compute-0 nova_compute[253538]: 2025-11-25 08:59:23.244 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:59:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:59:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:59:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:59:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:59:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:59:23 compute-0 podman[387098]: 2025-11-25 08:59:23.875099474 +0000 UTC m=+0.117160271 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 08:59:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2358: 321 pgs: 321 active+clean; 263 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 175 op/s
Nov 25 08:59:23 compute-0 nova_compute[253538]: 2025-11-25 08:59:23.973 253542 DEBUG nova.network.neutron [-] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:59:23 compute-0 nova_compute[253538]: 2025-11-25 08:59:23.993 253542 DEBUG nova.network.neutron [req-d5c2b644-bfd0-40ea-9b66-b3c87b4783c8 req-2d558549-b065-4c18-a1be-7fa6eda7417a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Updated VIF entry in instance network info cache for port 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:59:23 compute-0 nova_compute[253538]: 2025-11-25 08:59:23.994 253542 DEBUG nova.network.neutron [req-d5c2b644-bfd0-40ea-9b66-b3c87b4783c8 req-2d558549-b065-4c18-a1be-7fa6eda7417a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Updating instance_info_cache with network_info: [{"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:59:23 compute-0 nova_compute[253538]: 2025-11-25 08:59:23.997 253542 INFO nova.compute.manager [-] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Took 1.19 seconds to deallocate network for instance.
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.029 253542 DEBUG oslo_concurrency.lockutils [req-d5c2b644-bfd0-40ea-9b66-b3c87b4783c8 req-2d558549-b065-4c18-a1be-7fa6eda7417a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.054 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.054 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.116 253542 DEBUG nova.compute.manager [req-0f1e83af-d9e9-44b0-a11a-a28f9b4f3df5 req-ba9990ea-bd40-43c6-a4d2-5f4ada38799f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received event network-vif-deleted-eba714df-d5db-464e-b5b6-6d56c52d33fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.167 253542 DEBUG oslo_concurrency.processutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.218 253542 DEBUG nova.compute.manager [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received event network-vif-unplugged-eba714df-d5db-464e-b5b6-6d56c52d33fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.219 253542 DEBUG oslo_concurrency.lockutils [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.220 253542 DEBUG oslo_concurrency.lockutils [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.221 253542 DEBUG oslo_concurrency.lockutils [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.222 253542 DEBUG nova.compute.manager [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] No waiting events found dispatching network-vif-unplugged-eba714df-d5db-464e-b5b6-6d56c52d33fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.222 253542 WARNING nova.compute.manager [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received unexpected event network-vif-unplugged-eba714df-d5db-464e-b5b6-6d56c52d33fd for instance with vm_state deleted and task_state None.
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.223 253542 DEBUG nova.compute.manager [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received event network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.224 253542 DEBUG oslo_concurrency.lockutils [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.225 253542 DEBUG oslo_concurrency.lockutils [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.225 253542 DEBUG oslo_concurrency.lockutils [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.226 253542 DEBUG nova.compute.manager [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] No waiting events found dispatching network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.227 253542 WARNING nova.compute.manager [req-1d94d367-f888-4b7e-b173-052b4568e0ea req-6b144e0a-5994-4c33-bf46-ffda1014772c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Received unexpected event network-vif-plugged-eba714df-d5db-464e-b5b6-6d56c52d33fd for instance with vm_state deleted and task_state None.
Nov 25 08:59:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:59:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/104438142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.632 253542 DEBUG oslo_concurrency.processutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.638 253542 DEBUG nova.compute.provider_tree [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.652 253542 DEBUG nova.scheduler.client.report [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.676 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.708 253542 INFO nova.scheduler.client.report [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Deleted allocations for instance 68c6ff41-ad19-4b3d-947d-0a5d72e4042c
Nov 25 08:59:24 compute-0 nova_compute[253538]: 2025-11-25 08:59:24.794 253542 DEBUG oslo_concurrency.lockutils [None req-02663957-0ce7-4244-beb3-89f6639d78be 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "68c6ff41-ad19-4b3d-947d-0a5d72e4042c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:25 compute-0 ceph-mon[75015]: pgmap v2358: 321 pgs: 321 active+clean; 263 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 175 op/s
Nov 25 08:59:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/104438142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.702 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "9447890d-1fff-4536-a0cd-b889c23f7479" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.703 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.703 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.703 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.704 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.705 253542 INFO nova.compute.manager [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Terminating instance
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.706 253542 DEBUG nova.compute.manager [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:59:25 compute-0 kernel: tap6e0acf79-71 (unregistering): left promiscuous mode
Nov 25 08:59:25 compute-0 NetworkManager[48915]: <info>  [1764061165.7902] device (tap6e0acf79-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.802 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:25 compute-0 ovn_controller[152859]: 2025-11-25T08:59:25Z|01307|binding|INFO|Releasing lport 6e0acf79-7148-4555-9265-b449f234806e from this chassis (sb_readonly=0)
Nov 25 08:59:25 compute-0 ovn_controller[152859]: 2025-11-25T08:59:25Z|01308|binding|INFO|Setting lport 6e0acf79-7148-4555-9265-b449f234806e down in Southbound
Nov 25 08:59:25 compute-0 ovn_controller[152859]: 2025-11-25T08:59:25Z|01309|binding|INFO|Removing iface tap6e0acf79-71 ovn-installed in OVS
Nov 25 08:59:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:25.822 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:96:34 10.100.0.5'], port_security=['fa:16:3e:f7:96:34 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9447890d-1fff-4536-a0cd-b889c23f7479', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20edaba8-b789-425d-a55d-3fa69c803e14 31fd3dba-a142-469b-a6ad-eb14c55eb5d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=acab9bf8-f301-413f-8774-b60482e3db77, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6e0acf79-7148-4555-9265-b449f234806e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.822 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:25.824 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6e0acf79-7148-4555-9265-b449f234806e in datapath bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 unbound from our chassis
Nov 25 08:59:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:25.827 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bfb2a9da-10d5-4cf0-a585-a59d66a02fa0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:59:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:25.828 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9abff8-b63a-4746-b256-623ba95d6122]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:25.831 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 namespace which is not needed anymore
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.843 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:25 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Nov 25 08:59:25 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007d.scope: Consumed 15.955s CPU time.
Nov 25 08:59:25 compute-0 systemd-machined[215790]: Machine qemu-155-instance-0000007d terminated.
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.949 253542 INFO nova.virt.libvirt.driver [-] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Instance destroyed successfully.
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.950 253542 DEBUG nova.objects.instance [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'resources' on Instance uuid 9447890d-1fff-4536-a0cd-b889c23f7479 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.963 253542 DEBUG nova.virt.libvirt.vif [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:58:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-711993251',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-711993251',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=125,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAsKC+lTfCuui0Z3hitR52PvpfumhaLVjQ69ujiuIIevy7+I1pIFYPSg9LVKZgq98SCLPdhgyGDo4sdCyIoEofU+K0/ToXdERCiT/ZZOJi+hnfAHkBLfHK2RUxUGr/PGKw==',key_name='tempest-TestSecurityGroupsBasicOps-214386099',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:58:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-llehtawf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:58:29Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=9447890d-1fff-4536-a0cd-b889c23f7479,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.963 253542 DEBUG nova.network.os_vif_util [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.964 253542 DEBUG nova.network.os_vif_util [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:96:34,bridge_name='br-int',has_traffic_filtering=True,id=6e0acf79-7148-4555-9265-b449f234806e,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0acf79-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.964 253542 DEBUG os_vif [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:96:34,bridge_name='br-int',has_traffic_filtering=True,id=6e0acf79-7148-4555-9265-b449f234806e,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0acf79-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.967 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.967 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e0acf79-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2359: 321 pgs: 321 active+clean; 222 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 173 op/s
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.971 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:59:25 compute-0 nova_compute[253538]: 2025-11-25 08:59:25.974 253542 INFO os_vif [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:96:34,bridge_name='br-int',has_traffic_filtering=True,id=6e0acf79-7148-4555-9265-b449f234806e,network=Network(bfb2a9da-10d5-4cf0-a585-a59d66a02fa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0acf79-71')
Nov 25 08:59:25 compute-0 neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0[384967]: [NOTICE]   (384971) : haproxy version is 2.8.14-c23fe91
Nov 25 08:59:25 compute-0 neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0[384967]: [NOTICE]   (384971) : path to executable is /usr/sbin/haproxy
Nov 25 08:59:25 compute-0 neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0[384967]: [WARNING]  (384971) : Exiting Master process...
Nov 25 08:59:25 compute-0 neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0[384967]: [WARNING]  (384971) : Exiting Master process...
Nov 25 08:59:25 compute-0 neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0[384967]: [ALERT]    (384971) : Current worker (384973) exited with code 143 (Terminated)
Nov 25 08:59:25 compute-0 neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0[384967]: [WARNING]  (384971) : All workers exited. Exiting... (0)
Nov 25 08:59:25 compute-0 systemd[1]: libpod-6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31.scope: Deactivated successfully.
Nov 25 08:59:26 compute-0 podman[387169]: 2025-11-25 08:59:26.003106655 +0000 UTC m=+0.088519430 container died 6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 08:59:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31-userdata-shm.mount: Deactivated successfully.
Nov 25 08:59:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e04d9311f2566d0bdda347b5c2a05beefee95a9cbfe91f3a4e7e52afc379d7d-merged.mount: Deactivated successfully.
Nov 25 08:59:26 compute-0 podman[387169]: 2025-11-25 08:59:26.062087141 +0000 UTC m=+0.147499906 container cleanup 6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 08:59:26 compute-0 systemd[1]: libpod-conmon-6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31.scope: Deactivated successfully.
Nov 25 08:59:26 compute-0 podman[387227]: 2025-11-25 08:59:26.12708955 +0000 UTC m=+0.044507133 container remove 6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 08:59:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.134 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[42aa46af-a6ca-4c3b-981c-7dcfb77a6128]: (4, ('Tue Nov 25 08:59:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 (6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31)\n6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31\nTue Nov 25 08:59:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 (6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31)\n6f3c69fe408619841166a42c4fd7b3fb3f54802beb68f1157082d7e916a8cb31\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.136 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b5916862-9d53-4599-9eb6-8b55b19b3076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.138 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfb2a9da-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:26 compute-0 kernel: tapbfb2a9da-10: left promiscuous mode
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.139 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.152 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.157 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[558bf816-d1b6-4c0d-865f-b814d834195b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.176 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[79de7430-36a9-4bbd-be2a-8da5aeed1eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.177 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf2e6a6-827e-44df-b0f1-2f5786ee369d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.184 253542 DEBUG nova.compute.manager [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-vif-unplugged-6e0acf79-7148-4555-9265-b449f234806e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.185 253542 DEBUG oslo_concurrency.lockutils [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.185 253542 DEBUG oslo_concurrency.lockutils [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.186 253542 DEBUG oslo_concurrency.lockutils [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.186 253542 DEBUG nova.compute.manager [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] No waiting events found dispatching network-vif-unplugged-6e0acf79-7148-4555-9265-b449f234806e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.187 253542 DEBUG nova.compute.manager [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-vif-unplugged-6e0acf79-7148-4555-9265-b449f234806e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.187 253542 DEBUG nova.compute.manager [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.187 253542 DEBUG oslo_concurrency.lockutils [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.188 253542 DEBUG oslo_concurrency.lockutils [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.188 253542 DEBUG oslo_concurrency.lockutils [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.189 253542 DEBUG nova.compute.manager [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] No waiting events found dispatching network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.189 253542 WARNING nova.compute.manager [req-c695081a-6207-4cb6-9c08-4aa612ee787a req-31571555-f56d-4685-a1b8-79913c547640 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received unexpected event network-vif-plugged-6e0acf79-7148-4555-9265-b449f234806e for instance with vm_state active and task_state deleting.
Nov 25 08:59:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.197 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b78be2f1-010b-40ee-b828-9b7ed1ce599d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646301, 'reachable_time': 19680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387242, 'error': None, 'target': 'ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:26 compute-0 systemd[1]: run-netns-ovnmeta\x2dbfb2a9da\x2d10d5\x2d4cf0\x2da585\x2da59d66a02fa0.mount: Deactivated successfully.
Nov 25 08:59:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.199 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bfb2a9da-10d5-4cf0-a585-a59d66a02fa0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:59:26 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:26.200 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[155f18cc-57d2-489f-bb16-dcc8b24af6c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.242 253542 DEBUG nova.compute.manager [req-1c61dd1e-5139-4a04-b94b-737ce8f1d861 req-4f4ebcf6-787c-49b8-80fd-a6902a66bcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-changed-6e0acf79-7148-4555-9265-b449f234806e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.243 253542 DEBUG nova.compute.manager [req-1c61dd1e-5139-4a04-b94b-737ce8f1d861 req-4f4ebcf6-787c-49b8-80fd-a6902a66bcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Refreshing instance network info cache due to event network-changed-6e0acf79-7148-4555-9265-b449f234806e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.243 253542 DEBUG oslo_concurrency.lockutils [req-1c61dd1e-5139-4a04-b94b-737ce8f1d861 req-4f4ebcf6-787c-49b8-80fd-a6902a66bcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.243 253542 DEBUG oslo_concurrency.lockutils [req-1c61dd1e-5139-4a04-b94b-737ce8f1d861 req-4f4ebcf6-787c-49b8-80fd-a6902a66bcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.243 253542 DEBUG nova.network.neutron [req-1c61dd1e-5139-4a04-b94b-737ce8f1d861 req-4f4ebcf6-787c-49b8-80fd-a6902a66bcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Refreshing network info cache for port 6e0acf79-7148-4555-9265-b449f234806e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.395 253542 INFO nova.virt.libvirt.driver [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Deleting instance files /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479_del
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.396 253542 INFO nova.virt.libvirt.driver [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Deletion of /var/lib/nova/instances/9447890d-1fff-4536-a0cd-b889c23f7479_del complete
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.440 253542 INFO nova.compute.manager [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.441 253542 DEBUG oslo.service.loopingcall [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.442 253542 DEBUG nova.compute.manager [-] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:59:26 compute-0 nova_compute[253538]: 2025-11-25 08:59:26.442 253542 DEBUG nova.network.neutron [-] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:59:27 compute-0 ceph-mon[75015]: pgmap v2359: 321 pgs: 321 active+clean; 222 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 173 op/s
Nov 25 08:59:27 compute-0 nova_compute[253538]: 2025-11-25 08:59:27.582 253542 DEBUG nova.network.neutron [-] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:59:27 compute-0 nova_compute[253538]: 2025-11-25 08:59:27.602 253542 INFO nova.compute.manager [-] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Took 1.16 seconds to deallocate network for instance.
Nov 25 08:59:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:59:27 compute-0 nova_compute[253538]: 2025-11-25 08:59:27.668 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:27 compute-0 nova_compute[253538]: 2025-11-25 08:59:27.669 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:27 compute-0 nova_compute[253538]: 2025-11-25 08:59:27.762 253542 DEBUG oslo_concurrency.processutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2360: 321 pgs: 321 active+clean; 186 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 171 op/s
Nov 25 08:59:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:59:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1544755068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:59:28 compute-0 nova_compute[253538]: 2025-11-25 08:59:28.218 253542 DEBUG oslo_concurrency.processutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:28 compute-0 nova_compute[253538]: 2025-11-25 08:59:28.228 253542 DEBUG nova.compute.provider_tree [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:59:28 compute-0 nova_compute[253538]: 2025-11-25 08:59:28.258 253542 DEBUG nova.scheduler.client.report [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:59:28 compute-0 nova_compute[253538]: 2025-11-25 08:59:28.268 253542 DEBUG nova.network.neutron [req-1c61dd1e-5139-4a04-b94b-737ce8f1d861 req-4f4ebcf6-787c-49b8-80fd-a6902a66bcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updated VIF entry in instance network info cache for port 6e0acf79-7148-4555-9265-b449f234806e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:59:28 compute-0 nova_compute[253538]: 2025-11-25 08:59:28.269 253542 DEBUG nova.network.neutron [req-1c61dd1e-5139-4a04-b94b-737ce8f1d861 req-4f4ebcf6-787c-49b8-80fd-a6902a66bcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updating instance_info_cache with network_info: [{"id": "6e0acf79-7148-4555-9265-b449f234806e", "address": "fa:16:3e:f7:96:34", "network": {"id": "bfb2a9da-10d5-4cf0-a585-a59d66a02fa0", "bridge": "br-int", "label": "tempest-network-smoke--16312468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0acf79-71", "ovs_interfaceid": "6e0acf79-7148-4555-9265-b449f234806e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:59:28 compute-0 nova_compute[253538]: 2025-11-25 08:59:28.282 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:28 compute-0 nova_compute[253538]: 2025-11-25 08:59:28.286 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:28 compute-0 nova_compute[253538]: 2025-11-25 08:59:28.291 253542 DEBUG oslo_concurrency.lockutils [req-1c61dd1e-5139-4a04-b94b-737ce8f1d861 req-4f4ebcf6-787c-49b8-80fd-a6902a66bcaf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-9447890d-1fff-4536-a0cd-b889c23f7479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:59:28 compute-0 nova_compute[253538]: 2025-11-25 08:59:28.327 253542 INFO nova.scheduler.client.report [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Deleted allocations for instance 9447890d-1fff-4536-a0cd-b889c23f7479
Nov 25 08:59:28 compute-0 nova_compute[253538]: 2025-11-25 08:59:28.402 253542 DEBUG nova.compute.manager [req-5cf50249-b4f9-41a3-bb9b-f543fb9af5ad req-0988b32a-de94-4d83-939a-178db7704487 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Received event network-vif-deleted-6e0acf79-7148-4555-9265-b449f234806e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:28 compute-0 nova_compute[253538]: 2025-11-25 08:59:28.403 253542 INFO nova.compute.manager [req-5cf50249-b4f9-41a3-bb9b-f543fb9af5ad req-0988b32a-de94-4d83-939a-178db7704487 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Neutron deleted interface 6e0acf79-7148-4555-9265-b449f234806e; detaching it from the instance and deleting it from the info cache
Nov 25 08:59:28 compute-0 nova_compute[253538]: 2025-11-25 08:59:28.404 253542 DEBUG nova.network.neutron [req-5cf50249-b4f9-41a3-bb9b-f543fb9af5ad req-0988b32a-de94-4d83-939a-178db7704487 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:59:28 compute-0 nova_compute[253538]: 2025-11-25 08:59:28.433 253542 DEBUG oslo_concurrency.lockutils [None req-d4e9d3c9-5388-4747-941d-94bce3c9c71d 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "9447890d-1fff-4536-a0cd-b889c23f7479" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:28 compute-0 nova_compute[253538]: 2025-11-25 08:59:28.441 253542 DEBUG nova.compute.manager [req-5cf50249-b4f9-41a3-bb9b-f543fb9af5ad req-0988b32a-de94-4d83-939a-178db7704487 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Detach interface failed, port_id=6e0acf79-7148-4555-9265-b449f234806e, reason: Instance 9447890d-1fff-4536-a0cd-b889c23f7479 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 08:59:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 08:59:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/761127550' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:59:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 08:59:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/761127550' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:59:29 compute-0 ceph-mon[75015]: pgmap v2360: 321 pgs: 321 active+clean; 186 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 171 op/s
Nov 25 08:59:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1544755068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:59:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/761127550' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 08:59:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/761127550' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 08:59:29 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 25 08:59:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2361: 321 pgs: 321 active+clean; 135 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 198 op/s
Nov 25 08:59:30 compute-0 ovn_controller[152859]: 2025-11-25T08:59:30Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:6b:c3 10.100.0.8
Nov 25 08:59:30 compute-0 ovn_controller[152859]: 2025-11-25T08:59:30Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:6b:c3 10.100.0.8
Nov 25 08:59:30 compute-0 nova_compute[253538]: 2025-11-25 08:59:30.970 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:31 compute-0 ceph-mon[75015]: pgmap v2361: 321 pgs: 321 active+clean; 135 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 198 op/s
Nov 25 08:59:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2362: 321 pgs: 321 active+clean; 154 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 661 KiB/s rd, 1.7 MiB/s wr, 105 op/s
Nov 25 08:59:32 compute-0 ovn_controller[152859]: 2025-11-25T08:59:32Z|01310|binding|INFO|Releasing lport 936f7aae-c7b9-4c9a-a88d-93ca5394771e from this chassis (sb_readonly=0)
Nov 25 08:59:32 compute-0 nova_compute[253538]: 2025-11-25 08:59:32.575 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:59:33 compute-0 ceph-mon[75015]: pgmap v2362: 321 pgs: 321 active+clean; 154 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 661 KiB/s rd, 1.7 MiB/s wr, 105 op/s
Nov 25 08:59:33 compute-0 nova_compute[253538]: 2025-11-25 08:59:33.290 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2363: 321 pgs: 321 active+clean; 163 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 449 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Nov 25 08:59:35 compute-0 ceph-mon[75015]: pgmap v2363: 321 pgs: 321 active+clean; 163 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 449 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Nov 25 08:59:35 compute-0 nova_compute[253538]: 2025-11-25 08:59:35.972 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2364: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Nov 25 08:59:36 compute-0 nova_compute[253538]: 2025-11-25 08:59:36.837 253542 INFO nova.compute.manager [None req-2c58d1b1-3f79-4c03-8851-0545f2f45b85 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Get console output
Nov 25 08:59:36 compute-0 nova_compute[253538]: 2025-11-25 08:59:36.843 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:59:37 compute-0 nova_compute[253538]: 2025-11-25 08:59:37.001 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061161.9994311, 68c6ff41-ad19-4b3d-947d-0a5d72e4042c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:59:37 compute-0 nova_compute[253538]: 2025-11-25 08:59:37.002 253542 INFO nova.compute.manager [-] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] VM Stopped (Lifecycle Event)
Nov 25 08:59:37 compute-0 nova_compute[253538]: 2025-11-25 08:59:37.022 253542 DEBUG nova.compute.manager [None req-c0d5e154-31e6-474f-8de5-98352445c924 - - - - - -] [instance: 68c6ff41-ad19-4b3d-947d-0a5d72e4042c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:59:37 compute-0 ceph-mon[75015]: pgmap v2364: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.131093) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061177131170, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1700, "num_deletes": 251, "total_data_size": 2649244, "memory_usage": 2681744, "flush_reason": "Manual Compaction"}
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061177147532, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 2600111, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48060, "largest_seqno": 49759, "table_properties": {"data_size": 2592404, "index_size": 4586, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16299, "raw_average_key_size": 20, "raw_value_size": 2576882, "raw_average_value_size": 3177, "num_data_blocks": 204, "num_entries": 811, "num_filter_entries": 811, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061003, "oldest_key_time": 1764061003, "file_creation_time": 1764061177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 16484 microseconds, and 7014 cpu microseconds.
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.147583) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 2600111 bytes OK
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.147609) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.149438) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.149457) EVENT_LOG_v1 {"time_micros": 1764061177149451, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.149476) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 2641917, prev total WAL file size 2641917, number of live WAL files 2.
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.150430) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(2539KB)], [113(7605KB)]
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061177150483, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10387684, "oldest_snapshot_seqno": -1}
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6974 keys, 8624620 bytes, temperature: kUnknown
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061177207842, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8624620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8580112, "index_size": 25964, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 182367, "raw_average_key_size": 26, "raw_value_size": 8457288, "raw_average_value_size": 1212, "num_data_blocks": 1010, "num_entries": 6974, "num_filter_entries": 6974, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.208170) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8624620 bytes
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.209740) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.8 rd, 150.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 7.4 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(7.3) write-amplify(3.3) OK, records in: 7488, records dropped: 514 output_compression: NoCompression
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.209771) EVENT_LOG_v1 {"time_micros": 1764061177209757, "job": 68, "event": "compaction_finished", "compaction_time_micros": 57449, "compaction_time_cpu_micros": 23656, "output_level": 6, "num_output_files": 1, "total_output_size": 8624620, "num_input_records": 7488, "num_output_records": 6974, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061177210881, "job": 68, "event": "table_file_deletion", "file_number": 115}
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061177213562, "job": 68, "event": "table_file_deletion", "file_number": 113}
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.150264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.213627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.213635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.213639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.213642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:59:37 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-08:59:37.213645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 08:59:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:59:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2365: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Nov 25 08:59:38 compute-0 nova_compute[253538]: 2025-11-25 08:59:38.292 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:38 compute-0 ovn_controller[152859]: 2025-11-25T08:59:38Z|01311|binding|INFO|Releasing lport 936f7aae-c7b9-4c9a-a88d-93ca5394771e from this chassis (sb_readonly=0)
Nov 25 08:59:38 compute-0 nova_compute[253538]: 2025-11-25 08:59:38.368 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:38 compute-0 ovn_controller[152859]: 2025-11-25T08:59:38Z|01312|binding|INFO|Releasing lport 936f7aae-c7b9-4c9a-a88d-93ca5394771e from this chassis (sb_readonly=0)
Nov 25 08:59:38 compute-0 nova_compute[253538]: 2025-11-25 08:59:38.463 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:38 compute-0 nova_compute[253538]: 2025-11-25 08:59:38.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:59:38 compute-0 nova_compute[253538]: 2025-11-25 08:59:38.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:59:39 compute-0 ceph-mon[75015]: pgmap v2365: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Nov 25 08:59:39 compute-0 nova_compute[253538]: 2025-11-25 08:59:39.657 253542 INFO nova.compute.manager [None req-8557c5e5-95f0-4f75-a2cd-3a1b89257541 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Get console output
Nov 25 08:59:39 compute-0 nova_compute[253538]: 2025-11-25 08:59:39.664 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:59:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2366: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Nov 25 08:59:40 compute-0 nova_compute[253538]: 2025-11-25 08:59:40.948 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061165.947497, 9447890d-1fff-4536-a0cd-b889c23f7479 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:59:40 compute-0 nova_compute[253538]: 2025-11-25 08:59:40.948 253542 INFO nova.compute.manager [-] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] VM Stopped (Lifecycle Event)
Nov 25 08:59:40 compute-0 nova_compute[253538]: 2025-11-25 08:59:40.973 253542 DEBUG nova.compute.manager [None req-7b1068af-f4a0-4917-b708-4fa46f40ab23 - - - - - -] [instance: 9447890d-1fff-4536-a0cd-b889c23f7479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:59:40 compute-0 nova_compute[253538]: 2025-11-25 08:59:40.975 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:41.082 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:41.083 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:41.084 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:41 compute-0 ceph-mon[75015]: pgmap v2366: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Nov 25 08:59:41 compute-0 nova_compute[253538]: 2025-11-25 08:59:41.404 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:41 compute-0 NetworkManager[48915]: <info>  [1764061181.4046] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/536)
Nov 25 08:59:41 compute-0 NetworkManager[48915]: <info>  [1764061181.4062] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Nov 25 08:59:41 compute-0 nova_compute[253538]: 2025-11-25 08:59:41.501 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:41 compute-0 ovn_controller[152859]: 2025-11-25T08:59:41Z|01313|binding|INFO|Releasing lport 936f7aae-c7b9-4c9a-a88d-93ca5394771e from this chassis (sb_readonly=0)
Nov 25 08:59:41 compute-0 nova_compute[253538]: 2025-11-25 08:59:41.515 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2367: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 1.9 MiB/s wr, 54 op/s
Nov 25 08:59:42 compute-0 nova_compute[253538]: 2025-11-25 08:59:42.115 253542 INFO nova.compute.manager [None req-ee83d514-173b-4d71-bf5f-e0a4f83342a5 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Get console output
Nov 25 08:59:42 compute-0 nova_compute[253538]: 2025-11-25 08:59:42.122 310639 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 08:59:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.140 253542 DEBUG nova.compute.manager [req-57306cb8-b385-4596-8a43-0a21a2546fa9 req-627f9f0a-ab82-4924-b90c-75dfefd93278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received event network-changed-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.140 253542 DEBUG nova.compute.manager [req-57306cb8-b385-4596-8a43-0a21a2546fa9 req-627f9f0a-ab82-4924-b90c-75dfefd93278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Refreshing instance network info cache due to event network-changed-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.141 253542 DEBUG oslo_concurrency.lockutils [req-57306cb8-b385-4596-8a43-0a21a2546fa9 req-627f9f0a-ab82-4924-b90c-75dfefd93278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.141 253542 DEBUG oslo_concurrency.lockutils [req-57306cb8-b385-4596-8a43-0a21a2546fa9 req-627f9f0a-ab82-4924-b90c-75dfefd93278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.141 253542 DEBUG nova.network.neutron [req-57306cb8-b385-4596-8a43-0a21a2546fa9 req-627f9f0a-ab82-4924-b90c-75dfefd93278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Refreshing network info cache for port 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 08:59:43 compute-0 ceph-mon[75015]: pgmap v2367: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 1.9 MiB/s wr, 54 op/s
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.229 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.230 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.231 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.231 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.231 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.233 253542 INFO nova.compute.manager [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Terminating instance
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.234 253542 DEBUG nova.compute.manager [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 08:59:43 compute-0 kernel: tap0b7bd252-0c (unregistering): left promiscuous mode
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.359 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:43 compute-0 NetworkManager[48915]: <info>  [1764061183.3656] device (tap0b7bd252-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 08:59:43 compute-0 ovn_controller[152859]: 2025-11-25T08:59:43Z|01314|binding|INFO|Releasing lport 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 from this chassis (sb_readonly=0)
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.369 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:43 compute-0 ovn_controller[152859]: 2025-11-25T08:59:43Z|01315|binding|INFO|Setting lport 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 down in Southbound
Nov 25 08:59:43 compute-0 ovn_controller[152859]: 2025-11-25T08:59:43Z|01316|binding|INFO|Removing iface tap0b7bd252-0c ovn-installed in OVS
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.370 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.371 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.380 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:6b:c3 10.100.0.8'], port_security=['fa:16:3e:61:6b:c3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6a6a3230-e005-48d6-b758-3cf5d4f9410f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0402a09-5c1d-4dec-b1c6-38e77edc4409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92faeb767e7a423586eaaf32661ce771', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fbcd9381-e965-435b-8fa1-373c60075d6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff74e0b1-b375-483e-985d-fce7814dd7fc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 08:59:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.381 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 in datapath d0402a09-5c1d-4dec-b1c6-38e77edc4409 unbound from our chassis
Nov 25 08:59:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.382 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d0402a09-5c1d-4dec-b1c6-38e77edc4409, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 08:59:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.383 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e638c993-31fb-440f-8bdb-4c073f0dfb58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.383 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409 namespace which is not needed anymore
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.391 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:43 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Nov 25 08:59:43 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007f.scope: Consumed 14.340s CPU time.
Nov 25 08:59:43 compute-0 systemd-machined[215790]: Machine qemu-157-instance-0000007f terminated.
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.476 253542 INFO nova.virt.libvirt.driver [-] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Instance destroyed successfully.
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.477 253542 DEBUG nova.objects.instance [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lazy-loading 'resources' on Instance uuid 6a6a3230-e005-48d6-b758-3cf5d4f9410f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.491 253542 DEBUG nova.virt.libvirt.vif [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:59:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-571013725',display_name='tempest-TestNetworkBasicOps-server-571013725',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-571013725',id=127,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9vDi5E2E+ywOEJjDZ0PQ8FPpnwynuIL6i1c9SarSwpCQ7QnXbRw7n+Ck5BMm/3gHxZu4fef569DYJ0xiHgyqCAtkk+E+7ZMYtBKG+VyGO33faTg/ful5ZkeC+zSQwIDw==',key_name='tempest-TestNetworkBasicOps-363283039',keypairs=<?>,launch_index=0,launched_at=2025-11-25T08:59:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92faeb767e7a423586eaaf32661ce771',ramdisk_id='',reservation_id='r-kwd70nuj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2019122229',owner_user_name='tempest-TestNetworkBasicOps-2019122229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T08:59:17Z,user_data=None,user_id='4211995133cc45db8e38c47f747fb092',uuid=6a6a3230-e005-48d6-b758-3cf5d4f9410f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.492 253542 DEBUG nova.network.os_vif_util [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converting VIF {"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.492 253542 DEBUG nova.network.os_vif_util [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:61:6b:c3,bridge_name='br-int',has_traffic_filtering=True,id=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29,network=Network(d0402a09-5c1d-4dec-b1c6-38e77edc4409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b7bd252-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.493 253542 DEBUG os_vif [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:6b:c3,bridge_name='br-int',has_traffic_filtering=True,id=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29,network=Network(d0402a09-5c1d-4dec-b1c6-38e77edc4409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b7bd252-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.494 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.494 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b7bd252-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.496 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.499 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.501 253542 INFO os_vif [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:6b:c3,bridge_name='br-int',has_traffic_filtering=True,id=0b7bd252-0c0e-43fd-b9ae-27d615ec9c29,network=Network(d0402a09-5c1d-4dec-b1c6-38e77edc4409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b7bd252-0c')
Nov 25 08:59:43 compute-0 neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409[387040]: [NOTICE]   (387044) : haproxy version is 2.8.14-c23fe91
Nov 25 08:59:43 compute-0 neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409[387040]: [NOTICE]   (387044) : path to executable is /usr/sbin/haproxy
Nov 25 08:59:43 compute-0 neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409[387040]: [WARNING]  (387044) : Exiting Master process...
Nov 25 08:59:43 compute-0 neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409[387040]: [ALERT]    (387044) : Current worker (387046) exited with code 143 (Terminated)
Nov 25 08:59:43 compute-0 neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409[387040]: [WARNING]  (387044) : All workers exited. Exiting... (0)
Nov 25 08:59:43 compute-0 systemd[1]: libpod-a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407.scope: Deactivated successfully.
Nov 25 08:59:43 compute-0 podman[387303]: 2025-11-25 08:59:43.52665306 +0000 UTC m=+0.046723933 container died a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 08:59:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407-userdata-shm.mount: Deactivated successfully.
Nov 25 08:59:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-30d2bcefa150aa50a2485c101e427760b62e7f7acb3630901c864073572e7a19-merged.mount: Deactivated successfully.
Nov 25 08:59:43 compute-0 podman[387303]: 2025-11-25 08:59:43.569519496 +0000 UTC m=+0.089590359 container cleanup a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.571 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.571 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 08:59:43 compute-0 systemd[1]: libpod-conmon-a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407.scope: Deactivated successfully.
Nov 25 08:59:43 compute-0 podman[387353]: 2025-11-25 08:59:43.66776419 +0000 UTC m=+0.066874750 container remove a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 08:59:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.675 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c8df159b-e6e1-412e-8d21-a21053f7c83b]: (4, ('Tue Nov 25 08:59:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409 (a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407)\na41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407\nTue Nov 25 08:59:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409 (a41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407)\na41782ed97ee4d448f4409f10540456942e8fdc1bbcea1477e1e710d084a3407\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.678 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d3b506-e857-49df-be14-fc941c5b1cf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.680 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0402a09-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.683 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:43 compute-0 kernel: tapd0402a09-50: left promiscuous mode
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.702 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.704 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.706 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[60125edb-fc5b-40e5-b0cf-09ad03cf4af4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.721 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d061b886-82c7-4a5a-8a51-5687da845419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.723 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f9bf1f59-93cf-4286-86e5-6942bae1c893]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.741 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f620a3e9-0b3f-4cfa-bf6c-450e92b76b5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651145, 'reachable_time': 44799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387366, 'error': None, 'target': 'ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:43 compute-0 systemd[1]: run-netns-ovnmeta\x2dd0402a09\x2d5c1d\x2d4dec\x2db1c6\x2d38e77edc4409.mount: Deactivated successfully.
Nov 25 08:59:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.744 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d0402a09-5c1d-4dec-b1c6-38e77edc4409 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 08:59:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 08:59:43.744 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[484aa584-aca0-4857-8de7-a4323086c4df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.895 253542 INFO nova.virt.libvirt.driver [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Deleting instance files /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f_del
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.896 253542 INFO nova.virt.libvirt.driver [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Deletion of /var/lib/nova/instances/6a6a3230-e005-48d6-b758-3cf5d4f9410f_del complete
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.973 253542 INFO nova.compute.manager [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Took 0.74 seconds to destroy the instance on the hypervisor.
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.974 253542 DEBUG oslo.service.loopingcall [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.974 253542 DEBUG nova.compute.manager [-] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 08:59:43 compute-0 nova_compute[253538]: 2025-11-25 08:59:43.975 253542 DEBUG nova.network.neutron [-] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 08:59:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2368: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 129 KiB/s rd, 421 KiB/s wr, 35 op/s
Nov 25 08:59:44 compute-0 nova_compute[253538]: 2025-11-25 08:59:44.747 253542 DEBUG nova.network.neutron [-] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:59:44 compute-0 nova_compute[253538]: 2025-11-25 08:59:44.762 253542 INFO nova.compute.manager [-] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Took 0.79 seconds to deallocate network for instance.
Nov 25 08:59:44 compute-0 nova_compute[253538]: 2025-11-25 08:59:44.822 253542 DEBUG nova.compute.manager [req-6ed2b8f3-9a8e-4f39-84c7-d3300739df06 req-05d55742-92e9-4815-97ba-1b4c8dbfd731 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received event network-vif-deleted-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:44 compute-0 nova_compute[253538]: 2025-11-25 08:59:44.840 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:44 compute-0 nova_compute[253538]: 2025-11-25 08:59:44.841 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:44 compute-0 sshd-session[387268]: Received disconnect from 45.78.222.2 port 53514:11: Bye Bye [preauth]
Nov 25 08:59:44 compute-0 sshd-session[387268]: Disconnected from authenticating user root 45.78.222.2 port 53514 [preauth]
Nov 25 08:59:44 compute-0 nova_compute[253538]: 2025-11-25 08:59:44.903 253542 DEBUG oslo_concurrency.processutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:44 compute-0 nova_compute[253538]: 2025-11-25 08:59:44.951 253542 DEBUG nova.network.neutron [req-57306cb8-b385-4596-8a43-0a21a2546fa9 req-627f9f0a-ab82-4924-b90c-75dfefd93278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Updated VIF entry in instance network info cache for port 0b7bd252-0c0e-43fd-b9ae-27d615ec9c29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 08:59:44 compute-0 nova_compute[253538]: 2025-11-25 08:59:44.953 253542 DEBUG nova.network.neutron [req-57306cb8-b385-4596-8a43-0a21a2546fa9 req-627f9f0a-ab82-4924-b90c-75dfefd93278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Updating instance_info_cache with network_info: [{"id": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "address": "fa:16:3e:61:6b:c3", "network": {"id": "d0402a09-5c1d-4dec-b1c6-38e77edc4409", "bridge": "br-int", "label": "tempest-network-smoke--803417347", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92faeb767e7a423586eaaf32661ce771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b7bd252-0c", "ovs_interfaceid": "0b7bd252-0c0e-43fd-b9ae-27d615ec9c29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.006 253542 DEBUG oslo_concurrency.lockutils [req-57306cb8-b385-4596-8a43-0a21a2546fa9 req-627f9f0a-ab82-4924-b90c-75dfefd93278 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-6a6a3230-e005-48d6-b758-3cf5d4f9410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 08:59:45 compute-0 ceph-mon[75015]: pgmap v2368: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 129 KiB/s rd, 421 KiB/s wr, 35 op/s
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.264 253542 DEBUG nova.compute.manager [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received event network-vif-unplugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.264 253542 DEBUG oslo_concurrency.lockutils [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.265 253542 DEBUG oslo_concurrency.lockutils [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.266 253542 DEBUG oslo_concurrency.lockutils [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.267 253542 DEBUG nova.compute.manager [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] No waiting events found dispatching network-vif-unplugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.267 253542 WARNING nova.compute.manager [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received unexpected event network-vif-unplugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 for instance with vm_state deleted and task_state None.
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.268 253542 DEBUG nova.compute.manager [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received event network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.269 253542 DEBUG oslo_concurrency.lockutils [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.269 253542 DEBUG oslo_concurrency.lockutils [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.270 253542 DEBUG oslo_concurrency.lockutils [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.270 253542 DEBUG nova.compute.manager [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] No waiting events found dispatching network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.271 253542 WARNING nova.compute.manager [req-2afd7512-4e8e-45f1-8753-f7345b3523d6 req-11e553d3-193f-4423-9a07-97bd47cf376b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Received unexpected event network-vif-plugged-0b7bd252-0c0e-43fd-b9ae-27d615ec9c29 for instance with vm_state deleted and task_state None.
Nov 25 08:59:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:59:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3456270002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.376 253542 DEBUG oslo_concurrency.processutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.384 253542 DEBUG nova.compute.provider_tree [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.399 253542 DEBUG nova.scheduler.client.report [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.424 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.518 253542 INFO nova.scheduler.client.report [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Deleted allocations for instance 6a6a3230-e005-48d6-b758-3cf5d4f9410f
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 08:59:45 compute-0 nova_compute[253538]: 2025-11-25 08:59:45.625 253542 DEBUG oslo_concurrency.lockutils [None req-6b06ef83-12c5-4ab5-91d5-9878a1809a2b 4211995133cc45db8e38c47f747fb092 92faeb767e7a423586eaaf32661ce771 - - default default] Lock "6a6a3230-e005-48d6-b758-3cf5d4f9410f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2369: 321 pgs: 321 active+clean; 149 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 20 KiB/s wr, 32 op/s
Nov 25 08:59:46 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3456270002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:59:46 compute-0 nova_compute[253538]: 2025-11-25 08:59:46.558 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:59:46 compute-0 nova_compute[253538]: 2025-11-25 08:59:46.668 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:47 compute-0 ceph-mon[75015]: pgmap v2369: 321 pgs: 321 active+clean; 149 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 20 KiB/s wr, 32 op/s
Nov 25 08:59:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:59:47 compute-0 podman[387391]: 2025-11-25 08:59:47.848249998 +0000 UTC m=+0.081249162 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 08:59:47 compute-0 podman[387390]: 2025-11-25 08:59:47.847907899 +0000 UTC m=+0.082654371 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 08:59:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2370: 321 pgs: 321 active+clean; 117 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 15 KiB/s wr, 16 op/s
Nov 25 08:59:48 compute-0 nova_compute[253538]: 2025-11-25 08:59:48.369 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:48 compute-0 ceph-mon[75015]: pgmap v2370: 321 pgs: 321 active+clean; 117 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 15 KiB/s wr, 16 op/s
Nov 25 08:59:48 compute-0 nova_compute[253538]: 2025-11-25 08:59:48.496 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:48 compute-0 nova_compute[253538]: 2025-11-25 08:59:48.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:59:48 compute-0 nova_compute[253538]: 2025-11-25 08:59:48.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:59:49 compute-0 nova_compute[253538]: 2025-11-25 08:59:49.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:59:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2371: 321 pgs: 321 active+clean; 88 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Nov 25 08:59:50 compute-0 nova_compute[253538]: 2025-11-25 08:59:50.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 08:59:50 compute-0 nova_compute[253538]: 2025-11-25 08:59:50.592 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:50 compute-0 nova_compute[253538]: 2025-11-25 08:59:50.593 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:50 compute-0 nova_compute[253538]: 2025-11-25 08:59:50.593 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:50 compute-0 nova_compute[253538]: 2025-11-25 08:59:50.593 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 08:59:50 compute-0 nova_compute[253538]: 2025-11-25 08:59:50.593 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:50 compute-0 sshd-session[387428]: Received disconnect from 45.202.211.6 port 57396:11: Bye Bye [preauth]
Nov 25 08:59:50 compute-0 sshd-session[387428]: Disconnected from authenticating user root 45.202.211.6 port 57396 [preauth]
Nov 25 08:59:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:59:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4110125167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:59:51 compute-0 nova_compute[253538]: 2025-11-25 08:59:51.081 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:51 compute-0 ceph-mon[75015]: pgmap v2371: 321 pgs: 321 active+clean; 88 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Nov 25 08:59:51 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4110125167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:59:51 compute-0 nova_compute[253538]: 2025-11-25 08:59:51.315 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 08:59:51 compute-0 nova_compute[253538]: 2025-11-25 08:59:51.317 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3737MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 08:59:51 compute-0 nova_compute[253538]: 2025-11-25 08:59:51.318 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:51 compute-0 nova_compute[253538]: 2025-11-25 08:59:51.318 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:51 compute-0 nova_compute[253538]: 2025-11-25 08:59:51.385 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 08:59:51 compute-0 nova_compute[253538]: 2025-11-25 08:59:51.386 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 08:59:51 compute-0 nova_compute[253538]: 2025-11-25 08:59:51.409 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:59:51 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3885284282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:59:51 compute-0 nova_compute[253538]: 2025-11-25 08:59:51.900 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:51 compute-0 nova_compute[253538]: 2025-11-25 08:59:51.905 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:59:51 compute-0 nova_compute[253538]: 2025-11-25 08:59:51.925 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:59:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2372: 321 pgs: 321 active+clean; 88 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Nov 25 08:59:52 compute-0 nova_compute[253538]: 2025-11-25 08:59:52.047 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:52 compute-0 nova_compute[253538]: 2025-11-25 08:59:52.099 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 08:59:52 compute-0 nova_compute[253538]: 2025-11-25 08:59:52.100 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:52 compute-0 nova_compute[253538]: 2025-11-25 08:59:52.216 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:52 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3885284282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:59:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:59:52 compute-0 nova_compute[253538]: 2025-11-25 08:59:52.678 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "4356e66d-96cf-4d55-bf3e-280638024374" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:52 compute-0 nova_compute[253538]: 2025-11-25 08:59:52.679 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:52 compute-0 nova_compute[253538]: 2025-11-25 08:59:52.707 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 08:59:52 compute-0 nova_compute[253538]: 2025-11-25 08:59:52.785 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:52 compute-0 nova_compute[253538]: 2025-11-25 08:59:52.785 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:52 compute-0 nova_compute[253538]: 2025-11-25 08:59:52.793 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 08:59:52 compute-0 nova_compute[253538]: 2025-11-25 08:59:52.793 253542 INFO nova.compute.claims [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Claim successful on node compute-0.ctlplane.example.com
Nov 25 08:59:52 compute-0 nova_compute[253538]: 2025-11-25 08:59:52.886 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:53 compute-0 ceph-mon[75015]: pgmap v2372: 321 pgs: 321 active+clean; 88 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Nov 25 08:59:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_08:59:53
Nov 25 08:59:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 08:59:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 08:59:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'default.rgw.log', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.control']
Nov 25 08:59:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 08:59:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 08:59:53 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4023664004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.398 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.409 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.419 253542 DEBUG nova.compute.provider_tree [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.441 253542 DEBUG nova.scheduler.client.report [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 08:59:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:59:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:59:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:59:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:59:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 08:59:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.480 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.481 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.499 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.538 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.539 253542 DEBUG nova.network.neutron [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.564 253542 INFO nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.589 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.861 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.863 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.863 253542 INFO nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Creating image(s)
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.892 253542 DEBUG nova.storage.rbd_utils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 4356e66d-96cf-4d55-bf3e-280638024374_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.924 253542 DEBUG nova.storage.rbd_utils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 4356e66d-96cf-4d55-bf3e-280638024374_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.958 253542 DEBUG nova.storage.rbd_utils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 4356e66d-96cf-4d55-bf3e-280638024374_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:59:53 compute-0 nova_compute[253538]: 2025-11-25 08:59:53.964 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2373: 321 pgs: 321 active+clean; 88 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Nov 25 08:59:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 08:59:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:59:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 08:59:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 08:59:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:59:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 08:59:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:59:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 08:59:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:59:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 08:59:54 compute-0 nova_compute[253538]: 2025-11-25 08:59:54.065 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:54 compute-0 nova_compute[253538]: 2025-11-25 08:59:54.067 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:54 compute-0 nova_compute[253538]: 2025-11-25 08:59:54.068 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:54 compute-0 nova_compute[253538]: 2025-11-25 08:59:54.069 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:54 compute-0 nova_compute[253538]: 2025-11-25 08:59:54.103 253542 DEBUG nova.storage.rbd_utils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 4356e66d-96cf-4d55-bf3e-280638024374_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 08:59:54 compute-0 nova_compute[253538]: 2025-11-25 08:59:54.108 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4356e66d-96cf-4d55-bf3e-280638024374_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 08:59:54 compute-0 nova_compute[253538]: 2025-11-25 08:59:54.154 253542 DEBUG nova.policy [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283b89dbe3284e8ea2019b797673108b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfffff2c57a442a59b202d368d49bf00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 08:59:54 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4023664004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 08:59:54 compute-0 nova_compute[253538]: 2025-11-25 08:59:54.424 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 4356e66d-96cf-4d55-bf3e-280638024374_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 08:59:54 compute-0 nova_compute[253538]: 2025-11-25 08:59:54.515 253542 DEBUG nova.storage.rbd_utils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] resizing rbd image 4356e66d-96cf-4d55-bf3e-280638024374_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 08:59:54 compute-0 nova_compute[253538]: 2025-11-25 08:59:54.624 253542 DEBUG nova.objects.instance [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'migration_context' on Instance uuid 4356e66d-96cf-4d55-bf3e-280638024374 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 08:59:54 compute-0 nova_compute[253538]: 2025-11-25 08:59:54.640 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 08:59:54 compute-0 nova_compute[253538]: 2025-11-25 08:59:54.640 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Ensure instance console log exists: /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 08:59:54 compute-0 nova_compute[253538]: 2025-11-25 08:59:54.641 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 08:59:54 compute-0 nova_compute[253538]: 2025-11-25 08:59:54.642 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 08:59:54 compute-0 nova_compute[253538]: 2025-11-25 08:59:54.642 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 08:59:54 compute-0 podman[387664]: 2025-11-25 08:59:54.918731648 +0000 UTC m=+0.163708057 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 08:59:55 compute-0 ceph-mon[75015]: pgmap v2373: 321 pgs: 321 active+clean; 88 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Nov 25 08:59:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2374: 321 pgs: 321 active+clean; 105 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 448 KiB/s wr, 25 op/s
Nov 25 08:59:56 compute-0 ceph-mon[75015]: pgmap v2374: 321 pgs: 321 active+clean; 105 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 448 KiB/s wr, 25 op/s
Nov 25 08:59:56 compute-0 nova_compute[253538]: 2025-11-25 08:59:56.374 253542 DEBUG nova.network.neutron [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Successfully created port: 6654b89e-a102-49f6-ad76-45e598fe2702 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 08:59:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 08:59:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2375: 321 pgs: 321 active+clean; 124 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.1 MiB/s wr, 18 op/s
Nov 25 08:59:58 compute-0 nova_compute[253538]: 2025-11-25 08:59:58.401 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:58 compute-0 nova_compute[253538]: 2025-11-25 08:59:58.474 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061183.4726167, 6a6a3230-e005-48d6-b758-3cf5d4f9410f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 08:59:58 compute-0 nova_compute[253538]: 2025-11-25 08:59:58.475 253542 INFO nova.compute.manager [-] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] VM Stopped (Lifecycle Event)
Nov 25 08:59:58 compute-0 nova_compute[253538]: 2025-11-25 08:59:58.498 253542 DEBUG nova.compute.manager [None req-397aa31d-6142-47fb-9d04-282272f7baf2 - - - - - -] [instance: 6a6a3230-e005-48d6-b758-3cf5d4f9410f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 08:59:58 compute-0 nova_compute[253538]: 2025-11-25 08:59:58.501 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 08:59:59 compute-0 ceph-mon[75015]: pgmap v2375: 321 pgs: 321 active+clean; 124 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.1 MiB/s wr, 18 op/s
Nov 25 08:59:59 compute-0 nova_compute[253538]: 2025-11-25 08:59:59.126 253542 DEBUG nova.network.neutron [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Successfully updated port: 6654b89e-a102-49f6-ad76-45e598fe2702 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 08:59:59 compute-0 nova_compute[253538]: 2025-11-25 08:59:59.209 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:59:59 compute-0 nova_compute[253538]: 2025-11-25 08:59:59.209 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquired lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 08:59:59 compute-0 nova_compute[253538]: 2025-11-25 08:59:59.209 253542 DEBUG nova.network.neutron [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 08:59:59 compute-0 nova_compute[253538]: 2025-11-25 08:59:59.422 253542 DEBUG nova.compute.manager [req-80c15d05-d7c5-4680-9c49-98089332aebc req-02ac9b35-9d27-4f9c-a7e6-41e709819804 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Received event network-changed-6654b89e-a102-49f6-ad76-45e598fe2702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 08:59:59 compute-0 nova_compute[253538]: 2025-11-25 08:59:59.423 253542 DEBUG nova.compute.manager [req-80c15d05-d7c5-4680-9c49-98089332aebc req-02ac9b35-9d27-4f9c-a7e6-41e709819804 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Refreshing instance network info cache due to event network-changed-6654b89e-a102-49f6-ad76-45e598fe2702. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 08:59:59 compute-0 nova_compute[253538]: 2025-11-25 08:59:59.423 253542 DEBUG oslo_concurrency.lockutils [req-80c15d05-d7c5-4680-9c49-98089332aebc req-02ac9b35-9d27-4f9c-a7e6-41e709819804 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 08:59:59 compute-0 nova_compute[253538]: 2025-11-25 08:59:59.508 253542 DEBUG nova.network.neutron [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 08:59:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2376: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.630 253542 DEBUG nova.network.neutron [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Updating instance_info_cache with network_info: [{"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.835 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Releasing lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.836 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Instance network_info: |[{"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.837 253542 DEBUG oslo_concurrency.lockutils [req-80c15d05-d7c5-4680-9c49-98089332aebc req-02ac9b35-9d27-4f9c-a7e6-41e709819804 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.837 253542 DEBUG nova.network.neutron [req-80c15d05-d7c5-4680-9c49-98089332aebc req-02ac9b35-9d27-4f9c-a7e6-41e709819804 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Refreshing network info cache for port 6654b89e-a102-49f6-ad76-45e598fe2702 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.842 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Start _get_guest_xml network_info=[{"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.849 253542 WARNING nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.864 253542 DEBUG nova.virt.libvirt.host [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.865 253542 DEBUG nova.virt.libvirt.host [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.870 253542 DEBUG nova.virt.libvirt.host [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.871 253542 DEBUG nova.virt.libvirt.host [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.872 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.872 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.874 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.874 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.875 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.875 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.876 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.876 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.877 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.878 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.878 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.879 253542 DEBUG nova.virt.hardware [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:00:00 compute-0 nova_compute[253538]: 2025-11-25 09:00:00.885 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:01 compute-0 ceph-mon[75015]: pgmap v2376: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 25 09:00:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:00:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/513629832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.370 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.393 253542 DEBUG nova.storage.rbd_utils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 4356e66d-96cf-4d55-bf3e-280638024374_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.397 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:00:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3022759584' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.828 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.831 253542 DEBUG nova.virt.libvirt.vif [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:59:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1963588795',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1963588795',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=128,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9GKDtc/iAkQHMT31cNhSUeZ5EWysDNnQ0GII5sebZcAX/FwcSvLZXUXWZQJzndY+3PoIINOvsAEaMLFDOLThu4z2CfTlWQWolUUIfZRA+bjYm4j7TZTDILpFaRJI4ZIA==',key_name='tempest-TestSecurityGroupsBasicOps-1928582476',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-tzlro08m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:59:53Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=4356e66d-96cf-4d55-bf3e-280638024374,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.831 253542 DEBUG nova.network.os_vif_util [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.833 253542 DEBUG nova.network.os_vif_util [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:12:97,bridge_name='br-int',has_traffic_filtering=True,id=6654b89e-a102-49f6-ad76-45e598fe2702,network=Network(0f656b36-7475-4baf-b321-d82280dade68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6654b89e-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.835 253542 DEBUG nova.objects.instance [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4356e66d-96cf-4d55-bf3e-280638024374 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.854 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:00:01 compute-0 nova_compute[253538]:   <uuid>4356e66d-96cf-4d55-bf3e-280638024374</uuid>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   <name>instance-00000080</name>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1963588795</nova:name>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:00:00</nova:creationTime>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:00:01 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:00:01 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:00:01 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:00:01 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:00:01 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:00:01 compute-0 nova_compute[253538]:         <nova:user uuid="283b89dbe3284e8ea2019b797673108b">tempest-TestSecurityGroupsBasicOps-1495447964-project-member</nova:user>
Nov 25 09:00:01 compute-0 nova_compute[253538]:         <nova:project uuid="cfffff2c57a442a59b202d368d49bf00">tempest-TestSecurityGroupsBasicOps-1495447964</nova:project>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:00:01 compute-0 nova_compute[253538]:         <nova:port uuid="6654b89e-a102-49f6-ad76-45e598fe2702">
Nov 25 09:00:01 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <system>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <entry name="serial">4356e66d-96cf-4d55-bf3e-280638024374</entry>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <entry name="uuid">4356e66d-96cf-4d55-bf3e-280638024374</entry>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     </system>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   <os>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   </os>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   <features>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   </features>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4356e66d-96cf-4d55-bf3e-280638024374_disk">
Nov 25 09:00:01 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       </source>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:00:01 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/4356e66d-96cf-4d55-bf3e-280638024374_disk.config">
Nov 25 09:00:01 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       </source>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:00:01 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:d9:12:97"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <target dev="tap6654b89e-a1"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374/console.log" append="off"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <video>
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     </video>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:00:01 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:00:01 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:00:01 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:00:01 compute-0 nova_compute[253538]: </domain>
Nov 25 09:00:01 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.857 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Preparing to wait for external event network-vif-plugged-6654b89e-a102-49f6-ad76-45e598fe2702 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.858 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "4356e66d-96cf-4d55-bf3e-280638024374-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.858 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.859 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.860 253542 DEBUG nova.virt.libvirt.vif [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T08:59:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1963588795',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1963588795',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=128,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9GKDtc/iAkQHMT31cNhSUeZ5EWysDNnQ0GII5sebZcAX/FwcSvLZXUXWZQJzndY+3PoIINOvsAEaMLFDOLThu4z2CfTlWQWolUUIfZRA+bjYm4j7TZTDILpFaRJI4ZIA==',key_name='tempest-TestSecurityGroupsBasicOps-1928582476',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-tzlro08m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T08:59:53Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=4356e66d-96cf-4d55-bf3e-280638024374,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.861 253542 DEBUG nova.network.os_vif_util [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.862 253542 DEBUG nova.network.os_vif_util [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:12:97,bridge_name='br-int',has_traffic_filtering=True,id=6654b89e-a102-49f6-ad76-45e598fe2702,network=Network(0f656b36-7475-4baf-b321-d82280dade68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6654b89e-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.863 253542 DEBUG os_vif [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:12:97,bridge_name='br-int',has_traffic_filtering=True,id=6654b89e-a102-49f6-ad76-45e598fe2702,network=Network(0f656b36-7475-4baf-b321-d82280dade68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6654b89e-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.864 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.865 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.866 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.871 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.871 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6654b89e-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.872 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6654b89e-a1, col_values=(('external_ids', {'iface-id': '6654b89e-a102-49f6-ad76-45e598fe2702', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:12:97', 'vm-uuid': '4356e66d-96cf-4d55-bf3e-280638024374'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.874 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:01 compute-0 NetworkManager[48915]: <info>  [1764061201.8760] manager: (tap6654b89e-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/538)
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.880 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.881 253542 INFO os_vif [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:12:97,bridge_name='br-int',has_traffic_filtering=True,id=6654b89e-a102-49f6-ad76-45e598fe2702,network=Network(0f656b36-7475-4baf-b321-d82280dade68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6654b89e-a1')
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.929 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.930 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.930 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No VIF found with MAC fa:16:3e:d9:12:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.930 253542 INFO nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Using config drive
Nov 25 09:00:01 compute-0 nova_compute[253538]: 2025-11-25 09:00:01.953 253542 DEBUG nova.storage.rbd_utils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 4356e66d-96cf-4d55-bf3e-280638024374_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:00:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2377: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:00:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/513629832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:00:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3022759584' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:00:02 compute-0 sudo[387773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:00:02 compute-0 sudo[387773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:02 compute-0 sudo[387773]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:02 compute-0 sudo[387798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:00:02 compute-0 sudo[387798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:02 compute-0 sudo[387798]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:02 compute-0 nova_compute[253538]: 2025-11-25 09:00:02.519 253542 INFO nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Creating config drive at /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374/disk.config
Nov 25 09:00:02 compute-0 nova_compute[253538]: 2025-11-25 09:00:02.523 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmu11xni_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:02 compute-0 sudo[387823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:00:02 compute-0 sudo[387823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:02 compute-0 sudo[387823]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:02 compute-0 sudo[387849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:00:02 compute-0 sudo[387849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:00:02 compute-0 nova_compute[253538]: 2025-11-25 09:00:02.662 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmu11xni_" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:02 compute-0 nova_compute[253538]: 2025-11-25 09:00:02.689 253542 DEBUG nova.storage.rbd_utils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 4356e66d-96cf-4d55-bf3e-280638024374_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:00:02 compute-0 nova_compute[253538]: 2025-11-25 09:00:02.695 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374/disk.config 4356e66d-96cf-4d55-bf3e-280638024374_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:02 compute-0 nova_compute[253538]: 2025-11-25 09:00:02.849 253542 DEBUG oslo_concurrency.processutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374/disk.config 4356e66d-96cf-4d55-bf3e-280638024374_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:02 compute-0 nova_compute[253538]: 2025-11-25 09:00:02.851 253542 INFO nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Deleting local config drive /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374/disk.config because it was imported into RBD.
Nov 25 09:00:02 compute-0 kernel: tap6654b89e-a1: entered promiscuous mode
Nov 25 09:00:02 compute-0 NetworkManager[48915]: <info>  [1764061202.9198] manager: (tap6654b89e-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/539)
Nov 25 09:00:02 compute-0 ovn_controller[152859]: 2025-11-25T09:00:02Z|01317|binding|INFO|Claiming lport 6654b89e-a102-49f6-ad76-45e598fe2702 for this chassis.
Nov 25 09:00:02 compute-0 ovn_controller[152859]: 2025-11-25T09:00:02Z|01318|binding|INFO|6654b89e-a102-49f6-ad76-45e598fe2702: Claiming fa:16:3e:d9:12:97 10.100.0.4
Nov 25 09:00:02 compute-0 nova_compute[253538]: 2025-11-25 09:00:02.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:02 compute-0 nova_compute[253538]: 2025-11-25 09:00:02.929 253542 DEBUG nova.network.neutron [req-80c15d05-d7c5-4680-9c49-98089332aebc req-02ac9b35-9d27-4f9c-a7e6-41e709819804 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Updated VIF entry in instance network info cache for port 6654b89e-a102-49f6-ad76-45e598fe2702. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:00:02 compute-0 nova_compute[253538]: 2025-11-25 09:00:02.930 253542 DEBUG nova.network.neutron [req-80c15d05-d7c5-4680-9c49-98089332aebc req-02ac9b35-9d27-4f9c-a7e6-41e709819804 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Updating instance_info_cache with network_info: [{"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:00:02 compute-0 nova_compute[253538]: 2025-11-25 09:00:02.950 253542 DEBUG oslo_concurrency.lockutils [req-80c15d05-d7c5-4680-9c49-98089332aebc req-02ac9b35-9d27-4f9c-a7e6-41e709819804 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:00:02 compute-0 systemd-udevd[387944]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:00:02 compute-0 systemd-machined[215790]: New machine qemu-158-instance-00000080.
Nov 25 09:00:02 compute-0 NetworkManager[48915]: <info>  [1764061202.9687] device (tap6654b89e-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:00:02 compute-0 NetworkManager[48915]: <info>  [1764061202.9704] device (tap6654b89e-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:00:02 compute-0 nova_compute[253538]: 2025-11-25 09:00:02.982 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:02 compute-0 nova_compute[253538]: 2025-11-25 09:00:02.987 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:02 compute-0 systemd[1]: Started Virtual Machine qemu-158-instance-00000080.
Nov 25 09:00:02 compute-0 ovn_controller[152859]: 2025-11-25T09:00:02Z|01319|binding|INFO|Setting lport 6654b89e-a102-49f6-ad76-45e598fe2702 ovn-installed in OVS
Nov 25 09:00:02 compute-0 nova_compute[253538]: 2025-11-25 09:00:02.991 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:03 compute-0 ovn_controller[152859]: 2025-11-25T09:00:03Z|01320|binding|INFO|Setting lport 6654b89e-a102-49f6-ad76-45e598fe2702 up in Southbound
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.022 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:12:97 10.100.0.4'], port_security=['fa:16:3e:d9:12:97 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4356e66d-96cf-4d55-bf3e-280638024374', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f656b36-7475-4baf-b321-d82280dade68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2c277aa6-98be-4b49-b9f4-357b27c34694 6ba9af76-9f60-4459-905f-068c6194f108', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51801c6e-6c82-485c-b7cb-963a30ef2813, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6654b89e-a102-49f6-ad76-45e598fe2702) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.023 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6654b89e-a102-49f6-ad76-45e598fe2702 in datapath 0f656b36-7475-4baf-b321-d82280dade68 bound to our chassis
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.023 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0f656b36-7475-4baf-b321-d82280dade68
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.034 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7be230f6-d24b-449f-a234-d81bf4883b02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.034 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0f656b36-71 in ovnmeta-0f656b36-7475-4baf-b321-d82280dade68 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.037 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0f656b36-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.037 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[87259df3-4a11-4702-a205-74bb2892dde1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.039 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff49599-07f2-4825-b37c-d346d91f0355]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:03 compute-0 sudo[387849]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.054 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[28f14582-3762-488e-b0dd-9b1d42a10f51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.077 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[69e1c098-bbec-48a0-a5cb-424f352adb0a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:00:03 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:00:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:00:03 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:00:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:00:03 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:00:03 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev b2b08cd8-0faa-4dd0-87d3-3385d66cb740 does not exist
Nov 25 09:00:03 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d54ced60-48aa-4d9d-943d-1c1e32e8dbd9 does not exist
Nov 25 09:00:03 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 20363c20-0fb1-48d8-b86c-da7fe1a25cab does not exist
Nov 25 09:00:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:00:03 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:00:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:00:03 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:00:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:00:03 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.109 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[06483a1d-b8c0-44f6-94a0-bc10f4f4504f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:03 compute-0 NetworkManager[48915]: <info>  [1764061203.1169] manager: (tap0f656b36-70): new Veth device (/org/freedesktop/NetworkManager/Devices/540)
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.115 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[221d3d2c-bb90-40a7-bc56-0beeae37220a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:03 compute-0 systemd-udevd[387946]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:00:03 compute-0 ceph-mon[75015]: pgmap v2377: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:00:03 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:00:03 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:00:03 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:00:03 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:00:03 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:00:03 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.152 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[efeaf913-c755-42a0-a21c-4d85565fa2af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.157 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[71d7303f-accb-412e-800a-cc9158b5a5a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:03 compute-0 sudo[387970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:00:03 compute-0 sudo[387970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:03 compute-0 NetworkManager[48915]: <info>  [1764061203.1920] device (tap0f656b36-70): carrier: link connected
Nov 25 09:00:03 compute-0 sudo[387970]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.200 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b676a1e4-68bf-4c8a-b2c9-df95253be650]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.223 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a414ef7a-1b8c-491a-ad39-2258e442b9ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f656b36-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:19:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655779, 'reachable_time': 38231, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388017, 'error': None, 'target': 'ovnmeta-0f656b36-7475-4baf-b321-d82280dade68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.239 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e7266bba-5508-439c-b154-656e8bdf4953]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:1969'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655779, 'tstamp': 655779}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388035, 'error': None, 'target': 'ovnmeta-0f656b36-7475-4baf-b321-d82280dade68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:03 compute-0 sudo[388014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:00:03 compute-0 sudo[388014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.258 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f0fffe4b-b588-45b1-b794-a314d491bfeb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f656b36-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:19:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655779, 'reachable_time': 38231, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 388039, 'error': None, 'target': 'ovnmeta-0f656b36-7475-4baf-b321-d82280dade68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:03 compute-0 sudo[388014]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.292 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[58503f1c-cd2b-4137-8de6-c73693c90eec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:03 compute-0 sudo[388043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:00:03 compute-0 sudo[388043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:03 compute-0 sudo[388043]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.359 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e55f1c6a-db03-4a6f-8b44-02185feedf2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.360 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f656b36-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.361 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.361 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f656b36-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.363 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:03 compute-0 NetworkManager[48915]: <info>  [1764061203.3639] manager: (tap0f656b36-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Nov 25 09:00:03 compute-0 kernel: tap0f656b36-70: entered promiscuous mode
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.366 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0f656b36-70, col_values=(('external_ids', {'iface-id': 'df5aa971-5155-42d7-8605-12093fd4412c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:00:03 compute-0 ovn_controller[152859]: 2025-11-25T09:00:03Z|01321|binding|INFO|Releasing lport df5aa971-5155-42d7-8605-12093fd4412c from this chassis (sb_readonly=0)
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.369 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.369 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0f656b36-7475-4baf-b321-d82280dade68.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0f656b36-7475-4baf-b321-d82280dade68.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.370 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a5b31e-72ef-4d79-8a72-33f352631694]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.371 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-0f656b36-7475-4baf-b321-d82280dade68
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/0f656b36-7475-4baf-b321-d82280dade68.pid.haproxy
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 0f656b36-7475-4baf-b321-d82280dade68
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.372 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0f656b36-7475-4baf-b321-d82280dade68', 'env', 'PROCESS_TAG=haproxy-0f656b36-7475-4baf-b321-d82280dade68', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0f656b36-7475-4baf-b321-d82280dade68.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.381 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:03 compute-0 sudo[388073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:00:03 compute-0 sudo[388073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.401 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.545 253542 DEBUG nova.compute.manager [req-bd0cdbc6-1c32-4a6a-8eaf-6ec45ffb43e0 req-0972e105-9b20-4682-b98e-e55871a818b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Received event network-vif-plugged-6654b89e-a102-49f6-ad76-45e598fe2702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.546 253542 DEBUG oslo_concurrency.lockutils [req-bd0cdbc6-1c32-4a6a-8eaf-6ec45ffb43e0 req-0972e105-9b20-4682-b98e-e55871a818b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4356e66d-96cf-4d55-bf3e-280638024374-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.546 253542 DEBUG oslo_concurrency.lockutils [req-bd0cdbc6-1c32-4a6a-8eaf-6ec45ffb43e0 req-0972e105-9b20-4682-b98e-e55871a818b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.546 253542 DEBUG oslo_concurrency.lockutils [req-bd0cdbc6-1c32-4a6a-8eaf-6ec45ffb43e0 req-0972e105-9b20-4682-b98e-e55871a818b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.546 253542 DEBUG nova.compute.manager [req-bd0cdbc6-1c32-4a6a-8eaf-6ec45ffb43e0 req-0972e105-9b20-4682-b98e-e55871a818b2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Processing event network-vif-plugged-6654b89e-a102-49f6-ad76-45e598fe2702 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.738 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.739 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061203.737384, 4356e66d-96cf-4d55-bf3e-280638024374 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.739 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] VM Started (Lifecycle Event)
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.749 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.755 253542 INFO nova.virt.libvirt.driver [-] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Instance spawned successfully.
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.756 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:00:03 compute-0 podman[388204]: 2025-11-25 09:00:03.771753986 +0000 UTC m=+0.058523204 container create af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 09:00:03 compute-0 podman[388210]: 2025-11-25 09:00:03.77963958 +0000 UTC m=+0.053774015 container create 37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.792 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.798 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.800 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.801 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.803 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.805 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.806 253542 DEBUG nova.virt.libvirt.driver [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.811 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:00:03 compute-0 systemd[1]: Started libpod-conmon-37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3.scope.
Nov 25 09:00:03 compute-0 systemd[1]: Started libpod-conmon-af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b.scope.
Nov 25 09:00:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:00:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d8292ddc888ebc952e3026b61ff6dd04593c40f1d79add281577b607917257e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:03 compute-0 podman[388204]: 2025-11-25 09:00:03.747633379 +0000 UTC m=+0.034402617 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:00:03 compute-0 podman[388210]: 2025-11-25 09:00:03.75064023 +0000 UTC m=+0.024774695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.849 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.850 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061203.7387707, 4356e66d-96cf-4d55-bf3e-280638024374 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.850 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] VM Paused (Lifecycle Event)
Nov 25 09:00:03 compute-0 podman[388204]: 2025-11-25 09:00:03.855591888 +0000 UTC m=+0.142361116 container init af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 09:00:03 compute-0 podman[388210]: 2025-11-25 09:00:03.859286508 +0000 UTC m=+0.133420963 container init 37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_snyder, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 09:00:03 compute-0 podman[388204]: 2025-11-25 09:00:03.862349121 +0000 UTC m=+0.149118349 container start af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:00:03 compute-0 podman[388210]: 2025-11-25 09:00:03.868133659 +0000 UTC m=+0.142268104 container start 37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_snyder, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.870 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:00:03 compute-0 silly_snyder[388233]: 167 167
Nov 25 09:00:03 compute-0 systemd[1]: libpod-37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3.scope: Deactivated successfully.
Nov 25 09:00:03 compute-0 podman[388210]: 2025-11-25 09:00:03.875301364 +0000 UTC m=+0.149435829 container attach 37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_snyder, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:00:03 compute-0 podman[388210]: 2025-11-25 09:00:03.875768277 +0000 UTC m=+0.149902742 container died 37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.877 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061203.7431898, 4356e66d-96cf-4d55-bf3e-280638024374 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.877 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] VM Resumed (Lifecycle Event)
Nov 25 09:00:03 compute-0 neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68[388234]: [NOTICE]   (388241) : New worker (388245) forked
Nov 25 09:00:03 compute-0 neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68[388234]: [NOTICE]   (388241) : Loading success.
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.893 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.896 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.899 253542 INFO nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Took 10.04 seconds to spawn the instance on the hypervisor.
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.899 253542 DEBUG nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:00:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-82d83fc8e248d99a3a95afad2b14db4cab22474e81f5eeb268b780faffe52862-merged.mount: Deactivated successfully.
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.911 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:00:03 compute-0 podman[388210]: 2025-11-25 09:00:03.926138888 +0000 UTC m=+0.200273333 container remove 37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_snyder, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 09:00:03 compute-0 systemd[1]: libpod-conmon-37a4d0eec340308ec5bbc2dc96ceeaa4cfe5973fc18144ad4b4f40e8d60abbb3.scope: Deactivated successfully.
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.980 253542 INFO nova.compute.manager [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Took 11.22 seconds to build instance.
Nov 25 09:00:03 compute-0 nova_compute[253538]: 2025-11-25 09:00:03.982 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.982 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:00:03 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:03.982 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:00:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2378: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 25 09:00:04 compute-0 nova_compute[253538]: 2025-11-25 09:00:04.004 253542 DEBUG oslo_concurrency.lockutils [None req-60e6232e-31b9-4174-8bd5-e4a5c4c8cf0f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:04 compute-0 podman[388271]: 2025-11-25 09:00:04.099091225 +0000 UTC m=+0.045814478 container create 94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:00:04 compute-0 systemd[1]: Started libpod-conmon-94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717.scope.
Nov 25 09:00:04 compute-0 podman[388271]: 2025-11-25 09:00:04.078235387 +0000 UTC m=+0.024958670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:00:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6e3f950babd04fbd78bccac18244223e2dda3257c715509cc225dff4adee3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6e3f950babd04fbd78bccac18244223e2dda3257c715509cc225dff4adee3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6e3f950babd04fbd78bccac18244223e2dda3257c715509cc225dff4adee3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6e3f950babd04fbd78bccac18244223e2dda3257c715509cc225dff4adee3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6e3f950babd04fbd78bccac18244223e2dda3257c715509cc225dff4adee3f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:04 compute-0 podman[388271]: 2025-11-25 09:00:04.205342598 +0000 UTC m=+0.152065871 container init 94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 25 09:00:04 compute-0 podman[388271]: 2025-11-25 09:00:04.213741836 +0000 UTC m=+0.160465089 container start 94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_khayyam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:00:04 compute-0 podman[388271]: 2025-11-25 09:00:04.216917052 +0000 UTC m=+0.163640365 container attach 94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:00:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:00:04 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:04.985 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:00:05 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:00:05 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.5 total, 600.0 interval
                                           Cumulative writes: 38K writes, 155K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.87 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5228 writes, 21K keys, 5228 commit groups, 1.0 writes per commit group, ingest: 24.78 MB, 0.04 MB/s
                                           Interval WAL: 5228 writes, 1957 syncs, 2.67 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:00:05 compute-0 ceph-mon[75015]: pgmap v2378: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 25 09:00:05 compute-0 busy_khayyam[388287]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:00:05 compute-0 busy_khayyam[388287]: --> relative data size: 1.0
Nov 25 09:00:05 compute-0 busy_khayyam[388287]: --> All data devices are unavailable
Nov 25 09:00:05 compute-0 systemd[1]: libpod-94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717.scope: Deactivated successfully.
Nov 25 09:00:05 compute-0 systemd[1]: libpod-94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717.scope: Consumed 1.073s CPU time.
Nov 25 09:00:05 compute-0 conmon[388287]: conmon 94991b75ca74a20f2ee6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717.scope/container/memory.events
Nov 25 09:00:05 compute-0 podman[388271]: 2025-11-25 09:00:05.392191181 +0000 UTC m=+1.338914464 container died 94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_khayyam, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:00:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d6e3f950babd04fbd78bccac18244223e2dda3257c715509cc225dff4adee3f-merged.mount: Deactivated successfully.
Nov 25 09:00:05 compute-0 podman[388271]: 2025-11-25 09:00:05.468476538 +0000 UTC m=+1.415199791 container remove 94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_khayyam, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 09:00:05 compute-0 systemd[1]: libpod-conmon-94991b75ca74a20f2ee63c99497805c113841cfa25b0a26e7be71fe01ecb2717.scope: Deactivated successfully.
Nov 25 09:00:05 compute-0 sudo[388073]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:05 compute-0 sudo[388330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:00:05 compute-0 sudo[388330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:05 compute-0 sudo[388330]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:05 compute-0 sudo[388355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:00:05 compute-0 sudo[388355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:05 compute-0 sudo[388355]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:05 compute-0 nova_compute[253538]: 2025-11-25 09:00:05.677 253542 DEBUG nova.compute.manager [req-26c89a1c-a3a5-4f98-a2b3-fa679abdfb19 req-380ba652-07f4-4522-b955-5e990f383b5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Received event network-vif-plugged-6654b89e-a102-49f6-ad76-45e598fe2702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:00:05 compute-0 nova_compute[253538]: 2025-11-25 09:00:05.681 253542 DEBUG oslo_concurrency.lockutils [req-26c89a1c-a3a5-4f98-a2b3-fa679abdfb19 req-380ba652-07f4-4522-b955-5e990f383b5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "4356e66d-96cf-4d55-bf3e-280638024374-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:05 compute-0 nova_compute[253538]: 2025-11-25 09:00:05.681 253542 DEBUG oslo_concurrency.lockutils [req-26c89a1c-a3a5-4f98-a2b3-fa679abdfb19 req-380ba652-07f4-4522-b955-5e990f383b5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:05 compute-0 nova_compute[253538]: 2025-11-25 09:00:05.682 253542 DEBUG oslo_concurrency.lockutils [req-26c89a1c-a3a5-4f98-a2b3-fa679abdfb19 req-380ba652-07f4-4522-b955-5e990f383b5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:05 compute-0 nova_compute[253538]: 2025-11-25 09:00:05.682 253542 DEBUG nova.compute.manager [req-26c89a1c-a3a5-4f98-a2b3-fa679abdfb19 req-380ba652-07f4-4522-b955-5e990f383b5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] No waiting events found dispatching network-vif-plugged-6654b89e-a102-49f6-ad76-45e598fe2702 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:00:05 compute-0 nova_compute[253538]: 2025-11-25 09:00:05.682 253542 WARNING nova.compute.manager [req-26c89a1c-a3a5-4f98-a2b3-fa679abdfb19 req-380ba652-07f4-4522-b955-5e990f383b5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Received unexpected event network-vif-plugged-6654b89e-a102-49f6-ad76-45e598fe2702 for instance with vm_state active and task_state None.
Nov 25 09:00:05 compute-0 sudo[388380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:00:05 compute-0 sudo[388380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:05 compute-0 sudo[388380]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:05 compute-0 sudo[388405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:00:05 compute-0 sudo[388405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2379: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 09:00:06 compute-0 podman[388470]: 2025-11-25 09:00:06.179922483 +0000 UTC m=+0.063188181 container create 888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kalam, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 09:00:06 compute-0 systemd[1]: Started libpod-conmon-888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e.scope.
Nov 25 09:00:06 compute-0 podman[388470]: 2025-11-25 09:00:06.148267721 +0000 UTC m=+0.031533459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:00:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:00:06 compute-0 podman[388470]: 2025-11-25 09:00:06.282194226 +0000 UTC m=+0.165459954 container init 888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kalam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 25 09:00:06 compute-0 podman[388470]: 2025-11-25 09:00:06.291398106 +0000 UTC m=+0.174663834 container start 888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 09:00:06 compute-0 podman[388470]: 2025-11-25 09:00:06.296272019 +0000 UTC m=+0.179537737 container attach 888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kalam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:00:06 compute-0 dazzling_kalam[388485]: 167 167
Nov 25 09:00:06 compute-0 podman[388470]: 2025-11-25 09:00:06.302004755 +0000 UTC m=+0.185270453 container died 888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:00:06 compute-0 systemd[1]: libpod-888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e.scope: Deactivated successfully.
Nov 25 09:00:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3917e53c00a5c9288edc47b4371273c781e1a5178d9f8832d784bf2b9fd0bc55-merged.mount: Deactivated successfully.
Nov 25 09:00:06 compute-0 podman[388470]: 2025-11-25 09:00:06.34038984 +0000 UTC m=+0.223655538 container remove 888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kalam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 09:00:06 compute-0 systemd[1]: libpod-conmon-888023c7b37ad0a691525109a524ee9247def5f5e6609232d827b83c6a76179e.scope: Deactivated successfully.
Nov 25 09:00:06 compute-0 podman[388509]: 2025-11-25 09:00:06.569469796 +0000 UTC m=+0.064266260 container create ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:00:06 compute-0 systemd[1]: Started libpod-conmon-ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1.scope.
Nov 25 09:00:06 compute-0 podman[388509]: 2025-11-25 09:00:06.54101176 +0000 UTC m=+0.035808314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:00:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad42156624ae0e34acf7740f9185fda1939cd23b52323b368b362b59c0d4553/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad42156624ae0e34acf7740f9185fda1939cd23b52323b368b362b59c0d4553/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad42156624ae0e34acf7740f9185fda1939cd23b52323b368b362b59c0d4553/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad42156624ae0e34acf7740f9185fda1939cd23b52323b368b362b59c0d4553/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:06 compute-0 podman[388509]: 2025-11-25 09:00:06.69305917 +0000 UTC m=+0.187855674 container init ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:00:06 compute-0 podman[388509]: 2025-11-25 09:00:06.700235614 +0000 UTC m=+0.195032118 container start ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 09:00:06 compute-0 podman[388509]: 2025-11-25 09:00:06.703827833 +0000 UTC m=+0.198624337 container attach ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 09:00:06 compute-0 nova_compute[253538]: 2025-11-25 09:00:06.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:07 compute-0 ceph-mon[75015]: pgmap v2379: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 09:00:07 compute-0 gallant_germain[388525]: {
Nov 25 09:00:07 compute-0 gallant_germain[388525]:     "0": [
Nov 25 09:00:07 compute-0 gallant_germain[388525]:         {
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "devices": [
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "/dev/loop3"
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             ],
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "lv_name": "ceph_lv0",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "lv_size": "21470642176",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "name": "ceph_lv0",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "tags": {
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.cluster_name": "ceph",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.crush_device_class": "",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.encrypted": "0",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.osd_id": "0",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.type": "block",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.vdo": "0"
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             },
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "type": "block",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "vg_name": "ceph_vg0"
Nov 25 09:00:07 compute-0 gallant_germain[388525]:         }
Nov 25 09:00:07 compute-0 gallant_germain[388525]:     ],
Nov 25 09:00:07 compute-0 gallant_germain[388525]:     "1": [
Nov 25 09:00:07 compute-0 gallant_germain[388525]:         {
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "devices": [
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "/dev/loop4"
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             ],
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "lv_name": "ceph_lv1",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "lv_size": "21470642176",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "name": "ceph_lv1",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "tags": {
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.cluster_name": "ceph",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.crush_device_class": "",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.encrypted": "0",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.osd_id": "1",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.type": "block",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.vdo": "0"
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             },
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "type": "block",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "vg_name": "ceph_vg1"
Nov 25 09:00:07 compute-0 gallant_germain[388525]:         }
Nov 25 09:00:07 compute-0 gallant_germain[388525]:     ],
Nov 25 09:00:07 compute-0 gallant_germain[388525]:     "2": [
Nov 25 09:00:07 compute-0 gallant_germain[388525]:         {
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "devices": [
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "/dev/loop5"
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             ],
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "lv_name": "ceph_lv2",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "lv_size": "21470642176",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "name": "ceph_lv2",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "tags": {
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.cluster_name": "ceph",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.crush_device_class": "",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.encrypted": "0",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.osd_id": "2",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.type": "block",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:                 "ceph.vdo": "0"
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             },
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "type": "block",
Nov 25 09:00:07 compute-0 gallant_germain[388525]:             "vg_name": "ceph_vg2"
Nov 25 09:00:07 compute-0 gallant_germain[388525]:         }
Nov 25 09:00:07 compute-0 gallant_germain[388525]:     ]
Nov 25 09:00:07 compute-0 gallant_germain[388525]: }
Nov 25 09:00:07 compute-0 systemd[1]: libpod-ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1.scope: Deactivated successfully.
Nov 25 09:00:07 compute-0 conmon[388525]: conmon ab7ba2a1893dfddaece4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1.scope/container/memory.events
Nov 25 09:00:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:00:07 compute-0 podman[388534]: 2025-11-25 09:00:07.652275278 +0000 UTC m=+0.025784653 container died ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:00:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ad42156624ae0e34acf7740f9185fda1939cd23b52323b368b362b59c0d4553-merged.mount: Deactivated successfully.
Nov 25 09:00:07 compute-0 podman[388534]: 2025-11-25 09:00:07.722803367 +0000 UTC m=+0.096312732 container remove ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 09:00:07 compute-0 systemd[1]: libpod-conmon-ab7ba2a1893dfddaece4d8187e98c5c45a5efe2412f0ffca3ba3048a0b31f1f1.scope: Deactivated successfully.
Nov 25 09:00:07 compute-0 sudo[388405]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:07 compute-0 sudo[388549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:00:07 compute-0 sudo[388549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:07 compute-0 sudo[388549]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:07 compute-0 sudo[388574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:00:07 compute-0 sudo[388574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:07 compute-0 sudo[388574]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2380: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 952 KiB/s rd, 1.3 MiB/s wr, 65 op/s
Nov 25 09:00:08 compute-0 sudo[388599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:00:08 compute-0 sudo[388599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:08 compute-0 sudo[388599]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:08 compute-0 sudo[388624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:00:08 compute-0 sudo[388624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:08 compute-0 nova_compute[253538]: 2025-11-25 09:00:08.403 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:08 compute-0 podman[388687]: 2025-11-25 09:00:08.500463924 +0000 UTC m=+0.038758086 container create 0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:00:08 compute-0 podman[388687]: 2025-11-25 09:00:08.481879778 +0000 UTC m=+0.020173970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:00:08 compute-0 systemd[1]: Started libpod-conmon-0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24.scope.
Nov 25 09:00:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:00:08 compute-0 podman[388687]: 2025-11-25 09:00:08.663263055 +0000 UTC m=+0.201557247 container init 0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 09:00:08 compute-0 podman[388687]: 2025-11-25 09:00:08.67039499 +0000 UTC m=+0.208689182 container start 0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 09:00:08 compute-0 podman[388687]: 2025-11-25 09:00:08.674619994 +0000 UTC m=+0.212914246 container attach 0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:00:08 compute-0 lucid_goldstine[388703]: 167 167
Nov 25 09:00:08 compute-0 systemd[1]: libpod-0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24.scope: Deactivated successfully.
Nov 25 09:00:08 compute-0 podman[388687]: 2025-11-25 09:00:08.67665462 +0000 UTC m=+0.214948812 container died 0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 25 09:00:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-aebb5199e19e87493e23dcb2ae71ca0e60cc5ffaa78e9ed38571dae04b4a3f34-merged.mount: Deactivated successfully.
Nov 25 09:00:08 compute-0 podman[388687]: 2025-11-25 09:00:08.813231658 +0000 UTC m=+0.351525820 container remove 0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 09:00:08 compute-0 systemd[1]: libpod-conmon-0003edf7adecc06df7dfc8a461f36ee02bfe1a2a18b1e3979159145e2118cc24.scope: Deactivated successfully.
Nov 25 09:00:09 compute-0 podman[388728]: 2025-11-25 09:00:09.003155216 +0000 UTC m=+0.039493675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:00:09 compute-0 podman[388728]: 2025-11-25 09:00:09.1256238 +0000 UTC m=+0.161962239 container create 717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 25 09:00:09 compute-0 ceph-mon[75015]: pgmap v2380: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 952 KiB/s rd, 1.3 MiB/s wr, 65 op/s
Nov 25 09:00:09 compute-0 systemd[1]: Started libpod-conmon-717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c.scope.
Nov 25 09:00:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0034c6b949637d6ebcb0c46a92ce5440c36bbb54237d477ed9b0aba177eccd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0034c6b949637d6ebcb0c46a92ce5440c36bbb54237d477ed9b0aba177eccd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0034c6b949637d6ebcb0c46a92ce5440c36bbb54237d477ed9b0aba177eccd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0034c6b949637d6ebcb0c46a92ce5440c36bbb54237d477ed9b0aba177eccd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:09 compute-0 podman[388728]: 2025-11-25 09:00:09.254097147 +0000 UTC m=+0.290435576 container init 717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 09:00:09 compute-0 podman[388728]: 2025-11-25 09:00:09.263836042 +0000 UTC m=+0.300174461 container start 717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:00:09 compute-0 podman[388728]: 2025-11-25 09:00:09.267390599 +0000 UTC m=+0.303729028 container attach 717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:00:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2381: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 744 KiB/s wr, 96 op/s
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]: {
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:         "osd_id": 1,
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:         "type": "bluestore"
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:     },
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:         "osd_id": 2,
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:         "type": "bluestore"
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:     },
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:         "osd_id": 0,
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:         "type": "bluestore"
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]:     }
Nov 25 09:00:10 compute-0 pensive_brahmagupta[388745]: }
Nov 25 09:00:10 compute-0 systemd[1]: libpod-717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c.scope: Deactivated successfully.
Nov 25 09:00:10 compute-0 podman[388728]: 2025-11-25 09:00:10.339941162 +0000 UTC m=+1.376279611 container died 717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:00:10 compute-0 systemd[1]: libpod-717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c.scope: Consumed 1.081s CPU time.
Nov 25 09:00:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0034c6b949637d6ebcb0c46a92ce5440c36bbb54237d477ed9b0aba177eccd2-merged.mount: Deactivated successfully.
Nov 25 09:00:10 compute-0 podman[388728]: 2025-11-25 09:00:10.397165221 +0000 UTC m=+1.433503620 container remove 717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:00:10 compute-0 sudo[388624]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:10 compute-0 systemd[1]: libpod-conmon-717084890ad6dd7820e408c9ff446cbc3713e5fce582334a76eec8b6a230636c.scope: Deactivated successfully.
Nov 25 09:00:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:00:10 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:00:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:00:10 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:00:10 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev be8c2088-eff7-4875-a893-0c8320f791ca does not exist
Nov 25 09:00:10 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 9e5a642c-52e7-42a4-87c9-84d432b7c6aa does not exist
Nov 25 09:00:10 compute-0 sudo[388790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:00:10 compute-0 sudo[388790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:10 compute-0 sudo[388790]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:10 compute-0 sudo[388815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:00:10 compute-0 sudo[388815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:00:10 compute-0 sudo[388815]: pam_unix(sudo:session): session closed for user root
Nov 25 09:00:11 compute-0 NetworkManager[48915]: <info>  [1764061211.1221] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Nov 25 09:00:11 compute-0 NetworkManager[48915]: <info>  [1764061211.1234] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/543)
Nov 25 09:00:11 compute-0 ovn_controller[152859]: 2025-11-25T09:00:11Z|01322|binding|INFO|Releasing lport df5aa971-5155-42d7-8605-12093fd4412c from this chassis (sb_readonly=0)
Nov 25 09:00:11 compute-0 nova_compute[253538]: 2025-11-25 09:00:11.126 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:11 compute-0 ceph-mon[75015]: pgmap v2381: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 744 KiB/s wr, 96 op/s
Nov 25 09:00:11 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:00:11 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:00:11 compute-0 ovn_controller[152859]: 2025-11-25T09:00:11Z|01323|binding|INFO|Releasing lport df5aa971-5155-42d7-8605-12093fd4412c from this chassis (sb_readonly=0)
Nov 25 09:00:11 compute-0 nova_compute[253538]: 2025-11-25 09:00:11.191 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:11 compute-0 nova_compute[253538]: 2025-11-25 09:00:11.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:11 compute-0 sshd-session[388840]: Invalid user hadoop from 193.32.162.151 port 59890
Nov 25 09:00:11 compute-0 sshd-session[388840]: Connection closed by invalid user hadoop 193.32.162.151 port 59890 [preauth]
Nov 25 09:00:11 compute-0 nova_compute[253538]: 2025-11-25 09:00:11.572 253542 DEBUG nova.compute.manager [req-343d4938-577a-4c4e-96b9-84f3a3705c2c req-95053db3-e2dc-4b46-9f82-71f3c8b6880a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Received event network-changed-6654b89e-a102-49f6-ad76-45e598fe2702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:00:11 compute-0 nova_compute[253538]: 2025-11-25 09:00:11.572 253542 DEBUG nova.compute.manager [req-343d4938-577a-4c4e-96b9-84f3a3705c2c req-95053db3-e2dc-4b46-9f82-71f3c8b6880a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Refreshing instance network info cache due to event network-changed-6654b89e-a102-49f6-ad76-45e598fe2702. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:00:11 compute-0 nova_compute[253538]: 2025-11-25 09:00:11.573 253542 DEBUG oslo_concurrency.lockutils [req-343d4938-577a-4c4e-96b9-84f3a3705c2c req-95053db3-e2dc-4b46-9f82-71f3c8b6880a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:00:11 compute-0 nova_compute[253538]: 2025-11-25 09:00:11.573 253542 DEBUG oslo_concurrency.lockutils [req-343d4938-577a-4c4e-96b9-84f3a3705c2c req-95053db3-e2dc-4b46-9f82-71f3c8b6880a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:00:11 compute-0 nova_compute[253538]: 2025-11-25 09:00:11.573 253542 DEBUG nova.network.neutron [req-343d4938-577a-4c4e-96b9-84f3a3705c2c req-95053db3-e2dc-4b46-9f82-71f3c8b6880a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Refreshing network info cache for port 6654b89e-a102-49f6-ad76-45e598fe2702 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:00:11 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:00:11 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.4 total, 600.0 interval
                                           Cumulative writes: 38K writes, 149K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.83 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6047 writes, 23K keys, 6047 commit groups, 1.0 writes per commit group, ingest: 24.53 MB, 0.04 MB/s
                                           Interval WAL: 6047 writes, 2362 syncs, 2.56 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:00:11 compute-0 nova_compute[253538]: 2025-11-25 09:00:11.886 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2382: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:00:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:00:13 compute-0 ceph-mon[75015]: pgmap v2382: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:00:13 compute-0 nova_compute[253538]: 2025-11-25 09:00:13.404 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:13 compute-0 nova_compute[253538]: 2025-11-25 09:00:13.520 253542 DEBUG nova.network.neutron [req-343d4938-577a-4c4e-96b9-84f3a3705c2c req-95053db3-e2dc-4b46-9f82-71f3c8b6880a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Updated VIF entry in instance network info cache for port 6654b89e-a102-49f6-ad76-45e598fe2702. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:00:13 compute-0 nova_compute[253538]: 2025-11-25 09:00:13.520 253542 DEBUG nova.network.neutron [req-343d4938-577a-4c4e-96b9-84f3a3705c2c req-95053db3-e2dc-4b46-9f82-71f3c8b6880a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Updating instance_info_cache with network_info: [{"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:00:13 compute-0 nova_compute[253538]: 2025-11-25 09:00:13.554 253542 DEBUG oslo_concurrency.lockutils [req-343d4938-577a-4c4e-96b9-84f3a3705c2c req-95053db3-e2dc-4b46-9f82-71f3c8b6880a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:00:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2383: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:00:15 compute-0 ceph-mon[75015]: pgmap v2383: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:00:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2384: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 25 09:00:16 compute-0 ceph-mon[75015]: pgmap v2384: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 25 09:00:16 compute-0 nova_compute[253538]: 2025-11-25 09:00:16.641 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:16 compute-0 nova_compute[253538]: 2025-11-25 09:00:16.887 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:00:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2385: 321 pgs: 321 active+clean; 140 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 794 KiB/s wr, 87 op/s
Nov 25 09:00:18 compute-0 ovn_controller[152859]: 2025-11-25T09:00:18Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:12:97 10.100.0.4
Nov 25 09:00:18 compute-0 ovn_controller[152859]: 2025-11-25T09:00:18Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:12:97 10.100.0.4
Nov 25 09:00:18 compute-0 nova_compute[253538]: 2025-11-25 09:00:18.405 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:18 compute-0 podman[388847]: 2025-11-25 09:00:18.820533036 +0000 UTC m=+0.068637110 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 09:00:18 compute-0 podman[388846]: 2025-11-25 09:00:18.829783968 +0000 UTC m=+0.078006595 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:00:19 compute-0 ceph-mon[75015]: pgmap v2385: 321 pgs: 321 active+clean; 140 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 794 KiB/s wr, 87 op/s
Nov 25 09:00:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2386: 321 pgs: 321 active+clean; 163 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 89 op/s
Nov 25 09:00:21 compute-0 ceph-mon[75015]: pgmap v2386: 321 pgs: 321 active+clean; 163 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 89 op/s
Nov 25 09:00:21 compute-0 nova_compute[253538]: 2025-11-25 09:00:21.660 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:21 compute-0 nova_compute[253538]: 2025-11-25 09:00:21.888 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:00:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4202.4 total, 600.0 interval
                                           Cumulative writes: 30K writes, 120K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 30K writes, 10K syncs, 2.88 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4235 writes, 17K keys, 4235 commit groups, 1.0 writes per commit group, ingest: 21.75 MB, 0.04 MB/s
                                           Interval WAL: 4235 writes, 1652 syncs, 2.56 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:00:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2387: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:00:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:00:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:23.232 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:81:3e 10.100.0.2 2001:db8::f816:3eff:fee5:813e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee5:813e/64', 'neutron:device_id': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566c4500-0375-4680-b110-24535007c05e, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=25e4a85d-5a04-4d07-a006-66576a20c294) old=Port_Binding(mac=['fa:16:3e:e5:81:3e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:00:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:23.233 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 25e4a85d-5a04-4d07-a006-66576a20c294 in datapath 6c2834b5-0444-432c-8da4-c0b4f4aabc4d updated
Nov 25 09:00:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:23.234 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c2834b5-0444-432c-8da4-c0b4f4aabc4d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:00:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:00:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:00:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:00:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:00:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:00:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:00:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:23.570 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[45ff31ef-9bc5-409e-a7df-ace22e0e812d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:23 compute-0 nova_compute[253538]: 2025-11-25 09:00:23.570 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:23 compute-0 ceph-mon[75015]: pgmap v2387: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:00:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2388: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:00:24 compute-0 ceph-mon[75015]: pgmap v2388: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:00:25 compute-0 podman[388884]: 2025-11-25 09:00:25.837839809 +0000 UTC m=+0.089572449 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:00:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2389: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:00:26 compute-0 nova_compute[253538]: 2025-11-25 09:00:26.891 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:27 compute-0 ceph-mon[75015]: pgmap v2389: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:00:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:00:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2390: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:00:28 compute-0 ceph-mon[75015]: pgmap v2390: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:00:28 compute-0 nova_compute[253538]: 2025-11-25 09:00:28.572 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:00:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/941402795' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:00:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:00:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/941402795' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:00:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/941402795' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:00:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/941402795' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:00:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2391: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 1.4 MiB/s wr, 43 op/s
Nov 25 09:00:30 compute-0 ceph-mon[75015]: pgmap v2391: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 1.4 MiB/s wr, 43 op/s
Nov 25 09:00:30 compute-0 nova_compute[253538]: 2025-11-25 09:00:30.798 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:30 compute-0 nova_compute[253538]: 2025-11-25 09:00:30.798 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:30 compute-0 nova_compute[253538]: 2025-11-25 09:00:30.822 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:00:30 compute-0 nova_compute[253538]: 2025-11-25 09:00:30.928 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:30 compute-0 nova_compute[253538]: 2025-11-25 09:00:30.929 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:30 compute-0 nova_compute[253538]: 2025-11-25 09:00:30.942 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:00:30 compute-0 nova_compute[253538]: 2025-11-25 09:00:30.943 253542 INFO nova.compute.claims [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.074 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.396 253542 DEBUG nova.compute.manager [req-326e1da0-2b81-4ae4-bf59-a35fdf9677d7 req-b9a11942-29d2-427e-b742-60227efdcdb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Received event network-changed-6654b89e-a102-49f6-ad76-45e598fe2702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.397 253542 DEBUG nova.compute.manager [req-326e1da0-2b81-4ae4-bf59-a35fdf9677d7 req-b9a11942-29d2-427e-b742-60227efdcdb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Refreshing instance network info cache due to event network-changed-6654b89e-a102-49f6-ad76-45e598fe2702. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.398 253542 DEBUG oslo_concurrency.lockutils [req-326e1da0-2b81-4ae4-bf59-a35fdf9677d7 req-b9a11942-29d2-427e-b742-60227efdcdb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.398 253542 DEBUG oslo_concurrency.lockutils [req-326e1da0-2b81-4ae4-bf59-a35fdf9677d7 req-b9a11942-29d2-427e-b742-60227efdcdb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.399 253542 DEBUG nova.network.neutron [req-326e1da0-2b81-4ae4-bf59-a35fdf9677d7 req-b9a11942-29d2-427e-b742-60227efdcdb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Refreshing network info cache for port 6654b89e-a102-49f6-ad76-45e598fe2702 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:00:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:00:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3975728011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.515 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.529 253542 DEBUG nova.compute.provider_tree [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.541 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "4356e66d-96cf-4d55-bf3e-280638024374" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.541 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.541 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "4356e66d-96cf-4d55-bf3e-280638024374-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.542 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.542 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.543 253542 INFO nova.compute.manager [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Terminating instance
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.544 253542 DEBUG nova.compute.manager [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.545 253542 DEBUG nova.scheduler.client.report [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.566 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.566 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.606 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.606 253542 DEBUG nova.network.neutron [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.621 253542 INFO nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.636 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.719 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.720 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.720 253542 INFO nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Creating image(s)
Nov 25 09:00:31 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3975728011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.853 253542 DEBUG nova.storage.rbd_utils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f05e074d-5838-4c4b-89dc-76afe386f635_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.881 253542 DEBUG nova.storage.rbd_utils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f05e074d-5838-4c4b-89dc-76afe386f635_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.963 253542 DEBUG nova.storage.rbd_utils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f05e074d-5838-4c4b-89dc-76afe386f635_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:00:31 compute-0 nova_compute[253538]: 2025-11-25 09:00:31.967 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:31 compute-0 kernel: tap6654b89e-a1 (unregistering): left promiscuous mode
Nov 25 09:00:31 compute-0 NetworkManager[48915]: <info>  [1764061231.9848] device (tap6654b89e-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:00:31 compute-0 ovn_controller[152859]: 2025-11-25T09:00:31Z|01324|binding|INFO|Releasing lport 6654b89e-a102-49f6-ad76-45e598fe2702 from this chassis (sb_readonly=0)
Nov 25 09:00:31 compute-0 ovn_controller[152859]: 2025-11-25T09:00:31Z|01325|binding|INFO|Setting lport 6654b89e-a102-49f6-ad76-45e598fe2702 down in Southbound
Nov 25 09:00:32 compute-0 ovn_controller[152859]: 2025-11-25T09:00:32Z|01326|binding|INFO|Removing iface tap6654b89e-a1 ovn-installed in OVS
Nov 25 09:00:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2392: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 87 KiB/s wr, 8 op/s
Nov 25 09:00:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.007 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:12:97 10.100.0.4'], port_security=['fa:16:3e:d9:12:97 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4356e66d-96cf-4d55-bf3e-280638024374', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f656b36-7475-4baf-b321-d82280dade68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2c277aa6-98be-4b49-b9f4-357b27c34694 6ba9af76-9f60-4459-905f-068c6194f108', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51801c6e-6c82-485c-b7cb-963a30ef2813, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6654b89e-a102-49f6-ad76-45e598fe2702) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:00:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.008 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6654b89e-a102-49f6-ad76-45e598fe2702 in datapath 0f656b36-7475-4baf-b321-d82280dade68 unbound from our chassis
Nov 25 09:00:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.009 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f656b36-7475-4baf-b321-d82280dade68, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:00:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.010 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[67687c34-0c2c-45b9-99aa-214c01379bab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.010 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0f656b36-7475-4baf-b321-d82280dade68 namespace which is not needed anymore
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.023 253542 DEBUG nova.policy [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.025 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.034 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:00:32 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d00000080.scope: Deactivated successfully.
Nov 25 09:00:32 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d00000080.scope: Consumed 14.414s CPU time.
Nov 25 09:00:32 compute-0 systemd-machined[215790]: Machine qemu-158-instance-00000080 terminated.
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.070 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.071 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.071 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.071 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.094 253542 DEBUG nova.storage.rbd_utils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f05e074d-5838-4c4b-89dc-76afe386f635_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.098 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc f05e074d-5838-4c4b-89dc-76afe386f635_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:32 compute-0 neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68[388234]: [NOTICE]   (388241) : haproxy version is 2.8.14-c23fe91
Nov 25 09:00:32 compute-0 neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68[388234]: [NOTICE]   (388241) : path to executable is /usr/sbin/haproxy
Nov 25 09:00:32 compute-0 neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68[388234]: [WARNING]  (388241) : Exiting Master process...
Nov 25 09:00:32 compute-0 neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68[388234]: [WARNING]  (388241) : Exiting Master process...
Nov 25 09:00:32 compute-0 neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68[388234]: [ALERT]    (388241) : Current worker (388245) exited with code 143 (Terminated)
Nov 25 09:00:32 compute-0 neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68[388234]: [WARNING]  (388241) : All workers exited. Exiting... (0)
Nov 25 09:00:32 compute-0 systemd[1]: libpod-af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b.scope: Deactivated successfully.
Nov 25 09:00:32 compute-0 podman[389033]: 2025-11-25 09:00:32.160699869 +0000 UTC m=+0.054043313 container died af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.194 253542 INFO nova.virt.libvirt.driver [-] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Instance destroyed successfully.
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.198 253542 DEBUG nova.objects.instance [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'resources' on Instance uuid 4356e66d-96cf-4d55-bf3e-280638024374 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:00:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b-userdata-shm.mount: Deactivated successfully.
Nov 25 09:00:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d8292ddc888ebc952e3026b61ff6dd04593c40f1d79add281577b607917257e-merged.mount: Deactivated successfully.
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.209 253542 DEBUG nova.virt.libvirt.vif [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T08:59:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1963588795',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1963588795',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=128,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9GKDtc/iAkQHMT31cNhSUeZ5EWysDNnQ0GII5sebZcAX/FwcSvLZXUXWZQJzndY+3PoIINOvsAEaMLFDOLThu4z2CfTlWQWolUUIfZRA+bjYm4j7TZTDILpFaRJI4ZIA==',key_name='tempest-TestSecurityGroupsBasicOps-1928582476',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:00:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-tzlro08m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:00:03Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=4356e66d-96cf-4d55-bf3e-280638024374,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.209 253542 DEBUG nova.network.os_vif_util [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.210 253542 DEBUG nova.network.os_vif_util [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:12:97,bridge_name='br-int',has_traffic_filtering=True,id=6654b89e-a102-49f6-ad76-45e598fe2702,network=Network(0f656b36-7475-4baf-b321-d82280dade68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6654b89e-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.210 253542 DEBUG os_vif [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:12:97,bridge_name='br-int',has_traffic_filtering=True,id=6654b89e-a102-49f6-ad76-45e598fe2702,network=Network(0f656b36-7475-4baf-b321-d82280dade68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6654b89e-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.212 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.212 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6654b89e-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.214 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.217 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.219 253542 INFO os_vif [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:12:97,bridge_name='br-int',has_traffic_filtering=True,id=6654b89e-a102-49f6-ad76-45e598fe2702,network=Network(0f656b36-7475-4baf-b321-d82280dade68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6654b89e-a1')
Nov 25 09:00:32 compute-0 podman[389033]: 2025-11-25 09:00:32.275847683 +0000 UTC m=+0.169191107 container cleanup af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:00:32 compute-0 systemd[1]: libpod-conmon-af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b.scope: Deactivated successfully.
Nov 25 09:00:32 compute-0 podman[389108]: 2025-11-25 09:00:32.375752083 +0000 UTC m=+0.078568241 container remove af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 09:00:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.383 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[26035961-2f11-420e-83da-77cef2105198]: (4, ('Tue Nov 25 09:00:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68 (af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b)\naf8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b\nTue Nov 25 09:00:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0f656b36-7475-4baf-b321-d82280dade68 (af8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b)\naf8724245293e0a8eeab72042011d14b5a558212e403fb4614fcaa71e705ad8b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.385 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[70eaf3bd-30af-4aa0-9cbf-e57abf32f529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.386 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f656b36-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.387 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:32 compute-0 kernel: tap0f656b36-70: left promiscuous mode
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.401 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.406 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f46ad7b1-029c-4801-b7a8-ab5aea80e159]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.418 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8828d72f-acf5-44dd-90bf-d5fbddb72e81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.419 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a9df5f60-ca79-4ad8-8164-489e64f01b68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.435 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[66d8028a-de49-482d-8e41-40477c5d139e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655770, 'reachable_time': 31378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389125, 'error': None, 'target': 'ovnmeta-0f656b36-7475-4baf-b321-d82280dade68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d0f656b36\x2d7475\x2d4baf\x2db321\x2dd82280dade68.mount: Deactivated successfully.
Nov 25 09:00:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.439 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0f656b36-7475-4baf-b321-d82280dade68 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:00:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:32.439 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[8928675c-c0d7-41ec-b188-5dce1d66df15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.614 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc f05e074d-5838-4c4b-89dc-76afe386f635_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.676 253542 DEBUG nova.storage.rbd_utils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image f05e074d-5838-4c4b-89dc-76afe386f635_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.705 253542 DEBUG nova.network.neutron [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Successfully created port: 30ba0f84-3dca-47f6-911d-5fff56a99b0b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.721 253542 DEBUG nova.network.neutron [req-326e1da0-2b81-4ae4-bf59-a35fdf9677d7 req-b9a11942-29d2-427e-b742-60227efdcdb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Updated VIF entry in instance network info cache for port 6654b89e-a102-49f6-ad76-45e598fe2702. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.721 253542 DEBUG nova.network.neutron [req-326e1da0-2b81-4ae4-bf59-a35fdf9677d7 req-b9a11942-29d2-427e-b742-60227efdcdb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Updating instance_info_cache with network_info: [{"id": "6654b89e-a102-49f6-ad76-45e598fe2702", "address": "fa:16:3e:d9:12:97", "network": {"id": "0f656b36-7475-4baf-b321-d82280dade68", "bridge": "br-int", "label": "tempest-network-smoke--187809639", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6654b89e-a1", "ovs_interfaceid": "6654b89e-a102-49f6-ad76-45e598fe2702", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:00:32 compute-0 ceph-mon[75015]: pgmap v2392: 321 pgs: 321 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 87 KiB/s wr, 8 op/s
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.743 253542 DEBUG oslo_concurrency.lockutils [req-326e1da0-2b81-4ae4-bf59-a35fdf9677d7 req-b9a11942-29d2-427e-b742-60227efdcdb6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-4356e66d-96cf-4d55-bf3e-280638024374" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.780 253542 DEBUG nova.objects.instance [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid f05e074d-5838-4c4b-89dc-76afe386f635 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.792 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.792 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Ensure instance console log exists: /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.793 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.795 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.795 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.962 253542 INFO nova.virt.libvirt.driver [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Deleting instance files /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374_del
Nov 25 09:00:32 compute-0 nova_compute[253538]: 2025-11-25 09:00:32.963 253542 INFO nova.virt.libvirt.driver [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Deletion of /var/lib/nova/instances/4356e66d-96cf-4d55-bf3e-280638024374_del complete
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.011 253542 INFO nova.compute.manager [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Took 1.47 seconds to destroy the instance on the hypervisor.
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.011 253542 DEBUG oslo.service.loopingcall [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.012 253542 DEBUG nova.compute.manager [-] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.012 253542 DEBUG nova.network.neutron [-] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.575 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.640 253542 DEBUG nova.network.neutron [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Successfully updated port: 30ba0f84-3dca-47f6-911d-5fff56a99b0b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.655 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.655 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.655 253542 DEBUG nova.network.neutron [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.788 253542 DEBUG nova.compute.manager [req-900f573c-a4a8-4203-8d92-751cabc75950 req-e4c0ed71-aaa3-4e8a-80fe-6fc007134b75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-changed-30ba0f84-3dca-47f6-911d-5fff56a99b0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.789 253542 DEBUG nova.compute.manager [req-900f573c-a4a8-4203-8d92-751cabc75950 req-e4c0ed71-aaa3-4e8a-80fe-6fc007134b75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Refreshing instance network info cache due to event network-changed-30ba0f84-3dca-47f6-911d-5fff56a99b0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.789 253542 DEBUG oslo_concurrency.lockutils [req-900f573c-a4a8-4203-8d92-751cabc75950 req-e4c0ed71-aaa3-4e8a-80fe-6fc007134b75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.790 253542 DEBUG nova.network.neutron [-] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.804 253542 INFO nova.compute.manager [-] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Took 0.79 seconds to deallocate network for instance.
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.850 253542 DEBUG nova.network.neutron [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.855 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.856 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:33 compute-0 nova_compute[253538]: 2025-11-25 09:00:33.927 253542 DEBUG oslo_concurrency.processutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2393: 321 pgs: 321 active+clean; 147 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 360 KiB/s wr, 14 op/s
Nov 25 09:00:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:00:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/437537554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:00:34 compute-0 nova_compute[253538]: 2025-11-25 09:00:34.445 253542 DEBUG oslo_concurrency.processutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:34 compute-0 nova_compute[253538]: 2025-11-25 09:00:34.456 253542 DEBUG nova.compute.provider_tree [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:00:34 compute-0 nova_compute[253538]: 2025-11-25 09:00:34.481 253542 DEBUG nova.scheduler.client.report [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:00:34 compute-0 nova_compute[253538]: 2025-11-25 09:00:34.513 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:34 compute-0 nova_compute[253538]: 2025-11-25 09:00:34.557 253542 INFO nova.scheduler.client.report [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Deleted allocations for instance 4356e66d-96cf-4d55-bf3e-280638024374
Nov 25 09:00:34 compute-0 nova_compute[253538]: 2025-11-25 09:00:34.625 253542 DEBUG oslo_concurrency.lockutils [None req-8e2688be-e1e9-4563-b98a-127a8c25ed02 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "4356e66d-96cf-4d55-bf3e-280638024374" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:35 compute-0 ceph-mon[75015]: pgmap v2393: 321 pgs: 321 active+clean; 147 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 360 KiB/s wr, 14 op/s
Nov 25 09:00:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/437537554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:00:35 compute-0 nova_compute[253538]: 2025-11-25 09:00:35.920 253542 DEBUG nova.compute.manager [req-f1903a81-7b57-4d4f-b642-a2011ddba0e3 req-d8927bb2-8f5f-452d-bbe0-5ffa5bf14318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Received event network-vif-deleted-6654b89e-a102-49f6-ad76-45e598fe2702 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:00:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2394: 321 pgs: 321 active+clean; 151 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.2 MiB/s wr, 40 op/s
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.227 253542 DEBUG nova.network.neutron [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updating instance_info_cache with network_info: [{"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.255 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.256 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Instance network_info: |[{"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.256 253542 DEBUG oslo_concurrency.lockutils [req-900f573c-a4a8-4203-8d92-751cabc75950 req-e4c0ed71-aaa3-4e8a-80fe-6fc007134b75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.257 253542 DEBUG nova.network.neutron [req-900f573c-a4a8-4203-8d92-751cabc75950 req-e4c0ed71-aaa3-4e8a-80fe-6fc007134b75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Refreshing network info cache for port 30ba0f84-3dca-47f6-911d-5fff56a99b0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.263 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Start _get_guest_xml network_info=[{"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.268 253542 WARNING nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.274 253542 DEBUG nova.virt.libvirt.host [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.274 253542 DEBUG nova.virt.libvirt.host [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.278 253542 DEBUG nova.virt.libvirt.host [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.278 253542 DEBUG nova.virt.libvirt.host [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.279 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.279 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.279 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.280 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.280 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.280 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.280 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.280 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.281 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.281 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.281 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.281 253542 DEBUG nova.virt.hardware [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.285 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:00:36 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/639692007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.751 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.784 253542 DEBUG nova.storage.rbd_utils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f05e074d-5838-4c4b-89dc-76afe386f635_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:00:36 compute-0 nova_compute[253538]: 2025-11-25 09:00:36.789 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:37 compute-0 ceph-mon[75015]: pgmap v2394: 321 pgs: 321 active+clean; 151 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.2 MiB/s wr, 40 op/s
Nov 25 09:00:37 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/639692007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.217 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:00:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2883182616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.254 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.256 253542 DEBUG nova.virt.libvirt.vif [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:00:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1223430136',display_name='tempest-TestGettingAddress-server-1223430136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1223430136',id=129,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE43RDkFrIAVI0BW11KJE2OYK1HJrAvC1/LPNZ8TDm6qjtPF0a19WFe4a1radMZBBdiDLKOYgFd/MDesehfKkwox7hQY3R8UYDimfx0Df8eUJAXIFHj7mDCbbhz+nTG4ag==',key_name='tempest-TestGettingAddress-356224709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-w1sbzkv2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:00:31Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=f05e074d-5838-4c4b-89dc-76afe386f635,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.257 253542 DEBUG nova.network.os_vif_util [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.258 253542 DEBUG nova.network.os_vif_util [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:d0:53,bridge_name='br-int',has_traffic_filtering=True,id=30ba0f84-3dca-47f6-911d-5fff56a99b0b,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ba0f84-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.260 253542 DEBUG nova.objects.instance [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid f05e074d-5838-4c4b-89dc-76afe386f635 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.275 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:00:37 compute-0 nova_compute[253538]:   <uuid>f05e074d-5838-4c4b-89dc-76afe386f635</uuid>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   <name>instance-00000081</name>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <nova:name>tempest-TestGettingAddress-server-1223430136</nova:name>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:00:36</nova:creationTime>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:00:37 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:00:37 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:00:37 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:00:37 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:00:37 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:00:37 compute-0 nova_compute[253538]:         <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 09:00:37 compute-0 nova_compute[253538]:         <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:00:37 compute-0 nova_compute[253538]:         <nova:port uuid="30ba0f84-3dca-47f6-911d-5fff56a99b0b">
Nov 25 09:00:37 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fed4:d053" ipVersion="6"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <system>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <entry name="serial">f05e074d-5838-4c4b-89dc-76afe386f635</entry>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <entry name="uuid">f05e074d-5838-4c4b-89dc-76afe386f635</entry>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     </system>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   <os>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   </os>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   <features>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   </features>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/f05e074d-5838-4c4b-89dc-76afe386f635_disk">
Nov 25 09:00:37 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       </source>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:00:37 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/f05e074d-5838-4c4b-89dc-76afe386f635_disk.config">
Nov 25 09:00:37 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       </source>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:00:37 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:d4:d0:53"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <target dev="tap30ba0f84-3d"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635/console.log" append="off"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <video>
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     </video>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:00:37 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:00:37 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:00:37 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:00:37 compute-0 nova_compute[253538]: </domain>
Nov 25 09:00:37 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.277 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Preparing to wait for external event network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.277 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.278 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.279 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.280 253542 DEBUG nova.virt.libvirt.vif [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:00:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1223430136',display_name='tempest-TestGettingAddress-server-1223430136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1223430136',id=129,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE43RDkFrIAVI0BW11KJE2OYK1HJrAvC1/LPNZ8TDm6qjtPF0a19WFe4a1radMZBBdiDLKOYgFd/MDesehfKkwox7hQY3R8UYDimfx0Df8eUJAXIFHj7mDCbbhz+nTG4ag==',key_name='tempest-TestGettingAddress-356224709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-w1sbzkv2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:00:31Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=f05e074d-5838-4c4b-89dc-76afe386f635,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.281 253542 DEBUG nova.network.os_vif_util [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.282 253542 DEBUG nova.network.os_vif_util [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:d0:53,bridge_name='br-int',has_traffic_filtering=True,id=30ba0f84-3dca-47f6-911d-5fff56a99b0b,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ba0f84-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.284 253542 DEBUG os_vif [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:d0:53,bridge_name='br-int',has_traffic_filtering=True,id=30ba0f84-3dca-47f6-911d-5fff56a99b0b,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ba0f84-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.286 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.287 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.291 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30ba0f84-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.292 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap30ba0f84-3d, col_values=(('external_ids', {'iface-id': '30ba0f84-3dca-47f6-911d-5fff56a99b0b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:d0:53', 'vm-uuid': 'f05e074d-5838-4c4b-89dc-76afe386f635'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.294 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:37 compute-0 NetworkManager[48915]: <info>  [1764061237.2955] manager: (tap30ba0f84-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/544)
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.297 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.300 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.301 253542 INFO os_vif [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:d0:53,bridge_name='br-int',has_traffic_filtering=True,id=30ba0f84-3dca-47f6-911d-5fff56a99b0b,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ba0f84-3d')
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.396 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.400 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.400 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:d4:d0:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.401 253542 INFO nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Using config drive
Nov 25 09:00:37 compute-0 nova_compute[253538]: 2025-11-25 09:00:37.428 253542 DEBUG nova.storage.rbd_utils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f05e074d-5838-4c4b-89dc-76afe386f635_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:00:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:00:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2395: 321 pgs: 321 active+clean; 134 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 09:00:38 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2883182616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:00:38 compute-0 nova_compute[253538]: 2025-11-25 09:00:38.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:39 compute-0 ceph-mon[75015]: pgmap v2395: 321 pgs: 321 active+clean; 134 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 09:00:39 compute-0 nova_compute[253538]: 2025-11-25 09:00:39.157 253542 INFO nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Creating config drive at /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635/disk.config
Nov 25 09:00:39 compute-0 nova_compute[253538]: 2025-11-25 09:00:39.170 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_pz4ih0t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:39 compute-0 nova_compute[253538]: 2025-11-25 09:00:39.321 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_pz4ih0t" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:39 compute-0 nova_compute[253538]: 2025-11-25 09:00:39.364 253542 DEBUG nova.storage.rbd_utils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f05e074d-5838-4c4b-89dc-76afe386f635_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:00:39 compute-0 nova_compute[253538]: 2025-11-25 09:00:39.369 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635/disk.config f05e074d-5838-4c4b-89dc-76afe386f635_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:39 compute-0 nova_compute[253538]: 2025-11-25 09:00:39.582 253542 DEBUG oslo_concurrency.processutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635/disk.config f05e074d-5838-4c4b-89dc-76afe386f635_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:39 compute-0 nova_compute[253538]: 2025-11-25 09:00:39.584 253542 INFO nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Deleting local config drive /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635/disk.config because it was imported into RBD.
Nov 25 09:00:39 compute-0 kernel: tap30ba0f84-3d: entered promiscuous mode
Nov 25 09:00:39 compute-0 NetworkManager[48915]: <info>  [1764061239.6590] manager: (tap30ba0f84-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/545)
Nov 25 09:00:39 compute-0 nova_compute[253538]: 2025-11-25 09:00:39.657 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:39 compute-0 ovn_controller[152859]: 2025-11-25T09:00:39Z|01327|binding|INFO|Claiming lport 30ba0f84-3dca-47f6-911d-5fff56a99b0b for this chassis.
Nov 25 09:00:39 compute-0 ovn_controller[152859]: 2025-11-25T09:00:39Z|01328|binding|INFO|30ba0f84-3dca-47f6-911d-5fff56a99b0b: Claiming fa:16:3e:d4:d0:53 10.100.0.14 2001:db8::f816:3eff:fed4:d053
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.670 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:d0:53 10.100.0.14 2001:db8::f816:3eff:fed4:d053'], port_security=['fa:16:3e:d4:d0:53 10.100.0.14 2001:db8::f816:3eff:fed4:d053'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fed4:d053/64', 'neutron:device_id': 'f05e074d-5838-4c4b-89dc-76afe386f635', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc2d8b7e-9a70-4d0b-ab2b-2be8746de01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566c4500-0375-4680-b110-24535007c05e, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=30ba0f84-3dca-47f6-911d-5fff56a99b0b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.673 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 30ba0f84-3dca-47f6-911d-5fff56a99b0b in datapath 6c2834b5-0444-432c-8da4-c0b4f4aabc4d bound to our chassis
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.674 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c2834b5-0444-432c-8da4-c0b4f4aabc4d
Nov 25 09:00:39 compute-0 ovn_controller[152859]: 2025-11-25T09:00:39Z|01329|binding|INFO|Setting lport 30ba0f84-3dca-47f6-911d-5fff56a99b0b ovn-installed in OVS
Nov 25 09:00:39 compute-0 ovn_controller[152859]: 2025-11-25T09:00:39Z|01330|binding|INFO|Setting lport 30ba0f84-3dca-47f6-911d-5fff56a99b0b up in Southbound
Nov 25 09:00:39 compute-0 nova_compute[253538]: 2025-11-25 09:00:39.678 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.684 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c63a5440-7164-4d51-8fca-d04df14fca04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.685 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c2834b5-01 in ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.687 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c2834b5-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.687 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a356c37f-2ea3-4b88-bb2a-76002022c210]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.688 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bf092b-f60a-4ed7-8907-2de011fe3b29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:39 compute-0 systemd-udevd[389358]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:00:39 compute-0 systemd-machined[215790]: New machine qemu-159-instance-00000081.
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.702 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[5b0cffca-61e6-49a5-b0ac-67032705e441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:39 compute-0 NetworkManager[48915]: <info>  [1764061239.7098] device (tap30ba0f84-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:00:39 compute-0 NetworkManager[48915]: <info>  [1764061239.7106] device (tap30ba0f84-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:00:39 compute-0 systemd[1]: Started Virtual Machine qemu-159-instance-00000081.
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.720 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7389d1aa-a342-41df-bc35-9ab777d09082]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.756 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[20ac8fe7-2c0e-49ac-8612-5194c25851d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:39 compute-0 systemd-udevd[389362]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.762 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[569ae129-4cc9-4d8a-aa8b-dcbf8c7d8481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:39 compute-0 NetworkManager[48915]: <info>  [1764061239.7627] manager: (tap6c2834b5-00): new Veth device (/org/freedesktop/NetworkManager/Devices/546)
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.799 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[cceb5397-fb66-4261-86ae-e41a8dabfdfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.803 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[78bf2fe2-74e2-4dc1-9e21-e5132681f8e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:39 compute-0 NetworkManager[48915]: <info>  [1764061239.8380] device (tap6c2834b5-00): carrier: link connected
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.846 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a0912dd6-01bd-4767-9c61-83291183b3c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.861 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c301226f-713a-4166-864d-456adcdfbedb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c2834b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:81:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659443, 'reachable_time': 25259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389390, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.873 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5e13f0-6742-42c7-ba1d-99f4e25f2f0a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:813e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659443, 'tstamp': 659443}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389391, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.886 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8ffd531f-2690-43d9-a561-a53956d147eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c2834b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:81:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659443, 'reachable_time': 25259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389392, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.920 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5b62a253-5bdc-4669-8e14-8e52d7422e56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.997 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[330164d7-7d7b-4fde-9cab-61622cc51388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:39.999 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c2834b5-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:40.000 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:40.001 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c2834b5-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:00:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2396: 321 pgs: 321 active+clean; 134 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.052 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:40 compute-0 kernel: tap6c2834b5-00: entered promiscuous mode
Nov 25 09:00:40 compute-0 NetworkManager[48915]: <info>  [1764061240.0528] manager: (tap6c2834b5-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/547)
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.055 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:40.055 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c2834b5-00, col_values=(('external_ids', {'iface-id': '25e4a85d-5a04-4d07-a006-66576a20c294'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:00:40 compute-0 ovn_controller[152859]: 2025-11-25T09:00:40Z|01331|binding|INFO|Releasing lport 25e4a85d-5a04-4d07-a006-66576a20c294 from this chassis (sb_readonly=0)
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.056 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.074 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:40.075 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c2834b5-0444-432c-8da4-c0b4f4aabc4d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c2834b5-0444-432c-8da4-c0b4f4aabc4d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:40.076 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7a710e-551c-4c6d-9d93-c682b6800a75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:40.076 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-6c2834b5-0444-432c-8da4-c0b4f4aabc4d
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/6c2834b5-0444-432c-8da4-c0b4f4aabc4d.pid.haproxy
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 6c2834b5-0444-432c-8da4-c0b4f4aabc4d
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:00:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:40.077 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'env', 'PROCESS_TAG=haproxy-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c2834b5-0444-432c-8da4-c0b4f4aabc4d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.101 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.407 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061240.4065454, f05e074d-5838-4c4b-89dc-76afe386f635 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.408 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] VM Started (Lifecycle Event)
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.428 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.432 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061240.4075935, f05e074d-5838-4c4b-89dc-76afe386f635 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.432 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] VM Paused (Lifecycle Event)
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.448 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.451 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:00:40 compute-0 podman[389466]: 2025-11-25 09:00:40.45799714 +0000 UTC m=+0.052140331 container create d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.464 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:00:40 compute-0 systemd[1]: Started libpod-conmon-d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60.scope.
Nov 25 09:00:40 compute-0 podman[389466]: 2025-11-25 09:00:40.426882963 +0000 UTC m=+0.021026114 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:00:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:00:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f8cf4280ba7f34d6348b543dd0da02fb745e7d55375aa17f4fa4a2f697425a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:40 compute-0 podman[389466]: 2025-11-25 09:00:40.553272623 +0000 UTC m=+0.147415784 container init d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:00:40 compute-0 podman[389466]: 2025-11-25 09:00:40.564034885 +0000 UTC m=+0.158178026 container start d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 09:00:40 compute-0 neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d[389481]: [NOTICE]   (389485) : New worker (389487) forked
Nov 25 09:00:40 compute-0 neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d[389481]: [NOTICE]   (389485) : Loading success.
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.732 253542 DEBUG nova.compute.manager [req-f6710749-e8b4-4549-8759-f10f9facbbec req-880a86ac-4022-4234-9fbd-24fcbd7c2159 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.733 253542 DEBUG oslo_concurrency.lockutils [req-f6710749-e8b4-4549-8759-f10f9facbbec req-880a86ac-4022-4234-9fbd-24fcbd7c2159 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.734 253542 DEBUG oslo_concurrency.lockutils [req-f6710749-e8b4-4549-8759-f10f9facbbec req-880a86ac-4022-4234-9fbd-24fcbd7c2159 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.734 253542 DEBUG oslo_concurrency.lockutils [req-f6710749-e8b4-4549-8759-f10f9facbbec req-880a86ac-4022-4234-9fbd-24fcbd7c2159 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.734 253542 DEBUG nova.compute.manager [req-f6710749-e8b4-4549-8759-f10f9facbbec req-880a86ac-4022-4234-9fbd-24fcbd7c2159 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Processing event network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.736 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.741 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061240.7408547, f05e074d-5838-4c4b-89dc-76afe386f635 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.741 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] VM Resumed (Lifecycle Event)
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.744 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.749 253542 INFO nova.virt.libvirt.driver [-] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Instance spawned successfully.
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.750 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.767 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.772 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.787 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.788 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.789 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.790 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.791 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.792 253542 DEBUG nova.virt.libvirt.driver [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.799 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.844 253542 INFO nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Took 9.12 seconds to spawn the instance on the hypervisor.
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.844 253542 DEBUG nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.912 253542 INFO nova.compute.manager [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Took 10.04 seconds to build instance.
Nov 25 09:00:40 compute-0 nova_compute[253538]: 2025-11-25 09:00:40.930 253542 DEBUG oslo_concurrency.lockutils [None req-73baf30d-b3ff-4201-b0d3-37a16d64b79f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:41.084 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:41.084 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:00:41.085 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:41 compute-0 ceph-mon[75015]: pgmap v2396: 321 pgs: 321 active+clean; 134 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 09:00:41 compute-0 nova_compute[253538]: 2025-11-25 09:00:41.975 253542 DEBUG nova.network.neutron [req-900f573c-a4a8-4203-8d92-751cabc75950 req-e4c0ed71-aaa3-4e8a-80fe-6fc007134b75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updated VIF entry in instance network info cache for port 30ba0f84-3dca-47f6-911d-5fff56a99b0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:00:41 compute-0 nova_compute[253538]: 2025-11-25 09:00:41.975 253542 DEBUG nova.network.neutron [req-900f573c-a4a8-4203-8d92-751cabc75950 req-e4c0ed71-aaa3-4e8a-80fe-6fc007134b75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updating instance_info_cache with network_info: [{"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:00:41 compute-0 nova_compute[253538]: 2025-11-25 09:00:41.994 253542 DEBUG oslo_concurrency.lockutils [req-900f573c-a4a8-4203-8d92-751cabc75950 req-e4c0ed71-aaa3-4e8a-80fe-6fc007134b75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:00:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2397: 321 pgs: 321 active+clean; 134 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 463 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 09:00:42 compute-0 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 09:00:42 compute-0 nova_compute[253538]: 2025-11-25 09:00:42.330 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:00:42 compute-0 ovn_controller[152859]: 2025-11-25T09:00:42Z|01332|binding|INFO|Releasing lport 25e4a85d-5a04-4d07-a006-66576a20c294 from this chassis (sb_readonly=0)
Nov 25 09:00:42 compute-0 nova_compute[253538]: 2025-11-25 09:00:42.819 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:42 compute-0 nova_compute[253538]: 2025-11-25 09:00:42.838 253542 DEBUG nova.compute.manager [req-b01e15ed-6225-4c90-af06-d55ca474b999 req-0bf37402-8aa7-452e-8338-5125e5b35ea8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:00:42 compute-0 nova_compute[253538]: 2025-11-25 09:00:42.839 253542 DEBUG oslo_concurrency.lockutils [req-b01e15ed-6225-4c90-af06-d55ca474b999 req-0bf37402-8aa7-452e-8338-5125e5b35ea8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:42 compute-0 nova_compute[253538]: 2025-11-25 09:00:42.839 253542 DEBUG oslo_concurrency.lockutils [req-b01e15ed-6225-4c90-af06-d55ca474b999 req-0bf37402-8aa7-452e-8338-5125e5b35ea8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:42 compute-0 nova_compute[253538]: 2025-11-25 09:00:42.839 253542 DEBUG oslo_concurrency.lockutils [req-b01e15ed-6225-4c90-af06-d55ca474b999 req-0bf37402-8aa7-452e-8338-5125e5b35ea8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:42 compute-0 nova_compute[253538]: 2025-11-25 09:00:42.840 253542 DEBUG nova.compute.manager [req-b01e15ed-6225-4c90-af06-d55ca474b999 req-0bf37402-8aa7-452e-8338-5125e5b35ea8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] No waiting events found dispatching network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:00:42 compute-0 nova_compute[253538]: 2025-11-25 09:00:42.840 253542 WARNING nova.compute.manager [req-b01e15ed-6225-4c90-af06-d55ca474b999 req-0bf37402-8aa7-452e-8338-5125e5b35ea8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received unexpected event network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b for instance with vm_state active and task_state None.
Nov 25 09:00:42 compute-0 ovn_controller[152859]: 2025-11-25T09:00:42Z|01333|binding|INFO|Releasing lport 25e4a85d-5a04-4d07-a006-66576a20c294 from this chassis (sb_readonly=0)
Nov 25 09:00:42 compute-0 nova_compute[253538]: 2025-11-25 09:00:42.934 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:43 compute-0 ceph-mon[75015]: pgmap v2397: 321 pgs: 321 active+clean; 134 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 463 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 09:00:43 compute-0 nova_compute[253538]: 2025-11-25 09:00:43.578 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2398: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 747 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Nov 25 09:00:44 compute-0 nova_compute[253538]: 2025-11-25 09:00:44.286 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:44 compute-0 NetworkManager[48915]: <info>  [1764061244.2875] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/548)
Nov 25 09:00:44 compute-0 NetworkManager[48915]: <info>  [1764061244.2887] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/549)
Nov 25 09:00:44 compute-0 ovn_controller[152859]: 2025-11-25T09:00:44Z|01334|binding|INFO|Releasing lport 25e4a85d-5a04-4d07-a006-66576a20c294 from this chassis (sb_readonly=0)
Nov 25 09:00:44 compute-0 nova_compute[253538]: 2025-11-25 09:00:44.319 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:44 compute-0 ovn_controller[152859]: 2025-11-25T09:00:44Z|01335|binding|INFO|Releasing lport 25e4a85d-5a04-4d07-a006-66576a20c294 from this chassis (sb_readonly=0)
Nov 25 09:00:44 compute-0 nova_compute[253538]: 2025-11-25 09:00:44.327 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:44 compute-0 nova_compute[253538]: 2025-11-25 09:00:44.937 253542 DEBUG nova.compute.manager [req-38227f6d-94ef-49b4-8cd4-5c8d2def5364 req-d974bf70-e100-4a70-9875-4d91fbe7164c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-changed-30ba0f84-3dca-47f6-911d-5fff56a99b0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:00:44 compute-0 nova_compute[253538]: 2025-11-25 09:00:44.939 253542 DEBUG nova.compute.manager [req-38227f6d-94ef-49b4-8cd4-5c8d2def5364 req-d974bf70-e100-4a70-9875-4d91fbe7164c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Refreshing instance network info cache due to event network-changed-30ba0f84-3dca-47f6-911d-5fff56a99b0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:00:44 compute-0 nova_compute[253538]: 2025-11-25 09:00:44.939 253542 DEBUG oslo_concurrency.lockutils [req-38227f6d-94ef-49b4-8cd4-5c8d2def5364 req-d974bf70-e100-4a70-9875-4d91fbe7164c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:00:44 compute-0 nova_compute[253538]: 2025-11-25 09:00:44.939 253542 DEBUG oslo_concurrency.lockutils [req-38227f6d-94ef-49b4-8cd4-5c8d2def5364 req-d974bf70-e100-4a70-9875-4d91fbe7164c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:00:44 compute-0 nova_compute[253538]: 2025-11-25 09:00:44.940 253542 DEBUG nova.network.neutron [req-38227f6d-94ef-49b4-8cd4-5c8d2def5364 req-d974bf70-e100-4a70-9875-4d91fbe7164c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Refreshing network info cache for port 30ba0f84-3dca-47f6-911d-5fff56a99b0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:00:45 compute-0 ceph-mon[75015]: pgmap v2398: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 747 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Nov 25 09:00:45 compute-0 nova_compute[253538]: 2025-11-25 09:00:45.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:00:45 compute-0 nova_compute[253538]: 2025-11-25 09:00:45.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:00:45 compute-0 nova_compute[253538]: 2025-11-25 09:00:45.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:00:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2399: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 106 op/s
Nov 25 09:00:46 compute-0 nova_compute[253538]: 2025-11-25 09:00:46.145 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:00:46 compute-0 nova_compute[253538]: 2025-11-25 09:00:46.459 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:46 compute-0 ceph-mon[75015]: pgmap v2399: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 106 op/s
Nov 25 09:00:47 compute-0 nova_compute[253538]: 2025-11-25 09:00:47.191 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061232.1897573, 4356e66d-96cf-4d55-bf3e-280638024374 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:00:47 compute-0 nova_compute[253538]: 2025-11-25 09:00:47.192 253542 INFO nova.compute.manager [-] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] VM Stopped (Lifecycle Event)
Nov 25 09:00:47 compute-0 nova_compute[253538]: 2025-11-25 09:00:47.215 253542 DEBUG nova.compute.manager [None req-df57c3cf-a639-415a-9a8c-f297cf07c0e3 - - - - - -] [instance: 4356e66d-96cf-4d55-bf3e-280638024374] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:00:47 compute-0 nova_compute[253538]: 2025-11-25 09:00:47.332 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:47 compute-0 nova_compute[253538]: 2025-11-25 09:00:47.414 253542 DEBUG nova.network.neutron [req-38227f6d-94ef-49b4-8cd4-5c8d2def5364 req-d974bf70-e100-4a70-9875-4d91fbe7164c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updated VIF entry in instance network info cache for port 30ba0f84-3dca-47f6-911d-5fff56a99b0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:00:47 compute-0 nova_compute[253538]: 2025-11-25 09:00:47.415 253542 DEBUG nova.network.neutron [req-38227f6d-94ef-49b4-8cd4-5c8d2def5364 req-d974bf70-e100-4a70-9875-4d91fbe7164c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updating instance_info_cache with network_info: [{"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:00:47 compute-0 nova_compute[253538]: 2025-11-25 09:00:47.440 253542 DEBUG oslo_concurrency.lockutils [req-38227f6d-94ef-49b4-8cd4-5c8d2def5364 req-d974bf70-e100-4a70-9875-4d91fbe7164c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:00:47 compute-0 nova_compute[253538]: 2025-11-25 09:00:47.442 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:00:47 compute-0 nova_compute[253538]: 2025-11-25 09:00:47.442 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 09:00:47 compute-0 nova_compute[253538]: 2025-11-25 09:00:47.443 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f05e074d-5838-4c4b-89dc-76afe386f635 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:00:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:00:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2400: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 585 KiB/s wr, 87 op/s
Nov 25 09:00:48 compute-0 nova_compute[253538]: 2025-11-25 09:00:48.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:49 compute-0 ceph-mon[75015]: pgmap v2400: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 585 KiB/s wr, 87 op/s
Nov 25 09:00:49 compute-0 podman[389499]: 2025-11-25 09:00:49.83501062 +0000 UTC m=+0.075490336 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:00:49 compute-0 podman[389498]: 2025-11-25 09:00:49.850186202 +0000 UTC m=+0.096065275 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 25 09:00:49 compute-0 nova_compute[253538]: 2025-11-25 09:00:49.858 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updating instance_info_cache with network_info: [{"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:00:49 compute-0 nova_compute[253538]: 2025-11-25 09:00:49.886 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:00:49 compute-0 nova_compute[253538]: 2025-11-25 09:00:49.886 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 09:00:49 compute-0 nova_compute[253538]: 2025-11-25 09:00:49.886 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:00:49 compute-0 nova_compute[253538]: 2025-11-25 09:00:49.887 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:00:49 compute-0 nova_compute[253538]: 2025-11-25 09:00:49.887 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:00:49 compute-0 nova_compute[253538]: 2025-11-25 09:00:49.887 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:00:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2401: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:00:50 compute-0 nova_compute[253538]: 2025-11-25 09:00:50.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:00:50 compute-0 nova_compute[253538]: 2025-11-25 09:00:50.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:00:51 compute-0 ceph-mon[75015]: pgmap v2401: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:00:51 compute-0 nova_compute[253538]: 2025-11-25 09:00:51.274 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2402: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:00:52 compute-0 nova_compute[253538]: 2025-11-25 09:00:52.334 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:52 compute-0 nova_compute[253538]: 2025-11-25 09:00:52.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:00:52 compute-0 nova_compute[253538]: 2025-11-25 09:00:52.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:52 compute-0 nova_compute[253538]: 2025-11-25 09:00:52.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:52 compute-0 nova_compute[253538]: 2025-11-25 09:00:52.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:52 compute-0 nova_compute[253538]: 2025-11-25 09:00:52.583 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:00:52 compute-0 nova_compute[253538]: 2025-11-25 09:00:52.583 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:00:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:00:53 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:00:53 compute-0 nova_compute[253538]: 2025-11-25 09:00:53.082 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:53 compute-0 nova_compute[253538]: 2025-11-25 09:00:53.171 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:00:53 compute-0 nova_compute[253538]: 2025-11-25 09:00:53.172 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:00:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:00:53
Nov 25 09:00:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:00:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:00:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'default.rgw.control', 'volumes', 'backups', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data']
Nov 25 09:00:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:00:53 compute-0 nova_compute[253538]: 2025-11-25 09:00:53.387 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:00:53 compute-0 nova_compute[253538]: 2025-11-25 09:00:53.389 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3583MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:00:53 compute-0 nova_compute[253538]: 2025-11-25 09:00:53.389 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:53 compute-0 nova_compute[253538]: 2025-11-25 09:00:53.389 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:53 compute-0 ceph-mon[75015]: pgmap v2402: 321 pgs: 321 active+clean; 134 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:00:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2128169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:00:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:00:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:00:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:00:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:00:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:00:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:00:53 compute-0 nova_compute[253538]: 2025-11-25 09:00:53.482 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance f05e074d-5838-4c4b-89dc-76afe386f635 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:00:53 compute-0 nova_compute[253538]: 2025-11-25 09:00:53.483 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:00:53 compute-0 nova_compute[253538]: 2025-11-25 09:00:53.484 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:00:53 compute-0 nova_compute[253538]: 2025-11-25 09:00:53.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:53 compute-0 nova_compute[253538]: 2025-11-25 09:00:53.725 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2403: 321 pgs: 321 active+clean; 138 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 329 KiB/s wr, 62 op/s
Nov 25 09:00:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:00:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:00:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:00:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:00:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:00:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:00:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:00:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:00:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:00:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:00:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:00:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204900587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:00:54 compute-0 nova_compute[253538]: 2025-11-25 09:00:54.267 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:54 compute-0 nova_compute[253538]: 2025-11-25 09:00:54.273 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:00:54 compute-0 nova_compute[253538]: 2025-11-25 09:00:54.287 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:00:54 compute-0 nova_compute[253538]: 2025-11-25 09:00:54.358 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:00:54 compute-0 nova_compute[253538]: 2025-11-25 09:00:54.359 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:54 compute-0 ceph-mon[75015]: pgmap v2403: 321 pgs: 321 active+clean; 138 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 329 KiB/s wr, 62 op/s
Nov 25 09:00:54 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4204900587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:00:55 compute-0 nova_compute[253538]: 2025-11-25 09:00:55.353 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:00:55 compute-0 ovn_controller[152859]: 2025-11-25T09:00:55Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:d0:53 10.100.0.14
Nov 25 09:00:55 compute-0 ovn_controller[152859]: 2025-11-25T09:00:55Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:d0:53 10.100.0.14
Nov 25 09:00:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2404: 321 pgs: 321 active+clean; 148 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 63 op/s
Nov 25 09:00:56 compute-0 podman[389584]: 2025-11-25 09:00:56.854351196 +0000 UTC m=+0.105712478 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:00:57 compute-0 ceph-mon[75015]: pgmap v2404: 321 pgs: 321 active+clean; 148 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 63 op/s
Nov 25 09:00:57 compute-0 nova_compute[253538]: 2025-11-25 09:00:57.337 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:57 compute-0 nova_compute[253538]: 2025-11-25 09:00:57.429 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:57 compute-0 nova_compute[253538]: 2025-11-25 09:00:57.430 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:57 compute-0 nova_compute[253538]: 2025-11-25 09:00:57.475 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:00:57 compute-0 nova_compute[253538]: 2025-11-25 09:00:57.616 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:57 compute-0 nova_compute[253538]: 2025-11-25 09:00:57.617 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:57 compute-0 nova_compute[253538]: 2025-11-25 09:00:57.623 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:00:57 compute-0 nova_compute[253538]: 2025-11-25 09:00:57.624 253542 INFO nova.compute.claims [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:00:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:00:57 compute-0 nova_compute[253538]: 2025-11-25 09:00:57.778 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2405: 321 pgs: 321 active+clean; 161 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 441 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Nov 25 09:00:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:00:58 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/816391199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.245 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.250 253542 DEBUG nova.compute.provider_tree [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.265 253542 DEBUG nova.scheduler.client.report [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.295 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.296 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.431 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.431 253542 DEBUG nova.network.neutron [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.462 253542 INFO nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.486 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.584 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.607 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.609 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.609 253542 INFO nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Creating image(s)
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.636 253542 DEBUG nova.storage.rbd_utils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 830678ef-9f48-4175-aa6d-666c24a11689_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.660 253542 DEBUG nova.storage.rbd_utils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 830678ef-9f48-4175-aa6d-666c24a11689_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.687 253542 DEBUG nova.storage.rbd_utils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 830678ef-9f48-4175-aa6d-666c24a11689_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.692 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.746 253542 DEBUG nova.policy [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283b89dbe3284e8ea2019b797673108b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfffff2c57a442a59b202d368d49bf00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.788 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.789 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.790 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.790 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.811 253542 DEBUG nova.storage.rbd_utils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 830678ef-9f48-4175-aa6d-666c24a11689_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:00:58 compute-0 nova_compute[253538]: 2025-11-25 09:00:58.814 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 830678ef-9f48-4175-aa6d-666c24a11689_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:00:59 compute-0 ceph-mon[75015]: pgmap v2405: 321 pgs: 321 active+clean; 161 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 441 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Nov 25 09:00:59 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/816391199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:00:59 compute-0 nova_compute[253538]: 2025-11-25 09:00:59.256 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 830678ef-9f48-4175-aa6d-666c24a11689_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:00:59 compute-0 nova_compute[253538]: 2025-11-25 09:00:59.335 253542 DEBUG nova.storage.rbd_utils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] resizing rbd image 830678ef-9f48-4175-aa6d-666c24a11689_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:00:59 compute-0 nova_compute[253538]: 2025-11-25 09:00:59.475 253542 DEBUG nova.objects.instance [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'migration_context' on Instance uuid 830678ef-9f48-4175-aa6d-666c24a11689 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:00:59 compute-0 nova_compute[253538]: 2025-11-25 09:00:59.494 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:00:59 compute-0 nova_compute[253538]: 2025-11-25 09:00:59.495 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Ensure instance console log exists: /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:00:59 compute-0 nova_compute[253538]: 2025-11-25 09:00:59.495 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:00:59 compute-0 nova_compute[253538]: 2025-11-25 09:00:59.496 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:00:59 compute-0 nova_compute[253538]: 2025-11-25 09:00:59.496 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:00:59 compute-0 nova_compute[253538]: 2025-11-25 09:00:59.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:00:59 compute-0 nova_compute[253538]: 2025-11-25 09:00:59.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 09:01:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2406: 321 pgs: 321 active+clean; 187 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 3.2 MiB/s wr, 62 op/s
Nov 25 09:01:00 compute-0 nova_compute[253538]: 2025-11-25 09:01:00.480 253542 DEBUG nova.network.neutron [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Successfully created port: 2719889c-c962-425f-9df3-6f3d741ca0ec _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:01:01 compute-0 ceph-mon[75015]: pgmap v2406: 321 pgs: 321 active+clean; 187 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 3.2 MiB/s wr, 62 op/s
Nov 25 09:01:01 compute-0 CROND[389800]: (root) CMD (run-parts /etc/cron.hourly)
Nov 25 09:01:01 compute-0 run-parts[389803]: (/etc/cron.hourly) starting 0anacron
Nov 25 09:01:01 compute-0 run-parts[389809]: (/etc/cron.hourly) finished 0anacron
Nov 25 09:01:01 compute-0 CROND[389799]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 25 09:01:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2407: 321 pgs: 321 active+clean; 206 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 282 KiB/s rd, 3.8 MiB/s wr, 73 op/s
Nov 25 09:01:02 compute-0 nova_compute[253538]: 2025-11-25 09:01:02.343 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:01:03 compute-0 ceph-mon[75015]: pgmap v2407: 321 pgs: 321 active+clean; 206 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 282 KiB/s rd, 3.8 MiB/s wr, 73 op/s
Nov 25 09:01:03 compute-0 nova_compute[253538]: 2025-11-25 09:01:03.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2408: 321 pgs: 321 active+clean; 213 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001102599282900306 of space, bias 1.0, pg target 0.3307797848700918 quantized to 32 (current 32)
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:01:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:01:05 compute-0 nova_compute[253538]: 2025-11-25 09:01:05.137 253542 DEBUG nova.network.neutron [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Successfully updated port: 2719889c-c962-425f-9df3-6f3d741ca0ec _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:01:05 compute-0 ceph-mon[75015]: pgmap v2408: 321 pgs: 321 active+clean; 213 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Nov 25 09:01:05 compute-0 nova_compute[253538]: 2025-11-25 09:01:05.282 253542 DEBUG nova.compute.manager [req-0b732332-bcfe-4650-89b7-99d245bcf0ad req-e5f69855-2c05-4a05-a619-4772c3aee78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-changed-2719889c-c962-425f-9df3-6f3d741ca0ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:05 compute-0 nova_compute[253538]: 2025-11-25 09:01:05.283 253542 DEBUG nova.compute.manager [req-0b732332-bcfe-4650-89b7-99d245bcf0ad req-e5f69855-2c05-4a05-a619-4772c3aee78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Refreshing instance network info cache due to event network-changed-2719889c-c962-425f-9df3-6f3d741ca0ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:01:05 compute-0 nova_compute[253538]: 2025-11-25 09:01:05.283 253542 DEBUG oslo_concurrency.lockutils [req-0b732332-bcfe-4650-89b7-99d245bcf0ad req-e5f69855-2c05-4a05-a619-4772c3aee78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:01:05 compute-0 nova_compute[253538]: 2025-11-25 09:01:05.283 253542 DEBUG oslo_concurrency.lockutils [req-0b732332-bcfe-4650-89b7-99d245bcf0ad req-e5f69855-2c05-4a05-a619-4772c3aee78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:01:05 compute-0 nova_compute[253538]: 2025-11-25 09:01:05.283 253542 DEBUG nova.network.neutron [req-0b732332-bcfe-4650-89b7-99d245bcf0ad req-e5f69855-2c05-4a05-a619-4772c3aee78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Refreshing network info cache for port 2719889c-c962-425f-9df3-6f3d741ca0ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:01:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2409: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 282 KiB/s rd, 3.6 MiB/s wr, 83 op/s
Nov 25 09:01:06 compute-0 nova_compute[253538]: 2025-11-25 09:01:06.092 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:01:06 compute-0 nova_compute[253538]: 2025-11-25 09:01:06.141 253542 DEBUG nova.network.neutron [req-0b732332-bcfe-4650-89b7-99d245bcf0ad req-e5f69855-2c05-4a05-a619-4772c3aee78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:01:06 compute-0 sshd-session[389810]: Received disconnect from 45.202.211.6 port 41350:11: Bye Bye [preauth]
Nov 25 09:01:06 compute-0 sshd-session[389810]: Disconnected from authenticating user root 45.202.211.6 port 41350 [preauth]
Nov 25 09:01:07 compute-0 nova_compute[253538]: 2025-11-25 09:01:07.135 253542 DEBUG nova.network.neutron [req-0b732332-bcfe-4650-89b7-99d245bcf0ad req-e5f69855-2c05-4a05-a619-4772c3aee78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:01:07 compute-0 nova_compute[253538]: 2025-11-25 09:01:07.153 253542 DEBUG oslo_concurrency.lockutils [req-0b732332-bcfe-4650-89b7-99d245bcf0ad req-e5f69855-2c05-4a05-a619-4772c3aee78b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:01:07 compute-0 nova_compute[253538]: 2025-11-25 09:01:07.154 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquired lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:01:07 compute-0 nova_compute[253538]: 2025-11-25 09:01:07.154 253542 DEBUG nova.network.neutron [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:01:07 compute-0 ceph-mon[75015]: pgmap v2409: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 282 KiB/s rd, 3.6 MiB/s wr, 83 op/s
Nov 25 09:01:07 compute-0 nova_compute[253538]: 2025-11-25 09:01:07.347 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:07 compute-0 nova_compute[253538]: 2025-11-25 09:01:07.377 253542 DEBUG nova.network.neutron [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:01:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:01:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2410: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 225 KiB/s rd, 2.5 MiB/s wr, 67 op/s
Nov 25 09:01:08 compute-0 nova_compute[253538]: 2025-11-25 09:01:08.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:09 compute-0 ceph-mon[75015]: pgmap v2410: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 225 KiB/s rd, 2.5 MiB/s wr, 67 op/s
Nov 25 09:01:09 compute-0 nova_compute[253538]: 2025-11-25 09:01:09.540 253542 DEBUG nova.network.neutron [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Updating instance_info_cache with network_info: [{"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:01:09 compute-0 nova_compute[253538]: 2025-11-25 09:01:09.568 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:01:09 compute-0 nova_compute[253538]: 2025-11-25 09:01:09.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 09:01:09 compute-0 nova_compute[253538]: 2025-11-25 09:01:09.584 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 09:01:09 compute-0 nova_compute[253538]: 2025-11-25 09:01:09.993 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Releasing lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:01:09 compute-0 nova_compute[253538]: 2025-11-25 09:01:09.994 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Instance network_info: |[{"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:01:09 compute-0 nova_compute[253538]: 2025-11-25 09:01:09.997 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Start _get_guest_xml network_info=[{"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.003 253542 WARNING nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.014 253542 DEBUG nova.virt.libvirt.host [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.015 253542 DEBUG nova.virt.libvirt.host [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:01:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2411: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.019 253542 DEBUG nova.virt.libvirt.host [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.020 253542 DEBUG nova.virt.libvirt.host [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.021 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.021 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.022 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.022 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.023 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.023 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.024 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.024 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.025 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.025 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.026 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.026 253542 DEBUG nova.virt.hardware [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.030 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:01:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1750179277' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.499 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.522 253542 DEBUG nova.storage.rbd_utils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 830678ef-9f48-4175-aa6d-666c24a11689_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.527 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:10 compute-0 sudo[389856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:01:10 compute-0 sudo[389856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:10 compute-0 sudo[389856]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:10 compute-0 sudo[389899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:01:10 compute-0 sudo[389899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:10 compute-0 sudo[389899]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:10 compute-0 sudo[389924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:01:10 compute-0 sudo[389924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:10 compute-0 sudo[389924]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:01:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2750775944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.949 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.951 253542 DEBUG nova.virt.libvirt.vif [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1626905983',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1626905983',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=130,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJWOQIYal/ibrReI+6RW0ugIdawfbseW0yB5VZg2UY83dhKROJ1pIw9MNC3hDvNCvPlRsh4rMt3whBfd3b+Yn9hu3bxkRBW96s2JDhI6eOb1sodQt8E/2VrhY6VxJQFmA==',key_name='tempest-TestSecurityGroupsBasicOps-688042133',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-ddf3avyz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:00:58Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=830678ef-9f48-4175-aa6d-666c24a11689,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.952 253542 DEBUG nova.network.os_vif_util [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.953 253542 DEBUG nova.network.os_vif_util [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:1f:9b,bridge_name='br-int',has_traffic_filtering=True,id=2719889c-c962-425f-9df3-6f3d741ca0ec,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2719889c-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.955 253542 DEBUG nova.objects.instance [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 830678ef-9f48-4175-aa6d-666c24a11689 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.978 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:01:10 compute-0 nova_compute[253538]:   <uuid>830678ef-9f48-4175-aa6d-666c24a11689</uuid>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   <name>instance-00000082</name>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1626905983</nova:name>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:01:10</nova:creationTime>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:01:10 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:01:10 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:01:10 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:01:10 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:01:10 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:01:10 compute-0 nova_compute[253538]:         <nova:user uuid="283b89dbe3284e8ea2019b797673108b">tempest-TestSecurityGroupsBasicOps-1495447964-project-member</nova:user>
Nov 25 09:01:10 compute-0 nova_compute[253538]:         <nova:project uuid="cfffff2c57a442a59b202d368d49bf00">tempest-TestSecurityGroupsBasicOps-1495447964</nova:project>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:01:10 compute-0 nova_compute[253538]:         <nova:port uuid="2719889c-c962-425f-9df3-6f3d741ca0ec">
Nov 25 09:01:10 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <system>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <entry name="serial">830678ef-9f48-4175-aa6d-666c24a11689</entry>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <entry name="uuid">830678ef-9f48-4175-aa6d-666c24a11689</entry>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     </system>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   <os>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   </os>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   <features>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   </features>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/830678ef-9f48-4175-aa6d-666c24a11689_disk">
Nov 25 09:01:10 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       </source>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:01:10 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/830678ef-9f48-4175-aa6d-666c24a11689_disk.config">
Nov 25 09:01:10 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       </source>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:01:10 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:0c:1f:9b"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <target dev="tap2719889c-c9"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689/console.log" append="off"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <video>
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     </video>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:01:10 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:01:10 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:01:10 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:01:10 compute-0 nova_compute[253538]: </domain>
Nov 25 09:01:10 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.980 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Preparing to wait for external event network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.981 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.981 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.982 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.982 253542 DEBUG nova.virt.libvirt.vif [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1626905983',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1626905983',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=130,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJWOQIYal/ibrReI+6RW0ugIdawfbseW0yB5VZg2UY83dhKROJ1pIw9MNC3hDvNCvPlRsh4rMt3whBfd3b+Yn9hu3bxkRBW96s2JDhI6eOb1sodQt8E/2VrhY6VxJQFmA==',key_name='tempest-TestSecurityGroupsBasicOps-688042133',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-ddf3avyz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:00:58Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=830678ef-9f48-4175-aa6d-666c24a11689,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.983 253542 DEBUG nova.network.os_vif_util [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.984 253542 DEBUG nova.network.os_vif_util [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:1f:9b,bridge_name='br-int',has_traffic_filtering=True,id=2719889c-c962-425f-9df3-6f3d741ca0ec,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2719889c-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.984 253542 DEBUG os_vif [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:1f:9b,bridge_name='br-int',has_traffic_filtering=True,id=2719889c-c962-425f-9df3-6f3d741ca0ec,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2719889c-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.985 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.986 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.986 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.990 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.991 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2719889c-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.991 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2719889c-c9, col_values=(('external_ids', {'iface-id': '2719889c-c962-425f-9df3-6f3d741ca0ec', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:1f:9b', 'vm-uuid': '830678ef-9f48-4175-aa6d-666c24a11689'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.993 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:10 compute-0 NetworkManager[48915]: <info>  [1764061270.9944] manager: (tap2719889c-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/550)
Nov 25 09:01:10 compute-0 nova_compute[253538]: 2025-11-25 09:01:10.995 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:01:10 compute-0 sudo[389949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:01:11 compute-0 sudo[389949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:11 compute-0 nova_compute[253538]: 2025-11-25 09:01:11.003 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:11 compute-0 nova_compute[253538]: 2025-11-25 09:01:11.004 253542 INFO os_vif [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:1f:9b,bridge_name='br-int',has_traffic_filtering=True,id=2719889c-c962-425f-9df3-6f3d741ca0ec,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2719889c-c9')
Nov 25 09:01:11 compute-0 nova_compute[253538]: 2025-11-25 09:01:11.053 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:01:11 compute-0 nova_compute[253538]: 2025-11-25 09:01:11.054 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:01:11 compute-0 nova_compute[253538]: 2025-11-25 09:01:11.054 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No VIF found with MAC fa:16:3e:0c:1f:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:01:11 compute-0 nova_compute[253538]: 2025-11-25 09:01:11.054 253542 INFO nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Using config drive
Nov 25 09:01:11 compute-0 nova_compute[253538]: 2025-11-25 09:01:11.088 253542 DEBUG nova.storage.rbd_utils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 830678ef-9f48-4175-aa6d-666c24a11689_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:11 compute-0 ceph-mon[75015]: pgmap v2411: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Nov 25 09:01:11 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1750179277' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:01:11 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2750775944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:01:11 compute-0 sudo[389949]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:01:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:01:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:01:11 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:01:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:01:11 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:01:11 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 752e73c7-75ac-4d34-aa35-8b8a9f409851 does not exist
Nov 25 09:01:11 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 4475ab59-34bf-44ef-ab0e-3a0961802d48 does not exist
Nov 25 09:01:11 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 10df075c-21e3-4ecb-9769-789baa4c9bac does not exist
Nov 25 09:01:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:01:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:01:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:01:11 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:01:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:01:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:01:11 compute-0 sudo[390028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:01:11 compute-0 sudo[390028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:11 compute-0 sudo[390028]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:11 compute-0 sudo[390053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:01:11 compute-0 sudo[390053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:11 compute-0 sudo[390053]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:11 compute-0 sudo[390078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:01:11 compute-0 sudo[390078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:11 compute-0 sudo[390078]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:11 compute-0 sudo[390103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:01:11 compute-0 sudo[390103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2412: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 696 KiB/s wr, 25 op/s
Nov 25 09:01:12 compute-0 nova_compute[253538]: 2025-11-25 09:01:12.077 253542 INFO nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Creating config drive at /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689/disk.config
Nov 25 09:01:12 compute-0 nova_compute[253538]: 2025-11-25 09:01:12.083 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_u_h_8rh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:01:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:01:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:01:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:01:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:01:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:01:12 compute-0 nova_compute[253538]: 2025-11-25 09:01:12.227 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_u_h_8rh" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:12 compute-0 nova_compute[253538]: 2025-11-25 09:01:12.254 253542 DEBUG nova.storage.rbd_utils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 830678ef-9f48-4175-aa6d-666c24a11689_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:12 compute-0 nova_compute[253538]: 2025-11-25 09:01:12.257 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689/disk.config 830678ef-9f48-4175-aa6d-666c24a11689_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:12 compute-0 podman[390171]: 2025-11-25 09:01:12.29835137 +0000 UTC m=+0.054698311 container create 2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:01:12 compute-0 systemd[1]: Started libpod-conmon-2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4.scope.
Nov 25 09:01:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:01:12 compute-0 podman[390171]: 2025-11-25 09:01:12.273929015 +0000 UTC m=+0.030275996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:01:12 compute-0 podman[390171]: 2025-11-25 09:01:12.383526997 +0000 UTC m=+0.139873978 container init 2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 09:01:12 compute-0 podman[390171]: 2025-11-25 09:01:12.392201614 +0000 UTC m=+0.148548565 container start 2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:01:12 compute-0 podman[390171]: 2025-11-25 09:01:12.397348294 +0000 UTC m=+0.153695245 container attach 2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:01:12 compute-0 practical_shamir[390207]: 167 167
Nov 25 09:01:12 compute-0 systemd[1]: libpod-2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4.scope: Deactivated successfully.
Nov 25 09:01:12 compute-0 podman[390171]: 2025-11-25 09:01:12.399457031 +0000 UTC m=+0.155803972 container died 2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 09:01:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c9d529062dfe4c437cfe613279421b23a2f5c4855d082a656c03eb3d32805c9-merged.mount: Deactivated successfully.
Nov 25 09:01:12 compute-0 nova_compute[253538]: 2025-11-25 09:01:12.437 253542 DEBUG oslo_concurrency.processutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689/disk.config 830678ef-9f48-4175-aa6d-666c24a11689_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:12 compute-0 nova_compute[253538]: 2025-11-25 09:01:12.440 253542 INFO nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Deleting local config drive /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689/disk.config because it was imported into RBD.
Nov 25 09:01:12 compute-0 podman[390171]: 2025-11-25 09:01:12.451204399 +0000 UTC m=+0.207551330 container remove 2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:01:12 compute-0 systemd[1]: libpod-conmon-2aea7d2783f03fd8c7df167a36d0721b5e28432e7e410ed98f58ac68fd0020a4.scope: Deactivated successfully.
Nov 25 09:01:12 compute-0 NetworkManager[48915]: <info>  [1764061272.4964] manager: (tap2719889c-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/551)
Nov 25 09:01:12 compute-0 kernel: tap2719889c-c9: entered promiscuous mode
Nov 25 09:01:12 compute-0 ovn_controller[152859]: 2025-11-25T09:01:12Z|01336|binding|INFO|Claiming lport 2719889c-c962-425f-9df3-6f3d741ca0ec for this chassis.
Nov 25 09:01:12 compute-0 ovn_controller[152859]: 2025-11-25T09:01:12Z|01337|binding|INFO|2719889c-c962-425f-9df3-6f3d741ca0ec: Claiming fa:16:3e:0c:1f:9b 10.100.0.7
Nov 25 09:01:12 compute-0 nova_compute[253538]: 2025-11-25 09:01:12.499 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:12 compute-0 ovn_controller[152859]: 2025-11-25T09:01:12Z|01338|binding|INFO|Setting lport 2719889c-c962-425f-9df3-6f3d741ca0ec ovn-installed in OVS
Nov 25 09:01:12 compute-0 nova_compute[253538]: 2025-11-25 09:01:12.514 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:12 compute-0 nova_compute[253538]: 2025-11-25 09:01:12.517 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:12 compute-0 systemd-udevd[390253]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:01:12 compute-0 systemd-machined[215790]: New machine qemu-160-instance-00000082.
Nov 25 09:01:12 compute-0 NetworkManager[48915]: <info>  [1764061272.5498] device (tap2719889c-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:01:12 compute-0 NetworkManager[48915]: <info>  [1764061272.5511] device (tap2719889c-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:01:12 compute-0 ovn_controller[152859]: 2025-11-25T09:01:12Z|01339|binding|INFO|Setting lport 2719889c-c962-425f-9df3-6f3d741ca0ec up in Southbound
Nov 25 09:01:12 compute-0 systemd[1]: Started Virtual Machine qemu-160-instance-00000082.
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.558 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:1f:9b 10.100.0.7'], port_security=['fa:16:3e:0c:1f:9b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '830678ef-9f48-4175-aa6d-666c24a11689', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7a01ec2f-9868-40ca-9120-52725aa4431e 8a14d0f4-bb68-44c1-9d93-80bac0a038b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d4cb606-f8e1-4247-876b-21f84cfe5e61, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2719889c-c962-425f-9df3-6f3d741ca0ec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.559 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2719889c-c962-425f-9df3-6f3d741ca0ec in datapath 01d5ee0a-5a87-445b-8539-b33b1f9d0842 bound to our chassis
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.560 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d5ee0a-5a87-445b-8539-b33b1f9d0842
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.572 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e170b9d2-7afe-4330-a7ff-8b6f89666aea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.573 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01d5ee0a-51 in ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.575 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01d5ee0a-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.575 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[94a654c1-c31c-45fa-826a-ad0d9a2ef717]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.576 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c6f823-8919-4ba9-9cd1-e9181c2e2ae2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.588 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e24770ca-10ad-4d4b-b0ea-38060480baa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.601 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[27060116-69ae-4dfe-8911-b6846d972fe3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:12 compute-0 podman[390265]: 2025-11-25 09:01:12.630622013 +0000 UTC m=+0.034657634 container create 233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.637 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6f5f75ca-2986-40b2-a244-231a92252ea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:12 compute-0 NetworkManager[48915]: <info>  [1764061272.6434] manager: (tap01d5ee0a-50): new Veth device (/org/freedesktop/NetworkManager/Devices/552)
Nov 25 09:01:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.643 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7a216def-e32d-4497-b2a4-84b928c16a50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:12 compute-0 systemd[1]: Started libpod-conmon-233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544.scope.
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.677 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[afd08671-2fdd-4cc1-904a-6f3d9895cfa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.680 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[61f12dac-5eb0-4945-bd69-0f0f02e816bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098702006aeefdadc98d48c8ab9d32fc36c18d606e3fea854186f94d988e9beb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098702006aeefdadc98d48c8ab9d32fc36c18d606e3fea854186f94d988e9beb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098702006aeefdadc98d48c8ab9d32fc36c18d606e3fea854186f94d988e9beb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098702006aeefdadc98d48c8ab9d32fc36c18d606e3fea854186f94d988e9beb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098702006aeefdadc98d48c8ab9d32fc36c18d606e3fea854186f94d988e9beb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:01:12 compute-0 podman[390265]: 2025-11-25 09:01:12.617183867 +0000 UTC m=+0.021219508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.718 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[505a24fd-f858-4fa0-9aca-fbceb7b5e3cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:12 compute-0 NetworkManager[48915]: <info>  [1764061272.7215] device (tap01d5ee0a-50): carrier: link connected
Nov 25 09:01:12 compute-0 podman[390265]: 2025-11-25 09:01:12.728671222 +0000 UTC m=+0.132706853 container init 233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_snyder, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:01:12 compute-0 podman[390265]: 2025-11-25 09:01:12.740924726 +0000 UTC m=+0.144960357 container start 233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:01:12 compute-0 podman[390265]: 2025-11-25 09:01:12.744476972 +0000 UTC m=+0.148512643 container attach 233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_snyder, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.746 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5eeaaae6-b505-49a9-a2f4-6c0c674e8050]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d5ee0a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:39:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662730, 'reachable_time': 37346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390312, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.762 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7210de0a-6f5a-435a-a9b3-3d17b5e3b77d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:394a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662730, 'tstamp': 662730}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390314, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.778 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[73d2a943-0ba9-4ecf-9424-8fec1b6e0aa1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d5ee0a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:39:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662730, 'reachable_time': 37346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 390315, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.812 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[56770ab1-804b-4450-82e2-091a4d586141]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.875 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[19baacbd-3571-45cf-9839-ad79c06c846d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.878 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d5ee0a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.879 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.880 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d5ee0a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:12 compute-0 nova_compute[253538]: 2025-11-25 09:01:12.882 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:12 compute-0 NetworkManager[48915]: <info>  [1764061272.8830] manager: (tap01d5ee0a-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/553)
Nov 25 09:01:12 compute-0 kernel: tap01d5ee0a-50: entered promiscuous mode
Nov 25 09:01:12 compute-0 nova_compute[253538]: 2025-11-25 09:01:12.885 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.886 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d5ee0a-50, col_values=(('external_ids', {'iface-id': 'e613213f-7deb-43ce-acbb-25b798b2b340'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:12 compute-0 nova_compute[253538]: 2025-11-25 09:01:12.887 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:12 compute-0 ovn_controller[152859]: 2025-11-25T09:01:12Z|01340|binding|INFO|Releasing lport e613213f-7deb-43ce-acbb-25b798b2b340 from this chassis (sb_readonly=0)
Nov 25 09:01:12 compute-0 nova_compute[253538]: 2025-11-25 09:01:12.949 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.950 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01d5ee0a-5a87-445b-8539-b33b1f9d0842.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01d5ee0a-5a87-445b-8539-b33b1f9d0842.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.951 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e700d4eb-36cc-4ec5-8e72-d5be9ced80ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.952 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-01d5ee0a-5a87-445b-8539-b33b1f9d0842
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/01d5ee0a-5a87-445b-8539-b33b1f9d0842.pid.haproxy
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 01d5ee0a-5a87-445b-8539-b33b1f9d0842
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:01:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:12.953 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'env', 'PROCESS_TAG=haproxy-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01d5ee0a-5a87-445b-8539-b33b1f9d0842.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:01:13 compute-0 ceph-mon[75015]: pgmap v2412: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 696 KiB/s wr, 25 op/s
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.275 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061273.274793, 830678ef-9f48-4175-aa6d-666c24a11689 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.277 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] VM Started (Lifecycle Event)
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.300 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.305 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061273.2750247, 830678ef-9f48-4175-aa6d-666c24a11689 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.306 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] VM Paused (Lifecycle Event)
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.321 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.325 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.338 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:01:13 compute-0 podman[390387]: 2025-11-25 09:01:13.358382632 +0000 UTC m=+0.048837790 container create f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 09:01:13 compute-0 systemd[1]: Started libpod-conmon-f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a.scope.
Nov 25 09:01:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:01:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a4031ab9bbc2e36451a0ff49fcc18576186761ed3f26382ce7cd4974678668/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:01:13 compute-0 podman[390387]: 2025-11-25 09:01:13.332028675 +0000 UTC m=+0.022483853 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:01:13 compute-0 podman[390387]: 2025-11-25 09:01:13.433741413 +0000 UTC m=+0.124196611 container init f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:01:13 compute-0 podman[390387]: 2025-11-25 09:01:13.446027477 +0000 UTC m=+0.136482665 container start f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 09:01:13 compute-0 neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842[390402]: [NOTICE]   (390407) : New worker (390411) forked
Nov 25 09:01:13 compute-0 neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842[390402]: [NOTICE]   (390407) : Loading success.
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.588 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.706 253542 DEBUG nova.compute.manager [req-8796410c-a88a-4ab0-9f03-7c9770ddea1f req-6af55f84-7c82-4a28-a908-52ed44157482 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.707 253542 DEBUG oslo_concurrency.lockutils [req-8796410c-a88a-4ab0-9f03-7c9770ddea1f req-6af55f84-7c82-4a28-a908-52ed44157482 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.708 253542 DEBUG oslo_concurrency.lockutils [req-8796410c-a88a-4ab0-9f03-7c9770ddea1f req-6af55f84-7c82-4a28-a908-52ed44157482 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.709 253542 DEBUG oslo_concurrency.lockutils [req-8796410c-a88a-4ab0-9f03-7c9770ddea1f req-6af55f84-7c82-4a28-a908-52ed44157482 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.709 253542 DEBUG nova.compute.manager [req-8796410c-a88a-4ab0-9f03-7c9770ddea1f req-6af55f84-7c82-4a28-a908-52ed44157482 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Processing event network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.710 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.716 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061273.7159095, 830678ef-9f48-4175-aa6d-666c24a11689 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.716 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] VM Resumed (Lifecycle Event)
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.720 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.726 253542 INFO nova.virt.libvirt.driver [-] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Instance spawned successfully.
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.727 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.736 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.748 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.755 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.755 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.756 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.757 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.758 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.759 253542 DEBUG nova.virt.libvirt.driver [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.767 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:01:13 compute-0 distracted_snyder[390299]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:01:13 compute-0 distracted_snyder[390299]: --> relative data size: 1.0
Nov 25 09:01:13 compute-0 distracted_snyder[390299]: --> All data devices are unavailable
Nov 25 09:01:13 compute-0 systemd[1]: libpod-233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544.scope: Deactivated successfully.
Nov 25 09:01:13 compute-0 systemd[1]: libpod-233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544.scope: Consumed 1.121s CPU time.
Nov 25 09:01:13 compute-0 podman[390265]: 2025-11-25 09:01:13.942543181 +0000 UTC m=+1.346578802 container died 233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 09:01:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-098702006aeefdadc98d48c8ab9d32fc36c18d606e3fea854186f94d988e9beb-merged.mount: Deactivated successfully.
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.989 253542 INFO nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Took 15.38 seconds to spawn the instance on the hypervisor.
Nov 25 09:01:13 compute-0 nova_compute[253538]: 2025-11-25 09:01:13.990 253542 DEBUG nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:01:14 compute-0 podman[390265]: 2025-11-25 09:01:14.011974261 +0000 UTC m=+1.416009892 container remove 233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_snyder, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:01:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2413: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 66 KiB/s wr, 17 op/s
Nov 25 09:01:14 compute-0 systemd[1]: libpod-conmon-233cdcc8be2a23802aab4f899b1f5f6fc20cea004cf66abbe36a8da7c0632544.scope: Deactivated successfully.
Nov 25 09:01:14 compute-0 sudo[390103]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:14 compute-0 sudo[390452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:01:14 compute-0 sudo[390452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:14 compute-0 sudo[390452]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:14 compute-0 nova_compute[253538]: 2025-11-25 09:01:14.203 253542 INFO nova.compute.manager [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Took 16.61 seconds to build instance.
Nov 25 09:01:14 compute-0 sudo[390477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:01:14 compute-0 sudo[390477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:14 compute-0 sudo[390477]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:14 compute-0 sudo[390502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:01:14 compute-0 sudo[390502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:14 compute-0 nova_compute[253538]: 2025-11-25 09:01:14.285 253542 DEBUG oslo_concurrency.lockutils [None req-6e36b076-4c39-4eef-9cf4-c1944c47f212 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:14 compute-0 sudo[390502]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:14 compute-0 sudo[390527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:01:14 compute-0 sudo[390527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:14.558 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:01:14 compute-0 nova_compute[253538]: 2025-11-25 09:01:14.560 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:14.561 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:01:14 compute-0 podman[390589]: 2025-11-25 09:01:14.916604444 +0000 UTC m=+0.063410536 container create f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:01:14 compute-0 systemd[1]: Started libpod-conmon-f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3.scope.
Nov 25 09:01:14 compute-0 podman[390589]: 2025-11-25 09:01:14.890578356 +0000 UTC m=+0.037384488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:01:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:01:15 compute-0 podman[390589]: 2025-11-25 09:01:15.019801733 +0000 UTC m=+0.166607845 container init f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poitras, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 09:01:15 compute-0 podman[390589]: 2025-11-25 09:01:15.030922266 +0000 UTC m=+0.177728358 container start f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poitras, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:01:15 compute-0 podman[390589]: 2025-11-25 09:01:15.03403525 +0000 UTC m=+0.180841342 container attach f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poitras, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:01:15 compute-0 festive_poitras[390605]: 167 167
Nov 25 09:01:15 compute-0 systemd[1]: libpod-f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3.scope: Deactivated successfully.
Nov 25 09:01:15 compute-0 podman[390589]: 2025-11-25 09:01:15.038067601 +0000 UTC m=+0.184873703 container died f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 09:01:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea862fb840c21e1ac31bfc3ac7b168aba1e543b0b68351e4da8ce0b85a33d6b5-merged.mount: Deactivated successfully.
Nov 25 09:01:15 compute-0 podman[390589]: 2025-11-25 09:01:15.07997052 +0000 UTC m=+0.226776622 container remove f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poitras, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 25 09:01:15 compute-0 systemd[1]: libpod-conmon-f267f6c2048ce7468b3e70114997aca675f81b25584ae5e9bd3c9dc5bcc147b3.scope: Deactivated successfully.
Nov 25 09:01:15 compute-0 ceph-mon[75015]: pgmap v2413: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 66 KiB/s wr, 17 op/s
Nov 25 09:01:15 compute-0 podman[390627]: 2025-11-25 09:01:15.28786696 +0000 UTC m=+0.056669914 container create 3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 09:01:15 compute-0 podman[390627]: 2025-11-25 09:01:15.255841108 +0000 UTC m=+0.024644122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:01:15 compute-0 systemd[1]: Started libpod-conmon-3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db.scope.
Nov 25 09:01:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eafeb57e2011e64fb01aa988d3bf8e0be4173607df466a1cdf34fd29c0aa8e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eafeb57e2011e64fb01aa988d3bf8e0be4173607df466a1cdf34fd29c0aa8e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eafeb57e2011e64fb01aa988d3bf8e0be4173607df466a1cdf34fd29c0aa8e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eafeb57e2011e64fb01aa988d3bf8e0be4173607df466a1cdf34fd29c0aa8e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:01:15 compute-0 podman[390627]: 2025-11-25 09:01:15.423853531 +0000 UTC m=+0.192656505 container init 3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:01:15 compute-0 podman[390627]: 2025-11-25 09:01:15.437209934 +0000 UTC m=+0.206012878 container start 3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 09:01:15 compute-0 podman[390627]: 2025-11-25 09:01:15.440910555 +0000 UTC m=+0.209713529 container attach 3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 09:01:15 compute-0 nova_compute[253538]: 2025-11-25 09:01:15.811 253542 DEBUG nova.compute.manager [req-00a97137-fc9a-488d-afcc-eb28801f5e9d req-36d4656e-a82c-4399-851e-50a1f894cfd3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:15 compute-0 nova_compute[253538]: 2025-11-25 09:01:15.815 253542 DEBUG oslo_concurrency.lockutils [req-00a97137-fc9a-488d-afcc-eb28801f5e9d req-36d4656e-a82c-4399-851e-50a1f894cfd3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:15 compute-0 nova_compute[253538]: 2025-11-25 09:01:15.816 253542 DEBUG oslo_concurrency.lockutils [req-00a97137-fc9a-488d-afcc-eb28801f5e9d req-36d4656e-a82c-4399-851e-50a1f894cfd3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:15 compute-0 nova_compute[253538]: 2025-11-25 09:01:15.816 253542 DEBUG oslo_concurrency.lockutils [req-00a97137-fc9a-488d-afcc-eb28801f5e9d req-36d4656e-a82c-4399-851e-50a1f894cfd3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:15 compute-0 nova_compute[253538]: 2025-11-25 09:01:15.817 253542 DEBUG nova.compute.manager [req-00a97137-fc9a-488d-afcc-eb28801f5e9d req-36d4656e-a82c-4399-851e-50a1f894cfd3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] No waiting events found dispatching network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:01:15 compute-0 nova_compute[253538]: 2025-11-25 09:01:15.818 253542 WARNING nova.compute.manager [req-00a97137-fc9a-488d-afcc-eb28801f5e9d req-36d4656e-a82c-4399-851e-50a1f894cfd3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received unexpected event network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec for instance with vm_state active and task_state None.
Nov 25 09:01:15 compute-0 nova_compute[253538]: 2025-11-25 09:01:15.993 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2414: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 141 KiB/s rd, 13 KiB/s wr, 13 op/s
Nov 25 09:01:16 compute-0 exciting_bassi[390643]: {
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:     "0": [
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:         {
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "devices": [
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "/dev/loop3"
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             ],
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "lv_name": "ceph_lv0",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "lv_size": "21470642176",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "name": "ceph_lv0",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "tags": {
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.cluster_name": "ceph",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.crush_device_class": "",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.encrypted": "0",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.osd_id": "0",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.type": "block",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.vdo": "0"
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             },
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "type": "block",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "vg_name": "ceph_vg0"
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:         }
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:     ],
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:     "1": [
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:         {
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "devices": [
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "/dev/loop4"
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             ],
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "lv_name": "ceph_lv1",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "lv_size": "21470642176",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "name": "ceph_lv1",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "tags": {
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.cluster_name": "ceph",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.crush_device_class": "",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.encrypted": "0",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.osd_id": "1",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.type": "block",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.vdo": "0"
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             },
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "type": "block",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "vg_name": "ceph_vg1"
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:         }
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:     ],
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:     "2": [
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:         {
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "devices": [
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "/dev/loop5"
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             ],
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "lv_name": "ceph_lv2",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "lv_size": "21470642176",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "name": "ceph_lv2",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "tags": {
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.cluster_name": "ceph",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.crush_device_class": "",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.encrypted": "0",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.osd_id": "2",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.type": "block",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:                 "ceph.vdo": "0"
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             },
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "type": "block",
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:             "vg_name": "ceph_vg2"
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:         }
Nov 25 09:01:16 compute-0 exciting_bassi[390643]:     ]
Nov 25 09:01:16 compute-0 exciting_bassi[390643]: }
Nov 25 09:01:16 compute-0 systemd[1]: libpod-3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db.scope: Deactivated successfully.
Nov 25 09:01:16 compute-0 podman[390652]: 2025-11-25 09:01:16.237616481 +0000 UTC m=+0.022952056 container died 3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:01:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9eafeb57e2011e64fb01aa988d3bf8e0be4173607df466a1cdf34fd29c0aa8e9-merged.mount: Deactivated successfully.
Nov 25 09:01:16 compute-0 podman[390652]: 2025-11-25 09:01:16.284199528 +0000 UTC m=+0.069535043 container remove 3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 09:01:16 compute-0 systemd[1]: libpod-conmon-3b6b32360f7439bc47141fb2911c72612f8c59aeec8cf4c66b06fb22772192db.scope: Deactivated successfully.
Nov 25 09:01:16 compute-0 sudo[390527]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:16 compute-0 sudo[390667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:01:16 compute-0 sudo[390667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:16 compute-0 sudo[390667]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:16 compute-0 sudo[390692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:01:16 compute-0 sudo[390692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:16 compute-0 sudo[390692]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:16 compute-0 sudo[390717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:01:16 compute-0 sudo[390717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:16 compute-0 sudo[390717]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:16 compute-0 sudo[390742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:01:16 compute-0 sudo[390742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:16 compute-0 podman[390809]: 2025-11-25 09:01:16.875785761 +0000 UTC m=+0.039061744 container create fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 09:01:16 compute-0 systemd[1]: Started libpod-conmon-fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f.scope.
Nov 25 09:01:16 compute-0 podman[390809]: 2025-11-25 09:01:16.85558003 +0000 UTC m=+0.018855993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:01:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:01:16 compute-0 podman[390809]: 2025-11-25 09:01:16.974151018 +0000 UTC m=+0.137427081 container init fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:01:16 compute-0 podman[390809]: 2025-11-25 09:01:16.981531518 +0000 UTC m=+0.144807471 container start fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 09:01:16 compute-0 podman[390809]: 2025-11-25 09:01:16.985104446 +0000 UTC m=+0.148380429 container attach fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:01:16 compute-0 pensive_archimedes[390825]: 167 167
Nov 25 09:01:16 compute-0 systemd[1]: libpod-fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f.scope: Deactivated successfully.
Nov 25 09:01:16 compute-0 podman[390809]: 2025-11-25 09:01:16.987847171 +0000 UTC m=+0.151123114 container died fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 09:01:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a97dce1d426b3e3a51143c3f7d37bed786b5d17933ca44a673aeeaed327b296-merged.mount: Deactivated successfully.
Nov 25 09:01:17 compute-0 podman[390809]: 2025-11-25 09:01:17.031970481 +0000 UTC m=+0.195246464 container remove fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 09:01:17 compute-0 systemd[1]: libpod-conmon-fca33543ce50d4c9b83328a26542ef5ee4bed1ff1a851a42d87cf7f35b86652f.scope: Deactivated successfully.
Nov 25 09:01:17 compute-0 ceph-mon[75015]: pgmap v2414: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 141 KiB/s rd, 13 KiB/s wr, 13 op/s
Nov 25 09:01:17 compute-0 podman[390850]: 2025-11-25 09:01:17.262338072 +0000 UTC m=+0.059627094 container create f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:01:17 compute-0 systemd[1]: Started libpod-conmon-f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57.scope.
Nov 25 09:01:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:01:17 compute-0 podman[390850]: 2025-11-25 09:01:17.245035551 +0000 UTC m=+0.042324673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b95c61a306c593a3f39733b8ff68fc6632cb98819fe20882093f86dbd8ac508/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b95c61a306c593a3f39733b8ff68fc6632cb98819fe20882093f86dbd8ac508/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b95c61a306c593a3f39733b8ff68fc6632cb98819fe20882093f86dbd8ac508/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b95c61a306c593a3f39733b8ff68fc6632cb98819fe20882093f86dbd8ac508/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:01:17 compute-0 podman[390850]: 2025-11-25 09:01:17.358849699 +0000 UTC m=+0.156138811 container init f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:01:17 compute-0 podman[390850]: 2025-11-25 09:01:17.370509496 +0000 UTC m=+0.167798518 container start f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:01:17 compute-0 podman[390850]: 2025-11-25 09:01:17.374340911 +0000 UTC m=+0.171629993 container attach f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 09:01:17 compute-0 nova_compute[253538]: 2025-11-25 09:01:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:01:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:01:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2415: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 841 KiB/s rd, 12 KiB/s wr, 38 op/s
Nov 25 09:01:18 compute-0 jovial_joliot[390866]: {
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:         "osd_id": 1,
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:         "type": "bluestore"
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:     },
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:         "osd_id": 2,
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:         "type": "bluestore"
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:     },
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:         "osd_id": 0,
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:         "type": "bluestore"
Nov 25 09:01:18 compute-0 jovial_joliot[390866]:     }
Nov 25 09:01:18 compute-0 jovial_joliot[390866]: }
Nov 25 09:01:18 compute-0 systemd[1]: libpod-f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57.scope: Deactivated successfully.
Nov 25 09:01:18 compute-0 systemd[1]: libpod-f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57.scope: Consumed 1.066s CPU time.
Nov 25 09:01:18 compute-0 podman[390850]: 2025-11-25 09:01:18.431037792 +0000 UTC m=+1.228326814 container died f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:01:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b95c61a306c593a3f39733b8ff68fc6632cb98819fe20882093f86dbd8ac508-merged.mount: Deactivated successfully.
Nov 25 09:01:18 compute-0 podman[390850]: 2025-11-25 09:01:18.512230892 +0000 UTC m=+1.309519914 container remove f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_joliot, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 09:01:18 compute-0 systemd[1]: libpod-conmon-f6cd09bb2c55482919541c6e887de8529d47138b79dc073283e91cc5cced7a57.scope: Deactivated successfully.
Nov 25 09:01:18 compute-0 sudo[390742]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:01:18 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:01:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:01:18 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:01:18 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d27debef-311f-4557-b4e3-14ee9032084d does not exist
Nov 25 09:01:18 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 192de5e3-8ee2-40cf-895f-a666b0ec470e does not exist
Nov 25 09:01:18 compute-0 nova_compute[253538]: 2025-11-25 09:01:18.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:18 compute-0 sudo[390910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:01:18 compute-0 sudo[390910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:18 compute-0 sudo[390910]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:18 compute-0 sudo[390935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:01:18 compute-0 sudo[390935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:01:18 compute-0 sudo[390935]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:19 compute-0 ceph-mon[75015]: pgmap v2415: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 841 KiB/s rd, 12 KiB/s wr, 38 op/s
Nov 25 09:01:19 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:01:19 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:01:19 compute-0 sshd-session[389812]: Invalid user admin from 45.78.217.205 port 44770
Nov 25 09:01:19 compute-0 sshd-session[389812]: Received disconnect from 45.78.217.205 port 44770:11: Bye Bye [preauth]
Nov 25 09:01:19 compute-0 sshd-session[389812]: Disconnected from invalid user admin 45.78.217.205 port 44770 [preauth]
Nov 25 09:01:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2416: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:01:20 compute-0 nova_compute[253538]: 2025-11-25 09:01:20.450 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:20 compute-0 nova_compute[253538]: 2025-11-25 09:01:20.451 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:20 compute-0 nova_compute[253538]: 2025-11-25 09:01:20.471 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:01:20 compute-0 nova_compute[253538]: 2025-11-25 09:01:20.551 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:20 compute-0 nova_compute[253538]: 2025-11-25 09:01:20.552 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:20 compute-0 nova_compute[253538]: 2025-11-25 09:01:20.562 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:01:20 compute-0 nova_compute[253538]: 2025-11-25 09:01:20.563 253542 INFO nova.compute.claims [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:01:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:20.564 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:20 compute-0 nova_compute[253538]: 2025-11-25 09:01:20.680 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:20 compute-0 podman[390962]: 2025-11-25 09:01:20.811258288 +0000 UTC m=+0.054550265 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 09:01:20 compute-0 podman[390961]: 2025-11-25 09:01:20.818455994 +0000 UTC m=+0.060754095 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:01:20 compute-0 nova_compute[253538]: 2025-11-25 09:01:20.996 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:01:21 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3060430614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:01:21 compute-0 ceph-mon[75015]: pgmap v2416: 321 pgs: 321 active+clean; 213 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:01:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3060430614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.323 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.329 253542 DEBUG nova.compute.provider_tree [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.345 253542 DEBUG nova.scheduler.client.report [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.379 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.379 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.433 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.433 253542 DEBUG nova.network.neutron [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.450 253542 INFO nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.466 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.545 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.546 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.547 253542 INFO nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Creating image(s)
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.569 253542 DEBUG nova.storage.rbd_utils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.591 253542 DEBUG nova.storage.rbd_utils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.611 253542 DEBUG nova.storage.rbd_utils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.615 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.664 253542 DEBUG nova.policy [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.687 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.688 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.688 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.688 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.709 253542 DEBUG nova.storage.rbd_utils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.714 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.753 253542 DEBUG nova.compute.manager [req-81e3de33-3dca-4b27-a58e-35928b6ee3fe req-a501c094-110c-459e-90b3-b0364e53c2ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-changed-2719889c-c962-425f-9df3-6f3d741ca0ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.753 253542 DEBUG nova.compute.manager [req-81e3de33-3dca-4b27-a58e-35928b6ee3fe req-a501c094-110c-459e-90b3-b0364e53c2ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Refreshing instance network info cache due to event network-changed-2719889c-c962-425f-9df3-6f3d741ca0ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.753 253542 DEBUG oslo_concurrency.lockutils [req-81e3de33-3dca-4b27-a58e-35928b6ee3fe req-a501c094-110c-459e-90b3-b0364e53c2ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.754 253542 DEBUG oslo_concurrency.lockutils [req-81e3de33-3dca-4b27-a58e-35928b6ee3fe req-a501c094-110c-459e-90b3-b0364e53c2ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.754 253542 DEBUG nova.network.neutron [req-81e3de33-3dca-4b27-a58e-35928b6ee3fe req-a501c094-110c-459e-90b3-b0364e53c2ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Refreshing network info cache for port 2719889c-c962-425f-9df3-6f3d741ca0ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:01:21 compute-0 nova_compute[253538]: 2025-11-25 09:01:21.958 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2417: 321 pgs: 321 active+clean; 220 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 116 KiB/s wr, 74 op/s
Nov 25 09:01:22 compute-0 nova_compute[253538]: 2025-11-25 09:01:22.030 253542 DEBUG nova.storage.rbd_utils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:01:22 compute-0 nova_compute[253538]: 2025-11-25 09:01:22.118 253542 DEBUG nova.objects.instance [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:01:22 compute-0 nova_compute[253538]: 2025-11-25 09:01:22.137 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:01:22 compute-0 nova_compute[253538]: 2025-11-25 09:01:22.138 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Ensure instance console log exists: /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:01:22 compute-0 nova_compute[253538]: 2025-11-25 09:01:22.138 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:22 compute-0 nova_compute[253538]: 2025-11-25 09:01:22.138 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:22 compute-0 nova_compute[253538]: 2025-11-25 09:01:22.139 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:22 compute-0 nova_compute[253538]: 2025-11-25 09:01:22.623 253542 DEBUG nova.network.neutron [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Successfully created port: cf17086b-8fa3-4041-8a87-a1f9ede3f871 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:01:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:01:23 compute-0 ceph-mon[75015]: pgmap v2417: 321 pgs: 321 active+clean; 220 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 116 KiB/s wr, 74 op/s
Nov 25 09:01:23 compute-0 nova_compute[253538]: 2025-11-25 09:01:23.439 253542 DEBUG nova.network.neutron [req-81e3de33-3dca-4b27-a58e-35928b6ee3fe req-a501c094-110c-459e-90b3-b0364e53c2ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Updated VIF entry in instance network info cache for port 2719889c-c962-425f-9df3-6f3d741ca0ec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:01:23 compute-0 nova_compute[253538]: 2025-11-25 09:01:23.439 253542 DEBUG nova.network.neutron [req-81e3de33-3dca-4b27-a58e-35928b6ee3fe req-a501c094-110c-459e-90b3-b0364e53c2ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Updating instance_info_cache with network_info: [{"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:01:23 compute-0 nova_compute[253538]: 2025-11-25 09:01:23.453 253542 DEBUG oslo_concurrency.lockutils [req-81e3de33-3dca-4b27-a58e-35928b6ee3fe req-a501c094-110c-459e-90b3-b0364e53c2ad b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:01:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:01:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:01:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:01:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:01:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:01:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:01:23 compute-0 nova_compute[253538]: 2025-11-25 09:01:23.592 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:24 compute-0 nova_compute[253538]: 2025-11-25 09:01:24.014 253542 DEBUG nova.network.neutron [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Successfully updated port: cf17086b-8fa3-4041-8a87-a1f9ede3f871 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:01:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2418: 321 pgs: 321 active+clean; 236 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 783 KiB/s wr, 75 op/s
Nov 25 09:01:24 compute-0 nova_compute[253538]: 2025-11-25 09:01:24.032 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:01:24 compute-0 nova_compute[253538]: 2025-11-25 09:01:24.032 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:01:24 compute-0 nova_compute[253538]: 2025-11-25 09:01:24.033 253542 DEBUG nova.network.neutron [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:01:24 compute-0 nova_compute[253538]: 2025-11-25 09:01:24.113 253542 DEBUG nova.compute.manager [req-efc30647-e42e-43e1-9e2b-4915b470d6d3 req-365ff198-f061-4875-bee2-fa483775127d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-changed-cf17086b-8fa3-4041-8a87-a1f9ede3f871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:24 compute-0 nova_compute[253538]: 2025-11-25 09:01:24.114 253542 DEBUG nova.compute.manager [req-efc30647-e42e-43e1-9e2b-4915b470d6d3 req-365ff198-f061-4875-bee2-fa483775127d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Refreshing instance network info cache due to event network-changed-cf17086b-8fa3-4041-8a87-a1f9ede3f871. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:01:24 compute-0 nova_compute[253538]: 2025-11-25 09:01:24.114 253542 DEBUG oslo_concurrency.lockutils [req-efc30647-e42e-43e1-9e2b-4915b470d6d3 req-365ff198-f061-4875-bee2-fa483775127d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:01:24 compute-0 nova_compute[253538]: 2025-11-25 09:01:24.190 253542 DEBUG nova.network.neutron [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:01:25 compute-0 ceph-mon[75015]: pgmap v2418: 321 pgs: 321 active+clean; 236 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 783 KiB/s wr, 75 op/s
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.720 253542 DEBUG nova.network.neutron [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Updating instance_info_cache with network_info: [{"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.738 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.739 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Instance network_info: |[{"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.740 253542 DEBUG oslo_concurrency.lockutils [req-efc30647-e42e-43e1-9e2b-4915b470d6d3 req-365ff198-f061-4875-bee2-fa483775127d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.740 253542 DEBUG nova.network.neutron [req-efc30647-e42e-43e1-9e2b-4915b470d6d3 req-365ff198-f061-4875-bee2-fa483775127d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Refreshing network info cache for port cf17086b-8fa3-4041-8a87-a1f9ede3f871 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.743 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Start _get_guest_xml network_info=[{"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.747 253542 WARNING nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.755 253542 DEBUG nova.virt.libvirt.host [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.756 253542 DEBUG nova.virt.libvirt.host [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.759 253542 DEBUG nova.virt.libvirt.host [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.759 253542 DEBUG nova.virt.libvirt.host [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.760 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.760 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.760 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.760 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.761 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.761 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.761 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.761 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.761 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.762 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.762 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.762 253542 DEBUG nova.virt.hardware [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.764 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:25 compute-0 nova_compute[253538]: 2025-11-25 09:01:25.998 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2419: 321 pgs: 321 active+clean; 244 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 96 op/s
Nov 25 09:01:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:01:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4129416049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.244 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.272 253542 DEBUG nova.storage.rbd_utils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.278 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4129416049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:01:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:01:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2885720655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.738 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.740 253542 DEBUG nova.virt.libvirt.vif [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:01:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-432989417',display_name='tempest-TestGettingAddress-server-432989417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-432989417',id=131,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE43RDkFrIAVI0BW11KJE2OYK1HJrAvC1/LPNZ8TDm6qjtPF0a19WFe4a1radMZBBdiDLKOYgFd/MDesehfKkwox7hQY3R8UYDimfx0Df8eUJAXIFHj7mDCbbhz+nTG4ag==',key_name='tempest-TestGettingAddress-356224709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-z9h0547w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:01:21Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=3dd3cc22-d02b-4948-b7e0-da630c6ad4b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.740 253542 DEBUG nova.network.os_vif_util [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.742 253542 DEBUG nova.network.os_vif_util [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:64:2f,bridge_name='br-int',has_traffic_filtering=True,id=cf17086b-8fa3-4041-8a87-a1f9ede3f871,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf17086b-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.744 253542 DEBUG nova.objects.instance [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.760 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:01:26 compute-0 nova_compute[253538]:   <uuid>3dd3cc22-d02b-4948-b7e0-da630c6ad4b0</uuid>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   <name>instance-00000083</name>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <nova:name>tempest-TestGettingAddress-server-432989417</nova:name>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:01:25</nova:creationTime>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:01:26 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:01:26 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:01:26 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:01:26 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:01:26 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:01:26 compute-0 nova_compute[253538]:         <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 09:01:26 compute-0 nova_compute[253538]:         <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:01:26 compute-0 nova_compute[253538]:         <nova:port uuid="cf17086b-8fa3-4041-8a87-a1f9ede3f871">
Nov 25 09:01:26 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe2b:642f" ipVersion="6"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <system>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <entry name="serial">3dd3cc22-d02b-4948-b7e0-da630c6ad4b0</entry>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <entry name="uuid">3dd3cc22-d02b-4948-b7e0-da630c6ad4b0</entry>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     </system>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   <os>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   </os>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   <features>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   </features>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk">
Nov 25 09:01:26 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       </source>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:01:26 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk.config">
Nov 25 09:01:26 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       </source>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:01:26 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:2b:64:2f"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <target dev="tapcf17086b-8f"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0/console.log" append="off"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <video>
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     </video>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:01:26 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:01:26 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:01:26 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:01:26 compute-0 nova_compute[253538]: </domain>
Nov 25 09:01:26 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.774 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Preparing to wait for external event network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.775 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.775 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.775 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.776 253542 DEBUG nova.virt.libvirt.vif [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:01:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-432989417',display_name='tempest-TestGettingAddress-server-432989417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-432989417',id=131,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE43RDkFrIAVI0BW11KJE2OYK1HJrAvC1/LPNZ8TDm6qjtPF0a19WFe4a1radMZBBdiDLKOYgFd/MDesehfKkwox7hQY3R8UYDimfx0Df8eUJAXIFHj7mDCbbhz+nTG4ag==',key_name='tempest-TestGettingAddress-356224709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-z9h0547w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:01:21Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=3dd3cc22-d02b-4948-b7e0-da630c6ad4b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.776 253542 DEBUG nova.network.os_vif_util [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.777 253542 DEBUG nova.network.os_vif_util [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:64:2f,bridge_name='br-int',has_traffic_filtering=True,id=cf17086b-8fa3-4041-8a87-a1f9ede3f871,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf17086b-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.777 253542 DEBUG os_vif [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:64:2f,bridge_name='br-int',has_traffic_filtering=True,id=cf17086b-8fa3-4041-8a87-a1f9ede3f871,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf17086b-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.777 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.778 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.778 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.784 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.784 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf17086b-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.785 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcf17086b-8f, col_values=(('external_ids', {'iface-id': 'cf17086b-8fa3-4041-8a87-a1f9ede3f871', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:64:2f', 'vm-uuid': '3dd3cc22-d02b-4948-b7e0-da630c6ad4b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.786 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:26 compute-0 NetworkManager[48915]: <info>  [1764061286.7872] manager: (tapcf17086b-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/554)
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.788 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.792 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.793 253542 INFO os_vif [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:64:2f,bridge_name='br-int',has_traffic_filtering=True,id=cf17086b-8fa3-4041-8a87-a1f9ede3f871,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf17086b-8f')
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.835 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.836 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.836 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:2b:64:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.837 253542 INFO nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Using config drive
Nov 25 09:01:26 compute-0 nova_compute[253538]: 2025-11-25 09:01:26.859 253542 DEBUG nova.storage.rbd_utils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:27 compute-0 nova_compute[253538]: 2025-11-25 09:01:27.271 253542 INFO nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Creating config drive at /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0/disk.config
Nov 25 09:01:27 compute-0 nova_compute[253538]: 2025-11-25 09:01:27.277 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpebfw6he5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:27 compute-0 ceph-mon[75015]: pgmap v2419: 321 pgs: 321 active+clean; 244 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 96 op/s
Nov 25 09:01:27 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2885720655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:01:27 compute-0 nova_compute[253538]: 2025-11-25 09:01:27.417 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpebfw6he5" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:27 compute-0 nova_compute[253538]: 2025-11-25 09:01:27.441 253542 DEBUG nova.storage.rbd_utils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:27 compute-0 nova_compute[253538]: 2025-11-25 09:01:27.445 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0/disk.config 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:27 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 25 09:01:27 compute-0 nova_compute[253538]: 2025-11-25 09:01:27.608 253542 DEBUG oslo_concurrency.processutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0/disk.config 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:27 compute-0 nova_compute[253538]: 2025-11-25 09:01:27.609 253542 INFO nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Deleting local config drive /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0/disk.config because it was imported into RBD.
Nov 25 09:01:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:01:27 compute-0 kernel: tapcf17086b-8f: entered promiscuous mode
Nov 25 09:01:27 compute-0 NetworkManager[48915]: <info>  [1764061287.6602] manager: (tapcf17086b-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/555)
Nov 25 09:01:27 compute-0 nova_compute[253538]: 2025-11-25 09:01:27.661 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:27 compute-0 ovn_controller[152859]: 2025-11-25T09:01:27Z|01341|binding|INFO|Claiming lport cf17086b-8fa3-4041-8a87-a1f9ede3f871 for this chassis.
Nov 25 09:01:27 compute-0 ovn_controller[152859]: 2025-11-25T09:01:27Z|01342|binding|INFO|cf17086b-8fa3-4041-8a87-a1f9ede3f871: Claiming fa:16:3e:2b:64:2f 10.100.0.9 2001:db8::f816:3eff:fe2b:642f
Nov 25 09:01:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.672 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:64:2f 10.100.0.9 2001:db8::f816:3eff:fe2b:642f'], port_security=['fa:16:3e:2b:64:2f 10.100.0.9 2001:db8::f816:3eff:fe2b:642f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8::f816:3eff:fe2b:642f/64', 'neutron:device_id': '3dd3cc22-d02b-4948-b7e0-da630c6ad4b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc2d8b7e-9a70-4d0b-ab2b-2be8746de01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566c4500-0375-4680-b110-24535007c05e, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=cf17086b-8fa3-4041-8a87-a1f9ede3f871) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:01:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.673 162739 INFO neutron.agent.ovn.metadata.agent [-] Port cf17086b-8fa3-4041-8a87-a1f9ede3f871 in datapath 6c2834b5-0444-432c-8da4-c0b4f4aabc4d bound to our chassis
Nov 25 09:01:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.675 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c2834b5-0444-432c-8da4-c0b4f4aabc4d
Nov 25 09:01:27 compute-0 ovn_controller[152859]: 2025-11-25T09:01:27Z|01343|binding|INFO|Setting lport cf17086b-8fa3-4041-8a87-a1f9ede3f871 ovn-installed in OVS
Nov 25 09:01:27 compute-0 ovn_controller[152859]: 2025-11-25T09:01:27Z|01344|binding|INFO|Setting lport cf17086b-8fa3-4041-8a87-a1f9ede3f871 up in Southbound
Nov 25 09:01:27 compute-0 nova_compute[253538]: 2025-11-25 09:01:27.687 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:27 compute-0 nova_compute[253538]: 2025-11-25 09:01:27.690 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.693 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cc30e756-75bb-4949-a06d-a3f6fa9e0044]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:27 compute-0 nova_compute[253538]: 2025-11-25 09:01:27.697 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:27 compute-0 systemd-machined[215790]: New machine qemu-161-instance-00000083.
Nov 25 09:01:27 compute-0 systemd[1]: Started Virtual Machine qemu-161-instance-00000083.
Nov 25 09:01:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.744 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7acc6e47-9166-4145-a484-b3b72466d6a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.748 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a5434a05-71ba-4e2a-8f33-9c9f9a961caf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:27 compute-0 systemd-udevd[391337]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:01:27 compute-0 NetworkManager[48915]: <info>  [1764061287.7732] device (tapcf17086b-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:01:27 compute-0 NetworkManager[48915]: <info>  [1764061287.7742] device (tapcf17086b-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:01:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.792 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5334810a-b40c-4c80-bc51-75cff835b0e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.829 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e768691-362c-432f-9a45-89c8f705d3e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c2834b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:81:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 6, 'rx_bytes': 1586, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 6, 'rx_bytes': 1586, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659443, 'reachable_time': 25259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391353, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.845 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5a06a55c-3d5c-4aa3-95b2-ba0b3b606993]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6c2834b5-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659455, 'tstamp': 659455}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391362, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6c2834b5-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659459, 'tstamp': 659459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391362, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.847 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c2834b5-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:27 compute-0 nova_compute[253538]: 2025-11-25 09:01:27.849 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:27 compute-0 nova_compute[253538]: 2025-11-25 09:01:27.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.851 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c2834b5-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.851 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:01:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.852 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c2834b5-00, col_values=(('external_ids', {'iface-id': '25e4a85d-5a04-4d07-a006-66576a20c294'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:27.852 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:01:27 compute-0 podman[391318]: 2025-11-25 09:01:27.853549351 +0000 UTC m=+0.137299128 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 09:01:27 compute-0 ovn_controller[152859]: 2025-11-25T09:01:27Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:1f:9b 10.100.0.7
Nov 25 09:01:27 compute-0 ovn_controller[152859]: 2025-11-25T09:01:27Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:1f:9b 10.100.0.7
Nov 25 09:01:27 compute-0 nova_compute[253538]: 2025-11-25 09:01:27.953 253542 DEBUG nova.network.neutron [req-efc30647-e42e-43e1-9e2b-4915b470d6d3 req-365ff198-f061-4875-bee2-fa483775127d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Updated VIF entry in instance network info cache for port cf17086b-8fa3-4041-8a87-a1f9ede3f871. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:01:27 compute-0 nova_compute[253538]: 2025-11-25 09:01:27.954 253542 DEBUG nova.network.neutron [req-efc30647-e42e-43e1-9e2b-4915b470d6d3 req-365ff198-f061-4875-bee2-fa483775127d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Updating instance_info_cache with network_info: [{"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:01:27 compute-0 nova_compute[253538]: 2025-11-25 09:01:27.969 253542 DEBUG oslo_concurrency.lockutils [req-efc30647-e42e-43e1-9e2b-4915b470d6d3 req-365ff198-f061-4875-bee2-fa483775127d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:01:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2420: 321 pgs: 321 active+clean; 269 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 106 op/s
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.317 253542 DEBUG nova.compute.manager [req-34977306-7e8c-4b5c-b78f-4b70ba68175a req-94d52f8f-03c3-4ce8-ae8c-bc01e5e0805a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.317 253542 DEBUG oslo_concurrency.lockutils [req-34977306-7e8c-4b5c-b78f-4b70ba68175a req-94d52f8f-03c3-4ce8-ae8c-bc01e5e0805a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.318 253542 DEBUG oslo_concurrency.lockutils [req-34977306-7e8c-4b5c-b78f-4b70ba68175a req-94d52f8f-03c3-4ce8-ae8c-bc01e5e0805a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.319 253542 DEBUG oslo_concurrency.lockutils [req-34977306-7e8c-4b5c-b78f-4b70ba68175a req-94d52f8f-03c3-4ce8-ae8c-bc01e5e0805a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.320 253542 DEBUG nova.compute.manager [req-34977306-7e8c-4b5c-b78f-4b70ba68175a req-94d52f8f-03c3-4ce8-ae8c-bc01e5e0805a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Processing event network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:01:28 compute-0 ceph-mon[75015]: pgmap v2420: 321 pgs: 321 active+clean; 269 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 106 op/s
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.489 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061288.4893496, 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.490 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] VM Started (Lifecycle Event)
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.492 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.494 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.498 253542 INFO nova.virt.libvirt.driver [-] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Instance spawned successfully.
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.498 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.504 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.507 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.515 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.516 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.516 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.517 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.517 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.517 253542 DEBUG nova.virt.libvirt.driver [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.525 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.525 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061288.491612, 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.525 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] VM Paused (Lifecycle Event)
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.550 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.553 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061288.4942284, 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.553 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] VM Resumed (Lifecycle Event)
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.580 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.582 253542 INFO nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Took 7.04 seconds to spawn the instance on the hypervisor.
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.582 253542 DEBUG nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.584 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.593 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.650 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.736 253542 INFO nova.compute.manager [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Took 8.22 seconds to build instance.
Nov 25 09:01:28 compute-0 nova_compute[253538]: 2025-11-25 09:01:28.764 253542 DEBUG oslo_concurrency.lockutils [None req-929f6467-62d9-4b67-964d-212d9d487e9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.313s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:01:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4163160749' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:01:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:01:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4163160749' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:01:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/4163160749' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:01:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/4163160749' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:01:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2421: 321 pgs: 321 active+clean; 293 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Nov 25 09:01:30 compute-0 ceph-mon[75015]: pgmap v2421: 321 pgs: 321 active+clean; 293 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Nov 25 09:01:30 compute-0 nova_compute[253538]: 2025-11-25 09:01:30.449 253542 DEBUG nova.compute.manager [req-e1bb7283-c1b7-4090-9fdd-3a1a3a429218 req-61b7fed6-422c-459d-8b10-b42420f16883 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:30 compute-0 nova_compute[253538]: 2025-11-25 09:01:30.450 253542 DEBUG oslo_concurrency.lockutils [req-e1bb7283-c1b7-4090-9fdd-3a1a3a429218 req-61b7fed6-422c-459d-8b10-b42420f16883 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:30 compute-0 nova_compute[253538]: 2025-11-25 09:01:30.450 253542 DEBUG oslo_concurrency.lockutils [req-e1bb7283-c1b7-4090-9fdd-3a1a3a429218 req-61b7fed6-422c-459d-8b10-b42420f16883 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:30 compute-0 nova_compute[253538]: 2025-11-25 09:01:30.450 253542 DEBUG oslo_concurrency.lockutils [req-e1bb7283-c1b7-4090-9fdd-3a1a3a429218 req-61b7fed6-422c-459d-8b10-b42420f16883 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:30 compute-0 nova_compute[253538]: 2025-11-25 09:01:30.451 253542 DEBUG nova.compute.manager [req-e1bb7283-c1b7-4090-9fdd-3a1a3a429218 req-61b7fed6-422c-459d-8b10-b42420f16883 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] No waiting events found dispatching network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:01:30 compute-0 nova_compute[253538]: 2025-11-25 09:01:30.451 253542 WARNING nova.compute.manager [req-e1bb7283-c1b7-4090-9fdd-3a1a3a429218 req-61b7fed6-422c-459d-8b10-b42420f16883 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received unexpected event network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 for instance with vm_state active and task_state None.
Nov 25 09:01:31 compute-0 nova_compute[253538]: 2025-11-25 09:01:31.787 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2422: 321 pgs: 321 active+clean; 293 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 142 op/s
Nov 25 09:01:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:01:32 compute-0 nova_compute[253538]: 2025-11-25 09:01:32.980 253542 DEBUG nova.compute.manager [req-d3f991b3-dca4-497d-a2cd-07b9901a1584 req-0d59347a-35e2-47a6-9467-4b7cc5de4c14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-changed-cf17086b-8fa3-4041-8a87-a1f9ede3f871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:32 compute-0 nova_compute[253538]: 2025-11-25 09:01:32.981 253542 DEBUG nova.compute.manager [req-d3f991b3-dca4-497d-a2cd-07b9901a1584 req-0d59347a-35e2-47a6-9467-4b7cc5de4c14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Refreshing instance network info cache due to event network-changed-cf17086b-8fa3-4041-8a87-a1f9ede3f871. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:01:32 compute-0 nova_compute[253538]: 2025-11-25 09:01:32.981 253542 DEBUG oslo_concurrency.lockutils [req-d3f991b3-dca4-497d-a2cd-07b9901a1584 req-0d59347a-35e2-47a6-9467-4b7cc5de4c14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:01:32 compute-0 nova_compute[253538]: 2025-11-25 09:01:32.981 253542 DEBUG oslo_concurrency.lockutils [req-d3f991b3-dca4-497d-a2cd-07b9901a1584 req-0d59347a-35e2-47a6-9467-4b7cc5de4c14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:01:32 compute-0 nova_compute[253538]: 2025-11-25 09:01:32.982 253542 DEBUG nova.network.neutron [req-d3f991b3-dca4-497d-a2cd-07b9901a1584 req-0d59347a-35e2-47a6-9467-4b7cc5de4c14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Refreshing network info cache for port cf17086b-8fa3-4041-8a87-a1f9ede3f871 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:01:33 compute-0 ceph-mon[75015]: pgmap v2422: 321 pgs: 321 active+clean; 293 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 142 op/s
Nov 25 09:01:33 compute-0 nova_compute[253538]: 2025-11-25 09:01:33.596 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2423: 321 pgs: 321 active+clean; 293 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 164 op/s
Nov 25 09:01:35 compute-0 ceph-mon[75015]: pgmap v2423: 321 pgs: 321 active+clean; 293 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 164 op/s
Nov 25 09:01:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2424: 321 pgs: 321 active+clean; 293 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 163 op/s
Nov 25 09:01:36 compute-0 nova_compute[253538]: 2025-11-25 09:01:36.185 253542 DEBUG nova.network.neutron [req-d3f991b3-dca4-497d-a2cd-07b9901a1584 req-0d59347a-35e2-47a6-9467-4b7cc5de4c14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Updated VIF entry in instance network info cache for port cf17086b-8fa3-4041-8a87-a1f9ede3f871. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:01:36 compute-0 nova_compute[253538]: 2025-11-25 09:01:36.186 253542 DEBUG nova.network.neutron [req-d3f991b3-dca4-497d-a2cd-07b9901a1584 req-0d59347a-35e2-47a6-9467-4b7cc5de4c14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Updating instance_info_cache with network_info: [{"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:01:36 compute-0 nova_compute[253538]: 2025-11-25 09:01:36.204 253542 DEBUG oslo_concurrency.lockutils [req-d3f991b3-dca4-497d-a2cd-07b9901a1584 req-0d59347a-35e2-47a6-9467-4b7cc5de4c14 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:01:36 compute-0 nova_compute[253538]: 2025-11-25 09:01:36.790 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:37 compute-0 ceph-mon[75015]: pgmap v2424: 321 pgs: 321 active+clean; 293 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 163 op/s
Nov 25 09:01:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:01:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2425: 321 pgs: 321 active+clean; 293 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 139 op/s
Nov 25 09:01:38 compute-0 nova_compute[253538]: 2025-11-25 09:01:38.600 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:39 compute-0 ceph-mon[75015]: pgmap v2425: 321 pgs: 321 active+clean; 293 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 139 op/s
Nov 25 09:01:39 compute-0 nova_compute[253538]: 2025-11-25 09:01:39.570 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:01:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2426: 321 pgs: 321 active+clean; 293 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 119 op/s
Nov 25 09:01:40 compute-0 nova_compute[253538]: 2025-11-25 09:01:40.907 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:40 compute-0 nova_compute[253538]: 2025-11-25 09:01:40.908 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:40 compute-0 nova_compute[253538]: 2025-11-25 09:01:40.920 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:01:40 compute-0 nova_compute[253538]: 2025-11-25 09:01:40.994 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:40 compute-0 nova_compute[253538]: 2025-11-25 09:01:40.995 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.007 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.008 253542 INFO nova.compute.claims [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:01:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:41.084 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:41.084 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:41.085 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.209 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:41 compute-0 ceph-mon[75015]: pgmap v2426: 321 pgs: 321 active+clean; 293 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 119 op/s
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:01:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:01:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2319355486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.692 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.699 253542 DEBUG nova.compute.provider_tree [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.716 253542 DEBUG nova.scheduler.client.report [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.746 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.747 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.786 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.786 253542 DEBUG nova.network.neutron [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.809 253542 INFO nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.830 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.912 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.914 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.914 253542 INFO nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Creating image(s)
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.954 253542 DEBUG nova.storage.rbd_utils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:41 compute-0 nova_compute[253538]: 2025-11-25 09:01:41.982 253542 DEBUG nova.storage.rbd_utils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.008 253542 DEBUG nova.storage.rbd_utils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.012 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2427: 321 pgs: 321 active+clean; 308 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 97 op/s
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.089 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.089 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.090 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.091 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.108 253542 DEBUG nova.storage.rbd_utils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.111 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:42 compute-0 ovn_controller[152859]: 2025-11-25T09:01:42Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2b:64:2f 10.100.0.9
Nov 25 09:01:42 compute-0 ovn_controller[152859]: 2025-11-25T09:01:42Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2b:64:2f 10.100.0.9
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.177 253542 DEBUG nova.policy [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283b89dbe3284e8ea2019b797673108b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfffff2c57a442a59b202d368d49bf00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:01:42 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2319355486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.374 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.454 253542 DEBUG nova.storage.rbd_utils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] resizing rbd image 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.557 253542 DEBUG nova.objects.instance [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'migration_context' on Instance uuid 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.579 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.580 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Ensure instance console log exists: /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.580 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.580 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:42 compute-0 nova_compute[253538]: 2025-11-25 09:01:42.581 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:01:43 compute-0 ceph-mon[75015]: pgmap v2427: 321 pgs: 321 active+clean; 308 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 97 op/s
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.708 253542 DEBUG nova.network.neutron [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Successfully created port: c1655f18-1254-402d-b9b6-7ca2d5a8bcdc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.842 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.868 253542 WARNING nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] While synchronizing instance power states, found 4 instances in the database and 3 instances on the hypervisor.
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.868 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Triggering sync for uuid f05e074d-5838-4c4b-89dc-76afe386f635 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.868 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Triggering sync for uuid 830678ef-9f48-4175-aa6d-666c24a11689 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.868 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Triggering sync for uuid 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.868 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Triggering sync for uuid 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.869 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.869 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "f05e074d-5838-4c4b-89dc-76afe386f635" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.869 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.869 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "830678ef-9f48-4175-aa6d-666c24a11689" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.870 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.870 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.870 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.904 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "830678ef-9f48-4175-aa6d-666c24a11689" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.907 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "f05e074d-5838-4c4b-89dc-76afe386f635" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:43 compute-0 nova_compute[253538]: 2025-11-25 09:01:43.908 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2428: 321 pgs: 321 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1000 KiB/s rd, 1.7 MiB/s wr, 66 op/s
Nov 25 09:01:44 compute-0 ceph-mon[75015]: pgmap v2428: 321 pgs: 321 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1000 KiB/s rd, 1.7 MiB/s wr, 66 op/s
Nov 25 09:01:44 compute-0 nova_compute[253538]: 2025-11-25 09:01:44.780 253542 DEBUG nova.network.neutron [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Successfully updated port: c1655f18-1254-402d-b9b6-7ca2d5a8bcdc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:01:44 compute-0 nova_compute[253538]: 2025-11-25 09:01:44.809 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:01:44 compute-0 nova_compute[253538]: 2025-11-25 09:01:44.809 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquired lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:01:44 compute-0 nova_compute[253538]: 2025-11-25 09:01:44.809 253542 DEBUG nova.network.neutron [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:01:44 compute-0 nova_compute[253538]: 2025-11-25 09:01:44.885 253542 DEBUG nova.compute.manager [req-1714bf5c-6c28-4243-830d-abbb035e68ee req-281f9807-1562-4140-aa80-1aeafe7933f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-changed-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:44 compute-0 nova_compute[253538]: 2025-11-25 09:01:44.885 253542 DEBUG nova.compute.manager [req-1714bf5c-6c28-4243-830d-abbb035e68ee req-281f9807-1562-4140-aa80-1aeafe7933f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Refreshing instance network info cache due to event network-changed-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:01:44 compute-0 nova_compute[253538]: 2025-11-25 09:01:44.885 253542 DEBUG oslo_concurrency.lockutils [req-1714bf5c-6c28-4243-830d-abbb035e68ee req-281f9807-1562-4140-aa80-1aeafe7933f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.148 253542 DEBUG nova.network.neutron [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.889 253542 DEBUG nova.network.neutron [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Updating instance_info_cache with network_info: [{"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.930 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Releasing lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.930 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Instance network_info: |[{"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.930 253542 DEBUG oslo_concurrency.lockutils [req-1714bf5c-6c28-4243-830d-abbb035e68ee req-281f9807-1562-4140-aa80-1aeafe7933f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.931 253542 DEBUG nova.network.neutron [req-1714bf5c-6c28-4243-830d-abbb035e68ee req-281f9807-1562-4140-aa80-1aeafe7933f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Refreshing network info cache for port c1655f18-1254-402d-b9b6-7ca2d5a8bcdc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.933 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Start _get_guest_xml network_info=[{"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.938 253542 WARNING nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.946 253542 DEBUG nova.virt.libvirt.host [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.947 253542 DEBUG nova.virt.libvirt.host [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.950 253542 DEBUG nova.virt.libvirt.host [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.951 253542 DEBUG nova.virt.libvirt.host [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.951 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.951 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.952 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.952 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.952 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.952 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.952 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.953 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.953 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.953 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.953 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.954 253542 DEBUG nova.virt.hardware [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:01:45 compute-0 nova_compute[253538]: 2025-11-25 09:01:45.957 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2429: 321 pgs: 321 active+clean; 351 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.7 MiB/s wr, 66 op/s
Nov 25 09:01:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:01:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3804695167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.438 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.465 253542 DEBUG nova.storage.rbd_utils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.469 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.583 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.583 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.584 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.605 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.812 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.812 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.812 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.813 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f05e074d-5838-4c4b-89dc-76afe386f635 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:01:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:01:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068414821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.989 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.991 253542 DEBUG nova.virt.libvirt.vif [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1970158435',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1970158435',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=132,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJWOQIYal/ibrReI+6RW0ugIdawfbseW0yB5VZg2UY83dhKROJ1pIw9MNC3hDvNCvPlRsh4rMt3whBfd3b+Yn9hu3bxkRBW96s2JDhI6eOb1sodQt8E/2VrhY6VxJQFmA==',key_name='tempest-TestSecurityGroupsBasicOps-688042133',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-ejm0jt8p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:01:41Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=2d372af7-dca6-4f5f-bd4c-beedbb8cc055,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.991 253542 DEBUG nova.network.os_vif_util [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.992 253542 DEBUG nova.network.os_vif_util [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:b3:62,bridge_name='br-int',has_traffic_filtering=True,id=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1655f18-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:01:46 compute-0 nova_compute[253538]: 2025-11-25 09:01:46.993 253542 DEBUG nova.objects.instance [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.008 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:01:47 compute-0 nova_compute[253538]:   <uuid>2d372af7-dca6-4f5f-bd4c-beedbb8cc055</uuid>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   <name>instance-00000084</name>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1970158435</nova:name>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:01:45</nova:creationTime>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:01:47 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:01:47 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:01:47 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:01:47 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:01:47 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:01:47 compute-0 nova_compute[253538]:         <nova:user uuid="283b89dbe3284e8ea2019b797673108b">tempest-TestSecurityGroupsBasicOps-1495447964-project-member</nova:user>
Nov 25 09:01:47 compute-0 nova_compute[253538]:         <nova:project uuid="cfffff2c57a442a59b202d368d49bf00">tempest-TestSecurityGroupsBasicOps-1495447964</nova:project>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:01:47 compute-0 nova_compute[253538]:         <nova:port uuid="c1655f18-1254-402d-b9b6-7ca2d5a8bcdc">
Nov 25 09:01:47 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <system>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <entry name="serial">2d372af7-dca6-4f5f-bd4c-beedbb8cc055</entry>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <entry name="uuid">2d372af7-dca6-4f5f-bd4c-beedbb8cc055</entry>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     </system>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   <os>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   </os>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   <features>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   </features>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk">
Nov 25 09:01:47 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       </source>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:01:47 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk.config">
Nov 25 09:01:47 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       </source>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:01:47 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:cb:b3:62"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <target dev="tapc1655f18-12"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055/console.log" append="off"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <video>
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     </video>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:01:47 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:01:47 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:01:47 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:01:47 compute-0 nova_compute[253538]: </domain>
Nov 25 09:01:47 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.009 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Preparing to wait for external event network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.009 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.009 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.009 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.010 253542 DEBUG nova.virt.libvirt.vif [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1970158435',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1970158435',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=132,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJWOQIYal/ibrReI+6RW0ugIdawfbseW0yB5VZg2UY83dhKROJ1pIw9MNC3hDvNCvPlRsh4rMt3whBfd3b+Yn9hu3bxkRBW96s2JDhI6eOb1sodQt8E/2VrhY6VxJQFmA==',key_name='tempest-TestSecurityGroupsBasicOps-688042133',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-ejm0jt8p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:01:41Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=2d372af7-dca6-4f5f-bd4c-beedbb8cc055,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.010 253542 DEBUG nova.network.os_vif_util [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.011 253542 DEBUG nova.network.os_vif_util [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:b3:62,bridge_name='br-int',has_traffic_filtering=True,id=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1655f18-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.011 253542 DEBUG os_vif [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:b3:62,bridge_name='br-int',has_traffic_filtering=True,id=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1655f18-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.012 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.012 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.013 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.015 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.016 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1655f18-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.016 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1655f18-12, col_values=(('external_ids', {'iface-id': 'c1655f18-1254-402d-b9b6-7ca2d5a8bcdc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:b3:62', 'vm-uuid': '2d372af7-dca6-4f5f-bd4c-beedbb8cc055'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.018 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:47 compute-0 NetworkManager[48915]: <info>  [1764061307.0189] manager: (tapc1655f18-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.020 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.024 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.025 253542 INFO os_vif [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:b3:62,bridge_name='br-int',has_traffic_filtering=True,id=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1655f18-12')
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.080 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.080 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.080 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No VIF found with MAC fa:16:3e:cb:b3:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.081 253542 INFO nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Using config drive
Nov 25 09:01:47 compute-0 nova_compute[253538]: 2025-11-25 09:01:47.101 253542 DEBUG nova.storage.rbd_utils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:47 compute-0 ceph-mon[75015]: pgmap v2429: 321 pgs: 321 active+clean; 351 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.7 MiB/s wr, 66 op/s
Nov 25 09:01:47 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3804695167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:01:47 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4068414821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:01:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:01:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2430: 321 pgs: 321 active+clean; 372 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 25 09:01:48 compute-0 nova_compute[253538]: 2025-11-25 09:01:48.280 253542 DEBUG nova.network.neutron [req-1714bf5c-6c28-4243-830d-abbb035e68ee req-281f9807-1562-4140-aa80-1aeafe7933f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Updated VIF entry in instance network info cache for port c1655f18-1254-402d-b9b6-7ca2d5a8bcdc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:01:48 compute-0 nova_compute[253538]: 2025-11-25 09:01:48.281 253542 DEBUG nova.network.neutron [req-1714bf5c-6c28-4243-830d-abbb035e68ee req-281f9807-1562-4140-aa80-1aeafe7933f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Updating instance_info_cache with network_info: [{"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:01:48 compute-0 nova_compute[253538]: 2025-11-25 09:01:48.298 253542 DEBUG oslo_concurrency.lockutils [req-1714bf5c-6c28-4243-830d-abbb035e68ee req-281f9807-1562-4140-aa80-1aeafe7933f5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:01:48 compute-0 nova_compute[253538]: 2025-11-25 09:01:48.605 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:49 compute-0 nova_compute[253538]: 2025-11-25 09:01:49.201 253542 INFO nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Creating config drive at /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055/disk.config
Nov 25 09:01:49 compute-0 nova_compute[253538]: 2025-11-25 09:01:49.206 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpndydkpuw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:49 compute-0 ceph-mon[75015]: pgmap v2430: 321 pgs: 321 active+clean; 372 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 25 09:01:49 compute-0 nova_compute[253538]: 2025-11-25 09:01:49.347 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpndydkpuw" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:49 compute-0 nova_compute[253538]: 2025-11-25 09:01:49.373 253542 DEBUG nova.storage.rbd_utils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:01:49 compute-0 nova_compute[253538]: 2025-11-25 09:01:49.378 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055/disk.config 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2431: 321 pgs: 321 active+clean; 372 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 25 09:01:50 compute-0 ceph-mon[75015]: pgmap v2431: 321 pgs: 321 active+clean; 372 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.774 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updating instance_info_cache with network_info: [{"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.792 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.793 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.794 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.794 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.794 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.795 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.868 253542 DEBUG nova.compute.manager [req-e460ec4a-2595-40bd-a4b8-c4d12aeba6b4 req-edb31421-8da4-4326-bf21-cfbde2a51dde b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-changed-cf17086b-8fa3-4041-8a87-a1f9ede3f871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.869 253542 DEBUG nova.compute.manager [req-e460ec4a-2595-40bd-a4b8-c4d12aeba6b4 req-edb31421-8da4-4326-bf21-cfbde2a51dde b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Refreshing instance network info cache due to event network-changed-cf17086b-8fa3-4041-8a87-a1f9ede3f871. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.869 253542 DEBUG oslo_concurrency.lockutils [req-e460ec4a-2595-40bd-a4b8-c4d12aeba6b4 req-edb31421-8da4-4326-bf21-cfbde2a51dde b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.870 253542 DEBUG oslo_concurrency.lockutils [req-e460ec4a-2595-40bd-a4b8-c4d12aeba6b4 req-edb31421-8da4-4326-bf21-cfbde2a51dde b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.870 253542 DEBUG nova.network.neutron [req-e460ec4a-2595-40bd-a4b8-c4d12aeba6b4 req-edb31421-8da4-4326-bf21-cfbde2a51dde b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Refreshing network info cache for port cf17086b-8fa3-4041-8a87-a1f9ede3f871 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.899 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.900 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.900 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.900 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.901 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.903 253542 INFO nova.compute.manager [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Terminating instance
Nov 25 09:01:50 compute-0 nova_compute[253538]: 2025-11-25 09:01:50.905 253542 DEBUG nova.compute.manager [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.066 253542 DEBUG oslo_concurrency.processutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055/disk.config 2d372af7-dca6-4f5f-bd4c-beedbb8cc055_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.067 253542 INFO nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Deleting local config drive /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055/disk.config because it was imported into RBD.
Nov 25 09:01:51 compute-0 NetworkManager[48915]: <info>  [1764061311.1279] manager: (tapc1655f18-12): new Tun device (/org/freedesktop/NetworkManager/Devices/557)
Nov 25 09:01:51 compute-0 kernel: tapc1655f18-12: entered promiscuous mode
Nov 25 09:01:51 compute-0 ovn_controller[152859]: 2025-11-25T09:01:51Z|01345|binding|INFO|Claiming lport c1655f18-1254-402d-b9b6-7ca2d5a8bcdc for this chassis.
Nov 25 09:01:51 compute-0 ovn_controller[152859]: 2025-11-25T09:01:51Z|01346|binding|INFO|c1655f18-1254-402d-b9b6-7ca2d5a8bcdc: Claiming fa:16:3e:cb:b3:62 10.100.0.12
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.133 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.142 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:b3:62 10.100.0.12'], port_security=['fa:16:3e:cb:b3:62 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2d372af7-dca6-4f5f-bd4c-beedbb8cc055', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7a01ec2f-9868-40ca-9120-52725aa4431e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d4cb606-f8e1-4247-876b-21f84cfe5e61, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.143 162739 INFO neutron.agent.ovn.metadata.agent [-] Port c1655f18-1254-402d-b9b6-7ca2d5a8bcdc in datapath 01d5ee0a-5a87-445b-8539-b33b1f9d0842 bound to our chassis
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.145 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d5ee0a-5a87-445b-8539-b33b1f9d0842
Nov 25 09:01:51 compute-0 ovn_controller[152859]: 2025-11-25T09:01:51Z|01347|binding|INFO|Setting lport c1655f18-1254-402d-b9b6-7ca2d5a8bcdc ovn-installed in OVS
Nov 25 09:01:51 compute-0 ovn_controller[152859]: 2025-11-25T09:01:51Z|01348|binding|INFO|Setting lport c1655f18-1254-402d-b9b6-7ca2d5a8bcdc up in Southbound
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.161 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.173 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[027735f8-7815-4e36-8392-b66640510e7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:51 compute-0 systemd-machined[215790]: New machine qemu-162-instance-00000084.
Nov 25 09:01:51 compute-0 systemd[1]: Started Virtual Machine qemu-162-instance-00000084.
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.208 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2b863ed8-0661-459e-865a-34d87c058d50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:51 compute-0 systemd-udevd[391752]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.212 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4d16cb4c-f6db-4555-872f-0e0428af6044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:51 compute-0 NetworkManager[48915]: <info>  [1764061311.2233] device (tapc1655f18-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:01:51 compute-0 NetworkManager[48915]: <info>  [1764061311.2243] device (tapc1655f18-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:01:51 compute-0 podman[391728]: 2025-11-25 09:01:51.240453032 +0000 UTC m=+0.065378641 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.246 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e3006cbd-32c2-42c9-934d-88cd34e10e75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:51 compute-0 podman[391726]: 2025-11-25 09:01:51.248235893 +0000 UTC m=+0.075126535 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.268 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f740e7e-9af1-4333-acbb-e96ed487cff2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d5ee0a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:39:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662730, 'reachable_time': 37346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391777, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.288 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a706244c-441b-47e1-90a8-ac9f70fd0457]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap01d5ee0a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662744, 'tstamp': 662744}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391780, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap01d5ee0a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662746, 'tstamp': 662746}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391780, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.291 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d5ee0a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.293 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.296 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d5ee0a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.296 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.297 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d5ee0a-50, col_values=(('external_ids', {'iface-id': 'e613213f-7deb-43ce-acbb-25b798b2b340'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.297 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.404 253542 DEBUG nova.compute.manager [req-38592265-fc5a-4690-9712-392990a1dbb8 req-4871de1c-572a-471a-86dc-dfe1e01140c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.405 253542 DEBUG oslo_concurrency.lockutils [req-38592265-fc5a-4690-9712-392990a1dbb8 req-4871de1c-572a-471a-86dc-dfe1e01140c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.405 253542 DEBUG oslo_concurrency.lockutils [req-38592265-fc5a-4690-9712-392990a1dbb8 req-4871de1c-572a-471a-86dc-dfe1e01140c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.406 253542 DEBUG oslo_concurrency.lockutils [req-38592265-fc5a-4690-9712-392990a1dbb8 req-4871de1c-572a-471a-86dc-dfe1e01140c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.406 253542 DEBUG nova.compute.manager [req-38592265-fc5a-4690-9712-392990a1dbb8 req-4871de1c-572a-471a-86dc-dfe1e01140c6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Processing event network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:01:51 compute-0 kernel: tapcf17086b-8f (unregistering): left promiscuous mode
Nov 25 09:01:51 compute-0 NetworkManager[48915]: <info>  [1764061311.4451] device (tapcf17086b-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:01:51 compute-0 ovn_controller[152859]: 2025-11-25T09:01:51Z|01349|binding|INFO|Releasing lport cf17086b-8fa3-4041-8a87-a1f9ede3f871 from this chassis (sb_readonly=0)
Nov 25 09:01:51 compute-0 ovn_controller[152859]: 2025-11-25T09:01:51Z|01350|binding|INFO|Setting lport cf17086b-8fa3-4041-8a87-a1f9ede3f871 down in Southbound
Nov 25 09:01:51 compute-0 ovn_controller[152859]: 2025-11-25T09:01:51Z|01351|binding|INFO|Removing iface tapcf17086b-8f ovn-installed in OVS
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.455 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.461 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:64:2f 10.100.0.9 2001:db8::f816:3eff:fe2b:642f'], port_security=['fa:16:3e:2b:64:2f 10.100.0.9 2001:db8::f816:3eff:fe2b:642f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8::f816:3eff:fe2b:642f/64', 'neutron:device_id': '3dd3cc22-d02b-4948-b7e0-da630c6ad4b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc2d8b7e-9a70-4d0b-ab2b-2be8746de01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566c4500-0375-4680-b110-24535007c05e, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=cf17086b-8fa3-4041-8a87-a1f9ede3f871) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.462 162739 INFO neutron.agent.ovn.metadata.agent [-] Port cf17086b-8fa3-4041-8a87-a1f9ede3f871 in datapath 6c2834b5-0444-432c-8da4-c0b4f4aabc4d unbound from our chassis
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.464 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c2834b5-0444-432c-8da4-c0b4f4aabc4d
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.472 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.474 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.491 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b514d0d1-7919-4a42-900a-0df5a00931f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.524 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff83287-4a90-4003-9ef9-6df9999d18ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.527 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[82d221e7-1c3c-41db-9457-0bde78fee2b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:51 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000083.scope: Deactivated successfully.
Nov 25 09:01:51 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000083.scope: Consumed 13.710s CPU time.
Nov 25 09:01:51 compute-0 systemd-machined[215790]: Machine qemu-161-instance-00000083 terminated.
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.552 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[01a94cee-3e47-442b-a6d6-b527e34ab4fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.573 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d44fed5e-de06-4f58-a30b-eaafd6126e75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c2834b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:81:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 8, 'rx_bytes': 2640, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 8, 'rx_bytes': 2640, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659443, 'reachable_time': 25259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391790, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.587 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4f0ea9c7-904b-49ab-80d8-5dbc6e365881]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6c2834b5-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659455, 'tstamp': 659455}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391791, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6c2834b5-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659459, 'tstamp': 659459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391791, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.589 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c2834b5-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.594 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.595 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c2834b5-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.596 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.596 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c2834b5-00, col_values=(('external_ids', {'iface-id': '25e4a85d-5a04-4d07-a006-66576a20c294'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:51.596 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.741 253542 INFO nova.virt.libvirt.driver [-] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Instance destroyed successfully.
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.742 253542 DEBUG nova.objects.instance [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.753 253542 DEBUG nova.virt.libvirt.vif [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:01:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-432989417',display_name='tempest-TestGettingAddress-server-432989417',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-432989417',id=131,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE43RDkFrIAVI0BW11KJE2OYK1HJrAvC1/LPNZ8TDm6qjtPF0a19WFe4a1radMZBBdiDLKOYgFd/MDesehfKkwox7hQY3R8UYDimfx0Df8eUJAXIFHj7mDCbbhz+nTG4ag==',key_name='tempest-TestGettingAddress-356224709',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:01:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-z9h0547w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:01:28Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=3dd3cc22-d02b-4948-b7e0-da630c6ad4b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.753 253542 DEBUG nova.network.os_vif_util [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.754 253542 DEBUG nova.network.os_vif_util [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:64:2f,bridge_name='br-int',has_traffic_filtering=True,id=cf17086b-8fa3-4041-8a87-a1f9ede3f871,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf17086b-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.754 253542 DEBUG os_vif [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:64:2f,bridge_name='br-int',has_traffic_filtering=True,id=cf17086b-8fa3-4041-8a87-a1f9ede3f871,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf17086b-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.756 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf17086b-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.760 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.762 253542 INFO os_vif [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:64:2f,bridge_name='br-int',has_traffic_filtering=True,id=cf17086b-8fa3-4041-8a87-a1f9ede3f871,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf17086b-8f')
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.956 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061311.955714, 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.956 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] VM Started (Lifecycle Event)
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.958 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.961 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.964 253542 INFO nova.virt.libvirt.driver [-] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Instance spawned successfully.
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.964 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.982 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.987 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.992 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.993 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.993 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.994 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.994 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:51 compute-0 nova_compute[253538]: 2025-11-25 09:01:51.995 253542 DEBUG nova.virt.libvirt.driver [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.019 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.020 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061311.9559531, 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.020 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] VM Paused (Lifecycle Event)
Nov 25 09:01:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2432: 321 pgs: 321 active+clean; 372 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.051 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.055 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061311.961139, 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.055 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] VM Resumed (Lifecycle Event)
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.064 253542 INFO nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Took 10.15 seconds to spawn the instance on the hypervisor.
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.065 253542 DEBUG nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.086 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.090 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.117 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.129 253542 INFO nova.compute.manager [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Took 11.16 seconds to build instance.
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.142 253542 DEBUG oslo_concurrency.lockutils [None req-d35aed32-9a06-49d6-a4ac-4f84c5a5ab8b 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.143 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 8.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.143 253542 INFO nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.144 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.521 253542 DEBUG nova.network.neutron [req-e460ec4a-2595-40bd-a4b8-c4d12aeba6b4 req-edb31421-8da4-4326-bf21-cfbde2a51dde b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Updated VIF entry in instance network info cache for port cf17086b-8fa3-4041-8a87-a1f9ede3f871. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.524 253542 DEBUG nova.network.neutron [req-e460ec4a-2595-40bd-a4b8-c4d12aeba6b4 req-edb31421-8da4-4326-bf21-cfbde2a51dde b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Updating instance_info_cache with network_info: [{"id": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "address": "fa:16:3e:2b:64:2f", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:642f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf17086b-8f", "ovs_interfaceid": "cf17086b-8fa3-4041-8a87-a1f9ede3f871", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.541 253542 DEBUG oslo_concurrency.lockutils [req-e460ec4a-2595-40bd-a4b8-c4d12aeba6b4 req-edb31421-8da4-4326-bf21-cfbde2a51dde b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:01:52 compute-0 nova_compute[253538]: 2025-11-25 09:01:52.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:01:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:01:53 compute-0 ceph-mon[75015]: pgmap v2432: 321 pgs: 321 active+clean; 372 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 25 09:01:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:01:53
Nov 25 09:01:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:01:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:01:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'volumes', 'images', '.mgr', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta']
Nov 25 09:01:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:01:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:01:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:01:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:01:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:01:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:01:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.495 253542 DEBUG nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.496 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.496 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.496 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.496 253542 DEBUG nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] No waiting events found dispatching network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.497 253542 WARNING nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received unexpected event network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc for instance with vm_state active and task_state None.
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.497 253542 DEBUG nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-vif-unplugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.497 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.498 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.498 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.498 253542 DEBUG nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] No waiting events found dispatching network-vif-unplugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.498 253542 DEBUG nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-vif-unplugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.499 253542 DEBUG nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.499 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.499 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.499 253542 DEBUG oslo_concurrency.lockutils [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.500 253542 DEBUG nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] No waiting events found dispatching network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.500 253542 WARNING nova.compute.manager [req-1e0a66ad-1555-4a61-baf4-9908c6cc2ef6 req-99c7927e-f7dd-4072-af4b-5f92a3be824c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received unexpected event network-vif-plugged-cf17086b-8fa3-4041-8a87-a1f9ede3f871 for instance with vm_state active and task_state deleting.
Nov 25 09:01:53 compute-0 nova_compute[253538]: 2025-11-25 09:01:53.608 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:01:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:01:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:01:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:01:54 compute-0 nova_compute[253538]: 2025-11-25 09:01:54.024 253542 INFO nova.virt.libvirt.driver [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Deleting instance files /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_del
Nov 25 09:01:54 compute-0 nova_compute[253538]: 2025-11-25 09:01:54.025 253542 INFO nova.virt.libvirt.driver [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Deletion of /var/lib/nova/instances/3dd3cc22-d02b-4948-b7e0-da630c6ad4b0_del complete
Nov 25 09:01:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:01:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:01:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:01:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:01:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:01:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:01:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2433: 321 pgs: 321 active+clean; 346 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 156 KiB/s rd, 3.0 MiB/s wr, 65 op/s
Nov 25 09:01:54 compute-0 nova_compute[253538]: 2025-11-25 09:01:54.132 253542 INFO nova.compute.manager [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Took 3.23 seconds to destroy the instance on the hypervisor.
Nov 25 09:01:54 compute-0 nova_compute[253538]: 2025-11-25 09:01:54.133 253542 DEBUG oslo.service.loopingcall [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:01:54 compute-0 nova_compute[253538]: 2025-11-25 09:01:54.133 253542 DEBUG nova.compute.manager [-] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:01:54 compute-0 nova_compute[253538]: 2025-11-25 09:01:54.134 253542 DEBUG nova.network.neutron [-] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:01:54 compute-0 nova_compute[253538]: 2025-11-25 09:01:54.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:01:54 compute-0 nova_compute[253538]: 2025-11-25 09:01:54.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:54 compute-0 nova_compute[253538]: 2025-11-25 09:01:54.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:54 compute-0 nova_compute[253538]: 2025-11-25 09:01:54.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:54 compute-0 nova_compute[253538]: 2025-11-25 09:01:54.574 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:01:54 compute-0 nova_compute[253538]: 2025-11-25 09:01:54.575 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:54 compute-0 nova_compute[253538]: 2025-11-25 09:01:54.897 253542 DEBUG nova.network.neutron [-] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:01:54 compute-0 nova_compute[253538]: 2025-11-25 09:01:54.913 253542 INFO nova.compute.manager [-] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Took 0.78 seconds to deallocate network for instance.
Nov 25 09:01:54 compute-0 nova_compute[253538]: 2025-11-25 09:01:54.950 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:54 compute-0 nova_compute[253538]: 2025-11-25 09:01:54.951 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:01:55 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3222683498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.053 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.073 253542 DEBUG oslo_concurrency.processutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.183 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.184 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.189 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.190 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.195 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.195 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.356 253542 DEBUG nova.compute.manager [req-63beb1e3-3b83-4432-bba0-71bd07d98918 req-30a2eb91-a950-423f-9f03-992904e82441 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-changed-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.357 253542 DEBUG nova.compute.manager [req-63beb1e3-3b83-4432-bba0-71bd07d98918 req-30a2eb91-a950-423f-9f03-992904e82441 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Refreshing instance network info cache due to event network-changed-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.358 253542 DEBUG oslo_concurrency.lockutils [req-63beb1e3-3b83-4432-bba0-71bd07d98918 req-30a2eb91-a950-423f-9f03-992904e82441 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.359 253542 DEBUG oslo_concurrency.lockutils [req-63beb1e3-3b83-4432-bba0-71bd07d98918 req-30a2eb91-a950-423f-9f03-992904e82441 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.359 253542 DEBUG nova.network.neutron [req-63beb1e3-3b83-4432-bba0-71bd07d98918 req-30a2eb91-a950-423f-9f03-992904e82441 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Refreshing network info cache for port c1655f18-1254-402d-b9b6-7ca2d5a8bcdc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:01:55 compute-0 ceph-mon[75015]: pgmap v2433: 321 pgs: 321 active+clean; 346 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 156 KiB/s rd, 3.0 MiB/s wr, 65 op/s
Nov 25 09:01:55 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3222683498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.508 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.509 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3114MB free_disk=59.84657669067383GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.509 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.596 253542 DEBUG nova.compute.manager [req-2fbf3e9a-9153-462e-bda2-8bc9d78f1467 req-b84792ee-a9ed-4fa4-99a7-ab4524481ba0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Received event network-vif-deleted-cf17086b-8fa3-4041-8a87-a1f9ede3f871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:01:55 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1247479192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.697 253542 DEBUG oslo_concurrency.processutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.704 253542 DEBUG nova.compute.provider_tree [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.726 253542 DEBUG nova.scheduler.client.report [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.750 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.754 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.802 253542 INFO nova.scheduler.client.report [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.897 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance f05e074d-5838-4c4b-89dc-76afe386f635 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.898 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 830678ef-9f48-4175-aa6d-666c24a11689 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.898 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.898 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.899 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.941 253542 DEBUG oslo_concurrency.lockutils [None req-6adced35-cf3c-4fbd-b013-bc3653cb8899 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3dd3cc22-d02b-4948-b7e0-da630c6ad4b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:55 compute-0 nova_compute[253538]: 2025-11-25 09:01:55.993 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:01:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2434: 321 pgs: 321 active+clean; 330 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 681 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Nov 25 09:01:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:01:56 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1807212356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:01:56 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1247479192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:01:56 compute-0 ceph-mon[75015]: pgmap v2434: 321 pgs: 321 active+clean; 330 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 681 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Nov 25 09:01:56 compute-0 nova_compute[253538]: 2025-11-25 09:01:56.599 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:01:56 compute-0 nova_compute[253538]: 2025-11-25 09:01:56.604 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:01:56 compute-0 nova_compute[253538]: 2025-11-25 09:01:56.631 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:01:56 compute-0 nova_compute[253538]: 2025-11-25 09:01:56.673 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:01:56 compute-0 nova_compute[253538]: 2025-11-25 09:01:56.674 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:56 compute-0 nova_compute[253538]: 2025-11-25 09:01:56.760 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:56 compute-0 nova_compute[253538]: 2025-11-25 09:01:56.784 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:56 compute-0 nova_compute[253538]: 2025-11-25 09:01:56.785 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:56 compute-0 nova_compute[253538]: 2025-11-25 09:01:56.785 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:56 compute-0 nova_compute[253538]: 2025-11-25 09:01:56.785 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:56 compute-0 nova_compute[253538]: 2025-11-25 09:01:56.786 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:56 compute-0 nova_compute[253538]: 2025-11-25 09:01:56.787 253542 INFO nova.compute.manager [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Terminating instance
Nov 25 09:01:56 compute-0 nova_compute[253538]: 2025-11-25 09:01:56.789 253542 DEBUG nova.compute.manager [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:01:56 compute-0 nova_compute[253538]: 2025-11-25 09:01:56.930 253542 DEBUG nova.network.neutron [req-63beb1e3-3b83-4432-bba0-71bd07d98918 req-30a2eb91-a950-423f-9f03-992904e82441 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Updated VIF entry in instance network info cache for port c1655f18-1254-402d-b9b6-7ca2d5a8bcdc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:01:56 compute-0 nova_compute[253538]: 2025-11-25 09:01:56.930 253542 DEBUG nova.network.neutron [req-63beb1e3-3b83-4432-bba0-71bd07d98918 req-30a2eb91-a950-423f-9f03-992904e82441 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Updating instance_info_cache with network_info: [{"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:01:56 compute-0 nova_compute[253538]: 2025-11-25 09:01:56.947 253542 DEBUG oslo_concurrency.lockutils [req-63beb1e3-3b83-4432-bba0-71bd07d98918 req-30a2eb91-a950-423f-9f03-992904e82441 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:01:57 compute-0 kernel: tap30ba0f84-3d (unregistering): left promiscuous mode
Nov 25 09:01:57 compute-0 NetworkManager[48915]: <info>  [1764061317.2248] device (tap30ba0f84-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:01:57 compute-0 ovn_controller[152859]: 2025-11-25T09:01:57Z|01352|binding|INFO|Releasing lport 30ba0f84-3dca-47f6-911d-5fff56a99b0b from this chassis (sb_readonly=0)
Nov 25 09:01:57 compute-0 ovn_controller[152859]: 2025-11-25T09:01:57Z|01353|binding|INFO|Setting lport 30ba0f84-3dca-47f6-911d-5fff56a99b0b down in Southbound
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.232 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:57 compute-0 ovn_controller[152859]: 2025-11-25T09:01:57Z|01354|binding|INFO|Removing iface tap30ba0f84-3d ovn-installed in OVS
Nov 25 09:01:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:57.241 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:d0:53 10.100.0.14 2001:db8::f816:3eff:fed4:d053'], port_security=['fa:16:3e:d4:d0:53 10.100.0.14 2001:db8::f816:3eff:fed4:d053'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fed4:d053/64', 'neutron:device_id': 'f05e074d-5838-4c4b-89dc-76afe386f635', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc2d8b7e-9a70-4d0b-ab2b-2be8746de01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566c4500-0375-4680-b110-24535007c05e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=30ba0f84-3dca-47f6-911d-5fff56a99b0b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:01:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:57.242 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 30ba0f84-3dca-47f6-911d-5fff56a99b0b in datapath 6c2834b5-0444-432c-8da4-c0b4f4aabc4d unbound from our chassis
Nov 25 09:01:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:57.243 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c2834b5-0444-432c-8da4-c0b4f4aabc4d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:01:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:57.243 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7866c655-3561-4bd5-b4eb-893ac1e4d7b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:57.245 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d namespace which is not needed anymore
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.252 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:57 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d00000081.scope: Deactivated successfully.
Nov 25 09:01:57 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d00000081.scope: Consumed 16.762s CPU time.
Nov 25 09:01:57 compute-0 systemd-machined[215790]: Machine qemu-159-instance-00000081 terminated.
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.428 253542 INFO nova.virt.libvirt.driver [-] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Instance destroyed successfully.
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.429 253542 DEBUG nova.objects.instance [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid f05e074d-5838-4c4b-89dc-76afe386f635 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.439 253542 DEBUG nova.compute.manager [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-changed-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.440 253542 DEBUG nova.compute.manager [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Refreshing instance network info cache due to event network-changed-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.440 253542 DEBUG oslo_concurrency.lockutils [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.440 253542 DEBUG oslo_concurrency.lockutils [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.440 253542 DEBUG nova.network.neutron [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Refreshing network info cache for port c1655f18-1254-402d-b9b6-7ca2d5a8bcdc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.442 253542 DEBUG nova.virt.libvirt.vif [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:00:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1223430136',display_name='tempest-TestGettingAddress-server-1223430136',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1223430136',id=129,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE43RDkFrIAVI0BW11KJE2OYK1HJrAvC1/LPNZ8TDm6qjtPF0a19WFe4a1radMZBBdiDLKOYgFd/MDesehfKkwox7hQY3R8UYDimfx0Df8eUJAXIFHj7mDCbbhz+nTG4ag==',key_name='tempest-TestGettingAddress-356224709',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:00:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-w1sbzkv2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:00:40Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=f05e074d-5838-4c4b-89dc-76afe386f635,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.443 253542 DEBUG nova.network.os_vif_util [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.444 253542 DEBUG nova.network.os_vif_util [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d4:d0:53,bridge_name='br-int',has_traffic_filtering=True,id=30ba0f84-3dca-47f6-911d-5fff56a99b0b,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ba0f84-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.444 253542 DEBUG os_vif [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:d0:53,bridge_name='br-int',has_traffic_filtering=True,id=30ba0f84-3dca-47f6-911d-5fff56a99b0b,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ba0f84-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.445 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.446 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30ba0f84-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.448 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.451 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.453 253542 INFO os_vif [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:d0:53,bridge_name='br-int',has_traffic_filtering=True,id=30ba0f84-3dca-47f6-911d-5fff56a99b0b,network=Network(6c2834b5-0444-432c-8da4-c0b4f4aabc4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ba0f84-3d')
Nov 25 09:01:57 compute-0 neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d[389481]: [NOTICE]   (389485) : haproxy version is 2.8.14-c23fe91
Nov 25 09:01:57 compute-0 neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d[389481]: [NOTICE]   (389485) : path to executable is /usr/sbin/haproxy
Nov 25 09:01:57 compute-0 neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d[389481]: [WARNING]  (389485) : Exiting Master process...
Nov 25 09:01:57 compute-0 neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d[389481]: [ALERT]    (389485) : Current worker (389487) exited with code 143 (Terminated)
Nov 25 09:01:57 compute-0 neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d[389481]: [WARNING]  (389485) : All workers exited. Exiting... (0)
Nov 25 09:01:57 compute-0 systemd[1]: libpod-d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60.scope: Deactivated successfully.
Nov 25 09:01:57 compute-0 podman[391956]: 2025-11-25 09:01:57.528172764 +0000 UTC m=+0.185802928 container died d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:01:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:01:57 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1807212356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.690 253542 DEBUG nova.compute.manager [req-7efe3add-a307-45c5-b5ee-e4a824d0a349 req-750f161f-6399-4e98-8dc0-2bd8ed363cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-changed-30ba0f84-3dca-47f6-911d-5fff56a99b0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.691 253542 DEBUG nova.compute.manager [req-7efe3add-a307-45c5-b5ee-e4a824d0a349 req-750f161f-6399-4e98-8dc0-2bd8ed363cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Refreshing instance network info cache due to event network-changed-30ba0f84-3dca-47f6-911d-5fff56a99b0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.691 253542 DEBUG oslo_concurrency.lockutils [req-7efe3add-a307-45c5-b5ee-e4a824d0a349 req-750f161f-6399-4e98-8dc0-2bd8ed363cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.691 253542 DEBUG oslo_concurrency.lockutils [req-7efe3add-a307-45c5-b5ee-e4a824d0a349 req-750f161f-6399-4e98-8dc0-2bd8ed363cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:01:57 compute-0 nova_compute[253538]: 2025-11-25 09:01:57.692 253542 DEBUG nova.network.neutron [req-7efe3add-a307-45c5-b5ee-e4a824d0a349 req-750f161f-6399-4e98-8dc0-2bd8ed363cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Refreshing network info cache for port 30ba0f84-3dca-47f6-911d-5fff56a99b0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:01:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60-userdata-shm.mount: Deactivated successfully.
Nov 25 09:01:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-95f8cf4280ba7f34d6348b543dd0da02fb745e7d55375aa17f4fa4a2f697425a-merged.mount: Deactivated successfully.
Nov 25 09:01:58 compute-0 podman[391956]: 2025-11-25 09:01:58.015599271 +0000 UTC m=+0.673229435 container cleanup d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 09:01:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2435: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 120 op/s
Nov 25 09:01:58 compute-0 systemd[1]: libpod-conmon-d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60.scope: Deactivated successfully.
Nov 25 09:01:58 compute-0 podman[392016]: 2025-11-25 09:01:58.235224579 +0000 UTC m=+0.163099801 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 09:01:58 compute-0 podman[392015]: 2025-11-25 09:01:58.605512538 +0000 UTC m=+0.556672223 container remove d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:01:58 compute-0 nova_compute[253538]: 2025-11-25 09:01:58.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.614 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6da2d84-4c8c-4188-894a-35557877ff08]: (4, ('Tue Nov 25 09:01:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d (d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60)\nd3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60\nTue Nov 25 09:01:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d (d3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60)\nd3176deb3929171a3708dfedf6cc865ea546447c29f69deab5baa393f5ab0b60\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.616 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[52f367dd-0db5-4254-a308-f9fa4a8a7538]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.617 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c2834b5-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:01:58 compute-0 nova_compute[253538]: 2025-11-25 09:01:58.618 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:58 compute-0 kernel: tap6c2834b5-00: left promiscuous mode
Nov 25 09:01:58 compute-0 nova_compute[253538]: 2025-11-25 09:01:58.620 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.629 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[85a9f392-bb4c-441e-bda4-95d540c5f5f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.651 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[47abf192-b0cf-45d0-8500-8af66767f1aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.653 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9595b11e-9a4c-4061-8da8-35da5856d75d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:58 compute-0 nova_compute[253538]: 2025-11-25 09:01:58.653 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:01:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.671 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9befea9e-1031-4ffb-8380-e91c242be8a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659434, 'reachable_time': 37034, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392056, 'error': None, 'target': 'ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d6c2834b5\x2d0444\x2d432c\x2d8da4\x2dc0b4f4aabc4d.mount: Deactivated successfully.
Nov 25 09:01:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.676 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c2834b5-0444-432c-8da4-c0b4f4aabc4d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:01:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:01:58.676 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[862172d9-dede-41fa-ad1f-d90cbea5c720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:01:58 compute-0 ceph-mon[75015]: pgmap v2435: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 120 op/s
Nov 25 09:01:59 compute-0 nova_compute[253538]: 2025-11-25 09:01:59.497 253542 DEBUG nova.network.neutron [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Updated VIF entry in instance network info cache for port c1655f18-1254-402d-b9b6-7ca2d5a8bcdc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:01:59 compute-0 nova_compute[253538]: 2025-11-25 09:01:59.497 253542 DEBUG nova.network.neutron [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Updating instance_info_cache with network_info: [{"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:01:59 compute-0 nova_compute[253538]: 2025-11-25 09:01:59.518 253542 DEBUG oslo_concurrency.lockutils [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2d372af7-dca6-4f5f-bd4c-beedbb8cc055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:01:59 compute-0 nova_compute[253538]: 2025-11-25 09:01:59.518 253542 DEBUG nova.compute.manager [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-vif-unplugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:59 compute-0 nova_compute[253538]: 2025-11-25 09:01:59.519 253542 DEBUG oslo_concurrency.lockutils [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:59 compute-0 nova_compute[253538]: 2025-11-25 09:01:59.519 253542 DEBUG oslo_concurrency.lockutils [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:59 compute-0 nova_compute[253538]: 2025-11-25 09:01:59.519 253542 DEBUG oslo_concurrency.lockutils [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:59 compute-0 nova_compute[253538]: 2025-11-25 09:01:59.520 253542 DEBUG nova.compute.manager [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] No waiting events found dispatching network-vif-unplugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:01:59 compute-0 nova_compute[253538]: 2025-11-25 09:01:59.520 253542 DEBUG nova.compute.manager [req-3c51296c-8d2f-4355-93d1-6b3840646bf6 req-86c45f0a-16a9-45c6-accc-3043692c4120 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-vif-unplugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:01:59 compute-0 nova_compute[253538]: 2025-11-25 09:01:59.571 253542 DEBUG nova.compute.manager [req-dbbb69e5-b861-4014-88f6-d0239517bccf req-35f10a0a-436c-4167-8b4f-7287e3489dce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:01:59 compute-0 nova_compute[253538]: 2025-11-25 09:01:59.571 253542 DEBUG oslo_concurrency.lockutils [req-dbbb69e5-b861-4014-88f6-d0239517bccf req-35f10a0a-436c-4167-8b4f-7287e3489dce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:01:59 compute-0 nova_compute[253538]: 2025-11-25 09:01:59.571 253542 DEBUG oslo_concurrency.lockutils [req-dbbb69e5-b861-4014-88f6-d0239517bccf req-35f10a0a-436c-4167-8b4f-7287e3489dce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:01:59 compute-0 nova_compute[253538]: 2025-11-25 09:01:59.572 253542 DEBUG oslo_concurrency.lockutils [req-dbbb69e5-b861-4014-88f6-d0239517bccf req-35f10a0a-436c-4167-8b4f-7287e3489dce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:01:59 compute-0 nova_compute[253538]: 2025-11-25 09:01:59.572 253542 DEBUG nova.compute.manager [req-dbbb69e5-b861-4014-88f6-d0239517bccf req-35f10a0a-436c-4167-8b4f-7287e3489dce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] No waiting events found dispatching network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:01:59 compute-0 nova_compute[253538]: 2025-11-25 09:01:59.572 253542 WARNING nova.compute.manager [req-dbbb69e5-b861-4014-88f6-d0239517bccf req-35f10a0a-436c-4167-8b4f-7287e3489dce b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received unexpected event network-vif-plugged-30ba0f84-3dca-47f6-911d-5fff56a99b0b for instance with vm_state active and task_state deleting.
Nov 25 09:02:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2436: 321 pgs: 321 active+clean; 285 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 114 op/s
Nov 25 09:02:00 compute-0 nova_compute[253538]: 2025-11-25 09:02:00.186 253542 DEBUG nova.network.neutron [req-7efe3add-a307-45c5-b5ee-e4a824d0a349 req-750f161f-6399-4e98-8dc0-2bd8ed363cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updated VIF entry in instance network info cache for port 30ba0f84-3dca-47f6-911d-5fff56a99b0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:02:00 compute-0 nova_compute[253538]: 2025-11-25 09:02:00.187 253542 DEBUG nova.network.neutron [req-7efe3add-a307-45c5-b5ee-e4a824d0a349 req-750f161f-6399-4e98-8dc0-2bd8ed363cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updating instance_info_cache with network_info: [{"id": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "address": "fa:16:3e:d4:d0:53", "network": {"id": "6c2834b5-0444-432c-8da4-c0b4f4aabc4d", "bridge": "br-int", "label": "tempest-network-smoke--139778261", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed4:d053", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ba0f84-3d", "ovs_interfaceid": "30ba0f84-3dca-47f6-911d-5fff56a99b0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:02:00 compute-0 nova_compute[253538]: 2025-11-25 09:02:00.203 253542 DEBUG oslo_concurrency.lockutils [req-7efe3add-a307-45c5-b5ee-e4a824d0a349 req-750f161f-6399-4e98-8dc0-2bd8ed363cdc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f05e074d-5838-4c4b-89dc-76afe386f635" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:02:00 compute-0 ceph-mon[75015]: pgmap v2436: 321 pgs: 321 active+clean; 285 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 114 op/s
Nov 25 09:02:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2437: 321 pgs: 321 active+clean; 260 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 32 KiB/s wr, 116 op/s
Nov 25 09:02:02 compute-0 nova_compute[253538]: 2025-11-25 09:02:02.108 253542 INFO nova.virt.libvirt.driver [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Deleting instance files /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635_del
Nov 25 09:02:02 compute-0 nova_compute[253538]: 2025-11-25 09:02:02.110 253542 INFO nova.virt.libvirt.driver [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Deletion of /var/lib/nova/instances/f05e074d-5838-4c4b-89dc-76afe386f635_del complete
Nov 25 09:02:02 compute-0 nova_compute[253538]: 2025-11-25 09:02:02.175 253542 INFO nova.compute.manager [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Took 5.39 seconds to destroy the instance on the hypervisor.
Nov 25 09:02:02 compute-0 nova_compute[253538]: 2025-11-25 09:02:02.175 253542 DEBUG oslo.service.loopingcall [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:02:02 compute-0 nova_compute[253538]: 2025-11-25 09:02:02.175 253542 DEBUG nova.compute.manager [-] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:02:02 compute-0 nova_compute[253538]: 2025-11-25 09:02:02.176 253542 DEBUG nova.network.neutron [-] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:02:02 compute-0 nova_compute[253538]: 2025-11-25 09:02:02.447 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:02:02 compute-0 nova_compute[253538]: 2025-11-25 09:02:02.761 253542 DEBUG nova.network.neutron [-] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:02:02 compute-0 nova_compute[253538]: 2025-11-25 09:02:02.776 253542 INFO nova.compute.manager [-] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Took 0.60 seconds to deallocate network for instance.
Nov 25 09:02:02 compute-0 nova_compute[253538]: 2025-11-25 09:02:02.811 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:02 compute-0 nova_compute[253538]: 2025-11-25 09:02:02.812 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:02 compute-0 nova_compute[253538]: 2025-11-25 09:02:02.839 253542 DEBUG nova.compute.manager [req-7550e33d-b7f4-4d45-9cfe-57972b56016b req-aa0b8644-4b9c-490c-ade7-68eb962a0991 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Received event network-vif-deleted-30ba0f84-3dca-47f6-911d-5fff56a99b0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:02 compute-0 nova_compute[253538]: 2025-11-25 09:02:02.881 253542 DEBUG oslo_concurrency.processutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:03 compute-0 ceph-mon[75015]: pgmap v2437: 321 pgs: 321 active+clean; 260 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 32 KiB/s wr, 116 op/s
Nov 25 09:02:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:02:03 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/62532698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:02:03 compute-0 nova_compute[253538]: 2025-11-25 09:02:03.332 253542 DEBUG oslo_concurrency.processutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:03 compute-0 nova_compute[253538]: 2025-11-25 09:02:03.342 253542 DEBUG nova.compute.provider_tree [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:02:03 compute-0 nova_compute[253538]: 2025-11-25 09:02:03.363 253542 DEBUG nova.scheduler.client.report [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:02:03 compute-0 nova_compute[253538]: 2025-11-25 09:02:03.397 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:03 compute-0 nova_compute[253538]: 2025-11-25 09:02:03.432 253542 INFO nova.scheduler.client.report [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance f05e074d-5838-4c4b-89dc-76afe386f635
Nov 25 09:02:03 compute-0 nova_compute[253538]: 2025-11-25 09:02:03.498 253542 DEBUG oslo_concurrency.lockutils [None req-58afc9c6-4eaa-4ac4-8a3e-2c46e29e2d61 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f05e074d-5838-4c4b-89dc-76afe386f635" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:03 compute-0 nova_compute[253538]: 2025-11-25 09:02:03.699 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2438: 321 pgs: 321 active+clean; 247 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 9.5 KiB/s wr, 122 op/s
Nov 25 09:02:04 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/62532698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012935621784853813 of space, bias 1.0, pg target 0.3880686535456144 quantized to 32 (current 32)
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:02:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:02:05 compute-0 ceph-mon[75015]: pgmap v2438: 321 pgs: 321 active+clean; 247 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 9.5 KiB/s wr, 122 op/s
Nov 25 09:02:05 compute-0 ovn_controller[152859]: 2025-11-25T09:02:05Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:b3:62 10.100.0.12
Nov 25 09:02:05 compute-0 ovn_controller[152859]: 2025-11-25T09:02:05Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:b3:62 10.100.0.12
Nov 25 09:02:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2439: 321 pgs: 321 active+clean; 224 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 139 op/s
Nov 25 09:02:06 compute-0 nova_compute[253538]: 2025-11-25 09:02:06.740 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061311.73886, 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:02:06 compute-0 nova_compute[253538]: 2025-11-25 09:02:06.741 253542 INFO nova.compute.manager [-] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] VM Stopped (Lifecycle Event)
Nov 25 09:02:06 compute-0 nova_compute[253538]: 2025-11-25 09:02:06.764 253542 DEBUG nova.compute.manager [None req-7f7ba636-9164-40a0-a2f9-b3554f9096b1 - - - - - -] [instance: 3dd3cc22-d02b-4948-b7e0-da630c6ad4b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:02:07 compute-0 ceph-mon[75015]: pgmap v2439: 321 pgs: 321 active+clean; 224 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 139 op/s
Nov 25 09:02:07 compute-0 nova_compute[253538]: 2025-11-25 09:02:07.451 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:02:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2440: 321 pgs: 321 active+clean; 229 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.6 MiB/s wr, 146 op/s
Nov 25 09:02:08 compute-0 nova_compute[253538]: 2025-11-25 09:02:08.703 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:09 compute-0 ceph-mon[75015]: pgmap v2440: 321 pgs: 321 active+clean; 229 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.6 MiB/s wr, 146 op/s
Nov 25 09:02:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2441: 321 pgs: 321 active+clean; 246 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 156 op/s
Nov 25 09:02:11 compute-0 ceph-mon[75015]: pgmap v2441: 321 pgs: 321 active+clean; 246 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 156 op/s
Nov 25 09:02:11 compute-0 nova_compute[253538]: 2025-11-25 09:02:11.764 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:11 compute-0 nova_compute[253538]: 2025-11-25 09:02:11.765 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:11 compute-0 nova_compute[253538]: 2025-11-25 09:02:11.765 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:11 compute-0 nova_compute[253538]: 2025-11-25 09:02:11.765 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:11 compute-0 nova_compute[253538]: 2025-11-25 09:02:11.766 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:11 compute-0 nova_compute[253538]: 2025-11-25 09:02:11.767 253542 INFO nova.compute.manager [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Terminating instance
Nov 25 09:02:11 compute-0 nova_compute[253538]: 2025-11-25 09:02:11.769 253542 DEBUG nova.compute.manager [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:02:11 compute-0 kernel: tapc1655f18-12 (unregistering): left promiscuous mode
Nov 25 09:02:11 compute-0 NetworkManager[48915]: <info>  [1764061331.8745] device (tapc1655f18-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:02:11 compute-0 ovn_controller[152859]: 2025-11-25T09:02:11Z|01355|binding|INFO|Releasing lport c1655f18-1254-402d-b9b6-7ca2d5a8bcdc from this chassis (sb_readonly=0)
Nov 25 09:02:11 compute-0 ovn_controller[152859]: 2025-11-25T09:02:11Z|01356|binding|INFO|Setting lport c1655f18-1254-402d-b9b6-7ca2d5a8bcdc down in Southbound
Nov 25 09:02:11 compute-0 ovn_controller[152859]: 2025-11-25T09:02:11Z|01357|binding|INFO|Removing iface tapc1655f18-12 ovn-installed in OVS
Nov 25 09:02:11 compute-0 nova_compute[253538]: 2025-11-25 09:02:11.888 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:11.897 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:b3:62 10.100.0.12', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2d372af7-dca6-4f5f-bd4c-beedbb8cc055', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d4cb606-f8e1-4247-876b-21f84cfe5e61, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:02:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:11.900 162739 INFO neutron.agent.ovn.metadata.agent [-] Port c1655f18-1254-402d-b9b6-7ca2d5a8bcdc in datapath 01d5ee0a-5a87-445b-8539-b33b1f9d0842 unbound from our chassis
Nov 25 09:02:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:11.903 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d5ee0a-5a87-445b-8539-b33b1f9d0842
Nov 25 09:02:11 compute-0 nova_compute[253538]: 2025-11-25 09:02:11.903 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:11.921 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4f159209-1cbd-4848-bfe0-1b5719373607]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:11 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000084.scope: Deactivated successfully.
Nov 25 09:02:11 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000084.scope: Consumed 13.992s CPU time.
Nov 25 09:02:11 compute-0 systemd-machined[215790]: Machine qemu-162-instance-00000084 terminated.
Nov 25 09:02:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:11.962 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[da8ea646-f717-4800-bc71-96c3ac2b7bdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:11.965 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4fd07486-23c2-4b48-9f4c-00a391dc4861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:11.995 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f472863d-1742-4ebf-a9d2-926f8692a8b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.011 253542 INFO nova.virt.libvirt.driver [-] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Instance destroyed successfully.
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.012 253542 DEBUG nova.objects.instance [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'resources' on Instance uuid 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:02:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:12.013 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[84f7a74b-6971-470c-97a4-295f0b25a1a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d5ee0a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:39:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 7, 'rx_bytes': 1370, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 7, 'rx_bytes': 1370, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662730, 'reachable_time': 37346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 13, 'inoctets': 936, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 13, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 936, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 13, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392098, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.024 253542 DEBUG nova.virt.libvirt.vif [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1970158435',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1970158435',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=132,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJWOQIYal/ibrReI+6RW0ugIdawfbseW0yB5VZg2UY83dhKROJ1pIw9MNC3hDvNCvPlRsh4rMt3whBfd3b+Yn9hu3bxkRBW96s2JDhI6eOb1sodQt8E/2VrhY6VxJQFmA==',key_name='tempest-TestSecurityGroupsBasicOps-688042133',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:01:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-ejm0jt8p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:01:52Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=2d372af7-dca6-4f5f-bd4c-beedbb8cc055,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.024 253542 DEBUG nova.network.os_vif_util [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "address": "fa:16:3e:cb:b3:62", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1655f18-12", "ovs_interfaceid": "c1655f18-1254-402d-b9b6-7ca2d5a8bcdc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.025 253542 DEBUG nova.network.os_vif_util [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:b3:62,bridge_name='br-int',has_traffic_filtering=True,id=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1655f18-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.026 253542 DEBUG os_vif [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:b3:62,bridge_name='br-int',has_traffic_filtering=True,id=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1655f18-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.029 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.029 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1655f18-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.031 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.032 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:12.032 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dd20c0cd-e6fa-4348-953e-aee9a0d4b402]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap01d5ee0a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662744, 'tstamp': 662744}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392103, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap01d5ee0a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662746, 'tstamp': 662746}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392103, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:12.035 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d5ee0a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.035 253542 INFO os_vif [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:b3:62,bridge_name='br-int',has_traffic_filtering=True,id=c1655f18-1254-402d-b9b6-7ca2d5a8bcdc,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1655f18-12')
Nov 25 09:02:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:12.039 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d5ee0a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:12.040 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:02:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:12.040 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d5ee0a-50, col_values=(('external_ids', {'iface-id': 'e613213f-7deb-43ce-acbb-25b798b2b340'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:12.041 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:02:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2442: 321 pgs: 321 active+clean; 246 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 138 op/s
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.055 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.123 253542 DEBUG nova.compute.manager [req-d49e277a-bc33-40e1-ae51-3ca684a7cca2 req-e27260ea-74c6-4479-99fe-337fbe625361 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-vif-unplugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.123 253542 DEBUG oslo_concurrency.lockutils [req-d49e277a-bc33-40e1-ae51-3ca684a7cca2 req-e27260ea-74c6-4479-99fe-337fbe625361 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.124 253542 DEBUG oslo_concurrency.lockutils [req-d49e277a-bc33-40e1-ae51-3ca684a7cca2 req-e27260ea-74c6-4479-99fe-337fbe625361 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.124 253542 DEBUG oslo_concurrency.lockutils [req-d49e277a-bc33-40e1-ae51-3ca684a7cca2 req-e27260ea-74c6-4479-99fe-337fbe625361 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.125 253542 DEBUG nova.compute.manager [req-d49e277a-bc33-40e1-ae51-3ca684a7cca2 req-e27260ea-74c6-4479-99fe-337fbe625361 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] No waiting events found dispatching network-vif-unplugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.125 253542 DEBUG nova.compute.manager [req-d49e277a-bc33-40e1-ae51-3ca684a7cca2 req-e27260ea-74c6-4479-99fe-337fbe625361 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-vif-unplugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:02:12 compute-0 ovn_controller[152859]: 2025-11-25T09:02:12Z|01358|binding|INFO|Releasing lport e613213f-7deb-43ce-acbb-25b798b2b340 from this chassis (sb_readonly=0)
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.427 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061317.426396, f05e074d-5838-4c4b-89dc-76afe386f635 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.428 253542 INFO nova.compute.manager [-] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] VM Stopped (Lifecycle Event)
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.450 253542 DEBUG nova.compute.manager [None req-59424f37-1b62-4e8c-8a26-954b8c195306 - - - - - -] [instance: f05e074d-5838-4c4b-89dc-76afe386f635] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.505 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.682 253542 INFO nova.virt.libvirt.driver [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Deleting instance files /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055_del
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.683 253542 INFO nova.virt.libvirt.driver [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Deletion of /var/lib/nova/instances/2d372af7-dca6-4f5f-bd4c-beedbb8cc055_del complete
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.763 253542 INFO nova.compute.manager [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Took 0.99 seconds to destroy the instance on the hypervisor.
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.764 253542 DEBUG oslo.service.loopingcall [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.764 253542 DEBUG nova.compute.manager [-] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:02:12 compute-0 nova_compute[253538]: 2025-11-25 09:02:12.764 253542 DEBUG nova.network.neutron [-] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:02:13 compute-0 ceph-mon[75015]: pgmap v2442: 321 pgs: 321 active+clean; 246 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 138 op/s
Nov 25 09:02:13 compute-0 nova_compute[253538]: 2025-11-25 09:02:13.704 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:13 compute-0 nova_compute[253538]: 2025-11-25 09:02:13.963 253542 DEBUG nova.network.neutron [-] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:02:13 compute-0 nova_compute[253538]: 2025-11-25 09:02:13.983 253542 INFO nova.compute.manager [-] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Took 1.22 seconds to deallocate network for instance.
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.046 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.046 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2443: 321 pgs: 321 active+clean; 229 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.063 253542 DEBUG nova.compute.manager [req-29171dff-1b2d-4599-98e6-9723c6ac3c90 req-23d7d12f-94d8-42de-bfd9-e5679260074c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-vif-deleted-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.120 253542 DEBUG oslo_concurrency.processutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.227 253542 DEBUG nova.compute.manager [req-2339ed3a-b89b-4b44-bd75-a04a3d27fef9 req-5df11435-3377-45bf-b4a9-c63789e69ca8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received event network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.228 253542 DEBUG oslo_concurrency.lockutils [req-2339ed3a-b89b-4b44-bd75-a04a3d27fef9 req-5df11435-3377-45bf-b4a9-c63789e69ca8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.228 253542 DEBUG oslo_concurrency.lockutils [req-2339ed3a-b89b-4b44-bd75-a04a3d27fef9 req-5df11435-3377-45bf-b4a9-c63789e69ca8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.229 253542 DEBUG oslo_concurrency.lockutils [req-2339ed3a-b89b-4b44-bd75-a04a3d27fef9 req-5df11435-3377-45bf-b4a9-c63789e69ca8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.229 253542 DEBUG nova.compute.manager [req-2339ed3a-b89b-4b44-bd75-a04a3d27fef9 req-5df11435-3377-45bf-b4a9-c63789e69ca8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] No waiting events found dispatching network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.229 253542 WARNING nova.compute.manager [req-2339ed3a-b89b-4b44-bd75-a04a3d27fef9 req-5df11435-3377-45bf-b4a9-c63789e69ca8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Received unexpected event network-vif-plugged-c1655f18-1254-402d-b9b6-7ca2d5a8bcdc for instance with vm_state deleted and task_state None.
Nov 25 09:02:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:02:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2462729414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.550 253542 DEBUG oslo_concurrency.processutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.558 253542 DEBUG nova.compute.provider_tree [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.570 253542 DEBUG nova.scheduler.client.report [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.588 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.617 253542 INFO nova.scheduler.client.report [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Deleted allocations for instance 2d372af7-dca6-4f5f-bd4c-beedbb8cc055
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.707 253542 DEBUG oslo_concurrency.lockutils [None req-5d1d5cc0-d945-44f7-8628-e35e03692700 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2d372af7-dca6-4f5f-bd4c-beedbb8cc055" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:14.726 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:02:14 compute-0 nova_compute[253538]: 2025-11-25 09:02:14.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:14 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:14.727 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:02:15 compute-0 ceph-mon[75015]: pgmap v2443: 321 pgs: 321 active+clean; 229 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 25 09:02:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2462729414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:02:15 compute-0 nova_compute[253538]: 2025-11-25 09:02:15.744 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:15 compute-0 nova_compute[253538]: 2025-11-25 09:02:15.744 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:15 compute-0 nova_compute[253538]: 2025-11-25 09:02:15.745 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:15 compute-0 nova_compute[253538]: 2025-11-25 09:02:15.745 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:15 compute-0 nova_compute[253538]: 2025-11-25 09:02:15.746 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:15 compute-0 nova_compute[253538]: 2025-11-25 09:02:15.747 253542 INFO nova.compute.manager [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Terminating instance
Nov 25 09:02:15 compute-0 nova_compute[253538]: 2025-11-25 09:02:15.748 253542 DEBUG nova.compute.manager [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:02:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2444: 321 pgs: 321 active+clean; 193 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.2 MiB/s wr, 141 op/s
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.180 253542 DEBUG nova.compute.manager [req-6e77dd4f-102b-4a5b-ad39-2084d781118c req-ea794970-2afb-45b0-afd2-f644292df707 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-changed-2719889c-c962-425f-9df3-6f3d741ca0ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.181 253542 DEBUG nova.compute.manager [req-6e77dd4f-102b-4a5b-ad39-2084d781118c req-ea794970-2afb-45b0-afd2-f644292df707 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Refreshing instance network info cache due to event network-changed-2719889c-c962-425f-9df3-6f3d741ca0ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.181 253542 DEBUG oslo_concurrency.lockutils [req-6e77dd4f-102b-4a5b-ad39-2084d781118c req-ea794970-2afb-45b0-afd2-f644292df707 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.182 253542 DEBUG oslo_concurrency.lockutils [req-6e77dd4f-102b-4a5b-ad39-2084d781118c req-ea794970-2afb-45b0-afd2-f644292df707 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.182 253542 DEBUG nova.network.neutron [req-6e77dd4f-102b-4a5b-ad39-2084d781118c req-ea794970-2afb-45b0-afd2-f644292df707 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Refreshing network info cache for port 2719889c-c962-425f-9df3-6f3d741ca0ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:02:16 compute-0 kernel: tap2719889c-c9 (unregistering): left promiscuous mode
Nov 25 09:02:16 compute-0 NetworkManager[48915]: <info>  [1764061336.3348] device (tap2719889c-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:02:16 compute-0 ovn_controller[152859]: 2025-11-25T09:02:16Z|01359|binding|INFO|Releasing lport 2719889c-c962-425f-9df3-6f3d741ca0ec from this chassis (sb_readonly=0)
Nov 25 09:02:16 compute-0 ovn_controller[152859]: 2025-11-25T09:02:16Z|01360|binding|INFO|Setting lport 2719889c-c962-425f-9df3-6f3d741ca0ec down in Southbound
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.344 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:16 compute-0 ovn_controller[152859]: 2025-11-25T09:02:16Z|01361|binding|INFO|Removing iface tap2719889c-c9 ovn-installed in OVS
Nov 25 09:02:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:16.355 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:1f:9b 10.100.0.7'], port_security=['fa:16:3e:0c:1f:9b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '830678ef-9f48-4175-aa6d-666c24a11689', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7a01ec2f-9868-40ca-9120-52725aa4431e 8a14d0f4-bb68-44c1-9d93-80bac0a038b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d4cb606-f8e1-4247-876b-21f84cfe5e61, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2719889c-c962-425f-9df3-6f3d741ca0ec) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:02:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:16.357 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2719889c-c962-425f-9df3-6f3d741ca0ec in datapath 01d5ee0a-5a87-445b-8539-b33b1f9d0842 unbound from our chassis
Nov 25 09:02:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:16.359 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01d5ee0a-5a87-445b-8539-b33b1f9d0842, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:02:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:16.360 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0621ca-28af-4f48-aa5e-1700de7b9647]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:16.361 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842 namespace which is not needed anymore
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.382 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:16 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000082.scope: Deactivated successfully.
Nov 25 09:02:16 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000082.scope: Consumed 16.356s CPU time.
Nov 25 09:02:16 compute-0 systemd-machined[215790]: Machine qemu-160-instance-00000082 terminated.
Nov 25 09:02:16 compute-0 neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842[390402]: [NOTICE]   (390407) : haproxy version is 2.8.14-c23fe91
Nov 25 09:02:16 compute-0 neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842[390402]: [NOTICE]   (390407) : path to executable is /usr/sbin/haproxy
Nov 25 09:02:16 compute-0 neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842[390402]: [WARNING]  (390407) : Exiting Master process...
Nov 25 09:02:16 compute-0 neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842[390402]: [WARNING]  (390407) : Exiting Master process...
Nov 25 09:02:16 compute-0 neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842[390402]: [ALERT]    (390407) : Current worker (390411) exited with code 143 (Terminated)
Nov 25 09:02:16 compute-0 neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842[390402]: [WARNING]  (390407) : All workers exited. Exiting... (0)
Nov 25 09:02:16 compute-0 systemd[1]: libpod-f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a.scope: Deactivated successfully.
Nov 25 09:02:16 compute-0 podman[392171]: 2025-11-25 09:02:16.60418489 +0000 UTC m=+0.102359267 container died f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.605 253542 INFO nova.virt.libvirt.driver [-] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Instance destroyed successfully.
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.606 253542 DEBUG nova.objects.instance [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'resources' on Instance uuid 830678ef-9f48-4175-aa6d-666c24a11689 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.623 253542 DEBUG nova.virt.libvirt.vif [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1626905983',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-1626905983',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=130,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJWOQIYal/ibrReI+6RW0ugIdawfbseW0yB5VZg2UY83dhKROJ1pIw9MNC3hDvNCvPlRsh4rMt3whBfd3b+Yn9hu3bxkRBW96s2JDhI6eOb1sodQt8E/2VrhY6VxJQFmA==',key_name='tempest-TestSecurityGroupsBasicOps-688042133',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:01:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-ddf3avyz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:01:14Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=830678ef-9f48-4175-aa6d-666c24a11689,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.624 253542 DEBUG nova.network.os_vif_util [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.625 253542 DEBUG nova.network.os_vif_util [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0c:1f:9b,bridge_name='br-int',has_traffic_filtering=True,id=2719889c-c962-425f-9df3-6f3d741ca0ec,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2719889c-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.625 253542 DEBUG os_vif [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:1f:9b,bridge_name='br-int',has_traffic_filtering=True,id=2719889c-c962-425f-9df3-6f3d741ca0ec,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2719889c-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.628 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2719889c-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.632 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:02:16 compute-0 nova_compute[253538]: 2025-11-25 09:02:16.635 253542 INFO os_vif [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:1f:9b,bridge_name='br-int',has_traffic_filtering=True,id=2719889c-c962-425f-9df3-6f3d741ca0ec,network=Network(01d5ee0a-5a87-445b-8539-b33b1f9d0842),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2719889c-c9')
Nov 25 09:02:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a-userdata-shm.mount: Deactivated successfully.
Nov 25 09:02:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-19a4031ab9bbc2e36451a0ff49fcc18576186761ed3f26382ce7cd4974678668-merged.mount: Deactivated successfully.
Nov 25 09:02:17 compute-0 podman[392171]: 2025-11-25 09:02:17.25188046 +0000 UTC m=+0.750054847 container cleanup f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 09:02:17 compute-0 systemd[1]: libpod-conmon-f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a.scope: Deactivated successfully.
Nov 25 09:02:17 compute-0 nova_compute[253538]: 2025-11-25 09:02:17.384 253542 DEBUG nova.network.neutron [req-6e77dd4f-102b-4a5b-ad39-2084d781118c req-ea794970-2afb-45b0-afd2-f644292df707 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Updated VIF entry in instance network info cache for port 2719889c-c962-425f-9df3-6f3d741ca0ec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:02:17 compute-0 nova_compute[253538]: 2025-11-25 09:02:17.385 253542 DEBUG nova.network.neutron [req-6e77dd4f-102b-4a5b-ad39-2084d781118c req-ea794970-2afb-45b0-afd2-f644292df707 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Updating instance_info_cache with network_info: [{"id": "2719889c-c962-425f-9df3-6f3d741ca0ec", "address": "fa:16:3e:0c:1f:9b", "network": {"id": "01d5ee0a-5a87-445b-8539-b33b1f9d0842", "bridge": "br-int", "label": "tempest-network-smoke--2130316306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2719889c-c9", "ovs_interfaceid": "2719889c-c962-425f-9df3-6f3d741ca0ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:02:17 compute-0 nova_compute[253538]: 2025-11-25 09:02:17.439 253542 DEBUG oslo_concurrency.lockutils [req-6e77dd4f-102b-4a5b-ad39-2084d781118c req-ea794970-2afb-45b0-afd2-f644292df707 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-830678ef-9f48-4175-aa6d-666c24a11689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:02:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:02:17 compute-0 ceph-mon[75015]: pgmap v2444: 321 pgs: 321 active+clean; 193 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.2 MiB/s wr, 141 op/s
Nov 25 09:02:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2445: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 966 KiB/s wr, 134 op/s
Nov 25 09:02:18 compute-0 nova_compute[253538]: 2025-11-25 09:02:18.281 253542 DEBUG nova.compute.manager [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-vif-unplugged-2719889c-c962-425f-9df3-6f3d741ca0ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:18 compute-0 nova_compute[253538]: 2025-11-25 09:02:18.282 253542 DEBUG oslo_concurrency.lockutils [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:18 compute-0 nova_compute[253538]: 2025-11-25 09:02:18.283 253542 DEBUG oslo_concurrency.lockutils [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:18 compute-0 nova_compute[253538]: 2025-11-25 09:02:18.283 253542 DEBUG oslo_concurrency.lockutils [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:18 compute-0 nova_compute[253538]: 2025-11-25 09:02:18.283 253542 DEBUG nova.compute.manager [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] No waiting events found dispatching network-vif-unplugged-2719889c-c962-425f-9df3-6f3d741ca0ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:02:18 compute-0 nova_compute[253538]: 2025-11-25 09:02:18.284 253542 DEBUG nova.compute.manager [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-vif-unplugged-2719889c-c962-425f-9df3-6f3d741ca0ec for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:02:18 compute-0 nova_compute[253538]: 2025-11-25 09:02:18.284 253542 DEBUG nova.compute.manager [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:18 compute-0 nova_compute[253538]: 2025-11-25 09:02:18.284 253542 DEBUG oslo_concurrency.lockutils [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "830678ef-9f48-4175-aa6d-666c24a11689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:18 compute-0 nova_compute[253538]: 2025-11-25 09:02:18.285 253542 DEBUG oslo_concurrency.lockutils [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:18 compute-0 nova_compute[253538]: 2025-11-25 09:02:18.285 253542 DEBUG oslo_concurrency.lockutils [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:18 compute-0 nova_compute[253538]: 2025-11-25 09:02:18.286 253542 DEBUG nova.compute.manager [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] No waiting events found dispatching network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:02:18 compute-0 podman[392230]: 2025-11-25 09:02:18.286292395 +0000 UTC m=+1.006207089 container remove f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:02:18 compute-0 nova_compute[253538]: 2025-11-25 09:02:18.286 253542 WARNING nova.compute.manager [req-188c9f02-3e97-4115-878d-bc6ed7747401 req-57d87d45-98f7-4591-bc5f-32532bca78a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received unexpected event network-vif-plugged-2719889c-c962-425f-9df3-6f3d741ca0ec for instance with vm_state active and task_state deleting.
Nov 25 09:02:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.293 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fd75971a-87d7-4b5e-bcb6-1296a1a7aa96]: (4, ('Tue Nov 25 09:02:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842 (f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a)\nf925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a\nTue Nov 25 09:02:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842 (f925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a)\nf925e285d80e411ef3468a7a3967feb9f7a3b50cd31f59d28a132bc43d5e9a6a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.294 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[116ec404-e7c7-4ebc-80f2-72cdcf09a020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.295 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d5ee0a-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:18 compute-0 nova_compute[253538]: 2025-11-25 09:02:18.296 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:18 compute-0 kernel: tap01d5ee0a-50: left promiscuous mode
Nov 25 09:02:18 compute-0 nova_compute[253538]: 2025-11-25 09:02:18.311 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.314 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbe58ad-4dda-41ea-991f-0d1251717669]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.334 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f596e55-4d06-4e52-bac6-b193dacc1f66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.334 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4a18cc45-7c52-4a5c-a470-0fd9e7b28f53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.350 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e558b2f0-4cda-4ab8-9a20-ff39d153baaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662722, 'reachable_time': 32740, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392246, 'error': None, 'target': 'ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d01d5ee0a\x2d5a87\x2d445b\x2d8539\x2db33b1f9d0842.mount: Deactivated successfully.
Nov 25 09:02:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.353 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01d5ee0a-5a87-445b-8539-b33b1f9d0842 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:02:18 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:18.354 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[60abf6d5-c078-4018-becb-39900265daf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:18 compute-0 nova_compute[253538]: 2025-11-25 09:02:18.709 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:18 compute-0 sudo[392249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:02:18 compute-0 sudo[392249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:18 compute-0 sudo[392249]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:18 compute-0 sudo[392274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:02:18 compute-0 sudo[392274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:18 compute-0 sudo[392274]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:18 compute-0 sudo[392299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:02:18 compute-0 sudo[392299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:18 compute-0 sudo[392299]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:18 compute-0 sudo[392324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 09:02:18 compute-0 sudo[392324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:18 compute-0 sshd-session[392247]: Invalid user ansible from 193.32.162.151 port 46638
Nov 25 09:02:19 compute-0 ceph-mon[75015]: pgmap v2445: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 966 KiB/s wr, 134 op/s
Nov 25 09:02:19 compute-0 sshd-session[392247]: Connection closed by invalid user ansible 193.32.162.151 port 46638 [preauth]
Nov 25 09:02:19 compute-0 sudo[392324]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:02:19 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:02:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:02:19 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:02:19 compute-0 sudo[392369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:02:19 compute-0 sudo[392369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:19 compute-0 sudo[392369]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:19 compute-0 sudo[392394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:02:19 compute-0 sudo[392394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:19 compute-0 sudo[392394]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:19 compute-0 sudo[392419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:02:19 compute-0 sudo[392419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:19 compute-0 sudo[392419]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:19 compute-0 sudo[392444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:02:19 compute-0 sudo[392444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2446: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 175 KiB/s rd, 526 KiB/s wr, 93 op/s
Nov 25 09:02:20 compute-0 sudo[392444]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:02:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:02:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:02:20 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:02:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:02:20 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:02:20 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 98628459-36dd-4b35-8bd4-74d5197386eb does not exist
Nov 25 09:02:20 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 427ae1a3-3f72-44a8-9d50-4511e3cf4cf1 does not exist
Nov 25 09:02:20 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 4d590eec-0606-4779-8627-b76c553221be does not exist
Nov 25 09:02:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:02:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:02:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:02:20 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:02:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:02:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:02:20 compute-0 sudo[392501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:02:20 compute-0 sudo[392501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:20 compute-0 sudo[392501]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:02:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:02:20 compute-0 ceph-mon[75015]: pgmap v2446: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 175 KiB/s rd, 526 KiB/s wr, 93 op/s
Nov 25 09:02:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:02:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:02:20 compute-0 sudo[392526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:02:20 compute-0 sudo[392526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:20 compute-0 sudo[392526]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:21 compute-0 sudo[392551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:02:21 compute-0 sudo[392551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:21 compute-0 sudo[392551]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:21 compute-0 sudo[392576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:02:21 compute-0 sudo[392576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:21 compute-0 podman[392643]: 2025-11-25 09:02:21.474439852 +0000 UTC m=+0.097671300 container create 7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:02:21 compute-0 podman[392643]: 2025-11-25 09:02:21.400036937 +0000 UTC m=+0.023268465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:02:21 compute-0 systemd[1]: Started libpod-conmon-7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7.scope.
Nov 25 09:02:21 compute-0 podman[392657]: 2025-11-25 09:02:21.606181948 +0000 UTC m=+0.078974340 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Nov 25 09:02:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:02:21 compute-0 nova_compute[253538]: 2025-11-25 09:02:21.660 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:21 compute-0 podman[392643]: 2025-11-25 09:02:21.676611895 +0000 UTC m=+0.299843333 container init 7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:02:21 compute-0 podman[392643]: 2025-11-25 09:02:21.684724076 +0000 UTC m=+0.307955504 container start 7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 09:02:21 compute-0 peaceful_noether[392689]: 167 167
Nov 25 09:02:21 compute-0 systemd[1]: libpod-7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7.scope: Deactivated successfully.
Nov 25 09:02:21 compute-0 podman[392643]: 2025-11-25 09:02:21.725755393 +0000 UTC m=+0.348986811 container attach 7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:02:21 compute-0 podman[392643]: 2025-11-25 09:02:21.726228145 +0000 UTC m=+0.349459553 container died 7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 09:02:21 compute-0 podman[392658]: 2025-11-25 09:02:21.861699352 +0000 UTC m=+0.334119635 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:02:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d0060f63eaa902da1b7dbee76a53390e9dab1ce284c38454bab3deb8e55db62-merged.mount: Deactivated successfully.
Nov 25 09:02:21 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:02:21 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:02:21 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:02:21 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:02:22 compute-0 podman[392643]: 2025-11-25 09:02:22.041202118 +0000 UTC m=+0.664433536 container remove 7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:02:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2447: 321 pgs: 321 active+clean; 139 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 24 KiB/s wr, 40 op/s
Nov 25 09:02:22 compute-0 systemd[1]: libpod-conmon-7224245a95c09ebe038660fa87a25db2281460425bc286a3fc13d758052283c7.scope: Deactivated successfully.
Nov 25 09:02:22 compute-0 podman[392723]: 2025-11-25 09:02:22.252598243 +0000 UTC m=+0.081317285 container create 0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 09:02:22 compute-0 podman[392723]: 2025-11-25 09:02:22.197786961 +0000 UTC m=+0.026506003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:02:22 compute-0 systemd[1]: Started libpod-conmon-0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca.scope.
Nov 25 09:02:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b89a5fbbd3ca7973748cf79a9ef380532fbdbcc6bbd2b614f8330ac845ab9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b89a5fbbd3ca7973748cf79a9ef380532fbdbcc6bbd2b614f8330ac845ab9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b89a5fbbd3ca7973748cf79a9ef380532fbdbcc6bbd2b614f8330ac845ab9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b89a5fbbd3ca7973748cf79a9ef380532fbdbcc6bbd2b614f8330ac845ab9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b89a5fbbd3ca7973748cf79a9ef380532fbdbcc6bbd2b614f8330ac845ab9f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:02:22 compute-0 podman[392723]: 2025-11-25 09:02:22.386844167 +0000 UTC m=+0.215563189 container init 0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:02:22 compute-0 podman[392723]: 2025-11-25 09:02:22.395223185 +0000 UTC m=+0.223942227 container start 0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:02:22 compute-0 podman[392723]: 2025-11-25 09:02:22.588396953 +0000 UTC m=+0.417115965 container attach 0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 09:02:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:02:22 compute-0 nova_compute[253538]: 2025-11-25 09:02:22.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:23 compute-0 ceph-mon[75015]: pgmap v2447: 321 pgs: 321 active+clean; 139 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 24 KiB/s wr, 40 op/s
Nov 25 09:02:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:02:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:02:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:02:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:02:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:02:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:02:23 compute-0 nervous_germain[392739]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:02:23 compute-0 nervous_germain[392739]: --> relative data size: 1.0
Nov 25 09:02:23 compute-0 nervous_germain[392739]: --> All data devices are unavailable
Nov 25 09:02:23 compute-0 nova_compute[253538]: 2025-11-25 09:02:23.712 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:23 compute-0 systemd[1]: libpod-0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca.scope: Deactivated successfully.
Nov 25 09:02:23 compute-0 systemd[1]: libpod-0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca.scope: Consumed 1.101s CPU time.
Nov 25 09:02:23 compute-0 podman[392723]: 2025-11-25 09:02:23.753400923 +0000 UTC m=+1.582119955 container died 0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:02:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2448: 321 pgs: 321 active+clean; 118 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 43 op/s
Nov 25 09:02:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3b89a5fbbd3ca7973748cf79a9ef380532fbdbcc6bbd2b614f8330ac845ab9f-merged.mount: Deactivated successfully.
Nov 25 09:02:24 compute-0 nova_compute[253538]: 2025-11-25 09:02:24.388 253542 INFO nova.virt.libvirt.driver [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Deleting instance files /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689_del
Nov 25 09:02:24 compute-0 nova_compute[253538]: 2025-11-25 09:02:24.390 253542 INFO nova.virt.libvirt.driver [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Deletion of /var/lib/nova/instances/830678ef-9f48-4175-aa6d-666c24a11689_del complete
Nov 25 09:02:24 compute-0 nova_compute[253538]: 2025-11-25 09:02:24.458 253542 INFO nova.compute.manager [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Took 8.71 seconds to destroy the instance on the hypervisor.
Nov 25 09:02:24 compute-0 nova_compute[253538]: 2025-11-25 09:02:24.460 253542 DEBUG oslo.service.loopingcall [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:02:24 compute-0 nova_compute[253538]: 2025-11-25 09:02:24.460 253542 DEBUG nova.compute.manager [-] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:02:24 compute-0 nova_compute[253538]: 2025-11-25 09:02:24.460 253542 DEBUG nova.network.neutron [-] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:02:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:24.728 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:24 compute-0 ceph-mon[75015]: pgmap v2448: 321 pgs: 321 active+clean; 118 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 43 op/s
Nov 25 09:02:25 compute-0 podman[392723]: 2025-11-25 09:02:25.071536101 +0000 UTC m=+2.900255123 container remove 0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:02:25 compute-0 sudo[392576]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:25 compute-0 sshd-session[392768]: Received disconnect from 45.202.211.6 port 34072:11: Bye Bye [preauth]
Nov 25 09:02:25 compute-0 sshd-session[392768]: Disconnected from authenticating user root 45.202.211.6 port 34072 [preauth]
Nov 25 09:02:25 compute-0 systemd[1]: libpod-conmon-0f549d782a02e1072866029a7f4c34cd3ddc205ed2512c6982bbca721df6d8ca.scope: Deactivated successfully.
Nov 25 09:02:25 compute-0 sudo[392782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:02:25 compute-0 sudo[392782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:25 compute-0 sudo[392782]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:25 compute-0 sudo[392807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:02:25 compute-0 sudo[392807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:25 compute-0 sudo[392807]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:25 compute-0 sudo[392832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:02:25 compute-0 sudo[392832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:25 compute-0 sudo[392832]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:25 compute-0 nova_compute[253538]: 2025-11-25 09:02:25.391 253542 DEBUG nova.network.neutron [-] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:02:25 compute-0 nova_compute[253538]: 2025-11-25 09:02:25.426 253542 DEBUG nova.compute.manager [req-fc95b160-0454-4227-810e-4775cbe428bb req-bc00770d-1f3f-4a6d-bf96-3105ef6ab75b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Received event network-vif-deleted-2719889c-c962-425f-9df3-6f3d741ca0ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:25 compute-0 nova_compute[253538]: 2025-11-25 09:02:25.427 253542 INFO nova.compute.manager [req-fc95b160-0454-4227-810e-4775cbe428bb req-bc00770d-1f3f-4a6d-bf96-3105ef6ab75b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Neutron deleted interface 2719889c-c962-425f-9df3-6f3d741ca0ec; detaching it from the instance and deleting it from the info cache
Nov 25 09:02:25 compute-0 nova_compute[253538]: 2025-11-25 09:02:25.427 253542 DEBUG nova.network.neutron [req-fc95b160-0454-4227-810e-4775cbe428bb req-bc00770d-1f3f-4a6d-bf96-3105ef6ab75b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:02:25 compute-0 nova_compute[253538]: 2025-11-25 09:02:25.433 253542 INFO nova.compute.manager [-] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Took 0.97 seconds to deallocate network for instance.
Nov 25 09:02:25 compute-0 sudo[392857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:02:25 compute-0 sudo[392857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:25 compute-0 nova_compute[253538]: 2025-11-25 09:02:25.462 253542 DEBUG nova.compute.manager [req-fc95b160-0454-4227-810e-4775cbe428bb req-bc00770d-1f3f-4a6d-bf96-3105ef6ab75b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Detach interface failed, port_id=2719889c-c962-425f-9df3-6f3d741ca0ec, reason: Instance 830678ef-9f48-4175-aa6d-666c24a11689 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 09:02:25 compute-0 nova_compute[253538]: 2025-11-25 09:02:25.563 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:25 compute-0 nova_compute[253538]: 2025-11-25 09:02:25.564 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:25 compute-0 nova_compute[253538]: 2025-11-25 09:02:25.640 253542 DEBUG oslo_concurrency.processutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:25 compute-0 podman[392925]: 2025-11-25 09:02:25.803166555 +0000 UTC m=+0.031201811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:02:25 compute-0 podman[392925]: 2025-11-25 09:02:25.909197221 +0000 UTC m=+0.137232467 container create f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wu, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 09:02:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:02:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2700976738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:02:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2449: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 11 KiB/s wr, 48 op/s
Nov 25 09:02:26 compute-0 nova_compute[253538]: 2025-11-25 09:02:26.075 253542 DEBUG oslo_concurrency.processutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:26 compute-0 nova_compute[253538]: 2025-11-25 09:02:26.082 253542 DEBUG nova.compute.provider_tree [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:02:26 compute-0 nova_compute[253538]: 2025-11-25 09:02:26.096 253542 DEBUG nova.scheduler.client.report [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:02:26 compute-0 nova_compute[253538]: 2025-11-25 09:02:26.119 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:26 compute-0 nova_compute[253538]: 2025-11-25 09:02:26.158 253542 INFO nova.scheduler.client.report [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Deleted allocations for instance 830678ef-9f48-4175-aa6d-666c24a11689
Nov 25 09:02:26 compute-0 systemd[1]: Started libpod-conmon-f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc.scope.
Nov 25 09:02:26 compute-0 nova_compute[253538]: 2025-11-25 09:02:26.241 253542 DEBUG oslo_concurrency.lockutils [None req-7776ed08-73c2-45b3-83c1-bd8e3abb85b5 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "830678ef-9f48-4175-aa6d-666c24a11689" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:02:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2700976738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:02:26 compute-0 podman[392925]: 2025-11-25 09:02:26.358652115 +0000 UTC m=+0.586687381 container init f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wu, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:02:26 compute-0 podman[392925]: 2025-11-25 09:02:26.367803643 +0000 UTC m=+0.595838879 container start f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wu, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:02:26 compute-0 gracious_wu[392960]: 167 167
Nov 25 09:02:26 compute-0 systemd[1]: libpod-f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc.scope: Deactivated successfully.
Nov 25 09:02:26 compute-0 conmon[392960]: conmon f932d28ebf0f7c92599f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc.scope/container/memory.events
Nov 25 09:02:26 compute-0 podman[392925]: 2025-11-25 09:02:26.645895183 +0000 UTC m=+0.873930439 container attach f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:02:26 compute-0 podman[392925]: 2025-11-25 09:02:26.648638198 +0000 UTC m=+0.876673434 container died f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wu, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:02:26 compute-0 nova_compute[253538]: 2025-11-25 09:02:26.663 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:26 compute-0 nova_compute[253538]: 2025-11-25 09:02:26.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac65601efe6fc9495de3437e29022a7af158daa35c868bd48f67727d684505e5-merged.mount: Deactivated successfully.
Nov 25 09:02:27 compute-0 nova_compute[253538]: 2025-11-25 09:02:27.009 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061332.008601, 2d372af7-dca6-4f5f-bd4c-beedbb8cc055 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:02:27 compute-0 nova_compute[253538]: 2025-11-25 09:02:27.011 253542 INFO nova.compute.manager [-] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] VM Stopped (Lifecycle Event)
Nov 25 09:02:27 compute-0 nova_compute[253538]: 2025-11-25 09:02:27.033 253542 DEBUG nova.compute.manager [None req-04cd7668-d974-4567-bad0-e40dd3274c16 - - - - - -] [instance: 2d372af7-dca6-4f5f-bd4c-beedbb8cc055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:02:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:02:27 compute-0 ceph-mon[75015]: pgmap v2449: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 11 KiB/s wr, 48 op/s
Nov 25 09:02:28 compute-0 podman[392925]: 2025-11-25 09:02:28.028957351 +0000 UTC m=+2.256992577 container remove f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:02:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2450: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 3.0 KiB/s wr, 41 op/s
Nov 25 09:02:28 compute-0 systemd[1]: libpod-conmon-f932d28ebf0f7c92599fad90ed35f57befdc10d14dabf5247b0bd4a4042e6ccc.scope: Deactivated successfully.
Nov 25 09:02:28 compute-0 podman[392983]: 2025-11-25 09:02:28.284563629 +0000 UTC m=+0.096751001 container create b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_grothendieck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:02:28 compute-0 podman[392983]: 2025-11-25 09:02:28.225040601 +0000 UTC m=+0.037227983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:02:28 compute-0 systemd[1]: Started libpod-conmon-b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc.scope.
Nov 25 09:02:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9b498e189d8517d6f19a521ba1ee849073bed21d0aec61620bda4ce9287153/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9b498e189d8517d6f19a521ba1ee849073bed21d0aec61620bda4ce9287153/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9b498e189d8517d6f19a521ba1ee849073bed21d0aec61620bda4ce9287153/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9b498e189d8517d6f19a521ba1ee849073bed21d0aec61620bda4ce9287153/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:02:28 compute-0 nova_compute[253538]: 2025-11-25 09:02:28.714 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:28 compute-0 podman[392983]: 2025-11-25 09:02:28.839516624 +0000 UTC m=+0.651704066 container init b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:02:28 compute-0 podman[392983]: 2025-11-25 09:02:28.854718098 +0000 UTC m=+0.666905500 container start b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 09:02:29 compute-0 podman[392983]: 2025-11-25 09:02:29.20171843 +0000 UTC m=+1.013905842 container attach b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 09:02:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:02:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3062220266' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:02:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:02:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3062220266' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:02:29 compute-0 ceph-mon[75015]: pgmap v2450: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 3.0 KiB/s wr, 41 op/s
Nov 25 09:02:29 compute-0 podman[393002]: 2025-11-25 09:02:29.390463112 +0000 UTC m=+0.634211013 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 09:02:29 compute-0 nova_compute[253538]: 2025-11-25 09:02:29.571 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:29 compute-0 nova_compute[253538]: 2025-11-25 09:02:29.676 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]: {
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:     "0": [
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:         {
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "devices": [
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "/dev/loop3"
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             ],
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "lv_name": "ceph_lv0",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "lv_size": "21470642176",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "name": "ceph_lv0",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "tags": {
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.cluster_name": "ceph",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.crush_device_class": "",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.encrypted": "0",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.osd_id": "0",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.type": "block",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.vdo": "0"
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             },
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "type": "block",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "vg_name": "ceph_vg0"
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:         }
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:     ],
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:     "1": [
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:         {
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "devices": [
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "/dev/loop4"
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             ],
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "lv_name": "ceph_lv1",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "lv_size": "21470642176",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "name": "ceph_lv1",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "tags": {
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.cluster_name": "ceph",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.crush_device_class": "",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.encrypted": "0",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.osd_id": "1",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.type": "block",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.vdo": "0"
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             },
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "type": "block",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "vg_name": "ceph_vg1"
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:         }
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:     ],
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:     "2": [
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:         {
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "devices": [
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "/dev/loop5"
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             ],
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "lv_name": "ceph_lv2",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "lv_size": "21470642176",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "name": "ceph_lv2",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "tags": {
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.cluster_name": "ceph",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.crush_device_class": "",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.encrypted": "0",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.osd_id": "2",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.type": "block",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:                 "ceph.vdo": "0"
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             },
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "type": "block",
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:             "vg_name": "ceph_vg2"
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:         }
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]:     ]
Nov 25 09:02:29 compute-0 cool_grothendieck[392999]: }
Nov 25 09:02:29 compute-0 systemd[1]: libpod-b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc.scope: Deactivated successfully.
Nov 25 09:02:29 compute-0 podman[392983]: 2025-11-25 09:02:29.790052013 +0000 UTC m=+1.602239405 container died b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_grothendieck, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:02:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2451: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 09:02:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d9b498e189d8517d6f19a521ba1ee849073bed21d0aec61620bda4ce9287153-merged.mount: Deactivated successfully.
Nov 25 09:02:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3062220266' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:02:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3062220266' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:02:31 compute-0 podman[392983]: 2025-11-25 09:02:31.173937733 +0000 UTC m=+2.986125115 container remove b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 09:02:31 compute-0 systemd[1]: libpod-conmon-b812990ce5cb99fe6431850fbb0bdc766c95ddbbd3f2055d14e3b3c62c81b6cc.scope: Deactivated successfully.
Nov 25 09:02:31 compute-0 sudo[392857]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:31 compute-0 sudo[393045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:02:31 compute-0 sudo[393045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:31 compute-0 sudo[393045]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:31 compute-0 sudo[393070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:02:31 compute-0 sudo[393070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:31 compute-0 sudo[393070]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:31 compute-0 sudo[393095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:02:31 compute-0 sudo[393095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:31 compute-0 sudo[393095]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:31 compute-0 sudo[393120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:02:31 compute-0 sudo[393120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:31 compute-0 nova_compute[253538]: 2025-11-25 09:02:31.598 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061336.5972981, 830678ef-9f48-4175-aa6d-666c24a11689 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:02:31 compute-0 nova_compute[253538]: 2025-11-25 09:02:31.599 253542 INFO nova.compute.manager [-] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] VM Stopped (Lifecycle Event)
Nov 25 09:02:31 compute-0 nova_compute[253538]: 2025-11-25 09:02:31.621 253542 DEBUG nova.compute.manager [None req-d6d076e5-3f17-49fb-9ca7-1a1fd9df23f3 - - - - - -] [instance: 830678ef-9f48-4175-aa6d-666c24a11689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:02:31 compute-0 ceph-mon[75015]: pgmap v2451: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 09:02:31 compute-0 nova_compute[253538]: 2025-11-25 09:02:31.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:31 compute-0 podman[393186]: 2025-11-25 09:02:31.907061972 +0000 UTC m=+0.059986951 container create 691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 09:02:31 compute-0 systemd[1]: Started libpod-conmon-691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0.scope.
Nov 25 09:02:31 compute-0 podman[393186]: 2025-11-25 09:02:31.872047791 +0000 UTC m=+0.024972780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:02:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:02:32 compute-0 podman[393186]: 2025-11-25 09:02:32.038146766 +0000 UTC m=+0.191071735 container init 691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:02:32 compute-0 podman[393186]: 2025-11-25 09:02:32.045082654 +0000 UTC m=+0.198007633 container start 691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:02:32 compute-0 epic_brahmagupta[393202]: 167 167
Nov 25 09:02:32 compute-0 systemd[1]: libpod-691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0.scope: Deactivated successfully.
Nov 25 09:02:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2452: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Nov 25 09:02:32 compute-0 podman[393186]: 2025-11-25 09:02:32.113896055 +0000 UTC m=+0.266821054 container attach 691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:02:32 compute-0 podman[393186]: 2025-11-25 09:02:32.115084197 +0000 UTC m=+0.268009196 container died 691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:02:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcc30919b6d79a0bd141af1944a0aa9c48ca12d7731e17f4f6a3f0562508fcbe-merged.mount: Deactivated successfully.
Nov 25 09:02:32 compute-0 podman[393186]: 2025-11-25 09:02:32.323574715 +0000 UTC m=+0.476499694 container remove 691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 09:02:32 compute-0 systemd[1]: libpod-conmon-691ecbd83df97660eef38e10fdb484b2594163cc075b1a175f4b5edb48e971c0.scope: Deactivated successfully.
Nov 25 09:02:32 compute-0 podman[393228]: 2025-11-25 09:02:32.548631862 +0000 UTC m=+0.081283370 container create 08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:02:32 compute-0 podman[393228]: 2025-11-25 09:02:32.496456345 +0000 UTC m=+0.029107833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:02:32 compute-0 systemd[1]: Started libpod-conmon-08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8.scope.
Nov 25 09:02:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:02:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831f78d36eb69a72a4585aba9444b9d81ea77494fe8786460d1e6860559ac20d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:02:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831f78d36eb69a72a4585aba9444b9d81ea77494fe8786460d1e6860559ac20d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:02:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831f78d36eb69a72a4585aba9444b9d81ea77494fe8786460d1e6860559ac20d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:02:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831f78d36eb69a72a4585aba9444b9d81ea77494fe8786460d1e6860559ac20d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:02:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:02:32 compute-0 podman[393228]: 2025-11-25 09:02:32.662527179 +0000 UTC m=+0.195178677 container init 08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:02:32 compute-0 ceph-mon[75015]: pgmap v2452: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Nov 25 09:02:32 compute-0 podman[393228]: 2025-11-25 09:02:32.67472074 +0000 UTC m=+0.207372208 container start 08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 09:02:32 compute-0 podman[393228]: 2025-11-25 09:02:32.680044765 +0000 UTC m=+0.212696263 container attach 08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 09:02:33 compute-0 priceless_feistel[393244]: {
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:         "osd_id": 1,
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:         "type": "bluestore"
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:     },
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:         "osd_id": 2,
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:         "type": "bluestore"
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:     },
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:         "osd_id": 0,
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:         "type": "bluestore"
Nov 25 09:02:33 compute-0 priceless_feistel[393244]:     }
Nov 25 09:02:33 compute-0 priceless_feistel[393244]: }
Nov 25 09:02:33 compute-0 systemd[1]: libpod-08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8.scope: Deactivated successfully.
Nov 25 09:02:33 compute-0 podman[393228]: 2025-11-25 09:02:33.671803275 +0000 UTC m=+1.204454733 container died 08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 09:02:33 compute-0 systemd[1]: libpod-08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8.scope: Consumed 1.001s CPU time.
Nov 25 09:02:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-831f78d36eb69a72a4585aba9444b9d81ea77494fe8786460d1e6860559ac20d-merged.mount: Deactivated successfully.
Nov 25 09:02:33 compute-0 nova_compute[253538]: 2025-11-25 09:02:33.716 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:33 compute-0 podman[393228]: 2025-11-25 09:02:33.740673968 +0000 UTC m=+1.273325436 container remove 08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:02:33 compute-0 systemd[1]: libpod-conmon-08aecfa979a6b575e9d3e7b59ba41656941354766f961c4dc6f93a4527ff91f8.scope: Deactivated successfully.
Nov 25 09:02:33 compute-0 sudo[393120]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:02:33 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:02:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:02:33 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:02:33 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 32f8db10-73f3-4d3a-9b39-7fe4cd84d93c does not exist
Nov 25 09:02:33 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 5922ec99-a8fc-43b5-9d82-d92a1449200b does not exist
Nov 25 09:02:33 compute-0 sudo[393290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:02:33 compute-0 sudo[393290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:33 compute-0 sudo[393290]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:33 compute-0 sudo[393315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:02:33 compute-0 sudo[393315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:02:33 compute-0 sudo[393315]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2453: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 596 B/s wr, 16 op/s
Nov 25 09:02:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:02:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:02:34 compute-0 ceph-mon[75015]: pgmap v2453: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 596 B/s wr, 16 op/s
Nov 25 09:02:34 compute-0 nova_compute[253538]: 2025-11-25 09:02:34.977 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:34 compute-0 nova_compute[253538]: 2025-11-25 09:02:34.977 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:34 compute-0 nova_compute[253538]: 2025-11-25 09:02:34.994 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.075 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.075 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.085 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.085 253542 INFO nova.compute.claims [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.179 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:02:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3925754751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.603 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.610 253542 DEBUG nova.compute.provider_tree [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.624 253542 DEBUG nova.scheduler.client.report [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.656 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.657 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.707 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.708 253542 DEBUG nova.network.neutron [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.727 253542 INFO nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.741 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.872 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.874 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.874 253542 INFO nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Creating image(s)
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.904 253542 DEBUG nova.storage.rbd_utils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 28376454-90b2-431d-9052-48b369973c8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.924 253542 DEBUG nova.storage.rbd_utils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 28376454-90b2-431d-9052-48b369973c8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.946 253542 DEBUG nova.storage.rbd_utils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 28376454-90b2-431d-9052-48b369973c8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.950 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:35 compute-0 nova_compute[253538]: 2025-11-25 09:02:35.987 253542 DEBUG nova.policy [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:02:36 compute-0 nova_compute[253538]: 2025-11-25 09:02:36.027 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:36 compute-0 nova_compute[253538]: 2025-11-25 09:02:36.028 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:36 compute-0 nova_compute[253538]: 2025-11-25 09:02:36.028 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:36 compute-0 nova_compute[253538]: 2025-11-25 09:02:36.029 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2454: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 596 B/s wr, 13 op/s
Nov 25 09:02:36 compute-0 nova_compute[253538]: 2025-11-25 09:02:36.090 253542 DEBUG nova.storage.rbd_utils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 28376454-90b2-431d-9052-48b369973c8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:02:36 compute-0 nova_compute[253538]: 2025-11-25 09:02:36.093 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 28376454-90b2-431d-9052-48b369973c8e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3925754751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:02:36 compute-0 nova_compute[253538]: 2025-11-25 09:02:36.676 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:37 compute-0 ceph-mon[75015]: pgmap v2454: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 596 B/s wr, 13 op/s
Nov 25 09:02:37 compute-0 nova_compute[253538]: 2025-11-25 09:02:37.466 253542 DEBUG nova.network.neutron [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Successfully created port: 9918858c-8b7c-4d3f-aada-d04fcb6eab03 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:02:37 compute-0 nova_compute[253538]: 2025-11-25 09:02:37.592 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 28376454-90b2-431d-9052-48b369973c8e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:37 compute-0 nova_compute[253538]: 2025-11-25 09:02:37.641 253542 DEBUG nova.storage.rbd_utils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 28376454-90b2-431d-9052-48b369973c8e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:02:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:02:37 compute-0 nova_compute[253538]: 2025-11-25 09:02:37.846 253542 DEBUG nova.objects.instance [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 28376454-90b2-431d-9052-48b369973c8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:02:37 compute-0 nova_compute[253538]: 2025-11-25 09:02:37.858 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:02:37 compute-0 nova_compute[253538]: 2025-11-25 09:02:37.859 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Ensure instance console log exists: /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:02:37 compute-0 nova_compute[253538]: 2025-11-25 09:02:37.859 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:37 compute-0 nova_compute[253538]: 2025-11-25 09:02:37.860 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:37 compute-0 nova_compute[253538]: 2025-11-25 09:02:37.860 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2455: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 597 B/s wr, 3 op/s
Nov 25 09:02:38 compute-0 nova_compute[253538]: 2025-11-25 09:02:38.181 253542 DEBUG nova.network.neutron [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Successfully created port: fd44d480-0242-4c7a-b02e-f58852c99ca0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:02:38 compute-0 ceph-mon[75015]: pgmap v2455: 321 pgs: 321 active+clean; 88 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 597 B/s wr, 3 op/s
Nov 25 09:02:38 compute-0 nova_compute[253538]: 2025-11-25 09:02:38.719 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:39 compute-0 nova_compute[253538]: 2025-11-25 09:02:39.491 253542 DEBUG nova.network.neutron [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Successfully updated port: 9918858c-8b7c-4d3f-aada-d04fcb6eab03 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:02:39 compute-0 nova_compute[253538]: 2025-11-25 09:02:39.571 253542 DEBUG nova.compute.manager [req-91b54e20-efd4-4340-b1e5-7e37352c3488 req-8a513184-1086-46a0-b7ba-3fce159f1042 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-changed-9918858c-8b7c-4d3f-aada-d04fcb6eab03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:39 compute-0 nova_compute[253538]: 2025-11-25 09:02:39.572 253542 DEBUG nova.compute.manager [req-91b54e20-efd4-4340-b1e5-7e37352c3488 req-8a513184-1086-46a0-b7ba-3fce159f1042 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Refreshing instance network info cache due to event network-changed-9918858c-8b7c-4d3f-aada-d04fcb6eab03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:02:39 compute-0 nova_compute[253538]: 2025-11-25 09:02:39.573 253542 DEBUG oslo_concurrency.lockutils [req-91b54e20-efd4-4340-b1e5-7e37352c3488 req-8a513184-1086-46a0-b7ba-3fce159f1042 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:02:39 compute-0 nova_compute[253538]: 2025-11-25 09:02:39.574 253542 DEBUG oslo_concurrency.lockutils [req-91b54e20-efd4-4340-b1e5-7e37352c3488 req-8a513184-1086-46a0-b7ba-3fce159f1042 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:02:39 compute-0 nova_compute[253538]: 2025-11-25 09:02:39.574 253542 DEBUG nova.network.neutron [req-91b54e20-efd4-4340-b1e5-7e37352c3488 req-8a513184-1086-46a0-b7ba-3fce159f1042 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Refreshing network info cache for port 9918858c-8b7c-4d3f-aada-d04fcb6eab03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:02:39 compute-0 nova_compute[253538]: 2025-11-25 09:02:39.811 253542 DEBUG nova.network.neutron [req-91b54e20-efd4-4340-b1e5-7e37352c3488 req-8a513184-1086-46a0-b7ba-3fce159f1042 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:02:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2456: 321 pgs: 321 active+clean; 119 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Nov 25 09:02:40 compute-0 nova_compute[253538]: 2025-11-25 09:02:40.584 253542 DEBUG nova.network.neutron [req-91b54e20-efd4-4340-b1e5-7e37352c3488 req-8a513184-1086-46a0-b7ba-3fce159f1042 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:02:40 compute-0 nova_compute[253538]: 2025-11-25 09:02:40.595 253542 DEBUG oslo_concurrency.lockutils [req-91b54e20-efd4-4340-b1e5-7e37352c3488 req-8a513184-1086-46a0-b7ba-3fce159f1042 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:02:40 compute-0 nova_compute[253538]: 2025-11-25 09:02:40.720 253542 DEBUG nova.network.neutron [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Successfully updated port: fd44d480-0242-4c7a-b02e-f58852c99ca0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:02:40 compute-0 nova_compute[253538]: 2025-11-25 09:02:40.901 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:02:40 compute-0 nova_compute[253538]: 2025-11-25 09:02:40.902 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:02:40 compute-0 nova_compute[253538]: 2025-11-25 09:02:40.902 253542 DEBUG nova.network.neutron [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:02:41 compute-0 nova_compute[253538]: 2025-11-25 09:02:41.070 253542 DEBUG nova.network.neutron [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:02:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:41.085 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:41.086 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:41.086 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:41 compute-0 ceph-mon[75015]: pgmap v2456: 321 pgs: 321 active+clean; 119 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Nov 25 09:02:41 compute-0 nova_compute[253538]: 2025-11-25 09:02:41.677 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:02:41 compute-0 nova_compute[253538]: 2025-11-25 09:02:41.678 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:41 compute-0 nova_compute[253538]: 2025-11-25 09:02:41.687 253542 DEBUG nova.compute.manager [req-b9c3c7b3-5830-4116-91aa-f04dcaa0566b req-d10d06cf-e42a-4ba1-9c9d-429c51c8c04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-changed-fd44d480-0242-4c7a-b02e-f58852c99ca0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:41 compute-0 nova_compute[253538]: 2025-11-25 09:02:41.688 253542 DEBUG nova.compute.manager [req-b9c3c7b3-5830-4116-91aa-f04dcaa0566b req-d10d06cf-e42a-4ba1-9c9d-429c51c8c04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Refreshing instance network info cache due to event network-changed-fd44d480-0242-4c7a-b02e-f58852c99ca0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:02:41 compute-0 nova_compute[253538]: 2025-11-25 09:02:41.688 253542 DEBUG oslo_concurrency.lockutils [req-b9c3c7b3-5830-4116-91aa-f04dcaa0566b req-d10d06cf-e42a-4ba1-9c9d-429c51c8c04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:02:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2457: 321 pgs: 321 active+clean; 134 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:02:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:02:43 compute-0 ceph-mon[75015]: pgmap v2457: 321 pgs: 321 active+clean; 134 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:02:43 compute-0 nova_compute[253538]: 2025-11-25 09:02:43.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:02:43 compute-0 nova_compute[253538]: 2025-11-25 09:02:43.720 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2458: 321 pgs: 321 active+clean; 134 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:02:45 compute-0 ceph-mon[75015]: pgmap v2458: 321 pgs: 321 active+clean; 134 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.635 253542 DEBUG nova.network.neutron [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updating instance_info_cache with network_info: [{"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.663 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.663 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Instance network_info: |[{"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.663 253542 DEBUG oslo_concurrency.lockutils [req-b9c3c7b3-5830-4116-91aa-f04dcaa0566b req-d10d06cf-e42a-4ba1-9c9d-429c51c8c04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.664 253542 DEBUG nova.network.neutron [req-b9c3c7b3-5830-4116-91aa-f04dcaa0566b req-d10d06cf-e42a-4ba1-9c9d-429c51c8c04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Refreshing network info cache for port fd44d480-0242-4c7a-b02e-f58852c99ca0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.666 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Start _get_guest_xml network_info=[{"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.671 253542 WARNING nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.678 253542 DEBUG nova.virt.libvirt.host [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.678 253542 DEBUG nova.virt.libvirt.host [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.681 253542 DEBUG nova.virt.libvirt.host [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.682 253542 DEBUG nova.virt.libvirt.host [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.682 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.682 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.683 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.683 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.683 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.683 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.684 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.684 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.684 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.684 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.685 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.685 253542 DEBUG nova.virt.hardware [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:02:45 compute-0 nova_compute[253538]: 2025-11-25 09:02:45.687 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2459: 321 pgs: 321 active+clean; 134 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:02:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:02:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2577756570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.126 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.146 253542 DEBUG nova.storage.rbd_utils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 28376454-90b2-431d-9052-48b369973c8e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.149 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:46 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2577756570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:02:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:02:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2224312869' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.593 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.594 253542 DEBUG nova.virt.libvirt.vif [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2133610236',display_name='tempest-TestGettingAddress-server-2133610236',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2133610236',id=133,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-lil1hhw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:02:35Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=28376454-90b2-431d-9052-48b369973c8e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.594 253542 DEBUG nova.network.os_vif_util [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.595 253542 DEBUG nova.network.os_vif_util [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e3:ea,bridge_name='br-int',has_traffic_filtering=True,id=9918858c-8b7c-4d3f-aada-d04fcb6eab03,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9918858c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.596 253542 DEBUG nova.virt.libvirt.vif [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2133610236',display_name='tempest-TestGettingAddress-server-2133610236',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2133610236',id=133,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-lil1hhw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:02:35Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=28376454-90b2-431d-9052-48b369973c8e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.596 253542 DEBUG nova.network.os_vif_util [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.597 253542 DEBUG nova.network.os_vif_util [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:5e:87,bridge_name='br-int',has_traffic_filtering=True,id=fd44d480-0242-4c7a-b02e-f58852c99ca0,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd44d480-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.598 253542 DEBUG nova.objects.instance [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 28376454-90b2-431d-9052-48b369973c8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.607 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.610 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:02:46 compute-0 nova_compute[253538]:   <uuid>28376454-90b2-431d-9052-48b369973c8e</uuid>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   <name>instance-00000085</name>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <nova:name>tempest-TestGettingAddress-server-2133610236</nova:name>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:02:45</nova:creationTime>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:02:46 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:02:46 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:02:46 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:02:46 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:02:46 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:02:46 compute-0 nova_compute[253538]:         <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 09:02:46 compute-0 nova_compute[253538]:         <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:02:46 compute-0 nova_compute[253538]:         <nova:port uuid="9918858c-8b7c-4d3f-aada-d04fcb6eab03">
Nov 25 09:02:46 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:02:46 compute-0 nova_compute[253538]:         <nova:port uuid="fd44d480-0242-4c7a-b02e-f58852c99ca0">
Nov 25 09:02:46 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feb8:5e87" ipVersion="6"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <system>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <entry name="serial">28376454-90b2-431d-9052-48b369973c8e</entry>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <entry name="uuid">28376454-90b2-431d-9052-48b369973c8e</entry>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     </system>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   <os>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   </os>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   <features>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   </features>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/28376454-90b2-431d-9052-48b369973c8e_disk">
Nov 25 09:02:46 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       </source>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:02:46 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/28376454-90b2-431d-9052-48b369973c8e_disk.config">
Nov 25 09:02:46 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       </source>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:02:46 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:f4:e3:ea"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <target dev="tap9918858c-8b"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:b8:5e:87"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <target dev="tapfd44d480-02"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e/console.log" append="off"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <video>
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     </video>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:02:46 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:02:46 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:02:46 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:02:46 compute-0 nova_compute[253538]: </domain>
Nov 25 09:02:46 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.612 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Preparing to wait for external event network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.612 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.612 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.613 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.613 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Preparing to wait for external event network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.613 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.613 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.613 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.614 253542 DEBUG nova.virt.libvirt.vif [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2133610236',display_name='tempest-TestGettingAddress-server-2133610236',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2133610236',id=133,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-lil1hhw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:02:35Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=28376454-90b2-431d-9052-48b369973c8e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.614 253542 DEBUG nova.network.os_vif_util [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.615 253542 DEBUG nova.network.os_vif_util [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e3:ea,bridge_name='br-int',has_traffic_filtering=True,id=9918858c-8b7c-4d3f-aada-d04fcb6eab03,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9918858c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.615 253542 DEBUG os_vif [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e3:ea,bridge_name='br-int',has_traffic_filtering=True,id=9918858c-8b7c-4d3f-aada-d04fcb6eab03,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9918858c-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.615 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.616 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.616 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.619 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.619 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9918858c-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.620 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9918858c-8b, col_values=(('external_ids', {'iface-id': '9918858c-8b7c-4d3f-aada-d04fcb6eab03', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:e3:ea', 'vm-uuid': '28376454-90b2-431d-9052-48b369973c8e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.621 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:46 compute-0 NetworkManager[48915]: <info>  [1764061366.6222] manager: (tap9918858c-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/558)
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.624 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.628 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.629 253542 INFO os_vif [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e3:ea,bridge_name='br-int',has_traffic_filtering=True,id=9918858c-8b7c-4d3f-aada-d04fcb6eab03,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9918858c-8b')
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.630 253542 DEBUG nova.virt.libvirt.vif [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2133610236',display_name='tempest-TestGettingAddress-server-2133610236',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2133610236',id=133,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-lil1hhw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:02:35Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=28376454-90b2-431d-9052-48b369973c8e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.630 253542 DEBUG nova.network.os_vif_util [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.630 253542 DEBUG nova.network.os_vif_util [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:5e:87,bridge_name='br-int',has_traffic_filtering=True,id=fd44d480-0242-4c7a-b02e-f58852c99ca0,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd44d480-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.631 253542 DEBUG os_vif [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:5e:87,bridge_name='br-int',has_traffic_filtering=True,id=fd44d480-0242-4c7a-b02e-f58852c99ca0,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd44d480-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.631 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.631 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.632 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.634 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.635 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd44d480-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.635 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd44d480-02, col_values=(('external_ids', {'iface-id': 'fd44d480-0242-4c7a-b02e-f58852c99ca0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:5e:87', 'vm-uuid': '28376454-90b2-431d-9052-48b369973c8e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.637 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:46 compute-0 NetworkManager[48915]: <info>  [1764061366.6379] manager: (tapfd44d480-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/559)
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.639 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.643 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.644 253542 INFO os_vif [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:5e:87,bridge_name='br-int',has_traffic_filtering=True,id=fd44d480-0242-4c7a-b02e-f58852c99ca0,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd44d480-02')
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.806 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.807 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.813 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.814 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.814 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:f4:e3:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.814 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:b8:5e:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.815 253542 INFO nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Using config drive
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.837 253542 DEBUG nova.storage.rbd_utils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 28376454-90b2-431d-9052-48b369973c8e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.842 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.925 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.926 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.932 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:02:46 compute-0 nova_compute[253538]: 2025-11-25 09:02:46.933 253542 INFO nova.compute.claims [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.026 253542 DEBUG nova.scheduler.client.report [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.058 253542 DEBUG nova.scheduler.client.report [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.058 253542 DEBUG nova.compute.provider_tree [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.074 253542 DEBUG nova.scheduler.client.report [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.100 253542 DEBUG nova.scheduler.client.report [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.149 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:47 compute-0 ceph-mon[75015]: pgmap v2459: 321 pgs: 321 active+clean; 134 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:02:47 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2224312869' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.385 253542 INFO nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Creating config drive at /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e/disk.config
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.390 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphqvzwcwn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.533 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphqvzwcwn" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:02:47 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3400637335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.567 253542 DEBUG nova.storage.rbd_utils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 28376454-90b2-431d-9052-48b369973c8e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.570 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e/disk.config 28376454-90b2-431d-9052-48b369973c8e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.597 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.603 253542 DEBUG nova.compute.provider_tree [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.615 253542 DEBUG nova.network.neutron [req-b9c3c7b3-5830-4116-91aa-f04dcaa0566b req-d10d06cf-e42a-4ba1-9c9d-429c51c8c04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updated VIF entry in instance network info cache for port fd44d480-0242-4c7a-b02e-f58852c99ca0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.615 253542 DEBUG nova.network.neutron [req-b9c3c7b3-5830-4116-91aa-f04dcaa0566b req-d10d06cf-e42a-4ba1-9c9d-429c51c8c04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updating instance_info_cache with network_info: [{"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.618 253542 DEBUG nova.scheduler.client.report [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.641 253542 DEBUG oslo_concurrency.lockutils [req-b9c3c7b3-5830-4116-91aa-f04dcaa0566b req-d10d06cf-e42a-4ba1-9c9d-429c51c8c04c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.646 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.647 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:02:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.695 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.696 253542 DEBUG nova.network.neutron [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.710 253542 INFO nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.725 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.812 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.813 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.814 253542 INFO nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Creating image(s)
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.840 253542 DEBUG nova.storage.rbd_utils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.867 253542 DEBUG nova.storage.rbd_utils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.895 253542 DEBUG nova.storage.rbd_utils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.900 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.935 253542 DEBUG nova.policy [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283b89dbe3284e8ea2019b797673108b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfffff2c57a442a59b202d368d49bf00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.970 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.971 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.972 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.972 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:47 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.996 253542 DEBUG nova.storage.rbd_utils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:02:48 compute-0 nova_compute[253538]: 2025-11-25 09:02:47.999 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2460: 321 pgs: 321 active+clean; 134 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:02:48 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3400637335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:02:48 compute-0 nova_compute[253538]: 2025-11-25 09:02:48.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:02:48 compute-0 nova_compute[253538]: 2025-11-25 09:02:48.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:02:48 compute-0 nova_compute[253538]: 2025-11-25 09:02:48.784 253542 DEBUG nova.network.neutron [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Successfully created port: a93aab06-4a98-453a-87c3-01b817ee7602 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:02:48 compute-0 nova_compute[253538]: 2025-11-25 09:02:48.791 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.234 253542 DEBUG oslo_concurrency.processutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e/disk.config 28376454-90b2-431d-9052-48b369973c8e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.235 253542 INFO nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Deleting local config drive /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e/disk.config because it was imported into RBD.
Nov 25 09:02:49 compute-0 NetworkManager[48915]: <info>  [1764061369.2797] manager: (tap9918858c-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/560)
Nov 25 09:02:49 compute-0 kernel: tap9918858c-8b: entered promiscuous mode
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.283 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:49 compute-0 ovn_controller[152859]: 2025-11-25T09:02:49Z|01362|binding|INFO|Claiming lport 9918858c-8b7c-4d3f-aada-d04fcb6eab03 for this chassis.
Nov 25 09:02:49 compute-0 ovn_controller[152859]: 2025-11-25T09:02:49Z|01363|binding|INFO|9918858c-8b7c-4d3f-aada-d04fcb6eab03: Claiming fa:16:3e:f4:e3:ea 10.100.0.12
Nov 25 09:02:49 compute-0 NetworkManager[48915]: <info>  [1764061369.2941] manager: (tapfd44d480-02): new Tun device (/org/freedesktop/NetworkManager/Devices/561)
Nov 25 09:02:49 compute-0 kernel: tapfd44d480-02: entered promiscuous mode
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.304 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:49 compute-0 ovn_controller[152859]: 2025-11-25T09:02:49Z|01364|if_status|INFO|Not updating pb chassis for fd44d480-0242-4c7a-b02e-f58852c99ca0 now as sb is readonly
Nov 25 09:02:49 compute-0 systemd-udevd[393784]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:02:49 compute-0 systemd-udevd[393785]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.312 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:e3:ea 10.100.0.12'], port_security=['fa:16:3e:f4:e3:ea 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '28376454-90b2-431d-9052-48b369973c8e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f691304c-d112-4c32-b3ac-0f33230178b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f5f96402-bff3-4d85-bba1-4edfd7ec059e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a4166e3-493a-4dd7-9e89-e40e3bf1bed7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9918858c-8b7c-4d3f-aada-d04fcb6eab03) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.313 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9918858c-8b7c-4d3f-aada-d04fcb6eab03 in datapath f691304c-d112-4c32-b3ac-0f33230178b0 bound to our chassis
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.314 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f691304c-d112-4c32-b3ac-0f33230178b0
Nov 25 09:02:49 compute-0 NetworkManager[48915]: <info>  [1764061369.3257] device (tapfd44d480-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:02:49 compute-0 NetworkManager[48915]: <info>  [1764061369.3267] device (tapfd44d480-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:02:49 compute-0 NetworkManager[48915]: <info>  [1764061369.3276] device (tap9918858c-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:02:49 compute-0 systemd-machined[215790]: New machine qemu-163-instance-00000085.
Nov 25 09:02:49 compute-0 NetworkManager[48915]: <info>  [1764061369.3284] device (tap9918858c-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.329 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[73fcf272-2fcf-477d-87b9-ce18493953a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.330 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf691304c-d1 in ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.332 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf691304c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.332 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[41fdbc07-acee-4d1e-9aa9-6ca6de3ece23]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.333 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[35dd6f73-5eff-47a0-8eb7-7dbb63d07f1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.345 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2b309b-5bc3-4c4d-98a6-da46821929c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:49 compute-0 systemd[1]: Started Virtual Machine qemu-163-instance-00000085.
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.371 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3a58c7-9307-4459-ba49-d4c72aae4010]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:49 compute-0 ovn_controller[152859]: 2025-11-25T09:02:49Z|01365|binding|INFO|Claiming lport fd44d480-0242-4c7a-b02e-f58852c99ca0 for this chassis.
Nov 25 09:02:49 compute-0 ovn_controller[152859]: 2025-11-25T09:02:49Z|01366|binding|INFO|fd44d480-0242-4c7a-b02e-f58852c99ca0: Claiming fa:16:3e:b8:5e:87 2001:db8::f816:3eff:feb8:5e87
Nov 25 09:02:49 compute-0 ovn_controller[152859]: 2025-11-25T09:02:49Z|01367|binding|INFO|Setting lport 9918858c-8b7c-4d3f-aada-d04fcb6eab03 ovn-installed in OVS
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.398 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.399 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[120f899f-ceb3-45bb-aa0b-47a711ba3dac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:49 compute-0 ovn_controller[152859]: 2025-11-25T09:02:49Z|01368|binding|INFO|Setting lport 9918858c-8b7c-4d3f-aada-d04fcb6eab03 up in Southbound
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.404 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:5e:87 2001:db8::f816:3eff:feb8:5e87'], port_security=['fa:16:3e:b8:5e:87 2001:db8::f816:3eff:feb8:5e87'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb8:5e87/64', 'neutron:device_id': '28376454-90b2-431d-9052-48b369973c8e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f5f96402-bff3-4d85-bba1-4edfd7ec059e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a218a470-f1de-46c9-a998-6139383f8f9c, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=fd44d480-0242-4c7a-b02e-f58852c99ca0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.408 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e476e3-092e-4024-8ebd-461812c63b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:49 compute-0 ovn_controller[152859]: 2025-11-25T09:02:49Z|01369|binding|INFO|Setting lport fd44d480-0242-4c7a-b02e-f58852c99ca0 ovn-installed in OVS
Nov 25 09:02:49 compute-0 ovn_controller[152859]: 2025-11-25T09:02:49Z|01370|binding|INFO|Setting lport fd44d480-0242-4c7a-b02e-f58852c99ca0 up in Southbound
Nov 25 09:02:49 compute-0 NetworkManager[48915]: <info>  [1764061369.4094] manager: (tapf691304c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/562)
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.412 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:49 compute-0 ceph-mon[75015]: pgmap v2460: 321 pgs: 321 active+clean; 134 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.448 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[00578ddf-12ad-46e6-a2cc-387290a50b6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.451 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1f612679-ff66-477a-ab7f-0b884c20b8fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:49 compute-0 NetworkManager[48915]: <info>  [1764061369.4730] device (tapf691304c-d0): carrier: link connected
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.479 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9d3b48d6-6df8-4307-83c1-f32d470d0e66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.497 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[51392a50-f54f-4695-afef-bb7dca2bbea4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf691304c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:d6:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672407, 'reachable_time': 19375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393821, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.513 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a1ae20-5ec9-470a-b495-4725bdc237f0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe62:d69f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672407, 'tstamp': 672407}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393822, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.527 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbc9028-b8fe-42a7-b4d7-fe566352a001]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf691304c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:d6:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672407, 'reachable_time': 19375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393823, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.555 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6f2aca-08c4-4c00-9776-773f30ce4501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.623 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9b2520-3e9c-4e01-919c-bb382a2697eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.625 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf691304c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.625 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.625 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf691304c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:49 compute-0 NetworkManager[48915]: <info>  [1764061369.6276] manager: (tapf691304c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:49 compute-0 kernel: tapf691304c-d0: entered promiscuous mode
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.629 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.629 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf691304c-d0, col_values=(('external_ids', {'iface-id': '5564ec46-e1ee-4a7e-990b-f716b4d2c9e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.630 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:49 compute-0 ovn_controller[152859]: 2025-11-25T09:02:49Z|01371|binding|INFO|Releasing lport 5564ec46-e1ee-4a7e-990b-f716b4d2c9e2 from this chassis (sb_readonly=0)
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.631 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.632 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f691304c-d112-4c32-b3ac-0f33230178b0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f691304c-d112-4c32-b3ac-0f33230178b0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.640 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b647b3cb-c58c-47cc-b3fe-8114cce5eb23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.640 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-f691304c-d112-4c32-b3ac-0f33230178b0
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/f691304c-d112-4c32-b3ac-0f33230178b0.pid.haproxy
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID f691304c-d112-4c32-b3ac-0f33230178b0
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:02:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:49.641 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'env', 'PROCESS_TAG=haproxy-f691304c-d112-4c32-b3ac-0f33230178b0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f691304c-d112-4c32-b3ac-0f33230178b0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.647 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.659 253542 DEBUG nova.network.neutron [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Successfully updated port: a93aab06-4a98-453a-87c3-01b817ee7602 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.675 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.676 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquired lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.676 253542 DEBUG nova.network.neutron [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.698 253542 DEBUG nova.compute.manager [req-0909415b-0e1f-41b3-96f6-d2e0c160431d req-02037659-52c7-473a-8c48-c3f86f34b2a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.699 253542 DEBUG oslo_concurrency.lockutils [req-0909415b-0e1f-41b3-96f6-d2e0c160431d req-02037659-52c7-473a-8c48-c3f86f34b2a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.699 253542 DEBUG oslo_concurrency.lockutils [req-0909415b-0e1f-41b3-96f6-d2e0c160431d req-02037659-52c7-473a-8c48-c3f86f34b2a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.699 253542 DEBUG oslo_concurrency.lockutils [req-0909415b-0e1f-41b3-96f6-d2e0c160431d req-02037659-52c7-473a-8c48-c3f86f34b2a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.699 253542 DEBUG nova.compute.manager [req-0909415b-0e1f-41b3-96f6-d2e0c160431d req-02037659-52c7-473a-8c48-c3f86f34b2a2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Processing event network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:02:49 compute-0 nova_compute[253538]: 2025-11-25 09:02:49.834 253542 DEBUG nova.network.neutron [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:02:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2461: 321 pgs: 321 active+clean; 153 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.5 MiB/s wr, 46 op/s
Nov 25 09:02:50 compute-0 podman[393874]: 2025-11-25 09:02:50.007194987 +0000 UTC m=+0.018753370 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:02:50 compute-0 nova_compute[253538]: 2025-11-25 09:02:50.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:02:50 compute-0 nova_compute[253538]: 2025-11-25 09:02:50.559 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061370.558355, 28376454-90b2-431d-9052-48b369973c8e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:02:50 compute-0 nova_compute[253538]: 2025-11-25 09:02:50.559 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] VM Started (Lifecycle Event)
Nov 25 09:02:50 compute-0 nova_compute[253538]: 2025-11-25 09:02:50.580 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:02:50 compute-0 nova_compute[253538]: 2025-11-25 09:02:50.584 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061370.5593734, 28376454-90b2-431d-9052-48b369973c8e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:02:50 compute-0 nova_compute[253538]: 2025-11-25 09:02:50.584 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] VM Paused (Lifecycle Event)
Nov 25 09:02:50 compute-0 nova_compute[253538]: 2025-11-25 09:02:50.603 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:02:50 compute-0 nova_compute[253538]: 2025-11-25 09:02:50.606 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:02:50 compute-0 nova_compute[253538]: 2025-11-25 09:02:50.621 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:02:50 compute-0 podman[393874]: 2025-11-25 09:02:50.632481645 +0000 UTC m=+0.644040008 container create adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 09:02:50 compute-0 ceph-mon[75015]: pgmap v2461: 321 pgs: 321 active+clean; 153 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.5 MiB/s wr, 46 op/s
Nov 25 09:02:50 compute-0 systemd[1]: Started libpod-conmon-adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0.scope.
Nov 25 09:02:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:02:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80d86f6fd563b663b27ffb3825320ba53e316c324703edef04af20e73a076c2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:02:50 compute-0 podman[393874]: 2025-11-25 09:02:50.760920776 +0000 UTC m=+0.772479219 container init adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:02:50 compute-0 nova_compute[253538]: 2025-11-25 09:02:50.760 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.760s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:50 compute-0 podman[393874]: 2025-11-25 09:02:50.768592725 +0000 UTC m=+0.780151128 container start adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 09:02:50 compute-0 neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0[393915]: [NOTICE]   (393919) : New worker (393939) forked
Nov 25 09:02:50 compute-0 neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0[393915]: [NOTICE]   (393919) : Loading success.
Nov 25 09:02:50 compute-0 nova_compute[253538]: 2025-11-25 09:02:50.822 253542 DEBUG nova.storage.rbd_utils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] resizing rbd image 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:02:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:50.969 162739 INFO neutron.agent.ovn.metadata.agent [-] Port fd44d480-0242-4c7a-b02e-f58852c99ca0 in datapath 269f4fa4-a7fb-4f9a-b49d-3b1968826304 unbound from our chassis
Nov 25 09:02:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:50.971 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 269f4fa4-a7fb-4f9a-b49d-3b1968826304
Nov 25 09:02:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:50.981 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[40cf23c1-0d04-440a-b81d-4181a55fd53a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:50.982 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap269f4fa4-a1 in ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:02:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:50.984 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap269f4fa4-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:02:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:50.984 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[21e6612f-9201-4fc6-ac52-8e1f9dd5f463]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:50.985 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c7c37e-6511-47c8-88d7-1812d7874c78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:50.995 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4be70ba4-f060-4661-87fd-24b5726a8340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.007 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee2c815-c629-40bf-90fe-3c4e2835c008]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.037 253542 DEBUG nova.network.neutron [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Updating instance_info_cache with network_info: [{"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.037 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6da867ae-2fe3-4fb8-b8ba-0ebb0bbd22c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.046 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[662d9366-b4f3-4537-8fc6-9c37d11a1fd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:51 compute-0 NetworkManager[48915]: <info>  [1764061371.0480] manager: (tap269f4fa4-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/564)
Nov 25 09:02:51 compute-0 systemd-udevd[393815]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.055 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Releasing lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.056 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Instance network_info: |[{"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.082 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[faa619a6-516f-4acd-931d-be1b951edb76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.085 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6d11f85e-c8a5-4ba4-8012-a0daac377869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:51 compute-0 NetworkManager[48915]: <info>  [1764061371.1066] device (tap269f4fa4-a0): carrier: link connected
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.110 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[fd683c3f-a4d1-4f2d-bd97-12e49fccf080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.126 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[632aba05-2867-4c17-9559-2a0fe7aab024]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap269f4fa4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:a8:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 398], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672570, 'reachable_time': 38765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393994, 'error': None, 'target': 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.142 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[36a86f6d-2a6c-494f-9e5e-2e32c0c5c096]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:a884'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672570, 'tstamp': 672570}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393995, 'error': None, 'target': 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.157 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[22f21bf4-c3b1-4bf7-a0af-c512c2dc4e40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap269f4fa4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:a8:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 398], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672570, 'reachable_time': 38765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393996, 'error': None, 'target': 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.183 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dc00dac0-1519-479a-a87b-10fefe0efc5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.209 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bb845718-784c-4ffc-bda0-772bfcd7952a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.210 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269f4fa4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.211 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.211 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap269f4fa4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:51 compute-0 NetworkManager[48915]: <info>  [1764061371.2134] manager: (tap269f4fa4-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/565)
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.213 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:51 compute-0 kernel: tap269f4fa4-a0: entered promiscuous mode
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.217 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap269f4fa4-a0, col_values=(('external_ids', {'iface-id': 'ab77c41f-12b1-44c7-af48-058abf7be28c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:51 compute-0 ovn_controller[152859]: 2025-11-25T09:02:51Z|01372|binding|INFO|Releasing lport ab77c41f-12b1-44c7-af48-058abf7be28c from this chassis (sb_readonly=0)
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.221 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/269f4fa4-a7fb-4f9a-b49d-3b1968826304.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/269f4fa4-a7fb-4f9a-b49d-3b1968826304.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.221 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cfcb4158-e623-459f-a35f-ffe14d765450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.222 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-269f4fa4-a7fb-4f9a-b49d-3b1968826304
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/269f4fa4-a7fb-4f9a-b49d-3b1968826304.pid.haproxy
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 269f4fa4-a7fb-4f9a-b49d-3b1968826304
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:02:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:51.223 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'env', 'PROCESS_TAG=haproxy-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/269f4fa4-a7fb-4f9a-b49d-3b1968826304.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.233 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.638 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:51 compute-0 podman[394027]: 2025-11-25 09:02:51.596346167 +0000 UTC m=+0.023270364 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.827 253542 DEBUG nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-changed-a93aab06-4a98-453a-87c3-01b817ee7602 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.827 253542 DEBUG nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Refreshing instance network info cache due to event network-changed-a93aab06-4a98-453a-87c3-01b817ee7602. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.828 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.828 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.829 253542 DEBUG nova.network.neutron [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Refreshing network info cache for port a93aab06-4a98-453a-87c3-01b817ee7602 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.875 253542 DEBUG nova.objects.instance [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c4a1d63-7674-4276-8da9-b9d4f4fea307 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:02:51 compute-0 podman[394040]: 2025-11-25 09:02:51.881722064 +0000 UTC m=+0.126051608 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd)
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.890 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.891 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Ensure instance console log exists: /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.891 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.891 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.892 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.893 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Start _get_guest_xml network_info=[{"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.898 253542 WARNING nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.904 253542 DEBUG nova.virt.libvirt.host [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.904 253542 DEBUG nova.virt.libvirt.host [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.911 253542 DEBUG nova.virt.libvirt.host [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.911 253542 DEBUG nova.virt.libvirt.host [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.912 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.912 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.912 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.913 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.913 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.913 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.913 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.913 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.913 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.914 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.914 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.914 253542 DEBUG nova.virt.hardware [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:02:51 compute-0 nova_compute[253538]: 2025-11-25 09:02:51.916 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:51 compute-0 podman[394027]: 2025-11-25 09:02:51.970058686 +0000 UTC m=+0.396982863 container create 205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:02:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2462: 321 pgs: 321 active+clean; 165 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 21 op/s
Nov 25 09:02:52 compute-0 systemd[1]: Started libpod-conmon-205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5.scope.
Nov 25 09:02:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:02:52 compute-0 podman[394078]: 2025-11-25 09:02:52.152921206 +0000 UTC m=+0.243775078 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:02:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a675655322cb97a61297c5daeb9688e6dd7732cb7cb875eab3105e1fff65389/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:02:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:02:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2943920349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:02:52 compute-0 podman[394027]: 2025-11-25 09:02:52.372882126 +0000 UTC m=+0.799806293 container init 205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 09:02:52 compute-0 podman[394027]: 2025-11-25 09:02:52.378564191 +0000 UTC m=+0.805488358 container start 205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.380 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:52 compute-0 neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304[394115]: [NOTICE]   (394125) : New worker (394142) forked
Nov 25 09:02:52 compute-0 neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304[394115]: [NOTICE]   (394125) : Loading success.
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.405 253542 DEBUG nova.storage.rbd_utils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.411 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:02:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:02:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1831216243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.838 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.840 253542 DEBUG nova.virt.libvirt.vif [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:02:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-582189118',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-582189118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=134,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmWC6+Lhjt7ZggX7R271Z8ydSfaz55B57w+DkFd+9P5ZZi/RtND2Lrlt8eHEO6BYJ7zwCthN/Bq/sxErPbCK7jo7dXwLPbl9Tdlo4a8btinzQw58JjdPmMZRYnO+DrRAQ==',key_name='tempest-TestSecurityGroupsBasicOps-1979245065',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-jwi4cpze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:02:47Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=2c4a1d63-7674-4276-8da9-b9d4f4fea307,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.840 253542 DEBUG nova.network.os_vif_util [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.841 253542 DEBUG nova.network.os_vif_util [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=a93aab06-4a98-453a-87c3-01b817ee7602,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93aab06-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.842 253542 DEBUG nova.objects.instance [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c4a1d63-7674-4276-8da9-b9d4f4fea307 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.855 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:02:52 compute-0 nova_compute[253538]:   <uuid>2c4a1d63-7674-4276-8da9-b9d4f4fea307</uuid>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   <name>instance-00000086</name>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-582189118</nova:name>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:02:51</nova:creationTime>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:02:52 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:02:52 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:02:52 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:02:52 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:02:52 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:02:52 compute-0 nova_compute[253538]:         <nova:user uuid="283b89dbe3284e8ea2019b797673108b">tempest-TestSecurityGroupsBasicOps-1495447964-project-member</nova:user>
Nov 25 09:02:52 compute-0 nova_compute[253538]:         <nova:project uuid="cfffff2c57a442a59b202d368d49bf00">tempest-TestSecurityGroupsBasicOps-1495447964</nova:project>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:02:52 compute-0 nova_compute[253538]:         <nova:port uuid="a93aab06-4a98-453a-87c3-01b817ee7602">
Nov 25 09:02:52 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <system>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <entry name="serial">2c4a1d63-7674-4276-8da9-b9d4f4fea307</entry>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <entry name="uuid">2c4a1d63-7674-4276-8da9-b9d4f4fea307</entry>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     </system>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   <os>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   </os>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   <features>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   </features>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk">
Nov 25 09:02:52 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       </source>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:02:52 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk.config">
Nov 25 09:02:52 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       </source>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:02:52 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:2f:fb:42"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <target dev="tapa93aab06-4a"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307/console.log" append="off"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <video>
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     </video>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:02:52 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:02:52 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:02:52 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:02:52 compute-0 nova_compute[253538]: </domain>
Nov 25 09:02:52 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.856 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Preparing to wait for external event network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.857 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.857 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.858 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.859 253542 DEBUG nova.virt.libvirt.vif [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:02:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-582189118',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-582189118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=134,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmWC6+Lhjt7ZggX7R271Z8ydSfaz55B57w+DkFd+9P5ZZi/RtND2Lrlt8eHEO6BYJ7zwCthN/Bq/sxErPbCK7jo7dXwLPbl9Tdlo4a8btinzQw58JjdPmMZRYnO+DrRAQ==',key_name='tempest-TestSecurityGroupsBasicOps-1979245065',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-jwi4cpze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:02:47Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=2c4a1d63-7674-4276-8da9-b9d4f4fea307,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.859 253542 DEBUG nova.network.os_vif_util [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.860 253542 DEBUG nova.network.os_vif_util [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=a93aab06-4a98-453a-87c3-01b817ee7602,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93aab06-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.860 253542 DEBUG os_vif [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=a93aab06-4a98-453a-87c3-01b817ee7602,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93aab06-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.862 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.862 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.865 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.866 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa93aab06-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.866 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa93aab06-4a, col_values=(('external_ids', {'iface-id': 'a93aab06-4a98-453a-87c3-01b817ee7602', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2f:fb:42', 'vm-uuid': '2c4a1d63-7674-4276-8da9-b9d4f4fea307'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:52 compute-0 NetworkManager[48915]: <info>  [1764061372.8689] manager: (tapa93aab06-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/566)
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.870 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.877 253542 INFO os_vif [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=a93aab06-4a98-453a-87c3-01b817ee7602,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93aab06-4a')
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.957 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.958 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.958 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No VIF found with MAC fa:16:3e:2f:fb:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:02:52 compute-0 nova_compute[253538]: 2025-11-25 09:02:52.959 253542 INFO nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Using config drive
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.016 253542 DEBUG nova.storage.rbd_utils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.262 253542 DEBUG nova.network.neutron [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Updated VIF entry in instance network info cache for port a93aab06-4a98-453a-87c3-01b817ee7602. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.263 253542 DEBUG nova.network.neutron [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Updating instance_info_cache with network_info: [{"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.283 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.284 253542 DEBUG nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.284 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.284 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.284 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.285 253542 DEBUG nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] No event matching network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 in dict_keys([('network-vif-plugged', 'fd44d480-0242-4c7a-b02e-f58852c99ca0')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.285 253542 WARNING nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received unexpected event network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 for instance with vm_state building and task_state spawning.
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.285 253542 DEBUG nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.285 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.285 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.286 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.286 253542 DEBUG nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Processing event network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.286 253542 DEBUG nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.286 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.286 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.287 253542 DEBUG oslo_concurrency.lockutils [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.287 253542 DEBUG nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] No waiting events found dispatching network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.287 253542 WARNING nova.compute.manager [req-35d81841-171d-4e0e-94a9-03e31a2142ac req-919f96fd-609a-46f8-8d58-a3c289844759 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received unexpected event network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 for instance with vm_state building and task_state spawning.
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.287 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.308 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061373.3077528, 28376454-90b2-431d-9052-48b369973c8e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.309 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] VM Resumed (Lifecycle Event)
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.312 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.316 253542 INFO nova.virt.libvirt.driver [-] [instance: 28376454-90b2-431d-9052-48b369973c8e] Instance spawned successfully.
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.316 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.330 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.339 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.342 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.343 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.343 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.344 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.344 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.345 253542 DEBUG nova.virt.libvirt.driver [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:02:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:02:53
Nov 25 09:02:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:02:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:02:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', '.mgr', 'backups', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'images', 'vms', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 25 09:02:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.371 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.387 253542 INFO nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Creating config drive at /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307/disk.config
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.392 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfgg4p4mt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.440 253542 INFO nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Took 17.57 seconds to spawn the instance on the hypervisor.
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.441 253542 DEBUG nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:02:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:02:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:02:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:02:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:02:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:02:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.511 253542 INFO nova.compute.manager [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Took 18.47 seconds to build instance.
Nov 25 09:02:53 compute-0 ceph-mon[75015]: pgmap v2462: 321 pgs: 321 active+clean; 165 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 21 op/s
Nov 25 09:02:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2943920349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:02:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1831216243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.527 253542 DEBUG oslo_concurrency.lockutils [None req-f71c3cb0-8ec9-4bda-b83e-e143f3daa1c9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.550 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfgg4p4mt" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.651 253542 DEBUG nova.storage.rbd_utils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.657 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307/disk.config 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.699 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:02:53 compute-0 nova_compute[253538]: 2025-11-25 09:02:53.786 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:02:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:02:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:02:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:02:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:02:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:02:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:02:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:02:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:02:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:02:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2463: 321 pgs: 321 active+clean; 176 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.7 MiB/s wr, 24 op/s
Nov 25 09:02:54 compute-0 nova_compute[253538]: 2025-11-25 09:02:54.237 253542 DEBUG oslo_concurrency.processutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307/disk.config 2c4a1d63-7674-4276-8da9-b9d4f4fea307_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:54 compute-0 nova_compute[253538]: 2025-11-25 09:02:54.237 253542 INFO nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Deleting local config drive /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307/disk.config because it was imported into RBD.
Nov 25 09:02:54 compute-0 NetworkManager[48915]: <info>  [1764061374.2805] manager: (tapa93aab06-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/567)
Nov 25 09:02:54 compute-0 kernel: tapa93aab06-4a: entered promiscuous mode
Nov 25 09:02:54 compute-0 ovn_controller[152859]: 2025-11-25T09:02:54Z|01373|binding|INFO|Claiming lport a93aab06-4a98-453a-87c3-01b817ee7602 for this chassis.
Nov 25 09:02:54 compute-0 ovn_controller[152859]: 2025-11-25T09:02:54Z|01374|binding|INFO|a93aab06-4a98-453a-87c3-01b817ee7602: Claiming fa:16:3e:2f:fb:42 10.100.0.13
Nov 25 09:02:54 compute-0 nova_compute[253538]: 2025-11-25 09:02:54.322 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.331 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:fb:42 10.100.0.13'], port_security=['fa:16:3e:2f:fb:42 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2c4a1d63-7674-4276-8da9-b9d4f4fea307', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72472fc5-3661-404c-a0d2-df155795bd2b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '046f46ed-7d5f-45ad-8313-fa0fe77b097a 0f20aab2-1f55-4a0f-8bdf-77bad4fbb70d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=48d726f4-a876-48b7-812b-492b6f2eebf1, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a93aab06-4a98-453a-87c3-01b817ee7602) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.333 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a93aab06-4a98-453a-87c3-01b817ee7602 in datapath 72472fc5-3661-404c-a0d2-df155795bd2b bound to our chassis
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.334 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72472fc5-3661-404c-a0d2-df155795bd2b
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.344 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b57be80f-bf72-48d4-893a-6c795c372a7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.345 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap72472fc5-31 in ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:02:54 compute-0 systemd-udevd[394251]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.347 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap72472fc5-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.347 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f52b3fc6-2551-4783-a6fd-03b6fed3f615]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.348 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3a2533-9c03-41a3-aed3-e6c04ba16da9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:54 compute-0 systemd-machined[215790]: New machine qemu-164-instance-00000086.
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.358 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[9f895fde-e745-46fa-ab3d-daffca5df5b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:54 compute-0 NetworkManager[48915]: <info>  [1764061374.3659] device (tapa93aab06-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:02:54 compute-0 NetworkManager[48915]: <info>  [1764061374.3669] device (tapa93aab06-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:02:54 compute-0 systemd[1]: Started Virtual Machine qemu-164-instance-00000086.
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.384 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0e7aacec-fe85-4830-a423-057ead4db048]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:54 compute-0 nova_compute[253538]: 2025-11-25 09:02:54.388 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:54 compute-0 ovn_controller[152859]: 2025-11-25T09:02:54Z|01375|binding|INFO|Setting lport a93aab06-4a98-453a-87c3-01b817ee7602 ovn-installed in OVS
Nov 25 09:02:54 compute-0 ovn_controller[152859]: 2025-11-25T09:02:54Z|01376|binding|INFO|Setting lport a93aab06-4a98-453a-87c3-01b817ee7602 up in Southbound
Nov 25 09:02:54 compute-0 nova_compute[253538]: 2025-11-25 09:02:54.391 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.416 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[962f5d31-9227-4971-ac28-3e5fc00a27f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:54 compute-0 NetworkManager[48915]: <info>  [1764061374.4240] manager: (tap72472fc5-30): new Veth device (/org/freedesktop/NetworkManager/Devices/568)
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.423 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb76805-fd3f-479d-bf2c-aad48faf6975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:54 compute-0 systemd-udevd[394255]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.458 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2b66ed0b-57d2-4f49-881d-f046ca3fa405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.461 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5eebc0-0dcb-4ff2-9702-b20083d6f446]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:54 compute-0 NetworkManager[48915]: <info>  [1764061374.4858] device (tap72472fc5-30): carrier: link connected
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.490 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0d345b6a-9b91-4e05-8591-1ed6cd32aef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.514 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bbac8c80-db24-46d8-a92c-809f181f1caf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72472fc5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:5c:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672908, 'reachable_time': 15576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394285, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:54 compute-0 ceph-mon[75015]: pgmap v2463: 321 pgs: 321 active+clean; 176 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.7 MiB/s wr, 24 op/s
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.530 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00fa75d9-0139-44af-9cb9-2eb3c6a7384e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:5c07'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672908, 'tstamp': 672908}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394286, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.545 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a924a64a-7b47-4d0b-98b3-98706b8e5790]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72472fc5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:5c:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672908, 'reachable_time': 15576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394287, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.571 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[efa207cf-956b-4fdf-883b-dc7eab73040e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.629 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1c41d503-abba-4c45-a715-2d7b6daa7e93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.630 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72472fc5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.631 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.631 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72472fc5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:54 compute-0 nova_compute[253538]: 2025-11-25 09:02:54.632 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:54 compute-0 NetworkManager[48915]: <info>  [1764061374.6337] manager: (tap72472fc5-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/569)
Nov 25 09:02:54 compute-0 kernel: tap72472fc5-30: entered promiscuous mode
Nov 25 09:02:54 compute-0 nova_compute[253538]: 2025-11-25 09:02:54.636 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.637 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72472fc5-30, col_values=(('external_ids', {'iface-id': '7518767c-6a1a-4489-968c-840b865348d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:02:54 compute-0 ovn_controller[152859]: 2025-11-25T09:02:54Z|01377|binding|INFO|Releasing lport 7518767c-6a1a-4489-968c-840b865348d3 from this chassis (sb_readonly=0)
Nov 25 09:02:54 compute-0 nova_compute[253538]: 2025-11-25 09:02:54.652 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.653 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/72472fc5-3661-404c-a0d2-df155795bd2b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/72472fc5-3661-404c-a0d2-df155795bd2b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.654 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5f71ff37-188a-4fa7-a21c-d2228e0eb38f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.655 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-72472fc5-3661-404c-a0d2-df155795bd2b
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/72472fc5-3661-404c-a0d2-df155795bd2b.pid.haproxy
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 72472fc5-3661-404c-a0d2-df155795bd2b
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:02:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:02:54.655 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'env', 'PROCESS_TAG=haproxy-72472fc5-3661-404c-a0d2-df155795bd2b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/72472fc5-3661-404c-a0d2-df155795bd2b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:02:54 compute-0 nova_compute[253538]: 2025-11-25 09:02:54.746 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061374.7456698, 2c4a1d63-7674-4276-8da9-b9d4f4fea307 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:02:54 compute-0 nova_compute[253538]: 2025-11-25 09:02:54.746 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] VM Started (Lifecycle Event)
Nov 25 09:02:54 compute-0 nova_compute[253538]: 2025-11-25 09:02:54.775 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:02:54 compute-0 nova_compute[253538]: 2025-11-25 09:02:54.779 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061374.7485554, 2c4a1d63-7674-4276-8da9-b9d4f4fea307 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:02:54 compute-0 nova_compute[253538]: 2025-11-25 09:02:54.780 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] VM Paused (Lifecycle Event)
Nov 25 09:02:54 compute-0 nova_compute[253538]: 2025-11-25 09:02:54.805 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:02:54 compute-0 nova_compute[253538]: 2025-11-25 09:02:54.807 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:02:54 compute-0 nova_compute[253538]: 2025-11-25 09:02:54.835 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:02:55 compute-0 podman[394361]: 2025-11-25 09:02:55.02901297 +0000 UTC m=+0.037870011 container create 4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 09:02:55 compute-0 systemd[1]: Started libpod-conmon-4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2.scope.
Nov 25 09:02:55 compute-0 podman[394361]: 2025-11-25 09:02:55.009273384 +0000 UTC m=+0.018130445 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:02:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:02:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c472ba9f61967f5c2c1bbfa218a3c4c1a73bb2a29d7b40245ca25e38f08a9e8a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:02:55 compute-0 podman[394361]: 2025-11-25 09:02:55.126728907 +0000 UTC m=+0.135585968 container init 4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:02:55 compute-0 podman[394361]: 2025-11-25 09:02:55.132335869 +0000 UTC m=+0.141192910 container start 4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:02:55 compute-0 neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b[394377]: [NOTICE]   (394381) : New worker (394383) forked
Nov 25 09:02:55 compute-0 neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b[394377]: [NOTICE]   (394381) : Loading success.
Nov 25 09:02:55 compute-0 nova_compute[253538]: 2025-11-25 09:02:55.550 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:02:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2464: 321 pgs: 321 active+clean; 180 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Nov 25 09:02:56 compute-0 nova_compute[253538]: 2025-11-25 09:02:56.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:02:56 compute-0 nova_compute[253538]: 2025-11-25 09:02:56.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:56 compute-0 nova_compute[253538]: 2025-11-25 09:02:56.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:56 compute-0 nova_compute[253538]: 2025-11-25 09:02:56.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:56 compute-0 nova_compute[253538]: 2025-11-25 09:02:56.583 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:02:56 compute-0 nova_compute[253538]: 2025-11-25 09:02:56.583 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:02:57 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/265326903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.042 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.115 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.115 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.119 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.119 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:02:57 compute-0 ceph-mon[75015]: pgmap v2464: 321 pgs: 321 active+clean; 180 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Nov 25 09:02:57 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/265326903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.278 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.279 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3506MB free_disk=59.946624755859375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.279 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.279 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.289 253542 DEBUG nova.compute.manager [req-da911e8e-80dd-4e5e-a1da-abf73979b24b req-04b40c58-dc86-4344-9d58-14dd0c9d2e5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.289 253542 DEBUG oslo_concurrency.lockutils [req-da911e8e-80dd-4e5e-a1da-abf73979b24b req-04b40c58-dc86-4344-9d58-14dd0c9d2e5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.289 253542 DEBUG oslo_concurrency.lockutils [req-da911e8e-80dd-4e5e-a1da-abf73979b24b req-04b40c58-dc86-4344-9d58-14dd0c9d2e5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.289 253542 DEBUG oslo_concurrency.lockutils [req-da911e8e-80dd-4e5e-a1da-abf73979b24b req-04b40c58-dc86-4344-9d58-14dd0c9d2e5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.290 253542 DEBUG nova.compute.manager [req-da911e8e-80dd-4e5e-a1da-abf73979b24b req-04b40c58-dc86-4344-9d58-14dd0c9d2e5f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Processing event network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.290 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.298 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061377.2986612, 2c4a1d63-7674-4276-8da9-b9d4f4fea307 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.299 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] VM Resumed (Lifecycle Event)
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.301 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.305 253542 INFO nova.virt.libvirt.driver [-] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Instance spawned successfully.
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.305 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.335 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.338 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.346 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.347 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.347 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.347 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.348 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.348 253542 DEBUG nova.virt.libvirt.driver [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.370 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.397 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 28376454-90b2-431d-9052-48b369973c8e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.398 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 2c4a1d63-7674-4276-8da9-b9d4f4fea307 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.398 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.398 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.407 253542 INFO nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Took 9.59 seconds to spawn the instance on the hypervisor.
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.407 253542 DEBUG nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.465 253542 INFO nova.compute.manager [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Took 10.57 seconds to build instance.
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.470 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.511 253542 DEBUG oslo_concurrency.lockutils [None req-e81b8dd4-6010-4d30-aa5a-a47b67de9ac9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:02:57 compute-0 nova_compute[253538]: 2025-11-25 09:02:57.870 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:02:57 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2706206898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:02:58 compute-0 nova_compute[253538]: 2025-11-25 09:02:58.012 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:02:58 compute-0 nova_compute[253538]: 2025-11-25 09:02:58.018 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:02:58 compute-0 nova_compute[253538]: 2025-11-25 09:02:58.044 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:02:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2465: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Nov 25 09:02:58 compute-0 nova_compute[253538]: 2025-11-25 09:02:58.070 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:02:58 compute-0 nova_compute[253538]: 2025-11-25 09:02:58.071 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:58 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2706206898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:02:58 compute-0 nova_compute[253538]: 2025-11-25 09:02:58.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:59 compute-0 ceph-mon[75015]: pgmap v2465: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Nov 25 09:02:59 compute-0 nova_compute[253538]: 2025-11-25 09:02:59.453 253542 DEBUG nova.compute.manager [req-fef3d36a-671c-441f-abe8-122dfe9171fb req-3b116c1d-8c71-4358-aef3-4e37f1798dee b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:02:59 compute-0 nova_compute[253538]: 2025-11-25 09:02:59.454 253542 DEBUG oslo_concurrency.lockutils [req-fef3d36a-671c-441f-abe8-122dfe9171fb req-3b116c1d-8c71-4358-aef3-4e37f1798dee b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:02:59 compute-0 nova_compute[253538]: 2025-11-25 09:02:59.455 253542 DEBUG oslo_concurrency.lockutils [req-fef3d36a-671c-441f-abe8-122dfe9171fb req-3b116c1d-8c71-4358-aef3-4e37f1798dee b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:02:59 compute-0 nova_compute[253538]: 2025-11-25 09:02:59.455 253542 DEBUG oslo_concurrency.lockutils [req-fef3d36a-671c-441f-abe8-122dfe9171fb req-3b116c1d-8c71-4358-aef3-4e37f1798dee b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:02:59 compute-0 nova_compute[253538]: 2025-11-25 09:02:59.455 253542 DEBUG nova.compute.manager [req-fef3d36a-671c-441f-abe8-122dfe9171fb req-3b116c1d-8c71-4358-aef3-4e37f1798dee b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] No waiting events found dispatching network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:02:59 compute-0 nova_compute[253538]: 2025-11-25 09:02:59.456 253542 WARNING nova.compute.manager [req-fef3d36a-671c-441f-abe8-122dfe9171fb req-3b116c1d-8c71-4358-aef3-4e37f1798dee b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received unexpected event network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 for instance with vm_state active and task_state None.
Nov 25 09:02:59 compute-0 podman[394437]: 2025-11-25 09:02:59.828941411 +0000 UTC m=+0.082412311 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:02:59 compute-0 nova_compute[253538]: 2025-11-25 09:02:59.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:59 compute-0 NetworkManager[48915]: <info>  [1764061379.9046] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/570)
Nov 25 09:02:59 compute-0 ovn_controller[152859]: 2025-11-25T09:02:59Z|01378|binding|INFO|Releasing lport ab77c41f-12b1-44c7-af48-058abf7be28c from this chassis (sb_readonly=0)
Nov 25 09:02:59 compute-0 ovn_controller[152859]: 2025-11-25T09:02:59Z|01379|binding|INFO|Releasing lport 5564ec46-e1ee-4a7e-990b-f716b4d2c9e2 from this chassis (sb_readonly=0)
Nov 25 09:02:59 compute-0 ovn_controller[152859]: 2025-11-25T09:02:59Z|01380|binding|INFO|Releasing lport 7518767c-6a1a-4489-968c-840b865348d3 from this chassis (sb_readonly=0)
Nov 25 09:02:59 compute-0 NetworkManager[48915]: <info>  [1764061379.9058] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/571)
Nov 25 09:02:59 compute-0 ovn_controller[152859]: 2025-11-25T09:02:59Z|01381|binding|INFO|Releasing lport ab77c41f-12b1-44c7-af48-058abf7be28c from this chassis (sb_readonly=0)
Nov 25 09:02:59 compute-0 ovn_controller[152859]: 2025-11-25T09:02:59Z|01382|binding|INFO|Releasing lport 5564ec46-e1ee-4a7e-990b-f716b4d2c9e2 from this chassis (sb_readonly=0)
Nov 25 09:02:59 compute-0 ovn_controller[152859]: 2025-11-25T09:02:59Z|01383|binding|INFO|Releasing lport 7518767c-6a1a-4489-968c-840b865348d3 from this chassis (sb_readonly=0)
Nov 25 09:02:59 compute-0 nova_compute[253538]: 2025-11-25 09:02:59.942 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:02:59 compute-0 nova_compute[253538]: 2025-11-25 09:02:59.956 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2466: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Nov 25 09:03:00 compute-0 nova_compute[253538]: 2025-11-25 09:03:00.318 253542 DEBUG nova.compute.manager [req-122d2af0-0b6e-4966-968e-e8b19bfe2e9f req-ef9f69e5-7333-4f80-bcf8-8408796cdd98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-changed-9918858c-8b7c-4d3f-aada-d04fcb6eab03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:03:00 compute-0 nova_compute[253538]: 2025-11-25 09:03:00.319 253542 DEBUG nova.compute.manager [req-122d2af0-0b6e-4966-968e-e8b19bfe2e9f req-ef9f69e5-7333-4f80-bcf8-8408796cdd98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Refreshing instance network info cache due to event network-changed-9918858c-8b7c-4d3f-aada-d04fcb6eab03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:03:00 compute-0 nova_compute[253538]: 2025-11-25 09:03:00.319 253542 DEBUG oslo_concurrency.lockutils [req-122d2af0-0b6e-4966-968e-e8b19bfe2e9f req-ef9f69e5-7333-4f80-bcf8-8408796cdd98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:03:00 compute-0 nova_compute[253538]: 2025-11-25 09:03:00.320 253542 DEBUG oslo_concurrency.lockutils [req-122d2af0-0b6e-4966-968e-e8b19bfe2e9f req-ef9f69e5-7333-4f80-bcf8-8408796cdd98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:03:00 compute-0 nova_compute[253538]: 2025-11-25 09:03:00.320 253542 DEBUG nova.network.neutron [req-122d2af0-0b6e-4966-968e-e8b19bfe2e9f req-ef9f69e5-7333-4f80-bcf8-8408796cdd98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Refreshing network info cache for port 9918858c-8b7c-4d3f-aada-d04fcb6eab03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:03:01 compute-0 ceph-mon[75015]: pgmap v2466: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Nov 25 09:03:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2467: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 155 op/s
Nov 25 09:03:02 compute-0 nova_compute[253538]: 2025-11-25 09:03:02.216 253542 DEBUG nova.network.neutron [req-122d2af0-0b6e-4966-968e-e8b19bfe2e9f req-ef9f69e5-7333-4f80-bcf8-8408796cdd98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updated VIF entry in instance network info cache for port 9918858c-8b7c-4d3f-aada-d04fcb6eab03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:03:02 compute-0 nova_compute[253538]: 2025-11-25 09:03:02.217 253542 DEBUG nova.network.neutron [req-122d2af0-0b6e-4966-968e-e8b19bfe2e9f req-ef9f69e5-7333-4f80-bcf8-8408796cdd98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updating instance_info_cache with network_info: [{"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:03:02 compute-0 nova_compute[253538]: 2025-11-25 09:03:02.234 253542 DEBUG oslo_concurrency.lockutils [req-122d2af0-0b6e-4966-968e-e8b19bfe2e9f req-ef9f69e5-7333-4f80-bcf8-8408796cdd98 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:03:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:03:02 compute-0 nova_compute[253538]: 2025-11-25 09:03:02.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:03 compute-0 ceph-mon[75015]: pgmap v2467: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 155 op/s
Nov 25 09:03:03 compute-0 nova_compute[253538]: 2025-11-25 09:03:03.824 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2468: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 810 KiB/s wr, 153 op/s
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006967633855896333 of space, bias 1.0, pg target 0.20902901567689 quantized to 32 (current 32)
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:03:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:03:04 compute-0 nova_compute[253538]: 2025-11-25 09:03:04.616 253542 DEBUG nova.compute.manager [req-5c6cb4da-21a2-48c8-8706-ca0d2dac6056 req-b17d46ff-931e-4c6d-a04d-0360ebc417ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-changed-a93aab06-4a98-453a-87c3-01b817ee7602 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:03:04 compute-0 nova_compute[253538]: 2025-11-25 09:03:04.618 253542 DEBUG nova.compute.manager [req-5c6cb4da-21a2-48c8-8706-ca0d2dac6056 req-b17d46ff-931e-4c6d-a04d-0360ebc417ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Refreshing instance network info cache due to event network-changed-a93aab06-4a98-453a-87c3-01b817ee7602. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:03:04 compute-0 nova_compute[253538]: 2025-11-25 09:03:04.619 253542 DEBUG oslo_concurrency.lockutils [req-5c6cb4da-21a2-48c8-8706-ca0d2dac6056 req-b17d46ff-931e-4c6d-a04d-0360ebc417ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:03:04 compute-0 nova_compute[253538]: 2025-11-25 09:03:04.619 253542 DEBUG oslo_concurrency.lockutils [req-5c6cb4da-21a2-48c8-8706-ca0d2dac6056 req-b17d46ff-931e-4c6d-a04d-0360ebc417ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:03:04 compute-0 nova_compute[253538]: 2025-11-25 09:03:04.619 253542 DEBUG nova.network.neutron [req-5c6cb4da-21a2-48c8-8706-ca0d2dac6056 req-b17d46ff-931e-4c6d-a04d-0360ebc417ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Refreshing network info cache for port a93aab06-4a98-453a-87c3-01b817ee7602 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:03:05 compute-0 ceph-mon[75015]: pgmap v2468: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 810 KiB/s wr, 153 op/s
Nov 25 09:03:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2469: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 55 KiB/s wr, 152 op/s
Nov 25 09:03:06 compute-0 ovn_controller[152859]: 2025-11-25T09:03:06Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f4:e3:ea 10.100.0.12
Nov 25 09:03:06 compute-0 ovn_controller[152859]: 2025-11-25T09:03:06Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:e3:ea 10.100.0.12
Nov 25 09:03:07 compute-0 ceph-mon[75015]: pgmap v2469: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 55 KiB/s wr, 152 op/s
Nov 25 09:03:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:03:07 compute-0 nova_compute[253538]: 2025-11-25 09:03:07.880 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 199 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 744 KiB/s wr, 123 op/s
Nov 25 09:03:08 compute-0 nova_compute[253538]: 2025-11-25 09:03:08.321 253542 DEBUG nova.network.neutron [req-5c6cb4da-21a2-48c8-8706-ca0d2dac6056 req-b17d46ff-931e-4c6d-a04d-0360ebc417ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Updated VIF entry in instance network info cache for port a93aab06-4a98-453a-87c3-01b817ee7602. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:03:08 compute-0 nova_compute[253538]: 2025-11-25 09:03:08.321 253542 DEBUG nova.network.neutron [req-5c6cb4da-21a2-48c8-8706-ca0d2dac6056 req-b17d46ff-931e-4c6d-a04d-0360ebc417ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Updating instance_info_cache with network_info: [{"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:03:08 compute-0 nova_compute[253538]: 2025-11-25 09:03:08.337 253542 DEBUG oslo_concurrency.lockutils [req-5c6cb4da-21a2-48c8-8706-ca0d2dac6056 req-b17d46ff-931e-4c6d-a04d-0360ebc417ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:03:08 compute-0 nova_compute[253538]: 2025-11-25 09:03:08.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:09 compute-0 ceph-mon[75015]: pgmap v2470: 321 pgs: 321 active+clean; 199 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 744 KiB/s wr, 123 op/s
Nov 25 09:03:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2471: 321 pgs: 321 active+clean; 222 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 156 op/s
Nov 25 09:03:10 compute-0 ovn_controller[152859]: 2025-11-25T09:03:10Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2f:fb:42 10.100.0.13
Nov 25 09:03:10 compute-0 ovn_controller[152859]: 2025-11-25T09:03:10Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2f:fb:42 10.100.0.13
Nov 25 09:03:11 compute-0 ceph-mon[75015]: pgmap v2471: 321 pgs: 321 active+clean; 222 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 156 op/s
Nov 25 09:03:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 234 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 939 KiB/s rd, 3.6 MiB/s wr, 124 op/s
Nov 25 09:03:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:03:12 compute-0 nova_compute[253538]: 2025-11-25 09:03:12.883 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:13 compute-0 ceph-mon[75015]: pgmap v2472: 321 pgs: 321 active+clean; 234 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 939 KiB/s rd, 3.6 MiB/s wr, 124 op/s
Nov 25 09:03:13 compute-0 nova_compute[253538]: 2025-11-25 09:03:13.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 243 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 4.2 MiB/s wr, 118 op/s
Nov 25 09:03:15 compute-0 ceph-mon[75015]: pgmap v2473: 321 pgs: 321 active+clean; 243 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 4.2 MiB/s wr, 118 op/s
Nov 25 09:03:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 715 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Nov 25 09:03:16 compute-0 ceph-mon[75015]: pgmap v2474: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 715 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Nov 25 09:03:16 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Nov 25 09:03:16 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:16.752393) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:03:16 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Nov 25 09:03:16 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061396752463, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2060, "num_deletes": 251, "total_data_size": 3346609, "memory_usage": 3404680, "flush_reason": "Manual Compaction"}
Nov 25 09:03:16 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Nov 25 09:03:16 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061396897639, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3279708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49760, "largest_seqno": 51819, "table_properties": {"data_size": 3270425, "index_size": 5841, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19034, "raw_average_key_size": 20, "raw_value_size": 3251904, "raw_average_value_size": 3444, "num_data_blocks": 259, "num_entries": 944, "num_filter_entries": 944, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061178, "oldest_key_time": 1764061178, "file_creation_time": 1764061396, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:03:16 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 145292 microseconds, and 13556 cpu microseconds.
Nov 25 09:03:16 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:16.897690) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3279708 bytes OK
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:16.897715) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.056571) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.056617) EVENT_LOG_v1 {"time_micros": 1764061397056609, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.056640) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3337962, prev total WAL file size 3337962, number of live WAL files 2.
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.057688) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3202KB)], [116(8422KB)]
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061397057717, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11904328, "oldest_snapshot_seqno": -1}
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7404 keys, 10210900 bytes, temperature: kUnknown
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061397239104, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10210900, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10161992, "index_size": 29276, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 191974, "raw_average_key_size": 25, "raw_value_size": 10030229, "raw_average_value_size": 1354, "num_data_blocks": 1149, "num_entries": 7404, "num_filter_entries": 7404, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.239401) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10210900 bytes
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.409944) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.6 rd, 56.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.2 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7918, records dropped: 514 output_compression: NoCompression
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.409999) EVENT_LOG_v1 {"time_micros": 1764061397409979, "job": 70, "event": "compaction_finished", "compaction_time_micros": 181492, "compaction_time_cpu_micros": 25084, "output_level": 6, "num_output_files": 1, "total_output_size": 10210900, "num_input_records": 7918, "num_output_records": 7404, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061397411617, "job": 70, "event": "table_file_deletion", "file_number": 118}
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061397414882, "job": 70, "event": "table_file_deletion", "file_number": 116}
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.057606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.414980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.414987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.414990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.414993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:03:17 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:17.414996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:03:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:03:17 compute-0 nova_compute[253538]: 2025-11-25 09:03:17.902 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 707 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Nov 25 09:03:18 compute-0 nova_compute[253538]: 2025-11-25 09:03:18.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:19 compute-0 ceph-mon[75015]: pgmap v2475: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 707 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Nov 25 09:03:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 573 KiB/s rd, 3.6 MiB/s wr, 97 op/s
Nov 25 09:03:21 compute-0 ceph-mon[75015]: pgmap v2476: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 573 KiB/s rd, 3.6 MiB/s wr, 97 op/s
Nov 25 09:03:21 compute-0 nova_compute[253538]: 2025-11-25 09:03:21.743 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:21 compute-0 nova_compute[253538]: 2025-11-25 09:03:21.744 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:21 compute-0 nova_compute[253538]: 2025-11-25 09:03:21.762 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:03:21 compute-0 nova_compute[253538]: 2025-11-25 09:03:21.853 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:21 compute-0 nova_compute[253538]: 2025-11-25 09:03:21.854 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:21 compute-0 nova_compute[253538]: 2025-11-25 09:03:21.864 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:03:21 compute-0 nova_compute[253538]: 2025-11-25 09:03:21.865 253542 INFO nova.compute.claims [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:03:22 compute-0 nova_compute[253538]: 2025-11-25 09:03:22.030 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:03:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Nov 25 09:03:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:03:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2571430072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:03:22 compute-0 nova_compute[253538]: 2025-11-25 09:03:22.540 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:03:22 compute-0 nova_compute[253538]: 2025-11-25 09:03:22.548 253542 DEBUG nova.compute.provider_tree [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:03:22 compute-0 ceph-mon[75015]: pgmap v2477: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Nov 25 09:03:22 compute-0 nova_compute[253538]: 2025-11-25 09:03:22.564 253542 DEBUG nova.scheduler.client.report [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:03:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:03:22 compute-0 nova_compute[253538]: 2025-11-25 09:03:22.787 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:22 compute-0 nova_compute[253538]: 2025-11-25 09:03:22.787 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:03:22 compute-0 podman[394489]: 2025-11-25 09:03:22.809870998 +0000 UTC m=+0.049846547 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 09:03:22 compute-0 podman[394488]: 2025-11-25 09:03:22.835534475 +0000 UTC m=+0.084380754 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:03:22 compute-0 nova_compute[253538]: 2025-11-25 09:03:22.907 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.254 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.254 253542 DEBUG nova.network.neutron [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.272 253542 INFO nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.285 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.368 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.370 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.370 253542 INFO nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Creating image(s)
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.396 253542 DEBUG nova.storage.rbd_utils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:03:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:03:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:03:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:03:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:03:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:03:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.671 253542 DEBUG nova.storage.rbd_utils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:03:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2571430072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.778 253542 DEBUG nova.storage.rbd_utils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.781 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.857 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.858 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.859 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.859 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.881 253542 DEBUG nova.storage.rbd_utils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.884 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:03:23 compute-0 nova_compute[253538]: 2025-11-25 09:03:23.919 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:24 compute-0 nova_compute[253538]: 2025-11-25 09:03:24.022 253542 DEBUG nova.policy [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:03:24 compute-0 nova_compute[253538]: 2025-11-25 09:03:24.055 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:24.056 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:03:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:24.058 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:03:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 656 KiB/s wr, 15 op/s
Nov 25 09:03:24 compute-0 ceph-mon[75015]: pgmap v2478: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 656 KiB/s wr, 15 op/s
Nov 25 09:03:24 compute-0 nova_compute[253538]: 2025-11-25 09:03:24.796 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.911s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:03:24 compute-0 nova_compute[253538]: 2025-11-25 09:03:24.858 253542 DEBUG nova.storage.rbd_utils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:03:25 compute-0 nova_compute[253538]: 2025-11-25 09:03:25.035 253542 DEBUG nova.objects.instance [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 1ee319a5-b613-4b27-a1e6-64b0129bf269 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:03:25 compute-0 nova_compute[253538]: 2025-11-25 09:03:25.058 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:03:25 compute-0 nova_compute[253538]: 2025-11-25 09:03:25.058 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Ensure instance console log exists: /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:03:25 compute-0 nova_compute[253538]: 2025-11-25 09:03:25.058 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:25 compute-0 nova_compute[253538]: 2025-11-25 09:03:25.059 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:25 compute-0 nova_compute[253538]: 2025-11-25 09:03:25.059 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:25 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:25.060 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 274 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Nov 25 09:03:27 compute-0 ceph-mon[75015]: pgmap v2479: 321 pgs: 321 active+clean; 274 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Nov 25 09:03:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:03:27 compute-0 nova_compute[253538]: 2025-11-25 09:03:27.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:28 compute-0 nova_compute[253538]: 2025-11-25 09:03:28.063 253542 DEBUG nova.network.neutron [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Successfully created port: 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:03:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 289 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Nov 25 09:03:28 compute-0 nova_compute[253538]: 2025-11-25 09:03:28.880 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:03:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2469312614' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:03:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:03:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2469312614' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:03:29 compute-0 nova_compute[253538]: 2025-11-25 09:03:29.214 253542 DEBUG nova.network.neutron [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Successfully created port: 25c8c441-cd5e-4cd3-9151-e8137db08e65 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:03:29 compute-0 ceph-mon[75015]: pgmap v2480: 321 pgs: 321 active+clean; 289 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Nov 25 09:03:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2469312614' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:03:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2469312614' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:03:29 compute-0 nova_compute[253538]: 2025-11-25 09:03:29.843 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "60fab831-4ae4-4e18-a4e4-5466abbece52" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:29 compute-0 nova_compute[253538]: 2025-11-25 09:03:29.843 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:29 compute-0 nova_compute[253538]: 2025-11-25 09:03:29.855 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:03:29 compute-0 nova_compute[253538]: 2025-11-25 09:03:29.937 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:29 compute-0 nova_compute[253538]: 2025-11-25 09:03:29.939 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:29 compute-0 nova_compute[253538]: 2025-11-25 09:03:29.955 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:03:29 compute-0 nova_compute[253538]: 2025-11-25 09:03:29.955 253542 INFO nova.compute.claims [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.075 253542 DEBUG nova.network.neutron [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Successfully updated port: 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:03:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.099 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.182 253542 DEBUG nova.compute.manager [req-049a09f1-8ecc-4141-a2c8-1ca1d499bf14 req-0dabf1c1-7fcf-4e0c-a0e6-56f3ef9eb92e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-changed-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.183 253542 DEBUG nova.compute.manager [req-049a09f1-8ecc-4141-a2c8-1ca1d499bf14 req-0dabf1c1-7fcf-4e0c-a0e6-56f3ef9eb92e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Refreshing instance network info cache due to event network-changed-2cd88dce-60d9-4da6-a5f6-ba6622fd8812. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.183 253542 DEBUG oslo_concurrency.lockutils [req-049a09f1-8ecc-4141-a2c8-1ca1d499bf14 req-0dabf1c1-7fcf-4e0c-a0e6-56f3ef9eb92e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.183 253542 DEBUG oslo_concurrency.lockutils [req-049a09f1-8ecc-4141-a2c8-1ca1d499bf14 req-0dabf1c1-7fcf-4e0c-a0e6-56f3ef9eb92e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.183 253542 DEBUG nova.network.neutron [req-049a09f1-8ecc-4141-a2c8-1ca1d499bf14 req-0dabf1c1-7fcf-4e0c-a0e6-56f3ef9eb92e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Refreshing network info cache for port 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.400 253542 DEBUG nova.network.neutron [req-049a09f1-8ecc-4141-a2c8-1ca1d499bf14 req-0dabf1c1-7fcf-4e0c-a0e6-56f3ef9eb92e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:03:30 compute-0 ceph-mon[75015]: pgmap v2481: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:03:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:03:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2663197943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.619 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.627 253542 DEBUG nova.compute.provider_tree [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.643 253542 DEBUG nova.scheduler.client.report [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.700 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.701 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.811 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.811 253542 DEBUG nova.network.neutron [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.831 253542 INFO nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:03:30 compute-0 podman[394714]: 2025-11-25 09:03:30.85325476 +0000 UTC m=+0.103946127 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.854 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.960 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.962 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.963 253542 INFO nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Creating image(s)
Nov 25 09:03:30 compute-0 nova_compute[253538]: 2025-11-25 09:03:30.981 253542 DEBUG nova.storage.rbd_utils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 60fab831-4ae4-4e18-a4e4-5466abbece52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.002 253542 DEBUG nova.storage.rbd_utils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 60fab831-4ae4-4e18-a4e4-5466abbece52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.024 253542 DEBUG nova.storage.rbd_utils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 60fab831-4ae4-4e18-a4e4-5466abbece52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.027 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.102 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.103 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.104 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.104 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.125 253542 DEBUG nova.storage.rbd_utils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 60fab831-4ae4-4e18-a4e4-5466abbece52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.128 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 60fab831-4ae4-4e18-a4e4-5466abbece52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.263 253542 DEBUG nova.network.neutron [req-049a09f1-8ecc-4141-a2c8-1ca1d499bf14 req-0dabf1c1-7fcf-4e0c-a0e6-56f3ef9eb92e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.290 253542 DEBUG oslo_concurrency.lockutils [req-049a09f1-8ecc-4141-a2c8-1ca1d499bf14 req-0dabf1c1-7fcf-4e0c-a0e6-56f3ef9eb92e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.331 253542 DEBUG nova.policy [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283b89dbe3284e8ea2019b797673108b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfffff2c57a442a59b202d368d49bf00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.462 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 60fab831-4ae4-4e18-a4e4-5466abbece52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.498 253542 DEBUG nova.network.neutron [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Successfully updated port: 25c8c441-cd5e-4cd3-9151-e8137db08e65 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:03:31 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2663197943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.549 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.549 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.550 253542 DEBUG nova.network.neutron [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.557 253542 DEBUG nova.storage.rbd_utils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] resizing rbd image 60fab831-4ae4-4e18-a4e4-5466abbece52_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.668 253542 DEBUG nova.objects.instance [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'migration_context' on Instance uuid 60fab831-4ae4-4e18-a4e4-5466abbece52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.678 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.679 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Ensure instance console log exists: /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.679 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.680 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.680 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:31 compute-0 nova_compute[253538]: 2025-11-25 09:03:31.952 253542 DEBUG nova.network.neutron [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:03:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 303 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.2 MiB/s wr, 38 op/s
Nov 25 09:03:32 compute-0 nova_compute[253538]: 2025-11-25 09:03:32.260 253542 DEBUG nova.compute.manager [req-372b9d1e-f967-4ee7-b893-6a89a10e98d7 req-86f1c0a0-8677-4f84-b004-00e30e928c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-changed-25c8c441-cd5e-4cd3-9151-e8137db08e65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:03:32 compute-0 nova_compute[253538]: 2025-11-25 09:03:32.260 253542 DEBUG nova.compute.manager [req-372b9d1e-f967-4ee7-b893-6a89a10e98d7 req-86f1c0a0-8677-4f84-b004-00e30e928c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Refreshing instance network info cache due to event network-changed-25c8c441-cd5e-4cd3-9151-e8137db08e65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:03:32 compute-0 nova_compute[253538]: 2025-11-25 09:03:32.261 253542 DEBUG oslo_concurrency.lockutils [req-372b9d1e-f967-4ee7-b893-6a89a10e98d7 req-86f1c0a0-8677-4f84-b004-00e30e928c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:03:32 compute-0 nova_compute[253538]: 2025-11-25 09:03:32.329 253542 DEBUG nova.network.neutron [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Successfully created port: d2008aa0-bac3-4d83-88d2-34376e911b2c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:03:32 compute-0 ceph-mon[75015]: pgmap v2482: 321 pgs: 321 active+clean; 303 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.2 MiB/s wr, 38 op/s
Nov 25 09:03:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:03:32 compute-0 nova_compute[253538]: 2025-11-25 09:03:32.916 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:33 compute-0 nova_compute[253538]: 2025-11-25 09:03:33.649 253542 DEBUG nova.network.neutron [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Successfully updated port: d2008aa0-bac3-4d83-88d2-34376e911b2c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:03:33 compute-0 nova_compute[253538]: 2025-11-25 09:03:33.663 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:03:33 compute-0 nova_compute[253538]: 2025-11-25 09:03:33.663 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquired lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:03:33 compute-0 nova_compute[253538]: 2025-11-25 09:03:33.664 253542 DEBUG nova.network.neutron [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:03:33 compute-0 nova_compute[253538]: 2025-11-25 09:03:33.756 253542 DEBUG nova.compute.manager [req-894204e2-b11b-4939-bf49-188c572977d5 req-07fe0918-a967-4cea-90bd-77260f02f8e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received event network-changed-d2008aa0-bac3-4d83-88d2-34376e911b2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:03:33 compute-0 nova_compute[253538]: 2025-11-25 09:03:33.757 253542 DEBUG nova.compute.manager [req-894204e2-b11b-4939-bf49-188c572977d5 req-07fe0918-a967-4cea-90bd-77260f02f8e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Refreshing instance network info cache due to event network-changed-d2008aa0-bac3-4d83-88d2-34376e911b2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:03:33 compute-0 nova_compute[253538]: 2025-11-25 09:03:33.758 253542 DEBUG oslo_concurrency.lockutils [req-894204e2-b11b-4939-bf49-188c572977d5 req-07fe0918-a967-4cea-90bd-77260f02f8e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:03:33 compute-0 nova_compute[253538]: 2025-11-25 09:03:33.883 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:33 compute-0 nova_compute[253538]: 2025-11-25 09:03:33.964 253542 DEBUG nova.network.neutron [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:03:34 compute-0 sudo[394907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:03:34 compute-0 sudo[394907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:34 compute-0 sudo[394907]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 322 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.9 MiB/s wr, 40 op/s
Nov 25 09:03:34 compute-0 sudo[394932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:03:34 compute-0 sudo[394932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:34 compute-0 sudo[394932]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.157 253542 DEBUG nova.network.neutron [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updating instance_info_cache with network_info: [{"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.174 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.174 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Instance network_info: |[{"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.174 253542 DEBUG oslo_concurrency.lockutils [req-372b9d1e-f967-4ee7-b893-6a89a10e98d7 req-86f1c0a0-8677-4f84-b004-00e30e928c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.175 253542 DEBUG nova.network.neutron [req-372b9d1e-f967-4ee7-b893-6a89a10e98d7 req-86f1c0a0-8677-4f84-b004-00e30e928c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Refreshing network info cache for port 25c8c441-cd5e-4cd3-9151-e8137db08e65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.178 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Start _get_guest_xml network_info=[{"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.183 253542 WARNING nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.192 253542 DEBUG nova.virt.libvirt.host [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:03:34 compute-0 sudo[394957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.192 253542 DEBUG nova.virt.libvirt.host [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:03:34 compute-0 sudo[394957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.196 253542 DEBUG nova.virt.libvirt.host [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.196 253542 DEBUG nova.virt.libvirt.host [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.196 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:03:34 compute-0 sudo[394957]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.197 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.197 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.198 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.198 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.198 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.198 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.199 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.199 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.199 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.199 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.200 253542 DEBUG nova.virt.hardware [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.203 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:03:34 compute-0 sudo[394982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:03:34 compute-0 sudo[394982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:03:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2778574145' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.664 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.689 253542 DEBUG nova.storage.rbd_utils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:03:34 compute-0 nova_compute[253538]: 2025-11-25 09:03:34.692 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:03:34 compute-0 sudo[394982]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 09:03:34 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:03:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:03:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:03:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:03:34 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:03:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:03:34 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:03:34 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev fe72d3be-15b6-49ab-a422-849cac669e17 does not exist
Nov 25 09:03:34 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 4a80d837-ad7a-480b-aeaf-6c2c204c4c1b does not exist
Nov 25 09:03:34 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d4c12f1b-1e7f-42cc-8a01-126e19d1bbdb does not exist
Nov 25 09:03:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:03:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:03:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:03:34 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:03:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:03:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:03:34 compute-0 sudo[395099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:03:34 compute-0 sudo[395099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:34 compute-0 sudo[395099]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:34 compute-0 sudo[395124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:03:34 compute-0 sudo[395124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:34 compute-0 sudo[395124]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:35 compute-0 sudo[395149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:03:35 compute-0 sudo[395149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:35 compute-0 sudo[395149]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:35 compute-0 sudo[395174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:03:35 compute-0 sudo[395174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:03:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734369686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.149 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.151 253542 DEBUG nova.virt.libvirt.vif [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-872919100',display_name='tempest-TestGettingAddress-server-872919100',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-872919100',id=135,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-91m001wh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:03:23Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=1ee319a5-b613-4b27-a1e6-64b0129bf269,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.152 253542 DEBUG nova.network.os_vif_util [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.153 253542 DEBUG nova.network.os_vif_util [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e8:e0,bridge_name='br-int',has_traffic_filtering=True,id=2cd88dce-60d9-4da6-a5f6-ba6622fd8812,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cd88dce-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.154 253542 DEBUG nova.virt.libvirt.vif [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-872919100',display_name='tempest-TestGettingAddress-server-872919100',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-872919100',id=135,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-91m001wh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:03:23Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=1ee319a5-b613-4b27-a1e6-64b0129bf269,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.154 253542 DEBUG nova.network.os_vif_util [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.155 253542 DEBUG nova.network.os_vif_util [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c4:a5,bridge_name='br-int',has_traffic_filtering=True,id=25c8c441-cd5e-4cd3-9151-e8137db08e65,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c8c441-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.156 253542 DEBUG nova.objects.instance [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ee319a5-b613-4b27-a1e6-64b0129bf269 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.168 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:03:35 compute-0 nova_compute[253538]:   <uuid>1ee319a5-b613-4b27-a1e6-64b0129bf269</uuid>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   <name>instance-00000087</name>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <nova:name>tempest-TestGettingAddress-server-872919100</nova:name>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:03:34</nova:creationTime>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:03:35 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:03:35 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:03:35 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:03:35 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:03:35 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:03:35 compute-0 nova_compute[253538]:         <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 09:03:35 compute-0 nova_compute[253538]:         <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:03:35 compute-0 nova_compute[253538]:         <nova:port uuid="2cd88dce-60d9-4da6-a5f6-ba6622fd8812">
Nov 25 09:03:35 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:03:35 compute-0 nova_compute[253538]:         <nova:port uuid="25c8c441-cd5e-4cd3-9151-e8137db08e65">
Nov 25 09:03:35 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fefe:c4a5" ipVersion="6"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <system>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <entry name="serial">1ee319a5-b613-4b27-a1e6-64b0129bf269</entry>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <entry name="uuid">1ee319a5-b613-4b27-a1e6-64b0129bf269</entry>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     </system>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   <os>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   </os>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   <features>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   </features>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/1ee319a5-b613-4b27-a1e6-64b0129bf269_disk">
Nov 25 09:03:35 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       </source>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:03:35 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/1ee319a5-b613-4b27-a1e6-64b0129bf269_disk.config">
Nov 25 09:03:35 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       </source>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:03:35 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:1e:e8:e0"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <target dev="tap2cd88dce-60"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:fe:c4:a5"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <target dev="tap25c8c441-cd"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269/console.log" append="off"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <video>
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     </video>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:03:35 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:03:35 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:03:35 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:03:35 compute-0 nova_compute[253538]: </domain>
Nov 25 09:03:35 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.170 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Preparing to wait for external event network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.170 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.171 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.171 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.171 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Preparing to wait for external event network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.171 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.171 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.171 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.172 253542 DEBUG nova.virt.libvirt.vif [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-872919100',display_name='tempest-TestGettingAddress-server-872919100',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-872919100',id=135,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-91m001wh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:03:23Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=1ee319a5-b613-4b27-a1e6-64b0129bf269,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.172 253542 DEBUG nova.network.os_vif_util [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.173 253542 DEBUG nova.network.os_vif_util [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e8:e0,bridge_name='br-int',has_traffic_filtering=True,id=2cd88dce-60d9-4da6-a5f6-ba6622fd8812,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cd88dce-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.173 253542 DEBUG os_vif [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e8:e0,bridge_name='br-int',has_traffic_filtering=True,id=2cd88dce-60d9-4da6-a5f6-ba6622fd8812,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cd88dce-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.174 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.175 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.175 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.178 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.178 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cd88dce-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.178 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2cd88dce-60, col_values=(('external_ids', {'iface-id': '2cd88dce-60d9-4da6-a5f6-ba6622fd8812', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:e8:e0', 'vm-uuid': '1ee319a5-b613-4b27-a1e6-64b0129bf269'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.180 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:35 compute-0 NetworkManager[48915]: <info>  [1764061415.1817] manager: (tap2cd88dce-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.182 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.187 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.187 253542 INFO os_vif [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e8:e0,bridge_name='br-int',has_traffic_filtering=True,id=2cd88dce-60d9-4da6-a5f6-ba6622fd8812,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cd88dce-60')
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.188 253542 DEBUG nova.virt.libvirt.vif [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-872919100',display_name='tempest-TestGettingAddress-server-872919100',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-872919100',id=135,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-91m001wh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:03:23Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=1ee319a5-b613-4b27-a1e6-64b0129bf269,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.188 253542 DEBUG nova.network.os_vif_util [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.189 253542 DEBUG nova.network.os_vif_util [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c4:a5,bridge_name='br-int',has_traffic_filtering=True,id=25c8c441-cd5e-4cd3-9151-e8137db08e65,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c8c441-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.189 253542 DEBUG os_vif [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c4:a5,bridge_name='br-int',has_traffic_filtering=True,id=25c8c441-cd5e-4cd3-9151-e8137db08e65,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c8c441-cd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.190 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.190 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.190 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.192 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.192 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25c8c441-cd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.192 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25c8c441-cd, col_values=(('external_ids', {'iface-id': '25c8c441-cd5e-4cd3-9151-e8137db08e65', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:c4:a5', 'vm-uuid': '1ee319a5-b613-4b27-a1e6-64b0129bf269'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.193 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:35 compute-0 NetworkManager[48915]: <info>  [1764061415.1952] manager: (tap25c8c441-cd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.195 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.203 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.204 253542 INFO os_vif [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c4:a5,bridge_name='br-int',has_traffic_filtering=True,id=25c8c441-cd5e-4cd3-9151-e8137db08e65,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c8c441-cd')
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.259 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.259 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.259 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:1e:e8:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.260 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:fe:c4:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.260 253542 INFO nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Using config drive
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.292 253542 DEBUG nova.storage.rbd_utils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.323 253542 DEBUG nova.network.neutron [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Updating instance_info_cache with network_info: [{"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.345 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Releasing lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.346 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Instance network_info: |[{"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.346 253542 DEBUG oslo_concurrency.lockutils [req-894204e2-b11b-4939-bf49-188c572977d5 req-07fe0918-a967-4cea-90bd-77260f02f8e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.347 253542 DEBUG nova.network.neutron [req-894204e2-b11b-4939-bf49-188c572977d5 req-07fe0918-a967-4cea-90bd-77260f02f8e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Refreshing network info cache for port d2008aa0-bac3-4d83-88d2-34376e911b2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.350 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Start _get_guest_xml network_info=[{"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.355 253542 WARNING nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:03:35 compute-0 ceph-mon[75015]: pgmap v2483: 321 pgs: 321 active+clean; 322 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.9 MiB/s wr, 40 op/s
Nov 25 09:03:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2778574145' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:03:35 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:03:35 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:03:35 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:03:35 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:03:35 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:03:35 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:03:35 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:03:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/734369686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.365 253542 DEBUG nova.virt.libvirt.host [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.366 253542 DEBUG nova.virt.libvirt.host [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.370 253542 DEBUG nova.virt.libvirt.host [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.371 253542 DEBUG nova.virt.libvirt.host [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.371 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.372 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.372 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.372 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.373 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.373 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.373 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.373 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.373 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.374 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.374 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.374 253542 DEBUG nova.virt.hardware [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.377 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:03:35 compute-0 podman[395263]: 2025-11-25 09:03:35.520474514 +0000 UTC m=+0.100998967 container create b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banach, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:03:35 compute-0 podman[395263]: 2025-11-25 09:03:35.440975723 +0000 UTC m=+0.021500196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:03:35 compute-0 systemd[1]: Started libpod-conmon-b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1.scope.
Nov 25 09:03:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.672 253542 INFO nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Creating config drive at /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269/disk.config
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.680 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphvqaj0jt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:03:35 compute-0 podman[395263]: 2025-11-25 09:03:35.759875191 +0000 UTC m=+0.340399704 container init b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banach, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 09:03:35 compute-0 podman[395263]: 2025-11-25 09:03:35.769027221 +0000 UTC m=+0.349551674 container start b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 09:03:35 compute-0 adoring_banach[395299]: 167 167
Nov 25 09:03:35 compute-0 systemd[1]: libpod-b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1.scope: Deactivated successfully.
Nov 25 09:03:35 compute-0 podman[395263]: 2025-11-25 09:03:35.821335132 +0000 UTC m=+0.401859635 container attach b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 09:03:35 compute-0 podman[395263]: 2025-11-25 09:03:35.821707483 +0000 UTC m=+0.402231956 container died b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banach, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.822 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphvqaj0jt" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:03:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:03:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1880590095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.849 253542 DEBUG nova.storage.rbd_utils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.852 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269/disk.config 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:03:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-290eec8d57d83157bf2bf33c009f078ff32b600cab5be2c95e5ca0acd6bf3d04-merged.mount: Deactivated successfully.
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.886 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.906 253542 DEBUG nova.storage.rbd_utils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 60fab831-4ae4-4e18-a4e4-5466abbece52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:03:35 compute-0 nova_compute[253538]: 2025-11-25 09:03:35.910 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:03:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 09:03:36 compute-0 podman[395263]: 2025-11-25 09:03:36.093583874 +0000 UTC m=+0.674108327 container remove b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banach, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 25 09:03:36 compute-0 systemd[1]: libpod-conmon-b3fbc7d43e02f03db9e2a0e7ba3a0adc50f0e98ae28c21415f593d0e079876c1.scope: Deactivated successfully.
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.282 253542 DEBUG nova.network.neutron [req-372b9d1e-f967-4ee7-b893-6a89a10e98d7 req-86f1c0a0-8677-4f84-b004-00e30e928c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updated VIF entry in instance network info cache for port 25c8c441-cd5e-4cd3-9151-e8137db08e65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.285 253542 DEBUG nova.network.neutron [req-372b9d1e-f967-4ee7-b893-6a89a10e98d7 req-86f1c0a0-8677-4f84-b004-00e30e928c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updating instance_info_cache with network_info: [{"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.307 253542 DEBUG oslo_concurrency.lockutils [req-372b9d1e-f967-4ee7-b893-6a89a10e98d7 req-86f1c0a0-8677-4f84-b004-00e30e928c73 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:03:36 compute-0 podman[395403]: 2025-11-25 09:03:36.274601924 +0000 UTC m=+0.037186702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:03:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:03:36 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/606996446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:03:36 compute-0 podman[395403]: 2025-11-25 09:03:36.461536066 +0000 UTC m=+0.224120824 container create d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:03:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1880590095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.496 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.498 253542 DEBUG nova.virt.libvirt.vif [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:03:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1215590521',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1215590521',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=136,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmWC6+Lhjt7ZggX7R271Z8ydSfaz55B57w+DkFd+9P5ZZi/RtND2Lrlt8eHEO6BYJ7zwCthN/Bq/sxErPbCK7jo7dXwLPbl9Tdlo4a8btinzQw58JjdPmMZRYnO+DrRAQ==',key_name='tempest-TestSecurityGroupsBasicOps-1979245065',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-lk5znizs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:03:30Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=60fab831-4ae4-4e18-a4e4-5466abbece52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.498 253542 DEBUG nova.network.os_vif_util [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.499 253542 DEBUG nova.network.os_vif_util [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:7d:84,bridge_name='br-int',has_traffic_filtering=True,id=d2008aa0-bac3-4d83-88d2-34376e911b2c,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2008aa0-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.500 253542 DEBUG nova.objects.instance [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 60fab831-4ae4-4e18-a4e4-5466abbece52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.517 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:03:36 compute-0 nova_compute[253538]:   <uuid>60fab831-4ae4-4e18-a4e4-5466abbece52</uuid>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   <name>instance-00000088</name>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1215590521</nova:name>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:03:35</nova:creationTime>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:03:36 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:03:36 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:03:36 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:03:36 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:03:36 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:03:36 compute-0 nova_compute[253538]:         <nova:user uuid="283b89dbe3284e8ea2019b797673108b">tempest-TestSecurityGroupsBasicOps-1495447964-project-member</nova:user>
Nov 25 09:03:36 compute-0 nova_compute[253538]:         <nova:project uuid="cfffff2c57a442a59b202d368d49bf00">tempest-TestSecurityGroupsBasicOps-1495447964</nova:project>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:03:36 compute-0 nova_compute[253538]:         <nova:port uuid="d2008aa0-bac3-4d83-88d2-34376e911b2c">
Nov 25 09:03:36 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <system>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <entry name="serial">60fab831-4ae4-4e18-a4e4-5466abbece52</entry>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <entry name="uuid">60fab831-4ae4-4e18-a4e4-5466abbece52</entry>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     </system>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   <os>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   </os>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   <features>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   </features>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/60fab831-4ae4-4e18-a4e4-5466abbece52_disk">
Nov 25 09:03:36 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       </source>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:03:36 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/60fab831-4ae4-4e18-a4e4-5466abbece52_disk.config">
Nov 25 09:03:36 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       </source>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:03:36 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:72:7d:84"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <target dev="tapd2008aa0-ba"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52/console.log" append="off"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <video>
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     </video>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:03:36 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:03:36 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:03:36 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:03:36 compute-0 nova_compute[253538]: </domain>
Nov 25 09:03:36 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.519 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Preparing to wait for external event network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.520 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.520 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.521 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.522 253542 DEBUG nova.virt.libvirt.vif [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:03:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1215590521',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1215590521',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=136,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmWC6+Lhjt7ZggX7R271Z8ydSfaz55B57w+DkFd+9P5ZZi/RtND2Lrlt8eHEO6BYJ7zwCthN/Bq/sxErPbCK7jo7dXwLPbl9Tdlo4a8btinzQw58JjdPmMZRYnO+DrRAQ==',key_name='tempest-TestSecurityGroupsBasicOps-1979245065',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-lk5znizs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:03:30Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=60fab831-4ae4-4e18-a4e4-5466abbece52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:03:36 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.522 253542 DEBUG nova.network.os_vif_util [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.523 253542 DEBUG nova.network.os_vif_util [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:7d:84,bridge_name='br-int',has_traffic_filtering=True,id=d2008aa0-bac3-4d83-88d2-34376e911b2c,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2008aa0-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.524 253542 DEBUG os_vif [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:7d:84,bridge_name='br-int',has_traffic_filtering=True,id=d2008aa0-bac3-4d83-88d2-34376e911b2c,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2008aa0-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.524 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.525 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.526 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.530 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.531 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2008aa0-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.531 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd2008aa0-ba, col_values=(('external_ids', {'iface-id': 'd2008aa0-bac3-4d83-88d2-34376e911b2c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:7d:84', 'vm-uuid': '60fab831-4ae4-4e18-a4e4-5466abbece52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.533 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.537 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:03:36 compute-0 NetworkManager[48915]: <info>  [1764061416.5385] manager: (tapd2008aa0-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/574)
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.546 253542 INFO os_vif [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:7d:84,bridge_name='br-int',has_traffic_filtering=True,id=d2008aa0-bac3-4d83-88d2-34376e911b2c,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2008aa0-ba')
Nov 25 09:03:36 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:03:36 compute-0 systemd[1]: Started libpod-conmon-d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299.scope.
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.573 253542 DEBUG oslo_concurrency.processutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269/disk.config 1ee319a5-b613-4b27-a1e6-64b0129bf269_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.721s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.574 253542 INFO nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Deleting local config drive /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269/disk.config because it was imported into RBD.
Nov 25 09:03:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833fc99fdcb4670e859ff04d55199eec67b5628be698cbe604b240e1437b8ced/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833fc99fdcb4670e859ff04d55199eec67b5628be698cbe604b240e1437b8ced/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833fc99fdcb4670e859ff04d55199eec67b5628be698cbe604b240e1437b8ced/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833fc99fdcb4670e859ff04d55199eec67b5628be698cbe604b240e1437b8ced/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833fc99fdcb4670e859ff04d55199eec67b5628be698cbe604b240e1437b8ced/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:03:36 compute-0 podman[395403]: 2025-11-25 09:03:36.604861272 +0000 UTC m=+0.367446060 container init d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.615 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.616 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.616 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] No VIF found with MAC fa:16:3e:72:7d:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.616 253542 INFO nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Using config drive
Nov 25 09:03:36 compute-0 podman[395403]: 2025-11-25 09:03:36.617860706 +0000 UTC m=+0.380445464 container start d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.645 253542 DEBUG nova.storage.rbd_utils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 60fab831-4ae4-4e18-a4e4-5466abbece52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:03:36 compute-0 NetworkManager[48915]: <info>  [1764061416.6472] manager: (tap2cd88dce-60): new Tun device (/org/freedesktop/NetworkManager/Devices/575)
Nov 25 09:03:36 compute-0 kernel: tap2cd88dce-60: entered promiscuous mode
Nov 25 09:03:36 compute-0 podman[395403]: 2025-11-25 09:03:36.675938274 +0000 UTC m=+0.438523052 container attach d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 09:03:36 compute-0 ovn_controller[152859]: 2025-11-25T09:03:36Z|01384|binding|INFO|Claiming lport 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 for this chassis.
Nov 25 09:03:36 compute-0 ovn_controller[152859]: 2025-11-25T09:03:36Z|01385|binding|INFO|2cd88dce-60d9-4da6-a5f6-ba6622fd8812: Claiming fa:16:3e:1e:e8:e0 10.100.0.14
Nov 25 09:03:36 compute-0 systemd-udevd[395462]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:03:36 compute-0 NetworkManager[48915]: <info>  [1764061416.6821] manager: (tap25c8c441-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/576)
Nov 25 09:03:36 compute-0 systemd-udevd[395465]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.684 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:e8:e0 10.100.0.14'], port_security=['fa:16:3e:1e:e8:e0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1ee319a5-b613-4b27-a1e6-64b0129bf269', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f691304c-d112-4c32-b3ac-0f33230178b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f5f96402-bff3-4d85-bba1-4edfd7ec059e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a4166e3-493a-4dd7-9e89-e40e3bf1bed7, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2cd88dce-60d9-4da6-a5f6-ba6622fd8812) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.686 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 in datapath f691304c-d112-4c32-b3ac-0f33230178b0 bound to our chassis
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.687 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f691304c-d112-4c32-b3ac-0f33230178b0
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.692 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:36 compute-0 NetworkManager[48915]: <info>  [1764061416.6954] device (tap2cd88dce-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:03:36 compute-0 NetworkManager[48915]: <info>  [1764061416.6961] device (tap2cd88dce-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:03:36 compute-0 NetworkManager[48915]: <info>  [1764061416.7027] device (tap25c8c441-cd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:03:36 compute-0 kernel: tap25c8c441-cd: entered promiscuous mode
Nov 25 09:03:36 compute-0 NetworkManager[48915]: <info>  [1764061416.7038] device (tap25c8c441-cd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:03:36 compute-0 ovn_controller[152859]: 2025-11-25T09:03:36Z|01386|binding|INFO|Setting lport 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 ovn-installed in OVS
Nov 25 09:03:36 compute-0 ovn_controller[152859]: 2025-11-25T09:03:36Z|01387|binding|INFO|Setting lport 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 up in Southbound
Nov 25 09:03:36 compute-0 ovn_controller[152859]: 2025-11-25T09:03:36Z|01388|if_status|INFO|Not updating pb chassis for 25c8c441-cd5e-4cd3-9151-e8137db08e65 now as sb is readonly
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.705 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8c952ebb-93f1-434f-a290-7eca6efdc5d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:36 compute-0 ovn_controller[152859]: 2025-11-25T09:03:36Z|01389|binding|INFO|Claiming lport 25c8c441-cd5e-4cd3-9151-e8137db08e65 for this chassis.
Nov 25 09:03:36 compute-0 ovn_controller[152859]: 2025-11-25T09:03:36Z|01390|binding|INFO|25c8c441-cd5e-4cd3-9151-e8137db08e65: Claiming fa:16:3e:fe:c4:a5 2001:db8::f816:3eff:fefe:c4a5
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.737 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:c4:a5 2001:db8::f816:3eff:fefe:c4a5'], port_security=['fa:16:3e:fe:c4:a5 2001:db8::f816:3eff:fefe:c4a5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fefe:c4a5/64', 'neutron:device_id': '1ee319a5-b613-4b27-a1e6-64b0129bf269', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f5f96402-bff3-4d85-bba1-4edfd7ec059e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a218a470-f1de-46c9-a998-6139383f8f9c, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=25c8c441-cd5e-4cd3-9151-e8137db08e65) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.707 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:36 compute-0 ovn_controller[152859]: 2025-11-25T09:03:36Z|01391|binding|INFO|Setting lport 25c8c441-cd5e-4cd3-9151-e8137db08e65 ovn-installed in OVS
Nov 25 09:03:36 compute-0 ovn_controller[152859]: 2025-11-25T09:03:36Z|01392|binding|INFO|Setting lport 25c8c441-cd5e-4cd3-9151-e8137db08e65 up in Southbound
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.752 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:36 compute-0 systemd-machined[215790]: New machine qemu-165-instance-00000087.
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.770 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f85db8-a2a5-4d5a-a6a2-3b4978cdf947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.775 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[acde9e8e-33ec-43e4-8828-550e34ad9dd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:36 compute-0 systemd[1]: Started Virtual Machine qemu-165-instance-00000087.
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.808 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b03e3464-9075-4f14-be5f-8fb1cf2429ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.828 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[36d9fb8a-f1a2-4592-ba07-705b3b3f88c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf691304c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:d6:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672407, 'reachable_time': 19375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395480, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.843 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[04cb5edb-d211-43cb-a215-44ea5ac89610]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf691304c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672419, 'tstamp': 672419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395485, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf691304c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672421, 'tstamp': 672421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395485, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.844 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf691304c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.846 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.852 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf691304c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.852 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.853 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf691304c-d0, col_values=(('external_ids', {'iface-id': '5564ec46-e1ee-4a7e-990b-f716b4d2c9e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.853 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.855 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 25c8c441-cd5e-4cd3-9151-e8137db08e65 in datapath 269f4fa4-a7fb-4f9a-b49d-3b1968826304 unbound from our chassis
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.857 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 269f4fa4-a7fb-4f9a-b49d-3b1968826304
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.871 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[21612972-5311-4696-908c-f881e40ab195]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.904 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ef9bd8f9-86b9-4b40-8d86-f582cc45415a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.906 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d13f5630-8af8-4bb7-86f3-fc4497de7aef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.944 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[887050a0-2afb-4ab6-93af-e37a350c4194]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.965 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d91be723-7f3b-45ea-a9ea-6fa905a1fbba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap269f4fa4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:a8:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 398], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672570, 'reachable_time': 38765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395493, 'error': None, 'target': 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.986 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[38e8dff1-0840-4b91-89d9-90f962c86300]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap269f4fa4-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672580, 'tstamp': 672580}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395494, 'error': None, 'target': 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.988 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269f4fa4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.990 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:36 compute-0 nova_compute[253538]: 2025-11-25 09:03:36.996 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.997 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap269f4fa4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.997 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.998 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap269f4fa4-a0, col_values=(('external_ids', {'iface-id': 'ab77c41f-12b1-44c7-af48-058abf7be28c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:36.998 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.371 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061417.3714201, 1ee319a5-b613-4b27-a1e6-64b0129bf269 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.372 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] VM Started (Lifecycle Event)
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.388 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.393 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061417.3715806, 1ee319a5-b613-4b27-a1e6-64b0129bf269 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.394 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] VM Paused (Lifecycle Event)
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.412 253542 DEBUG nova.compute.manager [req-bbc1137f-07d3-4399-9485-b727693defca req-0f936d07-38a6-409a-bfb1-82ca33e5b4e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.412 253542 DEBUG oslo_concurrency.lockutils [req-bbc1137f-07d3-4399-9485-b727693defca req-0f936d07-38a6-409a-bfb1-82ca33e5b4e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.412 253542 DEBUG oslo_concurrency.lockutils [req-bbc1137f-07d3-4399-9485-b727693defca req-0f936d07-38a6-409a-bfb1-82ca33e5b4e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.412 253542 DEBUG oslo_concurrency.lockutils [req-bbc1137f-07d3-4399-9485-b727693defca req-0f936d07-38a6-409a-bfb1-82ca33e5b4e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.413 253542 DEBUG nova.compute.manager [req-bbc1137f-07d3-4399-9485-b727693defca req-0f936d07-38a6-409a-bfb1-82ca33e5b4e3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Processing event network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.414 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.418 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.435 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.498 253542 INFO nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Creating config drive at /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52/disk.config
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.503 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7qkw4gl4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.536 253542 DEBUG nova.network.neutron [req-894204e2-b11b-4939-bf49-188c572977d5 req-07fe0918-a967-4cea-90bd-77260f02f8e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Updated VIF entry in instance network info cache for port d2008aa0-bac3-4d83-88d2-34376e911b2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.537 253542 DEBUG nova.network.neutron [req-894204e2-b11b-4939-bf49-188c572977d5 req-07fe0918-a967-4cea-90bd-77260f02f8e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Updating instance_info_cache with network_info: [{"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.550 253542 DEBUG oslo_concurrency.lockutils [req-894204e2-b11b-4939-bf49-188c572977d5 req-07fe0918-a967-4cea-90bd-77260f02f8e9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:03:37 compute-0 ceph-mon[75015]: pgmap v2484: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 09:03:37 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/606996446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.641 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7qkw4gl4" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.662 253542 DEBUG nova.storage.rbd_utils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] rbd image 60fab831-4ae4-4e18-a4e4-5466abbece52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:03:37 compute-0 nova_compute[253538]: 2025-11-25 09:03:37.665 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52/disk.config 60fab831-4ae4-4e18-a4e4-5466abbece52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:03:37 compute-0 recursing_jepsen[395427]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:03:37 compute-0 recursing_jepsen[395427]: --> relative data size: 1.0
Nov 25 09:03:37 compute-0 recursing_jepsen[395427]: --> All data devices are unavailable
Nov 25 09:03:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:03:37 compute-0 systemd[1]: libpod-d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299.scope: Deactivated successfully.
Nov 25 09:03:37 compute-0 systemd[1]: libpod-d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299.scope: Consumed 1.033s CPU time.
Nov 25 09:03:37 compute-0 podman[395403]: 2025-11-25 09:03:37.736969147 +0000 UTC m=+1.499553905 container died d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-833fc99fdcb4670e859ff04d55199eec67b5628be698cbe604b240e1437b8ced-merged.mount: Deactivated successfully.
Nov 25 09:03:37 compute-0 podman[395403]: 2025-11-25 09:03:37.895793875 +0000 UTC m=+1.658378633 container remove d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 09:03:37 compute-0 systemd[1]: libpod-conmon-d09bd3b709ad1fa294b8b59a3c1d26d977814b56f41fc8e912e4960f88f96299.scope: Deactivated successfully.
Nov 25 09:03:37 compute-0 sudo[395174]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:37 compute-0 sudo[395619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:03:37 compute-0 sudo[395619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:37 compute-0 sudo[395619]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:38 compute-0 sudo[395644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:03:38 compute-0 sudo[395644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:38 compute-0 sudo[395644]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.3 MiB/s wr, 43 op/s
Nov 25 09:03:38 compute-0 sudo[395669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:03:38 compute-0 sudo[395669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:38 compute-0 sudo[395669]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:38 compute-0 sudo[395694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:03:38 compute-0 sudo[395694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:38 compute-0 nova_compute[253538]: 2025-11-25 09:03:38.405 253542 DEBUG oslo_concurrency.processutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52/disk.config 60fab831-4ae4-4e18-a4e4-5466abbece52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.740s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:03:38 compute-0 nova_compute[253538]: 2025-11-25 09:03:38.408 253542 INFO nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Deleting local config drive /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52/disk.config because it was imported into RBD.
Nov 25 09:03:38 compute-0 NetworkManager[48915]: <info>  [1764061418.4710] manager: (tapd2008aa0-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/577)
Nov 25 09:03:38 compute-0 kernel: tapd2008aa0-ba: entered promiscuous mode
Nov 25 09:03:38 compute-0 ovn_controller[152859]: 2025-11-25T09:03:38Z|01393|binding|INFO|Claiming lport d2008aa0-bac3-4d83-88d2-34376e911b2c for this chassis.
Nov 25 09:03:38 compute-0 ovn_controller[152859]: 2025-11-25T09:03:38Z|01394|binding|INFO|d2008aa0-bac3-4d83-88d2-34376e911b2c: Claiming fa:16:3e:72:7d:84 10.100.0.11
Nov 25 09:03:38 compute-0 nova_compute[253538]: 2025-11-25 09:03:38.479 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.486 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:7d:84 10.100.0.11'], port_security=['fa:16:3e:72:7d:84 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '60fab831-4ae4-4e18-a4e4-5466abbece52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72472fc5-3661-404c-a0d2-df155795bd2b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f20aab2-1f55-4a0f-8bdf-77bad4fbb70d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=48d726f4-a876-48b7-812b-492b6f2eebf1, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d2008aa0-bac3-4d83-88d2-34376e911b2c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:03:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.487 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d2008aa0-bac3-4d83-88d2-34376e911b2c in datapath 72472fc5-3661-404c-a0d2-df155795bd2b bound to our chassis
Nov 25 09:03:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.488 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72472fc5-3661-404c-a0d2-df155795bd2b
Nov 25 09:03:38 compute-0 NetworkManager[48915]: <info>  [1764061418.4961] device (tapd2008aa0-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:03:38 compute-0 NetworkManager[48915]: <info>  [1764061418.4970] device (tapd2008aa0-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:03:38 compute-0 ovn_controller[152859]: 2025-11-25T09:03:38Z|01395|binding|INFO|Setting lport d2008aa0-bac3-4d83-88d2-34376e911b2c ovn-installed in OVS
Nov 25 09:03:38 compute-0 ovn_controller[152859]: 2025-11-25T09:03:38Z|01396|binding|INFO|Setting lport d2008aa0-bac3-4d83-88d2-34376e911b2c up in Southbound
Nov 25 09:03:38 compute-0 nova_compute[253538]: 2025-11-25 09:03:38.507 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.507 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1ee576ae-aedf-4d06-a3e1-21ce1ac56451]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:38 compute-0 nova_compute[253538]: 2025-11-25 09:03:38.509 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:38 compute-0 systemd-machined[215790]: New machine qemu-166-instance-00000088.
Nov 25 09:03:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.546 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce81f7e-3d45-4b2a-8033-7fe4cc158996]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.550 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[19753bfd-1536-4d41-bf9e-12ae0540d18a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:38 compute-0 systemd[1]: Started Virtual Machine qemu-166-instance-00000088.
Nov 25 09:03:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.583 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac8e156-a2f1-473b-b069-d0c6e47137af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.602 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[33edc571-215c-421f-8390-3dc874b082d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72472fc5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:5c:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672908, 'reachable_time': 15576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395781, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.621 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc44e31-5fda-4b8f-928c-19f13f11a85b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap72472fc5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672919, 'tstamp': 672919}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395794, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap72472fc5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672922, 'tstamp': 672922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395794, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:03:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.623 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72472fc5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:38 compute-0 nova_compute[253538]: 2025-11-25 09:03:38.624 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:38 compute-0 nova_compute[253538]: 2025-11-25 09:03:38.625 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.627 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72472fc5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.627 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:03:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72472fc5-30, col_values=(('external_ids', {'iface-id': '7518767c-6a1a-4489-968c-840b865348d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:03:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:38.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:03:38 compute-0 ceph-mon[75015]: pgmap v2485: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.3 MiB/s wr, 43 op/s
Nov 25 09:03:38 compute-0 podman[395778]: 2025-11-25 09:03:38.617913595 +0000 UTC m=+0.023172351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:03:38 compute-0 podman[395778]: 2025-11-25 09:03:38.797511487 +0000 UTC m=+0.202770233 container create dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 25 09:03:38 compute-0 systemd[1]: Started libpod-conmon-dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a.scope.
Nov 25 09:03:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:03:38 compute-0 nova_compute[253538]: 2025-11-25 09:03:38.904 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:38 compute-0 podman[395778]: 2025-11-25 09:03:38.942419587 +0000 UTC m=+0.347678343 container init dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldwasser, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 09:03:38 compute-0 podman[395778]: 2025-11-25 09:03:38.951808252 +0000 UTC m=+0.357067028 container start dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:03:38 compute-0 goofy_goldwasser[395821]: 167 167
Nov 25 09:03:38 compute-0 systemd[1]: libpod-dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a.scope: Deactivated successfully.
Nov 25 09:03:38 compute-0 conmon[395821]: conmon dc257c01d4a60eaa7986 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a.scope/container/memory.events
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.039 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061419.038855, 60fab831-4ae4-4e18-a4e4-5466abbece52 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.040 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] VM Started (Lifecycle Event)
Nov 25 09:03:39 compute-0 podman[395778]: 2025-11-25 09:03:39.043995188 +0000 UTC m=+0.449253954 container attach dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldwasser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:03:39 compute-0 podman[395778]: 2025-11-25 09:03:39.045034106 +0000 UTC m=+0.450292852 container died dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.061 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.075 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061419.0390875, 60fab831-4ae4-4e18-a4e4-5466abbece52 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.075 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] VM Paused (Lifecycle Event)
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.096 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.098 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.113 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:03:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-14fcf2b2042cbe2926ff5c1bc005ab6b15e8a4e7b2e009ac4d389f9e6fd5158f-merged.mount: Deactivated successfully.
Nov 25 09:03:39 compute-0 podman[395778]: 2025-11-25 09:03:39.341429553 +0000 UTC m=+0.746688329 container remove dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:03:39 compute-0 systemd[1]: libpod-conmon-dc257c01d4a60eaa7986adf959496c3179c414fc3688c2bda75991c1373c7b8a.scope: Deactivated successfully.
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.574 253542 DEBUG nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.575 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.575 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.575 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.575 253542 DEBUG nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] No event matching network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 in dict_keys([('network-vif-plugged', '25c8c441-cd5e-4cd3-9151-e8137db08e65')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.576 253542 WARNING nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received unexpected event network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 for instance with vm_state building and task_state spawning.
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.576 253542 DEBUG nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.576 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.576 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.577 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.577 253542 DEBUG nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Processing event network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.577 253542 DEBUG nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.577 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.577 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.578 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.578 253542 DEBUG nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] No waiting events found dispatching network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.578 253542 WARNING nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received unexpected event network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 for instance with vm_state building and task_state spawning.
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.578 253542 DEBUG nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received event network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.578 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.579 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.579 253542 DEBUG oslo_concurrency.lockutils [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.579 253542 DEBUG nova.compute.manager [req-cadc6a5d-0997-43d7-81cb-ee665b1f7ab8 req-856e2c64-ea12-4477-8792-a40ca8340519 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Processing event network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.580 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.580 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.585 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061419.585224, 1ee319a5-b613-4b27-a1e6-64b0129bf269 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.586 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] VM Resumed (Lifecycle Event)
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.587 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.588 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.591 253542 INFO nova.virt.libvirt.driver [-] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Instance spawned successfully.
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.592 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.595 253542 INFO nova.virt.libvirt.driver [-] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Instance spawned successfully.
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.597 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:03:39 compute-0 podman[395869]: 2025-11-25 09:03:39.600504905 +0000 UTC m=+0.089342789 container create 9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:03:39 compute-0 podman[395869]: 2025-11-25 09:03:39.537162214 +0000 UTC m=+0.026000178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:03:39 compute-0 systemd[1]: Started libpod-conmon-9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f.scope.
Nov 25 09:03:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcaf4ed3ca92606b0ad6cf3dda4874bb51e64b6fd1142f2bf0d47b0124f3f2ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcaf4ed3ca92606b0ad6cf3dda4874bb51e64b6fd1142f2bf0d47b0124f3f2ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcaf4ed3ca92606b0ad6cf3dda4874bb51e64b6fd1142f2bf0d47b0124f3f2ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcaf4ed3ca92606b0ad6cf3dda4874bb51e64b6fd1142f2bf0d47b0124f3f2ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.698 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.701 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.702 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.702 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.703 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.703 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.703 253542 DEBUG nova.virt.libvirt.driver [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:03:39 compute-0 podman[395869]: 2025-11-25 09:03:39.722561274 +0000 UTC m=+0.211399178 container init 9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.716 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.716 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.716 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.717 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.717 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.717 253542 DEBUG nova.virt.libvirt.driver [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.723 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:03:39 compute-0 podman[395869]: 2025-11-25 09:03:39.732093412 +0000 UTC m=+0.220931296 container start 9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:03:39 compute-0 podman[395869]: 2025-11-25 09:03:39.736911864 +0000 UTC m=+0.225749778 container attach 9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.768 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.768 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061419.5856168, 60fab831-4ae4-4e18-a4e4-5466abbece52 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.769 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] VM Resumed (Lifecycle Event)
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.797 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.799 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.806 253542 INFO nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Took 8.84 seconds to spawn the instance on the hypervisor.
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.806 253542 DEBUG nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.816 253542 INFO nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Took 16.45 seconds to spawn the instance on the hypervisor.
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.817 253542 DEBUG nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.832 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.875 253542 INFO nova.compute.manager [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Took 9.97 seconds to build instance.
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.878 253542 INFO nova.compute.manager [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Took 18.06 seconds to build instance.
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.892 253542 DEBUG oslo_concurrency.lockutils [None req-114760a2-1ab1-4b38-a8f4-182ed57f57d9 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:39 compute-0 nova_compute[253538]: 2025-11-25 09:03:39.893 253542 DEBUG oslo_concurrency.lockutils [None req-07640ace-66ce-4387-9528-c13339b929d9 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]: {
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:     "0": [
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:         {
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "devices": [
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "/dev/loop3"
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             ],
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "lv_name": "ceph_lv0",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "lv_size": "21470642176",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "name": "ceph_lv0",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "tags": {
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.cluster_name": "ceph",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.crush_device_class": "",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.encrypted": "0",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.osd_id": "0",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.type": "block",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.vdo": "0"
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             },
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "type": "block",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "vg_name": "ceph_vg0"
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:         }
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:     ],
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:     "1": [
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:         {
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "devices": [
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "/dev/loop4"
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             ],
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "lv_name": "ceph_lv1",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "lv_size": "21470642176",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "name": "ceph_lv1",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "tags": {
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.cluster_name": "ceph",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.crush_device_class": "",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.encrypted": "0",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.osd_id": "1",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.type": "block",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.vdo": "0"
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             },
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "type": "block",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "vg_name": "ceph_vg1"
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:         }
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:     ],
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:     "2": [
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:         {
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "devices": [
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "/dev/loop5"
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             ],
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "lv_name": "ceph_lv2",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "lv_size": "21470642176",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "name": "ceph_lv2",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "tags": {
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.cluster_name": "ceph",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.crush_device_class": "",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.encrypted": "0",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.osd_id": "2",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.type": "block",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:                 "ceph.vdo": "0"
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             },
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "type": "block",
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:             "vg_name": "ceph_vg2"
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:         }
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]:     ]
Nov 25 09:03:40 compute-0 eloquent_leavitt[395885]: }
Nov 25 09:03:40 compute-0 systemd[1]: libpod-9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f.scope: Deactivated successfully.
Nov 25 09:03:40 compute-0 podman[395869]: 2025-11-25 09:03:40.612065634 +0000 UTC m=+1.100903508 container died 9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:03:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcaf4ed3ca92606b0ad6cf3dda4874bb51e64b6fd1142f2bf0d47b0124f3f2ea-merged.mount: Deactivated successfully.
Nov 25 09:03:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:41.087 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:41.088 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:03:41.089 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:41 compute-0 ceph-mon[75015]: pgmap v2486: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 25 09:03:41 compute-0 podman[395869]: 2025-11-25 09:03:41.299478541 +0000 UTC m=+1.788316425 container remove 9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:03:41 compute-0 sudo[395694]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:41 compute-0 systemd[1]: libpod-conmon-9f532a17f42f7bb8fa301fe6819437362244531bcd7e8d6dde781cbdc131550f.scope: Deactivated successfully.
Nov 25 09:03:41 compute-0 sudo[395909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:03:41 compute-0 sudo[395909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:41 compute-0 sudo[395909]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:41 compute-0 sudo[395934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:03:41 compute-0 sudo[395934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:41 compute-0 sudo[395934]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:41 compute-0 nova_compute[253538]: 2025-11-25 09:03:41.534 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:41 compute-0 sudo[395959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:03:41 compute-0 sudo[395959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:41 compute-0 sudo[395959]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:41 compute-0 sudo[395984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:03:41 compute-0 sudo[395984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:41 compute-0 sshd-session[392367]: ssh_dispatch_run_fatal: Connection from 45.78.222.2 port 36618: Connection timed out [preauth]
Nov 25 09:03:41 compute-0 sshd-session[395891]: Received disconnect from 45.202.211.6 port 60342:11: Bye Bye [preauth]
Nov 25 09:03:41 compute-0 sshd-session[395891]: Disconnected from authenticating user root 45.202.211.6 port 60342 [preauth]
Nov 25 09:03:42 compute-0 podman[396048]: 2025-11-25 09:03:41.981977224 +0000 UTC m=+0.030988944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:03:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Nov 25 09:03:42 compute-0 podman[396048]: 2025-11-25 09:03:42.106672423 +0000 UTC m=+0.155684123 container create bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:03:42 compute-0 systemd[1]: Started libpod-conmon-bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814.scope.
Nov 25 09:03:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:03:42 compute-0 nova_compute[253538]: 2025-11-25 09:03:42.341 253542 DEBUG nova.compute.manager [req-9cb5eed8-b56d-4f66-a279-dbe9dba8419f req-69c27afa-d4ef-471c-b06f-b7e3a4eb8a2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received event network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:03:42 compute-0 nova_compute[253538]: 2025-11-25 09:03:42.341 253542 DEBUG oslo_concurrency.lockutils [req-9cb5eed8-b56d-4f66-a279-dbe9dba8419f req-69c27afa-d4ef-471c-b06f-b7e3a4eb8a2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:42 compute-0 nova_compute[253538]: 2025-11-25 09:03:42.342 253542 DEBUG oslo_concurrency.lockutils [req-9cb5eed8-b56d-4f66-a279-dbe9dba8419f req-69c27afa-d4ef-471c-b06f-b7e3a4eb8a2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:42 compute-0 nova_compute[253538]: 2025-11-25 09:03:42.342 253542 DEBUG oslo_concurrency.lockutils [req-9cb5eed8-b56d-4f66-a279-dbe9dba8419f req-69c27afa-d4ef-471c-b06f-b7e3a4eb8a2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:42 compute-0 nova_compute[253538]: 2025-11-25 09:03:42.342 253542 DEBUG nova.compute.manager [req-9cb5eed8-b56d-4f66-a279-dbe9dba8419f req-69c27afa-d4ef-471c-b06f-b7e3a4eb8a2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] No waiting events found dispatching network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:03:42 compute-0 nova_compute[253538]: 2025-11-25 09:03:42.342 253542 WARNING nova.compute.manager [req-9cb5eed8-b56d-4f66-a279-dbe9dba8419f req-69c27afa-d4ef-471c-b06f-b7e3a4eb8a2f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received unexpected event network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c for instance with vm_state active and task_state None.
Nov 25 09:03:42 compute-0 podman[396048]: 2025-11-25 09:03:42.59507387 +0000 UTC m=+0.644085600 container init bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 09:03:42 compute-0 nostalgic_elgamal[396064]: 167 167
Nov 25 09:03:42 compute-0 systemd[1]: libpod-bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814.scope: Deactivated successfully.
Nov 25 09:03:42 compute-0 podman[396048]: 2025-11-25 09:03:42.61052431 +0000 UTC m=+0.659536010 container start bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elgamal, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 09:03:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:03:42 compute-0 ceph-mon[75015]: pgmap v2487: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Nov 25 09:03:42 compute-0 podman[396048]: 2025-11-25 09:03:42.814011462 +0000 UTC m=+0.863023262 container attach bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elgamal, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 09:03:42 compute-0 podman[396048]: 2025-11-25 09:03:42.814935527 +0000 UTC m=+0.863947267 container died bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elgamal, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 09:03:43 compute-0 nova_compute[253538]: 2025-11-25 09:03:43.072 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:03:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cd28420316157bba06c463141e823a465e6c8de1667300fa4b451aac0083951-merged.mount: Deactivated successfully.
Nov 25 09:03:43 compute-0 podman[396048]: 2025-11-25 09:03:43.475506674 +0000 UTC m=+1.524518394 container remove bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:03:43 compute-0 systemd[1]: libpod-conmon-bfd3778d8f8675c8524353b174e929c5345ea77a0e8c16c102ef7584c3f02814.scope: Deactivated successfully.
Nov 25 09:03:43 compute-0 nova_compute[253538]: 2025-11-25 09:03:43.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:03:43 compute-0 podman[396086]: 2025-11-25 09:03:43.737015872 +0000 UTC m=+0.042478455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:03:43 compute-0 nova_compute[253538]: 2025-11-25 09:03:43.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:43 compute-0 podman[396086]: 2025-11-25 09:03:43.992365164 +0000 UTC m=+0.297827727 container create a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kirch, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:03:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2488: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 95 op/s
Nov 25 09:03:44 compute-0 nova_compute[253538]: 2025-11-25 09:03:44.112 253542 DEBUG nova.compute.manager [req-ed77d0bc-d88e-418d-9e02-382def51063f req-55b577de-c098-4784-92a7-b46424e29d08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received event network-changed-d2008aa0-bac3-4d83-88d2-34376e911b2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:03:44 compute-0 nova_compute[253538]: 2025-11-25 09:03:44.113 253542 DEBUG nova.compute.manager [req-ed77d0bc-d88e-418d-9e02-382def51063f req-55b577de-c098-4784-92a7-b46424e29d08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Refreshing instance network info cache due to event network-changed-d2008aa0-bac3-4d83-88d2-34376e911b2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:03:44 compute-0 nova_compute[253538]: 2025-11-25 09:03:44.114 253542 DEBUG oslo_concurrency.lockutils [req-ed77d0bc-d88e-418d-9e02-382def51063f req-55b577de-c098-4784-92a7-b46424e29d08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:03:44 compute-0 nova_compute[253538]: 2025-11-25 09:03:44.114 253542 DEBUG oslo_concurrency.lockutils [req-ed77d0bc-d88e-418d-9e02-382def51063f req-55b577de-c098-4784-92a7-b46424e29d08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:03:44 compute-0 nova_compute[253538]: 2025-11-25 09:03:44.115 253542 DEBUG nova.network.neutron [req-ed77d0bc-d88e-418d-9e02-382def51063f req-55b577de-c098-4784-92a7-b46424e29d08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Refreshing network info cache for port d2008aa0-bac3-4d83-88d2-34376e911b2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:03:44 compute-0 systemd[1]: Started libpod-conmon-a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3.scope.
Nov 25 09:03:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecfad7911025f9390be9a4762af21e2a46dde347def1effa25e0febe236a5b66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecfad7911025f9390be9a4762af21e2a46dde347def1effa25e0febe236a5b66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecfad7911025f9390be9a4762af21e2a46dde347def1effa25e0febe236a5b66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecfad7911025f9390be9a4762af21e2a46dde347def1effa25e0febe236a5b66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:03:44 compute-0 podman[396086]: 2025-11-25 09:03:44.297788136 +0000 UTC m=+0.603250799 container init a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:03:44 compute-0 podman[396086]: 2025-11-25 09:03:44.30677244 +0000 UTC m=+0.612235003 container start a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kirch, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 09:03:44 compute-0 nova_compute[253538]: 2025-11-25 09:03:44.443 253542 DEBUG nova.compute.manager [req-65dadb0d-11ef-48f3-86e7-81ae8ecd0a0c req-ac4ad3e4-4896-4bde-911a-b49c4fc8bcfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-changed-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:03:44 compute-0 nova_compute[253538]: 2025-11-25 09:03:44.444 253542 DEBUG nova.compute.manager [req-65dadb0d-11ef-48f3-86e7-81ae8ecd0a0c req-ac4ad3e4-4896-4bde-911a-b49c4fc8bcfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Refreshing instance network info cache due to event network-changed-2cd88dce-60d9-4da6-a5f6-ba6622fd8812. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:03:44 compute-0 nova_compute[253538]: 2025-11-25 09:03:44.445 253542 DEBUG oslo_concurrency.lockutils [req-65dadb0d-11ef-48f3-86e7-81ae8ecd0a0c req-ac4ad3e4-4896-4bde-911a-b49c4fc8bcfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:03:44 compute-0 nova_compute[253538]: 2025-11-25 09:03:44.446 253542 DEBUG oslo_concurrency.lockutils [req-65dadb0d-11ef-48f3-86e7-81ae8ecd0a0c req-ac4ad3e4-4896-4bde-911a-b49c4fc8bcfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:03:44 compute-0 nova_compute[253538]: 2025-11-25 09:03:44.446 253542 DEBUG nova.network.neutron [req-65dadb0d-11ef-48f3-86e7-81ae8ecd0a0c req-ac4ad3e4-4896-4bde-911a-b49c4fc8bcfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Refreshing network info cache for port 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:03:44 compute-0 podman[396086]: 2025-11-25 09:03:44.641161731 +0000 UTC m=+0.946624314 container attach a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 09:03:45 compute-0 ceph-mon[75015]: pgmap v2488: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 95 op/s
Nov 25 09:03:45 compute-0 recursing_kirch[396103]: {
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:         "osd_id": 1,
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:         "type": "bluestore"
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:     },
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:         "osd_id": 2,
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:         "type": "bluestore"
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:     },
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:         "osd_id": 0,
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:         "type": "bluestore"
Nov 25 09:03:45 compute-0 recursing_kirch[396103]:     }
Nov 25 09:03:45 compute-0 recursing_kirch[396103]: }
Nov 25 09:03:45 compute-0 systemd[1]: libpod-a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3.scope: Deactivated successfully.
Nov 25 09:03:45 compute-0 systemd[1]: libpod-a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3.scope: Consumed 1.049s CPU time.
Nov 25 09:03:45 compute-0 podman[396086]: 2025-11-25 09:03:45.361332108 +0000 UTC m=+1.666794671 container died a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kirch, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 09:03:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecfad7911025f9390be9a4762af21e2a46dde347def1effa25e0febe236a5b66-merged.mount: Deactivated successfully.
Nov 25 09:03:45 compute-0 podman[396086]: 2025-11-25 09:03:45.928556427 +0000 UTC m=+2.234018990 container remove a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kirch, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 09:03:45 compute-0 sudo[395984]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:03:46 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:03:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:03:46 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:03:46 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 531720b5-6ac8-481f-a6e9-acde38fc5bf2 does not exist
Nov 25 09:03:46 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e1bed3ad-cfac-4290-8292-b790c4dde700 does not exist
Nov 25 09:03:46 compute-0 systemd[1]: libpod-conmon-a8c0a3fc0739410be3c4ab1340789954bdde9243a35903c300e05d65f7ef27c3.scope: Deactivated successfully.
Nov 25 09:03:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 646 KiB/s wr, 162 op/s
Nov 25 09:03:46 compute-0 sudo[396148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:03:46 compute-0 sudo[396148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:46 compute-0 sudo[396148]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:46 compute-0 sudo[396173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:03:46 compute-0 sudo[396173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:03:46 compute-0 sudo[396173]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:46 compute-0 nova_compute[253538]: 2025-11-25 09:03:46.535 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:03:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:03:47 compute-0 ceph-mon[75015]: pgmap v2489: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 646 KiB/s wr, 162 op/s
Nov 25 09:03:47 compute-0 nova_compute[253538]: 2025-11-25 09:03:47.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:03:47 compute-0 nova_compute[253538]: 2025-11-25 09:03:47.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:03:47 compute-0 nova_compute[253538]: 2025-11-25 09:03:47.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:03:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:03:47 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Nov 25 09:03:47 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:47.778133) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:03:47 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Nov 25 09:03:47 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061427778208, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 531, "num_deletes": 258, "total_data_size": 499597, "memory_usage": 511288, "flush_reason": "Manual Compaction"}
Nov 25 09:03:47 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Nov 25 09:03:47 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061427928033, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 495208, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51820, "largest_seqno": 52350, "table_properties": {"data_size": 492244, "index_size": 936, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7031, "raw_average_key_size": 18, "raw_value_size": 486191, "raw_average_value_size": 1293, "num_data_blocks": 41, "num_entries": 376, "num_filter_entries": 376, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061398, "oldest_key_time": 1764061398, "file_creation_time": 1764061427, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:03:47 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 149944 microseconds, and 2673 cpu microseconds.
Nov 25 09:03:47 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:03:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 148 op/s
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:47.928086) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 495208 bytes OK
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:47.928108) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.091801) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.091844) EVENT_LOG_v1 {"time_micros": 1764061428091835, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.091873) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 496510, prev total WAL file size 496510, number of live WAL files 2.
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.092554) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303135' seq:72057594037927935, type:22 .. '6C6F676D0032323639' seq:0, type:0; will stop at (end)
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(483KB)], [119(9971KB)]
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061428092599, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 10706108, "oldest_snapshot_seqno": -1}
Nov 25 09:03:48 compute-0 nova_compute[253538]: 2025-11-25 09:03:48.260 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:03:48 compute-0 nova_compute[253538]: 2025-11-25 09:03:48.261 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:03:48 compute-0 nova_compute[253538]: 2025-11-25 09:03:48.262 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 09:03:48 compute-0 nova_compute[253538]: 2025-11-25 09:03:48.263 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 28376454-90b2-431d-9052-48b369973c8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7252 keys, 10589790 bytes, temperature: kUnknown
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061428312442, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 10589790, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10540882, "index_size": 29660, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18181, "raw_key_size": 189793, "raw_average_key_size": 26, "raw_value_size": 10410776, "raw_average_value_size": 1435, "num_data_blocks": 1163, "num_entries": 7252, "num_filter_entries": 7252, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.313097) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 10589790 bytes
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.341089) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 48.7 rd, 48.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.7 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(43.0) write-amplify(21.4) OK, records in: 7780, records dropped: 528 output_compression: NoCompression
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.341178) EVENT_LOG_v1 {"time_micros": 1764061428341146, "job": 72, "event": "compaction_finished", "compaction_time_micros": 219980, "compaction_time_cpu_micros": 27739, "output_level": 6, "num_output_files": 1, "total_output_size": 10589790, "num_input_records": 7780, "num_output_records": 7252, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061428341797, "job": 72, "event": "table_file_deletion", "file_number": 121}
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061428346478, "job": 72, "event": "table_file_deletion", "file_number": 119}
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.092461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.346544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.346558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.346561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.346564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:03:48 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:03:48.346571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:03:48 compute-0 ceph-mon[75015]: pgmap v2490: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 148 op/s
Nov 25 09:03:48 compute-0 nova_compute[253538]: 2025-11-25 09:03:48.925 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:49 compute-0 nova_compute[253538]: 2025-11-25 09:03:49.322 253542 DEBUG nova.network.neutron [req-ed77d0bc-d88e-418d-9e02-382def51063f req-55b577de-c098-4784-92a7-b46424e29d08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Updated VIF entry in instance network info cache for port d2008aa0-bac3-4d83-88d2-34376e911b2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:03:49 compute-0 nova_compute[253538]: 2025-11-25 09:03:49.323 253542 DEBUG nova.network.neutron [req-ed77d0bc-d88e-418d-9e02-382def51063f req-55b577de-c098-4784-92a7-b46424e29d08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Updating instance_info_cache with network_info: [{"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:03:49 compute-0 nova_compute[253538]: 2025-11-25 09:03:49.326 253542 DEBUG nova.network.neutron [req-65dadb0d-11ef-48f3-86e7-81ae8ecd0a0c req-ac4ad3e4-4896-4bde-911a-b49c4fc8bcfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updated VIF entry in instance network info cache for port 2cd88dce-60d9-4da6-a5f6-ba6622fd8812. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:03:49 compute-0 nova_compute[253538]: 2025-11-25 09:03:49.327 253542 DEBUG nova.network.neutron [req-65dadb0d-11ef-48f3-86e7-81ae8ecd0a0c req-ac4ad3e4-4896-4bde-911a-b49c4fc8bcfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updating instance_info_cache with network_info: [{"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:03:49 compute-0 nova_compute[253538]: 2025-11-25 09:03:49.347 253542 DEBUG oslo_concurrency.lockutils [req-65dadb0d-11ef-48f3-86e7-81ae8ecd0a0c req-ac4ad3e4-4896-4bde-911a-b49c4fc8bcfa b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:03:49 compute-0 nova_compute[253538]: 2025-11-25 09:03:49.349 253542 DEBUG oslo_concurrency.lockutils [req-ed77d0bc-d88e-418d-9e02-382def51063f req-55b577de-c098-4784-92a7-b46424e29d08 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-60fab831-4ae4-4e18-a4e4-5466abbece52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:03:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 20 KiB/s wr, 145 op/s
Nov 25 09:03:51 compute-0 ceph-mon[75015]: pgmap v2491: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 20 KiB/s wr, 145 op/s
Nov 25 09:03:51 compute-0 nova_compute[253538]: 2025-11-25 09:03:51.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2492: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.7 KiB/s wr, 135 op/s
Nov 25 09:03:52 compute-0 nova_compute[253538]: 2025-11-25 09:03:52.108 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updating instance_info_cache with network_info: [{"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:03:52 compute-0 nova_compute[253538]: 2025-11-25 09:03:52.131 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:03:52 compute-0 nova_compute[253538]: 2025-11-25 09:03:52.132 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 09:03:52 compute-0 nova_compute[253538]: 2025-11-25 09:03:52.132 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:03:52 compute-0 nova_compute[253538]: 2025-11-25 09:03:52.133 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:03:52 compute-0 nova_compute[253538]: 2025-11-25 09:03:52.133 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:03:52 compute-0 ceph-mon[75015]: pgmap v2492: 321 pgs: 321 active+clean; 339 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.7 KiB/s wr, 135 op/s
Nov 25 09:03:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:03:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:03:53
Nov 25 09:03:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:03:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:03:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'images', '.mgr', 'vms', 'backups', 'volumes', 'default.rgw.meta', 'default.rgw.log']
Nov 25 09:03:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:03:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:03:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:03:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:03:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:03:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:03:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:03:53 compute-0 nova_compute[253538]: 2025-11-25 09:03:53.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:03:53 compute-0 nova_compute[253538]: 2025-11-25 09:03:53.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:03:53 compute-0 podman[396200]: 2025-11-25 09:03:53.827856712 +0000 UTC m=+0.065243734 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 09:03:53 compute-0 podman[396199]: 2025-11-25 09:03:53.847045574 +0000 UTC m=+0.092125195 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=multipathd)
Nov 25 09:03:53 compute-0 nova_compute[253538]: 2025-11-25 09:03:53.924 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:03:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:03:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:03:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:03:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:03:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:03:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:03:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:03:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:03:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:03:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 340 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 411 KiB/s wr, 101 op/s
Nov 25 09:03:54 compute-0 ceph-mgr[75313]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3119838916
Nov 25 09:03:55 compute-0 ceph-mon[75015]: pgmap v2493: 321 pgs: 321 active+clean; 340 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 411 KiB/s wr, 101 op/s
Nov 25 09:03:55 compute-0 nova_compute[253538]: 2025-11-25 09:03:55.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:03:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2494: 321 pgs: 321 active+clean; 346 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 90 op/s
Nov 25 09:03:56 compute-0 nova_compute[253538]: 2025-11-25 09:03:56.604 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:57 compute-0 ceph-mon[75015]: pgmap v2494: 321 pgs: 321 active+clean; 346 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 90 op/s
Nov 25 09:03:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:03:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 357 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 2.3 MiB/s wr, 38 op/s
Nov 25 09:03:58 compute-0 nova_compute[253538]: 2025-11-25 09:03:58.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:03:58 compute-0 nova_compute[253538]: 2025-11-25 09:03:58.587 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:58 compute-0 nova_compute[253538]: 2025-11-25 09:03:58.588 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:58 compute-0 nova_compute[253538]: 2025-11-25 09:03:58.588 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:03:58 compute-0 nova_compute[253538]: 2025-11-25 09:03:58.589 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:03:58 compute-0 nova_compute[253538]: 2025-11-25 09:03:58.589 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:03:58 compute-0 ceph-mon[75015]: pgmap v2495: 321 pgs: 321 active+clean; 357 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 2.3 MiB/s wr, 38 op/s
Nov 25 09:03:58 compute-0 nova_compute[253538]: 2025-11-25 09:03:58.975 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:03:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:03:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1321375396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.150 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.378 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.379 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.384 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.384 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.388 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.389 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.393 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.393 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.621 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.623 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2929MB free_disk=59.82619857788086GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.623 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.623 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.711 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 28376454-90b2-431d-9052-48b369973c8e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.712 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 2c4a1d63-7674-4276-8da9-b9d4f4fea307 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.712 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 1ee319a5-b613-4b27-a1e6-64b0129bf269 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.712 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 60fab831-4ae4-4e18-a4e4-5466abbece52 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.712 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.712 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:03:59 compute-0 nova_compute[253538]: 2025-11-25 09:03:59.818 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:03:59 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1321375396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:04:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2496: 321 pgs: 321 active+clean; 382 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 4.0 MiB/s wr, 77 op/s
Nov 25 09:04:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:04:00 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/465604835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:04:00 compute-0 nova_compute[253538]: 2025-11-25 09:04:00.280 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:04:00 compute-0 nova_compute[253538]: 2025-11-25 09:04:00.285 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:04:00 compute-0 nova_compute[253538]: 2025-11-25 09:04:00.296 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:04:00 compute-0 ovn_controller[152859]: 2025-11-25T09:04:00Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:e8:e0 10.100.0.14
Nov 25 09:04:00 compute-0 ovn_controller[152859]: 2025-11-25T09:04:00Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:e8:e0 10.100.0.14
Nov 25 09:04:00 compute-0 nova_compute[253538]: 2025-11-25 09:04:00.346 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:04:00 compute-0 nova_compute[253538]: 2025-11-25 09:04:00.347 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:00 compute-0 ceph-mon[75015]: pgmap v2496: 321 pgs: 321 active+clean; 382 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 4.0 MiB/s wr, 77 op/s
Nov 25 09:04:00 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/465604835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:04:01 compute-0 ovn_controller[152859]: 2025-11-25T09:04:01Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:72:7d:84 10.100.0.11
Nov 25 09:04:01 compute-0 ovn_controller[152859]: 2025-11-25T09:04:01Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:7d:84 10.100.0.11
Nov 25 09:04:01 compute-0 nova_compute[253538]: 2025-11-25 09:04:01.607 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:01 compute-0 podman[396282]: 2025-11-25 09:04:01.861278145 +0000 UTC m=+0.099662901 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:04:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 394 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 563 KiB/s rd, 4.1 MiB/s wr, 109 op/s
Nov 25 09:04:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:04:03 compute-0 ceph-mon[75015]: pgmap v2497: 321 pgs: 321 active+clean; 394 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 563 KiB/s rd, 4.1 MiB/s wr, 109 op/s
Nov 25 09:04:03 compute-0 nova_compute[253538]: 2025-11-25 09:04:03.979 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 399 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 605 KiB/s rd, 4.1 MiB/s wr, 114 op/s
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0030138815971543923 of space, bias 1.0, pg target 0.9041644791463177 quantized to 32 (current 32)
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:04:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:04:04 compute-0 ceph-mon[75015]: pgmap v2498: 321 pgs: 321 active+clean; 399 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 605 KiB/s rd, 4.1 MiB/s wr, 114 op/s
Nov 25 09:04:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 648 KiB/s rd, 3.9 MiB/s wr, 125 op/s
Nov 25 09:04:06 compute-0 ceph-mon[75015]: pgmap v2499: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 648 KiB/s rd, 3.9 MiB/s wr, 125 op/s
Nov 25 09:04:06 compute-0 nova_compute[253538]: 2025-11-25 09:04:06.613 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:07 compute-0 nova_compute[253538]: 2025-11-25 09:04:07.597 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "60fab831-4ae4-4e18-a4e4-5466abbece52" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:07 compute-0 nova_compute[253538]: 2025-11-25 09:04:07.598 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:07 compute-0 nova_compute[253538]: 2025-11-25 09:04:07.598 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:07 compute-0 nova_compute[253538]: 2025-11-25 09:04:07.599 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:07 compute-0 nova_compute[253538]: 2025-11-25 09:04:07.599 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:07 compute-0 nova_compute[253538]: 2025-11-25 09:04:07.601 253542 INFO nova.compute.manager [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Terminating instance
Nov 25 09:04:07 compute-0 nova_compute[253538]: 2025-11-25 09:04:07.602 253542 DEBUG nova.compute.manager [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:04:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:04:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 621 KiB/s rd, 3.0 MiB/s wr, 108 op/s
Nov 25 09:04:08 compute-0 ceph-mon[75015]: pgmap v2500: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 621 KiB/s rd, 3.0 MiB/s wr, 108 op/s
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.007 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:09 compute-0 kernel: tapd2008aa0-ba (unregistering): left promiscuous mode
Nov 25 09:04:09 compute-0 NetworkManager[48915]: <info>  [1764061449.2473] device (tapd2008aa0-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.261 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:09 compute-0 ovn_controller[152859]: 2025-11-25T09:04:09Z|01397|binding|INFO|Releasing lport d2008aa0-bac3-4d83-88d2-34376e911b2c from this chassis (sb_readonly=0)
Nov 25 09:04:09 compute-0 ovn_controller[152859]: 2025-11-25T09:04:09Z|01398|binding|INFO|Setting lport d2008aa0-bac3-4d83-88d2-34376e911b2c down in Southbound
Nov 25 09:04:09 compute-0 ovn_controller[152859]: 2025-11-25T09:04:09Z|01399|binding|INFO|Removing iface tapd2008aa0-ba ovn-installed in OVS
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.268 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:09 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000088.scope: Deactivated successfully.
Nov 25 09:04:09 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000088.scope: Consumed 15.748s CPU time.
Nov 25 09:04:09 compute-0 systemd-machined[215790]: Machine qemu-166-instance-00000088 terminated.
Nov 25 09:04:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.342 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:7d:84 10.100.0.11'], port_security=['fa:16:3e:72:7d:84 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '60fab831-4ae4-4e18-a4e4-5466abbece52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72472fc5-3661-404c-a0d2-df155795bd2b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '5', 'neutron:security_group_ids': '1e296d37-dec5-4d7e-978f-4bb613fbda54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=48d726f4-a876-48b7-812b-492b6f2eebf1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=d2008aa0-bac3-4d83-88d2-34376e911b2c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:04:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.343 162739 INFO neutron.agent.ovn.metadata.agent [-] Port d2008aa0-bac3-4d83-88d2-34376e911b2c in datapath 72472fc5-3661-404c-a0d2-df155795bd2b unbound from our chassis
Nov 25 09:04:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.344 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72472fc5-3661-404c-a0d2-df155795bd2b
Nov 25 09:04:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.365 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3dd0e34d-59a5-4f59-9899-e4718f6c09a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.400 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6f7837-817a-4168-adf8-df7bebf46201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.403 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[228728de-bb02-421c-a574-b16945feef5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.445 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4a77e3-837c-4486-80a9-d523db8a035b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.451 253542 INFO nova.virt.libvirt.driver [-] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Instance destroyed successfully.
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.451 253542 DEBUG nova.objects.instance [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'resources' on Instance uuid 60fab831-4ae4-4e18-a4e4-5466abbece52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.463 253542 DEBUG nova.virt.libvirt.vif [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:03:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1215590521',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-gen-1-1215590521',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ge',id=136,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmWC6+Lhjt7ZggX7R271Z8ydSfaz55B57w+DkFd+9P5ZZi/RtND2Lrlt8eHEO6BYJ7zwCthN/Bq/sxErPbCK7jo7dXwLPbl9Tdlo4a8btinzQw58JjdPmMZRYnO+DrRAQ==',key_name='tempest-TestSecurityGroupsBasicOps-1979245065',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:03:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-lk5znizs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:03:39Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=60fab831-4ae4-4e18-a4e4-5466abbece52,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.464 253542 DEBUG nova.network.os_vif_util [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "address": "fa:16:3e:72:7d:84", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2008aa0-ba", "ovs_interfaceid": "d2008aa0-bac3-4d83-88d2-34376e911b2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.465 253542 DEBUG nova.network.os_vif_util [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:72:7d:84,bridge_name='br-int',has_traffic_filtering=True,id=d2008aa0-bac3-4d83-88d2-34376e911b2c,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2008aa0-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.465 253542 DEBUG os_vif [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:7d:84,bridge_name='br-int',has_traffic_filtering=True,id=d2008aa0-bac3-4d83-88d2-34376e911b2c,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2008aa0-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.466 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.467 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2008aa0-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.468 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.470 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.469 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0448e5e9-38da-482e-838f-0cdf719a6941]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72472fc5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:5c:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672908, 'reachable_time': 15576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396329, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.472 253542 INFO os_vif [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:7d:84,bridge_name='br-int',has_traffic_filtering=True,id=d2008aa0-bac3-4d83-88d2-34376e911b2c,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2008aa0-ba')
Nov 25 09:04:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.486 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e331a514-27f2-47fa-8926-0c1be6cf9365]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap72472fc5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672919, 'tstamp': 672919}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396332, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap72472fc5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672922, 'tstamp': 672922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396332, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.488 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72472fc5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.489 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.491 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72472fc5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.491 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.491 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:04:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.491 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72472fc5-30, col_values=(('external_ids', {'iface-id': '7518767c-6a1a-4489-968c-840b865348d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:09.492 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.761 253542 DEBUG nova.compute.manager [req-519e17b1-7d81-4a51-9d61-526b3a36f812 req-f5f6ace6-5405-4f11-994d-ff1778aaa257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received event network-vif-unplugged-d2008aa0-bac3-4d83-88d2-34376e911b2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.761 253542 DEBUG oslo_concurrency.lockutils [req-519e17b1-7d81-4a51-9d61-526b3a36f812 req-f5f6ace6-5405-4f11-994d-ff1778aaa257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.761 253542 DEBUG oslo_concurrency.lockutils [req-519e17b1-7d81-4a51-9d61-526b3a36f812 req-f5f6ace6-5405-4f11-994d-ff1778aaa257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.762 253542 DEBUG oslo_concurrency.lockutils [req-519e17b1-7d81-4a51-9d61-526b3a36f812 req-f5f6ace6-5405-4f11-994d-ff1778aaa257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.762 253542 DEBUG nova.compute.manager [req-519e17b1-7d81-4a51-9d61-526b3a36f812 req-f5f6ace6-5405-4f11-994d-ff1778aaa257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] No waiting events found dispatching network-vif-unplugged-d2008aa0-bac3-4d83-88d2-34376e911b2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:04:09 compute-0 nova_compute[253538]: 2025-11-25 09:04:09.762 253542 DEBUG nova.compute.manager [req-519e17b1-7d81-4a51-9d61-526b3a36f812 req-f5f6ace6-5405-4f11-994d-ff1778aaa257 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received event network-vif-unplugged-d2008aa0-bac3-4d83-88d2-34376e911b2c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:04:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2501: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 593 KiB/s rd, 2.0 MiB/s wr, 96 op/s
Nov 25 09:04:11 compute-0 ceph-mon[75015]: pgmap v2501: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 593 KiB/s rd, 2.0 MiB/s wr, 96 op/s
Nov 25 09:04:11 compute-0 nova_compute[253538]: 2025-11-25 09:04:11.848 253542 DEBUG nova.compute.manager [req-494d07a3-479a-4017-a22d-353c75d05f30 req-7e3b6f71-5e6f-4112-9089-a814a59eafe0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received event network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:11 compute-0 nova_compute[253538]: 2025-11-25 09:04:11.848 253542 DEBUG oslo_concurrency.lockutils [req-494d07a3-479a-4017-a22d-353c75d05f30 req-7e3b6f71-5e6f-4112-9089-a814a59eafe0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:11 compute-0 nova_compute[253538]: 2025-11-25 09:04:11.849 253542 DEBUG oslo_concurrency.lockutils [req-494d07a3-479a-4017-a22d-353c75d05f30 req-7e3b6f71-5e6f-4112-9089-a814a59eafe0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:11 compute-0 nova_compute[253538]: 2025-11-25 09:04:11.849 253542 DEBUG oslo_concurrency.lockutils [req-494d07a3-479a-4017-a22d-353c75d05f30 req-7e3b6f71-5e6f-4112-9089-a814a59eafe0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:11 compute-0 nova_compute[253538]: 2025-11-25 09:04:11.850 253542 DEBUG nova.compute.manager [req-494d07a3-479a-4017-a22d-353c75d05f30 req-7e3b6f71-5e6f-4112-9089-a814a59eafe0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] No waiting events found dispatching network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:04:11 compute-0 nova_compute[253538]: 2025-11-25 09:04:11.850 253542 WARNING nova.compute.manager [req-494d07a3-479a-4017-a22d-353c75d05f30 req-7e3b6f71-5e6f-4112-9089-a814a59eafe0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received unexpected event network-vif-plugged-d2008aa0-bac3-4d83-88d2-34376e911b2c for instance with vm_state active and task_state deleting.
Nov 25 09:04:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 285 KiB/s wr, 59 op/s
Nov 25 09:04:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:04:13 compute-0 ceph-mon[75015]: pgmap v2502: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 285 KiB/s wr, 59 op/s
Nov 25 09:04:13 compute-0 ovn_controller[152859]: 2025-11-25T09:04:13Z|01400|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 25 09:04:14 compute-0 nova_compute[253538]: 2025-11-25 09:04:14.010 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 376 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 177 KiB/s wr, 28 op/s
Nov 25 09:04:14 compute-0 nova_compute[253538]: 2025-11-25 09:04:14.470 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:15 compute-0 ceph-mon[75015]: pgmap v2503: 321 pgs: 321 active+clean; 376 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 177 KiB/s wr, 28 op/s
Nov 25 09:04:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 350 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 170 KiB/s wr, 30 op/s
Nov 25 09:04:17 compute-0 ceph-mon[75015]: pgmap v2504: 321 pgs: 321 active+clean; 350 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 170 KiB/s wr, 30 op/s
Nov 25 09:04:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:04:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 44 KiB/s wr, 19 op/s
Nov 25 09:04:18 compute-0 ceph-mon[75015]: pgmap v2505: 321 pgs: 321 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 44 KiB/s wr, 19 op/s
Nov 25 09:04:19 compute-0 nova_compute[253538]: 2025-11-25 09:04:19.012 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:19 compute-0 nova_compute[253538]: 2025-11-25 09:04:19.472 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:19 compute-0 nova_compute[253538]: 2025-11-25 09:04:19.966 253542 DEBUG nova.compute.manager [req-0703c0ba-7df6-47d9-b08e-66ae870bd76c req-6ccba779-03f1-44ff-86c7-6c61ff3da3a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-changed-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:19 compute-0 nova_compute[253538]: 2025-11-25 09:04:19.967 253542 DEBUG nova.compute.manager [req-0703c0ba-7df6-47d9-b08e-66ae870bd76c req-6ccba779-03f1-44ff-86c7-6c61ff3da3a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Refreshing instance network info cache due to event network-changed-2cd88dce-60d9-4da6-a5f6-ba6622fd8812. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:04:19 compute-0 nova_compute[253538]: 2025-11-25 09:04:19.967 253542 DEBUG oslo_concurrency.lockutils [req-0703c0ba-7df6-47d9-b08e-66ae870bd76c req-6ccba779-03f1-44ff-86c7-6c61ff3da3a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:04:19 compute-0 nova_compute[253538]: 2025-11-25 09:04:19.967 253542 DEBUG oslo_concurrency.lockutils [req-0703c0ba-7df6-47d9-b08e-66ae870bd76c req-6ccba779-03f1-44ff-86c7-6c61ff3da3a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:04:19 compute-0 nova_compute[253538]: 2025-11-25 09:04:19.968 253542 DEBUG nova.network.neutron [req-0703c0ba-7df6-47d9-b08e-66ae870bd76c req-6ccba779-03f1-44ff-86c7-6c61ff3da3a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Refreshing network info cache for port 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:04:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2506: 321 pgs: 321 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 46 KiB/s wr, 27 op/s
Nov 25 09:04:20 compute-0 nova_compute[253538]: 2025-11-25 09:04:20.108 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:20 compute-0 nova_compute[253538]: 2025-11-25 09:04:20.108 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:20 compute-0 nova_compute[253538]: 2025-11-25 09:04:20.109 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:20 compute-0 nova_compute[253538]: 2025-11-25 09:04:20.109 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:20 compute-0 nova_compute[253538]: 2025-11-25 09:04:20.109 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:20 compute-0 nova_compute[253538]: 2025-11-25 09:04:20.110 253542 INFO nova.compute.manager [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Terminating instance
Nov 25 09:04:20 compute-0 nova_compute[253538]: 2025-11-25 09:04:20.111 253542 DEBUG nova.compute.manager [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:04:20 compute-0 ceph-mon[75015]: pgmap v2506: 321 pgs: 321 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 46 KiB/s wr, 27 op/s
Nov 25 09:04:21 compute-0 kernel: tap2cd88dce-60 (unregistering): left promiscuous mode
Nov 25 09:04:21 compute-0 NetworkManager[48915]: <info>  [1764061461.2691] device (tap2cd88dce-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:04:21 compute-0 ovn_controller[152859]: 2025-11-25T09:04:21Z|01401|binding|INFO|Releasing lport 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 from this chassis (sb_readonly=0)
Nov 25 09:04:21 compute-0 ovn_controller[152859]: 2025-11-25T09:04:21Z|01402|binding|INFO|Setting lport 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 down in Southbound
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.321 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:21 compute-0 ovn_controller[152859]: 2025-11-25T09:04:21Z|01403|binding|INFO|Removing iface tap2cd88dce-60 ovn-installed in OVS
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.324 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.330 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:e8:e0 10.100.0.14'], port_security=['fa:16:3e:1e:e8:e0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1ee319a5-b613-4b27-a1e6-64b0129bf269', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f691304c-d112-4c32-b3ac-0f33230178b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f5f96402-bff3-4d85-bba1-4edfd7ec059e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a4166e3-493a-4dd7-9e89-e40e3bf1bed7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2cd88dce-60d9-4da6-a5f6-ba6622fd8812) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.332 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2cd88dce-60d9-4da6-a5f6-ba6622fd8812 in datapath f691304c-d112-4c32-b3ac-0f33230178b0 unbound from our chassis
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.334 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f691304c-d112-4c32-b3ac-0f33230178b0
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.337 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:21 compute-0 kernel: tap25c8c441-cd (unregistering): left promiscuous mode
Nov 25 09:04:21 compute-0 NetworkManager[48915]: <info>  [1764061461.3434] device (tap25c8c441-cd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.348 253542 DEBUG nova.network.neutron [req-0703c0ba-7df6-47d9-b08e-66ae870bd76c req-6ccba779-03f1-44ff-86c7-6c61ff3da3a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updated VIF entry in instance network info cache for port 2cd88dce-60d9-4da6-a5f6-ba6622fd8812. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.348 253542 DEBUG nova.network.neutron [req-0703c0ba-7df6-47d9-b08e-66ae870bd76c req-6ccba779-03f1-44ff-86c7-6c61ff3da3a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updating instance_info_cache with network_info: [{"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.351 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9374098-ba37-43bc-bb60-a5983587c27c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.352 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:21 compute-0 ovn_controller[152859]: 2025-11-25T09:04:21Z|01404|binding|INFO|Releasing lport 25c8c441-cd5e-4cd3-9151-e8137db08e65 from this chassis (sb_readonly=0)
Nov 25 09:04:21 compute-0 ovn_controller[152859]: 2025-11-25T09:04:21Z|01405|binding|INFO|Setting lport 25c8c441-cd5e-4cd3-9151-e8137db08e65 down in Southbound
Nov 25 09:04:21 compute-0 ovn_controller[152859]: 2025-11-25T09:04:21Z|01406|binding|INFO|Removing iface tap25c8c441-cd ovn-installed in OVS
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.361 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:c4:a5 2001:db8::f816:3eff:fefe:c4a5'], port_security=['fa:16:3e:fe:c4:a5 2001:db8::f816:3eff:fefe:c4a5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fefe:c4a5/64', 'neutron:device_id': '1ee319a5-b613-4b27-a1e6-64b0129bf269', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f5f96402-bff3-4d85-bba1-4edfd7ec059e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a218a470-f1de-46c9-a998-6139383f8f9c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=25c8c441-cd5e-4cd3-9151-e8137db08e65) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.374 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.377 253542 DEBUG oslo_concurrency.lockutils [req-0703c0ba-7df6-47d9-b08e-66ae870bd76c req-6ccba779-03f1-44ff-86c7-6c61ff3da3a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-1ee319a5-b613-4b27-a1e6-64b0129bf269" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.389 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e09a5fa4-bcfb-4a62-8470-9aa115e1930f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.393 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[176afe60-6f21-45b1-ad01-32510ca6b176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.423 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e19e408f-eaa9-4d12-bb0d-50a494023086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:21 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000087.scope: Deactivated successfully.
Nov 25 09:04:21 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000087.scope: Consumed 16.896s CPU time.
Nov 25 09:04:21 compute-0 systemd-machined[215790]: Machine qemu-165-instance-00000087 terminated.
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.440 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a32bff81-c7d5-45a0-8577-e8c8f330af73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf691304c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:d6:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672407, 'reachable_time': 19375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396370, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.461 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[281c26aa-ea9f-4597-942d-7c9262358664]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf691304c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672419, 'tstamp': 672419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396371, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf691304c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672421, 'tstamp': 672421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396371, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.463 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf691304c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.465 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.472 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.472 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf691304c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.473 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.473 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf691304c-d0, col_values=(('external_ids', {'iface-id': '5564ec46-e1ee-4a7e-990b-f716b4d2c9e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.473 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.474 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 25c8c441-cd5e-4cd3-9151-e8137db08e65 in datapath 269f4fa4-a7fb-4f9a-b49d-3b1968826304 unbound from our chassis
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.475 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 269f4fa4-a7fb-4f9a-b49d-3b1968826304
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.490 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7cdd81cc-bfdc-4b9c-be4d-4dd41a975d3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.531 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9673c4-28a9-49d3-a1b2-4efc3f44c94d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.534 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e48481-30fd-45fa-897c-c855879732ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:21 compute-0 NetworkManager[48915]: <info>  [1764061461.5410] manager: (tap25c8c441-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/578)
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.559 253542 INFO nova.virt.libvirt.driver [-] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Instance destroyed successfully.
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.560 253542 DEBUG nova.objects.instance [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 1ee319a5-b613-4b27-a1e6-64b0129bf269 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.574 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb56adc-9de4-4e9c-8f8d-8e53bccda2e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.598 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[16007a2b-fc31-4781-8254-d65823c32720]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap269f4fa4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:a8:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 398], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672570, 'reachable_time': 38765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396398, 'error': None, 'target': 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.604 253542 DEBUG nova.virt.libvirt.vif [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-872919100',display_name='tempest-TestGettingAddress-server-872919100',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-872919100',id=135,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:03:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-91m001wh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:03:39Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=1ee319a5-b613-4b27-a1e6-64b0129bf269,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.604 253542 DEBUG nova.network.os_vif_util [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.605 253542 DEBUG nova.network.os_vif_util [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:e8:e0,bridge_name='br-int',has_traffic_filtering=True,id=2cd88dce-60d9-4da6-a5f6-ba6622fd8812,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cd88dce-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.605 253542 DEBUG os_vif [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:e8:e0,bridge_name='br-int',has_traffic_filtering=True,id=2cd88dce-60d9-4da6-a5f6-ba6622fd8812,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cd88dce-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.607 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.607 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cd88dce-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.610 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.616 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.619 253542 INFO os_vif [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:e8:e0,bridge_name='br-int',has_traffic_filtering=True,id=2cd88dce-60d9-4da6-a5f6-ba6622fd8812,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cd88dce-60')
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.620 253542 DEBUG nova.virt.libvirt.vif [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-872919100',display_name='tempest-TestGettingAddress-server-872919100',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-872919100',id=135,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:03:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-91m001wh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:03:39Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=1ee319a5-b613-4b27-a1e6-64b0129bf269,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.620 253542 DEBUG nova.network.os_vif_util [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "address": "fa:16:3e:fe:c4:a5", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefe:c4a5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c8c441-cd", "ovs_interfaceid": "25c8c441-cd5e-4cd3-9151-e8137db08e65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.621 253542 DEBUG nova.network.os_vif_util [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c4:a5,bridge_name='br-int',has_traffic_filtering=True,id=25c8c441-cd5e-4cd3-9151-e8137db08e65,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c8c441-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.621 253542 DEBUG os_vif [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c4:a5,bridge_name='br-int',has_traffic_filtering=True,id=25c8c441-cd5e-4cd3-9151-e8137db08e65,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c8c441-cd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.622 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.622 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25c8c441-cd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.623 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[558c0839-81db-43fb-92fc-558232c31d81]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap269f4fa4-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672580, 'tstamp': 672580}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396400, 'error': None, 'target': 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.625 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.625 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269f4fa4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.626 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap269f4fa4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.628 253542 INFO os_vif [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c4:a5,bridge_name='br-int',has_traffic_filtering=True,id=25c8c441-cd5e-4cd3-9151-e8137db08e65,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c8c441-cd')
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap269f4fa4-a0, col_values=(('external_ids', {'iface-id': 'ab77c41f-12b1-44c7-af48-058abf7be28c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:21.628 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.659 253542 DEBUG nova.compute.manager [req-63d2feb9-aa38-4ae8-9350-17b808fb3577 req-fbd125ad-a1f4-440e-9d81-176a93e412bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-unplugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.660 253542 DEBUG oslo_concurrency.lockutils [req-63d2feb9-aa38-4ae8-9350-17b808fb3577 req-fbd125ad-a1f4-440e-9d81-176a93e412bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.660 253542 DEBUG oslo_concurrency.lockutils [req-63d2feb9-aa38-4ae8-9350-17b808fb3577 req-fbd125ad-a1f4-440e-9d81-176a93e412bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.660 253542 DEBUG oslo_concurrency.lockutils [req-63d2feb9-aa38-4ae8-9350-17b808fb3577 req-fbd125ad-a1f4-440e-9d81-176a93e412bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.660 253542 DEBUG nova.compute.manager [req-63d2feb9-aa38-4ae8-9350-17b808fb3577 req-fbd125ad-a1f4-440e-9d81-176a93e412bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] No waiting events found dispatching network-vif-unplugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:04:21 compute-0 nova_compute[253538]: 2025-11-25 09:04:21.660 253542 DEBUG nova.compute.manager [req-63d2feb9-aa38-4ae8-9350-17b808fb3577 req-fbd125ad-a1f4-440e-9d81-176a93e412bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-unplugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:04:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 25 op/s
Nov 25 09:04:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:04:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:04:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:04:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:04:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:04:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:04:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:04:23 compute-0 ceph-mon[75015]: pgmap v2507: 321 pgs: 321 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 25 op/s
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.809 253542 DEBUG nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.809 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.810 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.810 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.810 253542 DEBUG nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] No waiting events found dispatching network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.810 253542 WARNING nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received unexpected event network-vif-plugged-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 for instance with vm_state active and task_state deleting.
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.811 253542 DEBUG nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-unplugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.811 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.811 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.811 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.811 253542 DEBUG nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] No waiting events found dispatching network-vif-unplugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.812 253542 DEBUG nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-unplugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.812 253542 DEBUG nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.812 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.812 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.812 253542 DEBUG oslo_concurrency.lockutils [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.812 253542 DEBUG nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] No waiting events found dispatching network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.813 253542 WARNING nova.compute.manager [req-0ae28dfa-515d-4849-8dea-facede86f595 req-8cbc6c67-6597-4c5a-9cfe-82cd6187b26d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received unexpected event network-vif-plugged-25c8c441-cd5e-4cd3-9151-e8137db08e65 for instance with vm_state active and task_state deleting.
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.821 253542 INFO nova.virt.libvirt.driver [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Deleting instance files /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52_del
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.822 253542 INFO nova.virt.libvirt.driver [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Deletion of /var/lib/nova/instances/60fab831-4ae4-4e18-a4e4-5466abbece52_del complete
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.874 253542 INFO nova.compute.manager [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Took 16.27 seconds to destroy the instance on the hypervisor.
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.875 253542 DEBUG oslo.service.loopingcall [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.876 253542 DEBUG nova.compute.manager [-] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:04:23 compute-0 nova_compute[253538]: 2025-11-25 09:04:23.876 253542 DEBUG nova.network.neutron [-] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:04:24 compute-0 nova_compute[253538]: 2025-11-25 09:04:24.014 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 13 KiB/s wr, 28 op/s
Nov 25 09:04:24 compute-0 nova_compute[253538]: 2025-11-25 09:04:24.449 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061449.4481587, 60fab831-4ae4-4e18-a4e4-5466abbece52 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:04:24 compute-0 nova_compute[253538]: 2025-11-25 09:04:24.450 253542 INFO nova.compute.manager [-] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] VM Stopped (Lifecycle Event)
Nov 25 09:04:24 compute-0 nova_compute[253538]: 2025-11-25 09:04:24.463 253542 DEBUG nova.compute.manager [None req-89e273be-1c0f-46c7-9640-ba7499c4ffe5 - - - - - -] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:04:24 compute-0 ceph-mon[75015]: pgmap v2508: 321 pgs: 321 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 13 KiB/s wr, 28 op/s
Nov 25 09:04:24 compute-0 nova_compute[253538]: 2025-11-25 09:04:24.576 253542 DEBUG nova.network.neutron [-] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:04:24 compute-0 nova_compute[253538]: 2025-11-25 09:04:24.589 253542 INFO nova.compute.manager [-] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Took 0.71 seconds to deallocate network for instance.
Nov 25 09:04:24 compute-0 nova_compute[253538]: 2025-11-25 09:04:24.629 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:24 compute-0 nova_compute[253538]: 2025-11-25 09:04:24.629 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:24 compute-0 nova_compute[253538]: 2025-11-25 09:04:24.767 253542 DEBUG nova.compute.manager [req-9cdb6414-ad8f-4b17-9223-f7363215a641 req-9b8aa4fc-a20b-4331-99ed-59ddc9faec64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 60fab831-4ae4-4e18-a4e4-5466abbece52] Received event network-vif-deleted-d2008aa0-bac3-4d83-88d2-34376e911b2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:24 compute-0 nova_compute[253538]: 2025-11-25 09:04:24.795 253542 DEBUG oslo_concurrency.processutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:04:24 compute-0 podman[396420]: 2025-11-25 09:04:24.80543474 +0000 UTC m=+0.054929695 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 09:04:24 compute-0 podman[396419]: 2025-11-25 09:04:24.816506231 +0000 UTC m=+0.066722285 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 09:04:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:04:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/228841934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:04:25 compute-0 nova_compute[253538]: 2025-11-25 09:04:25.243 253542 DEBUG oslo_concurrency.processutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:04:25 compute-0 nova_compute[253538]: 2025-11-25 09:04:25.250 253542 DEBUG nova.compute.provider_tree [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:04:25 compute-0 nova_compute[253538]: 2025-11-25 09:04:25.266 253542 DEBUG nova.scheduler.client.report [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:04:25 compute-0 nova_compute[253538]: 2025-11-25 09:04:25.439 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:25 compute-0 nova_compute[253538]: 2025-11-25 09:04:25.537 253542 INFO nova.scheduler.client.report [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Deleted allocations for instance 60fab831-4ae4-4e18-a4e4-5466abbece52
Nov 25 09:04:25 compute-0 nova_compute[253538]: 2025-11-25 09:04:25.609 253542 DEBUG oslo_concurrency.lockutils [None req-148edbc7-74b7-407c-adb3-ee461f084c56 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "60fab831-4ae4-4e18-a4e4-5466abbece52" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 18.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/228841934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:04:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2509: 321 pgs: 321 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 17 KiB/s wr, 29 op/s
Nov 25 09:04:26 compute-0 nova_compute[253538]: 2025-11-25 09:04:26.625 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:26 compute-0 sshd-session[396481]: Connection closed by authenticating user root 193.32.162.151 port 33360 [preauth]
Nov 25 09:04:26 compute-0 nova_compute[253538]: 2025-11-25 09:04:26.924 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:26 compute-0 nova_compute[253538]: 2025-11-25 09:04:26.925 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:26 compute-0 nova_compute[253538]: 2025-11-25 09:04:26.925 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:26 compute-0 nova_compute[253538]: 2025-11-25 09:04:26.925 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:26 compute-0 nova_compute[253538]: 2025-11-25 09:04:26.926 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:26 compute-0 nova_compute[253538]: 2025-11-25 09:04:26.927 253542 INFO nova.compute.manager [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Terminating instance
Nov 25 09:04:26 compute-0 nova_compute[253538]: 2025-11-25 09:04:26.928 253542 DEBUG nova.compute.manager [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:04:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:27.010 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:04:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:27.010 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:04:27 compute-0 nova_compute[253538]: 2025-11-25 09:04:27.011 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:27 compute-0 nova_compute[253538]: 2025-11-25 09:04:27.040 253542 DEBUG nova.compute.manager [req-5e4040bc-e314-4a78-b1e0-05c5d330b069 req-e0e70227-ddc9-440a-bf5b-2dafdf0cacb2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-changed-a93aab06-4a98-453a-87c3-01b817ee7602 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:27 compute-0 nova_compute[253538]: 2025-11-25 09:04:27.040 253542 DEBUG nova.compute.manager [req-5e4040bc-e314-4a78-b1e0-05c5d330b069 req-e0e70227-ddc9-440a-bf5b-2dafdf0cacb2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Refreshing instance network info cache due to event network-changed-a93aab06-4a98-453a-87c3-01b817ee7602. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:04:27 compute-0 nova_compute[253538]: 2025-11-25 09:04:27.040 253542 DEBUG oslo_concurrency.lockutils [req-5e4040bc-e314-4a78-b1e0-05c5d330b069 req-e0e70227-ddc9-440a-bf5b-2dafdf0cacb2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:04:27 compute-0 nova_compute[253538]: 2025-11-25 09:04:27.040 253542 DEBUG oslo_concurrency.lockutils [req-5e4040bc-e314-4a78-b1e0-05c5d330b069 req-e0e70227-ddc9-440a-bf5b-2dafdf0cacb2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:04:27 compute-0 nova_compute[253538]: 2025-11-25 09:04:27.041 253542 DEBUG nova.network.neutron [req-5e4040bc-e314-4a78-b1e0-05c5d330b069 req-e0e70227-ddc9-440a-bf5b-2dafdf0cacb2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Refreshing network info cache for port a93aab06-4a98-453a-87c3-01b817ee7602 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:04:27 compute-0 ceph-mon[75015]: pgmap v2509: 321 pgs: 321 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 17 KiB/s wr, 29 op/s
Nov 25 09:04:27 compute-0 kernel: tapa93aab06-4a (unregistering): left promiscuous mode
Nov 25 09:04:27 compute-0 NetworkManager[48915]: <info>  [1764061467.8369] device (tapa93aab06-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:04:27 compute-0 ovn_controller[152859]: 2025-11-25T09:04:27Z|01407|binding|INFO|Releasing lport a93aab06-4a98-453a-87c3-01b817ee7602 from this chassis (sb_readonly=0)
Nov 25 09:04:27 compute-0 ovn_controller[152859]: 2025-11-25T09:04:27Z|01408|binding|INFO|Setting lport a93aab06-4a98-453a-87c3-01b817ee7602 down in Southbound
Nov 25 09:04:27 compute-0 nova_compute[253538]: 2025-11-25 09:04:27.848 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:27 compute-0 ovn_controller[152859]: 2025-11-25T09:04:27Z|01409|binding|INFO|Removing iface tapa93aab06-4a ovn-installed in OVS
Nov 25 09:04:27 compute-0 nova_compute[253538]: 2025-11-25 09:04:27.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:27.860 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:fb:42 10.100.0.13'], port_security=['fa:16:3e:2f:fb:42 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2c4a1d63-7674-4276-8da9-b9d4f4fea307', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72472fc5-3661-404c-a0d2-df155795bd2b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfffff2c57a442a59b202d368d49bf00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '046f46ed-7d5f-45ad-8313-fa0fe77b097a 0f20aab2-1f55-4a0f-8bdf-77bad4fbb70d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=48d726f4-a876-48b7-812b-492b6f2eebf1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a93aab06-4a98-453a-87c3-01b817ee7602) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:04:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:27.861 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a93aab06-4a98-453a-87c3-01b817ee7602 in datapath 72472fc5-3661-404c-a0d2-df155795bd2b unbound from our chassis
Nov 25 09:04:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:27.863 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72472fc5-3661-404c-a0d2-df155795bd2b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:04:27 compute-0 nova_compute[253538]: 2025-11-25 09:04:27.864 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:27.864 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[70bd5e33-e7c6-464c-be49-330582d7f1c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:27.864 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b namespace which is not needed anymore
Nov 25 09:04:27 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000086.scope: Deactivated successfully.
Nov 25 09:04:27 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000086.scope: Consumed 16.553s CPU time.
Nov 25 09:04:27 compute-0 systemd-machined[215790]: Machine qemu-164-instance-00000086 terminated.
Nov 25 09:04:27 compute-0 nova_compute[253538]: 2025-11-25 09:04:27.965 253542 INFO nova.virt.libvirt.driver [-] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Instance destroyed successfully.
Nov 25 09:04:27 compute-0 nova_compute[253538]: 2025-11-25 09:04:27.965 253542 DEBUG nova.objects.instance [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lazy-loading 'resources' on Instance uuid 2c4a1d63-7674-4276-8da9-b9d4f4fea307 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.003 253542 DEBUG nova.virt.libvirt.vif [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:02:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-582189118',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1495447964-access_point-582189118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1495447964-ac',id=134,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmWC6+Lhjt7ZggX7R271Z8ydSfaz55B57w+DkFd+9P5ZZi/RtND2Lrlt8eHEO6BYJ7zwCthN/Bq/sxErPbCK7jo7dXwLPbl9Tdlo4a8btinzQw58JjdPmMZRYnO+DrRAQ==',key_name='tempest-TestSecurityGroupsBasicOps-1979245065',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:02:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfffff2c57a442a59b202d368d49bf00',ramdisk_id='',reservation_id='r-jwi4cpze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1495447964',owner_user_name='tempest-TestSecurityGroupsBasicOps-1495447964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:02:57Z,user_data=None,user_id='283b89dbe3284e8ea2019b797673108b',uuid=2c4a1d63-7674-4276-8da9-b9d4f4fea307,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.004 253542 DEBUG nova.network.os_vif_util [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converting VIF {"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.005 253542 DEBUG nova.network.os_vif_util [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2f:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=a93aab06-4a98-453a-87c3-01b817ee7602,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93aab06-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.005 253542 DEBUG os_vif [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=a93aab06-4a98-453a-87c3-01b817ee7602,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93aab06-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.006 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.007 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa93aab06-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:28.012 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.014 253542 DEBUG nova.compute.manager [req-f4065564-cfc4-40e0-b21e-c5e1e856fa7b req-0f4ae239-f6bb-4282-8be8-a39a2e8b466f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-vif-unplugged-a93aab06-4a98-453a-87c3-01b817ee7602 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.014 253542 DEBUG oslo_concurrency.lockutils [req-f4065564-cfc4-40e0-b21e-c5e1e856fa7b req-0f4ae239-f6bb-4282-8be8-a39a2e8b466f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.015 253542 DEBUG oslo_concurrency.lockutils [req-f4065564-cfc4-40e0-b21e-c5e1e856fa7b req-0f4ae239-f6bb-4282-8be8-a39a2e8b466f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.015 253542 DEBUG oslo_concurrency.lockutils [req-f4065564-cfc4-40e0-b21e-c5e1e856fa7b req-0f4ae239-f6bb-4282-8be8-a39a2e8b466f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.015 253542 DEBUG nova.compute.manager [req-f4065564-cfc4-40e0-b21e-c5e1e856fa7b req-0f4ae239-f6bb-4282-8be8-a39a2e8b466f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] No waiting events found dispatching network-vif-unplugged-a93aab06-4a98-453a-87c3-01b817ee7602 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.016 253542 DEBUG nova.compute.manager [req-f4065564-cfc4-40e0-b21e-c5e1e856fa7b req-0f4ae239-f6bb-4282-8be8-a39a2e8b466f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-vif-unplugged-a93aab06-4a98-453a-87c3-01b817ee7602 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.016 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.019 253542 INFO os_vif [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=a93aab06-4a98-453a-87c3-01b817ee7602,network=Network(72472fc5-3661-404c-a0d2-df155795bd2b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93aab06-4a')
Nov 25 09:04:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:04:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 318 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 8.2 KiB/s wr, 28 op/s
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.195 253542 DEBUG nova.network.neutron [req-5e4040bc-e314-4a78-b1e0-05c5d330b069 req-e0e70227-ddc9-440a-bf5b-2dafdf0cacb2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Updated VIF entry in instance network info cache for port a93aab06-4a98-453a-87c3-01b817ee7602. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.196 253542 DEBUG nova.network.neutron [req-5e4040bc-e314-4a78-b1e0-05c5d330b069 req-e0e70227-ddc9-440a-bf5b-2dafdf0cacb2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Updating instance_info_cache with network_info: [{"id": "a93aab06-4a98-453a-87c3-01b817ee7602", "address": "fa:16:3e:2f:fb:42", "network": {"id": "72472fc5-3661-404c-a0d2-df155795bd2b", "bridge": "br-int", "label": "tempest-network-smoke--1408423100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfffff2c57a442a59b202d368d49bf00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93aab06-4a", "ovs_interfaceid": "a93aab06-4a98-453a-87c3-01b817ee7602", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:04:28 compute-0 nova_compute[253538]: 2025-11-25 09:04:28.213 253542 DEBUG oslo_concurrency.lockutils [req-5e4040bc-e314-4a78-b1e0-05c5d330b069 req-e0e70227-ddc9-440a-bf5b-2dafdf0cacb2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2c4a1d63-7674-4276-8da9-b9d4f4fea307" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:04:28 compute-0 neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b[394377]: [NOTICE]   (394381) : haproxy version is 2.8.14-c23fe91
Nov 25 09:04:28 compute-0 neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b[394377]: [NOTICE]   (394381) : path to executable is /usr/sbin/haproxy
Nov 25 09:04:28 compute-0 neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b[394377]: [WARNING]  (394381) : Exiting Master process...
Nov 25 09:04:28 compute-0 neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b[394377]: [ALERT]    (394381) : Current worker (394383) exited with code 143 (Terminated)
Nov 25 09:04:28 compute-0 neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b[394377]: [WARNING]  (394381) : All workers exited. Exiting... (0)
Nov 25 09:04:28 compute-0 systemd[1]: libpod-4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2.scope: Deactivated successfully.
Nov 25 09:04:28 compute-0 podman[396513]: 2025-11-25 09:04:28.241746933 +0000 UTC m=+0.275941723 container died 4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:04:28 compute-0 ceph-mon[75015]: pgmap v2510: 321 pgs: 321 active+clean; 318 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 8.2 KiB/s wr, 28 op/s
Nov 25 09:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2-userdata-shm.mount: Deactivated successfully.
Nov 25 09:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c472ba9f61967f5c2c1bbfa218a3c4c1a73bb2a29d7b40245ca25e38f08a9e8a-merged.mount: Deactivated successfully.
Nov 25 09:04:28 compute-0 podman[396513]: 2025-11-25 09:04:28.756606519 +0000 UTC m=+0.790801309 container cleanup 4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 09:04:28 compute-0 systemd[1]: libpod-conmon-4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2.scope: Deactivated successfully.
Nov 25 09:04:29 compute-0 nova_compute[253538]: 2025-11-25 09:04:29.016 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:04:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/415942382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:04:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:04:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/415942382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:04:29 compute-0 podman[396569]: 2025-11-25 09:04:29.222269638 +0000 UTC m=+0.440946589 container remove 4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:04:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.228 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b4115d3b-7308-40d6-af78-c8b23603fbbe]: (4, ('Tue Nov 25 09:04:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b (4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2)\n4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2\nTue Nov 25 09:04:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b (4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2)\n4d5507b550ed52bb6ccf48abf5b01f4c5ae6fba6c6fa789780fe85272d962cd2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.230 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fe35cb-56d4-416b-b262-cb3debb65565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.232 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72472fc5-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:29 compute-0 nova_compute[253538]: 2025-11-25 09:04:29.233 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:29 compute-0 kernel: tap72472fc5-30: left promiscuous mode
Nov 25 09:04:29 compute-0 nova_compute[253538]: 2025-11-25 09:04:29.237 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.239 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a2db03-9811-4f7b-a5b3-f130cb12ad95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:29 compute-0 nova_compute[253538]: 2025-11-25 09:04:29.249 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.258 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[26b87281-c4f3-4276-af41-ecfe4b916454]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.259 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[71493e26-00e1-45fa-912d-e4c35d3e21ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.279 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e932295-e198-4e82-8455-6d07f7637b8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672901, 'reachable_time': 22980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396583, 'error': None, 'target': 'ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.283 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-72472fc5-3661-404c-a0d2-df155795bd2b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:04:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:29.283 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[885aefcf-aa9c-42d4-948d-41708341ffbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d72472fc5\x2d3661\x2d404c\x2da0d2\x2ddf155795bd2b.mount: Deactivated successfully.
Nov 25 09:04:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/415942382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:04:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/415942382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:04:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 262 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 9.2 KiB/s wr, 37 op/s
Nov 25 09:04:30 compute-0 nova_compute[253538]: 2025-11-25 09:04:30.534 253542 DEBUG nova.compute.manager [req-82b3d032-ac68-4705-b069-e65036d5bda2 req-5acaa0f5-94fb-4a53-8299-3ad62066ed7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:30 compute-0 nova_compute[253538]: 2025-11-25 09:04:30.535 253542 DEBUG oslo_concurrency.lockutils [req-82b3d032-ac68-4705-b069-e65036d5bda2 req-5acaa0f5-94fb-4a53-8299-3ad62066ed7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:30 compute-0 nova_compute[253538]: 2025-11-25 09:04:30.535 253542 DEBUG oslo_concurrency.lockutils [req-82b3d032-ac68-4705-b069-e65036d5bda2 req-5acaa0f5-94fb-4a53-8299-3ad62066ed7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:30 compute-0 nova_compute[253538]: 2025-11-25 09:04:30.536 253542 DEBUG oslo_concurrency.lockutils [req-82b3d032-ac68-4705-b069-e65036d5bda2 req-5acaa0f5-94fb-4a53-8299-3ad62066ed7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:30 compute-0 nova_compute[253538]: 2025-11-25 09:04:30.536 253542 DEBUG nova.compute.manager [req-82b3d032-ac68-4705-b069-e65036d5bda2 req-5acaa0f5-94fb-4a53-8299-3ad62066ed7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] No waiting events found dispatching network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:04:30 compute-0 nova_compute[253538]: 2025-11-25 09:04:30.537 253542 WARNING nova.compute.manager [req-82b3d032-ac68-4705-b069-e65036d5bda2 req-5acaa0f5-94fb-4a53-8299-3ad62066ed7f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received unexpected event network-vif-plugged-a93aab06-4a98-453a-87c3-01b817ee7602 for instance with vm_state active and task_state deleting.
Nov 25 09:04:30 compute-0 ceph-mon[75015]: pgmap v2511: 321 pgs: 321 active+clean; 262 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 9.2 KiB/s wr, 37 op/s
Nov 25 09:04:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 224 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 7.0 KiB/s wr, 39 op/s
Nov 25 09:04:32 compute-0 ceph-mon[75015]: pgmap v2512: 321 pgs: 321 active+clean; 224 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 7.0 KiB/s wr, 39 op/s
Nov 25 09:04:32 compute-0 podman[396584]: 2025-11-25 09:04:32.835626254 +0000 UTC m=+0.086541084 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 09:04:33 compute-0 nova_compute[253538]: 2025-11-25 09:04:33.008 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:04:34 compute-0 nova_compute[253538]: 2025-11-25 09:04:34.018 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 204 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 6.7 KiB/s wr, 37 op/s
Nov 25 09:04:34 compute-0 nova_compute[253538]: 2025-11-25 09:04:34.957 253542 INFO nova.virt.libvirt.driver [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Deleting instance files /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269_del
Nov 25 09:04:34 compute-0 nova_compute[253538]: 2025-11-25 09:04:34.958 253542 INFO nova.virt.libvirt.driver [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Deletion of /var/lib/nova/instances/1ee319a5-b613-4b27-a1e6-64b0129bf269_del complete
Nov 25 09:04:35 compute-0 nova_compute[253538]: 2025-11-25 09:04:35.020 253542 INFO nova.compute.manager [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Took 14.91 seconds to destroy the instance on the hypervisor.
Nov 25 09:04:35 compute-0 nova_compute[253538]: 2025-11-25 09:04:35.020 253542 DEBUG oslo.service.loopingcall [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:04:35 compute-0 nova_compute[253538]: 2025-11-25 09:04:35.021 253542 DEBUG nova.compute.manager [-] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:04:35 compute-0 nova_compute[253538]: 2025-11-25 09:04:35.021 253542 DEBUG nova.network.neutron [-] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:04:35 compute-0 ceph-mon[75015]: pgmap v2513: 321 pgs: 321 active+clean; 204 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 6.7 KiB/s wr, 37 op/s
Nov 25 09:04:36 compute-0 nova_compute[253538]: 2025-11-25 09:04:36.083 253542 DEBUG nova.compute.manager [req-6acccd52-a97a-4eba-b1dc-26bcc34cc468 req-2e836a4c-90ca-486b-a200-73157b956dc1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-deleted-25c8c441-cd5e-4cd3-9151-e8137db08e65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:36 compute-0 nova_compute[253538]: 2025-11-25 09:04:36.083 253542 INFO nova.compute.manager [req-6acccd52-a97a-4eba-b1dc-26bcc34cc468 req-2e836a4c-90ca-486b-a200-73157b956dc1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Neutron deleted interface 25c8c441-cd5e-4cd3-9151-e8137db08e65; detaching it from the instance and deleting it from the info cache
Nov 25 09:04:36 compute-0 nova_compute[253538]: 2025-11-25 09:04:36.084 253542 DEBUG nova.network.neutron [req-6acccd52-a97a-4eba-b1dc-26bcc34cc468 req-2e836a4c-90ca-486b-a200-73157b956dc1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updating instance_info_cache with network_info: [{"id": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "address": "fa:16:3e:1e:e8:e0", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cd88dce-60", "ovs_interfaceid": "2cd88dce-60d9-4da6-a5f6-ba6622fd8812", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:04:36 compute-0 nova_compute[253538]: 2025-11-25 09:04:36.105 253542 DEBUG nova.compute.manager [req-6acccd52-a97a-4eba-b1dc-26bcc34cc468 req-2e836a4c-90ca-486b-a200-73157b956dc1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Detach interface failed, port_id=25c8c441-cd5e-4cd3-9151-e8137db08e65, reason: Instance 1ee319a5-b613-4b27-a1e6-64b0129bf269 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 09:04:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 169 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 7.1 KiB/s wr, 40 op/s
Nov 25 09:04:36 compute-0 nova_compute[253538]: 2025-11-25 09:04:36.557 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061461.5565076, 1ee319a5-b613-4b27-a1e6-64b0129bf269 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:04:36 compute-0 nova_compute[253538]: 2025-11-25 09:04:36.558 253542 INFO nova.compute.manager [-] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] VM Stopped (Lifecycle Event)
Nov 25 09:04:36 compute-0 nova_compute[253538]: 2025-11-25 09:04:36.580 253542 DEBUG nova.compute.manager [None req-9685c2eb-2920-4e4a-985f-d4ddddda688c - - - - - -] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:04:36 compute-0 nova_compute[253538]: 2025-11-25 09:04:36.651 253542 DEBUG nova.network.neutron [-] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:04:36 compute-0 ceph-mon[75015]: pgmap v2514: 321 pgs: 321 active+clean; 169 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 7.1 KiB/s wr, 40 op/s
Nov 25 09:04:36 compute-0 nova_compute[253538]: 2025-11-25 09:04:36.686 253542 INFO nova.compute.manager [-] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Took 1.66 seconds to deallocate network for instance.
Nov 25 09:04:36 compute-0 nova_compute[253538]: 2025-11-25 09:04:36.760 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:36 compute-0 nova_compute[253538]: 2025-11-25 09:04:36.760 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:36 compute-0 nova_compute[253538]: 2025-11-25 09:04:36.876 253542 DEBUG oslo_concurrency.processutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:04:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:04:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3857513962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:04:37 compute-0 nova_compute[253538]: 2025-11-25 09:04:37.340 253542 DEBUG oslo_concurrency.processutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:04:37 compute-0 nova_compute[253538]: 2025-11-25 09:04:37.349 253542 DEBUG nova.compute.provider_tree [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:04:37 compute-0 nova_compute[253538]: 2025-11-25 09:04:37.366 253542 DEBUG nova.scheduler.client.report [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:04:37 compute-0 nova_compute[253538]: 2025-11-25 09:04:37.386 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:37 compute-0 nova_compute[253538]: 2025-11-25 09:04:37.416 253542 INFO nova.scheduler.client.report [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 1ee319a5-b613-4b27-a1e6-64b0129bf269
Nov 25 09:04:37 compute-0 nova_compute[253538]: 2025-11-25 09:04:37.470 253542 DEBUG oslo_concurrency.lockutils [None req-13c5af3d-5c05-4928-9e59-9e03050fad50 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "1ee319a5-b613-4b27-a1e6-64b0129bf269" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 17.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:37 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3857513962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:04:38 compute-0 nova_compute[253538]: 2025-11-25 09:04:38.013 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:04:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 167 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 3.2 KiB/s wr, 44 op/s
Nov 25 09:04:38 compute-0 nova_compute[253538]: 2025-11-25 09:04:38.171 253542 DEBUG nova.compute.manager [req-c43723f9-b411-43d0-a5e1-22ff2b03ba4f req-1240758d-2eae-45ce-8b72-a169476780b9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 1ee319a5-b613-4b27-a1e6-64b0129bf269] Received event network-vif-deleted-2cd88dce-60d9-4da6-a5f6-ba6622fd8812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:38 compute-0 nova_compute[253538]: 2025-11-25 09:04:38.327 253542 INFO nova.virt.libvirt.driver [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Deleting instance files /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307_del
Nov 25 09:04:38 compute-0 nova_compute[253538]: 2025-11-25 09:04:38.328 253542 INFO nova.virt.libvirt.driver [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Deletion of /var/lib/nova/instances/2c4a1d63-7674-4276-8da9-b9d4f4fea307_del complete
Nov 25 09:04:38 compute-0 nova_compute[253538]: 2025-11-25 09:04:38.378 253542 INFO nova.compute.manager [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Took 11.45 seconds to destroy the instance on the hypervisor.
Nov 25 09:04:38 compute-0 nova_compute[253538]: 2025-11-25 09:04:38.379 253542 DEBUG oslo.service.loopingcall [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:04:38 compute-0 nova_compute[253538]: 2025-11-25 09:04:38.380 253542 DEBUG nova.compute.manager [-] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:04:38 compute-0 nova_compute[253538]: 2025-11-25 09:04:38.382 253542 DEBUG nova.network.neutron [-] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:04:38 compute-0 nova_compute[253538]: 2025-11-25 09:04:38.792 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:38 compute-0 nova_compute[253538]: 2025-11-25 09:04:38.792 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:38 compute-0 nova_compute[253538]: 2025-11-25 09:04:38.793 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:38 compute-0 nova_compute[253538]: 2025-11-25 09:04:38.793 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:38 compute-0 nova_compute[253538]: 2025-11-25 09:04:38.793 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:38 compute-0 nova_compute[253538]: 2025-11-25 09:04:38.795 253542 INFO nova.compute.manager [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Terminating instance
Nov 25 09:04:38 compute-0 nova_compute[253538]: 2025-11-25 09:04:38.796 253542 DEBUG nova.compute.manager [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:04:38 compute-0 ceph-mon[75015]: pgmap v2515: 321 pgs: 321 active+clean; 167 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 3.2 KiB/s wr, 44 op/s
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.019 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:39 compute-0 kernel: tap9918858c-8b (unregistering): left promiscuous mode
Nov 25 09:04:39 compute-0 NetworkManager[48915]: <info>  [1764061479.1192] device (tap9918858c-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:04:39 compute-0 ovn_controller[152859]: 2025-11-25T09:04:39Z|01410|binding|INFO|Releasing lport 9918858c-8b7c-4d3f-aada-d04fcb6eab03 from this chassis (sb_readonly=0)
Nov 25 09:04:39 compute-0 ovn_controller[152859]: 2025-11-25T09:04:39Z|01411|binding|INFO|Setting lport 9918858c-8b7c-4d3f-aada-d04fcb6eab03 down in Southbound
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.131 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:39 compute-0 ovn_controller[152859]: 2025-11-25T09:04:39Z|01412|binding|INFO|Removing iface tap9918858c-8b ovn-installed in OVS
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.134 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.144 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:e3:ea 10.100.0.12'], port_security=['fa:16:3e:f4:e3:ea 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '28376454-90b2-431d-9052-48b369973c8e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f691304c-d112-4c32-b3ac-0f33230178b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f5f96402-bff3-4d85-bba1-4edfd7ec059e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a4166e3-493a-4dd7-9e89-e40e3bf1bed7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9918858c-8b7c-4d3f-aada-d04fcb6eab03) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.146 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9918858c-8b7c-4d3f-aada-d04fcb6eab03 in datapath f691304c-d112-4c32-b3ac-0f33230178b0 unbound from our chassis
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.147 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f691304c-d112-4c32-b3ac-0f33230178b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.148 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.148 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa3876f-1e51-47d4-aa42-0afa22af178e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.149 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0 namespace which is not needed anymore
Nov 25 09:04:39 compute-0 kernel: tapfd44d480-02 (unregistering): left promiscuous mode
Nov 25 09:04:39 compute-0 NetworkManager[48915]: <info>  [1764061479.1585] device (tapfd44d480-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.158 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:39 compute-0 ovn_controller[152859]: 2025-11-25T09:04:39Z|01413|binding|INFO|Releasing lport fd44d480-0242-4c7a-b02e-f58852c99ca0 from this chassis (sb_readonly=0)
Nov 25 09:04:39 compute-0 ovn_controller[152859]: 2025-11-25T09:04:39Z|01414|binding|INFO|Setting lport fd44d480-0242-4c7a-b02e-f58852c99ca0 down in Southbound
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.167 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.169 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:39 compute-0 ovn_controller[152859]: 2025-11-25T09:04:39Z|01415|binding|INFO|Removing iface tapfd44d480-02 ovn-installed in OVS
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.177 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:5e:87 2001:db8::f816:3eff:feb8:5e87'], port_security=['fa:16:3e:b8:5e:87 2001:db8::f816:3eff:feb8:5e87'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb8:5e87/64', 'neutron:device_id': '28376454-90b2-431d-9052-48b369973c8e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f5f96402-bff3-4d85-bba1-4edfd7ec059e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a218a470-f1de-46c9-a998-6139383f8f9c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=fd44d480-0242-4c7a-b02e-f58852c99ca0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.182 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:39 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000085.scope: Deactivated successfully.
Nov 25 09:04:39 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000085.scope: Consumed 17.307s CPU time.
Nov 25 09:04:39 compute-0 systemd-machined[215790]: Machine qemu-163-instance-00000085 terminated.
Nov 25 09:04:39 compute-0 neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0[393915]: [NOTICE]   (393919) : haproxy version is 2.8.14-c23fe91
Nov 25 09:04:39 compute-0 neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0[393915]: [NOTICE]   (393919) : path to executable is /usr/sbin/haproxy
Nov 25 09:04:39 compute-0 neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0[393915]: [WARNING]  (393919) : Exiting Master process...
Nov 25 09:04:39 compute-0 neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0[393915]: [ALERT]    (393919) : Current worker (393939) exited with code 143 (Terminated)
Nov 25 09:04:39 compute-0 neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0[393915]: [WARNING]  (393919) : All workers exited. Exiting... (0)
Nov 25 09:04:39 compute-0 systemd[1]: libpod-adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0.scope: Deactivated successfully.
Nov 25 09:04:39 compute-0 podman[396659]: 2025-11-25 09:04:39.332902095 +0000 UTC m=+0.099763633 container died adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:04:39 compute-0 NetworkManager[48915]: <info>  [1764061479.4398] manager: (tapfd44d480-02): new Tun device (/org/freedesktop/NetworkManager/Devices/579)
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.458 253542 INFO nova.virt.libvirt.driver [-] [instance: 28376454-90b2-431d-9052-48b369973c8e] Instance destroyed successfully.
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.458 253542 DEBUG nova.objects.instance [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 28376454-90b2-431d-9052-48b369973c8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:04:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0-userdata-shm.mount: Deactivated successfully.
Nov 25 09:04:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a80d86f6fd563b663b27ffb3825320ba53e316c324703edef04af20e73a076c2-merged.mount: Deactivated successfully.
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.489 253542 DEBUG nova.virt.libvirt.vif [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2133610236',display_name='tempest-TestGettingAddress-server-2133610236',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2133610236',id=133,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:02:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-lil1hhw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:02:53Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=28376454-90b2-431d-9052-48b369973c8e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.490 253542 DEBUG nova.network.os_vif_util [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.490 253542 DEBUG nova.network.os_vif_util [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e3:ea,bridge_name='br-int',has_traffic_filtering=True,id=9918858c-8b7c-4d3f-aada-d04fcb6eab03,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9918858c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.491 253542 DEBUG os_vif [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e3:ea,bridge_name='br-int',has_traffic_filtering=True,id=9918858c-8b7c-4d3f-aada-d04fcb6eab03,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9918858c-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.493 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.494 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9918858c-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.495 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.498 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.500 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.503 253542 INFO os_vif [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e3:ea,bridge_name='br-int',has_traffic_filtering=True,id=9918858c-8b7c-4d3f-aada-d04fcb6eab03,network=Network(f691304c-d112-4c32-b3ac-0f33230178b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9918858c-8b')
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.504 253542 DEBUG nova.virt.libvirt.vif [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2133610236',display_name='tempest-TestGettingAddress-server-2133610236',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2133610236',id=133,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL/CVAR3vbL/QBCkvY1zGP75ygSlwt3u20wjPL1pxq8pHBjwVx3a3z7YKcJhNBX7yKxfeJakRZPN9e1zFn+ojVqSwTlufKzTatjarhh+2Z6TgY95SPM/GKylcUrgXrauw==',key_name='tempest-TestGettingAddress-1143965571',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:02:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-lil1hhw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:02:53Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=28376454-90b2-431d-9052-48b369973c8e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.505 253542 DEBUG nova.network.os_vif_util [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.505 253542 DEBUG nova.network.os_vif_util [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:5e:87,bridge_name='br-int',has_traffic_filtering=True,id=fd44d480-0242-4c7a-b02e-f58852c99ca0,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd44d480-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.506 253542 DEBUG os_vif [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:5e:87,bridge_name='br-int',has_traffic_filtering=True,id=fd44d480-0242-4c7a-b02e-f58852c99ca0,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd44d480-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.507 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.507 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd44d480-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.509 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.511 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.513 253542 INFO os_vif [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:5e:87,bridge_name='br-int',has_traffic_filtering=True,id=fd44d480-0242-4c7a-b02e-f58852c99ca0,network=Network(269f4fa4-a7fb-4f9a-b49d-3b1968826304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd44d480-02')
Nov 25 09:04:39 compute-0 podman[396659]: 2025-11-25 09:04:39.607010517 +0000 UTC m=+0.373872045 container cleanup adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:04:39 compute-0 systemd[1]: libpod-conmon-adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0.scope: Deactivated successfully.
Nov 25 09:04:39 compute-0 podman[396734]: 2025-11-25 09:04:39.68072164 +0000 UTC m=+0.052755745 container remove adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.688 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[68c1c163-9cc9-434e-8953-3b5c260d4653]: (4, ('Tue Nov 25 09:04:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0 (adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0)\nadb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0\nTue Nov 25 09:04:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0 (adb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0)\nadb9d81385c86af26091f86b4facd3c110feb8543d79924ee01eefa24baef3a0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.689 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e5091e52-6804-4384-93f6-b3b24066afc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.690 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf691304c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:39 compute-0 kernel: tapf691304c-d0: left promiscuous mode
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.698 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2f245312-84ce-4e2a-b54e-50e1d0ad42ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.707 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.716 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[01c3c4c1-2bd7-4538-b14c-88b6432860a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.717 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[05c38b24-d1bc-4e79-916a-7831bb2c6cbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.718 253542 DEBUG nova.network.neutron [-] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.733 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[96b6a677-ff1f-4d6b-8b9c-7ef15d0243e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672399, 'reachable_time': 41962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396749, 'error': None, 'target': 'ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:39 compute-0 systemd[1]: run-netns-ovnmeta\x2df691304c\x2dd112\x2d4c32\x2db3ac\x2d0f33230178b0.mount: Deactivated successfully.
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.735 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f691304c-d112-4c32-b3ac-0f33230178b0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.735 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[6f762ead-6ed9-4603-b6dd-6be3b4d0bd10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.736 162739 INFO neutron.agent.ovn.metadata.agent [-] Port fd44d480-0242-4c7a-b02e-f58852c99ca0 in datapath 269f4fa4-a7fb-4f9a-b49d-3b1968826304 unbound from our chassis
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.738 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 269f4fa4-a7fb-4f9a-b49d-3b1968826304, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.738 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f8866200-77a4-4de7-a512-a58b9272e0bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:39.739 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304 namespace which is not needed anymore
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.748 253542 INFO nova.compute.manager [-] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Took 1.37 seconds to deallocate network for instance.
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.786 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.787 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:39 compute-0 nova_compute[253538]: 2025-11-25 09:04:39.857 253542 DEBUG oslo_concurrency.processutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:04:39 compute-0 neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304[394115]: [NOTICE]   (394125) : haproxy version is 2.8.14-c23fe91
Nov 25 09:04:39 compute-0 neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304[394115]: [NOTICE]   (394125) : path to executable is /usr/sbin/haproxy
Nov 25 09:04:39 compute-0 neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304[394115]: [WARNING]  (394125) : Exiting Master process...
Nov 25 09:04:39 compute-0 neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304[394115]: [ALERT]    (394125) : Current worker (394142) exited with code 143 (Terminated)
Nov 25 09:04:39 compute-0 neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304[394115]: [WARNING]  (394125) : All workers exited. Exiting... (0)
Nov 25 09:04:39 compute-0 systemd[1]: libpod-205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5.scope: Deactivated successfully.
Nov 25 09:04:39 compute-0 podman[396767]: 2025-11-25 09:04:39.894652905 +0000 UTC m=+0.052510128 container died 205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:04:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5-userdata-shm.mount: Deactivated successfully.
Nov 25 09:04:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a675655322cb97a61297c5daeb9688e6dd7732cb7cb875eab3105e1fff65389-merged.mount: Deactivated successfully.
Nov 25 09:04:39 compute-0 podman[396767]: 2025-11-25 09:04:39.939546896 +0000 UTC m=+0.097404079 container cleanup 205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:04:39 compute-0 systemd[1]: libpod-conmon-205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5.scope: Deactivated successfully.
Nov 25 09:04:40 compute-0 podman[396798]: 2025-11-25 09:04:40.009882278 +0000 UTC m=+0.041865699 container remove 205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:04:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.016 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00da122f-e9f6-48e7-b069-4af68dceb23f]: (4, ('Tue Nov 25 09:04:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304 (205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5)\n205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5\nTue Nov 25 09:04:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304 (205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5)\n205b317ded882a30395431bd420e7da91fc5e6103e9b654f88e7b330157c80b5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.018 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0e97d4-ccfd-44c8-a193-64f3cb2185f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.019 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269f4fa4-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:04:40 compute-0 kernel: tap269f4fa4-a0: left promiscuous mode
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.063 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.079 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.080 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[144ce451-5c69-4e4d-a9d3-e19335b4c9d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.095 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a12719-4116-4f32-915d-c3db79de4b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.097 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1f7693-0e7b-459c-8a9c-4ce1367a5a3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 136 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.9 KiB/s wr, 46 op/s
Nov 25 09:04:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.121 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[35fe3141-ac5c-437f-a142-f521e55196d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672563, 'reachable_time': 39102, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396830, 'error': None, 'target': 'ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.123 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-269f4fa4-a7fb-4f9a-b49d-3b1968826304 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:04:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:40.123 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[d3911d0a-4fb3-4d52-b5d6-a87c5a01bb33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.177 253542 INFO nova.virt.libvirt.driver [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Deleting instance files /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e_del
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.178 253542 INFO nova.virt.libvirt.driver [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Deletion of /var/lib/nova/instances/28376454-90b2-431d-9052-48b369973c8e_del complete
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.249 253542 INFO nova.compute.manager [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Took 1.45 seconds to destroy the instance on the hypervisor.
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.250 253542 DEBUG oslo.service.loopingcall [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.250 253542 DEBUG nova.compute.manager [-] [instance: 28376454-90b2-431d-9052-48b369973c8e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.251 253542 DEBUG nova.network.neutron [-] [instance: 28376454-90b2-431d-9052-48b369973c8e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.267 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-changed-9918858c-8b7c-4d3f-aada-d04fcb6eab03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.268 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Refreshing instance network info cache due to event network-changed-9918858c-8b7c-4d3f-aada-d04fcb6eab03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.268 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.269 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.269 253542 DEBUG nova.network.neutron [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Refreshing network info cache for port 9918858c-8b7c-4d3f-aada-d04fcb6eab03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:04:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:04:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487161151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.363 253542 DEBUG oslo_concurrency.processutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.370 253542 DEBUG nova.compute.provider_tree [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.434 253542 DEBUG nova.scheduler.client.report [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.476 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d269f4fa4\x2da7fb\x2d4f9a\x2db49d\x2d3b1968826304.mount: Deactivated successfully.
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.513 253542 INFO nova.scheduler.client.report [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Deleted allocations for instance 2c4a1d63-7674-4276-8da9-b9d4f4fea307
Nov 25 09:04:40 compute-0 nova_compute[253538]: 2025-11-25 09:04:40.581 253542 DEBUG oslo_concurrency.lockutils [None req-13c2ed13-a610-4e15-a7f8-a702b1dc0d4f 283b89dbe3284e8ea2019b797673108b cfffff2c57a442a59b202d368d49bf00 - - default default] Lock "2c4a1d63-7674-4276-8da9-b9d4f4fea307" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:41.088 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:41.088 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:04:41.088 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:41 compute-0 ceph-mon[75015]: pgmap v2516: 321 pgs: 321 active+clean; 136 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.9 KiB/s wr, 46 op/s
Nov 25 09:04:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1487161151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.011 253542 DEBUG nova.network.neutron [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updated VIF entry in instance network info cache for port 9918858c-8b7c-4d3f-aada-d04fcb6eab03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.012 253542 DEBUG nova.network.neutron [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updating instance_info_cache with network_info: [{"id": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "address": "fa:16:3e:f4:e3:ea", "network": {"id": "f691304c-d112-4c32-b3ac-0f33230178b0", "bridge": "br-int", "label": "tempest-network-smoke--636122136", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9918858c-8b", "ovs_interfaceid": "9918858c-8b7c-4d3f-aada-d04fcb6eab03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "address": "fa:16:3e:b8:5e:87", "network": {"id": "269f4fa4-a7fb-4f9a-b49d-3b1968826304", "bridge": "br-int", "label": "tempest-network-smoke--375194542", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb8:5e87", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd44d480-02", "ovs_interfaceid": "fd44d480-0242-4c7a-b02e-f58852c99ca0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.030 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-28376454-90b2-431d-9052-48b369973c8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.031 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-unplugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.032 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.032 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.032 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.032 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] No waiting events found dispatching network-vif-unplugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.033 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-unplugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.033 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Received event network-vif-deleted-a93aab06-4a98-453a-87c3-01b817ee7602 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.033 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.034 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.034 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.034 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.034 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] No waiting events found dispatching network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.035 253542 WARNING nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received unexpected event network-vif-plugged-9918858c-8b7c-4d3f-aada-d04fcb6eab03 for instance with vm_state active and task_state deleting.
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.035 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-unplugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.036 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.036 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.036 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.037 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] No waiting events found dispatching network-vif-unplugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.037 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-unplugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.037 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.038 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "28376454-90b2-431d-9052-48b369973c8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.038 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.038 253542 DEBUG oslo_concurrency.lockutils [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.039 253542 DEBUG nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] No waiting events found dispatching network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.039 253542 WARNING nova.compute.manager [req-85a88d63-77e6-43c5-affa-9caeafdde3f4 req-7e002b91-4791-4dfe-9b3c-ba5f5fcd5f86 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received unexpected event network-vif-plugged-fd44d480-0242-4c7a-b02e-f58852c99ca0 for instance with vm_state active and task_state deleting.
Nov 25 09:04:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 103 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 39 op/s
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.279 253542 DEBUG nova.network.neutron [-] [instance: 28376454-90b2-431d-9052-48b369973c8e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.326 253542 INFO nova.compute.manager [-] [instance: 28376454-90b2-431d-9052-48b369973c8e] Took 2.07 seconds to deallocate network for instance.
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.344 253542 DEBUG nova.compute.manager [req-101ba256-08c4-44a4-bea2-9be65234d09d req-430a455d-3a28-4424-a160-a449c57f2a04 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-deleted-fd44d480-0242-4c7a-b02e-f58852c99ca0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.344 253542 DEBUG nova.compute.manager [req-101ba256-08c4-44a4-bea2-9be65234d09d req-430a455d-3a28-4424-a160-a449c57f2a04 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 28376454-90b2-431d-9052-48b369973c8e] Received event network-vif-deleted-9918858c-8b7c-4d3f-aada-d04fcb6eab03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.404 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.405 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.445 253542 DEBUG oslo_concurrency.processutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:04:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:04:42 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/120779292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.883 253542 DEBUG oslo_concurrency.processutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.890 253542 DEBUG nova.compute.provider_tree [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.903 253542 DEBUG nova.scheduler.client.report [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.933 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.964 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061467.9628727, 2c4a1d63-7674-4276-8da9-b9d4f4fea307 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.964 253542 INFO nova.compute.manager [-] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] VM Stopped (Lifecycle Event)
Nov 25 09:04:42 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.989 253542 DEBUG nova.compute.manager [None req-2f725777-ee92-4b58-bf96-b7c7a2e577cd - - - - - -] [instance: 2c4a1d63-7674-4276-8da9-b9d4f4fea307] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:04:43 compute-0 nova_compute[253538]: 2025-11-25 09:04:42.999 253542 INFO nova.scheduler.client.report [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 28376454-90b2-431d-9052-48b369973c8e
Nov 25 09:04:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:04:43 compute-0 nova_compute[253538]: 2025-11-25 09:04:43.111 253542 DEBUG oslo_concurrency.lockutils [None req-d225763c-ebc0-46c2-8acb-0fceb516b1ac c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "28376454-90b2-431d-9052-48b369973c8e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:04:43 compute-0 ceph-mon[75015]: pgmap v2517: 321 pgs: 321 active+clean; 103 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 39 op/s
Nov 25 09:04:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/120779292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:04:44 compute-0 nova_compute[253538]: 2025-11-25 09:04:44.021 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 88 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.4 KiB/s wr, 44 op/s
Nov 25 09:04:44 compute-0 nova_compute[253538]: 2025-11-25 09:04:44.511 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:44 compute-0 nova_compute[253538]: 2025-11-25 09:04:44.514 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:04:44 compute-0 nova_compute[253538]: 2025-11-25 09:04:44.661 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:45 compute-0 nova_compute[253538]: 2025-11-25 09:04:45.349 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:04:45 compute-0 nova_compute[253538]: 2025-11-25 09:04:45.349 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:04:45 compute-0 ceph-mon[75015]: pgmap v2518: 321 pgs: 321 active+clean; 88 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.4 KiB/s wr, 44 op/s
Nov 25 09:04:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.4 KiB/s wr, 49 op/s
Nov 25 09:04:46 compute-0 sudo[396856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:04:46 compute-0 sudo[396856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:46 compute-0 sudo[396856]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:46 compute-0 sudo[396881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:04:46 compute-0 sudo[396881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:46 compute-0 sudo[396881]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:46 compute-0 sudo[396906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:04:46 compute-0 sudo[396906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:46 compute-0 sudo[396906]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:46 compute-0 ceph-mon[75015]: pgmap v2519: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.4 KiB/s wr, 49 op/s
Nov 25 09:04:46 compute-0 sudo[396931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:04:46 compute-0 sudo[396931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:46 compute-0 sudo[396931]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:04:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:04:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:04:46 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:04:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:04:46 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:04:46 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 8a08af04-d070-4a65-9410-0b63ab08408f does not exist
Nov 25 09:04:46 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 6603b883-ac97-4819-b081-fdde4ebb62cb does not exist
Nov 25 09:04:46 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 08bf02cd-255a-4faf-9999-8507075e3581 does not exist
Nov 25 09:04:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:04:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:04:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:04:46 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:04:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:04:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:04:47 compute-0 sudo[396988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:04:47 compute-0 sudo[396988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:47 compute-0 sudo[396988]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:47 compute-0 sudo[397013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:04:47 compute-0 sudo[397013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:47 compute-0 sudo[397013]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:47 compute-0 sudo[397038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:04:47 compute-0 sudo[397038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:47 compute-0 sudo[397038]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:47 compute-0 sudo[397063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:04:47 compute-0 sudo[397063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:04:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:04:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:04:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:04:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:04:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:04:47 compute-0 podman[397129]: 2025-11-25 09:04:47.623905809 +0000 UTC m=+0.038560470 container create 8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldstine, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 09:04:47 compute-0 systemd[1]: Started libpod-conmon-8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d.scope.
Nov 25 09:04:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:04:47 compute-0 podman[397129]: 2025-11-25 09:04:47.606274509 +0000 UTC m=+0.020929190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:04:47 compute-0 podman[397129]: 2025-11-25 09:04:47.708516778 +0000 UTC m=+0.123171469 container init 8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:04:47 compute-0 podman[397129]: 2025-11-25 09:04:47.716935807 +0000 UTC m=+0.131590468 container start 8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 09:04:47 compute-0 podman[397129]: 2025-11-25 09:04:47.720893744 +0000 UTC m=+0.135548405 container attach 8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldstine, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:04:47 compute-0 funny_goldstine[397146]: 167 167
Nov 25 09:04:47 compute-0 systemd[1]: libpod-8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d.scope: Deactivated successfully.
Nov 25 09:04:47 compute-0 podman[397129]: 2025-11-25 09:04:47.725943762 +0000 UTC m=+0.140598433 container died 8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldstine, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:04:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7c1a6e4d72f3583ef2740e7f855ad6ae8b2839060bb518f836c45a7fa90ca6a-merged.mount: Deactivated successfully.
Nov 25 09:04:47 compute-0 podman[397129]: 2025-11-25 09:04:47.768642472 +0000 UTC m=+0.183297133 container remove 8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 09:04:47 compute-0 systemd[1]: libpod-conmon-8723c4fe0c877e460bae03de4cb84240f0c24a1505f18723d1196cbad819471d.scope: Deactivated successfully.
Nov 25 09:04:47 compute-0 podman[397169]: 2025-11-25 09:04:47.964196899 +0000 UTC m=+0.055414168 container create e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:04:47 compute-0 systemd[1]: Started libpod-conmon-e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b.scope.
Nov 25 09:04:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e7472c60a98303cbfc01a8265810fecd3e14add2a191d6400b5ea5ce927425/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e7472c60a98303cbfc01a8265810fecd3e14add2a191d6400b5ea5ce927425/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e7472c60a98303cbfc01a8265810fecd3e14add2a191d6400b5ea5ce927425/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e7472c60a98303cbfc01a8265810fecd3e14add2a191d6400b5ea5ce927425/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e7472c60a98303cbfc01a8265810fecd3e14add2a191d6400b5ea5ce927425/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:04:48 compute-0 podman[397169]: 2025-11-25 09:04:48.022704699 +0000 UTC m=+0.113921988 container init e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:04:48 compute-0 podman[397169]: 2025-11-25 09:04:48.032246628 +0000 UTC m=+0.123463887 container start e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mirzakhani, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 09:04:48 compute-0 podman[397169]: 2025-11-25 09:04:48.034685465 +0000 UTC m=+0.125902734 container attach e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:04:48 compute-0 podman[397169]: 2025-11-25 09:04:47.94588822 +0000 UTC m=+0.037105509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:04:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:04:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Nov 25 09:04:48 compute-0 ceph-mon[75015]: pgmap v2520: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Nov 25 09:04:48 compute-0 nova_compute[253538]: 2025-11-25 09:04:48.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:04:48 compute-0 nova_compute[253538]: 2025-11-25 09:04:48.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:04:48 compute-0 nova_compute[253538]: 2025-11-25 09:04:48.613 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:04:49 compute-0 nova_compute[253538]: 2025-11-25 09:04:49.024 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:49 compute-0 happy_mirzakhani[397186]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:04:49 compute-0 happy_mirzakhani[397186]: --> relative data size: 1.0
Nov 25 09:04:49 compute-0 happy_mirzakhani[397186]: --> All data devices are unavailable
Nov 25 09:04:49 compute-0 systemd[1]: libpod-e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b.scope: Deactivated successfully.
Nov 25 09:04:49 compute-0 podman[397215]: 2025-11-25 09:04:49.379723308 +0000 UTC m=+0.026091530 container died e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 09:04:49 compute-0 nova_compute[253538]: 2025-11-25 09:04:49.514 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2e7472c60a98303cbfc01a8265810fecd3e14add2a191d6400b5ea5ce927425-merged.mount: Deactivated successfully.
Nov 25 09:04:50 compute-0 podman[397215]: 2025-11-25 09:04:50.085671009 +0000 UTC m=+0.732039211 container remove e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:04:50 compute-0 systemd[1]: libpod-conmon-e0e68cdaf99e0160cd0cf1ffd58415186ed81d1284be63dd23a70fe62a3cfd4b.scope: Deactivated successfully.
Nov 25 09:04:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.7 KiB/s wr, 34 op/s
Nov 25 09:04:50 compute-0 sudo[397063]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:50 compute-0 sudo[397228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:04:50 compute-0 sudo[397228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:50 compute-0 sudo[397228]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:50 compute-0 sudo[397253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:04:50 compute-0 sudo[397253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:50 compute-0 sudo[397253]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:50 compute-0 sudo[397278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:04:50 compute-0 sudo[397278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:50 compute-0 sudo[397278]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:50 compute-0 sudo[397303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:04:50 compute-0 sudo[397303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:50 compute-0 nova_compute[253538]: 2025-11-25 09:04:50.606 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:04:50 compute-0 podman[397368]: 2025-11-25 09:04:50.726953711 +0000 UTC m=+0.020713753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:04:51 compute-0 podman[397368]: 2025-11-25 09:04:51.01160976 +0000 UTC m=+0.305369772 container create ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 09:04:51 compute-0 systemd[1]: Started libpod-conmon-ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7.scope.
Nov 25 09:04:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:04:51 compute-0 podman[397368]: 2025-11-25 09:04:51.295014014 +0000 UTC m=+0.588774056 container init ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:04:51 compute-0 podman[397368]: 2025-11-25 09:04:51.304635635 +0000 UTC m=+0.598395657 container start ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keldysh, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:04:51 compute-0 gallant_keldysh[397384]: 167 167
Nov 25 09:04:51 compute-0 systemd[1]: libpod-ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7.scope: Deactivated successfully.
Nov 25 09:04:51 compute-0 ceph-mon[75015]: pgmap v2521: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.7 KiB/s wr, 34 op/s
Nov 25 09:04:51 compute-0 podman[397368]: 2025-11-25 09:04:51.607398336 +0000 UTC m=+0.901158348 container attach ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keldysh, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:04:51 compute-0 podman[397368]: 2025-11-25 09:04:51.609239996 +0000 UTC m=+0.903000008 container died ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keldysh, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 09:04:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7498f61e2443d5d62b12b01d538269d5902c0045074d5409b4f65d42f3380a8a-merged.mount: Deactivated successfully.
Nov 25 09:04:51 compute-0 podman[397368]: 2025-11-25 09:04:51.785546409 +0000 UTC m=+1.079306411 container remove ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keldysh, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:04:51 compute-0 systemd[1]: libpod-conmon-ef0b917d3a073717451dc425088d171c1def12f8b98e36be0a80ba7cbfd3c5d7.scope: Deactivated successfully.
Nov 25 09:04:51 compute-0 podman[397408]: 2025-11-25 09:04:51.935611419 +0000 UTC m=+0.037035509 container create ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:04:51 compute-0 systemd[1]: Started libpod-conmon-ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c.scope.
Nov 25 09:04:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:04:52 compute-0 podman[397408]: 2025-11-25 09:04:51.920751724 +0000 UTC m=+0.022175824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbe4b61bfac9c4bd63da063f3b035f96fdfb61c38a7ad2dba0d9f0e4c5304d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbe4b61bfac9c4bd63da063f3b035f96fdfb61c38a7ad2dba0d9f0e4c5304d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbe4b61bfac9c4bd63da063f3b035f96fdfb61c38a7ad2dba0d9f0e4c5304d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbe4b61bfac9c4bd63da063f3b035f96fdfb61c38a7ad2dba0d9f0e4c5304d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:04:52 compute-0 podman[397408]: 2025-11-25 09:04:52.030344003 +0000 UTC m=+0.131768103 container init ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 25 09:04:52 compute-0 podman[397408]: 2025-11-25 09:04:52.037131738 +0000 UTC m=+0.138555818 container start ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:04:52 compute-0 podman[397408]: 2025-11-25 09:04:52.040838599 +0000 UTC m=+0.142262699 container attach ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 09:04:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.3 KiB/s wr, 26 op/s
Nov 25 09:04:52 compute-0 ceph-mon[75015]: pgmap v2522: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.3 KiB/s wr, 26 op/s
Nov 25 09:04:52 compute-0 nova_compute[253538]: 2025-11-25 09:04:52.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:04:52 compute-0 nova_compute[253538]: 2025-11-25 09:04:52.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:04:52 compute-0 nova_compute[253538]: 2025-11-25 09:04:52.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:04:52 compute-0 friendly_brown[397425]: {
Nov 25 09:04:52 compute-0 friendly_brown[397425]:     "0": [
Nov 25 09:04:52 compute-0 friendly_brown[397425]:         {
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "devices": [
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "/dev/loop3"
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             ],
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "lv_name": "ceph_lv0",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "lv_size": "21470642176",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "name": "ceph_lv0",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "tags": {
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.cluster_name": "ceph",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.crush_device_class": "",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.encrypted": "0",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.osd_id": "0",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.type": "block",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.vdo": "0"
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             },
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "type": "block",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "vg_name": "ceph_vg0"
Nov 25 09:04:52 compute-0 friendly_brown[397425]:         }
Nov 25 09:04:52 compute-0 friendly_brown[397425]:     ],
Nov 25 09:04:52 compute-0 friendly_brown[397425]:     "1": [
Nov 25 09:04:52 compute-0 friendly_brown[397425]:         {
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "devices": [
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "/dev/loop4"
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             ],
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "lv_name": "ceph_lv1",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "lv_size": "21470642176",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "name": "ceph_lv1",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "tags": {
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.cluster_name": "ceph",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.crush_device_class": "",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.encrypted": "0",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.osd_id": "1",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.type": "block",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.vdo": "0"
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             },
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "type": "block",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "vg_name": "ceph_vg1"
Nov 25 09:04:52 compute-0 friendly_brown[397425]:         }
Nov 25 09:04:52 compute-0 friendly_brown[397425]:     ],
Nov 25 09:04:52 compute-0 friendly_brown[397425]:     "2": [
Nov 25 09:04:52 compute-0 friendly_brown[397425]:         {
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "devices": [
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "/dev/loop5"
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             ],
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "lv_name": "ceph_lv2",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "lv_size": "21470642176",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "name": "ceph_lv2",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "tags": {
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.cluster_name": "ceph",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.crush_device_class": "",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.encrypted": "0",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.osd_id": "2",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.type": "block",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:                 "ceph.vdo": "0"
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             },
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "type": "block",
Nov 25 09:04:52 compute-0 friendly_brown[397425]:             "vg_name": "ceph_vg2"
Nov 25 09:04:52 compute-0 friendly_brown[397425]:         }
Nov 25 09:04:52 compute-0 friendly_brown[397425]:     ]
Nov 25 09:04:52 compute-0 friendly_brown[397425]: }
Nov 25 09:04:52 compute-0 systemd[1]: libpod-ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c.scope: Deactivated successfully.
Nov 25 09:04:52 compute-0 podman[397408]: 2025-11-25 09:04:52.858823705 +0000 UTC m=+0.960247785 container died ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:04:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bbe4b61bfac9c4bd63da063f3b035f96fdfb61c38a7ad2dba0d9f0e4c5304d8-merged.mount: Deactivated successfully.
Nov 25 09:04:52 compute-0 podman[397408]: 2025-11-25 09:04:52.912484513 +0000 UTC m=+1.013908593 container remove ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:04:52 compute-0 systemd[1]: libpod-conmon-ed0417f26f2b0d6bfc06dfa65ead5c158a936dd1d801f6cd864c5ec5cde1136c.scope: Deactivated successfully.
Nov 25 09:04:52 compute-0 sudo[397303]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:53 compute-0 sudo[397447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:04:53 compute-0 sudo[397447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:53 compute-0 sudo[397447]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:04:53 compute-0 sudo[397472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:04:53 compute-0 sudo[397472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:53 compute-0 sudo[397472]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:53 compute-0 sudo[397497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:04:53 compute-0 sudo[397497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:53 compute-0 sudo[397497]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:53 compute-0 sudo[397522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:04:53 compute-0 sudo[397522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:04:53
Nov 25 09:04:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:04:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:04:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.mgr', 'backups', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Nov 25 09:04:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:04:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:04:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:04:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:04:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:04:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:04:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:04:53 compute-0 podman[397588]: 2025-11-25 09:04:53.511784135 +0000 UTC m=+0.042366103 container create f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:04:53 compute-0 nova_compute[253538]: 2025-11-25 09:04:53.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:04:53 compute-0 systemd[1]: Started libpod-conmon-f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf.scope.
Nov 25 09:04:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:04:53 compute-0 sshd-session[397430]: Invalid user system from 45.202.211.6 port 60320
Nov 25 09:04:53 compute-0 podman[397588]: 2025-11-25 09:04:53.494757122 +0000 UTC m=+0.025339120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:04:53 compute-0 podman[397588]: 2025-11-25 09:04:53.600868776 +0000 UTC m=+0.131450774 container init f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hawking, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:04:53 compute-0 podman[397588]: 2025-11-25 09:04:53.609340237 +0000 UTC m=+0.139922195 container start f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hawking, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:04:53 compute-0 podman[397588]: 2025-11-25 09:04:53.613646624 +0000 UTC m=+0.144228612 container attach f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hawking, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:04:53 compute-0 peaceful_hawking[397604]: 167 167
Nov 25 09:04:53 compute-0 systemd[1]: libpod-f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf.scope: Deactivated successfully.
Nov 25 09:04:53 compute-0 podman[397588]: 2025-11-25 09:04:53.615115784 +0000 UTC m=+0.145697752 container died f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hawking, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 09:04:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f03bf4cb38f84816c566e32deedf5bdc92d884d27aa292f0602868fb08df768-merged.mount: Deactivated successfully.
Nov 25 09:04:53 compute-0 podman[397588]: 2025-11-25 09:04:53.651197595 +0000 UTC m=+0.181779563 container remove f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hawking, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:04:53 compute-0 systemd[1]: libpod-conmon-f8c60f222bcb07af4dfd4f68ccf70b788fcc7777ea8e85f65b6bed1a2d589dbf.scope: Deactivated successfully.
Nov 25 09:04:53 compute-0 sshd-session[397430]: Received disconnect from 45.202.211.6 port 60320:11: Bye Bye [preauth]
Nov 25 09:04:53 compute-0 sshd-session[397430]: Disconnected from invalid user system 45.202.211.6 port 60320 [preauth]
Nov 25 09:04:53 compute-0 podman[397627]: 2025-11-25 09:04:53.82539561 +0000 UTC m=+0.048613752 container create b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:04:53 compute-0 systemd[1]: Started libpod-conmon-b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177.scope.
Nov 25 09:04:53 compute-0 podman[397627]: 2025-11-25 09:04:53.805077328 +0000 UTC m=+0.028295460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:04:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eb31707272b5f5e7765fd2e64ec49a879d85ae123dea70abe9625add75fbb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eb31707272b5f5e7765fd2e64ec49a879d85ae123dea70abe9625add75fbb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eb31707272b5f5e7765fd2e64ec49a879d85ae123dea70abe9625add75fbb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eb31707272b5f5e7765fd2e64ec49a879d85ae123dea70abe9625add75fbb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:04:53 compute-0 podman[397627]: 2025-11-25 09:04:53.937746604 +0000 UTC m=+0.160964736 container init b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:04:53 compute-0 podman[397627]: 2025-11-25 09:04:53.951894108 +0000 UTC m=+0.175112240 container start b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 09:04:53 compute-0 podman[397627]: 2025-11-25 09:04:53.956631578 +0000 UTC m=+0.179849730 container attach b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:04:54 compute-0 nova_compute[253538]: 2025-11-25 09:04:54.025 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:04:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:04:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:04:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:04:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:04:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:04:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:04:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:04:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:04:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:04:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 682 B/s wr, 20 op/s
Nov 25 09:04:54 compute-0 nova_compute[253538]: 2025-11-25 09:04:54.455 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061479.4542713, 28376454-90b2-431d-9052-48b369973c8e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:04:54 compute-0 nova_compute[253538]: 2025-11-25 09:04:54.456 253542 INFO nova.compute.manager [-] [instance: 28376454-90b2-431d-9052-48b369973c8e] VM Stopped (Lifecycle Event)
Nov 25 09:04:54 compute-0 nova_compute[253538]: 2025-11-25 09:04:54.475 253542 DEBUG nova.compute.manager [None req-584695c1-1904-4507-9f8b-7a65274d7549 - - - - - -] [instance: 28376454-90b2-431d-9052-48b369973c8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:04:54 compute-0 nova_compute[253538]: 2025-11-25 09:04:54.515 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]: {
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:         "osd_id": 1,
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:         "type": "bluestore"
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:     },
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:         "osd_id": 2,
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:         "type": "bluestore"
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:     },
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:         "osd_id": 0,
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:         "type": "bluestore"
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]:     }
Nov 25 09:04:54 compute-0 ecstatic_hodgkin[397644]: }
Nov 25 09:04:54 compute-0 systemd[1]: libpod-b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177.scope: Deactivated successfully.
Nov 25 09:04:54 compute-0 podman[397627]: 2025-11-25 09:04:54.978762933 +0000 UTC m=+1.201981035 container died b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:04:54 compute-0 systemd[1]: libpod-b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177.scope: Consumed 1.032s CPU time.
Nov 25 09:04:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6eb31707272b5f5e7765fd2e64ec49a879d85ae123dea70abe9625add75fbb0-merged.mount: Deactivated successfully.
Nov 25 09:04:55 compute-0 podman[397627]: 2025-11-25 09:04:55.044645454 +0000 UTC m=+1.267863566 container remove b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 09:04:55 compute-0 systemd[1]: libpod-conmon-b8ac2c15b4a9acaf5e8f012b00fcda3e33c8fcdf7d6813709aefd0e7fa8bc177.scope: Deactivated successfully.
Nov 25 09:04:55 compute-0 podman[397678]: 2025-11-25 09:04:55.084337024 +0000 UTC m=+0.067610970 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:04:55 compute-0 sudo[397522]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:04:55 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:04:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:04:55 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:04:55 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 11d73f36-6b86-42ce-a003-e1ba2ce0955b does not exist
Nov 25 09:04:55 compute-0 podman[397686]: 2025-11-25 09:04:55.105688543 +0000 UTC m=+0.090908961 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 09:04:55 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 0353cb75-88c3-495a-82c3-5bb9285ff9c7 does not exist
Nov 25 09:04:55 compute-0 sudo[397728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:04:55 compute-0 sudo[397728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:55 compute-0 sudo[397728]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:55 compute-0 ceph-mon[75015]: pgmap v2523: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 682 B/s wr, 20 op/s
Nov 25 09:04:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:04:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:04:55 compute-0 sudo[397753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:04:55 compute-0 sudo[397753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:04:55 compute-0 sudo[397753]: pam_unix(sudo:session): session closed for user root
Nov 25 09:04:55 compute-0 nova_compute[253538]: 2025-11-25 09:04:55.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:04:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 5 op/s
Nov 25 09:04:56 compute-0 nova_compute[253538]: 2025-11-25 09:04:56.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:04:57 compute-0 ceph-mon[75015]: pgmap v2524: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 5 op/s
Nov 25 09:04:57 compute-0 sshd-session[397780]: Connection closed by authenticating user root 171.244.51.45 port 33242 [preauth]
Nov 25 09:04:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:04:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:58.247365) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061498247450, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 814, "num_deletes": 250, "total_data_size": 1059638, "memory_usage": 1083208, "flush_reason": "Manual Compaction"}
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061498282123, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 666791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52351, "largest_seqno": 53164, "table_properties": {"data_size": 663370, "index_size": 1201, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9126, "raw_average_key_size": 20, "raw_value_size": 656158, "raw_average_value_size": 1484, "num_data_blocks": 54, "num_entries": 442, "num_filter_entries": 442, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061428, "oldest_key_time": 1764061428, "file_creation_time": 1764061498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 34802 microseconds, and 3005 cpu microseconds.
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:58.282175) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 666791 bytes OK
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:58.282199) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:58.645833) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:58.645890) EVENT_LOG_v1 {"time_micros": 1764061498645878, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:58.645915) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1055586, prev total WAL file size 1082074, number of live WAL files 2.
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:58.646748) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303035' seq:72057594037927935, type:22 .. '6D6772737461740032323536' seq:0, type:0; will stop at (end)
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(651KB)], [122(10MB)]
Nov 25 09:04:58 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061498646798, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 11256581, "oldest_snapshot_seqno": -1}
Nov 25 09:04:59 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7211 keys, 8336726 bytes, temperature: kUnknown
Nov 25 09:04:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061499023287, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 8336726, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8292042, "index_size": 25564, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 189106, "raw_average_key_size": 26, "raw_value_size": 8166614, "raw_average_value_size": 1132, "num_data_blocks": 993, "num_entries": 7211, "num_filter_entries": 7211, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:04:59 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:04:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:59.023810) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 8336726 bytes
Nov 25 09:04:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:59.025778) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 29.9 rd, 22.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 10.1 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(29.4) write-amplify(12.5) OK, records in: 7694, records dropped: 483 output_compression: NoCompression
Nov 25 09:04:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:59.025834) EVENT_LOG_v1 {"time_micros": 1764061499025817, "job": 74, "event": "compaction_finished", "compaction_time_micros": 376797, "compaction_time_cpu_micros": 22633, "output_level": 6, "num_output_files": 1, "total_output_size": 8336726, "num_input_records": 7694, "num_output_records": 7211, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:04:59 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:04:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061499026417, "job": 74, "event": "table_file_deletion", "file_number": 124}
Nov 25 09:04:59 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:04:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061499028409, "job": 74, "event": "table_file_deletion", "file_number": 122}
Nov 25 09:04:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:58.646681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:04:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:59.028504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:04:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:59.028509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:04:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:59.028511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:04:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:59.028513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:04:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:04:59.028515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:04:59 compute-0 nova_compute[253538]: 2025-11-25 09:04:59.027 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:04:59 compute-0 ceph-mon[75015]: pgmap v2525: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:04:59 compute-0 nova_compute[253538]: 2025-11-25 09:04:59.564 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:00 compute-0 nova_compute[253538]: 2025-11-25 09:05:00.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:05:00 compute-0 nova_compute[253538]: 2025-11-25 09:05:00.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:00 compute-0 nova_compute[253538]: 2025-11-25 09:05:00.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:00 compute-0 nova_compute[253538]: 2025-11-25 09:05:00.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:00 compute-0 nova_compute[253538]: 2025-11-25 09:05:00.577 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:05:00 compute-0 nova_compute[253538]: 2025-11-25 09:05:00.577 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:05:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:05:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/38150164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:05:01 compute-0 nova_compute[253538]: 2025-11-25 09:05:01.075 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:05:01 compute-0 sshd-session[397778]: Connection closed by 45.78.222.2 port 33596 [preauth]
Nov 25 09:05:01 compute-0 ceph-mon[75015]: pgmap v2526: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:01 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/38150164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:05:01 compute-0 nova_compute[253538]: 2025-11-25 09:05:01.287 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:05:01 compute-0 nova_compute[253538]: 2025-11-25 09:05:01.288 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3705MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:05:01 compute-0 nova_compute[253538]: 2025-11-25 09:05:01.289 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:01 compute-0 nova_compute[253538]: 2025-11-25 09:05:01.289 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:01 compute-0 nova_compute[253538]: 2025-11-25 09:05:01.357 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:05:01 compute-0 nova_compute[253538]: 2025-11-25 09:05:01.358 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:05:01 compute-0 nova_compute[253538]: 2025-11-25 09:05:01.373 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:05:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:05:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468073979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:05:01 compute-0 nova_compute[253538]: 2025-11-25 09:05:01.818 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:05:01 compute-0 nova_compute[253538]: 2025-11-25 09:05:01.825 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:05:01 compute-0 nova_compute[253538]: 2025-11-25 09:05:01.841 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:05:01 compute-0 nova_compute[253538]: 2025-11-25 09:05:01.908 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:05:01 compute-0 nova_compute[253538]: 2025-11-25 09:05:01.909 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2527: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1468073979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:05:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:05:03 compute-0 ceph-mon[75015]: pgmap v2527: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:03 compute-0 podman[397827]: 2025-11-25 09:05:03.845800125 +0000 UTC m=+0.103138525 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 09:05:04 compute-0 nova_compute[253538]: 2025-11-25 09:05:04.030 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:05:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:05:04 compute-0 nova_compute[253538]: 2025-11-25 09:05:04.566 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:05 compute-0 ceph-mon[75015]: pgmap v2528: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2529: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:07 compute-0 ceph-mon[75015]: pgmap v2529: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:05:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2530: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:08 compute-0 ceph-mon[75015]: pgmap v2530: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:09 compute-0 nova_compute[253538]: 2025-11-25 09:05:09.032 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:09 compute-0 nova_compute[253538]: 2025-11-25 09:05:09.639 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:11 compute-0 ceph-mon[75015]: pgmap v2531: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:05:13 compute-0 ceph-mon[75015]: pgmap v2532: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:14 compute-0 nova_compute[253538]: 2025-11-25 09:05:14.035 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:14 compute-0 ceph-mon[75015]: pgmap v2533: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:14 compute-0 nova_compute[253538]: 2025-11-25 09:05:14.642 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:17 compute-0 ceph-mon[75015]: pgmap v2534: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:17.586 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:b4:6d 2001:db8:0:1:f816:3eff:fe71:b46d 2001:db8::f816:3eff:fe71:b46d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe71:b46d/64 2001:db8::f816:3eff:fe71:b46d/64', 'neutron:device_id': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dd839f3-f2ee-404d-b9f9-bedcfcb25484, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f8aacb3c-1998-431a-ac4d-66021d7412c1) old=Port_Binding(mac=['fa:16:3e:71:b4:6d 2001:db8::f816:3eff:fe71:b46d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe71:b46d/64', 'neutron:device_id': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:05:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:17.588 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f8aacb3c-1998-431a-ac4d-66021d7412c1 in datapath 6c644b4d-59a5-410c-b57a-1faa3d063b78 updated
Nov 25 09:05:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:17.590 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c644b4d-59a5-410c-b57a-1faa3d063b78, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:05:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:17.592 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e64fe2-92a6-48fa-bf3f-dd864d642c19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:05:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2535: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:19 compute-0 nova_compute[253538]: 2025-11-25 09:05:19.037 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:19 compute-0 ceph-mon[75015]: pgmap v2535: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:19 compute-0 nova_compute[253538]: 2025-11-25 09:05:19.645 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:21 compute-0 ceph-mon[75015]: pgmap v2536: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:05:23 compute-0 ceph-mon[75015]: pgmap v2537: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:05:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:05:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:05:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:05:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:05:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:05:24 compute-0 nova_compute[253538]: 2025-11-25 09:05:24.038 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:24 compute-0 nova_compute[253538]: 2025-11-25 09:05:24.140 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:24 compute-0 nova_compute[253538]: 2025-11-25 09:05:24.141 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:24 compute-0 nova_compute[253538]: 2025-11-25 09:05:24.167 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:05:24 compute-0 nova_compute[253538]: 2025-11-25 09:05:24.280 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:24 compute-0 nova_compute[253538]: 2025-11-25 09:05:24.280 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:24 compute-0 nova_compute[253538]: 2025-11-25 09:05:24.297 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:05:24 compute-0 nova_compute[253538]: 2025-11-25 09:05:24.298 253542 INFO nova.compute.claims [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:05:24 compute-0 nova_compute[253538]: 2025-11-25 09:05:24.463 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:05:24 compute-0 nova_compute[253538]: 2025-11-25 09:05:24.646 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:05:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1036385225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:05:24 compute-0 nova_compute[253538]: 2025-11-25 09:05:24.932 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:05:24 compute-0 nova_compute[253538]: 2025-11-25 09:05:24.938 253542 DEBUG nova.compute.provider_tree [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:05:24 compute-0 nova_compute[253538]: 2025-11-25 09:05:24.959 253542 DEBUG nova.scheduler.client.report [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:05:24 compute-0 nova_compute[253538]: 2025-11-25 09:05:24.994 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:24 compute-0 nova_compute[253538]: 2025-11-25 09:05:24.995 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.069 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.069 253542 DEBUG nova.network.neutron [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.090 253542 INFO nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.112 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.210 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.211 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.211 253542 INFO nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Creating image(s)
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.234 253542 DEBUG nova.storage.rbd_utils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:05:25 compute-0 ceph-mon[75015]: pgmap v2538: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1036385225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.261 253542 DEBUG nova.storage.rbd_utils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.285 253542 DEBUG nova.storage.rbd_utils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.288 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.357 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.357 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.358 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.358 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.380 253542 DEBUG nova.storage.rbd_utils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.383 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.413 253542 DEBUG nova.policy [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.658 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.730 253542 DEBUG nova.storage.rbd_utils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:05:25 compute-0 podman[398021]: 2025-11-25 09:05:25.820202978 +0000 UTC m=+0.061688477 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.835 253542 DEBUG nova.objects.instance [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:05:25 compute-0 podman[398006]: 2025-11-25 09:05:25.842445513 +0000 UTC m=+0.091804387 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.851 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.852 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Ensure instance console log exists: /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.852 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.852 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:25 compute-0 nova_compute[253538]: 2025-11-25 09:05:25.853 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:26 compute-0 nova_compute[253538]: 2025-11-25 09:05:26.544 253542 DEBUG nova.network.neutron [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Successfully created port: bc72cf9d-bb8d-4968-879b-a65c0e151d35 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:05:27 compute-0 ceph-mon[75015]: pgmap v2539: 321 pgs: 321 active+clean; 88 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:05:27 compute-0 ovn_controller[152859]: 2025-11-25T09:05:27Z|01416|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 09:05:27 compute-0 nova_compute[253538]: 2025-11-25 09:05:27.350 253542 DEBUG nova.network.neutron [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Successfully created port: e64e0c93-9ff8-4b26-9a7e-1bae8b024966 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:05:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.108 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "497131ea-c693-4c1d-b471-5b69d2294e3a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.109 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.122 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.137 253542 DEBUG nova.network.neutron [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Successfully updated port: bc72cf9d-bb8d-4968-879b-a65c0e151d35 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:05:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 106 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 775 KiB/s wr, 2 op/s
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.184 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.185 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.196 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.196 253542 INFO nova.compute.claims [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.213 253542 DEBUG nova.compute.manager [req-d032b321-04c3-4b67-8666-8a7b12b358ec req-be4e74ef-a4f7-4f5e-aaff-48d56a90851f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-changed-bc72cf9d-bb8d-4968-879b-a65c0e151d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.214 253542 DEBUG nova.compute.manager [req-d032b321-04c3-4b67-8666-8a7b12b358ec req-be4e74ef-a4f7-4f5e-aaff-48d56a90851f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Refreshing instance network info cache due to event network-changed-bc72cf9d-bb8d-4968-879b-a65c0e151d35. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.214 253542 DEBUG oslo_concurrency.lockutils [req-d032b321-04c3-4b67-8666-8a7b12b358ec req-be4e74ef-a4f7-4f5e-aaff-48d56a90851f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.215 253542 DEBUG oslo_concurrency.lockutils [req-d032b321-04c3-4b67-8666-8a7b12b358ec req-be4e74ef-a4f7-4f5e-aaff-48d56a90851f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.215 253542 DEBUG nova.network.neutron [req-d032b321-04c3-4b67-8666-8a7b12b358ec req-be4e74ef-a4f7-4f5e-aaff-48d56a90851f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Refreshing network info cache for port bc72cf9d-bb8d-4968-879b-a65c0e151d35 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.313 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.382 253542 DEBUG nova.network.neutron [req-d032b321-04c3-4b67-8666-8a7b12b358ec req-be4e74ef-a4f7-4f5e-aaff-48d56a90851f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:05:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:05:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/177567934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.778 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.782 253542 DEBUG nova.compute.provider_tree [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.834 253542 DEBUG nova.scheduler.client.report [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.849 253542 DEBUG nova.network.neutron [req-d032b321-04c3-4b67-8666-8a7b12b358ec req-be4e74ef-a4f7-4f5e-aaff-48d56a90851f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.861 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.862 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.864 253542 DEBUG oslo_concurrency.lockutils [req-d032b321-04c3-4b67-8666-8a7b12b358ec req-be4e74ef-a4f7-4f5e-aaff-48d56a90851f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.898 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.898 253542 DEBUG nova.network.neutron [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.911 253542 INFO nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:05:28 compute-0 nova_compute[253538]: 2025-11-25 09:05:28.926 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.005 253542 DEBUG nova.network.neutron [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Successfully updated port: e64e0c93-9ff8-4b26-9a7e-1bae8b024966 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.017 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.019 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.020 253542 INFO nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Creating image(s)
Nov 25 09:05:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:05:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1999833537' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:05:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:05:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1999833537' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.047 253542 DEBUG nova.storage.rbd_utils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] rbd image 497131ea-c693-4c1d-b471-5b69d2294e3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.076 253542 DEBUG nova.storage.rbd_utils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] rbd image 497131ea-c693-4c1d-b471-5b69d2294e3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.103 253542 DEBUG nova.storage.rbd_utils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] rbd image 497131ea-c693-4c1d-b471-5b69d2294e3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.107 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.155 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.159 253542 DEBUG nova.policy [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '637a807a37ce403a8612d303b1acbb3b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3908211615c4cbaae61d6e5833ca908', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.162 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.162 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.163 253542 DEBUG nova.network.neutron [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.216 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.217 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.217 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.217 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.235 253542 DEBUG nova.storage.rbd_utils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] rbd image 497131ea-c693-4c1d-b471-5b69d2294e3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.239 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 497131ea-c693-4c1d-b471-5b69d2294e3a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:05:29 compute-0 ceph-mon[75015]: pgmap v2540: 321 pgs: 321 active+clean; 106 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 775 KiB/s wr, 2 op/s
Nov 25 09:05:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/177567934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:05:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1999833537' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:05:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1999833537' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.373 253542 DEBUG nova.network.neutron [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.648 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.661 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 497131ea-c693-4c1d-b471-5b69d2294e3a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.714 253542 DEBUG nova.storage.rbd_utils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] resizing rbd image 497131ea-c693-4c1d-b471-5b69d2294e3a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.799 253542 DEBUG nova.objects.instance [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lazy-loading 'migration_context' on Instance uuid 497131ea-c693-4c1d-b471-5b69d2294e3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.818 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.818 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Ensure instance console log exists: /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.818 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.819 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:29 compute-0 nova_compute[253538]: 2025-11-25 09:05:29.819 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:30.089 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:05:30 compute-0 nova_compute[253538]: 2025-11-25 09:05:30.090 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:30.091 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:05:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 141 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.9 MiB/s wr, 28 op/s
Nov 25 09:05:30 compute-0 nova_compute[253538]: 2025-11-25 09:05:30.188 253542 DEBUG nova.network.neutron [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Successfully created port: 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:05:30 compute-0 nova_compute[253538]: 2025-11-25 09:05:30.324 253542 DEBUG nova.compute.manager [req-d5343e8a-6ebe-4b4d-8f48-ce1f0640d2a8 req-92cbaa91-479c-4215-8cfa-e874faa59d17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-changed-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:05:30 compute-0 nova_compute[253538]: 2025-11-25 09:05:30.325 253542 DEBUG nova.compute.manager [req-d5343e8a-6ebe-4b4d-8f48-ce1f0640d2a8 req-92cbaa91-479c-4215-8cfa-e874faa59d17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Refreshing instance network info cache due to event network-changed-e64e0c93-9ff8-4b26-9a7e-1bae8b024966. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:05:30 compute-0 nova_compute[253538]: 2025-11-25 09:05:30.325 253542 DEBUG oslo_concurrency.lockutils [req-d5343e8a-6ebe-4b4d-8f48-ce1f0640d2a8 req-92cbaa91-479c-4215-8cfa-e874faa59d17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.096 253542 DEBUG nova.network.neutron [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Successfully updated port: 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.121 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.121 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquired lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.122 253542 DEBUG nova.network.neutron [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.205 253542 DEBUG nova.compute.manager [req-18946731-5463-4347-ae2a-e4895b7239b4 req-eeebd2e3-0091-4601-b15b-f6597d574eb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received event network-changed-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.206 253542 DEBUG nova.compute.manager [req-18946731-5463-4347-ae2a-e4895b7239b4 req-eeebd2e3-0091-4601-b15b-f6597d574eb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Refreshing instance network info cache due to event network-changed-44ba14ce-3677-4e53-b6ea-a21b98ba45d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.206 253542 DEBUG oslo_concurrency.lockutils [req-18946731-5463-4347-ae2a-e4895b7239b4 req-eeebd2e3-0091-4601-b15b-f6597d574eb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:05:31 compute-0 ceph-mon[75015]: pgmap v2541: 321 pgs: 321 active+clean; 141 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.9 MiB/s wr, 28 op/s
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.313 253542 DEBUG nova.network.neutron [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.552 253542 DEBUG nova.network.neutron [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [{"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.572 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.574 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Instance network_info: |[{"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.576 253542 DEBUG oslo_concurrency.lockutils [req-d5343e8a-6ebe-4b4d-8f48-ce1f0640d2a8 req-92cbaa91-479c-4215-8cfa-e874faa59d17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.577 253542 DEBUG nova.network.neutron [req-d5343e8a-6ebe-4b4d-8f48-ce1f0640d2a8 req-92cbaa91-479c-4215-8cfa-e874faa59d17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Refreshing network info cache for port e64e0c93-9ff8-4b26-9a7e-1bae8b024966 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.585 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Start _get_guest_xml network_info=[{"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.594 253542 WARNING nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.606 253542 DEBUG nova.virt.libvirt.host [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.607 253542 DEBUG nova.virt.libvirt.host [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.612 253542 DEBUG nova.virt.libvirt.host [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.613 253542 DEBUG nova.virt.libvirt.host [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.614 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.614 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.615 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.616 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.616 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.617 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.617 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.618 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.618 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.619 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.619 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.620 253542 DEBUG nova.virt.hardware [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:05:31 compute-0 nova_compute[253538]: 2025-11-25 09:05:31.626 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.078 253542 DEBUG nova.network.neutron [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Updating instance_info_cache with network_info: [{"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.098 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Releasing lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.099 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Instance network_info: |[{"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.100 253542 DEBUG oslo_concurrency.lockutils [req-18946731-5463-4347-ae2a-e4895b7239b4 req-eeebd2e3-0091-4601-b15b-f6597d574eb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.101 253542 DEBUG nova.network.neutron [req-18946731-5463-4347-ae2a-e4895b7239b4 req-eeebd2e3-0091-4601-b15b-f6597d574eb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Refreshing network info cache for port 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.106 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Start _get_guest_xml network_info=[{"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.113 253542 WARNING nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.125 253542 DEBUG nova.virt.libvirt.host [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.126 253542 DEBUG nova.virt.libvirt.host [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:05:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:05:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1767503369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.133 253542 DEBUG nova.virt.libvirt.host [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.134 253542 DEBUG nova.virt.libvirt.host [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.135 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.135 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.136 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.136 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.137 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.137 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.138 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.139 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.139 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.140 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.140 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.141 253542 DEBUG nova.virt.hardware [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:05:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 156 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.6 MiB/s wr, 40 op/s
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.157 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.211 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.237 253542 DEBUG nova.storage.rbd_utils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.241 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:05:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1767503369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:05:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:05:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2383854619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.646 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.669 253542 DEBUG nova.storage.rbd_utils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] rbd image 497131ea-c693-4c1d-b471-5b69d2294e3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.672 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:05:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:05:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/921679297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.702 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.704 253542 DEBUG nova.virt.libvirt.vif [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1397309390',display_name='tempest-TestGettingAddress-server-1397309390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1397309390',id=137,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ie7bcng6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:05:25Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.704 253542 DEBUG nova.network.os_vif_util [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.706 253542 DEBUG nova.network.os_vif_util [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a3:07,bridge_name='br-int',has_traffic_filtering=True,id=bc72cf9d-bb8d-4968-879b-a65c0e151d35,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc72cf9d-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.707 253542 DEBUG nova.virt.libvirt.vif [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1397309390',display_name='tempest-TestGettingAddress-server-1397309390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1397309390',id=137,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ie7bcng6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:05:25Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.707 253542 DEBUG nova.network.os_vif_util [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.708 253542 DEBUG nova.network.os_vif_util [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:41:4a,bridge_name='br-int',has_traffic_filtering=True,id=e64e0c93-9ff8-4b26-9a7e-1bae8b024966,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape64e0c93-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.709 253542 DEBUG nova.objects.instance [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.727 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:05:32 compute-0 nova_compute[253538]:   <uuid>7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0</uuid>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   <name>instance-00000089</name>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <nova:name>tempest-TestGettingAddress-server-1397309390</nova:name>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:05:31</nova:creationTime>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:05:32 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:05:32 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:05:32 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:05:32 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:05:32 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:05:32 compute-0 nova_compute[253538]:         <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 09:05:32 compute-0 nova_compute[253538]:         <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:05:32 compute-0 nova_compute[253538]:         <nova:port uuid="bc72cf9d-bb8d-4968-879b-a65c0e151d35">
Nov 25 09:05:32 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:05:32 compute-0 nova_compute[253538]:         <nova:port uuid="e64e0c93-9ff8-4b26-9a7e-1bae8b024966">
Nov 25 09:05:32 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe6a:414a" ipVersion="6"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe6a:414a" ipVersion="6"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <system>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <entry name="serial">7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0</entry>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <entry name="uuid">7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0</entry>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     </system>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   <os>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   </os>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   <features>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   </features>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk">
Nov 25 09:05:32 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       </source>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:05:32 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk.config">
Nov 25 09:05:32 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       </source>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:05:32 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:b7:a3:07"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <target dev="tapbc72cf9d-bb"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:6a:41:4a"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <target dev="tape64e0c93-9f"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0/console.log" append="off"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <video>
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     </video>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:05:32 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:05:32 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:05:32 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:05:32 compute-0 nova_compute[253538]: </domain>
Nov 25 09:05:32 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.728 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Preparing to wait for external event network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.728 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.729 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.729 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.729 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Preparing to wait for external event network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.730 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.730 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.730 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.731 253542 DEBUG nova.virt.libvirt.vif [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1397309390',display_name='tempest-TestGettingAddress-server-1397309390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1397309390',id=137,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ie7bcng6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:05:25Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.731 253542 DEBUG nova.network.os_vif_util [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.732 253542 DEBUG nova.network.os_vif_util [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a3:07,bridge_name='br-int',has_traffic_filtering=True,id=bc72cf9d-bb8d-4968-879b-a65c0e151d35,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc72cf9d-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.732 253542 DEBUG os_vif [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a3:07,bridge_name='br-int',has_traffic_filtering=True,id=bc72cf9d-bb8d-4968-879b-a65c0e151d35,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc72cf9d-bb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.734 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.734 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.735 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.738 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.738 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc72cf9d-bb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.739 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc72cf9d-bb, col_values=(('external_ids', {'iface-id': 'bc72cf9d-bb8d-4968-879b-a65c0e151d35', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:a3:07', 'vm-uuid': '7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:32 compute-0 NetworkManager[48915]: <info>  [1764061532.7422] manager: (tapbc72cf9d-bb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/580)
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.743 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.745 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.747 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.748 253542 INFO os_vif [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a3:07,bridge_name='br-int',has_traffic_filtering=True,id=bc72cf9d-bb8d-4968-879b-a65c0e151d35,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc72cf9d-bb')
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.749 253542 DEBUG nova.virt.libvirt.vif [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1397309390',display_name='tempest-TestGettingAddress-server-1397309390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1397309390',id=137,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ie7bcng6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:05:25Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.749 253542 DEBUG nova.network.os_vif_util [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.750 253542 DEBUG nova.network.os_vif_util [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:41:4a,bridge_name='br-int',has_traffic_filtering=True,id=e64e0c93-9ff8-4b26-9a7e-1bae8b024966,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape64e0c93-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.751 253542 DEBUG os_vif [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:41:4a,bridge_name='br-int',has_traffic_filtering=True,id=e64e0c93-9ff8-4b26-9a7e-1bae8b024966,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape64e0c93-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.751 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.751 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.752 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.755 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.755 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape64e0c93-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.756 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape64e0c93-9f, col_values=(('external_ids', {'iface-id': 'e64e0c93-9ff8-4b26-9a7e-1bae8b024966', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:41:4a', 'vm-uuid': '7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.757 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:32 compute-0 NetworkManager[48915]: <info>  [1764061532.7580] manager: (tape64e0c93-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/581)
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.760 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.765 253542 INFO os_vif [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:41:4a,bridge_name='br-int',has_traffic_filtering=True,id=e64e0c93-9ff8-4b26-9a7e-1bae8b024966,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape64e0c93-9f')
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.822 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.822 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.823 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:b7:a3:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.824 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:6a:41:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.824 253542 INFO nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Using config drive
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.858 253542 DEBUG nova.storage.rbd_utils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.906 253542 DEBUG nova.network.neutron [req-d5343e8a-6ebe-4b4d-8f48-ce1f0640d2a8 req-92cbaa91-479c-4215-8cfa-e874faa59d17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updated VIF entry in instance network info cache for port e64e0c93-9ff8-4b26-9a7e-1bae8b024966. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.907 253542 DEBUG nova.network.neutron [req-d5343e8a-6ebe-4b4d-8f48-ce1f0640d2a8 req-92cbaa91-479c-4215-8cfa-e874faa59d17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [{"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:05:32 compute-0 nova_compute[253538]: 2025-11-25 09:05:32.925 253542 DEBUG oslo_concurrency.lockutils [req-d5343e8a-6ebe-4b4d-8f48-ce1f0640d2a8 req-92cbaa91-479c-4215-8cfa-e874faa59d17 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:05:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:05:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:05:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3616471348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.093 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.100 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.101 253542 DEBUG nova.virt.libvirt.vif [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:05:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-829104372',display_name='tempest-TestServerBasicOps-server-829104372',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-829104372',id=138,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzwD53kQ8BpPBb54UPZdiuwcAps8iqsBmsdvuGpmBwC+Q4SksGNyI7vnMrtWDCi5xUrajEjXki8ZVS3NyMr/F7GJW+4JitS6beGfKpA2babih/6mXQzAB6PKdgZETbkFw==',key_name='tempest-TestServerBasicOps-1951077282',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3908211615c4cbaae61d6e5833ca908',ramdisk_id='',reservation_id='r-r1jf1pgp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1625299484',owner_user_name='tempest-TestServerBasicOps-1625299484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:05:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='637a807a37ce403a8612d303b1acbb3b',uuid=497131ea-c693-4c1d-b471-5b69d2294e3a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.101 253542 DEBUG nova.network.os_vif_util [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Converting VIF {"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.102 253542 DEBUG nova.network.os_vif_util [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:84:10,bridge_name='br-int',has_traffic_filtering=True,id=44ba14ce-3677-4e53-b6ea-a21b98ba45d6,network=Network(dbe7469c-9d57-4418-b63b-ede368786895),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ba14ce-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.102 253542 DEBUG nova.objects.instance [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lazy-loading 'pci_devices' on Instance uuid 497131ea-c693-4c1d-b471-5b69d2294e3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.116 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:05:33 compute-0 nova_compute[253538]:   <uuid>497131ea-c693-4c1d-b471-5b69d2294e3a</uuid>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   <name>instance-0000008a</name>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <nova:name>tempest-TestServerBasicOps-server-829104372</nova:name>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:05:32</nova:creationTime>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:05:33 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:05:33 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:05:33 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:05:33 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:05:33 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:05:33 compute-0 nova_compute[253538]:         <nova:user uuid="637a807a37ce403a8612d303b1acbb3b">tempest-TestServerBasicOps-1625299484-project-member</nova:user>
Nov 25 09:05:33 compute-0 nova_compute[253538]:         <nova:project uuid="a3908211615c4cbaae61d6e5833ca908">tempest-TestServerBasicOps-1625299484</nova:project>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:05:33 compute-0 nova_compute[253538]:         <nova:port uuid="44ba14ce-3677-4e53-b6ea-a21b98ba45d6">
Nov 25 09:05:33 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <system>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <entry name="serial">497131ea-c693-4c1d-b471-5b69d2294e3a</entry>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <entry name="uuid">497131ea-c693-4c1d-b471-5b69d2294e3a</entry>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     </system>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   <os>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   </os>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   <features>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   </features>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/497131ea-c693-4c1d-b471-5b69d2294e3a_disk">
Nov 25 09:05:33 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       </source>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:05:33 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/497131ea-c693-4c1d-b471-5b69d2294e3a_disk.config">
Nov 25 09:05:33 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       </source>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:05:33 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:70:84:10"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <target dev="tap44ba14ce-36"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a/console.log" append="off"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <video>
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     </video>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:05:33 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:05:33 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:05:33 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:05:33 compute-0 nova_compute[253538]: </domain>
Nov 25 09:05:33 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.118 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Preparing to wait for external event network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.118 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.118 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.118 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.119 253542 DEBUG nova.virt.libvirt.vif [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:05:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-829104372',display_name='tempest-TestServerBasicOps-server-829104372',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-829104372',id=138,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzwD53kQ8BpPBb54UPZdiuwcAps8iqsBmsdvuGpmBwC+Q4SksGNyI7vnMrtWDCi5xUrajEjXki8ZVS3NyMr/F7GJW+4JitS6beGfKpA2babih/6mXQzAB6PKdgZETbkFw==',key_name='tempest-TestServerBasicOps-1951077282',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3908211615c4cbaae61d6e5833ca908',ramdisk_id='',reservation_id='r-r1jf1pgp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1625299484',owner_user_name='tempest-TestServerBasicOps-1625299484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:05:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='637a807a37ce403a8612d303b1acbb3b',uuid=497131ea-c693-4c1d-b471-5b69d2294e3a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.119 253542 DEBUG nova.network.os_vif_util [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Converting VIF {"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.119 253542 DEBUG nova.network.os_vif_util [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:84:10,bridge_name='br-int',has_traffic_filtering=True,id=44ba14ce-3677-4e53-b6ea-a21b98ba45d6,network=Network(dbe7469c-9d57-4418-b63b-ede368786895),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ba14ce-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.120 253542 DEBUG os_vif [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:84:10,bridge_name='br-int',has_traffic_filtering=True,id=44ba14ce-3677-4e53-b6ea-a21b98ba45d6,network=Network(dbe7469c-9d57-4418-b63b-ede368786895),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ba14ce-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.120 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.121 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.121 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.122 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.123 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44ba14ce-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.123 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap44ba14ce-36, col_values=(('external_ids', {'iface-id': '44ba14ce-3677-4e53-b6ea-a21b98ba45d6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:84:10', 'vm-uuid': '497131ea-c693-4c1d-b471-5b69d2294e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.124 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:33 compute-0 NetworkManager[48915]: <info>  [1764061533.1262] manager: (tap44ba14ce-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/582)
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.126 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.136 253542 INFO os_vif [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:84:10,bridge_name='br-int',has_traffic_filtering=True,id=44ba14ce-3677-4e53-b6ea-a21b98ba45d6,network=Network(dbe7469c-9d57-4418-b63b-ede368786895),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ba14ce-36')
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.187 253542 INFO nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Creating config drive at /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0/disk.config
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.193 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt9nozznd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.235 253542 DEBUG nova.network.neutron [req-18946731-5463-4347-ae2a-e4895b7239b4 req-eeebd2e3-0091-4601-b15b-f6597d574eb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Updated VIF entry in instance network info cache for port 44ba14ce-3677-4e53-b6ea-a21b98ba45d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.236 253542 DEBUG nova.network.neutron [req-18946731-5463-4347-ae2a-e4895b7239b4 req-eeebd2e3-0091-4601-b15b-f6597d574eb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Updating instance_info_cache with network_info: [{"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.239 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.239 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.239 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] No VIF found with MAC fa:16:3e:70:84:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.240 253542 INFO nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Using config drive
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.263 253542 DEBUG nova.storage.rbd_utils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] rbd image 497131ea-c693-4c1d-b471-5b69d2294e3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:05:33 compute-0 ceph-mon[75015]: pgmap v2542: 321 pgs: 321 active+clean; 156 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.6 MiB/s wr, 40 op/s
Nov 25 09:05:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2383854619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:05:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/921679297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:05:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3616471348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.302 253542 DEBUG oslo_concurrency.lockutils [req-18946731-5463-4347-ae2a-e4895b7239b4 req-eeebd2e3-0091-4601-b15b-f6597d574eb9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.346 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt9nozznd" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.375 253542 DEBUG nova.storage.rbd_utils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.380 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0/disk.config 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.546 253542 DEBUG oslo_concurrency.processutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0/disk.config 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.548 253542 INFO nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Deleting local config drive /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0/disk.config because it was imported into RBD.
Nov 25 09:05:33 compute-0 kernel: tapbc72cf9d-bb: entered promiscuous mode
Nov 25 09:05:33 compute-0 NetworkManager[48915]: <info>  [1764061533.6155] manager: (tapbc72cf9d-bb): new Tun device (/org/freedesktop/NetworkManager/Devices/583)
Nov 25 09:05:33 compute-0 ovn_controller[152859]: 2025-11-25T09:05:33Z|01417|binding|INFO|Claiming lport bc72cf9d-bb8d-4968-879b-a65c0e151d35 for this chassis.
Nov 25 09:05:33 compute-0 ovn_controller[152859]: 2025-11-25T09:05:33Z|01418|binding|INFO|bc72cf9d-bb8d-4968-879b-a65c0e151d35: Claiming fa:16:3e:b7:a3:07 10.100.0.14
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:33 compute-0 NetworkManager[48915]: <info>  [1764061533.6314] manager: (tape64e0c93-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/584)
Nov 25 09:05:33 compute-0 kernel: tape64e0c93-9f: entered promiscuous mode
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.637 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:33 compute-0 ovn_controller[152859]: 2025-11-25T09:05:33Z|01419|if_status|INFO|Dropped 1 log messages in last 117 seconds (most recently, 117 seconds ago) due to excessive rate
Nov 25 09:05:33 compute-0 ovn_controller[152859]: 2025-11-25T09:05:33Z|01420|if_status|INFO|Not updating pb chassis for e64e0c93-9ff8-4b26-9a7e-1bae8b024966 now as sb is readonly
Nov 25 09:05:33 compute-0 ovn_controller[152859]: 2025-11-25T09:05:33Z|01421|binding|INFO|Claiming lport e64e0c93-9ff8-4b26-9a7e-1bae8b024966 for this chassis.
Nov 25 09:05:33 compute-0 ovn_controller[152859]: 2025-11-25T09:05:33Z|01422|binding|INFO|e64e0c93-9ff8-4b26-9a7e-1bae8b024966: Claiming fa:16:3e:6a:41:4a 2001:db8:0:1:f816:3eff:fe6a:414a 2001:db8::f816:3eff:fe6a:414a
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.644 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a3:07 10.100.0.14'], port_security=['fa:16:3e:b7:a3:07 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0fc5f4dc-b315-4292-9809-5084499d762b, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc72cf9d-bb8d-4968-879b-a65c0e151d35) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.646 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc72cf9d-bb8d-4968-879b-a65c0e151d35 in datapath 8f08e3a5-c18c-40d6-a052-3721725c11a7 bound to our chassis
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.648 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f08e3a5-c18c-40d6-a052-3721725c11a7
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.655 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:41:4a 2001:db8:0:1:f816:3eff:fe6a:414a 2001:db8::f816:3eff:fe6a:414a'], port_security=['fa:16:3e:6a:41:4a 2001:db8:0:1:f816:3eff:fe6a:414a 2001:db8::f816:3eff:fe6a:414a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe6a:414a/64 2001:db8::f816:3eff:fe6a:414a/64', 'neutron:device_id': '7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dd839f3-f2ee-404d-b9f9-bedcfcb25484, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=e64e0c93-9ff8-4b26-9a7e-1bae8b024966) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:05:33 compute-0 systemd-udevd[398492]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:05:33 compute-0 systemd-udevd[398490]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.662 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6c02d8-09fb-40a2-ba26-c49d2bad43a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.664 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8f08e3a5-c1 in ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.667 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8f08e3a5-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.667 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6075988e-33b8-46d7-b208-f78cbdd94779]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.668 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0d5d8b-dd84-4fd8-8104-a9ead0ddeabf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:33 compute-0 NetworkManager[48915]: <info>  [1764061533.6794] device (tape64e0c93-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:05:33 compute-0 NetworkManager[48915]: <info>  [1764061533.6811] device (tape64e0c93-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:05:33 compute-0 NetworkManager[48915]: <info>  [1764061533.6819] device (tapbc72cf9d-bb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:05:33 compute-0 NetworkManager[48915]: <info>  [1764061533.6830] device (tapbc72cf9d-bb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.689 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[532abce3-0484-4d3f-a0d9-418b79667604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:33 compute-0 systemd-machined[215790]: New machine qemu-167-instance-00000089.
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.717 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[230a67df-191c-4b0f-b054-e90610633696]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:33 compute-0 systemd[1]: Started Virtual Machine qemu-167-instance-00000089.
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.740 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:33 compute-0 ovn_controller[152859]: 2025-11-25T09:05:33Z|01423|binding|INFO|Setting lport bc72cf9d-bb8d-4968-879b-a65c0e151d35 ovn-installed in OVS
Nov 25 09:05:33 compute-0 ovn_controller[152859]: 2025-11-25T09:05:33Z|01424|binding|INFO|Setting lport bc72cf9d-bb8d-4968-879b-a65c0e151d35 up in Southbound
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.752 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:33 compute-0 ovn_controller[152859]: 2025-11-25T09:05:33Z|01425|binding|INFO|Setting lport e64e0c93-9ff8-4b26-9a7e-1bae8b024966 ovn-installed in OVS
Nov 25 09:05:33 compute-0 ovn_controller[152859]: 2025-11-25T09:05:33Z|01426|binding|INFO|Setting lport e64e0c93-9ff8-4b26-9a7e-1bae8b024966 up in Southbound
Nov 25 09:05:33 compute-0 nova_compute[253538]: 2025-11-25 09:05:33.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.767 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7c98c8-b88f-4923-a00e-b828618ca04f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.772 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[12de0896-8ea9-4ac5-812c-7b916518d3c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:33 compute-0 NetworkManager[48915]: <info>  [1764061533.7735] manager: (tap8f08e3a5-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/585)
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.814 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[cebcbdaa-a0af-474a-bf96-3a2d18d442cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.820 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f6c3b9-0b5d-4dfa-a446-316b1886bc27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:33 compute-0 NetworkManager[48915]: <info>  [1764061533.8504] device (tap8f08e3a5-c0): carrier: link connected
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.856 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a4381c79-9114-49f5-a919-ba28a80fa2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.874 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a02026f4-8cae-436e-8a26-40f0cb3b2e46]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f08e3a5-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:64:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 412], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688844, 'reachable_time': 16923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398531, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.888 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f360dd-1fe1-4373-989b-0faaf8538ce6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:645c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688844, 'tstamp': 688844}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398532, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.905 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[567c1b95-62e8-4f5d-a622-57c19e40f5b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f08e3a5-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:64:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 412], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688844, 'reachable_time': 16923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398533, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:33.934 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a20ed8a6-1471-493d-9dd1-e7bb1e120003]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.013 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb6e554-6c3e-44c4-8648-922bb013485f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.014 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f08e3a5-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.014 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.015 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f08e3a5-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:34 compute-0 NetworkManager[48915]: <info>  [1764061534.0174] manager: (tap8f08e3a5-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/586)
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.016 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:34 compute-0 kernel: tap8f08e3a5-c0: entered promiscuous mode
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.026 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.028 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f08e3a5-c0, col_values=(('external_ids', {'iface-id': '8fae56b6-9884-44ea-b3b3-2b19412193c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.029 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:34 compute-0 ovn_controller[152859]: 2025-11-25T09:05:34Z|01427|binding|INFO|Releasing lport 8fae56b6-9884-44ea-b3b3-2b19412193c5 from this chassis (sb_readonly=0)
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.043 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.047 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.048 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8f08e3a5-c18c-40d6-a052-3721725c11a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8f08e3a5-c18c-40d6-a052-3721725c11a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.049 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a47b1b-5fd7-4f48-81bb-75d63b22601b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.049 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-8f08e3a5-c18c-40d6-a052-3721725c11a7
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/8f08e3a5-c18c-40d6-a052-3721725c11a7.pid.haproxy
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 8f08e3a5-c18c-40d6-a052-3721725c11a7
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.050 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'env', 'PROCESS_TAG=haproxy-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8f08e3a5-c18c-40d6-a052-3721725c11a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.115 253542 DEBUG nova.compute.manager [req-6034f5c8-e1d4-4411-a01e-800e32fabfbf req-5631e041-6c3e-454e-a271-7effeca7968b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.116 253542 DEBUG oslo_concurrency.lockutils [req-6034f5c8-e1d4-4411-a01e-800e32fabfbf req-5631e041-6c3e-454e-a271-7effeca7968b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.116 253542 DEBUG oslo_concurrency.lockutils [req-6034f5c8-e1d4-4411-a01e-800e32fabfbf req-5631e041-6c3e-454e-a271-7effeca7968b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.116 253542 DEBUG oslo_concurrency.lockutils [req-6034f5c8-e1d4-4411-a01e-800e32fabfbf req-5631e041-6c3e-454e-a271-7effeca7968b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.117 253542 DEBUG nova.compute.manager [req-6034f5c8-e1d4-4411-a01e-800e32fabfbf req-5631e041-6c3e-454e-a271-7effeca7968b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Processing event network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:05:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 175 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.4 MiB/s wr, 56 op/s
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.183 253542 INFO nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Creating config drive at /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a/disk.config
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.191 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplzgsjx_9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.361 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplzgsjx_9" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.403 253542 DEBUG nova.storage.rbd_utils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] rbd image 497131ea-c693-4c1d-b471-5b69d2294e3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.409 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a/disk.config 497131ea-c693-4c1d-b471-5b69d2294e3a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.452 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061534.4293022, 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.453 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] VM Started (Lifecycle Event)
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.479 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:05:34 compute-0 podman[398634]: 2025-11-25 09:05:34.482375071 +0000 UTC m=+0.056654991 container create 51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.483 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061534.4300125, 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.484 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] VM Paused (Lifecycle Event)
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.508 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.515 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:05:34 compute-0 systemd[1]: Started libpod-conmon-51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435.scope.
Nov 25 09:05:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:05:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80317eb3c6d5e4b152cb82d3e07a88ef2d85b9598516f68f790b2b2e984d1b96/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.544 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:05:34 compute-0 podman[398634]: 2025-11-25 09:05:34.455184283 +0000 UTC m=+0.029464223 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:05:34 compute-0 podman[398634]: 2025-11-25 09:05:34.55993073 +0000 UTC m=+0.134210670 container init 51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:05:34 compute-0 podman[398634]: 2025-11-25 09:05:34.565549723 +0000 UTC m=+0.139829643 container start 51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 09:05:34 compute-0 neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7[398671]: [NOTICE]   (398689) : New worker (398696) forked
Nov 25 09:05:34 compute-0 neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7[398671]: [NOTICE]   (398689) : Loading success.
Nov 25 09:05:34 compute-0 podman[398648]: 2025-11-25 09:05:34.602847696 +0000 UTC m=+0.086655287 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.618 162739 INFO neutron.agent.ovn.metadata.agent [-] Port e64e0c93-9ff8-4b26-9a7e-1bae8b024966 in datapath 6c644b4d-59a5-410c-b57a-1faa3d063b78 unbound from our chassis
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.620 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c644b4d-59a5-410c-b57a-1faa3d063b78
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.625 253542 DEBUG oslo_concurrency.processutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a/disk.config 497131ea-c693-4c1d-b471-5b69d2294e3a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.626 253542 INFO nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Deleting local config drive /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a/disk.config because it was imported into RBD.
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.630 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fd1bd913-906d-49dc-b304-5aa4953cb671]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.632 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c644b4d-51 in ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.634 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c644b4d-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.634 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[21a99274-543a-410c-a84b-3d526c9ab5b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.636 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[056efffe-544e-48be-a84c-68553f34179e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.651 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[7c282ecb-1e71-4c56-ab74-0974b43e5bec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.675 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f158f7-b3cb-40cb-a857-fcf6a3a9ccbc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 kernel: tap44ba14ce-36: entered promiscuous mode
Nov 25 09:05:34 compute-0 systemd-udevd[398514]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:05:34 compute-0 NetworkManager[48915]: <info>  [1764061534.6835] manager: (tap44ba14ce-36): new Tun device (/org/freedesktop/NetworkManager/Devices/587)
Nov 25 09:05:34 compute-0 ovn_controller[152859]: 2025-11-25T09:05:34Z|01428|binding|INFO|Claiming lport 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 for this chassis.
Nov 25 09:05:34 compute-0 ovn_controller[152859]: 2025-11-25T09:05:34Z|01429|binding|INFO|44ba14ce-3677-4e53-b6ea-a21b98ba45d6: Claiming fa:16:3e:70:84:10 10.100.0.14
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.685 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.688 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.694 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:34 compute-0 NetworkManager[48915]: <info>  [1764061534.7010] device (tap44ba14ce-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:05:34 compute-0 NetworkManager[48915]: <info>  [1764061534.7020] device (tap44ba14ce-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.702 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:84:10 10.100.0.14'], port_security=['fa:16:3e:70:84:10 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '497131ea-c693-4c1d-b471-5b69d2294e3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dbe7469c-9d57-4418-b63b-ede368786895', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3908211615c4cbaae61d6e5833ca908', 'neutron:revision_number': '2', 'neutron:security_group_ids': '96995e66-0af1-4c06-becd-28a8c446152a ba12b146-5dc4-4552-8b04-abe689899999', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15de48bd-9bbf-4354-b481-d22133abf514, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=44ba14ce-3677-4e53-b6ea-a21b98ba45d6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.714 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3cfe04f2-a0d9-4aab-9e25-681f0171ba54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.724 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9481e425-c221-4215-b6be-e01f9bdac16c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 NetworkManager[48915]: <info>  [1764061534.7256] manager: (tap6c644b4d-50): new Veth device (/org/freedesktop/NetworkManager/Devices/588)
Nov 25 09:05:34 compute-0 systemd-machined[215790]: New machine qemu-168-instance-0000008a.
Nov 25 09:05:34 compute-0 systemd[1]: Started Virtual Machine qemu-168-instance-0000008a.
Nov 25 09:05:34 compute-0 ovn_controller[152859]: 2025-11-25T09:05:34Z|01430|binding|INFO|Setting lport 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 ovn-installed in OVS
Nov 25 09:05:34 compute-0 ovn_controller[152859]: 2025-11-25T09:05:34Z|01431|binding|INFO|Setting lport 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 up in Southbound
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.757 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[460baff5-b3f8-4759-aa93-3bf2076e24df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.762 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b8ac9d-d92d-4d3e-ba10-48bd2462415f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 NetworkManager[48915]: <info>  [1764061534.7848] device (tap6c644b4d-50): carrier: link connected
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.788 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[44099974-b431-4171-8c3d-cbe38b177236]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.807 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f762e1-c18a-4cda-b4ff-ae59261f4cfc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c644b4d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b4:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 414], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688938, 'reachable_time': 35223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398735, 'error': None, 'target': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.829 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[addb5d77-e97a-4d74-b785-da408de57afd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:b46d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688938, 'tstamp': 688938}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398738, 'error': None, 'target': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.852 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[31594316-095a-4b43-92ec-214c72f4c079]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c644b4d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b4:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 414], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688938, 'reachable_time': 35223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398739, 'error': None, 'target': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.898 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[69726ba1-2a0b-4b6d-82a7-8924e84f7d69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.942 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[478a2528-1c10-4c15-9229-3ad6bb64f8dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.944 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c644b4d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.944 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.944 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c644b4d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.946 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:34 compute-0 kernel: tap6c644b4d-50: entered promiscuous mode
Nov 25 09:05:34 compute-0 NetworkManager[48915]: <info>  [1764061534.9473] manager: (tap6c644b4d-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/589)
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.950 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.950 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c644b4d-50, col_values=(('external_ids', {'iface-id': 'f8aacb3c-1998-431a-ac4d-66021d7412c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.952 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:34 compute-0 ovn_controller[152859]: 2025-11-25T09:05:34Z|01432|binding|INFO|Releasing lport f8aacb3c-1998-431a-ac4d-66021d7412c1 from this chassis (sb_readonly=0)
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.953 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.954 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c644b4d-59a5-410c-b57a-1faa3d063b78.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c644b4d-59a5-410c-b57a-1faa3d063b78.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.954 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb18991-7ec5-48ed-b1fa-027c5f3a82b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.956 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-6c644b4d-59a5-410c-b57a-1faa3d063b78
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/6c644b4d-59a5-410c-b57a-1faa3d063b78.pid.haproxy
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 6c644b4d-59a5-410c-b57a-1faa3d063b78
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:05:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:34.957 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'env', 'PROCESS_TAG=haproxy-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c644b4d-59a5-410c-b57a-1faa3d063b78.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:05:34 compute-0 nova_compute[253538]: 2025-11-25 09:05:34.966 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.122 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061535.1219375, 497131ea-c693-4c1d-b471-5b69d2294e3a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.123 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] VM Started (Lifecycle Event)
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.140 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.145 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061535.1220737, 497131ea-c693-4c1d-b471-5b69d2294e3a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.145 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] VM Paused (Lifecycle Event)
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.160 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.164 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.179 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:05:35 compute-0 ceph-mon[75015]: pgmap v2543: 321 pgs: 321 active+clean; 175 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.4 MiB/s wr, 56 op/s
Nov 25 09:05:35 compute-0 podman[398811]: 2025-11-25 09:05:35.340995662 +0000 UTC m=+0.029123573 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.526 253542 DEBUG nova.compute.manager [req-c2f5a027-55c2-4a4d-ae20-c77b5b4add08 req-a7e999c0-018f-42e6-a7f8-90794c621cf8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received event network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.527 253542 DEBUG oslo_concurrency.lockutils [req-c2f5a027-55c2-4a4d-ae20-c77b5b4add08 req-a7e999c0-018f-42e6-a7f8-90794c621cf8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.527 253542 DEBUG oslo_concurrency.lockutils [req-c2f5a027-55c2-4a4d-ae20-c77b5b4add08 req-a7e999c0-018f-42e6-a7f8-90794c621cf8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.527 253542 DEBUG oslo_concurrency.lockutils [req-c2f5a027-55c2-4a4d-ae20-c77b5b4add08 req-a7e999c0-018f-42e6-a7f8-90794c621cf8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.528 253542 DEBUG nova.compute.manager [req-c2f5a027-55c2-4a4d-ae20-c77b5b4add08 req-a7e999c0-018f-42e6-a7f8-90794c621cf8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Processing event network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.528 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.532 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061535.532646, 497131ea-c693-4c1d-b471-5b69d2294e3a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.533 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] VM Resumed (Lifecycle Event)
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.535 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.538 253542 INFO nova.virt.libvirt.driver [-] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Instance spawned successfully.
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.538 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:05:35 compute-0 podman[398811]: 2025-11-25 09:05:35.556076129 +0000 UTC m=+0.244204020 container create d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.556 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.562 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.566 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.567 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.567 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.568 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.568 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.568 253542 DEBUG nova.virt.libvirt.driver [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.591 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:05:35 compute-0 systemd[1]: Started libpod-conmon-d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91.scope.
Nov 25 09:05:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d698aa358060802f75d0f6998f934a98b483adfd949a7477e1c82db0e0b4996/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.747 253542 INFO nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Took 6.73 seconds to spawn the instance on the hypervisor.
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.748 253542 DEBUG nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:05:35 compute-0 podman[398811]: 2025-11-25 09:05:35.7746156 +0000 UTC m=+0.462743521 container init d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 09:05:35 compute-0 podman[398811]: 2025-11-25 09:05:35.780253113 +0000 UTC m=+0.468381014 container start d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:05:35 compute-0 neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78[398826]: [NOTICE]   (398830) : New worker (398832) forked
Nov 25 09:05:35 compute-0 neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78[398826]: [NOTICE]   (398830) : Loading success.
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.815 253542 INFO nova.compute.manager [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Took 7.65 seconds to build instance.
Nov 25 09:05:35 compute-0 nova_compute[253538]: 2025-11-25 09:05:35.836 253542 DEBUG oslo_concurrency.lockutils [None req-2f2e9b20-8e15-45de-84f5-40efdcfc8a3b 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 180 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.6 MiB/s wr, 60 op/s
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.186 253542 DEBUG nova.compute.manager [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.186 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.187 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.187 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.188 253542 DEBUG nova.compute.manager [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] No event matching network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 in dict_keys([('network-vif-plugged', 'e64e0c93-9ff8-4b26-9a7e-1bae8b024966')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.189 253542 WARNING nova.compute.manager [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received unexpected event network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 for instance with vm_state building and task_state spawning.
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.189 253542 DEBUG nova.compute.manager [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.190 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.190 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.191 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.191 253542 DEBUG nova.compute.manager [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Processing event network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.192 253542 DEBUG nova.compute.manager [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.192 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.193 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.194 253542 DEBUG oslo_concurrency.lockutils [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.194 253542 DEBUG nova.compute.manager [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] No waiting events found dispatching network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.195 253542 WARNING nova.compute.manager [req-73d03d81-236a-48ad-91ff-35a209f82fed req-0f8a45d7-f6ee-43fc-8f28-ee24f49bfa5d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received unexpected event network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 for instance with vm_state building and task_state spawning.
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.197 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.204 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.205 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061536.20414, 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.205 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] VM Resumed (Lifecycle Event)
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.209 253542 INFO nova.virt.libvirt.driver [-] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Instance spawned successfully.
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.209 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.223 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.226 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 in datapath dbe7469c-9d57-4418-b63b-ede368786895 unbound from our chassis
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.229 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.230 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dbe7469c-9d57-4418-b63b-ede368786895
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.231 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.232 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.232 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.233 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.233 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.233 253542 DEBUG nova.virt.libvirt.driver [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.243 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e10af586-fe1b-468b-a077-a66a7c876a55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.244 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdbe7469c-91 in ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.248 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdbe7469c-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.248 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[63404250-426e-4d2f-a478-fb0121d42681]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.250 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d7169a4a-f970-41b1-88fa-55dac9a6ef7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.264 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.268 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[2e17e037-3056-4a95-af14-9e1e4d8166d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.283 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bd597c70-7a1b-484f-aa4d-7660a18b4dc2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.293 253542 INFO nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Took 11.08 seconds to spawn the instance on the hypervisor.
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.294 253542 DEBUG nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.316 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f77843dc-bc9b-486e-bb6b-9decdeb2e8b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:36 compute-0 NetworkManager[48915]: <info>  [1764061536.3235] manager: (tapdbe7469c-90): new Veth device (/org/freedesktop/NetworkManager/Devices/590)
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.322 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[37d07e8c-2c34-44ef-8f6e-698c1b1ecac1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.352 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[38ef0167-f6a9-48f7-b774-104cf4574fa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.355 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[68b7c474-ace3-41f2-9f21-8c3d7f2c1af8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.362 253542 INFO nova.compute.manager [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Took 12.13 seconds to build instance.
Nov 25 09:05:36 compute-0 NetworkManager[48915]: <info>  [1764061536.3786] device (tapdbe7469c-90): carrier: link connected
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.385 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea75ec1-172d-4fb0-b404-05b67cb1ebaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.396 253542 DEBUG oslo_concurrency.lockutils [None req-b7a1b552-6c58-4f68-93a5-293ff97ee175 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.405 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0a498b58-e38e-4f60-90ea-b5e2c6bc249e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdbe7469c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:82:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 415], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689097, 'reachable_time': 40708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398851, 'error': None, 'target': 'ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.422 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2d995d0d-d041-4ec8-9a7a-660c76d55e11]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:8242'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689097, 'tstamp': 689097}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398852, 'error': None, 'target': 'ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:36 compute-0 ceph-mon[75015]: pgmap v2544: 321 pgs: 321 active+clean; 180 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.6 MiB/s wr, 60 op/s
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.440 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3134650d-890c-42bd-b9a8-20e1f6080c86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdbe7469c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:82:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 415], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689097, 'reachable_time': 40708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398853, 'error': None, 'target': 'ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.475 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf267bf-327d-45e2-a0a8-fb1cc8d68e0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.552 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a0cec7-66ba-4ae6-b5d4-f6fd1174d145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.553 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdbe7469c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.553 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.554 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdbe7469c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:36 compute-0 NetworkManager[48915]: <info>  [1764061536.5566] manager: (tapdbe7469c-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/591)
Nov 25 09:05:36 compute-0 kernel: tapdbe7469c-90: entered promiscuous mode
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.560 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdbe7469c-90, col_values=(('external_ids', {'iface-id': '4e603e97-58e9-4264-9b31-7189cd08be5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:05:36 compute-0 ovn_controller[152859]: 2025-11-25T09:05:36Z|01433|binding|INFO|Releasing lport 4e603e97-58e9-4264-9b31-7189cd08be5d from this chassis (sb_readonly=0)
Nov 25 09:05:36 compute-0 nova_compute[253538]: 2025-11-25 09:05:36.573 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.576 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dbe7469c-9d57-4418-b63b-ede368786895.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dbe7469c-9d57-4418-b63b-ede368786895.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.577 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4032dcb4-9306-4e81-abc4-d70f372c8171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.578 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-dbe7469c-9d57-4418-b63b-ede368786895
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/dbe7469c-9d57-4418-b63b-ede368786895.pid.haproxy
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID dbe7469c-9d57-4418-b63b-ede368786895
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:05:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:36.578 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895', 'env', 'PROCESS_TAG=haproxy-dbe7469c-9d57-4418-b63b-ede368786895', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dbe7469c-9d57-4418-b63b-ede368786895.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:05:36 compute-0 podman[398886]: 2025-11-25 09:05:36.99040339 +0000 UTC m=+0.063678502 container create e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 09:05:37 compute-0 systemd[1]: Started libpod-conmon-e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7.scope.
Nov 25 09:05:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/183e12d28189ae10c66d159b8f25f1d139631c6af13b1ca004806ba514918c2a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:05:37 compute-0 podman[398886]: 2025-11-25 09:05:36.962277476 +0000 UTC m=+0.035552638 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:05:37 compute-0 podman[398886]: 2025-11-25 09:05:37.064458773 +0000 UTC m=+0.137733885 container init e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 09:05:37 compute-0 podman[398886]: 2025-11-25 09:05:37.074418914 +0000 UTC m=+0.147694026 container start e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 09:05:37 compute-0 neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895[398901]: [NOTICE]   (398905) : New worker (398907) forked
Nov 25 09:05:37 compute-0 neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895[398901]: [NOTICE]   (398905) : Loading success.
Nov 25 09:05:37 compute-0 nova_compute[253538]: 2025-11-25 09:05:37.612 253542 DEBUG nova.compute.manager [req-20a4dc2f-bbfc-4773-abd2-259e19fcf636 req-4471d4e7-c9a2-4b01-8b69-269388cd0bf2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received event network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:05:37 compute-0 nova_compute[253538]: 2025-11-25 09:05:37.613 253542 DEBUG oslo_concurrency.lockutils [req-20a4dc2f-bbfc-4773-abd2-259e19fcf636 req-4471d4e7-c9a2-4b01-8b69-269388cd0bf2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:37 compute-0 nova_compute[253538]: 2025-11-25 09:05:37.614 253542 DEBUG oslo_concurrency.lockutils [req-20a4dc2f-bbfc-4773-abd2-259e19fcf636 req-4471d4e7-c9a2-4b01-8b69-269388cd0bf2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:37 compute-0 nova_compute[253538]: 2025-11-25 09:05:37.614 253542 DEBUG oslo_concurrency.lockutils [req-20a4dc2f-bbfc-4773-abd2-259e19fcf636 req-4471d4e7-c9a2-4b01-8b69-269388cd0bf2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:37 compute-0 nova_compute[253538]: 2025-11-25 09:05:37.614 253542 DEBUG nova.compute.manager [req-20a4dc2f-bbfc-4773-abd2-259e19fcf636 req-4471d4e7-c9a2-4b01-8b69-269388cd0bf2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] No waiting events found dispatching network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:05:37 compute-0 nova_compute[253538]: 2025-11-25 09:05:37.615 253542 WARNING nova.compute.manager [req-20a4dc2f-bbfc-4773-abd2-259e19fcf636 req-4471d4e7-c9a2-4b01-8b69-269388cd0bf2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received unexpected event network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 for instance with vm_state active and task_state None.
Nov 25 09:05:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:05:38 compute-0 nova_compute[253538]: 2025-11-25 09:05:38.126 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 825 KiB/s rd, 3.6 MiB/s wr, 95 op/s
Nov 25 09:05:38 compute-0 nova_compute[253538]: 2025-11-25 09:05:38.743 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:38 compute-0 NetworkManager[48915]: <info>  [1764061538.7445] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Nov 25 09:05:38 compute-0 NetworkManager[48915]: <info>  [1764061538.7457] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Nov 25 09:05:38 compute-0 nova_compute[253538]: 2025-11-25 09:05:38.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:38 compute-0 ovn_controller[152859]: 2025-11-25T09:05:38Z|01434|binding|INFO|Releasing lport 4e603e97-58e9-4264-9b31-7189cd08be5d from this chassis (sb_readonly=0)
Nov 25 09:05:38 compute-0 ovn_controller[152859]: 2025-11-25T09:05:38Z|01435|binding|INFO|Releasing lport 8fae56b6-9884-44ea-b3b3-2b19412193c5 from this chassis (sb_readonly=0)
Nov 25 09:05:38 compute-0 ovn_controller[152859]: 2025-11-25T09:05:38Z|01436|binding|INFO|Releasing lport f8aacb3c-1998-431a-ac4d-66021d7412c1 from this chassis (sb_readonly=0)
Nov 25 09:05:38 compute-0 nova_compute[253538]: 2025-11-25 09:05:38.830 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:39 compute-0 nova_compute[253538]: 2025-11-25 09:05:39.049 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:39 compute-0 ceph-mon[75015]: pgmap v2545: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 825 KiB/s rd, 3.6 MiB/s wr, 95 op/s
Nov 25 09:05:39 compute-0 nova_compute[253538]: 2025-11-25 09:05:39.732 253542 DEBUG nova.compute.manager [req-586f3de0-57cb-4fe2-9bbc-535b597a1055 req-ecbfdb3c-53dc-4bde-9d5c-66e251ddcd1c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received event network-changed-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:05:39 compute-0 nova_compute[253538]: 2025-11-25 09:05:39.736 253542 DEBUG nova.compute.manager [req-586f3de0-57cb-4fe2-9bbc-535b597a1055 req-ecbfdb3c-53dc-4bde-9d5c-66e251ddcd1c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Refreshing instance network info cache due to event network-changed-44ba14ce-3677-4e53-b6ea-a21b98ba45d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:05:39 compute-0 nova_compute[253538]: 2025-11-25 09:05:39.737 253542 DEBUG oslo_concurrency.lockutils [req-586f3de0-57cb-4fe2-9bbc-535b597a1055 req-ecbfdb3c-53dc-4bde-9d5c-66e251ddcd1c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:05:39 compute-0 nova_compute[253538]: 2025-11-25 09:05:39.737 253542 DEBUG oslo_concurrency.lockutils [req-586f3de0-57cb-4fe2-9bbc-535b597a1055 req-ecbfdb3c-53dc-4bde-9d5c-66e251ddcd1c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:05:39 compute-0 nova_compute[253538]: 2025-11-25 09:05:39.737 253542 DEBUG nova.network.neutron [req-586f3de0-57cb-4fe2-9bbc-535b597a1055 req-ecbfdb3c-53dc-4bde-9d5c-66e251ddcd1c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Refreshing network info cache for port 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:05:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.8 MiB/s wr, 176 op/s
Nov 25 09:05:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:41.089 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:41.090 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:41.091 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:41 compute-0 nova_compute[253538]: 2025-11-25 09:05:41.166 253542 DEBUG nova.compute.manager [req-15969e47-e2a7-492c-b880-bcfa2bab1f3e req-64d02fe6-2d92-4074-8213-5401d3af7a75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-changed-bc72cf9d-bb8d-4968-879b-a65c0e151d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:05:41 compute-0 nova_compute[253538]: 2025-11-25 09:05:41.167 253542 DEBUG nova.compute.manager [req-15969e47-e2a7-492c-b880-bcfa2bab1f3e req-64d02fe6-2d92-4074-8213-5401d3af7a75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Refreshing instance network info cache due to event network-changed-bc72cf9d-bb8d-4968-879b-a65c0e151d35. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:05:41 compute-0 nova_compute[253538]: 2025-11-25 09:05:41.168 253542 DEBUG oslo_concurrency.lockutils [req-15969e47-e2a7-492c-b880-bcfa2bab1f3e req-64d02fe6-2d92-4074-8213-5401d3af7a75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:05:41 compute-0 nova_compute[253538]: 2025-11-25 09:05:41.168 253542 DEBUG oslo_concurrency.lockutils [req-15969e47-e2a7-492c-b880-bcfa2bab1f3e req-64d02fe6-2d92-4074-8213-5401d3af7a75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:05:41 compute-0 nova_compute[253538]: 2025-11-25 09:05:41.169 253542 DEBUG nova.network.neutron [req-15969e47-e2a7-492c-b880-bcfa2bab1f3e req-64d02fe6-2d92-4074-8213-5401d3af7a75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Refreshing network info cache for port bc72cf9d-bb8d-4968-879b-a65c0e151d35 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:05:41 compute-0 ceph-mon[75015]: pgmap v2546: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.8 MiB/s wr, 176 op/s
Nov 25 09:05:41 compute-0 nova_compute[253538]: 2025-11-25 09:05:41.521 253542 DEBUG nova.network.neutron [req-586f3de0-57cb-4fe2-9bbc-535b597a1055 req-ecbfdb3c-53dc-4bde-9d5c-66e251ddcd1c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Updated VIF entry in instance network info cache for port 44ba14ce-3677-4e53-b6ea-a21b98ba45d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:05:41 compute-0 nova_compute[253538]: 2025-11-25 09:05:41.522 253542 DEBUG nova.network.neutron [req-586f3de0-57cb-4fe2-9bbc-535b597a1055 req-ecbfdb3c-53dc-4bde-9d5c-66e251ddcd1c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Updating instance_info_cache with network_info: [{"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:05:41 compute-0 nova_compute[253538]: 2025-11-25 09:05:41.541 253542 DEBUG oslo_concurrency.lockutils [req-586f3de0-57cb-4fe2-9bbc-535b597a1055 req-ecbfdb3c-53dc-4bde-9d5c-66e251ddcd1c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-497131ea-c693-4c1d-b471-5b69d2294e3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:05:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.7 MiB/s wr, 173 op/s
Nov 25 09:05:42 compute-0 nova_compute[253538]: 2025-11-25 09:05:42.738 253542 DEBUG nova.network.neutron [req-15969e47-e2a7-492c-b880-bcfa2bab1f3e req-64d02fe6-2d92-4074-8213-5401d3af7a75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updated VIF entry in instance network info cache for port bc72cf9d-bb8d-4968-879b-a65c0e151d35. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:05:42 compute-0 nova_compute[253538]: 2025-11-25 09:05:42.739 253542 DEBUG nova.network.neutron [req-15969e47-e2a7-492c-b880-bcfa2bab1f3e req-64d02fe6-2d92-4074-8213-5401d3af7a75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [{"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:05:42 compute-0 nova_compute[253538]: 2025-11-25 09:05:42.759 253542 DEBUG oslo_concurrency.lockutils [req-15969e47-e2a7-492c-b880-bcfa2bab1f3e req-64d02fe6-2d92-4074-8213-5401d3af7a75 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:05:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:05:43 compute-0 nova_compute[253538]: 2025-11-25 09:05:43.130 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:43 compute-0 ceph-mon[75015]: pgmap v2547: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.7 MiB/s wr, 173 op/s
Nov 25 09:05:44 compute-0 nova_compute[253538]: 2025-11-25 09:05:44.051 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2548: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1012 KiB/s wr, 161 op/s
Nov 25 09:05:44 compute-0 nova_compute[253538]: 2025-11-25 09:05:44.910 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:05:45 compute-0 ceph-mon[75015]: pgmap v2548: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1012 KiB/s wr, 161 op/s
Nov 25 09:05:45 compute-0 nova_compute[253538]: 2025-11-25 09:05:45.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:05:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 201 KiB/s wr, 144 op/s
Nov 25 09:05:47 compute-0 ceph-mon[75015]: pgmap v2549: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 201 KiB/s wr, 144 op/s
Nov 25 09:05:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:05:48 compute-0 nova_compute[253538]: 2025-11-25 09:05:48.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 15 KiB/s wr, 142 op/s
Nov 25 09:05:48 compute-0 ovn_controller[152859]: 2025-11-25T09:05:48Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:84:10 10.100.0.14
Nov 25 09:05:48 compute-0 ovn_controller[152859]: 2025-11-25T09:05:48Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:84:10 10.100.0.14
Nov 25 09:05:48 compute-0 ovn_controller[152859]: 2025-11-25T09:05:48Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:a3:07 10.100.0.14
Nov 25 09:05:48 compute-0 ovn_controller[152859]: 2025-11-25T09:05:48Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:a3:07 10.100.0.14
Nov 25 09:05:49 compute-0 nova_compute[253538]: 2025-11-25 09:05:49.112 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:49 compute-0 ceph-mon[75015]: pgmap v2550: 321 pgs: 321 active+clean; 181 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 15 KiB/s wr, 142 op/s
Nov 25 09:05:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 210 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.3 MiB/s wr, 188 op/s
Nov 25 09:05:50 compute-0 nova_compute[253538]: 2025-11-25 09:05:50.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:05:50 compute-0 nova_compute[253538]: 2025-11-25 09:05:50.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:05:50 compute-0 nova_compute[253538]: 2025-11-25 09:05:50.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:05:51 compute-0 ceph-mon[75015]: pgmap v2551: 321 pgs: 321 active+clean; 210 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.3 MiB/s wr, 188 op/s
Nov 25 09:05:51 compute-0 nova_compute[253538]: 2025-11-25 09:05:51.360 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:05:51 compute-0 nova_compute[253538]: 2025-11-25 09:05:51.360 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:05:51 compute-0 nova_compute[253538]: 2025-11-25 09:05:51.361 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 09:05:51 compute-0 nova_compute[253538]: 2025-11-25 09:05:51.361 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:05:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2552: 321 pgs: 321 active+clean; 242 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.2 MiB/s wr, 141 op/s
Nov 25 09:05:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:05:53 compute-0 nova_compute[253538]: 2025-11-25 09:05:53.137 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:53 compute-0 ceph-mon[75015]: pgmap v2552: 321 pgs: 321 active+clean; 242 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.2 MiB/s wr, 141 op/s
Nov 25 09:05:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:05:53
Nov 25 09:05:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:05:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:05:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'images', 'volumes', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.log', 'backups']
Nov 25 09:05:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:05:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:05:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:05:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:05:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:05:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:05:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:05:53 compute-0 nova_compute[253538]: 2025-11-25 09:05:53.824 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [{"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:05:53 compute-0 nova_compute[253538]: 2025-11-25 09:05:53.850 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:05:53 compute-0 nova_compute[253538]: 2025-11-25 09:05:53.850 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 09:05:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:05:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:05:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:05:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:05:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:05:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:05:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:05:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:05:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:05:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:05:54 compute-0 nova_compute[253538]: 2025-11-25 09:05:54.114 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Nov 25 09:05:54 compute-0 nova_compute[253538]: 2025-11-25 09:05:54.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:05:54 compute-0 nova_compute[253538]: 2025-11-25 09:05:54.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:05:54 compute-0 nova_compute[253538]: 2025-11-25 09:05:54.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:05:54 compute-0 nova_compute[253538]: 2025-11-25 09:05:54.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:05:54 compute-0 nova_compute[253538]: 2025-11-25 09:05:54.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:05:55 compute-0 ceph-mon[75015]: pgmap v2553: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Nov 25 09:05:55 compute-0 sudo[398917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:05:55 compute-0 sudo[398917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:05:55 compute-0 sudo[398917]: pam_unix(sudo:session): session closed for user root
Nov 25 09:05:55 compute-0 sudo[398942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:05:55 compute-0 sudo[398942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:05:55 compute-0 sudo[398942]: pam_unix(sudo:session): session closed for user root
Nov 25 09:05:55 compute-0 sudo[398967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:05:55 compute-0 sudo[398967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:05:55 compute-0 sudo[398967]: pam_unix(sudo:session): session closed for user root
Nov 25 09:05:55 compute-0 sudo[398992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 09:05:55 compute-0 sudo[398992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:05:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:55.922 162847 DEBUG eventlet.wsgi.server [-] (162847) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 25 09:05:55 compute-0 podman[399062]: 2025-11-25 09:05:55.92426977 +0000 UTC m=+0.059329974 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 09:05:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:55.924 162847 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Nov 25 09:05:55 compute-0 ovn_metadata_agent[162734]: Accept: */*
Nov 25 09:05:55 compute-0 ovn_metadata_agent[162734]: Connection: close
Nov 25 09:05:55 compute-0 ovn_metadata_agent[162734]: Content-Type: text/plain
Nov 25 09:05:55 compute-0 ovn_metadata_agent[162734]: Host: 169.254.169.254
Nov 25 09:05:55 compute-0 ovn_metadata_agent[162734]: User-Agent: curl/7.84.0
Nov 25 09:05:55 compute-0 ovn_metadata_agent[162734]: X-Forwarded-For: 10.100.0.14
Nov 25 09:05:55 compute-0 ovn_metadata_agent[162734]: X-Ovn-Network-Id: dbe7469c-9d57-4418-b63b-ede368786895 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 25 09:05:55 compute-0 podman[399063]: 2025-11-25 09:05:55.949352731 +0000 UTC m=+0.082679188 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 09:05:56 compute-0 podman[399122]: 2025-11-25 09:05:56.028804812 +0000 UTC m=+0.060783204 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 09:05:56 compute-0 podman[399122]: 2025-11-25 09:05:56.119566429 +0000 UTC m=+0.151544801 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:05:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Nov 25 09:05:56 compute-0 nova_compute[253538]: 2025-11-25 09:05:56.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:05:56 compute-0 sudo[398992]: pam_unix(sudo:session): session closed for user root
Nov 25 09:05:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:05:56 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:05:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:05:56 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:05:56 compute-0 sudo[399281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:05:56 compute-0 sudo[399281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:05:56 compute-0 sudo[399281]: pam_unix(sudo:session): session closed for user root
Nov 25 09:05:56 compute-0 sudo[399306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:05:56 compute-0 sudo[399306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:05:56 compute-0 sudo[399306]: pam_unix(sudo:session): session closed for user root
Nov 25 09:05:56 compute-0 sudo[399331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:05:56 compute-0 sudo[399331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:05:56 compute-0 sudo[399331]: pam_unix(sudo:session): session closed for user root
Nov 25 09:05:56 compute-0 sudo[399356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:05:56 compute-0 sudo[399356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:05:57 compute-0 ceph-mon[75015]: pgmap v2554: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Nov 25 09:05:57 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:05:57 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:05:57 compute-0 haproxy-metadata-proxy-dbe7469c-9d57-4418-b63b-ede368786895[398907]: 10.100.0.14:60996 [25/Nov/2025:09:05:55.920] listener listener/metadata 0/0/0/1563/1563 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Nov 25 09:05:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:57.483 162847 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 25 09:05:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:57.484 162847 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.5605788
Nov 25 09:05:57 compute-0 sudo[399356]: pam_unix(sudo:session): session closed for user root
Nov 25 09:05:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:05:57 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:05:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:05:57 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:05:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:05:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:57.616 162847 DEBUG eventlet.wsgi.server [-] (162847) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 25 09:05:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:57.617 162847 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Nov 25 09:05:57 compute-0 ovn_metadata_agent[162734]: Accept: */*
Nov 25 09:05:57 compute-0 ovn_metadata_agent[162734]: Connection: close
Nov 25 09:05:57 compute-0 ovn_metadata_agent[162734]: Content-Length: 100
Nov 25 09:05:57 compute-0 ovn_metadata_agent[162734]: Content-Type: application/x-www-form-urlencoded
Nov 25 09:05:57 compute-0 ovn_metadata_agent[162734]: Host: 169.254.169.254
Nov 25 09:05:57 compute-0 ovn_metadata_agent[162734]: User-Agent: curl/7.84.0
Nov 25 09:05:57 compute-0 ovn_metadata_agent[162734]: X-Forwarded-For: 10.100.0.14
Nov 25 09:05:57 compute-0 ovn_metadata_agent[162734]: X-Ovn-Network-Id: dbe7469c-9d57-4418-b63b-ede368786895
Nov 25 09:05:57 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:05:57 compute-0 ovn_metadata_agent[162734]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 25 09:05:57 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:05:57 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev cdd73f30-5f36-43f1-8f23-50a4f11b2be0 does not exist
Nov 25 09:05:57 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 4e12e532-edcd-4e58-a1f0-d79dade0052c does not exist
Nov 25 09:05:57 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d6a61e78-fc78-40cc-8512-b6b98051d094 does not exist
Nov 25 09:05:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:05:57 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:05:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:05:57 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:05:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:05:57 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:05:57 compute-0 sudo[399413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:05:57 compute-0 sudo[399413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:05:57 compute-0 sudo[399413]: pam_unix(sudo:session): session closed for user root
Nov 25 09:05:57 compute-0 sudo[399438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:05:57 compute-0 sudo[399438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:05:57 compute-0 sudo[399438]: pam_unix(sudo:session): session closed for user root
Nov 25 09:05:57 compute-0 sudo[399463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:05:57 compute-0 sudo[399463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:05:57 compute-0 sudo[399463]: pam_unix(sudo:session): session closed for user root
Nov 25 09:05:57 compute-0 sudo[399488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:05:57 compute-0 sudo[399488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:05:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:57.897 162847 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 25 09:05:57 compute-0 haproxy-metadata-proxy-dbe7469c-9d57-4418-b63b-ede368786895[398907]: 10.100.0.14:32772 [25/Nov/2025:09:05:57.615] listener listener/metadata 0/0/0/282/282 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Nov 25 09:05:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:57.898 162847 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2805822
Nov 25 09:05:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:05:58 compute-0 nova_compute[253538]: 2025-11-25 09:05:58.140 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Nov 25 09:05:58 compute-0 podman[399551]: 2025-11-25 09:05:58.234024408 +0000 UTC m=+0.060561677 container create 65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:05:58 compute-0 systemd[1]: Started libpod-conmon-65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8.scope.
Nov 25 09:05:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:05:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:05:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:05:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:05:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:05:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:05:58 compute-0 podman[399551]: 2025-11-25 09:05:58.206050618 +0000 UTC m=+0.032587967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:05:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:05:58 compute-0 podman[399551]: 2025-11-25 09:05:58.329757541 +0000 UTC m=+0.156294850 container init 65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:05:58 compute-0 podman[399551]: 2025-11-25 09:05:58.338220071 +0000 UTC m=+0.164757350 container start 65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:05:58 compute-0 podman[399551]: 2025-11-25 09:05:58.341765897 +0000 UTC m=+0.168303166 container attach 65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:05:58 compute-0 laughing_lehmann[399567]: 167 167
Nov 25 09:05:58 compute-0 systemd[1]: libpod-65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8.scope: Deactivated successfully.
Nov 25 09:05:58 compute-0 podman[399551]: 2025-11-25 09:05:58.345368915 +0000 UTC m=+0.171906194 container died 65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 09:05:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-2414d02f59ed5de810d19ecaef98cda7d3248155515dd23a2aa4e37ae18907b3-merged.mount: Deactivated successfully.
Nov 25 09:05:58 compute-0 podman[399551]: 2025-11-25 09:05:58.389824354 +0000 UTC m=+0.216361653 container remove 65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 09:05:58 compute-0 systemd[1]: libpod-conmon-65ae9bd48ca180dad68c6e11d10d3e445c6a52b16be5c45509a39aa59b6d49b8.scope: Deactivated successfully.
Nov 25 09:05:58 compute-0 podman[399592]: 2025-11-25 09:05:58.603761879 +0000 UTC m=+0.049587479 container create 7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lederberg, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 09:05:58 compute-0 systemd[1]: Started libpod-conmon-7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57.scope.
Nov 25 09:05:58 compute-0 podman[399592]: 2025-11-25 09:05:58.57951044 +0000 UTC m=+0.025336090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:05:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e87a2078062c625d8d9ff3fcd6efd32b1867605ec72e73b869c67e1bd746814f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e87a2078062c625d8d9ff3fcd6efd32b1867605ec72e73b869c67e1bd746814f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e87a2078062c625d8d9ff3fcd6efd32b1867605ec72e73b869c67e1bd746814f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e87a2078062c625d8d9ff3fcd6efd32b1867605ec72e73b869c67e1bd746814f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e87a2078062c625d8d9ff3fcd6efd32b1867605ec72e73b869c67e1bd746814f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:05:58 compute-0 podman[399592]: 2025-11-25 09:05:58.698937617 +0000 UTC m=+0.144763237 container init 7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lederberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:05:58 compute-0 podman[399592]: 2025-11-25 09:05:58.708511377 +0000 UTC m=+0.154336977 container start 7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lederberg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:05:58 compute-0 podman[399592]: 2025-11-25 09:05:58.71229343 +0000 UTC m=+0.158119160 container attach 7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lederberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:05:59 compute-0 nova_compute[253538]: 2025-11-25 09:05:59.117 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:59 compute-0 ceph-mon[75015]: pgmap v2555: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Nov 25 09:05:59 compute-0 nova_compute[253538]: 2025-11-25 09:05:59.753 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "497131ea-c693-4c1d-b471-5b69d2294e3a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:59 compute-0 nova_compute[253538]: 2025-11-25 09:05:59.754 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:59 compute-0 nova_compute[253538]: 2025-11-25 09:05:59.755 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:05:59 compute-0 nova_compute[253538]: 2025-11-25 09:05:59.756 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:05:59 compute-0 nova_compute[253538]: 2025-11-25 09:05:59.756 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:05:59 compute-0 nova_compute[253538]: 2025-11-25 09:05:59.758 253542 INFO nova.compute.manager [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Terminating instance
Nov 25 09:05:59 compute-0 nova_compute[253538]: 2025-11-25 09:05:59.760 253542 DEBUG nova.compute.manager [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:05:59 compute-0 kernel: tap44ba14ce-36 (unregistering): left promiscuous mode
Nov 25 09:05:59 compute-0 NetworkManager[48915]: <info>  [1764061559.8150] device (tap44ba14ce-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:05:59 compute-0 nova_compute[253538]: 2025-11-25 09:05:59.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:59 compute-0 ovn_controller[152859]: 2025-11-25T09:05:59Z|01437|binding|INFO|Releasing lport 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 from this chassis (sb_readonly=0)
Nov 25 09:05:59 compute-0 ovn_controller[152859]: 2025-11-25T09:05:59Z|01438|binding|INFO|Setting lport 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 down in Southbound
Nov 25 09:05:59 compute-0 ovn_controller[152859]: 2025-11-25T09:05:59Z|01439|binding|INFO|Removing iface tap44ba14ce-36 ovn-installed in OVS
Nov 25 09:05:59 compute-0 nova_compute[253538]: 2025-11-25 09:05:59.833 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:59 compute-0 xenodochial_lederberg[399608]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:05:59 compute-0 xenodochial_lederberg[399608]: --> relative data size: 1.0
Nov 25 09:05:59 compute-0 xenodochial_lederberg[399608]: --> All data devices are unavailable
Nov 25 09:05:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:59.846 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:84:10 10.100.0.14'], port_security=['fa:16:3e:70:84:10 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '497131ea-c693-4c1d-b471-5b69d2294e3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dbe7469c-9d57-4418-b63b-ede368786895', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3908211615c4cbaae61d6e5833ca908', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96995e66-0af1-4c06-becd-28a8c446152a ba12b146-5dc4-4552-8b04-abe689899999', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15de48bd-9bbf-4354-b481-d22133abf514, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=44ba14ce-3677-4e53-b6ea-a21b98ba45d6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:05:59 compute-0 nova_compute[253538]: 2025-11-25 09:05:59.849 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:05:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:59.850 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 44ba14ce-3677-4e53-b6ea-a21b98ba45d6 in datapath dbe7469c-9d57-4418-b63b-ede368786895 unbound from our chassis
Nov 25 09:05:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:59.853 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dbe7469c-9d57-4418-b63b-ede368786895, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:05:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:59.854 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6c2b631f-7874-4198-aca0-af81c22e9bc2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:05:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:05:59.855 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895 namespace which is not needed anymore
Nov 25 09:05:59 compute-0 systemd[1]: libpod-7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57.scope: Deactivated successfully.
Nov 25 09:05:59 compute-0 systemd[1]: libpod-7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57.scope: Consumed 1.093s CPU time.
Nov 25 09:05:59 compute-0 podman[399592]: 2025-11-25 09:05:59.878226634 +0000 UTC m=+1.324052244 container died 7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lederberg, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 25 09:05:59 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Nov 25 09:05:59 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d0000008a.scope: Consumed 13.914s CPU time.
Nov 25 09:05:59 compute-0 systemd-machined[215790]: Machine qemu-168-instance-0000008a terminated.
Nov 25 09:05:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e87a2078062c625d8d9ff3fcd6efd32b1867605ec72e73b869c67e1bd746814f-merged.mount: Deactivated successfully.
Nov 25 09:05:59 compute-0 podman[399592]: 2025-11-25 09:05:59.937198358 +0000 UTC m=+1.383023958 container remove 7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:05:59 compute-0 systemd[1]: libpod-conmon-7035302d6241346d49702ef8a6d41c230fd1ad2f3bcd911f249af1743ca64d57.scope: Deactivated successfully.
Nov 25 09:05:59 compute-0 sudo[399488]: pam_unix(sudo:session): session closed for user root
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.003 253542 INFO nova.virt.libvirt.driver [-] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Instance destroyed successfully.
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.004 253542 DEBUG nova.objects.instance [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lazy-loading 'resources' on Instance uuid 497131ea-c693-4c1d-b471-5b69d2294e3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:06:00 compute-0 neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895[398901]: [NOTICE]   (398905) : haproxy version is 2.8.14-c23fe91
Nov 25 09:06:00 compute-0 neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895[398901]: [NOTICE]   (398905) : path to executable is /usr/sbin/haproxy
Nov 25 09:06:00 compute-0 neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895[398901]: [WARNING]  (398905) : Exiting Master process...
Nov 25 09:06:00 compute-0 neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895[398901]: [WARNING]  (398905) : Exiting Master process...
Nov 25 09:06:00 compute-0 neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895[398901]: [ALERT]    (398905) : Current worker (398907) exited with code 143 (Terminated)
Nov 25 09:06:00 compute-0 neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895[398901]: [WARNING]  (398905) : All workers exited. Exiting... (0)
Nov 25 09:06:00 compute-0 systemd[1]: libpod-e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7.scope: Deactivated successfully.
Nov 25 09:06:00 compute-0 conmon[398901]: conmon e894baea833b52fb0996 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7.scope/container/memory.events
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.018 253542 DEBUG nova.virt.libvirt.vif [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:05:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-829104372',display_name='tempest-TestServerBasicOps-server-829104372',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-829104372',id=138,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzwD53kQ8BpPBb54UPZdiuwcAps8iqsBmsdvuGpmBwC+Q4SksGNyI7vnMrtWDCi5xUrajEjXki8ZVS3NyMr/F7GJW+4JitS6beGfKpA2babih/6mXQzAB6PKdgZETbkFw==',key_name='tempest-TestServerBasicOps-1951077282',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:05:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3908211615c4cbaae61d6e5833ca908',ramdisk_id='',reservation_id='r-r1jf1pgp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1625299484',owner_user_name='tempest-TestServerBasicOps-1625299484-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:05:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='637a807a37ce403a8612d303b1acbb3b',uuid=497131ea-c693-4c1d-b471-5b69d2294e3a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": 
"fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.020 253542 DEBUG nova.network.os_vif_util [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Converting VIF {"id": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "address": "fa:16:3e:70:84:10", "network": {"id": "dbe7469c-9d57-4418-b63b-ede368786895", "bridge": "br-int", "label": "tempest-TestServerBasicOps-32608326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3908211615c4cbaae61d6e5833ca908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ba14ce-36", "ovs_interfaceid": "44ba14ce-3677-4e53-b6ea-a21b98ba45d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:06:00 compute-0 podman[399673]: 2025-11-25 09:06:00.021068818 +0000 UTC m=+0.058812410 container died e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.021 253542 DEBUG nova.network.os_vif_util [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:84:10,bridge_name='br-int',has_traffic_filtering=True,id=44ba14ce-3677-4e53-b6ea-a21b98ba45d6,network=Network(dbe7469c-9d57-4418-b63b-ede368786895),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ba14ce-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.021 253542 DEBUG os_vif [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:84:10,bridge_name='br-int',has_traffic_filtering=True,id=44ba14ce-3677-4e53-b6ea-a21b98ba45d6,network=Network(dbe7469c-9d57-4418-b63b-ede368786895),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ba14ce-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.023 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.023 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44ba14ce-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.025 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.026 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.028 253542 INFO os_vif [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:84:10,bridge_name='br-int',has_traffic_filtering=True,id=44ba14ce-3677-4e53-b6ea-a21b98ba45d6,network=Network(dbe7469c-9d57-4418-b63b-ede368786895),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ba14ce-36')
Nov 25 09:06:00 compute-0 sudo[399680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:06:00 compute-0 sudo[399680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:06:00 compute-0 sudo[399680]: pam_unix(sudo:session): session closed for user root
Nov 25 09:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7-userdata-shm.mount: Deactivated successfully.
Nov 25 09:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-183e12d28189ae10c66d159b8f25f1d139631c6af13b1ca004806ba514918c2a-merged.mount: Deactivated successfully.
Nov 25 09:06:00 compute-0 podman[399673]: 2025-11-25 09:06:00.070293855 +0000 UTC m=+0.108037407 container cleanup e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:06:00 compute-0 systemd[1]: libpod-conmon-e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7.scope: Deactivated successfully.
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.102 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.103 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.108 253542 DEBUG nova.compute.manager [req-a2c394fb-7a5e-46fa-9dc5-4b17f0d10a97 req-0c2f5a71-f3fd-4a08-92c8-c7f07c6cb14c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received event network-vif-unplugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.109 253542 DEBUG oslo_concurrency.lockutils [req-a2c394fb-7a5e-46fa-9dc5-4b17f0d10a97 req-0c2f5a71-f3fd-4a08-92c8-c7f07c6cb14c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:00 compute-0 sudo[399749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.109 253542 DEBUG oslo_concurrency.lockutils [req-a2c394fb-7a5e-46fa-9dc5-4b17f0d10a97 req-0c2f5a71-f3fd-4a08-92c8-c7f07c6cb14c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.109 253542 DEBUG oslo_concurrency.lockutils [req-a2c394fb-7a5e-46fa-9dc5-4b17f0d10a97 req-0c2f5a71-f3fd-4a08-92c8-c7f07c6cb14c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.109 253542 DEBUG nova.compute.manager [req-a2c394fb-7a5e-46fa-9dc5-4b17f0d10a97 req-0c2f5a71-f3fd-4a08-92c8-c7f07c6cb14c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] No waiting events found dispatching network-vif-unplugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.110 253542 DEBUG nova.compute.manager [req-a2c394fb-7a5e-46fa-9dc5-4b17f0d10a97 req-0c2f5a71-f3fd-4a08-92c8-c7f07c6cb14c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received event network-vif-unplugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:06:00 compute-0 sudo[399749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:06:00 compute-0 sudo[399749]: pam_unix(sudo:session): session closed for user root
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.123 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:06:00 compute-0 podman[399778]: 2025-11-25 09:06:00.147364121 +0000 UTC m=+0.049796505 container remove e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:06:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 651 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Nov 25 09:06:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.156 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[03de744c-91ab-4b42-88a7-c0c3d7041809]: (4, ('Tue Nov 25 09:05:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895 (e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7)\ne894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7\nTue Nov 25 09:06:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895 (e894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7)\ne894baea833b52fb0996e9308f3864faa6b3a5c411e8ca9a13b8c10bcdb32af7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.160 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2da27ee5-698d-42ac-beb5-e3e41aacfd59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.161 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdbe7469c-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.163 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:00 compute-0 kernel: tapdbe7469c-90: left promiscuous mode
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.177 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:00 compute-0 sudo[399793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:06:00 compute-0 sudo[399793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:06:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.183 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3733cd3f-a3d3-4dd0-b6c4-ad87c19fb540]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:00 compute-0 sudo[399793]: pam_unix(sudo:session): session closed for user root
Nov 25 09:06:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.197 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bc4c0c45-4ecb-4a75-b728-a3b22df33439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.198 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5803f4-b375-45ab-8448-0d8aeb72f02d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.207 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.208 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.217 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.217 253542 INFO nova.compute.claims [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:06:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.221 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[76fbf831-f439-4924-8b9d-f234517900cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689090, 'reachable_time': 28151, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399822, 'error': None, 'target': 'ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.224 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dbe7469c-9d57-4418-b63b-ede368786895 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:06:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:00.224 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[1db601d8-44e5-4c1a-84db-2d52faf7c631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:00 compute-0 systemd[1]: run-netns-ovnmeta\x2ddbe7469c\x2d9d57\x2d4418\x2db63b\x2dede368786895.mount: Deactivated successfully.
Nov 25 09:06:00 compute-0 sudo[399819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:06:00 compute-0 sudo[399819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.440 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.498 253542 INFO nova.virt.libvirt.driver [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Deleting instance files /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a_del
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.499 253542 INFO nova.virt.libvirt.driver [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Deletion of /var/lib/nova/instances/497131ea-c693-4c1d-b471-5b69d2294e3a_del complete
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.567 253542 INFO nova.compute.manager [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Took 0.81 seconds to destroy the instance on the hypervisor.
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.567 253542 DEBUG oslo.service.loopingcall [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.567 253542 DEBUG nova.compute.manager [-] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.568 253542 DEBUG nova.network.neutron [-] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:00 compute-0 podman[399907]: 2025-11-25 09:06:00.652129282 +0000 UTC m=+0.045644741 container create b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:06:00 compute-0 systemd[1]: Started libpod-conmon-b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7.scope.
Nov 25 09:06:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:06:00 compute-0 podman[399907]: 2025-11-25 09:06:00.632788336 +0000 UTC m=+0.026303825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:06:00 compute-0 podman[399907]: 2025-11-25 09:06:00.73445415 +0000 UTC m=+0.127969669 container init b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 09:06:00 compute-0 podman[399907]: 2025-11-25 09:06:00.743722772 +0000 UTC m=+0.137238231 container start b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:06:00 compute-0 podman[399907]: 2025-11-25 09:06:00.747380891 +0000 UTC m=+0.140896460 container attach b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 09:06:00 compute-0 sweet_moore[399924]: 167 167
Nov 25 09:06:00 compute-0 systemd[1]: libpod-b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7.scope: Deactivated successfully.
Nov 25 09:06:00 compute-0 conmon[399924]: conmon b8cec6758febf38b3d48 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7.scope/container/memory.events
Nov 25 09:06:00 compute-0 podman[399907]: 2025-11-25 09:06:00.752202083 +0000 UTC m=+0.145717552 container died b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:06:00 compute-0 podman[399907]: 2025-11-25 09:06:00.788618443 +0000 UTC m=+0.182133902 container remove b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:06:00 compute-0 systemd[1]: libpod-conmon-b8cec6758febf38b3d4890b38d8bed52d525664516745d9f40bdd764605f5dd7.scope: Deactivated successfully.
Nov 25 09:06:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:06:00 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2524204620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9da89a1aa05cd1fd4333b5666d3ef0de34928fd08ef8022c58c6c195f7b1d10-merged.mount: Deactivated successfully.
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.918 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.927 253542 DEBUG nova.compute.provider_tree [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.940 253542 DEBUG nova.scheduler.client.report [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.960 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.961 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.963 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.963 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.963 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:06:00 compute-0 nova_compute[253538]: 2025-11-25 09:06:00.963 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:01 compute-0 podman[399950]: 2025-11-25 09:06:01.017133084 +0000 UTC m=+0.057412191 container create 8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.045 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.046 253542 DEBUG nova.network.neutron [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:06:01 compute-0 systemd[1]: Started libpod-conmon-8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146.scope.
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.070 253542 INFO nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:06:01 compute-0 podman[399950]: 2025-11-25 09:06:00.995542858 +0000 UTC m=+0.035822055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:06:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/310305027b8220902a9605de02267739117c08b754481d5ba6f4f9c89e88921a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/310305027b8220902a9605de02267739117c08b754481d5ba6f4f9c89e88921a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/310305027b8220902a9605de02267739117c08b754481d5ba6f4f9c89e88921a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/310305027b8220902a9605de02267739117c08b754481d5ba6f4f9c89e88921a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:06:01 compute-0 podman[399950]: 2025-11-25 09:06:01.122728725 +0000 UTC m=+0.163007872 container init 8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:06:01 compute-0 podman[399950]: 2025-11-25 09:06:01.131962526 +0000 UTC m=+0.172241633 container start 8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.138 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:06:01 compute-0 podman[399950]: 2025-11-25 09:06:01.140035915 +0000 UTC m=+0.180315072 container attach 8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.240 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.241 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.242 253542 INFO nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Creating image(s)
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.267 253542 DEBUG nova.storage.rbd_utils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 935c4eb2-999f-40a4-8643-0479d293c149_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.298 253542 DEBUG nova.storage.rbd_utils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 935c4eb2-999f-40a4-8643-0479d293c149_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:06:01 compute-0 ceph-mon[75015]: pgmap v2556: 321 pgs: 321 active+clean; 246 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 651 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Nov 25 09:06:01 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2524204620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.323 253542 DEBUG nova.storage.rbd_utils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 935c4eb2-999f-40a4-8643-0479d293c149_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.327 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.415 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:06:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/415574734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.416 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.417 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.417 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.443 253542 DEBUG nova.storage.rbd_utils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 935c4eb2-999f-40a4-8643-0479d293c149_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.448 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 935c4eb2-999f-40a4-8643-0479d293c149_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.489 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.495 253542 DEBUG nova.policy [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.544 253542 DEBUG nova.network.neutron [-] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.567 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.567 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.579 253542 INFO nova.compute.manager [-] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Took 1.01 seconds to deallocate network for instance.
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.674 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.675 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.777 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 935c4eb2-999f-40a4-8643-0479d293c149_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.809 253542 DEBUG oslo_concurrency.processutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.875 253542 DEBUG nova.storage.rbd_utils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 935c4eb2-999f-40a4-8643-0479d293c149_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:06:01 compute-0 great_merkle[399969]: {
Nov 25 09:06:01 compute-0 great_merkle[399969]:     "0": [
Nov 25 09:06:01 compute-0 great_merkle[399969]:         {
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "devices": [
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "/dev/loop3"
Nov 25 09:06:01 compute-0 great_merkle[399969]:             ],
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "lv_name": "ceph_lv0",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "lv_size": "21470642176",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "name": "ceph_lv0",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "tags": {
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.cluster_name": "ceph",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.crush_device_class": "",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.encrypted": "0",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.osd_id": "0",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.type": "block",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.vdo": "0"
Nov 25 09:06:01 compute-0 great_merkle[399969]:             },
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "type": "block",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "vg_name": "ceph_vg0"
Nov 25 09:06:01 compute-0 great_merkle[399969]:         }
Nov 25 09:06:01 compute-0 great_merkle[399969]:     ],
Nov 25 09:06:01 compute-0 great_merkle[399969]:     "1": [
Nov 25 09:06:01 compute-0 great_merkle[399969]:         {
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "devices": [
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "/dev/loop4"
Nov 25 09:06:01 compute-0 great_merkle[399969]:             ],
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "lv_name": "ceph_lv1",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "lv_size": "21470642176",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "name": "ceph_lv1",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "tags": {
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.cluster_name": "ceph",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.crush_device_class": "",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.encrypted": "0",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.osd_id": "1",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.type": "block",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.vdo": "0"
Nov 25 09:06:01 compute-0 great_merkle[399969]:             },
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "type": "block",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "vg_name": "ceph_vg1"
Nov 25 09:06:01 compute-0 great_merkle[399969]:         }
Nov 25 09:06:01 compute-0 great_merkle[399969]:     ],
Nov 25 09:06:01 compute-0 great_merkle[399969]:     "2": [
Nov 25 09:06:01 compute-0 great_merkle[399969]:         {
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "devices": [
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "/dev/loop5"
Nov 25 09:06:01 compute-0 great_merkle[399969]:             ],
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "lv_name": "ceph_lv2",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "lv_size": "21470642176",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "name": "ceph_lv2",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "tags": {
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.cluster_name": "ceph",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.crush_device_class": "",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.encrypted": "0",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.osd_id": "2",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.type": "block",
Nov 25 09:06:01 compute-0 great_merkle[399969]:                 "ceph.vdo": "0"
Nov 25 09:06:01 compute-0 great_merkle[399969]:             },
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "type": "block",
Nov 25 09:06:01 compute-0 great_merkle[399969]:             "vg_name": "ceph_vg2"
Nov 25 09:06:01 compute-0 great_merkle[399969]:         }
Nov 25 09:06:01 compute-0 great_merkle[399969]:     ]
Nov 25 09:06:01 compute-0 great_merkle[399969]: }
Nov 25 09:06:01 compute-0 systemd[1]: libpod-8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146.scope: Deactivated successfully.
Nov 25 09:06:01 compute-0 conmon[399969]: conmon 8ce155a6e844ad0ebdc3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146.scope/container/memory.events
Nov 25 09:06:01 compute-0 podman[399950]: 2025-11-25 09:06:01.940456364 +0000 UTC m=+0.980735471 container died 8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 09:06:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-310305027b8220902a9605de02267739117c08b754481d5ba6f4f9c89e88921a-merged.mount: Deactivated successfully.
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.978 253542 DEBUG nova.objects.instance [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 935c4eb2-999f-40a4-8643-0479d293c149 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.989 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.989 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Ensure instance console log exists: /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.990 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.990 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:01 compute-0 nova_compute[253538]: 2025-11-25 09:06:01.990 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:01 compute-0 podman[399950]: 2025-11-25 09:06:01.993146137 +0000 UTC m=+1.033425234 container remove 8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 09:06:02 compute-0 systemd[1]: libpod-conmon-8ce155a6e844ad0ebdc3db99412bb2bf2f2c161ad8e99b1a2e24b35f991ba146.scope: Deactivated successfully.
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.029 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.031 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3392MB free_disk=59.897193908691406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.031 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:02 compute-0 sudo[399819]: pam_unix(sudo:session): session closed for user root
Nov 25 09:06:02 compute-0 sudo[400201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:06:02 compute-0 sudo[400201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:06:02 compute-0 sudo[400201]: pam_unix(sudo:session): session closed for user root
Nov 25 09:06:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 214 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 216 KiB/s rd, 1.9 MiB/s wr, 66 op/s
Nov 25 09:06:02 compute-0 sudo[400226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:06:02 compute-0 sudo[400226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:06:02 compute-0 sudo[400226]: pam_unix(sudo:session): session closed for user root
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.198 253542 DEBUG nova.compute.manager [req-5ad593b0-9155-4ee8-ba2f-1eaa629a5bc1 req-c76a241f-5b98-4376-85c6-05bd10d8194a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received event network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.199 253542 DEBUG oslo_concurrency.lockutils [req-5ad593b0-9155-4ee8-ba2f-1eaa629a5bc1 req-c76a241f-5b98-4376-85c6-05bd10d8194a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.200 253542 DEBUG oslo_concurrency.lockutils [req-5ad593b0-9155-4ee8-ba2f-1eaa629a5bc1 req-c76a241f-5b98-4376-85c6-05bd10d8194a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.200 253542 DEBUG oslo_concurrency.lockutils [req-5ad593b0-9155-4ee8-ba2f-1eaa629a5bc1 req-c76a241f-5b98-4376-85c6-05bd10d8194a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.200 253542 DEBUG nova.compute.manager [req-5ad593b0-9155-4ee8-ba2f-1eaa629a5bc1 req-c76a241f-5b98-4376-85c6-05bd10d8194a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] No waiting events found dispatching network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.200 253542 WARNING nova.compute.manager [req-5ad593b0-9155-4ee8-ba2f-1eaa629a5bc1 req-c76a241f-5b98-4376-85c6-05bd10d8194a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received unexpected event network-vif-plugged-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 for instance with vm_state deleted and task_state None.
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.201 253542 DEBUG nova.compute.manager [req-5ad593b0-9155-4ee8-ba2f-1eaa629a5bc1 req-c76a241f-5b98-4376-85c6-05bd10d8194a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Received event network-vif-deleted-44ba14ce-3677-4e53-b6ea-a21b98ba45d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:02 compute-0 sudo[400251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:06:02 compute-0 sudo[400251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:06:02 compute-0 sudo[400251]: pam_unix(sudo:session): session closed for user root
Nov 25 09:06:02 compute-0 sshd-session[399968]: Invalid user db2inst1 from 45.202.211.6 port 33266
Nov 25 09:06:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:06:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053078326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.278 253542 DEBUG oslo_concurrency.processutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.283 253542 DEBUG nova.compute.provider_tree [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:06:02 compute-0 sudo[400276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:06:02 compute-0 sudo[400276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.302 253542 DEBUG nova.scheduler.client.report [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:06:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/415574734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:06:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2053078326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.324 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.327 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.356 253542 INFO nova.scheduler.client.report [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Deleted allocations for instance 497131ea-c693-4c1d-b471-5b69d2294e3a
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.416 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.417 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 935c4eb2-999f-40a4-8643-0479d293c149 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.419 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.420 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.425 253542 DEBUG nova.network.neutron [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Successfully created port: bf69fe43-dd03-40a9-a38f-2ec005c27f58 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.441 253542 DEBUG oslo_concurrency.lockutils [None req-2f5d4e90-4405-495f-9aba-72e02aa05145 637a807a37ce403a8612d303b1acbb3b a3908211615c4cbaae61d6e5833ca908 - - default default] Lock "497131ea-c693-4c1d-b471-5b69d2294e3a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:02 compute-0 sshd-session[399968]: Received disconnect from 45.202.211.6 port 33266:11: Bye Bye [preauth]
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.479 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:02 compute-0 sshd-session[399968]: Disconnected from invalid user db2inst1 45.202.211.6 port 33266 [preauth]
Nov 25 09:06:02 compute-0 podman[400343]: 2025-11-25 09:06:02.620645995 +0000 UTC m=+0.049429975 container create 80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:06:02 compute-0 systemd[1]: Started libpod-conmon-80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2.scope.
Nov 25 09:06:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:06:02 compute-0 podman[400343]: 2025-11-25 09:06:02.604475945 +0000 UTC m=+0.033259955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:06:02 compute-0 podman[400343]: 2025-11-25 09:06:02.7282939 +0000 UTC m=+0.157077890 container init 80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:06:02 compute-0 podman[400343]: 2025-11-25 09:06:02.737263674 +0000 UTC m=+0.166047664 container start 80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:06:02 compute-0 laughing_ishizaka[400378]: 167 167
Nov 25 09:06:02 compute-0 podman[400343]: 2025-11-25 09:06:02.743215007 +0000 UTC m=+0.171999037 container attach 80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:06:02 compute-0 systemd[1]: libpod-80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2.scope: Deactivated successfully.
Nov 25 09:06:02 compute-0 podman[400343]: 2025-11-25 09:06:02.745740684 +0000 UTC m=+0.174524684 container died 80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 09:06:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b839f33407981b2d24abbbd96c883cd4be049992cbe0cced12dc03c73743ee13-merged.mount: Deactivated successfully.
Nov 25 09:06:02 compute-0 podman[400343]: 2025-11-25 09:06:02.788471107 +0000 UTC m=+0.217255087 container remove 80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:06:02 compute-0 systemd[1]: libpod-conmon-80f69981fcab8e00afa83eefc57a0df9430de3f752820e89c24e900e008680e2.scope: Deactivated successfully.
Nov 25 09:06:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:06:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3134778005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.904 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.913 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:06:02 compute-0 nova_compute[253538]: 2025-11-25 09:06:02.931 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:06:03 compute-0 podman[400405]: 2025-11-25 09:06:02.999993656 +0000 UTC m=+0.057175954 container create 108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:06:03 compute-0 systemd[1]: Started libpod-conmon-108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6.scope.
Nov 25 09:06:03 compute-0 nova_compute[253538]: 2025-11-25 09:06:03.053 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:06:03 compute-0 nova_compute[253538]: 2025-11-25 09:06:03.054 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:06:03 compute-0 podman[400405]: 2025-11-25 09:06:02.970897256 +0000 UTC m=+0.028079624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:06:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ad723ecdd14fd854bb3581ad1b1654bf7159e8ec03f73428e0c38f570016d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:06:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ad723ecdd14fd854bb3581ad1b1654bf7159e8ec03f73428e0c38f570016d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:06:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ad723ecdd14fd854bb3581ad1b1654bf7159e8ec03f73428e0c38f570016d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:06:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ad723ecdd14fd854bb3581ad1b1654bf7159e8ec03f73428e0c38f570016d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:06:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:06:03 compute-0 podman[400405]: 2025-11-25 09:06:03.077024661 +0000 UTC m=+0.134206949 container init 108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:06:03 compute-0 podman[400405]: 2025-11-25 09:06:03.084690818 +0000 UTC m=+0.141873096 container start 108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:06:03 compute-0 podman[400405]: 2025-11-25 09:06:03.088519973 +0000 UTC m=+0.145702281 container attach 108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 09:06:03 compute-0 nova_compute[253538]: 2025-11-25 09:06:03.139 253542 DEBUG nova.network.neutron [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Successfully created port: 20e31743-4fc4-43d2-ab28-5205c776f506 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:06:03 compute-0 ceph-mon[75015]: pgmap v2557: 321 pgs: 321 active+clean; 214 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 216 KiB/s rd, 1.9 MiB/s wr, 66 op/s
Nov 25 09:06:03 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3134778005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:06:04 compute-0 friendly_leakey[400422]: {
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:         "osd_id": 1,
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:         "type": "bluestore"
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:     },
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:         "osd_id": 2,
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:         "type": "bluestore"
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:     },
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:         "osd_id": 0,
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:         "type": "bluestore"
Nov 25 09:06:04 compute-0 friendly_leakey[400422]:     }
Nov 25 09:06:04 compute-0 friendly_leakey[400422]: }
Nov 25 09:06:04 compute-0 systemd[1]: libpod-108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6.scope: Deactivated successfully.
Nov 25 09:06:04 compute-0 systemd[1]: libpod-108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6.scope: Consumed 1.038s CPU time.
Nov 25 09:06:04 compute-0 podman[400405]: 2025-11-25 09:06:04.116794106 +0000 UTC m=+1.173976384 container died 108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:06:04 compute-0 nova_compute[253538]: 2025-11-25 09:06:04.119 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3ad723ecdd14fd854bb3581ad1b1654bf7159e8ec03f73428e0c38f570016d9-merged.mount: Deactivated successfully.
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 199 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 438 KiB/s wr, 38 op/s
Nov 25 09:06:04 compute-0 podman[400405]: 2025-11-25 09:06:04.167803402 +0000 UTC m=+1.224985690 container remove 108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:06:04 compute-0 systemd[1]: libpod-conmon-108535d395d88d6d34fb419f105c9fa2334a9beb5c1cda52fd155e4118dcd6a6.scope: Deactivated successfully.
Nov 25 09:06:04 compute-0 sudo[400276]: pam_unix(sudo:session): session closed for user root
Nov 25 09:06:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:06:04 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:06:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:06:04 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 7640ac54-c959-4d89-a139-f80406d1c08b does not exist
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev bedad813-7a0c-4d52-8961-898d14ac5226 does not exist
Nov 25 09:06:04 compute-0 sudo[400467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:06:04 compute-0 sudo[400467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:06:04 compute-0 sudo[400467]: pam_unix(sudo:session): session closed for user root
Nov 25 09:06:04 compute-0 sudo[400492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:06:04 compute-0 sudo[400492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:06:04 compute-0 sudo[400492]: pam_unix(sudo:session): session closed for user root
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009890262787328255 of space, bias 1.0, pg target 0.2967078836198477 quantized to 32 (current 32)
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:06:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:06:04 compute-0 nova_compute[253538]: 2025-11-25 09:06:04.855 253542 DEBUG nova.network.neutron [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Successfully updated port: bf69fe43-dd03-40a9-a38f-2ec005c27f58 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:06:04 compute-0 podman[400517]: 2025-11-25 09:06:04.860998217 +0000 UTC m=+0.112716116 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:06:04 compute-0 nova_compute[253538]: 2025-11-25 09:06:04.968 253542 DEBUG nova.compute.manager [req-35069aba-f630-43d6-b883-93bdd9fa5476 req-fe114c44-6b14-40a9-9f64-41212caed099 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-changed-bf69fe43-dd03-40a9-a38f-2ec005c27f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:04 compute-0 nova_compute[253538]: 2025-11-25 09:06:04.969 253542 DEBUG nova.compute.manager [req-35069aba-f630-43d6-b883-93bdd9fa5476 req-fe114c44-6b14-40a9-9f64-41212caed099 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Refreshing instance network info cache due to event network-changed-bf69fe43-dd03-40a9-a38f-2ec005c27f58. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:06:04 compute-0 nova_compute[253538]: 2025-11-25 09:06:04.969 253542 DEBUG oslo_concurrency.lockutils [req-35069aba-f630-43d6-b883-93bdd9fa5476 req-fe114c44-6b14-40a9-9f64-41212caed099 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:06:04 compute-0 nova_compute[253538]: 2025-11-25 09:06:04.970 253542 DEBUG oslo_concurrency.lockutils [req-35069aba-f630-43d6-b883-93bdd9fa5476 req-fe114c44-6b14-40a9-9f64-41212caed099 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:06:04 compute-0 nova_compute[253538]: 2025-11-25 09:06:04.970 253542 DEBUG nova.network.neutron [req-35069aba-f630-43d6-b883-93bdd9fa5476 req-fe114c44-6b14-40a9-9f64-41212caed099 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Refreshing network info cache for port bf69fe43-dd03-40a9-a38f-2ec005c27f58 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:06:05 compute-0 nova_compute[253538]: 2025-11-25 09:06:05.025 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:05 compute-0 nova_compute[253538]: 2025-11-25 09:06:05.119 253542 DEBUG nova.network.neutron [req-35069aba-f630-43d6-b883-93bdd9fa5476 req-fe114c44-6b14-40a9-9f64-41212caed099 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:06:05 compute-0 ceph-mon[75015]: pgmap v2558: 321 pgs: 321 active+clean; 199 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 438 KiB/s wr, 38 op/s
Nov 25 09:06:05 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:06:05 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:06:05 compute-0 nova_compute[253538]: 2025-11-25 09:06:05.389 253542 DEBUG nova.network.neutron [req-35069aba-f630-43d6-b883-93bdd9fa5476 req-fe114c44-6b14-40a9-9f64-41212caed099 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:06:05 compute-0 nova_compute[253538]: 2025-11-25 09:06:05.400 253542 DEBUG oslo_concurrency.lockutils [req-35069aba-f630-43d6-b883-93bdd9fa5476 req-fe114c44-6b14-40a9-9f64-41212caed099 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:06:05 compute-0 nova_compute[253538]: 2025-11-25 09:06:05.916 253542 DEBUG nova.network.neutron [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Successfully updated port: 20e31743-4fc4-43d2-ab28-5205c776f506 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:06:05 compute-0 nova_compute[253538]: 2025-11-25 09:06:05.940 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:06:05 compute-0 nova_compute[253538]: 2025-11-25 09:06:05.940 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:06:05 compute-0 nova_compute[253538]: 2025-11-25 09:06:05.941 253542 DEBUG nova.network.neutron [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:06:06 compute-0 nova_compute[253538]: 2025-11-25 09:06:06.109 253542 DEBUG nova.network.neutron [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:06:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 192 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.2 MiB/s wr, 45 op/s
Nov 25 09:06:06 compute-0 nova_compute[253538]: 2025-11-25 09:06:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:06:06 compute-0 nova_compute[253538]: 2025-11-25 09:06:06.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 09:06:07 compute-0 nova_compute[253538]: 2025-11-25 09:06:07.056 253542 DEBUG nova.compute.manager [req-42731087-d8b3-476f-a24a-a9e7a0781e68 req-fb377a87-bee2-4cab-b971-860acaa05590 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-changed-20e31743-4fc4-43d2-ab28-5205c776f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:07 compute-0 nova_compute[253538]: 2025-11-25 09:06:07.056 253542 DEBUG nova.compute.manager [req-42731087-d8b3-476f-a24a-a9e7a0781e68 req-fb377a87-bee2-4cab-b971-860acaa05590 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Refreshing instance network info cache due to event network-changed-20e31743-4fc4-43d2-ab28-5205c776f506. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:06:07 compute-0 nova_compute[253538]: 2025-11-25 09:06:07.057 253542 DEBUG oslo_concurrency.lockutils [req-42731087-d8b3-476f-a24a-a9e7a0781e68 req-fb377a87-bee2-4cab-b971-860acaa05590 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:06:07 compute-0 ceph-mon[75015]: pgmap v2559: 321 pgs: 321 active+clean; 192 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.2 MiB/s wr, 45 op/s
Nov 25 09:06:07 compute-0 ovn_controller[152859]: 2025-11-25T09:06:07Z|01440|binding|INFO|Releasing lport 8fae56b6-9884-44ea-b3b3-2b19412193c5 from this chassis (sb_readonly=0)
Nov 25 09:06:07 compute-0 ovn_controller[152859]: 2025-11-25T09:06:07Z|01441|binding|INFO|Releasing lport f8aacb3c-1998-431a-ac4d-66021d7412c1 from this chassis (sb_readonly=0)
Nov 25 09:06:07 compute-0 nova_compute[253538]: 2025-11-25 09:06:07.652 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:06:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.197 253542 DEBUG nova.network.neutron [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updating instance_info_cache with network_info: [{"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.218 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.219 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Instance network_info: |[{"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.219 253542 DEBUG oslo_concurrency.lockutils [req-42731087-d8b3-476f-a24a-a9e7a0781e68 req-fb377a87-bee2-4cab-b971-860acaa05590 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.219 253542 DEBUG nova.network.neutron [req-42731087-d8b3-476f-a24a-a9e7a0781e68 req-fb377a87-bee2-4cab-b971-860acaa05590 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Refreshing network info cache for port 20e31743-4fc4-43d2-ab28-5205c776f506 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.223 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Start _get_guest_xml network_info=[{"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.227 253542 WARNING nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.235 253542 DEBUG nova.virt.libvirt.host [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.236 253542 DEBUG nova.virt.libvirt.host [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.239 253542 DEBUG nova.virt.libvirt.host [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.239 253542 DEBUG nova.virt.libvirt.host [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.240 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.240 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.240 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.241 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.241 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.241 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.241 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.242 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.242 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.242 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.242 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.243 253542 DEBUG nova.virt.hardware [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.246 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:06:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1142923082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.671 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.706 253542 DEBUG nova.storage.rbd_utils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 935c4eb2-999f-40a4-8643-0479d293c149_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:06:08 compute-0 nova_compute[253538]: 2025-11-25 09:06:08.711 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.121 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:06:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2124237264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.212 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.214 253542 DEBUG nova.virt.libvirt.vif [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102871071',display_name='tempest-TestGettingAddress-server-1102871071',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102871071',id=139,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-wh4fqbxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:06:01Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=935c4eb2-999f-40a4-8643-0479d293c149,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.215 253542 DEBUG nova.network.os_vif_util [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.216 253542 DEBUG nova.network.os_vif_util [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:64:ff,bridge_name='br-int',has_traffic_filtering=True,id=bf69fe43-dd03-40a9-a38f-2ec005c27f58,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf69fe43-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.217 253542 DEBUG nova.virt.libvirt.vif [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102871071',display_name='tempest-TestGettingAddress-server-1102871071',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102871071',id=139,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-wh4fqbxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:06:01Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=935c4eb2-999f-40a4-8643-0479d293c149,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.217 253542 DEBUG nova.network.os_vif_util [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.218 253542 DEBUG nova.network.os_vif_util [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:48:c2,bridge_name='br-int',has_traffic_filtering=True,id=20e31743-4fc4-43d2-ab28-5205c776f506,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20e31743-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.220 253542 DEBUG nova.objects.instance [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 935c4eb2-999f-40a4-8643-0479d293c149 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:06:09 compute-0 ceph-mon[75015]: pgmap v2560: 321 pgs: 321 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Nov 25 09:06:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1142923082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:06:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2124237264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.251 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:06:09 compute-0 nova_compute[253538]:   <uuid>935c4eb2-999f-40a4-8643-0479d293c149</uuid>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   <name>instance-0000008b</name>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <nova:name>tempest-TestGettingAddress-server-1102871071</nova:name>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:06:08</nova:creationTime>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:06:09 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:06:09 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:06:09 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:06:09 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:06:09 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:06:09 compute-0 nova_compute[253538]:         <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 09:06:09 compute-0 nova_compute[253538]:         <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:06:09 compute-0 nova_compute[253538]:         <nova:port uuid="bf69fe43-dd03-40a9-a38f-2ec005c27f58">
Nov 25 09:06:09 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:06:09 compute-0 nova_compute[253538]:         <nova:port uuid="20e31743-4fc4-43d2-ab28-5205c776f506">
Nov 25 09:06:09 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fedb:48c2" ipVersion="6"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fedb:48c2" ipVersion="6"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <system>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <entry name="serial">935c4eb2-999f-40a4-8643-0479d293c149</entry>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <entry name="uuid">935c4eb2-999f-40a4-8643-0479d293c149</entry>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     </system>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   <os>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   </os>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   <features>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   </features>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/935c4eb2-999f-40a4-8643-0479d293c149_disk">
Nov 25 09:06:09 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       </source>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:06:09 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/935c4eb2-999f-40a4-8643-0479d293c149_disk.config">
Nov 25 09:06:09 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       </source>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:06:09 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:8d:64:ff"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <target dev="tapbf69fe43-dd"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:db:48:c2"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <target dev="tap20e31743-4f"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149/console.log" append="off"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <video>
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     </video>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:06:09 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:06:09 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:06:09 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:06:09 compute-0 nova_compute[253538]: </domain>
Nov 25 09:06:09 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.253 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Preparing to wait for external event network-vif-plugged-bf69fe43-dd03-40a9-a38f-2ec005c27f58 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.254 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.254 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.255 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.255 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Preparing to wait for external event network-vif-plugged-20e31743-4fc4-43d2-ab28-5205c776f506 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.256 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.257 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.258 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.259 253542 DEBUG nova.virt.libvirt.vif [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102871071',display_name='tempest-TestGettingAddress-server-1102871071',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102871071',id=139,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-wh4fqbxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:06:01Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=935c4eb2-999f-40a4-8643-0479d293c149,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.259 253542 DEBUG nova.network.os_vif_util [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.261 253542 DEBUG nova.network.os_vif_util [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:64:ff,bridge_name='br-int',has_traffic_filtering=True,id=bf69fe43-dd03-40a9-a38f-2ec005c27f58,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf69fe43-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.261 253542 DEBUG os_vif [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:64:ff,bridge_name='br-int',has_traffic_filtering=True,id=bf69fe43-dd03-40a9-a38f-2ec005c27f58,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf69fe43-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.262 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.263 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.263 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.267 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.268 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf69fe43-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.269 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf69fe43-dd, col_values=(('external_ids', {'iface-id': 'bf69fe43-dd03-40a9-a38f-2ec005c27f58', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:64:ff', 'vm-uuid': '935c4eb2-999f-40a4-8643-0479d293c149'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.271 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:09 compute-0 NetworkManager[48915]: <info>  [1764061569.2716] manager: (tapbf69fe43-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/594)
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.274 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.280 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.282 253542 INFO os_vif [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:64:ff,bridge_name='br-int',has_traffic_filtering=True,id=bf69fe43-dd03-40a9-a38f-2ec005c27f58,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf69fe43-dd')
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.283 253542 DEBUG nova.virt.libvirt.vif [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102871071',display_name='tempest-TestGettingAddress-server-1102871071',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102871071',id=139,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-wh4fqbxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:06:01Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=935c4eb2-999f-40a4-8643-0479d293c149,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.283 253542 DEBUG nova.network.os_vif_util [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.285 253542 DEBUG nova.network.os_vif_util [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:48:c2,bridge_name='br-int',has_traffic_filtering=True,id=20e31743-4fc4-43d2-ab28-5205c776f506,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20e31743-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.286 253542 DEBUG os_vif [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:48:c2,bridge_name='br-int',has_traffic_filtering=True,id=20e31743-4fc4-43d2-ab28-5205c776f506,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20e31743-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.287 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.287 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.287 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.289 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20e31743-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.290 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20e31743-4f, col_values=(('external_ids', {'iface-id': '20e31743-4fc4-43d2-ab28-5205c776f506', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:48:c2', 'vm-uuid': '935c4eb2-999f-40a4-8643-0479d293c149'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:09 compute-0 NetworkManager[48915]: <info>  [1764061569.2920] manager: (tap20e31743-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/595)
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.294 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.296 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.298 253542 INFO os_vif [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:48:c2,bridge_name='br-int',has_traffic_filtering=True,id=20e31743-4fc4-43d2-ab28-5205c776f506,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20e31743-4f')
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.348 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.348 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.348 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:8d:64:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.349 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:db:48:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.349 253542 INFO nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Using config drive
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.373 253542 DEBUG nova.storage.rbd_utils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 935c4eb2-999f-40a4-8643-0479d293c149_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.567 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.583 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.773 253542 INFO nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Creating config drive at /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149/disk.config
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.782 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpacz45rq9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.927 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpacz45rq9" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.951 253542 DEBUG nova.storage.rbd_utils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 935c4eb2-999f-40a4-8643-0479d293c149_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:06:09 compute-0 nova_compute[253538]: 2025-11-25 09:06:09.954 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149/disk.config 935c4eb2-999f-40a4-8643-0479d293c149_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.060 253542 DEBUG nova.network.neutron [req-42731087-d8b3-476f-a24a-a9e7a0781e68 req-fb377a87-bee2-4cab-b971-860acaa05590 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updated VIF entry in instance network info cache for port 20e31743-4fc4-43d2-ab28-5205c776f506. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.061 253542 DEBUG nova.network.neutron [req-42731087-d8b3-476f-a24a-a9e7a0781e68 req-fb377a87-bee2-4cab-b971-860acaa05590 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updating instance_info_cache with network_info: [{"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.081 253542 DEBUG oslo_concurrency.lockutils [req-42731087-d8b3-476f-a24a-a9e7a0781e68 req-fb377a87-bee2-4cab-b971-860acaa05590 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.151 253542 DEBUG oslo_concurrency.processutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149/disk.config 935c4eb2-999f-40a4-8643-0479d293c149_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.152 253542 INFO nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Deleting local config drive /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149/disk.config because it was imported into RBD.
Nov 25 09:06:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Nov 25 09:06:10 compute-0 kernel: tapbf69fe43-dd: entered promiscuous mode
Nov 25 09:06:10 compute-0 NetworkManager[48915]: <info>  [1764061570.2131] manager: (tapbf69fe43-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/596)
Nov 25 09:06:10 compute-0 ovn_controller[152859]: 2025-11-25T09:06:10Z|01442|binding|INFO|Claiming lport bf69fe43-dd03-40a9-a38f-2ec005c27f58 for this chassis.
Nov 25 09:06:10 compute-0 ovn_controller[152859]: 2025-11-25T09:06:10Z|01443|binding|INFO|bf69fe43-dd03-40a9-a38f-2ec005c27f58: Claiming fa:16:3e:8d:64:ff 10.100.0.9
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:10 compute-0 NetworkManager[48915]: <info>  [1764061570.2284] manager: (tap20e31743-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/597)
Nov 25 09:06:10 compute-0 systemd-udevd[400681]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:06:10 compute-0 systemd-udevd[400682]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:06:10 compute-0 kernel: tap20e31743-4f: entered promiscuous mode
Nov 25 09:06:10 compute-0 NetworkManager[48915]: <info>  [1764061570.2558] device (tap20e31743-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:06:10 compute-0 NetworkManager[48915]: <info>  [1764061570.2567] device (tap20e31743-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.256 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:64:ff 10.100.0.9'], port_security=['fa:16:3e:8d:64:ff 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '935c4eb2-999f-40a4-8643-0479d293c149', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0fc5f4dc-b315-4292-9809-5084499d762b, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bf69fe43-dd03-40a9-a38f-2ec005c27f58) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.256 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.260 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bf69fe43-dd03-40a9-a38f-2ec005c27f58 in datapath 8f08e3a5-c18c-40d6-a052-3721725c11a7 bound to our chassis
Nov 25 09:06:10 compute-0 ovn_controller[152859]: 2025-11-25T09:06:10Z|01444|binding|INFO|Claiming lport 20e31743-4fc4-43d2-ab28-5205c776f506 for this chassis.
Nov 25 09:06:10 compute-0 ovn_controller[152859]: 2025-11-25T09:06:10Z|01445|binding|INFO|20e31743-4fc4-43d2-ab28-5205c776f506: Claiming fa:16:3e:db:48:c2 2001:db8:0:1:f816:3eff:fedb:48c2 2001:db8::f816:3eff:fedb:48c2
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.261 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f08e3a5-c18c-40d6-a052-3721725c11a7
Nov 25 09:06:10 compute-0 ovn_controller[152859]: 2025-11-25T09:06:10Z|01446|binding|INFO|Setting lport bf69fe43-dd03-40a9-a38f-2ec005c27f58 ovn-installed in OVS
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.264 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:10 compute-0 NetworkManager[48915]: <info>  [1764061570.2695] device (tapbf69fe43-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:06:10 compute-0 NetworkManager[48915]: <info>  [1764061570.2710] device (tapbf69fe43-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:06:10 compute-0 systemd-machined[215790]: New machine qemu-169-instance-0000008b.
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.278 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e64ec752-be3a-488c-8c52-66481760dded]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:10 compute-0 ovn_controller[152859]: 2025-11-25T09:06:10Z|01447|binding|INFO|Setting lport 20e31743-4fc4-43d2-ab28-5205c776f506 ovn-installed in OVS
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.281 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:10 compute-0 systemd[1]: Started Virtual Machine qemu-169-instance-0000008b.
Nov 25 09:06:10 compute-0 ovn_controller[152859]: 2025-11-25T09:06:10Z|01448|binding|INFO|Setting lport 20e31743-4fc4-43d2-ab28-5205c776f506 up in Southbound
Nov 25 09:06:10 compute-0 ovn_controller[152859]: 2025-11-25T09:06:10Z|01449|binding|INFO|Setting lport bf69fe43-dd03-40a9-a38f-2ec005c27f58 up in Southbound
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.287 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:48:c2 2001:db8:0:1:f816:3eff:fedb:48c2 2001:db8::f816:3eff:fedb:48c2'], port_security=['fa:16:3e:db:48:c2 2001:db8:0:1:f816:3eff:fedb:48c2 2001:db8::f816:3eff:fedb:48c2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fedb:48c2/64 2001:db8::f816:3eff:fedb:48c2/64', 'neutron:device_id': '935c4eb2-999f-40a4-8643-0479d293c149', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dd839f3-f2ee-404d-b9f9-bedcfcb25484, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=20e31743-4fc4-43d2-ab28-5205c776f506) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.309 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ae351201-d09d-42aa-8836-b2fc7f9f7460]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.313 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5fb44f-88be-4e82-8571-7c21720b3f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.341 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad7a109-3d53-4e26-b6fa-558601fff26d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.362 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[430f3209-147f-4936-8361-7fc3e9152240]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f08e3a5-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:64:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 412], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688844, 'reachable_time': 16923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400698, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.376 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aa628d71-3a90-4177-b969-9fb9a5d6c4ec]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f08e3a5-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688856, 'tstamp': 688856}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400700, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f08e3a5-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688860, 'tstamp': 688860}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400700, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.377 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f08e3a5-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.378 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.379 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.380 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f08e3a5-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.380 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.380 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f08e3a5-c0, col_values=(('external_ids', {'iface-id': '8fae56b6-9884-44ea-b3b3-2b19412193c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.381 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.383 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 20e31743-4fc4-43d2-ab28-5205c776f506 in datapath 6c644b4d-59a5-410c-b57a-1faa3d063b78 unbound from our chassis
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.384 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c644b4d-59a5-410c-b57a-1faa3d063b78
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.411 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8e8396-8033-41a9-90d1-bcf565d40e75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.450 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[01af1a60-1301-467e-84f3-d4e3473e4d12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.453 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[085a088d-94ef-43e2-89b8-acbd99f43202]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.487 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[120672f9-6d84-4108-9fd0-e12d8650162b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.518 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5fc5a16-e0f5-410a-9adb-7822ad65db05]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c644b4d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b4:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 414], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688938, 'reachable_time': 35223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400706, 'error': None, 'target': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.540 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[feaf6618-0c49-4e5d-90da-c3ffadf84199]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6c644b4d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688953, 'tstamp': 688953}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400707, 'error': None, 'target': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.542 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c644b4d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.546 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c644b4d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.546 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.547 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c644b4d-50, col_values=(('external_ids', {'iface-id': 'f8aacb3c-1998-431a-ac4d-66021d7412c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:10.547 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.825 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061570.8244038, 935c4eb2-999f-40a4-8643-0479d293c149 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.826 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] VM Started (Lifecycle Event)
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.848 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.852 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061570.8246381, 935c4eb2-999f-40a4-8643-0479d293c149 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.853 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] VM Paused (Lifecycle Event)
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.867 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.870 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.886 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.926 253542 DEBUG nova.compute.manager [req-85dd83b8-bd19-4200-bb1c-2fd9201e3574 req-da15cdf7-5119-4297-930e-fac50a85dc64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-vif-plugged-bf69fe43-dd03-40a9-a38f-2ec005c27f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.927 253542 DEBUG oslo_concurrency.lockutils [req-85dd83b8-bd19-4200-bb1c-2fd9201e3574 req-da15cdf7-5119-4297-930e-fac50a85dc64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.927 253542 DEBUG oslo_concurrency.lockutils [req-85dd83b8-bd19-4200-bb1c-2fd9201e3574 req-da15cdf7-5119-4297-930e-fac50a85dc64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.927 253542 DEBUG oslo_concurrency.lockutils [req-85dd83b8-bd19-4200-bb1c-2fd9201e3574 req-da15cdf7-5119-4297-930e-fac50a85dc64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:10 compute-0 nova_compute[253538]: 2025-11-25 09:06:10.927 253542 DEBUG nova.compute.manager [req-85dd83b8-bd19-4200-bb1c-2fd9201e3574 req-da15cdf7-5119-4297-930e-fac50a85dc64 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Processing event network-vif-plugged-bf69fe43-dd03-40a9-a38f-2ec005c27f58 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:06:11 compute-0 ceph-mon[75015]: pgmap v2561: 321 pgs: 321 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Nov 25 09:06:11 compute-0 ovn_controller[152859]: 2025-11-25T09:06:11Z|01450|binding|INFO|Releasing lport 8fae56b6-9884-44ea-b3b3-2b19412193c5 from this chassis (sb_readonly=0)
Nov 25 09:06:11 compute-0 ovn_controller[152859]: 2025-11-25T09:06:11Z|01451|binding|INFO|Releasing lport f8aacb3c-1998-431a-ac4d-66021d7412c1 from this chassis (sb_readonly=0)
Nov 25 09:06:11 compute-0 nova_compute[253538]: 2025-11-25 09:06:11.557 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.008 253542 DEBUG nova.compute.manager [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-vif-plugged-bf69fe43-dd03-40a9-a38f-2ec005c27f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.009 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.009 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.010 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.010 253542 DEBUG nova.compute.manager [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] No event matching network-vif-plugged-bf69fe43-dd03-40a9-a38f-2ec005c27f58 in dict_keys([('network-vif-plugged', '20e31743-4fc4-43d2-ab28-5205c776f506')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.010 253542 WARNING nova.compute.manager [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received unexpected event network-vif-plugged-bf69fe43-dd03-40a9-a38f-2ec005c27f58 for instance with vm_state building and task_state spawning.
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.011 253542 DEBUG nova.compute.manager [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-vif-plugged-20e31743-4fc4-43d2-ab28-5205c776f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.011 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.012 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.012 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.013 253542 DEBUG nova.compute.manager [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Processing event network-vif-plugged-20e31743-4fc4-43d2-ab28-5205c776f506 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.013 253542 DEBUG nova.compute.manager [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-vif-plugged-20e31743-4fc4-43d2-ab28-5205c776f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.013 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.014 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.014 253542 DEBUG oslo_concurrency.lockutils [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.014 253542 DEBUG nova.compute.manager [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] No waiting events found dispatching network-vif-plugged-20e31743-4fc4-43d2-ab28-5205c776f506 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.015 253542 WARNING nova.compute.manager [req-bde926e0-95db-4546-a640-792c86974df7 req-dabfa31a-51c4-4ab9-a771-9b9469d14ca5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received unexpected event network-vif-plugged-20e31743-4fc4-43d2-ab28-5205c776f506 for instance with vm_state building and task_state spawning.
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.016 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.021 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061573.0208182, 935c4eb2-999f-40a4-8643-0479d293c149 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.021 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] VM Resumed (Lifecycle Event)
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.026 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.034 253542 INFO nova.virt.libvirt.driver [-] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Instance spawned successfully.
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.035 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.043 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.052 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.061 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.062 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.063 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.064 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.065 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.065 253542 DEBUG nova.virt.libvirt.driver [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.071 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:06:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.129 253542 INFO nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Took 11.89 seconds to spawn the instance on the hypervisor.
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.129 253542 DEBUG nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.193 253542 INFO nova.compute.manager [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Took 13.01 seconds to build instance.
Nov 25 09:06:13 compute-0 ceph-mon[75015]: pgmap v2562: 321 pgs: 321 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Nov 25 09:06:13 compute-0 nova_compute[253538]: 2025-11-25 09:06:13.335 253542 DEBUG oslo_concurrency.lockutils [None req-611360db-81bc-42f5-8bf9-42345a8fc7ae c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:14 compute-0 nova_compute[253538]: 2025-11-25 09:06:14.123 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 25 09:06:14 compute-0 nova_compute[253538]: 2025-11-25 09:06:14.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:14 compute-0 nova_compute[253538]: 2025-11-25 09:06:14.998 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061559.9972892, 497131ea-c693-4c1d-b471-5b69d2294e3a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:06:14 compute-0 nova_compute[253538]: 2025-11-25 09:06:14.998 253542 INFO nova.compute.manager [-] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] VM Stopped (Lifecycle Event)
Nov 25 09:06:15 compute-0 nova_compute[253538]: 2025-11-25 09:06:15.021 253542 DEBUG nova.compute.manager [None req-dc11a86d-1a24-4d73-ae5f-c2e2681478ca - - - - - -] [instance: 497131ea-c693-4c1d-b471-5b69d2294e3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:06:15 compute-0 ceph-mon[75015]: pgmap v2563: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 25 09:06:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 905 KiB/s rd, 1.4 MiB/s wr, 65 op/s
Nov 25 09:06:17 compute-0 ceph-mon[75015]: pgmap v2564: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 905 KiB/s rd, 1.4 MiB/s wr, 65 op/s
Nov 25 09:06:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:06:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 612 KiB/s wr, 71 op/s
Nov 25 09:06:18 compute-0 nova_compute[253538]: 2025-11-25 09:06:18.276 253542 DEBUG nova.compute.manager [req-948af99b-fdb8-46ce-8e8a-0703641130c1 req-d7a89e4c-5c2e-460d-aa2c-00c8db1f1130 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-changed-bf69fe43-dd03-40a9-a38f-2ec005c27f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:18 compute-0 nova_compute[253538]: 2025-11-25 09:06:18.276 253542 DEBUG nova.compute.manager [req-948af99b-fdb8-46ce-8e8a-0703641130c1 req-d7a89e4c-5c2e-460d-aa2c-00c8db1f1130 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Refreshing instance network info cache due to event network-changed-bf69fe43-dd03-40a9-a38f-2ec005c27f58. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:06:18 compute-0 nova_compute[253538]: 2025-11-25 09:06:18.276 253542 DEBUG oslo_concurrency.lockutils [req-948af99b-fdb8-46ce-8e8a-0703641130c1 req-d7a89e4c-5c2e-460d-aa2c-00c8db1f1130 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:06:18 compute-0 nova_compute[253538]: 2025-11-25 09:06:18.277 253542 DEBUG oslo_concurrency.lockutils [req-948af99b-fdb8-46ce-8e8a-0703641130c1 req-d7a89e4c-5c2e-460d-aa2c-00c8db1f1130 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:06:18 compute-0 nova_compute[253538]: 2025-11-25 09:06:18.277 253542 DEBUG nova.network.neutron [req-948af99b-fdb8-46ce-8e8a-0703641130c1 req-d7a89e4c-5c2e-460d-aa2c-00c8db1f1130 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Refreshing network info cache for port bf69fe43-dd03-40a9-a38f-2ec005c27f58 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:06:19 compute-0 nova_compute[253538]: 2025-11-25 09:06:19.126 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:19 compute-0 nova_compute[253538]: 2025-11-25 09:06:19.293 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:19 compute-0 ceph-mon[75015]: pgmap v2565: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 612 KiB/s wr, 71 op/s
Nov 25 09:06:19 compute-0 nova_compute[253538]: 2025-11-25 09:06:19.418 253542 DEBUG nova.network.neutron [req-948af99b-fdb8-46ce-8e8a-0703641130c1 req-d7a89e4c-5c2e-460d-aa2c-00c8db1f1130 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updated VIF entry in instance network info cache for port bf69fe43-dd03-40a9-a38f-2ec005c27f58. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:06:19 compute-0 nova_compute[253538]: 2025-11-25 09:06:19.418 253542 DEBUG nova.network.neutron [req-948af99b-fdb8-46ce-8e8a-0703641130c1 req-d7a89e4c-5c2e-460d-aa2c-00c8db1f1130 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updating instance_info_cache with network_info: [{"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:06:19 compute-0 nova_compute[253538]: 2025-11-25 09:06:19.439 253542 DEBUG oslo_concurrency.lockutils [req-948af99b-fdb8-46ce-8e8a-0703641130c1 req-d7a89e4c-5c2e-460d-aa2c-00c8db1f1130 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:06:19 compute-0 nova_compute[253538]: 2025-11-25 09:06:19.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:06:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 09:06:21 compute-0 ceph-mon[75015]: pgmap v2566: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 09:06:21 compute-0 nova_compute[253538]: 2025-11-25 09:06:21.525 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 09:06:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:06:23 compute-0 ceph-mon[75015]: pgmap v2567: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 09:06:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:06:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:06:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:06:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:06:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:06:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:06:23 compute-0 nova_compute[253538]: 2025-11-25 09:06:23.880 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:24 compute-0 nova_compute[253538]: 2025-11-25 09:06:24.127 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 09:06:24 compute-0 nova_compute[253538]: 2025-11-25 09:06:24.296 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:24 compute-0 ceph-mon[75015]: pgmap v2568: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 09:06:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 66 op/s
Nov 25 09:06:26 compute-0 ovn_controller[152859]: 2025-11-25T09:06:26Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8d:64:ff 10.100.0.9
Nov 25 09:06:26 compute-0 ovn_controller[152859]: 2025-11-25T09:06:26Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8d:64:ff 10.100.0.9
Nov 25 09:06:26 compute-0 podman[400752]: 2025-11-25 09:06:26.815360746 +0000 UTC m=+0.068312609 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 25 09:06:26 compute-0 podman[400753]: 2025-11-25 09:06:26.832132921 +0000 UTC m=+0.085429753 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 09:06:27 compute-0 ceph-mon[75015]: pgmap v2569: 321 pgs: 321 active+clean; 213 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 66 op/s
Nov 25 09:06:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:06:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 225 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 703 KiB/s wr, 66 op/s
Nov 25 09:06:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:06:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571887286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:06:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:06:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571887286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:06:29 compute-0 nova_compute[253538]: 2025-11-25 09:06:29.129 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:29 compute-0 ceph-mon[75015]: pgmap v2570: 321 pgs: 321 active+clean; 225 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 703 KiB/s wr, 66 op/s
Nov 25 09:06:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1571887286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:06:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1571887286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:06:29 compute-0 nova_compute[253538]: 2025-11-25 09:06:29.297 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 834 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Nov 25 09:06:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:30.714 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:06:30 compute-0 nova_compute[253538]: 2025-11-25 09:06:30.715 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:30 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:30.715 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:06:31 compute-0 ceph-mon[75015]: pgmap v2571: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 834 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Nov 25 09:06:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:06:32 compute-0 sshd-session[400790]: Connection closed by authenticating user root 193.32.162.151 port 48304 [preauth]
Nov 25 09:06:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:06:33 compute-0 ceph-mon[75015]: pgmap v2572: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:06:34 compute-0 nova_compute[253538]: 2025-11-25 09:06:34.131 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:06:34 compute-0 nova_compute[253538]: 2025-11-25 09:06:34.299 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:34 compute-0 nova_compute[253538]: 2025-11-25 09:06:34.610 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:34.717 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:35 compute-0 ceph-mon[75015]: pgmap v2573: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:06:35 compute-0 podman[400792]: 2025-11-25 09:06:35.859786141 +0000 UTC m=+0.106188918 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:06:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:06:37 compute-0 ceph-mon[75015]: pgmap v2574: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:06:37 compute-0 nova_compute[253538]: 2025-11-25 09:06:37.958 253542 DEBUG nova.compute.manager [req-9f40bcc0-13e6-4325-9e03-8e3471d3bb84 req-b092db25-ad01-4ef1-8005-1761a31f22ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-changed-bf69fe43-dd03-40a9-a38f-2ec005c27f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:37 compute-0 nova_compute[253538]: 2025-11-25 09:06:37.958 253542 DEBUG nova.compute.manager [req-9f40bcc0-13e6-4325-9e03-8e3471d3bb84 req-b092db25-ad01-4ef1-8005-1761a31f22ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Refreshing instance network info cache due to event network-changed-bf69fe43-dd03-40a9-a38f-2ec005c27f58. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:06:37 compute-0 nova_compute[253538]: 2025-11-25 09:06:37.960 253542 DEBUG oslo_concurrency.lockutils [req-9f40bcc0-13e6-4325-9e03-8e3471d3bb84 req-b092db25-ad01-4ef1-8005-1761a31f22ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:06:37 compute-0 nova_compute[253538]: 2025-11-25 09:06:37.960 253542 DEBUG oslo_concurrency.lockutils [req-9f40bcc0-13e6-4325-9e03-8e3471d3bb84 req-b092db25-ad01-4ef1-8005-1761a31f22ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:06:37 compute-0 nova_compute[253538]: 2025-11-25 09:06:37.960 253542 DEBUG nova.network.neutron [req-9f40bcc0-13e6-4325-9e03-8e3471d3bb84 req-b092db25-ad01-4ef1-8005-1761a31f22ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Refreshing network info cache for port bf69fe43-dd03-40a9-a38f-2ec005c27f58 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:06:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.120 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.120 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.121 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "935c4eb2-999f-40a4-8643-0479d293c149-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.121 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.121 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.122 253542 INFO nova.compute.manager [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Terminating instance
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.124 253542 DEBUG nova.compute.manager [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:06:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:06:38 compute-0 kernel: tapbf69fe43-dd (unregistering): left promiscuous mode
Nov 25 09:06:38 compute-0 NetworkManager[48915]: <info>  [1764061598.1881] device (tapbf69fe43-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:06:38 compute-0 ovn_controller[152859]: 2025-11-25T09:06:38Z|01452|binding|INFO|Releasing lport bf69fe43-dd03-40a9-a38f-2ec005c27f58 from this chassis (sb_readonly=0)
Nov 25 09:06:38 compute-0 ovn_controller[152859]: 2025-11-25T09:06:38Z|01453|binding|INFO|Setting lport bf69fe43-dd03-40a9-a38f-2ec005c27f58 down in Southbound
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.208 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:38 compute-0 ovn_controller[152859]: 2025-11-25T09:06:38Z|01454|binding|INFO|Removing iface tapbf69fe43-dd ovn-installed in OVS
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.216 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:38 compute-0 kernel: tap20e31743-4f (unregistering): left promiscuous mode
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:38 compute-0 NetworkManager[48915]: <info>  [1764061598.2369] device (tap20e31743-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.249 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:38 compute-0 ovn_controller[152859]: 2025-11-25T09:06:38Z|01455|binding|INFO|Releasing lport 20e31743-4fc4-43d2-ab28-5205c776f506 from this chassis (sb_readonly=1)
Nov 25 09:06:38 compute-0 ovn_controller[152859]: 2025-11-25T09:06:38Z|01456|binding|INFO|Removing iface tap20e31743-4f ovn-installed in OVS
Nov 25 09:06:38 compute-0 ovn_controller[152859]: 2025-11-25T09:06:38Z|01457|if_status|INFO|Dropped 4 log messages in last 678 seconds (most recently, 667 seconds ago) due to excessive rate
Nov 25 09:06:38 compute-0 ovn_controller[152859]: 2025-11-25T09:06:38Z|01458|if_status|INFO|Not setting lport 20e31743-4fc4-43d2-ab28-5205c776f506 down as sb is readonly
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.253 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.268 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:38 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Nov 25 09:06:38 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d0000008b.scope: Consumed 14.264s CPU time.
Nov 25 09:06:38 compute-0 systemd-machined[215790]: Machine qemu-169-instance-0000008b terminated.
Nov 25 09:06:38 compute-0 ovn_controller[152859]: 2025-11-25T09:06:38Z|01459|binding|INFO|Setting lport 20e31743-4fc4-43d2-ab28-5205c776f506 down in Southbound
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.307 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:64:ff 10.100.0.9'], port_security=['fa:16:3e:8d:64:ff 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '935c4eb2-999f-40a4-8643-0479d293c149', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0fc5f4dc-b315-4292-9809-5084499d762b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bf69fe43-dd03-40a9-a38f-2ec005c27f58) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.308 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bf69fe43-dd03-40a9-a38f-2ec005c27f58 in datapath 8f08e3a5-c18c-40d6-a052-3721725c11a7 unbound from our chassis
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.310 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f08e3a5-c18c-40d6-a052-3721725c11a7
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.342 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2df7bb-6e90-4e63-9865-3fc1c590fcc5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.347 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:48:c2 2001:db8:0:1:f816:3eff:fedb:48c2 2001:db8::f816:3eff:fedb:48c2'], port_security=['fa:16:3e:db:48:c2 2001:db8:0:1:f816:3eff:fedb:48c2 2001:db8::f816:3eff:fedb:48c2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fedb:48c2/64 2001:db8::f816:3eff:fedb:48c2/64', 'neutron:device_id': '935c4eb2-999f-40a4-8643-0479d293c149', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dd839f3-f2ee-404d-b9f9-bedcfcb25484, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=20e31743-4fc4-43d2-ab28-5205c776f506) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:06:38 compute-0 NetworkManager[48915]: <info>  [1764061598.3606] manager: (tap20e31743-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/598)
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.372 253542 INFO nova.virt.libvirt.driver [-] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Instance destroyed successfully.
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.372 253542 DEBUG nova.objects.instance [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 935c4eb2-999f-40a4-8643-0479d293c149 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.376 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7993fb21-a285-4821-ae38-1450db8ee4b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.380 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4a629a11-7630-4e21-a892-7dd1645139c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.383 253542 DEBUG nova.virt.libvirt.vif [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102871071',display_name='tempest-TestGettingAddress-server-1102871071',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102871071',id=139,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:06:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-wh4fqbxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:06:13Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=935c4eb2-999f-40a4-8643-0479d293c149,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.384 253542 DEBUG nova.network.os_vif_util [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.384 253542 DEBUG nova.network.os_vif_util [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:64:ff,bridge_name='br-int',has_traffic_filtering=True,id=bf69fe43-dd03-40a9-a38f-2ec005c27f58,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf69fe43-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.384 253542 DEBUG os_vif [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:64:ff,bridge_name='br-int',has_traffic_filtering=True,id=bf69fe43-dd03-40a9-a38f-2ec005c27f58,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf69fe43-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.386 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.386 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf69fe43-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.389 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.391 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.393 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.395 253542 INFO os_vif [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:64:ff,bridge_name='br-int',has_traffic_filtering=True,id=bf69fe43-dd03-40a9-a38f-2ec005c27f58,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf69fe43-dd')
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.396 253542 DEBUG nova.virt.libvirt.vif [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102871071',display_name='tempest-TestGettingAddress-server-1102871071',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102871071',id=139,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:06:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-wh4fqbxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:06:13Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=935c4eb2-999f-40a4-8643-0479d293c149,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.396 253542 DEBUG nova.network.os_vif_util [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.397 253542 DEBUG nova.network.os_vif_util [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:48:c2,bridge_name='br-int',has_traffic_filtering=True,id=20e31743-4fc4-43d2-ab28-5205c776f506,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20e31743-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.397 253542 DEBUG os_vif [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:48:c2,bridge_name='br-int',has_traffic_filtering=True,id=20e31743-4fc4-43d2-ab28-5205c776f506,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20e31743-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.398 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.398 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20e31743-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.399 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.400 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.402 253542 INFO os_vif [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:48:c2,bridge_name='br-int',has_traffic_filtering=True,id=20e31743-4fc4-43d2-ab28-5205c776f506,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20e31743-4f')
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.413 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[578e39af-28ac-45ab-8a00-e4d5f49e0165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.432 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[354129dd-89bd-4de3-9b1e-06de76a0ccd2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f08e3a5-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:64:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 412], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688844, 'reachable_time': 16923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400873, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.448 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[68366a5c-17c6-43cd-93c3-1d907bc44f0a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f08e3a5-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688856, 'tstamp': 688856}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400877, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f08e3a5-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688860, 'tstamp': 688860}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400877, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.449 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f08e3a5-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.452 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f08e3a5-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.452 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.452 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f08e3a5-c0, col_values=(('external_ids', {'iface-id': '8fae56b6-9884-44ea-b3b3-2b19412193c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.452 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.453 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.454 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 20e31743-4fc4-43d2-ab28-5205c776f506 in datapath 6c644b4d-59a5-410c-b57a-1faa3d063b78 unbound from our chassis
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.456 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c644b4d-59a5-410c-b57a-1faa3d063b78
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.471 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4af0f3-efba-4718-b22e-2ebe9eb2900c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.501 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4207e997-1d0f-4dbf-963a-aa9f01fb7819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.504 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[6970267c-13db-4e5b-94fc-7b449ed2935e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.536 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd51e18-f869-4ae2-ab19-745e737a19fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.556 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7bf61d95-23ab-43d6-accb-64946197669c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c644b4d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b4:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 414], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688938, 'reachable_time': 35223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400883, 'error': None, 'target': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.574 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d1837d07-6a54-427a-89fe-2eebaf6f5364]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6c644b4d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688953, 'tstamp': 688953}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400884, 'error': None, 'target': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.578 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c644b4d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.581 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c644b4d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c644b4d-50, col_values=(('external_ids', {'iface-id': 'f8aacb3c-1998-431a-ac4d-66021d7412c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:38.583 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.846 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.860 253542 INFO nova.virt.libvirt.driver [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Deleting instance files /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149_del
Nov 25 09:06:38 compute-0 nova_compute[253538]: 2025-11-25 09:06:38.861 253542 INFO nova.virt.libvirt.driver [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Deletion of /var/lib/nova/instances/935c4eb2-999f-40a4-8643-0479d293c149_del complete
Nov 25 09:06:39 compute-0 nova_compute[253538]: 2025-11-25 09:06:39.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:39 compute-0 ceph-mon[75015]: pgmap v2575: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:06:39 compute-0 nova_compute[253538]: 2025-11-25 09:06:39.462 253542 INFO nova.compute.manager [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Took 1.34 seconds to destroy the instance on the hypervisor.
Nov 25 09:06:39 compute-0 nova_compute[253538]: 2025-11-25 09:06:39.462 253542 DEBUG oslo.service.loopingcall [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:06:39 compute-0 nova_compute[253538]: 2025-11-25 09:06:39.462 253542 DEBUG nova.compute.manager [-] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:06:39 compute-0 nova_compute[253538]: 2025-11-25 09:06:39.463 253542 DEBUG nova.network.neutron [-] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:06:39 compute-0 nova_compute[253538]: 2025-11-25 09:06:39.701 253542 DEBUG nova.network.neutron [req-9f40bcc0-13e6-4325-9e03-8e3471d3bb84 req-b092db25-ad01-4ef1-8005-1761a31f22ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updated VIF entry in instance network info cache for port bf69fe43-dd03-40a9-a38f-2ec005c27f58. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:06:39 compute-0 nova_compute[253538]: 2025-11-25 09:06:39.702 253542 DEBUG nova.network.neutron [req-9f40bcc0-13e6-4325-9e03-8e3471d3bb84 req-b092db25-ad01-4ef1-8005-1761a31f22ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updating instance_info_cache with network_info: [{"id": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "address": "fa:16:3e:8d:64:ff", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf69fe43-dd", "ovs_interfaceid": "bf69fe43-dd03-40a9-a38f-2ec005c27f58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:06:39 compute-0 nova_compute[253538]: 2025-11-25 09:06:39.721 253542 DEBUG oslo_concurrency.lockutils [req-9f40bcc0-13e6-4325-9e03-8e3471d3bb84 req-b092db25-ad01-4ef1-8005-1761a31f22ab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-935c4eb2-999f-40a4-8643-0479d293c149" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:06:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 223 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 1.5 MiB/s wr, 48 op/s
Nov 25 09:06:40 compute-0 nova_compute[253538]: 2025-11-25 09:06:40.609 253542 DEBUG nova.compute.manager [req-32a844c0-9e4a-43e1-86be-e1385074dc18 req-d0fcd5dc-dc6d-49c3-9cd5-732eed286ba5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-vif-deleted-bf69fe43-dd03-40a9-a38f-2ec005c27f58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:40 compute-0 nova_compute[253538]: 2025-11-25 09:06:40.609 253542 INFO nova.compute.manager [req-32a844c0-9e4a-43e1-86be-e1385074dc18 req-d0fcd5dc-dc6d-49c3-9cd5-732eed286ba5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Neutron deleted interface bf69fe43-dd03-40a9-a38f-2ec005c27f58; detaching it from the instance and deleting it from the info cache
Nov 25 09:06:40 compute-0 nova_compute[253538]: 2025-11-25 09:06:40.610 253542 DEBUG nova.network.neutron [req-32a844c0-9e4a-43e1-86be-e1385074dc18 req-d0fcd5dc-dc6d-49c3-9cd5-732eed286ba5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updating instance_info_cache with network_info: [{"id": "20e31743-4fc4-43d2-ab28-5205c776f506", "address": "fa:16:3e:db:48:c2", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedb:48c2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20e31743-4f", "ovs_interfaceid": "20e31743-4fc4-43d2-ab28-5205c776f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:06:40 compute-0 nova_compute[253538]: 2025-11-25 09:06:40.653 253542 DEBUG nova.compute.manager [req-32a844c0-9e4a-43e1-86be-e1385074dc18 req-d0fcd5dc-dc6d-49c3-9cd5-732eed286ba5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Detach interface failed, port_id=bf69fe43-dd03-40a9-a38f-2ec005c27f58, reason: Instance 935c4eb2-999f-40a4-8643-0479d293c149 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 09:06:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:41.090 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:41.090 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:41.091 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:41 compute-0 ceph-mon[75015]: pgmap v2576: 321 pgs: 321 active+clean; 223 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 1.5 MiB/s wr, 48 op/s
Nov 25 09:06:41 compute-0 nova_compute[253538]: 2025-11-25 09:06:41.448 253542 DEBUG nova.network.neutron [-] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:06:41 compute-0 nova_compute[253538]: 2025-11-25 09:06:41.467 253542 INFO nova.compute.manager [-] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Took 2.00 seconds to deallocate network for instance.
Nov 25 09:06:41 compute-0 nova_compute[253538]: 2025-11-25 09:06:41.516 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:41 compute-0 nova_compute[253538]: 2025-11-25 09:06:41.517 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:41 compute-0 nova_compute[253538]: 2025-11-25 09:06:41.708 253542 DEBUG oslo_concurrency.processutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:06:42 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1534015723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:06:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 191 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 27 KiB/s wr, 19 op/s
Nov 25 09:06:42 compute-0 nova_compute[253538]: 2025-11-25 09:06:42.173 253542 DEBUG oslo_concurrency.processutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:42 compute-0 nova_compute[253538]: 2025-11-25 09:06:42.182 253542 DEBUG nova.compute.provider_tree [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:06:42 compute-0 nova_compute[253538]: 2025-11-25 09:06:42.274 253542 DEBUG nova.scheduler.client.report [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:06:42 compute-0 nova_compute[253538]: 2025-11-25 09:06:42.296 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:42 compute-0 nova_compute[253538]: 2025-11-25 09:06:42.319 253542 INFO nova.scheduler.client.report [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 935c4eb2-999f-40a4-8643-0479d293c149
Nov 25 09:06:42 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1534015723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:06:42 compute-0 nova_compute[253538]: 2025-11-25 09:06:42.377 253542 DEBUG oslo_concurrency.lockutils [None req-015b4da1-0d87-4e37-86d1-eefd05601139 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "935c4eb2-999f-40a4-8643-0479d293c149" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:42 compute-0 nova_compute[253538]: 2025-11-25 09:06:42.699 253542 DEBUG nova.compute.manager [req-dc14f490-3551-4853-96f1-7858007ec18e req-10ffa19f-4784-4b80-86c7-8b370dd1a515 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Received event network-vif-deleted-20e31743-4fc4-43d2-ab28-5205c776f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:06:43 compute-0 ceph-mon[75015]: pgmap v2577: 321 pgs: 321 active+clean; 191 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 27 KiB/s wr, 19 op/s
Nov 25 09:06:43 compute-0 nova_compute[253538]: 2025-11-25 09:06:43.400 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:43 compute-0 nova_compute[253538]: 2025-11-25 09:06:43.753 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:43 compute-0 nova_compute[253538]: 2025-11-25 09:06:43.754 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:43 compute-0 nova_compute[253538]: 2025-11-25 09:06:43.754 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:43 compute-0 nova_compute[253538]: 2025-11-25 09:06:43.754 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:43 compute-0 nova_compute[253538]: 2025-11-25 09:06:43.754 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:43 compute-0 nova_compute[253538]: 2025-11-25 09:06:43.755 253542 INFO nova.compute.manager [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Terminating instance
Nov 25 09:06:43 compute-0 nova_compute[253538]: 2025-11-25 09:06:43.756 253542 DEBUG nova.compute.manager [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:06:43 compute-0 kernel: tapbc72cf9d-bb (unregistering): left promiscuous mode
Nov 25 09:06:43 compute-0 NetworkManager[48915]: <info>  [1764061603.8215] device (tapbc72cf9d-bb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:06:43 compute-0 nova_compute[253538]: 2025-11-25 09:06:43.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:43 compute-0 ovn_controller[152859]: 2025-11-25T09:06:43Z|01460|binding|INFO|Releasing lport bc72cf9d-bb8d-4968-879b-a65c0e151d35 from this chassis (sb_readonly=0)
Nov 25 09:06:43 compute-0 ovn_controller[152859]: 2025-11-25T09:06:43Z|01461|binding|INFO|Setting lport bc72cf9d-bb8d-4968-879b-a65c0e151d35 down in Southbound
Nov 25 09:06:43 compute-0 ovn_controller[152859]: 2025-11-25T09:06:43Z|01462|binding|INFO|Removing iface tapbc72cf9d-bb ovn-installed in OVS
Nov 25 09:06:43 compute-0 nova_compute[253538]: 2025-11-25 09:06:43.829 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:43.837 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a3:07 10.100.0.14'], port_security=['fa:16:3e:b7:a3:07 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0fc5f4dc-b315-4292-9809-5084499d762b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc72cf9d-bb8d-4968-879b-a65c0e151d35) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:06:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:43.839 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc72cf9d-bb8d-4968-879b-a65c0e151d35 in datapath 8f08e3a5-c18c-40d6-a052-3721725c11a7 unbound from our chassis
Nov 25 09:06:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:43.841 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f08e3a5-c18c-40d6-a052-3721725c11a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:06:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:43.841 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a21799-0477-40a3-9c0f-98197bc0829c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:43.842 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7 namespace which is not needed anymore
Nov 25 09:06:43 compute-0 kernel: tape64e0c93-9f (unregistering): left promiscuous mode
Nov 25 09:06:43 compute-0 NetworkManager[48915]: <info>  [1764061603.8475] device (tape64e0c93-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:06:43 compute-0 nova_compute[253538]: 2025-11-25 09:06:43.849 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:43 compute-0 ovn_controller[152859]: 2025-11-25T09:06:43Z|01463|binding|INFO|Releasing lport e64e0c93-9ff8-4b26-9a7e-1bae8b024966 from this chassis (sb_readonly=0)
Nov 25 09:06:43 compute-0 ovn_controller[152859]: 2025-11-25T09:06:43Z|01464|binding|INFO|Setting lport e64e0c93-9ff8-4b26-9a7e-1bae8b024966 down in Southbound
Nov 25 09:06:43 compute-0 nova_compute[253538]: 2025-11-25 09:06:43.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:43 compute-0 ovn_controller[152859]: 2025-11-25T09:06:43Z|01465|binding|INFO|Removing iface tape64e0c93-9f ovn-installed in OVS
Nov 25 09:06:43 compute-0 nova_compute[253538]: 2025-11-25 09:06:43.862 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:43.867 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:41:4a 2001:db8:0:1:f816:3eff:fe6a:414a 2001:db8::f816:3eff:fe6a:414a'], port_security=['fa:16:3e:6a:41:4a 2001:db8:0:1:f816:3eff:fe6a:414a 2001:db8::f816:3eff:fe6a:414a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe6a:414a/64 2001:db8::f816:3eff:fe6a:414a/64', 'neutron:device_id': '7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dd839f3-f2ee-404d-b9f9-bedcfcb25484, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=e64e0c93-9ff8-4b26-9a7e-1bae8b024966) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:06:43 compute-0 nova_compute[253538]: 2025-11-25 09:06:43.897 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:43 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000089.scope: Deactivated successfully.
Nov 25 09:06:43 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000089.scope: Consumed 15.714s CPU time.
Nov 25 09:06:43 compute-0 systemd-machined[215790]: Machine qemu-167-instance-00000089 terminated.
Nov 25 09:06:43 compute-0 kernel: tapbc72cf9d-bb: entered promiscuous mode
Nov 25 09:06:43 compute-0 kernel: tapbc72cf9d-bb (unregistering): left promiscuous mode
Nov 25 09:06:43 compute-0 NetworkManager[48915]: <info>  [1764061603.9829] manager: (tapbc72cf9d-bb): new Tun device (/org/freedesktop/NetworkManager/Devices/599)
Nov 25 09:06:43 compute-0 nova_compute[253538]: 2025-11-25 09:06:43.984 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:43 compute-0 ovn_controller[152859]: 2025-11-25T09:06:43Z|01466|binding|INFO|Claiming lport bc72cf9d-bb8d-4968-879b-a65c0e151d35 for this chassis.
Nov 25 09:06:43 compute-0 ovn_controller[152859]: 2025-11-25T09:06:43Z|01467|binding|INFO|bc72cf9d-bb8d-4968-879b-a65c0e151d35: Claiming fa:16:3e:b7:a3:07 10.100.0.14
Nov 25 09:06:43 compute-0 NetworkManager[48915]: <info>  [1764061603.9950] manager: (tape64e0c93-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/600)
Nov 25 09:06:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:43.996 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a3:07 10.100.0.14'], port_security=['fa:16:3e:b7:a3:07 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0fc5f4dc-b315-4292-9809-5084499d762b, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc72cf9d-bb8d-4968-879b-a65c0e151d35) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:44 compute-0 ovn_controller[152859]: 2025-11-25T09:06:44Z|01468|binding|INFO|Releasing lport bc72cf9d-bb8d-4968-879b-a65c0e151d35 from this chassis (sb_readonly=0)
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.015 253542 INFO nova.virt.libvirt.driver [-] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Instance destroyed successfully.
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.016 253542 DEBUG nova.objects.instance [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.021 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a3:07 10.100.0.14'], port_security=['fa:16:3e:b7:a3:07 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8ddf0de9-e47d-4ca3-b0fe-d196d1eb3ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0fc5f4dc-b315-4292-9809-5084499d762b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc72cf9d-bb8d-4968-879b-a65c0e151d35) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:06:44 compute-0 neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7[398671]: [NOTICE]   (398689) : haproxy version is 2.8.14-c23fe91
Nov 25 09:06:44 compute-0 neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7[398671]: [NOTICE]   (398689) : path to executable is /usr/sbin/haproxy
Nov 25 09:06:44 compute-0 neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7[398671]: [WARNING]  (398689) : Exiting Master process...
Nov 25 09:06:44 compute-0 neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7[398671]: [WARNING]  (398689) : Exiting Master process...
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.027 253542 DEBUG nova.virt.libvirt.vif [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1397309390',display_name='tempest-TestGettingAddress-server-1397309390',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1397309390',id=137,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:05:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ie7bcng6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:05:36Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.028 253542 DEBUG nova.network.os_vif_util [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.028 253542 DEBUG nova.network.os_vif_util [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:a3:07,bridge_name='br-int',has_traffic_filtering=True,id=bc72cf9d-bb8d-4968-879b-a65c0e151d35,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc72cf9d-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.029 253542 DEBUG os_vif [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:a3:07,bridge_name='br-int',has_traffic_filtering=True,id=bc72cf9d-bb8d-4968-879b-a65c0e151d35,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc72cf9d-bb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.031 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.031 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc72cf9d-bb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:44 compute-0 neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7[398671]: [ALERT]    (398689) : Current worker (398696) exited with code 143 (Terminated)
Nov 25 09:06:44 compute-0 neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7[398671]: [WARNING]  (398689) : All workers exited. Exiting... (0)
Nov 25 09:06:44 compute-0 systemd[1]: libpod-51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435.scope: Deactivated successfully.
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.037 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.040 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.042 253542 INFO os_vif [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:a3:07,bridge_name='br-int',has_traffic_filtering=True,id=bc72cf9d-bb8d-4968-879b-a65c0e151d35,network=Network(8f08e3a5-c18c-40d6-a052-3721725c11a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc72cf9d-bb')
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.043 253542 DEBUG nova.virt.libvirt.vif [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1397309390',display_name='tempest-TestGettingAddress-server-1397309390',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1397309390',id=137,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2EHMFfBwqY5UWv5ZUTvDGydPH+lI0Q3ZijcI/Z0KNxJwJ0XwD1zp8yjXj4vyT5Wr1yKGnfee4Ard9WUGqQHTmOIMYiRA7o0ggCLm4Ee9TLzJApKNCTFF0W861lRMGbtg==',key_name='tempest-TestGettingAddress-99102097',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:05:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ie7bcng6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:05:36Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.043 253542 DEBUG nova.network.os_vif_util [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:06:44 compute-0 podman[400937]: 2025-11-25 09:06:44.043437876 +0000 UTC m=+0.083451959 container died 51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.044 253542 DEBUG nova.network.os_vif_util [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:41:4a,bridge_name='br-int',has_traffic_filtering=True,id=e64e0c93-9ff8-4b26-9a7e-1bae8b024966,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape64e0c93-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.044 253542 DEBUG os_vif [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:41:4a,bridge_name='br-int',has_traffic_filtering=True,id=e64e0c93-9ff8-4b26-9a7e-1bae8b024966,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape64e0c93-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.046 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape64e0c93-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.047 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.050 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.051 253542 INFO os_vif [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:41:4a,bridge_name='br-int',has_traffic_filtering=True,id=e64e0c93-9ff8-4b26-9a7e-1bae8b024966,network=Network(6c644b4d-59a5-410c-b57a-1faa3d063b78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape64e0c93-9f')
Nov 25 09:06:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435-userdata-shm.mount: Deactivated successfully.
Nov 25 09:06:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-80317eb3c6d5e4b152cb82d3e07a88ef2d85b9598516f68f790b2b2e984d1b96-merged.mount: Deactivated successfully.
Nov 25 09:06:44 compute-0 podman[400937]: 2025-11-25 09:06:44.11014433 +0000 UTC m=+0.150158403 container cleanup 51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:06:44 compute-0 systemd[1]: libpod-conmon-51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435.scope: Deactivated successfully.
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.138 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 167 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 31 KiB/s wr, 31 op/s
Nov 25 09:06:44 compute-0 podman[400997]: 2025-11-25 09:06:44.190515395 +0000 UTC m=+0.058287226 container remove 51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.198 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9a408f-a2f3-4688-934e-9cada135e482]: (4, ('Tue Nov 25 09:06:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7 (51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435)\n51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435\nTue Nov 25 09:06:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7 (51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435)\n51f25d4677022ebbe2b594fe9f78cd07eb053d885ed4dadb80219cb3a3c42435\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.200 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2905fc10-d7eb-47ce-92ae-88bf2933c1e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.201 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f08e3a5-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.203 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:44 compute-0 kernel: tap8f08e3a5-c0: left promiscuous mode
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.222 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c65d9bc1-b666-4c41-b4a2-3b6edb6c0b3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.248 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e42c2d-f5b2-453b-becd-983d3e997d6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.250 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff859f7-b1d0-49f6-a8f3-030865f47050]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.264 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[30b59c4f-7025-43e4-a102-b152f3c094d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688835, 'reachable_time': 36307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401012, 'error': None, 'target': 'ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d8f08e3a5\x2dc18c\x2d40d6\x2da052\x2d3721725c11a7.mount: Deactivated successfully.
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.268 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8f08e3a5-c18c-40d6-a052-3721725c11a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.269 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5552b9-bb7a-41e1-a5f3-eabb197fe1a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.272 162739 INFO neutron.agent.ovn.metadata.agent [-] Port e64e0c93-9ff8-4b26-9a7e-1bae8b024966 in datapath 6c644b4d-59a5-410c-b57a-1faa3d063b78 unbound from our chassis
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.273 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c644b4d-59a5-410c-b57a-1faa3d063b78, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.274 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6d47cb48-5dbf-43f5-8adc-21897c3ab1f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.275 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78 namespace which is not needed anymore
Nov 25 09:06:44 compute-0 neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78[398826]: [NOTICE]   (398830) : haproxy version is 2.8.14-c23fe91
Nov 25 09:06:44 compute-0 neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78[398826]: [NOTICE]   (398830) : path to executable is /usr/sbin/haproxy
Nov 25 09:06:44 compute-0 neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78[398826]: [WARNING]  (398830) : Exiting Master process...
Nov 25 09:06:44 compute-0 neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78[398826]: [WARNING]  (398830) : Exiting Master process...
Nov 25 09:06:44 compute-0 neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78[398826]: [ALERT]    (398830) : Current worker (398832) exited with code 143 (Terminated)
Nov 25 09:06:44 compute-0 neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78[398826]: [WARNING]  (398830) : All workers exited. Exiting... (0)
Nov 25 09:06:44 compute-0 systemd[1]: libpod-d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91.scope: Deactivated successfully.
Nov 25 09:06:44 compute-0 podman[401031]: 2025-11-25 09:06:44.419388467 +0000 UTC m=+0.056391955 container died d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 09:06:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d698aa358060802f75d0f6998f934a98b483adfd949a7477e1c82db0e0b4996-merged.mount: Deactivated successfully.
Nov 25 09:06:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91-userdata-shm.mount: Deactivated successfully.
Nov 25 09:06:44 compute-0 podman[401031]: 2025-11-25 09:06:44.460876244 +0000 UTC m=+0.097879662 container cleanup d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 09:06:44 compute-0 systemd[1]: libpod-conmon-d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91.scope: Deactivated successfully.
Nov 25 09:06:44 compute-0 podman[401062]: 2025-11-25 09:06:44.55119314 +0000 UTC m=+0.060434105 container remove d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.558 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1e759d56-8fb6-447b-9801-26b9a2d9f471]: (4, ('Tue Nov 25 09:06:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78 (d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91)\nd317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91\nTue Nov 25 09:06:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78 (d317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91)\nd317ae8c5007405fc2e9d775491a99f4a2311cee72727f2819f698c1226b5b91\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.560 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9d299deb-5413-4d2e-b7bf-0485eaa545fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.561 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c644b4d-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.581 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:44 compute-0 kernel: tap6c644b4d-50: left promiscuous mode
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.598 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.599 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.601 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ed235222-a9fb-42fa-ab60-667ddcd659c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.618 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e464db56-c7b1-4a0a-a42c-c7f6e5365035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.620 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cb819a57-d5dc-4cb9-834d-7ad679eccb12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.629 253542 INFO nova.virt.libvirt.driver [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Deleting instance files /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_del
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.630 253542 INFO nova.virt.libvirt.driver [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Deletion of /var/lib/nova/instances/7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0_del complete
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.637 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1afffc7a-9ba4-40d9-8e0e-4a42bdf22bb7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688930, 'reachable_time': 20816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401075, 'error': None, 'target': 'ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.638 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c644b4d-59a5-410c-b57a-1faa3d063b78 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.639 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[1a706114-0dba-425b-8ced-3c42951d6ff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.639 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc72cf9d-bb8d-4968-879b-a65c0e151d35 in datapath 8f08e3a5-c18c-40d6-a052-3721725c11a7 unbound from our chassis
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.640 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f08e3a5-c18c-40d6-a052-3721725c11a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.641 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4381bd9f-a9c1-4a24-b423-8db2fd52132f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.641 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc72cf9d-bb8d-4968-879b-a65c0e151d35 in datapath 8f08e3a5-c18c-40d6-a052-3721725c11a7 unbound from our chassis
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.642 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f08e3a5-c18c-40d6-a052-3721725c11a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:06:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:44.643 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5284d97-2816-4422-a6bf-2090a04ad274]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.694 253542 INFO nova.compute.manager [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Took 0.94 seconds to destroy the instance on the hypervisor.
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.694 253542 DEBUG oslo.service.loopingcall [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.695 253542 DEBUG nova.compute.manager [-] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:06:44 compute-0 nova_compute[253538]: 2025-11-25 09:06:44.695 253542 DEBUG nova.network.neutron [-] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:06:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d6c644b4d\x2d59a5\x2d410c\x2db57a\x2d1faa3d063b78.mount: Deactivated successfully.
Nov 25 09:06:45 compute-0 ceph-mon[75015]: pgmap v2578: 321 pgs: 321 active+clean; 167 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 31 KiB/s wr, 31 op/s
Nov 25 09:06:45 compute-0 nova_compute[253538]: 2025-11-25 09:06:45.562 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:06:45 compute-0 nova_compute[253538]: 2025-11-25 09:06:45.563 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:06:45 compute-0 nova_compute[253538]: 2025-11-25 09:06:45.663 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-changed-bc72cf9d-bb8d-4968-879b-a65c0e151d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:45 compute-0 nova_compute[253538]: 2025-11-25 09:06:45.663 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Refreshing instance network info cache due to event network-changed-bc72cf9d-bb8d-4968-879b-a65c0e151d35. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:06:45 compute-0 nova_compute[253538]: 2025-11-25 09:06:45.664 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:06:45 compute-0 nova_compute[253538]: 2025-11-25 09:06:45.665 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:06:45 compute-0 nova_compute[253538]: 2025-11-25 09:06:45.665 253542 DEBUG nova.network.neutron [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Refreshing network info cache for port bc72cf9d-bb8d-4968-879b-a65c0e151d35 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:06:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 148 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 31 KiB/s wr, 44 op/s
Nov 25 09:06:46 compute-0 nova_compute[253538]: 2025-11-25 09:06:46.834 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "7aeb9ccf-2506-41d1-92c2-c72892096857" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:46 compute-0 nova_compute[253538]: 2025-11-25 09:06:46.835 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:46 compute-0 nova_compute[253538]: 2025-11-25 09:06:46.852 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:06:46 compute-0 nova_compute[253538]: 2025-11-25 09:06:46.929 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:46 compute-0 nova_compute[253538]: 2025-11-25 09:06:46.930 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:46 compute-0 nova_compute[253538]: 2025-11-25 09:06:46.936 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:06:46 compute-0 nova_compute[253538]: 2025-11-25 09:06:46.936 253542 INFO nova.compute.claims [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.061 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.212 253542 DEBUG nova.network.neutron [-] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.236 253542 INFO nova.compute.manager [-] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Took 2.54 seconds to deallocate network for instance.
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.279 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:47 compute-0 ceph-mon[75015]: pgmap v2579: 321 pgs: 321 active+clean; 148 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 31 KiB/s wr, 44 op/s
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.443 253542 DEBUG nova.network.neutron [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updated VIF entry in instance network info cache for port bc72cf9d-bb8d-4968-879b-a65c0e151d35. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.444 253542 DEBUG nova.network.neutron [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [{"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "address": "fa:16:3e:6a:41:4a", "network": {"id": "6c644b4d-59a5-410c-b57a-1faa3d063b78", "bridge": "br-int", "label": "tempest-network-smoke--1048784636", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:414a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape64e0c93-9f", "ovs_interfaceid": "e64e0c93-9ff8-4b26-9a7e-1bae8b024966", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.465 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.466 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-unplugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.467 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.467 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.467 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.468 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] No waiting events found dispatching network-vif-unplugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.468 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-unplugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.469 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.469 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.469 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.470 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.470 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] No waiting events found dispatching network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.470 253542 WARNING nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received unexpected event network-vif-plugged-bc72cf9d-bb8d-4968-879b-a65c0e151d35 for instance with vm_state active and task_state deleting.
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.471 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-unplugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.471 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.471 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.472 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.472 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] No waiting events found dispatching network-vif-unplugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.472 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-unplugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.473 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.473 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.473 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.474 253542 DEBUG oslo_concurrency.lockutils [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.474 253542 DEBUG nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] No waiting events found dispatching network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.474 253542 WARNING nova.compute.manager [req-a58857d1-c18a-476c-8e45-ec4dd2c08fce req-e81c591c-aa36-4843-bae6-3a77afa9e8f2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received unexpected event network-vif-plugged-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 for instance with vm_state active and task_state deleting.
Nov 25 09:06:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:06:47 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3491286695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.496 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.503 253542 DEBUG nova.compute.provider_tree [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.518 253542 DEBUG nova.scheduler.client.report [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.538 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.539 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.541 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.596 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.596 253542 DEBUG nova.network.neutron [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.617 253542 INFO nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.626 253542 DEBUG oslo_concurrency.processutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.666 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.754 253542 DEBUG nova.compute.manager [req-099a01da-4ee0-4aa3-94ef-955dc62b0c85 req-72b70455-460d-4313-aea8-352d495e9976 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-deleted-e64e0c93-9ff8-4b26-9a7e-1bae8b024966 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.755 253542 INFO nova.compute.manager [req-099a01da-4ee0-4aa3-94ef-955dc62b0c85 req-72b70455-460d-4313-aea8-352d495e9976 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Neutron deleted interface e64e0c93-9ff8-4b26-9a7e-1bae8b024966; detaching it from the instance and deleting it from the info cache
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.755 253542 DEBUG nova.network.neutron [req-099a01da-4ee0-4aa3-94ef-955dc62b0c85 req-72b70455-460d-4313-aea8-352d495e9976 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [{"id": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "address": "fa:16:3e:b7:a3:07", "network": {"id": "8f08e3a5-c18c-40d6-a052-3721725c11a7", "bridge": "br-int", "label": "tempest-network-smoke--1961352312", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc72cf9d-bb", "ovs_interfaceid": "bc72cf9d-bb8d-4968-879b-a65c0e151d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.766 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.767 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.767 253542 INFO nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Creating image(s)
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.788 253542 DEBUG nova.storage.rbd_utils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image 7aeb9ccf-2506-41d1-92c2-c72892096857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.810 253542 DEBUG nova.storage.rbd_utils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image 7aeb9ccf-2506-41d1-92c2-c72892096857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.842 253542 DEBUG nova.storage.rbd_utils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image 7aeb9ccf-2506-41d1-92c2-c72892096857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.847 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.879 253542 DEBUG nova.compute.manager [req-099a01da-4ee0-4aa3-94ef-955dc62b0c85 req-72b70455-460d-4313-aea8-352d495e9976 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Detach interface failed, port_id=e64e0c93-9ff8-4b26-9a7e-1bae8b024966, reason: Instance 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.880 253542 DEBUG nova.compute.manager [req-099a01da-4ee0-4aa3-94ef-955dc62b0c85 req-72b70455-460d-4313-aea8-352d495e9976 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Received event network-vif-deleted-bc72cf9d-bb8d-4968-879b-a65c0e151d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.880 253542 INFO nova.compute.manager [req-099a01da-4ee0-4aa3-94ef-955dc62b0c85 req-72b70455-460d-4313-aea8-352d495e9976 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Neutron deleted interface bc72cf9d-bb8d-4968-879b-a65c0e151d35; detaching it from the instance and deleting it from the info cache
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.881 253542 DEBUG nova.network.neutron [req-099a01da-4ee0-4aa3-94ef-955dc62b0c85 req-72b70455-460d-4313-aea8-352d495e9976 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.902 253542 DEBUG nova.compute.manager [req-099a01da-4ee0-4aa3-94ef-955dc62b0c85 req-72b70455-460d-4313-aea8-352d495e9976 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Detach interface failed, port_id=bc72cf9d-bb8d-4968-879b-a65c0e151d35, reason: Instance 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.917 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.918 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.919 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.919 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.940 253542 DEBUG nova.storage.rbd_utils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image 7aeb9ccf-2506-41d1-92c2-c72892096857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:06:47 compute-0 nova_compute[253538]: 2025-11-25 09:06:47.943 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7aeb9ccf-2506-41d1-92c2-c72892096857_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:06:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:06:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2602972307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:06:48 compute-0 nova_compute[253538]: 2025-11-25 09:06:48.136 253542 DEBUG oslo_concurrency.processutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:48 compute-0 nova_compute[253538]: 2025-11-25 09:06:48.143 253542 DEBUG nova.compute.provider_tree [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:06:48 compute-0 nova_compute[253538]: 2025-11-25 09:06:48.160 253542 DEBUG nova.scheduler.client.report [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:06:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 115 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 31 KiB/s wr, 49 op/s
Nov 25 09:06:48 compute-0 nova_compute[253538]: 2025-11-25 09:06:48.195 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:48 compute-0 nova_compute[253538]: 2025-11-25 09:06:48.234 253542 INFO nova.scheduler.client.report [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0
Nov 25 09:06:48 compute-0 nova_compute[253538]: 2025-11-25 09:06:48.309 253542 DEBUG nova.policy [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aef72e2ffce442d1848c4753c324ae92', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8771100a91ef4eb3b58cc4840f6154b4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:06:48 compute-0 nova_compute[253538]: 2025-11-25 09:06:48.316 253542 DEBUG oslo_concurrency.lockutils [None req-cb3238fb-c719-4d4a-9ae1-20a2e4b8f3bf c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:48 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 25 09:06:48 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3491286695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:06:48 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2602972307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:06:49 compute-0 nova_compute[253538]: 2025-11-25 09:06:49.048 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:49 compute-0 nova_compute[253538]: 2025-11-25 09:06:49.076 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 7aeb9ccf-2506-41d1-92c2-c72892096857_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:49 compute-0 nova_compute[253538]: 2025-11-25 09:06:49.140 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:49 compute-0 nova_compute[253538]: 2025-11-25 09:06:49.145 253542 DEBUG nova.storage.rbd_utils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] resizing rbd image 7aeb9ccf-2506-41d1-92c2-c72892096857_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:06:49 compute-0 nova_compute[253538]: 2025-11-25 09:06:49.195 253542 DEBUG nova.network.neutron [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Successfully created port: 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:06:49 compute-0 nova_compute[253538]: 2025-11-25 09:06:49.254 253542 DEBUG nova.objects.instance [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 7aeb9ccf-2506-41d1-92c2-c72892096857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:06:49 compute-0 nova_compute[253538]: 2025-11-25 09:06:49.265 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:06:49 compute-0 nova_compute[253538]: 2025-11-25 09:06:49.266 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Ensure instance console log exists: /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:06:49 compute-0 nova_compute[253538]: 2025-11-25 09:06:49.266 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:49 compute-0 nova_compute[253538]: 2025-11-25 09:06:49.267 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:49 compute-0 nova_compute[253538]: 2025-11-25 09:06:49.267 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:49 compute-0 ceph-mon[75015]: pgmap v2580: 321 pgs: 321 active+clean; 115 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 31 KiB/s wr, 49 op/s
Nov 25 09:06:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 105 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 592 KiB/s wr, 63 op/s
Nov 25 09:06:50 compute-0 ceph-mon[75015]: pgmap v2581: 321 pgs: 321 active+clean; 105 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 592 KiB/s wr, 63 op/s
Nov 25 09:06:50 compute-0 nova_compute[253538]: 2025-11-25 09:06:50.495 253542 DEBUG nova.network.neutron [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Successfully updated port: 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:06:50 compute-0 nova_compute[253538]: 2025-11-25 09:06:50.533 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:06:50 compute-0 nova_compute[253538]: 2025-11-25 09:06:50.534 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquired lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:06:50 compute-0 nova_compute[253538]: 2025-11-25 09:06:50.534 253542 DEBUG nova.network.neutron [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:06:50 compute-0 nova_compute[253538]: 2025-11-25 09:06:50.626 253542 DEBUG nova.compute.manager [req-f69aaf89-36f2-4f78-ba3d-1223228a2a9b req-218016f0-2840-46b7-95aa-f7b60d64e066 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-changed-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:50 compute-0 nova_compute[253538]: 2025-11-25 09:06:50.626 253542 DEBUG nova.compute.manager [req-f69aaf89-36f2-4f78-ba3d-1223228a2a9b req-218016f0-2840-46b7-95aa-f7b60d64e066 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Refreshing instance network info cache due to event network-changed-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:06:50 compute-0 nova_compute[253538]: 2025-11-25 09:06:50.627 253542 DEBUG oslo_concurrency.lockutils [req-f69aaf89-36f2-4f78-ba3d-1223228a2a9b req-218016f0-2840-46b7-95aa-f7b60d64e066 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:06:50 compute-0 nova_compute[253538]: 2025-11-25 09:06:50.729 253542 DEBUG nova.network.neutron [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:06:51 compute-0 nova_compute[253538]: 2025-11-25 09:06:51.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:06:51 compute-0 nova_compute[253538]: 2025-11-25 09:06:51.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:06:51 compute-0 nova_compute[253538]: 2025-11-25 09:06:51.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:06:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 115 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 774 KiB/s wr, 62 op/s
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.392 253542 DEBUG nova.network.neutron [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updating instance_info_cache with network_info: [{"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.422 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Releasing lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.423 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Instance network_info: |[{"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.423 253542 DEBUG oslo_concurrency.lockutils [req-f69aaf89-36f2-4f78-ba3d-1223228a2a9b req-218016f0-2840-46b7-95aa-f7b60d64e066 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.423 253542 DEBUG nova.network.neutron [req-f69aaf89-36f2-4f78-ba3d-1223228a2a9b req-218016f0-2840-46b7-95aa-f7b60d64e066 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Refreshing network info cache for port 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.426 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Start _get_guest_xml network_info=[{"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.431 253542 WARNING nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.435 253542 DEBUG nova.virt.libvirt.host [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.436 253542 DEBUG nova.virt.libvirt.host [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.441 253542 DEBUG nova.virt.libvirt.host [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.442 253542 DEBUG nova.virt.libvirt.host [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.442 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.443 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.443 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.443 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.444 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.444 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.444 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.444 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.445 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.445 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.445 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.445 253542 DEBUG nova.virt.hardware [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.448 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:06:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:06:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814264828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.903 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.939 253542 DEBUG nova.storage.rbd_utils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image 7aeb9ccf-2506-41d1-92c2-c72892096857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:06:52 compute-0 nova_compute[253538]: 2025-11-25 09:06:52.944 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:06:53 compute-0 ceph-mon[75015]: pgmap v2582: 321 pgs: 321 active+clean; 115 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 774 KiB/s wr, 62 op/s
Nov 25 09:06:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/814264828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:06:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:06:53
Nov 25 09:06:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:06:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:06:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'volumes', 'images', 'backups', 'default.rgw.meta']
Nov 25 09:06:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.369 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061598.3686907, 935c4eb2-999f-40a4-8643-0479d293c149 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.370 253542 INFO nova.compute.manager [-] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] VM Stopped (Lifecycle Event)
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.399 253542 DEBUG nova.compute.manager [None req-8a432def-77de-4807-8e39-dd150b949f05 - - - - - -] [instance: 935c4eb2-999f-40a4-8643-0479d293c149] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:06:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:06:53 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1153513911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.437 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.438 253542 DEBUG nova.virt.libvirt.vif [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:06:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-150948456',display_name='tempest-TestSnapshotPattern-server-150948456',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-150948456',id=140,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLErEV2xwN9mBf+XGKCIgVT/D6OXwB0cwyyX2NBYPEm+JKbk9OpT/b7EfE4XEaPQBYqSc0cfR5p0864dWMnh2OIUXkOANq7ZGUSKBMTzjmr8EUsGjFEiOhtcXJ0bHjOVZQ==',key_name='tempest-TestSnapshotPattern-1105951773',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8771100a91ef4eb3b58cc4840f6154b4',ramdisk_id='',reservation_id='r-9z2sb50j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-569624779',owner_user_name='tempest-TestSnapshotPattern-569624779-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:06:47Z,user_data=None,user_id='aef72e2ffce442d1848c4753c324ae92',uuid=7aeb9ccf-2506-41d1-92c2-c72892096857,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.439 253542 DEBUG nova.network.os_vif_util [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converting VIF {"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.440 253542 DEBUG nova.network.os_vif_util [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:95:36,bridge_name='br-int',has_traffic_filtering=True,id=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddcebf0-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.441 253542 DEBUG nova.objects.instance [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7aeb9ccf-2506-41d1-92c2-c72892096857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.456 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:06:53 compute-0 nova_compute[253538]:   <uuid>7aeb9ccf-2506-41d1-92c2-c72892096857</uuid>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   <name>instance-0000008c</name>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <nova:name>tempest-TestSnapshotPattern-server-150948456</nova:name>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:06:52</nova:creationTime>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:06:53 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:06:53 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:06:53 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:06:53 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:06:53 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:06:53 compute-0 nova_compute[253538]:         <nova:user uuid="aef72e2ffce442d1848c4753c324ae92">tempest-TestSnapshotPattern-569624779-project-member</nova:user>
Nov 25 09:06:53 compute-0 nova_compute[253538]:         <nova:project uuid="8771100a91ef4eb3b58cc4840f6154b4">tempest-TestSnapshotPattern-569624779</nova:project>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:06:53 compute-0 nova_compute[253538]:         <nova:port uuid="0ddcebf0-d7e9-474d-b53b-d1746f5af8f2">
Nov 25 09:06:53 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <system>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <entry name="serial">7aeb9ccf-2506-41d1-92c2-c72892096857</entry>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <entry name="uuid">7aeb9ccf-2506-41d1-92c2-c72892096857</entry>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     </system>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   <os>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   </os>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   <features>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   </features>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7aeb9ccf-2506-41d1-92c2-c72892096857_disk">
Nov 25 09:06:53 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       </source>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:06:53 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/7aeb9ccf-2506-41d1-92c2-c72892096857_disk.config">
Nov 25 09:06:53 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       </source>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:06:53 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:d3:95:36"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <target dev="tap0ddcebf0-d7"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857/console.log" append="off"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <video>
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     </video>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:06:53 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:06:53 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:06:53 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:06:53 compute-0 nova_compute[253538]: </domain>
Nov 25 09:06:53 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.458 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Preparing to wait for external event network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.458 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.458 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.459 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.460 253542 DEBUG nova.virt.libvirt.vif [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:06:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-150948456',display_name='tempest-TestSnapshotPattern-server-150948456',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-150948456',id=140,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLErEV2xwN9mBf+XGKCIgVT/D6OXwB0cwyyX2NBYPEm+JKbk9OpT/b7EfE4XEaPQBYqSc0cfR5p0864dWMnh2OIUXkOANq7ZGUSKBMTzjmr8EUsGjFEiOhtcXJ0bHjOVZQ==',key_name='tempest-TestSnapshotPattern-1105951773',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8771100a91ef4eb3b58cc4840f6154b4',ramdisk_id='',reservation_id='r-9z2sb50j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-569624779',owner_user_name='tempest-TestSnapshotPattern-569624779-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:06:47Z,user_data=None,user_id='aef72e2ffce442d1848c4753c324ae92',uuid=7aeb9ccf-2506-41d1-92c2-c72892096857,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.460 253542 DEBUG nova.network.os_vif_util [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converting VIF {"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.461 253542 DEBUG nova.network.os_vif_util [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:95:36,bridge_name='br-int',has_traffic_filtering=True,id=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddcebf0-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.461 253542 DEBUG os_vif [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:95:36,bridge_name='br-int',has_traffic_filtering=True,id=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddcebf0-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.462 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.462 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.463 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:06:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:06:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:06:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:06:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.468 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.469 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ddcebf0-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.469 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ddcebf0-d7, col_values=(('external_ids', {'iface-id': '0ddcebf0-d7e9-474d-b53b-d1746f5af8f2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:95:36', 'vm-uuid': '7aeb9ccf-2506-41d1-92c2-c72892096857'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.471 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:53 compute-0 NetworkManager[48915]: <info>  [1764061613.4717] manager: (tap0ddcebf0-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/601)
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.473 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:06:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:06:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.476 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.477 253542 INFO os_vif [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:95:36,bridge_name='br-int',has_traffic_filtering=True,id=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddcebf0-d7')
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.535 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.535 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.535 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] No VIF found with MAC fa:16:3e:d3:95:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.536 253542 INFO nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Using config drive
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.556 253542 DEBUG nova.storage.rbd_utils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image 7aeb9ccf-2506-41d1-92c2-c72892096857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.966 253542 DEBUG nova.network.neutron [req-f69aaf89-36f2-4f78-ba3d-1223228a2a9b req-218016f0-2840-46b7-95aa-f7b60d64e066 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updated VIF entry in instance network info cache for port 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.967 253542 DEBUG nova.network.neutron [req-f69aaf89-36f2-4f78-ba3d-1223228a2a9b req-218016f0-2840-46b7-95aa-f7b60d64e066 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updating instance_info_cache with network_info: [{"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:06:53 compute-0 nova_compute[253538]: 2025-11-25 09:06:53.985 253542 DEBUG oslo_concurrency.lockutils [req-f69aaf89-36f2-4f78-ba3d-1223228a2a9b req-218016f0-2840-46b7-95aa-f7b60d64e066 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.037 253542 INFO nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Creating config drive at /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857/disk.config
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.042 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu_vie_v8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:06:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:06:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:06:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:06:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:06:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:06:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:06:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:06:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:06:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.143 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.184 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu_vie_v8" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.207 253542 DEBUG nova.storage.rbd_utils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image 7aeb9ccf-2506-41d1-92c2-c72892096857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.210 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857/disk.config 7aeb9ccf-2506-41d1-92c2-c72892096857_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:06:54 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1153513911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.376 253542 DEBUG oslo_concurrency.processutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857/disk.config 7aeb9ccf-2506-41d1-92c2-c72892096857_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.377 253542 INFO nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Deleting local config drive /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857/disk.config because it was imported into RBD.
Nov 25 09:06:54 compute-0 kernel: tap0ddcebf0-d7: entered promiscuous mode
Nov 25 09:06:54 compute-0 ovn_controller[152859]: 2025-11-25T09:06:54Z|01469|binding|INFO|Claiming lport 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 for this chassis.
Nov 25 09:06:54 compute-0 ovn_controller[152859]: 2025-11-25T09:06:54Z|01470|binding|INFO|0ddcebf0-d7e9-474d-b53b-d1746f5af8f2: Claiming fa:16:3e:d3:95:36 10.100.0.13
Nov 25 09:06:54 compute-0 NetworkManager[48915]: <info>  [1764061614.4329] manager: (tap0ddcebf0-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/602)
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.432 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.441 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:95:36 10.100.0.13'], port_security=['fa:16:3e:d3:95:36 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7aeb9ccf-2506-41d1-92c2-c72892096857', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8771100a91ef4eb3b58cc4840f6154b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b51ac1f2-fc04-45c4-8aed-ff9624bae478', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=774fcdab-a888-48f5-b941-79ea7db76602, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.442 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 in datapath 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c bound to our chassis
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.444 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.448 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:54 compute-0 ovn_controller[152859]: 2025-11-25T09:06:54Z|01471|binding|INFO|Setting lport 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 ovn-installed in OVS
Nov 25 09:06:54 compute-0 ovn_controller[152859]: 2025-11-25T09:06:54Z|01472|binding|INFO|Setting lport 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 up in Southbound
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.453 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.455 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d7625f36-2fd6-41bb-b9e4-498b72edd5b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.456 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c3eb82e-11 in ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.458 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c3eb82e-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.458 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0924495c-3898-4253-9149-3b058444e523]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.459 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[eceedb38-a206-478b-ab41-92ad1878a978]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:54 compute-0 systemd-udevd[401420]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.470 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b1e305-cd78-42d3-9ac8-750033b6d265]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:54 compute-0 NetworkManager[48915]: <info>  [1764061614.4737] device (tap0ddcebf0-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:06:54 compute-0 NetworkManager[48915]: <info>  [1764061614.4757] device (tap0ddcebf0-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:06:54 compute-0 systemd-machined[215790]: New machine qemu-170-instance-0000008c.
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.484 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e03ad70d-d0da-49af-a374-8cf18ed2641e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:54 compute-0 systemd[1]: Started Virtual Machine qemu-170-instance-0000008c.
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.515 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf92dc7-92d4-42ad-adde-80017cc4e018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.520 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[434bb6b9-b889-4f31-8863-658dbc572ed2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:54 compute-0 NetworkManager[48915]: <info>  [1764061614.5218] manager: (tap3c3eb82e-10): new Veth device (/org/freedesktop/NetworkManager/Devices/603)
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.557 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[222b18ea-921a-47a8-b4b8-adc0305b0dbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.558 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.558 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.560 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[70207f56-4fa5-4f35-ba13-ba7928172f95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:54 compute-0 NetworkManager[48915]: <info>  [1764061614.5832] device (tap3c3eb82e-10): carrier: link connected
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.590 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[50a840f6-32eb-4fa3-b69c-91696bf53750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.607 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1499df-a385-4971-9870-bef381feb570]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c3eb82e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:4d:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696918, 'reachable_time': 15963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401454, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.621 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[97cac539-d2b8-4efb-955f-f374f6bf6246]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:4dac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696918, 'tstamp': 696918}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401455, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.639 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cc48c9e8-b2ca-4104-8041-36f4bb3c8e1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c3eb82e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:4d:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696918, 'reachable_time': 15963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 401456, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.667 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3e55cf72-8ec1-4216-a8a4-a987cd3ff715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.691 253542 DEBUG nova.compute.manager [req-ec22508e-1465-4432-b858-aea2fea53b05 req-6a093228-37c1-41d3-8384-a9b388f2e51c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.691 253542 DEBUG oslo_concurrency.lockutils [req-ec22508e-1465-4432-b858-aea2fea53b05 req-6a093228-37c1-41d3-8384-a9b388f2e51c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.691 253542 DEBUG oslo_concurrency.lockutils [req-ec22508e-1465-4432-b858-aea2fea53b05 req-6a093228-37c1-41d3-8384-a9b388f2e51c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.692 253542 DEBUG oslo_concurrency.lockutils [req-ec22508e-1465-4432-b858-aea2fea53b05 req-6a093228-37c1-41d3-8384-a9b388f2e51c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.692 253542 DEBUG nova.compute.manager [req-ec22508e-1465-4432-b858-aea2fea53b05 req-6a093228-37c1-41d3-8384-a9b388f2e51c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Processing event network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.722 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8d50ca00-c2ab-4fd5-a027-edce20d23f4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.723 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c3eb82e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.723 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.724 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c3eb82e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:54 compute-0 kernel: tap3c3eb82e-10: entered promiscuous mode
Nov 25 09:06:54 compute-0 NetworkManager[48915]: <info>  [1764061614.7271] manager: (tap3c3eb82e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/604)
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.729 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.731 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c3eb82e-10, col_values=(('external_ids', {'iface-id': 'aca5006e-311f-469a-ba5d-688da3f7d396'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.732 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:54 compute-0 ovn_controller[152859]: 2025-11-25T09:06:54Z|01473|binding|INFO|Releasing lport aca5006e-311f-469a-ba5d-688da3f7d396 from this chassis (sb_readonly=0)
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.732 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.733 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c3eb82e-1161-4c2f-9fce-53fdf4386d9c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c3eb82e-1161-4c2f-9fce-53fdf4386d9c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.734 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc45e17-9bec-4a0d-b8ea-4195af8d9e43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.735 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/3c3eb82e-1161-4c2f-9fce-53fdf4386d9c.pid.haproxy
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:06:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:06:54.736 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'env', 'PROCESS_TAG=haproxy-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c3eb82e-1161-4c2f-9fce-53fdf4386d9c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:06:54 compute-0 nova_compute[253538]: 2025-11-25 09:06:54.746 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.094 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.096 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061615.094353, 7aeb9ccf-2506-41d1-92c2-c72892096857 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.096 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] VM Started (Lifecycle Event)
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.098 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.103 253542 INFO nova.virt.libvirt.driver [-] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Instance spawned successfully.
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.104 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.115 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.121 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.125 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.126 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.126 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.127 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.127 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.127 253542 DEBUG nova.virt.libvirt.driver [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:06:55 compute-0 podman[401530]: 2025-11-25 09:06:55.149618706 +0000 UTC m=+0.058369277 container create 83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.167 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.167 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061615.096618, 7aeb9ccf-2506-41d1-92c2-c72892096857 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.168 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] VM Paused (Lifecycle Event)
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.187 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.190 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061615.0980508, 7aeb9ccf-2506-41d1-92c2-c72892096857 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.190 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] VM Resumed (Lifecycle Event)
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.195 253542 INFO nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Took 7.43 seconds to spawn the instance on the hypervisor.
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.195 253542 DEBUG nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:06:55 compute-0 systemd[1]: Started libpod-conmon-83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41.scope.
Nov 25 09:06:55 compute-0 podman[401530]: 2025-11-25 09:06:55.116939708 +0000 UTC m=+0.025690279 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.217 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.220 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:06:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c24402a25caa013e206ed4779ad47cd8d103184e7670c046105bc5ea23a229e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:06:55 compute-0 podman[401530]: 2025-11-25 09:06:55.246792358 +0000 UTC m=+0.155542929 container init 83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 09:06:55 compute-0 podman[401530]: 2025-11-25 09:06:55.25204511 +0000 UTC m=+0.160795661 container start 83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:06:55 compute-0 ceph-mon[75015]: pgmap v2583: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.256 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:06:55 compute-0 neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c[401544]: [NOTICE]   (401548) : New worker (401550) forked
Nov 25 09:06:55 compute-0 neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c[401544]: [NOTICE]   (401548) : Loading success.
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.284 253542 INFO nova.compute.manager [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Took 8.37 seconds to build instance.
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.301 253542 DEBUG oslo_concurrency.lockutils [None req-1295a4e2-e903-455c-be0d-3566e41a5fe7 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:06:55 compute-0 ovn_controller[152859]: 2025-11-25T09:06:55Z|01474|binding|INFO|Releasing lport aca5006e-311f-469a-ba5d-688da3f7d396 from this chassis (sb_readonly=0)
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.686 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:55 compute-0 ovn_controller[152859]: 2025-11-25T09:06:55Z|01475|binding|INFO|Releasing lport aca5006e-311f-469a-ba5d-688da3f7d396 from this chassis (sb_readonly=0)
Nov 25 09:06:55 compute-0 nova_compute[253538]: 2025-11-25 09:06:55.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 09:06:56 compute-0 nova_compute[253538]: 2025-11-25 09:06:56.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:06:56 compute-0 nova_compute[253538]: 2025-11-25 09:06:56.752 253542 DEBUG nova.compute.manager [req-e8f968e6-26c3-4a57-bb90-e33c6d2198c3 req-247faed9-d7d7-4d86-86e0-ac0c849fd974 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:06:56 compute-0 nova_compute[253538]: 2025-11-25 09:06:56.753 253542 DEBUG oslo_concurrency.lockutils [req-e8f968e6-26c3-4a57-bb90-e33c6d2198c3 req-247faed9-d7d7-4d86-86e0-ac0c849fd974 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:06:56 compute-0 nova_compute[253538]: 2025-11-25 09:06:56.753 253542 DEBUG oslo_concurrency.lockutils [req-e8f968e6-26c3-4a57-bb90-e33c6d2198c3 req-247faed9-d7d7-4d86-86e0-ac0c849fd974 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:06:56 compute-0 nova_compute[253538]: 2025-11-25 09:06:56.753 253542 DEBUG oslo_concurrency.lockutils [req-e8f968e6-26c3-4a57-bb90-e33c6d2198c3 req-247faed9-d7d7-4d86-86e0-ac0c849fd974 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:06:56 compute-0 nova_compute[253538]: 2025-11-25 09:06:56.753 253542 DEBUG nova.compute.manager [req-e8f968e6-26c3-4a57-bb90-e33c6d2198c3 req-247faed9-d7d7-4d86-86e0-ac0c849fd974 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] No waiting events found dispatching network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:06:56 compute-0 nova_compute[253538]: 2025-11-25 09:06:56.753 253542 WARNING nova.compute.manager [req-e8f968e6-26c3-4a57-bb90-e33c6d2198c3 req-247faed9-d7d7-4d86-86e0-ac0c849fd974 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received unexpected event network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 for instance with vm_state active and task_state None.
Nov 25 09:06:57 compute-0 ceph-mon[75015]: pgmap v2584: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 09:06:57 compute-0 podman[401560]: 2025-11-25 09:06:57.818527018 +0000 UTC m=+0.066082937 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:06:57 compute-0 podman[401561]: 2025-11-25 09:06:57.831234623 +0000 UTC m=+0.074206157 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 09:06:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:06:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 196 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Nov 25 09:06:58 compute-0 nova_compute[253538]: 2025-11-25 09:06:58.471 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:58 compute-0 nova_compute[253538]: 2025-11-25 09:06:58.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:06:58 compute-0 nova_compute[253538]: 2025-11-25 09:06:58.563 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:06:59 compute-0 nova_compute[253538]: 2025-11-25 09:06:59.010 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061604.0089462, 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:06:59 compute-0 nova_compute[253538]: 2025-11-25 09:06:59.010 253542 INFO nova.compute.manager [-] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] VM Stopped (Lifecycle Event)
Nov 25 09:06:59 compute-0 nova_compute[253538]: 2025-11-25 09:06:59.032 253542 DEBUG nova.compute.manager [None req-b75574f4-44a0-461d-acd2-d1853650cd33 - - - - - -] [instance: 7bc2f34f-b7dd-4b8d-99c0-1fc01d3c2aa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:06:59 compute-0 nova_compute[253538]: 2025-11-25 09:06:59.145 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:06:59 compute-0 ceph-mon[75015]: pgmap v2585: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 196 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.282927) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061619282976, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 1222, "num_deletes": 251, "total_data_size": 1847332, "memory_usage": 1876624, "flush_reason": "Manual Compaction"}
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061619298462, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 1818565, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53165, "largest_seqno": 54386, "table_properties": {"data_size": 1812679, "index_size": 3217, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12467, "raw_average_key_size": 19, "raw_value_size": 1800941, "raw_average_value_size": 2872, "num_data_blocks": 144, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061498, "oldest_key_time": 1764061498, "file_creation_time": 1764061619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 15680 microseconds, and 5143 cpu microseconds.
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.298614) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 1818565 bytes OK
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.298661) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.301824) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.301846) EVENT_LOG_v1 {"time_micros": 1764061619301840, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.301863) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1841767, prev total WAL file size 1841767, number of live WAL files 2.
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.302579) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(1775KB)], [125(8141KB)]
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061619302602, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 10155291, "oldest_snapshot_seqno": -1}
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7324 keys, 8477771 bytes, temperature: kUnknown
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061619361070, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8477771, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8432044, "index_size": 26286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18373, "raw_key_size": 192146, "raw_average_key_size": 26, "raw_value_size": 8304303, "raw_average_value_size": 1133, "num_data_blocks": 1019, "num_entries": 7324, "num_filter_entries": 7324, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.361300) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8477771 bytes
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.362644) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.4 rd, 144.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 8.0 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(10.2) write-amplify(4.7) OK, records in: 7838, records dropped: 514 output_compression: NoCompression
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.362658) EVENT_LOG_v1 {"time_micros": 1764061619362651, "job": 76, "event": "compaction_finished", "compaction_time_micros": 58555, "compaction_time_cpu_micros": 20015, "output_level": 6, "num_output_files": 1, "total_output_size": 8477771, "num_input_records": 7838, "num_output_records": 7324, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061619363128, "job": 76, "event": "table_file_deletion", "file_number": 127}
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061619364323, "job": 76, "event": "table_file_deletion", "file_number": 125}
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.302533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.364380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.364384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.364385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.364387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:06:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:06:59.364388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:07:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 25 09:07:00 compute-0 NetworkManager[48915]: <info>  [1764061620.3404] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/605)
Nov 25 09:07:00 compute-0 NetworkManager[48915]: <info>  [1764061620.3415] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/606)
Nov 25 09:07:00 compute-0 nova_compute[253538]: 2025-11-25 09:07:00.342 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:00 compute-0 nova_compute[253538]: 2025-11-25 09:07:00.450 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:00 compute-0 ovn_controller[152859]: 2025-11-25T09:07:00Z|01476|binding|INFO|Releasing lport aca5006e-311f-469a-ba5d-688da3f7d396 from this chassis (sb_readonly=0)
Nov 25 09:07:00 compute-0 nova_compute[253538]: 2025-11-25 09:07:00.459 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:01 compute-0 ceph-mon[75015]: pgmap v2586: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 25 09:07:01 compute-0 nova_compute[253538]: 2025-11-25 09:07:01.508 253542 DEBUG nova.compute.manager [req-9ec32d62-6dfe-4056-bac0-514538e2c950 req-7baf3401-dbd0-46dd-96ef-6cae0c493e59 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-changed-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:07:01 compute-0 nova_compute[253538]: 2025-11-25 09:07:01.509 253542 DEBUG nova.compute.manager [req-9ec32d62-6dfe-4056-bac0-514538e2c950 req-7baf3401-dbd0-46dd-96ef-6cae0c493e59 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Refreshing instance network info cache due to event network-changed-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:07:01 compute-0 nova_compute[253538]: 2025-11-25 09:07:01.509 253542 DEBUG oslo_concurrency.lockutils [req-9ec32d62-6dfe-4056-bac0-514538e2c950 req-7baf3401-dbd0-46dd-96ef-6cae0c493e59 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:07:01 compute-0 nova_compute[253538]: 2025-11-25 09:07:01.509 253542 DEBUG oslo_concurrency.lockutils [req-9ec32d62-6dfe-4056-bac0-514538e2c950 req-7baf3401-dbd0-46dd-96ef-6cae0c493e59 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:07:01 compute-0 nova_compute[253538]: 2025-11-25 09:07:01.509 253542 DEBUG nova.network.neutron [req-9ec32d62-6dfe-4056-bac0-514538e2c950 req-7baf3401-dbd0-46dd-96ef-6cae0c493e59 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Refreshing network info cache for port 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:07:01 compute-0 nova_compute[253538]: 2025-11-25 09:07:01.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:07:01 compute-0 nova_compute[253538]: 2025-11-25 09:07:01.571 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:01 compute-0 nova_compute[253538]: 2025-11-25 09:07:01.572 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:01 compute-0 nova_compute[253538]: 2025-11-25 09:07:01.572 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:01 compute-0 nova_compute[253538]: 2025-11-25 09:07:01.572 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:07:01 compute-0 nova_compute[253538]: 2025-11-25 09:07:01.572 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:07:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:07:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2433178434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:07:02 compute-0 nova_compute[253538]: 2025-11-25 09:07:02.024 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:07:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 95 op/s
Nov 25 09:07:02 compute-0 nova_compute[253538]: 2025-11-25 09:07:02.206 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:07:02 compute-0 nova_compute[253538]: 2025-11-25 09:07:02.206 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:07:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2433178434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:07:02 compute-0 nova_compute[253538]: 2025-11-25 09:07:02.372 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:07:02 compute-0 nova_compute[253538]: 2025-11-25 09:07:02.374 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3537MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:07:02 compute-0 nova_compute[253538]: 2025-11-25 09:07:02.374 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:02 compute-0 nova_compute[253538]: 2025-11-25 09:07:02.374 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:02 compute-0 nova_compute[253538]: 2025-11-25 09:07:02.446 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 7aeb9ccf-2506-41d1-92c2-c72892096857 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:07:02 compute-0 nova_compute[253538]: 2025-11-25 09:07:02.447 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:07:02 compute-0 nova_compute[253538]: 2025-11-25 09:07:02.447 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:07:02 compute-0 nova_compute[253538]: 2025-11-25 09:07:02.492 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:07:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:07:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3885092081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:07:02 compute-0 nova_compute[253538]: 2025-11-25 09:07:02.953 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:07:02 compute-0 nova_compute[253538]: 2025-11-25 09:07:02.959 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:07:02 compute-0 nova_compute[253538]: 2025-11-25 09:07:02.981 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:07:03 compute-0 nova_compute[253538]: 2025-11-25 09:07:03.015 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:07:03 compute-0 nova_compute[253538]: 2025-11-25 09:07:03.016 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:07:03 compute-0 ceph-mon[75015]: pgmap v2587: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 95 op/s
Nov 25 09:07:03 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3885092081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:07:03 compute-0 nova_compute[253538]: 2025-11-25 09:07:03.472 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:03 compute-0 nova_compute[253538]: 2025-11-25 09:07:03.768 253542 DEBUG nova.network.neutron [req-9ec32d62-6dfe-4056-bac0-514538e2c950 req-7baf3401-dbd0-46dd-96ef-6cae0c493e59 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updated VIF entry in instance network info cache for port 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:07:03 compute-0 nova_compute[253538]: 2025-11-25 09:07:03.769 253542 DEBUG nova.network.neutron [req-9ec32d62-6dfe-4056-bac0-514538e2c950 req-7baf3401-dbd0-46dd-96ef-6cae0c493e59 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updating instance_info_cache with network_info: [{"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:07:03 compute-0 nova_compute[253538]: 2025-11-25 09:07:03.836 253542 DEBUG oslo_concurrency.lockutils [req-9ec32d62-6dfe-4056-bac0-514538e2c950 req-7baf3401-dbd0-46dd-96ef-6cae0c493e59 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:07:04 compute-0 nova_compute[253538]: 2025-11-25 09:07:04.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 83 op/s
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:07:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:07:04 compute-0 sudo[401645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:07:04 compute-0 sudo[401645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:04 compute-0 sudo[401645]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:04 compute-0 sudo[401670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:07:04 compute-0 sudo[401670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:04 compute-0 sudo[401670]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:04 compute-0 sudo[401695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:07:04 compute-0 sudo[401695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:04 compute-0 sudo[401695]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:04 compute-0 sudo[401720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:07:04 compute-0 sudo[401720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:05 compute-0 sudo[401720]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:07:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:07:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:07:05 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:07:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:07:05 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:07:05 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 3470c4f4-eb92-45cc-9d5a-111c323537b1 does not exist
Nov 25 09:07:05 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev fd2a2eb5-8f50-4371-9a6c-1151a7d23693 does not exist
Nov 25 09:07:05 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 222a9998-08f3-4f1d-871d-424dd161aeb2 does not exist
Nov 25 09:07:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:07:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:07:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:07:05 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:07:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:07:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:07:05 compute-0 sudo[401776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:07:05 compute-0 ceph-mon[75015]: pgmap v2588: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 83 op/s
Nov 25 09:07:05 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:07:05 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:07:05 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:07:05 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:07:05 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:07:05 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:07:05 compute-0 sudo[401776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:05 compute-0 sudo[401776]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:05 compute-0 sudo[401801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:07:05 compute-0 sudo[401801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:05 compute-0 sudo[401801]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:05 compute-0 sudo[401826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:07:05 compute-0 sudo[401826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:05 compute-0 sudo[401826]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:05 compute-0 sudo[401851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:07:05 compute-0 sudo[401851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:05 compute-0 podman[401916]: 2025-11-25 09:07:05.829838608 +0000 UTC m=+0.057215326 container create 7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:07:05 compute-0 systemd[1]: Started libpod-conmon-7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9.scope.
Nov 25 09:07:05 compute-0 podman[401916]: 2025-11-25 09:07:05.797704115 +0000 UTC m=+0.025080843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:07:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:07:05 compute-0 podman[401916]: 2025-11-25 09:07:05.91229038 +0000 UTC m=+0.139667128 container init 7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ardinghelli, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 09:07:05 compute-0 podman[401916]: 2025-11-25 09:07:05.918498498 +0000 UTC m=+0.145875216 container start 7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:07:05 compute-0 systemd[1]: libpod-7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9.scope: Deactivated successfully.
Nov 25 09:07:05 compute-0 great_ardinghelli[401932]: 167 167
Nov 25 09:07:05 compute-0 conmon[401932]: conmon 7135a79d554a31a47d48 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9.scope/container/memory.events
Nov 25 09:07:05 compute-0 podman[401916]: 2025-11-25 09:07:05.926026083 +0000 UTC m=+0.153402831 container attach 7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ardinghelli, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:07:05 compute-0 podman[401916]: 2025-11-25 09:07:05.927259056 +0000 UTC m=+0.154635794 container died 7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 09:07:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a363d05051ff9e0334c0cefbc004fd3d501e926facac4bf1fd0810e422858d56-merged.mount: Deactivated successfully.
Nov 25 09:07:05 compute-0 podman[401916]: 2025-11-25 09:07:05.977641786 +0000 UTC m=+0.205018504 container remove 7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 25 09:07:05 compute-0 systemd[1]: libpod-conmon-7135a79d554a31a47d48454889f4db8ee54a0c7653bda52e8bbde46f4b337fc9.scope: Deactivated successfully.
Nov 25 09:07:06 compute-0 podman[401935]: 2025-11-25 09:07:06.011151467 +0000 UTC m=+0.099737943 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 25 09:07:06 compute-0 podman[401981]: 2025-11-25 09:07:06.15364833 +0000 UTC m=+0.047856281 container create 297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:07:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:07:06 compute-0 systemd[1]: Started libpod-conmon-297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900.scope.
Nov 25 09:07:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f17f3cebb8fb7e865d73739b74ff0516aa8c117aba2842de043b36e7a8bace/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f17f3cebb8fb7e865d73739b74ff0516aa8c117aba2842de043b36e7a8bace/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f17f3cebb8fb7e865d73739b74ff0516aa8c117aba2842de043b36e7a8bace/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f17f3cebb8fb7e865d73739b74ff0516aa8c117aba2842de043b36e7a8bace/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f17f3cebb8fb7e865d73739b74ff0516aa8c117aba2842de043b36e7a8bace/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:07:06 compute-0 podman[401981]: 2025-11-25 09:07:06.132828465 +0000 UTC m=+0.027036446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:07:06 compute-0 podman[401981]: 2025-11-25 09:07:06.257637197 +0000 UTC m=+0.151845198 container init 297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:07:06 compute-0 podman[401981]: 2025-11-25 09:07:06.263724663 +0000 UTC m=+0.157932624 container start 297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 09:07:06 compute-0 podman[401981]: 2025-11-25 09:07:06.283378737 +0000 UTC m=+0.177586688 container attach 297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 09:07:07 compute-0 admiring_darwin[401997]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:07:07 compute-0 admiring_darwin[401997]: --> relative data size: 1.0
Nov 25 09:07:07 compute-0 admiring_darwin[401997]: --> All data devices are unavailable
Nov 25 09:07:07 compute-0 systemd[1]: libpod-297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900.scope: Deactivated successfully.
Nov 25 09:07:07 compute-0 systemd[1]: libpod-297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900.scope: Consumed 1.013s CPU time.
Nov 25 09:07:07 compute-0 podman[401981]: 2025-11-25 09:07:07.33300373 +0000 UTC m=+1.227211701 container died 297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:07:07 compute-0 ceph-mon[75015]: pgmap v2589: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:07:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-47f17f3cebb8fb7e865d73739b74ff0516aa8c117aba2842de043b36e7a8bace-merged.mount: Deactivated successfully.
Nov 25 09:07:07 compute-0 podman[401981]: 2025-11-25 09:07:07.83846033 +0000 UTC m=+1.732668321 container remove 297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 09:07:07 compute-0 sudo[401851]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:07 compute-0 systemd[1]: libpod-conmon-297489f0950dc8b7d355c380d9e17855ccae3143e9edc8ad6aa4db857daca900.scope: Deactivated successfully.
Nov 25 09:07:07 compute-0 sudo[402035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:07:07 compute-0 sudo[402035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:07 compute-0 sudo[402035]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:07 compute-0 sudo[402061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:07:07 compute-0 sudo[402061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:07 compute-0 sudo[402061]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:08 compute-0 sudo[402086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:07:08 compute-0 sudo[402086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:08 compute-0 sudo[402086]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:07:08 compute-0 sudo[402111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:07:08 compute-0 sudo[402111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 25 09:07:08 compute-0 nova_compute[253538]: 2025-11-25 09:07:08.474 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:08 compute-0 podman[402175]: 2025-11-25 09:07:08.491816432 +0000 UTC m=+0.040703928 container create ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:07:08 compute-0 systemd[1]: Started libpod-conmon-ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08.scope.
Nov 25 09:07:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:07:08 compute-0 podman[402175]: 2025-11-25 09:07:08.474348296 +0000 UTC m=+0.023235822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:07:08 compute-0 podman[402175]: 2025-11-25 09:07:08.569936704 +0000 UTC m=+0.118824220 container init ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:07:08 compute-0 podman[402175]: 2025-11-25 09:07:08.576576966 +0000 UTC m=+0.125464462 container start ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 09:07:08 compute-0 podman[402175]: 2025-11-25 09:07:08.580915003 +0000 UTC m=+0.129802519 container attach ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 09:07:08 compute-0 practical_blackburn[402191]: 167 167
Nov 25 09:07:08 compute-0 systemd[1]: libpod-ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08.scope: Deactivated successfully.
Nov 25 09:07:08 compute-0 podman[402175]: 2025-11-25 09:07:08.583295538 +0000 UTC m=+0.132183034 container died ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 09:07:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-b19b6fbfffe4eaefbb8866cae7a026dc3431cc17e5ec8d394a10331157f13130-merged.mount: Deactivated successfully.
Nov 25 09:07:08 compute-0 podman[402175]: 2025-11-25 09:07:08.626913614 +0000 UTC m=+0.175801110 container remove ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:07:08 compute-0 systemd[1]: libpod-conmon-ef635661db48d779fe6d66362ba4f26b1de8812b6f7fbdeede3c6700eab46c08.scope: Deactivated successfully.
Nov 25 09:07:08 compute-0 ovn_controller[152859]: 2025-11-25T09:07:08Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:95:36 10.100.0.13
Nov 25 09:07:08 compute-0 ovn_controller[152859]: 2025-11-25T09:07:08Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:95:36 10.100.0.13
Nov 25 09:07:08 compute-0 podman[402214]: 2025-11-25 09:07:08.784627211 +0000 UTC m=+0.041027667 container create 5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 09:07:08 compute-0 systemd[1]: Started libpod-conmon-5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706.scope.
Nov 25 09:07:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:07:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcf0ce642ae64399c0c64264db2b35a04adce66219c7ca3a2ddb533e15d9dd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:07:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcf0ce642ae64399c0c64264db2b35a04adce66219c7ca3a2ddb533e15d9dd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:07:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcf0ce642ae64399c0c64264db2b35a04adce66219c7ca3a2ddb533e15d9dd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:07:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcf0ce642ae64399c0c64264db2b35a04adce66219c7ca3a2ddb533e15d9dd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:07:08 compute-0 podman[402214]: 2025-11-25 09:07:08.768933345 +0000 UTC m=+0.025333811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:07:08 compute-0 podman[402214]: 2025-11-25 09:07:08.872841309 +0000 UTC m=+0.129241785 container init 5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 09:07:08 compute-0 podman[402214]: 2025-11-25 09:07:08.888145025 +0000 UTC m=+0.144545511 container start 5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:07:08 compute-0 podman[402214]: 2025-11-25 09:07:08.892215896 +0000 UTC m=+0.148616372 container attach 5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 09:07:09 compute-0 nova_compute[253538]: 2025-11-25 09:07:09.150 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:09 compute-0 ceph-mon[75015]: pgmap v2590: 321 pgs: 321 active+clean; 134 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 25 09:07:09 compute-0 festive_wu[402230]: {
Nov 25 09:07:09 compute-0 festive_wu[402230]:     "0": [
Nov 25 09:07:09 compute-0 festive_wu[402230]:         {
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "devices": [
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "/dev/loop3"
Nov 25 09:07:09 compute-0 festive_wu[402230]:             ],
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "lv_name": "ceph_lv0",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "lv_size": "21470642176",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "name": "ceph_lv0",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "tags": {
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.cluster_name": "ceph",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.crush_device_class": "",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.encrypted": "0",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.osd_id": "0",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.type": "block",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.vdo": "0"
Nov 25 09:07:09 compute-0 festive_wu[402230]:             },
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "type": "block",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "vg_name": "ceph_vg0"
Nov 25 09:07:09 compute-0 festive_wu[402230]:         }
Nov 25 09:07:09 compute-0 festive_wu[402230]:     ],
Nov 25 09:07:09 compute-0 festive_wu[402230]:     "1": [
Nov 25 09:07:09 compute-0 festive_wu[402230]:         {
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "devices": [
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "/dev/loop4"
Nov 25 09:07:09 compute-0 festive_wu[402230]:             ],
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "lv_name": "ceph_lv1",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "lv_size": "21470642176",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "name": "ceph_lv1",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "tags": {
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.cluster_name": "ceph",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.crush_device_class": "",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.encrypted": "0",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.osd_id": "1",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.type": "block",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.vdo": "0"
Nov 25 09:07:09 compute-0 festive_wu[402230]:             },
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "type": "block",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "vg_name": "ceph_vg1"
Nov 25 09:07:09 compute-0 festive_wu[402230]:         }
Nov 25 09:07:09 compute-0 festive_wu[402230]:     ],
Nov 25 09:07:09 compute-0 festive_wu[402230]:     "2": [
Nov 25 09:07:09 compute-0 festive_wu[402230]:         {
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "devices": [
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "/dev/loop5"
Nov 25 09:07:09 compute-0 festive_wu[402230]:             ],
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "lv_name": "ceph_lv2",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "lv_size": "21470642176",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "name": "ceph_lv2",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "tags": {
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.cluster_name": "ceph",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.crush_device_class": "",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.encrypted": "0",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.osd_id": "2",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.type": "block",
Nov 25 09:07:09 compute-0 festive_wu[402230]:                 "ceph.vdo": "0"
Nov 25 09:07:09 compute-0 festive_wu[402230]:             },
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "type": "block",
Nov 25 09:07:09 compute-0 festive_wu[402230]:             "vg_name": "ceph_vg2"
Nov 25 09:07:09 compute-0 festive_wu[402230]:         }
Nov 25 09:07:09 compute-0 festive_wu[402230]:     ]
Nov 25 09:07:09 compute-0 festive_wu[402230]: }
Nov 25 09:07:09 compute-0 systemd[1]: libpod-5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706.scope: Deactivated successfully.
Nov 25 09:07:09 compute-0 podman[402214]: 2025-11-25 09:07:09.706195753 +0000 UTC m=+0.962596219 container died 5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:07:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dcf0ce642ae64399c0c64264db2b35a04adce66219c7ca3a2ddb533e15d9dd7-merged.mount: Deactivated successfully.
Nov 25 09:07:09 compute-0 podman[402214]: 2025-11-25 09:07:09.784917003 +0000 UTC m=+1.041317469 container remove 5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:07:09 compute-0 systemd[1]: libpod-conmon-5aa1fcd0435785699f58110ce7db204d4dc901afe78a18ffd9d006c5445b6706.scope: Deactivated successfully.
Nov 25 09:07:09 compute-0 sudo[402111]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:09 compute-0 sudo[402251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:07:09 compute-0 sudo[402251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:09 compute-0 sudo[402251]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:09 compute-0 sudo[402276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:07:09 compute-0 sudo[402276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:09 compute-0 sudo[402276]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:09 compute-0 nova_compute[253538]: 2025-11-25 09:07:09.986 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:10 compute-0 sudo[402301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:07:10 compute-0 sudo[402301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:10 compute-0 sudo[402301]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:10 compute-0 sudo[402326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:07:10 compute-0 sudo[402326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 159 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 93 op/s
Nov 25 09:07:10 compute-0 podman[402391]: 2025-11-25 09:07:10.426116983 +0000 UTC m=+0.038051456 container create 3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:07:10 compute-0 systemd[1]: Started libpod-conmon-3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f.scope.
Nov 25 09:07:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:07:10 compute-0 podman[402391]: 2025-11-25 09:07:10.409302756 +0000 UTC m=+0.021237249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:07:10 compute-0 podman[402391]: 2025-11-25 09:07:10.507076044 +0000 UTC m=+0.119010537 container init 3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 09:07:10 compute-0 podman[402391]: 2025-11-25 09:07:10.514970549 +0000 UTC m=+0.126905032 container start 3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 09:07:10 compute-0 podman[402391]: 2025-11-25 09:07:10.518829663 +0000 UTC m=+0.130764206 container attach 3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:07:10 compute-0 friendly_beaver[402407]: 167 167
Nov 25 09:07:10 compute-0 systemd[1]: libpod-3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f.scope: Deactivated successfully.
Nov 25 09:07:10 compute-0 podman[402391]: 2025-11-25 09:07:10.521853376 +0000 UTC m=+0.133787849 container died 3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 09:07:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b13b8fdbceb8e9673a4721cfd00f23e4d9bd736e4971706cf6e7b9f7771da55f-merged.mount: Deactivated successfully.
Nov 25 09:07:10 compute-0 podman[402391]: 2025-11-25 09:07:10.559548221 +0000 UTC m=+0.171482694 container remove 3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:07:10 compute-0 systemd[1]: libpod-conmon-3623ee54d41744a3602d33237befcae75e8ca18d18c0e38783ebe4f2d4c1a11f.scope: Deactivated successfully.
Nov 25 09:07:10 compute-0 podman[402430]: 2025-11-25 09:07:10.736379648 +0000 UTC m=+0.042381483 container create 94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:07:10 compute-0 systemd[1]: Started libpod-conmon-94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c.scope.
Nov 25 09:07:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c981229acc3180246becf62ae8396677f6daf94938c8069fdcabc29f2caf318c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c981229acc3180246becf62ae8396677f6daf94938c8069fdcabc29f2caf318c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c981229acc3180246becf62ae8396677f6daf94938c8069fdcabc29f2caf318c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c981229acc3180246becf62ae8396677f6daf94938c8069fdcabc29f2caf318c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:07:10 compute-0 podman[402430]: 2025-11-25 09:07:10.718214933 +0000 UTC m=+0.024216778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:07:10 compute-0 podman[402430]: 2025-11-25 09:07:10.820875125 +0000 UTC m=+0.126877010 container init 94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 09:07:10 compute-0 podman[402430]: 2025-11-25 09:07:10.837747953 +0000 UTC m=+0.143749808 container start 94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:07:10 compute-0 podman[402430]: 2025-11-25 09:07:10.842235626 +0000 UTC m=+0.148237461 container attach 94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:07:11 compute-0 ceph-mon[75015]: pgmap v2591: 321 pgs: 321 active+clean; 159 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 93 op/s
Nov 25 09:07:11 compute-0 blissful_diffie[402448]: {
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:         "osd_id": 1,
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:         "type": "bluestore"
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:     },
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:         "osd_id": 2,
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:         "type": "bluestore"
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:     },
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:         "osd_id": 0,
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:         "type": "bluestore"
Nov 25 09:07:11 compute-0 blissful_diffie[402448]:     }
Nov 25 09:07:11 compute-0 blissful_diffie[402448]: }
Nov 25 09:07:11 compute-0 systemd[1]: libpod-94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c.scope: Deactivated successfully.
Nov 25 09:07:11 compute-0 systemd[1]: libpod-94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c.scope: Consumed 1.024s CPU time.
Nov 25 09:07:11 compute-0 podman[402430]: 2025-11-25 09:07:11.858044349 +0000 UTC m=+1.164046184 container died 94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:07:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c981229acc3180246becf62ae8396677f6daf94938c8069fdcabc29f2caf318c-merged.mount: Deactivated successfully.
Nov 25 09:07:11 compute-0 sshd-session[402444]: Invalid user username from 45.202.211.6 port 43664
Nov 25 09:07:11 compute-0 podman[402430]: 2025-11-25 09:07:11.916521778 +0000 UTC m=+1.222523603 container remove 94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:07:11 compute-0 systemd[1]: libpod-conmon-94379ed3b27c5e046c59cf71a8701f74905a616f529620796ffb84bacbef3d6c.scope: Deactivated successfully.
Nov 25 09:07:11 compute-0 sudo[402326]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:07:11 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:07:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:07:11 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:07:11 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 8cb696f3-9849-4981-b480-3cca206d97c7 does not exist
Nov 25 09:07:11 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 0378c638-7eb0-42ff-a448-df7657e204fa does not exist
Nov 25 09:07:12 compute-0 sudo[402492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:07:12 compute-0 sudo[402492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:12 compute-0 sudo[402492]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:12 compute-0 sshd-session[402444]: Received disconnect from 45.202.211.6 port 43664:11: Bye Bye [preauth]
Nov 25 09:07:12 compute-0 sshd-session[402444]: Disconnected from invalid user username 45.202.211.6 port 43664 [preauth]
Nov 25 09:07:12 compute-0 sudo[402517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:07:12 compute-0 sudo[402517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:07:12 compute-0 sudo[402517]: pam_unix(sudo:session): session closed for user root
Nov 25 09:07:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 167 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 25 09:07:12 compute-0 nova_compute[253538]: 2025-11-25 09:07:12.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:07:12 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:07:12 compute-0 ceph-mon[75015]: pgmap v2592: 321 pgs: 321 active+clean; 167 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 25 09:07:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:07:13 compute-0 nova_compute[253538]: 2025-11-25 09:07:13.478 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:14 compute-0 nova_compute[253538]: 2025-11-25 09:07:14.152 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 167 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 09:07:15 compute-0 ceph-mon[75015]: pgmap v2593: 321 pgs: 321 active+clean; 167 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 09:07:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 167 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 09:07:17 compute-0 ceph-mon[75015]: pgmap v2594: 321 pgs: 321 active+clean; 167 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 09:07:17 compute-0 nova_compute[253538]: 2025-11-25 09:07:17.927 253542 DEBUG nova.compute.manager [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:07:17 compute-0 nova_compute[253538]: 2025-11-25 09:07:17.987 253542 INFO nova.compute.manager [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] instance snapshotting
Nov 25 09:07:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:07:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 167 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 09:07:18 compute-0 nova_compute[253538]: 2025-11-25 09:07:18.281 253542 INFO nova.virt.libvirt.driver [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Beginning live snapshot process
Nov 25 09:07:18 compute-0 nova_compute[253538]: 2025-11-25 09:07:18.460 253542 DEBUG nova.virt.libvirt.imagebackend [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 09:07:18 compute-0 nova_compute[253538]: 2025-11-25 09:07:18.480 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:18 compute-0 nova_compute[253538]: 2025-11-25 09:07:18.928 253542 DEBUG nova.storage.rbd_utils [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] creating snapshot(e7a5b94946234757868df5f88e1b5970) on rbd image(7aeb9ccf-2506-41d1-92c2-c72892096857_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 09:07:18 compute-0 nova_compute[253538]: 2025-11-25 09:07:18.963 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:19 compute-0 nova_compute[253538]: 2025-11-25 09:07:19.155 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Nov 25 09:07:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Nov 25 09:07:19 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Nov 25 09:07:19 compute-0 ceph-mon[75015]: pgmap v2595: 321 pgs: 321 active+clean; 167 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 09:07:19 compute-0 nova_compute[253538]: 2025-11-25 09:07:19.375 253542 DEBUG nova.storage.rbd_utils [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] cloning vms/7aeb9ccf-2506-41d1-92c2-c72892096857_disk@e7a5b94946234757868df5f88e1b5970 to images/cea21f13-1c78-4633-9d51-3cb641934c22 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 09:07:19 compute-0 nova_compute[253538]: 2025-11-25 09:07:19.485 253542 DEBUG nova.storage.rbd_utils [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] flattening images/cea21f13-1c78-4633-9d51-3cb641934c22 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 09:07:19 compute-0 nova_compute[253538]: 2025-11-25 09:07:19.879 253542 DEBUG nova.storage.rbd_utils [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] removing snapshot(e7a5b94946234757868df5f88e1b5970) on rbd image(7aeb9ccf-2506-41d1-92c2-c72892096857_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 09:07:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 167 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 148 KiB/s rd, 620 KiB/s wr, 42 op/s
Nov 25 09:07:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Nov 25 09:07:20 compute-0 ceph-mon[75015]: osdmap e254: 3 total, 3 up, 3 in
Nov 25 09:07:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Nov 25 09:07:20 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Nov 25 09:07:20 compute-0 nova_compute[253538]: 2025-11-25 09:07:20.636 253542 DEBUG nova.storage.rbd_utils [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] creating snapshot(snap) on rbd image(cea21f13-1c78-4633-9d51-3cb641934c22) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 09:07:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:20.893 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:b7:82 2001:db8:0:1:f816:3eff:fea4:b782 2001:db8::f816:3eff:fea4:b782'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fea4:b782/64 2001:db8::f816:3eff:fea4:b782/64', 'neutron:device_id': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4355854-52b0-49fe-b048-f2ee64c9c702, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c3131dc3-cd6c-4c0c-a1cd-e385c6ff1693) old=Port_Binding(mac=['fa:16:3e:a4:b7:82 2001:db8::f816:3eff:fea4:b782'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea4:b782/64', 'neutron:device_id': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:07:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:20.895 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c3131dc3-cd6c-4c0c-a1cd-e385c6ff1693 in datapath 21786b2a-59f9-4c4e-b462-8a28f7bd93a3 updated
Nov 25 09:07:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:20.898 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 21786b2a-59f9-4c4e-b462-8a28f7bd93a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:07:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:20.899 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[841d2420-a318-4d87-a202-e36ae8fe2fd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:21 compute-0 ceph-mon[75015]: pgmap v2597: 321 pgs: 321 active+clean; 167 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 148 KiB/s rd, 620 KiB/s wr, 42 op/s
Nov 25 09:07:21 compute-0 ceph-mon[75015]: osdmap e255: 3 total, 3 up, 3 in
Nov 25 09:07:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Nov 25 09:07:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Nov 25 09:07:21 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Nov 25 09:07:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 196 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.0 MiB/s wr, 24 op/s
Nov 25 09:07:22 compute-0 ceph-mon[75015]: osdmap e256: 3 total, 3 up, 3 in
Nov 25 09:07:22 compute-0 ceph-mon[75015]: pgmap v2600: 321 pgs: 321 active+clean; 196 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.0 MiB/s wr, 24 op/s
Nov 25 09:07:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:07:23 compute-0 nova_compute[253538]: 2025-11-25 09:07:23.268 253542 INFO nova.virt.libvirt.driver [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Snapshot image upload complete
Nov 25 09:07:23 compute-0 nova_compute[253538]: 2025-11-25 09:07:23.269 253542 INFO nova.compute.manager [None req-fb97dd2a-bb73-4d24-bde6-cb7eda278abf aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Took 5.28 seconds to snapshot the instance on the hypervisor.
Nov 25 09:07:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:07:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:07:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:07:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:07:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:07:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:07:23 compute-0 nova_compute[253538]: 2025-11-25 09:07:23.481 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:24 compute-0 nova_compute[253538]: 2025-11-25 09:07:24.157 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 219 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 5.3 MiB/s wr, 126 op/s
Nov 25 09:07:25 compute-0 ceph-mon[75015]: pgmap v2601: 321 pgs: 321 active+clean; 219 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 5.3 MiB/s wr, 126 op/s
Nov 25 09:07:25 compute-0 nova_compute[253538]: 2025-11-25 09:07:25.556 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:25 compute-0 nova_compute[253538]: 2025-11-25 09:07:25.557 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:25 compute-0 nova_compute[253538]: 2025-11-25 09:07:25.580 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:07:25 compute-0 nova_compute[253538]: 2025-11-25 09:07:25.680 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:25 compute-0 nova_compute[253538]: 2025-11-25 09:07:25.681 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:25 compute-0 nova_compute[253538]: 2025-11-25 09:07:25.690 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:07:25 compute-0 nova_compute[253538]: 2025-11-25 09:07:25.690 253542 INFO nova.compute.claims [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:07:25 compute-0 nova_compute[253538]: 2025-11-25 09:07:25.808 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:07:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 6.8 MiB/s wr, 155 op/s
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.236 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "de5bfbef-7a99-4280-a304-71b9099f110b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.237 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:07:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/362277075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.270 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.274 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.279 253542 DEBUG nova.compute.provider_tree [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.299 253542 DEBUG nova.scheduler.client.report [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:07:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/362277075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.334 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.334 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.355 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.355 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.362 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.362 253542 INFO nova.compute.claims [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.404 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.404 253542 DEBUG nova.network.neutron [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.431 253542 INFO nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.453 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.675 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.795 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.797 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.797 253542 INFO nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Creating image(s)
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.823 253542 DEBUG nova.storage.rbd_utils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image a5ec67ec-7042-47d0-925d-6ff3847d3846_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.858 253542 DEBUG nova.storage.rbd_utils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image a5ec67ec-7042-47d0-925d-6ff3847d3846_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.886 253542 DEBUG nova.storage.rbd_utils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image a5ec67ec-7042-47d0-925d-6ff3847d3846_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.891 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.989 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.991 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.992 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:26 compute-0 nova_compute[253538]: 2025-11-25 09:07:26.992 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.027 253542 DEBUG nova.storage.rbd_utils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image a5ec67ec-7042-47d0-925d-6ff3847d3846_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.032 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a5ec67ec-7042-47d0-925d-6ff3847d3846_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:07:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:07:27 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/488964994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.191 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.201 253542 DEBUG nova.compute.provider_tree [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.226 253542 DEBUG nova.scheduler.client.report [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.254 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.255 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:07:27 compute-0 ceph-mon[75015]: pgmap v2602: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 6.8 MiB/s wr, 155 op/s
Nov 25 09:07:27 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/488964994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.317 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.318 253542 DEBUG nova.network.neutron [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.324 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc a5ec67ec-7042-47d0-925d-6ff3847d3846_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.353 253542 INFO nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.388 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.395 253542 DEBUG nova.storage.rbd_utils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image a5ec67ec-7042-47d0-925d-6ff3847d3846_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.427 253542 DEBUG nova.policy [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.494 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.495 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.496 253542 INFO nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Creating image(s)
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.518 253542 DEBUG nova.storage.rbd_utils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image de5bfbef-7a99-4280-a304-71b9099f110b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.542 253542 DEBUG nova.storage.rbd_utils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image de5bfbef-7a99-4280-a304-71b9099f110b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.568 253542 DEBUG nova.storage.rbd_utils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image de5bfbef-7a99-4280-a304-71b9099f110b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.572 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "61b07ac455ac595ffac8250648100eba5804ec9e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.573 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "61b07ac455ac595ffac8250648100eba5804ec9e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.583 253542 DEBUG nova.objects.instance [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid a5ec67ec-7042-47d0-925d-6ff3847d3846 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.605 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.606 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Ensure instance console log exists: /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.607 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.607 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.607 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.770 253542 DEBUG nova.virt.libvirt.imagebackend [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Image locations are: [{'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/cea21f13-1c78-4633-9d51-3cb641934c22/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/cea21f13-1c78-4633-9d51-3cb641934c22/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.810 253542 DEBUG nova.virt.libvirt.imagebackend [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Selected location: {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/cea21f13-1c78-4633-9d51-3cb641934c22/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.811 253542 DEBUG nova.storage.rbd_utils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] cloning images/cea21f13-1c78-4633-9d51-3cb641934c22@snap to None/de5bfbef-7a99-4280-a304-71b9099f110b_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 09:07:27 compute-0 nova_compute[253538]: 2025-11-25 09:07:27.915 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "61b07ac455ac595ffac8250648100eba5804ec9e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:28 compute-0 nova_compute[253538]: 2025-11-25 09:07:28.046 253542 DEBUG nova.objects.instance [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lazy-loading 'migration_context' on Instance uuid de5bfbef-7a99-4280-a304-71b9099f110b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:07:28 compute-0 nova_compute[253538]: 2025-11-25 09:07:28.057 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:07:28 compute-0 nova_compute[253538]: 2025-11-25 09:07:28.058 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Ensure instance console log exists: /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:07:28 compute-0 nova_compute[253538]: 2025-11-25 09:07:28.058 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:28 compute-0 nova_compute[253538]: 2025-11-25 09:07:28.059 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:28 compute-0 nova_compute[253538]: 2025-11-25 09:07:28.059 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:07:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Nov 25 09:07:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Nov 25 09:07:28 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Nov 25 09:07:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 6.2 MiB/s wr, 139 op/s
Nov 25 09:07:28 compute-0 nova_compute[253538]: 2025-11-25 09:07:28.284 253542 DEBUG nova.policy [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aef72e2ffce442d1848c4753c324ae92', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8771100a91ef4eb3b58cc4840f6154b4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:07:28 compute-0 nova_compute[253538]: 2025-11-25 09:07:28.483 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:28 compute-0 podman[403072]: 2025-11-25 09:07:28.815164023 +0000 UTC m=+0.063317762 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:07:28 compute-0 podman[403073]: 2025-11-25 09:07:28.831271341 +0000 UTC m=+0.070772975 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 25 09:07:28 compute-0 nova_compute[253538]: 2025-11-25 09:07:28.852 253542 DEBUG nova.network.neutron [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Successfully created port: 05d0fd93-ce0f-4842-962f-c9491d3850c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:07:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:07:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3750147748' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:07:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:07:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3750147748' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:07:29 compute-0 nova_compute[253538]: 2025-11-25 09:07:29.069 253542 DEBUG nova.network.neutron [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Successfully created port: ff88ca8a-d270-4991-b2c4-617f04418848 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:07:29 compute-0 ceph-mon[75015]: osdmap e257: 3 total, 3 up, 3 in
Nov 25 09:07:29 compute-0 ceph-mon[75015]: pgmap v2604: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 6.2 MiB/s wr, 139 op/s
Nov 25 09:07:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3750147748' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:07:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3750147748' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:07:29 compute-0 nova_compute[253538]: 2025-11-25 09:07:29.159 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:29 compute-0 nova_compute[253538]: 2025-11-25 09:07:29.456 253542 DEBUG nova.network.neutron [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Successfully created port: 2eda6ce1-df50-4620-a5e9-d08e62f7350e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:07:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 277 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.3 MiB/s wr, 167 op/s
Nov 25 09:07:30 compute-0 nova_compute[253538]: 2025-11-25 09:07:30.276 253542 DEBUG nova.network.neutron [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Successfully updated port: 05d0fd93-ce0f-4842-962f-c9491d3850c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:07:30 compute-0 nova_compute[253538]: 2025-11-25 09:07:30.307 253542 DEBUG nova.network.neutron [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Successfully updated port: ff88ca8a-d270-4991-b2c4-617f04418848 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:07:30 compute-0 nova_compute[253538]: 2025-11-25 09:07:30.321 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:07:30 compute-0 nova_compute[253538]: 2025-11-25 09:07:30.321 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquired lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:07:30 compute-0 nova_compute[253538]: 2025-11-25 09:07:30.322 253542 DEBUG nova.network.neutron [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:07:30 compute-0 nova_compute[253538]: 2025-11-25 09:07:30.384 253542 DEBUG nova.compute.manager [req-d0ac20b3-28b6-4c56-b14a-9e02d11d6254 req-a38f1de1-06ff-4527-8de2-dabfea6489dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-changed-05d0fd93-ce0f-4842-962f-c9491d3850c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:07:30 compute-0 nova_compute[253538]: 2025-11-25 09:07:30.385 253542 DEBUG nova.compute.manager [req-d0ac20b3-28b6-4c56-b14a-9e02d11d6254 req-a38f1de1-06ff-4527-8de2-dabfea6489dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Refreshing instance network info cache due to event network-changed-05d0fd93-ce0f-4842-962f-c9491d3850c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:07:30 compute-0 nova_compute[253538]: 2025-11-25 09:07:30.388 253542 DEBUG oslo_concurrency.lockutils [req-d0ac20b3-28b6-4c56-b14a-9e02d11d6254 req-a38f1de1-06ff-4527-8de2-dabfea6489dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:07:30 compute-0 nova_compute[253538]: 2025-11-25 09:07:30.389 253542 DEBUG oslo_concurrency.lockutils [req-d0ac20b3-28b6-4c56-b14a-9e02d11d6254 req-a38f1de1-06ff-4527-8de2-dabfea6489dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:07:30 compute-0 nova_compute[253538]: 2025-11-25 09:07:30.389 253542 DEBUG nova.network.neutron [req-d0ac20b3-28b6-4c56-b14a-9e02d11d6254 req-a38f1de1-06ff-4527-8de2-dabfea6489dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Refreshing network info cache for port 05d0fd93-ce0f-4842-962f-c9491d3850c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:07:30 compute-0 nova_compute[253538]: 2025-11-25 09:07:30.403 253542 DEBUG nova.compute.manager [req-59aa2f56-4a2c-4aed-9fab-801d9c1b8c40 req-57da30a7-aefa-4afa-a6ab-b96556537c8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-changed-ff88ca8a-d270-4991-b2c4-617f04418848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:07:30 compute-0 nova_compute[253538]: 2025-11-25 09:07:30.404 253542 DEBUG nova.compute.manager [req-59aa2f56-4a2c-4aed-9fab-801d9c1b8c40 req-57da30a7-aefa-4afa-a6ab-b96556537c8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Refreshing instance network info cache due to event network-changed-ff88ca8a-d270-4991-b2c4-617f04418848. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:07:30 compute-0 nova_compute[253538]: 2025-11-25 09:07:30.404 253542 DEBUG oslo_concurrency.lockutils [req-59aa2f56-4a2c-4aed-9fab-801d9c1b8c40 req-57da30a7-aefa-4afa-a6ab-b96556537c8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:07:30 compute-0 nova_compute[253538]: 2025-11-25 09:07:30.503 253542 DEBUG nova.network.neutron [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:07:30 compute-0 nova_compute[253538]: 2025-11-25 09:07:30.638 253542 DEBUG nova.network.neutron [req-d0ac20b3-28b6-4c56-b14a-9e02d11d6254 req-a38f1de1-06ff-4527-8de2-dabfea6489dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.084 253542 DEBUG nova.network.neutron [req-d0ac20b3-28b6-4c56-b14a-9e02d11d6254 req-a38f1de1-06ff-4527-8de2-dabfea6489dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.099 253542 DEBUG oslo_concurrency.lockutils [req-d0ac20b3-28b6-4c56-b14a-9e02d11d6254 req-a38f1de1-06ff-4527-8de2-dabfea6489dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.173 253542 DEBUG nova.network.neutron [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Successfully updated port: 2eda6ce1-df50-4620-a5e9-d08e62f7350e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.190 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.190 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.190 253542 DEBUG nova.network.neutron [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:07:31 compute-0 ceph-mon[75015]: pgmap v2605: 321 pgs: 321 active+clean; 277 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.3 MiB/s wr, 167 op/s
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.332 253542 DEBUG nova.network.neutron [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Updating instance_info_cache with network_info: [{"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.347 253542 DEBUG nova.network.neutron [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.352 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Releasing lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.353 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Instance network_info: |[{"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.354 253542 DEBUG oslo_concurrency.lockutils [req-59aa2f56-4a2c-4aed-9fab-801d9c1b8c40 req-57da30a7-aefa-4afa-a6ab-b96556537c8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.354 253542 DEBUG nova.network.neutron [req-59aa2f56-4a2c-4aed-9fab-801d9c1b8c40 req-57da30a7-aefa-4afa-a6ab-b96556537c8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Refreshing network info cache for port ff88ca8a-d270-4991-b2c4-617f04418848 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.359 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Start _get_guest_xml network_info=[{"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T09:07:17Z,direct_url=<?>,disk_format='raw',id=cea21f13-1c78-4633-9d51-3cb641934c22,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-651714253',owner='8771100a91ef4eb3b58cc4840f6154b4',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T09:07:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': 'cea21f13-1c78-4633-9d51-3cb641934c22'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.366 253542 WARNING nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.377 253542 DEBUG nova.virt.libvirt.host [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.378 253542 DEBUG nova.virt.libvirt.host [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.383 253542 DEBUG nova.virt.libvirt.host [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.383 253542 DEBUG nova.virt.libvirt.host [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.384 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.384 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T09:07:17Z,direct_url=<?>,disk_format='raw',id=cea21f13-1c78-4633-9d51-3cb641934c22,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-651714253',owner='8771100a91ef4eb3b58cc4840f6154b4',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T09:07:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.385 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.385 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.386 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.386 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.386 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.387 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.387 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.387 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.388 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.388 253542 DEBUG nova.virt.hardware [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.392 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:07:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:07:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/512203468' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.882 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.911 253542 DEBUG nova.storage.rbd_utils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image de5bfbef-7a99-4280-a304-71b9099f110b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:07:31 compute-0 nova_compute[253538]: 2025-11-25 09:07:31.917 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:07:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.0 MiB/s wr, 159 op/s
Nov 25 09:07:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/512203468' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:07:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:07:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2725074280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.396 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.398 253542 DEBUG nova.virt.libvirt.vif [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1748422570',display_name='tempest-TestSnapshotPattern-server-1748422570',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1748422570',id=142,image_ref='cea21f13-1c78-4633-9d51-3cb641934c22',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLErEV2xwN9mBf+XGKCIgVT/D6OXwB0cwyyX2NBYPEm+JKbk9OpT/b7EfE4XEaPQBYqSc0cfR5p0864dWMnh2OIUXkOANq7ZGUSKBMTzjmr8EUsGjFEiOhtcXJ0bHjOVZQ==',key_name='tempest-TestSnapshotPattern-1105951773',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8771100a91ef4eb3b58cc4840f6154b4',ramdisk_id='',reservation_id='r-a2i419d2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7aeb9ccf-2506-41d1-92c2-c72892096857',image_min_disk='1',image_min_ram='0',image_owner_id='8771100a91ef4eb3b58cc4840f6154b4',image_owner_project_name='tempest-TestSnapshotPattern-569624779',image_owner_user_name='tempest-TestSnapshotPattern-569624779-project-member',image_user_id='aef72e2ffce442d1848c4753c324ae92',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-569624779',owner_user_name='tempest-TestSnapshotPattern-569624779-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:07:27Z,user_data=None,user_id='aef72e2ffce442d1848c4753c324ae92',uuid=de5bfbef-7a
99-4280-a304-71b9099f110b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.398 253542 DEBUG nova.network.os_vif_util [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converting VIF {"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.399 253542 DEBUG nova.network.os_vif_util [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:c7:f1,bridge_name='br-int',has_traffic_filtering=True,id=ff88ca8a-d270-4991-b2c4-617f04418848,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff88ca8a-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.400 253542 DEBUG nova.objects.instance [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid de5bfbef-7a99-4280-a304-71b9099f110b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.421 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:07:32 compute-0 nova_compute[253538]:   <uuid>de5bfbef-7a99-4280-a304-71b9099f110b</uuid>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   <name>instance-0000008e</name>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <nova:name>tempest-TestSnapshotPattern-server-1748422570</nova:name>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:07:31</nova:creationTime>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:07:32 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:07:32 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:07:32 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:07:32 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:07:32 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:07:32 compute-0 nova_compute[253538]:         <nova:user uuid="aef72e2ffce442d1848c4753c324ae92">tempest-TestSnapshotPattern-569624779-project-member</nova:user>
Nov 25 09:07:32 compute-0 nova_compute[253538]:         <nova:project uuid="8771100a91ef4eb3b58cc4840f6154b4">tempest-TestSnapshotPattern-569624779</nova:project>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="cea21f13-1c78-4633-9d51-3cb641934c22"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:07:32 compute-0 nova_compute[253538]:         <nova:port uuid="ff88ca8a-d270-4991-b2c4-617f04418848">
Nov 25 09:07:32 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <system>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <entry name="serial">de5bfbef-7a99-4280-a304-71b9099f110b</entry>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <entry name="uuid">de5bfbef-7a99-4280-a304-71b9099f110b</entry>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     </system>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   <os>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   </os>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   <features>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   </features>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/de5bfbef-7a99-4280-a304-71b9099f110b_disk">
Nov 25 09:07:32 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       </source>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:07:32 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/de5bfbef-7a99-4280-a304-71b9099f110b_disk.config">
Nov 25 09:07:32 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       </source>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:07:32 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:ec:c7:f1"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <target dev="tapff88ca8a-d2"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b/console.log" append="off"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <video>
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     </video>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <input type="keyboard" bus="usb"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:07:32 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:07:32 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:07:32 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:07:32 compute-0 nova_compute[253538]: </domain>
Nov 25 09:07:32 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.422 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Preparing to wait for external event network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.422 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.423 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.423 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.423 253542 DEBUG nova.virt.libvirt.vif [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1748422570',display_name='tempest-TestSnapshotPattern-server-1748422570',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1748422570',id=142,image_ref='cea21f13-1c78-4633-9d51-3cb641934c22',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLErEV2xwN9mBf+XGKCIgVT/D6OXwB0cwyyX2NBYPEm+JKbk9OpT/b7EfE4XEaPQBYqSc0cfR5p0864dWMnh2OIUXkOANq7ZGUSKBMTzjmr8EUsGjFEiOhtcXJ0bHjOVZQ==',key_name='tempest-TestSnapshotPattern-1105951773',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8771100a91ef4eb3b58cc4840f6154b4',ramdisk_id='',reservation_id='r-a2i419d2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7aeb9ccf-2506-41d1-92c2-c72892096857',image_min_disk='1',image_min_ram='0',image_owner_id='8771100a91ef4eb3b58cc4840f6154b4',image_owner_project_name='tempest-TestSnapshotPattern-569624779',image_owner_user_name='tempest-TestSnapshotPattern-569624779-project-member',image_user_id='aef72e2ffce442d1848c4753c324ae92',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-569624779',owner_user_name='tempest-TestSnapshotPattern-569624779-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:07:27Z,user_data=None,user_id='aef72e2ffce442d1848c4753c324ae92',uuid=d
e5bfbef-7a99-4280-a304-71b9099f110b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.424 253542 DEBUG nova.network.os_vif_util [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converting VIF {"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.424 253542 DEBUG nova.network.os_vif_util [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:c7:f1,bridge_name='br-int',has_traffic_filtering=True,id=ff88ca8a-d270-4991-b2c4-617f04418848,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff88ca8a-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.425 253542 DEBUG os_vif [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:c7:f1,bridge_name='br-int',has_traffic_filtering=True,id=ff88ca8a-d270-4991-b2c4-617f04418848,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff88ca8a-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.425 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.426 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.426 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.433 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.434 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff88ca8a-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.434 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff88ca8a-d2, col_values=(('external_ids', {'iface-id': 'ff88ca8a-d270-4991-b2c4-617f04418848', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:c7:f1', 'vm-uuid': 'de5bfbef-7a99-4280-a304-71b9099f110b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.436 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:32 compute-0 NetworkManager[48915]: <info>  [1764061652.4386] manager: (tapff88ca8a-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/607)
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.441 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.444 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.445 253542 INFO os_vif [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:c7:f1,bridge_name='br-int',has_traffic_filtering=True,id=ff88ca8a-d270-4991-b2c4-617f04418848,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff88ca8a-d2')
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.473 253542 DEBUG nova.compute.manager [req-d24e9ac6-b6e1-4c49-b415-2c9f0b06b997 req-c06b1edd-d267-44a2-83a3-4f8fd87e1845 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-changed-2eda6ce1-df50-4620-a5e9-d08e62f7350e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.474 253542 DEBUG nova.compute.manager [req-d24e9ac6-b6e1-4c49-b415-2c9f0b06b997 req-c06b1edd-d267-44a2-83a3-4f8fd87e1845 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Refreshing instance network info cache due to event network-changed-2eda6ce1-df50-4620-a5e9-d08e62f7350e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.474 253542 DEBUG oslo_concurrency.lockutils [req-d24e9ac6-b6e1-4c49-b415-2c9f0b06b997 req-c06b1edd-d267-44a2-83a3-4f8fd87e1845 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.495 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.495 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.495 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] No VIF found with MAC fa:16:3e:ec:c7:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.496 253542 INFO nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Using config drive
Nov 25 09:07:32 compute-0 nova_compute[253538]: 2025-11-25 09:07:32.519 253542 DEBUG nova.storage.rbd_utils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image de5bfbef-7a99-4280-a304-71b9099f110b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:07:32 compute-0 sshd-session[403111]: Received disconnect from 45.78.222.2 port 45876:11: Bye Bye [preauth]
Nov 25 09:07:32 compute-0 sshd-session[403111]: Disconnected from authenticating user root 45.78.222.2 port 45876 [preauth]
Nov 25 09:07:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:07:33 compute-0 ceph-mon[75015]: pgmap v2606: 321 pgs: 321 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.0 MiB/s wr, 159 op/s
Nov 25 09:07:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2725074280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.284 253542 INFO nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Creating config drive at /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b/disk.config
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.289 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaw7bvzpx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.341 253542 DEBUG nova.network.neutron [req-59aa2f56-4a2c-4aed-9fab-801d9c1b8c40 req-57da30a7-aefa-4afa-a6ab-b96556537c8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Updated VIF entry in instance network info cache for port ff88ca8a-d270-4991-b2c4-617f04418848. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.341 253542 DEBUG nova.network.neutron [req-59aa2f56-4a2c-4aed-9fab-801d9c1b8c40 req-57da30a7-aefa-4afa-a6ab-b96556537c8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Updating instance_info_cache with network_info: [{"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.356 253542 DEBUG oslo_concurrency.lockutils [req-59aa2f56-4a2c-4aed-9fab-801d9c1b8c40 req-57da30a7-aefa-4afa-a6ab-b96556537c8d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.455 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaw7bvzpx" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.480 253542 DEBUG nova.storage.rbd_utils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] rbd image de5bfbef-7a99-4280-a304-71b9099f110b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.485 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b/disk.config de5bfbef-7a99-4280-a304-71b9099f110b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.564 253542 DEBUG nova.network.neutron [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updating instance_info_cache with network_info: [{"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.674 253542 DEBUG oslo_concurrency.processutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b/disk.config de5bfbef-7a99-4280-a304-71b9099f110b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.675 253542 INFO nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Deleting local config drive /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b/disk.config because it was imported into RBD.
Nov 25 09:07:33 compute-0 kernel: tapff88ca8a-d2: entered promiscuous mode
Nov 25 09:07:33 compute-0 NetworkManager[48915]: <info>  [1764061653.7459] manager: (tapff88ca8a-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/608)
Nov 25 09:07:33 compute-0 ovn_controller[152859]: 2025-11-25T09:07:33Z|01477|binding|INFO|Claiming lport ff88ca8a-d270-4991-b2c4-617f04418848 for this chassis.
Nov 25 09:07:33 compute-0 ovn_controller[152859]: 2025-11-25T09:07:33Z|01478|binding|INFO|ff88ca8a-d270-4991-b2c4-617f04418848: Claiming fa:16:3e:ec:c7:f1 10.100.0.6
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.748 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:33 compute-0 ovn_controller[152859]: 2025-11-25T09:07:33Z|01479|binding|INFO|Setting lport ff88ca8a-d270-4991-b2c4-617f04418848 ovn-installed in OVS
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.771 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:33 compute-0 ovn_controller[152859]: 2025-11-25T09:07:33Z|01480|binding|INFO|Setting lport ff88ca8a-d270-4991-b2c4-617f04418848 up in Southbound
Nov 25 09:07:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.773 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:c7:f1 10.100.0.6'], port_security=['fa:16:3e:ec:c7:f1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'de5bfbef-7a99-4280-a304-71b9099f110b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8771100a91ef4eb3b58cc4840f6154b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b51ac1f2-fc04-45c4-8aed-ff9624bae478', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=774fcdab-a888-48f5-b941-79ea7db76602, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=ff88ca8a-d270-4991-b2c4-617f04418848) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:07:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.774 162739 INFO neutron.agent.ovn.metadata.agent [-] Port ff88ca8a-d270-4991-b2c4-617f04418848 in datapath 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c bound to our chassis
Nov 25 09:07:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.775 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.775 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.776 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Instance network_info: |[{"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.776 253542 DEBUG oslo_concurrency.lockutils [req-d24e9ac6-b6e1-4c49-b415-2c9f0b06b997 req-c06b1edd-d267-44a2-83a3-4f8fd87e1845 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.776 253542 DEBUG nova.network.neutron [req-d24e9ac6-b6e1-4c49-b415-2c9f0b06b997 req-c06b1edd-d267-44a2-83a3-4f8fd87e1845 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Refreshing network info cache for port 2eda6ce1-df50-4620-a5e9-d08e62f7350e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:07:33 compute-0 systemd-udevd[403247]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.781 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Start _get_guest_xml network_info=[{"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.781 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:33 compute-0 systemd-machined[215790]: New machine qemu-171-instance-0000008e.
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.788 253542 WARNING nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:07:33 compute-0 NetworkManager[48915]: <info>  [1764061653.7951] device (tapff88ca8a-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:07:33 compute-0 NetworkManager[48915]: <info>  [1764061653.7959] device (tapff88ca8a-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:07:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.798 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[120a4caa-3d83-4d23-910e-6d2c46e55aee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.800 253542 DEBUG nova.virt.libvirt.host [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:07:33 compute-0 systemd[1]: Started Virtual Machine qemu-171-instance-0000008e.
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.802 253542 DEBUG nova.virt.libvirt.host [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.805 253542 DEBUG nova.virt.libvirt.host [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.806 253542 DEBUG nova.virt.libvirt.host [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.806 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.806 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.807 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.807 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.807 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.808 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.808 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.808 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.808 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.809 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.809 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.809 253542 DEBUG nova.virt.hardware [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.812 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:07:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.843 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[35eca959-4acd-4361-94e8-5a2c5ec9d3c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.848 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[54a31d38-3e0f-4797-a733-78c1af53c955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.885 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1074e76c-784a-4afb-9fc1-f9ec61bc3675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.908 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f64606-3cae-4c92-941a-cde1787ea21a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c3eb82e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:4d:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696918, 'reachable_time': 15963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403263, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.933 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4f771805-132d-4ddb-82d8-2afecb09a2cb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c3eb82e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696928, 'tstamp': 696928}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403264, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c3eb82e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696931, 'tstamp': 696931}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403264, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.937 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c3eb82e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:33 compute-0 nova_compute[253538]: 2025-11-25 09:07:33.939 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.940 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c3eb82e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.941 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:07:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.941 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c3eb82e-10, col_values=(('external_ids', {'iface-id': 'aca5006e-311f-469a-ba5d-688da3f7d396'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:33 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:33.941 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.161 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 98 op/s
Nov 25 09:07:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:07:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2953828355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:07:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2953828355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.283 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.316 253542 DEBUG nova.storage.rbd_utils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image a5ec67ec-7042-47d0-925d-6ff3847d3846_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.327 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.484 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061654.4831617, de5bfbef-7a99-4280-a304-71b9099f110b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.485 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] VM Started (Lifecycle Event)
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.502 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.507 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061654.4834478, de5bfbef-7a99-4280-a304-71b9099f110b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.507 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] VM Paused (Lifecycle Event)
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.523 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.525 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.545 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.579 253542 DEBUG nova.compute.manager [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.580 253542 DEBUG oslo_concurrency.lockutils [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.580 253542 DEBUG oslo_concurrency.lockutils [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.580 253542 DEBUG oslo_concurrency.lockutils [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.580 253542 DEBUG nova.compute.manager [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Processing event network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.581 253542 DEBUG nova.compute.manager [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.581 253542 DEBUG oslo_concurrency.lockutils [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.581 253542 DEBUG oslo_concurrency.lockutils [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.581 253542 DEBUG oslo_concurrency.lockutils [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.581 253542 DEBUG nova.compute.manager [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] No waiting events found dispatching network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.581 253542 WARNING nova.compute.manager [req-70232c9a-124d-447f-a54e-0650f6355798 req-c4bc2f78-b66f-48a1-b56c-af8c616e1691 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received unexpected event network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 for instance with vm_state building and task_state spawning.
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.582 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.586 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061654.586144, de5bfbef-7a99-4280-a304-71b9099f110b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.586 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] VM Resumed (Lifecycle Event)
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.588 253542 DEBUG nova.virt.libvirt.driver [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.601 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.603 253542 INFO nova.virt.libvirt.driver [-] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Instance spawned successfully.
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.603 253542 INFO nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Took 7.11 seconds to spawn the instance on the hypervisor.
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.604 253542 DEBUG nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.606 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.773 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:07:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:07:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1831369604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.800 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.802 253542 DEBUG nova.virt.libvirt.vif [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1660352591',display_name='tempest-TestGettingAddress-server-1660352591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1660352591',id=141,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-umnvxfkq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:07:26Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=a5ec67ec-7042-47d0-925d-6ff3847d3846,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.803 253542 DEBUG nova.network.os_vif_util [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.804 253542 DEBUG nova.network.os_vif_util [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:ec:f8,bridge_name='br-int',has_traffic_filtering=True,id=05d0fd93-ce0f-4842-962f-c9491d3850c8,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05d0fd93-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.805 253542 DEBUG nova.virt.libvirt.vif [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1660352591',display_name='tempest-TestGettingAddress-server-1660352591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1660352591',id=141,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-umnvxfkq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:07:26Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=a5ec67ec-7042-47d0-925d-6ff3847d3846,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.805 253542 DEBUG nova.network.os_vif_util [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.806 253542 DEBUG nova.network.os_vif_util [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ed:73,bridge_name='br-int',has_traffic_filtering=True,id=2eda6ce1-df50-4620-a5e9-d08e62f7350e,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eda6ce1-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.807 253542 DEBUG nova.objects.instance [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5ec67ec-7042-47d0-925d-6ff3847d3846 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.810 253542 INFO nova.compute.manager [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Took 8.48 seconds to build instance.
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.829 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:07:34 compute-0 nova_compute[253538]:   <uuid>a5ec67ec-7042-47d0-925d-6ff3847d3846</uuid>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   <name>instance-0000008d</name>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <nova:name>tempest-TestGettingAddress-server-1660352591</nova:name>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:07:33</nova:creationTime>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:07:34 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:07:34 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:07:34 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:07:34 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:07:34 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:07:34 compute-0 nova_compute[253538]:         <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 09:07:34 compute-0 nova_compute[253538]:         <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:07:34 compute-0 nova_compute[253538]:         <nova:port uuid="05d0fd93-ce0f-4842-962f-c9491d3850c8">
Nov 25 09:07:34 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:07:34 compute-0 nova_compute[253538]:         <nova:port uuid="2eda6ce1-df50-4620-a5e9-d08e62f7350e">
Nov 25 09:07:34 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fefc:ed73" ipVersion="6"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fefc:ed73" ipVersion="6"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <system>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <entry name="serial">a5ec67ec-7042-47d0-925d-6ff3847d3846</entry>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <entry name="uuid">a5ec67ec-7042-47d0-925d-6ff3847d3846</entry>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     </system>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   <os>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   </os>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   <features>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   </features>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/a5ec67ec-7042-47d0-925d-6ff3847d3846_disk">
Nov 25 09:07:34 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       </source>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:07:34 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/a5ec67ec-7042-47d0-925d-6ff3847d3846_disk.config">
Nov 25 09:07:34 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       </source>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:07:34 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:e4:ec:f8"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <target dev="tap05d0fd93-ce"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:fc:ed:73"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <target dev="tap2eda6ce1-df"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846/console.log" append="off"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <video>
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     </video>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:07:34 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:07:34 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:07:34 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:07:34 compute-0 nova_compute[253538]: </domain>
Nov 25 09:07:34 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.830 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Preparing to wait for external event network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.830 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.830 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.831 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.831 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Preparing to wait for external event network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.832 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.832 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.832 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.834 253542 DEBUG nova.virt.libvirt.vif [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1660352591',display_name='tempest-TestGettingAddress-server-1660352591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1660352591',id=141,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-umnvxfkq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:07:26Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=a5ec67ec-7042-47d0-925d-6ff3847d3846,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.834 253542 DEBUG nova.network.os_vif_util [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.835 253542 DEBUG nova.network.os_vif_util [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:ec:f8,bridge_name='br-int',has_traffic_filtering=True,id=05d0fd93-ce0f-4842-962f-c9491d3850c8,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05d0fd93-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.836 253542 DEBUG os_vif [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:ec:f8,bridge_name='br-int',has_traffic_filtering=True,id=05d0fd93-ce0f-4842-962f-c9491d3850c8,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05d0fd93-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.837 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.838 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.839 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.841 253542 DEBUG oslo_concurrency.lockutils [None req-6caa8021-d676-481f-8d01-e5222d951739 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.843 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.843 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05d0fd93-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.844 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05d0fd93-ce, col_values=(('external_ids', {'iface-id': '05d0fd93-ce0f-4842-962f-c9491d3850c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:ec:f8', 'vm-uuid': 'a5ec67ec-7042-47d0-925d-6ff3847d3846'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:34 compute-0 NetworkManager[48915]: <info>  [1764061654.8464] manager: (tap05d0fd93-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.848 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.855 253542 INFO os_vif [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:ec:f8,bridge_name='br-int',has_traffic_filtering=True,id=05d0fd93-ce0f-4842-962f-c9491d3850c8,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05d0fd93-ce')
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.856 253542 DEBUG nova.virt.libvirt.vif [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1660352591',display_name='tempest-TestGettingAddress-server-1660352591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1660352591',id=141,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-umnvxfkq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:07:26Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=a5ec67ec-7042-47d0-925d-6ff3847d3846,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.857 253542 DEBUG nova.network.os_vif_util [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.859 253542 DEBUG nova.network.os_vif_util [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ed:73,bridge_name='br-int',has_traffic_filtering=True,id=2eda6ce1-df50-4620-a5e9-d08e62f7350e,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eda6ce1-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.859 253542 DEBUG os_vif [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ed:73,bridge_name='br-int',has_traffic_filtering=True,id=2eda6ce1-df50-4620-a5e9-d08e62f7350e,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eda6ce1-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.860 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.860 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.860 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.863 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.863 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2eda6ce1-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.864 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2eda6ce1-df, col_values=(('external_ids', {'iface-id': '2eda6ce1-df50-4620-a5e9-d08e62f7350e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:ed:73', 'vm-uuid': 'a5ec67ec-7042-47d0-925d-6ff3847d3846'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:34 compute-0 NetworkManager[48915]: <info>  [1764061654.8661] manager: (tap2eda6ce1-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/610)
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.865 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.872 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.873 253542 INFO os_vif [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ed:73,bridge_name='br-int',has_traffic_filtering=True,id=2eda6ce1-df50-4620-a5e9-d08e62f7350e,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eda6ce1-df')
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.914 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.915 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.915 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:e4:ec:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.915 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:fc:ed:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.916 253542 INFO nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Using config drive
Nov 25 09:07:34 compute-0 nova_compute[253538]: 2025-11-25 09:07:34.942 253542 DEBUG nova.storage.rbd_utils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image a5ec67ec-7042-47d0-925d-6ff3847d3846_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:07:35 compute-0 ceph-mon[75015]: pgmap v2607: 321 pgs: 321 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 98 op/s
Nov 25 09:07:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1831369604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:07:35 compute-0 nova_compute[253538]: 2025-11-25 09:07:35.513 253542 INFO nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Creating config drive at /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846/disk.config
Nov 25 09:07:35 compute-0 nova_compute[253538]: 2025-11-25 09:07:35.520 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmokdw9gy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:07:35 compute-0 nova_compute[253538]: 2025-11-25 09:07:35.564 253542 DEBUG nova.network.neutron [req-d24e9ac6-b6e1-4c49-b415-2c9f0b06b997 req-c06b1edd-d267-44a2-83a3-4f8fd87e1845 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updated VIF entry in instance network info cache for port 2eda6ce1-df50-4620-a5e9-d08e62f7350e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:07:35 compute-0 nova_compute[253538]: 2025-11-25 09:07:35.565 253542 DEBUG nova.network.neutron [req-d24e9ac6-b6e1-4c49-b415-2c9f0b06b997 req-c06b1edd-d267-44a2-83a3-4f8fd87e1845 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updating instance_info_cache with network_info: [{"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:07:35 compute-0 nova_compute[253538]: 2025-11-25 09:07:35.578 253542 DEBUG oslo_concurrency.lockutils [req-d24e9ac6-b6e1-4c49-b415-2c9f0b06b997 req-c06b1edd-d267-44a2-83a3-4f8fd87e1845 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:07:35 compute-0 nova_compute[253538]: 2025-11-25 09:07:35.675 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmokdw9gy" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:07:35 compute-0 nova_compute[253538]: 2025-11-25 09:07:35.700 253542 DEBUG nova.storage.rbd_utils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image a5ec67ec-7042-47d0-925d-6ff3847d3846_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:07:35 compute-0 nova_compute[253538]: 2025-11-25 09:07:35.704 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846/disk.config a5ec67ec-7042-47d0-925d-6ff3847d3846_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:07:35 compute-0 nova_compute[253538]: 2025-11-25 09:07:35.849 253542 DEBUG oslo_concurrency.processutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846/disk.config a5ec67ec-7042-47d0-925d-6ff3847d3846_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:07:35 compute-0 nova_compute[253538]: 2025-11-25 09:07:35.850 253542 INFO nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Deleting local config drive /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846/disk.config because it was imported into RBD.
Nov 25 09:07:35 compute-0 kernel: tap05d0fd93-ce: entered promiscuous mode
Nov 25 09:07:35 compute-0 NetworkManager[48915]: <info>  [1764061655.9007] manager: (tap05d0fd93-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/611)
Nov 25 09:07:35 compute-0 systemd-udevd[403251]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:07:35 compute-0 nova_compute[253538]: 2025-11-25 09:07:35.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:35 compute-0 ovn_controller[152859]: 2025-11-25T09:07:35Z|01481|binding|INFO|Claiming lport 05d0fd93-ce0f-4842-962f-c9491d3850c8 for this chassis.
Nov 25 09:07:35 compute-0 ovn_controller[152859]: 2025-11-25T09:07:35Z|01482|binding|INFO|05d0fd93-ce0f-4842-962f-c9491d3850c8: Claiming fa:16:3e:e4:ec:f8 10.100.0.7
Nov 25 09:07:35 compute-0 NetworkManager[48915]: <info>  [1764061655.9116] device (tap05d0fd93-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:07:35 compute-0 NetworkManager[48915]: <info>  [1764061655.9128] device (tap05d0fd93-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:07:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.917 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:ec:f8 10.100.0.7'], port_security=['fa:16:3e:e4:ec:f8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a5ec67ec-7042-47d0-925d-6ff3847d3846', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b341b451-7173-4ca0-817f-090d2ce6e1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d52cd819-aa98-4895-9898-b2ec17432e84, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=05d0fd93-ce0f-4842-962f-c9491d3850c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:07:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.920 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 05d0fd93-ce0f-4842-962f-c9491d3850c8 in datapath 2a6609b2-beb0-48a5-8dc0-1a4c153da77e bound to our chassis
Nov 25 09:07:35 compute-0 NetworkManager[48915]: <info>  [1764061655.9239] manager: (tap2eda6ce1-df): new Tun device (/org/freedesktop/NetworkManager/Devices/612)
Nov 25 09:07:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.924 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2a6609b2-beb0-48a5-8dc0-1a4c153da77e
Nov 25 09:07:35 compute-0 kernel: tap2eda6ce1-df: entered promiscuous mode
Nov 25 09:07:35 compute-0 ovn_controller[152859]: 2025-11-25T09:07:35Z|01483|binding|INFO|Setting lport 05d0fd93-ce0f-4842-962f-c9491d3850c8 ovn-installed in OVS
Nov 25 09:07:35 compute-0 ovn_controller[152859]: 2025-11-25T09:07:35Z|01484|binding|INFO|Setting lport 05d0fd93-ce0f-4842-962f-c9491d3850c8 up in Southbound
Nov 25 09:07:35 compute-0 ovn_controller[152859]: 2025-11-25T09:07:35Z|01485|if_status|INFO|Not updating pb chassis for 2eda6ce1-df50-4620-a5e9-d08e62f7350e now as sb is readonly
Nov 25 09:07:35 compute-0 nova_compute[253538]: 2025-11-25 09:07:35.932 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:35 compute-0 ovn_controller[152859]: 2025-11-25T09:07:35Z|01486|binding|INFO|Claiming lport 2eda6ce1-df50-4620-a5e9-d08e62f7350e for this chassis.
Nov 25 09:07:35 compute-0 ovn_controller[152859]: 2025-11-25T09:07:35Z|01487|binding|INFO|2eda6ce1-df50-4620-a5e9-d08e62f7350e: Claiming fa:16:3e:fc:ed:73 2001:db8:0:1:f816:3eff:fefc:ed73 2001:db8::f816:3eff:fefc:ed73
Nov 25 09:07:35 compute-0 NetworkManager[48915]: <info>  [1764061655.9371] device (tap2eda6ce1-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:07:35 compute-0 NetworkManager[48915]: <info>  [1764061655.9379] device (tap2eda6ce1-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:07:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.940 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d8caf94a-74e6-4047-8855-5cc608bd5ba3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.943 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2a6609b2-b1 in ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:07:35 compute-0 ovn_controller[152859]: 2025-11-25T09:07:35Z|01488|binding|INFO|Setting lport 2eda6ce1-df50-4620-a5e9-d08e62f7350e ovn-installed in OVS
Nov 25 09:07:35 compute-0 ovn_controller[152859]: 2025-11-25T09:07:35Z|01489|binding|INFO|Setting lport 2eda6ce1-df50-4620-a5e9-d08e62f7350e up in Southbound
Nov 25 09:07:35 compute-0 nova_compute[253538]: 2025-11-25 09:07:35.950 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.952 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:ed:73 2001:db8:0:1:f816:3eff:fefc:ed73 2001:db8::f816:3eff:fefc:ed73'], port_security=['fa:16:3e:fc:ed:73 2001:db8:0:1:f816:3eff:fefc:ed73 2001:db8::f816:3eff:fefc:ed73'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fefc:ed73/64 2001:db8::f816:3eff:fefc:ed73/64', 'neutron:device_id': 'a5ec67ec-7042-47d0-925d-6ff3847d3846', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b341b451-7173-4ca0-817f-090d2ce6e1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4355854-52b0-49fe-b048-f2ee64c9c702, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2eda6ce1-df50-4620-a5e9-d08e62f7350e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:07:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.950 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2a6609b2-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:07:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.950 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5a2578cc-43d0-408d-a889-f6a671ca9177]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.958 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[efbdfe45-d082-4cae-a7f9-0d986893895e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:35 compute-0 systemd-machined[215790]: New machine qemu-172-instance-0000008d.
Nov 25 09:07:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.976 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d9ec2e-4013-43e0-87f6-a9ed95812cea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:35 compute-0 systemd[1]: Started Virtual Machine qemu-172-instance-0000008d.
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:35.999 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[59ad622e-e0c1-45c4-aee8-93f7956353e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.047 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[05f6717e-2417-4d2e-8461-d8f6ac9eb271]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.065 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ffbe80fd-3af6-4e01-9f2c-b660769c543a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:36 compute-0 NetworkManager[48915]: <info>  [1764061656.0658] manager: (tap2a6609b2-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/613)
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.114 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a804e2e0-5ef6-43eb-b7ba-28ea8134e930]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.117 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[e7abc562-9895-4f43-b503-bf5a0589ba42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:36 compute-0 NetworkManager[48915]: <info>  [1764061656.1534] device (tap2a6609b2-b0): carrier: link connected
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.160 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[15a3bc11-3207-409b-b19b-11acdb1f3abb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.178 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1f1aa8aa-4120-4d48-8a5c-36d25e28eed0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2a6609b2-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:74:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701075, 'reachable_time': 23436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403495, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.198 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[71809fcf-75ca-436e-8a5d-f2ab6f4b5da1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:74b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701075, 'tstamp': 701075}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403499, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.221 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b22a73b7-6cae-4aea-b1b8-2dd6b605ebba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2a6609b2-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:74:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701075, 'reachable_time': 23436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 403505, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:36 compute-0 podman[403471]: 2025-11-25 09:07:36.242302783 +0000 UTC m=+0.129236735 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.272 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e8c4b4-8617-4e69-b004-9e82b76c4989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.350 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6e247429-3fe3-47b5-a9c8-f71c3273ec42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.352 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a6609b2-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.352 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.352 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a6609b2-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.354 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:36 compute-0 NetworkManager[48915]: <info>  [1764061656.3556] manager: (tap2a6609b2-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/614)
Nov 25 09:07:36 compute-0 kernel: tap2a6609b2-b0: entered promiscuous mode
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.358 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2a6609b2-b0, col_values=(('external_ids', {'iface-id': 'd96f8f4b-113a-4ac4-a7d8-3fb3e9d6b94c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.361 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2a6609b2-beb0-48a5-8dc0-1a4c153da77e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2a6609b2-beb0-48a5-8dc0-1a4c153da77e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:07:36 compute-0 ovn_controller[152859]: 2025-11-25T09:07:36Z|01490|binding|INFO|Releasing lport d96f8f4b-113a-4ac4-a7d8-3fb3e9d6b94c from this chassis (sb_readonly=0)
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.362 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c67dbf89-bcd1-46ab-8f5b-e50fbf81745c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.363 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-2a6609b2-beb0-48a5-8dc0-1a4c153da77e
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/2a6609b2-beb0-48a5-8dc0-1a4c153da77e.pid.haproxy
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 2a6609b2-beb0-48a5-8dc0-1a4c153da77e
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.363 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'env', 'PROCESS_TAG=haproxy-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2a6609b2-beb0-48a5-8dc0-1a4c153da77e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.489 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061656.489156, a5ec67ec-7042-47d0-925d-6ff3847d3846 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.490 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] VM Started (Lifecycle Event)
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.520 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.525 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061656.489671, a5ec67ec-7042-47d0-925d-6ff3847d3846 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.525 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] VM Paused (Lifecycle Event)
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.548 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.551 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.573 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:07:36 compute-0 podman[403578]: 2025-11-25 09:07:36.803470147 +0000 UTC m=+0.051748778 container create 9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:07:36 compute-0 systemd[1]: Started libpod-conmon-9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa.scope.
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.866 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:07:36 compute-0 podman[403578]: 2025-11-25 09:07:36.777769439 +0000 UTC m=+0.026048090 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:07:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.877 253542 DEBUG nova.compute.manager [req-7a23aca1-cc2b-4448-971d-ffabfd0e51b9 req-050777ab-e335-4b4b-96d3-2d5a75f789ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.878 253542 DEBUG oslo_concurrency.lockutils [req-7a23aca1-cc2b-4448-971d-ffabfd0e51b9 req-050777ab-e335-4b4b-96d3-2d5a75f789ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.879 253542 DEBUG oslo_concurrency.lockutils [req-7a23aca1-cc2b-4448-971d-ffabfd0e51b9 req-050777ab-e335-4b4b-96d3-2d5a75f789ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.879 253542 DEBUG oslo_concurrency.lockutils [req-7a23aca1-cc2b-4448-971d-ffabfd0e51b9 req-050777ab-e335-4b4b-96d3-2d5a75f789ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.880 253542 DEBUG nova.compute.manager [req-7a23aca1-cc2b-4448-971d-ffabfd0e51b9 req-050777ab-e335-4b4b-96d3-2d5a75f789ae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Processing event network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:07:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e09528d4ae8d074a0aa083a055a8dc133b765cc40f864841f0ed3ff34768cc83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:07:36 compute-0 podman[403578]: 2025-11-25 09:07:36.902222692 +0000 UTC m=+0.150501363 container init 9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 09:07:36 compute-0 podman[403578]: 2025-11-25 09:07:36.907938347 +0000 UTC m=+0.156216988 container start 9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:07:36 compute-0 neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e[403594]: [NOTICE]   (403598) : New worker (403600) forked
Nov 25 09:07:36 compute-0 neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e[403594]: [NOTICE]   (403598) : Loading success.
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.948 253542 DEBUG nova.compute.manager [req-a69ccf3c-4005-454b-b417-71d1a57a11b7 req-93fae0ea-d7d7-4558-a42b-974f52f46335 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.948 253542 DEBUG oslo_concurrency.lockutils [req-a69ccf3c-4005-454b-b417-71d1a57a11b7 req-93fae0ea-d7d7-4558-a42b-974f52f46335 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.949 253542 DEBUG oslo_concurrency.lockutils [req-a69ccf3c-4005-454b-b417-71d1a57a11b7 req-93fae0ea-d7d7-4558-a42b-974f52f46335 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.949 253542 DEBUG oslo_concurrency.lockutils [req-a69ccf3c-4005-454b-b417-71d1a57a11b7 req-93fae0ea-d7d7-4558-a42b-974f52f46335 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.949 253542 DEBUG nova.compute.manager [req-a69ccf3c-4005-454b-b417-71d1a57a11b7 req-93fae0ea-d7d7-4558-a42b-974f52f46335 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Processing event network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.950 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.954 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061656.9539952, a5ec67ec-7042-47d0-925d-6ff3847d3846 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.954 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] VM Resumed (Lifecycle Event)
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.956 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.961 253542 INFO nova.virt.libvirt.driver [-] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Instance spawned successfully.
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.961 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.977 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.982 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2eda6ce1-df50-4620-a5e9-d08e62f7350e in datapath 21786b2a-59f9-4c4e-b462-8a28f7bd93a3 unbound from our chassis
Nov 25 09:07:36 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:36.986 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 21786b2a-59f9-4c4e-b462-8a28f7bd93a3
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.986 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.992 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.993 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.994 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.994 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.995 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:07:36 compute-0 nova_compute[253538]: 2025-11-25 09:07:36.996 253542 DEBUG nova.virt.libvirt.driver [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.001 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9b25c4e5-1d94-4c65-b0c1-8f31f1d97aa6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.002 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap21786b2a-51 in ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:07:37 compute-0 nova_compute[253538]: 2025-11-25 09:07:37.003 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.004 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap21786b2a-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.004 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7557a6b9-826d-4f57-8af9-6f4d571ee1b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.005 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[74a74390-e687-47b0-98e7-1ec1b531a9b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.020 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[0fb97c39-9d82-43b9-bf6e-05f3202df5bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.035 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d689c4b8-33de-4c1f-9c4f-4d748132ec0a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.064 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a37f7dc5-001c-4db4-a28d-2495c98c71a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.069 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f255de48-71b1-44f3-ab0d-02e6ce382e1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:37 compute-0 NetworkManager[48915]: <info>  [1764061657.0709] manager: (tap21786b2a-50): new Veth device (/org/freedesktop/NetworkManager/Devices/615)
Nov 25 09:07:37 compute-0 nova_compute[253538]: 2025-11-25 09:07:37.081 253542 INFO nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Took 10.28 seconds to spawn the instance on the hypervisor.
Nov 25 09:07:37 compute-0 nova_compute[253538]: 2025-11-25 09:07:37.081 253542 DEBUG nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:07:37 compute-0 systemd-udevd[403616]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.119 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[40c69750-a4a3-4977-84f8-8bdfed8288fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.125 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3102d0-278c-4e7e-a389-8049b8b288fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:37 compute-0 NetworkManager[48915]: <info>  [1764061657.1507] device (tap21786b2a-50): carrier: link connected
Nov 25 09:07:37 compute-0 nova_compute[253538]: 2025-11-25 09:07:37.151 253542 INFO nova.compute.manager [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Took 11.50 seconds to build instance.
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.156 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[858d829d-5964-49b2-ad0b-f0e226279bca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.171 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2dcff875-0499-4200-9056-b8b0b30f5dec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21786b2a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:b7:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 429], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701174, 'reachable_time': 31262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403635, 'error': None, 'target': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:37 compute-0 nova_compute[253538]: 2025-11-25 09:07:37.179 253542 DEBUG oslo_concurrency.lockutils [None req-85aeb3ae-c5c3-40cc-8a06-a8e3b54adb3e c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.185 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[614b21dc-01ea-44a4-b562-6d92cac8714f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:b782'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701174, 'tstamp': 701174}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403636, 'error': None, 'target': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.204 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8ce041-bd4c-4087-a87f-524aae15eb58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21786b2a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:b7:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 429], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701174, 'reachable_time': 31262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 403637, 'error': None, 'target': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.237 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4c415e89-134b-4428-932e-ab042ec18aec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.281 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f19485ec-6536-443b-be8b-d8b0310a6b81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.282 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21786b2a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.283 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.283 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21786b2a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:37 compute-0 nova_compute[253538]: 2025-11-25 09:07:37.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:37 compute-0 NetworkManager[48915]: <info>  [1764061657.2855] manager: (tap21786b2a-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/616)
Nov 25 09:07:37 compute-0 kernel: tap21786b2a-50: entered promiscuous mode
Nov 25 09:07:37 compute-0 nova_compute[253538]: 2025-11-25 09:07:37.287 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.288 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap21786b2a-50, col_values=(('external_ids', {'iface-id': 'c3131dc3-cd6c-4c0c-a1cd-e385c6ff1693'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:37 compute-0 nova_compute[253538]: 2025-11-25 09:07:37.289 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:37 compute-0 ceph-mon[75015]: pgmap v2608: 321 pgs: 321 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Nov 25 09:07:37 compute-0 ovn_controller[152859]: 2025-11-25T09:07:37Z|01491|binding|INFO|Releasing lport c3131dc3-cd6c-4c0c-a1cd-e385c6ff1693 from this chassis (sb_readonly=0)
Nov 25 09:07:37 compute-0 nova_compute[253538]: 2025-11-25 09:07:37.304 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.306 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/21786b2a-59f9-4c4e-b462-8a28f7bd93a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/21786b2a-59f9-4c4e-b462-8a28f7bd93a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.307 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[27b33ae7-2bdb-47a2-be79-4e5acd21409f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.307 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-21786b2a-59f9-4c4e-b462-8a28f7bd93a3
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/21786b2a-59f9-4c4e-b462-8a28f7bd93a3.pid.haproxy
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 21786b2a-59f9-4c4e-b462-8a28f7bd93a3
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.308 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'env', 'PROCESS_TAG=haproxy-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/21786b2a-59f9-4c4e-b462-8a28f7bd93a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:07:37 compute-0 podman[403668]: 2025-11-25 09:07:37.724743162 +0000 UTC m=+0.052115008 container create 4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:07:37 compute-0 systemd[1]: Started libpod-conmon-4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51.scope.
Nov 25 09:07:37 compute-0 podman[403668]: 2025-11-25 09:07:37.697430329 +0000 UTC m=+0.024802195 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:07:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:07:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/489261d86b924eb877c414a169c28246ea802beb251d049c1cd79a5b173b9a69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:07:37 compute-0 podman[403668]: 2025-11-25 09:07:37.8269489 +0000 UTC m=+0.154320746 container init 4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:07:37 compute-0 podman[403668]: 2025-11-25 09:07:37.834922547 +0000 UTC m=+0.162294393 container start 4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:07:37 compute-0 neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3[403683]: [NOTICE]   (403687) : New worker (403689) forked
Nov 25 09:07:37 compute-0 neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3[403683]: [NOTICE]   (403687) : Loading success.
Nov 25 09:07:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:37.898 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:07:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:07:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Nov 25 09:07:38 compute-0 nova_compute[253538]: 2025-11-25 09:07:38.994 253542 DEBUG nova.compute.manager [req-7196ae77-d36e-48e2-a65a-15c57a14b04b req-4bfb3f9d-80eb-43b8-b73b-92a5063310d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:07:38 compute-0 nova_compute[253538]: 2025-11-25 09:07:38.995 253542 DEBUG oslo_concurrency.lockutils [req-7196ae77-d36e-48e2-a65a-15c57a14b04b req-4bfb3f9d-80eb-43b8-b73b-92a5063310d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:38 compute-0 nova_compute[253538]: 2025-11-25 09:07:38.995 253542 DEBUG oslo_concurrency.lockutils [req-7196ae77-d36e-48e2-a65a-15c57a14b04b req-4bfb3f9d-80eb-43b8-b73b-92a5063310d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:38 compute-0 nova_compute[253538]: 2025-11-25 09:07:38.995 253542 DEBUG oslo_concurrency.lockutils [req-7196ae77-d36e-48e2-a65a-15c57a14b04b req-4bfb3f9d-80eb-43b8-b73b-92a5063310d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:38 compute-0 nova_compute[253538]: 2025-11-25 09:07:38.995 253542 DEBUG nova.compute.manager [req-7196ae77-d36e-48e2-a65a-15c57a14b04b req-4bfb3f9d-80eb-43b8-b73b-92a5063310d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] No waiting events found dispatching network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:07:38 compute-0 nova_compute[253538]: 2025-11-25 09:07:38.995 253542 WARNING nova.compute.manager [req-7196ae77-d36e-48e2-a65a-15c57a14b04b req-4bfb3f9d-80eb-43b8-b73b-92a5063310d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received unexpected event network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 for instance with vm_state active and task_state None.
Nov 25 09:07:39 compute-0 nova_compute[253538]: 2025-11-25 09:07:39.034 253542 DEBUG nova.compute.manager [req-2692a35c-ea2b-4675-8568-0fcc301049d7 req-d7aae92a-92a9-410d-8280-e57eaf161ee3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:07:39 compute-0 nova_compute[253538]: 2025-11-25 09:07:39.034 253542 DEBUG oslo_concurrency.lockutils [req-2692a35c-ea2b-4675-8568-0fcc301049d7 req-d7aae92a-92a9-410d-8280-e57eaf161ee3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:39 compute-0 nova_compute[253538]: 2025-11-25 09:07:39.034 253542 DEBUG oslo_concurrency.lockutils [req-2692a35c-ea2b-4675-8568-0fcc301049d7 req-d7aae92a-92a9-410d-8280-e57eaf161ee3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:39 compute-0 nova_compute[253538]: 2025-11-25 09:07:39.034 253542 DEBUG oslo_concurrency.lockutils [req-2692a35c-ea2b-4675-8568-0fcc301049d7 req-d7aae92a-92a9-410d-8280-e57eaf161ee3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:39 compute-0 nova_compute[253538]: 2025-11-25 09:07:39.035 253542 DEBUG nova.compute.manager [req-2692a35c-ea2b-4675-8568-0fcc301049d7 req-d7aae92a-92a9-410d-8280-e57eaf161ee3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] No waiting events found dispatching network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:07:39 compute-0 nova_compute[253538]: 2025-11-25 09:07:39.035 253542 WARNING nova.compute.manager [req-2692a35c-ea2b-4675-8568-0fcc301049d7 req-d7aae92a-92a9-410d-8280-e57eaf161ee3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received unexpected event network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e for instance with vm_state active and task_state None.
Nov 25 09:07:39 compute-0 nova_compute[253538]: 2025-11-25 09:07:39.164 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:39 compute-0 sshd-session[403640]: Invalid user ubuntu from 119.96.131.8 port 56890
Nov 25 09:07:39 compute-0 ceph-mon[75015]: pgmap v2609: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Nov 25 09:07:39 compute-0 sshd-session[403640]: Received disconnect from 119.96.131.8 port 56890:11:  [preauth]
Nov 25 09:07:39 compute-0 sshd-session[403640]: Disconnected from invalid user ubuntu 119.96.131.8 port 56890 [preauth]
Nov 25 09:07:39 compute-0 nova_compute[253538]: 2025-11-25 09:07:39.866 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Nov 25 09:07:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:41.092 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:41.092 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:41.093 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:07:41 compute-0 ceph-mon[75015]: pgmap v2610: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Nov 25 09:07:41 compute-0 nova_compute[253538]: 2025-11-25 09:07:41.490 253542 DEBUG nova.compute.manager [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-changed-ff88ca8a-d270-4991-b2c4-617f04418848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:07:41 compute-0 nova_compute[253538]: 2025-11-25 09:07:41.491 253542 DEBUG nova.compute.manager [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Refreshing instance network info cache due to event network-changed-ff88ca8a-d270-4991-b2c4-617f04418848. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:07:41 compute-0 nova_compute[253538]: 2025-11-25 09:07:41.492 253542 DEBUG oslo_concurrency.lockutils [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:07:41 compute-0 nova_compute[253538]: 2025-11-25 09:07:41.492 253542 DEBUG oslo_concurrency.lockutils [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:07:41 compute-0 nova_compute[253538]: 2025-11-25 09:07:41.492 253542 DEBUG nova.network.neutron [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Refreshing network info cache for port ff88ca8a-d270-4991-b2c4-617f04418848 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:07:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 408 KiB/s wr, 155 op/s
Nov 25 09:07:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:07:43 compute-0 ceph-mon[75015]: pgmap v2611: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 408 KiB/s wr, 155 op/s
Nov 25 09:07:44 compute-0 nova_compute[253538]: 2025-11-25 09:07:44.167 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 150 op/s
Nov 25 09:07:44 compute-0 nova_compute[253538]: 2025-11-25 09:07:44.246 253542 DEBUG nova.network.neutron [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Updated VIF entry in instance network info cache for port ff88ca8a-d270-4991-b2c4-617f04418848. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:07:44 compute-0 nova_compute[253538]: 2025-11-25 09:07:44.246 253542 DEBUG nova.network.neutron [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Updating instance_info_cache with network_info: [{"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:07:44 compute-0 nova_compute[253538]: 2025-11-25 09:07:44.285 253542 DEBUG oslo_concurrency.lockutils [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:07:44 compute-0 nova_compute[253538]: 2025-11-25 09:07:44.286 253542 DEBUG nova.compute.manager [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-changed-05d0fd93-ce0f-4842-962f-c9491d3850c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:07:44 compute-0 nova_compute[253538]: 2025-11-25 09:07:44.286 253542 DEBUG nova.compute.manager [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Refreshing instance network info cache due to event network-changed-05d0fd93-ce0f-4842-962f-c9491d3850c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:07:44 compute-0 nova_compute[253538]: 2025-11-25 09:07:44.287 253542 DEBUG oslo_concurrency.lockutils [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:07:44 compute-0 nova_compute[253538]: 2025-11-25 09:07:44.287 253542 DEBUG oslo_concurrency.lockutils [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:07:44 compute-0 nova_compute[253538]: 2025-11-25 09:07:44.287 253542 DEBUG nova.network.neutron [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Refreshing network info cache for port 05d0fd93-ce0f-4842-962f-c9491d3850c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:07:44 compute-0 nova_compute[253538]: 2025-11-25 09:07:44.871 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:45 compute-0 ceph-mon[75015]: pgmap v2612: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 150 op/s
Nov 25 09:07:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 150 op/s
Nov 25 09:07:46 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:07:46.900 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:07:47 compute-0 ovn_controller[152859]: 2025-11-25T09:07:47Z|00185|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.6
Nov 25 09:07:47 compute-0 ovn_controller[152859]: 2025-11-25T09:07:47Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:ec:c7:f1 10.100.0.6
Nov 25 09:07:47 compute-0 ceph-mon[75015]: pgmap v2613: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 150 op/s
Nov 25 09:07:47 compute-0 nova_compute[253538]: 2025-11-25 09:07:47.417 253542 DEBUG nova.network.neutron [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updated VIF entry in instance network info cache for port 05d0fd93-ce0f-4842-962f-c9491d3850c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:07:47 compute-0 nova_compute[253538]: 2025-11-25 09:07:47.418 253542 DEBUG nova.network.neutron [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updating instance_info_cache with network_info: [{"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:07:47 compute-0 nova_compute[253538]: 2025-11-25 09:07:47.434 253542 DEBUG oslo_concurrency.lockutils [req-b579cec6-3aab-4bed-92ec-ba1bf7703cf6 req-5d76c048-99bb-47d5-891e-3f3d633220d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:07:48 compute-0 nova_compute[253538]: 2025-11-25 09:07:48.017 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:07:48 compute-0 nova_compute[253538]: 2025-11-25 09:07:48.018 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:07:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:07:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 302 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 347 KiB/s wr, 180 op/s
Nov 25 09:07:49 compute-0 nova_compute[253538]: 2025-11-25 09:07:49.171 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:49 compute-0 ceph-mon[75015]: pgmap v2614: 321 pgs: 321 active+clean; 302 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 347 KiB/s wr, 180 op/s
Nov 25 09:07:49 compute-0 nova_compute[253538]: 2025-11-25 09:07:49.874 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 319 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.4 MiB/s wr, 195 op/s
Nov 25 09:07:50 compute-0 ovn_controller[152859]: 2025-11-25T09:07:50Z|00187|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.6
Nov 25 09:07:50 compute-0 ovn_controller[152859]: 2025-11-25T09:07:50Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:ec:c7:f1 10.100.0.6
Nov 25 09:07:50 compute-0 ovn_controller[152859]: 2025-11-25T09:07:50Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:ec:f8 10.100.0.7
Nov 25 09:07:50 compute-0 ovn_controller[152859]: 2025-11-25T09:07:50Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:ec:f8 10.100.0.7
Nov 25 09:07:51 compute-0 ceph-mon[75015]: pgmap v2615: 321 pgs: 321 active+clean; 319 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.4 MiB/s wr, 195 op/s
Nov 25 09:07:51 compute-0 nova_compute[253538]: 2025-11-25 09:07:51.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:07:51 compute-0 nova_compute[253538]: 2025-11-25 09:07:51.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:07:51 compute-0 nova_compute[253538]: 2025-11-25 09:07:51.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:07:51 compute-0 nova_compute[253538]: 2025-11-25 09:07:51.737 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:07:51 compute-0 nova_compute[253538]: 2025-11-25 09:07:51.738 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:07:51 compute-0 nova_compute[253538]: 2025-11-25 09:07:51.738 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 09:07:51 compute-0 nova_compute[253538]: 2025-11-25 09:07:51.738 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aeb9ccf-2506-41d1-92c2-c72892096857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:07:52 compute-0 ovn_controller[152859]: 2025-11-25T09:07:52Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:c7:f1 10.100.0.6
Nov 25 09:07:52 compute-0 ovn_controller[152859]: 2025-11-25T09:07:52Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:c7:f1 10.100.0.6
Nov 25 09:07:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 322 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 115 op/s
Nov 25 09:07:52 compute-0 nova_compute[253538]: 2025-11-25 09:07:52.886 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updating instance_info_cache with network_info: [{"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:07:52 compute-0 nova_compute[253538]: 2025-11-25 09:07:52.903 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:07:52 compute-0 nova_compute[253538]: 2025-11-25 09:07:52.904 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 09:07:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:07:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:07:53
Nov 25 09:07:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:07:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:07:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'backups', '.mgr', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'images']
Nov 25 09:07:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:07:53 compute-0 ceph-mon[75015]: pgmap v2616: 321 pgs: 321 active+clean; 322 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 115 op/s
Nov 25 09:07:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:07:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:07:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:07:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:07:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:07:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:07:53 compute-0 nova_compute[253538]: 2025-11-25 09:07:53.896 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:07:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:07:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:07:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:07:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:07:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:07:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:07:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:07:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:07:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:07:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:07:54 compute-0 nova_compute[253538]: 2025-11-25 09:07:54.174 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 338 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.6 MiB/s wr, 119 op/s
Nov 25 09:07:54 compute-0 nova_compute[253538]: 2025-11-25 09:07:54.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:07:54 compute-0 nova_compute[253538]: 2025-11-25 09:07:54.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:07:54 compute-0 nova_compute[253538]: 2025-11-25 09:07:54.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:55 compute-0 ceph-mon[75015]: pgmap v2617: 321 pgs: 321 active+clean; 338 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.6 MiB/s wr, 119 op/s
Nov 25 09:07:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 340 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 117 op/s
Nov 25 09:07:56 compute-0 nova_compute[253538]: 2025-11-25 09:07:56.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:07:57 compute-0 ceph-mon[75015]: pgmap v2618: 321 pgs: 321 active+clean; 340 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 117 op/s
Nov 25 09:07:57 compute-0 nova_compute[253538]: 2025-11-25 09:07:57.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:07:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:07:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 343 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 120 op/s
Nov 25 09:07:58 compute-0 nova_compute[253538]: 2025-11-25 09:07:58.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:07:59 compute-0 nova_compute[253538]: 2025-11-25 09:07:59.176 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:59 compute-0 ceph-mon[75015]: pgmap v2619: 321 pgs: 321 active+clean; 343 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 120 op/s
Nov 25 09:07:59 compute-0 nova_compute[253538]: 2025-11-25 09:07:59.522 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:59 compute-0 nova_compute[253538]: 2025-11-25 09:07:59.523 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:07:59 compute-0 nova_compute[253538]: 2025-11-25 09:07:59.535 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:07:59 compute-0 podman[403699]: 2025-11-25 09:07:59.826180847 +0000 UTC m=+0.058057398 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 09:07:59 compute-0 podman[403698]: 2025-11-25 09:07:59.829253421 +0000 UTC m=+0.067229109 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:07:59 compute-0 nova_compute[253538]: 2025-11-25 09:07:59.879 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:07:59 compute-0 nova_compute[253538]: 2025-11-25 09:07:59.996 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:07:59 compute-0 nova_compute[253538]: 2025-11-25 09:07:59.996 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.006 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.007 253542 INFO nova.compute.claims [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.140 253542 DEBUG nova.scheduler.client.report [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.199 253542 DEBUG nova.scheduler.client.report [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.199 253542 DEBUG nova.compute.provider_tree [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:08:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 343 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 2.4 MiB/s wr, 88 op/s
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.211 253542 DEBUG nova.scheduler.client.report [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.230 253542 DEBUG nova.scheduler.client.report [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.326 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:08:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:08:00 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2736770756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.782 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.790 253542 DEBUG nova.compute.provider_tree [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.806 253542 DEBUG nova.scheduler.client.report [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.827 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.827 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.876 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.876 253542 DEBUG nova.network.neutron [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.903 253542 INFO nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:08:00 compute-0 nova_compute[253538]: 2025-11-25 09:08:00.922 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.030 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.032 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.032 253542 INFO nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Creating image(s)
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.057 253542 DEBUG nova.storage.rbd_utils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 24611274-7a7c-4258-8631-032a6c1d8410_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.086 253542 DEBUG nova.storage.rbd_utils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 24611274-7a7c-4258-8631-032a6c1d8410_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.110 253542 DEBUG nova.storage.rbd_utils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 24611274-7a7c-4258-8631-032a6c1d8410_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.114 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.166 253542 DEBUG nova.policy [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.212 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.213 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.214 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.214 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.242 253542 DEBUG nova.storage.rbd_utils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 24611274-7a7c-4258-8631-032a6c1d8410_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.246 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 24611274-7a7c-4258-8631-032a6c1d8410_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:08:01 compute-0 ceph-mon[75015]: pgmap v2620: 321 pgs: 321 active+clean; 343 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 2.4 MiB/s wr, 88 op/s
Nov 25 09:08:01 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2736770756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.511 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 24611274-7a7c-4258-8631-032a6c1d8410_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.572 253542 DEBUG nova.storage.rbd_utils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 24611274-7a7c-4258-8631-032a6c1d8410_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.656 253542 DEBUG nova.objects.instance [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 24611274-7a7c-4258-8631-032a6c1d8410 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.668 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.668 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Ensure instance console log exists: /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.669 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.669 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:01 compute-0 nova_compute[253538]: 2025-11-25 09:08:01.669 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:02 compute-0 nova_compute[253538]: 2025-11-25 09:08:02.107 253542 DEBUG nova.network.neutron [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Successfully created port: 0967717d-564b-4989-8f67-1cd8c2de57ce _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:08:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 343 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 1.3 MiB/s wr, 60 op/s
Nov 25 09:08:02 compute-0 nova_compute[253538]: 2025-11-25 09:08:02.757 253542 DEBUG nova.network.neutron [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Successfully created port: 6effd17c-b1ee-44e2-8346-9e445de0dfb9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:08:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:08:03 compute-0 ceph-mon[75015]: pgmap v2621: 321 pgs: 321 active+clean; 343 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 1.3 MiB/s wr, 60 op/s
Nov 25 09:08:03 compute-0 nova_compute[253538]: 2025-11-25 09:08:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:08:03 compute-0 nova_compute[253538]: 2025-11-25 09:08:03.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:03 compute-0 nova_compute[253538]: 2025-11-25 09:08:03.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:03 compute-0 nova_compute[253538]: 2025-11-25 09:08:03.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:03 compute-0 nova_compute[253538]: 2025-11-25 09:08:03.576 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:08:03 compute-0 nova_compute[253538]: 2025-11-25 09:08:03.577 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:08:03 compute-0 nova_compute[253538]: 2025-11-25 09:08:03.634 253542 DEBUG nova.network.neutron [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Successfully updated port: 0967717d-564b-4989-8f67-1cd8c2de57ce _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:08:03 compute-0 nova_compute[253538]: 2025-11-25 09:08:03.710 253542 DEBUG nova.compute.manager [req-ed74f5d0-87e3-429f-931e-000bb3fd45d7 req-2fde0e27-450f-4a34-9927-4e454ec9c94c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-changed-0967717d-564b-4989-8f67-1cd8c2de57ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:03 compute-0 nova_compute[253538]: 2025-11-25 09:08:03.711 253542 DEBUG nova.compute.manager [req-ed74f5d0-87e3-429f-931e-000bb3fd45d7 req-2fde0e27-450f-4a34-9927-4e454ec9c94c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Refreshing instance network info cache due to event network-changed-0967717d-564b-4989-8f67-1cd8c2de57ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:08:03 compute-0 nova_compute[253538]: 2025-11-25 09:08:03.711 253542 DEBUG oslo_concurrency.lockutils [req-ed74f5d0-87e3-429f-931e-000bb3fd45d7 req-2fde0e27-450f-4a34-9927-4e454ec9c94c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:08:03 compute-0 nova_compute[253538]: 2025-11-25 09:08:03.711 253542 DEBUG oslo_concurrency.lockutils [req-ed74f5d0-87e3-429f-931e-000bb3fd45d7 req-2fde0e27-450f-4a34-9927-4e454ec9c94c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:08:03 compute-0 nova_compute[253538]: 2025-11-25 09:08:03.711 253542 DEBUG nova.network.neutron [req-ed74f5d0-87e3-429f-931e-000bb3fd45d7 req-2fde0e27-450f-4a34-9927-4e454ec9c94c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Refreshing network info cache for port 0967717d-564b-4989-8f67-1cd8c2de57ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:08:03 compute-0 nova_compute[253538]: 2025-11-25 09:08:03.893 253542 DEBUG nova.network.neutron [req-ed74f5d0-87e3-429f-931e-000bb3fd45d7 req-2fde0e27-450f-4a34-9927-4e454ec9c94c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:08:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:08:04 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4143973489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.103 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.180 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.207 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.207 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 361 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 162 KiB/s rd, 1.7 MiB/s wr, 41 op/s
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.211 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.212 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.215 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.216 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.232 253542 DEBUG nova.network.neutron [req-ed74f5d0-87e3-429f-931e-000bb3fd45d7 req-2fde0e27-450f-4a34-9927-4e454ec9c94c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.248 253542 DEBUG oslo_concurrency.lockutils [req-ed74f5d0-87e3-429f-931e-000bb3fd45d7 req-2fde0e27-450f-4a34-9927-4e454ec9c94c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017758341065963405 of space, bias 1.0, pg target 0.5327502319789021 quantized to 32 (current 32)
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001769983761476851 of space, bias 1.0, pg target 0.5309951284430553 quantized to 32 (current 32)
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:08:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:08:04 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4143973489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.422 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.423 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2982MB free_disk=59.89088821411133GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.424 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.424 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.451 253542 DEBUG nova.network.neutron [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Successfully updated port: 6effd17c-b1ee-44e2-8346-9e445de0dfb9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.476 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.476 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.476 253542 DEBUG nova.network.neutron [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.501 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 7aeb9ccf-2506-41d1-92c2-c72892096857 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.501 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance a5ec67ec-7042-47d0-925d-6ff3847d3846 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.501 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance de5bfbef-7a99-4280-a304-71b9099f110b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.502 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 24611274-7a7c-4258-8631-032a6c1d8410 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.502 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.502 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.616 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.669 253542 DEBUG nova.network.neutron [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:08:04 compute-0 nova_compute[253538]: 2025-11-25 09:08:04.882 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:08:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4164471518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:08:05 compute-0 nova_compute[253538]: 2025-11-25 09:08:05.079 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:08:05 compute-0 nova_compute[253538]: 2025-11-25 09:08:05.089 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:08:05 compute-0 nova_compute[253538]: 2025-11-25 09:08:05.105 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:08:05 compute-0 nova_compute[253538]: 2025-11-25 09:08:05.130 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:08:05 compute-0 nova_compute[253538]: 2025-11-25 09:08:05.131 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:05 compute-0 sshd-session[403946]: Connection closed by authenticating user root 171.244.51.45 port 60042 [preauth]
Nov 25 09:08:05 compute-0 ceph-mon[75015]: pgmap v2622: 321 pgs: 321 active+clean; 361 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 162 KiB/s rd, 1.7 MiB/s wr, 41 op/s
Nov 25 09:08:05 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4164471518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:08:05 compute-0 nova_compute[253538]: 2025-11-25 09:08:05.792 253542 DEBUG nova.compute.manager [req-fcb3e5b3-7f48-471d-af94-b4088e8205ab req-13e21f76-62d1-4267-9bbf-b02c360942af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-changed-6effd17c-b1ee-44e2-8346-9e445de0dfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:05 compute-0 nova_compute[253538]: 2025-11-25 09:08:05.792 253542 DEBUG nova.compute.manager [req-fcb3e5b3-7f48-471d-af94-b4088e8205ab req-13e21f76-62d1-4267-9bbf-b02c360942af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Refreshing instance network info cache due to event network-changed-6effd17c-b1ee-44e2-8346-9e445de0dfb9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:08:05 compute-0 nova_compute[253538]: 2025-11-25 09:08:05.792 253542 DEBUG oslo_concurrency.lockutils [req-fcb3e5b3-7f48-471d-af94-b4088e8205ab req-13e21f76-62d1-4267-9bbf-b02c360942af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:08:05 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:08:05 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 54K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1374 writes, 6219 keys, 1374 commit groups, 1.0 writes per commit group, ingest: 8.73 MB, 0.01 MB/s
                                           Interval WAL: 1374 writes, 1374 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     30.3      2.17              0.23        38    0.057       0      0       0.0       0.0
                                             L6      1/0    8.09 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.4     61.4     51.3      5.68              0.94        37    0.154    224K    20K       0.0       0.0
                                            Sum      1/0    8.09 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.4     44.4     45.5      7.85              1.18        75    0.105    224K    20K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.2     41.3     41.8      1.26              0.15        10    0.126     38K   2553       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0     61.4     51.3      5.68              0.94        37    0.154    224K    20K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     30.3      2.17              0.23        37    0.059       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.064, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.35 GB write, 0.07 MB/s write, 0.34 GB read, 0.07 MB/s read, 7.9 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 1.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 304.00 MB usage: 40.65 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.00026 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2656,39.04 MB,12.8429%) FilterBlock(76,633.61 KB,0.203539%) IndexBlock(76,1011.41 KB,0.324902%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 09:08:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 375 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 1.4 MiB/s wr, 27 op/s
Nov 25 09:08:06 compute-0 podman[403973]: 2025-11-25 09:08:06.85904195 +0000 UTC m=+0.105568251 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 25 09:08:07 compute-0 ceph-mon[75015]: pgmap v2623: 321 pgs: 321 active+clean; 375 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 1.4 MiB/s wr, 27 op/s
Nov 25 09:08:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:08:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 389 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.179 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:09 compute-0 ceph-mon[75015]: pgmap v2624: 321 pgs: 321 active+clean; 389 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.652 253542 DEBUG nova.network.neutron [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updating instance_info_cache with network_info: [{"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": 
"2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.696 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.696 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Instance network_info: |[{"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.697 253542 DEBUG oslo_concurrency.lockutils [req-fcb3e5b3-7f48-471d-af94-b4088e8205ab req-13e21f76-62d1-4267-9bbf-b02c360942af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.697 253542 DEBUG nova.network.neutron [req-fcb3e5b3-7f48-471d-af94-b4088e8205ab req-13e21f76-62d1-4267-9bbf-b02c360942af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Refreshing network info cache for port 6effd17c-b1ee-44e2-8346-9e445de0dfb9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.701 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Start _get_guest_xml network_info=[{"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.706 253542 WARNING nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.710 253542 DEBUG nova.virt.libvirt.host [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.711 253542 DEBUG nova.virt.libvirt.host [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.714 253542 DEBUG nova.virt.libvirt.host [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.714 253542 DEBUG nova.virt.libvirt.host [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.715 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.715 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.715 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.716 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.716 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.716 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.716 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.717 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.717 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.717 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.717 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.718 253542 DEBUG nova.virt.hardware [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.721 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:08:09 compute-0 nova_compute[253538]: 2025-11-25 09:08:09.885 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 389 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:08:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:08:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3565731362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.264 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.288 253542 DEBUG nova.storage.rbd_utils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 24611274-7a7c-4258-8631-032a6c1d8410_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.293 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:08:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3565731362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:08:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:08:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/668307544' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.772 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.774 253542 DEBUG nova.virt.libvirt.vif [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1852306099',display_name='tempest-TestGettingAddress-server-1852306099',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1852306099',id=143,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-0x2ihl6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:08:00Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=24611274-7a7c-4258-8631-032a6c1d8410,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.774 253542 DEBUG nova.network.os_vif_util [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.775 253542 DEBUG nova.network.os_vif_util [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:69:66,bridge_name='br-int',has_traffic_filtering=True,id=0967717d-564b-4989-8f67-1cd8c2de57ce,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0967717d-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.776 253542 DEBUG nova.virt.libvirt.vif [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1852306099',display_name='tempest-TestGettingAddress-server-1852306099',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1852306099',id=143,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-0x2ihl6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:08:00Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=24611274-7a7c-4258-8631-032a6c1d8410,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.777 253542 DEBUG nova.network.os_vif_util [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.777 253542 DEBUG nova.network.os_vif_util [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:6f:de,bridge_name='br-int',has_traffic_filtering=True,id=6effd17c-b1ee-44e2-8346-9e445de0dfb9,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6effd17c-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.778 253542 DEBUG nova.objects.instance [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 24611274-7a7c-4258-8631-032a6c1d8410 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.793 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:08:10 compute-0 nova_compute[253538]:   <uuid>24611274-7a7c-4258-8631-032a6c1d8410</uuid>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   <name>instance-0000008f</name>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <nova:name>tempest-TestGettingAddress-server-1852306099</nova:name>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:08:09</nova:creationTime>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:08:10 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:08:10 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:08:10 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:08:10 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:08:10 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:08:10 compute-0 nova_compute[253538]:         <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 09:08:10 compute-0 nova_compute[253538]:         <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:08:10 compute-0 nova_compute[253538]:         <nova:port uuid="0967717d-564b-4989-8f67-1cd8c2de57ce">
Nov 25 09:08:10 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:08:10 compute-0 nova_compute[253538]:         <nova:port uuid="6effd17c-b1ee-44e2-8346-9e445de0dfb9">
Nov 25 09:08:10 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe28:6fde" ipVersion="6"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe28:6fde" ipVersion="6"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <system>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <entry name="serial">24611274-7a7c-4258-8631-032a6c1d8410</entry>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <entry name="uuid">24611274-7a7c-4258-8631-032a6c1d8410</entry>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     </system>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   <os>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   </os>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   <features>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   </features>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/24611274-7a7c-4258-8631-032a6c1d8410_disk">
Nov 25 09:08:10 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       </source>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:08:10 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/24611274-7a7c-4258-8631-032a6c1d8410_disk.config">
Nov 25 09:08:10 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       </source>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:08:10 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:11:69:66"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <target dev="tap0967717d-56"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:28:6f:de"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <target dev="tap6effd17c-b1"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410/console.log" append="off"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <video>
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     </video>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:08:10 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:08:10 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:08:10 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:08:10 compute-0 nova_compute[253538]: </domain>
Nov 25 09:08:10 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.795 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Preparing to wait for external event network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.795 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.795 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.796 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.796 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Preparing to wait for external event network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.796 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.796 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.796 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.797 253542 DEBUG nova.virt.libvirt.vif [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1852306099',display_name='tempest-TestGettingAddress-server-1852306099',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1852306099',id=143,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-0x2ihl6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:08:00Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=24611274-7a7c-4258-8631-032a6c1d8410,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.797 253542 DEBUG nova.network.os_vif_util [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.798 253542 DEBUG nova.network.os_vif_util [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:69:66,bridge_name='br-int',has_traffic_filtering=True,id=0967717d-564b-4989-8f67-1cd8c2de57ce,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0967717d-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.798 253542 DEBUG os_vif [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:69:66,bridge_name='br-int',has_traffic_filtering=True,id=0967717d-564b-4989-8f67-1cd8c2de57ce,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0967717d-56') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.799 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.799 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.800 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.802 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.802 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0967717d-56, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.803 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0967717d-56, col_values=(('external_ids', {'iface-id': '0967717d-564b-4989-8f67-1cd8c2de57ce', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:69:66', 'vm-uuid': '24611274-7a7c-4258-8631-032a6c1d8410'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.805 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:10 compute-0 NetworkManager[48915]: <info>  [1764061690.8060] manager: (tap0967717d-56): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/617)
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.808 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.812 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.813 253542 INFO os_vif [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:69:66,bridge_name='br-int',has_traffic_filtering=True,id=0967717d-564b-4989-8f67-1cd8c2de57ce,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0967717d-56')
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.814 253542 DEBUG nova.virt.libvirt.vif [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:07:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1852306099',display_name='tempest-TestGettingAddress-server-1852306099',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1852306099',id=143,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-0x2ihl6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:08:00Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=24611274-7a7c-4258-8631-032a6c1d8410,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.814 253542 DEBUG nova.network.os_vif_util [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.815 253542 DEBUG nova.network.os_vif_util [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:6f:de,bridge_name='br-int',has_traffic_filtering=True,id=6effd17c-b1ee-44e2-8346-9e445de0dfb9,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6effd17c-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.815 253542 DEBUG os_vif [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:6f:de,bridge_name='br-int',has_traffic_filtering=True,id=6effd17c-b1ee-44e2-8346-9e445de0dfb9,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6effd17c-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.816 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.816 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.818 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.819 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6effd17c-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.819 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6effd17c-b1, col_values=(('external_ids', {'iface-id': '6effd17c-b1ee-44e2-8346-9e445de0dfb9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:6f:de', 'vm-uuid': '24611274-7a7c-4258-8631-032a6c1d8410'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.820 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:10 compute-0 NetworkManager[48915]: <info>  [1764061690.8213] manager: (tap6effd17c-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/618)
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.822 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.826 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.827 253542 INFO os_vif [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:6f:de,bridge_name='br-int',has_traffic_filtering=True,id=6effd17c-b1ee-44e2-8346-9e445de0dfb9,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6effd17c-b1')
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.871 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.871 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.871 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:11:69:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.871 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:28:6f:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.872 253542 INFO nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Using config drive
Nov 25 09:08:10 compute-0 nova_compute[253538]: 2025-11-25 09:08:10.896 253542 DEBUG nova.storage.rbd_utils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 24611274-7a7c-4258-8631-032a6c1d8410_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:08:11 compute-0 ceph-mon[75015]: pgmap v2625: 321 pgs: 321 active+clean; 389 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:08:11 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/668307544' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:08:11 compute-0 nova_compute[253538]: 2025-11-25 09:08:11.553 253542 INFO nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Creating config drive at /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410/disk.config
Nov 25 09:08:11 compute-0 nova_compute[253538]: 2025-11-25 09:08:11.559 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpobzoy0q1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:08:11 compute-0 nova_compute[253538]: 2025-11-25 09:08:11.705 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpobzoy0q1" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:08:11 compute-0 nova_compute[253538]: 2025-11-25 09:08:11.735 253542 DEBUG nova.storage.rbd_utils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 24611274-7a7c-4258-8631-032a6c1d8410_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:08:11 compute-0 nova_compute[253538]: 2025-11-25 09:08:11.740 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410/disk.config 24611274-7a7c-4258-8631-032a6c1d8410_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:08:11 compute-0 nova_compute[253538]: 2025-11-25 09:08:11.949 253542 DEBUG oslo_concurrency.processutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410/disk.config 24611274-7a7c-4258-8631-032a6c1d8410_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:08:11 compute-0 nova_compute[253538]: 2025-11-25 09:08:11.950 253542 INFO nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Deleting local config drive /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410/disk.config because it was imported into RBD.
Nov 25 09:08:11 compute-0 NetworkManager[48915]: <info>  [1764061691.9970] manager: (tap0967717d-56): new Tun device (/org/freedesktop/NetworkManager/Devices/619)
Nov 25 09:08:12 compute-0 kernel: tap0967717d-56: entered promiscuous mode
Nov 25 09:08:12 compute-0 ovn_controller[152859]: 2025-11-25T09:08:12Z|01492|binding|INFO|Claiming lport 0967717d-564b-4989-8f67-1cd8c2de57ce for this chassis.
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.000 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:12 compute-0 ovn_controller[152859]: 2025-11-25T09:08:12Z|01493|binding|INFO|0967717d-564b-4989-8f67-1cd8c2de57ce: Claiming fa:16:3e:11:69:66 10.100.0.14
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.011 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:69:66 10.100.0.14'], port_security=['fa:16:3e:11:69:66 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '24611274-7a7c-4258-8631-032a6c1d8410', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b341b451-7173-4ca0-817f-090d2ce6e1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d52cd819-aa98-4895-9898-b2ec17432e84, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0967717d-564b-4989-8f67-1cd8c2de57ce) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.013 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0967717d-564b-4989-8f67-1cd8c2de57ce in datapath 2a6609b2-beb0-48a5-8dc0-1a4c153da77e bound to our chassis
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.014 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2a6609b2-beb0-48a5-8dc0-1a4c153da77e
Nov 25 09:08:12 compute-0 NetworkManager[48915]: <info>  [1764061692.0202] manager: (tap6effd17c-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/620)
Nov 25 09:08:12 compute-0 kernel: tap6effd17c-b1: entered promiscuous mode
Nov 25 09:08:12 compute-0 ovn_controller[152859]: 2025-11-25T09:08:12Z|01494|binding|INFO|Setting lport 0967717d-564b-4989-8f67-1cd8c2de57ce ovn-installed in OVS
Nov 25 09:08:12 compute-0 ovn_controller[152859]: 2025-11-25T09:08:12Z|01495|binding|INFO|Setting lport 0967717d-564b-4989-8f67-1cd8c2de57ce up in Southbound
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.024 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.026 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:12 compute-0 ovn_controller[152859]: 2025-11-25T09:08:12Z|01496|if_status|INFO|Dropped 1 log messages in last 36 seconds (most recently, 36 seconds ago) due to excessive rate
Nov 25 09:08:12 compute-0 ovn_controller[152859]: 2025-11-25T09:08:12Z|01497|if_status|INFO|Not updating pb chassis for 6effd17c-b1ee-44e2-8346-9e445de0dfb9 now as sb is readonly
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.029 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8747a3-1e84-4856-98dd-5d4c93226c22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:12 compute-0 ovn_controller[152859]: 2025-11-25T09:08:12Z|01498|binding|INFO|Claiming lport 6effd17c-b1ee-44e2-8346-9e445de0dfb9 for this chassis.
Nov 25 09:08:12 compute-0 ovn_controller[152859]: 2025-11-25T09:08:12Z|01499|binding|INFO|6effd17c-b1ee-44e2-8346-9e445de0dfb9: Claiming fa:16:3e:28:6f:de 2001:db8:0:1:f816:3eff:fe28:6fde 2001:db8::f816:3eff:fe28:6fde
Nov 25 09:08:12 compute-0 systemd-udevd[404143]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:08:12 compute-0 systemd-udevd[404142]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.043 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:6f:de 2001:db8:0:1:f816:3eff:fe28:6fde 2001:db8::f816:3eff:fe28:6fde'], port_security=['fa:16:3e:28:6f:de 2001:db8:0:1:f816:3eff:fe28:6fde 2001:db8::f816:3eff:fe28:6fde'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe28:6fde/64 2001:db8::f816:3eff:fe28:6fde/64', 'neutron:device_id': '24611274-7a7c-4258-8631-032a6c1d8410', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b341b451-7173-4ca0-817f-090d2ce6e1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4355854-52b0-49fe-b048-f2ee64c9c702, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6effd17c-b1ee-44e2-8346-9e445de0dfb9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:08:12 compute-0 NetworkManager[48915]: <info>  [1764061692.0566] device (tap0967717d-56): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:08:12 compute-0 NetworkManager[48915]: <info>  [1764061692.0580] device (tap0967717d-56): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:08:12 compute-0 NetworkManager[48915]: <info>  [1764061692.0587] device (tap6effd17c-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:08:12 compute-0 ovn_controller[152859]: 2025-11-25T09:08:12Z|01500|binding|INFO|Setting lport 6effd17c-b1ee-44e2-8346-9e445de0dfb9 ovn-installed in OVS
Nov 25 09:08:12 compute-0 NetworkManager[48915]: <info>  [1764061692.0598] device (tap6effd17c-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:08:12 compute-0 ovn_controller[152859]: 2025-11-25T09:08:12Z|01501|binding|INFO|Setting lport 6effd17c-b1ee-44e2-8346-9e445de0dfb9 up in Southbound
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.059 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.062 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:12 compute-0 systemd-machined[215790]: New machine qemu-173-instance-0000008f.
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.062 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b64fde43-7b7f-4f19-8606-2d1f5852cfba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.066 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[0b882f74-0d08-443e-ae29-08d0c39f60d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:12 compute-0 systemd[1]: Started Virtual Machine qemu-173-instance-0000008f.
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.094 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8b7eff-c40f-4bd0-99d9-4dd41098d115]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.109 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[27227a41-b7aa-4770-846c-f031ebf9cd85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2a6609b2-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:74:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701075, 'reachable_time': 37459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404152, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.124 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cd310dd8-e34c-4243-804e-e781a19cb49b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2a6609b2-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701090, 'tstamp': 701090}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404158, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2a6609b2-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701094, 'tstamp': 701094}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404158, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.126 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a6609b2-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.128 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.129 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a6609b2-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.129 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.129 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2a6609b2-b0, col_values=(('external_ids', {'iface-id': 'd96f8f4b-113a-4ac4-a7d8-3fb3e9d6b94c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.130 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.131 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6effd17c-b1ee-44e2-8346-9e445de0dfb9 in datapath 21786b2a-59f9-4c4e-b462-8a28f7bd93a3 unbound from our chassis
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.132 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 21786b2a-59f9-4c4e-b462-8a28f7bd93a3
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.149 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[244398fb-54e0-4816-8c6a-6f885d006c9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.184 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7a4efeb9-0756-4c0e-85d6-d3ba7d57de1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.187 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[99009079-1392-418c-8662-7b82cf958a04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:12 compute-0 sudo[404162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:08:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 389 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:08:12 compute-0 sudo[404162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.214 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3c75e840-98d5-40b5-a954-86f6d1a3c1a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:12 compute-0 sudo[404162]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.232 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[577e72b4-fd9c-4bbd-b96f-2e69dd19d692]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21786b2a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:b7:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 429], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701174, 'reachable_time': 20349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404190, 'error': None, 'target': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.246 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbb7e66-a310-440e-b78b-81f5b8bfd799]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap21786b2a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701187, 'tstamp': 701187}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404202, 'error': None, 'target': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.252 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21786b2a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.254 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.255 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21786b2a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.255 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.256 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap21786b2a-50, col_values=(('external_ids', {'iface-id': 'c3131dc3-cd6c-4c0c-a1cd-e385c6ff1693'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:12.256 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:08:12 compute-0 sudo[404191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:08:12 compute-0 sudo[404191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:12 compute-0 sudo[404191]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:12 compute-0 sudo[404217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:08:12 compute-0 sudo[404217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:12 compute-0 sudo[404217]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.353 253542 DEBUG nova.compute.manager [req-70d5bd52-353f-4a70-b714-26d74a49e2e7 req-10261606-cf60-4c3f-b8bc-ee6a046b3aa1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.354 253542 DEBUG oslo_concurrency.lockutils [req-70d5bd52-353f-4a70-b714-26d74a49e2e7 req-10261606-cf60-4c3f-b8bc-ee6a046b3aa1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.354 253542 DEBUG oslo_concurrency.lockutils [req-70d5bd52-353f-4a70-b714-26d74a49e2e7 req-10261606-cf60-4c3f-b8bc-ee6a046b3aa1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.354 253542 DEBUG oslo_concurrency.lockutils [req-70d5bd52-353f-4a70-b714-26d74a49e2e7 req-10261606-cf60-4c3f-b8bc-ee6a046b3aa1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.354 253542 DEBUG nova.compute.manager [req-70d5bd52-353f-4a70-b714-26d74a49e2e7 req-10261606-cf60-4c3f-b8bc-ee6a046b3aa1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Processing event network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:08:12 compute-0 sudo[404242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:08:12 compute-0 sudo[404242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.402 253542 DEBUG nova.compute.manager [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.436 253542 INFO nova.compute.manager [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] instance snapshotting
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.524 253542 DEBUG nova.network.neutron [req-fcb3e5b3-7f48-471d-af94-b4088e8205ab req-13e21f76-62d1-4267-9bbf-b02c360942af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updated VIF entry in instance network info cache for port 6effd17c-b1ee-44e2-8346-9e445de0dfb9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.526 253542 DEBUG nova.network.neutron [req-fcb3e5b3-7f48-471d-af94-b4088e8205ab req-13e21f76-62d1-4267-9bbf-b02c360942af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updating instance_info_cache with network_info: [{"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.540 253542 DEBUG oslo_concurrency.lockutils [req-fcb3e5b3-7f48-471d-af94-b4088e8205ab req-13e21f76-62d1-4267-9bbf-b02c360942af b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.688 253542 INFO nova.virt.libvirt.driver [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Beginning live snapshot process
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.851 253542 DEBUG nova.storage.rbd_utils [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] creating snapshot(ac3bcbe503ff4146b596d39c14ad05a1) on rbd image(de5bfbef-7a99-4280-a304-71b9099f110b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.956 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061692.9560034, 24611274-7a7c-4258-8631-032a6c1d8410 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.957 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] VM Started (Lifecycle Event)
Nov 25 09:08:12 compute-0 sudo[404242]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.982 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.985 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061692.9567604, 24611274-7a7c-4258-8631-032a6c1d8410 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:08:12 compute-0 nova_compute[253538]: 2025-11-25 09:08:12.985 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] VM Paused (Lifecycle Event)
Nov 25 09:08:13 compute-0 nova_compute[253538]: 2025-11-25 09:08:13.001 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:08:13 compute-0 nova_compute[253538]: 2025-11-25 09:08:13.005 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:08:13 compute-0 nova_compute[253538]: 2025-11-25 09:08:13.029 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:08:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:08:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:08:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:08:13 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:08:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:08:13 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:08:13 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d2528635-ae30-4bab-b79b-d24bee4a10ae does not exist
Nov 25 09:08:13 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 4b8cf18e-0c9b-46e7-a9b2-fe3285533b4d does not exist
Nov 25 09:08:13 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 74799879-c0a7-437b-94b5-12e0e37cecc1 does not exist
Nov 25 09:08:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:08:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:08:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:08:13 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:08:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:08:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:08:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:08:13 compute-0 sudo[404392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:08:13 compute-0 sudo[404392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:13 compute-0 sudo[404392]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:13 compute-0 sudo[404417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:08:13 compute-0 sudo[404417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:13 compute-0 sudo[404417]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:13 compute-0 sudo[404442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:08:13 compute-0 sudo[404442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:13 compute-0 sudo[404442]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:13 compute-0 sudo[404467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:08:13 compute-0 sudo[404467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Nov 25 09:08:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Nov 25 09:08:13 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Nov 25 09:08:13 compute-0 ceph-mon[75015]: pgmap v2626: 321 pgs: 321 active+clean; 389 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:08:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:08:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:08:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:08:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:08:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:08:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:08:13 compute-0 nova_compute[253538]: 2025-11-25 09:08:13.518 253542 DEBUG nova.storage.rbd_utils [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] cloning vms/de5bfbef-7a99-4280-a304-71b9099f110b_disk@ac3bcbe503ff4146b596d39c14ad05a1 to images/c1cd651b-e908-45f2-ad61-7952319cf709 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 09:08:13 compute-0 nova_compute[253538]: 2025-11-25 09:08:13.663 253542 DEBUG nova.storage.rbd_utils [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] flattening images/c1cd651b-e908-45f2-ad61-7952319cf709 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 09:08:13 compute-0 podman[404568]: 2025-11-25 09:08:13.672091426 +0000 UTC m=+0.062073549 container create ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:08:13 compute-0 podman[404568]: 2025-11-25 09:08:13.641533275 +0000 UTC m=+0.031515498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:08:13 compute-0 systemd[1]: Started libpod-conmon-ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956.scope.
Nov 25 09:08:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:08:13 compute-0 podman[404568]: 2025-11-25 09:08:13.809950123 +0000 UTC m=+0.199932236 container init ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noether, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:08:13 compute-0 podman[404568]: 2025-11-25 09:08:13.819524003 +0000 UTC m=+0.209506116 container start ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noether, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 09:08:13 compute-0 podman[404568]: 2025-11-25 09:08:13.825223678 +0000 UTC m=+0.215205791 container attach ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:08:13 compute-0 systemd[1]: libpod-ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956.scope: Deactivated successfully.
Nov 25 09:08:13 compute-0 intelligent_noether[404606]: 167 167
Nov 25 09:08:13 compute-0 conmon[404606]: conmon ad180e4f50eb484df2a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956.scope/container/memory.events
Nov 25 09:08:13 compute-0 podman[404611]: 2025-11-25 09:08:13.882172906 +0000 UTC m=+0.032077863 container died ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noether, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.182 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 390 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 1.2 MiB/s wr, 30 op/s
Nov 25 09:08:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab7a8d1accd815a6477abad2d9398c2bbb06d8c8e4cc247ddf034fadd04cce91-merged.mount: Deactivated successfully.
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.441 253542 DEBUG nova.compute.manager [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.442 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.443 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.443 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.444 253542 DEBUG nova.compute.manager [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] No event matching network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce in dict_keys([('network-vif-plugged', '6effd17c-b1ee-44e2-8346-9e445de0dfb9')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.444 253542 WARNING nova.compute.manager [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received unexpected event network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce for instance with vm_state building and task_state spawning.
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.445 253542 DEBUG nova.compute.manager [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.445 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.446 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.446 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.446 253542 DEBUG nova.compute.manager [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Processing event network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.447 253542 DEBUG nova.compute.manager [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.447 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.448 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.448 253542 DEBUG oslo_concurrency.lockutils [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.448 253542 DEBUG nova.compute.manager [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] No waiting events found dispatching network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.449 253542 WARNING nova.compute.manager [req-1be31d02-151f-4a3f-8529-0b4118bb5d3d req-13a0cfe7-2f54-420c-9bd8-57107dfcd184 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received unexpected event network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 for instance with vm_state building and task_state spawning.
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.450 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.456 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061694.4563718, 24611274-7a7c-4258-8631-032a6c1d8410 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.457 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] VM Resumed (Lifecycle Event)
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.460 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.466 253542 INFO nova.virt.libvirt.driver [-] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Instance spawned successfully.
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.467 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.480 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.487 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.492 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.492 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.493 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.493 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.494 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.494 253542 DEBUG nova.virt.libvirt.driver [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.518 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:08:14 compute-0 ceph-mon[75015]: osdmap e258: 3 total, 3 up, 3 in
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.566 253542 INFO nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Took 13.54 seconds to spawn the instance on the hypervisor.
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.567 253542 DEBUG nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:08:14 compute-0 podman[404611]: 2025-11-25 09:08:14.598771127 +0000 UTC m=+0.748676074 container remove ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 09:08:14 compute-0 systemd[1]: libpod-conmon-ad180e4f50eb484df2a2ab7ba33d16a4447d44c373e24ff27aa1160a15e1a956.scope: Deactivated successfully.
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.632 253542 INFO nova.compute.manager [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Took 15.05 seconds to build instance.
Nov 25 09:08:14 compute-0 nova_compute[253538]: 2025-11-25 09:08:14.645 253542 DEBUG oslo_concurrency.lockutils [None req-bb3b520f-9d15-44fe-b5d5-b950290e8ab8 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:14 compute-0 podman[404634]: 2025-11-25 09:08:14.879486778 +0000 UTC m=+0.087710116 container create b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:08:14 compute-0 podman[404634]: 2025-11-25 09:08:14.817547564 +0000 UTC m=+0.025770922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:08:14 compute-0 systemd[1]: Started libpod-conmon-b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8.scope.
Nov 25 09:08:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36738e8cc97da19508b1796b9aae3723644bcdeedd62ded9498f4fa57c292dc7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36738e8cc97da19508b1796b9aae3723644bcdeedd62ded9498f4fa57c292dc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36738e8cc97da19508b1796b9aae3723644bcdeedd62ded9498f4fa57c292dc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36738e8cc97da19508b1796b9aae3723644bcdeedd62ded9498f4fa57c292dc7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36738e8cc97da19508b1796b9aae3723644bcdeedd62ded9498f4fa57c292dc7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:08:14 compute-0 podman[404634]: 2025-11-25 09:08:14.993198449 +0000 UTC m=+0.201421817 container init b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:08:15 compute-0 podman[404634]: 2025-11-25 09:08:15.002766469 +0000 UTC m=+0.210989807 container start b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:08:15 compute-0 podman[404634]: 2025-11-25 09:08:15.007791796 +0000 UTC m=+0.216015154 container attach b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 09:08:15 compute-0 nova_compute[253538]: 2025-11-25 09:08:15.068 253542 DEBUG nova.storage.rbd_utils [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] removing snapshot(ac3bcbe503ff4146b596d39c14ad05a1) on rbd image(de5bfbef-7a99-4280-a304-71b9099f110b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 09:08:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Nov 25 09:08:15 compute-0 ceph-mon[75015]: pgmap v2628: 321 pgs: 321 active+clean; 390 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 1.2 MiB/s wr, 30 op/s
Nov 25 09:08:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Nov 25 09:08:15 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Nov 25 09:08:15 compute-0 nova_compute[253538]: 2025-11-25 09:08:15.640 253542 DEBUG nova.storage.rbd_utils [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] creating snapshot(snap) on rbd image(c1cd651b-e908-45f2-ad61-7952319cf709) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 09:08:15 compute-0 nova_compute[253538]: 2025-11-25 09:08:15.821 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2630: 321 pgs: 321 active+clean; 398 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 34 op/s
Nov 25 09:08:16 compute-0 serene_pare[404650]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:08:16 compute-0 serene_pare[404650]: --> relative data size: 1.0
Nov 25 09:08:16 compute-0 serene_pare[404650]: --> All data devices are unavailable
Nov 25 09:08:16 compute-0 systemd[1]: libpod-b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8.scope: Deactivated successfully.
Nov 25 09:08:16 compute-0 systemd[1]: libpod-b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8.scope: Consumed 1.081s CPU time.
Nov 25 09:08:16 compute-0 podman[404634]: 2025-11-25 09:08:16.289053796 +0000 UTC m=+1.497277154 container died b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 09:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-36738e8cc97da19508b1796b9aae3723644bcdeedd62ded9498f4fa57c292dc7-merged.mount: Deactivated successfully.
Nov 25 09:08:16 compute-0 podman[404634]: 2025-11-25 09:08:16.35727338 +0000 UTC m=+1.565496718 container remove b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:08:16 compute-0 systemd[1]: libpod-conmon-b3be44f5da134a0fc396883fa007c538181ad670e5c9d8aa4c0947a8bef1ffe8.scope: Deactivated successfully.
Nov 25 09:08:16 compute-0 sudo[404467]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:16 compute-0 sudo[404727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:08:16 compute-0 sudo[404727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:16 compute-0 sudo[404727]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:16 compute-0 sudo[404752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:08:16 compute-0 sudo[404752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:16 compute-0 sudo[404752]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:16 compute-0 sudo[404777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:08:16 compute-0 sudo[404777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:16 compute-0 sudo[404777]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Nov 25 09:08:16 compute-0 ceph-mon[75015]: osdmap e259: 3 total, 3 up, 3 in
Nov 25 09:08:16 compute-0 ceph-mon[75015]: pgmap v2630: 321 pgs: 321 active+clean; 398 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 34 op/s
Nov 25 09:08:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Nov 25 09:08:16 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Nov 25 09:08:16 compute-0 sudo[404802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:08:16 compute-0 sudo[404802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:17 compute-0 podman[404865]: 2025-11-25 09:08:17.007071915 +0000 UTC m=+0.047335588 container create 3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_einstein, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 09:08:17 compute-0 systemd[1]: Started libpod-conmon-3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132.scope.
Nov 25 09:08:17 compute-0 kernel: hrtimer: interrupt took 34451577 ns
Nov 25 09:08:17 compute-0 podman[404865]: 2025-11-25 09:08:16.988452879 +0000 UTC m=+0.028716562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:08:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:08:17 compute-0 podman[404865]: 2025-11-25 09:08:17.177434896 +0000 UTC m=+0.217698599 container init 3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 09:08:17 compute-0 podman[404865]: 2025-11-25 09:08:17.18494737 +0000 UTC m=+0.225211043 container start 3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:08:17 compute-0 podman[404865]: 2025-11-25 09:08:17.188541358 +0000 UTC m=+0.228805211 container attach 3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_einstein, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:08:17 compute-0 nostalgic_einstein[404881]: 167 167
Nov 25 09:08:17 compute-0 podman[404865]: 2025-11-25 09:08:17.193745509 +0000 UTC m=+0.234009182 container died 3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 09:08:17 compute-0 systemd[1]: libpod-3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132.scope: Deactivated successfully.
Nov 25 09:08:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e19838e9a19a7a78fc8d5c730f72bce02cf9c010ce28bab6cc948edac06379c6-merged.mount: Deactivated successfully.
Nov 25 09:08:17 compute-0 podman[404865]: 2025-11-25 09:08:17.232074392 +0000 UTC m=+0.272338065 container remove 3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_einstein, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 09:08:17 compute-0 systemd[1]: libpod-conmon-3925224abfc47289b034eb7e2b17375fdae796df8af1f806a4ba81a60eaf3132.scope: Deactivated successfully.
Nov 25 09:08:17 compute-0 podman[404905]: 2025-11-25 09:08:17.448914726 +0000 UTC m=+0.051223673 container create e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 09:08:17 compute-0 systemd[1]: Started libpod-conmon-e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783.scope.
Nov 25 09:08:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:08:17 compute-0 podman[404905]: 2025-11-25 09:08:17.424001798 +0000 UTC m=+0.026310775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81783dc73e06d8c667ebc529af6f85d6bcac726618c0936989e0acc46e6e6f3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81783dc73e06d8c667ebc529af6f85d6bcac726618c0936989e0acc46e6e6f3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81783dc73e06d8c667ebc529af6f85d6bcac726618c0936989e0acc46e6e6f3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81783dc73e06d8c667ebc529af6f85d6bcac726618c0936989e0acc46e6e6f3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:08:17 compute-0 podman[404905]: 2025-11-25 09:08:17.536367243 +0000 UTC m=+0.138676220 container init e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:08:17 compute-0 podman[404905]: 2025-11-25 09:08:17.543060575 +0000 UTC m=+0.145369512 container start e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:08:17 compute-0 podman[404905]: 2025-11-25 09:08:17.546071627 +0000 UTC m=+0.148380574 container attach e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:08:17 compute-0 ceph-mon[75015]: osdmap e260: 3 total, 3 up, 3 in
Nov 25 09:08:17 compute-0 nova_compute[253538]: 2025-11-25 09:08:17.879 253542 INFO nova.virt.libvirt.driver [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Snapshot image upload complete
Nov 25 09:08:17 compute-0 nova_compute[253538]: 2025-11-25 09:08:17.881 253542 INFO nova.compute.manager [None req-9d4ca3e1-3b2b-4664-bd04-7aa25b498be8 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Took 5.44 seconds to snapshot the instance on the hypervisor.
Nov 25 09:08:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:08:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2632: 321 pgs: 321 active+clean; 474 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 12 MiB/s wr, 278 op/s
Nov 25 09:08:18 compute-0 great_lamarr[404921]: {
Nov 25 09:08:18 compute-0 great_lamarr[404921]:     "0": [
Nov 25 09:08:18 compute-0 great_lamarr[404921]:         {
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "devices": [
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "/dev/loop3"
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             ],
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "lv_name": "ceph_lv0",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "lv_size": "21470642176",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "name": "ceph_lv0",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "tags": {
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.cluster_name": "ceph",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.crush_device_class": "",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.encrypted": "0",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.osd_id": "0",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.type": "block",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.vdo": "0"
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             },
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "type": "block",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "vg_name": "ceph_vg0"
Nov 25 09:08:18 compute-0 great_lamarr[404921]:         }
Nov 25 09:08:18 compute-0 great_lamarr[404921]:     ],
Nov 25 09:08:18 compute-0 great_lamarr[404921]:     "1": [
Nov 25 09:08:18 compute-0 great_lamarr[404921]:         {
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "devices": [
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "/dev/loop4"
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             ],
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "lv_name": "ceph_lv1",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "lv_size": "21470642176",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "name": "ceph_lv1",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "tags": {
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.cluster_name": "ceph",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.crush_device_class": "",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.encrypted": "0",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.osd_id": "1",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.type": "block",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.vdo": "0"
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             },
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "type": "block",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "vg_name": "ceph_vg1"
Nov 25 09:08:18 compute-0 great_lamarr[404921]:         }
Nov 25 09:08:18 compute-0 great_lamarr[404921]:     ],
Nov 25 09:08:18 compute-0 great_lamarr[404921]:     "2": [
Nov 25 09:08:18 compute-0 great_lamarr[404921]:         {
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "devices": [
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "/dev/loop5"
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             ],
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "lv_name": "ceph_lv2",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "lv_size": "21470642176",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "name": "ceph_lv2",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "tags": {
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.cluster_name": "ceph",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.crush_device_class": "",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.encrypted": "0",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.osd_id": "2",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.type": "block",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:                 "ceph.vdo": "0"
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             },
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "type": "block",
Nov 25 09:08:18 compute-0 great_lamarr[404921]:             "vg_name": "ceph_vg2"
Nov 25 09:08:18 compute-0 great_lamarr[404921]:         }
Nov 25 09:08:18 compute-0 great_lamarr[404921]:     ]
Nov 25 09:08:18 compute-0 great_lamarr[404921]: }
Nov 25 09:08:18 compute-0 systemd[1]: libpod-e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783.scope: Deactivated successfully.
Nov 25 09:08:18 compute-0 conmon[404921]: conmon e619ae6ddb954b2c71eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783.scope/container/memory.events
Nov 25 09:08:18 compute-0 podman[404905]: 2025-11-25 09:08:18.424428464 +0000 UTC m=+1.026737411 container died e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:08:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-81783dc73e06d8c667ebc529af6f85d6bcac726618c0936989e0acc46e6e6f3f-merged.mount: Deactivated successfully.
Nov 25 09:08:18 compute-0 podman[404905]: 2025-11-25 09:08:18.479571333 +0000 UTC m=+1.081880280 container remove e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:08:18 compute-0 systemd[1]: libpod-conmon-e619ae6ddb954b2c71ebdcd1ab0971e2edac92a05963475f60dad49d5e77c783.scope: Deactivated successfully.
Nov 25 09:08:18 compute-0 sudo[404802]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:18 compute-0 sudo[404941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:08:18 compute-0 sudo[404941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:18 compute-0 sudo[404941]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Nov 25 09:08:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Nov 25 09:08:18 compute-0 ceph-mon[75015]: pgmap v2632: 321 pgs: 321 active+clean; 474 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 12 MiB/s wr, 278 op/s
Nov 25 09:08:18 compute-0 sudo[404966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:08:18 compute-0 sudo[404966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:18 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Nov 25 09:08:18 compute-0 sudo[404966]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:18 compute-0 sudo[404991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:08:18 compute-0 sudo[404991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:18 compute-0 sudo[404991]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:18 compute-0 sudo[405016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:08:18 compute-0 sudo[405016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:19 compute-0 podman[405083]: 2025-11-25 09:08:19.175108691 +0000 UTC m=+0.045124378 container create b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 09:08:19 compute-0 nova_compute[253538]: 2025-11-25 09:08:19.184 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:19 compute-0 systemd[1]: Started libpod-conmon-b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1.scope.
Nov 25 09:08:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:08:19 compute-0 podman[405083]: 2025-11-25 09:08:19.157300207 +0000 UTC m=+0.027315914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:08:19 compute-0 podman[405083]: 2025-11-25 09:08:19.261326714 +0000 UTC m=+0.131342421 container init b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_visvesvaraya, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 09:08:19 compute-0 podman[405083]: 2025-11-25 09:08:19.268967442 +0000 UTC m=+0.138983119 container start b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_visvesvaraya, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:08:19 compute-0 podman[405083]: 2025-11-25 09:08:19.279321494 +0000 UTC m=+0.149337191 container attach b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:08:19 compute-0 interesting_visvesvaraya[405100]: 167 167
Nov 25 09:08:19 compute-0 systemd[1]: libpod-b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1.scope: Deactivated successfully.
Nov 25 09:08:19 compute-0 podman[405083]: 2025-11-25 09:08:19.282293824 +0000 UTC m=+0.152309501 container died b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_visvesvaraya, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:08:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc1b6d4b374b756391a7867573b4d57efc2ddade5c068189bd86b8749a4956f3-merged.mount: Deactivated successfully.
Nov 25 09:08:19 compute-0 podman[405083]: 2025-11-25 09:08:19.316716801 +0000 UTC m=+0.186732478 container remove b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_visvesvaraya, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:08:19 compute-0 systemd[1]: libpod-conmon-b6fa9582c852e898116b520d66d7550d37e3b46db87425ca57666b519fe663a1.scope: Deactivated successfully.
Nov 25 09:08:19 compute-0 podman[405124]: 2025-11-25 09:08:19.486895306 +0000 UTC m=+0.023264893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:08:19 compute-0 podman[405124]: 2025-11-25 09:08:19.822670984 +0000 UTC m=+0.359040561 container create f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:08:19 compute-0 ceph-mon[75015]: osdmap e261: 3 total, 3 up, 3 in
Nov 25 09:08:19 compute-0 systemd[1]: Started libpod-conmon-f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06.scope.
Nov 25 09:08:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d64a689ba754deaf2200527744d4a18e654eda104dfbf380de1bbefdb1651a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d64a689ba754deaf2200527744d4a18e654eda104dfbf380de1bbefdb1651a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d64a689ba754deaf2200527744d4a18e654eda104dfbf380de1bbefdb1651a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d64a689ba754deaf2200527744d4a18e654eda104dfbf380de1bbefdb1651a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:08:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 466 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 15 MiB/s wr, 357 op/s
Nov 25 09:08:20 compute-0 podman[405124]: 2025-11-25 09:08:20.542238775 +0000 UTC m=+1.078608332 container init f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 09:08:20 compute-0 podman[405124]: 2025-11-25 09:08:20.550083368 +0000 UTC m=+1.086452925 container start f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ellis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.569 253542 DEBUG nova.compute.manager [req-712bd2d9-6b12-4c2c-8c7c-6915f04b4a83 req-c0dcf633-4fb0-4383-bda9-6ce5f7b5b753 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-changed-ff88ca8a-d270-4991-b2c4-617f04418848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.570 253542 DEBUG nova.compute.manager [req-712bd2d9-6b12-4c2c-8c7c-6915f04b4a83 req-c0dcf633-4fb0-4383-bda9-6ce5f7b5b753 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Refreshing instance network info cache due to event network-changed-ff88ca8a-d270-4991-b2c4-617f04418848. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.570 253542 DEBUG oslo_concurrency.lockutils [req-712bd2d9-6b12-4c2c-8c7c-6915f04b4a83 req-c0dcf633-4fb0-4383-bda9-6ce5f7b5b753 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.570 253542 DEBUG oslo_concurrency.lockutils [req-712bd2d9-6b12-4c2c-8c7c-6915f04b4a83 req-c0dcf633-4fb0-4383-bda9-6ce5f7b5b753 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.571 253542 DEBUG nova.network.neutron [req-712bd2d9-6b12-4c2c-8c7c-6915f04b4a83 req-c0dcf633-4fb0-4383-bda9-6ce5f7b5b753 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Refreshing network info cache for port ff88ca8a-d270-4991-b2c4-617f04418848 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:08:20 compute-0 podman[405124]: 2025-11-25 09:08:20.615936698 +0000 UTC m=+1.152306255 container attach f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ellis, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.634 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "de5bfbef-7a99-4280-a304-71b9099f110b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.635 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.635 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.636 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.636 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.637 253542 INFO nova.compute.manager [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Terminating instance
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.638 253542 DEBUG nova.compute.manager [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.665 253542 DEBUG nova.compute.manager [req-8bc10e62-c0ab-4357-86d8-d984ce101314 req-ec19b939-f1b8-4364-a277-29feceafb417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-changed-0967717d-564b-4989-8f67-1cd8c2de57ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.666 253542 DEBUG nova.compute.manager [req-8bc10e62-c0ab-4357-86d8-d984ce101314 req-ec19b939-f1b8-4364-a277-29feceafb417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Refreshing instance network info cache due to event network-changed-0967717d-564b-4989-8f67-1cd8c2de57ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.666 253542 DEBUG oslo_concurrency.lockutils [req-8bc10e62-c0ab-4357-86d8-d984ce101314 req-ec19b939-f1b8-4364-a277-29feceafb417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.666 253542 DEBUG oslo_concurrency.lockutils [req-8bc10e62-c0ab-4357-86d8-d984ce101314 req-ec19b939-f1b8-4364-a277-29feceafb417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.667 253542 DEBUG nova.network.neutron [req-8bc10e62-c0ab-4357-86d8-d984ce101314 req-ec19b939-f1b8-4364-a277-29feceafb417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Refreshing network info cache for port 0967717d-564b-4989-8f67-1cd8c2de57ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:08:20 compute-0 kernel: tapff88ca8a-d2 (unregistering): left promiscuous mode
Nov 25 09:08:20 compute-0 NetworkManager[48915]: <info>  [1764061700.7318] device (tapff88ca8a-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.743 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:20 compute-0 ovn_controller[152859]: 2025-11-25T09:08:20Z|01502|binding|INFO|Releasing lport ff88ca8a-d270-4991-b2c4-617f04418848 from this chassis (sb_readonly=0)
Nov 25 09:08:20 compute-0 ovn_controller[152859]: 2025-11-25T09:08:20Z|01503|binding|INFO|Setting lport ff88ca8a-d270-4991-b2c4-617f04418848 down in Southbound
Nov 25 09:08:20 compute-0 ovn_controller[152859]: 2025-11-25T09:08:20Z|01504|binding|INFO|Removing iface tapff88ca8a-d2 ovn-installed in OVS
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.748 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.760 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:c7:f1 10.100.0.6'], port_security=['fa:16:3e:ec:c7:f1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'de5bfbef-7a99-4280-a304-71b9099f110b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8771100a91ef4eb3b58cc4840f6154b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b51ac1f2-fc04-45c4-8aed-ff9624bae478', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=774fcdab-a888-48f5-b941-79ea7db76602, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=ff88ca8a-d270-4991-b2c4-617f04418848) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:08:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.761 162739 INFO neutron.agent.ovn.metadata.agent [-] Port ff88ca8a-d270-4991-b2c4-617f04418848 in datapath 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c unbound from our chassis
Nov 25 09:08:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.762 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.775 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.787 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fab2f4b5-0c35-409e-909f-d566455707d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:20 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Nov 25 09:08:20 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008e.scope: Consumed 14.886s CPU time.
Nov 25 09:08:20 compute-0 systemd-machined[215790]: Machine qemu-171-instance-0000008e terminated.
Nov 25 09:08:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.821 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[aff40079-9238-49a6-9bbc-1a31492bfc34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.822 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.825 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[39553819-68fb-48b6-acaf-d2e24312bf16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:20 compute-0 ceph-mon[75015]: pgmap v2634: 321 pgs: 321 active+clean; 466 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 15 MiB/s wr, 357 op/s
Nov 25 09:08:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.861 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[83522beb-2617-45fa-b496-550ff1b30f71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.881 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4a562be3-8dca-4c46-a767-ad41e32f216a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c3eb82e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:4d:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696918, 'reachable_time': 22573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405164, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.886 253542 INFO nova.virt.libvirt.driver [-] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Instance destroyed successfully.
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.886 253542 DEBUG nova.objects.instance [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lazy-loading 'resources' on Instance uuid de5bfbef-7a99-4280-a304-71b9099f110b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.899 253542 DEBUG nova.virt.libvirt.vif [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:07:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1748422570',display_name='tempest-TestSnapshotPattern-server-1748422570',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1748422570',id=142,image_ref='cea21f13-1c78-4633-9d51-3cb641934c22',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLErEV2xwN9mBf+XGKCIgVT/D6OXwB0cwyyX2NBYPEm+JKbk9OpT/b7EfE4XEaPQBYqSc0cfR5p0864dWMnh2OIUXkOANq7ZGUSKBMTzjmr8EUsGjFEiOhtcXJ0bHjOVZQ==',key_name='tempest-TestSnapshotPattern-1105951773',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:07:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8771100a91ef4eb3b58cc4840f6154b4',ramdisk_id='',reservation_id='r-a2i419d2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7aeb9ccf-2506-41d1-92c2-c72892096857',image_min_disk='1',image_min_ram='0',image_owner_id='8771100a91ef4eb3b58cc4840f6154b4',image_owner_project_name='tempest-TestSnapshotPattern-569624779',image_owner_user_name='tempest-TestSnapshotPattern-569624779-project-member',image_user_id='aef72e2ffce442d1848c4753c324ae92',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-569624779',owner_user_name='tempest-TestSnapshotPattern-569624779-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:08:17Z,user_data=None,user_id='aef72e2ffce442d1848c4753c324ae92',uuid=de5bfbef-7a99-4280-a304-71b9099f110b,vcpu_model=<?
>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.899 253542 DEBUG nova.network.os_vif_util [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converting VIF {"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.900 253542 DEBUG nova.network.os_vif_util [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:c7:f1,bridge_name='br-int',has_traffic_filtering=True,id=ff88ca8a-d270-4991-b2c4-617f04418848,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff88ca8a-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.900 253542 DEBUG os_vif [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:c7:f1,bridge_name='br-int',has_traffic_filtering=True,id=ff88ca8a-d270-4991-b2c4-617f04418848,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff88ca8a-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.902 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.902 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff88ca8a-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.901 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a770fc-573b-424b-98da-d112fd4f404d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c3eb82e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696928, 'tstamp': 696928}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405171, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c3eb82e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696931, 'tstamp': 696931}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405171, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.903 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.904 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.903 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c3eb82e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.906 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c3eb82e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.906 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:08:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.906 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c3eb82e-10, col_values=(('external_ids', {'iface-id': 'aca5006e-311f-469a-ba5d-688da3f7d396'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:20 compute-0 nova_compute[253538]: 2025-11-25 09:08:20.906 253542 INFO os_vif [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:c7:f1,bridge_name='br-int',has_traffic_filtering=True,id=ff88ca8a-d270-4991-b2c4-617f04418848,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff88ca8a-d2')
Nov 25 09:08:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:20.907 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]: {
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:         "osd_id": 1,
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:         "type": "bluestore"
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:     },
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:         "osd_id": 2,
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:         "type": "bluestore"
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:     },
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:         "osd_id": 0,
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:         "type": "bluestore"
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]:     }
Nov 25 09:08:21 compute-0 flamboyant_ellis[405140]: }
Nov 25 09:08:21 compute-0 systemd[1]: libpod-f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06.scope: Deactivated successfully.
Nov 25 09:08:21 compute-0 systemd[1]: libpod-f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06.scope: Consumed 1.006s CPU time.
Nov 25 09:08:21 compute-0 podman[405124]: 2025-11-25 09:08:21.60772711 +0000 UTC m=+2.144096667 container died f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 25 09:08:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d64a689ba754deaf2200527744d4a18e654eda104dfbf380de1bbefdb1651a1-merged.mount: Deactivated successfully.
Nov 25 09:08:21 compute-0 podman[405124]: 2025-11-25 09:08:21.756515154 +0000 UTC m=+2.292884701 container remove f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:08:21 compute-0 systemd[1]: libpod-conmon-f77e1defb29e00104c6c525ae29b653b7896618d91240ebea65b3d06504ffe06.scope: Deactivated successfully.
Nov 25 09:08:21 compute-0 sudo[405016]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:08:21 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:08:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:08:21 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:08:21 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 0e4d596b-fcbe-4cc8-9b2f-0d37bfc4bbb9 does not exist
Nov 25 09:08:21 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev f6de2a22-d82e-4119-9aa8-4307f8484f0f does not exist
Nov 25 09:08:21 compute-0 nova_compute[253538]: 2025-11-25 09:08:21.924 253542 INFO nova.virt.libvirt.driver [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Deleting instance files /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b_del
Nov 25 09:08:21 compute-0 nova_compute[253538]: 2025-11-25 09:08:21.925 253542 INFO nova.virt.libvirt.driver [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Deletion of /var/lib/nova/instances/de5bfbef-7a99-4280-a304-71b9099f110b_del complete
Nov 25 09:08:21 compute-0 sudo[405234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:08:21 compute-0 sudo[405234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:21 compute-0 sudo[405234]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:21 compute-0 nova_compute[253538]: 2025-11-25 09:08:21.981 253542 INFO nova.compute.manager [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Took 1.34 seconds to destroy the instance on the hypervisor.
Nov 25 09:08:21 compute-0 nova_compute[253538]: 2025-11-25 09:08:21.981 253542 DEBUG oslo.service.loopingcall [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:08:21 compute-0 nova_compute[253538]: 2025-11-25 09:08:21.981 253542 DEBUG nova.compute.manager [-] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:08:21 compute-0 nova_compute[253538]: 2025-11-25 09:08:21.981 253542 DEBUG nova.network.neutron [-] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:08:21 compute-0 sudo[405259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:08:21 compute-0 sudo[405259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:08:22 compute-0 sudo[405259]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2635: 321 pgs: 321 active+clean; 420 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 13 MiB/s wr, 322 op/s
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.425 253542 DEBUG nova.network.neutron [req-712bd2d9-6b12-4c2c-8c7c-6915f04b4a83 req-c0dcf633-4fb0-4383-bda9-6ce5f7b5b753 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Updated VIF entry in instance network info cache for port ff88ca8a-d270-4991-b2c4-617f04418848. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.426 253542 DEBUG nova.network.neutron [req-712bd2d9-6b12-4c2c-8c7c-6915f04b4a83 req-c0dcf633-4fb0-4383-bda9-6ce5f7b5b753 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Updating instance_info_cache with network_info: [{"id": "ff88ca8a-d270-4991-b2c4-617f04418848", "address": "fa:16:3e:ec:c7:f1", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff88ca8a-d2", "ovs_interfaceid": "ff88ca8a-d270-4991-b2c4-617f04418848", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.446 253542 DEBUG oslo_concurrency.lockutils [req-712bd2d9-6b12-4c2c-8c7c-6915f04b4a83 req-c0dcf633-4fb0-4383-bda9-6ce5f7b5b753 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-de5bfbef-7a99-4280-a304-71b9099f110b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.511 253542 DEBUG nova.network.neutron [req-8bc10e62-c0ab-4357-86d8-d984ce101314 req-ec19b939-f1b8-4364-a277-29feceafb417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updated VIF entry in instance network info cache for port 0967717d-564b-4989-8f67-1cd8c2de57ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.512 253542 DEBUG nova.network.neutron [req-8bc10e62-c0ab-4357-86d8-d984ce101314 req-ec19b939-f1b8-4364-a277-29feceafb417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updating instance_info_cache with network_info: [{"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.532 253542 DEBUG oslo_concurrency.lockutils [req-8bc10e62-c0ab-4357-86d8-d984ce101314 req-ec19b939-f1b8-4364-a277-29feceafb417 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.622 253542 DEBUG nova.network.neutron [-] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.634 253542 INFO nova.compute.manager [-] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Took 0.65 seconds to deallocate network for instance.
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.640 253542 DEBUG nova.compute.manager [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-vif-unplugged-ff88ca8a-d270-4991-b2c4-617f04418848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.640 253542 DEBUG oslo_concurrency.lockutils [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.641 253542 DEBUG oslo_concurrency.lockutils [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.641 253542 DEBUG oslo_concurrency.lockutils [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.641 253542 DEBUG nova.compute.manager [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] No waiting events found dispatching network-vif-unplugged-ff88ca8a-d270-4991-b2c4-617f04418848 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.641 253542 DEBUG nova.compute.manager [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-vif-unplugged-ff88ca8a-d270-4991-b2c4-617f04418848 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.642 253542 DEBUG nova.compute.manager [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.642 253542 DEBUG oslo_concurrency.lockutils [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.642 253542 DEBUG oslo_concurrency.lockutils [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.642 253542 DEBUG oslo_concurrency.lockutils [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.643 253542 DEBUG nova.compute.manager [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] No waiting events found dispatching network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.644 253542 WARNING nova.compute.manager [req-a90d65f1-b91d-4ff6-8333-7c77af4e3f4e req-5afd8118-1999-479e-bf2f-bc8baab6230c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received unexpected event network-vif-plugged-ff88ca8a-d270-4991-b2c4-617f04418848 for instance with vm_state active and task_state deleting.
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.680 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.680 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.729 253542 DEBUG nova.compute.manager [req-1c0550e7-e5df-4602-840f-b435f1bbc2e8 req-d77ab5fb-5a3e-4f4c-95de-da350ec737f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Received event network-vif-deleted-ff88ca8a-d270-4991-b2c4-617f04418848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:22 compute-0 nova_compute[253538]: 2025-11-25 09:08:22.783 253542 DEBUG oslo_concurrency.processutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:08:22 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:08:22 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:08:22 compute-0 ceph-mon[75015]: pgmap v2635: 321 pgs: 321 active+clean; 420 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 13 MiB/s wr, 322 op/s
Nov 25 09:08:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:08:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Nov 25 09:08:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:08:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2827507217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:08:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Nov 25 09:08:23 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Nov 25 09:08:23 compute-0 nova_compute[253538]: 2025-11-25 09:08:23.270 253542 DEBUG oslo_concurrency.processutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:08:23 compute-0 nova_compute[253538]: 2025-11-25 09:08:23.276 253542 DEBUG nova.compute.provider_tree [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:08:23 compute-0 nova_compute[253538]: 2025-11-25 09:08:23.295 253542 DEBUG nova.scheduler.client.report [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:08:23 compute-0 nova_compute[253538]: 2025-11-25 09:08:23.326 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:23 compute-0 nova_compute[253538]: 2025-11-25 09:08:23.379 253542 INFO nova.scheduler.client.report [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Deleted allocations for instance de5bfbef-7a99-4280-a304-71b9099f110b
Nov 25 09:08:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:08:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:08:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:08:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:08:23 compute-0 nova_compute[253538]: 2025-11-25 09:08:23.474 253542 DEBUG oslo_concurrency.lockutils [None req-abcac5a9-12e6-4832-baa1-63f475ec0c00 aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "de5bfbef-7a99-4280-a304-71b9099f110b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:08:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:08:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2827507217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:08:23 compute-0 ceph-mon[75015]: osdmap e262: 3 total, 3 up, 3 in
Nov 25 09:08:24 compute-0 nova_compute[253538]: 2025-11-25 09:08:24.186 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2637: 321 pgs: 321 active+clean; 385 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.7 MiB/s wr, 211 op/s
Nov 25 09:08:24 compute-0 sshd-session[405305]: Invalid user vagrant from 45.202.211.6 port 59416
Nov 25 09:08:24 compute-0 sshd-session[405305]: Received disconnect from 45.202.211.6 port 59416:11: Bye Bye [preauth]
Nov 25 09:08:24 compute-0 sshd-session[405305]: Disconnected from invalid user vagrant 45.202.211.6 port 59416 [preauth]
Nov 25 09:08:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Nov 25 09:08:24 compute-0 ceph-mon[75015]: pgmap v2637: 321 pgs: 321 active+clean; 385 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.7 MiB/s wr, 211 op/s
Nov 25 09:08:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Nov 25 09:08:24 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Nov 25 09:08:25 compute-0 sshd-session[405146]: Invalid user user4 from 45.78.217.205 port 43586
Nov 25 09:08:25 compute-0 sshd-session[405146]: Received disconnect from 45.78.217.205 port 43586:11: Bye Bye [preauth]
Nov 25 09:08:25 compute-0 sshd-session[405146]: Disconnected from invalid user user4 45.78.217.205 port 43586 [preauth]
Nov 25 09:08:25 compute-0 ceph-mon[75015]: osdmap e263: 3 total, 3 up, 3 in
Nov 25 09:08:25 compute-0 nova_compute[253538]: 2025-11-25 09:08:25.905 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 380 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 126 op/s
Nov 25 09:08:26 compute-0 ceph-mon[75015]: pgmap v2639: 321 pgs: 321 active+clean; 380 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 126 op/s
Nov 25 09:08:27 compute-0 ovn_controller[152859]: 2025-11-25T09:08:27Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:11:69:66 10.100.0.14
Nov 25 09:08:27 compute-0 ovn_controller[152859]: 2025-11-25T09:08:27Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:69:66 10.100.0.14
Nov 25 09:08:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2640: 321 pgs: 321 active+clean; 330 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 474 KiB/s rd, 1.3 MiB/s wr, 115 op/s
Nov 25 09:08:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:08:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Nov 25 09:08:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Nov 25 09:08:28 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Nov 25 09:08:28 compute-0 nova_compute[253538]: 2025-11-25 09:08:28.521 253542 DEBUG nova.compute.manager [req-6619ff2c-6f80-4f41-963b-ab060189b9e8 req-4eb8fa7c-1093-411e-bcf7-603589142a4b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-changed-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:28 compute-0 nova_compute[253538]: 2025-11-25 09:08:28.521 253542 DEBUG nova.compute.manager [req-6619ff2c-6f80-4f41-963b-ab060189b9e8 req-4eb8fa7c-1093-411e-bcf7-603589142a4b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Refreshing instance network info cache due to event network-changed-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:08:28 compute-0 nova_compute[253538]: 2025-11-25 09:08:28.522 253542 DEBUG oslo_concurrency.lockutils [req-6619ff2c-6f80-4f41-963b-ab060189b9e8 req-4eb8fa7c-1093-411e-bcf7-603589142a4b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:08:28 compute-0 nova_compute[253538]: 2025-11-25 09:08:28.522 253542 DEBUG oslo_concurrency.lockutils [req-6619ff2c-6f80-4f41-963b-ab060189b9e8 req-4eb8fa7c-1093-411e-bcf7-603589142a4b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:08:28 compute-0 nova_compute[253538]: 2025-11-25 09:08:28.522 253542 DEBUG nova.network.neutron [req-6619ff2c-6f80-4f41-963b-ab060189b9e8 req-4eb8fa7c-1093-411e-bcf7-603589142a4b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Refreshing network info cache for port 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:08:28 compute-0 nova_compute[253538]: 2025-11-25 09:08:28.870 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "7aeb9ccf-2506-41d1-92c2-c72892096857" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:28 compute-0 nova_compute[253538]: 2025-11-25 09:08:28.870 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:28 compute-0 nova_compute[253538]: 2025-11-25 09:08:28.870 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:28 compute-0 nova_compute[253538]: 2025-11-25 09:08:28.871 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:28 compute-0 nova_compute[253538]: 2025-11-25 09:08:28.871 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:28 compute-0 nova_compute[253538]: 2025-11-25 09:08:28.872 253542 INFO nova.compute.manager [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Terminating instance
Nov 25 09:08:28 compute-0 nova_compute[253538]: 2025-11-25 09:08:28.873 253542 DEBUG nova.compute.manager [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:08:28 compute-0 kernel: tap0ddcebf0-d7 (unregistering): left promiscuous mode
Nov 25 09:08:28 compute-0 NetworkManager[48915]: <info>  [1764061708.9288] device (tap0ddcebf0-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:08:28 compute-0 nova_compute[253538]: 2025-11-25 09:08:28.938 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:28 compute-0 ovn_controller[152859]: 2025-11-25T09:08:28Z|01505|binding|INFO|Releasing lport 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 from this chassis (sb_readonly=0)
Nov 25 09:08:28 compute-0 ovn_controller[152859]: 2025-11-25T09:08:28Z|01506|binding|INFO|Setting lport 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 down in Southbound
Nov 25 09:08:28 compute-0 ovn_controller[152859]: 2025-11-25T09:08:28Z|01507|binding|INFO|Removing iface tap0ddcebf0-d7 ovn-installed in OVS
Nov 25 09:08:28 compute-0 nova_compute[253538]: 2025-11-25 09:08:28.940 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:28.954 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:95:36 10.100.0.13'], port_security=['fa:16:3e:d3:95:36 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7aeb9ccf-2506-41d1-92c2-c72892096857', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8771100a91ef4eb3b58cc4840f6154b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b51ac1f2-fc04-45c4-8aed-ff9624bae478', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=774fcdab-a888-48f5-b941-79ea7db76602, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:08:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:28.955 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 in datapath 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c unbound from our chassis
Nov 25 09:08:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:28.957 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c3eb82e-1161-4c2f-9fce-53fdf4386d9c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:08:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:28.958 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[93e2ef1b-d5e0-4288-9fb2-24fab8a250c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:28.959 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c namespace which is not needed anymore
Nov 25 09:08:28 compute-0 nova_compute[253538]: 2025-11-25 09:08:28.972 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:29 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Nov 25 09:08:29 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008c.scope: Consumed 16.654s CPU time.
Nov 25 09:08:29 compute-0 systemd-machined[215790]: Machine qemu-170-instance-0000008c terminated.
Nov 25 09:08:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:08:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2865710207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:08:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:08:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2865710207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.093 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:29 compute-0 neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c[401544]: [NOTICE]   (401548) : haproxy version is 2.8.14-c23fe91
Nov 25 09:08:29 compute-0 neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c[401544]: [NOTICE]   (401548) : path to executable is /usr/sbin/haproxy
Nov 25 09:08:29 compute-0 neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c[401544]: [WARNING]  (401548) : Exiting Master process...
Nov 25 09:08:29 compute-0 neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c[401544]: [ALERT]    (401548) : Current worker (401550) exited with code 143 (Terminated)
Nov 25 09:08:29 compute-0 neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c[401544]: [WARNING]  (401548) : All workers exited. Exiting... (0)
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.098 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:29 compute-0 systemd[1]: libpod-83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41.scope: Deactivated successfully.
Nov 25 09:08:29 compute-0 podman[405333]: 2025-11-25 09:08:29.105825768 +0000 UTC m=+0.045084697 container died 83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.107 253542 INFO nova.virt.libvirt.driver [-] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Instance destroyed successfully.
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.108 253542 DEBUG nova.objects.instance [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lazy-loading 'resources' on Instance uuid 7aeb9ccf-2506-41d1-92c2-c72892096857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.122 253542 DEBUG nova.virt.libvirt.vif [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:06:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-150948456',display_name='tempest-TestSnapshotPattern-server-150948456',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-150948456',id=140,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLErEV2xwN9mBf+XGKCIgVT/D6OXwB0cwyyX2NBYPEm+JKbk9OpT/b7EfE4XEaPQBYqSc0cfR5p0864dWMnh2OIUXkOANq7ZGUSKBMTzjmr8EUsGjFEiOhtcXJ0bHjOVZQ==',key_name='tempest-TestSnapshotPattern-1105951773',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:06:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8771100a91ef4eb3b58cc4840f6154b4',ramdisk_id='',reservation_id='r-9z2sb50j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-569624779',owner_user_name='tempest-TestSnapshotPattern-569624779-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:07:23Z,user_data=None,user_id='aef72e2ffce442d1848c4753c324ae92',uuid=7aeb9ccf-2506-41d1-92c2-c72892096857,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.124 253542 DEBUG nova.network.os_vif_util [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converting VIF {"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.125 253542 DEBUG nova.network.os_vif_util [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:95:36,bridge_name='br-int',has_traffic_filtering=True,id=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddcebf0-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.125 253542 DEBUG os_vif [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:95:36,bridge_name='br-int',has_traffic_filtering=True,id=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddcebf0-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.126 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.127 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ddcebf0-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.128 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.130 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.132 253542 INFO os_vif [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:95:36,bridge_name='br-int',has_traffic_filtering=True,id=0ddcebf0-d7e9-474d-b53b-d1746f5af8f2,network=Network(3c3eb82e-1161-4c2f-9fce-53fdf4386d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddcebf0-d7')
Nov 25 09:08:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41-userdata-shm.mount: Deactivated successfully.
Nov 25 09:08:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c24402a25caa013e206ed4779ad47cd8d103184e7670c046105bc5ea23a229e0-merged.mount: Deactivated successfully.
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.188 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:29 compute-0 podman[405333]: 2025-11-25 09:08:29.199537296 +0000 UTC m=+0.138796225 container cleanup 83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 09:08:29 compute-0 systemd[1]: libpod-conmon-83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41.scope: Deactivated successfully.
Nov 25 09:08:29 compute-0 ceph-mon[75015]: pgmap v2640: 321 pgs: 321 active+clean; 330 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 474 KiB/s rd, 1.3 MiB/s wr, 115 op/s
Nov 25 09:08:29 compute-0 ceph-mon[75015]: osdmap e264: 3 total, 3 up, 3 in
Nov 25 09:08:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2865710207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:08:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2865710207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:08:29 compute-0 podman[405390]: 2025-11-25 09:08:29.530956225 +0000 UTC m=+0.309415293 container remove 83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:08:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.542 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f3188a8c-2124-43ec-9049-181fd3aa5cb8]: (4, ('Tue Nov 25 09:08:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c (83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41)\n83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41\nTue Nov 25 09:08:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c (83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41)\n83877085b0968a0b59b5171956da2977080f99dba624d8394ba1eb5fbfc46c41\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.545 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[15c557aa-6a4f-4a2d-9fe7-28138e999fb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.547 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c3eb82e-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.549 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:29 compute-0 kernel: tap3c3eb82e-10: left promiscuous mode
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.566 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.570 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a9eef5-7260-488b-84df-3bc9c5c1d5eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.581 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f80f3419-44b8-4e70-a8bb-28c438aeb179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.582 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[855a30ca-7223-4ea3-8afb-d641013e513c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.598 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[239aeb40-d2b8-4c71-9381-1317b6a1e64d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696910, 'reachable_time': 39197, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405405, 'error': None, 'target': 'ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.601 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c3eb82e-1161-4c2f-9fce-53fdf4386d9c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:08:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:29.602 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[84d1f845-3b2d-4a15-92d6-a22e7b16cbb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d3c3eb82e\x2d1161\x2d4c2f\x2d9fce\x2d53fdf4386d9c.mount: Deactivated successfully.
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.834 253542 DEBUG nova.compute.manager [req-751a684b-39a3-4c98-a612-6e8b076dba49 req-b8b9dd29-fe18-4574-a2e9-902d7c8a9745 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-vif-unplugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.835 253542 DEBUG oslo_concurrency.lockutils [req-751a684b-39a3-4c98-a612-6e8b076dba49 req-b8b9dd29-fe18-4574-a2e9-902d7c8a9745 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.835 253542 DEBUG oslo_concurrency.lockutils [req-751a684b-39a3-4c98-a612-6e8b076dba49 req-b8b9dd29-fe18-4574-a2e9-902d7c8a9745 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.835 253542 DEBUG oslo_concurrency.lockutils [req-751a684b-39a3-4c98-a612-6e8b076dba49 req-b8b9dd29-fe18-4574-a2e9-902d7c8a9745 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.836 253542 DEBUG nova.compute.manager [req-751a684b-39a3-4c98-a612-6e8b076dba49 req-b8b9dd29-fe18-4574-a2e9-902d7c8a9745 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] No waiting events found dispatching network-vif-unplugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.836 253542 DEBUG nova.compute.manager [req-751a684b-39a3-4c98-a612-6e8b076dba49 req-b8b9dd29-fe18-4574-a2e9-902d7c8a9745 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-vif-unplugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.856 253542 INFO nova.virt.libvirt.driver [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Deleting instance files /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857_del
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.858 253542 INFO nova.virt.libvirt.driver [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Deletion of /var/lib/nova/instances/7aeb9ccf-2506-41d1-92c2-c72892096857_del complete
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.912 253542 INFO nova.compute.manager [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Took 1.04 seconds to destroy the instance on the hypervisor.
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.913 253542 DEBUG oslo.service.loopingcall [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.914 253542 DEBUG nova.compute.manager [-] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:08:29 compute-0 nova_compute[253538]: 2025-11-25 09:08:29.914 253542 DEBUG nova.network.neutron [-] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:08:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2642: 321 pgs: 321 active+clean; 317 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 569 KiB/s rd, 2.8 MiB/s wr, 152 op/s
Nov 25 09:08:30 compute-0 podman[405408]: 2025-11-25 09:08:30.825461715 +0000 UTC m=+0.070213920 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:08:30 compute-0 podman[405407]: 2025-11-25 09:08:30.827281934 +0000 UTC m=+0.075155674 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 25 09:08:30 compute-0 nova_compute[253538]: 2025-11-25 09:08:30.985 253542 DEBUG nova.network.neutron [-] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.007 253542 INFO nova.compute.manager [-] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Took 1.09 seconds to deallocate network for instance.
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.053 253542 DEBUG nova.compute.manager [req-982378e9-806a-41a4-8343-c2cd3abfa89f req-86895d17-9848-4c2b-9439-12fe085b7ab5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-vif-deleted-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.058 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.058 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.140 253542 DEBUG oslo_concurrency.processutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.260 253542 DEBUG nova.network.neutron [req-6619ff2c-6f80-4f41-963b-ab060189b9e8 req-4eb8fa7c-1093-411e-bcf7-603589142a4b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updated VIF entry in instance network info cache for port 0ddcebf0-d7e9-474d-b53b-d1746f5af8f2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.261 253542 DEBUG nova.network.neutron [req-6619ff2c-6f80-4f41-963b-ab060189b9e8 req-4eb8fa7c-1093-411e-bcf7-603589142a4b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Updating instance_info_cache with network_info: [{"id": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "address": "fa:16:3e:d3:95:36", "network": {"id": "3c3eb82e-1161-4c2f-9fce-53fdf4386d9c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-990123454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8771100a91ef4eb3b58cc4840f6154b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddcebf0-d7", "ovs_interfaceid": "0ddcebf0-d7e9-474d-b53b-d1746f5af8f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:08:31 compute-0 ceph-mon[75015]: pgmap v2642: 321 pgs: 321 active+clean; 317 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 569 KiB/s rd, 2.8 MiB/s wr, 152 op/s
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.281 253542 DEBUG oslo_concurrency.lockutils [req-6619ff2c-6f80-4f41-963b-ab060189b9e8 req-4eb8fa7c-1093-411e-bcf7-603589142a4b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-7aeb9ccf-2506-41d1-92c2-c72892096857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:08:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:08:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/880294285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.587 253542 DEBUG oslo_concurrency.processutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.592 253542 DEBUG nova.compute.provider_tree [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.608 253542 DEBUG nova.scheduler.client.report [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.635 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.678 253542 INFO nova.scheduler.client.report [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Deleted allocations for instance 7aeb9ccf-2506-41d1-92c2-c72892096857
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.754 253542 DEBUG oslo_concurrency.lockutils [None req-2c5d4260-8d92-42c3-821e-54515d89fc8a aef72e2ffce442d1848c4753c324ae92 8771100a91ef4eb3b58cc4840f6154b4 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.906 253542 DEBUG nova.compute.manager [req-d1fdf7b5-5295-4972-bae5-41bbe0e3a570 req-a5e3e4c2-48bc-4dfb-8f27-2b75e3f82c19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received event network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.906 253542 DEBUG oslo_concurrency.lockutils [req-d1fdf7b5-5295-4972-bae5-41bbe0e3a570 req-a5e3e4c2-48bc-4dfb-8f27-2b75e3f82c19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.907 253542 DEBUG oslo_concurrency.lockutils [req-d1fdf7b5-5295-4972-bae5-41bbe0e3a570 req-a5e3e4c2-48bc-4dfb-8f27-2b75e3f82c19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.908 253542 DEBUG oslo_concurrency.lockutils [req-d1fdf7b5-5295-4972-bae5-41bbe0e3a570 req-a5e3e4c2-48bc-4dfb-8f27-2b75e3f82c19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "7aeb9ccf-2506-41d1-92c2-c72892096857-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.909 253542 DEBUG nova.compute.manager [req-d1fdf7b5-5295-4972-bae5-41bbe0e3a570 req-a5e3e4c2-48bc-4dfb-8f27-2b75e3f82c19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] No waiting events found dispatching network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:08:31 compute-0 nova_compute[253538]: 2025-11-25 09:08:31.910 253542 WARNING nova.compute.manager [req-d1fdf7b5-5295-4972-bae5-41bbe0e3a570 req-a5e3e4c2-48bc-4dfb-8f27-2b75e3f82c19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Received unexpected event network-vif-plugged-0ddcebf0-d7e9-474d-b53b-d1746f5af8f2 for instance with vm_state deleted and task_state None.
Nov 25 09:08:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2643: 321 pgs: 321 active+clean; 304 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 527 KiB/s rd, 3.2 MiB/s wr, 153 op/s
Nov 25 09:08:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/880294285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:08:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:08:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Nov 25 09:08:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Nov 25 09:08:33 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Nov 25 09:08:33 compute-0 ceph-mon[75015]: pgmap v2643: 321 pgs: 321 active+clean; 304 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 527 KiB/s rd, 3.2 MiB/s wr, 153 op/s
Nov 25 09:08:33 compute-0 ceph-mon[75015]: osdmap e265: 3 total, 3 up, 3 in
Nov 25 09:08:34 compute-0 nova_compute[253538]: 2025-11-25 09:08:34.130 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:34 compute-0 nova_compute[253538]: 2025-11-25 09:08:34.191 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2645: 321 pgs: 321 active+clean; 276 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 571 KiB/s rd, 3.2 MiB/s wr, 182 op/s
Nov 25 09:08:35 compute-0 ceph-mon[75015]: pgmap v2645: 321 pgs: 321 active+clean; 276 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 571 KiB/s rd, 3.2 MiB/s wr, 182 op/s
Nov 25 09:08:35 compute-0 nova_compute[253538]: 2025-11-25 09:08:35.874 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061700.873678, de5bfbef-7a99-4280-a304-71b9099f110b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:08:35 compute-0 nova_compute[253538]: 2025-11-25 09:08:35.875 253542 INFO nova.compute.manager [-] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] VM Stopped (Lifecycle Event)
Nov 25 09:08:35 compute-0 nova_compute[253538]: 2025-11-25 09:08:35.889 253542 DEBUG nova.compute.manager [None req-859ebcd9-672b-43f0-a40a-78dcb0223ae2 - - - - - -] [instance: de5bfbef-7a99-4280-a304-71b9099f110b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:08:36 compute-0 ovn_controller[152859]: 2025-11-25T09:08:36Z|01508|binding|INFO|Releasing lport c3131dc3-cd6c-4c0c-a1cd-e385c6ff1693 from this chassis (sb_readonly=0)
Nov 25 09:08:36 compute-0 ovn_controller[152859]: 2025-11-25T09:08:36Z|01509|binding|INFO|Releasing lport d96f8f4b-113a-4ac4-a7d8-3fb3e9d6b94c from this chassis (sb_readonly=0)
Nov 25 09:08:36 compute-0 nova_compute[253538]: 2025-11-25 09:08:36.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2646: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 1.9 MiB/s wr, 110 op/s
Nov 25 09:08:37 compute-0 ceph-mon[75015]: pgmap v2646: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 1.9 MiB/s wr, 110 op/s
Nov 25 09:08:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:37.600 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:08:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:37.601 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:08:37 compute-0 nova_compute[253538]: 2025-11-25 09:08:37.601 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:37.601 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:37 compute-0 podman[405467]: 2025-11-25 09:08:37.847259155 +0000 UTC m=+0.100522933 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:08:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2647: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 1.5 MiB/s wr, 89 op/s
Nov 25 09:08:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:08:39 compute-0 nova_compute[253538]: 2025-11-25 09:08:39.134 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:39 compute-0 nova_compute[253538]: 2025-11-25 09:08:39.194 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:39 compute-0 ceph-mon[75015]: pgmap v2647: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 1.5 MiB/s wr, 89 op/s
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.103 253542 DEBUG nova.compute.manager [req-ccd81664-0551-4b7f-9614-e1bac9ade488 req-792219a4-27cd-4903-b8f0-b564b0c0e99c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-changed-0967717d-564b-4989-8f67-1cd8c2de57ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.104 253542 DEBUG nova.compute.manager [req-ccd81664-0551-4b7f-9614-e1bac9ade488 req-792219a4-27cd-4903-b8f0-b564b0c0e99c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Refreshing instance network info cache due to event network-changed-0967717d-564b-4989-8f67-1cd8c2de57ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.104 253542 DEBUG oslo_concurrency.lockutils [req-ccd81664-0551-4b7f-9614-e1bac9ade488 req-792219a4-27cd-4903-b8f0-b564b0c0e99c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.104 253542 DEBUG oslo_concurrency.lockutils [req-ccd81664-0551-4b7f-9614-e1bac9ade488 req-792219a4-27cd-4903-b8f0-b564b0c0e99c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.105 253542 DEBUG nova.network.neutron [req-ccd81664-0551-4b7f-9614-e1bac9ade488 req-792219a4-27cd-4903-b8f0-b564b0c0e99c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Refreshing network info cache for port 0967717d-564b-4989-8f67-1cd8c2de57ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.187 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.189 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.189 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.190 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.191 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.193 253542 INFO nova.compute.manager [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Terminating instance
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.195 253542 DEBUG nova.compute.manager [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:08:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2648: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 653 KiB/s wr, 52 op/s
Nov 25 09:08:40 compute-0 kernel: tap0967717d-56 (unregistering): left promiscuous mode
Nov 25 09:08:40 compute-0 NetworkManager[48915]: <info>  [1764061720.2468] device (tap0967717d-56): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:08:40 compute-0 ovn_controller[152859]: 2025-11-25T09:08:40Z|01510|binding|INFO|Releasing lport 0967717d-564b-4989-8f67-1cd8c2de57ce from this chassis (sb_readonly=0)
Nov 25 09:08:40 compute-0 ovn_controller[152859]: 2025-11-25T09:08:40Z|01511|binding|INFO|Setting lport 0967717d-564b-4989-8f67-1cd8c2de57ce down in Southbound
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.255 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:40 compute-0 ovn_controller[152859]: 2025-11-25T09:08:40Z|01512|binding|INFO|Removing iface tap0967717d-56 ovn-installed in OVS
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.257 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:40 compute-0 kernel: tap6effd17c-b1 (unregistering): left promiscuous mode
Nov 25 09:08:40 compute-0 NetworkManager[48915]: <info>  [1764061720.2748] device (tap6effd17c-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.276 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.293 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:69:66 10.100.0.14'], port_security=['fa:16:3e:11:69:66 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '24611274-7a7c-4258-8631-032a6c1d8410', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b341b451-7173-4ca0-817f-090d2ce6e1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d52cd819-aa98-4895-9898-b2ec17432e84, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=0967717d-564b-4989-8f67-1cd8c2de57ce) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.295 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 0967717d-564b-4989-8f67-1cd8c2de57ce in datapath 2a6609b2-beb0-48a5-8dc0-1a4c153da77e unbound from our chassis
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.297 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2a6609b2-beb0-48a5-8dc0-1a4c153da77e
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.324 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:40 compute-0 ovn_controller[152859]: 2025-11-25T09:08:40Z|01513|binding|INFO|Releasing lport 6effd17c-b1ee-44e2-8346-9e445de0dfb9 from this chassis (sb_readonly=0)
Nov 25 09:08:40 compute-0 ovn_controller[152859]: 2025-11-25T09:08:40Z|01514|binding|INFO|Setting lport 6effd17c-b1ee-44e2-8346-9e445de0dfb9 down in Southbound
Nov 25 09:08:40 compute-0 ovn_controller[152859]: 2025-11-25T09:08:40Z|01515|binding|INFO|Removing iface tap6effd17c-b1 ovn-installed in OVS
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.336 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fefa194c-03ec-4691-90c5-083af5d66944]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.339 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:40 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Nov 25 09:08:40 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008f.scope: Consumed 15.211s CPU time.
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.358 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:6f:de 2001:db8:0:1:f816:3eff:fe28:6fde 2001:db8::f816:3eff:fe28:6fde'], port_security=['fa:16:3e:28:6f:de 2001:db8:0:1:f816:3eff:fe28:6fde 2001:db8::f816:3eff:fe28:6fde'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe28:6fde/64 2001:db8::f816:3eff:fe28:6fde/64', 'neutron:device_id': '24611274-7a7c-4258-8631-032a6c1d8410', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b341b451-7173-4ca0-817f-090d2ce6e1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4355854-52b0-49fe-b048-f2ee64c9c702, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=6effd17c-b1ee-44e2-8346-9e445de0dfb9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:08:40 compute-0 systemd-machined[215790]: Machine qemu-173-instance-0000008f terminated.
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.367 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5f63d16e-7473-47ad-bb48-2e0a8c4b6c5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.370 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[257cbf1a-6293-4ec8-843b-1fe20e44aea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.396 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2564ca-49a1-4a5a-8cf2-2d0617555869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.417 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[08684bc4-5aa0-4e09-a7c4-50b284d2a5f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2a6609b2-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:74:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701075, 'reachable_time': 37459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405508, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:40 compute-0 NetworkManager[48915]: <info>  [1764061720.4360] manager: (tap6effd17c-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/621)
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.438 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3271e2-2da8-4159-b307-1f917f745d7e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2a6609b2-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701090, 'tstamp': 701090}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405515, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2a6609b2-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701094, 'tstamp': 701094}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405515, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.440 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a6609b2-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.442 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.451 253542 INFO nova.virt.libvirt.driver [-] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Instance destroyed successfully.
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.452 253542 DEBUG nova.objects.instance [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 24611274-7a7c-4258-8631-032a6c1d8410 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.456 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.457 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a6609b2-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.458 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.458 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2a6609b2-b0, col_values=(('external_ids', {'iface-id': 'd96f8f4b-113a-4ac4-a7d8-3fb3e9d6b94c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.459 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.460 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 6effd17c-b1ee-44e2-8346-9e445de0dfb9 in datapath 21786b2a-59f9-4c4e-b462-8a28f7bd93a3 unbound from our chassis
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.461 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 21786b2a-59f9-4c4e-b462-8a28f7bd93a3
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.463 253542 DEBUG nova.virt.libvirt.vif [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:07:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1852306099',display_name='tempest-TestGettingAddress-server-1852306099',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1852306099',id=143,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:08:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-0x2ihl6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:08:14Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=24611274-7a7c-4258-8631-032a6c1d8410,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.463 253542 DEBUG nova.network.os_vif_util [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.464 253542 DEBUG nova.network.os_vif_util [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:11:69:66,bridge_name='br-int',has_traffic_filtering=True,id=0967717d-564b-4989-8f67-1cd8c2de57ce,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0967717d-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.464 253542 DEBUG os_vif [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:11:69:66,bridge_name='br-int',has_traffic_filtering=True,id=0967717d-564b-4989-8f67-1cd8c2de57ce,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0967717d-56') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.466 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.466 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0967717d-56, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.468 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.470 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.472 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.474 253542 INFO os_vif [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:11:69:66,bridge_name='br-int',has_traffic_filtering=True,id=0967717d-564b-4989-8f67-1cd8c2de57ce,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0967717d-56')
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.475 253542 DEBUG nova.virt.libvirt.vif [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:07:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1852306099',display_name='tempest-TestGettingAddress-server-1852306099',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1852306099',id=143,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:08:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-0x2ihl6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:08:14Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=24611274-7a7c-4258-8631-032a6c1d8410,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.475 253542 DEBUG nova.network.os_vif_util [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.476 253542 DEBUG nova.network.os_vif_util [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:6f:de,bridge_name='br-int',has_traffic_filtering=True,id=6effd17c-b1ee-44e2-8346-9e445de0dfb9,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6effd17c-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.476 253542 DEBUG os_vif [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:6f:de,bridge_name='br-int',has_traffic_filtering=True,id=6effd17c-b1ee-44e2-8346-9e445de0dfb9,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6effd17c-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.477 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.477 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6effd17c-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.478 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.478 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[39d20875-6748-4713-a4a1-31142c7c1693]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.480 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.482 253542 INFO os_vif [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:6f:de,bridge_name='br-int',has_traffic_filtering=True,id=6effd17c-b1ee-44e2-8346-9e445de0dfb9,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6effd17c-b1')
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.514 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf2b1fe-462d-4679-adc4-d57f309dd54d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.518 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[627c65e7-6049-451a-9a5d-5aed999d2a04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.537 253542 DEBUG nova.compute.manager [req-009a916c-8d2a-4023-9bbe-9816f18b533f req-768c907b-eda8-48da-9406-392902845947 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-unplugged-0967717d-564b-4989-8f67-1cd8c2de57ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.537 253542 DEBUG oslo_concurrency.lockutils [req-009a916c-8d2a-4023-9bbe-9816f18b533f req-768c907b-eda8-48da-9406-392902845947 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.537 253542 DEBUG oslo_concurrency.lockutils [req-009a916c-8d2a-4023-9bbe-9816f18b533f req-768c907b-eda8-48da-9406-392902845947 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.538 253542 DEBUG oslo_concurrency.lockutils [req-009a916c-8d2a-4023-9bbe-9816f18b533f req-768c907b-eda8-48da-9406-392902845947 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.538 253542 DEBUG nova.compute.manager [req-009a916c-8d2a-4023-9bbe-9816f18b533f req-768c907b-eda8-48da-9406-392902845947 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] No waiting events found dispatching network-vif-unplugged-0967717d-564b-4989-8f67-1cd8c2de57ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.538 253542 DEBUG nova.compute.manager [req-009a916c-8d2a-4023-9bbe-9816f18b533f req-768c907b-eda8-48da-9406-392902845947 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-unplugged-0967717d-564b-4989-8f67-1cd8c2de57ce for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.547 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[42d8ccb5-62d6-4b85-818c-1052bbce6e52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.562 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a682535e-b296-4445-83d3-76fa60f2f533]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21786b2a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:b7:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 429], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701174, 'reachable_time': 20349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405559, 'error': None, 'target': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.577 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8bfa2f5a-8e66-4569-b64c-d0df7061fc8e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap21786b2a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701187, 'tstamp': 701187}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405560, 'error': None, 'target': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.579 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21786b2a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.580 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:40 compute-0 nova_compute[253538]: 2025-11-25 09:08:40.581 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21786b2a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.582 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap21786b2a-50, col_values=(('external_ids', {'iface-id': 'c3131dc3-cd6c-4c0c-a1cd-e385c6ff1693'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:40.583 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:08:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:41.092 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:41.093 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:41.094 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:41 compute-0 nova_compute[253538]: 2025-11-25 09:08:41.189 253542 INFO nova.virt.libvirt.driver [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Deleting instance files /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410_del
Nov 25 09:08:41 compute-0 nova_compute[253538]: 2025-11-25 09:08:41.190 253542 INFO nova.virt.libvirt.driver [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Deletion of /var/lib/nova/instances/24611274-7a7c-4258-8631-032a6c1d8410_del complete
Nov 25 09:08:41 compute-0 nova_compute[253538]: 2025-11-25 09:08:41.253 253542 INFO nova.compute.manager [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Took 1.06 seconds to destroy the instance on the hypervisor.
Nov 25 09:08:41 compute-0 nova_compute[253538]: 2025-11-25 09:08:41.254 253542 DEBUG oslo.service.loopingcall [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:08:41 compute-0 nova_compute[253538]: 2025-11-25 09:08:41.254 253542 DEBUG nova.compute.manager [-] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:08:41 compute-0 nova_compute[253538]: 2025-11-25 09:08:41.254 253542 DEBUG nova.network.neutron [-] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:08:41 compute-0 nova_compute[253538]: 2025-11-25 09:08:41.298 253542 DEBUG nova.network.neutron [req-ccd81664-0551-4b7f-9614-e1bac9ade488 req-792219a4-27cd-4903-b8f0-b564b0c0e99c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updated VIF entry in instance network info cache for port 0967717d-564b-4989-8f67-1cd8c2de57ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:08:41 compute-0 nova_compute[253538]: 2025-11-25 09:08:41.298 253542 DEBUG nova.network.neutron [req-ccd81664-0551-4b7f-9614-e1bac9ade488 req-792219a4-27cd-4903-b8f0-b564b0c0e99c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updating instance_info_cache with network_info: [{"id": "0967717d-564b-4989-8f67-1cd8c2de57ce", "address": "fa:16:3e:11:69:66", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0967717d-56", "ovs_interfaceid": "0967717d-564b-4989-8f67-1cd8c2de57ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "address": "fa:16:3e:28:6f:de", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:6fde", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6effd17c-b1", "ovs_interfaceid": "6effd17c-b1ee-44e2-8346-9e445de0dfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:08:41 compute-0 ceph-mon[75015]: pgmap v2648: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 653 KiB/s wr, 52 op/s
Nov 25 09:08:41 compute-0 nova_compute[253538]: 2025-11-25 09:08:41.319 253542 DEBUG oslo_concurrency.lockutils [req-ccd81664-0551-4b7f-9614-e1bac9ade488 req-792219a4-27cd-4903-b8f0-b564b0c0e99c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-24611274-7a7c-4258-8631-032a6c1d8410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:08:41 compute-0 ovn_controller[152859]: 2025-11-25T09:08:41Z|01516|binding|INFO|Releasing lport c3131dc3-cd6c-4c0c-a1cd-e385c6ff1693 from this chassis (sb_readonly=0)
Nov 25 09:08:41 compute-0 ovn_controller[152859]: 2025-11-25T09:08:41Z|01517|binding|INFO|Releasing lport d96f8f4b-113a-4ac4-a7d8-3fb3e9d6b94c from this chassis (sb_readonly=0)
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.050 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2649: 321 pgs: 321 active+clean; 227 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 26 KiB/s wr, 58 op/s
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.601 253542 DEBUG nova.network.neutron [-] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.619 253542 INFO nova.compute.manager [-] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Took 1.36 seconds to deallocate network for instance.
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.649 253542 DEBUG nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.650 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.650 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.650 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.650 253542 DEBUG nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] No waiting events found dispatching network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.651 253542 WARNING nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received unexpected event network-vif-plugged-0967717d-564b-4989-8f67-1cd8c2de57ce for instance with vm_state active and task_state deleting.
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.651 253542 DEBUG nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-unplugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.652 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.652 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.652 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.652 253542 DEBUG nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] No waiting events found dispatching network-vif-unplugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.653 253542 DEBUG nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-unplugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.653 253542 DEBUG nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.653 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "24611274-7a7c-4258-8631-032a6c1d8410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.653 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.654 253542 DEBUG oslo_concurrency.lockutils [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.654 253542 DEBUG nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] No waiting events found dispatching network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.654 253542 WARNING nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received unexpected event network-vif-plugged-6effd17c-b1ee-44e2-8346-9e445de0dfb9 for instance with vm_state active and task_state deleting.
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.654 253542 DEBUG nova.compute.manager [req-d1e574d4-2f7c-4401-8b55-ab40a8550c2a req-58e08b0f-321e-4d5c-96c6-2d2815d68ef0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-deleted-0967717d-564b-4989-8f67-1cd8c2de57ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.665 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.666 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:42 compute-0 nova_compute[253538]: 2025-11-25 09:08:42.734 253542 DEBUG oslo_concurrency.processutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:08:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:08:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633812308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:08:43 compute-0 nova_compute[253538]: 2025-11-25 09:08:43.192 253542 DEBUG oslo_concurrency.processutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:08:43 compute-0 nova_compute[253538]: 2025-11-25 09:08:43.198 253542 DEBUG nova.compute.provider_tree [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:08:43 compute-0 nova_compute[253538]: 2025-11-25 09:08:43.241 253542 DEBUG nova.scheduler.client.report [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:08:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:08:43 compute-0 nova_compute[253538]: 2025-11-25 09:08:43.271 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:43 compute-0 nova_compute[253538]: 2025-11-25 09:08:43.297 253542 INFO nova.scheduler.client.report [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 24611274-7a7c-4258-8631-032a6c1d8410
Nov 25 09:08:43 compute-0 nova_compute[253538]: 2025-11-25 09:08:43.357 253542 DEBUG oslo_concurrency.lockutils [None req-49111bd6-c78f-4a62-beb9-cc3235068cf6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "24611274-7a7c-4258-8631-032a6c1d8410" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:43 compute-0 sshd-session[405563]: Invalid user sgf from 193.32.162.151 port 35026
Nov 25 09:08:43 compute-0 ceph-mon[75015]: pgmap v2649: 321 pgs: 321 active+clean; 227 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 26 KiB/s wr, 58 op/s
Nov 25 09:08:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1633812308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:08:43 compute-0 sshd-session[405563]: Connection closed by invalid user sgf 193.32.162.151 port 35026 [preauth]
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.102 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061709.1016028, 7aeb9ccf-2506-41d1-92c2-c72892096857 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.103 253542 INFO nova.compute.manager [-] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] VM Stopped (Lifecycle Event)
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.121 253542 DEBUG nova.compute.manager [None req-47934868-6c55-4104-9539-0ad2ef8b9e77 - - - - - -] [instance: 7aeb9ccf-2506-41d1-92c2-c72892096857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.196 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2650: 321 pgs: 321 active+clean; 196 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 25 KiB/s wr, 29 op/s
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.307 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.307 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.308 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.308 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.308 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.309 253542 INFO nova.compute.manager [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Terminating instance
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.310 253542 DEBUG nova.compute.manager [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:08:44 compute-0 kernel: tap05d0fd93-ce (unregistering): left promiscuous mode
Nov 25 09:08:44 compute-0 NetworkManager[48915]: <info>  [1764061724.3904] device (tap05d0fd93-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.396 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 ovn_controller[152859]: 2025-11-25T09:08:44Z|01518|binding|INFO|Releasing lport 05d0fd93-ce0f-4842-962f-c9491d3850c8 from this chassis (sb_readonly=0)
Nov 25 09:08:44 compute-0 ovn_controller[152859]: 2025-11-25T09:08:44Z|01519|binding|INFO|Setting lport 05d0fd93-ce0f-4842-962f-c9491d3850c8 down in Southbound
Nov 25 09:08:44 compute-0 ovn_controller[152859]: 2025-11-25T09:08:44Z|01520|binding|INFO|Removing iface tap05d0fd93-ce ovn-installed in OVS
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.399 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.411 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:ec:f8 10.100.0.7'], port_security=['fa:16:3e:e4:ec:f8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a5ec67ec-7042-47d0-925d-6ff3847d3846', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b341b451-7173-4ca0-817f-090d2ce6e1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d52cd819-aa98-4895-9898-b2ec17432e84, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=05d0fd93-ce0f-4842-962f-c9491d3850c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.412 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 05d0fd93-ce0f-4842-962f-c9491d3850c8 in datapath 2a6609b2-beb0-48a5-8dc0-1a4c153da77e unbound from our chassis
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.413 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2a6609b2-beb0-48a5-8dc0-1a4c153da77e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.414 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[aca258b2-4181-467d-a2b6-14780485f4b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.415 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e namespace which is not needed anymore
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.418 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 kernel: tap2eda6ce1-df (unregistering): left promiscuous mode
Nov 25 09:08:44 compute-0 NetworkManager[48915]: <info>  [1764061724.4373] device (tap2eda6ce1-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.440 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 ovn_controller[152859]: 2025-11-25T09:08:44Z|01521|binding|INFO|Releasing lport 2eda6ce1-df50-4620-a5e9-d08e62f7350e from this chassis (sb_readonly=0)
Nov 25 09:08:44 compute-0 ovn_controller[152859]: 2025-11-25T09:08:44Z|01522|binding|INFO|Setting lport 2eda6ce1-df50-4620-a5e9-d08e62f7350e down in Southbound
Nov 25 09:08:44 compute-0 ovn_controller[152859]: 2025-11-25T09:08:44Z|01523|binding|INFO|Removing iface tap2eda6ce1-df ovn-installed in OVS
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.450 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.452 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.464 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.475 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:ed:73 2001:db8:0:1:f816:3eff:fefc:ed73 2001:db8::f816:3eff:fefc:ed73'], port_security=['fa:16:3e:fc:ed:73 2001:db8:0:1:f816:3eff:fefc:ed73 2001:db8::f816:3eff:fefc:ed73'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fefc:ed73/64 2001:db8::f816:3eff:fefc:ed73/64', 'neutron:device_id': 'a5ec67ec-7042-47d0-925d-6ff3847d3846', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b341b451-7173-4ca0-817f-090d2ce6e1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4355854-52b0-49fe-b048-f2ee64c9c702, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2eda6ce1-df50-4620-a5e9-d08e62f7350e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:08:44 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Nov 25 09:08:44 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008d.scope: Consumed 16.976s CPU time.
Nov 25 09:08:44 compute-0 systemd-machined[215790]: Machine qemu-172-instance-0000008d terminated.
Nov 25 09:08:44 compute-0 NetworkManager[48915]: <info>  [1764061724.5346] manager: (tap05d0fd93-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/622)
Nov 25 09:08:44 compute-0 NetworkManager[48915]: <info>  [1764061724.5471] manager: (tap2eda6ce1-df): new Tun device (/org/freedesktop/NetworkManager/Devices/623)
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.583 253542 INFO nova.virt.libvirt.driver [-] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Instance destroyed successfully.
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.584 253542 DEBUG nova.objects.instance [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid a5ec67ec-7042-47d0-925d-6ff3847d3846 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.596 253542 DEBUG nova.virt.libvirt.vif [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1660352591',display_name='tempest-TestGettingAddress-server-1660352591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1660352591',id=141,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:07:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-umnvxfkq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:07:37Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=a5ec67ec-7042-47d0-925d-6ff3847d3846,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.596 253542 DEBUG nova.network.os_vif_util [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.597 253542 DEBUG nova.network.os_vif_util [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:ec:f8,bridge_name='br-int',has_traffic_filtering=True,id=05d0fd93-ce0f-4842-962f-c9491d3850c8,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05d0fd93-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.597 253542 DEBUG os_vif [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:ec:f8,bridge_name='br-int',has_traffic_filtering=True,id=05d0fd93-ce0f-4842-962f-c9491d3850c8,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05d0fd93-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.599 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.599 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05d0fd93-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.600 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.604 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.606 253542 INFO os_vif [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:ec:f8,bridge_name='br-int',has_traffic_filtering=True,id=05d0fd93-ce0f-4842-962f-c9491d3850c8,network=Network(2a6609b2-beb0-48a5-8dc0-1a4c153da77e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05d0fd93-ce')
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.607 253542 DEBUG nova.virt.libvirt.vif [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1660352591',display_name='tempest-TestGettingAddress-server-1660352591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1660352591',id=141,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNXZVgUcpOXPWXO6P6soD+m15iDx22PdjLpwIPj4NhiqrYSa8gOlVpM1gBbkAKhCG74Rw2mdANKyxO2M4jwTdZWZdvZ4G3xfBv8VGqbvmMGN/YvwZlGw5MNBtRc1Ho0Cw==',key_name='tempest-TestGettingAddress-1769249141',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:07:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-umnvxfkq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:07:37Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=a5ec67ec-7042-47d0-925d-6ff3847d3846,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.607 253542 DEBUG nova.network.os_vif_util [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.609 253542 DEBUG nova.network.os_vif_util [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ed:73,bridge_name='br-int',has_traffic_filtering=True,id=2eda6ce1-df50-4620-a5e9-d08e62f7350e,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eda6ce1-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:08:44 compute-0 neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e[403594]: [NOTICE]   (403598) : haproxy version is 2.8.14-c23fe91
Nov 25 09:08:44 compute-0 neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e[403594]: [NOTICE]   (403598) : path to executable is /usr/sbin/haproxy
Nov 25 09:08:44 compute-0 neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e[403594]: [WARNING]  (403598) : Exiting Master process...
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.609 253542 DEBUG os_vif [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ed:73,bridge_name='br-int',has_traffic_filtering=True,id=2eda6ce1-df50-4620-a5e9-d08e62f7350e,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eda6ce1-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.610 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.610 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eda6ce1-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:44 compute-0 neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e[403594]: [ALERT]    (403598) : Current worker (403600) exited with code 143 (Terminated)
Nov 25 09:08:44 compute-0 neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e[403594]: [WARNING]  (403598) : All workers exited. Exiting... (0)
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.611 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.613 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 systemd[1]: libpod-9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa.scope: Deactivated successfully.
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.614 253542 INFO os_vif [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ed:73,bridge_name='br-int',has_traffic_filtering=True,id=2eda6ce1-df50-4620-a5e9-d08e62f7350e,network=Network(21786b2a-59f9-4c4e-b462-8a28f7bd93a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eda6ce1-df')
Nov 25 09:08:44 compute-0 podman[405614]: 2025-11-25 09:08:44.620676324 +0000 UTC m=+0.085916826 container died 9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 09:08:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa-userdata-shm.mount: Deactivated successfully.
Nov 25 09:08:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e09528d4ae8d074a0aa083a055a8dc133b765cc40f864841f0ed3ff34768cc83-merged.mount: Deactivated successfully.
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.804 253542 DEBUG nova.compute.manager [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Received event network-vif-deleted-6effd17c-b1ee-44e2-8346-9e445de0dfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.806 253542 DEBUG nova.compute.manager [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-changed-05d0fd93-ce0f-4842-962f-c9491d3850c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.806 253542 DEBUG nova.compute.manager [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Refreshing instance network info cache due to event network-changed-05d0fd93-ce0f-4842-962f-c9491d3850c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.807 253542 DEBUG oslo_concurrency.lockutils [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.807 253542 DEBUG oslo_concurrency.lockutils [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.808 253542 DEBUG nova.network.neutron [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Refreshing network info cache for port 05d0fd93-ce0f-4842-962f-c9491d3850c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:08:44 compute-0 podman[405614]: 2025-11-25 09:08:44.812281743 +0000 UTC m=+0.277522245 container cleanup 9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:08:44 compute-0 systemd[1]: libpod-conmon-9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa.scope: Deactivated successfully.
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.901 253542 DEBUG nova.compute.manager [req-93a17075-a365-42a4-baa9-1a1aaa9646a6 req-263576da-841c-4d9d-adfb-8b311c924534 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-unplugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.902 253542 DEBUG oslo_concurrency.lockutils [req-93a17075-a365-42a4-baa9-1a1aaa9646a6 req-263576da-841c-4d9d-adfb-8b311c924534 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.902 253542 DEBUG oslo_concurrency.lockutils [req-93a17075-a365-42a4-baa9-1a1aaa9646a6 req-263576da-841c-4d9d-adfb-8b311c924534 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.903 253542 DEBUG oslo_concurrency.lockutils [req-93a17075-a365-42a4-baa9-1a1aaa9646a6 req-263576da-841c-4d9d-adfb-8b311c924534 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.903 253542 DEBUG nova.compute.manager [req-93a17075-a365-42a4-baa9-1a1aaa9646a6 req-263576da-841c-4d9d-adfb-8b311c924534 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] No waiting events found dispatching network-vif-unplugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.904 253542 DEBUG nova.compute.manager [req-93a17075-a365-42a4-baa9-1a1aaa9646a6 req-263576da-841c-4d9d-adfb-8b311c924534 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-unplugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:08:44 compute-0 podman[405684]: 2025-11-25 09:08:44.909738872 +0000 UTC m=+0.066501049 container remove 9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.922 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0231f5a3-07f3-4f98-ac32-fec44f19f244]: (4, ('Tue Nov 25 09:08:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e (9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa)\n9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa\nTue Nov 25 09:08:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e (9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa)\n9a504edfb770e834c8ae706c82d27a846bbfdf5baeddccd2911bc4b7aa6bc6fa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.924 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b01468-cda2-45c5-8581-80aab1799641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.925 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a6609b2-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:44 compute-0 kernel: tap2a6609b2-b0: left promiscuous mode
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.927 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.943 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 nova_compute[253538]: 2025-11-25 09:08:44.944 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.946 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c36b5d13-a4f8-401f-af09-687e2cc70aef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.962 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f532be33-8405-4d5a-a859-8cbdf1b567bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.963 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[78dc80ff-4238-4c54-9a64-79a0b5ba31c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.978 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1e494a-d7f7-45bd-89f2-fd20e3bbb276]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701063, 'reachable_time': 19494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405699, 'error': None, 'target': 'ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d2a6609b2\x2dbeb0\x2d48a5\x2d8dc0\x2d1a4c153da77e.mount: Deactivated successfully.
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.982 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2a6609b2-beb0-48a5-8dc0-1a4c153da77e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.983 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0207fb-cbd4-4b7b-86c9-61d4d77355bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.983 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2eda6ce1-df50-4620-a5e9-d08e62f7350e in datapath 21786b2a-59f9-4c4e-b462-8a28f7bd93a3 unbound from our chassis
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.984 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 21786b2a-59f9-4c4e-b462-8a28f7bd93a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.986 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[abe3672d-0930-45de-86a0-5aee95ea6de8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:44 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:44.986 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3 namespace which is not needed anymore
Nov 25 09:08:45 compute-0 neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3[403683]: [NOTICE]   (403687) : haproxy version is 2.8.14-c23fe91
Nov 25 09:08:45 compute-0 neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3[403683]: [NOTICE]   (403687) : path to executable is /usr/sbin/haproxy
Nov 25 09:08:45 compute-0 neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3[403683]: [WARNING]  (403687) : Exiting Master process...
Nov 25 09:08:45 compute-0 neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3[403683]: [ALERT]    (403687) : Current worker (403689) exited with code 143 (Terminated)
Nov 25 09:08:45 compute-0 neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3[403683]: [WARNING]  (403687) : All workers exited. Exiting... (0)
Nov 25 09:08:45 compute-0 systemd[1]: libpod-4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51.scope: Deactivated successfully.
Nov 25 09:08:45 compute-0 podman[405718]: 2025-11-25 09:08:45.1344248 +0000 UTC m=+0.047008889 container died 4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:08:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51-userdata-shm.mount: Deactivated successfully.
Nov 25 09:08:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-489261d86b924eb877c414a169c28246ea802beb251d049c1cd79a5b173b9a69-merged.mount: Deactivated successfully.
Nov 25 09:08:45 compute-0 podman[405718]: 2025-11-25 09:08:45.178144738 +0000 UTC m=+0.090728817 container cleanup 4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:08:45 compute-0 systemd[1]: libpod-conmon-4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51.scope: Deactivated successfully.
Nov 25 09:08:45 compute-0 nova_compute[253538]: 2025-11-25 09:08:45.212 253542 INFO nova.virt.libvirt.driver [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Deleting instance files /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846_del
Nov 25 09:08:45 compute-0 nova_compute[253538]: 2025-11-25 09:08:45.213 253542 INFO nova.virt.libvirt.driver [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Deletion of /var/lib/nova/instances/a5ec67ec-7042-47d0-925d-6ff3847d3846_del complete
Nov 25 09:08:45 compute-0 podman[405748]: 2025-11-25 09:08:45.246651801 +0000 UTC m=+0.044584113 container remove 4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 09:08:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.253 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4b212c38-19a3-4676-befa-3c41055eed14]: (4, ('Tue Nov 25 09:08:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3 (4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51)\n4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51\nTue Nov 25 09:08:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3 (4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51)\n4c4ab6fadfcdb863651d7b97fe31c5cb7b19c73ef6a849b2c903b75365943d51\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.255 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[289cc496-5170-4280-a1e5-20afad61ca7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.256 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21786b2a-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:08:45 compute-0 nova_compute[253538]: 2025-11-25 09:08:45.257 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:45 compute-0 kernel: tap21786b2a-50: left promiscuous mode
Nov 25 09:08:45 compute-0 nova_compute[253538]: 2025-11-25 09:08:45.267 253542 INFO nova.compute.manager [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Took 0.96 seconds to destroy the instance on the hypervisor.
Nov 25 09:08:45 compute-0 nova_compute[253538]: 2025-11-25 09:08:45.267 253542 DEBUG oslo.service.loopingcall [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:08:45 compute-0 nova_compute[253538]: 2025-11-25 09:08:45.268 253542 DEBUG nova.compute.manager [-] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:08:45 compute-0 nova_compute[253538]: 2025-11-25 09:08:45.268 253542 DEBUG nova.network.neutron [-] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:08:45 compute-0 nova_compute[253538]: 2025-11-25 09:08:45.273 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.276 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4e9e86c0-6c72-4bf2-a7e0-9ffdb3380636]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.292 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[c05936b9-1889-415d-923f-a33d091e65dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.293 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5e52abfe-9e31-435e-8e9d-7169b245ac83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.312 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[18bd4bf9-d269-4c9f-b165-f21a600d5f66]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701165, 'reachable_time': 26579, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405764, 'error': None, 'target': 'ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.314 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-21786b2a-59f9-4c4e-b462-8a28f7bd93a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:08:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:08:45.315 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[529dc2ef-91a3-425f-8663-ea5e67b8a29c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:08:45 compute-0 ceph-mon[75015]: pgmap v2650: 321 pgs: 321 active+clean; 196 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 25 KiB/s wr, 29 op/s
Nov 25 09:08:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d21786b2a\x2d59f9\x2d4c4e\x2db462\x2d8a28f7bd93a3.mount: Deactivated successfully.
Nov 25 09:08:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2651: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 23 KiB/s wr, 32 op/s
Nov 25 09:08:46 compute-0 nova_compute[253538]: 2025-11-25 09:08:46.761 253542 DEBUG nova.network.neutron [-] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:08:46 compute-0 nova_compute[253538]: 2025-11-25 09:08:46.777 253542 INFO nova.compute.manager [-] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Took 1.51 seconds to deallocate network for instance.
Nov 25 09:08:46 compute-0 nova_compute[253538]: 2025-11-25 09:08:46.833 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:46 compute-0 nova_compute[253538]: 2025-11-25 09:08:46.834 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:46 compute-0 nova_compute[253538]: 2025-11-25 09:08:46.892 253542 DEBUG oslo_concurrency.processutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:08:46 compute-0 nova_compute[253538]: 2025-11-25 09:08:46.931 253542 DEBUG nova.compute.manager [req-1105d66f-5fc5-48a5-bee6-c9700bfb427f req-9e21b266-4f8d-4ca6-add1-3e62b5d87d1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:46 compute-0 nova_compute[253538]: 2025-11-25 09:08:46.931 253542 DEBUG oslo_concurrency.lockutils [req-1105d66f-5fc5-48a5-bee6-c9700bfb427f req-9e21b266-4f8d-4ca6-add1-3e62b5d87d1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:46 compute-0 nova_compute[253538]: 2025-11-25 09:08:46.932 253542 DEBUG oslo_concurrency.lockutils [req-1105d66f-5fc5-48a5-bee6-c9700bfb427f req-9e21b266-4f8d-4ca6-add1-3e62b5d87d1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:46 compute-0 nova_compute[253538]: 2025-11-25 09:08:46.932 253542 DEBUG oslo_concurrency.lockutils [req-1105d66f-5fc5-48a5-bee6-c9700bfb427f req-9e21b266-4f8d-4ca6-add1-3e62b5d87d1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:46 compute-0 nova_compute[253538]: 2025-11-25 09:08:46.932 253542 DEBUG nova.compute.manager [req-1105d66f-5fc5-48a5-bee6-c9700bfb427f req-9e21b266-4f8d-4ca6-add1-3e62b5d87d1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] No waiting events found dispatching network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:08:46 compute-0 nova_compute[253538]: 2025-11-25 09:08:46.932 253542 WARNING nova.compute.manager [req-1105d66f-5fc5-48a5-bee6-c9700bfb427f req-9e21b266-4f8d-4ca6-add1-3e62b5d87d1b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received unexpected event network-vif-plugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 for instance with vm_state deleted and task_state None.
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.030 253542 DEBUG nova.compute.manager [req-7b45d0c1-b8a9-4119-9865-01fc6e8b2357 req-4788bcaf-7125-4b0f-a95f-72e5a7267338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.031 253542 DEBUG oslo_concurrency.lockutils [req-7b45d0c1-b8a9-4119-9865-01fc6e8b2357 req-4788bcaf-7125-4b0f-a95f-72e5a7267338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.031 253542 DEBUG oslo_concurrency.lockutils [req-7b45d0c1-b8a9-4119-9865-01fc6e8b2357 req-4788bcaf-7125-4b0f-a95f-72e5a7267338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.031 253542 DEBUG oslo_concurrency.lockutils [req-7b45d0c1-b8a9-4119-9865-01fc6e8b2357 req-4788bcaf-7125-4b0f-a95f-72e5a7267338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.031 253542 DEBUG nova.compute.manager [req-7b45d0c1-b8a9-4119-9865-01fc6e8b2357 req-4788bcaf-7125-4b0f-a95f-72e5a7267338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] No waiting events found dispatching network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.032 253542 WARNING nova.compute.manager [req-7b45d0c1-b8a9-4119-9865-01fc6e8b2357 req-4788bcaf-7125-4b0f-a95f-72e5a7267338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received unexpected event network-vif-plugged-2eda6ce1-df50-4620-a5e9-d08e62f7350e for instance with vm_state deleted and task_state None.
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.032 253542 DEBUG nova.compute.manager [req-7b45d0c1-b8a9-4119-9865-01fc6e8b2357 req-4788bcaf-7125-4b0f-a95f-72e5a7267338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-deleted-2eda6ce1-df50-4620-a5e9-d08e62f7350e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.032 253542 DEBUG nova.compute.manager [req-7b45d0c1-b8a9-4119-9865-01fc6e8b2357 req-4788bcaf-7125-4b0f-a95f-72e5a7267338 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-deleted-05d0fd93-ce0f-4842-962f-c9491d3850c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:08:47 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3717188419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.340 253542 DEBUG oslo_concurrency.processutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.350 253542 DEBUG nova.compute.provider_tree [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.365 253542 DEBUG nova.scheduler.client.report [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.389 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.430 253542 INFO nova.scheduler.client.report [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance a5ec67ec-7042-47d0-925d-6ff3847d3846
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.435 253542 DEBUG nova.network.neutron [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updated VIF entry in instance network info cache for port 05d0fd93-ce0f-4842-962f-c9491d3850c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.436 253542 DEBUG nova.network.neutron [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Updating instance_info_cache with network_info: [{"id": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "address": "fa:16:3e:e4:ec:f8", "network": {"id": "2a6609b2-beb0-48a5-8dc0-1a4c153da77e", "bridge": "br-int", "label": "tempest-network-smoke--734444177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05d0fd93-ce", "ovs_interfaceid": "05d0fd93-ce0f-4842-962f-c9491d3850c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "address": "fa:16:3e:fc:ed:73", "network": {"id": "21786b2a-59f9-4c4e-b462-8a28f7bd93a3", "bridge": "br-int", "label": "tempest-network-smoke--1716336469", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefc:ed73", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eda6ce1-df", "ovs_interfaceid": "2eda6ce1-df50-4620-a5e9-d08e62f7350e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.459 253542 DEBUG oslo_concurrency.lockutils [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-a5ec67ec-7042-47d0-925d-6ff3847d3846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.460 253542 DEBUG nova.compute.manager [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-unplugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.460 253542 DEBUG oslo_concurrency.lockutils [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.460 253542 DEBUG oslo_concurrency.lockutils [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.461 253542 DEBUG oslo_concurrency.lockutils [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.461 253542 DEBUG nova.compute.manager [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] No waiting events found dispatching network-vif-unplugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.461 253542 DEBUG nova.compute.manager [req-84a9d82c-d20b-40f0-9620-63ccf07470fb req-11a43452-1aef-4bdc-90c3-75b82ef6971b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Received event network-vif-unplugged-05d0fd93-ce0f-4842-962f-c9491d3850c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:08:47 compute-0 nova_compute[253538]: 2025-11-25 09:08:47.487 253542 DEBUG oslo_concurrency.lockutils [None req-a622154e-08b5-4423-b43a-15ce0902fe7c c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "a5ec67ec-7042-47d0-925d-6ff3847d3846" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:08:47 compute-0 ceph-mon[75015]: pgmap v2651: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 23 KiB/s wr, 32 op/s
Nov 25 09:08:47 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3717188419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:08:48 compute-0 nova_compute[253538]: 2025-11-25 09:08:48.137 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:08:48 compute-0 nova_compute[253538]: 2025-11-25 09:08:48.137 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:08:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2652: 321 pgs: 321 active+clean; 113 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 24 KiB/s wr, 45 op/s
Nov 25 09:08:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:08:49 compute-0 nova_compute[253538]: 2025-11-25 09:08:49.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:49 compute-0 ceph-mon[75015]: pgmap v2652: 321 pgs: 321 active+clean; 113 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 24 KiB/s wr, 45 op/s
Nov 25 09:08:49 compute-0 nova_compute[253538]: 2025-11-25 09:08:49.612 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2653: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 12 KiB/s wr, 57 op/s
Nov 25 09:08:51 compute-0 nova_compute[253538]: 2025-11-25 09:08:51.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:08:51 compute-0 nova_compute[253538]: 2025-11-25 09:08:51.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:08:51 compute-0 ceph-mon[75015]: pgmap v2653: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 12 KiB/s wr, 57 op/s
Nov 25 09:08:51 compute-0 nova_compute[253538]: 2025-11-25 09:08:51.600 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:08:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2654: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 12 KiB/s wr, 56 op/s
Nov 25 09:08:52 compute-0 ceph-mon[75015]: pgmap v2654: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 12 KiB/s wr, 56 op/s
Nov 25 09:08:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:08:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:08:53
Nov 25 09:08:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:08:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:08:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.log', 'backups', 'default.rgw.meta', 'images', 'default.rgw.control', '.mgr', 'volumes', 'cephfs.cephfs.data']
Nov 25 09:08:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:08:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:08:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:08:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:08:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:08:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:08:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:08:53 compute-0 nova_compute[253538]: 2025-11-25 09:08:53.593 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:08:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:08:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:08:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:08:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:08:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:08:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:08:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:08:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:08:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:08:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:08:54 compute-0 nova_compute[253538]: 2025-11-25 09:08:54.201 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2655: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 6.6 KiB/s wr, 38 op/s
Nov 25 09:08:54 compute-0 nova_compute[253538]: 2025-11-25 09:08:54.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:08:54 compute-0 nova_compute[253538]: 2025-11-25 09:08:54.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:08:54 compute-0 nova_compute[253538]: 2025-11-25 09:08:54.614 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:55 compute-0 ceph-mon[75015]: pgmap v2655: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 6.6 KiB/s wr, 38 op/s
Nov 25 09:08:55 compute-0 nova_compute[253538]: 2025-11-25 09:08:55.450 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061720.4491594, 24611274-7a7c-4258-8631-032a6c1d8410 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:08:55 compute-0 nova_compute[253538]: 2025-11-25 09:08:55.450 253542 INFO nova.compute.manager [-] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] VM Stopped (Lifecycle Event)
Nov 25 09:08:55 compute-0 nova_compute[253538]: 2025-11-25 09:08:55.469 253542 DEBUG nova.compute.manager [None req-bdb2c934-49ff-48f6-bdaa-4335a5f8666f - - - - - -] [instance: 24611274-7a7c-4258-8631-032a6c1d8410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:08:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2656: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 33 op/s
Nov 25 09:08:56 compute-0 nova_compute[253538]: 2025-11-25 09:08:56.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:08:57 compute-0 ceph-mon[75015]: pgmap v2656: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 33 op/s
Nov 25 09:08:57 compute-0 nova_compute[253538]: 2025-11-25 09:08:57.885 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:57 compute-0 nova_compute[253538]: 2025-11-25 09:08:57.982 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2657: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 09:08:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:08:58 compute-0 nova_compute[253538]: 2025-11-25 09:08:58.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:08:59 compute-0 nova_compute[253538]: 2025-11-25 09:08:59.201 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:08:59 compute-0 ceph-mon[75015]: pgmap v2657: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 09:08:59 compute-0 nova_compute[253538]: 2025-11-25 09:08:59.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:08:59 compute-0 nova_compute[253538]: 2025-11-25 09:08:59.576 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061724.5751293, a5ec67ec-7042-47d0-925d-6ff3847d3846 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:08:59 compute-0 nova_compute[253538]: 2025-11-25 09:08:59.576 253542 INFO nova.compute.manager [-] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] VM Stopped (Lifecycle Event)
Nov 25 09:08:59 compute-0 nova_compute[253538]: 2025-11-25 09:08:59.645 253542 DEBUG nova.compute.manager [None req-6836202c-8fc5-4033-9a5b-785cc7a0d1ad - - - - - -] [instance: a5ec67ec-7042-47d0-925d-6ff3847d3846] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:08:59 compute-0 nova_compute[253538]: 2025-11-25 09:08:59.645 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2658: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Nov 25 09:09:01 compute-0 ceph-mon[75015]: pgmap v2658: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Nov 25 09:09:01 compute-0 podman[405789]: 2025-11-25 09:09:01.823051805 +0000 UTC m=+0.063223100 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 09:09:01 compute-0 podman[405788]: 2025-11-25 09:09:01.829297664 +0000 UTC m=+0.075522784 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 25 09:09:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2659: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:02 compute-0 nova_compute[253538]: 2025-11-25 09:09:02.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:09:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:09:03 compute-0 ceph-mon[75015]: pgmap v2659: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:04 compute-0 nova_compute[253538]: 2025-11-25 09:09:04.202 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2660: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:09:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:09:04 compute-0 nova_compute[253538]: 2025-11-25 09:09:04.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:09:04 compute-0 nova_compute[253538]: 2025-11-25 09:09:04.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:09:04 compute-0 nova_compute[253538]: 2025-11-25 09:09:04.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:09:04 compute-0 nova_compute[253538]: 2025-11-25 09:09:04.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:09:04 compute-0 nova_compute[253538]: 2025-11-25 09:09:04.582 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:09:04 compute-0 nova_compute[253538]: 2025-11-25 09:09:04.583 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:09:04 compute-0 nova_compute[253538]: 2025-11-25 09:09:04.647 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:09:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/9535386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:09:05 compute-0 nova_compute[253538]: 2025-11-25 09:09:05.032 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:09:05 compute-0 nova_compute[253538]: 2025-11-25 09:09:05.209 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:09:05 compute-0 nova_compute[253538]: 2025-11-25 09:09:05.211 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3674MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:09:05 compute-0 nova_compute[253538]: 2025-11-25 09:09:05.211 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:09:05 compute-0 nova_compute[253538]: 2025-11-25 09:09:05.211 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:09:05 compute-0 nova_compute[253538]: 2025-11-25 09:09:05.282 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:09:05 compute-0 nova_compute[253538]: 2025-11-25 09:09:05.282 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:09:05 compute-0 nova_compute[253538]: 2025-11-25 09:09:05.306 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:09:05 compute-0 ceph-mon[75015]: pgmap v2660: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:05 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/9535386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:09:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:09:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2902937317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:09:05 compute-0 nova_compute[253538]: 2025-11-25 09:09:05.719 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:09:05 compute-0 nova_compute[253538]: 2025-11-25 09:09:05.727 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:09:05 compute-0 nova_compute[253538]: 2025-11-25 09:09:05.746 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:09:05 compute-0 nova_compute[253538]: 2025-11-25 09:09:05.966 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:09:05 compute-0 nova_compute[253538]: 2025-11-25 09:09:05.966 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:09:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2661: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:06 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2902937317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:09:07 compute-0 ceph-mon[75015]: pgmap v2661: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2662: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:09:08 compute-0 podman[405873]: 2025-11-25 09:09:08.853710557 +0000 UTC m=+0.099332042 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:09:09 compute-0 nova_compute[253538]: 2025-11-25 09:09:09.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:09 compute-0 ceph-mon[75015]: pgmap v2662: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:09 compute-0 nova_compute[253538]: 2025-11-25 09:09:09.650 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2663: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:11 compute-0 ceph-mon[75015]: pgmap v2663: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2664: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:09:13 compute-0 ceph-mon[75015]: pgmap v2664: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:14 compute-0 nova_compute[253538]: 2025-11-25 09:09:14.206 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2665: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:14 compute-0 nova_compute[253538]: 2025-11-25 09:09:14.653 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:15 compute-0 ceph-mon[75015]: pgmap v2665: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2666: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:17 compute-0 ceph-mon[75015]: pgmap v2666: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2667: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:09:19 compute-0 nova_compute[253538]: 2025-11-25 09:09:19.208 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:19 compute-0 ceph-mon[75015]: pgmap v2667: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:19 compute-0 nova_compute[253538]: 2025-11-25 09:09:19.655 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2668: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:21 compute-0 ceph-mon[75015]: pgmap v2668: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:22 compute-0 sudo[405901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:09:22 compute-0 sudo[405901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:22 compute-0 sudo[405901]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:22 compute-0 sudo[405926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:09:22 compute-0 sudo[405926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:22 compute-0 sudo[405926]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:22 compute-0 sudo[405951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:09:22 compute-0 sudo[405951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:22 compute-0 sudo[405951]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:22 compute-0 sudo[405976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:09:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2669: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:22 compute-0 sudo[405976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:22 compute-0 ceph-mon[75015]: pgmap v2669: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:22 compute-0 sudo[405976]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:09:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:09:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:09:22 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:09:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:09:22 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:09:22 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 0a869e24-bd15-4ad4-b68c-c1fc3e539624 does not exist
Nov 25 09:09:22 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev b77957e3-2d2f-437d-9db9-7b1b910fada7 does not exist
Nov 25 09:09:22 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 995942e8-7cbb-44ec-82d3-d718509fe275 does not exist
Nov 25 09:09:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:09:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:09:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:09:22 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:09:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:09:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:09:22 compute-0 sudo[406031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:09:22 compute-0 sudo[406031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:22 compute-0 sudo[406031]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:22 compute-0 sudo[406056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:09:22 compute-0 sudo[406056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:22 compute-0 sudo[406056]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:22 compute-0 sudo[406081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:09:22 compute-0 sudo[406081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:22 compute-0 sudo[406081]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:23 compute-0 sudo[406106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:09:23 compute-0 sudo[406106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:09:23 compute-0 podman[406170]: 2025-11-25 09:09:23.385007136 +0000 UTC m=+0.050475053 container create b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 09:09:23 compute-0 systemd[1]: Started libpod-conmon-b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d.scope.
Nov 25 09:09:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:09:23 compute-0 podman[406170]: 2025-11-25 09:09:23.35759646 +0000 UTC m=+0.023064407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:09:23 compute-0 podman[406170]: 2025-11-25 09:09:23.467971231 +0000 UTC m=+0.133439148 container init b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jepsen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 09:09:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:09:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:09:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:09:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:09:23 compute-0 podman[406170]: 2025-11-25 09:09:23.475644159 +0000 UTC m=+0.141112076 container start b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 09:09:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:09:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:09:23 compute-0 podman[406170]: 2025-11-25 09:09:23.481005705 +0000 UTC m=+0.146477962 container attach b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jepsen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:09:23 compute-0 relaxed_jepsen[406186]: 167 167
Nov 25 09:09:23 compute-0 systemd[1]: libpod-b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d.scope: Deactivated successfully.
Nov 25 09:09:23 compute-0 podman[406170]: 2025-11-25 09:09:23.482520757 +0000 UTC m=+0.147988704 container died b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jepsen, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:09:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a1a3f6977a5659bfd2098e1fa6a495e7ae1a854ef22b08f7908a6882aef0794-merged.mount: Deactivated successfully.
Nov 25 09:09:23 compute-0 podman[406170]: 2025-11-25 09:09:23.527089258 +0000 UTC m=+0.192557175 container remove b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jepsen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:09:23 compute-0 systemd[1]: libpod-conmon-b0423fa64cef0d1e28222733dc95d95409c40c2b9273275eb054a6e37905ec6d.scope: Deactivated successfully.
Nov 25 09:09:23 compute-0 podman[406213]: 2025-11-25 09:09:23.678203926 +0000 UTC m=+0.041768866 container create 9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:09:23 compute-0 systemd[1]: Started libpod-conmon-9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0.scope.
Nov 25 09:09:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:09:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:09:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:09:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:09:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:09:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:09:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc784e53912096613f60d00844f606a6f0d5bc3379ad47fea2baf08c5574cb44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:09:23 compute-0 podman[406213]: 2025-11-25 09:09:23.659519898 +0000 UTC m=+0.023084858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc784e53912096613f60d00844f606a6f0d5bc3379ad47fea2baf08c5574cb44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc784e53912096613f60d00844f606a6f0d5bc3379ad47fea2baf08c5574cb44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc784e53912096613f60d00844f606a6f0d5bc3379ad47fea2baf08c5574cb44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc784e53912096613f60d00844f606a6f0d5bc3379ad47fea2baf08c5574cb44/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:09:23 compute-0 podman[406213]: 2025-11-25 09:09:23.771493222 +0000 UTC m=+0.135058262 container init 9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:09:23 compute-0 podman[406213]: 2025-11-25 09:09:23.781176675 +0000 UTC m=+0.144741655 container start 9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:09:23 compute-0 podman[406213]: 2025-11-25 09:09:23.785717429 +0000 UTC m=+0.149282379 container attach 9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:09:24 compute-0 nova_compute[253538]: 2025-11-25 09:09:24.211 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2670: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:24 compute-0 nova_compute[253538]: 2025-11-25 09:09:24.657 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:24 compute-0 ceph-mon[75015]: pgmap v2670: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:24 compute-0 romantic_elgamal[406230]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:09:24 compute-0 romantic_elgamal[406230]: --> relative data size: 1.0
Nov 25 09:09:24 compute-0 romantic_elgamal[406230]: --> All data devices are unavailable
Nov 25 09:09:24 compute-0 systemd[1]: libpod-9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0.scope: Deactivated successfully.
Nov 25 09:09:24 compute-0 podman[406213]: 2025-11-25 09:09:24.858019498 +0000 UTC m=+1.221584448 container died 9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 09:09:24 compute-0 systemd[1]: libpod-9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0.scope: Consumed 1.013s CPU time.
Nov 25 09:09:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc784e53912096613f60d00844f606a6f0d5bc3379ad47fea2baf08c5574cb44-merged.mount: Deactivated successfully.
Nov 25 09:09:24 compute-0 podman[406213]: 2025-11-25 09:09:24.971214235 +0000 UTC m=+1.334779175 container remove 9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_elgamal, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:09:24 compute-0 systemd[1]: libpod-conmon-9cd936b2ec58c5842f65901d26a2692b64428f8683f622ff2c7357d78d338ef0.scope: Deactivated successfully.
Nov 25 09:09:25 compute-0 sudo[406106]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:25 compute-0 sudo[406272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:09:25 compute-0 sudo[406272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:25 compute-0 sudo[406272]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:25 compute-0 sudo[406297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:09:25 compute-0 sudo[406297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:25 compute-0 sudo[406297]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:25 compute-0 sudo[406322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:09:25 compute-0 sudo[406322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:25 compute-0 sudo[406322]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:25 compute-0 sudo[406347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:09:25 compute-0 sudo[406347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:25 compute-0 podman[406412]: 2025-11-25 09:09:25.603841433 +0000 UTC m=+0.100344950 container create 5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 09:09:25 compute-0 podman[406412]: 2025-11-25 09:09:25.528616057 +0000 UTC m=+0.025119604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:09:25 compute-0 systemd[1]: Started libpod-conmon-5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292.scope.
Nov 25 09:09:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:09:25 compute-0 podman[406412]: 2025-11-25 09:09:25.736881199 +0000 UTC m=+0.233384736 container init 5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 09:09:25 compute-0 podman[406412]: 2025-11-25 09:09:25.747910808 +0000 UTC m=+0.244414325 container start 5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 09:09:25 compute-0 podman[406412]: 2025-11-25 09:09:25.75054232 +0000 UTC m=+0.247045847 container attach 5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 09:09:25 compute-0 flamboyant_bell[406428]: 167 167
Nov 25 09:09:25 compute-0 systemd[1]: libpod-5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292.scope: Deactivated successfully.
Nov 25 09:09:25 compute-0 podman[406412]: 2025-11-25 09:09:25.753608233 +0000 UTC m=+0.250111760 container died 5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:09:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e9ef2d5ac065b2bc5192ea50d99ebcc4f8f876dd3367fa9000921fe3149229a-merged.mount: Deactivated successfully.
Nov 25 09:09:25 compute-0 podman[406412]: 2025-11-25 09:09:25.832796597 +0000 UTC m=+0.329300114 container remove 5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 09:09:25 compute-0 systemd[1]: libpod-conmon-5fe72f4a9cbe5d08899e2c81f0e1b60a49399284a41facff2ae0cb7ff6209292.scope: Deactivated successfully.
Nov 25 09:09:25 compute-0 nova_compute[253538]: 2025-11-25 09:09:25.899 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:09:25 compute-0 nova_compute[253538]: 2025-11-25 09:09:25.901 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:09:25 compute-0 nova_compute[253538]: 2025-11-25 09:09:25.915 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.000 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.001 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.010 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.010 253542 INFO nova.compute.claims [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:09:26 compute-0 podman[406453]: 2025-11-25 09:09:26.051537653 +0000 UTC m=+0.080489860 container create 0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:09:26 compute-0 podman[406453]: 2025-11-25 09:09:26.001061581 +0000 UTC m=+0.030013788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:09:26 compute-0 systemd[1]: Started libpod-conmon-0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815.scope.
Nov 25 09:09:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dd24bf07a3cebc5f515829bd367951621b3bdb00b0d4ae2c0671d1a8748f6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.139 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dd24bf07a3cebc5f515829bd367951621b3bdb00b0d4ae2c0671d1a8748f6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dd24bf07a3cebc5f515829bd367951621b3bdb00b0d4ae2c0671d1a8748f6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dd24bf07a3cebc5f515829bd367951621b3bdb00b0d4ae2c0671d1a8748f6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:09:26 compute-0 podman[406453]: 2025-11-25 09:09:26.149830185 +0000 UTC m=+0.178782382 container init 0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 09:09:26 compute-0 podman[406453]: 2025-11-25 09:09:26.159493647 +0000 UTC m=+0.188445824 container start 0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:09:26 compute-0 podman[406453]: 2025-11-25 09:09:26.165349616 +0000 UTC m=+0.194301793 container attach 0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 09:09:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2671: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:09:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3534811428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.646 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.656 253542 DEBUG nova.compute.provider_tree [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.671 253542 DEBUG nova.scheduler.client.report [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.693 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.694 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.733 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.734 253542 DEBUG nova.network.neutron [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.751 253542 INFO nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.766 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.838 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.840 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.841 253542 INFO nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Creating image(s)
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.864 253542 DEBUG nova.storage.rbd_utils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 23232702-7686-425d-8921-7aa6192ca1c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.886 253542 DEBUG nova.storage.rbd_utils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 23232702-7686-425d-8921-7aa6192ca1c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.905 253542 DEBUG nova.storage.rbd_utils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 23232702-7686-425d-8921-7aa6192ca1c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:09:26 compute-0 practical_meitner[406469]: {
Nov 25 09:09:26 compute-0 practical_meitner[406469]:     "0": [
Nov 25 09:09:26 compute-0 practical_meitner[406469]:         {
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "devices": [
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "/dev/loop3"
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             ],
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "lv_name": "ceph_lv0",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "lv_size": "21470642176",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "name": "ceph_lv0",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "tags": {
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.cluster_name": "ceph",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.crush_device_class": "",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.encrypted": "0",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.osd_id": "0",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.type": "block",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.vdo": "0"
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             },
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "type": "block",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "vg_name": "ceph_vg0"
Nov 25 09:09:26 compute-0 practical_meitner[406469]:         }
Nov 25 09:09:26 compute-0 practical_meitner[406469]:     ],
Nov 25 09:09:26 compute-0 practical_meitner[406469]:     "1": [
Nov 25 09:09:26 compute-0 practical_meitner[406469]:         {
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "devices": [
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "/dev/loop4"
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             ],
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "lv_name": "ceph_lv1",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "lv_size": "21470642176",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "name": "ceph_lv1",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "tags": {
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.cluster_name": "ceph",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.crush_device_class": "",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.encrypted": "0",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.osd_id": "1",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.type": "block",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.vdo": "0"
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             },
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "type": "block",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "vg_name": "ceph_vg1"
Nov 25 09:09:26 compute-0 practical_meitner[406469]:         }
Nov 25 09:09:26 compute-0 practical_meitner[406469]:     ],
Nov 25 09:09:26 compute-0 practical_meitner[406469]:     "2": [
Nov 25 09:09:26 compute-0 practical_meitner[406469]:         {
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "devices": [
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "/dev/loop5"
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             ],
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "lv_name": "ceph_lv2",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "lv_size": "21470642176",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "name": "ceph_lv2",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "tags": {
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.cluster_name": "ceph",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.crush_device_class": "",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.encrypted": "0",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.osd_id": "2",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.type": "block",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:                 "ceph.vdo": "0"
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             },
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "type": "block",
Nov 25 09:09:26 compute-0 practical_meitner[406469]:             "vg_name": "ceph_vg2"
Nov 25 09:09:26 compute-0 practical_meitner[406469]:         }
Nov 25 09:09:26 compute-0 practical_meitner[406469]:     ]
Nov 25 09:09:26 compute-0 practical_meitner[406469]: }
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.912 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:09:26 compute-0 systemd[1]: libpod-0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815.scope: Deactivated successfully.
Nov 25 09:09:26 compute-0 podman[406453]: 2025-11-25 09:09:26.938457382 +0000 UTC m=+0.967409559 container died 0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 09:09:26 compute-0 nova_compute[253538]: 2025-11-25 09:09:26.958 253542 DEBUG nova.policy [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-34dd24bf07a3cebc5f515829bd367951621b3bdb00b0d4ae2c0671d1a8748f6e-merged.mount: Deactivated successfully.
Nov 25 09:09:27 compute-0 nova_compute[253538]: 2025-11-25 09:09:27.019 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:09:27 compute-0 nova_compute[253538]: 2025-11-25 09:09:27.021 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:09:27 compute-0 nova_compute[253538]: 2025-11-25 09:09:27.022 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:09:27 compute-0 nova_compute[253538]: 2025-11-25 09:09:27.022 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:09:27 compute-0 nova_compute[253538]: 2025-11-25 09:09:27.044 253542 DEBUG nova.storage.rbd_utils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 23232702-7686-425d-8921-7aa6192ca1c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:09:27 compute-0 nova_compute[253538]: 2025-11-25 09:09:27.048 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 23232702-7686-425d-8921-7aa6192ca1c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:09:27 compute-0 podman[406453]: 2025-11-25 09:09:27.082105388 +0000 UTC m=+1.111057555 container remove 0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:09:27 compute-0 systemd[1]: libpod-conmon-0e12bc02a0800ce572bfcd6463e02e5377617f10b1bf3c56b70eb8d19920a815.scope: Deactivated successfully.
Nov 25 09:09:27 compute-0 sudo[406347]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:27 compute-0 sudo[406605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:09:27 compute-0 sudo[406605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:27 compute-0 sudo[406605]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:27 compute-0 sudo[406630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:09:27 compute-0 sudo[406630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:27 compute-0 sudo[406630]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:27 compute-0 sudo[406655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:09:27 compute-0 sudo[406655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:27 compute-0 sudo[406655]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:27 compute-0 sudo[406683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:09:27 compute-0 sudo[406683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:27 compute-0 ceph-mon[75015]: pgmap v2671: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:09:27 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3534811428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:09:27 compute-0 podman[406749]: 2025-11-25 09:09:27.7605338 +0000 UTC m=+0.091541890 container create 218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:09:27 compute-0 nova_compute[253538]: 2025-11-25 09:09:27.763 253542 DEBUG nova.network.neutron [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Successfully created port: 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:09:27 compute-0 podman[406749]: 2025-11-25 09:09:27.696425777 +0000 UTC m=+0.027433877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:09:27 compute-0 nova_compute[253538]: 2025-11-25 09:09:27.823 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 23232702-7686-425d-8921-7aa6192ca1c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.774s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:09:27 compute-0 systemd[1]: Started libpod-conmon-218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee.scope.
Nov 25 09:09:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:09:27 compute-0 podman[406749]: 2025-11-25 09:09:27.89441816 +0000 UTC m=+0.225426250 container init 218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:09:27 compute-0 nova_compute[253538]: 2025-11-25 09:09:27.895 253542 DEBUG nova.storage.rbd_utils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 23232702-7686-425d-8921-7aa6192ca1c8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:09:27 compute-0 podman[406749]: 2025-11-25 09:09:27.901732288 +0000 UTC m=+0.232740358 container start 218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:09:27 compute-0 quizzical_ardinghelli[406767]: 167 167
Nov 25 09:09:27 compute-0 systemd[1]: libpod-218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee.scope: Deactivated successfully.
Nov 25 09:09:27 compute-0 podman[406749]: 2025-11-25 09:09:27.908425551 +0000 UTC m=+0.239433621 container attach 218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 09:09:27 compute-0 conmon[406767]: conmon 218fccc0e195e87922e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee.scope/container/memory.events
Nov 25 09:09:27 compute-0 podman[406823]: 2025-11-25 09:09:27.948677605 +0000 UTC m=+0.022853233 container died 218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-097fda53ec4e86b13dd9b1b1d56c921a2fd414a02e1c9e144d3e72e4ddb70390-merged.mount: Deactivated successfully.
Nov 25 09:09:28 compute-0 podman[406823]: 2025-11-25 09:09:28.010445174 +0000 UTC m=+0.084620772 container remove 218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 09:09:28 compute-0 systemd[1]: libpod-conmon-218fccc0e195e87922e46e9c56c050ee5205433976cf707e4409956d13e0e8ee.scope: Deactivated successfully.
Nov 25 09:09:28 compute-0 nova_compute[253538]: 2025-11-25 09:09:28.043 253542 DEBUG nova.objects.instance [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 23232702-7686-425d-8921-7aa6192ca1c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:09:28 compute-0 nova_compute[253538]: 2025-11-25 09:09:28.054 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:09:28 compute-0 nova_compute[253538]: 2025-11-25 09:09:28.054 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Ensure instance console log exists: /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:09:28 compute-0 nova_compute[253538]: 2025-11-25 09:09:28.055 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:09:28 compute-0 nova_compute[253538]: 2025-11-25 09:09:28.055 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:09:28 compute-0 nova_compute[253538]: 2025-11-25 09:09:28.055 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:09:28 compute-0 podman[406866]: 2025-11-25 09:09:28.189743468 +0000 UTC m=+0.054573064 container create d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:09:28 compute-0 nova_compute[253538]: 2025-11-25 09:09:28.216 253542 DEBUG nova.network.neutron [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Successfully created port: 24beb614-6f72-4107-adca-af1258052ab5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:09:28 compute-0 systemd[1]: Started libpod-conmon-d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f.scope.
Nov 25 09:09:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2672: 321 pgs: 321 active+clean; 95 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 267 KiB/s wr, 1 op/s
Nov 25 09:09:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:09:28 compute-0 podman[406866]: 2025-11-25 09:09:28.163939316 +0000 UTC m=+0.028768972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:09:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0a2fe2cd55fb33b52ff6c29ca2652d1d37f8e12233ea7e62c7c3baae06e69a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0a2fe2cd55fb33b52ff6c29ca2652d1d37f8e12233ea7e62c7c3baae06e69a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0a2fe2cd55fb33b52ff6c29ca2652d1d37f8e12233ea7e62c7c3baae06e69a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0a2fe2cd55fb33b52ff6c29ca2652d1d37f8e12233ea7e62c7c3baae06e69a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:09:28 compute-0 podman[406866]: 2025-11-25 09:09:28.279494777 +0000 UTC m=+0.144324343 container init d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 09:09:28 compute-0 podman[406866]: 2025-11-25 09:09:28.296771097 +0000 UTC m=+0.161600663 container start d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 09:09:28 compute-0 podman[406866]: 2025-11-25 09:09:28.303634324 +0000 UTC m=+0.168463890 container attach d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:09:28 compute-0 nova_compute[253538]: 2025-11-25 09:09:28.925 253542 DEBUG nova.network.neutron [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Successfully updated port: 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:09:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:09:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/86718841' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:09:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:09:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/86718841' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:09:29 compute-0 nova_compute[253538]: 2025-11-25 09:09:29.061 253542 DEBUG nova.compute.manager [req-8b47df65-ac74-4bc9-8aad-9ca6de80858d req-b9b9ce20-9c81-4a5a-be4c-13e6df046318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-changed-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:09:29 compute-0 nova_compute[253538]: 2025-11-25 09:09:29.062 253542 DEBUG nova.compute.manager [req-8b47df65-ac74-4bc9-8aad-9ca6de80858d req-b9b9ce20-9c81-4a5a-be4c-13e6df046318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Refreshing instance network info cache due to event network-changed-2cf452f4-d6c3-4977-9e5b-874c9d9707e6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:09:29 compute-0 nova_compute[253538]: 2025-11-25 09:09:29.062 253542 DEBUG oslo_concurrency.lockutils [req-8b47df65-ac74-4bc9-8aad-9ca6de80858d req-b9b9ce20-9c81-4a5a-be4c-13e6df046318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:09:29 compute-0 nova_compute[253538]: 2025-11-25 09:09:29.063 253542 DEBUG oslo_concurrency.lockutils [req-8b47df65-ac74-4bc9-8aad-9ca6de80858d req-b9b9ce20-9c81-4a5a-be4c-13e6df046318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:09:29 compute-0 nova_compute[253538]: 2025-11-25 09:09:29.063 253542 DEBUG nova.network.neutron [req-8b47df65-ac74-4bc9-8aad-9ca6de80858d req-b9b9ce20-9c81-4a5a-be4c-13e6df046318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Refreshing network info cache for port 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:09:29 compute-0 nova_compute[253538]: 2025-11-25 09:09:29.212 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:29 compute-0 nova_compute[253538]: 2025-11-25 09:09:29.230 253542 DEBUG nova.network.neutron [req-8b47df65-ac74-4bc9-8aad-9ca6de80858d req-b9b9ce20-9c81-4a5a-be4c-13e6df046318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:09:29 compute-0 charming_moore[406883]: {
Nov 25 09:09:29 compute-0 charming_moore[406883]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:09:29 compute-0 charming_moore[406883]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:09:29 compute-0 charming_moore[406883]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:09:29 compute-0 charming_moore[406883]:         "osd_id": 1,
Nov 25 09:09:29 compute-0 charming_moore[406883]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:09:29 compute-0 charming_moore[406883]:         "type": "bluestore"
Nov 25 09:09:29 compute-0 charming_moore[406883]:     },
Nov 25 09:09:29 compute-0 charming_moore[406883]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:09:29 compute-0 charming_moore[406883]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:09:29 compute-0 charming_moore[406883]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:09:29 compute-0 charming_moore[406883]:         "osd_id": 2,
Nov 25 09:09:29 compute-0 charming_moore[406883]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:09:29 compute-0 charming_moore[406883]:         "type": "bluestore"
Nov 25 09:09:29 compute-0 charming_moore[406883]:     },
Nov 25 09:09:29 compute-0 charming_moore[406883]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:09:29 compute-0 charming_moore[406883]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:09:29 compute-0 charming_moore[406883]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:09:29 compute-0 charming_moore[406883]:         "osd_id": 0,
Nov 25 09:09:29 compute-0 charming_moore[406883]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:09:29 compute-0 charming_moore[406883]:         "type": "bluestore"
Nov 25 09:09:29 compute-0 charming_moore[406883]:     }
Nov 25 09:09:29 compute-0 charming_moore[406883]: }
Nov 25 09:09:29 compute-0 systemd[1]: libpod-d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f.scope: Deactivated successfully.
Nov 25 09:09:29 compute-0 systemd[1]: libpod-d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f.scope: Consumed 1.013s CPU time.
Nov 25 09:09:29 compute-0 podman[406866]: 2025-11-25 09:09:29.315342996 +0000 UTC m=+1.180172562 container died d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 09:09:29 compute-0 ceph-mon[75015]: pgmap v2672: 321 pgs: 321 active+clean; 95 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 267 KiB/s wr, 1 op/s
Nov 25 09:09:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/86718841' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:09:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/86718841' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:09:29 compute-0 nova_compute[253538]: 2025-11-25 09:09:29.527 253542 DEBUG nova.network.neutron [req-8b47df65-ac74-4bc9-8aad-9ca6de80858d req-b9b9ce20-9c81-4a5a-be4c-13e6df046318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:09:29 compute-0 nova_compute[253538]: 2025-11-25 09:09:29.547 253542 DEBUG oslo_concurrency.lockutils [req-8b47df65-ac74-4bc9-8aad-9ca6de80858d req-b9b9ce20-9c81-4a5a-be4c-13e6df046318 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:09:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e0a2fe2cd55fb33b52ff6c29ca2652d1d37f8e12233ea7e62c7c3baae06e69a-merged.mount: Deactivated successfully.
Nov 25 09:09:29 compute-0 nova_compute[253538]: 2025-11-25 09:09:29.659 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:29 compute-0 nova_compute[253538]: 2025-11-25 09:09:29.724 253542 DEBUG nova.network.neutron [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Successfully updated port: 24beb614-6f72-4107-adca-af1258052ab5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:09:29 compute-0 nova_compute[253538]: 2025-11-25 09:09:29.739 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:09:29 compute-0 nova_compute[253538]: 2025-11-25 09:09:29.740 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:09:29 compute-0 nova_compute[253538]: 2025-11-25 09:09:29.740 253542 DEBUG nova.network.neutron [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:09:29 compute-0 nova_compute[253538]: 2025-11-25 09:09:29.897 253542 DEBUG nova.network.neutron [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:09:30 compute-0 podman[406866]: 2025-11-25 09:09:30.053609164 +0000 UTC m=+1.918438730 container remove d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 09:09:30 compute-0 systemd[1]: libpod-conmon-d8ff41e62a7aca3e69c51183ec38d7febb2e4ea7e14044a2458dcc5344927a9f.scope: Deactivated successfully.
Nov 25 09:09:30 compute-0 sudo[406683]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:09:30 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:09:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:09:30 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:09:30 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e2b5150c-412b-406f-9bda-a2a0df841436 does not exist
Nov 25 09:09:30 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 83e10143-9f2b-4ec7-bd05-a5d85e1adeeb does not exist
Nov 25 09:09:30 compute-0 sudo[406931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:09:30 compute-0 sudo[406931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:30 compute-0 sudo[406931]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2673: 321 pgs: 321 active+clean; 108 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 681 KiB/s wr, 14 op/s
Nov 25 09:09:30 compute-0 sudo[406956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:09:30 compute-0 sudo[406956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:09:30 compute-0 sudo[406956]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:09:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:09:31 compute-0 ceph-mon[75015]: pgmap v2673: 321 pgs: 321 active+clean; 108 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 681 KiB/s wr, 14 op/s
Nov 25 09:09:31 compute-0 nova_compute[253538]: 2025-11-25 09:09:31.155 253542 DEBUG nova.compute.manager [req-7f818ead-650a-4980-8637-fc93d3aac516 req-fdbd555e-2983-49b2-bf73-68c3b7837290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-changed-24beb614-6f72-4107-adca-af1258052ab5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:09:31 compute-0 nova_compute[253538]: 2025-11-25 09:09:31.155 253542 DEBUG nova.compute.manager [req-7f818ead-650a-4980-8637-fc93d3aac516 req-fdbd555e-2983-49b2-bf73-68c3b7837290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Refreshing instance network info cache due to event network-changed-24beb614-6f72-4107-adca-af1258052ab5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:09:31 compute-0 nova_compute[253538]: 2025-11-25 09:09:31.156 253542 DEBUG oslo_concurrency.lockutils [req-7f818ead-650a-4980-8637-fc93d3aac516 req-fdbd555e-2983-49b2-bf73-68c3b7837290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:09:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2674: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.254 253542 DEBUG nova.network.neutron [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updating instance_info_cache with network_info: [{"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.272 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.273 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Instance network_info: |[{"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.274 253542 DEBUG oslo_concurrency.lockutils [req-7f818ead-650a-4980-8637-fc93d3aac516 req-fdbd555e-2983-49b2-bf73-68c3b7837290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.275 253542 DEBUG nova.network.neutron [req-7f818ead-650a-4980-8637-fc93d3aac516 req-fdbd555e-2983-49b2-bf73-68c3b7837290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Refreshing network info cache for port 24beb614-6f72-4107-adca-af1258052ab5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.281 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Start _get_guest_xml network_info=[{"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.288 253542 WARNING nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.300 253542 DEBUG nova.virt.libvirt.host [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.301 253542 DEBUG nova.virt.libvirt.host [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.305 253542 DEBUG nova.virt.libvirt.host [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.305 253542 DEBUG nova.virt.libvirt.host [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.306 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.307 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.308 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.308 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.309 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.309 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.309 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.310 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.310 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.311 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.312 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.312 253542 DEBUG nova.virt.hardware [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.320 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:09:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:09:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/542945606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.790 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.813 253542 DEBUG nova.storage.rbd_utils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 23232702-7686-425d-8921-7aa6192ca1c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:09:32 compute-0 nova_compute[253538]: 2025-11-25 09:09:32.817 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:09:32 compute-0 podman[407001]: 2025-11-25 09:09:32.834150171 +0000 UTC m=+0.075887494 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:09:32 compute-0 podman[407002]: 2025-11-25 09:09:32.854293979 +0000 UTC m=+0.095626191 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 25 09:09:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:09:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:09:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/791276440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.290 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.292 253542 DEBUG nova.virt.libvirt.vif [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1362533958',display_name='tempest-TestGettingAddress-server-1362533958',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1362533958',id=144,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-1pg69rxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:09:26Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=23232702-7686-425d-8921-7aa6192ca1c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.293 253542 DEBUG nova.network.os_vif_util [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.294 253542 DEBUG nova.network.os_vif_util [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:54:60,bridge_name='br-int',has_traffic_filtering=True,id=2cf452f4-d6c3-4977-9e5b-874c9d9707e6,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf452f4-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.295 253542 DEBUG nova.virt.libvirt.vif [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1362533958',display_name='tempest-TestGettingAddress-server-1362533958',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1362533958',id=144,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-1pg69rxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:09:26Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=23232702-7686-425d-8921-7aa6192ca1c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.295 253542 DEBUG nova.network.os_vif_util [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.296 253542 DEBUG nova.network.os_vif_util [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:a0:7e,bridge_name='br-int',has_traffic_filtering=True,id=24beb614-6f72-4107-adca-af1258052ab5,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24beb614-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.297 253542 DEBUG nova.objects.instance [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 23232702-7686-425d-8921-7aa6192ca1c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.317 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:09:33 compute-0 nova_compute[253538]:   <uuid>23232702-7686-425d-8921-7aa6192ca1c8</uuid>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   <name>instance-00000090</name>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <nova:name>tempest-TestGettingAddress-server-1362533958</nova:name>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:09:32</nova:creationTime>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:09:33 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:09:33 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:09:33 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:09:33 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:09:33 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:09:33 compute-0 nova_compute[253538]:         <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 09:09:33 compute-0 nova_compute[253538]:         <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:09:33 compute-0 nova_compute[253538]:         <nova:port uuid="2cf452f4-d6c3-4977-9e5b-874c9d9707e6">
Nov 25 09:09:33 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:09:33 compute-0 nova_compute[253538]:         <nova:port uuid="24beb614-6f72-4107-adca-af1258052ab5">
Nov 25 09:09:33 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe18:a07e" ipVersion="6"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <system>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <entry name="serial">23232702-7686-425d-8921-7aa6192ca1c8</entry>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <entry name="uuid">23232702-7686-425d-8921-7aa6192ca1c8</entry>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     </system>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   <os>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   </os>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   <features>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   </features>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/23232702-7686-425d-8921-7aa6192ca1c8_disk">
Nov 25 09:09:33 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       </source>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:09:33 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/23232702-7686-425d-8921-7aa6192ca1c8_disk.config">
Nov 25 09:09:33 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       </source>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:09:33 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:57:54:60"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <target dev="tap2cf452f4-d6"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:18:a0:7e"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <target dev="tap24beb614-6f"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8/console.log" append="off"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <video>
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     </video>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:09:33 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:09:33 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:09:33 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:09:33 compute-0 nova_compute[253538]: </domain>
Nov 25 09:09:33 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.319 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Preparing to wait for external event network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.320 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.321 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.322 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.322 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Preparing to wait for external event network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.323 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.323 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.324 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.325 253542 DEBUG nova.virt.libvirt.vif [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1362533958',display_name='tempest-TestGettingAddress-server-1362533958',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1362533958',id=144,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-1pg69rxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:09:26Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=23232702-7686-425d-8921-7aa6192ca1c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.326 253542 DEBUG nova.network.os_vif_util [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.328 253542 DEBUG nova.network.os_vif_util [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:54:60,bridge_name='br-int',has_traffic_filtering=True,id=2cf452f4-d6c3-4977-9e5b-874c9d9707e6,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf452f4-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.329 253542 DEBUG os_vif [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:54:60,bridge_name='br-int',has_traffic_filtering=True,id=2cf452f4-d6c3-4977-9e5b-874c9d9707e6,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf452f4-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.330 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.331 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.332 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.336 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.337 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cf452f4-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.338 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2cf452f4-d6, col_values=(('external_ids', {'iface-id': '2cf452f4-d6c3-4977-9e5b-874c9d9707e6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:54:60', 'vm-uuid': '23232702-7686-425d-8921-7aa6192ca1c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:09:33 compute-0 ceph-mon[75015]: pgmap v2674: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:09:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/542945606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:09:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/791276440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.340 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:33 compute-0 NetworkManager[48915]: <info>  [1764061773.3416] manager: (tap2cf452f4-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/624)
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.343 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.346 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.347 253542 INFO os_vif [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:54:60,bridge_name='br-int',has_traffic_filtering=True,id=2cf452f4-d6c3-4977-9e5b-874c9d9707e6,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf452f4-d6')
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.349 253542 DEBUG nova.virt.libvirt.vif [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1362533958',display_name='tempest-TestGettingAddress-server-1362533958',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1362533958',id=144,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-1pg69rxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:09:26Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=23232702-7686-425d-8921-7aa6192ca1c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.349 253542 DEBUG nova.network.os_vif_util [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.350 253542 DEBUG nova.network.os_vif_util [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:a0:7e,bridge_name='br-int',has_traffic_filtering=True,id=24beb614-6f72-4107-adca-af1258052ab5,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24beb614-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.351 253542 DEBUG os_vif [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:a0:7e,bridge_name='br-int',has_traffic_filtering=True,id=24beb614-6f72-4107-adca-af1258052ab5,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24beb614-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.352 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.352 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.353 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.356 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.356 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24beb614-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.357 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24beb614-6f, col_values=(('external_ids', {'iface-id': '24beb614-6f72-4107-adca-af1258052ab5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:a0:7e', 'vm-uuid': '23232702-7686-425d-8921-7aa6192ca1c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.358 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:33 compute-0 NetworkManager[48915]: <info>  [1764061773.3599] manager: (tap24beb614-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/625)
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.362 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.367 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.369 253542 INFO os_vif [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:a0:7e,bridge_name='br-int',has_traffic_filtering=True,id=24beb614-6f72-4107-adca-af1258052ab5,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24beb614-6f')
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.423 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.424 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.425 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:57:54:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.425 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:18:a0:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.426 253542 INFO nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Using config drive
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.451 253542 DEBUG nova.storage.rbd_utils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 23232702-7686-425d-8921-7aa6192ca1c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.727 253542 INFO nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Creating config drive at /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8/disk.config
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.732 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp23oeljy5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.769 253542 DEBUG nova.network.neutron [req-7f818ead-650a-4980-8637-fc93d3aac516 req-fdbd555e-2983-49b2-bf73-68c3b7837290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updated VIF entry in instance network info cache for port 24beb614-6f72-4107-adca-af1258052ab5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.770 253542 DEBUG nova.network.neutron [req-7f818ead-650a-4980-8637-fc93d3aac516 req-fdbd555e-2983-49b2-bf73-68c3b7837290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updating instance_info_cache with network_info: [{"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.782 253542 DEBUG oslo_concurrency.lockutils [req-7f818ead-650a-4980-8637-fc93d3aac516 req-fdbd555e-2983-49b2-bf73-68c3b7837290 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.875 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp23oeljy5" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.908 253542 DEBUG nova.storage.rbd_utils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 23232702-7686-425d-8921-7aa6192ca1c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:09:33 compute-0 nova_compute[253538]: 2025-11-25 09:09:33.912 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8/disk.config 23232702-7686-425d-8921-7aa6192ca1c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.103 253542 DEBUG oslo_concurrency.processutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8/disk.config 23232702-7686-425d-8921-7aa6192ca1c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.104 253542 INFO nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Deleting local config drive /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8/disk.config because it was imported into RBD.
Nov 25 09:09:34 compute-0 kernel: tap2cf452f4-d6: entered promiscuous mode
Nov 25 09:09:34 compute-0 NetworkManager[48915]: <info>  [1764061774.2008] manager: (tap2cf452f4-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/626)
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:34 compute-0 ovn_controller[152859]: 2025-11-25T09:09:34Z|01524|binding|INFO|Claiming lport 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 for this chassis.
Nov 25 09:09:34 compute-0 ovn_controller[152859]: 2025-11-25T09:09:34Z|01525|binding|INFO|2cf452f4-d6c3-4977-9e5b-874c9d9707e6: Claiming fa:16:3e:57:54:60 10.100.0.6
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.214 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2675: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:09:34 compute-0 NetworkManager[48915]: <info>  [1764061774.2525] manager: (tap24beb614-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/627)
Nov 25 09:09:34 compute-0 systemd-udevd[407154]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:09:34 compute-0 systemd-udevd[407155]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.278 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:54:60 10.100.0.6'], port_security=['fa:16:3e:57:54:60 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '23232702-7686-425d-8921-7aa6192ca1c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd62dafc0-5bc7-45ab-b9df-841bbdd333a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a8bb3c-a4c7-425c-b375-b8834bad3945, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2cf452f4-d6c3-4977-9e5b-874c9d9707e6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.279 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 in datapath 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 bound to our chassis
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.280 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6
Nov 25 09:09:34 compute-0 NetworkManager[48915]: <info>  [1764061774.2856] device (tap2cf452f4-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:09:34 compute-0 NetworkManager[48915]: <info>  [1764061774.2865] device (tap2cf452f4-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.292 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6de6d5-c3ff-4605-a99b-84a20155f4a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.294 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7bf4f588-e1 in ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.296 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7bf4f588-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.297 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[95d12376-6505-4945-a64f-9f8ec71e3d67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:34 compute-0 systemd-machined[215790]: New machine qemu-174-instance-00000090.
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.297 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9ccaa5aa-efad-4d7b-b2c0-07b36245006d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.309 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c36008-9c5c-41be-ab33-7ce880e2472b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.326 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.328 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e733c043-7dad-4a30-9443-4e16fb64cd1a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:34 compute-0 kernel: tap24beb614-6f: entered promiscuous mode
Nov 25 09:09:34 compute-0 systemd[1]: Started Virtual Machine qemu-174-instance-00000090.
Nov 25 09:09:34 compute-0 NetworkManager[48915]: <info>  [1764061774.3299] device (tap24beb614-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:09:34 compute-0 NetworkManager[48915]: <info>  [1764061774.3317] device (tap24beb614-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.335 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:34 compute-0 ovn_controller[152859]: 2025-11-25T09:09:34Z|01526|binding|INFO|Claiming lport 24beb614-6f72-4107-adca-af1258052ab5 for this chassis.
Nov 25 09:09:34 compute-0 ovn_controller[152859]: 2025-11-25T09:09:34Z|01527|binding|INFO|24beb614-6f72-4107-adca-af1258052ab5: Claiming fa:16:3e:18:a0:7e 2001:db8::f816:3eff:fe18:a07e
Nov 25 09:09:34 compute-0 ovn_controller[152859]: 2025-11-25T09:09:34Z|01528|binding|INFO|Setting lport 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 ovn-installed in OVS
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.342 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:34 compute-0 ovn_controller[152859]: 2025-11-25T09:09:34Z|01529|binding|INFO|Setting lport 24beb614-6f72-4107-adca-af1258052ab5 ovn-installed in OVS
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.354 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.363 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9889ff75-acf7-4d25-be4e-0996443e8736]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:34 compute-0 ovn_controller[152859]: 2025-11-25T09:09:34Z|01530|binding|INFO|Setting lport 24beb614-6f72-4107-adca-af1258052ab5 up in Southbound
Nov 25 09:09:34 compute-0 ovn_controller[152859]: 2025-11-25T09:09:34Z|01531|binding|INFO|Setting lport 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 up in Southbound
Nov 25 09:09:34 compute-0 NetworkManager[48915]: <info>  [1764061774.3712] manager: (tap7bf4f588-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/628)
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.370 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:a0:7e 2001:db8::f816:3eff:fe18:a07e'], port_security=['fa:16:3e:18:a0:7e 2001:db8::f816:3eff:fe18:a07e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe18:a07e/64', 'neutron:device_id': '23232702-7686-425d-8921-7aa6192ca1c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cc67e51-433c-4c50-9e32-11618e10c494', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd62dafc0-5bc7-45ab-b9df-841bbdd333a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b005d09-6aec-4c42-a2eb-38372057a2c4, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=24beb614-6f72-4107-adca-af1258052ab5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.372 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[158ef1a0-4a25-480e-a7a2-669e2d83e2c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.403 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[342a5fe0-6641-487b-899d-e6d2cc35f1a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.410 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[32f2b838-db0f-4ee1-8abb-7ac5f5015641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:34 compute-0 NetworkManager[48915]: <info>  [1764061774.4333] device (tap7bf4f588-e0): carrier: link connected
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.440 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c0333e54-5870-4da3-8054-a358b1f52b62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.459 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b35e0e95-02ad-4a04-af34-e6b6837d78f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bf4f588-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cc:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712903, 'reachable_time': 15708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407191, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.483 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[64541eac-a104-4def-b8d5-9cd59b7fac03]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:ccc4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712903, 'tstamp': 712903}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407192, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.506 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8af0b789-3cbf-425a-a17f-7f818d236b45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bf4f588-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cc:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712903, 'reachable_time': 15708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 407193, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.546 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[53724410-3016-4623-bb95-9deb3374e56d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.620 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[899b93e9-79ca-4108-bd17-fa7e4678cad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.624 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bf4f588-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.624 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.625 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bf4f588-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.626 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:34 compute-0 NetworkManager[48915]: <info>  [1764061774.6273] manager: (tap7bf4f588-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/629)
Nov 25 09:09:34 compute-0 kernel: tap7bf4f588-e0: entered promiscuous mode
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.635 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7bf4f588-e0, col_values=(('external_ids', {'iface-id': '681702e6-167a-4d5b-9bcf-7f086c4e8bad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.637 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.637 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:34 compute-0 ovn_controller[152859]: 2025-11-25T09:09:34Z|01532|binding|INFO|Releasing lport 681702e6-167a-4d5b-9bcf-7f086c4e8bad from this chassis (sb_readonly=0)
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.640 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7bf4f588-ebc7-4f3f-bad9-0474cdb461a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7bf4f588-ebc7-4f3f-bad9-0474cdb461a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.641 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[416c9b49-35e0-4e25-a63b-3f54405a0b2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.643 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/7bf4f588-ebc7-4f3f-bad9-0474cdb461a6.pid.haproxy
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:09:34 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:34.645 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'env', 'PROCESS_TAG=haproxy-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7bf4f588-ebc7-4f3f-bad9-0474cdb461a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.650 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.765 253542 DEBUG nova.compute.manager [req-47196a83-ad19-41d4-96e6-e03fa5d9e09d req-78d2eae6-e86f-4e07-abb5-8fd2e23e3541 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.766 253542 DEBUG oslo_concurrency.lockutils [req-47196a83-ad19-41d4-96e6-e03fa5d9e09d req-78d2eae6-e86f-4e07-abb5-8fd2e23e3541 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.766 253542 DEBUG oslo_concurrency.lockutils [req-47196a83-ad19-41d4-96e6-e03fa5d9e09d req-78d2eae6-e86f-4e07-abb5-8fd2e23e3541 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.766 253542 DEBUG oslo_concurrency.lockutils [req-47196a83-ad19-41d4-96e6-e03fa5d9e09d req-78d2eae6-e86f-4e07-abb5-8fd2e23e3541 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:09:34 compute-0 nova_compute[253538]: 2025-11-25 09:09:34.766 253542 DEBUG nova.compute.manager [req-47196a83-ad19-41d4-96e6-e03fa5d9e09d req-78d2eae6-e86f-4e07-abb5-8fd2e23e3541 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Processing event network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:09:35 compute-0 podman[407226]: 2025-11-25 09:09:35.011338736 +0000 UTC m=+0.048686084 container create 21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 09:09:35 compute-0 systemd[1]: Started libpod-conmon-21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d.scope.
Nov 25 09:09:35 compute-0 podman[407226]: 2025-11-25 09:09:34.986331436 +0000 UTC m=+0.023678804 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:09:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f10253aa29a2d9744f35c5a42f8958d44efd52ebdbabf6a5f52a1d200b9fc6b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:09:35 compute-0 podman[407226]: 2025-11-25 09:09:35.11228681 +0000 UTC m=+0.149634178 container init 21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:09:35 compute-0 podman[407226]: 2025-11-25 09:09:35.123788693 +0000 UTC m=+0.161136041 container start 21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.125 253542 DEBUG nova.compute.manager [req-a793bf7d-07d8-4efb-aadd-56940ee7ef93 req-d625b315-a552-439c-9ab1-b721af268997 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.126 253542 DEBUG oslo_concurrency.lockutils [req-a793bf7d-07d8-4efb-aadd-56940ee7ef93 req-d625b315-a552-439c-9ab1-b721af268997 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.126 253542 DEBUG oslo_concurrency.lockutils [req-a793bf7d-07d8-4efb-aadd-56940ee7ef93 req-d625b315-a552-439c-9ab1-b721af268997 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.126 253542 DEBUG oslo_concurrency.lockutils [req-a793bf7d-07d8-4efb-aadd-56940ee7ef93 req-d625b315-a552-439c-9ab1-b721af268997 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.126 253542 DEBUG nova.compute.manager [req-a793bf7d-07d8-4efb-aadd-56940ee7ef93 req-d625b315-a552-439c-9ab1-b721af268997 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Processing event network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:09:35 compute-0 neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6[407241]: [NOTICE]   (407260) : New worker (407265) forked
Nov 25 09:09:35 compute-0 neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6[407241]: [NOTICE]   (407260) : Loading success.
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.188 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 24beb614-6f72-4107-adca-af1258052ab5 in datapath 3cc67e51-433c-4c50-9e32-11618e10c494 unbound from our chassis
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.191 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3cc67e51-433c-4c50-9e32-11618e10c494
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.204 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8a65dd3c-b562-4b68-b3f5-cd6af3803f9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.205 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3cc67e51-41 in ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.209 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3cc67e51-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.209 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d241de-1262-474a-9145-d13115ab4a99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.211 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0b55cb31-8e54-42af-8bf2-35d50dc991ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.222 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[99d7bc0c-4ae0-4ed8-83dc-b2476cb8e2bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.237 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[09eaf04a-9974-4905-85d3-c51f8dc48f3d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.265 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[bff68061-c9da-4c9d-9bea-6e286b53e8cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.272 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[03e73254-4463-4944-9391-98acb7412f8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:35 compute-0 NetworkManager[48915]: <info>  [1764061775.2731] manager: (tap3cc67e51-40): new Veth device (/org/freedesktop/NetworkManager/Devices/630)
Nov 25 09:09:35 compute-0 systemd-udevd[407173]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.307 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc7b631-4df4-45a2-b9db-6c15204f49f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.310 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2acf3026-932f-41fb-9b97-d65afd80437f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.313 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061775.3132522, 23232702-7686-425d-8921-7aa6192ca1c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.314 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] VM Started (Lifecycle Event)
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.316 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.319 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.322 253542 INFO nova.virt.libvirt.driver [-] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Instance spawned successfully.
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.322 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.332 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.336 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:09:35 compute-0 NetworkManager[48915]: <info>  [1764061775.3403] device (tap3cc67e51-40): carrier: link connected
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.340 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.341 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.341 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.341 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.342 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.342 253542 DEBUG nova.virt.libvirt.driver [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:09:35 compute-0 ceph-mon[75015]: pgmap v2675: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.348 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[173e00a6-5d4c-4959-afbb-316423f9070a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.365 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.366 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061775.3134649, 23232702-7686-425d-8921-7aa6192ca1c8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.365 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[00eb0e73-2813-4930-a9cb-71215364a8ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3cc67e51-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:16:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712993, 'reachable_time': 20254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407313, 'error': None, 'target': 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.366 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] VM Paused (Lifecycle Event)
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.382 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.382 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[28776d6a-8bdc-48a0-8a65-c6a80e6078b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:1667'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712993, 'tstamp': 712993}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407314, 'error': None, 'target': 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.385 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061775.3184218, 23232702-7686-425d-8921-7aa6192ca1c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.385 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] VM Resumed (Lifecycle Event)
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.402 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f41b77-52df-4c8e-afa9-4d71e5aa5a0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3cc67e51-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:16:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712993, 'reachable_time': 20254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 407315, 'error': None, 'target': 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.412 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.414 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.427 253542 INFO nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Took 8.59 seconds to spawn the instance on the hypervisor.
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.428 253542 DEBUG nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.433 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4037d67e-c35d-4c8a-a8c6-ddc4baeba408]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.458 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.469 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[317f5dd3-0cd8-446a-9ab1-a7cdec802b61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.471 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cc67e51-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.471 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.471 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cc67e51-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:09:35 compute-0 NetworkManager[48915]: <info>  [1764061775.4749] manager: (tap3cc67e51-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/631)
Nov 25 09:09:35 compute-0 kernel: tap3cc67e51-40: entered promiscuous mode
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.474 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.478 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3cc67e51-40, col_values=(('external_ids', {'iface-id': '7cc0292c-b133-4cb7-8177-2a55fd592909'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.479 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.482 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.483 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3cc67e51-433c-4c50-9e32-11618e10c494.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3cc67e51-433c-4c50-9e32-11618e10c494.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:09:35 compute-0 ovn_controller[152859]: 2025-11-25T09:09:35Z|01533|binding|INFO|Releasing lport 7cc0292c-b133-4cb7-8177-2a55fd592909 from this chassis (sb_readonly=0)
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.484 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b1cd975e-1ea7-4ee7-9989-2a9edf4f10c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.486 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-3cc67e51-433c-4c50-9e32-11618e10c494
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/3cc67e51-433c-4c50-9e32-11618e10c494.pid.haproxy
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 3cc67e51-433c-4c50-9e32-11618e10c494
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:09:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:35.488 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'env', 'PROCESS_TAG=haproxy-3cc67e51-433c-4c50-9e32-11618e10c494', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3cc67e51-433c-4c50-9e32-11618e10c494.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.503 253542 INFO nova.compute.manager [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Took 9.54 seconds to build instance.
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.506 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:35 compute-0 nova_compute[253538]: 2025-11-25 09:09:35.516 253542 DEBUG oslo_concurrency.lockutils [None req-b36e01e6-4ffd-4f01-913d-275d77801532 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:09:35 compute-0 podman[407345]: 2025-11-25 09:09:35.879538177 +0000 UTC m=+0.028890716 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:09:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2676: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:09:36 compute-0 podman[407345]: 2025-11-25 09:09:36.317163984 +0000 UTC m=+0.466516503 container create f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:09:36 compute-0 systemd[1]: Started libpod-conmon-f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d.scope.
Nov 25 09:09:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15b7818f4970c473f15ea612dd47dfe4573d2b3b5179a915f1ffcc79793320c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:09:36 compute-0 podman[407345]: 2025-11-25 09:09:36.454181218 +0000 UTC m=+0.603533717 container init f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:09:36 compute-0 podman[407345]: 2025-11-25 09:09:36.460783279 +0000 UTC m=+0.610135758 container start f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:09:36 compute-0 neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494[407360]: [NOTICE]   (407364) : New worker (407366) forked
Nov 25 09:09:36 compute-0 neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494[407360]: [NOTICE]   (407364) : Loading success.
Nov 25 09:09:36 compute-0 sshd-session[407309]: Received disconnect from 45.202.211.6 port 44892:11: Bye Bye [preauth]
Nov 25 09:09:36 compute-0 sshd-session[407309]: Disconnected from authenticating user root 45.202.211.6 port 44892 [preauth]
Nov 25 09:09:36 compute-0 nova_compute[253538]: 2025-11-25 09:09:36.836 253542 DEBUG nova.compute.manager [req-a3d6e6fb-c888-4f79-9f78-037428f598f0 req-262bd564-9737-44aa-a4f8-0ba2ec8ca679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:09:36 compute-0 nova_compute[253538]: 2025-11-25 09:09:36.836 253542 DEBUG oslo_concurrency.lockutils [req-a3d6e6fb-c888-4f79-9f78-037428f598f0 req-262bd564-9737-44aa-a4f8-0ba2ec8ca679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:09:36 compute-0 nova_compute[253538]: 2025-11-25 09:09:36.837 253542 DEBUG oslo_concurrency.lockutils [req-a3d6e6fb-c888-4f79-9f78-037428f598f0 req-262bd564-9737-44aa-a4f8-0ba2ec8ca679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:09:36 compute-0 nova_compute[253538]: 2025-11-25 09:09:36.837 253542 DEBUG oslo_concurrency.lockutils [req-a3d6e6fb-c888-4f79-9f78-037428f598f0 req-262bd564-9737-44aa-a4f8-0ba2ec8ca679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:09:36 compute-0 nova_compute[253538]: 2025-11-25 09:09:36.837 253542 DEBUG nova.compute.manager [req-a3d6e6fb-c888-4f79-9f78-037428f598f0 req-262bd564-9737-44aa-a4f8-0ba2ec8ca679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] No waiting events found dispatching network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:09:36 compute-0 nova_compute[253538]: 2025-11-25 09:09:36.838 253542 WARNING nova.compute.manager [req-a3d6e6fb-c888-4f79-9f78-037428f598f0 req-262bd564-9737-44aa-a4f8-0ba2ec8ca679 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received unexpected event network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 for instance with vm_state active and task_state None.
Nov 25 09:09:37 compute-0 nova_compute[253538]: 2025-11-25 09:09:37.206 253542 DEBUG nova.compute.manager [req-6e93873c-1a57-4f91-bfe5-5c748c58c96c req-25d2078c-c535-4ebe-aa3e-a28662bddafe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:09:37 compute-0 nova_compute[253538]: 2025-11-25 09:09:37.206 253542 DEBUG oslo_concurrency.lockutils [req-6e93873c-1a57-4f91-bfe5-5c748c58c96c req-25d2078c-c535-4ebe-aa3e-a28662bddafe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:09:37 compute-0 nova_compute[253538]: 2025-11-25 09:09:37.206 253542 DEBUG oslo_concurrency.lockutils [req-6e93873c-1a57-4f91-bfe5-5c748c58c96c req-25d2078c-c535-4ebe-aa3e-a28662bddafe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:09:37 compute-0 nova_compute[253538]: 2025-11-25 09:09:37.207 253542 DEBUG oslo_concurrency.lockutils [req-6e93873c-1a57-4f91-bfe5-5c748c58c96c req-25d2078c-c535-4ebe-aa3e-a28662bddafe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:09:37 compute-0 nova_compute[253538]: 2025-11-25 09:09:37.207 253542 DEBUG nova.compute.manager [req-6e93873c-1a57-4f91-bfe5-5c748c58c96c req-25d2078c-c535-4ebe-aa3e-a28662bddafe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] No waiting events found dispatching network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:09:37 compute-0 nova_compute[253538]: 2025-11-25 09:09:37.207 253542 WARNING nova.compute.manager [req-6e93873c-1a57-4f91-bfe5-5c748c58c96c req-25d2078c-c535-4ebe-aa3e-a28662bddafe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received unexpected event network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 for instance with vm_state active and task_state None.
Nov 25 09:09:37 compute-0 ceph-mon[75015]: pgmap v2676: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:09:37 compute-0 nova_compute[253538]: 2025-11-25 09:09:37.676 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:37.677 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:09:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:37.680 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:09:38 compute-0 ovn_controller[152859]: 2025-11-25T09:09:38Z|01534|binding|INFO|Releasing lport 681702e6-167a-4d5b-9bcf-7f086c4e8bad from this chassis (sb_readonly=0)
Nov 25 09:09:38 compute-0 ovn_controller[152859]: 2025-11-25T09:09:38Z|01535|binding|INFO|Releasing lport 7cc0292c-b133-4cb7-8177-2a55fd592909 from this chassis (sb_readonly=0)
Nov 25 09:09:38 compute-0 NetworkManager[48915]: <info>  [1764061778.1090] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/632)
Nov 25 09:09:38 compute-0 NetworkManager[48915]: <info>  [1764061778.1107] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/633)
Nov 25 09:09:38 compute-0 nova_compute[253538]: 2025-11-25 09:09:38.111 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:38 compute-0 ovn_controller[152859]: 2025-11-25T09:09:38Z|01536|binding|INFO|Releasing lport 681702e6-167a-4d5b-9bcf-7f086c4e8bad from this chassis (sb_readonly=0)
Nov 25 09:09:38 compute-0 ovn_controller[152859]: 2025-11-25T09:09:38Z|01537|binding|INFO|Releasing lport 7cc0292c-b133-4cb7-8177-2a55fd592909 from this chassis (sb_readonly=0)
Nov 25 09:09:38 compute-0 nova_compute[253538]: 2025-11-25 09:09:38.141 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:38 compute-0 nova_compute[253538]: 2025-11-25 09:09:38.146 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2677: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Nov 25 09:09:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:09:38 compute-0 nova_compute[253538]: 2025-11-25 09:09:38.359 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:38 compute-0 nova_compute[253538]: 2025-11-25 09:09:38.934 253542 DEBUG nova.compute.manager [req-e9200fef-df0f-4468-9dd6-efbbc1ddd11c req-ada053e1-5809-4f38-b8bf-54a63aa38535 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-changed-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:09:38 compute-0 nova_compute[253538]: 2025-11-25 09:09:38.935 253542 DEBUG nova.compute.manager [req-e9200fef-df0f-4468-9dd6-efbbc1ddd11c req-ada053e1-5809-4f38-b8bf-54a63aa38535 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Refreshing instance network info cache due to event network-changed-2cf452f4-d6c3-4977-9e5b-874c9d9707e6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:09:38 compute-0 nova_compute[253538]: 2025-11-25 09:09:38.935 253542 DEBUG oslo_concurrency.lockutils [req-e9200fef-df0f-4468-9dd6-efbbc1ddd11c req-ada053e1-5809-4f38-b8bf-54a63aa38535 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:09:38 compute-0 nova_compute[253538]: 2025-11-25 09:09:38.935 253542 DEBUG oslo_concurrency.lockutils [req-e9200fef-df0f-4468-9dd6-efbbc1ddd11c req-ada053e1-5809-4f38-b8bf-54a63aa38535 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:09:38 compute-0 nova_compute[253538]: 2025-11-25 09:09:38.936 253542 DEBUG nova.network.neutron [req-e9200fef-df0f-4468-9dd6-efbbc1ddd11c req-ada053e1-5809-4f38-b8bf-54a63aa38535 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Refreshing network info cache for port 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:09:39 compute-0 nova_compute[253538]: 2025-11-25 09:09:39.328 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:39 compute-0 ceph-mon[75015]: pgmap v2677: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Nov 25 09:09:39 compute-0 podman[407376]: 2025-11-25 09:09:39.882238567 +0000 UTC m=+0.109825396 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 09:09:40 compute-0 nova_compute[253538]: 2025-11-25 09:09:40.043 253542 DEBUG nova.network.neutron [req-e9200fef-df0f-4468-9dd6-efbbc1ddd11c req-ada053e1-5809-4f38-b8bf-54a63aa38535 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updated VIF entry in instance network info cache for port 2cf452f4-d6c3-4977-9e5b-874c9d9707e6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:09:40 compute-0 nova_compute[253538]: 2025-11-25 09:09:40.045 253542 DEBUG nova.network.neutron [req-e9200fef-df0f-4468-9dd6-efbbc1ddd11c req-ada053e1-5809-4f38-b8bf-54a63aa38535 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updating instance_info_cache with network_info: [{"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:09:40 compute-0 nova_compute[253538]: 2025-11-25 09:09:40.063 253542 DEBUG oslo_concurrency.lockutils [req-e9200fef-df0f-4468-9dd6-efbbc1ddd11c req-ada053e1-5809-4f38-b8bf-54a63aa38535 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:09:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2678: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 88 op/s
Nov 25 09:09:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:41.093 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:09:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:41.094 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:09:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:41.094 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:09:41 compute-0 ceph-mon[75015]: pgmap v2678: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 88 op/s
Nov 25 09:09:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2679: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Nov 25 09:09:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:09:43 compute-0 nova_compute[253538]: 2025-11-25 09:09:43.365 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:43 compute-0 ceph-mon[75015]: pgmap v2679: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Nov 25 09:09:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2680: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:09:44 compute-0 nova_compute[253538]: 2025-11-25 09:09:44.331 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:45 compute-0 ceph-mon[75015]: pgmap v2680: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:09:45 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:09:45.682 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:09:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2681: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:09:47 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Nov 25 09:09:47 compute-0 ceph-mon[75015]: pgmap v2681: 321 pgs: 321 active+clean; 134 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:09:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:09:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2682: 321 pgs: 321 active+clean; 140 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 489 KiB/s wr, 86 op/s
Nov 25 09:09:48 compute-0 nova_compute[253538]: 2025-11-25 09:09:48.369 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:48 compute-0 ovn_controller[152859]: 2025-11-25T09:09:48Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:54:60 10.100.0.6
Nov 25 09:09:48 compute-0 ovn_controller[152859]: 2025-11-25T09:09:48Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:54:60 10.100.0.6
Nov 25 09:09:48 compute-0 nova_compute[253538]: 2025-11-25 09:09:48.968 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:09:48 compute-0 nova_compute[253538]: 2025-11-25 09:09:48.968 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:09:49 compute-0 nova_compute[253538]: 2025-11-25 09:09:49.333 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:49 compute-0 ceph-mon[75015]: pgmap v2682: 321 pgs: 321 active+clean; 140 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 489 KiB/s wr, 86 op/s
Nov 25 09:09:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2683: 321 pgs: 321 active+clean; 157 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.3 MiB/s wr, 74 op/s
Nov 25 09:09:51 compute-0 ceph-mon[75015]: pgmap v2683: 321 pgs: 321 active+clean; 157 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.3 MiB/s wr, 74 op/s
Nov 25 09:09:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2684: 321 pgs: 321 active+clean; 166 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Nov 25 09:09:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:09:53 compute-0 nova_compute[253538]: 2025-11-25 09:09:53.371 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:09:53
Nov 25 09:09:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:09:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:09:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'backups']
Nov 25 09:09:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:09:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:09:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:09:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:09:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:09:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:09:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:09:53 compute-0 nova_compute[253538]: 2025-11-25 09:09:53.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:09:53 compute-0 nova_compute[253538]: 2025-11-25 09:09:53.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:09:53 compute-0 nova_compute[253538]: 2025-11-25 09:09:53.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:09:53 compute-0 ceph-mon[75015]: pgmap v2684: 321 pgs: 321 active+clean; 166 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Nov 25 09:09:53 compute-0 nova_compute[253538]: 2025-11-25 09:09:53.951 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:09:53 compute-0 nova_compute[253538]: 2025-11-25 09:09:53.952 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:09:53 compute-0 nova_compute[253538]: 2025-11-25 09:09:53.952 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 09:09:53 compute-0 nova_compute[253538]: 2025-11-25 09:09:53.953 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 23232702-7686-425d-8921-7aa6192ca1c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:09:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:09:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:09:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:09:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:09:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:09:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:09:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:09:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:09:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:09:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:09:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2685: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:09:54 compute-0 nova_compute[253538]: 2025-11-25 09:09:54.352 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:54 compute-0 ceph-mon[75015]: pgmap v2685: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:09:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2686: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:09:57 compute-0 ceph-mon[75015]: pgmap v2686: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:09:57 compute-0 nova_compute[253538]: 2025-11-25 09:09:57.828 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updating instance_info_cache with network_info: [{"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:09:57 compute-0 nova_compute[253538]: 2025-11-25 09:09:57.993 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:09:57 compute-0 nova_compute[253538]: 2025-11-25 09:09:57.994 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 09:09:57 compute-0 nova_compute[253538]: 2025-11-25 09:09:57.994 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:09:57 compute-0 nova_compute[253538]: 2025-11-25 09:09:57.995 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:09:57 compute-0 nova_compute[253538]: 2025-11-25 09:09:57.995 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:09:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:09:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2687: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:09:58 compute-0 nova_compute[253538]: 2025-11-25 09:09:58.374 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:58 compute-0 nova_compute[253538]: 2025-11-25 09:09:58.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:09:58 compute-0 nova_compute[253538]: 2025-11-25 09:09:58.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:09:59 compute-0 nova_compute[253538]: 2025-11-25 09:09:59.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:09:59 compute-0 nova_compute[253538]: 2025-11-25 09:09:59.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:09:59 compute-0 ceph-mon[75015]: pgmap v2687: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:10:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2688: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 1.7 MiB/s wr, 51 op/s
Nov 25 09:10:00 compute-0 ceph-mon[75015]: pgmap v2688: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 1.7 MiB/s wr, 51 op/s
Nov 25 09:10:01 compute-0 nova_compute[253538]: 2025-11-25 09:10:01.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:10:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2689: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 164 KiB/s rd, 817 KiB/s wr, 18 op/s
Nov 25 09:10:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:10:03 compute-0 ceph-mon[75015]: pgmap v2689: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 164 KiB/s rd, 817 KiB/s wr, 18 op/s
Nov 25 09:10:03 compute-0 nova_compute[253538]: 2025-11-25 09:10:03.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:03 compute-0 podman[407404]: 2025-11-25 09:10:03.841821565 +0000 UTC m=+0.069188482 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:10:03 compute-0 podman[407403]: 2025-11-25 09:10:03.871293746 +0000 UTC m=+0.098404256 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2690: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 27 KiB/s wr, 4 op/s
Nov 25 09:10:04 compute-0 nova_compute[253538]: 2025-11-25 09:10:04.357 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:10:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:10:05 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:10:05 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.5 total, 600.0 interval
                                           Cumulative writes: 43K writes, 175K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 43K writes, 15K syncs, 2.86 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4589 writes, 19K keys, 4589 commit groups, 1.0 writes per commit group, ingest: 23.86 MB, 0.04 MB/s
                                           Interval WAL: 4589 writes, 1654 syncs, 2.77 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:10:05 compute-0 ceph-mon[75015]: pgmap v2690: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 27 KiB/s wr, 4 op/s
Nov 25 09:10:05 compute-0 nova_compute[253538]: 2025-11-25 09:10:05.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:10:05 compute-0 nova_compute[253538]: 2025-11-25 09:10:05.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:10:05 compute-0 nova_compute[253538]: 2025-11-25 09:10:05.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:10:05 compute-0 nova_compute[253538]: 2025-11-25 09:10:05.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:10:05 compute-0 nova_compute[253538]: 2025-11-25 09:10:05.577 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:10:05 compute-0 nova_compute[253538]: 2025-11-25 09:10:05.578 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:10:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:10:06 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3652824619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:10:06 compute-0 nova_compute[253538]: 2025-11-25 09:10:06.067 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:10:06 compute-0 nova_compute[253538]: 2025-11-25 09:10:06.129 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:10:06 compute-0 nova_compute[253538]: 2025-11-25 09:10:06.129 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:10:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2691: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Nov 25 09:10:06 compute-0 nova_compute[253538]: 2025-11-25 09:10:06.310 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:10:06 compute-0 nova_compute[253538]: 2025-11-25 09:10:06.311 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3448MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:10:06 compute-0 nova_compute[253538]: 2025-11-25 09:10:06.312 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:10:06 compute-0 nova_compute[253538]: 2025-11-25 09:10:06.312 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:10:06 compute-0 nova_compute[253538]: 2025-11-25 09:10:06.382 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 23232702-7686-425d-8921-7aa6192ca1c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:10:06 compute-0 nova_compute[253538]: 2025-11-25 09:10:06.383 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:10:06 compute-0 nova_compute[253538]: 2025-11-25 09:10:06.383 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:10:06 compute-0 nova_compute[253538]: 2025-11-25 09:10:06.428 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:10:06 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3652824619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:10:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:10:06 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4067024358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:10:06 compute-0 nova_compute[253538]: 2025-11-25 09:10:06.971 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:10:06 compute-0 nova_compute[253538]: 2025-11-25 09:10:06.978 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:10:07 compute-0 nova_compute[253538]: 2025-11-25 09:10:07.000 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:10:07 compute-0 nova_compute[253538]: 2025-11-25 09:10:07.160 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:10:07 compute-0 nova_compute[253538]: 2025-11-25 09:10:07.160 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:10:07 compute-0 ceph-mon[75015]: pgmap v2691: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Nov 25 09:10:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4067024358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:10:08 compute-0 ovn_controller[152859]: 2025-11-25T09:10:08Z|01538|memory_trim|INFO|Detected inactivity (last active 30023 ms ago): trimming memory
Nov 25 09:10:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:10:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2692: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 25 09:10:08 compute-0 nova_compute[253538]: 2025-11-25 09:10:08.381 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:09 compute-0 nova_compute[253538]: 2025-11-25 09:10:09.359 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:09 compute-0 ceph-mon[75015]: pgmap v2692: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 25 09:10:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2693: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 09:10:10 compute-0 sshd-session[407487]: Invalid user hello from 45.78.222.2 port 37294
Nov 25 09:10:10 compute-0 podman[407489]: 2025-11-25 09:10:10.392747225 +0000 UTC m=+0.080910041 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 09:10:11 compute-0 ceph-mon[75015]: pgmap v2693: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 09:10:11 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:10:11 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.4 total, 600.0 interval
                                           Cumulative writes: 42K writes, 168K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 42K writes, 15K syncs, 2.81 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4521 writes, 19K keys, 4521 commit groups, 1.0 writes per commit group, ingest: 23.42 MB, 0.04 MB/s
                                           Interval WAL: 4521 writes, 1686 syncs, 2.68 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:10:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2694: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 09:10:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:10:13 compute-0 nova_compute[253538]: 2025-11-25 09:10:13.383 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:13 compute-0 ceph-mon[75015]: pgmap v2694: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 09:10:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2695: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 25 09:10:14 compute-0 nova_compute[253538]: 2025-11-25 09:10:14.361 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:14 compute-0 nova_compute[253538]: 2025-11-25 09:10:14.709 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:10:14 compute-0 nova_compute[253538]: 2025-11-25 09:10:14.710 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:10:14 compute-0 nova_compute[253538]: 2025-11-25 09:10:14.755 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:10:15 compute-0 nova_compute[253538]: 2025-11-25 09:10:15.018 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:10:15 compute-0 nova_compute[253538]: 2025-11-25 09:10:15.019 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:10:15 compute-0 nova_compute[253538]: 2025-11-25 09:10:15.026 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:10:15 compute-0 nova_compute[253538]: 2025-11-25 09:10:15.026 253542 INFO nova.compute.claims [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:10:15 compute-0 nova_compute[253538]: 2025-11-25 09:10:15.261 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:10:15 compute-0 sshd-session[407487]: Received disconnect from 45.78.222.2 port 37294:11: Bye Bye [preauth]
Nov 25 09:10:15 compute-0 sshd-session[407487]: Disconnected from invalid user hello 45.78.222.2 port 37294 [preauth]
Nov 25 09:10:15 compute-0 ceph-mon[75015]: pgmap v2695: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 25 09:10:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:10:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2060131293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:10:15 compute-0 nova_compute[253538]: 2025-11-25 09:10:15.721 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:10:15 compute-0 nova_compute[253538]: 2025-11-25 09:10:15.729 253542 DEBUG nova.compute.provider_tree [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:10:15 compute-0 nova_compute[253538]: 2025-11-25 09:10:15.752 253542 DEBUG nova.scheduler.client.report [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:10:15 compute-0 nova_compute[253538]: 2025-11-25 09:10:15.799 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:10:15 compute-0 nova_compute[253538]: 2025-11-25 09:10:15.801 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:10:15 compute-0 nova_compute[253538]: 2025-11-25 09:10:15.869 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:10:15 compute-0 nova_compute[253538]: 2025-11-25 09:10:15.870 253542 DEBUG nova.network.neutron [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:10:16 compute-0 nova_compute[253538]: 2025-11-25 09:10:16.091 253542 INFO nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:10:16 compute-0 nova_compute[253538]: 2025-11-25 09:10:16.154 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:10:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2696: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 25 09:10:16 compute-0 nova_compute[253538]: 2025-11-25 09:10:16.512 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:10:16 compute-0 nova_compute[253538]: 2025-11-25 09:10:16.513 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:10:16 compute-0 nova_compute[253538]: 2025-11-25 09:10:16.514 253542 INFO nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Creating image(s)
Nov 25 09:10:16 compute-0 nova_compute[253538]: 2025-11-25 09:10:16.537 253542 DEBUG nova.storage.rbd_utils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image e30f8c90-01de-40a5-8c04-289a035fca22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:10:16 compute-0 nova_compute[253538]: 2025-11-25 09:10:16.561 253542 DEBUG nova.storage.rbd_utils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image e30f8c90-01de-40a5-8c04-289a035fca22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:10:16 compute-0 nova_compute[253538]: 2025-11-25 09:10:16.587 253542 DEBUG nova.storage.rbd_utils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image e30f8c90-01de-40a5-8c04-289a035fca22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:10:16 compute-0 nova_compute[253538]: 2025-11-25 09:10:16.591 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:10:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2060131293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:10:16 compute-0 nova_compute[253538]: 2025-11-25 09:10:16.668 253542 DEBUG nova.policy [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:10:16 compute-0 nova_compute[253538]: 2025-11-25 09:10:16.690 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:10:16 compute-0 nova_compute[253538]: 2025-11-25 09:10:16.692 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:10:16 compute-0 nova_compute[253538]: 2025-11-25 09:10:16.692 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:10:16 compute-0 nova_compute[253538]: 2025-11-25 09:10:16.693 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:10:16 compute-0 nova_compute[253538]: 2025-11-25 09:10:16.720 253542 DEBUG nova.storage.rbd_utils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image e30f8c90-01de-40a5-8c04-289a035fca22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:10:16 compute-0 nova_compute[253538]: 2025-11-25 09:10:16.725 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e30f8c90-01de-40a5-8c04-289a035fca22_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:10:17 compute-0 nova_compute[253538]: 2025-11-25 09:10:17.112 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc e30f8c90-01de-40a5-8c04-289a035fca22_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:10:17 compute-0 nova_compute[253538]: 2025-11-25 09:10:17.180 253542 DEBUG nova.storage.rbd_utils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image e30f8c90-01de-40a5-8c04-289a035fca22_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:10:17 compute-0 nova_compute[253538]: 2025-11-25 09:10:17.271 253542 DEBUG nova.objects.instance [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid e30f8c90-01de-40a5-8c04-289a035fca22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:10:17 compute-0 nova_compute[253538]: 2025-11-25 09:10:17.298 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:10:17 compute-0 nova_compute[253538]: 2025-11-25 09:10:17.299 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Ensure instance console log exists: /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:10:17 compute-0 nova_compute[253538]: 2025-11-25 09:10:17.299 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:10:17 compute-0 nova_compute[253538]: 2025-11-25 09:10:17.300 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:10:17 compute-0 nova_compute[253538]: 2025-11-25 09:10:17.300 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:10:17 compute-0 ceph-mon[75015]: pgmap v2696: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 25 09:10:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:17.797 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:10:17 compute-0 nova_compute[253538]: 2025-11-25 09:10:17.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:17 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:17.798 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:10:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:10:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2697: 321 pgs: 321 active+clean; 183 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 428 KiB/s wr, 13 op/s
Nov 25 09:10:18 compute-0 nova_compute[253538]: 2025-11-25 09:10:18.330 253542 DEBUG nova.network.neutron [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Successfully created port: a2ae2d19-2b35-4e83-b6ba-9f037762a501 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:10:18 compute-0 nova_compute[253538]: 2025-11-25 09:10:18.385 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:19 compute-0 nova_compute[253538]: 2025-11-25 09:10:19.363 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:19 compute-0 ceph-mon[75015]: pgmap v2697: 321 pgs: 321 active+clean; 183 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 428 KiB/s wr, 13 op/s
Nov 25 09:10:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:19.801 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:10:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2698: 321 pgs: 321 active+clean; 195 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 952 KiB/s wr, 24 op/s
Nov 25 09:10:20 compute-0 nova_compute[253538]: 2025-11-25 09:10:20.434 253542 DEBUG nova.network.neutron [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Successfully created port: aa75ca22-e976-4c62-b1e2-cc57fac51dec _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:10:21 compute-0 ceph-mon[75015]: pgmap v2698: 321 pgs: 321 active+clean; 195 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 952 KiB/s wr, 24 op/s
Nov 25 09:10:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:10:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4802.4 total, 600.0 interval
                                           Cumulative writes: 33K writes, 133K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 33K writes, 11K syncs, 2.85 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3363 writes, 13K keys, 3363 commit groups, 1.0 writes per commit group, ingest: 13.80 MB, 0.02 MB/s
                                           Interval WAL: 3363 writes, 1299 syncs, 2.59 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:10:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2699: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:10:22 compute-0 nova_compute[253538]: 2025-11-25 09:10:22.473 253542 DEBUG nova.network.neutron [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Successfully updated port: a2ae2d19-2b35-4e83-b6ba-9f037762a501 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:10:22 compute-0 nova_compute[253538]: 2025-11-25 09:10:22.568 253542 DEBUG nova.compute.manager [req-a0fd49ff-fe77-4529-82f0-4b0240829479 req-712a073d-1f89-469c-94db-5169507a40d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-changed-a2ae2d19-2b35-4e83-b6ba-9f037762a501 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:10:22 compute-0 nova_compute[253538]: 2025-11-25 09:10:22.569 253542 DEBUG nova.compute.manager [req-a0fd49ff-fe77-4529-82f0-4b0240829479 req-712a073d-1f89-469c-94db-5169507a40d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Refreshing instance network info cache due to event network-changed-a2ae2d19-2b35-4e83-b6ba-9f037762a501. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:10:22 compute-0 nova_compute[253538]: 2025-11-25 09:10:22.569 253542 DEBUG oslo_concurrency.lockutils [req-a0fd49ff-fe77-4529-82f0-4b0240829479 req-712a073d-1f89-469c-94db-5169507a40d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:10:22 compute-0 nova_compute[253538]: 2025-11-25 09:10:22.570 253542 DEBUG oslo_concurrency.lockutils [req-a0fd49ff-fe77-4529-82f0-4b0240829479 req-712a073d-1f89-469c-94db-5169507a40d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:10:22 compute-0 nova_compute[253538]: 2025-11-25 09:10:22.570 253542 DEBUG nova.network.neutron [req-a0fd49ff-fe77-4529-82f0-4b0240829479 req-712a073d-1f89-469c-94db-5169507a40d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Refreshing network info cache for port a2ae2d19-2b35-4e83-b6ba-9f037762a501 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:10:22 compute-0 nova_compute[253538]: 2025-11-25 09:10:22.740 253542 DEBUG nova.network.neutron [req-a0fd49ff-fe77-4529-82f0-4b0240829479 req-712a073d-1f89-469c-94db-5169507a40d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:10:22 compute-0 ceph-mon[75015]: pgmap v2699: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:10:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:10:23 compute-0 nova_compute[253538]: 2025-11-25 09:10:23.316 253542 DEBUG nova.network.neutron [req-a0fd49ff-fe77-4529-82f0-4b0240829479 req-712a073d-1f89-469c-94db-5169507a40d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:10:23 compute-0 nova_compute[253538]: 2025-11-25 09:10:23.331 253542 DEBUG oslo_concurrency.lockutils [req-a0fd49ff-fe77-4529-82f0-4b0240829479 req-712a073d-1f89-469c-94db-5169507a40d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:10:23 compute-0 nova_compute[253538]: 2025-11-25 09:10:23.388 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:10:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:10:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:10:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:10:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:10:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:10:23 compute-0 nova_compute[253538]: 2025-11-25 09:10:23.769 253542 DEBUG nova.network.neutron [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Successfully updated port: aa75ca22-e976-4c62-b1e2-cc57fac51dec _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:10:23 compute-0 nova_compute[253538]: 2025-11-25 09:10:23.866 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:10:23 compute-0 nova_compute[253538]: 2025-11-25 09:10:23.867 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:10:23 compute-0 nova_compute[253538]: 2025-11-25 09:10:23.867 253542 DEBUG nova.network.neutron [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:10:24 compute-0 nova_compute[253538]: 2025-11-25 09:10:24.169 253542 DEBUG nova.network.neutron [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:10:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2700: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:10:24 compute-0 nova_compute[253538]: 2025-11-25 09:10:24.364 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:24 compute-0 nova_compute[253538]: 2025-11-25 09:10:24.643 253542 DEBUG nova.compute.manager [req-748d4071-a155-4e74-960c-86c19bc79b5c req-972e4abb-f7de-4040-9398-523737d20e42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-changed-aa75ca22-e976-4c62-b1e2-cc57fac51dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:10:24 compute-0 nova_compute[253538]: 2025-11-25 09:10:24.644 253542 DEBUG nova.compute.manager [req-748d4071-a155-4e74-960c-86c19bc79b5c req-972e4abb-f7de-4040-9398-523737d20e42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Refreshing instance network info cache due to event network-changed-aa75ca22-e976-4c62-b1e2-cc57fac51dec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:10:24 compute-0 nova_compute[253538]: 2025-11-25 09:10:24.644 253542 DEBUG oslo_concurrency.lockutils [req-748d4071-a155-4e74-960c-86c19bc79b5c req-972e4abb-f7de-4040-9398-523737d20e42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:10:25 compute-0 ceph-mon[75015]: pgmap v2700: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:10:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2701: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:10:26 compute-0 nova_compute[253538]: 2025-11-25 09:10:26.881 253542 DEBUG nova.network.neutron [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updating instance_info_cache with network_info: [{"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.122 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.122 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Instance network_info: |[{"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.123 253542 DEBUG oslo_concurrency.lockutils [req-748d4071-a155-4e74-960c-86c19bc79b5c req-972e4abb-f7de-4040-9398-523737d20e42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.124 253542 DEBUG nova.network.neutron [req-748d4071-a155-4e74-960c-86c19bc79b5c req-972e4abb-f7de-4040-9398-523737d20e42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Refreshing network info cache for port aa75ca22-e976-4c62-b1e2-cc57fac51dec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.134 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Start _get_guest_xml network_info=[{"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.142 253542 WARNING nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.155 253542 DEBUG nova.virt.libvirt.host [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.156 253542 DEBUG nova.virt.libvirt.host [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.161 253542 DEBUG nova.virt.libvirt.host [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.162 253542 DEBUG nova.virt.libvirt.host [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.163 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.163 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.164 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.165 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.165 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.166 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.167 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.167 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.168 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.168 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.169 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.170 253542 DEBUG nova.virt.hardware [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.175 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:10:27 compute-0 ceph-mon[75015]: pgmap v2701: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:10:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:10:27 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174543525' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.654 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.677 253542 DEBUG nova.storage.rbd_utils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image e30f8c90-01de-40a5-8c04-289a035fca22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:10:27 compute-0 nova_compute[253538]: 2025-11-25 09:10:27.681 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:10:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:10:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/197990714' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.149 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.154 253542 DEBUG nova.virt.libvirt.vif [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-640466180',display_name='tempest-TestGettingAddress-server-640466180',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-640466180',id=145,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-vi0tehjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:10:16Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=e30f8c90-01de-40a5-8c04-289a035fca22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.155 253542 DEBUG nova.network.os_vif_util [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.157 253542 DEBUG nova.network.os_vif_util [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:7a:85,bridge_name='br-int',has_traffic_filtering=True,id=a2ae2d19-2b35-4e83-b6ba-9f037762a501,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ae2d19-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.158 253542 DEBUG nova.virt.libvirt.vif [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-640466180',display_name='tempest-TestGettingAddress-server-640466180',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-640466180',id=145,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-vi0tehjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:10:16Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=e30f8c90-01de-40a5-8c04-289a035fca22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.159 253542 DEBUG nova.network.os_vif_util [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.160 253542 DEBUG nova.network.os_vif_util [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:24:53,bridge_name='br-int',has_traffic_filtering=True,id=aa75ca22-e976-4c62-b1e2-cc57fac51dec,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa75ca22-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.162 253542 DEBUG nova.objects.instance [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid e30f8c90-01de-40a5-8c04-289a035fca22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.182 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:10:28 compute-0 nova_compute[253538]:   <uuid>e30f8c90-01de-40a5-8c04-289a035fca22</uuid>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   <name>instance-00000091</name>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <nova:name>tempest-TestGettingAddress-server-640466180</nova:name>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:10:27</nova:creationTime>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:10:28 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:10:28 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:10:28 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:10:28 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:10:28 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:10:28 compute-0 nova_compute[253538]:         <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 09:10:28 compute-0 nova_compute[253538]:         <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:10:28 compute-0 nova_compute[253538]:         <nova:port uuid="a2ae2d19-2b35-4e83-b6ba-9f037762a501">
Nov 25 09:10:28 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:10:28 compute-0 nova_compute[253538]:         <nova:port uuid="aa75ca22-e976-4c62-b1e2-cc57fac51dec">
Nov 25 09:10:28 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe81:2453" ipVersion="6"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <system>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <entry name="serial">e30f8c90-01de-40a5-8c04-289a035fca22</entry>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <entry name="uuid">e30f8c90-01de-40a5-8c04-289a035fca22</entry>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     </system>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   <os>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   </os>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   <features>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   </features>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/e30f8c90-01de-40a5-8c04-289a035fca22_disk">
Nov 25 09:10:28 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       </source>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:10:28 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/e30f8c90-01de-40a5-8c04-289a035fca22_disk.config">
Nov 25 09:10:28 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       </source>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:10:28 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:1a:7a:85"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <target dev="tapa2ae2d19-2b"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:81:24:53"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <target dev="tapaa75ca22-e9"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22/console.log" append="off"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <video>
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     </video>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:10:28 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:10:28 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:10:28 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:10:28 compute-0 nova_compute[253538]: </domain>
Nov 25 09:10:28 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.183 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Preparing to wait for external event network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.184 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.184 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.184 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.184 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Preparing to wait for external event network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.185 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.185 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.186 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.187 253542 DEBUG nova.virt.libvirt.vif [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-640466180',display_name='tempest-TestGettingAddress-server-640466180',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-640466180',id=145,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-vi0tehjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:10:16Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=e30f8c90-01de-40a5-8c04-289a035fca22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.187 253542 DEBUG nova.network.os_vif_util [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.188 253542 DEBUG nova.network.os_vif_util [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:7a:85,bridge_name='br-int',has_traffic_filtering=True,id=a2ae2d19-2b35-4e83-b6ba-9f037762a501,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ae2d19-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.188 253542 DEBUG os_vif [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:7a:85,bridge_name='br-int',has_traffic_filtering=True,id=a2ae2d19-2b35-4e83-b6ba-9f037762a501,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ae2d19-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.189 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.190 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.190 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.194 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.195 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa2ae2d19-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.196 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa2ae2d19-2b, col_values=(('external_ids', {'iface-id': 'a2ae2d19-2b35-4e83-b6ba-9f037762a501', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:7a:85', 'vm-uuid': 'e30f8c90-01de-40a5-8c04-289a035fca22'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:28 compute-0 NetworkManager[48915]: <info>  [1764061828.1984] manager: (tapa2ae2d19-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/634)
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.199 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.203 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.204 253542 INFO os_vif [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:7a:85,bridge_name='br-int',has_traffic_filtering=True,id=a2ae2d19-2b35-4e83-b6ba-9f037762a501,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ae2d19-2b')
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.205 253542 DEBUG nova.virt.libvirt.vif [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-640466180',display_name='tempest-TestGettingAddress-server-640466180',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-640466180',id=145,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-vi0tehjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:10:16Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=e30f8c90-01de-40a5-8c04-289a035fca22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.205 253542 DEBUG nova.network.os_vif_util [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.206 253542 DEBUG nova.network.os_vif_util [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:24:53,bridge_name='br-int',has_traffic_filtering=True,id=aa75ca22-e976-4c62-b1e2-cc57fac51dec,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa75ca22-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.206 253542 DEBUG os_vif [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:24:53,bridge_name='br-int',has_traffic_filtering=True,id=aa75ca22-e976-4c62-b1e2-cc57fac51dec,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa75ca22-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.207 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.207 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.207 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.209 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.209 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa75ca22-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.210 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa75ca22-e9, col_values=(('external_ids', {'iface-id': 'aa75ca22-e976-4c62-b1e2-cc57fac51dec', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:24:53', 'vm-uuid': 'e30f8c90-01de-40a5-8c04-289a035fca22'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.211 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:28 compute-0 NetworkManager[48915]: <info>  [1764061828.2119] manager: (tapaa75ca22-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/635)
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.213 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.216 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.217 253542 INFO os_vif [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:24:53,bridge_name='br-int',has_traffic_filtering=True,id=aa75ca22-e976-4c62-b1e2-cc57fac51dec,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa75ca22-e9')
Nov 25 09:10:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.277 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.277 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.277 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:1a:7a:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.278 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:81:24:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.278 253542 INFO nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Using config drive
Nov 25 09:10:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2702: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.299 253542 DEBUG nova.storage.rbd_utils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image e30f8c90-01de-40a5-8c04-289a035fca22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:10:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4174543525' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:10:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/197990714' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.613 253542 INFO nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Creating config drive at /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22/disk.config
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.624 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpio18bur_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.683 253542 DEBUG nova.network.neutron [req-748d4071-a155-4e74-960c-86c19bc79b5c req-972e4abb-f7de-4040-9398-523737d20e42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updated VIF entry in instance network info cache for port aa75ca22-e976-4c62-b1e2-cc57fac51dec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.684 253542 DEBUG nova.network.neutron [req-748d4071-a155-4e74-960c-86c19bc79b5c req-972e4abb-f7de-4040-9398-523737d20e42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updating instance_info_cache with network_info: [{"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.705 253542 DEBUG oslo_concurrency.lockutils [req-748d4071-a155-4e74-960c-86c19bc79b5c req-972e4abb-f7de-4040-9398-523737d20e42 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.797 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpio18bur_" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.830 253542 DEBUG nova.storage.rbd_utils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image e30f8c90-01de-40a5-8c04-289a035fca22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:10:28 compute-0 nova_compute[253538]: 2025-11-25 09:10:28.834 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22/disk.config e30f8c90-01de-40a5-8c04-289a035fca22_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:10:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:10:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/392999834' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:10:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:10:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/392999834' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.029 253542 DEBUG oslo_concurrency.processutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22/disk.config e30f8c90-01de-40a5-8c04-289a035fca22_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.030 253542 INFO nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Deleting local config drive /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22/disk.config because it was imported into RBD.
Nov 25 09:10:29 compute-0 NetworkManager[48915]: <info>  [1764061829.0885] manager: (tapa2ae2d19-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/636)
Nov 25 09:10:29 compute-0 kernel: tapa2ae2d19-2b: entered promiscuous mode
Nov 25 09:10:29 compute-0 ovn_controller[152859]: 2025-11-25T09:10:29Z|01539|binding|INFO|Claiming lport a2ae2d19-2b35-4e83-b6ba-9f037762a501 for this chassis.
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.096 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:29 compute-0 ovn_controller[152859]: 2025-11-25T09:10:29Z|01540|binding|INFO|a2ae2d19-2b35-4e83-b6ba-9f037762a501: Claiming fa:16:3e:1a:7a:85 10.100.0.3
Nov 25 09:10:29 compute-0 NetworkManager[48915]: <info>  [1764061829.1112] manager: (tapaa75ca22-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/637)
Nov 25 09:10:29 compute-0 kernel: tapaa75ca22-e9: entered promiscuous mode
Nov 25 09:10:29 compute-0 ovn_controller[152859]: 2025-11-25T09:10:29Z|01541|binding|INFO|Setting lport a2ae2d19-2b35-4e83-b6ba-9f037762a501 ovn-installed in OVS
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.156 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:29 compute-0 ovn_controller[152859]: 2025-11-25T09:10:29Z|01542|if_status|INFO|Not updating pb chassis for aa75ca22-e976-4c62-b1e2-cc57fac51dec now as sb is readonly
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.171 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:29 compute-0 systemd-udevd[407844]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:10:29 compute-0 systemd-udevd[407843]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:10:29 compute-0 systemd-machined[215790]: New machine qemu-175-instance-00000091.
Nov 25 09:10:29 compute-0 NetworkManager[48915]: <info>  [1764061829.1882] device (tapa2ae2d19-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:10:29 compute-0 NetworkManager[48915]: <info>  [1764061829.1896] device (tapa2ae2d19-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:10:29 compute-0 NetworkManager[48915]: <info>  [1764061829.1928] device (tapaa75ca22-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:10:29 compute-0 NetworkManager[48915]: <info>  [1764061829.1937] device (tapaa75ca22-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:10:29 compute-0 systemd[1]: Started Virtual Machine qemu-175-instance-00000091.
Nov 25 09:10:29 compute-0 ovn_controller[152859]: 2025-11-25T09:10:29Z|01543|binding|INFO|Claiming lport aa75ca22-e976-4c62-b1e2-cc57fac51dec for this chassis.
Nov 25 09:10:29 compute-0 ovn_controller[152859]: 2025-11-25T09:10:29Z|01544|binding|INFO|aa75ca22-e976-4c62-b1e2-cc57fac51dec: Claiming fa:16:3e:81:24:53 2001:db8::f816:3eff:fe81:2453
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.273 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:7a:85 10.100.0.3'], port_security=['fa:16:3e:1a:7a:85 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e30f8c90-01de-40a5-8c04-289a035fca22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd62dafc0-5bc7-45ab-b9df-841bbdd333a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a8bb3c-a4c7-425c-b375-b8834bad3945, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a2ae2d19-2b35-4e83-b6ba-9f037762a501) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:10:29 compute-0 ovn_controller[152859]: 2025-11-25T09:10:29Z|01545|binding|INFO|Setting lport a2ae2d19-2b35-4e83-b6ba-9f037762a501 up in Southbound
Nov 25 09:10:29 compute-0 ovn_controller[152859]: 2025-11-25T09:10:29Z|01546|binding|INFO|Setting lport aa75ca22-e976-4c62-b1e2-cc57fac51dec ovn-installed in OVS
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.275 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a2ae2d19-2b35-4e83-b6ba-9f037762a501 in datapath 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 bound to our chassis
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.276 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.278 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.292 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f37fd433-e0d1-47b2-9232-8273e74a7663]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.319 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[42e14545-d7cd-40de-a588-dc8f377e804e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.323 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[79e5e28d-e72e-45e4-b5f8-1890b8cfae13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.348 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[15a2ba11-9cec-43ac-9c93-d751b8de0428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.368 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.368 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7337e8ca-1e8a-442e-bf18-fd7716e32396]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bf4f588-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cc:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712903, 'reachable_time': 15708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407860, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:29 compute-0 ceph-mon[75015]: pgmap v2702: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:10:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/392999834' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:10:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/392999834' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.382 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[806e44b8-ede8-4175-a6cc-2050f21ca3b9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7bf4f588-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712917, 'tstamp': 712917}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407861, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7bf4f588-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712921, 'tstamp': 712921}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407861, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.383 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bf4f588-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.385 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.386 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.387 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bf4f588-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.387 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.387 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7bf4f588-e0, col_values=(('external_ids', {'iface-id': '681702e6-167a-4d5b-9bcf-7f086c4e8bad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.387 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:10:29 compute-0 ovn_controller[152859]: 2025-11-25T09:10:29Z|01547|binding|INFO|Setting lport aa75ca22-e976-4c62-b1e2-cc57fac51dec up in Southbound
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.524 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:24:53 2001:db8::f816:3eff:fe81:2453'], port_security=['fa:16:3e:81:24:53 2001:db8::f816:3eff:fe81:2453'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe81:2453/64', 'neutron:device_id': 'e30f8c90-01de-40a5-8c04-289a035fca22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cc67e51-433c-4c50-9e32-11618e10c494', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd62dafc0-5bc7-45ab-b9df-841bbdd333a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b005d09-6aec-4c42-a2eb-38372057a2c4, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=aa75ca22-e976-4c62-b1e2-cc57fac51dec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.525 162739 INFO neutron.agent.ovn.metadata.agent [-] Port aa75ca22-e976-4c62-b1e2-cc57fac51dec in datapath 3cc67e51-433c-4c50-9e32-11618e10c494 bound to our chassis
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.527 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3cc67e51-433c-4c50-9e32-11618e10c494
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.543 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[96fcaac6-9410-47f3-8320-db30e254ce08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.573 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f19b6d94-fe1a-4ac4-bd1c-49d4fa333a0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.576 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[67417b36-de36-412f-9dee-8402683770f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.607 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b847d4d9-8978-4d0c-887c-dbf676680e07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.623 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3718c77d-419c-4d9e-8739-d60db5fc19f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3cc67e51-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:16:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712993, 'reachable_time': 20254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407907, 'error': None, 'target': 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.639 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5fdac41-7438-4b89-98c3-1548eec78c69]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3cc67e51-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713006, 'tstamp': 713006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407910, 'error': None, 'target': 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.640 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cc67e51-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.642 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.643 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.645 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cc67e51-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.645 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.646 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3cc67e51-40, col_values=(('external_ids', {'iface-id': '7cc0292c-b133-4cb7-8177-2a55fd592909'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:10:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:29.646 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.744 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061829.7437663, e30f8c90-01de-40a5-8c04-289a035fca22 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.744 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] VM Started (Lifecycle Event)
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.765 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.770 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061829.7439542, e30f8c90-01de-40a5-8c04-289a035fca22 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.770 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] VM Paused (Lifecycle Event)
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.787 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.790 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:10:29 compute-0 nova_compute[253538]: 2025-11-25 09:10:29.807 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:10:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2703: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.4 MiB/s wr, 18 op/s
Nov 25 09:10:30 compute-0 sudo[407912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:10:30 compute-0 sudo[407912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:30 compute-0 sudo[407912]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:30 compute-0 sudo[407937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:10:30 compute-0 sudo[407937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:30 compute-0 sudo[407937]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:30 compute-0 sudo[407962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:10:30 compute-0 sudo[407962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:30 compute-0 sudo[407962]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:30 compute-0 sudo[407987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:10:30 compute-0 sudo[407987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:31 compute-0 sudo[407987]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:10:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:10:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:10:31 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:10:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:10:31 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:10:31 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d3ae4a1e-1188-4d73-a34a-4fea98432f27 does not exist
Nov 25 09:10:31 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev f9a51ff7-bd13-4a9a-a77c-476abd801f98 does not exist
Nov 25 09:10:31 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 6ca4df0d-2b5a-486f-8a27-5c258e093420 does not exist
Nov 25 09:10:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:10:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:10:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:10:31 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:10:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:10:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:10:31 compute-0 sudo[408045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:10:31 compute-0 sudo[408045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:31 compute-0 sudo[408045]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:31 compute-0 sudo[408070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:10:31 compute-0 sudo[408070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:31 compute-0 sudo[408070]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:31 compute-0 ceph-mon[75015]: pgmap v2703: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.4 MiB/s wr, 18 op/s
Nov 25 09:10:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:10:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:10:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:10:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:10:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:10:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:10:31 compute-0 sudo[408095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:10:31 compute-0 sudo[408095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:31 compute-0 sudo[408095]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:31 compute-0 sudo[408120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:10:31 compute-0 sudo[408120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:31 compute-0 podman[408186]: 2025-11-25 09:10:31.828866655 +0000 UTC m=+0.051364877 container create 2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:10:31 compute-0 systemd[1]: Started libpod-conmon-2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e.scope.
Nov 25 09:10:31 compute-0 podman[408186]: 2025-11-25 09:10:31.805215223 +0000 UTC m=+0.027713435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:10:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:10:31 compute-0 podman[408186]: 2025-11-25 09:10:31.940289014 +0000 UTC m=+0.162787246 container init 2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:10:31 compute-0 podman[408186]: 2025-11-25 09:10:31.950562973 +0000 UTC m=+0.173061205 container start 2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:10:31 compute-0 podman[408186]: 2025-11-25 09:10:31.954901421 +0000 UTC m=+0.177399653 container attach 2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:10:31 compute-0 quizzical_cerf[408203]: 167 167
Nov 25 09:10:31 compute-0 systemd[1]: libpod-2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e.scope: Deactivated successfully.
Nov 25 09:10:31 compute-0 conmon[408203]: conmon 2eaad78e45e4e03f2073 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e.scope/container/memory.events
Nov 25 09:10:31 compute-0 podman[408186]: 2025-11-25 09:10:31.959789184 +0000 UTC m=+0.182287416 container died 2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 09:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-574f3d04aab82f2cf040c106de3bf2ff24e2fcf0e76438df3afac3554c474473-merged.mount: Deactivated successfully.
Nov 25 09:10:32 compute-0 podman[408186]: 2025-11-25 09:10:32.012030794 +0000 UTC m=+0.234528986 container remove 2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 25 09:10:32 compute-0 systemd[1]: libpod-conmon-2eaad78e45e4e03f2073b2aebc974c615bc4c836025814b0049408ed0cd37a1e.scope: Deactivated successfully.
Nov 25 09:10:32 compute-0 podman[408229]: 2025-11-25 09:10:32.234464641 +0000 UTC m=+0.046571327 container create a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:10:32 compute-0 systemd[1]: Started libpod-conmon-a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93.scope.
Nov 25 09:10:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2704: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 880 KiB/s wr, 8 op/s
Nov 25 09:10:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6de99040cd4c97045259785f2851eb070fa6080f37c890a4a542ff12f12b9e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6de99040cd4c97045259785f2851eb070fa6080f37c890a4a542ff12f12b9e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6de99040cd4c97045259785f2851eb070fa6080f37c890a4a542ff12f12b9e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6de99040cd4c97045259785f2851eb070fa6080f37c890a4a542ff12f12b9e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6de99040cd4c97045259785f2851eb070fa6080f37c890a4a542ff12f12b9e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:10:32 compute-0 podman[408229]: 2025-11-25 09:10:32.216185044 +0000 UTC m=+0.028291760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:10:32 compute-0 podman[408229]: 2025-11-25 09:10:32.330263015 +0000 UTC m=+0.142369741 container init a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:10:32 compute-0 podman[408229]: 2025-11-25 09:10:32.337658906 +0000 UTC m=+0.149765592 container start a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 09:10:32 compute-0 podman[408229]: 2025-11-25 09:10:32.344748879 +0000 UTC m=+0.156855585 container attach a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.859 253542 DEBUG nova.compute.manager [req-f6b4b0ad-9086-4c5e-a617-740052144201 req-7f7cc4ac-16c4-4654-9666-6ea6b122207c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.861 253542 DEBUG oslo_concurrency.lockutils [req-f6b4b0ad-9086-4c5e-a617-740052144201 req-7f7cc4ac-16c4-4654-9666-6ea6b122207c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.861 253542 DEBUG oslo_concurrency.lockutils [req-f6b4b0ad-9086-4c5e-a617-740052144201 req-7f7cc4ac-16c4-4654-9666-6ea6b122207c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.862 253542 DEBUG oslo_concurrency.lockutils [req-f6b4b0ad-9086-4c5e-a617-740052144201 req-7f7cc4ac-16c4-4654-9666-6ea6b122207c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.862 253542 DEBUG nova.compute.manager [req-f6b4b0ad-9086-4c5e-a617-740052144201 req-7f7cc4ac-16c4-4654-9666-6ea6b122207c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Processing event network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.916 253542 DEBUG nova.compute.manager [req-7cbd9d89-f379-4215-b917-800f2283ba59 req-149eaedd-f754-49fe-a153-140974a8ea84 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.916 253542 DEBUG oslo_concurrency.lockutils [req-7cbd9d89-f379-4215-b917-800f2283ba59 req-149eaedd-f754-49fe-a153-140974a8ea84 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.917 253542 DEBUG oslo_concurrency.lockutils [req-7cbd9d89-f379-4215-b917-800f2283ba59 req-149eaedd-f754-49fe-a153-140974a8ea84 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.917 253542 DEBUG oslo_concurrency.lockutils [req-7cbd9d89-f379-4215-b917-800f2283ba59 req-149eaedd-f754-49fe-a153-140974a8ea84 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.917 253542 DEBUG nova.compute.manager [req-7cbd9d89-f379-4215-b917-800f2283ba59 req-149eaedd-f754-49fe-a153-140974a8ea84 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Processing event network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.918 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.925 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061832.9254777, e30f8c90-01de-40a5-8c04-289a035fca22 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.926 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] VM Resumed (Lifecycle Event)
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.942 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.945 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.953 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.956 253542 INFO nova.virt.libvirt.driver [-] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Instance spawned successfully.
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.957 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.975 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.986 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.987 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.987 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.988 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.989 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:10:32 compute-0 nova_compute[253538]: 2025-11-25 09:10:32.989 253542 DEBUG nova.virt.libvirt.driver [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:10:33 compute-0 nova_compute[253538]: 2025-11-25 09:10:33.213 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:10:33 compute-0 ceph-mon[75015]: pgmap v2704: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 880 KiB/s wr, 8 op/s
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.408555) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061833408610, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2133, "num_deletes": 255, "total_data_size": 3400252, "memory_usage": 3475328, "flush_reason": "Manual Compaction"}
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061833431178, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3342084, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54387, "largest_seqno": 56519, "table_properties": {"data_size": 3332304, "index_size": 6209, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20116, "raw_average_key_size": 20, "raw_value_size": 3312743, "raw_average_value_size": 3380, "num_data_blocks": 273, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061620, "oldest_key_time": 1764061620, "file_creation_time": 1764061833, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 22659 microseconds, and 6479 cpu microseconds.
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.431230) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3342084 bytes OK
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.431252) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.432850) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.432865) EVENT_LOG_v1 {"time_micros": 1764061833432860, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.432884) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3391234, prev total WAL file size 3391234, number of live WAL files 2.
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.433827) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(3263KB)], [128(8279KB)]
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061833433856, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11819855, "oldest_snapshot_seqno": -1}
Nov 25 09:10:33 compute-0 modest_hamilton[408245]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:10:33 compute-0 modest_hamilton[408245]: --> relative data size: 1.0
Nov 25 09:10:33 compute-0 modest_hamilton[408245]: --> All data devices are unavailable
Nov 25 09:10:33 compute-0 systemd[1]: libpod-a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93.scope: Deactivated successfully.
Nov 25 09:10:33 compute-0 podman[408229]: 2025-11-25 09:10:33.470586083 +0000 UTC m=+1.282692769 container died a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:10:33 compute-0 systemd[1]: libpod-a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93.scope: Consumed 1.063s CPU time.
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7780 keys, 10126774 bytes, temperature: kUnknown
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061833500703, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 10126774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10076285, "index_size": 29960, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19461, "raw_key_size": 202445, "raw_average_key_size": 26, "raw_value_size": 9938845, "raw_average_value_size": 1277, "num_data_blocks": 1172, "num_entries": 7780, "num_filter_entries": 7780, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061833, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.500948) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10126774 bytes
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.502507) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.6 rd, 151.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.1 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 8304, records dropped: 524 output_compression: NoCompression
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.502526) EVENT_LOG_v1 {"time_micros": 1764061833502517, "job": 78, "event": "compaction_finished", "compaction_time_micros": 66930, "compaction_time_cpu_micros": 28995, "output_level": 6, "num_output_files": 1, "total_output_size": 10126774, "num_input_records": 8304, "num_output_records": 7780, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061833503199, "job": 78, "event": "table_file_deletion", "file_number": 130}
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061833505012, "job": 78, "event": "table_file_deletion", "file_number": 128}
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.433774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.505095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.505103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.505106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.505108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:10:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:10:33.505111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:10:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6de99040cd4c97045259785f2851eb070fa6080f37c890a4a542ff12f12b9e3-merged.mount: Deactivated successfully.
Nov 25 09:10:33 compute-0 podman[408229]: 2025-11-25 09:10:33.555131802 +0000 UTC m=+1.367238498 container remove a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:10:33 compute-0 systemd[1]: libpod-conmon-a9e4c76ce1d86957e6c6bf3215feba304b4e6b760c448bb24580ebde99528e93.scope: Deactivated successfully.
Nov 25 09:10:33 compute-0 sudo[408120]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:33 compute-0 sudo[408287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:10:33 compute-0 sudo[408287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:33 compute-0 sudo[408287]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:33 compute-0 sudo[408312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:10:33 compute-0 sudo[408312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:33 compute-0 nova_compute[253538]: 2025-11-25 09:10:33.719 253542 INFO nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Took 17.21 seconds to spawn the instance on the hypervisor.
Nov 25 09:10:33 compute-0 nova_compute[253538]: 2025-11-25 09:10:33.722 253542 DEBUG nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:10:33 compute-0 sudo[408312]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:33 compute-0 sudo[408337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:10:33 compute-0 sudo[408337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:33 compute-0 sudo[408337]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:33 compute-0 sudo[408362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:10:33 compute-0 sudo[408362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:33 compute-0 podman[408386]: 2025-11-25 09:10:33.969086675 +0000 UTC m=+0.065880332 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:10:34 compute-0 podman[408387]: 2025-11-25 09:10:34.002174704 +0000 UTC m=+0.098952400 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 25 09:10:34 compute-0 nova_compute[253538]: 2025-11-25 09:10:34.215 253542 INFO nova.compute.manager [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Took 19.23 seconds to build instance.
Nov 25 09:10:34 compute-0 podman[408465]: 2025-11-25 09:10:34.283655086 +0000 UTC m=+0.052080086 container create 490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 09:10:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2705: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 13 KiB/s wr, 7 op/s
Nov 25 09:10:34 compute-0 systemd[1]: Started libpod-conmon-490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab.scope.
Nov 25 09:10:34 compute-0 podman[408465]: 2025-11-25 09:10:34.258378139 +0000 UTC m=+0.026803179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:10:34 compute-0 nova_compute[253538]: 2025-11-25 09:10:34.370 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:10:34 compute-0 podman[408465]: 2025-11-25 09:10:34.396818213 +0000 UTC m=+0.165243243 container init 490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:10:34 compute-0 podman[408465]: 2025-11-25 09:10:34.406273929 +0000 UTC m=+0.174698929 container start 490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:10:34 compute-0 podman[408465]: 2025-11-25 09:10:34.409703762 +0000 UTC m=+0.178128772 container attach 490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:10:34 compute-0 infallible_moser[408481]: 167 167
Nov 25 09:10:34 compute-0 systemd[1]: libpod-490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab.scope: Deactivated successfully.
Nov 25 09:10:34 compute-0 podman[408465]: 2025-11-25 09:10:34.41620118 +0000 UTC m=+0.184626180 container died 490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 09:10:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b966882ed199a006a9dada10119e00576719353983316202abe14b7a8312945-merged.mount: Deactivated successfully.
Nov 25 09:10:34 compute-0 podman[408465]: 2025-11-25 09:10:34.454785658 +0000 UTC m=+0.223210648 container remove 490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:10:34 compute-0 nova_compute[253538]: 2025-11-25 09:10:34.487 253542 DEBUG oslo_concurrency.lockutils [None req-d92035a5-8b8f-4b18-a7ed-f45f7573b37f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:10:34 compute-0 systemd[1]: libpod-conmon-490e86bd9eacba91a7edf3676c2b661f50b4a9d1e62441f203ec0d6173e191ab.scope: Deactivated successfully.
Nov 25 09:10:34 compute-0 podman[408505]: 2025-11-25 09:10:34.674575303 +0000 UTC m=+0.041052417 container create 8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:10:34 compute-0 systemd[1]: Started libpod-conmon-8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415.scope.
Nov 25 09:10:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:10:34 compute-0 podman[408505]: 2025-11-25 09:10:34.657644943 +0000 UTC m=+0.024122077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:10:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a93cbee2b1e38cef28b54b4274c9179936b4ee426d3e41ef04a644da9a77cefa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:10:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a93cbee2b1e38cef28b54b4274c9179936b4ee426d3e41ef04a644da9a77cefa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:10:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a93cbee2b1e38cef28b54b4274c9179936b4ee426d3e41ef04a644da9a77cefa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:10:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a93cbee2b1e38cef28b54b4274c9179936b4ee426d3e41ef04a644da9a77cefa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:10:34 compute-0 podman[408505]: 2025-11-25 09:10:34.768723722 +0000 UTC m=+0.135200866 container init 8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jang, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:10:34 compute-0 podman[408505]: 2025-11-25 09:10:34.778670713 +0000 UTC m=+0.145147827 container start 8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:10:34 compute-0 podman[408505]: 2025-11-25 09:10:34.783116784 +0000 UTC m=+0.149593948 container attach 8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:10:34 compute-0 nova_compute[253538]: 2025-11-25 09:10:34.948 253542 DEBUG nova.compute.manager [req-d53d2619-a0da-44b4-826d-9fc60f6e8ca0 req-42fc910c-a01e-4b3b-809d-5c8f79f7feac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:10:34 compute-0 nova_compute[253538]: 2025-11-25 09:10:34.950 253542 DEBUG oslo_concurrency.lockutils [req-d53d2619-a0da-44b4-826d-9fc60f6e8ca0 req-42fc910c-a01e-4b3b-809d-5c8f79f7feac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:10:34 compute-0 nova_compute[253538]: 2025-11-25 09:10:34.950 253542 DEBUG oslo_concurrency.lockutils [req-d53d2619-a0da-44b4-826d-9fc60f6e8ca0 req-42fc910c-a01e-4b3b-809d-5c8f79f7feac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:10:34 compute-0 nova_compute[253538]: 2025-11-25 09:10:34.951 253542 DEBUG oslo_concurrency.lockutils [req-d53d2619-a0da-44b4-826d-9fc60f6e8ca0 req-42fc910c-a01e-4b3b-809d-5c8f79f7feac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:10:34 compute-0 nova_compute[253538]: 2025-11-25 09:10:34.951 253542 DEBUG nova.compute.manager [req-d53d2619-a0da-44b4-826d-9fc60f6e8ca0 req-42fc910c-a01e-4b3b-809d-5c8f79f7feac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] No waiting events found dispatching network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:10:34 compute-0 nova_compute[253538]: 2025-11-25 09:10:34.952 253542 WARNING nova.compute.manager [req-d53d2619-a0da-44b4-826d-9fc60f6e8ca0 req-42fc910c-a01e-4b3b-809d-5c8f79f7feac b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received unexpected event network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 for instance with vm_state active and task_state None.
Nov 25 09:10:35 compute-0 nova_compute[253538]: 2025-11-25 09:10:35.030 253542 DEBUG nova.compute.manager [req-1663f7bf-c89c-4ef7-9694-273bea9e0687 req-f468c3d8-2ee7-48cb-b38e-dcf45c8c4970 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:10:35 compute-0 nova_compute[253538]: 2025-11-25 09:10:35.030 253542 DEBUG oslo_concurrency.lockutils [req-1663f7bf-c89c-4ef7-9694-273bea9e0687 req-f468c3d8-2ee7-48cb-b38e-dcf45c8c4970 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:10:35 compute-0 nova_compute[253538]: 2025-11-25 09:10:35.031 253542 DEBUG oslo_concurrency.lockutils [req-1663f7bf-c89c-4ef7-9694-273bea9e0687 req-f468c3d8-2ee7-48cb-b38e-dcf45c8c4970 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:10:35 compute-0 nova_compute[253538]: 2025-11-25 09:10:35.031 253542 DEBUG oslo_concurrency.lockutils [req-1663f7bf-c89c-4ef7-9694-273bea9e0687 req-f468c3d8-2ee7-48cb-b38e-dcf45c8c4970 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:10:35 compute-0 nova_compute[253538]: 2025-11-25 09:10:35.031 253542 DEBUG nova.compute.manager [req-1663f7bf-c89c-4ef7-9694-273bea9e0687 req-f468c3d8-2ee7-48cb-b38e-dcf45c8c4970 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] No waiting events found dispatching network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:10:35 compute-0 nova_compute[253538]: 2025-11-25 09:10:35.032 253542 WARNING nova.compute.manager [req-1663f7bf-c89c-4ef7-9694-273bea9e0687 req-f468c3d8-2ee7-48cb-b38e-dcf45c8c4970 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received unexpected event network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec for instance with vm_state active and task_state None.
Nov 25 09:10:35 compute-0 ceph-mon[75015]: pgmap v2705: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 13 KiB/s wr, 7 op/s
Nov 25 09:10:35 compute-0 brave_jang[408522]: {
Nov 25 09:10:35 compute-0 brave_jang[408522]:     "0": [
Nov 25 09:10:35 compute-0 brave_jang[408522]:         {
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "devices": [
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "/dev/loop3"
Nov 25 09:10:35 compute-0 brave_jang[408522]:             ],
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "lv_name": "ceph_lv0",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "lv_size": "21470642176",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "name": "ceph_lv0",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "tags": {
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.cluster_name": "ceph",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.crush_device_class": "",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.encrypted": "0",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.osd_id": "0",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.type": "block",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.vdo": "0"
Nov 25 09:10:35 compute-0 brave_jang[408522]:             },
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "type": "block",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "vg_name": "ceph_vg0"
Nov 25 09:10:35 compute-0 brave_jang[408522]:         }
Nov 25 09:10:35 compute-0 brave_jang[408522]:     ],
Nov 25 09:10:35 compute-0 brave_jang[408522]:     "1": [
Nov 25 09:10:35 compute-0 brave_jang[408522]:         {
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "devices": [
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "/dev/loop4"
Nov 25 09:10:35 compute-0 brave_jang[408522]:             ],
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "lv_name": "ceph_lv1",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "lv_size": "21470642176",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "name": "ceph_lv1",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "tags": {
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.cluster_name": "ceph",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.crush_device_class": "",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.encrypted": "0",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.osd_id": "1",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.type": "block",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.vdo": "0"
Nov 25 09:10:35 compute-0 brave_jang[408522]:             },
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "type": "block",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "vg_name": "ceph_vg1"
Nov 25 09:10:35 compute-0 brave_jang[408522]:         }
Nov 25 09:10:35 compute-0 brave_jang[408522]:     ],
Nov 25 09:10:35 compute-0 brave_jang[408522]:     "2": [
Nov 25 09:10:35 compute-0 brave_jang[408522]:         {
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "devices": [
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "/dev/loop5"
Nov 25 09:10:35 compute-0 brave_jang[408522]:             ],
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "lv_name": "ceph_lv2",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "lv_size": "21470642176",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "name": "ceph_lv2",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "tags": {
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.cluster_name": "ceph",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.crush_device_class": "",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.encrypted": "0",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.osd_id": "2",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.type": "block",
Nov 25 09:10:35 compute-0 brave_jang[408522]:                 "ceph.vdo": "0"
Nov 25 09:10:35 compute-0 brave_jang[408522]:             },
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "type": "block",
Nov 25 09:10:35 compute-0 brave_jang[408522]:             "vg_name": "ceph_vg2"
Nov 25 09:10:35 compute-0 brave_jang[408522]:         }
Nov 25 09:10:35 compute-0 brave_jang[408522]:     ]
Nov 25 09:10:35 compute-0 brave_jang[408522]: }
Nov 25 09:10:35 compute-0 systemd[1]: libpod-8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415.scope: Deactivated successfully.
Nov 25 09:10:35 compute-0 podman[408505]: 2025-11-25 09:10:35.542206289 +0000 UTC m=+0.908683403 container died 8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 09:10:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a93cbee2b1e38cef28b54b4274c9179936b4ee426d3e41ef04a644da9a77cefa-merged.mount: Deactivated successfully.
Nov 25 09:10:35 compute-0 podman[408505]: 2025-11-25 09:10:35.599995459 +0000 UTC m=+0.966472563 container remove 8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jang, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:10:35 compute-0 systemd[1]: libpod-conmon-8009d928d94acd01ff5d146534eaf9557c30a13f5d7974a9db4536fa25978415.scope: Deactivated successfully.
Nov 25 09:10:35 compute-0 sudo[408362]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:35 compute-0 sudo[408545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:10:35 compute-0 sudo[408545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:35 compute-0 sudo[408545]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:35 compute-0 sudo[408570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:10:35 compute-0 sudo[408570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:35 compute-0 sudo[408570]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:35 compute-0 sudo[408595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:10:35 compute-0 sudo[408595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:35 compute-0 sudo[408595]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:35 compute-0 sudo[408620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:10:35 compute-0 sudo[408620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2706: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 552 KiB/s rd, 12 KiB/s wr, 27 op/s
Nov 25 09:10:36 compute-0 podman[408681]: 2025-11-25 09:10:36.195175859 +0000 UTC m=+0.023082209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:10:36 compute-0 podman[408681]: 2025-11-25 09:10:36.343604434 +0000 UTC m=+0.171510744 container create d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 09:10:36 compute-0 systemd[1]: Started libpod-conmon-d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8.scope.
Nov 25 09:10:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:10:36 compute-0 podman[408681]: 2025-11-25 09:10:36.493734585 +0000 UTC m=+0.321640905 container init d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 09:10:36 compute-0 podman[408681]: 2025-11-25 09:10:36.500132719 +0000 UTC m=+0.328039029 container start d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 09:10:36 compute-0 nice_lehmann[408698]: 167 167
Nov 25 09:10:36 compute-0 systemd[1]: libpod-d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8.scope: Deactivated successfully.
Nov 25 09:10:36 compute-0 podman[408681]: 2025-11-25 09:10:36.511055275 +0000 UTC m=+0.338961615 container attach d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 25 09:10:36 compute-0 podman[408681]: 2025-11-25 09:10:36.51158016 +0000 UTC m=+0.339486470 container died d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:10:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-57e5ed7f3d547be82b3edeef16ad9012fc615701d73719561911e2f911a69afb-merged.mount: Deactivated successfully.
Nov 25 09:10:36 compute-0 podman[408681]: 2025-11-25 09:10:36.572415813 +0000 UTC m=+0.400322123 container remove d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 09:10:36 compute-0 systemd[1]: libpod-conmon-d5e9e777ab698b0cf2b5180db6440d53d2e12a229020adf7c2bb65ddd58ce6d8.scope: Deactivated successfully.
Nov 25 09:10:36 compute-0 podman[408722]: 2025-11-25 09:10:36.836085891 +0000 UTC m=+0.117304549 container create ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 09:10:36 compute-0 podman[408722]: 2025-11-25 09:10:36.746423594 +0000 UTC m=+0.027642282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:10:36 compute-0 systemd[1]: Started libpod-conmon-ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2.scope.
Nov 25 09:10:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef02148c62232d5cc6378d6968bf96eac2b4c4c2436662085f17a539a624d62e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef02148c62232d5cc6378d6968bf96eac2b4c4c2436662085f17a539a624d62e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef02148c62232d5cc6378d6968bf96eac2b4c4c2436662085f17a539a624d62e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef02148c62232d5cc6378d6968bf96eac2b4c4c2436662085f17a539a624d62e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:10:37 compute-0 podman[408722]: 2025-11-25 09:10:37.141275357 +0000 UTC m=+0.422494025 container init ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:10:37 compute-0 podman[408722]: 2025-11-25 09:10:37.148861724 +0000 UTC m=+0.430080372 container start ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mendel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:10:37 compute-0 podman[408722]: 2025-11-25 09:10:37.216989365 +0000 UTC m=+0.498208013 container attach ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mendel, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:10:37 compute-0 ceph-mon[75015]: pgmap v2706: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 552 KiB/s rd, 12 KiB/s wr, 27 op/s
Nov 25 09:10:38 compute-0 magical_mendel[408738]: {
Nov 25 09:10:38 compute-0 magical_mendel[408738]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:10:38 compute-0 magical_mendel[408738]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:10:38 compute-0 magical_mendel[408738]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:10:38 compute-0 magical_mendel[408738]:         "osd_id": 1,
Nov 25 09:10:38 compute-0 magical_mendel[408738]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:10:38 compute-0 magical_mendel[408738]:         "type": "bluestore"
Nov 25 09:10:38 compute-0 magical_mendel[408738]:     },
Nov 25 09:10:38 compute-0 magical_mendel[408738]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:10:38 compute-0 magical_mendel[408738]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:10:38 compute-0 magical_mendel[408738]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:10:38 compute-0 magical_mendel[408738]:         "osd_id": 2,
Nov 25 09:10:38 compute-0 magical_mendel[408738]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:10:38 compute-0 magical_mendel[408738]:         "type": "bluestore"
Nov 25 09:10:38 compute-0 magical_mendel[408738]:     },
Nov 25 09:10:38 compute-0 magical_mendel[408738]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:10:38 compute-0 magical_mendel[408738]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:10:38 compute-0 magical_mendel[408738]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:10:38 compute-0 magical_mendel[408738]:         "osd_id": 0,
Nov 25 09:10:38 compute-0 magical_mendel[408738]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:10:38 compute-0 magical_mendel[408738]:         "type": "bluestore"
Nov 25 09:10:38 compute-0 magical_mendel[408738]:     }
Nov 25 09:10:38 compute-0 magical_mendel[408738]: }
Nov 25 09:10:38 compute-0 systemd[1]: libpod-ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2.scope: Deactivated successfully.
Nov 25 09:10:38 compute-0 systemd[1]: libpod-ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2.scope: Consumed 1.044s CPU time.
Nov 25 09:10:38 compute-0 conmon[408738]: conmon ac6c8c4529a71e2fe6f8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2.scope/container/memory.events
Nov 25 09:10:38 compute-0 podman[408722]: 2025-11-25 09:10:38.240093858 +0000 UTC m=+1.521312506 container died ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mendel, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:10:38 compute-0 nova_compute[253538]: 2025-11-25 09:10:38.239 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:10:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef02148c62232d5cc6378d6968bf96eac2b4c4c2436662085f17a539a624d62e-merged.mount: Deactivated successfully.
Nov 25 09:10:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2707: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 09:10:38 compute-0 podman[408722]: 2025-11-25 09:10:38.315009844 +0000 UTC m=+1.596228512 container remove ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:10:38 compute-0 systemd[1]: libpod-conmon-ac6c8c4529a71e2fe6f883bfdc09542969c9eb789474d76bc9bf6ab8458132d2.scope: Deactivated successfully.
Nov 25 09:10:38 compute-0 sudo[408620]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:10:38 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:10:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:10:38 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:10:38 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 87366dac-6a43-411c-ab7e-bd81e3a5a69e does not exist
Nov 25 09:10:38 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 7e7c539f-3c16-4855-aa9b-8cc3e4a0d495 does not exist
Nov 25 09:10:38 compute-0 sudo[408781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:10:38 compute-0 sudo[408781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:38 compute-0 sudo[408781]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:38 compute-0 sudo[408806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:10:38 compute-0 sudo[408806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:10:38 compute-0 sudo[408806]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:38 compute-0 nova_compute[253538]: 2025-11-25 09:10:38.847 253542 DEBUG nova.compute.manager [req-89092815-01a8-4ab4-97a2-68ad2cfbcb72 req-c7758340-ba60-49ce-bc7d-7b63f4636121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-changed-a2ae2d19-2b35-4e83-b6ba-9f037762a501 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:10:38 compute-0 nova_compute[253538]: 2025-11-25 09:10:38.848 253542 DEBUG nova.compute.manager [req-89092815-01a8-4ab4-97a2-68ad2cfbcb72 req-c7758340-ba60-49ce-bc7d-7b63f4636121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Refreshing instance network info cache due to event network-changed-a2ae2d19-2b35-4e83-b6ba-9f037762a501. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:10:38 compute-0 nova_compute[253538]: 2025-11-25 09:10:38.848 253542 DEBUG oslo_concurrency.lockutils [req-89092815-01a8-4ab4-97a2-68ad2cfbcb72 req-c7758340-ba60-49ce-bc7d-7b63f4636121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:10:38 compute-0 nova_compute[253538]: 2025-11-25 09:10:38.849 253542 DEBUG oslo_concurrency.lockutils [req-89092815-01a8-4ab4-97a2-68ad2cfbcb72 req-c7758340-ba60-49ce-bc7d-7b63f4636121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:10:38 compute-0 nova_compute[253538]: 2025-11-25 09:10:38.849 253542 DEBUG nova.network.neutron [req-89092815-01a8-4ab4-97a2-68ad2cfbcb72 req-c7758340-ba60-49ce-bc7d-7b63f4636121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Refreshing network info cache for port a2ae2d19-2b35-4e83-b6ba-9f037762a501 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:10:39 compute-0 ceph-mon[75015]: pgmap v2707: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 09:10:39 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:10:39 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:10:39 compute-0 nova_compute[253538]: 2025-11-25 09:10:39.371 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2708: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Nov 25 09:10:40 compute-0 nova_compute[253538]: 2025-11-25 09:10:40.823 253542 DEBUG nova.network.neutron [req-89092815-01a8-4ab4-97a2-68ad2cfbcb72 req-c7758340-ba60-49ce-bc7d-7b63f4636121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updated VIF entry in instance network info cache for port a2ae2d19-2b35-4e83-b6ba-9f037762a501. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:10:40 compute-0 nova_compute[253538]: 2025-11-25 09:10:40.823 253542 DEBUG nova.network.neutron [req-89092815-01a8-4ab4-97a2-68ad2cfbcb72 req-c7758340-ba60-49ce-bc7d-7b63f4636121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updating instance_info_cache with network_info: [{"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:10:40 compute-0 podman[408831]: 2025-11-25 09:10:40.876675801 +0000 UTC m=+0.116534879 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 09:10:40 compute-0 nova_compute[253538]: 2025-11-25 09:10:40.976 253542 DEBUG oslo_concurrency.lockutils [req-89092815-01a8-4ab4-97a2-68ad2cfbcb72 req-c7758340-ba60-49ce-bc7d-7b63f4636121 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:10:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:41.095 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:10:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:41.095 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:10:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:41.096 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:10:41 compute-0 ceph-mon[75015]: pgmap v2708: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Nov 25 09:10:42 compute-0 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 09:10:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2709: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.5 KiB/s wr, 70 op/s
Nov 25 09:10:43 compute-0 nova_compute[253538]: 2025-11-25 09:10:43.244 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:10:43 compute-0 ceph-mon[75015]: pgmap v2709: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.5 KiB/s wr, 70 op/s
Nov 25 09:10:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2710: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.4 KiB/s wr, 69 op/s
Nov 25 09:10:44 compute-0 nova_compute[253538]: 2025-11-25 09:10:44.374 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:45 compute-0 ceph-mon[75015]: pgmap v2710: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.4 KiB/s wr, 69 op/s
Nov 25 09:10:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2711: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.4 KiB/s wr, 67 op/s
Nov 25 09:10:46 compute-0 ceph-mon[75015]: pgmap v2711: 321 pgs: 321 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.4 KiB/s wr, 67 op/s
Nov 25 09:10:47 compute-0 ovn_controller[152859]: 2025-11-25T09:10:47Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:7a:85 10.100.0.3
Nov 25 09:10:47 compute-0 ovn_controller[152859]: 2025-11-25T09:10:47Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:7a:85 10.100.0.3
Nov 25 09:10:47 compute-0 sshd-session[408862]: Invalid user tony from 45.202.211.6 port 58000
Nov 25 09:10:47 compute-0 sshd-session[408859]: Invalid user frappe from 45.78.217.205 port 55298
Nov 25 09:10:47 compute-0 sshd-session[408862]: Received disconnect from 45.202.211.6 port 58000:11: Bye Bye [preauth]
Nov 25 09:10:47 compute-0 sshd-session[408862]: Disconnected from invalid user tony 45.202.211.6 port 58000 [preauth]
Nov 25 09:10:47 compute-0 sshd-session[408859]: Received disconnect from 45.78.217.205 port 55298:11: Bye Bye [preauth]
Nov 25 09:10:47 compute-0 sshd-session[408859]: Disconnected from invalid user frappe 45.78.217.205 port 55298 [preauth]
Nov 25 09:10:48 compute-0 nova_compute[253538]: 2025-11-25 09:10:48.246 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:10:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2712: 321 pgs: 321 active+clean; 229 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.7 MiB/s wr, 82 op/s
Nov 25 09:10:49 compute-0 nova_compute[253538]: 2025-11-25 09:10:49.161 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:10:49 compute-0 nova_compute[253538]: 2025-11-25 09:10:49.162 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:10:49 compute-0 ceph-mon[75015]: pgmap v2712: 321 pgs: 321 active+clean; 229 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.7 MiB/s wr, 82 op/s
Nov 25 09:10:49 compute-0 nova_compute[253538]: 2025-11-25 09:10:49.377 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2713: 321 pgs: 321 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Nov 25 09:10:51 compute-0 ceph-mon[75015]: pgmap v2713: 321 pgs: 321 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Nov 25 09:10:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2714: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:10:52 compute-0 sshd-session[408864]: Invalid user hduser from 193.32.162.151 port 49982
Nov 25 09:10:52 compute-0 sshd-session[408864]: Connection closed by invalid user hduser 193.32.162.151 port 49982 [preauth]
Nov 25 09:10:53 compute-0 nova_compute[253538]: 2025-11-25 09:10:53.250 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:10:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:10:53
Nov 25 09:10:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:10:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:10:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', '.mgr', 'vms', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 25 09:10:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:10:53 compute-0 ceph-mon[75015]: pgmap v2714: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:10:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:10:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:10:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:10:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:10:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:10:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:10:53 compute-0 nova_compute[253538]: 2025-11-25 09:10:53.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:10:53 compute-0 nova_compute[253538]: 2025-11-25 09:10:53.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:10:53 compute-0 nova_compute[253538]: 2025-11-25 09:10:53.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:10:53 compute-0 nova_compute[253538]: 2025-11-25 09:10:53.824 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:10:53 compute-0 nova_compute[253538]: 2025-11-25 09:10:53.825 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:10:53 compute-0 nova_compute[253538]: 2025-11-25 09:10:53.825 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 09:10:53 compute-0 nova_compute[253538]: 2025-11-25 09:10:53.825 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 23232702-7686-425d-8921-7aa6192ca1c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:10:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:10:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:10:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:10:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:10:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:10:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:10:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:10:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:10:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:10:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:10:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2715: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 09:10:54 compute-0 nova_compute[253538]: 2025-11-25 09:10:54.380 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:55 compute-0 ceph-mon[75015]: pgmap v2715: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 09:10:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2716: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 09:10:57 compute-0 ceph-mon[75015]: pgmap v2716: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 09:10:58 compute-0 nova_compute[253538]: 2025-11-25 09:10:58.253 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:10:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2717: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:10:58 compute-0 ceph-mon[75015]: pgmap v2717: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.073 253542 DEBUG nova.compute.manager [req-25a4439f-b7c1-4aca-b54f-02daff1a0216 req-7adb7897-5e2b-4786-810c-d9558e68676d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-changed-a2ae2d19-2b35-4e83-b6ba-9f037762a501 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.073 253542 DEBUG nova.compute.manager [req-25a4439f-b7c1-4aca-b54f-02daff1a0216 req-7adb7897-5e2b-4786-810c-d9558e68676d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Refreshing instance network info cache due to event network-changed-a2ae2d19-2b35-4e83-b6ba-9f037762a501. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.074 253542 DEBUG oslo_concurrency.lockutils [req-25a4439f-b7c1-4aca-b54f-02daff1a0216 req-7adb7897-5e2b-4786-810c-d9558e68676d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.074 253542 DEBUG oslo_concurrency.lockutils [req-25a4439f-b7c1-4aca-b54f-02daff1a0216 req-7adb7897-5e2b-4786-810c-d9558e68676d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.074 253542 DEBUG nova.network.neutron [req-25a4439f-b7c1-4aca-b54f-02daff1a0216 req-7adb7897-5e2b-4786-810c-d9558e68676d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Refreshing network info cache for port a2ae2d19-2b35-4e83-b6ba-9f037762a501 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.381 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.605 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.606 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.606 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.607 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.607 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.609 253542 INFO nova.compute.manager [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Terminating instance
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.610 253542 DEBUG nova.compute.manager [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.674 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updating instance_info_cache with network_info: [{"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:10:59 compute-0 kernel: tapa2ae2d19-2b (unregistering): left promiscuous mode
Nov 25 09:10:59 compute-0 NetworkManager[48915]: <info>  [1764061859.6870] device (tapa2ae2d19-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.687 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.688 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.689 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.689 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.689 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:10:59 compute-0 ovn_controller[152859]: 2025-11-25T09:10:59Z|01548|binding|INFO|Releasing lport a2ae2d19-2b35-4e83-b6ba-9f037762a501 from this chassis (sb_readonly=0)
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.746 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:59 compute-0 ovn_controller[152859]: 2025-11-25T09:10:59Z|01549|binding|INFO|Setting lport a2ae2d19-2b35-4e83-b6ba-9f037762a501 down in Southbound
Nov 25 09:10:59 compute-0 ovn_controller[152859]: 2025-11-25T09:10:59Z|01550|binding|INFO|Removing iface tapa2ae2d19-2b ovn-installed in OVS
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.750 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:59 compute-0 kernel: tapaa75ca22-e9 (unregistering): left promiscuous mode
Nov 25 09:10:59 compute-0 NetworkManager[48915]: <info>  [1764061859.7724] device (tapaa75ca22-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:59 compute-0 ovn_controller[152859]: 2025-11-25T09:10:59Z|01551|binding|INFO|Releasing lport aa75ca22-e976-4c62-b1e2-cc57fac51dec from this chassis (sb_readonly=1)
Nov 25 09:10:59 compute-0 ovn_controller[152859]: 2025-11-25T09:10:59Z|01552|binding|INFO|Removing iface tapaa75ca22-e9 ovn-installed in OVS
Nov 25 09:10:59 compute-0 ovn_controller[152859]: 2025-11-25T09:10:59Z|01553|if_status|INFO|Dropped 6 log messages in last 262 seconds (most recently, 262 seconds ago) due to excessive rate
Nov 25 09:10:59 compute-0 ovn_controller[152859]: 2025-11-25T09:10:59Z|01554|if_status|INFO|Not setting lport aa75ca22-e976-4c62-b1e2-cc57fac51dec down as sb is readonly
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.782 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:59 compute-0 ovn_controller[152859]: 2025-11-25T09:10:59Z|01555|binding|INFO|Setting lport aa75ca22-e976-4c62-b1e2-cc57fac51dec down in Southbound
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.787 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:7a:85 10.100.0.3'], port_security=['fa:16:3e:1a:7a:85 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e30f8c90-01de-40a5-8c04-289a035fca22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd62dafc0-5bc7-45ab-b9df-841bbdd333a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a8bb3c-a4c7-425c-b375-b8834bad3945, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=a2ae2d19-2b35-4e83-b6ba-9f037762a501) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.788 162739 INFO neutron.agent.ovn.metadata.agent [-] Port a2ae2d19-2b35-4e83-b6ba-9f037762a501 in datapath 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 unbound from our chassis
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.790 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.813 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[65a403cf-5a00-4918-8480-bd631585b9a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:59 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000091.scope: Deactivated successfully.
Nov 25 09:10:59 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000091.scope: Consumed 14.602s CPU time.
Nov 25 09:10:59 compute-0 systemd-machined[215790]: Machine qemu-175-instance-00000091 terminated.
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.847 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[beb44bca-4e62-468f-ad1c-b8f84d0dfffc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.850 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[93d34977-2dcb-4bdc-9d73-55e7b7ae38be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.881 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[65ac49f4-df5b-4ca5-8078-86e7daeac035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.886 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:24:53 2001:db8::f816:3eff:fe81:2453'], port_security=['fa:16:3e:81:24:53 2001:db8::f816:3eff:fe81:2453'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe81:2453/64', 'neutron:device_id': 'e30f8c90-01de-40a5-8c04-289a035fca22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cc67e51-433c-4c50-9e32-11618e10c494', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd62dafc0-5bc7-45ab-b9df-841bbdd333a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b005d09-6aec-4c42-a2eb-38372057a2c4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=aa75ca22-e976-4c62-b1e2-cc57fac51dec) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.898 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[47c682a4-a6dc-4063-9f3c-6af5359c218c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bf4f588-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cc:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712903, 'reachable_time': 15708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408881, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.916 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[55566d7a-f7e5-443e-a92a-a8a855084795]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7bf4f588-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712917, 'tstamp': 712917}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408882, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7bf4f588-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712921, 'tstamp': 712921}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408882, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.919 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bf4f588-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:59 compute-0 nova_compute[253538]: 2025-11-25 09:10:59.927 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.929 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bf4f588-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.929 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.930 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7bf4f588-e0, col_values=(('external_ids', {'iface-id': '681702e6-167a-4d5b-9bcf-7f086c4e8bad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.930 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.932 162739 INFO neutron.agent.ovn.metadata.agent [-] Port aa75ca22-e976-4c62-b1e2-cc57fac51dec in datapath 3cc67e51-433c-4c50-9e32-11618e10c494 unbound from our chassis
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.934 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3cc67e51-433c-4c50-9e32-11618e10c494
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.952 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[87091937-5cb6-41e2-b5e1-f9e4efe73d50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.991 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d74abeff-e3d3-4cf0-b18d-f14b9ed077f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:10:59 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:10:59.995 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a28c1aaa-e819-4f65-b5ca-1c36d9b0f900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:00.036 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[67110452-cae8-4b70-9e3b-66d5d9cfc379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:00 compute-0 NetworkManager[48915]: <info>  [1764061860.0480] manager: (tapaa75ca22-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/638)
Nov 25 09:11:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:00.056 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7efe3720-7e7c-4065-b9f6-0c8e2908e1ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3cc67e51-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:16:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712993, 'reachable_time': 20254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408898, 'error': None, 'target': 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.068 253542 INFO nova.virt.libvirt.driver [-] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Instance destroyed successfully.
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.069 253542 DEBUG nova.objects.instance [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid e30f8c90-01de-40a5-8c04-289a035fca22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:11:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:00.080 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1d19c7-1f24-40ff-bcd6-9c3b3efaf7b6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3cc67e51-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713006, 'tstamp': 713006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408912, 'error': None, 'target': 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:00.082 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cc67e51-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.083 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:00.092 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cc67e51-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:11:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:00.093 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:11:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:00.093 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3cc67e51-40, col_values=(('external_ids', {'iface-id': '7cc0292c-b133-4cb7-8177-2a55fd592909'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:11:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:00.093 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.094 253542 DEBUG nova.virt.libvirt.vif [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-640466180',display_name='tempest-TestGettingAddress-server-640466180',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-640466180',id=145,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:10:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-vi0tehjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:10:33Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=e30f8c90-01de-40a5-8c04-289a035fca22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.094 253542 DEBUG nova.network.os_vif_util [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.095 253542 DEBUG nova.network.os_vif_util [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:7a:85,bridge_name='br-int',has_traffic_filtering=True,id=a2ae2d19-2b35-4e83-b6ba-9f037762a501,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ae2d19-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.096 253542 DEBUG os_vif [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:7a:85,bridge_name='br-int',has_traffic_filtering=True,id=a2ae2d19-2b35-4e83-b6ba-9f037762a501,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ae2d19-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.098 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.099 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa2ae2d19-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.103 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.105 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.108 253542 INFO os_vif [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:7a:85,bridge_name='br-int',has_traffic_filtering=True,id=a2ae2d19-2b35-4e83-b6ba-9f037762a501,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ae2d19-2b')
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.110 253542 DEBUG nova.virt.libvirt.vif [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-640466180',display_name='tempest-TestGettingAddress-server-640466180',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-640466180',id=145,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:10:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-vi0tehjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:10:33Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=e30f8c90-01de-40a5-8c04-289a035fca22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.110 253542 DEBUG nova.network.os_vif_util [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.111 253542 DEBUG nova.network.os_vif_util [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:24:53,bridge_name='br-int',has_traffic_filtering=True,id=aa75ca22-e976-4c62-b1e2-cc57fac51dec,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa75ca22-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.112 253542 DEBUG os_vif [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:24:53,bridge_name='br-int',has_traffic_filtering=True,id=aa75ca22-e976-4c62-b1e2-cc57fac51dec,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa75ca22-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.114 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.115 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa75ca22-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.117 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.118 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.121 253542 INFO os_vif [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:24:53,bridge_name='br-int',has_traffic_filtering=True,id=aa75ca22-e976-4c62-b1e2-cc57fac51dec,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa75ca22-e9')
Nov 25 09:11:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2718: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 490 KiB/s wr, 30 op/s
Nov 25 09:11:00 compute-0 nova_compute[253538]: 2025-11-25 09:11:00.682 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:11:01 compute-0 nova_compute[253538]: 2025-11-25 09:11:01.112 253542 DEBUG nova.compute.manager [req-c40e33b1-c7d7-40d5-878d-58ac47eff964 req-598a95bb-59ac-4b40-b804-76a6d9e587ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-unplugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:11:01 compute-0 nova_compute[253538]: 2025-11-25 09:11:01.113 253542 DEBUG oslo_concurrency.lockutils [req-c40e33b1-c7d7-40d5-878d-58ac47eff964 req-598a95bb-59ac-4b40-b804-76a6d9e587ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:01 compute-0 nova_compute[253538]: 2025-11-25 09:11:01.113 253542 DEBUG oslo_concurrency.lockutils [req-c40e33b1-c7d7-40d5-878d-58ac47eff964 req-598a95bb-59ac-4b40-b804-76a6d9e587ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:01 compute-0 nova_compute[253538]: 2025-11-25 09:11:01.114 253542 DEBUG oslo_concurrency.lockutils [req-c40e33b1-c7d7-40d5-878d-58ac47eff964 req-598a95bb-59ac-4b40-b804-76a6d9e587ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:01 compute-0 nova_compute[253538]: 2025-11-25 09:11:01.114 253542 DEBUG nova.compute.manager [req-c40e33b1-c7d7-40d5-878d-58ac47eff964 req-598a95bb-59ac-4b40-b804-76a6d9e587ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] No waiting events found dispatching network-vif-unplugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:11:01 compute-0 nova_compute[253538]: 2025-11-25 09:11:01.115 253542 DEBUG nova.compute.manager [req-c40e33b1-c7d7-40d5-878d-58ac47eff964 req-598a95bb-59ac-4b40-b804-76a6d9e587ed b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-unplugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:11:01 compute-0 nova_compute[253538]: 2025-11-25 09:11:01.187 253542 DEBUG nova.compute.manager [req-c0c781a8-be71-4853-8f8c-dd99ae9e4279 req-158673f7-c580-458b-bf8f-ef7620bcb3d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-unplugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:11:01 compute-0 nova_compute[253538]: 2025-11-25 09:11:01.187 253542 DEBUG oslo_concurrency.lockutils [req-c0c781a8-be71-4853-8f8c-dd99ae9e4279 req-158673f7-c580-458b-bf8f-ef7620bcb3d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:01 compute-0 nova_compute[253538]: 2025-11-25 09:11:01.187 253542 DEBUG oslo_concurrency.lockutils [req-c0c781a8-be71-4853-8f8c-dd99ae9e4279 req-158673f7-c580-458b-bf8f-ef7620bcb3d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:01 compute-0 nova_compute[253538]: 2025-11-25 09:11:01.188 253542 DEBUG oslo_concurrency.lockutils [req-c0c781a8-be71-4853-8f8c-dd99ae9e4279 req-158673f7-c580-458b-bf8f-ef7620bcb3d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:01 compute-0 nova_compute[253538]: 2025-11-25 09:11:01.188 253542 DEBUG nova.compute.manager [req-c0c781a8-be71-4853-8f8c-dd99ae9e4279 req-158673f7-c580-458b-bf8f-ef7620bcb3d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] No waiting events found dispatching network-vif-unplugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:11:01 compute-0 nova_compute[253538]: 2025-11-25 09:11:01.188 253542 DEBUG nova.compute.manager [req-c0c781a8-be71-4853-8f8c-dd99ae9e4279 req-158673f7-c580-458b-bf8f-ef7620bcb3d9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-unplugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:11:01 compute-0 ceph-mon[75015]: pgmap v2718: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 490 KiB/s wr, 30 op/s
Nov 25 09:11:01 compute-0 nova_compute[253538]: 2025-11-25 09:11:01.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:11:01 compute-0 nova_compute[253538]: 2025-11-25 09:11:01.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:11:02 compute-0 nova_compute[253538]: 2025-11-25 09:11:02.273 253542 INFO nova.virt.libvirt.driver [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Deleting instance files /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22_del
Nov 25 09:11:02 compute-0 nova_compute[253538]: 2025-11-25 09:11:02.274 253542 INFO nova.virt.libvirt.driver [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Deletion of /var/lib/nova/instances/e30f8c90-01de-40a5-8c04-289a035fca22_del complete
Nov 25 09:11:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2719: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 91 KiB/s wr, 10 op/s
Nov 25 09:11:02 compute-0 nova_compute[253538]: 2025-11-25 09:11:02.368 253542 INFO nova.compute.manager [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Took 2.76 seconds to destroy the instance on the hypervisor.
Nov 25 09:11:02 compute-0 nova_compute[253538]: 2025-11-25 09:11:02.369 253542 DEBUG oslo.service.loopingcall [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:11:02 compute-0 nova_compute[253538]: 2025-11-25 09:11:02.369 253542 DEBUG nova.compute.manager [-] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:11:02 compute-0 nova_compute[253538]: 2025-11-25 09:11:02.370 253542 DEBUG nova.network.neutron [-] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:11:02 compute-0 nova_compute[253538]: 2025-11-25 09:11:02.446 253542 DEBUG nova.network.neutron [req-25a4439f-b7c1-4aca-b54f-02daff1a0216 req-7adb7897-5e2b-4786-810c-d9558e68676d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updated VIF entry in instance network info cache for port a2ae2d19-2b35-4e83-b6ba-9f037762a501. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:11:02 compute-0 nova_compute[253538]: 2025-11-25 09:11:02.446 253542 DEBUG nova.network.neutron [req-25a4439f-b7c1-4aca-b54f-02daff1a0216 req-7adb7897-5e2b-4786-810c-d9558e68676d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updating instance_info_cache with network_info: [{"id": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "address": "fa:16:3e:1a:7a:85", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ae2d19-2b", "ovs_interfaceid": "a2ae2d19-2b35-4e83-b6ba-9f037762a501", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "address": "fa:16:3e:81:24:53", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:2453", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa75ca22-e9", "ovs_interfaceid": "aa75ca22-e976-4c62-b1e2-cc57fac51dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:11:02 compute-0 nova_compute[253538]: 2025-11-25 09:11:02.451 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:02.451 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:11:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:02.452 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:11:02 compute-0 nova_compute[253538]: 2025-11-25 09:11:02.692 253542 DEBUG oslo_concurrency.lockutils [req-25a4439f-b7c1-4aca-b54f-02daff1a0216 req-7adb7897-5e2b-4786-810c-d9558e68676d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-e30f8c90-01de-40a5-8c04-289a035fca22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:11:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:11:03 compute-0 nova_compute[253538]: 2025-11-25 09:11:03.463 253542 DEBUG nova.compute.manager [req-3c429f28-ec10-4d1d-a32b-d51f16fc222e req-3093768b-3e5f-4b3a-8217-6625d0b73dbd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:11:03 compute-0 nova_compute[253538]: 2025-11-25 09:11:03.463 253542 DEBUG oslo_concurrency.lockutils [req-3c429f28-ec10-4d1d-a32b-d51f16fc222e req-3093768b-3e5f-4b3a-8217-6625d0b73dbd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:03 compute-0 nova_compute[253538]: 2025-11-25 09:11:03.463 253542 DEBUG oslo_concurrency.lockutils [req-3c429f28-ec10-4d1d-a32b-d51f16fc222e req-3093768b-3e5f-4b3a-8217-6625d0b73dbd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:03 compute-0 nova_compute[253538]: 2025-11-25 09:11:03.464 253542 DEBUG oslo_concurrency.lockutils [req-3c429f28-ec10-4d1d-a32b-d51f16fc222e req-3093768b-3e5f-4b3a-8217-6625d0b73dbd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:03 compute-0 nova_compute[253538]: 2025-11-25 09:11:03.464 253542 DEBUG nova.compute.manager [req-3c429f28-ec10-4d1d-a32b-d51f16fc222e req-3093768b-3e5f-4b3a-8217-6625d0b73dbd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] No waiting events found dispatching network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:11:03 compute-0 nova_compute[253538]: 2025-11-25 09:11:03.464 253542 WARNING nova.compute.manager [req-3c429f28-ec10-4d1d-a32b-d51f16fc222e req-3093768b-3e5f-4b3a-8217-6625d0b73dbd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received unexpected event network-vif-plugged-a2ae2d19-2b35-4e83-b6ba-9f037762a501 for instance with vm_state active and task_state deleting.
Nov 25 09:11:03 compute-0 ceph-mon[75015]: pgmap v2719: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 91 KiB/s wr, 10 op/s
Nov 25 09:11:03 compute-0 nova_compute[253538]: 2025-11-25 09:11:03.514 253542 DEBUG nova.compute.manager [req-1c8b59a6-d30c-4a59-a438-6603a7e62ea8 req-e0577a2c-c613-45e6-89d2-7fa5122aabd5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:11:03 compute-0 nova_compute[253538]: 2025-11-25 09:11:03.515 253542 DEBUG oslo_concurrency.lockutils [req-1c8b59a6-d30c-4a59-a438-6603a7e62ea8 req-e0577a2c-c613-45e6-89d2-7fa5122aabd5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:03 compute-0 nova_compute[253538]: 2025-11-25 09:11:03.515 253542 DEBUG oslo_concurrency.lockutils [req-1c8b59a6-d30c-4a59-a438-6603a7e62ea8 req-e0577a2c-c613-45e6-89d2-7fa5122aabd5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:03 compute-0 nova_compute[253538]: 2025-11-25 09:11:03.515 253542 DEBUG oslo_concurrency.lockutils [req-1c8b59a6-d30c-4a59-a438-6603a7e62ea8 req-e0577a2c-c613-45e6-89d2-7fa5122aabd5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:03 compute-0 nova_compute[253538]: 2025-11-25 09:11:03.515 253542 DEBUG nova.compute.manager [req-1c8b59a6-d30c-4a59-a438-6603a7e62ea8 req-e0577a2c-c613-45e6-89d2-7fa5122aabd5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] No waiting events found dispatching network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:11:03 compute-0 nova_compute[253538]: 2025-11-25 09:11:03.515 253542 WARNING nova.compute.manager [req-1c8b59a6-d30c-4a59-a438-6603a7e62ea8 req-e0577a2c-c613-45e6-89d2-7fa5122aabd5 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received unexpected event network-vif-plugged-aa75ca22-e976-4c62-b1e2-cc57fac51dec for instance with vm_state active and task_state deleting.
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2720: 321 pgs: 321 active+clean; 225 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 22 KiB/s wr, 12 op/s
Nov 25 09:11:04 compute-0 nova_compute[253538]: 2025-11-25 09:11:04.391 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012764562780816565 of space, bias 1.0, pg target 0.38293688342449694 quantized to 32 (current 32)
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:11:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:11:04 compute-0 nova_compute[253538]: 2025-11-25 09:11:04.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:11:04 compute-0 podman[408937]: 2025-11-25 09:11:04.81493781 +0000 UTC m=+0.063263901 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:11:04 compute-0 podman[408938]: 2025-11-25 09:11:04.835143359 +0000 UTC m=+0.083468130 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 09:11:04 compute-0 nova_compute[253538]: 2025-11-25 09:11:04.866 253542 DEBUG nova.network.neutron [-] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:11:04 compute-0 nova_compute[253538]: 2025-11-25 09:11:04.939 253542 INFO nova.compute.manager [-] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Took 2.57 seconds to deallocate network for instance.
Nov 25 09:11:05 compute-0 nova_compute[253538]: 2025-11-25 09:11:05.036 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:05 compute-0 nova_compute[253538]: 2025-11-25 09:11:05.036 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:05 compute-0 nova_compute[253538]: 2025-11-25 09:11:05.117 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:05 compute-0 nova_compute[253538]: 2025-11-25 09:11:05.121 253542 DEBUG oslo_concurrency.processutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:11:05 compute-0 ceph-mon[75015]: pgmap v2720: 321 pgs: 321 active+clean; 225 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 22 KiB/s wr, 12 op/s
Nov 25 09:11:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:11:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2945930788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:11:05 compute-0 nova_compute[253538]: 2025-11-25 09:11:05.604 253542 DEBUG oslo_concurrency.processutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:11:05 compute-0 nova_compute[253538]: 2025-11-25 09:11:05.612 253542 DEBUG nova.compute.provider_tree [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:11:05 compute-0 nova_compute[253538]: 2025-11-25 09:11:05.633 253542 DEBUG nova.compute.manager [req-d5a8a11a-268e-44e7-b8e8-8d86ba4eb6e9 req-8c25e619-4d19-4f22-a0f9-388a947dad67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-deleted-aa75ca22-e976-4c62-b1e2-cc57fac51dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:11:05 compute-0 nova_compute[253538]: 2025-11-25 09:11:05.633 253542 DEBUG nova.compute.manager [req-d5a8a11a-268e-44e7-b8e8-8d86ba4eb6e9 req-8c25e619-4d19-4f22-a0f9-388a947dad67 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Received event network-vif-deleted-a2ae2d19-2b35-4e83-b6ba-9f037762a501 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:11:05 compute-0 nova_compute[253538]: 2025-11-25 09:11:05.635 253542 DEBUG nova.scheduler.client.report [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:11:05 compute-0 nova_compute[253538]: 2025-11-25 09:11:05.769 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:05 compute-0 nova_compute[253538]: 2025-11-25 09:11:05.878 253542 INFO nova.scheduler.client.report [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance e30f8c90-01de-40a5-8c04-289a035fca22
Nov 25 09:11:06 compute-0 nova_compute[253538]: 2025-11-25 09:11:06.072 253542 DEBUG oslo_concurrency.lockutils [None req-d9f8d56c-07ea-4dde-829f-4cd4c5736f9f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "e30f8c90-01de-40a5-8c04-289a035fca22" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2721: 321 pgs: 321 active+clean; 199 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 22 KiB/s wr, 28 op/s
Nov 25 09:11:06 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:06.454 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:11:06 compute-0 nova_compute[253538]: 2025-11-25 09:11:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:11:06 compute-0 nova_compute[253538]: 2025-11-25 09:11:06.587 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:06 compute-0 nova_compute[253538]: 2025-11-25 09:11:06.588 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:06 compute-0 nova_compute[253538]: 2025-11-25 09:11:06.588 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:06 compute-0 nova_compute[253538]: 2025-11-25 09:11:06.589 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:11:06 compute-0 nova_compute[253538]: 2025-11-25 09:11:06.589 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:11:06 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2945930788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:11:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:11:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2809866083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:11:07 compute-0 nova_compute[253538]: 2025-11-25 09:11:07.068 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:11:07 compute-0 nova_compute[253538]: 2025-11-25 09:11:07.157 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:11:07 compute-0 nova_compute[253538]: 2025-11-25 09:11:07.157 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:11:07 compute-0 nova_compute[253538]: 2025-11-25 09:11:07.324 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:11:07 compute-0 nova_compute[253538]: 2025-11-25 09:11:07.325 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3419MB free_disk=59.922340393066406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:11:07 compute-0 nova_compute[253538]: 2025-11-25 09:11:07.325 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:07 compute-0 nova_compute[253538]: 2025-11-25 09:11:07.326 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:07 compute-0 nova_compute[253538]: 2025-11-25 09:11:07.386 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 23232702-7686-425d-8921-7aa6192ca1c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:11:07 compute-0 nova_compute[253538]: 2025-11-25 09:11:07.387 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:11:07 compute-0 nova_compute[253538]: 2025-11-25 09:11:07.387 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:11:07 compute-0 nova_compute[253538]: 2025-11-25 09:11:07.424 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:11:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:11:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2157968572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:11:07 compute-0 nova_compute[253538]: 2025-11-25 09:11:07.861 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:11:07 compute-0 nova_compute[253538]: 2025-11-25 09:11:07.870 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:11:07 compute-0 nova_compute[253538]: 2025-11-25 09:11:07.885 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:11:08 compute-0 ceph-mon[75015]: pgmap v2721: 321 pgs: 321 active+clean; 199 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 22 KiB/s wr, 28 op/s
Nov 25 09:11:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2809866083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:11:08 compute-0 nova_compute[253538]: 2025-11-25 09:11:08.033 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:11:08 compute-0 nova_compute[253538]: 2025-11-25 09:11:08.034 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:11:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2722: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 22 KiB/s wr, 30 op/s
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.307614) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061868307653, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 525, "num_deletes": 256, "total_data_size": 539408, "memory_usage": 550744, "flush_reason": "Manual Compaction"}
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061868336229, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 535047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56520, "largest_seqno": 57044, "table_properties": {"data_size": 532054, "index_size": 964, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6697, "raw_average_key_size": 18, "raw_value_size": 526150, "raw_average_value_size": 1445, "num_data_blocks": 43, "num_entries": 364, "num_filter_entries": 364, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061834, "oldest_key_time": 1764061834, "file_creation_time": 1764061868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 28745 microseconds, and 4330 cpu microseconds.
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.336299) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 535047 bytes OK
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.336378) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.386575) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.386616) EVENT_LOG_v1 {"time_micros": 1764061868386607, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.386638) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 536389, prev total WAL file size 536389, number of live WAL files 2.
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.387182) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323638' seq:72057594037927935, type:22 .. '6C6F676D0032353230' seq:0, type:0; will stop at (end)
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(522KB)], [131(9889KB)]
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061868387423, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 10661821, "oldest_snapshot_seqno": -1}
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7621 keys, 10541420 bytes, temperature: kUnknown
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061868864775, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 10541420, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10490862, "index_size": 30413, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19077, "raw_key_size": 200049, "raw_average_key_size": 26, "raw_value_size": 10355040, "raw_average_value_size": 1358, "num_data_blocks": 1187, "num_entries": 7621, "num_filter_entries": 7621, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.865042) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10541420 bytes
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.872643) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 22.3 rd, 22.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.7 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(39.6) write-amplify(19.7) OK, records in: 8144, records dropped: 523 output_compression: NoCompression
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.872682) EVENT_LOG_v1 {"time_micros": 1764061868872664, "job": 80, "event": "compaction_finished", "compaction_time_micros": 477436, "compaction_time_cpu_micros": 30521, "output_level": 6, "num_output_files": 1, "total_output_size": 10541420, "num_input_records": 8144, "num_output_records": 7621, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061868873023, "job": 80, "event": "table_file_deletion", "file_number": 133}
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061868875613, "job": 80, "event": "table_file_deletion", "file_number": 131}
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.387069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.875835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.875857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.875860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.875864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:11:08 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:11:08.875867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:11:08 compute-0 nova_compute[253538]: 2025-11-25 09:11:08.937 253542 DEBUG nova.compute.manager [req-fb44d24c-ccee-4366-9df6-a1c8bb08dfc9 req-dab485f7-cb48-408f-b459-6b9eab0d5bfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-changed-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:11:08 compute-0 nova_compute[253538]: 2025-11-25 09:11:08.937 253542 DEBUG nova.compute.manager [req-fb44d24c-ccee-4366-9df6-a1c8bb08dfc9 req-dab485f7-cb48-408f-b459-6b9eab0d5bfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Refreshing instance network info cache due to event network-changed-2cf452f4-d6c3-4977-9e5b-874c9d9707e6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:11:08 compute-0 nova_compute[253538]: 2025-11-25 09:11:08.938 253542 DEBUG oslo_concurrency.lockutils [req-fb44d24c-ccee-4366-9df6-a1c8bb08dfc9 req-dab485f7-cb48-408f-b459-6b9eab0d5bfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:11:08 compute-0 nova_compute[253538]: 2025-11-25 09:11:08.938 253542 DEBUG oslo_concurrency.lockutils [req-fb44d24c-ccee-4366-9df6-a1c8bb08dfc9 req-dab485f7-cb48-408f-b459-6b9eab0d5bfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:11:08 compute-0 nova_compute[253538]: 2025-11-25 09:11:08.938 253542 DEBUG nova.network.neutron [req-fb44d24c-ccee-4366-9df6-a1c8bb08dfc9 req-dab485f7-cb48-408f-b459-6b9eab0d5bfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Refreshing network info cache for port 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.067 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.068 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.068 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.068 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.069 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.070 253542 INFO nova.compute.manager [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Terminating instance
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.071 253542 DEBUG nova.compute.manager [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:11:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2157968572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:11:09 compute-0 ceph-mon[75015]: pgmap v2722: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 22 KiB/s wr, 30 op/s
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.394 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:09 compute-0 kernel: tap2cf452f4-d6 (unregistering): left promiscuous mode
Nov 25 09:11:09 compute-0 NetworkManager[48915]: <info>  [1764061869.4961] device (tap2cf452f4-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:11:09 compute-0 ovn_controller[152859]: 2025-11-25T09:11:09Z|01556|binding|INFO|Releasing lport 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 from this chassis (sb_readonly=0)
Nov 25 09:11:09 compute-0 ovn_controller[152859]: 2025-11-25T09:11:09Z|01557|binding|INFO|Setting lport 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 down in Southbound
Nov 25 09:11:09 compute-0 ovn_controller[152859]: 2025-11-25T09:11:09Z|01558|binding|INFO|Removing iface tap2cf452f4-d6 ovn-installed in OVS
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.506 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.509 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:09 compute-0 kernel: tap24beb614-6f (unregistering): left promiscuous mode
Nov 25 09:11:09 compute-0 NetworkManager[48915]: <info>  [1764061869.5422] device (tap24beb614-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.557 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:09 compute-0 ovn_controller[152859]: 2025-11-25T09:11:09Z|01559|binding|INFO|Releasing lport 24beb614-6f72-4107-adca-af1258052ab5 from this chassis (sb_readonly=1)
Nov 25 09:11:09 compute-0 ovn_controller[152859]: 2025-11-25T09:11:09Z|01560|binding|INFO|Removing iface tap24beb614-6f ovn-installed in OVS
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.559 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:09 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d00000090.scope: Deactivated successfully.
Nov 25 09:11:09 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d00000090.scope: Consumed 17.936s CPU time.
Nov 25 09:11:09 compute-0 systemd-machined[215790]: Machine qemu-174-instance-00000090 terminated.
Nov 25 09:11:09 compute-0 ovn_controller[152859]: 2025-11-25T09:11:09Z|01561|binding|INFO|Setting lport 24beb614-6f72-4107-adca-af1258052ab5 down in Southbound
Nov 25 09:11:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:09.636 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:54:60 10.100.0.6'], port_security=['fa:16:3e:57:54:60 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '23232702-7686-425d-8921-7aa6192ca1c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd62dafc0-5bc7-45ab-b9df-841bbdd333a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a8bb3c-a4c7-425c-b375-b8834bad3945, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=2cf452f4-d6c3-4977-9e5b-874c9d9707e6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:11:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:09.638 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 2cf452f4-d6c3-4977-9e5b-874c9d9707e6 in datapath 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 unbound from our chassis
Nov 25 09:11:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:09.639 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7bf4f588-ebc7-4f3f-bad9-0474cdb461a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:11:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:09.641 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6847c358-e372-45d5-8655-53d42856823e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:09.641 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 namespace which is not needed anymore
Nov 25 09:11:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:09.670 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:a0:7e 2001:db8::f816:3eff:fe18:a07e'], port_security=['fa:16:3e:18:a0:7e 2001:db8::f816:3eff:fe18:a07e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe18:a07e/64', 'neutron:device_id': '23232702-7686-425d-8921-7aa6192ca1c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cc67e51-433c-4c50-9e32-11618e10c494', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd62dafc0-5bc7-45ab-b9df-841bbdd333a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b005d09-6aec-4c42-a2eb-38372057a2c4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=24beb614-6f72-4107-adca-af1258052ab5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:11:09 compute-0 NetworkManager[48915]: <info>  [1764061869.7084] manager: (tap24beb614-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/639)
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.729 253542 INFO nova.virt.libvirt.driver [-] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Instance destroyed successfully.
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.731 253542 DEBUG nova.objects.instance [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 23232702-7686-425d-8921-7aa6192ca1c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.746 253542 DEBUG nova.virt.libvirt.vif [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1362533958',display_name='tempest-TestGettingAddress-server-1362533958',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1362533958',id=144,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:09:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-1pg69rxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:09:35Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=23232702-7686-425d-8921-7aa6192ca1c8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.746 253542 DEBUG nova.network.os_vif_util [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.747 253542 DEBUG nova.network.os_vif_util [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:54:60,bridge_name='br-int',has_traffic_filtering=True,id=2cf452f4-d6c3-4977-9e5b-874c9d9707e6,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf452f4-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.748 253542 DEBUG os_vif [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:54:60,bridge_name='br-int',has_traffic_filtering=True,id=2cf452f4-d6c3-4977-9e5b-874c9d9707e6,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf452f4-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.749 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.750 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cf452f4-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.752 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.758 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.760 253542 INFO os_vif [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:54:60,bridge_name='br-int',has_traffic_filtering=True,id=2cf452f4-d6c3-4977-9e5b-874c9d9707e6,network=Network(7bf4f588-ebc7-4f3f-bad9-0474cdb461a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf452f4-d6')
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.761 253542 DEBUG nova.virt.libvirt.vif [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1362533958',display_name='tempest-TestGettingAddress-server-1362533958',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1362533958',id=144,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOe89J94R4qKvHMYwrfikowJ7ceUXQ642a2tchAyKy2knwK13gMoa9aQVgsZFx+J1CMsBQiBzDcpzQJhqE9YwLxM9sA075e5a1p5R338zHNdy69LTAjp0AMqDwCa+QT4pw==',key_name='tempest-TestGettingAddress-230342741',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:09:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-1pg69rxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:09:35Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=23232702-7686-425d-8921-7aa6192ca1c8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.761 253542 DEBUG nova.network.os_vif_util [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.762 253542 DEBUG nova.network.os_vif_util [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:a0:7e,bridge_name='br-int',has_traffic_filtering=True,id=24beb614-6f72-4107-adca-af1258052ab5,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24beb614-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.763 253542 DEBUG os_vif [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:a0:7e,bridge_name='br-int',has_traffic_filtering=True,id=24beb614-6f72-4107-adca-af1258052ab5,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24beb614-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.764 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24beb614-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.768 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:11:09 compute-0 nova_compute[253538]: 2025-11-25 09:11:09.770 253542 INFO os_vif [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:a0:7e,bridge_name='br-int',has_traffic_filtering=True,id=24beb614-6f72-4107-adca-af1258052ab5,network=Network(3cc67e51-433c-4c50-9e32-11618e10c494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24beb614-6f')
Nov 25 09:11:09 compute-0 neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6[407241]: [NOTICE]   (407260) : haproxy version is 2.8.14-c23fe91
Nov 25 09:11:09 compute-0 neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6[407241]: [NOTICE]   (407260) : path to executable is /usr/sbin/haproxy
Nov 25 09:11:09 compute-0 neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6[407241]: [WARNING]  (407260) : Exiting Master process...
Nov 25 09:11:10 compute-0 neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6[407241]: [ALERT]    (407260) : Current worker (407265) exited with code 143 (Terminated)
Nov 25 09:11:10 compute-0 neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6[407241]: [WARNING]  (407260) : All workers exited. Exiting... (0)
Nov 25 09:11:10 compute-0 systemd[1]: libpod-21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d.scope: Deactivated successfully.
Nov 25 09:11:10 compute-0 podman[409090]: 2025-11-25 09:11:10.009246412 +0000 UTC m=+0.245954347 container died 21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:11:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d-userdata-shm.mount: Deactivated successfully.
Nov 25 09:11:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f10253aa29a2d9744f35c5a42f8958d44efd52ebdbabf6a5f52a1d200b9fc6b-merged.mount: Deactivated successfully.
Nov 25 09:11:10 compute-0 podman[409090]: 2025-11-25 09:11:10.220846635 +0000 UTC m=+0.457554570 container cleanup 21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 09:11:10 compute-0 systemd[1]: libpod-conmon-21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d.scope: Deactivated successfully.
Nov 25 09:11:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2723: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 10 KiB/s wr, 28 op/s
Nov 25 09:11:10 compute-0 podman[409138]: 2025-11-25 09:11:10.315991411 +0000 UTC m=+0.073916921 container remove 21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:11:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.321 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[62c12c0d-5c00-4f99-b85a-90d043b17c10]: (4, ('Tue Nov 25 09:11:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 (21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d)\n21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d\nTue Nov 25 09:11:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 (21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d)\n21ad73ad938112a08976cbfcb3c2201d1d10bcaa8ba0451deb54ab338f9b407d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.323 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[385403c2-4b85-4fc6-b7b0-0cc48de47e82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.324 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bf4f588-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:11:10 compute-0 nova_compute[253538]: 2025-11-25 09:11:10.326 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:10 compute-0 kernel: tap7bf4f588-e0: left promiscuous mode
Nov 25 09:11:10 compute-0 nova_compute[253538]: 2025-11-25 09:11:10.344 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.347 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6a70d5-f978-45e9-80fb-00bd25fca234]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.360 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[148b9266-a187-43fc-ac8f-29802022e072]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.361 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b44d07-1047-4efc-9f89-5c39eed8bbf9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.384 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[870b2a11-59c1-4d59-aa3f-459160887c0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712895, 'reachable_time': 19419, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409151, 'error': None, 'target': 'ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.388 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7bf4f588-ebc7-4f3f-bad9-0474cdb461a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:11:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.388 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3580b9-2dfd-42e1-be08-381a94384806]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.389 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 24beb614-6f72-4107-adca-af1258052ab5 in datapath 3cc67e51-433c-4c50-9e32-11618e10c494 unbound from our chassis
Nov 25 09:11:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d7bf4f588\x2debc7\x2d4f3f\x2dbad9\x2d0474cdb461a6.mount: Deactivated successfully.
Nov 25 09:11:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.390 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3cc67e51-433c-4c50-9e32-11618e10c494, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:11:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.390 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7e072d-f2aa-4ff7-bfad-51cd7e76eeab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:10.391 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494 namespace which is not needed anymore
Nov 25 09:11:10 compute-0 neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494[407360]: [NOTICE]   (407364) : haproxy version is 2.8.14-c23fe91
Nov 25 09:11:10 compute-0 neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494[407360]: [NOTICE]   (407364) : path to executable is /usr/sbin/haproxy
Nov 25 09:11:10 compute-0 neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494[407360]: [WARNING]  (407364) : Exiting Master process...
Nov 25 09:11:10 compute-0 neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494[407360]: [ALERT]    (407364) : Current worker (407366) exited with code 143 (Terminated)
Nov 25 09:11:10 compute-0 neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494[407360]: [WARNING]  (407364) : All workers exited. Exiting... (0)
Nov 25 09:11:10 compute-0 systemd[1]: libpod-f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d.scope: Deactivated successfully.
Nov 25 09:11:10 compute-0 podman[409168]: 2025-11-25 09:11:10.759800065 +0000 UTC m=+0.260284766 container died f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:11:10 compute-0 nova_compute[253538]: 2025-11-25 09:11:10.936 253542 INFO nova.virt.libvirt.driver [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Deleting instance files /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8_del
Nov 25 09:11:10 compute-0 nova_compute[253538]: 2025-11-25 09:11:10.938 253542 INFO nova.virt.libvirt.driver [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Deletion of /var/lib/nova/instances/23232702-7686-425d-8921-7aa6192ca1c8_del complete
Nov 25 09:11:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-15b7818f4970c473f15ea612dd47dfe4573d2b3b5179a915f1ffcc79793320c6-merged.mount: Deactivated successfully.
Nov 25 09:11:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d-userdata-shm.mount: Deactivated successfully.
Nov 25 09:11:10 compute-0 podman[409168]: 2025-11-25 09:11:10.949085302 +0000 UTC m=+0.449570043 container cleanup f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:11:10 compute-0 systemd[1]: libpod-conmon-f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d.scope: Deactivated successfully.
Nov 25 09:11:11 compute-0 podman[409204]: 2025-11-25 09:11:11.027523824 +0000 UTC m=+0.046856185 container remove f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:11:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.034 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[72a8363c-a65f-4a00-bb61-8fe6785491d4]: (4, ('Tue Nov 25 09:11:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494 (f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d)\nf6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d\nTue Nov 25 09:11:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494 (f6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d)\nf6574fad04277da8ca88c648bbe0a2e0e2f056b52818dd6a57db9f9ca00b052d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.035 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f0d961-d837-47b8-80bb-1a3e52c737dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.036 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cc67e51-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.038 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:11 compute-0 kernel: tap3cc67e51-40: left promiscuous mode
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.048 253542 DEBUG nova.compute.manager [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-unplugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.049 253542 DEBUG oslo_concurrency.lockutils [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.049 253542 DEBUG oslo_concurrency.lockutils [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.050 253542 DEBUG oslo_concurrency.lockutils [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.050 253542 DEBUG nova.compute.manager [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] No waiting events found dispatching network-vif-unplugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.050 253542 DEBUG nova.compute.manager [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-unplugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.051 253542 DEBUG nova.compute.manager [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.051 253542 DEBUG oslo_concurrency.lockutils [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.051 253542 DEBUG oslo_concurrency.lockutils [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.051 253542 DEBUG oslo_concurrency.lockutils [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.052 253542 DEBUG nova.compute.manager [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] No waiting events found dispatching network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.052 253542 WARNING nova.compute.manager [req-79af0000-45c5-4b62-94df-2c5c1be87075 req-ad9b29f8-c13c-44b5-b7d8-b89df451d3dd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received unexpected event network-vif-plugged-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 for instance with vm_state active and task_state deleting.
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.053 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.055 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[625d7fa2-b818-4ccc-acf7-a58fe9938e3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.069 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bf18e22d-3356-4fd6-ad35-f3c608437b47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.071 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bfcff5b2-f687-4cd8-b482-a75758f363fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.079 253542 INFO nova.compute.manager [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Took 2.01 seconds to destroy the instance on the hypervisor.
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.080 253542 DEBUG oslo.service.loopingcall [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.080 253542 DEBUG nova.compute.manager [-] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.080 253542 DEBUG nova.network.neutron [-] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:11:11 compute-0 podman[409197]: 2025-11-25 09:11:11.087139494 +0000 UTC m=+0.134013764 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:11:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.087 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[98808b5a-e7da-43fa-a852-83d589a14c5b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712985, 'reachable_time': 39888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409241, 'error': None, 'target': 'ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.090 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3cc67e51-433c-4c50-9e32-11618e10c494 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:11:11 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:11.090 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b9f351-15ee-42a0-8b91-1ad8d8568719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d3cc67e51\x2d433c\x2d4c50\x2d9e32\x2d11618e10c494.mount: Deactivated successfully.
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.119 253542 DEBUG nova.compute.manager [req-4041e62b-3e23-438e-933d-c1796c953176 req-970237c2-c5b2-402a-9f98-99eb0ec4eb0b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-unplugged-24beb614-6f72-4107-adca-af1258052ab5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.119 253542 DEBUG oslo_concurrency.lockutils [req-4041e62b-3e23-438e-933d-c1796c953176 req-970237c2-c5b2-402a-9f98-99eb0ec4eb0b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.120 253542 DEBUG oslo_concurrency.lockutils [req-4041e62b-3e23-438e-933d-c1796c953176 req-970237c2-c5b2-402a-9f98-99eb0ec4eb0b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.120 253542 DEBUG oslo_concurrency.lockutils [req-4041e62b-3e23-438e-933d-c1796c953176 req-970237c2-c5b2-402a-9f98-99eb0ec4eb0b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.121 253542 DEBUG nova.compute.manager [req-4041e62b-3e23-438e-933d-c1796c953176 req-970237c2-c5b2-402a-9f98-99eb0ec4eb0b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] No waiting events found dispatching network-vif-unplugged-24beb614-6f72-4107-adca-af1258052ab5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.121 253542 DEBUG nova.compute.manager [req-4041e62b-3e23-438e-933d-c1796c953176 req-970237c2-c5b2-402a-9f98-99eb0ec4eb0b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-unplugged-24beb614-6f72-4107-adca-af1258052ab5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:11:11 compute-0 ceph-mon[75015]: pgmap v2723: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 10 KiB/s wr, 28 op/s
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.844 253542 DEBUG nova.network.neutron [req-fb44d24c-ccee-4366-9df6-a1c8bb08dfc9 req-dab485f7-cb48-408f-b459-6b9eab0d5bfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updated VIF entry in instance network info cache for port 2cf452f4-d6c3-4977-9e5b-874c9d9707e6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.845 253542 DEBUG nova.network.neutron [req-fb44d24c-ccee-4366-9df6-a1c8bb08dfc9 req-dab485f7-cb48-408f-b459-6b9eab0d5bfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updating instance_info_cache with network_info: [{"id": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "address": "fa:16:3e:57:54:60", "network": {"id": "7bf4f588-ebc7-4f3f-bad9-0474cdb461a6", "bridge": "br-int", "label": "tempest-network-smoke--1406303604", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf452f4-d6", "ovs_interfaceid": "2cf452f4-d6c3-4977-9e5b-874c9d9707e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24beb614-6f72-4107-adca-af1258052ab5", "address": "fa:16:3e:18:a0:7e", "network": {"id": "3cc67e51-433c-4c50-9e32-11618e10c494", "bridge": "br-int", "label": "tempest-network-smoke--1717122316", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe18:a07e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24beb614-6f", "ovs_interfaceid": "24beb614-6f72-4107-adca-af1258052ab5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:11:11 compute-0 nova_compute[253538]: 2025-11-25 09:11:11.867 253542 DEBUG oslo_concurrency.lockutils [req-fb44d24c-ccee-4366-9df6-a1c8bb08dfc9 req-dab485f7-cb48-408f-b459-6b9eab0d5bfc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-23232702-7686-425d-8921-7aa6192ca1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:11:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2724: 321 pgs: 321 active+clean; 132 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 11 KiB/s wr, 49 op/s
Nov 25 09:11:12 compute-0 nova_compute[253538]: 2025-11-25 09:11:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:11:12 compute-0 nova_compute[253538]: 2025-11-25 09:11:12.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 09:11:12 compute-0 ceph-mon[75015]: pgmap v2724: 321 pgs: 321 active+clean; 132 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 11 KiB/s wr, 49 op/s
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.059 253542 DEBUG nova.network.neutron [-] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.129 253542 INFO nova.compute.manager [-] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Took 2.05 seconds to deallocate network for instance.
Nov 25 09:11:13 compute-0 sshd-session[409242]: Connection closed by authenticating user root 171.244.51.45 port 33108 [preauth]
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.217 253542 DEBUG nova.compute.manager [req-33e069e5-b8de-430e-bda8-9f8e6ad770d0 req-1f665d4e-6802-4dd1-9f08-fc9180250173 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.217 253542 DEBUG oslo_concurrency.lockutils [req-33e069e5-b8de-430e-bda8-9f8e6ad770d0 req-1f665d4e-6802-4dd1-9f08-fc9180250173 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "23232702-7686-425d-8921-7aa6192ca1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.218 253542 DEBUG oslo_concurrency.lockutils [req-33e069e5-b8de-430e-bda8-9f8e6ad770d0 req-1f665d4e-6802-4dd1-9f08-fc9180250173 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.218 253542 DEBUG oslo_concurrency.lockutils [req-33e069e5-b8de-430e-bda8-9f8e6ad770d0 req-1f665d4e-6802-4dd1-9f08-fc9180250173 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.218 253542 DEBUG nova.compute.manager [req-33e069e5-b8de-430e-bda8-9f8e6ad770d0 req-1f665d4e-6802-4dd1-9f08-fc9180250173 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] No waiting events found dispatching network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.219 253542 WARNING nova.compute.manager [req-33e069e5-b8de-430e-bda8-9f8e6ad770d0 req-1f665d4e-6802-4dd1-9f08-fc9180250173 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received unexpected event network-vif-plugged-24beb614-6f72-4107-adca-af1258052ab5 for instance with vm_state deleted and task_state None.
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.219 253542 DEBUG nova.compute.manager [req-33e069e5-b8de-430e-bda8-9f8e6ad770d0 req-1f665d4e-6802-4dd1-9f08-fc9180250173 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-deleted-2cf452f4-d6c3-4977-9e5b-874c9d9707e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.219 253542 DEBUG nova.compute.manager [req-33e069e5-b8de-430e-bda8-9f8e6ad770d0 req-1f665d4e-6802-4dd1-9f08-fc9180250173 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Received event network-vif-deleted-24beb614-6f72-4107-adca-af1258052ab5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.221 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.222 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.267 253542 DEBUG oslo_concurrency.processutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:11:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:11:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:11:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269866015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.699 253542 DEBUG oslo_concurrency.processutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.706 253542 DEBUG nova.compute.provider_tree [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.726 253542 DEBUG nova.scheduler.client.report [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.796 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.866 253542 INFO nova.scheduler.client.report [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 23232702-7686-425d-8921-7aa6192ca1c8
Nov 25 09:11:13 compute-0 nova_compute[253538]: 2025-11-25 09:11:13.928 253542 DEBUG oslo_concurrency.lockutils [None req-7f6f8d8e-94b7-4b92-a14c-e64cd9ad5635 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "23232702-7686-425d-8921-7aa6192ca1c8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1269866015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:11:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2725: 321 pgs: 321 active+clean; 117 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.1 KiB/s wr, 49 op/s
Nov 25 09:11:14 compute-0 nova_compute[253538]: 2025-11-25 09:11:14.396 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:14 compute-0 nova_compute[253538]: 2025-11-25 09:11:14.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:15 compute-0 ceph-mon[75015]: pgmap v2725: 321 pgs: 321 active+clean; 117 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.1 KiB/s wr, 49 op/s
Nov 25 09:11:15 compute-0 nova_compute[253538]: 2025-11-25 09:11:15.064 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061860.0629854, e30f8c90-01de-40a5-8c04-289a035fca22 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:11:15 compute-0 nova_compute[253538]: 2025-11-25 09:11:15.065 253542 INFO nova.compute.manager [-] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] VM Stopped (Lifecycle Event)
Nov 25 09:11:15 compute-0 nova_compute[253538]: 2025-11-25 09:11:15.082 253542 DEBUG nova.compute.manager [None req-671db374-3833-4943-a2c1-4b845b32ceef - - - - - -] [instance: e30f8c90-01de-40a5-8c04-289a035fca22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:11:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2726: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.1 KiB/s wr, 45 op/s
Nov 25 09:11:17 compute-0 ceph-mon[75015]: pgmap v2726: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.1 KiB/s wr, 45 op/s
Nov 25 09:11:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2727: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Nov 25 09:11:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:11:19 compute-0 nova_compute[253538]: 2025-11-25 09:11:19.102 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:19 compute-0 nova_compute[253538]: 2025-11-25 09:11:19.217 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:19 compute-0 ceph-mon[75015]: pgmap v2727: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Nov 25 09:11:19 compute-0 nova_compute[253538]: 2025-11-25 09:11:19.416 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:19 compute-0 nova_compute[253538]: 2025-11-25 09:11:19.566 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:11:19 compute-0 nova_compute[253538]: 2025-11-25 09:11:19.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 09:11:19 compute-0 nova_compute[253538]: 2025-11-25 09:11:19.584 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 09:11:19 compute-0 nova_compute[253538]: 2025-11-25 09:11:19.801 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2728: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 09:11:21 compute-0 ceph-mon[75015]: pgmap v2728: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 09:11:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2729: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 09:11:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:11:23 compute-0 ceph-mon[75015]: pgmap v2729: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 09:11:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:11:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:11:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:11:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:11:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:11:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:11:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2730: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 341 B/s wr, 6 op/s
Nov 25 09:11:24 compute-0 nova_compute[253538]: 2025-11-25 09:11:24.418 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:24 compute-0 nova_compute[253538]: 2025-11-25 09:11:24.726 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061869.7246559, 23232702-7686-425d-8921-7aa6192ca1c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:11:24 compute-0 nova_compute[253538]: 2025-11-25 09:11:24.726 253542 INFO nova.compute.manager [-] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] VM Stopped (Lifecycle Event)
Nov 25 09:11:24 compute-0 nova_compute[253538]: 2025-11-25 09:11:24.741 253542 DEBUG nova.compute.manager [None req-0f45dd52-99e4-42e8-9019-d27f6e7ebc79 - - - - - -] [instance: 23232702-7686-425d-8921-7aa6192ca1c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:11:24 compute-0 nova_compute[253538]: 2025-11-25 09:11:24.804 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:25 compute-0 ceph-mon[75015]: pgmap v2730: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 341 B/s wr, 6 op/s
Nov 25 09:11:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2731: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 5 op/s
Nov 25 09:11:27 compute-0 ceph-mon[75015]: pgmap v2731: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 5 op/s
Nov 25 09:11:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:11:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2732: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:29 compute-0 ceph-mon[75015]: pgmap v2732: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:29 compute-0 nova_compute[253538]: 2025-11-25 09:11:29.419 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:29 compute-0 nova_compute[253538]: 2025-11-25 09:11:29.805 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2733: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:31 compute-0 ceph-mon[75015]: pgmap v2733: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:31 compute-0 nova_compute[253538]: 2025-11-25 09:11:31.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:11:31 compute-0 nova_compute[253538]: 2025-11-25 09:11:31.872 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:11:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2734: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:11:33 compute-0 ceph-mon[75015]: pgmap v2734: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2735: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:34 compute-0 nova_compute[253538]: 2025-11-25 09:11:34.421 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:34 compute-0 nova_compute[253538]: 2025-11-25 09:11:34.807 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:35 compute-0 ceph-mon[75015]: pgmap v2735: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:35 compute-0 podman[409269]: 2025-11-25 09:11:35.812752216 +0000 UTC m=+0.057690858 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:11:35 compute-0 podman[409268]: 2025-11-25 09:11:35.823232361 +0000 UTC m=+0.076441208 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:11:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2736: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:37 compute-0 ceph-mon[75015]: pgmap v2736: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:11:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2737: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:38 compute-0 sudo[409305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:11:38 compute-0 sudo[409305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:38 compute-0 sudo[409305]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:38 compute-0 sudo[409330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:11:38 compute-0 sudo[409330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:38 compute-0 sudo[409330]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:38 compute-0 sudo[409355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:11:38 compute-0 sudo[409355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:38 compute-0 sudo[409355]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:38 compute-0 sudo[409380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:11:38 compute-0 sudo[409380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:39 compute-0 sudo[409380]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:11:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:11:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:11:39 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:11:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:11:39 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:11:39 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 80dc861e-d747-492b-bdcc-055aba6e25b1 does not exist
Nov 25 09:11:39 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 3d42c405-f862-4b61-a46a-75f0f94a23bf does not exist
Nov 25 09:11:39 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 3b4f3c47-c5b9-4601-8f51-fa311145cec2 does not exist
Nov 25 09:11:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:11:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:11:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:11:39 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:11:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:11:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:11:39 compute-0 sudo[409436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:11:39 compute-0 sudo[409436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:39 compute-0 sudo[409436]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:39 compute-0 nova_compute[253538]: 2025-11-25 09:11:39.422 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:39 compute-0 ceph-mon[75015]: pgmap v2737: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:39 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:11:39 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:11:39 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:11:39 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:11:39 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:11:39 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:11:39 compute-0 sudo[409461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:11:39 compute-0 sudo[409461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:39 compute-0 sudo[409461]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:39 compute-0 sudo[409486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:11:39 compute-0 sudo[409486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:39 compute-0 sudo[409486]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:39 compute-0 sudo[409511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:11:39 compute-0 sudo[409511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:39 compute-0 nova_compute[253538]: 2025-11-25 09:11:39.808 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:40 compute-0 podman[409578]: 2025-11-25 09:11:39.951993248 +0000 UTC m=+0.037605393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:11:40 compute-0 podman[409578]: 2025-11-25 09:11:40.107202707 +0000 UTC m=+0.192814832 container create c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:11:40 compute-0 systemd[1]: Started libpod-conmon-c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec.scope.
Nov 25 09:11:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:11:40 compute-0 podman[409578]: 2025-11-25 09:11:40.189275088 +0000 UTC m=+0.274887213 container init c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:11:40 compute-0 podman[409578]: 2025-11-25 09:11:40.197318807 +0000 UTC m=+0.282930932 container start c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:11:40 compute-0 podman[409578]: 2025-11-25 09:11:40.200289958 +0000 UTC m=+0.285902093 container attach c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 09:11:40 compute-0 lucid_herschel[409594]: 167 167
Nov 25 09:11:40 compute-0 systemd[1]: libpod-c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec.scope: Deactivated successfully.
Nov 25 09:11:40 compute-0 podman[409578]: 2025-11-25 09:11:40.203036472 +0000 UTC m=+0.288648597 container died c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:11:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-024e037a0fa6d2b0c0fa6e1734cd32af1218634665780f703eef61e6b417bf25-merged.mount: Deactivated successfully.
Nov 25 09:11:40 compute-0 podman[409578]: 2025-11-25 09:11:40.236077201 +0000 UTC m=+0.321689326 container remove c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:11:40 compute-0 systemd[1]: libpod-conmon-c3543ffac13cde9715cb8c28026f02c7f06aaa660360cbb8699cd49d7658b4ec.scope: Deactivated successfully.
Nov 25 09:11:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2738: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:40 compute-0 podman[409620]: 2025-11-25 09:11:40.439765628 +0000 UTC m=+0.053226358 container create 6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 09:11:40 compute-0 systemd[1]: Started libpod-conmon-6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214.scope.
Nov 25 09:11:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7378a4a6e66958e4037f0cd7ec6fc981809faecf25eb3b42446c16973e78fa57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7378a4a6e66958e4037f0cd7ec6fc981809faecf25eb3b42446c16973e78fa57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7378a4a6e66958e4037f0cd7ec6fc981809faecf25eb3b42446c16973e78fa57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7378a4a6e66958e4037f0cd7ec6fc981809faecf25eb3b42446c16973e78fa57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7378a4a6e66958e4037f0cd7ec6fc981809faecf25eb3b42446c16973e78fa57/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:11:40 compute-0 podman[409620]: 2025-11-25 09:11:40.419916308 +0000 UTC m=+0.033377058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:11:40 compute-0 podman[409620]: 2025-11-25 09:11:40.524216764 +0000 UTC m=+0.137677524 container init 6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 09:11:40 compute-0 podman[409620]: 2025-11-25 09:11:40.537979648 +0000 UTC m=+0.151440358 container start 6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curran, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:11:40 compute-0 podman[409620]: 2025-11-25 09:11:40.545587075 +0000 UTC m=+0.159047795 container attach 6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curran, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 09:11:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:41.096 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:41.096 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:41.096 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:41 compute-0 ceph-mon[75015]: pgmap v2738: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:41 compute-0 musing_curran[409635]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:11:41 compute-0 musing_curran[409635]: --> relative data size: 1.0
Nov 25 09:11:41 compute-0 musing_curran[409635]: --> All data devices are unavailable
Nov 25 09:11:41 compute-0 systemd[1]: libpod-6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214.scope: Deactivated successfully.
Nov 25 09:11:41 compute-0 systemd[1]: libpod-6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214.scope: Consumed 1.004s CPU time.
Nov 25 09:11:41 compute-0 conmon[409635]: conmon 6f582e60efe8893fe8ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214.scope/container/memory.events
Nov 25 09:11:41 compute-0 podman[409620]: 2025-11-25 09:11:41.59676861 +0000 UTC m=+1.210229320 container died 6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:11:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7378a4a6e66958e4037f0cd7ec6fc981809faecf25eb3b42446c16973e78fa57-merged.mount: Deactivated successfully.
Nov 25 09:11:41 compute-0 podman[409620]: 2025-11-25 09:11:41.663922105 +0000 UTC m=+1.277382805 container remove 6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curran, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:11:41 compute-0 systemd[1]: libpod-conmon-6f582e60efe8893fe8ae52bb75f8d206e84549f1c9f395a63be5f730caffc214.scope: Deactivated successfully.
Nov 25 09:11:41 compute-0 sudo[409511]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:41 compute-0 podman[409666]: 2025-11-25 09:11:41.75642586 +0000 UTC m=+0.118234535 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 09:11:41 compute-0 sudo[409701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:11:41 compute-0 sudo[409701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:41 compute-0 sudo[409701]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:41 compute-0 sudo[409730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:11:41 compute-0 sudo[409730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:41 compute-0 sudo[409730]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:41 compute-0 sudo[409755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:11:41 compute-0 sudo[409755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:41 compute-0 sudo[409755]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:41 compute-0 sudo[409780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:11:41 compute-0 sudo[409780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:42.164 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:f7:0e 10.100.0.2 2001:db8::f816:3eff:fef0:f70e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef0:f70e/64', 'neutron:device_id': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82db8616-e2a9-492c-9b1b-8c775409acb3, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=08f181bc-bee1-4710-a487-b95c62cfce38) old=Port_Binding(mac=['fa:16:3e:f0:f7:0e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:11:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:42.165 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 08f181bc-bee1-4710-a487-b95c62cfce38 in datapath 6c73317d-f647-4813-8469-7d8f6ba2c0c7 updated
Nov 25 09:11:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:42.166 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c73317d-f647-4813-8469-7d8f6ba2c0c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:11:42 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:42.167 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1ed3d0-f656-4b0e-aadd-67e4f33257a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:42 compute-0 podman[409845]: 2025-11-25 09:11:42.29755176 +0000 UTC m=+0.039743231 container create 9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 09:11:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2739: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:42 compute-0 systemd[1]: Started libpod-conmon-9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7.scope.
Nov 25 09:11:42 compute-0 podman[409845]: 2025-11-25 09:11:42.280411734 +0000 UTC m=+0.022603125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:11:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:11:42 compute-0 podman[409845]: 2025-11-25 09:11:42.398066662 +0000 UTC m=+0.140258083 container init 9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:11:42 compute-0 podman[409845]: 2025-11-25 09:11:42.407530799 +0000 UTC m=+0.149722160 container start 9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:11:42 compute-0 podman[409845]: 2025-11-25 09:11:42.410716336 +0000 UTC m=+0.152907727 container attach 9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:11:42 compute-0 sad_brown[409861]: 167 167
Nov 25 09:11:42 compute-0 systemd[1]: libpod-9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7.scope: Deactivated successfully.
Nov 25 09:11:42 compute-0 podman[409845]: 2025-11-25 09:11:42.417765808 +0000 UTC m=+0.159957199 container died 9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:11:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa4644053f06a5cb9970416252bc1ed9810c181cc6b8819ed76b5ac502ee0780-merged.mount: Deactivated successfully.
Nov 25 09:11:42 compute-0 podman[409845]: 2025-11-25 09:11:42.450754645 +0000 UTC m=+0.192946026 container remove 9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:11:42 compute-0 systemd[1]: libpod-conmon-9c784d3f9a9d5c1efd1cad1c87c7a5c67d4a72ed5b8fc2123078b543cf37eef7.scope: Deactivated successfully.
Nov 25 09:11:42 compute-0 podman[409885]: 2025-11-25 09:11:42.635890878 +0000 UTC m=+0.046845145 container create 4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 09:11:42 compute-0 systemd[1]: Started libpod-conmon-4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1.scope.
Nov 25 09:11:42 compute-0 podman[409885]: 2025-11-25 09:11:42.614481435 +0000 UTC m=+0.025435752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:11:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ebbed220cd40372dfa5c3e94bec32a6227c4e5ac5ebc4598b063fdbb86075f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ebbed220cd40372dfa5c3e94bec32a6227c4e5ac5ebc4598b063fdbb86075f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ebbed220cd40372dfa5c3e94bec32a6227c4e5ac5ebc4598b063fdbb86075f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ebbed220cd40372dfa5c3e94bec32a6227c4e5ac5ebc4598b063fdbb86075f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:11:42 compute-0 podman[409885]: 2025-11-25 09:11:42.749511066 +0000 UTC m=+0.160465403 container init 4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 09:11:42 compute-0 podman[409885]: 2025-11-25 09:11:42.75846319 +0000 UTC m=+0.169417467 container start 4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:11:42 compute-0 podman[409885]: 2025-11-25 09:11:42.761741939 +0000 UTC m=+0.172696256 container attach 4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:11:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:11:43 compute-0 ceph-mon[75015]: pgmap v2739: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:43 compute-0 silly_yalow[409901]: {
Nov 25 09:11:43 compute-0 silly_yalow[409901]:     "0": [
Nov 25 09:11:43 compute-0 silly_yalow[409901]:         {
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "devices": [
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "/dev/loop3"
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             ],
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "lv_name": "ceph_lv0",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "lv_size": "21470642176",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "name": "ceph_lv0",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "tags": {
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.cluster_name": "ceph",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.crush_device_class": "",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.encrypted": "0",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.osd_id": "0",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.type": "block",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.vdo": "0"
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             },
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "type": "block",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "vg_name": "ceph_vg0"
Nov 25 09:11:43 compute-0 silly_yalow[409901]:         }
Nov 25 09:11:43 compute-0 silly_yalow[409901]:     ],
Nov 25 09:11:43 compute-0 silly_yalow[409901]:     "1": [
Nov 25 09:11:43 compute-0 silly_yalow[409901]:         {
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "devices": [
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "/dev/loop4"
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             ],
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "lv_name": "ceph_lv1",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "lv_size": "21470642176",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "name": "ceph_lv1",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "tags": {
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.cluster_name": "ceph",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.crush_device_class": "",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.encrypted": "0",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.osd_id": "1",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.type": "block",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.vdo": "0"
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             },
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "type": "block",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "vg_name": "ceph_vg1"
Nov 25 09:11:43 compute-0 silly_yalow[409901]:         }
Nov 25 09:11:43 compute-0 silly_yalow[409901]:     ],
Nov 25 09:11:43 compute-0 silly_yalow[409901]:     "2": [
Nov 25 09:11:43 compute-0 silly_yalow[409901]:         {
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "devices": [
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "/dev/loop5"
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             ],
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "lv_name": "ceph_lv2",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "lv_size": "21470642176",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "name": "ceph_lv2",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "tags": {
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.cluster_name": "ceph",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.crush_device_class": "",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.encrypted": "0",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.osd_id": "2",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.type": "block",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:                 "ceph.vdo": "0"
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             },
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "type": "block",
Nov 25 09:11:43 compute-0 silly_yalow[409901]:             "vg_name": "ceph_vg2"
Nov 25 09:11:43 compute-0 silly_yalow[409901]:         }
Nov 25 09:11:43 compute-0 silly_yalow[409901]:     ]
Nov 25 09:11:43 compute-0 silly_yalow[409901]: }
Nov 25 09:11:43 compute-0 systemd[1]: libpod-4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1.scope: Deactivated successfully.
Nov 25 09:11:43 compute-0 podman[409885]: 2025-11-25 09:11:43.607641924 +0000 UTC m=+1.018596181 container died 4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:11:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-62ebbed220cd40372dfa5c3e94bec32a6227c4e5ac5ebc4598b063fdbb86075f-merged.mount: Deactivated successfully.
Nov 25 09:11:43 compute-0 podman[409885]: 2025-11-25 09:11:43.788919081 +0000 UTC m=+1.199873348 container remove 4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:11:43 compute-0 systemd[1]: libpod-conmon-4e2e07be7b80a064cfe125d5199401f95ba9e9b209f467c3809f3b8459f5e6e1.scope: Deactivated successfully.
Nov 25 09:11:43 compute-0 sudo[409780]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:43 compute-0 sudo[409921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:11:43 compute-0 sudo[409921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:43 compute-0 sudo[409921]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:43 compute-0 sudo[409946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:11:43 compute-0 sudo[409946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:43 compute-0 sudo[409946]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:44 compute-0 sudo[409971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:11:44 compute-0 sudo[409971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:44 compute-0 sudo[409971]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:44 compute-0 sudo[409996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:11:44 compute-0 sudo[409996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2740: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:44 compute-0 podman[410060]: 2025-11-25 09:11:44.400062995 +0000 UTC m=+0.042030803 container create 2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_davinci, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 09:11:44 compute-0 nova_compute[253538]: 2025-11-25 09:11:44.461 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:44 compute-0 systemd[1]: Started libpod-conmon-2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce.scope.
Nov 25 09:11:44 compute-0 podman[410060]: 2025-11-25 09:11:44.384938913 +0000 UTC m=+0.026906741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:11:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:11:44 compute-0 podman[410060]: 2025-11-25 09:11:44.499877888 +0000 UTC m=+0.141845726 container init 2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:11:44 compute-0 podman[410060]: 2025-11-25 09:11:44.504806522 +0000 UTC m=+0.146774330 container start 2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_davinci, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 09:11:44 compute-0 podman[410060]: 2025-11-25 09:11:44.508157863 +0000 UTC m=+0.150125701 container attach 2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 09:11:44 compute-0 nice_davinci[410076]: 167 167
Nov 25 09:11:44 compute-0 systemd[1]: libpod-2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce.scope: Deactivated successfully.
Nov 25 09:11:44 compute-0 podman[410060]: 2025-11-25 09:11:44.511511214 +0000 UTC m=+0.153479022 container died 2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_davinci, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 09:11:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-001d7dad2cdd72052f792ab04075ea5559219cb9ce7e468494d58c3c752b0e22-merged.mount: Deactivated successfully.
Nov 25 09:11:44 compute-0 podman[410060]: 2025-11-25 09:11:44.555820289 +0000 UTC m=+0.197788137 container remove 2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:11:44 compute-0 systemd[1]: libpod-conmon-2520e6047af675bbe29cde88ba511a3c7cda3e48bffbce765080126bfde4b7ce.scope: Deactivated successfully.
Nov 25 09:11:44 compute-0 podman[410100]: 2025-11-25 09:11:44.747837929 +0000 UTC m=+0.056396104 container create ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:11:44 compute-0 systemd[1]: Started libpod-conmon-ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f.scope.
Nov 25 09:11:44 compute-0 podman[410100]: 2025-11-25 09:11:44.71733469 +0000 UTC m=+0.025892865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:11:44 compute-0 nova_compute[253538]: 2025-11-25 09:11:44.810 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:11:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6d2763e30f65eb5a845e4da3a7fe367cad2132e028969588340110a1a06eb4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:11:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6d2763e30f65eb5a845e4da3a7fe367cad2132e028969588340110a1a06eb4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:11:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6d2763e30f65eb5a845e4da3a7fe367cad2132e028969588340110a1a06eb4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:11:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6d2763e30f65eb5a845e4da3a7fe367cad2132e028969588340110a1a06eb4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:11:44 compute-0 podman[410100]: 2025-11-25 09:11:44.96787326 +0000 UTC m=+0.276431495 container init ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 09:11:44 compute-0 podman[410100]: 2025-11-25 09:11:44.976349711 +0000 UTC m=+0.284907896 container start ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:11:44 compute-0 podman[410100]: 2025-11-25 09:11:44.979592759 +0000 UTC m=+0.288150934 container attach ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:11:45 compute-0 ceph-mon[75015]: pgmap v2740: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]: {
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:         "osd_id": 1,
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:         "type": "bluestore"
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:     },
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:         "osd_id": 2,
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:         "type": "bluestore"
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:     },
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:         "osd_id": 0,
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:         "type": "bluestore"
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]:     }
Nov 25 09:11:45 compute-0 compassionate_shaw[410117]: }
Nov 25 09:11:46 compute-0 systemd[1]: libpod-ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f.scope: Deactivated successfully.
Nov 25 09:11:46 compute-0 systemd[1]: libpod-ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f.scope: Consumed 1.026s CPU time.
Nov 25 09:11:46 compute-0 podman[410100]: 2025-11-25 09:11:46.011233683 +0000 UTC m=+1.319791868 container died ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:11:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6d2763e30f65eb5a845e4da3a7fe367cad2132e028969588340110a1a06eb4a-merged.mount: Deactivated successfully.
Nov 25 09:11:46 compute-0 podman[410100]: 2025-11-25 09:11:46.091688999 +0000 UTC m=+1.400247144 container remove ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:11:46 compute-0 systemd[1]: libpod-conmon-ab0e6496e1d796924c4177d477b75d04e27e7079b3a2ac80b75d1428727dc23f.scope: Deactivated successfully.
Nov 25 09:11:46 compute-0 sudo[409996]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:11:46 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:11:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:11:46 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:11:46 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e24b4369-0a2e-411e-a53d-0bc600d7406c does not exist
Nov 25 09:11:46 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 4efe061a-0f5b-47bc-abd9-847bb23f30e4 does not exist
Nov 25 09:11:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2741: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:46 compute-0 sudo[410163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:11:46 compute-0 sudo[410163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:46 compute-0 sudo[410163]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:46 compute-0 sudo[410188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:11:46 compute-0 sudo[410188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:11:46 compute-0 sudo[410188]: pam_unix(sudo:session): session closed for user root
Nov 25 09:11:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:11:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:11:47 compute-0 ceph-mon[75015]: pgmap v2741: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:47 compute-0 nova_compute[253538]: 2025-11-25 09:11:47.557 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:11:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:48.030 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:f7:0e 10.100.0.2 2001:db8:0:1:f816:3eff:fef0:f70e 2001:db8::f816:3eff:fef0:f70e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fef0:f70e/64 2001:db8::f816:3eff:fef0:f70e/64', 'neutron:device_id': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82db8616-e2a9-492c-9b1b-8c775409acb3, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=08f181bc-bee1-4710-a487-b95c62cfce38) old=Port_Binding(mac=['fa:16:3e:f0:f7:0e 10.100.0.2 2001:db8::f816:3eff:fef0:f70e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef0:f70e/64', 'neutron:device_id': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:11:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:48.032 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 08f181bc-bee1-4710-a487-b95c62cfce38 in datapath 6c73317d-f647-4813-8469-7d8f6ba2c0c7 updated
Nov 25 09:11:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:48.033 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c73317d-f647-4813-8469-7d8f6ba2c0c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:11:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:11:48.033 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1cba74a-ee95-432d-b966-b1b524d724d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:11:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:11:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2742: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:49 compute-0 nova_compute[253538]: 2025-11-25 09:11:49.465 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:49 compute-0 ceph-mon[75015]: pgmap v2742: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:49 compute-0 nova_compute[253538]: 2025-11-25 09:11:49.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:11:49 compute-0 nova_compute[253538]: 2025-11-25 09:11:49.813 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2743: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:51 compute-0 ceph-mon[75015]: pgmap v2743: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2744: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:52 compute-0 nova_compute[253538]: 2025-11-25 09:11:52.521 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:52 compute-0 nova_compute[253538]: 2025-11-25 09:11:52.522 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:52 compute-0 nova_compute[253538]: 2025-11-25 09:11:52.555 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:11:52 compute-0 nova_compute[253538]: 2025-11-25 09:11:52.635 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:52 compute-0 nova_compute[253538]: 2025-11-25 09:11:52.636 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:52 compute-0 nova_compute[253538]: 2025-11-25 09:11:52.644 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:11:52 compute-0 nova_compute[253538]: 2025-11-25 09:11:52.645 253542 INFO nova.compute.claims [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:11:52 compute-0 nova_compute[253538]: 2025-11-25 09:11:52.791 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:11:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:11:53 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3131094186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.245 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.252 253542 DEBUG nova.compute.provider_tree [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.266 253542 DEBUG nova.scheduler.client.report [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:11:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.360 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.361 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:11:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:11:53
Nov 25 09:11:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:11:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:11:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'vms', 'backups', 'images', 'default.rgw.control', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 25 09:11:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.421 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.422 253542 DEBUG nova.network.neutron [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:11:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:11:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:11:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:11:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:11:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:11:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.525 253542 INFO nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:11:53 compute-0 ceph-mon[75015]: pgmap v2744: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3131094186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.544 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.651 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.653 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.653 253542 INFO nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Creating image(s)
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.686 253542 DEBUG nova.storage.rbd_utils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f5964963-11b8-4fd9-ace9-e5ee67571925_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.706 253542 DEBUG nova.storage.rbd_utils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f5964963-11b8-4fd9-ace9-e5ee67571925_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.729 253542 DEBUG nova.storage.rbd_utils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f5964963-11b8-4fd9-ace9-e5ee67571925_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.733 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.782 253542 DEBUG nova.policy [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.817 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.818 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.818 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.819 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.848 253542 DEBUG nova.storage.rbd_utils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f5964963-11b8-4fd9-ace9-e5ee67571925_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:11:53 compute-0 nova_compute[253538]: 2025-11-25 09:11:53.853 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc f5964963-11b8-4fd9-ace9-e5ee67571925_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:11:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:11:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:11:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:11:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:11:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:11:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:11:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:11:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:11:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:11:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:11:54 compute-0 nova_compute[253538]: 2025-11-25 09:11:54.170 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc f5964963-11b8-4fd9-ace9-e5ee67571925_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:11:54 compute-0 nova_compute[253538]: 2025-11-25 09:11:54.220 253542 DEBUG nova.storage.rbd_utils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image f5964963-11b8-4fd9-ace9-e5ee67571925_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:11:54 compute-0 nova_compute[253538]: 2025-11-25 09:11:54.301 253542 DEBUG nova.objects.instance [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid f5964963-11b8-4fd9-ace9-e5ee67571925 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:11:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2745: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:54 compute-0 nova_compute[253538]: 2025-11-25 09:11:54.326 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:11:54 compute-0 nova_compute[253538]: 2025-11-25 09:11:54.327 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Ensure instance console log exists: /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:11:54 compute-0 nova_compute[253538]: 2025-11-25 09:11:54.327 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:54 compute-0 nova_compute[253538]: 2025-11-25 09:11:54.327 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:54 compute-0 nova_compute[253538]: 2025-11-25 09:11:54.328 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:54 compute-0 nova_compute[253538]: 2025-11-25 09:11:54.467 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:54 compute-0 nova_compute[253538]: 2025-11-25 09:11:54.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:11:54 compute-0 nova_compute[253538]: 2025-11-25 09:11:54.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:11:54 compute-0 nova_compute[253538]: 2025-11-25 09:11:54.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:11:54 compute-0 nova_compute[253538]: 2025-11-25 09:11:54.580 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:11:54 compute-0 nova_compute[253538]: 2025-11-25 09:11:54.754 253542 DEBUG nova.network.neutron [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Successfully created port: 637fce28-ce53-4bd9-95fb-dc0675dd7009 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:11:54 compute-0 nova_compute[253538]: 2025-11-25 09:11:54.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:55 compute-0 nova_compute[253538]: 2025-11-25 09:11:55.535 253542 DEBUG nova.network.neutron [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Successfully updated port: 637fce28-ce53-4bd9-95fb-dc0675dd7009 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:11:55 compute-0 ceph-mon[75015]: pgmap v2745: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:11:55 compute-0 nova_compute[253538]: 2025-11-25 09:11:55.563 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:11:55 compute-0 nova_compute[253538]: 2025-11-25 09:11:55.563 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:11:55 compute-0 nova_compute[253538]: 2025-11-25 09:11:55.563 253542 DEBUG nova.network.neutron [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:11:55 compute-0 nova_compute[253538]: 2025-11-25 09:11:55.637 253542 DEBUG nova.compute.manager [req-1276c942-f00a-4b72-8c3e-9d70456b1d47 req-65b3014d-b0d6-4e6f-9140-91ec67e3f355 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-changed-637fce28-ce53-4bd9-95fb-dc0675dd7009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:11:55 compute-0 nova_compute[253538]: 2025-11-25 09:11:55.637 253542 DEBUG nova.compute.manager [req-1276c942-f00a-4b72-8c3e-9d70456b1d47 req-65b3014d-b0d6-4e6f-9140-91ec67e3f355 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Refreshing instance network info cache due to event network-changed-637fce28-ce53-4bd9-95fb-dc0675dd7009. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:11:55 compute-0 nova_compute[253538]: 2025-11-25 09:11:55.637 253542 DEBUG oslo_concurrency.lockutils [req-1276c942-f00a-4b72-8c3e-9d70456b1d47 req-65b3014d-b0d6-4e6f-9140-91ec67e3f355 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:11:55 compute-0 nova_compute[253538]: 2025-11-25 09:11:55.718 253542 DEBUG nova.network.neutron [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:11:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2746: 321 pgs: 321 active+clean; 104 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 650 KiB/s wr, 1 op/s
Nov 25 09:11:57 compute-0 ceph-mon[75015]: pgmap v2746: 321 pgs: 321 active+clean; 104 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 650 KiB/s wr, 1 op/s
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.757 253542 DEBUG nova.network.neutron [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updating instance_info_cache with network_info: [{"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.790 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.791 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Instance network_info: |[{"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.792 253542 DEBUG oslo_concurrency.lockutils [req-1276c942-f00a-4b72-8c3e-9d70456b1d47 req-65b3014d-b0d6-4e6f-9140-91ec67e3f355 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.793 253542 DEBUG nova.network.neutron [req-1276c942-f00a-4b72-8c3e-9d70456b1d47 req-65b3014d-b0d6-4e6f-9140-91ec67e3f355 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Refreshing network info cache for port 637fce28-ce53-4bd9-95fb-dc0675dd7009 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.798 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Start _get_guest_xml network_info=[{"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.806 253542 WARNING nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.817 253542 DEBUG nova.virt.libvirt.host [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.819 253542 DEBUG nova.virt.libvirt.host [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.823 253542 DEBUG nova.virt.libvirt.host [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.824 253542 DEBUG nova.virt.libvirt.host [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.826 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.826 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.826 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.827 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.827 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.827 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.827 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.827 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.828 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.828 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.828 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.828 253542 DEBUG nova.virt.hardware [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:11:57 compute-0 nova_compute[253538]: 2025-11-25 09:11:57.831 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:11:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:11:58 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3589502343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.307 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:11:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:11:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2747: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.334 253542 DEBUG nova.storage.rbd_utils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f5964963-11b8-4fd9-ace9-e5ee67571925_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.337 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:11:58 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3589502343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:11:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:11:58 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553256577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.772 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.773 253542 DEBUG nova.virt.libvirt.vif [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:11:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1898594818',display_name='tempest-TestGettingAddress-server-1898594818',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1898594818',id=146,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCYWSfdVj+Wf7cbG/yHvSQzac2UDJ+lVbG+gHLhUdgH1axNTDtOQgu7Qi4+rF49LHVdNelU4faTO+e7DMH4d34+ViVy+2CuP/uQsAgIE7Bo9udHLpfncBLpy55zcDymblA==',key_name='tempest-TestGettingAddress-275156659',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ezfjzmm2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:11:53Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=f5964963-11b8-4fd9-ace9-e5ee67571925,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.774 253542 DEBUG nova.network.os_vif_util [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.775 253542 DEBUG nova.network.os_vif_util [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:4e:62,bridge_name='br-int',has_traffic_filtering=True,id=637fce28-ce53-4bd9-95fb-dc0675dd7009,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637fce28-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.777 253542 DEBUG nova.objects.instance [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid f5964963-11b8-4fd9-ace9-e5ee67571925 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.789 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:11:58 compute-0 nova_compute[253538]:   <uuid>f5964963-11b8-4fd9-ace9-e5ee67571925</uuid>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   <name>instance-00000092</name>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <nova:name>tempest-TestGettingAddress-server-1898594818</nova:name>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:11:57</nova:creationTime>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:11:58 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:11:58 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:11:58 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:11:58 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:11:58 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:11:58 compute-0 nova_compute[253538]:         <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 09:11:58 compute-0 nova_compute[253538]:         <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:11:58 compute-0 nova_compute[253538]:         <nova:port uuid="637fce28-ce53-4bd9-95fb-dc0675dd7009">
Nov 25 09:11:58 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe59:4e62" ipVersion="6"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe59:4e62" ipVersion="6"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <system>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <entry name="serial">f5964963-11b8-4fd9-ace9-e5ee67571925</entry>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <entry name="uuid">f5964963-11b8-4fd9-ace9-e5ee67571925</entry>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     </system>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   <os>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   </os>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   <features>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   </features>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/f5964963-11b8-4fd9-ace9-e5ee67571925_disk">
Nov 25 09:11:58 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       </source>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:11:58 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/f5964963-11b8-4fd9-ace9-e5ee67571925_disk.config">
Nov 25 09:11:58 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       </source>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:11:58 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:59:4e:62"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <target dev="tap637fce28-ce"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925/console.log" append="off"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <video>
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     </video>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:11:58 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:11:58 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:11:58 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:11:58 compute-0 nova_compute[253538]: </domain>
Nov 25 09:11:58 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.790 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Preparing to wait for external event network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.791 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.791 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.791 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.792 253542 DEBUG nova.virt.libvirt.vif [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:11:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1898594818',display_name='tempest-TestGettingAddress-server-1898594818',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1898594818',id=146,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCYWSfdVj+Wf7cbG/yHvSQzac2UDJ+lVbG+gHLhUdgH1axNTDtOQgu7Qi4+rF49LHVdNelU4faTO+e7DMH4d34+ViVy+2CuP/uQsAgIE7Bo9udHLpfncBLpy55zcDymblA==',key_name='tempest-TestGettingAddress-275156659',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ezfjzmm2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:11:53Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=f5964963-11b8-4fd9-ace9-e5ee67571925,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.792 253542 DEBUG nova.network.os_vif_util [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.793 253542 DEBUG nova.network.os_vif_util [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:4e:62,bridge_name='br-int',has_traffic_filtering=True,id=637fce28-ce53-4bd9-95fb-dc0675dd7009,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637fce28-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.794 253542 DEBUG os_vif [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:4e:62,bridge_name='br-int',has_traffic_filtering=True,id=637fce28-ce53-4bd9-95fb-dc0675dd7009,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637fce28-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.795 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.795 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.799 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.800 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap637fce28-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.800 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap637fce28-ce, col_values=(('external_ids', {'iface-id': '637fce28-ce53-4bd9-95fb-dc0675dd7009', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:59:4e:62', 'vm-uuid': 'f5964963-11b8-4fd9-ace9-e5ee67571925'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.802 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:58 compute-0 NetworkManager[48915]: <info>  [1764061918.8029] manager: (tap637fce28-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/640)
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.804 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.810 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.811 253542 INFO os_vif [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:4e:62,bridge_name='br-int',has_traffic_filtering=True,id=637fce28-ce53-4bd9-95fb-dc0675dd7009,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637fce28-ce')
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.858 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.859 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.859 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:59:4e:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.860 253542 INFO nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Using config drive
Nov 25 09:11:58 compute-0 nova_compute[253538]: 2025-11-25 09:11:58.886 253542 DEBUG nova.storage.rbd_utils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f5964963-11b8-4fd9-ace9-e5ee67571925_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:11:58 compute-0 sshd-session[410401]: Received disconnect from 45.202.211.6 port 50584:11: Bye Bye [preauth]
Nov 25 09:11:58 compute-0 sshd-session[410401]: Disconnected from authenticating user root 45.202.211.6 port 50584 [preauth]
Nov 25 09:11:59 compute-0 nova_compute[253538]: 2025-11-25 09:11:59.469 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:11:59 compute-0 ceph-mon[75015]: pgmap v2747: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:11:59 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2553256577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:11:59 compute-0 nova_compute[253538]: 2025-11-25 09:11:59.890 253542 INFO nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Creating config drive at /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925/disk.config
Nov 25 09:11:59 compute-0 nova_compute[253538]: 2025-11-25 09:11:59.895 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4qp6oy1o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:12:00 compute-0 nova_compute[253538]: 2025-11-25 09:12:00.064 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4qp6oy1o" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:12:00 compute-0 nova_compute[253538]: 2025-11-25 09:12:00.091 253542 DEBUG nova.storage.rbd_utils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image f5964963-11b8-4fd9-ace9-e5ee67571925_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:12:00 compute-0 nova_compute[253538]: 2025-11-25 09:12:00.095 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925/disk.config f5964963-11b8-4fd9-ace9-e5ee67571925_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:12:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2748: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:12:00 compute-0 nova_compute[253538]: 2025-11-25 09:12:00.406 253542 DEBUG oslo_concurrency.processutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925/disk.config f5964963-11b8-4fd9-ace9-e5ee67571925_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:12:00 compute-0 nova_compute[253538]: 2025-11-25 09:12:00.407 253542 INFO nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Deleting local config drive /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925/disk.config because it was imported into RBD.
Nov 25 09:12:00 compute-0 kernel: tap637fce28-ce: entered promiscuous mode
Nov 25 09:12:00 compute-0 NetworkManager[48915]: <info>  [1764061920.4868] manager: (tap637fce28-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/641)
Nov 25 09:12:00 compute-0 nova_compute[253538]: 2025-11-25 09:12:00.533 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:00 compute-0 ovn_controller[152859]: 2025-11-25T09:12:00Z|01562|binding|INFO|Claiming lport 637fce28-ce53-4bd9-95fb-dc0675dd7009 for this chassis.
Nov 25 09:12:00 compute-0 ovn_controller[152859]: 2025-11-25T09:12:00Z|01563|binding|INFO|637fce28-ce53-4bd9-95fb-dc0675dd7009: Claiming fa:16:3e:59:4e:62 10.100.0.4 2001:db8:0:1:f816:3eff:fe59:4e62 2001:db8::f816:3eff:fe59:4e62
Nov 25 09:12:00 compute-0 nova_compute[253538]: 2025-11-25 09:12:00.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:00 compute-0 systemd-udevd[410538]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.560 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:4e:62 10.100.0.4 2001:db8:0:1:f816:3eff:fe59:4e62 2001:db8::f816:3eff:fe59:4e62'], port_security=['fa:16:3e:59:4e:62 10.100.0.4 2001:db8:0:1:f816:3eff:fe59:4e62 2001:db8::f816:3eff:fe59:4e62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8:0:1:f816:3eff:fe59:4e62/64 2001:db8::f816:3eff:fe59:4e62/64', 'neutron:device_id': 'f5964963-11b8-4fd9-ace9-e5ee67571925', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67e8b3b6-9b34-4c2b-a8fc-88ec98721eb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82db8616-e2a9-492c-9b1b-8c775409acb3, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=637fce28-ce53-4bd9-95fb-dc0675dd7009) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.561 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 637fce28-ce53-4bd9-95fb-dc0675dd7009 in datapath 6c73317d-f647-4813-8469-7d8f6ba2c0c7 bound to our chassis
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.562 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c73317d-f647-4813-8469-7d8f6ba2c0c7
Nov 25 09:12:00 compute-0 systemd-machined[215790]: New machine qemu-176-instance-00000092.
Nov 25 09:12:00 compute-0 NetworkManager[48915]: <info>  [1764061920.5741] device (tap637fce28-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:12:00 compute-0 NetworkManager[48915]: <info>  [1764061920.5749] device (tap637fce28-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.576 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a65c10-6e8f-44d9-b87c-f4a00a0cfda8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.577 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c73317d-f1 in ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.579 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c73317d-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.579 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e283315d-92e7-4048-b70b-c9b2aafb3945]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.579 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2a2780a2-57b6-4b7a-a998-d55741baae9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.592 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[2d225b4c-d9d8-472d-a9f8-39eb9816246d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:00 compute-0 ovn_controller[152859]: 2025-11-25T09:12:00Z|01564|binding|INFO|Setting lport 637fce28-ce53-4bd9-95fb-dc0675dd7009 ovn-installed in OVS
Nov 25 09:12:00 compute-0 ovn_controller[152859]: 2025-11-25T09:12:00Z|01565|binding|INFO|Setting lport 637fce28-ce53-4bd9-95fb-dc0675dd7009 up in Southbound
Nov 25 09:12:00 compute-0 nova_compute[253538]: 2025-11-25 09:12:00.603 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:00 compute-0 systemd[1]: Started Virtual Machine qemu-176-instance-00000092.
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.604 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[790422a2-a904-405b-b8df-f5248ad675e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.633 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1a853244-e882-42da-a91b-dd369c4e9f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:00 compute-0 systemd-udevd[410542]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.638 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c83020-5220-4c8e-abd5-e2ac51343660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:00 compute-0 NetworkManager[48915]: <info>  [1764061920.6394] manager: (tap6c73317d-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/642)
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.663 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[097cd7c8-b190-4abe-be2e-49436b321121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.666 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f40b5f84-939c-4e09-b7ff-5672b1bc26ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:00 compute-0 NetworkManager[48915]: <info>  [1764061920.6909] device (tap6c73317d-f0): carrier: link connected
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.699 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[53b8c74a-9744-4606-a743-2e9ad7b2aff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.719 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8c5f3f27-514b-4a89-a7d7-dd93aa688068]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c73317d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:f7:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727529, 'reachable_time': 16900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410572, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.734 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b04d295c-cf1b-4038-9260-683959449f81]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef0:f70e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727529, 'tstamp': 727529}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410573, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.752 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fcfa37ba-0c25-4a6f-a23b-d04cc5f11ef5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c73317d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:f7:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727529, 'reachable_time': 16900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 410574, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.780 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[29470ef7-7760-4720-8686-344342504983]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.829 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[10bb94a6-b49c-4085-97a3-67dbf5be2e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.830 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c73317d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.830 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.830 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c73317d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:12:00 compute-0 kernel: tap6c73317d-f0: entered promiscuous mode
Nov 25 09:12:00 compute-0 nova_compute[253538]: 2025-11-25 09:12:00.832 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:00 compute-0 NetworkManager[48915]: <info>  [1764061920.8330] manager: (tap6c73317d-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/643)
Nov 25 09:12:00 compute-0 nova_compute[253538]: 2025-11-25 09:12:00.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.835 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c73317d-f0, col_values=(('external_ids', {'iface-id': '08f181bc-bee1-4710-a487-b95c62cfce38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:12:00 compute-0 nova_compute[253538]: 2025-11-25 09:12:00.836 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:00 compute-0 ovn_controller[152859]: 2025-11-25T09:12:00Z|01566|binding|INFO|Releasing lport 08f181bc-bee1-4710-a487-b95c62cfce38 from this chassis (sb_readonly=0)
Nov 25 09:12:00 compute-0 nova_compute[253538]: 2025-11-25 09:12:00.848 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.849 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c73317d-f647-4813-8469-7d8f6ba2c0c7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c73317d-f647-4813-8469-7d8f6ba2c0c7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.850 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[41f03221-7a75-4089-8260-743e70a9ef5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.851 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-6c73317d-f647-4813-8469-7d8f6ba2c0c7
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/6c73317d-f647-4813-8469-7d8f6ba2c0c7.pid.haproxy
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 6c73317d-f647-4813-8469-7d8f6ba2c0c7
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:12:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:00.852 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'env', 'PROCESS_TAG=haproxy-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c73317d-f647-4813-8469-7d8f6ba2c0c7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:12:00 compute-0 nova_compute[253538]: 2025-11-25 09:12:00.996 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061920.995621, f5964963-11b8-4fd9-ace9-e5ee67571925 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:12:00 compute-0 nova_compute[253538]: 2025-11-25 09:12:00.997 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] VM Started (Lifecycle Event)
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.014 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.018 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061920.9968166, f5964963-11b8-4fd9-ace9-e5ee67571925 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.018 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] VM Paused (Lifecycle Event)
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.033 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.036 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.049 253542 DEBUG nova.compute.manager [req-b1e22d23-b575-4717-98a4-0a6a9bd8b3bb req-bc07bb9c-eff6-4e84-bffc-f42ee607084a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.049 253542 DEBUG oslo_concurrency.lockutils [req-b1e22d23-b575-4717-98a4-0a6a9bd8b3bb req-bc07bb9c-eff6-4e84-bffc-f42ee607084a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.049 253542 DEBUG oslo_concurrency.lockutils [req-b1e22d23-b575-4717-98a4-0a6a9bd8b3bb req-bc07bb9c-eff6-4e84-bffc-f42ee607084a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.050 253542 DEBUG oslo_concurrency.lockutils [req-b1e22d23-b575-4717-98a4-0a6a9bd8b3bb req-bc07bb9c-eff6-4e84-bffc-f42ee607084a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.050 253542 DEBUG nova.compute.manager [req-b1e22d23-b575-4717-98a4-0a6a9bd8b3bb req-bc07bb9c-eff6-4e84-bffc-f42ee607084a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Processing event network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.051 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.051 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.054 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061921.0542498, f5964963-11b8-4fd9-ace9-e5ee67571925 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.054 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] VM Resumed (Lifecycle Event)
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.056 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.058 253542 INFO nova.virt.libvirt.driver [-] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Instance spawned successfully.
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.058 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.071 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.076 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.080 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.081 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.081 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.082 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.082 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.083 253542 DEBUG nova.virt.libvirt.driver [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.107 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.244 253542 INFO nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Took 7.59 seconds to spawn the instance on the hypervisor.
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.245 253542 DEBUG nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:12:01 compute-0 podman[410649]: 2025-11-25 09:12:01.252968143 +0000 UTC m=+0.104747628 container create c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:12:01 compute-0 podman[410649]: 2025-11-25 09:12:01.16749278 +0000 UTC m=+0.019272265 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:12:01 compute-0 systemd[1]: Started libpod-conmon-c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d.scope.
Nov 25 09:12:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:12:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8dc5cf81cc7a79b68a934106958d2a3e5f0faa44968a236fc77d5815309e349/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.339 253542 INFO nova.compute.manager [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Took 8.74 seconds to build instance.
Nov 25 09:12:01 compute-0 nova_compute[253538]: 2025-11-25 09:12:01.366 253542 DEBUG oslo_concurrency.lockutils [None req-ff78a7af-1a34-46b5-8d0c-947bedf9c0c2 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:12:01 compute-0 podman[410649]: 2025-11-25 09:12:01.389881505 +0000 UTC m=+0.241661010 container init c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 09:12:01 compute-0 podman[410649]: 2025-11-25 09:12:01.395099957 +0000 UTC m=+0.246879432 container start c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 09:12:01 compute-0 neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7[410665]: [NOTICE]   (410669) : New worker (410671) forked
Nov 25 09:12:01 compute-0 neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7[410665]: [NOTICE]   (410669) : Loading success.
Nov 25 09:12:01 compute-0 ceph-mon[75015]: pgmap v2748: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:12:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2749: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 09:12:02 compute-0 nova_compute[253538]: 2025-11-25 09:12:02.449 253542 DEBUG nova.network.neutron [req-1276c942-f00a-4b72-8c3e-9d70456b1d47 req-65b3014d-b0d6-4e6f-9140-91ec67e3f355 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updated VIF entry in instance network info cache for port 637fce28-ce53-4bd9-95fb-dc0675dd7009. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:12:02 compute-0 nova_compute[253538]: 2025-11-25 09:12:02.451 253542 DEBUG nova.network.neutron [req-1276c942-f00a-4b72-8c3e-9d70456b1d47 req-65b3014d-b0d6-4e6f-9140-91ec67e3f355 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updating instance_info_cache with network_info: [{"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:12:02 compute-0 nova_compute[253538]: 2025-11-25 09:12:02.466 253542 DEBUG oslo_concurrency.lockutils [req-1276c942-f00a-4b72-8c3e-9d70456b1d47 req-65b3014d-b0d6-4e6f-9140-91ec67e3f355 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:12:02 compute-0 nova_compute[253538]: 2025-11-25 09:12:02.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:12:02 compute-0 nova_compute[253538]: 2025-11-25 09:12:02.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:12:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:02.581 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:12:02 compute-0 nova_compute[253538]: 2025-11-25 09:12:02.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:02 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:02.583 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:12:02 compute-0 nova_compute[253538]: 2025-11-25 09:12:02.842 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:12:02 compute-0 nova_compute[253538]: 2025-11-25 09:12:02.862 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Triggering sync for uuid f5964963-11b8-4fd9-ace9-e5ee67571925 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 09:12:02 compute-0 nova_compute[253538]: 2025-11-25 09:12:02.862 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:12:02 compute-0 nova_compute[253538]: 2025-11-25 09:12:02.863 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:12:02 compute-0 nova_compute[253538]: 2025-11-25 09:12:02.883 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:12:03 compute-0 nova_compute[253538]: 2025-11-25 09:12:03.140 253542 DEBUG nova.compute.manager [req-b454c817-ea45-431c-a663-e83279fb6eaa req-f423d5ca-d59b-47d1-9c88-cdb6a4a134ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:12:03 compute-0 nova_compute[253538]: 2025-11-25 09:12:03.141 253542 DEBUG oslo_concurrency.lockutils [req-b454c817-ea45-431c-a663-e83279fb6eaa req-f423d5ca-d59b-47d1-9c88-cdb6a4a134ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:12:03 compute-0 nova_compute[253538]: 2025-11-25 09:12:03.141 253542 DEBUG oslo_concurrency.lockutils [req-b454c817-ea45-431c-a663-e83279fb6eaa req-f423d5ca-d59b-47d1-9c88-cdb6a4a134ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:12:03 compute-0 nova_compute[253538]: 2025-11-25 09:12:03.141 253542 DEBUG oslo_concurrency.lockutils [req-b454c817-ea45-431c-a663-e83279fb6eaa req-f423d5ca-d59b-47d1-9c88-cdb6a4a134ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:12:03 compute-0 nova_compute[253538]: 2025-11-25 09:12:03.142 253542 DEBUG nova.compute.manager [req-b454c817-ea45-431c-a663-e83279fb6eaa req-f423d5ca-d59b-47d1-9c88-cdb6a4a134ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] No waiting events found dispatching network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:12:03 compute-0 nova_compute[253538]: 2025-11-25 09:12:03.142 253542 WARNING nova.compute.manager [req-b454c817-ea45-431c-a663-e83279fb6eaa req-f423d5ca-d59b-47d1-9c88-cdb6a4a134ef b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received unexpected event network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 for instance with vm_state active and task_state None.
Nov 25 09:12:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:12:03 compute-0 ceph-mon[75015]: pgmap v2749: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 09:12:03 compute-0 nova_compute[253538]: 2025-11-25 09:12:03.803 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2750: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 920 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003461242226671876 of space, bias 1.0, pg target 0.10383726680015627 quantized to 32 (current 32)
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:12:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:12:04 compute-0 nova_compute[253538]: 2025-11-25 09:12:04.470 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:05.584 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:12:05 compute-0 ceph-mon[75015]: pgmap v2750: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 920 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Nov 25 09:12:06 compute-0 ovn_controller[152859]: 2025-11-25T09:12:06Z|01567|binding|INFO|Releasing lport 08f181bc-bee1-4710-a487-b95c62cfce38 from this chassis (sb_readonly=0)
Nov 25 09:12:06 compute-0 NetworkManager[48915]: <info>  [1764061926.0048] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/644)
Nov 25 09:12:06 compute-0 NetworkManager[48915]: <info>  [1764061926.0056] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/645)
Nov 25 09:12:06 compute-0 nova_compute[253538]: 2025-11-25 09:12:06.000 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:06 compute-0 ovn_controller[152859]: 2025-11-25T09:12:06Z|01568|binding|INFO|Releasing lport 08f181bc-bee1-4710-a487-b95c62cfce38 from this chassis (sb_readonly=0)
Nov 25 09:12:06 compute-0 nova_compute[253538]: 2025-11-25 09:12:06.036 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:06 compute-0 nova_compute[253538]: 2025-11-25 09:12:06.045 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2751: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Nov 25 09:12:06 compute-0 podman[410681]: 2025-11-25 09:12:06.805873904 +0000 UTC m=+0.057249378 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:12:06 compute-0 podman[410682]: 2025-11-25 09:12:06.809257906 +0000 UTC m=+0.057258228 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 09:12:06 compute-0 nova_compute[253538]: 2025-11-25 09:12:06.832 253542 DEBUG nova.compute.manager [req-92f8daa1-25a1-46dc-a696-0d8817feec42 req-21beb53a-5af7-4dd5-bb0d-534579138817 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-changed-637fce28-ce53-4bd9-95fb-dc0675dd7009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:12:06 compute-0 nova_compute[253538]: 2025-11-25 09:12:06.833 253542 DEBUG nova.compute.manager [req-92f8daa1-25a1-46dc-a696-0d8817feec42 req-21beb53a-5af7-4dd5-bb0d-534579138817 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Refreshing instance network info cache due to event network-changed-637fce28-ce53-4bd9-95fb-dc0675dd7009. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:12:06 compute-0 nova_compute[253538]: 2025-11-25 09:12:06.833 253542 DEBUG oslo_concurrency.lockutils [req-92f8daa1-25a1-46dc-a696-0d8817feec42 req-21beb53a-5af7-4dd5-bb0d-534579138817 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:12:06 compute-0 nova_compute[253538]: 2025-11-25 09:12:06.833 253542 DEBUG oslo_concurrency.lockutils [req-92f8daa1-25a1-46dc-a696-0d8817feec42 req-21beb53a-5af7-4dd5-bb0d-534579138817 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:12:06 compute-0 nova_compute[253538]: 2025-11-25 09:12:06.834 253542 DEBUG nova.network.neutron [req-92f8daa1-25a1-46dc-a696-0d8817feec42 req-21beb53a-5af7-4dd5-bb0d-534579138817 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Refreshing network info cache for port 637fce28-ce53-4bd9-95fb-dc0675dd7009 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:12:07 compute-0 nova_compute[253538]: 2025-11-25 09:12:07.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:12:07 compute-0 nova_compute[253538]: 2025-11-25 09:12:07.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:12:07 compute-0 nova_compute[253538]: 2025-11-25 09:12:07.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:12:07 compute-0 nova_compute[253538]: 2025-11-25 09:12:07.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:12:07 compute-0 nova_compute[253538]: 2025-11-25 09:12:07.577 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:12:07 compute-0 nova_compute[253538]: 2025-11-25 09:12:07.577 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:12:07 compute-0 ceph-mon[75015]: pgmap v2751: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Nov 25 09:12:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:12:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/72231431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.083 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.144 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.144 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.297 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.298 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3464MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.299 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.299 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:12:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:12:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2752: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 131 op/s
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.372 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance f5964963-11b8-4fd9-ace9-e5ee67571925 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.373 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.373 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.415 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.807 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.909 253542 DEBUG nova.network.neutron [req-92f8daa1-25a1-46dc-a696-0d8817feec42 req-21beb53a-5af7-4dd5-bb0d-534579138817 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updated VIF entry in instance network info cache for port 637fce28-ce53-4bd9-95fb-dc0675dd7009. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.910 253542 DEBUG nova.network.neutron [req-92f8daa1-25a1-46dc-a696-0d8817feec42 req-21beb53a-5af7-4dd5-bb0d-534579138817 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updating instance_info_cache with network_info: [{"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:12:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:12:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2794914448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.943 253542 DEBUG oslo_concurrency.lockutils [req-92f8daa1-25a1-46dc-a696-0d8817feec42 req-21beb53a-5af7-4dd5-bb0d-534579138817 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.966 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.974 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:12:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/72231431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:12:08 compute-0 ceph-mon[75015]: pgmap v2752: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 131 op/s
Nov 25 09:12:08 compute-0 nova_compute[253538]: 2025-11-25 09:12:08.990 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:12:09 compute-0 nova_compute[253538]: 2025-11-25 09:12:09.046 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:12:09 compute-0 nova_compute[253538]: 2025-11-25 09:12:09.047 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:12:09 compute-0 nova_compute[253538]: 2025-11-25 09:12:09.473 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2794914448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:12:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2753: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 127 op/s
Nov 25 09:12:11 compute-0 ceph-mon[75015]: pgmap v2753: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 127 op/s
Nov 25 09:12:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2754: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 133 op/s
Nov 25 09:12:12 compute-0 podman[410763]: 2025-11-25 09:12:12.83356675 +0000 UTC m=+0.078722761 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 25 09:12:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:12:13 compute-0 ceph-mon[75015]: pgmap v2754: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 133 op/s
Nov 25 09:12:13 compute-0 nova_compute[253538]: 2025-11-25 09:12:13.810 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2755: 321 pgs: 321 active+clean; 135 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 404 KiB/s wr, 135 op/s
Nov 25 09:12:14 compute-0 nova_compute[253538]: 2025-11-25 09:12:14.476 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:15 compute-0 ovn_controller[152859]: 2025-11-25T09:12:15Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:59:4e:62 10.100.0.4
Nov 25 09:12:15 compute-0 ovn_controller[152859]: 2025-11-25T09:12:15Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:59:4e:62 10.100.0.4
Nov 25 09:12:15 compute-0 ceph-mon[75015]: pgmap v2755: 321 pgs: 321 active+clean; 135 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 404 KiB/s wr, 135 op/s
Nov 25 09:12:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2756: 321 pgs: 321 active+clean; 144 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 116 op/s
Nov 25 09:12:16 compute-0 ceph-mon[75015]: pgmap v2756: 321 pgs: 321 active+clean; 144 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 116 op/s
Nov 25 09:12:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:12:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2757: 321 pgs: 321 active+clean; 161 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 617 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Nov 25 09:12:18 compute-0 nova_compute[253538]: 2025-11-25 09:12:18.815 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:19 compute-0 ceph-mon[75015]: pgmap v2757: 321 pgs: 321 active+clean; 161 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 617 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Nov 25 09:12:19 compute-0 nova_compute[253538]: 2025-11-25 09:12:19.479 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2758: 321 pgs: 321 active+clean; 162 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 280 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 25 09:12:21 compute-0 ceph-mon[75015]: pgmap v2758: 321 pgs: 321 active+clean; 162 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 280 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 25 09:12:21 compute-0 systemd[1]: Starting dnf makecache...
Nov 25 09:12:21 compute-0 dnf[410792]: Metadata cache refreshed recently.
Nov 25 09:12:22 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 25 09:12:22 compute-0 systemd[1]: Finished dnf makecache.
Nov 25 09:12:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2759: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 277 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 25 09:12:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:12:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:12:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:12:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:12:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:12:23 compute-0 ceph-mon[75015]: pgmap v2759: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 277 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 25 09:12:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:12:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:12:23 compute-0 nova_compute[253538]: 2025-11-25 09:12:23.817 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2760: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 25 09:12:24 compute-0 nova_compute[253538]: 2025-11-25 09:12:24.481 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:25 compute-0 ceph-mon[75015]: pgmap v2760: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 25 09:12:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2761: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 09:12:26 compute-0 nova_compute[253538]: 2025-11-25 09:12:26.774 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "985307b1-28a6-47cc-8dfc-f18ab08169f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:12:26 compute-0 nova_compute[253538]: 2025-11-25 09:12:26.775 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:12:26 compute-0 nova_compute[253538]: 2025-11-25 09:12:26.834 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:12:27 compute-0 nova_compute[253538]: 2025-11-25 09:12:27.013 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:12:27 compute-0 nova_compute[253538]: 2025-11-25 09:12:27.014 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:12:27 compute-0 nova_compute[253538]: 2025-11-25 09:12:27.023 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:12:27 compute-0 nova_compute[253538]: 2025-11-25 09:12:27.024 253542 INFO nova.compute.claims [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:12:27 compute-0 nova_compute[253538]: 2025-11-25 09:12:27.319 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:12:27 compute-0 ceph-mon[75015]: pgmap v2761: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 09:12:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:12:27 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2877166837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:12:27 compute-0 nova_compute[253538]: 2025-11-25 09:12:27.768 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:12:27 compute-0 nova_compute[253538]: 2025-11-25 09:12:27.775 253542 DEBUG nova.compute.provider_tree [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:12:27 compute-0 nova_compute[253538]: 2025-11-25 09:12:27.790 253542 DEBUG nova.scheduler.client.report [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:12:27 compute-0 nova_compute[253538]: 2025-11-25 09:12:27.956 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:12:27 compute-0 nova_compute[253538]: 2025-11-25 09:12:27.957 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.001 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.002 253542 DEBUG nova.network.neutron [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.027 253542 INFO nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.135 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.312 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:12:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.314 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.315 253542 INFO nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Creating image(s)
Nov 25 09:12:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2762: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 948 KiB/s wr, 43 op/s
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.340 253542 DEBUG nova.storage.rbd_utils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.366 253542 DEBUG nova.storage.rbd_utils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.392 253542 DEBUG nova.storage.rbd_utils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.396 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.431 253542 DEBUG nova.policy [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.465 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.466 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.467 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.468 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.493 253542 DEBUG nova.storage.rbd_utils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.496 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:12:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2877166837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.819 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.853 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:12:28 compute-0 nova_compute[253538]: 2025-11-25 09:12:28.926 253542 DEBUG nova.storage.rbd_utils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:12:29 compute-0 nova_compute[253538]: 2025-11-25 09:12:29.048 253542 DEBUG nova.objects.instance [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 985307b1-28a6-47cc-8dfc-f18ab08169f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:12:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:12:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/530805451' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:12:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:12:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/530805451' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:12:29 compute-0 nova_compute[253538]: 2025-11-25 09:12:29.060 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:12:29 compute-0 nova_compute[253538]: 2025-11-25 09:12:29.061 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Ensure instance console log exists: /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:12:29 compute-0 nova_compute[253538]: 2025-11-25 09:12:29.061 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:12:29 compute-0 nova_compute[253538]: 2025-11-25 09:12:29.062 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:12:29 compute-0 nova_compute[253538]: 2025-11-25 09:12:29.062 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:12:29 compute-0 nova_compute[253538]: 2025-11-25 09:12:29.483 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:29 compute-0 nova_compute[253538]: 2025-11-25 09:12:29.529 253542 DEBUG nova.network.neutron [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Successfully created port: 970a51bb-207b-46ae-bb14-c743ea86eb2f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:12:29 compute-0 ceph-mon[75015]: pgmap v2762: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 948 KiB/s wr, 43 op/s
Nov 25 09:12:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/530805451' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:12:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/530805451' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:12:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2763: 321 pgs: 321 active+clean; 186 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 425 KiB/s wr, 27 op/s
Nov 25 09:12:30 compute-0 nova_compute[253538]: 2025-11-25 09:12:30.815 253542 DEBUG nova.network.neutron [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Successfully updated port: 970a51bb-207b-46ae-bb14-c743ea86eb2f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:12:30 compute-0 nova_compute[253538]: 2025-11-25 09:12:30.833 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:12:30 compute-0 nova_compute[253538]: 2025-11-25 09:12:30.833 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:12:30 compute-0 nova_compute[253538]: 2025-11-25 09:12:30.834 253542 DEBUG nova.network.neutron [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:12:30 compute-0 nova_compute[253538]: 2025-11-25 09:12:30.897 253542 DEBUG nova.compute.manager [req-247a3b0a-b173-461b-b055-2e65b904b168 req-bcd1f5ba-cebc-4bad-b5dd-b2a28928bc85 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-changed-970a51bb-207b-46ae-bb14-c743ea86eb2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:12:30 compute-0 nova_compute[253538]: 2025-11-25 09:12:30.898 253542 DEBUG nova.compute.manager [req-247a3b0a-b173-461b-b055-2e65b904b168 req-bcd1f5ba-cebc-4bad-b5dd-b2a28928bc85 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Refreshing instance network info cache due to event network-changed-970a51bb-207b-46ae-bb14-c743ea86eb2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:12:30 compute-0 nova_compute[253538]: 2025-11-25 09:12:30.898 253542 DEBUG oslo_concurrency.lockutils [req-247a3b0a-b173-461b-b055-2e65b904b168 req-bcd1f5ba-cebc-4bad-b5dd-b2a28928bc85 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:12:30 compute-0 ceph-mon[75015]: pgmap v2763: 321 pgs: 321 active+clean; 186 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 425 KiB/s wr, 27 op/s
Nov 25 09:12:30 compute-0 nova_compute[253538]: 2025-11-25 09:12:30.979 253542 DEBUG nova.network.neutron [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:12:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2764: 321 pgs: 321 active+clean; 201 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.5 MiB/s wr, 32 op/s
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.024 253542 DEBUG nova.network.neutron [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Updating instance_info_cache with network_info: [{"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.051 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.051 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Instance network_info: |[{"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.052 253542 DEBUG oslo_concurrency.lockutils [req-247a3b0a-b173-461b-b055-2e65b904b168 req-bcd1f5ba-cebc-4bad-b5dd-b2a28928bc85 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.052 253542 DEBUG nova.network.neutron [req-247a3b0a-b173-461b-b055-2e65b904b168 req-bcd1f5ba-cebc-4bad-b5dd-b2a28928bc85 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Refreshing network info cache for port 970a51bb-207b-46ae-bb14-c743ea86eb2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.057 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Start _get_guest_xml network_info=[{"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.062 253542 WARNING nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.069 253542 DEBUG nova.virt.libvirt.host [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.070 253542 DEBUG nova.virt.libvirt.host [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.073 253542 DEBUG nova.virt.libvirt.host [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.074 253542 DEBUG nova.virt.libvirt.host [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.074 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.075 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.075 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.075 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.076 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.076 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.076 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.077 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.077 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.078 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.078 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.078 253542 DEBUG nova.virt.hardware [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.082 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:12:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:12:33 compute-0 ceph-mon[75015]: pgmap v2764: 321 pgs: 321 active+clean; 201 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.5 MiB/s wr, 32 op/s
Nov 25 09:12:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:12:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/872168593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.576 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.601 253542 DEBUG nova.storage.rbd_utils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.605 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:12:33 compute-0 nova_compute[253538]: 2025-11-25 09:12:33.823 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:12:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4257947021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.101 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.103 253542 DEBUG nova.virt.libvirt.vif [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:12:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-639443231',display_name='tempest-TestGettingAddress-server-639443231',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-639443231',id=147,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCYWSfdVj+Wf7cbG/yHvSQzac2UDJ+lVbG+gHLhUdgH1axNTDtOQgu7Qi4+rF49LHVdNelU4faTO+e7DMH4d34+ViVy+2CuP/uQsAgIE7Bo9udHLpfncBLpy55zcDymblA==',key_name='tempest-TestGettingAddress-275156659',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-8f1ydg3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:12:28Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=985307b1-28a6-47cc-8dfc-f18ab08169f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.103 253542 DEBUG nova.network.os_vif_util [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.104 253542 DEBUG nova.network.os_vif_util [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:74:a0,bridge_name='br-int',has_traffic_filtering=True,id=970a51bb-207b-46ae-bb14-c743ea86eb2f,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap970a51bb-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.105 253542 DEBUG nova.objects.instance [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 985307b1-28a6-47cc-8dfc-f18ab08169f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.126 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:12:34 compute-0 nova_compute[253538]:   <uuid>985307b1-28a6-47cc-8dfc-f18ab08169f7</uuid>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   <name>instance-00000093</name>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <nova:name>tempest-TestGettingAddress-server-639443231</nova:name>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:12:33</nova:creationTime>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:12:34 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:12:34 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:12:34 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:12:34 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:12:34 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:12:34 compute-0 nova_compute[253538]:         <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 09:12:34 compute-0 nova_compute[253538]:         <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:12:34 compute-0 nova_compute[253538]:         <nova:port uuid="970a51bb-207b-46ae-bb14-c743ea86eb2f">
Nov 25 09:12:34 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fef0:74a0" ipVersion="6"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fef0:74a0" ipVersion="6"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <system>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <entry name="serial">985307b1-28a6-47cc-8dfc-f18ab08169f7</entry>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <entry name="uuid">985307b1-28a6-47cc-8dfc-f18ab08169f7</entry>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     </system>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   <os>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   </os>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   <features>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   </features>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/985307b1-28a6-47cc-8dfc-f18ab08169f7_disk">
Nov 25 09:12:34 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       </source>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:12:34 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/985307b1-28a6-47cc-8dfc-f18ab08169f7_disk.config">
Nov 25 09:12:34 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       </source>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:12:34 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:f0:74:a0"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <target dev="tap970a51bb-20"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7/console.log" append="off"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <video>
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     </video>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:12:34 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:12:34 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:12:34 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:12:34 compute-0 nova_compute[253538]: </domain>
Nov 25 09:12:34 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.127 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Preparing to wait for external event network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.127 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.127 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.127 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.128 253542 DEBUG nova.virt.libvirt.vif [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:12:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-639443231',display_name='tempest-TestGettingAddress-server-639443231',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-639443231',id=147,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCYWSfdVj+Wf7cbG/yHvSQzac2UDJ+lVbG+gHLhUdgH1axNTDtOQgu7Qi4+rF49LHVdNelU4faTO+e7DMH4d34+ViVy+2CuP/uQsAgIE7Bo9udHLpfncBLpy55zcDymblA==',key_name='tempest-TestGettingAddress-275156659',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-8f1ydg3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:12:28Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=985307b1-28a6-47cc-8dfc-f18ab08169f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.128 253542 DEBUG nova.network.os_vif_util [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.129 253542 DEBUG nova.network.os_vif_util [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:74:a0,bridge_name='br-int',has_traffic_filtering=True,id=970a51bb-207b-46ae-bb14-c743ea86eb2f,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap970a51bb-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.129 253542 DEBUG os_vif [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:74:a0,bridge_name='br-int',has_traffic_filtering=True,id=970a51bb-207b-46ae-bb14-c743ea86eb2f,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap970a51bb-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.130 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.130 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.131 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.133 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.133 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap970a51bb-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.134 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap970a51bb-20, col_values=(('external_ids', {'iface-id': '970a51bb-207b-46ae-bb14-c743ea86eb2f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:74:a0', 'vm-uuid': '985307b1-28a6-47cc-8dfc-f18ab08169f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:34 compute-0 NetworkManager[48915]: <info>  [1764061954.1368] manager: (tap970a51bb-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/646)
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.138 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.147 253542 INFO os_vif [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:74:a0,bridge_name='br-int',has_traffic_filtering=True,id=970a51bb-207b-46ae-bb14-c743ea86eb2f,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap970a51bb-20')
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.197 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.197 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.197 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:f0:74:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.198 253542 INFO nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Using config drive
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.217 253542 DEBUG nova.storage.rbd_utils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:12:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2765: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:12:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/872168593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:12:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4257947021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.485 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.883 253542 INFO nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Creating config drive at /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7/disk.config
Nov 25 09:12:34 compute-0 nova_compute[253538]: 2025-11-25 09:12:34.888 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwldmiogl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.012 253542 DEBUG nova.network.neutron [req-247a3b0a-b173-461b-b055-2e65b904b168 req-bcd1f5ba-cebc-4bad-b5dd-b2a28928bc85 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Updated VIF entry in instance network info cache for port 970a51bb-207b-46ae-bb14-c743ea86eb2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.013 253542 DEBUG nova.network.neutron [req-247a3b0a-b173-461b-b055-2e65b904b168 req-bcd1f5ba-cebc-4bad-b5dd-b2a28928bc85 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Updating instance_info_cache with network_info: [{"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.030 253542 DEBUG oslo_concurrency.lockutils [req-247a3b0a-b173-461b-b055-2e65b904b168 req-bcd1f5ba-cebc-4bad-b5dd-b2a28928bc85 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.044 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwldmiogl" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.066 253542 DEBUG nova.storage.rbd_utils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.069 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7/disk.config 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.400 253542 DEBUG oslo_concurrency.processutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7/disk.config 985307b1-28a6-47cc-8dfc-f18ab08169f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.401 253542 INFO nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Deleting local config drive /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7/disk.config because it was imported into RBD.
Nov 25 09:12:35 compute-0 ceph-mon[75015]: pgmap v2765: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:12:35 compute-0 kernel: tap970a51bb-20: entered promiscuous mode
Nov 25 09:12:35 compute-0 NetworkManager[48915]: <info>  [1764061955.4724] manager: (tap970a51bb-20): new Tun device (/org/freedesktop/NetworkManager/Devices/647)
Nov 25 09:12:35 compute-0 ovn_controller[152859]: 2025-11-25T09:12:35Z|01569|binding|INFO|Claiming lport 970a51bb-207b-46ae-bb14-c743ea86eb2f for this chassis.
Nov 25 09:12:35 compute-0 ovn_controller[152859]: 2025-11-25T09:12:35Z|01570|binding|INFO|970a51bb-207b-46ae-bb14-c743ea86eb2f: Claiming fa:16:3e:f0:74:a0 10.100.0.9 2001:db8:0:1:f816:3eff:fef0:74a0 2001:db8::f816:3eff:fef0:74a0
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.474 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.484 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:74:a0 10.100.0.9 2001:db8:0:1:f816:3eff:fef0:74a0 2001:db8::f816:3eff:fef0:74a0'], port_security=['fa:16:3e:f0:74:a0 10.100.0.9 2001:db8:0:1:f816:3eff:fef0:74a0 2001:db8::f816:3eff:fef0:74a0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8:0:1:f816:3eff:fef0:74a0/64 2001:db8::f816:3eff:fef0:74a0/64', 'neutron:device_id': '985307b1-28a6-47cc-8dfc-f18ab08169f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67e8b3b6-9b34-4c2b-a8fc-88ec98721eb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82db8616-e2a9-492c-9b1b-8c775409acb3, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=970a51bb-207b-46ae-bb14-c743ea86eb2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:12:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.485 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 970a51bb-207b-46ae-bb14-c743ea86eb2f in datapath 6c73317d-f647-4813-8469-7d8f6ba2c0c7 bound to our chassis
Nov 25 09:12:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.486 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c73317d-f647-4813-8469-7d8f6ba2c0c7
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.490 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:35 compute-0 ovn_controller[152859]: 2025-11-25T09:12:35Z|01571|binding|INFO|Setting lport 970a51bb-207b-46ae-bb14-c743ea86eb2f ovn-installed in OVS
Nov 25 09:12:35 compute-0 ovn_controller[152859]: 2025-11-25T09:12:35Z|01572|binding|INFO|Setting lport 970a51bb-207b-46ae-bb14-c743ea86eb2f up in Southbound
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.495 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.506 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[45341451-801b-47d4-b97f-eccd233c686c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:35 compute-0 systemd-machined[215790]: New machine qemu-177-instance-00000093.
Nov 25 09:12:35 compute-0 systemd[1]: Started Virtual Machine qemu-177-instance-00000093.
Nov 25 09:12:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.546 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ef4180-987a-4393-8b13-dd0f94486f4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.548 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c62d3c5b-00e6-4e1b-bc0b-9285043dd70d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:35 compute-0 systemd-udevd[411119]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:12:35 compute-0 NetworkManager[48915]: <info>  [1764061955.5616] device (tap970a51bb-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:12:35 compute-0 NetworkManager[48915]: <info>  [1764061955.5626] device (tap970a51bb-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:12:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.576 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b73079a6-7c21-4423-815d-7dea3f48487f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.593 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5422ab-87f9-4612-9558-5444df57ffa8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c73317d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:f7:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 6, 'rx_bytes': 1930, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 6, 'rx_bytes': 1930, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727529, 'reachable_time': 16900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411129, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.605 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b7939b83-f134-4e2a-895f-1f891b8e6ba5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6c73317d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727540, 'tstamp': 727540}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411130, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6c73317d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727542, 'tstamp': 727542}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411130, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:12:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.607 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c73317d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.608 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.609 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.609 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c73317d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:12:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.609 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:12:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.610 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c73317d-f0, col_values=(('external_ids', {'iface-id': '08f181bc-bee1-4710-a487-b95c62cfce38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:12:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:35.610 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.923 253542 DEBUG nova.compute.manager [req-fd3ae7e5-a1f5-466c-a9f2-1fe6991cf0cc req-bef8e50c-8956-427a-8b6f-e3871ab1badf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.924 253542 DEBUG oslo_concurrency.lockutils [req-fd3ae7e5-a1f5-466c-a9f2-1fe6991cf0cc req-bef8e50c-8956-427a-8b6f-e3871ab1badf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.924 253542 DEBUG oslo_concurrency.lockutils [req-fd3ae7e5-a1f5-466c-a9f2-1fe6991cf0cc req-bef8e50c-8956-427a-8b6f-e3871ab1badf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.925 253542 DEBUG oslo_concurrency.lockutils [req-fd3ae7e5-a1f5-466c-a9f2-1fe6991cf0cc req-bef8e50c-8956-427a-8b6f-e3871ab1badf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:12:35 compute-0 nova_compute[253538]: 2025-11-25 09:12:35.925 253542 DEBUG nova.compute.manager [req-fd3ae7e5-a1f5-466c-a9f2-1fe6991cf0cc req-bef8e50c-8956-427a-8b6f-e3871ab1badf b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Processing event network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:12:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2766: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.577 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061956.5762353, 985307b1-28a6-47cc-8dfc-f18ab08169f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.578 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] VM Started (Lifecycle Event)
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.582 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.586 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.591 253542 INFO nova.virt.libvirt.driver [-] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Instance spawned successfully.
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.591 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.607 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.621 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.628 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.629 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.629 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.630 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.631 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.632 253542 DEBUG nova.virt.libvirt.driver [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.643 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.644 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061956.5777004, 985307b1-28a6-47cc-8dfc-f18ab08169f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.644 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] VM Paused (Lifecycle Event)
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.730 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.735 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764061956.585542, 985307b1-28a6-47cc-8dfc-f18ab08169f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.736 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] VM Resumed (Lifecycle Event)
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.741 253542 INFO nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Took 8.43 seconds to spawn the instance on the hypervisor.
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.742 253542 DEBUG nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.752 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.756 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.785 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.818 253542 INFO nova.compute.manager [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Took 9.84 seconds to build instance.
Nov 25 09:12:36 compute-0 nova_compute[253538]: 2025-11-25 09:12:36.832 253542 DEBUG oslo_concurrency.lockutils [None req-a152a26a-c1e0-4c61-b2e2-5b8345590ff6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:12:37 compute-0 podman[411175]: 2025-11-25 09:12:37.854047887 +0000 UTC m=+0.091633012 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 25 09:12:37 compute-0 ceph-mon[75015]: pgmap v2766: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:12:37 compute-0 podman[411174]: 2025-11-25 09:12:37.875204212 +0000 UTC m=+0.114659918 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd)
Nov 25 09:12:38 compute-0 nova_compute[253538]: 2025-11-25 09:12:38.022 253542 DEBUG nova.compute.manager [req-04ac4448-679f-4f3c-9379-ed2d2377fe79 req-8ffa9116-9036-47df-b818-1a8470a98341 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:12:38 compute-0 nova_compute[253538]: 2025-11-25 09:12:38.023 253542 DEBUG oslo_concurrency.lockutils [req-04ac4448-679f-4f3c-9379-ed2d2377fe79 req-8ffa9116-9036-47df-b818-1a8470a98341 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:12:38 compute-0 nova_compute[253538]: 2025-11-25 09:12:38.024 253542 DEBUG oslo_concurrency.lockutils [req-04ac4448-679f-4f3c-9379-ed2d2377fe79 req-8ffa9116-9036-47df-b818-1a8470a98341 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:12:38 compute-0 nova_compute[253538]: 2025-11-25 09:12:38.024 253542 DEBUG oslo_concurrency.lockutils [req-04ac4448-679f-4f3c-9379-ed2d2377fe79 req-8ffa9116-9036-47df-b818-1a8470a98341 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:12:38 compute-0 nova_compute[253538]: 2025-11-25 09:12:38.025 253542 DEBUG nova.compute.manager [req-04ac4448-679f-4f3c-9379-ed2d2377fe79 req-8ffa9116-9036-47df-b818-1a8470a98341 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] No waiting events found dispatching network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:12:38 compute-0 nova_compute[253538]: 2025-11-25 09:12:38.026 253542 WARNING nova.compute.manager [req-04ac4448-679f-4f3c-9379-ed2d2377fe79 req-8ffa9116-9036-47df-b818-1a8470a98341 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received unexpected event network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f for instance with vm_state active and task_state None.
Nov 25 09:12:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:12:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2767: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Nov 25 09:12:38 compute-0 ceph-mon[75015]: pgmap v2767: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Nov 25 09:12:39 compute-0 nova_compute[253538]: 2025-11-25 09:12:39.172 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:39 compute-0 nova_compute[253538]: 2025-11-25 09:12:39.486 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2768: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 834 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Nov 25 09:12:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:41.096 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:12:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:41.097 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:12:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:12:41.098 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:12:41 compute-0 ceph-mon[75015]: pgmap v2768: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 834 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Nov 25 09:12:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2769: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 87 op/s
Nov 25 09:12:42 compute-0 nova_compute[253538]: 2025-11-25 09:12:42.548 253542 DEBUG nova.compute.manager [req-b3a82283-a9ed-4f06-aeeb-bc754cd0f994 req-cb36829f-c91a-4653-a842-a238a63a6253 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-changed-970a51bb-207b-46ae-bb14-c743ea86eb2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:12:42 compute-0 nova_compute[253538]: 2025-11-25 09:12:42.549 253542 DEBUG nova.compute.manager [req-b3a82283-a9ed-4f06-aeeb-bc754cd0f994 req-cb36829f-c91a-4653-a842-a238a63a6253 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Refreshing instance network info cache due to event network-changed-970a51bb-207b-46ae-bb14-c743ea86eb2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:12:42 compute-0 nova_compute[253538]: 2025-11-25 09:12:42.549 253542 DEBUG oslo_concurrency.lockutils [req-b3a82283-a9ed-4f06-aeeb-bc754cd0f994 req-cb36829f-c91a-4653-a842-a238a63a6253 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:12:42 compute-0 nova_compute[253538]: 2025-11-25 09:12:42.550 253542 DEBUG oslo_concurrency.lockutils [req-b3a82283-a9ed-4f06-aeeb-bc754cd0f994 req-cb36829f-c91a-4653-a842-a238a63a6253 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:12:42 compute-0 nova_compute[253538]: 2025-11-25 09:12:42.550 253542 DEBUG nova.network.neutron [req-b3a82283-a9ed-4f06-aeeb-bc754cd0f994 req-cb36829f-c91a-4653-a842-a238a63a6253 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Refreshing network info cache for port 970a51bb-207b-46ae-bb14-c743ea86eb2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:12:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:12:43 compute-0 ceph-mon[75015]: pgmap v2769: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 87 op/s
Nov 25 09:12:43 compute-0 podman[411211]: 2025-11-25 09:12:43.874008704 +0000 UTC m=+0.120030854 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:12:44 compute-0 nova_compute[253538]: 2025-11-25 09:12:44.174 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2770: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 337 KiB/s wr, 76 op/s
Nov 25 09:12:44 compute-0 nova_compute[253538]: 2025-11-25 09:12:44.489 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:44 compute-0 nova_compute[253538]: 2025-11-25 09:12:44.922 253542 DEBUG nova.network.neutron [req-b3a82283-a9ed-4f06-aeeb-bc754cd0f994 req-cb36829f-c91a-4653-a842-a238a63a6253 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Updated VIF entry in instance network info cache for port 970a51bb-207b-46ae-bb14-c743ea86eb2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:12:44 compute-0 nova_compute[253538]: 2025-11-25 09:12:44.923 253542 DEBUG nova.network.neutron [req-b3a82283-a9ed-4f06-aeeb-bc754cd0f994 req-cb36829f-c91a-4653-a842-a238a63a6253 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Updating instance_info_cache with network_info: [{"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:12:44 compute-0 nova_compute[253538]: 2025-11-25 09:12:44.963 253542 DEBUG oslo_concurrency.lockutils [req-b3a82283-a9ed-4f06-aeeb-bc754cd0f994 req-cb36829f-c91a-4653-a842-a238a63a6253 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:12:45 compute-0 ceph-mon[75015]: pgmap v2770: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 337 KiB/s wr, 76 op/s
Nov 25 09:12:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2771: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:12:46 compute-0 sudo[411237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:12:46 compute-0 sudo[411237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:46 compute-0 sudo[411237]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:46 compute-0 sudo[411262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:12:46 compute-0 sudo[411262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:46 compute-0 sudo[411262]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:46 compute-0 sudo[411287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:12:46 compute-0 sudo[411287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:46 compute-0 sudo[411287]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:46 compute-0 sudo[411312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 09:12:46 compute-0 sudo[411312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:46 compute-0 sudo[411312]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:12:46 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:12:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:12:46 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:12:47 compute-0 sudo[411357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:12:47 compute-0 sudo[411357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:47 compute-0 sudo[411357]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:47 compute-0 sudo[411382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:12:47 compute-0 sudo[411382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:47 compute-0 sudo[411382]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:47 compute-0 sudo[411407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:12:47 compute-0 sudo[411407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:47 compute-0 sudo[411407]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:47 compute-0 sudo[411432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:12:47 compute-0 sudo[411432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:47 compute-0 ceph-mon[75015]: pgmap v2771: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:12:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:12:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:12:47 compute-0 sudo[411432]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:12:47 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:12:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:12:47 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:12:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:12:47 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:12:47 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 9feed2c6-9ac8-4657-aa0e-57793f638a6d does not exist
Nov 25 09:12:47 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 46a597e8-d045-4bab-a1e6-a04550781dad does not exist
Nov 25 09:12:47 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 9eb9cfc2-eee0-4cf3-b889-d008ef60e18a does not exist
Nov 25 09:12:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:12:47 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:12:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:12:47 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:12:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:12:47 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:12:47 compute-0 sudo[411488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:12:47 compute-0 sudo[411488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:47 compute-0 sudo[411488]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:47 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 25 09:12:47 compute-0 sudo[411515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:12:47 compute-0 sudo[411515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:47 compute-0 sudo[411515]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:47 compute-0 sudo[411540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:12:47 compute-0 sudo[411540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:47 compute-0 sudo[411540]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:48 compute-0 sudo[411565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:12:48 compute-0 sudo[411565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:12:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2772: 321 pgs: 321 active+clean; 225 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 79 op/s
Nov 25 09:12:48 compute-0 podman[411629]: 2025-11-25 09:12:48.440394717 +0000 UTC m=+0.057880855 container create d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:12:48 compute-0 systemd[1]: Started libpod-conmon-d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243.scope.
Nov 25 09:12:48 compute-0 podman[411629]: 2025-11-25 09:12:48.408906701 +0000 UTC m=+0.026392889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:12:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:12:48 compute-0 podman[411629]: 2025-11-25 09:12:48.537297991 +0000 UTC m=+0.154784149 container init d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:12:48 compute-0 podman[411629]: 2025-11-25 09:12:48.548001332 +0000 UTC m=+0.165487470 container start d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:12:48 compute-0 podman[411629]: 2025-11-25 09:12:48.552211007 +0000 UTC m=+0.169697185 container attach d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:12:48 compute-0 systemd[1]: libpod-d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243.scope: Deactivated successfully.
Nov 25 09:12:48 compute-0 competent_chaum[411645]: 167 167
Nov 25 09:12:48 compute-0 conmon[411645]: conmon d01ecc76876e37789a26 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243.scope/container/memory.events
Nov 25 09:12:48 compute-0 podman[411629]: 2025-11-25 09:12:48.558273241 +0000 UTC m=+0.175759419 container died d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:12:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a47aa50298d875825775d3fd26afa60997543dae2228fa6bd2f7a5c1bf3b4c4-merged.mount: Deactivated successfully.
Nov 25 09:12:48 compute-0 podman[411629]: 2025-11-25 09:12:48.633285701 +0000 UTC m=+0.250771849 container remove d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:12:48 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:12:48 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:12:48 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:12:48 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:12:48 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:12:48 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:12:48 compute-0 systemd[1]: libpod-conmon-d01ecc76876e37789a26fa695ede4de4416cd188cab79e4344ecd9f985711243.scope: Deactivated successfully.
Nov 25 09:12:48 compute-0 podman[411668]: 2025-11-25 09:12:48.850057933 +0000 UTC m=+0.081191778 container create 1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:12:48 compute-0 podman[411668]: 2025-11-25 09:12:48.794572505 +0000 UTC m=+0.025706370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:12:49 compute-0 systemd[1]: Started libpod-conmon-1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b.scope.
Nov 25 09:12:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ec2446fb53877a34d2620da23a164585d163ae5369b2270321e59fcd36628d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ec2446fb53877a34d2620da23a164585d163ae5369b2270321e59fcd36628d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ec2446fb53877a34d2620da23a164585d163ae5369b2270321e59fcd36628d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ec2446fb53877a34d2620da23a164585d163ae5369b2270321e59fcd36628d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ec2446fb53877a34d2620da23a164585d163ae5369b2270321e59fcd36628d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:12:49 compute-0 nova_compute[253538]: 2025-11-25 09:12:49.176 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:49 compute-0 podman[411668]: 2025-11-25 09:12:49.333972188 +0000 UTC m=+0.565106083 container init 1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dewdney, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:12:49 compute-0 podman[411668]: 2025-11-25 09:12:49.345783609 +0000 UTC m=+0.576917494 container start 1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 09:12:49 compute-0 nova_compute[253538]: 2025-11-25 09:12:49.492 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:49 compute-0 ovn_controller[152859]: 2025-11-25T09:12:49Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:74:a0 10.100.0.9
Nov 25 09:12:49 compute-0 ovn_controller[152859]: 2025-11-25T09:12:49Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:74:a0 10.100.0.9
Nov 25 09:12:49 compute-0 podman[411668]: 2025-11-25 09:12:49.971692313 +0000 UTC m=+1.202826198 container attach 1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dewdney, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:12:49 compute-0 ceph-mon[75015]: pgmap v2772: 321 pgs: 321 active+clean; 225 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 79 op/s
Nov 25 09:12:50 compute-0 nova_compute[253538]: 2025-11-25 09:12:50.047 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:12:50 compute-0 nova_compute[253538]: 2025-11-25 09:12:50.048 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:12:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2773: 321 pgs: 321 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 88 op/s
Nov 25 09:12:50 compute-0 happy_dewdney[411685]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:12:50 compute-0 happy_dewdney[411685]: --> relative data size: 1.0
Nov 25 09:12:50 compute-0 happy_dewdney[411685]: --> All data devices are unavailable
Nov 25 09:12:50 compute-0 systemd[1]: libpod-1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b.scope: Deactivated successfully.
Nov 25 09:12:50 compute-0 podman[411668]: 2025-11-25 09:12:50.400602413 +0000 UTC m=+1.631736278 container died 1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:12:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9ec2446fb53877a34d2620da23a164585d163ae5369b2270321e59fcd36628d-merged.mount: Deactivated successfully.
Nov 25 09:12:50 compute-0 podman[411668]: 2025-11-25 09:12:50.569842034 +0000 UTC m=+1.800975879 container remove 1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 09:12:50 compute-0 systemd[1]: libpod-conmon-1117f22c156f6e59608d2232b86b8a14d20df233e7d5e9de49750208bf17659b.scope: Deactivated successfully.
Nov 25 09:12:50 compute-0 sudo[411565]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:50 compute-0 sudo[411727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:12:50 compute-0 sudo[411727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:50 compute-0 sudo[411727]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:50 compute-0 sudo[411752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:12:50 compute-0 sudo[411752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:50 compute-0 sudo[411752]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:50 compute-0 sudo[411777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:12:50 compute-0 sudo[411777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:50 compute-0 sudo[411777]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:50 compute-0 sudo[411802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:12:50 compute-0 sudo[411802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:50 compute-0 ceph-mon[75015]: pgmap v2773: 321 pgs: 321 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 88 op/s
Nov 25 09:12:51 compute-0 podman[411865]: 2025-11-25 09:12:51.273459991 +0000 UTC m=+0.051856500 container create 21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hamilton, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:12:51 compute-0 systemd[1]: Started libpod-conmon-21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0.scope.
Nov 25 09:12:51 compute-0 podman[411865]: 2025-11-25 09:12:51.24253865 +0000 UTC m=+0.020935179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:12:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:12:51 compute-0 podman[411865]: 2025-11-25 09:12:51.460792293 +0000 UTC m=+0.239188882 container init 21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hamilton, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 09:12:51 compute-0 podman[411865]: 2025-11-25 09:12:51.469403517 +0000 UTC m=+0.247800026 container start 21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:12:51 compute-0 epic_hamilton[411882]: 167 167
Nov 25 09:12:51 compute-0 systemd[1]: libpod-21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0.scope: Deactivated successfully.
Nov 25 09:12:51 compute-0 podman[411865]: 2025-11-25 09:12:51.478218037 +0000 UTC m=+0.256614666 container attach 21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:12:51 compute-0 podman[411865]: 2025-11-25 09:12:51.478976737 +0000 UTC m=+0.257373326 container died 21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hamilton, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:12:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0480f3300461ffdedeb9d88735ebcd4aaeda4a9e04548b5b338f64377e6d871b-merged.mount: Deactivated successfully.
Nov 25 09:12:51 compute-0 podman[411865]: 2025-11-25 09:12:51.56809035 +0000 UTC m=+0.346486859 container remove 21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hamilton, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:12:51 compute-0 systemd[1]: libpod-conmon-21f7f9acef5e828aab660b2b44dfe3368896061d6ea7545b011cd091aaf4cfe0.scope: Deactivated successfully.
Nov 25 09:12:51 compute-0 podman[411908]: 2025-11-25 09:12:51.842698405 +0000 UTC m=+0.079521183 container create 61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:12:51 compute-0 podman[411908]: 2025-11-25 09:12:51.807756975 +0000 UTC m=+0.044579833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:12:51 compute-0 systemd[1]: Started libpod-conmon-61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30.scope.
Nov 25 09:12:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da366d0ed2dea6acef763089482daf39c0051bde2840f61b3c1af134d6989c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da366d0ed2dea6acef763089482daf39c0051bde2840f61b3c1af134d6989c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da366d0ed2dea6acef763089482daf39c0051bde2840f61b3c1af134d6989c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da366d0ed2dea6acef763089482daf39c0051bde2840f61b3c1af134d6989c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:12:52 compute-0 podman[411908]: 2025-11-25 09:12:52.026395199 +0000 UTC m=+0.263218047 container init 61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 09:12:52 compute-0 podman[411908]: 2025-11-25 09:12:52.041022596 +0000 UTC m=+0.277845404 container start 61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:12:52 compute-0 podman[411908]: 2025-11-25 09:12:52.046365181 +0000 UTC m=+0.283188039 container attach 61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 09:12:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2774: 321 pgs: 321 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]: {
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:     "0": [
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:         {
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "devices": [
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "/dev/loop3"
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             ],
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "lv_name": "ceph_lv0",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "lv_size": "21470642176",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "name": "ceph_lv0",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "tags": {
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.cluster_name": "ceph",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.crush_device_class": "",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.encrypted": "0",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.osd_id": "0",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.type": "block",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.vdo": "0"
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             },
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "type": "block",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "vg_name": "ceph_vg0"
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:         }
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:     ],
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:     "1": [
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:         {
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "devices": [
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "/dev/loop4"
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             ],
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "lv_name": "ceph_lv1",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "lv_size": "21470642176",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "name": "ceph_lv1",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "tags": {
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.cluster_name": "ceph",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.crush_device_class": "",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.encrypted": "0",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.osd_id": "1",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.type": "block",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.vdo": "0"
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             },
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "type": "block",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "vg_name": "ceph_vg1"
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:         }
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:     ],
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:     "2": [
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:         {
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "devices": [
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "/dev/loop5"
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             ],
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "lv_name": "ceph_lv2",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "lv_size": "21470642176",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "name": "ceph_lv2",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "tags": {
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.cluster_name": "ceph",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.crush_device_class": "",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.encrypted": "0",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.osd_id": "2",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.type": "block",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:                 "ceph.vdo": "0"
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             },
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "type": "block",
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:             "vg_name": "ceph_vg2"
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:         }
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]:     ]
Nov 25 09:12:52 compute-0 hardcore_hertz[411924]: }
Nov 25 09:12:52 compute-0 systemd[1]: libpod-61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30.scope: Deactivated successfully.
Nov 25 09:12:52 compute-0 podman[411908]: 2025-11-25 09:12:52.898586288 +0000 UTC m=+1.135409076 container died 61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:12:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-4da366d0ed2dea6acef763089482daf39c0051bde2840f61b3c1af134d6989c9-merged.mount: Deactivated successfully.
Nov 25 09:12:52 compute-0 podman[411908]: 2025-11-25 09:12:52.989079438 +0000 UTC m=+1.225902206 container remove 61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 09:12:52 compute-0 systemd[1]: libpod-conmon-61ff14f45ea6b3b2b2b81448aeed940a2baaf638832f90540b2703b0c0570a30.scope: Deactivated successfully.
Nov 25 09:12:53 compute-0 sudo[411802]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:53 compute-0 sudo[411947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:12:53 compute-0 sudo[411947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:53 compute-0 sudo[411947]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:53 compute-0 sudo[411972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:12:53 compute-0 sudo[411972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:53 compute-0 sudo[411972]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:53 compute-0 sudo[411997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:12:53 compute-0 sudo[411997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:53 compute-0 sudo[411997]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:12:53 compute-0 sudo[412022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:12:53 compute-0 sudo[412022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:12:53
Nov 25 09:12:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:12:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:12:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.control', 'images', 'cephfs.cephfs.meta']
Nov 25 09:12:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:12:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:12:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:12:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:12:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:12:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:12:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:12:53 compute-0 ceph-mon[75015]: pgmap v2774: 321 pgs: 321 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 09:12:53 compute-0 podman[412090]: 2025-11-25 09:12:53.785886239 +0000 UTC m=+0.042204139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:12:53 compute-0 podman[412090]: 2025-11-25 09:12:53.915771849 +0000 UTC m=+0.172089779 container create 3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sinoussi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:12:54 compute-0 systemd[1]: Started libpod-conmon-3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691.scope.
Nov 25 09:12:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:12:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:12:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:12:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:12:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:12:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:12:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:12:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:12:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:12:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:12:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:12:54 compute-0 podman[412090]: 2025-11-25 09:12:54.173772973 +0000 UTC m=+0.430090953 container init 3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:12:54 compute-0 nova_compute[253538]: 2025-11-25 09:12:54.179 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:54 compute-0 podman[412090]: 2025-11-25 09:12:54.189784948 +0000 UTC m=+0.446102878 container start 3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 09:12:54 compute-0 sleepy_sinoussi[412107]: 167 167
Nov 25 09:12:54 compute-0 systemd[1]: libpod-3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691.scope: Deactivated successfully.
Nov 25 09:12:54 compute-0 podman[412090]: 2025-11-25 09:12:54.272021073 +0000 UTC m=+0.528339013 container attach 3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:12:54 compute-0 podman[412090]: 2025-11-25 09:12:54.272757504 +0000 UTC m=+0.529075434 container died 3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:12:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2775: 321 pgs: 321 active+clean; 244 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 25 09:12:54 compute-0 nova_compute[253538]: 2025-11-25 09:12:54.495 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:54 compute-0 nova_compute[253538]: 2025-11-25 09:12:54.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-50366c2d4307fdf72ea739595996f67c3d1eb67f194b72164d92389018808d47-merged.mount: Deactivated successfully.
Nov 25 09:12:54 compute-0 podman[412090]: 2025-11-25 09:12:54.871409387 +0000 UTC m=+1.127727277 container remove 3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sinoussi, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 09:12:54 compute-0 systemd[1]: libpod-conmon-3ffb6c6bcf54fae2dd86a1af60c58ec2465d459d6ed103901e1c981c24713691.scope: Deactivated successfully.
Nov 25 09:12:55 compute-0 podman[412131]: 2025-11-25 09:12:55.059168761 +0000 UTC m=+0.037521550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:12:55 compute-0 podman[412131]: 2025-11-25 09:12:55.248584331 +0000 UTC m=+0.226937110 container create 04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:12:55 compute-0 systemd[1]: Started libpod-conmon-04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a.scope.
Nov 25 09:12:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3eb9c41bb4af90bc173b8eb91037a67e99f5c30df201a4b49a4944a7edcbe99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3eb9c41bb4af90bc173b8eb91037a67e99f5c30df201a4b49a4944a7edcbe99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3eb9c41bb4af90bc173b8eb91037a67e99f5c30df201a4b49a4944a7edcbe99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3eb9c41bb4af90bc173b8eb91037a67e99f5c30df201a4b49a4944a7edcbe99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:12:55 compute-0 podman[412131]: 2025-11-25 09:12:55.625181748 +0000 UTC m=+0.603534537 container init 04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:12:55 compute-0 podman[412131]: 2025-11-25 09:12:55.638436068 +0000 UTC m=+0.616788837 container start 04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:12:55 compute-0 ceph-mon[75015]: pgmap v2775: 321 pgs: 321 active+clean; 244 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 25 09:12:55 compute-0 podman[412131]: 2025-11-25 09:12:55.724911309 +0000 UTC m=+0.703264098 container attach 04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 09:12:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2776: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:12:56 compute-0 nova_compute[253538]: 2025-11-25 09:12:56.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:12:56 compute-0 nova_compute[253538]: 2025-11-25 09:12:56.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:12:56 compute-0 nova_compute[253538]: 2025-11-25 09:12:56.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:12:56 compute-0 magical_buck[412147]: {
Nov 25 09:12:56 compute-0 magical_buck[412147]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:12:56 compute-0 magical_buck[412147]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:12:56 compute-0 magical_buck[412147]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:12:56 compute-0 magical_buck[412147]:         "osd_id": 1,
Nov 25 09:12:56 compute-0 magical_buck[412147]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:12:56 compute-0 magical_buck[412147]:         "type": "bluestore"
Nov 25 09:12:56 compute-0 magical_buck[412147]:     },
Nov 25 09:12:56 compute-0 magical_buck[412147]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:12:56 compute-0 magical_buck[412147]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:12:56 compute-0 magical_buck[412147]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:12:56 compute-0 magical_buck[412147]:         "osd_id": 2,
Nov 25 09:12:56 compute-0 magical_buck[412147]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:12:56 compute-0 magical_buck[412147]:         "type": "bluestore"
Nov 25 09:12:56 compute-0 magical_buck[412147]:     },
Nov 25 09:12:56 compute-0 magical_buck[412147]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:12:56 compute-0 magical_buck[412147]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:12:56 compute-0 magical_buck[412147]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:12:56 compute-0 magical_buck[412147]:         "osd_id": 0,
Nov 25 09:12:56 compute-0 magical_buck[412147]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:12:56 compute-0 magical_buck[412147]:         "type": "bluestore"
Nov 25 09:12:56 compute-0 magical_buck[412147]:     }
Nov 25 09:12:56 compute-0 magical_buck[412147]: }
Nov 25 09:12:56 compute-0 systemd[1]: libpod-04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a.scope: Deactivated successfully.
Nov 25 09:12:56 compute-0 podman[412131]: 2025-11-25 09:12:56.772376114 +0000 UTC m=+1.750728903 container died 04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:12:56 compute-0 systemd[1]: libpod-04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a.scope: Consumed 1.138s CPU time.
Nov 25 09:12:56 compute-0 ceph-mon[75015]: pgmap v2776: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3eb9c41bb4af90bc173b8eb91037a67e99f5c30df201a4b49a4944a7edcbe99-merged.mount: Deactivated successfully.
Nov 25 09:12:57 compute-0 podman[412131]: 2025-11-25 09:12:57.244714664 +0000 UTC m=+2.223067443 container remove 04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 09:12:57 compute-0 systemd[1]: libpod-conmon-04b6940eeb64b5118929b8109d47ace60fa022f44fe537608f23221faff27e5a.scope: Deactivated successfully.
Nov 25 09:12:57 compute-0 sudo[412022]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:12:57 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:12:57 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:12:57 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:12:57 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 6a863280-1c69-4b2e-9897-4a59b058550f does not exist
Nov 25 09:12:57 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 2784088b-d8d4-415f-9a16-748063c7bb84 does not exist
Nov 25 09:12:57 compute-0 sudo[412193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:12:57 compute-0 sudo[412193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:57 compute-0 sudo[412193]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:57 compute-0 sudo[412218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:12:57 compute-0 sudo[412218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:12:57 compute-0 sudo[412218]: pam_unix(sudo:session): session closed for user root
Nov 25 09:12:57 compute-0 nova_compute[253538]: 2025-11-25 09:12:57.763 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:12:57 compute-0 nova_compute[253538]: 2025-11-25 09:12:57.764 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:12:57 compute-0 nova_compute[253538]: 2025-11-25 09:12:57.764 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 09:12:57 compute-0 nova_compute[253538]: 2025-11-25 09:12:57.764 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f5964963-11b8-4fd9-ace9-e5ee67571925 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:12:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:12:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:12:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:12:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2777: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:12:59 compute-0 nova_compute[253538]: 2025-11-25 09:12:59.182 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:12:59 compute-0 sshd-session[412243]: Invalid user dbuser from 193.32.162.151 port 36734
Nov 25 09:12:59 compute-0 ceph-mon[75015]: pgmap v2777: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:12:59 compute-0 sshd-session[412243]: Connection closed by invalid user dbuser 193.32.162.151 port 36734 [preauth]
Nov 25 09:12:59 compute-0 nova_compute[253538]: 2025-11-25 09:12:59.498 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2778: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 1.1 MiB/s wr, 60 op/s
Nov 25 09:13:00 compute-0 nova_compute[253538]: 2025-11-25 09:13:00.922 253542 DEBUG nova.compute.manager [req-62190143-8b4a-4975-be50-1cd79a5aa54f req-a1327cd6-251c-4372-8b74-ee3e82b136b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-changed-970a51bb-207b-46ae-bb14-c743ea86eb2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:13:00 compute-0 nova_compute[253538]: 2025-11-25 09:13:00.923 253542 DEBUG nova.compute.manager [req-62190143-8b4a-4975-be50-1cd79a5aa54f req-a1327cd6-251c-4372-8b74-ee3e82b136b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Refreshing instance network info cache due to event network-changed-970a51bb-207b-46ae-bb14-c743ea86eb2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:13:00 compute-0 nova_compute[253538]: 2025-11-25 09:13:00.923 253542 DEBUG oslo_concurrency.lockutils [req-62190143-8b4a-4975-be50-1cd79a5aa54f req-a1327cd6-251c-4372-8b74-ee3e82b136b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:13:00 compute-0 nova_compute[253538]: 2025-11-25 09:13:00.923 253542 DEBUG oslo_concurrency.lockutils [req-62190143-8b4a-4975-be50-1cd79a5aa54f req-a1327cd6-251c-4372-8b74-ee3e82b136b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:13:00 compute-0 nova_compute[253538]: 2025-11-25 09:13:00.924 253542 DEBUG nova.network.neutron [req-62190143-8b4a-4975-be50-1cd79a5aa54f req-a1327cd6-251c-4372-8b74-ee3e82b136b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Refreshing network info cache for port 970a51bb-207b-46ae-bb14-c743ea86eb2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:13:00 compute-0 nova_compute[253538]: 2025-11-25 09:13:00.995 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "985307b1-28a6-47cc-8dfc-f18ab08169f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:00 compute-0 nova_compute[253538]: 2025-11-25 09:13:00.996 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:00 compute-0 nova_compute[253538]: 2025-11-25 09:13:00.996 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:00 compute-0 nova_compute[253538]: 2025-11-25 09:13:00.996 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:00 compute-0 nova_compute[253538]: 2025-11-25 09:13:00.997 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:00 compute-0 nova_compute[253538]: 2025-11-25 09:13:00.998 253542 INFO nova.compute.manager [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Terminating instance
Nov 25 09:13:00 compute-0 nova_compute[253538]: 2025-11-25 09:13:00.999 253542 DEBUG nova.compute.manager [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:13:01 compute-0 kernel: tap970a51bb-20 (unregistering): left promiscuous mode
Nov 25 09:13:01 compute-0 NetworkManager[48915]: <info>  [1764061981.3950] device (tap970a51bb-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:13:01 compute-0 ovn_controller[152859]: 2025-11-25T09:13:01Z|01573|binding|INFO|Releasing lport 970a51bb-207b-46ae-bb14-c743ea86eb2f from this chassis (sb_readonly=0)
Nov 25 09:13:01 compute-0 ovn_controller[152859]: 2025-11-25T09:13:01Z|01574|binding|INFO|Setting lport 970a51bb-207b-46ae-bb14-c743ea86eb2f down in Southbound
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.404 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:01 compute-0 ovn_controller[152859]: 2025-11-25T09:13:01Z|01575|binding|INFO|Removing iface tap970a51bb-20 ovn-installed in OVS
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.407 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.416 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:74:a0 10.100.0.9 2001:db8:0:1:f816:3eff:fef0:74a0 2001:db8::f816:3eff:fef0:74a0'], port_security=['fa:16:3e:f0:74:a0 10.100.0.9 2001:db8:0:1:f816:3eff:fef0:74a0 2001:db8::f816:3eff:fef0:74a0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8:0:1:f816:3eff:fef0:74a0/64 2001:db8::f816:3eff:fef0:74a0/64', 'neutron:device_id': '985307b1-28a6-47cc-8dfc-f18ab08169f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67e8b3b6-9b34-4c2b-a8fc-88ec98721eb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82db8616-e2a9-492c-9b1b-8c775409acb3, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=970a51bb-207b-46ae-bb14-c743ea86eb2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:13:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.417 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 970a51bb-207b-46ae-bb14-c743ea86eb2f in datapath 6c73317d-f647-4813-8469-7d8f6ba2c0c7 unbound from our chassis
Nov 25 09:13:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.418 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c73317d-f647-4813-8469-7d8f6ba2c0c7
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.419 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.436 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[45618230-df9a-4f75-a06e-8820acd8910a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:01 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000093.scope: Deactivated successfully.
Nov 25 09:13:01 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000093.scope: Consumed 14.106s CPU time.
Nov 25 09:13:01 compute-0 systemd-machined[215790]: Machine qemu-177-instance-00000093 terminated.
Nov 25 09:13:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.463 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[b0200af2-81df-4b71-9be3-c6be2cf7db7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.467 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[283c4ec5-23ae-43a6-b5c5-aa4323dd68be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.493 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[cff87346-1df0-4d5a-948d-082d61d1b873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.508 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[55b743b3-d731-48e6-a3fb-b46feb4ab651]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c73317d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:f7:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 8, 'rx_bytes': 3328, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 8, 'rx_bytes': 3328, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727529, 'reachable_time': 37897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412257, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.520 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[18dcacbb-1fdc-4f96-bf9f-079d366867e5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6c73317d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727540, 'tstamp': 727540}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412258, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6c73317d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727542, 'tstamp': 727542}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412258, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.522 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c73317d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.523 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.528 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c73317d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.528 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.528 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:13:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.529 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c73317d-f0, col_values=(('external_ids', {'iface-id': '08f181bc-bee1-4710-a487-b95c62cfce38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:13:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:01.529 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:13:01 compute-0 ceph-mon[75015]: pgmap v2778: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 1.1 MiB/s wr, 60 op/s
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.646 253542 INFO nova.virt.libvirt.driver [-] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Instance destroyed successfully.
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.646 253542 DEBUG nova.objects.instance [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 985307b1-28a6-47cc-8dfc-f18ab08169f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.662 253542 DEBUG nova.virt.libvirt.vif [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:12:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-639443231',display_name='tempest-TestGettingAddress-server-639443231',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-639443231',id=147,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCYWSfdVj+Wf7cbG/yHvSQzac2UDJ+lVbG+gHLhUdgH1axNTDtOQgu7Qi4+rF49LHVdNelU4faTO+e7DMH4d34+ViVy+2CuP/uQsAgIE7Bo9udHLpfncBLpy55zcDymblA==',key_name='tempest-TestGettingAddress-275156659',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:12:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-8f1ydg3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:12:36Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=985307b1-28a6-47cc-8dfc-f18ab08169f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.662 253542 DEBUG nova.network.os_vif_util [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.664 253542 DEBUG nova.network.os_vif_util [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:74:a0,bridge_name='br-int',has_traffic_filtering=True,id=970a51bb-207b-46ae-bb14-c743ea86eb2f,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap970a51bb-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.665 253542 DEBUG os_vif [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:74:a0,bridge_name='br-int',has_traffic_filtering=True,id=970a51bb-207b-46ae-bb14-c743ea86eb2f,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap970a51bb-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.668 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap970a51bb-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.673 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.676 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.679 253542 INFO os_vif [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:74:a0,bridge_name='br-int',has_traffic_filtering=True,id=970a51bb-207b-46ae-bb14-c743ea86eb2f,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap970a51bb-20')
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.798 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updating instance_info_cache with network_info: [{"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.817 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.817 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.818 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.819 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.819 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.997 253542 DEBUG nova.compute.manager [req-506cd6e5-6087-4c7c-86b1-c9362694b5d1 req-81978fe0-d6ef-44f1-893a-064580734d97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-vif-unplugged-970a51bb-207b-46ae-bb14-c743ea86eb2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.998 253542 DEBUG oslo_concurrency.lockutils [req-506cd6e5-6087-4c7c-86b1-c9362694b5d1 req-81978fe0-d6ef-44f1-893a-064580734d97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:01 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.999 253542 DEBUG oslo_concurrency.lockutils [req-506cd6e5-6087-4c7c-86b1-c9362694b5d1 req-81978fe0-d6ef-44f1-893a-064580734d97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:02 compute-0 nova_compute[253538]: 2025-11-25 09:13:01.999 253542 DEBUG oslo_concurrency.lockutils [req-506cd6e5-6087-4c7c-86b1-c9362694b5d1 req-81978fe0-d6ef-44f1-893a-064580734d97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:02 compute-0 nova_compute[253538]: 2025-11-25 09:13:02.000 253542 DEBUG nova.compute.manager [req-506cd6e5-6087-4c7c-86b1-c9362694b5d1 req-81978fe0-d6ef-44f1-893a-064580734d97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] No waiting events found dispatching network-vif-unplugged-970a51bb-207b-46ae-bb14-c743ea86eb2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:13:02 compute-0 nova_compute[253538]: 2025-11-25 09:13:02.001 253542 DEBUG nova.compute.manager [req-506cd6e5-6087-4c7c-86b1-c9362694b5d1 req-81978fe0-d6ef-44f1-893a-064580734d97 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-vif-unplugged-970a51bb-207b-46ae-bb14-c743ea86eb2f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:13:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2779: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 133 KiB/s rd, 484 KiB/s wr, 32 op/s
Nov 25 09:13:02 compute-0 nova_compute[253538]: 2025-11-25 09:13:02.449 253542 INFO nova.virt.libvirt.driver [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Deleting instance files /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7_del
Nov 25 09:13:02 compute-0 nova_compute[253538]: 2025-11-25 09:13:02.450 253542 INFO nova.virt.libvirt.driver [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Deletion of /var/lib/nova/instances/985307b1-28a6-47cc-8dfc-f18ab08169f7_del complete
Nov 25 09:13:02 compute-0 nova_compute[253538]: 2025-11-25 09:13:02.506 253542 INFO nova.compute.manager [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Took 1.51 seconds to destroy the instance on the hypervisor.
Nov 25 09:13:02 compute-0 nova_compute[253538]: 2025-11-25 09:13:02.507 253542 DEBUG oslo.service.loopingcall [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:13:02 compute-0 nova_compute[253538]: 2025-11-25 09:13:02.507 253542 DEBUG nova.compute.manager [-] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:13:02 compute-0 nova_compute[253538]: 2025-11-25 09:13:02.508 253542 DEBUG nova.network.neutron [-] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:13:02 compute-0 nova_compute[253538]: 2025-11-25 09:13:02.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.005 253542 DEBUG nova.network.neutron [-] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.020 253542 INFO nova.compute.manager [-] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Took 0.51 seconds to deallocate network for instance.
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.068 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.069 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.096 253542 DEBUG nova.compute.manager [req-efa619af-4cca-49e7-9759-6f96fb7ae67e req-4ae3d06b-490f-4514-bcad-96d293946cae b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-vif-deleted-970a51bb-207b-46ae-bb14-c743ea86eb2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.165 253542 DEBUG nova.scheduler.client.report [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.233 253542 DEBUG nova.scheduler.client.report [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.234 253542 DEBUG nova.compute.provider_tree [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.308 253542 DEBUG nova.scheduler.client.report [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 09:13:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.333 253542 DEBUG nova.scheduler.client.report [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.389 253542 DEBUG oslo_concurrency.processutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:13:03 compute-0 ceph-mon[75015]: pgmap v2779: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 133 KiB/s rd, 484 KiB/s wr, 32 op/s
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.635 253542 DEBUG nova.network.neutron [req-62190143-8b4a-4975-be50-1cd79a5aa54f req-a1327cd6-251c-4372-8b74-ee3e82b136b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Updated VIF entry in instance network info cache for port 970a51bb-207b-46ae-bb14-c743ea86eb2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.636 253542 DEBUG nova.network.neutron [req-62190143-8b4a-4975-be50-1cd79a5aa54f req-a1327cd6-251c-4372-8b74-ee3e82b136b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Updating instance_info_cache with network_info: [{"id": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "address": "fa:16:3e:f0:74:a0", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:74a0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap970a51bb-20", "ovs_interfaceid": "970a51bb-207b-46ae-bb14-c743ea86eb2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.766 253542 DEBUG oslo_concurrency.lockutils [req-62190143-8b4a-4975-be50-1cd79a5aa54f req-a1327cd6-251c-4372-8b74-ee3e82b136b6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-985307b1-28a6-47cc-8dfc-f18ab08169f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:13:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:13:03 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2942042518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.919 253542 DEBUG oslo_concurrency.processutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.926 253542 DEBUG nova.compute.provider_tree [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.938 253542 DEBUG nova.scheduler.client.report [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:13:03 compute-0 nova_compute[253538]: 2025-11-25 09:13:03.973 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:04 compute-0 nova_compute[253538]: 2025-11-25 09:13:04.006 253542 INFO nova.scheduler.client.report [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 985307b1-28a6-47cc-8dfc-f18ab08169f7
Nov 25 09:13:04 compute-0 nova_compute[253538]: 2025-11-25 09:13:04.075 253542 DEBUG nova.compute.manager [req-706e7dee-802d-43ec-a280-dcb34b18f9b5 req-f4ab714a-08fd-43a7-adbd-b6ae671f88eb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received event network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:13:04 compute-0 nova_compute[253538]: 2025-11-25 09:13:04.076 253542 DEBUG oslo_concurrency.lockutils [req-706e7dee-802d-43ec-a280-dcb34b18f9b5 req-f4ab714a-08fd-43a7-adbd-b6ae671f88eb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:04 compute-0 nova_compute[253538]: 2025-11-25 09:13:04.076 253542 DEBUG oslo_concurrency.lockutils [req-706e7dee-802d-43ec-a280-dcb34b18f9b5 req-f4ab714a-08fd-43a7-adbd-b6ae671f88eb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:04 compute-0 nova_compute[253538]: 2025-11-25 09:13:04.076 253542 DEBUG oslo_concurrency.lockutils [req-706e7dee-802d-43ec-a280-dcb34b18f9b5 req-f4ab714a-08fd-43a7-adbd-b6ae671f88eb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:04 compute-0 nova_compute[253538]: 2025-11-25 09:13:04.077 253542 DEBUG nova.compute.manager [req-706e7dee-802d-43ec-a280-dcb34b18f9b5 req-f4ab714a-08fd-43a7-adbd-b6ae671f88eb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] No waiting events found dispatching network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:13:04 compute-0 nova_compute[253538]: 2025-11-25 09:13:04.077 253542 WARNING nova.compute.manager [req-706e7dee-802d-43ec-a280-dcb34b18f9b5 req-f4ab714a-08fd-43a7-adbd-b6ae671f88eb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Received unexpected event network-vif-plugged-970a51bb-207b-46ae-bb14-c743ea86eb2f for instance with vm_state deleted and task_state None.
Nov 25 09:13:04 compute-0 nova_compute[253538]: 2025-11-25 09:13:04.094 253542 DEBUG oslo_concurrency.lockutils [None req-a8d5cb5a-6bac-4adf-b11d-be50e9684937 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "985307b1-28a6-47cc-8dfc-f18ab08169f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2780: 321 pgs: 321 active+clean; 222 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 93 KiB/s wr, 13 op/s
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012365213135703589 of space, bias 1.0, pg target 0.37095639407110764 quantized to 32 (current 32)
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:13:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:13:04 compute-0 nova_compute[253538]: 2025-11-25 09:13:04.500 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:04 compute-0 nova_compute[253538]: 2025-11-25 09:13:04.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:13:04 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2942042518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.186 253542 DEBUG nova.compute.manager [req-bed71b33-ffa3-4ad7-b911-d1a5c1fcf508 req-df142172-2af5-4a75-83dc-f899e6e456b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-changed-637fce28-ce53-4bd9-95fb-dc0675dd7009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.187 253542 DEBUG nova.compute.manager [req-bed71b33-ffa3-4ad7-b911-d1a5c1fcf508 req-df142172-2af5-4a75-83dc-f899e6e456b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Refreshing instance network info cache due to event network-changed-637fce28-ce53-4bd9-95fb-dc0675dd7009. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.187 253542 DEBUG oslo_concurrency.lockutils [req-bed71b33-ffa3-4ad7-b911-d1a5c1fcf508 req-df142172-2af5-4a75-83dc-f899e6e456b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.187 253542 DEBUG oslo_concurrency.lockutils [req-bed71b33-ffa3-4ad7-b911-d1a5c1fcf508 req-df142172-2af5-4a75-83dc-f899e6e456b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.187 253542 DEBUG nova.network.neutron [req-bed71b33-ffa3-4ad7-b911-d1a5c1fcf508 req-df142172-2af5-4a75-83dc-f899e6e456b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Refreshing network info cache for port 637fce28-ce53-4bd9-95fb-dc0675dd7009 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.211 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.212 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.212 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.212 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.212 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.213 253542 INFO nova.compute.manager [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Terminating instance
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.214 253542 DEBUG nova.compute.manager [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:13:05 compute-0 kernel: tap637fce28-ce (unregistering): left promiscuous mode
Nov 25 09:13:05 compute-0 NetworkManager[48915]: <info>  [1764061985.2832] device (tap637fce28-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:13:05 compute-0 ovn_controller[152859]: 2025-11-25T09:13:05Z|01576|binding|INFO|Releasing lport 637fce28-ce53-4bd9-95fb-dc0675dd7009 from this chassis (sb_readonly=0)
Nov 25 09:13:05 compute-0 ovn_controller[152859]: 2025-11-25T09:13:05Z|01577|binding|INFO|Setting lport 637fce28-ce53-4bd9-95fb-dc0675dd7009 down in Southbound
Nov 25 09:13:05 compute-0 ovn_controller[152859]: 2025-11-25T09:13:05Z|01578|binding|INFO|Removing iface tap637fce28-ce ovn-installed in OVS
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.299 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:4e:62 10.100.0.4 2001:db8:0:1:f816:3eff:fe59:4e62 2001:db8::f816:3eff:fe59:4e62'], port_security=['fa:16:3e:59:4e:62 10.100.0.4 2001:db8:0:1:f816:3eff:fe59:4e62 2001:db8::f816:3eff:fe59:4e62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8:0:1:f816:3eff:fe59:4e62/64 2001:db8::f816:3eff:fe59:4e62/64', 'neutron:device_id': 'f5964963-11b8-4fd9-ace9-e5ee67571925', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67e8b3b6-9b34-4c2b-a8fc-88ec98721eb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82db8616-e2a9-492c-9b1b-8c775409acb3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=637fce28-ce53-4bd9-95fb-dc0675dd7009) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:13:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.301 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 637fce28-ce53-4bd9-95fb-dc0675dd7009 in datapath 6c73317d-f647-4813-8469-7d8f6ba2c0c7 unbound from our chassis
Nov 25 09:13:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.303 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c73317d-f647-4813-8469-7d8f6ba2c0c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:13:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.305 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee7742f-dda6-4f97-a6a9-4148b4cb6334]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.306 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7 namespace which is not needed anymore
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.311 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.313 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.313 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:05 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000092.scope: Deactivated successfully.
Nov 25 09:13:05 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000092.scope: Consumed 16.725s CPU time.
Nov 25 09:13:05 compute-0 systemd-machined[215790]: Machine qemu-176-instance-00000092 terminated.
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.482 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.496 253542 INFO nova.virt.libvirt.driver [-] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Instance destroyed successfully.
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.497 253542 DEBUG nova.objects.instance [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid f5964963-11b8-4fd9-ace9-e5ee67571925 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:13:05 compute-0 neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7[410665]: [NOTICE]   (410669) : haproxy version is 2.8.14-c23fe91
Nov 25 09:13:05 compute-0 neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7[410665]: [NOTICE]   (410669) : path to executable is /usr/sbin/haproxy
Nov 25 09:13:05 compute-0 neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7[410665]: [WARNING]  (410669) : Exiting Master process...
Nov 25 09:13:05 compute-0 neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7[410665]: [WARNING]  (410669) : Exiting Master process...
Nov 25 09:13:05 compute-0 neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7[410665]: [ALERT]    (410669) : Current worker (410671) exited with code 143 (Terminated)
Nov 25 09:13:05 compute-0 neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7[410665]: [WARNING]  (410669) : All workers exited. Exiting... (0)
Nov 25 09:13:05 compute-0 systemd[1]: libpod-c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d.scope: Deactivated successfully.
Nov 25 09:13:05 compute-0 podman[412337]: 2025-11-25 09:13:05.513868322 +0000 UTC m=+0.068440381 container died c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.516 253542 DEBUG nova.virt.libvirt.vif [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:11:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1898594818',display_name='tempest-TestGettingAddress-server-1898594818',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1898594818',id=146,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCYWSfdVj+Wf7cbG/yHvSQzac2UDJ+lVbG+gHLhUdgH1axNTDtOQgu7Qi4+rF49LHVdNelU4faTO+e7DMH4d34+ViVy+2CuP/uQsAgIE7Bo9udHLpfncBLpy55zcDymblA==',key_name='tempest-TestGettingAddress-275156659',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:12:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-ezfjzmm2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:12:01Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=f5964963-11b8-4fd9-ace9-e5ee67571925,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.518 253542 DEBUG nova.network.os_vif_util [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.520 253542 DEBUG nova.network.os_vif_util [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:59:4e:62,bridge_name='br-int',has_traffic_filtering=True,id=637fce28-ce53-4bd9-95fb-dc0675dd7009,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637fce28-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.521 253542 DEBUG os_vif [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:4e:62,bridge_name='br-int',has_traffic_filtering=True,id=637fce28-ce53-4bd9-95fb-dc0675dd7009,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637fce28-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.522 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.523 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap637fce28-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.524 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.526 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.529 253542 INFO os_vif [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:4e:62,bridge_name='br-int',has_traffic_filtering=True,id=637fce28-ce53-4bd9-95fb-dc0675dd7009,network=Network(6c73317d-f647-4813-8469-7d8f6ba2c0c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637fce28-ce')
Nov 25 09:13:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d-userdata-shm.mount: Deactivated successfully.
Nov 25 09:13:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8dc5cf81cc7a79b68a934106958d2a3e5f0faa44968a236fc77d5815309e349-merged.mount: Deactivated successfully.
Nov 25 09:13:05 compute-0 podman[412337]: 2025-11-25 09:13:05.586733653 +0000 UTC m=+0.141305702 container cleanup c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:13:05 compute-0 systemd[1]: libpod-conmon-c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d.scope: Deactivated successfully.
Nov 25 09:13:05 compute-0 ceph-mon[75015]: pgmap v2780: 321 pgs: 321 active+clean; 222 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 93 KiB/s wr, 13 op/s
Nov 25 09:13:05 compute-0 podman[412394]: 2025-11-25 09:13:05.677597874 +0000 UTC m=+0.070549729 container remove c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 09:13:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.687 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[87f9b7ac-f4e8-41e6-8597-a102af40da92]: (4, ('Tue Nov 25 09:13:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7 (c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d)\nc02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d\nTue Nov 25 09:13:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7 (c02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d)\nc02b5a0433a2173378512657c6a2bae707a7c1cd7bc6a3918a43c1fa8e1f5a4d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.690 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d53fc46a-b7c1-4b32-a9f3-33b7ceaf38c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.691 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c73317d-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:13:05 compute-0 kernel: tap6c73317d-f0: left promiscuous mode
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:05 compute-0 nova_compute[253538]: 2025-11-25 09:13:05.722 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.725 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f213d44f-cf3d-4e81-a232-bd23c28aad45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.744 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[611173dc-7966-4be6-aebb-387360d86862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.745 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[889a86bb-da78-4504-9266-a7eb10bafe3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.770 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4b75279d-8008-4907-b452-e0acb90e2671]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727522, 'reachable_time': 34091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412408, 'error': None, 'target': 'ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.774 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c73317d-f647-4813-8469-7d8f6ba2c0c7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:13:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.774 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e9dcc30e-4396-4c11-8545-9f1e0f06d4af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d6c73317d\x2df647\x2d4813\x2d8469\x2d7d8f6ba2c0c7.mount: Deactivated successfully.
Nov 25 09:13:05 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:05.777 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:13:06 compute-0 nova_compute[253538]: 2025-11-25 09:13:06.099 253542 INFO nova.virt.libvirt.driver [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Deleting instance files /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925_del
Nov 25 09:13:06 compute-0 nova_compute[253538]: 2025-11-25 09:13:06.099 253542 INFO nova.virt.libvirt.driver [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Deletion of /var/lib/nova/instances/f5964963-11b8-4fd9-ace9-e5ee67571925_del complete
Nov 25 09:13:06 compute-0 nova_compute[253538]: 2025-11-25 09:13:06.151 253542 INFO nova.compute.manager [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Took 0.94 seconds to destroy the instance on the hypervisor.
Nov 25 09:13:06 compute-0 nova_compute[253538]: 2025-11-25 09:13:06.152 253542 DEBUG oslo.service.loopingcall [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:13:06 compute-0 nova_compute[253538]: 2025-11-25 09:13:06.152 253542 DEBUG nova.compute.manager [-] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:13:06 compute-0 nova_compute[253538]: 2025-11-25 09:13:06.152 253542 DEBUG nova.network.neutron [-] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:13:06 compute-0 nova_compute[253538]: 2025-11-25 09:13:06.233 253542 DEBUG nova.compute.manager [req-4e736d69-db68-4c0e-843f-17b22b365233 req-7eb6223e-0325-49ef-9865-94d3985954d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-vif-unplugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:13:06 compute-0 nova_compute[253538]: 2025-11-25 09:13:06.233 253542 DEBUG oslo_concurrency.lockutils [req-4e736d69-db68-4c0e-843f-17b22b365233 req-7eb6223e-0325-49ef-9865-94d3985954d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:06 compute-0 nova_compute[253538]: 2025-11-25 09:13:06.234 253542 DEBUG oslo_concurrency.lockutils [req-4e736d69-db68-4c0e-843f-17b22b365233 req-7eb6223e-0325-49ef-9865-94d3985954d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:06 compute-0 nova_compute[253538]: 2025-11-25 09:13:06.234 253542 DEBUG oslo_concurrency.lockutils [req-4e736d69-db68-4c0e-843f-17b22b365233 req-7eb6223e-0325-49ef-9865-94d3985954d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:06 compute-0 nova_compute[253538]: 2025-11-25 09:13:06.234 253542 DEBUG nova.compute.manager [req-4e736d69-db68-4c0e-843f-17b22b365233 req-7eb6223e-0325-49ef-9865-94d3985954d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] No waiting events found dispatching network-vif-unplugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:13:06 compute-0 nova_compute[253538]: 2025-11-25 09:13:06.234 253542 DEBUG nova.compute.manager [req-4e736d69-db68-4c0e-843f-17b22b365233 req-7eb6223e-0325-49ef-9865-94d3985954d1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-vif-unplugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:13:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2781: 321 pgs: 321 active+clean; 192 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 50 KiB/s wr, 22 op/s
Nov 25 09:13:06 compute-0 nova_compute[253538]: 2025-11-25 09:13:06.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:13:07 compute-0 nova_compute[253538]: 2025-11-25 09:13:07.217 253542 DEBUG nova.network.neutron [-] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:13:07 compute-0 nova_compute[253538]: 2025-11-25 09:13:07.242 253542 INFO nova.compute.manager [-] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Took 1.09 seconds to deallocate network for instance.
Nov 25 09:13:07 compute-0 nova_compute[253538]: 2025-11-25 09:13:07.284 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:07 compute-0 nova_compute[253538]: 2025-11-25 09:13:07.285 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:07 compute-0 nova_compute[253538]: 2025-11-25 09:13:07.292 253542 DEBUG nova.compute.manager [req-be9705fd-7464-4422-a391-ee554b622e9e req-40a362ce-4149-4023-961a-505f4809e6ca b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-vif-deleted-637fce28-ce53-4bd9-95fb-dc0675dd7009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:13:07 compute-0 nova_compute[253538]: 2025-11-25 09:13:07.338 253542 DEBUG oslo_concurrency.processutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:13:07 compute-0 ceph-mon[75015]: pgmap v2781: 321 pgs: 321 active+clean; 192 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 50 KiB/s wr, 22 op/s
Nov 25 09:13:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:13:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3816843362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:13:07 compute-0 nova_compute[253538]: 2025-11-25 09:13:07.852 253542 DEBUG oslo_concurrency.processutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:13:07 compute-0 nova_compute[253538]: 2025-11-25 09:13:07.862 253542 DEBUG nova.compute.provider_tree [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:13:07 compute-0 nova_compute[253538]: 2025-11-25 09:13:07.880 253542 DEBUG nova.scheduler.client.report [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:13:07 compute-0 nova_compute[253538]: 2025-11-25 09:13:07.902 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:07 compute-0 nova_compute[253538]: 2025-11-25 09:13:07.932 253542 INFO nova.scheduler.client.report [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance f5964963-11b8-4fd9-ace9-e5ee67571925
Nov 25 09:13:07 compute-0 nova_compute[253538]: 2025-11-25 09:13:07.985 253542 DEBUG oslo_concurrency.lockutils [None req-ad8dc415-7069-44b7-9ed8-89ea97771199 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:08 compute-0 nova_compute[253538]: 2025-11-25 09:13:08.170 253542 DEBUG nova.network.neutron [req-bed71b33-ffa3-4ad7-b911-d1a5c1fcf508 req-df142172-2af5-4a75-83dc-f899e6e456b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updated VIF entry in instance network info cache for port 637fce28-ce53-4bd9-95fb-dc0675dd7009. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:13:08 compute-0 nova_compute[253538]: 2025-11-25 09:13:08.171 253542 DEBUG nova.network.neutron [req-bed71b33-ffa3-4ad7-b911-d1a5c1fcf508 req-df142172-2af5-4a75-83dc-f899e6e456b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Updating instance_info_cache with network_info: [{"id": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "address": "fa:16:3e:59:4e:62", "network": {"id": "6c73317d-f647-4813-8469-7d8f6ba2c0c7", "bridge": "br-int", "label": "tempest-network-smoke--565189020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe59:4e62", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637fce28-ce", "ovs_interfaceid": "637fce28-ce53-4bd9-95fb-dc0675dd7009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:13:08 compute-0 nova_compute[253538]: 2025-11-25 09:13:08.189 253542 DEBUG oslo_concurrency.lockutils [req-bed71b33-ffa3-4ad7-b911-d1a5c1fcf508 req-df142172-2af5-4a75-83dc-f899e6e456b0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-f5964963-11b8-4fd9-ace9-e5ee67571925" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:13:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:13:08 compute-0 nova_compute[253538]: 2025-11-25 09:13:08.330 253542 DEBUG nova.compute.manager [req-aca2b843-a6c9-4aa7-8f71-0aaaa923de90 req-0e757407-11cc-431f-bf60-7981a46ce807 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received event network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:13:08 compute-0 nova_compute[253538]: 2025-11-25 09:13:08.331 253542 DEBUG oslo_concurrency.lockutils [req-aca2b843-a6c9-4aa7-8f71-0aaaa923de90 req-0e757407-11cc-431f-bf60-7981a46ce807 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:08 compute-0 nova_compute[253538]: 2025-11-25 09:13:08.331 253542 DEBUG oslo_concurrency.lockutils [req-aca2b843-a6c9-4aa7-8f71-0aaaa923de90 req-0e757407-11cc-431f-bf60-7981a46ce807 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:08 compute-0 nova_compute[253538]: 2025-11-25 09:13:08.332 253542 DEBUG oslo_concurrency.lockutils [req-aca2b843-a6c9-4aa7-8f71-0aaaa923de90 req-0e757407-11cc-431f-bf60-7981a46ce807 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "f5964963-11b8-4fd9-ace9-e5ee67571925-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:08 compute-0 nova_compute[253538]: 2025-11-25 09:13:08.332 253542 DEBUG nova.compute.manager [req-aca2b843-a6c9-4aa7-8f71-0aaaa923de90 req-0e757407-11cc-431f-bf60-7981a46ce807 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] No waiting events found dispatching network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:13:08 compute-0 nova_compute[253538]: 2025-11-25 09:13:08.333 253542 WARNING nova.compute.manager [req-aca2b843-a6c9-4aa7-8f71-0aaaa923de90 req-0e757407-11cc-431f-bf60-7981a46ce807 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Received unexpected event network-vif-plugged-637fce28-ce53-4bd9-95fb-dc0675dd7009 for instance with vm_state deleted and task_state None.
Nov 25 09:13:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2782: 321 pgs: 321 active+clean; 114 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 17 KiB/s wr, 47 op/s
Nov 25 09:13:08 compute-0 nova_compute[253538]: 2025-11-25 09:13:08.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:13:08 compute-0 nova_compute[253538]: 2025-11-25 09:13:08.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:08 compute-0 nova_compute[253538]: 2025-11-25 09:13:08.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:08 compute-0 nova_compute[253538]: 2025-11-25 09:13:08.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:08 compute-0 nova_compute[253538]: 2025-11-25 09:13:08.575 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:13:08 compute-0 nova_compute[253538]: 2025-11-25 09:13:08.576 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:13:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3816843362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:13:08 compute-0 podman[412455]: 2025-11-25 09:13:08.826013011 +0000 UTC m=+0.066365026 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:13:08 compute-0 podman[412454]: 2025-11-25 09:13:08.837379769 +0000 UTC m=+0.078081944 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:13:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:13:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890086821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:13:09 compute-0 nova_compute[253538]: 2025-11-25 09:13:09.008 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:13:09 compute-0 sshd-session[412432]: Invalid user es from 45.202.211.6 port 51718
Nov 25 09:13:09 compute-0 nova_compute[253538]: 2025-11-25 09:13:09.191 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:13:09 compute-0 nova_compute[253538]: 2025-11-25 09:13:09.193 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3597MB free_disk=59.970909118652344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:13:09 compute-0 nova_compute[253538]: 2025-11-25 09:13:09.193 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:09 compute-0 nova_compute[253538]: 2025-11-25 09:13:09.193 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:09 compute-0 sshd-session[412432]: Received disconnect from 45.202.211.6 port 51718:11: Bye Bye [preauth]
Nov 25 09:13:09 compute-0 sshd-session[412432]: Disconnected from invalid user es 45.202.211.6 port 51718 [preauth]
Nov 25 09:13:09 compute-0 nova_compute[253538]: 2025-11-25 09:13:09.267 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:13:09 compute-0 nova_compute[253538]: 2025-11-25 09:13:09.267 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:13:09 compute-0 nova_compute[253538]: 2025-11-25 09:13:09.290 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:13:09 compute-0 nova_compute[253538]: 2025-11-25 09:13:09.502 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:09 compute-0 ceph-mon[75015]: pgmap v2782: 321 pgs: 321 active+clean; 114 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 17 KiB/s wr, 47 op/s
Nov 25 09:13:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/890086821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:13:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:13:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/282700708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:13:09 compute-0 nova_compute[253538]: 2025-11-25 09:13:09.762 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:13:09 compute-0 nova_compute[253538]: 2025-11-25 09:13:09.767 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:13:09 compute-0 nova_compute[253538]: 2025-11-25 09:13:09.808 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:13:09 compute-0 nova_compute[253538]: 2025-11-25 09:13:09.835 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:13:09 compute-0 nova_compute[253538]: 2025-11-25 09:13:09.836 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2783: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 6.5 KiB/s wr, 57 op/s
Nov 25 09:13:10 compute-0 nova_compute[253538]: 2025-11-25 09:13:10.525 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/282700708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:13:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:10.779 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:13:11 compute-0 ceph-mon[75015]: pgmap v2783: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 6.5 KiB/s wr, 57 op/s
Nov 25 09:13:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2784: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 6.2 KiB/s wr, 56 op/s
Nov 25 09:13:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:13:13 compute-0 ceph-mon[75015]: pgmap v2784: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 6.2 KiB/s wr, 56 op/s
Nov 25 09:13:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2785: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.1 KiB/s wr, 55 op/s
Nov 25 09:13:14 compute-0 nova_compute[253538]: 2025-11-25 09:13:14.504 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:14 compute-0 nova_compute[253538]: 2025-11-25 09:13:14.706 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:14 compute-0 nova_compute[253538]: 2025-11-25 09:13:14.776 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:14 compute-0 podman[412519]: 2025-11-25 09:13:14.903175052 +0000 UTC m=+0.119321175 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 09:13:15 compute-0 nova_compute[253538]: 2025-11-25 09:13:15.527 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:15 compute-0 ceph-mon[75015]: pgmap v2785: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.1 KiB/s wr, 55 op/s
Nov 25 09:13:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2786: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 54 op/s
Nov 25 09:13:16 compute-0 nova_compute[253538]: 2025-11-25 09:13:16.644 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061981.6433387, 985307b1-28a6-47cc-8dfc-f18ab08169f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:13:16 compute-0 nova_compute[253538]: 2025-11-25 09:13:16.645 253542 INFO nova.compute.manager [-] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] VM Stopped (Lifecycle Event)
Nov 25 09:13:16 compute-0 nova_compute[253538]: 2025-11-25 09:13:16.728 253542 DEBUG nova.compute.manager [None req-69d292d2-0d52-4b1c-a844-3678ae8d7323 - - - - - -] [instance: 985307b1-28a6-47cc-8dfc-f18ab08169f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:13:17 compute-0 ceph-mon[75015]: pgmap v2786: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 54 op/s
Nov 25 09:13:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.347633) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061998347665, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1317, "num_deletes": 250, "total_data_size": 2002081, "memory_usage": 2030040, "flush_reason": "Manual Compaction"}
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Nov 25 09:13:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2787: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.4 KiB/s wr, 41 op/s
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061998362290, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 1185099, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57045, "largest_seqno": 58361, "table_properties": {"data_size": 1180416, "index_size": 2078, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12407, "raw_average_key_size": 20, "raw_value_size": 1170176, "raw_average_value_size": 1950, "num_data_blocks": 95, "num_entries": 600, "num_filter_entries": 600, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061869, "oldest_key_time": 1764061869, "file_creation_time": 1764061998, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 14750 microseconds, and 5110 cpu microseconds.
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.362374) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 1185099 bytes OK
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.362400) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.365245) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.365296) EVENT_LOG_v1 {"time_micros": 1764061998365285, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.365358) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 1996188, prev total WAL file size 1996188, number of live WAL files 2.
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.366673) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323535' seq:72057594037927935, type:22 .. '6D6772737461740032353036' seq:0, type:0; will stop at (end)
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(1157KB)], [134(10MB)]
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061998366750, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 11726519, "oldest_snapshot_seqno": -1}
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7768 keys, 9199115 bytes, temperature: kUnknown
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061998452159, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 9199115, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9150641, "index_size": 27945, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19461, "raw_key_size": 203243, "raw_average_key_size": 26, "raw_value_size": 9015372, "raw_average_value_size": 1160, "num_data_blocks": 1090, "num_entries": 7768, "num_filter_entries": 7768, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764061998, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.452552) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 9199115 bytes
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.456672) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.0 rd, 107.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.1 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(17.7) write-amplify(7.8) OK, records in: 8221, records dropped: 453 output_compression: NoCompression
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.456702) EVENT_LOG_v1 {"time_micros": 1764061998456689, "job": 82, "event": "compaction_finished", "compaction_time_micros": 85574, "compaction_time_cpu_micros": 22588, "output_level": 6, "num_output_files": 1, "total_output_size": 9199115, "num_input_records": 8221, "num_output_records": 7768, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061998457678, "job": 82, "event": "table_file_deletion", "file_number": 136}
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764061998461673, "job": 82, "event": "table_file_deletion", "file_number": 134}
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.366518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.461861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.461870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.461874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.461877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:13:18 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:13:18.461881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:13:19 compute-0 nova_compute[253538]: 2025-11-25 09:13:19.505 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:19 compute-0 ceph-mon[75015]: pgmap v2787: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.4 KiB/s wr, 41 op/s
Nov 25 09:13:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2788: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 9 op/s
Nov 25 09:13:20 compute-0 nova_compute[253538]: 2025-11-25 09:13:20.494 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764061985.4911578, f5964963-11b8-4fd9-ace9-e5ee67571925 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:13:20 compute-0 nova_compute[253538]: 2025-11-25 09:13:20.495 253542 INFO nova.compute.manager [-] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] VM Stopped (Lifecycle Event)
Nov 25 09:13:20 compute-0 nova_compute[253538]: 2025-11-25 09:13:20.516 253542 DEBUG nova.compute.manager [None req-5549c912-1677-41b9-8c42-835b85cc2250 - - - - - -] [instance: f5964963-11b8-4fd9-ace9-e5ee67571925] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:13:20 compute-0 nova_compute[253538]: 2025-11-25 09:13:20.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:20 compute-0 ceph-mon[75015]: pgmap v2788: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 9 op/s
Nov 25 09:13:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2789: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:13:23 compute-0 ceph-mon[75015]: pgmap v2789: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:13:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:13:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:13:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:13:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:13:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:13:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2790: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:24 compute-0 nova_compute[253538]: 2025-11-25 09:13:24.507 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:25 compute-0 ceph-mon[75015]: pgmap v2790: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:25 compute-0 nova_compute[253538]: 2025-11-25 09:13:25.581 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2791: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:27 compute-0 ceph-mon[75015]: pgmap v2791: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:13:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2792: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:13:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/283372595' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:13:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:13:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/283372595' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:13:29 compute-0 ceph-mon[75015]: pgmap v2792: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/283372595' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:13:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/283372595' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:13:29 compute-0 nova_compute[253538]: 2025-11-25 09:13:29.510 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2793: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:30 compute-0 nova_compute[253538]: 2025-11-25 09:13:30.584 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:30 compute-0 sshd-session[412546]: Connection closed by 45.78.217.205 port 59684 [preauth]
Nov 25 09:13:31 compute-0 ceph-mon[75015]: pgmap v2793: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2794: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:32.980 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:94:0e 10.100.0.2 2001:db8::f816:3eff:feae:940e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feae:940e/64', 'neutron:device_id': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcec4777-cf00-4ad5-abd7-0fc74f3a8f46, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0596e673-151c-4eed-ad1e-d612e39d6f14) old=Port_Binding(mac=['fa:16:3e:ae:94:0e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:13:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:32.982 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 0596e673-151c-4eed-ad1e-d612e39d6f14 in datapath a0d85633-9402-4022-8c0a-b00348775e93 updated
Nov 25 09:13:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:32.983 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d85633-9402-4022-8c0a-b00348775e93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:13:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:32.984 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6446cfcf-419e-46ff-9151-10feee93b3b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:13:33 compute-0 ceph-mon[75015]: pgmap v2794: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2795: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:34 compute-0 nova_compute[253538]: 2025-11-25 09:13:34.512 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:35 compute-0 ceph-mon[75015]: pgmap v2795: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:35 compute-0 nova_compute[253538]: 2025-11-25 09:13:35.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2796: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:37 compute-0 ceph-mon[75015]: pgmap v2796: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:13:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2797: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:39 compute-0 nova_compute[253538]: 2025-11-25 09:13:39.516 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:39 compute-0 ceph-mon[75015]: pgmap v2797: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:39 compute-0 podman[412549]: 2025-11-25 09:13:39.860852564 +0000 UTC m=+0.092726682 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:13:39 compute-0 podman[412548]: 2025-11-25 09:13:39.882219435 +0000 UTC m=+0.122257045 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 09:13:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:39.930 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:94:0e 10.100.0.2 2001:db8:0:1:f816:3eff:feae:940e 2001:db8::f816:3eff:feae:940e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:feae:940e/64 2001:db8::f816:3eff:feae:940e/64', 'neutron:device_id': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcec4777-cf00-4ad5-abd7-0fc74f3a8f46, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0596e673-151c-4eed-ad1e-d612e39d6f14) old=Port_Binding(mac=['fa:16:3e:ae:94:0e 10.100.0.2 2001:db8::f816:3eff:feae:940e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feae:940e/64', 'neutron:device_id': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:13:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:39.932 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 0596e673-151c-4eed-ad1e-d612e39d6f14 in datapath a0d85633-9402-4022-8c0a-b00348775e93 updated
Nov 25 09:13:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:39.932 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d85633-9402-4022-8c0a-b00348775e93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:13:39 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:39.933 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e6b951-6f36-4ed7-b950-a31606b50059]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2798: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:40 compute-0 nova_compute[253538]: 2025-11-25 09:13:40.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:41.097 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:41.098 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:41.098 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:41 compute-0 ceph-mon[75015]: pgmap v2798: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2799: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:13:43 compute-0 ceph-mon[75015]: pgmap v2799: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2800: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:44 compute-0 nova_compute[253538]: 2025-11-25 09:13:44.518 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.014 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.015 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.029 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.111 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.111 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.119 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.119 253542 INFO nova.compute.claims [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.227 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.594 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:13:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2659642029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.654 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.661 253542 DEBUG nova.compute.provider_tree [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.674 253542 DEBUG nova.scheduler.client.report [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.703 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.704 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:13:45 compute-0 ceph-mon[75015]: pgmap v2800: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:45 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2659642029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.749 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.749 253542 DEBUG nova.network.neutron [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.797 253542 INFO nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.819 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:13:45 compute-0 podman[412608]: 2025-11-25 09:13:45.866560373 +0000 UTC m=+0.110898446 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.927 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.929 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.930 253542 INFO nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Creating image(s)
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.957 253542 DEBUG nova.storage.rbd_utils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:13:45 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.978 253542 DEBUG nova.storage.rbd_utils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:45.999 253542 DEBUG nova.storage.rbd_utils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:46.003 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:46.077 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:46.078 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:46.079 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:46.080 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:46.112 253542 DEBUG nova.storage.rbd_utils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:46.117 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:13:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2801: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:46.506 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:46.577 253542 DEBUG nova.storage.rbd_utils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:46.690 253542 DEBUG nova.objects.instance [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:46.707 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:46.708 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Ensure instance console log exists: /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:46.709 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:46.709 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:46.709 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:46 compute-0 nova_compute[253538]: 2025-11-25 09:13:46.804 253542 DEBUG nova.policy [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:13:47 compute-0 ceph-mon[75015]: pgmap v2801: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:13:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:13:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2802: 321 pgs: 321 active+clean; 117 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 846 KiB/s wr, 3 op/s
Nov 25 09:13:48 compute-0 nova_compute[253538]: 2025-11-25 09:13:48.529 253542 DEBUG nova.network.neutron [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Successfully created port: bc0d7fbf-1c1d-43bc-884b-d89f14be4712 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:13:49 compute-0 nova_compute[253538]: 2025-11-25 09:13:49.521 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:49 compute-0 ceph-mon[75015]: pgmap v2802: 321 pgs: 321 active+clean; 117 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 846 KiB/s wr, 3 op/s
Nov 25 09:13:49 compute-0 nova_compute[253538]: 2025-11-25 09:13:49.813 253542 DEBUG nova.network.neutron [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Successfully updated port: bc0d7fbf-1c1d-43bc-884b-d89f14be4712 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:13:49 compute-0 nova_compute[253538]: 2025-11-25 09:13:49.836 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:13:49 compute-0 nova_compute[253538]: 2025-11-25 09:13:49.837 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:13:49 compute-0 nova_compute[253538]: 2025-11-25 09:13:49.843 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:13:49 compute-0 nova_compute[253538]: 2025-11-25 09:13:49.843 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:13:49 compute-0 nova_compute[253538]: 2025-11-25 09:13:49.844 253542 DEBUG nova.network.neutron [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:13:49 compute-0 nova_compute[253538]: 2025-11-25 09:13:49.973 253542 DEBUG nova.compute.manager [req-ea5a7282-22c4-49d2-b8f9-6a88f6cf9300 req-24e4fff6-9109-4b47-a959-89cec155777e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-changed-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:13:49 compute-0 nova_compute[253538]: 2025-11-25 09:13:49.973 253542 DEBUG nova.compute.manager [req-ea5a7282-22c4-49d2-b8f9-6a88f6cf9300 req-24e4fff6-9109-4b47-a959-89cec155777e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Refreshing instance network info cache due to event network-changed-bc0d7fbf-1c1d-43bc-884b-d89f14be4712. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:13:49 compute-0 nova_compute[253538]: 2025-11-25 09:13:49.974 253542 DEBUG oslo_concurrency.lockutils [req-ea5a7282-22c4-49d2-b8f9-6a88f6cf9300 req-24e4fff6-9109-4b47-a959-89cec155777e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:13:50 compute-0 nova_compute[253538]: 2025-11-25 09:13:50.015 253542 DEBUG nova.network.neutron [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:13:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2803: 321 pgs: 321 active+clean; 126 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Nov 25 09:13:50 compute-0 nova_compute[253538]: 2025-11-25 09:13:50.597 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:51 compute-0 ceph-mon[75015]: pgmap v2803: 321 pgs: 321 active+clean; 126 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.915 253542 DEBUG nova.network.neutron [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updating instance_info_cache with network_info: [{"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.963 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.963 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Instance network_info: |[{"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.964 253542 DEBUG oslo_concurrency.lockutils [req-ea5a7282-22c4-49d2-b8f9-6a88f6cf9300 req-24e4fff6-9109-4b47-a959-89cec155777e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.965 253542 DEBUG nova.network.neutron [req-ea5a7282-22c4-49d2-b8f9-6a88f6cf9300 req-24e4fff6-9109-4b47-a959-89cec155777e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Refreshing network info cache for port bc0d7fbf-1c1d-43bc-884b-d89f14be4712 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.970 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Start _get_guest_xml network_info=[{"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.976 253542 WARNING nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.982 253542 DEBUG nova.virt.libvirt.host [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.983 253542 DEBUG nova.virt.libvirt.host [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.991 253542 DEBUG nova.virt.libvirt.host [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.992 253542 DEBUG nova.virt.libvirt.host [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.992 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.993 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.993 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.993 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.994 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.994 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.994 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.995 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.995 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.995 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.996 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:13:51 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.996 253542 DEBUG nova.virt.hardware [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:13:52 compute-0 nova_compute[253538]: 2025-11-25 09:13:51.999 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:13:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2804: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:13:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:13:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3252352678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:13:52 compute-0 nova_compute[253538]: 2025-11-25 09:13:52.433 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:13:52 compute-0 nova_compute[253538]: 2025-11-25 09:13:52.460 253542 DEBUG nova.storage.rbd_utils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:13:52 compute-0 nova_compute[253538]: 2025-11-25 09:13:52.465 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:13:52 compute-0 ceph-mon[75015]: pgmap v2804: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:13:52 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3252352678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:13:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:13:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2957749098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:13:52 compute-0 nova_compute[253538]: 2025-11-25 09:13:52.972 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:13:52 compute-0 nova_compute[253538]: 2025-11-25 09:13:52.973 253542 DEBUG nova.virt.libvirt.vif [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:13:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-846287804',display_name='tempest-TestGettingAddress-server-846287804',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-846287804',id=148,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLjQVPPbaLK0o8SGnTKKoAiXxzMCTbiXlcClrAsKocG23tIVNKIl+uyDryAiBRRNgOXs1VZtoFvH020JuANRMKekifvu6hXrGsC+ZSxX2adgnjX2NrqkFauwj51UvSALOg==',key_name='tempest-TestGettingAddress-1641139979',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-4cgshl0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:13:45Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=3525156a-e9c9-40b7-88f6-db0de5eb3cd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:13:52 compute-0 nova_compute[253538]: 2025-11-25 09:13:52.974 253542 DEBUG nova.network.os_vif_util [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:13:52 compute-0 nova_compute[253538]: 2025-11-25 09:13:52.975 253542 DEBUG nova.network.os_vif_util [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:11:da,bridge_name='br-int',has_traffic_filtering=True,id=bc0d7fbf-1c1d-43bc-884b-d89f14be4712,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc0d7fbf-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:13:52 compute-0 nova_compute[253538]: 2025-11-25 09:13:52.976 253542 DEBUG nova.objects.instance [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.000 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:13:53 compute-0 nova_compute[253538]:   <uuid>3525156a-e9c9-40b7-88f6-db0de5eb3cd1</uuid>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   <name>instance-00000094</name>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <nova:name>tempest-TestGettingAddress-server-846287804</nova:name>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:13:51</nova:creationTime>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:13:53 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:13:53 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:13:53 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:13:53 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:13:53 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:13:53 compute-0 nova_compute[253538]:         <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 09:13:53 compute-0 nova_compute[253538]:         <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:13:53 compute-0 nova_compute[253538]:         <nova:port uuid="bc0d7fbf-1c1d-43bc-884b-d89f14be4712">
Nov 25 09:13:53 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe57:11da" ipVersion="6"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe57:11da" ipVersion="6"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <system>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <entry name="serial">3525156a-e9c9-40b7-88f6-db0de5eb3cd1</entry>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <entry name="uuid">3525156a-e9c9-40b7-88f6-db0de5eb3cd1</entry>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     </system>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   <os>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   </os>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   <features>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   </features>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk">
Nov 25 09:13:53 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       </source>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:13:53 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk.config">
Nov 25 09:13:53 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       </source>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:13:53 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:57:11:da"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <target dev="tapbc0d7fbf-1c"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1/console.log" append="off"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <video>
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     </video>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:13:53 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:13:53 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:13:53 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:13:53 compute-0 nova_compute[253538]: </domain>
Nov 25 09:13:53 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.002 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Preparing to wait for external event network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.002 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.003 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.003 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.004 253542 DEBUG nova.virt.libvirt.vif [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:13:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-846287804',display_name='tempest-TestGettingAddress-server-846287804',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-846287804',id=148,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLjQVPPbaLK0o8SGnTKKoAiXxzMCTbiXlcClrAsKocG23tIVNKIl+uyDryAiBRRNgOXs1VZtoFvH020JuANRMKekifvu6hXrGsC+ZSxX2adgnjX2NrqkFauwj51UvSALOg==',key_name='tempest-TestGettingAddress-1641139979',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-4cgshl0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:13:45Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=3525156a-e9c9-40b7-88f6-db0de5eb3cd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.004 253542 DEBUG nova.network.os_vif_util [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.005 253542 DEBUG nova.network.os_vif_util [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:11:da,bridge_name='br-int',has_traffic_filtering=True,id=bc0d7fbf-1c1d-43bc-884b-d89f14be4712,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc0d7fbf-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.005 253542 DEBUG os_vif [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:11:da,bridge_name='br-int',has_traffic_filtering=True,id=bc0d7fbf-1c1d-43bc-884b-d89f14be4712,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc0d7fbf-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.006 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.006 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.007 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.009 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc0d7fbf-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.009 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc0d7fbf-1c, col_values=(('external_ids', {'iface-id': 'bc0d7fbf-1c1d-43bc-884b-d89f14be4712', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:11:da', 'vm-uuid': '3525156a-e9c9-40b7-88f6-db0de5eb3cd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.011 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:53 compute-0 NetworkManager[48915]: <info>  [1764062033.0119] manager: (tapbc0d7fbf-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/648)
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.013 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.021 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.021 253542 INFO os_vif [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:11:da,bridge_name='br-int',has_traffic_filtering=True,id=bc0d7fbf-1c1d-43bc-884b-d89f14be4712,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc0d7fbf-1c')
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.058 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.059 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.060 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:57:11:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.060 253542 INFO nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Using config drive
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.082 253542 DEBUG nova.storage.rbd_utils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:13:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.347 253542 INFO nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Creating config drive at /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1/disk.config
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.358 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpun_364r7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:13:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:13:53
Nov 25 09:13:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:13:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:13:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.log', 'images']
Nov 25 09:13:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:13:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:13:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:13:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:13:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:13:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:13:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.513 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpun_364r7" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.535 253542 DEBUG nova.storage.rbd_utils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.539 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1/disk.config 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.688 253542 DEBUG oslo_concurrency.processutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1/disk.config 3525156a-e9c9-40b7-88f6-db0de5eb3cd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.690 253542 INFO nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Deleting local config drive /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1/disk.config because it was imported into RBD.
Nov 25 09:13:53 compute-0 kernel: tapbc0d7fbf-1c: entered promiscuous mode
Nov 25 09:13:53 compute-0 NetworkManager[48915]: <info>  [1764062033.7711] manager: (tapbc0d7fbf-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/649)
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2957749098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:13:53 compute-0 ovn_controller[152859]: 2025-11-25T09:13:53Z|01579|binding|INFO|Claiming lport bc0d7fbf-1c1d-43bc-884b-d89f14be4712 for this chassis.
Nov 25 09:13:53 compute-0 ovn_controller[152859]: 2025-11-25T09:13:53Z|01580|binding|INFO|bc0d7fbf-1c1d-43bc-884b-d89f14be4712: Claiming fa:16:3e:57:11:da 10.100.0.12 2001:db8:0:1:f816:3eff:fe57:11da 2001:db8::f816:3eff:fe57:11da
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.780 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.784 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.794 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:11:da 10.100.0.12 2001:db8:0:1:f816:3eff:fe57:11da 2001:db8::f816:3eff:fe57:11da'], port_security=['fa:16:3e:57:11:da 10.100.0.12 2001:db8:0:1:f816:3eff:fe57:11da 2001:db8::f816:3eff:fe57:11da'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8:0:1:f816:3eff:fe57:11da/64 2001:db8::f816:3eff:fe57:11da/64', 'neutron:device_id': '3525156a-e9c9-40b7-88f6-db0de5eb3cd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27a20ef1-698a-4857-9ba1-1bcdffeb008a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcec4777-cf00-4ad5-abd7-0fc74f3a8f46, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc0d7fbf-1c1d-43bc-884b-d89f14be4712) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.795 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc0d7fbf-1c1d-43bc-884b-d89f14be4712 in datapath a0d85633-9402-4022-8c0a-b00348775e93 bound to our chassis
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.796 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0d85633-9402-4022-8c0a-b00348775e93
Nov 25 09:13:53 compute-0 systemd-udevd[412933]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.799 253542 DEBUG nova.network.neutron [req-ea5a7282-22c4-49d2-b8f9-6a88f6cf9300 req-24e4fff6-9109-4b47-a959-89cec155777e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updated VIF entry in instance network info cache for port bc0d7fbf-1c1d-43bc-884b-d89f14be4712. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.800 253542 DEBUG nova.network.neutron [req-ea5a7282-22c4-49d2-b8f9-6a88f6cf9300 req-24e4fff6-9109-4b47-a959-89cec155777e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updating instance_info_cache with network_info: [{"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:13:53 compute-0 NetworkManager[48915]: <info>  [1764062033.8095] device (tapbc0d7fbf-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:13:53 compute-0 NetworkManager[48915]: <info>  [1764062033.8119] device (tapbc0d7fbf-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.809 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3d802538-a2d4-4aa4-840d-3e77be0e9034]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.810 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0d85633-91 in ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.812 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0d85633-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.812 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[71dc1414-4cd6-442f-9aa6-b0e5b68e8646]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.813 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[23f41ec7-1971-4fef-a5da-7d61c523867b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:53 compute-0 systemd-machined[215790]: New machine qemu-178-instance-00000094.
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.816 253542 DEBUG oslo_concurrency.lockutils [req-ea5a7282-22c4-49d2-b8f9-6a88f6cf9300 req-24e4fff6-9109-4b47-a959-89cec155777e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.826 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[4b293432-ff8f-4926-ac4d-74e4a5b1b970]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:53 compute-0 systemd[1]: Started Virtual Machine qemu-178-instance-00000094.
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.852 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[942a472f-b178-4c00-ac66-1389f17e291e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:53 compute-0 ovn_controller[152859]: 2025-11-25T09:13:53Z|01581|binding|INFO|Setting lport bc0d7fbf-1c1d-43bc-884b-d89f14be4712 ovn-installed in OVS
Nov 25 09:13:53 compute-0 ovn_controller[152859]: 2025-11-25T09:13:53Z|01582|binding|INFO|Setting lport bc0d7fbf-1c1d-43bc-884b-d89f14be4712 up in Southbound
Nov 25 09:13:53 compute-0 nova_compute[253538]: 2025-11-25 09:13:53.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.883 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[36822361-48c5-4b1b-96f8-50a43991eb6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:53 compute-0 NetworkManager[48915]: <info>  [1764062033.8892] manager: (tapa0d85633-90): new Veth device (/org/freedesktop/NetworkManager/Devices/650)
Nov 25 09:13:53 compute-0 systemd-udevd[412937]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.888 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[68afee91-7aa8-4964-8e36-e15320448cc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.919 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[f11face7-61b0-4a79-9087-f93c628cf312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.921 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c857d6-59e5-4ca1-ba5d-e4edc324f8d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:53 compute-0 NetworkManager[48915]: <info>  [1764062033.9469] device (tapa0d85633-90): carrier: link connected
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.954 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[4129cdca-2311-4c7b-a53f-9da83bd0cb71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.973 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4cf901-d5c9-4103-8fb0-e56e3a73449e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d85633-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:94:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738854, 'reachable_time': 17518, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412967, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:53.994 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d30da5-735c-42fa-9009-f8bac4a259ea]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:940e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738854, 'tstamp': 738854}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412968, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.012 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[87302f19-4df3-4f3b-9937-b387ebe295d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d85633-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:94:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738854, 'reachable_time': 17518, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 412969, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.046 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[60ab94d4-6a2a-44e9-96a6-466ce1c2ce8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.061 253542 DEBUG nova.compute.manager [req-88301cb8-71e7-44b2-8fbf-664bdacccf4f req-a0215cc6-94e4-44af-81ce-87cec939e0a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.063 253542 DEBUG oslo_concurrency.lockutils [req-88301cb8-71e7-44b2-8fbf-664bdacccf4f req-a0215cc6-94e4-44af-81ce-87cec939e0a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.063 253542 DEBUG oslo_concurrency.lockutils [req-88301cb8-71e7-44b2-8fbf-664bdacccf4f req-a0215cc6-94e4-44af-81ce-87cec939e0a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.064 253542 DEBUG oslo_concurrency.lockutils [req-88301cb8-71e7-44b2-8fbf-664bdacccf4f req-a0215cc6-94e4-44af-81ce-87cec939e0a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.064 253542 DEBUG nova.compute.manager [req-88301cb8-71e7-44b2-8fbf-664bdacccf4f req-a0215cc6-94e4-44af-81ce-87cec939e0a4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Processing event network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.102 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8425dcb5-b697-4298-bc3b-f5ecb8bb365b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.103 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d85633-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.103 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.104 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d85633-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.105 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:54 compute-0 kernel: tapa0d85633-90: entered promiscuous mode
Nov 25 09:13:54 compute-0 NetworkManager[48915]: <info>  [1764062034.1069] manager: (tapa0d85633-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.108 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.111 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0d85633-90, col_values=(('external_ids', {'iface-id': '0596e673-151c-4eed-ad1e-d612e39d6f14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.112 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:54 compute-0 ovn_controller[152859]: 2025-11-25T09:13:54Z|01583|binding|INFO|Releasing lport 0596e673-151c-4eed-ad1e-d612e39d6f14 from this chassis (sb_readonly=0)
Nov 25 09:13:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.116 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0d85633-9402-4022-8c0a-b00348775e93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0d85633-9402-4022-8c0a-b00348775e93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:13:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:13:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:13:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:13:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:13:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:13:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:13:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.117 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5fd35a-1f46-4d8b-80e2-4d1aa2472f9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:13:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.119 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-a0d85633-9402-4022-8c0a-b00348775e93
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/a0d85633-9402-4022-8c0a-b00348775e93.pid.haproxy
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID a0d85633-9402-4022-8c0a-b00348775e93
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:13:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:13:54 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:13:54.119 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'env', 'PROCESS_TAG=haproxy-a0d85633-9402-4022-8c0a-b00348775e93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0d85633-9402-4022-8c0a-b00348775e93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.125 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.226 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.229 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062034.2285385, 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.229 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] VM Started (Lifecycle Event)
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.231 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.234 253542 INFO nova.virt.libvirt.driver [-] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Instance spawned successfully.
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.234 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.245 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.250 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.255 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.255 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.256 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.256 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.257 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.257 253542 DEBUG nova.virt.libvirt.driver [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.276 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.277 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062034.2286754, 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.277 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] VM Paused (Lifecycle Event)
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.301 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.304 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062034.2297797, 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.305 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] VM Resumed (Lifecycle Event)
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.313 253542 INFO nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Took 8.38 seconds to spawn the instance on the hypervisor.
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.313 253542 DEBUG nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.322 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.324 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.342 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.370 253542 INFO nova.compute.manager [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Took 9.29 seconds to build instance.
Nov 25 09:13:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2805: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.385 253542 DEBUG oslo_concurrency.lockutils [None req-9519dcae-62f0-45ad-a54d-9a20fba8c9e5 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:54 compute-0 podman[413043]: 2025-11-25 09:13:54.514103718 +0000 UTC m=+0.047120392 container create 7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 09:13:54 compute-0 nova_compute[253538]: 2025-11-25 09:13:54.523 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:54 compute-0 systemd[1]: Started libpod-conmon-7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9.scope.
Nov 25 09:13:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:13:54 compute-0 podman[413043]: 2025-11-25 09:13:54.490694901 +0000 UTC m=+0.023711605 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92dfa951f831771d47b5f3c19baa67a33147d7a8d9cd91e8a74b897c79b9363c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:13:54 compute-0 podman[413043]: 2025-11-25 09:13:54.599590812 +0000 UTC m=+0.132607586 container init 7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 09:13:54 compute-0 podman[413043]: 2025-11-25 09:13:54.605073701 +0000 UTC m=+0.138090385 container start 7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:13:54 compute-0 neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93[413058]: [NOTICE]   (413062) : New worker (413064) forked
Nov 25 09:13:54 compute-0 neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93[413058]: [NOTICE]   (413062) : Loading success.
Nov 25 09:13:54 compute-0 ceph-mon[75015]: pgmap v2805: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:13:56 compute-0 nova_compute[253538]: 2025-11-25 09:13:56.129 253542 DEBUG nova.compute.manager [req-619ec3ae-127f-445e-b9f0-53f1a7b51493 req-b0435bb4-1a42-4ce0-81ed-1ddca32c571f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:13:56 compute-0 nova_compute[253538]: 2025-11-25 09:13:56.129 253542 DEBUG oslo_concurrency.lockutils [req-619ec3ae-127f-445e-b9f0-53f1a7b51493 req-b0435bb4-1a42-4ce0-81ed-1ddca32c571f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:13:56 compute-0 nova_compute[253538]: 2025-11-25 09:13:56.130 253542 DEBUG oslo_concurrency.lockutils [req-619ec3ae-127f-445e-b9f0-53f1a7b51493 req-b0435bb4-1a42-4ce0-81ed-1ddca32c571f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:13:56 compute-0 nova_compute[253538]: 2025-11-25 09:13:56.130 253542 DEBUG oslo_concurrency.lockutils [req-619ec3ae-127f-445e-b9f0-53f1a7b51493 req-b0435bb4-1a42-4ce0-81ed-1ddca32c571f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:13:56 compute-0 nova_compute[253538]: 2025-11-25 09:13:56.131 253542 DEBUG nova.compute.manager [req-619ec3ae-127f-445e-b9f0-53f1a7b51493 req-b0435bb4-1a42-4ce0-81ed-1ddca32c571f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] No waiting events found dispatching network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:13:56 compute-0 nova_compute[253538]: 2025-11-25 09:13:56.131 253542 WARNING nova.compute.manager [req-619ec3ae-127f-445e-b9f0-53f1a7b51493 req-b0435bb4-1a42-4ce0-81ed-1ddca32c571f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received unexpected event network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 for instance with vm_state active and task_state None.
Nov 25 09:13:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2806: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 25 09:13:56 compute-0 nova_compute[253538]: 2025-11-25 09:13:56.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:13:57 compute-0 ceph-mon[75015]: pgmap v2806: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 25 09:13:57 compute-0 sudo[413073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:13:57 compute-0 sudo[413073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:13:57 compute-0 sudo[413073]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:57 compute-0 sudo[413098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:13:57 compute-0 sudo[413098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:13:57 compute-0 sudo[413098]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:57 compute-0 sudo[413123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:13:57 compute-0 sudo[413123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:13:57 compute-0 sudo[413123]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:57 compute-0 sudo[413148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:13:57 compute-0 sudo[413148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:13:57 compute-0 ovn_controller[152859]: 2025-11-25T09:13:57Z|01584|binding|INFO|Releasing lport 0596e673-151c-4eed-ad1e-d612e39d6f14 from this chassis (sb_readonly=0)
Nov 25 09:13:57 compute-0 nova_compute[253538]: 2025-11-25 09:13:57.888 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:57 compute-0 NetworkManager[48915]: <info>  [1764062037.8918] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/652)
Nov 25 09:13:57 compute-0 NetworkManager[48915]: <info>  [1764062037.8932] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/653)
Nov 25 09:13:57 compute-0 ovn_controller[152859]: 2025-11-25T09:13:57Z|01585|binding|INFO|Releasing lport 0596e673-151c-4eed-ad1e-d612e39d6f14 from this chassis (sb_readonly=0)
Nov 25 09:13:57 compute-0 nova_compute[253538]: 2025-11-25 09:13:57.949 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:57 compute-0 nova_compute[253538]: 2025-11-25 09:13:57.956 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:58 compute-0 nova_compute[253538]: 2025-11-25 09:13:58.011 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:13:58 compute-0 sudo[413148]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 09:13:58 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:13:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:13:58 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:13:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:13:58 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:13:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:13:58 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:13:58 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev b91b1ca9-cab7-4043-96b0-af1f9f7e327f does not exist
Nov 25 09:13:58 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev f10292e2-2cfb-4ea8-9911-e5ae8a4767ad does not exist
Nov 25 09:13:58 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c9f0f336-e2a1-42bb-8bda-c58de83795a7 does not exist
Nov 25 09:13:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:13:58 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:13:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:13:58 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:13:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:13:58 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:13:58 compute-0 sudo[413206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:13:58 compute-0 sudo[413206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:13:58 compute-0 sudo[413206]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:58 compute-0 nova_compute[253538]: 2025-11-25 09:13:58.305 253542 DEBUG nova.compute.manager [req-0e83b409-0f3d-4c66-ab89-5c6894dc6607 req-a1f9a9fe-f856-4482-9422-572c6e51aa6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-changed-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:13:58 compute-0 nova_compute[253538]: 2025-11-25 09:13:58.305 253542 DEBUG nova.compute.manager [req-0e83b409-0f3d-4c66-ab89-5c6894dc6607 req-a1f9a9fe-f856-4482-9422-572c6e51aa6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Refreshing instance network info cache due to event network-changed-bc0d7fbf-1c1d-43bc-884b-d89f14be4712. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:13:58 compute-0 nova_compute[253538]: 2025-11-25 09:13:58.306 253542 DEBUG oslo_concurrency.lockutils [req-0e83b409-0f3d-4c66-ab89-5c6894dc6607 req-a1f9a9fe-f856-4482-9422-572c6e51aa6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:13:58 compute-0 nova_compute[253538]: 2025-11-25 09:13:58.306 253542 DEBUG oslo_concurrency.lockutils [req-0e83b409-0f3d-4c66-ab89-5c6894dc6607 req-a1f9a9fe-f856-4482-9422-572c6e51aa6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:13:58 compute-0 nova_compute[253538]: 2025-11-25 09:13:58.307 253542 DEBUG nova.network.neutron [req-0e83b409-0f3d-4c66-ab89-5c6894dc6607 req-a1f9a9fe-f856-4482-9422-572c6e51aa6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Refreshing network info cache for port bc0d7fbf-1c1d-43bc-884b-d89f14be4712 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:13:58 compute-0 sudo[413231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:13:58 compute-0 sudo[413231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:13:58 compute-0 sudo[413231]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:13:58 compute-0 sudo[413256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:13:58 compute-0 sudo[413256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:13:58 compute-0 sudo[413256]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2807: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Nov 25 09:13:58 compute-0 sudo[413281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:13:58 compute-0 sudo[413281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:13:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:13:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:13:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:13:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:13:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:13:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:13:58 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:13:58 compute-0 nova_compute[253538]: 2025-11-25 09:13:58.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:13:58 compute-0 nova_compute[253538]: 2025-11-25 09:13:58.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:13:58 compute-0 nova_compute[253538]: 2025-11-25 09:13:58.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:13:58 compute-0 podman[413345]: 2025-11-25 09:13:58.77845888 +0000 UTC m=+0.039836943 container create bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:13:58 compute-0 systemd[1]: Started libpod-conmon-bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d.scope.
Nov 25 09:13:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:13:58 compute-0 podman[413345]: 2025-11-25 09:13:58.762439935 +0000 UTC m=+0.023818028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:13:58 compute-0 podman[413345]: 2025-11-25 09:13:58.86784191 +0000 UTC m=+0.129220003 container init bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 09:13:58 compute-0 podman[413345]: 2025-11-25 09:13:58.877839002 +0000 UTC m=+0.139217075 container start bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:13:58 compute-0 podman[413345]: 2025-11-25 09:13:58.881626275 +0000 UTC m=+0.143004338 container attach bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 09:13:58 compute-0 funny_cray[413361]: 167 167
Nov 25 09:13:58 compute-0 systemd[1]: libpod-bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d.scope: Deactivated successfully.
Nov 25 09:13:58 compute-0 conmon[413361]: conmon bf3dd2a5ecdd0a960df0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d.scope/container/memory.events
Nov 25 09:13:58 compute-0 podman[413345]: 2025-11-25 09:13:58.884818452 +0000 UTC m=+0.146196545 container died bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:13:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-57f8277c8d2e55bea2df96da31bd48e1b1bbbaa3f8a2699c52030ab04ccf3fbf-merged.mount: Deactivated successfully.
Nov 25 09:13:58 compute-0 podman[413345]: 2025-11-25 09:13:58.92742309 +0000 UTC m=+0.188801153 container remove bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:13:58 compute-0 systemd[1]: libpod-conmon-bf3dd2a5ecdd0a960df01a6ab40f620c78de063813e8b60433eb72fa8c87c75d.scope: Deactivated successfully.
Nov 25 09:13:59 compute-0 podman[413385]: 2025-11-25 09:13:59.125184866 +0000 UTC m=+0.068786101 container create 1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 09:13:59 compute-0 systemd[1]: Started libpod-conmon-1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6.scope.
Nov 25 09:13:59 compute-0 podman[413385]: 2025-11-25 09:13:59.102009656 +0000 UTC m=+0.045610931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:13:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:13:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bba751563e3b0d4ca3fcede2fc4893e70ff4bba3d2d05211dcdef3ef882f1e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:13:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bba751563e3b0d4ca3fcede2fc4893e70ff4bba3d2d05211dcdef3ef882f1e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:13:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bba751563e3b0d4ca3fcede2fc4893e70ff4bba3d2d05211dcdef3ef882f1e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:13:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bba751563e3b0d4ca3fcede2fc4893e70ff4bba3d2d05211dcdef3ef882f1e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:13:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bba751563e3b0d4ca3fcede2fc4893e70ff4bba3d2d05211dcdef3ef882f1e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:13:59 compute-0 podman[413385]: 2025-11-25 09:13:59.237131639 +0000 UTC m=+0.180732924 container init 1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermat, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:13:59 compute-0 podman[413385]: 2025-11-25 09:13:59.245015084 +0000 UTC m=+0.188616319 container start 1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:13:59 compute-0 podman[413385]: 2025-11-25 09:13:59.248593081 +0000 UTC m=+0.192194336 container attach 1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermat, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 09:13:59 compute-0 ceph-mon[75015]: pgmap v2807: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Nov 25 09:13:59 compute-0 nova_compute[253538]: 2025-11-25 09:13:59.525 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:00 compute-0 nova_compute[253538]: 2025-11-25 09:14:00.304 253542 DEBUG nova.network.neutron [req-0e83b409-0f3d-4c66-ab89-5c6894dc6607 req-a1f9a9fe-f856-4482-9422-572c6e51aa6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updated VIF entry in instance network info cache for port bc0d7fbf-1c1d-43bc-884b-d89f14be4712. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:14:00 compute-0 nova_compute[253538]: 2025-11-25 09:14:00.306 253542 DEBUG nova.network.neutron [req-0e83b409-0f3d-4c66-ab89-5c6894dc6607 req-a1f9a9fe-f856-4482-9422-572c6e51aa6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updating instance_info_cache with network_info: [{"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:14:00 compute-0 flamboyant_fermat[413402]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:14:00 compute-0 flamboyant_fermat[413402]: --> relative data size: 1.0
Nov 25 09:14:00 compute-0 flamboyant_fermat[413402]: --> All data devices are unavailable
Nov 25 09:14:00 compute-0 nova_compute[253538]: 2025-11-25 09:14:00.325 253542 DEBUG oslo_concurrency.lockutils [req-0e83b409-0f3d-4c66-ab89-5c6894dc6607 req-a1f9a9fe-f856-4482-9422-572c6e51aa6d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:14:00 compute-0 systemd[1]: libpod-1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6.scope: Deactivated successfully.
Nov 25 09:14:00 compute-0 systemd[1]: libpod-1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6.scope: Consumed 1.022s CPU time.
Nov 25 09:14:00 compute-0 podman[413385]: 2025-11-25 09:14:00.353757893 +0000 UTC m=+1.297359128 container died 1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 09:14:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2808: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 981 KiB/s wr, 96 op/s
Nov 25 09:14:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bba751563e3b0d4ca3fcede2fc4893e70ff4bba3d2d05211dcdef3ef882f1e2-merged.mount: Deactivated successfully.
Nov 25 09:14:00 compute-0 podman[413385]: 2025-11-25 09:14:00.421586508 +0000 UTC m=+1.365187733 container remove 1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermat, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:14:00 compute-0 systemd[1]: libpod-conmon-1b0f107261c3873fceffcf70ee55357b4159539398614d28e04f8699acdd08f6.scope: Deactivated successfully.
Nov 25 09:14:00 compute-0 sudo[413281]: pam_unix(sudo:session): session closed for user root
Nov 25 09:14:00 compute-0 sudo[413443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:14:00 compute-0 sudo[413443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:14:00 compute-0 sudo[413443]: pam_unix(sudo:session): session closed for user root
Nov 25 09:14:00 compute-0 nova_compute[253538]: 2025-11-25 09:14:00.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:14:00 compute-0 nova_compute[253538]: 2025-11-25 09:14:00.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:14:00 compute-0 sudo[413468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:14:00 compute-0 sudo[413468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:14:00 compute-0 sudo[413468]: pam_unix(sudo:session): session closed for user root
Nov 25 09:14:00 compute-0 sudo[413493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:14:00 compute-0 sudo[413493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:14:00 compute-0 sudo[413493]: pam_unix(sudo:session): session closed for user root
Nov 25 09:14:00 compute-0 sudo[413518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:14:00 compute-0 sudo[413518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:14:01 compute-0 podman[413584]: 2025-11-25 09:14:01.003447794 +0000 UTC m=+0.023825988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:14:01 compute-0 podman[413584]: 2025-11-25 09:14:01.06654836 +0000 UTC m=+0.086926514 container create d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:14:01 compute-0 systemd[1]: Started libpod-conmon-d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962.scope.
Nov 25 09:14:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:14:01 compute-0 podman[413584]: 2025-11-25 09:14:01.153019581 +0000 UTC m=+0.173397745 container init d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 09:14:01 compute-0 podman[413584]: 2025-11-25 09:14:01.163864976 +0000 UTC m=+0.184243140 container start d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 09:14:01 compute-0 confident_hopper[413598]: 167 167
Nov 25 09:14:01 compute-0 systemd[1]: libpod-d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962.scope: Deactivated successfully.
Nov 25 09:14:01 compute-0 podman[413584]: 2025-11-25 09:14:01.18208338 +0000 UTC m=+0.202461574 container attach d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 09:14:01 compute-0 podman[413584]: 2025-11-25 09:14:01.183599842 +0000 UTC m=+0.203978006 container died d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 09:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e7eda25c3b017e90ce1aaf4aab8573785d50ab1f91e39c607f309096d328886-merged.mount: Deactivated successfully.
Nov 25 09:14:01 compute-0 podman[413584]: 2025-11-25 09:14:01.35568667 +0000 UTC m=+0.376064824 container remove d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 09:14:01 compute-0 systemd[1]: libpod-conmon-d04b1f2371c8f9a8176629a8941193a27508d614f1a14d095a3caade322f8962.scope: Deactivated successfully.
Nov 25 09:14:01 compute-0 ceph-mon[75015]: pgmap v2808: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 981 KiB/s wr, 96 op/s
Nov 25 09:14:01 compute-0 podman[413625]: 2025-11-25 09:14:01.537109092 +0000 UTC m=+0.029226996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:14:01 compute-0 podman[413625]: 2025-11-25 09:14:01.628447724 +0000 UTC m=+0.120565568 container create e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 09:14:01 compute-0 systemd[1]: Started libpod-conmon-e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c.scope.
Nov 25 09:14:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881f1c42da3ce90144cd3878c7f7a0d12d93fdf94203ca216a183c591c2750da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881f1c42da3ce90144cd3878c7f7a0d12d93fdf94203ca216a183c591c2750da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881f1c42da3ce90144cd3878c7f7a0d12d93fdf94203ca216a183c591c2750da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881f1c42da3ce90144cd3878c7f7a0d12d93fdf94203ca216a183c591c2750da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:14:01 compute-0 podman[413625]: 2025-11-25 09:14:01.766269061 +0000 UTC m=+0.258386935 container init e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:14:01 compute-0 podman[413625]: 2025-11-25 09:14:01.779213343 +0000 UTC m=+0.271331167 container start e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:14:01 compute-0 podman[413625]: 2025-11-25 09:14:01.788615739 +0000 UTC m=+0.280733633 container attach e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 09:14:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2809: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 677 KiB/s wr, 74 op/s
Nov 25 09:14:02 compute-0 nova_compute[253538]: 2025-11-25 09:14:02.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]: {
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:     "0": [
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:         {
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "devices": [
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "/dev/loop3"
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             ],
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "lv_name": "ceph_lv0",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "lv_size": "21470642176",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "name": "ceph_lv0",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "tags": {
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.cluster_name": "ceph",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.crush_device_class": "",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.encrypted": "0",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.osd_id": "0",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.type": "block",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.vdo": "0"
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             },
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "type": "block",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "vg_name": "ceph_vg0"
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:         }
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:     ],
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:     "1": [
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:         {
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "devices": [
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "/dev/loop4"
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             ],
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "lv_name": "ceph_lv1",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "lv_size": "21470642176",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "name": "ceph_lv1",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "tags": {
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.cluster_name": "ceph",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.crush_device_class": "",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.encrypted": "0",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.osd_id": "1",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.type": "block",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.vdo": "0"
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             },
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "type": "block",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "vg_name": "ceph_vg1"
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:         }
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:     ],
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:     "2": [
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:         {
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "devices": [
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "/dev/loop5"
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             ],
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "lv_name": "ceph_lv2",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "lv_size": "21470642176",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:14:02 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "name": "ceph_lv2",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "tags": {
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.cluster_name": "ceph",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.crush_device_class": "",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.encrypted": "0",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.osd_id": "2",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.type": "block",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:                 "ceph.vdo": "0"
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             },
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "type": "block",
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:             "vg_name": "ceph_vg2"
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:         }
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]:     ]
Nov 25 09:14:02 compute-0 affectionate_lumiere[413642]: }
Nov 25 09:14:02 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:14:02 compute-0 systemd[1]: libpod-e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c.scope: Deactivated successfully.
Nov 25 09:14:02 compute-0 podman[413625]: 2025-11-25 09:14:02.606715988 +0000 UTC m=+1.098833792 container died e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:14:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-881f1c42da3ce90144cd3878c7f7a0d12d93fdf94203ca216a183c591c2750da-merged.mount: Deactivated successfully.
Nov 25 09:14:02 compute-0 podman[413625]: 2025-11-25 09:14:02.681064549 +0000 UTC m=+1.173182353 container remove e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 09:14:02 compute-0 systemd[1]: libpod-conmon-e769b7ccd85d22682b181c46edd19f42d490ff174bc0457df6aa65d6a329717c.scope: Deactivated successfully.
Nov 25 09:14:02 compute-0 sudo[413518]: pam_unix(sudo:session): session closed for user root
Nov 25 09:14:02 compute-0 sudo[413664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:14:02 compute-0 sudo[413664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:14:02 compute-0 sudo[413664]: pam_unix(sudo:session): session closed for user root
Nov 25 09:14:02 compute-0 sudo[413689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:14:02 compute-0 sudo[413689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:14:02 compute-0 sudo[413689]: pam_unix(sudo:session): session closed for user root
Nov 25 09:14:02 compute-0 sudo[413714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:14:02 compute-0 sudo[413714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:14:02 compute-0 sudo[413714]: pam_unix(sudo:session): session closed for user root
Nov 25 09:14:02 compute-0 sudo[413739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:14:02 compute-0 sudo[413739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:14:03 compute-0 nova_compute[253538]: 2025-11-25 09:14:03.012 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:03 compute-0 podman[413804]: 2025-11-25 09:14:03.282869588 +0000 UTC m=+0.054567164 container create adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lehmann, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 09:14:03 compute-0 systemd[1]: Started libpod-conmon-adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464.scope.
Nov 25 09:14:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:14:03 compute-0 podman[413804]: 2025-11-25 09:14:03.253664935 +0000 UTC m=+0.025362521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:14:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:14:03 compute-0 podman[413804]: 2025-11-25 09:14:03.418750142 +0000 UTC m=+0.190447818 container init adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lehmann, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 09:14:03 compute-0 podman[413804]: 2025-11-25 09:14:03.43078705 +0000 UTC m=+0.202484636 container start adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:14:03 compute-0 blissful_lehmann[413820]: 167 167
Nov 25 09:14:03 compute-0 systemd[1]: libpod-adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464.scope: Deactivated successfully.
Nov 25 09:14:03 compute-0 podman[413804]: 2025-11-25 09:14:03.445609052 +0000 UTC m=+0.217306668 container attach adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 09:14:03 compute-0 podman[413804]: 2025-11-25 09:14:03.446085315 +0000 UTC m=+0.217782911 container died adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lehmann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:14:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6813109c5a7abe3e95fec5cfc1042fd70bb7a62d752e9ed5b0b49ae3707f036-merged.mount: Deactivated successfully.
Nov 25 09:14:03 compute-0 podman[413804]: 2025-11-25 09:14:03.511077211 +0000 UTC m=+0.282774787 container remove adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:14:03 compute-0 ceph-mon[75015]: pgmap v2809: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 677 KiB/s wr, 74 op/s
Nov 25 09:14:03 compute-0 systemd[1]: libpod-conmon-adb6d30d8a4be183cc04c644aa59b1aedb5a9f57a0e20e44ae0dbc0b0fd5a464.scope: Deactivated successfully.
Nov 25 09:14:03 compute-0 nova_compute[253538]: 2025-11-25 09:14:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:14:03 compute-0 podman[413842]: 2025-11-25 09:14:03.75342115 +0000 UTC m=+0.084824297 container create b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:14:03 compute-0 podman[413842]: 2025-11-25 09:14:03.699177486 +0000 UTC m=+0.030580683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:14:03 compute-0 systemd[1]: Started libpod-conmon-b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570.scope.
Nov 25 09:14:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae3c80e45b88e4f37b7facc1f9a758ec98451035f141aa6d0823629fae6a915/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae3c80e45b88e4f37b7facc1f9a758ec98451035f141aa6d0823629fae6a915/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae3c80e45b88e4f37b7facc1f9a758ec98451035f141aa6d0823629fae6a915/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae3c80e45b88e4f37b7facc1f9a758ec98451035f141aa6d0823629fae6a915/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:14:03 compute-0 podman[413842]: 2025-11-25 09:14:03.890124216 +0000 UTC m=+0.221527403 container init b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 09:14:03 compute-0 podman[413842]: 2025-11-25 09:14:03.897934649 +0000 UTC m=+0.229337796 container start b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:14:03 compute-0 podman[413842]: 2025-11-25 09:14:03.946997742 +0000 UTC m=+0.278400889 container attach b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2810: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:14:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:14:04 compute-0 nova_compute[253538]: 2025-11-25 09:14:04.528 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:04 compute-0 confident_banach[413858]: {
Nov 25 09:14:04 compute-0 confident_banach[413858]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:14:04 compute-0 confident_banach[413858]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:14:04 compute-0 confident_banach[413858]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:14:04 compute-0 confident_banach[413858]:         "osd_id": 1,
Nov 25 09:14:04 compute-0 confident_banach[413858]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:14:04 compute-0 confident_banach[413858]:         "type": "bluestore"
Nov 25 09:14:04 compute-0 confident_banach[413858]:     },
Nov 25 09:14:04 compute-0 confident_banach[413858]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:14:04 compute-0 confident_banach[413858]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:14:04 compute-0 confident_banach[413858]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:14:04 compute-0 confident_banach[413858]:         "osd_id": 2,
Nov 25 09:14:04 compute-0 confident_banach[413858]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:14:04 compute-0 confident_banach[413858]:         "type": "bluestore"
Nov 25 09:14:04 compute-0 confident_banach[413858]:     },
Nov 25 09:14:04 compute-0 confident_banach[413858]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:14:04 compute-0 confident_banach[413858]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:14:04 compute-0 confident_banach[413858]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:14:04 compute-0 confident_banach[413858]:         "osd_id": 0,
Nov 25 09:14:04 compute-0 confident_banach[413858]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:14:04 compute-0 confident_banach[413858]:         "type": "bluestore"
Nov 25 09:14:04 compute-0 confident_banach[413858]:     }
Nov 25 09:14:04 compute-0 confident_banach[413858]: }
Nov 25 09:14:04 compute-0 systemd[1]: libpod-b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570.scope: Deactivated successfully.
Nov 25 09:14:04 compute-0 podman[413842]: 2025-11-25 09:14:04.857967946 +0000 UTC m=+1.189371053 container died b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 09:14:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ae3c80e45b88e4f37b7facc1f9a758ec98451035f141aa6d0823629fae6a915-merged.mount: Deactivated successfully.
Nov 25 09:14:05 compute-0 podman[413842]: 2025-11-25 09:14:05.013873994 +0000 UTC m=+1.345277111 container remove b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 25 09:14:05 compute-0 systemd[1]: libpod-conmon-b2c646e61f3ea34972cf3615008ec48150466aa823f702ed6de2b1e01546e570.scope: Deactivated successfully.
Nov 25 09:14:05 compute-0 sudo[413739]: pam_unix(sudo:session): session closed for user root
Nov 25 09:14:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:14:05 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:14:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:14:05 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:14:05 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 77ad1e66-7fc0-4824-abe8-27f6d78b5ee4 does not exist
Nov 25 09:14:05 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 77576eab-69ae-45d8-8c77-ffc564017ff8 does not exist
Nov 25 09:14:05 compute-0 sudo[413906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:14:05 compute-0 sudo[413906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:14:05 compute-0 sudo[413906]: pam_unix(sudo:session): session closed for user root
Nov 25 09:14:05 compute-0 sudo[413931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:14:05 compute-0 sudo[413931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:14:05 compute-0 sudo[413931]: pam_unix(sudo:session): session closed for user root
Nov 25 09:14:05 compute-0 nova_compute[253538]: 2025-11-25 09:14:05.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:14:05 compute-0 ceph-mon[75015]: pgmap v2810: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:14:05 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:14:05 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:14:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2811: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:14:07 compute-0 ovn_controller[152859]: 2025-11-25T09:14:07Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:11:da 10.100.0.12
Nov 25 09:14:07 compute-0 ovn_controller[152859]: 2025-11-25T09:14:07Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:11:da 10.100.0.12
Nov 25 09:14:07 compute-0 ceph-mon[75015]: pgmap v2811: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:14:08 compute-0 nova_compute[253538]: 2025-11-25 09:14:08.017 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:14:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2812: 321 pgs: 321 active+clean; 150 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 102 op/s
Nov 25 09:14:08 compute-0 nova_compute[253538]: 2025-11-25 09:14:08.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:14:08 compute-0 nova_compute[253538]: 2025-11-25 09:14:08.589 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:08 compute-0 nova_compute[253538]: 2025-11-25 09:14:08.590 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:08 compute-0 nova_compute[253538]: 2025-11-25 09:14:08.590 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:08 compute-0 nova_compute[253538]: 2025-11-25 09:14:08.590 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:14:08 compute-0 nova_compute[253538]: 2025-11-25 09:14:08.591 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:14:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:14:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2925464592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.014 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.078 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.078 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.277 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.278 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3456MB free_disk=59.94898986816406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.278 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.278 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.359 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.359 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.359 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.420 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.532 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:09 compute-0 ceph-mon[75015]: pgmap v2812: 321 pgs: 321 active+clean; 150 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 102 op/s
Nov 25 09:14:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2925464592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:14:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:14:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661340366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.901 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.907 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.925 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.985 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:14:09 compute-0 nova_compute[253538]: 2025-11-25 09:14:09.985 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2813: 321 pgs: 321 active+clean; 160 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 684 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Nov 25 09:14:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/661340366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:14:10 compute-0 podman[414002]: 2025-11-25 09:14:10.812065433 +0000 UTC m=+0.062158362 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 09:14:10 compute-0 podman[414001]: 2025-11-25 09:14:10.81419585 +0000 UTC m=+0.064607338 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd)
Nov 25 09:14:11 compute-0 ceph-mon[75015]: pgmap v2813: 321 pgs: 321 active+clean; 160 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 684 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Nov 25 09:14:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2814: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:14:13 compute-0 nova_compute[253538]: 2025-11-25 09:14:13.020 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:14:13 compute-0 ceph-mon[75015]: pgmap v2814: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:14:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2815: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:14:14 compute-0 nova_compute[253538]: 2025-11-25 09:14:14.535 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.757910) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062054757954, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 712, "num_deletes": 251, "total_data_size": 884990, "memory_usage": 899176, "flush_reason": "Manual Compaction"}
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062054767843, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 876885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58362, "largest_seqno": 59073, "table_properties": {"data_size": 873133, "index_size": 1595, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8476, "raw_average_key_size": 19, "raw_value_size": 865603, "raw_average_value_size": 1989, "num_data_blocks": 71, "num_entries": 435, "num_filter_entries": 435, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764061999, "oldest_key_time": 1764061999, "file_creation_time": 1764062054, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 10006 microseconds, and 5437 cpu microseconds.
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.767909) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 876885 bytes OK
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.767936) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.770249) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.770270) EVENT_LOG_v1 {"time_micros": 1764062054770263, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.770294) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 881291, prev total WAL file size 881291, number of live WAL files 2.
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.771027) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(856KB)], [137(8983KB)]
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062054771075, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 10076000, "oldest_snapshot_seqno": -1}
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7689 keys, 8350119 bytes, temperature: kUnknown
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062054829783, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 8350119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8302978, "index_size": 26804, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19269, "raw_key_size": 202292, "raw_average_key_size": 26, "raw_value_size": 8169847, "raw_average_value_size": 1062, "num_data_blocks": 1035, "num_entries": 7689, "num_filter_entries": 7689, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062054, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.830057) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 8350119 bytes
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.831666) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.3 rd, 142.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.8 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(21.0) write-amplify(9.5) OK, records in: 8203, records dropped: 514 output_compression: NoCompression
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.831686) EVENT_LOG_v1 {"time_micros": 1764062054831677, "job": 84, "event": "compaction_finished", "compaction_time_micros": 58809, "compaction_time_cpu_micros": 23845, "output_level": 6, "num_output_files": 1, "total_output_size": 8350119, "num_input_records": 8203, "num_output_records": 7689, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062054832140, "job": 84, "event": "table_file_deletion", "file_number": 139}
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062054834290, "job": 84, "event": "table_file_deletion", "file_number": 137}
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.770931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.834440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.834448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.834456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.834459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:14:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:14:14.834462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:14:15 compute-0 ceph-mon[75015]: pgmap v2815: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:14:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2816: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:14:16 compute-0 nova_compute[253538]: 2025-11-25 09:14:16.491 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "49fdf548-77e1-47b2-9118-f42acc3a4052" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:16 compute-0 nova_compute[253538]: 2025-11-25 09:14:16.492 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:16 compute-0 nova_compute[253538]: 2025-11-25 09:14:16.520 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:14:16 compute-0 nova_compute[253538]: 2025-11-25 09:14:16.731 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:16 compute-0 nova_compute[253538]: 2025-11-25 09:14:16.732 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:16 compute-0 nova_compute[253538]: 2025-11-25 09:14:16.740 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:14:16 compute-0 nova_compute[253538]: 2025-11-25 09:14:16.740 253542 INFO nova.compute.claims [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:14:16 compute-0 ceph-mon[75015]: pgmap v2816: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:14:16 compute-0 podman[414037]: 2025-11-25 09:14:16.912193299 +0000 UTC m=+0.154985794 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:14:16 compute-0 nova_compute[253538]: 2025-11-25 09:14:16.917 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:14:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:14:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1522660723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.358 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.365 253542 DEBUG nova.compute.provider_tree [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.387 253542 DEBUG nova.scheduler.client.report [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.413 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.415 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.503 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.504 253542 DEBUG nova.network.neutron [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.545 253542 INFO nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.600 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.754 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.755 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.756 253542 INFO nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Creating image(s)
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.781 253542 DEBUG nova.storage.rbd_utils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 49fdf548-77e1-47b2-9118-f42acc3a4052_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.804 253542 DEBUG nova.storage.rbd_utils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 49fdf548-77e1-47b2-9118-f42acc3a4052_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:14:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1522660723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.825 253542 DEBUG nova.storage.rbd_utils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 49fdf548-77e1-47b2-9118-f42acc3a4052_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.830 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.865 253542 DEBUG nova.policy [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.905 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.906 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.907 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.907 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.928 253542 DEBUG nova.storage.rbd_utils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 49fdf548-77e1-47b2-9118-f42acc3a4052_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:14:17 compute-0 nova_compute[253538]: 2025-11-25 09:14:17.932 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 49fdf548-77e1-47b2-9118-f42acc3a4052_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:14:18 compute-0 nova_compute[253538]: 2025-11-25 09:14:18.022 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:18 compute-0 nova_compute[253538]: 2025-11-25 09:14:18.265 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 49fdf548-77e1-47b2-9118-f42acc3a4052_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:14:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:14:18 compute-0 nova_compute[253538]: 2025-11-25 09:14:18.352 253542 DEBUG nova.storage.rbd_utils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 49fdf548-77e1-47b2-9118-f42acc3a4052_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:14:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2817: 321 pgs: 321 active+clean; 181 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.6 MiB/s wr, 66 op/s
Nov 25 09:14:18 compute-0 nova_compute[253538]: 2025-11-25 09:14:18.484 253542 DEBUG nova.objects.instance [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 49fdf548-77e1-47b2-9118-f42acc3a4052 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:14:18 compute-0 nova_compute[253538]: 2025-11-25 09:14:18.630 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:14:18 compute-0 nova_compute[253538]: 2025-11-25 09:14:18.631 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Ensure instance console log exists: /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:14:18 compute-0 nova_compute[253538]: 2025-11-25 09:14:18.631 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:18 compute-0 nova_compute[253538]: 2025-11-25 09:14:18.632 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:18 compute-0 nova_compute[253538]: 2025-11-25 09:14:18.632 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:18 compute-0 ceph-mon[75015]: pgmap v2817: 321 pgs: 321 active+clean; 181 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.6 MiB/s wr, 66 op/s
Nov 25 09:14:19 compute-0 nova_compute[253538]: 2025-11-25 09:14:19.538 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2818: 321 pgs: 321 active+clean; 203 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 1.9 MiB/s wr, 44 op/s
Nov 25 09:14:21 compute-0 sshd-session[414251]: Connection closed by authenticating user root 171.244.51.45 port 52194 [preauth]
Nov 25 09:14:21 compute-0 ceph-mon[75015]: pgmap v2818: 321 pgs: 321 active+clean; 203 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 1.9 MiB/s wr, 44 op/s
Nov 25 09:14:21 compute-0 nova_compute[253538]: 2025-11-25 09:14:21.780 253542 DEBUG nova.network.neutron [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Successfully created port: 8567d2b8-5fd3-45d6-9d10-d88839de3d8a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:14:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2819: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 25 09:14:22 compute-0 nova_compute[253538]: 2025-11-25 09:14:22.939 253542 DEBUG nova.network.neutron [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Successfully updated port: 8567d2b8-5fd3-45d6-9d10-d88839de3d8a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:14:23 compute-0 nova_compute[253538]: 2025-11-25 09:14:23.021 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:14:23 compute-0 nova_compute[253538]: 2025-11-25 09:14:23.022 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:14:23 compute-0 nova_compute[253538]: 2025-11-25 09:14:23.022 253542 DEBUG nova.network.neutron [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:14:23 compute-0 nova_compute[253538]: 2025-11-25 09:14:23.026 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:23 compute-0 nova_compute[253538]: 2025-11-25 09:14:23.032 253542 DEBUG nova.compute.manager [req-7fc5e1f1-2e5b-4fe5-aa65-5a8f530ef661 req-510f3780-9c10-4aa9-afeb-084c2d88730f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-changed-8567d2b8-5fd3-45d6-9d10-d88839de3d8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:14:23 compute-0 nova_compute[253538]: 2025-11-25 09:14:23.033 253542 DEBUG nova.compute.manager [req-7fc5e1f1-2e5b-4fe5-aa65-5a8f530ef661 req-510f3780-9c10-4aa9-afeb-084c2d88730f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Refreshing instance network info cache due to event network-changed-8567d2b8-5fd3-45d6-9d10-d88839de3d8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:14:23 compute-0 nova_compute[253538]: 2025-11-25 09:14:23.033 253542 DEBUG oslo_concurrency.lockutils [req-7fc5e1f1-2e5b-4fe5-aa65-5a8f530ef661 req-510f3780-9c10-4aa9-afeb-084c2d88730f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:14:23 compute-0 nova_compute[253538]: 2025-11-25 09:14:23.165 253542 DEBUG nova.network.neutron [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:14:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:14:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:14:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:14:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:14:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:14:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:14:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:14:23 compute-0 ceph-mon[75015]: pgmap v2819: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 25 09:14:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2820: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:14:24 compute-0 nova_compute[253538]: 2025-11-25 09:14:24.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:24 compute-0 sshd-session[414253]: Invalid user ts from 45.202.211.6 port 46772
Nov 25 09:14:24 compute-0 sshd-session[414253]: Received disconnect from 45.202.211.6 port 46772:11: Bye Bye [preauth]
Nov 25 09:14:24 compute-0 sshd-session[414253]: Disconnected from invalid user ts 45.202.211.6 port 46772 [preauth]
Nov 25 09:14:25 compute-0 ceph-mon[75015]: pgmap v2820: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.799 253542 DEBUG nova.network.neutron [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Updating instance_info_cache with network_info: [{"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.845 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.846 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Instance network_info: |[{"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.846 253542 DEBUG oslo_concurrency.lockutils [req-7fc5e1f1-2e5b-4fe5-aa65-5a8f530ef661 req-510f3780-9c10-4aa9-afeb-084c2d88730f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.846 253542 DEBUG nova.network.neutron [req-7fc5e1f1-2e5b-4fe5-aa65-5a8f530ef661 req-510f3780-9c10-4aa9-afeb-084c2d88730f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Refreshing network info cache for port 8567d2b8-5fd3-45d6-9d10-d88839de3d8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.851 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Start _get_guest_xml network_info=[{"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.856 253542 WARNING nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.863 253542 DEBUG nova.virt.libvirt.host [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.864 253542 DEBUG nova.virt.libvirt.host [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.868 253542 DEBUG nova.virt.libvirt.host [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.869 253542 DEBUG nova.virt.libvirt.host [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.869 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.870 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.870 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.870 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.871 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.871 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.871 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.872 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.872 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.872 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.873 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.873 253542 DEBUG nova.virt.hardware [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:14:25 compute-0 nova_compute[253538]: 2025-11-25 09:14:25.877 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:14:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:14:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2528925284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.369 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:14:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2821: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.394 253542 DEBUG nova.storage.rbd_utils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 49fdf548-77e1-47b2-9118-f42acc3a4052_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.398 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:14:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2528925284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:14:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:14:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2454110497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.853 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.856 253542 DEBUG nova.virt.libvirt.vif [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1132307495',display_name='tempest-TestGettingAddress-server-1132307495',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1132307495',id=149,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLjQVPPbaLK0o8SGnTKKoAiXxzMCTbiXlcClrAsKocG23tIVNKIl+uyDryAiBRRNgOXs1VZtoFvH020JuANRMKekifvu6hXrGsC+ZSxX2adgnjX2NrqkFauwj51UvSALOg==',key_name='tempest-TestGettingAddress-1641139979',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-4n1kgn5k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:14:17Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=49fdf548-77e1-47b2-9118-f42acc3a4052,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.856 253542 DEBUG nova.network.os_vif_util [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.857 253542 DEBUG nova.network.os_vif_util [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:e3:44,bridge_name='br-int',has_traffic_filtering=True,id=8567d2b8-5fd3-45d6-9d10-d88839de3d8a,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8567d2b8-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.858 253542 DEBUG nova.objects.instance [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49fdf548-77e1-47b2-9118-f42acc3a4052 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.871 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:14:26 compute-0 nova_compute[253538]:   <uuid>49fdf548-77e1-47b2-9118-f42acc3a4052</uuid>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   <name>instance-00000095</name>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <nova:name>tempest-TestGettingAddress-server-1132307495</nova:name>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:14:25</nova:creationTime>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:14:26 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:14:26 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:14:26 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:14:26 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:14:26 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:14:26 compute-0 nova_compute[253538]:         <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 09:14:26 compute-0 nova_compute[253538]:         <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:14:26 compute-0 nova_compute[253538]:         <nova:port uuid="8567d2b8-5fd3-45d6-9d10-d88839de3d8a">
Nov 25 09:14:26 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feef:e344" ipVersion="6"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feef:e344" ipVersion="6"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <system>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <entry name="serial">49fdf548-77e1-47b2-9118-f42acc3a4052</entry>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <entry name="uuid">49fdf548-77e1-47b2-9118-f42acc3a4052</entry>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     </system>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   <os>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   </os>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   <features>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   </features>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/49fdf548-77e1-47b2-9118-f42acc3a4052_disk">
Nov 25 09:14:26 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       </source>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:14:26 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/49fdf548-77e1-47b2-9118-f42acc3a4052_disk.config">
Nov 25 09:14:26 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       </source>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:14:26 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:ef:e3:44"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <target dev="tap8567d2b8-5f"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052/console.log" append="off"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <video>
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     </video>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:14:26 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:14:26 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:14:26 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:14:26 compute-0 nova_compute[253538]: </domain>
Nov 25 09:14:26 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.872 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Preparing to wait for external event network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.873 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.874 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.874 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.876 253542 DEBUG nova.virt.libvirt.vif [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1132307495',display_name='tempest-TestGettingAddress-server-1132307495',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1132307495',id=149,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLjQVPPbaLK0o8SGnTKKoAiXxzMCTbiXlcClrAsKocG23tIVNKIl+uyDryAiBRRNgOXs1VZtoFvH020JuANRMKekifvu6hXrGsC+ZSxX2adgnjX2NrqkFauwj51UvSALOg==',key_name='tempest-TestGettingAddress-1641139979',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-4n1kgn5k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:14:17Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=49fdf548-77e1-47b2-9118-f42acc3a4052,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.876 253542 DEBUG nova.network.os_vif_util [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.878 253542 DEBUG nova.network.os_vif_util [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:e3:44,bridge_name='br-int',has_traffic_filtering=True,id=8567d2b8-5fd3-45d6-9d10-d88839de3d8a,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8567d2b8-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.879 253542 DEBUG os_vif [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:e3:44,bridge_name='br-int',has_traffic_filtering=True,id=8567d2b8-5fd3-45d6-9d10-d88839de3d8a,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8567d2b8-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.880 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.881 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.881 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.885 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.885 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8567d2b8-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.886 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8567d2b8-5f, col_values=(('external_ids', {'iface-id': '8567d2b8-5fd3-45d6-9d10-d88839de3d8a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:e3:44', 'vm-uuid': '49fdf548-77e1-47b2-9118-f42acc3a4052'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.889 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:26 compute-0 NetworkManager[48915]: <info>  [1764062066.8901] manager: (tap8567d2b8-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/654)
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.892 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.896 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.897 253542 INFO os_vif [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:e3:44,bridge_name='br-int',has_traffic_filtering=True,id=8567d2b8-5fd3-45d6-9d10-d88839de3d8a,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8567d2b8-5f')
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.938 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.938 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.939 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:ef:e3:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.939 253542 INFO nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Using config drive
Nov 25 09:14:26 compute-0 nova_compute[253538]: 2025-11-25 09:14:26.960 253542 DEBUG nova.storage.rbd_utils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 49fdf548-77e1-47b2-9118-f42acc3a4052_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:14:27 compute-0 nova_compute[253538]: 2025-11-25 09:14:27.263 253542 INFO nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Creating config drive at /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052/disk.config
Nov 25 09:14:27 compute-0 nova_compute[253538]: 2025-11-25 09:14:27.272 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5t44b9p1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:14:27 compute-0 nova_compute[253538]: 2025-11-25 09:14:27.419 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5t44b9p1" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:14:27 compute-0 nova_compute[253538]: 2025-11-25 09:14:27.452 253542 DEBUG nova.storage.rbd_utils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 49fdf548-77e1-47b2-9118-f42acc3a4052_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:14:27 compute-0 nova_compute[253538]: 2025-11-25 09:14:27.457 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052/disk.config 49fdf548-77e1-47b2-9118-f42acc3a4052_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:14:27 compute-0 ceph-mon[75015]: pgmap v2821: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:14:27 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2454110497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:14:27 compute-0 nova_compute[253538]: 2025-11-25 09:14:27.642 253542 DEBUG oslo_concurrency.processutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052/disk.config 49fdf548-77e1-47b2-9118-f42acc3a4052_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:14:27 compute-0 nova_compute[253538]: 2025-11-25 09:14:27.643 253542 INFO nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Deleting local config drive /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052/disk.config because it was imported into RBD.
Nov 25 09:14:27 compute-0 kernel: tap8567d2b8-5f: entered promiscuous mode
Nov 25 09:14:27 compute-0 NetworkManager[48915]: <info>  [1764062067.6896] manager: (tap8567d2b8-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/655)
Nov 25 09:14:27 compute-0 ovn_controller[152859]: 2025-11-25T09:14:27Z|01586|binding|INFO|Claiming lport 8567d2b8-5fd3-45d6-9d10-d88839de3d8a for this chassis.
Nov 25 09:14:27 compute-0 ovn_controller[152859]: 2025-11-25T09:14:27Z|01587|binding|INFO|8567d2b8-5fd3-45d6-9d10-d88839de3d8a: Claiming fa:16:3e:ef:e3:44 10.100.0.13 2001:db8:0:1:f816:3eff:feef:e344 2001:db8::f816:3eff:feef:e344
Nov 25 09:14:27 compute-0 nova_compute[253538]: 2025-11-25 09:14:27.689 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:27 compute-0 ovn_controller[152859]: 2025-11-25T09:14:27Z|01588|binding|INFO|Setting lport 8567d2b8-5fd3-45d6-9d10-d88839de3d8a ovn-installed in OVS
Nov 25 09:14:27 compute-0 nova_compute[253538]: 2025-11-25 09:14:27.705 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:27 compute-0 nova_compute[253538]: 2025-11-25 09:14:27.708 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:27 compute-0 systemd-machined[215790]: New machine qemu-179-instance-00000095.
Nov 25 09:14:27 compute-0 systemd-udevd[414389]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:14:27 compute-0 systemd[1]: Started Virtual Machine qemu-179-instance-00000095.
Nov 25 09:14:27 compute-0 NetworkManager[48915]: <info>  [1764062067.7345] device (tap8567d2b8-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:14:27 compute-0 NetworkManager[48915]: <info>  [1764062067.7360] device (tap8567d2b8-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:14:27 compute-0 ovn_controller[152859]: 2025-11-25T09:14:27Z|01589|binding|INFO|Setting lport 8567d2b8-5fd3-45d6-9d10-d88839de3d8a up in Southbound
Nov 25 09:14:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.768 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:e3:44 10.100.0.13 2001:db8:0:1:f816:3eff:feef:e344 2001:db8::f816:3eff:feef:e344'], port_security=['fa:16:3e:ef:e3:44 10.100.0.13 2001:db8:0:1:f816:3eff:feef:e344 2001:db8::f816:3eff:feef:e344'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:feef:e344/64 2001:db8::f816:3eff:feef:e344/64', 'neutron:device_id': '49fdf548-77e1-47b2-9118-f42acc3a4052', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27a20ef1-698a-4857-9ba1-1bcdffeb008a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcec4777-cf00-4ad5-abd7-0fc74f3a8f46, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8567d2b8-5fd3-45d6-9d10-d88839de3d8a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:14:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.769 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8567d2b8-5fd3-45d6-9d10-d88839de3d8a in datapath a0d85633-9402-4022-8c0a-b00348775e93 bound to our chassis
Nov 25 09:14:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.770 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0d85633-9402-4022-8c0a-b00348775e93
Nov 25 09:14:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.786 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6398b08b-5d5c-40ab-bb49-8c328f3c02f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.819 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[822afb82-ae9d-44ff-80ee-64253d60cc5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.822 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[d45e3f40-b7a4-4c01-92d5-392eccb88384]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.867 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf82c06-d4dc-4173-816f-778651a88513]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.891 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[62f5086c-fb02-4c55-8599-96b9d57affa4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d85633-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:94:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738854, 'reachable_time': 17518, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414403, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.908 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7cdd5a99-2726-4d62-9b65-a6068ab8d4c2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa0d85633-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738867, 'tstamp': 738867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414404, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa0d85633-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738869, 'tstamp': 738869}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414404, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.910 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d85633-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:14:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.913 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d85633-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:14:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.913 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:14:27 compute-0 nova_compute[253538]: 2025-11-25 09:14:27.913 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.913 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0d85633-90, col_values=(('external_ids', {'iface-id': '0596e673-151c-4eed-ad1e-d612e39d6f14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:14:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:27.914 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.053 253542 DEBUG nova.network.neutron [req-7fc5e1f1-2e5b-4fe5-aa65-5a8f530ef661 req-510f3780-9c10-4aa9-afeb-084c2d88730f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Updated VIF entry in instance network info cache for port 8567d2b8-5fd3-45d6-9d10-d88839de3d8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.054 253542 DEBUG nova.network.neutron [req-7fc5e1f1-2e5b-4fe5-aa65-5a8f530ef661 req-510f3780-9c10-4aa9-afeb-084c2d88730f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Updating instance_info_cache with network_info: [{"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.069 253542 DEBUG oslo_concurrency.lockutils [req-7fc5e1f1-2e5b-4fe5-aa65-5a8f530ef661 req-510f3780-9c10-4aa9-afeb-084c2d88730f b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.197 253542 DEBUG nova.compute.manager [req-a92dc730-5e01-4bc1-8c4f-23f46be963e0 req-f1b4608f-ffca-4674-8e13-752e298853f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.198 253542 DEBUG oslo_concurrency.lockutils [req-a92dc730-5e01-4bc1-8c4f-23f46be963e0 req-f1b4608f-ffca-4674-8e13-752e298853f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.198 253542 DEBUG oslo_concurrency.lockutils [req-a92dc730-5e01-4bc1-8c4f-23f46be963e0 req-f1b4608f-ffca-4674-8e13-752e298853f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.199 253542 DEBUG oslo_concurrency.lockutils [req-a92dc730-5e01-4bc1-8c4f-23f46be963e0 req-f1b4608f-ffca-4674-8e13-752e298853f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.199 253542 DEBUG nova.compute.manager [req-a92dc730-5e01-4bc1-8c4f-23f46be963e0 req-f1b4608f-ffca-4674-8e13-752e298853f8 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Processing event network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:14:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:14:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2822: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.735 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062068.735344, 49fdf548-77e1-47b2-9118-f42acc3a4052 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.736 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] VM Started (Lifecycle Event)
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.739 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.743 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.747 253542 INFO nova.virt.libvirt.driver [-] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Instance spawned successfully.
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.747 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.760 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.766 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.769 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.770 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.770 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.771 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.771 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.772 253542 DEBUG nova.virt.libvirt.driver [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.794 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.795 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062068.7354622, 49fdf548-77e1-47b2-9118-f42acc3a4052 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.795 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] VM Paused (Lifecycle Event)
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.809 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.812 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062068.7427714, 49fdf548-77e1-47b2-9118-f42acc3a4052 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.812 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] VM Resumed (Lifecycle Event)
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.832 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.835 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.852 253542 INFO nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Took 11.10 seconds to spawn the instance on the hypervisor.
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.852 253542 DEBUG nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.853 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:14:28 compute-0 nova_compute[253538]: 2025-11-25 09:14:28.992 253542 INFO nova.compute.manager [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Took 12.31 seconds to build instance.
Nov 25 09:14:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:14:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2576011999' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:14:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:14:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2576011999' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:14:29 compute-0 nova_compute[253538]: 2025-11-25 09:14:29.084 253542 DEBUG oslo_concurrency.lockutils [None req-adbbced1-e284-4a74-9b62-5b622b64136f c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:29 compute-0 ceph-mon[75015]: pgmap v2822: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:14:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2576011999' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:14:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2576011999' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:14:29 compute-0 nova_compute[253538]: 2025-11-25 09:14:29.540 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:30 compute-0 nova_compute[253538]: 2025-11-25 09:14:30.280 253542 DEBUG nova.compute.manager [req-0c51a311-32f0-44a0-a6a1-685453e82542 req-ab20ee44-9494-4947-b5a6-eadacf4edff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:14:30 compute-0 nova_compute[253538]: 2025-11-25 09:14:30.280 253542 DEBUG oslo_concurrency.lockutils [req-0c51a311-32f0-44a0-a6a1-685453e82542 req-ab20ee44-9494-4947-b5a6-eadacf4edff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:30 compute-0 nova_compute[253538]: 2025-11-25 09:14:30.280 253542 DEBUG oslo_concurrency.lockutils [req-0c51a311-32f0-44a0-a6a1-685453e82542 req-ab20ee44-9494-4947-b5a6-eadacf4edff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:30 compute-0 nova_compute[253538]: 2025-11-25 09:14:30.280 253542 DEBUG oslo_concurrency.lockutils [req-0c51a311-32f0-44a0-a6a1-685453e82542 req-ab20ee44-9494-4947-b5a6-eadacf4edff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:30 compute-0 nova_compute[253538]: 2025-11-25 09:14:30.281 253542 DEBUG nova.compute.manager [req-0c51a311-32f0-44a0-a6a1-685453e82542 req-ab20ee44-9494-4947-b5a6-eadacf4edff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] No waiting events found dispatching network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:14:30 compute-0 nova_compute[253538]: 2025-11-25 09:14:30.281 253542 WARNING nova.compute.manager [req-0c51a311-32f0-44a0-a6a1-685453e82542 req-ab20ee44-9494-4947-b5a6-eadacf4edff4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received unexpected event network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a for instance with vm_state active and task_state None.
Nov 25 09:14:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2823: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 1.3 MiB/s wr, 46 op/s
Nov 25 09:14:31 compute-0 ceph-mon[75015]: pgmap v2823: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 1.3 MiB/s wr, 46 op/s
Nov 25 09:14:31 compute-0 nova_compute[253538]: 2025-11-25 09:14:31.889 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:32 compute-0 nova_compute[253538]: 2025-11-25 09:14:32.356 253542 DEBUG nova.compute.manager [req-63329d58-0308-4706-ad4e-b98627749bdc req-48831c82-2c90-4f84-8833-56dfe5ed0086 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-changed-8567d2b8-5fd3-45d6-9d10-d88839de3d8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:14:32 compute-0 nova_compute[253538]: 2025-11-25 09:14:32.356 253542 DEBUG nova.compute.manager [req-63329d58-0308-4706-ad4e-b98627749bdc req-48831c82-2c90-4f84-8833-56dfe5ed0086 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Refreshing instance network info cache due to event network-changed-8567d2b8-5fd3-45d6-9d10-d88839de3d8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:14:32 compute-0 nova_compute[253538]: 2025-11-25 09:14:32.356 253542 DEBUG oslo_concurrency.lockutils [req-63329d58-0308-4706-ad4e-b98627749bdc req-48831c82-2c90-4f84-8833-56dfe5ed0086 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:14:32 compute-0 nova_compute[253538]: 2025-11-25 09:14:32.357 253542 DEBUG oslo_concurrency.lockutils [req-63329d58-0308-4706-ad4e-b98627749bdc req-48831c82-2c90-4f84-8833-56dfe5ed0086 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:14:32 compute-0 nova_compute[253538]: 2025-11-25 09:14:32.357 253542 DEBUG nova.network.neutron [req-63329d58-0308-4706-ad4e-b98627749bdc req-48831c82-2c90-4f84-8833-56dfe5ed0086 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Refreshing network info cache for port 8567d2b8-5fd3-45d6-9d10-d88839de3d8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:14:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2824: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 853 KiB/s rd, 472 KiB/s wr, 50 op/s
Nov 25 09:14:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:14:33 compute-0 ceph-mon[75015]: pgmap v2824: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 853 KiB/s rd, 472 KiB/s wr, 50 op/s
Nov 25 09:14:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2825: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 58 op/s
Nov 25 09:14:34 compute-0 nova_compute[253538]: 2025-11-25 09:14:34.528 253542 DEBUG nova.network.neutron [req-63329d58-0308-4706-ad4e-b98627749bdc req-48831c82-2c90-4f84-8833-56dfe5ed0086 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Updated VIF entry in instance network info cache for port 8567d2b8-5fd3-45d6-9d10-d88839de3d8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:14:34 compute-0 nova_compute[253538]: 2025-11-25 09:14:34.529 253542 DEBUG nova.network.neutron [req-63329d58-0308-4706-ad4e-b98627749bdc req-48831c82-2c90-4f84-8833-56dfe5ed0086 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Updating instance_info_cache with network_info: [{"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:14:34 compute-0 nova_compute[253538]: 2025-11-25 09:14:34.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:34 compute-0 nova_compute[253538]: 2025-11-25 09:14:34.547 253542 DEBUG oslo_concurrency.lockutils [req-63329d58-0308-4706-ad4e-b98627749bdc req-48831c82-2c90-4f84-8833-56dfe5ed0086 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:14:35 compute-0 ceph-mon[75015]: pgmap v2825: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 58 op/s
Nov 25 09:14:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2826: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:14:36 compute-0 nova_compute[253538]: 2025-11-25 09:14:36.891 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:37 compute-0 ceph-mon[75015]: pgmap v2826: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:14:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:14:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2827: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:14:39 compute-0 nova_compute[253538]: 2025-11-25 09:14:39.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:39 compute-0 ceph-mon[75015]: pgmap v2827: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:14:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2828: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 KiB/s wr, 73 op/s
Nov 25 09:14:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:41.098 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:41.099 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:41.100 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:41 compute-0 ovn_controller[152859]: 2025-11-25T09:14:41Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ef:e3:44 10.100.0.13
Nov 25 09:14:41 compute-0 ovn_controller[152859]: 2025-11-25T09:14:41Z|00206|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ef:e3:44 10.100.0.13
Nov 25 09:14:41 compute-0 ceph-mon[75015]: pgmap v2828: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 KiB/s wr, 73 op/s
Nov 25 09:14:41 compute-0 podman[414449]: 2025-11-25 09:14:41.828212096 +0000 UTC m=+0.065462461 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:14:41 compute-0 podman[414450]: 2025-11-25 09:14:41.832506213 +0000 UTC m=+0.062327625 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 09:14:41 compute-0 nova_compute[253538]: 2025-11-25 09:14:41.893 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2829: 321 pgs: 321 active+clean; 228 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 695 KiB/s wr, 65 op/s
Nov 25 09:14:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:14:43 compute-0 ceph-mon[75015]: pgmap v2829: 321 pgs: 321 active+clean; 228 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 695 KiB/s wr, 65 op/s
Nov 25 09:14:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2830: 321 pgs: 321 active+clean; 236 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.5 MiB/s wr, 82 op/s
Nov 25 09:14:44 compute-0 nova_compute[253538]: 2025-11-25 09:14:44.548 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:45 compute-0 ceph-mon[75015]: pgmap v2830: 321 pgs: 321 active+clean; 236 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.5 MiB/s wr, 82 op/s
Nov 25 09:14:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2831: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 865 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 25 09:14:46 compute-0 nova_compute[253538]: 2025-11-25 09:14:46.896 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:47 compute-0 ceph-mon[75015]: pgmap v2831: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 865 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 25 09:14:47 compute-0 podman[414489]: 2025-11-25 09:14:47.858449932 +0000 UTC m=+0.101378477 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 09:14:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:14:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2832: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:14:49 compute-0 nova_compute[253538]: 2025-11-25 09:14:49.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:49 compute-0 ceph-mon[75015]: pgmap v2832: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:14:49 compute-0 nova_compute[253538]: 2025-11-25 09:14:49.986 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:14:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2833: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:14:51 compute-0 nova_compute[253538]: 2025-11-25 09:14:51.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:14:51 compute-0 ceph-mon[75015]: pgmap v2833: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:14:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:51.859 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:14:51 compute-0 nova_compute[253538]: 2025-11-25 09:14:51.860 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:51 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:51.860 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:14:51 compute-0 nova_compute[253538]: 2025-11-25 09:14:51.897 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.099 253542 DEBUG nova.compute.manager [req-6620b07e-2786-4486-b029-349c7d28a458 req-f87febc5-96eb-451a-a6ff-95898e1238c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-changed-8567d2b8-5fd3-45d6-9d10-d88839de3d8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.099 253542 DEBUG nova.compute.manager [req-6620b07e-2786-4486-b029-349c7d28a458 req-f87febc5-96eb-451a-a6ff-95898e1238c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Refreshing instance network info cache due to event network-changed-8567d2b8-5fd3-45d6-9d10-d88839de3d8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.100 253542 DEBUG oslo_concurrency.lockutils [req-6620b07e-2786-4486-b029-349c7d28a458 req-f87febc5-96eb-451a-a6ff-95898e1238c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.100 253542 DEBUG oslo_concurrency.lockutils [req-6620b07e-2786-4486-b029-349c7d28a458 req-f87febc5-96eb-451a-a6ff-95898e1238c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.100 253542 DEBUG nova.network.neutron [req-6620b07e-2786-4486-b029-349c7d28a458 req-f87febc5-96eb-451a-a6ff-95898e1238c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Refreshing network info cache for port 8567d2b8-5fd3-45d6-9d10-d88839de3d8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.262 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "49fdf548-77e1-47b2-9118-f42acc3a4052" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.263 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.263 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.263 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.264 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.265 253542 INFO nova.compute.manager [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Terminating instance
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.266 253542 DEBUG nova.compute.manager [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:14:52 compute-0 kernel: tap8567d2b8-5f (unregistering): left promiscuous mode
Nov 25 09:14:52 compute-0 NetworkManager[48915]: <info>  [1764062092.3286] device (tap8567d2b8-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:14:52 compute-0 ovn_controller[152859]: 2025-11-25T09:14:52Z|01590|binding|INFO|Releasing lport 8567d2b8-5fd3-45d6-9d10-d88839de3d8a from this chassis (sb_readonly=0)
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.337 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:52 compute-0 ovn_controller[152859]: 2025-11-25T09:14:52Z|01591|binding|INFO|Setting lport 8567d2b8-5fd3-45d6-9d10-d88839de3d8a down in Southbound
Nov 25 09:14:52 compute-0 ovn_controller[152859]: 2025-11-25T09:14:52Z|01592|binding|INFO|Removing iface tap8567d2b8-5f ovn-installed in OVS
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.339 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.351 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.379 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:e3:44 10.100.0.13 2001:db8:0:1:f816:3eff:feef:e344 2001:db8::f816:3eff:feef:e344'], port_security=['fa:16:3e:ef:e3:44 10.100.0.13 2001:db8:0:1:f816:3eff:feef:e344 2001:db8::f816:3eff:feef:e344'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:feef:e344/64 2001:db8::f816:3eff:feef:e344/64', 'neutron:device_id': '49fdf548-77e1-47b2-9118-f42acc3a4052', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '27a20ef1-698a-4857-9ba1-1bcdffeb008a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcec4777-cf00-4ad5-abd7-0fc74f3a8f46, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=8567d2b8-5fd3-45d6-9d10-d88839de3d8a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:14:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.380 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 8567d2b8-5fd3-45d6-9d10-d88839de3d8a in datapath a0d85633-9402-4022-8c0a-b00348775e93 unbound from our chassis
Nov 25 09:14:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.381 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0d85633-9402-4022-8c0a-b00348775e93
Nov 25 09:14:52 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000095.scope: Deactivated successfully.
Nov 25 09:14:52 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000095.scope: Consumed 14.017s CPU time.
Nov 25 09:14:52 compute-0 systemd-machined[215790]: Machine qemu-179-instance-00000095 terminated.
Nov 25 09:14:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2834: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:14:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.407 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[da5fd852-d368-4533-93a4-d4dd2322adb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.448 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3623434d-559d-44fb-986d-6c8e3a1dcd95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.451 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[878d7676-fec2-42ab-b616-e4a2f473e892]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.490 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[42cbacf2-15aa-420f-a8d0-10edc135cb8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.500 253542 INFO nova.virt.libvirt.driver [-] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Instance destroyed successfully.
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.501 253542 DEBUG nova.objects.instance [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 49fdf548-77e1-47b2-9118-f42acc3a4052 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.514 253542 DEBUG nova.virt.libvirt.vif [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1132307495',display_name='tempest-TestGettingAddress-server-1132307495',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1132307495',id=149,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLjQVPPbaLK0o8SGnTKKoAiXxzMCTbiXlcClrAsKocG23tIVNKIl+uyDryAiBRRNgOXs1VZtoFvH020JuANRMKekifvu6hXrGsC+ZSxX2adgnjX2NrqkFauwj51UvSALOg==',key_name='tempest-TestGettingAddress-1641139979',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:14:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-4n1kgn5k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:14:28Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=49fdf548-77e1-47b2-9118-f42acc3a4052,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.514 253542 DEBUG nova.network.os_vif_util [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.515 253542 DEBUG nova.network.os_vif_util [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ef:e3:44,bridge_name='br-int',has_traffic_filtering=True,id=8567d2b8-5fd3-45d6-9d10-d88839de3d8a,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8567d2b8-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.516 253542 DEBUG os_vif [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:e3:44,bridge_name='br-int',has_traffic_filtering=True,id=8567d2b8-5fd3-45d6-9d10-d88839de3d8a,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8567d2b8-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.517 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.517 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8567d2b8-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.519 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.519 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf2096f-0c74-4381-9690-6a9142ace8c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d85633-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:94:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738854, 'reachable_time': 17518, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414536, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.521 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.525 253542 INFO os_vif [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:e3:44,bridge_name='br-int',has_traffic_filtering=True,id=8567d2b8-5fd3-45d6-9d10-d88839de3d8a,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8567d2b8-5f')
Nov 25 09:14:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.538 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[70a8f38d-a254-4939-a35d-fcfdad246b00]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa0d85633-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738867, 'tstamp': 738867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414542, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa0d85633-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738869, 'tstamp': 738869}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414542, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.540 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d85633-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.543 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d85633-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:14:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.543 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:14:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.544 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0d85633-90, col_values=(('external_ids', {'iface-id': '0596e673-151c-4eed-ad1e-d612e39d6f14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:14:52 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:52.544 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.958 253542 INFO nova.virt.libvirt.driver [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Deleting instance files /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052_del
Nov 25 09:14:52 compute-0 nova_compute[253538]: 2025-11-25 09:14:52.959 253542 INFO nova.virt.libvirt.driver [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Deletion of /var/lib/nova/instances/49fdf548-77e1-47b2-9118-f42acc3a4052_del complete
Nov 25 09:14:53 compute-0 nova_compute[253538]: 2025-11-25 09:14:53.109 253542 INFO nova.compute.manager [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Took 0.84 seconds to destroy the instance on the hypervisor.
Nov 25 09:14:53 compute-0 nova_compute[253538]: 2025-11-25 09:14:53.110 253542 DEBUG oslo.service.loopingcall [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:14:53 compute-0 nova_compute[253538]: 2025-11-25 09:14:53.111 253542 DEBUG nova.compute.manager [-] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:14:53 compute-0 nova_compute[253538]: 2025-11-25 09:14:53.111 253542 DEBUG nova.network.neutron [-] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:14:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:14:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:14:53
Nov 25 09:14:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:14:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:14:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.meta', 'backups', 'default.rgw.log', '.mgr', 'volumes', 'images']
Nov 25 09:14:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:14:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:14:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:14:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:14:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:14:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:14:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:14:53 compute-0 ceph-mon[75015]: pgmap v2834: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.021 253542 DEBUG nova.network.neutron [req-6620b07e-2786-4486-b029-349c7d28a458 req-f87febc5-96eb-451a-a6ff-95898e1238c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Updated VIF entry in instance network info cache for port 8567d2b8-5fd3-45d6-9d10-d88839de3d8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.022 253542 DEBUG nova.network.neutron [req-6620b07e-2786-4486-b029-349c7d28a458 req-f87febc5-96eb-451a-a6ff-95898e1238c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Updating instance_info_cache with network_info: [{"id": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "address": "fa:16:3e:ef:e3:44", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feef:e344", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8567d2b8-5f", "ovs_interfaceid": "8567d2b8-5fd3-45d6-9d10-d88839de3d8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.117 253542 DEBUG oslo_concurrency.lockutils [req-6620b07e-2786-4486-b029-349c7d28a458 req-f87febc5-96eb-451a-a6ff-95898e1238c3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-49fdf548-77e1-47b2-9118-f42acc3a4052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.119 253542 DEBUG nova.network.neutron [-] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:14:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:14:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:14:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:14:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:14:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:14:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:14:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:14:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:14:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:14:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.161 253542 INFO nova.compute.manager [-] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Took 1.05 seconds to deallocate network for instance.
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.176 253542 DEBUG nova.compute.manager [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-vif-unplugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.177 253542 DEBUG oslo_concurrency.lockutils [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.177 253542 DEBUG oslo_concurrency.lockutils [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.177 253542 DEBUG oslo_concurrency.lockutils [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.177 253542 DEBUG nova.compute.manager [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] No waiting events found dispatching network-vif-unplugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.178 253542 DEBUG nova.compute.manager [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-vif-unplugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.178 253542 DEBUG nova.compute.manager [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.178 253542 DEBUG oslo_concurrency.lockutils [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.179 253542 DEBUG oslo_concurrency.lockutils [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.179 253542 DEBUG oslo_concurrency.lockutils [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.179 253542 DEBUG nova.compute.manager [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] No waiting events found dispatching network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.179 253542 WARNING nova.compute.manager [req-3da9552a-0579-47ce-99a7-6f362509ac87 req-6d9f33d2-516a-40fa-88d9-1b509e1af972 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received unexpected event network-vif-plugged-8567d2b8-5fd3-45d6-9d10-d88839de3d8a for instance with vm_state active and task_state deleting.
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.208 253542 DEBUG nova.compute.manager [req-1d849c1b-bdcd-427c-b26f-3b7f1278ee7e req-a7f9e33f-7720-409d-851c-bed139f4a90e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Received event network-vif-deleted-8567d2b8-5fd3-45d6-9d10-d88839de3d8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.223 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.224 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.306 253542 DEBUG oslo_concurrency.processutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:14:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2835: 321 pgs: 321 active+clean; 224 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 1.5 MiB/s wr, 54 op/s
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.603 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:14:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1965543821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.754 253542 DEBUG oslo_concurrency.processutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.761 253542 DEBUG nova.compute.provider_tree [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.776 253542 DEBUG nova.scheduler.client.report [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.794 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.825 253542 INFO nova.scheduler.client.report [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 49fdf548-77e1-47b2-9118-f42acc3a4052
Nov 25 09:14:54 compute-0 nova_compute[253538]: 2025-11-25 09:14:54.904 253542 DEBUG oslo_concurrency.lockutils [None req-897de6a4-6b39-4f4a-9d4e-5de56a8ddf8d c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "49fdf548-77e1-47b2-9118-f42acc3a4052" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:55 compute-0 ceph-mon[75015]: pgmap v2835: 321 pgs: 321 active+clean; 224 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 1.5 MiB/s wr, 54 op/s
Nov 25 09:14:55 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1965543821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:14:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:55.863 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.308 253542 DEBUG nova.compute.manager [req-524fb2af-fc48-47f6-b334-d8b525ddcdc4 req-49701a26-3eab-4fe3-9b24-74bca72d683a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-changed-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.308 253542 DEBUG nova.compute.manager [req-524fb2af-fc48-47f6-b334-d8b525ddcdc4 req-49701a26-3eab-4fe3-9b24-74bca72d683a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Refreshing instance network info cache due to event network-changed-bc0d7fbf-1c1d-43bc-884b-d89f14be4712. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.309 253542 DEBUG oslo_concurrency.lockutils [req-524fb2af-fc48-47f6-b334-d8b525ddcdc4 req-49701a26-3eab-4fe3-9b24-74bca72d683a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.310 253542 DEBUG oslo_concurrency.lockutils [req-524fb2af-fc48-47f6-b334-d8b525ddcdc4 req-49701a26-3eab-4fe3-9b24-74bca72d683a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.310 253542 DEBUG nova.network.neutron [req-524fb2af-fc48-47f6-b334-d8b525ddcdc4 req-49701a26-3eab-4fe3-9b24-74bca72d683a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Refreshing network info cache for port bc0d7fbf-1c1d-43bc-884b-d89f14be4712 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:14:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2836: 321 pgs: 321 active+clean; 192 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 709 KiB/s wr, 39 op/s
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.532 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.533 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.534 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.534 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.535 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.537 253542 INFO nova.compute.manager [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Terminating instance
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.538 253542 DEBUG nova.compute.manager [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:14:56 compute-0 kernel: tapbc0d7fbf-1c (unregistering): left promiscuous mode
Nov 25 09:14:56 compute-0 NetworkManager[48915]: <info>  [1764062096.6072] device (tapbc0d7fbf-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.620 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:56 compute-0 ovn_controller[152859]: 2025-11-25T09:14:56Z|01593|binding|INFO|Releasing lport bc0d7fbf-1c1d-43bc-884b-d89f14be4712 from this chassis (sb_readonly=0)
Nov 25 09:14:56 compute-0 ovn_controller[152859]: 2025-11-25T09:14:56Z|01594|binding|INFO|Setting lport bc0d7fbf-1c1d-43bc-884b-d89f14be4712 down in Southbound
Nov 25 09:14:56 compute-0 ovn_controller[152859]: 2025-11-25T09:14:56Z|01595|binding|INFO|Removing iface tapbc0d7fbf-1c ovn-installed in OVS
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.624 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.642 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:11:da 10.100.0.12 2001:db8:0:1:f816:3eff:fe57:11da 2001:db8::f816:3eff:fe57:11da'], port_security=['fa:16:3e:57:11:da 10.100.0.12 2001:db8:0:1:f816:3eff:fe57:11da 2001:db8::f816:3eff:fe57:11da'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8:0:1:f816:3eff:fe57:11da/64 2001:db8::f816:3eff:fe57:11da/64', 'neutron:device_id': '3525156a-e9c9-40b7-88f6-db0de5eb3cd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d85633-9402-4022-8c0a-b00348775e93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '27a20ef1-698a-4857-9ba1-1bcdffeb008a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcec4777-cf00-4ad5-abd7-0fc74f3a8f46, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=bc0d7fbf-1c1d-43bc-884b-d89f14be4712) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:14:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.645 162739 INFO neutron.agent.ovn.metadata.agent [-] Port bc0d7fbf-1c1d-43bc-884b-d89f14be4712 in datapath a0d85633-9402-4022-8c0a-b00348775e93 unbound from our chassis
Nov 25 09:14:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.647 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d85633-9402-4022-8c0a-b00348775e93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:14:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.648 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4d3393-7ffa-49e7-98a2-965ab8852651]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.649 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93 namespace which is not needed anymore
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.658 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:56 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000094.scope: Deactivated successfully.
Nov 25 09:14:56 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000094.scope: Consumed 15.125s CPU time.
Nov 25 09:14:56 compute-0 systemd-machined[215790]: Machine qemu-178-instance-00000094 terminated.
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.782 253542 INFO nova.virt.libvirt.driver [-] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Instance destroyed successfully.
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.782 253542 DEBUG nova.objects.instance [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.793 253542 DEBUG nova.virt.libvirt.vif [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:13:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-846287804',display_name='tempest-TestGettingAddress-server-846287804',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-846287804',id=148,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLjQVPPbaLK0o8SGnTKKoAiXxzMCTbiXlcClrAsKocG23tIVNKIl+uyDryAiBRRNgOXs1VZtoFvH020JuANRMKekifvu6hXrGsC+ZSxX2adgnjX2NrqkFauwj51UvSALOg==',key_name='tempest-TestGettingAddress-1641139979',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:13:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-4cgshl0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:13:54Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=3525156a-e9c9-40b7-88f6-db0de5eb3cd1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.794 253542 DEBUG nova.network.os_vif_util [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.795 253542 DEBUG nova.network.os_vif_util [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:11:da,bridge_name='br-int',has_traffic_filtering=True,id=bc0d7fbf-1c1d-43bc-884b-d89f14be4712,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc0d7fbf-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.795 253542 DEBUG os_vif [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:11:da,bridge_name='br-int',has_traffic_filtering=True,id=bc0d7fbf-1c1d-43bc-884b-d89f14be4712,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc0d7fbf-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.796 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.797 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc0d7fbf-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.801 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.804 253542 INFO os_vif [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:11:da,bridge_name='br-int',has_traffic_filtering=True,id=bc0d7fbf-1c1d-43bc-884b-d89f14be4712,network=Network(a0d85633-9402-4022-8c0a-b00348775e93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc0d7fbf-1c')
Nov 25 09:14:56 compute-0 neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93[413058]: [NOTICE]   (413062) : haproxy version is 2.8.14-c23fe91
Nov 25 09:14:56 compute-0 neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93[413058]: [NOTICE]   (413062) : path to executable is /usr/sbin/haproxy
Nov 25 09:14:56 compute-0 neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93[413058]: [WARNING]  (413062) : Exiting Master process...
Nov 25 09:14:56 compute-0 neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93[413058]: [WARNING]  (413062) : Exiting Master process...
Nov 25 09:14:56 compute-0 neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93[413058]: [ALERT]    (413062) : Current worker (413064) exited with code 143 (Terminated)
Nov 25 09:14:56 compute-0 neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93[413058]: [WARNING]  (413062) : All workers exited. Exiting... (0)
Nov 25 09:14:56 compute-0 systemd[1]: libpod-7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9.scope: Deactivated successfully.
Nov 25 09:14:56 compute-0 ceph-mon[75015]: pgmap v2836: 321 pgs: 321 active+clean; 192 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 709 KiB/s wr, 39 op/s
Nov 25 09:14:56 compute-0 podman[414608]: 2025-11-25 09:14:56.82835194 +0000 UTC m=+0.060358282 container died 7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 09:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-92dfa951f831771d47b5f3c19baa67a33147d7a8d9cd91e8a74b897c79b9363c-merged.mount: Deactivated successfully.
Nov 25 09:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9-userdata-shm.mount: Deactivated successfully.
Nov 25 09:14:56 compute-0 podman[414608]: 2025-11-25 09:14:56.870027403 +0000 UTC m=+0.102033705 container cleanup 7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:14:56 compute-0 systemd[1]: libpod-conmon-7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9.scope: Deactivated successfully.
Nov 25 09:14:56 compute-0 podman[414662]: 2025-11-25 09:14:56.943295435 +0000 UTC m=+0.047835942 container remove 7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:14:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.950 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4f7f76-2d58-42af-afc8-c9d742a44eed]: (4, ('Tue Nov 25 09:14:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93 (7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9)\n7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9\nTue Nov 25 09:14:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93 (7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9)\n7cb63ae56f9aa9c60b0517e753abd1c81a65a009b616cb2f760cf458d27adbd9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.954 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[403d629f-ffa7-4350-9a73-e517ccfd00cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.957 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d85633-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.963 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:56 compute-0 kernel: tapa0d85633-90: left promiscuous mode
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.966 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.970 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[482cb757-8001-4b22-8ec0-8fdb75dae85c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:56 compute-0 nova_compute[253538]: 2025-11-25 09:14:56.979 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:14:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.993 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[429e5248-490f-4a67-9915-a704105876ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:56 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:56.994 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5740a8-0fdc-4673-9c1b-4191f3605c6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:57.011 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[332e2d1a-a59f-45a3-bdec-27c4b3b39469]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738847, 'reachable_time': 34208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414678, 'error': None, 'target': 'ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:57 compute-0 systemd[1]: run-netns-ovnmeta\x2da0d85633\x2d9402\x2d4022\x2d8c0a\x2db00348775e93.mount: Deactivated successfully.
Nov 25 09:14:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:57.014 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0d85633-9402-4022-8c0a-b00348775e93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:14:57 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:14:57.015 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ac7ccc-c4c7-4460-848d-b9a911bebbec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:14:57 compute-0 nova_compute[253538]: 2025-11-25 09:14:57.195 253542 INFO nova.virt.libvirt.driver [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Deleting instance files /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1_del
Nov 25 09:14:57 compute-0 nova_compute[253538]: 2025-11-25 09:14:57.196 253542 INFO nova.virt.libvirt.driver [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Deletion of /var/lib/nova/instances/3525156a-e9c9-40b7-88f6-db0de5eb3cd1_del complete
Nov 25 09:14:57 compute-0 nova_compute[253538]: 2025-11-25 09:14:57.355 253542 INFO nova.compute.manager [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 25 09:14:57 compute-0 nova_compute[253538]: 2025-11-25 09:14:57.356 253542 DEBUG oslo.service.loopingcall [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:14:57 compute-0 nova_compute[253538]: 2025-11-25 09:14:57.356 253542 DEBUG nova.compute.manager [-] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:14:57 compute-0 nova_compute[253538]: 2025-11-25 09:14:57.356 253542 DEBUG nova.network.neutron [-] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:14:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.377 253542 DEBUG nova.network.neutron [-] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:14:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2837: 321 pgs: 321 active+clean; 147 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 24 KiB/s wr, 32 op/s
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.413 253542 DEBUG nova.compute.manager [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-vif-unplugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.414 253542 DEBUG oslo_concurrency.lockutils [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.414 253542 DEBUG oslo_concurrency.lockutils [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.414 253542 DEBUG oslo_concurrency.lockutils [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.415 253542 DEBUG nova.compute.manager [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] No waiting events found dispatching network-vif-unplugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.415 253542 DEBUG nova.compute.manager [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-vif-unplugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.415 253542 DEBUG nova.compute.manager [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.415 253542 DEBUG oslo_concurrency.lockutils [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.416 253542 DEBUG oslo_concurrency.lockutils [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.416 253542 DEBUG oslo_concurrency.lockutils [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.416 253542 DEBUG nova.compute.manager [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] No waiting events found dispatching network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.416 253542 WARNING nova.compute.manager [req-e912a41d-9fee-40c5-9fce-6b519b52bb7a req-b349f6de-17fd-49dc-a035-2c30b54d4156 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received unexpected event network-vif-plugged-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 for instance with vm_state active and task_state deleting.
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.437 253542 INFO nova.compute.manager [-] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Took 1.08 seconds to deallocate network for instance.
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.485 253542 DEBUG nova.compute.manager [req-8c371b19-424f-47e8-97a9-5e671eed0634 req-1422bebf-84c5-4199-a9d3-97b237218ae1 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Received event network-vif-deleted-bc0d7fbf-1c1d-43bc-884b-d89f14be4712 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.515 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.516 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.546 253542 DEBUG oslo_concurrency.processutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.591 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.593 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.593 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.593 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.772 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.795 253542 DEBUG nova.network.neutron [req-524fb2af-fc48-47f6-b334-d8b525ddcdc4 req-49701a26-3eab-4fe3-9b24-74bca72d683a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updated VIF entry in instance network info cache for port bc0d7fbf-1c1d-43bc-884b-d89f14be4712. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.796 253542 DEBUG nova.network.neutron [req-524fb2af-fc48-47f6-b334-d8b525ddcdc4 req-49701a26-3eab-4fe3-9b24-74bca72d683a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updating instance_info_cache with network_info: [{"id": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "address": "fa:16:3e:57:11:da", "network": {"id": "a0d85633-9402-4022-8c0a-b00348775e93", "bridge": "br-int", "label": "tempest-network-smoke--1636625224", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:11da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc0d7fbf-1c", "ovs_interfaceid": "bc0d7fbf-1c1d-43bc-884b-d89f14be4712", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.829 253542 DEBUG oslo_concurrency.lockutils [req-524fb2af-fc48-47f6-b334-d8b525ddcdc4 req-49701a26-3eab-4fe3-9b24-74bca72d683a b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.829 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.829 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 09:14:58 compute-0 nova_compute[253538]: 2025-11-25 09:14:58.830 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:14:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:14:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2934596963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:14:59 compute-0 nova_compute[253538]: 2025-11-25 09:14:59.019 253542 DEBUG oslo_concurrency.processutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:14:59 compute-0 nova_compute[253538]: 2025-11-25 09:14:59.030 253542 DEBUG nova.compute.provider_tree [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:14:59 compute-0 nova_compute[253538]: 2025-11-25 09:14:59.045 253542 DEBUG nova.scheduler.client.report [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:14:59 compute-0 nova_compute[253538]: 2025-11-25 09:14:59.141 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:59 compute-0 nova_compute[253538]: 2025-11-25 09:14:59.244 253542 INFO nova.scheduler.client.report [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 3525156a-e9c9-40b7-88f6-db0de5eb3cd1
Nov 25 09:14:59 compute-0 nova_compute[253538]: 2025-11-25 09:14:59.262 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:14:59 compute-0 nova_compute[253538]: 2025-11-25 09:14:59.307 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-3525156a-e9c9-40b7-88f6-db0de5eb3cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:14:59 compute-0 nova_compute[253538]: 2025-11-25 09:14:59.308 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 09:14:59 compute-0 ceph-mon[75015]: pgmap v2837: 321 pgs: 321 active+clean; 147 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 24 KiB/s wr, 32 op/s
Nov 25 09:14:59 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2934596963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:14:59 compute-0 nova_compute[253538]: 2025-11-25 09:14:59.492 253542 DEBUG oslo_concurrency.lockutils [None req-637521c8-0853-4fb4-9c14-1e99f7de6afc c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "3525156a-e9c9-40b7-88f6-db0de5eb3cd1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:14:59 compute-0 nova_compute[253538]: 2025-11-25 09:14:59.659 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2838: 321 pgs: 321 active+clean; 129 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 24 KiB/s wr, 47 op/s
Nov 25 09:15:01 compute-0 ceph-mon[75015]: pgmap v2838: 321 pgs: 321 active+clean; 129 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 24 KiB/s wr, 47 op/s
Nov 25 09:15:01 compute-0 nova_compute[253538]: 2025-11-25 09:15:01.798 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2839: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 13 KiB/s wr, 57 op/s
Nov 25 09:15:02 compute-0 nova_compute[253538]: 2025-11-25 09:15:02.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:15:02 compute-0 nova_compute[253538]: 2025-11-25 09:15:02.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:15:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:15:03 compute-0 ceph-mon[75015]: pgmap v2839: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 13 KiB/s wr, 57 op/s
Nov 25 09:15:03 compute-0 nova_compute[253538]: 2025-11-25 09:15:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2840: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 12 KiB/s wr, 57 op/s
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:15:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:15:04 compute-0 nova_compute[253538]: 2025-11-25 09:15:04.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:15:04 compute-0 nova_compute[253538]: 2025-11-25 09:15:04.662 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:05 compute-0 sudo[414701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:15:05 compute-0 sudo[414701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:05 compute-0 sudo[414701]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:05 compute-0 sudo[414726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:15:05 compute-0 sudo[414726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:05 compute-0 sudo[414726]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:05 compute-0 sudo[414751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:15:05 compute-0 sudo[414751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:05 compute-0 sudo[414751]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:05 compute-0 sudo[414776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:15:05 compute-0 sudo[414776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:05 compute-0 ceph-mon[75015]: pgmap v2840: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 12 KiB/s wr, 57 op/s
Nov 25 09:15:05 compute-0 nova_compute[253538]: 2025-11-25 09:15:05.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:15:05 compute-0 sudo[414776]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:15:06 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:15:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:15:06 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:15:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:15:06 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:15:06 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 0d227d0d-0490-4f3f-90e1-4fb7b1c518eb does not exist
Nov 25 09:15:06 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e0a2ee8c-dbe3-4bbe-b944-0282fc37092d does not exist
Nov 25 09:15:06 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e209057a-81f0-4f1c-aaa4-a77683fa7793 does not exist
Nov 25 09:15:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:15:06 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:15:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:15:06 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:15:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:15:06 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:15:06 compute-0 sudo[414832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:15:06 compute-0 sudo[414832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:06 compute-0 sudo[414832]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:06 compute-0 sudo[414857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:15:06 compute-0 sudo[414857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:06 compute-0 sudo[414857]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:06 compute-0 sudo[414882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:15:06 compute-0 sudo[414882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:06 compute-0 sudo[414882]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:06 compute-0 sudo[414907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:15:06 compute-0 sudo[414907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2841: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 12 KiB/s wr, 55 op/s
Nov 25 09:15:06 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:15:06 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:15:06 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:15:06 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:15:06 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:15:06 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:15:06 compute-0 nova_compute[253538]: 2025-11-25 09:15:06.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:15:06 compute-0 podman[414969]: 2025-11-25 09:15:06.695973481 +0000 UTC m=+0.118041260 container create 61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_northcutt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:15:06 compute-0 podman[414969]: 2025-11-25 09:15:06.620744215 +0000 UTC m=+0.042812074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:15:06 compute-0 systemd[1]: Started libpod-conmon-61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41.scope.
Nov 25 09:15:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:15:06 compute-0 podman[414969]: 2025-11-25 09:15:06.797936273 +0000 UTC m=+0.220004082 container init 61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:15:06 compute-0 nova_compute[253538]: 2025-11-25 09:15:06.799 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:06 compute-0 podman[414969]: 2025-11-25 09:15:06.809826876 +0000 UTC m=+0.231894655 container start 61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_northcutt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 09:15:06 compute-0 podman[414969]: 2025-11-25 09:15:06.81586389 +0000 UTC m=+0.237931669 container attach 61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_northcutt, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 09:15:06 compute-0 nice_northcutt[414985]: 167 167
Nov 25 09:15:06 compute-0 systemd[1]: libpod-61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41.scope: Deactivated successfully.
Nov 25 09:15:06 compute-0 conmon[414985]: conmon 61ffce8873919a4ca8c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41.scope/container/memory.events
Nov 25 09:15:06 compute-0 podman[414969]: 2025-11-25 09:15:06.820919917 +0000 UTC m=+0.242987696 container died 61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:15:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5526869d61dd4feab378e0141362676a313ab9b8662098db99073eb3772e00ab-merged.mount: Deactivated successfully.
Nov 25 09:15:06 compute-0 podman[414969]: 2025-11-25 09:15:06.861345316 +0000 UTC m=+0.283413095 container remove 61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:15:06 compute-0 systemd[1]: libpod-conmon-61ffce8873919a4ca8c13277357f524f381bc7b3a4d60929a3ca97b5b37d2a41.scope: Deactivated successfully.
Nov 25 09:15:07 compute-0 podman[415012]: 2025-11-25 09:15:07.01858518 +0000 UTC m=+0.041791696 container create 77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_newton, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:15:07 compute-0 systemd[1]: Started libpod-conmon-77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f.scope.
Nov 25 09:15:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ce479ae00f5dbe10b241f8bf1c34ef5a61346a54ccb2a7fb1a197c0a76609c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ce479ae00f5dbe10b241f8bf1c34ef5a61346a54ccb2a7fb1a197c0a76609c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ce479ae00f5dbe10b241f8bf1c34ef5a61346a54ccb2a7fb1a197c0a76609c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ce479ae00f5dbe10b241f8bf1c34ef5a61346a54ccb2a7fb1a197c0a76609c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ce479ae00f5dbe10b241f8bf1c34ef5a61346a54ccb2a7fb1a197c0a76609c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:15:07 compute-0 podman[415012]: 2025-11-25 09:15:06.999577194 +0000 UTC m=+0.022783720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:15:07 compute-0 podman[415012]: 2025-11-25 09:15:07.101517765 +0000 UTC m=+0.124724311 container init 77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_newton, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:15:07 compute-0 podman[415012]: 2025-11-25 09:15:07.108207267 +0000 UTC m=+0.131413783 container start 77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_newton, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:15:07 compute-0 podman[415012]: 2025-11-25 09:15:07.111215669 +0000 UTC m=+0.134422185 container attach 77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 09:15:07 compute-0 sshd-session[415007]: Invalid user oracle from 193.32.162.151 port 51694
Nov 25 09:15:07 compute-0 nova_compute[253538]: 2025-11-25 09:15:07.498 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764062092.4977322, 49fdf548-77e1-47b2-9118-f42acc3a4052 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:15:07 compute-0 nova_compute[253538]: 2025-11-25 09:15:07.499 253542 INFO nova.compute.manager [-] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] VM Stopped (Lifecycle Event)
Nov 25 09:15:07 compute-0 ceph-mon[75015]: pgmap v2841: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 12 KiB/s wr, 55 op/s
Nov 25 09:15:07 compute-0 nova_compute[253538]: 2025-11-25 09:15:07.514 253542 DEBUG nova.compute.manager [None req-4342b7cd-a5da-4645-b592-a039e84dec6a - - - - - -] [instance: 49fdf548-77e1-47b2-9118-f42acc3a4052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:15:07 compute-0 sshd-session[415007]: Connection closed by invalid user oracle 193.32.162.151 port 51694 [preauth]
Nov 25 09:15:07 compute-0 nova_compute[253538]: 2025-11-25 09:15:07.548 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:07 compute-0 nova_compute[253538]: 2025-11-25 09:15:07.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:08 compute-0 nifty_newton[415028]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:15:08 compute-0 nifty_newton[415028]: --> relative data size: 1.0
Nov 25 09:15:08 compute-0 nifty_newton[415028]: --> All data devices are unavailable
Nov 25 09:15:08 compute-0 systemd[1]: libpod-77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f.scope: Deactivated successfully.
Nov 25 09:15:08 compute-0 systemd[1]: libpod-77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f.scope: Consumed 1.009s CPU time.
Nov 25 09:15:08 compute-0 podman[415012]: 2025-11-25 09:15:08.158416096 +0000 UTC m=+1.181622602 container died 77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_newton, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:15:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5ce479ae00f5dbe10b241f8bf1c34ef5a61346a54ccb2a7fb1a197c0a76609c-merged.mount: Deactivated successfully.
Nov 25 09:15:08 compute-0 podman[415012]: 2025-11-25 09:15:08.228845441 +0000 UTC m=+1.252051957 container remove 77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_newton, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:15:08 compute-0 systemd[1]: libpod-conmon-77a29c686269194f5fad34e7a4e07eba84285be3980591f77c5ede3c0f211a3f.scope: Deactivated successfully.
Nov 25 09:15:08 compute-0 sudo[414907]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:08 compute-0 sudo[415068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:15:08 compute-0 sudo[415068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:08 compute-0 sudo[415068]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:15:08 compute-0 sudo[415093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:15:08 compute-0 sudo[415093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2842: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.0 KiB/s wr, 38 op/s
Nov 25 09:15:08 compute-0 sudo[415093]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:08 compute-0 sudo[415118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:15:08 compute-0 sudo[415118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:08 compute-0 sudo[415118]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:08 compute-0 sudo[415143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:15:08 compute-0 sudo[415143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:08 compute-0 nova_compute[253538]: 2025-11-25 09:15:08.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:15:08 compute-0 nova_compute[253538]: 2025-11-25 09:15:08.572 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:15:08 compute-0 nova_compute[253538]: 2025-11-25 09:15:08.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:15:08 compute-0 nova_compute[253538]: 2025-11-25 09:15:08.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:15:08 compute-0 nova_compute[253538]: 2025-11-25 09:15:08.573 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:15:08 compute-0 nova_compute[253538]: 2025-11-25 09:15:08.574 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:15:08 compute-0 podman[415226]: 2025-11-25 09:15:08.844183478 +0000 UTC m=+0.039459384 container create edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 09:15:08 compute-0 systemd[1]: Started libpod-conmon-edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6.scope.
Nov 25 09:15:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:15:08 compute-0 podman[415226]: 2025-11-25 09:15:08.826425005 +0000 UTC m=+0.021700921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:15:08 compute-0 podman[415226]: 2025-11-25 09:15:08.925383525 +0000 UTC m=+0.120659441 container init edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 09:15:08 compute-0 podman[415226]: 2025-11-25 09:15:08.931426779 +0000 UTC m=+0.126702695 container start edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_greider, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:15:08 compute-0 podman[415226]: 2025-11-25 09:15:08.935252904 +0000 UTC m=+0.130528820 container attach edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:15:08 compute-0 frosty_greider[415243]: 167 167
Nov 25 09:15:08 compute-0 systemd[1]: libpod-edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6.scope: Deactivated successfully.
Nov 25 09:15:08 compute-0 podman[415226]: 2025-11-25 09:15:08.937059602 +0000 UTC m=+0.132335498 container died edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_greider, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 09:15:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4397de3385fac24da8c3fe6e7a4cc398dcbede03eec7d683967c61223cedc716-merged.mount: Deactivated successfully.
Nov 25 09:15:08 compute-0 podman[415226]: 2025-11-25 09:15:08.970136942 +0000 UTC m=+0.165412838 container remove edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 09:15:08 compute-0 systemd[1]: libpod-conmon-edf18abb7e70eace4c8af4f300998f8b5314fbab31909ec7d9509ee3ae1d65f6.scope: Deactivated successfully.
Nov 25 09:15:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:15:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1681472048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:15:09 compute-0 nova_compute[253538]: 2025-11-25 09:15:09.025 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:15:09 compute-0 podman[415271]: 2025-11-25 09:15:09.147617286 +0000 UTC m=+0.049232929 container create 784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_matsumoto, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 09:15:09 compute-0 systemd[1]: Started libpod-conmon-784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240.scope.
Nov 25 09:15:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:15:09 compute-0 podman[415271]: 2025-11-25 09:15:09.124828137 +0000 UTC m=+0.026443810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f21cb69e41eadaf7f33cdf1ab0999e2d72ce394f6b5b4e56868f2951a4c5d1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f21cb69e41eadaf7f33cdf1ab0999e2d72ce394f6b5b4e56868f2951a4c5d1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f21cb69e41eadaf7f33cdf1ab0999e2d72ce394f6b5b4e56868f2951a4c5d1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f21cb69e41eadaf7f33cdf1ab0999e2d72ce394f6b5b4e56868f2951a4c5d1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:15:09 compute-0 nova_compute[253538]: 2025-11-25 09:15:09.229 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:15:09 compute-0 nova_compute[253538]: 2025-11-25 09:15:09.230 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3587MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:15:09 compute-0 nova_compute[253538]: 2025-11-25 09:15:09.230 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:15:09 compute-0 nova_compute[253538]: 2025-11-25 09:15:09.230 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:15:09 compute-0 podman[415271]: 2025-11-25 09:15:09.231511057 +0000 UTC m=+0.133126700 container init 784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 09:15:09 compute-0 podman[415271]: 2025-11-25 09:15:09.241840298 +0000 UTC m=+0.143455941 container start 784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_matsumoto, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:15:09 compute-0 podman[415271]: 2025-11-25 09:15:09.244617153 +0000 UTC m=+0.146232816 container attach 784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_matsumoto, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 09:15:09 compute-0 nova_compute[253538]: 2025-11-25 09:15:09.291 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:15:09 compute-0 nova_compute[253538]: 2025-11-25 09:15:09.291 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:15:09 compute-0 nova_compute[253538]: 2025-11-25 09:15:09.305 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:15:09 compute-0 ceph-mon[75015]: pgmap v2842: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.0 KiB/s wr, 38 op/s
Nov 25 09:15:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1681472048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:15:09 compute-0 nova_compute[253538]: 2025-11-25 09:15:09.663 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:15:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2817833574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:15:09 compute-0 nova_compute[253538]: 2025-11-25 09:15:09.751 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:15:09 compute-0 nova_compute[253538]: 2025-11-25 09:15:09.758 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:15:09 compute-0 nova_compute[253538]: 2025-11-25 09:15:09.773 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:15:09 compute-0 nova_compute[253538]: 2025-11-25 09:15:09.795 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:15:09 compute-0 nova_compute[253538]: 2025-11-25 09:15:09.796 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]: {
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:     "0": [
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:         {
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "devices": [
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "/dev/loop3"
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             ],
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "lv_name": "ceph_lv0",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "lv_size": "21470642176",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "name": "ceph_lv0",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "tags": {
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.cluster_name": "ceph",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.crush_device_class": "",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.encrypted": "0",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.osd_id": "0",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.type": "block",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.vdo": "0"
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             },
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "type": "block",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "vg_name": "ceph_vg0"
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:         }
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:     ],
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:     "1": [
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:         {
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "devices": [
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "/dev/loop4"
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             ],
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "lv_name": "ceph_lv1",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "lv_size": "21470642176",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "name": "ceph_lv1",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "tags": {
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.cluster_name": "ceph",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.crush_device_class": "",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.encrypted": "0",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.osd_id": "1",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.type": "block",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.vdo": "0"
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             },
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "type": "block",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "vg_name": "ceph_vg1"
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:         }
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:     ],
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:     "2": [
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:         {
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "devices": [
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "/dev/loop5"
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             ],
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "lv_name": "ceph_lv2",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "lv_size": "21470642176",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "name": "ceph_lv2",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "tags": {
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.cluster_name": "ceph",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.crush_device_class": "",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.encrypted": "0",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.osd_id": "2",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.type": "block",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:                 "ceph.vdo": "0"
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             },
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "type": "block",
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:             "vg_name": "ceph_vg2"
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:         }
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]:     ]
Nov 25 09:15:09 compute-0 romantic_matsumoto[415288]: }
Nov 25 09:15:10 compute-0 systemd[1]: libpod-784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240.scope: Deactivated successfully.
Nov 25 09:15:10 compute-0 podman[415319]: 2025-11-25 09:15:10.066199897 +0000 UTC m=+0.023502690 container died 784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:15:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f21cb69e41eadaf7f33cdf1ab0999e2d72ce394f6b5b4e56868f2951a4c5d1c-merged.mount: Deactivated successfully.
Nov 25 09:15:10 compute-0 podman[415319]: 2025-11-25 09:15:10.123537776 +0000 UTC m=+0.080840539 container remove 784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 09:15:10 compute-0 systemd[1]: libpod-conmon-784ec06424a65276e543c291765b4b207983a6afd5ca1975f1ebda0a06df2240.scope: Deactivated successfully.
Nov 25 09:15:10 compute-0 sudo[415143]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:10 compute-0 sudo[415332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:15:10 compute-0 sudo[415332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:10 compute-0 sudo[415332]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:10 compute-0 sudo[415357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:15:10 compute-0 sudo[415357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:10 compute-0 sudo[415357]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:10 compute-0 sudo[415382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:15:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2843: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 09:15:10 compute-0 sudo[415382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:10 compute-0 sudo[415382]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:10 compute-0 sudo[415407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:15:10 compute-0 sudo[415407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2817833574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:15:10 compute-0 podman[415472]: 2025-11-25 09:15:10.822827595 +0000 UTC m=+0.043766260 container create a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:15:10 compute-0 systemd[1]: Started libpod-conmon-a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f.scope.
Nov 25 09:15:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:15:10 compute-0 podman[415472]: 2025-11-25 09:15:10.80128368 +0000 UTC m=+0.022222355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:15:10 compute-0 podman[415472]: 2025-11-25 09:15:10.909932544 +0000 UTC m=+0.130871219 container init a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:15:10 compute-0 podman[415472]: 2025-11-25 09:15:10.919026511 +0000 UTC m=+0.139965166 container start a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:15:10 compute-0 podman[415472]: 2025-11-25 09:15:10.922532426 +0000 UTC m=+0.143471101 container attach a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_feynman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:15:10 compute-0 awesome_feynman[415489]: 167 167
Nov 25 09:15:10 compute-0 systemd[1]: libpod-a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f.scope: Deactivated successfully.
Nov 25 09:15:10 compute-0 podman[415472]: 2025-11-25 09:15:10.926488783 +0000 UTC m=+0.147427438 container died a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_feynman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 09:15:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-09d7cad6a588a9ef834755091a5a22e3ce59193f619c1945f235e925e5de8ae9-merged.mount: Deactivated successfully.
Nov 25 09:15:11 compute-0 podman[415472]: 2025-11-25 09:15:11.032070323 +0000 UTC m=+0.253008978 container remove a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_feynman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:15:11 compute-0 systemd[1]: libpod-conmon-a20eed38e2a6f2e2df4ecbcf8fb5004ad470d5ba6081c31ce2715059970b9c1f.scope: Deactivated successfully.
Nov 25 09:15:11 compute-0 podman[415514]: 2025-11-25 09:15:11.246617346 +0000 UTC m=+0.048910561 container create a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_rosalind, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:15:11 compute-0 systemd[1]: Started libpod-conmon-a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630.scope.
Nov 25 09:15:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d39c256bc6287f107512198cac140db2836fb5955affaa6e90ec1d8c47eded/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d39c256bc6287f107512198cac140db2836fb5955affaa6e90ec1d8c47eded/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d39c256bc6287f107512198cac140db2836fb5955affaa6e90ec1d8c47eded/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d39c256bc6287f107512198cac140db2836fb5955affaa6e90ec1d8c47eded/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:15:11 compute-0 podman[415514]: 2025-11-25 09:15:11.22910138 +0000 UTC m=+0.031394625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:15:11 compute-0 podman[415514]: 2025-11-25 09:15:11.338115993 +0000 UTC m=+0.140409228 container init a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:15:11 compute-0 podman[415514]: 2025-11-25 09:15:11.344326942 +0000 UTC m=+0.146620177 container start a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:15:11 compute-0 podman[415514]: 2025-11-25 09:15:11.348244398 +0000 UTC m=+0.150537613 container attach a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:15:11 compute-0 ceph-mon[75015]: pgmap v2843: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 09:15:11 compute-0 nova_compute[253538]: 2025-11-25 09:15:11.779 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764062096.778285, 3525156a-e9c9-40b7-88f6-db0de5eb3cd1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:15:11 compute-0 nova_compute[253538]: 2025-11-25 09:15:11.780 253542 INFO nova.compute.manager [-] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] VM Stopped (Lifecycle Event)
Nov 25 09:15:11 compute-0 nova_compute[253538]: 2025-11-25 09:15:11.801 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:11 compute-0 nova_compute[253538]: 2025-11-25 09:15:11.854 253542 DEBUG nova.compute.manager [None req-f80ef751-508e-439c-a966-b6d22a063f28 - - - - - -] [instance: 3525156a-e9c9-40b7-88f6-db0de5eb3cd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]: {
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:         "osd_id": 1,
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:         "type": "bluestore"
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:     },
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:         "osd_id": 2,
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:         "type": "bluestore"
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:     },
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:         "osd_id": 0,
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:         "type": "bluestore"
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]:     }
Nov 25 09:15:12 compute-0 inspiring_rosalind[415531]: }
Nov 25 09:15:12 compute-0 systemd[1]: libpod-a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630.scope: Deactivated successfully.
Nov 25 09:15:12 compute-0 podman[415514]: 2025-11-25 09:15:12.33712483 +0000 UTC m=+1.139418055 container died a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 09:15:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-96d39c256bc6287f107512198cac140db2836fb5955affaa6e90ec1d8c47eded-merged.mount: Deactivated successfully.
Nov 25 09:15:12 compute-0 podman[415514]: 2025-11-25 09:15:12.401724716 +0000 UTC m=+1.204017951 container remove a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 09:15:12 compute-0 systemd[1]: libpod-conmon-a91c4397a0f96ace4c5d6c270c376dab81744b41255c7f57e189a582dc74f630.scope: Deactivated successfully.
Nov 25 09:15:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2844: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 511 B/s wr, 10 op/s
Nov 25 09:15:12 compute-0 sudo[415407]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:12 compute-0 podman[415572]: 2025-11-25 09:15:12.439247116 +0000 UTC m=+0.068180154 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:15:12 compute-0 podman[415564]: 2025-11-25 09:15:12.444122318 +0000 UTC m=+0.073061847 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 09:15:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:15:12 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:15:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:15:12 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:15:12 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 54f3ffc0-816c-4b09-ac25-26edfcf15506 does not exist
Nov 25 09:15:12 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 816428e8-7017-49dc-8bdd-bbdc72dac66a does not exist
Nov 25 09:15:12 compute-0 sudo[415611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:15:12 compute-0 sudo[415611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:12 compute-0 sudo[415611]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:12 compute-0 sudo[415636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:15:12 compute-0 sudo[415636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:15:12 compute-0 sudo[415636]: pam_unix(sudo:session): session closed for user root
Nov 25 09:15:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:15:13 compute-0 ceph-mon[75015]: pgmap v2844: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 511 B/s wr, 10 op/s
Nov 25 09:15:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:15:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:15:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2845: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:14 compute-0 nova_compute[253538]: 2025-11-25 09:15:14.685 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:15 compute-0 ceph-mon[75015]: pgmap v2845: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2846: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:16 compute-0 nova_compute[253538]: 2025-11-25 09:15:16.805 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:17 compute-0 ceph-mon[75015]: pgmap v2846: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:15:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2847: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:18 compute-0 podman[415662]: 2025-11-25 09:15:18.891253118 +0000 UTC m=+0.131110215 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 09:15:19 compute-0 ceph-mon[75015]: pgmap v2847: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:19 compute-0 nova_compute[253538]: 2025-11-25 09:15:19.687 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2848: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:21 compute-0 ceph-mon[75015]: pgmap v2848: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:21 compute-0 nova_compute[253538]: 2025-11-25 09:15:21.807 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2849: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:15:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:15:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:15:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:15:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:15:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:15:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:15:23 compute-0 ceph-mon[75015]: pgmap v2849: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:23 compute-0 sshd-session[415688]: Invalid user username from 45.78.222.2 port 33138
Nov 25 09:15:23 compute-0 sshd-session[415688]: Received disconnect from 45.78.222.2 port 33138:11: Bye Bye [preauth]
Nov 25 09:15:23 compute-0 sshd-session[415688]: Disconnected from invalid user username 45.78.222.2 port 33138 [preauth]
Nov 25 09:15:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2850: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:24 compute-0 nova_compute[253538]: 2025-11-25 09:15:24.721 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:25 compute-0 ceph-mon[75015]: pgmap v2850: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2851: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:26 compute-0 nova_compute[253538]: 2025-11-25 09:15:26.852 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:27 compute-0 ceph-mon[75015]: pgmap v2851: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:27.977 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:95:46 10.100.0.2 2001:db8::f816:3eff:fe81:9546'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe81:9546/64', 'neutron:device_id': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5830640-550a-474a-a915-1b8e117ec031, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cfc44526-993c-46ae-8c7c-2505531aa9fc) old=Port_Binding(mac=['fa:16:3e:81:95:46 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:15:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:27.979 162739 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cfc44526-993c-46ae-8c7c-2505531aa9fc in datapath f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 updated
Nov 25 09:15:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:27.981 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f86f1e83-b07b-4abd-bc9a-7c03f3634fc6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:15:27 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:27.982 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e224885a-0adf-4b8d-bb70-db43482a7f3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:15:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2852: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:15:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2432785060' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:15:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:15:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2432785060' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:15:29 compute-0 ceph-mon[75015]: pgmap v2852: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2432785060' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:15:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2432785060' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:15:29 compute-0 nova_compute[253538]: 2025-11-25 09:15:29.723 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2853: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:31 compute-0 ceph-mon[75015]: pgmap v2853: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:31 compute-0 nova_compute[253538]: 2025-11-25 09:15:31.855 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2854: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:15:33 compute-0 ceph-mon[75015]: pgmap v2854: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:34 compute-0 nova_compute[253538]: 2025-11-25 09:15:34.190 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "2abdf1f8-0c71-459d-8467-ec8825219eda" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:15:34 compute-0 nova_compute[253538]: 2025-11-25 09:15:34.191 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:15:34 compute-0 nova_compute[253538]: 2025-11-25 09:15:34.204 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:15:34 compute-0 nova_compute[253538]: 2025-11-25 09:15:34.280 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:15:34 compute-0 nova_compute[253538]: 2025-11-25 09:15:34.280 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:15:34 compute-0 nova_compute[253538]: 2025-11-25 09:15:34.291 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:15:34 compute-0 nova_compute[253538]: 2025-11-25 09:15:34.291 253542 INFO nova.compute.claims [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:15:34 compute-0 nova_compute[253538]: 2025-11-25 09:15:34.400 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:15:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2855: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:34 compute-0 nova_compute[253538]: 2025-11-25 09:15:34.725 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:15:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3555988415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:15:34 compute-0 nova_compute[253538]: 2025-11-25 09:15:34.909 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:15:34 compute-0 nova_compute[253538]: 2025-11-25 09:15:34.915 253542 DEBUG nova.compute.provider_tree [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:15:34 compute-0 nova_compute[253538]: 2025-11-25 09:15:34.928 253542 DEBUG nova.scheduler.client.report [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:15:34 compute-0 nova_compute[253538]: 2025-11-25 09:15:34.948 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:15:34 compute-0 nova_compute[253538]: 2025-11-25 09:15:34.950 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.005 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.005 253542 DEBUG nova.network.neutron [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.025 253542 INFO nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.050 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.201 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.204 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.205 253542 INFO nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Creating image(s)
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.240 253542 DEBUG nova.storage.rbd_utils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 2abdf1f8-0c71-459d-8467-ec8825219eda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.267 253542 DEBUG nova.storage.rbd_utils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 2abdf1f8-0c71-459d-8467-ec8825219eda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.293 253542 DEBUG nova.storage.rbd_utils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 2abdf1f8-0c71-459d-8467-ec8825219eda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.296 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.340 253542 DEBUG nova.policy [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.375 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.377 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.378 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.378 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.404 253542 DEBUG nova.storage.rbd_utils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 2abdf1f8-0c71-459d-8467-ec8825219eda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.407 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2abdf1f8-0c71-459d-8467-ec8825219eda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:15:35 compute-0 ceph-mon[75015]: pgmap v2855: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3555988415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.718 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 2abdf1f8-0c71-459d-8467-ec8825219eda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.818 253542 DEBUG nova.storage.rbd_utils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 2abdf1f8-0c71-459d-8467-ec8825219eda_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.923 253542 DEBUG nova.objects.instance [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 2abdf1f8-0c71-459d-8467-ec8825219eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.936 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.937 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Ensure instance console log exists: /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.937 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.938 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:15:35 compute-0 nova_compute[253538]: 2025-11-25 09:15:35.938 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:15:36 compute-0 sshd-session[415690]: Connection closed by 45.78.217.205 port 35524 [preauth]
Nov 25 09:15:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2856: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:36 compute-0 nova_compute[253538]: 2025-11-25 09:15:36.844 253542 DEBUG nova.network.neutron [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Successfully created port: f30cb228-eac2-4d17-a356-bec8d6ae142a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:15:36 compute-0 nova_compute[253538]: 2025-11-25 09:15:36.859 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:37 compute-0 nova_compute[253538]: 2025-11-25 09:15:37.458 253542 DEBUG nova.network.neutron [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Successfully updated port: f30cb228-eac2-4d17-a356-bec8d6ae142a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:15:37 compute-0 nova_compute[253538]: 2025-11-25 09:15:37.475 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:15:37 compute-0 nova_compute[253538]: 2025-11-25 09:15:37.475 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:15:37 compute-0 nova_compute[253538]: 2025-11-25 09:15:37.476 253542 DEBUG nova.network.neutron [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:15:37 compute-0 nova_compute[253538]: 2025-11-25 09:15:37.552 253542 DEBUG nova.compute.manager [req-c02adc78-893d-4378-97b0-ef9c2b4403ff req-e204fc5e-aa03-47c2-baf2-c04154eaba80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-changed-f30cb228-eac2-4d17-a356-bec8d6ae142a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:15:37 compute-0 nova_compute[253538]: 2025-11-25 09:15:37.552 253542 DEBUG nova.compute.manager [req-c02adc78-893d-4378-97b0-ef9c2b4403ff req-e204fc5e-aa03-47c2-baf2-c04154eaba80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Refreshing instance network info cache due to event network-changed-f30cb228-eac2-4d17-a356-bec8d6ae142a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:15:37 compute-0 nova_compute[253538]: 2025-11-25 09:15:37.552 253542 DEBUG oslo_concurrency.lockutils [req-c02adc78-893d-4378-97b0-ef9c2b4403ff req-e204fc5e-aa03-47c2-baf2-c04154eaba80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:15:37 compute-0 ceph-mon[75015]: pgmap v2856: 321 pgs: 321 active+clean; 88 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:15:37 compute-0 nova_compute[253538]: 2025-11-25 09:15:37.818 253542 DEBUG nova.network.neutron [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:15:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:15:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2857: 321 pgs: 321 active+clean; 114 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.865 253542 DEBUG nova.network.neutron [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updating instance_info_cache with network_info: [{"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.896 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.897 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Instance network_info: |[{"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.898 253542 DEBUG oslo_concurrency.lockutils [req-c02adc78-893d-4378-97b0-ef9c2b4403ff req-e204fc5e-aa03-47c2-baf2-c04154eaba80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.898 253542 DEBUG nova.network.neutron [req-c02adc78-893d-4378-97b0-ef9c2b4403ff req-e204fc5e-aa03-47c2-baf2-c04154eaba80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Refreshing network info cache for port f30cb228-eac2-4d17-a356-bec8d6ae142a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.903 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Start _get_guest_xml network_info=[{"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.909 253542 WARNING nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.920 253542 DEBUG nova.virt.libvirt.host [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.922 253542 DEBUG nova.virt.libvirt.host [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.928 253542 DEBUG nova.virt.libvirt.host [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.929 253542 DEBUG nova.virt.libvirt.host [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.929 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.930 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.931 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.931 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.932 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.932 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.933 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.933 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.934 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.934 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.935 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.935 253542 DEBUG nova.virt.hardware [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:15:38 compute-0 nova_compute[253538]: 2025-11-25 09:15:38.941 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:15:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:15:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979270653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.434 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.460 253542 DEBUG nova.storage.rbd_utils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 2abdf1f8-0c71-459d-8467-ec8825219eda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.464 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:15:39 compute-0 ceph-mon[75015]: pgmap v2857: 321 pgs: 321 active+clean; 114 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Nov 25 09:15:39 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3979270653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.726 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:15:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3441562930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.930 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.932 253542 DEBUG nova.virt.libvirt.vif [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:15:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-5134712',display_name='tempest-TestGettingAddress-server-5134712',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-5134712',id=150,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyv667Z2ABnyPVBrZx9jd9gq6P58X6si+s81we9pUqnDsWI1jnpnGU0fnIp9UQ/Apxt3tS4iccd2fLnQpe7TCkxqUAyHZSIrSlnLfmfHodlWycVpxB5TAoXVMJ4xTQPeA==',key_name='tempest-TestGettingAddress-1435340358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-s9d0d0tr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:15:35Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=2abdf1f8-0c71-459d-8467-ec8825219eda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.932 253542 DEBUG nova.network.os_vif_util [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.933 253542 DEBUG nova.network.os_vif_util [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:b8:78,bridge_name='br-int',has_traffic_filtering=True,id=f30cb228-eac2-4d17-a356-bec8d6ae142a,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf30cb228-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.935 253542 DEBUG nova.objects.instance [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2abdf1f8-0c71-459d-8467-ec8825219eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.954 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:15:39 compute-0 nova_compute[253538]:   <uuid>2abdf1f8-0c71-459d-8467-ec8825219eda</uuid>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   <name>instance-00000096</name>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <nova:name>tempest-TestGettingAddress-server-5134712</nova:name>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:15:38</nova:creationTime>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:15:39 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:15:39 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:15:39 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:15:39 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:15:39 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:15:39 compute-0 nova_compute[253538]:         <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 09:15:39 compute-0 nova_compute[253538]:         <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:15:39 compute-0 nova_compute[253538]:         <nova:port uuid="f30cb228-eac2-4d17-a356-bec8d6ae142a">
Nov 25 09:15:39 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fede:b878" ipVersion="6"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <system>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <entry name="serial">2abdf1f8-0c71-459d-8467-ec8825219eda</entry>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <entry name="uuid">2abdf1f8-0c71-459d-8467-ec8825219eda</entry>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     </system>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   <os>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   </os>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   <features>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   </features>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/2abdf1f8-0c71-459d-8467-ec8825219eda_disk">
Nov 25 09:15:39 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       </source>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:15:39 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/2abdf1f8-0c71-459d-8467-ec8825219eda_disk.config">
Nov 25 09:15:39 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       </source>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:15:39 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:de:b8:78"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <target dev="tapf30cb228-ea"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda/console.log" append="off"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <video>
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     </video>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:15:39 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:15:39 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:15:39 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:15:39 compute-0 nova_compute[253538]: </domain>
Nov 25 09:15:39 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.955 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Preparing to wait for external event network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.956 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.956 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.956 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.957 253542 DEBUG nova.virt.libvirt.vif [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:15:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-5134712',display_name='tempest-TestGettingAddress-server-5134712',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-5134712',id=150,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyv667Z2ABnyPVBrZx9jd9gq6P58X6si+s81we9pUqnDsWI1jnpnGU0fnIp9UQ/Apxt3tS4iccd2fLnQpe7TCkxqUAyHZSIrSlnLfmfHodlWycVpxB5TAoXVMJ4xTQPeA==',key_name='tempest-TestGettingAddress-1435340358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-s9d0d0tr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:15:35Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=2abdf1f8-0c71-459d-8467-ec8825219eda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.958 253542 DEBUG nova.network.os_vif_util [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.958 253542 DEBUG nova.network.os_vif_util [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:b8:78,bridge_name='br-int',has_traffic_filtering=True,id=f30cb228-eac2-4d17-a356-bec8d6ae142a,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf30cb228-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.959 253542 DEBUG os_vif [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:b8:78,bridge_name='br-int',has_traffic_filtering=True,id=f30cb228-eac2-4d17-a356-bec8d6ae142a,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf30cb228-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.959 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.960 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.960 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.964 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.964 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf30cb228-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.965 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf30cb228-ea, col_values=(('external_ids', {'iface-id': 'f30cb228-eac2-4d17-a356-bec8d6ae142a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:b8:78', 'vm-uuid': '2abdf1f8-0c71-459d-8467-ec8825219eda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.966 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:39 compute-0 NetworkManager[48915]: <info>  [1764062139.9679] manager: (tapf30cb228-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/656)
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:39 compute-0 nova_compute[253538]: 2025-11-25 09:15:39.974 253542 INFO os_vif [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:b8:78,bridge_name='br-int',has_traffic_filtering=True,id=f30cb228-eac2-4d17-a356-bec8d6ae142a,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf30cb228-ea')
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.015 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.016 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.016 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:de:b8:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.017 253542 INFO nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Using config drive
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.044 253542 DEBUG nova.storage.rbd_utils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 2abdf1f8-0c71-459d-8467-ec8825219eda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.341 253542 INFO nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Creating config drive at /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda/disk.config
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.348 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5ci664wf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:15:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2858: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.444 253542 DEBUG nova.network.neutron [req-c02adc78-893d-4378-97b0-ef9c2b4403ff req-e204fc5e-aa03-47c2-baf2-c04154eaba80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updated VIF entry in instance network info cache for port f30cb228-eac2-4d17-a356-bec8d6ae142a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.445 253542 DEBUG nova.network.neutron [req-c02adc78-893d-4378-97b0-ef9c2b4403ff req-e204fc5e-aa03-47c2-baf2-c04154eaba80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updating instance_info_cache with network_info: [{"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.459 253542 DEBUG oslo_concurrency.lockutils [req-c02adc78-893d-4378-97b0-ef9c2b4403ff req-e204fc5e-aa03-47c2-baf2-c04154eaba80 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.490 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5ci664wf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.515 253542 DEBUG nova.storage.rbd_utils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 2abdf1f8-0c71-459d-8467-ec8825219eda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.519 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda/disk.config 2abdf1f8-0c71-459d-8467-ec8825219eda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:15:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3441562930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.694 253542 DEBUG oslo_concurrency.processutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda/disk.config 2abdf1f8-0c71-459d-8467-ec8825219eda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.695 253542 INFO nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Deleting local config drive /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda/disk.config because it was imported into RBD.
Nov 25 09:15:40 compute-0 kernel: tapf30cb228-ea: entered promiscuous mode
Nov 25 09:15:40 compute-0 NetworkManager[48915]: <info>  [1764062140.7599] manager: (tapf30cb228-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/657)
Nov 25 09:15:40 compute-0 ovn_controller[152859]: 2025-11-25T09:15:40Z|01596|binding|INFO|Claiming lport f30cb228-eac2-4d17-a356-bec8d6ae142a for this chassis.
Nov 25 09:15:40 compute-0 ovn_controller[152859]: 2025-11-25T09:15:40Z|01597|binding|INFO|f30cb228-eac2-4d17-a356-bec8d6ae142a: Claiming fa:16:3e:de:b8:78 10.100.0.7 2001:db8::f816:3eff:fede:b878
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.762 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.779 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:40 compute-0 systemd-udevd[416013]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:15:40 compute-0 systemd-machined[215790]: New machine qemu-180-instance-00000096.
Nov 25 09:15:40 compute-0 NetworkManager[48915]: <info>  [1764062140.8077] device (tapf30cb228-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:15:40 compute-0 NetworkManager[48915]: <info>  [1764062140.8095] device (tapf30cb228-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:15:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.809 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:b8:78 10.100.0.7 2001:db8::f816:3eff:fede:b878'], port_security=['fa:16:3e:de:b8:78 10.100.0.7 2001:db8::f816:3eff:fede:b878'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8::f816:3eff:fede:b878/64', 'neutron:device_id': '2abdf1f8-0c71-459d-8467-ec8825219eda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '15198f79-2199-4ce9-8f4e-9ae43d3cedee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5830640-550a-474a-a915-1b8e117ec031, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f30cb228-eac2-4d17-a356-bec8d6ae142a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:15:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.811 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f30cb228-eac2-4d17-a356-bec8d6ae142a in datapath f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 bound to our chassis
Nov 25 09:15:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.812 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f86f1e83-b07b-4abd-bc9a-7c03f3634fc6
Nov 25 09:15:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.824 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[05a63c23-68a5-46b5-852d-71461c63bb28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.825 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf86f1e83-b1 in ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:15:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.827 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf86f1e83-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:15:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.827 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c126a8-8b8c-42a9-b76b-0f332f002fc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.828 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c75cca-d34f-4847-871e-fc27abd7573e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.839 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[e9fcf106-0fd7-4402-a3fd-71d33a4d5f64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:40 compute-0 systemd[1]: Started Virtual Machine qemu-180-instance-00000096.
Nov 25 09:15:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.866 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f71455e-4453-4ea2-b41a-1019aad61be4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.872 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:40 compute-0 ovn_controller[152859]: 2025-11-25T09:15:40Z|01598|binding|INFO|Setting lport f30cb228-eac2-4d17-a356-bec8d6ae142a ovn-installed in OVS
Nov 25 09:15:40 compute-0 ovn_controller[152859]: 2025-11-25T09:15:40Z|01599|binding|INFO|Setting lport f30cb228-eac2-4d17-a356-bec8d6ae142a up in Southbound
Nov 25 09:15:40 compute-0 nova_compute[253538]: 2025-11-25 09:15:40.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.897 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[7eaed574-a2b0-4adc-8c6a-8f2804f4bb82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.903 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[076bad99-0288-41f8-8911-daf291f0ea90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:40 compute-0 NetworkManager[48915]: <info>  [1764062140.9041] manager: (tapf86f1e83-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/658)
Nov 25 09:15:40 compute-0 systemd-udevd[416016]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:15:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.942 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9f4b7a-e04c-49bc-9d8d-ccc1ad60917a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.944 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[49246896-1bf6-4f95-b656-1fa3fef8cdd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:40 compute-0 NetworkManager[48915]: <info>  [1764062140.9717] device (tapf86f1e83-b0): carrier: link connected
Nov 25 09:15:40 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.978 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[a877a279-1f92-4ce1-97bb-478a192911dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:40.997 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[af68bba5-8839-4e57-9491-5cc25065fe08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf86f1e83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:95:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 459], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749557, 'reachable_time': 25638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416049, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.012 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[915571c7-7420-49bb-a55f-e16669c5fd37]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:9546'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749557, 'tstamp': 749557}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416050, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.034 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[24fb9a4a-1af7-4ab0-b15c-b0b091a78c4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf86f1e83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:95:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 459], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749557, 'reachable_time': 25638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 416051, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.066 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e8c27d-a677-4c0f-a1a2-191153c2d29a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.099 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.099 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.100 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.123 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f61fe2eb-70a8-4fea-ac3e-fa44a69be98b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.125 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86f1e83-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.125 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.126 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf86f1e83-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:15:41 compute-0 nova_compute[253538]: 2025-11-25 09:15:41.128 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:41 compute-0 kernel: tapf86f1e83-b0: entered promiscuous mode
Nov 25 09:15:41 compute-0 NetworkManager[48915]: <info>  [1764062141.1293] manager: (tapf86f1e83-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/659)
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.133 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf86f1e83-b0, col_values=(('external_ids', {'iface-id': 'cfc44526-993c-46ae-8c7c-2505531aa9fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:15:41 compute-0 ovn_controller[152859]: 2025-11-25T09:15:41Z|01600|binding|INFO|Releasing lport cfc44526-993c-46ae-8c7c-2505531aa9fc from this chassis (sb_readonly=0)
Nov 25 09:15:41 compute-0 nova_compute[253538]: 2025-11-25 09:15:41.134 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:41 compute-0 nova_compute[253538]: 2025-11-25 09:15:41.134 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.135 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f86f1e83-b07b-4abd-bc9a-7c03f3634fc6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f86f1e83-b07b-4abd-bc9a-7c03f3634fc6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.136 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ce32f5-001f-4000-b540-891dcdbbe7df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.137 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/f86f1e83-b07b-4abd-bc9a-7c03f3634fc6.pid.haproxy
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID f86f1e83-b07b-4abd-bc9a-7c03f3634fc6
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:15:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:15:41.137 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'env', 'PROCESS_TAG=haproxy-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f86f1e83-b07b-4abd-bc9a-7c03f3634fc6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:15:41 compute-0 nova_compute[253538]: 2025-11-25 09:15:41.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:41 compute-0 nova_compute[253538]: 2025-11-25 09:15:41.325 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062141.3250024, 2abdf1f8-0c71-459d-8467-ec8825219eda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:15:41 compute-0 nova_compute[253538]: 2025-11-25 09:15:41.325 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] VM Started (Lifecycle Event)
Nov 25 09:15:41 compute-0 nova_compute[253538]: 2025-11-25 09:15:41.343 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:15:41 compute-0 nova_compute[253538]: 2025-11-25 09:15:41.347 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062141.3272762, 2abdf1f8-0c71-459d-8467-ec8825219eda => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:15:41 compute-0 nova_compute[253538]: 2025-11-25 09:15:41.347 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] VM Paused (Lifecycle Event)
Nov 25 09:15:41 compute-0 nova_compute[253538]: 2025-11-25 09:15:41.362 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:15:41 compute-0 nova_compute[253538]: 2025-11-25 09:15:41.365 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:15:41 compute-0 nova_compute[253538]: 2025-11-25 09:15:41.382 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:15:41 compute-0 podman[416125]: 2025-11-25 09:15:41.536086965 +0000 UTC m=+0.064919306 container create 919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 09:15:41 compute-0 systemd[1]: Started libpod-conmon-919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb.scope.
Nov 25 09:15:41 compute-0 ceph-mon[75015]: pgmap v2858: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:15:41 compute-0 podman[416125]: 2025-11-25 09:15:41.498396761 +0000 UTC m=+0.027229152 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:15:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/724b3d88118a671a8ffba4a262a67c17eb9917c6cc062f3440b0fe5cf48c7928/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:15:41 compute-0 podman[416125]: 2025-11-25 09:15:41.654819802 +0000 UTC m=+0.183652213 container init 919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:15:41 compute-0 podman[416125]: 2025-11-25 09:15:41.661780082 +0000 UTC m=+0.190612423 container start 919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 09:15:41 compute-0 neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6[416140]: [NOTICE]   (416144) : New worker (416146) forked
Nov 25 09:15:41 compute-0 neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6[416140]: [NOTICE]   (416144) : Loading success.
Nov 25 09:15:42 compute-0 sshd-session[416018]: Invalid user openbravo from 45.202.211.6 port 54364
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.011 253542 DEBUG nova.compute.manager [req-07f9b31a-0625-4cba-9662-def9a614be31 req-b7ff6f96-38f2-4940-8afa-778c3daabc9c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.013 253542 DEBUG oslo_concurrency.lockutils [req-07f9b31a-0625-4cba-9662-def9a614be31 req-b7ff6f96-38f2-4940-8afa-778c3daabc9c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.013 253542 DEBUG oslo_concurrency.lockutils [req-07f9b31a-0625-4cba-9662-def9a614be31 req-b7ff6f96-38f2-4940-8afa-778c3daabc9c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.013 253542 DEBUG oslo_concurrency.lockutils [req-07f9b31a-0625-4cba-9662-def9a614be31 req-b7ff6f96-38f2-4940-8afa-778c3daabc9c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.013 253542 DEBUG nova.compute.manager [req-07f9b31a-0625-4cba-9662-def9a614be31 req-b7ff6f96-38f2-4940-8afa-778c3daabc9c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Processing event network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.014 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.018 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062142.0183544, 2abdf1f8-0c71-459d-8467-ec8825219eda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.018 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] VM Resumed (Lifecycle Event)
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.020 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.024 253542 INFO nova.virt.libvirt.driver [-] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Instance spawned successfully.
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.024 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.035 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.040 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.043 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.043 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.044 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.044 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.044 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.045 253542 DEBUG nova.virt.libvirt.driver [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.065 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:15:42 compute-0 sshd-session[416018]: Received disconnect from 45.202.211.6 port 54364:11: Bye Bye [preauth]
Nov 25 09:15:42 compute-0 sshd-session[416018]: Disconnected from invalid user openbravo 45.202.211.6 port 54364 [preauth]
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.274 253542 INFO nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Took 7.07 seconds to spawn the instance on the hypervisor.
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.275 253542 DEBUG nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.394 253542 INFO nova.compute.manager [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Took 8.15 seconds to build instance.
Nov 25 09:15:42 compute-0 nova_compute[253538]: 2025-11-25 09:15:42.424 253542 DEBUG oslo_concurrency.lockutils [None req-53df080d-df0f-4c21-9484-79434ec815e6 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:15:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2859: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:15:42 compute-0 podman[416156]: 2025-11-25 09:15:42.826421972 +0000 UTC m=+0.062827129 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 09:15:42 compute-0 podman[416155]: 2025-11-25 09:15:42.836150276 +0000 UTC m=+0.077897079 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 09:15:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:15:43 compute-0 ceph-mon[75015]: pgmap v2859: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:15:44 compute-0 nova_compute[253538]: 2025-11-25 09:15:44.074 253542 DEBUG nova.compute.manager [req-996c0461-7282-4bcb-ae21-12cee14b5783 req-6b5331ea-a2b5-41f4-8111-0f7e4d6352bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:15:44 compute-0 nova_compute[253538]: 2025-11-25 09:15:44.075 253542 DEBUG oslo_concurrency.lockutils [req-996c0461-7282-4bcb-ae21-12cee14b5783 req-6b5331ea-a2b5-41f4-8111-0f7e4d6352bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:15:44 compute-0 nova_compute[253538]: 2025-11-25 09:15:44.075 253542 DEBUG oslo_concurrency.lockutils [req-996c0461-7282-4bcb-ae21-12cee14b5783 req-6b5331ea-a2b5-41f4-8111-0f7e4d6352bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:15:44 compute-0 nova_compute[253538]: 2025-11-25 09:15:44.076 253542 DEBUG oslo_concurrency.lockutils [req-996c0461-7282-4bcb-ae21-12cee14b5783 req-6b5331ea-a2b5-41f4-8111-0f7e4d6352bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:15:44 compute-0 nova_compute[253538]: 2025-11-25 09:15:44.076 253542 DEBUG nova.compute.manager [req-996c0461-7282-4bcb-ae21-12cee14b5783 req-6b5331ea-a2b5-41f4-8111-0f7e4d6352bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] No waiting events found dispatching network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:15:44 compute-0 nova_compute[253538]: 2025-11-25 09:15:44.076 253542 WARNING nova.compute.manager [req-996c0461-7282-4bcb-ae21-12cee14b5783 req-6b5331ea-a2b5-41f4-8111-0f7e4d6352bb b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received unexpected event network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a for instance with vm_state active and task_state None.
Nov 25 09:15:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2860: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 25 09:15:44 compute-0 nova_compute[253538]: 2025-11-25 09:15:44.767 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:44 compute-0 nova_compute[253538]: 2025-11-25 09:15:44.968 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:45 compute-0 ceph-mon[75015]: pgmap v2860: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 25 09:15:46 compute-0 ovn_controller[152859]: 2025-11-25T09:15:46Z|01601|binding|INFO|Releasing lport cfc44526-993c-46ae-8c7c-2505531aa9fc from this chassis (sb_readonly=0)
Nov 25 09:15:46 compute-0 NetworkManager[48915]: <info>  [1764062146.2223] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Nov 25 09:15:46 compute-0 nova_compute[253538]: 2025-11-25 09:15:46.222 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:46 compute-0 NetworkManager[48915]: <info>  [1764062146.2234] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Nov 25 09:15:46 compute-0 nova_compute[253538]: 2025-11-25 09:15:46.254 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:46 compute-0 ovn_controller[152859]: 2025-11-25T09:15:46Z|01602|binding|INFO|Releasing lport cfc44526-993c-46ae-8c7c-2505531aa9fc from this chassis (sb_readonly=0)
Nov 25 09:15:46 compute-0 nova_compute[253538]: 2025-11-25 09:15:46.256 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2861: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 944 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Nov 25 09:15:46 compute-0 nova_compute[253538]: 2025-11-25 09:15:46.996 253542 DEBUG nova.compute.manager [req-3d58a869-6447-4f91-ad48-e0f8657eeee4 req-8454f4d3-fc12-41f7-b253-eb111e106818 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-changed-f30cb228-eac2-4d17-a356-bec8d6ae142a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:15:46 compute-0 nova_compute[253538]: 2025-11-25 09:15:46.996 253542 DEBUG nova.compute.manager [req-3d58a869-6447-4f91-ad48-e0f8657eeee4 req-8454f4d3-fc12-41f7-b253-eb111e106818 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Refreshing instance network info cache due to event network-changed-f30cb228-eac2-4d17-a356-bec8d6ae142a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:15:46 compute-0 nova_compute[253538]: 2025-11-25 09:15:46.996 253542 DEBUG oslo_concurrency.lockutils [req-3d58a869-6447-4f91-ad48-e0f8657eeee4 req-8454f4d3-fc12-41f7-b253-eb111e106818 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:15:46 compute-0 nova_compute[253538]: 2025-11-25 09:15:46.997 253542 DEBUG oslo_concurrency.lockutils [req-3d58a869-6447-4f91-ad48-e0f8657eeee4 req-8454f4d3-fc12-41f7-b253-eb111e106818 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:15:46 compute-0 nova_compute[253538]: 2025-11-25 09:15:46.997 253542 DEBUG nova.network.neutron [req-3d58a869-6447-4f91-ad48-e0f8657eeee4 req-8454f4d3-fc12-41f7-b253-eb111e106818 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Refreshing network info cache for port f30cb228-eac2-4d17-a356-bec8d6ae142a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:15:47 compute-0 ceph-mon[75015]: pgmap v2861: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 944 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Nov 25 09:15:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:15:48 compute-0 nova_compute[253538]: 2025-11-25 09:15:48.422 253542 DEBUG nova.network.neutron [req-3d58a869-6447-4f91-ad48-e0f8657eeee4 req-8454f4d3-fc12-41f7-b253-eb111e106818 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updated VIF entry in instance network info cache for port f30cb228-eac2-4d17-a356-bec8d6ae142a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:15:48 compute-0 nova_compute[253538]: 2025-11-25 09:15:48.423 253542 DEBUG nova.network.neutron [req-3d58a869-6447-4f91-ad48-e0f8657eeee4 req-8454f4d3-fc12-41f7-b253-eb111e106818 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updating instance_info_cache with network_info: [{"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:15:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2862: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 09:15:48 compute-0 nova_compute[253538]: 2025-11-25 09:15:48.441 253542 DEBUG oslo_concurrency.lockutils [req-3d58a869-6447-4f91-ad48-e0f8657eeee4 req-8454f4d3-fc12-41f7-b253-eb111e106818 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:15:49 compute-0 ceph-mon[75015]: pgmap v2862: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 09:15:49 compute-0 nova_compute[253538]: 2025-11-25 09:15:49.796 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:15:49 compute-0 nova_compute[253538]: 2025-11-25 09:15:49.802 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:49 compute-0 podman[416196]: 2025-11-25 09:15:49.859249882 +0000 UTC m=+0.106365362 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 09:15:49 compute-0 nova_compute[253538]: 2025-11-25 09:15:49.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2863: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 477 KiB/s wr, 75 op/s
Nov 25 09:15:51 compute-0 ceph-mon[75015]: pgmap v2863: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 477 KiB/s wr, 75 op/s
Nov 25 09:15:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2864: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:15:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:15:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:15:53
Nov 25 09:15:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:15:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:15:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'default.rgw.meta', 'volumes']
Nov 25 09:15:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:15:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:15:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:15:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:15:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:15:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:15:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:15:53 compute-0 nova_compute[253538]: 2025-11-25 09:15:53.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:15:53 compute-0 ceph-mon[75015]: pgmap v2864: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:15:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:15:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:15:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:15:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:15:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:15:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:15:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:15:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:15:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:15:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:15:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2865: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:15:54 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 09:15:54 compute-0 nova_compute[253538]: 2025-11-25 09:15:54.805 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:54 compute-0 nova_compute[253538]: 2025-11-25 09:15:54.971 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:55 compute-0 ovn_controller[152859]: 2025-11-25T09:15:55Z|00207|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:b8:78 10.100.0.7
Nov 25 09:15:55 compute-0 ovn_controller[152859]: 2025-11-25T09:15:55Z|00208|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:b8:78 10.100.0.7
Nov 25 09:15:55 compute-0 ceph-mon[75015]: pgmap v2865: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:15:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2866: 321 pgs: 321 active+clean; 151 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.0 MiB/s wr, 80 op/s
Nov 25 09:15:57 compute-0 ceph-mon[75015]: pgmap v2866: 321 pgs: 321 active+clean; 151 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.0 MiB/s wr, 80 op/s
Nov 25 09:15:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:15:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2867: 321 pgs: 321 active+clean; 164 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 09:15:58 compute-0 nova_compute[253538]: 2025-11-25 09:15:58.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:15:58 compute-0 nova_compute[253538]: 2025-11-25 09:15:58.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:15:58 compute-0 nova_compute[253538]: 2025-11-25 09:15:58.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:15:58 compute-0 nova_compute[253538]: 2025-11-25 09:15:58.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:15:59 compute-0 ceph-mon[75015]: pgmap v2867: 321 pgs: 321 active+clean; 164 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 09:15:59 compute-0 nova_compute[253538]: 2025-11-25 09:15:59.807 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:15:59 compute-0 nova_compute[253538]: 2025-11-25 09:15:59.823 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:15:59 compute-0 nova_compute[253538]: 2025-11-25 09:15:59.824 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquired lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:15:59 compute-0 nova_compute[253538]: 2025-11-25 09:15:59.824 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 09:15:59 compute-0 nova_compute[253538]: 2025-11-25 09:15:59.824 253542 DEBUG nova.objects.instance [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2abdf1f8-0c71-459d-8467-ec8825219eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:15:59 compute-0 nova_compute[253538]: 2025-11-25 09:15:59.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2868: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:16:01 compute-0 ceph-mon[75015]: pgmap v2868: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:16:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2869: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:16:02 compute-0 nova_compute[253538]: 2025-11-25 09:16:02.926 253542 DEBUG nova.network.neutron [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updating instance_info_cache with network_info: [{"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:16:02 compute-0 nova_compute[253538]: 2025-11-25 09:16:02.938 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Releasing lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:16:02 compute-0 nova_compute[253538]: 2025-11-25 09:16:02.938 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 09:16:02 compute-0 nova_compute[253538]: 2025-11-25 09:16:02.939 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:16:02 compute-0 nova_compute[253538]: 2025-11-25 09:16:02.939 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:16:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:16:03 compute-0 nova_compute[253538]: 2025-11-25 09:16:03.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:16:03 compute-0 ceph-mon[75015]: pgmap v2869: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2870: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:16:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:16:04 compute-0 nova_compute[253538]: 2025-11-25 09:16:04.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:16:04 compute-0 nova_compute[253538]: 2025-11-25 09:16:04.811 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:04 compute-0 nova_compute[253538]: 2025-11-25 09:16:04.975 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:05 compute-0 ceph-mon[75015]: pgmap v2870: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:16:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2871: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:16:06 compute-0 nova_compute[253538]: 2025-11-25 09:16:06.461 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "645b40f5-7a87-4de2-8b13-a340bcffd14b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:06 compute-0 nova_compute[253538]: 2025-11-25 09:16:06.462 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:06 compute-0 nova_compute[253538]: 2025-11-25 09:16:06.533 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:16:06 compute-0 nova_compute[253538]: 2025-11-25 09:16:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:16:06 compute-0 nova_compute[253538]: 2025-11-25 09:16:06.623 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:06 compute-0 nova_compute[253538]: 2025-11-25 09:16:06.624 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:06 compute-0 nova_compute[253538]: 2025-11-25 09:16:06.636 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:16:06 compute-0 nova_compute[253538]: 2025-11-25 09:16:06.636 253542 INFO nova.compute.claims [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:16:06 compute-0 nova_compute[253538]: 2025-11-25 09:16:06.878 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:16:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:16:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1428635491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.423 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.430 253542 DEBUG nova.compute.provider_tree [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.442 253542 DEBUG nova.scheduler.client.report [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.461 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.462 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.519 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.520 253542 DEBUG nova.network.neutron [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.661 253542 INFO nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:16:07 compute-0 ceph-mon[75015]: pgmap v2871: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:16:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1428635491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.708 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.863 253542 DEBUG nova.policy [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9fb13d4ba9041458692330b7276232f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.871 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.872 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.873 253542 INFO nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Creating image(s)
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.897 253542 DEBUG nova.storage.rbd_utils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.924 253542 DEBUG nova.storage.rbd_utils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.949 253542 DEBUG nova.storage.rbd_utils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:16:07 compute-0 nova_compute[253538]: 2025-11-25 09:16:07.953 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:16:08 compute-0 nova_compute[253538]: 2025-11-25 09:16:08.033 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:16:08 compute-0 nova_compute[253538]: 2025-11-25 09:16:08.034 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:08 compute-0 nova_compute[253538]: 2025-11-25 09:16:08.035 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:08 compute-0 nova_compute[253538]: 2025-11-25 09:16:08.035 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:08 compute-0 nova_compute[253538]: 2025-11-25 09:16:08.058 253542 DEBUG nova.storage.rbd_utils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:16:08 compute-0 nova_compute[253538]: 2025-11-25 09:16:08.060 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:16:08 compute-0 nova_compute[253538]: 2025-11-25 09:16:08.361 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:16:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:16:08 compute-0 nova_compute[253538]: 2025-11-25 09:16:08.434 253542 DEBUG nova.storage.rbd_utils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] resizing rbd image 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:16:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2872: 321 pgs: 321 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 255 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Nov 25 09:16:08 compute-0 nova_compute[253538]: 2025-11-25 09:16:08.542 253542 DEBUG nova.objects.instance [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'migration_context' on Instance uuid 645b40f5-7a87-4de2-8b13-a340bcffd14b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:16:08 compute-0 nova_compute[253538]: 2025-11-25 09:16:08.613 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:16:08 compute-0 nova_compute[253538]: 2025-11-25 09:16:08.614 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Ensure instance console log exists: /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:16:08 compute-0 nova_compute[253538]: 2025-11-25 09:16:08.615 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:08 compute-0 nova_compute[253538]: 2025-11-25 09:16:08.615 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:08 compute-0 nova_compute[253538]: 2025-11-25 09:16:08.616 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:09 compute-0 ceph-mon[75015]: pgmap v2872: 321 pgs: 321 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 255 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Nov 25 09:16:09 compute-0 nova_compute[253538]: 2025-11-25 09:16:09.812 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:09 compute-0 nova_compute[253538]: 2025-11-25 09:16:09.977 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2873: 321 pgs: 321 active+clean; 202 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Nov 25 09:16:10 compute-0 nova_compute[253538]: 2025-11-25 09:16:10.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:16:10 compute-0 nova_compute[253538]: 2025-11-25 09:16:10.593 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:10 compute-0 nova_compute[253538]: 2025-11-25 09:16:10.593 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:10 compute-0 nova_compute[253538]: 2025-11-25 09:16:10.594 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:10 compute-0 nova_compute[253538]: 2025-11-25 09:16:10.594 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:16:10 compute-0 nova_compute[253538]: 2025-11-25 09:16:10.594 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:16:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:10.973 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:16:10 compute-0 nova_compute[253538]: 2025-11-25 09:16:10.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:10.976 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:16:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:16:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417734888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.027 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.089 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.090 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.198 253542 DEBUG nova.network.neutron [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Successfully created port: 53302c95-cc0c-4237-a7f3-dca02953a876 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.274 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.275 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3428MB free_disk=59.928916931152344GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.275 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.276 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.334 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 2abdf1f8-0c71-459d-8467-ec8825219eda actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.334 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 645b40f5-7a87-4de2-8b13-a340bcffd14b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.334 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.334 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.377 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:16:11 compute-0 ceph-mon[75015]: pgmap v2873: 321 pgs: 321 active+clean; 202 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Nov 25 09:16:11 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3417734888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:16:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:16:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1834755696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.867 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.876 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.897 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.922 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:16:11 compute-0 nova_compute[253538]: 2025-11-25 09:16:11.922 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2874: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 09:16:12 compute-0 sudo[416453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:16:12 compute-0 sudo[416453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:12 compute-0 sudo[416453]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:12 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1834755696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:16:12 compute-0 nova_compute[253538]: 2025-11-25 09:16:12.714 253542 DEBUG nova.network.neutron [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Successfully updated port: 53302c95-cc0c-4237-a7f3-dca02953a876 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:16:12 compute-0 nova_compute[253538]: 2025-11-25 09:16:12.733 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:16:12 compute-0 nova_compute[253538]: 2025-11-25 09:16:12.733 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquired lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:16:12 compute-0 nova_compute[253538]: 2025-11-25 09:16:12.734 253542 DEBUG nova.network.neutron [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:16:12 compute-0 sudo[416478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:16:12 compute-0 sudo[416478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:12 compute-0 sudo[416478]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:12 compute-0 nova_compute[253538]: 2025-11-25 09:16:12.822 253542 DEBUG nova.compute.manager [req-08ee7e48-bfcc-4c15-a328-6fe3e2048303 req-41556e18-1e02-4977-bfab-ec40d4927f19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-changed-53302c95-cc0c-4237-a7f3-dca02953a876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:16:12 compute-0 nova_compute[253538]: 2025-11-25 09:16:12.823 253542 DEBUG nova.compute.manager [req-08ee7e48-bfcc-4c15-a328-6fe3e2048303 req-41556e18-1e02-4977-bfab-ec40d4927f19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Refreshing instance network info cache due to event network-changed-53302c95-cc0c-4237-a7f3-dca02953a876. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:16:12 compute-0 nova_compute[253538]: 2025-11-25 09:16:12.823 253542 DEBUG oslo_concurrency.lockutils [req-08ee7e48-bfcc-4c15-a328-6fe3e2048303 req-41556e18-1e02-4977-bfab-ec40d4927f19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:16:12 compute-0 sudo[416503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:16:12 compute-0 sudo[416503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:12 compute-0 sudo[416503]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:12 compute-0 nova_compute[253538]: 2025-11-25 09:16:12.902 253542 DEBUG nova.network.neutron [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:16:12 compute-0 sudo[416528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 09:16:12 compute-0 sudo[416528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:12 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:12.979 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:16:13 compute-0 podman[416552]: 2025-11-25 09:16:13.023441748 +0000 UTC m=+0.079372908 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 09:16:13 compute-0 podman[416553]: 2025-11-25 09:16:13.036818582 +0000 UTC m=+0.083100030 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 09:16:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:16:13 compute-0 podman[416662]: 2025-11-25 09:16:13.54440327 +0000 UTC m=+0.074686282 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 09:16:13 compute-0 podman[416662]: 2025-11-25 09:16:13.655359677 +0000 UTC m=+0.185642679 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:16:13 compute-0 ceph-mon[75015]: pgmap v2874: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 09:16:14 compute-0 sudo[416528]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:16:14 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:16:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:16:14 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.413 253542 DEBUG nova.network.neutron [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Updating instance_info_cache with network_info: [{"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:16:14 compute-0 sudo[416818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:16:14 compute-0 sudo[416818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:14 compute-0 sudo[416818]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2875: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.484 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Releasing lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.485 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Instance network_info: |[{"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.486 253542 DEBUG oslo_concurrency.lockutils [req-08ee7e48-bfcc-4c15-a328-6fe3e2048303 req-41556e18-1e02-4977-bfab-ec40d4927f19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.486 253542 DEBUG nova.network.neutron [req-08ee7e48-bfcc-4c15-a328-6fe3e2048303 req-41556e18-1e02-4977-bfab-ec40d4927f19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Refreshing network info cache for port 53302c95-cc0c-4237-a7f3-dca02953a876 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.488 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Start _get_guest_xml network_info=[{"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.493 253542 WARNING nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.502 253542 DEBUG nova.virt.libvirt.host [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.503 253542 DEBUG nova.virt.libvirt.host [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.506 253542 DEBUG nova.virt.libvirt.host [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.507 253542 DEBUG nova.virt.libvirt.host [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.507 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.507 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.508 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.508 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.509 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.509 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.509 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.509 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:16:14 compute-0 sudo[416844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.510 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.510 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.510 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.510 253542 DEBUG nova.virt.hardware [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:16:14 compute-0 sudo[416844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.514 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:16:14 compute-0 sudo[416844]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:14 compute-0 sudo[416869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:16:14 compute-0 sudo[416869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:14 compute-0 sudo[416869]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:14 compute-0 sudo[416895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:16:14 compute-0 sudo[416895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.814 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:16:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713355237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:16:14 compute-0 nova_compute[253538]: 2025-11-25 09:16:14.972 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.002 253542 DEBUG nova.storage.rbd_utils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.007 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.045 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:15 compute-0 sudo[416895]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:16:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:16:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:16:15 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:16:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:16:15 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:16:15 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d20ac63c-a6a4-46e7-8a07-adcf0ec29b8f does not exist
Nov 25 09:16:15 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 872309ce-bedd-4b29-b1f5-7854fe8d781a does not exist
Nov 25 09:16:15 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev da3f0fd6-2bbf-4f24-8fce-458d1c187fa7 does not exist
Nov 25 09:16:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:16:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:16:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:16:15 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:16:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:16:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:16:15 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:16:15 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:16:15 compute-0 ceph-mon[75015]: pgmap v2875: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:16:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/713355237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:16:15 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:16:15 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:16:15 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:16:15 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:16:15 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:16:15 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:16:15 compute-0 sudo[417010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:16:15 compute-0 sudo[417010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:15 compute-0 sudo[417010]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:15 compute-0 sudo[417035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:16:15 compute-0 sudo[417035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:15 compute-0 sudo[417035]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:16:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/880378211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.480 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.482 253542 DEBUG nova.virt.libvirt.vif [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:16:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-557180333',display_name='tempest-TestGettingAddress-server-557180333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-557180333',id=151,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyv667Z2ABnyPVBrZx9jd9gq6P58X6si+s81we9pUqnDsWI1jnpnGU0fnIp9UQ/Apxt3tS4iccd2fLnQpe7TCkxqUAyHZSIrSlnLfmfHodlWycVpxB5TAoXVMJ4xTQPeA==',key_name='tempest-TestGettingAddress-1435340358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-yusne7fb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:16:07Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=645b40f5-7a87-4de2-8b13-a340bcffd14b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.482 253542 DEBUG nova.network.os_vif_util [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.483 253542 DEBUG nova.network.os_vif_util [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:9c:53,bridge_name='br-int',has_traffic_filtering=True,id=53302c95-cc0c-4237-a7f3-dca02953a876,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53302c95-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.484 253542 DEBUG nova.objects.instance [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 645b40f5-7a87-4de2-8b13-a340bcffd14b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:16:15 compute-0 sudo[417060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:16:15 compute-0 sudo[417060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:15 compute-0 sudo[417060]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.499 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:16:15 compute-0 nova_compute[253538]:   <uuid>645b40f5-7a87-4de2-8b13-a340bcffd14b</uuid>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   <name>instance-00000097</name>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <nova:name>tempest-TestGettingAddress-server-557180333</nova:name>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:16:14</nova:creationTime>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:16:15 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:16:15 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:16:15 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:16:15 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:16:15 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:16:15 compute-0 nova_compute[253538]:         <nova:user uuid="c9fb13d4ba9041458692330b7276232f">tempest-TestGettingAddress-364728108-project-member</nova:user>
Nov 25 09:16:15 compute-0 nova_compute[253538]:         <nova:project uuid="a3cf572dfc9f42528923d69b8fa76422">tempest-TestGettingAddress-364728108</nova:project>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:16:15 compute-0 nova_compute[253538]:         <nova:port uuid="53302c95-cc0c-4237-a7f3-dca02953a876">
Nov 25 09:16:15 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe02:9c53" ipVersion="6"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <system>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <entry name="serial">645b40f5-7a87-4de2-8b13-a340bcffd14b</entry>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <entry name="uuid">645b40f5-7a87-4de2-8b13-a340bcffd14b</entry>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     </system>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   <os>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   </os>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   <features>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   </features>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/645b40f5-7a87-4de2-8b13-a340bcffd14b_disk">
Nov 25 09:16:15 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       </source>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:16:15 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/645b40f5-7a87-4de2-8b13-a340bcffd14b_disk.config">
Nov 25 09:16:15 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       </source>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:16:15 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:02:9c:53"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <target dev="tap53302c95-cc"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b/console.log" append="off"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <video>
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     </video>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:16:15 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:16:15 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:16:15 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:16:15 compute-0 nova_compute[253538]: </domain>
Nov 25 09:16:15 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.500 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Preparing to wait for external event network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.500 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.500 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.500 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.501 253542 DEBUG nova.virt.libvirt.vif [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:16:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-557180333',display_name='tempest-TestGettingAddress-server-557180333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-557180333',id=151,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyv667Z2ABnyPVBrZx9jd9gq6P58X6si+s81we9pUqnDsWI1jnpnGU0fnIp9UQ/Apxt3tS4iccd2fLnQpe7TCkxqUAyHZSIrSlnLfmfHodlWycVpxB5TAoXVMJ4xTQPeA==',key_name='tempest-TestGettingAddress-1435340358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-yusne7fb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:16:07Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=645b40f5-7a87-4de2-8b13-a340bcffd14b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.501 253542 DEBUG nova.network.os_vif_util [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.502 253542 DEBUG nova.network.os_vif_util [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:9c:53,bridge_name='br-int',has_traffic_filtering=True,id=53302c95-cc0c-4237-a7f3-dca02953a876,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53302c95-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.502 253542 DEBUG os_vif [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:9c:53,bridge_name='br-int',has_traffic_filtering=True,id=53302c95-cc0c-4237-a7f3-dca02953a876,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53302c95-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.502 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.503 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.503 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.505 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.505 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53302c95-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.506 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap53302c95-cc, col_values=(('external_ids', {'iface-id': '53302c95-cc0c-4237-a7f3-dca02953a876', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:9c:53', 'vm-uuid': '645b40f5-7a87-4de2-8b13-a340bcffd14b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:16:15 compute-0 sudo[417087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:16:15 compute-0 sudo[417087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:15 compute-0 NetworkManager[48915]: <info>  [1764062175.5477] manager: (tap53302c95-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/662)
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.548 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.553 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.555 253542 INFO os_vif [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:9c:53,bridge_name='br-int',has_traffic_filtering=True,id=53302c95-cc0c-4237-a7f3-dca02953a876,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53302c95-cc')
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.596 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.597 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.597 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] No VIF found with MAC fa:16:3e:02:9c:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.598 253542 INFO nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Using config drive
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.626 253542 DEBUG nova.storage.rbd_utils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:16:15 compute-0 podman[417173]: 2025-11-25 09:16:15.927872142 +0000 UTC m=+0.064998717 container create 2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:16:15 compute-0 systemd[1]: Started libpod-conmon-2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b.scope.
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.978 253542 INFO nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Creating config drive at /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b/disk.config
Nov 25 09:16:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:16:15 compute-0 nova_compute[253538]: 2025-11-25 09:16:15.982 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkhs75yei execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:16:15 compute-0 podman[417173]: 2025-11-25 09:16:15.895540823 +0000 UTC m=+0.032667468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:16:16 compute-0 podman[417173]: 2025-11-25 09:16:15.99989645 +0000 UTC m=+0.137023045 container init 2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_greider, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 09:16:16 compute-0 podman[417173]: 2025-11-25 09:16:16.00725138 +0000 UTC m=+0.144377955 container start 2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_greider, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:16:16 compute-0 podman[417173]: 2025-11-25 09:16:16.010818777 +0000 UTC m=+0.147945352 container attach 2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:16:16 compute-0 boring_greider[417190]: 167 167
Nov 25 09:16:16 compute-0 systemd[1]: libpod-2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b.scope: Deactivated successfully.
Nov 25 09:16:16 compute-0 podman[417196]: 2025-11-25 09:16:16.052742786 +0000 UTC m=+0.023844188 container died 2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:16:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-36a2d0c1a0a9a9ba1c2ceaf56251b781a11bd497b4e59cb4fab8abd96f3b5519-merged.mount: Deactivated successfully.
Nov 25 09:16:16 compute-0 podman[417196]: 2025-11-25 09:16:16.088060417 +0000 UTC m=+0.059161799 container remove 2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:16:16 compute-0 systemd[1]: libpod-conmon-2ce613ccee9f77f71aa9659aaf6d48834005a4f3868168b6bf89932f4580eb1b.scope: Deactivated successfully.
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.136 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkhs75yei" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.169 253542 DEBUG nova.storage.rbd_utils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] rbd image 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.173 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b/disk.config 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:16:16 compute-0 podman[417240]: 2025-11-25 09:16:16.258084399 +0000 UTC m=+0.042509607 container create fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 09:16:16 compute-0 systemd[1]: Started libpod-conmon-fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19.scope.
Nov 25 09:16:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0080199a7588eeeec543c911235672ac1587c3d54c03eea6c4780a5d587e8334/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0080199a7588eeeec543c911235672ac1587c3d54c03eea6c4780a5d587e8334/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0080199a7588eeeec543c911235672ac1587c3d54c03eea6c4780a5d587e8334/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0080199a7588eeeec543c911235672ac1587c3d54c03eea6c4780a5d587e8334/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0080199a7588eeeec543c911235672ac1587c3d54c03eea6c4780a5d587e8334/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:16:16 compute-0 podman[417240]: 2025-11-25 09:16:16.239598826 +0000 UTC m=+0.024024054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:16:16 compute-0 podman[417240]: 2025-11-25 09:16:16.336416078 +0000 UTC m=+0.120841276 container init fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_margulis, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:16:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/880378211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:16:16 compute-0 podman[417240]: 2025-11-25 09:16:16.348020264 +0000 UTC m=+0.132445472 container start fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_margulis, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:16:16 compute-0 podman[417240]: 2025-11-25 09:16:16.350746958 +0000 UTC m=+0.135172156 container attach fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_margulis, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.369 253542 DEBUG oslo_concurrency.processutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b/disk.config 645b40f5-7a87-4de2-8b13-a340bcffd14b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.370 253542 INFO nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Deleting local config drive /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b/disk.config because it was imported into RBD.
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.397 253542 DEBUG nova.network.neutron [req-08ee7e48-bfcc-4c15-a328-6fe3e2048303 req-41556e18-1e02-4977-bfab-ec40d4927f19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Updated VIF entry in instance network info cache for port 53302c95-cc0c-4237-a7f3-dca02953a876. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.398 253542 DEBUG nova.network.neutron [req-08ee7e48-bfcc-4c15-a328-6fe3e2048303 req-41556e18-1e02-4977-bfab-ec40d4927f19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Updating instance_info_cache with network_info: [{"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.412 253542 DEBUG oslo_concurrency.lockutils [req-08ee7e48-bfcc-4c15-a328-6fe3e2048303 req-41556e18-1e02-4977-bfab-ec40d4927f19 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:16:16 compute-0 NetworkManager[48915]: <info>  [1764062176.4215] manager: (tap53302c95-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/663)
Nov 25 09:16:16 compute-0 kernel: tap53302c95-cc: entered promiscuous mode
Nov 25 09:16:16 compute-0 ovn_controller[152859]: 2025-11-25T09:16:16Z|01603|binding|INFO|Claiming lport 53302c95-cc0c-4237-a7f3-dca02953a876 for this chassis.
Nov 25 09:16:16 compute-0 ovn_controller[152859]: 2025-11-25T09:16:16Z|01604|binding|INFO|53302c95-cc0c-4237-a7f3-dca02953a876: Claiming fa:16:3e:02:9c:53 10.100.0.3 2001:db8::f816:3eff:fe02:9c53
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.432 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2876: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:16:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.447 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:9c:53 10.100.0.3 2001:db8::f816:3eff:fe02:9c53'], port_security=['fa:16:3e:02:9c:53 10.100.0.3 2001:db8::f816:3eff:fe02:9c53'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe02:9c53/64', 'neutron:device_id': '645b40f5-7a87-4de2-8b13-a340bcffd14b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '15198f79-2199-4ce9-8f4e-9ae43d3cedee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5830640-550a-474a-a915-1b8e117ec031, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=53302c95-cc0c-4237-a7f3-dca02953a876) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:16:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.448 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 53302c95-cc0c-4237-a7f3-dca02953a876 in datapath f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 bound to our chassis
Nov 25 09:16:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.450 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f86f1e83-b07b-4abd-bc9a-7c03f3634fc6
Nov 25 09:16:16 compute-0 ovn_controller[152859]: 2025-11-25T09:16:16Z|01605|binding|INFO|Setting lport 53302c95-cc0c-4237-a7f3-dca02953a876 ovn-installed in OVS
Nov 25 09:16:16 compute-0 ovn_controller[152859]: 2025-11-25T09:16:16Z|01606|binding|INFO|Setting lport 53302c95-cc0c-4237-a7f3-dca02953a876 up in Southbound
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.458 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:16 compute-0 systemd-udevd[417291]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.467 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.475 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f524062e-b248-46b4-9ba4-e6ea6eba7c12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:16 compute-0 systemd-machined[215790]: New machine qemu-181-instance-00000097.
Nov 25 09:16:16 compute-0 NetworkManager[48915]: <info>  [1764062176.4823] device (tap53302c95-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:16:16 compute-0 NetworkManager[48915]: <info>  [1764062176.4836] device (tap53302c95-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:16:16 compute-0 systemd[1]: Started Virtual Machine qemu-181-instance-00000097.
Nov 25 09:16:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.504 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2185ff63-5468-4e71-8775-4b26873c47a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.507 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[3c679241-05d9-4ada-a0fc-95ef988ba84d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.538 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd5873d-3c02-4c27-903f-d7d5a381861c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.558 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ec930246-49e0-4498-9013-b16675adceb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf86f1e83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:95:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1586, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1586, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 459], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749557, 'reachable_time': 25638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417304, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.575 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf8371e-bb19-446e-9fb3-d2fa7d0f62c1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf86f1e83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749569, 'tstamp': 749569}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417306, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf86f1e83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749571, 'tstamp': 749571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417306, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.578 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86f1e83-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.580 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf86f1e83-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:16:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.581 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:16:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.581 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf86f1e83-b0, col_values=(('external_ids', {'iface-id': 'cfc44526-993c-46ae-8c7c-2505531aa9fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:16:16 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:16.581 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.982 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062176.9817286, 645b40f5-7a87-4de2-8b13-a340bcffd14b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.982 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] VM Started (Lifecycle Event)
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.989 253542 DEBUG nova.compute.manager [req-9b2078f9-9aff-48e6-bc59-4de26920fc7c req-3995c5fa-51b7-43dc-ad14-b94999221f26 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.990 253542 DEBUG oslo_concurrency.lockutils [req-9b2078f9-9aff-48e6-bc59-4de26920fc7c req-3995c5fa-51b7-43dc-ad14-b94999221f26 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.990 253542 DEBUG oslo_concurrency.lockutils [req-9b2078f9-9aff-48e6-bc59-4de26920fc7c req-3995c5fa-51b7-43dc-ad14-b94999221f26 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.990 253542 DEBUG oslo_concurrency.lockutils [req-9b2078f9-9aff-48e6-bc59-4de26920fc7c req-3995c5fa-51b7-43dc-ad14-b94999221f26 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.991 253542 DEBUG nova.compute.manager [req-9b2078f9-9aff-48e6-bc59-4de26920fc7c req-3995c5fa-51b7-43dc-ad14-b94999221f26 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Processing event network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.991 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.995 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.998 253542 INFO nova.virt.libvirt.driver [-] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Instance spawned successfully.
Nov 25 09:16:16 compute-0 nova_compute[253538]: 2025-11-25 09:16:16.998 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.001 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.004 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.021 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.022 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062176.9819534, 645b40f5-7a87-4de2-8b13-a340bcffd14b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.022 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] VM Paused (Lifecycle Event)
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.026 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.026 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.027 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.027 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.028 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.028 253542 DEBUG nova.virt.libvirt.driver [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.055 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.058 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062176.9943283, 645b40f5-7a87-4de2-8b13-a340bcffd14b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.059 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] VM Resumed (Lifecycle Event)
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.081 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.083 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.101 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.198 253542 INFO nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Took 9.33 seconds to spawn the instance on the hypervisor.
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.199 253542 DEBUG nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.257 253542 INFO nova.compute.manager [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Took 10.66 seconds to build instance.
Nov 25 09:16:17 compute-0 nova_compute[253538]: 2025-11-25 09:16:17.343 253542 DEBUG oslo_concurrency.lockutils [None req-9d6a0950-4bff-493d-a7b8-87dee7bc5524 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:17 compute-0 ceph-mon[75015]: pgmap v2876: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:16:17 compute-0 brave_margulis[417274]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:16:17 compute-0 brave_margulis[417274]: --> relative data size: 1.0
Nov 25 09:16:17 compute-0 brave_margulis[417274]: --> All data devices are unavailable
Nov 25 09:16:17 compute-0 systemd[1]: libpod-fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19.scope: Deactivated successfully.
Nov 25 09:16:17 compute-0 systemd[1]: libpod-fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19.scope: Consumed 1.016s CPU time.
Nov 25 09:16:17 compute-0 podman[417240]: 2025-11-25 09:16:17.44487452 +0000 UTC m=+1.229299728 container died fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_margulis, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:16:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0080199a7588eeeec543c911235672ac1587c3d54c03eea6c4780a5d587e8334-merged.mount: Deactivated successfully.
Nov 25 09:16:17 compute-0 podman[417240]: 2025-11-25 09:16:17.502589589 +0000 UTC m=+1.287014787 container remove fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:16:17 compute-0 systemd[1]: libpod-conmon-fa2e322412eca2e06d976c30e45b58bbaa172ba0bf6fa621d92575149c17ba19.scope: Deactivated successfully.
Nov 25 09:16:17 compute-0 sudo[417087]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:17 compute-0 sudo[417385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:16:17 compute-0 sudo[417385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:17 compute-0 sudo[417385]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:17 compute-0 sudo[417410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:16:17 compute-0 sudo[417410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:17 compute-0 sudo[417410]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:17 compute-0 sudo[417435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:16:17 compute-0 sudo[417435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:17 compute-0 sudo[417435]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:17 compute-0 sudo[417460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:16:17 compute-0 sudo[417460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:18 compute-0 podman[417526]: 2025-11-25 09:16:18.14379802 +0000 UTC m=+0.023428278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:16:18 compute-0 podman[417526]: 2025-11-25 09:16:18.271185343 +0000 UTC m=+0.150815581 container create f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcclintock, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:16:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:16:18 compute-0 systemd[1]: Started libpod-conmon-f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96.scope.
Nov 25 09:16:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2877: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 486 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 25 09:16:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:16:18 compute-0 podman[417526]: 2025-11-25 09:16:18.476547046 +0000 UTC m=+0.356177364 container init f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcclintock, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:16:18 compute-0 podman[417526]: 2025-11-25 09:16:18.489175449 +0000 UTC m=+0.368805717 container start f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 09:16:18 compute-0 podman[417526]: 2025-11-25 09:16:18.492694754 +0000 UTC m=+0.372325092 container attach f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 09:16:18 compute-0 zealous_mcclintock[417542]: 167 167
Nov 25 09:16:18 compute-0 podman[417526]: 2025-11-25 09:16:18.49876568 +0000 UTC m=+0.378395918 container died f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 09:16:18 compute-0 systemd[1]: libpod-f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96.scope: Deactivated successfully.
Nov 25 09:16:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb9600aceaa64fc044401ff000cfab8fa319a32a864942c3acb158bbd1946108-merged.mount: Deactivated successfully.
Nov 25 09:16:18 compute-0 podman[417526]: 2025-11-25 09:16:18.544172204 +0000 UTC m=+0.423802442 container remove f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:16:18 compute-0 systemd[1]: libpod-conmon-f84a1a36193ade3f89ed8c0c2729bedad1d2f83cfa14a11a0c99f64b4d8cde96.scope: Deactivated successfully.
Nov 25 09:16:18 compute-0 podman[417565]: 2025-11-25 09:16:18.766497287 +0000 UTC m=+0.067135236 container create 994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lovelace, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:16:18 compute-0 podman[417565]: 2025-11-25 09:16:18.72173224 +0000 UTC m=+0.022370179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:16:18 compute-0 systemd[1]: Started libpod-conmon-994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775.scope.
Nov 25 09:16:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cede17204f8cf7a50c96dfd6a631cd1733a1ff18d59e1d5ddf1c2751cae965dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cede17204f8cf7a50c96dfd6a631cd1733a1ff18d59e1d5ddf1c2751cae965dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cede17204f8cf7a50c96dfd6a631cd1733a1ff18d59e1d5ddf1c2751cae965dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cede17204f8cf7a50c96dfd6a631cd1733a1ff18d59e1d5ddf1c2751cae965dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:16:18 compute-0 podman[417565]: 2025-11-25 09:16:18.923361122 +0000 UTC m=+0.223999111 container init 994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lovelace, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 25 09:16:18 compute-0 podman[417565]: 2025-11-25 09:16:18.933480257 +0000 UTC m=+0.234118156 container start 994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 25 09:16:18 compute-0 podman[417565]: 2025-11-25 09:16:18.937177797 +0000 UTC m=+0.237815756 container attach 994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 09:16:19 compute-0 nova_compute[253538]: 2025-11-25 09:16:19.083 253542 DEBUG nova.compute.manager [req-9bb44a29-35b9-4742-88bc-eafcee40d83f req-8433fc9e-edee-4646-a01d-ec4a5270fc32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:16:19 compute-0 nova_compute[253538]: 2025-11-25 09:16:19.084 253542 DEBUG oslo_concurrency.lockutils [req-9bb44a29-35b9-4742-88bc-eafcee40d83f req-8433fc9e-edee-4646-a01d-ec4a5270fc32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:19 compute-0 nova_compute[253538]: 2025-11-25 09:16:19.086 253542 DEBUG oslo_concurrency.lockutils [req-9bb44a29-35b9-4742-88bc-eafcee40d83f req-8433fc9e-edee-4646-a01d-ec4a5270fc32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:19 compute-0 nova_compute[253538]: 2025-11-25 09:16:19.086 253542 DEBUG oslo_concurrency.lockutils [req-9bb44a29-35b9-4742-88bc-eafcee40d83f req-8433fc9e-edee-4646-a01d-ec4a5270fc32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:19 compute-0 nova_compute[253538]: 2025-11-25 09:16:19.087 253542 DEBUG nova.compute.manager [req-9bb44a29-35b9-4742-88bc-eafcee40d83f req-8433fc9e-edee-4646-a01d-ec4a5270fc32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] No waiting events found dispatching network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:16:19 compute-0 nova_compute[253538]: 2025-11-25 09:16:19.087 253542 WARNING nova.compute.manager [req-9bb44a29-35b9-4742-88bc-eafcee40d83f req-8433fc9e-edee-4646-a01d-ec4a5270fc32 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received unexpected event network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 for instance with vm_state active and task_state None.
Nov 25 09:16:19 compute-0 ceph-mon[75015]: pgmap v2877: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 486 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 25 09:16:19 compute-0 silly_lovelace[417582]: {
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:     "0": [
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:         {
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "devices": [
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "/dev/loop3"
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             ],
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "lv_name": "ceph_lv0",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "lv_size": "21470642176",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "name": "ceph_lv0",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "tags": {
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.cluster_name": "ceph",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.crush_device_class": "",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.encrypted": "0",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.osd_id": "0",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.type": "block",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.vdo": "0"
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             },
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "type": "block",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "vg_name": "ceph_vg0"
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:         }
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:     ],
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:     "1": [
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:         {
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "devices": [
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "/dev/loop4"
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             ],
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "lv_name": "ceph_lv1",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "lv_size": "21470642176",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "name": "ceph_lv1",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "tags": {
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.cluster_name": "ceph",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.crush_device_class": "",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.encrypted": "0",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.osd_id": "1",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.type": "block",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.vdo": "0"
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             },
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "type": "block",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "vg_name": "ceph_vg1"
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:         }
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:     ],
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:     "2": [
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:         {
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "devices": [
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "/dev/loop5"
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             ],
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "lv_name": "ceph_lv2",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "lv_size": "21470642176",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "name": "ceph_lv2",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "tags": {
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.cluster_name": "ceph",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.crush_device_class": "",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.encrypted": "0",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.osd_id": "2",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.type": "block",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:                 "ceph.vdo": "0"
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             },
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "type": "block",
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:             "vg_name": "ceph_vg2"
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:         }
Nov 25 09:16:19 compute-0 silly_lovelace[417582]:     ]
Nov 25 09:16:19 compute-0 silly_lovelace[417582]: }
Nov 25 09:16:19 compute-0 systemd[1]: libpod-994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775.scope: Deactivated successfully.
Nov 25 09:16:19 compute-0 podman[417565]: 2025-11-25 09:16:19.762444482 +0000 UTC m=+1.063082391 container died 994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 09:16:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-cede17204f8cf7a50c96dfd6a631cd1733a1ff18d59e1d5ddf1c2751cae965dd-merged.mount: Deactivated successfully.
Nov 25 09:16:19 compute-0 nova_compute[253538]: 2025-11-25 09:16:19.816 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:19 compute-0 podman[417565]: 2025-11-25 09:16:19.819543043 +0000 UTC m=+1.120180952 container remove 994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:16:19 compute-0 systemd[1]: libpod-conmon-994e5d7ff8a1b8048ec322eb5c2d2bf0e3b25e9d2ed0c42cf013d01822c8f775.scope: Deactivated successfully.
Nov 25 09:16:19 compute-0 sudo[417460]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:19 compute-0 sudo[417604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:16:19 compute-0 sudo[417604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:19 compute-0 sudo[417604]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:19 compute-0 sudo[417635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:16:19 compute-0 sudo[417635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:20 compute-0 sudo[417635]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:20 compute-0 podman[417628]: 2025-11-25 09:16:20.039554164 +0000 UTC m=+0.098396966 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:16:20 compute-0 sudo[417674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:16:20 compute-0 sudo[417674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:20 compute-0 sudo[417674]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:20 compute-0 sudo[417705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:16:20 compute-0 sudo[417705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:20 compute-0 nova_compute[253538]: 2025-11-25 09:16:20.197 253542 DEBUG nova.compute.manager [req-e9d4ddd6-a295-4a2e-9d3e-095761a1fb2e req-f911c114-e128-42bf-9129-cb04149a46f9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-changed-53302c95-cc0c-4237-a7f3-dca02953a876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:16:20 compute-0 nova_compute[253538]: 2025-11-25 09:16:20.197 253542 DEBUG nova.compute.manager [req-e9d4ddd6-a295-4a2e-9d3e-095761a1fb2e req-f911c114-e128-42bf-9129-cb04149a46f9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Refreshing instance network info cache due to event network-changed-53302c95-cc0c-4237-a7f3-dca02953a876. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:16:20 compute-0 nova_compute[253538]: 2025-11-25 09:16:20.197 253542 DEBUG oslo_concurrency.lockutils [req-e9d4ddd6-a295-4a2e-9d3e-095761a1fb2e req-f911c114-e128-42bf-9129-cb04149a46f9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:16:20 compute-0 nova_compute[253538]: 2025-11-25 09:16:20.197 253542 DEBUG oslo_concurrency.lockutils [req-e9d4ddd6-a295-4a2e-9d3e-095761a1fb2e req-f911c114-e128-42bf-9129-cb04149a46f9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:16:20 compute-0 nova_compute[253538]: 2025-11-25 09:16:20.198 253542 DEBUG nova.network.neutron [req-e9d4ddd6-a295-4a2e-9d3e-095761a1fb2e req-f911c114-e128-42bf-9129-cb04149a46f9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Refreshing network info cache for port 53302c95-cc0c-4237-a7f3-dca02953a876 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:16:20 compute-0 podman[417767]: 2025-11-25 09:16:20.437963504 +0000 UTC m=+0.051775109 container create 23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williams, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:16:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2878: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 67 op/s
Nov 25 09:16:20 compute-0 systemd[1]: Started libpod-conmon-23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f.scope.
Nov 25 09:16:20 compute-0 podman[417767]: 2025-11-25 09:16:20.410516738 +0000 UTC m=+0.024328363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:16:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:16:20 compute-0 nova_compute[253538]: 2025-11-25 09:16:20.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:16:20 compute-0 nova_compute[253538]: 2025-11-25 09:16:20.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 09:16:20 compute-0 nova_compute[253538]: 2025-11-25 09:16:20.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:20 compute-0 podman[417767]: 2025-11-25 09:16:20.580767427 +0000 UTC m=+0.194579042 container init 23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:16:20 compute-0 podman[417767]: 2025-11-25 09:16:20.587042467 +0000 UTC m=+0.200854072 container start 23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williams, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:16:20 compute-0 xenodochial_williams[417782]: 167 167
Nov 25 09:16:20 compute-0 systemd[1]: libpod-23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f.scope: Deactivated successfully.
Nov 25 09:16:20 compute-0 podman[417767]: 2025-11-25 09:16:20.612445327 +0000 UTC m=+0.226256942 container attach 23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 25 09:16:20 compute-0 podman[417767]: 2025-11-25 09:16:20.613691391 +0000 UTC m=+0.227502996 container died 23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:16:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-521852b2d737ec17599ce289ac518c865d102fa2f94e3402e2b1d5a4ff5d0688-merged.mount: Deactivated successfully.
Nov 25 09:16:20 compute-0 podman[417767]: 2025-11-25 09:16:20.93149499 +0000 UTC m=+0.545306635 container remove 23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:16:20 compute-0 systemd[1]: libpod-conmon-23f47e96c4fb99c365705146e3373b27e981749b5bb943315c136358f56b1f2f.scope: Deactivated successfully.
Nov 25 09:16:21 compute-0 podman[417807]: 2025-11-25 09:16:21.164806583 +0000 UTC m=+0.069549022 container create a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:16:21 compute-0 systemd[1]: Started libpod-conmon-a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2.scope.
Nov 25 09:16:21 compute-0 podman[417807]: 2025-11-25 09:16:21.135519207 +0000 UTC m=+0.040261666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:16:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0c4c635bbb8a85f19e9d0ce872b25cd2d9fd772499a58d25f3e715cc137cf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0c4c635bbb8a85f19e9d0ce872b25cd2d9fd772499a58d25f3e715cc137cf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0c4c635bbb8a85f19e9d0ce872b25cd2d9fd772499a58d25f3e715cc137cf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0c4c635bbb8a85f19e9d0ce872b25cd2d9fd772499a58d25f3e715cc137cf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:16:21 compute-0 podman[417807]: 2025-11-25 09:16:21.257495993 +0000 UTC m=+0.162238452 container init a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_newton, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:16:21 compute-0 podman[417807]: 2025-11-25 09:16:21.269123858 +0000 UTC m=+0.173866227 container start a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:16:21 compute-0 podman[417807]: 2025-11-25 09:16:21.273390165 +0000 UTC m=+0.178132624 container attach a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_newton, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:16:21 compute-0 ceph-mon[75015]: pgmap v2878: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 67 op/s
Nov 25 09:16:22 compute-0 romantic_newton[417824]: {
Nov 25 09:16:22 compute-0 romantic_newton[417824]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:16:22 compute-0 romantic_newton[417824]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:16:22 compute-0 romantic_newton[417824]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:16:22 compute-0 romantic_newton[417824]:         "osd_id": 1,
Nov 25 09:16:22 compute-0 romantic_newton[417824]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:16:22 compute-0 romantic_newton[417824]:         "type": "bluestore"
Nov 25 09:16:22 compute-0 romantic_newton[417824]:     },
Nov 25 09:16:22 compute-0 romantic_newton[417824]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:16:22 compute-0 romantic_newton[417824]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:16:22 compute-0 romantic_newton[417824]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:16:22 compute-0 romantic_newton[417824]:         "osd_id": 2,
Nov 25 09:16:22 compute-0 romantic_newton[417824]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:16:22 compute-0 romantic_newton[417824]:         "type": "bluestore"
Nov 25 09:16:22 compute-0 romantic_newton[417824]:     },
Nov 25 09:16:22 compute-0 romantic_newton[417824]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:16:22 compute-0 romantic_newton[417824]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:16:22 compute-0 romantic_newton[417824]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:16:22 compute-0 romantic_newton[417824]:         "osd_id": 0,
Nov 25 09:16:22 compute-0 romantic_newton[417824]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:16:22 compute-0 romantic_newton[417824]:         "type": "bluestore"
Nov 25 09:16:22 compute-0 romantic_newton[417824]:     }
Nov 25 09:16:22 compute-0 romantic_newton[417824]: }
Nov 25 09:16:22 compute-0 systemd[1]: libpod-a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2.scope: Deactivated successfully.
Nov 25 09:16:22 compute-0 podman[417807]: 2025-11-25 09:16:22.259515702 +0000 UTC m=+1.164258081 container died a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:16:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d0c4c635bbb8a85f19e9d0ce872b25cd2d9fd772499a58d25f3e715cc137cf3-merged.mount: Deactivated successfully.
Nov 25 09:16:22 compute-0 podman[417807]: 2025-11-25 09:16:22.320713766 +0000 UTC m=+1.225456125 container remove a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_newton, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 09:16:22 compute-0 systemd[1]: libpod-conmon-a452a978c87f9d95eaba6f214720840c5e76907d97c9d4b79105c0ada72866b2.scope: Deactivated successfully.
Nov 25 09:16:22 compute-0 sudo[417705]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:16:22 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:16:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:16:22 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:16:22 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 52dd1973-018e-4e74-8f4f-87779c29a6ba does not exist
Nov 25 09:16:22 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev f67eaeed-6825-4ade-9dd0-a7706355aa78 does not exist
Nov 25 09:16:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2879: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 620 KiB/s wr, 88 op/s
Nov 25 09:16:22 compute-0 sudo[417872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:16:22 compute-0 sudo[417872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:22 compute-0 sudo[417872]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:22 compute-0 sudo[417897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:16:22 compute-0 sudo[417897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:16:22 compute-0 sudo[417897]: pam_unix(sudo:session): session closed for user root
Nov 25 09:16:22 compute-0 nova_compute[253538]: 2025-11-25 09:16:22.538 253542 DEBUG nova.network.neutron [req-e9d4ddd6-a295-4a2e-9d3e-095761a1fb2e req-f911c114-e128-42bf-9129-cb04149a46f9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Updated VIF entry in instance network info cache for port 53302c95-cc0c-4237-a7f3-dca02953a876. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:16:22 compute-0 nova_compute[253538]: 2025-11-25 09:16:22.539 253542 DEBUG nova.network.neutron [req-e9d4ddd6-a295-4a2e-9d3e-095761a1fb2e req-f911c114-e128-42bf-9129-cb04149a46f9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Updating instance_info_cache with network_info: [{"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:16:22 compute-0 nova_compute[253538]: 2025-11-25 09:16:22.561 253542 DEBUG oslo_concurrency.lockutils [req-e9d4ddd6-a295-4a2e-9d3e-095761a1fb2e req-f911c114-e128-42bf-9129-cb04149a46f9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:16:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:16:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:16:23 compute-0 ceph-mon[75015]: pgmap v2879: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 620 KiB/s wr, 88 op/s
Nov 25 09:16:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:16:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:16:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:16:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:16:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:16:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:16:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:16:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2880: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 25 09:16:24 compute-0 nova_compute[253538]: 2025-11-25 09:16:24.817 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:25 compute-0 ceph-mon[75015]: pgmap v2880: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 25 09:16:25 compute-0 nova_compute[253538]: 2025-11-25 09:16:25.617 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2881: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:16:27 compute-0 ceph-mon[75015]: pgmap v2881: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:16:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:16:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2882: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 09:16:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:16:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1653182183' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:16:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:16:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1653182183' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:16:29 compute-0 ovn_controller[152859]: 2025-11-25T09:16:29Z|00209|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:9c:53 10.100.0.3
Nov 25 09:16:29 compute-0 ovn_controller[152859]: 2025-11-25T09:16:29Z|00210|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:9c:53 10.100.0.3
Nov 25 09:16:29 compute-0 ceph-mon[75015]: pgmap v2882: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 09:16:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1653182183' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:16:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1653182183' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:16:29 compute-0 nova_compute[253538]: 2025-11-25 09:16:29.818 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2883: 321 pgs: 321 active+clean; 226 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Nov 25 09:16:30 compute-0 nova_compute[253538]: 2025-11-25 09:16:30.619 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:31 compute-0 ceph-mon[75015]: pgmap v2883: 321 pgs: 321 active+clean; 226 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Nov 25 09:16:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2884: 321 pgs: 321 active+clean; 237 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 853 KiB/s rd, 1.4 MiB/s wr, 57 op/s
Nov 25 09:16:32 compute-0 nova_compute[253538]: 2025-11-25 09:16:32.569 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:16:32 compute-0 nova_compute[253538]: 2025-11-25 09:16:32.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 09:16:32 compute-0 nova_compute[253538]: 2025-11-25 09:16:32.584 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 09:16:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:16:33 compute-0 ceph-mon[75015]: pgmap v2884: 321 pgs: 321 active+clean; 237 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 853 KiB/s rd, 1.4 MiB/s wr, 57 op/s
Nov 25 09:16:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2885: 321 pgs: 321 active+clean; 244 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 09:16:34 compute-0 nova_compute[253538]: 2025-11-25 09:16:34.822 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:35 compute-0 ceph-mon[75015]: pgmap v2885: 321 pgs: 321 active+clean; 244 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 09:16:35 compute-0 nova_compute[253538]: 2025-11-25 09:16:35.621 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2886: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 09:16:37 compute-0 ceph-mon[75015]: pgmap v2886: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.656 253542 DEBUG nova.compute.manager [req-a6069ca5-437e-4b71-a89b-4b1c3d905b3d req-a6fe8234-1daf-43fa-b34c-3dec07b7c7fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-changed-53302c95-cc0c-4237-a7f3-dca02953a876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.657 253542 DEBUG nova.compute.manager [req-a6069ca5-437e-4b71-a89b-4b1c3d905b3d req-a6fe8234-1daf-43fa-b34c-3dec07b7c7fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Refreshing instance network info cache due to event network-changed-53302c95-cc0c-4237-a7f3-dca02953a876. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.657 253542 DEBUG oslo_concurrency.lockutils [req-a6069ca5-437e-4b71-a89b-4b1c3d905b3d req-a6fe8234-1daf-43fa-b34c-3dec07b7c7fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.657 253542 DEBUG oslo_concurrency.lockutils [req-a6069ca5-437e-4b71-a89b-4b1c3d905b3d req-a6fe8234-1daf-43fa-b34c-3dec07b7c7fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.658 253542 DEBUG nova.network.neutron [req-a6069ca5-437e-4b71-a89b-4b1c3d905b3d req-a6fe8234-1daf-43fa-b34c-3dec07b7c7fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Refreshing network info cache for port 53302c95-cc0c-4237-a7f3-dca02953a876 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.705 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "645b40f5-7a87-4de2-8b13-a340bcffd14b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.706 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.706 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.707 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.707 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.708 253542 INFO nova.compute.manager [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Terminating instance
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.709 253542 DEBUG nova.compute.manager [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:16:37 compute-0 kernel: tap53302c95-cc (unregistering): left promiscuous mode
Nov 25 09:16:37 compute-0 NetworkManager[48915]: <info>  [1764062197.7598] device (tap53302c95-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.808 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:37 compute-0 ovn_controller[152859]: 2025-11-25T09:16:37Z|01607|binding|INFO|Releasing lport 53302c95-cc0c-4237-a7f3-dca02953a876 from this chassis (sb_readonly=0)
Nov 25 09:16:37 compute-0 ovn_controller[152859]: 2025-11-25T09:16:37Z|01608|binding|INFO|Setting lport 53302c95-cc0c-4237-a7f3-dca02953a876 down in Southbound
Nov 25 09:16:37 compute-0 ovn_controller[152859]: 2025-11-25T09:16:37Z|01609|binding|INFO|Removing iface tap53302c95-cc ovn-installed in OVS
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.813 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.833 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.846 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:9c:53 10.100.0.3 2001:db8::f816:3eff:fe02:9c53'], port_security=['fa:16:3e:02:9c:53 10.100.0.3 2001:db8::f816:3eff:fe02:9c53'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe02:9c53/64', 'neutron:device_id': '645b40f5-7a87-4de2-8b13-a340bcffd14b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '15198f79-2199-4ce9-8f4e-9ae43d3cedee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5830640-550a-474a-a915-1b8e117ec031, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=53302c95-cc0c-4237-a7f3-dca02953a876) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:16:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.848 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 53302c95-cc0c-4237-a7f3-dca02953a876 in datapath f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 unbound from our chassis
Nov 25 09:16:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.849 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f86f1e83-b07b-4abd-bc9a-7c03f3634fc6
Nov 25 09:16:37 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000097.scope: Deactivated successfully.
Nov 25 09:16:37 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000097.scope: Consumed 13.579s CPU time.
Nov 25 09:16:37 compute-0 systemd-machined[215790]: Machine qemu-181-instance-00000097 terminated.
Nov 25 09:16:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.867 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[425dd0ca-948a-4560-aea1-ef14671e2f7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.903 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[39f38300-c9da-4bcb-bdc0-3265737c1413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.907 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[1a694680-de2f-4dfc-b6f4-f2669c87faa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.938 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[40149a49-6727-4134-9995-134351519c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.948 253542 INFO nova.virt.libvirt.driver [-] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Instance destroyed successfully.
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.949 253542 DEBUG nova.objects.instance [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 645b40f5-7a87-4de2-8b13-a340bcffd14b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:16:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.957 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e05496-166f-4fcb-a52e-469f4f059def]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf86f1e83-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:95:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 7, 'rx_bytes': 2640, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 7, 'rx_bytes': 2640, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 459], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749557, 'reachable_time': 25638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417941, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.962 253542 DEBUG nova.virt.libvirt.vif [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:16:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-557180333',display_name='tempest-TestGettingAddress-server-557180333',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-557180333',id=151,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyv667Z2ABnyPVBrZx9jd9gq6P58X6si+s81we9pUqnDsWI1jnpnGU0fnIp9UQ/Apxt3tS4iccd2fLnQpe7TCkxqUAyHZSIrSlnLfmfHodlWycVpxB5TAoXVMJ4xTQPeA==',key_name='tempest-TestGettingAddress-1435340358',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:16:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-yusne7fb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:16:17Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=645b40f5-7a87-4de2-8b13-a340bcffd14b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.963 253542 DEBUG nova.network.os_vif_util [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.964 253542 DEBUG nova.network.os_vif_util [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:9c:53,bridge_name='br-int',has_traffic_filtering=True,id=53302c95-cc0c-4237-a7f3-dca02953a876,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53302c95-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.964 253542 DEBUG os_vif [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:9c:53,bridge_name='br-int',has_traffic_filtering=True,id=53302c95-cc0c-4237-a7f3-dca02953a876,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53302c95-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.967 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.968 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53302c95-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.970 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.973 253542 INFO os_vif [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:9c:53,bridge_name='br-int',has_traffic_filtering=True,id=53302c95-cc0c-4237-a7f3-dca02953a876,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53302c95-cc')
Nov 25 09:16:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.980 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[76004572-768d-42d2-9b22-fbb1da189694]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf86f1e83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749569, 'tstamp': 749569}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417946, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf86f1e83-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749571, 'tstamp': 749571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417946, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.981 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86f1e83-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:16:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.984 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf86f1e83-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:16:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.984 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:16:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.985 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf86f1e83-b0, col_values=(('external_ids', {'iface-id': 'cfc44526-993c-46ae-8c7c-2505531aa9fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:16:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:37.985 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:16:37 compute-0 nova_compute[253538]: 2025-11-25 09:16:37.992 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:38 compute-0 nova_compute[253538]: 2025-11-25 09:16:38.135 253542 DEBUG nova.compute.manager [req-79314b71-d722-466e-9adc-653f541e1f00 req-078f8673-d87e-4335-8b84-7241f786339d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-vif-unplugged-53302c95-cc0c-4237-a7f3-dca02953a876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:16:38 compute-0 nova_compute[253538]: 2025-11-25 09:16:38.136 253542 DEBUG oslo_concurrency.lockutils [req-79314b71-d722-466e-9adc-653f541e1f00 req-078f8673-d87e-4335-8b84-7241f786339d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:38 compute-0 nova_compute[253538]: 2025-11-25 09:16:38.136 253542 DEBUG oslo_concurrency.lockutils [req-79314b71-d722-466e-9adc-653f541e1f00 req-078f8673-d87e-4335-8b84-7241f786339d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:38 compute-0 nova_compute[253538]: 2025-11-25 09:16:38.136 253542 DEBUG oslo_concurrency.lockutils [req-79314b71-d722-466e-9adc-653f541e1f00 req-078f8673-d87e-4335-8b84-7241f786339d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:38 compute-0 nova_compute[253538]: 2025-11-25 09:16:38.136 253542 DEBUG nova.compute.manager [req-79314b71-d722-466e-9adc-653f541e1f00 req-078f8673-d87e-4335-8b84-7241f786339d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] No waiting events found dispatching network-vif-unplugged-53302c95-cc0c-4237-a7f3-dca02953a876 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:16:38 compute-0 nova_compute[253538]: 2025-11-25 09:16:38.136 253542 DEBUG nova.compute.manager [req-79314b71-d722-466e-9adc-653f541e1f00 req-078f8673-d87e-4335-8b84-7241f786339d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-vif-unplugged-53302c95-cc0c-4237-a7f3-dca02953a876 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:16:38 compute-0 nova_compute[253538]: 2025-11-25 09:16:38.370 253542 INFO nova.virt.libvirt.driver [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Deleting instance files /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b_del
Nov 25 09:16:38 compute-0 nova_compute[253538]: 2025-11-25 09:16:38.371 253542 INFO nova.virt.libvirt.driver [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Deletion of /var/lib/nova/instances/645b40f5-7a87-4de2-8b13-a340bcffd14b_del complete
Nov 25 09:16:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:16:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2887: 321 pgs: 321 active+clean; 227 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Nov 25 09:16:38 compute-0 nova_compute[253538]: 2025-11-25 09:16:38.491 253542 INFO nova.compute.manager [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 25 09:16:38 compute-0 nova_compute[253538]: 2025-11-25 09:16:38.491 253542 DEBUG oslo.service.loopingcall [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:16:38 compute-0 nova_compute[253538]: 2025-11-25 09:16:38.492 253542 DEBUG nova.compute.manager [-] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:16:38 compute-0 nova_compute[253538]: 2025-11-25 09:16:38.492 253542 DEBUG nova.network.neutron [-] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:16:39 compute-0 nova_compute[253538]: 2025-11-25 09:16:39.098 253542 DEBUG nova.network.neutron [req-a6069ca5-437e-4b71-a89b-4b1c3d905b3d req-a6fe8234-1daf-43fa-b34c-3dec07b7c7fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Updated VIF entry in instance network info cache for port 53302c95-cc0c-4237-a7f3-dca02953a876. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:16:39 compute-0 nova_compute[253538]: 2025-11-25 09:16:39.099 253542 DEBUG nova.network.neutron [req-a6069ca5-437e-4b71-a89b-4b1c3d905b3d req-a6fe8234-1daf-43fa-b34c-3dec07b7c7fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Updating instance_info_cache with network_info: [{"id": "53302c95-cc0c-4237-a7f3-dca02953a876", "address": "fa:16:3e:02:9c:53", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe02:9c53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53302c95-cc", "ovs_interfaceid": "53302c95-cc0c-4237-a7f3-dca02953a876", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:16:39 compute-0 ceph-mon[75015]: pgmap v2887: 321 pgs: 321 active+clean; 227 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Nov 25 09:16:39 compute-0 nova_compute[253538]: 2025-11-25 09:16:39.824 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:40 compute-0 nova_compute[253538]: 2025-11-25 09:16:40.027 253542 DEBUG oslo_concurrency.lockutils [req-a6069ca5-437e-4b71-a89b-4b1c3d905b3d req-a6fe8234-1daf-43fa-b34c-3dec07b7c7fe b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-645b40f5-7a87-4de2-8b13-a340bcffd14b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:16:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2888: 321 pgs: 321 active+clean; 190 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Nov 25 09:16:40 compute-0 nova_compute[253538]: 2025-11-25 09:16:40.552 253542 DEBUG nova.compute.manager [req-1e69153c-a6b7-4e40-9c1a-4b6f9d67bf5b req-7f6728fd-6ab4-4d6b-8c63-d1202196c6f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:16:40 compute-0 nova_compute[253538]: 2025-11-25 09:16:40.553 253542 DEBUG oslo_concurrency.lockutils [req-1e69153c-a6b7-4e40-9c1a-4b6f9d67bf5b req-7f6728fd-6ab4-4d6b-8c63-d1202196c6f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:40 compute-0 nova_compute[253538]: 2025-11-25 09:16:40.554 253542 DEBUG oslo_concurrency.lockutils [req-1e69153c-a6b7-4e40-9c1a-4b6f9d67bf5b req-7f6728fd-6ab4-4d6b-8c63-d1202196c6f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:40 compute-0 nova_compute[253538]: 2025-11-25 09:16:40.554 253542 DEBUG oslo_concurrency.lockutils [req-1e69153c-a6b7-4e40-9c1a-4b6f9d67bf5b req-7f6728fd-6ab4-4d6b-8c63-d1202196c6f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:40 compute-0 nova_compute[253538]: 2025-11-25 09:16:40.555 253542 DEBUG nova.compute.manager [req-1e69153c-a6b7-4e40-9c1a-4b6f9d67bf5b req-7f6728fd-6ab4-4d6b-8c63-d1202196c6f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] No waiting events found dispatching network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:16:40 compute-0 nova_compute[253538]: 2025-11-25 09:16:40.555 253542 WARNING nova.compute.manager [req-1e69153c-a6b7-4e40-9c1a-4b6f9d67bf5b req-7f6728fd-6ab4-4d6b-8c63-d1202196c6f7 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received unexpected event network-vif-plugged-53302c95-cc0c-4237-a7f3-dca02953a876 for instance with vm_state active and task_state deleting.
Nov 25 09:16:40 compute-0 nova_compute[253538]: 2025-11-25 09:16:40.743 253542 DEBUG nova.network.neutron [-] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:16:40 compute-0 nova_compute[253538]: 2025-11-25 09:16:40.762 253542 INFO nova.compute.manager [-] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Took 2.27 seconds to deallocate network for instance.
Nov 25 09:16:40 compute-0 nova_compute[253538]: 2025-11-25 09:16:40.809 253542 DEBUG nova.compute.manager [req-f26c5f0e-ac4c-4571-ac7e-5755e28c9bab req-7b1d5594-4296-41f9-a0e1-6a10b8c49c95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Received event network-vif-deleted-53302c95-cc0c-4237-a7f3-dca02953a876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:16:40 compute-0 nova_compute[253538]: 2025-11-25 09:16:40.833 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:40 compute-0 nova_compute[253538]: 2025-11-25 09:16:40.833 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:40 compute-0 nova_compute[253538]: 2025-11-25 09:16:40.935 253542 DEBUG oslo_concurrency.processutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:16:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:41.100 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:41.101 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:41.102 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:16:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1984379008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:16:41 compute-0 nova_compute[253538]: 2025-11-25 09:16:41.396 253542 DEBUG oslo_concurrency.processutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:16:41 compute-0 nova_compute[253538]: 2025-11-25 09:16:41.406 253542 DEBUG nova.compute.provider_tree [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:16:41 compute-0 nova_compute[253538]: 2025-11-25 09:16:41.567 253542 DEBUG nova.scheduler.client.report [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:16:41 compute-0 nova_compute[253538]: 2025-11-25 09:16:41.596 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:41 compute-0 nova_compute[253538]: 2025-11-25 09:16:41.634 253542 INFO nova.scheduler.client.report [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 645b40f5-7a87-4de2-8b13-a340bcffd14b
Nov 25 09:16:41 compute-0 ceph-mon[75015]: pgmap v2888: 321 pgs: 321 active+clean; 190 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Nov 25 09:16:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1984379008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:16:41 compute-0 nova_compute[253538]: 2025-11-25 09:16:41.729 253542 DEBUG oslo_concurrency.lockutils [None req-521483c9-132a-45e0-83e2-dabe0be817a7 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "645b40f5-7a87-4de2-8b13-a340bcffd14b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2889: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 154 KiB/s rd, 1.1 MiB/s wr, 63 op/s
Nov 25 09:16:42 compute-0 nova_compute[253538]: 2025-11-25 09:16:42.887 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "2abdf1f8-0c71-459d-8467-ec8825219eda" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:42 compute-0 nova_compute[253538]: 2025-11-25 09:16:42.888 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:42 compute-0 nova_compute[253538]: 2025-11-25 09:16:42.889 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:42 compute-0 nova_compute[253538]: 2025-11-25 09:16:42.890 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:42 compute-0 nova_compute[253538]: 2025-11-25 09:16:42.890 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:42 compute-0 nova_compute[253538]: 2025-11-25 09:16:42.892 253542 INFO nova.compute.manager [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Terminating instance
Nov 25 09:16:42 compute-0 nova_compute[253538]: 2025-11-25 09:16:42.894 253542 DEBUG nova.compute.manager [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:16:42 compute-0 nova_compute[253538]: 2025-11-25 09:16:42.899 253542 DEBUG nova.compute.manager [req-7dcb55fa-f6cd-4e91-a53d-c68bc6b44d74 req-9aceae57-4124-47c8-ba14-02e6dae01c8e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-changed-f30cb228-eac2-4d17-a356-bec8d6ae142a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:16:42 compute-0 nova_compute[253538]: 2025-11-25 09:16:42.900 253542 DEBUG nova.compute.manager [req-7dcb55fa-f6cd-4e91-a53d-c68bc6b44d74 req-9aceae57-4124-47c8-ba14-02e6dae01c8e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Refreshing instance network info cache due to event network-changed-f30cb228-eac2-4d17-a356-bec8d6ae142a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:16:42 compute-0 nova_compute[253538]: 2025-11-25 09:16:42.900 253542 DEBUG oslo_concurrency.lockutils [req-7dcb55fa-f6cd-4e91-a53d-c68bc6b44d74 req-9aceae57-4124-47c8-ba14-02e6dae01c8e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:16:42 compute-0 nova_compute[253538]: 2025-11-25 09:16:42.901 253542 DEBUG oslo_concurrency.lockutils [req-7dcb55fa-f6cd-4e91-a53d-c68bc6b44d74 req-9aceae57-4124-47c8-ba14-02e6dae01c8e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:16:42 compute-0 nova_compute[253538]: 2025-11-25 09:16:42.901 253542 DEBUG nova.network.neutron [req-7dcb55fa-f6cd-4e91-a53d-c68bc6b44d74 req-9aceae57-4124-47c8-ba14-02e6dae01c8e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Refreshing network info cache for port f30cb228-eac2-4d17-a356-bec8d6ae142a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:16:42 compute-0 kernel: tapf30cb228-ea (unregistering): left promiscuous mode
Nov 25 09:16:42 compute-0 NetworkManager[48915]: <info>  [1764062202.9661] device (tapf30cb228-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:16:42 compute-0 nova_compute[253538]: 2025-11-25 09:16:42.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:42 compute-0 nova_compute[253538]: 2025-11-25 09:16:42.981 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:42 compute-0 ovn_controller[152859]: 2025-11-25T09:16:42Z|01610|binding|INFO|Releasing lport f30cb228-eac2-4d17-a356-bec8d6ae142a from this chassis (sb_readonly=0)
Nov 25 09:16:42 compute-0 ovn_controller[152859]: 2025-11-25T09:16:42Z|01611|binding|INFO|Setting lport f30cb228-eac2-4d17-a356-bec8d6ae142a down in Southbound
Nov 25 09:16:42 compute-0 ovn_controller[152859]: 2025-11-25T09:16:42Z|01612|binding|INFO|Removing iface tapf30cb228-ea ovn-installed in OVS
Nov 25 09:16:42 compute-0 nova_compute[253538]: 2025-11-25 09:16:42.983 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.002 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.012 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:b8:78 10.100.0.7 2001:db8::f816:3eff:fede:b878'], port_security=['fa:16:3e:de:b8:78 10.100.0.7 2001:db8::f816:3eff:fede:b878'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8::f816:3eff:fede:b878/64', 'neutron:device_id': '2abdf1f8-0c71-459d-8467-ec8825219eda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3cf572dfc9f42528923d69b8fa76422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '15198f79-2199-4ce9-8f4e-9ae43d3cedee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5830640-550a-474a-a915-1b8e117ec031, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=f30cb228-eac2-4d17-a356-bec8d6ae142a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:16:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.013 162739 INFO neutron.agent.ovn.metadata.agent [-] Port f30cb228-eac2-4d17-a356-bec8d6ae142a in datapath f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 unbound from our chassis
Nov 25 09:16:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.014 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f86f1e83-b07b-4abd-bc9a-7c03f3634fc6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:16:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.015 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[692d7065-890d-4244-ab5f-9f385dab91bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.016 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 namespace which is not needed anymore
Nov 25 09:16:43 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000096.scope: Deactivated successfully.
Nov 25 09:16:43 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000096.scope: Consumed 15.449s CPU time.
Nov 25 09:16:43 compute-0 systemd-machined[215790]: Machine qemu-180-instance-00000096 terminated.
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.136 253542 INFO nova.virt.libvirt.driver [-] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Instance destroyed successfully.
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.136 253542 DEBUG nova.objects.instance [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lazy-loading 'resources' on Instance uuid 2abdf1f8-0c71-459d-8467-ec8825219eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.149 253542 DEBUG nova.virt.libvirt.vif [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:15:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-5134712',display_name='tempest-TestGettingAddress-server-5134712',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-5134712',id=150,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyv667Z2ABnyPVBrZx9jd9gq6P58X6si+s81we9pUqnDsWI1jnpnGU0fnIp9UQ/Apxt3tS4iccd2fLnQpe7TCkxqUAyHZSIrSlnLfmfHodlWycVpxB5TAoXVMJ4xTQPeA==',key_name='tempest-TestGettingAddress-1435340358',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:15:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3cf572dfc9f42528923d69b8fa76422',ramdisk_id='',reservation_id='r-s9d0d0tr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-364728108',owner_user_name='tempest-TestGettingAddress-364728108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:15:42Z,user_data=None,user_id='c9fb13d4ba9041458692330b7276232f',uuid=2abdf1f8-0c71-459d-8467-ec8825219eda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.150 253542 DEBUG nova.network.os_vif_util [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converting VIF {"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.152 253542 DEBUG nova.network.os_vif_util [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:de:b8:78,bridge_name='br-int',has_traffic_filtering=True,id=f30cb228-eac2-4d17-a356-bec8d6ae142a,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf30cb228-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.153 253542 DEBUG os_vif [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:b8:78,bridge_name='br-int',has_traffic_filtering=True,id=f30cb228-eac2-4d17-a356-bec8d6ae142a,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf30cb228-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.154 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.154 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf30cb228-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.156 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.158 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.161 253542 INFO os_vif [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:b8:78,bridge_name='br-int',has_traffic_filtering=True,id=f30cb228-eac2-4d17-a356-bec8d6ae142a,network=Network(f86f1e83-b07b-4abd-bc9a-7c03f3634fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf30cb228-ea')
Nov 25 09:16:43 compute-0 neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6[416140]: [NOTICE]   (416144) : haproxy version is 2.8.14-c23fe91
Nov 25 09:16:43 compute-0 neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6[416140]: [NOTICE]   (416144) : path to executable is /usr/sbin/haproxy
Nov 25 09:16:43 compute-0 neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6[416140]: [WARNING]  (416144) : Exiting Master process...
Nov 25 09:16:43 compute-0 neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6[416140]: [WARNING]  (416144) : Exiting Master process...
Nov 25 09:16:43 compute-0 neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6[416140]: [ALERT]    (416144) : Current worker (416146) exited with code 143 (Terminated)
Nov 25 09:16:43 compute-0 neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6[416140]: [WARNING]  (416144) : All workers exited. Exiting... (0)
Nov 25 09:16:43 compute-0 systemd[1]: libpod-919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb.scope: Deactivated successfully.
Nov 25 09:16:43 compute-0 podman[418011]: 2025-11-25 09:16:43.183453689 +0000 UTC m=+0.066710905 container died 919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 09:16:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb-userdata-shm.mount: Deactivated successfully.
Nov 25 09:16:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-724b3d88118a671a8ffba4a262a67c17eb9917c6cc062f3440b0fe5cf48c7928-merged.mount: Deactivated successfully.
Nov 25 09:16:43 compute-0 podman[418011]: 2025-11-25 09:16:43.237833668 +0000 UTC m=+0.121090874 container cleanup 919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 09:16:43 compute-0 systemd[1]: libpod-conmon-919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb.scope: Deactivated successfully.
Nov 25 09:16:43 compute-0 podman[418044]: 2025-11-25 09:16:43.281466374 +0000 UTC m=+0.091880689 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:16:43 compute-0 podman[418036]: 2025-11-25 09:16:43.286620444 +0000 UTC m=+0.101878310 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 25 09:16:43 compute-0 podman[418102]: 2025-11-25 09:16:43.328842691 +0000 UTC m=+0.060260369 container remove 919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:16:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.336 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[36d25076-e2ba-415b-ab4e-54004f54538b]: (4, ('Tue Nov 25 09:16:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 (919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb)\n919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb\nTue Nov 25 09:16:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 (919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb)\n919ec9bb21963cad983079dccfc8800aa481d29f0479cf6379ec7168a71e58bb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.338 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[758a6a7a-c320-459b-ab34-ad90c2ca69fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.339 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf86f1e83-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.341 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:43 compute-0 kernel: tapf86f1e83-b0: left promiscuous mode
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.354 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.356 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a822c0-585b-4478-9bdb-102e4f0b4387]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:16:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.375 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[535d18ad-373d-4979-b76d-14e474a71e8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.377 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[075a6820-b4b0-487a-b4f3-2a562318b8d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.394 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[4863132b-8b26-4e73-8e9d-8cbb03e62286]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749549, 'reachable_time': 35827, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 418127, 'error': None, 'target': 'ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:43 compute-0 systemd[1]: run-netns-ovnmeta\x2df86f1e83\x2db07b\x2d4abd\x2dbc9a\x2d7c03f3634fc6.mount: Deactivated successfully.
Nov 25 09:16:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.397 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f86f1e83-b07b-4abd-bc9a-7c03f3634fc6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:16:43 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:16:43.397 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd78e63-3542-4926-8148-baaf269cf6f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.521 253542 DEBUG nova.compute.manager [req-05dcf2de-69f0-49ee-a2b3-003cd63272c2 req-5c8e8eb3-751f-4cde-8ab9-b23fcebf8e44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-vif-unplugged-f30cb228-eac2-4d17-a356-bec8d6ae142a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.522 253542 DEBUG oslo_concurrency.lockutils [req-05dcf2de-69f0-49ee-a2b3-003cd63272c2 req-5c8e8eb3-751f-4cde-8ab9-b23fcebf8e44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.522 253542 DEBUG oslo_concurrency.lockutils [req-05dcf2de-69f0-49ee-a2b3-003cd63272c2 req-5c8e8eb3-751f-4cde-8ab9-b23fcebf8e44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.522 253542 DEBUG oslo_concurrency.lockutils [req-05dcf2de-69f0-49ee-a2b3-003cd63272c2 req-5c8e8eb3-751f-4cde-8ab9-b23fcebf8e44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.523 253542 DEBUG nova.compute.manager [req-05dcf2de-69f0-49ee-a2b3-003cd63272c2 req-5c8e8eb3-751f-4cde-8ab9-b23fcebf8e44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] No waiting events found dispatching network-vif-unplugged-f30cb228-eac2-4d17-a356-bec8d6ae142a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.523 253542 DEBUG nova.compute.manager [req-05dcf2de-69f0-49ee-a2b3-003cd63272c2 req-5c8e8eb3-751f-4cde-8ab9-b23fcebf8e44 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-vif-unplugged-f30cb228-eac2-4d17-a356-bec8d6ae142a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.612 253542 INFO nova.virt.libvirt.driver [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Deleting instance files /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda_del
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.613 253542 INFO nova.virt.libvirt.driver [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Deletion of /var/lib/nova/instances/2abdf1f8-0c71-459d-8467-ec8825219eda_del complete
Nov 25 09:16:43 compute-0 ceph-mon[75015]: pgmap v2889: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 154 KiB/s rd, 1.1 MiB/s wr, 63 op/s
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.726 253542 INFO nova.compute.manager [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Took 0.83 seconds to destroy the instance on the hypervisor.
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.727 253542 DEBUG oslo.service.loopingcall [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.727 253542 DEBUG nova.compute.manager [-] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:16:43 compute-0 nova_compute[253538]: 2025-11-25 09:16:43.727 253542 DEBUG nova.network.neutron [-] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:16:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2890: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 730 KiB/s wr, 63 op/s
Nov 25 09:16:44 compute-0 nova_compute[253538]: 2025-11-25 09:16:44.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:16:44 compute-0 nova_compute[253538]: 2025-11-25 09:16:44.623 253542 DEBUG nova.network.neutron [-] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:16:44 compute-0 nova_compute[253538]: 2025-11-25 09:16:44.641 253542 INFO nova.compute.manager [-] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Took 0.91 seconds to deallocate network for instance.
Nov 25 09:16:44 compute-0 nova_compute[253538]: 2025-11-25 09:16:44.694 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:44 compute-0 nova_compute[253538]: 2025-11-25 09:16:44.695 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:44 compute-0 nova_compute[253538]: 2025-11-25 09:16:44.744 253542 DEBUG oslo_concurrency.processutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:16:44 compute-0 nova_compute[253538]: 2025-11-25 09:16:44.827 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:44 compute-0 nova_compute[253538]: 2025-11-25 09:16:44.890 253542 DEBUG nova.network.neutron [req-7dcb55fa-f6cd-4e91-a53d-c68bc6b44d74 req-9aceae57-4124-47c8-ba14-02e6dae01c8e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updated VIF entry in instance network info cache for port f30cb228-eac2-4d17-a356-bec8d6ae142a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:16:44 compute-0 nova_compute[253538]: 2025-11-25 09:16:44.891 253542 DEBUG nova.network.neutron [req-7dcb55fa-f6cd-4e91-a53d-c68bc6b44d74 req-9aceae57-4124-47c8-ba14-02e6dae01c8e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updating instance_info_cache with network_info: [{"id": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "address": "fa:16:3e:de:b8:78", "network": {"id": "f86f1e83-b07b-4abd-bc9a-7c03f3634fc6", "bridge": "br-int", "label": "tempest-network-smoke--1498407759", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fede:b878", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3cf572dfc9f42528923d69b8fa76422", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf30cb228-ea", "ovs_interfaceid": "f30cb228-eac2-4d17-a356-bec8d6ae142a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:16:44 compute-0 nova_compute[253538]: 2025-11-25 09:16:44.954 253542 DEBUG oslo_concurrency.lockutils [req-7dcb55fa-f6cd-4e91-a53d-c68bc6b44d74 req-9aceae57-4124-47c8-ba14-02e6dae01c8e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-2abdf1f8-0c71-459d-8467-ec8825219eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:16:44 compute-0 nova_compute[253538]: 2025-11-25 09:16:44.969 253542 DEBUG nova.compute.manager [req-1ee320dd-585b-4aea-bba2-2a436f52e337 req-094fd4b8-f274-4366-b0d2-79bc5a920a5e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-vif-deleted-f30cb228-eac2-4d17-a356-bec8d6ae142a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:16:44 compute-0 nova_compute[253538]: 2025-11-25 09:16:44.969 253542 INFO nova.compute.manager [req-1ee320dd-585b-4aea-bba2-2a436f52e337 req-094fd4b8-f274-4366-b0d2-79bc5a920a5e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Neutron deleted interface f30cb228-eac2-4d17-a356-bec8d6ae142a; detaching it from the instance and deleting it from the info cache
Nov 25 09:16:44 compute-0 nova_compute[253538]: 2025-11-25 09:16:44.969 253542 DEBUG nova.network.neutron [req-1ee320dd-585b-4aea-bba2-2a436f52e337 req-094fd4b8-f274-4366-b0d2-79bc5a920a5e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:16:45 compute-0 nova_compute[253538]: 2025-11-25 09:16:45.013 253542 DEBUG nova.compute.manager [req-1ee320dd-585b-4aea-bba2-2a436f52e337 req-094fd4b8-f274-4366-b0d2-79bc5a920a5e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Detach interface failed, port_id=f30cb228-eac2-4d17-a356-bec8d6ae142a, reason: Instance 2abdf1f8-0c71-459d-8467-ec8825219eda could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 09:16:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:16:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053815818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:16:45 compute-0 nova_compute[253538]: 2025-11-25 09:16:45.207 253542 DEBUG oslo_concurrency.processutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:16:45 compute-0 nova_compute[253538]: 2025-11-25 09:16:45.214 253542 DEBUG nova.compute.provider_tree [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:16:45 compute-0 nova_compute[253538]: 2025-11-25 09:16:45.232 253542 DEBUG nova.scheduler.client.report [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:16:45 compute-0 nova_compute[253538]: 2025-11-25 09:16:45.270 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:45 compute-0 nova_compute[253538]: 2025-11-25 09:16:45.336 253542 INFO nova.scheduler.client.report [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Deleted allocations for instance 2abdf1f8-0c71-459d-8467-ec8825219eda
Nov 25 09:16:45 compute-0 nova_compute[253538]: 2025-11-25 09:16:45.417 253542 DEBUG oslo_concurrency.lockutils [None req-6f9c169b-a0fe-430f-8841-61cbc5ef4924 c9fb13d4ba9041458692330b7276232f a3cf572dfc9f42528923d69b8fa76422 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:45 compute-0 nova_compute[253538]: 2025-11-25 09:16:45.608 253542 DEBUG nova.compute.manager [req-4ebf0d02-9979-4a35-94e8-6514c1552224 req-c8b0074e-296c-4403-aff9-d0fb09ed1676 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received event network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:16:45 compute-0 nova_compute[253538]: 2025-11-25 09:16:45.608 253542 DEBUG oslo_concurrency.lockutils [req-4ebf0d02-9979-4a35-94e8-6514c1552224 req-c8b0074e-296c-4403-aff9-d0fb09ed1676 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:16:45 compute-0 nova_compute[253538]: 2025-11-25 09:16:45.609 253542 DEBUG oslo_concurrency.lockutils [req-4ebf0d02-9979-4a35-94e8-6514c1552224 req-c8b0074e-296c-4403-aff9-d0fb09ed1676 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:16:45 compute-0 nova_compute[253538]: 2025-11-25 09:16:45.609 253542 DEBUG oslo_concurrency.lockutils [req-4ebf0d02-9979-4a35-94e8-6514c1552224 req-c8b0074e-296c-4403-aff9-d0fb09ed1676 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "2abdf1f8-0c71-459d-8467-ec8825219eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:16:45 compute-0 nova_compute[253538]: 2025-11-25 09:16:45.609 253542 DEBUG nova.compute.manager [req-4ebf0d02-9979-4a35-94e8-6514c1552224 req-c8b0074e-296c-4403-aff9-d0fb09ed1676 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] No waiting events found dispatching network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:16:45 compute-0 nova_compute[253538]: 2025-11-25 09:16:45.610 253542 WARNING nova.compute.manager [req-4ebf0d02-9979-4a35-94e8-6514c1552224 req-c8b0074e-296c-4403-aff9-d0fb09ed1676 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Received unexpected event network-vif-plugged-f30cb228-eac2-4d17-a356-bec8d6ae142a for instance with vm_state deleted and task_state None.
Nov 25 09:16:45 compute-0 ceph-mon[75015]: pgmap v2890: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 730 KiB/s wr, 63 op/s
Nov 25 09:16:45 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2053815818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:16:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2891: 321 pgs: 321 active+clean; 127 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 33 KiB/s wr, 52 op/s
Nov 25 09:16:47 compute-0 ceph-mon[75015]: pgmap v2891: 321 pgs: 321 active+clean; 127 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 33 KiB/s wr, 52 op/s
Nov 25 09:16:48 compute-0 nova_compute[253538]: 2025-11-25 09:16:48.158 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:16:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2892: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 17 KiB/s wr, 57 op/s
Nov 25 09:16:48 compute-0 nova_compute[253538]: 2025-11-25 09:16:48.565 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:16:49 compute-0 ceph-mon[75015]: pgmap v2892: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 17 KiB/s wr, 57 op/s
Nov 25 09:16:49 compute-0 nova_compute[253538]: 2025-11-25 09:16:49.828 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2893: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 5.7 KiB/s wr, 52 op/s
Nov 25 09:16:50 compute-0 nova_compute[253538]: 2025-11-25 09:16:50.560 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:50 compute-0 nova_compute[253538]: 2025-11-25 09:16:50.594 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:50 compute-0 podman[418152]: 2025-11-25 09:16:50.875423978 +0000 UTC m=+0.119799578 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 25 09:16:51 compute-0 ceph-mon[75015]: pgmap v2893: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 5.7 KiB/s wr, 52 op/s
Nov 25 09:16:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2894: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.4 KiB/s wr, 38 op/s
Nov 25 09:16:52 compute-0 nova_compute[253538]: 2025-11-25 09:16:52.947 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764062197.9445813, 645b40f5-7a87-4de2-8b13-a340bcffd14b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:16:52 compute-0 nova_compute[253538]: 2025-11-25 09:16:52.948 253542 INFO nova.compute.manager [-] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] VM Stopped (Lifecycle Event)
Nov 25 09:16:52 compute-0 nova_compute[253538]: 2025-11-25 09:16:52.976 253542 DEBUG nova.compute.manager [None req-e9777e07-eacc-4f83-817b-1fa92c9fd250 - - - - - -] [instance: 645b40f5-7a87-4de2-8b13-a340bcffd14b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:16:53 compute-0 nova_compute[253538]: 2025-11-25 09:16:53.160 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:16:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:16:53
Nov 25 09:16:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:16:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:16:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.control', 'images', 'volumes', 'vms', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Nov 25 09:16:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:16:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:16:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:16:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:16:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:16:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:16:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:16:53 compute-0 ceph-mon[75015]: pgmap v2894: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.4 KiB/s wr, 38 op/s
Nov 25 09:16:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:16:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:16:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:16:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:16:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:16:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:16:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:16:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:16:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:16:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:16:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2895: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.9 KiB/s wr, 28 op/s
Nov 25 09:16:54 compute-0 nova_compute[253538]: 2025-11-25 09:16:54.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:16:54 compute-0 nova_compute[253538]: 2025-11-25 09:16:54.830 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:55 compute-0 ceph-mon[75015]: pgmap v2895: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.9 KiB/s wr, 28 op/s
Nov 25 09:16:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2896: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 3.8 KiB/s wr, 22 op/s
Nov 25 09:16:56 compute-0 ceph-mon[75015]: pgmap v2896: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 3.8 KiB/s wr, 22 op/s
Nov 25 09:16:58 compute-0 nova_compute[253538]: 2025-11-25 09:16:58.136 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764062203.13455, 2abdf1f8-0c71-459d-8467-ec8825219eda => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:16:58 compute-0 nova_compute[253538]: 2025-11-25 09:16:58.136 253542 INFO nova.compute.manager [-] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] VM Stopped (Lifecycle Event)
Nov 25 09:16:58 compute-0 nova_compute[253538]: 2025-11-25 09:16:58.153 253542 DEBUG nova.compute.manager [None req-588e48fd-0318-4279-92ed-14a21bec7ec2 - - - - - -] [instance: 2abdf1f8-0c71-459d-8467-ec8825219eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:16:58 compute-0 nova_compute[253538]: 2025-11-25 09:16:58.162 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:16:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:16:58 compute-0 sshd-session[418178]: Invalid user mega from 45.202.211.6 port 39048
Nov 25 09:16:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2897: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 852 B/s wr, 9 op/s
Nov 25 09:16:58 compute-0 sshd-session[418178]: Received disconnect from 45.202.211.6 port 39048:11: Bye Bye [preauth]
Nov 25 09:16:58 compute-0 sshd-session[418178]: Disconnected from invalid user mega 45.202.211.6 port 39048 [preauth]
Nov 25 09:16:59 compute-0 ceph-mon[75015]: pgmap v2897: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 852 B/s wr, 9 op/s
Nov 25 09:16:59 compute-0 nova_compute[253538]: 2025-11-25 09:16:59.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:16:59 compute-0 nova_compute[253538]: 2025-11-25 09:16:59.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:16:59 compute-0 nova_compute[253538]: 2025-11-25 09:16:59.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:16:59 compute-0 nova_compute[253538]: 2025-11-25 09:16:59.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:16:59 compute-0 nova_compute[253538]: 2025-11-25 09:16:59.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:16:59 compute-0 nova_compute[253538]: 2025-11-25 09:16:59.833 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2898: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:01 compute-0 ceph-mon[75015]: pgmap v2898: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2899: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:03 compute-0 nova_compute[253538]: 2025-11-25 09:17:03.164 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:17:03 compute-0 ceph-mon[75015]: pgmap v2899: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:03 compute-0 nova_compute[253538]: 2025-11-25 09:17:03.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2900: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:17:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:17:04 compute-0 nova_compute[253538]: 2025-11-25 09:17:04.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:17:04 compute-0 nova_compute[253538]: 2025-11-25 09:17:04.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:17:04 compute-0 nova_compute[253538]: 2025-11-25 09:17:04.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:17:04 compute-0 nova_compute[253538]: 2025-11-25 09:17:04.834 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:05 compute-0 ceph-mon[75015]: pgmap v2900: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2901: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:07 compute-0 nova_compute[253538]: 2025-11-25 09:17:07.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:17:07 compute-0 ceph-mon[75015]: pgmap v2901: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:08 compute-0 nova_compute[253538]: 2025-11-25 09:17:08.172 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:17:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2902: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:09 compute-0 nova_compute[253538]: 2025-11-25 09:17:09.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:17:09 compute-0 ceph-mon[75015]: pgmap v2902: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:09 compute-0 nova_compute[253538]: 2025-11-25 09:17:09.836 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2903: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:11 compute-0 ceph-mon[75015]: pgmap v2903: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2904: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:12 compute-0 nova_compute[253538]: 2025-11-25 09:17:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:17:12 compute-0 nova_compute[253538]: 2025-11-25 09:17:12.591 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:12 compute-0 nova_compute[253538]: 2025-11-25 09:17:12.591 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:12 compute-0 nova_compute[253538]: 2025-11-25 09:17:12.592 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:12 compute-0 nova_compute[253538]: 2025-11-25 09:17:12.592 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:17:12 compute-0 nova_compute[253538]: 2025-11-25 09:17:12.592 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:17:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:17:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/210025578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.083 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.143 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.144 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.165 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.173 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.240 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.241 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.247 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.247 253542 INFO nova.compute.claims [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.285 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.286 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3595MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.287 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.334 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:17:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:17:13 compute-0 ceph-mon[75015]: pgmap v2904: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/210025578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:17:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:17:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3461204194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.758 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.763 253542 DEBUG nova.compute.provider_tree [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.777 253542 DEBUG nova.scheduler.client.report [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.795 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.795 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.798 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:13 compute-0 podman[418225]: 2025-11-25 09:17:13.803121905 +0000 UTC m=+0.058991995 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 09:17:13 compute-0 podman[418226]: 2025-11-25 09:17:13.82868751 +0000 UTC m=+0.084099138 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.857 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.857 253542 DEBUG nova.network.neutron [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.862 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance aefbd3e8-a8ba-4fef-a771-4e2b5091a90a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.863 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.863 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.876 253542 INFO nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.893 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:17:13 compute-0 sshd-session[418204]: Invalid user oracle from 193.32.162.151 port 38440
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.906 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.981 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.983 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:17:13 compute-0 nova_compute[253538]: 2025-11-25 09:17:13.983 253542 INFO nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Creating image(s)
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.005 253542 DEBUG nova.storage.rbd_utils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] rbd image aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:17:14 compute-0 sshd-session[418204]: Connection closed by invalid user oracle 193.32.162.151 port 38440 [preauth]
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.028 253542 DEBUG nova.storage.rbd_utils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] rbd image aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.048 253542 DEBUG nova.storage.rbd_utils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] rbd image aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.052 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.120 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.121 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.122 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.122 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.158 253542 DEBUG nova.storage.rbd_utils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] rbd image aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.163 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.203 253542 DEBUG nova.policy [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '612906aa606e4268918814ea9f47c674', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3c19f20fbaec489eaece7cf904f192fa', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:17:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:17:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2112254869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.392 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.397 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.421 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.443 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.444 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2905: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3461204194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:17:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2112254869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.764 253542 DEBUG nova.network.neutron [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Successfully created port: 3e3120d7-1164-497f-8e95-61789ab8f383 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:17:14 compute-0 nova_compute[253538]: 2025-11-25 09:17:14.838 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:15 compute-0 nova_compute[253538]: 2025-11-25 09:17:15.438 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:17:15 compute-0 nova_compute[253538]: 2025-11-25 09:17:15.492 253542 DEBUG nova.storage.rbd_utils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] resizing rbd image aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:17:15 compute-0 ceph-mon[75015]: pgmap v2905: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:15 compute-0 nova_compute[253538]: 2025-11-25 09:17:15.683 253542 DEBUG nova.objects.instance [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'migration_context' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:17:15 compute-0 nova_compute[253538]: 2025-11-25 09:17:15.698 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:17:15 compute-0 nova_compute[253538]: 2025-11-25 09:17:15.698 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Ensure instance console log exists: /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:17:15 compute-0 nova_compute[253538]: 2025-11-25 09:17:15.699 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:15 compute-0 nova_compute[253538]: 2025-11-25 09:17:15.699 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:15 compute-0 nova_compute[253538]: 2025-11-25 09:17:15.699 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2906: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:17:16 compute-0 nova_compute[253538]: 2025-11-25 09:17:16.645 253542 DEBUG nova.network.neutron [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Successfully updated port: 3e3120d7-1164-497f-8e95-61789ab8f383 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:17:16 compute-0 nova_compute[253538]: 2025-11-25 09:17:16.657 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:17:16 compute-0 nova_compute[253538]: 2025-11-25 09:17:16.657 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquired lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:17:16 compute-0 nova_compute[253538]: 2025-11-25 09:17:16.657 253542 DEBUG nova.network.neutron [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:17:16 compute-0 nova_compute[253538]: 2025-11-25 09:17:16.758 253542 DEBUG nova.compute.manager [req-8ed782c1-5642-4471-b5aa-480c4f1fcbd0 req-ee836338-d096-431e-8244-ad5920f2ac63 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-changed-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:17:16 compute-0 nova_compute[253538]: 2025-11-25 09:17:16.759 253542 DEBUG nova.compute.manager [req-8ed782c1-5642-4471-b5aa-480c4f1fcbd0 req-ee836338-d096-431e-8244-ad5920f2ac63 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Refreshing instance network info cache due to event network-changed-3e3120d7-1164-497f-8e95-61789ab8f383. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:17:16 compute-0 nova_compute[253538]: 2025-11-25 09:17:16.759 253542 DEBUG oslo_concurrency.lockutils [req-8ed782c1-5642-4471-b5aa-480c4f1fcbd0 req-ee836338-d096-431e-8244-ad5920f2ac63 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:17:16 compute-0 nova_compute[253538]: 2025-11-25 09:17:16.794 253542 DEBUG nova.network.neutron [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:17:17 compute-0 ceph-mon[75015]: pgmap v2906: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.138 253542 DEBUG nova.network.neutron [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Updating instance_info_cache with network_info: [{"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.167 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Releasing lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.167 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance network_info: |[{"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.168 253542 DEBUG oslo_concurrency.lockutils [req-8ed782c1-5642-4471-b5aa-480c4f1fcbd0 req-ee836338-d096-431e-8244-ad5920f2ac63 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.168 253542 DEBUG nova.network.neutron [req-8ed782c1-5642-4471-b5aa-480c4f1fcbd0 req-ee836338-d096-431e-8244-ad5920f2ac63 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Refreshing network info cache for port 3e3120d7-1164-497f-8e95-61789ab8f383 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.171 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Start _get_guest_xml network_info=[{"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.174 253542 WARNING nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.175 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.180 253542 DEBUG nova.virt.libvirt.host [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.180 253542 DEBUG nova.virt.libvirt.host [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.189 253542 DEBUG nova.virt.libvirt.host [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.189 253542 DEBUG nova.virt.libvirt.host [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.190 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.190 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.190 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.191 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.191 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.191 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.191 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.192 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.192 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.192 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.192 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.192 253542 DEBUG nova.virt.hardware [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.195 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:17:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:17:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2907: 321 pgs: 321 active+clean; 111 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 782 KiB/s wr, 25 op/s
Nov 25 09:17:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:17:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3519843840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:17:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3519843840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.683 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.709 253542 DEBUG nova.storage.rbd_utils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] rbd image aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:17:18 compute-0 nova_compute[253538]: 2025-11-25 09:17:18.716 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:17:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:17:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/498757576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.191 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.192 253542 DEBUG nova.virt.libvirt.vif [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:17:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1580927856',display_name='tempest-TestServerAdvancedOps-server-1580927856',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1580927856',id=152,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3c19f20fbaec489eaece7cf904f192fa',ramdisk_id='',reservation_id='r-aqe54enr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1902459302',owner_user_name='tempest-TestServerAdvancedOps
-1902459302-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:17:13Z,user_data=None,user_id='612906aa606e4268918814ea9f47c674',uuid=aefbd3e8-a8ba-4fef-a771-4e2b5091a90a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.193 253542 DEBUG nova.network.os_vif_util [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converting VIF {"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.194 253542 DEBUG nova.network.os_vif_util [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.195 253542 DEBUG nova.objects.instance [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'pci_devices' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.209 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:17:19 compute-0 nova_compute[253538]:   <uuid>aefbd3e8-a8ba-4fef-a771-4e2b5091a90a</uuid>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   <name>instance-00000098</name>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <nova:name>tempest-TestServerAdvancedOps-server-1580927856</nova:name>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:17:18</nova:creationTime>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:17:19 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:17:19 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:17:19 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:17:19 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:17:19 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:17:19 compute-0 nova_compute[253538]:         <nova:user uuid="612906aa606e4268918814ea9f47c674">tempest-TestServerAdvancedOps-1902459302-project-member</nova:user>
Nov 25 09:17:19 compute-0 nova_compute[253538]:         <nova:project uuid="3c19f20fbaec489eaece7cf904f192fa">tempest-TestServerAdvancedOps-1902459302</nova:project>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:17:19 compute-0 nova_compute[253538]:         <nova:port uuid="3e3120d7-1164-497f-8e95-61789ab8f383">
Nov 25 09:17:19 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <system>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <entry name="serial">aefbd3e8-a8ba-4fef-a771-4e2b5091a90a</entry>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <entry name="uuid">aefbd3e8-a8ba-4fef-a771-4e2b5091a90a</entry>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     </system>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   <os>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   </os>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   <features>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   </features>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk">
Nov 25 09:17:19 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       </source>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:17:19 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk.config">
Nov 25 09:17:19 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       </source>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:17:19 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:ae:3f:a1"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <target dev="tap3e3120d7-11"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a/console.log" append="off"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <video>
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     </video>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:17:19 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:17:19 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:17:19 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:17:19 compute-0 nova_compute[253538]: </domain>
Nov 25 09:17:19 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.210 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Preparing to wait for external event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.210 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.211 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.211 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.212 253542 DEBUG nova.virt.libvirt.vif [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:17:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1580927856',display_name='tempest-TestServerAdvancedOps-server-1580927856',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1580927856',id=152,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3c19f20fbaec489eaece7cf904f192fa',ramdisk_id='',reservation_id='r-aqe54enr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1902459302',owner_user_name='tempest-TestServerAdvancedOps-1902459302-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:17:13Z,user_data=None,user_id='612906aa606e4268918814ea9f47c674',uuid=aefbd3e8-a8ba-4fef-a771-4e2b5091a90a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.212 253542 DEBUG nova.network.os_vif_util [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converting VIF {"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.213 253542 DEBUG nova.network.os_vif_util [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.213 253542 DEBUG os_vif [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.214 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.214 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.215 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.219 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e3120d7-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.219 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e3120d7-11, col_values=(('external_ids', {'iface-id': '3e3120d7-1164-497f-8e95-61789ab8f383', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:3f:a1', 'vm-uuid': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.220 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:19 compute-0 NetworkManager[48915]: <info>  [1764062239.2217] manager: (tap3e3120d7-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/664)
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.223 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.227 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.228 253542 INFO os_vif [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11')
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.269 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.270 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.270 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] No VIF found with MAC fa:16:3e:ae:3f:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.270 253542 INFO nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Using config drive
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.289 253542 DEBUG nova.storage.rbd_utils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] rbd image aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:17:19 compute-0 ceph-mon[75015]: pgmap v2907: 321 pgs: 321 active+clean; 111 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 782 KiB/s wr, 25 op/s
Nov 25 09:17:19 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/498757576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.839 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.981 253542 INFO nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Creating config drive at /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a/disk.config
Nov 25 09:17:19 compute-0 nova_compute[253538]: 2025-11-25 09:17:19.990 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp44t3ms5s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.135 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp44t3ms5s" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.160 253542 DEBUG nova.storage.rbd_utils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] rbd image aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.164 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a/disk.config aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.196 253542 DEBUG nova.network.neutron [req-8ed782c1-5642-4471-b5aa-480c4f1fcbd0 req-ee836338-d096-431e-8244-ad5920f2ac63 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Updated VIF entry in instance network info cache for port 3e3120d7-1164-497f-8e95-61789ab8f383. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.198 253542 DEBUG nova.network.neutron [req-8ed782c1-5642-4471-b5aa-480c4f1fcbd0 req-ee836338-d096-431e-8244-ad5920f2ac63 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Updating instance_info_cache with network_info: [{"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.211 253542 DEBUG oslo_concurrency.lockutils [req-8ed782c1-5642-4471-b5aa-480c4f1fcbd0 req-ee836338-d096-431e-8244-ad5920f2ac63 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.303 253542 DEBUG oslo_concurrency.processutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a/disk.config aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.304 253542 INFO nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Deleting local config drive /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a/disk.config because it was imported into RBD.
Nov 25 09:17:20 compute-0 kernel: tap3e3120d7-11: entered promiscuous mode
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.350 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:20 compute-0 NetworkManager[48915]: <info>  [1764062240.3520] manager: (tap3e3120d7-11): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Nov 25 09:17:20 compute-0 ovn_controller[152859]: 2025-11-25T09:17:20Z|01613|binding|INFO|Claiming lport 3e3120d7-1164-497f-8e95-61789ab8f383 for this chassis.
Nov 25 09:17:20 compute-0 ovn_controller[152859]: 2025-11-25T09:17:20Z|01614|binding|INFO|3e3120d7-1164-497f-8e95-61789ab8f383: Claiming fa:16:3e:ae:3f:a1 10.100.0.5
Nov 25 09:17:20 compute-0 systemd-udevd[418590]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.390 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:20 compute-0 systemd-machined[215790]: New machine qemu-182-instance-00000098.
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.395 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:20 compute-0 ovn_controller[152859]: 2025-11-25T09:17:20Z|01615|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 ovn-installed in OVS
Nov 25 09:17:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:20.396 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:3f:a1 10.100.0.5'], port_security=['fa:16:3e:ae:3f:a1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f085f3e-8c63-44aa-9adc-cec2428aefd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c19f20fbaec489eaece7cf904f192fa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '280baeb6-358b-4b30-9eb9-c5b8498ccc51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b0834da-ae27-4c3d-83e4-79728139a358, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3e3120d7-1164-497f-8e95-61789ab8f383) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:17:20 compute-0 ovn_controller[152859]: 2025-11-25T09:17:20Z|01616|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 up in Southbound
Nov 25 09:17:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:20.397 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3e3120d7-1164-497f-8e95-61789ab8f383 in datapath 5f085f3e-8c63-44aa-9adc-cec2428aefd2 bound to our chassis
Nov 25 09:17:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:20.398 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f085f3e-8c63-44aa-9adc-cec2428aefd2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 09:17:20 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:20.399 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[74141c63-5a0a-452a-b484-2dbd390d9dbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.399 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:20 compute-0 NetworkManager[48915]: <info>  [1764062240.4008] device (tap3e3120d7-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:17:20 compute-0 NetworkManager[48915]: <info>  [1764062240.4031] device (tap3e3120d7-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:17:20 compute-0 systemd[1]: Started Virtual Machine qemu-182-instance-00000098.
Nov 25 09:17:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2908: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.897 253542 DEBUG nova.compute.manager [req-10ba1098-11f1-4b5f-886e-5e350cb6c767 req-cbdb653b-a43c-4476-8014-fdf40f33f6d0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.898 253542 DEBUG oslo_concurrency.lockutils [req-10ba1098-11f1-4b5f-886e-5e350cb6c767 req-cbdb653b-a43c-4476-8014-fdf40f33f6d0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.898 253542 DEBUG oslo_concurrency.lockutils [req-10ba1098-11f1-4b5f-886e-5e350cb6c767 req-cbdb653b-a43c-4476-8014-fdf40f33f6d0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.898 253542 DEBUG oslo_concurrency.lockutils [req-10ba1098-11f1-4b5f-886e-5e350cb6c767 req-cbdb653b-a43c-4476-8014-fdf40f33f6d0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:20 compute-0 nova_compute[253538]: 2025-11-25 09:17:20.898 253542 DEBUG nova.compute.manager [req-10ba1098-11f1-4b5f-886e-5e350cb6c767 req-cbdb653b-a43c-4476-8014-fdf40f33f6d0 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Processing event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.170 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.172 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062241.1712468, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.172 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Started (Lifecycle Event)
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.176 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.179 253542 INFO nova.virt.libvirt.driver [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance spawned successfully.
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.180 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.190 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.194 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.202 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.203 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.204 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.204 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.204 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.205 253542 DEBUG nova.virt.libvirt.driver [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.213 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.213 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062241.171398, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.214 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Paused (Lifecycle Event)
Nov 25 09:17:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:21.214 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.216 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:21 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:21.216 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.232 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.236 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062241.1749299, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.237 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Resumed (Lifecycle Event)
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.251 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.257 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.271 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.321 253542 INFO nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Took 7.34 seconds to spawn the instance on the hypervisor.
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.322 253542 DEBUG nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.390 253542 INFO nova.compute.manager [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Took 8.18 seconds to build instance.
Nov 25 09:17:21 compute-0 nova_compute[253538]: 2025-11-25 09:17:21.423 253542 DEBUG oslo_concurrency.lockutils [None req-e991bbc4-4882-4760-ba09-1cbed1562b4e 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:21 compute-0 ceph-mon[75015]: pgmap v2908: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:17:21 compute-0 podman[418642]: 2025-11-25 09:17:21.84659469 +0000 UTC m=+0.094664845 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 09:17:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2909: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 25 09:17:22 compute-0 sudo[418665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:17:22 compute-0 sudo[418665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:22 compute-0 sudo[418665]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:22 compute-0 sudo[418690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:17:22 compute-0 sudo[418690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:22 compute-0 sudo[418690]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:22 compute-0 sudo[418715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:17:22 compute-0 sudo[418715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:22 compute-0 sudo[418715]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:22 compute-0 sudo[418740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:17:22 compute-0 sudo[418740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:22 compute-0 nova_compute[253538]: 2025-11-25 09:17:22.970 253542 DEBUG nova.compute.manager [req-5f0fb66f-9092-4da8-85a8-ad57d0c9dc22 req-e5971cc2-4531-4e82-bfa2-b713cd617360 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:17:22 compute-0 nova_compute[253538]: 2025-11-25 09:17:22.971 253542 DEBUG oslo_concurrency.lockutils [req-5f0fb66f-9092-4da8-85a8-ad57d0c9dc22 req-e5971cc2-4531-4e82-bfa2-b713cd617360 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:22 compute-0 nova_compute[253538]: 2025-11-25 09:17:22.972 253542 DEBUG oslo_concurrency.lockutils [req-5f0fb66f-9092-4da8-85a8-ad57d0c9dc22 req-e5971cc2-4531-4e82-bfa2-b713cd617360 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:22 compute-0 nova_compute[253538]: 2025-11-25 09:17:22.972 253542 DEBUG oslo_concurrency.lockutils [req-5f0fb66f-9092-4da8-85a8-ad57d0c9dc22 req-e5971cc2-4531-4e82-bfa2-b713cd617360 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:22 compute-0 nova_compute[253538]: 2025-11-25 09:17:22.972 253542 DEBUG nova.compute.manager [req-5f0fb66f-9092-4da8-85a8-ad57d0c9dc22 req-e5971cc2-4531-4e82-bfa2-b713cd617360 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:17:22 compute-0 nova_compute[253538]: 2025-11-25 09:17:22.973 253542 WARNING nova.compute.manager [req-5f0fb66f-9092-4da8-85a8-ad57d0c9dc22 req-e5971cc2-4531-4e82-bfa2-b713cd617360 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state active and task_state None.
Nov 25 09:17:23 compute-0 sudo[418740]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:17:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:17:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:17:23 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:17:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:17:23 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:17:23 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 9fe04653-f5f5-429d-b5fc-40ee7d92267f does not exist
Nov 25 09:17:23 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev adc6a3b2-ca8c-48ac-a54c-454f3c202900 does not exist
Nov 25 09:17:23 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev fa258db8-0a70-4fdb-a727-a7abd4c91a1b does not exist
Nov 25 09:17:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:17:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:17:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:17:23 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:17:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:17:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:17:23 compute-0 sudo[418796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:17:23 compute-0 sudo[418796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:23 compute-0 sudo[418796]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:17:23 compute-0 sudo[418821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:17:23 compute-0 sudo[418821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:23 compute-0 sudo[418821]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:23 compute-0 nova_compute[253538]: 2025-11-25 09:17:23.468 253542 DEBUG nova.objects.instance [None req-f51c54d3-bc18-4760-951b-0325ee83221a 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'pci_devices' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:17:23 compute-0 sudo[418846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:17:23 compute-0 sudo[418846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:17:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:17:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:17:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:17:23 compute-0 sudo[418846]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:17:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:17:23 compute-0 nova_compute[253538]: 2025-11-25 09:17:23.495 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062243.4955335, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:17:23 compute-0 nova_compute[253538]: 2025-11-25 09:17:23.496 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Paused (Lifecycle Event)
Nov 25 09:17:23 compute-0 nova_compute[253538]: 2025-11-25 09:17:23.511 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:17:23 compute-0 nova_compute[253538]: 2025-11-25 09:17:23.516 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:17:23 compute-0 sudo[418871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:17:23 compute-0 sudo[418871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:23 compute-0 nova_compute[253538]: 2025-11-25 09:17:23.534 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 09:17:23 compute-0 podman[418939]: 2025-11-25 09:17:23.844564733 +0000 UTC m=+0.024828677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:17:23 compute-0 ceph-mon[75015]: pgmap v2909: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 25 09:17:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:17:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:17:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:17:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:17:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:17:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:17:24 compute-0 podman[418939]: 2025-11-25 09:17:24.032226544 +0000 UTC m=+0.212490438 container create a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 09:17:24 compute-0 kernel: tap3e3120d7-11 (unregistering): left promiscuous mode
Nov 25 09:17:24 compute-0 NetworkManager[48915]: <info>  [1764062244.0946] device (tap3e3120d7-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:17:24 compute-0 systemd[1]: Started libpod-conmon-a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7.scope.
Nov 25 09:17:24 compute-0 nova_compute[253538]: 2025-11-25 09:17:24.108 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:24 compute-0 ovn_controller[152859]: 2025-11-25T09:17:24Z|01617|binding|INFO|Releasing lport 3e3120d7-1164-497f-8e95-61789ab8f383 from this chassis (sb_readonly=0)
Nov 25 09:17:24 compute-0 ovn_controller[152859]: 2025-11-25T09:17:24Z|01618|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 down in Southbound
Nov 25 09:17:24 compute-0 ovn_controller[152859]: 2025-11-25T09:17:24Z|01619|binding|INFO|Removing iface tap3e3120d7-11 ovn-installed in OVS
Nov 25 09:17:24 compute-0 nova_compute[253538]: 2025-11-25 09:17:24.111 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:24.116 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:3f:a1 10.100.0.5'], port_security=['fa:16:3e:ae:3f:a1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f085f3e-8c63-44aa-9adc-cec2428aefd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c19f20fbaec489eaece7cf904f192fa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '280baeb6-358b-4b30-9eb9-c5b8498ccc51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b0834da-ae27-4c3d-83e4-79728139a358, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3e3120d7-1164-497f-8e95-61789ab8f383) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:17:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:24.117 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3e3120d7-1164-497f-8e95-61789ab8f383 in datapath 5f085f3e-8c63-44aa-9adc-cec2428aefd2 unbound from our chassis
Nov 25 09:17:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:24.118 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f085f3e-8c63-44aa-9adc-cec2428aefd2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 09:17:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:24.119 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[de11c323-83a9-47ee-84cf-6674a91ece0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:17:24 compute-0 nova_compute[253538]: 2025-11-25 09:17:24.122 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:17:24 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000098.scope: Deactivated successfully.
Nov 25 09:17:24 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000098.scope: Consumed 3.149s CPU time.
Nov 25 09:17:24 compute-0 systemd-machined[215790]: Machine qemu-182-instance-00000098 terminated.
Nov 25 09:17:24 compute-0 podman[418939]: 2025-11-25 09:17:24.167571243 +0000 UTC m=+0.347835167 container init a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_poincare, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:17:24 compute-0 podman[418939]: 2025-11-25 09:17:24.17776894 +0000 UTC m=+0.358032844 container start a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_poincare, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:17:24 compute-0 relaxed_poincare[418957]: 167 167
Nov 25 09:17:24 compute-0 systemd[1]: libpod-a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7.scope: Deactivated successfully.
Nov 25 09:17:24 compute-0 conmon[418957]: conmon a80d33063619818ac872 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7.scope/container/memory.events
Nov 25 09:17:24 compute-0 podman[418939]: 2025-11-25 09:17:24.185573482 +0000 UTC m=+0.365837386 container attach a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:17:24 compute-0 podman[418939]: 2025-11-25 09:17:24.188872892 +0000 UTC m=+0.369136826 container died a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 25 09:17:24 compute-0 nova_compute[253538]: 2025-11-25 09:17:24.194 253542 DEBUG nova.compute.manager [None req-f51c54d3-bc18-4760-951b-0325ee83221a 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:17:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:24.218 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:17:24 compute-0 nova_compute[253538]: 2025-11-25 09:17:24.222 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-8353aed3f565c9cd61cf066823dce0624fe40c0231ac6523aeec51ffd4530755-merged.mount: Deactivated successfully.
Nov 25 09:17:24 compute-0 podman[418939]: 2025-11-25 09:17:24.255104622 +0000 UTC m=+0.435368526 container remove a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_poincare, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:17:24 compute-0 systemd[1]: libpod-conmon-a80d33063619818ac872d3603d91e1defab113316e80329aa33c3cc9cf989eb7.scope: Deactivated successfully.
Nov 25 09:17:24 compute-0 podman[418998]: 2025-11-25 09:17:24.449168018 +0000 UTC m=+0.058840550 container create ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:17:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2910: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Nov 25 09:17:24 compute-0 systemd[1]: Started libpod-conmon-ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656.scope.
Nov 25 09:17:24 compute-0 podman[418998]: 2025-11-25 09:17:24.416815758 +0000 UTC m=+0.026488320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:17:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fe786a9a6cf15a9cde18f8d46b369b17b37322a074c7f96122f3afaf584a52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fe786a9a6cf15a9cde18f8d46b369b17b37322a074c7f96122f3afaf584a52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fe786a9a6cf15a9cde18f8d46b369b17b37322a074c7f96122f3afaf584a52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fe786a9a6cf15a9cde18f8d46b369b17b37322a074c7f96122f3afaf584a52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fe786a9a6cf15a9cde18f8d46b369b17b37322a074c7f96122f3afaf584a52/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:17:24 compute-0 podman[418998]: 2025-11-25 09:17:24.556158666 +0000 UTC m=+0.165831228 container init ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:17:24 compute-0 podman[418998]: 2025-11-25 09:17:24.565245593 +0000 UTC m=+0.174918115 container start ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 09:17:24 compute-0 podman[418998]: 2025-11-25 09:17:24.572697236 +0000 UTC m=+0.182369768 container attach ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:17:24 compute-0 nova_compute[253538]: 2025-11-25 09:17:24.841 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:24 compute-0 ceph-mon[75015]: pgmap v2910: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Nov 25 09:17:25 compute-0 nova_compute[253538]: 2025-11-25 09:17:25.062 253542 DEBUG nova.compute.manager [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:17:25 compute-0 nova_compute[253538]: 2025-11-25 09:17:25.063 253542 DEBUG oslo_concurrency.lockutils [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:25 compute-0 nova_compute[253538]: 2025-11-25 09:17:25.063 253542 DEBUG oslo_concurrency.lockutils [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:25 compute-0 nova_compute[253538]: 2025-11-25 09:17:25.063 253542 DEBUG oslo_concurrency.lockutils [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:25 compute-0 nova_compute[253538]: 2025-11-25 09:17:25.064 253542 DEBUG nova.compute.manager [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:17:25 compute-0 nova_compute[253538]: 2025-11-25 09:17:25.064 253542 WARNING nova.compute.manager [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state suspended and task_state None.
Nov 25 09:17:25 compute-0 nova_compute[253538]: 2025-11-25 09:17:25.064 253542 DEBUG nova.compute.manager [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:17:25 compute-0 nova_compute[253538]: 2025-11-25 09:17:25.064 253542 DEBUG oslo_concurrency.lockutils [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:25 compute-0 nova_compute[253538]: 2025-11-25 09:17:25.064 253542 DEBUG oslo_concurrency.lockutils [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:25 compute-0 nova_compute[253538]: 2025-11-25 09:17:25.065 253542 DEBUG oslo_concurrency.lockutils [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:25 compute-0 nova_compute[253538]: 2025-11-25 09:17:25.065 253542 DEBUG nova.compute.manager [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:17:25 compute-0 nova_compute[253538]: 2025-11-25 09:17:25.065 253542 WARNING nova.compute.manager [req-6802fae2-b7c1-424b-acec-49f85503ff82 req-41e865f2-9305-4601-baad-69b043fea37c b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state suspended and task_state None.
Nov 25 09:17:25 compute-0 pedantic_pasteur[419015]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:17:25 compute-0 pedantic_pasteur[419015]: --> relative data size: 1.0
Nov 25 09:17:25 compute-0 pedantic_pasteur[419015]: --> All data devices are unavailable
Nov 25 09:17:25 compute-0 systemd[1]: libpod-ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656.scope: Deactivated successfully.
Nov 25 09:17:25 compute-0 podman[418998]: 2025-11-25 09:17:25.748292253 +0000 UTC m=+1.357964795 container died ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:17:25 compute-0 systemd[1]: libpod-ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656.scope: Consumed 1.134s CPU time.
Nov 25 09:17:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9fe786a9a6cf15a9cde18f8d46b369b17b37322a074c7f96122f3afaf584a52-merged.mount: Deactivated successfully.
Nov 25 09:17:25 compute-0 podman[418998]: 2025-11-25 09:17:25.820084395 +0000 UTC m=+1.429756927 container remove ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:17:25 compute-0 systemd[1]: libpod-conmon-ae256fddc37385405c4f90e55062754447d7776931625d82e66c5eef8cdef656.scope: Deactivated successfully.
Nov 25 09:17:25 compute-0 sudo[418871]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:25 compute-0 sudo[419055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:17:25 compute-0 sudo[419055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:25 compute-0 sudo[419055]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:25 compute-0 sudo[419080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:17:25 compute-0 sudo[419080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:25 compute-0 sudo[419080]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:26 compute-0 sudo[419105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:17:26 compute-0 sudo[419105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:26 compute-0 sudo[419105]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:26 compute-0 sudo[419130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:17:26 compute-0 sudo[419130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:26 compute-0 nova_compute[253538]: 2025-11-25 09:17:26.208 253542 INFO nova.compute.manager [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Resuming
Nov 25 09:17:26 compute-0 nova_compute[253538]: 2025-11-25 09:17:26.209 253542 DEBUG nova.objects.instance [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'flavor' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:17:26 compute-0 nova_compute[253538]: 2025-11-25 09:17:26.242 253542 DEBUG oslo_concurrency.lockutils [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:17:26 compute-0 nova_compute[253538]: 2025-11-25 09:17:26.243 253542 DEBUG oslo_concurrency.lockutils [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquired lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:17:26 compute-0 nova_compute[253538]: 2025-11-25 09:17:26.243 253542 DEBUG nova.network.neutron [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:17:26 compute-0 podman[419196]: 2025-11-25 09:17:26.405849678 +0000 UTC m=+0.039623008 container create 62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wu, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 09:17:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2911: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Nov 25 09:17:26 compute-0 podman[419196]: 2025-11-25 09:17:26.388052275 +0000 UTC m=+0.021825635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:17:26 compute-0 systemd[1]: Started libpod-conmon-62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a.scope.
Nov 25 09:17:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:17:26 compute-0 podman[419196]: 2025-11-25 09:17:26.536199992 +0000 UTC m=+0.169973352 container init 62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 09:17:26 compute-0 podman[419196]: 2025-11-25 09:17:26.542698248 +0000 UTC m=+0.176471578 container start 62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:17:26 compute-0 pensive_wu[419214]: 167 167
Nov 25 09:17:26 compute-0 systemd[1]: libpod-62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a.scope: Deactivated successfully.
Nov 25 09:17:26 compute-0 conmon[419214]: conmon 62ba897be086476e45e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a.scope/container/memory.events
Nov 25 09:17:26 compute-0 podman[419196]: 2025-11-25 09:17:26.564364878 +0000 UTC m=+0.198138208 container attach 62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wu, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:17:26 compute-0 podman[419196]: 2025-11-25 09:17:26.564738658 +0000 UTC m=+0.198511988 container died 62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wu, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 09:17:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd53542dcc6e8faab4f858cc99c3271a9b8a96385c9467bf43c312705353b276-merged.mount: Deactivated successfully.
Nov 25 09:17:26 compute-0 podman[419196]: 2025-11-25 09:17:26.660651305 +0000 UTC m=+0.294424635 container remove 62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wu, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:17:26 compute-0 systemd[1]: libpod-conmon-62ba897be086476e45e2767e09f5c65a88c476fdc68cd3a83ccc69c9b9d6471a.scope: Deactivated successfully.
Nov 25 09:17:26 compute-0 podman[419240]: 2025-11-25 09:17:26.872789362 +0000 UTC m=+0.067384643 container create e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_pascal, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:17:26 compute-0 systemd[1]: Started libpod-conmon-e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9.scope.
Nov 25 09:17:26 compute-0 podman[419240]: 2025-11-25 09:17:26.849302653 +0000 UTC m=+0.043897974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:17:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:17:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b09c0a247f0ddce71821545b3ace0cd46b70e13e26dbc2d4cec91c9df254566/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:17:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b09c0a247f0ddce71821545b3ace0cd46b70e13e26dbc2d4cec91c9df254566/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:17:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b09c0a247f0ddce71821545b3ace0cd46b70e13e26dbc2d4cec91c9df254566/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:17:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b09c0a247f0ddce71821545b3ace0cd46b70e13e26dbc2d4cec91c9df254566/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:17:26 compute-0 podman[419240]: 2025-11-25 09:17:26.972093031 +0000 UTC m=+0.166688322 container init e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_pascal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:17:26 compute-0 podman[419240]: 2025-11-25 09:17:26.980722555 +0000 UTC m=+0.175317836 container start e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:17:26 compute-0 podman[419240]: 2025-11-25 09:17:26.984229251 +0000 UTC m=+0.178824572 container attach e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:17:27 compute-0 ceph-mon[75015]: pgmap v2911: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Nov 25 09:17:27 compute-0 zen_pascal[419256]: {
Nov 25 09:17:27 compute-0 zen_pascal[419256]:     "0": [
Nov 25 09:17:27 compute-0 zen_pascal[419256]:         {
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "devices": [
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "/dev/loop3"
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             ],
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "lv_name": "ceph_lv0",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "lv_size": "21470642176",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "name": "ceph_lv0",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "tags": {
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.cluster_name": "ceph",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.crush_device_class": "",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.encrypted": "0",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.osd_id": "0",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.type": "block",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.vdo": "0"
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             },
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "type": "block",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "vg_name": "ceph_vg0"
Nov 25 09:17:27 compute-0 zen_pascal[419256]:         }
Nov 25 09:17:27 compute-0 zen_pascal[419256]:     ],
Nov 25 09:17:27 compute-0 zen_pascal[419256]:     "1": [
Nov 25 09:17:27 compute-0 zen_pascal[419256]:         {
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "devices": [
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "/dev/loop4"
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             ],
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "lv_name": "ceph_lv1",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "lv_size": "21470642176",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "name": "ceph_lv1",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "tags": {
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.cluster_name": "ceph",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.crush_device_class": "",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.encrypted": "0",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.osd_id": "1",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.type": "block",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.vdo": "0"
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             },
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "type": "block",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "vg_name": "ceph_vg1"
Nov 25 09:17:27 compute-0 zen_pascal[419256]:         }
Nov 25 09:17:27 compute-0 zen_pascal[419256]:     ],
Nov 25 09:17:27 compute-0 zen_pascal[419256]:     "2": [
Nov 25 09:17:27 compute-0 zen_pascal[419256]:         {
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "devices": [
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "/dev/loop5"
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             ],
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "lv_name": "ceph_lv2",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "lv_size": "21470642176",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "name": "ceph_lv2",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "tags": {
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.cluster_name": "ceph",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.crush_device_class": "",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.encrypted": "0",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.osd_id": "2",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.type": "block",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:                 "ceph.vdo": "0"
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             },
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "type": "block",
Nov 25 09:17:27 compute-0 zen_pascal[419256]:             "vg_name": "ceph_vg2"
Nov 25 09:17:27 compute-0 zen_pascal[419256]:         }
Nov 25 09:17:27 compute-0 zen_pascal[419256]:     ]
Nov 25 09:17:27 compute-0 zen_pascal[419256]: }
Nov 25 09:17:27 compute-0 systemd[1]: libpod-e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9.scope: Deactivated successfully.
Nov 25 09:17:27 compute-0 podman[419240]: 2025-11-25 09:17:27.763761002 +0000 UTC m=+0.958356293 container died e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 09:17:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b09c0a247f0ddce71821545b3ace0cd46b70e13e26dbc2d4cec91c9df254566-merged.mount: Deactivated successfully.
Nov 25 09:17:27 compute-0 podman[419240]: 2025-11-25 09:17:27.815825727 +0000 UTC m=+1.010420998 container remove e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:17:27 compute-0 systemd[1]: libpod-conmon-e089890665f76fd96184bd8109a7567db5ffcc949fa8ab408810eef17aa078f9.scope: Deactivated successfully.
Nov 25 09:17:27 compute-0 sudo[419130]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:27 compute-0 sudo[419277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:17:27 compute-0 sudo[419277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:27 compute-0 sudo[419277]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:27 compute-0 sudo[419302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:17:27 compute-0 sudo[419302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:27 compute-0 sudo[419302]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:27 compute-0 nova_compute[253538]: 2025-11-25 09:17:27.986 253542 DEBUG nova.network.neutron [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Updating instance_info_cache with network_info: [{"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.003 253542 DEBUG oslo_concurrency.lockutils [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Releasing lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.008 253542 DEBUG nova.virt.libvirt.vif [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:17:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1580927856',display_name='tempest-TestServerAdvancedOps-server-1580927856',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1580927856',id=152,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:17:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3c19f20fbaec489eaece7cf904f192fa',ramdisk_id='',reservation_id='r-aqe54enr',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1902459302',owner_user_name='tempest-TestServerAdvancedOps-1902459302-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:17:24Z,user_data=None,user_id='612906aa606e4268918814ea9f47c674',uuid=aefbd3e8-a8ba-4fef-a771-4e2b5091a90a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.009 253542 DEBUG nova.network.os_vif_util [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converting VIF {"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.010 253542 DEBUG nova.network.os_vif_util [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.011 253542 DEBUG os_vif [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.012 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.012 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.013 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.016 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.016 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e3120d7-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.017 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e3120d7-11, col_values=(('external_ids', {'iface-id': '3e3120d7-1164-497f-8e95-61789ab8f383', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:3f:a1', 'vm-uuid': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.017 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.018 253542 INFO os_vif [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11')
Nov 25 09:17:28 compute-0 sudo[419327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:17:28 compute-0 sudo[419327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:28 compute-0 sudo[419327]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.037 253542 DEBUG nova.objects.instance [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'numa_topology' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:17:28 compute-0 sudo[419353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:17:28 compute-0 sudo[419353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:28 compute-0 kernel: tap3e3120d7-11: entered promiscuous mode
Nov 25 09:17:28 compute-0 NetworkManager[48915]: <info>  [1764062248.1018] manager: (tap3e3120d7-11): new Tun device (/org/freedesktop/NetworkManager/Devices/666)
Nov 25 09:17:28 compute-0 systemd-udevd[419388]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.145 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:28 compute-0 ovn_controller[152859]: 2025-11-25T09:17:28Z|01620|binding|INFO|Claiming lport 3e3120d7-1164-497f-8e95-61789ab8f383 for this chassis.
Nov 25 09:17:28 compute-0 ovn_controller[152859]: 2025-11-25T09:17:28Z|01621|binding|INFO|3e3120d7-1164-497f-8e95-61789ab8f383: Claiming fa:16:3e:ae:3f:a1 10.100.0.5
Nov 25 09:17:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:28.152 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:3f:a1 10.100.0.5'], port_security=['fa:16:3e:ae:3f:a1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f085f3e-8c63-44aa-9adc-cec2428aefd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c19f20fbaec489eaece7cf904f192fa', 'neutron:revision_number': '5', 'neutron:security_group_ids': '280baeb6-358b-4b30-9eb9-c5b8498ccc51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b0834da-ae27-4c3d-83e4-79728139a358, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3e3120d7-1164-497f-8e95-61789ab8f383) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:17:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:28.153 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3e3120d7-1164-497f-8e95-61789ab8f383 in datapath 5f085f3e-8c63-44aa-9adc-cec2428aefd2 bound to our chassis
Nov 25 09:17:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:28.154 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f085f3e-8c63-44aa-9adc-cec2428aefd2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 09:17:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:28.155 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb7dba4-e9fb-4682-9aaa-2dff32beb969]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:17:28 compute-0 NetworkManager[48915]: <info>  [1764062248.1582] device (tap3e3120d7-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:17:28 compute-0 NetworkManager[48915]: <info>  [1764062248.1593] device (tap3e3120d7-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.163 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:28 compute-0 ovn_controller[152859]: 2025-11-25T09:17:28Z|01622|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 ovn-installed in OVS
Nov 25 09:17:28 compute-0 ovn_controller[152859]: 2025-11-25T09:17:28Z|01623|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 up in Southbound
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.170 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:28 compute-0 systemd-machined[215790]: New machine qemu-183-instance-00000098.
Nov 25 09:17:28 compute-0 systemd[1]: Started Virtual Machine qemu-183-instance-00000098.
Nov 25 09:17:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:17:28 compute-0 podman[419440]: 2025-11-25 09:17:28.411006357 +0000 UTC m=+0.042398994 container create 6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.424 253542 DEBUG nova.compute.manager [req-a642535f-8f68-4a2a-8863-d55f4fe70e2e req-7c2d1f87-9a6b-4e9d-8caf-3c8e927d19c9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.426 253542 DEBUG oslo_concurrency.lockutils [req-a642535f-8f68-4a2a-8863-d55f4fe70e2e req-7c2d1f87-9a6b-4e9d-8caf-3c8e927d19c9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.426 253542 DEBUG oslo_concurrency.lockutils [req-a642535f-8f68-4a2a-8863-d55f4fe70e2e req-7c2d1f87-9a6b-4e9d-8caf-3c8e927d19c9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.426 253542 DEBUG oslo_concurrency.lockutils [req-a642535f-8f68-4a2a-8863-d55f4fe70e2e req-7c2d1f87-9a6b-4e9d-8caf-3c8e927d19c9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.427 253542 DEBUG nova.compute.manager [req-a642535f-8f68-4a2a-8863-d55f4fe70e2e req-7c2d1f87-9a6b-4e9d-8caf-3c8e927d19c9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:17:28 compute-0 nova_compute[253538]: 2025-11-25 09:17:28.427 253542 WARNING nova.compute.manager [req-a642535f-8f68-4a2a-8863-d55f4fe70e2e req-7c2d1f87-9a6b-4e9d-8caf-3c8e927d19c9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state suspended and task_state resuming.
Nov 25 09:17:28 compute-0 systemd[1]: Started libpod-conmon-6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200.scope.
Nov 25 09:17:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2912: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 09:17:28 compute-0 podman[419440]: 2025-11-25 09:17:28.392620776 +0000 UTC m=+0.024013413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:17:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:17:28 compute-0 podman[419440]: 2025-11-25 09:17:28.525351405 +0000 UTC m=+0.156744072 container init 6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 09:17:28 compute-0 podman[419440]: 2025-11-25 09:17:28.533261109 +0000 UTC m=+0.164653746 container start 6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:17:28 compute-0 podman[419440]: 2025-11-25 09:17:28.536053295 +0000 UTC m=+0.167445962 container attach 6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:17:28 compute-0 nervous_benz[419456]: 167 167
Nov 25 09:17:28 compute-0 systemd[1]: libpod-6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200.scope: Deactivated successfully.
Nov 25 09:17:28 compute-0 conmon[419456]: conmon 6eb705af89601b204126 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200.scope/container/memory.events
Nov 25 09:17:28 compute-0 podman[419440]: 2025-11-25 09:17:28.541996878 +0000 UTC m=+0.173389515 container died 6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:17:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-790e3f136d75c6efab54f2ba978389ab68ce4e5019555bb3c78725fc6adcaf20-merged.mount: Deactivated successfully.
Nov 25 09:17:28 compute-0 podman[419440]: 2025-11-25 09:17:28.585316065 +0000 UTC m=+0.216708712 container remove 6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 09:17:28 compute-0 systemd[1]: libpod-conmon-6eb705af89601b204126d136b5c1de439d8fb08d9c9ce931e0016cb10b740200.scope: Deactivated successfully.
Nov 25 09:17:28 compute-0 podman[419479]: 2025-11-25 09:17:28.76611272 +0000 UTC m=+0.046053933 container create d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 09:17:28 compute-0 systemd[1]: Started libpod-conmon-d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b.scope.
Nov 25 09:17:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:17:28 compute-0 podman[419479]: 2025-11-25 09:17:28.746569819 +0000 UTC m=+0.026511072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40580bb91d8be25ca25b9f0eb08c0aaaaa8db588e24bf44fd57a4d00441767d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40580bb91d8be25ca25b9f0eb08c0aaaaa8db588e24bf44fd57a4d00441767d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40580bb91d8be25ca25b9f0eb08c0aaaaa8db588e24bf44fd57a4d00441767d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40580bb91d8be25ca25b9f0eb08c0aaaaa8db588e24bf44fd57a4d00441767d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:17:28 compute-0 podman[419479]: 2025-11-25 09:17:28.886597655 +0000 UTC m=+0.166538928 container init d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 09:17:28 compute-0 podman[419479]: 2025-11-25 09:17:28.90111555 +0000 UTC m=+0.181056763 container start d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 09:17:28 compute-0 podman[419479]: 2025-11-25 09:17:28.911909823 +0000 UTC m=+0.191851056 container attach d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:17:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:17:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/16843156' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:17:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:17:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/16843156' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.226 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.394 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for aefbd3e8-a8ba-4fef-a771-4e2b5091a90a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.394 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062249.3940399, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.394 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Started (Lifecycle Event)
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.422 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.426 253542 DEBUG nova.compute.manager [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.427 253542 DEBUG nova.objects.instance [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'pci_devices' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.432 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.447 253542 INFO nova.virt.libvirt.driver [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance running successfully.
Nov 25 09:17:29 compute-0 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.450 253542 DEBUG nova.virt.libvirt.guest [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.451 253542 DEBUG nova.compute.manager [None req-254ffe90-dcb6-4ec1-89d2-d004a6c7f005 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.452 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.452 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062249.3987281, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.452 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Resumed (Lifecycle Event)
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.476 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.480 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.499 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 09:17:29 compute-0 ceph-mon[75015]: pgmap v2912: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 09:17:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/16843156' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:17:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/16843156' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:17:29 compute-0 nova_compute[253538]: 2025-11-25 09:17:29.842 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]: {
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:         "osd_id": 1,
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:         "type": "bluestore"
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:     },
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:         "osd_id": 2,
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:         "type": "bluestore"
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:     },
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:         "osd_id": 0,
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:         "type": "bluestore"
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]:     }
Nov 25 09:17:29 compute-0 wizardly_herschel[419493]: }
Nov 25 09:17:29 compute-0 systemd[1]: libpod-d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b.scope: Deactivated successfully.
Nov 25 09:17:29 compute-0 podman[419479]: 2025-11-25 09:17:29.876874015 +0000 UTC m=+1.156815248 container died d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:17:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-40580bb91d8be25ca25b9f0eb08c0aaaaa8db588e24bf44fd57a4d00441767d3-merged.mount: Deactivated successfully.
Nov 25 09:17:30 compute-0 podman[419479]: 2025-11-25 09:17:30.301709974 +0000 UTC m=+1.581651207 container remove d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:17:30 compute-0 systemd[1]: libpod-conmon-d921f54f72ea86dec9c2431c0d7d7a505d5fcffbd6e3fd43458a6af91c45661b.scope: Deactivated successfully.
Nov 25 09:17:30 compute-0 sudo[419353]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:17:30 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:17:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:17:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2913: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 75 op/s
Nov 25 09:17:30 compute-0 nova_compute[253538]: 2025-11-25 09:17:30.505 253542 DEBUG nova.compute.manager [req-a392f206-ee58-4909-84e3-48042f4376cb req-b9718b6a-900a-48fa-a871-56db506cf172 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:17:30 compute-0 nova_compute[253538]: 2025-11-25 09:17:30.505 253542 DEBUG oslo_concurrency.lockutils [req-a392f206-ee58-4909-84e3-48042f4376cb req-b9718b6a-900a-48fa-a871-56db506cf172 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:30 compute-0 nova_compute[253538]: 2025-11-25 09:17:30.506 253542 DEBUG oslo_concurrency.lockutils [req-a392f206-ee58-4909-84e3-48042f4376cb req-b9718b6a-900a-48fa-a871-56db506cf172 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:30 compute-0 nova_compute[253538]: 2025-11-25 09:17:30.506 253542 DEBUG oslo_concurrency.lockutils [req-a392f206-ee58-4909-84e3-48042f4376cb req-b9718b6a-900a-48fa-a871-56db506cf172 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:30 compute-0 nova_compute[253538]: 2025-11-25 09:17:30.506 253542 DEBUG nova.compute.manager [req-a392f206-ee58-4909-84e3-48042f4376cb req-b9718b6a-900a-48fa-a871-56db506cf172 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:17:30 compute-0 nova_compute[253538]: 2025-11-25 09:17:30.507 253542 WARNING nova.compute.manager [req-a392f206-ee58-4909-84e3-48042f4376cb req-b9718b6a-900a-48fa-a871-56db506cf172 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state active and task_state None.
Nov 25 09:17:30 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:17:30 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 58c032ee-56e6-4f3b-908e-c060173d47d1 does not exist
Nov 25 09:17:30 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 50ae1dd6-f300-465d-af01-11532dbf1071 does not exist
Nov 25 09:17:30 compute-0 sudo[419580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:17:30 compute-0 sudo[419580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:30 compute-0 sudo[419580]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:30 compute-0 sudo[419605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:17:30 compute-0 sudo[419605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:17:30 compute-0 sudo[419605]: pam_unix(sudo:session): session closed for user root
Nov 25 09:17:31 compute-0 nova_compute[253538]: 2025-11-25 09:17:31.072 253542 DEBUG nova.objects.instance [None req-97bf4ae3-08d4-4197-b698-7b420920c92f 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'pci_devices' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:17:31 compute-0 nova_compute[253538]: 2025-11-25 09:17:31.089 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062251.0891693, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:17:31 compute-0 nova_compute[253538]: 2025-11-25 09:17:31.090 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Paused (Lifecycle Event)
Nov 25 09:17:31 compute-0 nova_compute[253538]: 2025-11-25 09:17:31.110 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:17:31 compute-0 nova_compute[253538]: 2025-11-25 09:17:31.113 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:17:31 compute-0 nova_compute[253538]: 2025-11-25 09:17:31.130 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 09:17:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:17:31 compute-0 ceph-mon[75015]: pgmap v2913: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 75 op/s
Nov 25 09:17:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:17:31 compute-0 kernel: tap3e3120d7-11 (unregistering): left promiscuous mode
Nov 25 09:17:31 compute-0 NetworkManager[48915]: <info>  [1764062251.5740] device (tap3e3120d7-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:17:31 compute-0 nova_compute[253538]: 2025-11-25 09:17:31.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:31 compute-0 ovn_controller[152859]: 2025-11-25T09:17:31Z|01624|binding|INFO|Releasing lport 3e3120d7-1164-497f-8e95-61789ab8f383 from this chassis (sb_readonly=0)
Nov 25 09:17:31 compute-0 ovn_controller[152859]: 2025-11-25T09:17:31Z|01625|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 down in Southbound
Nov 25 09:17:31 compute-0 ovn_controller[152859]: 2025-11-25T09:17:31Z|01626|binding|INFO|Removing iface tap3e3120d7-11 ovn-installed in OVS
Nov 25 09:17:31 compute-0 nova_compute[253538]: 2025-11-25 09:17:31.587 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:31.599 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:3f:a1 10.100.0.5'], port_security=['fa:16:3e:ae:3f:a1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f085f3e-8c63-44aa-9adc-cec2428aefd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c19f20fbaec489eaece7cf904f192fa', 'neutron:revision_number': '6', 'neutron:security_group_ids': '280baeb6-358b-4b30-9eb9-c5b8498ccc51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b0834da-ae27-4c3d-83e4-79728139a358, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3e3120d7-1164-497f-8e95-61789ab8f383) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:17:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:31.600 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3e3120d7-1164-497f-8e95-61789ab8f383 in datapath 5f085f3e-8c63-44aa-9adc-cec2428aefd2 unbound from our chassis
Nov 25 09:17:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:31.600 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f085f3e-8c63-44aa-9adc-cec2428aefd2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 09:17:31 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:31.601 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[e3441ac7-60ed-4837-a4b9-9ba9ca2edbbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:17:31 compute-0 nova_compute[253538]: 2025-11-25 09:17:31.615 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:31 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000098.scope: Deactivated successfully.
Nov 25 09:17:31 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000098.scope: Consumed 2.853s CPU time.
Nov 25 09:17:31 compute-0 systemd-machined[215790]: Machine qemu-183-instance-00000098 terminated.
Nov 25 09:17:31 compute-0 nova_compute[253538]: 2025-11-25 09:17:31.777 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:31 compute-0 nova_compute[253538]: 2025-11-25 09:17:31.785 253542 DEBUG nova.compute.manager [None req-97bf4ae3-08d4-4197-b698-7b420920c92f 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:17:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2914: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 25 09:17:32 compute-0 nova_compute[253538]: 2025-11-25 09:17:32.570 253542 DEBUG nova.compute.manager [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:17:32 compute-0 nova_compute[253538]: 2025-11-25 09:17:32.571 253542 DEBUG oslo_concurrency.lockutils [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:32 compute-0 nova_compute[253538]: 2025-11-25 09:17:32.571 253542 DEBUG oslo_concurrency.lockutils [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:32 compute-0 nova_compute[253538]: 2025-11-25 09:17:32.571 253542 DEBUG oslo_concurrency.lockutils [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:32 compute-0 nova_compute[253538]: 2025-11-25 09:17:32.571 253542 DEBUG nova.compute.manager [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:17:32 compute-0 nova_compute[253538]: 2025-11-25 09:17:32.572 253542 WARNING nova.compute.manager [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state suspended and task_state None.
Nov 25 09:17:32 compute-0 nova_compute[253538]: 2025-11-25 09:17:32.572 253542 DEBUG nova.compute.manager [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:17:32 compute-0 nova_compute[253538]: 2025-11-25 09:17:32.572 253542 DEBUG oslo_concurrency.lockutils [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:32 compute-0 nova_compute[253538]: 2025-11-25 09:17:32.573 253542 DEBUG oslo_concurrency.lockutils [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:32 compute-0 nova_compute[253538]: 2025-11-25 09:17:32.573 253542 DEBUG oslo_concurrency.lockutils [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:32 compute-0 nova_compute[253538]: 2025-11-25 09:17:32.573 253542 DEBUG nova.compute.manager [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:17:32 compute-0 nova_compute[253538]: 2025-11-25 09:17:32.574 253542 WARNING nova.compute.manager [req-c4c47d59-8375-48a3-84ff-e61e03c015b3 req-e8efab25-5236-4666-bd09-51ac7e4bdfc6 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state suspended and task_state None.
Nov 25 09:17:32 compute-0 sshd-session[419633]: Connection closed by authenticating user root 171.244.51.45 port 52702 [preauth]
Nov 25 09:17:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:17:33 compute-0 ceph-mon[75015]: pgmap v2914: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 25 09:17:33 compute-0 nova_compute[253538]: 2025-11-25 09:17:33.628 253542 INFO nova.compute.manager [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Resuming
Nov 25 09:17:33 compute-0 nova_compute[253538]: 2025-11-25 09:17:33.629 253542 DEBUG nova.objects.instance [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'flavor' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:17:33 compute-0 nova_compute[253538]: 2025-11-25 09:17:33.666 253542 DEBUG oslo_concurrency.lockutils [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:17:33 compute-0 nova_compute[253538]: 2025-11-25 09:17:33.667 253542 DEBUG oslo_concurrency.lockutils [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquired lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:17:33 compute-0 nova_compute[253538]: 2025-11-25 09:17:33.667 253542 DEBUG nova.network.neutron [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:17:34 compute-0 nova_compute[253538]: 2025-11-25 09:17:34.269 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2915: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 09:17:34 compute-0 nova_compute[253538]: 2025-11-25 09:17:34.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.197 253542 DEBUG nova.network.neutron [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Updating instance_info_cache with network_info: [{"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.209 253542 DEBUG oslo_concurrency.lockutils [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Releasing lock "refresh_cache-aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.216 253542 DEBUG nova.virt.libvirt.vif [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:17:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1580927856',display_name='tempest-TestServerAdvancedOps-server-1580927856',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1580927856',id=152,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:17:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3c19f20fbaec489eaece7cf904f192fa',ramdisk_id='',reservation_id='r-aqe54enr',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1902459302',owner_user_name='tempest-TestServerAdvancedOps-1902459302-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:17:31Z,user_data=None,user_id='612906aa606e4268918814ea9f47c674',uuid=aefbd3e8-a8ba-4fef-a771-4e2b5091a90a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.217 253542 DEBUG nova.network.os_vif_util [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converting VIF {"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.218 253542 DEBUG nova.network.os_vif_util [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.219 253542 DEBUG os_vif [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.220 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.220 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.221 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.224 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.225 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e3120d7-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.225 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e3120d7-11, col_values=(('external_ids', {'iface-id': '3e3120d7-1164-497f-8e95-61789ab8f383', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:3f:a1', 'vm-uuid': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.226 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.226 253542 INFO os_vif [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11')
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.248 253542 DEBUG nova.objects.instance [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'numa_topology' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:17:35 compute-0 kernel: tap3e3120d7-11: entered promiscuous mode
Nov 25 09:17:35 compute-0 NetworkManager[48915]: <info>  [1764062255.3322] manager: (tap3e3120d7-11): new Tun device (/org/freedesktop/NetworkManager/Devices/667)
Nov 25 09:17:35 compute-0 systemd-udevd[419663]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:17:35 compute-0 ovn_controller[152859]: 2025-11-25T09:17:35Z|01627|binding|INFO|Claiming lport 3e3120d7-1164-497f-8e95-61789ab8f383 for this chassis.
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.366 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:35 compute-0 ovn_controller[152859]: 2025-11-25T09:17:35Z|01628|binding|INFO|3e3120d7-1164-497f-8e95-61789ab8f383: Claiming fa:16:3e:ae:3f:a1 10.100.0.5
Nov 25 09:17:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:35.376 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:3f:a1 10.100.0.5'], port_security=['fa:16:3e:ae:3f:a1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f085f3e-8c63-44aa-9adc-cec2428aefd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c19f20fbaec489eaece7cf904f192fa', 'neutron:revision_number': '7', 'neutron:security_group_ids': '280baeb6-358b-4b30-9eb9-c5b8498ccc51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b0834da-ae27-4c3d-83e4-79728139a358, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3e3120d7-1164-497f-8e95-61789ab8f383) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:17:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:35.378 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3e3120d7-1164-497f-8e95-61789ab8f383 in datapath 5f085f3e-8c63-44aa-9adc-cec2428aefd2 bound to our chassis
Nov 25 09:17:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:35.378 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f085f3e-8c63-44aa-9adc-cec2428aefd2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 09:17:35 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:35.379 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d292a742-7814-402a-b77a-538538b5016b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.380 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:35 compute-0 NetworkManager[48915]: <info>  [1764062255.3829] device (tap3e3120d7-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:17:35 compute-0 ovn_controller[152859]: 2025-11-25T09:17:35Z|01629|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 ovn-installed in OVS
Nov 25 09:17:35 compute-0 ovn_controller[152859]: 2025-11-25T09:17:35Z|01630|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 up in Southbound
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.383 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:35 compute-0 NetworkManager[48915]: <info>  [1764062255.3845] device (tap3e3120d7-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.392 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:35 compute-0 systemd-machined[215790]: New machine qemu-184-instance-00000098.
Nov 25 09:17:35 compute-0 systemd[1]: Started Virtual Machine qemu-184-instance-00000098.
Nov 25 09:17:35 compute-0 ceph-mon[75015]: pgmap v2915: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.979 253542 DEBUG nova.virt.libvirt.host [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Removed pending event for aefbd3e8-a8ba-4fef-a771-4e2b5091a90a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.980 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062255.9792745, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.980 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Started (Lifecycle Event)
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.997 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.998 253542 DEBUG nova.compute.manager [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:17:35 compute-0 nova_compute[253538]: 2025-11-25 09:17:35.999 253542 DEBUG nova.objects.instance [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'pci_devices' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:17:36 compute-0 nova_compute[253538]: 2025-11-25 09:17:36.002 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:17:36 compute-0 nova_compute[253538]: 2025-11-25 09:17:36.018 253542 INFO nova.virt.libvirt.driver [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance running successfully.
Nov 25 09:17:36 compute-0 nova_compute[253538]: 2025-11-25 09:17:36.020 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 09:17:36 compute-0 virtqemud[253839]: argument unsupported: QEMU guest agent is not configured
Nov 25 09:17:36 compute-0 nova_compute[253538]: 2025-11-25 09:17:36.021 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062255.9827847, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:17:36 compute-0 nova_compute[253538]: 2025-11-25 09:17:36.023 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Resumed (Lifecycle Event)
Nov 25 09:17:36 compute-0 nova_compute[253538]: 2025-11-25 09:17:36.027 253542 DEBUG nova.compute.manager [req-5cc0d4b5-325f-44c5-a61e-f37f9414aabd req-4d6361a7-070a-44b4-9cf4-46a1631c69df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:17:36 compute-0 nova_compute[253538]: 2025-11-25 09:17:36.028 253542 DEBUG oslo_concurrency.lockutils [req-5cc0d4b5-325f-44c5-a61e-f37f9414aabd req-4d6361a7-070a-44b4-9cf4-46a1631c69df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:36 compute-0 nova_compute[253538]: 2025-11-25 09:17:36.028 253542 DEBUG oslo_concurrency.lockutils [req-5cc0d4b5-325f-44c5-a61e-f37f9414aabd req-4d6361a7-070a-44b4-9cf4-46a1631c69df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:36 compute-0 nova_compute[253538]: 2025-11-25 09:17:36.029 253542 DEBUG oslo_concurrency.lockutils [req-5cc0d4b5-325f-44c5-a61e-f37f9414aabd req-4d6361a7-070a-44b4-9cf4-46a1631c69df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:36 compute-0 nova_compute[253538]: 2025-11-25 09:17:36.029 253542 DEBUG nova.compute.manager [req-5cc0d4b5-325f-44c5-a61e-f37f9414aabd req-4d6361a7-070a-44b4-9cf4-46a1631c69df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:17:36 compute-0 nova_compute[253538]: 2025-11-25 09:17:36.029 253542 WARNING nova.compute.manager [req-5cc0d4b5-325f-44c5-a61e-f37f9414aabd req-4d6361a7-070a-44b4-9cf4-46a1631c69df b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state suspended and task_state resuming.
Nov 25 09:17:36 compute-0 nova_compute[253538]: 2025-11-25 09:17:36.031 253542 DEBUG nova.virt.libvirt.guest [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 09:17:36 compute-0 nova_compute[253538]: 2025-11-25 09:17:36.031 253542 DEBUG nova.compute.manager [None req-92dd4b2a-0153-4f62-b0f2-6cb5c1a7cfdf 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:17:36 compute-0 nova_compute[253538]: 2025-11-25 09:17:36.038 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:17:36 compute-0 nova_compute[253538]: 2025-11-25 09:17:36.041 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:17:36 compute-0 nova_compute[253538]: 2025-11-25 09:17:36.060 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 09:17:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2916: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 85 B/s wr, 56 op/s
Nov 25 09:17:37 compute-0 ceph-mon[75015]: pgmap v2916: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 85 B/s wr, 56 op/s
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.099 253542 DEBUG nova.compute.manager [req-989cc6dc-fe33-487a-8de5-58db6483c888 req-d23eced3-6fbd-4ec9-8816-8320530aadf3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.101 253542 DEBUG oslo_concurrency.lockutils [req-989cc6dc-fe33-487a-8de5-58db6483c888 req-d23eced3-6fbd-4ec9-8816-8320530aadf3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.101 253542 DEBUG oslo_concurrency.lockutils [req-989cc6dc-fe33-487a-8de5-58db6483c888 req-d23eced3-6fbd-4ec9-8816-8320530aadf3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.102 253542 DEBUG oslo_concurrency.lockutils [req-989cc6dc-fe33-487a-8de5-58db6483c888 req-d23eced3-6fbd-4ec9-8816-8320530aadf3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.103 253542 DEBUG nova.compute.manager [req-989cc6dc-fe33-487a-8de5-58db6483c888 req-d23eced3-6fbd-4ec9-8816-8320530aadf3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.104 253542 WARNING nova.compute.manager [req-989cc6dc-fe33-487a-8de5-58db6483c888 req-d23eced3-6fbd-4ec9-8816-8320530aadf3 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state active and task_state None.
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.284 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.285 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.285 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.286 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.286 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.287 253542 INFO nova.compute.manager [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Terminating instance
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.288 253542 DEBUG nova.compute.manager [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:17:38 compute-0 kernel: tap3e3120d7-11 (unregistering): left promiscuous mode
Nov 25 09:17:38 compute-0 NetworkManager[48915]: <info>  [1764062258.3802] device (tap3e3120d7-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:17:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:17:38 compute-0 ovn_controller[152859]: 2025-11-25T09:17:38Z|01631|binding|INFO|Releasing lport 3e3120d7-1164-497f-8e95-61789ab8f383 from this chassis (sb_readonly=0)
Nov 25 09:17:38 compute-0 ovn_controller[152859]: 2025-11-25T09:17:38Z|01632|binding|INFO|Setting lport 3e3120d7-1164-497f-8e95-61789ab8f383 down in Southbound
Nov 25 09:17:38 compute-0 ovn_controller[152859]: 2025-11-25T09:17:38Z|01633|binding|INFO|Removing iface tap3e3120d7-11 ovn-installed in OVS
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.383 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.385 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.413 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:38.440 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:3f:a1 10.100.0.5'], port_security=['fa:16:3e:ae:3f:a1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'aefbd3e8-a8ba-4fef-a771-4e2b5091a90a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f085f3e-8c63-44aa-9adc-cec2428aefd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c19f20fbaec489eaece7cf904f192fa', 'neutron:revision_number': '8', 'neutron:security_group_ids': '280baeb6-358b-4b30-9eb9-c5b8498ccc51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b0834da-ae27-4c3d-83e4-79728139a358, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=3e3120d7-1164-497f-8e95-61789ab8f383) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:17:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:38.441 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 3e3120d7-1164-497f-8e95-61789ab8f383 in datapath 5f085f3e-8c63-44aa-9adc-cec2428aefd2 unbound from our chassis
Nov 25 09:17:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:38.441 162739 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5f085f3e-8c63-44aa-9adc-cec2428aefd2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 09:17:38 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:38.442 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[57b557a9-167a-49d7-8903-fe439d6bdcd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:17:38 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000098.scope: Deactivated successfully.
Nov 25 09:17:38 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000098.scope: Consumed 2.832s CPU time.
Nov 25 09:17:38 compute-0 systemd-machined[215790]: Machine qemu-184-instance-00000098 terminated.
Nov 25 09:17:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2917: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 23 op/s
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.508 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.513 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.525 253542 INFO nova.virt.libvirt.driver [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Instance destroyed successfully.
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.526 253542 DEBUG nova.objects.instance [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lazy-loading 'resources' on Instance uuid aefbd3e8-a8ba-4fef-a771-4e2b5091a90a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.540 253542 DEBUG nova.virt.libvirt.vif [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:17:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1580927856',display_name='tempest-TestServerAdvancedOps-server-1580927856',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1580927856',id=152,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:17:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3c19f20fbaec489eaece7cf904f192fa',ramdisk_id='',reservation_id='r-aqe54enr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-1902459302',owner_user_name='tempest-TestServerAdvancedOps-1902459302-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:17:36Z,user_data=None,user_id='612906aa606e4268918814ea9f47c674',uuid=aefbd3e8-a8ba-4fef-a771-4e2b5091a90a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.541 253542 DEBUG nova.network.os_vif_util [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converting VIF {"id": "3e3120d7-1164-497f-8e95-61789ab8f383", "address": "fa:16:3e:ae:3f:a1", "network": {"id": "5f085f3e-8c63-44aa-9adc-cec2428aefd2", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-931270361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3c19f20fbaec489eaece7cf904f192fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3120d7-11", "ovs_interfaceid": "3e3120d7-1164-497f-8e95-61789ab8f383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.542 253542 DEBUG nova.network.os_vif_util [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.542 253542 DEBUG os_vif [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.544 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.544 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e3120d7-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.597 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.601 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:17:38 compute-0 nova_compute[253538]: 2025-11-25 09:17:38.604 253542 INFO os_vif [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3f:a1,bridge_name='br-int',has_traffic_filtering=True,id=3e3120d7-1164-497f-8e95-61789ab8f383,network=Network(5f085f3e-8c63-44aa-9adc-cec2428aefd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3120d7-11')
Nov 25 09:17:39 compute-0 nova_compute[253538]: 2025-11-25 09:17:39.005 253542 INFO nova.virt.libvirt.driver [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Deleting instance files /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_del
Nov 25 09:17:39 compute-0 nova_compute[253538]: 2025-11-25 09:17:39.006 253542 INFO nova.virt.libvirt.driver [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Deletion of /var/lib/nova/instances/aefbd3e8-a8ba-4fef-a771-4e2b5091a90a_del complete
Nov 25 09:17:39 compute-0 nova_compute[253538]: 2025-11-25 09:17:39.063 253542 INFO nova.compute.manager [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 25 09:17:39 compute-0 nova_compute[253538]: 2025-11-25 09:17:39.064 253542 DEBUG oslo.service.loopingcall [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:17:39 compute-0 nova_compute[253538]: 2025-11-25 09:17:39.064 253542 DEBUG nova.compute.manager [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:17:39 compute-0 nova_compute[253538]: 2025-11-25 09:17:39.065 253542 DEBUG nova.network.neutron [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:17:39 compute-0 ceph-mon[75015]: pgmap v2917: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 23 op/s
Nov 25 09:17:39 compute-0 nova_compute[253538]: 2025-11-25 09:17:39.847 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:39 compute-0 nova_compute[253538]: 2025-11-25 09:17:39.904 253542 DEBUG nova.network.neutron [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:17:39 compute-0 nova_compute[253538]: 2025-11-25 09:17:39.922 253542 INFO nova.compute.manager [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Took 0.86 seconds to deallocate network for instance.
Nov 25 09:17:39 compute-0 nova_compute[253538]: 2025-11-25 09:17:39.960 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:39 compute-0 nova_compute[253538]: 2025-11-25 09:17:39.961 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.002 253542 DEBUG nova.compute.manager [req-c63503c5-9ec3-4b24-82b2-6be1d0b14018 req-9ff7e240-6ef3-4877-8775-09745c6d2b7d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-deleted-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.011 253542 DEBUG oslo_concurrency.processutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.174 253542 DEBUG nova.compute.manager [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.175 253542 DEBUG oslo_concurrency.lockutils [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.175 253542 DEBUG oslo_concurrency.lockutils [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.175 253542 DEBUG oslo_concurrency.lockutils [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.175 253542 DEBUG nova.compute.manager [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.175 253542 WARNING nova.compute.manager [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-unplugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state deleted and task_state None.
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.176 253542 DEBUG nova.compute.manager [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.176 253542 DEBUG oslo_concurrency.lockutils [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.176 253542 DEBUG oslo_concurrency.lockutils [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.176 253542 DEBUG oslo_concurrency.lockutils [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.176 253542 DEBUG nova.compute.manager [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] No waiting events found dispatching network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.176 253542 WARNING nova.compute.manager [req-0bf78217-fc21-43c0-af76-d1e4aeca6259 req-d3d720ce-cae1-43f4-bffe-9f7cb54b9de2 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Received unexpected event network-vif-plugged-3e3120d7-1164-497f-8e95-61789ab8f383 for instance with vm_state deleted and task_state None.
Nov 25 09:17:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:17:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1131093284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.478 253542 DEBUG oslo_concurrency.processutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.483 253542 DEBUG nova.compute.provider_tree [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:17:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2918: 321 pgs: 321 active+clean; 111 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 0 B/s wr, 11 op/s
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.501 253542 DEBUG nova.scheduler.client.report [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.521 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.557 253542 INFO nova.scheduler.client.report [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Deleted allocations for instance aefbd3e8-a8ba-4fef-a771-4e2b5091a90a
Nov 25 09:17:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1131093284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:17:40 compute-0 nova_compute[253538]: 2025-11-25 09:17:40.648 253542 DEBUG oslo_concurrency.lockutils [None req-260e4df6-9d3d-44ee-916a-da0492ea0072 612906aa606e4268918814ea9f47c674 3c19f20fbaec489eaece7cf904f192fa - - default default] Lock "aefbd3e8-a8ba-4fef-a771-4e2b5091a90a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:41.101 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:17:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:41.102 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:17:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:17:41.102 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:17:41 compute-0 ceph-mon[75015]: pgmap v2918: 321 pgs: 321 active+clean; 111 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 0 B/s wr, 11 op/s
Nov 25 09:17:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2919: 321 pgs: 321 active+clean; 104 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 09:17:42 compute-0 nova_compute[253538]: 2025-11-25 09:17:42.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:17:43 compute-0 nova_compute[253538]: 2025-11-25 09:17:43.597 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:43 compute-0 ceph-mon[75015]: pgmap v2919: 321 pgs: 321 active+clean; 104 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 09:17:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2920: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Nov 25 09:17:44 compute-0 podman[419775]: 2025-11-25 09:17:44.820438901 +0000 UTC m=+0.065164862 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:17:44 compute-0 podman[419776]: 2025-11-25 09:17:44.843062077 +0000 UTC m=+0.088796535 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:17:44 compute-0 nova_compute[253538]: 2025-11-25 09:17:44.850 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:45 compute-0 ceph-mon[75015]: pgmap v2920: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Nov 25 09:17:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2921: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 25 09:17:47 compute-0 ceph-mon[75015]: pgmap v2921: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 25 09:17:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:17:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2922: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 25 09:17:48 compute-0 nova_compute[253538]: 2025-11-25 09:17:48.600 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:49 compute-0 ceph-mon[75015]: pgmap v2922: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 25 09:17:49 compute-0 nova_compute[253538]: 2025-11-25 09:17:49.852 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2923: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 09:17:51 compute-0 nova_compute[253538]: 2025-11-25 09:17:51.444 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:17:51 compute-0 ceph-mon[75015]: pgmap v2923: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 09:17:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2924: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Nov 25 09:17:52 compute-0 sshd-session[419814]: Invalid user hello from 45.78.217.205 port 45754
Nov 25 09:17:52 compute-0 podman[419816]: 2025-11-25 09:17:52.800301427 +0000 UTC m=+0.088403445 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:17:52 compute-0 sshd-session[419814]: Received disconnect from 45.78.217.205 port 45754:11: Bye Bye [preauth]
Nov 25 09:17:52 compute-0 sshd-session[419814]: Disconnected from invalid user hello 45.78.217.205 port 45754 [preauth]
Nov 25 09:17:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:17:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:17:53
Nov 25 09:17:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:17:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:17:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.log', 'volumes', 'images', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'backups']
Nov 25 09:17:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:17:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:17:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:17:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:17:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:17:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:17:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:17:53 compute-0 nova_compute[253538]: 2025-11-25 09:17:53.524 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764062258.5227954, aefbd3e8-a8ba-4fef-a771-4e2b5091a90a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:17:53 compute-0 nova_compute[253538]: 2025-11-25 09:17:53.524 253542 INFO nova.compute.manager [-] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] VM Stopped (Lifecycle Event)
Nov 25 09:17:53 compute-0 nova_compute[253538]: 2025-11-25 09:17:53.540 253542 DEBUG nova.compute.manager [None req-007dd71e-a4cc-4d83-9ff1-e2923502894f - - - - - -] [instance: aefbd3e8-a8ba-4fef-a771-4e2b5091a90a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:17:53 compute-0 nova_compute[253538]: 2025-11-25 09:17:53.601 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:53 compute-0 ceph-mon[75015]: pgmap v2924: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Nov 25 09:17:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:17:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:17:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:17:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:17:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:17:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:17:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:17:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:17:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:17:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:17:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2925: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 8 op/s
Nov 25 09:17:54 compute-0 nova_compute[253538]: 2025-11-25 09:17:54.853 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:55 compute-0 nova_compute[253538]: 2025-11-25 09:17:55.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:17:55 compute-0 ceph-mon[75015]: pgmap v2925: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 8 op/s
Nov 25 09:17:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2926: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:57 compute-0 ceph-mon[75015]: pgmap v2926: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:17:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2927: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:58 compute-0 nova_compute[253538]: 2025-11-25 09:17:58.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:17:59 compute-0 nova_compute[253538]: 2025-11-25 09:17:59.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:17:59 compute-0 nova_compute[253538]: 2025-11-25 09:17:59.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:17:59 compute-0 nova_compute[253538]: 2025-11-25 09:17:59.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:17:59 compute-0 nova_compute[253538]: 2025-11-25 09:17:59.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:17:59 compute-0 ceph-mon[75015]: pgmap v2927: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:17:59 compute-0 nova_compute[253538]: 2025-11-25 09:17:59.856 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2928: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:00 compute-0 nova_compute[253538]: 2025-11-25 09:18:00.561 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:18:01 compute-0 ceph-mon[75015]: pgmap v2928: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2929: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:18:03 compute-0 nova_compute[253538]: 2025-11-25 09:18:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:18:03 compute-0 nova_compute[253538]: 2025-11-25 09:18:03.604 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:03 compute-0 ceph-mon[75015]: pgmap v2929: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.841855) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062283841893, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2057, "num_deletes": 251, "total_data_size": 3440247, "memory_usage": 3503488, "flush_reason": "Manual Compaction"}
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062283858642, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3373649, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59074, "largest_seqno": 61130, "table_properties": {"data_size": 3364202, "index_size": 6004, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18904, "raw_average_key_size": 20, "raw_value_size": 3345573, "raw_average_value_size": 3566, "num_data_blocks": 266, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062055, "oldest_key_time": 1764062055, "file_creation_time": 1764062283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 16843 microseconds, and 8983 cpu microseconds.
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.858691) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3373649 bytes OK
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.858717) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.860140) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.860154) EVENT_LOG_v1 {"time_micros": 1764062283860149, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.860177) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3431621, prev total WAL file size 3431621, number of live WAL files 2.
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.861144) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3294KB)], [140(8154KB)]
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062283861181, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 11723768, "oldest_snapshot_seqno": -1}
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8113 keys, 10004109 bytes, temperature: kUnknown
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062283914181, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 10004109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9952552, "index_size": 30196, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 211769, "raw_average_key_size": 26, "raw_value_size": 9810436, "raw_average_value_size": 1209, "num_data_blocks": 1174, "num_entries": 8113, "num_filter_entries": 8113, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.914542) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 10004109 bytes
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.915927) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 220.8 rd, 188.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.0 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 8627, records dropped: 514 output_compression: NoCompression
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.915950) EVENT_LOG_v1 {"time_micros": 1764062283915938, "job": 86, "event": "compaction_finished", "compaction_time_micros": 53106, "compaction_time_cpu_micros": 27441, "output_level": 6, "num_output_files": 1, "total_output_size": 10004109, "num_input_records": 8627, "num_output_records": 8113, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062283916916, "job": 86, "event": "table_file_deletion", "file_number": 142}
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062283919058, "job": 86, "event": "table_file_deletion", "file_number": 140}
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.861040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.919145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.919152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.919155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.919157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:18:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:03.919166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2930: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:18:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:18:04 compute-0 nova_compute[253538]: 2025-11-25 09:18:04.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:18:04 compute-0 nova_compute[253538]: 2025-11-25 09:18:04.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:18:04 compute-0 ceph-mon[75015]: pgmap v2930: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:04 compute-0 nova_compute[253538]: 2025-11-25 09:18:04.857 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:05 compute-0 sshd-session[419843]: Invalid user soporte from 45.78.222.2 port 37148
Nov 25 09:18:05 compute-0 nova_compute[253538]: 2025-11-25 09:18:05.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:18:05 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:18:05 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 61K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1362 writes, 6172 keys, 1362 commit groups, 1.0 writes per commit group, ingest: 8.78 MB, 0.01 MB/s
                                           Interval WAL: 1362 writes, 1362 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     33.0      2.26              0.27        43    0.053       0      0       0.0       0.0
                                             L6      1/0    9.54 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.5     62.6     52.5      6.42              1.08        42    0.153    265K    22K       0.0       0.0
                                            Sum      1/0    9.54 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.5     46.3     47.4      8.69              1.34        85    0.102    265K    22K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.2     64.0     65.7      0.83              0.16        10    0.083     41K   2528       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     62.6     52.5      6.42              1.08        42    0.153    265K    22K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     33.0      2.26              0.27        42    0.054       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.073, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.40 GB write, 0.08 MB/s write, 0.39 GB read, 0.07 MB/s read, 8.7 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 304.00 MB usage: 47.54 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000535 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3090,45.65 MB,15.018%) FilterBlock(86,742.80 KB,0.238614%) IndexBlock(86,1.16 MB,0.382172%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 09:18:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2931: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:06 compute-0 sshd-session[419843]: Received disconnect from 45.78.222.2 port 37148:11: Bye Bye [preauth]
Nov 25 09:18:06 compute-0 sshd-session[419843]: Disconnected from invalid user soporte 45.78.222.2 port 37148 [preauth]
Nov 25 09:18:07 compute-0 sshd-session[419845]: Invalid user kali from 45.202.211.6 port 43344
Nov 25 09:18:07 compute-0 ceph-mon[75015]: pgmap v2931: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:07 compute-0 sshd-session[419845]: Received disconnect from 45.202.211.6 port 43344:11: Bye Bye [preauth]
Nov 25 09:18:07 compute-0 sshd-session[419845]: Disconnected from invalid user kali 45.202.211.6 port 43344 [preauth]
Nov 25 09:18:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:18:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2932: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:08 compute-0 nova_compute[253538]: 2025-11-25 09:18:08.605 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:09 compute-0 nova_compute[253538]: 2025-11-25 09:18:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:18:09 compute-0 ceph-mon[75015]: pgmap v2932: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:09 compute-0 nova_compute[253538]: 2025-11-25 09:18:09.859 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2933: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:11 compute-0 ceph-mon[75015]: pgmap v2933: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2934: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:18:13 compute-0 nova_compute[253538]: 2025-11-25 09:18:13.606 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:13 compute-0 ceph-mon[75015]: pgmap v2934: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2935: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:14 compute-0 nova_compute[253538]: 2025-11-25 09:18:14.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:18:14 compute-0 nova_compute[253538]: 2025-11-25 09:18:14.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:18:14 compute-0 nova_compute[253538]: 2025-11-25 09:18:14.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:18:14 compute-0 nova_compute[253538]: 2025-11-25 09:18:14.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:18:14 compute-0 nova_compute[253538]: 2025-11-25 09:18:14.579 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:18:14 compute-0 nova_compute[253538]: 2025-11-25 09:18:14.579 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:18:14 compute-0 nova_compute[253538]: 2025-11-25 09:18:14.860 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:18:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2768762082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:18:15 compute-0 nova_compute[253538]: 2025-11-25 09:18:15.086 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:18:15 compute-0 podman[419872]: 2025-11-25 09:18:15.212087029 +0000 UTC m=+0.068539774 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 09:18:15 compute-0 podman[419871]: 2025-11-25 09:18:15.221070643 +0000 UTC m=+0.082123564 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd)
Nov 25 09:18:15 compute-0 nova_compute[253538]: 2025-11-25 09:18:15.289 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:18:15 compute-0 nova_compute[253538]: 2025-11-25 09:18:15.291 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3598MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:18:15 compute-0 nova_compute[253538]: 2025-11-25 09:18:15.291 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:18:15 compute-0 nova_compute[253538]: 2025-11-25 09:18:15.291 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:18:15 compute-0 nova_compute[253538]: 2025-11-25 09:18:15.425 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:18:15 compute-0 nova_compute[253538]: 2025-11-25 09:18:15.426 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:18:15 compute-0 nova_compute[253538]: 2025-11-25 09:18:15.519 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 09:18:15 compute-0 nova_compute[253538]: 2025-11-25 09:18:15.537 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 09:18:15 compute-0 nova_compute[253538]: 2025-11-25 09:18:15.538 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:18:15 compute-0 nova_compute[253538]: 2025-11-25 09:18:15.599 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 09:18:15 compute-0 nova_compute[253538]: 2025-11-25 09:18:15.616 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 09:18:15 compute-0 ceph-mon[75015]: pgmap v2935: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2768762082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:18:15 compute-0 nova_compute[253538]: 2025-11-25 09:18:15.648 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:18:15 compute-0 nova_compute[253538]: 2025-11-25 09:18:15.954 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:18:15 compute-0 nova_compute[253538]: 2025-11-25 09:18:15.955 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:18:15 compute-0 nova_compute[253538]: 2025-11-25 09:18:15.969 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.038 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:18:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:18:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2664273862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.113 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.119 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.132 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.172 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.172 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.172 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.180 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.181 253542 INFO nova.compute.claims [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.440 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:18:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2936: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2664273862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:18:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:18:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569016593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.882 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.888 253542 DEBUG nova.compute.provider_tree [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.900 253542 DEBUG nova.scheduler.client.report [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.950 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.951 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.997 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:18:16 compute-0 nova_compute[253538]: 2025-11-25 09:18:16.997 253542 DEBUG nova.network.neutron [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.017 253542 INFO nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.044 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.164 253542 DEBUG nova.policy [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a68fbd2f756d42aa982630f3a41f0a1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '41c67820b40a4185a60c4245f9c43ef5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.227 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.228 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.229 253542 INFO nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Creating image(s)
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.257 253542 DEBUG nova.storage.rbd_utils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.283 253542 DEBUG nova.storage.rbd_utils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.309 253542 DEBUG nova.storage.rbd_utils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.313 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.385 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.386 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.387 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.388 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.416 253542 DEBUG nova.storage.rbd_utils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.420 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:18:17 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Nov 25 09:18:17 compute-0 ceph-mon[75015]: pgmap v2936: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:18:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1569016593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.795 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.869 253542 DEBUG nova.storage.rbd_utils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] resizing rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.966 253542 DEBUG nova.objects.instance [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'migration_context' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.978 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.979 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Ensure instance console log exists: /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.979 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.979 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:18:17 compute-0 nova_compute[253538]: 2025-11-25 09:18:17.980 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:18:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:18:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2937: 321 pgs: 321 active+clean; 103 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 846 KiB/s wr, 12 op/s
Nov 25 09:18:18 compute-0 nova_compute[253538]: 2025-11-25 09:18:18.607 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:18 compute-0 nova_compute[253538]: 2025-11-25 09:18:18.895 253542 DEBUG nova.network.neutron [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Successfully created port: 9d78d6ba-3489-4cfd-ae33-9166be3f940c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:18:19 compute-0 ceph-mon[75015]: pgmap v2937: 321 pgs: 321 active+clean; 103 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 846 KiB/s wr, 12 op/s
Nov 25 09:18:19 compute-0 nova_compute[253538]: 2025-11-25 09:18:19.863 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2938: 321 pgs: 321 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1018 KiB/s wr, 26 op/s
Nov 25 09:18:20 compute-0 nova_compute[253538]: 2025-11-25 09:18:20.516 253542 DEBUG nova.network.neutron [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Successfully updated port: 9d78d6ba-3489-4cfd-ae33-9166be3f940c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:18:20 compute-0 nova_compute[253538]: 2025-11-25 09:18:20.519 253542 DEBUG nova.compute.manager [req-334bc46f-f43f-4629-b5b0-798e2d0e4a2f req-02a97ce5-31ae-4706-ba90-7d8049005db4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:18:20 compute-0 nova_compute[253538]: 2025-11-25 09:18:20.519 253542 DEBUG nova.compute.manager [req-334bc46f-f43f-4629-b5b0-798e2d0e4a2f req-02a97ce5-31ae-4706-ba90-7d8049005db4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing instance network info cache due to event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:18:20 compute-0 nova_compute[253538]: 2025-11-25 09:18:20.519 253542 DEBUG oslo_concurrency.lockutils [req-334bc46f-f43f-4629-b5b0-798e2d0e4a2f req-02a97ce5-31ae-4706-ba90-7d8049005db4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:18:20 compute-0 nova_compute[253538]: 2025-11-25 09:18:20.519 253542 DEBUG oslo_concurrency.lockutils [req-334bc46f-f43f-4629-b5b0-798e2d0e4a2f req-02a97ce5-31ae-4706-ba90-7d8049005db4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:18:20 compute-0 nova_compute[253538]: 2025-11-25 09:18:20.520 253542 DEBUG nova.network.neutron [req-334bc46f-f43f-4629-b5b0-798e2d0e4a2f req-02a97ce5-31ae-4706-ba90-7d8049005db4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:18:20 compute-0 nova_compute[253538]: 2025-11-25 09:18:20.535 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:18:20 compute-0 nova_compute[253538]: 2025-11-25 09:18:20.715 253542 DEBUG nova.network.neutron [req-334bc46f-f43f-4629-b5b0-798e2d0e4a2f req-02a97ce5-31ae-4706-ba90-7d8049005db4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.034 253542 DEBUG nova.network.neutron [req-334bc46f-f43f-4629-b5b0-798e2d0e4a2f req-02a97ce5-31ae-4706-ba90-7d8049005db4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.048 253542 DEBUG oslo_concurrency.lockutils [req-334bc46f-f43f-4629-b5b0-798e2d0e4a2f req-02a97ce5-31ae-4706-ba90-7d8049005db4 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.048 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquired lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.048 253542 DEBUG nova.network.neutron [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.179 253542 DEBUG nova.network.neutron [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:18:21 compute-0 ceph-mon[75015]: pgmap v2938: 321 pgs: 321 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1018 KiB/s wr, 26 op/s
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.949 253542 DEBUG nova.network.neutron [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.966 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Releasing lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.967 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance network_info: |[{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.971 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Start _get_guest_xml network_info=[{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.975 253542 WARNING nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.980 253542 DEBUG nova.virt.libvirt.host [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.981 253542 DEBUG nova.virt.libvirt.host [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.983 253542 DEBUG nova.virt.libvirt.host [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.984 253542 DEBUG nova.virt.libvirt.host [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.984 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.984 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.985 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.985 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.985 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.986 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.986 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.986 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.986 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.987 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.987 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.987 253542 DEBUG nova.virt.hardware [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:18:21 compute-0 nova_compute[253538]: 2025-11-25 09:18:21.990 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:18:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:18:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1250252886' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.441 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.467 253542 DEBUG nova.storage.rbd_utils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.471 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:18:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2939: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:18:22 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1250252886' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:18:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:18:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2867417623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.936 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.938 253542 DEBUG nova.virt.libvirt.vif [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-853165821',display_name='tempest-TestShelveInstance-server-853165821',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-853165821',id=153,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNJQsCu4fQ3ll5Z4ZaGvMq+pPgiaY3EvL05ETUACffFb5NNPT58fZR5bxwEgYmFiG8knRhbPhzoHa6MYoWsZMDhMe0q2RDfQW/VzCu9RVlFpki+QcCgPNPr5WwLwdENig==',key_name='tempest-TestShelveInstance-83615415',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='41c67820b40a4185a60c4245f9c43ef5',ramdisk_id='',reservation_id='r-5ikkcbmq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1867415308',owner_user_name='tempest-TestShelveInstance-1867415308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:18:17Z,user_data=None,user_id='a68fbd2f756d42aa982630f3a41f0a1f',uuid=282b7217-4c1e-4a42-b3da-05616f4e1da3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.939 253542 DEBUG nova.network.os_vif_util [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converting VIF {"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.941 253542 DEBUG nova.network.os_vif_util [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.943 253542 DEBUG nova.objects.instance [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.956 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:18:22 compute-0 nova_compute[253538]:   <uuid>282b7217-4c1e-4a42-b3da-05616f4e1da3</uuid>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   <name>instance-00000099</name>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <nova:name>tempest-TestShelveInstance-server-853165821</nova:name>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:18:21</nova:creationTime>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:18:22 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:18:22 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:18:22 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:18:22 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:18:22 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:18:22 compute-0 nova_compute[253538]:         <nova:user uuid="a68fbd2f756d42aa982630f3a41f0a1f">tempest-TestShelveInstance-1867415308-project-member</nova:user>
Nov 25 09:18:22 compute-0 nova_compute[253538]:         <nova:project uuid="41c67820b40a4185a60c4245f9c43ef5">tempest-TestShelveInstance-1867415308</nova:project>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:18:22 compute-0 nova_compute[253538]:         <nova:port uuid="9d78d6ba-3489-4cfd-ae33-9166be3f940c">
Nov 25 09:18:22 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <system>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <entry name="serial">282b7217-4c1e-4a42-b3da-05616f4e1da3</entry>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <entry name="uuid">282b7217-4c1e-4a42-b3da-05616f4e1da3</entry>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     </system>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   <os>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   </os>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   <features>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   </features>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/282b7217-4c1e-4a42-b3da-05616f4e1da3_disk">
Nov 25 09:18:22 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       </source>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:18:22 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config">
Nov 25 09:18:22 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       </source>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:18:22 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:eb:22:67"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <target dev="tap9d78d6ba-34"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/console.log" append="off"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <video>
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     </video>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:18:22 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:18:22 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:18:22 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:18:22 compute-0 nova_compute[253538]: </domain>
Nov 25 09:18:22 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.958 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Preparing to wait for external event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.958 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.958 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.959 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.959 253542 DEBUG nova.virt.libvirt.vif [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-853165821',display_name='tempest-TestShelveInstance-server-853165821',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-853165821',id=153,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNJQsCu4fQ3ll5Z4ZaGvMq+pPgiaY3EvL05ETUACffFb5NNPT58fZR5bxwEgYmFiG8knRhbPhzoHa6MYoWsZMDhMe0q2RDfQW/VzCu9RVlFpki+QcCgPNPr5WwLwdENig==',key_name='tempest-TestShelveInstance-83615415',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='41c67820b40a4185a60c4245f9c43ef5',ramdisk_id='',reservation_id='r-5ikkcbmq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1867415308',owner_user_name='tempest-TestShelveInstance-1867415308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:18:17Z,user_data=None,user_id='a68fbd2f756d42aa982630f3a41f0a1f',uuid=282b7217-4c1e-4a42-b3da-05616f4e1da3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.960 253542 DEBUG nova.network.os_vif_util [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converting VIF {"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.961 253542 DEBUG nova.network.os_vif_util [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.961 253542 DEBUG os_vif [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.962 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.962 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.963 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.967 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.968 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d78d6ba-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:18:22 compute-0 nova_compute[253538]: 2025-11-25 09:18:22.968 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9d78d6ba-34, col_values=(('external_ids', {'iface-id': '9d78d6ba-3489-4cfd-ae33-9166be3f940c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:22:67', 'vm-uuid': '282b7217-4c1e-4a42-b3da-05616f4e1da3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:18:23 compute-0 NetworkManager[48915]: <info>  [1764062303.0008] manager: (tap9d78d6ba-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/668)
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.000 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.004 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.010 253542 INFO os_vif [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34')
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.054 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.056 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.056 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] No VIF found with MAC fa:16:3e:eb:22:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.056 253542 INFO nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Using config drive
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.078 253542 DEBUG nova.storage.rbd_utils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:18:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.454 253542 INFO nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Creating config drive at /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.460 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2p660v3i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:18:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:18:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:18:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:18:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:18:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:18:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.602 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2p660v3i" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.628 253542 DEBUG nova.storage.rbd_utils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.632 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:18:23 compute-0 ceph-mon[75015]: pgmap v2939: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:18:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2867417623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.806 253542 DEBUG oslo_concurrency.processutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.807 253542 INFO nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Deleting local config drive /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config because it was imported into RBD.
Nov 25 09:18:23 compute-0 podman[420234]: 2025-11-25 09:18:23.836788233 +0000 UTC m=+0.092063474 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:18:23 compute-0 kernel: tap9d78d6ba-34: entered promiscuous mode
Nov 25 09:18:23 compute-0 NetworkManager[48915]: <info>  [1764062303.8673] manager: (tap9d78d6ba-34): new Tun device (/org/freedesktop/NetworkManager/Devices/669)
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:23 compute-0 ovn_controller[152859]: 2025-11-25T09:18:23Z|01634|binding|INFO|Claiming lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c for this chassis.
Nov 25 09:18:23 compute-0 ovn_controller[152859]: 2025-11-25T09:18:23Z|01635|binding|INFO|9d78d6ba-3489-4cfd-ae33-9166be3f940c: Claiming fa:16:3e:eb:22:67 10.100.0.14
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.872 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.892 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:22:67 10.100.0.14'], port_security=['fa:16:3e:eb:22:67 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '282b7217-4c1e-4a42-b3da-05616f4e1da3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41c67820b40a4185a60c4245f9c43ef5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5b17be75-81bb-4f4d-9234-5572279d07c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3ce9783-7822-423e-a3be-85165987da53, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9d78d6ba-3489-4cfd-ae33-9166be3f940c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:18:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.894 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9d78d6ba-3489-4cfd-ae33-9166be3f940c in datapath 26d70c6d-e66b-4570-a7d7-11486a935ed8 bound to our chassis
Nov 25 09:18:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.895 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26d70c6d-e66b-4570-a7d7-11486a935ed8
Nov 25 09:18:23 compute-0 systemd-udevd[420276]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:18:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.908 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a07af6-2e05-425c-ab3c-9e88b3fed90c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.909 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26d70c6d-e1 in ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:18:23 compute-0 systemd-machined[215790]: New machine qemu-185-instance-00000099.
Nov 25 09:18:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.915 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26d70c6d-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:18:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.915 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[93816612-d97a-4617-90a8-cd14637db13d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.916 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[63efcb56-200f-4312-aa55-97709d936995]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:23 compute-0 NetworkManager[48915]: <info>  [1764062303.9215] device (tap9d78d6ba-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:18:23 compute-0 NetworkManager[48915]: <info>  [1764062303.9231] device (tap9d78d6ba-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:18:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.927 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[fb76d939-5017-4831-bb48-85bda173dd94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.939 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:23 compute-0 systemd[1]: Started Virtual Machine qemu-185-instance-00000099.
Nov 25 09:18:23 compute-0 ovn_controller[152859]: 2025-11-25T09:18:23Z|01636|binding|INFO|Setting lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c ovn-installed in OVS
Nov 25 09:18:23 compute-0 ovn_controller[152859]: 2025-11-25T09:18:23Z|01637|binding|INFO|Setting lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c up in Southbound
Nov 25 09:18:23 compute-0 nova_compute[253538]: 2025-11-25 09:18:23.945 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.950 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb840eb-8528-4620-abd2-7b881f8df821]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.977 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[01ba7914-86d3-4719-9d3e-9fdd20a495c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:23 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:23.983 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[962ca2ad-d3c5-4b12-8c65-baadf53b692e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:23 compute-0 NetworkManager[48915]: <info>  [1764062303.9852] manager: (tap26d70c6d-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/670)
Nov 25 09:18:23 compute-0 systemd-udevd[420280]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.017 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6c91d4-9d95-4436-b5a1-b7bccfc86491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.020 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[5c207857-2703-4e66-a5bf-226f43a1a3fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:24 compute-0 NetworkManager[48915]: <info>  [1764062304.0406] device (tap26d70c6d-e0): carrier: link connected
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.045 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[c72198ac-f81a-4673-8c4e-df66f7e340ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.061 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[efe02358-c03f-4553-ba5e-8e0b25e17d28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26d70c6d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:90:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 470], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765863, 'reachable_time': 33599, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420309, 'error': None, 'target': 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.073 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[63277a6f-a0ca-4cee-9dcf-fa279cbf7627]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8a:9093'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 765863, 'tstamp': 765863}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420310, 'error': None, 'target': 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.089 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b9862a0c-f1f6-4243-9072-d8ed68d0e3d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26d70c6d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:90:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 470], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765863, 'reachable_time': 33599, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 420311, 'error': None, 'target': 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.119 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a6956cbb-0a82-4936-b0e6-750305cb8488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.133 253542 DEBUG nova.compute.manager [req-f8e89a3a-7522-412a-80c2-b3add5bb3a8b req-ac6b0d0d-d0c7-440e-a2df-8fd89a25741d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.134 253542 DEBUG oslo_concurrency.lockutils [req-f8e89a3a-7522-412a-80c2-b3add5bb3a8b req-ac6b0d0d-d0c7-440e-a2df-8fd89a25741d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.134 253542 DEBUG oslo_concurrency.lockutils [req-f8e89a3a-7522-412a-80c2-b3add5bb3a8b req-ac6b0d0d-d0c7-440e-a2df-8fd89a25741d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.134 253542 DEBUG oslo_concurrency.lockutils [req-f8e89a3a-7522-412a-80c2-b3add5bb3a8b req-ac6b0d0d-d0c7-440e-a2df-8fd89a25741d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.134 253542 DEBUG nova.compute.manager [req-f8e89a3a-7522-412a-80c2-b3add5bb3a8b req-ac6b0d0d-d0c7-440e-a2df-8fd89a25741d b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Processing event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.177 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f8ff3a-531f-44f6-9eb0-79e1e0b68bf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.178 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d70c6d-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.178 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.179 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26d70c6d-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.180 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:24 compute-0 kernel: tap26d70c6d-e0: entered promiscuous mode
Nov 25 09:18:24 compute-0 NetworkManager[48915]: <info>  [1764062304.1822] manager: (tap26d70c6d-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/671)
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.182 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26d70c6d-e0, col_values=(('external_ids', {'iface-id': '49a3f274-19b1-4763-bafc-281fe099299b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.183 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:24 compute-0 ovn_controller[152859]: 2025-11-25T09:18:24Z|01638|binding|INFO|Releasing lport 49a3f274-19b1-4763-bafc-281fe099299b from this chassis (sb_readonly=0)
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.198 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.199 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26d70c6d-e66b-4570-a7d7-11486a935ed8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26d70c6d-e66b-4570-a7d7-11486a935ed8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.200 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[9070ac40-e0bd-408b-a77f-f96b9d5b120f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.200 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-26d70c6d-e66b-4570-a7d7-11486a935ed8
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/26d70c6d-e66b-4570-a7d7-11486a935ed8.pid.haproxy
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 26d70c6d-e66b-4570-a7d7-11486a935ed8
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:18:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:24.201 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'env', 'PROCESS_TAG=haproxy-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26d70c6d-e66b-4570-a7d7-11486a935ed8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:18:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2940: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.520 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062304.5197344, 282b7217-4c1e-4a42-b3da-05616f4e1da3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.521 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] VM Started (Lifecycle Event)
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.524 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.528 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.532 253542 INFO nova.virt.libvirt.driver [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance spawned successfully.
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.533 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.536 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.539 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.548 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.549 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.549 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.550 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.550 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.551 253542 DEBUG nova.virt.libvirt.driver [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.556 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.556 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062304.5208545, 282b7217-4c1e-4a42-b3da-05616f4e1da3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.556 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] VM Paused (Lifecycle Event)
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.585 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.588 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062304.5274475, 282b7217-4c1e-4a42-b3da-05616f4e1da3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.588 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] VM Resumed (Lifecycle Event)
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.613 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.616 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.626 253542 INFO nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Took 7.40 seconds to spawn the instance on the hypervisor.
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.627 253542 DEBUG nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.654 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:18:24 compute-0 podman[420385]: 2025-11-25 09:18:24.578437664 +0000 UTC m=+0.025467843 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.683 253542 INFO nova.compute.manager [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Took 8.67 seconds to build instance.
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.699 253542 DEBUG oslo_concurrency.lockutils [None req-0ad32855-dd92-421e-a879-3a223e28a816 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:18:24 compute-0 podman[420385]: 2025-11-25 09:18:24.70187302 +0000 UTC m=+0.148903169 container create 9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:18:24 compute-0 systemd[1]: Started libpod-conmon-9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9.scope.
Nov 25 09:18:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:18:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8869441ad4141e8c78750280aebdef1e5823b5dc10b72133bb9fe07f53bcf62c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:18:24 compute-0 podman[420385]: 2025-11-25 09:18:24.853691756 +0000 UTC m=+0.300722015 container init 9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 09:18:24 compute-0 podman[420385]: 2025-11-25 09:18:24.861214111 +0000 UTC m=+0.308244280 container start 9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:18:24 compute-0 nova_compute[253538]: 2025-11-25 09:18:24.864 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:24 compute-0 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[420400]: [NOTICE]   (420404) : New worker (420406) forked
Nov 25 09:18:24 compute-0 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[420400]: [NOTICE]   (420404) : Loading success.
Nov 25 09:18:25 compute-0 sshd-session[420416]: error: kex_exchange_identification: read: Connection reset by peer
Nov 25 09:18:25 compute-0 sshd-session[420416]: Connection reset by 45.140.17.97 port 11956
Nov 25 09:18:25 compute-0 ceph-mon[75015]: pgmap v2940: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:18:26 compute-0 nova_compute[253538]: 2025-11-25 09:18:26.202 253542 DEBUG nova.compute.manager [req-e141d6cf-c3d0-49e1-b4ff-555ddf0fc098 req-c72f8e2d-437e-42e6-9ed5-484740d6820b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:18:26 compute-0 nova_compute[253538]: 2025-11-25 09:18:26.203 253542 DEBUG oslo_concurrency.lockutils [req-e141d6cf-c3d0-49e1-b4ff-555ddf0fc098 req-c72f8e2d-437e-42e6-9ed5-484740d6820b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:18:26 compute-0 nova_compute[253538]: 2025-11-25 09:18:26.203 253542 DEBUG oslo_concurrency.lockutils [req-e141d6cf-c3d0-49e1-b4ff-555ddf0fc098 req-c72f8e2d-437e-42e6-9ed5-484740d6820b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:18:26 compute-0 nova_compute[253538]: 2025-11-25 09:18:26.203 253542 DEBUG oslo_concurrency.lockutils [req-e141d6cf-c3d0-49e1-b4ff-555ddf0fc098 req-c72f8e2d-437e-42e6-9ed5-484740d6820b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:18:26 compute-0 nova_compute[253538]: 2025-11-25 09:18:26.203 253542 DEBUG nova.compute.manager [req-e141d6cf-c3d0-49e1-b4ff-555ddf0fc098 req-c72f8e2d-437e-42e6-9ed5-484740d6820b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] No waiting events found dispatching network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:18:26 compute-0 nova_compute[253538]: 2025-11-25 09:18:26.204 253542 WARNING nova.compute.manager [req-e141d6cf-c3d0-49e1-b4ff-555ddf0fc098 req-c72f8e2d-437e-42e6-9ed5-484740d6820b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received unexpected event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c for instance with vm_state active and task_state None.
Nov 25 09:18:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2941: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 25 09:18:27 compute-0 ceph-mon[75015]: pgmap v2941: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 25 09:18:28 compute-0 nova_compute[253538]: 2025-11-25 09:18:28.000 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:18:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2942: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Nov 25 09:18:28 compute-0 ceph-mon[75015]: pgmap v2942: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Nov 25 09:18:28 compute-0 nova_compute[253538]: 2025-11-25 09:18:28.928 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:28.929 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:18:28 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:28.929 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:18:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:18:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3634771501' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:18:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:18:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3634771501' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:18:29 compute-0 nova_compute[253538]: 2025-11-25 09:18:29.394 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:29 compute-0 NetworkManager[48915]: <info>  [1764062309.3951] manager: (patch-br-int-to-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/672)
Nov 25 09:18:29 compute-0 NetworkManager[48915]: <info>  [1764062309.3964] manager: (patch-provnet-14ba300c-36d8-42f5-a7bd-8245a4488c5f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/673)
Nov 25 09:18:29 compute-0 nova_compute[253538]: 2025-11-25 09:18:29.481 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:29 compute-0 ovn_controller[152859]: 2025-11-25T09:18:29Z|01639|binding|INFO|Releasing lport 49a3f274-19b1-4763-bafc-281fe099299b from this chassis (sb_readonly=0)
Nov 25 09:18:29 compute-0 nova_compute[253538]: 2025-11-25 09:18:29.491 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:29 compute-0 nova_compute[253538]: 2025-11-25 09:18:29.866 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3634771501' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:18:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3634771501' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:18:30 compute-0 nova_compute[253538]: 2025-11-25 09:18:30.091 253542 DEBUG nova.compute.manager [req-2f94ac36-b991-45bf-8a99-f093e88e5b45 req-76cbf340-fffe-41bf-9957-26b1af64e4dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:18:30 compute-0 nova_compute[253538]: 2025-11-25 09:18:30.092 253542 DEBUG nova.compute.manager [req-2f94ac36-b991-45bf-8a99-f093e88e5b45 req-76cbf340-fffe-41bf-9957-26b1af64e4dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing instance network info cache due to event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:18:30 compute-0 nova_compute[253538]: 2025-11-25 09:18:30.092 253542 DEBUG oslo_concurrency.lockutils [req-2f94ac36-b991-45bf-8a99-f093e88e5b45 req-76cbf340-fffe-41bf-9957-26b1af64e4dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:18:30 compute-0 nova_compute[253538]: 2025-11-25 09:18:30.093 253542 DEBUG oslo_concurrency.lockutils [req-2f94ac36-b991-45bf-8a99-f093e88e5b45 req-76cbf340-fffe-41bf-9957-26b1af64e4dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:18:30 compute-0 nova_compute[253538]: 2025-11-25 09:18:30.093 253542 DEBUG nova.network.neutron [req-2f94ac36-b991-45bf-8a99-f093e88e5b45 req-76cbf340-fffe-41bf-9957-26b1af64e4dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:18:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2943: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 981 KiB/s wr, 87 op/s
Nov 25 09:18:30 compute-0 sudo[420418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:18:30 compute-0 sudo[420418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:30 compute-0 sudo[420418]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:30 compute-0 sudo[420443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:18:30 compute-0 sudo[420443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:30 compute-0 sudo[420443]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:30 compute-0 ceph-mon[75015]: pgmap v2943: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 981 KiB/s wr, 87 op/s
Nov 25 09:18:30 compute-0 sudo[420468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:18:30 compute-0 sudo[420468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:30 compute-0 sudo[420468]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:31 compute-0 sudo[420493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:18:31 compute-0 sudo[420493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:31 compute-0 nova_compute[253538]: 2025-11-25 09:18:31.257 253542 DEBUG nova.network.neutron [req-2f94ac36-b991-45bf-8a99-f093e88e5b45 req-76cbf340-fffe-41bf-9957-26b1af64e4dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updated VIF entry in instance network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:18:31 compute-0 nova_compute[253538]: 2025-11-25 09:18:31.257 253542 DEBUG nova.network.neutron [req-2f94ac36-b991-45bf-8a99-f093e88e5b45 req-76cbf340-fffe-41bf-9957-26b1af64e4dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:18:31 compute-0 nova_compute[253538]: 2025-11-25 09:18:31.274 253542 DEBUG oslo_concurrency.lockutils [req-2f94ac36-b991-45bf-8a99-f093e88e5b45 req-76cbf340-fffe-41bf-9957-26b1af64e4dc b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:18:31 compute-0 sudo[420493]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:18:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:18:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:18:31 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:18:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:18:31 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:18:31 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 26b6720f-df57-4801-b599-defd5da1fbf5 does not exist
Nov 25 09:18:31 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev bad9114d-2602-4f8c-b888-adf2608a20f4 does not exist
Nov 25 09:18:31 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 547d7171-a413-4532-bf56-71882ceabb0b does not exist
Nov 25 09:18:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:18:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:18:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:18:31 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:18:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:18:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:18:31 compute-0 sudo[420549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:18:31 compute-0 sudo[420549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:31 compute-0 sudo[420549]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:31 compute-0 sudo[420574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:18:31 compute-0 sudo[420574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:31 compute-0 sudo[420574]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:31 compute-0 sudo[420599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:18:31 compute-0 sudo[420599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:31 compute-0 sudo[420599]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:31 compute-0 sudo[420624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:18:31 compute-0 sudo[420624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:18:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:18:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:18:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:18:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:18:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:18:32 compute-0 podman[420688]: 2025-11-25 09:18:32.208721575 +0000 UTC m=+0.043573765 container create 385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:18:32 compute-0 systemd[1]: Started libpod-conmon-385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8.scope.
Nov 25 09:18:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:18:32 compute-0 podman[420688]: 2025-11-25 09:18:32.186492122 +0000 UTC m=+0.021344332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:18:32 compute-0 podman[420688]: 2025-11-25 09:18:32.301061285 +0000 UTC m=+0.135913505 container init 385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:18:32 compute-0 podman[420688]: 2025-11-25 09:18:32.309381282 +0000 UTC m=+0.144233472 container start 385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:18:32 compute-0 podman[420688]: 2025-11-25 09:18:32.314369858 +0000 UTC m=+0.149222048 container attach 385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 09:18:32 compute-0 jolly_sanderson[420704]: 167 167
Nov 25 09:18:32 compute-0 systemd[1]: libpod-385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8.scope: Deactivated successfully.
Nov 25 09:18:32 compute-0 podman[420688]: 2025-11-25 09:18:32.318835309 +0000 UTC m=+0.153687509 container died 385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:18:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecc77f87cf1e6c11d3fa4be2f6f29dbe70b7d0819029780d5822af48aacbb5c3-merged.mount: Deactivated successfully.
Nov 25 09:18:32 compute-0 podman[420688]: 2025-11-25 09:18:32.377536285 +0000 UTC m=+0.212388475 container remove 385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:18:32 compute-0 systemd[1]: libpod-conmon-385ec1b4f0a27aca40f9e51e99b16c274d9d55aaad1097c023685425c5a4c6f8.scope: Deactivated successfully.
Nov 25 09:18:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2944: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 809 KiB/s wr, 74 op/s
Nov 25 09:18:32 compute-0 podman[420730]: 2025-11-25 09:18:32.624912959 +0000 UTC m=+0.057893125 container create a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_black, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:18:32 compute-0 systemd[1]: Started libpod-conmon-a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2.scope.
Nov 25 09:18:32 compute-0 podman[420730]: 2025-11-25 09:18:32.596472096 +0000 UTC m=+0.029452282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:18:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81282f4e78d3cd1c63f5c35f4dcf9bdbd3589de809e49e49cb4e52938d1ceb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81282f4e78d3cd1c63f5c35f4dcf9bdbd3589de809e49e49cb4e52938d1ceb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81282f4e78d3cd1c63f5c35f4dcf9bdbd3589de809e49e49cb4e52938d1ceb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81282f4e78d3cd1c63f5c35f4dcf9bdbd3589de809e49e49cb4e52938d1ceb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81282f4e78d3cd1c63f5c35f4dcf9bdbd3589de809e49e49cb4e52938d1ceb7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:18:32 compute-0 podman[420730]: 2025-11-25 09:18:32.726407788 +0000 UTC m=+0.159387954 container init a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:18:32 compute-0 podman[420730]: 2025-11-25 09:18:32.736252076 +0000 UTC m=+0.169232242 container start a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 09:18:32 compute-0 podman[420730]: 2025-11-25 09:18:32.740661716 +0000 UTC m=+0.173641882 container attach a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 09:18:32 compute-0 ceph-mon[75015]: pgmap v2944: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 809 KiB/s wr, 74 op/s
Nov 25 09:18:33 compute-0 nova_compute[253538]: 2025-11-25 09:18:33.003 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.455981) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062313456013, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 516, "num_deletes": 255, "total_data_size": 449145, "memory_usage": 460552, "flush_reason": "Manual Compaction"}
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062313486170, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 444845, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61131, "largest_seqno": 61646, "table_properties": {"data_size": 441982, "index_size": 834, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6735, "raw_average_key_size": 18, "raw_value_size": 436204, "raw_average_value_size": 1195, "num_data_blocks": 38, "num_entries": 365, "num_filter_entries": 365, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062284, "oldest_key_time": 1764062284, "file_creation_time": 1764062313, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 30236 microseconds, and 2016 cpu microseconds.
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.486213) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 444845 bytes OK
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.486231) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.500945) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.500960) EVENT_LOG_v1 {"time_micros": 1764062313500955, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.500978) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 446142, prev total WAL file size 446142, number of live WAL files 2.
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.501395) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353139' seq:72057594037927935, type:22 .. '6C6F676D0032373730' seq:0, type:0; will stop at (end)
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(434KB)], [143(9769KB)]
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062313501475, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 10448954, "oldest_snapshot_seqno": -1}
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 7958 keys, 10343495 bytes, temperature: kUnknown
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062313574995, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 10343495, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10292010, "index_size": 30508, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19909, "raw_key_size": 209502, "raw_average_key_size": 26, "raw_value_size": 10151610, "raw_average_value_size": 1275, "num_data_blocks": 1186, "num_entries": 7958, "num_filter_entries": 7958, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062313, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.575376) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10343495 bytes
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.576867) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.9 rd, 140.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.5 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(46.7) write-amplify(23.3) OK, records in: 8478, records dropped: 520 output_compression: NoCompression
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.576896) EVENT_LOG_v1 {"time_micros": 1764062313576883, "job": 88, "event": "compaction_finished", "compaction_time_micros": 73616, "compaction_time_cpu_micros": 28779, "output_level": 6, "num_output_files": 1, "total_output_size": 10343495, "num_input_records": 8478, "num_output_records": 7958, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062313577202, "job": 88, "event": "table_file_deletion", "file_number": 145}
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062313581310, "job": 88, "event": "table_file_deletion", "file_number": 143}
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.501292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.581424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.581430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.581432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.581434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:18:33 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:18:33.581436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:18:33 compute-0 condescending_black[420746]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:18:33 compute-0 condescending_black[420746]: --> relative data size: 1.0
Nov 25 09:18:33 compute-0 condescending_black[420746]: --> All data devices are unavailable
Nov 25 09:18:33 compute-0 systemd[1]: libpod-a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2.scope: Deactivated successfully.
Nov 25 09:18:33 compute-0 podman[420730]: 2025-11-25 09:18:33.910563369 +0000 UTC m=+1.343543535 container died a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_black, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:18:33 compute-0 systemd[1]: libpod-a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2.scope: Consumed 1.110s CPU time.
Nov 25 09:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c81282f4e78d3cd1c63f5c35f4dcf9bdbd3589de809e49e49cb4e52938d1ceb7-merged.mount: Deactivated successfully.
Nov 25 09:18:33 compute-0 podman[420730]: 2025-11-25 09:18:33.964024372 +0000 UTC m=+1.397004538 container remove a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:18:34 compute-0 systemd[1]: libpod-conmon-a821990298980c51035ae7d7615927331d30b10972f2b09bdfe9fe68376ab2d2.scope: Deactivated successfully.
Nov 25 09:18:34 compute-0 sudo[420624]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:34 compute-0 sudo[420788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:18:34 compute-0 sudo[420788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:34 compute-0 sudo[420788]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:34 compute-0 sudo[420813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:18:34 compute-0 sudo[420813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:34 compute-0 sudo[420813]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:34 compute-0 sudo[420838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:18:34 compute-0 sudo[420838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:34 compute-0 sudo[420838]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:34 compute-0 sudo[420863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:18:34 compute-0 sudo[420863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2945: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:18:34 compute-0 podman[420927]: 2025-11-25 09:18:34.633000317 +0000 UTC m=+0.042969189 container create 96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:18:34 compute-0 systemd[1]: Started libpod-conmon-96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160.scope.
Nov 25 09:18:34 compute-0 podman[420927]: 2025-11-25 09:18:34.612191021 +0000 UTC m=+0.022159913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:18:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:18:34 compute-0 podman[420927]: 2025-11-25 09:18:34.727929468 +0000 UTC m=+0.137898370 container init 96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 09:18:34 compute-0 podman[420927]: 2025-11-25 09:18:34.737606131 +0000 UTC m=+0.147575003 container start 96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 09:18:34 compute-0 podman[420927]: 2025-11-25 09:18:34.741389324 +0000 UTC m=+0.151358226 container attach 96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Nov 25 09:18:34 compute-0 musing_gagarin[420941]: 167 167
Nov 25 09:18:34 compute-0 systemd[1]: libpod-96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160.scope: Deactivated successfully.
Nov 25 09:18:34 compute-0 podman[420927]: 2025-11-25 09:18:34.745760133 +0000 UTC m=+0.155729025 container died 96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb0d98d688efe5ce92032b852db357408613bb256a8b904bc395e871d3073634-merged.mount: Deactivated successfully.
Nov 25 09:18:34 compute-0 podman[420927]: 2025-11-25 09:18:34.790002165 +0000 UTC m=+0.199971037 container remove 96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:18:34 compute-0 systemd[1]: libpod-conmon-96ed90b84f24739d2731214821bf87593a654d07b1dc1c6127bf0087cbd4a160.scope: Deactivated successfully.
Nov 25 09:18:34 compute-0 nova_compute[253538]: 2025-11-25 09:18:34.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:34 compute-0 podman[420965]: 2025-11-25 09:18:34.981357337 +0000 UTC m=+0.041531360 container create b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:18:35 compute-0 systemd[1]: Started libpod-conmon-b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771.scope.
Nov 25 09:18:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86672213367fb3c872b22f8b375caaf849b67cae25c785b3753f43fd6c5795d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86672213367fb3c872b22f8b375caaf849b67cae25c785b3753f43fd6c5795d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86672213367fb3c872b22f8b375caaf849b67cae25c785b3753f43fd6c5795d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86672213367fb3c872b22f8b375caaf849b67cae25c785b3753f43fd6c5795d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:18:35 compute-0 podman[420965]: 2025-11-25 09:18:34.96268093 +0000 UTC m=+0.022854973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:18:35 compute-0 podman[420965]: 2025-11-25 09:18:35.071901039 +0000 UTC m=+0.132075112 container init b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_antonelli, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 09:18:35 compute-0 podman[420965]: 2025-11-25 09:18:35.080941504 +0000 UTC m=+0.141115527 container start b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_antonelli, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:18:35 compute-0 podman[420965]: 2025-11-25 09:18:35.118843424 +0000 UTC m=+0.179017447 container attach b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_antonelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 09:18:35 compute-0 ceph-mon[75015]: pgmap v2945: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]: {
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:     "0": [
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:         {
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "devices": [
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "/dev/loop3"
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             ],
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "lv_name": "ceph_lv0",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "lv_size": "21470642176",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "name": "ceph_lv0",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "tags": {
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.cluster_name": "ceph",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.crush_device_class": "",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.encrypted": "0",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.osd_id": "0",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.type": "block",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.vdo": "0"
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             },
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "type": "block",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "vg_name": "ceph_vg0"
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:         }
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:     ],
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:     "1": [
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:         {
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "devices": [
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "/dev/loop4"
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             ],
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "lv_name": "ceph_lv1",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "lv_size": "21470642176",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "name": "ceph_lv1",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "tags": {
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.cluster_name": "ceph",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.crush_device_class": "",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.encrypted": "0",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.osd_id": "1",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.type": "block",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.vdo": "0"
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             },
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "type": "block",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "vg_name": "ceph_vg1"
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:         }
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:     ],
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:     "2": [
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:         {
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "devices": [
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "/dev/loop5"
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             ],
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "lv_name": "ceph_lv2",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "lv_size": "21470642176",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "name": "ceph_lv2",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "tags": {
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.cluster_name": "ceph",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.crush_device_class": "",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.encrypted": "0",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.osd_id": "2",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.type": "block",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:                 "ceph.vdo": "0"
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             },
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "type": "block",
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:             "vg_name": "ceph_vg2"
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:         }
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]:     ]
Nov 25 09:18:35 compute-0 admiring_antonelli[420982]: }
Nov 25 09:18:35 compute-0 systemd[1]: libpod-b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771.scope: Deactivated successfully.
Nov 25 09:18:35 compute-0 podman[420965]: 2025-11-25 09:18:35.867928458 +0000 UTC m=+0.928102501 container died b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_antonelli, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:18:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-86672213367fb3c872b22f8b375caaf849b67cae25c785b3753f43fd6c5795d2-merged.mount: Deactivated successfully.
Nov 25 09:18:35 compute-0 podman[420965]: 2025-11-25 09:18:35.997686575 +0000 UTC m=+1.057860598 container remove b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:18:36 compute-0 systemd[1]: libpod-conmon-b7bfbe40595c91dd118f019350030aac73586d6697e4cc2d90af73a8a5f7d771.scope: Deactivated successfully.
Nov 25 09:18:36 compute-0 sudo[420863]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:36 compute-0 sudo[421003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:18:36 compute-0 sudo[421003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:36 compute-0 sudo[421003]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:36 compute-0 sudo[421028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:18:36 compute-0 sudo[421028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:36 compute-0 sudo[421028]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:36 compute-0 sudo[421053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:18:36 compute-0 sudo[421053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:36 compute-0 sudo[421053]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:36 compute-0 sudo[421078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:18:36 compute-0 sudo[421078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2946: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:18:36 compute-0 podman[421143]: 2025-11-25 09:18:36.695234427 +0000 UTC m=+0.043191155 container create 17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:18:36 compute-0 systemd[1]: Started libpod-conmon-17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49.scope.
Nov 25 09:18:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:18:36 compute-0 podman[421143]: 2025-11-25 09:18:36.676468117 +0000 UTC m=+0.024424865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:18:36 compute-0 podman[421143]: 2025-11-25 09:18:36.779656582 +0000 UTC m=+0.127613360 container init 17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:18:36 compute-0 podman[421143]: 2025-11-25 09:18:36.791840583 +0000 UTC m=+0.139797311 container start 17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:18:36 compute-0 podman[421143]: 2025-11-25 09:18:36.795351419 +0000 UTC m=+0.143308197 container attach 17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:18:36 compute-0 agitated_lalande[421159]: 167 167
Nov 25 09:18:36 compute-0 systemd[1]: libpod-17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49.scope: Deactivated successfully.
Nov 25 09:18:36 compute-0 conmon[421159]: conmon 17969f5917273aaaa4b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49.scope/container/memory.events
Nov 25 09:18:36 compute-0 podman[421143]: 2025-11-25 09:18:36.799130711 +0000 UTC m=+0.147087439 container died 17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:18:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7054b438cd380079ce27c5f1f1f394c7d3a97c4b4c703a4372490306583bef8-merged.mount: Deactivated successfully.
Nov 25 09:18:36 compute-0 podman[421143]: 2025-11-25 09:18:36.837117374 +0000 UTC m=+0.185074102 container remove 17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:18:36 compute-0 systemd[1]: libpod-conmon-17969f5917273aaaa4b9f90dc0e735cc189c4a0eb4ec38dce878db81f54a6b49.scope: Deactivated successfully.
Nov 25 09:18:37 compute-0 podman[421181]: 2025-11-25 09:18:37.037238074 +0000 UTC m=+0.040174573 container create 580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:18:37 compute-0 systemd[1]: Started libpod-conmon-580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591.scope.
Nov 25 09:18:37 compute-0 podman[421181]: 2025-11-25 09:18:37.019592035 +0000 UTC m=+0.022528554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:18:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:18:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4771cb63b069d1007544b3f1b25814c400b4833ad6021438486341c4b4d58b84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:18:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4771cb63b069d1007544b3f1b25814c400b4833ad6021438486341c4b4d58b84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:18:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4771cb63b069d1007544b3f1b25814c400b4833ad6021438486341c4b4d58b84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:18:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4771cb63b069d1007544b3f1b25814c400b4833ad6021438486341c4b4d58b84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:18:37 compute-0 podman[421181]: 2025-11-25 09:18:37.135107545 +0000 UTC m=+0.138044074 container init 580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 09:18:37 compute-0 podman[421181]: 2025-11-25 09:18:37.148514349 +0000 UTC m=+0.151450848 container start 580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:18:37 compute-0 podman[421181]: 2025-11-25 09:18:37.153366841 +0000 UTC m=+0.156303350 container attach 580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 09:18:37 compute-0 ovn_controller[152859]: 2025-11-25T09:18:37Z|00211|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:22:67 10.100.0.14
Nov 25 09:18:37 compute-0 ovn_controller[152859]: 2025-11-25T09:18:37Z|00212|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:22:67 10.100.0.14
Nov 25 09:18:37 compute-0 ceph-mon[75015]: pgmap v2946: 321 pgs: 321 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:18:37 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:37.931 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:18:38 compute-0 nova_compute[253538]: 2025-11-25 09:18:38.007 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]: {
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:         "osd_id": 1,
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:         "type": "bluestore"
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:     },
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:         "osd_id": 2,
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:         "type": "bluestore"
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:     },
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:         "osd_id": 0,
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:         "type": "bluestore"
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]:     }
Nov 25 09:18:38 compute-0 upbeat_hamilton[421197]: }
Nov 25 09:18:38 compute-0 systemd[1]: libpod-580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591.scope: Deactivated successfully.
Nov 25 09:18:38 compute-0 systemd[1]: libpod-580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591.scope: Consumed 1.078s CPU time.
Nov 25 09:18:38 compute-0 podman[421181]: 2025-11-25 09:18:38.224037866 +0000 UTC m=+1.226974395 container died 580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 09:18:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-4771cb63b069d1007544b3f1b25814c400b4833ad6021438486341c4b4d58b84-merged.mount: Deactivated successfully.
Nov 25 09:18:38 compute-0 podman[421181]: 2025-11-25 09:18:38.333644886 +0000 UTC m=+1.336581365 container remove 580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:18:38 compute-0 systemd[1]: libpod-conmon-580d9736b00a2c1451acbed0796894bf08a26ab9d5d1b8546ffc9882b8bd3591.scope: Deactivated successfully.
Nov 25 09:18:38 compute-0 sudo[421078]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:18:38 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:18:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:18:38 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:18:38 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 40ff5b64-8821-4775-9053-90b2660f5d2c does not exist
Nov 25 09:18:38 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 224a3f87-18bc-4e7c-8b69-bffe0a98030d does not exist
Nov 25 09:18:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:18:38 compute-0 sudo[421242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:18:38 compute-0 sudo[421242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:38 compute-0 sudo[421242]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2947: 321 pgs: 321 active+clean; 159 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 92 op/s
Nov 25 09:18:38 compute-0 sudo[421267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:18:38 compute-0 sudo[421267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:18:38 compute-0 sudo[421267]: pam_unix(sudo:session): session closed for user root
Nov 25 09:18:39 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:18:39 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:18:39 compute-0 ceph-mon[75015]: pgmap v2947: 321 pgs: 321 active+clean; 159 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 92 op/s
Nov 25 09:18:39 compute-0 nova_compute[253538]: 2025-11-25 09:18:39.869 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2948: 321 pgs: 321 active+clean; 165 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 575 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 09:18:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:41.103 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:18:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:41.103 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:18:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:41.104 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:18:41 compute-0 ceph-mon[75015]: pgmap v2948: 321 pgs: 321 active+clean; 165 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 575 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 09:18:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2949: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 09:18:43 compute-0 nova_compute[253538]: 2025-11-25 09:18:43.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:18:43 compute-0 ceph-mon[75015]: pgmap v2949: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 09:18:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2950: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 09:18:44 compute-0 nova_compute[253538]: 2025-11-25 09:18:44.872 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:45 compute-0 ceph-mon[75015]: pgmap v2950: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 09:18:45 compute-0 podman[421292]: 2025-11-25 09:18:45.842536978 +0000 UTC m=+0.081737363 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 25 09:18:45 compute-0 podman[421293]: 2025-11-25 09:18:45.843492174 +0000 UTC m=+0.076577573 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 09:18:46 compute-0 nova_compute[253538]: 2025-11-25 09:18:46.397 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:18:46 compute-0 nova_compute[253538]: 2025-11-25 09:18:46.397 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:18:46 compute-0 nova_compute[253538]: 2025-11-25 09:18:46.397 253542 INFO nova.compute.manager [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Shelving
Nov 25 09:18:46 compute-0 nova_compute[253538]: 2025-11-25 09:18:46.426 253542 DEBUG nova.virt.libvirt.driver [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 09:18:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2951: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Nov 25 09:18:47 compute-0 ceph-mon[75015]: pgmap v2951: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Nov 25 09:18:48 compute-0 nova_compute[253538]: 2025-11-25 09:18:48.058 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:18:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2952: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Nov 25 09:18:48 compute-0 kernel: tap9d78d6ba-34 (unregistering): left promiscuous mode
Nov 25 09:18:48 compute-0 NetworkManager[48915]: <info>  [1764062328.7551] device (tap9d78d6ba-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:18:48 compute-0 ovn_controller[152859]: 2025-11-25T09:18:48Z|01640|binding|INFO|Releasing lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c from this chassis (sb_readonly=0)
Nov 25 09:18:48 compute-0 ovn_controller[152859]: 2025-11-25T09:18:48Z|01641|binding|INFO|Setting lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c down in Southbound
Nov 25 09:18:48 compute-0 nova_compute[253538]: 2025-11-25 09:18:48.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:48 compute-0 ovn_controller[152859]: 2025-11-25T09:18:48Z|01642|binding|INFO|Removing iface tap9d78d6ba-34 ovn-installed in OVS
Nov 25 09:18:48 compute-0 nova_compute[253538]: 2025-11-25 09:18:48.766 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:48.775 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:22:67 10.100.0.14'], port_security=['fa:16:3e:eb:22:67 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '282b7217-4c1e-4a42-b3da-05616f4e1da3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41c67820b40a4185a60c4245f9c43ef5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5b17be75-81bb-4f4d-9234-5572279d07c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3ce9783-7822-423e-a3be-85165987da53, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9d78d6ba-3489-4cfd-ae33-9166be3f940c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:18:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:48.778 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9d78d6ba-3489-4cfd-ae33-9166be3f940c in datapath 26d70c6d-e66b-4570-a7d7-11486a935ed8 unbound from our chassis
Nov 25 09:18:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:48.780 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26d70c6d-e66b-4570-a7d7-11486a935ed8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:18:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:48.782 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c7701a-c053-47df-b933-8ed93dc2657b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:48.782 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 namespace which is not needed anymore
Nov 25 09:18:48 compute-0 nova_compute[253538]: 2025-11-25 09:18:48.785 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:48 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000099.scope: Deactivated successfully.
Nov 25 09:18:48 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000099.scope: Consumed 13.933s CPU time.
Nov 25 09:18:48 compute-0 systemd-machined[215790]: Machine qemu-185-instance-00000099 terminated.
Nov 25 09:18:48 compute-0 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[420400]: [NOTICE]   (420404) : haproxy version is 2.8.14-c23fe91
Nov 25 09:18:48 compute-0 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[420400]: [NOTICE]   (420404) : path to executable is /usr/sbin/haproxy
Nov 25 09:18:48 compute-0 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[420400]: [WARNING]  (420404) : Exiting Master process...
Nov 25 09:18:48 compute-0 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[420400]: [ALERT]    (420404) : Current worker (420406) exited with code 143 (Terminated)
Nov 25 09:18:48 compute-0 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[420400]: [WARNING]  (420404) : All workers exited. Exiting... (0)
Nov 25 09:18:48 compute-0 systemd[1]: libpod-9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9.scope: Deactivated successfully.
Nov 25 09:18:48 compute-0 podman[421356]: 2025-11-25 09:18:48.936287419 +0000 UTC m=+0.050161265 container died 9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:18:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9-userdata-shm.mount: Deactivated successfully.
Nov 25 09:18:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-8869441ad4141e8c78750280aebdef1e5823b5dc10b72133bb9fe07f53bcf62c-merged.mount: Deactivated successfully.
Nov 25 09:18:48 compute-0 nova_compute[253538]: 2025-11-25 09:18:48.989 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:48 compute-0 podman[421356]: 2025-11-25 09:18:48.990990546 +0000 UTC m=+0.104864392 container cleanup 9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:18:48 compute-0 nova_compute[253538]: 2025-11-25 09:18:48.994 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:49 compute-0 systemd[1]: libpod-conmon-9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9.scope: Deactivated successfully.
Nov 25 09:18:49 compute-0 podman[421394]: 2025-11-25 09:18:49.060813194 +0000 UTC m=+0.045967800 container remove 9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:18:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.065 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[f78b05b8-10ff-48ae-8615-43d081b57e8a]: (4, ('Tue Nov 25 09:18:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 (9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9)\n9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9\nTue Nov 25 09:18:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 (9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9)\n9d9b9766124d9feb2b45bea3dc9173783fcf7a6d1b818c502ca20a1c08707dd9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.067 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[093dec41-c2c9-405f-ba13-7aa21241b085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.068 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d70c6d-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:18:49 compute-0 nova_compute[253538]: 2025-11-25 09:18:49.069 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:49 compute-0 kernel: tap26d70c6d-e0: left promiscuous mode
Nov 25 09:18:49 compute-0 nova_compute[253538]: 2025-11-25 09:18:49.081 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:49 compute-0 nova_compute[253538]: 2025-11-25 09:18:49.085 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.088 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[82591a0e-2569-4fc1-b251-6a1d73e4da34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.102 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b39dbd-d469-4df7-832d-32a657de50e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.103 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[a0fb0ae1-c484-4781-ad5c-840aeaf1922a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:49 compute-0 nova_compute[253538]: 2025-11-25 09:18:49.104 253542 DEBUG nova.compute.manager [req-b172a661-bc70-4568-8400-c89dcade785f req-6c098d6c-ee30-4edb-895b-bdbedd512d23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-unplugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:18:49 compute-0 nova_compute[253538]: 2025-11-25 09:18:49.104 253542 DEBUG oslo_concurrency.lockutils [req-b172a661-bc70-4568-8400-c89dcade785f req-6c098d6c-ee30-4edb-895b-bdbedd512d23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:18:49 compute-0 nova_compute[253538]: 2025-11-25 09:18:49.104 253542 DEBUG oslo_concurrency.lockutils [req-b172a661-bc70-4568-8400-c89dcade785f req-6c098d6c-ee30-4edb-895b-bdbedd512d23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:18:49 compute-0 nova_compute[253538]: 2025-11-25 09:18:49.105 253542 DEBUG oslo_concurrency.lockutils [req-b172a661-bc70-4568-8400-c89dcade785f req-6c098d6c-ee30-4edb-895b-bdbedd512d23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:18:49 compute-0 nova_compute[253538]: 2025-11-25 09:18:49.105 253542 DEBUG nova.compute.manager [req-b172a661-bc70-4568-8400-c89dcade785f req-6c098d6c-ee30-4edb-895b-bdbedd512d23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] No waiting events found dispatching network-vif-unplugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:18:49 compute-0 nova_compute[253538]: 2025-11-25 09:18:49.105 253542 WARNING nova.compute.manager [req-b172a661-bc70-4568-8400-c89dcade785f req-6c098d6c-ee30-4edb-895b-bdbedd512d23 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received unexpected event network-vif-unplugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c for instance with vm_state active and task_state shelving.
Nov 25 09:18:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.118 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[503f8958-4d34-4925-a25f-04da05deb551]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765857, 'reachable_time': 35317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421415, 'error': None, 'target': 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.121 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:18:49 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:18:49.121 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[1af300b6-711c-482a-9019-88dabed4a5f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:18:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d26d70c6d\x2de66b\x2d4570\x2da7d7\x2d11486a935ed8.mount: Deactivated successfully.
Nov 25 09:18:49 compute-0 nova_compute[253538]: 2025-11-25 09:18:49.445 253542 INFO nova.virt.libvirt.driver [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance shutdown successfully after 3 seconds.
Nov 25 09:18:49 compute-0 nova_compute[253538]: 2025-11-25 09:18:49.454 253542 INFO nova.virt.libvirt.driver [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance destroyed successfully.
Nov 25 09:18:49 compute-0 nova_compute[253538]: 2025-11-25 09:18:49.455 253542 DEBUG nova.objects.instance [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:18:49 compute-0 nova_compute[253538]: 2025-11-25 09:18:49.704 253542 INFO nova.virt.libvirt.driver [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Beginning cold snapshot process
Nov 25 09:18:49 compute-0 ceph-mon[75015]: pgmap v2952: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Nov 25 09:18:49 compute-0 nova_compute[253538]: 2025-11-25 09:18:49.831 253542 DEBUG nova.virt.libvirt.imagebackend [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] No parent info for 8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 09:18:49 compute-0 nova_compute[253538]: 2025-11-25 09:18:49.874 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:50 compute-0 nova_compute[253538]: 2025-11-25 09:18:50.061 253542 DEBUG nova.storage.rbd_utils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] creating snapshot(31dc4e47f1b5484f859997b028f1b43e) on rbd image(282b7217-4c1e-4a42-b3da-05616f4e1da3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 09:18:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2953: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 209 KiB/s rd, 753 KiB/s wr, 43 op/s
Nov 25 09:18:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Nov 25 09:18:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Nov 25 09:18:50 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Nov 25 09:18:50 compute-0 nova_compute[253538]: 2025-11-25 09:18:50.851 253542 DEBUG nova.storage.rbd_utils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] cloning vms/282b7217-4c1e-4a42-b3da-05616f4e1da3_disk@31dc4e47f1b5484f859997b028f1b43e to images/d40366df-73ef-47e3-8169-d7bf9c208bbb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 09:18:50 compute-0 nova_compute[253538]: 2025-11-25 09:18:50.973 253542 DEBUG nova.storage.rbd_utils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] flattening images/d40366df-73ef-47e3-8169-d7bf9c208bbb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 09:18:51 compute-0 nova_compute[253538]: 2025-11-25 09:18:51.190 253542 DEBUG nova.compute.manager [req-39178f6c-4a2d-4222-ba15-bea9148884c7 req-04c1e154-eb03-4f38-84f8-9436edd78492 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:18:51 compute-0 nova_compute[253538]: 2025-11-25 09:18:51.191 253542 DEBUG oslo_concurrency.lockutils [req-39178f6c-4a2d-4222-ba15-bea9148884c7 req-04c1e154-eb03-4f38-84f8-9436edd78492 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:18:51 compute-0 nova_compute[253538]: 2025-11-25 09:18:51.191 253542 DEBUG oslo_concurrency.lockutils [req-39178f6c-4a2d-4222-ba15-bea9148884c7 req-04c1e154-eb03-4f38-84f8-9436edd78492 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:18:51 compute-0 nova_compute[253538]: 2025-11-25 09:18:51.192 253542 DEBUG oslo_concurrency.lockutils [req-39178f6c-4a2d-4222-ba15-bea9148884c7 req-04c1e154-eb03-4f38-84f8-9436edd78492 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:18:51 compute-0 nova_compute[253538]: 2025-11-25 09:18:51.192 253542 DEBUG nova.compute.manager [req-39178f6c-4a2d-4222-ba15-bea9148884c7 req-04c1e154-eb03-4f38-84f8-9436edd78492 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] No waiting events found dispatching network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:18:51 compute-0 nova_compute[253538]: 2025-11-25 09:18:51.193 253542 WARNING nova.compute.manager [req-39178f6c-4a2d-4222-ba15-bea9148884c7 req-04c1e154-eb03-4f38-84f8-9436edd78492 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received unexpected event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c for instance with vm_state active and task_state shelving_image_uploading.
Nov 25 09:18:51 compute-0 nova_compute[253538]: 2025-11-25 09:18:51.375 253542 DEBUG nova.storage.rbd_utils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] removing snapshot(31dc4e47f1b5484f859997b028f1b43e) on rbd image(282b7217-4c1e-4a42-b3da-05616f4e1da3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 09:18:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Nov 25 09:18:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Nov 25 09:18:51 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Nov 25 09:18:51 compute-0 ceph-mon[75015]: pgmap v2953: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 209 KiB/s rd, 753 KiB/s wr, 43 op/s
Nov 25 09:18:51 compute-0 ceph-mon[75015]: osdmap e266: 3 total, 3 up, 3 in
Nov 25 09:18:51 compute-0 nova_compute[253538]: 2025-11-25 09:18:51.817 253542 DEBUG nova.storage.rbd_utils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] creating snapshot(snap) on rbd image(d40366df-73ef-47e3-8169-d7bf9c208bbb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 09:18:52 compute-0 nova_compute[253538]: 2025-11-25 09:18:52.172 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:18:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2956: 321 pgs: 321 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.7 MiB/s wr, 60 op/s
Nov 25 09:18:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Nov 25 09:18:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Nov 25 09:18:52 compute-0 ceph-mon[75015]: osdmap e267: 3 total, 3 up, 3 in
Nov 25 09:18:52 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Nov 25 09:18:53 compute-0 nova_compute[253538]: 2025-11-25 09:18:53.060 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:18:53
Nov 25 09:18:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:18:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:18:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.control', 'backups', 'vms', 'images', 'cephfs.cephfs.meta']
Nov 25 09:18:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:18:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:18:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:18:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:18:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:18:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:18:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:18:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:18:53 compute-0 ceph-mon[75015]: pgmap v2956: 321 pgs: 321 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.7 MiB/s wr, 60 op/s
Nov 25 09:18:53 compute-0 ceph-mon[75015]: osdmap e268: 3 total, 3 up, 3 in
Nov 25 09:18:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:18:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:18:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:18:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:18:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:18:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:18:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:18:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:18:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:18:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:18:54 compute-0 ceph-mgr[75313]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3119838916
Nov 25 09:18:54 compute-0 nova_compute[253538]: 2025-11-25 09:18:54.214 253542 INFO nova.virt.libvirt.driver [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Snapshot image upload complete
Nov 25 09:18:54 compute-0 nova_compute[253538]: 2025-11-25 09:18:54.215 253542 DEBUG nova.compute.manager [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:18:54 compute-0 nova_compute[253538]: 2025-11-25 09:18:54.255 253542 INFO nova.compute.manager [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Shelve offloading
Nov 25 09:18:54 compute-0 nova_compute[253538]: 2025-11-25 09:18:54.262 253542 INFO nova.virt.libvirt.driver [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance destroyed successfully.
Nov 25 09:18:54 compute-0 nova_compute[253538]: 2025-11-25 09:18:54.263 253542 DEBUG nova.compute.manager [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:18:54 compute-0 nova_compute[253538]: 2025-11-25 09:18:54.266 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:18:54 compute-0 nova_compute[253538]: 2025-11-25 09:18:54.266 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquired lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:18:54 compute-0 nova_compute[253538]: 2025-11-25 09:18:54.267 253542 DEBUG nova.network.neutron [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:18:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2958: 321 pgs: 321 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.6 MiB/s wr, 134 op/s
Nov 25 09:18:54 compute-0 podman[421559]: 2025-11-25 09:18:54.8401772 +0000 UTC m=+0.091538370 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:18:54 compute-0 nova_compute[253538]: 2025-11-25 09:18:54.875 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:54 compute-0 ceph-mon[75015]: pgmap v2958: 321 pgs: 321 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.6 MiB/s wr, 134 op/s
Nov 25 09:18:55 compute-0 nova_compute[253538]: 2025-11-25 09:18:55.279 253542 DEBUG nova.network.neutron [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:18:55 compute-0 nova_compute[253538]: 2025-11-25 09:18:55.346 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Releasing lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:18:55 compute-0 nova_compute[253538]: 2025-11-25 09:18:55.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:18:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2959: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 170 op/s
Nov 25 09:18:57 compute-0 ceph-mon[75015]: pgmap v2959: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 170 op/s
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.062 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.319 253542 INFO nova.virt.libvirt.driver [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance destroyed successfully.
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.319 253542 DEBUG nova.objects.instance [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'resources' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.331 253542 DEBUG nova.virt.libvirt.vif [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-853165821',display_name='tempest-TestShelveInstance-server-853165821',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-853165821',id=153,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNJQsCu4fQ3ll5Z4ZaGvMq+pPgiaY3EvL05ETUACffFb5NNPT58fZR5bxwEgYmFiG8knRhbPhzoHa6MYoWsZMDhMe0q2RDfQW/VzCu9RVlFpki+QcCgPNPr5WwLwdENig==',key_name='tempest-TestShelveInstance-83615415',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:18:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='41c67820b40a4185a60c4245f9c43ef5',ramdisk_id='',reservation_id='r-5ikkcbmq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1867415308',owner_user_name='tempest-TestShelveInstance-1867415308-project-member',shelved_at='2025-11-25T09:18:54.215428',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='d40366df-73ef-47e3-8169-d7bf9c208bbb'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:18:49Z,user_data=None,user_id='a68fbd2f756d42aa982630f3a41f0a1f',uuid=282b7217-4c1e-4a42-b3da-05616f4e1da3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.331 253542 DEBUG nova.network.os_vif_util [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converting VIF {"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.332 253542 DEBUG nova.network.os_vif_util [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.333 253542 DEBUG os_vif [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.335 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.335 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d78d6ba-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.337 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.338 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.342 253542 INFO os_vif [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34')
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.394 253542 DEBUG nova.compute.manager [req-09827419-15be-48f4-a6d0-c149c7ff5af3 req-6950896a-8ab7-40f9-bc58-c5cee0026aab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.395 253542 DEBUG nova.compute.manager [req-09827419-15be-48f4-a6d0-c149c7ff5af3 req-6950896a-8ab7-40f9-bc58-c5cee0026aab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing instance network info cache due to event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.395 253542 DEBUG oslo_concurrency.lockutils [req-09827419-15be-48f4-a6d0-c149c7ff5af3 req-6950896a-8ab7-40f9-bc58-c5cee0026aab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.396 253542 DEBUG oslo_concurrency.lockutils [req-09827419-15be-48f4-a6d0-c149c7ff5af3 req-6950896a-8ab7-40f9-bc58-c5cee0026aab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:18:58 compute-0 nova_compute[253538]: 2025-11-25 09:18:58.396 253542 DEBUG nova.network.neutron [req-09827419-15be-48f4-a6d0-c149c7ff5af3 req-6950896a-8ab7-40f9-bc58-c5cee0026aab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:18:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:18:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Nov 25 09:18:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Nov 25 09:18:58 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Nov 25 09:18:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2961: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.6 MiB/s wr, 90 op/s
Nov 25 09:18:59 compute-0 nova_compute[253538]: 2025-11-25 09:18:59.273 253542 INFO nova.virt.libvirt.driver [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Deleting instance files /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3_del
Nov 25 09:18:59 compute-0 nova_compute[253538]: 2025-11-25 09:18:59.274 253542 INFO nova.virt.libvirt.driver [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Deletion of /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3_del complete
Nov 25 09:18:59 compute-0 nova_compute[253538]: 2025-11-25 09:18:59.372 253542 INFO nova.scheduler.client.report [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Deleted allocations for instance 282b7217-4c1e-4a42-b3da-05616f4e1da3
Nov 25 09:18:59 compute-0 nova_compute[253538]: 2025-11-25 09:18:59.413 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:18:59 compute-0 nova_compute[253538]: 2025-11-25 09:18:59.414 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:18:59 compute-0 nova_compute[253538]: 2025-11-25 09:18:59.438 253542 DEBUG oslo_concurrency.processutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:18:59 compute-0 ceph-mon[75015]: osdmap e269: 3 total, 3 up, 3 in
Nov 25 09:18:59 compute-0 ceph-mon[75015]: pgmap v2961: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.6 MiB/s wr, 90 op/s
Nov 25 09:18:59 compute-0 nova_compute[253538]: 2025-11-25 09:18:59.680 253542 DEBUG nova.network.neutron [req-09827419-15be-48f4-a6d0-c149c7ff5af3 req-6950896a-8ab7-40f9-bc58-c5cee0026aab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updated VIF entry in instance network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:18:59 compute-0 nova_compute[253538]: 2025-11-25 09:18:59.681 253542 DEBUG nova.network.neutron [req-09827419-15be-48f4-a6d0-c149c7ff5af3 req-6950896a-8ab7-40f9-bc58-c5cee0026aab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": null, "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:18:59 compute-0 nova_compute[253538]: 2025-11-25 09:18:59.707 253542 DEBUG oslo_concurrency.lockutils [req-09827419-15be-48f4-a6d0-c149c7ff5af3 req-6950896a-8ab7-40f9-bc58-c5cee0026aab b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:18:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:18:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2621795008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:18:59 compute-0 nova_compute[253538]: 2025-11-25 09:18:59.871 253542 DEBUG oslo_concurrency.processutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:18:59 compute-0 nova_compute[253538]: 2025-11-25 09:18:59.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:18:59 compute-0 nova_compute[253538]: 2025-11-25 09:18:59.880 253542 DEBUG nova.compute.provider_tree [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:18:59 compute-0 nova_compute[253538]: 2025-11-25 09:18:59.895 253542 DEBUG nova.scheduler.client.report [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:18:59 compute-0 nova_compute[253538]: 2025-11-25 09:18:59.913 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:18:59 compute-0 nova_compute[253538]: 2025-11-25 09:18:59.954 253542 DEBUG oslo_concurrency.lockutils [None req-c048a4d2-93dd-4b12-a5c7-ed35a63cad27 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 13.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:19:00 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2621795008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:19:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2962: 321 pgs: 321 active+clean; 225 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 99 op/s
Nov 25 09:19:00 compute-0 nova_compute[253538]: 2025-11-25 09:19:00.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:19:00 compute-0 nova_compute[253538]: 2025-11-25 09:19:00.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:19:00 compute-0 nova_compute[253538]: 2025-11-25 09:19:00.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:19:00 compute-0 nova_compute[253538]: 2025-11-25 09:19:00.566 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:19:01 compute-0 ceph-mon[75015]: pgmap v2962: 321 pgs: 321 active+clean; 225 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 99 op/s
Nov 25 09:19:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2963: 321 pgs: 321 active+clean; 195 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Nov 25 09:19:02 compute-0 nova_compute[253538]: 2025-11-25 09:19:02.558 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:19:02 compute-0 nova_compute[253538]: 2025-11-25 09:19:02.720 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:19:02 compute-0 nova_compute[253538]: 2025-11-25 09:19:02.721 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:19:02 compute-0 nova_compute[253538]: 2025-11-25 09:19:02.721 253542 INFO nova.compute.manager [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Unshelving
Nov 25 09:19:02 compute-0 nova_compute[253538]: 2025-11-25 09:19:02.831 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:19:02 compute-0 nova_compute[253538]: 2025-11-25 09:19:02.832 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:19:02 compute-0 nova_compute[253538]: 2025-11-25 09:19:02.837 253542 DEBUG nova.objects.instance [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:19:02 compute-0 nova_compute[253538]: 2025-11-25 09:19:02.854 253542 DEBUG nova.objects.instance [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:19:02 compute-0 nova_compute[253538]: 2025-11-25 09:19:02.865 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:19:02 compute-0 nova_compute[253538]: 2025-11-25 09:19:02.866 253542 INFO nova.compute.claims [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:19:02 compute-0 nova_compute[253538]: 2025-11-25 09:19:02.973 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:19:03 compute-0 nova_compute[253538]: 2025-11-25 09:19:03.338 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:19:03 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/307464301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:19:03 compute-0 nova_compute[253538]: 2025-11-25 09:19:03.419 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:19:03 compute-0 nova_compute[253538]: 2025-11-25 09:19:03.427 253542 DEBUG nova.compute.provider_tree [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:19:03 compute-0 nova_compute[253538]: 2025-11-25 09:19:03.442 253542 DEBUG nova.scheduler.client.report [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:19:03 compute-0 nova_compute[253538]: 2025-11-25 09:19:03.462 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:19:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:19:03 compute-0 nova_compute[253538]: 2025-11-25 09:19:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:19:03 compute-0 nova_compute[253538]: 2025-11-25 09:19:03.585 253542 INFO nova.network.neutron [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating port 9d78d6ba-3489-4cfd-ae33-9166be3f940c with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 25 09:19:03 compute-0 ceph-mon[75015]: pgmap v2963: 321 pgs: 321 active+clean; 195 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Nov 25 09:19:03 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/307464301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:19:04 compute-0 nova_compute[253538]: 2025-11-25 09:19:03.999 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764062328.9989727, 282b7217-4c1e-4a42-b3da-05616f4e1da3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:19:04 compute-0 nova_compute[253538]: 2025-11-25 09:19:04.000 253542 INFO nova.compute.manager [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] VM Stopped (Lifecycle Event)
Nov 25 09:19:04 compute-0 nova_compute[253538]: 2025-11-25 09:19:04.017 253542 DEBUG nova.compute.manager [None req-eeb986e4-5a10-407c-8af3-57f704a1782d - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:19:04 compute-0 nova_compute[253538]: 2025-11-25 09:19:04.202 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:19:04 compute-0 nova_compute[253538]: 2025-11-25 09:19:04.203 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquired lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:19:04 compute-0 nova_compute[253538]: 2025-11-25 09:19:04.203 253542 DEBUG nova.network.neutron [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:19:04 compute-0 nova_compute[253538]: 2025-11-25 09:19:04.308 253542 DEBUG nova.compute.manager [req-2f320d5b-fe0f-4318-aed8-254fa32f9757 req-59257299-084b-4ac4-b717-a3857312bc95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:19:04 compute-0 nova_compute[253538]: 2025-11-25 09:19:04.309 253542 DEBUG nova.compute.manager [req-2f320d5b-fe0f-4318-aed8-254fa32f9757 req-59257299-084b-4ac4-b717-a3857312bc95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing instance network info cache due to event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:19:04 compute-0 nova_compute[253538]: 2025-11-25 09:19:04.309 253542 DEBUG oslo_concurrency.lockutils [req-2f320d5b-fe0f-4318-aed8-254fa32f9757 req-59257299-084b-4ac4-b717-a3857312bc95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2964: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 989 KiB/s rd, 761 KiB/s wr, 59 op/s
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001770937622094159 of space, bias 1.0, pg target 0.5312812866282477 quantized to 32 (current 32)
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:19:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:19:04 compute-0 nova_compute[253538]: 2025-11-25 09:19:04.879 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:05 compute-0 ceph-mon[75015]: pgmap v2964: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 989 KiB/s rd, 761 KiB/s wr, 59 op/s
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.056 253542 DEBUG nova.network.neutron [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.165 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Releasing lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.167 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.167 253542 INFO nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Creating image(s)
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.194 253542 DEBUG nova.storage.rbd_utils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.198 253542 DEBUG nova.objects.instance [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.199 253542 DEBUG oslo_concurrency.lockutils [req-2f320d5b-fe0f-4318-aed8-254fa32f9757 req-59257299-084b-4ac4-b717-a3857312bc95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.200 253542 DEBUG nova.network.neutron [req-2f320d5b-fe0f-4318-aed8-254fa32f9757 req-59257299-084b-4ac4-b717-a3857312bc95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.246 253542 DEBUG nova.storage.rbd_utils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.282 253542 DEBUG nova.storage.rbd_utils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.286 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "2702c9a3c09b88045258aad233b10ea7fa141e76" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.287 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "2702c9a3c09b88045258aad233b10ea7fa141e76" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:19:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2965: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.563 253542 DEBUG nova.virt.libvirt.imagebackend [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Image locations are: [{'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/d40366df-73ef-47e3-8169-d7bf9c208bbb/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/d40366df-73ef-47e3-8169-d7bf9c208bbb/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.626 253542 DEBUG nova.virt.libvirt.imagebackend [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Selected location: {'url': 'rbd://a058ea16-8b73-51e1-b172-ed66107102bf/images/d40366df-73ef-47e3-8169-d7bf9c208bbb/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.628 253542 DEBUG nova.storage.rbd_utils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] cloning images/d40366df-73ef-47e3-8169-d7bf9c208bbb@snap to None/282b7217-4c1e-4a42-b3da-05616f4e1da3_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.742 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "2702c9a3c09b88045258aad233b10ea7fa141e76" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.868 253542 DEBUG nova.objects.instance [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'migration_context' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:19:06 compute-0 nova_compute[253538]: 2025-11-25 09:19:06.917 253542 DEBUG nova.storage.rbd_utils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] flattening vms/282b7217-4c1e-4a42-b3da-05616f4e1da3_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 09:19:07 compute-0 ceph-mon[75015]: pgmap v2965: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.970 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Image rbd:vms/282b7217-4c1e-4a42-b3da-05616f4e1da3_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.970 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.971 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Ensure instance console log exists: /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.971 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.971 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.972 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.973 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Start _get_guest_xml network_info=[{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T09:18:46Z,direct_url=<?>,disk_format='raw',id=d40366df-73ef-47e3-8169-d7bf9c208bbb,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-853165821-shelved',owner='41c67820b40a4185a60c4245f9c43ef5',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T09:18:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.978 253542 WARNING nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.983 253542 DEBUG nova.virt.libvirt.host [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.984 253542 DEBUG nova.virt.libvirt.host [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.987 253542 DEBUG nova.virt.libvirt.host [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.988 253542 DEBUG nova.virt.libvirt.host [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.988 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.988 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T09:18:46Z,direct_url=<?>,disk_format='raw',id=d40366df-73ef-47e3-8169-d7bf9c208bbb,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-853165821-shelved',owner='41c67820b40a4185a60c4245f9c43ef5',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T09:18:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.989 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.989 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.989 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.989 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.990 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.990 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.990 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.990 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.990 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.991 253542 DEBUG nova.virt.hardware [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:19:07 compute-0 nova_compute[253538]: 2025-11-25 09:19:07.991 253542 DEBUG nova.objects.instance [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.004 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.308 253542 DEBUG nova.network.neutron [req-2f320d5b-fe0f-4318-aed8-254fa32f9757 req-59257299-084b-4ac4-b717-a3857312bc95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updated VIF entry in instance network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.309 253542 DEBUG nova.network.neutron [req-2f320d5b-fe0f-4318-aed8-254fa32f9757 req-59257299-084b-4ac4-b717-a3857312bc95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.335 253542 DEBUG oslo_concurrency.lockutils [req-2f320d5b-fe0f-4318-aed8-254fa32f9757 req-59257299-084b-4ac4-b717-a3857312bc95 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.340 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:19:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2466850822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:19:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.484 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.503 253542 DEBUG nova.storage.rbd_utils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.506 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:19:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2966: 321 pgs: 321 active+clean; 201 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 67 op/s
Nov 25 09:19:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2466850822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:19:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:19:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4089369585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.935 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.937 253542 DEBUG nova.virt.libvirt.vif [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T09:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-853165821',display_name='tempest-TestShelveInstance-server-853165821',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-853165821',id=153,image_ref='d40366df-73ef-47e3-8169-d7bf9c208bbb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-83615415',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:18:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='41c67820b40a4185a60c4245f9c43ef5',ramdisk_id='',reservation_id='r-5ikkcbmq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1867415308',owner_user_name='tempest-TestShelveInstance-1867415308-project-member',shelved_at='2025-11-25T09:18:54.215428',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='d40366df-73ef-47e3-8169-d7bf9c208bbb'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:19:02Z,user_data=None,user_id='a68fbd2f756d42aa982630f3a41f0a1f',uuid=282b7217-4c1e-4a42-b3da-05616f4e1da3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.937 253542 DEBUG nova.network.os_vif_util [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converting VIF {"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.938 253542 DEBUG nova.network.os_vif_util [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.939 253542 DEBUG nova.objects.instance [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.963 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:19:08 compute-0 nova_compute[253538]:   <uuid>282b7217-4c1e-4a42-b3da-05616f4e1da3</uuid>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   <name>instance-00000099</name>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <nova:name>tempest-TestShelveInstance-server-853165821</nova:name>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:19:07</nova:creationTime>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:19:08 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:19:08 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:19:08 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:19:08 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:19:08 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:19:08 compute-0 nova_compute[253538]:         <nova:user uuid="a68fbd2f756d42aa982630f3a41f0a1f">tempest-TestShelveInstance-1867415308-project-member</nova:user>
Nov 25 09:19:08 compute-0 nova_compute[253538]:         <nova:project uuid="41c67820b40a4185a60c4245f9c43ef5">tempest-TestShelveInstance-1867415308</nova:project>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="d40366df-73ef-47e3-8169-d7bf9c208bbb"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <nova:ports>
Nov 25 09:19:08 compute-0 nova_compute[253538]:         <nova:port uuid="9d78d6ba-3489-4cfd-ae33-9166be3f940c">
Nov 25 09:19:08 compute-0 nova_compute[253538]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:         </nova:port>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       </nova:ports>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <system>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <entry name="serial">282b7217-4c1e-4a42-b3da-05616f4e1da3</entry>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <entry name="uuid">282b7217-4c1e-4a42-b3da-05616f4e1da3</entry>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     </system>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   <os>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   </os>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   <features>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   </features>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/282b7217-4c1e-4a42-b3da-05616f4e1da3_disk">
Nov 25 09:19:08 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       </source>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:19:08 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config">
Nov 25 09:19:08 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       </source>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:19:08 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <interface type="ethernet">
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <mac address="fa:16:3e:eb:22:67"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <mtu size="1442"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <target dev="tap9d78d6ba-34"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     </interface>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/console.log" append="off"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <video>
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     </video>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <input type="keyboard" bus="usb"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:19:08 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:19:08 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:19:08 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:19:08 compute-0 nova_compute[253538]: </domain>
Nov 25 09:19:08 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.964 253542 DEBUG nova.compute.manager [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Preparing to wait for external event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.964 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.964 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.964 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.965 253542 DEBUG nova.virt.libvirt.vif [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T09:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-853165821',display_name='tempest-TestShelveInstance-server-853165821',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-853165821',id=153,image_ref='d40366df-73ef-47e3-8169-d7bf9c208bbb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-83615415',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:18:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='41c67820b40a4185a60c4245f9c43ef5',ramdisk_id='',reservation_id='r-5ikkcbmq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',
image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1867415308',owner_user_name='tempest-TestShelveInstance-1867415308-project-member',shelved_at='2025-11-25T09:18:54.215428',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='d40366df-73ef-47e3-8169-d7bf9c208bbb'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:19:02Z,user_data=None,user_id='a68fbd2f756d42aa982630f3a41f0a1f',uuid=282b7217-4c1e-4a42-b3da-05616f4e1da3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.965 253542 DEBUG nova.network.os_vif_util [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converting VIF {"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.965 253542 DEBUG nova.network.os_vif_util [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.966 253542 DEBUG os_vif [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.966 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.966 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.967 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.970 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.970 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d78d6ba-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.971 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9d78d6ba-34, col_values=(('external_ids', {'iface-id': '9d78d6ba-3489-4cfd-ae33-9166be3f940c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:22:67', 'vm-uuid': '282b7217-4c1e-4a42-b3da-05616f4e1da3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.972 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:08 compute-0 NetworkManager[48915]: <info>  [1764062348.9731] manager: (tap9d78d6ba-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/674)
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.974 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.977 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:08 compute-0 nova_compute[253538]: 2025-11-25 09:19:08.977 253542 INFO os_vif [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34')
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.037 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.037 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.037 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] No VIF found with MAC fa:16:3e:eb:22:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.038 253542 INFO nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Using config drive
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.059 253542 DEBUG nova.storage.rbd_utils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.074 253542 DEBUG nova.objects.instance [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.108 253542 DEBUG nova.objects.instance [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'keypairs' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.424 253542 INFO nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Creating config drive at /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.428 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2d4z3lxa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.595 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2d4z3lxa" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.621 253542 DEBUG nova.storage.rbd_utils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] rbd image 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.624 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:19:09 compute-0 ceph-mon[75015]: pgmap v2966: 321 pgs: 321 active+clean; 201 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 67 op/s
Nov 25 09:19:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4089369585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.804 253542 DEBUG oslo_concurrency.processutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config 282b7217-4c1e-4a42-b3da-05616f4e1da3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.806 253542 INFO nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Deleting local config drive /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3/disk.config because it was imported into RBD.
Nov 25 09:19:09 compute-0 kernel: tap9d78d6ba-34: entered promiscuous mode
Nov 25 09:19:09 compute-0 NetworkManager[48915]: <info>  [1764062349.8638] manager: (tap9d78d6ba-34): new Tun device (/org/freedesktop/NetworkManager/Devices/675)
Nov 25 09:19:09 compute-0 ovn_controller[152859]: 2025-11-25T09:19:09Z|01643|binding|INFO|Claiming lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c for this chassis.
Nov 25 09:19:09 compute-0 ovn_controller[152859]: 2025-11-25T09:19:09Z|01644|binding|INFO|9d78d6ba-3489-4cfd-ae33-9166be3f940c: Claiming fa:16:3e:eb:22:67 10.100.0.14
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.865 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.874 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:22:67 10.100.0.14'], port_security=['fa:16:3e:eb:22:67 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '282b7217-4c1e-4a42-b3da-05616f4e1da3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41c67820b40a4185a60c4245f9c43ef5', 'neutron:revision_number': '7', 'neutron:security_group_ids': '5b17be75-81bb-4f4d-9234-5572279d07c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3ce9783-7822-423e-a3be-85165987da53, chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9d78d6ba-3489-4cfd-ae33-9166be3f940c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:19:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.875 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9d78d6ba-3489-4cfd-ae33-9166be3f940c in datapath 26d70c6d-e66b-4570-a7d7-11486a935ed8 bound to our chassis
Nov 25 09:19:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.876 162739 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26d70c6d-e66b-4570-a7d7-11486a935ed8
Nov 25 09:19:09 compute-0 ovn_controller[152859]: 2025-11-25T09:19:09Z|01645|binding|INFO|Setting lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c ovn-installed in OVS
Nov 25 09:19:09 compute-0 ovn_controller[152859]: 2025-11-25T09:19:09Z|01646|binding|INFO|Setting lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c up in Southbound
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.884 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:09 compute-0 nova_compute[253538]: 2025-11-25 09:19:09.887 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.889 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[8f4a6f30-511b-4e84-a7b2-afad8be5ee26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.890 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26d70c6d-e1 in ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:19:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.893 269370 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26d70c6d-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:19:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.893 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5a68f1-540e-4e13-aaeb-cb1a1a2e71d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.894 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[37ad09d5-ea13-474b-af62-fd50820e094e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:09 compute-0 systemd-udevd[421999]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:19:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.909 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[22b6c6d8-8e25-4137-a5eb-c5466f601a77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:09 compute-0 NetworkManager[48915]: <info>  [1764062349.9129] device (tap9d78d6ba-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:19:09 compute-0 systemd-machined[215790]: New machine qemu-186-instance-00000099.
Nov 25 09:19:09 compute-0 NetworkManager[48915]: <info>  [1764062349.9139] device (tap9d78d6ba-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:19:09 compute-0 systemd[1]: Started Virtual Machine qemu-186-instance-00000099.
Nov 25 09:19:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.928 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcbfb19-27ea-4105-ad01-d332d76961ee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.969 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[2724e08b-62bc-42b0-9d61-74ac5977760b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:09 compute-0 systemd-udevd[422003]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:19:09 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:09.976 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[75acd2d6-ab81-43e7-94dd-c494406b11a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:09 compute-0 NetworkManager[48915]: <info>  [1764062349.9777] manager: (tap26d70c6d-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/676)
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.019 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[84bceb90-6d30-499e-94f7-50e28fd92ce2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.023 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9a90f188-e04c-4d94-9c95-07739df7e3d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:10 compute-0 NetworkManager[48915]: <info>  [1764062350.0528] device (tap26d70c6d-e0): carrier: link connected
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.058 269560 DEBUG oslo.privsep.daemon [-] privsep: reply[9f06a2b6-3fe6-4d24-9320-bc6381fc4cc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.078 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[19374641-4be7-4a83-8aab-625f1710ec54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26d70c6d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:90:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770465, 'reachable_time': 21033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422032, 'error': None, 'target': 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.096 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b7610ec1-d72f-41b1-8ce3-bed113229400]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8a:9093'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 770465, 'tstamp': 770465}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422033, 'error': None, 'target': 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.112 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ccf5f83-5d30-468f-a697-806ea81193ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26d70c6d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:90:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770465, 'reachable_time': 21033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 422034, 'error': None, 'target': 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.150 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2e25d807-2d31-487b-8782-6d8e8594a628]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.205 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2226d01a-fd17-4953-a224-6bbdbc50df3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.207 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d70c6d-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.207 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.207 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26d70c6d-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.209 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:10 compute-0 NetworkManager[48915]: <info>  [1764062350.2097] manager: (tap26d70c6d-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/677)
Nov 25 09:19:10 compute-0 kernel: tap26d70c6d-e0: entered promiscuous mode
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.211 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26d70c6d-e0, col_values=(('external_ids', {'iface-id': '49a3f274-19b1-4763-bafc-281fe099299b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.212 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:10 compute-0 ovn_controller[152859]: 2025-11-25T09:19:10Z|01647|binding|INFO|Releasing lport 49a3f274-19b1-4763-bafc-281fe099299b from this chassis (sb_readonly=0)
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.228 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.229 162739 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26d70c6d-e66b-4570-a7d7-11486a935ed8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26d70c6d-e66b-4570-a7d7-11486a935ed8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.230 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[2324c13a-8e36-48a9-a644-dd81c06c27e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.230 162739 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: global
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     log         /dev/log local0 debug
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     log-tag     haproxy-metadata-proxy-26d70c6d-e66b-4570-a7d7-11486a935ed8
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     user        root
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     group       root
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     maxconn     1024
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     pidfile     /var/lib/neutron/external/pids/26d70c6d-e66b-4570-a7d7-11486a935ed8.pid.haproxy
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     daemon
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: defaults
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     log global
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     mode http
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     option httplog
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     option dontlognull
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     option http-server-close
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     option forwardfor
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     retries                 3
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     timeout http-request    30s
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     timeout connect         30s
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     timeout client          32s
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     timeout server          32s
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     timeout http-keep-alive 30s
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: listen listener
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     bind 169.254.169.254:80
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:     http-request add-header X-OVN-Network-ID 26d70c6d-e66b-4570-a7d7-11486a935ed8
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:19:10 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:10.231 162739 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'env', 'PROCESS_TAG=haproxy-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26d70c6d-e66b-4570-a7d7-11486a935ed8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.325 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062350.3252456, 282b7217-4c1e-4a42-b3da-05616f4e1da3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.326 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] VM Started (Lifecycle Event)
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.346 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.352 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062350.326644, 282b7217-4c1e-4a42-b3da-05616f4e1da3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.352 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] VM Paused (Lifecycle Event)
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.368 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.372 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.388 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:19:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2967: 321 pgs: 321 active+clean; 220 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 77 op/s
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.569 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:19:10 compute-0 podman[422107]: 2025-11-25 09:19:10.567497623 +0000 UTC m=+0.024585720 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:19:10 compute-0 podman[422107]: 2025-11-25 09:19:10.942633981 +0000 UTC m=+0.399722068 container create 59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.975 253542 DEBUG nova.compute.manager [req-a32ca9eb-3073-4741-bca3-cc38e41cd82f req-b7bfc538-f76e-4366-b6be-d4f66216054e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.976 253542 DEBUG oslo_concurrency.lockutils [req-a32ca9eb-3073-4741-bca3-cc38e41cd82f req-b7bfc538-f76e-4366-b6be-d4f66216054e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.976 253542 DEBUG oslo_concurrency.lockutils [req-a32ca9eb-3073-4741-bca3-cc38e41cd82f req-b7bfc538-f76e-4366-b6be-d4f66216054e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.976 253542 DEBUG oslo_concurrency.lockutils [req-a32ca9eb-3073-4741-bca3-cc38e41cd82f req-b7bfc538-f76e-4366-b6be-d4f66216054e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.977 253542 DEBUG nova.compute.manager [req-a32ca9eb-3073-4741-bca3-cc38e41cd82f req-b7bfc538-f76e-4366-b6be-d4f66216054e b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Processing event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.977 253542 DEBUG nova.compute.manager [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.982 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062350.981649, 282b7217-4c1e-4a42-b3da-05616f4e1da3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.983 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] VM Resumed (Lifecycle Event)
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.984 253542 DEBUG nova.virt.libvirt.driver [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.987 253542 INFO nova.virt.libvirt.driver [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance spawned successfully.
Nov 25 09:19:10 compute-0 nova_compute[253538]: 2025-11-25 09:19:10.997 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:19:11 compute-0 nova_compute[253538]: 2025-11-25 09:19:11.000 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:19:11 compute-0 nova_compute[253538]: 2025-11-25 09:19:11.014 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:19:11 compute-0 ceph-mon[75015]: pgmap v2967: 321 pgs: 321 active+clean; 220 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 77 op/s
Nov 25 09:19:11 compute-0 systemd[1]: Started libpod-conmon-59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f.scope.
Nov 25 09:19:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:19:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c1e777106289c23b7a822a87608b70a386d112d661b25f06e481127e86f854e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:19:11 compute-0 podman[422107]: 2025-11-25 09:19:11.168014928 +0000 UTC m=+0.625103055 container init 59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:19:11 compute-0 podman[422107]: 2025-11-25 09:19:11.1754921 +0000 UTC m=+0.632580207 container start 59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 09:19:11 compute-0 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[422122]: [NOTICE]   (422126) : New worker (422128) forked
Nov 25 09:19:11 compute-0 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[422122]: [NOTICE]   (422126) : Loading success.
Nov 25 09:19:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Nov 25 09:19:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Nov 25 09:19:12 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Nov 25 09:19:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2969: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 112 op/s
Nov 25 09:19:12 compute-0 nova_compute[253538]: 2025-11-25 09:19:12.806 253542 DEBUG nova.compute.manager [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:19:12 compute-0 nova_compute[253538]: 2025-11-25 09:19:12.869 253542 DEBUG oslo_concurrency.lockutils [None req-44cee0f2-669b-4ed1-b8eb-7bcd0fab10ee a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:19:13 compute-0 nova_compute[253538]: 2025-11-25 09:19:13.040 253542 DEBUG nova.compute.manager [req-f0f8d291-386b-4271-a5d2-b444c4e92894 req-4885265b-7930-4be3-9415-c6f1c418a90b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:19:13 compute-0 nova_compute[253538]: 2025-11-25 09:19:13.041 253542 DEBUG oslo_concurrency.lockutils [req-f0f8d291-386b-4271-a5d2-b444c4e92894 req-4885265b-7930-4be3-9415-c6f1c418a90b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:19:13 compute-0 nova_compute[253538]: 2025-11-25 09:19:13.041 253542 DEBUG oslo_concurrency.lockutils [req-f0f8d291-386b-4271-a5d2-b444c4e92894 req-4885265b-7930-4be3-9415-c6f1c418a90b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:19:13 compute-0 nova_compute[253538]: 2025-11-25 09:19:13.041 253542 DEBUG oslo_concurrency.lockutils [req-f0f8d291-386b-4271-a5d2-b444c4e92894 req-4885265b-7930-4be3-9415-c6f1c418a90b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:19:13 compute-0 nova_compute[253538]: 2025-11-25 09:19:13.041 253542 DEBUG nova.compute.manager [req-f0f8d291-386b-4271-a5d2-b444c4e92894 req-4885265b-7930-4be3-9415-c6f1c418a90b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] No waiting events found dispatching network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:19:13 compute-0 nova_compute[253538]: 2025-11-25 09:19:13.042 253542 WARNING nova.compute.manager [req-f0f8d291-386b-4271-a5d2-b444c4e92894 req-4885265b-7930-4be3-9415-c6f1c418a90b b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received unexpected event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c for instance with vm_state active and task_state None.
Nov 25 09:19:13 compute-0 ceph-mon[75015]: osdmap e270: 3 total, 3 up, 3 in
Nov 25 09:19:13 compute-0 ceph-mon[75015]: pgmap v2969: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 112 op/s
Nov 25 09:19:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:19:13 compute-0 nova_compute[253538]: 2025-11-25 09:19:13.972 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2970: 321 pgs: 321 active+clean; 226 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 4.7 MiB/s wr, 135 op/s
Nov 25 09:19:14 compute-0 nova_compute[253538]: 2025-11-25 09:19:14.888 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:14 compute-0 ceph-mon[75015]: pgmap v2970: 321 pgs: 321 active+clean; 226 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 4.7 MiB/s wr, 135 op/s
Nov 25 09:19:15 compute-0 sshd-session[422138]: Invalid user oracle from 193.32.162.151 port 53408
Nov 25 09:19:15 compute-0 nova_compute[253538]: 2025-11-25 09:19:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:19:15 compute-0 nova_compute[253538]: 2025-11-25 09:19:15.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:19:15 compute-0 nova_compute[253538]: 2025-11-25 09:19:15.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:19:15 compute-0 nova_compute[253538]: 2025-11-25 09:19:15.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:19:15 compute-0 nova_compute[253538]: 2025-11-25 09:19:15.576 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:19:15 compute-0 nova_compute[253538]: 2025-11-25 09:19:15.576 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:19:15 compute-0 sshd-session[422138]: Connection closed by invalid user oracle 193.32.162.151 port 53408 [preauth]
Nov 25 09:19:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:19:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3248222726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:19:16 compute-0 nova_compute[253538]: 2025-11-25 09:19:16.049 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:19:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3248222726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:19:16 compute-0 nova_compute[253538]: 2025-11-25 09:19:16.125 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:19:16 compute-0 nova_compute[253538]: 2025-11-25 09:19:16.126 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:19:16 compute-0 podman[422166]: 2025-11-25 09:19:16.149608637 +0000 UTC m=+0.054010309 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 09:19:16 compute-0 podman[422165]: 2025-11-25 09:19:16.190035116 +0000 UTC m=+0.094789468 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 25 09:19:16 compute-0 nova_compute[253538]: 2025-11-25 09:19:16.294 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:19:16 compute-0 nova_compute[253538]: 2025-11-25 09:19:16.295 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3427MB free_disk=59.94276428222656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:19:16 compute-0 nova_compute[253538]: 2025-11-25 09:19:16.295 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:19:16 compute-0 nova_compute[253538]: 2025-11-25 09:19:16.296 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:19:16 compute-0 nova_compute[253538]: 2025-11-25 09:19:16.361 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 282b7217-4c1e-4a42-b3da-05616f4e1da3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:19:16 compute-0 nova_compute[253538]: 2025-11-25 09:19:16.362 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:19:16 compute-0 nova_compute[253538]: 2025-11-25 09:19:16.362 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:19:16 compute-0 nova_compute[253538]: 2025-11-25 09:19:16.390 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:19:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2971: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Nov 25 09:19:16 compute-0 sshd-session[422140]: Invalid user cisco from 45.202.211.6 port 49496
Nov 25 09:19:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:19:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4239824916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:19:16 compute-0 nova_compute[253538]: 2025-11-25 09:19:16.867 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:19:16 compute-0 sshd-session[422140]: Received disconnect from 45.202.211.6 port 49496:11: Bye Bye [preauth]
Nov 25 09:19:16 compute-0 sshd-session[422140]: Disconnected from invalid user cisco 45.202.211.6 port 49496 [preauth]
Nov 25 09:19:16 compute-0 nova_compute[253538]: 2025-11-25 09:19:16.876 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:19:16 compute-0 nova_compute[253538]: 2025-11-25 09:19:16.895 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:19:16 compute-0 nova_compute[253538]: 2025-11-25 09:19:16.961 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:19:16 compute-0 nova_compute[253538]: 2025-11-25 09:19:16.962 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:19:17 compute-0 ceph-mon[75015]: pgmap v2971: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Nov 25 09:19:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4239824916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:19:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:19:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Nov 25 09:19:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Nov 25 09:19:18 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Nov 25 09:19:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2973: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 1.8 MiB/s wr, 185 op/s
Nov 25 09:19:18 compute-0 nova_compute[253538]: 2025-11-25 09:19:18.976 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:19 compute-0 ceph-mon[75015]: osdmap e271: 3 total, 3 up, 3 in
Nov 25 09:19:19 compute-0 ceph-mon[75015]: pgmap v2973: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 1.8 MiB/s wr, 185 op/s
Nov 25 09:19:19 compute-0 nova_compute[253538]: 2025-11-25 09:19:19.891 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2974: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 19 KiB/s wr, 127 op/s
Nov 25 09:19:21 compute-0 ceph-mon[75015]: pgmap v2974: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 19 KiB/s wr, 127 op/s
Nov 25 09:19:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2975: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 106 op/s
Nov 25 09:19:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:19:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:19:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:19:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:19:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:19:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:19:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:19:23 compute-0 ceph-mon[75015]: pgmap v2975: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 106 op/s
Nov 25 09:19:23 compute-0 nova_compute[253538]: 2025-11-25 09:19:23.979 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2976: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 16 KiB/s wr, 77 op/s
Nov 25 09:19:24 compute-0 nova_compute[253538]: 2025-11-25 09:19:24.893 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:24 compute-0 ceph-mon[75015]: pgmap v2976: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 16 KiB/s wr, 77 op/s
Nov 25 09:19:25 compute-0 ovn_controller[152859]: 2025-11-25T09:19:25Z|00213|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:22:67 10.100.0.14
Nov 25 09:19:25 compute-0 podman[422222]: 2025-11-25 09:19:25.831972823 +0000 UTC m=+0.085421492 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 09:19:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2977: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 15 KiB/s wr, 38 op/s
Nov 25 09:19:27 compute-0 ceph-mon[75015]: pgmap v2977: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 15 KiB/s wr, 38 op/s
Nov 25 09:19:27 compute-0 ovn_controller[152859]: 2025-11-25T09:19:27Z|01648|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 25 09:19:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:19:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2978: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 630 KiB/s rd, 16 KiB/s wr, 50 op/s
Nov 25 09:19:28 compute-0 nova_compute[253538]: 2025-11-25 09:19:28.983 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:19:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3341607574' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:19:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:19:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3341607574' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:19:29 compute-0 ceph-mon[75015]: pgmap v2978: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 630 KiB/s rd, 16 KiB/s wr, 50 op/s
Nov 25 09:19:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3341607574' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:19:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3341607574' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:19:29 compute-0 nova_compute[253538]: 2025-11-25 09:19:29.895 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2979: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 539 KiB/s rd, 15 KiB/s wr, 44 op/s
Nov 25 09:19:31 compute-0 ceph-mon[75015]: pgmap v2979: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 539 KiB/s rd, 15 KiB/s wr, 44 op/s
Nov 25 09:19:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2980: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 539 KiB/s rd, 23 KiB/s wr, 45 op/s
Nov 25 09:19:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:19:33 compute-0 ceph-mon[75015]: pgmap v2980: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 539 KiB/s rd, 23 KiB/s wr, 45 op/s
Nov 25 09:19:33 compute-0 nova_compute[253538]: 2025-11-25 09:19:33.987 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2981: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 539 KiB/s rd, 23 KiB/s wr, 45 op/s
Nov 25 09:19:34 compute-0 nova_compute[253538]: 2025-11-25 09:19:34.897 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:35 compute-0 ceph-mon[75015]: pgmap v2981: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 539 KiB/s rd, 23 KiB/s wr, 45 op/s
Nov 25 09:19:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2982: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 497 KiB/s rd, 26 KiB/s wr, 40 op/s
Nov 25 09:19:37 compute-0 ceph-mon[75015]: pgmap v2982: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 497 KiB/s rd, 26 KiB/s wr, 40 op/s
Nov 25 09:19:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:19:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2983: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 14 KiB/s wr, 13 op/s
Nov 25 09:19:38 compute-0 sudo[422248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:19:38 compute-0 sudo[422248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:38 compute-0 sudo[422248]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:38 compute-0 sudo[422273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:19:38 compute-0 sudo[422273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:38 compute-0 sudo[422273]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:38 compute-0 sudo[422298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:19:38 compute-0 sudo[422298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:38 compute-0 sudo[422298]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:38 compute-0 sudo[422323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:19:38 compute-0 sudo[422323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:38 compute-0 nova_compute[253538]: 2025-11-25 09:19:38.989 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:39 compute-0 sudo[422323]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:39 compute-0 sudo[422379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:19:39 compute-0 sudo[422379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:39 compute-0 sudo[422379]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:39 compute-0 sudo[422404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:19:39 compute-0 sudo[422404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:39 compute-0 sudo[422404]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:39 compute-0 sudo[422429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:19:39 compute-0 sudo[422429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:39 compute-0 sudo[422429]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:39 compute-0 sudo[422454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 25 09:19:39 compute-0 sudo[422454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:39 compute-0 ceph-mon[75015]: pgmap v2983: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 14 KiB/s wr, 13 op/s
Nov 25 09:19:39 compute-0 sudo[422454]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:19:39 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:19:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:19:39 compute-0 nova_compute[253538]: 2025-11-25 09:19:39.898 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:39 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:19:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:19:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:19:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:19:39 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:19:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:19:39 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:19:39 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 0c9e4ba1-c56e-49dd-abfc-b120d358b84e does not exist
Nov 25 09:19:39 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c4c9d1bc-b424-4082-bbfe-d2598d757cb6 does not exist
Nov 25 09:19:39 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 8471738d-8e22-475f-84a4-264c6ed8d87c does not exist
Nov 25 09:19:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:19:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:19:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:19:39 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:19:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:19:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:19:39 compute-0 sudo[422497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:19:39 compute-0 sudo[422497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:39 compute-0 sudo[422497]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:40 compute-0 sudo[422522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:19:40 compute-0 sudo[422522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:40 compute-0 sudo[422522]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:40 compute-0 sudo[422547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:19:40 compute-0 sudo[422547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:40 compute-0 sudo[422547]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:40 compute-0 sudo[422572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:19:40 compute-0 sudo[422572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2984: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 21 KiB/s wr, 3 op/s
Nov 25 09:19:40 compute-0 podman[422638]: 2025-11-25 09:19:40.565735609 +0000 UTC m=+0.052054016 container create 9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:19:40 compute-0 systemd[1]: Started libpod-conmon-9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5.scope.
Nov 25 09:19:40 compute-0 podman[422638]: 2025-11-25 09:19:40.539809753 +0000 UTC m=+0.026128150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:19:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:19:40 compute-0 podman[422638]: 2025-11-25 09:19:40.694013444 +0000 UTC m=+0.180331861 container init 9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:19:40 compute-0 podman[422638]: 2025-11-25 09:19:40.701598041 +0000 UTC m=+0.187916478 container start 9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:19:40 compute-0 competent_ptolemy[422654]: 167 167
Nov 25 09:19:40 compute-0 systemd[1]: libpod-9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5.scope: Deactivated successfully.
Nov 25 09:19:40 compute-0 podman[422638]: 2025-11-25 09:19:40.769821085 +0000 UTC m=+0.256139482 container attach 9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:19:40 compute-0 podman[422638]: 2025-11-25 09:19:40.770331039 +0000 UTC m=+0.256649436 container died 9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-0058601d3ee8d9d487e3f3d083deba8504aad63d918e9c2616c9d2d4102a4b5c-merged.mount: Deactivated successfully.
Nov 25 09:19:40 compute-0 podman[422638]: 2025-11-25 09:19:40.853077068 +0000 UTC m=+0.339395465 container remove 9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:19:40 compute-0 systemd[1]: libpod-conmon-9165c5030731bb1891f13bc4d6eee4e462baf287067e2d0beb693c17de7c51d5.scope: Deactivated successfully.
Nov 25 09:19:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:19:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:19:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:19:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:19:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:19:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:19:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:19:40 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:19:40 compute-0 ceph-mon[75015]: pgmap v2984: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 21 KiB/s wr, 3 op/s
Nov 25 09:19:41 compute-0 podman[422682]: 2025-11-25 09:19:41.090183763 +0000 UTC m=+0.091942720 container create ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tesla, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Nov 25 09:19:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:41.104 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:19:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:41.105 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:19:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:41.106 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:19:41 compute-0 podman[422682]: 2025-11-25 09:19:41.023697916 +0000 UTC m=+0.025456893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:19:41 compute-0 systemd[1]: Started libpod-conmon-ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c.scope.
Nov 25 09:19:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ae3d3b39563aa312204c86eb6289f885ecc8e11033dbf42f1a8beb7724f1b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ae3d3b39563aa312204c86eb6289f885ecc8e11033dbf42f1a8beb7724f1b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ae3d3b39563aa312204c86eb6289f885ecc8e11033dbf42f1a8beb7724f1b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ae3d3b39563aa312204c86eb6289f885ecc8e11033dbf42f1a8beb7724f1b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ae3d3b39563aa312204c86eb6289f885ecc8e11033dbf42f1a8beb7724f1b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:19:41 compute-0 podman[422682]: 2025-11-25 09:19:41.265510899 +0000 UTC m=+0.267269906 container init ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 09:19:41 compute-0 podman[422682]: 2025-11-25 09:19:41.277204078 +0000 UTC m=+0.278963055 container start ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 09:19:41 compute-0 podman[422682]: 2025-11-25 09:19:41.296948964 +0000 UTC m=+0.298707921 container attach ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tesla, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:19:42 compute-0 quizzical_tesla[422698]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:19:42 compute-0 quizzical_tesla[422698]: --> relative data size: 1.0
Nov 25 09:19:42 compute-0 quizzical_tesla[422698]: --> All data devices are unavailable
Nov 25 09:19:42 compute-0 systemd[1]: libpod-ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c.scope: Deactivated successfully.
Nov 25 09:19:42 compute-0 podman[422682]: 2025-11-25 09:19:42.345260808 +0000 UTC m=+1.347019775 container died ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tesla, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:19:42 compute-0 systemd[1]: libpod-ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c.scope: Consumed 1.009s CPU time.
Nov 25 09:19:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-91ae3d3b39563aa312204c86eb6289f885ecc8e11033dbf42f1a8beb7724f1b6-merged.mount: Deactivated successfully.
Nov 25 09:19:42 compute-0 podman[422682]: 2025-11-25 09:19:42.407199676 +0000 UTC m=+1.408958633 container remove ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tesla, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 09:19:42 compute-0 systemd[1]: libpod-conmon-ddf5a69f0cb2c5b52acb27669ce0db720a67b4ee5b8c5bd7bd5de3fc8143367c.scope: Deactivated successfully.
Nov 25 09:19:42 compute-0 sudo[422572]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:42 compute-0 sudo[422739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:19:42 compute-0 sudo[422739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:42 compute-0 sudo[422739]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2985: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s wr, 1 op/s
Nov 25 09:19:42 compute-0 sudo[422764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:19:42 compute-0 sudo[422764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:42 compute-0 sudo[422764]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:42 compute-0 sudo[422789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:19:42 compute-0 sudo[422789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:42 compute-0 sudo[422789]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:42 compute-0 sudo[422814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:19:42 compute-0 sudo[422814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:43 compute-0 podman[422876]: 2025-11-25 09:19:43.052459567 +0000 UTC m=+0.086192241 container create b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:19:43 compute-0 podman[422876]: 2025-11-25 09:19:42.989182373 +0000 UTC m=+0.022915087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:19:43 compute-0 systemd[1]: Started libpod-conmon-b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507.scope.
Nov 25 09:19:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:19:43 compute-0 podman[422876]: 2025-11-25 09:19:43.161237873 +0000 UTC m=+0.194970957 container init b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 09:19:43 compute-0 podman[422876]: 2025-11-25 09:19:43.170749922 +0000 UTC m=+0.204482596 container start b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 25 09:19:43 compute-0 nervous_blackwell[422892]: 167 167
Nov 25 09:19:43 compute-0 systemd[1]: libpod-b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507.scope: Deactivated successfully.
Nov 25 09:19:43 compute-0 podman[422876]: 2025-11-25 09:19:43.181652679 +0000 UTC m=+0.215385383 container attach b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:19:43 compute-0 podman[422876]: 2025-11-25 09:19:43.182696828 +0000 UTC m=+0.216429512 container died b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:19:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7afaa41cf58d7b32eb598cb0b9bd30540590ac4bcee3d399b6634895d472e1ff-merged.mount: Deactivated successfully.
Nov 25 09:19:43 compute-0 podman[422876]: 2025-11-25 09:19:43.397873445 +0000 UTC m=+0.431606129 container remove b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 09:19:43 compute-0 systemd[1]: libpod-conmon-b216db34ec1a0ce62644298f48d470a44ce14f38f8a06d779ff9434c5d31b507.scope: Deactivated successfully.
Nov 25 09:19:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:19:43 compute-0 podman[422916]: 2025-11-25 09:19:43.565026771 +0000 UTC m=+0.024010815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:19:43 compute-0 podman[422916]: 2025-11-25 09:19:43.65999764 +0000 UTC m=+0.118981664 container create e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:19:43 compute-0 ceph-mon[75015]: pgmap v2985: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s wr, 1 op/s
Nov 25 09:19:43 compute-0 systemd[1]: Started libpod-conmon-e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed.scope.
Nov 25 09:19:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265224fe9158640971946e0afef5654e8296ee8815e07f186539595e57b55207/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265224fe9158640971946e0afef5654e8296ee8815e07f186539595e57b55207/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265224fe9158640971946e0afef5654e8296ee8815e07f186539595e57b55207/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265224fe9158640971946e0afef5654e8296ee8815e07f186539595e57b55207/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:19:43 compute-0 podman[422916]: 2025-11-25 09:19:43.840424689 +0000 UTC m=+0.299408743 container init e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_golick, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 25 09:19:43 compute-0 podman[422916]: 2025-11-25 09:19:43.851870781 +0000 UTC m=+0.310854805 container start e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_golick, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:19:43 compute-0 podman[422916]: 2025-11-25 09:19:43.874683443 +0000 UTC m=+0.333667487 container attach e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_golick, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 09:19:43 compute-0 nova_compute[253538]: 2025-11-25 09:19:43.992 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2986: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s wr, 1 op/s
Nov 25 09:19:44 compute-0 unruffled_golick[422933]: {
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:     "0": [
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:         {
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "devices": [
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "/dev/loop3"
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             ],
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "lv_name": "ceph_lv0",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "lv_size": "21470642176",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "name": "ceph_lv0",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "tags": {
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.cluster_name": "ceph",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.crush_device_class": "",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.encrypted": "0",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.osd_id": "0",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.type": "block",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.vdo": "0"
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             },
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "type": "block",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "vg_name": "ceph_vg0"
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:         }
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:     ],
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:     "1": [
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:         {
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "devices": [
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "/dev/loop4"
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             ],
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "lv_name": "ceph_lv1",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "lv_size": "21470642176",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "name": "ceph_lv1",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "tags": {
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.cluster_name": "ceph",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.crush_device_class": "",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.encrypted": "0",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.osd_id": "1",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.type": "block",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.vdo": "0"
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             },
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "type": "block",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "vg_name": "ceph_vg1"
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:         }
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:     ],
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:     "2": [
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:         {
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "devices": [
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "/dev/loop5"
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             ],
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "lv_name": "ceph_lv2",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "lv_size": "21470642176",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "name": "ceph_lv2",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "tags": {
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.cluster_name": "ceph",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.crush_device_class": "",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.encrypted": "0",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.osd_id": "2",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.type": "block",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:                 "ceph.vdo": "0"
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             },
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "type": "block",
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:             "vg_name": "ceph_vg2"
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:         }
Nov 25 09:19:44 compute-0 unruffled_golick[422933]:     ]
Nov 25 09:19:44 compute-0 unruffled_golick[422933]: }
Nov 25 09:19:44 compute-0 systemd[1]: libpod-e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed.scope: Deactivated successfully.
Nov 25 09:19:44 compute-0 podman[422916]: 2025-11-25 09:19:44.622408718 +0000 UTC m=+1.081392742 container died e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_golick, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 09:19:44 compute-0 nova_compute[253538]: 2025-11-25 09:19:44.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:45 compute-0 ceph-mon[75015]: pgmap v2986: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s wr, 1 op/s
Nov 25 09:19:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-265224fe9158640971946e0afef5654e8296ee8815e07f186539595e57b55207-merged.mount: Deactivated successfully.
Nov 25 09:19:45 compute-0 podman[422916]: 2025-11-25 09:19:45.23984416 +0000 UTC m=+1.698828194 container remove e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 25 09:19:45 compute-0 systemd[1]: libpod-conmon-e16b2f43b88a69caa6a4e650730576c7a8b1be71b56176efe35ae25f86aeaaed.scope: Deactivated successfully.
Nov 25 09:19:45 compute-0 sudo[422814]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:45 compute-0 sudo[422954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:19:45 compute-0 sudo[422954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:45 compute-0 sudo[422954]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:45 compute-0 sudo[422979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:19:45 compute-0 sudo[422979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:45 compute-0 sudo[422979]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:45 compute-0 sudo[423004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:19:45 compute-0 sudo[423004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:45 compute-0 sudo[423004]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:45 compute-0 sudo[423029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:19:45 compute-0 sudo[423029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:45 compute-0 podman[423092]: 2025-11-25 09:19:45.915449938 +0000 UTC m=+0.036497486 container create 6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:19:45 compute-0 systemd[1]: Started libpod-conmon-6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385.scope.
Nov 25 09:19:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:19:45 compute-0 podman[423092]: 2025-11-25 09:19:45.898149797 +0000 UTC m=+0.019197335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:19:46 compute-0 podman[423092]: 2025-11-25 09:19:46.152860441 +0000 UTC m=+0.273907989 container init 6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 09:19:46 compute-0 podman[423092]: 2025-11-25 09:19:46.159761098 +0000 UTC m=+0.280808616 container start 6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:19:46 compute-0 wizardly_jackson[423108]: 167 167
Nov 25 09:19:46 compute-0 systemd[1]: libpod-6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385.scope: Deactivated successfully.
Nov 25 09:19:46 compute-0 podman[423092]: 2025-11-25 09:19:46.187102204 +0000 UTC m=+0.308149732 container attach 6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 09:19:46 compute-0 podman[423092]: 2025-11-25 09:19:46.188979776 +0000 UTC m=+0.310027334 container died 6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 09:19:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-27dacb2e904bd63a77d6b270b9310e457abdf91178b5af1e60c22124a10bdae4-merged.mount: Deactivated successfully.
Nov 25 09:19:46 compute-0 podman[423092]: 2025-11-25 09:19:46.265540912 +0000 UTC m=+0.386588450 container remove 6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:19:46 compute-0 systemd[1]: libpod-conmon-6cc5c4614d0f3bdef44fcec693ccb4f2a6940206ee5158e514162e6ec3fdb385.scope: Deactivated successfully.
Nov 25 09:19:46 compute-0 podman[423113]: 2025-11-25 09:19:46.305754379 +0000 UTC m=+0.110800642 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 25 09:19:46 compute-0 podman[423135]: 2025-11-25 09:19:46.325986801 +0000 UTC m=+0.060822140 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:19:46 compute-0 podman[423169]: 2025-11-25 09:19:46.459059888 +0000 UTC m=+0.062309179 container create 4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:19:46 compute-0 systemd[1]: Started libpod-conmon-4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8.scope.
Nov 25 09:19:46 compute-0 podman[423169]: 2025-11-25 09:19:46.436103342 +0000 UTC m=+0.039352673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:19:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:19:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33332d9c80f45b1bf790d432af2653e54a11da36e6a9e811117f7a2e026a9510/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:19:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33332d9c80f45b1bf790d432af2653e54a11da36e6a9e811117f7a2e026a9510/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:19:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33332d9c80f45b1bf790d432af2653e54a11da36e6a9e811117f7a2e026a9510/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:19:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33332d9c80f45b1bf790d432af2653e54a11da36e6a9e811117f7a2e026a9510/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:19:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2987: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 12 KiB/s wr, 2 op/s
Nov 25 09:19:46 compute-0 podman[423169]: 2025-11-25 09:19:46.561335307 +0000 UTC m=+0.164584618 container init 4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:19:46 compute-0 podman[423169]: 2025-11-25 09:19:46.569770587 +0000 UTC m=+0.173019878 container start 4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 09:19:46 compute-0 podman[423169]: 2025-11-25 09:19:46.572994125 +0000 UTC m=+0.176243416 container attach 4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:19:47 compute-0 brave_mclean[423186]: {
Nov 25 09:19:47 compute-0 brave_mclean[423186]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:19:47 compute-0 brave_mclean[423186]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:19:47 compute-0 brave_mclean[423186]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:19:47 compute-0 brave_mclean[423186]:         "osd_id": 1,
Nov 25 09:19:47 compute-0 brave_mclean[423186]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:19:47 compute-0 brave_mclean[423186]:         "type": "bluestore"
Nov 25 09:19:47 compute-0 brave_mclean[423186]:     },
Nov 25 09:19:47 compute-0 brave_mclean[423186]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:19:47 compute-0 brave_mclean[423186]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:19:47 compute-0 brave_mclean[423186]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:19:47 compute-0 brave_mclean[423186]:         "osd_id": 2,
Nov 25 09:19:47 compute-0 brave_mclean[423186]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:19:47 compute-0 brave_mclean[423186]:         "type": "bluestore"
Nov 25 09:19:47 compute-0 brave_mclean[423186]:     },
Nov 25 09:19:47 compute-0 brave_mclean[423186]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:19:47 compute-0 brave_mclean[423186]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:19:47 compute-0 brave_mclean[423186]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:19:47 compute-0 brave_mclean[423186]:         "osd_id": 0,
Nov 25 09:19:47 compute-0 brave_mclean[423186]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:19:47 compute-0 brave_mclean[423186]:         "type": "bluestore"
Nov 25 09:19:47 compute-0 brave_mclean[423186]:     }
Nov 25 09:19:47 compute-0 brave_mclean[423186]: }
Nov 25 09:19:47 compute-0 systemd[1]: libpod-4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8.scope: Deactivated successfully.
Nov 25 09:19:47 compute-0 podman[423169]: 2025-11-25 09:19:47.554618525 +0000 UTC m=+1.157867806 container died 4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:19:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-33332d9c80f45b1bf790d432af2653e54a11da36e6a9e811117f7a2e026a9510-merged.mount: Deactivated successfully.
Nov 25 09:19:47 compute-0 podman[423169]: 2025-11-25 09:19:47.629345743 +0000 UTC m=+1.232595034 container remove 4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 09:19:47 compute-0 ceph-mon[75015]: pgmap v2987: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 12 KiB/s wr, 2 op/s
Nov 25 09:19:47 compute-0 systemd[1]: libpod-conmon-4bed9fdd579f0f3eb008c5d66a5a0f83fed4b20af3b4f1324d18a2316a2360c8.scope: Deactivated successfully.
Nov 25 09:19:47 compute-0 sudo[423029]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:19:47 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:19:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:19:47 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:19:47 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 26c0a479-fc87-488a-8865-e25675579d21 does not exist
Nov 25 09:19:47 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev eb693f94-19b7-430c-9486-b3f11ad15642 does not exist
Nov 25 09:19:47 compute-0 sudo[423233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:19:47 compute-0 sudo[423233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:47 compute-0 sudo[423233]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:47 compute-0 sudo[423258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:19:47 compute-0 sudo[423258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:19:47 compute-0 sudo[423258]: pam_unix(sudo:session): session closed for user root
Nov 25 09:19:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:47.950 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:19:47 compute-0 nova_compute[253538]: 2025-11-25 09:19:47.951 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:47 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:47.952 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.183 253542 DEBUG nova.compute.manager [req-828f7a84-f713-4e60-b804-c65243625863 req-57f12959-50b9-46bb-82c4-d47139d0ca79 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.183 253542 DEBUG nova.compute.manager [req-828f7a84-f713-4e60-b804-c65243625863 req-57f12959-50b9-46bb-82c4-d47139d0ca79 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing instance network info cache due to event network-changed-9d78d6ba-3489-4cfd-ae33-9166be3f940c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.184 253542 DEBUG oslo_concurrency.lockutils [req-828f7a84-f713-4e60-b804-c65243625863 req-57f12959-50b9-46bb-82c4-d47139d0ca79 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.184 253542 DEBUG oslo_concurrency.lockutils [req-828f7a84-f713-4e60-b804-c65243625863 req-57f12959-50b9-46bb-82c4-d47139d0ca79 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquired lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.185 253542 DEBUG nova.network.neutron [req-828f7a84-f713-4e60-b804-c65243625863 req-57f12959-50b9-46bb-82c4-d47139d0ca79 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Refreshing network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.230 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.231 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.231 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.231 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.232 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.233 253542 INFO nova.compute.manager [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Terminating instance
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.234 253542 DEBUG nova.compute.manager [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:19:48 compute-0 kernel: tap9d78d6ba-34 (unregistering): left promiscuous mode
Nov 25 09:19:48 compute-0 NetworkManager[48915]: <info>  [1764062388.2936] device (tap9d78d6ba-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:19:48 compute-0 ovn_controller[152859]: 2025-11-25T09:19:48Z|01649|binding|INFO|Releasing lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c from this chassis (sb_readonly=0)
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.303 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:48 compute-0 ovn_controller[152859]: 2025-11-25T09:19:48Z|01650|binding|INFO|Setting lport 9d78d6ba-3489-4cfd-ae33-9166be3f940c down in Southbound
Nov 25 09:19:48 compute-0 ovn_controller[152859]: 2025-11-25T09:19:48Z|01651|binding|INFO|Removing iface tap9d78d6ba-34 ovn-installed in OVS
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.306 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.312 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:22:67 10.100.0.14'], port_security=['fa:16:3e:eb:22:67 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '282b7217-4c1e-4a42-b3da-05616f4e1da3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41c67820b40a4185a60c4245f9c43ef5', 'neutron:revision_number': '9', 'neutron:security_group_ids': '5b17be75-81bb-4f4d-9234-5572279d07c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3ce9783-7822-423e-a3be-85165987da53, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>], logical_port=9d78d6ba-3489-4cfd-ae33-9166be3f940c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f150fc07ee0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:19:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.313 162739 INFO neutron.agent.ovn.metadata.agent [-] Port 9d78d6ba-3489-4cfd-ae33-9166be3f940c in datapath 26d70c6d-e66b-4570-a7d7-11486a935ed8 unbound from our chassis
Nov 25 09:19:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.315 162739 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26d70c6d-e66b-4570-a7d7-11486a935ed8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:19:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.316 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[96ee1c1e-1891-46ff-853e-7f7e07df1a37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.316 162739 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 namespace which is not needed anymore
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.326 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:48 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000099.scope: Deactivated successfully.
Nov 25 09:19:48 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000099.scope: Consumed 14.566s CPU time.
Nov 25 09:19:48 compute-0 systemd-machined[215790]: Machine qemu-186-instance-00000099 terminated.
Nov 25 09:19:48 compute-0 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[422122]: [NOTICE]   (422126) : haproxy version is 2.8.14-c23fe91
Nov 25 09:19:48 compute-0 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[422122]: [NOTICE]   (422126) : path to executable is /usr/sbin/haproxy
Nov 25 09:19:48 compute-0 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[422122]: [WARNING]  (422126) : Exiting Master process...
Nov 25 09:19:48 compute-0 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[422122]: [WARNING]  (422126) : Exiting Master process...
Nov 25 09:19:48 compute-0 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[422122]: [ALERT]    (422126) : Current worker (422128) exited with code 143 (Terminated)
Nov 25 09:19:48 compute-0 neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8[422122]: [WARNING]  (422126) : All workers exited. Exiting... (0)
Nov 25 09:19:48 compute-0 systemd[1]: libpod-59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f.scope: Deactivated successfully.
Nov 25 09:19:48 compute-0 podman[423307]: 2025-11-25 09:19:48.457544542 +0000 UTC m=+0.049209773 container died 59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.458 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.463 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.473 253542 INFO nova.virt.libvirt.driver [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance destroyed successfully.
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.474 253542 DEBUG nova.objects.instance [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lazy-loading 'resources' on Instance uuid 282b7217-4c1e-4a42-b3da-05616f4e1da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:19:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:19:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f-userdata-shm.mount: Deactivated successfully.
Nov 25 09:19:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c1e777106289c23b7a822a87608b70a386d112d661b25f06e481127e86f854e-merged.mount: Deactivated successfully.
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.492 253542 DEBUG nova.virt.libvirt.vif [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T09:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-853165821',display_name='tempest-TestShelveInstance-server-853165821',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-853165821',id=153,image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNJQsCu4fQ3ll5Z4ZaGvMq+pPgiaY3EvL05ETUACffFb5NNPT58fZR5bxwEgYmFiG8knRhbPhzoHa6MYoWsZMDhMe0q2RDfQW/VzCu9RVlFpki+QcCgPNPr5WwLwdENig==',key_name='tempest-TestShelveInstance-83615415',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:19:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='41c67820b40a4185a60c4245f9c43ef5',ramdisk_id='',reservation_id='r-5ikkcbmq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1867415308',owner_user_name='tempest-TestShelveInstance-1867415308-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:19:12Z,user_data=None,user_id='a68fbd2f756d42aa982630f3a41f0a1f',uuid=282b7217-4c1e-4a42-b3da-05616f4e1da3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.493 253542 DEBUG nova.network.os_vif_util [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converting VIF {"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.493 253542 DEBUG nova.network.os_vif_util [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.494 253542 DEBUG os_vif [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.495 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.496 253542 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d78d6ba-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.497 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.498 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.501 253542 INFO os_vif [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:22:67,bridge_name='br-int',has_traffic_filtering=True,id=9d78d6ba-3489-4cfd-ae33-9166be3f940c,network=Network(26d70c6d-e66b-4570-a7d7-11486a935ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78d6ba-34')
Nov 25 09:19:48 compute-0 podman[423307]: 2025-11-25 09:19:48.502829506 +0000 UTC m=+0.094494737 container cleanup 59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:19:48 compute-0 systemd[1]: libpod-conmon-59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f.scope: Deactivated successfully.
Nov 25 09:19:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2988: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 123 KiB/s rd, 9.3 KiB/s wr, 5 op/s
Nov 25 09:19:48 compute-0 podman[423356]: 2025-11-25 09:19:48.56970184 +0000 UTC m=+0.046017347 container remove 59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:19:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.577 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[ce230150-cd84-4a4a-bd6b-8ee8d99b1635]: (4, ('Tue Nov 25 09:19:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 (59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f)\n59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f\nTue Nov 25 09:19:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 (59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f)\n59697db1d2110bdf8b4b7ad10f66ca81825c76e57228cc3dcf2c53833e7f5a2f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.582 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca1dcf2-17a5-4d54-9e1e-df4c10f0cd52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.583 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d70c6d-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:48 compute-0 kernel: tap26d70c6d-e0: left promiscuous mode
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.595 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[11326ed1-cdc5-40c8-beff-dc2eea33b5dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.602 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.616 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba7fbe0-0dd5-45b4-bf2c-9660be12f57e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.618 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[b43285b1-e626-44f9-95f7-c435578a6a1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.637 269370 DEBUG oslo.privsep.daemon [-] privsep: reply[11f87d3c-eb56-4431-831b-3d08667a498e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770455, 'reachable_time': 31865, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423378, 'error': None, 'target': 'ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d26d70c6d\x2de66b\x2d4570\x2da7d7\x2d11486a935ed8.mount: Deactivated successfully.
Nov 25 09:19:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.640 162852 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26d70c6d-e66b-4570-a7d7-11486a935ed8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:19:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:48.640 162852 DEBUG oslo.privsep.daemon [-] privsep: reply[810f0c11-fb03-4bbb-99ce-6ba4b3a2b2e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.683 253542 DEBUG nova.compute.manager [req-028e5bab-983f-46e4-8a12-c71776a6ec3c req-baeca2e0-7f9e-489c-8847-2a0bc6123f36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-unplugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.684 253542 DEBUG oslo_concurrency.lockutils [req-028e5bab-983f-46e4-8a12-c71776a6ec3c req-baeca2e0-7f9e-489c-8847-2a0bc6123f36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.684 253542 DEBUG oslo_concurrency.lockutils [req-028e5bab-983f-46e4-8a12-c71776a6ec3c req-baeca2e0-7f9e-489c-8847-2a0bc6123f36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.684 253542 DEBUG oslo_concurrency.lockutils [req-028e5bab-983f-46e4-8a12-c71776a6ec3c req-baeca2e0-7f9e-489c-8847-2a0bc6123f36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.684 253542 DEBUG nova.compute.manager [req-028e5bab-983f-46e4-8a12-c71776a6ec3c req-baeca2e0-7f9e-489c-8847-2a0bc6123f36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] No waiting events found dispatching network-vif-unplugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.684 253542 DEBUG nova.compute.manager [req-028e5bab-983f-46e4-8a12-c71776a6ec3c req-baeca2e0-7f9e-489c-8847-2a0bc6123f36 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-unplugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 09:19:48 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:19:48 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.931 253542 INFO nova.virt.libvirt.driver [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Deleting instance files /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3_del
Nov 25 09:19:48 compute-0 nova_compute[253538]: 2025-11-25 09:19:48.932 253542 INFO nova.virt.libvirt.driver [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Deletion of /var/lib/nova/instances/282b7217-4c1e-4a42-b3da-05616f4e1da3_del complete
Nov 25 09:19:49 compute-0 nova_compute[253538]: 2025-11-25 09:19:49.127 253542 INFO nova.compute.manager [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Took 0.89 seconds to destroy the instance on the hypervisor.
Nov 25 09:19:49 compute-0 nova_compute[253538]: 2025-11-25 09:19:49.127 253542 DEBUG oslo.service.loopingcall [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:19:49 compute-0 nova_compute[253538]: 2025-11-25 09:19:49.128 253542 DEBUG nova.compute.manager [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:19:49 compute-0 nova_compute[253538]: 2025-11-25 09:19:49.128 253542 DEBUG nova.network.neutron [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:19:49 compute-0 ceph-mon[75015]: pgmap v2988: 321 pgs: 321 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 123 KiB/s rd, 9.3 KiB/s wr, 5 op/s
Nov 25 09:19:49 compute-0 nova_compute[253538]: 2025-11-25 09:19:49.904 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2989: 321 pgs: 321 active+clean; 148 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 8.7 KiB/s wr, 27 op/s
Nov 25 09:19:51 compute-0 nova_compute[253538]: 2025-11-25 09:19:51.004 253542 DEBUG nova.compute.manager [req-2597a671-5c7d-474d-a5fd-d963119417a4 req-b6eecc01-9243-493e-8f58-711f75a43bfd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:19:51 compute-0 nova_compute[253538]: 2025-11-25 09:19:51.005 253542 DEBUG oslo_concurrency.lockutils [req-2597a671-5c7d-474d-a5fd-d963119417a4 req-b6eecc01-9243-493e-8f58-711f75a43bfd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Acquiring lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:19:51 compute-0 nova_compute[253538]: 2025-11-25 09:19:51.005 253542 DEBUG oslo_concurrency.lockutils [req-2597a671-5c7d-474d-a5fd-d963119417a4 req-b6eecc01-9243-493e-8f58-711f75a43bfd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:19:51 compute-0 nova_compute[253538]: 2025-11-25 09:19:51.005 253542 DEBUG oslo_concurrency.lockutils [req-2597a671-5c7d-474d-a5fd-d963119417a4 req-b6eecc01-9243-493e-8f58-711f75a43bfd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:19:51 compute-0 nova_compute[253538]: 2025-11-25 09:19:51.005 253542 DEBUG nova.compute.manager [req-2597a671-5c7d-474d-a5fd-d963119417a4 req-b6eecc01-9243-493e-8f58-711f75a43bfd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] No waiting events found dispatching network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:19:51 compute-0 nova_compute[253538]: 2025-11-25 09:19:51.006 253542 WARNING nova.compute.manager [req-2597a671-5c7d-474d-a5fd-d963119417a4 req-b6eecc01-9243-493e-8f58-711f75a43bfd b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received unexpected event network-vif-plugged-9d78d6ba-3489-4cfd-ae33-9166be3f940c for instance with vm_state active and task_state deleting.
Nov 25 09:19:51 compute-0 nova_compute[253538]: 2025-11-25 09:19:51.205 253542 DEBUG nova.network.neutron [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:19:51 compute-0 nova_compute[253538]: 2025-11-25 09:19:51.310 253542 DEBUG nova.network.neutron [req-828f7a84-f713-4e60-b804-c65243625863 req-57f12959-50b9-46bb-82c4-d47139d0ca79 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updated VIF entry in instance network info cache for port 9d78d6ba-3489-4cfd-ae33-9166be3f940c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:19:51 compute-0 nova_compute[253538]: 2025-11-25 09:19:51.311 253542 DEBUG nova.network.neutron [req-828f7a84-f713-4e60-b804-c65243625863 req-57f12959-50b9-46bb-82c4-d47139d0ca79 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Updating instance_info_cache with network_info: [{"id": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "address": "fa:16:3e:eb:22:67", "network": {"id": "26d70c6d-e66b-4570-a7d7-11486a935ed8", "bridge": "br-int", "label": "tempest-TestShelveInstance-164848660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c67820b40a4185a60c4245f9c43ef5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78d6ba-34", "ovs_interfaceid": "9d78d6ba-3489-4cfd-ae33-9166be3f940c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:19:51 compute-0 nova_compute[253538]: 2025-11-25 09:19:51.476 253542 INFO nova.compute.manager [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Took 2.35 seconds to deallocate network for instance.
Nov 25 09:19:51 compute-0 nova_compute[253538]: 2025-11-25 09:19:51.526 253542 DEBUG oslo_concurrency.lockutils [req-828f7a84-f713-4e60-b804-c65243625863 req-57f12959-50b9-46bb-82c4-d47139d0ca79 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] Releasing lock "refresh_cache-282b7217-4c1e-4a42-b3da-05616f4e1da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:19:51 compute-0 nova_compute[253538]: 2025-11-25 09:19:51.632 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:19:51 compute-0 nova_compute[253538]: 2025-11-25 09:19:51.633 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:19:51 compute-0 nova_compute[253538]: 2025-11-25 09:19:51.697 253542 DEBUG oslo_concurrency.processutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:19:51 compute-0 ceph-mon[75015]: pgmap v2989: 321 pgs: 321 active+clean; 148 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 8.7 KiB/s wr, 27 op/s
Nov 25 09:19:51 compute-0 nova_compute[253538]: 2025-11-25 09:19:51.963 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:19:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:19:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/730549159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:19:52 compute-0 nova_compute[253538]: 2025-11-25 09:19:52.167 253542 DEBUG oslo_concurrency.processutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:19:52 compute-0 nova_compute[253538]: 2025-11-25 09:19:52.175 253542 DEBUG nova.compute.provider_tree [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:19:52 compute-0 nova_compute[253538]: 2025-11-25 09:19:52.190 253542 DEBUG nova.scheduler.client.report [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:19:52 compute-0 nova_compute[253538]: 2025-11-25 09:19:52.215 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:19:52 compute-0 nova_compute[253538]: 2025-11-25 09:19:52.289 253542 INFO nova.scheduler.client.report [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Deleted allocations for instance 282b7217-4c1e-4a42-b3da-05616f4e1da3
Nov 25 09:19:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2990: 321 pgs: 321 active+clean; 122 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Nov 25 09:19:52 compute-0 nova_compute[253538]: 2025-11-25 09:19:52.722 253542 DEBUG oslo_concurrency.lockutils [None req-cb207b26-6d8b-43e4-90cd-a0aad9008460 a68fbd2f756d42aa982630f3a41f0a1f 41c67820b40a4185a60c4245f9c43ef5 - - default default] Lock "282b7217-4c1e-4a42-b3da-05616f4e1da3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:19:52 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/730549159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:19:53 compute-0 nova_compute[253538]: 2025-11-25 09:19:53.113 253542 DEBUG nova.compute.manager [req-a29a5ef3-e94c-4e8c-be46-71ec71439748 req-9e9f73c0-265b-49b7-a3b7-db5144ef89a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Received event network-vif-deleted-9d78d6ba-3489-4cfd-ae33-9166be3f940c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:19:53 compute-0 nova_compute[253538]: 2025-11-25 09:19:53.114 253542 INFO nova.compute.manager [req-a29a5ef3-e94c-4e8c-be46-71ec71439748 req-9e9f73c0-265b-49b7-a3b7-db5144ef89a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Neutron deleted interface 9d78d6ba-3489-4cfd-ae33-9166be3f940c; detaching it from the instance and deleting it from the info cache
Nov 25 09:19:53 compute-0 nova_compute[253538]: 2025-11-25 09:19:53.114 253542 DEBUG nova.network.neutron [req-a29a5ef3-e94c-4e8c-be46-71ec71439748 req-9e9f73c0-265b-49b7-a3b7-db5144ef89a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 25 09:19:53 compute-0 nova_compute[253538]: 2025-11-25 09:19:53.117 253542 DEBUG nova.compute.manager [req-a29a5ef3-e94c-4e8c-be46-71ec71439748 req-9e9f73c0-265b-49b7-a3b7-db5144ef89a9 b4dc24c9a0484aa89c29232753d215bd f58a086bebaa4de083ef3e4067e98836 - - default default] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Detach interface failed, port_id=9d78d6ba-3489-4cfd-ae33-9166be3f940c, reason: Instance 282b7217-4c1e-4a42-b3da-05616f4e1da3 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 09:19:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:19:53
Nov 25 09:19:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:19:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:19:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'vms', 'default.rgw.meta', 'images', 'backups', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta']
Nov 25 09:19:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:19:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:19:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:19:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:19:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:19:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:19:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:19:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:19:53 compute-0 nova_compute[253538]: 2025-11-25 09:19:53.498 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:53 compute-0 ceph-mon[75015]: pgmap v2990: 321 pgs: 321 active+clean; 122 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Nov 25 09:19:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:19:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:19:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:19:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:19:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:19:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:19:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:19:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:19:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:19:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:19:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2991: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 2.2 KiB/s wr, 33 op/s
Nov 25 09:19:54 compute-0 nova_compute[253538]: 2025-11-25 09:19:54.905 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:55 compute-0 nova_compute[253538]: 2025-11-25 09:19:55.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:19:55 compute-0 ceph-mon[75015]: pgmap v2991: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 2.2 KiB/s wr, 33 op/s
Nov 25 09:19:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:19:55.954 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:19:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2992: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 2.2 KiB/s wr, 33 op/s
Nov 25 09:19:56 compute-0 podman[423402]: 2025-11-25 09:19:56.854488119 +0000 UTC m=+0.106978187 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 09:19:57 compute-0 ceph-mon[75015]: pgmap v2992: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 2.2 KiB/s wr, 33 op/s
Nov 25 09:19:57 compute-0 nova_compute[253538]: 2025-11-25 09:19:57.499 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:57 compute-0 nova_compute[253538]: 2025-11-25 09:19:57.656 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:19:58 compute-0 nova_compute[253538]: 2025-11-25 09:19:58.501 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:19:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2993: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Nov 25 09:19:59 compute-0 ceph-mon[75015]: pgmap v2993: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Nov 25 09:19:59 compute-0 nova_compute[253538]: 2025-11-25 09:19:59.907 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:00 compute-0 nova_compute[253538]: 2025-11-25 09:20:00.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:20:00 compute-0 nova_compute[253538]: 2025-11-25 09:20:00.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:20:00 compute-0 nova_compute[253538]: 2025-11-25 09:20:00.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:20:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2994: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 25 09:20:00 compute-0 nova_compute[253538]: 2025-11-25 09:20:00.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:20:01 compute-0 ceph-mon[75015]: pgmap v2994: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 25 09:20:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2995: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 6 op/s
Nov 25 09:20:03 compute-0 nova_compute[253538]: 2025-11-25 09:20:03.472 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764062388.4714363, 282b7217-4c1e-4a42-b3da-05616f4e1da3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:20:03 compute-0 nova_compute[253538]: 2025-11-25 09:20:03.472 253542 INFO nova.compute.manager [-] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] VM Stopped (Lifecycle Event)
Nov 25 09:20:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:20:03 compute-0 nova_compute[253538]: 2025-11-25 09:20:03.491 253542 DEBUG nova.compute.manager [None req-22a257ec-0687-4db3-8326-093e753ffaf7 - - - - - -] [instance: 282b7217-4c1e-4a42-b3da-05616f4e1da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:20:03 compute-0 nova_compute[253538]: 2025-11-25 09:20:03.503 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:03 compute-0 nova_compute[253538]: 2025-11-25 09:20:03.562 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:20:03 compute-0 ceph-mon[75015]: pgmap v2995: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 6 op/s
Nov 25 09:20:04 compute-0 nova_compute[253538]: 2025-11-25 09:20:04.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2996: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 341 B/s wr, 4 op/s
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:20:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:20:04 compute-0 nova_compute[253538]: 2025-11-25 09:20:04.908 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:05 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:20:05 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.5 total, 600.0 interval
                                           Cumulative writes: 46K writes, 186K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.84 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2946 writes, 11K keys, 2946 commit groups, 1.0 writes per commit group, ingest: 12.87 MB, 0.02 MB/s
                                           Interval WAL: 2946 writes, 1143 syncs, 2.58 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:20:05 compute-0 ceph-mon[75015]: pgmap v2996: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 341 B/s wr, 4 op/s
Nov 25 09:20:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2997: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:07 compute-0 nova_compute[253538]: 2025-11-25 09:20:07.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:20:07 compute-0 ceph-mon[75015]: pgmap v2997: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:20:08 compute-0 nova_compute[253538]: 2025-11-25 09:20:08.505 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:08 compute-0 nova_compute[253538]: 2025-11-25 09:20:08.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:20:08 compute-0 nova_compute[253538]: 2025-11-25 09:20:08.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:20:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2998: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:09 compute-0 ceph-mon[75015]: pgmap v2998: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:09 compute-0 nova_compute[253538]: 2025-11-25 09:20:09.909 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v2999: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:11 compute-0 nova_compute[253538]: 2025-11-25 09:20:11.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:20:11 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:20:11 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.4 total, 600.0 interval
                                           Cumulative writes: 45K writes, 180K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.79 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2715 writes, 11K keys, 2715 commit groups, 1.0 writes per commit group, ingest: 11.35 MB, 0.02 MB/s
                                           Interval WAL: 2715 writes, 1097 syncs, 2.47 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:20:11 compute-0 nova_compute[253538]: 2025-11-25 09:20:11.705 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquiring lock "73412c84-02b0-4ed4-872c-78d4714956d9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:20:11 compute-0 nova_compute[253538]: 2025-11-25 09:20:11.705 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "73412c84-02b0-4ed4-872c-78d4714956d9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:20:11 compute-0 nova_compute[253538]: 2025-11-25 09:20:11.717 253542 DEBUG nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:20:11 compute-0 ceph-mon[75015]: pgmap v2999: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:11 compute-0 nova_compute[253538]: 2025-11-25 09:20:11.777 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:20:11 compute-0 nova_compute[253538]: 2025-11-25 09:20:11.778 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:20:11 compute-0 nova_compute[253538]: 2025-11-25 09:20:11.786 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:20:11 compute-0 nova_compute[253538]: 2025-11-25 09:20:11.786 253542 INFO nova.compute.claims [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:20:11 compute-0 nova_compute[253538]: 2025-11-25 09:20:11.875 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:20:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:20:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4231694633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.337 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.343 253542 DEBUG nova.compute.provider_tree [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.360 253542 DEBUG nova.scheduler.client.report [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.393 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.394 253542 DEBUG nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.439 253542 DEBUG nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.440 253542 DEBUG nova.network.neutron [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.456 253542 INFO nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.475 253542 DEBUG nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.554 253542 DEBUG nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.555 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.555 253542 INFO nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Creating image(s)
Nov 25 09:20:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3000: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.574 253542 DEBUG nova.storage.rbd_utils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] rbd image 73412c84-02b0-4ed4-872c-78d4714956d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.594 253542 DEBUG nova.storage.rbd_utils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] rbd image 73412c84-02b0-4ed4-872c-78d4714956d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.615 253542 DEBUG nova.storage.rbd_utils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] rbd image 73412c84-02b0-4ed4-872c-78d4714956d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.619 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.703 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.704 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquiring lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.704 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.705 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "ad982bd9427c86feb49d0b60fa1a5b2511227adc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.724 253542 DEBUG nova.storage.rbd_utils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] rbd image 73412c84-02b0-4ed4-872c-78d4714956d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:20:12 compute-0 nova_compute[253538]: 2025-11-25 09:20:12.728 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 73412c84-02b0-4ed4-872c-78d4714956d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:20:12 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4231694633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.037 253542 DEBUG nova.network.neutron [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.037 253542 DEBUG nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:20:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.496 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc 73412c84-02b0-4ed4-872c-78d4714956d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.769s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.536 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.581 253542 DEBUG nova.storage.rbd_utils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] resizing rbd image 73412c84-02b0-4ed4-872c-78d4714956d9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.688 253542 DEBUG nova.objects.instance [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lazy-loading 'migration_context' on Instance uuid 73412c84-02b0-4ed4-872c-78d4714956d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:20:13 compute-0 ceph-mon[75015]: pgmap v3000: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.769 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.770 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Ensure instance console log exists: /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.770 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.771 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.771 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.773 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'disk_bus': 'virtio', 'image_id': '8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.778 253542 WARNING nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.791 253542 DEBUG nova.virt.libvirt.host [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.792 253542 DEBUG nova.virt.libvirt.host [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.822 253542 DEBUG nova.virt.libvirt.host [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.823 253542 DEBUG nova.virt.libvirt.host [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.824 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.824 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T08:20:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c0247c91-0f34-45b2-87e7-43f31a790a00',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T08:20:54Z,direct_url=<?>,disk_format='qcow2',id=8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d5293c2f698f43d69b5e1b38a119911e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T08:20:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.825 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.825 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.825 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.825 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.826 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.826 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.826 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.826 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.827 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.827 253542 DEBUG nova.virt.hardware [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:20:13 compute-0 nova_compute[253538]: 2025-11-25 09:20:13.830 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:20:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:20:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1504769425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:20:14 compute-0 nova_compute[253538]: 2025-11-25 09:20:14.307 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:20:14 compute-0 nova_compute[253538]: 2025-11-25 09:20:14.334 253542 DEBUG nova.storage.rbd_utils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] rbd image 73412c84-02b0-4ed4-872c-78d4714956d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:20:14 compute-0 nova_compute[253538]: 2025-11-25 09:20:14.338 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:20:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3001: 321 pgs: 321 active+clean; 104 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 843 KiB/s wr, 3 op/s
Nov 25 09:20:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 09:20:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1016389856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:20:14 compute-0 nova_compute[253538]: 2025-11-25 09:20:14.771 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:20:14 compute-0 nova_compute[253538]: 2025-11-25 09:20:14.774 253542 DEBUG nova.objects.instance [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73412c84-02b0-4ed4-872c-78d4714956d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:20:14 compute-0 nova_compute[253538]: 2025-11-25 09:20:14.791 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:20:14 compute-0 nova_compute[253538]:   <uuid>73412c84-02b0-4ed4-872c-78d4714956d9</uuid>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   <name>instance-0000009a</name>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   <memory>131072</memory>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   <vcpu>1</vcpu>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   <metadata>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <nova:name>tempest-AggregatesAdminTestJSON-server-149578569</nova:name>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <nova:creationTime>2025-11-25 09:20:13</nova:creationTime>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <nova:flavor name="m1.nano">
Nov 25 09:20:14 compute-0 nova_compute[253538]:         <nova:memory>128</nova:memory>
Nov 25 09:20:14 compute-0 nova_compute[253538]:         <nova:disk>1</nova:disk>
Nov 25 09:20:14 compute-0 nova_compute[253538]:         <nova:swap>0</nova:swap>
Nov 25 09:20:14 compute-0 nova_compute[253538]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:20:14 compute-0 nova_compute[253538]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       </nova:flavor>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <nova:owner>
Nov 25 09:20:14 compute-0 nova_compute[253538]:         <nova:user uuid="f152533b0a03477485b6883b0c89d441">tempest-AggregatesAdminTestJSON-1484531335-project-member</nova:user>
Nov 25 09:20:14 compute-0 nova_compute[253538]:         <nova:project uuid="195498a87961428fa33efd9eb8f206a9">tempest-AggregatesAdminTestJSON-1484531335</nova:project>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       </nova:owner>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <nova:root type="image" uuid="8d6d7aa8-d8eb-4d80-8c79-123877bbdc7e"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <nova:ports/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     </nova:instance>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   </metadata>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   <sysinfo type="smbios">
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <system>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <entry name="serial">73412c84-02b0-4ed4-872c-78d4714956d9</entry>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <entry name="uuid">73412c84-02b0-4ed4-872c-78d4714956d9</entry>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     </system>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   </sysinfo>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   <os>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <boot dev="hd"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <smbios mode="sysinfo"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   </os>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   <features>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <acpi/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <apic/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <vmcoreinfo/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   </features>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   <clock offset="utc">
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <timer name="hpet" present="no"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   </clock>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   <cpu mode="host-model" match="exact">
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   </cpu>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   <devices>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <disk type="network" device="disk">
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/73412c84-02b0-4ed4-872c-78d4714956d9_disk">
Nov 25 09:20:14 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       </source>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:20:14 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <target dev="vda" bus="virtio"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <disk type="network" device="cdrom">
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <driver type="raw" cache="none"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <source protocol="rbd" name="vms/73412c84-02b0-4ed4-872c-78d4714956d9_disk.config">
Nov 25 09:20:14 compute-0 nova_compute[253538]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       </source>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <auth username="openstack">
Nov 25 09:20:14 compute-0 nova_compute[253538]:         <secret type="ceph" uuid="a058ea16-8b73-51e1-b172-ed66107102bf"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       </auth>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <target dev="sda" bus="sata"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     </disk>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <serial type="pty">
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <log file="/var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9/console.log" append="off"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     </serial>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <video>
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <model type="virtio"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     </video>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <input type="tablet" bus="usb"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <rng model="virtio">
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     </rng>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <controller type="usb" index="0"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     <memballoon model="virtio">
Nov 25 09:20:14 compute-0 nova_compute[253538]:       <stats period="10"/>
Nov 25 09:20:14 compute-0 nova_compute[253538]:     </memballoon>
Nov 25 09:20:14 compute-0 nova_compute[253538]:   </devices>
Nov 25 09:20:14 compute-0 nova_compute[253538]: </domain>
Nov 25 09:20:14 compute-0 nova_compute[253538]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:20:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1504769425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:20:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1016389856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:20:14 compute-0 nova_compute[253538]: 2025-11-25 09:20:14.874 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:20:14 compute-0 nova_compute[253538]: 2025-11-25 09:20:14.875 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:20:14 compute-0 nova_compute[253538]: 2025-11-25 09:20:14.875 253542 INFO nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Using config drive
Nov 25 09:20:14 compute-0 nova_compute[253538]: 2025-11-25 09:20:14.897 253542 DEBUG nova.storage.rbd_utils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] rbd image 73412c84-02b0-4ed4-872c-78d4714956d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:20:14 compute-0 nova_compute[253538]: 2025-11-25 09:20:14.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:15 compute-0 nova_compute[253538]: 2025-11-25 09:20:15.066 253542 INFO nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Creating config drive at /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9/disk.config
Nov 25 09:20:15 compute-0 nova_compute[253538]: 2025-11-25 09:20:15.322 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsk9l96vw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:20:15 compute-0 nova_compute[253538]: 2025-11-25 09:20:15.489 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsk9l96vw" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:20:15 compute-0 nova_compute[253538]: 2025-11-25 09:20:15.519 253542 DEBUG nova.storage.rbd_utils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] rbd image 73412c84-02b0-4ed4-872c-78d4714956d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:20:15 compute-0 nova_compute[253538]: 2025-11-25 09:20:15.523 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9/disk.config 73412c84-02b0-4ed4-872c-78d4714956d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:20:15 compute-0 nova_compute[253538]: 2025-11-25 09:20:15.562 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:20:15 compute-0 nova_compute[253538]: 2025-11-25 09:20:15.585 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:20:15 compute-0 nova_compute[253538]: 2025-11-25 09:20:15.586 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:20:15 compute-0 nova_compute[253538]: 2025-11-25 09:20:15.586 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:20:15 compute-0 nova_compute[253538]: 2025-11-25 09:20:15.587 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:20:15 compute-0 nova_compute[253538]: 2025-11-25 09:20:15.587 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:20:15 compute-0 ceph-mon[75015]: pgmap v3001: 321 pgs: 321 active+clean; 104 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 843 KiB/s wr, 3 op/s
Nov 25 09:20:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:20:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1644718927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:20:16 compute-0 nova_compute[253538]: 2025-11-25 09:20:16.065 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:20:16 compute-0 nova_compute[253538]: 2025-11-25 09:20:16.076 253542 DEBUG oslo_concurrency.processutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9/disk.config 73412c84-02b0-4ed4-872c-78d4714956d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:20:16 compute-0 nova_compute[253538]: 2025-11-25 09:20:16.077 253542 INFO nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Deleting local config drive /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9/disk.config because it was imported into RBD.
Nov 25 09:20:16 compute-0 systemd-machined[215790]: New machine qemu-187-instance-0000009a.
Nov 25 09:20:16 compute-0 systemd[1]: Started Virtual Machine qemu-187-instance-0000009a.
Nov 25 09:20:16 compute-0 nova_compute[253538]: 2025-11-25 09:20:16.362 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:20:16 compute-0 nova_compute[253538]: 2025-11-25 09:20:16.364 253542 DEBUG nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:20:16 compute-0 nova_compute[253538]: 2025-11-25 09:20:16.515 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:20:16 compute-0 nova_compute[253538]: 2025-11-25 09:20:16.516 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3566MB free_disk=59.9786262512207GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:20:16 compute-0 nova_compute[253538]: 2025-11-25 09:20:16.516 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:20:16 compute-0 nova_compute[253538]: 2025-11-25 09:20:16.517 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:20:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3002: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.8 MiB/s wr, 19 op/s
Nov 25 09:20:16 compute-0 nova_compute[253538]: 2025-11-25 09:20:16.570 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Instance 73412c84-02b0-4ed4-872c-78d4714956d9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:20:16 compute-0 nova_compute[253538]: 2025-11-25 09:20:16.571 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:20:16 compute-0 nova_compute[253538]: 2025-11-25 09:20:16.571 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:20:16 compute-0 nova_compute[253538]: 2025-11-25 09:20:16.603 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:20:16 compute-0 podman[423812]: 2025-11-25 09:20:16.808384621 +0000 UTC m=+0.056341027 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 09:20:16 compute-0 podman[423795]: 2025-11-25 09:20:16.813679556 +0000 UTC m=+0.058816725 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:20:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1644718927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:20:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:20:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3595739862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.272 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.279 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.293 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.318 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.319 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.539 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062417.5388584, 73412c84-02b0-4ed4-872c-78d4714956d9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.540 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] VM Resumed (Lifecycle Event)
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.544 253542 DEBUG nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.544 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.550 253542 INFO nova.virt.libvirt.driver [-] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Instance spawned successfully.
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.551 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.562 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.572 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.579 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.580 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.581 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.582 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.582 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.583 253542 DEBUG nova.virt.libvirt.driver [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.589 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.590 253542 DEBUG nova.virt.driver [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] Emitting event <LifecycleEvent: 1764062417.540476, 73412c84-02b0-4ed4-872c-78d4714956d9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.590 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] VM Started (Lifecycle Event)
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.614 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.618 253542 DEBUG nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.639 253542 INFO nova.compute.manager [None req-d21e776b-5034-4f6d-89ea-26f3158bf51e - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.645 253542 INFO nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Took 5.09 seconds to spawn the instance on the hypervisor.
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.646 253542 DEBUG nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.731 253542 INFO nova.compute.manager [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Took 5.98 seconds to build instance.
Nov 25 09:20:17 compute-0 nova_compute[253538]: 2025-11-25 09:20:17.753 253542 DEBUG oslo_concurrency.lockutils [None req-fd6958ed-9d92-4759-b61a-0ce56dffd471 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "73412c84-02b0-4ed4-872c-78d4714956d9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:20:18 compute-0 ceph-mon[75015]: pgmap v3002: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.8 MiB/s wr, 19 op/s
Nov 25 09:20:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3595739862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:20:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:20:18 compute-0 nova_compute[253538]: 2025-11-25 09:20:18.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3003: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Nov 25 09:20:18 compute-0 nova_compute[253538]: 2025-11-25 09:20:18.695 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquiring lock "73412c84-02b0-4ed4-872c-78d4714956d9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:20:18 compute-0 nova_compute[253538]: 2025-11-25 09:20:18.696 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "73412c84-02b0-4ed4-872c-78d4714956d9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:20:18 compute-0 nova_compute[253538]: 2025-11-25 09:20:18.696 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquiring lock "73412c84-02b0-4ed4-872c-78d4714956d9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:20:18 compute-0 nova_compute[253538]: 2025-11-25 09:20:18.697 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "73412c84-02b0-4ed4-872c-78d4714956d9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:20:18 compute-0 nova_compute[253538]: 2025-11-25 09:20:18.697 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "73412c84-02b0-4ed4-872c-78d4714956d9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:20:18 compute-0 nova_compute[253538]: 2025-11-25 09:20:18.698 253542 INFO nova.compute.manager [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Terminating instance
Nov 25 09:20:18 compute-0 nova_compute[253538]: 2025-11-25 09:20:18.698 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquiring lock "refresh_cache-73412c84-02b0-4ed4-872c-78d4714956d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:20:18 compute-0 nova_compute[253538]: 2025-11-25 09:20:18.699 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquired lock "refresh_cache-73412c84-02b0-4ed4-872c-78d4714956d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:20:18 compute-0 nova_compute[253538]: 2025-11-25 09:20:18.699 253542 DEBUG nova.network.neutron [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:20:18 compute-0 nova_compute[253538]: 2025-11-25 09:20:18.861 253542 DEBUG nova.network.neutron [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:20:19 compute-0 nova_compute[253538]: 2025-11-25 09:20:19.044 253542 DEBUG nova.network.neutron [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:20:19 compute-0 nova_compute[253538]: 2025-11-25 09:20:19.060 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Releasing lock "refresh_cache-73412c84-02b0-4ed4-872c-78d4714956d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:20:19 compute-0 nova_compute[253538]: 2025-11-25 09:20:19.061 253542 DEBUG nova.compute.manager [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:20:19 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Nov 25 09:20:19 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d0000009a.scope: Consumed 2.340s CPU time.
Nov 25 09:20:19 compute-0 systemd-machined[215790]: Machine qemu-187-instance-0000009a terminated.
Nov 25 09:20:19 compute-0 ceph-mon[75015]: pgmap v3003: 321 pgs: 321 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Nov 25 09:20:19 compute-0 nova_compute[253538]: 2025-11-25 09:20:19.281 253542 INFO nova.virt.libvirt.driver [-] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Instance destroyed successfully.
Nov 25 09:20:19 compute-0 nova_compute[253538]: 2025-11-25 09:20:19.281 253542 DEBUG nova.objects.instance [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lazy-loading 'resources' on Instance uuid 73412c84-02b0-4ed4-872c-78d4714956d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:20:19 compute-0 nova_compute[253538]: 2025-11-25 09:20:19.796 253542 INFO nova.virt.libvirt.driver [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Deleting instance files /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9_del
Nov 25 09:20:19 compute-0 nova_compute[253538]: 2025-11-25 09:20:19.796 253542 INFO nova.virt.libvirt.driver [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Deletion of /var/lib/nova/instances/73412c84-02b0-4ed4-872c-78d4714956d9_del complete
Nov 25 09:20:19 compute-0 nova_compute[253538]: 2025-11-25 09:20:19.914 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:20 compute-0 nova_compute[253538]: 2025-11-25 09:20:20.168 253542 INFO nova.compute.manager [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Took 1.11 seconds to destroy the instance on the hypervisor.
Nov 25 09:20:20 compute-0 nova_compute[253538]: 2025-11-25 09:20:20.168 253542 DEBUG oslo.service.loopingcall [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:20:20 compute-0 nova_compute[253538]: 2025-11-25 09:20:20.169 253542 DEBUG nova.compute.manager [-] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:20:20 compute-0 nova_compute[253538]: 2025-11-25 09:20:20.169 253542 DEBUG nova.network.neutron [-] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:20:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3004: 321 pgs: 321 active+clean; 116 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 907 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Nov 25 09:20:20 compute-0 nova_compute[253538]: 2025-11-25 09:20:20.731 253542 DEBUG nova.network.neutron [-] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:20:20 compute-0 nova_compute[253538]: 2025-11-25 09:20:20.744 253542 DEBUG nova.network.neutron [-] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:20:20 compute-0 nova_compute[253538]: 2025-11-25 09:20:20.758 253542 INFO nova.compute.manager [-] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Took 0.59 seconds to deallocate network for instance.
Nov 25 09:20:20 compute-0 nova_compute[253538]: 2025-11-25 09:20:20.820 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:20:20 compute-0 nova_compute[253538]: 2025-11-25 09:20:20.821 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:20:20 compute-0 sshd-session[423761]: Received disconnect from 45.78.217.205 port 39430:11: Bye Bye [preauth]
Nov 25 09:20:20 compute-0 sshd-session[423761]: Disconnected from authenticating user root 45.78.217.205 port 39430 [preauth]
Nov 25 09:20:20 compute-0 nova_compute[253538]: 2025-11-25 09:20:20.867 253542 DEBUG oslo_concurrency.processutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:20:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:20:21 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128121174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:20:21 compute-0 ceph-mon[75015]: pgmap v3004: 321 pgs: 321 active+clean; 116 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 907 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Nov 25 09:20:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2128121174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:20:21 compute-0 nova_compute[253538]: 2025-11-25 09:20:21.796 253542 DEBUG oslo_concurrency.processutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.929s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:20:21 compute-0 nova_compute[253538]: 2025-11-25 09:20:21.805 253542 DEBUG nova.compute.provider_tree [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:20:21 compute-0 nova_compute[253538]: 2025-11-25 09:20:21.820 253542 DEBUG nova.scheduler.client.report [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:20:21 compute-0 nova_compute[253538]: 2025-11-25 09:20:21.852 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:20:21 compute-0 nova_compute[253538]: 2025-11-25 09:20:21.881 253542 INFO nova.scheduler.client.report [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Deleted allocations for instance 73412c84-02b0-4ed4-872c-78d4714956d9
Nov 25 09:20:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:20:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5402.4 total, 600.0 interval
                                           Cumulative writes: 35K writes, 142K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.84 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1830 writes, 8268 keys, 1830 commit groups, 1.0 writes per commit group, ingest: 10.35 MB, 0.02 MB/s
                                           Interval WAL: 1830 writes, 682 syncs, 2.68 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:20:21 compute-0 nova_compute[253538]: 2025-11-25 09:20:21.942 253542 DEBUG oslo_concurrency.lockutils [None req-b18a868f-8566-4a2c-9b67-33cc86778341 f152533b0a03477485b6883b0c89d441 195498a87961428fa33efd9eb8f206a9 - - default default] Lock "73412c84-02b0-4ed4-872c-78d4714956d9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:20:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3005: 321 pgs: 321 active+clean; 104 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Nov 25 09:20:23 compute-0 ceph-mon[75015]: pgmap v3005: 321 pgs: 321 active+clean; 104 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Nov 25 09:20:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:20:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:20:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:20:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:20:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:20:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:20:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:20:23 compute-0 nova_compute[253538]: 2025-11-25 09:20:23.542 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3006: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Nov 25 09:20:24 compute-0 nova_compute[253538]: 2025-11-25 09:20:24.914 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:25 compute-0 ceph-mon[75015]: pgmap v3006: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Nov 25 09:20:26 compute-0 sshd-session[423922]: Invalid user m from 45.202.211.6 port 50488
Nov 25 09:20:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3007: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 985 KiB/s wr, 123 op/s
Nov 25 09:20:26 compute-0 sshd-session[423922]: Received disconnect from 45.202.211.6 port 50488:11: Bye Bye [preauth]
Nov 25 09:20:26 compute-0 sshd-session[423922]: Disconnected from invalid user m 45.202.211.6 port 50488 [preauth]
Nov 25 09:20:27 compute-0 ceph-mon[75015]: pgmap v3007: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 985 KiB/s wr, 123 op/s
Nov 25 09:20:27 compute-0 podman[423924]: 2025-11-25 09:20:27.831026172 +0000 UTC m=+0.075996223 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:20:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:20:28 compute-0 nova_compute[253538]: 2025-11-25 09:20:28.545 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3008: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 107 op/s
Nov 25 09:20:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:20:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4163204498' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:20:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:20:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4163204498' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:20:29 compute-0 ceph-mon[75015]: pgmap v3008: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 107 op/s
Nov 25 09:20:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/4163204498' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:20:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/4163204498' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:20:29 compute-0 nova_compute[253538]: 2025-11-25 09:20:29.917 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3009: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 84 op/s
Nov 25 09:20:31 compute-0 ceph-mon[75015]: pgmap v3009: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 84 op/s
Nov 25 09:20:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3010: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 852 B/s wr, 51 op/s
Nov 25 09:20:33 compute-0 ovn_controller[152859]: 2025-11-25T09:20:33Z|01652|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 09:20:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:20:33 compute-0 nova_compute[253538]: 2025-11-25 09:20:33.548 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:33 compute-0 sshd-session[423950]: Invalid user ark from 45.78.222.2 port 40812
Nov 25 09:20:33 compute-0 ceph-mon[75015]: pgmap v3010: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 852 B/s wr, 51 op/s
Nov 25 09:20:33 compute-0 sshd-session[423950]: Received disconnect from 45.78.222.2 port 40812:11: Bye Bye [preauth]
Nov 25 09:20:33 compute-0 sshd-session[423950]: Disconnected from invalid user ark 45.78.222.2 port 40812 [preauth]
Nov 25 09:20:34 compute-0 nova_compute[253538]: 2025-11-25 09:20:34.280 253542 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764062419.2789662, 73412c84-02b0-4ed4-872c-78d4714956d9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:20:34 compute-0 nova_compute[253538]: 2025-11-25 09:20:34.280 253542 INFO nova.compute.manager [-] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] VM Stopped (Lifecycle Event)
Nov 25 09:20:34 compute-0 nova_compute[253538]: 2025-11-25 09:20:34.306 253542 DEBUG nova.compute.manager [None req-dabff886-4c65-4c7e-ae5c-30b465c1e561 - - - - - -] [instance: 73412c84-02b0-4ed4-872c-78d4714956d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:20:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3011: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 583 KiB/s rd, 0 B/s wr, 30 op/s
Nov 25 09:20:34 compute-0 nova_compute[253538]: 2025-11-25 09:20:34.919 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:35 compute-0 ceph-mon[75015]: pgmap v3011: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 583 KiB/s rd, 0 B/s wr, 30 op/s
Nov 25 09:20:36 compute-0 sshd-session[423952]: Connection closed by authenticating user root 171.244.51.45 port 50616 [preauth]
Nov 25 09:20:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3012: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 09:20:37 compute-0 ceph-mon[75015]: pgmap v3012: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 09:20:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:20:38 compute-0 nova_compute[253538]: 2025-11-25 09:20:38.551 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3013: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:39 compute-0 ceph-mon[75015]: pgmap v3013: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:39 compute-0 nova_compute[253538]: 2025-11-25 09:20:39.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3014: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:20:41.105 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:20:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:20:41.106 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:20:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:20:41.106 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:20:41 compute-0 ceph-mon[75015]: pgmap v3014: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:42 compute-0 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 09:20:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3015: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:42 compute-0 ceph-mon[75015]: pgmap v3015: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:20:43 compute-0 nova_compute[253538]: 2025-11-25 09:20:43.554 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3016: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:44 compute-0 nova_compute[253538]: 2025-11-25 09:20:44.923 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:45 compute-0 ceph-mon[75015]: pgmap v3016: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3017: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:47 compute-0 podman[423955]: 2025-11-25 09:20:47.82900581 +0000 UTC m=+0.063953505 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 09:20:47 compute-0 podman[423954]: 2025-11-25 09:20:47.836763862 +0000 UTC m=+0.079805427 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:20:47 compute-0 ceph-mon[75015]: pgmap v3017: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:47 compute-0 sudo[423990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:20:47 compute-0 sudo[423990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:20:47 compute-0 sudo[423990]: pam_unix(sudo:session): session closed for user root
Nov 25 09:20:47 compute-0 sudo[424015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:20:47 compute-0 sudo[424015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:20:47 compute-0 sudo[424015]: pam_unix(sudo:session): session closed for user root
Nov 25 09:20:48 compute-0 sudo[424040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:20:48 compute-0 sudo[424040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:20:48 compute-0 sudo[424040]: pam_unix(sudo:session): session closed for user root
Nov 25 09:20:48 compute-0 sudo[424065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:20:48 compute-0 sudo[424065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:20:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:20:48 compute-0 nova_compute[253538]: 2025-11-25 09:20:48.556 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3018: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:48 compute-0 sudo[424065]: pam_unix(sudo:session): session closed for user root
Nov 25 09:20:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:20:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:20:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:20:48 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:20:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:20:48 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:20:48 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 5f3fdbe6-2bfe-45e0-8698-4e43b79b1daf does not exist
Nov 25 09:20:48 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 4489c1f5-5e7d-4966-99fc-b9b72a8a6a8f does not exist
Nov 25 09:20:48 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 225f3f89-0a2d-4a0c-babc-833d96c521e5 does not exist
Nov 25 09:20:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:20:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:20:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:20:48 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:20:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:20:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:20:48 compute-0 sudo[424120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:20:48 compute-0 sudo[424120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:20:48 compute-0 sudo[424120]: pam_unix(sudo:session): session closed for user root
Nov 25 09:20:48 compute-0 sudo[424145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:20:48 compute-0 sudo[424145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:20:48 compute-0 sudo[424145]: pam_unix(sudo:session): session closed for user root
Nov 25 09:20:48 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:20:48 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:20:48 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:20:48 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:20:48 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:20:48 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:20:48 compute-0 sudo[424170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:20:48 compute-0 sudo[424170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:20:48 compute-0 sudo[424170]: pam_unix(sudo:session): session closed for user root
Nov 25 09:20:49 compute-0 sudo[424195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:20:49 compute-0 sudo[424195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:20:49 compute-0 podman[424260]: 2025-11-25 09:20:49.363457943 +0000 UTC m=+0.029007472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:20:49 compute-0 podman[424260]: 2025-11-25 09:20:49.614908127 +0000 UTC m=+0.280457566 container create eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:20:49 compute-0 systemd[1]: Started libpod-conmon-eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd.scope.
Nov 25 09:20:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:20:49 compute-0 nova_compute[253538]: 2025-11-25 09:20:49.925 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:49 compute-0 podman[424260]: 2025-11-25 09:20:49.962827062 +0000 UTC m=+0.628376521 container init eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:20:49 compute-0 podman[424260]: 2025-11-25 09:20:49.971888149 +0000 UTC m=+0.637437578 container start eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:20:49 compute-0 vibrant_saha[424276]: 167 167
Nov 25 09:20:49 compute-0 systemd[1]: libpod-eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd.scope: Deactivated successfully.
Nov 25 09:20:50 compute-0 ceph-mon[75015]: pgmap v3018: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:50 compute-0 podman[424260]: 2025-11-25 09:20:50.076582214 +0000 UTC m=+0.742131663 container attach eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:20:50 compute-0 podman[424260]: 2025-11-25 09:20:50.077653042 +0000 UTC m=+0.743202481 container died eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:20:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc987e33be2951abf964d7d211ef1a4ded0d54379645fe1e829b46ae7ecce0ec-merged.mount: Deactivated successfully.
Nov 25 09:20:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:20:50.475 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:20:50 compute-0 nova_compute[253538]: 2025-11-25 09:20:50.475 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:50 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:20:50.476 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:20:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3019: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:50 compute-0 podman[424260]: 2025-11-25 09:20:50.647769305 +0000 UTC m=+1.313318734 container remove eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:20:50 compute-0 systemd[1]: libpod-conmon-eae07aca2f3a10e1e9a6c426de08b743efbb7f98184aa8e9ca22369e1b68ecbd.scope: Deactivated successfully.
Nov 25 09:20:50 compute-0 podman[424301]: 2025-11-25 09:20:50.877174018 +0000 UTC m=+0.089124100 container create 0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:20:50 compute-0 podman[424301]: 2025-11-25 09:20:50.810258025 +0000 UTC m=+0.022208087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:20:51 compute-0 systemd[1]: Started libpod-conmon-0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60.scope.
Nov 25 09:20:51 compute-0 ceph-mon[75015]: pgmap v3019: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:20:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1f22d9d8eb12b9d075be3681b48166c29a9aa38c1fa3bc64b21bdd36e32513/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:20:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1f22d9d8eb12b9d075be3681b48166c29a9aa38c1fa3bc64b21bdd36e32513/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:20:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1f22d9d8eb12b9d075be3681b48166c29a9aa38c1fa3bc64b21bdd36e32513/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:20:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1f22d9d8eb12b9d075be3681b48166c29a9aa38c1fa3bc64b21bdd36e32513/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:20:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1f22d9d8eb12b9d075be3681b48166c29a9aa38c1fa3bc64b21bdd36e32513/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:20:51 compute-0 podman[424301]: 2025-11-25 09:20:51.284580316 +0000 UTC m=+0.496530428 container init 0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_montalcini, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 09:20:51 compute-0 podman[424301]: 2025-11-25 09:20:51.291863784 +0000 UTC m=+0.503813876 container start 0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:20:51 compute-0 podman[424301]: 2025-11-25 09:20:51.55987055 +0000 UTC m=+0.771820632 container attach 0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:20:52 compute-0 nova_compute[253538]: 2025-11-25 09:20:52.312 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:20:52 compute-0 gallant_montalcini[424318]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:20:52 compute-0 gallant_montalcini[424318]: --> relative data size: 1.0
Nov 25 09:20:52 compute-0 gallant_montalcini[424318]: --> All data devices are unavailable
Nov 25 09:20:52 compute-0 systemd[1]: libpod-0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60.scope: Deactivated successfully.
Nov 25 09:20:52 compute-0 podman[424347]: 2025-11-25 09:20:52.395251724 +0000 UTC m=+0.032611709 container died 0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 09:20:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3020: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d1f22d9d8eb12b9d075be3681b48166c29a9aa38c1fa3bc64b21bdd36e32513-merged.mount: Deactivated successfully.
Nov 25 09:20:53 compute-0 ceph-mon[75015]: pgmap v3020: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:20:53
Nov 25 09:20:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:20:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:20:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'backups']
Nov 25 09:20:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:20:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:20:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:20:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:20:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:20:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:20:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:20:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:20:53 compute-0 podman[424347]: 2025-11-25 09:20:53.542706286 +0000 UTC m=+1.180066251 container remove 0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:20:53 compute-0 systemd[1]: libpod-conmon-0c867feb5f543a99e827249aba541c516c660785dbb3979eefabfebaf1ab3c60.scope: Deactivated successfully.
Nov 25 09:20:53 compute-0 nova_compute[253538]: 2025-11-25 09:20:53.559 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:53 compute-0 sudo[424195]: pam_unix(sudo:session): session closed for user root
Nov 25 09:20:53 compute-0 sudo[424361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:20:53 compute-0 sudo[424361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:20:53 compute-0 sudo[424361]: pam_unix(sudo:session): session closed for user root
Nov 25 09:20:53 compute-0 sudo[424386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:20:53 compute-0 sudo[424386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:20:53 compute-0 sudo[424386]: pam_unix(sudo:session): session closed for user root
Nov 25 09:20:53 compute-0 sudo[424411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:20:53 compute-0 sudo[424411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:20:53 compute-0 sudo[424411]: pam_unix(sudo:session): session closed for user root
Nov 25 09:20:53 compute-0 sudo[424436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:20:53 compute-0 sudo[424436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:20:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:20:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:20:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:20:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:20:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:20:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:20:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:20:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:20:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:20:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:20:54 compute-0 podman[424502]: 2025-11-25 09:20:54.170798259 +0000 UTC m=+0.021777555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:20:54 compute-0 podman[424502]: 2025-11-25 09:20:54.280399487 +0000 UTC m=+0.131378753 container create 8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 09:20:54 compute-0 systemd[1]: Started libpod-conmon-8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a.scope.
Nov 25 09:20:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:20:54 compute-0 podman[424502]: 2025-11-25 09:20:54.489837357 +0000 UTC m=+0.340816643 container init 8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_leavitt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 09:20:54 compute-0 podman[424502]: 2025-11-25 09:20:54.497678061 +0000 UTC m=+0.348657327 container start 8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_leavitt, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:20:54 compute-0 romantic_leavitt[424518]: 167 167
Nov 25 09:20:54 compute-0 systemd[1]: libpod-8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a.scope: Deactivated successfully.
Nov 25 09:20:54 compute-0 podman[424502]: 2025-11-25 09:20:54.553665547 +0000 UTC m=+0.404644833 container attach 8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 09:20:54 compute-0 podman[424502]: 2025-11-25 09:20:54.555281411 +0000 UTC m=+0.406260707 container died 8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:20:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3021: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-52c4e31615e2cfcdb31699b95bdbc2310afde06f442bbc5771e3ea4fc14e56f4-merged.mount: Deactivated successfully.
Nov 25 09:20:54 compute-0 nova_compute[253538]: 2025-11-25 09:20:54.927 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:55 compute-0 podman[424502]: 2025-11-25 09:20:55.355419945 +0000 UTC m=+1.206399211 container remove 8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:20:55 compute-0 systemd[1]: libpod-conmon-8ea1801437fc8b958555e90fceaa2ac933da51b4b8df7eac470acb6994fa955a.scope: Deactivated successfully.
Nov 25 09:20:55 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:20:55.478 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:20:55 compute-0 podman[424541]: 2025-11-25 09:20:55.581060436 +0000 UTC m=+0.092313217 container create 4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:20:55 compute-0 podman[424541]: 2025-11-25 09:20:55.513837014 +0000 UTC m=+0.025089825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:20:55 compute-0 systemd[1]: Started libpod-conmon-4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1.scope.
Nov 25 09:20:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c67b3107dae7618167f188050dda4eff6d2c8b161d359901569de1e4b1809f45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c67b3107dae7618167f188050dda4eff6d2c8b161d359901569de1e4b1809f45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c67b3107dae7618167f188050dda4eff6d2c8b161d359901569de1e4b1809f45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c67b3107dae7618167f188050dda4eff6d2c8b161d359901569de1e4b1809f45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:20:55 compute-0 podman[424541]: 2025-11-25 09:20:55.756077377 +0000 UTC m=+0.267330178 container init 4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:20:55 compute-0 ceph-mon[75015]: pgmap v3021: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:55 compute-0 podman[424541]: 2025-11-25 09:20:55.762212034 +0000 UTC m=+0.273464815 container start 4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:20:55 compute-0 podman[424541]: 2025-11-25 09:20:55.822462777 +0000 UTC m=+0.333715578 container attach 4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 09:20:56 compute-0 vigilant_wing[424558]: {
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:     "0": [
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:         {
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "devices": [
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "/dev/loop3"
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             ],
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "lv_name": "ceph_lv0",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "lv_size": "21470642176",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "name": "ceph_lv0",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "tags": {
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.cluster_name": "ceph",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.crush_device_class": "",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.encrypted": "0",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.osd_id": "0",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.type": "block",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.vdo": "0"
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             },
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "type": "block",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "vg_name": "ceph_vg0"
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:         }
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:     ],
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:     "1": [
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:         {
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "devices": [
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "/dev/loop4"
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             ],
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "lv_name": "ceph_lv1",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "lv_size": "21470642176",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "name": "ceph_lv1",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "tags": {
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.cluster_name": "ceph",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.crush_device_class": "",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.encrypted": "0",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.osd_id": "1",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.type": "block",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.vdo": "0"
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             },
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "type": "block",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "vg_name": "ceph_vg1"
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:         }
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:     ],
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:     "2": [
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:         {
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "devices": [
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "/dev/loop5"
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             ],
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "lv_name": "ceph_lv2",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "lv_size": "21470642176",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "name": "ceph_lv2",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "tags": {
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.cluster_name": "ceph",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.crush_device_class": "",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.encrypted": "0",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.osd_id": "2",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.type": "block",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:                 "ceph.vdo": "0"
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             },
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "type": "block",
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:             "vg_name": "ceph_vg2"
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:         }
Nov 25 09:20:56 compute-0 vigilant_wing[424558]:     ]
Nov 25 09:20:56 compute-0 vigilant_wing[424558]: }
Nov 25 09:20:56 compute-0 systemd[1]: libpod-4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1.scope: Deactivated successfully.
Nov 25 09:20:56 compute-0 podman[424541]: 2025-11-25 09:20:56.541961433 +0000 UTC m=+1.053214224 container died 4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 25 09:20:56 compute-0 nova_compute[253538]: 2025-11-25 09:20:56.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:20:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3022: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c67b3107dae7618167f188050dda4eff6d2c8b161d359901569de1e4b1809f45-merged.mount: Deactivated successfully.
Nov 25 09:20:57 compute-0 podman[424541]: 2025-11-25 09:20:57.052907711 +0000 UTC m=+1.564160532 container remove 4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:20:57 compute-0 sudo[424436]: pam_unix(sudo:session): session closed for user root
Nov 25 09:20:57 compute-0 systemd[1]: libpod-conmon-4add07f471d3b1aef44a6a05e82495f614d605be0f8043b8692085e45ccb45f1.scope: Deactivated successfully.
Nov 25 09:20:57 compute-0 sudo[424578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:20:57 compute-0 sudo[424578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:20:57 compute-0 sudo[424578]: pam_unix(sudo:session): session closed for user root
Nov 25 09:20:57 compute-0 sudo[424603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:20:57 compute-0 sudo[424603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:20:57 compute-0 sudo[424603]: pam_unix(sudo:session): session closed for user root
Nov 25 09:20:57 compute-0 sudo[424628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:20:57 compute-0 sudo[424628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:20:57 compute-0 sudo[424628]: pam_unix(sudo:session): session closed for user root
Nov 25 09:20:57 compute-0 sudo[424653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:20:57 compute-0 sudo[424653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:20:57 compute-0 podman[424719]: 2025-11-25 09:20:57.710153089 +0000 UTC m=+0.102539156 container create 45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:20:57 compute-0 podman[424719]: 2025-11-25 09:20:57.627330751 +0000 UTC m=+0.019716848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:20:57 compute-0 systemd[1]: Started libpod-conmon-45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b.scope.
Nov 25 09:20:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:20:57 compute-0 ceph-mon[75015]: pgmap v3022: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:57 compute-0 podman[424719]: 2025-11-25 09:20:57.952906577 +0000 UTC m=+0.345292664 container init 45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:20:57 compute-0 podman[424719]: 2025-11-25 09:20:57.960086993 +0000 UTC m=+0.352473060 container start 45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 09:20:57 compute-0 hardcore_poitras[424735]: 167 167
Nov 25 09:20:57 compute-0 systemd[1]: libpod-45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b.scope: Deactivated successfully.
Nov 25 09:20:58 compute-0 podman[424719]: 2025-11-25 09:20:58.188832049 +0000 UTC m=+0.581218156 container attach 45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 09:20:58 compute-0 podman[424719]: 2025-11-25 09:20:58.189760775 +0000 UTC m=+0.582146912 container died 45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:20:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:20:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d348ea0d038407c834841e37c586e229548ee2fdbd150554aaed7c7961a8c40-merged.mount: Deactivated successfully.
Nov 25 09:20:58 compute-0 nova_compute[253538]: 2025-11-25 09:20:58.562 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:20:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3023: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:58 compute-0 podman[424719]: 2025-11-25 09:20:58.779637685 +0000 UTC m=+1.172023752 container remove 45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 09:20:58 compute-0 systemd[1]: libpod-conmon-45a401c1685078eb9e65b7e27dc167cc89c1b6861b7c906f1de6cb219f22cf2b.scope: Deactivated successfully.
Nov 25 09:20:58 compute-0 podman[424741]: 2025-11-25 09:20:58.910642767 +0000 UTC m=+0.911587312 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:20:58 compute-0 ceph-mon[75015]: pgmap v3023: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:20:58 compute-0 podman[424785]: 2025-11-25 09:20:58.980519042 +0000 UTC m=+0.077729720 container create 91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:20:59 compute-0 podman[424785]: 2025-11-25 09:20:58.923302992 +0000 UTC m=+0.020513690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:20:59 compute-0 systemd[1]: Started libpod-conmon-91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d.scope.
Nov 25 09:20:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f20d92439824b7cba7e9538e80b0f1e5bc0e6fd0a3fa521781413b29f58604e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f20d92439824b7cba7e9538e80b0f1e5bc0e6fd0a3fa521781413b29f58604e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f20d92439824b7cba7e9538e80b0f1e5bc0e6fd0a3fa521781413b29f58604e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f20d92439824b7cba7e9538e80b0f1e5bc0e6fd0a3fa521781413b29f58604e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:20:59 compute-0 podman[424785]: 2025-11-25 09:20:59.172552248 +0000 UTC m=+0.269762956 container init 91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:20:59 compute-0 podman[424785]: 2025-11-25 09:20:59.180656698 +0000 UTC m=+0.277867376 container start 91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 09:20:59 compute-0 podman[424785]: 2025-11-25 09:20:59.205121545 +0000 UTC m=+0.302332223 container attach 91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 09:20:59 compute-0 nova_compute[253538]: 2025-11-25 09:20:59.929 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:00 compute-0 naughty_tharp[424807]: {
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:         "osd_id": 1,
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:         "type": "bluestore"
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:     },
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:         "osd_id": 2,
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:         "type": "bluestore"
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:     },
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:         "osd_id": 0,
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:         "type": "bluestore"
Nov 25 09:21:00 compute-0 naughty_tharp[424807]:     }
Nov 25 09:21:00 compute-0 naughty_tharp[424807]: }
Nov 25 09:21:00 compute-0 systemd[1]: libpod-91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d.scope: Deactivated successfully.
Nov 25 09:21:00 compute-0 systemd[1]: libpod-91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d.scope: Consumed 1.027s CPU time.
Nov 25 09:21:00 compute-0 podman[424785]: 2025-11-25 09:21:00.206148556 +0000 UTC m=+1.303359264 container died 91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 09:21:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f20d92439824b7cba7e9538e80b0f1e5bc0e6fd0a3fa521781413b29f58604e-merged.mount: Deactivated successfully.
Nov 25 09:21:00 compute-0 podman[424785]: 2025-11-25 09:21:00.281973473 +0000 UTC m=+1.379184151 container remove 91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:21:00 compute-0 systemd[1]: libpod-conmon-91cceaf23f8e5c3e03b55a3e412a74e2bc180d128327c9a83b86c45aa819b38d.scope: Deactivated successfully.
Nov 25 09:21:00 compute-0 sudo[424653]: pam_unix(sudo:session): session closed for user root
Nov 25 09:21:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:21:00 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:21:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:21:00 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:21:00 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c2151727-2d2f-4bca-a401-ca77242fc724 does not exist
Nov 25 09:21:00 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 091011c9-35e9-4ac6-8d1a-ccf45d3403f2 does not exist
Nov 25 09:21:00 compute-0 sudo[424852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:21:00 compute-0 sudo[424852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:21:00 compute-0 sudo[424852]: pam_unix(sudo:session): session closed for user root
Nov 25 09:21:00 compute-0 sudo[424877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:21:00 compute-0 sudo[424877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:21:00 compute-0 sudo[424877]: pam_unix(sudo:session): session closed for user root
Nov 25 09:21:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3024: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:01 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:21:01 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:21:01 compute-0 ceph-mon[75015]: pgmap v3024: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:02 compute-0 nova_compute[253538]: 2025-11-25 09:21:02.557 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:21:02 compute-0 nova_compute[253538]: 2025-11-25 09:21:02.557 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:21:02 compute-0 nova_compute[253538]: 2025-11-25 09:21:02.557 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:21:02 compute-0 nova_compute[253538]: 2025-11-25 09:21:02.588 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:21:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3025: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:21:03 compute-0 nova_compute[253538]: 2025-11-25 09:21:03.564 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:03 compute-0 ceph-mon[75015]: pgmap v3025: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010122368870873217 of space, bias 1.0, pg target 0.3036710661261965 quantized to 32 (current 32)
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:21:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3026: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:04 compute-0 nova_compute[253538]: 2025-11-25 09:21:04.929 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:05 compute-0 nova_compute[253538]: 2025-11-25 09:21:05.579 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:21:05 compute-0 ceph-mon[75015]: pgmap v3026: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:06 compute-0 nova_compute[253538]: 2025-11-25 09:21:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:21:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3027: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:07 compute-0 ceph-mon[75015]: pgmap v3027: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:21:08 compute-0 nova_compute[253538]: 2025-11-25 09:21:08.567 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3028: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:09 compute-0 nova_compute[253538]: 2025-11-25 09:21:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:21:09 compute-0 nova_compute[253538]: 2025-11-25 09:21:09.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:21:09 compute-0 nova_compute[253538]: 2025-11-25 09:21:09.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:21:09 compute-0 ceph-mon[75015]: pgmap v3028: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:09 compute-0 nova_compute[253538]: 2025-11-25 09:21:09.980 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:10 compute-0 ovn_controller[152859]: 2025-11-25T09:21:10Z|01653|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 25 09:21:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3029: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:10 compute-0 ceph-mon[75015]: pgmap v3029: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3030: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:21:13 compute-0 nova_compute[253538]: 2025-11-25 09:21:13.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:21:13 compute-0 nova_compute[253538]: 2025-11-25 09:21:13.570 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:13 compute-0 ceph-mon[75015]: pgmap v3030: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:14 compute-0 nova_compute[253538]: 2025-11-25 09:21:14.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:21:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3031: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:14 compute-0 nova_compute[253538]: 2025-11-25 09:21:14.981 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:15 compute-0 ceph-mon[75015]: pgmap v3031: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3032: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:17 compute-0 nova_compute[253538]: 2025-11-25 09:21:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:21:17 compute-0 nova_compute[253538]: 2025-11-25 09:21:17.707 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:21:17 compute-0 nova_compute[253538]: 2025-11-25 09:21:17.708 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:21:17 compute-0 nova_compute[253538]: 2025-11-25 09:21:17.708 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:21:17 compute-0 nova_compute[253538]: 2025-11-25 09:21:17.708 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:21:17 compute-0 nova_compute[253538]: 2025-11-25 09:21:17.708 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:21:17 compute-0 ceph-mon[75015]: pgmap v3032: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:21:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/600632121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:21:18 compute-0 nova_compute[253538]: 2025-11-25 09:21:18.190 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:21:18 compute-0 podman[424926]: 2025-11-25 09:21:18.308483067 +0000 UTC m=+0.065300171 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 09:21:18 compute-0 podman[424925]: 2025-11-25 09:21:18.313634797 +0000 UTC m=+0.070558494 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:21:18 compute-0 nova_compute[253538]: 2025-11-25 09:21:18.385 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:21:18 compute-0 nova_compute[253538]: 2025-11-25 09:21:18.386 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3604MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:21:18 compute-0 nova_compute[253538]: 2025-11-25 09:21:18.386 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:21:18 compute-0 nova_compute[253538]: 2025-11-25 09:21:18.386 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:21:18 compute-0 nova_compute[253538]: 2025-11-25 09:21:18.441 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:21:18 compute-0 nova_compute[253538]: 2025-11-25 09:21:18.442 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:21:18 compute-0 nova_compute[253538]: 2025-11-25 09:21:18.463 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:21:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:21:18 compute-0 nova_compute[253538]: 2025-11-25 09:21:18.573 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3033: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:18 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/600632121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:21:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:21:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2662410292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:21:18 compute-0 nova_compute[253538]: 2025-11-25 09:21:18.968 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:21:18 compute-0 nova_compute[253538]: 2025-11-25 09:21:18.974 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:21:18 compute-0 nova_compute[253538]: 2025-11-25 09:21:18.997 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:21:19 compute-0 nova_compute[253538]: 2025-11-25 09:21:19.013 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:21:19 compute-0 nova_compute[253538]: 2025-11-25 09:21:19.013 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:21:19 compute-0 ceph-mon[75015]: pgmap v3033: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:19 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2662410292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:21:19 compute-0 nova_compute[253538]: 2025-11-25 09:21:19.984 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3034: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:21 compute-0 sshd-session[424986]: Invalid user hadoop from 193.32.162.151 port 40160
Nov 25 09:21:21 compute-0 ceph-mon[75015]: pgmap v3034: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:21 compute-0 sshd-session[424986]: Connection closed by invalid user hadoop 193.32.162.151 port 40160 [preauth]
Nov 25 09:21:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3035: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:21:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:21:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:21:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:21:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:21:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:21:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:21:23 compute-0 nova_compute[253538]: 2025-11-25 09:21:23.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:21:23 compute-0 nova_compute[253538]: 2025-11-25 09:21:23.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 09:21:23 compute-0 nova_compute[253538]: 2025-11-25 09:21:23.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:23 compute-0 ceph-mon[75015]: pgmap v3035: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3036: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:24 compute-0 ceph-mon[75015]: pgmap v3036: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:25 compute-0 nova_compute[253538]: 2025-11-25 09:21:25.037 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3037: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:27 compute-0 ceph-mon[75015]: pgmap v3037: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:21:28 compute-0 nova_compute[253538]: 2025-11-25 09:21:28.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3038: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:21:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1916060938' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:21:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:21:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1916060938' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:21:29 compute-0 podman[424988]: 2025-11-25 09:21:29.861673501 +0000 UTC m=+0.107522583 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 09:21:29 compute-0 ceph-mon[75015]: pgmap v3038: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1916060938' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:21:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1916060938' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:21:30 compute-0 nova_compute[253538]: 2025-11-25 09:21:30.039 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3039: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Nov 25 09:21:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Nov 25 09:21:30 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Nov 25 09:21:31 compute-0 ceph-mon[75015]: pgmap v3039: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:31 compute-0 ceph-mon[75015]: osdmap e272: 3 total, 3 up, 3 in
Nov 25 09:21:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3041: 321 pgs: 321 active+clean; 64 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Nov 25 09:21:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Nov 25 09:21:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Nov 25 09:21:32 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Nov 25 09:21:32 compute-0 ceph-mon[75015]: pgmap v3041: 321 pgs: 321 active+clean; 64 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Nov 25 09:21:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:21:33 compute-0 nova_compute[253538]: 2025-11-25 09:21:33.582 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:33 compute-0 ceph-mon[75015]: osdmap e273: 3 total, 3 up, 3 in
Nov 25 09:21:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3043: 321 pgs: 321 active+clean; 52 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 31 op/s
Nov 25 09:21:34 compute-0 ceph-mon[75015]: pgmap v3043: 321 pgs: 321 active+clean; 52 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 31 op/s
Nov 25 09:21:35 compute-0 nova_compute[253538]: 2025-11-25 09:21:35.042 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3044: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 64 op/s
Nov 25 09:21:37 compute-0 ceph-mon[75015]: pgmap v3044: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 64 op/s
Nov 25 09:21:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:21:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Nov 25 09:21:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Nov 25 09:21:38 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.520642) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498520680, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1837, "num_deletes": 256, "total_data_size": 2948287, "memory_usage": 2997320, "flush_reason": "Manual Compaction"}
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498530933, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 1754942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61647, "largest_seqno": 63483, "table_properties": {"data_size": 1748606, "index_size": 3282, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16338, "raw_average_key_size": 21, "raw_value_size": 1734710, "raw_average_value_size": 2244, "num_data_blocks": 149, "num_entries": 773, "num_filter_entries": 773, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062314, "oldest_key_time": 1764062314, "file_creation_time": 1764062498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 10448 microseconds, and 4799 cpu microseconds.
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.531083) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 1754942 bytes OK
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.531108) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.537656) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.537701) EVENT_LOG_v1 {"time_micros": 1764062498537692, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.537723) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2940416, prev total WAL file size 2951727, number of live WAL files 2.
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.538982) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353035' seq:72057594037927935, type:22 .. '6D6772737461740032373536' seq:0, type:0; will stop at (end)
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(1713KB)], [146(10101KB)]
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498539053, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 12098437, "oldest_snapshot_seqno": -1}
Nov 25 09:21:38 compute-0 nova_compute[253538]: 2025-11-25 09:21:38.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8291 keys, 9927801 bytes, temperature: kUnknown
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498600902, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 9927801, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9876035, "index_size": 29957, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20741, "raw_key_size": 216589, "raw_average_key_size": 26, "raw_value_size": 9731745, "raw_average_value_size": 1173, "num_data_blocks": 1169, "num_entries": 8291, "num_filter_entries": 8291, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.601176) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 9927801 bytes
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.602340) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.3 rd, 160.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.9 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(12.6) write-amplify(5.7) OK, records in: 8731, records dropped: 440 output_compression: NoCompression
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.602360) EVENT_LOG_v1 {"time_micros": 1764062498602350, "job": 90, "event": "compaction_finished", "compaction_time_micros": 61937, "compaction_time_cpu_micros": 23227, "output_level": 6, "num_output_files": 1, "total_output_size": 9927801, "num_input_records": 8731, "num_output_records": 8291, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498602837, "job": 90, "event": "table_file_deletion", "file_number": 148}
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498604978, "job": 90, "event": "table_file_deletion", "file_number": 146}
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.538886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.605097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.605104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.605105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.605107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.605108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.605672) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498605808, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 256, "num_deletes": 251, "total_data_size": 13330, "memory_usage": 19384, "flush_reason": "Manual Compaction"}
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Nov 25 09:21:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3046: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 3.6 KiB/s wr, 66 op/s
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498609760, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 13305, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63484, "largest_seqno": 63739, "table_properties": {"data_size": 11551, "index_size": 50, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 4640, "raw_average_key_size": 18, "raw_value_size": 8177, "raw_average_value_size": 31, "num_data_blocks": 2, "num_entries": 256, "num_filter_entries": 256, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062498, "oldest_key_time": 1764062498, "file_creation_time": 1764062498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 4117 microseconds, and 1484 cpu microseconds.
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.609811) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 13305 bytes OK
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.609836) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.611359) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.611378) EVENT_LOG_v1 {"time_micros": 1764062498611370, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.611409) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 11311, prev total WAL file size 11311, number of live WAL files 2.
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.611987) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(12KB)], [149(9695KB)]
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498612062, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 9941106, "oldest_snapshot_seqno": -1}
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8041 keys, 8216905 bytes, temperature: kUnknown
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498659480, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 8216905, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8168371, "index_size": 27301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20165, "raw_key_size": 212089, "raw_average_key_size": 26, "raw_value_size": 8030013, "raw_average_value_size": 998, "num_data_blocks": 1048, "num_entries": 8041, "num_filter_entries": 8041, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.659841) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 8216905 bytes
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.661115) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.1 rd, 172.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 9.5 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(1364.8) write-amplify(617.6) OK, records in: 8547, records dropped: 506 output_compression: NoCompression
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.661146) EVENT_LOG_v1 {"time_micros": 1764062498661133, "job": 92, "event": "compaction_finished", "compaction_time_micros": 47541, "compaction_time_cpu_micros": 25666, "output_level": 6, "num_output_files": 1, "total_output_size": 8216905, "num_input_records": 8547, "num_output_records": 8041, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498661349, "job": 92, "event": "table_file_deletion", "file_number": 151}
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062498664578, "job": 92, "event": "table_file_deletion", "file_number": 149}
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.611847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.664661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.664670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.664671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.664673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:21:38 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:21:38.664674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:21:39 compute-0 ceph-mon[75015]: osdmap e274: 3 total, 3 up, 3 in
Nov 25 09:21:39 compute-0 ceph-mon[75015]: pgmap v3046: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 3.6 KiB/s wr, 66 op/s
Nov 25 09:21:39 compute-0 nova_compute[253538]: 2025-11-25 09:21:39.566 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:21:39 compute-0 nova_compute[253538]: 2025-11-25 09:21:39.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 09:21:39 compute-0 nova_compute[253538]: 2025-11-25 09:21:39.607 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 09:21:40 compute-0 nova_compute[253538]: 2025-11-25 09:21:40.043 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3047: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 34 op/s
Nov 25 09:21:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:21:41.106 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:21:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:21:41.106 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:21:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:21:41.107 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:21:41 compute-0 ceph-mon[75015]: pgmap v3047: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 34 op/s
Nov 25 09:21:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3048: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 KiB/s wr, 28 op/s
Nov 25 09:21:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:21:43 compute-0 nova_compute[253538]: 2025-11-25 09:21:43.589 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:43 compute-0 ceph-mon[75015]: pgmap v3048: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 KiB/s wr, 28 op/s
Nov 25 09:21:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3049: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.7 KiB/s wr, 25 op/s
Nov 25 09:21:45 compute-0 nova_compute[253538]: 2025-11-25 09:21:45.045 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:45 compute-0 nova_compute[253538]: 2025-11-25 09:21:45.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:21:45 compute-0 ceph-mon[75015]: pgmap v3049: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.7 KiB/s wr, 25 op/s
Nov 25 09:21:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3050: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:47 compute-0 ceph-mon[75015]: pgmap v3050: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:21:48 compute-0 nova_compute[253538]: 2025-11-25 09:21:48.593 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3051: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:48 compute-0 podman[425018]: 2025-11-25 09:21:48.810178116 +0000 UTC m=+0.054108906 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:21:48 compute-0 podman[425017]: 2025-11-25 09:21:48.82609918 +0000 UTC m=+0.069825215 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 25 09:21:49 compute-0 ceph-mon[75015]: pgmap v3051: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:50 compute-0 nova_compute[253538]: 2025-11-25 09:21:50.049 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3052: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:52 compute-0 ceph-mon[75015]: pgmap v3052: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:52 compute-0 nova_compute[253538]: 2025-11-25 09:21:52.593 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:21:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3053: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:53 compute-0 ceph-mon[75015]: pgmap v3053: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:21:53
Nov 25 09:21:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:21:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:21:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'volumes', '.rgw.root', 'default.rgw.log', '.mgr', 'backups']
Nov 25 09:21:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:21:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:21:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:21:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:21:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:21:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:21:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:21:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:21:53 compute-0 nova_compute[253538]: 2025-11-25 09:21:53.596 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:21:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:21:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:21:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:21:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:21:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:21:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:21:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:21:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:21:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:21:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3054: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:55 compute-0 nova_compute[253538]: 2025-11-25 09:21:55.049 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:55 compute-0 ceph-mon[75015]: pgmap v3054: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3055: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:57 compute-0 ceph-mon[75015]: pgmap v3055: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:21:58 compute-0 nova_compute[253538]: 2025-11-25 09:21:58.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:21:58 compute-0 nova_compute[253538]: 2025-11-25 09:21:58.598 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:21:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3056: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:21:59 compute-0 ceph-mon[75015]: pgmap v3056: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:00 compute-0 nova_compute[253538]: 2025-11-25 09:22:00.050 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:00 compute-0 sudo[425056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:22:00 compute-0 sudo[425056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:00 compute-0 sudo[425056]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3057: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:00 compute-0 sudo[425087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:22:00 compute-0 sudo[425087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:00 compute-0 sudo[425087]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:00 compute-0 podman[425080]: 2025-11-25 09:22:00.704396827 +0000 UTC m=+0.096789529 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 25 09:22:00 compute-0 sudo[425129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:22:00 compute-0 sudo[425129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:00 compute-0 sudo[425129]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:00 compute-0 sudo[425156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:22:00 compute-0 sudo[425156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:01 compute-0 sudo[425156]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:22:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:22:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:22:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:22:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:22:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:22:01 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev eae0f1db-3b04-48ea-973e-51446c668f5b does not exist
Nov 25 09:22:01 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 31a77a54-cfad-4656-9879-07850ecf10e7 does not exist
Nov 25 09:22:01 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev eabdb512-9bff-4e26-9a2e-11dde76d1767 does not exist
Nov 25 09:22:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:22:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:22:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:22:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:22:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:22:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:22:01 compute-0 sudo[425212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:22:01 compute-0 sudo[425212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:01 compute-0 sudo[425212]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:01 compute-0 sudo[425237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:22:01 compute-0 sudo[425237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:01 compute-0 sudo[425237]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:01 compute-0 sudo[425262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:22:01 compute-0 sudo[425262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:01 compute-0 sudo[425262]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:01 compute-0 sudo[425287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:22:01 compute-0 sudo[425287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:01 compute-0 ceph-mon[75015]: pgmap v3057: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:01 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:22:01 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:22:01 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:22:01 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:22:01 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:22:01 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:22:01 compute-0 podman[425351]: 2025-11-25 09:22:01.940873026 +0000 UTC m=+0.041673007 container create bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 09:22:01 compute-0 systemd[1]: Started libpod-conmon-bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62.scope.
Nov 25 09:22:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:22:02 compute-0 podman[425351]: 2025-11-25 09:22:01.922709971 +0000 UTC m=+0.023509982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:22:02 compute-0 podman[425351]: 2025-11-25 09:22:02.028494375 +0000 UTC m=+0.129294386 container init bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 09:22:02 compute-0 podman[425351]: 2025-11-25 09:22:02.037814958 +0000 UTC m=+0.138614939 container start bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 09:22:02 compute-0 podman[425351]: 2025-11-25 09:22:02.043767191 +0000 UTC m=+0.144567202 container attach bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:22:02 compute-0 hungry_kirch[425367]: 167 167
Nov 25 09:22:02 compute-0 systemd[1]: libpod-bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62.scope: Deactivated successfully.
Nov 25 09:22:02 compute-0 conmon[425367]: conmon bbd1b7f1234447f02bde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62.scope/container/memory.events
Nov 25 09:22:02 compute-0 podman[425351]: 2025-11-25 09:22:02.051749218 +0000 UTC m=+0.152549219 container died bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:22:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4457175f1487932a659d67a08fb27dda186a440b731efd24d810d29fe1555d78-merged.mount: Deactivated successfully.
Nov 25 09:22:02 compute-0 podman[425351]: 2025-11-25 09:22:02.096266572 +0000 UTC m=+0.197066553 container remove bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:22:02 compute-0 systemd[1]: libpod-conmon-bbd1b7f1234447f02bdee02bfd29ef2f8565ff3e75fe8524a8a746edbbc8ef62.scope: Deactivated successfully.
Nov 25 09:22:02 compute-0 podman[425390]: 2025-11-25 09:22:02.269876825 +0000 UTC m=+0.044097473 container create 4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:22:02 compute-0 systemd[1]: Started libpod-conmon-4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096.scope.
Nov 25 09:22:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:22:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0731f039435028043a9ea05d7fe35ba2c8c333e0aeaf888661280ff465ed1ee2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:22:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0731f039435028043a9ea05d7fe35ba2c8c333e0aeaf888661280ff465ed1ee2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:22:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0731f039435028043a9ea05d7fe35ba2c8c333e0aeaf888661280ff465ed1ee2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:22:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0731f039435028043a9ea05d7fe35ba2c8c333e0aeaf888661280ff465ed1ee2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:22:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0731f039435028043a9ea05d7fe35ba2c8c333e0aeaf888661280ff465ed1ee2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:22:02 compute-0 podman[425390]: 2025-11-25 09:22:02.250374393 +0000 UTC m=+0.024595071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:22:02 compute-0 podman[425390]: 2025-11-25 09:22:02.350839042 +0000 UTC m=+0.125059710 container init 4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 09:22:02 compute-0 podman[425390]: 2025-11-25 09:22:02.361398561 +0000 UTC m=+0.135619209 container start 4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 09:22:02 compute-0 podman[425390]: 2025-11-25 09:22:02.364548416 +0000 UTC m=+0.138769084 container attach 4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 09:22:02 compute-0 nova_compute[253538]: 2025-11-25 09:22:02.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:22:02 compute-0 nova_compute[253538]: 2025-11-25 09:22:02.557 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:22:02 compute-0 nova_compute[253538]: 2025-11-25 09:22:02.558 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:22:02 compute-0 nova_compute[253538]: 2025-11-25 09:22:02.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:22:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3058: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:03 compute-0 gifted_swirles[425406]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:22:03 compute-0 gifted_swirles[425406]: --> relative data size: 1.0
Nov 25 09:22:03 compute-0 gifted_swirles[425406]: --> All data devices are unavailable
Nov 25 09:22:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:22:03 compute-0 systemd[1]: libpod-4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096.scope: Deactivated successfully.
Nov 25 09:22:03 compute-0 systemd[1]: libpod-4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096.scope: Consumed 1.131s CPU time.
Nov 25 09:22:03 compute-0 podman[425390]: 2025-11-25 09:22:03.542507999 +0000 UTC m=+1.316728667 container died 4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 09:22:03 compute-0 nova_compute[253538]: 2025-11-25 09:22:03.601 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0731f039435028043a9ea05d7fe35ba2c8c333e0aeaf888661280ff465ed1ee2-merged.mount: Deactivated successfully.
Nov 25 09:22:03 compute-0 podman[425390]: 2025-11-25 09:22:03.705271378 +0000 UTC m=+1.479492026 container remove 4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 09:22:03 compute-0 systemd[1]: libpod-conmon-4ddc64317917f2be3d01eed47cfb3284c0060d468c075dc0aa8dfd17ef751096.scope: Deactivated successfully.
Nov 25 09:22:03 compute-0 sudo[425287]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:03 compute-0 sudo[425447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:22:03 compute-0 sudo[425447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:03 compute-0 sudo[425447]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:03 compute-0 ceph-mon[75015]: pgmap v3058: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:03 compute-0 sudo[425472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:22:03 compute-0 sudo[425472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:03 compute-0 sudo[425472]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:03 compute-0 sudo[425497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:22:03 compute-0 sudo[425497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:03 compute-0 sudo[425497]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:04 compute-0 sudo[425522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:22:04 compute-0 sudo[425522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:04 compute-0 podman[425588]: 2025-11-25 09:22:04.392905093 +0000 UTC m=+0.043029613 container create 35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_varahamihira, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 09:22:04 compute-0 systemd[1]: Started libpod-conmon-35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae.scope.
Nov 25 09:22:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:22:04 compute-0 podman[425588]: 2025-11-25 09:22:04.374433059 +0000 UTC m=+0.024557609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:22:04 compute-0 podman[425588]: 2025-11-25 09:22:04.492117048 +0000 UTC m=+0.142241588 container init 35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_varahamihira, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 09:22:04 compute-0 podman[425588]: 2025-11-25 09:22:04.500816136 +0000 UTC m=+0.150940666 container start 35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:22:04 compute-0 podman[425588]: 2025-11-25 09:22:04.503723455 +0000 UTC m=+0.153847985 container attach 35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 09:22:04 compute-0 dreamy_varahamihira[425604]: 167 167
Nov 25 09:22:04 compute-0 systemd[1]: libpod-35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae.scope: Deactivated successfully.
Nov 25 09:22:04 compute-0 podman[425588]: 2025-11-25 09:22:04.510321225 +0000 UTC m=+0.160445755 container died 35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:22:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-517aff901b9ba4530065e9feac4ed62e5a6a8e02e669c770fa3000c44cf036c3-merged.mount: Deactivated successfully.
Nov 25 09:22:04 compute-0 podman[425588]: 2025-11-25 09:22:04.575868551 +0000 UTC m=+0.225993081 container remove 35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:22:04 compute-0 systemd[1]: libpod-conmon-35892926119a23fbdae6620e82e5ea6806af6325417644515750d0b9fdcb41ae.scope: Deactivated successfully.
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:22:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3059: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:04 compute-0 podman[425628]: 2025-11-25 09:22:04.785589619 +0000 UTC m=+0.071777698 container create 0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 09:22:04 compute-0 podman[425628]: 2025-11-25 09:22:04.742899605 +0000 UTC m=+0.029087714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:22:04 compute-0 systemd[1]: Started libpod-conmon-0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2.scope.
Nov 25 09:22:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:22:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12bb353cd3955ae142a9a7c531e87504b0864a74b18db3e4c88bb33873a2bfac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:22:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12bb353cd3955ae142a9a7c531e87504b0864a74b18db3e4c88bb33873a2bfac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:22:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12bb353cd3955ae142a9a7c531e87504b0864a74b18db3e4c88bb33873a2bfac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:22:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12bb353cd3955ae142a9a7c531e87504b0864a74b18db3e4c88bb33873a2bfac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:22:04 compute-0 podman[425628]: 2025-11-25 09:22:04.897935701 +0000 UTC m=+0.184123820 container init 0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:22:04 compute-0 podman[425628]: 2025-11-25 09:22:04.905451937 +0000 UTC m=+0.191640026 container start 0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:22:04 compute-0 podman[425628]: 2025-11-25 09:22:04.918526113 +0000 UTC m=+0.204714232 container attach 0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:22:05 compute-0 nova_compute[253538]: 2025-11-25 09:22:05.085 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:05 compute-0 nova_compute[253538]: 2025-11-25 09:22:05.561 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:22:05 compute-0 trusting_easley[425645]: {
Nov 25 09:22:05 compute-0 trusting_easley[425645]:     "0": [
Nov 25 09:22:05 compute-0 trusting_easley[425645]:         {
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "devices": [
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "/dev/loop3"
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             ],
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "lv_name": "ceph_lv0",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "lv_size": "21470642176",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "name": "ceph_lv0",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "tags": {
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.cluster_name": "ceph",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.crush_device_class": "",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.encrypted": "0",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.osd_id": "0",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.type": "block",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.vdo": "0"
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             },
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "type": "block",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "vg_name": "ceph_vg0"
Nov 25 09:22:05 compute-0 trusting_easley[425645]:         }
Nov 25 09:22:05 compute-0 trusting_easley[425645]:     ],
Nov 25 09:22:05 compute-0 trusting_easley[425645]:     "1": [
Nov 25 09:22:05 compute-0 trusting_easley[425645]:         {
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "devices": [
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "/dev/loop4"
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             ],
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "lv_name": "ceph_lv1",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "lv_size": "21470642176",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "name": "ceph_lv1",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "tags": {
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.cluster_name": "ceph",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.crush_device_class": "",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.encrypted": "0",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.osd_id": "1",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.type": "block",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.vdo": "0"
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             },
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "type": "block",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "vg_name": "ceph_vg1"
Nov 25 09:22:05 compute-0 trusting_easley[425645]:         }
Nov 25 09:22:05 compute-0 trusting_easley[425645]:     ],
Nov 25 09:22:05 compute-0 trusting_easley[425645]:     "2": [
Nov 25 09:22:05 compute-0 trusting_easley[425645]:         {
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "devices": [
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "/dev/loop5"
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             ],
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "lv_name": "ceph_lv2",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "lv_size": "21470642176",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "name": "ceph_lv2",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "tags": {
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.cluster_name": "ceph",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.crush_device_class": "",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.encrypted": "0",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.osd_id": "2",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.type": "block",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:                 "ceph.vdo": "0"
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             },
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "type": "block",
Nov 25 09:22:05 compute-0 trusting_easley[425645]:             "vg_name": "ceph_vg2"
Nov 25 09:22:05 compute-0 trusting_easley[425645]:         }
Nov 25 09:22:05 compute-0 trusting_easley[425645]:     ]
Nov 25 09:22:05 compute-0 trusting_easley[425645]: }
Nov 25 09:22:05 compute-0 systemd[1]: libpod-0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2.scope: Deactivated successfully.
Nov 25 09:22:05 compute-0 podman[425654]: 2025-11-25 09:22:05.851426096 +0000 UTC m=+0.031227432 container died 0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 09:22:05 compute-0 ceph-mon[75015]: pgmap v3059: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-12bb353cd3955ae142a9a7c531e87504b0864a74b18db3e4c88bb33873a2bfac-merged.mount: Deactivated successfully.
Nov 25 09:22:06 compute-0 podman[425654]: 2025-11-25 09:22:06.02619185 +0000 UTC m=+0.205993166 container remove 0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 09:22:06 compute-0 systemd[1]: libpod-conmon-0e6c91df2c9e4eb97e86877e76adf48dd769792dc559a3513ae167f3aa319ae2.scope: Deactivated successfully.
Nov 25 09:22:06 compute-0 sudo[425522]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:06 compute-0 sudo[425669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:22:06 compute-0 sudo[425669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:06 compute-0 sudo[425669]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:06 compute-0 sudo[425694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:22:06 compute-0 sudo[425694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:06 compute-0 sudo[425694]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:06 compute-0 sudo[425719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:22:06 compute-0 sudo[425719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:06 compute-0 sudo[425719]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:06 compute-0 sudo[425744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:22:06 compute-0 sudo[425744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:06 compute-0 nova_compute[253538]: 2025-11-25 09:22:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:22:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3060: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:06 compute-0 podman[425810]: 2025-11-25 09:22:06.725042543 +0000 UTC m=+0.050484588 container create d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:22:06 compute-0 systemd[1]: Started libpod-conmon-d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3.scope.
Nov 25 09:22:06 compute-0 podman[425810]: 2025-11-25 09:22:06.703743702 +0000 UTC m=+0.029185777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:22:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:22:06 compute-0 podman[425810]: 2025-11-25 09:22:06.824569565 +0000 UTC m=+0.150011640 container init d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:22:06 compute-0 podman[425810]: 2025-11-25 09:22:06.833389796 +0000 UTC m=+0.158831841 container start d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:22:06 compute-0 podman[425810]: 2025-11-25 09:22:06.839179794 +0000 UTC m=+0.164621869 container attach d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 09:22:06 compute-0 focused_davinci[425827]: 167 167
Nov 25 09:22:06 compute-0 systemd[1]: libpod-d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3.scope: Deactivated successfully.
Nov 25 09:22:06 compute-0 podman[425810]: 2025-11-25 09:22:06.840801638 +0000 UTC m=+0.166243683 container died d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 09:22:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ff1b9ce1bbe2965ab562cbb30c77ccf7cc528e1c88957481f238d83fe257604-merged.mount: Deactivated successfully.
Nov 25 09:22:06 compute-0 podman[425810]: 2025-11-25 09:22:06.98062719 +0000 UTC m=+0.306069235 container remove d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:22:06 compute-0 ceph-mon[75015]: pgmap v3060: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:07 compute-0 systemd[1]: libpod-conmon-d5df9424402254395571f05dc2915e3e59d808eba93ef6f80c32d08ce469d2e3.scope: Deactivated successfully.
Nov 25 09:22:07 compute-0 podman[425851]: 2025-11-25 09:22:07.163073724 +0000 UTC m=+0.053660704 container create a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kilby, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:22:07 compute-0 systemd[1]: Started libpod-conmon-a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7.scope.
Nov 25 09:22:07 compute-0 podman[425851]: 2025-11-25 09:22:07.137946759 +0000 UTC m=+0.028533759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:22:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e92befe67c996a87f97fc1590a4e5da38a603c92003526823d3b04b19f9423e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e92befe67c996a87f97fc1590a4e5da38a603c92003526823d3b04b19f9423e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e92befe67c996a87f97fc1590a4e5da38a603c92003526823d3b04b19f9423e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e92befe67c996a87f97fc1590a4e5da38a603c92003526823d3b04b19f9423e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:22:07 compute-0 podman[425851]: 2025-11-25 09:22:07.269760182 +0000 UTC m=+0.160347182 container init a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:22:07 compute-0 podman[425851]: 2025-11-25 09:22:07.280513626 +0000 UTC m=+0.171100626 container start a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kilby, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 09:22:07 compute-0 podman[425851]: 2025-11-25 09:22:07.283410055 +0000 UTC m=+0.173997045 container attach a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kilby, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 09:22:08 compute-0 naughty_kilby[425868]: {
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:         "osd_id": 1,
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:         "type": "bluestore"
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:     },
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:         "osd_id": 2,
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:         "type": "bluestore"
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:     },
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:         "osd_id": 0,
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:         "type": "bluestore"
Nov 25 09:22:08 compute-0 naughty_kilby[425868]:     }
Nov 25 09:22:08 compute-0 naughty_kilby[425868]: }
Nov 25 09:22:08 compute-0 systemd[1]: libpod-a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7.scope: Deactivated successfully.
Nov 25 09:22:08 compute-0 podman[425851]: 2025-11-25 09:22:08.421375938 +0000 UTC m=+1.311962928 container died a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:22:08 compute-0 systemd[1]: libpod-a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7.scope: Consumed 1.113s CPU time.
Nov 25 09:22:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e92befe67c996a87f97fc1590a4e5da38a603c92003526823d3b04b19f9423e-merged.mount: Deactivated successfully.
Nov 25 09:22:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:22:08 compute-0 podman[425851]: 2025-11-25 09:22:08.554983631 +0000 UTC m=+1.445570611 container remove a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kilby, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:22:08 compute-0 systemd[1]: libpod-conmon-a59136682f19c9da1f38f1fd7d35e017f3ecd2f2cf32f2ab45ece18d25d42cd7.scope: Deactivated successfully.
Nov 25 09:22:08 compute-0 sudo[425744]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:22:08 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:22:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:22:08 compute-0 nova_compute[253538]: 2025-11-25 09:22:08.603 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:08 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:22:08 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 3afc6391-063d-4346-9f9a-bcc214d6b308 does not exist
Nov 25 09:22:08 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev ee97b1b5-8776-4820-8106-a9a09f19c525 does not exist
Nov 25 09:22:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3061: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 25 09:22:08 compute-0 sudo[425913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:22:08 compute-0 sudo[425913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:08 compute-0 sudo[425913]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:08 compute-0 sudo[425938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:22:08 compute-0 sudo[425938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:22:08 compute-0 sudo[425938]: pam_unix(sudo:session): session closed for user root
Nov 25 09:22:09 compute-0 nova_compute[253538]: 2025-11-25 09:22:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:22:09 compute-0 nova_compute[253538]: 2025-11-25 09:22:09.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:22:09 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:22:09 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:22:09 compute-0 ceph-mon[75015]: pgmap v3061: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 25 09:22:10 compute-0 nova_compute[253538]: 2025-11-25 09:22:10.085 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3062: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 25 09:22:11 compute-0 nova_compute[253538]: 2025-11-25 09:22:11.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:22:11 compute-0 ceph-mon[75015]: pgmap v3062: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 25 09:22:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3063: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:22:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:22:13 compute-0 nova_compute[253538]: 2025-11-25 09:22:13.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:22:13 compute-0 nova_compute[253538]: 2025-11-25 09:22:13.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:13 compute-0 ceph-mon[75015]: pgmap v3063: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:22:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3064: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:22:15 compute-0 nova_compute[253538]: 2025-11-25 09:22:15.088 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:15 compute-0 ceph-mon[75015]: pgmap v3064: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:22:16 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 09:22:16 compute-0 systemd[1]: virtsecretd.service: Consumed 1.190s CPU time.
Nov 25 09:22:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3065: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:22:17 compute-0 ceph-mon[75015]: pgmap v3065: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:22:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:22:18 compute-0 nova_compute[253538]: 2025-11-25 09:22:18.624 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3066: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:22:18 compute-0 nova_compute[253538]: 2025-11-25 09:22:18.842 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:22:19 compute-0 nova_compute[253538]: 2025-11-25 09:22:19.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:22:19 compute-0 nova_compute[253538]: 2025-11-25 09:22:19.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:22:19 compute-0 nova_compute[253538]: 2025-11-25 09:22:19.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:22:19 compute-0 nova_compute[253538]: 2025-11-25 09:22:19.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:22:19 compute-0 nova_compute[253538]: 2025-11-25 09:22:19.579 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:22:19 compute-0 nova_compute[253538]: 2025-11-25 09:22:19.580 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:22:19 compute-0 ceph-mon[75015]: pgmap v3066: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:22:19 compute-0 podman[425985]: 2025-11-25 09:22:19.821415124 +0000 UTC m=+0.060667504 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 09:22:19 compute-0 podman[425968]: 2025-11-25 09:22:19.834784619 +0000 UTC m=+0.075052337 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 25 09:22:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:22:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2640351867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:22:20 compute-0 nova_compute[253538]: 2025-11-25 09:22:20.060 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:22:20 compute-0 nova_compute[253538]: 2025-11-25 09:22:20.090 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:20 compute-0 nova_compute[253538]: 2025-11-25 09:22:20.290 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:22:20 compute-0 nova_compute[253538]: 2025-11-25 09:22:20.292 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3592MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:22:20 compute-0 nova_compute[253538]: 2025-11-25 09:22:20.293 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:22:20 compute-0 nova_compute[253538]: 2025-11-25 09:22:20.293 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:22:20 compute-0 nova_compute[253538]: 2025-11-25 09:22:20.376 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:22:20 compute-0 nova_compute[253538]: 2025-11-25 09:22:20.377 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:22:20 compute-0 nova_compute[253538]: 2025-11-25 09:22:20.401 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:22:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3067: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 25 09:22:20 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2640351867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:22:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:22:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/75560992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:22:20 compute-0 nova_compute[253538]: 2025-11-25 09:22:20.873 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:22:20 compute-0 nova_compute[253538]: 2025-11-25 09:22:20.880 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:22:20 compute-0 nova_compute[253538]: 2025-11-25 09:22:20.897 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:22:20 compute-0 nova_compute[253538]: 2025-11-25 09:22:20.899 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:22:20 compute-0 nova_compute[253538]: 2025-11-25 09:22:20.900 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:22:21 compute-0 ceph-mon[75015]: pgmap v3067: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 25 09:22:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/75560992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:22:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3068: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 25 09:22:23 compute-0 ceph-mon[75015]: pgmap v3068: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 25 09:22:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:22:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:22:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:22:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:22:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:22:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:22:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:22:23 compute-0 nova_compute[253538]: 2025-11-25 09:22:23.627 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3069: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:25 compute-0 nova_compute[253538]: 2025-11-25 09:22:25.091 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:25 compute-0 ceph-mon[75015]: pgmap v3069: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3070: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:27 compute-0 ceph-mon[75015]: pgmap v3070: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:22:28 compute-0 nova_compute[253538]: 2025-11-25 09:22:28.632 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3071: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:22:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3420193235' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:22:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:22:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3420193235' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:22:29 compute-0 ceph-mon[75015]: pgmap v3071: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3420193235' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:22:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3420193235' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:22:30 compute-0 nova_compute[253538]: 2025-11-25 09:22:30.093 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3072: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:30 compute-0 podman[426042]: 2025-11-25 09:22:30.839344958 +0000 UTC m=+0.093310385 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 25 09:22:31 compute-0 ceph-mon[75015]: pgmap v3072: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3073: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:22:33 compute-0 nova_compute[253538]: 2025-11-25 09:22:33.635 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:34 compute-0 ceph-mon[75015]: pgmap v3073: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3074: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:35 compute-0 nova_compute[253538]: 2025-11-25 09:22:35.126 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:35 compute-0 ceph-mon[75015]: pgmap v3074: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3075: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:37 compute-0 ceph-mon[75015]: pgmap v3075: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:22:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3076: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:38 compute-0 nova_compute[253538]: 2025-11-25 09:22:38.664 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:39 compute-0 ceph-mon[75015]: pgmap v3076: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:40 compute-0 nova_compute[253538]: 2025-11-25 09:22:40.127 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3077: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:22:41.108 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:22:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:22:41.108 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:22:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:22:41.109 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:22:41 compute-0 ceph-mon[75015]: pgmap v3077: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3078: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:42 compute-0 sshd-session[426066]: Received disconnect from 45.78.217.205 port 39190:11: Bye Bye [preauth]
Nov 25 09:22:42 compute-0 sshd-session[426066]: Disconnected from authenticating user root 45.78.217.205 port 39190 [preauth]
Nov 25 09:22:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:22:43 compute-0 nova_compute[253538]: 2025-11-25 09:22:43.667 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:43 compute-0 ceph-mon[75015]: pgmap v3078: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3079: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:45 compute-0 nova_compute[253538]: 2025-11-25 09:22:45.129 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:45 compute-0 ceph-mon[75015]: pgmap v3079: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3080: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:47 compute-0 ceph-mon[75015]: pgmap v3080: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:22:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3081: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:48 compute-0 nova_compute[253538]: 2025-11-25 09:22:48.670 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:49 compute-0 ceph-mon[75015]: pgmap v3081: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:50 compute-0 nova_compute[253538]: 2025-11-25 09:22:50.131 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3082: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:50 compute-0 podman[426069]: 2025-11-25 09:22:50.801850277 +0000 UTC m=+0.047660020 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 09:22:50 compute-0 podman[426068]: 2025-11-25 09:22:50.818249584 +0000 UTC m=+0.064709755 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:22:51 compute-0 ceph-mon[75015]: pgmap v3082: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3083: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:22:53
Nov 25 09:22:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:22:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:22:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'images', 'volumes', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'default.rgw.log', '.mgr']
Nov 25 09:22:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:22:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:22:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:22:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:22:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:22:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:22:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:22:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:22:53 compute-0 nova_compute[253538]: 2025-11-25 09:22:53.672 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:53 compute-0 ceph-mon[75015]: pgmap v3083: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:22:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:22:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:22:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:22:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:22:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:22:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:22:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:22:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:22:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:22:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3084: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:54 compute-0 ceph-mon[75015]: pgmap v3084: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:55 compute-0 nova_compute[253538]: 2025-11-25 09:22:55.133 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:55 compute-0 nova_compute[253538]: 2025-11-25 09:22:55.901 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:22:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3085: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:57 compute-0 ceph-mon[75015]: pgmap v3085: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:22:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3086: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:22:58 compute-0 nova_compute[253538]: 2025-11-25 09:22:58.675 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:22:59 compute-0 ceph-mon[75015]: pgmap v3086: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:00 compute-0 nova_compute[253538]: 2025-11-25 09:23:00.135 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:00 compute-0 nova_compute[253538]: 2025-11-25 09:23:00.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:23:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3087: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:01 compute-0 ceph-mon[75015]: pgmap v3087: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:01 compute-0 podman[426108]: 2025-11-25 09:23:01.854465186 +0000 UTC m=+0.107469851 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:23:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3088: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:23:03 compute-0 nova_compute[253538]: 2025-11-25 09:23:03.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:23:03 compute-0 nova_compute[253538]: 2025-11-25 09:23:03.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:23:03 compute-0 nova_compute[253538]: 2025-11-25 09:23:03.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:23:03 compute-0 nova_compute[253538]: 2025-11-25 09:23:03.565 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:23:03 compute-0 nova_compute[253538]: 2025-11-25 09:23:03.677 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:03 compute-0 ceph-mon[75015]: pgmap v3088: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:23:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3089: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:05 compute-0 nova_compute[253538]: 2025-11-25 09:23:05.136 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:05 compute-0 ceph-mon[75015]: pgmap v3089: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3090: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:07 compute-0 nova_compute[253538]: 2025-11-25 09:23:07.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:23:07 compute-0 nova_compute[253538]: 2025-11-25 09:23:07.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:23:07 compute-0 ceph-mon[75015]: pgmap v3090: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:23:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3091: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:08 compute-0 nova_compute[253538]: 2025-11-25 09:23:08.681 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:08 compute-0 sudo[426135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:23:08 compute-0 sudo[426135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:08 compute-0 sudo[426135]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:08 compute-0 sudo[426162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:23:08 compute-0 sudo[426162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:08 compute-0 sudo[426162]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:08 compute-0 sudo[426187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:23:08 compute-0 sudo[426187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:08 compute-0 sudo[426187]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:09 compute-0 sudo[426212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 09:23:09 compute-0 sudo[426212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:09 compute-0 sudo[426212]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:23:09 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:23:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:23:09 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:23:09 compute-0 sudo[426257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:23:09 compute-0 sudo[426257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:09 compute-0 sudo[426257]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:09 compute-0 sudo[426282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:23:09 compute-0 sudo[426282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:09 compute-0 sudo[426282]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:09 compute-0 sudo[426307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:23:09 compute-0 sudo[426307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:09 compute-0 sudo[426307]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:09 compute-0 sudo[426332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:23:09 compute-0 sudo[426332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:09 compute-0 nova_compute[253538]: 2025-11-25 09:23:09.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:23:09 compute-0 nova_compute[253538]: 2025-11-25 09:23:09.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:23:09 compute-0 ceph-mon[75015]: pgmap v3091: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:09 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:23:09 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:23:09 compute-0 sudo[426332]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:10 compute-0 sudo[426387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:23:10 compute-0 sudo[426387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:10 compute-0 sudo[426387]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:10 compute-0 nova_compute[253538]: 2025-11-25 09:23:10.162 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:10 compute-0 sudo[426412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:23:10 compute-0 sudo[426412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:10 compute-0 sudo[426412]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:10 compute-0 sudo[426437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:23:10 compute-0 sudo[426437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:10 compute-0 sudo[426437]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:10 compute-0 sudo[426462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- inventory --format=json-pretty --filter-for-batch
Nov 25 09:23:10 compute-0 sudo[426462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3092: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:10 compute-0 podman[426528]: 2025-11-25 09:23:10.700784174 +0000 UTC m=+0.079682244 container create 6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:23:10 compute-0 podman[426528]: 2025-11-25 09:23:10.646488984 +0000 UTC m=+0.025387144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:23:10 compute-0 systemd[1]: Started libpod-conmon-6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde.scope.
Nov 25 09:23:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:23:10 compute-0 podman[426528]: 2025-11-25 09:23:10.817679291 +0000 UTC m=+0.196577381 container init 6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:23:10 compute-0 podman[426528]: 2025-11-25 09:23:10.826579444 +0000 UTC m=+0.205477504 container start 6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:23:10 compute-0 angry_bell[426545]: 167 167
Nov 25 09:23:10 compute-0 systemd[1]: libpod-6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde.scope: Deactivated successfully.
Nov 25 09:23:10 compute-0 podman[426528]: 2025-11-25 09:23:10.835031434 +0000 UTC m=+0.213929564 container attach 6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:23:10 compute-0 podman[426528]: 2025-11-25 09:23:10.835689332 +0000 UTC m=+0.214587422 container died 6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:23:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8073415963f06c7c1cbbcf1051b12de6b432bc7e46e8a934451066151877388e-merged.mount: Deactivated successfully.
Nov 25 09:23:10 compute-0 podman[426528]: 2025-11-25 09:23:10.91521123 +0000 UTC m=+0.294109300 container remove 6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 09:23:10 compute-0 systemd[1]: libpod-conmon-6e67cf6e3d379638152c4735c15126924ed44da5cde8043b31e26f6eec1ecdde.scope: Deactivated successfully.
Nov 25 09:23:11 compute-0 podman[426567]: 2025-11-25 09:23:11.152863879 +0000 UTC m=+0.083068096 container create b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:23:11 compute-0 podman[426567]: 2025-11-25 09:23:11.10815263 +0000 UTC m=+0.038356927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:23:11 compute-0 systemd[1]: Started libpod-conmon-b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990.scope.
Nov 25 09:23:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:23:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72a246bf61fb6a02c881b10e3a193a6f82797672075f4ddf80c01ac15cd048d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72a246bf61fb6a02c881b10e3a193a6f82797672075f4ddf80c01ac15cd048d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72a246bf61fb6a02c881b10e3a193a6f82797672075f4ddf80c01ac15cd048d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72a246bf61fb6a02c881b10e3a193a6f82797672075f4ddf80c01ac15cd048d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:11 compute-0 podman[426567]: 2025-11-25 09:23:11.264547513 +0000 UTC m=+0.194751760 container init b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:23:11 compute-0 podman[426567]: 2025-11-25 09:23:11.274561347 +0000 UTC m=+0.204765564 container start b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:23:11 compute-0 podman[426567]: 2025-11-25 09:23:11.281090794 +0000 UTC m=+0.211295011 container attach b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 09:23:11 compute-0 nova_compute[253538]: 2025-11-25 09:23:11.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:23:11 compute-0 ceph-mon[75015]: pgmap v3092: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3093: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]: [
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:     {
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:         "available": false,
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:         "ceph_device": false,
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:         "lsm_data": {},
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:         "lvs": [],
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:         "path": "/dev/sr0",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:         "rejected_reasons": [
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "Insufficient space (<5GB)",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "Has a FileSystem"
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:         ],
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:         "sys_api": {
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "actuators": null,
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "device_nodes": "sr0",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "devname": "sr0",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "human_readable_size": "482.00 KB",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "id_bus": "ata",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "model": "QEMU DVD-ROM",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "nr_requests": "2",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "parent": "/dev/sr0",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "partitions": {},
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "path": "/dev/sr0",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "removable": "1",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "rev": "2.5+",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "ro": "0",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "rotational": "1",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "sas_address": "",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "sas_device_handle": "",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "scheduler_mode": "mq-deadline",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "sectors": 0,
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "sectorsize": "2048",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "size": 493568.0,
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "support_discard": "2048",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "type": "disk",
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:             "vendor": "QEMU"
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:         }
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]:     }
Nov 25 09:23:12 compute-0 gifted_meninsky[426583]: ]
Nov 25 09:23:12 compute-0 systemd[1]: libpod-b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990.scope: Deactivated successfully.
Nov 25 09:23:12 compute-0 podman[426567]: 2025-11-25 09:23:12.833544087 +0000 UTC m=+1.763748294 container died b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:23:12 compute-0 systemd[1]: libpod-b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990.scope: Consumed 1.603s CPU time.
Nov 25 09:23:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b72a246bf61fb6a02c881b10e3a193a6f82797672075f4ddf80c01ac15cd048d-merged.mount: Deactivated successfully.
Nov 25 09:23:12 compute-0 podman[426567]: 2025-11-25 09:23:12.90701961 +0000 UTC m=+1.837223827 container remove b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:23:12 compute-0 systemd[1]: libpod-conmon-b6762fbe7fd09f6adea916478dc0f09395a4c5e9e9c57ba4260ef646ab794990.scope: Deactivated successfully.
Nov 25 09:23:12 compute-0 sudo[426462]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:23:12 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:23:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:23:12 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:23:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:23:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:23:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:23:12 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:23:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:23:12 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:23:12 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 973b0880-37fd-4120-8bcd-1170032b75fc does not exist
Nov 25 09:23:12 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 8287d27b-d122-4fd8-92a4-7f6039ee1d89 does not exist
Nov 25 09:23:12 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev de7f56ac-9082-4abf-adc9-a48dd64c1da8 does not exist
Nov 25 09:23:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:23:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:23:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:23:12 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:23:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:23:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:23:13 compute-0 sudo[428426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:23:13 compute-0 sudo[428426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:13 compute-0 sudo[428426]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:13 compute-0 sudo[428451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:23:13 compute-0 sudo[428451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:13 compute-0 sudo[428451]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:13 compute-0 sudo[428476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:23:13 compute-0 sudo[428476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:13 compute-0 sudo[428476]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:13 compute-0 sudo[428501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:23:13 compute-0 sudo[428501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:23:13 compute-0 podman[428567]: 2025-11-25 09:23:13.600016373 +0000 UTC m=+0.037478023 container create 5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Nov 25 09:23:13 compute-0 systemd[1]: Started libpod-conmon-5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7.scope.
Nov 25 09:23:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:23:13 compute-0 podman[428567]: 2025-11-25 09:23:13.671745888 +0000 UTC m=+0.109207548 container init 5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:23:13 compute-0 podman[428567]: 2025-11-25 09:23:13.679353556 +0000 UTC m=+0.116815206 container start 5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_benz, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:23:13 compute-0 podman[428567]: 2025-11-25 09:23:13.582942288 +0000 UTC m=+0.020403978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:23:13 compute-0 podman[428567]: 2025-11-25 09:23:13.682996335 +0000 UTC m=+0.120457985 container attach 5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_benz, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 09:23:13 compute-0 nova_compute[253538]: 2025-11-25 09:23:13.682 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:13 compute-0 eloquent_benz[428583]: 167 167
Nov 25 09:23:13 compute-0 systemd[1]: libpod-5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7.scope: Deactivated successfully.
Nov 25 09:23:13 compute-0 podman[428567]: 2025-11-25 09:23:13.687736174 +0000 UTC m=+0.125197824 container died 5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_benz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:23:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-94a9c39dc40a2f862173b6897472e6c06bdbedb1ca2478d2784bbb00c62ef05a-merged.mount: Deactivated successfully.
Nov 25 09:23:13 compute-0 podman[428567]: 2025-11-25 09:23:13.726026259 +0000 UTC m=+0.163487909 container remove 5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_benz, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:23:13 compute-0 systemd[1]: libpod-conmon-5a907070d971aa98d78754540d785c8521a29e5ea368373ac44af45064bf3ac7.scope: Deactivated successfully.
Nov 25 09:23:13 compute-0 ceph-mon[75015]: pgmap v3093: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:23:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:23:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:23:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:23:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:23:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:23:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:23:13 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:23:13 compute-0 podman[428608]: 2025-11-25 09:23:13.902066887 +0000 UTC m=+0.046930890 container create 9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 09:23:13 compute-0 systemd[1]: Started libpod-conmon-9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1.scope.
Nov 25 09:23:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:23:13 compute-0 podman[428608]: 2025-11-25 09:23:13.884886769 +0000 UTC m=+0.029750792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:23:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c402c643f0764473e56bc69a13aa9c54977401402a0cb846c8f2f4f86303a83e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c402c643f0764473e56bc69a13aa9c54977401402a0cb846c8f2f4f86303a83e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c402c643f0764473e56bc69a13aa9c54977401402a0cb846c8f2f4f86303a83e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c402c643f0764473e56bc69a13aa9c54977401402a0cb846c8f2f4f86303a83e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c402c643f0764473e56bc69a13aa9c54977401402a0cb846c8f2f4f86303a83e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:14 compute-0 podman[428608]: 2025-11-25 09:23:14.001496948 +0000 UTC m=+0.146360971 container init 9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 09:23:14 compute-0 podman[428608]: 2025-11-25 09:23:14.014080851 +0000 UTC m=+0.158944854 container start 9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:23:14 compute-0 podman[428608]: 2025-11-25 09:23:14.017573127 +0000 UTC m=+0.162437150 container attach 9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:23:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3094: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:15 compute-0 competent_mirzakhani[428624]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:23:15 compute-0 competent_mirzakhani[428624]: --> relative data size: 1.0
Nov 25 09:23:15 compute-0 competent_mirzakhani[428624]: --> All data devices are unavailable
Nov 25 09:23:15 compute-0 systemd[1]: libpod-9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1.scope: Deactivated successfully.
Nov 25 09:23:15 compute-0 podman[428608]: 2025-11-25 09:23:15.070743939 +0000 UTC m=+1.215607942 container died 9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 09:23:15 compute-0 systemd[1]: libpod-9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1.scope: Consumed 1.010s CPU time.
Nov 25 09:23:15 compute-0 nova_compute[253538]: 2025-11-25 09:23:15.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c402c643f0764473e56bc69a13aa9c54977401402a0cb846c8f2f4f86303a83e-merged.mount: Deactivated successfully.
Nov 25 09:23:15 compute-0 podman[428608]: 2025-11-25 09:23:15.446901773 +0000 UTC m=+1.591765776 container remove 9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:23:15 compute-0 systemd[1]: libpod-conmon-9bda23b08168298a487d05d7019d36dd197078bbf8c7b483be137d3f5cf6adb1.scope: Deactivated successfully.
Nov 25 09:23:15 compute-0 sudo[428501]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:15 compute-0 nova_compute[253538]: 2025-11-25 09:23:15.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:23:15 compute-0 sudo[428667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:23:15 compute-0 sudo[428667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:15 compute-0 sudo[428667]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:15 compute-0 sudo[428692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:23:15 compute-0 sudo[428692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:15 compute-0 sudo[428692]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:15 compute-0 sudo[428717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:23:15 compute-0 sudo[428717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:15 compute-0 sudo[428717]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:15 compute-0 sudo[428742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:23:15 compute-0 sudo[428742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:15 compute-0 ceph-mon[75015]: pgmap v3094: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:16 compute-0 podman[428804]: 2025-11-25 09:23:16.156667292 +0000 UTC m=+0.063525312 container create 15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:23:16 compute-0 systemd[1]: Started libpod-conmon-15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943.scope.
Nov 25 09:23:16 compute-0 podman[428804]: 2025-11-25 09:23:16.121788492 +0000 UTC m=+0.028646552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:23:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:23:16 compute-0 podman[428804]: 2025-11-25 09:23:16.246599094 +0000 UTC m=+0.153457154 container init 15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:23:16 compute-0 podman[428804]: 2025-11-25 09:23:16.258215361 +0000 UTC m=+0.165073381 container start 15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 09:23:16 compute-0 podman[428804]: 2025-11-25 09:23:16.260572355 +0000 UTC m=+0.167430405 container attach 15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:23:16 compute-0 quizzical_varahamihira[428818]: 167 167
Nov 25 09:23:16 compute-0 systemd[1]: libpod-15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943.scope: Deactivated successfully.
Nov 25 09:23:16 compute-0 podman[428804]: 2025-11-25 09:23:16.263293649 +0000 UTC m=+0.170151669 container died 15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 09:23:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-4258785ca6da09afc181c177b78d82b1bc24775f9097944fa86bf0ec53422a8d-merged.mount: Deactivated successfully.
Nov 25 09:23:16 compute-0 sshd-session[426141]: Received disconnect from 45.78.222.2 port 33138:11: Bye Bye [preauth]
Nov 25 09:23:16 compute-0 sshd-session[426141]: Disconnected from 45.78.222.2 port 33138 [preauth]
Nov 25 09:23:16 compute-0 podman[428804]: 2025-11-25 09:23:16.419371204 +0000 UTC m=+0.326229224 container remove 15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 09:23:16 compute-0 systemd[1]: libpod-conmon-15641a7164dda2256a061501e7c286e41cba3c035aa1a8509a0158e0efac1943.scope: Deactivated successfully.
Nov 25 09:23:16 compute-0 podman[428842]: 2025-11-25 09:23:16.619194752 +0000 UTC m=+0.069458784 container create abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:23:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3095: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:16 compute-0 podman[428842]: 2025-11-25 09:23:16.570797123 +0000 UTC m=+0.021061175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:23:16 compute-0 systemd[1]: Started libpod-conmon-abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc.scope.
Nov 25 09:23:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5e439d02db9952bb9519fed8c2c02b88b5a57bb8ecbd0112f333956497a280/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5e439d02db9952bb9519fed8c2c02b88b5a57bb8ecbd0112f333956497a280/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5e439d02db9952bb9519fed8c2c02b88b5a57bb8ecbd0112f333956497a280/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5e439d02db9952bb9519fed8c2c02b88b5a57bb8ecbd0112f333956497a280/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:16 compute-0 podman[428842]: 2025-11-25 09:23:16.822182556 +0000 UTC m=+0.272446608 container init abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:23:16 compute-0 podman[428842]: 2025-11-25 09:23:16.829252138 +0000 UTC m=+0.279516170 container start abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 09:23:16 compute-0 podman[428842]: 2025-11-25 09:23:16.837165474 +0000 UTC m=+0.287429506 container attach abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:23:17 compute-0 funny_engelbart[428859]: {
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:     "0": [
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:         {
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "devices": [
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "/dev/loop3"
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             ],
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "lv_name": "ceph_lv0",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "lv_size": "21470642176",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "name": "ceph_lv0",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "tags": {
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.cluster_name": "ceph",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.crush_device_class": "",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.encrypted": "0",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.osd_id": "0",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.type": "block",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.vdo": "0"
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             },
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "type": "block",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "vg_name": "ceph_vg0"
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:         }
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:     ],
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:     "1": [
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:         {
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "devices": [
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "/dev/loop4"
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             ],
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "lv_name": "ceph_lv1",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "lv_size": "21470642176",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "name": "ceph_lv1",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "tags": {
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.cluster_name": "ceph",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.crush_device_class": "",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.encrypted": "0",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.osd_id": "1",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.type": "block",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.vdo": "0"
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             },
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "type": "block",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "vg_name": "ceph_vg1"
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:         }
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:     ],
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:     "2": [
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:         {
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "devices": [
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "/dev/loop5"
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             ],
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "lv_name": "ceph_lv2",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "lv_size": "21470642176",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "name": "ceph_lv2",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "tags": {
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.cluster_name": "ceph",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.crush_device_class": "",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.encrypted": "0",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.osd_id": "2",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.type": "block",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:                 "ceph.vdo": "0"
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             },
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "type": "block",
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:             "vg_name": "ceph_vg2"
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:         }
Nov 25 09:23:17 compute-0 funny_engelbart[428859]:     ]
Nov 25 09:23:17 compute-0 funny_engelbart[428859]: }
Nov 25 09:23:17 compute-0 systemd[1]: libpod-abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc.scope: Deactivated successfully.
Nov 25 09:23:17 compute-0 podman[428842]: 2025-11-25 09:23:17.635731735 +0000 UTC m=+1.085995767 container died abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:23:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e5e439d02db9952bb9519fed8c2c02b88b5a57bb8ecbd0112f333956497a280-merged.mount: Deactivated successfully.
Nov 25 09:23:17 compute-0 podman[428842]: 2025-11-25 09:23:17.691335731 +0000 UTC m=+1.141599763 container remove abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:23:17 compute-0 systemd[1]: libpod-conmon-abf984efbe49ba70b5a6fdcc93c8b7609e3d1d85511720a01f83e2f9b29068fc.scope: Deactivated successfully.
Nov 25 09:23:17 compute-0 sudo[428742]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:17 compute-0 sudo[428880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:23:17 compute-0 sudo[428880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:17 compute-0 sudo[428880]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:17 compute-0 sudo[428905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:23:17 compute-0 sudo[428905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:17 compute-0 sudo[428905]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:17 compute-0 ceph-mon[75015]: pgmap v3095: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:17 compute-0 sudo[428930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:23:17 compute-0 sudo[428930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:17 compute-0 sudo[428930]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:17 compute-0 sudo[428955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:23:17 compute-0 sudo[428955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:18 compute-0 podman[429018]: 2025-11-25 09:23:18.295106921 +0000 UTC m=+0.044401081 container create 007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 09:23:18 compute-0 systemd[1]: Started libpod-conmon-007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e.scope.
Nov 25 09:23:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:23:18 compute-0 podman[429018]: 2025-11-25 09:23:18.273368648 +0000 UTC m=+0.022662848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:23:18 compute-0 podman[429018]: 2025-11-25 09:23:18.371579386 +0000 UTC m=+0.120873606 container init 007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:23:18 compute-0 podman[429018]: 2025-11-25 09:23:18.378411452 +0000 UTC m=+0.127705622 container start 007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 09:23:18 compute-0 confident_black[429034]: 167 167
Nov 25 09:23:18 compute-0 systemd[1]: libpod-007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e.scope: Deactivated successfully.
Nov 25 09:23:18 compute-0 podman[429018]: 2025-11-25 09:23:18.382970556 +0000 UTC m=+0.132264826 container attach 007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 09:23:18 compute-0 podman[429018]: 2025-11-25 09:23:18.383788198 +0000 UTC m=+0.133082368 container died 007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:23:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-cac98c9c302d7235eea2a7aa8fd21d2ed7e5e6122f3244eb3a8de79b5cfb487c-merged.mount: Deactivated successfully.
Nov 25 09:23:18 compute-0 podman[429018]: 2025-11-25 09:23:18.433051592 +0000 UTC m=+0.182345762 container remove 007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 09:23:18 compute-0 systemd[1]: libpod-conmon-007e35880de50966834a4d8c5906a53abfd7aa6490678fe76985c001d4ab1c1e.scope: Deactivated successfully.
Nov 25 09:23:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:23:18 compute-0 podman[429060]: 2025-11-25 09:23:18.59403446 +0000 UTC m=+0.043605399 container create 17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:23:18 compute-0 systemd[1]: Started libpod-conmon-17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe.scope.
Nov 25 09:23:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:23:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3096: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a94d9802bf78c458a65e2786fce1aab99d3b0c80b551efc86d040817977332a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a94d9802bf78c458a65e2786fce1aab99d3b0c80b551efc86d040817977332a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a94d9802bf78c458a65e2786fce1aab99d3b0c80b551efc86d040817977332a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a94d9802bf78c458a65e2786fce1aab99d3b0c80b551efc86d040817977332a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:23:18 compute-0 podman[429060]: 2025-11-25 09:23:18.577555291 +0000 UTC m=+0.027126250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:23:18 compute-0 podman[429060]: 2025-11-25 09:23:18.672963571 +0000 UTC m=+0.122534530 container init 17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:23:18 compute-0 podman[429060]: 2025-11-25 09:23:18.681030932 +0000 UTC m=+0.130601871 container start 17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:23:18 compute-0 podman[429060]: 2025-11-25 09:23:18.685563115 +0000 UTC m=+0.135134104 container attach 17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:23:18 compute-0 nova_compute[253538]: 2025-11-25 09:23:18.686 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:19 compute-0 nova_compute[253538]: 2025-11-25 09:23:19.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]: {
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:         "osd_id": 1,
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:         "type": "bluestore"
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:     },
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:         "osd_id": 2,
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:         "type": "bluestore"
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:     },
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:         "osd_id": 0,
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:         "type": "bluestore"
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]:     }
Nov 25 09:23:19 compute-0 interesting_heisenberg[429077]: }
Nov 25 09:23:19 compute-0 systemd[1]: libpod-17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe.scope: Deactivated successfully.
Nov 25 09:23:19 compute-0 systemd[1]: libpod-17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe.scope: Consumed 1.021s CPU time.
Nov 25 09:23:19 compute-0 podman[429060]: 2025-11-25 09:23:19.697848652 +0000 UTC m=+1.147419611 container died 17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 09:23:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a94d9802bf78c458a65e2786fce1aab99d3b0c80b551efc86d040817977332a-merged.mount: Deactivated successfully.
Nov 25 09:23:19 compute-0 podman[429060]: 2025-11-25 09:23:19.753283444 +0000 UTC m=+1.202854383 container remove 17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:23:19 compute-0 systemd[1]: libpod-conmon-17ee83225fb8b161e6b89be3d33060f8203eb46d8c2a71cdeb333baf4d45d5fe.scope: Deactivated successfully.
Nov 25 09:23:19 compute-0 sudo[428955]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:23:19 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:23:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:23:19 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:23:19 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 2b0f72af-ab7d-49d1-a5bf-2327280f6b71 does not exist
Nov 25 09:23:19 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 2508a217-5be7-4000-9bd3-674eb995826f does not exist
Nov 25 09:23:19 compute-0 sudo[429124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:23:19 compute-0 sudo[429124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:19 compute-0 sudo[429124]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:19 compute-0 ceph-mon[75015]: pgmap v3096: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:19 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:23:19 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:23:19 compute-0 sudo[429149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:23:19 compute-0 sudo[429149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:23:19 compute-0 sudo[429149]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:20 compute-0 nova_compute[253538]: 2025-11-25 09:23:20.167 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3097: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:21 compute-0 nova_compute[253538]: 2025-11-25 09:23:21.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:23:21 compute-0 nova_compute[253538]: 2025-11-25 09:23:21.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:23:21 compute-0 nova_compute[253538]: 2025-11-25 09:23:21.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:23:21 compute-0 nova_compute[253538]: 2025-11-25 09:23:21.584 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:23:21 compute-0 nova_compute[253538]: 2025-11-25 09:23:21.584 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:23:21 compute-0 nova_compute[253538]: 2025-11-25 09:23:21.584 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:23:21 compute-0 ceph-mon[75015]: pgmap v3097: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:21 compute-0 podman[429177]: 2025-11-25 09:23:21.956479768 +0000 UTC m=+0.056916623 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:23:21 compute-0 podman[429195]: 2025-11-25 09:23:21.980963735 +0000 UTC m=+0.081603756 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 09:23:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:23:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1720521840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:23:22 compute-0 nova_compute[253538]: 2025-11-25 09:23:22.059 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:23:22 compute-0 nova_compute[253538]: 2025-11-25 09:23:22.270 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:23:22 compute-0 nova_compute[253538]: 2025-11-25 09:23:22.272 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3580MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:23:22 compute-0 nova_compute[253538]: 2025-11-25 09:23:22.272 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:23:22 compute-0 nova_compute[253538]: 2025-11-25 09:23:22.272 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:23:22 compute-0 nova_compute[253538]: 2025-11-25 09:23:22.499 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:23:22 compute-0 nova_compute[253538]: 2025-11-25 09:23:22.499 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:23:22 compute-0 nova_compute[253538]: 2025-11-25 09:23:22.590 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 09:23:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3098: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:22 compute-0 nova_compute[253538]: 2025-11-25 09:23:22.671 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 09:23:22 compute-0 nova_compute[253538]: 2025-11-25 09:23:22.672 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:23:22 compute-0 nova_compute[253538]: 2025-11-25 09:23:22.688 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 09:23:22 compute-0 nova_compute[253538]: 2025-11-25 09:23:22.709 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 09:23:22 compute-0 nova_compute[253538]: 2025-11-25 09:23:22.727 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:23:22 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1720521840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:23:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:23:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4240611041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:23:23 compute-0 nova_compute[253538]: 2025-11-25 09:23:23.216 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:23:23 compute-0 nova_compute[253538]: 2025-11-25 09:23:23.222 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:23:23 compute-0 nova_compute[253538]: 2025-11-25 09:23:23.242 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:23:23 compute-0 nova_compute[253538]: 2025-11-25 09:23:23.244 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:23:23 compute-0 nova_compute[253538]: 2025-11-25 09:23:23.244 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:23:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:23:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:23:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:23:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:23:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:23:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:23:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:23:23 compute-0 nova_compute[253538]: 2025-11-25 09:23:23.688 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:23 compute-0 ceph-mon[75015]: pgmap v3098: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4240611041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:23:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3099: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:24 compute-0 ceph-mon[75015]: pgmap v3099: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:25 compute-0 nova_compute[253538]: 2025-11-25 09:23:25.212 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3100: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:27 compute-0 ceph-mon[75015]: pgmap v3100: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:23:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3101: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:28 compute-0 nova_compute[253538]: 2025-11-25 09:23:28.691 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:23:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3652649450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:23:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:23:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3652649450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:23:29 compute-0 ceph-mon[75015]: pgmap v3101: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:23:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3652649450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:23:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3652649450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:23:29 compute-0 sshd-session[429255]: Invalid user hadoop from 193.32.162.151 port 55142
Nov 25 09:23:30 compute-0 sshd-session[429255]: Connection closed by invalid user hadoop 193.32.162.151 port 55142 [preauth]
Nov 25 09:23:30 compute-0 nova_compute[253538]: 2025-11-25 09:23:30.214 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3102: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 0 op/s
Nov 25 09:23:31 compute-0 ceph-mon[75015]: pgmap v3102: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 0 op/s
Nov 25 09:23:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3103: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 0 op/s
Nov 25 09:23:32 compute-0 podman[429257]: 2025-11-25 09:23:32.860235215 +0000 UTC m=+0.111608804 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 25 09:23:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:23:33 compute-0 nova_compute[253538]: 2025-11-25 09:23:33.693 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:33 compute-0 ceph-mon[75015]: pgmap v3103: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 0 op/s
Nov 25 09:23:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3104: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 09:23:35 compute-0 nova_compute[253538]: 2025-11-25 09:23:35.216 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:35 compute-0 ceph-mon[75015]: pgmap v3104: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 09:23:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3105: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 09:23:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Nov 25 09:23:37 compute-0 ceph-mon[75015]: pgmap v3105: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 09:23:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Nov 25 09:23:37 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Nov 25 09:23:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:23:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3107: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 818 B/s wr, 25 op/s
Nov 25 09:23:38 compute-0 nova_compute[253538]: 2025-11-25 09:23:38.695 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:38 compute-0 ceph-mon[75015]: osdmap e275: 3 total, 3 up, 3 in
Nov 25 09:23:39 compute-0 ceph-mon[75015]: pgmap v3107: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 818 B/s wr, 25 op/s
Nov 25 09:23:40 compute-0 nova_compute[253538]: 2025-11-25 09:23:40.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3108: 321 pgs: 321 active+clean; 37 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.1 KiB/s wr, 26 op/s
Nov 25 09:23:41 compute-0 ceph-mon[75015]: pgmap v3108: 321 pgs: 321 active+clean; 37 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.1 KiB/s wr, 26 op/s
Nov 25 09:23:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:23:41.110 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:23:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:23:41.110 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:23:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:23:41.111 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:23:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Nov 25 09:23:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Nov 25 09:23:42 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Nov 25 09:23:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3110: 321 pgs: 321 active+clean; 21 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Nov 25 09:23:42 compute-0 sshd-session[429284]: Connection closed by authenticating user root 171.244.51.45 port 45014 [preauth]
Nov 25 09:23:43 compute-0 ceph-mon[75015]: osdmap e276: 3 total, 3 up, 3 in
Nov 25 09:23:43 compute-0 ceph-mon[75015]: pgmap v3110: 321 pgs: 321 active+clean; 21 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Nov 25 09:23:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:23:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Nov 25 09:23:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Nov 25 09:23:43 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Nov 25 09:23:43 compute-0 nova_compute[253538]: 2025-11-25 09:23:43.698 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:44 compute-0 ceph-mon[75015]: osdmap e277: 3 total, 3 up, 3 in
Nov 25 09:23:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3112: 321 pgs: 321 active+clean; 13 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.0 KiB/s wr, 65 op/s
Nov 25 09:23:45 compute-0 nova_compute[253538]: 2025-11-25 09:23:45.220 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:45 compute-0 ceph-mon[75015]: pgmap v3112: 321 pgs: 321 active+clean; 13 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.0 KiB/s wr, 65 op/s
Nov 25 09:23:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3113: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.6 KiB/s wr, 41 op/s
Nov 25 09:23:47 compute-0 ceph-mon[75015]: pgmap v3113: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.6 KiB/s wr, 41 op/s
Nov 25 09:23:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:23:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Nov 25 09:23:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Nov 25 09:23:48 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Nov 25 09:23:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3115: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.7 KiB/s wr, 46 op/s
Nov 25 09:23:48 compute-0 nova_compute[253538]: 2025-11-25 09:23:48.700 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:49 compute-0 ceph-mon[75015]: osdmap e278: 3 total, 3 up, 3 in
Nov 25 09:23:49 compute-0 ceph-mon[75015]: pgmap v3115: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.7 KiB/s wr, 46 op/s
Nov 25 09:23:50 compute-0 nova_compute[253538]: 2025-11-25 09:23:50.224 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3116: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 38 op/s
Nov 25 09:23:51 compute-0 ceph-mon[75015]: pgmap v3116: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 38 op/s
Nov 25 09:23:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3117: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 897 B/s wr, 6 op/s
Nov 25 09:23:52 compute-0 podman[429287]: 2025-11-25 09:23:52.825215474 +0000 UTC m=+0.064236242 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 09:23:52 compute-0 podman[429286]: 2025-11-25 09:23:52.835419631 +0000 UTC m=+0.080320490 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 09:23:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:23:53
Nov 25 09:23:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:23:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:23:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'volumes', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.data']
Nov 25 09:23:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:23:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:23:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:23:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:23:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:23:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:23:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:23:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:23:53 compute-0 nova_compute[253538]: 2025-11-25 09:23:53.703 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:53 compute-0 ceph-mon[75015]: pgmap v3117: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 897 B/s wr, 6 op/s
Nov 25 09:23:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:23:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:23:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:23:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:23:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:23:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:23:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:23:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:23:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:23:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:23:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3118: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 818 B/s wr, 6 op/s
Nov 25 09:23:55 compute-0 ceph-mon[75015]: pgmap v3118: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 818 B/s wr, 6 op/s
Nov 25 09:23:55 compute-0 nova_compute[253538]: 2025-11-25 09:23:55.225 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Nov 25 09:23:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Nov 25 09:23:56 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Nov 25 09:23:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3120: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1007 B/s wr, 14 op/s
Nov 25 09:23:57 compute-0 ceph-mon[75015]: osdmap e279: 3 total, 3 up, 3 in
Nov 25 09:23:57 compute-0 ceph-mon[75015]: pgmap v3120: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1007 B/s wr, 14 op/s
Nov 25 09:23:57 compute-0 nova_compute[253538]: 2025-11-25 09:23:57.244 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:23:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:23:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3121: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Nov 25 09:23:58 compute-0 nova_compute[253538]: 2025-11-25 09:23:58.739 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:23:59 compute-0 ceph-mon[75015]: pgmap v3121: 321 pgs: 321 active+clean; 457 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Nov 25 09:24:00 compute-0 nova_compute[253538]: 2025-11-25 09:24:00.226 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:00 compute-0 nova_compute[253538]: 2025-11-25 09:24:00.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:24:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3122: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 09:24:01 compute-0 ceph-mon[75015]: pgmap v3122: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 09:24:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3123: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 09:24:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:24:03 compute-0 nova_compute[253538]: 2025-11-25 09:24:03.741 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:03 compute-0 ceph-mon[75015]: pgmap v3123: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 09:24:03 compute-0 podman[429322]: 2025-11-25 09:24:03.85988657 +0000 UTC m=+0.102592118 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:24:04 compute-0 nova_compute[253538]: 2025-11-25 09:24:04.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:24:04 compute-0 nova_compute[253538]: 2025-11-25 09:24:04.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:24:04 compute-0 nova_compute[253538]: 2025-11-25 09:24:04.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:24:04 compute-0 nova_compute[253538]: 2025-11-25 09:24:04.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:24:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3124: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 09:24:05 compute-0 nova_compute[253538]: 2025-11-25 09:24:05.228 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:05 compute-0 ceph-mon[75015]: pgmap v3124: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 09:24:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3125: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 9.9 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Nov 25 09:24:07 compute-0 ceph-mon[75015]: pgmap v3125: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 9.9 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Nov 25 09:24:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:24:08 compute-0 nova_compute[253538]: 2025-11-25 09:24:08.561 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:24:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3126: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Nov 25 09:24:08 compute-0 nova_compute[253538]: 2025-11-25 09:24:08.745 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:09 compute-0 nova_compute[253538]: 2025-11-25 09:24:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:24:09 compute-0 ceph-mon[75015]: pgmap v3126: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Nov 25 09:24:10 compute-0 nova_compute[253538]: 2025-11-25 09:24:10.229 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:10 compute-0 nova_compute[253538]: 2025-11-25 09:24:10.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:24:10 compute-0 nova_compute[253538]: 2025-11-25 09:24:10.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:24:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3127: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Nov 25 09:24:11 compute-0 ceph-mon[75015]: pgmap v3127: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Nov 25 09:24:12 compute-0 nova_compute[253538]: 2025-11-25 09:24:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:24:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3128: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:24:13 compute-0 nova_compute[253538]: 2025-11-25 09:24:13.748 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:13 compute-0 ceph-mon[75015]: pgmap v3128: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3129: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:15 compute-0 ceph-mon[75015]: pgmap v3129: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:15 compute-0 nova_compute[253538]: 2025-11-25 09:24:15.231 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:15 compute-0 nova_compute[253538]: 2025-11-25 09:24:15.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:24:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3130: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:17 compute-0 ceph-mon[75015]: pgmap v3130: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:24:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3131: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:18 compute-0 nova_compute[253538]: 2025-11-25 09:24:18.750 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:19 compute-0 ceph-mon[75015]: pgmap v3131: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:20 compute-0 sudo[429348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:24:20 compute-0 sudo[429348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:20 compute-0 sudo[429348]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:20 compute-0 sudo[429373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:24:20 compute-0 sudo[429373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:20 compute-0 sudo[429373]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:20 compute-0 sudo[429398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:24:20 compute-0 sudo[429398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:20 compute-0 sudo[429398]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:20 compute-0 sudo[429423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:24:20 compute-0 sudo[429423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:20 compute-0 nova_compute[253538]: 2025-11-25 09:24:20.233 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3132: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:20 compute-0 sudo[429423]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 09:24:20 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:24:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:24:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:24:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:24:20 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:24:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:24:20 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:24:20 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 5b3a8aa6-3fd8-41bf-9141-f4417ec61663 does not exist
Nov 25 09:24:20 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 198ba67a-df82-4450-a3f2-13372c330822 does not exist
Nov 25 09:24:20 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 6bb50488-8344-493f-9981-6a614c67982b does not exist
Nov 25 09:24:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:24:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:24:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:24:20 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:24:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:24:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:24:20 compute-0 sudo[429479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:24:20 compute-0 sudo[429479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:20 compute-0 sudo[429479]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:20 compute-0 sudo[429504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:24:20 compute-0 sudo[429504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:20 compute-0 sudo[429504]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:24:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:24:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:24:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:24:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:24:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:24:20 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:24:21 compute-0 sudo[429529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:24:21 compute-0 sudo[429529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:21 compute-0 sudo[429529]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:21 compute-0 sudo[429554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:24:21 compute-0 sudo[429554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:21 compute-0 podman[429618]: 2025-11-25 09:24:21.462937798 +0000 UTC m=+0.024602791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:24:21 compute-0 podman[429618]: 2025-11-25 09:24:21.804809298 +0000 UTC m=+0.366474261 container create 6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:24:21 compute-0 systemd[1]: Started libpod-conmon-6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5.scope.
Nov 25 09:24:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:24:22 compute-0 ceph-mon[75015]: pgmap v3132: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:22 compute-0 podman[429618]: 2025-11-25 09:24:22.069693979 +0000 UTC m=+0.631358972 container init 6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 09:24:22 compute-0 podman[429618]: 2025-11-25 09:24:22.078490389 +0000 UTC m=+0.640155382 container start 6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 09:24:22 compute-0 condescending_dijkstra[429635]: 167 167
Nov 25 09:24:22 compute-0 systemd[1]: libpod-6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5.scope: Deactivated successfully.
Nov 25 09:24:22 compute-0 podman[429618]: 2025-11-25 09:24:22.150463701 +0000 UTC m=+0.712128674 container attach 6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 25 09:24:22 compute-0 podman[429618]: 2025-11-25 09:24:22.151481709 +0000 UTC m=+0.713146762 container died 6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:24:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf9474d5906eca69310c566eeb4f5c30bc1f4e2689985c4e08696e7318a470fb-merged.mount: Deactivated successfully.
Nov 25 09:24:22 compute-0 podman[429618]: 2025-11-25 09:24:22.465572381 +0000 UTC m=+1.027237344 container remove 6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:24:22 compute-0 systemd[1]: libpod-conmon-6f46c5eafc93ba5d6c2bf188e7b46e3a0790edb7877bb9be4f3329dd2dc62fb5.scope: Deactivated successfully.
Nov 25 09:24:22 compute-0 nova_compute[253538]: 2025-11-25 09:24:22.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:24:22 compute-0 nova_compute[253538]: 2025-11-25 09:24:22.576 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:24:22 compute-0 nova_compute[253538]: 2025-11-25 09:24:22.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:24:22 compute-0 nova_compute[253538]: 2025-11-25 09:24:22.577 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:24:22 compute-0 nova_compute[253538]: 2025-11-25 09:24:22.577 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:24:22 compute-0 nova_compute[253538]: 2025-11-25 09:24:22.578 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:24:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3133: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:22 compute-0 podman[429659]: 2025-11-25 09:24:22.731373068 +0000 UTC m=+0.094378745 container create ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:24:22 compute-0 podman[429659]: 2025-11-25 09:24:22.661216015 +0000 UTC m=+0.024221792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:24:22 compute-0 systemd[1]: Started libpod-conmon-ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429.scope.
Nov 25 09:24:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:24:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3801211cae971bd1c60d27764e66634998bfdd35bdf44b031d5dc8909f0f146/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:24:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3801211cae971bd1c60d27764e66634998bfdd35bdf44b031d5dc8909f0f146/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:24:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3801211cae971bd1c60d27764e66634998bfdd35bdf44b031d5dc8909f0f146/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:24:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3801211cae971bd1c60d27764e66634998bfdd35bdf44b031d5dc8909f0f146/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:24:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3801211cae971bd1c60d27764e66634998bfdd35bdf44b031d5dc8909f0f146/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:24:22 compute-0 podman[429659]: 2025-11-25 09:24:22.892413378 +0000 UTC m=+0.255419065 container init ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 09:24:22 compute-0 podman[429659]: 2025-11-25 09:24:22.901998109 +0000 UTC m=+0.265003786 container start ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:24:22 compute-0 podman[429659]: 2025-11-25 09:24:22.907704205 +0000 UTC m=+0.270709902 container attach ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 09:24:22 compute-0 podman[429696]: 2025-11-25 09:24:22.955260611 +0000 UTC m=+0.077659678 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:24:22 compute-0 podman[429698]: 2025-11-25 09:24:22.959113466 +0000 UTC m=+0.082266124 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 09:24:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:24:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842557725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:24:23 compute-0 nova_compute[253538]: 2025-11-25 09:24:23.001 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:24:23 compute-0 ceph-mon[75015]: pgmap v3133: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1842557725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:24:23 compute-0 nova_compute[253538]: 2025-11-25 09:24:23.202 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:24:23 compute-0 nova_compute[253538]: 2025-11-25 09:24:23.204 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3551MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:24:23 compute-0 nova_compute[253538]: 2025-11-25 09:24:23.204 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:24:23 compute-0 nova_compute[253538]: 2025-11-25 09:24:23.204 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:24:23 compute-0 nova_compute[253538]: 2025-11-25 09:24:23.286 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:24:23 compute-0 nova_compute[253538]: 2025-11-25 09:24:23.287 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:24:23 compute-0 nova_compute[253538]: 2025-11-25 09:24:23.305 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:24:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:24:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:24:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:24:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:24:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:24:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:24:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:24:23 compute-0 nova_compute[253538]: 2025-11-25 09:24:23.753 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:24:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1799143828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:24:23 compute-0 nova_compute[253538]: 2025-11-25 09:24:23.778 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:24:23 compute-0 nova_compute[253538]: 2025-11-25 09:24:23.784 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:24:23 compute-0 nova_compute[253538]: 2025-11-25 09:24:23.797 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:24:23 compute-0 nova_compute[253538]: 2025-11-25 09:24:23.800 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:24:23 compute-0 nova_compute[253538]: 2025-11-25 09:24:23.801 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:24:24 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1799143828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:24:24 compute-0 busy_gauss[429695]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:24:24 compute-0 busy_gauss[429695]: --> relative data size: 1.0
Nov 25 09:24:24 compute-0 busy_gauss[429695]: --> All data devices are unavailable
Nov 25 09:24:24 compute-0 systemd[1]: libpod-ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429.scope: Deactivated successfully.
Nov 25 09:24:24 compute-0 systemd[1]: libpod-ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429.scope: Consumed 1.069s CPU time.
Nov 25 09:24:24 compute-0 podman[429659]: 2025-11-25 09:24:24.363741788 +0000 UTC m=+1.726747505 container died ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 09:24:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3801211cae971bd1c60d27764e66634998bfdd35bdf44b031d5dc8909f0f146-merged.mount: Deactivated successfully.
Nov 25 09:24:24 compute-0 podman[429659]: 2025-11-25 09:24:24.600624756 +0000 UTC m=+1.963630473 container remove ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gauss, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:24:24 compute-0 systemd[1]: libpod-conmon-ff32e3dafad48c2eb70443b028e154512f6d52f4d0ea8f08b5b39d13b8f6b429.scope: Deactivated successfully.
Nov 25 09:24:24 compute-0 sudo[429554]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3134: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:24 compute-0 sudo[429800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:24:24 compute-0 sudo[429800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:24 compute-0 sudo[429800]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:24 compute-0 sudo[429825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:24:24 compute-0 sudo[429825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:24 compute-0 sudo[429825]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:24 compute-0 sudo[429850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:24:24 compute-0 sudo[429850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:24 compute-0 sudo[429850]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:24 compute-0 sudo[429875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:24:24 compute-0 sudo[429875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:25 compute-0 ceph-mon[75015]: pgmap v3134: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:25 compute-0 nova_compute[253538]: 2025-11-25 09:24:25.234 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:25 compute-0 podman[429940]: 2025-11-25 09:24:25.329657061 +0000 UTC m=+0.060633594 container create 9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:24:25 compute-0 systemd[1]: Started libpod-conmon-9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63.scope.
Nov 25 09:24:25 compute-0 podman[429940]: 2025-11-25 09:24:25.298729738 +0000 UTC m=+0.029706291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:24:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:24:25 compute-0 podman[429940]: 2025-11-25 09:24:25.4370937 +0000 UTC m=+0.168070323 container init 9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_black, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:24:25 compute-0 podman[429940]: 2025-11-25 09:24:25.444558664 +0000 UTC m=+0.175535207 container start 9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_black, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:24:25 compute-0 peaceful_black[429956]: 167 167
Nov 25 09:24:25 compute-0 systemd[1]: libpod-9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63.scope: Deactivated successfully.
Nov 25 09:24:25 compute-0 conmon[429956]: conmon 9eee93f695bd189282d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63.scope/container/memory.events
Nov 25 09:24:25 compute-0 podman[429940]: 2025-11-25 09:24:25.473185824 +0000 UTC m=+0.204162357 container attach 9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_black, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:24:25 compute-0 podman[429940]: 2025-11-25 09:24:25.473636906 +0000 UTC m=+0.204613439 container died 9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:24:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c06a83d4316436aaabf1971b1507ccb748b0ea409445c71956122961893d2422-merged.mount: Deactivated successfully.
Nov 25 09:24:25 compute-0 podman[429940]: 2025-11-25 09:24:25.606871689 +0000 UTC m=+0.337848222 container remove 9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:24:25 compute-0 systemd[1]: libpod-conmon-9eee93f695bd189282d615147f47dccb5c67ed8d9da0d34331d5cfed69809e63.scope: Deactivated successfully.
Nov 25 09:24:25 compute-0 podman[429982]: 2025-11-25 09:24:25.78226605 +0000 UTC m=+0.056486391 container create f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_spence, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:24:25 compute-0 systemd[1]: Started libpod-conmon-f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa.scope.
Nov 25 09:24:25 compute-0 podman[429982]: 2025-11-25 09:24:25.751373898 +0000 UTC m=+0.025594269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:24:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e0795b41b2b1a8ba5eb2d7996995c0144a41533a271ebfd5d3bb2fabc55262/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e0795b41b2b1a8ba5eb2d7996995c0144a41533a271ebfd5d3bb2fabc55262/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e0795b41b2b1a8ba5eb2d7996995c0144a41533a271ebfd5d3bb2fabc55262/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e0795b41b2b1a8ba5eb2d7996995c0144a41533a271ebfd5d3bb2fabc55262/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:24:25 compute-0 podman[429982]: 2025-11-25 09:24:25.886968735 +0000 UTC m=+0.161189106 container init f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_spence, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 09:24:25 compute-0 podman[429982]: 2025-11-25 09:24:25.894599742 +0000 UTC m=+0.168820083 container start f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 25 09:24:25 compute-0 podman[429982]: 2025-11-25 09:24:25.913652312 +0000 UTC m=+0.187872673 container attach f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_spence, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 09:24:26 compute-0 gracious_spence[429998]: {
Nov 25 09:24:26 compute-0 gracious_spence[429998]:     "0": [
Nov 25 09:24:26 compute-0 gracious_spence[429998]:         {
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "devices": [
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "/dev/loop3"
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             ],
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "lv_name": "ceph_lv0",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "lv_size": "21470642176",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "name": "ceph_lv0",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "tags": {
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.cluster_name": "ceph",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.crush_device_class": "",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.encrypted": "0",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.osd_id": "0",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.type": "block",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.vdo": "0"
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             },
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "type": "block",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "vg_name": "ceph_vg0"
Nov 25 09:24:26 compute-0 gracious_spence[429998]:         }
Nov 25 09:24:26 compute-0 gracious_spence[429998]:     ],
Nov 25 09:24:26 compute-0 gracious_spence[429998]:     "1": [
Nov 25 09:24:26 compute-0 gracious_spence[429998]:         {
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "devices": [
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "/dev/loop4"
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             ],
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "lv_name": "ceph_lv1",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "lv_size": "21470642176",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "name": "ceph_lv1",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "tags": {
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.cluster_name": "ceph",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.crush_device_class": "",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.encrypted": "0",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.osd_id": "1",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.type": "block",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.vdo": "0"
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             },
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "type": "block",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "vg_name": "ceph_vg1"
Nov 25 09:24:26 compute-0 gracious_spence[429998]:         }
Nov 25 09:24:26 compute-0 gracious_spence[429998]:     ],
Nov 25 09:24:26 compute-0 gracious_spence[429998]:     "2": [
Nov 25 09:24:26 compute-0 gracious_spence[429998]:         {
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "devices": [
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "/dev/loop5"
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             ],
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "lv_name": "ceph_lv2",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "lv_size": "21470642176",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "name": "ceph_lv2",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "tags": {
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.cluster_name": "ceph",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.crush_device_class": "",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.encrypted": "0",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.osd_id": "2",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.type": "block",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:                 "ceph.vdo": "0"
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             },
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "type": "block",
Nov 25 09:24:26 compute-0 gracious_spence[429998]:             "vg_name": "ceph_vg2"
Nov 25 09:24:26 compute-0 gracious_spence[429998]:         }
Nov 25 09:24:26 compute-0 gracious_spence[429998]:     ]
Nov 25 09:24:26 compute-0 gracious_spence[429998]: }
Nov 25 09:24:26 compute-0 systemd[1]: libpod-f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa.scope: Deactivated successfully.
Nov 25 09:24:26 compute-0 podman[429982]: 2025-11-25 09:24:26.674196236 +0000 UTC m=+0.948416577 container died f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_spence, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:24:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3135: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-48e0795b41b2b1a8ba5eb2d7996995c0144a41533a271ebfd5d3bb2fabc55262-merged.mount: Deactivated successfully.
Nov 25 09:24:27 compute-0 podman[429982]: 2025-11-25 09:24:27.003618556 +0000 UTC m=+1.277838907 container remove f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_spence, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:24:27 compute-0 systemd[1]: libpod-conmon-f3c8f0e5f665212bdd72ffd1531974c015e5b981efff4f30a2082994067a34aa.scope: Deactivated successfully.
Nov 25 09:24:27 compute-0 sudo[429875]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:27 compute-0 sudo[430019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:24:27 compute-0 sudo[430019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:27 compute-0 sudo[430019]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:27 compute-0 sudo[430044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:24:27 compute-0 sudo[430044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:27 compute-0 sudo[430044]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:27 compute-0 sudo[430069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:24:27 compute-0 sudo[430069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:27 compute-0 sudo[430069]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:27 compute-0 sudo[430094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:24:27 compute-0 sudo[430094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:27 compute-0 podman[430160]: 2025-11-25 09:24:27.660873864 +0000 UTC m=+0.048571535 container create a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tesla, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 09:24:27 compute-0 systemd[1]: Started libpod-conmon-a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4.scope.
Nov 25 09:24:27 compute-0 podman[430160]: 2025-11-25 09:24:27.636428917 +0000 UTC m=+0.024126608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:24:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:24:27 compute-0 podman[430160]: 2025-11-25 09:24:27.753662124 +0000 UTC m=+0.141359815 container init a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:24:27 compute-0 podman[430160]: 2025-11-25 09:24:27.762122514 +0000 UTC m=+0.149820185 container start a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tesla, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 09:24:27 compute-0 nifty_tesla[430177]: 167 167
Nov 25 09:24:27 compute-0 systemd[1]: libpod-a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4.scope: Deactivated successfully.
Nov 25 09:24:27 compute-0 podman[430160]: 2025-11-25 09:24:27.767185092 +0000 UTC m=+0.154882773 container attach a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:24:27 compute-0 podman[430160]: 2025-11-25 09:24:27.767937313 +0000 UTC m=+0.155634984 container died a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 09:24:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c9fe62981956c8351245bcf107fb6ab5cb772e00c1f7df87594183c1196bed5-merged.mount: Deactivated successfully.
Nov 25 09:24:27 compute-0 ceph-mon[75015]: pgmap v3135: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:27 compute-0 podman[430160]: 2025-11-25 09:24:27.823410735 +0000 UTC m=+0.211108406 container remove a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tesla, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:24:27 compute-0 systemd[1]: libpod-conmon-a7c9feb5cbbd93c8504d3bcfda02880251b87504ca517dfb513bd60e73f1c3b4.scope: Deactivated successfully.
Nov 25 09:24:28 compute-0 podman[430201]: 2025-11-25 09:24:28.009975811 +0000 UTC m=+0.045545683 container create 88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_grothendieck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:24:28 compute-0 systemd[1]: Started libpod-conmon-88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e.scope.
Nov 25 09:24:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:24:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41bd55063642333689f6aaf752704e8b3402ed7a0c29ba9102068f2ac61317c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:24:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41bd55063642333689f6aaf752704e8b3402ed7a0c29ba9102068f2ac61317c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:24:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41bd55063642333689f6aaf752704e8b3402ed7a0c29ba9102068f2ac61317c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:24:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41bd55063642333689f6aaf752704e8b3402ed7a0c29ba9102068f2ac61317c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:24:28 compute-0 podman[430201]: 2025-11-25 09:24:27.988960148 +0000 UTC m=+0.024530040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:24:28 compute-0 podman[430201]: 2025-11-25 09:24:28.094565368 +0000 UTC m=+0.130135270 container init 88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_grothendieck, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:24:28 compute-0 podman[430201]: 2025-11-25 09:24:28.101225039 +0000 UTC m=+0.136794931 container start 88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_grothendieck, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 09:24:28 compute-0 podman[430201]: 2025-11-25 09:24:28.10494991 +0000 UTC m=+0.140519812 container attach 88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:24:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:24:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3136: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:28 compute-0 nova_compute[253538]: 2025-11-25 09:24:28.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:29 compute-0 ceph-mon[75015]: pgmap v3136: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:24:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/338765886' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:24:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:24:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/338765886' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]: {
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:         "osd_id": 1,
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:         "type": "bluestore"
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:     },
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:         "osd_id": 2,
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:         "type": "bluestore"
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:     },
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:         "osd_id": 0,
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:         "type": "bluestore"
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]:     }
Nov 25 09:24:29 compute-0 goofy_grothendieck[430218]: }
Nov 25 09:24:29 compute-0 systemd[1]: libpod-88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e.scope: Deactivated successfully.
Nov 25 09:24:29 compute-0 podman[430201]: 2025-11-25 09:24:29.203013516 +0000 UTC m=+1.238583388 container died 88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:24:29 compute-0 systemd[1]: libpod-88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e.scope: Consumed 1.107s CPU time.
Nov 25 09:24:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a41bd55063642333689f6aaf752704e8b3402ed7a0c29ba9102068f2ac61317c-merged.mount: Deactivated successfully.
Nov 25 09:24:29 compute-0 podman[430201]: 2025-11-25 09:24:29.421940754 +0000 UTC m=+1.457510626 container remove 88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 09:24:29 compute-0 systemd[1]: libpod-conmon-88bb1d3692b19929e00e6ca57ebe7c6e9e5b5c62d78934413e687b1641485f6e.scope: Deactivated successfully.
Nov 25 09:24:29 compute-0 sudo[430094]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:24:29 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:24:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:24:29 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:24:29 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 60ca1655-f090-45bb-aa9f-cc22d9861cf6 does not exist
Nov 25 09:24:29 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 43f1d31d-9801-46c2-9ece-b50c2b825cc4 does not exist
Nov 25 09:24:29 compute-0 sudo[430264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:24:29 compute-0 sudo[430264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:29 compute-0 sudo[430264]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:29 compute-0 sudo[430289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:24:29 compute-0 sudo[430289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:24:29 compute-0 sudo[430289]: pam_unix(sudo:session): session closed for user root
Nov 25 09:24:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/338765886' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:24:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/338765886' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:24:30 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:24:30 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:24:30 compute-0 nova_compute[253538]: 2025-11-25 09:24:30.236 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3137: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:31 compute-0 ceph-mon[75015]: pgmap v3137: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3138: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:24:33 compute-0 nova_compute[253538]: 2025-11-25 09:24:33.759 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:33 compute-0 ceph-mon[75015]: pgmap v3138: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3139: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:34 compute-0 podman[430314]: 2025-11-25 09:24:34.869166786 +0000 UTC m=+0.107878652 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 09:24:35 compute-0 nova_compute[253538]: 2025-11-25 09:24:35.278 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:35 compute-0 ceph-mon[75015]: pgmap v3139: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3140: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:37 compute-0 ceph-mon[75015]: pgmap v3140: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:24:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3141: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:38 compute-0 nova_compute[253538]: 2025-11-25 09:24:38.813 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:39 compute-0 ceph-mon[75015]: pgmap v3141: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:40 compute-0 nova_compute[253538]: 2025-11-25 09:24:40.281 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3142: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:24:41.112 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:24:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:24:41.112 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:24:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:24:41.113 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:24:41 compute-0 ceph-mon[75015]: pgmap v3142: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3143: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:24:43 compute-0 nova_compute[253538]: 2025-11-25 09:24:43.817 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:43 compute-0 ceph-mon[75015]: pgmap v3143: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3144: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:45 compute-0 nova_compute[253538]: 2025-11-25 09:24:45.283 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:45 compute-0 ceph-mon[75015]: pgmap v3144: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3145: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:47 compute-0 ceph-mon[75015]: pgmap v3145: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:24:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3146: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:48 compute-0 nova_compute[253538]: 2025-11-25 09:24:48.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:49 compute-0 ceph-mon[75015]: pgmap v3146: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:50 compute-0 nova_compute[253538]: 2025-11-25 09:24:50.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3147: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:51 compute-0 ceph-mon[75015]: pgmap v3147: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3148: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:53 compute-0 ceph-mon[75015]: pgmap v3148: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:24:53
Nov 25 09:24:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:24:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:24:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'backups', '.mgr', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.log']
Nov 25 09:24:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:24:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:24:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:24:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:24:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:24:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:24:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:24:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:24:53 compute-0 podman[430341]: 2025-11-25 09:24:53.822079023 +0000 UTC m=+0.069484705 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 09:24:53 compute-0 podman[430342]: 2025-11-25 09:24:53.866478363 +0000 UTC m=+0.111860461 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 25 09:24:53 compute-0 nova_compute[253538]: 2025-11-25 09:24:53.896 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:24:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:24:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:24:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:24:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:24:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:24:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:24:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:24:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:24:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:24:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3149: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:55 compute-0 nova_compute[253538]: 2025-11-25 09:24:55.287 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:55 compute-0 ceph-mon[75015]: pgmap v3149: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3150: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:58 compute-0 ceph-mon[75015]: pgmap v3150: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:24:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3151: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:24:58 compute-0 nova_compute[253538]: 2025-11-25 09:24:58.801 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:24:58 compute-0 nova_compute[253538]: 2025-11-25 09:24:58.899 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:24:59 compute-0 ceph-mon[75015]: pgmap v3151: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:00 compute-0 nova_compute[253538]: 2025-11-25 09:25:00.288 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3152: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:01 compute-0 ceph-mon[75015]: pgmap v3152: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:02 compute-0 nova_compute[253538]: 2025-11-25 09:25:02.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:25:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3153: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:25:03 compute-0 ceph-mon[75015]: pgmap v3153: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:03 compute-0 nova_compute[253538]: 2025-11-25 09:25:03.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:04 compute-0 nova_compute[253538]: 2025-11-25 09:25:04.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:25:04 compute-0 nova_compute[253538]: 2025-11-25 09:25:04.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:25:04 compute-0 nova_compute[253538]: 2025-11-25 09:25:04.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:25:04 compute-0 nova_compute[253538]: 2025-11-25 09:25:04.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:25:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3154: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:05 compute-0 nova_compute[253538]: 2025-11-25 09:25:05.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:05 compute-0 ceph-mon[75015]: pgmap v3154: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:05 compute-0 podman[430380]: 2025-11-25 09:25:05.828403622 +0000 UTC m=+0.080194316 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:25:06 compute-0 sshd-session[430378]: Invalid user usuario from 45.78.217.205 port 52454
Nov 25 09:25:06 compute-0 sshd-session[430378]: Received disconnect from 45.78.217.205 port 52454:11: Bye Bye [preauth]
Nov 25 09:25:06 compute-0 sshd-session[430378]: Disconnected from invalid user usuario 45.78.217.205 port 52454 [preauth]
Nov 25 09:25:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3155: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:07 compute-0 ceph-mon[75015]: pgmap v3155: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:25:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3156: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:08 compute-0 nova_compute[253538]: 2025-11-25 09:25:08.904 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:09 compute-0 nova_compute[253538]: 2025-11-25 09:25:09.561 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:25:09 compute-0 ceph-mon[75015]: pgmap v3156: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:10 compute-0 nova_compute[253538]: 2025-11-25 09:25:10.292 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3157: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:11 compute-0 nova_compute[253538]: 2025-11-25 09:25:11.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:25:11 compute-0 ceph-mon[75015]: pgmap v3157: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:12 compute-0 nova_compute[253538]: 2025-11-25 09:25:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:25:12 compute-0 nova_compute[253538]: 2025-11-25 09:25:12.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:25:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3158: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:13 compute-0 ceph-mon[75015]: pgmap v3158: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:25:13 compute-0 nova_compute[253538]: 2025-11-25 09:25:13.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:25:13 compute-0 nova_compute[253538]: 2025-11-25 09:25:13.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3159: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:15 compute-0 nova_compute[253538]: 2025-11-25 09:25:15.300 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:15 compute-0 nova_compute[253538]: 2025-11-25 09:25:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:25:16 compute-0 ceph-mon[75015]: pgmap v3159: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3160: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:17 compute-0 ceph-mon[75015]: pgmap v3160: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:25:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3161: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:18 compute-0 nova_compute[253538]: 2025-11-25 09:25:18.908 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:19 compute-0 ceph-mon[75015]: pgmap v3161: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:20 compute-0 nova_compute[253538]: 2025-11-25 09:25:20.303 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:20 compute-0 nova_compute[253538]: 2025-11-25 09:25:20.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:25:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3162: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:21 compute-0 ceph-mon[75015]: pgmap v3162: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3163: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.100760) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062723100804, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2078, "num_deletes": 255, "total_data_size": 3505756, "memory_usage": 3556160, "flush_reason": "Manual Compaction"}
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Nov 25 09:25:23 compute-0 ceph-mon[75015]: pgmap v3163: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062723183043, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 3427712, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63740, "largest_seqno": 65817, "table_properties": {"data_size": 3418131, "index_size": 6138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19182, "raw_average_key_size": 20, "raw_value_size": 3399066, "raw_average_value_size": 3600, "num_data_blocks": 273, "num_entries": 944, "num_filter_entries": 944, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062499, "oldest_key_time": 1764062499, "file_creation_time": 1764062723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 82362 microseconds, and 7407 cpu microseconds.
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.183113) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 3427712 bytes OK
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.183146) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.275116) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.275178) EVENT_LOG_v1 {"time_micros": 1764062723275161, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.275216) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 3497039, prev total WAL file size 3497039, number of live WAL files 2.
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.277296) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(3347KB)], [152(8024KB)]
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062723277401, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 11644617, "oldest_snapshot_seqno": -1}
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8461 keys, 9904845 bytes, temperature: kUnknown
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062723389884, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 9904845, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9851743, "index_size": 30845, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 221454, "raw_average_key_size": 26, "raw_value_size": 9704174, "raw_average_value_size": 1146, "num_data_blocks": 1195, "num_entries": 8461, "num_filter_entries": 8461, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.390221) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 9904845 bytes
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.400415) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.4 rd, 87.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.8 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 8985, records dropped: 524 output_compression: NoCompression
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.400456) EVENT_LOG_v1 {"time_micros": 1764062723400440, "job": 94, "event": "compaction_finished", "compaction_time_micros": 112648, "compaction_time_cpu_micros": 32948, "output_level": 6, "num_output_files": 1, "total_output_size": 9904845, "num_input_records": 8985, "num_output_records": 8461, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062723401340, "job": 94, "event": "table_file_deletion", "file_number": 154}
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062723402944, "job": 94, "event": "table_file_deletion", "file_number": 152}
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.277159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.403198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.403205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.403210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.403216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:25:23 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:25:23.403219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:25:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:25:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:25:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:25:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:25:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:25:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:25:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:25:23 compute-0 nova_compute[253538]: 2025-11-25 09:25:23.941 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:24 compute-0 nova_compute[253538]: 2025-11-25 09:25:24.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:25:24 compute-0 nova_compute[253538]: 2025-11-25 09:25:24.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:25:24 compute-0 nova_compute[253538]: 2025-11-25 09:25:24.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:25:24 compute-0 nova_compute[253538]: 2025-11-25 09:25:24.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:25:24 compute-0 nova_compute[253538]: 2025-11-25 09:25:24.580 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:25:24 compute-0 nova_compute[253538]: 2025-11-25 09:25:24.580 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:25:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3164: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:24 compute-0 podman[430425]: 2025-11-25 09:25:24.818621596 +0000 UTC m=+0.064279684 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:25:24 compute-0 podman[430429]: 2025-11-25 09:25:24.828531626 +0000 UTC m=+0.061689272 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:25:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:25:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/784927242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:25:25 compute-0 nova_compute[253538]: 2025-11-25 09:25:25.092 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:25:25 compute-0 nova_compute[253538]: 2025-11-25 09:25:25.299 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:25:25 compute-0 nova_compute[253538]: 2025-11-25 09:25:25.300 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3625MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:25:25 compute-0 nova_compute[253538]: 2025-11-25 09:25:25.300 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:25:25 compute-0 nova_compute[253538]: 2025-11-25 09:25:25.301 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:25:25 compute-0 nova_compute[253538]: 2025-11-25 09:25:25.303 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:25 compute-0 nova_compute[253538]: 2025-11-25 09:25:25.360 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:25:25 compute-0 nova_compute[253538]: 2025-11-25 09:25:25.361 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:25:25 compute-0 nova_compute[253538]: 2025-11-25 09:25:25.376 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:25:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:25:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2780056618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:25:25 compute-0 nova_compute[253538]: 2025-11-25 09:25:25.856 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:25:25 compute-0 nova_compute[253538]: 2025-11-25 09:25:25.863 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:25:25 compute-0 nova_compute[253538]: 2025-11-25 09:25:25.877 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:25:25 compute-0 nova_compute[253538]: 2025-11-25 09:25:25.881 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:25:25 compute-0 nova_compute[253538]: 2025-11-25 09:25:25.882 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:25:25 compute-0 ceph-mon[75015]: pgmap v3164: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/784927242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:25:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2780056618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:25:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3165: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:27 compute-0 ceph-mon[75015]: pgmap v3165: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:25:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3166: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:28 compute-0 nova_compute[253538]: 2025-11-25 09:25:28.944 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:25:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2942069893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:25:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:25:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2942069893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:25:29 compute-0 sudo[430489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:25:29 compute-0 sudo[430489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:29 compute-0 sudo[430489]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:29 compute-0 sudo[430514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:25:29 compute-0 sudo[430514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:29 compute-0 sudo[430514]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:29 compute-0 sudo[430539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:25:29 compute-0 sudo[430539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:29 compute-0 sudo[430539]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:29 compute-0 sudo[430564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:25:29 compute-0 sudo[430564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:30 compute-0 ceph-mon[75015]: pgmap v3166: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2942069893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:25:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2942069893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:25:30 compute-0 nova_compute[253538]: 2025-11-25 09:25:30.311 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:30 compute-0 sudo[430564]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:25:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:25:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:25:30 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:25:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:25:30 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:25:30 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev f9cec71a-3741-4f83-bed2-1e79606723dc does not exist
Nov 25 09:25:30 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev a0672b7e-c367-4883-a2eb-eac914e881db does not exist
Nov 25 09:25:30 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 409dfc27-9e95-48ce-92f0-d2a22b5d2e5a does not exist
Nov 25 09:25:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:25:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:25:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:25:30 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:25:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:25:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:25:30 compute-0 sudo[430619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:25:30 compute-0 sudo[430619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:30 compute-0 sudo[430619]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3167: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:30 compute-0 sudo[430644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:25:30 compute-0 sudo[430644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:30 compute-0 sudo[430644]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:30 compute-0 sudo[430669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:25:30 compute-0 sudo[430669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:30 compute-0 sudo[430669]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:30 compute-0 sudo[430694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:25:30 compute-0 sudo[430694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:25:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:25:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:25:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:25:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:25:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:25:31 compute-0 ceph-mon[75015]: pgmap v3167: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:31 compute-0 podman[430759]: 2025-11-25 09:25:31.359978476 +0000 UTC m=+0.041358338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:25:31 compute-0 podman[430759]: 2025-11-25 09:25:31.587442417 +0000 UTC m=+0.268822219 container create 93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:25:31 compute-0 systemd[1]: Started libpod-conmon-93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb.scope.
Nov 25 09:25:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:25:31 compute-0 podman[430759]: 2025-11-25 09:25:31.922035709 +0000 UTC m=+0.603415571 container init 93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:25:31 compute-0 podman[430759]: 2025-11-25 09:25:31.930790188 +0000 UTC m=+0.612169960 container start 93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 09:25:31 compute-0 stupefied_payne[430775]: 167 167
Nov 25 09:25:31 compute-0 systemd[1]: libpod-93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb.scope: Deactivated successfully.
Nov 25 09:25:31 compute-0 conmon[430775]: conmon 93933f860f36901f407a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb.scope/container/memory.events
Nov 25 09:25:32 compute-0 podman[430759]: 2025-11-25 09:25:32.011996122 +0000 UTC m=+0.693375884 container attach 93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:25:32 compute-0 podman[430759]: 2025-11-25 09:25:32.014163581 +0000 UTC m=+0.695543383 container died 93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:25:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f10c336e918bf3ebfad48bc67c88ed1fb929ba3d3816ea20ee7fe2aeaa4d031-merged.mount: Deactivated successfully.
Nov 25 09:25:32 compute-0 podman[430759]: 2025-11-25 09:25:32.543604405 +0000 UTC m=+1.224984177 container remove 93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:25:32 compute-0 systemd[1]: libpod-conmon-93933f860f36901f407a16f67cfcd9e935e7d944da8ba03ccbe0f8c17baaa7cb.scope: Deactivated successfully.
Nov 25 09:25:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3168: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:32 compute-0 podman[430800]: 2025-11-25 09:25:32.782340523 +0000 UTC m=+0.047143726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:25:32 compute-0 podman[430800]: 2025-11-25 09:25:32.925545337 +0000 UTC m=+0.190348540 container create 1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 09:25:33 compute-0 systemd[1]: Started libpod-conmon-1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99.scope.
Nov 25 09:25:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281c57f8a0c3c92c96d63ce57e683f2bcdc07bcc2ae6bdfecb9b5b8b230e8935/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281c57f8a0c3c92c96d63ce57e683f2bcdc07bcc2ae6bdfecb9b5b8b230e8935/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281c57f8a0c3c92c96d63ce57e683f2bcdc07bcc2ae6bdfecb9b5b8b230e8935/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281c57f8a0c3c92c96d63ce57e683f2bcdc07bcc2ae6bdfecb9b5b8b230e8935/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281c57f8a0c3c92c96d63ce57e683f2bcdc07bcc2ae6bdfecb9b5b8b230e8935/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:25:33 compute-0 podman[430800]: 2025-11-25 09:25:33.138557815 +0000 UTC m=+0.403360978 container init 1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:25:33 compute-0 podman[430800]: 2025-11-25 09:25:33.146662286 +0000 UTC m=+0.411465469 container start 1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:25:33 compute-0 podman[430800]: 2025-11-25 09:25:33.219645695 +0000 UTC m=+0.484448888 container attach 1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:25:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:25:33 compute-0 ceph-mon[75015]: pgmap v3168: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:33 compute-0 nova_compute[253538]: 2025-11-25 09:25:33.947 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:34 compute-0 optimistic_buck[430817]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:25:34 compute-0 optimistic_buck[430817]: --> relative data size: 1.0
Nov 25 09:25:34 compute-0 optimistic_buck[430817]: --> All data devices are unavailable
Nov 25 09:25:34 compute-0 systemd[1]: libpod-1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99.scope: Deactivated successfully.
Nov 25 09:25:34 compute-0 systemd[1]: libpod-1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99.scope: Consumed 1.083s CPU time.
Nov 25 09:25:34 compute-0 podman[430846]: 2025-11-25 09:25:34.325061522 +0000 UTC m=+0.029073005 container died 1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 09:25:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-281c57f8a0c3c92c96d63ce57e683f2bcdc07bcc2ae6bdfecb9b5b8b230e8935-merged.mount: Deactivated successfully.
Nov 25 09:25:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3169: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:35 compute-0 nova_compute[253538]: 2025-11-25 09:25:35.314 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:35 compute-0 podman[430846]: 2025-11-25 09:25:35.49986035 +0000 UTC m=+1.203871833 container remove 1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_buck, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:25:35 compute-0 systemd[1]: libpod-conmon-1f313a22837d0fdf20ba73ba15ad68d587b1022f9107ea16727585359563ef99.scope: Deactivated successfully.
Nov 25 09:25:35 compute-0 sudo[430694]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:35 compute-0 ceph-mon[75015]: pgmap v3169: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:35 compute-0 sudo[430861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:25:35 compute-0 sudo[430861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:35 compute-0 sudo[430861]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:35 compute-0 sudo[430888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:25:35 compute-0 sudo[430888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:35 compute-0 sudo[430888]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:35 compute-0 sudo[430913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:25:35 compute-0 sudo[430913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:35 compute-0 sudo[430913]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:35 compute-0 sudo[430938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:25:35 compute-0 sudo[430938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:35 compute-0 podman[430962]: 2025-11-25 09:25:35.977130441 +0000 UTC m=+0.090929580 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 09:25:36 compute-0 sshd-session[430884]: Invalid user hadoop from 193.32.162.151 port 41888
Nov 25 09:25:36 compute-0 sshd-session[430884]: Connection closed by invalid user hadoop 193.32.162.151 port 41888 [preauth]
Nov 25 09:25:36 compute-0 podman[431030]: 2025-11-25 09:25:36.304809854 +0000 UTC m=+0.032411964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:25:36 compute-0 podman[431030]: 2025-11-25 09:25:36.409051517 +0000 UTC m=+0.136653537 container create 5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:25:36 compute-0 systemd[1]: Started libpod-conmon-5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47.scope.
Nov 25 09:25:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:25:36 compute-0 podman[431030]: 2025-11-25 09:25:36.722507942 +0000 UTC m=+0.450110062 container init 5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:25:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3170: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:36 compute-0 podman[431030]: 2025-11-25 09:25:36.736705749 +0000 UTC m=+0.464307809 container start 5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wright, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 25 09:25:36 compute-0 pensive_wright[431046]: 167 167
Nov 25 09:25:36 compute-0 systemd[1]: libpod-5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47.scope: Deactivated successfully.
Nov 25 09:25:36 compute-0 podman[431030]: 2025-11-25 09:25:36.921410814 +0000 UTC m=+0.649012864 container attach 5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wright, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 09:25:36 compute-0 podman[431030]: 2025-11-25 09:25:36.923693916 +0000 UTC m=+0.651296006 container died 5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wright, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 09:25:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-59f4b79e15372c5fae81d5641afd6d0089eeb88ef8f4234a37a18615ade3eb4c-merged.mount: Deactivated successfully.
Nov 25 09:25:37 compute-0 podman[431030]: 2025-11-25 09:25:37.546578868 +0000 UTC m=+1.274180958 container remove 5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wright, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:25:37 compute-0 systemd[1]: libpod-conmon-5d7e0a7063e60ba34552dc88e55fe481514d872ebe5800eee0d3e461ec784d47.scope: Deactivated successfully.
Nov 25 09:25:37 compute-0 podman[431068]: 2025-11-25 09:25:37.697716968 +0000 UTC m=+0.021064895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:25:37 compute-0 podman[431068]: 2025-11-25 09:25:37.867536797 +0000 UTC m=+0.190884744 container create 67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 09:25:37 compute-0 systemd[1]: Started libpod-conmon-67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2.scope.
Nov 25 09:25:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b5c1d7710078c54dad9e439f1175573985af69dc2c63c89246841a5150b3d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b5c1d7710078c54dad9e439f1175573985af69dc2c63c89246841a5150b3d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b5c1d7710078c54dad9e439f1175573985af69dc2c63c89246841a5150b3d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b5c1d7710078c54dad9e439f1175573985af69dc2c63c89246841a5150b3d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:25:38 compute-0 ceph-mon[75015]: pgmap v3170: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:38 compute-0 podman[431068]: 2025-11-25 09:25:38.145129725 +0000 UTC m=+0.468477652 container init 67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:25:38 compute-0 podman[431068]: 2025-11-25 09:25:38.157221705 +0000 UTC m=+0.480569602 container start 67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:25:38 compute-0 podman[431068]: 2025-11-25 09:25:38.284448973 +0000 UTC m=+0.607796970 container attach 67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 09:25:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:25:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3171: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:38 compute-0 compassionate_saha[431084]: {
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:     "0": [
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:         {
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "devices": [
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "/dev/loop3"
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             ],
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "lv_name": "ceph_lv0",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "lv_size": "21470642176",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "name": "ceph_lv0",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "tags": {
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.cluster_name": "ceph",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.crush_device_class": "",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.encrypted": "0",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.osd_id": "0",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.type": "block",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.vdo": "0"
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             },
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "type": "block",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "vg_name": "ceph_vg0"
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:         }
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:     ],
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:     "1": [
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:         {
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "devices": [
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "/dev/loop4"
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             ],
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "lv_name": "ceph_lv1",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "lv_size": "21470642176",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "name": "ceph_lv1",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "tags": {
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.cluster_name": "ceph",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.crush_device_class": "",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.encrypted": "0",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.osd_id": "1",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.type": "block",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.vdo": "0"
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             },
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "type": "block",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "vg_name": "ceph_vg1"
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:         }
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:     ],
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:     "2": [
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:         {
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "devices": [
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "/dev/loop5"
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             ],
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "lv_name": "ceph_lv2",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "lv_size": "21470642176",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "name": "ceph_lv2",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "tags": {
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.cluster_name": "ceph",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.crush_device_class": "",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.encrypted": "0",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.osd_id": "2",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.type": "block",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:                 "ceph.vdo": "0"
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             },
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "type": "block",
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:             "vg_name": "ceph_vg2"
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:         }
Nov 25 09:25:38 compute-0 compassionate_saha[431084]:     ]
Nov 25 09:25:38 compute-0 compassionate_saha[431084]: }
Nov 25 09:25:38 compute-0 systemd[1]: libpod-67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2.scope: Deactivated successfully.
Nov 25 09:25:38 compute-0 nova_compute[253538]: 2025-11-25 09:25:38.948 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:38 compute-0 podman[431093]: 2025-11-25 09:25:38.99481059 +0000 UTC m=+0.041746720 container died 67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 09:25:39 compute-0 ceph-mon[75015]: pgmap v3171: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-17b5c1d7710078c54dad9e439f1175573985af69dc2c63c89246841a5150b3d8-merged.mount: Deactivated successfully.
Nov 25 09:25:40 compute-0 nova_compute[253538]: 2025-11-25 09:25:40.330 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:40 compute-0 podman[431093]: 2025-11-25 09:25:40.549246796 +0000 UTC m=+1.596182856 container remove 67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 25 09:25:40 compute-0 systemd[1]: libpod-conmon-67ba0ff5e2d17cef454ed823b29a5fca53f8045bdd9b1ff56487b70cbc47caa2.scope: Deactivated successfully.
Nov 25 09:25:40 compute-0 sudo[430938]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:40 compute-0 sudo[431108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:25:40 compute-0 sudo[431108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:40 compute-0 sudo[431108]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3172: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:40 compute-0 sudo[431133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:25:40 compute-0 sudo[431133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:40 compute-0 sudo[431133]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:40 compute-0 sudo[431158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:25:40 compute-0 sudo[431158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:40 compute-0 sudo[431158]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:40 compute-0 sudo[431183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:25:40 compute-0 sudo[431183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:25:41.113 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:25:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:25:41.114 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:25:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:25:41.115 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:25:41 compute-0 ceph-mon[75015]: pgmap v3172: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:25:41 compute-0 podman[431248]: 2025-11-25 09:25:41.21918518 +0000 UTC m=+0.025099825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:25:41 compute-0 podman[431248]: 2025-11-25 09:25:41.563641921 +0000 UTC m=+0.369556566 container create 1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:25:41 compute-0 systemd[1]: Started libpod-conmon-1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef.scope.
Nov 25 09:25:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:25:42 compute-0 podman[431248]: 2025-11-25 09:25:42.162048655 +0000 UTC m=+0.967963330 container init 1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:25:42 compute-0 podman[431248]: 2025-11-25 09:25:42.170934907 +0000 UTC m=+0.976849592 container start 1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:25:42 compute-0 eager_keller[431266]: 167 167
Nov 25 09:25:42 compute-0 systemd[1]: libpod-1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef.scope: Deactivated successfully.
Nov 25 09:25:42 compute-0 podman[431248]: 2025-11-25 09:25:42.409173811 +0000 UTC m=+1.215088546 container attach 1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:25:42 compute-0 podman[431248]: 2025-11-25 09:25:42.410407046 +0000 UTC m=+1.216321701 container died 1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:25:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3173: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 3 op/s
Nov 25 09:25:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-55a50467a0b2e310209f78869ec9626449b503c753b96f3dd2ce7404c785b6d0-merged.mount: Deactivated successfully.
Nov 25 09:25:43 compute-0 ceph-mon[75015]: pgmap v3173: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 3 op/s
Nov 25 09:25:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:25:43 compute-0 podman[431248]: 2025-11-25 09:25:43.930846026 +0000 UTC m=+2.736760671 container remove 1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:25:43 compute-0 nova_compute[253538]: 2025-11-25 09:25:43.984 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:43 compute-0 systemd[1]: libpod-conmon-1efe226eec73943f8928bda8976cae86d5c3355c88bb05147b75cf2f99ff36ef.scope: Deactivated successfully.
Nov 25 09:25:44 compute-0 podman[431290]: 2025-11-25 09:25:44.107642046 +0000 UTC m=+0.036282160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:25:44 compute-0 podman[431290]: 2025-11-25 09:25:44.513880181 +0000 UTC m=+0.442520245 container create 9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 09:25:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3174: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 3 op/s
Nov 25 09:25:44 compute-0 systemd[1]: Started libpod-conmon-9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7.scope.
Nov 25 09:25:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde7251d46106a307708cf0ceae1afbc5e4ae0310675fa39096215870aef241c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde7251d46106a307708cf0ceae1afbc5e4ae0310675fa39096215870aef241c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde7251d46106a307708cf0ceae1afbc5e4ae0310675fa39096215870aef241c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde7251d46106a307708cf0ceae1afbc5e4ae0310675fa39096215870aef241c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:25:45 compute-0 podman[431290]: 2025-11-25 09:25:45.306208542 +0000 UTC m=+1.234848596 container init 9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 25 09:25:45 compute-0 podman[431290]: 2025-11-25 09:25:45.315255598 +0000 UTC m=+1.243895622 container start 9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:25:45 compute-0 nova_compute[253538]: 2025-11-25 09:25:45.333 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:45 compute-0 ceph-mon[75015]: pgmap v3174: 321 pgs: 321 active+clean; 458 KiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 3 op/s
Nov 25 09:25:45 compute-0 podman[431290]: 2025-11-25 09:25:45.810216192 +0000 UTC m=+1.738856256 container attach 9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:25:46 compute-0 objective_leakey[431306]: {
Nov 25 09:25:46 compute-0 objective_leakey[431306]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:25:46 compute-0 objective_leakey[431306]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:25:46 compute-0 objective_leakey[431306]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:25:46 compute-0 objective_leakey[431306]:         "osd_id": 1,
Nov 25 09:25:46 compute-0 objective_leakey[431306]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:25:46 compute-0 objective_leakey[431306]:         "type": "bluestore"
Nov 25 09:25:46 compute-0 objective_leakey[431306]:     },
Nov 25 09:25:46 compute-0 objective_leakey[431306]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:25:46 compute-0 objective_leakey[431306]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:25:46 compute-0 objective_leakey[431306]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:25:46 compute-0 objective_leakey[431306]:         "osd_id": 2,
Nov 25 09:25:46 compute-0 objective_leakey[431306]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:25:46 compute-0 objective_leakey[431306]:         "type": "bluestore"
Nov 25 09:25:46 compute-0 objective_leakey[431306]:     },
Nov 25 09:25:46 compute-0 objective_leakey[431306]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:25:46 compute-0 objective_leakey[431306]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:25:46 compute-0 objective_leakey[431306]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:25:46 compute-0 objective_leakey[431306]:         "osd_id": 0,
Nov 25 09:25:46 compute-0 objective_leakey[431306]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:25:46 compute-0 objective_leakey[431306]:         "type": "bluestore"
Nov 25 09:25:46 compute-0 objective_leakey[431306]:     }
Nov 25 09:25:46 compute-0 objective_leakey[431306]: }
Nov 25 09:25:46 compute-0 systemd[1]: libpod-9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7.scope: Deactivated successfully.
Nov 25 09:25:46 compute-0 conmon[431306]: conmon 9a1e3a9ef47abef2fa6e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7.scope/container/memory.events
Nov 25 09:25:46 compute-0 podman[431339]: 2025-11-25 09:25:46.338762861 +0000 UTC m=+0.023683617 container died 9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 09:25:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3175: 321 pgs: 321 active+clean; 8.4 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 684 KiB/s wr, 10 op/s
Nov 25 09:25:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Nov 25 09:25:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-bde7251d46106a307708cf0ceae1afbc5e4ae0310675fa39096215870aef241c-merged.mount: Deactivated successfully.
Nov 25 09:25:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Nov 25 09:25:47 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Nov 25 09:25:48 compute-0 ceph-mon[75015]: pgmap v3175: 321 pgs: 321 active+clean; 8.4 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 684 KiB/s wr, 10 op/s
Nov 25 09:25:48 compute-0 podman[431339]: 2025-11-25 09:25:48.614236336 +0000 UTC m=+2.299157102 container remove 9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Nov 25 09:25:48 compute-0 systemd[1]: libpod-conmon-9a1e3a9ef47abef2fa6ebd3540aa6346c769c72cfb2badf017bccfabbc2d53e7.scope: Deactivated successfully.
Nov 25 09:25:48 compute-0 sudo[431183]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3177: 321 pgs: 321 active+clean; 16 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.6 MiB/s wr, 12 op/s
Nov 25 09:25:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:25:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:25:48 compute-0 sshd-session[431262]: Received disconnect from 45.78.222.2 port 42038:11: Bye Bye [preauth]
Nov 25 09:25:48 compute-0 sshd-session[431262]: Disconnected from 45.78.222.2 port 42038 [preauth]
Nov 25 09:25:48 compute-0 nova_compute[253538]: 2025-11-25 09:25:48.986 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:49 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:25:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:25:49 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:25:49 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev f612bd6d-323d-48c5-b27c-1f533d8bc4b5 does not exist
Nov 25 09:25:49 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev a940df7b-7bec-47c8-922e-534951a059e3 does not exist
Nov 25 09:25:49 compute-0 sudo[431355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:25:49 compute-0 sudo[431355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:49 compute-0 sudo[431355]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:49 compute-0 sudo[431380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:25:49 compute-0 sudo[431380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:25:49 compute-0 sudo[431380]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:49 compute-0 ceph-mon[75015]: osdmap e280: 3 total, 3 up, 3 in
Nov 25 09:25:49 compute-0 ceph-mon[75015]: pgmap v3177: 321 pgs: 321 active+clean; 16 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.6 MiB/s wr, 12 op/s
Nov 25 09:25:49 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:25:49 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:25:50 compute-0 nova_compute[253538]: 2025-11-25 09:25:50.378 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3178: 321 pgs: 321 active+clean; 21 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 2.0 MiB/s wr, 12 op/s
Nov 25 09:25:51 compute-0 ceph-mon[75015]: pgmap v3178: 321 pgs: 321 active+clean; 21 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 2.0 MiB/s wr, 12 op/s
Nov 25 09:25:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3179: 321 pgs: 321 active+clean; 21 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Nov 25 09:25:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:25:53
Nov 25 09:25:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:25:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:25:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.meta', 'vms', 'default.rgw.log', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root']
Nov 25 09:25:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:25:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:25:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:25:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:25:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:25:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:25:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:25:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:25:53 compute-0 ceph-mon[75015]: pgmap v3179: 321 pgs: 321 active+clean; 21 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Nov 25 09:25:53 compute-0 nova_compute[253538]: 2025-11-25 09:25:53.989 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:25:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:25:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:25:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:25:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:25:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:25:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:25:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:25:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:25:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:25:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3180: 321 pgs: 321 active+clean; 29 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 2.8 MiB/s wr, 18 op/s
Nov 25 09:25:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Nov 25 09:25:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Nov 25 09:25:55 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Nov 25 09:25:55 compute-0 ceph-mon[75015]: pgmap v3180: 321 pgs: 321 active+clean; 29 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 2.8 MiB/s wr, 18 op/s
Nov 25 09:25:55 compute-0 nova_compute[253538]: 2025-11-25 09:25:55.380 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:55 compute-0 podman[431406]: 2025-11-25 09:25:55.856175525 +0000 UTC m=+0.092008409 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:25:55 compute-0 podman[431405]: 2025-11-25 09:25:55.865624013 +0000 UTC m=+0.101401486 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 09:25:56 compute-0 ceph-mon[75015]: osdmap e281: 3 total, 3 up, 3 in
Nov 25 09:25:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3182: 321 pgs: 321 active+clean; 33 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 2.6 MiB/s wr, 21 op/s
Nov 25 09:25:57 compute-0 ceph-mon[75015]: pgmap v3182: 321 pgs: 321 active+clean; 33 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 2.6 MiB/s wr, 21 op/s
Nov 25 09:25:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3183: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 2.5 MiB/s wr, 22 op/s
Nov 25 09:25:58 compute-0 nova_compute[253538]: 2025-11-25 09:25:58.884 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:25:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:25:58 compute-0 nova_compute[253538]: 2025-11-25 09:25:58.991 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:25:59 compute-0 ceph-mon[75015]: pgmap v3183: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 2.5 MiB/s wr, 22 op/s
Nov 25 09:26:00 compute-0 nova_compute[253538]: 2025-11-25 09:26:00.382 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3184: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.0 MiB/s wr, 25 op/s
Nov 25 09:26:01 compute-0 ceph-mon[75015]: pgmap v3184: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.0 MiB/s wr, 25 op/s
Nov 25 09:26:02 compute-0 nova_compute[253538]: 2025-11-25 09:26:02.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:26:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3185: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 18 op/s
Nov 25 09:26:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:26:03 compute-0 nova_compute[253538]: 2025-11-25 09:26:03.994 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:04 compute-0 ceph-mon[75015]: pgmap v3185: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 18 op/s
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:26:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3186: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 15 op/s
Nov 25 09:26:05 compute-0 ceph-mon[75015]: pgmap v3186: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 15 op/s
Nov 25 09:26:05 compute-0 nova_compute[253538]: 2025-11-25 09:26:05.382 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:06 compute-0 nova_compute[253538]: 2025-11-25 09:26:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:26:06 compute-0 nova_compute[253538]: 2025-11-25 09:26:06.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:26:06 compute-0 nova_compute[253538]: 2025-11-25 09:26:06.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:26:06 compute-0 nova_compute[253538]: 2025-11-25 09:26:06.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:26:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3187: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.1 MiB/s wr, 5 op/s
Nov 25 09:26:06 compute-0 podman[431446]: 2025-11-25 09:26:06.854973337 +0000 UTC m=+0.107392999 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:26:07 compute-0 ceph-mon[75015]: pgmap v3187: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.1 MiB/s wr, 5 op/s
Nov 25 09:26:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3188: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 683 KiB/s wr, 4 op/s
Nov 25 09:26:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:26:08 compute-0 nova_compute[253538]: 2025-11-25 09:26:08.997 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:09 compute-0 nova_compute[253538]: 2025-11-25 09:26:09.561 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:26:09 compute-0 ceph-mon[75015]: pgmap v3188: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 683 KiB/s wr, 4 op/s
Nov 25 09:26:10 compute-0 nova_compute[253538]: 2025-11-25 09:26:10.422 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3189: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 255 B/s wr, 3 op/s
Nov 25 09:26:11 compute-0 nova_compute[253538]: 2025-11-25 09:26:11.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:26:11 compute-0 ceph-mon[75015]: pgmap v3189: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 255 B/s wr, 3 op/s
Nov 25 09:26:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3190: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:13 compute-0 ceph-mon[75015]: pgmap v3190: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:13.903540) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062773903590, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 673, "num_deletes": 255, "total_data_size": 800982, "memory_usage": 815096, "flush_reason": "Manual Compaction"}
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062773912284, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 793988, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65818, "largest_seqno": 66490, "table_properties": {"data_size": 790346, "index_size": 1485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8079, "raw_average_key_size": 18, "raw_value_size": 782991, "raw_average_value_size": 1838, "num_data_blocks": 66, "num_entries": 426, "num_filter_entries": 426, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062724, "oldest_key_time": 1764062724, "file_creation_time": 1764062773, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 8808 microseconds, and 2991 cpu microseconds.
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:13.912350) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 793988 bytes OK
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:13.912372) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:13.915673) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:13.915700) EVENT_LOG_v1 {"time_micros": 1764062773915692, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:13.915725) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 797408, prev total WAL file size 797408, number of live WAL files 2.
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:13.916530) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373639' seq:72057594037927935, type:22 .. '6C6F676D0033303230' seq:0, type:0; will stop at (end)
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(775KB)], [155(9672KB)]
Nov 25 09:26:13 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062773916573, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 10698833, "oldest_snapshot_seqno": -1}
Nov 25 09:26:14 compute-0 nova_compute[253538]: 2025-11-25 09:26:13.999 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:14 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8361 keys, 10584525 bytes, temperature: kUnknown
Nov 25 09:26:14 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062774010621, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 10584525, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10530737, "index_size": 31787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20933, "raw_key_size": 220315, "raw_average_key_size": 26, "raw_value_size": 10383584, "raw_average_value_size": 1241, "num_data_blocks": 1235, "num_entries": 8361, "num_filter_entries": 8361, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062773, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:26:14 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:26:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:14.010901) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 10584525 bytes
Nov 25 09:26:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:14.012835) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.7 rd, 112.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.4 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(26.8) write-amplify(13.3) OK, records in: 8887, records dropped: 526 output_compression: NoCompression
Nov 25 09:26:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:14.012857) EVENT_LOG_v1 {"time_micros": 1764062774012848, "job": 96, "event": "compaction_finished", "compaction_time_micros": 94135, "compaction_time_cpu_micros": 49087, "output_level": 6, "num_output_files": 1, "total_output_size": 10584525, "num_input_records": 8887, "num_output_records": 8361, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:26:14 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:26:14 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062774013148, "job": 96, "event": "table_file_deletion", "file_number": 157}
Nov 25 09:26:14 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:26:14 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062774015809, "job": 96, "event": "table_file_deletion", "file_number": 155}
Nov 25 09:26:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:13.916420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:26:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:14.015849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:26:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:14.015854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:26:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:14.015856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:26:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:14.015858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:26:14 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:26:14.015860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:26:14 compute-0 nova_compute[253538]: 2025-11-25 09:26:14.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:26:14 compute-0 nova_compute[253538]: 2025-11-25 09:26:14.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:26:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3191: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:15 compute-0 nova_compute[253538]: 2025-11-25 09:26:15.424 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:15 compute-0 nova_compute[253538]: 2025-11-25 09:26:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:26:15 compute-0 ceph-mon[75015]: pgmap v3191: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:16 compute-0 nova_compute[253538]: 2025-11-25 09:26:16.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:26:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3192: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:17 compute-0 ceph-mon[75015]: pgmap v3192: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3193: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:26:19 compute-0 nova_compute[253538]: 2025-11-25 09:26:19.001 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:19 compute-0 ceph-mon[75015]: pgmap v3193: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:20 compute-0 nova_compute[253538]: 2025-11-25 09:26:20.427 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3194: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:21 compute-0 ceph-mon[75015]: pgmap v3194: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3195: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:22 compute-0 ceph-mon[75015]: pgmap v3195: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:26:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:26:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:26:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:26:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:26:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:26:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:26:24 compute-0 nova_compute[253538]: 2025-11-25 09:26:24.004 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:26:24.228 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:26:24 compute-0 nova_compute[253538]: 2025-11-25 09:26:24.229 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:24 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:26:24.229 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:26:24 compute-0 nova_compute[253538]: 2025-11-25 09:26:24.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:26:24 compute-0 nova_compute[253538]: 2025-11-25 09:26:24.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:26:24 compute-0 nova_compute[253538]: 2025-11-25 09:26:24.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:26:24 compute-0 nova_compute[253538]: 2025-11-25 09:26:24.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:26:24 compute-0 nova_compute[253538]: 2025-11-25 09:26:24.580 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:26:24 compute-0 nova_compute[253538]: 2025-11-25 09:26:24.580 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:26:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3196: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:26:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268513709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:26:25 compute-0 nova_compute[253538]: 2025-11-25 09:26:25.100 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:26:25 compute-0 nova_compute[253538]: 2025-11-25 09:26:25.264 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:26:25 compute-0 nova_compute[253538]: 2025-11-25 09:26:25.266 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3640MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:26:25 compute-0 nova_compute[253538]: 2025-11-25 09:26:25.266 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:26:25 compute-0 nova_compute[253538]: 2025-11-25 09:26:25.267 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:26:25 compute-0 nova_compute[253538]: 2025-11-25 09:26:25.427 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:25 compute-0 nova_compute[253538]: 2025-11-25 09:26:25.471 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:26:25 compute-0 nova_compute[253538]: 2025-11-25 09:26:25.472 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:26:25 compute-0 nova_compute[253538]: 2025-11-25 09:26:25.489 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:26:25 compute-0 ceph-mon[75015]: pgmap v3196: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/268513709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:26:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:26:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/748818001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:26:26 compute-0 nova_compute[253538]: 2025-11-25 09:26:26.006 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:26:26 compute-0 nova_compute[253538]: 2025-11-25 09:26:26.012 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:26:26 compute-0 nova_compute[253538]: 2025-11-25 09:26:26.026 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:26:26 compute-0 nova_compute[253538]: 2025-11-25 09:26:26.028 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:26:26 compute-0 nova_compute[253538]: 2025-11-25 09:26:26.029 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:26:26 compute-0 nova_compute[253538]: 2025-11-25 09:26:26.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:26:26 compute-0 nova_compute[253538]: 2025-11-25 09:26:26.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 09:26:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3197: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/748818001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:26:26 compute-0 podman[431516]: 2025-11-25 09:26:26.824346882 +0000 UTC m=+0.067649385 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 09:26:26 compute-0 podman[431517]: 2025-11-25 09:26:26.824892297 +0000 UTC m=+0.063668406 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 09:26:27 compute-0 ceph-mon[75015]: pgmap v3197: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3198: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:26:29 compute-0 nova_compute[253538]: 2025-11-25 09:26:29.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:26:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3539745747' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:26:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:26:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3539745747' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:26:29 compute-0 ceph-mon[75015]: pgmap v3198: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3539745747' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:26:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3539745747' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:26:30 compute-0 nova_compute[253538]: 2025-11-25 09:26:30.477 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3199: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:31 compute-0 ceph-mon[75015]: pgmap v3199: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:32 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:26:32.232 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:26:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3200: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:26:33 compute-0 ceph-mon[75015]: pgmap v3200: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:34 compute-0 nova_compute[253538]: 2025-11-25 09:26:34.007 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3201: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Nov 25 09:26:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Nov 25 09:26:34 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Nov 25 09:26:35 compute-0 nova_compute[253538]: 2025-11-25 09:26:35.478 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:35 compute-0 ceph-mon[75015]: pgmap v3201: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:35 compute-0 ceph-mon[75015]: osdmap e282: 3 total, 3 up, 3 in
Nov 25 09:26:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3203: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 614 B/s wr, 18 op/s
Nov 25 09:26:37 compute-0 podman[431553]: 2025-11-25 09:26:37.858450435 +0000 UTC m=+0.103177134 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 09:26:37 compute-0 ceph-mon[75015]: pgmap v3203: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 614 B/s wr, 18 op/s
Nov 25 09:26:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3204: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Nov 25 09:26:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:26:39 compute-0 nova_compute[253538]: 2025-11-25 09:26:39.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:39 compute-0 ceph-mon[75015]: pgmap v3204: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Nov 25 09:26:40 compute-0 nova_compute[253538]: 2025-11-25 09:26:40.519 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3205: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 09:26:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:26:41.114 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:26:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:26:41.114 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:26:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:26:41.115 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:26:42 compute-0 ceph-mon[75015]: pgmap v3205: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 09:26:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3206: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 09:26:43 compute-0 ceph-mon[75015]: pgmap v3206: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 09:26:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:26:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Nov 25 09:26:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Nov 25 09:26:44 compute-0 nova_compute[253538]: 2025-11-25 09:26:44.038 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:44 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Nov 25 09:26:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3208: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 25 09:26:45 compute-0 ceph-mon[75015]: osdmap e283: 3 total, 3 up, 3 in
Nov 25 09:26:45 compute-0 ceph-mon[75015]: pgmap v3208: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 25 09:26:45 compute-0 nova_compute[253538]: 2025-11-25 09:26:45.521 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3209: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 818 B/s wr, 6 op/s
Nov 25 09:26:47 compute-0 ceph-mon[75015]: pgmap v3209: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 818 B/s wr, 6 op/s
Nov 25 09:26:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3210: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 613 B/s rd, 306 B/s wr, 1 op/s
Nov 25 09:26:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:26:49 compute-0 nova_compute[253538]: 2025-11-25 09:26:49.041 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:49 compute-0 sshd-session[431578]: Connection closed by authenticating user root 171.244.51.45 port 58890 [preauth]
Nov 25 09:26:49 compute-0 sudo[431580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:26:49 compute-0 sudo[431580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:49 compute-0 sudo[431580]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:49 compute-0 sudo[431605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:26:49 compute-0 sudo[431605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:49 compute-0 sudo[431605]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:49 compute-0 sudo[431630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:26:49 compute-0 sudo[431630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:49 compute-0 sudo[431630]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:49 compute-0 sudo[431655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 09:26:49 compute-0 sudo[431655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:49 compute-0 ceph-mon[75015]: pgmap v3210: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 613 B/s rd, 306 B/s wr, 1 op/s
Nov 25 09:26:50 compute-0 podman[431751]: 2025-11-25 09:26:50.368582118 +0000 UTC m=+0.171429905 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:26:50 compute-0 podman[431751]: 2025-11-25 09:26:50.485713751 +0000 UTC m=+0.288561508 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:26:50 compute-0 nova_compute[253538]: 2025-11-25 09:26:50.574 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3211: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:51 compute-0 sudo[431655]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:26:51 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:26:51 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:26:51 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:26:51 compute-0 sudo[431907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:26:51 compute-0 sudo[431907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:51 compute-0 sudo[431907]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:51 compute-0 sudo[431932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:26:51 compute-0 sudo[431932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:51 compute-0 sudo[431932]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:51 compute-0 sudo[431957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:26:51 compute-0 sudo[431957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:51 compute-0 sudo[431957]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:51 compute-0 sudo[431982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:26:51 compute-0 sudo[431982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:51 compute-0 nova_compute[253538]: 2025-11-25 09:26:51.566 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:26:51 compute-0 nova_compute[253538]: 2025-11-25 09:26:51.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 09:26:51 compute-0 nova_compute[253538]: 2025-11-25 09:26:51.587 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 09:26:51 compute-0 ceph-mon[75015]: pgmap v3211: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:51 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:26:51 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:26:51 compute-0 sudo[431982]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:26:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:26:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:26:52 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:26:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:26:52 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:26:52 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 9df99beb-afbf-474a-a41b-9d1615cf1235 does not exist
Nov 25 09:26:52 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 012e2f98-bff3-482e-81ad-05ff027402c1 does not exist
Nov 25 09:26:52 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d60505dd-3c34-49e3-a50a-622841ecfe31 does not exist
Nov 25 09:26:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:26:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:26:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:26:52 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:26:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:26:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:26:52 compute-0 sudo[432038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:26:52 compute-0 sudo[432038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:52 compute-0 sudo[432038]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:52 compute-0 sudo[432063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:26:52 compute-0 sudo[432063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:52 compute-0 sudo[432063]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:52 compute-0 sudo[432088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:26:52 compute-0 sudo[432088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:52 compute-0 sudo[432088]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:52 compute-0 sudo[432113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:26:52 compute-0 sudo[432113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:52 compute-0 podman[432178]: 2025-11-25 09:26:52.707332247 +0000 UTC m=+0.051126915 container create 98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 09:26:52 compute-0 systemd[1]: Started libpod-conmon-98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35.scope.
Nov 25 09:26:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3212: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:26:52 compute-0 podman[432178]: 2025-11-25 09:26:52.683949919 +0000 UTC m=+0.027744677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:26:52 compute-0 podman[432178]: 2025-11-25 09:26:52.792160239 +0000 UTC m=+0.135954957 container init 98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:26:52 compute-0 podman[432178]: 2025-11-25 09:26:52.804505065 +0000 UTC m=+0.148299733 container start 98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:26:52 compute-0 podman[432178]: 2025-11-25 09:26:52.807859297 +0000 UTC m=+0.151654025 container attach 98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 09:26:52 compute-0 affectionate_rhodes[432195]: 167 167
Nov 25 09:26:52 compute-0 systemd[1]: libpod-98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35.scope: Deactivated successfully.
Nov 25 09:26:52 compute-0 podman[432178]: 2025-11-25 09:26:52.809346268 +0000 UTC m=+0.153140936 container died 98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:26:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-6df0af510512eae51917558090c3c4f3b0ffd4a4fcac6264539b4066d2dd1d7b-merged.mount: Deactivated successfully.
Nov 25 09:26:52 compute-0 podman[432178]: 2025-11-25 09:26:52.845230246 +0000 UTC m=+0.189024914 container remove 98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:26:52 compute-0 systemd[1]: libpod-conmon-98276f8ffe7156f7bc4d93d73747ad7d7a21330c66eb469f794cc9e282555f35.scope: Deactivated successfully.
Nov 25 09:26:52 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:26:52 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:26:52 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:26:52 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:26:52 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:26:52 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:26:53 compute-0 podman[432219]: 2025-11-25 09:26:53.068605045 +0000 UTC m=+0.060657224 container create 9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:26:53 compute-0 systemd[1]: Started libpod-conmon-9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100.scope.
Nov 25 09:26:53 compute-0 podman[432219]: 2025-11-25 09:26:53.047014597 +0000 UTC m=+0.039066816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:26:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fa2e58688cc35360945c6a98054333c92e783e8ba0fe1fb766180fdbe05d11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fa2e58688cc35360945c6a98054333c92e783e8ba0fe1fb766180fdbe05d11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fa2e58688cc35360945c6a98054333c92e783e8ba0fe1fb766180fdbe05d11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fa2e58688cc35360945c6a98054333c92e783e8ba0fe1fb766180fdbe05d11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fa2e58688cc35360945c6a98054333c92e783e8ba0fe1fb766180fdbe05d11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:26:53 compute-0 podman[432219]: 2025-11-25 09:26:53.185570054 +0000 UTC m=+0.177622243 container init 9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_visvesvaraya, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 09:26:53 compute-0 podman[432219]: 2025-11-25 09:26:53.191971319 +0000 UTC m=+0.184023488 container start 9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:26:53 compute-0 podman[432219]: 2025-11-25 09:26:53.203436792 +0000 UTC m=+0.195488961 container attach 9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:26:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:26:53
Nov 25 09:26:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:26:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:26:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'default.rgw.meta', 'vms', 'images', 'volumes', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 25 09:26:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:26:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:26:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:26:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:26:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:26:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:26:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:26:53 compute-0 ceph-mon[75015]: pgmap v3212: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:26:54 compute-0 nova_compute[253538]: 2025-11-25 09:26:54.043 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:26:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:26:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:26:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:26:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:26:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:26:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:26:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:26:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:26:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:26:54 compute-0 zen_visvesvaraya[432236]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:26:54 compute-0 zen_visvesvaraya[432236]: --> relative data size: 1.0
Nov 25 09:26:54 compute-0 zen_visvesvaraya[432236]: --> All data devices are unavailable
Nov 25 09:26:54 compute-0 systemd[1]: libpod-9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100.scope: Deactivated successfully.
Nov 25 09:26:54 compute-0 systemd[1]: libpod-9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100.scope: Consumed 1.126s CPU time.
Nov 25 09:26:54 compute-0 podman[432219]: 2025-11-25 09:26:54.368787762 +0000 UTC m=+1.360839931 container died 9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_visvesvaraya, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:26:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-65fa2e58688cc35360945c6a98054333c92e783e8ba0fe1fb766180fdbe05d11-merged.mount: Deactivated successfully.
Nov 25 09:26:54 compute-0 podman[432219]: 2025-11-25 09:26:54.582492597 +0000 UTC m=+1.574544776 container remove 9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 09:26:54 compute-0 systemd[1]: libpod-conmon-9030533c0972b319d16888b4b08eac3452734ce25eb308f31d85a3560dd92100.scope: Deactivated successfully.
Nov 25 09:26:54 compute-0 sudo[432113]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:54 compute-0 sudo[432280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:26:54 compute-0 sudo[432280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:54 compute-0 sudo[432280]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3213: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:54 compute-0 sudo[432305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:26:54 compute-0 sudo[432305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:54 compute-0 sudo[432305]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:54 compute-0 sudo[432330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:26:54 compute-0 sudo[432330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:54 compute-0 sudo[432330]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:54 compute-0 sudo[432355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:26:54 compute-0 sudo[432355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:55 compute-0 ceph-mon[75015]: pgmap v3213: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:55 compute-0 podman[432420]: 2025-11-25 09:26:55.268286664 +0000 UTC m=+0.054050005 container create 721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:26:55 compute-0 systemd[1]: Started libpod-conmon-721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef.scope.
Nov 25 09:26:55 compute-0 podman[432420]: 2025-11-25 09:26:55.241704359 +0000 UTC m=+0.027467790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:26:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:26:55 compute-0 podman[432420]: 2025-11-25 09:26:55.363857929 +0000 UTC m=+0.149621300 container init 721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 09:26:55 compute-0 podman[432420]: 2025-11-25 09:26:55.372227027 +0000 UTC m=+0.157990368 container start 721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 09:26:55 compute-0 happy_elgamal[432436]: 167 167
Nov 25 09:26:55 compute-0 systemd[1]: libpod-721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef.scope: Deactivated successfully.
Nov 25 09:26:55 compute-0 conmon[432436]: conmon 721f777e83b330561fd3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef.scope/container/memory.events
Nov 25 09:26:55 compute-0 podman[432420]: 2025-11-25 09:26:55.377174272 +0000 UTC m=+0.162937613 container attach 721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:26:55 compute-0 podman[432420]: 2025-11-25 09:26:55.38074721 +0000 UTC m=+0.166510591 container died 721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 09:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd63e34c9302aa5a9620f472ddde59042b4881de581165b483750570bfc19568-merged.mount: Deactivated successfully.
Nov 25 09:26:55 compute-0 podman[432420]: 2025-11-25 09:26:55.424415089 +0000 UTC m=+0.210178430 container remove 721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:26:55 compute-0 systemd[1]: libpod-conmon-721f777e83b330561fd3116babcaf0028e8e15d78d56fe794cf325c306ab74ef.scope: Deactivated successfully.
Nov 25 09:26:55 compute-0 nova_compute[253538]: 2025-11-25 09:26:55.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:55 compute-0 podman[432460]: 2025-11-25 09:26:55.603038799 +0000 UTC m=+0.044801082 container create 13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_snyder, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 09:26:55 compute-0 systemd[1]: Started libpod-conmon-13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3.scope.
Nov 25 09:26:55 compute-0 podman[432460]: 2025-11-25 09:26:55.585183203 +0000 UTC m=+0.026945506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:26:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b242a063f6af310e3fa58f87fb098501863b96ef9ff33b33e06d8f5046eda4e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b242a063f6af310e3fa58f87fb098501863b96ef9ff33b33e06d8f5046eda4e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b242a063f6af310e3fa58f87fb098501863b96ef9ff33b33e06d8f5046eda4e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b242a063f6af310e3fa58f87fb098501863b96ef9ff33b33e06d8f5046eda4e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:26:55 compute-0 podman[432460]: 2025-11-25 09:26:55.716972476 +0000 UTC m=+0.158734779 container init 13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_snyder, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:26:55 compute-0 podman[432460]: 2025-11-25 09:26:55.725923289 +0000 UTC m=+0.167685582 container start 13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:26:55 compute-0 podman[432460]: 2025-11-25 09:26:55.730795683 +0000 UTC m=+0.172557996 container attach 13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 25 09:26:56 compute-0 charming_snyder[432475]: {
Nov 25 09:26:56 compute-0 charming_snyder[432475]:     "0": [
Nov 25 09:26:56 compute-0 charming_snyder[432475]:         {
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "devices": [
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "/dev/loop3"
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             ],
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "lv_name": "ceph_lv0",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "lv_size": "21470642176",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "name": "ceph_lv0",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "tags": {
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.cluster_name": "ceph",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.crush_device_class": "",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.encrypted": "0",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.osd_id": "0",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.type": "block",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.vdo": "0"
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             },
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "type": "block",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "vg_name": "ceph_vg0"
Nov 25 09:26:56 compute-0 charming_snyder[432475]:         }
Nov 25 09:26:56 compute-0 charming_snyder[432475]:     ],
Nov 25 09:26:56 compute-0 charming_snyder[432475]:     "1": [
Nov 25 09:26:56 compute-0 charming_snyder[432475]:         {
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "devices": [
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "/dev/loop4"
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             ],
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "lv_name": "ceph_lv1",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "lv_size": "21470642176",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "name": "ceph_lv1",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "tags": {
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.cluster_name": "ceph",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.crush_device_class": "",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.encrypted": "0",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.osd_id": "1",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.type": "block",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.vdo": "0"
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             },
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "type": "block",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "vg_name": "ceph_vg1"
Nov 25 09:26:56 compute-0 charming_snyder[432475]:         }
Nov 25 09:26:56 compute-0 charming_snyder[432475]:     ],
Nov 25 09:26:56 compute-0 charming_snyder[432475]:     "2": [
Nov 25 09:26:56 compute-0 charming_snyder[432475]:         {
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "devices": [
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "/dev/loop5"
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             ],
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "lv_name": "ceph_lv2",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "lv_size": "21470642176",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "name": "ceph_lv2",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "tags": {
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.cluster_name": "ceph",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.crush_device_class": "",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.encrypted": "0",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.osd_id": "2",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.type": "block",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:                 "ceph.vdo": "0"
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             },
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "type": "block",
Nov 25 09:26:56 compute-0 charming_snyder[432475]:             "vg_name": "ceph_vg2"
Nov 25 09:26:56 compute-0 charming_snyder[432475]:         }
Nov 25 09:26:56 compute-0 charming_snyder[432475]:     ]
Nov 25 09:26:56 compute-0 charming_snyder[432475]: }
Nov 25 09:26:56 compute-0 systemd[1]: libpod-13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3.scope: Deactivated successfully.
Nov 25 09:26:56 compute-0 podman[432460]: 2025-11-25 09:26:56.55747568 +0000 UTC m=+0.999237973 container died 13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_snyder, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:26:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b242a063f6af310e3fa58f87fb098501863b96ef9ff33b33e06d8f5046eda4e8-merged.mount: Deactivated successfully.
Nov 25 09:26:56 compute-0 podman[432460]: 2025-11-25 09:26:56.6825475 +0000 UTC m=+1.124309793 container remove 13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 09:26:56 compute-0 systemd[1]: libpod-conmon-13c6b846aec13b1f47cf11b36ebf85ae7b4f94ae38bd32f5fad051baf1fdb1a3.scope: Deactivated successfully.
Nov 25 09:26:56 compute-0 sudo[432355]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3214: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:56 compute-0 sudo[432501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:26:56 compute-0 sudo[432501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:56 compute-0 sudo[432501]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:56 compute-0 sudo[432526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:26:56 compute-0 sudo[432526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:56 compute-0 sudo[432526]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:56 compute-0 sudo[432563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:26:56 compute-0 sudo[432563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:56 compute-0 sudo[432563]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:56 compute-0 podman[432551]: 2025-11-25 09:26:56.975326691 +0000 UTC m=+0.079146189 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 09:26:56 compute-0 podman[432550]: 2025-11-25 09:26:56.980924644 +0000 UTC m=+0.086120720 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 09:26:57 compute-0 sudo[432610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:26:57 compute-0 sudo[432610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:57 compute-0 podman[432676]: 2025-11-25 09:26:57.406684241 +0000 UTC m=+0.035244403 container create d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 09:26:57 compute-0 systemd[1]: Started libpod-conmon-d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7.scope.
Nov 25 09:26:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:26:57 compute-0 podman[432676]: 2025-11-25 09:26:57.481824059 +0000 UTC m=+0.110384271 container init d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 09:26:57 compute-0 podman[432676]: 2025-11-25 09:26:57.392795322 +0000 UTC m=+0.021355504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:26:57 compute-0 podman[432676]: 2025-11-25 09:26:57.489000524 +0000 UTC m=+0.117560686 container start d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:26:57 compute-0 podman[432676]: 2025-11-25 09:26:57.491979775 +0000 UTC m=+0.120539957 container attach d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 09:26:57 compute-0 nervous_chandrasekhar[432692]: 167 167
Nov 25 09:26:57 compute-0 systemd[1]: libpod-d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7.scope: Deactivated successfully.
Nov 25 09:26:57 compute-0 conmon[432692]: conmon d5ccffd3cd1cc3717dc5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7.scope/container/memory.events
Nov 25 09:26:57 compute-0 podman[432697]: 2025-11-25 09:26:57.540457428 +0000 UTC m=+0.029250379 container died d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 09:26:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8901162eafe67e28a264ee5cbc98e3352a7c971b888ad9a0e797b0a839bd7481-merged.mount: Deactivated successfully.
Nov 25 09:26:57 compute-0 podman[432697]: 2025-11-25 09:26:57.57427195 +0000 UTC m=+0.063064891 container remove d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 09:26:57 compute-0 systemd[1]: libpod-conmon-d5ccffd3cd1cc3717dc51ffa9b2f50b86686bfa7b920938b348b0e679be36cf7.scope: Deactivated successfully.
Nov 25 09:26:57 compute-0 podman[432719]: 2025-11-25 09:26:57.773202593 +0000 UTC m=+0.064698205 container create 0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:26:57 compute-0 systemd[1]: Started libpod-conmon-0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603.scope.
Nov 25 09:26:57 compute-0 ceph-mon[75015]: pgmap v3214: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:26:57 compute-0 podman[432719]: 2025-11-25 09:26:57.742417444 +0000 UTC m=+0.033913126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88feda8cdc9558dfe83f83348305bff239bab708f4c3006d4d25cffc7bdbf2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88feda8cdc9558dfe83f83348305bff239bab708f4c3006d4d25cffc7bdbf2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88feda8cdc9558dfe83f83348305bff239bab708f4c3006d4d25cffc7bdbf2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88feda8cdc9558dfe83f83348305bff239bab708f4c3006d4d25cffc7bdbf2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:26:57 compute-0 podman[432719]: 2025-11-25 09:26:57.848968488 +0000 UTC m=+0.140464110 container init 0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:26:57 compute-0 podman[432719]: 2025-11-25 09:26:57.863208226 +0000 UTC m=+0.154703818 container start 0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 09:26:57 compute-0 podman[432719]: 2025-11-25 09:26:57.86701297 +0000 UTC m=+0.158508592 container attach 0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:26:58 compute-0 nova_compute[253538]: 2025-11-25 09:26:58.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:26:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3215: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]: {
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:         "osd_id": 1,
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:         "type": "bluestore"
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:     },
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:         "osd_id": 2,
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:         "type": "bluestore"
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:     },
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:         "osd_id": 0,
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:         "type": "bluestore"
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]:     }
Nov 25 09:26:58 compute-0 gallant_heyrovsky[432735]: }
Nov 25 09:26:58 compute-0 systemd[1]: libpod-0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603.scope: Deactivated successfully.
Nov 25 09:26:58 compute-0 systemd[1]: libpod-0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603.scope: Consumed 1.067s CPU time.
Nov 25 09:26:58 compute-0 conmon[432735]: conmon 0249c9de94e5e19b9345 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603.scope/container/memory.events
Nov 25 09:26:58 compute-0 podman[432719]: 2025-11-25 09:26:58.929459695 +0000 UTC m=+1.220955307 container died 0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:26:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-e88feda8cdc9558dfe83f83348305bff239bab708f4c3006d4d25cffc7bdbf2a-merged.mount: Deactivated successfully.
Nov 25 09:26:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:26:58 compute-0 podman[432719]: 2025-11-25 09:26:58.986700345 +0000 UTC m=+1.278195937 container remove 0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:26:58 compute-0 systemd[1]: libpod-conmon-0249c9de94e5e19b93456f9edf8ac6b953d5423aa2b3b9061111a57cfc8e2603.scope: Deactivated successfully.
Nov 25 09:26:59 compute-0 sudo[432610]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:26:59 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:26:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:26:59 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:26:59 compute-0 nova_compute[253538]: 2025-11-25 09:26:59.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:26:59 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 47537c22-396f-442c-abf7-963364df7155 does not exist
Nov 25 09:26:59 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 7be838f9-da08-4dbd-9c7f-529953f76d96 does not exist
Nov 25 09:26:59 compute-0 sudo[432780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:26:59 compute-0 sudo[432780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:59 compute-0 sudo[432780]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:59 compute-0 sudo[432805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:26:59 compute-0 sudo[432805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:26:59 compute-0 sudo[432805]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:59 compute-0 nova_compute[253538]: 2025-11-25 09:26:59.565 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:26:59 compute-0 ceph-mon[75015]: pgmap v3215: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:26:59 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:26:59 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:27:00 compute-0 nova_compute[253538]: 2025-11-25 09:27:00.579 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3216: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:01 compute-0 ceph-mon[75015]: pgmap v3216: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3217: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:03 compute-0 ceph-mon[75015]: pgmap v3217: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:27:04 compute-0 nova_compute[253538]: 2025-11-25 09:27:04.048 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:04 compute-0 nova_compute[253538]: 2025-11-25 09:27:04.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:27:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3218: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:05 compute-0 nova_compute[253538]: 2025-11-25 09:27:05.632 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:05 compute-0 ceph-mon[75015]: pgmap v3218: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:06 compute-0 nova_compute[253538]: 2025-11-25 09:27:06.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:27:06 compute-0 nova_compute[253538]: 2025-11-25 09:27:06.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:27:06 compute-0 nova_compute[253538]: 2025-11-25 09:27:06.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:27:06 compute-0 nova_compute[253538]: 2025-11-25 09:27:06.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:27:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3219: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:07 compute-0 ceph-mon[75015]: pgmap v3219: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3220: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:08 compute-0 podman[432830]: 2025-11-25 09:27:08.853062062 +0000 UTC m=+0.095255058 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 25 09:27:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:27:09 compute-0 nova_compute[253538]: 2025-11-25 09:27:09.090 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:09 compute-0 ceph-mon[75015]: pgmap v3220: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:10 compute-0 nova_compute[253538]: 2025-11-25 09:27:10.562 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:27:10 compute-0 nova_compute[253538]: 2025-11-25 09:27:10.633 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3221: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:11 compute-0 ceph-mon[75015]: pgmap v3221: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3222: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:13 compute-0 nova_compute[253538]: 2025-11-25 09:27:13.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:27:13 compute-0 ceph-mon[75015]: pgmap v3222: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:27:14 compute-0 nova_compute[253538]: 2025-11-25 09:27:14.092 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:14 compute-0 nova_compute[253538]: 2025-11-25 09:27:14.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:27:14 compute-0 nova_compute[253538]: 2025-11-25 09:27:14.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:27:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3223: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:15 compute-0 nova_compute[253538]: 2025-11-25 09:27:15.634 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:15 compute-0 ceph-mon[75015]: pgmap v3223: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:16 compute-0 nova_compute[253538]: 2025-11-25 09:27:16.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:27:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3224: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:17 compute-0 nova_compute[253538]: 2025-11-25 09:27:17.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:27:17 compute-0 ceph-mon[75015]: pgmap v3224: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3225: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:27:19 compute-0 nova_compute[253538]: 2025-11-25 09:27:19.094 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:19 compute-0 ceph-mon[75015]: pgmap v3225: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:20 compute-0 nova_compute[253538]: 2025-11-25 09:27:20.636 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3226: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:21 compute-0 ceph-mon[75015]: pgmap v3226: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3227: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:27:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:27:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:27:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:27:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:27:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:27:23 compute-0 ceph-mon[75015]: pgmap v3227: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:24 compute-0 nova_compute[253538]: 2025-11-25 09:27:24.096 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:27:24 compute-0 nova_compute[253538]: 2025-11-25 09:27:24.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:27:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3228: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:25 compute-0 nova_compute[253538]: 2025-11-25 09:27:25.637 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:26 compute-0 ceph-mon[75015]: pgmap v3228: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:26 compute-0 nova_compute[253538]: 2025-11-25 09:27:26.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:27:26 compute-0 nova_compute[253538]: 2025-11-25 09:27:26.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:27:26 compute-0 nova_compute[253538]: 2025-11-25 09:27:26.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:27:26 compute-0 nova_compute[253538]: 2025-11-25 09:27:26.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:27:26 compute-0 nova_compute[253538]: 2025-11-25 09:27:26.581 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:27:26 compute-0 nova_compute[253538]: 2025-11-25 09:27:26.582 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:27:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3229: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:27:27 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3317948295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:27:27 compute-0 nova_compute[253538]: 2025-11-25 09:27:27.081 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:27:27 compute-0 ceph-mon[75015]: pgmap v3229: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:27 compute-0 nova_compute[253538]: 2025-11-25 09:27:27.253 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:27:27 compute-0 nova_compute[253538]: 2025-11-25 09:27:27.254 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3610MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:27:27 compute-0 nova_compute[253538]: 2025-11-25 09:27:27.255 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:27:27 compute-0 nova_compute[253538]: 2025-11-25 09:27:27.255 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:27:27 compute-0 nova_compute[253538]: 2025-11-25 09:27:27.334 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:27:27 compute-0 nova_compute[253538]: 2025-11-25 09:27:27.335 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:27:27 compute-0 nova_compute[253538]: 2025-11-25 09:27:27.357 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:27:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:27:27 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1955999655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:27:27 compute-0 podman[432898]: 2025-11-25 09:27:27.821953514 +0000 UTC m=+0.071711045 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:27:27 compute-0 nova_compute[253538]: 2025-11-25 09:27:27.832 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:27:27 compute-0 nova_compute[253538]: 2025-11-25 09:27:27.839 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:27:27 compute-0 nova_compute[253538]: 2025-11-25 09:27:27.856 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:27:27 compute-0 nova_compute[253538]: 2025-11-25 09:27:27.859 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:27:27 compute-0 nova_compute[253538]: 2025-11-25 09:27:27.859 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:27:27 compute-0 podman[432899]: 2025-11-25 09:27:27.863104406 +0000 UTC m=+0.098611849 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 09:27:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3317948295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:27:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1955999655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:27:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3230: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:27:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2048954658' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:27:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:27:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2048954658' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:27:29 compute-0 nova_compute[253538]: 2025-11-25 09:27:29.098 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:29 compute-0 ceph-mon[75015]: pgmap v3230: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2048954658' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:27:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2048954658' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:27:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:27:30 compute-0 nova_compute[253538]: 2025-11-25 09:27:30.639 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3231: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:31 compute-0 sshd-session[432940]: Invalid user service from 45.78.217.205 port 35366
Nov 25 09:27:31 compute-0 sshd-session[432940]: Received disconnect from 45.78.217.205 port 35366:11: Bye Bye [preauth]
Nov 25 09:27:31 compute-0 sshd-session[432940]: Disconnected from invalid user service 45.78.217.205 port 35366 [preauth]
Nov 25 09:27:31 compute-0 ceph-mon[75015]: pgmap v3231: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3232: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:34 compute-0 nova_compute[253538]: 2025-11-25 09:27:34.101 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:34 compute-0 ceph-mon[75015]: pgmap v3232: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:27:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3233: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:35 compute-0 ceph-mon[75015]: pgmap v3233: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:35 compute-0 nova_compute[253538]: 2025-11-25 09:27:35.640 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3234: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:38 compute-0 ceph-mon[75015]: pgmap v3234: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3235: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:39 compute-0 nova_compute[253538]: 2025-11-25 09:27:39.102 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:39 compute-0 ceph-mon[75015]: pgmap v3235: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:27:39 compute-0 podman[432942]: 2025-11-25 09:27:39.832677044 +0000 UTC m=+0.089870891 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:27:40 compute-0 nova_compute[253538]: 2025-11-25 09:27:40.642 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3236: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:27:41.115 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:27:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:27:41.116 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:27:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:27:41.116 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:27:41 compute-0 ceph-mon[75015]: pgmap v3236: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3237: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:44 compute-0 ceph-mon[75015]: pgmap v3237: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:44 compute-0 nova_compute[253538]: 2025-11-25 09:27:44.104 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:27:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3238: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:45 compute-0 ceph-mon[75015]: pgmap v3238: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:45 compute-0 nova_compute[253538]: 2025-11-25 09:27:45.644 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:45 compute-0 sshd-session[432968]: Invalid user ubuntu from 119.96.131.8 port 34756
Nov 25 09:27:46 compute-0 sshd-session[432968]: Received disconnect from 119.96.131.8 port 34756:11:  [preauth]
Nov 25 09:27:46 compute-0 sshd-session[432968]: Disconnected from invalid user ubuntu 119.96.131.8 port 34756 [preauth]
Nov 25 09:27:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3239: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:47 compute-0 ceph-mon[75015]: pgmap v3239: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:48 compute-0 sshd-session[432970]: Invalid user hadoop from 193.32.162.151 port 56842
Nov 25 09:27:48 compute-0 sshd-session[432970]: Connection closed by invalid user hadoop 193.32.162.151 port 56842 [preauth]
Nov 25 09:27:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3240: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:49 compute-0 nova_compute[253538]: 2025-11-25 09:27:49.106 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:27:49 compute-0 ceph-mon[75015]: pgmap v3240: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:50 compute-0 nova_compute[253538]: 2025-11-25 09:27:50.645 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3241: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:51 compute-0 ceph-mon[75015]: pgmap v3241: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3242: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:27:53
Nov 25 09:27:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:27:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:27:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.mgr', 'images', '.rgw.root', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 25 09:27:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:27:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:27:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:27:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:27:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:27:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:27:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:27:53 compute-0 ceph-mon[75015]: pgmap v3242: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:54 compute-0 nova_compute[253538]: 2025-11-25 09:27:54.109 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:27:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:27:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:27:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:27:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:27:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:27:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:27:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:27:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:27:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:27:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:27:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3243: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:55 compute-0 nova_compute[253538]: 2025-11-25 09:27:55.699 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:56 compute-0 ceph-mon[75015]: pgmap v3243: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3244: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:57 compute-0 ceph-mon[75015]: pgmap v3244: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3245: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:58 compute-0 podman[432973]: 2025-11-25 09:27:58.803241636 +0000 UTC m=+0.052545424 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 09:27:58 compute-0 podman[432972]: 2025-11-25 09:27:58.809238059 +0000 UTC m=+0.058721262 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:27:59 compute-0 nova_compute[253538]: 2025-11-25 09:27:59.112 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:27:59 compute-0 sudo[433007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:27:59 compute-0 sudo[433007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:27:59 compute-0 sudo[433007]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:27:59 compute-0 sudo[433032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:27:59 compute-0 sudo[433032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:27:59 compute-0 sudo[433032]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:59 compute-0 sudo[433057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:27:59 compute-0 sudo[433057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:27:59 compute-0 sudo[433057]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:59 compute-0 sudo[433082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:27:59 compute-0 sudo[433082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:27:59 compute-0 ceph-mon[75015]: pgmap v3245: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:27:59 compute-0 sudo[433082]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:27:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:27:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:27:59 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:27:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:27:59 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:28:00 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 8be94307-49c2-422e-a43b-8869bb3c969d does not exist
Nov 25 09:28:00 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 1f4692f7-649b-4757-b7d9-cdbaa6a90999 does not exist
Nov 25 09:28:00 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 578e15c1-81d8-4610-906e-e421483f9158 does not exist
Nov 25 09:28:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:28:00 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:28:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:28:00 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:28:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:28:00 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:28:00 compute-0 sudo[433139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:28:00 compute-0 sudo[433139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:28:00 compute-0 sudo[433139]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:00 compute-0 sudo[433164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:28:00 compute-0 sudo[433164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:28:00 compute-0 sudo[433164]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:00 compute-0 sudo[433189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:28:00 compute-0 sudo[433189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:28:00 compute-0 sudo[433189]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:00 compute-0 sudo[433214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:28:00 compute-0 sudo[433214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:28:00 compute-0 nova_compute[253538]: 2025-11-25 09:28:00.700 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:00 compute-0 podman[433279]: 2025-11-25 09:28:00.733257251 +0000 UTC m=+0.047750713 container create 28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 09:28:00 compute-0 systemd[1]: Started libpod-conmon-28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686.scope.
Nov 25 09:28:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3246: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:00 compute-0 podman[433279]: 2025-11-25 09:28:00.712175307 +0000 UTC m=+0.026668819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:28:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:28:00 compute-0 podman[433279]: 2025-11-25 09:28:00.827514121 +0000 UTC m=+0.142007603 container init 28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 09:28:00 compute-0 podman[433279]: 2025-11-25 09:28:00.83958603 +0000 UTC m=+0.154079532 container start 28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:28:00 compute-0 podman[433279]: 2025-11-25 09:28:00.843095136 +0000 UTC m=+0.157588618 container attach 28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:28:00 compute-0 systemd[1]: libpod-28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686.scope: Deactivated successfully.
Nov 25 09:28:00 compute-0 distracted_haibt[433296]: 167 167
Nov 25 09:28:00 compute-0 conmon[433296]: conmon 28d4ad0a77cd9d12005f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686.scope/container/memory.events
Nov 25 09:28:00 compute-0 podman[433279]: 2025-11-25 09:28:00.848229816 +0000 UTC m=+0.162723278 container died 28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 09:28:00 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:28:00 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:28:00 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:28:00 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:28:00 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:28:00 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:28:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d81d09c12d09c65e69e6e6eb50ca726ce25d21cebb71838d85929ce940014ba-merged.mount: Deactivated successfully.
Nov 25 09:28:00 compute-0 podman[433279]: 2025-11-25 09:28:00.897731765 +0000 UTC m=+0.212225267 container remove 28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_haibt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:28:00 compute-0 systemd[1]: libpod-conmon-28d4ad0a77cd9d12005fa0447f764f246e3fcc630c3000c99b9894abca12a686.scope: Deactivated successfully.
Nov 25 09:28:01 compute-0 podman[433319]: 2025-11-25 09:28:01.109296543 +0000 UTC m=+0.056158802 container create 9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 09:28:01 compute-0 systemd[1]: Started libpod-conmon-9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510.scope.
Nov 25 09:28:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa6020fdcd8fbd634bb09db9e898736027e55019acb3cd3532203670b11694a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:28:01 compute-0 podman[433319]: 2025-11-25 09:28:01.091541469 +0000 UTC m=+0.038403748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa6020fdcd8fbd634bb09db9e898736027e55019acb3cd3532203670b11694a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa6020fdcd8fbd634bb09db9e898736027e55019acb3cd3532203670b11694a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa6020fdcd8fbd634bb09db9e898736027e55019acb3cd3532203670b11694a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa6020fdcd8fbd634bb09db9e898736027e55019acb3cd3532203670b11694a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:28:01 compute-0 podman[433319]: 2025-11-25 09:28:01.210913473 +0000 UTC m=+0.157775732 container init 9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:28:01 compute-0 podman[433319]: 2025-11-25 09:28:01.221691797 +0000 UTC m=+0.168554096 container start 9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:28:01 compute-0 podman[433319]: 2025-11-25 09:28:01.225887741 +0000 UTC m=+0.172750020 container attach 9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:28:02 compute-0 ceph-mon[75015]: pgmap v3246: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:02 compute-0 nifty_maxwell[433335]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:28:02 compute-0 nifty_maxwell[433335]: --> relative data size: 1.0
Nov 25 09:28:02 compute-0 nifty_maxwell[433335]: --> All data devices are unavailable
Nov 25 09:28:02 compute-0 systemd[1]: libpod-9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510.scope: Deactivated successfully.
Nov 25 09:28:02 compute-0 systemd[1]: libpod-9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510.scope: Consumed 1.109s CPU time.
Nov 25 09:28:02 compute-0 podman[433319]: 2025-11-25 09:28:02.366508298 +0000 UTC m=+1.313370557 container died 9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 09:28:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aa6020fdcd8fbd634bb09db9e898736027e55019acb3cd3532203670b11694a-merged.mount: Deactivated successfully.
Nov 25 09:28:02 compute-0 podman[433319]: 2025-11-25 09:28:02.428398785 +0000 UTC m=+1.375261034 container remove 9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 25 09:28:02 compute-0 systemd[1]: libpod-conmon-9cc463ddd98f6816ce1297575176d825ef2f079f3a054cc7658d0d56d6a84510.scope: Deactivated successfully.
Nov 25 09:28:02 compute-0 sudo[433214]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:02 compute-0 sudo[433377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:28:02 compute-0 sudo[433377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:28:02 compute-0 sudo[433377]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:02 compute-0 sudo[433402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:28:02 compute-0 sudo[433402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:28:02 compute-0 sudo[433402]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:02 compute-0 sudo[433427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:28:02 compute-0 sudo[433427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:28:02 compute-0 sudo[433427]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:02 compute-0 sudo[433452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:28:02 compute-0 sudo[433452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:28:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3247: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:02 compute-0 nova_compute[253538]: 2025-11-25 09:28:02.861 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:28:03 compute-0 podman[433515]: 2025-11-25 09:28:03.022121911 +0000 UTC m=+0.040041833 container create c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 09:28:03 compute-0 ceph-mon[75015]: pgmap v3247: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:03 compute-0 systemd[1]: Started libpod-conmon-c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a.scope.
Nov 25 09:28:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:28:03 compute-0 podman[433515]: 2025-11-25 09:28:03.096602881 +0000 UTC m=+0.114522823 container init c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_maxwell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:28:03 compute-0 podman[433515]: 2025-11-25 09:28:03.005866308 +0000 UTC m=+0.023786250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:28:03 compute-0 podman[433515]: 2025-11-25 09:28:03.104403044 +0000 UTC m=+0.122322976 container start c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_maxwell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:28:03 compute-0 podman[433515]: 2025-11-25 09:28:03.10793735 +0000 UTC m=+0.125857272 container attach c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 09:28:03 compute-0 festive_maxwell[433531]: 167 167
Nov 25 09:28:03 compute-0 systemd[1]: libpod-c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a.scope: Deactivated successfully.
Nov 25 09:28:03 compute-0 podman[433515]: 2025-11-25 09:28:03.109685818 +0000 UTC m=+0.127605740 container died c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 09:28:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e52d1cb34c26c792017842888b06f09dee67e1e8d997821516d5cf2c803f10c-merged.mount: Deactivated successfully.
Nov 25 09:28:03 compute-0 podman[433515]: 2025-11-25 09:28:03.142264226 +0000 UTC m=+0.160184148 container remove c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_maxwell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:28:03 compute-0 systemd[1]: libpod-conmon-c3975ffdaa2caa683e941e2db86bc14106f3f60c7ba1928383a98cebf7a1680a.scope: Deactivated successfully.
Nov 25 09:28:03 compute-0 podman[433553]: 2025-11-25 09:28:03.31956981 +0000 UTC m=+0.054525558 container create 91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 09:28:03 compute-0 systemd[1]: Started libpod-conmon-91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53.scope.
Nov 25 09:28:03 compute-0 podman[433553]: 2025-11-25 09:28:03.295581966 +0000 UTC m=+0.030537804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:28:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b3bcc1053cfc7670b96aa930325b13ba84ca774d48e9f4ab6e85efab8ec69d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b3bcc1053cfc7670b96aa930325b13ba84ca774d48e9f4ab6e85efab8ec69d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b3bcc1053cfc7670b96aa930325b13ba84ca774d48e9f4ab6e85efab8ec69d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b3bcc1053cfc7670b96aa930325b13ba84ca774d48e9f4ab6e85efab8ec69d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:28:03 compute-0 podman[433553]: 2025-11-25 09:28:03.420293046 +0000 UTC m=+0.155248864 container init 91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 09:28:03 compute-0 podman[433553]: 2025-11-25 09:28:03.429168478 +0000 UTC m=+0.164124226 container start 91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_davinci, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:28:03 compute-0 podman[433553]: 2025-11-25 09:28:03.434147714 +0000 UTC m=+0.169103512 container attach 91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_davinci, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 09:28:04 compute-0 nova_compute[253538]: 2025-11-25 09:28:04.115 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:04 compute-0 lucid_davinci[433570]: {
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:     "0": [
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:         {
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "devices": [
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "/dev/loop3"
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             ],
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "lv_name": "ceph_lv0",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "lv_size": "21470642176",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "name": "ceph_lv0",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "tags": {
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.cluster_name": "ceph",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.crush_device_class": "",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.encrypted": "0",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.osd_id": "0",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.type": "block",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.vdo": "0"
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             },
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "type": "block",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "vg_name": "ceph_vg0"
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:         }
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:     ],
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:     "1": [
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:         {
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "devices": [
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "/dev/loop4"
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             ],
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "lv_name": "ceph_lv1",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "lv_size": "21470642176",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "name": "ceph_lv1",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "tags": {
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.cluster_name": "ceph",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.crush_device_class": "",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.encrypted": "0",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.osd_id": "1",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.type": "block",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.vdo": "0"
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             },
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "type": "block",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "vg_name": "ceph_vg1"
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:         }
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:     ],
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:     "2": [
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:         {
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "devices": [
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "/dev/loop5"
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             ],
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "lv_name": "ceph_lv2",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "lv_size": "21470642176",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "name": "ceph_lv2",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "tags": {
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.cluster_name": "ceph",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.crush_device_class": "",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.encrypted": "0",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.osd_id": "2",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.type": "block",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:                 "ceph.vdo": "0"
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             },
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "type": "block",
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:             "vg_name": "ceph_vg2"
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:         }
Nov 25 09:28:04 compute-0 lucid_davinci[433570]:     ]
Nov 25 09:28:04 compute-0 lucid_davinci[433570]: }
Nov 25 09:28:04 compute-0 systemd[1]: libpod-91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53.scope: Deactivated successfully.
Nov 25 09:28:04 compute-0 podman[433553]: 2025-11-25 09:28:04.236704543 +0000 UTC m=+0.971660321 container died 91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_davinci, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:28:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:28:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b3bcc1053cfc7670b96aa930325b13ba84ca774d48e9f4ab6e85efab8ec69d9-merged.mount: Deactivated successfully.
Nov 25 09:28:04 compute-0 podman[433553]: 2025-11-25 09:28:04.305007775 +0000 UTC m=+1.039963523 container remove 91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_davinci, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:28:04 compute-0 systemd[1]: libpod-conmon-91c04484af799a8d2db1c05e602341720a41c8e14a5ea43e4bacae9ddbbc1c53.scope: Deactivated successfully.
Nov 25 09:28:04 compute-0 sudo[433452]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:04 compute-0 sudo[433589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:28:04 compute-0 sudo[433589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:28:04 compute-0 sudo[433589]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:04 compute-0 sudo[433614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:28:04 compute-0 sudo[433614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:28:04 compute-0 sudo[433614]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:04 compute-0 sudo[433639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:28:04 compute-0 sudo[433639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:28:04 compute-0 nova_compute[253538]: 2025-11-25 09:28:04.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:28:04 compute-0 sudo[433639]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:04 compute-0 sudo[433664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:28:04 compute-0 sudo[433664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:28:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3248: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:05 compute-0 podman[433728]: 2025-11-25 09:28:05.038265795 +0000 UTC m=+0.093153510 container create 0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 09:28:05 compute-0 podman[433728]: 2025-11-25 09:28:04.968882133 +0000 UTC m=+0.023769878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:28:05 compute-0 systemd[1]: Started libpod-conmon-0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b.scope.
Nov 25 09:28:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:28:05 compute-0 podman[433728]: 2025-11-25 09:28:05.222392985 +0000 UTC m=+0.277280720 container init 0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:28:05 compute-0 podman[433728]: 2025-11-25 09:28:05.230925018 +0000 UTC m=+0.285812743 container start 0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:28:05 compute-0 eager_feynman[433744]: 167 167
Nov 25 09:28:05 compute-0 systemd[1]: libpod-0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b.scope: Deactivated successfully.
Nov 25 09:28:05 compute-0 podman[433728]: 2025-11-25 09:28:05.250291306 +0000 UTC m=+0.305179041 container attach 0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 09:28:05 compute-0 podman[433728]: 2025-11-25 09:28:05.250765208 +0000 UTC m=+0.305652923 container died 0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:28:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c97721eca3636abfbe8ed2d52a1478e021fa93f2c205c0bf5b217d0d148e8a38-merged.mount: Deactivated successfully.
Nov 25 09:28:05 compute-0 podman[433728]: 2025-11-25 09:28:05.359963766 +0000 UTC m=+0.414851491 container remove 0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 09:28:05 compute-0 systemd[1]: libpod-conmon-0688b321e5b8435baf19a8ea0c036f6b5f957eee28230af37abe8e423be7146b.scope: Deactivated successfully.
Nov 25 09:28:05 compute-0 podman[433770]: 2025-11-25 09:28:05.538177594 +0000 UTC m=+0.044664578 container create b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:28:05 compute-0 systemd[1]: Started libpod-conmon-b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583.scope.
Nov 25 09:28:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:28:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746dac037b998398f70bf8280b4e2c1939453db159a366eede684add3deedea2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:28:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746dac037b998398f70bf8280b4e2c1939453db159a366eede684add3deedea2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:28:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746dac037b998398f70bf8280b4e2c1939453db159a366eede684add3deedea2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:28:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746dac037b998398f70bf8280b4e2c1939453db159a366eede684add3deedea2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:28:05 compute-0 podman[433770]: 2025-11-25 09:28:05.515690291 +0000 UTC m=+0.022177305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:28:05 compute-0 podman[433770]: 2025-11-25 09:28:05.62569582 +0000 UTC m=+0.132182824 container init b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:28:05 compute-0 podman[433770]: 2025-11-25 09:28:05.635514657 +0000 UTC m=+0.142001651 container start b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:28:05 compute-0 podman[433770]: 2025-11-25 09:28:05.647269019 +0000 UTC m=+0.153756033 container attach b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 09:28:05 compute-0 nova_compute[253538]: 2025-11-25 09:28:05.704 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:05 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:28:05 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 14K writes, 67K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1378 writes, 6241 keys, 1378 commit groups, 1.0 writes per commit group, ingest: 8.93 MB, 0.01 MB/s
                                           Interval WAL: 1378 writes, 1378 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     33.7      2.40              0.28        48    0.050       0      0       0.0       0.0
                                             L6      1/0   10.09 MB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   4.8     66.7     56.4      6.81              1.24        47    0.145    309K    25K       0.0       0.0
                                            Sum      1/0   10.09 MB   0.0      0.4     0.1      0.4       0.5      0.1       0.0   5.8     49.3     50.4      9.21              1.52        95    0.097    309K    25K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   8.6     99.4    100.5      0.53              0.18        10    0.053     43K   2516       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   0.0     66.7     56.4      6.81              1.24        47    0.145    309K    25K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     33.7      2.40              0.28        47    0.051       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.079, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.45 GB write, 0.08 MB/s write, 0.44 GB read, 0.08 MB/s read, 9.2 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 304.00 MB usage: 52.48 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000367 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3403,50.30 MB,16.5474%) FilterBlock(96,862.42 KB,0.277042%) IndexBlock(96,1.33 MB,0.437576%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 09:28:05 compute-0 ceph-mon[75015]: pgmap v3248: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]: {
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:         "osd_id": 1,
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:         "type": "bluestore"
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:     },
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:         "osd_id": 2,
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:         "type": "bluestore"
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:     },
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:         "osd_id": 0,
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:         "type": "bluestore"
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]:     }
Nov 25 09:28:06 compute-0 suspicious_lamport[433787]: }
Nov 25 09:28:06 compute-0 systemd[1]: libpod-b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583.scope: Deactivated successfully.
Nov 25 09:28:06 compute-0 conmon[433787]: conmon b1e7bd2f14efa6eae015 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583.scope/container/memory.events
Nov 25 09:28:06 compute-0 podman[433770]: 2025-11-25 09:28:06.623543614 +0000 UTC m=+1.130030598 container died b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:28:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-746dac037b998398f70bf8280b4e2c1939453db159a366eede684add3deedea2-merged.mount: Deactivated successfully.
Nov 25 09:28:06 compute-0 podman[433770]: 2025-11-25 09:28:06.718312557 +0000 UTC m=+1.224799541 container remove b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:28:06 compute-0 systemd[1]: libpod-conmon-b1e7bd2f14efa6eae0152cf8d1b0cc2556c2691c04561e8fe769cf7ba389b583.scope: Deactivated successfully.
Nov 25 09:28:06 compute-0 sudo[433664]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:28:06 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:28:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:28:06 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:28:06 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 8f4de430-621b-4b56-9796-f674f387d1f8 does not exist
Nov 25 09:28:06 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev b7a1ea45-d7a0-41a1-bf40-1b71e50be779 does not exist
Nov 25 09:28:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3249: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:06 compute-0 sudo[433834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:28:06 compute-0 sudo[433834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:28:06 compute-0 sudo[433834]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:06 compute-0 sudo[433859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:28:06 compute-0 sudo[433859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:28:06 compute-0 sudo[433859]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:07 compute-0 nova_compute[253538]: 2025-11-25 09:28:07.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:28:07 compute-0 nova_compute[253538]: 2025-11-25 09:28:07.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:28:07 compute-0 nova_compute[253538]: 2025-11-25 09:28:07.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:28:07 compute-0 nova_compute[253538]: 2025-11-25 09:28:07.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:28:07 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:28:07 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:28:07 compute-0 ceph-mon[75015]: pgmap v3249: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3250: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:09 compute-0 nova_compute[253538]: 2025-11-25 09:28:09.117 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:28:09 compute-0 ceph-mon[75015]: pgmap v3250: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:10 compute-0 nova_compute[253538]: 2025-11-25 09:28:10.705 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3251: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:10 compute-0 podman[433884]: 2025-11-25 09:28:10.857926742 +0000 UTC m=+0.113343241 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:28:11 compute-0 nova_compute[253538]: 2025-11-25 09:28:11.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:28:11 compute-0 ceph-mon[75015]: pgmap v3251: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3252: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:13 compute-0 nova_compute[253538]: 2025-11-25 09:28:13.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:28:13 compute-0 ceph-mon[75015]: pgmap v3252: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:14 compute-0 nova_compute[253538]: 2025-11-25 09:28:14.119 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:28:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3253: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:15 compute-0 nova_compute[253538]: 2025-11-25 09:28:15.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:28:15 compute-0 nova_compute[253538]: 2025-11-25 09:28:15.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:28:15 compute-0 nova_compute[253538]: 2025-11-25 09:28:15.751 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:16 compute-0 ceph-mon[75015]: pgmap v3253: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:16 compute-0 nova_compute[253538]: 2025-11-25 09:28:16.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:28:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3254: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:17 compute-0 ceph-mon[75015]: pgmap v3254: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3255: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:19 compute-0 nova_compute[253538]: 2025-11-25 09:28:19.121 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:28:19 compute-0 nova_compute[253538]: 2025-11-25 09:28:19.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:28:19 compute-0 ceph-mon[75015]: pgmap v3255: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:20 compute-0 nova_compute[253538]: 2025-11-25 09:28:20.753 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3256: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:21 compute-0 ceph-mon[75015]: pgmap v3256: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3257: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:28:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:28:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:28:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:28:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:28:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:28:23 compute-0 ceph-mon[75015]: pgmap v3257: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:24 compute-0 nova_compute[253538]: 2025-11-25 09:28:24.124 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:24 compute-0 sshd-session[433909]: Invalid user ansible from 45.78.222.2 port 60270
Nov 25 09:28:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:28:24 compute-0 sshd-session[433909]: Received disconnect from 45.78.222.2 port 60270:11: Bye Bye [preauth]
Nov 25 09:28:24 compute-0 sshd-session[433909]: Disconnected from invalid user ansible 45.78.222.2 port 60270 [preauth]
Nov 25 09:28:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3258: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:25 compute-0 nova_compute[253538]: 2025-11-25 09:28:25.757 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:25 compute-0 ceph-mon[75015]: pgmap v3258: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3259: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:27 compute-0 ceph-mon[75015]: pgmap v3259: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:28 compute-0 nova_compute[253538]: 2025-11-25 09:28:28.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:28:28 compute-0 nova_compute[253538]: 2025-11-25 09:28:28.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:28:28 compute-0 nova_compute[253538]: 2025-11-25 09:28:28.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:28:28 compute-0 nova_compute[253538]: 2025-11-25 09:28:28.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:28:28 compute-0 nova_compute[253538]: 2025-11-25 09:28:28.581 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:28:28 compute-0 nova_compute[253538]: 2025-11-25 09:28:28.581 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:28:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3260: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:28:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/14658349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:28:29 compute-0 nova_compute[253538]: 2025-11-25 09:28:29.054 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:28:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:28:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/132669809' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:28:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:28:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/132669809' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:28:29 compute-0 nova_compute[253538]: 2025-11-25 09:28:29.125 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:29 compute-0 nova_compute[253538]: 2025-11-25 09:28:29.219 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:28:29 compute-0 nova_compute[253538]: 2025-11-25 09:28:29.220 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3628MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:28:29 compute-0 nova_compute[253538]: 2025-11-25 09:28:29.220 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:28:29 compute-0 nova_compute[253538]: 2025-11-25 09:28:29.220 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:28:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:28:29 compute-0 nova_compute[253538]: 2025-11-25 09:28:29.558 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:28:29 compute-0 nova_compute[253538]: 2025-11-25 09:28:29.559 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:28:29 compute-0 nova_compute[253538]: 2025-11-25 09:28:29.674 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 09:28:29 compute-0 nova_compute[253538]: 2025-11-25 09:28:29.758 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 09:28:29 compute-0 nova_compute[253538]: 2025-11-25 09:28:29.759 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:28:29 compute-0 nova_compute[253538]: 2025-11-25 09:28:29.777 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 09:28:29 compute-0 nova_compute[253538]: 2025-11-25 09:28:29.795 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 09:28:29 compute-0 podman[433934]: 2025-11-25 09:28:29.816144922 +0000 UTC m=+0.065775844 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:28:29 compute-0 nova_compute[253538]: 2025-11-25 09:28:29.816 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:28:29 compute-0 podman[433933]: 2025-11-25 09:28:29.816676397 +0000 UTC m=+0.068977721 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 25 09:28:29 compute-0 ceph-mon[75015]: pgmap v3260: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/14658349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:28:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/132669809' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:28:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/132669809' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:28:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:28:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3446382416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:28:30 compute-0 nova_compute[253538]: 2025-11-25 09:28:30.268 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:28:30 compute-0 nova_compute[253538]: 2025-11-25 09:28:30.275 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:28:30 compute-0 nova_compute[253538]: 2025-11-25 09:28:30.294 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:28:30 compute-0 nova_compute[253538]: 2025-11-25 09:28:30.297 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:28:30 compute-0 nova_compute[253538]: 2025-11-25 09:28:30.298 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:28:30 compute-0 nova_compute[253538]: 2025-11-25 09:28:30.758 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3261: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:31 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3446382416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:28:32 compute-0 ceph-mon[75015]: pgmap v3261: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3262: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:33 compute-0 ceph-mon[75015]: pgmap v3262: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:34 compute-0 nova_compute[253538]: 2025-11-25 09:28:34.143 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:28:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3263: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:35 compute-0 nova_compute[253538]: 2025-11-25 09:28:35.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:35 compute-0 ceph-mon[75015]: pgmap v3263: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3264: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:37 compute-0 ceph-mon[75015]: pgmap v3264: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3265: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:39 compute-0 nova_compute[253538]: 2025-11-25 09:28:39.146 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:28:39 compute-0 ceph-mon[75015]: pgmap v3265: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:40 compute-0 nova_compute[253538]: 2025-11-25 09:28:40.762 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3266: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:28:41.117 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:28:41.117 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:28:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:28:41.118 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:28:41 compute-0 podman[433988]: 2025-11-25 09:28:41.829054639 +0000 UTC m=+0.085357998 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:28:41 compute-0 ceph-mon[75015]: pgmap v3266: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3267: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:43 compute-0 ceph-mon[75015]: pgmap v3267: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:44 compute-0 nova_compute[253538]: 2025-11-25 09:28:44.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:28:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3268: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:45 compute-0 nova_compute[253538]: 2025-11-25 09:28:45.764 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:45 compute-0 sshd-session[434014]: Accepted publickey for zuul from 192.168.122.30 port 40218 ssh2: ECDSA SHA256:XPT2Qp05XP+4/iPWyxQ1YuG4VjRBRDdk6pBKmAF934E
Nov 25 09:28:45 compute-0 systemd-logind[822]: New session 52 of user zuul.
Nov 25 09:28:45 compute-0 systemd[1]: Started Session 52 of User zuul.
Nov 25 09:28:45 compute-0 sshd-session[434014]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:28:45 compute-0 ceph-mon[75015]: pgmap v3268: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3269: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:48 compute-0 ceph-mon[75015]: pgmap v3269: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:28:48.125 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:28:48 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:28:48.126 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:28:48 compute-0 nova_compute[253538]: 2025-11-25 09:28:48.125 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3270: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:49 compute-0 ceph-mon[75015]: pgmap v3270: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:49 compute-0 nova_compute[253538]: 2025-11-25 09:28:49.149 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:28:50 compute-0 nova_compute[253538]: 2025-11-25 09:28:50.765 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3271: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:51 compute-0 sshd-session[434017]: Connection closed by 192.168.122.30 port 40218
Nov 25 09:28:51 compute-0 sshd-session[434014]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:28:51 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Nov 25 09:28:51 compute-0 systemd-logind[822]: Session 52 logged out. Waiting for processes to exit.
Nov 25 09:28:51 compute-0 systemd-logind[822]: Removed session 52.
Nov 25 09:28:52 compute-0 ceph-mon[75015]: pgmap v3271: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3272: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:28:53
Nov 25 09:28:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:28:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:28:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'default.rgw.log', 'images', '.rgw.root', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'volumes']
Nov 25 09:28:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:28:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:28:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:28:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:28:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:28:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:28:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:28:54 compute-0 ceph-mon[75015]: pgmap v3272: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:54 compute-0 nova_compute[253538]: 2025-11-25 09:28:54.151 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:28:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:28:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:28:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:28:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:28:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:28:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:28:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:28:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:28:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:28:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:28:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3273: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:55 compute-0 ceph-mon[75015]: pgmap v3273: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:55 compute-0 nova_compute[253538]: 2025-11-25 09:28:55.767 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3274: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:58 compute-0 ceph-mon[75015]: pgmap v3274: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:58 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:28:58.128 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:28:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3275: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:59 compute-0 nova_compute[253538]: 2025-11-25 09:28:59.154 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:28:59 compute-0 ceph-mon[75015]: pgmap v3275: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:28:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:29:00 compute-0 nova_compute[253538]: 2025-11-25 09:29:00.769 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:00 compute-0 podman[434271]: 2025-11-25 09:29:00.82171315 +0000 UTC m=+0.074072890 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 09:29:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3276: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:00 compute-0 podman[434272]: 2025-11-25 09:29:00.835633879 +0000 UTC m=+0.073747441 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 09:29:02 compute-0 ceph-mon[75015]: pgmap v3276: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3277: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:03 compute-0 ceph-mon[75015]: pgmap v3277: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:04 compute-0 nova_compute[253538]: 2025-11-25 09:29:04.156 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:29:04 compute-0 nova_compute[253538]: 2025-11-25 09:29:04.299 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:29:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3278: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:05 compute-0 nova_compute[253538]: 2025-11-25 09:29:05.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:29:05 compute-0 nova_compute[253538]: 2025-11-25 09:29:05.771 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:06 compute-0 ceph-mon[75015]: pgmap v3278: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3279: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:06 compute-0 sudo[434309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:29:06 compute-0 sudo[434309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:06 compute-0 sudo[434309]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:07 compute-0 ceph-mon[75015]: pgmap v3279: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:07 compute-0 sudo[434334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:29:07 compute-0 sudo[434334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:07 compute-0 sudo[434334]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:07 compute-0 sudo[434359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:29:07 compute-0 sudo[434359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:07 compute-0 sudo[434359]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:07 compute-0 sudo[434384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:29:07 compute-0 sudo[434384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:07 compute-0 sudo[434384]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:29:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:29:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:29:07 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:29:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:29:07 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:29:07 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 85616236-fee4-48d4-b65a-d252ee4b87aa does not exist
Nov 25 09:29:07 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev fcd345c6-23a4-4bb8-ab67-352d3f27267a does not exist
Nov 25 09:29:07 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 69b84dc9-16c1-4080-b454-8fe78df2231a does not exist
Nov 25 09:29:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:29:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:29:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:29:07 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:29:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:29:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:29:07 compute-0 sudo[434439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:29:07 compute-0 sudo[434439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:07 compute-0 sudo[434439]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:07 compute-0 sudo[434464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:29:07 compute-0 sudo[434464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:07 compute-0 sudo[434464]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:07 compute-0 sudo[434489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:29:07 compute-0 sudo[434489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:07 compute-0 sudo[434489]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:07 compute-0 sudo[434514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:29:07 compute-0 sudo[434514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:08 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:29:08 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:29:08 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:29:08 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:29:08 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:29:08 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:29:08 compute-0 podman[434579]: 2025-11-25 09:29:08.422674677 +0000 UTC m=+0.166329794 container create ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williams, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 09:29:08 compute-0 podman[434579]: 2025-11-25 09:29:08.331525723 +0000 UTC m=+0.075180860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:29:08 compute-0 systemd[1]: Started libpod-conmon-ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27.scope.
Nov 25 09:29:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:29:08 compute-0 podman[434579]: 2025-11-25 09:29:08.566098339 +0000 UTC m=+0.309753506 container init ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 09:29:08 compute-0 podman[434579]: 2025-11-25 09:29:08.57717865 +0000 UTC m=+0.320833777 container start ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 09:29:08 compute-0 wizardly_williams[434596]: 167 167
Nov 25 09:29:08 compute-0 systemd[1]: libpod-ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27.scope: Deactivated successfully.
Nov 25 09:29:08 compute-0 podman[434579]: 2025-11-25 09:29:08.594435461 +0000 UTC m=+0.338090578 container attach ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williams, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:29:08 compute-0 podman[434579]: 2025-11-25 09:29:08.594972016 +0000 UTC m=+0.338627133 container died ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 09:29:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f084e989184c3e20d2e3ed2efa2f1909b0592970028c88cb4643fa055ee7bff9-merged.mount: Deactivated successfully.
Nov 25 09:29:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3280: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:08 compute-0 podman[434579]: 2025-11-25 09:29:08.861351837 +0000 UTC m=+0.605006954 container remove ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 09:29:08 compute-0 systemd[1]: libpod-conmon-ada061d494fc986b2dcd687d43cf8203eda95fa38c1a07a81e1f05bc74b16e27.scope: Deactivated successfully.
Nov 25 09:29:09 compute-0 podman[434620]: 2025-11-25 09:29:09.124191073 +0000 UTC m=+0.096013809 container create 8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 09:29:09 compute-0 ceph-mon[75015]: pgmap v3280: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:09 compute-0 podman[434620]: 2025-11-25 09:29:09.059718715 +0000 UTC m=+0.031541451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:29:09 compute-0 nova_compute[253538]: 2025-11-25 09:29:09.158 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:09 compute-0 systemd[1]: Started libpod-conmon-8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138.scope.
Nov 25 09:29:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbefe6502460a1e401bf112565ac326470b7724c8dd96a0bf39c026b811f80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbefe6502460a1e401bf112565ac326470b7724c8dd96a0bf39c026b811f80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbefe6502460a1e401bf112565ac326470b7724c8dd96a0bf39c026b811f80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbefe6502460a1e401bf112565ac326470b7724c8dd96a0bf39c026b811f80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbefe6502460a1e401bf112565ac326470b7724c8dd96a0bf39c026b811f80/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.269060) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062949269100, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1651, "num_deletes": 252, "total_data_size": 2663824, "memory_usage": 2702912, "flush_reason": "Manual Compaction"}
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Nov 25 09:29:09 compute-0 podman[434620]: 2025-11-25 09:29:09.286289982 +0000 UTC m=+0.258112708 container init 8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 25 09:29:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:29:09 compute-0 podman[434620]: 2025-11-25 09:29:09.297812706 +0000 UTC m=+0.269635442 container start 8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062949339965, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 2616092, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66491, "largest_seqno": 68141, "table_properties": {"data_size": 2608407, "index_size": 4627, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15858, "raw_average_key_size": 20, "raw_value_size": 2592936, "raw_average_value_size": 3294, "num_data_blocks": 207, "num_entries": 787, "num_filter_entries": 787, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062774, "oldest_key_time": 1764062774, "file_creation_time": 1764062949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 70953 microseconds, and 6702 cpu microseconds.
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:29:09 compute-0 podman[434620]: 2025-11-25 09:29:09.345615899 +0000 UTC m=+0.317438635 container attach 8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.340011) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 2616092 bytes OK
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.340032) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.345978) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.346019) EVENT_LOG_v1 {"time_micros": 1764062949346010, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.346043) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2656695, prev total WAL file size 2656695, number of live WAL files 2.
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.346988) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2554KB)], [158(10MB)]
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062949347030, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 13200617, "oldest_snapshot_seqno": -1}
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8628 keys, 11472551 bytes, temperature: kUnknown
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062949444276, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11472551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11416045, "index_size": 33801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 226544, "raw_average_key_size": 26, "raw_value_size": 11263202, "raw_average_value_size": 1305, "num_data_blocks": 1314, "num_entries": 8628, "num_filter_entries": 8628, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.445121) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11472551 bytes
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.454144) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.0 rd, 117.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 10.1 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(9.4) write-amplify(4.4) OK, records in: 9148, records dropped: 520 output_compression: NoCompression
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.454180) EVENT_LOG_v1 {"time_micros": 1764062949454166, "job": 98, "event": "compaction_finished", "compaction_time_micros": 97811, "compaction_time_cpu_micros": 28123, "output_level": 6, "num_output_files": 1, "total_output_size": 11472551, "num_input_records": 9148, "num_output_records": 8628, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062949455174, "job": 98, "event": "table_file_deletion", "file_number": 160}
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062949457726, "job": 98, "event": "table_file_deletion", "file_number": 158}
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.346917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.457761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.457766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.457768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.457770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:29:09 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:09.457772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:29:09 compute-0 nova_compute[253538]: 2025-11-25 09:29:09.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:29:09 compute-0 nova_compute[253538]: 2025-11-25 09:29:09.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:29:09 compute-0 nova_compute[253538]: 2025-11-25 09:29:09.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:29:09 compute-0 nova_compute[253538]: 2025-11-25 09:29:09.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:29:10 compute-0 musing_antonelli[434637]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:29:10 compute-0 musing_antonelli[434637]: --> relative data size: 1.0
Nov 25 09:29:10 compute-0 musing_antonelli[434637]: --> All data devices are unavailable
Nov 25 09:29:10 compute-0 systemd[1]: libpod-8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138.scope: Deactivated successfully.
Nov 25 09:29:10 compute-0 podman[434620]: 2025-11-25 09:29:10.399960203 +0000 UTC m=+1.371782919 container died 8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:29:10 compute-0 systemd[1]: libpod-8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138.scope: Consumed 1.048s CPU time.
Nov 25 09:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8fbefe6502460a1e401bf112565ac326470b7724c8dd96a0bf39c026b811f80-merged.mount: Deactivated successfully.
Nov 25 09:29:10 compute-0 podman[434620]: 2025-11-25 09:29:10.454927832 +0000 UTC m=+1.426750548 container remove 8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 09:29:10 compute-0 systemd[1]: libpod-conmon-8aff5a82f4ca885adae1bd3f2926103e719b214132d3a60b570055022bc69138.scope: Deactivated successfully.
Nov 25 09:29:10 compute-0 sudo[434514]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:10 compute-0 sudo[434678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:29:10 compute-0 sudo[434678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:10 compute-0 sudo[434678]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:10 compute-0 sudo[434703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:29:10 compute-0 sudo[434703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:10 compute-0 sudo[434703]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:10 compute-0 sudo[434728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:29:10 compute-0 sudo[434728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:10 compute-0 sudo[434728]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:10 compute-0 sudo[434753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:29:10 compute-0 sudo[434753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:10 compute-0 nova_compute[253538]: 2025-11-25 09:29:10.772 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3281: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:11 compute-0 podman[434819]: 2025-11-25 09:29:11.046567441 +0000 UTC m=+0.037861094 container create a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:29:11 compute-0 systemd[1]: Started libpod-conmon-a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9.scope.
Nov 25 09:29:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:29:11 compute-0 podman[434819]: 2025-11-25 09:29:11.030823121 +0000 UTC m=+0.022116794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:29:11 compute-0 podman[434819]: 2025-11-25 09:29:11.139832773 +0000 UTC m=+0.131126476 container init a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:29:11 compute-0 podman[434819]: 2025-11-25 09:29:11.14924622 +0000 UTC m=+0.140539873 container start a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:29:11 compute-0 podman[434819]: 2025-11-25 09:29:11.153023573 +0000 UTC m=+0.144317226 container attach a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:29:11 compute-0 stupefied_robinson[434835]: 167 167
Nov 25 09:29:11 compute-0 systemd[1]: libpod-a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9.scope: Deactivated successfully.
Nov 25 09:29:11 compute-0 conmon[434835]: conmon a7402d1d5ef07897ff3e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9.scope/container/memory.events
Nov 25 09:29:11 compute-0 podman[434819]: 2025-11-25 09:29:11.159267733 +0000 UTC m=+0.150561386 container died a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 09:29:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-23a0b31698cd7b13c562e50d7ad81387d4f7b31a86f511fa69c4971ac0be243b-merged.mount: Deactivated successfully.
Nov 25 09:29:11 compute-0 podman[434819]: 2025-11-25 09:29:11.20792062 +0000 UTC m=+0.199214273 container remove a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 09:29:11 compute-0 systemd[1]: libpod-conmon-a7402d1d5ef07897ff3e365ed2d15581bef68812a95f92c1870a22b7d18c82c9.scope: Deactivated successfully.
Nov 25 09:29:11 compute-0 podman[434861]: 2025-11-25 09:29:11.422587822 +0000 UTC m=+0.075381446 container create 6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:29:11 compute-0 podman[434861]: 2025-11-25 09:29:11.372125006 +0000 UTC m=+0.024918660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:29:11 compute-0 systemd[1]: Started libpod-conmon-6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3.scope.
Nov 25 09:29:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/275f40ecd8226259aaa895281c3d2658a087c6b609eb3f199a8226ba2568dd13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/275f40ecd8226259aaa895281c3d2658a087c6b609eb3f199a8226ba2568dd13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/275f40ecd8226259aaa895281c3d2658a087c6b609eb3f199a8226ba2568dd13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/275f40ecd8226259aaa895281c3d2658a087c6b609eb3f199a8226ba2568dd13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:29:11 compute-0 podman[434861]: 2025-11-25 09:29:11.69579834 +0000 UTC m=+0.348591994 container init 6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:29:11 compute-0 podman[434861]: 2025-11-25 09:29:11.703338625 +0000 UTC m=+0.356132249 container start 6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:29:11 compute-0 podman[434861]: 2025-11-25 09:29:11.801319526 +0000 UTC m=+0.454113160 container attach 6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 09:29:11 compute-0 ceph-mon[75015]: pgmap v3281: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:12 compute-0 exciting_shaw[434877]: {
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:     "0": [
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:         {
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "devices": [
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "/dev/loop3"
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             ],
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "lv_name": "ceph_lv0",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "lv_size": "21470642176",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "name": "ceph_lv0",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "tags": {
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.cluster_name": "ceph",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.crush_device_class": "",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.encrypted": "0",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.osd_id": "0",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.type": "block",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.vdo": "0"
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             },
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "type": "block",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "vg_name": "ceph_vg0"
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:         }
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:     ],
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:     "1": [
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:         {
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "devices": [
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "/dev/loop4"
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             ],
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "lv_name": "ceph_lv1",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "lv_size": "21470642176",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "name": "ceph_lv1",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "tags": {
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.cluster_name": "ceph",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.crush_device_class": "",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.encrypted": "0",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.osd_id": "1",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.type": "block",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.vdo": "0"
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             },
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "type": "block",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "vg_name": "ceph_vg1"
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:         }
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:     ],
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:     "2": [
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:         {
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "devices": [
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "/dev/loop5"
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             ],
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "lv_name": "ceph_lv2",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "lv_size": "21470642176",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "name": "ceph_lv2",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "tags": {
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.cluster_name": "ceph",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.crush_device_class": "",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.encrypted": "0",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.osd_id": "2",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.type": "block",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:                 "ceph.vdo": "0"
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             },
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "type": "block",
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:             "vg_name": "ceph_vg2"
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:         }
Nov 25 09:29:12 compute-0 exciting_shaw[434877]:     ]
Nov 25 09:29:12 compute-0 exciting_shaw[434877]: }
Nov 25 09:29:12 compute-0 systemd[1]: libpod-6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3.scope: Deactivated successfully.
Nov 25 09:29:12 compute-0 podman[434887]: 2025-11-25 09:29:12.553138893 +0000 UTC m=+0.027119290 container died 6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:29:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-275f40ecd8226259aaa895281c3d2658a087c6b609eb3f199a8226ba2568dd13-merged.mount: Deactivated successfully.
Nov 25 09:29:12 compute-0 podman[434887]: 2025-11-25 09:29:12.620781787 +0000 UTC m=+0.094762174 container remove 6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 09:29:12 compute-0 systemd[1]: libpod-conmon-6c316d0b9b7e64fcd6a21f28cad2117e89e9c5bb37df989b62bf1873c92846b3.scope: Deactivated successfully.
Nov 25 09:29:12 compute-0 podman[434886]: 2025-11-25 09:29:12.647454024 +0000 UTC m=+0.108819407 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 09:29:12 compute-0 sudo[434753]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:12 compute-0 sudo[434927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:29:12 compute-0 sudo[434927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:12 compute-0 sudo[434927]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:12 compute-0 sudo[434952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:29:12 compute-0 sudo[434952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:12 compute-0 sudo[434952]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3282: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:12 compute-0 sudo[434977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:29:12 compute-0 sudo[434977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:12 compute-0 sudo[434977]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:12 compute-0 sudo[435002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:29:12 compute-0 sudo[435002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:13 compute-0 podman[435067]: 2025-11-25 09:29:13.295785859 +0000 UTC m=+0.046935951 container create 9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:29:13 compute-0 systemd[1]: Started libpod-conmon-9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab.scope.
Nov 25 09:29:13 compute-0 podman[435067]: 2025-11-25 09:29:13.273533153 +0000 UTC m=+0.024683245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:29:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:29:13 compute-0 podman[435067]: 2025-11-25 09:29:13.388890557 +0000 UTC m=+0.140040669 container init 9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_black, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:29:13 compute-0 podman[435067]: 2025-11-25 09:29:13.39704072 +0000 UTC m=+0.148190812 container start 9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_black, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:29:13 compute-0 podman[435067]: 2025-11-25 09:29:13.400088883 +0000 UTC m=+0.151238975 container attach 9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:29:13 compute-0 systemd[1]: libpod-9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab.scope: Deactivated successfully.
Nov 25 09:29:13 compute-0 quizzical_black[435083]: 167 167
Nov 25 09:29:13 compute-0 conmon[435083]: conmon 9b67d3bf36eb249d9bf5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab.scope/container/memory.events
Nov 25 09:29:13 compute-0 podman[435067]: 2025-11-25 09:29:13.405457459 +0000 UTC m=+0.156607551 container died 9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 09:29:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1c75af54530410171b20f11f2946a5db3921942cfcc35ae1460a292cfe0ac51-merged.mount: Deactivated successfully.
Nov 25 09:29:13 compute-0 podman[435067]: 2025-11-25 09:29:13.444026151 +0000 UTC m=+0.195176243 container remove 9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_black, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:29:13 compute-0 systemd[1]: libpod-conmon-9b67d3bf36eb249d9bf5ade69f27a31465c15861b4af787e3a23f34282ed08ab.scope: Deactivated successfully.
Nov 25 09:29:13 compute-0 nova_compute[253538]: 2025-11-25 09:29:13.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:29:13 compute-0 podman[435107]: 2025-11-25 09:29:13.642985885 +0000 UTC m=+0.046620233 container create 62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 09:29:13 compute-0 systemd[1]: Started libpod-conmon-62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2.scope.
Nov 25 09:29:13 compute-0 podman[435107]: 2025-11-25 09:29:13.624628384 +0000 UTC m=+0.028262752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:29:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a5eb10b57bbd7bdf2d1beeb4606200656c27aa4ac662298b80f4bade4228ed4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a5eb10b57bbd7bdf2d1beeb4606200656c27aa4ac662298b80f4bade4228ed4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a5eb10b57bbd7bdf2d1beeb4606200656c27aa4ac662298b80f4bade4228ed4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a5eb10b57bbd7bdf2d1beeb4606200656c27aa4ac662298b80f4bade4228ed4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:29:13 compute-0 podman[435107]: 2025-11-25 09:29:13.748241694 +0000 UTC m=+0.151876082 container init 62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:29:13 compute-0 podman[435107]: 2025-11-25 09:29:13.757083726 +0000 UTC m=+0.160718074 container start 62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 09:29:13 compute-0 podman[435107]: 2025-11-25 09:29:13.76091992 +0000 UTC m=+0.164554298 container attach 62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:29:13 compute-0 ceph-mon[75015]: pgmap v3282: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:14 compute-0 nova_compute[253538]: 2025-11-25 09:29:14.161 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]: {
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:         "osd_id": 1,
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:         "type": "bluestore"
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:     },
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:         "osd_id": 2,
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:         "type": "bluestore"
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:     },
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:         "osd_id": 0,
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:         "type": "bluestore"
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]:     }
Nov 25 09:29:14 compute-0 reverent_mirzakhani[435124]: }
Nov 25 09:29:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3283: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:14 compute-0 systemd[1]: libpod-62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2.scope: Deactivated successfully.
Nov 25 09:29:14 compute-0 podman[435107]: 2025-11-25 09:29:14.853941598 +0000 UTC m=+1.257575936 container died 62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:29:14 compute-0 systemd[1]: libpod-62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2.scope: Consumed 1.101s CPU time.
Nov 25 09:29:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a5eb10b57bbd7bdf2d1beeb4606200656c27aa4ac662298b80f4bade4228ed4-merged.mount: Deactivated successfully.
Nov 25 09:29:14 compute-0 podman[435107]: 2025-11-25 09:29:14.957700766 +0000 UTC m=+1.361335114 container remove 62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:29:14 compute-0 systemd[1]: libpod-conmon-62bb4ed11fb7a54167214586266e555236454e8095ce148517be672e31d30de2.scope: Deactivated successfully.
Nov 25 09:29:15 compute-0 sudo[435002]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:29:15 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:29:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:29:15 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:29:15 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 74622ce9-fae0-4a38-b82f-58602ccd19ba does not exist
Nov 25 09:29:15 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 081c9abb-f7a6-48c8-9e0c-75952509a069 does not exist
Nov 25 09:29:15 compute-0 sudo[435170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:29:15 compute-0 sudo[435170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:15 compute-0 sudo[435170]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:15 compute-0 sudo[435195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:29:15 compute-0 sudo[435195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:29:15 compute-0 sudo[435195]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:15 compute-0 nova_compute[253538]: 2025-11-25 09:29:15.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:29:15 compute-0 nova_compute[253538]: 2025-11-25 09:29:15.774 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:16 compute-0 ceph-mon[75015]: pgmap v3283: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:16 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:29:16 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:29:16 compute-0 nova_compute[253538]: 2025-11-25 09:29:16.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:29:16 compute-0 nova_compute[253538]: 2025-11-25 09:29:16.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:29:16 compute-0 nova_compute[253538]: 2025-11-25 09:29:16.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:29:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3284: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:17 compute-0 ceph-mon[75015]: pgmap v3284: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3285: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:19 compute-0 nova_compute[253538]: 2025-11-25 09:29:19.164 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:29:19 compute-0 ceph-mon[75015]: pgmap v3285: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:20 compute-0 nova_compute[253538]: 2025-11-25 09:29:20.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3286: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:21 compute-0 ceph-mon[75015]: pgmap v3286: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:21 compute-0 nova_compute[253538]: 2025-11-25 09:29:21.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:29:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3287: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:29:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:29:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:29:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:29:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:29:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:29:24 compute-0 ceph-mon[75015]: pgmap v3287: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:24 compute-0 nova_compute[253538]: 2025-11-25 09:29:24.165 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:29:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3288: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:25 compute-0 ceph-mon[75015]: pgmap v3288: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:25 compute-0 nova_compute[253538]: 2025-11-25 09:29:25.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:29:25 compute-0 nova_compute[253538]: 2025-11-25 09:29:25.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3289: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:28 compute-0 ceph-mon[75015]: pgmap v3289: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3290: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:29:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1292568820' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:29:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:29:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1292568820' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:29:29 compute-0 nova_compute[253538]: 2025-11-25 09:29:29.168 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:29:29 compute-0 ceph-mon[75015]: pgmap v3290: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1292568820' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:29:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1292568820' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:29:29 compute-0 nova_compute[253538]: 2025-11-25 09:29:29.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:29:29 compute-0 nova_compute[253538]: 2025-11-25 09:29:29.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:29:29 compute-0 nova_compute[253538]: 2025-11-25 09:29:29.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:29:29 compute-0 nova_compute[253538]: 2025-11-25 09:29:29.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:29:29 compute-0 nova_compute[253538]: 2025-11-25 09:29:29.579 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:29:29 compute-0 nova_compute[253538]: 2025-11-25 09:29:29.579 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:29:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:29:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1627516190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:29:30 compute-0 nova_compute[253538]: 2025-11-25 09:29:30.073 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:29:30 compute-0 nova_compute[253538]: 2025-11-25 09:29:30.253 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:29:30 compute-0 nova_compute[253538]: 2025-11-25 09:29:30.254 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3589MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:29:30 compute-0 nova_compute[253538]: 2025-11-25 09:29:30.255 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:29:30 compute-0 nova_compute[253538]: 2025-11-25 09:29:30.255 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:29:30 compute-0 nova_compute[253538]: 2025-11-25 09:29:30.316 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:29:30 compute-0 nova_compute[253538]: 2025-11-25 09:29:30.316 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:29:30 compute-0 nova_compute[253538]: 2025-11-25 09:29:30.333 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:29:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:29:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966608219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:29:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3291: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:30 compute-0 nova_compute[253538]: 2025-11-25 09:29:30.849 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:30 compute-0 nova_compute[253538]: 2025-11-25 09:29:30.857 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:29:30 compute-0 nova_compute[253538]: 2025-11-25 09:29:30.863 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:29:30 compute-0 nova_compute[253538]: 2025-11-25 09:29:30.876 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:29:30 compute-0 nova_compute[253538]: 2025-11-25 09:29:30.879 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:29:30 compute-0 nova_compute[253538]: 2025-11-25 09:29:30.879 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:29:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1627516190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:29:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2966608219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:29:31 compute-0 podman[435265]: 2025-11-25 09:29:31.805020409 +0000 UTC m=+0.054924929 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 09:29:31 compute-0 podman[435264]: 2025-11-25 09:29:31.805875912 +0000 UTC m=+0.055624197 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 09:29:31 compute-0 ceph-mon[75015]: pgmap v3291: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3292: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:33 compute-0 ceph-mon[75015]: pgmap v3292: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:34 compute-0 nova_compute[253538]: 2025-11-25 09:29:34.170 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:29:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3293: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:35 compute-0 nova_compute[253538]: 2025-11-25 09:29:35.852 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:35 compute-0 ceph-mon[75015]: pgmap v3293: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:36 compute-0 sshd-session[435300]: Invalid user developer from 182.253.79.194 port 33957
Nov 25 09:29:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3294: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:37 compute-0 sshd-session[435300]: Received disconnect from 182.253.79.194 port 33957:11: Bye Bye [preauth]
Nov 25 09:29:37 compute-0 sshd-session[435300]: Disconnected from invalid user developer 182.253.79.194 port 33957 [preauth]
Nov 25 09:29:37 compute-0 ceph-mon[75015]: pgmap v3294: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3295: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:39 compute-0 nova_compute[253538]: 2025-11-25 09:29:39.174 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:29:40 compute-0 ceph-mon[75015]: pgmap v3295: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3296: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:40 compute-0 nova_compute[253538]: 2025-11-25 09:29:40.853 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:29:41.118 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:29:41.118 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:29:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:29:41.119 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:29:42 compute-0 ceph-mon[75015]: pgmap v3296: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3297: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:42 compute-0 podman[435302]: 2025-11-25 09:29:42.877257791 +0000 UTC m=+0.126353315 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 09:29:44 compute-0 ceph-mon[75015]: pgmap v3297: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:44 compute-0 nova_compute[253538]: 2025-11-25 09:29:44.176 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:29:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3298: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:45 compute-0 ceph-mon[75015]: pgmap v3298: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:45 compute-0 nova_compute[253538]: 2025-11-25 09:29:45.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3299: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:47 compute-0 ceph-mon[75015]: pgmap v3299: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3300: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:49 compute-0 nova_compute[253538]: 2025-11-25 09:29:49.179 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:29:49 compute-0 ceph-mon[75015]: pgmap v3300: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:50 compute-0 nova_compute[253538]: 2025-11-25 09:29:50.857 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3301: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:51 compute-0 sshd-session[435329]: Invalid user uftp from 45.78.217.205 port 45892
Nov 25 09:29:51 compute-0 sshd-session[435329]: Received disconnect from 45.78.217.205 port 45892:11: Bye Bye [preauth]
Nov 25 09:29:51 compute-0 sshd-session[435329]: Disconnected from invalid user uftp 45.78.217.205 port 45892 [preauth]
Nov 25 09:29:52 compute-0 ceph-mon[75015]: pgmap v3301: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3302: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:53 compute-0 ceph-mon[75015]: pgmap v3302: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:29:53
Nov 25 09:29:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:29:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:29:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.meta', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.control', 'volumes']
Nov 25 09:29:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:29:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:29:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:29:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:29:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:29:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:29:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:29:54 compute-0 nova_compute[253538]: 2025-11-25 09:29:54.181 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:29:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:29:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:29:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:29:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:29:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:29:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:29:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:29:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:29:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:29:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:29:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3303: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:55 compute-0 sshd-session[435331]: Invalid user hadoop from 193.32.162.151 port 43604
Nov 25 09:29:55 compute-0 sshd-session[435331]: Connection closed by invalid user hadoop 193.32.162.151 port 43604 [preauth]
Nov 25 09:29:55 compute-0 nova_compute[253538]: 2025-11-25 09:29:55.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:55 compute-0 ceph-mon[75015]: pgmap v3303: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3304: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:57 compute-0 ceph-mon[75015]: pgmap v3304: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3305: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:29:59 compute-0 nova_compute[253538]: 2025-11-25 09:29:59.183 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:29:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.353291) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062999353426, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 627, "num_deletes": 251, "total_data_size": 760086, "memory_usage": 772264, "flush_reason": "Manual Compaction"}
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062999385862, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 503622, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68142, "largest_seqno": 68768, "table_properties": {"data_size": 500696, "index_size": 897, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7769, "raw_average_key_size": 20, "raw_value_size": 494589, "raw_average_value_size": 1304, "num_data_blocks": 40, "num_entries": 379, "num_filter_entries": 379, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062950, "oldest_key_time": 1764062950, "file_creation_time": 1764062999, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 32593 microseconds, and 2796 cpu microseconds.
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.385940) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 503622 bytes OK
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.385974) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.406048) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.406119) EVENT_LOG_v1 {"time_micros": 1764062999406104, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.406154) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 756718, prev total WAL file size 756718, number of live WAL files 2.
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.407107) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373535' seq:72057594037927935, type:22 .. '6D6772737461740033303037' seq:0, type:0; will stop at (end)
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(491KB)], [161(10MB)]
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062999407169, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 11976173, "oldest_snapshot_seqno": -1}
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8513 keys, 8913090 bytes, temperature: kUnknown
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062999616674, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 8913090, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8861612, "index_size": 29093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21317, "raw_key_size": 224311, "raw_average_key_size": 26, "raw_value_size": 8715034, "raw_average_value_size": 1023, "num_data_blocks": 1120, "num_entries": 8513, "num_filter_entries": 8513, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764062999, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.617069) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 8913090 bytes
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.677748) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 57.1 rd, 42.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.9 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(41.5) write-amplify(17.7) OK, records in: 9007, records dropped: 494 output_compression: NoCompression
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.677791) EVENT_LOG_v1 {"time_micros": 1764062999677774, "job": 100, "event": "compaction_finished", "compaction_time_micros": 209633, "compaction_time_cpu_micros": 44747, "output_level": 6, "num_output_files": 1, "total_output_size": 8913090, "num_input_records": 9007, "num_output_records": 8513, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062999678084, "job": 100, "event": "table_file_deletion", "file_number": 163}
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764062999680724, "job": 100, "event": "table_file_deletion", "file_number": 161}
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.407042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.680800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.680805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.680807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.680809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:29:59 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:29:59.680811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:29:59 compute-0 ceph-mon[75015]: pgmap v3305: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:00 compute-0 nova_compute[253538]: 2025-11-25 09:30:00.859 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3306: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:01 compute-0 sshd-session[435336]: Connection closed by authenticating user root 171.244.51.45 port 60036 [preauth]
Nov 25 09:30:02 compute-0 ceph-mon[75015]: pgmap v3306: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:02 compute-0 podman[435339]: 2025-11-25 09:30:02.815657278 +0000 UTC m=+0.055686859 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 09:30:02 compute-0 podman[435338]: 2025-11-25 09:30:02.828377094 +0000 UTC m=+0.073608397 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 09:30:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3307: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:04 compute-0 ceph-mon[75015]: pgmap v3307: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:04 compute-0 nova_compute[253538]: 2025-11-25 09:30:04.185 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:30:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3308: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:04 compute-0 nova_compute[253538]: 2025-11-25 09:30:04.880 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:30:05 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:30:05 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.5 total, 600.0 interval
                                           Cumulative writes: 46K writes, 188K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.83 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 620 writes, 1636 keys, 620 commit groups, 1.0 writes per commit group, ingest: 0.86 MB, 0.00 MB/s
                                           Interval WAL: 620 writes, 275 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.14              0.00         1    0.137       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.14              0.00         1    0.137       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.14              0.00         1    0.137       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd0833090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd0833090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd0833090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 09:30:05 compute-0 nova_compute[253538]: 2025-11-25 09:30:05.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:06 compute-0 ceph-mon[75015]: pgmap v3308: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3309: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:07 compute-0 ceph-mon[75015]: pgmap v3309: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:07 compute-0 nova_compute[253538]: 2025-11-25 09:30:07.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:30:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3310: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:09 compute-0 nova_compute[253538]: 2025-11-25 09:30:09.188 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:30:10 compute-0 ceph-mon[75015]: pgmap v3310: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:10 compute-0 nova_compute[253538]: 2025-11-25 09:30:10.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:30:10 compute-0 nova_compute[253538]: 2025-11-25 09:30:10.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:30:10 compute-0 nova_compute[253538]: 2025-11-25 09:30:10.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:30:10 compute-0 nova_compute[253538]: 2025-11-25 09:30:10.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:30:10 compute-0 nova_compute[253538]: 2025-11-25 09:30:10.866 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3311: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:11 compute-0 ceph-mon[75015]: pgmap v3311: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:11 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:30:11 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.4 total, 600.0 interval
                                           Cumulative writes: 46K writes, 181K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.78 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 691 writes, 1763 keys, 691 commit groups, 1.0 writes per commit group, ingest: 0.85 MB, 0.00 MB/s
                                           Interval WAL: 691 writes, 309 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.083       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.083       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.083       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 09:30:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3312: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:13 compute-0 nova_compute[253538]: 2025-11-25 09:30:13.562 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:30:13 compute-0 podman[435377]: 2025-11-25 09:30:13.864411759 +0000 UTC m=+0.120885316 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 09:30:13 compute-0 ceph-mon[75015]: pgmap v3312: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:14 compute-0 nova_compute[253538]: 2025-11-25 09:30:14.190 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:30:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3313: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:15 compute-0 sudo[435405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:30:15 compute-0 sudo[435405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:15 compute-0 sudo[435405]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:15 compute-0 sudo[435430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:30:15 compute-0 sudo[435430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:15 compute-0 sudo[435430]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:15 compute-0 sudo[435455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:30:15 compute-0 sudo[435455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:15 compute-0 sudo[435455]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:15 compute-0 sudo[435480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:30:15 compute-0 sudo[435480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:15 compute-0 nova_compute[253538]: 2025-11-25 09:30:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:30:15 compute-0 nova_compute[253538]: 2025-11-25 09:30:15.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:15 compute-0 ceph-mon[75015]: pgmap v3313: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:16 compute-0 sudo[435480]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:30:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:30:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:30:16 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:30:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:30:16 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:30:16 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 749fd843-ee72-432a-923d-0fa01d24d1e5 does not exist
Nov 25 09:30:16 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev a7859bbe-b745-4211-a546-984b3865b994 does not exist
Nov 25 09:30:16 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 1bd17a76-87e7-444c-80d0-10cf5d0f9dc2 does not exist
Nov 25 09:30:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:30:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:30:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:30:16 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:30:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:30:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:30:16 compute-0 sudo[435536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:30:16 compute-0 sudo[435536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:16 compute-0 sudo[435536]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:16 compute-0 sudo[435561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:30:16 compute-0 sudo[435561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:16 compute-0 sudo[435561]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:16 compute-0 sudo[435586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:30:16 compute-0 sudo[435586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:16 compute-0 sudo[435586]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:16 compute-0 sudo[435611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:30:16 compute-0 sudo[435611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:16 compute-0 podman[435676]: 2025-11-25 09:30:16.589399989 +0000 UTC m=+0.042599653 container create 862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:30:16 compute-0 systemd[1]: Started libpod-conmon-862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae.scope.
Nov 25 09:30:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:30:16 compute-0 podman[435676]: 2025-11-25 09:30:16.571244873 +0000 UTC m=+0.024444557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:30:16 compute-0 podman[435676]: 2025-11-25 09:30:16.670076487 +0000 UTC m=+0.123276171 container init 862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 09:30:16 compute-0 podman[435676]: 2025-11-25 09:30:16.678550128 +0000 UTC m=+0.131749792 container start 862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:30:16 compute-0 podman[435676]: 2025-11-25 09:30:16.681657953 +0000 UTC m=+0.134857647 container attach 862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:30:16 compute-0 suspicious_lamport[435693]: 167 167
Nov 25 09:30:16 compute-0 systemd[1]: libpod-862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae.scope: Deactivated successfully.
Nov 25 09:30:16 compute-0 podman[435676]: 2025-11-25 09:30:16.684589294 +0000 UTC m=+0.137788958 container died 862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-81cd80287068906d92f04fa3ad4fb8bbf26f9216830fe3075d48f807762c3b75-merged.mount: Deactivated successfully.
Nov 25 09:30:16 compute-0 podman[435676]: 2025-11-25 09:30:16.737539306 +0000 UTC m=+0.190738980 container remove 862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:30:16 compute-0 systemd[1]: libpod-conmon-862e1663c268069f630a9b4df8a491fa846abe950cc9b25fafd3eea095c0a7ae.scope: Deactivated successfully.
Nov 25 09:30:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3314: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:16 compute-0 podman[435717]: 2025-11-25 09:30:16.896079699 +0000 UTC m=+0.054391293 container create 4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gates, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:30:16 compute-0 systemd[1]: Started libpod-conmon-4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170.scope.
Nov 25 09:30:16 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:30:16 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:30:16 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:30:16 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:30:16 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:30:16 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:30:16 compute-0 podman[435717]: 2025-11-25 09:30:16.866664258 +0000 UTC m=+0.024975932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:30:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/949250c2b1632bb56c09986b24b951143ce18ea87ea3d0de3cee3f97f1a6bce1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/949250c2b1632bb56c09986b24b951143ce18ea87ea3d0de3cee3f97f1a6bce1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/949250c2b1632bb56c09986b24b951143ce18ea87ea3d0de3cee3f97f1a6bce1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/949250c2b1632bb56c09986b24b951143ce18ea87ea3d0de3cee3f97f1a6bce1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/949250c2b1632bb56c09986b24b951143ce18ea87ea3d0de3cee3f97f1a6bce1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:30:16 compute-0 podman[435717]: 2025-11-25 09:30:16.997643708 +0000 UTC m=+0.155955302 container init 4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gates, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:30:17 compute-0 podman[435717]: 2025-11-25 09:30:17.008953936 +0000 UTC m=+0.167265520 container start 4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gates, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:30:17 compute-0 podman[435717]: 2025-11-25 09:30:17.012279166 +0000 UTC m=+0.170590820 container attach 4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:30:17 compute-0 nova_compute[253538]: 2025-11-25 09:30:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:30:17 compute-0 nova_compute[253538]: 2025-11-25 09:30:17.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:30:17 compute-0 nova_compute[253538]: 2025-11-25 09:30:17.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:30:18 compute-0 ceph-mon[75015]: pgmap v3314: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:18 compute-0 unruffled_gates[435733]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:30:18 compute-0 unruffled_gates[435733]: --> relative data size: 1.0
Nov 25 09:30:18 compute-0 unruffled_gates[435733]: --> All data devices are unavailable
Nov 25 09:30:18 compute-0 systemd[1]: libpod-4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170.scope: Deactivated successfully.
Nov 25 09:30:18 compute-0 podman[435717]: 2025-11-25 09:30:18.166683289 +0000 UTC m=+1.324994913 container died 4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:30:18 compute-0 systemd[1]: libpod-4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170.scope: Consumed 1.111s CPU time.
Nov 25 09:30:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-949250c2b1632bb56c09986b24b951143ce18ea87ea3d0de3cee3f97f1a6bce1-merged.mount: Deactivated successfully.
Nov 25 09:30:18 compute-0 podman[435717]: 2025-11-25 09:30:18.35414612 +0000 UTC m=+1.512457724 container remove 4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 09:30:18 compute-0 systemd[1]: libpod-conmon-4d88dfd356c9717440bd9e11c21e801d0f10638a39937a27a607caaa5e24c170.scope: Deactivated successfully.
Nov 25 09:30:18 compute-0 sudo[435611]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:18 compute-0 sudo[435778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:30:18 compute-0 sudo[435778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:18 compute-0 sudo[435778]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:18 compute-0 sudo[435803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:30:18 compute-0 sudo[435803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:18 compute-0 sudo[435803]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:18 compute-0 sshd-session[435762]: Invalid user developer from 146.190.154.85 port 43144
Nov 25 09:30:18 compute-0 sudo[435828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:30:18 compute-0 sudo[435828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:18 compute-0 sudo[435828]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:18 compute-0 sshd-session[435762]: Received disconnect from 146.190.154.85 port 43144:11: Bye Bye [preauth]
Nov 25 09:30:18 compute-0 sshd-session[435762]: Disconnected from invalid user developer 146.190.154.85 port 43144 [preauth]
Nov 25 09:30:18 compute-0 sudo[435853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:30:18 compute-0 sudo[435853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3315: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:19 compute-0 ceph-mon[75015]: pgmap v3315: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:19 compute-0 podman[435919]: 2025-11-25 09:30:19.167793951 +0000 UTC m=+0.078707437 container create 95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:30:19 compute-0 nova_compute[253538]: 2025-11-25 09:30:19.192 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:19 compute-0 podman[435919]: 2025-11-25 09:30:19.125511678 +0000 UTC m=+0.036425214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:30:19 compute-0 systemd[1]: Started libpod-conmon-95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70.scope.
Nov 25 09:30:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:30:19 compute-0 podman[435919]: 2025-11-25 09:30:19.289783386 +0000 UTC m=+0.200696872 container init 95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:30:19 compute-0 podman[435919]: 2025-11-25 09:30:19.301764313 +0000 UTC m=+0.212677759 container start 95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:30:19 compute-0 podman[435919]: 2025-11-25 09:30:19.307580642 +0000 UTC m=+0.218494098 container attach 95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 09:30:19 compute-0 amazing_brahmagupta[435936]: 167 167
Nov 25 09:30:19 compute-0 podman[435919]: 2025-11-25 09:30:19.308825645 +0000 UTC m=+0.219739101 container died 95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 09:30:19 compute-0 systemd[1]: libpod-95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70.scope: Deactivated successfully.
Nov 25 09:30:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c95b4a513ac5db01e4c9cf5b8baccf5abcfd7f9f660906df15c50a7f4f79006-merged.mount: Deactivated successfully.
Nov 25 09:30:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:30:19 compute-0 podman[435919]: 2025-11-25 09:30:19.347619083 +0000 UTC m=+0.258532529 container remove 95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 09:30:19 compute-0 systemd[1]: libpod-conmon-95d3047ac2a83c7d5b055f5b23f6926a7d5a98d9ac8318b9f599b8d5d4254a70.scope: Deactivated successfully.
Nov 25 09:30:19 compute-0 podman[435958]: 2025-11-25 09:30:19.556360804 +0000 UTC m=+0.066186626 container create c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:30:19 compute-0 systemd[1]: Started libpod-conmon-c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5.scope.
Nov 25 09:30:19 compute-0 podman[435958]: 2025-11-25 09:30:19.524934697 +0000 UTC m=+0.034760559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:30:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e0cabffb4b02e32e901f9c5016a6414f9c80fb3b829f556622cfaddaac504a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e0cabffb4b02e32e901f9c5016a6414f9c80fb3b829f556622cfaddaac504a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e0cabffb4b02e32e901f9c5016a6414f9c80fb3b829f556622cfaddaac504a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e0cabffb4b02e32e901f9c5016a6414f9c80fb3b829f556622cfaddaac504a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:30:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:30:19.628 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:30:19 compute-0 nova_compute[253538]: 2025-11-25 09:30:19.629 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:19 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:30:19.631 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:30:19 compute-0 podman[435958]: 2025-11-25 09:30:19.674669169 +0000 UTC m=+0.184495041 container init c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:30:19 compute-0 podman[435958]: 2025-11-25 09:30:19.685538346 +0000 UTC m=+0.195364168 container start c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 09:30:19 compute-0 podman[435958]: 2025-11-25 09:30:19.726709137 +0000 UTC m=+0.236534919 container attach c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]: {
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:     "0": [
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:         {
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "devices": [
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "/dev/loop3"
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             ],
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "lv_name": "ceph_lv0",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "lv_size": "21470642176",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "name": "ceph_lv0",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "tags": {
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.cluster_name": "ceph",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.crush_device_class": "",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.encrypted": "0",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.osd_id": "0",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.type": "block",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.vdo": "0"
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             },
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "type": "block",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "vg_name": "ceph_vg0"
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:         }
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:     ],
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:     "1": [
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:         {
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "devices": [
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "/dev/loop4"
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             ],
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "lv_name": "ceph_lv1",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "lv_size": "21470642176",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "name": "ceph_lv1",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "tags": {
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.cluster_name": "ceph",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.crush_device_class": "",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.encrypted": "0",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.osd_id": "1",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.type": "block",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.vdo": "0"
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             },
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "type": "block",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "vg_name": "ceph_vg1"
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:         }
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:     ],
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:     "2": [
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:         {
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "devices": [
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "/dev/loop5"
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             ],
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "lv_name": "ceph_lv2",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "lv_size": "21470642176",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "name": "ceph_lv2",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "tags": {
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.cluster_name": "ceph",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.crush_device_class": "",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.encrypted": "0",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.osd_id": "2",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.type": "block",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:                 "ceph.vdo": "0"
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             },
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "type": "block",
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:             "vg_name": "ceph_vg2"
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:         }
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]:     ]
Nov 25 09:30:20 compute-0 frosty_mcclintock[435975]: }
Nov 25 09:30:20 compute-0 systemd[1]: libpod-c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5.scope: Deactivated successfully.
Nov 25 09:30:20 compute-0 conmon[435975]: conmon c26b4f693fbc29b6237e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5.scope/container/memory.events
Nov 25 09:30:20 compute-0 podman[435958]: 2025-11-25 09:30:20.510062014 +0000 UTC m=+1.019887796 container died c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 09:30:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-37e0cabffb4b02e32e901f9c5016a6414f9c80fb3b829f556622cfaddaac504a-merged.mount: Deactivated successfully.
Nov 25 09:30:20 compute-0 podman[435958]: 2025-11-25 09:30:20.57155518 +0000 UTC m=+1.081380962 container remove c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:30:20 compute-0 systemd[1]: libpod-conmon-c26b4f693fbc29b6237e848936058f3fe2111eda4ebc1aa745d5c968834862c5.scope: Deactivated successfully.
Nov 25 09:30:20 compute-0 sudo[435853]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:20 compute-0 sudo[435998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:30:20 compute-0 sudo[435998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:20 compute-0 sudo[435998]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:20 compute-0 sudo[436023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:30:20 compute-0 sudo[436023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:20 compute-0 sudo[436023]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:20 compute-0 sudo[436048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:30:20 compute-0 sudo[436048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:20 compute-0 sudo[436048]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:20 compute-0 nova_compute[253538]: 2025-11-25 09:30:20.868 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3316: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:20 compute-0 sudo[436073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:30:20 compute-0 sudo[436073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:21 compute-0 podman[436139]: 2025-11-25 09:30:21.252504774 +0000 UTC m=+0.028444886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:30:21 compute-0 podman[436139]: 2025-11-25 09:30:21.705391731 +0000 UTC m=+0.481331823 container create 85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 09:30:21 compute-0 systemd[1]: Started libpod-conmon-85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367.scope.
Nov 25 09:30:21 compute-0 ceph-mon[75015]: pgmap v3316: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:30:21 compute-0 podman[436139]: 2025-11-25 09:30:21.793064481 +0000 UTC m=+0.569004573 container init 85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 09:30:21 compute-0 podman[436139]: 2025-11-25 09:30:21.800895045 +0000 UTC m=+0.576835107 container start 85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:30:21 compute-0 podman[436139]: 2025-11-25 09:30:21.804660487 +0000 UTC m=+0.580600569 container attach 85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 09:30:21 compute-0 objective_clarke[436156]: 167 167
Nov 25 09:30:21 compute-0 systemd[1]: libpod-85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367.scope: Deactivated successfully.
Nov 25 09:30:21 compute-0 podman[436139]: 2025-11-25 09:30:21.807503305 +0000 UTC m=+0.583443367 container died 85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 09:30:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-20245aed7b25b3ff6b2a1a175554599aacd9484b98f09883c7a7acca11260681-merged.mount: Deactivated successfully.
Nov 25 09:30:21 compute-0 podman[436139]: 2025-11-25 09:30:21.854679441 +0000 UTC m=+0.630619513 container remove 85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:30:21 compute-0 systemd[1]: libpod-conmon-85918ce9ba0870ad9641f362cc072ee63eac70918a6320802fc10c2609d41367.scope: Deactivated successfully.
Nov 25 09:30:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:30:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6002.4 total, 600.0 interval
                                           Cumulative writes: 36K writes, 143K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 36K writes, 12K syncs, 2.83 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 596 writes, 1534 keys, 596 commit groups, 1.0 writes per commit group, ingest: 0.75 MB, 0.00 MB/s
                                           Interval WAL: 596 writes, 265 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.79              0.00         1    0.790       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.79              0.00         1    0.790       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.79              0.00         1    0.790       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.8 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e13090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e13090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.30              0.00         1    0.300       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.30              0.00         1    0.300       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.30              0.00         1    0.300       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e13090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 09:30:22 compute-0 podman[436180]: 2025-11-25 09:30:22.076658652 +0000 UTC m=+0.063128691 container create 200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_matsumoto, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:30:22 compute-0 systemd[1]: Started libpod-conmon-200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed.scope.
Nov 25 09:30:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:30:22 compute-0 podman[436180]: 2025-11-25 09:30:22.054972702 +0000 UTC m=+0.041442791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e2d6e8cc0c1b2b84ba3e209d1cf1e719118e1c086f9abc8457c9c8cac7018f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e2d6e8cc0c1b2b84ba3e209d1cf1e719118e1c086f9abc8457c9c8cac7018f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e2d6e8cc0c1b2b84ba3e209d1cf1e719118e1c086f9abc8457c9c8cac7018f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e2d6e8cc0c1b2b84ba3e209d1cf1e719118e1c086f9abc8457c9c8cac7018f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:30:22 compute-0 podman[436180]: 2025-11-25 09:30:22.160277942 +0000 UTC m=+0.146747991 container init 200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 09:30:22 compute-0 podman[436180]: 2025-11-25 09:30:22.169245066 +0000 UTC m=+0.155715105 container start 200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 09:30:22 compute-0 podman[436180]: 2025-11-25 09:30:22.174400857 +0000 UTC m=+0.160870896 container attach 200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_matsumoto, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:30:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3317: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]: {
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:         "osd_id": 1,
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:         "type": "bluestore"
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:     },
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:         "osd_id": 2,
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:         "type": "bluestore"
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:     },
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:         "osd_id": 0,
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:         "type": "bluestore"
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]:     }
Nov 25 09:30:23 compute-0 sharp_matsumoto[436197]: }
Nov 25 09:30:23 compute-0 systemd[1]: libpod-200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed.scope: Deactivated successfully.
Nov 25 09:30:23 compute-0 systemd[1]: libpod-200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed.scope: Consumed 1.099s CPU time.
Nov 25 09:30:23 compute-0 podman[436180]: 2025-11-25 09:30:23.263695144 +0000 UTC m=+1.250165223 container died 200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:30:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e2d6e8cc0c1b2b84ba3e209d1cf1e719118e1c086f9abc8457c9c8cac7018f2-merged.mount: Deactivated successfully.
Nov 25 09:30:23 compute-0 podman[436180]: 2025-11-25 09:30:23.32443273 +0000 UTC m=+1.310902799 container remove 200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 09:30:23 compute-0 systemd[1]: libpod-conmon-200b58e520dd115d86e3d14ace70ed89244fd73f88b13ed5d8ecb5f6c1bf45ed.scope: Deactivated successfully.
Nov 25 09:30:23 compute-0 sudo[436073]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:30:23 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:30:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:30:23 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:30:23 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e89a28e8-21e1-4c2f-8059-eb132feb050d does not exist
Nov 25 09:30:23 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 64a1d304-03a0-4707-b955-4e804152c9ff does not exist
Nov 25 09:30:23 compute-0 sudo[436241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:30:23 compute-0 sudo[436241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:23 compute-0 sudo[436241]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:30:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:30:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:30:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:30:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:30:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:30:23 compute-0 sudo[436266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:30:23 compute-0 sudo[436266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:23 compute-0 sudo[436266]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:23 compute-0 nova_compute[253538]: 2025-11-25 09:30:23.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:30:23 compute-0 ceph-mon[75015]: pgmap v3317: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:30:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:30:24 compute-0 sshd-session[436291]: Accepted publickey for zuul from 192.168.122.30 port 49156 ssh2: ECDSA SHA256:XPT2Qp05XP+4/iPWyxQ1YuG4VjRBRDdk6pBKmAF934E
Nov 25 09:30:24 compute-0 systemd-logind[822]: New session 53 of user zuul.
Nov 25 09:30:24 compute-0 systemd[1]: Started Session 53 of User zuul.
Nov 25 09:30:24 compute-0 sshd-session[436291]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:30:24 compute-0 nova_compute[253538]: 2025-11-25 09:30:24.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:30:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3318: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:25 compute-0 sudo[436387]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/test -f /var/podman_client_access_setup
Nov 25 09:30:25 compute-0 sudo[436387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:25 compute-0 sudo[436387]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:25 compute-0 sudo[436413]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/groupadd -f podman
Nov 25 09:30:25 compute-0 sudo[436413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:25 compute-0 groupadd[436415]: group added to /etc/group: name=podman, GID=42479
Nov 25 09:30:25 compute-0 groupadd[436415]: group added to /etc/gshadow: name=podman
Nov 25 09:30:25 compute-0 groupadd[436415]: new group: name=podman, GID=42479
Nov 25 09:30:25 compute-0 sudo[436413]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:25 compute-0 sudo[436421]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/usermod -a -G podman zuul
Nov 25 09:30:25 compute-0 sudo[436421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:25 compute-0 usermod[436423]: add 'zuul' to group 'podman'
Nov 25 09:30:25 compute-0 usermod[436423]: add 'zuul' to shadow group 'podman'
Nov 25 09:30:25 compute-0 sudo[436421]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:25 compute-0 sudo[436430]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod -R o=wxr /etc/tmpfiles.d
Nov 25 09:30:25 compute-0 sudo[436430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:25 compute-0 sudo[436430]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:25 compute-0 sudo[436433]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/echo 'd /run/podman 0770 root zuul'
Nov 25 09:30:25 compute-0 sudo[436433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:25 compute-0 sudo[436433]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:25 compute-0 sudo[436436]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cp /lib/systemd/system/podman.socket /etc/systemd/system/podman.socket
Nov 25 09:30:25 compute-0 sudo[436436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:25 compute-0 sudo[436436]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:25 compute-0 sudo[436439]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketMode 0660
Nov 25 09:30:25 compute-0 sudo[436439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:25 compute-0 sudo[436439]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:25 compute-0 sudo[436442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketGroup podman
Nov 25 09:30:25 compute-0 sudo[436442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:25 compute-0 sudo[436442]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:25 compute-0 sudo[436445]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Nov 25 09:30:25 compute-0 sudo[436445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:25 compute-0 systemd[1]: Reloading.
Nov 25 09:30:25 compute-0 nova_compute[253538]: 2025-11-25 09:30:25.871 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:25 compute-0 systemd-rc-local-generator[436465]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:30:25 compute-0 systemd-sysv-generator[436474]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:30:26 compute-0 ceph-mon[75015]: pgmap v3318: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:26 compute-0 sudo[436445]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:26 compute-0 sudo[436481]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemd-tmpfiles --create
Nov 25 09:30:26 compute-0 sudo[436481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:26 compute-0 sudo[436481]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:26 compute-0 sudo[436484]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl enable --now podman.socket
Nov 25 09:30:26 compute-0 sudo[436484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:26 compute-0 systemd[1]: Reloading.
Nov 25 09:30:26 compute-0 systemd-rc-local-generator[436513]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:30:26 compute-0 systemd-sysv-generator[436517]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:30:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3319: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:26 compute-0 systemd[1]: Starting Podman API Socket...
Nov 25 09:30:26 compute-0 systemd[1]: Listening on Podman API Socket.
Nov 25 09:30:26 compute-0 sudo[436484]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:26 compute-0 sudo[436522]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman
Nov 25 09:30:26 compute-0 sudo[436522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:26 compute-0 sudo[436522]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:26 compute-0 sudo[436525]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chown -R root: /run/podman
Nov 25 09:30:26 compute-0 sudo[436525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:26 compute-0 sudo[436525]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:27 compute-0 sudo[436528]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod g+rw /run/podman/podman.sock
Nov 25 09:30:27 compute-0 sudo[436528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:27 compute-0 sudo[436528]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:27 compute-0 sudo[436531]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman/podman.sock
Nov 25 09:30:27 compute-0 sudo[436531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:27 compute-0 sudo[436531]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:27 compute-0 sudo[436534]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/setenforce 0
Nov 25 09:30:27 compute-0 sudo[436534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:27 compute-0 sudo[436534]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:27 compute-0 sudo[436537]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl restart podman.socket
Nov 25 09:30:27 compute-0 dbus-broker-launch[813]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Nov 25 09:30:27 compute-0 sudo[436537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:27 compute-0 systemd[1]: podman.socket: Deactivated successfully.
Nov 25 09:30:27 compute-0 systemd[1]: Closed Podman API Socket.
Nov 25 09:30:27 compute-0 systemd[1]: Stopping Podman API Socket...
Nov 25 09:30:27 compute-0 systemd[1]: Starting Podman API Socket...
Nov 25 09:30:27 compute-0 systemd[1]: Listening on Podman API Socket.
Nov 25 09:30:27 compute-0 sudo[436537]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:27 compute-0 sudo[436390]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/touch /var/podman_client_access_setup
Nov 25 09:30:27 compute-0 sudo[436390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:27 compute-0 sudo[436390]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:27 compute-0 sshd-session[436543]: Accepted publickey for zuul from 192.168.122.30 port 49170 ssh2: ECDSA SHA256:XPT2Qp05XP+4/iPWyxQ1YuG4VjRBRDdk6pBKmAF934E
Nov 25 09:30:27 compute-0 systemd-logind[822]: New session 54 of user zuul.
Nov 25 09:30:27 compute-0 systemd[1]: Started Session 54 of User zuul.
Nov 25 09:30:27 compute-0 sshd-session[436543]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:30:27 compute-0 systemd[1]: Starting Podman API Service...
Nov 25 09:30:27 compute-0 systemd[1]: Started Podman API Service.
Nov 25 09:30:27 compute-0 podman[436547]: time="2025-11-25T09:30:27Z" level=info msg="/usr/bin/podman filtering at log level info"
Nov 25 09:30:27 compute-0 podman[436547]: time="2025-11-25T09:30:27Z" level=info msg="Setting parallel job count to 25"
Nov 25 09:30:27 compute-0 podman[436547]: time="2025-11-25T09:30:27Z" level=info msg="Using sqlite as database backend"
Nov 25 09:30:27 compute-0 podman[436547]: time="2025-11-25T09:30:27Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 25 09:30:27 compute-0 podman[436547]: time="2025-11-25T09:30:27Z" level=info msg="Using systemd socket activation to determine API endpoint"
Nov 25 09:30:27 compute-0 podman[436547]: time="2025-11-25T09:30:27Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Nov 25 09:30:27 compute-0 podman[436547]: @ - - [25/Nov/2025:09:30:27 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Nov 25 09:30:27 compute-0 podman[436547]: @ - - [25/Nov/2025:09:30:27 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 24899 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Nov 25 09:30:28 compute-0 ceph-mon[75015]: pgmap v3319: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:28 compute-0 sudo[436560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip --brief address list
Nov 25 09:30:28 compute-0 sudo[436560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:28 compute-0 sudo[436560]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:28 compute-0 sudo[436585]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip -o netns list
Nov 25 09:30:28 compute-0 sudo[436585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:28 compute-0 sudo[436585]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3320: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:30:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1449125801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:30:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:30:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1449125801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:30:29 compute-0 nova_compute[253538]: 2025-11-25 09:30:29.199 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:30:29 compute-0 nova_compute[253538]: 2025-11-25 09:30:29.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:30:29 compute-0 nova_compute[253538]: 2025-11-25 09:30:29.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:30:29 compute-0 nova_compute[253538]: 2025-11-25 09:30:29.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:30:29 compute-0 nova_compute[253538]: 2025-11-25 09:30:29.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:30:29 compute-0 nova_compute[253538]: 2025-11-25 09:30:29.583 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:30:29 compute-0 nova_compute[253538]: 2025-11-25 09:30:29.584 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:30:29 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:30:29.634 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:30:30 compute-0 ceph-mon[75015]: pgmap v3320: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1449125801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:30:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1449125801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:30:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:30:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4256782775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:30:30 compute-0 nova_compute[253538]: 2025-11-25 09:30:30.105 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:30:30 compute-0 nova_compute[253538]: 2025-11-25 09:30:30.299 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:30:30 compute-0 nova_compute[253538]: 2025-11-25 09:30:30.303 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3608MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:30:30 compute-0 nova_compute[253538]: 2025-11-25 09:30:30.304 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:30:30 compute-0 nova_compute[253538]: 2025-11-25 09:30:30.304 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:30:30 compute-0 nova_compute[253538]: 2025-11-25 09:30:30.379 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:30:30 compute-0 nova_compute[253538]: 2025-11-25 09:30:30.380 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:30:30 compute-0 nova_compute[253538]: 2025-11-25 09:30:30.396 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:30:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:30:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508826971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:30:30 compute-0 nova_compute[253538]: 2025-11-25 09:30:30.865 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:30:30 compute-0 nova_compute[253538]: 2025-11-25 09:30:30.870 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:30:30 compute-0 nova_compute[253538]: 2025-11-25 09:30:30.873 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3321: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:30 compute-0 nova_compute[253538]: 2025-11-25 09:30:30.887 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:30:30 compute-0 nova_compute[253538]: 2025-11-25 09:30:30.888 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:30:30 compute-0 nova_compute[253538]: 2025-11-25 09:30:30.888 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:30:31 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4256782775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:30:31 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2508826971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:30:32 compute-0 ceph-mon[75015]: pgmap v3321: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3322: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:33 compute-0 podman[436656]: 2025-11-25 09:30:33.80153167 +0000 UTC m=+0.048113143 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:30:33 compute-0 podman[436655]: 2025-11-25 09:30:33.806912406 +0000 UTC m=+0.056550423 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 09:30:34 compute-0 ceph-mon[75015]: pgmap v3322: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:34 compute-0 nova_compute[253538]: 2025-11-25 09:30:34.201 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:30:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3323: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:35 compute-0 ceph-mon[75015]: pgmap v3323: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:35 compute-0 nova_compute[253538]: 2025-11-25 09:30:35.875 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3324: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:37 compute-0 ceph-mon[75015]: pgmap v3324: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3325: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:39 compute-0 nova_compute[253538]: 2025-11-25 09:30:39.203 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:30:39 compute-0 ceph-mon[75015]: pgmap v3325: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:40 compute-0 nova_compute[253538]: 2025-11-25 09:30:40.876 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3326: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:30:41.118 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:30:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:30:41.119 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:30:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:30:41.119 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:30:41 compute-0 ceph-mon[75015]: pgmap v3326: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:42 compute-0 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 09:30:42 compute-0 podman[436547]: time="2025-11-25T09:30:42Z" level=info msg="Received shutdown.Stop(), terminating!" PID=436547
Nov 25 09:30:42 compute-0 systemd[1]: podman.service: Deactivated successfully.
Nov 25 09:30:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3327: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:43 compute-0 ceph-mon[75015]: pgmap v3327: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:44 compute-0 nova_compute[253538]: 2025-11-25 09:30:44.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:30:44 compute-0 podman[436691]: 2025-11-25 09:30:44.828853676 +0000 UTC m=+0.082658353 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 25 09:30:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3328: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:45 compute-0 nova_compute[253538]: 2025-11-25 09:30:45.877 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3329: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:48 compute-0 ceph-mon[75015]: pgmap v3328: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:48 compute-0 ceph-mon[75015]: pgmap v3329: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3330: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:49 compute-0 nova_compute[253538]: 2025-11-25 09:30:49.206 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:30:49 compute-0 ceph-mon[75015]: pgmap v3330: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:50 compute-0 nova_compute[253538]: 2025-11-25 09:30:50.878 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3331: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:51 compute-0 ceph-mon[75015]: pgmap v3331: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3332: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:30:53
Nov 25 09:30:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:30:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:30:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', '.rgw.root', 'backups', 'cephfs.cephfs.data', '.mgr']
Nov 25 09:30:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:30:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:30:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:30:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:30:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:30:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:30:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:30:53 compute-0 ceph-mon[75015]: pgmap v3332: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:54 compute-0 nova_compute[253538]: 2025-11-25 09:30:54.208 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:30:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:30:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:30:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:30:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:30:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:30:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:30:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:30:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:30:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:30:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:30:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3333: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:55 compute-0 ceph-mon[75015]: pgmap v3333: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:55 compute-0 nova_compute[253538]: 2025-11-25 09:30:55.881 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3334: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:57 compute-0 ceph-mon[75015]: pgmap v3334: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3335: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:30:59 compute-0 nova_compute[253538]: 2025-11-25 09:30:59.209 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:30:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:30:59 compute-0 ceph-mon[75015]: pgmap v3335: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:31:00.622 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:31:00 compute-0 nova_compute[253538]: 2025-11-25 09:31:00.623 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:00 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:31:00.623 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:31:00 compute-0 sshd-session[436294]: Connection closed by 192.168.122.30 port 49156
Nov 25 09:31:00 compute-0 sshd-session[436291]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:31:00 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Nov 25 09:31:00 compute-0 systemd[1]: session-53.scope: Consumed 1.410s CPU time.
Nov 25 09:31:00 compute-0 systemd-logind[822]: Session 53 logged out. Waiting for processes to exit.
Nov 25 09:31:00 compute-0 systemd-logind[822]: Removed session 53.
Nov 25 09:31:00 compute-0 nova_compute[253538]: 2025-11-25 09:31:00.883 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3336: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:31:01.625 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:31:01 compute-0 sshd-session[436546]: Connection closed by 192.168.122.30 port 49170
Nov 25 09:31:01 compute-0 sshd-session[436543]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:31:01 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Nov 25 09:31:01 compute-0 systemd-logind[822]: Session 54 logged out. Waiting for processes to exit.
Nov 25 09:31:01 compute-0 systemd-logind[822]: Removed session 54.
Nov 25 09:31:01 compute-0 ceph-mon[75015]: pgmap v3336: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3337: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:03 compute-0 ceph-mon[75015]: pgmap v3337: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:04 compute-0 nova_compute[253538]: 2025-11-25 09:31:04.211 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:31:04 compute-0 podman[436718]: 2025-11-25 09:31:04.807823185 +0000 UTC m=+0.052226915 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 09:31:04 compute-0 podman[436717]: 2025-11-25 09:31:04.839161279 +0000 UTC m=+0.089600553 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3338: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:05 compute-0 nova_compute[253538]: 2025-11-25 09:31:05.886 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:05 compute-0 nova_compute[253538]: 2025-11-25 09:31:05.889 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:31:06 compute-0 ceph-mon[75015]: pgmap v3338: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3339: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:08 compute-0 ceph-mon[75015]: pgmap v3339: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:08 compute-0 nova_compute[253538]: 2025-11-25 09:31:08.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:31:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3340: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:09 compute-0 nova_compute[253538]: 2025-11-25 09:31:09.214 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:31:10 compute-0 ceph-mon[75015]: pgmap v3340: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:10 compute-0 nova_compute[253538]: 2025-11-25 09:31:10.888 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3341: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:11 compute-0 nova_compute[253538]: 2025-11-25 09:31:11.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:31:11 compute-0 nova_compute[253538]: 2025-11-25 09:31:11.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:31:11 compute-0 nova_compute[253538]: 2025-11-25 09:31:11.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:31:11 compute-0 nova_compute[253538]: 2025-11-25 09:31:11.580 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:31:12 compute-0 ceph-mon[75015]: pgmap v3341: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3342: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:14 compute-0 ceph-mon[75015]: pgmap v3342: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:14 compute-0 nova_compute[253538]: 2025-11-25 09:31:14.216 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:31:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3343: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:15 compute-0 ceph-mon[75015]: pgmap v3343: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:15 compute-0 nova_compute[253538]: 2025-11-25 09:31:15.573 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:31:15 compute-0 podman[436752]: 2025-11-25 09:31:15.837228208 +0000 UTC m=+0.080662090 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:31:15 compute-0 nova_compute[253538]: 2025-11-25 09:31:15.889 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:16 compute-0 nova_compute[253538]: 2025-11-25 09:31:16.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:31:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3344: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:17 compute-0 ceph-mon[75015]: pgmap v3344: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:17 compute-0 nova_compute[253538]: 2025-11-25 09:31:17.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:31:17 compute-0 nova_compute[253538]: 2025-11-25 09:31:17.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:31:18 compute-0 nova_compute[253538]: 2025-11-25 09:31:18.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:31:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3345: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:19 compute-0 nova_compute[253538]: 2025-11-25 09:31:19.218 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:31:19 compute-0 ceph-mon[75015]: pgmap v3345: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:20 compute-0 nova_compute[253538]: 2025-11-25 09:31:20.892 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3346: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:21 compute-0 ceph-mon[75015]: pgmap v3346: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3347: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:31:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:31:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:31:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:31:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:31:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:31:23 compute-0 sudo[436778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:31:23 compute-0 sudo[436778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:23 compute-0 sudo[436778]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:23 compute-0 sudo[436803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:23 compute-0 sudo[436803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:23 compute-0 sudo[436803]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:23 compute-0 sudo[436828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:31:23 compute-0 sudo[436828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:23 compute-0 sudo[436828]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:23 compute-0 sudo[436853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:31:23 compute-0 sudo[436853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:24 compute-0 ceph-mon[75015]: pgmap v3347: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.220 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:24 compute-0 sudo[436853]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:31:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:31:24 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:31:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:31:24 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:31:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:31:24 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 4138a3c1-b9dd-4fd9-85c2-4791ab063206 does not exist
Nov 25 09:31:24 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c7f88d99-490a-41fb-9f2f-e524ed8dc47f does not exist
Nov 25 09:31:24 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 95c5506f-8860-4674-9265-ed5029171e33 does not exist
Nov 25 09:31:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:31:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:31:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:31:24 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:31:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:31:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:24 compute-0 sudo[436908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:31:24 compute-0 sudo[436908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:24 compute-0 sudo[436908]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:24 compute-0 sudo[436933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:24 compute-0 sudo[436933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:24 compute-0 sudo[436933]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:24 compute-0 sudo[436958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:31:24 compute-0 sudo[436958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:24 compute-0 sudo[436958]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.554 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.555 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.555 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.556 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.556 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.556 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.572 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.579 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.579 253542 WARNING nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.579 253542 WARNING nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.579 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Removable base files: /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.579 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ad982bd9427c86feb49d0b60fa1a5b2511227adc
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.579 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a6a8bade0130d94f6fc91c6e483cc9b6db836676
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.579 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.580 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.580 253542 DEBUG nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 25 09:31:24 compute-0 nova_compute[253538]: 2025-11-25 09:31:24.580 253542 INFO nova.virt.libvirt.imagecache [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Nov 25 09:31:24 compute-0 sudo[436983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:31:24 compute-0 sudo[436983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3348: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:24 compute-0 podman[437047]: 2025-11-25 09:31:24.9797311 +0000 UTC m=+0.064945523 container create c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 09:31:25 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:25 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:31:25 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:31:25 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:31:25 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:31:25 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:25 compute-0 systemd[1]: Started libpod-conmon-c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67.scope.
Nov 25 09:31:25 compute-0 podman[437047]: 2025-11-25 09:31:24.943949484 +0000 UTC m=+0.029163877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:31:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:25 compute-0 podman[437047]: 2025-11-25 09:31:25.110627718 +0000 UTC m=+0.195842131 container init c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:25 compute-0 podman[437047]: 2025-11-25 09:31:25.11840769 +0000 UTC m=+0.203622073 container start c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:25 compute-0 gifted_moore[437063]: 167 167
Nov 25 09:31:25 compute-0 systemd[1]: libpod-c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67.scope: Deactivated successfully.
Nov 25 09:31:25 compute-0 conmon[437063]: conmon c7d7d577de4703326ace <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67.scope/container/memory.events
Nov 25 09:31:25 compute-0 podman[437047]: 2025-11-25 09:31:25.128556657 +0000 UTC m=+0.213771050 container attach c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 09:31:25 compute-0 podman[437047]: 2025-11-25 09:31:25.129251745 +0000 UTC m=+0.214466138 container died c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-241281af1d535a33dc3b8975fe7d74ef88ebdb275c246853125a64f308b86ff9-merged.mount: Deactivated successfully.
Nov 25 09:31:25 compute-0 podman[437047]: 2025-11-25 09:31:25.237781954 +0000 UTC m=+0.322996337 container remove c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 09:31:25 compute-0 systemd[1]: libpod-conmon-c7d7d577de4703326acefebef2bf7e22fa01320656c5fe565f9a73bc7433eb67.scope: Deactivated successfully.
Nov 25 09:31:25 compute-0 podman[437088]: 2025-11-25 09:31:25.410155513 +0000 UTC m=+0.049380416 container create 0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:31:25 compute-0 systemd[1]: Started libpod-conmon-0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031.scope.
Nov 25 09:31:25 compute-0 podman[437088]: 2025-11-25 09:31:25.387029723 +0000 UTC m=+0.026254596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:31:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49957548785ab1f709e9ff7e7d826f2efde8ba2d9830c847186d9c15f5a56163/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49957548785ab1f709e9ff7e7d826f2efde8ba2d9830c847186d9c15f5a56163/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49957548785ab1f709e9ff7e7d826f2efde8ba2d9830c847186d9c15f5a56163/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49957548785ab1f709e9ff7e7d826f2efde8ba2d9830c847186d9c15f5a56163/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49957548785ab1f709e9ff7e7d826f2efde8ba2d9830c847186d9c15f5a56163/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:25 compute-0 podman[437088]: 2025-11-25 09:31:25.504153356 +0000 UTC m=+0.143378229 container init 0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:25 compute-0 podman[437088]: 2025-11-25 09:31:25.516634766 +0000 UTC m=+0.155859639 container start 0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:25 compute-0 podman[437088]: 2025-11-25 09:31:25.520927834 +0000 UTC m=+0.160152707 container attach 0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 09:31:25 compute-0 nova_compute[253538]: 2025-11-25 09:31:25.892 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:26 compute-0 ceph-mon[75015]: pgmap v3348: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:26 compute-0 competent_maxwell[437105]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:31:26 compute-0 competent_maxwell[437105]: --> relative data size: 1.0
Nov 25 09:31:26 compute-0 competent_maxwell[437105]: --> All data devices are unavailable
Nov 25 09:31:26 compute-0 systemd[1]: libpod-0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031.scope: Deactivated successfully.
Nov 25 09:31:26 compute-0 podman[437088]: 2025-11-25 09:31:26.548539609 +0000 UTC m=+1.187764492 container died 0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:31:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-49957548785ab1f709e9ff7e7d826f2efde8ba2d9830c847186d9c15f5a56163-merged.mount: Deactivated successfully.
Nov 25 09:31:26 compute-0 podman[437088]: 2025-11-25 09:31:26.723057126 +0000 UTC m=+1.362281989 container remove 0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:31:26 compute-0 systemd[1]: libpod-conmon-0805092ca531d09789da2c6849f213263d61f07f2933549e6ddf06aba66ae031.scope: Deactivated successfully.
Nov 25 09:31:26 compute-0 sudo[436983]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:26 compute-0 sudo[437146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:31:26 compute-0 sudo[437146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:26 compute-0 sudo[437146]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:26 compute-0 sudo[437171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:26 compute-0 sudo[437171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:26 compute-0 sudo[437171]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3349: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:26 compute-0 sudo[437196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:31:26 compute-0 sudo[437196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:26 compute-0 sudo[437196]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:27 compute-0 sudo[437221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:31:27 compute-0 sudo[437221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:27 compute-0 podman[437285]: 2025-11-25 09:31:27.365199662 +0000 UTC m=+0.048329138 container create 4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:27 compute-0 systemd[1]: Started libpod-conmon-4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710.scope.
Nov 25 09:31:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:27 compute-0 podman[437285]: 2025-11-25 09:31:27.342035211 +0000 UTC m=+0.025164747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:31:27 compute-0 podman[437285]: 2025-11-25 09:31:27.449270544 +0000 UTC m=+0.132400070 container init 4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:27 compute-0 podman[437285]: 2025-11-25 09:31:27.459206955 +0000 UTC m=+0.142336411 container start 4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 09:31:27 compute-0 podman[437285]: 2025-11-25 09:31:27.462650449 +0000 UTC m=+0.145779925 container attach 4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:31:27 compute-0 quirky_hermann[437301]: 167 167
Nov 25 09:31:27 compute-0 systemd[1]: libpod-4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710.scope: Deactivated successfully.
Nov 25 09:31:27 compute-0 podman[437285]: 2025-11-25 09:31:27.467685936 +0000 UTC m=+0.150815392 container died 4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 09:31:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a2132794d924e88c35f9ef5602d123ae06d801638a263a0b3f1b7e8dff68f04-merged.mount: Deactivated successfully.
Nov 25 09:31:27 compute-0 podman[437285]: 2025-11-25 09:31:27.508271772 +0000 UTC m=+0.191401228 container remove 4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:31:27 compute-0 systemd[1]: libpod-conmon-4f2213a5819d2d2071ff0cc2d8d7ba12faa3d5ba1e8849fada4a8647e7ace710.scope: Deactivated successfully.
Nov 25 09:31:27 compute-0 podman[437325]: 2025-11-25 09:31:27.763922532 +0000 UTC m=+0.062972648 container create aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:27 compute-0 systemd[1]: Started libpod-conmon-aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d.scope.
Nov 25 09:31:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:27 compute-0 podman[437325]: 2025-11-25 09:31:27.739008773 +0000 UTC m=+0.038058959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77595f1d308c622abc7c0a21e9c953d1aa8c74c16d0cdad28e64998c88cbf48c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77595f1d308c622abc7c0a21e9c953d1aa8c74c16d0cdad28e64998c88cbf48c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77595f1d308c622abc7c0a21e9c953d1aa8c74c16d0cdad28e64998c88cbf48c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77595f1d308c622abc7c0a21e9c953d1aa8c74c16d0cdad28e64998c88cbf48c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:27 compute-0 podman[437325]: 2025-11-25 09:31:27.849488055 +0000 UTC m=+0.148538191 container init aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:31:27 compute-0 podman[437325]: 2025-11-25 09:31:27.859659912 +0000 UTC m=+0.158710048 container start aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 09:31:27 compute-0 podman[437325]: 2025-11-25 09:31:27.86398345 +0000 UTC m=+0.163033596 container attach aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:28 compute-0 ceph-mon[75015]: pgmap v3349: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:28 compute-0 nova_compute[253538]: 2025-11-25 09:31:28.573 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]: {
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:     "0": [
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:         {
Nov 25 09:31:28 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "devices": [
Nov 25 09:31:28 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "/dev/loop3"
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             ],
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "lv_name": "ceph_lv0",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "lv_size": "21470642176",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "name": "ceph_lv0",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "tags": {
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.cluster_name": "ceph",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.crush_device_class": "",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.encrypted": "0",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.osd_id": "0",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.type": "block",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.vdo": "0"
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             },
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "type": "block",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "vg_name": "ceph_vg0"
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:         }
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:     ],
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:     "1": [
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:         {
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "devices": [
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "/dev/loop4"
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             ],
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "lv_name": "ceph_lv1",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "lv_size": "21470642176",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "name": "ceph_lv1",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "tags": {
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.cluster_name": "ceph",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.crush_device_class": "",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.encrypted": "0",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.osd_id": "1",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.type": "block",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.vdo": "0"
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             },
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "type": "block",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "vg_name": "ceph_vg1"
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:         }
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:     ],
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:     "2": [
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:         {
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "devices": [
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "/dev/loop5"
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             ],
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "lv_name": "ceph_lv2",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "lv_size": "21470642176",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "name": "ceph_lv2",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "tags": {
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.cluster_name": "ceph",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.crush_device_class": "",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.encrypted": "0",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.osd_id": "2",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.type": "block",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:                 "ceph.vdo": "0"
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             },
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "type": "block",
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:             "vg_name": "ceph_vg2"
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:         }
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]:     ]
Nov 25 09:31:28 compute-0 intelligent_goodall[437342]: }
Nov 25 09:31:28 compute-0 systemd[1]: libpod-aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d.scope: Deactivated successfully.
Nov 25 09:31:28 compute-0 podman[437325]: 2025-11-25 09:31:28.651806437 +0000 UTC m=+0.950856563 container died aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:31:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-77595f1d308c622abc7c0a21e9c953d1aa8c74c16d0cdad28e64998c88cbf48c-merged.mount: Deactivated successfully.
Nov 25 09:31:28 compute-0 podman[437325]: 2025-11-25 09:31:28.71536045 +0000 UTC m=+1.014410556 container remove aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:28 compute-0 systemd[1]: libpod-conmon-aaf6529a5e0ea4b67ee6dfd3fd53eb1530852137872d361aab81dd9436b54c3d.scope: Deactivated successfully.
Nov 25 09:31:28 compute-0 sudo[437221]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:28 compute-0 sudo[437366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:31:28 compute-0 sudo[437366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:28 compute-0 sudo[437366]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:28 compute-0 sudo[437391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:28 compute-0 sudo[437391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:28 compute-0 sudo[437391]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3350: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:28 compute-0 sudo[437416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:31:28 compute-0 sudo[437416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:28 compute-0 sudo[437416]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:29 compute-0 sudo[437441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:31:29 compute-0 sudo[437441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:29 compute-0 nova_compute[253538]: 2025-11-25 09:31:29.222 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:31:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/105697232' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:31:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:31:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/105697232' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:31:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:31:29 compute-0 podman[437505]: 2025-11-25 09:31:29.42842199 +0000 UTC m=+0.047533067 container create dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:29 compute-0 systemd[1]: Started libpod-conmon-dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45.scope.
Nov 25 09:31:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:29 compute-0 podman[437505]: 2025-11-25 09:31:29.406757509 +0000 UTC m=+0.025868626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:31:29 compute-0 podman[437505]: 2025-11-25 09:31:29.516707827 +0000 UTC m=+0.135818924 container init dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:31:29 compute-0 podman[437505]: 2025-11-25 09:31:29.52821126 +0000 UTC m=+0.147322337 container start dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:29 compute-0 podman[437505]: 2025-11-25 09:31:29.53185323 +0000 UTC m=+0.150964307 container attach dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 09:31:29 compute-0 clever_saha[437522]: 167 167
Nov 25 09:31:29 compute-0 systemd[1]: libpod-dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45.scope: Deactivated successfully.
Nov 25 09:31:29 compute-0 conmon[437522]: conmon dffa2f8c6b0d2b284f0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45.scope/container/memory.events
Nov 25 09:31:29 compute-0 podman[437505]: 2025-11-25 09:31:29.536861786 +0000 UTC m=+0.155972903 container died dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 09:31:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c66313d657bc415bb3e339fa827303834863e0953f55b0f351d7faf030a0c167-merged.mount: Deactivated successfully.
Nov 25 09:31:29 compute-0 podman[437505]: 2025-11-25 09:31:29.578175852 +0000 UTC m=+0.197286939 container remove dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:31:29 compute-0 systemd[1]: libpod-conmon-dffa2f8c6b0d2b284f0c2e24ba15c81735d906e06176a1b914de5aef2d04bd45.scope: Deactivated successfully.
Nov 25 09:31:29 compute-0 podman[437546]: 2025-11-25 09:31:29.747400525 +0000 UTC m=+0.047360391 container create 753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williamson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 09:31:29 compute-0 systemd[1]: Started libpod-conmon-753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0.scope.
Nov 25 09:31:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787492dae69f60a7ad6168cfb3819b75f1509bd1080b31c14f15eb54a74be965/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787492dae69f60a7ad6168cfb3819b75f1509bd1080b31c14f15eb54a74be965/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787492dae69f60a7ad6168cfb3819b75f1509bd1080b31c14f15eb54a74be965/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:29 compute-0 podman[437546]: 2025-11-25 09:31:29.731367619 +0000 UTC m=+0.031327515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787492dae69f60a7ad6168cfb3819b75f1509bd1080b31c14f15eb54a74be965/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:29 compute-0 podman[437546]: 2025-11-25 09:31:29.842677083 +0000 UTC m=+0.142636949 container init 753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williamson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:31:29 compute-0 podman[437546]: 2025-11-25 09:31:29.858370771 +0000 UTC m=+0.158330677 container start 753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williamson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:29 compute-0 podman[437546]: 2025-11-25 09:31:29.862915635 +0000 UTC m=+0.162875531 container attach 753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 09:31:30 compute-0 ceph-mon[75015]: pgmap v3350: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/105697232' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:31:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/105697232' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]: {
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:         "osd_id": 1,
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:         "type": "bluestore"
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:     },
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:         "osd_id": 2,
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:         "type": "bluestore"
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:     },
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:         "osd_id": 0,
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:         "type": "bluestore"
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]:     }
Nov 25 09:31:30 compute-0 ecstatic_williamson[437563]: }
Nov 25 09:31:30 compute-0 systemd[1]: libpod-753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0.scope: Deactivated successfully.
Nov 25 09:31:30 compute-0 podman[437546]: 2025-11-25 09:31:30.849981815 +0000 UTC m=+1.149941701 container died 753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:31:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-787492dae69f60a7ad6168cfb3819b75f1509bd1080b31c14f15eb54a74be965-merged.mount: Deactivated successfully.
Nov 25 09:31:30 compute-0 nova_compute[253538]: 2025-11-25 09:31:30.894 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3351: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:30 compute-0 podman[437546]: 2025-11-25 09:31:30.91105746 +0000 UTC m=+1.211017326 container remove 753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williamson, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:30 compute-0 systemd[1]: libpod-conmon-753d8a85ad6a191a4a6929b65ae2b2649ac427f48102b46feaa9c6683ab734f0.scope: Deactivated successfully.
Nov 25 09:31:30 compute-0 sudo[437441]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:31:30 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:31:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:31:31 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:31:31 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 2dfa15eb-6009-4d41-8d4f-5e0eb5e79605 does not exist
Nov 25 09:31:31 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c22bfd70-aa4f-4b6f-8c25-d1ba913229b8 does not exist
Nov 25 09:31:31 compute-0 sudo[437610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:31:31 compute-0 sudo[437610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:31 compute-0 sudo[437610]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:31 compute-0 sudo[437635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:31:31 compute-0 sudo[437635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:31 compute-0 sudo[437635]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:31 compute-0 nova_compute[253538]: 2025-11-25 09:31:31.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:31:31 compute-0 nova_compute[253538]: 2025-11-25 09:31:31.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:31:31 compute-0 nova_compute[253538]: 2025-11-25 09:31:31.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:31:31 compute-0 nova_compute[253538]: 2025-11-25 09:31:31.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:31:31 compute-0 nova_compute[253538]: 2025-11-25 09:31:31.580 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:31:31 compute-0 nova_compute[253538]: 2025-11-25 09:31:31.580 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:31:31 compute-0 ceph-mon[75015]: pgmap v3351: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:31:31 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:31:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:31:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3735912451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:31:32 compute-0 nova_compute[253538]: 2025-11-25 09:31:32.024 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:31:32 compute-0 nova_compute[253538]: 2025-11-25 09:31:32.166 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:31:32 compute-0 nova_compute[253538]: 2025-11-25 09:31:32.167 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3591MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:31:32 compute-0 nova_compute[253538]: 2025-11-25 09:31:32.167 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:31:32 compute-0 nova_compute[253538]: 2025-11-25 09:31:32.167 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:31:32 compute-0 nova_compute[253538]: 2025-11-25 09:31:32.223 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:31:32 compute-0 nova_compute[253538]: 2025-11-25 09:31:32.223 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:31:32 compute-0 nova_compute[253538]: 2025-11-25 09:31:32.243 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:31:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:31:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028974768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:31:32 compute-0 nova_compute[253538]: 2025-11-25 09:31:32.734 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:31:32 compute-0 nova_compute[253538]: 2025-11-25 09:31:32.739 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:31:32 compute-0 nova_compute[253538]: 2025-11-25 09:31:32.754 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:31:32 compute-0 nova_compute[253538]: 2025-11-25 09:31:32.756 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:31:32 compute-0 nova_compute[253538]: 2025-11-25 09:31:32.756 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:31:32 compute-0 nova_compute[253538]: 2025-11-25 09:31:32.757 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:31:32 compute-0 nova_compute[253538]: 2025-11-25 09:31:32.757 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 09:31:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3352: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3735912451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:31:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4028974768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:31:34 compute-0 ceph-mon[75015]: pgmap v3352: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:34 compute-0 nova_compute[253538]: 2025-11-25 09:31:34.224 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:31:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3353: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:35 compute-0 podman[437704]: 2025-11-25 09:31:35.810672513 +0000 UTC m=+0.064236722 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:31:35 compute-0 podman[437705]: 2025-11-25 09:31:35.8369579 +0000 UTC m=+0.087653661 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 25 09:31:35 compute-0 nova_compute[253538]: 2025-11-25 09:31:35.895 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:36 compute-0 ceph-mon[75015]: pgmap v3353: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3354: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:38 compute-0 ceph-mon[75015]: pgmap v3354: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3355: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:39 compute-0 nova_compute[253538]: 2025-11-25 09:31:39.227 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:31:40 compute-0 ceph-mon[75015]: pgmap v3355: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:40 compute-0 nova_compute[253538]: 2025-11-25 09:31:40.897 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3356: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:31:41.119 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:31:41.120 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:31:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:31:41.120 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:31:42 compute-0 ceph-mon[75015]: pgmap v3356: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3357: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:44 compute-0 ceph-mon[75015]: pgmap v3357: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:44 compute-0 nova_compute[253538]: 2025-11-25 09:31:44.229 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:31:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3358: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:45 compute-0 nova_compute[253538]: 2025-11-25 09:31:45.898 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:46 compute-0 ceph-mon[75015]: pgmap v3358: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:46 compute-0 podman[437741]: 2025-11-25 09:31:46.840483959 +0000 UTC m=+0.097213202 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 09:31:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3359: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:48 compute-0 ceph-mon[75015]: pgmap v3359: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3360: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:49 compute-0 nova_compute[253538]: 2025-11-25 09:31:49.230 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:31:50 compute-0 ceph-mon[75015]: pgmap v3360: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:50 compute-0 nova_compute[253538]: 2025-11-25 09:31:50.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3361: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:51 compute-0 ceph-mon[75015]: pgmap v3361: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3362: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:31:53
Nov 25 09:31:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:31:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:31:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['images', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'backups', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'default.rgw.meta']
Nov 25 09:31:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:31:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:31:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:31:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:31:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:31:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:31:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:31:53 compute-0 ceph-mon[75015]: pgmap v3362: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:31:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:31:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:31:54 compute-0 nova_compute[253538]: 2025-11-25 09:31:54.233 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:31:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:31:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:31:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:31:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:31:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:31:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:31:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:31:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3363: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:55 compute-0 nova_compute[253538]: 2025-11-25 09:31:55.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:55 compute-0 sshd[189888]: Timeout before authentication for connection from 49.64.242.249 to 38.102.83.169, pid = 435333
Nov 25 09:31:55 compute-0 ceph-mon[75015]: pgmap v3363: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:56 compute-0 sshd[189888]: drop connection #0 from [49.64.242.249]:57832 on [38.102.83.169]:22 penalty: exceeded LoginGraceTime
Nov 25 09:31:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3364: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:57 compute-0 sshd-session[437767]: Invalid user oracle from 193.32.162.151 port 58590
Nov 25 09:31:57 compute-0 sshd-session[437767]: Connection closed by invalid user oracle 193.32.162.151 port 58590 [preauth]
Nov 25 09:31:57 compute-0 ceph-mon[75015]: pgmap v3364: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3365: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:31:59 compute-0 nova_compute[253538]: 2025-11-25 09:31:59.235 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:31:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:31:59 compute-0 nova_compute[253538]: 2025-11-25 09:31:59.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:32:00 compute-0 ceph-mon[75015]: pgmap v3365: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:00 compute-0 nova_compute[253538]: 2025-11-25 09:32:00.904 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3366: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:02 compute-0 ceph-mon[75015]: pgmap v3366: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3367: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:04 compute-0 ceph-mon[75015]: pgmap v3367: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:04 compute-0 nova_compute[253538]: 2025-11-25 09:32:04.238 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:32:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3368: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:05 compute-0 nova_compute[253538]: 2025-11-25 09:32:05.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:32:05 compute-0 nova_compute[253538]: 2025-11-25 09:32:05.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:32:05 compute-0 nova_compute[253538]: 2025-11-25 09:32:05.565 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 09:32:05 compute-0 nova_compute[253538]: 2025-11-25 09:32:05.581 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 09:32:05 compute-0 nova_compute[253538]: 2025-11-25 09:32:05.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:06 compute-0 ceph-mon[75015]: pgmap v3368: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:06 compute-0 podman[437769]: 2025-11-25 09:32:06.812362813 +0000 UTC m=+0.060373757 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 09:32:06 compute-0 podman[437770]: 2025-11-25 09:32:06.829421369 +0000 UTC m=+0.063584715 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 09:32:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3369: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:08 compute-0 ceph-mon[75015]: pgmap v3369: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3370: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 25 09:32:09 compute-0 nova_compute[253538]: 2025-11-25 09:32:09.241 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:32:10 compute-0 ceph-mon[75015]: pgmap v3370: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 25 09:32:10 compute-0 nova_compute[253538]: 2025-11-25 09:32:10.571 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:32:10 compute-0 nova_compute[253538]: 2025-11-25 09:32:10.908 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3371: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 25 09:32:11 compute-0 ceph-mon[75015]: pgmap v3371: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 25 09:32:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3372: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:32:13 compute-0 nova_compute[253538]: 2025-11-25 09:32:13.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:32:13 compute-0 nova_compute[253538]: 2025-11-25 09:32:13.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:32:13 compute-0 nova_compute[253538]: 2025-11-25 09:32:13.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:32:13 compute-0 nova_compute[253538]: 2025-11-25 09:32:13.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:32:13 compute-0 ceph-mon[75015]: pgmap v3372: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:32:14 compute-0 nova_compute[253538]: 2025-11-25 09:32:14.241 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:32:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3373: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:32:15 compute-0 nova_compute[253538]: 2025-11-25 09:32:15.909 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:15 compute-0 ceph-mon[75015]: pgmap v3373: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:32:16 compute-0 nova_compute[253538]: 2025-11-25 09:32:16.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:32:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3374: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:32:17 compute-0 nova_compute[253538]: 2025-11-25 09:32:17.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:32:17 compute-0 nova_compute[253538]: 2025-11-25 09:32:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:32:17 compute-0 nova_compute[253538]: 2025-11-25 09:32:17.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:32:17 compute-0 podman[437808]: 2025-11-25 09:32:17.832418285 +0000 UTC m=+0.080763483 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:32:18 compute-0 ceph-mon[75015]: pgmap v3374: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:32:18 compute-0 nova_compute[253538]: 2025-11-25 09:32:18.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:32:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3375: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:32:19 compute-0 nova_compute[253538]: 2025-11-25 09:32:19.243 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:32:20 compute-0 ceph-mon[75015]: pgmap v3375: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:32:20 compute-0 nova_compute[253538]: 2025-11-25 09:32:20.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3376: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 25 09:32:21 compute-0 ceph-mon[75015]: pgmap v3376: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 25 09:32:21 compute-0 nova_compute[253538]: 2025-11-25 09:32:21.841 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:32:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3377: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 25 09:32:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:32:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:32:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:32:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:32:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:32:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:32:24 compute-0 ceph-mon[75015]: pgmap v3377: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 25 09:32:24 compute-0 nova_compute[253538]: 2025-11-25 09:32:24.245 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:32:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3378: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:25 compute-0 nova_compute[253538]: 2025-11-25 09:32:25.568 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:32:25 compute-0 nova_compute[253538]: 2025-11-25 09:32:25.913 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:26 compute-0 ceph-mon[75015]: pgmap v3378: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3379: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:28 compute-0 ceph-mon[75015]: pgmap v3379: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3380: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:32:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2463486539' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:32:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:32:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2463486539' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:32:29 compute-0 nova_compute[253538]: 2025-11-25 09:32:29.246 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:32:29 compute-0 sshd-session[437834]: Invalid user opc from 146.190.154.85 port 43918
Nov 25 09:32:29 compute-0 sshd-session[437834]: Received disconnect from 146.190.154.85 port 43918:11: Bye Bye [preauth]
Nov 25 09:32:29 compute-0 sshd-session[437834]: Disconnected from invalid user opc 146.190.154.85 port 43918 [preauth]
Nov 25 09:32:30 compute-0 ceph-mon[75015]: pgmap v3380: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2463486539' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:32:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2463486539' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:32:30 compute-0 sshd-session[437806]: Invalid user steam from 45.78.217.205 port 42584
Nov 25 09:32:30 compute-0 sshd-session[437806]: Received disconnect from 45.78.217.205 port 42584:11: Bye Bye [preauth]
Nov 25 09:32:30 compute-0 sshd-session[437806]: Disconnected from invalid user steam 45.78.217.205 port 42584 [preauth]
Nov 25 09:32:30 compute-0 nova_compute[253538]: 2025-11-25 09:32:30.915 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3381: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:31 compute-0 ceph-mon[75015]: pgmap v3381: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:31 compute-0 sudo[437836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:32:31 compute-0 sudo[437836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:31 compute-0 sudo[437836]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:31 compute-0 sudo[437861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:31 compute-0 sudo[437861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:31 compute-0 sudo[437861]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:31 compute-0 sudo[437886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:32:31 compute-0 sudo[437886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:31 compute-0 sudo[437886]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:31 compute-0 sudo[437911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:32:31 compute-0 sudo[437911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:31 compute-0 sudo[437911]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:32:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:32:32 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:32:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:32:32 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:32:32 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 2a958daf-c744-41e0-a227-6459902a6abd does not exist
Nov 25 09:32:32 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 4f98d49e-b021-4316-9d34-0d51ee81b71c does not exist
Nov 25 09:32:32 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 9536ad03-b1c8-48aa-9fe1-719d2c133a5a does not exist
Nov 25 09:32:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:32:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:32:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:32:32 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:32:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:32:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:32 compute-0 sudo[437968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:32:32 compute-0 sudo[437968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:32 compute-0 sudo[437968]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:32 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:32 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:32:32 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:32:32 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:32:32 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:32:32 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:32 compute-0 sudo[437993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:32 compute-0 sudo[437993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:32 compute-0 sudo[437993]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:32 compute-0 sudo[438018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:32:32 compute-0 sudo[438018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:32 compute-0 sudo[438018]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:32 compute-0 sudo[438043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:32:32 compute-0 sudo[438043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:32 compute-0 nova_compute[253538]: 2025-11-25 09:32:32.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:32:32 compute-0 nova_compute[253538]: 2025-11-25 09:32:32.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:32:32 compute-0 nova_compute[253538]: 2025-11-25 09:32:32.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:32:32 compute-0 nova_compute[253538]: 2025-11-25 09:32:32.575 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:32:32 compute-0 nova_compute[253538]: 2025-11-25 09:32:32.576 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:32:32 compute-0 nova_compute[253538]: 2025-11-25 09:32:32.576 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:32:32 compute-0 podman[438108]: 2025-11-25 09:32:32.710586018 +0000 UTC m=+0.044933126 container create 58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 09:32:32 compute-0 systemd[1]: Started libpod-conmon-58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754.scope.
Nov 25 09:32:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:32 compute-0 podman[438108]: 2025-11-25 09:32:32.688535007 +0000 UTC m=+0.022882095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:32:32 compute-0 podman[438108]: 2025-11-25 09:32:32.801906268 +0000 UTC m=+0.136253326 container init 58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hermann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 09:32:32 compute-0 podman[438108]: 2025-11-25 09:32:32.809913216 +0000 UTC m=+0.144260274 container start 58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hermann, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 09:32:32 compute-0 podman[438108]: 2025-11-25 09:32:32.813429582 +0000 UTC m=+0.147776640 container attach 58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hermann, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 09:32:32 compute-0 cool_hermann[438143]: 167 167
Nov 25 09:32:32 compute-0 systemd[1]: libpod-58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754.scope: Deactivated successfully.
Nov 25 09:32:32 compute-0 podman[438148]: 2025-11-25 09:32:32.852850467 +0000 UTC m=+0.023974265 container died 58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hermann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 25 09:32:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2bfc41aaa27bfc2f1e3eede4e4975c250d7e94929d29971e233b602d2907cb5-merged.mount: Deactivated successfully.
Nov 25 09:32:32 compute-0 podman[438148]: 2025-11-25 09:32:32.889128296 +0000 UTC m=+0.060252104 container remove 58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hermann, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:32:32 compute-0 systemd[1]: libpod-conmon-58fd72da658dcb7b61557bd3cbcd453df3508b7e152db12568662c4751fbc754.scope: Deactivated successfully.
Nov 25 09:32:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3382: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:32:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/452485779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:32:33 compute-0 nova_compute[253538]: 2025-11-25 09:32:33.058 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:32:33 compute-0 podman[438170]: 2025-11-25 09:32:33.0653588 +0000 UTC m=+0.050362624 container create 77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:33 compute-0 systemd[1]: Started libpod-conmon-77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f.scope.
Nov 25 09:32:33 compute-0 podman[438170]: 2025-11-25 09:32:33.042673162 +0000 UTC m=+0.027677026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:32:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98e488fcc4ebbc8ddb655ad9abd7dff831fe5d139e1389f654ad3cc26b788e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98e488fcc4ebbc8ddb655ad9abd7dff831fe5d139e1389f654ad3cc26b788e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98e488fcc4ebbc8ddb655ad9abd7dff831fe5d139e1389f654ad3cc26b788e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98e488fcc4ebbc8ddb655ad9abd7dff831fe5d139e1389f654ad3cc26b788e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98e488fcc4ebbc8ddb655ad9abd7dff831fe5d139e1389f654ad3cc26b788e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:33 compute-0 podman[438170]: 2025-11-25 09:32:33.164259126 +0000 UTC m=+0.149262970 container init 77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:32:33 compute-0 podman[438170]: 2025-11-25 09:32:33.17322078 +0000 UTC m=+0.158224604 container start 77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 09:32:33 compute-0 podman[438170]: 2025-11-25 09:32:33.177249901 +0000 UTC m=+0.162253745 container attach 77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:32:33 compute-0 ceph-mon[75015]: pgmap v3382: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/452485779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:32:33 compute-0 nova_compute[253538]: 2025-11-25 09:32:33.243 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:32:33 compute-0 nova_compute[253538]: 2025-11-25 09:32:33.246 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3600MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:32:33 compute-0 nova_compute[253538]: 2025-11-25 09:32:33.246 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:32:33 compute-0 nova_compute[253538]: 2025-11-25 09:32:33.246 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:32:33 compute-0 nova_compute[253538]: 2025-11-25 09:32:33.306 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:32:33 compute-0 nova_compute[253538]: 2025-11-25 09:32:33.307 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:32:33 compute-0 nova_compute[253538]: 2025-11-25 09:32:33.320 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:32:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:32:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3535890691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:32:33 compute-0 nova_compute[253538]: 2025-11-25 09:32:33.828 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:32:33 compute-0 nova_compute[253538]: 2025-11-25 09:32:33.835 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:32:33 compute-0 nova_compute[253538]: 2025-11-25 09:32:33.856 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:32:33 compute-0 nova_compute[253538]: 2025-11-25 09:32:33.860 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:32:33 compute-0 nova_compute[253538]: 2025-11-25 09:32:33.862 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:32:34 compute-0 naughty_jackson[438188]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:32:34 compute-0 naughty_jackson[438188]: --> relative data size: 1.0
Nov 25 09:32:34 compute-0 naughty_jackson[438188]: --> All data devices are unavailable
Nov 25 09:32:34 compute-0 systemd[1]: libpod-77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f.scope: Deactivated successfully.
Nov 25 09:32:34 compute-0 systemd[1]: libpod-77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f.scope: Consumed 1.012s CPU time.
Nov 25 09:32:34 compute-0 nova_compute[253538]: 2025-11-25 09:32:34.250 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:34 compute-0 podman[438239]: 2025-11-25 09:32:34.294362256 +0000 UTC m=+0.031878091 container died 77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 09:32:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:32:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3535890691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:32:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a98e488fcc4ebbc8ddb655ad9abd7dff831fe5d139e1389f654ad3cc26b788e7-merged.mount: Deactivated successfully.
Nov 25 09:32:34 compute-0 podman[438239]: 2025-11-25 09:32:34.743232613 +0000 UTC m=+0.480748428 container remove 77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:32:34 compute-0 systemd[1]: libpod-conmon-77cc221d182124e285144c0a74272471e8a53967477813b5a54e351bb596103f.scope: Deactivated successfully.
Nov 25 09:32:34 compute-0 sudo[438043]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:34 compute-0 sudo[438255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:32:34 compute-0 sudo[438255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:34 compute-0 sudo[438255]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3383: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:34 compute-0 sudo[438280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:34 compute-0 sudo[438280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:34 compute-0 sudo[438280]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:35 compute-0 sudo[438305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:32:35 compute-0 sudo[438305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:35 compute-0 sudo[438305]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:35 compute-0 sudo[438330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:32:35 compute-0 sudo[438330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:35 compute-0 podman[438395]: 2025-11-25 09:32:35.413834104 +0000 UTC m=+0.023566833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:32:35 compute-0 podman[438395]: 2025-11-25 09:32:35.555890197 +0000 UTC m=+0.165622946 container create 47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mclean, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 09:32:35 compute-0 ceph-mon[75015]: pgmap v3383: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:35 compute-0 systemd[1]: Started libpod-conmon-47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9.scope.
Nov 25 09:32:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:35 compute-0 podman[438395]: 2025-11-25 09:32:35.885495743 +0000 UTC m=+0.495228472 container init 47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:32:35 compute-0 podman[438395]: 2025-11-25 09:32:35.896885343 +0000 UTC m=+0.506618092 container start 47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mclean, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 09:32:35 compute-0 eloquent_mclean[438412]: 167 167
Nov 25 09:32:35 compute-0 systemd[1]: libpod-47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9.scope: Deactivated successfully.
Nov 25 09:32:35 compute-0 nova_compute[253538]: 2025-11-25 09:32:35.917 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:35 compute-0 podman[438395]: 2025-11-25 09:32:35.986214108 +0000 UTC m=+0.595946867 container attach 47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mclean, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 09:32:35 compute-0 podman[438395]: 2025-11-25 09:32:35.987411141 +0000 UTC m=+0.597143850 container died 47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mclean, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:32:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1ce38f23e761ce5119220c91e753f0366bedca261a35cc12872f8a13f12e18e-merged.mount: Deactivated successfully.
Nov 25 09:32:36 compute-0 podman[438395]: 2025-11-25 09:32:36.276283226 +0000 UTC m=+0.886015935 container remove 47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:32:36 compute-0 systemd[1]: libpod-conmon-47b0d5ab4205c39b1c2f1578cae32a5ff5559184fd649d14aa07b90e501c8de9.scope: Deactivated successfully.
Nov 25 09:32:36 compute-0 podman[438436]: 2025-11-25 09:32:36.456331594 +0000 UTC m=+0.029911595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:32:36 compute-0 podman[438436]: 2025-11-25 09:32:36.5750001 +0000 UTC m=+0.148580081 container create 15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:32:36 compute-0 systemd[1]: Started libpod-conmon-15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e.scope.
Nov 25 09:32:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6fedbde1fed2463b5b51ee35540d8bebb34c7da28c1cf81f602f276a5e284aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6fedbde1fed2463b5b51ee35540d8bebb34c7da28c1cf81f602f276a5e284aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6fedbde1fed2463b5b51ee35540d8bebb34c7da28c1cf81f602f276a5e284aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6fedbde1fed2463b5b51ee35540d8bebb34c7da28c1cf81f602f276a5e284aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:36 compute-0 podman[438436]: 2025-11-25 09:32:36.836575461 +0000 UTC m=+0.410155472 container init 15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:36 compute-0 podman[438436]: 2025-11-25 09:32:36.848462875 +0000 UTC m=+0.422042856 container start 15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:32:36 compute-0 podman[438436]: 2025-11-25 09:32:36.862544539 +0000 UTC m=+0.436124530 container attach 15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:32:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3384: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:37 compute-0 reverent_joliot[438453]: {
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:     "0": [
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:         {
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "devices": [
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "/dev/loop3"
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             ],
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "lv_name": "ceph_lv0",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "lv_size": "21470642176",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "name": "ceph_lv0",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "tags": {
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.cluster_name": "ceph",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.crush_device_class": "",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.encrypted": "0",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.osd_id": "0",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.type": "block",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.vdo": "0"
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             },
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "type": "block",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "vg_name": "ceph_vg0"
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:         }
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:     ],
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:     "1": [
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:         {
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "devices": [
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "/dev/loop4"
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             ],
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "lv_name": "ceph_lv1",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "lv_size": "21470642176",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "name": "ceph_lv1",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "tags": {
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.cluster_name": "ceph",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.crush_device_class": "",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.encrypted": "0",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.osd_id": "1",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.type": "block",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.vdo": "0"
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             },
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "type": "block",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "vg_name": "ceph_vg1"
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:         }
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:     ],
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:     "2": [
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:         {
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "devices": [
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "/dev/loop5"
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             ],
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "lv_name": "ceph_lv2",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "lv_size": "21470642176",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "name": "ceph_lv2",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "tags": {
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.cluster_name": "ceph",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.crush_device_class": "",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.encrypted": "0",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.osd_id": "2",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.type": "block",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:                 "ceph.vdo": "0"
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             },
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "type": "block",
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:             "vg_name": "ceph_vg2"
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:         }
Nov 25 09:32:37 compute-0 reverent_joliot[438453]:     ]
Nov 25 09:32:37 compute-0 reverent_joliot[438453]: }
Nov 25 09:32:37 compute-0 systemd[1]: libpod-15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e.scope: Deactivated successfully.
Nov 25 09:32:37 compute-0 podman[438436]: 2025-11-25 09:32:37.61050626 +0000 UTC m=+1.184086291 container died 15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 09:32:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6fedbde1fed2463b5b51ee35540d8bebb34c7da28c1cf81f602f276a5e284aa-merged.mount: Deactivated successfully.
Nov 25 09:32:38 compute-0 ceph-mon[75015]: pgmap v3384: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:38 compute-0 sshd-session[438458]: Received disconnect from 182.253.79.194 port 43572:11: Bye Bye [preauth]
Nov 25 09:32:38 compute-0 sshd-session[438458]: Disconnected from authenticating user root 182.253.79.194 port 43572 [preauth]
Nov 25 09:32:38 compute-0 podman[438436]: 2025-11-25 09:32:38.757227902 +0000 UTC m=+2.330807923 container remove 15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_joliot, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 09:32:38 compute-0 sudo[438330]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:38 compute-0 systemd[1]: libpod-conmon-15253919b7c359fa4a942be90d247e71b0767462bfc0e1cacf2fd9f2fd17907e.scope: Deactivated successfully.
Nov 25 09:32:38 compute-0 sudo[438500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:32:38 compute-0 podman[438465]: 2025-11-25 09:32:38.88554038 +0000 UTC m=+1.228918583 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 09:32:38 compute-0 podman[438472]: 2025-11-25 09:32:38.88480833 +0000 UTC m=+1.229712366 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 09:32:38 compute-0 sudo[438500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:38 compute-0 sudo[438500]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3385: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:38 compute-0 sudo[438541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:38 compute-0 sudo[438541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:38 compute-0 sudo[438541]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:39 compute-0 sudo[438566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:32:39 compute-0 sudo[438566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:39 compute-0 sudo[438566]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:39 compute-0 sudo[438591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:32:39 compute-0 sudo[438591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:39 compute-0 nova_compute[253538]: 2025-11-25 09:32:39.253 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:32:39 compute-0 ceph-mon[75015]: pgmap v3385: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:39 compute-0 podman[438657]: 2025-11-25 09:32:39.415212581 +0000 UTC m=+0.040147167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:32:39 compute-0 podman[438657]: 2025-11-25 09:32:39.577459143 +0000 UTC m=+0.202393729 container create ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 09:32:39 compute-0 systemd[1]: Started libpod-conmon-ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698.scope.
Nov 25 09:32:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:39 compute-0 podman[438657]: 2025-11-25 09:32:39.741848475 +0000 UTC m=+0.366783081 container init ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:32:39 compute-0 podman[438657]: 2025-11-25 09:32:39.75340939 +0000 UTC m=+0.378343976 container start ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 09:32:39 compute-0 great_haslett[438673]: 167 167
Nov 25 09:32:39 compute-0 systemd[1]: libpod-ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698.scope: Deactivated successfully.
Nov 25 09:32:39 compute-0 podman[438657]: 2025-11-25 09:32:39.820250872 +0000 UTC m=+0.445185458 container attach ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:32:39 compute-0 podman[438657]: 2025-11-25 09:32:39.821688652 +0000 UTC m=+0.446623238 container died ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:32:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca54e4da19159da098181a4439c763bfb153cad7cacb84c940146eb512a8c9e6-merged.mount: Deactivated successfully.
Nov 25 09:32:40 compute-0 podman[438657]: 2025-11-25 09:32:40.217766419 +0000 UTC m=+0.842701045 container remove ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:32:40 compute-0 systemd[1]: libpod-conmon-ea9ba254c0bd9b0442e926f3bc1c3b488d222fa1a279d56d2439abd3e074c698.scope: Deactivated successfully.
Nov 25 09:32:40 compute-0 podman[438700]: 2025-11-25 09:32:40.540343283 +0000 UTC m=+0.122126880 container create 291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:32:40 compute-0 podman[438700]: 2025-11-25 09:32:40.459271523 +0000 UTC m=+0.041055210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:32:40 compute-0 systemd[1]: Started libpod-conmon-291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa.scope.
Nov 25 09:32:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ace03bdd2c8b7656e8f34ea390951f51da22ad303f9b68444d4ffb1fe0beef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ace03bdd2c8b7656e8f34ea390951f51da22ad303f9b68444d4ffb1fe0beef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ace03bdd2c8b7656e8f34ea390951f51da22ad303f9b68444d4ffb1fe0beef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ace03bdd2c8b7656e8f34ea390951f51da22ad303f9b68444d4ffb1fe0beef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:40 compute-0 podman[438700]: 2025-11-25 09:32:40.667132549 +0000 UTC m=+0.248916166 container init 291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:32:40 compute-0 podman[438700]: 2025-11-25 09:32:40.680018061 +0000 UTC m=+0.261801698 container start 291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 09:32:40 compute-0 podman[438700]: 2025-11-25 09:32:40.79297447 +0000 UTC m=+0.374758167 container attach 291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:32:40 compute-0 nova_compute[253538]: 2025-11-25 09:32:40.920 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3386: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:32:41.120 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:32:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:32:41.121 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:32:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:32:41.121 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:32:41 compute-0 wonderful_buck[438716]: {
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:         "osd_id": 1,
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:         "type": "bluestore"
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:     },
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:         "osd_id": 2,
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:         "type": "bluestore"
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:     },
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:         "osd_id": 0,
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:         "type": "bluestore"
Nov 25 09:32:41 compute-0 wonderful_buck[438716]:     }
Nov 25 09:32:41 compute-0 wonderful_buck[438716]: }
Nov 25 09:32:41 compute-0 systemd[1]: libpod-291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa.scope: Deactivated successfully.
Nov 25 09:32:41 compute-0 systemd[1]: libpod-291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa.scope: Consumed 1.006s CPU time.
Nov 25 09:32:41 compute-0 podman[438700]: 2025-11-25 09:32:41.67878616 +0000 UTC m=+1.260569797 container died 291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:32:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2ace03bdd2c8b7656e8f34ea390951f51da22ad303f9b68444d4ffb1fe0beef-merged.mount: Deactivated successfully.
Nov 25 09:32:41 compute-0 podman[438700]: 2025-11-25 09:32:41.825695585 +0000 UTC m=+1.407479222 container remove 291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:32:41 compute-0 systemd[1]: libpod-conmon-291fe8498066679d1b403893c6a6371e397e4d13ac4f83dcca96f314f0e7a6aa.scope: Deactivated successfully.
Nov 25 09:32:41 compute-0 sudo[438591]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:32:41 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:32:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:32:41 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:32:41 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 1e47d68c-4dc6-478b-8556-95b45088a4d8 does not exist
Nov 25 09:32:41 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c2bccb43-b819-448a-b71e-cd9dcdbca330 does not exist
Nov 25 09:32:42 compute-0 sudo[438763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:32:42 compute-0 sudo[438763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:42 compute-0 sudo[438763]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:42 compute-0 sudo[438788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:32:42 compute-0 sudo[438788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:42 compute-0 sudo[438788]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:42 compute-0 ceph-mon[75015]: pgmap v3386: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:42 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:32:42 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:32:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3387: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:43 compute-0 ceph-mon[75015]: pgmap v3387: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:44 compute-0 nova_compute[253538]: 2025-11-25 09:32:44.254 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:32:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3388: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:45 compute-0 nova_compute[253538]: 2025-11-25 09:32:45.922 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:46 compute-0 ceph-mon[75015]: pgmap v3388: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3389: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:48 compute-0 ceph-mon[75015]: pgmap v3389: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:48 compute-0 podman[438813]: 2025-11-25 09:32:48.842458847 +0000 UTC m=+0.091695421 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:32:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3390: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:49 compute-0 nova_compute[253538]: 2025-11-25 09:32:49.256 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:32:50 compute-0 ceph-mon[75015]: pgmap v3390: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:50 compute-0 nova_compute[253538]: 2025-11-25 09:32:50.925 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3391: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:51 compute-0 ceph-mon[75015]: pgmap v3391: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3392: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:53 compute-0 ceph-mon[75015]: pgmap v3392: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:32:53
Nov 25 09:32:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:32:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:32:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'volumes', '.mgr', 'default.rgw.meta']
Nov 25 09:32:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:32:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:32:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:32:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:32:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:32:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:32:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:32:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:32:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:32:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:32:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:32:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:32:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:32:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:32:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:32:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:32:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:32:54 compute-0 nova_compute[253538]: 2025-11-25 09:32:54.258 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:32:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3393: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:55 compute-0 nova_compute[253538]: 2025-11-25 09:32:55.926 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:56 compute-0 ceph-mon[75015]: pgmap v3393: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3394: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:57 compute-0 ceph-mon[75015]: pgmap v3394: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3395: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:32:59 compute-0 nova_compute[253538]: 2025-11-25 09:32:59.259 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:32:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:33:00 compute-0 ceph-mon[75015]: pgmap v3395: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:00 compute-0 nova_compute[253538]: 2025-11-25 09:33:00.928 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3396: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:02 compute-0 ceph-mon[75015]: pgmap v3396: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3397: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.167758) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063183167798, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1673, "num_deletes": 251, "total_data_size": 2720273, "memory_usage": 2766368, "flush_reason": "Manual Compaction"}
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Nov 25 09:33:03 compute-0 ceph-mon[75015]: pgmap v3397: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063183229725, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 2672166, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68769, "largest_seqno": 70441, "table_properties": {"data_size": 2664378, "index_size": 4730, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15798, "raw_average_key_size": 19, "raw_value_size": 2648860, "raw_average_value_size": 3352, "num_data_blocks": 211, "num_entries": 790, "num_filter_entries": 790, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764062999, "oldest_key_time": 1764062999, "file_creation_time": 1764063183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 62053 microseconds, and 7396 cpu microseconds.
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.229801) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 2672166 bytes OK
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.229835) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.237794) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.237832) EVENT_LOG_v1 {"time_micros": 1764063183237820, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.237853) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 2713063, prev total WAL file size 2713063, number of live WAL files 2.
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.238894) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(2609KB)], [164(8704KB)]
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063183238925, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 11585256, "oldest_snapshot_seqno": -1}
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8789 keys, 9813066 bytes, temperature: kUnknown
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063183301965, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 9813066, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9758845, "index_size": 31132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 230682, "raw_average_key_size": 26, "raw_value_size": 9606403, "raw_average_value_size": 1093, "num_data_blocks": 1200, "num_entries": 8789, "num_filter_entries": 8789, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764063183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.302375) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 9813066 bytes
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.306495) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.4 rd, 155.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.5 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 9303, records dropped: 514 output_compression: NoCompression
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.306530) EVENT_LOG_v1 {"time_micros": 1764063183306514, "job": 102, "event": "compaction_finished", "compaction_time_micros": 63163, "compaction_time_cpu_micros": 23202, "output_level": 6, "num_output_files": 1, "total_output_size": 9813066, "num_input_records": 9303, "num_output_records": 8789, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063183307801, "job": 102, "event": "table_file_deletion", "file_number": 166}
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063183311375, "job": 102, "event": "table_file_deletion", "file_number": 164}
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.238816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.311478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.311485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.311487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.311490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:33:03 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:33:03.311492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:33:04 compute-0 nova_compute[253538]: 2025-11-25 09:33:04.261 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:33:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3398: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:05 compute-0 nova_compute[253538]: 2025-11-25 09:33:05.930 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:06 compute-0 ceph-mon[75015]: pgmap v3398: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3399: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:07 compute-0 nova_compute[253538]: 2025-11-25 09:33:07.864 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:33:08 compute-0 ceph-mon[75015]: pgmap v3399: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:08 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3400: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:09 compute-0 ceph-mon[75015]: pgmap v3400: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:09 compute-0 nova_compute[253538]: 2025-11-25 09:33:09.285 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:33:09 compute-0 podman[438839]: 2025-11-25 09:33:09.825786209 +0000 UTC m=+0.072300712 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 09:33:09 compute-0 podman[438838]: 2025-11-25 09:33:09.844214762 +0000 UTC m=+0.089080919 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:33:10 compute-0 nova_compute[253538]: 2025-11-25 09:33:10.932 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:10 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3401: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:11 compute-0 nova_compute[253538]: 2025-11-25 09:33:11.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:33:12 compute-0 ceph-mon[75015]: pgmap v3401: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:12 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3402: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:14 compute-0 ceph-mon[75015]: pgmap v3402: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:14 compute-0 nova_compute[253538]: 2025-11-25 09:33:14.288 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:33:14 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3403: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:15 compute-0 nova_compute[253538]: 2025-11-25 09:33:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:33:15 compute-0 nova_compute[253538]: 2025-11-25 09:33:15.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:33:15 compute-0 nova_compute[253538]: 2025-11-25 09:33:15.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:33:15 compute-0 nova_compute[253538]: 2025-11-25 09:33:15.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:33:15 compute-0 nova_compute[253538]: 2025-11-25 09:33:15.932 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:16 compute-0 ceph-mon[75015]: pgmap v3403: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:16 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3404: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:17 compute-0 sshd-session[438877]: Connection closed by authenticating user root 171.244.51.45 port 37536 [preauth]
Nov 25 09:33:17 compute-0 nova_compute[253538]: 2025-11-25 09:33:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:33:17 compute-0 nova_compute[253538]: 2025-11-25 09:33:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:33:17 compute-0 nova_compute[253538]: 2025-11-25 09:33:17.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:33:17 compute-0 nova_compute[253538]: 2025-11-25 09:33:17.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:33:18 compute-0 ceph-mon[75015]: pgmap v3404: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:18 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3405: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:19 compute-0 nova_compute[253538]: 2025-11-25 09:33:19.290 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:33:19 compute-0 nova_compute[253538]: 2025-11-25 09:33:19.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:33:19 compute-0 podman[438879]: 2025-11-25 09:33:19.843818618 +0000 UTC m=+0.090292552 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 09:33:20 compute-0 ceph-mon[75015]: pgmap v3405: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:20 compute-0 nova_compute[253538]: 2025-11-25 09:33:20.935 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:20 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3406: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:21 compute-0 ceph-mon[75015]: pgmap v3406: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:22 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3407: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:33:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:33:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:33:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:33:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:33:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:33:24 compute-0 ceph-mon[75015]: pgmap v3407: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:24 compute-0 nova_compute[253538]: 2025-11-25 09:33:24.291 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:33:24 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3408: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:25 compute-0 nova_compute[253538]: 2025-11-25 09:33:25.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:33:25 compute-0 nova_compute[253538]: 2025-11-25 09:33:25.936 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:26 compute-0 ceph-mon[75015]: pgmap v3408: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:26 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3409: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:28 compute-0 ceph-mon[75015]: pgmap v3409: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:28 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3410: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:33:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3357591464' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:33:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:33:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3357591464' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:33:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3357591464' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:33:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3357591464' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:33:29 compute-0 nova_compute[253538]: 2025-11-25 09:33:29.293 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:33:30 compute-0 ceph-mon[75015]: pgmap v3410: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:30 compute-0 nova_compute[253538]: 2025-11-25 09:33:30.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:33:30 compute-0 nova_compute[253538]: 2025-11-25 09:33:30.978 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:30 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3411: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:32 compute-0 ceph-mon[75015]: pgmap v3411: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:32 compute-0 nova_compute[253538]: 2025-11-25 09:33:32.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:33:32 compute-0 nova_compute[253538]: 2025-11-25 09:33:32.599 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:33:32 compute-0 nova_compute[253538]: 2025-11-25 09:33:32.600 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:33:32 compute-0 nova_compute[253538]: 2025-11-25 09:33:32.600 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:33:32 compute-0 nova_compute[253538]: 2025-11-25 09:33:32.600 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:33:32 compute-0 nova_compute[253538]: 2025-11-25 09:33:32.601 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:33:32 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3412: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:33:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3904402365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:33:33 compute-0 nova_compute[253538]: 2025-11-25 09:33:33.023 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:33:33 compute-0 nova_compute[253538]: 2025-11-25 09:33:33.216 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:33:33 compute-0 nova_compute[253538]: 2025-11-25 09:33:33.217 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3638MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:33:33 compute-0 nova_compute[253538]: 2025-11-25 09:33:33.218 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:33:33 compute-0 nova_compute[253538]: 2025-11-25 09:33:33.218 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:33:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3904402365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:33:33 compute-0 nova_compute[253538]: 2025-11-25 09:33:33.365 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:33:33 compute-0 nova_compute[253538]: 2025-11-25 09:33:33.365 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:33:33 compute-0 nova_compute[253538]: 2025-11-25 09:33:33.463 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 09:33:33 compute-0 nova_compute[253538]: 2025-11-25 09:33:33.553 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 09:33:33 compute-0 nova_compute[253538]: 2025-11-25 09:33:33.554 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:33:33 compute-0 nova_compute[253538]: 2025-11-25 09:33:33.572 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 09:33:33 compute-0 nova_compute[253538]: 2025-11-25 09:33:33.596 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 09:33:33 compute-0 nova_compute[253538]: 2025-11-25 09:33:33.613 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:33:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:33:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2998360382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:33:34 compute-0 nova_compute[253538]: 2025-11-25 09:33:34.051 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:33:34 compute-0 nova_compute[253538]: 2025-11-25 09:33:34.058 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:33:34 compute-0 nova_compute[253538]: 2025-11-25 09:33:34.083 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:33:34 compute-0 nova_compute[253538]: 2025-11-25 09:33:34.085 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:33:34 compute-0 nova_compute[253538]: 2025-11-25 09:33:34.085 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:33:34 compute-0 ceph-mon[75015]: pgmap v3412: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2998360382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:33:34 compute-0 nova_compute[253538]: 2025-11-25 09:33:34.295 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:33:34 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3413: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:35 compute-0 ceph-mon[75015]: pgmap v3413: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:35 compute-0 sshd-session[438905]: Connection closed by 45.78.222.2 port 48688 [preauth]
Nov 25 09:33:35 compute-0 nova_compute[253538]: 2025-11-25 09:33:35.980 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:36 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3414: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:38 compute-0 ceph-mon[75015]: pgmap v3414: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:38 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3415: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:39 compute-0 nova_compute[253538]: 2025-11-25 09:33:39.297 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:33:39 compute-0 sshd-session[438951]: Invalid user rstudio from 146.190.154.85 port 52048
Nov 25 09:33:39 compute-0 sshd-session[438951]: Received disconnect from 146.190.154.85 port 52048:11: Bye Bye [preauth]
Nov 25 09:33:39 compute-0 sshd-session[438951]: Disconnected from invalid user rstudio 146.190.154.85 port 52048 [preauth]
Nov 25 09:33:40 compute-0 ceph-mon[75015]: pgmap v3415: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:40 compute-0 podman[438953]: 2025-11-25 09:33:40.828228229 +0000 UTC m=+0.077038831 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 09:33:40 compute-0 podman[438954]: 2025-11-25 09:33:40.841360497 +0000 UTC m=+0.076691962 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 25 09:33:40 compute-0 nova_compute[253538]: 2025-11-25 09:33:40.982 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:40 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3416: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:33:41.121 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:33:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:33:41.122 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:33:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:33:41.122 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:33:42 compute-0 ceph-mon[75015]: pgmap v3416: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:42 compute-0 sudo[438991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:33:42 compute-0 sudo[438991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:42 compute-0 sudo[438991]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:42 compute-0 sudo[439016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:42 compute-0 sudo[439016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:42 compute-0 sudo[439016]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:42 compute-0 sudo[439041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:33:42 compute-0 sudo[439041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:42 compute-0 sudo[439041]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:42 compute-0 sudo[439066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 09:33:42 compute-0 sudo[439066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:42 compute-0 sudo[439066]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:33:42 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:33:42 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:33:42 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:33:42 compute-0 sudo[439111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:33:42 compute-0 sudo[439111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:42 compute-0 sudo[439111]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:42 compute-0 sudo[439136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:42 compute-0 sudo[439136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:42 compute-0 sudo[439136]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:42 compute-0 sudo[439161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:33:42 compute-0 sudo[439161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:42 compute-0 sudo[439161]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:42 compute-0 sudo[439186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:33:42 compute-0 sudo[439186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:42 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3417: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:43 compute-0 sudo[439186]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:33:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:33:43 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:33:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:33:43 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:33:43 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 88b38ac1-664b-4884-8c6c-63f2444f1ef4 does not exist
Nov 25 09:33:43 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 0b481079-20e2-4ce6-9fa8-615d9da57a2b does not exist
Nov 25 09:33:43 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 3f0366ca-105e-4b40-829b-8d9fc71aa33a does not exist
Nov 25 09:33:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:33:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:33:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:33:43 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:33:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:43 compute-0 sudo[439242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:33:43 compute-0 sudo[439242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:43 compute-0 sudo[439242]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:43 compute-0 sudo[439267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:43 compute-0 sudo[439267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:43 compute-0 sudo[439267]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:43 compute-0 sudo[439292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:33:43 compute-0 sudo[439292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:43 compute-0 sudo[439292]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:43 compute-0 sudo[439317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:33:43 compute-0 sudo[439317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:43 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:33:43 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:33:43 compute-0 ceph-mon[75015]: pgmap v3417: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:43 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:43 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:33:43 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:33:43 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:33:43 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:43 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:44 compute-0 podman[439382]: 2025-11-25 09:33:43.941695748 +0000 UTC m=+0.021363803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:33:44 compute-0 podman[439382]: 2025-11-25 09:33:44.058241516 +0000 UTC m=+0.137909561 container create 61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:44 compute-0 systemd[1]: Started libpod-conmon-61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791.scope.
Nov 25 09:33:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:44 compute-0 nova_compute[253538]: 2025-11-25 09:33:44.298 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:44 compute-0 podman[439382]: 2025-11-25 09:33:44.30637483 +0000 UTC m=+0.386042895 container init 61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:44 compute-0 podman[439382]: 2025-11-25 09:33:44.314894373 +0000 UTC m=+0.394562398 container start 61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 09:33:44 compute-0 jolly_kirch[439398]: 167 167
Nov 25 09:33:44 compute-0 systemd[1]: libpod-61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791.scope: Deactivated successfully.
Nov 25 09:33:44 compute-0 conmon[439398]: conmon 61a4956fcb2086360076 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791.scope/container/memory.events
Nov 25 09:33:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:33:44 compute-0 podman[439382]: 2025-11-25 09:33:44.393709292 +0000 UTC m=+0.473377317 container attach 61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 09:33:44 compute-0 podman[439382]: 2025-11-25 09:33:44.394670248 +0000 UTC m=+0.474338253 container died 61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:33:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-315db2ef99eda647d0ee0c246513139f1f8e30731defc5b19674169931ae69d7-merged.mount: Deactivated successfully.
Nov 25 09:33:44 compute-0 podman[439382]: 2025-11-25 09:33:44.899667444 +0000 UTC m=+0.979335449 container remove 61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 09:33:44 compute-0 systemd[1]: libpod-conmon-61a4956fcb2086360076bd07aedfd2772719dce91c3c20405ad34760c2ece791.scope: Deactivated successfully.
Nov 25 09:33:44 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3418: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:45 compute-0 podman[439422]: 2025-11-25 09:33:45.087856665 +0000 UTC m=+0.039718034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:33:45 compute-0 podman[439422]: 2025-11-25 09:33:45.292028251 +0000 UTC m=+0.243889630 container create 8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:45 compute-0 systemd[1]: Started libpod-conmon-8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185.scope.
Nov 25 09:33:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a3af70e5932aa8f615e3656ef766278be45904183eedfab4606becc1abea22d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a3af70e5932aa8f615e3656ef766278be45904183eedfab4606becc1abea22d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a3af70e5932aa8f615e3656ef766278be45904183eedfab4606becc1abea22d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a3af70e5932aa8f615e3656ef766278be45904183eedfab4606becc1abea22d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a3af70e5932aa8f615e3656ef766278be45904183eedfab4606becc1abea22d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:45 compute-0 podman[439422]: 2025-11-25 09:33:45.555124484 +0000 UTC m=+0.506985833 container init 8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 09:33:45 compute-0 podman[439422]: 2025-11-25 09:33:45.562828834 +0000 UTC m=+0.514690183 container start 8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:33:45 compute-0 podman[439422]: 2025-11-25 09:33:45.638014044 +0000 UTC m=+0.589875423 container attach 8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:33:45 compute-0 nova_compute[253538]: 2025-11-25 09:33:45.984 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Nov 25 09:33:46 compute-0 ceph-mon[75015]: pgmap v3418: 321 pgs: 321 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:33:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Nov 25 09:33:46 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Nov 25 09:33:46 compute-0 quirky_goldstine[439438]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:33:46 compute-0 quirky_goldstine[439438]: --> relative data size: 1.0
Nov 25 09:33:46 compute-0 quirky_goldstine[439438]: --> All data devices are unavailable
Nov 25 09:33:46 compute-0 systemd[1]: libpod-8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185.scope: Deactivated successfully.
Nov 25 09:33:46 compute-0 podman[439422]: 2025-11-25 09:33:46.590392538 +0000 UTC m=+1.542253887 container died 8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:33:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a3af70e5932aa8f615e3656ef766278be45904183eedfab4606becc1abea22d-merged.mount: Deactivated successfully.
Nov 25 09:33:46 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3420: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s rd, 614 B/s wr, 8 op/s
Nov 25 09:33:47 compute-0 podman[439422]: 2025-11-25 09:33:47.089536625 +0000 UTC m=+2.041398014 container remove 8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:47 compute-0 sudo[439317]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:47 compute-0 systemd[1]: libpod-conmon-8028a5cf37236c849404acca54ad9215e793e3502daf02105d6266b1fb233185.scope: Deactivated successfully.
Nov 25 09:33:47 compute-0 sudo[439479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:33:47 compute-0 sudo[439479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:47 compute-0 sudo[439479]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:47 compute-0 sudo[439504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:47 compute-0 sudo[439504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:47 compute-0 sudo[439504]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:47 compute-0 ceph-mon[75015]: osdmap e284: 3 total, 3 up, 3 in
Nov 25 09:33:47 compute-0 ceph-mon[75015]: pgmap v3420: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 41 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s rd, 614 B/s wr, 8 op/s
Nov 25 09:33:47 compute-0 sudo[439529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:33:47 compute-0 sudo[439529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:47 compute-0 sudo[439529]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:47 compute-0 sudo[439554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:33:47 compute-0 sudo[439554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:47 compute-0 podman[439619]: 2025-11-25 09:33:47.87507348 +0000 UTC m=+0.116550727 container create 39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 09:33:47 compute-0 podman[439619]: 2025-11-25 09:33:47.795885622 +0000 UTC m=+0.037362959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:33:47 compute-0 systemd[1]: Started libpod-conmon-39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2.scope.
Nov 25 09:33:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:47 compute-0 podman[439619]: 2025-11-25 09:33:47.991889835 +0000 UTC m=+0.233367122 container init 39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_diffie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 09:33:48 compute-0 podman[439619]: 2025-11-25 09:33:48.003475941 +0000 UTC m=+0.244953188 container start 39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_diffie, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:48 compute-0 podman[439619]: 2025-11-25 09:33:48.006435492 +0000 UTC m=+0.247912859 container attach 39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:33:48 compute-0 distracted_diffie[439635]: 167 167
Nov 25 09:33:48 compute-0 systemd[1]: libpod-39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2.scope: Deactivated successfully.
Nov 25 09:33:48 compute-0 conmon[439635]: conmon 39941af595881874439f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2.scope/container/memory.events
Nov 25 09:33:48 compute-0 podman[439619]: 2025-11-25 09:33:48.011192642 +0000 UTC m=+0.252669899 container died 39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_diffie, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 09:33:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6cb92f1671015a272803035473012e6d068952726001ee0591fe85283d13c1e-merged.mount: Deactivated successfully.
Nov 25 09:33:48 compute-0 podman[439619]: 2025-11-25 09:33:48.114464877 +0000 UTC m=+0.355942134 container remove 39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 09:33:48 compute-0 systemd[1]: libpod-conmon-39941af595881874439f6e869d8db369dd1225500a2905606f4ca84c087b67d2.scope: Deactivated successfully.
Nov 25 09:33:48 compute-0 podman[439659]: 2025-11-25 09:33:48.303421298 +0000 UTC m=+0.066913095 container create 52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:33:48 compute-0 systemd[1]: Started libpod-conmon-52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7.scope.
Nov 25 09:33:48 compute-0 podman[439659]: 2025-11-25 09:33:48.258850193 +0000 UTC m=+0.022342000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:33:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b889bfd34aa25d58c825ae5a7b0a248519bd9d726ee657f1f55745a86dd22b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b889bfd34aa25d58c825ae5a7b0a248519bd9d726ee657f1f55745a86dd22b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b889bfd34aa25d58c825ae5a7b0a248519bd9d726ee657f1f55745a86dd22b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b889bfd34aa25d58c825ae5a7b0a248519bd9d726ee657f1f55745a86dd22b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:48 compute-0 podman[439659]: 2025-11-25 09:33:48.463511622 +0000 UTC m=+0.227003429 container init 52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khorana, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 09:33:48 compute-0 podman[439659]: 2025-11-25 09:33:48.469885757 +0000 UTC m=+0.233377584 container start 52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:48 compute-0 podman[439659]: 2025-11-25 09:33:48.587784931 +0000 UTC m=+0.351276758 container attach 52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khorana, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 09:33:48 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3421: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 33 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 614 B/s wr, 9 op/s
Nov 25 09:33:49 compute-0 amazing_khorana[439675]: {
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:     "0": [
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:         {
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "devices": [
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "/dev/loop3"
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             ],
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "lv_name": "ceph_lv0",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "lv_size": "21470642176",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "name": "ceph_lv0",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "tags": {
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.cluster_name": "ceph",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.crush_device_class": "",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.encrypted": "0",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.osd_id": "0",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.type": "block",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.vdo": "0"
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             },
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "type": "block",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "vg_name": "ceph_vg0"
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:         }
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:     ],
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:     "1": [
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:         {
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "devices": [
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "/dev/loop4"
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             ],
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "lv_name": "ceph_lv1",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "lv_size": "21470642176",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "name": "ceph_lv1",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "tags": {
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.cluster_name": "ceph",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.crush_device_class": "",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.encrypted": "0",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.osd_id": "1",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.type": "block",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.vdo": "0"
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             },
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "type": "block",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "vg_name": "ceph_vg1"
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:         }
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:     ],
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:     "2": [
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:         {
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "devices": [
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "/dev/loop5"
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             ],
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "lv_name": "ceph_lv2",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "lv_size": "21470642176",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "name": "ceph_lv2",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "tags": {
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.cluster_name": "ceph",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.crush_device_class": "",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.encrypted": "0",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.osd_id": "2",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.type": "block",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:                 "ceph.vdo": "0"
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             },
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "type": "block",
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:             "vg_name": "ceph_vg2"
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:         }
Nov 25 09:33:49 compute-0 amazing_khorana[439675]:     ]
Nov 25 09:33:49 compute-0 amazing_khorana[439675]: }
Nov 25 09:33:49 compute-0 systemd[1]: libpod-52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7.scope: Deactivated successfully.
Nov 25 09:33:49 compute-0 podman[439659]: 2025-11-25 09:33:49.282247033 +0000 UTC m=+1.045738850 container died 52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khorana, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:49 compute-0 nova_compute[253538]: 2025-11-25 09:33:49.301 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:33:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-92b889bfd34aa25d58c825ae5a7b0a248519bd9d726ee657f1f55745a86dd22b-merged.mount: Deactivated successfully.
Nov 25 09:33:49 compute-0 podman[439659]: 2025-11-25 09:33:49.647985584 +0000 UTC m=+1.411477371 container remove 52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khorana, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:49 compute-0 systemd[1]: libpod-conmon-52ec1b13510830a64df55b4414acd0d29fe888041802af18ad5bbdb4a3bec2f7.scope: Deactivated successfully.
Nov 25 09:33:49 compute-0 sudo[439554]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:49 compute-0 sudo[439696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:33:49 compute-0 sudo[439696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:49 compute-0 sudo[439696]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:49 compute-0 sudo[439721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:49 compute-0 sudo[439721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:49 compute-0 sudo[439721]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:49 compute-0 sudo[439746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:33:49 compute-0 sudo[439746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:49 compute-0 sudo[439746]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:49 compute-0 sudo[439772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:33:49 compute-0 sudo[439772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:50 compute-0 podman[439770]: 2025-11-25 09:33:50.028358504 +0000 UTC m=+0.099308708 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:33:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Nov 25 09:33:50 compute-0 ceph-mon[75015]: pgmap v3421: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 33 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 614 B/s wr, 9 op/s
Nov 25 09:33:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Nov 25 09:33:50 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Nov 25 09:33:50 compute-0 podman[439863]: 2025-11-25 09:33:50.408255141 +0000 UTC m=+0.102438334 container create e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 09:33:50 compute-0 podman[439863]: 2025-11-25 09:33:50.33746228 +0000 UTC m=+0.031645463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:33:50 compute-0 systemd[1]: Started libpod-conmon-e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7.scope.
Nov 25 09:33:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:50 compute-0 nova_compute[253538]: 2025-11-25 09:33:50.986 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:50 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3423: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.7 KiB/s wr, 22 op/s
Nov 25 09:33:51 compute-0 podman[439863]: 2025-11-25 09:33:51.039853749 +0000 UTC m=+0.734036932 container init e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:33:51 compute-0 podman[439863]: 2025-11-25 09:33:51.053208733 +0000 UTC m=+0.747391906 container start e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:33:51 compute-0 crazy_ptolemy[439882]: 167 167
Nov 25 09:33:51 compute-0 systemd[1]: libpod-e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7.scope: Deactivated successfully.
Nov 25 09:33:51 compute-0 podman[439863]: 2025-11-25 09:33:51.149081857 +0000 UTC m=+0.843265050 container attach e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:51 compute-0 podman[439863]: 2025-11-25 09:33:51.149487438 +0000 UTC m=+0.843670611 container died e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:51 compute-0 ceph-mon[75015]: osdmap e285: 3 total, 3 up, 3 in
Nov 25 09:33:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-14dea20a4e070f30f646ddc0fcfef4c5268e618308fad135909c8240333d99fa-merged.mount: Deactivated successfully.
Nov 25 09:33:51 compute-0 podman[439863]: 2025-11-25 09:33:51.22033935 +0000 UTC m=+0.914522523 container remove e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ptolemy, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:51 compute-0 systemd[1]: libpod-conmon-e567ce78abed84fd97e0c77c84d7f9a829b0bdf26a5e2198f185c4a23d7a59f7.scope: Deactivated successfully.
Nov 25 09:33:51 compute-0 podman[439905]: 2025-11-25 09:33:51.49475968 +0000 UTC m=+0.097119328 container create ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:51 compute-0 podman[439905]: 2025-11-25 09:33:51.422548033 +0000 UTC m=+0.024907711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:33:51 compute-0 systemd[1]: Started libpod-conmon-ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b.scope.
Nov 25 09:33:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8cfeab3052566ed05dae5cd6df2107820baa724e3754ea76b7f6288155a69f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8cfeab3052566ed05dae5cd6df2107820baa724e3754ea76b7f6288155a69f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8cfeab3052566ed05dae5cd6df2107820baa724e3754ea76b7f6288155a69f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8cfeab3052566ed05dae5cd6df2107820baa724e3754ea76b7f6288155a69f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:51 compute-0 podman[439905]: 2025-11-25 09:33:51.854533359 +0000 UTC m=+0.456893087 container init ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 09:33:51 compute-0 podman[439905]: 2025-11-25 09:33:51.866458644 +0000 UTC m=+0.468818292 container start ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:51 compute-0 podman[439905]: 2025-11-25 09:33:51.957398974 +0000 UTC m=+0.559758702 container attach ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 09:33:52 compute-0 ceph-mon[75015]: pgmap v3423: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.7 KiB/s wr, 22 op/s
Nov 25 09:33:52 compute-0 sharp_wilson[439921]: {
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:         "osd_id": 1,
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:         "type": "bluestore"
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:     },
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:         "osd_id": 2,
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:         "type": "bluestore"
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:     },
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:         "osd_id": 0,
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:         "type": "bluestore"
Nov 25 09:33:52 compute-0 sharp_wilson[439921]:     }
Nov 25 09:33:52 compute-0 sharp_wilson[439921]: }
Nov 25 09:33:52 compute-0 systemd[1]: libpod-ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b.scope: Deactivated successfully.
Nov 25 09:33:52 compute-0 podman[439905]: 2025-11-25 09:33:52.934957244 +0000 UTC m=+1.537316882 container died ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 09:33:52 compute-0 systemd[1]: libpod-ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b.scope: Consumed 1.074s CPU time.
Nov 25 09:33:52 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3424: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.4 KiB/s wr, 41 op/s
Nov 25 09:33:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f8cfeab3052566ed05dae5cd6df2107820baa724e3754ea76b7f6288155a69f-merged.mount: Deactivated successfully.
Nov 25 09:33:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:33:53
Nov 25 09:33:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:33:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:33:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.log', '.mgr', '.rgw.root', 'images']
Nov 25 09:33:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:33:53 compute-0 ceph-mon[75015]: pgmap v3424: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.4 KiB/s wr, 41 op/s
Nov 25 09:33:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:33:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:33:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:33:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:33:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:33:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:33:53 compute-0 podman[439905]: 2025-11-25 09:33:53.691232561 +0000 UTC m=+2.293592229 container remove ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:53 compute-0 systemd[1]: libpod-conmon-ab00855765c45d812419342b6de98c94546c0ddac3f1ad289666e92b8eb3a33b.scope: Deactivated successfully.
Nov 25 09:33:53 compute-0 sudo[439772]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:33:53 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:33:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:33:53 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:33:53 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev f8870446-f61b-463a-9ad8-c4f178a450a2 does not exist
Nov 25 09:33:53 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d3968b03-fa92-4517-bb4f-3c35c5070e5c does not exist
Nov 25 09:33:53 compute-0 sudo[439969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:33:53 compute-0 sudo[439969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:53 compute-0 sudo[439969]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:53 compute-0 sudo[439994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:33:53 compute-0 sudo[439994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:53 compute-0 sudo[439994]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-0 ceph-mgr[75313]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3119838916
Nov 25 09:33:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:33:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:33:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:33:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:33:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:33:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:33:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:33:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:33:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:33:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:33:54 compute-0 nova_compute[253538]: 2025-11-25 09:33:54.304 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:33:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Nov 25 09:33:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Nov 25 09:33:54 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Nov 25 09:33:54 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:33:54 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:33:54 compute-0 ceph-mon[75015]: osdmap e286: 3 total, 3 up, 3 in
Nov 25 09:33:54 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3426: 321 pgs: 321 active+clean; 4.9 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 52 op/s
Nov 25 09:33:55 compute-0 ceph-mon[75015]: pgmap v3426: 321 pgs: 321 active+clean; 4.9 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 52 op/s
Nov 25 09:33:55 compute-0 nova_compute[253538]: 2025-11-25 09:33:55.987 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:56 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3427: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 50 op/s
Nov 25 09:33:58 compute-0 ceph-mon[75015]: pgmap v3427: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 50 op/s
Nov 25 09:33:58 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3428: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.7 KiB/s wr, 36 op/s
Nov 25 09:33:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Nov 25 09:33:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Nov 25 09:33:59 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Nov 25 09:33:59 compute-0 nova_compute[253538]: 2025-11-25 09:33:59.307 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:33:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:33:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Nov 25 09:33:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Nov 25 09:33:59 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Nov 25 09:34:00 compute-0 ceph-mon[75015]: pgmap v3428: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.7 KiB/s wr, 36 op/s
Nov 25 09:34:00 compute-0 ceph-mon[75015]: osdmap e287: 3 total, 3 up, 3 in
Nov 25 09:34:00 compute-0 ceph-mon[75015]: osdmap e288: 3 total, 3 up, 3 in
Nov 25 09:34:00 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3431: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 964 B/s wr, 19 op/s
Nov 25 09:34:01 compute-0 nova_compute[253538]: 2025-11-25 09:34:01.001 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:01 compute-0 ceph-mon[75015]: pgmap v3431: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 964 B/s wr, 19 op/s
Nov 25 09:34:02 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3432: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 2.6 MiB/s wr, 15 op/s
Nov 25 09:34:03 compute-0 sshd-session[440019]: Invalid user oracle from 193.32.162.151 port 45346
Nov 25 09:34:03 compute-0 sshd-session[440019]: Connection closed by invalid user oracle 193.32.162.151 port 45346 [preauth]
Nov 25 09:34:04 compute-0 ceph-mon[75015]: pgmap v3432: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 2.6 MiB/s wr, 15 op/s
Nov 25 09:34:04 compute-0 nova_compute[253538]: 2025-11-25 09:34:04.347 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.533581) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063244533654, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 819, "num_deletes": 260, "total_data_size": 1031675, "memory_usage": 1048064, "flush_reason": "Manual Compaction"}
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063244544795, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 1010330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70442, "largest_seqno": 71260, "table_properties": {"data_size": 1006135, "index_size": 1910, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9358, "raw_average_key_size": 19, "raw_value_size": 997583, "raw_average_value_size": 2073, "num_data_blocks": 85, "num_entries": 481, "num_filter_entries": 481, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063185, "oldest_key_time": 1764063185, "file_creation_time": 1764063244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 11270 microseconds, and 4859 cpu microseconds.
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.544850) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 1010330 bytes OK
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.544886) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.546439) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.546461) EVENT_LOG_v1 {"time_micros": 1764063244546454, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.546480) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 1027542, prev total WAL file size 1027542, number of live WAL files 2.
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.547239) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303139' seq:72057594037927935, type:22 .. '6C6F676D0033323731' seq:0, type:0; will stop at (end)
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(986KB)], [167(9583KB)]
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063244547280, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10823396, "oldest_snapshot_seqno": -1}
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8737 keys, 10710109 bytes, temperature: kUnknown
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063244623671, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 10710109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10654571, "index_size": 32577, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21893, "raw_key_size": 230530, "raw_average_key_size": 26, "raw_value_size": 10501308, "raw_average_value_size": 1201, "num_data_blocks": 1261, "num_entries": 8737, "num_filter_entries": 8737, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764063244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.623959) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 10710109 bytes
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.625439) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.5 rd, 140.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 9.4 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(21.3) write-amplify(10.6) OK, records in: 9270, records dropped: 533 output_compression: NoCompression
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.625461) EVENT_LOG_v1 {"time_micros": 1764063244625451, "job": 104, "event": "compaction_finished", "compaction_time_micros": 76491, "compaction_time_cpu_micros": 44611, "output_level": 6, "num_output_files": 1, "total_output_size": 10710109, "num_input_records": 9270, "num_output_records": 8737, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063244625768, "job": 104, "event": "table_file_deletion", "file_number": 169}
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063244627961, "job": 104, "event": "table_file_deletion", "file_number": 167}
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.547155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.628000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.628005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.628008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.628010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:34:04 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:34:04.628012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033308812756397733 of space, bias 1.0, pg target 0.0999264382691932 quantized to 32 (current 32)
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:34:04 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3433: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Nov 25 09:34:05 compute-0 ceph-mon[75015]: pgmap v3433: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Nov 25 09:34:06 compute-0 nova_compute[253538]: 2025-11-25 09:34:06.004 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:06 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3434: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Nov 25 09:34:08 compute-0 ceph-mon[75015]: pgmap v3434: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Nov 25 09:34:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3435: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.1 MiB/s wr, 11 op/s
Nov 25 09:34:09 compute-0 nova_compute[253538]: 2025-11-25 09:34:09.349 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:34:10 compute-0 nova_compute[253538]: 2025-11-25 09:34:10.085 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:34:10 compute-0 ceph-mon[75015]: pgmap v3435: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.1 MiB/s wr, 11 op/s
Nov 25 09:34:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3436: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 1.8 MiB/s wr, 10 op/s
Nov 25 09:34:11 compute-0 nova_compute[253538]: 2025-11-25 09:34:11.007 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:11 compute-0 nova_compute[253538]: 2025-11-25 09:34:11.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:34:11 compute-0 sshd-session[440021]: Invalid user opc from 182.253.79.194 port 64329
Nov 25 09:34:11 compute-0 podman[440024]: 2025-11-25 09:34:11.659389892 +0000 UTC m=+0.059868023 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 09:34:11 compute-0 podman[440023]: 2025-11-25 09:34:11.664390328 +0000 UTC m=+0.065006913 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 09:34:11 compute-0 sshd-session[440021]: Received disconnect from 182.253.79.194 port 64329:11: Bye Bye [preauth]
Nov 25 09:34:11 compute-0 sshd-session[440021]: Disconnected from invalid user opc 182.253.79.194 port 64329 [preauth]
Nov 25 09:34:12 compute-0 ceph-mon[75015]: pgmap v3436: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 1.8 MiB/s wr, 10 op/s
Nov 25 09:34:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3437: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 1.7 MiB/s wr, 9 op/s
Nov 25 09:34:14 compute-0 ceph-mon[75015]: pgmap v3437: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 1.7 MiB/s wr, 9 op/s
Nov 25 09:34:14 compute-0 nova_compute[253538]: 2025-11-25 09:34:14.351 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:34:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3438: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:16 compute-0 nova_compute[253538]: 2025-11-25 09:34:16.009 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:16 compute-0 ceph-mon[75015]: pgmap v3438: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3439: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:17 compute-0 nova_compute[253538]: 2025-11-25 09:34:17.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:34:17 compute-0 nova_compute[253538]: 2025-11-25 09:34:17.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:34:17 compute-0 nova_compute[253538]: 2025-11-25 09:34:17.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:34:17 compute-0 nova_compute[253538]: 2025-11-25 09:34:17.576 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:34:17 compute-0 nova_compute[253538]: 2025-11-25 09:34:17.577 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:34:17 compute-0 nova_compute[253538]: 2025-11-25 09:34:17.577 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:34:18 compute-0 ceph-mon[75015]: pgmap v3439: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:18 compute-0 nova_compute[253538]: 2025-11-25 09:34:18.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:34:18 compute-0 nova_compute[253538]: 2025-11-25 09:34:18.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:34:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3440: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:19 compute-0 nova_compute[253538]: 2025-11-25 09:34:19.351 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:34:20 compute-0 nova_compute[253538]: 2025-11-25 09:34:20.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:34:20 compute-0 ceph-mon[75015]: pgmap v3440: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:20 compute-0 podman[440059]: 2025-11-25 09:34:20.819105688 +0000 UTC m=+0.073303420 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:34:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3441: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:21 compute-0 nova_compute[253538]: 2025-11-25 09:34:21.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:21 compute-0 ceph-mon[75015]: pgmap v3441: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3442: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:34:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:34:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:34:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:34:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:34:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:34:24 compute-0 ceph-mon[75015]: pgmap v3442: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:24 compute-0 nova_compute[253538]: 2025-11-25 09:34:24.354 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:34:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3443: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:26 compute-0 nova_compute[253538]: 2025-11-25 09:34:26.067 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:26 compute-0 ceph-mon[75015]: pgmap v3443: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3444: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:27 compute-0 nova_compute[253538]: 2025-11-25 09:34:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:34:28 compute-0 ceph-mon[75015]: pgmap v3444: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3445: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:34:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/700008148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:34:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:34:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/700008148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:34:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/700008148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:34:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/700008148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:34:29 compute-0 nova_compute[253538]: 2025-11-25 09:34:29.355 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:34:30 compute-0 ceph-mon[75015]: pgmap v3445: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3446: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:31 compute-0 nova_compute[253538]: 2025-11-25 09:34:31.069 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:32 compute-0 ceph-mon[75015]: pgmap v3446: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3447: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:33 compute-0 ceph-mon[75015]: pgmap v3447: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:33 compute-0 nova_compute[253538]: 2025-11-25 09:34:33.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:34:33 compute-0 nova_compute[253538]: 2025-11-25 09:34:33.593 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:34:33 compute-0 nova_compute[253538]: 2025-11-25 09:34:33.594 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:34:33 compute-0 nova_compute[253538]: 2025-11-25 09:34:33.594 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:34:33 compute-0 nova_compute[253538]: 2025-11-25 09:34:33.594 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:34:33 compute-0 nova_compute[253538]: 2025-11-25 09:34:33.594 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:34:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:34:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1684591325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:34:34 compute-0 nova_compute[253538]: 2025-11-25 09:34:34.069 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:34:34 compute-0 nova_compute[253538]: 2025-11-25 09:34:34.235 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:34:34 compute-0 nova_compute[253538]: 2025-11-25 09:34:34.236 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3628MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:34:34 compute-0 nova_compute[253538]: 2025-11-25 09:34:34.237 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:34:34 compute-0 nova_compute[253538]: 2025-11-25 09:34:34.237 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:34:34 compute-0 nova_compute[253538]: 2025-11-25 09:34:34.328 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:34:34 compute-0 nova_compute[253538]: 2025-11-25 09:34:34.329 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:34:34 compute-0 nova_compute[253538]: 2025-11-25 09:34:34.348 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:34:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1684591325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:34:34 compute-0 nova_compute[253538]: 2025-11-25 09:34:34.385 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:34:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:34:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735311605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:34:34 compute-0 nova_compute[253538]: 2025-11-25 09:34:34.845 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:34:34 compute-0 nova_compute[253538]: 2025-11-25 09:34:34.850 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:34:34 compute-0 nova_compute[253538]: 2025-11-25 09:34:34.862 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:34:34 compute-0 nova_compute[253538]: 2025-11-25 09:34:34.864 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:34:34 compute-0 nova_compute[253538]: 2025-11-25 09:34:34.864 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:34:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3448: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/735311605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:34:35 compute-0 ceph-mon[75015]: pgmap v3448: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:36 compute-0 nova_compute[253538]: 2025-11-25 09:34:36.071 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3449: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:38 compute-0 ceph-mon[75015]: pgmap v3449: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3450: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:39 compute-0 nova_compute[253538]: 2025-11-25 09:34:39.387 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:34:40 compute-0 ceph-mon[75015]: pgmap v3450: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3451: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:41 compute-0 nova_compute[253538]: 2025-11-25 09:34:41.074 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:34:41.122 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:34:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:34:41.122 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:34:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:34:41.122 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:34:41 compute-0 podman[440129]: 2025-11-25 09:34:41.804123292 +0000 UTC m=+0.053857899 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 09:34:41 compute-0 podman[440130]: 2025-11-25 09:34:41.811587036 +0000 UTC m=+0.063605526 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 25 09:34:42 compute-0 ceph-mon[75015]: pgmap v3451: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3452: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:44 compute-0 ceph-mon[75015]: pgmap v3452: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:44 compute-0 nova_compute[253538]: 2025-11-25 09:34:44.389 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:34:44 compute-0 sshd-session[440165]: Invalid user nominatim from 47.252.72.9 port 47612
Nov 25 09:34:44 compute-0 sshd-session[440165]: Received disconnect from 47.252.72.9 port 47612:11: Bye Bye [preauth]
Nov 25 09:34:44 compute-0 sshd-session[440165]: Disconnected from invalid user nominatim 47.252.72.9 port 47612 [preauth]
Nov 25 09:34:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3453: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:46 compute-0 nova_compute[253538]: 2025-11-25 09:34:46.075 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:46 compute-0 ceph-mon[75015]: pgmap v3453: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:46 compute-0 sshd-session[440167]: Invalid user victor from 146.190.154.85 port 44670
Nov 25 09:34:46 compute-0 sshd-session[440167]: Received disconnect from 146.190.154.85 port 44670:11: Bye Bye [preauth]
Nov 25 09:34:46 compute-0 sshd-session[440167]: Disconnected from invalid user victor 146.190.154.85 port 44670 [preauth]
Nov 25 09:34:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3454: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:48 compute-0 ceph-mon[75015]: pgmap v3454: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3455: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:49 compute-0 ceph-mon[75015]: pgmap v3455: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:49 compute-0 nova_compute[253538]: 2025-11-25 09:34:49.394 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:34:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3456: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:51 compute-0 nova_compute[253538]: 2025-11-25 09:34:51.076 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:51 compute-0 podman[440170]: 2025-11-25 09:34:51.59641442 +0000 UTC m=+0.099448392 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:34:52 compute-0 ceph-mon[75015]: pgmap v3456: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3457: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:34:53
Nov 25 09:34:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:34:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:34:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'vms', 'default.rgw.control']
Nov 25 09:34:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:34:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:34:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:34:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:34:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:34:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:34:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:34:54 compute-0 sudo[440197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:34:54 compute-0 sudo[440197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:54 compute-0 sudo[440197]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:54 compute-0 sudo[440222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:54 compute-0 sudo[440222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:54 compute-0 sudo[440222]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:54 compute-0 ceph-mon[75015]: pgmap v3457: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:54 compute-0 sudo[440247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:34:54 compute-0 sudo[440247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:54 compute-0 sudo[440247]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:54 compute-0 sudo[440272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:34:54 compute-0 sudo[440272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:34:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:34:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:34:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:34:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:34:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:34:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:34:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:34:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:34:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:34:54 compute-0 nova_compute[253538]: 2025-11-25 09:34:54.396 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:34:54 compute-0 sudo[440272]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 09:34:54 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:34:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:34:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:34:54 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:34:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:34:54 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:34:54 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 747128f5-fa36-4ce0-8242-9c4c2e1189de does not exist
Nov 25 09:34:54 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e868dbc3-1eaa-44bc-b913-1d9adfbc5af3 does not exist
Nov 25 09:34:54 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 092675a0-50c9-443a-bf25-cc89012bd15e does not exist
Nov 25 09:34:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:34:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:34:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:34:54 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:34:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:34:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:54 compute-0 sudo[440328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:34:54 compute-0 sudo[440328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:54 compute-0 sudo[440328]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:54 compute-0 sudo[440353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:54 compute-0 sudo[440353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:54 compute-0 sudo[440353]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3458: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:55 compute-0 sudo[440378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:34:55 compute-0 sudo[440378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:55 compute-0 sudo[440378]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:55 compute-0 sudo[440403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:34:55 compute-0 sudo[440403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:34:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:34:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:34:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:34:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:34:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:55 compute-0 podman[440468]: 2025-11-25 09:34:55.530217384 +0000 UTC m=+0.079419025 container create f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:34:55 compute-0 podman[440468]: 2025-11-25 09:34:55.476662405 +0000 UTC m=+0.025864066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:34:55 compute-0 systemd[1]: Started libpod-conmon-f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5.scope.
Nov 25 09:34:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:55 compute-0 podman[440468]: 2025-11-25 09:34:55.794864439 +0000 UTC m=+0.344066100 container init f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 09:34:55 compute-0 podman[440468]: 2025-11-25 09:34:55.801679705 +0000 UTC m=+0.350881346 container start f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:55 compute-0 recursing_panini[440485]: 167 167
Nov 25 09:34:55 compute-0 systemd[1]: libpod-f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5.scope: Deactivated successfully.
Nov 25 09:34:55 compute-0 podman[440468]: 2025-11-25 09:34:55.820635262 +0000 UTC m=+0.369836903 container attach f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:34:55 compute-0 podman[440468]: 2025-11-25 09:34:55.821700601 +0000 UTC m=+0.370902242 container died f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-27dc0136c57dcd0bad1de871ecdbf79c1710601fb30600e9ac7afbdf74078a8d-merged.mount: Deactivated successfully.
Nov 25 09:34:56 compute-0 nova_compute[253538]: 2025-11-25 09:34:56.078 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:56 compute-0 podman[440468]: 2025-11-25 09:34:56.142373303 +0000 UTC m=+0.691574964 container remove f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 09:34:56 compute-0 systemd[1]: libpod-conmon-f97fa3651e4edd53d159c7f8c5e91c067c2bf0235f50a690b701d3aaf6fa70b5.scope: Deactivated successfully.
Nov 25 09:34:56 compute-0 ceph-mon[75015]: pgmap v3458: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:56 compute-0 podman[440508]: 2025-11-25 09:34:56.486942837 +0000 UTC m=+0.113101125 container create 8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 09:34:56 compute-0 podman[440508]: 2025-11-25 09:34:56.412492727 +0000 UTC m=+0.038651015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:34:56 compute-0 systemd[1]: Started libpod-conmon-8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640.scope.
Nov 25 09:34:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12766fb881eee2a01d7a41723188fb932a7ba63d000f3f60623f0ee7d1fd2dfa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12766fb881eee2a01d7a41723188fb932a7ba63d000f3f60623f0ee7d1fd2dfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12766fb881eee2a01d7a41723188fb932a7ba63d000f3f60623f0ee7d1fd2dfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12766fb881eee2a01d7a41723188fb932a7ba63d000f3f60623f0ee7d1fd2dfa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12766fb881eee2a01d7a41723188fb932a7ba63d000f3f60623f0ee7d1fd2dfa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:56 compute-0 podman[440508]: 2025-11-25 09:34:56.7200031 +0000 UTC m=+0.346161378 container init 8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 09:34:56 compute-0 podman[440508]: 2025-11-25 09:34:56.734249799 +0000 UTC m=+0.360408057 container start 8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bouman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 09:34:56 compute-0 podman[440508]: 2025-11-25 09:34:56.786459562 +0000 UTC m=+0.412617840 container attach 8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 09:34:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3459: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:57 compute-0 ceph-mon[75015]: pgmap v3459: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:57 compute-0 angry_bouman[440524]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:34:57 compute-0 angry_bouman[440524]: --> relative data size: 1.0
Nov 25 09:34:57 compute-0 angry_bouman[440524]: --> All data devices are unavailable
Nov 25 09:34:57 compute-0 systemd[1]: libpod-8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640.scope: Deactivated successfully.
Nov 25 09:34:57 compute-0 podman[440508]: 2025-11-25 09:34:57.825777266 +0000 UTC m=+1.451935524 container died 8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:57 compute-0 systemd[1]: libpod-8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640.scope: Consumed 1.047s CPU time.
Nov 25 09:34:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-12766fb881eee2a01d7a41723188fb932a7ba63d000f3f60623f0ee7d1fd2dfa-merged.mount: Deactivated successfully.
Nov 25 09:34:58 compute-0 podman[440508]: 2025-11-25 09:34:58.49347906 +0000 UTC m=+2.119637368 container remove 8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:34:58 compute-0 systemd[1]: libpod-conmon-8ac9b6d184fff644cc9e186b5fff1ffd37766e8140c122df85782760aff32640.scope: Deactivated successfully.
Nov 25 09:34:58 compute-0 sudo[440403]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:58 compute-0 sudo[440567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:34:58 compute-0 sudo[440567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:58 compute-0 sudo[440567]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:58 compute-0 sudo[440592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:58 compute-0 sudo[440592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:58 compute-0 sudo[440592]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:58 compute-0 sudo[440617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:34:58 compute-0 sudo[440617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:58 compute-0 sudo[440617]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:58 compute-0 sudo[440642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:34:58 compute-0 sudo[440642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3460: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:59 compute-0 podman[440707]: 2025-11-25 09:34:59.149893255 +0000 UTC m=+0.022715080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:34:59 compute-0 podman[440707]: 2025-11-25 09:34:59.328470674 +0000 UTC m=+0.201292469 container create 22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:34:59 compute-0 nova_compute[253538]: 2025-11-25 09:34:59.398 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:34:59 compute-0 systemd[1]: Started libpod-conmon-22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1.scope.
Nov 25 09:34:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:34:59 compute-0 ceph-mon[75015]: pgmap v3460: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:34:59 compute-0 podman[440707]: 2025-11-25 09:34:59.664790172 +0000 UTC m=+0.537612047 container init 22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:59 compute-0 podman[440707]: 2025-11-25 09:34:59.671406102 +0000 UTC m=+0.544227937 container start 22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 09:34:59 compute-0 silly_germain[440723]: 167 167
Nov 25 09:34:59 compute-0 systemd[1]: libpod-22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1.scope: Deactivated successfully.
Nov 25 09:34:59 compute-0 podman[440707]: 2025-11-25 09:34:59.728448118 +0000 UTC m=+0.601270013 container attach 22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:59 compute-0 podman[440707]: 2025-11-25 09:34:59.72927451 +0000 UTC m=+0.602096345 container died 22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:34:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c7386f3b012f623999a6b15a1af6934d50f27272669df9e31d9c38c9bf0f27a-merged.mount: Deactivated successfully.
Nov 25 09:35:00 compute-0 podman[440707]: 2025-11-25 09:35:00.328890857 +0000 UTC m=+1.201712662 container remove 22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:35:00 compute-0 systemd[1]: libpod-conmon-22cfcdcbcf4d33442bb977376b7307bf5e0453a8740a1114465eeb94074071c1.scope: Deactivated successfully.
Nov 25 09:35:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Nov 25 09:35:00 compute-0 podman[440747]: 2025-11-25 09:35:00.537143724 +0000 UTC m=+0.028777225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:35:00 compute-0 podman[440747]: 2025-11-25 09:35:00.715766724 +0000 UTC m=+0.207400175 container create a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 09:35:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Nov 25 09:35:00 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Nov 25 09:35:00 compute-0 systemd[1]: Started libpod-conmon-a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804.scope.
Nov 25 09:35:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa706f9172d4d4ea86a3bcc1df9e9c5a7dd0cf4fc872ad15cff48bce67c48e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa706f9172d4d4ea86a3bcc1df9e9c5a7dd0cf4fc872ad15cff48bce67c48e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa706f9172d4d4ea86a3bcc1df9e9c5a7dd0cf4fc872ad15cff48bce67c48e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa706f9172d4d4ea86a3bcc1df9e9c5a7dd0cf4fc872ad15cff48bce67c48e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:00 compute-0 podman[440747]: 2025-11-25 09:35:00.995899732 +0000 UTC m=+0.487533213 container init a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:35:01 compute-0 podman[440747]: 2025-11-25 09:35:01.007420786 +0000 UTC m=+0.499054247 container start a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:35:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3462: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 op/s
Nov 25 09:35:01 compute-0 nova_compute[253538]: 2025-11-25 09:35:01.083 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:01 compute-0 podman[440747]: 2025-11-25 09:35:01.1098903 +0000 UTC m=+0.601523791 container attach a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:35:01 compute-0 sshd-session[440169]: error: kex_exchange_identification: read: Connection timed out
Nov 25 09:35:01 compute-0 sshd-session[440169]: banner exchange: Connection from 14.103.45.20 port 54726: Connection timed out
Nov 25 09:35:01 compute-0 zen_kalam[440763]: {
Nov 25 09:35:01 compute-0 zen_kalam[440763]:     "0": [
Nov 25 09:35:01 compute-0 zen_kalam[440763]:         {
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "devices": [
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "/dev/loop3"
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             ],
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "lv_name": "ceph_lv0",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "lv_size": "21470642176",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "name": "ceph_lv0",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "tags": {
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.cluster_name": "ceph",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.crush_device_class": "",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.encrypted": "0",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.osd_id": "0",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.type": "block",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.vdo": "0"
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             },
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "type": "block",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "vg_name": "ceph_vg0"
Nov 25 09:35:01 compute-0 zen_kalam[440763]:         }
Nov 25 09:35:01 compute-0 zen_kalam[440763]:     ],
Nov 25 09:35:01 compute-0 zen_kalam[440763]:     "1": [
Nov 25 09:35:01 compute-0 zen_kalam[440763]:         {
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "devices": [
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "/dev/loop4"
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             ],
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "lv_name": "ceph_lv1",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "lv_size": "21470642176",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "name": "ceph_lv1",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "tags": {
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.cluster_name": "ceph",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.crush_device_class": "",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.encrypted": "0",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.osd_id": "1",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.type": "block",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.vdo": "0"
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             },
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "type": "block",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "vg_name": "ceph_vg1"
Nov 25 09:35:01 compute-0 zen_kalam[440763]:         }
Nov 25 09:35:01 compute-0 zen_kalam[440763]:     ],
Nov 25 09:35:01 compute-0 zen_kalam[440763]:     "2": [
Nov 25 09:35:01 compute-0 zen_kalam[440763]:         {
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "devices": [
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "/dev/loop5"
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             ],
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "lv_name": "ceph_lv2",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "lv_size": "21470642176",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "name": "ceph_lv2",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "tags": {
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.cluster_name": "ceph",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.crush_device_class": "",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.encrypted": "0",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.osd_id": "2",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.type": "block",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:                 "ceph.vdo": "0"
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             },
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "type": "block",
Nov 25 09:35:01 compute-0 zen_kalam[440763]:             "vg_name": "ceph_vg2"
Nov 25 09:35:01 compute-0 zen_kalam[440763]:         }
Nov 25 09:35:01 compute-0 zen_kalam[440763]:     ]
Nov 25 09:35:01 compute-0 zen_kalam[440763]: }
Nov 25 09:35:01 compute-0 systemd[1]: libpod-a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804.scope: Deactivated successfully.
Nov 25 09:35:01 compute-0 podman[440747]: 2025-11-25 09:35:01.82116218 +0000 UTC m=+1.312795641 container died a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:35:01 compute-0 ceph-mon[75015]: osdmap e289: 3 total, 3 up, 3 in
Nov 25 09:35:01 compute-0 ceph-mon[75015]: pgmap v3462: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 op/s
Nov 25 09:35:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9aa706f9172d4d4ea86a3bcc1df9e9c5a7dd0cf4fc872ad15cff48bce67c48e5-merged.mount: Deactivated successfully.
Nov 25 09:35:02 compute-0 podman[440747]: 2025-11-25 09:35:02.652823914 +0000 UTC m=+2.144457375 container remove a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 09:35:02 compute-0 systemd[1]: libpod-conmon-a9e2ba4b18ba17e237243029cf2adf7b84e43c6758f588ee50f06378e09e7804.scope: Deactivated successfully.
Nov 25 09:35:02 compute-0 sudo[440642]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:02 compute-0 sudo[440783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:35:02 compute-0 sudo[440783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:02 compute-0 sudo[440783]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:02 compute-0 sudo[440808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:02 compute-0 sudo[440808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:02 compute-0 sudo[440808]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:02 compute-0 sudo[440833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:35:02 compute-0 sudo[440833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:02 compute-0 sudo[440833]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:02 compute-0 sudo[440858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:35:02 compute-0 sudo[440858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3463: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 716 B/s wr, 12 op/s
Nov 25 09:35:03 compute-0 podman[440924]: 2025-11-25 09:35:03.373884772 +0000 UTC m=+0.085814871 container create ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:35:03 compute-0 podman[440924]: 2025-11-25 09:35:03.310567055 +0000 UTC m=+0.022497174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:35:03 compute-0 systemd[1]: Started libpod-conmon-ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7.scope.
Nov 25 09:35:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:03 compute-0 podman[440924]: 2025-11-25 09:35:03.702015437 +0000 UTC m=+0.413945556 container init ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 09:35:03 compute-0 podman[440924]: 2025-11-25 09:35:03.713268714 +0000 UTC m=+0.425198823 container start ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 09:35:03 compute-0 cranky_elbakyan[440940]: 167 167
Nov 25 09:35:03 compute-0 systemd[1]: libpod-ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7.scope: Deactivated successfully.
Nov 25 09:35:03 compute-0 podman[440924]: 2025-11-25 09:35:03.777524366 +0000 UTC m=+0.489454475 container attach ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:35:03 compute-0 podman[440924]: 2025-11-25 09:35:03.778387829 +0000 UTC m=+0.490317978 container died ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 09:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1a43126f221a2f7499cbc765b41149f8152d65e855c1adcb2916969d3122be8-merged.mount: Deactivated successfully.
Nov 25 09:35:04 compute-0 podman[440924]: 2025-11-25 09:35:04.195536031 +0000 UTC m=+0.907466120 container remove ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:35:04 compute-0 ceph-mon[75015]: pgmap v3463: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 716 B/s wr, 12 op/s
Nov 25 09:35:04 compute-0 systemd[1]: libpod-conmon-ec95d8f147d339636b087546256e36aa0441c10183d9f724cd8e89ccf11e2fb7.scope: Deactivated successfully.
Nov 25 09:35:04 compute-0 nova_compute[253538]: 2025-11-25 09:35:04.400 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:04 compute-0 podman[440965]: 2025-11-25 09:35:04.410198164 +0000 UTC m=+0.055021582 container create f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:35:04 compute-0 podman[440965]: 2025-11-25 09:35:04.377105651 +0000 UTC m=+0.021929089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:35:04 compute-0 systemd[1]: Started libpod-conmon-f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9.scope.
Nov 25 09:35:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:35:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b62095ee2e7f37470cffe814d31fa4121921b05401a941a3eecde5bff0916f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b62095ee2e7f37470cffe814d31fa4121921b05401a941a3eecde5bff0916f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b62095ee2e7f37470cffe814d31fa4121921b05401a941a3eecde5bff0916f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b62095ee2e7f37470cffe814d31fa4121921b05401a941a3eecde5bff0916f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:04 compute-0 podman[440965]: 2025-11-25 09:35:04.578509112 +0000 UTC m=+0.223332530 container init f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_murdock, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 09:35:04 compute-0 podman[440965]: 2025-11-25 09:35:04.588413731 +0000 UTC m=+0.233237149 container start f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_murdock, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:35:04 compute-0 podman[440965]: 2025-11-25 09:35:04.60961135 +0000 UTC m=+0.254434778 container attach f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_murdock, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0003330245368561568 of space, bias 1.0, pg target 0.09990736105684704 quantized to 32 (current 32)
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:35:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3464: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1023 B/s wr, 14 op/s
Nov 25 09:35:05 compute-0 blissful_murdock[440982]: {
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:         "osd_id": 1,
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:         "type": "bluestore"
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:     },
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:         "osd_id": 2,
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:         "type": "bluestore"
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:     },
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:         "osd_id": 0,
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:         "type": "bluestore"
Nov 25 09:35:05 compute-0 blissful_murdock[440982]:     }
Nov 25 09:35:05 compute-0 blissful_murdock[440982]: }
Nov 25 09:35:05 compute-0 systemd[1]: libpod-f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9.scope: Deactivated successfully.
Nov 25 09:35:05 compute-0 systemd[1]: libpod-f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9.scope: Consumed 1.056s CPU time.
Nov 25 09:35:05 compute-0 podman[440965]: 2025-11-25 09:35:05.641022698 +0000 UTC m=+1.285846156 container died f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_murdock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:35:05 compute-0 ceph-mon[75015]: pgmap v3464: 321 pgs: 321 active+clean; 21 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1023 B/s wr, 14 op/s
Nov 25 09:35:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b62095ee2e7f37470cffe814d31fa4121921b05401a941a3eecde5bff0916f7-merged.mount: Deactivated successfully.
Nov 25 09:35:06 compute-0 podman[440965]: 2025-11-25 09:35:06.004519708 +0000 UTC m=+1.649343166 container remove f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_murdock, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:35:06 compute-0 systemd[1]: libpod-conmon-f107d1231d457835f69cfeede28861a9026462576a0ad57a59a513b98533beb9.scope: Deactivated successfully.
Nov 25 09:35:06 compute-0 sudo[440858]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:35:06 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:35:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:35:06 compute-0 nova_compute[253538]: 2025-11-25 09:35:06.084 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:06 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:35:06 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d6d41010-25fb-4b3b-83f5-7680b2d81c16 does not exist
Nov 25 09:35:06 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 239ae6b9-ed4d-4c6b-84ef-be68f6af8a1a does not exist
Nov 25 09:35:06 compute-0 sudo[441027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:35:06 compute-0 sudo[441027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:06 compute-0 sudo[441027]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:06 compute-0 sudo[441052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:35:06 compute-0 sudo[441052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:06 compute-0 sudo[441052]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3465: 321 pgs: 321 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 25 09:35:07 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:35:07 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:35:08 compute-0 ceph-mon[75015]: pgmap v3465: 321 pgs: 321 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 25 09:35:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3466: 321 pgs: 321 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 25 09:35:09 compute-0 nova_compute[253538]: 2025-11-25 09:35:09.507 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:35:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Nov 25 09:35:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Nov 25 09:35:09 compute-0 ceph-mon[75015]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Nov 25 09:35:09 compute-0 nova_compute[253538]: 2025-11-25 09:35:09.864 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:35:10 compute-0 ceph-mon[75015]: pgmap v3466: 321 pgs: 321 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 25 09:35:10 compute-0 ceph-mon[75015]: osdmap e290: 3 total, 3 up, 3 in
Nov 25 09:35:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3468: 321 pgs: 321 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 09:35:11 compute-0 nova_compute[253538]: 2025-11-25 09:35:11.085 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:11 compute-0 ceph-mon[75015]: pgmap v3468: 321 pgs: 321 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 09:35:12 compute-0 nova_compute[253538]: 2025-11-25 09:35:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:35:12 compute-0 podman[441078]: 2025-11-25 09:35:12.832070082 +0000 UTC m=+0.079558690 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:35:12 compute-0 podman[441077]: 2025-11-25 09:35:12.83237976 +0000 UTC m=+0.078250294 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:35:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3469: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 716 B/s wr, 12 op/s
Nov 25 09:35:14 compute-0 ceph-mon[75015]: pgmap v3469: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 716 B/s wr, 12 op/s
Nov 25 09:35:14 compute-0 nova_compute[253538]: 2025-11-25 09:35:14.543 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:35:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3470: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 409 B/s wr, 10 op/s
Nov 25 09:35:15 compute-0 ceph-mon[75015]: pgmap v3470: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 409 B/s wr, 10 op/s
Nov 25 09:35:16 compute-0 nova_compute[253538]: 2025-11-25 09:35:16.088 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3471: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:17 compute-0 nova_compute[253538]: 2025-11-25 09:35:17.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:35:17 compute-0 nova_compute[253538]: 2025-11-25 09:35:17.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:35:17 compute-0 nova_compute[253538]: 2025-11-25 09:35:17.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:35:17 compute-0 nova_compute[253538]: 2025-11-25 09:35:17.589 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:35:18 compute-0 ceph-mon[75015]: pgmap v3471: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3472: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:19 compute-0 ceph-mon[75015]: pgmap v3472: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:19 compute-0 nova_compute[253538]: 2025-11-25 09:35:19.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:19 compute-0 nova_compute[253538]: 2025-11-25 09:35:19.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:35:19 compute-0 nova_compute[253538]: 2025-11-25 09:35:19.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:35:19 compute-0 nova_compute[253538]: 2025-11-25 09:35:19.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:35:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:35:20 compute-0 nova_compute[253538]: 2025-11-25 09:35:20.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:35:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3473: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:21 compute-0 nova_compute[253538]: 2025-11-25 09:35:21.091 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:21 compute-0 podman[441117]: 2025-11-25 09:35:21.93661974 +0000 UTC m=+0.175244368 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 09:35:22 compute-0 ceph-mon[75015]: pgmap v3473: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:22 compute-0 nova_compute[253538]: 2025-11-25 09:35:22.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:35:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3474: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:35:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:35:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:35:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:35:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:35:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:35:23 compute-0 ceph-mon[75015]: pgmap v3474: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:24 compute-0 nova_compute[253538]: 2025-11-25 09:35:24.550 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:35:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3475: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:26 compute-0 nova_compute[253538]: 2025-11-25 09:35:26.092 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:26 compute-0 ceph-mon[75015]: pgmap v3475: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3476: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:28 compute-0 ceph-mon[75015]: pgmap v3476: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:28 compute-0 nova_compute[253538]: 2025-11-25 09:35:28.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:35:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3477: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:35:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1150341661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:35:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:35:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1150341661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:35:29 compute-0 ceph-mon[75015]: pgmap v3477: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1150341661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:35:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1150341661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:35:29 compute-0 nova_compute[253538]: 2025-11-25 09:35:29.551 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:35:30 compute-0 sshd-session[441143]: Received disconnect from 165.227.175.225 port 43340:11: Bye Bye [preauth]
Nov 25 09:35:30 compute-0 sshd-session[441143]: Disconnected from authenticating user root 165.227.175.225 port 43340 [preauth]
Nov 25 09:35:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3478: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:31 compute-0 nova_compute[253538]: 2025-11-25 09:35:31.093 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:32 compute-0 ceph-mon[75015]: pgmap v3478: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3479: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:33 compute-0 nova_compute[253538]: 2025-11-25 09:35:33.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:35:33 compute-0 ceph-mon[75015]: pgmap v3479: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:34 compute-0 nova_compute[253538]: 2025-11-25 09:35:34.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:35:34 compute-0 nova_compute[253538]: 2025-11-25 09:35:34.554 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:34 compute-0 nova_compute[253538]: 2025-11-25 09:35:34.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:35:34 compute-0 nova_compute[253538]: 2025-11-25 09:35:34.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:35:34 compute-0 nova_compute[253538]: 2025-11-25 09:35:34.579 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:35:34 compute-0 nova_compute[253538]: 2025-11-25 09:35:34.580 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:35:34 compute-0 nova_compute[253538]: 2025-11-25 09:35:34.580 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:35:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:35:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:35:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2227355941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:35:35 compute-0 nova_compute[253538]: 2025-11-25 09:35:35.013 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:35:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3480: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:35 compute-0 nova_compute[253538]: 2025-11-25 09:35:35.165 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:35:35 compute-0 nova_compute[253538]: 2025-11-25 09:35:35.166 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3617MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:35:35 compute-0 nova_compute[253538]: 2025-11-25 09:35:35.166 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:35:35 compute-0 nova_compute[253538]: 2025-11-25 09:35:35.166 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:35:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2227355941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:35:35 compute-0 nova_compute[253538]: 2025-11-25 09:35:35.240 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:35:35 compute-0 nova_compute[253538]: 2025-11-25 09:35:35.241 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:35:35 compute-0 nova_compute[253538]: 2025-11-25 09:35:35.275 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:35:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:35:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1462335001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:35:35 compute-0 nova_compute[253538]: 2025-11-25 09:35:35.716 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:35:35 compute-0 nova_compute[253538]: 2025-11-25 09:35:35.722 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:35:35 compute-0 nova_compute[253538]: 2025-11-25 09:35:35.737 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:35:35 compute-0 nova_compute[253538]: 2025-11-25 09:35:35.739 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:35:35 compute-0 nova_compute[253538]: 2025-11-25 09:35:35.739 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:35:36 compute-0 nova_compute[253538]: 2025-11-25 09:35:36.095 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:36 compute-0 ceph-mon[75015]: pgmap v3480: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1462335001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:35:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3481: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:37 compute-0 ceph-mon[75015]: pgmap v3481: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3482: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:39 compute-0 nova_compute[253538]: 2025-11-25 09:35:39.555 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:35:40 compute-0 ceph-mon[75015]: pgmap v3482: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3483: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:41 compute-0 nova_compute[253538]: 2025-11-25 09:35:41.096 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:35:41.122 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:35:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:35:41.123 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:35:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:35:41.123 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:35:41 compute-0 sshd-session[441189]: Invalid user maarch from 182.253.79.194 port 26576
Nov 25 09:35:41 compute-0 ceph-mon[75015]: pgmap v3483: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:41 compute-0 sshd-session[441189]: Received disconnect from 182.253.79.194 port 26576:11: Bye Bye [preauth]
Nov 25 09:35:41 compute-0 sshd-session[441189]: Disconnected from invalid user maarch 182.253.79.194 port 26576 [preauth]
Nov 25 09:35:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3484: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:43 compute-0 podman[441192]: 2025-11-25 09:35:43.808145237 +0000 UTC m=+0.052615056 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 09:35:43 compute-0 podman[441191]: 2025-11-25 09:35:43.809298118 +0000 UTC m=+0.059737009 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 09:35:44 compute-0 ceph-mon[75015]: pgmap v3484: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:44 compute-0 nova_compute[253538]: 2025-11-25 09:35:44.574 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:35:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3485: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:46 compute-0 nova_compute[253538]: 2025-11-25 09:35:46.098 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:46 compute-0 ceph-mon[75015]: pgmap v3485: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3486: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:47 compute-0 ceph-mon[75015]: pgmap v3486: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3487: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:49 compute-0 ceph-mon[75015]: pgmap v3487: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:49 compute-0 nova_compute[253538]: 2025-11-25 09:35:49.574 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:35:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3488: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:51 compute-0 nova_compute[253538]: 2025-11-25 09:35:51.099 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:51 compute-0 sshd-session[441230]: Invalid user devops from 146.190.154.85 port 53538
Nov 25 09:35:51 compute-0 sshd-session[441230]: Received disconnect from 146.190.154.85 port 53538:11: Bye Bye [preauth]
Nov 25 09:35:51 compute-0 sshd-session[441230]: Disconnected from invalid user devops 146.190.154.85 port 53538 [preauth]
Nov 25 09:35:52 compute-0 ceph-mon[75015]: pgmap v3488: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:52 compute-0 podman[441232]: 2025-11-25 09:35:52.848566557 +0000 UTC m=+0.098128187 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 09:35:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3489: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:35:53
Nov 25 09:35:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:35:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:35:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', '.mgr']
Nov 25 09:35:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:35:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:35:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:35:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:35:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:35:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:35:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:35:54 compute-0 ceph-mon[75015]: pgmap v3489: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:35:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:35:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:35:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:35:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:35:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:35:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:35:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:35:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:35:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:35:54 compute-0 nova_compute[253538]: 2025-11-25 09:35:54.576 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:35:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3490: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:55 compute-0 ceph-mon[75015]: pgmap v3490: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:56 compute-0 nova_compute[253538]: 2025-11-25 09:35:56.133 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:35:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3491: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:58 compute-0 ceph-mon[75015]: pgmap v3491: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3492: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:35:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:35:59 compute-0 nova_compute[253538]: 2025-11-25 09:35:59.619 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:00 compute-0 ceph-mon[75015]: pgmap v3492: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3493: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:01 compute-0 nova_compute[253538]: 2025-11-25 09:36:01.137 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:01 compute-0 ceph-mon[75015]: pgmap v3493: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3494: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:03 compute-0 ceph-mon[75015]: pgmap v3494: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:36:04 compute-0 nova_compute[253538]: 2025-11-25 09:36:04.620 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:36:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3495: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:05 compute-0 ceph-mon[75015]: pgmap v3495: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:06 compute-0 nova_compute[253538]: 2025-11-25 09:36:06.140 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:06 compute-0 sudo[441261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:36:06 compute-0 sudo[441261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:06 compute-0 sudo[441261]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:06 compute-0 sudo[441286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:36:06 compute-0 sudo[441286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:06 compute-0 sudo[441286]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:06 compute-0 sudo[441311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:36:06 compute-0 sudo[441311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:06 compute-0 sudo[441311]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:06 compute-0 sudo[441336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:36:06 compute-0 sudo[441336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:07 compute-0 sudo[441336]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3496: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:36:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:36:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:36:07 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:36:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:36:07 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:36:07 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev a5a17609-87e6-487a-aa16-e36f9a25d6c8 does not exist
Nov 25 09:36:07 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 2f483500-c670-4737-8681-5ee12618578a does not exist
Nov 25 09:36:07 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d5ecfb5b-27e6-499a-b696-0978b26a6d1a does not exist
Nov 25 09:36:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:36:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:36:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:36:07 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:36:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:36:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:36:07 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:36:07 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:36:07 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:36:07 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:36:07 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:36:07 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:36:07 compute-0 sudo[441392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:36:07 compute-0 sudo[441392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:07 compute-0 sudo[441392]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:07 compute-0 sudo[441417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:36:07 compute-0 sudo[441417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:07 compute-0 sudo[441417]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:07 compute-0 sudo[441442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:36:07 compute-0 sudo[441442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:07 compute-0 sudo[441442]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:07 compute-0 sudo[441467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:36:07 compute-0 sudo[441467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:07 compute-0 podman[441533]: 2025-11-25 09:36:07.638933064 +0000 UTC m=+0.043266220 container create 7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 09:36:07 compute-0 systemd[1]: Started libpod-conmon-7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b.scope.
Nov 25 09:36:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:36:07 compute-0 podman[441533]: 2025-11-25 09:36:07.619126784 +0000 UTC m=+0.023459960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:36:07 compute-0 podman[441533]: 2025-11-25 09:36:07.727426557 +0000 UTC m=+0.131759733 container init 7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 09:36:07 compute-0 podman[441533]: 2025-11-25 09:36:07.733672727 +0000 UTC m=+0.138005913 container start 7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:36:07 compute-0 podman[441533]: 2025-11-25 09:36:07.737457451 +0000 UTC m=+0.141790637 container attach 7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:36:07 compute-0 competent_nash[441550]: 167 167
Nov 25 09:36:07 compute-0 systemd[1]: libpod-7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b.scope: Deactivated successfully.
Nov 25 09:36:07 compute-0 conmon[441550]: conmon 7c261f71e1bb5be791a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b.scope/container/memory.events
Nov 25 09:36:07 compute-0 podman[441533]: 2025-11-25 09:36:07.743515916 +0000 UTC m=+0.147849072 container died 7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 25 09:36:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-07d8a8e4fee59e9a7375ba67a66962696dde3e6514a1230c3a19821749541491-merged.mount: Deactivated successfully.
Nov 25 09:36:07 compute-0 podman[441533]: 2025-11-25 09:36:07.815502738 +0000 UTC m=+0.219835894 container remove 7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 25 09:36:07 compute-0 systemd[1]: libpod-conmon-7c261f71e1bb5be791a22908ecf9322a4ed6a9b3f060537530fb3814e6ccc75b.scope: Deactivated successfully.
Nov 25 09:36:08 compute-0 podman[441573]: 2025-11-25 09:36:08.011064019 +0000 UTC m=+0.054309041 container create 6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:36:08 compute-0 systemd[1]: Started libpod-conmon-6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a.scope.
Nov 25 09:36:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:36:08 compute-0 podman[441573]: 2025-11-25 09:36:07.983984062 +0000 UTC m=+0.027229164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd925b3fe776e61c646ae0f4884b46cd0a77940fbe94c11ceb5b15845e6dc5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd925b3fe776e61c646ae0f4884b46cd0a77940fbe94c11ceb5b15845e6dc5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd925b3fe776e61c646ae0f4884b46cd0a77940fbe94c11ceb5b15845e6dc5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd925b3fe776e61c646ae0f4884b46cd0a77940fbe94c11ceb5b15845e6dc5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd925b3fe776e61c646ae0f4884b46cd0a77940fbe94c11ceb5b15845e6dc5b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:08 compute-0 podman[441573]: 2025-11-25 09:36:08.088177762 +0000 UTC m=+0.131422814 container init 6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 09:36:08 compute-0 podman[441573]: 2025-11-25 09:36:08.094637728 +0000 UTC m=+0.137882750 container start 6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 09:36:08 compute-0 podman[441573]: 2025-11-25 09:36:08.098532104 +0000 UTC m=+0.141777126 container attach 6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:36:08 compute-0 ceph-mon[75015]: pgmap v3496: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3497: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:09 compute-0 charming_shockley[441589]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:36:09 compute-0 charming_shockley[441589]: --> relative data size: 1.0
Nov 25 09:36:09 compute-0 charming_shockley[441589]: --> All data devices are unavailable
Nov 25 09:36:09 compute-0 systemd[1]: libpod-6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a.scope: Deactivated successfully.
Nov 25 09:36:09 compute-0 podman[441573]: 2025-11-25 09:36:09.105215069 +0000 UTC m=+1.148460131 container died 6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 09:36:09 compute-0 sshd-session[441594]: Invalid user oracle from 193.32.162.151 port 60314
Nov 25 09:36:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bd925b3fe776e61c646ae0f4884b46cd0a77940fbe94c11ceb5b15845e6dc5b-merged.mount: Deactivated successfully.
Nov 25 09:36:09 compute-0 podman[441573]: 2025-11-25 09:36:09.166858119 +0000 UTC m=+1.210103141 container remove 6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shockley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:36:09 compute-0 systemd[1]: libpod-conmon-6add9e2346325779178fe0fdb0ad50829e573a79024691e9c5520252de502d7a.scope: Deactivated successfully.
Nov 25 09:36:09 compute-0 sudo[441467]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:09 compute-0 sudo[441634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:36:09 compute-0 sudo[441634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:09 compute-0 sudo[441634]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:09 compute-0 sshd-session[441594]: Connection closed by invalid user oracle 193.32.162.151 port 60314 [preauth]
Nov 25 09:36:09 compute-0 sudo[441659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:36:09 compute-0 sudo[441659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:09 compute-0 sudo[441659]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:09 compute-0 sudo[441684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:36:09 compute-0 sudo[441684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:09 compute-0 sudo[441684]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:09 compute-0 sudo[441709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:36:09 compute-0 sudo[441709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:36:09 compute-0 nova_compute[253538]: 2025-11-25 09:36:09.622 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:09 compute-0 podman[441772]: 2025-11-25 09:36:09.725487277 +0000 UTC m=+0.041088831 container create a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jones, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:36:09 compute-0 systemd[1]: Started libpod-conmon-a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6.scope.
Nov 25 09:36:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:36:09 compute-0 podman[441772]: 2025-11-25 09:36:09.80192551 +0000 UTC m=+0.117527084 container init a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jones, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:36:09 compute-0 podman[441772]: 2025-11-25 09:36:09.707885657 +0000 UTC m=+0.023487241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:36:09 compute-0 podman[441772]: 2025-11-25 09:36:09.807739329 +0000 UTC m=+0.123340883 container start a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jones, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:36:09 compute-0 podman[441772]: 2025-11-25 09:36:09.811324977 +0000 UTC m=+0.126926551 container attach a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jones, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:36:09 compute-0 intelligent_jones[441789]: 167 167
Nov 25 09:36:09 compute-0 systemd[1]: libpod-a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6.scope: Deactivated successfully.
Nov 25 09:36:09 compute-0 podman[441772]: 2025-11-25 09:36:09.813373103 +0000 UTC m=+0.128974657 container died a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 09:36:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e62de42c35affd42fc0103dd254b09df83e5e28ffedf28f1d81a758f0c39161-merged.mount: Deactivated successfully.
Nov 25 09:36:09 compute-0 podman[441772]: 2025-11-25 09:36:09.845017545 +0000 UTC m=+0.160619099 container remove a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 09:36:09 compute-0 systemd[1]: libpod-conmon-a3a0e5360445749a1ead286ddea6c6f7eb2fccfee354391bc11a6f003046dcf6.scope: Deactivated successfully.
Nov 25 09:36:09 compute-0 podman[441812]: 2025-11-25 09:36:09.988900118 +0000 UTC m=+0.035127008 container create f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 09:36:10 compute-0 systemd[1]: Started libpod-conmon-f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2.scope.
Nov 25 09:36:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:36:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f24c646e95d931cfe668b7e8f74119a4754376fb44dbe57517655a6fb9fb39e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f24c646e95d931cfe668b7e8f74119a4754376fb44dbe57517655a6fb9fb39e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f24c646e95d931cfe668b7e8f74119a4754376fb44dbe57517655a6fb9fb39e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f24c646e95d931cfe668b7e8f74119a4754376fb44dbe57517655a6fb9fb39e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:10 compute-0 podman[441812]: 2025-11-25 09:36:10.059923895 +0000 UTC m=+0.106150785 container init f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 09:36:10 compute-0 podman[441812]: 2025-11-25 09:36:10.068076096 +0000 UTC m=+0.114302976 container start f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 09:36:10 compute-0 podman[441812]: 2025-11-25 09:36:09.974819454 +0000 UTC m=+0.021046344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:36:10 compute-0 podman[441812]: 2025-11-25 09:36:10.07298503 +0000 UTC m=+0.119211900 container attach f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:36:10 compute-0 ceph-mon[75015]: pgmap v3497: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:10 compute-0 nova_compute[253538]: 2025-11-25 09:36:10.741 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]: {
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:     "0": [
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:         {
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "devices": [
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "/dev/loop3"
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             ],
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "lv_name": "ceph_lv0",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "lv_size": "21470642176",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "name": "ceph_lv0",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "tags": {
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.cluster_name": "ceph",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.crush_device_class": "",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.encrypted": "0",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.osd_id": "0",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.type": "block",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.vdo": "0"
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             },
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "type": "block",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "vg_name": "ceph_vg0"
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:         }
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:     ],
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:     "1": [
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:         {
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "devices": [
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "/dev/loop4"
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             ],
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "lv_name": "ceph_lv1",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "lv_size": "21470642176",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "name": "ceph_lv1",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "tags": {
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.cluster_name": "ceph",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.crush_device_class": "",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.encrypted": "0",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.osd_id": "1",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.type": "block",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.vdo": "0"
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             },
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "type": "block",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "vg_name": "ceph_vg1"
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:         }
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:     ],
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:     "2": [
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:         {
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "devices": [
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "/dev/loop5"
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             ],
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "lv_name": "ceph_lv2",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "lv_size": "21470642176",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "name": "ceph_lv2",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "tags": {
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.cluster_name": "ceph",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.crush_device_class": "",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.encrypted": "0",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.osd_id": "2",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.type": "block",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:                 "ceph.vdo": "0"
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             },
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "type": "block",
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:             "vg_name": "ceph_vg2"
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:         }
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]:     ]
Nov 25 09:36:10 compute-0 vigorous_williamson[441828]: }
Nov 25 09:36:10 compute-0 systemd[1]: libpod-f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2.scope: Deactivated successfully.
Nov 25 09:36:10 compute-0 podman[441812]: 2025-11-25 09:36:10.87116794 +0000 UTC m=+0.917394810 container died f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 09:36:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f24c646e95d931cfe668b7e8f74119a4754376fb44dbe57517655a6fb9fb39e-merged.mount: Deactivated successfully.
Nov 25 09:36:10 compute-0 podman[441812]: 2025-11-25 09:36:10.925287555 +0000 UTC m=+0.971514425 container remove f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:36:10 compute-0 systemd[1]: libpod-conmon-f2a167d819ca41fb8b7431408fac32511e22e13e87bbcefec54ccb2fcff8aea2.scope: Deactivated successfully.
Nov 25 09:36:10 compute-0 sudo[441709]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:11 compute-0 sudo[441850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:36:11 compute-0 sudo[441850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:11 compute-0 sudo[441850]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3498: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:11 compute-0 sudo[441875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:36:11 compute-0 sudo[441875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:11 compute-0 sudo[441875]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:11 compute-0 sudo[441900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:36:11 compute-0 sudo[441900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:11 compute-0 sudo[441900]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:11 compute-0 nova_compute[253538]: 2025-11-25 09:36:11.142 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:11 compute-0 sudo[441925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:36:11 compute-0 sudo[441925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:11 compute-0 podman[441991]: 2025-11-25 09:36:11.589255726 +0000 UTC m=+0.042612903 container create 7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:36:11 compute-0 systemd[1]: Started libpod-conmon-7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7.scope.
Nov 25 09:36:11 compute-0 podman[441991]: 2025-11-25 09:36:11.567855863 +0000 UTC m=+0.021213060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:36:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:36:11 compute-0 podman[441991]: 2025-11-25 09:36:11.699032779 +0000 UTC m=+0.152389986 container init 7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gould, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 09:36:11 compute-0 podman[441991]: 2025-11-25 09:36:11.707842039 +0000 UTC m=+0.161199216 container start 7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:36:11 compute-0 podman[441991]: 2025-11-25 09:36:11.714427009 +0000 UTC m=+0.167784196 container attach 7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gould, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 09:36:11 compute-0 condescending_gould[442007]: 167 167
Nov 25 09:36:11 compute-0 systemd[1]: libpod-7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7.scope: Deactivated successfully.
Nov 25 09:36:11 compute-0 podman[441991]: 2025-11-25 09:36:11.716833694 +0000 UTC m=+0.170190871 container died 7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:36:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6243dbb4c07cb623e389df2ad8806fc81f2fc55fbf72603d4951b08ec48bcf77-merged.mount: Deactivated successfully.
Nov 25 09:36:11 compute-0 podman[441991]: 2025-11-25 09:36:11.779521083 +0000 UTC m=+0.232878260 container remove 7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 09:36:11 compute-0 systemd[1]: libpod-conmon-7d3437fb7e21ad504f760f051d9d2072e82848e1f620e1ea7f7138285ef5bca7.scope: Deactivated successfully.
Nov 25 09:36:11 compute-0 podman[442031]: 2025-11-25 09:36:11.982348273 +0000 UTC m=+0.048811452 container create d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 09:36:12 compute-0 systemd[1]: Started libpod-conmon-d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de.scope.
Nov 25 09:36:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e517a4a2d05910b231e393f0a6cab1be36720ff217bd8313faf77b1d09fa1ef4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e517a4a2d05910b231e393f0a6cab1be36720ff217bd8313faf77b1d09fa1ef4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e517a4a2d05910b231e393f0a6cab1be36720ff217bd8313faf77b1d09fa1ef4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e517a4a2d05910b231e393f0a6cab1be36720ff217bd8313faf77b1d09fa1ef4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:12 compute-0 podman[442031]: 2025-11-25 09:36:11.962842361 +0000 UTC m=+0.029305570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:36:12 compute-0 podman[442031]: 2025-11-25 09:36:12.072368767 +0000 UTC m=+0.138831966 container init d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 09:36:12 compute-0 podman[442031]: 2025-11-25 09:36:12.08055842 +0000 UTC m=+0.147021589 container start d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:36:12 compute-0 podman[442031]: 2025-11-25 09:36:12.084014844 +0000 UTC m=+0.150478023 container attach d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 09:36:12 compute-0 ceph-mon[75015]: pgmap v3498: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:12 compute-0 nova_compute[253538]: 2025-11-25 09:36:12.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]: {
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:         "osd_id": 1,
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:         "type": "bluestore"
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:     },
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:         "osd_id": 2,
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:         "type": "bluestore"
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:     },
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:         "osd_id": 0,
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:         "type": "bluestore"
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]:     }
Nov 25 09:36:13 compute-0 peaceful_tharp[442047]: }
Nov 25 09:36:13 compute-0 systemd[1]: libpod-d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de.scope: Deactivated successfully.
Nov 25 09:36:13 compute-0 podman[442031]: 2025-11-25 09:36:13.048349404 +0000 UTC m=+1.114812583 container died d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:36:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3499: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e517a4a2d05910b231e393f0a6cab1be36720ff217bd8313faf77b1d09fa1ef4-merged.mount: Deactivated successfully.
Nov 25 09:36:13 compute-0 podman[442031]: 2025-11-25 09:36:13.104584217 +0000 UTC m=+1.171047396 container remove d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 09:36:13 compute-0 systemd[1]: libpod-conmon-d9197fbc9aaeb469249a1f2a296c4264829c5f0044df0a2e4cece77fff3923de.scope: Deactivated successfully.
Nov 25 09:36:13 compute-0 sudo[441925]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:36:13 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:36:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:36:13 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:36:13 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 00d4f859-e71a-4783-9944-da36c640d3ae does not exist
Nov 25 09:36:13 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev f4faf2d2-5e82-41c0-be05-d96889e24edb does not exist
Nov 25 09:36:13 compute-0 sudo[442092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:36:13 compute-0 sudo[442092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:13 compute-0 sudo[442092]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:13 compute-0 sudo[442117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:36:13 compute-0 sudo[442117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:13 compute-0 sudo[442117]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:14 compute-0 ceph-mon[75015]: pgmap v3499: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:14 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:36:14 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:36:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:36:14 compute-0 nova_compute[253538]: 2025-11-25 09:36:14.670 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:14 compute-0 podman[442143]: 2025-11-25 09:36:14.818352427 +0000 UTC m=+0.054449465 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 09:36:14 compute-0 podman[442142]: 2025-11-25 09:36:14.825929814 +0000 UTC m=+0.063836941 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd)
Nov 25 09:36:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3500: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:16 compute-0 nova_compute[253538]: 2025-11-25 09:36:16.143 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:16 compute-0 ceph-mon[75015]: pgmap v3500: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3501: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:18 compute-0 ceph-mon[75015]: pgmap v3501: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3502: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:19 compute-0 sshd-session[442177]: Accepted publickey for zuul from 192.168.122.10 port 53310 ssh2: ECDSA SHA256:XPT2Qp05XP+4/iPWyxQ1YuG4VjRBRDdk6pBKmAF934E
Nov 25 09:36:19 compute-0 systemd-logind[822]: New session 55 of user zuul.
Nov 25 09:36:19 compute-0 systemd[1]: Started Session 55 of User zuul.
Nov 25 09:36:19 compute-0 sshd-session[442177]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:36:19 compute-0 sudo[442181]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 25 09:36:19 compute-0 sudo[442181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:19 compute-0 nova_compute[253538]: 2025-11-25 09:36:19.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:36:19 compute-0 nova_compute[253538]: 2025-11-25 09:36:19.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:36:19 compute-0 nova_compute[253538]: 2025-11-25 09:36:19.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:36:19 compute-0 nova_compute[253538]: 2025-11-25 09:36:19.572 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:36:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:36:19 compute-0 nova_compute[253538]: 2025-11-25 09:36:19.671 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:20 compute-0 ceph-mon[75015]: pgmap v3502: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:20 compute-0 nova_compute[253538]: 2025-11-25 09:36:20.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:36:20 compute-0 nova_compute[253538]: 2025-11-25 09:36:20.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:36:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3503: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:21 compute-0 nova_compute[253538]: 2025-11-25 09:36:21.144 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:21 compute-0 nova_compute[253538]: 2025-11-25 09:36:21.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:36:21 compute-0 nova_compute[253538]: 2025-11-25 09:36:21.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:36:22 compute-0 ceph-mon[75015]: pgmap v3503: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:22 compute-0 nova_compute[253538]: 2025-11-25 09:36:22.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:36:22 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23115 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3504: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:23 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23117 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:36:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:36:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:36:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:36:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:36:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:36:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 09:36:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1940533214' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 09:36:23 compute-0 podman[442407]: 2025-11-25 09:36:23.895893192 +0000 UTC m=+0.132954866 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:36:24 compute-0 ceph-mon[75015]: from='client.23115 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:24 compute-0 ceph-mon[75015]: pgmap v3504: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:24 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1940533214' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 09:36:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:36:24 compute-0 nova_compute[253538]: 2025-11-25 09:36:24.674 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3505: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:25 compute-0 ceph-mon[75015]: from='client.23117 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:26 compute-0 nova_compute[253538]: 2025-11-25 09:36:26.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:26 compute-0 ceph-mon[75015]: pgmap v3505: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3506: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:27 compute-0 ovs-vsctl[442490]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 25 09:36:28 compute-0 ceph-mon[75015]: pgmap v3506: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:28 compute-0 nova_compute[253538]: 2025-11-25 09:36:28.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:36:28 compute-0 virtqemud[253839]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 09:36:28 compute-0 virtqemud[253839]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 09:36:28 compute-0 virtqemud[253839]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 09:36:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3507: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:36:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4130503670' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:36:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:36:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4130503670' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:36:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/4130503670' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:36:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/4130503670' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:36:29 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: cache status {prefix=cache status} (starting...)
Nov 25 09:36:29 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: client ls {prefix=client ls} (starting...)
Nov 25 09:36:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:36:29 compute-0 nova_compute[253538]: 2025-11-25 09:36:29.677 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:29 compute-0 lvm[442852]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:36:29 compute-0 lvm[442852]: VG ceph_vg0 finished
Nov 25 09:36:30 compute-0 lvm[442858]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 09:36:30 compute-0 lvm[442858]: VG ceph_vg1 finished
Nov 25 09:36:30 compute-0 lvm[442861]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 09:36:30 compute-0 lvm[442861]: VG ceph_vg2 finished
Nov 25 09:36:30 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23125 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:30 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: damage ls {prefix=damage ls} (starting...)
Nov 25 09:36:30 compute-0 ceph-mon[75015]: pgmap v3507: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:30 compute-0 ceph-mon[75015]: from='client.23125 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:30 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump loads {prefix=dump loads} (starting...)
Nov 25 09:36:30 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23127 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:30 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 25 09:36:30 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 25 09:36:30 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 25 09:36:30 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 25 09:36:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3508: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 25 09:36:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/828198151' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 09:36:31 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 25 09:36:31 compute-0 nova_compute[253538]: 2025-11-25 09:36:31.147 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:31 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23133 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:31 compute-0 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T09:36:31.227+0000 7f6347321640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 09:36:31 compute-0 ceph-mgr[75313]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 09:36:31 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 25 09:36:31 compute-0 ceph-mon[75015]: from='client.23127 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:31 compute-0 ceph-mon[75015]: pgmap v3508: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:31 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/828198151' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 09:36:31 compute-0 ceph-mon[75015]: from='client.23133 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:36:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1193962000' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:36:31 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: ops {prefix=ops} (starting...)
Nov 25 09:36:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 25 09:36:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2772513536' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 09:36:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 25 09:36:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481717257' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 09:36:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 09:36:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2744050197' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 09:36:32 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: session ls {prefix=session ls} (starting...)
Nov 25 09:36:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 25 09:36:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3728529688' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 09:36:32 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: status {prefix=status} (starting...)
Nov 25 09:36:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1193962000' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:36:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2772513536' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 09:36:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1481717257' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 09:36:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2744050197' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 09:36:32 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3728529688' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 09:36:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 09:36:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1238580934' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 09:36:32 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23147 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 09:36:32 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2844190516' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 09:36:32 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23151 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3509: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1238580934' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[75015]: from='client.23147 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2844190516' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[75015]: from='client.23151 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[75015]: pgmap v3509: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 09:36:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128864166' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 25 09:36:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/9214792' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 25 09:36:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4132945345' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 25 09:36:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1917744854' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 09:36:34 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23161 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:34 compute-0 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T09:36:34.166+0000 7f6347321640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 09:36:34 compute-0 ceph-mgr[75313]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 09:36:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 09:36:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3058462891' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 09:36:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2128864166' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:36:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/9214792' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 09:36:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4132945345' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 09:36:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1917744854' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 09:36:34 compute-0 ceph-mon[75015]: from='client.23161 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:34 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3058462891' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 09:36:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 25 09:36:34 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033806218' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 09:36:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:36:34 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23167 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:34 compute-0 nova_compute[253538]: 2025-11-25 09:36:34.679 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:34 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23171 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 25 09:36:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1133696125' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 09:36:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3510: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3033806218' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 09:36:35 compute-0 ceph-mon[75015]: from='client.23167 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:35 compute-0 ceph-mon[75015]: from='client.23171 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:35 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1133696125' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 09:36:35 compute-0 ceph-mon[75015]: pgmap v3510: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:35 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23173 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 09:36:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3645074910' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 09:36:35 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23177 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267362304 unmapped: 47718400 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:41.568657+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2721669 data_alloc: 218103808 data_used: 7303168
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267362304 unmapped: 47718400 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:42.568825+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267362304 unmapped: 47718400 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:43.569018+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267362304 unmapped: 47718400 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eee62000/0x0/0x4ffc00000, data 0x1233f7c/0x139a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:44.569159+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267370496 unmapped: 47710208 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:45.569261+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267370496 unmapped: 47710208 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:46.569413+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2721669 data_alloc: 218103808 data_used: 7303168
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267370496 unmapped: 47710208 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:47.569611+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267370496 unmapped: 47710208 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:48.569813+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267370496 unmapped: 47710208 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:49.569987+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267370496 unmapped: 47710208 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eee62000/0x0/0x4ffc00000, data 0x1233f7c/0x139a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:50.570122+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267378688 unmapped: 47702016 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:51.570277+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2721669 data_alloc: 218103808 data_used: 7303168
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267378688 unmapped: 47702016 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:52.570366+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267378688 unmapped: 47702016 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:53.570479+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267378688 unmapped: 47702016 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:54.570567+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267378688 unmapped: 47702016 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 48.778717041s of 50.482543945s, submitted: 50
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:55.570749+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12935d400 session 0x55a1299c9680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f42400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f42400 session 0x55a1284852c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a1294e1a40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267436032 unmapped: 47644672 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12769b400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12769b400 session 0x55a128485a40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12935d400 session 0x55a12ad86960
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eea00000/0x0/0x4ffc00000, data 0x1697f7c/0x17fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:56.570945+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2760681 data_alloc: 218103808 data_used: 7303168
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267436032 unmapped: 47644672 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eea00000/0x0/0x4ffc00000, data 0x1697f7c/0x17fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:57.571077+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267436032 unmapped: 47644672 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:58.571234+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267444224 unmapped: 47636480 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a129f52d20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f42400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f42400 session 0x55a128074960
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a129e9cf00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12769b400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12769b400 session 0x55a129ac6b40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:59.571346+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12935d400 session 0x55a1285ada40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a129d8e3c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f59800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f59800 session 0x55a12962a960
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267395072 unmapped: 47685632 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a129d7eb40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12769b400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12769b400 session 0x55a129d8f4a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:00.571468+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267395072 unmapped: 47685632 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee083000/0x0/0x4ffc00000, data 0x2012fee/0x217b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:01.571712+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2837556 data_alloc: 218103808 data_used: 7307264
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267395072 unmapped: 47685632 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12935d400 session 0x55a12808dc20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:02.571905+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267395072 unmapped: 47685632 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a12962a780
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3ac00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3ac00 session 0x55a1294e12c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a12761c780
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:03.572050+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267427840 unmapped: 47652864 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee083000/0x0/0x4ffc00000, data 0x2012fee/0x217b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12769b400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:04.572195+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a129f4bc20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267427840 unmapped: 47652864 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12a97dc00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694c00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:05.572339+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267427840 unmapped: 47652864 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee082000/0x0/0x4ffc00000, data 0x2013011/0x217c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:06.572441+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2875638 data_alloc: 218103808 data_used: 12042240
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 267436032 unmapped: 47644672 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:07.572639+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270901248 unmapped: 44179456 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee082000/0x0/0x4ffc00000, data 0x2013011/0x217c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:08.572807+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270901248 unmapped: 44179456 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:09.572931+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270901248 unmapped: 44179456 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:10.573053+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270901248 unmapped: 44179456 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:11.573173+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943318 data_alloc: 234881024 data_used: 21549056
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270901248 unmapped: 44179456 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee082000/0x0/0x4ffc00000, data 0x2013011/0x217c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:12.573325+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270901248 unmapped: 44179456 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: mgrc ms_handle_reset ms_handle_reset con 0x55a12b693400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3119838916
Nov 25 09:36:35 compute-0 ceph-osd[90711]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3119838916,v1:192.168.122.100:6801/3119838916]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: get_auth_request con 0x55a129f3ac00 auth_method 0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: mgrc handle_mgr_configure stats_period=5
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:13.573437+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270958592 unmapped: 44122112 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:14.573563+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270958592 unmapped: 44122112 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:15.573684+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270958592 unmapped: 44122112 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:16.573830+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943798 data_alloc: 234881024 data_used: 21561344
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270958592 unmapped: 44122112 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.959785461s of 21.312959671s, submitted: 50
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:17.573955+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275210240 unmapped: 39870464 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ecf17000/0x0/0x4ffc00000, data 0x317e011/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:18.574096+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ece93000/0x0/0x4ffc00000, data 0x3202011/0x336b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:19.574205+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:20.574358+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:21.574483+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3105258 data_alloc: 234881024 data_used: 22839296
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ece93000/0x0/0x4ffc00000, data 0x3202011/0x336b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:22.574588+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:23.574732+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:24.574876+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ece72000/0x0/0x4ffc00000, data 0x3223011/0x338c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:25.575022+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:26.575173+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3101998 data_alloc: 234881024 data_used: 22851584
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:27.575321+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:28.575517+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:29.575719+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.514759064s of 12.928457260s, submitted: 142
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12a97dc00 session 0x55a129ce9c20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12b694c00 session 0x55a129e9a780
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 275447808 unmapped: 39632896 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ece72000/0x0/0x4ffc00000, data 0x3223011/0x338c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007f400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a13007f400 session 0x55a12ad861e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:30.575859+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a1285781e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a129d7e960
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12a97dc00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12a97dc00 session 0x55a129d5d4a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694c00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12b694c00 session 0x55a129e9d0e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12c5fc000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:31.575985+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12c5fc000 session 0x55a1294e1680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2953948 data_alloc: 218103808 data_used: 12865536
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a128517c20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a1294f2f00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12a97dc00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12a97dc00 session 0x55a1277a3c20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694c00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12b694c00 session 0x55a129d5d680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4edcb8000/0x0/0x4ffc00000, data 0x2316f7c/0x247d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:32.576117+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:33.576260+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:34.576716+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:35.577134+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:36.577324+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2954356 data_alloc: 218103808 data_used: 12865536
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:37.577477+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed874000/0x0/0x4ffc00000, data 0x275af7c/0x28c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:38.577637+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f40400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f40400 session 0x55a129f4ad20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed874000/0x0/0x4ffc00000, data 0x275af7c/0x28c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a129eb3e00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:39.577931+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269238272 unmapped: 45842432 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a1280734a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12a97dc00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.086547852s of 10.297847748s, submitted: 37
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12a97dc00 session 0x55a12844cb40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:40.578102+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694c00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12abf8800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed93b000/0x0/0x4ffc00000, data 0x275afaf/0x28c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:41.578338+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2958596 data_alloc: 218103808 data_used: 12865536
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:42.578477+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:43.578733+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:44.579034+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed93b000/0x0/0x4ffc00000, data 0x275afaf/0x28c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:45.579396+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:46.579621+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989796 data_alloc: 234881024 data_used: 17223680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:47.579760+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed93b000/0x0/0x4ffc00000, data 0x275afaf/0x28c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed93b000/0x0/0x4ffc00000, data 0x275afaf/0x28c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:48.579930+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:49.580106+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:50.580349+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed939000/0x0/0x4ffc00000, data 0x275bfaf/0x28c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:51.580528+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990232 data_alloc: 234881024 data_used: 17227776
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:52.580655+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:53.580860+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270311424 unmapped: 44769280 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:54.580998+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.535174370s of 14.598592758s, submitted: 14
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 272859136 unmapped: 42221568 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ecf05000/0x0/0x4ffc00000, data 0x3182faf/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:55.581181+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273547264 unmapped: 41533440 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:56.581398+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3087538 data_alloc: 234881024 data_used: 18219008
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:57.581574+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:58.581775+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eceef000/0x0/0x4ffc00000, data 0x3190faf/0x32f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:59.581948+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:00.582127+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eceef000/0x0/0x4ffc00000, data 0x3190faf/0x32f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:01.582277+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3087554 data_alloc: 234881024 data_used: 18219008
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:02.582413+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:03.582617+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:04.582734+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eceef000/0x0/0x4ffc00000, data 0x3190faf/0x32f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273776640 unmapped: 41304064 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:05.582881+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eceef000/0x0/0x4ffc00000, data 0x3190faf/0x32f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273793024 unmapped: 41287680 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:06.584108+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3090754 data_alloc: 234881024 data_used: 18391040
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273793024 unmapped: 41287680 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:07.584434+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273793024 unmapped: 41287680 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12b694c00 session 0x55a12761c3c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.127888680s of 13.495772362s, submitted: 77
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12abf8800 session 0x55a129e9c780
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:08.584813+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a1285163c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 44204032 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:09.584995+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 44204032 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:10.585334+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4edd6e000/0x0/0x4ffc00000, data 0x2329f7c/0x2490000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 44204032 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:11.585567+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2929259 data_alloc: 218103808 data_used: 12951552
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 44204032 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:12.585753+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 44204032 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:13.586170+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12769b400 session 0x55a12808cf00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12935d400 session 0x55a129ac6b40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 44204032 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:14.586331+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a126bf7a40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268722176 unmapped: 46358528 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eee64000/0x0/0x4ffc00000, data 0x1233f7c/0x139a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:15.586517+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268722176 unmapped: 46358528 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:16.586778+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2756393 data_alloc: 218103808 data_used: 7303168
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268722176 unmapped: 46358528 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:17.586987+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12a97dc00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268722176 unmapped: 46358528 heap: 315080704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.729051590s of 10.018085480s, submitted: 60
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:18.587281+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12a97dc00 session 0x55a129f4a000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268738560 unmapped: 50020352 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a127620000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12769b400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12769b400 session 0x55a1294f2b40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12935d400 session 0x55a129eb2960
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a129f3e000 session 0x55a129eb21e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:19.587506+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee20a000/0x0/0x4ffc00000, data 0x1e8df7c/0x1ff4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268746752 unmapped: 50012160 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:20.587641+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268746752 unmapped: 50012160 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:21.587824+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2848624 data_alloc: 218103808 data_used: 7303168
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268746752 unmapped: 50012160 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:22.587960+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694c00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12b694c00 session 0x55a12962be00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268746752 unmapped: 50012160 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee20a000/0x0/0x4ffc00000, data 0x1e8df7c/0x1ff4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a127698400 session 0x55a12844d2c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:23.588165+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12951c000 session 0x55a128000b40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12769b400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268746752 unmapped: 50012160 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12951c000 session 0x55a1294f3860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12935d400 session 0x55a129d8f2c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:24.588327+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694c00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 268746752 unmapped: 50012160 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:25.588486+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 269705216 unmapped: 49053696 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:26.588630+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943878 data_alloc: 234881024 data_used: 20254720
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:27.588752+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:28.588899+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee209000/0x0/0x4ffc00000, data 0x1e8df9f/0x1ff5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:29.589069+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:30.589207+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:31.589351+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943878 data_alloc: 234881024 data_used: 20254720
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ee209000/0x0/0x4ffc00000, data 0x1e8df9f/0x1ff5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:32.589492+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:33.589631+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:34.589752+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:35.589910+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 271441920 unmapped: 47316992 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.334646225s of 18.030015945s, submitted: 29
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:36.590055+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3022528 data_alloc: 234881024 data_used: 20365312
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273301504 unmapped: 45457408 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:37.590230+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273309696 unmapped: 45449216 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed792000/0x0/0x4ffc00000, data 0x2904f9f/0x2a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:38.590470+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:39.590700+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:40.590809+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:41.590980+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028408 data_alloc: 234881024 data_used: 21094400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:42.591145+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:43.591356+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed792000/0x0/0x4ffc00000, data 0x2904f9f/0x2a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:44.591484+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:45.591601+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:46.591703+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028408 data_alloc: 234881024 data_used: 21094400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:47.591858+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ed792000/0x0/0x4ffc00000, data 0x2904f9f/0x2a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951d800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 ms_handle_reset con 0x55a12951d800 session 0x55a128517e00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:48.592081+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935c400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.266537666s of 12.748188019s, submitted: 50
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 273342464 unmapped: 45416448 heap: 318758912 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 254 ms_handle_reset con 0x55a12935c400 session 0x55a1285161e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 254 ms_handle_reset con 0x55a12935d400 session 0x55a12ad863c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 254 ms_handle_reset con 0x55a127698400 session 0x55a128073860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:49.592178+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 254 ms_handle_reset con 0x55a12951c000 session 0x55a129f4b0e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951d800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285696000 unmapped: 35487744 heap: 321183744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:50.592400+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 254 handle_osd_map epochs [254,255], i have 254, src has [1,255]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 254 handle_osd_map epochs [255,255], i have 255, src has [1,255]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 255 ms_handle_reset con 0x55a12951d800 session 0x55a128072f00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12a6e4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 255 heartbeat osd_stat(store_statfs(0x4ec999000/0x0/0x4ffc00000, data 0x36fbb2c/0x3865000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285712384 unmapped: 35471360 heap: 321183744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:51.592544+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12a6e4000 session 0x55a127620960
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3184509 data_alloc: 234881024 data_used: 27652096
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285720576 unmapped: 35463168 heap: 321183744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:52.592676+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a127698400 session 0x55a129d7f4a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12935d400 session 0x55a12761c960
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12951c000 session 0x55a1277a2000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285728768 unmapped: 35454976 heap: 321183744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:53.592786+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 35790848 heap: 321183744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:54.593083+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 35790848 heap: 321183744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:55.593558+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 35790848 heap: 321183744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 heartbeat osd_stat(store_statfs(0x4ec991000/0x0/0x4ffc00000, data 0x36ff296/0x386b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:56.593724+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951d800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3174861 data_alloc: 234881024 data_used: 27652096
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 38264832 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12951d800 session 0x55a129d7f0e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f43000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a129f43000 session 0x55a129ce94a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a127698400 session 0x55a129ce83c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12935d400 session 0x55a1299c81e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12951c000 session 0x55a129f4b4a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:57.593886+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951d800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12951d800 session 0x55a129d5cf00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007f800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12abf9000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12abf9000 session 0x55a1277a2780
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a13007f800 session 0x55a129d7fa40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a127698400 session 0x55a129d8fa40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12935d400 session 0x55a128517680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 ms_handle_reset con 0x55a12951c000 session 0x55a129f4a1e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279527424 unmapped: 45326336 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:58.594048+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279527424 unmapped: 45326336 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:59.594223+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.191231728s of 10.974235535s, submitted: 81
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279543808 unmapped: 45309952 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:00.594429+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279543808 unmapped: 45309952 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:01.594579+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ebfe7000/0x0/0x4ffc00000, data 0x40a8cf9/0x4216000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3247971 data_alloc: 234881024 data_used: 27660288
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279552000 unmapped: 45301760 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:02.594757+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279552000 unmapped: 45301760 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951d800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:03.594899+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12951d800 session 0x55a12ad87a40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ebfe7000/0x0/0x4ffc00000, data 0x40a8cf9/0x4216000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279855104 unmapped: 44998656 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ebfc4000/0x0/0x4ffc00000, data 0x40cccf9/0x423a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:04.595039+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 44990464 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:05.595159+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12951c000 session 0x55a129d8f860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279912448 unmapped: 44941312 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007f800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12abf9000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:06.595292+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3307448 data_alloc: 251658240 data_used: 35053568
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 280035328 unmapped: 44818432 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:07.595412+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283172864 unmapped: 41680896 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:08.595610+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 39845888 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:09.595739+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ebfc4000/0x0/0x4ffc00000, data 0x40cccf9/0x423a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 39845888 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:10.595878+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 39845888 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ebfc4000/0x0/0x4ffc00000, data 0x40cccf9/0x423a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:11.596012+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3378328 data_alloc: 251658240 data_used: 44978176
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 39845888 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:12.596154+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 39845888 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:13.596326+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 39845888 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:14.596470+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 39845888 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:15.596655+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.041135788s of 16.079441071s, submitted: 65
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 36036608 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:16.596816+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ebcb9000/0x0/0x4ffc00000, data 0x43d7cf9/0x4545000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3427395 data_alloc: 251658240 data_used: 45391872
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289161216 unmapped: 35692544 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:17.596965+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289193984 unmapped: 35659776 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:18.597138+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291012608 unmapped: 33841152 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:19.597256+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291037184 unmapped: 33816576 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4eb30e000/0x0/0x4ffc00000, data 0x4d7ccf9/0x4eea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:20.597394+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293314560 unmapped: 31539200 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:21.597569+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3508061 data_alloc: 251658240 data_used: 45445120
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293314560 unmapped: 31539200 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:22.597687+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293322752 unmapped: 31531008 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:23.597819+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293322752 unmapped: 31531008 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:24.598036+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4eb25e000/0x0/0x4ffc00000, data 0x4e2acf9/0x4f98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293355520 unmapped: 31498240 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:25.598175+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.931900978s of 10.228565216s, submitted: 100
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 32718848 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:26.598345+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3502997 data_alloc: 251658240 data_used: 45445120
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 32718848 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:27.598463+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 32718848 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:28.598685+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4eb241000/0x0/0x4ffc00000, data 0x4e4fcf9/0x4fbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 32718848 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:29.598815+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 32718848 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:30.598950+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12e96d000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12e96d000 session 0x55a1281530e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286df800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a1286df800 session 0x55a128153860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12b694400 session 0x55a128579e00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b4d0400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12b4d0400 session 0x55a127959a40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286df800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 32718848 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a1286df800 session 0x55a127620f00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12951c000 session 0x55a1294a6960
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12b694400 session 0x55a128547a40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12e96d000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:31.599065+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12e96d000 session 0x55a1294a6b40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f44000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a129f44000 session 0x55a128665a40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3577914 data_alloc: 251658240 data_used: 45445120
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 32366592 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:32.599192+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ea8e6000/0x0/0x4ffc00000, data 0x57a9d5b/0x5918000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292503552 unmapped: 32350208 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:33.599379+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 32342016 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:34.599514+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 32342016 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:35.599649+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 32342016 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:36.599775+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3578094 data_alloc: 251658240 data_used: 45445120
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 32342016 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:37.599863+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.780004501s of 11.949531555s, submitted: 35
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 32342016 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ea8e3000/0x0/0x4ffc00000, data 0x57acd5b/0x591b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:38.600019+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 32333824 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:39.600222+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ea8e3000/0x0/0x4ffc00000, data 0x57acd5b/0x591b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 32333824 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:40.600410+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 32325632 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:41.600541+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286df800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a1286df800 session 0x55a1285bb4a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3585006 data_alloc: 251658240 data_used: 46264320
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293224448 unmapped: 31629312 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 heartbeat osd_stat(store_statfs(0x4ea8bf000/0x0/0x4ffc00000, data 0x57d0d5b/0x593f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:42.600681+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12e96d000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 ms_handle_reset con 0x55a12e96d000 session 0x55a12962be00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12e96ec00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293257216 unmapped: 31596544 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 257 handle_osd_map epochs [258,258], i have 258, src has [1,258]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 258 ms_handle_reset con 0x55a12e96ec00 session 0x55a129e9de00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f38c00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286ad000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:43.600861+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 258 ms_handle_reset con 0x55a1286ad000 session 0x55a129f4a1e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 258 ms_handle_reset con 0x55a129f38c00 session 0x55a129f4b4a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286ad000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 299302912 unmapped: 25550848 heap: 324853760 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:44.600984+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 258 ms_handle_reset con 0x55a1286ad000 session 0x55a129e9d860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286df800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 302546944 unmapped: 34914304 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:45.601090+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 259 ms_handle_reset con 0x55a1286df800 session 0x55a12962b2c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 32931840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:46.601228+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12e96d000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 260 ms_handle_reset con 0x55a12e96d000 session 0x55a129d8e1e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 260 heartbeat osd_stat(store_statfs(0x4e9393000/0x0/0x4ffc00000, data 0x6cf94a9/0x6e6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3857252 data_alloc: 268435456 data_used: 57167872
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12e96ec00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 260 ms_handle_reset con 0x55a12e96ec00 session 0x55a129f4a960
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306790400 unmapped: 30670848 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f43800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 260 ms_handle_reset con 0x55a129f43800 session 0x55a129ce92c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286ad000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:47.601387+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 260 ms_handle_reset con 0x55a1286ad000 session 0x55a129d5da40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286df800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306896896 unmapped: 30564352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:48.601522+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.849305153s of 10.489986420s, submitted: 43
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 261 ms_handle_reset con 0x55a1286df800 session 0x55a1285161e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 33284096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:49.601641+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 261 heartbeat osd_stat(store_statfs(0x4ea686000/0x0/0x4ffc00000, data 0x5a03c2f/0x5b77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 33284096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:50.601783+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 261 heartbeat osd_stat(store_statfs(0x4ea686000/0x0/0x4ffc00000, data 0x5a03c2f/0x5b77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 261 ms_handle_reset con 0x55a127698400 session 0x55a129e9af00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 261 ms_handle_reset con 0x55a12935d400 session 0x55a129e9b680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f43800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 33275904 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:51.601915+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 261 ms_handle_reset con 0x55a129f43800 session 0x55a129d5cb40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3645496 data_alloc: 268435456 data_used: 53866496
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 33275904 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:52.603228+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 33275904 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:53.603352+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 33275904 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:54.603510+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 263 ms_handle_reset con 0x55a127698400 session 0x55a12ad86960
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 301932544 unmapped: 35528704 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:55.603655+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 301932544 unmapped: 35528704 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 263 heartbeat osd_stat(store_statfs(0x4eb9d0000/0x0/0x4ffc00000, data 0x46b726f/0x482c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:56.603790+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3546903 data_alloc: 251658240 data_used: 40132608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 33628160 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:57.603908+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 303964160 unmapped: 33497088 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:58.604089+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.238170624s of 10.207138062s, submitted: 191
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 264 ms_handle_reset con 0x55a129f3e000 session 0x55a12ad86b40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 264 ms_handle_reset con 0x55a12b694c00 session 0x55a12844c5a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286ad000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 303964160 unmapped: 33497088 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:59.604231+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 264 ms_handle_reset con 0x55a1286ad000 session 0x55a1284850e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 264 heartbeat osd_stat(store_statfs(0x4eaf59000/0x0/0x4ffc00000, data 0x512dcee/0x52a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:00.604353+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 264 heartbeat osd_stat(store_statfs(0x4ec62b000/0x0/0x4ffc00000, data 0x3a5cccb/0x3bd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:01.604475+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 264 heartbeat osd_stat(store_statfs(0x4ec62b000/0x0/0x4ffc00000, data 0x3a5cccb/0x3bd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3301591 data_alloc: 234881024 data_used: 26443776
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:02.604597+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 264 handle_osd_map epochs [264,265], i have 264, src has [1,265]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:03.604749+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:04.604905+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1279833615' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:05.605050+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:06.605192+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec628000/0x0/0x4ffc00000, data 0x3a5e72e/0x3bd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3306229 data_alloc: 234881024 data_used: 26468352
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:07.605350+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:08.605557+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:09.605707+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.888059616s of 11.317874908s, submitted: 61
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a1280014a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694400 session 0x55a128000b40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 40280064 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:10.605823+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a128516000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285057024 unmapped: 52404224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:11.605972+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed765000/0x0/0x4ffc00000, data 0x266e6cc/0x27e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074375 data_alloc: 218103808 data_used: 15634432
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285057024 unmapped: 52404224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:12.606151+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285057024 unmapped: 52404224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:13.606291+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a13007f800 session 0x55a128665860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a12855d0e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285057024 unmapped: 52404224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eda0c000/0x0/0x4ffc00000, data 0x267c6cc/0x27f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:14.606498+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286ad000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286ad000 session 0x55a129d7f0e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:15.606681+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:16.606844+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854123 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:17.606983+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:18.607177+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:19.607371+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:20.607524+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:21.607650+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854123 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:22.607759+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:23.607900+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:24.608082+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:25.608238+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:26.608382+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854123 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:27.608529+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:28.608704+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:29.608838+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:30.609002+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:31.609184+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854123 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:32.609377+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:33.609555+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:34.609729+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:35.609940+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:36.610094+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854123 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:37.610249+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:38.610461+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:39.610643+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:40.610816+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:41.610986+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854123 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:42.611160+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:43.611389+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:44.611558+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:45.611739+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:46.611917+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854123 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:47.612042+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:48.612196+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:49.612366+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277774336 unmapped: 59686912 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:50.612506+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277782528 unmapped: 59678720 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:51.647425+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277782528 unmapped: 59678720 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854123 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:52.647568+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277782528 unmapped: 59678720 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:53.647683+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277782528 unmapped: 59678720 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:54.647796+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277782528 unmapped: 59678720 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:55.647911+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277782528 unmapped: 59678720 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:56.648048+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277782528 unmapped: 59678720 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 46.931575775s of 47.195777893s, submitted: 51
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2888029 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:57.648178+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279502848 unmapped: 57958400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eee40000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129ce92c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286ad000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286ad000 session 0x55a12962b2c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12abf9000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a129e9d860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694400 session 0x55a129f4b4a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007f800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a13007f800 session 0x55a129f4a1e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:58.648373+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277889024 unmapped: 59572224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:59.648544+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277889024 unmapped: 59572224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:00.648706+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eeb20000/0x0/0x4ffc00000, data 0x15686cc/0x16de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277889024 unmapped: 59572224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:01.648836+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eeb20000/0x0/0x4ffc00000, data 0x15686cc/0x16de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277889024 unmapped: 59572224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:02.648988+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2882589 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277889024 unmapped: 59572224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:03.649154+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277889024 unmapped: 59572224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129e9de00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:04.649383+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277889024 unmapped: 59572224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286ad000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12abf9000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:05.649579+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277897216 unmapped: 59564032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:06.650193+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277905408 unmapped: 59555840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:07.650516+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eeb20000/0x0/0x4ffc00000, data 0x15686cc/0x16de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2897949 data_alloc: 218103808 data_used: 9506816
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277905408 unmapped: 59555840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:08.650687+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277905408 unmapped: 59555840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:09.650852+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277905408 unmapped: 59555840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eeb20000/0x0/0x4ffc00000, data 0x15686cc/0x16de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:10.651020+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277905408 unmapped: 59555840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:11.651163+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277905408 unmapped: 59555840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:12.651465+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2897949 data_alloc: 218103808 data_used: 9506816
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277905408 unmapped: 59555840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eeb20000/0x0/0x4ffc00000, data 0x15686cc/0x16de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:13.651679+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277905408 unmapped: 59555840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:14.651968+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277913600 unmapped: 59547648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eeb20000/0x0/0x4ffc00000, data 0x15686cc/0x16de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:15.652108+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 277913600 unmapped: 59547648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:16.652270+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.087421417s of 19.442668915s, submitted: 8
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 279355392 unmapped: 58105856 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:17.652469+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952199 data_alloc: 218103808 data_used: 10223616
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 281747456 unmapped: 55713792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:18.652619+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 54722560 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:19.652816+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 54722560 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee53c000/0x0/0x4ffc00000, data 0x1b4b6cc/0x1cc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:20.652983+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 54722560 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:21.653251+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 54722560 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:22.653404+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2957029 data_alloc: 218103808 data_used: 10440704
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 54722560 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:23.653727+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 54722560 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:24.653977+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 54804480 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee53a000/0x0/0x4ffc00000, data 0x1b4e6cc/0x1cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:25.654098+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 54804480 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:26.655980+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 54804480 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee53a000/0x0/0x4ffc00000, data 0x1b4e6cc/0x1cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:27.656191+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956373 data_alloc: 218103808 data_used: 10440704
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 54804480 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:28.656423+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 54804480 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:29.656616+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:30.656741+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:31.656895+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee53a000/0x0/0x4ffc00000, data 0x1b4e6cc/0x1cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:32.657073+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956373 data_alloc: 218103808 data_used: 10440704
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:33.657224+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:34.657408+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 54788096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee53a000/0x0/0x4ffc00000, data 0x1b4e6cc/0x1cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:35.657555+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 54788096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:36.657747+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 54788096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:37.657892+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956373 data_alloc: 218103808 data_used: 10440704
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 54788096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:38.658123+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 54788096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:39.658271+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee53a000/0x0/0x4ffc00000, data 0x1b4e6cc/0x1cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 54788096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:40.658402+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 54788096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:41.658537+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 54788096 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:42.658663+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956693 data_alloc: 218103808 data_used: 10448896
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282681344 unmapped: 54779904 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:43.658814+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee53a000/0x0/0x4ffc00000, data 0x1b4e6cc/0x1cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282681344 unmapped: 54779904 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:44.659011+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee53a000/0x0/0x4ffc00000, data 0x1b4e6cc/0x1cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282681344 unmapped: 54779904 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:45.659141+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282681344 unmapped: 54779904 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694400 session 0x55a129d7ef00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a1276212c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a1277a2000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:46.659278+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694c00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694c00 session 0x55a1285794a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.608951569s of 30.004426956s, submitted: 63
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283533312 unmapped: 53927936 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129d7fe00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129eb3c20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a128579a40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694400 session 0x55a129ce8d20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286df800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286df800 session 0x55a129d8fa40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:47.659412+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004634 data_alloc: 218103808 data_used: 10448896
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283009024 unmapped: 54452224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:48.659611+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283009024 unmapped: 54452224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:49.659795+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283009024 unmapped: 54452224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:50.659979+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283009024 unmapped: 54452224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:51.660125+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4802.4 total, 600.0 interval
                                           Cumulative writes: 33K writes, 133K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 33K writes, 11K syncs, 2.85 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3363 writes, 13K keys, 3363 commit groups, 1.0 writes per commit group, ingest: 13.80 MB, 0.02 MB/s
                                           Interval WAL: 3363 writes, 1299 syncs, 2.59 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283009024 unmapped: 54452224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:52.660275+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004810 data_alloc: 218103808 data_used: 10448896
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283009024 unmapped: 54452224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:53.660467+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283009024 unmapped: 54452224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:54.660683+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283017216 unmapped: 54444032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:55.660868+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283017216 unmapped: 54444032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:56.660997+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283017216 unmapped: 54444032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:57.661130+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004810 data_alloc: 218103808 data_used: 10448896
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283017216 unmapped: 54444032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:58.661340+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129eb21e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283017216 unmapped: 54444032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [1])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.558760643s of 12.719706535s, submitted: 25
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:59.661501+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283017216 unmapped: 54444032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:00.661740+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283017216 unmapped: 54444032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:01.661961+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283025408 unmapped: 54435840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:02.662188+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004398 data_alloc: 218103808 data_used: 10452992
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 283025408 unmapped: 54435840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:03.662350+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 54730752 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:04.662472+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:05.662589+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:06.662702+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:07.662881+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033838 data_alloc: 218103808 data_used: 14458880
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:08.663073+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:09.663196+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:10.663341+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:11.663488+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:12.663622+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033838 data_alloc: 218103808 data_used: 14458880
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:13.663767+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 54796288 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ee03f000/0x0/0x4ffc00000, data 0x20496cc/0x21bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:14.699482+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.305363655s of 15.309145927s, submitted: 1
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 49815552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:15.699674+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287997952 unmapped: 49463296 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4ce000/0x0/0x4ffc00000, data 0x2a1a6cc/0x2b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:16.699816+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289251328 unmapped: 48209920 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:17.700002+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3117286 data_alloc: 218103808 data_used: 15851520
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289251328 unmapped: 48209920 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a4000/0x0/0x4ffc00000, data 0x2a436cc/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:18.700213+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a4000/0x0/0x4ffc00000, data 0x2a436cc/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289251328 unmapped: 48209920 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:19.700410+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289251328 unmapped: 48209920 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:20.700642+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a4000/0x0/0x4ffc00000, data 0x2a436cc/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289251328 unmapped: 48209920 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:21.700903+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289251328 unmapped: 48209920 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:22.701064+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110502 data_alloc: 218103808 data_used: 15851520
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a4000/0x0/0x4ffc00000, data 0x2a436cc/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288940032 unmapped: 48521216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:23.701203+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288940032 unmapped: 48521216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:24.701379+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288940032 unmapped: 48521216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:25.701638+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288940032 unmapped: 48521216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:26.701799+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288948224 unmapped: 48513024 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.085641861s of 12.817085266s, submitted: 90
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:27.701936+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3112330 data_alloc: 218103808 data_used: 15937536
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288948224 unmapped: 48513024 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:28.702183+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a1000/0x0/0x4ffc00000, data 0x2a476cc/0x2bbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288948224 unmapped: 48513024 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129eb25a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a128152d20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:29.702348+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288956416 unmapped: 48504832 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:30.702511+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 288964608 unmapped: 48496640 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:31.702705+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694400 session 0x55a12761cf00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:32.702864+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed399000/0x0/0x4ffc00000, data 0x1b4f6cc/0x1cc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2964738 data_alloc: 218103808 data_used: 10526720
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:33.703021+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:34.703227+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:35.703375+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed399000/0x0/0x4ffc00000, data 0x1b4f6cc/0x1cc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:36.703576+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:37.703727+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2964738 data_alloc: 218103808 data_used: 10526720
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:38.703959+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286ad000 session 0x55a12962be00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a12855c3c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:39.704113+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.591695786s of 12.656030655s, submitted: 24
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:40.704377+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12850c3c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:41.704559+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:42.704739+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:43.704876+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:44.704998+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285818880 unmapped: 51642368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:45.705128+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285827072 unmapped: 51634176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:46.705383+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285827072 unmapped: 51634176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:47.705528+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285827072 unmapped: 51634176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:48.705695+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285827072 unmapped: 51634176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:49.705866+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285827072 unmapped: 51634176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:50.706036+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285827072 unmapped: 51634176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:51.706171+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285827072 unmapped: 51634176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:52.706394+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285835264 unmapped: 51625984 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:53.706583+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285835264 unmapped: 51625984 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:54.706742+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285835264 unmapped: 51625984 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:55.706933+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285835264 unmapped: 51625984 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:56.707096+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285835264 unmapped: 51625984 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:57.707394+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285843456 unmapped: 51617792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:58.707641+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285843456 unmapped: 51617792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:59.707778+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285843456 unmapped: 51617792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:00.707931+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285843456 unmapped: 51617792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:01.708059+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285843456 unmapped: 51617792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:02.708247+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285843456 unmapped: 51617792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:03.708420+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285843456 unmapped: 51617792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:04.708589+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285843456 unmapped: 51617792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:05.708765+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285851648 unmapped: 51609600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:06.708900+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285851648 unmapped: 51609600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:07.709068+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285851648 unmapped: 51609600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:08.709257+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285851648 unmapped: 51609600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:09.711778+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285851648 unmapped: 51609600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:10.711926+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285851648 unmapped: 51609600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:11.712079+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285851648 unmapped: 51609600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:12.712279+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285851648 unmapped: 51609600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:13.712447+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285859840 unmapped: 51601408 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:14.712591+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285859840 unmapped: 51601408 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:15.712746+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285859840 unmapped: 51601408 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:16.712857+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285859840 unmapped: 51601408 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:17.712974+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285868032 unmapped: 51593216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:18.713215+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285868032 unmapped: 51593216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:19.713361+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285868032 unmapped: 51593216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:20.713557+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285868032 unmapped: 51593216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:21.713754+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285868032 unmapped: 51593216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:22.713894+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866172 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 285868032 unmapped: 51593216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a12855c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a1294e12c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:23.714021+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694400 session 0x55a1294e01e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a1294e1680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.633911133s of 43.654327393s, submitted: 5
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a1285ada40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129e9a780
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12abf9000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a129e9a1e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694400 session 0x55a129e9ad20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a1285781e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4edca0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 51003392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:24.714177+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 51003392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed3fd000/0x0/0x4ffc00000, data 0x1aeb6cc/0x1c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:25.714374+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 51003392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:26.714613+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 51003392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:27.714835+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed3fd000/0x0/0x4ffc00000, data 0x1aeb6cc/0x1c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2937838 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a128579c20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 51003392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:28.715040+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a128578d20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 50995200 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:29.715219+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12abf9000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a128075680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed3fd000/0x0/0x4ffc00000, data 0x1aeb6cc/0x1c61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b694400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b694400 session 0x55a1286652c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 50995200 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:30.715370+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 50995200 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:31.715524+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed3fc000/0x0/0x4ffc00000, data 0x1aeb6db/0x1c62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:32.715754+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed3fc000/0x0/0x4ffc00000, data 0x1aeb6db/0x1c62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001436 data_alloc: 218103808 data_used: 16015360
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:33.715949+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed3fc000/0x0/0x4ffc00000, data 0x1aeb6db/0x1c62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10b9f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:34.716080+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:35.716225+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.580469131s of 11.722208023s, submitted: 15
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 50962432 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:36.716411+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:37.716552+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecfec000/0x0/0x4ffc00000, data 0x1aeb6db/0x1c62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000380 data_alloc: 218103808 data_used: 16015360
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:38.716734+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:39.716899+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecfec000/0x0/0x4ffc00000, data 0x1aeb6db/0x1c62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecfec000/0x0/0x4ffc00000, data 0x1aeb6db/0x1c62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:40.717089+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:41.717248+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286507008 unmapped: 50954240 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:42.717424+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035518 data_alloc: 218103808 data_used: 16035840
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 48160768 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:43.717598+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291225600 unmapped: 46235648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:44.717731+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291225600 unmapped: 46235648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec9cf000/0x0/0x4ffc00000, data 0x21026db/0x2279000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:45.717853+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291225600 unmapped: 46235648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:46.718017+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.718398094s of 11.239345551s, submitted: 157
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291438592 unmapped: 46022656 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:47.718178+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058784 data_alloc: 218103808 data_used: 16855040
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291438592 unmapped: 46022656 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:48.718343+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291438592 unmapped: 46022656 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:49.718534+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec93d000/0x0/0x4ffc00000, data 0x21926db/0x2309000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291438592 unmapped: 46022656 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:50.718667+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291438592 unmapped: 46022656 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:51.718799+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec93d000/0x0/0x4ffc00000, data 0x21926db/0x2309000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:52.718928+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053172 data_alloc: 218103808 data_used: 16855040
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:53.719114+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec924000/0x0/0x4ffc00000, data 0x21b36db/0x232a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:54.719396+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:55.719681+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:56.719821+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:57.719957+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.055026054s of 11.294532776s, submitted: 35
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129eb3e00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12abf9000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a127620000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a12808dc20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053404 data_alloc: 218103808 data_used: 16855040
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12e96d000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e96d000 session 0x55a128001c20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12e96ec00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:58.720105+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e96ec00 session 0x55a129f4b0e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a129f4bc20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129d7f680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12abf9000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a129eb32c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12e96d000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e96d000 session 0x55a129f4b2c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291528704 unmapped: 45932544 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:59.720250+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec555000/0x0/0x4ffc00000, data 0x25816eb/0x26f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291528704 unmapped: 45932544 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:00.720377+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291528704 unmapped: 45932544 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:01.720538+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291528704 unmapped: 45932544 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:02.720677+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec555000/0x0/0x4ffc00000, data 0x25816eb/0x26f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3087144 data_alloc: 218103808 data_used: 16855040
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291528704 unmapped: 45932544 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:03.721063+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291528704 unmapped: 45932544 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:04.721227+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f38000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f38000 session 0x55a128517e00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291528704 unmapped: 45932544 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:05.721377+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec555000/0x0/0x4ffc00000, data 0x25816eb/0x26f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291528704 unmapped: 45932544 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:06.721481+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:07.721603+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3113064 data_alloc: 234881024 data_used: 19431424
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:08.721779+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:09.721953+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec555000/0x0/0x4ffc00000, data 0x25816eb/0x26f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:10.722073+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:11.722447+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:12.722593+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3113064 data_alloc: 234881024 data_used: 19431424
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:13.722748+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:14.722939+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:15.723131+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec555000/0x0/0x4ffc00000, data 0x25816eb/0x26f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x10faf9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:16.723272+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.307680130s of 19.415519714s, submitted: 9
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:17.723345+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3212432 data_alloc: 234881024 data_used: 19800064
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292724736 unmapped: 44736512 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:18.723477+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ea6c5000/0x0/0x4ffc00000, data 0x32716eb/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292724736 unmapped: 44736512 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:19.723613+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292937728 unmapped: 44523520 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:20.723776+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292937728 unmapped: 44523520 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:21.723932+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ea648000/0x0/0x4ffc00000, data 0x32ee6eb/0x3466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292937728 unmapped: 44523520 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:22.724121+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3218054 data_alloc: 234881024 data_used: 19988480
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292937728 unmapped: 44523520 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:23.724275+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292937728 unmapped: 44523520 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:24.724484+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ea648000/0x0/0x4ffc00000, data 0x32ee6eb/0x3466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292937728 unmapped: 44523520 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:25.724617+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ea648000/0x0/0x4ffc00000, data 0x32ee6eb/0x3466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292937728 unmapped: 44523520 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:26.724775+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292945920 unmapped: 44515328 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:27.724964+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217482 data_alloc: 234881024 data_used: 19988480
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292945920 unmapped: 44515328 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ea627000/0x0/0x4ffc00000, data 0x330f6eb/0x3487000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:28.725119+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292945920 unmapped: 44515328 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:29.725272+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292945920 unmapped: 44515328 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:30.725485+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a129d8e5a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129e9a780
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ea627000/0x0/0x4ffc00000, data 0x330f6eb/0x3487000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292945920 unmapped: 44515328 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12abf9000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:31.725654+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.704324722s of 14.392348289s, submitted: 68
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a129e9be00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291430400 unmapped: 46030848 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:32.726393+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062914 data_alloc: 218103808 data_used: 15761408
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291430400 unmapped: 46030848 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:33.726536+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb777000/0x0/0x4ffc00000, data 0x21c06db/0x2337000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291430400 unmapped: 46030848 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:34.726760+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a127959e00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129d5d2c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:35.727157+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291430400 unmapped: 46030848 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a1285794a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:36.727378+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:37.727524+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:38.728218+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:39.728361+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:40.728500+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:41.728949+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:42.729163+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:43.729612+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:44.729818+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:45.729952+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:46.730101+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:47.730353+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:48.730633+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:49.731018+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:50.731235+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:51.731522+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:52.731714+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:53.731983+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:54.732199+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:55.732394+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:56.732589+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:57.732790+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:58.733001+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:59.733371+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:00.733540+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:01.733773+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:02.733960+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:03.734156+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:04.734575+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:05.734806+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:06.735051+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:07.735270+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:08.735471+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:09.735750+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:10.735927+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:11.736099+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:12.736286+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:13.736487+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:14.736694+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 49766400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:15.736856+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129e9a1e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a1294a74a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 49766400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a1276214a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12abf9000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a128516d20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.088615417s of 44.159133911s, submitted: 19
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12844c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a129d8f680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a12808c780
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a1299c8000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12e96d000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e96d000 session 0x55a129ce90e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:16.737035+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4c7000/0x0/0x4ffc00000, data 0x14706dc/0x15e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:17.737220+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:18.737467+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906377 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:19.737639+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:20.737850+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:21.738044+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4c7000/0x0/0x4ffc00000, data 0x14706dc/0x15e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:22.738189+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4c7000/0x0/0x4ffc00000, data 0x14706dc/0x15e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a1280734a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:23.738359+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2909117 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:24.738517+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:25.738684+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:26.738822+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a3000/0x0/0x4ffc00000, data 0x14946dc/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:27.739089+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:28.739296+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924957 data_alloc: 218103808 data_used: 9543680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a3000/0x0/0x4ffc00000, data 0x14946dc/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:29.739438+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:30.739590+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:31.739734+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:32.739864+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a3000/0x0/0x4ffc00000, data 0x14946dc/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:33.740026+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924957 data_alloc: 218103808 data_used: 9543680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a3000/0x0/0x4ffc00000, data 0x14946dc/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:34.740172+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.063507080s of 19.188037872s, submitted: 5
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:35.740378+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286867456 unmapped: 50593792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:36.740890+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 50577408 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:37.741006+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:38.741338+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004787 data_alloc: 218103808 data_used: 10338304
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebc76000/0x0/0x4ffc00000, data 0x1cc16dc/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:39.743174+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:40.743482+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:41.743867+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:42.744126+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebc76000/0x0/0x4ffc00000, data 0x1cc16dc/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:43.744341+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3005123 data_alloc: 218103808 data_used: 10346496
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:44.744567+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:45.745372+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:46.745738+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:47.746011+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a1294e1680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12855d0e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b4d1000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a129f4ab40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b692800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b692800 session 0x55a1294f3860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.005581856s of 12.598716736s, submitted: 51
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 45473792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12962b2c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12844d2c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebc76000/0x0/0x4ffc00000, data 0x1cc16dc/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129f4a780
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b4d1000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a1294a7c20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1299f6c00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1299f6c00 session 0x55a12761c960
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:48.746351+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057527 data_alloc: 218103808 data_used: 10346496
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:49.746734+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:50.747115+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:51.747293+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb68e000/0x0/0x4ffc00000, data 0x22a86ec/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:52.747540+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:53.747719+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057527 data_alloc: 218103808 data_used: 10346496
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:54.747893+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:55.748224+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb68e000/0x0/0x4ffc00000, data 0x22a86ec/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129eb2f00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:56.748383+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a129e9d860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129eb25a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b4d1000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a129d8e780
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:57.748516+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698c00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12e49fc00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:58.748690+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3059899 data_alloc: 218103808 data_used: 10350592
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:59.748794+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb66a000/0x0/0x4ffc00000, data 0x22cc6ec/0x2444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:00.748942+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:01.749128+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:02.749267+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb66a000/0x0/0x4ffc00000, data 0x22cc6ec/0x2444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:03.749446+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3104219 data_alloc: 218103808 data_used: 16457728
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:04.749617+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:05.749807+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb66a000/0x0/0x4ffc00000, data 0x22cc6ec/0x2444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:06.749985+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:07.750138+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:08.750383+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3104219 data_alloc: 218103808 data_used: 16457728
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:09.750619+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.668926239s of 21.840452194s, submitted: 15
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292233216 unmapped: 45228032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:10.750757+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebda6000/0x0/0x4ffc00000, data 0x2bca6ec/0x2d42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292864000 unmapped: 44597248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:11.759557+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:12.759691+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:13.759839+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194841 data_alloc: 234881024 data_used: 18636800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:14.760041+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:15.760229+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebd8e000/0x0/0x4ffc00000, data 0x2be06ec/0x2d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:16.760401+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebd8e000/0x0/0x4ffc00000, data 0x2be06ec/0x2d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:17.760541+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:18.760767+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3196601 data_alloc: 234881024 data_used: 18771968
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:19.760953+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebd8e000/0x0/0x4ffc00000, data 0x2be06ec/0x2d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:20.761192+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:21.761431+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698c00 session 0x55a128547a40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e49fc00 session 0x55a12761c1e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.503499985s of 12.766171455s, submitted: 100
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:22.761590+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a128516b40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292880384 unmapped: 44580864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:23.761812+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013621 data_alloc: 218103808 data_used: 10407936
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292880384 unmapped: 44580864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eccb6000/0x0/0x4ffc00000, data 0x1cc16dc/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:24.761971+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292880384 unmapped: 44580864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:25.762135+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292880384 unmapped: 44580864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a1294a70e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a1294f3a40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:26.762336+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12850d860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:27.762541+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:28.762712+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:29.762941+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:30.763077+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:31.763267+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:32.763479+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:33.763664+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:34.763939+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:35.764120+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:36.764375+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:37.764558+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:38.764748+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:39.764906+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:40.765062+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:41.765244+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:42.765456+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:43.765671+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:44.765851+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:45.766041+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:46.766242+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:47.766388+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:48.766557+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:49.766729+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:50.766900+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:51.767118+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:52.767405+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:53.767602+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:54.767763+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:55.767974+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:56.768170+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:57.768396+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:58.768582+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:59.768787+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:00.768990+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:01.769169+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:02.769379+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:03.769581+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:04.769725+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.827690125s of 42.880455017s, submitted: 18
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290463744 unmapped: 46997504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a128094f00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a128073860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:05.769885+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a127959e00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129d5de00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12e49fc00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e49fc00 session 0x55a12ad874a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:06.770071+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed282000/0x0/0x4ffc00000, data 0x16f66cc/0x186c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:07.770296+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:08.770532+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939984 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed282000/0x0/0x4ffc00000, data 0x16f66cc/0x186c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:09.770728+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12962a1e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:10.770864+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:11.771009+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:12.771126+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290594816 unmapped: 46866432 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:13.771252+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2969544 data_alloc: 218103808 data_used: 11149312
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290594816 unmapped: 46866432 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed25e000/0x0/0x4ffc00000, data 0x171a6cc/0x1890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:14.771376+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed25e000/0x0/0x4ffc00000, data 0x171a6cc/0x1890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:15.771484+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:16.771628+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:17.771762+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:18.771906+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2969544 data_alloc: 218103808 data_used: 11149312
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:19.772019+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed25e000/0x0/0x4ffc00000, data 0x171a6cc/0x1890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:20.772547+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:21.772684+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed25e000/0x0/0x4ffc00000, data 0x171a6cc/0x1890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289472512 unmapped: 47988736 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:22.772859+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.737442017s of 17.874473572s, submitted: 19
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291201024 unmapped: 46260224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:23.773053+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013660 data_alloc: 218103808 data_used: 11976704
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:24.773186+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:25.773362+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:26.773561+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:27.773728+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:28.773888+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3015580 data_alloc: 218103808 data_used: 12120064
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:29.774095+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:30.774413+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:31.774659+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:32.774820+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:33.774976+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3015900 data_alloc: 218103808 data_used: 12128256
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:34.775177+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:35.775361+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:36.775508+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:37.775772+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.645758629s of 14.787140846s, submitted: 29
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a12844c960
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a12ad87680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b4d1000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a12ad87a40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935dc00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935dc00 session 0x55a12ad865a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12ad86f00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:38.776021+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051303 data_alloc: 218103808 data_used: 12128256
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:39.776203+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:40.776389+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:41.776545+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:42.776680+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:43.776868+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051303 data_alloc: 218103808 data_used: 12128256
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291225600 unmapped: 46235648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:44.777001+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291225600 unmapped: 46235648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:45.777192+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a12962a5a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291233792 unmapped: 46227456 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:46.777368+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b4d1000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291299328 unmapped: 46161920 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:47.777507+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:48.777661+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3077383 data_alloc: 218103808 data_used: 15802368
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:49.777780+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:50.777941+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:51.778090+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:52.778234+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:53.778837+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3077383 data_alloc: 218103808 data_used: 15802368
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:54.779367+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:55.779858+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:56.780351+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:57.780669+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.151699066s of 20.260654449s, submitted: 29
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293240832 unmapped: 44220416 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:58.780876+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131253 data_alloc: 218103808 data_used: 16269312
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293249024 unmapped: 44212224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:59.781010+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:00.781232+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:01.781486+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:02.781819+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec31b000/0x0/0x4ffc00000, data 0x264e72e/0x27c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:03.782016+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3142553 data_alloc: 218103808 data_used: 16080896
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:04.782142+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:05.782368+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 42926080 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:06.782495+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a12962a780
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a129ac61e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:07.782611+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 42926080 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007dc00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a13007dc00 session 0x55a129ce85a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:08.782777+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb2f000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3022497 data_alloc: 218103808 data_used: 12189696
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:09.782950+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:10.783087+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.365324974s of 12.755796432s, submitted: 111
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:11.783385+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:12.783630+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a129d7fa40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a129d8f2c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a127620780
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:13.783801+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:14.783953+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:15.784195+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:16.784507+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:17.784664+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:18.784968+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:19.785191+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:20.785396+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:21.785601+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:22.785795+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:23.786073+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:24.786434+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:25.786652+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:26.786847+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:27.787059+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:28.787375+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:29.787544+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:30.787737+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:31.787940+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:32.788223+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:33.788596+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:34.788782+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:35.788989+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:36.789151+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:37.789297+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:38.789512+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:39.789702+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:40.789887+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:41.790029+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 43638784 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:42.790189+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 43638784 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:43.790358+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 43638784 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:44.790494+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 33.636383057s of 33.703060150s, submitted: 22
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298565632 unmapped: 38895616 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,7])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a128517c20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129e9b860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129d5cb40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a1280952c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a12761c780
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:45.790659+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:46.790805+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:47.790927+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed2ab000/0x0/0x4ffc00000, data 0x16cd6cc/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129f4a1e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:48.791142+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a12962b4a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956231 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:49.791337+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a128517680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12761c3c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:50.791589+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 43474944 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:51.791793+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:52.791963+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:53.792183+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a12962b680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a128665860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993716 data_alloc: 218103808 data_used: 12095488
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:54.792374+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:55.792516+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:56.792808+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:57.792952+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:58.793217+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b4d1000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007dc00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.936422348s of 14.315987587s, submitted: 15
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993848 data_alloc: 218103808 data_used: 12095488
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:59.793368+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:00.793537+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a12ad865a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a13007dc00 session 0x55a129d5cf00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:01.793663+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:02.793881+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:03.794058+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993716 data_alloc: 218103808 data_used: 12095488
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:04.794249+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:05.794474+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:06.795276+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:07.795369+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12ad87a40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a129eb21e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:08.795494+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a12ad87680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:09.795646+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:10.795864+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:11.796280+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:12.796588+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:13.796807+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:14.796960+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:15.797252+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:16.797480+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:17.797745+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:18.797968+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:19.798163+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:20.798412+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:21.798623+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:22.798938+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:23.799090+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:24.799246+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:25.799366+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:26.799489+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:27.799600+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:28.799748+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:29.799915+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:30.800052+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:31.800413+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:32.800586+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:33.800839+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:34.801087+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:35.801228+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:36.801507+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:37.801672+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:38.801904+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:39.802117+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:40.802539+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:41.802727+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:42.802934+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:43.803199+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:44.803386+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:45.803563+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291512320 unmapped: 45948928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:46.803794+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291512320 unmapped: 45948928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 48.175201416s of 48.326942444s, submitted: 23
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129d8e780
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:47.803955+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12844d2c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a1294f3860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a1294e1680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007dc00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a13007dc00 session 0x55a12761cf00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:48.804164+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:49.804376+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999881 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:50.804512+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:51.804673+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:52.804863+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12d8ce800
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12d8ce800 session 0x55a128094f00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:53.804988+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:54.805148+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000013 data_alloc: 218103808 data_used: 7364608
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:55.805781+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:56.805936+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:57.806199+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:58.806582+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:59.806875+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064813 data_alloc: 218103808 data_used: 16584704
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:00.807024+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:01.831237+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:02.833156+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:03.834451+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:04.836847+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064813 data_alloc: 218103808 data_used: 16584704
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:05.837636+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.436155319s of 18.604982376s, submitted: 24
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 43556864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:06.837877+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec693000/0x0/0x4ffc00000, data 0x22e56cc/0x245b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:07.838448+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec685000/0x0/0x4ffc00000, data 0x22f26cc/0x2468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:08.838869+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:09.839232+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134245 data_alloc: 234881024 data_used: 17334272
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:10.839759+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec685000/0x0/0x4ffc00000, data 0x22f26cc/0x2468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:11.839905+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:12.840043+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec685000/0x0/0x4ffc00000, data 0x22f26cc/0x2468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:13.840355+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:14.840512+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134245 data_alloc: 234881024 data_used: 17334272
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:15.840584+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:16.840848+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:17.840970+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.365725517s of 12.565147400s, submitted: 42
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129d5de00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12855c3c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:18.841146+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec685000/0x0/0x4ffc00000, data 0x22f26cc/0x2468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a129d7e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:19.841280+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131957 data_alloc: 234881024 data_used: 17362944
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007dc00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294060032 unmapped: 43401216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:20.841424+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 266 ms_handle_reset con 0x55a13007dc00 session 0x55a129f53860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 266 ms_handle_reset con 0x55a127eb5400 session 0x55a12808d2c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 266 ms_handle_reset con 0x55a1286a5000 session 0x55a128579a40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 266 ms_handle_reset con 0x55a127eb5400 session 0x55a129f4a5a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 299204608 unmapped: 41099264 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 267 handle_osd_map epochs [267,267], i have 267, src has [1,267]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 267 ms_handle_reset con 0x55a127698400 session 0x55a1299c9860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:21.841551+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 40034304 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:22.841697+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 267 ms_handle_reset con 0x55a1286a4000 session 0x55a129ac6960
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 267 heartbeat osd_stat(store_statfs(0x4ebc4a000/0x0/0x4ffc00000, data 0x2d29e2a/0x2ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 267 handle_osd_map epochs [268,268], i have 268, src has [1,268]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 268 ms_handle_reset con 0x55a12935d400 session 0x55a1294f2b40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:23.841901+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 268 ms_handle_reset con 0x55a127698400 session 0x55a12ad86000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 268 ms_handle_reset con 0x55a127eb5400 session 0x55a12761d2c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 268 heartbeat osd_stat(store_statfs(0x4ebc46000/0x0/0x4ffc00000, data 0x2d2b9c3/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:24.842025+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3255591 data_alloc: 234881024 data_used: 24862720
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 268 heartbeat osd_stat(store_statfs(0x4ebc46000/0x0/0x4ffc00000, data 0x2d2b9c3/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:25.842348+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 268 heartbeat osd_stat(store_statfs(0x4ebc46000/0x0/0x4ffc00000, data 0x2d2b9c3/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:26.842577+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 268 heartbeat osd_stat(store_statfs(0x4ebc46000/0x0/0x4ffc00000, data 0x2d2b9c3/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:27.842713+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 40017920 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.536736488s of 10.118284225s, submitted: 64
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:28.842933+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a1286a4000 session 0x55a127620960
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:29.843120+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033163 data_alloc: 218103808 data_used: 7397376
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:30.843413+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:31.843607+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:32.843760+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 269 heartbeat osd_stat(store_statfs(0x4eccee000/0x0/0x4ffc00000, data 0x1c83426/0x1dff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:33.843913+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:34.844088+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033163 data_alloc: 218103808 data_used: 7397376
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:35.844273+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a1286a5000 session 0x55a12844c960
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007dc00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127eb4000 session 0x55a129e9af00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a13007dc00 session 0x55a1285474a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127698400 session 0x55a129ce81e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127eb5400 session 0x55a129f53c20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:36.844374+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a1286a4000 session 0x55a129ac6f00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a1286a5000 session 0x55a1294a7860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 316350464 unmapped: 36495360 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 269 heartbeat osd_stat(store_statfs(0x4eb5ae000/0x0/0x4ffc00000, data 0x33c3488/0x3540000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,1,0,0,4,1,3])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:37.844507+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127698400 session 0x55a129f4b680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 46792704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:38.844778+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 46792704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127eb5400 session 0x55a128001c20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:39.845003+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 269 heartbeat osd_stat(store_statfs(0x4eb5aa000/0x0/0x4ffc00000, data 0x33c7488/0x3544000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263674 data_alloc: 234881024 data_used: 18100224
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 46792704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:40.845197+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007dc00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 46792704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:41.845366+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.454534531s of 13.402193069s, submitted: 53
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 270 ms_handle_reset con 0x55a13007dc00 session 0x55a1299c9c20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:42.845492+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:43.845626+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 270 heartbeat osd_stat(store_statfs(0x4ebfdb000/0x0/0x4ffc00000, data 0x2995049/0x2b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:44.845762+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194453 data_alloc: 218103808 data_used: 15204352
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:45.845875+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 270 heartbeat osd_stat(store_statfs(0x4ebfdb000/0x0/0x4ffc00000, data 0x2995049/0x2b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:46.846120+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 270 heartbeat osd_stat(store_statfs(0x4ebfdb000/0x0/0x4ffc00000, data 0x2995049/0x2b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:47.846521+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfdb000/0x0/0x4ffc00000, data 0x2995049/0x2b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:48.846786+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:49.846910+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197427 data_alloc: 218103808 data_used: 15204352
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:50.847061+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:51.847275+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:53.788035+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.603380203s of 11.698580742s, submitted: 38
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd8000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298754048 unmapped: 54091776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:54.788185+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216491 data_alloc: 234881024 data_used: 17309696
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:55.788360+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:56.788537+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:57.788712+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:58.788868+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:59.789058+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216651 data_alloc: 234881024 data_used: 17313792
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:00.789261+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:01.789415+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:02.789592+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:03.789781+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:04.789945+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216651 data_alloc: 234881024 data_used: 17313792
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:05.790106+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:06.790285+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:07.790449+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.006009102s of 14.082138062s, submitted: 3
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:08.790581+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:09.790724+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216667 data_alloc: 234881024 data_used: 17309696
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:10.790865+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:11.791095+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:12.791228+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:13.791364+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:14.791504+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216667 data_alloc: 234881024 data_used: 17309696
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:15.791627+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:16.791787+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:17.791912+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.220427513s of 10.246352196s, submitted: 3
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a4000 session 0x55a12ad87860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a5000 session 0x55a129eb3860
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:18.792049+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a5000 session 0x55a12ad865a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:19.792248+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972369 data_alloc: 218103808 data_used: 7401472
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:20.792392+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:21.792529+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:22.792649+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:23.792807+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:24.792935+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972529 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:25.793084+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:26.793231+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:27.793432+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:28.793600+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:29.793788+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972529 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:30.793965+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:31.794128+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:32.794272+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:33.794459+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:34.794600+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972529 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:35.794752+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:36.794920+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:37.795058+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:38.795191+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:39.795421+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972529 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:40.795569+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:41.795690+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:42.795925+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.520774841s of 24.589715958s, submitted: 20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127698400 session 0x55a129f4a1e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127eb5400 session 0x55a129d8f2c0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a4000 session 0x55a129ce85a0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007dc00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a13007dc00 session 0x55a129d5d680
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127698400 session 0x55a12850cf00
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:43.796073+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ecd3c000/0x0/0x4ffc00000, data 0x1c34a4a/0x1db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289701888 unmapped: 63143936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127eb5400 session 0x55a12ad86960
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:44.796288+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a4000 session 0x55a129d5cd20
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289701888 unmapped: 63143936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048048 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a5000 session 0x55a1285ba780
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007e000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:45.796430+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289701888 unmapped: 63143936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a13007e000 session 0x55a129d7f0e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:46.797393+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290750464 unmapped: 62095360 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:47.797545+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290750464 unmapped: 62095360 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:48.797633+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 58523648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127698400 session 0x55a129eb30e0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127eb5400 session 0x55a12844cb40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ecd3c000/0x0/0x4ffc00000, data 0x1c34a4a/0x1db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a4000 session 0x55a129ac6b40
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:49.797748+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:50.797892+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:51.798077+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5402.4 total, 600.0 interval
                                           Cumulative writes: 35K writes, 142K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.84 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1830 writes, 8268 keys, 1830 commit groups, 1.0 writes per commit group, ingest: 10.35 MB, 0.02 MB/s
                                           Interval WAL: 1830 writes, 682 syncs, 2.68 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets getting new tickets!
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:52.798448+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _finish_auth 0
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:52.826254+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:53.798586+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:54.798729+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:55.798851+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:56.798978+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:57.799092+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:58.799276+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:59.799497+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:00.799712+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:01.799911+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:02.800086+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:03.800247+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:04.800441+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:35 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:35 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:05.800654+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:06.800786+0000)
Nov 25 09:36:35 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:35 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:07.800909+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:08.801056+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:09.801228+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:10.801347+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:11.801488+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:12.801715+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: mgrc ms_handle_reset ms_handle_reset con 0x55a129f3ac00
Nov 25 09:36:36 compute-0 ceph-osd[90711]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3119838916
Nov 25 09:36:36 compute-0 ceph-osd[90711]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3119838916,v1:192.168.122.100:6801/3119838916]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: get_auth_request con 0x55a13007e000 auth_method 0
Nov 25 09:36:36 compute-0 ceph-osd[90711]: mgrc handle_mgr_configure stats_period=5
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:13.801932+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:14.802149+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:15.802348+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:16.802761+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:17.802914+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:18.803030+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:19.803163+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:20.803271+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:21.803455+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:22.803733+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:23.803941+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:24.804112+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:25.804418+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:26.804580+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:27.804721+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:28.804843+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:29.805002+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:30.805197+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:31.805392+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:32.805589+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:33.805728+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:34.805884+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:35.806036+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:36.806185+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:37.806279+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:38.806364+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:39.806555+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:40.806689+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:41.806839+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:42.806960+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:43.807116+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:44.807278+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:45.807447+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:46.807584+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:47.807738+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:48.807941+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:49.808110+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:50.808270+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:51.808472+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:52.808612+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:53.808763+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:54.808926+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:55.809069+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:56.809206+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:57.809425+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:58.809607+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:59.809769+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 77.009826660s of 77.842643738s, submitted: 41
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2977528 data_alloc: 218103808 data_used: 7405568
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:00.809916+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293298176 unmapped: 59547648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 272 ms_handle_reset con 0x55a1286a5000 session 0x55a129eb2b40
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:01.810111+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286de800
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293298176 unmapped: 59547648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:02.810282+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 273 ms_handle_reset con 0x55a1286de800 session 0x55a129d7eb40
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293314560 unmapped: 59531264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:03.810462+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293314560 unmapped: 59531264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:04.810632+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293314560 unmapped: 59531264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923180 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:05.810820+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293322752 unmapped: 59523072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 273 heartbeat osd_stat(store_statfs(0x4edfb7000/0x0/0x4ffc00000, data 0x9b7089/0xb35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:06.810994+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293322752 unmapped: 59523072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:07.811201+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293322752 unmapped: 59523072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:08.811407+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb7000/0x0/0x4ffc00000, data 0x9b7089/0xb35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:09.811739+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:10.811920+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:11.812056+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:12.812215+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:13.812393+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:14.812578+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:15.812718+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:16.812861+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:17.813164+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:18.813459+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:19.813642+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:20.814048+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293363712 unmapped: 59482112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:21.814342+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293363712 unmapped: 59482112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:22.814477+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293363712 unmapped: 59482112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 ms_handle_reset con 0x55a12769b400 session 0x55a1294e1a40
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:23.814636+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293363712 unmapped: 59482112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:24.814775+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:25.814897+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:26.815103+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:27.815293+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:28.815473+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:29.815694+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:30.815854+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:31.816039+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:32.816234+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:33.816546+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:34.816684+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:35.816907+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.864181519s of 36.010292053s, submitted: 49
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:36.817046+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293396480 unmapped: 59449344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:37.817221+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:38.817375+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:39.817565+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:40.817677+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:41.817876+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:42.818046+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:43.818208+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:44.818391+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:45.818769+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:46.818995+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:47.819138+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:48.819254+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:49.819351+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:50.819472+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:51.819637+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:52.819825+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:53.820208+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:54.820379+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:55.820547+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:56.820735+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:57.820880+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:58.820991+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:59.821135+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:00.821277+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:01.821368+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:02.821525+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:03.821729+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:04.821941+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:05.822054+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 58343424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:06.822445+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 58343424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:07.822629+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 58343424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:08.822806+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 58343424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:09.823033+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:10.823261+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:11.823446+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:12.823736+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:13.823956+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:14.824117+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:15.824283+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:16.824511+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:17.824682+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:18.824904+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:19.825135+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:20.825272+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:21.825426+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:22.825681+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:23.825951+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:24.826094+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:25.826275+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:26.826499+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:27.826632+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:28.826844+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:29.827047+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:30.827192+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:31.827320+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:32.827470+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:33.827611+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 58302464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:34.827824+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 58302464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:35.827998+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 58302464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:36.828148+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 58302464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:37.828288+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 58294272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:38.828457+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 58294272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:39.828657+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 58294272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:40.828802+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 58294272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:41.828960+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:42.829115+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:43.829282+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:44.829388+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:45.829630+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:46.829772+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:47.830097+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:48.830262+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 58277888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:49.830457+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:50.830620+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:51.830760+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:52.830898+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:53.831156+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:54.831415+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:55.831646+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:56.831894+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:57.832101+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 58253312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:58.832368+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 58253312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12769b400
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 82.001701355s of 82.319427490s, submitted: 90
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 ms_handle_reset con 0x55a12769b400 session 0x55a1285ba780
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 ms_handle_reset con 0x55a127eb5400 session 0x55a12850cf00
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:59.832636+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:00.832851+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:01.833086+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:02.833272+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:03.833491+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:04.833798+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:05.834027+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:06.834236+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 58228736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:07.834413+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 274 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 275 ms_handle_reset con 0x55a1286a4000 session 0x55a129f4a1e0
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:08.834653+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 275 heartbeat osd_stat(store_statfs(0x4edfb4000/0x0/0x4ffc00000, data 0x9ba687/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:09.834925+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:10.835111+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.517580032s of 12.192517281s, submitted: 57
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925069 data_alloc: 218103808 data_used: 7413760
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:11.835254+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 275 handle_osd_map epochs [275,276], i have 275, src has [1,276]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 275 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:12.835460+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294748160 unmapped: 58097664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 276 heartbeat osd_stat(store_statfs(0x4ee7b2000/0x0/0x4ffc00000, data 0x1bc258/0x33b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 276 ms_handle_reset con 0x55a1286a5000 session 0x55a12850c780
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:13.835610+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:14.835757+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 277 heartbeat osd_stat(store_statfs(0x4ee7af000/0x0/0x4ffc00000, data 0x1bdcd7/0x33e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:15.835909+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2857665 data_alloc: 218103808 data_used: 679936
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:16.836074+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:17.836211+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:18.836426+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 58073088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 277 heartbeat osd_stat(store_statfs(0x4ee7af000/0x0/0x4ffc00000, data 0x1bdcd7/0x33e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 277 handle_osd_map epochs [278,278], i have 278, src has [1,278]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:19.836595+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294789120 unmapped: 58056704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:20.836754+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 278 heartbeat osd_stat(store_statfs(0x4ee7ac000/0x0/0x4ffc00000, data 0x1bf73a/0x341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294789120 unmapped: 58056704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2860639 data_alloc: 218103808 data_used: 679936
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:21.836902+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 278 heartbeat osd_stat(store_statfs(0x4ee7ac000/0x0/0x4ffc00000, data 0x1bf73a/0x341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:22.837045+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 278 heartbeat osd_stat(store_statfs(0x4ee7ac000/0x0/0x4ffc00000, data 0x1bf73a/0x341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:23.837233+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:24.837380+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f44800
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.606686592s of 14.555405617s, submitted: 67
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:25.837526+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 278 handle_osd_map epochs [278,279], i have 278, src has [1,279]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 ms_handle_reset con 0x55a129f44800 session 0x55a129d8f680
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:26.837694+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294805504 unmapped: 58040320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:27.837839+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294805504 unmapped: 58040320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:28.837960+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294805504 unmapped: 58040320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:29.838143+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 58032128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:30.838280+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 58032128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:31.838475+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 58032128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:32.838647+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:33.838794+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:34.838920+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:35.839079+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:36.839240+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:37.839365+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:38.839566+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:39.839796+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:40.839962+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:41.840156+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:42.840395+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:43.840848+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:44.841051+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:45.841223+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 58007552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:46.841422+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 58007552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:47.841605+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 58007552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:48.841842+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 58007552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:49.842198+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 57991168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:50.842460+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 57991168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:51.843424+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 57991168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:52.843589+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 57991168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:53.843748+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:54.843952+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:55.844188+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:56.844396+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:57.844670+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:58.844852+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:59.845444+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:00.845682+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:01.846032+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 57974784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:02.846383+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 57974784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:03.846534+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 57974784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:04.846685+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 57974784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:05.846892+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:06.847102+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:07.847402+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:08.847525+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:09.847758+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:10.847949+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:11.848096+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:12.848427+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:13.848653+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 57950208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:14.848989+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 57950208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:15.849134+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 57950208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:16.849288+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 57950208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:17.849833+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:18.850055+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:19.850365+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:20.850542+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:21.850719+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:22.850807+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:23.850924+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:24.851080+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:25.851248+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:26.851417+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:27.851573+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:28.851765+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:29.851935+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 57925632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:30.852061+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 57925632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:31.852177+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 57925632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:32.852288+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 57925632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:33.852430+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 57917440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:34.852578+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 57917440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:35.853094+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 57917440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:36.853272+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:37.853446+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:38.853612+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:39.853838+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:40.853974+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:41.854123+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:42.854279+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:43.854468+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:44.854605+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:45.854792+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:46.854982+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:47.855154+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:48.855353+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:49.855550+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:50.855682+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:51.855857+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:52.856039+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:53.856180+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:54.856339+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:55.856463+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:56.856623+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294969344 unmapped: 57876480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:57.856780+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:58.856952+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:59.857154+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:00.857338+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:01.857500+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:02.857682+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:03.857878+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:04.858061+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294985728 unmapped: 57860096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:05.858203+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 57851904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:06.858381+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 57851904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:07.858593+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 57851904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:08.858767+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 57851904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:09.858956+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12769b400
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 104.417770386s of 104.486747742s, submitted: 10
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 57843712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:10.859105+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 57843712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:11.859231+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 49455104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:12.859364+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 57843712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:13.859511+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4edfa9000/0x0/0x4ffc00000, data 0x9c12da/0xb45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 57835520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:14.859726+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 57835520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:15.859912+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 57835520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:16.860146+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919522 data_alloc: 218103808 data_used: 688128
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 57835520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:17.860398+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4edfa9000/0x0/0x4ffc00000, data 0x9c12da/0xb45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 279 handle_osd_map epochs [280,280], i have 280, src has [1,280]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 57810944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 280 handle_osd_map epochs [280,280], i have 280, src has [1,280]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:18.860630+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 57810944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:19.860822+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 280 ms_handle_reset con 0x55a12769b400 session 0x55a1294f3a40
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.512447357s of 10.195021629s, submitted: 4
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edfa4000/0x0/0x4ffc00000, data 0x9c2e7a/0xb49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 57810944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:20.860974+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 57802752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:21.861205+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2926341 data_alloc: 218103808 data_used: 696320
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 49414144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:22.861389+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295051264 unmapped: 57794560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:23.861547+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295051264 unmapped: 57794560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:24.861703+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecfa4000/0x0/0x4ffc00000, data 0x19c2e9d/0x1b4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 57786368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:25.861862+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 281 ms_handle_reset con 0x55a127eb5400 session 0x55a1299c8000
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 57786368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:26.862204+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 57786368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:27.862372+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 57786368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:28.862614+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:29.862835+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:30.863099+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:31.863266+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:32.864086+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:33.864262+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:34.865044+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:35.865213+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295075840 unmapped: 57769984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:36.865385+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 57761792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:37.865552+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 57761792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:38.865796+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 57761792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:39.866007+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 57761792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:40.866186+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:41.866378+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:42.866666+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:43.866859+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:44.867151+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:45.867358+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:46.867577+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:47.867775+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:48.867938+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:49.868152+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:50.868395+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:51.868565+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:52.868816+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:53.869013+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:54.869203+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:55.869360+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:56.869513+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:57.869671+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:58.869829+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:59.870005+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:00.870166+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:01.870365+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:02.870536+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:03.870690+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 41.138420105s of 43.667034149s, submitted: 14
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:04.870888+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 57720832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 282 ms_handle_reset con 0x55a1286a4000 session 0x55a129f52780
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 282 heartbeat osd_stat(store_statfs(0x4ecf9e000/0x0/0x4ffc00000, data 0x19c65c8/0x1b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:05.871031+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 57696256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:06.871180+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 57696256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042065 data_alloc: 218103808 data_used: 712704
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:07.871485+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 57696256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:08.871678+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 57696256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 282 heartbeat osd_stat(store_statfs(0x4ecf9e000/0x0/0x4ffc00000, data 0x19c65c8/0x1b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:09.871873+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:10.872060+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:11.872223+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042065 data_alloc: 218103808 data_used: 712704
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 282 heartbeat osd_stat(store_statfs(0x4ecf9e000/0x0/0x4ffc00000, data 0x19c65c8/0x1b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:12.872367+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:13.872578+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 282 handle_osd_map epochs [282,283], i have 282, src has [1,283]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.782243729s of 10.014166832s, submitted: 29
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9e000/0x0/0x4ffc00000, data 0x19c65c8/0x1b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:14.872790+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 57655296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:15.872956+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 57655296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:16.873155+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 57655296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:17.873325+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 57655296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:18.873452+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:19.873612+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:20.873973+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:21.874201+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:22.874340+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:23.874514+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:24.874703+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:25.874888+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:26.875140+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:27.875328+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:28.875451+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:29.875668+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:30.875807+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 57630720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:31.876032+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 57630720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:32.876272+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 57630720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:33.876443+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:34.876570+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:35.876713+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:36.876867+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:37.877110+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:38.877263+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:39.877540+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295231488 unmapped: 57614336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:40.877688+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295231488 unmapped: 57614336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:41.877835+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295239680 unmapped: 57606144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:42.877988+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295239680 unmapped: 57606144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:43.878135+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295239680 unmapped: 57606144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:44.878280+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295239680 unmapped: 57606144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:45.878470+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295247872 unmapped: 57597952 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:46.878656+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:47.878817+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:48.878989+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:49.879265+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:50.879452+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:51.879632+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:52.879773+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:53.879944+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 57573376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:54.880165+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 57573376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:55.880366+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 57573376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:56.880484+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 57573376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:57.880652+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:58.880816+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:59.881040+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:00.881261+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:01.881418+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:02.881591+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:03.881843+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:04.882033+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:05.882232+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:06.882428+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:07.882594+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:08.882787+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:09.882978+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:10.883148+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:11.883374+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:12.883542+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:13.883690+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 57540608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:14.883826+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:15.884008+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:16.884175+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:17.884386+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:18.884471+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:19.884586+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:20.884779+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:21.884988+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295321600 unmapped: 57524224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:22.885154+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:23.885386+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:24.885508+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:25.885693+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:26.885909+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:27.886096+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:28.886400+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:29.886604+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 57507840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:30.886741+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 57507840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:31.886980+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 57507840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:32.887110+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 57507840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:33.887265+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 57499648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:34.887366+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:35.887636+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:36.887799+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:37.888417+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:38.889230+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:39.889679+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:40.890029+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:41.890153+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:42.890637+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:43.890776+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:44.891035+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:45.891170+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:46.891356+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:47.891500+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:48.891972+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:49.892204+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:50.892503+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:51.892659+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 57458688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:52.892903+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 57458688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:53.893050+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 57442304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:54.893206+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 57442304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:55.893396+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 57442304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:56.893796+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 57442304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:57.893938+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 57434112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:58.894223+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 57434112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:59.894392+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 57434112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:00.894581+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 57434112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:01.894706+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 57425920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:02.894885+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:03.895038+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:04.895157+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:05.895418+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:06.895658+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:07.895813+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:08.896116+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:09.896367+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 57409536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:10.896496+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 57409536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:11.896654+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 57401344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:12.896885+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 57401344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:13.897053+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 57393152 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:14.897246+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 57393152 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:15.897436+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 57393152 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:16.897715+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 57393152 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:17.897951+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:18.898087+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:19.898255+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:20.898561+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:21.898746+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:22.899064+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:23.899229+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:24.899382+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 57376768 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:25.899551+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:26.899741+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:27.899896+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:28.900070+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:29.900351+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:30.900552+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:31.900867+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:32.901113+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:33.901349+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 57352192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:34.901505+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 57352192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:35.901657+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 57352192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:36.901898+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 57352192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:37.902072+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 57344000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:38.902272+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 57335808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:39.902556+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 57335808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:40.902700+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 57327616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:41.902870+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 57327616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:42.903003+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 57327616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:43.903140+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:44.903344+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:45.903511+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:46.903647+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:47.903809+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:48.904054+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:49.904523+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:50.904719+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:51.904871+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:52.905092+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:53.905291+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:54.905477+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:55.905623+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:56.905842+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:57.905984+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 57303040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:58.906131+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 57303040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:59.906351+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 57303040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:00.906474+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 57294848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:01.906643+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 57294848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:02.906839+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 57294848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:03.906979+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 57294848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:04.907157+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 57286656 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:05.907369+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:06.907589+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:07.907775+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:08.907989+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:09.908230+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:10.908470+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:11.908648+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:12.908809+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:13.908986+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:14.909129+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:15.909271+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:16.909349+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:17.909528+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:18.909653+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:19.909765+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:20.909924+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295591936 unmapped: 57253888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:21.910061+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 57229312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:22.910216+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 57229312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:23.910356+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 57229312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:24.910526+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 57229312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:25.910749+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:26.910892+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:27.911084+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:28.911255+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:29.911510+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:30.911653+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:31.911870+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:32.912035+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:33.912182+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:34.912371+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:35.912538+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:36.912770+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:37.912940+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:38.913095+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:39.913279+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:40.913409+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:41.913530+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 57188352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:42.913707+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 57188352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:43.913887+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 57188352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:44.914053+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 57188352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:45.914191+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 57180160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:46.914363+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 57180160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:47.914486+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 57180160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:48.914610+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 57180160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:49.914790+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:50.914929+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:51.915051+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6002.4 total, 600.0 interval
                                           Cumulative writes: 36K writes, 143K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 36K writes, 12K syncs, 2.83 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 596 writes, 1534 keys, 596 commit groups, 1.0 writes per commit group, ingest: 0.75 MB, 0.00 MB/s
                                           Interval WAL: 596 writes, 265 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.79              0.00         1    0.790       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.79              0.00         1    0.790       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.79              0.00         1    0.790       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.8 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e13090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e13090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.30              0.00         1    0.300       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.30              0.00         1    0.300       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.30              0.00         1    0.300       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e13090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:52.915167+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:53.915349+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:54.915454+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:55.915573+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:56.915724+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:57.915896+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:58.916054+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:59.916233+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:00.916410+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:01.916547+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 57147392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:02.916652+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 57147392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:03.916802+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 57147392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:04.916951+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 57147392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:05.917082+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 57139200 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:06.917199+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 57139200 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:07.917434+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 57139200 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:08.917588+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 57122816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:09.917745+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 57122816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:10.917858+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 57122816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:11.918017+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 57122816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:12.918148+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:13.918291+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:14.918465+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:15.918640+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:16.918793+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:17.918933+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 57106432 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:18.919059+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 57106432 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:19.919226+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:20.919420+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:21.919578+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:22.919720+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:23.919870+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:24.920013+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:25.920143+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:26.920265+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:27.920402+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:28.920555+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:29.920733+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:30.920874+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:31.921016+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:32.921146+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:33.921279+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:34.921412+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:35.921553+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:36.921673+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:37.921840+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 57065472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:38.921993+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 57065472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:39.922185+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 57065472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:40.922349+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 57057280 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:41.922516+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 57057280 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:42.922656+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:43.922806+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:44.922938+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:45.923101+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:46.923260+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:47.923440+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:48.923587+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 57040896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:49.923794+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 57040896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:50.924878+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 57040896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:51.925578+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 57040896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:52.925958+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 57032704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:53.926119+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 57024512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:54.926340+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 57024512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:55.926482+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 57024512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:56.926601+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 57024512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:57.926997+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 57016320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:58.927273+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:59.927466+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:00.927602+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:01.927737+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:02.928286+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:03.928494+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:04.928640+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:05.928777+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:06.928937+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:07.929196+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:08.929385+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:09.929565+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:10.929906+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:11.930069+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:12.930219+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:13.930356+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:14.930539+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:15.930660+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:16.930822+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:17.931104+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 56967168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:18.931266+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 56967168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:19.931461+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 56967168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:20.931607+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 56967168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:21.931792+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:22.931970+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:23.932177+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:24.932354+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:25.932547+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:26.932738+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:27.932895+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:28.933089+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:29.933273+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 56934400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:30.933448+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 56934400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:31.933640+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 56934400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:32.933813+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 56934400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:33.933978+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 56926208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:34.934194+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 56926208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:35.934354+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 56926208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 322.681640625s of 322.787384033s, submitted: 11
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:36.934545+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 56893440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:37.934683+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:38.934869+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:39.935120+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:40.935281+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:41.935453+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:42.935628+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:43.935773+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:44.935940+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:45.936115+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:46.936295+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:47.936494+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:48.936637+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:49.936855+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:50.936998+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:51.937140+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:52.937294+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:53.937582+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:54.937869+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:55.938095+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:56.938289+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:57.938523+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:58.938721+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:59.938890+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:00.939114+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:01.939935+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:02.940360+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:03.940579+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:04.940722+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:05.940894+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:06.941111+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:07.941347+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:08.941485+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:09.941651+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 56836096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:10.941824+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 56836096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:11.941957+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 56836096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:12.942143+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 56836096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:13.942374+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 56827904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:14.942530+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 56827904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:15.942718+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 56827904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:16.942912+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 56827904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:17.943083+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 56819712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:18.943253+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:19.943517+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:20.943692+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:21.943871+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:22.944057+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:23.944240+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 56803328 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:24.944425+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 56786944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:25.944596+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 56786944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:26.944744+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 56786944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:27.944910+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 56786944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:28.944992+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:29.945126+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:30.945240+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:31.945389+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:32.945519+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:33.945663+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:34.945764+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 56770560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:35.945936+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 56770560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:36.946104+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 56770560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:37.946222+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 56762368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:38.946402+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 56762368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:39.946595+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 56762368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:40.946728+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 56762368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:41.946878+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 56754176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:42.947009+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:43.947140+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:44.947258+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:45.947400+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:46.947612+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:47.947796+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:48.947968+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:49.948150+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 56737792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:50.948367+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 56737792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:51.948507+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 56737792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:52.948651+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 56737792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:53.948818+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 56729600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:54.948968+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 56729600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:55.949106+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 56721408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:56.949238+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 56721408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:57.949373+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:58.949509+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:59.949779+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:00.949950+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:01.950114+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:02.950374+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:03.950539+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:04.950795+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:05.950990+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 56705024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:06.951162+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 56705024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:07.951411+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:08.951603+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:09.951791+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:10.951966+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:11.952133+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:12.952548+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:13.952707+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:14.952869+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 98.430641174s of 98.900810242s, submitted: 90
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:15.953086+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:16.953296+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf9d000/0x0/0x4ffc00000, data 0x19c8008/0x1b51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 284 ms_handle_reset con 0x55a1286a5000 session 0x55a129ce9860
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:17.953513+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:18.953700+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2995067 data_alloc: 218103808 data_used: 729088
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f44800
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:19.953886+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:20.954056+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:21.954251+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 285 ms_handle_reset con 0x55a129f44800 session 0x55a129eb3860
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:22.954424+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 285 heartbeat osd_stat(store_statfs(0x4ee387000/0x0/0x4ffc00000, data 0x1cb787/0x356000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:23.954610+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2889761 data_alloc: 218103808 data_used: 737280
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:24.954784+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 56598528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:25.954946+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12769b400
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.809815407s of 11.059050560s, submitted: 61
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:26.955133+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:27.955426+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 286 heartbeat osd_stat(store_statfs(0x4ee383000/0x0/0x4ffc00000, data 0x1cd206/0x359000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:28.955621+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2894255 data_alloc: 218103808 data_used: 753664
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 287 heartbeat osd_stat(store_statfs(0x4ee383000/0x0/0x4ffc00000, data 0x1cd206/0x359000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 287 ms_handle_reset con 0x55a12769b400 session 0x55a127620780
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:29.955852+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 56557568 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:30.956056+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 56557568 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:31.956379+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:32.956564+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:33.957022+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:34.957269+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:35.957574+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:36.957839+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:37.957997+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:38.958162+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:39.958370+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:40.958573+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:41.958739+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:42.958890+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:43.959041+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:44.959193+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:45.959351+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 56524800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:46.959578+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 56524800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:47.959835+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 56524800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:48.960013+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:49.960194+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:50.960417+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:51.960606+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:52.960870+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:53.961035+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:54.961217+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:55.961441+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:56.961632+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:57.961779+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:58.961935+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:59.962109+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:00.962389+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:01.962542+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:02.962762+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:03.962891+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:04.963030+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:05.963204+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:06.963365+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:07.963521+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:08.963702+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:09.963881+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:10.964245+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:11.964401+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:12.964605+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 56467456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:13.964800+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:14.964947+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:15.965085+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:16.965256+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 56451072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:17.965395+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:18.965554+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:19.965752+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:20.965951+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:21.966107+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:22.966270+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:23.966378+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:24.966512+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:25.966669+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:26.966899+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:27.967064+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:28.967195+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:29.967393+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 56426496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 63.735168457s of 63.853855133s, submitted: 16
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:30.967514+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 289 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297484288 unmapped: 55361536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:31.967656+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297484288 unmapped: 55361536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:32.967816+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297492480 unmapped: 55353344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:33.967991+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902153 data_alloc: 218103808 data_used: 753664
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 289 ms_handle_reset con 0x55a127eb5400 session 0x55a128153860
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297508864 unmapped: 55336960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:34.968166+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297508864 unmapped: 55336960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:35.968481+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297508864 unmapped: 55336960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:36.968658+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 289 heartbeat osd_stat(store_statfs(0x4ee37c000/0x0/0x4ffc00000, data 0x1d23d3/0x362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297508864 unmapped: 55336960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:37.968861+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297517056 unmapped: 55328768 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:38.969030+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902153 data_alloc: 218103808 data_used: 753664
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297517056 unmapped: 55328768 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:39.969233+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 55312384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:40.969403+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 55312384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:41.969587+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 55312384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:42.969769+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 55312384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:43.969905+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 55304192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:44.970049+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 55304192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:45.970206+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 55304192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:46.970383+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 55304192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:47.970624+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 55296000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:48.970807+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 55296000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:49.970955+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:50.971088+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:51.971202+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:52.971405+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:53.971557+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:54.971679+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:55.971825+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:56.971969+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:57.972100+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 55271424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:58.972262+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 55271424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:59.972515+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 55271424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:00.972670+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 55271424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:01.972845+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:02.973023+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:03.973202+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:04.973385+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:05.973584+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:06.973736+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:07.973937+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:08.974118+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:09.974293+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:10.974456+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:11.974673+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:12.974878+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 55246848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:13.975042+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 55246848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:14.975220+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 55246848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:15.975405+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 55246848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:16.975585+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 55238656 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:17.975874+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 55230464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:18.976119+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 55230464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:19.976368+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:20.976530+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 55230464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:21.976720+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:22.976873+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:23.977214+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:24.977409+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:25.977619+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:26.977797+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:27.977999+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:28.978144+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:29.978378+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 55214080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:30.978555+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297639936 unmapped: 55205888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:31.978703+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297639936 unmapped: 55205888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:32.978886+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297639936 unmapped: 55205888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:33.979054+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297639936 unmapped: 55205888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:34.979239+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 55189504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:35.979367+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 55189504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:36.979532+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 55189504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:37.979658+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 55189504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:38.979775+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:39.979908+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:40.980060+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:41.980249+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:42.980404+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:43.980533+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:44.980681+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:45.980816+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:46.980961+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:47.981132+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:48.981415+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:49.981595+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:50.981723+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:51.981883+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:52.982046+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:53.982217+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 55156736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:54.982405+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 55156736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:55.982570+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 55156736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:56.982689+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297697280 unmapped: 55148544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:57.982825+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297697280 unmapped: 55148544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:58.983287+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297705472 unmapped: 55140352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:59.983467+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297705472 unmapped: 55140352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:00.983628+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297705472 unmapped: 55140352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:01.983751+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297713664 unmapped: 55132160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:02.983870+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297787392 unmapped: 55058432 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: do_command 'config diff' '{prefix=config diff}'
Nov 25 09:36:36 compute-0 ceph-osd[90711]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 09:36:36 compute-0 ceph-osd[90711]: do_command 'config show' '{prefix=config show}'
Nov 25 09:36:36 compute-0 ceph-osd[90711]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 09:36:36 compute-0 ceph-osd[90711]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 09:36:36 compute-0 ceph-osd[90711]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 09:36:36 compute-0 ceph-osd[90711]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 09:36:36 compute-0 ceph-osd[90711]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:03.983979+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 55533568 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:36 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:36 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:36:36 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:04.984088+0000)
Nov 25 09:36:36 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 55853056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:36 compute-0 ceph-osd[90711]: do_command 'log dump' '{prefix=log dump}'
Nov 25 09:36:36 compute-0 nova_compute[253538]: 2025-11-25 09:36:36.150 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:36 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23181 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 09:36:36 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/402239297' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 09:36:36 compute-0 ceph-mon[75015]: from='client.23173 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3645074910' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 09:36:36 compute-0 ceph-mon[75015]: from='client.23177 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1279833615' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 09:36:36 compute-0 ceph-mon[75015]: from='client.23181 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:36 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/402239297' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 09:36:36 compute-0 nova_compute[253538]: 2025-11-25 09:36:36.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:36:36 compute-0 nova_compute[253538]: 2025-11-25 09:36:36.573 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:36:36 compute-0 nova_compute[253538]: 2025-11-25 09:36:36.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:36:36 compute-0 nova_compute[253538]: 2025-11-25 09:36:36.574 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:36:36 compute-0 nova_compute[253538]: 2025-11-25 09:36:36.574 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:36:36 compute-0 nova_compute[253538]: 2025-11-25 09:36:36.574 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:36:36 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23185 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:36 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 09:36:36 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1510233932' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:36:37 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23191 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:36:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1371963626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:36:37 compute-0 nova_compute[253538]: 2025-11-25 09:36:37.055 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:36:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3511: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 09:36:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4284209462' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 09:36:37 compute-0 nova_compute[253538]: 2025-11-25 09:36:37.232 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:36:37 compute-0 nova_compute[253538]: 2025-11-25 09:36:37.233 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3415MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:36:37 compute-0 nova_compute[253538]: 2025-11-25 09:36:37.234 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:36:37 compute-0 nova_compute[253538]: 2025-11-25 09:36:37.234 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:36:37 compute-0 nova_compute[253538]: 2025-11-25 09:36:37.364 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:36:37 compute-0 nova_compute[253538]: 2025-11-25 09:36:37.364 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:36:37 compute-0 nova_compute[253538]: 2025-11-25 09:36:37.387 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:36:37 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23195 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 25 09:36:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769775139' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 09:36:37 compute-0 ceph-mon[75015]: from='client.23185 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:37 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1510233932' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:36:37 compute-0 ceph-mon[75015]: from='client.23191 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:37 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1371963626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:36:37 compute-0 ceph-mon[75015]: pgmap v3511: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:37 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4284209462' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 09:36:37 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23201 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:37 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:36:37 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4183006800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:36:37 compute-0 nova_compute[253538]: 2025-11-25 09:36:37.917 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:36:37 compute-0 nova_compute[253538]: 2025-11-25 09:36:37.923 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:36:37 compute-0 nova_compute[253538]: 2025-11-25 09:36:37.951 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:36:37 compute-0 nova_compute[253538]: 2025-11-25 09:36:37.953 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:36:37 compute-0 nova_compute[253538]: 2025-11-25 09:36:37.953 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:36:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 25 09:36:38 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2770772115' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 09:36:38 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23209 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:38 compute-0 ceph-mgr[75313]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 09:36:38 compute-0 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T09:36:38.615+0000 7f6347321640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 09:36:38 compute-0 ceph-mon[75015]: from='client.23195 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:38 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/769775139' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 09:36:38 compute-0 ceph-mon[75015]: from='client.23201 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:38 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4183006800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:36:38 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2770772115' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 09:36:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 25 09:36:38 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1266162372' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 09:36:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 25 09:36:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2910845931' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 09:36:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3512: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 25 09:36:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1305211320' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 09:36:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 25 09:36:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1378948537' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 09:36:39 compute-0 crontab[444441]: (root) LIST (root)
Nov 25 09:36:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:36:39 compute-0 nova_compute[253538]: 2025-11-25 09:36:39.681 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 25 09:36:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487684928' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 09:36:39 compute-0 ceph-mon[75015]: from='client.23209 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:39 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1266162372' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 09:36:39 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2910845931' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 09:36:39 compute-0 ceph-mon[75015]: pgmap v3512: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:39 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1305211320' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 09:36:39 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1378948537' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 09:36:39 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1487684928' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.836271) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063399836301, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1563, "num_deletes": 252, "total_data_size": 2388398, "memory_usage": 2424080, "flush_reason": "Manual Compaction"}
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063399857445, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 2353693, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71261, "largest_seqno": 72823, "table_properties": {"data_size": 2346353, "index_size": 4282, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15962, "raw_average_key_size": 20, "raw_value_size": 2331335, "raw_average_value_size": 2985, "num_data_blocks": 191, "num_entries": 781, "num_filter_entries": 781, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063245, "oldest_key_time": 1764063245, "file_creation_time": 1764063399, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 21899 microseconds, and 6994 cpu microseconds.
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.858161) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 2353693 bytes OK
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.858293) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.865827) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.865896) EVENT_LOG_v1 {"time_micros": 1764063399865880, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.865934) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2381453, prev total WAL file size 2381453, number of live WAL files 2.
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.867943) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(2298KB)], [170(10MB)]
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063399868009, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 13063802, "oldest_snapshot_seqno": -1}
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 8998 keys, 11310088 bytes, temperature: kUnknown
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063399939951, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 11310088, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11252309, "index_size": 34178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22533, "raw_key_size": 236863, "raw_average_key_size": 26, "raw_value_size": 11093930, "raw_average_value_size": 1232, "num_data_blocks": 1323, "num_entries": 8998, "num_filter_entries": 8998, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764063399, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.940198) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 11310088 bytes
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.941597) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.4 rd, 157.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 10.2 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(10.4) write-amplify(4.8) OK, records in: 9518, records dropped: 520 output_compression: NoCompression
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.941612) EVENT_LOG_v1 {"time_micros": 1764063399941605, "job": 106, "event": "compaction_finished", "compaction_time_micros": 72000, "compaction_time_cpu_micros": 24505, "output_level": 6, "num_output_files": 1, "total_output_size": 11310088, "num_input_records": 9518, "num_output_records": 8998, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063399942011, "job": 106, "event": "table_file_deletion", "file_number": 172}
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063399943661, "job": 106, "event": "table_file_deletion", "file_number": 170}
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.867864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.943703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.943707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.943708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.943710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:36:39 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:36:39.943711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:36:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 25 09:36:39 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3777993575' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 09:36:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 25 09:36:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4266855097' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 09:36:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 25 09:36:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/612359251' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 09:36:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 25 09:36:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/705432200' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ec2d7000/0x0/0x4ffc00000, data 0x1e938c8/0x1ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 45899776 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:42.577602+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 45899776 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:43.577794+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 45899776 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:44.577945+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 45899776 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:45.578118+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3215035 data_alloc: 218103808 data_used: 14893056
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319324160 unmapped: 45891584 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:46.578299+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319324160 unmapped: 45891584 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ec2d7000/0x0/0x4ffc00000, data 0x1e938c8/0x1ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:47.578493+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319324160 unmapped: 45891584 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:48.578665+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319324160 unmapped: 45891584 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:49.578788+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319324160 unmapped: 45891584 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:50.578910+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3215035 data_alloc: 218103808 data_used: 14893056
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319324160 unmapped: 45891584 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:51.579084+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ec2d7000/0x0/0x4ffc00000, data 0x1e938c8/0x1ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319324160 unmapped: 45891584 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:52.579247+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319324160 unmapped: 45891584 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:53.579379+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319332352 unmapped: 45883392 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:54.579575+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ec2d7000/0x0/0x4ffc00000, data 0x1e938c8/0x1ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e3b8800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e3b8800 session 0x561d8c74b860
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e3b8800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e3b8800 session 0x561d8d118f00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d10c000 session 0x561d8e3ed860
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319340544 unmapped: 45875200 heap: 365215744 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d118f00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 45.639049530s of 45.752262115s, submitted: 23
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:55.579713+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e74c800 session 0x561d8c62a780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f032800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8f032800 session 0x561d8f049c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f032800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8f032800 session 0x561d8d2370e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d10c000 session 0x561d8d25bc20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d14ec00 session 0x561d8c8134a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3288911 data_alloc: 218103808 data_used: 14893056
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319512576 unmapped: 54099968 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:56.579875+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4eb9bf000/0x0/0x4ffc00000, data 0x27aa8d8/0x290f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319512576 unmapped: 54099968 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:57.580036+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319512576 unmapped: 54099968 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4eb9bf000/0x0/0x4ffc00000, data 0x27aa8d8/0x290f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:58.580277+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e3b8800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319512576 unmapped: 54099968 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:59.580415+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e3b8800 session 0x561d8f38eb40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e74c800 session 0x561d8e4321e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e74c800 session 0x561d8cff1860
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d10c000 session 0x561d8ed734a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e3edc20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4eb1a2000/0x0/0x4ffc00000, data 0x2fc78d8/0x312c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319971328 unmapped: 53641216 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:00.580635+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3352946 data_alloc: 218103808 data_used: 14893056
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319971328 unmapped: 53641216 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:01.580808+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319971328 unmapped: 53641216 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:02.580934+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e3b8800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e3b8800 session 0x561d8c74ab40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f032800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8f032800 session 0x561d8e50f2c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 53633024 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f032800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8f032800 session 0x561d8e50e3c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:03.581077+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e24fa40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 53633024 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e3b8800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:04.581192+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e3b8800 session 0x561d8c813a40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4eb1a1000/0x0/0x4ffc00000, data 0x2fc78fb/0x312d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e41e000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 320184320 unmapped: 53428224 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:05.581354+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361680 data_alloc: 218103808 data_used: 14905344
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 320184320 unmapped: 53428224 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:06.581465+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326451200 unmapped: 47161344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:07.581586+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326451200 unmapped: 47161344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:08.581740+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326451200 unmapped: 47161344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:09.581874+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4eb176000/0x0/0x4ffc00000, data 0x2ff190b/0x3158000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326451200 unmapped: 47161344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:10.581956+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d93337000 session 0x561d8c74a5a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74f800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3479600 data_alloc: 234881024 data_used: 31514624
Nov 25 09:36:40 compute-0 ceph-osd[89702]: mgrc ms_handle_reset ms_handle_reset con 0x561d8f035c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3119838916
Nov 25 09:36:40 compute-0 ceph-osd[89702]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3119838916,v1:192.168.122.100:6801/3119838916]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: get_auth_request con 0x561d8c6aa800 auth_method 0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: mgrc handle_mgr_configure stats_period=5
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326475776 unmapped: 47136768 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:11.582088+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8f031c00 session 0x561d8e24f2c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f032c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8ceb1000 session 0x561d8d117680
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f039c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326475776 unmapped: 47136768 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:12.582207+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326475776 unmapped: 47136768 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:13.582326+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326475776 unmapped: 47136768 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:14.582462+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326475776 unmapped: 47136768 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:15.582556+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3479600 data_alloc: 234881024 data_used: 31514624
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4eb176000/0x0/0x4ffc00000, data 0x2ff190b/0x3158000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326475776 unmapped: 47136768 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:16.582690+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.041730881s of 21.370254517s, submitted: 46
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ead6d000/0x0/0x4ffc00000, data 0x33fa90b/0x3561000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [0,0,0,5,1])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332963840 unmapped: 40648704 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:17.582794+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 333488128 unmapped: 40124416 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:18.582916+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334880768 unmapped: 38731776 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:19.583045+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334880768 unmapped: 38731776 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:20.583250+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9b20000/0x0/0x4ffc00000, data 0x464690b/0x47ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3670790 data_alloc: 234881024 data_used: 32821248
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334880768 unmapped: 38731776 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:21.583384+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334880768 unmapped: 38731776 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:22.583501+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334880768 unmapped: 38731776 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:23.583649+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335028224 unmapped: 38584320 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:24.583780+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335028224 unmapped: 38584320 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:25.583903+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9af9000/0x0/0x4ffc00000, data 0x466e90b/0x47d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3666786 data_alloc: 234881024 data_used: 32821248
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9af9000/0x0/0x4ffc00000, data 0x466e90b/0x47d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335028224 unmapped: 38584320 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:26.584024+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335028224 unmapped: 38584320 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:27.584195+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9af9000/0x0/0x4ffc00000, data 0x466e90b/0x47d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335028224 unmapped: 38584320 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:28.584356+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335028224 unmapped: 38584320 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:29.584495+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.411730766s of 12.939365387s, submitted: 176
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e74c800 session 0x561d8cff1c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e41e000 session 0x561d8c881860
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9af9000/0x0/0x4ffc00000, data 0x466e90b/0x47d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d14ec00 session 0x561d8eb53c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 41230336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:30.584638+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438469 data_alloc: 218103808 data_used: 23339008
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 41230336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e3b8800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:31.584766+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e3b8800 session 0x561d8e249e00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e74c800 session 0x561d8d119e00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:32.584905+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ea9b2000/0x0/0x4ffc00000, data 0x37b595d/0x391c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:33.585068+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:34.585443+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:35.585671+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505912 data_alloc: 218103808 data_used: 23339008
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:36.585847+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:37.586037+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:38.586234+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ea9b2000/0x0/0x4ffc00000, data 0x37b595d/0x391c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:39.586472+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:40.586840+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f032800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505912 data_alloc: 218103808 data_used: 23339008
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:41.587087+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ea9b2000/0x0/0x4ffc00000, data 0x37b595d/0x391c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:42.587417+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ea9b2000/0x0/0x4ffc00000, data 0x37b595d/0x391c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 41041920 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:43.587652+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:44.587903+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:45.588104+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3547352 data_alloc: 234881024 data_used: 28848128
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ea9b2000/0x0/0x4ffc00000, data 0x37b595d/0x391c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:46.588367+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ea9b2000/0x0/0x4ffc00000, data 0x37b595d/0x391c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:47.588589+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ea9b2000/0x0/0x4ffc00000, data 0x37b595d/0x391c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:48.588848+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.149969101s of 19.457172394s, submitted: 94
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:49.589057+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:50.589277+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3547528 data_alloc: 234881024 data_used: 28848128
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4ea9b2000/0x0/0x4ffc00000, data 0x37b595d/0x391c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:51.589461+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:52.589581+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:53.589674+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:54.589883+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332595200 unmapped: 41017344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e907f000/0x0/0x4ffc00000, data 0x3f4895d/0x40af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:55.590028+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335577088 unmapped: 38035456 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3625298 data_alloc: 234881024 data_used: 30154752
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:56.590164+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335593472 unmapped: 38019072 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:57.590366+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335593472 unmapped: 38019072 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e8fec000/0x0/0x4ffc00000, data 0x3fda95d/0x4141000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:58.590535+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335593472 unmapped: 38019072 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:59.590722+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335593472 unmapped: 38019072 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:00.590893+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335593472 unmapped: 38019072 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3625458 data_alloc: 234881024 data_used: 30158848
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.860197067s of 12.206836700s, submitted: 105
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:01.591039+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335454208 unmapped: 38158336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:02.591212+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335454208 unmapped: 38158336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e8fce000/0x0/0x4ffc00000, data 0x3ff995d/0x4160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:03.591400+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335454208 unmapped: 38158336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:04.591590+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335454208 unmapped: 38158336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e8fce000/0x0/0x4ffc00000, data 0x3ff995d/0x4160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:05.591752+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335454208 unmapped: 38158336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3617634 data_alloc: 234881024 data_used: 30179328
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:06.592401+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335462400 unmapped: 38150144 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:07.592767+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 335462400 unmapped: 38150144 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8f032800 session 0x561d8d236d20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d96070c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:08.592917+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330547200 unmapped: 43065344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d96070c00 session 0x561d8e248f00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9f10000/0x0/0x4ffc00000, data 0x30b78fb/0x321d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:09.593038+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330547200 unmapped: 43065344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9f10000/0x0/0x4ffc00000, data 0x30b78fb/0x321d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:10.593432+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330547200 unmapped: 43065344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450306 data_alloc: 218103808 data_used: 23072768
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:11.593688+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330547200 unmapped: 43065344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:12.593882+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330547200 unmapped: 43065344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:13.594076+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330547200 unmapped: 43065344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d10c000 session 0x561d8e90c780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d96070c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.583516121s of 12.746244431s, submitted: 38
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:14.594201+0000)
Nov 25 09:36:40 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d96070c00 session 0x561d8d2a7e00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 324550656 unmapped: 49061888 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:15.594487+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 324550656 unmapped: 49061888 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3254418 data_alloc: 218103808 data_used: 14893056
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4eb136000/0x0/0x4ffc00000, data 0x1e938c8/0x1ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:16.594707+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 324550656 unmapped: 49061888 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:17.594885+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 324550656 unmapped: 49061888 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e705860
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e3b8800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e3b8800 session 0x561d8e248b40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e74c800 session 0x561d8e24f680
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e74c800 session 0x561d8d2a7e00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:18.595093+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326139904 unmapped: 47472640 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d10c000 session 0x561d8e248f00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d14ec00 session 0x561d8cff1c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:19.595273+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327196672 unmapped: 46415872 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:20.595472+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327196672 unmapped: 46415872 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3280725 data_alloc: 218103808 data_used: 14893056
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:21.595688+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327196672 unmapped: 46415872 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9d65000/0x0/0x4ffc00000, data 0x20c492a/0x2229000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:22.595819+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327196672 unmapped: 46415872 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e3b8800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8e3b8800 session 0x561d8c849860
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d96070c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d96070c00 session 0x561d8e24f2c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:23.595989+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8c459800 session 0x561d8e513e00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d96070c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327196672 unmapped: 46415872 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9d65000/0x0/0x4ffc00000, data 0x20c492a/0x2229000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8c459800 session 0x561d8ed72f00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.653755188s of 10.120682716s, submitted: 65
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 ms_handle_reset con 0x561d8d10c000 session 0x561d8c813a40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:24.596131+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327344128 unmapped: 46268416 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e3b8800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:25.596283+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327278592 unmapped: 46333952 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291143 data_alloc: 218103808 data_used: 15781888
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:26.596755+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327286784 unmapped: 46325760 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9d40000/0x0/0x4ffc00000, data 0x20e893a/0x224e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:27.596952+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327286784 unmapped: 46325760 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9d40000/0x0/0x4ffc00000, data 0x20e893a/0x224e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9d40000/0x0/0x4ffc00000, data 0x20e893a/0x224e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:28.597111+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327286784 unmapped: 46325760 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9d40000/0x0/0x4ffc00000, data 0x20e893a/0x224e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:29.597549+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327286784 unmapped: 46325760 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:30.597727+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 46317568 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291143 data_alloc: 218103808 data_used: 15781888
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:31.597903+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 46317568 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:32.598092+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 46317568 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:33.598371+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9d40000/0x0/0x4ffc00000, data 0x20e893a/0x224e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 46317568 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:34.598559+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 46317568 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:35.598757+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 46317568 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.936169624s of 11.946027756s, submitted: 2
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3307099 data_alloc: 218103808 data_used: 15839232
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:36.598893+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 328499200 unmapped: 45113344 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:37.599043+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 325386240 unmapped: 48226304 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9835000/0x0/0x4ffc00000, data 0x25f393a/0x2759000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:38.599211+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 46702592 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:39.599485+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 46702592 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:40.599619+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9787000/0x0/0x4ffc00000, data 0x26a093a/0x2806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 46702592 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354701 data_alloc: 218103808 data_used: 16392192
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:41.599755+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 46702592 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:42.599902+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 46702592 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:43.600097+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 46702592 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9764000/0x0/0x4ffc00000, data 0x26c493a/0x282a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:44.600252+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326279168 unmapped: 47333376 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9764000/0x0/0x4ffc00000, data 0x26c493a/0x282a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:45.600410+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326279168 unmapped: 47333376 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3346753 data_alloc: 218103808 data_used: 16396288
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:46.600583+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326279168 unmapped: 47333376 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.099457741s of 10.687935829s, submitted: 103
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:47.600767+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 47374336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:48.600975+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e9752000/0x0/0x4ffc00000, data 0x26d693a/0x283c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 47374336 heap: 373612544 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 254 ms_handle_reset con 0x561d8e74c800 session 0x561d8e8c4d20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f032800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:49.601131+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 38273024 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 254 ms_handle_reset con 0x561d8f032800 session 0x561d8d2a6780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e52f000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 254 heartbeat osd_stat(store_statfs(0x4e9752000/0x0/0x4ffc00000, data 0x26d693a/0x283c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:50.601275+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 254 ms_handle_reset con 0x561d8e52f000 session 0x561d8d2361e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 51830784 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521865 data_alloc: 218103808 data_used: 23535616
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:51.601399+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 255 handle_osd_map epochs [255,256], i have 255, src has [1,256]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8c459800 session 0x561d8e24e5a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 51822592 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:52.601549+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8d10c000 session 0x561d8e704780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 51814400 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8e74c800 session 0x561d8c62bc20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f032800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8f032800 session 0x561d8f046d20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:53.601675+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 51814400 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:54.601831+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 51798016 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 256 heartbeat osd_stat(store_statfs(0x4e857c000/0x0/0x4ffc00000, data 0x38a5c83/0x3a10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:55.601995+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 51798016 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527727 data_alloc: 218103808 data_used: 23535616
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:56.602197+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 51798016 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 256 heartbeat osd_stat(store_statfs(0x4e857c000/0x0/0x4ffc00000, data 0x38a5c83/0x3a10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c585800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.871630669s of 10.184924126s, submitted: 37
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8c585800 session 0x561d8c8805a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8c459800 session 0x561d8cff14a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:57.602397+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8d10c000 session 0x561d8d1a41e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332210176 unmapped: 54231040 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8e74c800 session 0x561d8e513c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f032800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8f032800 session 0x561d8e1a14a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 256 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x3d1bcac/0x3e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c6d7400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8c6d7400 session 0x561d8f38f0e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 256 ms_handle_reset con 0x561d8c459800 session 0x561d8e5134a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:58.602770+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332218368 unmapped: 54222848 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:59.602940+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332218368 unmapped: 54222848 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:00.603122+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332218368 unmapped: 54222848 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565348 data_alloc: 218103808 data_used: 23547904
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:01.603371+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332218368 unmapped: 54222848 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:02.603511+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e8103000/0x0/0x4ffc00000, data 0x3d1d748/0x3e8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332226560 unmapped: 54214656 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:03.603633+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8d10c000 session 0x561d8e1a1680
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332226560 unmapped: 54214656 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e74c800 session 0x561d8ca96960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f032800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e8104000/0x0/0x4ffc00000, data 0x3d1d748/0x3e8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:04.603764+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332226560 unmapped: 54214656 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8d10d400 session 0x561d8e1a1a40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:05.603888+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e3dd000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e3dd000 session 0x561d8e39ed20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e3dd000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e3dd000 session 0x561d8cffe3c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3582741 data_alloc: 234881024 data_used: 25600000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:06.604020+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e80e0000/0x0/0x4ffc00000, data 0x3d41748/0x3eae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:07.604146+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:08.604297+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e80e0000/0x0/0x4ffc00000, data 0x3d41748/0x3eae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:09.604432+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:10.604562+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3614901 data_alloc: 234881024 data_used: 30093312
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:11.604676+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:12.604807+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:13.604934+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e80e0000/0x0/0x4ffc00000, data 0x3d41748/0x3eae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:14.605053+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:15.605171+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 54050816 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.748184204s of 18.947298050s, submitted: 100
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3648765 data_alloc: 234881024 data_used: 34914304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:16.605297+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 51978240 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:17.605447+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 51978240 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:18.605656+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 334471168 unmapped: 51970048 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e7c79000/0x0/0x4ffc00000, data 0x41a0748/0x430d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:19.605765+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338722816 unmapped: 47718400 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e7c79000/0x0/0x4ffc00000, data 0x41a0748/0x430d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:20.605889+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338722816 unmapped: 47718400 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3695857 data_alloc: 234881024 data_used: 36913152
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:21.606020+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e7c73000/0x0/0x4ffc00000, data 0x41ae748/0x431b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338051072 unmapped: 48390144 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:22.606166+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47259648 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:23.606337+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47259648 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:24.606494+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47259648 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:25.606590+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47259648 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.900465012s of 10.162605286s, submitted: 84
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3707409 data_alloc: 234881024 data_used: 37318656
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:26.606719+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339189760 unmapped: 47251456 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e7c05000/0x0/0x4ffc00000, data 0x421c748/0x4389000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:27.606878+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 47587328 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:28.607045+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 47587328 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:29.607183+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 47587328 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:30.607320+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 47587328 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8d10d400 session 0x561d8c644960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e74c800 session 0x561d8c389c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3747476 data_alloc: 234881024 data_used: 37322752
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c969400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8c969400 session 0x561d8f048b40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e41f800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:31.607435+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e41f800 session 0x561d8f38f2c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c969400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8c969400 session 0x561d8e4cda40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e7be7000/0x0/0x4ffc00000, data 0x423a748/0x43a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338878464 unmapped: 47562752 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:32.607594+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338878464 unmapped: 47562752 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:33.607778+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 47554560 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:34.607914+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 47554560 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e768e000/0x0/0x4ffc00000, data 0x4793748/0x4900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:35.608041+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 47554560 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3747576 data_alloc: 234881024 data_used: 37322752
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:36.608276+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 47554560 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:37.608443+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 47554560 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:38.608762+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 47554560 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:39.608901+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 47554560 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8d10d400 session 0x561d8c74bc20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e768e000/0x0/0x4ffc00000, data 0x4793748/0x4900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:40.609042+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e3dd000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e3dd000 session 0x561d8f047680
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 47554560 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3747576 data_alloc: 234881024 data_used: 37322752
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:41.609510+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e74c800 session 0x561d8f0481e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e52e000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.523627281s of 15.641870499s, submitted: 25
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e52e000 session 0x561d8e8c41e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 47357952 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:42.609692+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c969400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e3dd000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 ms_handle_reset con 0x561d8e3dd000 session 0x561d8d16e780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 47357952 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 heartbeat osd_stat(store_statfs(0x4e768e000/0x0/0x4ffc00000, data 0x4793748/0x4900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 257 handle_osd_map epochs [258,258], i have 258, src has [1,258]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 258 ms_handle_reset con 0x561d8e74c800 session 0x561d8e4cd4a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e41f400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:43.609799+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 258 ms_handle_reset con 0x561d8e41f400 session 0x561d8d236960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 258 ms_handle_reset con 0x561d8c586c00 session 0x561d8e512000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 35299328 heap: 386441216 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:44.609921+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 47702016 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 258 ms_handle_reset con 0x561d8f3b1400 session 0x561d8f38e780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:45.610037+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 258 handle_osd_map epochs [258,259], i have 258, src has [1,259]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 259 ms_handle_reset con 0x561d8c586c00 session 0x561d8e5125a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e3dd000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 45498368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4140652 data_alloc: 251658240 data_used: 54489088
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:46.610201+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 260 ms_handle_reset con 0x561d8e3dd000 session 0x561d8e3ecb40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 45383680 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e41f400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 260 ms_handle_reset con 0x561d8e41f400 session 0x561d8c7d25a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 260 ms_handle_reset con 0x561d8e74c800 session 0x561d8f049a40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 260 ms_handle_reset con 0x561d8c586000 session 0x561d8eb52b40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:47.610371+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350445568 unmapped: 48603136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:48.610529+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 260 handle_osd_map epochs [260,261], i have 260, src has [1,261]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 261 ms_handle_reset con 0x561d8c586000 session 0x561d8d117c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 48529408 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 261 heartbeat osd_stat(store_statfs(0x4e727b000/0x0/0x4ffc00000, data 0x479f61c/0x4911000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:49.610648+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 48529408 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:50.610785+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 261 ms_handle_reset con 0x561d8f032800 session 0x561d8e8c54a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 261 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8d237c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350535680 unmapped: 48513024 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:51.610926+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3837511 data_alloc: 251658240 data_used: 53280768
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.442913055s of 10.001754761s, submitted: 134
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 261 ms_handle_reset con 0x561d8c586c00 session 0x561d8e7023c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350560256 unmapped: 48488448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:52.611066+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350560256 unmapped: 48488448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 261 handle_osd_map epochs [261,262], i have 261, src has [1,262]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:53.611207+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350560256 unmapped: 48488448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e3dd000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:54.611366+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 262 heartbeat osd_stat(store_statfs(0x4e7398000/0x0/0x4ffc00000, data 0x467209b/0x47e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350617600 unmapped: 48431104 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 263 ms_handle_reset con 0x561d8e3dd000 session 0x561d8e18a3c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:55.611607+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 52772864 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:56.611767+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3631553 data_alloc: 234881024 data_used: 34648064
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349388800 unmapped: 49659904 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:57.611887+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 49889280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 263 heartbeat osd_stat(store_statfs(0x4e7fe4000/0x0/0x4ffc00000, data 0x3a1ec0a/0x3b92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 263 handle_osd_map epochs [263,264], i have 263, src has [1,264]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:58.612053+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 264 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e50e3c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 264 ms_handle_reset con 0x561d8e3b8800 session 0x561d8e50f2c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 49823744 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:59.612211+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 264 ms_handle_reset con 0x561d8c586000 session 0x561d8cff1a40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 53608448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:00.612388+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 53608448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:01.612544+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3575510 data_alloc: 234881024 data_used: 26189824
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 53608448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 264 heartbeat osd_stat(store_statfs(0x4e8776000/0x0/0x4ffc00000, data 0x3270617/0x33e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:02.612666+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 53608448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:03.612838+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 264 heartbeat osd_stat(store_statfs(0x4e8776000/0x0/0x4ffc00000, data 0x3270617/0x33e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.457151413s of 12.123614311s, submitted: 157
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345448448 unmapped: 53600256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:04.613002+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 52535296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:05.613139+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 52535296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:06.613279+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3581608 data_alloc: 234881024 data_used: 26202112
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 52535296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8773000/0x0/0x4ffc00000, data 0x329607a/0x340a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:07.613432+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 52535296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:08.613595+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 52535296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:09.613736+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 52527104 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c969400 session 0x561d8e513a40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e7c0d20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:10.613833+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8765000/0x0/0x4ffc00000, data 0x32a507a/0x3419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [0,0,1])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 61751296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8d25ba40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:11.614010+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3453322 data_alloc: 218103808 data_used: 20615168
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 61751296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:12.614152+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 61751296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:13.614284+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 61751296 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.911143303s of 10.312707901s, submitted: 67
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8c881a40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8d032960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9316000/0x0/0x4ffc00000, data 0x26f407a/0x2868000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:14.614367+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330719232 unmapped: 68329472 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8d0334a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:15.614492+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:16.614663+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:17.614949+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:18.615182+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:19.615359+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:20.615538+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:21.615673+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:22.615824+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:23.615966+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:24.616153+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:25.616332+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:26.616512+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:27.616689+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:28.616886+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:29.617052+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:30.617219+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:31.617399+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:32.617544+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:33.617714+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:34.617877+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:35.618033+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:36.618189+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:37.618426+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:38.618679+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:39.618866+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:40.619036+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:41.619223+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:42.619391+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:43.619598+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:44.619786+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:45.619949+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:46.620157+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:47.620374+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:48.620588+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:49.620774+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:50.620950+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:51.621480+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:52.621653+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:53.621829+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 68321280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:54.621978+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330735616 unmapped: 68313088 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:55.622117+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330735616 unmapped: 68313088 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:56.622380+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 330735616 unmapped: 68313088 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345696 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c969400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.686870575s of 42.879127502s, submitted: 27
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:57.622545+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 54329344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c969400 session 0x561d8e90c1e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c969400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c969400 session 0x561d8c644f00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8d116f00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8d1a52c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e3ecf00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8e18000/0x0/0x4ffc00000, data 0x2bf3018/0x2d66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:58.622732+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 67428352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:59.623163+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 67428352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:00.623357+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 67428352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8e18000/0x0/0x4ffc00000, data 0x2bf3018/0x2d66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:01.623546+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 67428352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450951 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:02.623715+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 331628544 unmapped: 67420160 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8ed73c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e3ed4a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:03.623968+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 331628544 unmapped: 67420160 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e18ad20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8c62a1e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:04.624112+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 67502080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c969400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:05.624248+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 67493888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8e16000/0x0/0x4ffc00000, data 0x2bf304b/0x2d68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:06.624379+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3552651 data_alloc: 234881024 data_used: 28540928
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:07.624497+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:08.624651+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:09.624850+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:10.625009+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:11.625144+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3552651 data_alloc: 234881024 data_used: 28540928
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8e16000/0x0/0x4ffc00000, data 0x2bf304b/0x2d68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:12.625352+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:13.642632+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:14.642756+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:15.643010+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:16.643238+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 66469888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.975399017s of 19.610374451s, submitted: 38
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3578299 data_alloc: 234881024 data_used: 28549120
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:17.643374+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337371136 unmapped: 61677568 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e857a000/0x0/0x4ffc00000, data 0x348704b/0x35fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:18.643724+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:19.644214+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:20.644402+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:21.644649+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3631847 data_alloc: 234881024 data_used: 30068736
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:22.644950+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84fa000/0x0/0x4ffc00000, data 0x350f04b/0x3684000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:23.645157+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:24.645576+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:25.645744+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:26.645978+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3629483 data_alloc: 234881024 data_used: 30068736
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:27.646153+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:28.646403+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84d9000/0x0/0x4ffc00000, data 0x353004b/0x36a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:29.646595+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:30.646762+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:31.647579+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3629483 data_alloc: 234881024 data_used: 30068736
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84d9000/0x0/0x4ffc00000, data 0x353004b/0x36a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:32.647735+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:33.647943+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.343145370s of 16.906240463s, submitted: 101
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:34.648457+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84d6000/0x0/0x4ffc00000, data 0x353304b/0x36a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:35.648605+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:36.648759+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 61890560 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3629527 data_alloc: 234881024 data_used: 30068736
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:37.649100+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 61882368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84d6000/0x0/0x4ffc00000, data 0x353304b/0x36a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:38.649410+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 61882368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:39.649596+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 61882368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:40.649787+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 61882368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.4 total, 600.0 interval
                                           Cumulative writes: 42K writes, 168K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 42K writes, 15K syncs, 2.81 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4521 writes, 19K keys, 4521 commit groups, 1.0 writes per commit group, ingest: 23.42 MB, 0.04 MB/s
                                           Interval WAL: 4521 writes, 1686 syncs, 2.68 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:41.649901+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 61882368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3629527 data_alloc: 234881024 data_used: 30068736
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:42.650038+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84d6000/0x0/0x4ffc00000, data 0x353304b/0x36a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 61882368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:43.650188+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84d6000/0x0/0x4ffc00000, data 0x353304b/0x36a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 61882368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:44.650405+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 61882368 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.767104149s of 11.775897026s, submitted: 2
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:45.650613+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 61865984 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e513a40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e3b8800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8e3b8800 session 0x561d8cff1a40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e50f2c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:46.650789+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8e50e3c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337403904 unmapped: 61644800 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e18a3c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3712601 data_alloc: 234881024 data_used: 30068736
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d117c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586c00 session 0x561d8f049a40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8c7d25a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8e3ecb40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:47.650955+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 61505536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:48.651123+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 61505536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:49.651287+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aaa000/0x0/0x4ffc00000, data 0x3f5e05b/0x40d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 61505536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:50.651505+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 61505536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:51.651699+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 61505536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aaa000/0x0/0x4ffc00000, data 0x3f5e05b/0x40d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3711369 data_alloc: 234881024 data_used: 30068736
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aaa000/0x0/0x4ffc00000, data 0x3f5e05b/0x40d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:52.651889+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 61497344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:53.652120+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 61497344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:54.652338+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 61497344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:55.652498+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 61497344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aaa000/0x0/0x4ffc00000, data 0x3f5e05b/0x40d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:56.652692+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 61497344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aaa000/0x0/0x4ffc00000, data 0x3f5e05b/0x40d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3711369 data_alloc: 234881024 data_used: 30068736
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:57.652833+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e512000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 61497344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d236960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:58.653073+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f032800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f032800 session 0x561d8e4cd4a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.303936958s of 13.445974350s, submitted: 21
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337559552 unmapped: 61489152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e8c41e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:59.653203+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337559552 unmapped: 61489152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aa9000/0x0/0x4ffc00000, data 0x3f5e06b/0x40d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:00.653375+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337559552 unmapped: 61489152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:01.653504+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aa9000/0x0/0x4ffc00000, data 0x3f5e06b/0x40d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337559552 unmapped: 61489152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3713367 data_alloc: 234881024 data_used: 30072832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:02.653641+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337559552 unmapped: 61489152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:03.653769+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337969152 unmapped: 61079552 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:04.653906+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:05.655324+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:06.655449+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3786743 data_alloc: 234881024 data_used: 40497152
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aa9000/0x0/0x4ffc00000, data 0x3f5e06b/0x40d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:07.655579+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:08.655812+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:09.655957+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:10.656110+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aa9000/0x0/0x4ffc00000, data 0x3f5e06b/0x40d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:11.656294+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3786743 data_alloc: 234881024 data_used: 40497152
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:12.656527+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:13.656661+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343932928 unmapped: 55115776 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7aa9000/0x0/0x4ffc00000, data 0x3f5e06b/0x40d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:14.656815+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.973099709s of 15.986382484s, submitted: 2
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343719936 unmapped: 55328768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:15.656992+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347422720 unmapped: 51625984 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:16.657124+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348323840 unmapped: 50724864 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3888193 data_alloc: 234881024 data_used: 41283584
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:17.657294+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 50569216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:18.657505+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 50569216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:19.657677+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7020000/0x0/0x4ffc00000, data 0x49d906b/0x4b50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 50569216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:20.657817+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 50569216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7020000/0x0/0x4ffc00000, data 0x49d906b/0x4b50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:21.657969+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 50569216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3888673 data_alloc: 234881024 data_used: 41295872
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:22.658113+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348610560 unmapped: 50438144 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:23.658249+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348610560 unmapped: 50438144 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:24.658470+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 50421760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:25.658628+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 50421760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:26.658805+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 50421760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3882581 data_alloc: 234881024 data_used: 41402368
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e700d000/0x0/0x4ffc00000, data 0x49fa06b/0x4b71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:27.658943+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.950686455s of 12.774372101s, submitted: 94
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 50421760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:28.659113+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 50421760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8f0481e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:29.659234+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e18b2c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 50421760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:30.659364+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6fff000/0x0/0x4ffc00000, data 0x4a0806b/0x4b7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 50405376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:31.659511+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348651520 unmapped: 50397184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3643394 data_alloc: 234881024 data_used: 30081024
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e24f680
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:32.659658+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348651520 unmapped: 50397184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:33.659801+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348651520 unmapped: 50397184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:34.660001+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348659712 unmapped: 50388992 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:35.660135+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348659712 unmapped: 50388992 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84c4000/0x0/0x4ffc00000, data 0x354504b/0x36ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:36.660296+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84c4000/0x0/0x4ffc00000, data 0x354504b/0x36ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348659712 unmapped: 50388992 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3643394 data_alloc: 234881024 data_used: 30081024
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:37.660492+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e84c4000/0x0/0x4ffc00000, data 0x354504b/0x36ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348659712 unmapped: 50388992 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:38.660733+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 50380800 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.654464722s of 11.467668533s, submitted: 27
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c969400 session 0x561d8f049e00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e50e000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:39.660879+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 50380800 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:40.661027+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e4325a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:41.661221+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:42.661350+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:43.661483+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:44.661621+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:45.661759+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:46.661885+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:47.662031+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:48.662242+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:49.662446+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:50.662644+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:51.662812+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:52.662942+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:53.663065+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:54.663189+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:55.663573+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:56.663695+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:57.663833+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:58.664023+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:59.664150+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:00.664278+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:01.664428+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:02.664569+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:03.664732+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:04.664902+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:05.665035+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:06.665219+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:07.665358+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:08.665560+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:09.665700+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:10.665834+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:11.666041+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:12.666196+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:13.666365+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:14.666503+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:15.666648+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:16.666818+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:17.666995+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:18.667190+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:19.667393+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b62000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:20.668453+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:21.668606+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374163 data_alloc: 218103808 data_used: 14979072
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:22.668749+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337821696 unmapped: 61227008 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:23.668941+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8e1a1680
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e8c5e00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d116b40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d25a960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.586261749s of 44.781974792s, submitted: 62
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e7c1c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8d25a960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e1a1680
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f049e00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e4cd4a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:24.669129+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9665000/0x0/0x4ffc00000, data 0x23a408a/0x2519000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:25.669391+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:26.669568+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3418577 data_alloc: 218103808 data_used: 14983168
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:27.669760+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e3ecb40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:28.669951+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8c7d25a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:29.670093+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8f049a40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d117c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9664000/0x0/0x4ffc00000, data 0x23a40ad/0x251a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:30.670236+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:31.670360+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449160 data_alloc: 218103808 data_used: 18993152
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:32.670492+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9664000/0x0/0x4ffc00000, data 0x23a40ad/0x251a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:33.670632+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:34.670791+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61202432 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:35.670920+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.539092064s of 11.824838638s, submitted: 45
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 61169664 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:36.671051+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337895424 unmapped: 61153280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449112 data_alloc: 218103808 data_used: 18997248
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:37.671183+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337895424 unmapped: 61153280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:38.671373+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9664000/0x0/0x4ffc00000, data 0x23a40ad/0x251a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337895424 unmapped: 61153280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:39.671530+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337895424 unmapped: 61153280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:40.671664+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337895424 unmapped: 61153280 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:41.671802+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9664000/0x0/0x4ffc00000, data 0x23a40ad/0x251a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 337903616 unmapped: 61145088 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451036 data_alloc: 218103808 data_used: 19042304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:42.672264+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339697664 unmapped: 59351040 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:43.672444+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341319680 unmapped: 57729024 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:44.672572+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8e28000/0x0/0x4ffc00000, data 0x2bd80ad/0x2d4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341319680 unmapped: 57729024 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:45.672710+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.745832443s of 10.384993553s, submitted: 145
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:46.672874+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528146 data_alloc: 218103808 data_used: 19152896
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:47.673016+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8dfb000/0x0/0x4ffc00000, data 0x2c0c0ad/0x2d82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:48.673191+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:49.673399+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8dfb000/0x0/0x4ffc00000, data 0x2c0c0ad/0x2d82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:50.673557+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8dfb000/0x0/0x4ffc00000, data 0x2c0c0ad/0x2d82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:51.673732+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8dfb000/0x0/0x4ffc00000, data 0x2c0c0ad/0x2d82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525954 data_alloc: 218103808 data_used: 19156992
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:52.673909+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8df9000/0x0/0x4ffc00000, data 0x2c0f0ad/0x2d85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:53.674077+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:54.674243+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 58646528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:55.674434+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8df9000/0x0/0x4ffc00000, data 0x2c0f0ad/0x2d85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 58638336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:56.674592+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 58638336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525954 data_alloc: 218103808 data_used: 19156992
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8df9000/0x0/0x4ffc00000, data 0x2c0f0ad/0x2d85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:57.674731+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 58638336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.710719109s of 12.442818642s, submitted: 34
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:58.674891+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e7032c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8e7c0780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340451328 unmapped: 58597376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:59.675033+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340451328 unmapped: 58597376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:00.675189+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e811e000/0x0/0x4ffc00000, data 0x38e910f/0x3a60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340451328 unmapped: 58597376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:01.675355+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 58589184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619231 data_alloc: 218103808 data_used: 19156992
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:02.675507+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 58589184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d1172c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:03.675630+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e811e000/0x0/0x4ffc00000, data 0x38e910f/0x3a60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8ca963c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 58589184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:04.675788+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8ca97c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8ca97a40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340770816 unmapped: 58277888 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:05.675916+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340795392 unmapped: 58253312 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:06.676006+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340795392 unmapped: 58253312 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3670238 data_alloc: 234881024 data_used: 25145344
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:07.676117+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:08.676333+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e80f9000/0x0/0x4ffc00000, data 0x390d11f/0x3a85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:09.676516+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:10.676646+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e80f9000/0x0/0x4ffc00000, data 0x390d11f/0x3a85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:11.676808+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3720638 data_alloc: 234881024 data_used: 31768576
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:12.676953+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:13.677097+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e80f9000/0x0/0x4ffc00000, data 0x390d11f/0x3a85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:14.677241+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:15.677442+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:16.677566+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e80f9000/0x0/0x4ffc00000, data 0x390d11f/0x3a85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344727552 unmapped: 54321152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.677581787s of 18.996076584s, submitted: 36
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3721670 data_alloc: 234881024 data_used: 31793152
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:17.677687+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346578944 unmapped: 52469760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:18.677868+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c90000/0x0/0x4ffc00000, data 0x3d7611f/0x3eee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 52445184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:19.677993+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 52355072 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:20.678106+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347766784 unmapped: 51281920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:21.678231+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347766784 unmapped: 51281920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c67000/0x0/0x4ffc00000, data 0x3d9f11f/0x3f17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:22.678365+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3772452 data_alloc: 234881024 data_used: 32202752
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c67000/0x0/0x4ffc00000, data 0x3d9f11f/0x3f17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347766784 unmapped: 51281920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:23.678513+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347766784 unmapped: 51281920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:24.678672+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347766784 unmapped: 51281920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:25.678824+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c67000/0x0/0x4ffc00000, data 0x3d9f11f/0x3f17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347766784 unmapped: 51281920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:26.679001+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c67000/0x0/0x4ffc00000, data 0x3d9f11f/0x3f17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347766784 unmapped: 51281920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:27.679160+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3770996 data_alloc: 234881024 data_used: 32206848
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347766784 unmapped: 51281920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.320297241s of 10.981893539s, submitted: 59
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:28.679407+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347774976 unmapped: 51273728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:29.679589+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347783168 unmapped: 51265536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:30.680124+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8e24fa40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8ca972c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347783168 unmapped: 51265536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:31.680291+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e2494a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8971000/0x0/0x4ffc00000, data 0x2c330bd/0x2daa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 56467456 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:32.680533+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538985 data_alloc: 218103808 data_used: 18288640
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 56467456 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:33.680852+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 56467456 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:34.681234+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e50e3c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 56467456 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:35.681378+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8995000/0x0/0x4ffc00000, data 0x2c100ad/0x2d86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e90cd20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:36.681533+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:37.681680+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:38.681868+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:39.682051+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:40.682174+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:41.682533+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:42.682885+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:43.683139+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:44.683294+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:45.683428+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:46.683677+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:47.683835+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:48.684080+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:49.684270+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:50.684509+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:51.684682+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:52.684840+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:53.685028+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:54.685189+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:55.685355+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:56.685506+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:57.685694+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:58.685964+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:59.686140+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:00.686357+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:01.686540+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 60104704 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:02.686711+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 60096512 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:03.686866+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 60096512 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:04.687035+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 60088320 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:05.687229+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 60088320 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:06.687516+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 60088320 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:07.687688+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 60088320 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:08.687902+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 60088320 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:09.688182+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 60088320 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:10.688517+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:11.688696+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:12.688849+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:13.689028+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:14.689214+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:15.689366+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 46.827438354s of 47.724197388s, submitted: 107
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e24f860
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:16.689522+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 59662336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:17.689676+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 59662336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525594 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:18.689856+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 59654144 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:19.690036+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:20.690260+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:21.690393+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8d1163c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:22.690573+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525594 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d1a50e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e7034a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e7c0d20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:23.690706+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:24.690842+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 59629568 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:25.691083+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:26.691234+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:27.691366+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3632855 data_alloc: 234881024 data_used: 28499968
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:28.691508+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:29.691659+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:30.691777+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:31.691910+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:32.692345+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3632855 data_alloc: 234881024 data_used: 28499968
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:33.692482+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:34.692616+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.927917480s of 19.114524841s, submitted: 30
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:35.692753+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345874432 unmapped: 53174272 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:36.693532+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 53059584 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:37.693689+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3727151 data_alloc: 234881024 data_used: 29900800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:38.693882+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:39.694089+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e809b000/0x0/0x4ffc00000, data 0x3970018/0x3ae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:40.694256+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:41.694389+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e809b000/0x0/0x4ffc00000, data 0x3970018/0x3ae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:42.694540+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3725011 data_alloc: 234881024 data_used: 29900800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:43.694706+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:44.694879+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:45.695008+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:46.695153+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:47.695352+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.873039246s of 12.605758667s, submitted: 112
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 52609024 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3770760 data_alloc: 234881024 data_used: 29900800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e807a000/0x0/0x4ffc00000, data 0x3991018/0x3b04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [0,0,0,0,1,2])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e2483c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e90cb40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e90cf00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e41f400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8e41f400 session 0x561d8e8c5a40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8f048f00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:48.695512+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b1c000/0x0/0x4ffc00000, data 0x3eef018/0x4062000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:49.695653+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b16000/0x0/0x4ffc00000, data 0x3ef5018/0x4068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:50.695773+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b16000/0x0/0x4ffc00000, data 0x3ef5018/0x4068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:51.695930+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:52.696057+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3769046 data_alloc: 234881024 data_used: 29900800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:53.696197+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b16000/0x0/0x4ffc00000, data 0x3ef5018/0x4068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:54.696354+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b16000/0x0/0x4ffc00000, data 0x3ef5018/0x4068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:55.696485+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e39ed20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:56.711578+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d2365a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8c738780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8e74c800 session 0x561d8d116b40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:57.711751+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3771924 data_alloc: 234881024 data_used: 29900800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.365564346s of 10.544802666s, submitted: 22
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:58.711914+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:59.712075+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b15000/0x0/0x4ffc00000, data 0x3ef5028/0x4069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:00.712214+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:01.712364+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:02.712510+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3804024 data_alloc: 234881024 data_used: 34402304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b15000/0x0/0x4ffc00000, data 0x3ef5028/0x4069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:03.712638+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:04.712797+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:05.712933+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:06.713083+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b15000/0x0/0x4ffc00000, data 0x3ef5028/0x4069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:07.713244+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3804552 data_alloc: 234881024 data_used: 34402304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:08.713384+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:09.713524+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.267370224s of 11.282814980s, submitted: 5
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347512832 unmapped: 51535872 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:10.713659+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347529216 unmapped: 51519488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:11.713842+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e72fb000/0x0/0x4ffc00000, data 0x470e028/0x4882000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:12.714032+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874330 data_alloc: 234881024 data_used: 34574336
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:13.714224+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:14.714394+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:15.714566+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e72fb000/0x0/0x4ffc00000, data 0x470e028/0x4882000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:16.714689+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:17.714859+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3872410 data_alloc: 234881024 data_used: 34574336
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:18.715043+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:19.715208+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:20.715384+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:21.715519+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e72f9000/0x0/0x4ffc00000, data 0x4711028/0x4885000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 51478528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.121975899s of 12.436902046s, submitted: 54
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e90de00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e4321e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:22.715625+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d1a41e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 51445760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3736725 data_alloc: 234881024 data_used: 29904896
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:23.715787+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8074000/0x0/0x4ffc00000, data 0x3997018/0x3b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 51445760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:24.715964+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 51445760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:25.716141+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 51445760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e806d000/0x0/0x4ffc00000, data 0x399e018/0x3b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8d25bc20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8c62bc20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:26.716265+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e249860
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:27.716439+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:28.716606+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:29.716792+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:30.716979+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:31.717143+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:32.717302+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:33.717510+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:34.717734+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:35.717869+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:36.717988+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:37.718143+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:38.718299+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:39.718461+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:40.718593+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:41.718728+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:42.718872+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:43.719026+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:44.719149+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:45.719410+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:46.719603+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:47.719774+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:48.719950+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:49.720191+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:50.720374+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:51.720519+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:52.720766+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:53.720949+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:54.721115+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:55.721247+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:56.721420+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:57.721594+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:58.721767+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:59.721956+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:00.722143+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338649088 unmapped: 60399616 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:01.722354+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338649088 unmapped: 60399616 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:02.722521+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 60391424 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:03.722708+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 60391424 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:04.722855+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8c8130e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f38e960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e39fa40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e24ed20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 60391424 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.992057800s of 43.214187622s, submitted: 76
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [0,1,3,3,1])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8eb53c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:05.722996+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8d032f00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8c644960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f048780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e512780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:06.723152+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:07.723355+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517656 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:08.723541+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9042000/0x0/0x4ffc00000, data 0x29c708a/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e3edc20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:09.723686+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8d16f0e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e4325a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e5105a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:10.723821+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9042000/0x0/0x4ffc00000, data 0x29c708a/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:11.723993+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:12.726952+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3599896 data_alloc: 234881024 data_used: 26505216
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:13.727150+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:14.728603+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9042000/0x0/0x4ffc00000, data 0x29c708a/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:15.728781+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:16.728937+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:17.729073+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3599896 data_alloc: 234881024 data_used: 26505216
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:18.729241+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:19.729410+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:20.729568+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9042000/0x0/0x4ffc00000, data 0x29c708a/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:21.729743+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:22.729897+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.722257614s of 17.883968353s, submitted: 34
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 52879360 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3648258 data_alloc: 234881024 data_used: 26505216
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:23.730012+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 52592640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:24.730152+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:25.730368+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:26.730546+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:27.730724+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3682318 data_alloc: 234881024 data_used: 27729920
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:28.730960+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:29.731138+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:30.731351+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:31.731523+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:32.731685+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3682318 data_alloc: 234881024 data_used: 27729920
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:33.731946+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:34.732118+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 49422336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:35.732279+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 49422336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:36.732434+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 49422336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:37.732601+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d91e23800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8e2485a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d91e23800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8c74ba40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e4cd4a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e4cc780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.496106148s of 14.768519402s, submitted: 82
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f38ef00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e8c4d20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8ed734a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d2361e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e39f860
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3730459 data_alloc: 234881024 data_used: 27734016
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:38.732848+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:39.733011+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:40.733139+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:41.733356+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:42.733483+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3730459 data_alloc: 234881024 data_used: 27734016
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:43.733631+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:44.733784+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d91e23800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8e50eb40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 50536448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d90b43c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8d237860
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:45.733920+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d90b43c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8d1a50e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8c74b680
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 50536448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:46.734244+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 50536448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:47.734392+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3782911 data_alloc: 234881024 data_used: 35106816
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:48.734567+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:49.734903+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:50.735053+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:51.735679+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:52.735836+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3782911 data_alloc: 234881024 data_used: 35106816
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:53.735967+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:54.736115+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.003824234s of 17.156684875s, submitted: 17
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:55.736423+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:56.736688+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:57.736859+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 46997504 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3817961 data_alloc: 234881024 data_used: 35102720
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:58.737041+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 46325760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:59.737183+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 46309376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:00.737379+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 46309376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:01.737551+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 46309376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:02.737722+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6993000/0x0/0x4ffc00000, data 0x3ecd09a/0x4043000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 46309376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3828065 data_alloc: 234881024 data_used: 35172352
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:03.737849+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 46301184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:04.737987+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.787124634s of 10.008896828s, submitted: 44
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 46301184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:05.738102+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6993000/0x0/0x4ffc00000, data 0x3ecd09a/0x4043000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 46301184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:06.738373+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 46292992 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8f38f680
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:07.738491+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8ed73c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:08.738639+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3684633 data_alloc: 234881024 data_used: 27738112
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:09.738839+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:10.739044+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:11.739364+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:12.739631+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e3ec1e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e18a3c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:13.739823+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:14.740035+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:15.740230+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:16.740388+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:17.740528+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:18.740733+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:19.740902+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:20.741068+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:21.741223+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:22.741536+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:23.742014+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:24.742203+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:25.742431+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:26.742597+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:27.742760+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:28.742965+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:29.743122+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:30.743338+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:31.743497+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:32.743673+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:33.743817+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:34.743946+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:35.744117+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:36.744295+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:37.744478+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:38.744693+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:39.744904+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:40.745139+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:41.745272+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:42.745435+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:43.745522+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e39f860
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d2361e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8ed734a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d90b43c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8f38ef00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d90b43c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.750972748s of 38.958789825s, submitted: 67
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89bd000/0x0/0x4ffc00000, data 0x1ea8028/0x201c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:44.745646+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340697088 unmapped: 58351616 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8e4cc780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8f048780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8eb53c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f38e960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8d1a41e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:45.745807+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:46.745962+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:47.746084+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8d116b40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:48.746353+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492197 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8112000/0x0/0x4ffc00000, data 0x234708a/0x24bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8c738780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:49.746545+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e2485a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8d2a63c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:50.746965+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340926464 unmapped: 58122240 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d90b43c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d91e23800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:51.747134+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340926464 unmapped: 58122240 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:52.747326+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:53.747613+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529564 data_alloc: 218103808 data_used: 19730432
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8e4cd860
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8e2492c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:54.748242+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:55.748398+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:56.748641+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:57.748897+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:58.749064+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529432 data_alloc: 218103808 data_used: 19730432
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.582655907s of 14.895611763s, submitted: 46
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:59.749208+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:00.749560+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e7c1e00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d25a000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:01.749717+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:02.750140+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340942848 unmapped: 58105856 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:03.750377+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340942848 unmapped: 58105856 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529368 data_alloc: 218103808 data_used: 19738624
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:04.750654+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340942848 unmapped: 58105856 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:05.750793+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340951040 unmapped: 58097664 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:06.750928+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340951040 unmapped: 58097664 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:07.751249+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340951040 unmapped: 58097664 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e248780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8f0483c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:08.751419+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8f047c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:09.751586+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:10.751758+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:11.751909+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:12.752049+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:13.752205+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:14.752354+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:15.752531+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:16.752754+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:17.752933+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:18.753971+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:19.754109+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:20.754286+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:21.754422+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:22.754620+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 60489728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:23.754865+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 60489728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:24.755069+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 60489728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:25.755254+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 60489728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:26.755467+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:27.755637+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:28.755862+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:29.756011+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:30.756221+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:31.756428+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:32.756600+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:33.756777+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:34.756941+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:35.757098+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:36.757255+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:37.757441+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:38.758594+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:39.758756+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:40.758936+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:41.759106+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:42.759347+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 60465152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:43.759534+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 60465152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:44.759662+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 60465152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:45.759842+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 60465152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:46.759991+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338591744 unmapped: 60456960 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.281188965s of 48.498039246s, submitted: 50
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:47.760152+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8c880000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c5b000/0x0/0x4ffc00000, data 0x2800018/0x2973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:48.760372+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525743 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:49.760535+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c5b000/0x0/0x4ffc00000, data 0x2800018/0x2973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:50.760785+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:51.761004+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8f047680
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:52.761145+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e2490e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d91e23800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8f049e00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d91e23800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:53.761265+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c5b000/0x0/0x4ffc00000, data 0x2800018/0x2973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8e1a1a40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338698240 unmapped: 60350464 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529969 data_alloc: 218103808 data_used: 14966784
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:54.761408+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338706432 unmapped: 60342272 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:55.761757+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:56.762388+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:57.762805+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x2824028/0x2998000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:58.763082+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3598449 data_alloc: 218103808 data_used: 24567808
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:59.763273+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:00.763402+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:01.763554+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:02.763701+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:03.763856+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x2824028/0x2998000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3598449 data_alloc: 218103808 data_used: 24567808
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:04.764035+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:05.764248+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.442842484s of 18.518671036s, submitted: 11
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 59375616 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:06.764391+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 54894592 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:07.764567+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:08.765005+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e712d000/0x0/0x4ffc00000, data 0x332d028/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3694817 data_alloc: 218103808 data_used: 24907776
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:09.765137+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:10.765538+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:11.765762+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:12.765923+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:13.766244+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3691285 data_alloc: 218103808 data_used: 24907776
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7109000/0x0/0x4ffc00000, data 0x3351028/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:14.766512+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:15.766778+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.927303314s of 10.246125221s, submitted: 93
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:16.766994+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344424448 unmapped: 54624256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:17.767160+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344424448 unmapped: 54624256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e4cdc20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8cff1860
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:18.767368+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e70e8000/0x0/0x4ffc00000, data 0x3372028/0x34e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344424448 unmapped: 54624256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3693961 data_alloc: 218103808 data_used: 24915968
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:19.768597+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344424448 unmapped: 54624256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:20.769203+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8d10d400 session 0x561d8e50e1e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e41e400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8ca97c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8e41e400 session 0x561d8d116780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8c459800 session 0x561d8e24f0e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c6d7800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357629952 unmapped: 52314112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:21.769389+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8c6d7800 session 0x561d8f38eb40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d109000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 52297728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:22.769543+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 267 handle_osd_map epochs [267,268], i have 267, src has [1,268]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 268 handle_osd_map epochs [268,268], i have 268, src has [1,268]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 268 ms_handle_reset con 0x561d8d109000 session 0x561d8e3edc20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 52273152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:23.769669+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f02e000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 268 ms_handle_reset con 0x561d8f02e000 session 0x561d8e4cd4a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f02e000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 268 ms_handle_reset con 0x561d8f02e000 session 0x561d8f38fe00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 268 ms_handle_reset con 0x561d8c459800 session 0x561d8ed73e00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3970481 data_alloc: 234881024 data_used: 33873920
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 268 heartbeat osd_stat(store_statfs(0x4e6421000/0x0/0x4ffc00000, data 0x5072371/0x51eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:24.769848+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:25.769992+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:26.770146+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:27.770288+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c6d7800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.572269440s of 12.077063560s, submitted: 93
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:28.770481+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c6d7800 session 0x561d8ed72000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3709496 data_alloc: 218103808 data_used: 14983168
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e78ea000/0x0/0x4ffc00000, data 0x3ba9dc4/0x3d23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:29.770828+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:30.771068+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:31.771260+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:32.771422+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:33.771578+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e78ea000/0x0/0x4ffc00000, data 0x3ba9dc4/0x3d23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3709496 data_alloc: 218103808 data_used: 14983168
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:34.771821+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e78ea000/0x0/0x4ffc00000, data 0x3ba9dc4/0x3d23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:35.772018+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d109000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8d109000 session 0x561d8e3eda40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e41e400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8e41e400 session 0x561d8e7c1c20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:36.772197+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c459800 session 0x561d8c659e00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c6d7800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c6d7800 session 0x561d8d1a5860
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d109000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8d109000 session 0x561d8d16e780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f02e000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8f02e000 session 0x561d8e5105a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f02c400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 384196608 unmapped: 25747456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:37.772355+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8f02c400 session 0x561d8c74ba40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 373506048 unmapped: 36438016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c459800 session 0x561d8d2363c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:38.772529+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c6d7800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c6d7800 session 0x561d8d1174a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 373506048 unmapped: 36438016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3907564 data_alloc: 251658240 data_used: 44687360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e6abd000/0x0/0x4ffc00000, data 0x49d7dc4/0x4b51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d109000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8d109000 session 0x561d8f38f4a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f02e000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.500659943s of 11.212936401s, submitted: 43
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:39.772646+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8f02e000 session 0x561d8e50e000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10bc00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 373506048 unmapped: 36438016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:40.772856+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d0b7000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e6abc000/0x0/0x4ffc00000, data 0x49d7dd3/0x4b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,2])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 368877568 unmapped: 41066496 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c5cbc00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:41.772986+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 270 ms_handle_reset con 0x561d8d0b7000 session 0x561d8f38ed20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:42.773173+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 270 heartbeat osd_stat(store_statfs(0x4e83e9000/0x0/0x4ffc00000, data 0x2cde942/0x2e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:43.773423+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3668229 data_alloc: 218103808 data_used: 22822912
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:44.773548+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 270 heartbeat osd_stat(store_statfs(0x4e83e9000/0x0/0x4ffc00000, data 0x2cde942/0x2e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:45.773681+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:46.773891+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 270 heartbeat osd_stat(store_statfs(0x4e83e9000/0x0/0x4ffc00000, data 0x2cde942/0x2e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:47.774119+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 270 heartbeat osd_stat(store_statfs(0x4e83e9000/0x0/0x4ffc00000, data 0x2cde942/0x2e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:48.774407+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671011 data_alloc: 218103808 data_used: 22822912
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:49.774544+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:50.774686+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b1000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:51.774863+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b1000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:52.774992+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:53.775228+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.854998589s of 14.211294174s, submitted: 75
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3687955 data_alloc: 218103808 data_used: 24702976
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:54.775440+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:55.775575+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:56.775812+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:57.775979+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:58.776159+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3693395 data_alloc: 234881024 data_used: 25255936
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:59.776383+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:00.776551+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:01.776742+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:02.776910+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:03.777106+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3693571 data_alloc: 234881024 data_used: 25251840
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:04.777259+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:05.777404+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.356461525s of 12.525735855s, submitted: 8
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:06.777574+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:07.777731+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:08.777892+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3693235 data_alloc: 234881024 data_used: 25247744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:09.778169+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:10.778324+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:11.778487+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:12.778641+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:13.778788+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3692883 data_alloc: 234881024 data_used: 25247744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:14.778985+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:15.779143+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:16.779299+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:17.779455+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8c5cbc00 session 0x561d8f0485a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8d10bc00 session 0x561d8e3ed680
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d0b7000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.056241989s of 12.152028084s, submitted: 5
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:18.779597+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8d0b7000 session 0x561d8e24eb40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:19.779752+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:20.779941+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:21.780103+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:22.780259+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:23.780434+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:24.780604+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:25.780786+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:26.780915+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:27.781141+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:28.781354+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:29.781509+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:30.781686+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:31.781923+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:32.782128+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:33.782297+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:34.782467+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:35.782644+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:36.782807+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:37.782961+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:38.783142+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:39.783435+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:40.783609+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.4 total, 600.0 interval
                                           Cumulative writes: 45K writes, 180K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.79 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2715 writes, 11K keys, 2715 commit groups, 1.0 writes per commit group, ingest: 11.35 MB, 0.02 MB/s
                                           Interval WAL: 2715 writes, 1097 syncs, 2.47 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:41.783751+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets getting new tickets!
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:42.783957+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _finish_auth 0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:42.784849+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.209249496s of 24.272548676s, submitted: 19
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8c459800 session 0x561d8c62bc20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:43.784143+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551006 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:44.784280+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e913b000/0x0/0x4ffc00000, data 0x2358396/0x24d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c6d7800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:45.784372+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8c6d7800 session 0x561d8c644960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:46.784488+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:47.784648+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 356655104 unmapped: 53288960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:48.784807+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8c459800 session 0x561d8e18ba40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10bc00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3516663 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:49.784917+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8d10bc00 session 0x561d8e18a780
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:50.785098+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:51.785243+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:52.785361+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:53.785523+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:54.785701+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:55.785864+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:56.786039+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:57.786205+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:58.786393+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:59.786576+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:00.786740+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:01.786861+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:02.787010+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:03.787147+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:04.787297+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:05.787540+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:06.787699+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:07.787931+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:08.788095+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:09.788266+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8e74f800 session 0x561d8e18be00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d109000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:10.788598+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: mgrc ms_handle_reset ms_handle_reset con 0x561d8c6aa800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3119838916
Nov 25 09:36:40 compute-0 ceph-osd[89702]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3119838916,v1:192.168.122.100:6801/3119838916]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: get_auth_request con 0x561d8c6d7800 auth_method 0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: mgrc handle_mgr_configure stats_period=5
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:11.788791+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8f032c00 session 0x561d8e1cc000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74f800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8f039c00 session 0x561d8d117860
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c2aa800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:12.789096+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:13.789362+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:14.789559+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:15.789775+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:16.790031+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:17.790204+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:18.790388+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:19.790627+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:20.790805+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:21.790991+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:22.791169+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 60129280 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:23.791368+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 60129280 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:24.791540+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 60129280 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:25.791684+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 60129280 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:26.791988+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 60121088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:27.792158+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 60121088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:28.792359+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 60121088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:29.792496+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 60121088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:30.792810+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:31.793124+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:32.793404+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:33.793634+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:34.793819+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:35.794193+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:36.794434+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:37.794599+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:38.795943+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:39.796135+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:40.796352+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:41.796521+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:42.796708+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:43.796881+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:44.797052+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:45.797174+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:46.797331+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349847552 unmapped: 60096512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:47.797539+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349847552 unmapped: 60096512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:48.797771+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349847552 unmapped: 60096512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:49.797891+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349847552 unmapped: 60096512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:50.798041+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:51.798151+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:52.798298+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:53.798512+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:54.798722+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:55.798898+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:56.799120+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 60080128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:57.799278+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 60080128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:58.799495+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 60080128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:59.799640+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 60080128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e52e800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 77.114036560s of 77.737030029s, submitted: 33
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:00.799805+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349872128 unmapped: 60071936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 272 ms_handle_reset con 0x561d8e52e800 session 0x561d8d16f680
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:01.800010+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349880320 unmapped: 60063744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:02.800247+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 272 handle_osd_map epochs [272,273], i have 272, src has [1,273]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 273 ms_handle_reset con 0x561d8d10d000 session 0x561d8e4cd4a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:03.800413+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 273 heartbeat osd_stat(store_statfs(0x4e9a6b000/0x0/0x4ffc00000, data 0x1a26aa0/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:04.800578+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478923 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:05.800795+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:06.801025+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 273 heartbeat osd_stat(store_statfs(0x4e9a6b000/0x0/0x4ffc00000, data 0x1a26aa0/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:07.801546+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349904896 unmapped: 60039168 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 273 heartbeat osd_stat(store_statfs(0x4e9a6b000/0x0/0x4ffc00000, data 0x1a26aa0/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:08.801819+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 60022784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:09.802007+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 60022784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:10.802261+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 60022784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:11.802521+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 60022784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:12.802738+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:13.802961+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:14.803123+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:15.803357+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:16.803632+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:17.803877+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 60006400 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:18.804079+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 59998208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:19.804263+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 59998208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:20.804480+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 59998208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:21.804693+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 59998208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:22.804878+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 59990016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 ms_handle_reset con 0x561d96070c00 session 0x561d8d117680
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d96070c00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:23.805071+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 59990016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:24.805358+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 59990016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:25.805586+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 59990016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:26.805811+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:27.805990+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:28.806230+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:29.806441+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:30.806633+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:31.806790+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:32.807037+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:33.807192+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349970432 unmapped: 59973632 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:34.807371+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349978624 unmapped: 59965440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:35.807508+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349978624 unmapped: 59965440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.830196381s of 36.115074158s, submitted: 95
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:36.807636+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 59957248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:37.807810+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:38.808103+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:39.808385+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:40.808595+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:41.808771+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:42.808939+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:43.809082+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:44.809301+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:45.809575+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:46.809766+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 59908096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:47.809900+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 59908096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:48.810076+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 59908096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:49.810206+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 59908096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:50.810381+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 59899904 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:51.810524+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 59899904 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:52.810676+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 59899904 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:53.810810+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 59899904 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:54.810942+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 59891712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:55.811118+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 59891712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:56.811221+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 59891712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:57.811386+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 59883520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:58.811558+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 59883520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:59.811697+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 59883520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:00.811833+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 59883520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:01.811992+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 59875328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:02.812182+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 59875328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:03.812368+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 59875328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:04.812507+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 59875328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:05.812629+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 59867136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:06.812811+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 59867136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:07.813017+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 59867136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:08.813193+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 59867136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:09.813415+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 59858944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:10.813641+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 59858944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:11.813824+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 59858944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:12.814048+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 59858944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:13.814282+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:14.814510+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:15.814657+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:16.814854+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:17.815041+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:18.815219+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:19.815400+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:20.815527+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350101504 unmapped: 59842560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:21.815674+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350101504 unmapped: 59842560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:22.815851+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 59834368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:23.816056+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 59834368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:24.816213+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:25.816389+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:26.816570+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:27.816757+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:28.816975+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:29.817167+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 59809792 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:30.817392+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 59809792 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:31.817570+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 59801600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:32.817818+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 59801600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:33.818027+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 59801600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:34.818226+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 59793408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:35.818382+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 59793408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:36.818557+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 59793408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:37.818736+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:38.818930+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:39.819068+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:40.819282+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:41.819501+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:42.819714+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:43.819928+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:44.820130+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:45.820437+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 59768832 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:46.820602+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 59768832 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:47.820744+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 59768832 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:48.821478+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:49.821642+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:50.821893+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:51.822044+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:52.822257+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:53.822467+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:54.822675+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:55.822811+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:56.823118+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:57.823349+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:58.823569+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 ms_handle_reset con 0x561d8c459800 session 0x561d8e50e1e0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10bc00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 59490304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 ms_handle_reset con 0x561d8d10bc00 session 0x561d8cff1860
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:59.823756+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 59744256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486265 data_alloc: 234881024 data_used: 21430272
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:00.823911+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:01.824058+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:02.824201+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:03.824433+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:04.824678+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486265 data_alloc: 234881024 data_used: 21430272
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:05.824871+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:06.825023+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:07.825172+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 90.937507629s of 91.256561279s, submitted: 90
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 275 heartbeat osd_stat(store_statfs(0x4e9a66000/0x0/0x4ffc00000, data 0x1a2a0f4/0x1ba7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 61603840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 275 ms_handle_reset con 0x561d8d10d000 session 0x561d8e702d20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:08.825362+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 61603840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:09.825538+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 61603840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3342967 data_alloc: 218103808 data_used: 7806976
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:10.825764+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e52e800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 61603840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:11.825950+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 61595648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:12.826108+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 276 heartbeat osd_stat(store_statfs(0x4eaa68000/0x0/0x4ffc00000, data 0xa2a0d1/0xba6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 276 ms_handle_reset con 0x561d8e52e800 session 0x561d8c644f00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:13.826351+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:14.826579+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3272720 data_alloc: 218103808 data_used: 1056768
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:15.826799+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:16.826979+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:17.827174+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 277 heartbeat osd_stat(store_statfs(0x4eb263000/0x0/0x4ffc00000, data 0x22d6ee/0x3aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 64815104 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:18.827393+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 277 heartbeat osd_stat(store_statfs(0x4eb263000/0x0/0x4ffc00000, data 0x22d6ee/0x3aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.671767235s of 10.978181839s, submitted: 60
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:19.827542+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3275694 data_alloc: 218103808 data_used: 1056768
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 278 heartbeat osd_stat(store_statfs(0x4eb260000/0x0/0x4ffc00000, data 0x22f151/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:20.827757+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 278 heartbeat osd_stat(store_statfs(0x4eb260000/0x0/0x4ffc00000, data 0x22f151/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:21.827923+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 278 heartbeat osd_stat(store_statfs(0x4eb260000/0x0/0x4ffc00000, data 0x22f151/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:22.828459+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:23.828624+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:24.828811+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3275694 data_alloc: 218103808 data_used: 1056768
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e532400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:25.828897+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 ms_handle_reset con 0x561d8e532400 session 0x561d8e7034a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:26.829058+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:27.829235+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:28.829468+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:29.829637+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:30.829804+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:31.829982+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:32.830102+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 64741376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:33.830262+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 64741376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:34.830426+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 64741376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:35.830579+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 64741376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:36.830732+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 64733184 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:37.830915+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 64733184 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:38.831105+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 64733184 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:39.831251+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 64733184 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:40.831420+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 64724992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:41.831566+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:42.831717+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:43.831853+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:44.831981+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:45.832144+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:46.832288+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:47.832461+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:48.832639+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:49.832839+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 64708608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:50.833030+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 64708608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:51.834205+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 64708608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:52.834458+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 64708608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:53.834601+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345243648 unmapped: 64700416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:54.834746+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345243648 unmapped: 64700416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:55.834941+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 64692224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:56.835136+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 64692224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:57.835360+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:58.835887+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:59.836210+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:00.836407+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:01.836683+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:02.836963+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:03.837106+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:04.837291+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:05.837438+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:06.837574+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 64675840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:07.837803+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 64675840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:08.838014+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 64675840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:09.838192+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:10.838371+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:11.838581+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:12.838770+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:13.838951+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:14.839133+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 64659456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:15.839391+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 64659456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:16.839511+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:17.839673+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:18.839890+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:19.840074+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:20.840251+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:21.840410+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:22.840547+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:23.840683+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:24.840821+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:25.840929+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:26.841056+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:27.841188+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:28.841397+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:29.841567+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:30.841701+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:31.841825+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:32.841951+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:33.842112+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:34.842293+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:35.842487+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:36.842658+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:37.842796+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:38.842964+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:39.843103+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:40.843243+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:41.843435+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:42.843623+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:43.843778+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:44.843917+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345341952 unmapped: 64602112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:45.844210+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345341952 unmapped: 64602112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:46.844360+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:47.844486+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:48.844701+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:49.844839+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:50.844975+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:51.845125+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:52.861992+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:53.862103+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:54.862248+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 64585728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:55.862387+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 64585728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:56.862546+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:57.862689+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:58.862844+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:59.862962+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:00.863129+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:01.863276+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:02.863389+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:03.863578+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:04.863791+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:05.863933+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:06.864165+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:07.864293+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:08.864540+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 64552960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:09.864740+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 64552960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 111.406425476s of 111.508773804s, submitted: 26
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:10.865084+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3287953 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 64544768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25a000/0x0/0x4ffc00000, data 0x230d2f/0x3b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:11.865214+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 64544768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:12.865352+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 64544768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:13.865481+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25a000/0x0/0x4ffc00000, data 0x230d34/0x3b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:14.865632+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25a000/0x0/0x4ffc00000, data 0x230d34/0x3b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:15.865812+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3288207 data_alloc: 218103808 data_used: 1064960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:16.866088+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25a000/0x0/0x4ffc00000, data 0x230d34/0x3b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:17.866393+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:18.866579+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:19.866728+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 280 ms_handle_reset con 0x561d8c459800 session 0x561d8ed72d20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10bc00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:20.866870+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291989 data_alloc: 218103808 data_used: 1073152
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:21.867023+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb256000/0x0/0x4ffc00000, data 0x2328b1/0x3b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:22.867192+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.714424610s of 13.350893974s, submitted: 22
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:23.867370+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345464832 unmapped: 64479232 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:24.867497+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345464832 unmapped: 64479232 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:25.867623+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 281 ms_handle_reset con 0x561d8d10bc00 session 0x561d8c881a40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:26.867776+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:27.867912+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:28.868083+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:29.868197+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:30.868469+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:31.868880+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:32.869080+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 64438272 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:33.869280+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 64438272 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:34.869382+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:35.869585+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3777993575' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 09:36:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4266855097' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 09:36:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/612359251' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 09:36:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/705432200' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:36.869752+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:37.869905+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:38.870184+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:39.870420+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:40.870812+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:41.871126+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345522176 unmapped: 64421888 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:42.871290+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 64413696 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:43.871582+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 64413696 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:44.871849+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 64413696 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:45.872098+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 64413696 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:46.872420+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:47.872668+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:48.872956+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:49.873391+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:50.873548+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:51.873771+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:52.873983+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 64397312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:53.874283+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 64397312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:54.874536+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 64397312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:55.874722+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 64397312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:56.874887+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 64389120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:57.875113+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 64380928 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:58.875343+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 64380928 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:59.875516+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 64380928 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:00.875642+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 64380928 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:01.875839+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345571328 unmapped: 64372736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:02.876068+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 64364544 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:03.876250+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.750205994s of 40.227725983s, submitted: 4
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345587712 unmapped: 64356352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:04.876431+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 282 ms_handle_reset con 0x561d8d10d000 session 0x561d8e8c54a0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:05.876643+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328998 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 282 heartbeat osd_stat(store_statfs(0x4eade1000/0x0/0x4ffc00000, data 0x6a5fef/0x82c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:06.876826+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:07.876966+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:08.877204+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 282 heartbeat osd_stat(store_statfs(0x4eade1000/0x0/0x4ffc00000, data 0x6a5fef/0x82c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:09.877356+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:10.877585+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328998 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:11.880769+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 282 heartbeat osd_stat(store_statfs(0x4eade1000/0x0/0x4ffc00000, data 0x6a5fef/0x82c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:12.880940+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:13.881125+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.148046494s of 10.304548264s, submitted: 38
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 64315392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 282 handle_osd_map epochs [283,283], i have 283, src has [1,283]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:14.881382+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345645056 unmapped: 64299008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:15.881532+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345645056 unmapped: 64299008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:16.881905+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345645056 unmapped: 64299008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:17.882058+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 64290816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:18.882261+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 64290816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:19.882422+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 64290816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:20.882542+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:21.882695+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:22.882977+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:23.883175+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:24.883346+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:25.883519+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:26.884543+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 64274432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:27.884693+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 64274432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:28.884876+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345677824 unmapped: 64266240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:29.885016+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345677824 unmapped: 64266240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:30.885140+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:31.885360+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:32.885533+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:33.885898+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:34.886031+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:35.886194+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:36.886358+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:37.886529+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:38.886707+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:39.886857+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:40.887006+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:41.887134+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:42.887296+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:43.887529+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 64233472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:44.887666+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 64233472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:45.887809+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:46.887968+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:47.888116+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:48.888274+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:49.888429+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:50.888561+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:51.888686+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:52.888883+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:53.889061+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 64208896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:54.889216+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 64208896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:55.889352+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 64208896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:56.889492+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 64208896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:57.889622+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 64192512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:58.889777+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 64192512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:59.889942+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 64192512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:00.890084+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 64192512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:01.890274+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 64176128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:02.890426+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 64176128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:03.890561+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 64176128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:04.890692+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 64176128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:05.890826+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 64167936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:06.890970+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 64167936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:07.891100+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 64167936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:08.891259+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 64167936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:09.891386+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:10.891527+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:11.891723+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:12.891893+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:13.892058+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:14.892178+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:15.892423+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 64151552 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:16.892550+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 64151552 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:17.892682+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 64135168 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:18.892881+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:19.893016+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:20.893187+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:21.893326+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:22.893505+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:23.893676+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 64118784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:24.893805+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 64118784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:25.893966+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 64110592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:26.894177+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 64110592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:27.894393+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 64110592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:28.894725+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 64102400 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:29.894900+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 64102400 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:30.895022+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345849856 unmapped: 64094208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:31.895170+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345849856 unmapped: 64094208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:32.895301+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345849856 unmapped: 64094208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:33.895480+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 64086016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:34.895652+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 64086016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:35.898259+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 64086016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:36.898392+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 64086016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:37.898555+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 64077824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:38.898981+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 64077824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:39.899202+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 64077824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:40.899434+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:41.899666+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 64077824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:42.899855+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:43.900026+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:44.900297+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:45.901009+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:46.901433+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:47.901576+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:48.901913+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:49.902133+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:50.902271+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345890816 unmapped: 64053248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:51.902417+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345890816 unmapped: 64053248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:52.902540+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345899008 unmapped: 64045056 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:53.902687+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345899008 unmapped: 64045056 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:54.903016+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345907200 unmapped: 64036864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:55.903199+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345907200 unmapped: 64036864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:56.903363+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345907200 unmapped: 64036864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:57.903513+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345907200 unmapped: 64036864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:58.903694+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:59.903845+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:00.903974+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:01.904124+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:02.904279+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:03.904419+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:04.904575+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:05.904743+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:06.904898+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 64012288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:07.905050+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 64012288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:08.905208+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 64012288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:09.905346+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 64012288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:10.905498+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 64004096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:11.905670+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 64004096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:12.905808+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 64004096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:13.905983+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 64004096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:14.906140+0000)
Nov 25 09:36:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345956352 unmapped: 63987712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:15.906280+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345956352 unmapped: 63987712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:16.906399+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345964544 unmapped: 63979520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1516408040' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:17.906581+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345964544 unmapped: 63979520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:18.906769+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 63971328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:19.906907+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 63971328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:20.907196+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 63971328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:21.907370+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 63971328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:22.907583+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 63963136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:23.907721+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 63963136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:24.907870+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 63963136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:25.908000+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 63954944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:26.908184+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 63954944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:27.908409+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 63954944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:28.908632+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 63946752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:29.908783+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 63946752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:30.908969+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 63938560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:31.909138+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 63938560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:32.909268+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 63938560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:33.909424+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 63938560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:34.909554+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 63938560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:35.909678+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 63930368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:36.909798+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 63930368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:37.909933+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 63922176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:38.910124+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:39.910293+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:40.910453+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:41.910588+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:42.910711+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:43.910843+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:44.910976+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:45.911113+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:46.911249+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:47.911424+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:48.911604+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:49.911765+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:50.911950+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:51.912128+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:52.912425+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:53.912595+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:54.912767+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:55.912929+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:56.913065+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:57.913188+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:58.913391+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:59.913542+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:00.913706+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:01.913844+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:02.913992+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:03.914145+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:04.914384+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:05.914560+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:06.914764+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:07.914963+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:08.915230+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:09.915434+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:10.915634+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:11.915850+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:12.916007+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:13.916092+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:14.916219+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:15.916365+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:16.916537+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:17.916676+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 67346432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:18.916873+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 67346432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:19.916954+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 67346432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:20.917079+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 67338240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:21.917271+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 67338240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:22.917421+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 67330048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:23.917549+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 67330048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:24.917689+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 67330048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:25.917878+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 67330048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:26.918043+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:27.918211+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:28.918410+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:29.918557+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:30.918702+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:31.918913+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:32.919126+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:33.919395+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 67313664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:34.919569+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 67305472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:35.919717+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 67305472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:36.919969+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 67305472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:37.920187+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 67289088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:38.920403+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 67289088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:39.920573+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 67289088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:40.920722+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 67289088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.4 total, 600.0 interval
                                           Cumulative writes: 46K writes, 181K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.78 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 691 writes, 1763 keys, 691 commit groups, 1.0 writes per commit group, ingest: 0.85 MB, 0.00 MB/s
                                           Interval WAL: 691 writes, 309 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.083       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.083       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.083       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:41.920834+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 67280896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:42.920971+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 67280896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:43.921158+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 67280896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:44.921385+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 67280896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:45.921552+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 67280896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:46.921863+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 67272704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:47.922101+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 67272704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:48.922367+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 67272704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:49.922552+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 67264512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:50.922747+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 67264512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:51.922914+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 67264512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:52.923117+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 67264512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:53.923577+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342695936 unmapped: 67248128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:54.923782+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342695936 unmapped: 67248128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:55.923966+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341778432 unmapped: 68165632 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:56.924176+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341778432 unmapped: 68165632 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:57.924450+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341786624 unmapped: 68157440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:58.924690+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 68149248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:59.924834+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 68149248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:00.924990+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 68149248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:01.925132+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 68149248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:02.925287+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 68149248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:03.925495+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341803008 unmapped: 68141056 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:04.925629+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341803008 unmapped: 68141056 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:05.925764+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:06.925949+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:07.926093+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:08.926272+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:09.926428+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:10.926574+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:11.926718+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:12.926955+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341819392 unmapped: 68124672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:13.927082+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341827584 unmapped: 68116480 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:14.927192+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341827584 unmapped: 68116480 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:15.935560+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341827584 unmapped: 68116480 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:16.935732+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341827584 unmapped: 68116480 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:17.935864+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341835776 unmapped: 68108288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:18.936013+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341835776 unmapped: 68108288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:19.936152+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341835776 unmapped: 68108288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:20.936359+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341843968 unmapped: 68100096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:21.936511+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341860352 unmapped: 68083712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:22.936664+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341860352 unmapped: 68083712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:23.936817+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:24.936990+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:25.937132+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:26.937282+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:27.937426+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:28.937590+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:29.937743+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341876736 unmapped: 68067328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:30.937890+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341876736 unmapped: 68067328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:31.938028+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:32.938243+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:33.938412+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:34.938592+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:35.938736+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:36.938868+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:37.939049+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 68050944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:38.939225+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 68050944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:39.939375+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 68050944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:40.939521+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 68050944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:41.939683+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 68042752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:42.939827+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 68042752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:43.940004+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341909504 unmapped: 68034560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:44.940143+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341909504 unmapped: 68034560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:45.940370+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:46.940525+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:47.940706+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:48.940935+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:49.941390+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:50.941655+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:51.941939+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:52.942200+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:53.942413+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 68001792 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:54.942608+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 67993600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:55.942755+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 67993600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:56.942882+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 67993600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:57.943024+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 67985408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:58.943602+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:59.943834+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 67985408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:00.943962+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 67985408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:01.944076+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 67985408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:02.944335+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:03.944544+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:04.944733+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:05.944930+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:06.945152+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:07.945404+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:08.945625+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:09.945783+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:10.946004+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 67960832 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:11.946192+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 67952640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:12.946381+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 67952640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:13.946540+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 67952640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:14.946681+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 67952640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:15.946807+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 67952640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:16.946980+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 67944448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:17.947274+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 67944448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:18.947607+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:19.947744+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:20.947891+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:21.948248+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:22.948676+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:23.949081+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:24.949407+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 67928064 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:25.949703+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 67928064 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:26.949990+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 67919872 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:27.950250+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 67919872 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:28.950614+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 67919872 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:29.950833+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 67919872 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:30.951149+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 67903488 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:31.951571+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 67903488 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:32.951821+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 67895296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:33.952043+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 67895296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:34.952366+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 67887104 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:35.952566+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 67887104 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:36.952777+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 67887104 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 322.153381348s of 322.755462646s, submitted: 14
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:37.952923+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 67846144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:38.953146+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:39.953451+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:40.953643+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:41.954038+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:42.954208+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:43.954398+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:44.954562+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:45.954726+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:46.954930+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:47.955083+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:48.955279+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:49.955505+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:50.955679+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:51.955850+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:52.956107+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:53.956853+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 67813376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:54.957602+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 67813376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:55.957875+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 67813376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:56.958391+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 67813376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:57.958997+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 67796992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:58.959541+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 67796992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:59.959985+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 67796992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:00.960330+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 67796992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:01.960615+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 67796992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:02.960831+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342155264 unmapped: 67788800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:03.961269+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342155264 unmapped: 67788800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:04.961400+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342155264 unmapped: 67788800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:05.961778+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342155264 unmapped: 67788800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:06.962069+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 67780608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:07.962446+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:08.962639+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:09.962798+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:10.962987+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:11.963111+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:12.963288+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:13.963912+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 67764224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:14.964221+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 67764224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:15.964399+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 67764224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:16.964747+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 67764224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:17.965013+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 67764224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:18.965375+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:19.965644+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:20.966222+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:21.966576+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:22.966983+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:23.967388+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:24.967570+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 67739648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:25.967791+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 67739648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:26.967987+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 67739648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:27.968152+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 67739648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:28.968380+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 67739648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:29.968528+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:30.968669+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:31.968859+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:32.969105+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:33.969300+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:34.969492+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:35.969615+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:36.969801+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342228992 unmapped: 67715072 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:37.969946+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342237184 unmapped: 67706880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:38.970114+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:39.970280+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:40.970500+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:41.970643+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:42.970796+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:43.970986+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:44.971216+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:45.971404+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:46.971585+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:47.971683+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:48.971872+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:49.972019+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:50.972200+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:51.972421+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 67674112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:52.972582+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 67674112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:53.972730+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 67674112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:54.972872+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 67665920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:55.973076+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 67665920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:56.973270+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 67665920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:57.973472+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 67665920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:58.973707+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 67657728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:59.973897+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 67657728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:00.974036+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342294528 unmapped: 67649536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:01.974168+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 67641344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:02.974412+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 67641344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:03.974556+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 67641344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:04.974726+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 67633152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:05.974863+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 67633152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:06.975036+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 67633152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:07.975200+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 67633152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:08.975395+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 67633152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:09.975572+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 67616768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:10.975747+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 67616768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:11.975866+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 67616768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:12.975978+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 67616768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:13.976121+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 67616768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:14.976274+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e52e800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 97.540496826s of 97.933944702s, submitted: 90
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 67608576 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:15.976518+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 70795264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:16.976672+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 284 ms_handle_reset con 0x561d8e52e800 session 0x561d8e7c0960
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:17.976864+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3333441 data_alloc: 218103808 data_used: 1097728
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:18.977056+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e532400
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eaddc000/0x0/0x4ffc00000, data 0x6a95f0/0x830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:19.977222+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:20.977486+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:21.977731+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 285 ms_handle_reset con 0x561d8e532400 session 0x561d8d25ba40
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:22.978393+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3305645 data_alloc: 218103808 data_used: 1097728
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:23.978597+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:24.978991+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 285 heartbeat osd_stat(store_statfs(0x4eb24b000/0x0/0x4ffc00000, data 0x23b19e/0x3c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.389628410s of 10.221417427s, submitted: 70
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:25.979267+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:26.979455+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 286 heartbeat osd_stat(store_statfs(0x4eb248000/0x0/0x4ffc00000, data 0x23cc40/0x3c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:27.979592+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 286 heartbeat osd_stat(store_statfs(0x4ea248000/0x0/0x4ffc00000, data 0x123cc40/0x13c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452337 data_alloc: 218103808 data_used: 1097728
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:28.979824+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 287 ms_handle_reset con 0x561d8c459800 session 0x561d8e3ed680
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:29.979977+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:30.981995+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:31.982736+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:32.983504+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:33.983714+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:34.983894+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:35.984532+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:36.984957+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:37.985170+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:38.985450+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:39.985783+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:40.986066+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:41.986681+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 70713344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:42.986895+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 70713344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:43.987474+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 70713344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:44.987704+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 70705152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:45.987928+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 70705152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:46.988271+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 70705152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:47.988657+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 70705152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:48.988843+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 70705152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:49.989083+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:50.989251+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:51.989425+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:52.989654+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:53.989811+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:54.989995+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:55.990255+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:56.990443+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:57.990603+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:58.990792+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:59.990990+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:00.991197+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:01.991377+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:02.991563+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:03.991721+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:04.991882+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:05.992020+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 70680576 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:06.992164+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 70680576 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:07.992294+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:08.992542+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:09.992705+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:10.992831+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:11.993049+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:12.993211+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:13.993386+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339279872 unmapped: 70664192 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:14.993562+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 70656000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:15.993716+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 70656000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:16.993845+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 70656000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:17.993981+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:18.994235+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:19.994405+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:20.994548+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:21.994724+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:22.994878+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:23.995044+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:24.995191+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 70639616 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:25.995383+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 70639616 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:26.995591+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:27.995771+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 70639616 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10bc00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 61.414230347s of 63.027751923s, submitted: 47
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3459809 data_alloc: 218103808 data_used: 1105920
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:28.995939+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 70631424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:29.996139+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 70631424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:30.996566+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339337216 unmapped: 70606848 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:31.996761+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 70598656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 289 heartbeat osd_stat(store_statfs(0x4e9dce000/0x0/0x4ffc00000, data 0x16b1e0d/0x183f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 289 heartbeat osd_stat(store_statfs(0x4e9dce000/0x0/0x4ffc00000, data 0x16b1e0d/0x183f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:32.996924+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339353600 unmapped: 70590464 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3325729 data_alloc: 218103808 data_used: 1114112
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 289 heartbeat osd_stat(store_statfs(0x4ea5cf000/0x0/0x4ffc00000, data 0x241e0d/0x3cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:33.997071+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 289 ms_handle_reset con 0x561d8d10bc00 session 0x561d8e8c43c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:34.997275+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:35.997406+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:36.997579+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:37.997700+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 289 heartbeat osd_stat(store_statfs(0x4ea5cf000/0x0/0x4ffc00000, data 0x241dea/0x3ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324113 data_alloc: 218103808 data_used: 1114112
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:38.997860+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.201864243s of 11.404572487s, submitted: 53
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 289 heartbeat osd_stat(store_statfs(0x4ea5cf000/0x0/0x4ffc00000, data 0x241dea/0x3ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,2])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 289 handle_osd_map epochs [290,290], i have 290, src has [1,290]
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:39.998007+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 70549504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:40.998190+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 70549504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:41.998334+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 70549504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:42.998486+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 70541312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:43.998651+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 70541312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:44.998820+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 70541312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:45.998956+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 70541312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:46.999090+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:47.999214+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:48.999421+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:49.999549+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:51.000526+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:52.000639+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:53.000842+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:54.000993+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:55.001136+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:56.001293+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:57.001473+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:58.001630+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:59.001909+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:00.002062+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:01.002240+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:02.002373+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:03.002496+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 70508544 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:04.002649+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 70508544 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:05.002779+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 70508544 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:06.002934+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 70500352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:07.003120+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 70500352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:08.003261+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 70500352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:09.003432+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 70500352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:10.003601+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 70500352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 ms_handle_reset con 0x561d8d109000 session 0x561d8e39fe00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10cc00
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:11.003771+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 70492160 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 ms_handle_reset con 0x561d8e74f800 session 0x561d8d1163c0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d109000
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 ms_handle_reset con 0x561d8c2aa800 session 0x561d8d2a6d20
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e52e800
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:12.004010+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 70483968 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:13.004193+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 70483968 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:14.004386+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 70475776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:15.004735+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 70467584 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:16.004904+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 70467584 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:17.005149+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 70467584 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:18.006067+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 70459392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:19.033442+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 70459392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:20.033653+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 70459392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:21.033829+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 70459392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:22.033989+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 70459392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:23.034156+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 70451200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:24.034414+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 70451200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:25.034561+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 70451200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:26.034759+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 70434816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:27.034908+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 70434816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:28.035110+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 70426624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:29.035333+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 70426624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:30.035511+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 70426624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:31.035708+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 70418432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:32.035855+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 70418432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:33.036041+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 70418432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:34.036213+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 70410240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:35.036373+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 70410240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:36.036540+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 70410240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:37.036666+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 70402048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:38.036816+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 70402048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:39.037027+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 70402048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:40.037146+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 70402048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:41.037324+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 70402048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:42.037471+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339550208 unmapped: 70393856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:43.037605+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:44.037731+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:45.037896+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:46.038055+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:47.038224+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:48.038357+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:49.038545+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:50.038675+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:51.038806+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:52.038993+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:53.039124+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:54.039292+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:55.039491+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:56.039648+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:57.039805+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:58.039962+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 70344704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:59.040151+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 70344704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:00.040266+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 70344704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:01.040367+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 70336512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:02.040517+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 70336512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:03.040635+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 70328320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:04.040741+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 70328320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:05.040854+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 70328320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:06.041228+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 70320128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:07.041388+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 70328320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: do_command 'config diff' '{prefix=config diff}'
Nov 25 09:36:40 compute-0 ceph-osd[89702]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:08.041625+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: do_command 'config show' '{prefix=config show}'
Nov 25 09:36:40 compute-0 ceph-osd[89702]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 09:36:40 compute-0 ceph-osd[89702]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 09:36:40 compute-0 ceph-osd[89702]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 09:36:40 compute-0 ceph-osd[89702]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 09:36:40 compute-0 ceph-osd[89702]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:09.041854+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:40 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:40 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:36:40 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338845696 unmapped: 71098368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:36:40 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:10.042052+0000)
Nov 25 09:36:40 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338665472 unmapped: 71278592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:40 compute-0 ceph-osd[89702]: do_command 'log dump' '{prefix=log dump}'
Nov 25 09:36:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 25 09:36:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/290239770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 09:36:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3513: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:36:41.123 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:36:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:36:41.124 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:36:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:36:41.124 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:36:41 compute-0 nova_compute[253538]: 2025-11-25 09:36:41.155 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 25 09:36:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1176052188' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 09:36:41 compute-0 rsyslogd[1007]: imjournal from <np0005534516:ceph-osd>: begin to drop messages due to rate-limiting
Nov 25 09:36:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 25 09:36:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4201944677' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:36:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 25 09:36:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3730707467' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 09:36:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 25 09:36:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/783009575' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 09:36:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1516408040' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 09:36:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/290239770' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 09:36:41 compute-0 ceph-mon[75015]: pgmap v3513: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1176052188' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 09:36:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4201944677' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:36:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3730707467' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 09:36:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/783009575' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 09:36:42 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23243 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:42 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23241 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:42 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23247 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:42 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23245 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:42 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23249 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:42 compute-0 ceph-mon[75015]: from='client.23243 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:42 compute-0 ceph-mon[75015]: from='client.23241 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3514: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:43 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23253 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 25 09:36:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1642950578' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 09:36:43 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23257 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 25 09:36:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2507654599' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 09:36:43 compute-0 ceph-mon[75015]: from='client.23247 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:43 compute-0 ceph-mon[75015]: from='client.23245 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:43 compute-0 ceph-mon[75015]: from='client.23249 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:43 compute-0 ceph-mon[75015]: pgmap v3514: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1642950578' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 09:36:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2507654599' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 09:36:44 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23261 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 25 09:36:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2768203780' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 09:36:44 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23265 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:36:44 compute-0 nova_compute[253538]: 2025-11-25 09:36:44.684 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 25 09:36:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4152449056' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 09:36:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 09:36:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 09:36:44 compute-0 ceph-mon[75015]: from='client.23253 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:44 compute-0 ceph-mon[75015]: from='client.23257 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:44 compute-0 ceph-mon[75015]: from='client.23261 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2768203780' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 09:36:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4152449056' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 09:36:44 compute-0 ceph-mon[75015]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 09:36:44 compute-0 ceph-mon[75015]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:41.768843+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 59957248 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:42.769012+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 59957248 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:43.769149+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4eaba6000/0x0/0x4ffc00000, data 0x1284d8e/0x13e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 59957248 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: mgrc ms_handle_reset ms_handle_reset con 0x562bd4911000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3119838916
Nov 25 09:36:45 compute-0 ceph-osd[88620]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3119838916,v1:192.168.122.100:6801/3119838916]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: get_auth_request con 0x562bd2e6cc00 auth_method 0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: mgrc handle_mgr_configure stats_period=5
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3276216 data_alloc: 218103808 data_used: 4177920
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:44.769274+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 59957248 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:45.769434+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4eaba6000/0x0/0x4ffc00000, data 0x1284d8e/0x13e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 59957248 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:46.769588+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 59957248 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:47.769748+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4eaba6000/0x0/0x4ffc00000, data 0x1284d8e/0x13e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 59957248 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:48.769909+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 59957248 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3276216 data_alloc: 218103808 data_used: 4177920
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:49.770058+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335798272 unmapped: 59949056 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:50.770236+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335798272 unmapped: 59949056 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:51.770388+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335798272 unmapped: 59949056 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:52.770625+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335798272 unmapped: 59949056 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4eaba6000/0x0/0x4ffc00000, data 0x1284d8e/0x13e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:53.770773+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335798272 unmapped: 59949056 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3276216 data_alloc: 218103808 data_used: 4177920
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:54.770954+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 59940864 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6bc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.234298706s of 46.362182617s, submitted: 27
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2e6bc00 session 0x562bd2f230e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6d400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2e6d400 session 0x562bd21f1e00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd21ad800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd21ad800 session 0x562bd3ebe780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd21adc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:55.771127+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd21adc00 session 0x562bd2f49a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6bc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2e6bc00 session 0x562bd47a05a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 332603392 unmapped: 63143936 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:56.771251+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 332603392 unmapped: 63143936 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:57.771418+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 332603392 unmapped: 63143936 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:58.771558+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea3e2000/0x0/0x4ffc00000, data 0x1a47df0/0x1bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 333824000 unmapped: 61923328 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2f21400 session 0x562bd3cc1c20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3377570 data_alloc: 218103808 data_used: 4177920
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:04:59.771663+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 333799424 unmapped: 61947904 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:00.771829+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea03c000/0x0/0x4ffc00000, data 0x1deddf0/0x1f52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 333799424 unmapped: 61947904 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:01.771988+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3f11c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3f11c00 session 0x562bd4718f00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 333799424 unmapped: 61947904 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:02.772123+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3f11c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3f11c00 session 0x562bd30161e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd21ad800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd21ad800 session 0x562bd3cc3c20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 333799424 unmapped: 61947904 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd21adc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd21adc00 session 0x562bd489b680
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:03.772275+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6bc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 333963264 unmapped: 61784064 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd596c400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd596c400 session 0x562bd4000000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3382456 data_alloc: 218103808 data_used: 4177920
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:04.772395+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3bcd000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 333963264 unmapped: 61784064 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:05.772551+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea017000/0x0/0x4ffc00000, data 0x1e11e00/0x1f77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 333758464 unmapped: 61988864 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:06.772709+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 334725120 unmapped: 61022208 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:07.772843+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 334725120 unmapped: 61022208 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:08.772961+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 334725120 unmapped: 61022208 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:09.773097+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3459736 data_alloc: 234881024 data_used: 14843904
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 334725120 unmapped: 61022208 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea017000/0x0/0x4ffc00000, data 0x1e11e00/0x1f77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:10.773294+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 334725120 unmapped: 61022208 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:11.773552+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2e6a800 session 0x562bd160cf00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd21ad800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 334733312 unmapped: 61014016 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:12.773728+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 334733312 unmapped: 61014016 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea017000/0x0/0x4ffc00000, data 0x1e11e00/0x1f77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:13.773824+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 334733312 unmapped: 61014016 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:14.773969+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3459736 data_alloc: 234881024 data_used: 14843904
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 334733312 unmapped: 61014016 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:15.774099+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea017000/0x0/0x4ffc00000, data 0x1e11e00/0x1f77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea017000/0x0/0x4ffc00000, data 0x1e11e00/0x1f77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 334733312 unmapped: 61014016 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:16.774273+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.037870407s of 21.384599686s, submitted: 62
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea017000/0x0/0x4ffc00000, data 0x1e11e00/0x1f77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341016576 unmapped: 54730752 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:17.774366+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 340467712 unmapped: 55279616 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:18.774493+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 54050816 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:19.774599+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570294 data_alloc: 234881024 data_used: 17002496
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 54050816 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:20.774813+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 54050816 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:21.774988+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 54050816 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e9620000/0x0/0x4ffc00000, data 0x27f7e00/0x295d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:22.775219+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 54050816 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:23.775373+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e9620000/0x0/0x4ffc00000, data 0x27f7e00/0x295d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 54050816 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:24.775561+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570310 data_alloc: 234881024 data_used: 17002496
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 54050816 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:25.775698+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 54050816 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e9620000/0x0/0x4ffc00000, data 0x27f7e00/0x295d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:26.775815+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 54050816 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:27.775959+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 54050816 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:28.776095+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 54042624 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:29.776233+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3bcd000 session 0x562bd47181e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571270 data_alloc: 234881024 data_used: 17027072
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd596c400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.840531349s of 13.297338486s, submitted: 160
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd596c400 session 0x562bd37934a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 54124544 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:30.776369+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3f11400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3f11400 session 0x562bd48274a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2409800 session 0x562bd489ba40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3e57800 session 0x562bd46dd680
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2409800 session 0x562bd3cc12c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea00f000/0x0/0x4ffc00000, data 0x1e19e00/0x1f7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3bcd000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 54124544 heap: 395747328 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3bcd000 session 0x562bd46d9e00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:31.776514+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3e57800 session 0x562bd4931e00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3f11400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3f11400 session 0x562bd404cd20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd596c400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd596c400 session 0x562bd2259a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2409800 session 0x562bd2f230e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341647360 unmapped: 58302464 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e960e000/0x0/0x4ffc00000, data 0x2819e10/0x2980000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:32.776843+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341647360 unmapped: 58302464 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:33.777148+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341647360 unmapped: 58302464 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:34.777476+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3524335 data_alloc: 218103808 data_used: 13352960
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341647360 unmapped: 58302464 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:35.777597+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e960e000/0x0/0x4ffc00000, data 0x2819e10/0x2980000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341647360 unmapped: 58302464 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:36.778156+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341647360 unmapped: 58302464 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:37.778825+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341647360 unmapped: 58302464 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:38.778979+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341647360 unmapped: 58302464 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:39.779179+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3524335 data_alloc: 218103808 data_used: 13352960
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3bcd000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3bcd000 session 0x562bd4002d20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 340172800 unmapped: 59777024 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.335142136s of 10.534548759s, submitted: 30
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3f11400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:40.779413+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e95ea000/0x0/0x4ffc00000, data 0x283de10/0x29a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 340172800 unmapped: 59777024 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:41.779566+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 340172800 unmapped: 59777024 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:42.779791+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 340172800 unmapped: 59777024 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:43.780000+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341123072 unmapped: 58826752 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:44.780191+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601607 data_alloc: 234881024 data_used: 23846912
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341123072 unmapped: 58826752 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:45.780375+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341123072 unmapped: 58826752 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:46.780516+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e95ea000/0x0/0x4ffc00000, data 0x283de10/0x29a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341123072 unmapped: 58826752 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:47.780704+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341123072 unmapped: 58826752 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:48.780857+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341123072 unmapped: 58826752 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:49.781006+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601959 data_alloc: 234881024 data_used: 23846912
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341123072 unmapped: 58826752 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:50.781226+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341123072 unmapped: 58826752 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:51.781354+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341123072 unmapped: 58826752 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:52.781509+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e95ea000/0x0/0x4ffc00000, data 0x283de10/0x29a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341123072 unmapped: 58826752 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:53.781629+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.966702461s of 13.978903770s, submitted: 3
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341123072 unmapped: 58826752 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:54.781767+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654495 data_alloc: 234881024 data_used: 23867392
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 57532416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:55.781959+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 57532416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:56.782092+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 57532416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:57.782245+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 57532416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:58.782510+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e8f2f000/0x0/0x4ffc00000, data 0x2ef8e10/0x305f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 57532416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:05:59.782707+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3668639 data_alloc: 234881024 data_used: 24334336
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 57532416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:00.782929+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342425600 unmapped: 57524224 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:01.783133+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e8f2d000/0x0/0x4ffc00000, data 0x2efae10/0x3061000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342425600 unmapped: 57524224 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:02.783381+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342425600 unmapped: 57524224 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:03.783519+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342441984 unmapped: 57507840 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:04.783704+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3667067 data_alloc: 234881024 data_used: 24326144
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342441984 unmapped: 57507840 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:05.783849+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e8f2d000/0x0/0x4ffc00000, data 0x2efae10/0x3061000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342441984 unmapped: 57507840 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:06.784792+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.362945557s of 12.661943436s, submitted: 76
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342441984 unmapped: 57507840 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:07.784960+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3e57800 session 0x562bd404dc20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3f11400 session 0x562bd47a0960
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd44cb400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 58335232 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:08.785152+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd44cb400 session 0x562bd3cc32c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e8f2b000/0x0/0x4ffc00000, data 0x2efce10/0x3063000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 58335232 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:09.785792+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461369 data_alloc: 218103808 data_used: 13344768
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 58335232 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:10.786279+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea00f000/0x0/0x4ffc00000, data 0x1e19e00/0x1f7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 58335232 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:11.786610+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea00f000/0x0/0x4ffc00000, data 0x1e19e00/0x1f7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 58335232 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:12.786961+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea00f000/0x0/0x4ffc00000, data 0x1e19e00/0x1f7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341630976 unmapped: 58318848 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2e6bc00 session 0x562bd21aa780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:13.787155+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2f21400 session 0x562bd21f14a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea00f000/0x0/0x4ffc00000, data 0x1e19e00/0x1f7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2409800 session 0x562bd48272c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:14.787449+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 64708608 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306370 data_alloc: 218103808 data_used: 4177920
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:15.787757+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 64708608 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:16.788177+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 64708608 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4eaba5000/0x0/0x4ffc00000, data 0x1284d8e/0x13e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3bcd000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:17.788300+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3bcd000 session 0x562bd4719c20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 64708608 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3e57800 session 0x562bd44d74a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3e57800 session 0x562bd2abef00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2409800 session 0x562bd49312c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6bc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.390676498s of 10.730729103s, submitted: 66
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:18.788560+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 341049344 unmapped: 58900480 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2e6bc00 session 0x562bd49314a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2f21400 session 0x562bd3ba5a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3bcd000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3bcd000 session 0x562bd21aa5a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3bcd000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3bcd000 session 0x562bd40005a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2409800 session 0x562bd3ebe5a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:19.788840+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 64708608 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366656 data_alloc: 218103808 data_used: 4177920
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:20.789086+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 64708608 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:21.789392+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 64708608 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea4f2000/0x0/0x4ffc00000, data 0x1937d9e/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:22.789547+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 64708608 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2c1dc00 session 0x562bd4719e00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6bc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:23.789781+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 64708608 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2c1dc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd2c1dc00 session 0x562bd57f14a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:24.789989+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 64708608 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366656 data_alloc: 218103808 data_used: 4177920
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:25.790121+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 64700416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:26.790558+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 64700416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:27.790696+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 64700416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea4f2000/0x0/0x4ffc00000, data 0x1937d9e/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:28.790965+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 64700416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:29.791203+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 64700416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415456 data_alloc: 218103808 data_used: 11071488
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:30.791631+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 64700416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:31.791861+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 64700416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:32.792059+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 64700416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:33.792237+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 64700416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea4f2000/0x0/0x4ffc00000, data 0x1937d9e/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:34.792499+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 64700416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415456 data_alloc: 218103808 data_used: 11071488
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea4f2000/0x0/0x4ffc00000, data 0x1937d9e/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:35.792627+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 64700416 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ea4f2000/0x0/0x4ffc00000, data 0x1937d9e/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.593769073s of 18.222761154s, submitted: 13
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:36.792785+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339165184 unmapped: 60784640 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:37.792912+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 60776448 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:38.793025+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 60751872 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:39.793159+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 60751872 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500080 data_alloc: 218103808 data_used: 11460608
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:40.793351+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e9bf7000/0x0/0x4ffc00000, data 0x221cd9e/0x2381000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 60751872 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:41.793479+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 60751872 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:42.793678+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 60751872 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:43.793852+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 60751872 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:44.794009+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 60751872 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e9bf7000/0x0/0x4ffc00000, data 0x221cd9e/0x2381000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500096 data_alloc: 218103808 data_used: 11460608
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:45.794135+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 60751872 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e9bf7000/0x0/0x4ffc00000, data 0x221cd9e/0x2381000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:46.794280+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 60751872 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.278714180s of 10.668151855s, submitted: 88
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e9c0b000/0x0/0x4ffc00000, data 0x221dd9e/0x2382000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:47.794461+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 60588032 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e9c0b000/0x0/0x4ffc00000, data 0x221dd9e/0x2382000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 ms_handle_reset con 0x562bd3e57800 session 0x562bd46dc960
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:48.794695+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 60588032 heap: 399949824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3f11400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 254 ms_handle_reset con 0x562bd3f11400 session 0x562bd48c83c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2c1dc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 254 ms_handle_reset con 0x562bd2c1dc00 session 0x562bd4827c20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 254 ms_handle_reset con 0x562bd2409800 session 0x562bd3792960
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3bcd000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:49.794840+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 254 ms_handle_reset con 0x562bd3bcd000 session 0x562bd44d63c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 348585984 unmapped: 63627264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652171 data_alloc: 234881024 data_used: 19316736
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 254 ms_handle_reset con 0x562bd3e57800 session 0x562bd3ff0f00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd44cb400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:50.795062+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 63619072 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd44cb400 session 0x562bd3bba1e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:51.795255+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 348602368 unmapped: 63610880 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:52.795429+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 348602368 unmapped: 63610880 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 heartbeat osd_stat(store_statfs(0x4e8d32000/0x0/0x4ffc00000, data 0x30f10e7/0x325a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:53.795595+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 348602368 unmapped: 63610880 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:54.795747+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 348610560 unmapped: 63602688 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3660007 data_alloc: 234881024 data_used: 19324928
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:55.795938+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 348610560 unmapped: 63602688 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:56.796081+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd2409800 session 0x562bd4719a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2c1dc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd2c1dc00 session 0x562bd4826960
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3bcd000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd3bcd000 session 0x562bd160dc20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 348610560 unmapped: 63602688 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd3e57800 session 0x562bd2258b40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd490fc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.836506844s of 10.172529221s, submitted: 80
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd490fc00 session 0x562bd48c8f00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd2409800 session 0x562bd4002780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2c1dc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd2c1dc00 session 0x562bd2abe000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3bcd000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd3bcd000 session 0x562bd404de00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd490fc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd490fc00 session 0x562bd40030e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd490f800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd490f800 session 0x562bd4002f00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd2409800 session 0x562bd46dc1e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd3e57800 session 0x562bd1db05a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 heartbeat osd_stat(store_statfs(0x4e8613000/0x0/0x4ffc00000, data 0x38110f7/0x397b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:57.796219+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2c1dc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd2c1dc00 session 0x562bd4002960
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3bcd000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd3bcd000 session 0x562bd3cc1c20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd490fc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344743936 unmapped: 67469312 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd490fc00 session 0x562bd46d8780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 ms_handle_reset con 0x562bd2409800 session 0x562bd3ebed20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:58.796391+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344743936 unmapped: 67469312 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:06:59.796473+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344743936 unmapped: 67469312 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3710504 data_alloc: 234881024 data_used: 19324928
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:00.796622+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344743936 unmapped: 67469312 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:01.796815+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2c1dc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd2c1dc00 session 0x562bd21f0960
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344752128 unmapped: 67461120 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3bcd000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd3bcd000 session 0x562bd404c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:02.797004+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 67452928 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd3e57800 session 0x562bd44d7a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4313c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 heartbeat osd_stat(store_statfs(0x4e860d000/0x0/0x4ffc00000, data 0x3812bcc/0x3980000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd4313c00 session 0x562bd2f22780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:03.797133+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 67444736 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd2409800 session 0x562bd2a945a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2c1dc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3bcd000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:04.797273+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd3e57800 session 0x562bd2e565a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 67444736 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719593 data_alloc: 234881024 data_used: 19730432
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e97800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd3e97800 session 0x562bd1e5a3c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd7922000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:05.797362+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd7922000 session 0x562bd48c9a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 67420160 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 heartbeat osd_stat(store_statfs(0x4e860a000/0x0/0x4ffc00000, data 0x3812c32/0x3984000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:06.797553+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 67420160 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:07.797714+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 67387392 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 heartbeat osd_stat(store_statfs(0x4e860a000/0x0/0x4ffc00000, data 0x3812c32/0x3984000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:08.797860+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 65060864 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:09.797987+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 65060864 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3844928 data_alloc: 251658240 data_used: 36814848
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:10.798124+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 65060864 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 heartbeat osd_stat(store_statfs(0x4e860a000/0x0/0x4ffc00000, data 0x3812c32/0x3984000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 heartbeat osd_stat(store_statfs(0x4e860a000/0x0/0x4ffc00000, data 0x3812c32/0x3984000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:11.798239+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 65060864 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:12.798397+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 65060864 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:13.798534+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 65060864 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 heartbeat osd_stat(store_statfs(0x4e860a000/0x0/0x4ffc00000, data 0x3812c32/0x3984000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:14.798662+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 65060864 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3844928 data_alloc: 251658240 data_used: 36814848
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:15.798788+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.694450378s of 18.903245926s, submitted: 94
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 62873600 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:16.798917+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 64626688 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:17.799053+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 64626688 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:18.799208+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351076352 unmapped: 61136896 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:19.799390+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 60547072 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3970613 data_alloc: 251658240 data_used: 39292928
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 heartbeat osd_stat(store_statfs(0x4e78fd000/0x0/0x4ffc00000, data 0x451fc32/0x4691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:20.799555+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 59383808 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 heartbeat osd_stat(store_statfs(0x4e78f0000/0x0/0x4ffc00000, data 0x452bc32/0x469d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:21.799675+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 60424192 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:22.799792+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 60424192 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 heartbeat osd_stat(store_statfs(0x4e78f0000/0x0/0x4ffc00000, data 0x452bc32/0x469d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:23.799906+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 60424192 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:24.800098+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 heartbeat osd_stat(store_statfs(0x4e78f0000/0x0/0x4ffc00000, data 0x452bc32/0x469d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 60424192 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3973677 data_alloc: 251658240 data_used: 39505920
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 heartbeat osd_stat(store_statfs(0x4e78f0000/0x0/0x4ffc00000, data 0x452bc32/0x469d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:25.800258+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 60416000 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:26.800394+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 heartbeat osd_stat(store_statfs(0x4e78f0000/0x0/0x4ffc00000, data 0x452bc32/0x469d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 60416000 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:27.800532+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 60416000 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:28.800723+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 60416000 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:29.800878+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 60416000 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3973997 data_alloc: 251658240 data_used: 39514112
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:30.801059+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3bcd800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd3bcd800 session 0x562bd47a05a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd2409800 session 0x562bd4000d20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 60416000 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd3e57800 session 0x562bd4826b40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e97800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd3e97800 session 0x562bd47a1a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd7922000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.021572113s of 15.309944153s, submitted: 88
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd7922000 session 0x562bd2b2c3c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd475f800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd475f800 session 0x562bd57f0780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd2409800 session 0x562bd57f0d20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd3e57800 session 0x562bd4931a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e97800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd3e97800 session 0x562bd44d7e00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 heartbeat osd_stat(store_statfs(0x4e78f0000/0x0/0x4ffc00000, data 0x452bc32/0x469d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:31.801178+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 60235776 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:32.801367+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 60227584 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:33.801511+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 60227584 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:34.801636+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 60227584 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4031349 data_alloc: 251658240 data_used: 39518208
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:35.801795+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 60227584 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:36.801923+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 heartbeat osd_stat(store_statfs(0x4e727f000/0x0/0x4ffc00000, data 0x4b9cc42/0x4d0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 60227584 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:37.802112+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 60227584 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:38.802264+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 60219392 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:39.802396+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 60219392 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd7922000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd7922000 session 0x562bd3ff1c20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4031349 data_alloc: 251658240 data_used: 39518208
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:40.802556+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e58c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd3e58c00 session 0x562bd2e56780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 60219392 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd2409800 session 0x562bd4001680
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:41.802756+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.445635796s of 10.541186333s, submitted: 17
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd3e57800 session 0x562bd404d4a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 60211200 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 heartbeat osd_stat(store_statfs(0x4e727d000/0x0/0x4ffc00000, data 0x4b9cc75/0x4d11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e97800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd7922000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:42.802879+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd7923000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 ms_handle_reset con 0x562bd7923000 session 0x562bd4719c20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 heartbeat osd_stat(store_statfs(0x4e727d000/0x0/0x4ffc00000, data 0x4b9cc75/0x4d11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 60211200 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 258 ms_handle_reset con 0x562bd2f21c00 session 0x562bd49303c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd596d800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4315c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 258 ms_handle_reset con 0x562bd4315c00 session 0x562bd2f1a000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 258 ms_handle_reset con 0x562bd596d800 session 0x562bd2f1a5a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:43.803064+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 40419328 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:44.803199+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 258 ms_handle_reset con 0x562bd2409800 session 0x562bd48274a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 56123392 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4264005 data_alloc: 251658240 data_used: 45490176
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 258 ms_handle_reset con 0x562bd2f21c00 session 0x562bd3cc2780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:45.803295+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 54206464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 259 heartbeat osd_stat(store_statfs(0x4e5671000/0x0/0x4ffc00000, data 0x67a53c3/0x691c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 259 handle_osd_map epochs [259,260], i have 259, src has [1,260]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 259 handle_osd_map epochs [260,260], i have 260, src has [1,260]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 260 ms_handle_reset con 0x562bd3e57800 session 0x562bd3ff01e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:46.803453+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 54181888 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd7923000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 260 ms_handle_reset con 0x562bd7923000 session 0x562bd3ff05a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 260 ms_handle_reset con 0x562bd2409800 session 0x562bd2a8d2c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 260 ms_handle_reset con 0x562bd2f21c00 session 0x562bd489b4a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:47.803592+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 54181888 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:48.803720+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 261 ms_handle_reset con 0x562bd3e57800 session 0x562bd40003c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 54108160 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 261 heartbeat osd_stat(store_statfs(0x4e7270000/0x0/0x4ffc00000, data 0x4ba3b49/0x4d1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:49.803793+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 54108160 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4114633 data_alloc: 268435456 data_used: 49872896
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 261 ms_handle_reset con 0x562bd2c1dc00 session 0x562bd2a95e00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 261 ms_handle_reset con 0x562bd3bcd000 session 0x562bd47a1e00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:50.803954+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 54108160 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 261 heartbeat osd_stat(store_statfs(0x4e7271000/0x0/0x4ffc00000, data 0x4ba3b49/0x4d1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:51.804068+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.555204391s of 10.004935265s, submitted: 122
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 261 ms_handle_reset con 0x562bd2409800 session 0x562bd4719e00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359178240 unmapped: 53035008 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 261 heartbeat osd_stat(store_statfs(0x4e747b000/0x0/0x4ffc00000, data 0x499aaa4/0x4b10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:52.804207+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359178240 unmapped: 53035008 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 261 heartbeat osd_stat(store_statfs(0x4e747b000/0x0/0x4ffc00000, data 0x499aaa4/0x4b10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:53.804332+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 54075392 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2c1dc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:54.804486+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 262 handle_osd_map epochs [262,263], i have 262, src has [1,263]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 57786368 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 263 ms_handle_reset con 0x562bd2c1dc00 session 0x562bd489b2c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3885772 data_alloc: 251658240 data_used: 32206848
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:55.804618+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 57786368 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:56.804759+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 54337536 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:57.804884+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359170048 unmapped: 53043200 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:58.805048+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 264 ms_handle_reset con 0x562bd2f21400 session 0x562bd46dd2c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 264 heartbeat osd_stat(store_statfs(0x4e64f3000/0x0/0x4ffc00000, data 0x4368b11/0x44e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 52928512 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:07:59.805217+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 264 ms_handle_reset con 0x562bd2f21c00 session 0x562bd4003a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 60727296 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3775424 data_alloc: 234881024 data_used: 18534400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:00.805381+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 60727296 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:01.805545+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 264 heartbeat osd_stat(store_statfs(0x4e7045000/0x0/0x4ffc00000, data 0x33cfb01/0x3547000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 60727296 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:02.805683+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 60727296 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:03.805805+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 60727296 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:04.805936+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 264 heartbeat osd_stat(store_statfs(0x4e7045000/0x0/0x4ffc00000, data 0x33cfb01/0x3547000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.543651581s of 13.030752182s, submitted: 165
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 264 handle_osd_map epochs [265,265], i have 265, src has [1,265]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 60858368 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770142 data_alloc: 234881024 data_used: 18534400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:05.806089+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 60858368 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:06.806241+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 60858368 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:07.806402+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 60858368 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:08.806534+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 60858368 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:09.806657+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd3e97800 session 0x562bd57f0960
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd7922000 session 0x562bd47a10e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 60858368 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3771582 data_alloc: 234881024 data_used: 18616320
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7493000/0x0/0x4ffc00000, data 0x33d1564/0x354a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:10.806806+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd57f0000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346808320 unmapped: 65404928 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:11.806937+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346816512 unmapped: 65396736 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:12.807080+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346816512 unmapped: 65396736 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:13.807229+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346816512 unmapped: 65396736 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd2f22780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd2f192c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2c1dc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:14.807467+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e839e000/0x0/0x4ffc00000, data 0x24c9521/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [0,0,2])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.755641937s of 10.003308296s, submitted: 46
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2c1dc00 session 0x562bd1e5b680
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410854 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:15.807650+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:16.807829+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e95d1000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:17.807981+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:18.808146+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:19.808302+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410854 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e95d1000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:20.808517+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:21.808666+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:22.808890+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e95d1000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:23.809048+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:24.809172+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410854 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:25.809344+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:26.809523+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:27.809719+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:28.810064+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e95d1000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e95d1000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:29.810240+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410854 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:30.810567+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:31.811117+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:32.811421+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:33.811604+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:34.811754+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e95d1000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410854 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:35.811962+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e95d1000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:36.812113+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:37.812255+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:38.812392+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:39.812578+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410854 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:40.812755+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72843264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:41.812929+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e95d1000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 72835072 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:42.813156+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 72835072 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:43.813356+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 72835072 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:44.813559+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 72835072 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410854 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:45.813744+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 72835072 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:46.813924+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 72835072 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e95d1000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:47.814096+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 72835072 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:48.814299+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 72835072 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:49.814537+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 72826880 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410854 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:50.814766+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 72826880 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e95d1000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:51.814959+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e95d1000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 72826880 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:52.815176+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 72826880 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:53.815398+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 72818688 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:54.815521+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 72818688 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410854 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:55.815660+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e95d1000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 72818688 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:56.815808+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd40003c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd2a8d2c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd3ff05a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd7922000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd7922000 session 0x562bd3ff01e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.122470856s of 42.224075317s, submitted: 28
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e95d1000/0x0/0x4ffc00000, data 0x12994ee/0x140d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [0,0,1,1,4])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344432640 unmapped: 67780608 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21400 session 0x562bd48274a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:57.815924+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21400 session 0x562bd4001680
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd3ff1c20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd44d7e00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd57f0d20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 72761344 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:58.816057+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e90fd000/0x0/0x4ffc00000, data 0x176c550/0x18e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 72761344 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:08:59.816202+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 72761344 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456229 data_alloc: 218103808 data_used: 4259840
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:00.816370+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 72761344 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:01.816565+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 72753152 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd7922000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd7922000 session 0x562bd57f0780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:02.816704+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd2b2c3c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 72753152 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:03.816847+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd47a1a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd4000d20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 72589312 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:04.816998+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e90d9000/0x0/0x4ffc00000, data 0x1790550/0x1905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 72589312 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458601 data_alloc: 218103808 data_used: 4263936
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:05.817125+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 72949760 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:06.817279+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e90d9000/0x0/0x4ffc00000, data 0x1790550/0x1905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 72949760 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:07.817447+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 72949760 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:08.817634+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 72949760 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:09.817788+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 72949760 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:10.818010+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494441 data_alloc: 218103808 data_used: 9310208
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 72949760 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:11.818183+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e90d9000/0x0/0x4ffc00000, data 0x1790550/0x1905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 72949760 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:12.818384+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 72949760 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:13.818535+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 72949760 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:14.818687+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e90d9000/0x0/0x4ffc00000, data 0x1790550/0x1905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 72949760 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:15.818846+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494441 data_alloc: 218103808 data_used: 9310208
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 72949760 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.174478531s of 19.682008743s, submitted: 33
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:16.819256+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 67624960 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e761a000/0x0/0x4ffc00000, data 0x20af550/0x2224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:17.819407+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7537000/0x0/0x4ffc00000, data 0x218c550/0x2301000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 67518464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:18.819614+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7516000/0x0/0x4ffc00000, data 0x21a5550/0x231a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 67518464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:19.819886+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 67518464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:20.820387+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589677 data_alloc: 218103808 data_used: 9756672
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 67518464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:21.821107+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 67518464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:22.822102+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 67518464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:23.822960+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 67518464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:24.823377+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7516000/0x0/0x4ffc00000, data 0x21a5550/0x231a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:25.823767+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 67518464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589677 data_alloc: 218103808 data_used: 9756672
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:26.824418+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 67518464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7516000/0x0/0x4ffc00000, data 0x21a5550/0x231a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:27.824692+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 67518464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:28.824818+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 67518464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:29.825065+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 67518464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:30.825225+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 67518464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589677 data_alloc: 218103808 data_used: 9756672
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:31.825510+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 67518464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7516000/0x0/0x4ffc00000, data 0x21a5550/0x231a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:32.825865+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 67510272 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:33.826020+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 67510272 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7516000/0x0/0x4ffc00000, data 0x21a5550/0x231a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:34.826220+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 67510272 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.5 total, 600.0 interval
                                           Cumulative writes: 43K writes, 175K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 43K writes, 15K syncs, 2.86 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4589 writes, 19K keys, 4589 commit groups, 1.0 writes per commit group, ingest: 23.86 MB, 0.04 MB/s
                                           Interval WAL: 4589 writes, 1654 syncs, 2.77 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:35.826453+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 67510272 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589677 data_alloc: 218103808 data_used: 9756672
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7516000/0x0/0x4ffc00000, data 0x21a5550/0x231a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:36.826603+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 67510272 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:37.826779+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 67510272 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:38.826974+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 67502080 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:39.827133+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 67502080 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:40.827326+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 67502080 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589677 data_alloc: 218103808 data_used: 9756672
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:41.827453+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 67502080 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7516000/0x0/0x4ffc00000, data 0x21a5550/0x231a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:42.827611+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 67493888 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:43.827776+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 67493888 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:44.827954+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 67493888 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7516000/0x0/0x4ffc00000, data 0x21a5550/0x231a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:45.828133+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 67493888 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589677 data_alloc: 218103808 data_used: 9756672
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:46.828275+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.650684357s of 30.024961472s, submitted: 83
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 61251584 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd3e57800 session 0x562bd3cc1c20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd596d800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd596d800 session 0x562bd46d9e00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:47.828420+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344678400 unmapped: 67534848 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:48.828586+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344678400 unmapped: 67534848 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6f05000/0x0/0x4ffc00000, data 0x27c35b2/0x2939000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:49.828769+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 67526656 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:50.829007+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 67526656 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3630474 data_alloc: 218103808 data_used: 9760768
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:51.829140+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 67526656 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:52.829419+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 67526656 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:53.829603+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 67526656 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:54.829797+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 67526656 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6f03000/0x0/0x4ffc00000, data 0x27c45b2/0x293a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:55.829990+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 67526656 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3630474 data_alloc: 218103808 data_used: 9760768
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:56.830226+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 67526656 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:57.830864+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 67526656 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6f03000/0x0/0x4ffc00000, data 0x27c45b2/0x293a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6f03000/0x0/0x4ffc00000, data 0x27c45b2/0x293a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:58.830948+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.950550079s of 12.080301285s, submitted: 27
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344637440 unmapped: 67575808 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd404da40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:09:59.831101+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 67420160 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:00.831294+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 67420160 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3636032 data_alloc: 218103808 data_used: 9760768
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:01.831458+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 67420160 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:02.831615+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 67411968 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:03.831741+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 67411968 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6edf000/0x0/0x4ffc00000, data 0x27e85d5/0x295f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:04.831867+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 66150400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6edf000/0x0/0x4ffc00000, data 0x27e85d5/0x295f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:05.832002+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 66150400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683248 data_alloc: 234881024 data_used: 16080896
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6edf000/0x0/0x4ffc00000, data 0x27e85d5/0x295f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:06.832075+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 66150400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:07.832209+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 66150400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:08.832345+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6edf000/0x0/0x4ffc00000, data 0x27e85d5/0x295f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 66150400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:09.832469+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 66150400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6edf000/0x0/0x4ffc00000, data 0x27e85d5/0x295f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:10.832638+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 66150400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683248 data_alloc: 234881024 data_used: 16080896
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:11.832792+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 66150400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:12.833009+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 66150400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:13.833343+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 66150400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:14.833453+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.874057770s of 15.962975502s, submitted: 25
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 66150400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:15.833590+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 348585984 unmapped: 63627264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725554 data_alloc: 234881024 data_used: 16179200
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6a49000/0x0/0x4ffc00000, data 0x2c765d5/0x2ded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:16.833758+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 64290816 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6a40000/0x0/0x4ffc00000, data 0x2c875d5/0x2dfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:17.833922+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 64290816 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:18.834099+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 64290816 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6a40000/0x0/0x4ffc00000, data 0x2c875d5/0x2dfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:19.834233+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 64290816 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6a40000/0x0/0x4ffc00000, data 0x2c875d5/0x2dfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:20.834456+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6a40000/0x0/0x4ffc00000, data 0x2c875d5/0x2dfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 64290816 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733318 data_alloc: 234881024 data_used: 16179200
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:21.834587+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6a40000/0x0/0x4ffc00000, data 0x2c875d5/0x2dfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 64290816 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:22.834702+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 64282624 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6a40000/0x0/0x4ffc00000, data 0x2c875d5/0x2dfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:23.834853+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 64282624 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:24.834984+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 64282624 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:25.835138+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6a40000/0x0/0x4ffc00000, data 0x2c875d5/0x2dfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 64282624 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733334 data_alloc: 234881024 data_used: 16179200
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6a40000/0x0/0x4ffc00000, data 0x2c875d5/0x2dfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:26.835296+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 64282624 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:27.835474+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 64282624 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6a40000/0x0/0x4ffc00000, data 0x2c875d5/0x2dfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:28.835612+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 64282624 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.059627533s of 14.749497414s, submitted: 53
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd2ec2780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd4718960
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:29.835748+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 64274432 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:30.835927+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 64274432 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3731050 data_alloc: 234881024 data_used: 16183296
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6a40000/0x0/0x4ffc00000, data 0x248c5d5/0x2603000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:31.836139+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 64258048 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd3e57800 session 0x562bd46d8780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:32.836342+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 64241664 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:33.836498+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 64241664 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:34.836646+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 64233472 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e746b000/0x0/0x4ffc00000, data 0x21a6550/0x231b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:35.836777+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 64233472 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3596092 data_alloc: 218103808 data_used: 9748480
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e746b000/0x0/0x4ffc00000, data 0x21a6550/0x231b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e746b000/0x0/0x4ffc00000, data 0x21a6550/0x231b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:36.836907+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 64233472 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:37.837048+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 64233472 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e746b000/0x0/0x4ffc00000, data 0x21a6550/0x231b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:38.837194+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 64233472 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21400 session 0x562bd47a05a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd3cabe00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:39.837343+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.616145134s of 10.485960007s, submitted: 48
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 64233472 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:40.837561+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21400 session 0x562bd3792960
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 69615616 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437424 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:41.837704+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 69615616 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:42.837858+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 69615616 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:43.838044+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 69615616 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:44.838253+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 69615616 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:45.838414+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 69615616 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437424 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:46.838564+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 69615616 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:47.838712+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 69615616 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:48.838905+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 69615616 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:49.839087+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 69615616 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:50.839346+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 69615616 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437424 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3515: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:51.839515+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 69615616 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:52.839736+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 69615616 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:53.839915+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 69615616 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:54.840038+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 69607424 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:55.840225+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 69607424 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437424 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:56.840438+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 69607424 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:57.840605+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 69607424 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:58.840789+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 69607424 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:10:59.840932+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 69607424 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:00.841085+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 69607424 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437424 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:01.841236+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 69599232 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:02.841336+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 69599232 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:03.841475+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 69599232 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:04.841708+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 69599232 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:05.841916+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 69591040 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437424 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:06.842091+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 69591040 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:07.842237+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 69591040 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:08.842373+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 69591040 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:09.842505+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 69582848 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:10.842677+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 69582848 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437424 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:11.842785+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 69582848 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:12.842929+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 69582848 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:13.843097+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 69582848 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:14.843277+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 69582848 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:15.843423+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 69582848 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437424 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:16.843538+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 69574656 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:17.843709+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 69574656 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:18.843848+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 69574656 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:19.844005+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 69574656 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:20.844212+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 69574656 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437424 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:21.844330+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342646784 unmapped: 69566464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:22.844488+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342646784 unmapped: 69566464 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:23.844605+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.695407867s of 43.821598053s, submitted: 24
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd48c9e00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 69558272 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:24.844734+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 69558272 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:25.844870+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 69558272 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498984 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:26.845014+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 69550080 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:27.845157+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 69550080 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:28.845382+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7c93000/0x0/0x4ffc00000, data 0x1a384de/0x1bab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 69550080 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:29.845550+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 69550080 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd47a0f00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:30.845708+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7c6f000/0x0/0x4ffc00000, data 0x1a5c4de/0x1bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 69550080 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502997 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:31.845853+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 69550080 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:32.845988+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 69550080 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:33.846124+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 69550080 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:34.846346+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 69550080 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.609845161s of 11.878426552s, submitted: 19
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:35.846465+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342687744 unmapped: 69525504 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559637 data_alloc: 218103808 data_used: 12238848
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7c6f000/0x0/0x4ffc00000, data 0x1a5c4de/0x1bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:36.846595+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 69484544 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:37.846744+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 69484544 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:38.846912+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 69484544 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:39.847127+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 69484544 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:40.847358+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 69484544 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559637 data_alloc: 218103808 data_used: 12238848
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:41.847572+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 69484544 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7c6f000/0x0/0x4ffc00000, data 0x1a5c4de/0x1bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,3,1])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:42.847697+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 64249856 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:43.847860+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 65544192 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:44.848060+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 65544192 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:45.848467+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 65544192 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3645477 data_alloc: 218103808 data_used: 12681216
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:46.848644+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.009137154s of 11.425324440s, submitted: 182
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 65675264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e726e000/0x0/0x4ffc00000, data 0x245d4de/0x25d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:47.848793+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 65675264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:48.848945+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 65675264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:49.849132+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 65675264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:50.849335+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 65675264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3644591 data_alloc: 218103808 data_used: 13066240
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e726a000/0x0/0x4ffc00000, data 0x24614de/0x25d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:51.849477+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 65675264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:52.849675+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 65675264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e726a000/0x0/0x4ffc00000, data 0x24614de/0x25d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:53.849809+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 65675264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:54.849948+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 65675264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:55.850082+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 65675264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3644591 data_alloc: 218103808 data_used: 13066240
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:56.850257+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 65675264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:57.850398+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd7923000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd7923000 session 0x562bd2f223c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd2abe000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd2f1a000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 65675264 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21400 session 0x562bd2258b40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.306672096s of 11.336254120s, submitted: 2
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd2259a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:58.850543+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd8114400 session 0x562bd489a780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd21f0780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd3ff03c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21400 session 0x562bd2ec23c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6e51000/0x0/0x4ffc00000, data 0x287a4de/0x29ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 65273856 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:11:59.850697+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 65191936 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:00.850905+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 65191936 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3695790 data_alloc: 218103808 data_used: 13066240
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:01.851093+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 65191936 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:02.851253+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 65191936 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:03.851454+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6dce000/0x0/0x4ffc00000, data 0x28fd4de/0x2a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 65191936 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:04.851607+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd2a952c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 65191936 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6dce000/0x0/0x4ffc00000, data 0x28fd4de/0x2a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:05.851770+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 65175552 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696503 data_alloc: 218103808 data_used: 13070336
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:06.851916+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 65175552 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:07.852003+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 65167360 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:08.852169+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 65167360 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:09.852338+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 65167360 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6dcd000/0x0/0x4ffc00000, data 0x28fd501/0x2a71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:10.852548+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 65167360 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3722103 data_alloc: 234881024 data_used: 16584704
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:11.852687+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 65167360 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:12.852817+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 65167360 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:13.852939+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 65167360 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:14.853076+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 65167360 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:15.853204+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 65167360 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3722103 data_alloc: 234881024 data_used: 16584704
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6dcd000/0x0/0x4ffc00000, data 0x28fd501/0x2a71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:16.853345+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 347054080 unmapped: 65159168 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.532457352s of 19.004079819s, submitted: 36
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:17.853478+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 60768256 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:18.853616+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 60760064 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:19.853793+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 61349888 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:20.853977+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 350928896 unmapped: 61284352 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798699 data_alloc: 234881024 data_used: 18288640
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:21.854166+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e66ad000/0x0/0x4ffc00000, data 0x3014501/0x3188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 350928896 unmapped: 61284352 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:22.854290+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 350928896 unmapped: 61284352 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:23.854453+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 350928896 unmapped: 61284352 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:24.854567+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 61276160 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:25.854697+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e66ad000/0x0/0x4ffc00000, data 0x3014501/0x3188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 61276160 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798699 data_alloc: 234881024 data_used: 18288640
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:26.854831+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 61276160 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:27.855004+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 61276160 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:28.855137+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 61276160 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:29.855281+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 61276160 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:30.855515+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.816677094s of 13.567830086s, submitted: 114
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd8114400 session 0x562bd4002d20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 61276160 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793707 data_alloc: 234881024 data_used: 18423808
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:31.855871+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e66b6000/0x0/0x4ffc00000, data 0x3014501/0x3188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd47194a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 61071360 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:32.856036+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 61071360 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7268000/0x0/0x4ffc00000, data 0x24624de/0x25d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:33.856216+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 61071360 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:34.856653+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd3e57800 session 0x562bd2ec25a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd2ec3a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 61071360 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:35.856812+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd20eef00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351182848 unmapped: 61030400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458421 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:36.857563+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351182848 unmapped: 61030400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:37.857765+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351182848 unmapped: 61030400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:38.857912+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351182848 unmapped: 61030400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:39.858380+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351182848 unmapped: 61030400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:40.858629+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351182848 unmapped: 61030400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458421 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:41.858955+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351182848 unmapped: 61030400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:42.859171+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351182848 unmapped: 61030400 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:43.859377+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351191040 unmapped: 61022208 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:44.859506+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351191040 unmapped: 61022208 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:45.859877+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351191040 unmapped: 61022208 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458421 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:46.860080+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351191040 unmapped: 61022208 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:47.860469+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351191040 unmapped: 61022208 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:48.860749+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351199232 unmapped: 61014016 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:49.860979+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351199232 unmapped: 61014016 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:50.861224+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351199232 unmapped: 61014016 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458421 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:51.861400+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351199232 unmapped: 61014016 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:52.861555+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 61005824 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:53.861805+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 61005824 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:54.862012+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 61005824 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:55.862269+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 61005824 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458421 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:56.862492+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 61005824 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:57.862759+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 61005824 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:58.862967+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 61005824 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:59.863128+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 60997632 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:00.863421+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 60997632 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458421 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:01.863648+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 60997632 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:02.863890+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 60997632 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:03.864115+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 60997632 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:04.864541+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 60997632 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:05.864913+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 60989440 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458421 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:06.865279+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 60989440 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:07.865619+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 60989440 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:08.865940+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 60989440 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:09.866064+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 60989440 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:10.866431+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 60989440 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458421 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:11.866650+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 60989440 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:12.866864+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 60981248 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:13.867187+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 60981248 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:14.867566+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 60981248 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:15.867686+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.536159515s of 45.072593689s, submitted: 63
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21400 session 0x562bd46d8780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd47a1c20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd30163c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd20ee1e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd3e57800 session 0x562bd47a1860
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 59916288 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:16.867903+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492799 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 59916288 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:17.868184+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 59916288 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:18.868448+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 59916288 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:19.868731+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 59916288 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:20.869059+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e80a8000/0x0/0x4ffc00000, data 0x1622540/0x1796000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e80a8000/0x0/0x4ffc00000, data 0x1622540/0x1796000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 59908096 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:21.869294+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492799 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd2f223c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 59908096 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:22.869572+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd489af00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd48c9e00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd47a1a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e80a8000/0x0/0x4ffc00000, data 0x1622540/0x1796000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 59908096 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:23.869703+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 59981824 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:24.869837+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:25.870029+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:26.870167+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521014 data_alloc: 218103808 data_used: 7950336
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:27.870393+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:28.870521+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e80a6000/0x0/0x4ffc00000, data 0x1622573/0x1798000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:29.870663+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:30.870812+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:31.870979+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521014 data_alloc: 218103808 data_used: 7950336
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:32.871117+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:33.871242+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:34.871410+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e80a6000/0x0/0x4ffc00000, data 0x1622573/0x1798000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.819879532s of 19.090642929s, submitted: 44
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:35.871564+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353099776 unmapped: 59113472 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:36.873108+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 59826176 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567670 data_alloc: 218103808 data_used: 8052736
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:37.873522+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 59719680 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:38.873666+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 59719680 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ad9000/0x0/0x4ffc00000, data 0x1bd0573/0x1d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:39.874182+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 59719680 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ad9000/0x0/0x4ffc00000, data 0x1bd0573/0x1d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:40.874386+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 59719680 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:41.874517+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 59719680 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577180 data_alloc: 218103808 data_used: 7962624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:42.875195+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352428032 unmapped: 59785216 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:43.875989+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352428032 unmapped: 59785216 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7af5000/0x0/0x4ffc00000, data 0x1bd3573/0x1d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7af5000/0x0/0x4ffc00000, data 0x1bd3573/0x1d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:44.876522+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352428032 unmapped: 59785216 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:45.876837+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7af5000/0x0/0x4ffc00000, data 0x1bd3573/0x1d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352428032 unmapped: 59785216 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:46.877037+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 59777024 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570364 data_alloc: 218103808 data_used: 7966720
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd8114400 session 0x562bd21f14a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3fe8c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd3fe8c00 session 0x562bd48274a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd3792960
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:47.877199+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd2ec3c20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 59777024 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.002296448s of 12.697712898s, submitted: 86
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd57f01e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd8114400 session 0x562bd160dc20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:48.877764+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 66002944 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:49.878183+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 66002944 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:50.878580+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 65994752 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e70fb000/0x0/0x4ffc00000, data 0x25cc5d5/0x2743000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:51.878850+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 65994752 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3651333 data_alloc: 218103808 data_used: 7966720
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:52.879090+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 65994752 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:53.879263+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 65994752 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e70fb000/0x0/0x4ffc00000, data 0x25cc5d5/0x2743000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:54.879645+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 65994752 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:55.879900+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 65994752 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:56.880057+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 65994752 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3651333 data_alloc: 218103808 data_used: 7966720
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd9679000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd9679000 session 0x562bd46d8d20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:57.880208+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 65978368 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.340452194s of 10.586875916s, submitted: 42
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e70fa000/0x0/0x4ffc00000, data 0x25cc5f8/0x2744000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:58.880355+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 65970176 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:59.880463+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:00.880653+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:01.880785+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 234881024 data_used: 18120704
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:02.880969+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:03.881147+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e70fa000/0x0/0x4ffc00000, data 0x25cc5f8/0x2744000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:04.881372+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:05.881558+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:06.881734+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 234881024 data_used: 18120704
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:07.881884+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:08.882044+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.238709450s of 11.252449989s, submitted: 3
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:09.882170+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6ea6000/0x0/0x4ffc00000, data 0x28205f8/0x2998000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,1,0,6])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 59424768 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:10.882412+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 59252736 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:11.882544+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 59703296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3795939 data_alloc: 234881024 data_used: 18563072
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:12.882723+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 59703296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:13.882858+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 59703296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:14.883023+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 59703296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6912000/0x0/0x4ffc00000, data 0x2db45f8/0x2f2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:15.883196+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 59703296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:16.883370+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 59703296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3795303 data_alloc: 234881024 data_used: 18567168
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:17.883543+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 59703296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:18.883693+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 59695104 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:19.883816+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e68f1000/0x0/0x4ffc00000, data 0x2dd55f8/0x2f4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 59695104 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:20.884010+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 59695104 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:21.884146+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 59686912 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3795303 data_alloc: 234881024 data_used: 18567168
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.981983185s of 12.428812027s, submitted: 91
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd404cd20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd22f0f00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e68f1000/0x0/0x4ffc00000, data 0x2dd55f8/0x2f4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:22.884297+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd22f1a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 63619072 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:23.884513+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 63619072 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:24.884687+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 63619072 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:25.884851+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 63619072 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd3cabe00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd3e57800 session 0x562bd48c94a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:26.885004+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd22f0f00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482868 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7af3000/0x0/0x4ffc00000, data 0x1bd5573/0x1d4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:27.885183+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:28.885378+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:29.885543+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:30.885749+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:31.885898+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482868 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:32.886140+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:33.886348+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:34.886490+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:35.886671+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:36.886803+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482868 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:37.886966+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:38.887151+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:39.887370+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:40.887632+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:41.887762+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482868 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:42.887940+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:43.888108+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:44.888280+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:45.888455+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:46.888645+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482868 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:47.888832+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:48.888963+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:49.889195+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:50.889423+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:51.889586+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482868 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:52.889777+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:53.889935+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:54.890132+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:55.890356+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:56.890613+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482868 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:57.890835+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:58.890982+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:59.891162+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 67665920 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:00.891365+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 67665920 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:01.891570+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 67665920 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482868 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:02.891794+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 67665920 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:03.891974+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 67665920 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:04.892106+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 67665920 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.842296600s of 43.214748383s, submitted: 108
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd20eef00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:05.892251+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ec1000/0x0/0x4ffc00000, data 0x180a4de/0x197d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 67346432 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:06.892422+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 67346432 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3534280 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:07.892609+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 67346432 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ec1000/0x0/0x4ffc00000, data 0x180a4de/0x197d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:08.892797+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 67346432 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd47194a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:09.892989+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd4002d20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 67346432 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd8114400 session 0x562bd2a952c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd8114400 session 0x562bd2ec23c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:10.893162+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 67387392 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:11.893345+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 67387392 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538168 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:12.893484+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ebf000/0x0/0x4ffc00000, data 0x180a511/0x197f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:13.893635+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ebf000/0x0/0x4ffc00000, data 0x180a511/0x197f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:14.893792+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ebf000/0x0/0x4ffc00000, data 0x180a511/0x197f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:15.893989+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:16.894149+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577048 data_alloc: 218103808 data_used: 9658368
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:17.894369+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:18.894578+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:19.894731+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:20.894919+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ebf000/0x0/0x4ffc00000, data 0x180a511/0x197f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ebf000/0x0/0x4ffc00000, data 0x180a511/0x197f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:21.895063+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577048 data_alloc: 218103808 data_used: 9658368
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:22.895220+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.739616394s of 17.893392563s, submitted: 26
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:23.895445+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357130240 unmapped: 62439424 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:24.895620+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:25.895767+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7283000/0x0/0x4ffc00000, data 0x2438511/0x25ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:26.895957+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686598 data_alloc: 218103808 data_used: 10125312
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:27.896136+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7283000/0x0/0x4ffc00000, data 0x2438511/0x25ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:28.896381+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:29.896513+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:30.896702+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:31.896856+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680126 data_alloc: 218103808 data_used: 10125312
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7270000/0x0/0x4ffc00000, data 0x2459511/0x25ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:32.897007+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:33.897273+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:34.897528+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:35.897695+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.321384430s of 12.719173431s, submitted: 131
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 61972480 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7270000/0x0/0x4ffc00000, data 0x2459511/0x25ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:36.897825+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 61972480 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680082 data_alloc: 218103808 data_used: 10125312
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:37.897957+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd3ff1680
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd3ff0960
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4913400 session 0x562bd3ff0b40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f20800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f20800 session 0x562bd3ff0d20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd44d7680
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 60735488 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:38.898185+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 60735488 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:39.898405+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 60735488 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:40.898591+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 60735488 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:41.898714+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6813000/0x0/0x4ffc00000, data 0x2eb6511/0x302b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 60735488 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3765666 data_alloc: 218103808 data_used: 10125312
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:42.898861+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 60727296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:43.899003+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 60727296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:44.899166+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd2a8cd20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 60727296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4913400 session 0x562bd4827680
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:45.899291+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd8114400 session 0x562bd57f1a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4b1e800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.891793251s of 10.165854454s, submitted: 23
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4b1e800 session 0x562bd21aba40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 60719104 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:46.899396+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e67ed000/0x0/0x4ffc00000, data 0x2eda544/0x3051000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [1])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 60719104 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773142 data_alloc: 218103808 data_used: 10133504
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:47.899553+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58884096 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:48.899668+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58884096 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:49.899802+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e67ed000/0x0/0x4ffc00000, data 0x2eda544/0x3051000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58884096 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:50.899974+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58884096 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:51.900141+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58884096 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3841462 data_alloc: 234881024 data_used: 19718144
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:52.900444+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58884096 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:53.900633+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58884096 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:54.900882+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58875904 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e67ed000/0x0/0x4ffc00000, data 0x2eda544/0x3051000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:55.901061+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58875904 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:56.901373+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e67eb000/0x0/0x4ffc00000, data 0x2edb544/0x3052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58875904 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842074 data_alloc: 234881024 data_used: 19722240
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:57.901594+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e67eb000/0x0/0x4ffc00000, data 0x2edb544/0x3052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.110299110s of 12.168491364s, submitted: 17
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 361177088 unmapped: 58392576 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:58.901742+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364691456 unmapped: 54878208 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:59.901882+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364871680 unmapped: 54697984 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:00.902285+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364871680 unmapped: 54697984 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:01.902697+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 54689792 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3968126 data_alloc: 234881024 data_used: 21753856
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:02.903036+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 54689792 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:03.903345+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e5b3d000/0x0/0x4ffc00000, data 0x3b89544/0x3d00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 54689792 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:04.903509+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364888064 unmapped: 54681600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:05.903805+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364888064 unmapped: 54681600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:06.903979+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364888064 unmapped: 54681600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3960894 data_alloc: 234881024 data_used: 21827584
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd3cab2c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd48c9e00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:07.904118+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4913400 session 0x562bd3792000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 59596800 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:08.904428+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 59596800 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:09.904697+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x2467511/0x25dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 59596800 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:10.904892+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.229134560s of 12.738837242s, submitted: 143
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x2467511/0x25dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 59588608 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:11.905070+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e725a000/0x0/0x4ffc00000, data 0x246f511/0x25e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 59588608 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696365 data_alloc: 218103808 data_used: 10125312
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:12.905502+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd3ff0f00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd2f1a000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd2f225a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:13.905710+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:14.905848+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:15.906050+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:16.906542+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513602 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:17.906760+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:18.906962+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:19.907199+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:20.907422+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:21.907606+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513602 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:22.907912+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:23.908179+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:24.908481+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:25.908836+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:26.909141+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513602 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:27.909472+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:28.909719+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:29.909966+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:30.910371+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:31.910587+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513602 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:32.910832+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:33.911069+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:34.911386+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:35.911586+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:36.911744+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356704256 unmapped: 62865408 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513602 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:37.911905+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356704256 unmapped: 62865408 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:38.912091+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356704256 unmapped: 62865408 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:39.912443+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356704256 unmapped: 62865408 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:40.912783+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356704256 unmapped: 62865408 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:41.912969+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356712448 unmapped: 62857216 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513602 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:42.913154+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356712448 unmapped: 62857216 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:43.913365+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.918975830s of 33.064044952s, submitted: 43
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356712448 unmapped: 62857216 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:44.913495+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd3ff03c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 65806336 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:45.913711+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 65798144 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:46.913869+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7818000/0x0/0x4ffc00000, data 0x1eb34de/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 65798144 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3609634 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7818000/0x0/0x4ffc00000, data 0x1eb34de/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:47.914034+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7818000/0x0/0x4ffc00000, data 0x1eb34de/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 65798144 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:48.914183+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 65798144 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:49.914387+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7818000/0x0/0x4ffc00000, data 0x1eb34de/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd57f03c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356925440 unmapped: 66846720 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:50.914543+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356925440 unmapped: 66846720 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:51.914682+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688674 data_alloc: 234881024 data_used: 15405056
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:52.914848+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:53.915749+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd48c8780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:54.915921+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:55.916060+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7818000/0x0/0x4ffc00000, data 0x1eb34de/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:56.916385+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688674 data_alloc: 234881024 data_used: 15405056
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:57.917132+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7818000/0x0/0x4ffc00000, data 0x1eb34de/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:58.917364+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:59.917619+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:00.917956+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4913400 session 0x562bd2ec30e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:01.918215+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688674 data_alloc: 234881024 data_used: 15405056
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:02.918488+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:03.918730+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7818000/0x0/0x4ffc00000, data 0x1eb34de/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:04.918973+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:05.919148+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358555648 unmapped: 65216512 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:06.919302+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7818000/0x0/0x4ffc00000, data 0x1eb34de/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358555648 unmapped: 65216512 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688674 data_alloc: 234881024 data_used: 15405056
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:07.919661+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4913400 session 0x562bd3ba4960
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.817094803s of 24.468519211s, submitted: 20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:08.919801+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd22f1860
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:09.919956+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:10.920200+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:11.920447+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521298 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:12.920612+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:13.921010+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:14.921275+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:15.921549+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:16.921737+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521298 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:17.921925+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:18.922144+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:19.922414+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:20.922703+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:21.922969+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521298 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:22.923171+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:23.923358+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:24.923538+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:25.923758+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:26.923947+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:27.924155+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521298 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:28.924394+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:29.924639+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:30.924862+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 68648960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:31.925056+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 68648960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:32.925380+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 68648960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521298 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:33.925605+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 68648960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:34.925713+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 68648960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:35.925901+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 68648960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:36.926048+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 68648960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:37.926255+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 68648960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521298 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:38.926490+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 68640768 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:39.926696+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 68640768 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:40.926904+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 68640768 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:41.927118+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 68640768 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:42.927249+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 68632576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521298 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:43.927465+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 68632576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:44.927667+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 68632576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:45.927873+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 68632576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:46.928069+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 68632576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd4303e00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd4002000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd212b2c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd3cc34a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.765380859s of 38.786224365s, submitted: 10
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd21f0780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd3caa780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4913400 session 0x562bd489ab40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:47.928194+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd8114400 session 0x562bd21f01e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd8114400 session 0x562bd3cc2f00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357236736 unmapped: 66535424 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3552485 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:48.928425+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357236736 unmapped: 66535424 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:49.928604+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357236736 unmapped: 66535424 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:50.928808+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357236736 unmapped: 66535424 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:51.928970+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357236736 unmapped: 66535424 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd3ff01e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:52.929119+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357236736 unmapped: 66535424 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3552485 data_alloc: 218103808 data_used: 4255744
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd3ff0780
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x1494550/0x1609000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd2ec3a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4913400 session 0x562bd2f1a1e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:53.929248+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357253120 unmapped: 66519040 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:54.929423+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7094000/0x0/0x4ffc00000, data 0x1494573/0x160a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357253120 unmapped: 66519040 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:55.930011+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 66502656 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:56.930549+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 66502656 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:57.930809+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 66502656 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569479 data_alloc: 218103808 data_used: 6160384
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:58.931068+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 66502656 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:59.931241+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 66502656 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7094000/0x0/0x4ffc00000, data 0x1494573/0x160a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:00.931407+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 66502656 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:01.931578+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 66502656 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:02.931759+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569479 data_alloc: 218103808 data_used: 6160384
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 66494464 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:03.931899+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 66494464 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:04.932043+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 66494464 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7094000/0x0/0x4ffc00000, data 0x1494573/0x160a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:05.932203+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.432277679s of 18.639139175s, submitted: 44
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 66494464 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:06.932363+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 63037440 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:07.932515+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3636503 data_alloc: 218103808 data_used: 7401472
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 62840832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:08.932825+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 62840832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:09.933013+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 62840832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:10.933455+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 62840832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6530000/0x0/0x4ffc00000, data 0x1be0573/0x1d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:11.933674+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 62840832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:12.933960+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3636519 data_alloc: 218103808 data_used: 7401472
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6530000/0x0/0x4ffc00000, data 0x1be0573/0x1d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 62840832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:13.934269+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 62840832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:14.934464+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 62840832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:15.934710+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6530000/0x0/0x4ffc00000, data 0x1be0573/0x1d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360873984 unmapped: 62898176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.024751663s of 10.314311981s, submitted: 135
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:16.934960+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360873984 unmapped: 62898176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:17.935203+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3631995 data_alloc: 218103808 data_used: 7401472
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 62889984 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4913400 session 0x562bd20efc20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd4002000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:18.935374+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6537000/0x0/0x4ffc00000, data 0x1be1573/0x1d57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 62889984 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd489bc20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:19.935592+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360890368 unmapped: 62881792 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 265 handle_osd_map epochs [265,266], i have 265, src has [1,266]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 266 ms_handle_reset con 0x562bd2e6c000 session 0x562bd46d9860
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:20.935795+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 266 ms_handle_reset con 0x562bd8114400 session 0x562bd404d680
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 370401280 unmapped: 53370880 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 266 ms_handle_reset con 0x562bd8114400 session 0x562bd44d74a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 267 ms_handle_reset con 0x562bd2409800 session 0x562bd3cabc20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:21.935929+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 370409472 unmapped: 53362688 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:22.936107+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 268 ms_handle_reset con 0x562bd2e6b400 session 0x562bd57f10e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719735 data_alloc: 234881024 data_used: 14004224
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 59285504 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 268 ms_handle_reset con 0x562bd2e6c000 session 0x562bd489b860
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 268 ms_handle_reset con 0x562bd4913400 session 0x562bd4931680
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:23.936224+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 268 ms_handle_reset con 0x562bd4913400 session 0x562bd2b2cf00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 268 heartbeat osd_stat(store_statfs(0x4e5dc3000/0x0/0x4ffc00000, data 0x234f8bc/0x24ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 59277312 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:24.936460+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 59269120 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:25.936713+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 268 heartbeat osd_stat(store_statfs(0x4e5dc3000/0x0/0x4ffc00000, data 0x234f8bc/0x24ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 59269120 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:26.936951+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 59269120 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:27.937146+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719735 data_alloc: 234881024 data_used: 14004224
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 59269120 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.731040955s of 11.987977982s, submitted: 40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 268 handle_osd_map epochs [268,269], i have 268, src has [1,269]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:28.937266+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2409800 session 0x562bd2a8cb40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 63709184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:29.937400+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 heartbeat osd_stat(store_statfs(0x4e6489000/0x0/0x4ffc00000, data 0x1a0928a/0x1b82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 63709184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:30.937588+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 heartbeat osd_stat(store_statfs(0x4e6489000/0x0/0x4ffc00000, data 0x1a0928a/0x1b82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 63709184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:31.937723+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 63709184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:32.937871+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3605557 data_alloc: 218103808 data_used: 4263936
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 63709184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:33.938006+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 63709184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:34.938135+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 63709184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:35.938297+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2e6b400 session 0x562bd2258000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2e6c000 session 0x562bd48c9a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd8114400 session 0x562bd4827a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 63709184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd8114400 session 0x562bd46d81e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2e6b400 session 0x562bd2f1a000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2409800 session 0x562bd3cc1860
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2e6c000 session 0x562bd4826b40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd4913400 session 0x562bd37932c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd4913400 session 0x562bd44d70e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2409800 session 0x562bd404c5a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:36.938416+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 heartbeat osd_stat(store_statfs(0x4e6489000/0x0/0x4ffc00000, data 0x1a0928a/0x1b82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [0,1])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 368148480 unmapped: 55623680 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:37.938584+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2e6b400 session 0x562bd46dde00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 heartbeat osd_stat(store_statfs(0x4e5df2000/0x0/0x4ffc00000, data 0x232229a/0x249c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719608 data_alloc: 218103808 data_used: 12038144
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 54804480 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2e6c000 session 0x562bd404d4a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:38.938711+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd8114400 session 0x562bd1e6ed20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 heartbeat osd_stat(store_statfs(0x4e5de5000/0x0/0x4ffc00000, data 0x232f29a/0x24a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 54804480 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd8114400 session 0x562bd48c8d20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.021324158s of 11.262329102s, submitted: 78
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2409800 session 0x562bd2ec30e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:39.938880+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 368959488 unmapped: 54812672 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:40.939149+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 54804480 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 heartbeat osd_stat(store_statfs(0x4e5dc0000/0x0/0x4ffc00000, data 0x23532bd/0x24ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:41.939281+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 269 handle_osd_map epochs [269,270], i have 269, src has [1,270]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 59449344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 270 ms_handle_reset con 0x562bd4913400 session 0x562bd4719e00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:42.939519+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3673882 data_alloc: 218103808 data_used: 8560640
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:43.939677+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:44.939850+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:45.940040+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:46.940146+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 270 heartbeat osd_stat(store_statfs(0x4e6526000/0x0/0x4ffc00000, data 0x1bebe2c/0x1d67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:47.940274+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3673882 data_alloc: 218103808 data_used: 8560640
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:48.940364+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:49.940472+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:50.940623+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:51.940744+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.706192017s of 12.975371361s, submitted: 42
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:52.940906+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678708 data_alloc: 218103808 data_used: 8785920
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e6523000/0x0/0x4ffc00000, data 0x1bed88f/0x1d6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:53.941069+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 63881216 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:54.941181+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 63881216 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:55.941432+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 63881216 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:56.941593+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 63881216 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:57.941731+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3689444 data_alloc: 218103808 data_used: 9150464
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 63881216 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7564000/0x0/0x4ffc00000, data 0x1bed88f/0x1d6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:58.941903+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7564000/0x0/0x4ffc00000, data 0x1bed88f/0x1d6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 63881216 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:59.942145+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:00.942424+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:01.942649+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e755f000/0x0/0x4ffc00000, data 0x1bf288f/0x1d6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:02.942895+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3689530 data_alloc: 218103808 data_used: 9150464
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:03.943080+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:04.943252+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e755f000/0x0/0x4ffc00000, data 0x1bf288f/0x1d6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:05.943397+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.351364136s of 13.782290459s, submitted: 21
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:06.943551+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:07.943743+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3690402 data_alloc: 218103808 data_used: 9146368
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e755e000/0x0/0x4ffc00000, data 0x1bf388f/0x1d70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:08.943907+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:09.944057+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 63938560 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:10.944191+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 63938560 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:11.944342+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7555000/0x0/0x4ffc00000, data 0x1bf888f/0x1d75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 63938560 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:12.944472+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3690402 data_alloc: 218103808 data_used: 9146368
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 63938560 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:13.944634+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 63930368 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:14.944830+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 63930368 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:15.944946+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 63815680 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:16.945087+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7555000/0x0/0x4ffc00000, data 0x1bf888f/0x1d75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 63815680 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:17.945297+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.745916367s of 11.788716316s, submitted: 10
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd2e6b400 session 0x562bd40025a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd2e6c000 session 0x562bd21aba40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694526 data_alloc: 218103808 data_used: 10240000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 63864832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd2409800 session 0x562bd47a05a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:18.945556+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 63848448 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:19.945763+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 63848448 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eaf000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:20.945963+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 63848448 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:21.946092+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 63848448 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:22.946351+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eaf000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569389 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 63848448 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:23.946640+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 63848448 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:24.946779+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 63840256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:25.946924+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eaf000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 63840256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:26.947097+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 63840256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eaf000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:27.947261+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569389 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 63840256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:28.947462+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 63840256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:29.947626+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 63832064 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:30.947975+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 63832064 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:31.948157+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 63832064 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:32.948289+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569389 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 63832064 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eaf000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:33.948445+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 63832064 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eaf000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:34.948619+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.5 total, 600.0 interval
                                           Cumulative writes: 46K writes, 186K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.84 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2946 writes, 11K keys, 2946 commit groups, 1.0 writes per commit group, ingest: 12.87 MB, 0.02 MB/s
                                           Interval WAL: 2946 writes, 1143 syncs, 2.58 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 63832064 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets getting new tickets!
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:35.948862+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _finish_auth 0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:35.949989+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eaf000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 63832064 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:36.948964+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 63832064 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:37.949117+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569389 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 63823872 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:38.949228+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 63823872 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:39.949373+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 63823872 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:40.949516+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 63823872 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:41.949652+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eaf000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd2e6b400 session 0x562bd1e6fc20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd4913400 session 0x562bd404c5a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd8114400 session 0x562bd40023c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 63823872 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd3e57000 session 0x562bd40021e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.374252319s of 24.575435638s, submitted: 53
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:42.949788+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd2409800 session 0x562bd21f0b40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3632626 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd2e6b400 session 0x562bd3cab0e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd4913400 session 0x562bd2abef00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 361283584 unmapped: 62488576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd8114400 session 0x562bd47a0960
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3fe9400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd3fe9400 session 0x562bd22590e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:43.949925+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e77f8000/0x0/0x4ffc00000, data 0x19598ce/0x1ad6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3fe9400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd3fe9400 session 0x562bd47a1860
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 361283584 unmapped: 62488576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd2409800 session 0x562bd57f14a0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:44.950122+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd2e6b400 session 0x562bd160da40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 62480384 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:45.950382+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd4913400 session 0x562bd404c000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e77d3000/0x0/0x4ffc00000, data 0x197d8de/0x1afb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 361603072 unmapped: 62169088 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:46.950525+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4319000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 361611264 unmapped: 62160896 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:47.950706+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3651889 data_alloc: 218103808 data_used: 6205440
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 361611264 unmapped: 62160896 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:48.950874+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd8114400 session 0x562bd4002d20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd4319000 session 0x562bd4000960
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357392384 unmapped: 66379776 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd8114400 session 0x562bd57f1680
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:49.951048+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 66371584 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:50.951254+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 66371584 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:51.951389+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 66371584 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:52.951539+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 66371584 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:53.951704+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:54.951844+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:55.952002+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:56.952185+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:57.952346+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:58.952484+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:59.952704+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:00.952952+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:01.953105+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:02.953350+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:03.953530+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:04.953728+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:05.953889+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:06.954031+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:07.954216+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:08.954403+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:09.954704+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:10.954957+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd21ad800 session 0x562bd2f1bc20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3f11c00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:11.955110+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:12.955368+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:13.955525+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:14.955748+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:15.955934+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:16.956113+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:17.956259+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:18.956367+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357416960 unmapped: 66355200 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:19.956519+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357416960 unmapped: 66355200 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:20.956718+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357416960 unmapped: 66355200 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:21.956858+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357425152 unmapped: 66347008 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:22.957035+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357425152 unmapped: 66347008 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:23.957182+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357425152 unmapped: 66347008 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:24.957335+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357425152 unmapped: 66347008 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:25.957517+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357425152 unmapped: 66347008 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:26.957661+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357425152 unmapped: 66347008 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:27.957790+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 66338816 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:28.957914+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 66338816 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:29.958089+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 66338816 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:30.958266+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 66330624 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:31.958425+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 66330624 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:32.958597+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 66330624 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:33.958801+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 66322432 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:34.958996+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 66322432 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:35.959208+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 66322432 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:36.959424+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 66314240 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:37.959592+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 66314240 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:38.959769+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 66306048 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:39.959946+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 66306048 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:40.960130+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 66306048 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:41.960240+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 66306048 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:42.960379+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 66306048 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:43.960546+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 66306048 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:44.960674+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 66306048 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:45.960829+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 66297856 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:46.960965+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 66297856 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:47.961092+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 66297856 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:48.961252+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 66297856 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:49.961426+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 66289664 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:50.961652+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 66289664 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:51.961762+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 66289664 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:52.961914+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 66289664 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:53.962088+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 66281472 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:54.962282+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:55.962724+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 66281472 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:56.962913+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 66281472 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:57.963173+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 66281472 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:58.963403+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 66265088 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:59.963553+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 66265088 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd21ad800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:00.963752+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 66265088 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 77.214645386s of 78.329620361s, submitted: 82
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 272 ms_handle_reset con 0x562bd21ad800 session 0x562bd4302000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:01.963957+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 66240512 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:02.964090+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 66215936 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 273 ms_handle_reset con 0x562bd2e6b400 session 0x562bd48272c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529296 data_alloc: 218103808 data_used: 4284416
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:03.964287+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 66215936 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 273 heartbeat osd_stat(store_statfs(0x4e86bb000/0x0/0x4ffc00000, data 0xa96fee/0xc12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:04.964549+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 66215936 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:05.964767+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 66215936 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:06.964975+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 66215936 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:07.965148+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 66215936 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 273 handle_osd_map epochs [273,274], i have 273, src has [1,274]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532270 data_alloc: 218103808 data_used: 4284416
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:08.965398+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 66199552 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:09.965535+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 66199552 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:10.965765+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 66191360 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:11.965940+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 66191360 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:12.966152+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 66191360 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532270 data_alloc: 218103808 data_used: 4284416
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:13.966442+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 66191360 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:14.966665+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 66191360 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:15.966829+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 66191360 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:16.967052+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 66183168 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:17.967265+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 66183168 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532270 data_alloc: 218103808 data_used: 4284416
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:18.967717+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 66183168 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:19.968078+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 66183168 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:20.968408+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 66183168 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:21.968625+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 66183168 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:22.968813+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 66174976 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 ms_handle_reset con 0x562bd2e6bc00 session 0x562bd44d72c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6bc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:23.969033+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532270 data_alloc: 218103808 data_used: 4284416
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 66174976 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:24.969264+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 66174976 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:25.969466+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 66174976 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:26.969720+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 66166784 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:27.969985+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 66166784 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:28.970230+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532430 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 66166784 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:29.970388+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 66166784 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:30.970596+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 66158592 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:31.970764+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 66158592 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:32.970935+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 66158592 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:33.971127+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532430 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 66150400 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:34.971376+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 66150400 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:35.971606+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 66150400 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.431751251s of 35.659019470s, submitted: 67
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:36.971793+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 66134016 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:37.972019+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:38.972175+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:39.972413+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:40.972590+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:41.972758+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:42.972915+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:43.973128+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:44.973390+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:45.973554+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:46.973699+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357687296 unmapped: 66084864 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:47.973928+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357687296 unmapped: 66084864 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:48.974121+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357687296 unmapped: 66084864 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:49.974298+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357687296 unmapped: 66084864 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:50.974531+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357687296 unmapped: 66084864 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:51.974752+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357687296 unmapped: 66084864 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:52.974922+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357687296 unmapped: 66084864 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:53.975115+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357695488 unmapped: 66076672 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:54.975355+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 66068480 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:55.975586+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 66068480 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:56.975766+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 66068480 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:57.975938+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 66068480 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:58.976086+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 66060288 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:59.976295+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 66060288 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:00.976559+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 66060288 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:01.976699+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 66060288 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:02.976876+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 66060288 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:03.977071+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 66060288 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:04.977254+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 66060288 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:05.977413+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 66052096 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:06.977602+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 66052096 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:07.977747+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 66052096 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:08.977946+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 66052096 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:09.978138+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 66052096 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:10.978403+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 66052096 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:11.978608+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 66052096 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:12.978840+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 66052096 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:13.979025+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357728256 unmapped: 66043904 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:14.979240+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357728256 unmapped: 66043904 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:15.979436+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357728256 unmapped: 66043904 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:16.979706+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357728256 unmapped: 66043904 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:17.979873+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 66035712 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:18.980066+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 66035712 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:19.980263+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 66035712 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:20.980492+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 66035712 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:21.980683+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 66035712 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:22.980844+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357744640 unmapped: 66027520 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:23.981004+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 66019328 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:24.981156+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 66019328 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:25.981328+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 66019328 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:26.981477+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 66019328 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:27.982267+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 66019328 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:28.982506+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 66019328 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:29.982689+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 66011136 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:30.982951+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 66011136 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:31.983188+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 66011136 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:32.983358+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 66011136 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:33.983570+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 66011136 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:34.983753+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 66011136 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:35.983932+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 66011136 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:36.984143+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 66011136 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:37.984342+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 65994752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:38.984516+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 65994752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:39.984708+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 65994752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:40.984882+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 65994752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:41.985097+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 65986560 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:42.985264+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 65986560 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:43.985424+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 65986560 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:44.985592+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 65986560 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:45.985873+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 65978368 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:46.986107+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 65978368 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:47.986317+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 65978368 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:48.986664+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 65978368 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:49.986838+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 65970176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:50.987099+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 65970176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:51.987369+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 65970176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:52.987513+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 65970176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:53.987718+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 65970176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:54.987872+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 65970176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:55.988098+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 65970176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:56.988291+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 65970176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:57.988489+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 65961984 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:58.988650+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 65961984 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd21ad800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 ms_handle_reset con 0x562bd21ad800 session 0x562bd4827a40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:59.988832+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 65372160 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:00.989097+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 65372160 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:01.989243+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 65355776 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:02.989408+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 65355776 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:03.989564+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 5861376
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 65355776 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:04.989782+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 65355776 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:05.989974+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 65355776 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:06.990133+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 65355776 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:07.990360+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 274 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 90.954139709s of 91.414291382s, submitted: 106
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 65331200 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 275 ms_handle_reset con 0x562bd2e6b400 session 0x562bd2258000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:08.990554+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493620 data_alloc: 218103808 data_used: 1216512
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356818944 unmapped: 66953216 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:09.990779+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356818944 unmapped: 66953216 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 275 heartbeat osd_stat(store_statfs(0x4e8b26000/0x0/0x4ffc00000, data 0x62a642/0x7a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:10.991002+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4319000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 66945024 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:11.991147+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 66945024 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:12.991297+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 276 ms_handle_reset con 0x562bd4319000 session 0x562bd4002b40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356843520 unmapped: 66928640 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 276 handle_osd_map epochs [276,277], i have 276, src has [1,277]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:13.991540+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3463896 data_alloc: 218103808 data_used: 1224704
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356859904 unmapped: 66912256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 277 heartbeat osd_stat(store_statfs(0x4e8f8e000/0x0/0x4ffc00000, data 0x1bdc92/0x33e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:14.991789+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356859904 unmapped: 66912256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:15.992003+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356859904 unmapped: 66912256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:16.992219+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356859904 unmapped: 66912256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:17.992375+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 277 handle_osd_map epochs [277,278], i have 277, src has [1,278]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 66895872 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.893241882s of 10.314013481s, submitted: 38
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:18.992505+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466198 data_alloc: 218103808 data_used: 1224704
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356892672 unmapped: 66879488 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:19.992591+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 278 heartbeat osd_stat(store_statfs(0x4e8f8c000/0x0/0x4ffc00000, data 0x1bf6f5/0x341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356892672 unmapped: 66879488 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:20.992832+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356892672 unmapped: 66879488 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:21.992948+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356909056 unmapped: 66863104 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:22.993113+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356909056 unmapped: 66863104 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:23.993286+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466518 data_alloc: 218103808 data_used: 1232896
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356909056 unmapped: 66863104 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:24.993465+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356909056 unmapped: 66863104 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 278 heartbeat osd_stat(store_statfs(0x4e8f8c000/0x0/0x4ffc00000, data 0x1bf6f5/0x341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:25.993596+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 278 ms_handle_reset con 0x562bd8114400 session 0x562bd46dd680
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356925440 unmapped: 66846720 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:26.993755+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356925440 unmapped: 66846720 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:27.993980+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356925440 unmapped: 66846720 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:28.994128+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356925440 unmapped: 66846720 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:29.994361+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356933632 unmapped: 66838528 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:30.994597+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356933632 unmapped: 66838528 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:31.994761+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356933632 unmapped: 66838528 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:32.994916+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356933632 unmapped: 66838528 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:33.995088+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356941824 unmapped: 66830336 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:34.995258+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356941824 unmapped: 66830336 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:35.995448+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356941824 unmapped: 66830336 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:36.995632+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356941824 unmapped: 66830336 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:37.995786+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356958208 unmapped: 66813952 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:38.995975+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356958208 unmapped: 66813952 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:39.996120+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356958208 unmapped: 66813952 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:40.996351+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356958208 unmapped: 66813952 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:41.996577+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356966400 unmapped: 66805760 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:42.996744+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356966400 unmapped: 66805760 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:43.997118+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356966400 unmapped: 66805760 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:44.997451+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356966400 unmapped: 66805760 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:45.997702+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356974592 unmapped: 66797568 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:46.997917+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356974592 unmapped: 66797568 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:47.998142+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356974592 unmapped: 66797568 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:48.998400+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356982784 unmapped: 66789376 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:49.998735+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356982784 unmapped: 66789376 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:50.999018+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356982784 unmapped: 66789376 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:51.999400+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356982784 unmapped: 66789376 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:52.999536+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356982784 unmapped: 66789376 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:53.999700+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356990976 unmapped: 66781184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:55.000138+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356990976 unmapped: 66781184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:56.000839+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356999168 unmapped: 66772992 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:57.001406+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356999168 unmapped: 66772992 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:58.001682+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356999168 unmapped: 66772992 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:59.002118+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357007360 unmapped: 66764800 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:00.002351+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357007360 unmapped: 66764800 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:01.002806+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357007360 unmapped: 66764800 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:02.003092+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357007360 unmapped: 66764800 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:03.003278+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357007360 unmapped: 66764800 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:04.003518+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357015552 unmapped: 66756608 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:05.003733+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357015552 unmapped: 66756608 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:06.004012+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357023744 unmapped: 66748416 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:07.004256+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357031936 unmapped: 66740224 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:08.004563+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357031936 unmapped: 66740224 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:09.004760+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357031936 unmapped: 66740224 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:10.004943+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357031936 unmapped: 66740224 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:11.005210+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357031936 unmapped: 66740224 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:12.005429+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357040128 unmapped: 66732032 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:13.005615+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357040128 unmapped: 66732032 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:14.005782+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357040128 unmapped: 66732032 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:15.005932+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357048320 unmapped: 66723840 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:16.006078+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357048320 unmapped: 66723840 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:17.006246+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357048320 unmapped: 66723840 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:18.006436+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357048320 unmapped: 66723840 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:19.006594+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357056512 unmapped: 66715648 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:20.006850+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 66707456 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:21.007104+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 66707456 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:22.007371+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 66707456 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:23.007524+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 66707456 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:24.007699+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 66707456 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:25.007880+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 66707456 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:26.008076+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 66707456 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:27.008272+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 66707456 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:28.008372+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357081088 unmapped: 66691072 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:29.008537+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357081088 unmapped: 66691072 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:30.008713+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357081088 unmapped: 66691072 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:31.008912+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357081088 unmapped: 66691072 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:32.009093+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357081088 unmapped: 66691072 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:33.009255+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357089280 unmapped: 66682880 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:34.009427+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357089280 unmapped: 66682880 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:35.009582+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357089280 unmapped: 66682880 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:36.009989+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357089280 unmapped: 66682880 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:37.010244+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357089280 unmapped: 66682880 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:38.010401+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357105664 unmapped: 66666496 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:39.010545+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357105664 unmapped: 66666496 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:40.010664+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357105664 unmapped: 66666496 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:41.010873+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357105664 unmapped: 66666496 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:42.011002+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357105664 unmapped: 66666496 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:43.011166+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357105664 unmapped: 66666496 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:44.011346+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357130240 unmapped: 66641920 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:45.011498+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357130240 unmapped: 66641920 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:46.011671+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357130240 unmapped: 66641920 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:47.011933+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357130240 unmapped: 66641920 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:48.012153+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357130240 unmapped: 66641920 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:49.012389+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357130240 unmapped: 66641920 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:50.012534+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357138432 unmapped: 66633728 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:51.012704+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357138432 unmapped: 66633728 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:52.012878+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357138432 unmapped: 66633728 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:53.013074+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357138432 unmapped: 66633728 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:54.013265+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:55.013589+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357146624 unmapped: 66625536 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:56.013840+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357146624 unmapped: 66625536 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:57.014052+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357146624 unmapped: 66625536 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:58.014248+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357146624 unmapped: 66625536 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:59.014433+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357154816 unmapped: 66617344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:00.014590+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357154816 unmapped: 66617344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:01.014790+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357154816 unmapped: 66617344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:02.014945+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357154816 unmapped: 66617344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:03.015163+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357163008 unmapped: 66609152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:04.015814+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357163008 unmapped: 66609152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:05.016015+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357163008 unmapped: 66609152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:06.016153+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357171200 unmapped: 66600960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:07.016370+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 66592768 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:08.016603+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 66592768 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:09.016763+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 66592768 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:10.016995+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 66592768 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:11.017236+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357187584 unmapped: 66584576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:12.017445+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357187584 unmapped: 66584576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:13.017594+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357187584 unmapped: 66584576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3fe9400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:14.017746+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 115.481742859s of 115.539009094s, submitted: 13
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 66592768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:15.017947+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525408 data_alloc: 218103808 data_used: 1241088
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357187584 unmapped: 74981376 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:16.018130+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357187584 unmapped: 74981376 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:17.018281+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 handle_osd_map epochs [279,280], i have 279, src has [1,280]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357187584 unmapped: 74981376 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8319000/0x0/0x4ffc00000, data 0xe31383/0xfb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:18.018480+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357195776 unmapped: 74973184 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8315000/0x0/0x4ffc00000, data 0xe32f00/0xfb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:19.018634+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357212160 unmapped: 74956800 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 280 ms_handle_reset con 0x562bd3fe9400 session 0x562bd2a95e00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:20.018820+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3560558 data_alloc: 218103808 data_used: 1249280
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357212160 unmapped: 74956800 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3fe9400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:21.019024+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357212160 unmapped: 74956800 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:22.019180+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357220352 unmapped: 74948608 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:23.019375+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357228544 unmapped: 74940416 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:24.019528+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357228544 unmapped: 74940416 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.588495731s of 10.099582672s, submitted: 10
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8315000/0x0/0x4ffc00000, data 0xe32f10/0xfb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:25.019695+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563037 data_alloc: 218103808 data_used: 1249280
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357236736 unmapped: 74932224 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 280 handle_osd_map epochs [280,281], i have 280, src has [1,281]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 280 handle_osd_map epochs [281,281], i have 281, src has [1,281]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:26.019882+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357244928 unmapped: 74924032 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 ms_handle_reset con 0x562bd3fe9400 session 0x562bd3792d20
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:27.020275+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357244928 unmapped: 74924032 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:28.020569+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357244928 unmapped: 74924032 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:29.020835+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357244928 unmapped: 74924032 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:30.021070+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566987 data_alloc: 218103808 data_used: 1257472
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357261312 unmapped: 74907648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:31.021414+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357261312 unmapped: 74907648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:32.021598+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357261312 unmapped: 74907648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:33.022065+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 74899456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:34.022375+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 74899456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:35.022792+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566987 data_alloc: 218103808 data_used: 1257472
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 74891264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:36.023089+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 74891264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:37.023249+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 74891264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:38.023477+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 74891264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:39.023744+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 74891264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:40.023969+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566987 data_alloc: 218103808 data_used: 1257472
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 74891264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:41.024455+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 74891264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:42.024609+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357285888 unmapped: 74883072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:43.024750+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357285888 unmapped: 74883072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:44.024997+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357285888 unmapped: 74883072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:45.025192+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566987 data_alloc: 218103808 data_used: 1257472
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357285888 unmapped: 74883072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:46.025446+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:47.025649+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:48.025921+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:49.026117+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:50.026401+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566987 data_alloc: 218103808 data_used: 1257472
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:51.026666+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:52.026875+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:53.027032+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:54.027212+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:55.027356+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566987 data_alloc: 218103808 data_used: 1257472
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:56.027551+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:57.027773+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:58.027968+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357310464 unmapped: 74858496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:59.028201+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357310464 unmapped: 74858496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:00.028429+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566987 data_alloc: 218103808 data_used: 1257472
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357310464 unmapped: 74858496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:01.028655+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357310464 unmapped: 74858496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:02.028823+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 74842112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:03.028975+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 74842112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:04.029083+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd21ad800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 74842112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:05.029202+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.229564667s of 40.942382812s, submitted: 6
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569253 data_alloc: 218103808 data_used: 1257472
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 282 ms_handle_reset con 0x562bd21ad800 session 0x562bd3cc01e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357384192 unmapped: 74784768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:06.029387+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357384192 unmapped: 74784768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:07.029590+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357384192 unmapped: 74784768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:08.029787+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357384192 unmapped: 74784768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:09.030021+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357384192 unmapped: 74784768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 282 heartbeat osd_stat(store_statfs(0x4e830f000/0x0/0x4ffc00000, data 0xe3654d/0xfbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:10.030173+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569253 data_alloc: 218103808 data_used: 1257472
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357392384 unmapped: 74776576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:11.030415+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357392384 unmapped: 74776576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:12.030547+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357392384 unmapped: 74776576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:13.030731+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 282 heartbeat osd_stat(store_statfs(0x4e830f000/0x0/0x4ffc00000, data 0xe3654d/0xfbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357392384 unmapped: 74776576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:14.030915+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 74735616 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 282 handle_osd_map epochs [283,283], i have 283, src has [1,283]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:15.031090+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572227 data_alloc: 218103808 data_used: 1257472
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 74711040 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:16.031338+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 74711040 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:17.031502+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 74711040 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:18.031693+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 74702848 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:19.031884+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 74702848 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:20.032017+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572227 data_alloc: 218103808 data_used: 1257472
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 74702848 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:21.032176+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 74702848 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:22.032301+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 74686464 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:23.032507+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 74686464 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:24.032698+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 74686464 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:25.032842+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 74678272 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:26.033048+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 74670080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:27.033222+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 74670080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:28.033372+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 74670080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:29.033540+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 74670080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:30.033674+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 74670080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:31.033921+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 74670080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:32.034105+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 74670080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:33.034298+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 74670080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:34.034491+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 74653696 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:35.034770+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 74645504 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:36.034920+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 74645504 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:37.035093+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 74645504 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:38.035244+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 74637312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:39.035390+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 74637312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:40.035544+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 74637312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:41.035712+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 74637312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:42.035868+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 74637312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:43.036037+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 74637312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:44.036238+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 74637312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:45.036393+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 74637312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:46.036521+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 74620928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:47.036948+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 74620928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:48.037162+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 74620928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:49.037466+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 74620928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:50.037626+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 74612736 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:51.037839+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 74612736 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:52.038010+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 74612736 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:53.038239+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 74612736 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:54.038420+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 74612736 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:55.038649+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 74604544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:56.038802+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 74604544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:57.038942+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 74604544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:58.039103+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 74604544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:59.039265+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 74604544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:00.039446+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 74604544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:01.039694+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 74596352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:02.039867+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 74596352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:03.040048+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 74596352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:04.040221+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 74596352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:05.040349+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 74596352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:06.040489+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 74571776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:07.040643+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 74571776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:08.040774+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 74571776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:09.040946+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 74571776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:10.045095+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 74571776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:11.045362+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 74563584 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:12.045520+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 74563584 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:13.045713+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 74563584 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:14.045920+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 74555392 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:15.046112+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 74547200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:16.046290+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 74547200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:17.046453+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 74547200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:18.046610+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 74547200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:19.046727+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357629952 unmapped: 74539008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:20.046931+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357629952 unmapped: 74539008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:21.047107+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357629952 unmapped: 74539008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:22.047249+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 74530816 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:23.047364+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 74530816 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:24.047493+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 74530816 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:25.047607+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 74530816 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:26.047750+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 74522624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:27.047890+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 74522624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:28.048040+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 74522624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:29.048192+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 74522624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:30.048349+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 74522624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:31.048584+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 74522624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:32.048710+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 74522624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:33.048836+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357654528 unmapped: 74514432 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:34.048996+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357662720 unmapped: 74506240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:35.049127+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357662720 unmapped: 74506240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:36.049381+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357662720 unmapped: 74506240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:37.049745+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357662720 unmapped: 74506240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:38.049907+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 74498048 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:39.050395+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 74498048 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:40.050627+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 74498048 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:41.051047+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 74489856 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:42.051253+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 74489856 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:43.051468+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 74489856 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:44.051650+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 74489856 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:45.051876+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 74489856 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:46.052046+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357695488 unmapped: 74473472 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:47.052542+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357695488 unmapped: 74473472 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:48.052796+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357695488 unmapped: 74473472 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:49.053164+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357695488 unmapped: 74473472 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:50.053396+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 74457088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:51.053699+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 74457088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:52.053902+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 74457088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:53.054286+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 74457088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:54.054562+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 74457088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:55.054905+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 74457088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:56.055093+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 74448896 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:57.055454+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 74448896 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:58.055651+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 74448896 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:59.056427+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 74448896 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:00.056643+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 74448896 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:01.056854+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357728256 unmapped: 74440704 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:02.057009+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 74432512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:03.057217+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 74432512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:04.057376+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 74432512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:05.057698+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 74432512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:06.057851+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 74432512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:07.058017+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 74432512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:08.058234+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 74432512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:09.058577+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357744640 unmapped: 74424320 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:10.058765+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 74416128 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:11.059161+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 74416128 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:12.059352+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 74416128 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:13.059776+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 74416128 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:14.059946+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 74407936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:15.060117+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 74407936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:16.060266+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 74407936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:17.060523+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 74407936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:18.060680+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 74399744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:19.060829+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 74399744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:20.060951+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:21.061243+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 74399744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:22.061460+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 74399744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:23.061814+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:24.062031+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:25.062557+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:26.062720+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:27.062857+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:28.063031+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:29.063192+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 74375168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:30.063369+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 74375168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:31.063548+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 74375168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:32.064152+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 74375168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:33.064402+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 74375168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:34.064593+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 74375168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:35.064851+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 74366976 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:36.065009+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 74366976 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:37.065186+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 74366976 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:38.065367+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 74358784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:39.065519+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 74350592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:40.065677+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 74350592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:41.065860+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 74350592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:42.066032+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 74350592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:43.066205+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 74350592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:44.066456+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 74350592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:45.066755+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357826560 unmapped: 74342400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:46.067015+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357826560 unmapped: 74342400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:47.067290+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357826560 unmapped: 74342400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:48.067684+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357826560 unmapped: 74342400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:49.068272+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357826560 unmapped: 74342400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:50.068604+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 74334208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:51.068854+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 74334208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:52.068981+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 74334208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:53.069185+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357842944 unmapped: 74326016 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:54.069453+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357842944 unmapped: 74326016 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:55.069638+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 74317824 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:56.069902+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 74317824 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:57.070085+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 74317824 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:58.070370+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 74309632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:59.070543+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 74309632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:00.070737+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 74309632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:01.070917+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357867520 unmapped: 74301440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:02.071217+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357867520 unmapped: 74301440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:03.071415+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357867520 unmapped: 74301440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:04.071751+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357867520 unmapped: 74301440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:05.071932+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357867520 unmapped: 74301440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:06.072193+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 74293248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:07.072377+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 74293248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:08.072553+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 74293248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:09.072785+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 74293248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:10.072969+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 74293248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:11.073200+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 74285056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:12.073383+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 74285056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:13.073568+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 74285056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:14.073775+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 74285056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:15.073938+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 74285056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:16.074090+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 74285056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:17.074222+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 74276864 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:18.074451+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357900288 unmapped: 74268672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:19.074619+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357900288 unmapped: 74268672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:20.074739+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357900288 unmapped: 74268672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:21.074896+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357900288 unmapped: 74268672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:22.075059+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 74260480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:23.075202+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 74260480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:24.075400+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 74260480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:25.075592+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 74260480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:26.075754+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 74260480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:27.075909+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 74244096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:28.076096+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 74244096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:29.076277+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 74244096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:30.076423+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 74244096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:31.076634+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 74244096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:32.076840+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357933056 unmapped: 74235904 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:33.077001+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357933056 unmapped: 74235904 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:34.077133+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 74227712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:35.077280+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.5 total, 600.0 interval
                                           Cumulative writes: 46K writes, 188K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.83 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 620 writes, 1636 keys, 620 commit groups, 1.0 writes per commit group, ingest: 0.86 MB, 0.00 MB/s
                                           Interval WAL: 620 writes, 275 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.14              0.00         1    0.137       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.14              0.00         1    0.137       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.14              0.00         1    0.137       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd0833090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd0833090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd0833090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 74227712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:36.077405+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 74227712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:37.077599+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 74227712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:38.077741+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 74227712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:39.077937+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 74227712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:40.078130+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 74227712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:41.078373+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 74227712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:42.078545+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 74219520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:43.078693+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 74219520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:44.078867+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357957632 unmapped: 74211328 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:45.079037+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357957632 unmapped: 74211328 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:46.079157+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:47.079345+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:48.079492+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:49.079639+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:50.079754+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:51.079921+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:52.080054+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:53.080192+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:54.080333+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357990400 unmapped: 74178560 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:55.080528+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357990400 unmapped: 74178560 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:56.080791+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357990400 unmapped: 74178560 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:57.081371+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357990400 unmapped: 74178560 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:58.081496+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357990400 unmapped: 74178560 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:59.081633+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357998592 unmapped: 74170368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:00.081765+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357998592 unmapped: 74170368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:01.081965+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357998592 unmapped: 74170368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:02.082099+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 74162176 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:03.082227+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 74162176 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:04.082360+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 74162176 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:05.082486+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:06.082614+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:07.082774+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:08.082915+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:09.083046+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:10.083192+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358023168 unmapped: 74145792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:11.083443+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358023168 unmapped: 74145792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:12.083619+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358023168 unmapped: 74145792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:13.083818+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 74137600 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:14.083965+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 74137600 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:15.084123+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 74129408 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:16.084249+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 74129408 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:17.084374+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 74129408 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:18.084820+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 74121216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:19.085013+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 74121216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:20.085231+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 74121216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:21.085414+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 74121216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:22.085572+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 74121216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:23.085706+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 74121216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:24.085851+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 74121216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:25.085984+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 74113024 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:26.086116+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 74104832 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:27.101958+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 74104832 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:28.102137+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 74104832 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:29.102283+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358072320 unmapped: 74096640 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:30.102487+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358072320 unmapped: 74096640 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:31.102662+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358080512 unmapped: 74088448 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:32.102851+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358080512 unmapped: 74088448 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:33.103053+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358080512 unmapped: 74088448 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:34.103199+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:35.103382+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:36.103514+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:37.103683+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:38.103870+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:39.104034+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:40.104201+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:41.104446+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:42.104625+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 74072064 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:43.104812+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 74072064 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:44.105009+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 74072064 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:45.105108+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 74072064 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:46.105247+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 74063872 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:47.105414+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 74063872 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:48.105550+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 74063872 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:49.105740+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 74063872 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:50.106103+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:51.106627+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:52.106848+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:53.107826+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:54.108404+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:55.109087+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:56.109538+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:57.109727+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:58.110085+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:59.110398+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:00.110514+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358113280 unmapped: 74055680 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:01.110678+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358113280 unmapped: 74055680 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:02.110806+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358113280 unmapped: 74055680 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:03.111210+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358121472 unmapped: 74047488 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:04.111488+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358121472 unmapped: 74047488 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:05.111638+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358121472 unmapped: 74047488 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:06.112095+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:07.112439+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:08.112700+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:09.112934+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:10.113149+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:11.113424+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:12.113570+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:13.114011+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:14.114295+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:15.114518+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:16.114721+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:17.114890+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:18.115164+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:19.115328+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358154240 unmapped: 74014720 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:20.115473+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358154240 unmapped: 74014720 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:21.115722+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 74006528 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:22.116052+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 73998336 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:23.116986+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 73998336 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:24.117645+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 73998336 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:25.117803+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 73998336 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:26.118442+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 73990144 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:27.119010+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 73990144 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:28.119136+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 73990144 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:29.119390+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 73990144 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:30.119554+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 73981952 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:31.119966+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 73981952 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:32.120123+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 73981952 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:33.120419+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 73981952 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:34.120582+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 73981952 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:35.120738+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358203392 unmapped: 73965568 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:36.120914+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358203392 unmapped: 73965568 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:37.121079+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 331.406250000s of 332.066192627s, submitted: 23
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358268928 unmapped: 73900032 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:38.121194+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:39.121395+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:40.121535+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:41.121724+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:42.121899+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:43.122079+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:44.122278+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:45.122474+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:46.122628+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:47.122820+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:48.122948+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:49.123097+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:50.123285+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:51.123546+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:52.123701+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:53.123944+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:54.124209+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:55.124743+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:56.124967+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:57.125201+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:58.125411+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:59.125646+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:00.125835+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:01.126162+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:02.126350+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:03.126472+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 73875456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:04.126639+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 73875456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:05.126801+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 73875456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:06.126939+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:07.127146+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:08.127402+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:09.127537+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:10.127701+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:11.127894+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:12.128064+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:13.128289+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:14.128544+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 73859072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:15.128723+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 73859072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:16.128880+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 73859072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:17.129050+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 73859072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:18.129208+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:19.129373+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:20.129555+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:21.129810+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:22.129989+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:23.130167+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:24.130400+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:25.130582+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:26.130738+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:27.130950+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:28.131078+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:29.137242+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:30.137375+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:31.137603+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:32.137777+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:33.141801+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:34.141946+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358334464 unmapped: 73834496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:35.142089+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358334464 unmapped: 73834496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:36.142279+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358334464 unmapped: 73834496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:37.142461+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358334464 unmapped: 73834496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:38.142591+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:39.142720+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:40.142862+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:41.143041+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:42.143186+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:43.143379+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:44.143515+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:45.143683+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:46.143827+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:47.144007+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 73809920 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:48.144135+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 73809920 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:49.144282+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 73809920 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:50.144456+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 73801728 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:51.144602+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 73801728 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:52.144740+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 73801728 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:53.144871+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 73801728 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:54.144995+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 73793536 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:55.145140+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 73793536 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:56.145365+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 73793536 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:57.145710+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 73793536 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:58.145944+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 73785344 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:59.146117+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 73785344 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:00.146375+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 73785344 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:01.146601+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 73785344 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:02.146755+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 73768960 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:03.146939+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 73768960 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:04.147269+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 73768960 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:05.147471+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 73768960 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:06.147665+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 73760768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:07.147821+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 73760768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:08.148037+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 73760768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:09.148229+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 73760768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:10.148386+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 73752576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:11.148622+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 73752576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:12.148827+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 73752576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:13.148982+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 73752576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:14.149136+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358424576 unmapped: 73744384 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:15.149356+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358424576 unmapped: 73744384 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:16.149570+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 98.870841980s of 99.186439514s, submitted: 106
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358424576 unmapped: 73744384 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575681 data_alloc: 218103808 data_used: 1269760
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:17.149745+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 284 ms_handle_reset con 0x562bd2e6b400 session 0x562bd2258b40
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 73728000 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:18.149910+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 73719808 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:19.150146+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4319000
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358457344 unmapped: 73711616 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8f79000/0x0/0x4ffc00000, data 0x1c9b81/0x354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:20.150423+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358465536 unmapped: 73703424 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:21.150707+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358473728 unmapped: 73695232 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493942 data_alloc: 218103808 data_used: 1277952
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:22.150965+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 73687040 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 285 ms_handle_reset con 0x562bd4319000 session 0x562bd21aa1e0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:23.151206+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 73687040 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 285 heartbeat osd_stat(store_statfs(0x4e8f78000/0x0/0x4ffc00000, data 0x1cb742/0x356000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:24.151452+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 73687040 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:25.151592+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 73687040 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 285 heartbeat osd_stat(store_statfs(0x4e8f78000/0x0/0x4ffc00000, data 0x1cb742/0x356000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 285 handle_osd_map epochs [286,286], i have 286, src has [1,286]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:26.151779+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 73678848 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.876462460s of 10.545830727s, submitted: 70
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3499658 data_alloc: 218103808 data_used: 1294336
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:27.151929+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 73670656 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:28.152114+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 73670656 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:29.152283+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 73621504 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 ms_handle_reset con 0x562bd8114400 session 0x562bd57f12c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:30.152479+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358563840 unmapped: 73605120 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:31.154107+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358563840 unmapped: 73605120 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:32.154658+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358563840 unmapped: 73605120 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:33.155038+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 73596928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:34.157788+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 73596928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:35.158294+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 73596928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:36.158615+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 73596928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:37.158878+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 73596928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:38.159208+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:39.159500+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:40.159644+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:41.159875+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:42.160136+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:43.160542+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:44.160825+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:45.161105+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:46.161414+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:47.161668+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:48.161897+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:49.162155+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:50.162386+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:51.162889+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:52.163082+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:53.163397+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:54.163610+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358604800 unmapped: 73564160 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:55.163839+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358604800 unmapped: 73564160 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:56.164018+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358604800 unmapped: 73564160 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:57.164244+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358604800 unmapped: 73564160 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:58.164421+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358621184 unmapped: 73547776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:59.164640+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358621184 unmapped: 73547776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:00.164851+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358621184 unmapped: 73547776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:01.165089+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358621184 unmapped: 73547776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:02.165263+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358629376 unmapped: 73539584 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:03.165365+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358629376 unmapped: 73539584 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:04.165482+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358637568 unmapped: 73531392 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:05.165620+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358637568 unmapped: 73531392 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:06.165749+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358637568 unmapped: 73531392 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:07.165916+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358637568 unmapped: 73531392 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:08.166050+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358645760 unmapped: 73523200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:09.166247+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358645760 unmapped: 73523200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:10.166467+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358645760 unmapped: 73523200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:11.166689+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358645760 unmapped: 73523200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:12.166858+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358645760 unmapped: 73523200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:13.166977+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358653952 unmapped: 73515008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:14.167170+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358653952 unmapped: 73515008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:15.167292+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358653952 unmapped: 73515008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:16.167515+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358653952 unmapped: 73515008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:17.167648+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358653952 unmapped: 73515008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:18.167786+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358670336 unmapped: 73498624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:19.168055+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358670336 unmapped: 73498624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:20.168289+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358670336 unmapped: 73498624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:21.168521+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358670336 unmapped: 73498624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:22.168667+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358678528 unmapped: 73490432 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:23.168811+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358678528 unmapped: 73490432 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:24.169027+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:25.169153+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:26.169337+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:27.169541+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:28.169710+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd21ad800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:29.170875+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:30.171025+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358694912 unmapped: 73474048 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:31.171200+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 64.013931274s of 64.528137207s, submitted: 13
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358719488 unmapped: 73449472 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:32.171363+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3508724 data_alloc: 218103808 data_used: 1302528
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 289 heartbeat osd_stat(store_statfs(0x4e8f6a000/0x0/0x4ffc00000, data 0x1d239e/0x363000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,0,1])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:33.171510+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:34.171719+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 289 heartbeat osd_stat(store_statfs(0x4e8f6c000/0x0/0x4ffc00000, data 0x1d238e/0x362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 289 ms_handle_reset con 0x562bd21ad800 session 0x562bd2f183c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:35.171894+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:36.172050+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:37.172213+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 289 heartbeat osd_stat(store_statfs(0x4e8f6c000/0x0/0x4ffc00000, data 0x1d238e/0x362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3506971 data_alloc: 218103808 data_used: 1298432
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 289 heartbeat osd_stat(store_statfs(0x4e8f6c000/0x0/0x4ffc00000, data 0x1d238e/0x362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:38.172383+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 73433088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:39.172551+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358752256 unmapped: 73416704 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f6c000/0x0/0x4ffc00000, data 0x1d238e/0x362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:40.172726+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358752256 unmapped: 73416704 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:41.172937+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358752256 unmapped: 73416704 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:42.173109+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358760448 unmapped: 73408512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:43.173274+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358760448 unmapped: 73408512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:44.173464+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: mgrc ms_handle_reset ms_handle_reset con 0x562bd2e6cc00
Nov 25 09:36:45 compute-0 ceph-osd[88620]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3119838916
Nov 25 09:36:45 compute-0 ceph-osd[88620]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3119838916,v1:192.168.122.100:6801/3119838916]
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: get_auth_request con 0x562bd8114400 auth_method 0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: mgrc handle_mgr_configure stats_period=5
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358768640 unmapped: 73400320 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:45.173632+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358768640 unmapped: 73400320 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:46.173820+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358768640 unmapped: 73400320 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:47.173971+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358768640 unmapped: 73400320 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:48.174113+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358768640 unmapped: 73400320 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:49.174295+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358768640 unmapped: 73400320 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:50.174489+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358785024 unmapped: 73383936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:51.174687+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358785024 unmapped: 73383936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:52.174965+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358785024 unmapped: 73383936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:53.175196+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358785024 unmapped: 73383936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:54.175391+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 73375744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:55.175586+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 73375744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:56.175721+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 73375744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:57.175898+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 73375744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:58.176073+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 73375744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:59.176204+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 73367552 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:00.176380+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 73367552 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:01.176541+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 73367552 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:02.176758+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358809600 unmapped: 73359360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:03.177004+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358809600 unmapped: 73359360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:04.177154+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358809600 unmapped: 73359360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:05.177358+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358809600 unmapped: 73359360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:06.177529+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358809600 unmapped: 73359360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:07.177743+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358809600 unmapped: 73359360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:08.177877+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358809600 unmapped: 73359360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:09.178055+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358817792 unmapped: 73351168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:10.178190+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358817792 unmapped: 73351168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:11.178394+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 ms_handle_reset con 0x562bd3f11c00 session 0x562bd43023c0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358817792 unmapped: 73351168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:12.178567+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358817792 unmapped: 73351168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:13.178778+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358817792 unmapped: 73351168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:14.178963+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:15.179128+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:16.179253+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:17.179372+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:18.179538+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:19.179724+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 73326592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:20.179887+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 73326592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:21.180084+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 73326592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:22.180231+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 73326592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:23.180404+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:24.180545+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:25.180733+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:26.181013+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:27.181192+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:28.186828+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:29.187017+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:30.187156+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358858752 unmapped: 73310208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:31.187404+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358858752 unmapped: 73310208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:32.187581+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358858752 unmapped: 73310208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:33.187739+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358858752 unmapped: 73310208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:34.188128+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358875136 unmapped: 73293824 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:35.188348+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:36.188486+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:37.188639+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:38.188776+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:39.188942+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:40.189099+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:41.189271+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:42.189431+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 73277440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:43.189555+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 73277440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:44.189690+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358899712 unmapped: 73269248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:45.189876+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358899712 unmapped: 73269248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:46.190077+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 73261056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:47.190440+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 73261056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:48.190587+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 73261056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:49.190722+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 73261056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:50.190873+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 73252864 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:51.191035+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 73252864 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:52.191164+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 73252864 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:53.191354+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 73252864 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:54.191484+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 73244672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:55.191640+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 73244672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:56.191791+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 73244672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:57.191950+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 73244672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:58.192160+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 73236480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:59.192317+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 73228288 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:00.192437+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 73228288 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:01.192697+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 73228288 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:02.192824+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 73220096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:03.192938+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 73220096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:04.193042+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 73220096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:05.193158+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 73220096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:06.193348+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 73203712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:07.193469+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 73195520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:08.193614+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 73195520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:09.193765+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 73195520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:10.193967+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 73195520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:11.194722+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 73195520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:12.194860+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 73195520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: do_command 'config diff' '{prefix=config diff}'
Nov 25 09:36:45 compute-0 ceph-osd[88620]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:36:45 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:36:45 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:36:45 compute-0 ceph-osd[88620]: do_command 'config show' '{prefix=config show}'
Nov 25 09:36:45 compute-0 ceph-osd[88620]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 09:36:45 compute-0 ceph-osd[88620]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 09:36:45 compute-0 ceph-osd[88620]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 09:36:45 compute-0 ceph-osd[88620]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 09:36:45 compute-0 ceph-osd[88620]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:13.194988+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358236160 unmapped: 73932800 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:36:45 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:14.195140+0000)
Nov 25 09:36:45 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:36:45 compute-0 ceph-osd[88620]: do_command 'log dump' '{prefix=log dump}'
Nov 25 09:36:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 25 09:36:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/817584764' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 09:36:45 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:36:45 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23275 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:45 compute-0 podman[445227]: 2025-11-25 09:36:45.813628715 +0000 UTC m=+0.064436738 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 25 09:36:45 compute-0 podman[445226]: 2025-11-25 09:36:45.822034894 +0000 UTC m=+0.072492126 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 25 09:36:46 compute-0 ceph-mon[75015]: from='client.23265 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:46 compute-0 ceph-mon[75015]: pgmap v3515: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:46 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/817584764' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 09:36:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 25 09:36:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3002823663' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 09:36:46 compute-0 nova_compute[253538]: 2025-11-25 09:36:46.157 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 25 09:36:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3899246187' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 09:36:46 compute-0 nova_compute[253538]: 2025-11-25 09:36:46.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:36:46 compute-0 nova_compute[253538]: 2025-11-25 09:36:46.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 09:36:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 25 09:36:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3550391899' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 09:36:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3516: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:47 compute-0 ceph-mon[75015]: from='client.23275 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:47 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3002823663' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 09:36:47 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3899246187' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 09:36:47 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3550391899' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 09:36:47 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 25 09:36:47 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/810302306' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 09:36:47 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23285 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:47 compute-0 systemd[1]: Starting Hostname Service...
Nov 25 09:36:48 compute-0 systemd[1]: Started Hostname Service.
Nov 25 09:36:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 25 09:36:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4080264442' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 09:36:48 compute-0 ceph-mon[75015]: pgmap v3516: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:48 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/810302306' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 09:36:48 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 25 09:36:48 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2608151755' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 09:36:48 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23291 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3517: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 25 09:36:49 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3732175192' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 09:36:49 compute-0 ceph-mon[75015]: from='client.23285 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4080264442' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 09:36:49 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2608151755' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 09:36:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:36:49 compute-0 nova_compute[253538]: 2025-11-25 09:36:49.686 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:49 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23295 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:50 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23297 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:50 compute-0 ceph-mon[75015]: from='client.23291 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:50 compute-0 ceph-mon[75015]: pgmap v3517: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:50 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3732175192' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 09:36:50 compute-0 ceph-mon[75015]: from='client.23295 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:50 compute-0 ceph-mon[75015]: from='client.23297 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 25 09:36:50 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2318754089' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 09:36:50 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 25 09:36:50 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2885923098' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3518: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:51 compute-0 nova_compute[253538]: 2025-11-25 09:36:51.160 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23303 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23305 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:51 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:36:51 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2318754089' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 09:36:51 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2885923098' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 09:36:51 compute-0 ceph-mon[75015]: pgmap v3518: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 25 09:36:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/550959763' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 09:36:52 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 25 09:36:52 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4245658332' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 09:36:52 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23311 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3519: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:53 compute-0 ceph-mon[75015]: from='client.23303 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:53 compute-0 ceph-mon[75015]: from='client.23305 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/550959763' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 09:36:53 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4245658332' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 09:36:53 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23313 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:36:53
Nov 25 09:36:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:36:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:36:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'images', 'default.rgw.meta', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', '.rgw.root']
Nov 25 09:36:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:36:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:36:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:36:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:36:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:36:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:36:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:36:53 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 09:36:53 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1090516000' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 09:36:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:36:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:36:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:36:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:36:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:36:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:36:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:36:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:36:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:36:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:36:54 compute-0 ceph-mon[75015]: from='client.23311 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:54 compute-0 ceph-mon[75015]: pgmap v3519: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:54 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1090516000' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 09:36:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Nov 25 09:36:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/286254082' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 09:36:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:36:54 compute-0 nova_compute[253538]: 2025-11-25 09:36:54.688 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Nov 25 09:36:54 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089675676' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 25 09:36:54 compute-0 podman[446301]: 2025-11-25 09:36:54.860210285 +0000 UTC m=+0.106475347 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 25 09:36:55 compute-0 sshd-session[446289]: Invalid user ts2 from 146.190.154.85 port 34848
Nov 25 09:36:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3520: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:55 compute-0 sshd-session[446289]: Received disconnect from 146.190.154.85 port 34848:11: Bye Bye [preauth]
Nov 25 09:36:55 compute-0 sshd-session[446289]: Disconnected from invalid user ts2 146.190.154.85 port 34848 [preauth]
Nov 25 09:36:55 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23321 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:55 compute-0 ceph-mon[75015]: from='client.23313 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:36:55 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/286254082' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 09:36:55 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2089675676' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 25 09:36:55 compute-0 ceph-mon[75015]: pgmap v3520: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:55 compute-0 ceph-mon[75015]: from='client.23321 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 25 09:36:55 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3383800928' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 09:36:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Nov 25 09:36:56 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1724617416' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 25 09:36:56 compute-0 nova_compute[253538]: 2025-11-25 09:36:56.161 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Nov 25 09:36:56 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1811779070' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 25 09:36:56 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Nov 25 09:36:56 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1710755029' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 25 09:36:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3521: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:57 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3383800928' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 09:36:57 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1724617416' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 25 09:36:57 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1811779070' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 25 09:36:57 compute-0 ovs-appctl[447050]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 09:36:57 compute-0 ovs-appctl[447054]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 09:36:57 compute-0 ovs-appctl[447059]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 09:36:57 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23331 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:58 compute-0 sshd-session[446945]: Invalid user test1 from 45.78.217.205 port 52722
Nov 25 09:36:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Nov 25 09:36:58 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/724046326' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 25 09:36:58 compute-0 sshd-session[446945]: Received disconnect from 45.78.217.205 port 52722:11: Bye Bye [preauth]
Nov 25 09:36:58 compute-0 sshd-session[446945]: Disconnected from invalid user test1 45.78.217.205 port 52722 [preauth]
Nov 25 09:36:58 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1710755029' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 25 09:36:58 compute-0 ceph-mon[75015]: pgmap v3521: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:58 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Nov 25 09:36:58 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/474465945' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 25 09:36:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3522: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:36:59 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23337 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:36:59 compute-0 nova_compute[253538]: 2025-11-25 09:36:59.689 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:36:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:36:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Nov 25 09:36:59 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2491281506' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 25 09:37:00 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23341 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:37:00 compute-0 ceph-mon[75015]: from='client.23331 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:37:00 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/724046326' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 25 09:37:00 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/474465945' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 25 09:37:00 compute-0 ceph-mon[75015]: pgmap v3522: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3523: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:01 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23343 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:37:01 compute-0 nova_compute[253538]: 2025-11-25 09:37:01.162 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:01 compute-0 ceph-mon[75015]: from='client.23337 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:37:01 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2491281506' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 25 09:37:01 compute-0 ceph-mon[75015]: from='client.23341 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:37:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Nov 25 09:37:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/609617331' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 25 09:37:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Nov 25 09:37:01 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2133555994' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 25 09:37:02 compute-0 ceph-mon[75015]: pgmap v3523: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:02 compute-0 ceph-mon[75015]: from='client.23343 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:37:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/609617331' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 25 09:37:02 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2133555994' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23349 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23351 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:02 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:37:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 25 09:37:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017026203' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 09:37:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3524: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:03 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Nov 25 09:37:03 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3687086729' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 25 09:37:03 compute-0 ceph-mon[75015]: from='client.23349 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:37:03 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1017026203' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 09:37:03 compute-0 sshd-session[448043]: Invalid user github from 182.253.79.194 port 58325
Nov 25 09:37:03 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23357 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:37:03 compute-0 sshd-session[448043]: Received disconnect from 182.253.79.194 port 58325:11: Bye Bye [preauth]
Nov 25 09:37:03 compute-0 sshd-session[448043]: Disconnected from invalid user github 182.253.79.194 port 58325 [preauth]
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23359 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:37:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 09:37:04 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4287014066' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 09:37:04 compute-0 ceph-mon[75015]: from='client.23351 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:37:04 compute-0 ceph-mon[75015]: pgmap v3524: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:04 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3687086729' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 25 09:37:04 compute-0 ceph-mon[75015]: from='client.23357 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:37:04 compute-0 ceph-mon[75015]: from='client.23359 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:37:04 compute-0 nova_compute[253538]: 2025-11-25 09:37:04.691 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:37:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:05 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Nov 25 09:37:05 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/102623185' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 25 09:37:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3525: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:05 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4287014066' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 09:37:05 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/102623185' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 25 09:37:05 compute-0 ceph-mon[75015]: pgmap v3525: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:06 compute-0 nova_compute[253538]: 2025-11-25 09:37:06.164 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3526: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:07 compute-0 ceph-mon[75015]: pgmap v3526: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:08 compute-0 nova_compute[253538]: 2025-11-25 09:37:08.560 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:37:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3527: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:09 compute-0 ceph-mon[75015]: pgmap v3527: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:09 compute-0 nova_compute[253538]: 2025-11-25 09:37:09.692 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:10 compute-0 nova_compute[253538]: 2025-11-25 09:37:10.567 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:37:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3528: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:11 compute-0 nova_compute[253538]: 2025-11-25 09:37:11.166 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:11 compute-0 nova_compute[253538]: 2025-11-25 09:37:11.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:37:11 compute-0 nova_compute[253538]: 2025-11-25 09:37:11.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 09:37:11 compute-0 nova_compute[253538]: 2025-11-25 09:37:11.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 09:37:11 compute-0 virtqemud[253839]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 09:37:12 compute-0 ceph-mon[75015]: pgmap v3528: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3529: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:13 compute-0 sudo[448930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:13 compute-0 sudo[448930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:13 compute-0 sudo[448930]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:13 compute-0 sudo[448974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:37:13 compute-0 sudo[448974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:13 compute-0 sudo[448974]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:13 compute-0 sudo[449004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:13 compute-0 sudo[449004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:13 compute-0 sudo[449004]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:13 compute-0 sudo[449033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 09:37:13 compute-0 sudo[449033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:13 compute-0 podman[449140]: 2025-11-25 09:37:13.995478357 +0000 UTC m=+0.057808229 container exec 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:37:14 compute-0 podman[449140]: 2025-11-25 09:37:14.111633128 +0000 UTC m=+0.173962980 container exec_died 04ce8b89bbacb77b3b59c4996e899d7494010827e740656ebea8754c25303956 (image=quay.io/ceph/ceph:v18, name=ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:37:14 compute-0 ceph-mon[75015]: pgmap v3529: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:14 compute-0 systemd[1]: Starting Time & Date Service...
Nov 25 09:37:14 compute-0 nova_compute[253538]: 2025-11-25 09:37:14.571 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:37:14 compute-0 systemd[1]: Started Time & Date Service.
Nov 25 09:37:14 compute-0 nova_compute[253538]: 2025-11-25 09:37:14.694 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:14 compute-0 sudo[449033]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:37:14 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:37:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:37:14 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:37:14 compute-0 sudo[449308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:14 compute-0 sudo[449308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:14 compute-0 sudo[449308]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:14 compute-0 sudo[449333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:37:14 compute-0 sudo[449333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:14 compute-0 sudo[449333]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:14 compute-0 sudo[449358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:14 compute-0 sudo[449358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:14 compute-0 sudo[449358]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:14 compute-0 sudo[449383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:37:14 compute-0 sudo[449383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3530: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:15 compute-0 sudo[449383]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:37:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:37:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:37:15 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:37:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:37:15 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:37:15 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 1337ea64-b512-4b36-a4be-48b41094ed1b does not exist
Nov 25 09:37:15 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 69c77579-1cb6-40ae-9d10-79f124715b6e does not exist
Nov 25 09:37:15 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e55185f9-fb3e-4ed6-acf7-dbbb9fbd8f69 does not exist
Nov 25 09:37:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:37:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:37:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:37:15 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:37:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:37:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:37:15 compute-0 sudo[449439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:15 compute-0 sudo[449439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:15 compute-0 sudo[449439]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:15 compute-0 sudo[449464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:37:15 compute-0 sudo[449464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:15 compute-0 sudo[449464]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:15 compute-0 sudo[449489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:15 compute-0 sudo[449489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:15 compute-0 sudo[449489]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:15 compute-0 sudo[449514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:37:15 compute-0 sudo[449514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:15 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:37:15 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:37:15 compute-0 ceph-mon[75015]: pgmap v3530: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:15 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:37:15 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:37:15 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:37:15 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:37:15 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:37:15 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:37:16 compute-0 podman[449580]: 2025-11-25 09:37:16.053423802 +0000 UTC m=+0.039135149 container create 5798351ea87348eafdb02387e7058ae7450771d4ffe543c43d5c40e3352a7b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:37:16 compute-0 systemd[1]: Started libpod-conmon-5798351ea87348eafdb02387e7058ae7450771d4ffe543c43d5c40e3352a7b9e.scope.
Nov 25 09:37:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:37:16 compute-0 podman[449580]: 2025-11-25 09:37:16.116185965 +0000 UTC m=+0.101897342 container init 5798351ea87348eafdb02387e7058ae7450771d4ffe543c43d5c40e3352a7b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 09:37:16 compute-0 podman[449580]: 2025-11-25 09:37:16.124935185 +0000 UTC m=+0.110646542 container start 5798351ea87348eafdb02387e7058ae7450771d4ffe543c43d5c40e3352a7b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:37:16 compute-0 podman[449580]: 2025-11-25 09:37:16.128097741 +0000 UTC m=+0.113809098 container attach 5798351ea87348eafdb02387e7058ae7450771d4ffe543c43d5c40e3352a7b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:37:16 compute-0 podman[449580]: 2025-11-25 09:37:16.034954668 +0000 UTC m=+0.020666065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:37:16 compute-0 optimistic_mahavira[449598]: 167 167
Nov 25 09:37:16 compute-0 systemd[1]: libpod-5798351ea87348eafdb02387e7058ae7450771d4ffe543c43d5c40e3352a7b9e.scope: Deactivated successfully.
Nov 25 09:37:16 compute-0 podman[449580]: 2025-11-25 09:37:16.130646871 +0000 UTC m=+0.116358228 container died 5798351ea87348eafdb02387e7058ae7450771d4ffe543c43d5c40e3352a7b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:37:16 compute-0 podman[449597]: 2025-11-25 09:37:16.146465692 +0000 UTC m=+0.054169639 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:37:16 compute-0 podman[449593]: 2025-11-25 09:37:16.147901252 +0000 UTC m=+0.062189659 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 09:37:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5bcab4a1e96f12075026b00b732e9d2cc84dc8b1c40a76439cdc3da6874c27d-merged.mount: Deactivated successfully.
Nov 25 09:37:16 compute-0 podman[449580]: 2025-11-25 09:37:16.166349285 +0000 UTC m=+0.152060642 container remove 5798351ea87348eafdb02387e7058ae7450771d4ffe543c43d5c40e3352a7b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:37:16 compute-0 nova_compute[253538]: 2025-11-25 09:37:16.168 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:16 compute-0 systemd[1]: libpod-conmon-5798351ea87348eafdb02387e7058ae7450771d4ffe543c43d5c40e3352a7b9e.scope: Deactivated successfully.
Nov 25 09:37:16 compute-0 podman[449657]: 2025-11-25 09:37:16.322413715 +0000 UTC m=+0.043521959 container create 56c98f233f2468c97f5f86cb57be64c48fe7989dd8540fe57c0bfced3f82c7e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 09:37:16 compute-0 systemd[1]: Started libpod-conmon-56c98f233f2468c97f5f86cb57be64c48fe7989dd8540fe57c0bfced3f82c7e4.scope.
Nov 25 09:37:16 compute-0 podman[449657]: 2025-11-25 09:37:16.304460755 +0000 UTC m=+0.025568979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:37:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63361dc3805f5bd26d205bb40cf7e7e4c3e9d8474869a2780ca6e6c6ea846f38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63361dc3805f5bd26d205bb40cf7e7e4c3e9d8474869a2780ca6e6c6ea846f38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63361dc3805f5bd26d205bb40cf7e7e4c3e9d8474869a2780ca6e6c6ea846f38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63361dc3805f5bd26d205bb40cf7e7e4c3e9d8474869a2780ca6e6c6ea846f38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63361dc3805f5bd26d205bb40cf7e7e4c3e9d8474869a2780ca6e6c6ea846f38/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:16 compute-0 podman[449657]: 2025-11-25 09:37:16.429526649 +0000 UTC m=+0.150634913 container init 56c98f233f2468c97f5f86cb57be64c48fe7989dd8540fe57c0bfced3f82c7e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:37:16 compute-0 podman[449657]: 2025-11-25 09:37:16.436743636 +0000 UTC m=+0.157851860 container start 56c98f233f2468c97f5f86cb57be64c48fe7989dd8540fe57c0bfced3f82c7e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wright, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 09:37:16 compute-0 podman[449657]: 2025-11-25 09:37:16.440145459 +0000 UTC m=+0.161253703 container attach 56c98f233f2468c97f5f86cb57be64c48fe7989dd8540fe57c0bfced3f82c7e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:37:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3531: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:17 compute-0 friendly_wright[449673]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:37:17 compute-0 friendly_wright[449673]: --> relative data size: 1.0
Nov 25 09:37:17 compute-0 friendly_wright[449673]: --> All data devices are unavailable
Nov 25 09:37:17 compute-0 systemd[1]: libpod-56c98f233f2468c97f5f86cb57be64c48fe7989dd8540fe57c0bfced3f82c7e4.scope: Deactivated successfully.
Nov 25 09:37:17 compute-0 podman[449657]: 2025-11-25 09:37:17.465128258 +0000 UTC m=+1.186236482 container died 56c98f233f2468c97f5f86cb57be64c48fe7989dd8540fe57c0bfced3f82c7e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 09:37:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-63361dc3805f5bd26d205bb40cf7e7e4c3e9d8474869a2780ca6e6c6ea846f38-merged.mount: Deactivated successfully.
Nov 25 09:37:17 compute-0 podman[449657]: 2025-11-25 09:37:17.526171584 +0000 UTC m=+1.247279808 container remove 56c98f233f2468c97f5f86cb57be64c48fe7989dd8540fe57c0bfced3f82c7e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wright, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:37:17 compute-0 systemd[1]: libpod-conmon-56c98f233f2468c97f5f86cb57be64c48fe7989dd8540fe57c0bfced3f82c7e4.scope: Deactivated successfully.
Nov 25 09:37:17 compute-0 sudo[449514]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:17 compute-0 sudo[449716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:17 compute-0 sudo[449716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:17 compute-0 sudo[449716]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:17 compute-0 sudo[449741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:37:17 compute-0 sudo[449741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:17 compute-0 sudo[449741]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:17 compute-0 sudo[449766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:17 compute-0 sudo[449766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:17 compute-0 sudo[449766]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:17 compute-0 sudo[449791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:37:17 compute-0 sudo[449791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:18 compute-0 podman[449854]: 2025-11-25 09:37:18.121558506 +0000 UTC m=+0.055807375 container create 6a99d07a83f08206fe72ba7e68793820cb3ddf8db3eaffeb576b390a61b97292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_zhukovsky, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:37:18 compute-0 systemd[1]: Started libpod-conmon-6a99d07a83f08206fe72ba7e68793820cb3ddf8db3eaffeb576b390a61b97292.scope.
Nov 25 09:37:18 compute-0 ceph-mon[75015]: pgmap v3531: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:37:18 compute-0 podman[449854]: 2025-11-25 09:37:18.102832735 +0000 UTC m=+0.037081664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:37:18 compute-0 podman[449854]: 2025-11-25 09:37:18.207201134 +0000 UTC m=+0.141450013 container init 6a99d07a83f08206fe72ba7e68793820cb3ddf8db3eaffeb576b390a61b97292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_zhukovsky, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:37:18 compute-0 podman[449854]: 2025-11-25 09:37:18.215978173 +0000 UTC m=+0.150227042 container start 6a99d07a83f08206fe72ba7e68793820cb3ddf8db3eaffeb576b390a61b97292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_zhukovsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:37:18 compute-0 podman[449854]: 2025-11-25 09:37:18.220807615 +0000 UTC m=+0.155056524 container attach 6a99d07a83f08206fe72ba7e68793820cb3ddf8db3eaffeb576b390a61b97292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 09:37:18 compute-0 cool_zhukovsky[449870]: 167 167
Nov 25 09:37:18 compute-0 systemd[1]: libpod-6a99d07a83f08206fe72ba7e68793820cb3ddf8db3eaffeb576b390a61b97292.scope: Deactivated successfully.
Nov 25 09:37:18 compute-0 podman[449854]: 2025-11-25 09:37:18.224351212 +0000 UTC m=+0.158600081 container died 6a99d07a83f08206fe72ba7e68793820cb3ddf8db3eaffeb576b390a61b97292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_zhukovsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:37:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc831d5a100651b3f79270ffa983dd3a124201137233f1d1d42b0ca338f8e92e-merged.mount: Deactivated successfully.
Nov 25 09:37:18 compute-0 podman[449854]: 2025-11-25 09:37:18.258497204 +0000 UTC m=+0.192746073 container remove 6a99d07a83f08206fe72ba7e68793820cb3ddf8db3eaffeb576b390a61b97292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_zhukovsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 25 09:37:18 compute-0 systemd[1]: libpod-conmon-6a99d07a83f08206fe72ba7e68793820cb3ddf8db3eaffeb576b390a61b97292.scope: Deactivated successfully.
Nov 25 09:37:18 compute-0 podman[449895]: 2025-11-25 09:37:18.417882385 +0000 UTC m=+0.044871326 container create af8e751f8a3881b46e3272ee62360f4f44d4a9e3560587e6e064d58de2d198dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 09:37:18 compute-0 systemd[1]: Started libpod-conmon-af8e751f8a3881b46e3272ee62360f4f44d4a9e3560587e6e064d58de2d198dc.scope.
Nov 25 09:37:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110ec860a70a344ddeb378fd3447cb9692e0924481293f709c3fa077d4c55f8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110ec860a70a344ddeb378fd3447cb9692e0924481293f709c3fa077d4c55f8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:18 compute-0 podman[449895]: 2025-11-25 09:37:18.400635354 +0000 UTC m=+0.027624325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110ec860a70a344ddeb378fd3447cb9692e0924481293f709c3fa077d4c55f8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110ec860a70a344ddeb378fd3447cb9692e0924481293f709c3fa077d4c55f8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:18 compute-0 podman[449895]: 2025-11-25 09:37:18.511552411 +0000 UTC m=+0.138559773 container init af8e751f8a3881b46e3272ee62360f4f44d4a9e3560587e6e064d58de2d198dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 09:37:18 compute-0 podman[449895]: 2025-11-25 09:37:18.518535812 +0000 UTC m=+0.145524753 container start af8e751f8a3881b46e3272ee62360f4f44d4a9e3560587e6e064d58de2d198dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatelet, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:37:18 compute-0 podman[449895]: 2025-11-25 09:37:18.52138854 +0000 UTC m=+0.148377511 container attach af8e751f8a3881b46e3272ee62360f4f44d4a9e3560587e6e064d58de2d198dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatelet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:37:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3532: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]: {
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:     "0": [
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:         {
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "devices": [
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "/dev/loop3"
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             ],
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "lv_name": "ceph_lv0",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "lv_size": "21470642176",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "name": "ceph_lv0",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "tags": {
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.cluster_name": "ceph",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.crush_device_class": "",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.encrypted": "0",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.osd_id": "0",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.type": "block",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.vdo": "0"
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             },
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "type": "block",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "vg_name": "ceph_vg0"
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:         }
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:     ],
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:     "1": [
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:         {
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "devices": [
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "/dev/loop4"
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             ],
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "lv_name": "ceph_lv1",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "lv_size": "21470642176",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "name": "ceph_lv1",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "tags": {
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.cluster_name": "ceph",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.crush_device_class": "",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.encrypted": "0",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.osd_id": "1",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.type": "block",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.vdo": "0"
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             },
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "type": "block",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "vg_name": "ceph_vg1"
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:         }
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:     ],
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:     "2": [
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:         {
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "devices": [
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "/dev/loop5"
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             ],
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "lv_name": "ceph_lv2",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "lv_size": "21470642176",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "name": "ceph_lv2",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "tags": {
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.cluster_name": "ceph",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.crush_device_class": "",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.encrypted": "0",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.osd_id": "2",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.type": "block",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:                 "ceph.vdo": "0"
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             },
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "type": "block",
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:             "vg_name": "ceph_vg2"
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:         }
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]:     ]
Nov 25 09:37:19 compute-0 xenodochial_chatelet[449912]: }
Nov 25 09:37:19 compute-0 systemd[1]: libpod-af8e751f8a3881b46e3272ee62360f4f44d4a9e3560587e6e064d58de2d198dc.scope: Deactivated successfully.
Nov 25 09:37:19 compute-0 podman[449895]: 2025-11-25 09:37:19.406285755 +0000 UTC m=+1.033274696 container died af8e751f8a3881b46e3272ee62360f4f44d4a9e3560587e6e064d58de2d198dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-110ec860a70a344ddeb378fd3447cb9692e0924481293f709c3fa077d4c55f8b-merged.mount: Deactivated successfully.
Nov 25 09:37:19 compute-0 podman[449895]: 2025-11-25 09:37:19.460252498 +0000 UTC m=+1.087241439 container remove af8e751f8a3881b46e3272ee62360f4f44d4a9e3560587e6e064d58de2d198dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Nov 25 09:37:19 compute-0 systemd[1]: libpod-conmon-af8e751f8a3881b46e3272ee62360f4f44d4a9e3560587e6e064d58de2d198dc.scope: Deactivated successfully.
Nov 25 09:37:19 compute-0 sudo[449791]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:19 compute-0 sudo[449934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:19 compute-0 sudo[449934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:19 compute-0 sudo[449934]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:19 compute-0 sudo[449959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:37:19 compute-0 sudo[449959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:19 compute-0 sudo[449959]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:19 compute-0 sudo[449984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:19 compute-0 sudo[449984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:19 compute-0 sudo[449984]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:19 compute-0 nova_compute[253538]: 2025-11-25 09:37:19.695 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:19 compute-0 sudo[450009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:37:19 compute-0 sudo[450009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:20 compute-0 podman[450074]: 2025-11-25 09:37:20.007788244 +0000 UTC m=+0.034142883 container create 29f88b3efedca7de1d383dc48772df2b6d187f9f1b376c0cc51cb473e1ce9b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:37:20 compute-0 systemd[1]: Started libpod-conmon-29f88b3efedca7de1d383dc48772df2b6d187f9f1b376c0cc51cb473e1ce9b5f.scope.
Nov 25 09:37:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:37:20 compute-0 podman[450074]: 2025-11-25 09:37:20.090561023 +0000 UTC m=+0.116915682 container init 29f88b3efedca7de1d383dc48772df2b6d187f9f1b376c0cc51cb473e1ce9b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_almeida, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:37:20 compute-0 podman[450074]: 2025-11-25 09:37:19.994690766 +0000 UTC m=+0.021045435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:37:20 compute-0 podman[450074]: 2025-11-25 09:37:20.097498523 +0000 UTC m=+0.123853162 container start 29f88b3efedca7de1d383dc48772df2b6d187f9f1b376c0cc51cb473e1ce9b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 09:37:20 compute-0 agitated_almeida[450090]: 167 167
Nov 25 09:37:20 compute-0 systemd[1]: libpod-29f88b3efedca7de1d383dc48772df2b6d187f9f1b376c0cc51cb473e1ce9b5f.scope: Deactivated successfully.
Nov 25 09:37:20 compute-0 podman[450074]: 2025-11-25 09:37:20.121696213 +0000 UTC m=+0.148050852 container attach 29f88b3efedca7de1d383dc48772df2b6d187f9f1b376c0cc51cb473e1ce9b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:37:20 compute-0 podman[450074]: 2025-11-25 09:37:20.123343328 +0000 UTC m=+0.149697967 container died 29f88b3efedca7de1d383dc48772df2b6d187f9f1b376c0cc51cb473e1ce9b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_almeida, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:37:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a8dbde77ac6e9c4a9de86b60e81f8b6a9d11d02a2930129475b909494e3dd46-merged.mount: Deactivated successfully.
Nov 25 09:37:20 compute-0 ceph-mon[75015]: pgmap v3532: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:20 compute-0 podman[450074]: 2025-11-25 09:37:20.22376743 +0000 UTC m=+0.250122069 container remove 29f88b3efedca7de1d383dc48772df2b6d187f9f1b376c0cc51cb473e1ce9b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_almeida, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:37:20 compute-0 systemd[1]: libpod-conmon-29f88b3efedca7de1d383dc48772df2b6d187f9f1b376c0cc51cb473e1ce9b5f.scope: Deactivated successfully.
Nov 25 09:37:20 compute-0 podman[450115]: 2025-11-25 09:37:20.402079627 +0000 UTC m=+0.059525476 container create d6bff03800cebbe044eb49d0f23fc44ced4350084eb0f36238af5fc3cd36555d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:37:20 compute-0 systemd[1]: Started libpod-conmon-d6bff03800cebbe044eb49d0f23fc44ced4350084eb0f36238af5fc3cd36555d.scope.
Nov 25 09:37:20 compute-0 podman[450115]: 2025-11-25 09:37:20.370948987 +0000 UTC m=+0.028394846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:37:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f132c6dede512fd5715b8e6c4c1bde3a4416f0806d039d7657f0dab5dd0b6e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f132c6dede512fd5715b8e6c4c1bde3a4416f0806d039d7657f0dab5dd0b6e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f132c6dede512fd5715b8e6c4c1bde3a4416f0806d039d7657f0dab5dd0b6e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f132c6dede512fd5715b8e6c4c1bde3a4416f0806d039d7657f0dab5dd0b6e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:20 compute-0 podman[450115]: 2025-11-25 09:37:20.524085347 +0000 UTC m=+0.181531206 container init d6bff03800cebbe044eb49d0f23fc44ced4350084eb0f36238af5fc3cd36555d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 25 09:37:20 compute-0 podman[450115]: 2025-11-25 09:37:20.530514463 +0000 UTC m=+0.187960292 container start d6bff03800cebbe044eb49d0f23fc44ced4350084eb0f36238af5fc3cd36555d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 09:37:20 compute-0 nova_compute[253538]: 2025-11-25 09:37:20.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:37:20 compute-0 nova_compute[253538]: 2025-11-25 09:37:20.557 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:37:20 compute-0 nova_compute[253538]: 2025-11-25 09:37:20.557 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:37:20 compute-0 podman[450115]: 2025-11-25 09:37:20.560790109 +0000 UTC m=+0.218236158 container attach d6bff03800cebbe044eb49d0f23fc44ced4350084eb0f36238af5fc3cd36555d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:37:20 compute-0 nova_compute[253538]: 2025-11-25 09:37:20.577 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:37:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3533: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:21 compute-0 nova_compute[253538]: 2025-11-25 09:37:21.172 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]: {
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:         "osd_id": 1,
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:         "type": "bluestore"
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:     },
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:         "osd_id": 2,
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:         "type": "bluestore"
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:     },
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:         "osd_id": 0,
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:         "type": "bluestore"
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]:     }
Nov 25 09:37:21 compute-0 beautiful_lamarr[450132]: }
Nov 25 09:37:21 compute-0 ceph-mon[75015]: pgmap v3533: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:21 compute-0 systemd[1]: libpod-d6bff03800cebbe044eb49d0f23fc44ced4350084eb0f36238af5fc3cd36555d.scope: Deactivated successfully.
Nov 25 09:37:21 compute-0 systemd[1]: libpod-d6bff03800cebbe044eb49d0f23fc44ced4350084eb0f36238af5fc3cd36555d.scope: Consumed 1.057s CPU time.
Nov 25 09:37:21 compute-0 podman[450165]: 2025-11-25 09:37:21.638274122 +0000 UTC m=+0.033971739 container died d6bff03800cebbe044eb49d0f23fc44ced4350084eb0f36238af5fc3cd36555d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:37:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f132c6dede512fd5715b8e6c4c1bde3a4416f0806d039d7657f0dab5dd0b6e1-merged.mount: Deactivated successfully.
Nov 25 09:37:21 compute-0 podman[450165]: 2025-11-25 09:37:21.719481218 +0000 UTC m=+0.115178765 container remove d6bff03800cebbe044eb49d0f23fc44ced4350084eb0f36238af5fc3cd36555d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:37:21 compute-0 systemd[1]: libpod-conmon-d6bff03800cebbe044eb49d0f23fc44ced4350084eb0f36238af5fc3cd36555d.scope: Deactivated successfully.
Nov 25 09:37:21 compute-0 sudo[450009]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:37:21 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:37:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:37:21 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:37:21 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c11e8397-7616-4523-bb9b-23bec6b35187 does not exist
Nov 25 09:37:21 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev a24ffd21-8063-456f-86d3-c22aa5500c8b does not exist
Nov 25 09:37:21 compute-0 sudo[450180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:21 compute-0 sudo[450180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:21 compute-0 sudo[450180]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:21 compute-0 sudo[450205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:37:21 compute-0 sudo[450205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:21 compute-0 sudo[450205]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:22 compute-0 nova_compute[253538]: 2025-11-25 09:37:22.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:37:22 compute-0 nova_compute[253538]: 2025-11-25 09:37:22.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:37:22 compute-0 nova_compute[253538]: 2025-11-25 09:37:22.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:37:22 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:37:22 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:37:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3534: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:37:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:37:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:37:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:37:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:37:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:37:23 compute-0 nova_compute[253538]: 2025-11-25 09:37:23.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:37:23 compute-0 nova_compute[253538]: 2025-11-25 09:37:23.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:37:23 compute-0 ceph-mon[75015]: pgmap v3534: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:24 compute-0 nova_compute[253538]: 2025-11-25 09:37:24.698 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3535: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:25 compute-0 podman[450230]: 2025-11-25 09:37:25.844275471 +0000 UTC m=+0.095900308 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:37:26 compute-0 nova_compute[253538]: 2025-11-25 09:37:26.174 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:26 compute-0 ceph-mon[75015]: pgmap v3535: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3536: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:28 compute-0 ceph-mon[75015]: pgmap v3536: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3537: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:37:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3809871931' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:37:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:37:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3809871931' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:37:29 compute-0 ceph-mon[75015]: pgmap v3537: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3809871931' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:37:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/3809871931' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:37:29 compute-0 nova_compute[253538]: 2025-11-25 09:37:29.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:37:29 compute-0 nova_compute[253538]: 2025-11-25 09:37:29.699 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3538: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:31 compute-0 nova_compute[253538]: 2025-11-25 09:37:31.178 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:32 compute-0 ceph-mon[75015]: pgmap v3538: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3539: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:34 compute-0 ceph-mon[75015]: pgmap v3539: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:34 compute-0 nova_compute[253538]: 2025-11-25 09:37:34.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:37:34 compute-0 nova_compute[253538]: 2025-11-25 09:37:34.699 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3540: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:36 compute-0 nova_compute[253538]: 2025-11-25 09:37:36.180 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:36 compute-0 ceph-mon[75015]: pgmap v3540: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3541: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:37 compute-0 nova_compute[253538]: 2025-11-25 09:37:37.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:37:37 compute-0 nova_compute[253538]: 2025-11-25 09:37:37.580 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:37:37 compute-0 nova_compute[253538]: 2025-11-25 09:37:37.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:37:37 compute-0 nova_compute[253538]: 2025-11-25 09:37:37.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:37:37 compute-0 nova_compute[253538]: 2025-11-25 09:37:37.581 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:37:37 compute-0 nova_compute[253538]: 2025-11-25 09:37:37.582 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:37:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:37:38 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1420176466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:37:38 compute-0 nova_compute[253538]: 2025-11-25 09:37:38.033 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:37:38 compute-0 nova_compute[253538]: 2025-11-25 09:37:38.194 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:37:38 compute-0 nova_compute[253538]: 2025-11-25 09:37:38.196 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3435MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:37:38 compute-0 nova_compute[253538]: 2025-11-25 09:37:38.196 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:37:38 compute-0 nova_compute[253538]: 2025-11-25 09:37:38.196 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:37:38 compute-0 ceph-mon[75015]: pgmap v3541: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:38 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1420176466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:37:38 compute-0 nova_compute[253538]: 2025-11-25 09:37:38.314 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:37:38 compute-0 nova_compute[253538]: 2025-11-25 09:37:38.315 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:37:38 compute-0 nova_compute[253538]: 2025-11-25 09:37:38.334 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:37:38 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:37:38 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3716953968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:37:38 compute-0 nova_compute[253538]: 2025-11-25 09:37:38.814 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:37:38 compute-0 nova_compute[253538]: 2025-11-25 09:37:38.819 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:37:38 compute-0 nova_compute[253538]: 2025-11-25 09:37:38.835 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:37:38 compute-0 nova_compute[253538]: 2025-11-25 09:37:38.836 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:37:38 compute-0 nova_compute[253538]: 2025-11-25 09:37:38.836 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:37:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3542: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:39 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3716953968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:37:39 compute-0 nova_compute[253538]: 2025-11-25 09:37:39.701 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:40 compute-0 ceph-mon[75015]: pgmap v3542: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3543: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:37:41.125 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:37:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:37:41.126 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:37:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:37:41.126 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:37:41 compute-0 nova_compute[253538]: 2025-11-25 09:37:41.180 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:42 compute-0 ceph-mon[75015]: pgmap v3543: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3544: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:43 compute-0 ceph-mon[75015]: pgmap v3544: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:44 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 09:37:44 compute-0 nova_compute[253538]: 2025-11-25 09:37:44.741 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:44 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 09:37:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3545: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:45 compute-0 ceph-mon[75015]: pgmap v3545: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:46 compute-0 nova_compute[253538]: 2025-11-25 09:37:46.181 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:46 compute-0 podman[450305]: 2025-11-25 09:37:46.807190442 +0000 UTC m=+0.053759308 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 09:37:46 compute-0 podman[450304]: 2025-11-25 09:37:46.836292736 +0000 UTC m=+0.083518670 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 25 09:37:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3546: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:48 compute-0 ceph-mon[75015]: pgmap v3546: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3547: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:49 compute-0 nova_compute[253538]: 2025-11-25 09:37:49.742 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:50 compute-0 ceph-mon[75015]: pgmap v3547: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3548: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:51 compute-0 nova_compute[253538]: 2025-11-25 09:37:51.184 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:52 compute-0 ceph-mon[75015]: pgmap v3548: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3549: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:53 compute-0 ceph-mon[75015]: pgmap v3549: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:37:53
Nov 25 09:37:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:37:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:37:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.mgr', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.rgw.root']
Nov 25 09:37:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:37:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:37:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:37:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:37:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:37:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:37:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:37:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:37:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:37:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:37:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:37:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:37:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:37:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:37:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:37:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:37:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:37:54 compute-0 nova_compute[253538]: 2025-11-25 09:37:54.744 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3550: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:55 compute-0 sudo[442181]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:55 compute-0 sshd-session[442180]: Received disconnect from 192.168.122.10 port 53310:11: disconnected by user
Nov 25 09:37:55 compute-0 sshd-session[442180]: Disconnected from user zuul 192.168.122.10 port 53310
Nov 25 09:37:55 compute-0 sshd-session[442177]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:37:55 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Nov 25 09:37:55 compute-0 systemd[1]: session-55.scope: Consumed 2min 57.629s CPU time, 1.2G memory peak, read 561.1M from disk, written 347.9M to disk.
Nov 25 09:37:55 compute-0 systemd-logind[822]: Session 55 logged out. Waiting for processes to exit.
Nov 25 09:37:55 compute-0 systemd-logind[822]: Removed session 55.
Nov 25 09:37:55 compute-0 sshd-session[450339]: Accepted publickey for zuul from 192.168.122.10 port 46608 ssh2: ECDSA SHA256:XPT2Qp05XP+4/iPWyxQ1YuG4VjRBRDdk6pBKmAF934E
Nov 25 09:37:55 compute-0 systemd-logind[822]: New session 56 of user zuul.
Nov 25 09:37:55 compute-0 systemd[1]: Started Session 56 of User zuul.
Nov 25 09:37:55 compute-0 sshd-session[450339]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:37:55 compute-0 sudo[450343]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-11-25-xewmfoz.tar.xz
Nov 25 09:37:55 compute-0 sudo[450343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:55 compute-0 ceph-mon[75015]: pgmap v3550: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:55 compute-0 sudo[450343]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:55 compute-0 sshd-session[450342]: Received disconnect from 192.168.122.10 port 46608:11: disconnected by user
Nov 25 09:37:55 compute-0 sshd-session[450342]: Disconnected from user zuul 192.168.122.10 port 46608
Nov 25 09:37:55 compute-0 sshd-session[450339]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:37:55 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Nov 25 09:37:55 compute-0 systemd-logind[822]: Session 56 logged out. Waiting for processes to exit.
Nov 25 09:37:55 compute-0 systemd-logind[822]: Removed session 56.
Nov 25 09:37:55 compute-0 sshd-session[450368]: Accepted publickey for zuul from 192.168.122.10 port 46614 ssh2: ECDSA SHA256:XPT2Qp05XP+4/iPWyxQ1YuG4VjRBRDdk6pBKmAF934E
Nov 25 09:37:55 compute-0 systemd-logind[822]: New session 57 of user zuul.
Nov 25 09:37:55 compute-0 systemd[1]: Started Session 57 of User zuul.
Nov 25 09:37:55 compute-0 sshd-session[450368]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:37:55 compute-0 sudo[450372]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Nov 25 09:37:55 compute-0 sudo[450372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:55 compute-0 sudo[450372]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:55 compute-0 sshd-session[450371]: Received disconnect from 192.168.122.10 port 46614:11: disconnected by user
Nov 25 09:37:55 compute-0 sshd-session[450371]: Disconnected from user zuul 192.168.122.10 port 46614
Nov 25 09:37:55 compute-0 sshd-session[450368]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:37:55 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Nov 25 09:37:55 compute-0 systemd-logind[822]: Session 57 logged out. Waiting for processes to exit.
Nov 25 09:37:55 compute-0 systemd-logind[822]: Removed session 57.
Nov 25 09:37:56 compute-0 podman[450396]: 2025-11-25 09:37:56.017717611 +0000 UTC m=+0.085539897 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 09:37:56 compute-0 nova_compute[253538]: 2025-11-25 09:37:56.186 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:56 compute-0 sshd-session[450423]: Received disconnect from 146.190.154.85 port 60274:11: Bye Bye [preauth]
Nov 25 09:37:56 compute-0 sshd-session[450423]: Disconnected from authenticating user root 146.190.154.85 port 60274 [preauth]
Nov 25 09:37:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3551: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:58 compute-0 ceph-mon[75015]: pgmap v3551: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3552: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:59 compute-0 ceph-mon[75015]: pgmap v3552: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:37:59 compute-0 nova_compute[253538]: 2025-11-25 09:37:59.750 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:37:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3553: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:01 compute-0 nova_compute[253538]: 2025-11-25 09:38:01.189 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:02 compute-0 ceph-mon[75015]: pgmap v3553: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3554: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:04 compute-0 ceph-mon[75015]: pgmap v3554: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:04 compute-0 nova_compute[253538]: 2025-11-25 09:38:04.752 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:38:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3555: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:05 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:38:05 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 16K writes, 73K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1355 writes, 6221 keys, 1355 commit groups, 1.0 writes per commit group, ingest: 8.78 MB, 0.01 MB/s
                                           Interval WAL: 1355 writes, 1355 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     34.5      2.60              0.31        53    0.049       0      0       0.0       0.0
                                             L6      1/0   10.79 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.8     69.9     59.2      7.33              1.40        52    0.141    355K    27K       0.0       0.0
                                            Sum      1/0   10.79 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.8     51.6     52.7      9.93              1.71       105    0.095    355K    27K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.7     80.6     81.5      0.72              0.19        10    0.072     46K   2581       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     69.9     59.2      7.33              1.40        52    0.141    355K    27K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     34.5      2.60              0.31        52    0.050       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.087, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.51 GB write, 0.08 MB/s write, 0.50 GB read, 0.08 MB/s read, 9.9 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e967e9f1f0#2 capacity: 304.00 MB usage: 61.09 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000379 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4264,58.60 MB,19.2773%) FilterBlock(106,990.98 KB,0.318342%) IndexBlock(106,1.52 MB,0.499986%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 09:38:06 compute-0 nova_compute[253538]: 2025-11-25 09:38:06.191 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:06 compute-0 ceph-mon[75015]: pgmap v3555: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3556: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:08 compute-0 sshd[189888]: Timeout before authentication for connection from 45.78.222.2 to 38.102.83.169, pid = 441259
Nov 25 09:38:08 compute-0 ceph-mon[75015]: pgmap v3556: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3557: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:09 compute-0 nova_compute[253538]: 2025-11-25 09:38:09.753 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:10 compute-0 ceph-mon[75015]: pgmap v3557: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3558: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:11 compute-0 nova_compute[253538]: 2025-11-25 09:38:11.194 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:11 compute-0 sshd-session[450425]: Invalid user gitea from 47.252.72.9 port 41090
Nov 25 09:38:11 compute-0 sshd-session[450425]: Received disconnect from 47.252.72.9 port 41090:11: Bye Bye [preauth]
Nov 25 09:38:11 compute-0 sshd-session[450425]: Disconnected from invalid user gitea 47.252.72.9 port 41090 [preauth]
Nov 25 09:38:11 compute-0 nova_compute[253538]: 2025-11-25 09:38:11.837 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:38:12 compute-0 ceph-mon[75015]: pgmap v3558: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3559: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:14 compute-0 ceph-mon[75015]: pgmap v3559: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:14 compute-0 sshd-session[450429]: Invalid user oracle from 193.32.162.151 port 47060
Nov 25 09:38:14 compute-0 nova_compute[253538]: 2025-11-25 09:38:14.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:38:14 compute-0 sshd-session[450429]: Connection closed by invalid user oracle 193.32.162.151 port 47060 [preauth]
Nov 25 09:38:14 compute-0 nova_compute[253538]: 2025-11-25 09:38:14.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3560: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:15 compute-0 ceph-mon[75015]: pgmap v3560: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:16 compute-0 nova_compute[253538]: 2025-11-25 09:38:16.197 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3561: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:17 compute-0 sshd-session[450431]: Invalid user max from 165.227.175.225 port 34972
Nov 25 09:38:17 compute-0 sshd-session[450431]: Received disconnect from 165.227.175.225 port 34972:11: Bye Bye [preauth]
Nov 25 09:38:17 compute-0 sshd-session[450431]: Disconnected from invalid user max 165.227.175.225 port 34972 [preauth]
Nov 25 09:38:17 compute-0 podman[450434]: 2025-11-25 09:38:17.670202584 +0000 UTC m=+0.102278352 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 09:38:17 compute-0 podman[450433]: 2025-11-25 09:38:17.680923867 +0000 UTC m=+0.117926880 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 09:38:18 compute-0 ceph-mon[75015]: pgmap v3561: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3562: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:19 compute-0 nova_compute[253538]: 2025-11-25 09:38:19.759 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:20 compute-0 ceph-mon[75015]: pgmap v3562: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:20 compute-0 nova_compute[253538]: 2025-11-25 09:38:20.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:38:20 compute-0 nova_compute[253538]: 2025-11-25 09:38:20.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:38:20 compute-0 nova_compute[253538]: 2025-11-25 09:38:20.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:38:20 compute-0 nova_compute[253538]: 2025-11-25 09:38:20.566 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:38:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3563: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:21 compute-0 nova_compute[253538]: 2025-11-25 09:38:21.199 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:21 compute-0 sudo[450471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:38:21 compute-0 sudo[450471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:21 compute-0 sudo[450471]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:22 compute-0 sudo[450496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:38:22 compute-0 sudo[450496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:22 compute-0 sudo[450496]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:22 compute-0 sudo[450521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:38:22 compute-0 sudo[450521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:22 compute-0 sudo[450521]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:22 compute-0 sudo[450546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:38:22 compute-0 sudo[450546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:22 compute-0 ceph-mon[75015]: pgmap v3563: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:22 compute-0 nova_compute[253538]: 2025-11-25 09:38:22.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:38:22 compute-0 sudo[450546]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:38:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:38:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:38:22 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:38:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:38:22 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:38:22 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 2c52087f-8aff-4700-88f4-4a5ef9b7f7d4 does not exist
Nov 25 09:38:22 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev bcf18077-4939-492c-96d2-d29d066909ae does not exist
Nov 25 09:38:22 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 49aa58ab-b0f2-49be-9f2d-32b882246937 does not exist
Nov 25 09:38:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:38:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:38:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:38:22 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:38:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:38:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:38:22 compute-0 sudo[450602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:38:22 compute-0 sudo[450602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:22 compute-0 sudo[450602]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:22 compute-0 sudo[450627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:38:22 compute-0 sudo[450627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:22 compute-0 sudo[450627]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:23 compute-0 sudo[450652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:38:23 compute-0 sudo[450652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:23 compute-0 sudo[450652]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3564: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:23 compute-0 sudo[450677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:38:23 compute-0 sudo[450677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:38:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:38:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:38:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:38:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:38:23 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:38:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:38:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:38:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:38:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:38:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:38:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:38:23 compute-0 podman[450741]: 2025-11-25 09:38:23.511910244 +0000 UTC m=+0.040944878 container create c5d2f546ecd7e895bdac27318022114926246a6e649ce62ae56079354b1f2db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 09:38:23 compute-0 nova_compute[253538]: 2025-11-25 09:38:23.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:38:23 compute-0 nova_compute[253538]: 2025-11-25 09:38:23.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:38:23 compute-0 podman[450741]: 2025-11-25 09:38:23.490822919 +0000 UTC m=+0.019857583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:38:23 compute-0 systemd[1]: Started libpod-conmon-c5d2f546ecd7e895bdac27318022114926246a6e649ce62ae56079354b1f2db9.scope.
Nov 25 09:38:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:38:23 compute-0 podman[450741]: 2025-11-25 09:38:23.744847003 +0000 UTC m=+0.273881657 container init c5d2f546ecd7e895bdac27318022114926246a6e649ce62ae56079354b1f2db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:38:23 compute-0 podman[450741]: 2025-11-25 09:38:23.753122049 +0000 UTC m=+0.282156683 container start c5d2f546ecd7e895bdac27318022114926246a6e649ce62ae56079354b1f2db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 09:38:23 compute-0 stupefied_dubinsky[450758]: 167 167
Nov 25 09:38:23 compute-0 systemd[1]: libpod-c5d2f546ecd7e895bdac27318022114926246a6e649ce62ae56079354b1f2db9.scope: Deactivated successfully.
Nov 25 09:38:23 compute-0 conmon[450758]: conmon c5d2f546ecd7e895bdac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5d2f546ecd7e895bdac27318022114926246a6e649ce62ae56079354b1f2db9.scope/container/memory.events
Nov 25 09:38:24 compute-0 podman[450741]: 2025-11-25 09:38:24.044924144 +0000 UTC m=+0.573958808 container attach c5d2f546ecd7e895bdac27318022114926246a6e649ce62ae56079354b1f2db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dubinsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 09:38:24 compute-0 podman[450741]: 2025-11-25 09:38:24.046716752 +0000 UTC m=+0.575751386 container died c5d2f546ecd7e895bdac27318022114926246a6e649ce62ae56079354b1f2db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a03af2e2cb56b4a3f8cb4de2be09e52ac7424943d330f0cb80133823899fa00-merged.mount: Deactivated successfully.
Nov 25 09:38:24 compute-0 ceph-mon[75015]: pgmap v3564: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:24 compute-0 podman[450741]: 2025-11-25 09:38:24.60091663 +0000 UTC m=+1.129951274 container remove c5d2f546ecd7e895bdac27318022114926246a6e649ce62ae56079354b1f2db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:38:24 compute-0 systemd[1]: libpod-conmon-c5d2f546ecd7e895bdac27318022114926246a6e649ce62ae56079354b1f2db9.scope: Deactivated successfully.
Nov 25 09:38:24 compute-0 nova_compute[253538]: 2025-11-25 09:38:24.760 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:24 compute-0 podman[450782]: 2025-11-25 09:38:24.827836425 +0000 UTC m=+0.062900318 container create 882b5e81c3665a839ebe89f74d9e069f38b38351c3eac182781629dc0aa5e8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 09:38:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:38:24.855501) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063504856382, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1191, "num_deletes": 251, "total_data_size": 1650378, "memory_usage": 1678056, "flush_reason": "Manual Compaction"}
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063504868291, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1018578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72824, "largest_seqno": 74014, "table_properties": {"data_size": 1013926, "index_size": 2047, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12906, "raw_average_key_size": 21, "raw_value_size": 1003600, "raw_average_value_size": 1656, "num_data_blocks": 92, "num_entries": 606, "num_filter_entries": 606, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063400, "oldest_key_time": 1764063400, "file_creation_time": 1764063504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 12874 microseconds, and 4846 cpu microseconds.
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:38:24.868388) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1018578 bytes OK
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:38:24.868421) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:38:24.871859) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:38:24.871890) EVENT_LOG_v1 {"time_micros": 1764063504871882, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:38:24.871910) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1644742, prev total WAL file size 1644742, number of live WAL files 2.
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:38:24.872650) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303036' seq:72057594037927935, type:22 .. '6D6772737461740033323538' seq:0, type:0; will stop at (end)
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(994KB)], [173(10MB)]
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063504872768, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 12328666, "oldest_snapshot_seqno": -1}
Nov 25 09:38:24 compute-0 podman[450782]: 2025-11-25 09:38:24.79028552 +0000 UTC m=+0.025349443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:38:24 compute-0 systemd[1]: Started libpod-conmon-882b5e81c3665a839ebe89f74d9e069f38b38351c3eac182781629dc0aa5e8c4.scope.
Nov 25 09:38:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:38:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/804988cf66360a9ea407a86216a83283111d038b4c094d6fc5e8c48c534fc664/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/804988cf66360a9ea407a86216a83283111d038b4c094d6fc5e8c48c534fc664/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/804988cf66360a9ea407a86216a83283111d038b4c094d6fc5e8c48c534fc664/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/804988cf66360a9ea407a86216a83283111d038b4c094d6fc5e8c48c534fc664/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/804988cf66360a9ea407a86216a83283111d038b4c094d6fc5e8c48c534fc664/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 9136 keys, 9619820 bytes, temperature: kUnknown
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063504952613, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 9619820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9564607, "index_size": 31292, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22853, "raw_key_size": 240256, "raw_average_key_size": 26, "raw_value_size": 9407292, "raw_average_value_size": 1029, "num_data_blocks": 1207, "num_entries": 9136, "num_filter_entries": 9136, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764063504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:38:24 compute-0 podman[450782]: 2025-11-25 09:38:24.952782436 +0000 UTC m=+0.187846349 container init 882b5e81c3665a839ebe89f74d9e069f38b38351c3eac182781629dc0aa5e8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_williams, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:38:24.953058) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 9619820 bytes
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:38:24.956887) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.1 rd, 120.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.8 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(21.5) write-amplify(9.4) OK, records in: 9604, records dropped: 468 output_compression: NoCompression
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:38:24.956925) EVENT_LOG_v1 {"time_micros": 1764063504956909, "job": 108, "event": "compaction_finished", "compaction_time_micros": 79982, "compaction_time_cpu_micros": 26797, "output_level": 6, "num_output_files": 1, "total_output_size": 9619820, "num_input_records": 9604, "num_output_records": 9136, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063504957548, "job": 108, "event": "table_file_deletion", "file_number": 175}
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063504959975, "job": 108, "event": "table_file_deletion", "file_number": 173}
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:38:24.872476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:38:24.960102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:38:24.960111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:38:24.960114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:38:24.960116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:38:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:38:24.960118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:38:24 compute-0 podman[450782]: 2025-11-25 09:38:24.960829275 +0000 UTC m=+0.195893158 container start 882b5e81c3665a839ebe89f74d9e069f38b38351c3eac182781629dc0aa5e8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_williams, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:38:24 compute-0 podman[450782]: 2025-11-25 09:38:24.967542978 +0000 UTC m=+0.202606911 container attach 882b5e81c3665a839ebe89f74d9e069f38b38351c3eac182781629dc0aa5e8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_williams, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 09:38:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3565: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:25 compute-0 nova_compute[253538]: 2025-11-25 09:38:25.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:38:25 compute-0 nova_compute[253538]: 2025-11-25 09:38:25.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:38:25 compute-0 ceph-mon[75015]: pgmap v3565: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:26 compute-0 trusting_williams[450798]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:38:26 compute-0 trusting_williams[450798]: --> relative data size: 1.0
Nov 25 09:38:26 compute-0 trusting_williams[450798]: --> All data devices are unavailable
Nov 25 09:38:26 compute-0 systemd[1]: libpod-882b5e81c3665a839ebe89f74d9e069f38b38351c3eac182781629dc0aa5e8c4.scope: Deactivated successfully.
Nov 25 09:38:26 compute-0 podman[450782]: 2025-11-25 09:38:26.040935938 +0000 UTC m=+1.275999881 container died 882b5e81c3665a839ebe89f74d9e069f38b38351c3eac182781629dc0aa5e8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_williams, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:38:26 compute-0 systemd[1]: libpod-882b5e81c3665a839ebe89f74d9e069f38b38351c3eac182781629dc0aa5e8c4.scope: Consumed 1.011s CPU time.
Nov 25 09:38:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-804988cf66360a9ea407a86216a83283111d038b4c094d6fc5e8c48c534fc664-merged.mount: Deactivated successfully.
Nov 25 09:38:26 compute-0 podman[450782]: 2025-11-25 09:38:26.147678552 +0000 UTC m=+1.382742455 container remove 882b5e81c3665a839ebe89f74d9e069f38b38351c3eac182781629dc0aa5e8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_williams, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:38:26 compute-0 systemd[1]: libpod-conmon-882b5e81c3665a839ebe89f74d9e069f38b38351c3eac182781629dc0aa5e8c4.scope: Deactivated successfully.
Nov 25 09:38:26 compute-0 sudo[450677]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:26 compute-0 nova_compute[253538]: 2025-11-25 09:38:26.199 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:26 compute-0 podman[450828]: 2025-11-25 09:38:26.220741616 +0000 UTC m=+0.141239456 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:38:26 compute-0 sudo[450859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:38:26 compute-0 sudo[450859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:26 compute-0 sudo[450859]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:26 compute-0 sudo[450890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:38:26 compute-0 sudo[450890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:26 compute-0 sudo[450890]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:26 compute-0 sudo[450915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:38:26 compute-0 sudo[450915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:26 compute-0 sudo[450915]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:26 compute-0 sudo[450940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:38:26 compute-0 sudo[450940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:26 compute-0 podman[451006]: 2025-11-25 09:38:26.815940674 +0000 UTC m=+0.075764509 container create dc3be626a49991abeae4620ba2e2001a3d6cc0ec6af16cb12c36158a6ddbead9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:38:26 compute-0 podman[451006]: 2025-11-25 09:38:26.762200867 +0000 UTC m=+0.022024712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:38:26 compute-0 systemd[1]: Started libpod-conmon-dc3be626a49991abeae4620ba2e2001a3d6cc0ec6af16cb12c36158a6ddbead9.scope.
Nov 25 09:38:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:38:26 compute-0 podman[451006]: 2025-11-25 09:38:26.980725342 +0000 UTC m=+0.240549167 container init dc3be626a49991abeae4620ba2e2001a3d6cc0ec6af16cb12c36158a6ddbead9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:38:26 compute-0 podman[451006]: 2025-11-25 09:38:26.988955617 +0000 UTC m=+0.248779432 container start dc3be626a49991abeae4620ba2e2001a3d6cc0ec6af16cb12c36158a6ddbead9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:38:26 compute-0 pensive_mayer[451022]: 167 167
Nov 25 09:38:26 compute-0 systemd[1]: libpod-dc3be626a49991abeae4620ba2e2001a3d6cc0ec6af16cb12c36158a6ddbead9.scope: Deactivated successfully.
Nov 25 09:38:27 compute-0 podman[451006]: 2025-11-25 09:38:27.006863835 +0000 UTC m=+0.266687670 container attach dc3be626a49991abeae4620ba2e2001a3d6cc0ec6af16cb12c36158a6ddbead9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:38:27 compute-0 podman[451006]: 2025-11-25 09:38:27.008503231 +0000 UTC m=+0.268327036 container died dc3be626a49991abeae4620ba2e2001a3d6cc0ec6af16cb12c36158a6ddbead9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 09:38:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3566: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-24634d74912e2cf550e290e692e5ca090b687fe036735e0d9feb87251cdca99c-merged.mount: Deactivated successfully.
Nov 25 09:38:27 compute-0 podman[451006]: 2025-11-25 09:38:27.33529773 +0000 UTC m=+0.595121575 container remove dc3be626a49991abeae4620ba2e2001a3d6cc0ec6af16cb12c36158a6ddbead9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:38:27 compute-0 systemd[1]: libpod-conmon-dc3be626a49991abeae4620ba2e2001a3d6cc0ec6af16cb12c36158a6ddbead9.scope: Deactivated successfully.
Nov 25 09:38:27 compute-0 podman[451047]: 2025-11-25 09:38:27.533379068 +0000 UTC m=+0.065823629 container create b0865f7b6f8115114fa116d6b637d1b59e6177fa9c0dcc2441b59eae2a7360c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 09:38:27 compute-0 podman[451047]: 2025-11-25 09:38:27.494143476 +0000 UTC m=+0.026588077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:38:27 compute-0 systemd[1]: Started libpod-conmon-b0865f7b6f8115114fa116d6b637d1b59e6177fa9c0dcc2441b59eae2a7360c0.scope.
Nov 25 09:38:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ede59b62f4dff64539d138f1fdf5fce18773104bb099f8d27dc498391cb6596/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ede59b62f4dff64539d138f1fdf5fce18773104bb099f8d27dc498391cb6596/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ede59b62f4dff64539d138f1fdf5fce18773104bb099f8d27dc498391cb6596/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ede59b62f4dff64539d138f1fdf5fce18773104bb099f8d27dc498391cb6596/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:27 compute-0 podman[451047]: 2025-11-25 09:38:27.664428995 +0000 UTC m=+0.196873586 container init b0865f7b6f8115114fa116d6b637d1b59e6177fa9c0dcc2441b59eae2a7360c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 09:38:27 compute-0 podman[451047]: 2025-11-25 09:38:27.670760578 +0000 UTC m=+0.203205179 container start b0865f7b6f8115114fa116d6b637d1b59e6177fa9c0dcc2441b59eae2a7360c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:38:27 compute-0 podman[451047]: 2025-11-25 09:38:27.677876562 +0000 UTC m=+0.210321143 container attach b0865f7b6f8115114fa116d6b637d1b59e6177fa9c0dcc2441b59eae2a7360c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:38:28 compute-0 ceph-mon[75015]: pgmap v3566: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:28 compute-0 sshd-session[451004]: Received disconnect from 182.253.79.194 port 63054:11: Bye Bye [preauth]
Nov 25 09:38:28 compute-0 sshd-session[451004]: Disconnected from authenticating user root 182.253.79.194 port 63054 [preauth]
Nov 25 09:38:28 compute-0 busy_satoshi[451064]: {
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:     "0": [
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:         {
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "devices": [
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "/dev/loop3"
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             ],
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "lv_name": "ceph_lv0",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "lv_size": "21470642176",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "name": "ceph_lv0",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "tags": {
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.cluster_name": "ceph",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.crush_device_class": "",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.encrypted": "0",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.osd_id": "0",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.type": "block",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.vdo": "0"
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             },
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "type": "block",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "vg_name": "ceph_vg0"
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:         }
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:     ],
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:     "1": [
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:         {
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "devices": [
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "/dev/loop4"
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             ],
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "lv_name": "ceph_lv1",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "lv_size": "21470642176",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "name": "ceph_lv1",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "tags": {
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.cluster_name": "ceph",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.crush_device_class": "",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.encrypted": "0",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.osd_id": "1",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.type": "block",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.vdo": "0"
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             },
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "type": "block",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "vg_name": "ceph_vg1"
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:         }
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:     ],
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:     "2": [
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:         {
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "devices": [
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "/dev/loop5"
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             ],
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "lv_name": "ceph_lv2",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "lv_size": "21470642176",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "name": "ceph_lv2",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "tags": {
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.cluster_name": "ceph",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.crush_device_class": "",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.encrypted": "0",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.osd_id": "2",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.type": "block",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:                 "ceph.vdo": "0"
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             },
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "type": "block",
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:             "vg_name": "ceph_vg2"
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:         }
Nov 25 09:38:28 compute-0 busy_satoshi[451064]:     ]
Nov 25 09:38:28 compute-0 busy_satoshi[451064]: }
Nov 25 09:38:28 compute-0 systemd[1]: libpod-b0865f7b6f8115114fa116d6b637d1b59e6177fa9c0dcc2441b59eae2a7360c0.scope: Deactivated successfully.
Nov 25 09:38:28 compute-0 podman[451047]: 2025-11-25 09:38:28.437085226 +0000 UTC m=+0.969529797 container died b0865f7b6f8115114fa116d6b637d1b59e6177fa9c0dcc2441b59eae2a7360c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ede59b62f4dff64539d138f1fdf5fce18773104bb099f8d27dc498391cb6596-merged.mount: Deactivated successfully.
Nov 25 09:38:28 compute-0 podman[451047]: 2025-11-25 09:38:28.873891159 +0000 UTC m=+1.406335720 container remove b0865f7b6f8115114fa116d6b637d1b59e6177fa9c0dcc2441b59eae2a7360c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 09:38:28 compute-0 systemd[1]: libpod-conmon-b0865f7b6f8115114fa116d6b637d1b59e6177fa9c0dcc2441b59eae2a7360c0.scope: Deactivated successfully.
Nov 25 09:38:28 compute-0 sudo[450940]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:28 compute-0 sudo[451087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:38:28 compute-0 sudo[451087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:28 compute-0 sudo[451087]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:29 compute-0 sudo[451112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:38:29 compute-0 sudo[451112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:29 compute-0 sudo[451112]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:29 compute-0 sudo[451137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:38:29 compute-0 sudo[451137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:29 compute-0 sudo[451137]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3567: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:29 compute-0 sudo[451162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:38:29 compute-0 sudo[451162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:38:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2614406912' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:38:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:38:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2614406912' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:38:29 compute-0 ceph-mon[75015]: pgmap v3567: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2614406912' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:38:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2614406912' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:38:29 compute-0 podman[451227]: 2025-11-25 09:38:29.549947364 +0000 UTC m=+0.069195730 container create 857e756f748990da448d5bdb8ecc9a12fceb5ae80ce08a8b9bcfbdcfb9938a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 09:38:29 compute-0 podman[451227]: 2025-11-25 09:38:29.506074016 +0000 UTC m=+0.025322412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:38:29 compute-0 systemd[1]: Started libpod-conmon-857e756f748990da448d5bdb8ecc9a12fceb5ae80ce08a8b9bcfbdcfb9938a9e.scope.
Nov 25 09:38:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:38:29 compute-0 nova_compute[253538]: 2025-11-25 09:38:29.762 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:29 compute-0 podman[451227]: 2025-11-25 09:38:29.82901186 +0000 UTC m=+0.348260246 container init 857e756f748990da448d5bdb8ecc9a12fceb5ae80ce08a8b9bcfbdcfb9938a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mayer, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 09:38:29 compute-0 podman[451227]: 2025-11-25 09:38:29.836382932 +0000 UTC m=+0.355631318 container start 857e756f748990da448d5bdb8ecc9a12fceb5ae80ce08a8b9bcfbdcfb9938a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:38:29 compute-0 vigilant_mayer[451243]: 167 167
Nov 25 09:38:29 compute-0 systemd[1]: libpod-857e756f748990da448d5bdb8ecc9a12fceb5ae80ce08a8b9bcfbdcfb9938a9e.scope: Deactivated successfully.
Nov 25 09:38:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:29 compute-0 podman[451227]: 2025-11-25 09:38:29.870647267 +0000 UTC m=+0.389895633 container attach 857e756f748990da448d5bdb8ecc9a12fceb5ae80ce08a8b9bcfbdcfb9938a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mayer, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:38:29 compute-0 podman[451227]: 2025-11-25 09:38:29.870980516 +0000 UTC m=+0.390228872 container died 857e756f748990da448d5bdb8ecc9a12fceb5ae80ce08a8b9bcfbdcfb9938a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:38:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-66dc8bc52c07060a98befb64096492606c053ca2b5d9f834d0aca91194e09016-merged.mount: Deactivated successfully.
Nov 25 09:38:30 compute-0 podman[451227]: 2025-11-25 09:38:30.096092312 +0000 UTC m=+0.615340678 container remove 857e756f748990da448d5bdb8ecc9a12fceb5ae80ce08a8b9bcfbdcfb9938a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:38:30 compute-0 systemd[1]: libpod-conmon-857e756f748990da448d5bdb8ecc9a12fceb5ae80ce08a8b9bcfbdcfb9938a9e.scope: Deactivated successfully.
Nov 25 09:38:30 compute-0 podman[451269]: 2025-11-25 09:38:30.269060084 +0000 UTC m=+0.026267019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:38:30 compute-0 podman[451269]: 2025-11-25 09:38:30.409106585 +0000 UTC m=+0.166313490 container create 7084d42911ea96fd6500ff36ea0fa4b3613ddb8325a9b47baeccbecbff3ca2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:38:30 compute-0 nova_compute[253538]: 2025-11-25 09:38:30.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:38:30 compute-0 systemd[1]: Started libpod-conmon-7084d42911ea96fd6500ff36ea0fa4b3613ddb8325a9b47baeccbecbff3ca2fa.scope.
Nov 25 09:38:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ec49532a56403e50ccafbd5146bb61fa67019c55025aff25d11b0e63525128/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ec49532a56403e50ccafbd5146bb61fa67019c55025aff25d11b0e63525128/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ec49532a56403e50ccafbd5146bb61fa67019c55025aff25d11b0e63525128/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ec49532a56403e50ccafbd5146bb61fa67019c55025aff25d11b0e63525128/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:30 compute-0 podman[451269]: 2025-11-25 09:38:30.735997839 +0000 UTC m=+0.493204754 container init 7084d42911ea96fd6500ff36ea0fa4b3613ddb8325a9b47baeccbecbff3ca2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:38:30 compute-0 podman[451269]: 2025-11-25 09:38:30.743278698 +0000 UTC m=+0.500485593 container start 7084d42911ea96fd6500ff36ea0fa4b3613ddb8325a9b47baeccbecbff3ca2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:38:30 compute-0 podman[451269]: 2025-11-25 09:38:30.768148247 +0000 UTC m=+0.525355152 container attach 7084d42911ea96fd6500ff36ea0fa4b3613ddb8325a9b47baeccbecbff3ca2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:38:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3568: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:31 compute-0 nova_compute[253538]: 2025-11-25 09:38:31.202 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]: {
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:         "osd_id": 1,
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:         "type": "bluestore"
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:     },
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:         "osd_id": 2,
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:         "type": "bluestore"
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:     },
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:         "osd_id": 0,
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:         "type": "bluestore"
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]:     }
Nov 25 09:38:31 compute-0 nostalgic_goldberg[451286]: }
Nov 25 09:38:31 compute-0 systemd[1]: libpod-7084d42911ea96fd6500ff36ea0fa4b3613ddb8325a9b47baeccbecbff3ca2fa.scope: Deactivated successfully.
Nov 25 09:38:31 compute-0 podman[451269]: 2025-11-25 09:38:31.797132455 +0000 UTC m=+1.554339360 container died 7084d42911ea96fd6500ff36ea0fa4b3613ddb8325a9b47baeccbecbff3ca2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:38:31 compute-0 systemd[1]: libpod-7084d42911ea96fd6500ff36ea0fa4b3613ddb8325a9b47baeccbecbff3ca2fa.scope: Consumed 1.025s CPU time.
Nov 25 09:38:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-64ec49532a56403e50ccafbd5146bb61fa67019c55025aff25d11b0e63525128-merged.mount: Deactivated successfully.
Nov 25 09:38:32 compute-0 ceph-mon[75015]: pgmap v3568: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:32 compute-0 podman[451269]: 2025-11-25 09:38:32.51539007 +0000 UTC m=+2.272596975 container remove 7084d42911ea96fd6500ff36ea0fa4b3613ddb8325a9b47baeccbecbff3ca2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:38:32 compute-0 sudo[451162]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:38:32 compute-0 systemd[1]: libpod-conmon-7084d42911ea96fd6500ff36ea0fa4b3613ddb8325a9b47baeccbecbff3ca2fa.scope: Deactivated successfully.
Nov 25 09:38:32 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:38:32 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:38:32 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:38:32 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev bf7300e2-6453-4319-b00b-f05da1b9cd9e does not exist
Nov 25 09:38:32 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev f83b05a0-1f60-4538-b8b2-744a4eb64520 does not exist
Nov 25 09:38:32 compute-0 sudo[451331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:38:32 compute-0 sudo[451331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:32 compute-0 sudo[451331]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:33 compute-0 sudo[451356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:38:33 compute-0 sudo[451356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:33 compute-0 sudo[451356]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3569: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:33 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:38:33 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:38:33 compute-0 ceph-mon[75015]: pgmap v3569: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:34 compute-0 nova_compute[253538]: 2025-11-25 09:38:34.765 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3570: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:35 compute-0 ceph-mon[75015]: pgmap v3570: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:36 compute-0 nova_compute[253538]: 2025-11-25 09:38:36.204 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3571: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:37 compute-0 ceph-mon[75015]: pgmap v3571: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3572: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:39 compute-0 nova_compute[253538]: 2025-11-25 09:38:39.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:38:39 compute-0 ceph-mon[75015]: pgmap v3572: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:39 compute-0 nova_compute[253538]: 2025-11-25 09:38:39.634 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:38:39 compute-0 nova_compute[253538]: 2025-11-25 09:38:39.634 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:38:39 compute-0 nova_compute[253538]: 2025-11-25 09:38:39.635 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:38:39 compute-0 nova_compute[253538]: 2025-11-25 09:38:39.635 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:38:39 compute-0 nova_compute[253538]: 2025-11-25 09:38:39.635 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:38:39 compute-0 nova_compute[253538]: 2025-11-25 09:38:39.766 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:40 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:38:40 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2382290401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:38:40 compute-0 nova_compute[253538]: 2025-11-25 09:38:40.204 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:38:40 compute-0 nova_compute[253538]: 2025-11-25 09:38:40.375 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:38:40 compute-0 nova_compute[253538]: 2025-11-25 09:38:40.377 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3504MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:38:40 compute-0 nova_compute[253538]: 2025-11-25 09:38:40.377 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:38:40 compute-0 nova_compute[253538]: 2025-11-25 09:38:40.377 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:38:40 compute-0 nova_compute[253538]: 2025-11-25 09:38:40.630 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:38:40 compute-0 nova_compute[253538]: 2025-11-25 09:38:40.630 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:38:40 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2382290401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:38:40 compute-0 nova_compute[253538]: 2025-11-25 09:38:40.741 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 09:38:40 compute-0 nova_compute[253538]: 2025-11-25 09:38:40.878 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 09:38:40 compute-0 nova_compute[253538]: 2025-11-25 09:38:40.879 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:38:40 compute-0 nova_compute[253538]: 2025-11-25 09:38:40.905 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 09:38:40 compute-0 nova_compute[253538]: 2025-11-25 09:38:40.940 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 09:38:40 compute-0 nova_compute[253538]: 2025-11-25 09:38:40.976 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:38:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3573: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:38:41.126 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:38:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:38:41.127 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:38:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:38:41.127 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:38:41 compute-0 nova_compute[253538]: 2025-11-25 09:38:41.261 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:38:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3399297919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:38:41 compute-0 nova_compute[253538]: 2025-11-25 09:38:41.410 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:38:41 compute-0 nova_compute[253538]: 2025-11-25 09:38:41.415 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:38:41 compute-0 nova_compute[253538]: 2025-11-25 09:38:41.431 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:38:41 compute-0 nova_compute[253538]: 2025-11-25 09:38:41.435 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:38:41 compute-0 nova_compute[253538]: 2025-11-25 09:38:41.436 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:38:41 compute-0 ceph-mon[75015]: pgmap v3573: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3399297919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:38:42 compute-0 sshd-session[451423]: Invalid user andy from 152.70.84.178 port 49024
Nov 25 09:38:42 compute-0 sshd-session[451423]: Received disconnect from 152.70.84.178 port 49024:11: Bye Bye [preauth]
Nov 25 09:38:42 compute-0 sshd-session[451423]: Disconnected from invalid user andy 152.70.84.178 port 49024 [preauth]
Nov 25 09:38:42 compute-0 sshd[189888]: drop connection #1 from [45.78.222.2]:50932 on [38.102.83.169]:22 penalty: exceeded LoginGraceTime
Nov 25 09:38:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3574: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:43 compute-0 ceph-mon[75015]: pgmap v3574: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:44 compute-0 nova_compute[253538]: 2025-11-25 09:38:44.768 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3575: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:46 compute-0 ceph-mon[75015]: pgmap v3575: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:46 compute-0 nova_compute[253538]: 2025-11-25 09:38:46.320 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3576: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:47 compute-0 ceph-mon[75015]: pgmap v3576: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:47 compute-0 sshd-session[451427]: Invalid user vps from 62.60.193.188 port 33772
Nov 25 09:38:47 compute-0 podman[451429]: 2025-11-25 09:38:47.820410689 +0000 UTC m=+0.074345730 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 25 09:38:47 compute-0 podman[451430]: 2025-11-25 09:38:47.828050547 +0000 UTC m=+0.072065457 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 09:38:47 compute-0 sshd-session[451427]: Received disconnect from 62.60.193.188 port 33772:11: Bye Bye [preauth]
Nov 25 09:38:47 compute-0 sshd-session[451427]: Disconnected from invalid user vps 62.60.193.188 port 33772 [preauth]
Nov 25 09:38:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3577: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:49 compute-0 nova_compute[253538]: 2025-11-25 09:38:49.770 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:50 compute-0 ceph-mon[75015]: pgmap v3577: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3578: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:51 compute-0 nova_compute[253538]: 2025-11-25 09:38:51.322 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:51 compute-0 ceph-mon[75015]: pgmap v3578: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3579: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:38:53
Nov 25 09:38:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:38:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:38:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.rgw.root', '.mgr', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'images']
Nov 25 09:38:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:38:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:38:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:38:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:38:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:38:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:38:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:38:54 compute-0 ceph-mon[75015]: pgmap v3579: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:38:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:38:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:38:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:38:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:38:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:38:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:38:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:38:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:38:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:38:54 compute-0 nova_compute[253538]: 2025-11-25 09:38:54.772 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3580: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:55 compute-0 ceph-mon[75015]: pgmap v3580: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:56 compute-0 nova_compute[253538]: 2025-11-25 09:38:56.371 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:56 compute-0 podman[451467]: 2025-11-25 09:38:56.834431922 +0000 UTC m=+0.090333717 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 09:38:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3581: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:58 compute-0 ceph-mon[75015]: pgmap v3581: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3582: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:59 compute-0 ceph-mon[75015]: pgmap v3582: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:38:59 compute-0 nova_compute[253538]: 2025-11-25 09:38:59.775 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:38:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3583: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:01 compute-0 nova_compute[253538]: 2025-11-25 09:39:01.373 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:01 compute-0 sshd-session[451494]: Received disconnect from 146.190.154.85 port 36810:11: Bye Bye [preauth]
Nov 25 09:39:01 compute-0 sshd-session[451494]: Disconnected from authenticating user root 146.190.154.85 port 36810 [preauth]
Nov 25 09:39:02 compute-0 ceph-mon[75015]: pgmap v3583: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3584: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:03 compute-0 ceph-mon[75015]: pgmap v3584: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:04 compute-0 nova_compute[253538]: 2025-11-25 09:39:04.777 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:39:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3585: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:05 compute-0 ceph-mon[75015]: pgmap v3585: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:06 compute-0 nova_compute[253538]: 2025-11-25 09:39:06.373 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3586: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:07 compute-0 ceph-mon[75015]: pgmap v3586: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3587: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:09 compute-0 nova_compute[253538]: 2025-11-25 09:39:09.779 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:10 compute-0 ceph-mon[75015]: pgmap v3587: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3588: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:11 compute-0 nova_compute[253538]: 2025-11-25 09:39:11.376 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:12 compute-0 ceph-mon[75015]: pgmap v3588: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:12 compute-0 nova_compute[253538]: 2025-11-25 09:39:12.437 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:39:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3589: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:13 compute-0 ceph-mon[75015]: pgmap v3589: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:14 compute-0 nova_compute[253538]: 2025-11-25 09:39:14.780 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3590: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:15 compute-0 nova_compute[253538]: 2025-11-25 09:39:15.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:39:16 compute-0 ceph-mon[75015]: pgmap v3590: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:16 compute-0 nova_compute[253538]: 2025-11-25 09:39:16.378 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3591: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:17 compute-0 ceph-mon[75015]: pgmap v3591: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:18 compute-0 podman[451497]: 2025-11-25 09:39:18.794452611 +0000 UTC m=+0.049448250 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:39:18 compute-0 podman[451496]: 2025-11-25 09:39:18.795073839 +0000 UTC m=+0.050865980 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 09:39:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3592: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:19 compute-0 nova_compute[253538]: 2025-11-25 09:39:19.781 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:20 compute-0 ceph-mon[75015]: pgmap v3592: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:20 compute-0 nova_compute[253538]: 2025-11-25 09:39:20.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:39:20 compute-0 nova_compute[253538]: 2025-11-25 09:39:20.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:39:20 compute-0 nova_compute[253538]: 2025-11-25 09:39:20.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:39:20 compute-0 nova_compute[253538]: 2025-11-25 09:39:20.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:39:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3593: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:21 compute-0 nova_compute[253538]: 2025-11-25 09:39:21.438 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:22 compute-0 ceph-mon[75015]: pgmap v3593: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:22 compute-0 nova_compute[253538]: 2025-11-25 09:39:22.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:39:22 compute-0 sshd-session[451536]: Invalid user node from 14.103.111.13 port 60686
Nov 25 09:39:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3594: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:39:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:39:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:39:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:39:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:39:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:39:23 compute-0 nova_compute[253538]: 2025-11-25 09:39:23.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:39:23 compute-0 nova_compute[253538]: 2025-11-25 09:39:23.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:39:23 compute-0 ceph-mon[75015]: pgmap v3594: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:24 compute-0 sshd-session[451539]: Invalid user laravel from 165.227.175.225 port 44286
Nov 25 09:39:24 compute-0 sshd-session[451539]: Received disconnect from 165.227.175.225 port 44286:11: Bye Bye [preauth]
Nov 25 09:39:24 compute-0 sshd-session[451539]: Disconnected from invalid user laravel 165.227.175.225 port 44286 [preauth]
Nov 25 09:39:24 compute-0 nova_compute[253538]: 2025-11-25 09:39:24.781 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3595: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:25 compute-0 nova_compute[253538]: 2025-11-25 09:39:25.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:39:26 compute-0 ceph-mon[75015]: pgmap v3595: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:26 compute-0 nova_compute[253538]: 2025-11-25 09:39:26.440 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:26 compute-0 nova_compute[253538]: 2025-11-25 09:39:26.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:39:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3596: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:27 compute-0 ceph-mon[75015]: pgmap v3596: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:27 compute-0 podman[451541]: 2025-11-25 09:39:27.847606094 +0000 UTC m=+0.088046304 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 09:39:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:39:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2461575536' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:39:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:39:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2461575536' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:39:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2461575536' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:39:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2461575536' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:39:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3597: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:29 compute-0 nova_compute[253538]: 2025-11-25 09:39:29.783 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:30 compute-0 ceph-mon[75015]: pgmap v3597: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3598: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:31 compute-0 nova_compute[253538]: 2025-11-25 09:39:31.475 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:32 compute-0 ceph-mon[75015]: pgmap v3598: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:32 compute-0 nova_compute[253538]: 2025-11-25 09:39:32.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:39:33 compute-0 sudo[451568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:39:33 compute-0 sudo[451568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:33 compute-0 sudo[451568]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3599: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:33 compute-0 sudo[451593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:39:33 compute-0 sudo[451593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:33 compute-0 sudo[451593]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:33 compute-0 sudo[451618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:39:33 compute-0 sudo[451618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:33 compute-0 sudo[451618]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:33 compute-0 sudo[451643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:39:33 compute-0 sudo[451643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:33 compute-0 ceph-mon[75015]: pgmap v3599: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:33 compute-0 sshd-session[451668]: Invalid user vagrant from 47.252.72.9 port 34548
Nov 25 09:39:33 compute-0 sshd-session[451668]: Received disconnect from 47.252.72.9 port 34548:11: Bye Bye [preauth]
Nov 25 09:39:33 compute-0 sshd-session[451668]: Disconnected from invalid user vagrant 47.252.72.9 port 34548 [preauth]
Nov 25 09:39:33 compute-0 sudo[451643]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:39:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:39:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:39:33 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:39:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:39:33 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:39:33 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 4897e048-b85d-4cf4-bf53-b53a0fcaa3d5 does not exist
Nov 25 09:39:33 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 50ed06fc-6a5f-481c-926d-fd0856398f95 does not exist
Nov 25 09:39:33 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev dccfb1e6-7b71-44df-957a-b2da86f3b4bc does not exist
Nov 25 09:39:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:39:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:39:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:39:33 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:39:33 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:39:33 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:39:34 compute-0 sudo[451699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:39:34 compute-0 sudo[451699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:34 compute-0 sudo[451699]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:34 compute-0 sudo[451724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:39:34 compute-0 sudo[451724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:34 compute-0 sudo[451724]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:34 compute-0 sudo[451749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:39:34 compute-0 sudo[451749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:34 compute-0 sudo[451749]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:34 compute-0 sudo[451774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:39:34 compute-0 sudo[451774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:39:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:39:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:39:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:39:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:39:34 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:39:34 compute-0 podman[451836]: 2025-11-25 09:39:34.539674027 +0000 UTC m=+0.021862238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:39:34 compute-0 podman[451836]: 2025-11-25 09:39:34.663423474 +0000 UTC m=+0.145611665 container create 18a248f31f4b29b28b83a538f5551c838ebdf297818c915188b96abd1318d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_antonelli, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:39:34 compute-0 systemd[1]: Started libpod-conmon-18a248f31f4b29b28b83a538f5551c838ebdf297818c915188b96abd1318d8d4.scope.
Nov 25 09:39:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:39:34 compute-0 nova_compute[253538]: 2025-11-25 09:39:34.785 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:34 compute-0 podman[451836]: 2025-11-25 09:39:34.887067719 +0000 UTC m=+0.369255930 container init 18a248f31f4b29b28b83a538f5551c838ebdf297818c915188b96abd1318d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:39:34 compute-0 podman[451836]: 2025-11-25 09:39:34.896864356 +0000 UTC m=+0.379052547 container start 18a248f31f4b29b28b83a538f5551c838ebdf297818c915188b96abd1318d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_antonelli, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 09:39:34 compute-0 naughty_antonelli[451853]: 167 167
Nov 25 09:39:34 compute-0 systemd[1]: libpod-18a248f31f4b29b28b83a538f5551c838ebdf297818c915188b96abd1318d8d4.scope: Deactivated successfully.
Nov 25 09:39:34 compute-0 podman[451836]: 2025-11-25 09:39:34.923999057 +0000 UTC m=+0.406187258 container attach 18a248f31f4b29b28b83a538f5551c838ebdf297818c915188b96abd1318d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 09:39:34 compute-0 podman[451836]: 2025-11-25 09:39:34.92517116 +0000 UTC m=+0.407359361 container died 18a248f31f4b29b28b83a538f5551c838ebdf297818c915188b96abd1318d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:39:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3600: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a8893ebebaddfbba9c2dd90a62776a1c367853672518cd28d8d7b3478c50a5f-merged.mount: Deactivated successfully.
Nov 25 09:39:35 compute-0 nova_compute[253538]: 2025-11-25 09:39:35.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:39:35 compute-0 ceph-mon[75015]: pgmap v3600: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:35 compute-0 podman[451836]: 2025-11-25 09:39:35.874720089 +0000 UTC m=+1.356908320 container remove 18a248f31f4b29b28b83a538f5551c838ebdf297818c915188b96abd1318d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:39:35 compute-0 systemd[1]: libpod-conmon-18a248f31f4b29b28b83a538f5551c838ebdf297818c915188b96abd1318d8d4.scope: Deactivated successfully.
Nov 25 09:39:36 compute-0 podman[451878]: 2025-11-25 09:39:36.043284851 +0000 UTC m=+0.027668237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:39:36 compute-0 podman[451878]: 2025-11-25 09:39:36.250895317 +0000 UTC m=+0.235278693 container create 61d4ad8697d796c715165db9cfeb7c760b82e093cf27c5e3f9e263ee12021f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:39:36 compute-0 systemd[1]: Started libpod-conmon-61d4ad8697d796c715165db9cfeb7c760b82e093cf27c5e3f9e263ee12021f2a.scope.
Nov 25 09:39:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4568007a004a38c8bcdf1aeb9c6e987d23be12e8b15a40df43b4f391f3cb7fc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4568007a004a38c8bcdf1aeb9c6e987d23be12e8b15a40df43b4f391f3cb7fc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4568007a004a38c8bcdf1aeb9c6e987d23be12e8b15a40df43b4f391f3cb7fc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4568007a004a38c8bcdf1aeb9c6e987d23be12e8b15a40df43b4f391f3cb7fc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4568007a004a38c8bcdf1aeb9c6e987d23be12e8b15a40df43b4f391f3cb7fc0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:36 compute-0 nova_compute[253538]: 2025-11-25 09:39:36.477 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:36 compute-0 podman[451878]: 2025-11-25 09:39:36.76240073 +0000 UTC m=+0.746784086 container init 61d4ad8697d796c715165db9cfeb7c760b82e093cf27c5e3f9e263ee12021f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:39:36 compute-0 podman[451878]: 2025-11-25 09:39:36.771453797 +0000 UTC m=+0.755837133 container start 61d4ad8697d796c715165db9cfeb7c760b82e093cf27c5e3f9e263ee12021f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:39:36 compute-0 podman[451878]: 2025-11-25 09:39:36.964812485 +0000 UTC m=+0.949195851 container attach 61d4ad8697d796c715165db9cfeb7c760b82e093cf27c5e3f9e263ee12021f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 09:39:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3601: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:37 compute-0 laughing_haslett[451895]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:39:37 compute-0 laughing_haslett[451895]: --> relative data size: 1.0
Nov 25 09:39:37 compute-0 laughing_haslett[451895]: --> All data devices are unavailable
Nov 25 09:39:37 compute-0 systemd[1]: libpod-61d4ad8697d796c715165db9cfeb7c760b82e093cf27c5e3f9e263ee12021f2a.scope: Deactivated successfully.
Nov 25 09:39:37 compute-0 podman[451924]: 2025-11-25 09:39:37.836436768 +0000 UTC m=+0.026610398 container died 61d4ad8697d796c715165db9cfeb7c760b82e093cf27c5e3f9e263ee12021f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 09:39:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-4568007a004a38c8bcdf1aeb9c6e987d23be12e8b15a40df43b4f391f3cb7fc0-merged.mount: Deactivated successfully.
Nov 25 09:39:38 compute-0 ceph-mon[75015]: pgmap v3601: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:38 compute-0 podman[451924]: 2025-11-25 09:39:38.281506287 +0000 UTC m=+0.471679887 container remove 61d4ad8697d796c715165db9cfeb7c760b82e093cf27c5e3f9e263ee12021f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 09:39:38 compute-0 systemd[1]: libpod-conmon-61d4ad8697d796c715165db9cfeb7c760b82e093cf27c5e3f9e263ee12021f2a.scope: Deactivated successfully.
Nov 25 09:39:38 compute-0 sudo[451774]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:38 compute-0 sudo[451940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:39:38 compute-0 sudo[451940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:38 compute-0 sudo[451940]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:38 compute-0 sudo[451965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:39:38 compute-0 sudo[451965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:38 compute-0 sudo[451965]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:38 compute-0 sudo[451990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:39:38 compute-0 sudo[451990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:38 compute-0 sudo[451990]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:38 compute-0 sudo[452015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:39:38 compute-0 sudo[452015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:38 compute-0 podman[452079]: 2025-11-25 09:39:38.888086274 +0000 UTC m=+0.024830469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:39:39 compute-0 podman[452079]: 2025-11-25 09:39:39.00184506 +0000 UTC m=+0.138589255 container create 49e12f9ccaf3d4194123aadd039bb8eb1d80bad1c8df24cd1039262df374694e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:39:39 compute-0 systemd[1]: Started libpod-conmon-49e12f9ccaf3d4194123aadd039bb8eb1d80bad1c8df24cd1039262df374694e.scope.
Nov 25 09:39:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:39:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3602: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:39 compute-0 podman[452079]: 2025-11-25 09:39:39.331118637 +0000 UTC m=+0.467862912 container init 49e12f9ccaf3d4194123aadd039bb8eb1d80bad1c8df24cd1039262df374694e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:39:39 compute-0 podman[452079]: 2025-11-25 09:39:39.337364189 +0000 UTC m=+0.474108384 container start 49e12f9ccaf3d4194123aadd039bb8eb1d80bad1c8df24cd1039262df374694e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 09:39:39 compute-0 youthful_ptolemy[452095]: 167 167
Nov 25 09:39:39 compute-0 systemd[1]: libpod-49e12f9ccaf3d4194123aadd039bb8eb1d80bad1c8df24cd1039262df374694e.scope: Deactivated successfully.
Nov 25 09:39:39 compute-0 podman[452079]: 2025-11-25 09:39:39.445671064 +0000 UTC m=+0.582415269 container attach 49e12f9ccaf3d4194123aadd039bb8eb1d80bad1c8df24cd1039262df374694e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 25 09:39:39 compute-0 podman[452079]: 2025-11-25 09:39:39.446504908 +0000 UTC m=+0.583249083 container died 49e12f9ccaf3d4194123aadd039bb8eb1d80bad1c8df24cd1039262df374694e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 25 09:39:39 compute-0 ceph-mon[75015]: pgmap v3602: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f77f138d2a1c40098e577ab62d833e9cb9dac6cda0cce063c56acdefbc7f128d-merged.mount: Deactivated successfully.
Nov 25 09:39:39 compute-0 nova_compute[253538]: 2025-11-25 09:39:39.787 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:39 compute-0 podman[452079]: 2025-11-25 09:39:39.943126953 +0000 UTC m=+1.079871178 container remove 49e12f9ccaf3d4194123aadd039bb8eb1d80bad1c8df24cd1039262df374694e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 09:39:39 compute-0 systemd[1]: libpod-conmon-49e12f9ccaf3d4194123aadd039bb8eb1d80bad1c8df24cd1039262df374694e.scope: Deactivated successfully.
Nov 25 09:39:40 compute-0 podman[452119]: 2025-11-25 09:39:40.173504812 +0000 UTC m=+0.091292074 container create 342ffba2fd76deca5de2357c25e35d7bd886167acb0ba8b7b8344b122dca3269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rhodes, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:39:40 compute-0 podman[452119]: 2025-11-25 09:39:40.106165194 +0000 UTC m=+0.023952546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:39:40 compute-0 systemd[1]: Started libpod-conmon-342ffba2fd76deca5de2357c25e35d7bd886167acb0ba8b7b8344b122dca3269.scope.
Nov 25 09:39:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c025e269350551e58a23bd19e3982f25e9f846635ed33091609eed0b706a59c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c025e269350551e58a23bd19e3982f25e9f846635ed33091609eed0b706a59c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c025e269350551e58a23bd19e3982f25e9f846635ed33091609eed0b706a59c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c025e269350551e58a23bd19e3982f25e9f846635ed33091609eed0b706a59c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:40 compute-0 podman[452119]: 2025-11-25 09:39:40.310769269 +0000 UTC m=+0.228556631 container init 342ffba2fd76deca5de2357c25e35d7bd886167acb0ba8b7b8344b122dca3269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:39:40 compute-0 podman[452119]: 2025-11-25 09:39:40.320178845 +0000 UTC m=+0.237966147 container start 342ffba2fd76deca5de2357c25e35d7bd886167acb0ba8b7b8344b122dca3269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:39:40 compute-0 podman[452119]: 2025-11-25 09:39:40.339788801 +0000 UTC m=+0.257576103 container attach 342ffba2fd76deca5de2357c25e35d7bd886167acb0ba8b7b8344b122dca3269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rhodes, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:39:40 compute-0 nova_compute[253538]: 2025-11-25 09:39:40.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:39:40 compute-0 nova_compute[253538]: 2025-11-25 09:39:40.587 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:39:40 compute-0 nova_compute[253538]: 2025-11-25 09:39:40.588 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:39:40 compute-0 nova_compute[253538]: 2025-11-25 09:39:40.589 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:39:40 compute-0 nova_compute[253538]: 2025-11-25 09:39:40.589 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:39:40 compute-0 nova_compute[253538]: 2025-11-25 09:39:40.589 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:39:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:39:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1804341069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:39:41 compute-0 nova_compute[253538]: 2025-11-25 09:39:41.030 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:39:41 compute-0 determined_rhodes[452135]: {
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:     "0": [
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:         {
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "devices": [
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "/dev/loop3"
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             ],
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "lv_name": "ceph_lv0",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "lv_size": "21470642176",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "name": "ceph_lv0",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "tags": {
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.cluster_name": "ceph",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.crush_device_class": "",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.encrypted": "0",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.osd_id": "0",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.type": "block",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.vdo": "0"
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             },
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "type": "block",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "vg_name": "ceph_vg0"
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:         }
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:     ],
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:     "1": [
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:         {
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "devices": [
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "/dev/loop4"
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             ],
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "lv_name": "ceph_lv1",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "lv_size": "21470642176",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "name": "ceph_lv1",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "tags": {
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.cluster_name": "ceph",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.crush_device_class": "",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.encrypted": "0",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.osd_id": "1",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.type": "block",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.vdo": "0"
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             },
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "type": "block",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "vg_name": "ceph_vg1"
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:         }
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:     ],
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:     "2": [
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:         {
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "devices": [
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "/dev/loop5"
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             ],
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "lv_name": "ceph_lv2",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "lv_size": "21470642176",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "name": "ceph_lv2",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "tags": {
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.cluster_name": "ceph",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.crush_device_class": "",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.encrypted": "0",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.osd_id": "2",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.type": "block",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:                 "ceph.vdo": "0"
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             },
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "type": "block",
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:             "vg_name": "ceph_vg2"
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:         }
Nov 25 09:39:41 compute-0 determined_rhodes[452135]:     ]
Nov 25 09:39:41 compute-0 determined_rhodes[452135]: }
Nov 25 09:39:41 compute-0 systemd[1]: libpod-342ffba2fd76deca5de2357c25e35d7bd886167acb0ba8b7b8344b122dca3269.scope: Deactivated successfully.
Nov 25 09:39:41 compute-0 podman[452119]: 2025-11-25 09:39:41.100412624 +0000 UTC m=+1.018199886 container died 342ffba2fd76deca5de2357c25e35d7bd886167acb0ba8b7b8344b122dca3269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:39:41 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1804341069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:39:41.127 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:39:41.127 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:39:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:39:41.128 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:39:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3603: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:41 compute-0 nova_compute[253538]: 2025-11-25 09:39:41.189 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:39:41 compute-0 nova_compute[253538]: 2025-11-25 09:39:41.190 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3481MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:39:41 compute-0 nova_compute[253538]: 2025-11-25 09:39:41.190 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:39:41 compute-0 nova_compute[253538]: 2025-11-25 09:39:41.191 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:39:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c025e269350551e58a23bd19e3982f25e9f846635ed33091609eed0b706a59c2-merged.mount: Deactivated successfully.
Nov 25 09:39:41 compute-0 nova_compute[253538]: 2025-11-25 09:39:41.275 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:39:41 compute-0 nova_compute[253538]: 2025-11-25 09:39:41.277 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:39:41 compute-0 podman[452119]: 2025-11-25 09:39:41.288173499 +0000 UTC m=+1.205960761 container remove 342ffba2fd76deca5de2357c25e35d7bd886167acb0ba8b7b8344b122dca3269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rhodes, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:39:41 compute-0 nova_compute[253538]: 2025-11-25 09:39:41.297 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:39:41 compute-0 systemd[1]: libpod-conmon-342ffba2fd76deca5de2357c25e35d7bd886167acb0ba8b7b8344b122dca3269.scope: Deactivated successfully.
Nov 25 09:39:41 compute-0 sudo[452015]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:41 compute-0 sudo[452179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:39:41 compute-0 sudo[452179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:41 compute-0 sudo[452179]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:41 compute-0 sudo[452205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:39:41 compute-0 sudo[452205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:41 compute-0 sudo[452205]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:41 compute-0 nova_compute[253538]: 2025-11-25 09:39:41.530 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:41 compute-0 sudo[452248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:39:41 compute-0 sudo[452248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:41 compute-0 sudo[452248]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:41 compute-0 sudo[452273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:39:41 compute-0 sudo[452273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:41 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:39:41 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3410821001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:39:41 compute-0 nova_compute[253538]: 2025-11-25 09:39:41.730 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:39:41 compute-0 nova_compute[253538]: 2025-11-25 09:39:41.736 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:39:41 compute-0 nova_compute[253538]: 2025-11-25 09:39:41.751 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:39:41 compute-0 nova_compute[253538]: 2025-11-25 09:39:41.753 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:39:41 compute-0 nova_compute[253538]: 2025-11-25 09:39:41.753 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:39:42 compute-0 podman[452341]: 2025-11-25 09:39:41.962516077 +0000 UTC m=+0.028101888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:39:42 compute-0 podman[452341]: 2025-11-25 09:39:42.062956178 +0000 UTC m=+0.128541969 container create 8e61a0aad34f2ae522394d65788770b9dd5425047e8e746cb9c37a8ccd640e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hawking, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 09:39:42 compute-0 systemd[1]: Started libpod-conmon-8e61a0aad34f2ae522394d65788770b9dd5425047e8e746cb9c37a8ccd640e22.scope.
Nov 25 09:39:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:39:42 compute-0 ceph-mon[75015]: pgmap v3603: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:42 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3410821001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:39:42 compute-0 podman[452341]: 2025-11-25 09:39:42.284814734 +0000 UTC m=+0.350400545 container init 8e61a0aad34f2ae522394d65788770b9dd5425047e8e746cb9c37a8ccd640e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:39:42 compute-0 podman[452341]: 2025-11-25 09:39:42.29309574 +0000 UTC m=+0.358681571 container start 8e61a0aad34f2ae522394d65788770b9dd5425047e8e746cb9c37a8ccd640e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hawking, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:39:42 compute-0 brave_hawking[452357]: 167 167
Nov 25 09:39:42 compute-0 systemd[1]: libpod-8e61a0aad34f2ae522394d65788770b9dd5425047e8e746cb9c37a8ccd640e22.scope: Deactivated successfully.
Nov 25 09:39:42 compute-0 podman[452341]: 2025-11-25 09:39:42.467591073 +0000 UTC m=+0.533176964 container attach 8e61a0aad34f2ae522394d65788770b9dd5425047e8e746cb9c37a8ccd640e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 09:39:42 compute-0 podman[452341]: 2025-11-25 09:39:42.468504118 +0000 UTC m=+0.534090009 container died 8e61a0aad34f2ae522394d65788770b9dd5425047e8e746cb9c37a8ccd640e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 09:39:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-32e64af0cbc087dee6a34561fbd16f1221c07ea15f4c1c9f188cfea43216efc9-merged.mount: Deactivated successfully.
Nov 25 09:39:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3604: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:43 compute-0 podman[452341]: 2025-11-25 09:39:43.356270161 +0000 UTC m=+1.421856012 container remove 8e61a0aad34f2ae522394d65788770b9dd5425047e8e746cb9c37a8ccd640e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:39:43 compute-0 systemd[1]: libpod-conmon-8e61a0aad34f2ae522394d65788770b9dd5425047e8e746cb9c37a8ccd640e22.scope: Deactivated successfully.
Nov 25 09:39:43 compute-0 podman[452382]: 2025-11-25 09:39:43.541993001 +0000 UTC m=+0.040979009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:39:43 compute-0 ceph-mon[75015]: pgmap v3604: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:43 compute-0 podman[452382]: 2025-11-25 09:39:43.754634605 +0000 UTC m=+0.253620563 container create 71152afb258134e867b3bba636e666b2aca0b0d55e016534f6de002659ae5dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_stonebraker, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:39:43 compute-0 systemd[1]: Started libpod-conmon-71152afb258134e867b3bba636e666b2aca0b0d55e016534f6de002659ae5dc2.scope.
Nov 25 09:39:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ca0098ef588392397397def6ed0f18408d42c47c31ebdc73c29a914629137c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ca0098ef588392397397def6ed0f18408d42c47c31ebdc73c29a914629137c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ca0098ef588392397397def6ed0f18408d42c47c31ebdc73c29a914629137c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ca0098ef588392397397def6ed0f18408d42c47c31ebdc73c29a914629137c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:44 compute-0 podman[452382]: 2025-11-25 09:39:44.205984325 +0000 UTC m=+0.704970333 container init 71152afb258134e867b3bba636e666b2aca0b0d55e016534f6de002659ae5dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:39:44 compute-0 podman[452382]: 2025-11-25 09:39:44.220298867 +0000 UTC m=+0.719284825 container start 71152afb258134e867b3bba636e666b2aca0b0d55e016534f6de002659ae5dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:39:44 compute-0 podman[452382]: 2025-11-25 09:39:44.237836816 +0000 UTC m=+0.736822834 container attach 71152afb258134e867b3bba636e666b2aca0b0d55e016534f6de002659ae5dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_stonebraker, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:39:44 compute-0 nova_compute[253538]: 2025-11-25 09:39:44.790 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3605: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]: {
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:         "osd_id": 1,
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:         "type": "bluestore"
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:     },
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:         "osd_id": 2,
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:         "type": "bluestore"
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:     },
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:         "osd_id": 0,
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:         "type": "bluestore"
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]:     }
Nov 25 09:39:45 compute-0 modest_stonebraker[452399]: }
Nov 25 09:39:45 compute-0 systemd[1]: libpod-71152afb258134e867b3bba636e666b2aca0b0d55e016534f6de002659ae5dc2.scope: Deactivated successfully.
Nov 25 09:39:45 compute-0 systemd[1]: libpod-71152afb258134e867b3bba636e666b2aca0b0d55e016534f6de002659ae5dc2.scope: Consumed 1.021s CPU time.
Nov 25 09:39:45 compute-0 conmon[452399]: conmon 71152afb258134e867b3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-71152afb258134e867b3bba636e666b2aca0b0d55e016534f6de002659ae5dc2.scope/container/memory.events
Nov 25 09:39:45 compute-0 podman[452382]: 2025-11-25 09:39:45.241728608 +0000 UTC m=+1.740714536 container died 71152afb258134e867b3bba636e666b2aca0b0d55e016534f6de002659ae5dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_stonebraker, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 09:39:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2ca0098ef588392397397def6ed0f18408d42c47c31ebdc73c29a914629137c-merged.mount: Deactivated successfully.
Nov 25 09:39:45 compute-0 podman[452382]: 2025-11-25 09:39:45.597716155 +0000 UTC m=+2.096702103 container remove 71152afb258134e867b3bba636e666b2aca0b0d55e016534f6de002659ae5dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:39:45 compute-0 systemd[1]: libpod-conmon-71152afb258134e867b3bba636e666b2aca0b0d55e016534f6de002659ae5dc2.scope: Deactivated successfully.
Nov 25 09:39:45 compute-0 sudo[452273]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:39:45 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:39:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:39:45 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:39:45 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev ea5df868-14ed-4612-a65b-f86214900e88 does not exist
Nov 25 09:39:45 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 83f8b280-7a33-46ce-a283-d593dc502d9c does not exist
Nov 25 09:39:45 compute-0 sudo[452445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:39:45 compute-0 sudo[452445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:45 compute-0 sudo[452445]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:45 compute-0 sudo[452470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:39:45 compute-0 sudo[452470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:45 compute-0 sudo[452470]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:46 compute-0 ceph-mon[75015]: pgmap v3605: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:46 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:39:46 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:39:46 compute-0 nova_compute[253538]: 2025-11-25 09:39:46.531 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3606: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:48 compute-0 ceph-mon[75015]: pgmap v3606: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3607: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:49 compute-0 nova_compute[253538]: 2025-11-25 09:39:49.794 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:49 compute-0 podman[452496]: 2025-11-25 09:39:49.814369336 +0000 UTC m=+0.055584948 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:39:49 compute-0 podman[452495]: 2025-11-25 09:39:49.82107789 +0000 UTC m=+0.070097505 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 09:39:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:50 compute-0 ceph-mon[75015]: pgmap v3607: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3608: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:51 compute-0 sshd-session[452531]: Connection closed by authenticating user root 171.244.51.45 port 44188 [preauth]
Nov 25 09:39:51 compute-0 nova_compute[253538]: 2025-11-25 09:39:51.534 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:52 compute-0 ceph-mon[75015]: pgmap v3608: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3609: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:39:53
Nov 25 09:39:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:39:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:39:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.control', 'images', 'backups', '.rgw.root', 'volumes', 'default.rgw.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta']
Nov 25 09:39:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:39:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:39:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:39:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:39:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:39:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:39:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:39:53 compute-0 ceph-mon[75015]: pgmap v3609: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:53 compute-0 sshd-session[452533]: Invalid user supermaint from 182.253.79.194 port 17931
Nov 25 09:39:54 compute-0 sshd-session[452533]: Received disconnect from 182.253.79.194 port 17931:11: Bye Bye [preauth]
Nov 25 09:39:54 compute-0 sshd-session[452533]: Disconnected from invalid user supermaint 182.253.79.194 port 17931 [preauth]
Nov 25 09:39:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:39:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:39:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:39:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:39:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:39:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:39:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:39:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:39:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:39:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:39:54 compute-0 nova_compute[253538]: 2025-11-25 09:39:54.795 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3610: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:56 compute-0 ceph-mon[75015]: pgmap v3610: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:56 compute-0 nova_compute[253538]: 2025-11-25 09:39:56.538 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3611: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:58 compute-0 ceph-mon[75015]: pgmap v3611: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:58 compute-0 podman[452535]: 2025-11-25 09:39:58.855601093 +0000 UTC m=+0.108194765 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:39:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3612: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:39:59 compute-0 nova_compute[253538]: 2025-11-25 09:39:59.843 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:39:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:00 compute-0 ceph-mon[75015]: pgmap v3612: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3613: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:01 compute-0 nova_compute[253538]: 2025-11-25 09:40:01.539 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:02 compute-0 ceph-mon[75015]: pgmap v3613: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3614: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:04 compute-0 ceph-mon[75015]: pgmap v3614: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:40:04 compute-0 nova_compute[253538]: 2025-11-25 09:40:04.845 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:05 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:40:05 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.5 total, 600.0 interval
                                           Cumulative writes: 47K writes, 189K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 16K syncs, 2.82 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 377 writes, 852 keys, 377 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s
                                           Interval WAL: 377 writes, 164 syncs, 2.30 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:40:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3615: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:06 compute-0 ceph-mon[75015]: pgmap v3615: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:06 compute-0 nova_compute[253538]: 2025-11-25 09:40:06.541 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:06 compute-0 sshd-session[452561]: Received disconnect from 146.190.154.85 port 35536:11: Bye Bye [preauth]
Nov 25 09:40:06 compute-0 sshd-session[452561]: Disconnected from authenticating user root 146.190.154.85 port 35536 [preauth]
Nov 25 09:40:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3616: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:08 compute-0 ceph-mon[75015]: pgmap v3616: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3617: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:09 compute-0 nova_compute[253538]: 2025-11-25 09:40:09.847 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:10 compute-0 ceph-mon[75015]: pgmap v3617: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3618: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:11 compute-0 ceph-mon[75015]: pgmap v3618: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:11 compute-0 nova_compute[253538]: 2025-11-25 09:40:11.546 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:11 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:40:11 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.4 total, 600.0 interval
                                           Cumulative writes: 46K writes, 183K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.77 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 476 writes, 1271 keys, 476 commit groups, 1.0 writes per commit group, ingest: 0.54 MB, 0.00 MB/s
                                           Interval WAL: 476 writes, 202 syncs, 2.36 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:40:12 compute-0 nova_compute[253538]: 2025-11-25 09:40:12.754 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:40:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3619: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:14 compute-0 ceph-mon[75015]: pgmap v3619: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:14 compute-0 nova_compute[253538]: 2025-11-25 09:40:14.847 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3620: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:15 compute-0 ceph-mon[75015]: pgmap v3620: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:15 compute-0 nova_compute[253538]: 2025-11-25 09:40:15.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:40:15 compute-0 sshd[189888]: Timeout before authentication for connection from 49.64.169.153 to 38.102.83.169, pid = 450427
Nov 25 09:40:16 compute-0 nova_compute[253538]: 2025-11-25 09:40:16.547 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3621: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:18 compute-0 ceph-mon[75015]: pgmap v3621: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3622: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:19 compute-0 nova_compute[253538]: 2025-11-25 09:40:19.849 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:20 compute-0 ceph-mon[75015]: pgmap v3622: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:20 compute-0 podman[452564]: 2025-11-25 09:40:20.834883759 +0000 UTC m=+0.063754792 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:40:20 compute-0 podman[452563]: 2025-11-25 09:40:20.85915396 +0000 UTC m=+0.101553172 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 09:40:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3623: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:21 compute-0 nova_compute[253538]: 2025-11-25 09:40:21.814 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:40:21 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6602.4 total, 600.0 interval
                                           Cumulative writes: 36K writes, 144K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 36K writes, 12K syncs, 2.82 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 333 writes, 863 keys, 333 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s
                                           Interval WAL: 333 writes, 138 syncs, 2.41 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:40:22 compute-0 ceph-mon[75015]: pgmap v3623: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:40:22.495152) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063622495175, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 1163, "num_deletes": 251, "total_data_size": 1764383, "memory_usage": 1789696, "flush_reason": "Manual Compaction"}
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Nov 25 09:40:22 compute-0 nova_compute[253538]: 2025-11-25 09:40:22.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:40:22 compute-0 nova_compute[253538]: 2025-11-25 09:40:22.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:40:22 compute-0 nova_compute[253538]: 2025-11-25 09:40:22.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:40:22 compute-0 nova_compute[253538]: 2025-11-25 09:40:22.569 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063622608837, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 1736777, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74015, "largest_seqno": 75177, "table_properties": {"data_size": 1731129, "index_size": 3042, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11775, "raw_average_key_size": 19, "raw_value_size": 1719908, "raw_average_value_size": 2880, "num_data_blocks": 136, "num_entries": 597, "num_filter_entries": 597, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063505, "oldest_key_time": 1764063505, "file_creation_time": 1764063622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 113763 microseconds, and 4333 cpu microseconds.
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:40:22.608906) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 1736777 bytes OK
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:40:22.608930) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:40:22.618568) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:40:22.618605) EVENT_LOG_v1 {"time_micros": 1764063622618597, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:40:22.618626) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 1759063, prev total WAL file size 1759063, number of live WAL files 2.
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:40:22.619551) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(1696KB)], [176(9394KB)]
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063622619599, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 11356597, "oldest_snapshot_seqno": -1}
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 9219 keys, 9595614 bytes, temperature: kUnknown
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063622945940, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 9595614, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9539765, "index_size": 31672, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23109, "raw_key_size": 242604, "raw_average_key_size": 26, "raw_value_size": 9380906, "raw_average_value_size": 1017, "num_data_blocks": 1216, "num_entries": 9219, "num_filter_entries": 9219, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764063622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:40:22.946224) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 9595614 bytes
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:40:22.972721) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 34.8 rd, 29.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.2 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(12.1) write-amplify(5.5) OK, records in: 9733, records dropped: 514 output_compression: NoCompression
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:40:22.972761) EVENT_LOG_v1 {"time_micros": 1764063622972745, "job": 110, "event": "compaction_finished", "compaction_time_micros": 326426, "compaction_time_cpu_micros": 28211, "output_level": 6, "num_output_files": 1, "total_output_size": 9595614, "num_input_records": 9733, "num_output_records": 9219, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063622973448, "job": 110, "event": "table_file_deletion", "file_number": 178}
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063622975845, "job": 110, "event": "table_file_deletion", "file_number": 176}
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:40:22.619462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:40:22.975886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:40:22.975892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:40:22.975895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:40:22.975897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:22 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:40:22.975900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3624: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:40:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:40:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:40:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:40:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:40:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:40:23 compute-0 ceph-mon[75015]: pgmap v3624: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:24 compute-0 nova_compute[253538]: 2025-11-25 09:40:24.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:40:24 compute-0 nova_compute[253538]: 2025-11-25 09:40:24.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:40:24 compute-0 nova_compute[253538]: 2025-11-25 09:40:24.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:40:24 compute-0 nova_compute[253538]: 2025-11-25 09:40:24.850 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3625: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:25 compute-0 nova_compute[253538]: 2025-11-25 09:40:25.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:40:26 compute-0 ceph-mon[75015]: pgmap v3625: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:26 compute-0 nova_compute[253538]: 2025-11-25 09:40:26.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:40:26 compute-0 nova_compute[253538]: 2025-11-25 09:40:26.585 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:26 compute-0 sshd-session[452602]: Invalid user oracle from 193.32.162.151 port 33788
Nov 25 09:40:27 compute-0 sshd-session[452602]: Connection closed by invalid user oracle 193.32.162.151 port 33788 [preauth]
Nov 25 09:40:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3626: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:28 compute-0 ceph-mon[75015]: pgmap v3626: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:40:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2112491473' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:40:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:40:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2112491473' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:40:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3627: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2112491473' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:40:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2112491473' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:40:29 compute-0 ceph-mon[75015]: pgmap v3627: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:29 compute-0 nova_compute[253538]: 2025-11-25 09:40:29.867 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:29 compute-0 podman[452604]: 2025-11-25 09:40:29.881832791 +0000 UTC m=+0.138149822 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:40:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3628: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:31 compute-0 nova_compute[253538]: 2025-11-25 09:40:31.586 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:32 compute-0 ceph-mon[75015]: pgmap v3628: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:32 compute-0 sshd-session[452630]: Received disconnect from 165.227.175.225 port 46962:11: Bye Bye [preauth]
Nov 25 09:40:32 compute-0 sshd-session[452630]: Disconnected from authenticating user root 165.227.175.225 port 46962 [preauth]
Nov 25 09:40:32 compute-0 nova_compute[253538]: 2025-11-25 09:40:32.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:40:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3629: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:34 compute-0 ceph-mon[75015]: pgmap v3629: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:34 compute-0 nova_compute[253538]: 2025-11-25 09:40:34.869 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3630: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:35 compute-0 ceph-mon[75015]: pgmap v3630: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:36 compute-0 nova_compute[253538]: 2025-11-25 09:40:36.589 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3631: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:38 compute-0 ceph-mon[75015]: pgmap v3631: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3632: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:39 compute-0 nova_compute[253538]: 2025-11-25 09:40:39.883 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:40 compute-0 ceph-mon[75015]: pgmap v3632: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:40:41.129 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:40:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:40:41.129 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:40:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:40:41.129 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:40:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3633: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:41 compute-0 ceph-mon[75015]: pgmap v3633: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:41 compute-0 nova_compute[253538]: 2025-11-25 09:40:41.590 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:42 compute-0 ceph-mgr[75313]: [devicehealth INFO root] Check health
Nov 25 09:40:42 compute-0 nova_compute[253538]: 2025-11-25 09:40:42.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:40:42 compute-0 nova_compute[253538]: 2025-11-25 09:40:42.604 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:40:42 compute-0 nova_compute[253538]: 2025-11-25 09:40:42.604 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:40:42 compute-0 nova_compute[253538]: 2025-11-25 09:40:42.605 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:40:42 compute-0 nova_compute[253538]: 2025-11-25 09:40:42.605 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:40:42 compute-0 nova_compute[253538]: 2025-11-25 09:40:42.605 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:40:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:40:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/554359672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:40:43 compute-0 nova_compute[253538]: 2025-11-25 09:40:43.050 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:40:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3634: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/554359672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:40:43 compute-0 nova_compute[253538]: 2025-11-25 09:40:43.223 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:40:43 compute-0 nova_compute[253538]: 2025-11-25 09:40:43.224 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3579MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:40:43 compute-0 nova_compute[253538]: 2025-11-25 09:40:43.224 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:40:43 compute-0 nova_compute[253538]: 2025-11-25 09:40:43.224 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:40:43 compute-0 nova_compute[253538]: 2025-11-25 09:40:43.321 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:40:43 compute-0 nova_compute[253538]: 2025-11-25 09:40:43.321 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:40:43 compute-0 nova_compute[253538]: 2025-11-25 09:40:43.337 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:40:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:40:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/853623856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:40:43 compute-0 nova_compute[253538]: 2025-11-25 09:40:43.765 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:40:43 compute-0 nova_compute[253538]: 2025-11-25 09:40:43.771 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:40:43 compute-0 nova_compute[253538]: 2025-11-25 09:40:43.828 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:40:43 compute-0 nova_compute[253538]: 2025-11-25 09:40:43.830 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:40:43 compute-0 nova_compute[253538]: 2025-11-25 09:40:43.830 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:40:44 compute-0 nova_compute[253538]: 2025-11-25 09:40:44.155 253542 DEBUG oslo_concurrency.processutils [None req-bea60fdf-784c-46c0-91c3-52d8073eb371 af6ab5bc7bb24bd08e82a59fd52ba82d d5293c2f698f43d69b5e1b38a119911e - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:40:44 compute-0 nova_compute[253538]: 2025-11-25 09:40:44.203 253542 DEBUG oslo_concurrency.processutils [None req-bea60fdf-784c-46c0-91c3-52d8073eb371 af6ab5bc7bb24bd08e82a59fd52ba82d d5293c2f698f43d69b5e1b38a119911e - - default default] CMD "env LANG=C uptime" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:40:44 compute-0 ceph-mon[75015]: pgmap v3634: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/853623856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:40:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:44 compute-0 nova_compute[253538]: 2025-11-25 09:40:44.885 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3635: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:45 compute-0 sudo[452679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:40:45 compute-0 sudo[452679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:45 compute-0 sudo[452679]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:45 compute-0 sudo[452704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:40:45 compute-0 sudo[452704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:45 compute-0 sudo[452704]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:46 compute-0 sudo[452729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:40:46 compute-0 sudo[452729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:46 compute-0 sudo[452729]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:46 compute-0 sudo[452754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:40:46 compute-0 sudo[452754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:46 compute-0 sshd-session[452677]: Invalid user userb from 62.60.193.188 port 46630
Nov 25 09:40:46 compute-0 ceph-mon[75015]: pgmap v3635: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:46 compute-0 sshd-session[452677]: Received disconnect from 62.60.193.188 port 46630:11: Bye Bye [preauth]
Nov 25 09:40:46 compute-0 sshd-session[452677]: Disconnected from invalid user userb 62.60.193.188 port 46630 [preauth]
Nov 25 09:40:46 compute-0 sudo[452754]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:40:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:40:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:40:46 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:40:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:40:46 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:40:46 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 0330f363-8cc1-43b4-b510-f09af96abd16 does not exist
Nov 25 09:40:46 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 1a35ee78-966b-47e2-ab12-f7532a5f53f2 does not exist
Nov 25 09:40:46 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev a53612e5-8cb7-4bda-b616-08237d3881c6 does not exist
Nov 25 09:40:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:40:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:40:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:40:46 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:40:46 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:40:46 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:40:46 compute-0 nova_compute[253538]: 2025-11-25 09:40:46.592 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:46 compute-0 sudo[452808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:40:46 compute-0 sudo[452808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:46 compute-0 sudo[452808]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:46 compute-0 sudo[452833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:40:46 compute-0 sudo[452833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:46 compute-0 sudo[452833]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:46 compute-0 sudo[452858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:40:46 compute-0 sudo[452858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:46 compute-0 sudo[452858]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:46 compute-0 sudo[452883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:40:46 compute-0 sudo[452883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:47 compute-0 podman[452948]: 2025-11-25 09:40:47.145955777 +0000 UTC m=+0.049270366 container create 7b9628ddce3e8ca980ffd4c4914826d6f792ffb56d6da059e087d772cda3b78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:40:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3636: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:47 compute-0 systemd[1]: Started libpod-conmon-7b9628ddce3e8ca980ffd4c4914826d6f792ffb56d6da059e087d772cda3b78f.scope.
Nov 25 09:40:47 compute-0 podman[452948]: 2025-11-25 09:40:47.123443802 +0000 UTC m=+0.026758421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:40:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:40:47 compute-0 podman[452948]: 2025-11-25 09:40:47.323657947 +0000 UTC m=+0.226972566 container init 7b9628ddce3e8ca980ffd4c4914826d6f792ffb56d6da059e087d772cda3b78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 09:40:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:40:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:40:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:40:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:40:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:40:47 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:40:47 compute-0 podman[452948]: 2025-11-25 09:40:47.331153912 +0000 UTC m=+0.234468481 container start 7b9628ddce3e8ca980ffd4c4914826d6f792ffb56d6da059e087d772cda3b78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:40:47 compute-0 dreamy_ride[452964]: 167 167
Nov 25 09:40:47 compute-0 systemd[1]: libpod-7b9628ddce3e8ca980ffd4c4914826d6f792ffb56d6da059e087d772cda3b78f.scope: Deactivated successfully.
Nov 25 09:40:47 compute-0 conmon[452964]: conmon 7b9628ddce3e8ca980ff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b9628ddce3e8ca980ffd4c4914826d6f792ffb56d6da059e087d772cda3b78f.scope/container/memory.events
Nov 25 09:40:47 compute-0 podman[452948]: 2025-11-25 09:40:47.360080621 +0000 UTC m=+0.263395210 container attach 7b9628ddce3e8ca980ffd4c4914826d6f792ffb56d6da059e087d772cda3b78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:40:47 compute-0 podman[452948]: 2025-11-25 09:40:47.361034468 +0000 UTC m=+0.264349057 container died 7b9628ddce3e8ca980ffd4c4914826d6f792ffb56d6da059e087d772cda3b78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:40:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-604c9a11ec7fb8678711e94fda6f2fea0ef5c5c1b99a8174d51c8c167b4dea59-merged.mount: Deactivated successfully.
Nov 25 09:40:47 compute-0 podman[452948]: 2025-11-25 09:40:47.523945685 +0000 UTC m=+0.427260254 container remove 7b9628ddce3e8ca980ffd4c4914826d6f792ffb56d6da059e087d772cda3b78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 09:40:47 compute-0 systemd[1]: libpod-conmon-7b9628ddce3e8ca980ffd4c4914826d6f792ffb56d6da059e087d772cda3b78f.scope: Deactivated successfully.
Nov 25 09:40:47 compute-0 podman[452989]: 2025-11-25 09:40:47.680106188 +0000 UTC m=+0.038497642 container create 34df4c27a4ae905f8cc54f3588b124b4873dce583639d0b72c10ca1f62ff1143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_raman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:40:47 compute-0 systemd[1]: Started libpod-conmon-34df4c27a4ae905f8cc54f3588b124b4873dce583639d0b72c10ca1f62ff1143.scope.
Nov 25 09:40:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ca45e5c3d7a27b736cdff6cc53d427a92125b763359165c762e7f3582de3aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ca45e5c3d7a27b736cdff6cc53d427a92125b763359165c762e7f3582de3aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ca45e5c3d7a27b736cdff6cc53d427a92125b763359165c762e7f3582de3aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ca45e5c3d7a27b736cdff6cc53d427a92125b763359165c762e7f3582de3aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ca45e5c3d7a27b736cdff6cc53d427a92125b763359165c762e7f3582de3aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:47 compute-0 podman[452989]: 2025-11-25 09:40:47.754033115 +0000 UTC m=+0.112424599 container init 34df4c27a4ae905f8cc54f3588b124b4873dce583639d0b72c10ca1f62ff1143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_raman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 09:40:47 compute-0 podman[452989]: 2025-11-25 09:40:47.663239437 +0000 UTC m=+0.021630911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:40:47 compute-0 podman[452989]: 2025-11-25 09:40:47.766023832 +0000 UTC m=+0.124415326 container start 34df4c27a4ae905f8cc54f3588b124b4873dce583639d0b72c10ca1f62ff1143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 09:40:47 compute-0 podman[452989]: 2025-11-25 09:40:47.774463383 +0000 UTC m=+0.132854867 container attach 34df4c27a4ae905f8cc54f3588b124b4873dce583639d0b72c10ca1f62ff1143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:40:48 compute-0 ceph-mon[75015]: pgmap v3636: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:48 compute-0 adoring_raman[453006]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:40:48 compute-0 adoring_raman[453006]: --> relative data size: 1.0
Nov 25 09:40:48 compute-0 adoring_raman[453006]: --> All data devices are unavailable
Nov 25 09:40:48 compute-0 systemd[1]: libpod-34df4c27a4ae905f8cc54f3588b124b4873dce583639d0b72c10ca1f62ff1143.scope: Deactivated successfully.
Nov 25 09:40:48 compute-0 podman[452989]: 2025-11-25 09:40:48.799201395 +0000 UTC m=+1.157592849 container died 34df4c27a4ae905f8cc54f3588b124b4873dce583639d0b72c10ca1f62ff1143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_raman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:40:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3ca45e5c3d7a27b736cdff6cc53d427a92125b763359165c762e7f3582de3aa-merged.mount: Deactivated successfully.
Nov 25 09:40:48 compute-0 podman[452989]: 2025-11-25 09:40:48.904624783 +0000 UTC m=+1.263016237 container remove 34df4c27a4ae905f8cc54f3588b124b4873dce583639d0b72c10ca1f62ff1143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:40:48 compute-0 systemd[1]: libpod-conmon-34df4c27a4ae905f8cc54f3588b124b4873dce583639d0b72c10ca1f62ff1143.scope: Deactivated successfully.
Nov 25 09:40:48 compute-0 sudo[452883]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:48 compute-0 sudo[453047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:40:48 compute-0 sudo[453047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:48 compute-0 sudo[453047]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:49 compute-0 sudo[453072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:40:49 compute-0 sudo[453072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:49 compute-0 sudo[453072]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:49 compute-0 sudo[453097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:40:49 compute-0 sudo[453097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:49 compute-0 sudo[453097]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:49 compute-0 sudo[453122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:40:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3637: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:49 compute-0 sudo[453122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:49 compute-0 ceph-mon[75015]: pgmap v3637: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:49 compute-0 podman[453188]: 2025-11-25 09:40:49.490715131 +0000 UTC m=+0.023083401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:40:49 compute-0 podman[453188]: 2025-11-25 09:40:49.62473127 +0000 UTC m=+0.157099530 container create 39f83c18b7ec54c98402bfb52512d4c92465bd8f99dfe71b0e2e49c370bb0d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:40:49 compute-0 systemd[1]: Started libpod-conmon-39f83c18b7ec54c98402bfb52512d4c92465bd8f99dfe71b0e2e49c370bb0d6c.scope.
Nov 25 09:40:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:40:49 compute-0 podman[453188]: 2025-11-25 09:40:49.784941943 +0000 UTC m=+0.317310223 container init 39f83c18b7ec54c98402bfb52512d4c92465bd8f99dfe71b0e2e49c370bb0d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kirch, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 09:40:49 compute-0 podman[453188]: 2025-11-25 09:40:49.792559231 +0000 UTC m=+0.324927511 container start 39f83c18b7ec54c98402bfb52512d4c92465bd8f99dfe71b0e2e49c370bb0d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:40:49 compute-0 agitated_kirch[453204]: 167 167
Nov 25 09:40:49 compute-0 systemd[1]: libpod-39f83c18b7ec54c98402bfb52512d4c92465bd8f99dfe71b0e2e49c370bb0d6c.scope: Deactivated successfully.
Nov 25 09:40:49 compute-0 podman[453188]: 2025-11-25 09:40:49.809841893 +0000 UTC m=+0.342210173 container attach 39f83c18b7ec54c98402bfb52512d4c92465bd8f99dfe71b0e2e49c370bb0d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kirch, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:40:49 compute-0 podman[453188]: 2025-11-25 09:40:49.810576822 +0000 UTC m=+0.342945112 container died 39f83c18b7ec54c98402bfb52512d4c92465bd8f99dfe71b0e2e49c370bb0d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kirch, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:40:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:49 compute-0 nova_compute[253538]: 2025-11-25 09:40:49.886 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7b01b58eab7c27bd3a0def34eaa1905243e5be486946d5f6e5040a0fb5226f2-merged.mount: Deactivated successfully.
Nov 25 09:40:50 compute-0 podman[453188]: 2025-11-25 09:40:50.072285697 +0000 UTC m=+0.604653947 container remove 39f83c18b7ec54c98402bfb52512d4c92465bd8f99dfe71b0e2e49c370bb0d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 09:40:50 compute-0 systemd[1]: libpod-conmon-39f83c18b7ec54c98402bfb52512d4c92465bd8f99dfe71b0e2e49c370bb0d6c.scope: Deactivated successfully.
Nov 25 09:40:50 compute-0 podman[453229]: 2025-11-25 09:40:50.253004519 +0000 UTC m=+0.022152105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:40:50 compute-0 podman[453229]: 2025-11-25 09:40:50.375643177 +0000 UTC m=+0.144790733 container create 9d93d0fd14dd7a9a42dec4972e8d5ac285916ee5e61f746c3bafd61a9f0c9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_blackwell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 09:40:50 compute-0 systemd[1]: Started libpod-conmon-9d93d0fd14dd7a9a42dec4972e8d5ac285916ee5e61f746c3bafd61a9f0c9bea.scope.
Nov 25 09:40:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44dfe643d2bc56961cd488c53221be58e84b77747eaad7d7bf4f6eb6a25e1f73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44dfe643d2bc56961cd488c53221be58e84b77747eaad7d7bf4f6eb6a25e1f73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44dfe643d2bc56961cd488c53221be58e84b77747eaad7d7bf4f6eb6a25e1f73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44dfe643d2bc56961cd488c53221be58e84b77747eaad7d7bf4f6eb6a25e1f73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:50 compute-0 podman[453229]: 2025-11-25 09:40:50.582235987 +0000 UTC m=+0.351383563 container init 9d93d0fd14dd7a9a42dec4972e8d5ac285916ee5e61f746c3bafd61a9f0c9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_blackwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:40:50 compute-0 podman[453229]: 2025-11-25 09:40:50.59115692 +0000 UTC m=+0.360304476 container start 9d93d0fd14dd7a9a42dec4972e8d5ac285916ee5e61f746c3bafd61a9f0c9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 09:40:50 compute-0 podman[453229]: 2025-11-25 09:40:50.700195857 +0000 UTC m=+0.469343433 container attach 9d93d0fd14dd7a9a42dec4972e8d5ac285916ee5e61f746c3bafd61a9f0c9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_blackwell, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:40:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3638: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]: {
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:     "0": [
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:         {
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "devices": [
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "/dev/loop3"
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             ],
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "lv_name": "ceph_lv0",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "lv_size": "21470642176",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "name": "ceph_lv0",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "tags": {
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.cluster_name": "ceph",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.crush_device_class": "",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.encrypted": "0",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.osd_id": "0",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.type": "block",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.vdo": "0"
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             },
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "type": "block",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "vg_name": "ceph_vg0"
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:         }
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:     ],
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:     "1": [
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:         {
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "devices": [
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "/dev/loop4"
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             ],
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "lv_name": "ceph_lv1",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "lv_size": "21470642176",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "name": "ceph_lv1",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "tags": {
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.cluster_name": "ceph",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.crush_device_class": "",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.encrypted": "0",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.osd_id": "1",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.type": "block",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.vdo": "0"
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             },
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "type": "block",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "vg_name": "ceph_vg1"
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:         }
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:     ],
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:     "2": [
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:         {
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "devices": [
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "/dev/loop5"
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             ],
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "lv_name": "ceph_lv2",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "lv_size": "21470642176",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "name": "ceph_lv2",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "tags": {
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.cluster_name": "ceph",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.crush_device_class": "",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.encrypted": "0",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.osd_id": "2",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.type": "block",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:                 "ceph.vdo": "0"
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             },
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "type": "block",
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:             "vg_name": "ceph_vg2"
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:         }
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]:     ]
Nov 25 09:40:51 compute-0 goofy_blackwell[453245]: }
Nov 25 09:40:51 compute-0 systemd[1]: libpod-9d93d0fd14dd7a9a42dec4972e8d5ac285916ee5e61f746c3bafd61a9f0c9bea.scope: Deactivated successfully.
Nov 25 09:40:51 compute-0 podman[453229]: 2025-11-25 09:40:51.456210673 +0000 UTC m=+1.225358249 container died 9d93d0fd14dd7a9a42dec4972e8d5ac285916ee5e61f746c3bafd61a9f0c9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 09:40:51 compute-0 ceph-mon[75015]: pgmap v3638: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:51 compute-0 nova_compute[253538]: 2025-11-25 09:40:51.594 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-44dfe643d2bc56961cd488c53221be58e84b77747eaad7d7bf4f6eb6a25e1f73-merged.mount: Deactivated successfully.
Nov 25 09:40:51 compute-0 podman[453255]: 2025-11-25 09:40:51.849887699 +0000 UTC m=+0.391925779 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 09:40:51 compute-0 podman[453229]: 2025-11-25 09:40:51.954513495 +0000 UTC m=+1.723661051 container remove 9d93d0fd14dd7a9a42dec4972e8d5ac285916ee5e61f746c3bafd61a9f0c9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_blackwell, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:40:51 compute-0 podman[453254]: 2025-11-25 09:40:51.958052482 +0000 UTC m=+0.500507164 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:40:51 compute-0 sudo[453122]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:52 compute-0 systemd[1]: libpod-conmon-9d93d0fd14dd7a9a42dec4972e8d5ac285916ee5e61f746c3bafd61a9f0c9bea.scope: Deactivated successfully.
Nov 25 09:40:52 compute-0 sudo[453302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:40:52 compute-0 sudo[453302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:52 compute-0 sudo[453302]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:52 compute-0 sudo[453327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:40:52 compute-0 sudo[453327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:52 compute-0 sudo[453327]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:52 compute-0 sudo[453352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:40:52 compute-0 sudo[453352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:52 compute-0 sudo[453352]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:52 compute-0 sudo[453377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:40:52 compute-0 sudo[453377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:52 compute-0 podman[453440]: 2025-11-25 09:40:52.584493691 +0000 UTC m=+0.051029353 container create c7e5c95913814f006cc985be12b33c31a96e46d2f994afd0059ce61b13857c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:40:52 compute-0 systemd[1]: Started libpod-conmon-c7e5c95913814f006cc985be12b33c31a96e46d2f994afd0059ce61b13857c34.scope.
Nov 25 09:40:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:40:52 compute-0 podman[453440]: 2025-11-25 09:40:52.558263246 +0000 UTC m=+0.024798928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:40:52 compute-0 podman[453440]: 2025-11-25 09:40:52.664755763 +0000 UTC m=+0.131291435 container init c7e5c95913814f006cc985be12b33c31a96e46d2f994afd0059ce61b13857c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cohen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:40:52 compute-0 podman[453440]: 2025-11-25 09:40:52.672753651 +0000 UTC m=+0.139289313 container start c7e5c95913814f006cc985be12b33c31a96e46d2f994afd0059ce61b13857c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:40:52 compute-0 podman[453440]: 2025-11-25 09:40:52.676163264 +0000 UTC m=+0.142698956 container attach c7e5c95913814f006cc985be12b33c31a96e46d2f994afd0059ce61b13857c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cohen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:40:52 compute-0 nice_cohen[453456]: 167 167
Nov 25 09:40:52 compute-0 systemd[1]: libpod-c7e5c95913814f006cc985be12b33c31a96e46d2f994afd0059ce61b13857c34.scope: Deactivated successfully.
Nov 25 09:40:52 compute-0 podman[453440]: 2025-11-25 09:40:52.680363089 +0000 UTC m=+0.146898741 container died c7e5c95913814f006cc985be12b33c31a96e46d2f994afd0059ce61b13857c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cohen, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:40:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e597a0b89499931c760d6531768a700cecf5c9f21ba62f9d39ea9566918cdc86-merged.mount: Deactivated successfully.
Nov 25 09:40:52 compute-0 podman[453440]: 2025-11-25 09:40:52.717256986 +0000 UTC m=+0.183792648 container remove c7e5c95913814f006cc985be12b33c31a96e46d2f994afd0059ce61b13857c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cohen, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 09:40:52 compute-0 systemd[1]: libpod-conmon-c7e5c95913814f006cc985be12b33c31a96e46d2f994afd0059ce61b13857c34.scope: Deactivated successfully.
Nov 25 09:40:52 compute-0 podman[453481]: 2025-11-25 09:40:52.927218637 +0000 UTC m=+0.049167753 container create c2dc82ea3bf76a2423f5808a8b577e56e558b836995edd29796c2a1213ab5634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:40:52 compute-0 systemd[1]: Started libpod-conmon-c2dc82ea3bf76a2423f5808a8b577e56e558b836995edd29796c2a1213ab5634.scope.
Nov 25 09:40:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:40:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52d4279b914cd8c4135f2db7d84fd55d951c3075d37efe06173397e9c4e48c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:53 compute-0 podman[453481]: 2025-11-25 09:40:52.906052649 +0000 UTC m=+0.028001785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:40:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52d4279b914cd8c4135f2db7d84fd55d951c3075d37efe06173397e9c4e48c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52d4279b914cd8c4135f2db7d84fd55d951c3075d37efe06173397e9c4e48c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52d4279b914cd8c4135f2db7d84fd55d951c3075d37efe06173397e9c4e48c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:53 compute-0 podman[453481]: 2025-11-25 09:40:53.014603922 +0000 UTC m=+0.136553058 container init c2dc82ea3bf76a2423f5808a8b577e56e558b836995edd29796c2a1213ab5634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:40:53 compute-0 podman[453481]: 2025-11-25 09:40:53.022137368 +0000 UTC m=+0.144086484 container start c2dc82ea3bf76a2423f5808a8b577e56e558b836995edd29796c2a1213ab5634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:40:53 compute-0 podman[453481]: 2025-11-25 09:40:53.024865782 +0000 UTC m=+0.146814898 container attach c2dc82ea3bf76a2423f5808a8b577e56e558b836995edd29796c2a1213ab5634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 09:40:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3639: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:40:53
Nov 25 09:40:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:40:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:40:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['.rgw.root', 'vms', 'backups', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', '.mgr', 'volumes', 'cephfs.cephfs.meta']
Nov 25 09:40:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:40:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:40:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:40:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:40:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:40:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:40:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:40:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:40:53.783 162739 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:21:09', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:71:8e:07:fa:c2'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:40:53 compute-0 nova_compute[253538]: 2025-11-25 09:40:53.783 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:53 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:40:53.786 162739 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:40:54 compute-0 eager_feynman[453498]: {
Nov 25 09:40:54 compute-0 eager_feynman[453498]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:40:54 compute-0 eager_feynman[453498]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:40:54 compute-0 eager_feynman[453498]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:40:54 compute-0 eager_feynman[453498]:         "osd_id": 1,
Nov 25 09:40:54 compute-0 eager_feynman[453498]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:40:54 compute-0 eager_feynman[453498]:         "type": "bluestore"
Nov 25 09:40:54 compute-0 eager_feynman[453498]:     },
Nov 25 09:40:54 compute-0 eager_feynman[453498]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:40:54 compute-0 eager_feynman[453498]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:40:54 compute-0 eager_feynman[453498]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:40:54 compute-0 eager_feynman[453498]:         "osd_id": 2,
Nov 25 09:40:54 compute-0 eager_feynman[453498]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:40:54 compute-0 eager_feynman[453498]:         "type": "bluestore"
Nov 25 09:40:54 compute-0 eager_feynman[453498]:     },
Nov 25 09:40:54 compute-0 eager_feynman[453498]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:40:54 compute-0 eager_feynman[453498]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:40:54 compute-0 eager_feynman[453498]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:40:54 compute-0 eager_feynman[453498]:         "osd_id": 0,
Nov 25 09:40:54 compute-0 eager_feynman[453498]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:40:54 compute-0 eager_feynman[453498]:         "type": "bluestore"
Nov 25 09:40:54 compute-0 eager_feynman[453498]:     }
Nov 25 09:40:54 compute-0 eager_feynman[453498]: }
Nov 25 09:40:54 compute-0 systemd[1]: libpod-c2dc82ea3bf76a2423f5808a8b577e56e558b836995edd29796c2a1213ab5634.scope: Deactivated successfully.
Nov 25 09:40:54 compute-0 systemd[1]: libpod-c2dc82ea3bf76a2423f5808a8b577e56e558b836995edd29796c2a1213ab5634.scope: Consumed 1.035s CPU time.
Nov 25 09:40:54 compute-0 podman[453481]: 2025-11-25 09:40:54.052723399 +0000 UTC m=+1.174672515 container died c2dc82ea3bf76a2423f5808a8b577e56e558b836995edd29796c2a1213ab5634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:40:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d52d4279b914cd8c4135f2db7d84fd55d951c3075d37efe06173397e9c4e48c2-merged.mount: Deactivated successfully.
Nov 25 09:40:54 compute-0 podman[453481]: 2025-11-25 09:40:54.110885027 +0000 UTC m=+1.232834143 container remove c2dc82ea3bf76a2423f5808a8b577e56e558b836995edd29796c2a1213ab5634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feynman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:40:54 compute-0 systemd[1]: libpod-conmon-c2dc82ea3bf76a2423f5808a8b577e56e558b836995edd29796c2a1213ab5634.scope: Deactivated successfully.
Nov 25 09:40:54 compute-0 sudo[453377]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:40:54 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:40:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:40:54 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:40:54 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 493aff3e-d527-48d5-8676-3217c503b275 does not exist
Nov 25 09:40:54 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 3de7a2b6-1f0e-4d4a-9f00-267669c267bf does not exist
Nov 25 09:40:54 compute-0 sudo[453545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:40:54 compute-0 sudo[453545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:54 compute-0 sudo[453545]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:54 compute-0 ceph-mon[75015]: pgmap v3639: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:54 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:40:54 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:40:54 compute-0 sudo[453570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:40:54 compute-0 sudo[453570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:40:54 compute-0 sudo[453570]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:40:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:40:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:40:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:40:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:40:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:40:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:40:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:40:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:40:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:54 compute-0 nova_compute[253538]: 2025-11-25 09:40:54.888 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3640: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:55 compute-0 sshd-session[453595]: Invalid user abhi from 47.252.72.9 port 43618
Nov 25 09:40:55 compute-0 sshd-session[453595]: Received disconnect from 47.252.72.9 port 43618:11: Bye Bye [preauth]
Nov 25 09:40:55 compute-0 sshd-session[453595]: Disconnected from invalid user abhi 47.252.72.9 port 43618 [preauth]
Nov 25 09:40:56 compute-0 ceph-mon[75015]: pgmap v3640: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:56 compute-0 nova_compute[253538]: 2025-11-25 09:40:56.595 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:40:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3641: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:57 compute-0 ceph-mon[75015]: pgmap v3641: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3642: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:40:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:59 compute-0 nova_compute[253538]: 2025-11-25 09:40:59.889 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:00 compute-0 ceph-mon[75015]: pgmap v3642: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:00 compute-0 podman[453597]: 2025-11-25 09:41:00.838011077 +0000 UTC m=+0.091907710 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller)
Nov 25 09:41:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3643: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:01 compute-0 nova_compute[253538]: 2025-11-25 09:41:01.597 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:01 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:41:01.787 162739 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0e3362c2-a4a0-4a10-9289-943331244f84, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:41:02 compute-0 ceph-mon[75015]: pgmap v3643: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3644: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:03 compute-0 ceph-mon[75015]: pgmap v3644: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:41:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:04 compute-0 nova_compute[253538]: 2025-11-25 09:41:04.890 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3645: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:06 compute-0 ceph-mon[75015]: pgmap v3645: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:06 compute-0 nova_compute[253538]: 2025-11-25 09:41:06.597 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3646: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:07 compute-0 ceph-mon[75015]: pgmap v3646: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3647: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:09 compute-0 nova_compute[253538]: 2025-11-25 09:41:09.892 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:10 compute-0 ceph-mon[75015]: pgmap v3647: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3648: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:11 compute-0 nova_compute[253538]: 2025-11-25 09:41:11.600 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:11 compute-0 ceph-mon[75015]: pgmap v3648: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3649: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:13 compute-0 sshd-session[453625]: Invalid user supermaint from 146.190.154.85 port 38148
Nov 25 09:41:13 compute-0 sshd-session[453625]: Received disconnect from 146.190.154.85 port 38148:11: Bye Bye [preauth]
Nov 25 09:41:13 compute-0 sshd-session[453625]: Disconnected from invalid user supermaint 146.190.154.85 port 38148 [preauth]
Nov 25 09:41:13 compute-0 ceph-mon[75015]: pgmap v3649: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:14 compute-0 nova_compute[253538]: 2025-11-25 09:41:14.831 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:41:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:14 compute-0 nova_compute[253538]: 2025-11-25 09:41:14.894 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3650: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:15 compute-0 ceph-mon[75015]: pgmap v3650: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:16 compute-0 nova_compute[253538]: 2025-11-25 09:41:16.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:41:16 compute-0 nova_compute[253538]: 2025-11-25 09:41:16.601 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3651: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:17 compute-0 ceph-mon[75015]: pgmap v3651: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3652: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:19 compute-0 nova_compute[253538]: 2025-11-25 09:41:19.895 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:20 compute-0 ceph-mon[75015]: pgmap v3652: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:21 compute-0 nova_compute[253538]: 2025-11-25 09:41:21.648 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3653: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:21 compute-0 sshd-session[453627]: Invalid user user4 from 45.78.222.2 port 46856
Nov 25 09:41:21 compute-0 ceph-mon[75015]: pgmap v3653: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:22 compute-0 podman[453632]: 2025-11-25 09:41:22.813061446 +0000 UTC m=+0.054702115 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 25 09:41:22 compute-0 podman[453631]: 2025-11-25 09:41:22.851573177 +0000 UTC m=+0.094083600 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 09:41:23 compute-0 sshd[189888]: Timeout before authentication for connection from 14.103.111.13 to 38.102.83.169, pid = 451536
Nov 25 09:41:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3654: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:41:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:41:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:41:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:41:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:41:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:41:23 compute-0 ceph-mon[75015]: pgmap v3654: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:24 compute-0 sshd-session[453629]: Invalid user devuser from 182.253.79.194 port 22850
Nov 25 09:41:24 compute-0 sshd-session[453627]: Received disconnect from 45.78.222.2 port 46856:11: Bye Bye [preauth]
Nov 25 09:41:24 compute-0 sshd-session[453627]: Disconnected from invalid user user4 45.78.222.2 port 46856 [preauth]
Nov 25 09:41:24 compute-0 sshd-session[453629]: Received disconnect from 182.253.79.194 port 22850:11: Bye Bye [preauth]
Nov 25 09:41:24 compute-0 sshd-session[453629]: Disconnected from invalid user devuser 182.253.79.194 port 22850 [preauth]
Nov 25 09:41:24 compute-0 nova_compute[253538]: 2025-11-25 09:41:24.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:41:24 compute-0 nova_compute[253538]: 2025-11-25 09:41:24.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:41:24 compute-0 nova_compute[253538]: 2025-11-25 09:41:24.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:41:24 compute-0 nova_compute[253538]: 2025-11-25 09:41:24.567 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:41:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:24 compute-0 nova_compute[253538]: 2025-11-25 09:41:24.898 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:24 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Nov 25 09:41:24 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:41:24.995404) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:41:24 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Nov 25 09:41:24 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063684995438, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 739, "num_deletes": 255, "total_data_size": 930998, "memory_usage": 944408, "flush_reason": "Manual Compaction"}
Nov 25 09:41:24 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063685080755, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 922544, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75178, "largest_seqno": 75916, "table_properties": {"data_size": 918701, "index_size": 1622, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8396, "raw_average_key_size": 18, "raw_value_size": 911025, "raw_average_value_size": 2051, "num_data_blocks": 73, "num_entries": 444, "num_filter_entries": 444, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063623, "oldest_key_time": 1764063623, "file_creation_time": 1764063684, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 85424 microseconds, and 5216 cpu microseconds.
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:41:25.080820) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 922544 bytes OK
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:41:25.080847) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:41:25.192259) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:41:25.192374) EVENT_LOG_v1 {"time_micros": 1764063685192299, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:41:25.192402) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 927202, prev total WAL file size 927202, number of live WAL files 2.
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:41:25.193184) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323730' seq:72057594037927935, type:22 .. '6C6F676D0033353231' seq:0, type:0; will stop at (end)
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(900KB)], [179(9370KB)]
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063685193274, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 10518158, "oldest_snapshot_seqno": -1}
Nov 25 09:41:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3655: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 9142 keys, 10410784 bytes, temperature: kUnknown
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063685522901, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 10410784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10353973, "index_size": 32826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22917, "raw_key_size": 241914, "raw_average_key_size": 26, "raw_value_size": 10194953, "raw_average_value_size": 1115, "num_data_blocks": 1265, "num_entries": 9142, "num_filter_entries": 9142, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764063685, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:41:25 compute-0 nova_compute[253538]: 2025-11-25 09:41:25.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:41:25 compute-0 nova_compute[253538]: 2025-11-25 09:41:25.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:41:25 compute-0 nova_compute[253538]: 2025-11-25 09:41:25.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:41:25 compute-0 nova_compute[253538]: 2025-11-25 09:41:25.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:41:25.523447) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 10410784 bytes
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:41:25.579484) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 31.9 rd, 31.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.2 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(22.7) write-amplify(11.3) OK, records in: 9663, records dropped: 521 output_compression: NoCompression
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:41:25.579554) EVENT_LOG_v1 {"time_micros": 1764063685579529, "job": 112, "event": "compaction_finished", "compaction_time_micros": 329780, "compaction_time_cpu_micros": 42210, "output_level": 6, "num_output_files": 1, "total_output_size": 10410784, "num_input_records": 9663, "num_output_records": 9142, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063685580095, "job": 112, "event": "table_file_deletion", "file_number": 181}
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063685582094, "job": 112, "event": "table_file_deletion", "file_number": 179}
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:41:25.193020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:41:25.582269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:41:25.582276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:41:25.582278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:41:25.582284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:41:25 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:41:25.582286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:41:26 compute-0 ceph-mon[75015]: pgmap v3655: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:26 compute-0 nova_compute[253538]: 2025-11-25 09:41:26.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:41:26 compute-0 nova_compute[253538]: 2025-11-25 09:41:26.649 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3656: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:28 compute-0 ceph-mon[75015]: pgmap v3656: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:41:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1382840064' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:41:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:41:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1382840064' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:41:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3657: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1382840064' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:41:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1382840064' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:41:29 compute-0 ceph-mon[75015]: pgmap v3657: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:29 compute-0 nova_compute[253538]: 2025-11-25 09:41:29.900 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3658: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:31 compute-0 nova_compute[253538]: 2025-11-25 09:41:31.650 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:31 compute-0 podman[453671]: 2025-11-25 09:41:31.830406921 +0000 UTC m=+0.083251294 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:41:32 compute-0 ceph-mon[75015]: pgmap v3658: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3659: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:34 compute-0 ceph-mon[75015]: pgmap v3659: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:34 compute-0 nova_compute[253538]: 2025-11-25 09:41:34.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:41:34 compute-0 sshd-session[453698]: Invalid user student from 152.70.84.178 port 54866
Nov 25 09:41:34 compute-0 nova_compute[253538]: 2025-11-25 09:41:34.901 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3660: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:35 compute-0 sshd-session[453698]: Received disconnect from 152.70.84.178 port 54866:11: Bye Bye [preauth]
Nov 25 09:41:35 compute-0 sshd-session[453698]: Disconnected from invalid user student 152.70.84.178 port 54866 [preauth]
Nov 25 09:41:35 compute-0 sshd-session[453701]: Invalid user jacob from 165.227.175.225 port 46112
Nov 25 09:41:35 compute-0 sshd-session[453701]: Received disconnect from 165.227.175.225 port 46112:11: Bye Bye [preauth]
Nov 25 09:41:35 compute-0 sshd-session[453701]: Disconnected from invalid user jacob 165.227.175.225 port 46112 [preauth]
Nov 25 09:41:36 compute-0 ceph-mon[75015]: pgmap v3660: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:36 compute-0 nova_compute[253538]: 2025-11-25 09:41:36.652 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3661: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:37 compute-0 nova_compute[253538]: 2025-11-25 09:41:37.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:41:38 compute-0 ceph-mon[75015]: pgmap v3661: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3662: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:39 compute-0 ceph-mon[75015]: pgmap v3662: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:39 compute-0 nova_compute[253538]: 2025-11-25 09:41:39.902 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:41:41.130 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:41:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:41:41.130 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:41:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:41:41.130 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:41:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3663: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:41 compute-0 nova_compute[253538]: 2025-11-25 09:41:41.653 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:42 compute-0 ceph-mon[75015]: pgmap v3663: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:42 compute-0 nova_compute[253538]: 2025-11-25 09:41:42.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:41:42 compute-0 nova_compute[253538]: 2025-11-25 09:41:42.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:41:42 compute-0 nova_compute[253538]: 2025-11-25 09:41:42.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:41:42 compute-0 nova_compute[253538]: 2025-11-25 09:41:42.578 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:41:42 compute-0 nova_compute[253538]: 2025-11-25 09:41:42.578 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:41:42 compute-0 nova_compute[253538]: 2025-11-25 09:41:42.579 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:41:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:41:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1605825118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:41:43 compute-0 nova_compute[253538]: 2025-11-25 09:41:43.045 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:41:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3664: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:43 compute-0 nova_compute[253538]: 2025-11-25 09:41:43.211 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:41:43 compute-0 nova_compute[253538]: 2025-11-25 09:41:43.213 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3572MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:41:43 compute-0 nova_compute[253538]: 2025-11-25 09:41:43.213 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:41:43 compute-0 nova_compute[253538]: 2025-11-25 09:41:43.213 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:41:43 compute-0 nova_compute[253538]: 2025-11-25 09:41:43.275 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:41:43 compute-0 nova_compute[253538]: 2025-11-25 09:41:43.275 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:41:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1605825118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:41:43 compute-0 nova_compute[253538]: 2025-11-25 09:41:43.305 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:41:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:41:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/42374849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:41:43 compute-0 nova_compute[253538]: 2025-11-25 09:41:43.728 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:41:43 compute-0 nova_compute[253538]: 2025-11-25 09:41:43.733 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:41:43 compute-0 nova_compute[253538]: 2025-11-25 09:41:43.748 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:41:43 compute-0 nova_compute[253538]: 2025-11-25 09:41:43.750 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:41:43 compute-0 nova_compute[253538]: 2025-11-25 09:41:43.750 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:41:44 compute-0 ceph-mon[75015]: pgmap v3664: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/42374849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:41:44 compute-0 nova_compute[253538]: 2025-11-25 09:41:44.903 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3665: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:46 compute-0 ceph-mon[75015]: pgmap v3665: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:46 compute-0 nova_compute[253538]: 2025-11-25 09:41:46.655 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3666: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:48 compute-0 ceph-mon[75015]: pgmap v3666: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3667: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:49 compute-0 ceph-mon[75015]: pgmap v3667: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:49 compute-0 nova_compute[253538]: 2025-11-25 09:41:49.905 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:50 compute-0 sshd-session[453703]: Invalid user martin from 45.78.217.205 port 55928
Nov 25 09:41:50 compute-0 sshd-session[453703]: Received disconnect from 45.78.217.205 port 55928:11: Bye Bye [preauth]
Nov 25 09:41:50 compute-0 sshd-session[453703]: Disconnected from invalid user martin 45.78.217.205 port 55928 [preauth]
Nov 25 09:41:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3668: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:51 compute-0 nova_compute[253538]: 2025-11-25 09:41:51.657 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:52 compute-0 ceph-mon[75015]: pgmap v3668: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3669: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:53 compute-0 ceph-mon[75015]: pgmap v3669: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:41:53
Nov 25 09:41:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:41:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:41:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'volumes', 'images', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms']
Nov 25 09:41:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:41:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:41:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:41:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:41:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:41:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:41:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:41:53 compute-0 podman[453751]: 2025-11-25 09:41:53.79744577 +0000 UTC m=+0.047329833 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 09:41:53 compute-0 podman[453750]: 2025-11-25 09:41:53.800177725 +0000 UTC m=+0.052387641 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS)
Nov 25 09:41:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:41:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:41:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:41:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:41:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:41:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:41:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:41:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:41:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:41:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:41:54 compute-0 sudo[453787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:41:54 compute-0 sudo[453787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:54 compute-0 sudo[453787]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:54 compute-0 sudo[453812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:41:54 compute-0 sudo[453812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:54 compute-0 sudo[453812]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:54 compute-0 sudo[453837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:41:54 compute-0 sudo[453837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:54 compute-0 sudo[453837]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:54 compute-0 sudo[453862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:41:54 compute-0 sudo[453862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:54 compute-0 nova_compute[253538]: 2025-11-25 09:41:54.906 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:54 compute-0 sudo[453862]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:41:55 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:41:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:41:55 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:41:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:41:55 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:41:55 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c70c9854-3191-46b0-8ced-a5d99c1bb5a7 does not exist
Nov 25 09:41:55 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 501c3760-0e92-42c5-848c-a0e4b6ee2278 does not exist
Nov 25 09:41:55 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 3dc32a78-bd45-4527-a422-49638c00c12b does not exist
Nov 25 09:41:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:41:55 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:41:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:41:55 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:41:55 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:41:55 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:41:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:41:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:41:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:41:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:41:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:41:55 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:41:55 compute-0 sudo[453918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:41:55 compute-0 sudo[453918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:55 compute-0 sudo[453918]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:55 compute-0 sudo[453943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:41:55 compute-0 sudo[453943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:55 compute-0 sudo[453943]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3670: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:55 compute-0 sudo[453968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:41:55 compute-0 sudo[453968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:55 compute-0 sudo[453968]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:55 compute-0 sudo[453993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:41:55 compute-0 sudo[453993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:55 compute-0 podman[454058]: 2025-11-25 09:41:55.645803045 +0000 UTC m=+0.049786530 container create 426eddc8a3b16b6bb2ecad223596bffc2842bfe5b047b905ff14945b157e6c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 09:41:55 compute-0 systemd[1]: Started libpod-conmon-426eddc8a3b16b6bb2ecad223596bffc2842bfe5b047b905ff14945b157e6c14.scope.
Nov 25 09:41:55 compute-0 podman[454058]: 2025-11-25 09:41:55.623297551 +0000 UTC m=+0.027281066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:41:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:41:55 compute-0 podman[454058]: 2025-11-25 09:41:55.771344892 +0000 UTC m=+0.175328397 container init 426eddc8a3b16b6bb2ecad223596bffc2842bfe5b047b905ff14945b157e6c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:41:55 compute-0 podman[454058]: 2025-11-25 09:41:55.780623805 +0000 UTC m=+0.184607290 container start 426eddc8a3b16b6bb2ecad223596bffc2842bfe5b047b905ff14945b157e6c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:41:55 compute-0 angry_hopper[454075]: 167 167
Nov 25 09:41:55 compute-0 systemd[1]: libpod-426eddc8a3b16b6bb2ecad223596bffc2842bfe5b047b905ff14945b157e6c14.scope: Deactivated successfully.
Nov 25 09:41:55 compute-0 conmon[454075]: conmon 426eddc8a3b16b6bb2ec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-426eddc8a3b16b6bb2ecad223596bffc2842bfe5b047b905ff14945b157e6c14.scope/container/memory.events
Nov 25 09:41:55 compute-0 podman[454058]: 2025-11-25 09:41:55.79141812 +0000 UTC m=+0.195401635 container attach 426eddc8a3b16b6bb2ecad223596bffc2842bfe5b047b905ff14945b157e6c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:41:55 compute-0 podman[454058]: 2025-11-25 09:41:55.791889512 +0000 UTC m=+0.195873007 container died 426eddc8a3b16b6bb2ecad223596bffc2842bfe5b047b905ff14945b157e6c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hopper, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:41:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3d221c1952a5e6813ec1f00b62ad5274e51b75dbb5853e908a9ef0945002d4c-merged.mount: Deactivated successfully.
Nov 25 09:41:55 compute-0 nova_compute[253538]: 2025-11-25 09:41:55.861 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:41:55 compute-0 podman[454058]: 2025-11-25 09:41:55.897936447 +0000 UTC m=+0.301919932 container remove 426eddc8a3b16b6bb2ecad223596bffc2842bfe5b047b905ff14945b157e6c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hopper, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:41:55 compute-0 systemd[1]: libpod-conmon-426eddc8a3b16b6bb2ecad223596bffc2842bfe5b047b905ff14945b157e6c14.scope: Deactivated successfully.
Nov 25 09:41:56 compute-0 podman[454098]: 2025-11-25 09:41:56.072688367 +0000 UTC m=+0.061787577 container create 9915f40d0fdff2912f89837d3b1d39ba1b4f3eea8e99e86c323b714df22d5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jackson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:41:56 compute-0 ceph-mon[75015]: pgmap v3670: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:56 compute-0 podman[454098]: 2025-11-25 09:41:56.034454194 +0000 UTC m=+0.023553404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:41:56 compute-0 systemd[1]: Started libpod-conmon-9915f40d0fdff2912f89837d3b1d39ba1b4f3eea8e99e86c323b714df22d5522.scope.
Nov 25 09:41:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:41:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3fdb2e33e9e1396ebe02814375714b581ec7923cd0478ac2fd4f9c268a9647/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:41:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3fdb2e33e9e1396ebe02814375714b581ec7923cd0478ac2fd4f9c268a9647/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:41:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3fdb2e33e9e1396ebe02814375714b581ec7923cd0478ac2fd4f9c268a9647/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:41:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3fdb2e33e9e1396ebe02814375714b581ec7923cd0478ac2fd4f9c268a9647/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:41:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3fdb2e33e9e1396ebe02814375714b581ec7923cd0478ac2fd4f9c268a9647/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:41:56 compute-0 podman[454098]: 2025-11-25 09:41:56.176862772 +0000 UTC m=+0.165962002 container init 9915f40d0fdff2912f89837d3b1d39ba1b4f3eea8e99e86c323b714df22d5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jackson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:41:56 compute-0 podman[454098]: 2025-11-25 09:41:56.18710068 +0000 UTC m=+0.176199890 container start 9915f40d0fdff2912f89837d3b1d39ba1b4f3eea8e99e86c323b714df22d5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 09:41:56 compute-0 podman[454098]: 2025-11-25 09:41:56.191031398 +0000 UTC m=+0.180130628 container attach 9915f40d0fdff2912f89837d3b1d39ba1b4f3eea8e99e86c323b714df22d5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:41:56 compute-0 nova_compute[253538]: 2025-11-25 09:41:56.658 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:57 compute-0 nifty_jackson[454114]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:41:57 compute-0 nifty_jackson[454114]: --> relative data size: 1.0
Nov 25 09:41:57 compute-0 nifty_jackson[454114]: --> All data devices are unavailable
Nov 25 09:41:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3671: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:57 compute-0 systemd[1]: libpod-9915f40d0fdff2912f89837d3b1d39ba1b4f3eea8e99e86c323b714df22d5522.scope: Deactivated successfully.
Nov 25 09:41:57 compute-0 conmon[454114]: conmon 9915f40d0fdff2912f89 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9915f40d0fdff2912f89837d3b1d39ba1b4f3eea8e99e86c323b714df22d5522.scope/container/memory.events
Nov 25 09:41:57 compute-0 podman[454098]: 2025-11-25 09:41:57.216563932 +0000 UTC m=+1.205663152 container died 9915f40d0fdff2912f89837d3b1d39ba1b4f3eea8e99e86c323b714df22d5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 09:41:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab3fdb2e33e9e1396ebe02814375714b581ec7923cd0478ac2fd4f9c268a9647-merged.mount: Deactivated successfully.
Nov 25 09:41:57 compute-0 podman[454098]: 2025-11-25 09:41:57.291279171 +0000 UTC m=+1.280378391 container remove 9915f40d0fdff2912f89837d3b1d39ba1b4f3eea8e99e86c323b714df22d5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jackson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:41:57 compute-0 systemd[1]: libpod-conmon-9915f40d0fdff2912f89837d3b1d39ba1b4f3eea8e99e86c323b714df22d5522.scope: Deactivated successfully.
Nov 25 09:41:57 compute-0 sudo[453993]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:57 compute-0 sudo[454156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:41:57 compute-0 sudo[454156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:57 compute-0 sudo[454156]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:57 compute-0 sudo[454181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:41:57 compute-0 sudo[454181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:57 compute-0 sudo[454181]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:57 compute-0 sudo[454206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:41:57 compute-0 sudo[454206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:57 compute-0 sudo[454206]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:57 compute-0 sudo[454231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:41:57 compute-0 sudo[454231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:57 compute-0 podman[454296]: 2025-11-25 09:41:57.916612301 +0000 UTC m=+0.066475016 container create b7a59148403c7d4f02a698596a4f052ddcd9ba49322d6b568d34a1e106e2aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 09:41:57 compute-0 systemd[1]: Started libpod-conmon-b7a59148403c7d4f02a698596a4f052ddcd9ba49322d6b568d34a1e106e2aae7.scope.
Nov 25 09:41:57 compute-0 podman[454296]: 2025-11-25 09:41:57.872943839 +0000 UTC m=+0.022806574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:41:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:41:57 compute-0 podman[454296]: 2025-11-25 09:41:57.999505563 +0000 UTC m=+0.149368308 container init b7a59148403c7d4f02a698596a4f052ddcd9ba49322d6b568d34a1e106e2aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:41:58 compute-0 podman[454296]: 2025-11-25 09:41:58.005415754 +0000 UTC m=+0.155278459 container start b7a59148403c7d4f02a698596a4f052ddcd9ba49322d6b568d34a1e106e2aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:41:58 compute-0 peaceful_roentgen[454312]: 167 167
Nov 25 09:41:58 compute-0 podman[454296]: 2025-11-25 09:41:58.010820502 +0000 UTC m=+0.160683247 container attach b7a59148403c7d4f02a698596a4f052ddcd9ba49322d6b568d34a1e106e2aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 09:41:58 compute-0 systemd[1]: libpod-b7a59148403c7d4f02a698596a4f052ddcd9ba49322d6b568d34a1e106e2aae7.scope: Deactivated successfully.
Nov 25 09:41:58 compute-0 podman[454296]: 2025-11-25 09:41:58.012000215 +0000 UTC m=+0.161862930 container died b7a59148403c7d4f02a698596a4f052ddcd9ba49322d6b568d34a1e106e2aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:41:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-732bb6062ad017a53e4e9c202040df128b7e16ad37e5f3230de67c5e4da69699-merged.mount: Deactivated successfully.
Nov 25 09:41:58 compute-0 podman[454296]: 2025-11-25 09:41:58.163643864 +0000 UTC m=+0.313506579 container remove b7a59148403c7d4f02a698596a4f052ddcd9ba49322d6b568d34a1e106e2aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:41:58 compute-0 systemd[1]: libpod-conmon-b7a59148403c7d4f02a698596a4f052ddcd9ba49322d6b568d34a1e106e2aae7.scope: Deactivated successfully.
Nov 25 09:41:58 compute-0 ceph-mon[75015]: pgmap v3671: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:58 compute-0 podman[454336]: 2025-11-25 09:41:58.318159442 +0000 UTC m=+0.046399608 container create 90377f62e1780a191b6850e12f2378ee7aeea3b198b875ad5fa3077fb1fae917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:41:58 compute-0 systemd[1]: Started libpod-conmon-90377f62e1780a191b6850e12f2378ee7aeea3b198b875ad5fa3077fb1fae917.scope.
Nov 25 09:41:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:41:58 compute-0 podman[454336]: 2025-11-25 09:41:58.294808845 +0000 UTC m=+0.023049041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96565e6a304a1031022da527ccf2fc8760f6605deb4b0bca509de853efe4634f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96565e6a304a1031022da527ccf2fc8760f6605deb4b0bca509de853efe4634f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96565e6a304a1031022da527ccf2fc8760f6605deb4b0bca509de853efe4634f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96565e6a304a1031022da527ccf2fc8760f6605deb4b0bca509de853efe4634f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:41:58 compute-0 podman[454336]: 2025-11-25 09:41:58.410686618 +0000 UTC m=+0.138926794 container init 90377f62e1780a191b6850e12f2378ee7aeea3b198b875ad5fa3077fb1fae917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:41:58 compute-0 podman[454336]: 2025-11-25 09:41:58.421641737 +0000 UTC m=+0.149881903 container start 90377f62e1780a191b6850e12f2378ee7aeea3b198b875ad5fa3077fb1fae917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:41:58 compute-0 podman[454336]: 2025-11-25 09:41:58.445796986 +0000 UTC m=+0.174037182 container attach 90377f62e1780a191b6850e12f2378ee7aeea3b198b875ad5fa3077fb1fae917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:41:59 compute-0 naughty_tharp[454352]: {
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:     "0": [
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:         {
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "devices": [
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "/dev/loop3"
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             ],
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "lv_name": "ceph_lv0",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "lv_size": "21470642176",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "name": "ceph_lv0",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "tags": {
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.cluster_name": "ceph",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.crush_device_class": "",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.encrypted": "0",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.osd_id": "0",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.type": "block",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.vdo": "0"
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             },
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "type": "block",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "vg_name": "ceph_vg0"
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:         }
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:     ],
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:     "1": [
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:         {
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "devices": [
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "/dev/loop4"
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             ],
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "lv_name": "ceph_lv1",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "lv_size": "21470642176",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "name": "ceph_lv1",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "tags": {
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.cluster_name": "ceph",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.crush_device_class": "",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.encrypted": "0",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.osd_id": "1",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.type": "block",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.vdo": "0"
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             },
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "type": "block",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "vg_name": "ceph_vg1"
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:         }
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:     ],
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:     "2": [
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:         {
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "devices": [
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "/dev/loop5"
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             ],
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "lv_name": "ceph_lv2",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "lv_size": "21470642176",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "name": "ceph_lv2",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "tags": {
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.cluster_name": "ceph",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.crush_device_class": "",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.encrypted": "0",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.osd_id": "2",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.type": "block",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:                 "ceph.vdo": "0"
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             },
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "type": "block",
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:             "vg_name": "ceph_vg2"
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:         }
Nov 25 09:41:59 compute-0 naughty_tharp[454352]:     ]
Nov 25 09:41:59 compute-0 naughty_tharp[454352]: }
Nov 25 09:41:59 compute-0 systemd[1]: libpod-90377f62e1780a191b6850e12f2378ee7aeea3b198b875ad5fa3077fb1fae917.scope: Deactivated successfully.
Nov 25 09:41:59 compute-0 podman[454336]: 2025-11-25 09:41:59.209568034 +0000 UTC m=+0.937808200 container died 90377f62e1780a191b6850e12f2378ee7aeea3b198b875ad5fa3077fb1fae917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:41:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3672: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:41:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-96565e6a304a1031022da527ccf2fc8760f6605deb4b0bca509de853efe4634f-merged.mount: Deactivated successfully.
Nov 25 09:41:59 compute-0 podman[454336]: 2025-11-25 09:41:59.280708907 +0000 UTC m=+1.008949073 container remove 90377f62e1780a191b6850e12f2378ee7aeea3b198b875ad5fa3077fb1fae917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:41:59 compute-0 systemd[1]: libpod-conmon-90377f62e1780a191b6850e12f2378ee7aeea3b198b875ad5fa3077fb1fae917.scope: Deactivated successfully.
Nov 25 09:41:59 compute-0 sudo[454231]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:59 compute-0 sudo[454373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:41:59 compute-0 sudo[454373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:59 compute-0 sudo[454373]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:59 compute-0 sudo[454398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:41:59 compute-0 sudo[454398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:59 compute-0 sudo[454398]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:59 compute-0 sudo[454423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:41:59 compute-0 sudo[454423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:59 compute-0 sudo[454423]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:59 compute-0 nova_compute[253538]: 2025-11-25 09:41:59.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:41:59 compute-0 nova_compute[253538]: 2025-11-25 09:41:59.556 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 09:41:59 compute-0 sudo[454448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:41:59 compute-0 sudo[454448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:59 compute-0 nova_compute[253538]: 2025-11-25 09:41:59.907 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:41:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:59 compute-0 podman[454513]: 2025-11-25 09:41:59.937862174 +0000 UTC m=+0.086899983 container create e3d940a357b89187823f715b2afe7e2d60e74bffde1b86b2390335662ae7368f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:41:59 compute-0 podman[454513]: 2025-11-25 09:41:59.879053489 +0000 UTC m=+0.028091318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:41:59 compute-0 systemd[1]: Started libpod-conmon-e3d940a357b89187823f715b2afe7e2d60e74bffde1b86b2390335662ae7368f.scope.
Nov 25 09:42:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:42:00 compute-0 podman[454513]: 2025-11-25 09:42:00.029968679 +0000 UTC m=+0.179006518 container init e3d940a357b89187823f715b2afe7e2d60e74bffde1b86b2390335662ae7368f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:42:00 compute-0 podman[454513]: 2025-11-25 09:42:00.038395719 +0000 UTC m=+0.187433518 container start e3d940a357b89187823f715b2afe7e2d60e74bffde1b86b2390335662ae7368f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:42:00 compute-0 podman[454513]: 2025-11-25 09:42:00.042740918 +0000 UTC m=+0.191778747 container attach e3d940a357b89187823f715b2afe7e2d60e74bffde1b86b2390335662ae7368f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:42:00 compute-0 strange_montalcini[454529]: 167 167
Nov 25 09:42:00 compute-0 systemd[1]: libpod-e3d940a357b89187823f715b2afe7e2d60e74bffde1b86b2390335662ae7368f.scope: Deactivated successfully.
Nov 25 09:42:00 compute-0 conmon[454529]: conmon e3d940a357b89187823f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e3d940a357b89187823f715b2afe7e2d60e74bffde1b86b2390335662ae7368f.scope/container/memory.events
Nov 25 09:42:00 compute-0 podman[454513]: 2025-11-25 09:42:00.045585686 +0000 UTC m=+0.194623535 container died e3d940a357b89187823f715b2afe7e2d60e74bffde1b86b2390335662ae7368f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 09:42:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc56f3c9eb5e4df9d11b47d91615c7b0631e98935b43fc4af176b9bf92819802-merged.mount: Deactivated successfully.
Nov 25 09:42:00 compute-0 podman[454513]: 2025-11-25 09:42:00.096759972 +0000 UTC m=+0.245797781 container remove e3d940a357b89187823f715b2afe7e2d60e74bffde1b86b2390335662ae7368f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 09:42:00 compute-0 systemd[1]: libpod-conmon-e3d940a357b89187823f715b2afe7e2d60e74bffde1b86b2390335662ae7368f.scope: Deactivated successfully.
Nov 25 09:42:00 compute-0 podman[454553]: 2025-11-25 09:42:00.267247496 +0000 UTC m=+0.045516263 container create c8854a5472c332c5d5df9e712e48b80828a3744f5eb4547b129d6f296aee45a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_neumann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:42:00 compute-0 ceph-mon[75015]: pgmap v3672: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:00 compute-0 systemd[1]: Started libpod-conmon-c8854a5472c332c5d5df9e712e48b80828a3744f5eb4547b129d6f296aee45a2.scope.
Nov 25 09:42:00 compute-0 podman[454553]: 2025-11-25 09:42:00.244923167 +0000 UTC m=+0.023191964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:42:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d183e4ece24a2dcb2305735a898e6a6b693649a50af86ff4ccfe7263471af8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d183e4ece24a2dcb2305735a898e6a6b693649a50af86ff4ccfe7263471af8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d183e4ece24a2dcb2305735a898e6a6b693649a50af86ff4ccfe7263471af8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d183e4ece24a2dcb2305735a898e6a6b693649a50af86ff4ccfe7263471af8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:00 compute-0 podman[454553]: 2025-11-25 09:42:00.380102436 +0000 UTC m=+0.158371213 container init c8854a5472c332c5d5df9e712e48b80828a3744f5eb4547b129d6f296aee45a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:42:00 compute-0 podman[454553]: 2025-11-25 09:42:00.38718856 +0000 UTC m=+0.165457337 container start c8854a5472c332c5d5df9e712e48b80828a3744f5eb4547b129d6f296aee45a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 09:42:00 compute-0 podman[454553]: 2025-11-25 09:42:00.393244425 +0000 UTC m=+0.171513232 container attach c8854a5472c332c5d5df9e712e48b80828a3744f5eb4547b129d6f296aee45a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_neumann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 09:42:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3673: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:01 compute-0 condescending_neumann[454571]: {
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:         "osd_id": 1,
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:         "type": "bluestore"
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:     },
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:         "osd_id": 2,
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:         "type": "bluestore"
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:     },
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:         "osd_id": 0,
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:         "type": "bluestore"
Nov 25 09:42:01 compute-0 condescending_neumann[454571]:     }
Nov 25 09:42:01 compute-0 condescending_neumann[454571]: }
Nov 25 09:42:01 compute-0 systemd[1]: libpod-c8854a5472c332c5d5df9e712e48b80828a3744f5eb4547b129d6f296aee45a2.scope: Deactivated successfully.
Nov 25 09:42:01 compute-0 podman[454553]: 2025-11-25 09:42:01.3374793 +0000 UTC m=+1.115748077 container died c8854a5472c332c5d5df9e712e48b80828a3744f5eb4547b129d6f296aee45a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 09:42:01 compute-0 ceph-mon[75015]: pgmap v3673: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2d183e4ece24a2dcb2305735a898e6a6b693649a50af86ff4ccfe7263471af8-merged.mount: Deactivated successfully.
Nov 25 09:42:01 compute-0 nova_compute[253538]: 2025-11-25 09:42:01.660 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:01 compute-0 podman[454553]: 2025-11-25 09:42:01.673866502 +0000 UTC m=+1.452135279 container remove c8854a5472c332c5d5df9e712e48b80828a3744f5eb4547b129d6f296aee45a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:42:01 compute-0 systemd[1]: libpod-conmon-c8854a5472c332c5d5df9e712e48b80828a3744f5eb4547b129d6f296aee45a2.scope: Deactivated successfully.
Nov 25 09:42:01 compute-0 sudo[454448]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:42:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:42:01 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:42:01 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:42:01 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev eefc914c-3113-44f7-895f-e3e20a0298cd does not exist
Nov 25 09:42:01 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e3a46cb5-2690-484c-b608-f15398cb50e7 does not exist
Nov 25 09:42:01 compute-0 sudo[454616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:42:01 compute-0 sudo[454616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:42:01 compute-0 sudo[454616]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:01 compute-0 sudo[454641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:42:01 compute-0 sudo[454641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:42:01 compute-0 sudo[454641]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:02 compute-0 podman[454665]: 2025-11-25 09:42:02.027118805 +0000 UTC m=+0.096053663 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 25 09:42:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:42:02 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:42:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3674: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:03 compute-0 ceph-mon[75015]: pgmap v3674: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:42:04 compute-0 nova_compute[253538]: 2025-11-25 09:42:04.909 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3675: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:05 compute-0 sshd-session[454694]: Invalid user abhi from 62.60.193.188 port 43232
Nov 25 09:42:05 compute-0 sshd-session[454694]: Received disconnect from 62.60.193.188 port 43232:11: Bye Bye [preauth]
Nov 25 09:42:05 compute-0 sshd-session[454694]: Disconnected from invalid user abhi 62.60.193.188 port 43232 [preauth]
Nov 25 09:42:06 compute-0 ceph-mon[75015]: pgmap v3675: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:06 compute-0 nova_compute[253538]: 2025-11-25 09:42:06.711 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3676: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:08 compute-0 ceph-mon[75015]: pgmap v3676: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:08 compute-0 sshd-session[454696]: Invalid user vps from 47.252.72.9 port 55926
Nov 25 09:42:08 compute-0 sshd-session[454696]: Received disconnect from 47.252.72.9 port 55926:11: Bye Bye [preauth]
Nov 25 09:42:08 compute-0 sshd-session[454696]: Disconnected from invalid user vps 47.252.72.9 port 55926 [preauth]
Nov 25 09:42:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3677: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Nov 25 09:42:09 compute-0 nova_compute[253538]: 2025-11-25 09:42:09.911 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:10 compute-0 ceph-mon[75015]: pgmap v3677: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Nov 25 09:42:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3678: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 27 op/s
Nov 25 09:42:11 compute-0 ceph-mon[75015]: pgmap v3678: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 27 op/s
Nov 25 09:42:11 compute-0 nova_compute[253538]: 2025-11-25 09:42:11.713 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3679: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 25 09:42:14 compute-0 ceph-mon[75015]: pgmap v3679: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 25 09:42:14 compute-0 nova_compute[253538]: 2025-11-25 09:42:14.912 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3680: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:42:15 compute-0 nova_compute[253538]: 2025-11-25 09:42:15.571 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:42:16 compute-0 ceph-mon[75015]: pgmap v3680: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:42:16 compute-0 nova_compute[253538]: 2025-11-25 09:42:16.749 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3681: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:42:17 compute-0 rsyslogd[1007]: imjournal: 15462 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 25 09:42:17 compute-0 sshd-session[454698]: Invalid user sipv from 146.190.154.85 port 56834
Nov 25 09:42:17 compute-0 sshd-session[454698]: Received disconnect from 146.190.154.85 port 56834:11: Bye Bye [preauth]
Nov 25 09:42:17 compute-0 sshd-session[454698]: Disconnected from invalid user sipv 146.190.154.85 port 56834 [preauth]
Nov 25 09:42:18 compute-0 ceph-mon[75015]: pgmap v3681: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:42:18 compute-0 nova_compute[253538]: 2025-11-25 09:42:18.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:42:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3682: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:42:19 compute-0 nova_compute[253538]: 2025-11-25 09:42:19.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:42:19 compute-0 ceph-mon[75015]: pgmap v3682: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 09:42:19 compute-0 nova_compute[253538]: 2025-11-25 09:42:19.913 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3683: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Nov 25 09:42:21 compute-0 nova_compute[253538]: 2025-11-25 09:42:21.750 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:22 compute-0 ceph-mon[75015]: pgmap v3683: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Nov 25 09:42:22 compute-0 nova_compute[253538]: 2025-11-25 09:42:22.569 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:42:22 compute-0 nova_compute[253538]: 2025-11-25 09:42:22.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 09:42:22 compute-0 nova_compute[253538]: 2025-11-25 09:42:22.585 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 09:42:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3684: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Nov 25 09:42:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:42:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:42:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:42:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:42:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:42:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:42:24 compute-0 ceph-mon[75015]: pgmap v3684: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Nov 25 09:42:24 compute-0 podman[454701]: 2025-11-25 09:42:24.806593381 +0000 UTC m=+0.049024309 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 09:42:24 compute-0 podman[454700]: 2025-11-25 09:42:24.819207976 +0000 UTC m=+0.067063732 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:42:24 compute-0 nova_compute[253538]: 2025-11-25 09:42:24.858 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:42:24 compute-0 nova_compute[253538]: 2025-11-25 09:42:24.915 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3685: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 13 op/s
Nov 25 09:42:25 compute-0 nova_compute[253538]: 2025-11-25 09:42:25.566 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:42:26 compute-0 ceph-mon[75015]: pgmap v3685: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 13 op/s
Nov 25 09:42:26 compute-0 nova_compute[253538]: 2025-11-25 09:42:26.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:42:26 compute-0 nova_compute[253538]: 2025-11-25 09:42:26.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:42:26 compute-0 nova_compute[253538]: 2025-11-25 09:42:26.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:42:26 compute-0 nova_compute[253538]: 2025-11-25 09:42:26.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:42:26 compute-0 nova_compute[253538]: 2025-11-25 09:42:26.568 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:42:26 compute-0 nova_compute[253538]: 2025-11-25 09:42:26.754 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3686: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:42:27 compute-0 nova_compute[253538]: 2025-11-25 09:42:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:42:27 compute-0 nova_compute[253538]: 2025-11-25 09:42:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:42:27 compute-0 nova_compute[253538]: 2025-11-25 09:42:27.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:42:28 compute-0 ceph-mon[75015]: pgmap v3686: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:42:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:42:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4292678911' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:42:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:42:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4292678911' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:42:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3687: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:42:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/4292678911' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:42:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/4292678911' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:42:29 compute-0 nova_compute[253538]: 2025-11-25 09:42:29.917 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:30 compute-0 ceph-mon[75015]: pgmap v3687: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:42:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3688: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:42:31 compute-0 nova_compute[253538]: 2025-11-25 09:42:31.756 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:32 compute-0 ceph-mon[75015]: pgmap v3688: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:42:32 compute-0 podman[454737]: 2025-11-25 09:42:32.873628186 +0000 UTC m=+0.119590846 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 09:42:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3689: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:42:34 compute-0 ceph-mon[75015]: pgmap v3689: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:42:34 compute-0 nova_compute[253538]: 2025-11-25 09:42:34.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:42:34 compute-0 nova_compute[253538]: 2025-11-25 09:42:34.918 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3690: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:36 compute-0 ceph-mon[75015]: pgmap v3690: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:36 compute-0 nova_compute[253538]: 2025-11-25 09:42:36.757 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:36 compute-0 sshd-session[454763]: Invalid user oracle from 193.32.162.151 port 48770
Nov 25 09:42:37 compute-0 sshd-session[454763]: Connection closed by invalid user oracle 193.32.162.151 port 48770 [preauth]
Nov 25 09:42:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3691: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:38 compute-0 ceph-mon[75015]: pgmap v3691: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:39 compute-0 sshd-session[454765]: Invalid user vps from 165.227.175.225 port 38332
Nov 25 09:42:39 compute-0 sshd-session[454765]: Received disconnect from 165.227.175.225 port 38332:11: Bye Bye [preauth]
Nov 25 09:42:39 compute-0 sshd-session[454765]: Disconnected from invalid user vps 165.227.175.225 port 38332 [preauth]
Nov 25 09:42:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3692: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:39 compute-0 nova_compute[253538]: 2025-11-25 09:42:39.918 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:40 compute-0 ceph-mon[75015]: pgmap v3692: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:42:41.131 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:42:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:42:41.132 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:42:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:42:41.132 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:42:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3693: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:41 compute-0 nova_compute[253538]: 2025-11-25 09:42:41.760 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:42 compute-0 ceph-mon[75015]: pgmap v3693: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:42 compute-0 nova_compute[253538]: 2025-11-25 09:42:42.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:42:42 compute-0 nova_compute[253538]: 2025-11-25 09:42:42.581 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:42:42 compute-0 nova_compute[253538]: 2025-11-25 09:42:42.582 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:42:42 compute-0 nova_compute[253538]: 2025-11-25 09:42:42.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:42:42 compute-0 nova_compute[253538]: 2025-11-25 09:42:42.583 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:42:42 compute-0 nova_compute[253538]: 2025-11-25 09:42:42.584 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:42:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:42:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2938740477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:42:43 compute-0 nova_compute[253538]: 2025-11-25 09:42:43.044 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:42:43 compute-0 nova_compute[253538]: 2025-11-25 09:42:43.200 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:42:43 compute-0 nova_compute[253538]: 2025-11-25 09:42:43.202 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3572MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:42:43 compute-0 nova_compute[253538]: 2025-11-25 09:42:43.202 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:42:43 compute-0 nova_compute[253538]: 2025-11-25 09:42:43.202 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:42:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3694: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:43 compute-0 nova_compute[253538]: 2025-11-25 09:42:43.263 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:42:43 compute-0 nova_compute[253538]: 2025-11-25 09:42:43.263 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:42:43 compute-0 nova_compute[253538]: 2025-11-25 09:42:43.291 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:42:43 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2938740477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:42:43 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:42:43 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3627811981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:42:43 compute-0 nova_compute[253538]: 2025-11-25 09:42:43.732 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:42:43 compute-0 nova_compute[253538]: 2025-11-25 09:42:43.741 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:42:43 compute-0 nova_compute[253538]: 2025-11-25 09:42:43.756 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:42:43 compute-0 nova_compute[253538]: 2025-11-25 09:42:43.758 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:42:43 compute-0 nova_compute[253538]: 2025-11-25 09:42:43.759 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:42:44 compute-0 ceph-mon[75015]: pgmap v3694: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3627811981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:42:44 compute-0 nova_compute[253538]: 2025-11-25 09:42:44.920 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3695: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:46 compute-0 ceph-mon[75015]: pgmap v3695: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:46 compute-0 nova_compute[253538]: 2025-11-25 09:42:46.761 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3696: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:47 compute-0 ceph-mon[75015]: pgmap v3696: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3697: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:49 compute-0 nova_compute[253538]: 2025-11-25 09:42:49.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:50 compute-0 ceph-mon[75015]: pgmap v3697: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3698: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:51 compute-0 ceph-mon[75015]: pgmap v3698: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:51 compute-0 nova_compute[253538]: 2025-11-25 09:42:51.765 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3699: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:42:53
Nov 25 09:42:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:42:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:42:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'backups', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'images', 'volumes', 'default.rgw.meta', 'vms']
Nov 25 09:42:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:42:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:42:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:42:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:42:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:42:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:42:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:42:54 compute-0 ceph-mon[75015]: pgmap v3699: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:42:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:42:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:42:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:42:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:42:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:42:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:42:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:42:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:42:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:42:54 compute-0 nova_compute[253538]: 2025-11-25 09:42:54.921 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3700: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:55 compute-0 podman[454812]: 2025-11-25 09:42:55.813325698 +0000 UTC m=+0.061177201 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 09:42:55 compute-0 podman[454811]: 2025-11-25 09:42:55.847299956 +0000 UTC m=+0.091369846 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:42:56 compute-0 ceph-mon[75015]: pgmap v3700: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:56 compute-0 nova_compute[253538]: 2025-11-25 09:42:56.767 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3701: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:57 compute-0 sshd-session[454851]: Invalid user test from 182.253.79.194 port 50912
Nov 25 09:42:58 compute-0 sshd-session[454851]: Received disconnect from 182.253.79.194 port 50912:11: Bye Bye [preauth]
Nov 25 09:42:58 compute-0 sshd-session[454851]: Disconnected from invalid user test 182.253.79.194 port 50912 [preauth]
Nov 25 09:42:58 compute-0 ceph-mon[75015]: pgmap v3701: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3702: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:42:59 compute-0 nova_compute[253538]: 2025-11-25 09:42:59.923 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:42:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:00 compute-0 ceph-mon[75015]: pgmap v3702: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3703: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:01 compute-0 nova_compute[253538]: 2025-11-25 09:43:01.769 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:01 compute-0 sudo[454853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:43:01 compute-0 sudo[454853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:01 compute-0 sudo[454853]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:02 compute-0 sudo[454878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:43:02 compute-0 sudo[454878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:02 compute-0 sudo[454878]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:02 compute-0 sudo[454903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:43:02 compute-0 sudo[454903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:02 compute-0 sudo[454903]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:02 compute-0 sudo[454928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:43:02 compute-0 sudo[454928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:02 compute-0 ceph-mon[75015]: pgmap v3703: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:02 compute-0 sudo[454928]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:43:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:43:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:43:02 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:43:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:43:02 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:43:02 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d83876ba-50e8-48d6-9c53-5fba51135dfd does not exist
Nov 25 09:43:02 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 72dc65eb-ae16-499b-b237-87503c75c175 does not exist
Nov 25 09:43:02 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d98d0182-be53-4148-8280-581a11314092 does not exist
Nov 25 09:43:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:43:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:43:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:43:02 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:43:02 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:43:02 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:43:02 compute-0 sudo[454984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:43:02 compute-0 sudo[454984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:02 compute-0 sudo[454984]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:02 compute-0 sudo[455009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:43:02 compute-0 sudo[455009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:02 compute-0 sudo[455009]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:02 compute-0 sudo[455034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:43:02 compute-0 sudo[455034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:02 compute-0 sudo[455034]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:02 compute-0 sudo[455059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:43:02 compute-0 sudo[455059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:03 compute-0 podman[455083]: 2025-11-25 09:43:03.08053024 +0000 UTC m=+0.115307768 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 09:43:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3704: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:03 compute-0 podman[455151]: 2025-11-25 09:43:03.304149294 +0000 UTC m=+0.025558088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:43:03 compute-0 podman[455151]: 2025-11-25 09:43:03.400949636 +0000 UTC m=+0.122358410 container create b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feistel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:43:03 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:43:03 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:43:03 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:43:03 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:43:03 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:43:03 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:43:03 compute-0 systemd[1]: Started libpod-conmon-b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10.scope.
Nov 25 09:43:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:43:03 compute-0 podman[455151]: 2025-11-25 09:43:03.488731733 +0000 UTC m=+0.210140527 container init b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feistel, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 25 09:43:03 compute-0 podman[455151]: 2025-11-25 09:43:03.497191084 +0000 UTC m=+0.218599858 container start b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feistel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:43:03 compute-0 podman[455151]: 2025-11-25 09:43:03.500514944 +0000 UTC m=+0.221923748 container attach b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:43:03 compute-0 clever_feistel[455168]: 167 167
Nov 25 09:43:03 compute-0 systemd[1]: libpod-b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10.scope: Deactivated successfully.
Nov 25 09:43:03 compute-0 podman[455151]: 2025-11-25 09:43:03.504544864 +0000 UTC m=+0.225953638 container died b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feistel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:43:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cd7d9c256f8572e957987a781e10b867c3a568cb991a7896718757efc26bc75-merged.mount: Deactivated successfully.
Nov 25 09:43:03 compute-0 podman[455151]: 2025-11-25 09:43:03.552280627 +0000 UTC m=+0.273689401 container remove b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 09:43:03 compute-0 systemd[1]: libpod-conmon-b77e28e5c3175e0660b9902619e0648b1605ff27add8ee668ded5887b0093a10.scope: Deactivated successfully.
Nov 25 09:43:03 compute-0 podman[455191]: 2025-11-25 09:43:03.715414361 +0000 UTC m=+0.041133914 container create 5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:43:03 compute-0 systemd[1]: Started libpod-conmon-5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191.scope.
Nov 25 09:43:03 compute-0 podman[455191]: 2025-11-25 09:43:03.69632733 +0000 UTC m=+0.022046903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:43:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0306ace65b0abc4f1de5fe401e56b7c0df391a8e7e995f9accbfb7ba2197085a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0306ace65b0abc4f1de5fe401e56b7c0df391a8e7e995f9accbfb7ba2197085a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0306ace65b0abc4f1de5fe401e56b7c0df391a8e7e995f9accbfb7ba2197085a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0306ace65b0abc4f1de5fe401e56b7c0df391a8e7e995f9accbfb7ba2197085a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0306ace65b0abc4f1de5fe401e56b7c0df391a8e7e995f9accbfb7ba2197085a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:03 compute-0 podman[455191]: 2025-11-25 09:43:03.825994579 +0000 UTC m=+0.151714132 container init 5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:43:03 compute-0 podman[455191]: 2025-11-25 09:43:03.83522124 +0000 UTC m=+0.160940783 container start 5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 25 09:43:03 compute-0 podman[455191]: 2025-11-25 09:43:03.838818409 +0000 UTC m=+0.164537962 container attach 5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 09:43:04 compute-0 ceph-mon[75015]: pgmap v3704: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:43:04 compute-0 nervous_blackwell[455208]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:43:04 compute-0 nervous_blackwell[455208]: --> relative data size: 1.0
Nov 25 09:43:04 compute-0 nervous_blackwell[455208]: --> All data devices are unavailable
Nov 25 09:43:04 compute-0 nova_compute[253538]: 2025-11-25 09:43:04.924 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:04 compute-0 systemd[1]: libpod-5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191.scope: Deactivated successfully.
Nov 25 09:43:04 compute-0 podman[455191]: 2025-11-25 09:43:04.938451725 +0000 UTC m=+1.264171308 container died 5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:43:04 compute-0 systemd[1]: libpod-5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191.scope: Consumed 1.041s CPU time.
Nov 25 09:43:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0306ace65b0abc4f1de5fe401e56b7c0df391a8e7e995f9accbfb7ba2197085a-merged.mount: Deactivated successfully.
Nov 25 09:43:04 compute-0 podman[455191]: 2025-11-25 09:43:04.996277654 +0000 UTC m=+1.321997207 container remove 5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:43:05 compute-0 systemd[1]: libpod-conmon-5d2ae597c09eaf86597f79b9bbc7d4ac028e0f561c2f671f6e71d884fcdef191.scope: Deactivated successfully.
Nov 25 09:43:05 compute-0 sudo[455059]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:05 compute-0 sudo[455251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:43:05 compute-0 sudo[455251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:05 compute-0 sudo[455251]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:05 compute-0 sudo[455276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:43:05 compute-0 sudo[455276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:05 compute-0 sudo[455276]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:05 compute-0 sudo[455301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:43:05 compute-0 sudo[455301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3705: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:05 compute-0 sudo[455301]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:05 compute-0 sudo[455326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:43:05 compute-0 sudo[455326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:05 compute-0 ceph-mon[75015]: pgmap v3705: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:05 compute-0 podman[455392]: 2025-11-25 09:43:05.662821559 +0000 UTC m=+0.046098900 container create 505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:43:05 compute-0 systemd[1]: Started libpod-conmon-505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53.scope.
Nov 25 09:43:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:43:05 compute-0 podman[455392]: 2025-11-25 09:43:05.645487696 +0000 UTC m=+0.028765057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:43:05 compute-0 podman[455392]: 2025-11-25 09:43:05.751347264 +0000 UTC m=+0.134624625 container init 505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:43:05 compute-0 podman[455392]: 2025-11-25 09:43:05.759860107 +0000 UTC m=+0.143137438 container start 505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 09:43:05 compute-0 podman[455392]: 2025-11-25 09:43:05.765259824 +0000 UTC m=+0.148537185 container attach 505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:43:05 compute-0 elastic_ardinghelli[455408]: 167 167
Nov 25 09:43:05 compute-0 systemd[1]: libpod-505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53.scope: Deactivated successfully.
Nov 25 09:43:05 compute-0 conmon[455408]: conmon 505d5c0ad26598d65c36 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53.scope/container/memory.events
Nov 25 09:43:05 compute-0 podman[455392]: 2025-11-25 09:43:05.767428504 +0000 UTC m=+0.150705855 container died 505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:43:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a951b02dbd4df22ade2c4cf1ae62c33649ca2494763ebf86863c08c9ef815cd-merged.mount: Deactivated successfully.
Nov 25 09:43:05 compute-0 podman[455392]: 2025-11-25 09:43:05.805396051 +0000 UTC m=+0.188673392 container remove 505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:43:05 compute-0 systemd[1]: libpod-conmon-505d5c0ad26598d65c36023d80c63ffd35095263ab7bc13d4113b2f9e0318a53.scope: Deactivated successfully.
Nov 25 09:43:05 compute-0 podman[455430]: 2025-11-25 09:43:05.97720669 +0000 UTC m=+0.047865447 container create 97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_booth, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Nov 25 09:43:06 compute-0 systemd[1]: Started libpod-conmon-97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073.scope.
Nov 25 09:43:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca933924115bb774f197a34e29df6d95cdac0e57694a97424389744362dc8908/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:06 compute-0 podman[455430]: 2025-11-25 09:43:05.960238507 +0000 UTC m=+0.030897284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca933924115bb774f197a34e29df6d95cdac0e57694a97424389744362dc8908/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca933924115bb774f197a34e29df6d95cdac0e57694a97424389744362dc8908/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca933924115bb774f197a34e29df6d95cdac0e57694a97424389744362dc8908/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:06 compute-0 podman[455430]: 2025-11-25 09:43:06.068132412 +0000 UTC m=+0.138791179 container init 97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:43:06 compute-0 podman[455430]: 2025-11-25 09:43:06.076376987 +0000 UTC m=+0.147035754 container start 97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 09:43:06 compute-0 podman[455430]: 2025-11-25 09:43:06.081706822 +0000 UTC m=+0.152365599 container attach 97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:43:06 compute-0 nova_compute[253538]: 2025-11-25 09:43:06.770 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:06 compute-0 lucid_booth[455447]: {
Nov 25 09:43:06 compute-0 lucid_booth[455447]:     "0": [
Nov 25 09:43:06 compute-0 lucid_booth[455447]:         {
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "devices": [
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "/dev/loop3"
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             ],
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "lv_name": "ceph_lv0",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "lv_size": "21470642176",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "name": "ceph_lv0",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "tags": {
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.cluster_name": "ceph",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.crush_device_class": "",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.encrypted": "0",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.osd_id": "0",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.type": "block",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.vdo": "0"
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             },
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "type": "block",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "vg_name": "ceph_vg0"
Nov 25 09:43:06 compute-0 lucid_booth[455447]:         }
Nov 25 09:43:06 compute-0 lucid_booth[455447]:     ],
Nov 25 09:43:06 compute-0 lucid_booth[455447]:     "1": [
Nov 25 09:43:06 compute-0 lucid_booth[455447]:         {
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "devices": [
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "/dev/loop4"
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             ],
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "lv_name": "ceph_lv1",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "lv_size": "21470642176",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "name": "ceph_lv1",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "tags": {
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.cluster_name": "ceph",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.crush_device_class": "",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.encrypted": "0",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.osd_id": "1",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.type": "block",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.vdo": "0"
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             },
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "type": "block",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "vg_name": "ceph_vg1"
Nov 25 09:43:06 compute-0 lucid_booth[455447]:         }
Nov 25 09:43:06 compute-0 lucid_booth[455447]:     ],
Nov 25 09:43:06 compute-0 lucid_booth[455447]:     "2": [
Nov 25 09:43:06 compute-0 lucid_booth[455447]:         {
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "devices": [
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "/dev/loop5"
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             ],
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "lv_name": "ceph_lv2",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "lv_size": "21470642176",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "name": "ceph_lv2",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "tags": {
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.cluster_name": "ceph",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.crush_device_class": "",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.encrypted": "0",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.osd_id": "2",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.type": "block",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:                 "ceph.vdo": "0"
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             },
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "type": "block",
Nov 25 09:43:06 compute-0 lucid_booth[455447]:             "vg_name": "ceph_vg2"
Nov 25 09:43:06 compute-0 lucid_booth[455447]:         }
Nov 25 09:43:06 compute-0 lucid_booth[455447]:     ]
Nov 25 09:43:06 compute-0 lucid_booth[455447]: }
Nov 25 09:43:06 compute-0 systemd[1]: libpod-97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073.scope: Deactivated successfully.
Nov 25 09:43:06 compute-0 podman[455430]: 2025-11-25 09:43:06.866408832 +0000 UTC m=+0.937067599 container died 97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_booth, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 09:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca933924115bb774f197a34e29df6d95cdac0e57694a97424389744362dc8908-merged.mount: Deactivated successfully.
Nov 25 09:43:06 compute-0 podman[455430]: 2025-11-25 09:43:06.92601125 +0000 UTC m=+0.996669997 container remove 97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_booth, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:43:06 compute-0 systemd[1]: libpod-conmon-97ff5cb4c6f352e31286c11e57ed625eed2b7a3d5ff1ef2823f6c3401fc4a073.scope: Deactivated successfully.
Nov 25 09:43:06 compute-0 sudo[455326]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:07 compute-0 sudo[455470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:43:07 compute-0 sudo[455470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:07 compute-0 sudo[455470]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:07 compute-0 sudo[455495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:43:07 compute-0 sudo[455495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:07 compute-0 sudo[455495]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:07 compute-0 sudo[455522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:43:07 compute-0 sudo[455522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:07 compute-0 sudo[455522]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3706: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:07 compute-0 sudo[455547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:43:07 compute-0 sudo[455547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:07 compute-0 podman[455611]: 2025-11-25 09:43:07.623102148 +0000 UTC m=+0.044362992 container create 88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_maxwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:43:07 compute-0 systemd[1]: Started libpod-conmon-88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df.scope.
Nov 25 09:43:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:43:07 compute-0 podman[455611]: 2025-11-25 09:43:07.605346963 +0000 UTC m=+0.026607817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:43:07 compute-0 podman[455611]: 2025-11-25 09:43:07.707901033 +0000 UTC m=+0.129161917 container init 88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 09:43:07 compute-0 podman[455611]: 2025-11-25 09:43:07.717771212 +0000 UTC m=+0.139032056 container start 88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:43:07 compute-0 podman[455611]: 2025-11-25 09:43:07.721857254 +0000 UTC m=+0.143118118 container attach 88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_maxwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:43:07 compute-0 goofy_maxwell[455627]: 167 167
Nov 25 09:43:07 compute-0 systemd[1]: libpod-88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df.scope: Deactivated successfully.
Nov 25 09:43:07 compute-0 podman[455611]: 2025-11-25 09:43:07.726674285 +0000 UTC m=+0.147935119 container died 88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 09:43:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-aad4434f5cc4ea8766b4adf84f64d25d7bff2681795ad6851349d9ef34f0b7f0-merged.mount: Deactivated successfully.
Nov 25 09:43:07 compute-0 podman[455611]: 2025-11-25 09:43:07.765371811 +0000 UTC m=+0.186632645 container remove 88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:43:07 compute-0 systemd[1]: libpod-conmon-88790e76343b746e057c410a9b95879624988acaf3bb2729bc823e4666e107df.scope: Deactivated successfully.
Nov 25 09:43:07 compute-0 podman[455650]: 2025-11-25 09:43:07.975405634 +0000 UTC m=+0.072344066 container create 383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 09:43:08 compute-0 systemd[1]: Started libpod-conmon-383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430.scope.
Nov 25 09:43:08 compute-0 podman[455650]: 2025-11-25 09:43:07.955394558 +0000 UTC m=+0.052333000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:43:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31e68504adb1ec693e0d14ac4103dacdcbeb4bfbeb10b0b6199cda73412ac0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31e68504adb1ec693e0d14ac4103dacdcbeb4bfbeb10b0b6199cda73412ac0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31e68504adb1ec693e0d14ac4103dacdcbeb4bfbeb10b0b6199cda73412ac0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31e68504adb1ec693e0d14ac4103dacdcbeb4bfbeb10b0b6199cda73412ac0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:08 compute-0 podman[455650]: 2025-11-25 09:43:08.081661565 +0000 UTC m=+0.178600077 container init 383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:43:08 compute-0 podman[455650]: 2025-11-25 09:43:08.089650303 +0000 UTC m=+0.186588765 container start 383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:43:08 compute-0 podman[455650]: 2025-11-25 09:43:08.093659562 +0000 UTC m=+0.190598024 container attach 383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 09:43:08 compute-0 ceph-mon[75015]: pgmap v3706: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:09 compute-0 sshd-session[455501]: Connection closed by authenticating user root 171.244.51.45 port 41280 [preauth]
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]: {
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:         "osd_id": 1,
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:         "type": "bluestore"
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:     },
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:         "osd_id": 2,
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:         "type": "bluestore"
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:     },
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:         "osd_id": 0,
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:         "type": "bluestore"
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]:     }
Nov 25 09:43:09 compute-0 trusting_proskuriakova[455667]: }
Nov 25 09:43:09 compute-0 systemd[1]: libpod-383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430.scope: Deactivated successfully.
Nov 25 09:43:09 compute-0 systemd[1]: libpod-383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430.scope: Consumed 1.094s CPU time.
Nov 25 09:43:09 compute-0 conmon[455667]: conmon 383dabd569bf9e1506bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430.scope/container/memory.events
Nov 25 09:43:09 compute-0 podman[455650]: 2025-11-25 09:43:09.17646981 +0000 UTC m=+1.273408232 container died 383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 09:43:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-d31e68504adb1ec693e0d14ac4103dacdcbeb4bfbeb10b0b6199cda73412ac0f-merged.mount: Deactivated successfully.
Nov 25 09:43:09 compute-0 podman[455650]: 2025-11-25 09:43:09.237251929 +0000 UTC m=+1.334190351 container remove 383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:43:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3707: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:09 compute-0 systemd[1]: libpod-conmon-383dabd569bf9e1506bfa31670fd026385784796e2ac0277ee25d00951013430.scope: Deactivated successfully.
Nov 25 09:43:09 compute-0 sudo[455547]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:43:09 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:43:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:43:09 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:43:09 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev c622ed19-5e96-4bc9-b606-d6f872d8ca45 does not exist
Nov 25 09:43:09 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e4b18c4a-eb4e-477b-a0e3-0f10baec0f8d does not exist
Nov 25 09:43:09 compute-0 sudo[455715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:43:09 compute-0 sudo[455715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:09 compute-0 sudo[455715]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:09 compute-0 sudo[455740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:43:09 compute-0 sudo[455740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:09 compute-0 sudo[455740]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:09 compute-0 nova_compute[253538]: 2025-11-25 09:43:09.927 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:10 compute-0 ceph-mon[75015]: pgmap v3707: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:10 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:43:10 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:43:11 compute-0 sshd-session[455765]: Received disconnect from 152.70.84.178 port 45946:11: Bye Bye [preauth]
Nov 25 09:43:11 compute-0 sshd-session[455765]: Disconnected from authenticating user root 152.70.84.178 port 45946 [preauth]
Nov 25 09:43:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3708: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:11 compute-0 nova_compute[253538]: 2025-11-25 09:43:11.772 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:12 compute-0 ceph-mon[75015]: pgmap v3708: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3709: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:14 compute-0 ceph-mon[75015]: pgmap v3709: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:14 compute-0 nova_compute[253538]: 2025-11-25 09:43:14.930 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3710: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:16 compute-0 ceph-mon[75015]: pgmap v3710: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:16 compute-0 nova_compute[253538]: 2025-11-25 09:43:16.760 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:43:16 compute-0 nova_compute[253538]: 2025-11-25 09:43:16.773 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3711: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:18 compute-0 ceph-mon[75015]: pgmap v3711: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:18 compute-0 nova_compute[253538]: 2025-11-25 09:43:18.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:43:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3712: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:19 compute-0 sshd-session[455769]: Invalid user dev from 47.252.72.9 port 33958
Nov 25 09:43:19 compute-0 sshd-session[455769]: Received disconnect from 47.252.72.9 port 33958:11: Bye Bye [preauth]
Nov 25 09:43:19 compute-0 sshd-session[455769]: Disconnected from invalid user dev 47.252.72.9 port 33958 [preauth]
Nov 25 09:43:19 compute-0 nova_compute[253538]: 2025-11-25 09:43:19.931 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:20 compute-0 ceph-mon[75015]: pgmap v3712: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:20 compute-0 sshd-session[455767]: Received disconnect from 62.60.193.188 port 51474:11: Bye Bye [preauth]
Nov 25 09:43:20 compute-0 sshd-session[455767]: Disconnected from authenticating user root 62.60.193.188 port 51474 [preauth]
Nov 25 09:43:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3713: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:21 compute-0 nova_compute[253538]: 2025-11-25 09:43:21.775 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:21 compute-0 sshd-session[455771]: Invalid user marvin from 146.190.154.85 port 47948
Nov 25 09:43:22 compute-0 sshd-session[455771]: Received disconnect from 146.190.154.85 port 47948:11: Bye Bye [preauth]
Nov 25 09:43:22 compute-0 sshd-session[455771]: Disconnected from invalid user marvin 146.190.154.85 port 47948 [preauth]
Nov 25 09:43:22 compute-0 ceph-mon[75015]: pgmap v3713: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3714: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:23 compute-0 ceph-mon[75015]: pgmap v3714: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:43:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:43:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:43:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:43:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:43:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:43:24 compute-0 nova_compute[253538]: 2025-11-25 09:43:24.932 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3715: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:26 compute-0 ceph-mon[75015]: pgmap v3715: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:26 compute-0 nova_compute[253538]: 2025-11-25 09:43:26.778 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:26 compute-0 podman[455775]: 2025-11-25 09:43:26.811652264 +0000 UTC m=+0.056787891 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:43:26 compute-0 podman[455774]: 2025-11-25 09:43:26.820181227 +0000 UTC m=+0.064809310 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 09:43:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3716: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:27 compute-0 nova_compute[253538]: 2025-11-25 09:43:27.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:43:27 compute-0 nova_compute[253538]: 2025-11-25 09:43:27.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:43:27 compute-0 nova_compute[253538]: 2025-11-25 09:43:27.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:43:27 compute-0 nova_compute[253538]: 2025-11-25 09:43:27.574 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:43:27 compute-0 nova_compute[253538]: 2025-11-25 09:43:27.574 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:43:27 compute-0 nova_compute[253538]: 2025-11-25 09:43:27.575 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:43:28 compute-0 ceph-mon[75015]: pgmap v3716: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:28 compute-0 nova_compute[253538]: 2025-11-25 09:43:28.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:43:28 compute-0 nova_compute[253538]: 2025-11-25 09:43:28.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:43:28 compute-0 nova_compute[253538]: 2025-11-25 09:43:28.555 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:43:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:43:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1316391236' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:43:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:43:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1316391236' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:43:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3717: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1316391236' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:43:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1316391236' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:43:29 compute-0 nova_compute[253538]: 2025-11-25 09:43:29.934 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:30 compute-0 ceph-mon[75015]: pgmap v3717: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3718: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:31 compute-0 nova_compute[253538]: 2025-11-25 09:43:31.780 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:32 compute-0 ceph-mon[75015]: pgmap v3718: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3719: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:33 compute-0 podman[455812]: 2025-11-25 09:43:33.825031078 +0000 UTC m=+0.079792640 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:43:34 compute-0 ceph-mon[75015]: pgmap v3719: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:34 compute-0 nova_compute[253538]: 2025-11-25 09:43:34.936 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3720: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:35 compute-0 nova_compute[253538]: 2025-11-25 09:43:35.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:43:36 compute-0 ceph-mon[75015]: pgmap v3720: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:36 compute-0 nova_compute[253538]: 2025-11-25 09:43:36.782 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3721: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:37 compute-0 ceph-mon[75015]: pgmap v3721: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3722: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:39 compute-0 sshd-session[455811]: error: kex_exchange_identification: read: Connection timed out
Nov 25 09:43:39 compute-0 sshd-session[455811]: banner exchange: Connection from 14.103.111.13 port 54108: Connection timed out
Nov 25 09:43:39 compute-0 sshd-session[455837]: Invalid user vagrant from 165.227.175.225 port 46892
Nov 25 09:43:39 compute-0 sshd-session[455837]: Received disconnect from 165.227.175.225 port 46892:11: Bye Bye [preauth]
Nov 25 09:43:39 compute-0 sshd-session[455837]: Disconnected from invalid user vagrant 165.227.175.225 port 46892 [preauth]
Nov 25 09:43:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:39 compute-0 nova_compute[253538]: 2025-11-25 09:43:39.940 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:40 compute-0 ceph-mon[75015]: pgmap v3722: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:40 compute-0 nova_compute[253538]: 2025-11-25 09:43:40.549 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:43:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:43:41.132 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:43:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:43:41.132 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:43:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:43:41.133 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:43:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3723: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:41 compute-0 nova_compute[253538]: 2025-11-25 09:43:41.783 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:42 compute-0 ceph-mon[75015]: pgmap v3723: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3724: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:43 compute-0 nova_compute[253538]: 2025-11-25 09:43:43.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:43:43 compute-0 nova_compute[253538]: 2025-11-25 09:43:43.589 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:43:43 compute-0 nova_compute[253538]: 2025-11-25 09:43:43.590 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:43:43 compute-0 nova_compute[253538]: 2025-11-25 09:43:43.590 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:43:43 compute-0 nova_compute[253538]: 2025-11-25 09:43:43.591 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:43:43 compute-0 nova_compute[253538]: 2025-11-25 09:43:43.591 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:43:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:43:44 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365809163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:43:44 compute-0 nova_compute[253538]: 2025-11-25 09:43:44.057 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:43:44 compute-0 nova_compute[253538]: 2025-11-25 09:43:44.223 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:43:44 compute-0 nova_compute[253538]: 2025-11-25 09:43:44.224 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3573MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:43:44 compute-0 nova_compute[253538]: 2025-11-25 09:43:44.225 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:43:44 compute-0 nova_compute[253538]: 2025-11-25 09:43:44.225 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:43:44 compute-0 nova_compute[253538]: 2025-11-25 09:43:44.435 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:43:44 compute-0 nova_compute[253538]: 2025-11-25 09:43:44.435 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:43:44 compute-0 ceph-mon[75015]: pgmap v3724: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:44 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3365809163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:43:44 compute-0 nova_compute[253538]: 2025-11-25 09:43:44.520 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing inventories for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 09:43:44 compute-0 nova_compute[253538]: 2025-11-25 09:43:44.605 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating ProviderTree inventory for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 09:43:44 compute-0 nova_compute[253538]: 2025-11-25 09:43:44.605 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Updating inventory in ProviderTree for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:43:44 compute-0 nova_compute[253538]: 2025-11-25 09:43:44.622 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing aggregate associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 09:43:44 compute-0 nova_compute[253538]: 2025-11-25 09:43:44.643 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Refreshing trait associations for resource provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4, traits: HW_CPU_X86_ABM,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 09:43:44 compute-0 nova_compute[253538]: 2025-11-25 09:43:44.667 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:43:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:44 compute-0 nova_compute[253538]: 2025-11-25 09:43:44.942 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:43:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/370423732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:43:45 compute-0 nova_compute[253538]: 2025-11-25 09:43:45.147 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:43:45 compute-0 nova_compute[253538]: 2025-11-25 09:43:45.155 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:43:45 compute-0 nova_compute[253538]: 2025-11-25 09:43:45.169 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:43:45 compute-0 nova_compute[253538]: 2025-11-25 09:43:45.171 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:43:45 compute-0 nova_compute[253538]: 2025-11-25 09:43:45.171 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:43:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3725: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:45 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/370423732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:43:45 compute-0 ceph-mon[75015]: pgmap v3725: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:46 compute-0 nova_compute[253538]: 2025-11-25 09:43:46.849 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3726: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:48 compute-0 ceph-mon[75015]: pgmap v3726: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3727: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:49 compute-0 nova_compute[253538]: 2025-11-25 09:43:49.944 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:50 compute-0 ceph-mon[75015]: pgmap v3727: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3728: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:51 compute-0 nova_compute[253538]: 2025-11-25 09:43:51.851 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:52 compute-0 ceph-mon[75015]: pgmap v3728: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3729: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:43:53
Nov 25 09:43:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:43:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:43:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'images']
Nov 25 09:43:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:43:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:43:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:43:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:43:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:43:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:43:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:43:53 compute-0 ceph-mon[75015]: pgmap v3729: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:43:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:43:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:43:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:43:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:43:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:43:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:43:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:43:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:43:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:43:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:54 compute-0 nova_compute[253538]: 2025-11-25 09:43:54.948 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3730: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:55 compute-0 sshd-session[455883]: Connection closed by 45.78.222.2 port 52942 [preauth]
Nov 25 09:43:56 compute-0 ceph-mon[75015]: pgmap v3730: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:56 compute-0 nova_compute[253538]: 2025-11-25 09:43:56.853 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:43:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3731: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:57 compute-0 podman[455886]: 2025-11-25 09:43:57.795075953 +0000 UTC m=+0.044885906 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 09:43:57 compute-0 podman[455885]: 2025-11-25 09:43:57.79790719 +0000 UTC m=+0.050230282 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 09:43:58 compute-0 ceph-mon[75015]: pgmap v3731: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3732: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:43:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:59 compute-0 nova_compute[253538]: 2025-11-25 09:43:59.951 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:00 compute-0 ceph-mon[75015]: pgmap v3732: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3733: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:01 compute-0 nova_compute[253538]: 2025-11-25 09:44:01.854 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:02 compute-0 ceph-mon[75015]: pgmap v3733: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3734: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:04 compute-0 ceph-mon[75015]: pgmap v3734: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:04 compute-0 podman[455924]: 2025-11-25 09:44:04.834498047 +0000 UTC m=+0.085273388 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:44:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:04 compute-0 nova_compute[253538]: 2025-11-25 09:44:04.953 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3735: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:06 compute-0 ceph-mon[75015]: pgmap v3735: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:06 compute-0 nova_compute[253538]: 2025-11-25 09:44:06.856 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3736: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:08 compute-0 ceph-mon[75015]: pgmap v3736: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3737: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:09 compute-0 sudo[455950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:09 compute-0 sudo[455950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:09 compute-0 sudo[455950]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:09 compute-0 sudo[455975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:44:09 compute-0 sudo[455975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:09 compute-0 sudo[455975]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:09 compute-0 sudo[456000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:09 compute-0 sudo[456000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:09 compute-0 sudo[456000]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:09 compute-0 sudo[456025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 09:44:09 compute-0 sudo[456025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:09 compute-0 sudo[456025]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:44:09 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:44:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:44:09 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:44:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:09 compute-0 nova_compute[253538]: 2025-11-25 09:44:09.955 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:09 compute-0 sudo[456069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:09 compute-0 sudo[456069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:09 compute-0 sudo[456069]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:10 compute-0 sudo[456094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:44:10 compute-0 sudo[456094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:10 compute-0 sudo[456094]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:10 compute-0 sudo[456119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:10 compute-0 sudo[456119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:10 compute-0 sudo[456119]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:10 compute-0 sudo[456144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:44:10 compute-0 sudo[456144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:10 compute-0 ceph-mon[75015]: pgmap v3737: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:10 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:44:10 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:44:10 compute-0 sudo[456144]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:44:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:44:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:44:10 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:44:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:44:10 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:44:10 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 3f9f7504-5acc-458c-aac7-ebc05ec90caf does not exist
Nov 25 09:44:10 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 1a23456b-ece7-4e18-b985-ffbb9a08629d does not exist
Nov 25 09:44:10 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 474f2439-6640-499b-b57e-824764c729f8 does not exist
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.740628) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063850740823, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1557, "num_deletes": 251, "total_data_size": 2490969, "memory_usage": 2526048, "flush_reason": "Manual Compaction"}
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Nov 25 09:44:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:44:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:44:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:44:10 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:44:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:44:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063850756596, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 2444879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75917, "largest_seqno": 77473, "table_properties": {"data_size": 2437600, "index_size": 4285, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14780, "raw_average_key_size": 19, "raw_value_size": 2423178, "raw_average_value_size": 3261, "num_data_blocks": 192, "num_entries": 743, "num_filter_entries": 743, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063686, "oldest_key_time": 1764063686, "file_creation_time": 1764063850, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 15865 microseconds, and 5630 cpu microseconds.
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.756641) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 2444879 bytes OK
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.756663) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.758507) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.758522) EVENT_LOG_v1 {"time_micros": 1764063850758517, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.758540) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2484211, prev total WAL file size 2484211, number of live WAL files 2.
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.759301) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(2387KB)], [182(10166KB)]
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063850759343, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 12855663, "oldest_snapshot_seqno": -1}
Nov 25 09:44:10 compute-0 sudo[456200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:10 compute-0 sudo[456200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:10 compute-0 sudo[456200]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 9371 keys, 11124444 bytes, temperature: kUnknown
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063850835097, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 11124444, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11065437, "index_size": 34446, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23493, "raw_key_size": 247267, "raw_average_key_size": 26, "raw_value_size": 10901687, "raw_average_value_size": 1163, "num_data_blocks": 1328, "num_entries": 9371, "num_filter_entries": 9371, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764056881, "oldest_key_time": 0, "file_creation_time": 1764063850, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc5dc6ae-0fb8-474e-b34d-e37705a41add", "db_session_id": "WCSCMBNWDQZ3QKJA3OWW", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.835323) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 11124444 bytes
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.837248) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.6 rd, 146.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.9 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(9.8) write-amplify(4.6) OK, records in: 9885, records dropped: 514 output_compression: NoCompression
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.837266) EVENT_LOG_v1 {"time_micros": 1764063850837258, "job": 114, "event": "compaction_finished", "compaction_time_micros": 75820, "compaction_time_cpu_micros": 32867, "output_level": 6, "num_output_files": 1, "total_output_size": 11124444, "num_input_records": 9885, "num_output_records": 9371, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063850837747, "job": 114, "event": "table_file_deletion", "file_number": 184}
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063850839606, "job": 114, "event": "table_file_deletion", "file_number": 182}
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.759215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.839652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.839658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.839660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.839662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:44:10 compute-0 ceph-mon[75015]: rocksdb: (Original Log Time 2025/11/25-09:44:10.839663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:44:10 compute-0 sudo[456225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:44:10 compute-0 sudo[456225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:10 compute-0 sudo[456225]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:10 compute-0 sudo[456250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:10 compute-0 sudo[456250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:10 compute-0 sudo[456250]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:10 compute-0 sudo[456275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:44:10 compute-0 sudo[456275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3738: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:11 compute-0 podman[456340]: 2025-11-25 09:44:11.305621108 +0000 UTC m=+0.021502778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:44:11 compute-0 podman[456340]: 2025-11-25 09:44:11.458249175 +0000 UTC m=+0.174130865 container create 8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 09:44:11 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:44:11 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:44:11 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:44:11 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:44:11 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:44:11 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:44:11 compute-0 systemd[1]: Started libpod-conmon-8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f.scope.
Nov 25 09:44:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:44:11 compute-0 podman[456340]: 2025-11-25 09:44:11.656617249 +0000 UTC m=+0.372498919 container init 8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_greider, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:44:11 compute-0 podman[456340]: 2025-11-25 09:44:11.662896921 +0000 UTC m=+0.378778571 container start 8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_greider, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:44:11 compute-0 pedantic_greider[456356]: 167 167
Nov 25 09:44:11 compute-0 systemd[1]: libpod-8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f.scope: Deactivated successfully.
Nov 25 09:44:11 compute-0 podman[456340]: 2025-11-25 09:44:11.676972916 +0000 UTC m=+0.392854686 container attach 8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:44:11 compute-0 podman[456340]: 2025-11-25 09:44:11.678200199 +0000 UTC m=+0.394081899 container died 8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_greider, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b23d3d7beeeb73c0e00b40386473ac363e28f2152e2b87d32fed3af17b24324f-merged.mount: Deactivated successfully.
Nov 25 09:44:11 compute-0 podman[456340]: 2025-11-25 09:44:11.789584109 +0000 UTC m=+0.505465759 container remove 8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_greider, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:44:11 compute-0 systemd[1]: libpod-conmon-8fdf802db3e6f156e2943f98411b768c65c4247fd036015e2d04b633a3cc495f.scope: Deactivated successfully.
Nov 25 09:44:11 compute-0 nova_compute[253538]: 2025-11-25 09:44:11.858 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:11 compute-0 podman[456382]: 2025-11-25 09:44:11.94932402 +0000 UTC m=+0.039951092 container create 62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:44:11 compute-0 systemd[1]: Started libpod-conmon-62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c.scope.
Nov 25 09:44:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836c444c85bd64788d6be4a1d84a3ebc86e50fdae805bd24be3c21dc57a528ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836c444c85bd64788d6be4a1d84a3ebc86e50fdae805bd24be3c21dc57a528ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836c444c85bd64788d6be4a1d84a3ebc86e50fdae805bd24be3c21dc57a528ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836c444c85bd64788d6be4a1d84a3ebc86e50fdae805bd24be3c21dc57a528ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836c444c85bd64788d6be4a1d84a3ebc86e50fdae805bd24be3c21dc57a528ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:12 compute-0 podman[456382]: 2025-11-25 09:44:11.933664122 +0000 UTC m=+0.024291144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:44:12 compute-0 podman[456382]: 2025-11-25 09:44:12.041049784 +0000 UTC m=+0.131676806 container init 62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:44:12 compute-0 podman[456382]: 2025-11-25 09:44:12.047444848 +0000 UTC m=+0.138071850 container start 62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 09:44:12 compute-0 podman[456382]: 2025-11-25 09:44:12.050369988 +0000 UTC m=+0.140997020 container attach 62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:44:12 compute-0 ceph-mon[75015]: pgmap v3738: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:13 compute-0 nervous_newton[456398]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:44:13 compute-0 nervous_newton[456398]: --> relative data size: 1.0
Nov 25 09:44:13 compute-0 nervous_newton[456398]: --> All data devices are unavailable
Nov 25 09:44:13 compute-0 systemd[1]: libpod-62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c.scope: Deactivated successfully.
Nov 25 09:44:13 compute-0 podman[456382]: 2025-11-25 09:44:13.090940892 +0000 UTC m=+1.181567894 container died 62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 09:44:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-836c444c85bd64788d6be4a1d84a3ebc86e50fdae805bd24be3c21dc57a528ce-merged.mount: Deactivated successfully.
Nov 25 09:44:13 compute-0 podman[456382]: 2025-11-25 09:44:13.246465427 +0000 UTC m=+1.337092429 container remove 62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 09:44:13 compute-0 systemd[1]: libpod-conmon-62b796e6a2eade9f813e0b43f4a258831bc8876fc04fbc2f2e913857bb532d6c.scope: Deactivated successfully.
Nov 25 09:44:13 compute-0 sudo[456275]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3739: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:13 compute-0 sudo[456440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:13 compute-0 sudo[456440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:13 compute-0 sudo[456440]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:13 compute-0 sudo[456465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:44:13 compute-0 sudo[456465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:13 compute-0 sudo[456465]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:13 compute-0 sudo[456490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:13 compute-0 sudo[456490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:13 compute-0 sudo[456490]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:13 compute-0 sudo[456515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:44:13 compute-0 sudo[456515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:13 compute-0 ceph-mon[75015]: pgmap v3739: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:13 compute-0 podman[456582]: 2025-11-25 09:44:13.910960556 +0000 UTC m=+0.070740182 container create d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:44:13 compute-0 systemd[1]: Started libpod-conmon-d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835.scope.
Nov 25 09:44:13 compute-0 podman[456582]: 2025-11-25 09:44:13.867362055 +0000 UTC m=+0.027142191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:44:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:44:14 compute-0 podman[456582]: 2025-11-25 09:44:14.013465174 +0000 UTC m=+0.173244820 container init d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:44:14 compute-0 podman[456582]: 2025-11-25 09:44:14.02211686 +0000 UTC m=+0.181896486 container start d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_easley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:44:14 compute-0 upbeat_easley[456598]: 167 167
Nov 25 09:44:14 compute-0 systemd[1]: libpod-d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835.scope: Deactivated successfully.
Nov 25 09:44:14 compute-0 podman[456582]: 2025-11-25 09:44:14.042817045 +0000 UTC m=+0.202596691 container attach d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:44:14 compute-0 podman[456582]: 2025-11-25 09:44:14.04374359 +0000 UTC m=+0.203523216 container died d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_easley, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:44:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-6beea3c2aea0dc645e9a0b16f24d13ba7498866f820099847339b0a45be8ec41-merged.mount: Deactivated successfully.
Nov 25 09:44:14 compute-0 podman[456582]: 2025-11-25 09:44:14.139148065 +0000 UTC m=+0.298927681 container remove d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_easley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:44:14 compute-0 systemd[1]: libpod-conmon-d2fdabba02e7bc742498b8d9431a0602be6e26244c2b0281ed28a41b301dd835.scope: Deactivated successfully.
Nov 25 09:44:14 compute-0 podman[456621]: 2025-11-25 09:44:14.350583856 +0000 UTC m=+0.069843207 container create 015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 09:44:14 compute-0 podman[456621]: 2025-11-25 09:44:14.305197138 +0000 UTC m=+0.024456509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:44:14 compute-0 systemd[1]: Started libpod-conmon-015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8.scope.
Nov 25 09:44:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:44:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade26a4f3258f526344d35fe78f779eea3f8444c2fb96a1d586c3c21ea5ad543/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade26a4f3258f526344d35fe78f779eea3f8444c2fb96a1d586c3c21ea5ad543/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade26a4f3258f526344d35fe78f779eea3f8444c2fb96a1d586c3c21ea5ad543/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade26a4f3258f526344d35fe78f779eea3f8444c2fb96a1d586c3c21ea5ad543/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:14 compute-0 podman[456621]: 2025-11-25 09:44:14.496657314 +0000 UTC m=+0.215916675 container init 015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:44:14 compute-0 podman[456621]: 2025-11-25 09:44:14.502476432 +0000 UTC m=+0.221735783 container start 015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 09:44:14 compute-0 podman[456621]: 2025-11-25 09:44:14.520283359 +0000 UTC m=+0.239542720 container attach 015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:44:14 compute-0 sshd-session[455923]: Connection closed by 45.78.217.205 port 36442 [preauth]
Nov 25 09:44:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:14 compute-0 nova_compute[253538]: 2025-11-25 09:44:14.958 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3740: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:15 compute-0 naughty_tharp[456637]: {
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:     "0": [
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:         {
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "devices": [
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "/dev/loop3"
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             ],
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "lv_name": "ceph_lv0",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "lv_size": "21470642176",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "name": "ceph_lv0",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "tags": {
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.cluster_name": "ceph",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.crush_device_class": "",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.encrypted": "0",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.osd_id": "0",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.type": "block",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.vdo": "0"
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             },
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "type": "block",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "vg_name": "ceph_vg0"
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:         }
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:     ],
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:     "1": [
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:         {
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "devices": [
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "/dev/loop4"
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             ],
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "lv_name": "ceph_lv1",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "lv_size": "21470642176",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "name": "ceph_lv1",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "tags": {
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.cluster_name": "ceph",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.crush_device_class": "",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.encrypted": "0",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.osd_id": "1",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.type": "block",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.vdo": "0"
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             },
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "type": "block",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "vg_name": "ceph_vg1"
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:         }
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:     ],
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:     "2": [
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:         {
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "devices": [
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "/dev/loop5"
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             ],
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "lv_name": "ceph_lv2",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "lv_size": "21470642176",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "name": "ceph_lv2",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "tags": {
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.cluster_name": "ceph",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.crush_device_class": "",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.encrypted": "0",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.osd_id": "2",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.type": "block",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:                 "ceph.vdo": "0"
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             },
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "type": "block",
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:             "vg_name": "ceph_vg2"
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:         }
Nov 25 09:44:15 compute-0 naughty_tharp[456637]:     ]
Nov 25 09:44:15 compute-0 naughty_tharp[456637]: }
Nov 25 09:44:15 compute-0 systemd[1]: libpod-015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8.scope: Deactivated successfully.
Nov 25 09:44:15 compute-0 podman[456621]: 2025-11-25 09:44:15.329704643 +0000 UTC m=+1.048964004 container died 015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:44:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ade26a4f3258f526344d35fe78f779eea3f8444c2fb96a1d586c3c21ea5ad543-merged.mount: Deactivated successfully.
Nov 25 09:44:15 compute-0 podman[456621]: 2025-11-25 09:44:15.388145638 +0000 UTC m=+1.107404989 container remove 015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:44:15 compute-0 systemd[1]: libpod-conmon-015f9b89d7c9fada46656e5c5f164871db3fbc6f0fd00a21869944a361f99ea8.scope: Deactivated successfully.
Nov 25 09:44:15 compute-0 sudo[456515]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:15 compute-0 sudo[456658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:15 compute-0 sudo[456658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:15 compute-0 sudo[456658]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:15 compute-0 sudo[456683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:44:15 compute-0 sudo[456683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:15 compute-0 sudo[456683]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:15 compute-0 sudo[456708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:15 compute-0 sudo[456708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:15 compute-0 sudo[456708]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:15 compute-0 sudo[456733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:44:15 compute-0 sudo[456733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:16 compute-0 podman[456797]: 2025-11-25 09:44:16.030455922 +0000 UTC m=+0.036551189 container create eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:44:16 compute-0 systemd[1]: Started libpod-conmon-eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70.scope.
Nov 25 09:44:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:44:16 compute-0 podman[456797]: 2025-11-25 09:44:16.016439699 +0000 UTC m=+0.022534996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:44:16 compute-0 podman[456797]: 2025-11-25 09:44:16.119445 +0000 UTC m=+0.125540297 container init eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:44:16 compute-0 podman[456797]: 2025-11-25 09:44:16.129349921 +0000 UTC m=+0.135445208 container start eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:44:16 compute-0 podman[456797]: 2025-11-25 09:44:16.132820645 +0000 UTC m=+0.138915932 container attach eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:44:16 compute-0 gifted_cray[456813]: 167 167
Nov 25 09:44:16 compute-0 systemd[1]: libpod-eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70.scope: Deactivated successfully.
Nov 25 09:44:16 compute-0 podman[456797]: 2025-11-25 09:44:16.13554155 +0000 UTC m=+0.141636847 container died eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:44:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0df3a0fd2e80188c1fb726c19486864adf56e227a61a4b63c4b632b5acfee39c-merged.mount: Deactivated successfully.
Nov 25 09:44:16 compute-0 podman[456797]: 2025-11-25 09:44:16.16778167 +0000 UTC m=+0.173876947 container remove eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 09:44:16 compute-0 systemd[1]: libpod-conmon-eb4f9e13db87c9e444ab2f594bfc9e90bb914f06d9e3c9c20e6bba89d82ebc70.scope: Deactivated successfully.
Nov 25 09:44:16 compute-0 podman[456837]: 2025-11-25 09:44:16.342669924 +0000 UTC m=+0.051901407 container create fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:44:16 compute-0 ceph-mon[75015]: pgmap v3740: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:16 compute-0 systemd[1]: Started libpod-conmon-fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29.scope.
Nov 25 09:44:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ebd7886ab89b6c2efc4fa26e7cc2e8bfa359c40bc90cca080b1de493f11ab24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ebd7886ab89b6c2efc4fa26e7cc2e8bfa359c40bc90cca080b1de493f11ab24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ebd7886ab89b6c2efc4fa26e7cc2e8bfa359c40bc90cca080b1de493f11ab24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ebd7886ab89b6c2efc4fa26e7cc2e8bfa359c40bc90cca080b1de493f11ab24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:16 compute-0 podman[456837]: 2025-11-25 09:44:16.325392262 +0000 UTC m=+0.034623775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:44:16 compute-0 podman[456837]: 2025-11-25 09:44:16.423755147 +0000 UTC m=+0.132986630 container init fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:44:16 compute-0 podman[456837]: 2025-11-25 09:44:16.436757242 +0000 UTC m=+0.145988725 container start fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 09:44:16 compute-0 podman[456837]: 2025-11-25 09:44:16.440472854 +0000 UTC m=+0.149704627 container attach fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:44:16 compute-0 nova_compute[253538]: 2025-11-25 09:44:16.861 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3741: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:17 compute-0 hopeful_austin[456854]: {
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:         "osd_id": 1,
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:         "type": "bluestore"
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:     },
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:         "osd_id": 2,
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:         "type": "bluestore"
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:     },
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:         "osd_id": 0,
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:         "type": "bluestore"
Nov 25 09:44:17 compute-0 hopeful_austin[456854]:     }
Nov 25 09:44:17 compute-0 hopeful_austin[456854]: }
Nov 25 09:44:17 compute-0 systemd[1]: libpod-fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29.scope: Deactivated successfully.
Nov 25 09:44:17 compute-0 podman[456837]: 2025-11-25 09:44:17.423474326 +0000 UTC m=+1.132705809 container died fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 25 09:44:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ebd7886ab89b6c2efc4fa26e7cc2e8bfa359c40bc90cca080b1de493f11ab24-merged.mount: Deactivated successfully.
Nov 25 09:44:17 compute-0 podman[456837]: 2025-11-25 09:44:17.574439987 +0000 UTC m=+1.283671470 container remove fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:44:17 compute-0 systemd[1]: libpod-conmon-fa791dccb2025283a6d980384094d86b6f0a5ae020983f991d031d93089b4e29.scope: Deactivated successfully.
Nov 25 09:44:17 compute-0 sudo[456733]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:44:17 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:44:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:44:17 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:44:17 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 5dafb98f-c0a7-4bd1-ba17-e46082a7382a does not exist
Nov 25 09:44:17 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev a6228746-1f03-4c13-b81c-d93839102d4b does not exist
Nov 25 09:44:17 compute-0 sudo[456902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:17 compute-0 sudo[456902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:17 compute-0 sudo[456902]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:17 compute-0 sudo[456927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:44:17 compute-0 sudo[456927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:17 compute-0 sudo[456927]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:18 compute-0 nova_compute[253538]: 2025-11-25 09:44:18.172 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:44:18 compute-0 ceph-mon[75015]: pgmap v3741: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:18 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:44:18 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:44:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3742: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:19 compute-0 nova_compute[253538]: 2025-11-25 09:44:19.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:44:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:19 compute-0 nova_compute[253538]: 2025-11-25 09:44:19.963 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:20 compute-0 ceph-mon[75015]: pgmap v3742: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3743: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:21 compute-0 nova_compute[253538]: 2025-11-25 09:44:21.865 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:22 compute-0 ceph-mon[75015]: pgmap v3743: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3744: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:44:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:44:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:44:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:44:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:44:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:44:24 compute-0 ceph-mon[75015]: pgmap v3744: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:24 compute-0 nova_compute[253538]: 2025-11-25 09:44:24.966 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3745: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:26 compute-0 ceph-mon[75015]: pgmap v3745: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:26 compute-0 nova_compute[253538]: 2025-11-25 09:44:26.935 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:27 compute-0 sshd-session[456952]: Invalid user maarch from 146.190.154.85 port 58360
Nov 25 09:44:27 compute-0 sshd-session[456952]: Received disconnect from 146.190.154.85 port 58360:11: Bye Bye [preauth]
Nov 25 09:44:27 compute-0 sshd-session[456952]: Disconnected from invalid user maarch 146.190.154.85 port 58360 [preauth]
Nov 25 09:44:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3746: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:27 compute-0 nova_compute[253538]: 2025-11-25 09:44:27.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:44:27 compute-0 ceph-mon[75015]: pgmap v3746: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:28 compute-0 nova_compute[253538]: 2025-11-25 09:44:28.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:44:28 compute-0 nova_compute[253538]: 2025-11-25 09:44:28.556 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:44:28 compute-0 podman[456956]: 2025-11-25 09:44:28.809232231 +0000 UTC m=+0.061094859 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 09:44:28 compute-0 podman[456957]: 2025-11-25 09:44:28.828146888 +0000 UTC m=+0.080468308 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:44:28 compute-0 sshd-session[456954]: Invalid user devuser from 62.60.193.188 port 39836
Nov 25 09:44:29 compute-0 sshd-session[456954]: Received disconnect from 62.60.193.188 port 39836:11: Bye Bye [preauth]
Nov 25 09:44:29 compute-0 sshd-session[456954]: Disconnected from invalid user devuser 62.60.193.188 port 39836 [preauth]
Nov 25 09:44:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:44:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2661224800' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:44:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:44:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2661224800' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:44:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2661224800' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:44:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/2661224800' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:44:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3747: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:29 compute-0 nova_compute[253538]: 2025-11-25 09:44:29.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:44:29 compute-0 nova_compute[253538]: 2025-11-25 09:44:29.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:44:29 compute-0 nova_compute[253538]: 2025-11-25 09:44:29.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:44:29 compute-0 nova_compute[253538]: 2025-11-25 09:44:29.570 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:44:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:29 compute-0 nova_compute[253538]: 2025-11-25 09:44:29.969 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:30 compute-0 sshd-session[456993]: Invalid user aaa from 47.252.72.9 port 46948
Nov 25 09:44:30 compute-0 sshd-session[456993]: Received disconnect from 47.252.72.9 port 46948:11: Bye Bye [preauth]
Nov 25 09:44:30 compute-0 sshd-session[456993]: Disconnected from invalid user aaa 47.252.72.9 port 46948 [preauth]
Nov 25 09:44:30 compute-0 ceph-mon[75015]: pgmap v3747: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:30 compute-0 nova_compute[253538]: 2025-11-25 09:44:30.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:44:30 compute-0 nova_compute[253538]: 2025-11-25 09:44:30.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:44:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3748: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:31 compute-0 sshd-session[456995]: Invalid user a from 182.253.79.194 port 29794
Nov 25 09:44:31 compute-0 nova_compute[253538]: 2025-11-25 09:44:31.938 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:32 compute-0 sshd-session[456995]: Received disconnect from 182.253.79.194 port 29794:11: Bye Bye [preauth]
Nov 25 09:44:32 compute-0 sshd-session[456995]: Disconnected from invalid user a 182.253.79.194 port 29794 [preauth]
Nov 25 09:44:32 compute-0 ceph-mon[75015]: pgmap v3748: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3749: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:34 compute-0 ceph-mon[75015]: pgmap v3749: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:34 compute-0 nova_compute[253538]: 2025-11-25 09:44:34.973 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3750: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:35 compute-0 podman[456997]: 2025-11-25 09:44:35.877842101 +0000 UTC m=+0.121175918 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:44:36 compute-0 ceph-mon[75015]: pgmap v3750: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:36 compute-0 nova_compute[253538]: 2025-11-25 09:44:36.977 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3751: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:37 compute-0 nova_compute[253538]: 2025-11-25 09:44:37.555 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:44:38 compute-0 sshd-session[457024]: Invalid user oracle from 193.32.162.151 port 35534
Nov 25 09:44:38 compute-0 ceph-mon[75015]: pgmap v3751: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:38 compute-0 sshd-session[457024]: Connection closed by invalid user oracle 193.32.162.151 port 35534 [preauth]
Nov 25 09:44:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3752: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:39 compute-0 sshd-session[457026]: Invalid user gitea from 152.70.84.178 port 47714
Nov 25 09:44:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:39 compute-0 nova_compute[253538]: 2025-11-25 09:44:39.977 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:40 compute-0 sshd-session[457026]: Received disconnect from 152.70.84.178 port 47714:11: Bye Bye [preauth]
Nov 25 09:44:40 compute-0 sshd-session[457026]: Disconnected from invalid user gitea 152.70.84.178 port 47714 [preauth]
Nov 25 09:44:40 compute-0 sshd-session[457028]: Invalid user gitea from 165.227.175.225 port 59676
Nov 25 09:44:40 compute-0 sshd-session[457028]: Received disconnect from 165.227.175.225 port 59676:11: Bye Bye [preauth]
Nov 25 09:44:40 compute-0 sshd-session[457028]: Disconnected from invalid user gitea 165.227.175.225 port 59676 [preauth]
Nov 25 09:44:40 compute-0 ceph-mon[75015]: pgmap v3752: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:44:41.133 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:44:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:44:41.135 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:44:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:44:41.136 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:44:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3753: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:41 compute-0 nova_compute[253538]: 2025-11-25 09:44:41.980 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:42 compute-0 ceph-mon[75015]: pgmap v3753: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3754: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:44 compute-0 ceph-mon[75015]: pgmap v3754: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:44 compute-0 nova_compute[253538]: 2025-11-25 09:44:44.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:44:44 compute-0 nova_compute[253538]: 2025-11-25 09:44:44.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:44:44 compute-0 nova_compute[253538]: 2025-11-25 09:44:44.583 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:44:44 compute-0 nova_compute[253538]: 2025-11-25 09:44:44.584 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:44:44 compute-0 nova_compute[253538]: 2025-11-25 09:44:44.584 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:44:44 compute-0 nova_compute[253538]: 2025-11-25 09:44:44.585 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:44:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:44 compute-0 nova_compute[253538]: 2025-11-25 09:44:44.981 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:44:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2094437044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:44:45 compute-0 nova_compute[253538]: 2025-11-25 09:44:45.072 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:44:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3755: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:45 compute-0 nova_compute[253538]: 2025-11-25 09:44:45.296 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:44:45 compute-0 nova_compute[253538]: 2025-11-25 09:44:45.297 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3570MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:44:45 compute-0 nova_compute[253538]: 2025-11-25 09:44:45.298 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:44:45 compute-0 nova_compute[253538]: 2025-11-25 09:44:45.298 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:44:45 compute-0 nova_compute[253538]: 2025-11-25 09:44:45.353 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:44:45 compute-0 nova_compute[253538]: 2025-11-25 09:44:45.354 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:44:45 compute-0 nova_compute[253538]: 2025-11-25 09:44:45.374 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:44:45 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2094437044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:44:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:44:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1853883255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:44:45 compute-0 nova_compute[253538]: 2025-11-25 09:44:45.821 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:44:45 compute-0 nova_compute[253538]: 2025-11-25 09:44:45.830 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:44:45 compute-0 nova_compute[253538]: 2025-11-25 09:44:45.845 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:44:45 compute-0 nova_compute[253538]: 2025-11-25 09:44:45.848 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:44:45 compute-0 nova_compute[253538]: 2025-11-25 09:44:45.849 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:44:46 compute-0 ceph-mon[75015]: pgmap v3755: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:46 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1853883255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:44:46 compute-0 nova_compute[253538]: 2025-11-25 09:44:46.981 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3756: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:47 compute-0 ceph-mon[75015]: pgmap v3756: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3757: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:50 compute-0 nova_compute[253538]: 2025-11-25 09:44:50.026 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:50 compute-0 ceph-mon[75015]: pgmap v3757: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3758: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:51 compute-0 nova_compute[253538]: 2025-11-25 09:44:51.984 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:52 compute-0 ceph-mon[75015]: pgmap v3758: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3759: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:44:53
Nov 25 09:44:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:44:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:44:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.meta', '.rgw.root', 'images', 'vms']
Nov 25 09:44:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:44:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:44:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:44:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:44:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:44:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:44:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:44:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:44:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:44:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:44:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:44:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:44:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:44:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:44:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:44:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:44:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:44:54 compute-0 ceph-mon[75015]: pgmap v3759: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:55 compute-0 nova_compute[253538]: 2025-11-25 09:44:55.030 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3760: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:56 compute-0 ceph-mon[75015]: pgmap v3760: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:56 compute-0 nova_compute[253538]: 2025-11-25 09:44:56.989 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:44:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3761: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:58 compute-0 ceph-mon[75015]: pgmap v3761: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3762: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:44:59 compute-0 podman[457074]: 2025-11-25 09:44:59.849519673 +0000 UTC m=+0.084788976 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 25 09:44:59 compute-0 podman[457075]: 2025-11-25 09:44:59.881945358 +0000 UTC m=+0.112375679 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 09:44:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:00 compute-0 nova_compute[253538]: 2025-11-25 09:45:00.033 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:00 compute-0 ceph-mon[75015]: pgmap v3762: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3763: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:01 compute-0 nova_compute[253538]: 2025-11-25 09:45:01.990 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:02 compute-0 ceph-mon[75015]: pgmap v3763: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3764: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:03 compute-0 ceph-mon[75015]: pgmap v3764: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:45:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:05 compute-0 nova_compute[253538]: 2025-11-25 09:45:05.036 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3765: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:06 compute-0 ceph-mon[75015]: pgmap v3765: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:06 compute-0 podman[457110]: 2025-11-25 09:45:06.863765949 +0000 UTC m=+0.114898377 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 09:45:06 compute-0 nova_compute[253538]: 2025-11-25 09:45:06.991 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3766: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:08 compute-0 ceph-mon[75015]: pgmap v3766: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3767: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:10 compute-0 nova_compute[253538]: 2025-11-25 09:45:10.041 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:10 compute-0 ceph-mon[75015]: pgmap v3767: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3768: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:11 compute-0 ceph-mon[75015]: pgmap v3768: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:11 compute-0 nova_compute[253538]: 2025-11-25 09:45:11.993 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3769: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:14 compute-0 ceph-mon[75015]: pgmap v3769: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:15 compute-0 nova_compute[253538]: 2025-11-25 09:45:15.044 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3770: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:16 compute-0 ceph-mon[75015]: pgmap v3770: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:16 compute-0 nova_compute[253538]: 2025-11-25 09:45:16.995 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3771: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:17 compute-0 ceph-mon[75015]: pgmap v3771: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:17 compute-0 sudo[457137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:45:17 compute-0 nova_compute[253538]: 2025-11-25 09:45:17.849 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:45:17 compute-0 sudo[457137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:17 compute-0 sudo[457137]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:17 compute-0 sudo[457162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:45:17 compute-0 sudo[457162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:17 compute-0 sudo[457162]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:18 compute-0 sudo[457187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:45:18 compute-0 sudo[457187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:18 compute-0 sudo[457187]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:18 compute-0 sudo[457212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:45:18 compute-0 sudo[457212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:18 compute-0 sudo[457212]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 09:45:18 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:45:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:45:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:45:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:45:18 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:45:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:45:18 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:45:18 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev d4660492-c67a-439c-90e5-3add8392451b does not exist
Nov 25 09:45:18 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 89ea9a3f-6af0-429d-a5c7-bcd06efba30c does not exist
Nov 25 09:45:18 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 34a235ac-4d8b-4f64-830d-c705b92da52e does not exist
Nov 25 09:45:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:45:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:45:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:45:18 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:45:18 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:45:18 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:45:18 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:45:18 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:45:18 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:45:18 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:45:18 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:45:18 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:45:18 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:45:18 compute-0 sudo[457267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:45:18 compute-0 sudo[457267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:18 compute-0 sudo[457267]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:18 compute-0 sudo[457292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:45:18 compute-0 sudo[457292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:18 compute-0 sudo[457292]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:18 compute-0 sudo[457317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:45:18 compute-0 sudo[457317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:18 compute-0 sudo[457317]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:18 compute-0 sudo[457342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:45:18 compute-0 sudo[457342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:19 compute-0 podman[457407]: 2025-11-25 09:45:19.190622704 +0000 UTC m=+0.040440376 container create 05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 09:45:19 compute-0 systemd[1]: Started libpod-conmon-05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801.scope.
Nov 25 09:45:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:45:19 compute-0 podman[457407]: 2025-11-25 09:45:19.174901154 +0000 UTC m=+0.024718846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:45:19 compute-0 podman[457407]: 2025-11-25 09:45:19.277800733 +0000 UTC m=+0.127618435 container init 05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 09:45:19 compute-0 podman[457407]: 2025-11-25 09:45:19.28901887 +0000 UTC m=+0.138836542 container start 05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:45:19 compute-0 admiring_poincare[457423]: 167 167
Nov 25 09:45:19 compute-0 systemd[1]: libpod-05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801.scope: Deactivated successfully.
Nov 25 09:45:19 compute-0 podman[457407]: 2025-11-25 09:45:19.300167304 +0000 UTC m=+0.149985056 container attach 05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:45:19 compute-0 podman[457407]: 2025-11-25 09:45:19.301091009 +0000 UTC m=+0.150908681 container died 05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:45:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3772: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a31b984f039012911ba5054c747098d5934321e5995f3f224d29d4035b480fe-merged.mount: Deactivated successfully.
Nov 25 09:45:19 compute-0 podman[457407]: 2025-11-25 09:45:19.360706416 +0000 UTC m=+0.210524088 container remove 05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poincare, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:45:19 compute-0 systemd[1]: libpod-conmon-05a833fbd7597c6d70e8512928a4a8f784e056c3313549f50d0ee0cc13b4f801.scope: Deactivated successfully.
Nov 25 09:45:19 compute-0 nova_compute[253538]: 2025-11-25 09:45:19.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:45:19 compute-0 podman[457446]: 2025-11-25 09:45:19.559208684 +0000 UTC m=+0.084596709 container create 24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 09:45:19 compute-0 podman[457446]: 2025-11-25 09:45:19.501818388 +0000 UTC m=+0.027206503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:45:19 compute-0 systemd[1]: Started libpod-conmon-24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d.scope.
Nov 25 09:45:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b56ba5b6a6b7ea604521e2176b2381aad8033e45c0c1df2cdfd0982da9b3b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b56ba5b6a6b7ea604521e2176b2381aad8033e45c0c1df2cdfd0982da9b3b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b56ba5b6a6b7ea604521e2176b2381aad8033e45c0c1df2cdfd0982da9b3b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b56ba5b6a6b7ea604521e2176b2381aad8033e45c0c1df2cdfd0982da9b3b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b56ba5b6a6b7ea604521e2176b2381aad8033e45c0c1df2cdfd0982da9b3b9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:19 compute-0 podman[457446]: 2025-11-25 09:45:19.645520761 +0000 UTC m=+0.170908806 container init 24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:45:19 compute-0 podman[457446]: 2025-11-25 09:45:19.656646234 +0000 UTC m=+0.182034269 container start 24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 09:45:19 compute-0 podman[457446]: 2025-11-25 09:45:19.659938064 +0000 UTC m=+0.185326119 container attach 24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 09:45:19 compute-0 ceph-mon[75015]: pgmap v3772: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:20 compute-0 nova_compute[253538]: 2025-11-25 09:45:20.046 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:20 compute-0 jovial_ellis[457463]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:45:20 compute-0 jovial_ellis[457463]: --> relative data size: 1.0
Nov 25 09:45:20 compute-0 jovial_ellis[457463]: --> All data devices are unavailable
Nov 25 09:45:20 compute-0 systemd[1]: libpod-24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d.scope: Deactivated successfully.
Nov 25 09:45:20 compute-0 systemd[1]: libpod-24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d.scope: Consumed 1.052s CPU time.
Nov 25 09:45:20 compute-0 podman[457446]: 2025-11-25 09:45:20.767838567 +0000 UTC m=+1.293226592 container died 24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:45:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-92b56ba5b6a6b7ea604521e2176b2381aad8033e45c0c1df2cdfd0982da9b3b9-merged.mount: Deactivated successfully.
Nov 25 09:45:20 compute-0 podman[457446]: 2025-11-25 09:45:20.830413535 +0000 UTC m=+1.355801580 container remove 24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:45:20 compute-0 systemd[1]: libpod-conmon-24ce67cf980efba8e3c966d098f002db4e2233fc0b0ba2de130b94e829bdfd5d.scope: Deactivated successfully.
Nov 25 09:45:20 compute-0 sudo[457342]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:20 compute-0 sudo[457506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:45:20 compute-0 sudo[457506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:20 compute-0 sudo[457506]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:21 compute-0 sudo[457531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:45:21 compute-0 sudo[457531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:21 compute-0 sudo[457531]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:21 compute-0 sudo[457556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:45:21 compute-0 sudo[457556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:21 compute-0 sudo[457556]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:21 compute-0 sudo[457581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:45:21 compute-0 sudo[457581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3773: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:21 compute-0 podman[457646]: 2025-11-25 09:45:21.538873483 +0000 UTC m=+0.048344140 container create 26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:45:21 compute-0 systemd[1]: Started libpod-conmon-26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121.scope.
Nov 25 09:45:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:45:21 compute-0 podman[457646]: 2025-11-25 09:45:21.519551076 +0000 UTC m=+0.029021783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:45:21 compute-0 podman[457646]: 2025-11-25 09:45:21.616029959 +0000 UTC m=+0.125500626 container init 26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:45:21 compute-0 podman[457646]: 2025-11-25 09:45:21.624286464 +0000 UTC m=+0.133757111 container start 26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:45:21 compute-0 podman[457646]: 2025-11-25 09:45:21.627279406 +0000 UTC m=+0.136750053 container attach 26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nobel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 09:45:21 compute-0 charming_nobel[457662]: 167 167
Nov 25 09:45:21 compute-0 systemd[1]: libpod-26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121.scope: Deactivated successfully.
Nov 25 09:45:21 compute-0 conmon[457662]: conmon 26455697d3e7aac83a01 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121.scope/container/memory.events
Nov 25 09:45:21 compute-0 podman[457646]: 2025-11-25 09:45:21.632740335 +0000 UTC m=+0.142210992 container died 26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nobel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc4771e21aee5bb6fccfb7d751250d9c98513b93d0e559becbfc7579dbd650df-merged.mount: Deactivated successfully.
Nov 25 09:45:21 compute-0 podman[457646]: 2025-11-25 09:45:21.676387987 +0000 UTC m=+0.185858634 container remove 26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:45:21 compute-0 systemd[1]: libpod-conmon-26455697d3e7aac83a01c004b78e7c7b05d377fc9900b0ccff5e53921a96a121.scope: Deactivated successfully.
Nov 25 09:45:21 compute-0 podman[457687]: 2025-11-25 09:45:21.901384848 +0000 UTC m=+0.062768294 container create 4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:45:21 compute-0 systemd[1]: Started libpod-conmon-4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d.scope.
Nov 25 09:45:21 compute-0 podman[457687]: 2025-11-25 09:45:21.875034549 +0000 UTC m=+0.036418085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:45:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95f9c34cc9cf3c535d0b926d1de13207f0e5142efd4f211f2f482ea6e5320ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95f9c34cc9cf3c535d0b926d1de13207f0e5142efd4f211f2f482ea6e5320ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95f9c34cc9cf3c535d0b926d1de13207f0e5142efd4f211f2f482ea6e5320ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95f9c34cc9cf3c535d0b926d1de13207f0e5142efd4f211f2f482ea6e5320ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:21 compute-0 podman[457687]: 2025-11-25 09:45:21.992883866 +0000 UTC m=+0.154267342 container init 4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:45:22 compute-0 nova_compute[253538]: 2025-11-25 09:45:21.998 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:22 compute-0 podman[457687]: 2025-11-25 09:45:22.000047412 +0000 UTC m=+0.161430868 container start 4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:45:22 compute-0 podman[457687]: 2025-11-25 09:45:22.003420504 +0000 UTC m=+0.164803970 container attach 4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 09:45:22 compute-0 ceph-mon[75015]: pgmap v3773: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:22 compute-0 infallible_haslett[457704]: {
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:     "0": [
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:         {
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "devices": [
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "/dev/loop3"
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             ],
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "lv_name": "ceph_lv0",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "lv_size": "21470642176",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fa0f8d90-5df8-4d42-9078-da082765696d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "lv_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "name": "ceph_lv0",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "tags": {
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.block_uuid": "ws6N1k-RQql-ecTe-5nuY-MHXm-VI7G-Yri1Wr",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.cluster_name": "ceph",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.crush_device_class": "",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.encrypted": "0",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.osd_fsid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.osd_id": "0",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.type": "block",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.vdo": "0"
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             },
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "type": "block",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "vg_name": "ceph_vg0"
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:         }
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:     ],
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:     "1": [
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:         {
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "devices": [
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "/dev/loop4"
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             ],
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "lv_name": "ceph_lv1",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "lv_size": "21470642176",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ccbbde0-8faf-460a-800e-c84f00ed17db,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "lv_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "name": "ceph_lv1",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "tags": {
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.block_uuid": "nYTBvj-YHl6-5Jk0-jcJy-x09O-L1Q7-rp5y9e",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.cluster_name": "ceph",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.crush_device_class": "",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.encrypted": "0",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.osd_fsid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.osd_id": "1",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.type": "block",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.vdo": "0"
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             },
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "type": "block",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "vg_name": "ceph_vg1"
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:         }
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:     ],
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:     "2": [
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:         {
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "devices": [
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "/dev/loop5"
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             ],
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "lv_name": "ceph_lv2",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "lv_size": "21470642176",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a058ea16-8b73-51e1-b172-ed66107102bf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a15223e-f21c-42af-b702-2be1c3607399,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "lv_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "name": "ceph_lv2",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "tags": {
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.block_uuid": "5WK7uW-oFZ2-dyrL-QQCf-AuId-ZZAH-eG3je0",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.cluster_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.cluster_name": "ceph",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.crush_device_class": "",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.encrypted": "0",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.osd_fsid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.osd_id": "2",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.type": "block",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:                 "ceph.vdo": "0"
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             },
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "type": "block",
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:             "vg_name": "ceph_vg2"
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:         }
Nov 25 09:45:22 compute-0 infallible_haslett[457704]:     ]
Nov 25 09:45:22 compute-0 infallible_haslett[457704]: }
Nov 25 09:45:22 compute-0 systemd[1]: libpod-4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d.scope: Deactivated successfully.
Nov 25 09:45:23 compute-0 podman[457713]: 2025-11-25 09:45:23.034006096 +0000 UTC m=+0.038306297 container died 4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:45:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c95f9c34cc9cf3c535d0b926d1de13207f0e5142efd4f211f2f482ea6e5320ae-merged.mount: Deactivated successfully.
Nov 25 09:45:23 compute-0 podman[457713]: 2025-11-25 09:45:23.081036669 +0000 UTC m=+0.085336840 container remove 4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 09:45:23 compute-0 systemd[1]: libpod-conmon-4e678957062a5b1d577de692530ba8504f33187f12a950a993d2e8373afe947d.scope: Deactivated successfully.
Nov 25 09:45:23 compute-0 sudo[457581]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:23 compute-0 sudo[457728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:45:23 compute-0 sudo[457728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:23 compute-0 sudo[457728]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:23 compute-0 sudo[457753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:45:23 compute-0 sudo[457753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:23 compute-0 sudo[457753]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3774: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:23 compute-0 sudo[457778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:45:23 compute-0 sudo[457778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:23 compute-0 sudo[457778]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:23 compute-0 sudo[457803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- raw list --format json
Nov 25 09:45:23 compute-0 sudo[457803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:45:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:45:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:45:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:45:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:45:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:45:23 compute-0 podman[457869]: 2025-11-25 09:45:23.813714649 +0000 UTC m=+0.050479909 container create c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wescoff, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 09:45:23 compute-0 systemd[1]: Started libpod-conmon-c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf.scope.
Nov 25 09:45:23 compute-0 podman[457869]: 2025-11-25 09:45:23.78590513 +0000 UTC m=+0.022670410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:45:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:45:23 compute-0 podman[457869]: 2025-11-25 09:45:23.90239671 +0000 UTC m=+0.139161990 container init c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:45:23 compute-0 podman[457869]: 2025-11-25 09:45:23.908327282 +0000 UTC m=+0.145092542 container start c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:45:23 compute-0 elegant_wescoff[457886]: 167 167
Nov 25 09:45:23 compute-0 systemd[1]: libpod-c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf.scope: Deactivated successfully.
Nov 25 09:45:23 compute-0 podman[457869]: 2025-11-25 09:45:23.914787778 +0000 UTC m=+0.151553068 container attach c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 09:45:23 compute-0 podman[457869]: 2025-11-25 09:45:23.915112327 +0000 UTC m=+0.151877587 container died c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 09:45:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0087a099184feb138d26866296e2990f10dd4bd69941c6eabbe343c524e92269-merged.mount: Deactivated successfully.
Nov 25 09:45:23 compute-0 podman[457869]: 2025-11-25 09:45:23.964427433 +0000 UTC m=+0.201192703 container remove c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wescoff, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:45:23 compute-0 systemd[1]: libpod-conmon-c9f35143b27da54870b431b7f66d8801139aca6803f4b78cf9861e9759ca8ecf.scope: Deactivated successfully.
Nov 25 09:45:24 compute-0 podman[457909]: 2025-11-25 09:45:24.135850713 +0000 UTC m=+0.052136025 container create 62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:45:24 compute-0 systemd[1]: Started libpod-conmon-62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f.scope.
Nov 25 09:45:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5100eac66ed36a528b391ca9fe7d90d22ba6dd1662daa4886a16be4787756ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5100eac66ed36a528b391ca9fe7d90d22ba6dd1662daa4886a16be4787756ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5100eac66ed36a528b391ca9fe7d90d22ba6dd1662daa4886a16be4787756ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5100eac66ed36a528b391ca9fe7d90d22ba6dd1662daa4886a16be4787756ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:24 compute-0 podman[457909]: 2025-11-25 09:45:24.111347333 +0000 UTC m=+0.027632625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:45:24 compute-0 podman[457909]: 2025-11-25 09:45:24.220912574 +0000 UTC m=+0.137197866 container init 62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bouman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:45:24 compute-0 podman[457909]: 2025-11-25 09:45:24.231794491 +0000 UTC m=+0.148079763 container start 62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bouman, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 09:45:24 compute-0 podman[457909]: 2025-11-25 09:45:24.236844099 +0000 UTC m=+0.153129391 container attach 62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bouman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:45:24 compute-0 ceph-mon[75015]: pgmap v3774: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:25 compute-0 nova_compute[253538]: 2025-11-25 09:45:25.049 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]: {
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:     "1ccbbde0-8faf-460a-800e-c84f00ed17db": {
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:         "osd_id": 1,
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:         "osd_uuid": "1ccbbde0-8faf-460a-800e-c84f00ed17db",
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:         "type": "bluestore"
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:     },
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:     "2a15223e-f21c-42af-b702-2be1c3607399": {
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:         "osd_id": 2,
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:         "osd_uuid": "2a15223e-f21c-42af-b702-2be1c3607399",
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:         "type": "bluestore"
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:     },
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:     "fa0f8d90-5df8-4d42-9078-da082765696d": {
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:         "ceph_fsid": "a058ea16-8b73-51e1-b172-ed66107102bf",
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:         "osd_id": 0,
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:         "osd_uuid": "fa0f8d90-5df8-4d42-9078-da082765696d",
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:         "type": "bluestore"
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]:     }
Nov 25 09:45:25 compute-0 intelligent_bouman[457926]: }
Nov 25 09:45:25 compute-0 systemd[1]: libpod-62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f.scope: Deactivated successfully.
Nov 25 09:45:25 compute-0 systemd[1]: libpod-62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f.scope: Consumed 1.067s CPU time.
Nov 25 09:45:25 compute-0 podman[457909]: 2025-11-25 09:45:25.295385754 +0000 UTC m=+1.211671066 container died 62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 09:45:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3775: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5100eac66ed36a528b391ca9fe7d90d22ba6dd1662daa4886a16be4787756ca-merged.mount: Deactivated successfully.
Nov 25 09:45:25 compute-0 podman[457909]: 2025-11-25 09:45:25.34945935 +0000 UTC m=+1.265744622 container remove 62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:45:25 compute-0 systemd[1]: libpod-conmon-62173747a634b5c2fae464f63e7a2958eed7c09fb41b0d0fd4541fe7a5b1f74f.scope: Deactivated successfully.
Nov 25 09:45:25 compute-0 sudo[457803]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 09:45:25 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:45:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 09:45:25 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:45:25 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 96efbcec-9f84-4fdd-8cb2-0f6b5fb3b4a4 does not exist
Nov 25 09:45:25 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev bf8fa640-345f-49a7-b244-1a57cfe92224 does not exist
Nov 25 09:45:25 compute-0 sudo[457972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:45:25 compute-0 sudo[457972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:25 compute-0 sudo[457972]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:25 compute-0 sudo[457997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:45:25 compute-0 sudo[457997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:25 compute-0 sudo[457997]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:26 compute-0 ceph-mon[75015]: pgmap v3775: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:26 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:45:26 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:45:27 compute-0 nova_compute[253538]: 2025-11-25 09:45:27.000 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3776: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:28 compute-0 ceph-mon[75015]: pgmap v3776: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:28 compute-0 nova_compute[253538]: 2025-11-25 09:45:28.548 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:45:28 compute-0 nova_compute[253538]: 2025-11-25 09:45:28.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:45:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:45:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1911535707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:45:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:45:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1911535707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:45:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3777: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1911535707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:45:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1911535707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:45:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:30 compute-0 nova_compute[253538]: 2025-11-25 09:45:30.052 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:30 compute-0 ceph-mon[75015]: pgmap v3777: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:30 compute-0 nova_compute[253538]: 2025-11-25 09:45:30.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:45:30 compute-0 nova_compute[253538]: 2025-11-25 09:45:30.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:45:30 compute-0 nova_compute[253538]: 2025-11-25 09:45:30.554 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:45:30 compute-0 nova_compute[253538]: 2025-11-25 09:45:30.565 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:45:30 compute-0 nova_compute[253538]: 2025-11-25 09:45:30.565 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:45:30 compute-0 nova_compute[253538]: 2025-11-25 09:45:30.565 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:45:30 compute-0 nova_compute[253538]: 2025-11-25 09:45:30.565 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:45:30 compute-0 podman[458023]: 2025-11-25 09:45:30.809350418 +0000 UTC m=+0.061291355 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:45:30 compute-0 podman[458022]: 2025-11-25 09:45:30.815834914 +0000 UTC m=+0.067795891 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 09:45:31 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3778: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:31 compute-0 ceph-mon[75015]: pgmap v3778: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:32 compute-0 nova_compute[253538]: 2025-11-25 09:45:32.003 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:33 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3779: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:34 compute-0 ceph-mon[75015]: pgmap v3779: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:34 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:35 compute-0 nova_compute[253538]: 2025-11-25 09:45:35.056 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:35 compute-0 sshd-session[458060]: Invalid user a from 146.190.154.85 port 57220
Nov 25 09:45:35 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3780: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:35 compute-0 sshd-session[458060]: Received disconnect from 146.190.154.85 port 57220:11: Bye Bye [preauth]
Nov 25 09:45:35 compute-0 sshd-session[458060]: Disconnected from invalid user a 146.190.154.85 port 57220 [preauth]
Nov 25 09:45:36 compute-0 ceph-mon[75015]: pgmap v3780: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:37 compute-0 nova_compute[253538]: 2025-11-25 09:45:37.005 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:37 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3781: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:37 compute-0 podman[458062]: 2025-11-25 09:45:37.83371774 +0000 UTC m=+0.087792288 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:45:38 compute-0 ceph-mon[75015]: pgmap v3781: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:38 compute-0 nova_compute[253538]: 2025-11-25 09:45:38.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:45:39 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3782: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:39 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:40 compute-0 nova_compute[253538]: 2025-11-25 09:45:40.059 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:40 compute-0 ceph-mon[75015]: pgmap v3782: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:41 compute-0 sshd-session[458088]: Received disconnect from 62.60.193.188 port 37296:11: Bye Bye [preauth]
Nov 25 09:45:41 compute-0 sshd-session[458088]: Disconnected from authenticating user root 62.60.193.188 port 37296 [preauth]
Nov 25 09:45:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:45:41.134 162739 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:45:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:45:41.135 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:45:41 compute-0 ovn_metadata_agent[162734]: 2025-11-25 09:45:41.135 162739 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:45:41 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3783: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:41 compute-0 sshd-session[458090]: Invalid user pivpn from 165.227.175.225 port 34912
Nov 25 09:45:42 compute-0 nova_compute[253538]: 2025-11-25 09:45:42.007 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:42 compute-0 sshd-session[458090]: Received disconnect from 165.227.175.225 port 34912:11: Bye Bye [preauth]
Nov 25 09:45:42 compute-0 sshd-session[458090]: Disconnected from invalid user pivpn 165.227.175.225 port 34912 [preauth]
Nov 25 09:45:42 compute-0 ceph-mon[75015]: pgmap v3783: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:43 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3784: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:43 compute-0 sshd-session[458092]: Received disconnect from 47.252.72.9 port 38244:11: Bye Bye [preauth]
Nov 25 09:45:43 compute-0 sshd-session[458092]: Disconnected from authenticating user root 47.252.72.9 port 38244 [preauth]
Nov 25 09:45:44 compute-0 ceph-mon[75015]: pgmap v3784: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:44 compute-0 nova_compute[253538]: 2025-11-25 09:45:44.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:45:44 compute-0 nova_compute[253538]: 2025-11-25 09:45:44.564 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:45:44 compute-0 nova_compute[253538]: 2025-11-25 09:45:44.585 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:45:44 compute-0 nova_compute[253538]: 2025-11-25 09:45:44.585 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:45:44 compute-0 nova_compute[253538]: 2025-11-25 09:45:44.585 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:45:44 compute-0 nova_compute[253538]: 2025-11-25 09:45:44.585 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:45:44 compute-0 nova_compute[253538]: 2025-11-25 09:45:44.586 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:45:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:44 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:45:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3716900065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:45:45 compute-0 nova_compute[253538]: 2025-11-25 09:45:45.016 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:45:45 compute-0 nova_compute[253538]: 2025-11-25 09:45:45.062 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:45 compute-0 nova_compute[253538]: 2025-11-25 09:45:45.202 253542 WARNING nova.virt.libvirt.driver [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:45:45 compute-0 nova_compute[253538]: 2025-11-25 09:45:45.204 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3561MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:45:45 compute-0 nova_compute[253538]: 2025-11-25 09:45:45.204 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:45:45 compute-0 nova_compute[253538]: 2025-11-25 09:45:45.204 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:45:45 compute-0 nova_compute[253538]: 2025-11-25 09:45:45.273 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:45:45 compute-0 nova_compute[253538]: 2025-11-25 09:45:45.274 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:45:45 compute-0 nova_compute[253538]: 2025-11-25 09:45:45.292 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:45:45 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3785: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:45 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3716900065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:45:45 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 09:45:45 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/787085369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:45:45 compute-0 nova_compute[253538]: 2025-11-25 09:45:45.732 253542 DEBUG oslo_concurrency.processutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:45:45 compute-0 nova_compute[253538]: 2025-11-25 09:45:45.737 253542 DEBUG nova.compute.provider_tree [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed in ProviderTree for provider: 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:45:45 compute-0 nova_compute[253538]: 2025-11-25 09:45:45.753 253542 DEBUG nova.scheduler.client.report [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Inventory has not changed for provider 06cbbb86-d0ab-41fb-a5e5-8a0de51561b4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:45:45 compute-0 nova_compute[253538]: 2025-11-25 09:45:45.755 253542 DEBUG nova.compute.resource_tracker [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:45:45 compute-0 nova_compute[253538]: 2025-11-25 09:45:45.755 253542 DEBUG oslo_concurrency.lockutils [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:45:46 compute-0 ceph-mon[75015]: pgmap v3785: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:46 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/787085369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:45:47 compute-0 nova_compute[253538]: 2025-11-25 09:45:47.008 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:47 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3786: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:47 compute-0 ceph-mon[75015]: pgmap v3786: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:49 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3787: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:49 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:50 compute-0 nova_compute[253538]: 2025-11-25 09:45:50.066 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:50 compute-0 ceph-mon[75015]: pgmap v3787: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:51 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3788: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:52 compute-0 nova_compute[253538]: 2025-11-25 09:45:52.010 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:52 compute-0 ceph-mon[75015]: pgmap v3788: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:53 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3789: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Optimize plan auto_2025-11-25_09:45:53
Nov 25 09:45:53 compute-0 ceph-mgr[75313]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:45:53 compute-0 ceph-mgr[75313]: [balancer INFO root] do_upmap
Nov 25 09:45:53 compute-0 ceph-mgr[75313]: [balancer INFO root] pools ['volumes', '.mgr', 'vms', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'images']
Nov 25 09:45:53 compute-0 ceph-mgr[75313]: [balancer INFO root] prepared 0/10 changes
Nov 25 09:45:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:45:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:45:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:45:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:45:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:45:53 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:45:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:45:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:45:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:45:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:45:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:45:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:45:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:45:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:45:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:45:54 compute-0 ceph-mgr[75313]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:45:54 compute-0 ceph-mon[75015]: pgmap v3789: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:54 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:55 compute-0 nova_compute[253538]: 2025-11-25 09:45:55.069 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:55 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3790: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:55 compute-0 ceph-mon[75015]: pgmap v3790: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:57 compute-0 nova_compute[253538]: 2025-11-25 09:45:57.012 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:45:57 compute-0 sshd-session[458138]: Accepted publickey for zuul from 192.168.122.10 port 57180 ssh2: ECDSA SHA256:XPT2Qp05XP+4/iPWyxQ1YuG4VjRBRDdk6pBKmAF934E
Nov 25 09:45:57 compute-0 systemd-logind[822]: New session 58 of user zuul.
Nov 25 09:45:57 compute-0 systemd[1]: Started Session 58 of User zuul.
Nov 25 09:45:57 compute-0 sshd-session[458138]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:45:57 compute-0 sudo[458142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 25 09:45:57 compute-0 sudo[458142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:57 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3791: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:58 compute-0 ceph-mon[75015]: pgmap v3791: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:59 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3792: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:45:59 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23437 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:45:59 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:00 compute-0 nova_compute[253538]: 2025-11-25 09:46:00.072 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:46:00 compute-0 ceph-mon[75015]: pgmap v3792: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:00 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23439 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:00 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 09:46:00 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3459262280' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 09:46:01 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3793: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:01 compute-0 ceph-mon[75015]: from='client.23437 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:01 compute-0 ceph-mon[75015]: from='client.23439 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:01 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3459262280' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 09:46:01 compute-0 podman[458392]: 2025-11-25 09:46:01.514202593 +0000 UTC m=+0.057392298 container health_status 5c8d5508db57b4c3da7e80ae11f3a3094e87785cb74f60c98199261dd8b73481 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 09:46:01 compute-0 podman[458391]: 2025-11-25 09:46:01.544533271 +0000 UTC m=+0.082651498 container health_status 201f5ba8a846455dad7681d91099d8c3f33e8ac91298c1330a8b525b94c03938 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 09:46:02 compute-0 nova_compute[253538]: 2025-11-25 09:46:02.014 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:46:02 compute-0 ceph-mon[75015]: pgmap v3793: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:03 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3794: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:03 compute-0 ceph-mon[75015]: pgmap v3794: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:03 compute-0 ovs-vsctl[458462]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 25 09:46:04 compute-0 virtqemud[253839]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 09:46:04 compute-0 virtqemud[253839]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 09:46:04 compute-0 virtqemud[253839]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:04 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:46:04 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:05 compute-0 nova_compute[253538]: 2025-11-25 09:46:05.128 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:46:05 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3795: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:05 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: cache status {prefix=cache status} (starting...)
Nov 25 09:46:05 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: client ls {prefix=client ls} (starting...)
Nov 25 09:46:05 compute-0 lvm[458805]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:46:05 compute-0 lvm[458805]: VG ceph_vg0 finished
Nov 25 09:46:05 compute-0 lvm[458825]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 09:46:05 compute-0 lvm[458825]: VG ceph_vg1 finished
Nov 25 09:46:05 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23443 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:05 compute-0 lvm[458835]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 09:46:05 compute-0 lvm[458835]: VG ceph_vg2 finished
Nov 25 09:46:06 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23445 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:06 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: damage ls {prefix=damage ls} (starting...)
Nov 25 09:46:06 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump loads {prefix=dump loads} (starting...)
Nov 25 09:46:06 compute-0 ceph-mon[75015]: pgmap v3795: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:06 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 25 09:46:06 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 25 09:46:06 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 25 09:46:06 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 25 09:46:06 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/946772714' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 09:46:06 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 25 09:46:07 compute-0 nova_compute[253538]: 2025-11-25 09:46:07.014 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:46:07 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23451 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:07 compute-0 ceph-mgr[75313]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 09:46:07 compute-0 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T09:46:07.035+0000 7f6347321640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 09:46:07 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 25 09:46:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:46:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/967422664' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:46:07 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 25 09:46:07 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3796: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 25 09:46:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/847336372' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 09:46:07 compute-0 ceph-mon[75015]: from='client.23443 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:07 compute-0 ceph-mon[75015]: from='client.23445 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/946772714' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 09:46:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/967422664' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:46:07 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/847336372' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 09:46:07 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: ops {prefix=ops} (starting...)
Nov 25 09:46:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 25 09:46:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2244514426' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 09:46:07 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 09:46:07 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559998018' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 09:46:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 25 09:46:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230869845' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 09:46:08 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: session ls {prefix=session ls} (starting...)
Nov 25 09:46:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 09:46:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531798828' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 09:46:08 compute-0 ceph-mds[101326]: mds.cephfs.compute-0.dgfvvi asok_command: status {prefix=status} (starting...)
Nov 25 09:46:08 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23465 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:08 compute-0 ceph-mon[75015]: from='client.23451 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:08 compute-0 ceph-mon[75015]: pgmap v3796: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2244514426' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 09:46:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1559998018' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 09:46:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3230869845' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 09:46:08 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1531798828' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 09:46:08 compute-0 ceph-mon[75015]: from='client.23465 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:08 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 09:46:08 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/73070914' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 09:46:08 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23469 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:08 compute-0 podman[459249]: 2025-11-25 09:46:08.845121884 +0000 UTC m=+0.095924470 container health_status 8ad1e319ea263c63021f01bb2d6f18ca5aea205883339692c6b10bddfa993191 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:46:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 09:46:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3171255462' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:46:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 25 09:46:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2116851053' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 09:46:09 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3797: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 25 09:46:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4178038085' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 09:46:09 compute-0 sshd-session[459160]: Invalid user laravel from 152.70.84.178 port 50434
Nov 25 09:46:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/73070914' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 09:46:09 compute-0 ceph-mon[75015]: from='client.23469 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3171255462' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:46:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2116851053' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 09:46:09 compute-0 ceph-mon[75015]: pgmap v3797: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:09 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4178038085' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 09:46:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 25 09:46:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3961891983' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 09:46:09 compute-0 sshd-session[459160]: Received disconnect from 152.70.84.178 port 50434:11: Bye Bye [preauth]
Nov 25 09:46:09 compute-0 sshd-session[459160]: Disconnected from invalid user laravel 152.70.84.178 port 50434 [preauth]
Nov 25 09:46:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 09:46:09 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/118572444' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 09:46:09 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:10 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23481 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:10 compute-0 ceph-mgr[75313]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 09:46:10 compute-0 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T09:46:10.096+0000 7f6347321640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 09:46:10 compute-0 nova_compute[253538]: 2025-11-25 09:46:10.130 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:46:10 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23483 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 25 09:46:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3649500046' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 09:46:10 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23487 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3961891983' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 09:46:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/118572444' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 09:46:10 compute-0 ceph-mon[75015]: from='client.23481 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:10 compute-0 ceph-mon[75015]: from='client.23483 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:10 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3649500046' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 09:46:10 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23491 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:10 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 25 09:46:10 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2776042938' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 09:46:11 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3798: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:11 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23493 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 09:46:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4144628325' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 09:46:11 compute-0 ceph-mon[75015]: from='client.23487 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:11 compute-0 ceph-mon[75015]: from='client.23491 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:11 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2776042938' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 09:46:11 compute-0 ceph-mon[75015]: pgmap v3798: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:11 compute-0 ceph-mon[75015]: from='client.23493 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:11 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4144628325' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 09:46:11 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23497 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:58.733001+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:12:59.733371+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:00.733540+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:01.733773+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:02.733960+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:03.734156+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:04.734575+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:05.734806+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:06.735051+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:07.735270+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:08.735471+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:09.735750+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:10.735927+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:11.736099+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:12.736286+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:13.736487+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884287 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 49774592 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:14.736694+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 49766400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:15.736856+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129e9a1e0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a1294a74a0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 49766400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a1276214a0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12abf9000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12abf9000 session 0x55a128516d20
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.088615417s of 44.159133911s, submitted: 19
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12844c000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a129d8f680
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a12808c780
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a1299c8000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12e96d000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e96d000 session 0x55a129ce90e0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:16.737035+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4c7000/0x0/0x4ffc00000, data 0x14706dc/0x15e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:17.737220+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:18.737467+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906377 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:19.737639+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:20.737850+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:21.738044+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4c7000/0x0/0x4ffc00000, data 0x14706dc/0x15e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:22.738189+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 53256192 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4c7000/0x0/0x4ffc00000, data 0x14706dc/0x15e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a1280734a0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:23.738359+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2909117 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:24.738517+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:25.738684+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:26.738822+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a3000/0x0/0x4ffc00000, data 0x14946dc/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:27.739089+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:28.739296+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924957 data_alloc: 218103808 data_used: 9543680
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a3000/0x0/0x4ffc00000, data 0x14946dc/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:29.739438+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:30.739590+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:31.739734+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:32.739864+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a3000/0x0/0x4ffc00000, data 0x14946dc/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:33.740026+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924957 data_alloc: 218103808 data_used: 9543680
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec4a3000/0x0/0x4ffc00000, data 0x14946dc/0x160b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:34.740172+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 52944896 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.063507080s of 19.188037872s, submitted: 5
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:35.740378+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286867456 unmapped: 50593792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:36.740890+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 50577408 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:37.741006+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:38.741338+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004787 data_alloc: 218103808 data_used: 10338304
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebc76000/0x0/0x4ffc00000, data 0x1cc16dc/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:39.743174+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:40.743482+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:41.743867+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:42.744126+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebc76000/0x0/0x4ffc00000, data 0x1cc16dc/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:43.744341+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3005123 data_alloc: 218103808 data_used: 10346496
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:44.744567+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:45.745372+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:46.745738+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 286212096 unmapped: 51249152 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:47.746011+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a1294e1680
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12855d0e0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b4d1000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a129f4ab40
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b692800
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b692800 session 0x55a1294f3860
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.005581856s of 12.598716736s, submitted: 51
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 45473792 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12962b2c0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12844d2c0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebc76000/0x0/0x4ffc00000, data 0x1cc16dc/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129f4a780
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b4d1000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a1294a7c20
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1299f6c00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1299f6c00 session 0x55a12761c960
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:48.746351+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057527 data_alloc: 218103808 data_used: 10346496
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:49.746734+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:50.747115+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:51.747293+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb68e000/0x0/0x4ffc00000, data 0x22a86ec/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:52.747540+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:53.747719+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057527 data_alloc: 218103808 data_used: 10346496
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:54.747893+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:55.748224+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb68e000/0x0/0x4ffc00000, data 0x22a86ec/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 49717248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129eb2f00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:56.748383+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a129e9d860
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129eb25a0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b4d1000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a129d8e780
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:57.748516+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698c00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12e49fc00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:58.748690+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3059899 data_alloc: 218103808 data_used: 10350592
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:59.748794+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb66a000/0x0/0x4ffc00000, data 0x22cc6ec/0x2444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:00.748942+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:01.749128+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:02.749267+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb66a000/0x0/0x4ffc00000, data 0x22cc6ec/0x2444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:03.749446+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3104219 data_alloc: 218103808 data_used: 16457728
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:04.749617+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:05.749807+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eb66a000/0x0/0x4ffc00000, data 0x22cc6ec/0x2444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:06.749985+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:07.750138+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:08.750383+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3104219 data_alloc: 218103808 data_used: 16457728
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 49692672 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:09.750619+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.668926239s of 21.840452194s, submitted: 15
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292233216 unmapped: 45228032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:10.750757+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebda6000/0x0/0x4ffc00000, data 0x2bca6ec/0x2d42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292864000 unmapped: 44597248 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:11.759557+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:12.759691+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:13.759839+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194841 data_alloc: 234881024 data_used: 18636800
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:14.760041+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:15.760229+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebd8e000/0x0/0x4ffc00000, data 0x2be06ec/0x2d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:16.760401+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebd8e000/0x0/0x4ffc00000, data 0x2be06ec/0x2d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:17.760541+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:18.760767+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3196601 data_alloc: 234881024 data_used: 18771968
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:19.760953+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ebd8e000/0x0/0x4ffc00000, data 0x2be06ec/0x2d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:20.761192+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:21.761431+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698c00 session 0x55a128547a40
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e49fc00 session 0x55a12761c1e0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292872192 unmapped: 44589056 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.503499985s of 12.766171455s, submitted: 100
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:22.761590+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a128516b40
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292880384 unmapped: 44580864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:23.761812+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013621 data_alloc: 218103808 data_used: 10407936
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292880384 unmapped: 44580864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4eccb6000/0x0/0x4ffc00000, data 0x1cc16dc/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:24.761971+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292880384 unmapped: 44580864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:25.762135+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292880384 unmapped: 44580864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a1294a70e0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a1294f3a40
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:26.762336+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12850d860
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:27.762541+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:28.762712+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:29.762941+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:30.763077+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:31.763267+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:32.763479+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:33.763664+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:34.763939+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:35.764120+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:36.764375+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:37.764558+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:38.764748+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:39.764906+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:40.765062+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:41.765244+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:42.765456+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:43.765671+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:44.765851+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:45.766041+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:46.766242+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:47.766388+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:48.766557+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290447360 unmapped: 47013888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:49.766729+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:50.766900+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:51.767118+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:52.767405+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:53.767602+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:54.767763+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:55.767974+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:56.768170+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:57.768396+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:58.768582+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:59.768787+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:00.768990+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:01.769169+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:02.769379+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:03.769581+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2901715 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290455552 unmapped: 47005696 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:04.769725+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.827690125s of 42.880455017s, submitted: 18
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290463744 unmapped: 46997504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a128094f00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a128073860
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:05.769885+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a127959e00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129d5de00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12e49fc00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12e49fc00 session 0x55a12ad874a0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:06.770071+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed282000/0x0/0x4ffc00000, data 0x16f66cc/0x186c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:07.770296+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:08.770532+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939984 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed282000/0x0/0x4ffc00000, data 0x16f66cc/0x186c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:09.770728+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12962a1e0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:10.770864+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:11.771009+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 46907392 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:12.771126+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290594816 unmapped: 46866432 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:13.771252+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2969544 data_alloc: 218103808 data_used: 11149312
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290594816 unmapped: 46866432 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed25e000/0x0/0x4ffc00000, data 0x171a6cc/0x1890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:14.771376+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed25e000/0x0/0x4ffc00000, data 0x171a6cc/0x1890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:15.771484+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:16.771628+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:17.771762+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:18.771906+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2969544 data_alloc: 218103808 data_used: 11149312
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:19.772019+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed25e000/0x0/0x4ffc00000, data 0x171a6cc/0x1890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:20.772547+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289464320 unmapped: 47996928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:21.772684+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed25e000/0x0/0x4ffc00000, data 0x171a6cc/0x1890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289472512 unmapped: 47988736 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:22.772859+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.737442017s of 17.874473572s, submitted: 19
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291201024 unmapped: 46260224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:23.773053+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013660 data_alloc: 218103808 data_used: 11976704
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:24.773186+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:25.773362+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:26.773561+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:27.773728+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:28.773888+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3015580 data_alloc: 218103808 data_used: 12120064
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:29.774095+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:30.774413+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:31.774659+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:32.774820+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:33.774976+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3015900 data_alloc: 218103808 data_used: 12128256
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:34.775177+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:35.775361+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:36.775508+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291209216 unmapped: 46252032 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:37.775772+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.645758629s of 14.787140846s, submitted: 29
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a12844c960
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a12ad87680
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b4d1000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a12ad87a40
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935dc00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935dc00 session 0x55a12ad865a0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12ad86f00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecedc000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:38.776021+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051303 data_alloc: 218103808 data_used: 12128256
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:39.776203+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:40.776389+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:41.776545+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:42.776680+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291217408 unmapped: 46243840 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:43.776868+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051303 data_alloc: 218103808 data_used: 12128256
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291225600 unmapped: 46235648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:44.777001+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291225600 unmapped: 46235648 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:45.777192+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a12962a5a0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291233792 unmapped: 46227456 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:46.777368+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b4d1000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291299328 unmapped: 46161920 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:47.777507+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:48.777661+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3077383 data_alloc: 218103808 data_used: 15802368
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:49.777780+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:50.777941+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:51.778090+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:52.778234+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:53.778837+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3077383 data_alloc: 218103808 data_used: 15802368
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:54.779367+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:55.779858+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:56.780351+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 45309952 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:57.780669+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb0c000/0x0/0x4ffc00000, data 0x1e6b72e/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.151699066s of 20.260654449s, submitted: 29
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293240832 unmapped: 44220416 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:58.780876+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131253 data_alloc: 218103808 data_used: 16269312
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293249024 unmapped: 44212224 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:59.781010+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:00.781232+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:01.781486+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:02.781819+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec31b000/0x0/0x4ffc00000, data 0x264e72e/0x27c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:03.782016+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3142553 data_alloc: 218103808 data_used: 16080896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:04.782142+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 42934272 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:05.782368+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 42926080 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:06.782495+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a12962a780
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a129ac61e0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:07.782611+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 42926080 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007dc00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a13007dc00 session 0x55a129ce85a0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:08.782777+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecb2f000/0x0/0x4ffc00000, data 0x1a9b6cc/0x1c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3022497 data_alloc: 218103808 data_used: 12189696
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:09.782950+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:10.783087+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.365324974s of 12.755796432s, submitted: 111
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:11.783385+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:12.783630+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293789696 unmapped: 43671552 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a129d7fa40
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a129d8f2c0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a127620780
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:13.783801+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:14.783953+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:15.784195+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:16.784507+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:17.784664+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:18.784968+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:19.785191+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:20.785396+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:21.785601+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:22.785795+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:23.786073+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:24.786434+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:25.786652+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:26.786847+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:27.787059+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:28.787375+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:29.787544+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:30.787737+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:31.787940+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:32.788223+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 43655168 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:33.788596+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:34.788782+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:35.788989+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:36.789151+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:37.789297+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:38.789512+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:39.789702+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:40.789887+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 43646976 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:41.790029+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 43638784 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:42.790189+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 43638784 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:43.790358+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 43638784 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915481 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:44.790494+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 33.636383057s of 33.703060150s, submitted: 22
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298565632 unmapped: 38895616 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,7])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a128517c20
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a129e9b860
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129d5cb40
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a1280952c0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a12761c780
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:45.790659+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:46.790805+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:47.790927+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed2ab000/0x0/0x4ffc00000, data 0x16cd6cc/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129f4a1e0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:48.791142+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f3e000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a129f3e000 session 0x55a12962b4a0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956231 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:49.791337+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 43622400 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a128517680
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12761c3c0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:50.791589+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 43474944 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:51.791793+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:52.791963+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:53.792183+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a12962b680
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a128665860
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993716 data_alloc: 218103808 data_used: 12095488
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:54.792374+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:55.792516+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:56.792808+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:57.792952+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:58.793217+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12b4d1000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007dc00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.936422348s of 14.315987587s, submitted: 15
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993848 data_alloc: 218103808 data_used: 12095488
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:59.793368+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:00.793537+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12b4d1000 session 0x55a12ad865a0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a13007dc00 session 0x55a129d5cf00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:01.793663+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 43450368 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:02.793881+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:03.794058+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993716 data_alloc: 218103808 data_used: 12095488
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:04.794249+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:05.794474+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:06.795276+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed287000/0x0/0x4ffc00000, data 0x16f16cc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:07.795369+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 43442176 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12ad87a40
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a129eb21e0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:08.795494+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a12ad87680
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:09.795646+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:10.795864+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:11.796280+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:12.796588+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:13.796807+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:14.796960+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:15.797252+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:16.797480+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:17.797745+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:18.797968+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:19.798163+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:20.798412+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:21.798623+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:22.798938+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:23.799090+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:24.799246+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:25.799366+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:26.799489+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:27.799600+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:28.799748+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291471360 unmapped: 45989888 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:29.799915+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:30.800052+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:31.800413+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:32.800586+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:33.800839+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291487744 unmapped: 45973504 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:34.801087+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:35.801228+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:36.801507+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:37.801672+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:38.801904+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:39.802117+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291495936 unmapped: 45965312 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:40.802539+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:41.802727+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:42.802934+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:43.803199+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:44.803386+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921114 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291504128 unmapped: 45957120 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:45.803563+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ed730000/0x0/0x4ffc00000, data 0x12486cc/0x13be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291512320 unmapped: 45948928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:46.803794+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291512320 unmapped: 45948928 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12951c000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 48.175201416s of 48.326942444s, submitted: 23
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12951c000 session 0x55a129d8e780
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:47.803955+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a12844d2c0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a1294f3860
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a1294e1680
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007dc00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a13007dc00 session 0x55a12761cf00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:48.804164+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:49.804376+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999881 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:50.804512+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:51.804673+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:52.804863+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12d8ce800
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12d8ce800 session 0x55a128094f00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:53.804988+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:54.805148+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000013 data_alloc: 218103808 data_used: 7364608
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 291536896 unmapped: 45924352 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:55.805781+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:56.805936+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:57.806199+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:58.806582+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:59.806875+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064813 data_alloc: 218103808 data_used: 16584704
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:00.807024+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ecd45000/0x0/0x4ffc00000, data 0x1c336cc/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:01.831237+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:02.833156+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:03.834451+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:04.836847+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064813 data_alloc: 218103808 data_used: 16584704
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 292847616 unmapped: 44613632 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:05.837636+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.436155319s of 18.604982376s, submitted: 24
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 43556864 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:06.837877+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec693000/0x0/0x4ffc00000, data 0x22e56cc/0x245b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:07.838448+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec685000/0x0/0x4ffc00000, data 0x22f26cc/0x2468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:08.838869+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:09.839232+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134245 data_alloc: 234881024 data_used: 17334272
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:10.839759+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec685000/0x0/0x4ffc00000, data 0x22f26cc/0x2468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:11.839905+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:12.840043+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec685000/0x0/0x4ffc00000, data 0x22f26cc/0x2468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:13.840355+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:14.840512+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134245 data_alloc: 234881024 data_used: 17334272
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:15.840584+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:16.840848+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:17.840970+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.365725517s of 12.565147400s, submitted: 42
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a127698400 session 0x55a129d5de00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a1286a4000 session 0x55a12855c3c0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:18.841146+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 heartbeat osd_stat(store_statfs(0x4ec685000/0x0/0x4ffc00000, data 0x22f26cc/0x2468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 43417600 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 ms_handle_reset con 0x55a12935d400 session 0x55a129d7e000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:19.841280+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131957 data_alloc: 234881024 data_used: 17362944
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007dc00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294060032 unmapped: 43401216 heap: 337461248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:20.841424+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 266 ms_handle_reset con 0x55a13007dc00 session 0x55a129f53860
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 266 ms_handle_reset con 0x55a127eb5400 session 0x55a12808d2c0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 266 ms_handle_reset con 0x55a1286a5000 session 0x55a128579a40
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 266 ms_handle_reset con 0x55a127eb5400 session 0x55a129f4a5a0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 299204608 unmapped: 41099264 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 267 handle_osd_map epochs [267,267], i have 267, src has [1,267]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 267 ms_handle_reset con 0x55a127698400 session 0x55a1299c9860
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:21.841551+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 40034304 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:22.841697+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 267 ms_handle_reset con 0x55a1286a4000 session 0x55a129ac6960
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 267 heartbeat osd_stat(store_statfs(0x4ebc4a000/0x0/0x4ffc00000, data 0x2d29e2a/0x2ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 267 handle_osd_map epochs [268,268], i have 268, src has [1,268]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12935d400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 268 ms_handle_reset con 0x55a12935d400 session 0x55a1294f2b40
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:23.841901+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 268 ms_handle_reset con 0x55a127698400 session 0x55a12ad86000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 268 ms_handle_reset con 0x55a127eb5400 session 0x55a12761d2c0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 268 heartbeat osd_stat(store_statfs(0x4ebc46000/0x0/0x4ffc00000, data 0x2d2b9c3/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:24.842025+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3255591 data_alloc: 234881024 data_used: 24862720
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 268 heartbeat osd_stat(store_statfs(0x4ebc46000/0x0/0x4ffc00000, data 0x2d2b9c3/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:25.842348+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 268 heartbeat osd_stat(store_statfs(0x4ebc46000/0x0/0x4ffc00000, data 0x2d2b9c3/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:26.842577+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 268 heartbeat osd_stat(store_statfs(0x4ebc46000/0x0/0x4ffc00000, data 0x2d2b9c3/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 40026112 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:27.842713+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 40017920 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.536736488s of 10.118284225s, submitted: 64
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:28.842933+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a1286a4000 session 0x55a127620960
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:29.843120+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033163 data_alloc: 218103808 data_used: 7397376
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:30.843413+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:31.843607+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:32.843760+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 269 heartbeat osd_stat(store_statfs(0x4eccee000/0x0/0x4ffc00000, data 0x1c83426/0x1dff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:33.843913+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:34.844088+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033163 data_alloc: 218103808 data_used: 7397376
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:35.844273+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 44343296 heap: 340303872 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a1286a5000 session 0x55a12844c960
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007dc00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127eb4000 session 0x55a129e9af00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a13007dc00 session 0x55a1285474a0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127698400 session 0x55a129ce81e0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127eb5400 session 0x55a129f53c20
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:36.844374+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a1286a4000 session 0x55a129ac6f00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a1286a5000 session 0x55a1294a7860
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 316350464 unmapped: 36495360 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 269 heartbeat osd_stat(store_statfs(0x4eb5ae000/0x0/0x4ffc00000, data 0x33c3488/0x3540000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,1,0,0,4,1,3])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:37.844507+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127698400 session 0x55a129f4b680
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 46792704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:38.844778+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 46792704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 269 ms_handle_reset con 0x55a127eb5400 session 0x55a128001c20
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:39.845003+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 269 heartbeat osd_stat(store_statfs(0x4eb5aa000/0x0/0x4ffc00000, data 0x33c7488/0x3544000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263674 data_alloc: 234881024 data_used: 18100224
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 46792704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:40.845197+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007dc00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 46792704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:41.845366+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.454534531s of 13.402193069s, submitted: 53
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 270 ms_handle_reset con 0x55a13007dc00 session 0x55a1299c9c20
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:42.845492+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:43.845626+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 270 heartbeat osd_stat(store_statfs(0x4ebfdb000/0x0/0x4ffc00000, data 0x2995049/0x2b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:44.845762+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194453 data_alloc: 218103808 data_used: 15204352
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:45.845875+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 270 heartbeat osd_stat(store_statfs(0x4ebfdb000/0x0/0x4ffc00000, data 0x2995049/0x2b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:46.846120+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 270 heartbeat osd_stat(store_statfs(0x4ebfdb000/0x0/0x4ffc00000, data 0x2995049/0x2b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:47.846521+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfdb000/0x0/0x4ffc00000, data 0x2995049/0x2b12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:48.846786+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:49.846910+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197427 data_alloc: 218103808 data_used: 15204352
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:50.847061+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:51.847275+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298696704 unmapped: 54149120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:53.788035+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.603380203s of 11.698580742s, submitted: 38
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd8000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298754048 unmapped: 54091776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:54.788185+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216491 data_alloc: 234881024 data_used: 17309696
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:55.788360+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:56.788537+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:57.788712+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:58.788868+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:59.789058+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216651 data_alloc: 234881024 data_used: 17313792
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:00.789261+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:01.789415+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:02.789592+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:03.789781+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:04.789945+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216651 data_alloc: 234881024 data_used: 17313792
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:05.790106+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:06.790285+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:07.790449+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.006009102s of 14.082138062s, submitted: 3
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:08.790581+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:09.790724+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216667 data_alloc: 234881024 data_used: 17309696
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:10.790865+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:11.791095+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:12.791228+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:13.791364+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:14.791504+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216667 data_alloc: 234881024 data_used: 17309696
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:15.791627+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:16.791787+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ebfd9000/0x0/0x4ffc00000, data 0x2996aac/0x2b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:17.791912+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 54059008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.220427513s of 10.246352196s, submitted: 3
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a4000 session 0x55a12ad87860
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a5000 session 0x55a129eb3860
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:18.792049+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a5000 session 0x55a12ad865a0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:19.792248+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972369 data_alloc: 218103808 data_used: 7401472
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:20.792392+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:21.792529+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:22.792649+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:23.792807+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:24.792935+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972529 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:25.793084+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:26.793231+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:27.793432+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:28.793600+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:29.793788+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972529 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:30.793965+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:31.794128+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:32.794272+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:33.794459+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:34.794600+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972529 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:35.794752+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:36.794920+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:37.795058+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:38.795191+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:39.795421+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972529 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:40.795569+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:41.795690+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed4ce000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:42.795925+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.520774841s of 24.589715958s, submitted: 20
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289693696 unmapped: 63152128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127698400 session 0x55a129f4a1e0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127eb5400 session 0x55a129d8f2c0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a4000 session 0x55a129ce85a0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007dc00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a13007dc00 session 0x55a129d5d680
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127698400 session 0x55a12850cf00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:43.796073+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ecd3c000/0x0/0x4ffc00000, data 0x1c34a4a/0x1db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289701888 unmapped: 63143936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127eb5400 session 0x55a12ad86960
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:44.796288+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a4000 session 0x55a129d5cd20
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289701888 unmapped: 63143936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048048 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a5000 session 0x55a1285ba780
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a13007e000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:45.796430+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 289701888 unmapped: 63143936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a13007e000 session 0x55a129d7f0e0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:46.797393+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290750464 unmapped: 62095360 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:47.797545+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 290750464 unmapped: 62095360 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:48.797633+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 58523648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127698400 session 0x55a129eb30e0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a127eb5400 session 0x55a12844cb40
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ecd3c000/0x0/0x4ffc00000, data 0x1c34a4a/0x1db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 ms_handle_reset con 0x55a1286a4000 session 0x55a129ac6b40
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:49.797748+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:50.797892+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:51.798077+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5402.4 total, 600.0 interval
                                           Cumulative writes: 35K writes, 142K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.84 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1830 writes, 8268 keys, 1830 commit groups, 1.0 writes per commit group, ingest: 10.35 MB, 0.02 MB/s
                                           Interval WAL: 1830 writes, 682 syncs, 2.68 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets getting new tickets!
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:52.798448+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _finish_auth 0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:52.826254+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:53.798586+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:54.798729+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:55.798851+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:56.798978+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:57.799092+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:58.799276+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:59.799497+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:00.799712+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:01.799911+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:02.800086+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:03.800247+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:04.800441+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:05.800654+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:06.800786+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:07.800909+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:08.801056+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:09.801228+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:10.801347+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:11.801488+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:12.801715+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: mgrc ms_handle_reset ms_handle_reset con 0x55a129f3ac00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3119838916
Nov 25 09:46:11 compute-0 ceph-osd[90711]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3119838916,v1:192.168.122.100:6801/3119838916]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: get_auth_request con 0x55a13007e000 auth_method 0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: mgrc handle_mgr_configure stats_period=5
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:13.801932+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:14.802149+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:15.802348+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:16.802761+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:17.802914+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:18.803030+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:19.803163+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:20.803271+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:21.803455+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:22.803733+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:23.803941+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:24.804112+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:25.804418+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:26.804580+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:27.804721+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:28.804843+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:29.805002+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:30.805197+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:31.805392+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:32.805589+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:33.805728+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:34.805884+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:35.806036+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:36.806185+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:37.806279+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:38.806364+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:39.806555+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:40.806689+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:41.806839+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:42.806960+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:43.807116+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:44.807278+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:45.807447+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:46.807584+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:47.807738+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:48.807941+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:49.808110+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:50.808270+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:51.808472+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:52.808612+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:53.808763+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293273600 unmapped: 59572224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:54.808926+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978242 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:55.809069+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:56.809206+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:57.809425+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:58.809607+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:59.809769+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293281792 unmapped: 59564032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 77.009826660s of 77.842643738s, submitted: 41
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 heartbeat osd_stat(store_statfs(0x4ed71e000/0x0/0x4ffc00000, data 0x1252a4a/0x13d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2977528 data_alloc: 218103808 data_used: 7405568
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:00.809916+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293298176 unmapped: 59547648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 272 ms_handle_reset con 0x55a1286a5000 session 0x55a129eb2b40
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:01.810111+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286de800
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293298176 unmapped: 59547648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:02.810282+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 273 ms_handle_reset con 0x55a1286de800 session 0x55a129d7eb40
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293314560 unmapped: 59531264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:03.810462+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293314560 unmapped: 59531264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:04.810632+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293314560 unmapped: 59531264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923180 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:05.810820+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293322752 unmapped: 59523072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 273 heartbeat osd_stat(store_statfs(0x4edfb7000/0x0/0x4ffc00000, data 0x9b7089/0xb35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:06.810994+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293322752 unmapped: 59523072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:07.811201+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293322752 unmapped: 59523072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:08.811407+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb7000/0x0/0x4ffc00000, data 0x9b7089/0xb35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:09.811739+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:10.811920+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:11.812056+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:12.812215+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293339136 unmapped: 59506688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:13.812393+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:14.812578+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:15.812718+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:16.812861+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:17.813164+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:18.813459+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:19.813642+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293347328 unmapped: 59498496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:20.814048+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293363712 unmapped: 59482112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:21.814342+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293363712 unmapped: 59482112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:22.814477+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293363712 unmapped: 59482112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 ms_handle_reset con 0x55a12769b400 session 0x55a1294e1a40
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127698400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:23.814636+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293363712 unmapped: 59482112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:24.814775+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:25.814897+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:26.815103+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:27.815293+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:28.815473+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293371904 unmapped: 59473920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:29.815694+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:30.815854+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:31.816039+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:32.816234+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:33.816546+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb5000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:34.816684+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:35.816907+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925482 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293380096 unmapped: 59465728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.864181519s of 36.010292053s, submitted: 49
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:36.817046+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 293396480 unmapped: 59449344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:37.817221+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:38.817375+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:39.817565+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:40.817677+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:41.817876+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:42.818046+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:43.818208+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 58376192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:44.818391+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:45.818769+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:46.818995+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:47.819138+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:48.819254+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:49.819351+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:50.819472+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:51.819637+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 58368000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:52.819825+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:53.820208+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:54.820379+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:55.820547+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:56.820735+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 58359808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:57.820880+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:58.820991+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:59.821135+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:00.821277+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:01.821368+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:02.821525+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:03.821729+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:04.821941+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 58351616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:05.822054+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 58343424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:06.822445+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 58343424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:07.822629+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 58343424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:08.822806+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 58343424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:09.823033+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:10.823261+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:11.823446+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:12.823736+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:13.823956+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:14.824117+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:15.824283+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:16.824511+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:17.824682+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:18.824904+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:19.825135+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 58335232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:20.825272+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:21.825426+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:22.825681+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:23.825951+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:24.826094+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 58327040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:25.826275+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:26.826499+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:27.826632+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:28.826844+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:29.827047+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:30.827192+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:31.827320+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:32.827470+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 58318848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:33.827611+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 58302464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:34.827824+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 58302464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:35.827998+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 58302464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:36.828148+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 58302464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:37.828288+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 58294272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:38.828457+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 58294272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:39.828657+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 58294272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:40.828802+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 58294272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:41.828960+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:42.829115+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:43.829282+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:44.829388+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:45.829630+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:46.829772+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:47.830097+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 58286080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:48.830262+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 58277888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:49.830457+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:50.830620+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:51.830760+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:52.830898+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:53.831156+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:54.831415+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:55.831646+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:56.831894+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 58261504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:57.832101+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 58253312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:58.832368+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 58253312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12769b400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 82.001701355s of 82.319427490s, submitted: 90
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 ms_handle_reset con 0x55a12769b400 session 0x55a1285ba780
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 ms_handle_reset con 0x55a127eb5400 session 0x55a12850cf00
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:59.832636+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:00.832851+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:01.833086+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:02.833272+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:03.833491+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:04.833798+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:05.834027+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 58245120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2924602 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:06.834236+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 58228736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9b8b0c/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:07.834413+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 274 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Nov 25 09:46:11 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 275 ms_handle_reset con 0x55a1286a4000 session 0x55a129f4a1e0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:08.834653+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 275 heartbeat osd_stat(store_statfs(0x4edfb4000/0x0/0x4ffc00000, data 0x9ba687/0xb38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:09.834925+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:10.835111+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.517580032s of 12.192517281s, submitted: 57
Nov 25 09:46:11 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015543659' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925069 data_alloc: 218103808 data_used: 7413760
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:11.835254+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 275 handle_osd_map epochs [275,276], i have 275, src has [1,276]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 275 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:12.835460+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294748160 unmapped: 58097664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 276 heartbeat osd_stat(store_statfs(0x4ee7b2000/0x0/0x4ffc00000, data 0x1bc258/0x33b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 276 ms_handle_reset con 0x55a1286a5000 session 0x55a12850c780
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:13.835610+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:14.835757+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 277 heartbeat osd_stat(store_statfs(0x4ee7af000/0x0/0x4ffc00000, data 0x1bdcd7/0x33e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:15.835909+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2857665 data_alloc: 218103808 data_used: 679936
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:16.836074+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:17.836211+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 58089472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:18.836426+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 58073088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 277 heartbeat osd_stat(store_statfs(0x4ee7af000/0x0/0x4ffc00000, data 0x1bdcd7/0x33e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 277 handle_osd_map epochs [278,278], i have 278, src has [1,278]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:19.836595+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294789120 unmapped: 58056704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:20.836754+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 278 heartbeat osd_stat(store_statfs(0x4ee7ac000/0x0/0x4ffc00000, data 0x1bf73a/0x341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294789120 unmapped: 58056704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2860639 data_alloc: 218103808 data_used: 679936
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:21.836902+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 278 heartbeat osd_stat(store_statfs(0x4ee7ac000/0x0/0x4ffc00000, data 0x1bf73a/0x341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:22.837045+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 278 heartbeat osd_stat(store_statfs(0x4ee7ac000/0x0/0x4ffc00000, data 0x1bf73a/0x341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:23.837233+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:24.837380+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f44800
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.606686592s of 14.555405617s, submitted: 67
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:25.837526+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 58048512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 278 handle_osd_map epochs [278,279], i have 278, src has [1,279]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 ms_handle_reset con 0x55a129f44800 session 0x55a129d8f680
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:26.837694+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294805504 unmapped: 58040320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:27.837839+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294805504 unmapped: 58040320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:28.837960+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294805504 unmapped: 58040320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:29.838143+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 58032128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:30.838280+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 58032128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:31.838475+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 58032128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:32.838647+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:33.838794+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:34.838920+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:35.839079+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:36.839240+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:37.839365+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 58023936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:38.839566+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:39.839796+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:40.839962+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:41.840156+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:42.840395+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:43.840848+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:44.841051+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 58015744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:45.841223+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 58007552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:46.841422+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 58007552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:47.841605+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 58007552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:48.841842+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 58007552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:49.842198+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 57991168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:50.842460+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 57991168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:51.843424+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 57991168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:52.843589+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 57991168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:53.843748+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:54.843952+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:55.844188+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:56.844396+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:57.844670+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:58.844852+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:59.845444+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:00.845682+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294862848 unmapped: 57982976 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:01.846032+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 57974784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:02.846383+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 57974784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:03.846534+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 57974784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:04.846685+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 57974784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:05.846892+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:06.847102+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:07.847402+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:08.847525+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:09.847758+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:10.847949+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:11.848096+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:12.848427+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 57958400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:13.848653+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 57950208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:14.848989+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 57950208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:15.849134+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 57950208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:16.849288+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 57950208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:17.849833+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:18.850055+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:19.850365+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:20.850542+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:21.850719+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:22.850807+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:23.850924+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 57942016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:24.851080+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:25.851248+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:26.851417+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:27.851573+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:28.851765+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 57933824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:29.851935+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 57925632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:30.852061+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 57925632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:31.852177+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 57925632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:32.852288+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 57925632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:33.852430+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 57917440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:34.852578+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 57917440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:35.853094+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 57917440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:36.853272+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:37.853446+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:38.853612+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:39.853838+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:40.853974+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 57909248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:41.854123+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:42.854279+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:43.854468+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:44.854605+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:45.854792+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:46.854982+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:47.855154+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:48.855353+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 57901056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:49.855550+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:50.855682+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:51.855857+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:52.856039+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:53.856180+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:54.856339+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:55.856463+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 57884672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:56.856623+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294969344 unmapped: 57876480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:57.856780+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:58.856952+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:59.857154+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:00.857338+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:01.857500+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:02.857682+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:03.857878+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 57868288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:04.858061+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294985728 unmapped: 57860096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:05.858203+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 57851904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:06.858381+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 57851904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:07.858593+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 57851904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:08.858767+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 57851904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:09.858956+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12769b400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 104.417770386s of 104.486747742s, submitted: 10
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 57843712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:10.859105+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 57843712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4ee7a8000/0x0/0x4ffc00000, data 0x1c12da/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:11.859231+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865810 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 49455104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:12.859364+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 57843712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:13.859511+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4edfa9000/0x0/0x4ffc00000, data 0x9c12da/0xb45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 57835520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:14.859726+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 57835520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:15.859912+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 57835520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:16.860146+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919522 data_alloc: 218103808 data_used: 688128
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 57835520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:17.860398+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 heartbeat osd_stat(store_statfs(0x4edfa9000/0x0/0x4ffc00000, data 0x9c12da/0xb45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 279 handle_osd_map epochs [280,280], i have 280, src has [1,280]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 57810944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 280 handle_osd_map epochs [280,280], i have 280, src has [1,280]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:18.860630+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 57810944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:19.860822+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 280 ms_handle_reset con 0x55a12769b400 session 0x55a1294f3a40
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.512447357s of 10.195021629s, submitted: 4
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edfa4000/0x0/0x4ffc00000, data 0x9c2e7a/0xb49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 57810944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:20.860974+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 57802752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:21.861205+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2926341 data_alloc: 218103808 data_used: 696320
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 49414144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:22.861389+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295051264 unmapped: 57794560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:23.861547+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295051264 unmapped: 57794560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:24.861703+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecfa4000/0x0/0x4ffc00000, data 0x19c2e9d/0x1b4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 57786368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:25.861862+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 281 ms_handle_reset con 0x55a127eb5400 session 0x55a1299c8000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 57786368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:26.862204+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 57786368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:27.862372+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 57786368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:28.862614+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:29.862835+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:30.863099+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:31.863266+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:32.864086+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:33.864262+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:34.865044+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 57778176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:35.865213+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295075840 unmapped: 57769984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:36.865385+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 57761792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:37.865552+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 57761792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:38.865796+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 57761792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:39.866007+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 57761792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:40.866186+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:41.866378+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:42.866666+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:43.866859+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:44.867151+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:45.867358+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 57753600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:46.867577+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:47.867775+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:48.867938+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:49.868152+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:50.868395+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:51.868565+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:52.868816+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 57745408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:53.869013+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:54.869203+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:55.869360+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:56.869513+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:57.869671+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 57737216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:58.869829+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:59.870005+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:00.870166+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 281 heartbeat osd_stat(store_statfs(0x4ecfa1000/0x0/0x4ffc00000, data 0x19c4a1a/0x1b4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:01.870365+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039483 data_alloc: 218103808 data_used: 704512
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:02.870536+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:03.870690+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295116800 unmapped: 57729024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 41.138420105s of 43.667034149s, submitted: 14
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:04.870888+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 57720832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 282 ms_handle_reset con 0x55a1286a4000 session 0x55a129f52780
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 282 heartbeat osd_stat(store_statfs(0x4ecf9e000/0x0/0x4ffc00000, data 0x19c65c8/0x1b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:05.871031+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 57696256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:06.871180+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 57696256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042065 data_alloc: 218103808 data_used: 712704
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:07.871485+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 57696256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:08.871678+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 57696256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 282 heartbeat osd_stat(store_statfs(0x4ecf9e000/0x0/0x4ffc00000, data 0x19c65c8/0x1b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:09.871873+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:10.872060+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:11.872223+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042065 data_alloc: 218103808 data_used: 712704
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 282 heartbeat osd_stat(store_statfs(0x4ecf9e000/0x0/0x4ffc00000, data 0x19c65c8/0x1b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:12.872367+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:13.872578+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 57679872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 282 handle_osd_map epochs [282,283], i have 282, src has [1,283]
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.782243729s of 10.014166832s, submitted: 29
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9e000/0x0/0x4ffc00000, data 0x19c65c8/0x1b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:14.872790+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 57655296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:15.872956+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 57655296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:16.873155+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 57655296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:17.873325+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 57655296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:18.873452+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:19.873612+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:20.873973+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:21.874201+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:22.874340+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:23.874514+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 57647104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:24.874703+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:25.874888+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:26.875140+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:27.875328+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:28.875451+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:29.875668+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 57638912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:30.875807+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 57630720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:31.876032+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 57630720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:32.876272+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 57630720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:33.876443+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:34.876570+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:35.876713+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:36.876867+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:37.877110+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:38.877263+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 57622528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:39.877540+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295231488 unmapped: 57614336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:40.877688+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295231488 unmapped: 57614336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:41.877835+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295239680 unmapped: 57606144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:42.877988+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295239680 unmapped: 57606144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:43.878135+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295239680 unmapped: 57606144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:44.878280+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295239680 unmapped: 57606144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:45.878470+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295247872 unmapped: 57597952 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:46.878656+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:47.878817+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:48.878989+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:49.879265+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:50.879452+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:51.879632+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:52.879773+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 57589760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:53.879944+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 57573376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:54.880165+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 57573376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:55.880366+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 57573376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:56.880484+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 57573376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:57.880652+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:58.880816+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:59.881040+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:00.881261+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:01.881418+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:02.881591+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:03.881843+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:04.882033+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 57556992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:05.882232+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:06.882428+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:07.882594+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:08.882787+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:09.882978+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:10.883148+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:11.883374+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:12.883542+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 57548800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:13.883690+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 57540608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:14.883826+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:15.884008+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:16.884175+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:17.884386+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:18.884471+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:19.884586+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:20.884779+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 57532416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:21.884988+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295321600 unmapped: 57524224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:22.885154+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:23.885386+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:24.885508+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:25.885693+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:26.885909+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:27.886096+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:28.886400+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 57516032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:29.886604+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 57507840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:30.886741+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 57507840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:31.886980+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 57507840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:32.887110+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 57507840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:33.887265+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 57499648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:34.887366+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:35.887636+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:36.887799+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:37.888417+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:38.889230+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:39.889679+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 57483264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:40.890029+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:41.890153+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:42.890637+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:43.890776+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:44.891035+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 57475072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:45.891170+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:46.891356+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:47.891500+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:48.891972+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:49.892204+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:50.892503+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 57466880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:51.892659+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 57458688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:52.892903+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 57458688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:53.893050+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 57442304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:54.893206+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 57442304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:55.893396+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 57442304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:56.893796+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 57442304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:57.893938+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 57434112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:58.894223+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 57434112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:59.894392+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 57434112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:00.894581+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 57434112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:01.894706+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 57425920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:02.894885+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:03.895038+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:04.895157+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:05.895418+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:06.895658+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:07.895813+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:08.896116+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 57417728 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:09.896367+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 57409536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:10.896496+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 57409536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:11.896654+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 57401344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:12.896885+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 57401344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:13.897053+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 57393152 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:14.897246+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 57393152 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:15.897436+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 57393152 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:16.897715+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 57393152 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:17.897951+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:18.898087+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:19.898255+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:20.898561+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:21.898746+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:22.899064+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:23.899229+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 57384960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:24.899382+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 57376768 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:25.899551+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:26.899741+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:27.899896+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:28.900070+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:29.900351+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:30.900552+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:31.900867+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:32.901113+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 57360384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:33.901349+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 57352192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:34.901505+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 57352192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:35.901657+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 57352192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:36.901898+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 57352192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:37.902072+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 57344000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:38.902272+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 57335808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:39.902556+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 57335808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:40.902700+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 57327616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:41.902870+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 57327616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:42.903003+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 57327616 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:43.903140+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:44.903344+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:45.903511+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:46.903647+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:47.903809+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:48.904054+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 57319424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:49.904523+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:50.904719+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:51.904871+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:52.905092+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:53.905291+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:54.905477+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:55.905623+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:56.905842+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 57311232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:57.905984+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 57303040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:58.906131+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 57303040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:59.906351+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 57303040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:00.906474+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 57294848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:01.906643+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 57294848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:02.906839+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 57294848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:03.906979+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 57294848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:04.907157+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 57286656 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:05.907369+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:06.907589+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:07.907775+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:08.907989+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:09.908230+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:10.908470+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:11.908648+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:12.908809+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 57270272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:13.908986+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:14.909129+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:15.909271+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:16.909349+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:17.909528+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:18.909653+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:19.909765+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 57262080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:20.909924+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295591936 unmapped: 57253888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:21.910061+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 57229312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:22.910216+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 57229312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:23.910356+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 57229312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:24.910526+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 57229312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:25.910749+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:26.910892+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:27.911084+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:28.911255+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:29.911510+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 57221120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:30.911653+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:31.911870+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:32.912035+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:33.912182+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:34.912371+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:35.912538+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 57212928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:36.912770+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:37.912940+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:38.913095+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:39.913279+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:40.913409+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 57204736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:41.913530+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 57188352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:42.913707+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 57188352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:43.913887+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 57188352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:44.914053+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 57188352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:45.914191+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 57180160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:46.914363+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 57180160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:47.914486+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 57180160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:48.914610+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 57180160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:49.914790+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:50.914929+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:51.915051+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6002.4 total, 600.0 interval
                                           Cumulative writes: 36K writes, 143K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 36K writes, 12K syncs, 2.83 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 596 writes, 1534 keys, 596 commit groups, 1.0 writes per commit group, ingest: 0.75 MB, 0.00 MB/s
                                           Interval WAL: 596 writes, 265 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.357       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.79              0.00         1    0.790       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.79              0.00         1    0.790       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.79              0.00         1    0.790       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.8 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e13090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e13090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.30              0.00         1    0.300       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.30              0.00         1    0.300       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.30              0.00         1    0.300       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e13090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6002.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a125e131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:52.915167+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 57171968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:53.915349+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:54.915454+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:55.915573+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:56.915724+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:57.915896+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:58.916054+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 57163776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:59.916233+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:00.916410+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 57155584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:01.916547+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 57147392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:02.916652+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 57147392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:03.916802+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 57147392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:04.916951+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 57147392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:05.917082+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 57139200 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:06.917199+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 57139200 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:07.917434+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 57139200 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:08.917588+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 57122816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:09.917745+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 57122816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:10.917858+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 57122816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:11.918017+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 57122816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:12.918148+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:13.918291+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:14.918465+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:15.918640+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:16.918793+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 57114624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:17.918933+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 57106432 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:18.919059+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 57106432 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:19.919226+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:20.919420+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:21.919578+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:22.919720+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:23.919870+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:24.920013+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 57098240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:25.920143+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:26.920265+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:27.920402+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:28.920555+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:29.920733+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:30.920874+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:31.921016+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 57081856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:32.921146+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:33.921279+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:34.921412+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:35.921553+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:36.921673+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 57073664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:37.921840+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 57065472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:38.921993+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 57065472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:39.922185+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 57065472 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:40.922349+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 57057280 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:41.922516+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 57057280 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:42.922656+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:43.922806+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:44.922938+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:45.923101+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:46.923260+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:47.923440+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 57049088 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:48.923587+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 57040896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:49.923794+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 57040896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:50.924878+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 57040896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:51.925578+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 57040896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:52.925958+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 57032704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:53.926119+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 57024512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:54.926340+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 57024512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:55.926482+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 57024512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:56.926601+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 57024512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:57.926997+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 57016320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:58.927273+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:59.927466+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:00.927602+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:01.927737+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:02.928286+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:03.928494+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 57008128 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:04.928640+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:05.928777+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:06.928937+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:07.929196+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:08.929385+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 56999936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:09.929565+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:10.929906+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:11.930069+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:12.930219+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:13.930356+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:14.930539+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:15.930660+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:16.930822+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 56991744 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:17.931104+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 56967168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:18.931266+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 56967168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:19.931461+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 56967168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:20.931607+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 56967168 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:21.931792+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:22.931970+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:23.932177+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:24.932354+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:25.932547+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:26.932738+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:27.932895+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:28.933089+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 56950784 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:29.933273+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 56934400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:30.933448+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 56934400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:31.933640+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 56934400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:32.933813+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046063 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 56934400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:33.933978+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9b000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 56926208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:34.934194+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 56926208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:35.934354+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 56926208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 322.681640625s of 322.787384033s, submitted: 11
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:36.934545+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 56893440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:37.934683+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:38.934869+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:39.935120+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:40.935281+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:41.935453+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:42.935628+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:43.935773+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 56877056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:44.935940+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:45.936115+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:46.936295+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:47.936494+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:48.936637+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 56868864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:49.936855+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:50.936998+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:51.937140+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:52.937294+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:53.937582+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:54.937869+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:55.938095+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 56860672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:56.938289+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:57.938523+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:58.938721+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:59.938890+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:00.939114+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:01.939935+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 56852480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:02.940360+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:03.940579+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:04.940722+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:05.940894+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:06.941111+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:07.941347+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:08.941485+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 56844288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:09.941651+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 56836096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:10.941824+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 56836096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:11.941957+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 56836096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:12.942143+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 56836096 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:13.942374+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 56827904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:14.942530+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 56827904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:15.942718+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 56827904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:16.942912+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 56827904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:17.943083+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 56819712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:18.943253+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:19.943517+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:20.943692+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:21.943871+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:22.944057+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 56811520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:23.944240+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 56803328 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:24.944425+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 56786944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:25.944596+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 56786944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:26.944744+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 56786944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:27.944910+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 56786944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:28.944992+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:29.945126+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:30.945240+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:31.945389+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:32.945519+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:33.945663+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 56778752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:34.945764+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 56770560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:35.945936+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 56770560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:36.946104+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 56770560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:37.946222+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 56762368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:38.946402+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 56762368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:39.946595+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 56762368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:40.946728+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 56762368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:41.946878+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 56754176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:42.947009+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:43.947140+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:44.947258+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:45.947400+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:46.947612+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:47.947796+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:48.947968+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 56745984 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:49.948150+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 56737792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:50.948367+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 56737792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:51.948507+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 56737792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:52.948651+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 56737792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:53.948818+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 56729600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:54.948968+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 56729600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:55.949106+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 56721408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:56.949238+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 56721408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:57.949373+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:11 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:11 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:58.949509+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:59.949779+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:00.949950+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:01.950114+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:02.950374+0000)
Nov 25 09:46:11 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:11 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:03.950539+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:04.950795+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 56713216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:05.950990+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 56705024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:06.951162+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 56705024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:07.951411+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:08.951603+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:09.951791+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:10.951966+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:11.952133+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 56696832 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:12.952548+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 283 heartbeat osd_stat(store_statfs(0x4ecf9c000/0x0/0x4ffc00000, data 0x19c802b/0x1b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:13.952707+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045183 data_alloc: 218103808 data_used: 720896
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:14.952869+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a5000
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 98.430641174s of 98.900810242s, submitted: 90
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:15.953086+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:16.953296+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf9d000/0x0/0x4ffc00000, data 0x19c8008/0x1b51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 284 ms_handle_reset con 0x55a1286a5000 session 0x55a129ce9860
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:17.953513+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:18.953700+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2995067 data_alloc: 218103808 data_used: 729088
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a129f44800
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:19.953886+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:20.954056+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:21.954251+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 285 ms_handle_reset con 0x55a129f44800 session 0x55a129eb3860
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:22.954424+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 285 heartbeat osd_stat(store_statfs(0x4ee387000/0x0/0x4ffc00000, data 0x1cb787/0x356000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:23.954610+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2889761 data_alloc: 218103808 data_used: 737280
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:24.954784+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 56598528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:25.954946+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a12769b400
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.809815407s of 11.059050560s, submitted: 61
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:26.955133+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:27.955426+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 286 heartbeat osd_stat(store_statfs(0x4ee383000/0x0/0x4ffc00000, data 0x1cd206/0x359000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:28.955621+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2894255 data_alloc: 218103808 data_used: 753664
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 287 heartbeat osd_stat(store_statfs(0x4ee383000/0x0/0x4ffc00000, data 0x1cd206/0x359000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 287 ms_handle_reset con 0x55a12769b400 session 0x55a127620780
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:29.955852+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 56557568 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:30.956056+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 56557568 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:31.956379+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:32.956564+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:33.957022+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:34.957269+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:35.957574+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:36.957839+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:37.957997+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:38.958162+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:39.958370+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:40.958573+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:41.958739+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:42.958890+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:43.959041+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:44.959193+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:45.959351+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 56524800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:46.959578+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 56524800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:47.959835+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 56524800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:48.960013+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:49.960194+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:50.960417+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:51.960606+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:52.960870+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:53.961035+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:54.961217+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:55.961441+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:56.961632+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:57.961779+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:58.961935+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:59.962109+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:00.962389+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:01.962542+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:02.962762+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:03.962891+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:04.963030+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:05.963204+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:06.963365+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:07.963521+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:08.963702+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:09.963881+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:10.964245+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:11.964401+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:12.964605+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 56467456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:13.964800+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:14.964947+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:15.965085+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:16.965256+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 56451072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:17.965395+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:18.965554+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:19.965752+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:20.965951+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:21.966107+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:22.966270+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:23.966378+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:24.966512+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:25.966669+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:26.966899+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:27.967064+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a127eb5400
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:28.967195+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900059 data_alloc: 218103808 data_used: 753664
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:29.967393+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 56426496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 63.735168457s of 63.853855133s, submitted: 16
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:30.967514+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 289 heartbeat osd_stat(store_statfs(0x4ee37e000/0x0/0x4ffc00000, data 0x1d0802/0x35f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297484288 unmapped: 55361536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:31.967656+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297484288 unmapped: 55361536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:32.967816+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297492480 unmapped: 55353344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:33.967991+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902153 data_alloc: 218103808 data_used: 753664
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 289 ms_handle_reset con 0x55a127eb5400 session 0x55a128153860
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297508864 unmapped: 55336960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:34.968166+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297508864 unmapped: 55336960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:35.968481+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297508864 unmapped: 55336960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:36.968658+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 289 heartbeat osd_stat(store_statfs(0x4ee37c000/0x0/0x4ffc00000, data 0x1d23d3/0x362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297508864 unmapped: 55336960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:37.968861+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297517056 unmapped: 55328768 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:38.969030+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902153 data_alloc: 218103808 data_used: 753664
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297517056 unmapped: 55328768 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _renew_subs
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:39.969233+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 55312384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:40.969403+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 55312384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:41.969587+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 55312384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:42.969769+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 55312384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:43.969905+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 55304192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:44.970049+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 55304192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:45.970206+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 55304192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:46.970383+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 55304192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:47.970624+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 55296000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:48.970807+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 55296000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:49.970955+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:50.971088+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:51.971202+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:52.971405+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:53.971557+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:54.971679+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:55.971825+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:56.971969+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 55287808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:57.972100+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 55271424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:58.972262+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 55271424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:59.972515+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 55271424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:00.972670+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 55271424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:01.972845+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:02.973023+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:03.973202+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:04.973385+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:05.973584+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:06.973736+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 55263232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:07.973937+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:08.974118+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:09.974293+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:10.974456+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:11.974673+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 55255040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:12.974878+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 55246848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:13.975042+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 55246848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:14.975220+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 55246848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:15.975405+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 55246848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:16.975585+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 55238656 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:17.975874+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 55230464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:18.976119+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 55230464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:19.976368+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:20.976530+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 55230464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:21.976720+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:22.976873+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:23.977214+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:24.977409+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:25.977619+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:26.977797+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:27.977999+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:28.978144+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 55222272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:29.978378+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 55214080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:30.978555+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297639936 unmapped: 55205888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:31.978703+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297639936 unmapped: 55205888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:32.978886+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297639936 unmapped: 55205888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:33.979054+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297639936 unmapped: 55205888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:34.979239+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 55189504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:35.979367+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 55189504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:36.979532+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 55189504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:37.979658+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 55189504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:38.979775+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:39.979908+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:40.980060+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:41.980249+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:42.980404+0000)
Nov 25 09:46:12 compute-0 nova_compute[253538]: 2025-11-25 09:46:12.017 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:43.980533+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:44.980681+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 55173120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:45.980816+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:46.980961+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:47.981132+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:48.981415+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:49.981595+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:50.981723+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:51.981883+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:52.982046+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 55164928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:53.982217+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 55156736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:54.982405+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 55156736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:55.982570+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 55156736 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:56.982689+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297697280 unmapped: 55148544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:57.982825+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297697280 unmapped: 55148544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:58.983287+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297705472 unmapped: 55140352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:59.983467+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297705472 unmapped: 55140352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:00.983628+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297705472 unmapped: 55140352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:01.983751+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297713664 unmapped: 55132160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:02.983870+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297787392 unmapped: 55058432 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'config diff' '{prefix=config diff}'
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'config show' '{prefix=config show}'
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:03.983979+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 55533568 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:04.984088+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 55853056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'log dump' '{prefix=log dump}'
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:05.984341+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 55853056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'perf dump' '{prefix=perf dump}'
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'perf schema' '{prefix=perf schema}'
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:06.984467+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:07.984585+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:08.984707+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:09.984889+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:10.985025+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:11.985159+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:12.985334+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:13.985535+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 55664640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:14.985659+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 55664640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:15.985823+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 55664640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:16.985960+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 55664640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:17.986079+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 55656448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:18.986216+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 55656448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:19.986379+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 55656448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:20.986589+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 55656448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:21.986732+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 55648256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:22.986902+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 55648256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 ms_handle_reset con 0x55a127698400 session 0x55a129f4ba40
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: handle_auth_request added challenge on 0x55a1286a4000
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:23.987028+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 55648256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:24.987157+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 55648256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:25.987288+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:26.987370+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:27.987485+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:28.987595+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:29.987761+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:30.987917+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:31.988052+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:32.988208+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297213952 unmapped: 55631872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:33.988354+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:34.988495+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:35.988642+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:36.988816+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 56688640 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:37.988958+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 56680448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:38.989097+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 56680448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:40.000555+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 56680448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:41.000682+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 56680448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:42.000813+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:43.000945+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:44.002170+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:45.002334+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:46.002469+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:47.002595+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:48.002788+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:49.002987+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 56672256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:50.003174+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296181760 unmapped: 56664064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:51.003300+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296181760 unmapped: 56664064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:52.003519+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296181760 unmapped: 56664064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:53.003692+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296181760 unmapped: 56664064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:54.003862+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:55.003993+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:56.004176+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:57.004370+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 56655872 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:58.004542+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:59.004733+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:00.004943+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:01.005054+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:02.005176+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:03.005359+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:04.005521+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 56639488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:05.005725+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296214528 unmapped: 56631296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:06.005981+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296214528 unmapped: 56631296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:07.006170+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296214528 unmapped: 56631296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:08.006387+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 56623104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:09.006575+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 56623104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:10.006823+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 56623104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:11.007015+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 56623104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:12.007203+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 56623104 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:13.007378+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296230912 unmapped: 56614912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:14.007554+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:15.007775+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:16.007971+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 56606720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:17.008142+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 56598528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:18.008351+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 56598528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:19.008532+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 56598528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:20.008752+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 56598528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:21.008960+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 56598528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:22.009161+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:23.009409+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:24.009647+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:25.009820+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:26.009946+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:27.010142+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:28.010408+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 56590336 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:29.010602+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296263680 unmapped: 56582144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:30.010795+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 56573952 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:31.010956+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 56573952 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:32.011103+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 56573952 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:33.011253+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:34.011454+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:35.011608+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:36.011795+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 56565760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:37.011986+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 56557568 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:38.012144+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:39.012320+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:40.012491+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:41.012741+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:42.012970+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:43.013182+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:44.013416+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 56549376 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:45.013658+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 56541184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:46.013835+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:47.014009+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:48.014301+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:49.014579+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:50.014809+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:51.015031+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:52.015181+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 56532992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:53.015341+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 56524800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:54.015538+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:55.015810+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:56.015963+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:57.016097+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 56516608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:58.016295+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:59.016467+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:00.016655+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:01.016799+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 56508416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:02.016960+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:03.017110+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:04.017408+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:05.017617+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 56500224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:06.017798+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 56492032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:07.017970+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 56492032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:08.018254+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:09.018396+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:10.047578+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:11.047716+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:12.047889+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:13.048077+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 56483840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:14.048224+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:15.048554+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:16.048710+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 56475648 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:17.048868+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 56467456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:18.049035+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 56467456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:19.049209+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 56467456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:20.049435+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:21.049581+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:22.049775+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:23.049949+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:24.050113+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:25.050301+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 56459264 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:26.050566+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:27.050675+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:28.050846+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:29.051016+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:30.051222+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:31.051426+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:32.051570+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296402944 unmapped: 56442880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:33.051701+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:34.051869+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:35.052023+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 56434688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:36.052180+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 56426496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:37.052363+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 56426496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:38.052552+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 56426496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:39.052738+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 56426496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:40.052999+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 56426496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:41.053187+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 56426496 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:42.053394+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296427520 unmapped: 56418304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:43.053568+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296427520 unmapped: 56418304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:44.053716+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296427520 unmapped: 56418304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:45.053866+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296427520 unmapped: 56418304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:46.054016+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296427520 unmapped: 56418304 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:47.054144+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296435712 unmapped: 56410112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:48.054360+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296435712 unmapped: 56410112 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:49.054567+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 56401920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:50.054773+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 56401920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:51.054965+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 56401920 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:52.055084+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 56385536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:53.055257+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 56385536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:54.055443+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 56385536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:55.055617+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 56385536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:56.055786+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 56385536 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:57.055910+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 56377344 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:58.056074+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 56360960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:59.056218+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 56360960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:00.056392+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 56360960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:01.056528+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 56360960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:02.056685+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 56360960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:03.056865+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 56360960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:04.057005+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 56360960 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:05.057166+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 56352768 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:06.057354+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 56344576 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:07.057509+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 56344576 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:08.057646+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 56344576 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:09.057827+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 56344576 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:10.058022+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 56344576 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:11.058137+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:12.058362+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 56344576 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:13.058496+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 56344576 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:14.058710+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 56336384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:15.058880+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 56336384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:16.059070+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 56336384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:17.059298+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 56336384 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:18.059542+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296517632 unmapped: 56328192 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:19.059737+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 56320000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:20.059927+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 56320000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:21.060106+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 56320000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:22.060247+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 56320000 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:23.060379+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 56311808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:24.060539+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 56311808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:25.060657+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 56311808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:26.060835+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 56311808 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:27.060988+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 56295424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:28.061153+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 56295424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:29.061296+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 56295424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:30.061499+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 56295424 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:31.061620+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 56287232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:32.061877+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 56287232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:33.062077+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 56287232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:34.062195+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 56287232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:35.062388+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 56287232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:36.062528+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 56287232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:37.062660+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 56287232 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:38.062788+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 56279040 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:39.062951+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 56270848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:40.063140+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 56270848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:41.063292+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 56270848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:42.063528+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 56270848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:43.063687+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 56270848 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:44.063827+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 56262656 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:45.063962+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 56262656 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:46.064136+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 56254464 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:47.064391+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 56246272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:48.064530+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 56246272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:49.064671+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 56246272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:50.064868+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 56246272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:51.065183+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 56246272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6602.4 total, 600.0 interval
                                           Cumulative writes: 36K writes, 144K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 36K writes, 12K syncs, 2.82 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 333 writes, 863 keys, 333 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s
                                           Interval WAL: 333 writes, 138 syncs, 2.41 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:52.065530+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 56246272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:53.065718+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 56246272 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:54.065884+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 56238080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:55.066060+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 56238080 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:56.066232+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 56229888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:57.066360+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 56229888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:58.066447+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 56229888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:59.066638+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 56229888 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:00.066842+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296624128 unmapped: 56221696 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:01.066975+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296624128 unmapped: 56221696 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:02.067223+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296624128 unmapped: 56221696 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:03.067379+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 56213504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:04.067475+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 56213504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:05.067626+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 56213504 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:06.067785+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296640512 unmapped: 56205312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:07.067943+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296640512 unmapped: 56205312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:08.068097+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296640512 unmapped: 56205312 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:09.068246+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 56197120 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:10.068430+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296656896 unmapped: 56188928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:11.068566+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296656896 unmapped: 56188928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:12.068707+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296656896 unmapped: 56188928 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:13.068838+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 56172544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:14.068969+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 56172544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:15.069093+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 56172544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:16.069240+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 56172544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:17.069362+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 56172544 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:18.069505+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296681472 unmapped: 56164352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:19.069638+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296681472 unmapped: 56164352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:20.069806+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296681472 unmapped: 56164352 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:21.069992+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296689664 unmapped: 56156160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:22.070168+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296689664 unmapped: 56156160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:23.070339+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296689664 unmapped: 56156160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:24.070483+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296689664 unmapped: 56156160 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:25.070696+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 56147968 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:26.070841+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 56139776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:27.071020+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 56139776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:28.071166+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 56139776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:29.071339+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 56139776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:30.071501+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 56139776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:31.071651+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 56139776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:32.071800+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 56139776 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:33.071938+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 56131584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:34.072144+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 56131584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:35.072354+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 56131584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:36.072510+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 56131584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:37.072655+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 56131584 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:38.072828+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296722432 unmapped: 56123392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:39.072965+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296722432 unmapped: 56123392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:40.073150+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296722432 unmapped: 56123392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:41.073379+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296722432 unmapped: 56123392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:42.073560+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296722432 unmapped: 56123392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:43.073716+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296722432 unmapped: 56123392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:44.073900+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296722432 unmapped: 56123392 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:45.074054+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 56115200 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:46.074187+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 56107008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:47.074361+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 56107008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:48.074538+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 56107008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:49.074749+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 56107008 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:50.075200+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296747008 unmapped: 56098816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296747008 unmapped: 56098816 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:51.727588+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 56090624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:52.727752+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 56090624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:53.727939+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 56090624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:54.728152+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 56090624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:55.728435+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 56090624 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:56.728577+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 56082432 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:57.728781+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 56074240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:58.728938+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 56074240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:59.729187+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 56074240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:00.729366+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 56074240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:01.729492+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 56074240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:02.729704+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 56074240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:03.729864+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 56074240 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:04.730146+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 56066048 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:05.730349+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 56066048 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:06.730515+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 56066048 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:07.730923+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 56066048 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:08.731102+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 56066048 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:09.731236+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 56057856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:10.731500+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 56057856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:11.731601+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 56057856 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:12.731756+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296796160 unmapped: 56049664 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:13.731921+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 56033280 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:14.732141+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 56033280 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:15.732383+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 56033280 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:16.732530+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 56033280 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:17.732727+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296828928 unmapped: 56016896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:18.732918+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296828928 unmapped: 56016896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:19.733075+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296828928 unmapped: 56016896 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:20.733280+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296837120 unmapped: 56008704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:21.733426+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296837120 unmapped: 56008704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:22.733616+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296837120 unmapped: 56008704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:23.733780+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296837120 unmapped: 56008704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:24.733931+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296837120 unmapped: 56008704 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:25.734051+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296845312 unmapped: 56000512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:26.734191+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296845312 unmapped: 56000512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:27.734357+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296845312 unmapped: 56000512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:28.734524+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296845312 unmapped: 56000512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:29.734676+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296845312 unmapped: 56000512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:30.734828+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296845312 unmapped: 56000512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:31.734996+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296845312 unmapped: 56000512 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:32.735214+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 55992320 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:33.735397+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296869888 unmapped: 55975936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:34.735591+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296869888 unmapped: 55975936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:35.735788+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906327 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296869888 unmapped: 55975936 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:36.735940+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 425.801208496s of 426.502380371s, submitted: 28
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 55959552 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:37.736053+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee378000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296894464 unmapped: 55951360 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:38.736186+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:39.736297+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:40.736486+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:41.736642+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:42.736839+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:43.737015+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:44.737192+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:45.737375+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:46.737565+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:47.737944+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:48.738127+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 55918592 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:49.738299+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 55910400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:50.738708+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 55910400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:51.738866+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 55910400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:52.739029+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 55910400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:53.739185+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 55910400 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:54.740093+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 55902208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:55.740281+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 55902208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:56.740431+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 55902208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:57.740637+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 55902208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:58.740802+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 55902208 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:59.740932+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 55894016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:00.741075+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 55894016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:01.741236+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 55894016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:02.741375+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 55894016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:03.741517+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 55894016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:04.741646+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 55894016 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:05.741808+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:06.741995+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 55885824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:07.742346+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 55885824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:08.742475+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 55885824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:09.742586+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 55885824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:10.742783+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 55885824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:11.742977+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 55885824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:12.743113+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 55885824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:13.743357+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 55885824 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:14.743484+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296968192 unmapped: 55877632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:15.743641+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296968192 unmapped: 55877632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:16.743837+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296968192 unmapped: 55877632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:17.743973+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296968192 unmapped: 55877632 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:18.744151+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 55869440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:19.744284+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 55869440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:20.744474+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 55869440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:21.744622+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 55869440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:22.744798+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 55869440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:23.744948+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 55869440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:24.745078+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 55869440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:25.745234+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 55869440 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:26.745393+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 55861248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:27.745725+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 55861248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:28.745877+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 55861248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:29.746071+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 55861248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:30.746299+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 55861248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:31.746607+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 55861248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:32.746751+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 55861248 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:33.746869+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 55853056 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:34.747004+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297000960 unmapped: 55844864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:35.747112+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297000960 unmapped: 55844864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:36.747282+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297000960 unmapped: 55844864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:37.748428+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297000960 unmapped: 55844864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:38.748647+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297000960 unmapped: 55844864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:39.748788+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297000960 unmapped: 55844864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:40.748980+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297000960 unmapped: 55844864 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:41.749208+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 55836672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:42.749400+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 55836672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:43.749541+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 55836672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:44.749677+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 55836672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:45.749872+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 55836672 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:46.750023+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 55828480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:47.750382+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 55828480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:48.750572+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 55828480 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:49.750715+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 55820288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:50.750896+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 55820288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:51.751059+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 55820288 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:52.751247+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 55803904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:53.751405+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 55803904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:54.751590+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 55803904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:55.751778+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 55803904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:56.751908+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 55803904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:57.752073+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 55803904 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:58.752240+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297050112 unmapped: 55795712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:59.752363+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297050112 unmapped: 55795712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:00.752916+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297050112 unmapped: 55795712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:01.753124+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297050112 unmapped: 55795712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:02.753268+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297050112 unmapped: 55795712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:03.753452+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297050112 unmapped: 55795712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:04.753582+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297050112 unmapped: 55795712 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:05.753789+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297058304 unmapped: 55787520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:06.753952+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297058304 unmapped: 55787520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:07.754433+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297058304 unmapped: 55787520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:08.754555+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297058304 unmapped: 55787520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:09.754704+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297058304 unmapped: 55787520 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:10.754876+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297074688 unmapped: 55771136 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:11.755211+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297074688 unmapped: 55771136 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:12.755361+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297074688 unmapped: 55771136 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:13.755503+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297082880 unmapped: 55762944 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:14.755706+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297091072 unmapped: 55754752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:15.755838+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297091072 unmapped: 55754752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:16.761326+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297091072 unmapped: 55754752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:17.761455+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297091072 unmapped: 55754752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:18.761597+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297091072 unmapped: 55754752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:19.761789+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297091072 unmapped: 55754752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:20.761996+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297091072 unmapped: 55754752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:21.762134+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297091072 unmapped: 55754752 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:22.762372+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297099264 unmapped: 55746560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:23.762502+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297099264 unmapped: 55746560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:24.762611+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297099264 unmapped: 55746560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:25.762763+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297099264 unmapped: 55746560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:26.762936+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297099264 unmapped: 55746560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:27.763073+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297099264 unmapped: 55746560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:28.763400+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297099264 unmapped: 55746560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:29.763557+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297099264 unmapped: 55746560 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:30.763737+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 55738368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:31.763883+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 55738368 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:32.764046+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 55730176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:33.764206+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 55730176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:34.764370+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 55730176 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:35.764525+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 55713792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:36.764691+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 55713792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:37.764860+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 55713792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:38.765035+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 55713792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:39.765179+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 55713792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:40.765353+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 55713792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:41.765473+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 55713792 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:42.765810+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297140224 unmapped: 55705600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:43.765963+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297140224 unmapped: 55705600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:44.766140+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297140224 unmapped: 55705600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:45.767068+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297140224 unmapped: 55705600 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:46.767252+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 55697408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:47.767381+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 55697408 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:48.767563+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:49.767750+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:50.767945+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:51.768131+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:52.768461+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:53.768675+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 55689216 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:54.768869+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:55.769058+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:56.769239+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:57.769393+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:58.769508+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 55656448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:59.769686+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 55656448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:00.769927+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 55656448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:01.770130+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 55656448 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:02.770597+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 55648256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:03.770760+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 55648256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:04.770894+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 55648256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:05.771072+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 55648256 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:06.771265+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:07.771719+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:08.771884+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 55640064 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:09.772034+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 55623680 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:10.772272+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 55623680 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:11.772553+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 55623680 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:12.772746+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 55623680 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:13.772893+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 55623680 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:14.773097+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 55623680 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:15.773265+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 55623680 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:16.773419+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 55623680 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:17.773546+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297230336 unmapped: 55615488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:18.773680+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297230336 unmapped: 55615488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:19.773846+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297230336 unmapped: 55615488 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:20.774030+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:21.774155+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:22.774264+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:23.774422+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:24.774628+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:25.774770+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:26.774956+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:27.775157+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:28.775402+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 55607296 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:29.775547+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297254912 unmapped: 55590912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:30.775758+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297254912 unmapped: 55590912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:31.775892+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297254912 unmapped: 55590912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:32.776122+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297254912 unmapped: 55590912 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:33.776388+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 55582720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:34.776552+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 55582720 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:35.776724+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 55574528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:36.776863+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 55574528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:37.777012+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 55574528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:38.777171+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 55574528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:39.777407+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 55574528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:40.777621+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 55574528 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:41.777812+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 55558144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:42.777950+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 55558144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:43.778105+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 55558144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:44.778249+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 55558144 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:45.778404+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 55549952 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:46.778593+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 55549952 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:47.779207+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 55549952 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:48.779349+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 55541760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:49.779471+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 55541760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:50.779634+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 55541760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:51.779782+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 55541760 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:52.779928+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 55533568 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:53.780079+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 55517184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:54.780242+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 55517184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:55.780410+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 55517184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:56.780580+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 55517184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:57.780769+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 55508992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:58.780924+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 55508992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:59.781049+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 55508992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:00.781225+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 55508992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:01.781421+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 55508992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:02.781556+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 55508992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:03.781710+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 55508992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:04.781878+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 55508992 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:05.782073+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 55500800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:06.782213+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 55500800 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:07.782364+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297353216 unmapped: 55492608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:08.782539+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297353216 unmapped: 55492608 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:09.782686+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297361408 unmapped: 55484416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:10.782858+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297361408 unmapped: 55484416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:11.783008+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297361408 unmapped: 55484416 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:12.783165+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 55476224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:13.783302+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 55476224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:14.783484+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 55476224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:15.783630+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 55476224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:16.783882+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 55476224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:17.784057+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 55476224 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:18.784196+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 55468032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:19.784333+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 55468032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:20.784538+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 55468032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:21.784717+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 55468032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:22.784923+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 55468032 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:23.785118+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 55459840 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:24.785378+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 55443456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:25.785605+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 55443456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:26.785840+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 55443456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:27.785992+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 55443456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:28.786180+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 55443456 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:29.786369+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297418752 unmapped: 55427072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:30.786766+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297418752 unmapped: 55427072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:31.786915+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297418752 unmapped: 55427072 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:32.787128+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 55418880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:33.787343+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 55418880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:34.787474+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 55418880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:35.787591+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 55418880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:36.787711+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:12 compute-0 ceph-osd[90711]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:12 compute-0 ceph-osd[90711]: bluestore.MempoolThread(0x55a125ef1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905447 data_alloc: 218103808 data_used: 761856
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 55418880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:37.787837+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297435136 unmapped: 55410688 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:38.787941+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee379000/0x0/0x4ffc00000, data 0x1d3e36/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'config diff' '{prefix=config diff}'
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'config show' '{prefix=config show}'
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 55681024 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:39.788039+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 55517184 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:40.788171+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 55418880 heap: 352845824 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: tick
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_tickets
Nov 25 09:46:12 compute-0 ceph-osd[90711]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:41.788299+0000)
Nov 25 09:46:12 compute-0 ceph-osd[90711]: do_command 'log dump' '{prefix=log dump}'
Nov 25 09:46:12 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23501 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 09:46:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1062392188' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 09:46:12 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23505 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:12 compute-0 ceph-mon[75015]: from='client.23497 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:12 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3015543659' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 09:46:12 compute-0 ceph-mon[75015]: from='client.23501 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:12 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1062392188' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 09:46:12 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 09:46:12 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260254065' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:46:12 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23509 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 09:46:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3716183601' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 09:46:13 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23513 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:13 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3799: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:13 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 25 09:46:13 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3351674670' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 09:46:13 compute-0 ceph-mon[75015]: from='client.23505 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1260254065' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:46:13 compute-0 ceph-mon[75015]: from='client.23509 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3716183601' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 09:46:13 compute-0 ceph-mon[75015]: from='client.23513 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:13 compute-0 ceph-mon[75015]: pgmap v3799: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:13 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3351674670' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 09:46:13 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23519 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:13 compute-0 ceph-mgr[75313]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 09:46:13 compute-0 ceph-a058ea16-8b73-51e1-b172-ed66107102bf-mgr-compute-0-cpskve[75309]: 2025-11-25T09:46:13.948+0000 7f6347321640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 09:46:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 25 09:46:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3306814604' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 09:46:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 25 09:46:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/572609602' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 09:46:14 compute-0 ceph-mon[75015]: from='client.23519 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3306814604' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 09:46:14 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/572609602' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 09:46:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 25 09:46:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2118742297' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 09:46:14 compute-0 crontab[460347]: (root) LIST (root)
Nov 25 09:46:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 25 09:46:14 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876837186' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 09:46:14 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:15 compute-0 sshd-session[460138]: Received disconnect from 182.253.79.194 port 17463:11: Bye Bye [preauth]
Nov 25 09:46:15 compute-0 sshd-session[460138]: Disconnected from authenticating user root 182.253.79.194 port 17463 [preauth]
Nov 25 09:46:15 compute-0 nova_compute[253538]: 2025-11-25 09:46:15.174 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:46:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 25 09:46:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2234045790' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 09:46:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 25 09:46:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4092545494' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 09:46:15 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3800: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 25 09:46:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2452799282' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 09:46:15 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 25 09:46:15 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2427329848' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 09:46:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2118742297' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 09:46:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2876837186' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 09:46:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2234045790' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 09:46:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4092545494' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 09:46:15 compute-0 ceph-mon[75015]: pgmap v3800: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2452799282' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 09:46:15 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2427329848' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 09:46:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 25 09:46:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/615187276' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 09:46:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 25 09:46:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2313208645' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:09.688182+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 60088320 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:10.688517+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:11.688696+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:12.688849+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403206 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:13.689028+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:14.689214+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b61000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:15.689366+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 60080128 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 46.827438354s of 47.724197388s, submitted: 107
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e24f860
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:16.689522+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 59662336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:17.689676+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 59662336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525594 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:18.689856+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 59654144 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:19.690036+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:20.690260+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:21.690393+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8d1163c0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:22.690573+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525594 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d1a50e0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e7034a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e7c0d20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:23.690706+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 59645952 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c586000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:24.690842+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 59629568 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:25.691083+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:26.691234+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:27.691366+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3632855 data_alloc: 234881024 data_used: 28499968
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:28.691508+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:29.691659+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:30.691777+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:31.691910+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:32.692345+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3632855 data_alloc: 234881024 data_used: 28499968
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:33.692482+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0x2e35018/0x2fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:34.692616+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 58662912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.927917480s of 19.114524841s, submitted: 30
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:35.692753+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345874432 unmapped: 53174272 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:36.693532+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 53059584 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:37.693689+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3727151 data_alloc: 234881024 data_used: 29900800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:38.693882+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:39.694089+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e809b000/0x0/0x4ffc00000, data 0x3970018/0x3ae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:40.694256+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:41.694389+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e809b000/0x0/0x4ffc00000, data 0x3970018/0x3ae3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:42.694540+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3725011 data_alloc: 234881024 data_used: 29900800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:43.694706+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:44.694879+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:45.695008+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:46.695153+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 52617216 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:47.695352+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.873039246s of 12.605758667s, submitted: 112
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 52609024 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3770760 data_alloc: 234881024 data_used: 29900800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e807a000/0x0/0x4ffc00000, data 0x3991018/0x3b04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [0,0,0,0,1,2])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e2483c0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e90cb40
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e90cf00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e41f400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8e41f400 session 0x561d8e8c5a40
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8f048f00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:48.695512+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b1c000/0x0/0x4ffc00000, data 0x3eef018/0x4062000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:49.695653+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b16000/0x0/0x4ffc00000, data 0x3ef5018/0x4068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:50.695773+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b16000/0x0/0x4ffc00000, data 0x3ef5018/0x4068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:51.695930+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:52.696057+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3769046 data_alloc: 234881024 data_used: 29900800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:53.696197+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b16000/0x0/0x4ffc00000, data 0x3ef5018/0x4068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:54.696354+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b16000/0x0/0x4ffc00000, data 0x3ef5018/0x4068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:55.696485+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e39ed20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:56.711578+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d2365a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8c738780
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74c800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8e74c800 session 0x561d8d116b40
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:57.711751+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3771924 data_alloc: 234881024 data_used: 29900800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.365564346s of 10.544802666s, submitted: 22
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:58.711914+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 52502528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:59.712075+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b15000/0x0/0x4ffc00000, data 0x3ef5028/0x4069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:00.712214+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:01.712364+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:02.712510+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3804024 data_alloc: 234881024 data_used: 34402304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b15000/0x0/0x4ffc00000, data 0x3ef5028/0x4069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:03.712638+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:04.712797+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:05.712933+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:06.713083+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7b15000/0x0/0x4ffc00000, data 0x3ef5028/0x4069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:07.713244+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3804552 data_alloc: 234881024 data_used: 34402304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:08.713384+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 51675136 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:09.713524+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.267370224s of 11.282814980s, submitted: 5
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347512832 unmapped: 51535872 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:10.713659+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347529216 unmapped: 51519488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:11.713842+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e72fb000/0x0/0x4ffc00000, data 0x470e028/0x4882000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:12.714032+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874330 data_alloc: 234881024 data_used: 34574336
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:13.714224+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:14.714394+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:15.714566+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 51494912 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e72fb000/0x0/0x4ffc00000, data 0x470e028/0x4882000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:16.714689+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:17.714859+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3872410 data_alloc: 234881024 data_used: 34574336
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:18.715043+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:19.715208+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:20.715384+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 51486720 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:21.715519+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e72f9000/0x0/0x4ffc00000, data 0x4711028/0x4885000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 51478528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.121975899s of 12.436902046s, submitted: 54
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e90de00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e4321e0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:22.715625+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8d1a41e0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 51445760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3736725 data_alloc: 234881024 data_used: 29904896
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:23.715787+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8074000/0x0/0x4ffc00000, data 0x3997018/0x3b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 51445760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:24.715964+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 51445760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:25.716141+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 51445760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e806d000/0x0/0x4ffc00000, data 0x399e018/0x3b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8d25bc20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c586000 session 0x561d8c62bc20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:26.716265+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e249860
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:27.716439+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:28.716606+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:29.716792+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:30.716979+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:31.717143+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:32.717302+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:33.717510+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:34.717734+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:35.717869+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:36.717988+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:37.718143+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:38.718299+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:39.718461+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:40.718593+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:41.718728+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:42.718872+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:43.719026+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:44.719149+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:45.719410+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:46.719603+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:47.719774+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:48.719950+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:49.720191+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:50.720374+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:51.720519+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:52.720766+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:53.720949+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:54.721115+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:55.721247+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:56.721420+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:57.721594+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:58.721767+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:59.721956+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 60407808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:00.722143+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338649088 unmapped: 60399616 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:01.722354+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338649088 unmapped: 60399616 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:02.722521+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 60391424 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428513 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:03.722708+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 60391424 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:04.722855+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8c8130e0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f38e960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e39fa40
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e24ed20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 60391424 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.992057800s of 43.214187622s, submitted: 76
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9b63000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [0,1,3,3,1])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8eb53c20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:05.722996+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8d032f00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8c644960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f048780
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e512780
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:06.723152+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:07.723355+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517656 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:08.723541+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9042000/0x0/0x4ffc00000, data 0x29c708a/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e3edc20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:09.723686+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8d16f0e0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e4325a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e5105a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:10.723821+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9042000/0x0/0x4ffc00000, data 0x29c708a/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:11.723993+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 58941440 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:12.726952+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3599896 data_alloc: 234881024 data_used: 26505216
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:13.727150+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:14.728603+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9042000/0x0/0x4ffc00000, data 0x29c708a/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:15.728781+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:16.728937+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:17.729073+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3599896 data_alloc: 234881024 data_used: 26505216
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:18.729241+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:19.729410+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:20.729568+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e9042000/0x0/0x4ffc00000, data 0x29c708a/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:21.729743+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 343433216 unmapped: 55615488 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:22.729897+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.722257614s of 17.883968353s, submitted: 34
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 52879360 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3648258 data_alloc: 234881024 data_used: 26505216
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:23.730012+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 52592640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:24.730152+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:25.730368+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:26.730546+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:27.730724+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3682318 data_alloc: 234881024 data_used: 27729920
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:28.730960+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:29.731138+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:30.731351+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:31.731523+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:32.731685+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3682318 data_alloc: 234881024 data_used: 27729920
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:33.731946+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 49430528 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:34.732118+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 49422336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:35.732279+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 49422336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:36.732434+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 49422336 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:37.732601+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d91e23800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8e2485a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d91e23800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8c74ba40
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e4cd4a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e4cc780
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.496106148s of 14.768519402s, submitted: 82
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f38ef00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d14ec00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d14ec00 session 0x561d8e8c4d20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8ed734a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d2361e0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e39f860
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3730459 data_alloc: 234881024 data_used: 27734016
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:38.732848+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:39.733011+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:40.733139+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:41.733356+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:42.733483+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3730459 data_alloc: 234881024 data_used: 27734016
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:43.733631+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 50544640 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:44.733784+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d91e23800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8e50eb40
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 50536448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d90b43c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8d237860
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:45.733920+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d90b43c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8d1a50e0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8c74b680
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 50536448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:46.734244+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 50536448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:47.734392+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3782911 data_alloc: 234881024 data_used: 35106816
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:48.734567+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:49.734903+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:50.735053+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:51.735679+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:52.735836+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3782911 data_alloc: 234881024 data_used: 35106816
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:53.735967+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:54.736115+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.003824234s of 17.156684875s, submitted: 17
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:55.736423+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:56.736688+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 47824896 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:57.736859+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 46997504 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3817961 data_alloc: 234881024 data_used: 35102720
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:58.737041+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6e1e000/0x0/0x4ffc00000, data 0x3a4a09a/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 46325760 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:59.737183+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 46309376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:00.737379+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 46309376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:01.737551+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 46309376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:02.737722+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6993000/0x0/0x4ffc00000, data 0x3ecd09a/0x4043000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 46309376 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3828065 data_alloc: 234881024 data_used: 35172352
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:03.737849+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 46301184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:04.737987+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.787124634s of 10.008896828s, submitted: 44
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 46301184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:05.738102+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e6993000/0x0/0x4ffc00000, data 0x3ecd09a/0x4043000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 46301184 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:06.738373+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 46292992 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8f38f680
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:07.738491+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8ed73c20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:08.738639+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3684633 data_alloc: 234881024 data_used: 27738112
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:09.738839+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:10.739044+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:11.739364+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e753d000/0x0/0x4ffc00000, data 0x332c08a/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 46948352 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:12.739631+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e3ec1e0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8e18a3c0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:13.739823+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:14.740035+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:15.740230+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:16.740388+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:17.740528+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:18.740733+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:19.740902+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:20.741068+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:21.741223+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:22.741536+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:23.742014+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:24.742203+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:25.742431+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:26.742597+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:27.742760+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:28.742965+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:29.743122+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:30.743338+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:31.743497+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:32.743673+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:33.743817+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:34.743946+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:35.744117+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:36.744295+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:37.744478+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:38.744693+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89be000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:39.744904+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:40.745139+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:41.745272+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:42.745435+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:43.745522+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 58359808 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450107 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e39f860
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d2361e0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8ed734a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d90b43c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8f38ef00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d90b43c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.750972748s of 38.958789825s, submitted: 67
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e89bd000/0x0/0x4ffc00000, data 0x1ea8028/0x201c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:44.745646+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340697088 unmapped: 58351616 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8e4cc780
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8f048780
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8eb53c20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8f38e960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8d1a41e0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:45.745807+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:46.745962+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:47.746084+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8d116b40
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:48.746353+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492197 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8112000/0x0/0x4ffc00000, data 0x234708a/0x24bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8c738780
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:49.746545+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 58138624 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e2485a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8d2a63c0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:50.746965+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340926464 unmapped: 58122240 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d90b43c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d91e23800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:51.747134+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340926464 unmapped: 58122240 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:52.747326+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:53.747613+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529564 data_alloc: 218103808 data_used: 19730432
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d90b43c00 session 0x561d8e4cd860
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8e2492c0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:54.748242+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:55.748398+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:56.748641+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:57.748897+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:58.749064+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529432 data_alloc: 218103808 data_used: 19730432
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.582655907s of 14.895611763s, submitted: 46
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:59.749208+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:00.749560+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8e7c1e00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8d25a000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:01.749717+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 58114048 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:02.750140+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340942848 unmapped: 58105856 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:03.750377+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340942848 unmapped: 58105856 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529368 data_alloc: 218103808 data_used: 19738624
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:04.750654+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340942848 unmapped: 58105856 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:05.750793+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340951040 unmapped: 58097664 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:06.750928+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340951040 unmapped: 58097664 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e8110000/0x0/0x4ffc00000, data 0x23470bd/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:07.751249+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340951040 unmapped: 58097664 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e248780
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8f0483c0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:08.751419+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8f047c20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:09.751586+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:10.751758+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:11.751909+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:12.752049+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:13.752205+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:14.752354+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:15.752531+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:16.752754+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 59260928 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:17.752933+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:18.753971+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:19.754109+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:20.754286+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:21.754422+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 60497920 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:22.754620+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 60489728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:23.754865+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 60489728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:24.755069+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 60489728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:25.755254+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 60489728 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:26.755467+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:27.755637+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:28.755862+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:29.756011+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:30.756221+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:31.756428+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:32.756600+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:33.756777+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 60481536 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:34.756941+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:35.757098+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:36.757255+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:37.757441+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:38.758594+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:39.758756+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:40.758936+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:41.759106+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338575360 unmapped: 60473344 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:42.759347+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 60465152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:43.759534+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 60465152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456873 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:44.759662+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 60465152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:45.759842+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 60465152 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:46.759991+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338591744 unmapped: 60456960 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.281188965s of 48.498039246s, submitted: 50
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:47.760152+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8c880000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x1ea8018/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c5b000/0x0/0x4ffc00000, data 0x2800018/0x2973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:48.760372+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525743 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:49.760535+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c5b000/0x0/0x4ffc00000, data 0x2800018/0x2973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:50.760785+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:51.761004+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8f047680
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:52.761145+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10d400 session 0x561d8e2490e0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 60448768 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d91e23800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8f049e00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d91e23800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:53.761265+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c5b000/0x0/0x4ffc00000, data 0x2800018/0x2973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d91e23800 session 0x561d8e1a1a40
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338698240 unmapped: 60350464 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529969 data_alloc: 218103808 data_used: 14966784
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:54.761408+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10c000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338706432 unmapped: 60342272 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:55.761757+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:56.762388+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:57.762805+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x2824028/0x2998000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:58.763082+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3598449 data_alloc: 218103808 data_used: 24567808
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:59.763273+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:00.763402+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:01.763554+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:02.763701+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:03.763856+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x2824028/0x2998000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3598449 data_alloc: 218103808 data_used: 24567808
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:04.764035+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 60334080 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:05.764248+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.442842484s of 18.518671036s, submitted: 11
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 59375616 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:06.764391+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 54894592 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:07.764567+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:08.765005+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e712d000/0x0/0x4ffc00000, data 0x332d028/0x34a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3694817 data_alloc: 218103808 data_used: 24907776
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:09.765137+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:10.765538+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:11.765762+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:12.765923+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:13.766244+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3691285 data_alloc: 218103808 data_used: 24907776
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e7109000/0x0/0x4ffc00000, data 0x3351028/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:14.766512+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:15.766778+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 54632448 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.927303314s of 10.246125221s, submitted: 93
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:16.766994+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344424448 unmapped: 54624256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:17.767160+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344424448 unmapped: 54624256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8d10c000 session 0x561d8e4cdc20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 ms_handle_reset con 0x561d8c459800 session 0x561d8cff1860
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:18.767368+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e70e8000/0x0/0x4ffc00000, data 0x3372028/0x34e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344424448 unmapped: 54624256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3693961 data_alloc: 218103808 data_used: 24915968
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:19.768597+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 344424448 unmapped: 54624256 heap: 399048704 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:20.769203+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8d10d400 session 0x561d8e50e1e0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f3b1c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e41e400
Nov 25 09:46:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3140011526' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8f3b1c00 session 0x561d8ca97c20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8e41e400 session 0x561d8d116780
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8c459800 session 0x561d8e24f0e0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c6d7800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357629952 unmapped: 52314112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:21.769389+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 266 ms_handle_reset con 0x561d8c6d7800 session 0x561d8f38eb40
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d109000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 52297728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:22.769543+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 267 handle_osd_map epochs [267,268], i have 267, src has [1,268]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 268 handle_osd_map epochs [268,268], i have 268, src has [1,268]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 268 ms_handle_reset con 0x561d8d109000 session 0x561d8e3edc20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 52273152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:23.769669+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f02e000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 268 ms_handle_reset con 0x561d8f02e000 session 0x561d8e4cd4a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f02e000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 268 ms_handle_reset con 0x561d8f02e000 session 0x561d8f38fe00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 268 ms_handle_reset con 0x561d8c459800 session 0x561d8ed73e00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3970481 data_alloc: 234881024 data_used: 33873920
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 268 heartbeat osd_stat(store_statfs(0x4e6421000/0x0/0x4ffc00000, data 0x5072371/0x51eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:24.769848+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:25.769992+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:26.770146+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:27.770288+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c6d7800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 52264960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.572269440s of 12.077063560s, submitted: 93
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:28.770481+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c6d7800 session 0x561d8ed72000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3709496 data_alloc: 218103808 data_used: 14983168
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e78ea000/0x0/0x4ffc00000, data 0x3ba9dc4/0x3d23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:29.770828+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:30.771068+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:31.771260+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:32.771422+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:33.771578+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e78ea000/0x0/0x4ffc00000, data 0x3ba9dc4/0x3d23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3709496 data_alloc: 218103808 data_used: 14983168
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:34.771821+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e78ea000/0x0/0x4ffc00000, data 0x3ba9dc4/0x3d23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:35.772018+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 61046784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d109000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8d109000 session 0x561d8e3eda40
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e41e400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8e41e400 session 0x561d8e7c1c20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:36.772197+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c459800 session 0x561d8c659e00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c6d7800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c6d7800 session 0x561d8d1a5860
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d109000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8d109000 session 0x561d8d16e780
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f02e000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8f02e000 session 0x561d8e5105a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f02c400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 384196608 unmapped: 25747456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:37.772355+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8f02c400 session 0x561d8c74ba40
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 373506048 unmapped: 36438016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c459800 session 0x561d8d2363c0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:38.772529+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c6d7800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8c6d7800 session 0x561d8d1174a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 373506048 unmapped: 36438016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3907564 data_alloc: 251658240 data_used: 44687360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e6abd000/0x0/0x4ffc00000, data 0x49d7dc4/0x4b51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d109000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8d109000 session 0x561d8f38f4a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8f02e000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.500659943s of 11.212936401s, submitted: 43
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:39.772646+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 ms_handle_reset con 0x561d8f02e000 session 0x561d8e50e000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10bc00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 373506048 unmapped: 36438016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:40.772856+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d0b7000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e6abc000/0x0/0x4ffc00000, data 0x49d7dd3/0x4b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,2])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 368877568 unmapped: 41066496 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c5cbc00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:41.772986+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 270 ms_handle_reset con 0x561d8d0b7000 session 0x561d8f38ed20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:42.773173+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 270 heartbeat osd_stat(store_statfs(0x4e83e9000/0x0/0x4ffc00000, data 0x2cde942/0x2e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:43.773423+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3668229 data_alloc: 218103808 data_used: 22822912
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:44.773548+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 270 heartbeat osd_stat(store_statfs(0x4e83e9000/0x0/0x4ffc00000, data 0x2cde942/0x2e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:45.773681+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:46.773891+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 270 heartbeat osd_stat(store_statfs(0x4e83e9000/0x0/0x4ffc00000, data 0x2cde942/0x2e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:47.774119+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 270 heartbeat osd_stat(store_statfs(0x4e83e9000/0x0/0x4ffc00000, data 0x2cde942/0x2e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:48.774407+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671011 data_alloc: 218103808 data_used: 22822912
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:49.774544+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:50.774686+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b1000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:51.774863+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b1000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:52.774992+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 48005120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:53.775228+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.854998589s of 14.211294174s, submitted: 75
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3687955 data_alloc: 218103808 data_used: 24702976
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:54.775440+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:55.775575+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:56.775812+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:57.775979+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:58.776159+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 47857664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3693395 data_alloc: 234881024 data_used: 25255936
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:59.776383+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:00.776551+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:01.776742+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:02.776910+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:03.777106+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3693571 data_alloc: 234881024 data_used: 25251840
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:04.777259+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:05.777404+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.356461525s of 12.525735855s, submitted: 8
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:06.777574+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:07.777731+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:08.777892+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3693235 data_alloc: 234881024 data_used: 25247744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:09.778169+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:10.778324+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:11.778487+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:12.778641+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:13.778788+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3692883 data_alloc: 234881024 data_used: 25247744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:14.778985+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:15.779143+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e87b2000/0x0/0x4ffc00000, data 0x2ce03a5/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:16.779299+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:17.779455+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8c5cbc00 session 0x561d8f0485a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8d10bc00 session 0x561d8e3ed680
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d0b7000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.056241989s of 12.152028084s, submitted: 5
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 46710784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:18.779597+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8d0b7000 session 0x561d8e24eb40
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:19.779752+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:20.779941+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:21.780103+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:22.780259+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:23.780434+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:24.780604+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:25.780786+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:26.780915+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:27.781141+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:28.781354+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:29.781509+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:30.781686+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:31.781923+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:32.782128+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:33.782297+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:34.782467+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:35.782644+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:36.782807+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:37.782961+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:38.783142+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510922 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:39.783435+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:40.783609+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.4 total, 600.0 interval
                                           Cumulative writes: 45K writes, 180K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.79 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2715 writes, 11K keys, 2715 commit groups, 1.0 writes per commit group, ingest: 11.35 MB, 0.02 MB/s
                                           Interval WAL: 2715 writes, 1097 syncs, 2.47 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:41.783751+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e1000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 52322304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets getting new tickets!
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:42.783957+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _finish_auth 0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:42.784849+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.209249496s of 24.272548676s, submitted: 19
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8c459800 session 0x561d8c62bc20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:43.784143+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551006 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:44.784280+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e913b000/0x0/0x4ffc00000, data 0x2358396/0x24d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c6d7800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:45.784372+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8c6d7800 session 0x561d8c644960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:46.784488+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 52068352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:47.784648+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 356655104 unmapped: 53288960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:48.784807+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8c459800 session 0x561d8e18ba40
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10bc00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3516663 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:49.784917+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8d10bc00 session 0x561d8e18a780
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:50.785098+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:51.785243+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:52.785361+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:53.785523+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:54.785701+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:55.785864+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:56.786039+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:57.786205+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:58.786393+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:59.786576+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:00.786740+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:01.786861+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:02.787010+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:03.787147+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:04.787297+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:05.787540+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:06.787699+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:07.787931+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:08.788095+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:09.788266+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8e74f800 session 0x561d8e18be00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d109000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:10.788598+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: mgrc ms_handle_reset ms_handle_reset con 0x561d8c6aa800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3119838916
Nov 25 09:46:16 compute-0 ceph-osd[89702]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3119838916,v1:192.168.122.100:6801/3119838916]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: get_auth_request con 0x561d8c6d7800 auth_method 0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: mgrc handle_mgr_configure stats_period=5
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:11.788791+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8f032c00 session 0x561d8e1cc000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e74f800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 ms_handle_reset con 0x561d8f039c00 session 0x561d8d117860
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c2aa800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:12.789096+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:13.789362+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:14.789559+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:15.789775+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:16.790031+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:17.790204+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:18.790388+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:19.790627+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:20.790805+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:21.790991+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349806592 unmapped: 60137472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:22.791169+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 60129280 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:23.791368+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 60129280 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:24.791540+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 60129280 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:25.791684+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 60129280 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:26.791988+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 60121088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:27.792158+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 60121088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:28.792359+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 60121088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:29.792496+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 60121088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:30.792810+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:31.793124+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:32.793404+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:33.793634+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:34.793819+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:35.794193+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:36.794434+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:37.794599+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 60112896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:38.795943+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:39.796135+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:40.796352+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:41.796521+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:42.796708+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:43.796881+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:44.797052+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:45.797174+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 60104704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:46.797331+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349847552 unmapped: 60096512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:47.797539+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349847552 unmapped: 60096512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:48.797771+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349847552 unmapped: 60096512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:49.797891+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349847552 unmapped: 60096512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:50.798041+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:51.798151+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:52.798298+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:53.798512+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:54.798722+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:55.798898+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 60088320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:56.799120+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 60080128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:57.799278+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 60080128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:58.799495+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 heartbeat osd_stat(store_statfs(0x4e95e0000/0x0/0x4ffc00000, data 0x1eb2396/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 60080128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:59.799640+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 60080128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515831 data_alloc: 218103808 data_used: 14991360
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e52e800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 77.114036560s of 77.737030029s, submitted: 33
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:00.799805+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349872128 unmapped: 60071936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 272 ms_handle_reset con 0x561d8e52e800 session 0x561d8d16f680
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:01.800010+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349880320 unmapped: 60063744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:02.800247+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 272 handle_osd_map epochs [272,273], i have 272, src has [1,273]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 273 ms_handle_reset con 0x561d8d10d000 session 0x561d8e4cd4a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:03.800413+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 273 heartbeat osd_stat(store_statfs(0x4e9a6b000/0x0/0x4ffc00000, data 0x1a26aa0/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:04.800578+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478923 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:05.800795+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:06.801025+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 60047360 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 273 heartbeat osd_stat(store_statfs(0x4e9a6b000/0x0/0x4ffc00000, data 0x1a26aa0/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:07.801546+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349904896 unmapped: 60039168 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 273 heartbeat osd_stat(store_statfs(0x4e9a6b000/0x0/0x4ffc00000, data 0x1a26aa0/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:08.801819+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 60022784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:09.802007+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 60022784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:10.802261+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 60022784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:11.802521+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 60022784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:12.802738+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:13.802961+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:14.803123+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:15.803357+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:16.803632+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 60014592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:17.803877+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 60006400 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:18.804079+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 59998208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:19.804263+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 59998208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:20.804480+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 59998208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:21.804693+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 59998208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:22.804878+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 59990016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 ms_handle_reset con 0x561d96070c00 session 0x561d8d117680
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d96070c00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:23.805071+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 59990016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:24.805358+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 59990016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:25.805586+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 59990016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:26.805811+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:27.805990+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:28.806230+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:29.806441+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:30.806633+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:31.806790+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:32.807037+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 59981824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:33.807192+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349970432 unmapped: 59973632 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a69000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:34.807371+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349978624 unmapped: 59965440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481705 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:35.807508+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349978624 unmapped: 59965440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.830196381s of 36.115074158s, submitted: 95
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:36.807636+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 59957248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:37.807810+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:38.808103+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:39.808385+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:40.808595+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:41.808771+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:42.808939+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:43.809082+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:44.809301+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:45.809575+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59916288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:46.809766+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 59908096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:47.809900+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 59908096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:48.810076+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 59908096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:49.810206+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 59908096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:50.810381+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 59899904 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:51.810524+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 59899904 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:52.810676+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 59899904 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:53.810810+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 59899904 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:54.810942+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 59891712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:55.811118+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 59891712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:56.811221+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 59891712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:57.811386+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 59883520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:58.811558+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 59883520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:59.811697+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 59883520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:00.811833+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 59883520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:01.811992+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 59875328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:02.812182+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 59875328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:03.812368+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 59875328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:04.812507+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350068736 unmapped: 59875328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:05.812629+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 59867136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:06.812811+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 59867136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:07.813017+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 59867136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:08.813193+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 59867136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:09.813415+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 59858944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:10.813641+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 59858944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:11.813824+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 59858944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:12.814048+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 59858944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:13.814282+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:14.814510+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:15.814657+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:16.814854+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:17.815041+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:18.815219+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:19.815400+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 59850752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:20.815527+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350101504 unmapped: 59842560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:21.815674+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350101504 unmapped: 59842560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:22.815851+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 59834368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:23.816056+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 59834368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:24.816213+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:25.816389+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:26.816570+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:27.816757+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:28.816975+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 59826176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:29.817167+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 59809792 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:30.817392+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 59809792 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:31.817570+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 59801600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:32.817818+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 59801600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:33.818027+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 59801600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:34.818226+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 59793408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:35.818382+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 59793408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:36.818557+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 59793408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:37.818736+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:38.818930+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:39.819068+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:40.819282+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:41.819501+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:42.819714+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:43.819928+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:44.820130+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 59785216 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:45.820437+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 59768832 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:46.820602+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 59768832 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:47.820744+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 59768832 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:48.821478+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:49.821642+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:50.821893+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:51.822044+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:52.822257+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 59760640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:53.822467+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:54.822675+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480825 data_alloc: 218103808 data_used: 15007744
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:55.822811+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:56.823118+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:57.823349+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 59752448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:58.823569+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 ms_handle_reset con 0x561d8c459800 session 0x561d8e50e1e0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10bc00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 59490304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 ms_handle_reset con 0x561d8d10bc00 session 0x561d8cff1860
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:59.823756+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 59744256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486265 data_alloc: 234881024 data_used: 21430272
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:00.823911+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:01.824058+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:02.824201+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 heartbeat osd_stat(store_statfs(0x4e9a6a000/0x0/0x4ffc00000, data 0x1a28523/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:03.824433+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:04.824678+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486265 data_alloc: 234881024 data_used: 21430272
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:05.824871+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:06.825023+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 59654144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:07.825172+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 90.937507629s of 91.256561279s, submitted: 90
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 275 heartbeat osd_stat(store_statfs(0x4e9a66000/0x0/0x4ffc00000, data 0x1a2a0f4/0x1ba7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 61603840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 275 ms_handle_reset con 0x561d8d10d000 session 0x561d8e702d20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:08.825362+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 61603840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:09.825538+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 61603840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3342967 data_alloc: 218103808 data_used: 7806976
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:10.825764+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e52e800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 61603840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:11.825950+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 61595648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:12.826108+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 276 heartbeat osd_stat(store_statfs(0x4eaa68000/0x0/0x4ffc00000, data 0xa2a0d1/0xba6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 276 ms_handle_reset con 0x561d8e52e800 session 0x561d8c644f00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:13.826351+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:14.826579+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3272720 data_alloc: 218103808 data_used: 1056768
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:15.826799+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:16.826979+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 64823296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:17.827174+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 277 heartbeat osd_stat(store_statfs(0x4eb263000/0x0/0x4ffc00000, data 0x22d6ee/0x3aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 64815104 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:18.827393+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 277 heartbeat osd_stat(store_statfs(0x4eb263000/0x0/0x4ffc00000, data 0x22d6ee/0x3aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.671767235s of 10.978181839s, submitted: 60
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:19.827542+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3275694 data_alloc: 218103808 data_used: 1056768
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 278 heartbeat osd_stat(store_statfs(0x4eb260000/0x0/0x4ffc00000, data 0x22f151/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:20.827757+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 278 heartbeat osd_stat(store_statfs(0x4eb260000/0x0/0x4ffc00000, data 0x22f151/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:21.827923+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 278 heartbeat osd_stat(store_statfs(0x4eb260000/0x0/0x4ffc00000, data 0x22f151/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:22.828459+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:23.828624+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:24.828811+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 64798720 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3275694 data_alloc: 218103808 data_used: 1056768
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e532400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:25.828897+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 ms_handle_reset con 0x561d8e532400 session 0x561d8e7034a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:26.829058+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:27.829235+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:28.829468+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:29.829637+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:30.829804+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:31.829982+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 64749568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:32.830102+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 64741376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:33.830262+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 64741376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:34.830426+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 64741376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:35.830579+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 64741376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:36.830732+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 64733184 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:37.830915+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 64733184 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:38.831105+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 64733184 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:39.831251+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 64733184 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:40.831420+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 64724992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:41.831566+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:42.831717+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:43.831853+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:44.831981+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:45.832144+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:46.832288+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:47.832461+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:48.832639+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 64716800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:49.832839+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 64708608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:50.833030+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 64708608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:51.834205+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 64708608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:52.834458+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 64708608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:53.834601+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345243648 unmapped: 64700416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:54.834746+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345243648 unmapped: 64700416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:55.834941+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 64692224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:56.835136+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 64692224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:57.835360+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:58.835887+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:59.836210+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:00.836407+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:01.836683+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:02.836963+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:03.837106+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:04.837291+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:05.837438+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 64684032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:06.837574+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 64675840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:07.837803+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 64675840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:08.838014+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 64675840 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:09.838192+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:10.838371+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:11.838581+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:12.838770+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:13.838951+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 64667648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:14.839133+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 64659456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:15.839391+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 64659456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:16.839511+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:17.839673+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:18.839890+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:19.840074+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:20.840251+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:21.840410+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 64651264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:22.840547+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:23.840683+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:24.840821+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:25.840929+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:26.841056+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:27.841188+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:28.841397+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 64634880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:29.841567+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:30.841701+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:31.841825+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:32.841951+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:33.842112+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:34.842293+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:35.842487+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:36.842658+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:37.842796+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 64626688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:38.842964+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:39.843103+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:40.843243+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:41.843435+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:42.843623+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:43.843778+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 64610304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:44.843917+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345341952 unmapped: 64602112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:45.844210+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345341952 unmapped: 64602112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:46.844360+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:47.844486+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:48.844701+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:49.844839+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:50.844975+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:51.845125+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:52.861992+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:53.862103+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 64593920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:54.862248+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 64585728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:55.862387+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 64585728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:56.862546+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:57.862689+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:58.862844+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:59.862962+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:00.863129+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:01.863276+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 64577536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:02.863389+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:03.863578+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:04.863791+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:05.863933+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284115 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25b000/0x0/0x4ffc00000, data 0x230d01/0x3b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:06.864165+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:07.864293+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345382912 unmapped: 64561152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:08.864540+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 64552960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:09.864740+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345391104 unmapped: 64552960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 111.406425476s of 111.508773804s, submitted: 26
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:10.865084+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3287953 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 64544768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25a000/0x0/0x4ffc00000, data 0x230d2f/0x3b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:11.865214+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 64544768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:12.865352+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345399296 unmapped: 64544768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:13.865481+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25a000/0x0/0x4ffc00000, data 0x230d34/0x3b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:14.865632+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25a000/0x0/0x4ffc00000, data 0x230d34/0x3b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:15.865812+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3288207 data_alloc: 218103808 data_used: 1064960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:16.866088+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 heartbeat osd_stat(store_statfs(0x4eb25a000/0x0/0x4ffc00000, data 0x230d34/0x3b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:17.866393+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 64512000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:18.866579+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:19.866728+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 280 ms_handle_reset con 0x561d8c459800 session 0x561d8ed72d20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10bc00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:20.866870+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291989 data_alloc: 218103808 data_used: 1073152
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:21.867023+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb256000/0x0/0x4ffc00000, data 0x2328b1/0x3b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:22.867192+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 64487424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.714424610s of 13.350893974s, submitted: 22
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:23.867370+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345464832 unmapped: 64479232 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:24.867497+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345464832 unmapped: 64479232 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:25.867623+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 281 ms_handle_reset con 0x561d8d10bc00 session 0x561d8c881a40
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:26.867776+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:27.867912+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:28.868083+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:29.868197+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:30.868469+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:31.868880+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 64454656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:32.869080+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 64438272 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:33.869280+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 64438272 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:34.869382+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:35.869585+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:36.869752+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:37.869905+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:38.870184+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:39.870420+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:40.870812+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 64430080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:41.871126+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345522176 unmapped: 64421888 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:42.871290+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 64413696 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:43.871582+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 64413696 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:44.871849+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 64413696 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:45.872098+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 64413696 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:46.872420+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:47.872668+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:48.872956+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:49.873391+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:50.873548+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:51.873771+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 64405504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:52.873983+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 64397312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:53.874283+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 64397312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:54.874536+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 64397312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:55.874722+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 64397312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:56.874887+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345554944 unmapped: 64389120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:57.875113+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 64380928 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:58.875343+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 64380928 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:59.875516+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 64380928 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:00.875642+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327899 data_alloc: 218103808 data_used: 1081344
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 64380928 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:01.875839+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345571328 unmapped: 64372736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:02.876068+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 281 heartbeat osd_stat(store_statfs(0x4eade2000/0x0/0x4ffc00000, data 0x6a4451/0x82b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345579520 unmapped: 64364544 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:03.876250+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10d000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.750205994s of 40.227725983s, submitted: 4
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345587712 unmapped: 64356352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:04.876431+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 282 ms_handle_reset con 0x561d8d10d000 session 0x561d8e8c54a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:05.876643+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328998 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 282 heartbeat osd_stat(store_statfs(0x4eade1000/0x0/0x4ffc00000, data 0x6a5fef/0x82c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:06.876826+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:07.876966+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:08.877204+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 282 heartbeat osd_stat(store_statfs(0x4eade1000/0x0/0x4ffc00000, data 0x6a5fef/0x82c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:09.877356+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:10.877585+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328998 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:11.880769+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 282 heartbeat osd_stat(store_statfs(0x4eade1000/0x0/0x4ffc00000, data 0x6a5fef/0x82c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:12.880940+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345612288 unmapped: 64331776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:13.881125+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.148046494s of 10.304548264s, submitted: 38
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 64315392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 282 handle_osd_map epochs [283,283], i have 283, src has [1,283]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:14.881382+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345645056 unmapped: 64299008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:15.881532+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345645056 unmapped: 64299008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:16.881905+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345645056 unmapped: 64299008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:17.882058+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 64290816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:18.882261+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 64290816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:19.882422+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 64290816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:20.882542+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:21.882695+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:22.882977+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:23.883175+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:24.883346+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:25.883519+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 64282624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:26.884543+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 64274432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:27.884693+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345669632 unmapped: 64274432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:28.884876+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345677824 unmapped: 64266240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:29.885016+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345677824 unmapped: 64266240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:30.885140+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:31.885360+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:32.885533+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:33.885898+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:34.886031+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:35.886194+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:36.886358+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 64258048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:37.886529+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:38.886707+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:39.886857+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:40.887006+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:41.887134+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:42.887296+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 64249856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:43.887529+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 64233472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:44.887666+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 64233472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:45.887809+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:46.887968+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:47.888116+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:48.888274+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:49.888429+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:50.888561+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:51.888686+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:52.888883+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345726976 unmapped: 64217088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:53.889061+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 64208896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:54.889216+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 64208896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:55.889352+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 64208896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:56.889492+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 64208896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:57.889622+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 64192512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:58.889777+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 64192512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:59.889942+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 64192512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:00.890084+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 64192512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:01.890274+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 64176128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:02.890426+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 64176128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:03.890561+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 64176128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:04.890692+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 64176128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:05.890826+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 64167936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:06.890970+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 64167936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:07.891100+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 64167936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:08.891259+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 64167936 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:09.891386+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:10.891527+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:11.891723+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:12.891893+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:13.892058+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:14.892178+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 64159744 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:15.892423+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 64151552 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:16.892550+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 64151552 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:17.892682+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 64135168 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:18.892881+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:19.893016+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:20.893187+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:21.893326+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:22.893505+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 64126976 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:23.893676+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 64118784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:24.893805+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 64118784 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:25.893966+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 64110592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:26.894177+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 64110592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:27.894393+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 64110592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:28.894725+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 64102400 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:29.894900+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 64102400 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:30.895022+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345849856 unmapped: 64094208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:31.895170+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345849856 unmapped: 64094208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:32.895301+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345849856 unmapped: 64094208 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:33.895480+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 64086016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:34.895652+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 64086016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:35.898259+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 64086016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:36.898392+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 64086016 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:37.898555+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 64077824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:38.898981+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 64077824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:39.899202+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 64077824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:40.899434+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:41.899666+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 64077824 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:42.899855+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:43.900026+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:44.900297+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:45.901009+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:46.901433+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:47.901576+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:48.901913+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:49.902133+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345882624 unmapped: 64061440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:50.902271+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345890816 unmapped: 64053248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:51.902417+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345890816 unmapped: 64053248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:52.902540+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345899008 unmapped: 64045056 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:53.902687+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345899008 unmapped: 64045056 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:54.903016+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345907200 unmapped: 64036864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:55.903199+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345907200 unmapped: 64036864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:56.903363+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345907200 unmapped: 64036864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:57.903513+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345907200 unmapped: 64036864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:58.903694+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:59.903845+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:00.903974+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:01.904124+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:02.904279+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:03.904419+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:04.904575+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:05.904743+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 64028672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:06.904898+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 64012288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:07.905050+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 64012288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:08.905208+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 64012288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:09.905346+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 64012288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:10.905498+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 64004096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:11.905670+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 64004096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:12.905808+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 64004096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:13.905983+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 64004096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:14.906140+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345956352 unmapped: 63987712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:15.906280+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345956352 unmapped: 63987712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:16.906399+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345964544 unmapped: 63979520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:17.906581+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345964544 unmapped: 63979520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:18.906769+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 63971328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:19.906907+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 63971328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:20.907196+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 63971328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:21.907370+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345972736 unmapped: 63971328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:22.907583+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 63963136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:23.907721+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 63963136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:24.907870+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345980928 unmapped: 63963136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:25.908000+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 63954944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:26.908184+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 63954944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:27.908409+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 63954944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:28.908632+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 63946752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:29.908783+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 63946752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:30.908969+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 63938560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:31.909138+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 63938560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:32.909268+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 63938560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:33.909424+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 63938560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:34.909554+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 63938560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:35.909678+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 63930368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:36.909798+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 63930368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:37.909933+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 63922176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:38.910124+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:39.910293+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:40.910453+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:41.910588+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:42.910711+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:43.910843+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:44.910976+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:45.911113+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:46.911249+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:47.911424+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:48.911604+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:49.911765+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 67379200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:50.911950+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:51.912128+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:52.912425+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:53.912595+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:54.912767+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:55.912929+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:56.913065+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:57.913188+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:58.913391+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:59.913542+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:00.913706+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:01.913844+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:02.913992+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:03.914145+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:04.914384+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:05.914560+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 67371008 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:06.914764+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:07.914963+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:08.915230+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:09.915434+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:10.915634+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:11.915850+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:12.916007+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:13.916092+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:14.916219+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:15.916365+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:16.916537+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 67362816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:17.916676+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 67346432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:18.916873+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 67346432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:19.916954+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 67346432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:20.917079+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 67338240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:21.917271+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 67338240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:22.917421+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 67330048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:23.917549+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 67330048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:24.917689+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 67330048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:25.917878+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 67330048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:26.918043+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:27.918211+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:28.918410+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:29.918557+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:30.918702+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:31.918913+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:32.919126+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 67321856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:33.919395+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 67313664 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:34.919569+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 67305472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:35.919717+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 67305472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:36.919969+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 67305472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:37.920187+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 67289088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:38.920403+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 67289088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:39.920573+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 67289088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:40.920722+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 67289088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.4 total, 600.0 interval
                                           Cumulative writes: 46K writes, 181K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.78 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 691 writes, 1763 keys, 691 commit groups, 1.0 writes per commit group, ingest: 0.85 MB, 0.00 MB/s
                                           Interval WAL: 691 writes, 309 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.083       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.083       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.083       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561d8ae0f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:41.920834+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 67280896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:42.920971+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 67280896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:43.921158+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 67280896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:44.921385+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 67280896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:45.921552+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 67280896 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:46.921863+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 67272704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:47.922101+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 67272704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:48.922367+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 67272704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:49.922552+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 67264512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:50.922747+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 67264512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:51.922914+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 67264512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:52.923117+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 67264512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:53.923577+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342695936 unmapped: 67248128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:54.923782+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342695936 unmapped: 67248128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:55.923966+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341778432 unmapped: 68165632 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:56.924176+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341778432 unmapped: 68165632 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:57.924450+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341786624 unmapped: 68157440 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:58.924690+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 68149248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:59.924834+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 68149248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:00.924990+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 68149248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:01.925132+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 68149248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:02.925287+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 68149248 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:03.925495+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341803008 unmapped: 68141056 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:04.925629+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341803008 unmapped: 68141056 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:05.925764+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:06.925949+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:07.926093+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:08.926272+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:09.926428+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:10.926574+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:11.926718+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 68132864 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:12.926955+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341819392 unmapped: 68124672 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:13.927082+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341827584 unmapped: 68116480 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:14.927192+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341827584 unmapped: 68116480 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:15.935560+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341827584 unmapped: 68116480 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:16.935732+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341827584 unmapped: 68116480 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:17.935864+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341835776 unmapped: 68108288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:18.936013+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341835776 unmapped: 68108288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:19.936152+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341835776 unmapped: 68108288 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:20.936359+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341843968 unmapped: 68100096 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:21.936511+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341860352 unmapped: 68083712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:22.936664+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341860352 unmapped: 68083712 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:23.936817+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:24.936990+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:25.937132+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:26.937282+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:27.937426+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:28.937590+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 68075520 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:29.937743+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341876736 unmapped: 68067328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:30.937890+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341876736 unmapped: 68067328 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:31.938028+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:32.938243+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:33.938412+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:34.938592+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:35.938736+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:36.938868+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 68059136 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:37.939049+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 68050944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:38.939225+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 68050944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:39.939375+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 68050944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:40.939521+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 68050944 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:41.939683+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 68042752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:42.939827+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 68042752 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:43.940004+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341909504 unmapped: 68034560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:44.940143+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341909504 unmapped: 68034560 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:45.940370+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:46.940525+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:47.940706+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:48.940935+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:49.941390+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:50.941655+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:51.941939+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:52.942200+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 68018176 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:53.942413+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 68001792 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:54.942608+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 67993600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:55.942755+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 67993600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:56.942882+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 67993600 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:57.943024+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 67985408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:58.943602+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:59.943834+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 67985408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:00.943962+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 67985408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:01.944076+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 67985408 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:02.944335+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:03.944544+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:04.944733+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:05.944930+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:06.945152+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:07.945404+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:08.945625+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:09.945783+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 67969024 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:10.946004+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 67960832 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:11.946192+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 67952640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:12.946381+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 67952640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:13.946540+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 67952640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:14.946681+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 67952640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:15.946807+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 67952640 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:16.946980+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 67944448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:17.947274+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 67944448 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:18.947607+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:19.947744+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:20.947891+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:21.948248+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:22.948676+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:23.949081+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 67936256 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:24.949407+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 67928064 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:25.949703+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 67928064 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:26.949990+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 67919872 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:27.950250+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 67919872 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:28.950614+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 67919872 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:29.950833+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 67919872 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:30.951149+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 67903488 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:31.951571+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 67903488 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331972 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:32.951821+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 67895296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:33.952043+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 67895296 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:34.952366+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 67887104 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:35.952566+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 67887104 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eadde000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:36.952777+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 67887104 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 322.153381348s of 322.755462646s, submitted: 14
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:37.952923+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 67846144 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:38.953146+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:39.953451+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:40.953643+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:41.954038+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:42.954208+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:43.954398+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:44.954562+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:45.954726+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 67837952 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:46.954930+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:47.955083+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:48.955279+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:49.955505+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:50.955679+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:51.955850+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:52.956107+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 67821568 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:53.956853+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 67813376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:54.957602+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 67813376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:55.957875+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 67813376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:56.958391+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 67813376 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:57.958997+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 67796992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:58.959541+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 67796992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:59.959985+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 67796992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:00.960330+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 67796992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:01.960615+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 67796992 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:02.960831+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342155264 unmapped: 67788800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:03.961269+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342155264 unmapped: 67788800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:04.961400+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342155264 unmapped: 67788800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:05.961778+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342155264 unmapped: 67788800 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:06.962069+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 67780608 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:07.962446+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:08.962639+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:09.962798+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:10.962987+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:11.963111+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:12.963288+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 67772416 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:13.963912+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 67764224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:14.964221+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 67764224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:15.964399+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 67764224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:16.964747+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 67764224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:17.965013+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 67764224 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:18.965375+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:19.965644+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:20.966222+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:21.966576+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:22.966983+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:23.967388+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 67756032 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:24.967570+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 67739648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:25.967791+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 67739648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:26.967987+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 67739648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:27.968152+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 67739648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:28.968380+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 67739648 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:29.968528+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:30.968669+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:31.968859+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:32.969105+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:33.969300+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:34.969492+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:35.969615+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 67731456 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:36.969801+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342228992 unmapped: 67715072 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:37.969946+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342237184 unmapped: 67706880 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:38.970114+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:39.970280+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:40.970500+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:41.970643+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:42.970796+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:43.970986+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:44.971216+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 67698688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:45.971404+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:46.971585+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:47.971683+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:48.971872+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:49.972019+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:50.972200+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 67682304 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:51.972421+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 67674112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:52.972582+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 67674112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:53.972730+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 67674112 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:54.972872+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 67665920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:55.973076+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 67665920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:56.973270+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 67665920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:57.973472+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 67665920 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:58.973707+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 67657728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:59.973897+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 67657728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:00.974036+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342294528 unmapped: 67649536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:01.974168+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 67641344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:02.974412+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 67641344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:03.974556+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 67641344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:04.974726+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 67633152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:05.974863+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 67633152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:06.975036+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 67633152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:07.975200+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 67633152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:08.975395+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 67633152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:09.975572+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 67616768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:10.975747+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 67616768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:11.975866+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 67616768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:12.975978+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331092 data_alloc: 218103808 data_used: 1089536
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 67616768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:13.976121+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 67616768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:14.976274+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e52e800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 97.540496826s of 97.933944702s, submitted: 90
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 67608576 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:15.976518+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 70795264 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:16.976672+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eaddf000/0x0/0x4ffc00000, data 0x6a7a52/0x82f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 284 ms_handle_reset con 0x561d8e52e800 session 0x561d8e7c0960
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:17.976864+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3333441 data_alloc: 218103808 data_used: 1097728
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:18.977056+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e532400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eaddc000/0x0/0x4ffc00000, data 0x6a95f0/0x830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:19.977222+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:20.977486+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:21.977731+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 285 ms_handle_reset con 0x561d8e532400 session 0x561d8d25ba40
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:22.978393+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3305645 data_alloc: 218103808 data_used: 1097728
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:23.978597+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:24.978991+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 285 heartbeat osd_stat(store_statfs(0x4eb24b000/0x0/0x4ffc00000, data 0x23b19e/0x3c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.389628410s of 10.221417427s, submitted: 70
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:25.979267+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8c459800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:26.979455+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 286 heartbeat osd_stat(store_statfs(0x4eb248000/0x0/0x4ffc00000, data 0x23cc40/0x3c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:27.979592+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 286 heartbeat osd_stat(store_statfs(0x4ea248000/0x0/0x4ffc00000, data 0x123cc40/0x13c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452337 data_alloc: 218103808 data_used: 1097728
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 70770688 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:28.979824+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 287 ms_handle_reset con 0x561d8c459800 session 0x561d8e3ed680
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:29.979977+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:30.981995+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:31.982736+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:32.983504+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:33.983714+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:34.983894+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:35.984532+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:36.984957+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:37.985170+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:38.985450+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:39.985783+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:40.986066+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 70721536 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:41.986681+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 70713344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:42.986895+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 70713344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:43.987474+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 70713344 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:44.987704+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 70705152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:45.987928+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 70705152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:46.988271+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 70705152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:47.988657+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 70705152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:48.988843+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 70705152 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:49.989083+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:50.989251+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:51.989425+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:52.989654+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:53.989811+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:54.989995+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:55.990255+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:56.990443+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 70696960 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:57.990603+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:58.990792+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:59.990990+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:00.991197+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:01.991377+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:02.991563+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:03.991721+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:04.991882+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339255296 unmapped: 70688768 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:05.992020+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 70680576 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:06.992164+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 70680576 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:07.992294+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:08.992542+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:09.992705+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:10.992831+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:11.993049+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:12.993211+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 70672384 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:13.993386+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339279872 unmapped: 70664192 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:14.993562+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 70656000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:15.993716+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 70656000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:16.993845+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 70656000 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:17.993981+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:18.994235+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:19.994405+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:20.994548+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:21.994724+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:22.994878+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460557 data_alloc: 218103808 data_used: 1105920
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:23.995044+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 70647808 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:24.995191+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 70639616 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:25.995383+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 70639616 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:26.995591+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:27.995771+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 70639616 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e9dd0000/0x0/0x4ffc00000, data 0x16b025f/0x183d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10bc00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 61.414230347s of 63.027751923s, submitted: 47
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3459809 data_alloc: 218103808 data_used: 1105920
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:28.995939+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 70631424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:29.996139+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 70631424 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:30.996566+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339337216 unmapped: 70606848 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:31.996761+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 70598656 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 289 heartbeat osd_stat(store_statfs(0x4e9dce000/0x0/0x4ffc00000, data 0x16b1e0d/0x183f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 289 heartbeat osd_stat(store_statfs(0x4e9dce000/0x0/0x4ffc00000, data 0x16b1e0d/0x183f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:32.996924+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339353600 unmapped: 70590464 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3325729 data_alloc: 218103808 data_used: 1114112
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 289 heartbeat osd_stat(store_statfs(0x4ea5cf000/0x0/0x4ffc00000, data 0x241e0d/0x3cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:33.997071+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 289 ms_handle_reset con 0x561d8d10bc00 session 0x561d8e8c43c0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:34.997275+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:35.997406+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:36.997579+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:37.997700+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 289 heartbeat osd_stat(store_statfs(0x4ea5cf000/0x0/0x4ffc00000, data 0x241dea/0x3ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324113 data_alloc: 218103808 data_used: 1114112
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:38.997860+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 70574080 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _renew_subs
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.201864243s of 11.404572487s, submitted: 53
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 289 heartbeat osd_stat(store_statfs(0x4ea5cf000/0x0/0x4ffc00000, data 0x241dea/0x3ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,2])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 289 handle_osd_map epochs [290,290], i have 290, src has [1,290]
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:39.998007+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 70549504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:40.998190+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 70549504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:41.998334+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 70549504 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:42.998486+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 70541312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:43.998651+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 70541312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:44.998820+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 70541312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:45.998956+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 70541312 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:46.999090+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:47.999214+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:48.999421+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:49.999549+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:51.000526+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:52.000639+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:53.000842+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:54.000993+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 70533120 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:55.001136+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:56.001293+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:57.001473+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:58.001630+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:59.001909+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:00.002062+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:01.002240+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:02.002373+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 70516736 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:03.002496+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 70508544 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:04.002649+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 70508544 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:05.002779+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 70508544 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:06.002934+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 70500352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:07.003120+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 70500352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:08.003261+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 70500352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:09.003432+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 70500352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:10.003601+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 70500352 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 ms_handle_reset con 0x561d8d109000 session 0x561d8e39fe00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d10cc00
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:11.003771+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 70492160 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 ms_handle_reset con 0x561d8e74f800 session 0x561d8d1163c0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8d109000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 ms_handle_reset con 0x561d8c2aa800 session 0x561d8d2a6d20
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d8e52e800
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:12.004010+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 70483968 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:13.004193+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 70483968 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:14.004386+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 70475776 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:15.004735+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 70467584 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:16.004904+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 70467584 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:17.005149+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 70467584 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:18.006067+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 70459392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:19.033442+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 70459392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:20.033653+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 70459392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:21.033829+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 70459392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:22.033989+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 70459392 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:23.034156+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 70451200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:24.034414+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 70451200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:25.034561+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 70451200 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:26.034759+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 70434816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:27.034908+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 70434816 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:28.035110+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 70426624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:29.035333+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 70426624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:30.035511+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 70426624 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:31.035708+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 70418432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:32.035855+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 70418432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:33.036041+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 70418432 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:34.036213+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 70410240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:35.036373+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 70410240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:36.036540+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 70410240 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:37.036666+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 70402048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:38.036816+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 70402048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:39.037027+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 70402048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:40.037146+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 70402048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:41.037324+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 70402048 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:42.037471+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339550208 unmapped: 70393856 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:43.037605+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:44.037731+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:45.037896+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:46.038055+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:47.038224+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:48.038357+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 70377472 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:49.038545+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:50.038675+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:51.038806+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:52.038993+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:53.039124+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:54.039292+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:55.039491+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:56.039648+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:57.039805+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 70361088 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:58.039962+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 70344704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:59.040151+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 70344704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:00.040266+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 70344704 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:01.040367+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 70336512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:02.040517+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 70336512 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:03.040635+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 70328320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:04.040741+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 70328320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:05.040854+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 70328320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:06.041228+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 70320128 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 25 09:46:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1149545527' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:07.041388+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 70328320 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'config diff' '{prefix=config diff}'
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:08.041625+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'config show' '{prefix=config show}'
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 70729728 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:09.041854+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338845696 unmapped: 71098368 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:10.042052+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338665472 unmapped: 71278592 heap: 409944064 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'log dump' '{prefix=log dump}'
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:11.042225+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'perf dump' '{prefix=perf dump}'
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 82190336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'perf schema' '{prefix=perf schema}'
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:12.042503+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338419712 unmapped: 82567168 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:13.042701+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338419712 unmapped: 82567168 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:14.042969+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338419712 unmapped: 82567168 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:15.043108+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338419712 unmapped: 82567168 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:16.043246+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338427904 unmapped: 82558976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:17.043403+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338427904 unmapped: 82558976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:18.043593+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338427904 unmapped: 82558976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:19.043780+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338427904 unmapped: 82558976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:20.043895+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338427904 unmapped: 82558976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:21.044017+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338427904 unmapped: 82558976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:22.044505+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338444288 unmapped: 82542592 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:23.044643+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338444288 unmapped: 82542592 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 ms_handle_reset con 0x561d96070c00 session 0x561d8d2365a0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: handle_auth_request added challenge on 0x561d93337000
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:24.044763+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338444288 unmapped: 82542592 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:25.044892+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338444288 unmapped: 82542592 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:26.045024+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338452480 unmapped: 82534400 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:27.047099+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338460672 unmapped: 82526208 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:28.047242+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338460672 unmapped: 82526208 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:29.047411+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338460672 unmapped: 82526208 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:30.047576+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338468864 unmapped: 82518016 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:31.047713+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338468864 unmapped: 82518016 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:32.047863+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338468864 unmapped: 82518016 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:33.047995+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338468864 unmapped: 82518016 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:34.048201+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338468864 unmapped: 82518016 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:35.048370+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 82501632 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:36.049003+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 82501632 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:37.049202+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 82501632 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:38.049401+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 82493440 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:39.049649+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 82485248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:40.049797+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 82485248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:41.049913+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 82485248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:42.050046+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 82485248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:43.050212+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 82485248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:44.050380+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 82485248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:45.050511+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338509824 unmapped: 82477056 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:46.050644+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 82468864 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:47.050782+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 82468864 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:48.050913+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 82468864 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:49.051116+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338526208 unmapped: 82460672 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:50.051258+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338526208 unmapped: 82460672 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:51.051419+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338526208 unmapped: 82460672 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:52.051554+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338526208 unmapped: 82460672 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:53.051721+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338526208 unmapped: 82460672 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:54.051893+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 82452480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:55.052054+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 82452480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:56.052240+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 82452480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:57.052436+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 82452480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:58.052613+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 82452480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:59.052808+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 82452480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:00.052967+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338542592 unmapped: 82444288 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:01.053149+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338542592 unmapped: 82444288 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:02.053339+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 82436096 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:03.053610+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 82436096 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:04.053806+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 82436096 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:05.053965+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 82436096 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:06.054126+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 82427904 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:07.054276+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 82427904 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:08.054642+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 82427904 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:09.054822+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 82419712 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:10.055019+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 82403328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:11.055187+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 82403328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:12.055355+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 82403328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:13.055501+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 82403328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:14.055702+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 82403328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:15.055856+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 82403328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:16.055987+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 82403328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:17.056138+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338583552 unmapped: 82403328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:18.056336+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338591744 unmapped: 82395136 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:19.056544+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338591744 unmapped: 82395136 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:20.056751+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338591744 unmapped: 82395136 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:21.056916+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338591744 unmapped: 82395136 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:22.057092+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338591744 unmapped: 82395136 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:23.057256+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 82386944 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:24.057499+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 82386944 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:25.057669+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 82386944 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:26.057814+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 82362368 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:27.057962+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 82362368 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:28.058109+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 82362368 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:29.058276+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 82362368 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:30.058418+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338632704 unmapped: 82354176 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:31.058541+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338632704 unmapped: 82354176 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:32.058737+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338632704 unmapped: 82354176 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:33.058894+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338632704 unmapped: 82354176 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:34.059084+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 82345984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:35.059291+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 82345984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:36.059552+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 82345984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:37.059753+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 82345984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:38.059936+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 82345984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:39.060135+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 82345984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:40.060357+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 82345984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:41.060541+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338640896 unmapped: 82345984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:42.060795+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 82329600 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:43.060984+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 82329600 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:44.061148+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 82329600 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:45.061410+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 82329600 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:46.061598+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338673664 unmapped: 82313216 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:47.061846+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338673664 unmapped: 82313216 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:48.062061+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338673664 unmapped: 82313216 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:49.062352+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338681856 unmapped: 82305024 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:50.062535+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 82296832 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:51.062694+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 82296832 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:52.062875+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 82296832 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:53.063037+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 82296832 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:54.063272+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 82296832 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:55.063458+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 82296832 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:56.063606+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 82296832 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:57.063740+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 82296832 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:58.063865+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338706432 unmapped: 82280448 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:59.064386+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338706432 unmapped: 82280448 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:00.064570+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338706432 unmapped: 82280448 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:01.064817+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 82272256 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:02.064956+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 82272256 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:03.065099+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 82272256 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:04.065241+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 82272256 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:05.065442+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 82272256 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:06.065618+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338722816 unmapped: 82264064 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:07.065784+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338731008 unmapped: 82255872 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:08.065947+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338739200 unmapped: 82247680 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:09.067264+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338739200 unmapped: 82247680 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:10.067528+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338739200 unmapped: 82247680 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:11.067702+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338739200 unmapped: 82247680 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:12.067952+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338739200 unmapped: 82247680 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:13.068130+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338739200 unmapped: 82247680 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:14.068402+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 82223104 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:15.068555+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 82223104 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:16.068755+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 82223104 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:17.068907+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 82223104 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:18.069087+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 82223104 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:19.069286+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 82223104 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:20.069497+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 82223104 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:21.069750+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 82223104 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:22.070028+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 82214912 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:23.070212+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 82214912 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:24.070421+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 82214912 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:25.070575+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 82206720 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:26.070793+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 82206720 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:27.071123+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 82206720 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:28.071483+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 82206720 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:29.071694+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:30.071883+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 82206720 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:31.072110+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 82190336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:32.072393+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 82190336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:33.072568+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 82190336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:34.073918+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 82190336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:35.074106+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 82190336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:36.074255+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 82190336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:37.074394+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338804736 unmapped: 82182144 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:38.074791+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338804736 unmapped: 82182144 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:39.075025+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 82165760 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:40.075189+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 82165760 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:41.075390+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 82165760 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:42.075545+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 82165760 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:43.075703+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 82157568 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:44.075912+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 82157568 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:45.076037+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 82157568 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:46.076387+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 82157568 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:47.076534+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338837504 unmapped: 82149376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:48.076771+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338837504 unmapped: 82149376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:49.076980+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338837504 unmapped: 82149376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:50.077103+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338837504 unmapped: 82149376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:51.077287+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 82132992 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:52.077463+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 82132992 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:53.077659+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 82132992 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:54.077842+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 82132992 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:55.078103+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 82124800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:56.078265+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 82124800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:57.078397+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 82124800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:58.084017+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 82124800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:59.084245+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 82124800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:00.084452+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 82124800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:01.084624+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 82124800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:02.084805+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 82124800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:03.084963+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338878464 unmapped: 82108416 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:04.085108+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338878464 unmapped: 82108416 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:05.085274+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 82100224 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:06.085418+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 82100224 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:07.085572+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 82100224 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:08.085758+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 82100224 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:09.088346+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 82100224 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:10.088473+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 82083840 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:11.088618+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 82083840 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:12.088744+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 82083840 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:13.088924+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 82083840 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:14.090573+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 82083840 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:15.090715+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 82083840 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:16.090892+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 82083840 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:17.091053+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338911232 unmapped: 82075648 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:18.091225+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 82067456 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:19.091495+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 82067456 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:20.091666+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 82067456 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:21.091851+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 82067456 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:22.092001+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 82067456 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:23.092238+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 82059264 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:24.092442+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 82059264 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:25.092594+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 82059264 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:26.092726+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338935808 unmapped: 82051072 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:27.092879+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 82042880 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:28.093026+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 82042880 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:29.093224+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 82034688 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:30.093372+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 82026496 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:31.093505+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 82026496 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:32.112867+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 82026496 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:33.113052+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 82018304 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:34.113185+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338968576 unmapped: 82018304 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:35.113411+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338976768 unmapped: 82010112 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:36.113578+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338976768 unmapped: 82010112 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:37.113776+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338976768 unmapped: 82010112 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:38.113981+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338976768 unmapped: 82010112 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:39.114252+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338976768 unmapped: 82010112 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:40.114382+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338976768 unmapped: 82010112 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:41.114539+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338976768 unmapped: 82010112 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.4 total, 600.0 interval
                                           Cumulative writes: 46K writes, 183K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.77 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 476 writes, 1271 keys, 476 commit groups, 1.0 writes per commit group, ingest: 0.54 MB, 0.00 MB/s
                                           Interval WAL: 476 writes, 202 syncs, 2.36 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:42.114806+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338976768 unmapped: 82010112 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:43.115016+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338976768 unmapped: 82010112 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:44.115203+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338976768 unmapped: 82010112 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:45.115382+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338984960 unmapped: 82001920 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:46.115605+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338984960 unmapped: 82001920 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:47.115818+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338993152 unmapped: 81993728 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:48.115988+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 338993152 unmapped: 81993728 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:49.116199+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 81985536 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:50.116359+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339009536 unmapped: 81977344 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:51.116596+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339009536 unmapped: 81977344 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:52.116817+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339009536 unmapped: 81977344 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:53.116972+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339009536 unmapped: 81977344 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:54.117153+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339009536 unmapped: 81977344 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:55.117289+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339009536 unmapped: 81977344 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:56.117475+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339009536 unmapped: 81977344 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:57.117669+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 81960960 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:58.117821+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339034112 unmapped: 81952768 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:59.118046+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339034112 unmapped: 81952768 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:00.118174+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339034112 unmapped: 81952768 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:01.118335+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339034112 unmapped: 81952768 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:02.118547+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339034112 unmapped: 81952768 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:03.118679+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339034112 unmapped: 81952768 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:04.118847+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339034112 unmapped: 81952768 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:05.118988+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339034112 unmapped: 81952768 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:06.119135+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339042304 unmapped: 81944576 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:07.119267+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 81936384 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:08.119440+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 81936384 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:09.119643+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 81936384 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:10.119862+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 81936384 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:11.120026+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 81936384 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:12.120214+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 81936384 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:13.120464+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 81936384 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:14.120633+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 81911808 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:15.120761+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 81911808 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:16.120890+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 81911808 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:17.121035+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 81911808 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:18.121169+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 81911808 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:19.121338+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 81911808 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:20.121479+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 81911808 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:21.121731+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 81903616 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:22.121882+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 81903616 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:23.122035+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339091456 unmapped: 81895424 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:24.122192+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339091456 unmapped: 81895424 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:25.122372+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339091456 unmapped: 81895424 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:26.122536+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339091456 unmapped: 81895424 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:27.122684+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339091456 unmapped: 81895424 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:28.122830+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339091456 unmapped: 81895424 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:29.123000+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339091456 unmapped: 81895424 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:30.123154+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339116032 unmapped: 81870848 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:31.123365+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339116032 unmapped: 81870848 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:32.123507+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 81862656 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:33.123750+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 81862656 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:34.124088+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 81862656 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:35.124289+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339132416 unmapped: 81854464 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:36.124512+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339132416 unmapped: 81854464 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:37.124666+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339132416 unmapped: 81854464 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:38.124808+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 81846272 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:39.125016+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 81846272 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:40.125198+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 81846272 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:41.125400+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 81846272 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:42.125551+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 81846272 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:43.125761+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 81838080 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:44.126002+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 81838080 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:45.126186+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 81838080 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:46.126348+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339156992 unmapped: 81829888 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:47.126540+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339156992 unmapped: 81829888 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:48.126738+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339165184 unmapped: 81821696 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:49.126965+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339165184 unmapped: 81821696 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:50.127108+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339165184 unmapped: 81821696 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:51.127291+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339165184 unmapped: 81821696 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:52.127476+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339165184 unmapped: 81821696 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:53.129381+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339165184 unmapped: 81821696 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:54.129541+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 81805312 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:55.129730+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 81805312 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:56.129916+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 81805312 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:57.130138+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 81805312 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:58.130339+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 81788928 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:59.130512+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 81788928 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:00.130661+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 81788928 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:01.130793+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 81788928 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:02.130953+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 81788928 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:03.131105+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339206144 unmapped: 81780736 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:04.131263+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339206144 unmapped: 81780736 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:05.131394+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339206144 unmapped: 81780736 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:06.131524+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339206144 unmapped: 81780736 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:07.131667+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339206144 unmapped: 81780736 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:08.131848+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339206144 unmapped: 81780736 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:09.132091+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339206144 unmapped: 81780736 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:10.132232+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 81764352 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:11.132466+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 81756160 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:12.132621+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 81756160 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:13.132767+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 81756160 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:14.132908+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 81756160 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:15.133065+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 81756160 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:16.133296+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 81756160 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:17.133551+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 81756160 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:18.133818+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 81747968 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:19.134084+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 81747968 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:20.134261+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 81747968 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:21.134470+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 81747968 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:22.134627+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 81739776 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:23.134816+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 81739776 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:24.134982+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 81739776 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:25.135119+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 81739776 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:26.135248+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 81723392 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:27.135389+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 81723392 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:28.135531+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 81723392 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:29.135695+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339279872 unmapped: 81707008 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:30.135829+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339279872 unmapped: 81707008 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:31.135977+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 81698816 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:32.136160+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 81698816 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:33.136348+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 81698816 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:34.136593+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 81690624 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:35.136750+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23c000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328111 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 81690624 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:36.136912+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 81690624 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:37.137090+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 417.353149414s of 417.554626465s, submitted: 14
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 81641472 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:38.137242+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 81616896 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:39.137436+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 81592320 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:40.137638+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 81592320 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:41.137792+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 81592320 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:42.137994+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 81592320 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:43.138233+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 81592320 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:44.138394+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 81592320 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:45.138614+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1122304
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 81592320 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:46.138802+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 81592320 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:47.138986+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 81592320 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:48.139211+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 81584128 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:49.139437+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 81584128 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:50.139583+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 81575936 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:51.139835+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 81575936 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:52.140005+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 81575936 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:53.140174+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 81575936 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:54.140401+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 81575936 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:55.140552+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 81575936 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:56.140712+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 81575936 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:57.140877+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 81575936 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:58.141020+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 81567744 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:59.141258+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 81567744 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:00.141542+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 81567744 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:01.141740+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 81567744 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:02.141896+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 81567744 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:03.142161+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 81567744 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:04.142441+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 81567744 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:05.142631+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 81567744 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:06.142788+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 81559552 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:07.142962+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 81559552 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:08.143137+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 81559552 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:09.143338+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 81559552 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:10.143554+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 81559552 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:11.143744+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 81559552 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:12.143941+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 81543168 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:13.144085+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 81543168 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:14.144258+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 81534976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:15.144429+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 81534976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:16.144589+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 81534976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:17.144767+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 81534976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:18.144920+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 81534976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:19.145158+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 81534976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:20.145397+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 81534976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:21.145540+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 81534976 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:22.145702+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 81526784 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:23.145847+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 81526784 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:24.146048+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 81526784 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:25.146209+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 81526784 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:26.146409+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 81526784 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:27.146611+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 81526784 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:28.146879+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 81526784 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:29.147133+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 81510400 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:30.147290+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 81502208 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:31.147636+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 81502208 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:32.147756+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 81502208 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:33.147942+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 81502208 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:34.148124+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 81502208 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:35.151286+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 81502208 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:36.151549+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 81502208 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:37.151760+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 81502208 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:38.152050+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339501056 unmapped: 81485824 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:39.152442+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339501056 unmapped: 81485824 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:40.152615+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339501056 unmapped: 81485824 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:41.152736+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 81477632 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:42.152942+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 81469440 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:43.153124+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 81469440 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:44.153495+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 81469440 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:45.153660+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 81469440 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:46.153836+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 81461248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:47.153985+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 81461248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:48.154101+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 81461248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:49.154250+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:50.154408+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 81461248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:51.154568+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 81461248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:52.154763+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 81461248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:53.155004+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 81461248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:54.155180+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 81461248 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:55.155376+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 81444864 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:56.155580+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 81444864 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:57.155764+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339558400 unmapped: 81428480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:58.155939+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339558400 unmapped: 81428480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:59.156183+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339558400 unmapped: 81428480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:00.156420+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339558400 unmapped: 81428480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:01.156578+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339558400 unmapped: 81428480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:02.156732+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339558400 unmapped: 81428480 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:03.156962+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 81412096 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:04.157136+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 81412096 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:05.157372+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 81412096 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:06.157515+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 81412096 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:07.157716+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 81412096 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:08.157873+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 81412096 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:09.158075+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 81412096 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:10.158260+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 81412096 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:11.158450+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 81403904 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:12.158630+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 81403904 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:13.158775+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 81403904 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:14.158893+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 81403904 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:15.159079+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 81403904 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:16.159255+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 81403904 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:17.159449+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 81395712 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:18.159639+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 81379328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:19.159867+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 81379328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:20.160109+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 81379328 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:21.160286+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 81371136 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:22.160534+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 81371136 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:23.160753+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 81371136 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:24.160966+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 81371136 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:25.161169+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 81371136 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:26.161373+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 81362944 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:27.161533+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 81362944 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:28.161689+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 81362944 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:29.161874+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 81354752 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:30.162017+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 81362944 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:31.162170+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 81354752 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:32.162429+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 81354752 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:33.162616+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 81354752 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:34.162778+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339640320 unmapped: 81346560 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:35.162946+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339640320 unmapped: 81346560 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:36.163137+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339640320 unmapped: 81346560 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:37.163290+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339640320 unmapped: 81346560 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:38.163538+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339640320 unmapped: 81346560 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:39.163726+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339640320 unmapped: 81346560 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:40.163887+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339640320 unmapped: 81346560 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:41.164073+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339640320 unmapped: 81346560 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:42.164206+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 81338368 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:43.164429+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 81338368 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:44.164567+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 81338368 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:45.164804+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339656704 unmapped: 81330176 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:46.164952+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 81321984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:47.165124+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 81321984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:48.165290+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 81321984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:49.165630+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 81321984 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:50.165770+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 81313792 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:51.165899+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 81313792 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:52.166053+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339681280 unmapped: 81305600 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:53.166376+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339681280 unmapped: 81305600 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:54.166605+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339681280 unmapped: 81305600 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:55.166791+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 81297408 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:56.166926+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 81297408 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:57.167081+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 81297408 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:58.167282+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339697664 unmapped: 81289216 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:59.167531+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339697664 unmapped: 81289216 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:00.167669+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339697664 unmapped: 81289216 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:01.167848+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 81281024 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:02.168037+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 81281024 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:03.168219+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 81281024 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:04.168484+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 81281024 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:05.168703+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339730432 unmapped: 81256448 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:06.168880+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339730432 unmapped: 81256448 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:07.169093+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339730432 unmapped: 81256448 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:08.169451+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339730432 unmapped: 81256448 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:09.169649+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339738624 unmapped: 81248256 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:10.169853+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339738624 unmapped: 81248256 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:11.170010+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339738624 unmapped: 81248256 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:12.170189+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339738624 unmapped: 81248256 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:13.170406+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339738624 unmapped: 81248256 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:14.170611+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 81240064 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:15.170734+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 81240064 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:16.170910+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 81240064 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:17.171065+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 81240064 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:18.171238+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 81231872 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:19.171462+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 81215488 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:20.171688+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 81215488 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:21.171886+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 81215488 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:22.172070+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 81207296 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:23.172214+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 81207296 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:24.172424+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 81207296 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:25.172622+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 81207296 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:26.172779+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 81207296 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:27.173002+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 81207296 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:28.173198+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 81207296 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:29.173436+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 81207296 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:30.173623+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339795968 unmapped: 81190912 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:31.173810+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339795968 unmapped: 81190912 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:32.188629+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339795968 unmapped: 81190912 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:33.188805+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339795968 unmapped: 81190912 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:34.189003+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339795968 unmapped: 81190912 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:35.189140+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 81182720 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:36.189294+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 81182720 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:37.189439+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 81182720 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:38.189591+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 81174528 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:39.189770+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 81174528 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:40.189961+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 81174528 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:41.190149+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 81174528 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:42.190446+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 81174528 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:43.190637+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 81174528 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:44.190767+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 81174528 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:45.190975+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 81174528 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:46.191162+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 81166336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:47.191347+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 81166336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:48.191497+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 81166336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:49.191659+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 81166336 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:50.191803+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 81158144 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:51.191979+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 81149952 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:52.192119+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 81149952 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:53.192265+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 81149952 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:54.192361+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 81125376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:55.192507+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 81125376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:56.192652+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 81125376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:57.192888+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 81125376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:58.193045+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 81125376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:59.193246+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 81125376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:00.193413+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 81125376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:01.193604+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 81125376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:02.193805+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 81125376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:03.194030+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 81125376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:04.194232+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 81125376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:05.194396+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 81125376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:06.194610+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 81125376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:07.194749+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 81125376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:08.194887+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 81125376 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:09.195128+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 81117184 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:10.195295+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339877888 unmapped: 81108992 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:11.195516+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339886080 unmapped: 81100800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:12.195647+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339886080 unmapped: 81100800 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:13.195840+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 81092608 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:14.195983+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 81092608 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:15.196130+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 81092608 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:16.196278+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 81092608 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:17.196495+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 81092608 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:18.196640+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 81092608 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:19.196861+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 81092608 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:20.197061+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 81092608 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:21.197286+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 81084416 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:22.197480+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 81084416 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:23.197638+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 81076224 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:24.197811+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 81076224 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:25.198075+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 81076224 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:26.198284+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339918848 unmapped: 81068032 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:27.198550+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339927040 unmapped: 81059840 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:28.198815+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339927040 unmapped: 81059840 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:29.199108+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339927040 unmapped: 81059840 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:30.199234+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339935232 unmapped: 81051648 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:31.199407+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339935232 unmapped: 81051648 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:32.199573+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339935232 unmapped: 81051648 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:33.199767+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339935232 unmapped: 81051648 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:34.199936+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 81043456 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:35.200084+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 81043456 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:36.200237+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 81043456 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:37.200386+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 81043456 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:38.200515+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 81043456 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:39.200698+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 81043456 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:40.200826+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 81043456 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:41.200976+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:16 compute-0 ceph-osd[89702]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:16 compute-0 ceph-osd[89702]: bluestore.MempoolThread(0x561d8aeedb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327231 data_alloc: 218103808 data_used: 1126400
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 81043456 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:42.201121+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339951616 unmapped: 81035264 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:43.201281+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 81027072 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'config diff' '{prefix=config diff}'
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'config show' '{prefix=config show}'
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:44.201363+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 80707584 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:45.201497+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:16 compute-0 ceph-osd[89702]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 81076224 heap: 420986880 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:16 compute-0 ceph-osd[89702]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb23d000/0x0/0x4ffc00000, data 0x24384d/0x3d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: tick
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_tickets
Nov 25 09:46:16 compute-0 ceph-osd[89702]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:46.201621+0000)
Nov 25 09:46:16 compute-0 ceph-osd[89702]: do_command 'log dump' '{prefix=log dump}'
Nov 25 09:46:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 25 09:46:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/275657750' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 09:46:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/615187276' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 09:46:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2313208645' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 09:46:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3140011526' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 09:46:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1149545527' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 09:46:16 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/275657750' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 09:46:16 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:46:16 compute-0 rsyslogd[1007]: imjournal from <np0005534516:ceph-osd>: begin to drop messages due to rate-limiting
Nov 25 09:46:16 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 25 09:46:16 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1779352715' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 09:46:17 compute-0 nova_compute[253538]: 2025-11-25 09:46:17.019 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:46:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 25 09:46:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4108777185' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:46:17 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3801: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:17 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23553 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:17 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 25 09:46:17 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/320551030' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 09:46:17 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23557 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1779352715' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 09:46:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4108777185' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:46:17 compute-0 ceph-mon[75015]: pgmap v3801: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:17 compute-0 ceph-mon[75015]: from='client.23553 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:17 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/320551030' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 09:46:17 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23559 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:18 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23561 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:18 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23563 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:18 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23565 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:18 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23569 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:19 compute-0 ceph-mon[75015]: from='client.23557 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:19 compute-0 ceph-mon[75015]: from='client.23559 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:19 compute-0 ceph-mon[75015]: from='client.23561 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:19 compute-0 ceph-mon[75015]: from='client.23563 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:19 compute-0 ceph-mon[75015]: from='client.23565 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:19 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23573 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:19 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3802: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 25 09:46:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2775896836' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 09:46:19 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23575 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:19 compute-0 nova_compute[253538]: 2025-11-25 09:46:19.746 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:46:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 25 09:46:19 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2510705055' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 09:46:19 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:20 compute-0 ceph-mon[75015]: from='client.23569 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:20 compute-0 ceph-mon[75015]: from='client.23573 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:20 compute-0 ceph-mon[75015]: pgmap v3802: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:20 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2775896836' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 09:46:20 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2510705055' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 09:46:20 compute-0 nova_compute[253538]: 2025-11-25 09:46:20.176 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:46:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 25 09:46:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2641831591' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 09:46:20 compute-0 nova_compute[253538]: 2025-11-25 09:46:20.554 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 60997632 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:04.864541+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 60997632 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:05.864913+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 60989440 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458421 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:06.865279+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 60989440 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:07.865619+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 60989440 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:08.865940+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 60989440 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:09.866064+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 60989440 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:10.866431+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 60989440 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458421 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:11.866650+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 60989440 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:12.866864+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 60981248 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:13.867187+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 60981248 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:14.867566+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 60981248 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:15.867686+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.536159515s of 45.072593689s, submitted: 63
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21400 session 0x562bd46d8780
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd47a1c20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd30163c0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd20ee1e0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd3e57800 session 0x562bd47a1860
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 59916288 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:16.867903+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492799 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 59916288 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:17.868184+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 59916288 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:18.868448+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 59916288 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:19.868731+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 59916288 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:20.869059+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e80a8000/0x0/0x4ffc00000, data 0x1622540/0x1796000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e80a8000/0x0/0x4ffc00000, data 0x1622540/0x1796000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 59908096 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:21.869294+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492799 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd2f223c0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 59908096 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:22.869572+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd489af00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd48c9e00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd47a1a40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e80a8000/0x0/0x4ffc00000, data 0x1622540/0x1796000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 59908096 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:23.869703+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 59981824 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:24.869837+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:25.870029+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:26.870167+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521014 data_alloc: 218103808 data_used: 7950336
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:27.870393+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:28.870521+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e80a6000/0x0/0x4ffc00000, data 0x1622573/0x1798000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:29.870663+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:30.870812+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:31.870979+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521014 data_alloc: 218103808 data_used: 7950336
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:32.871117+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:33.871242+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:34.871410+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 60088320 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e80a6000/0x0/0x4ffc00000, data 0x1622573/0x1798000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.819879532s of 19.090642929s, submitted: 44
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:35.871564+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353099776 unmapped: 59113472 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:36.873108+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 59826176 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567670 data_alloc: 218103808 data_used: 8052736
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:37.873522+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 59719680 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:38.873666+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 59719680 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ad9000/0x0/0x4ffc00000, data 0x1bd0573/0x1d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:39.874182+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 59719680 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ad9000/0x0/0x4ffc00000, data 0x1bd0573/0x1d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:40.874386+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 59719680 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:41.874517+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 59719680 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577180 data_alloc: 218103808 data_used: 7962624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:42.875195+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352428032 unmapped: 59785216 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:43.875989+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352428032 unmapped: 59785216 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7af5000/0x0/0x4ffc00000, data 0x1bd3573/0x1d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7af5000/0x0/0x4ffc00000, data 0x1bd3573/0x1d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:44.876522+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352428032 unmapped: 59785216 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:45.876837+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7af5000/0x0/0x4ffc00000, data 0x1bd3573/0x1d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352428032 unmapped: 59785216 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:46.877037+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 59777024 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570364 data_alloc: 218103808 data_used: 7966720
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd8114400 session 0x562bd21f14a0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3fe8c00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd3fe8c00 session 0x562bd48274a0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd3792960
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:47.877199+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd2ec3c20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 59777024 heap: 412213248 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.002296448s of 12.697712898s, submitted: 86
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd57f01e0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd8114400 session 0x562bd160dc20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:48.877764+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 66002944 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:49.878183+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 66002944 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:50.878580+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 65994752 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e70fb000/0x0/0x4ffc00000, data 0x25cc5d5/0x2743000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:51.878850+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 65994752 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3651333 data_alloc: 218103808 data_used: 7966720
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:52.879090+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 65994752 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:53.879263+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 65994752 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e70fb000/0x0/0x4ffc00000, data 0x25cc5d5/0x2743000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:54.879645+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 65994752 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:55.879900+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 65994752 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:56.880057+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 65994752 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3651333 data_alloc: 218103808 data_used: 7966720
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd9679000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd9679000 session 0x562bd46d8d20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:57.880208+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 65978368 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.340452194s of 10.586875916s, submitted: 42
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e70fa000/0x0/0x4ffc00000, data 0x25cc5f8/0x2744000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:58.880355+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 65970176 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:13:59.880463+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:00.880653+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:01.880785+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 234881024 data_used: 18120704
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:02.880969+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:03.881147+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e70fa000/0x0/0x4ffc00000, data 0x25cc5f8/0x2744000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:04.881372+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:05.881558+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:06.881734+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 234881024 data_used: 18120704
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:07.881884+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:08.882044+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357359616 unmapped: 62210048 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.238709450s of 11.252449989s, submitted: 3
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:09.882170+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6ea6000/0x0/0x4ffc00000, data 0x28205f8/0x2998000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,1,0,6])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 59424768 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:10.882412+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 59252736 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:11.882544+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 59703296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3795939 data_alloc: 234881024 data_used: 18563072
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:12.882723+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 59703296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:13.882858+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 59703296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:14.883023+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 59703296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6912000/0x0/0x4ffc00000, data 0x2db45f8/0x2f2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:15.883196+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 59703296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:16.883370+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 59703296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3795303 data_alloc: 234881024 data_used: 18567168
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:17.883543+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 59703296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:18.883693+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 59695104 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:19.883816+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e68f1000/0x0/0x4ffc00000, data 0x2dd55f8/0x2f4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 59695104 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:20.884010+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 59695104 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:21.884146+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 59686912 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3795303 data_alloc: 234881024 data_used: 18567168
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.981983185s of 12.428812027s, submitted: 91
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd404cd20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd22f0f00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e68f1000/0x0/0x4ffc00000, data 0x2dd55f8/0x2f4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:22.884297+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd22f1a40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 63619072 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:23.884513+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 63619072 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:24.884687+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 63619072 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:25.884851+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 63619072 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd3cabe00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd3e57800 session 0x562bd48c94a0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:26.885004+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd22f0f00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482868 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7af3000/0x0/0x4ffc00000, data 0x1bd5573/0x1d4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:27.885183+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:28.885378+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:29.885543+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:30.885749+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:31.885898+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482868 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:32.886140+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:33.886348+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:34.886490+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:35.886671+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:36.886803+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482868 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:37.886966+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:38.887151+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:39.887370+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:40.887632+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:41.887762+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482868 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:42.887940+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:43.888108+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:44.888280+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:45.888455+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:46.888645+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482868 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:47.888832+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:48.888963+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:49.889195+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:50.889423+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:51.889586+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482868 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:52.889777+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:53.889935+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:54.890132+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:55.890356+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:56.890613+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482868 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:57.890835+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:58.890982+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 67674112 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:14:59.891162+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 67665920 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:00.891365+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 67665920 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:01.891570+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 67665920 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482868 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8430000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:02.891794+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 67665920 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:03.891974+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 67665920 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:04.892106+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 67665920 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.842296600s of 43.214748383s, submitted: 108
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd20eef00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:05.892251+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ec1000/0x0/0x4ffc00000, data 0x180a4de/0x197d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 67346432 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:06.892422+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 67346432 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3534280 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:07.892609+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 67346432 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ec1000/0x0/0x4ffc00000, data 0x180a4de/0x197d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:08.892797+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 67346432 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd47194a0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:09.892989+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd4002d20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 67346432 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd8114400 session 0x562bd2a952c0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd8114400 session 0x562bd2ec23c0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:10.893162+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 67387392 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:11.893345+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 67387392 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538168 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:12.893484+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ebf000/0x0/0x4ffc00000, data 0x180a511/0x197f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:13.893635+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ebf000/0x0/0x4ffc00000, data 0x180a511/0x197f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:14.893792+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ebf000/0x0/0x4ffc00000, data 0x180a511/0x197f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:15.893989+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:16.894149+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577048 data_alloc: 218103808 data_used: 9658368
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:17.894369+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:18.894578+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:19.894731+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:20.894919+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ebf000/0x0/0x4ffc00000, data 0x180a511/0x197f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7ebf000/0x0/0x4ffc00000, data 0x180a511/0x197f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:21.895063+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577048 data_alloc: 218103808 data_used: 9658368
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:22.895220+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 67362816 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.739616394s of 17.893392563s, submitted: 26
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:23.895445+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357130240 unmapped: 62439424 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:24.895620+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:25.895767+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7283000/0x0/0x4ffc00000, data 0x2438511/0x25ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:26.895957+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686598 data_alloc: 218103808 data_used: 10125312
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:27.896136+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7283000/0x0/0x4ffc00000, data 0x2438511/0x25ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:28.896381+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:29.896513+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:30.896702+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:31.896856+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680126 data_alloc: 218103808 data_used: 10125312
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7270000/0x0/0x4ffc00000, data 0x2459511/0x25ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:32.897007+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:33.897273+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:34.897528+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 62078976 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:35.897695+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.321384430s of 12.719173431s, submitted: 131
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 61972480 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7270000/0x0/0x4ffc00000, data 0x2459511/0x25ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:36.897825+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 61972480 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680082 data_alloc: 218103808 data_used: 10125312
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:37.897957+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd3ff1680
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd3ff0960
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4913400 session 0x562bd3ff0b40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f20800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f20800 session 0x562bd3ff0d20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd44d7680
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 60735488 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:38.898185+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 60735488 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:39.898405+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 60735488 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:40.898591+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 60735488 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:41.898714+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6813000/0x0/0x4ffc00000, data 0x2eb6511/0x302b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 60735488 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3765666 data_alloc: 218103808 data_used: 10125312
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:42.898861+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 60727296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:43.899003+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 60727296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:44.899166+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd2a8cd20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 60727296 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4913400 session 0x562bd4827680
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:45.899291+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd8114400 session 0x562bd57f1a40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4b1e800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.891793251s of 10.165854454s, submitted: 23
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4b1e800 session 0x562bd21aba40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 60719104 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:46.899396+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e67ed000/0x0/0x4ffc00000, data 0x2eda544/0x3051000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [1])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 60719104 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773142 data_alloc: 218103808 data_used: 10133504
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:47.899553+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58884096 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:48.899668+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58884096 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:49.899802+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e67ed000/0x0/0x4ffc00000, data 0x2eda544/0x3051000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58884096 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:50.899974+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58884096 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:51.900141+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58884096 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3841462 data_alloc: 234881024 data_used: 19718144
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:52.900444+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58884096 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:53.900633+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58884096 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:54.900882+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58875904 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e67ed000/0x0/0x4ffc00000, data 0x2eda544/0x3051000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:55.901061+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58875904 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:56.901373+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e67eb000/0x0/0x4ffc00000, data 0x2edb544/0x3052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58875904 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842074 data_alloc: 234881024 data_used: 19722240
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:57.901594+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e67eb000/0x0/0x4ffc00000, data 0x2edb544/0x3052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.110299110s of 12.168491364s, submitted: 17
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 361177088 unmapped: 58392576 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:58.901742+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364691456 unmapped: 54878208 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:15:59.901882+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364871680 unmapped: 54697984 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:00.902285+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364871680 unmapped: 54697984 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:01.902697+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 54689792 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3968126 data_alloc: 234881024 data_used: 21753856
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:02.903036+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 54689792 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:03.903345+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e5b3d000/0x0/0x4ffc00000, data 0x3b89544/0x3d00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 54689792 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:04.903509+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364888064 unmapped: 54681600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:05.903805+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364888064 unmapped: 54681600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:06.903979+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364888064 unmapped: 54681600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3960894 data_alloc: 234881024 data_used: 21827584
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd3cab2c0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd48c9e00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:07.904118+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4913400 session 0x562bd3792000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 59596800 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:08.904428+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 59596800 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:09.904697+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x2467511/0x25dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 59596800 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:10.904892+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.229134560s of 12.738837242s, submitted: 143
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x2467511/0x25dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 59588608 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:11.905070+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e725a000/0x0/0x4ffc00000, data 0x246f511/0x25e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 59588608 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696365 data_alloc: 218103808 data_used: 10125312
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:12.905502+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd3ff0f00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd2f1a000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd2f225a0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:13.905710+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:14.905848+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:15.906050+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:16.906542+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513602 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:17.906760+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:18.906962+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:19.907199+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:20.907422+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:21.907606+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513602 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:22.907912+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:23.908179+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:24.908481+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:25.908836+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:26.909141+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513602 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:27.909472+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:28.909719+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:29.909966+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:30.910371+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:31.910587+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513602 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:32.910832+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:33.911069+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:34.911386+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:35.911586+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356696064 unmapped: 62873600 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:36.911744+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356704256 unmapped: 62865408 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513602 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:37.911905+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356704256 unmapped: 62865408 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:38.912091+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356704256 unmapped: 62865408 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:39.912443+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356704256 unmapped: 62865408 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:40.912783+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356704256 unmapped: 62865408 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:41.912969+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356712448 unmapped: 62857216 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513602 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:42.913154+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8431000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356712448 unmapped: 62857216 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:43.913365+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.918975830s of 33.064044952s, submitted: 43
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356712448 unmapped: 62857216 heap: 419569664 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:44.913495+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd3ff03c0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 65806336 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:45.913711+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 65798144 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:46.913869+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7818000/0x0/0x4ffc00000, data 0x1eb34de/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 65798144 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3609634 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7818000/0x0/0x4ffc00000, data 0x1eb34de/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:47.914034+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7818000/0x0/0x4ffc00000, data 0x1eb34de/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 65798144 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:48.914183+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 65798144 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:49.914387+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7818000/0x0/0x4ffc00000, data 0x1eb34de/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd57f03c0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356925440 unmapped: 66846720 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:50.914543+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356925440 unmapped: 66846720 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:51.914682+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688674 data_alloc: 234881024 data_used: 15405056
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:52.914848+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:53.915749+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd48c8780
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:54.915921+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:55.916060+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7818000/0x0/0x4ffc00000, data 0x1eb34de/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:56.916385+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688674 data_alloc: 234881024 data_used: 15405056
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:57.917132+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7818000/0x0/0x4ffc00000, data 0x1eb34de/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:58.917364+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:16:59.917619+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:00.917956+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4913400 session 0x562bd2ec30e0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:01.918215+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688674 data_alloc: 234881024 data_used: 15405056
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:02.918488+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:03.918730+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7818000/0x0/0x4ffc00000, data 0x1eb34de/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:04.918973+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 65224704 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:05.919148+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358555648 unmapped: 65216512 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:06.919302+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7818000/0x0/0x4ffc00000, data 0x1eb34de/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358555648 unmapped: 65216512 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688674 data_alloc: 234881024 data_used: 15405056
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:07.919661+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4913400 session 0x562bd3ba4960
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.817094803s of 24.468519211s, submitted: 20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:08.919801+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd22f1860
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:09.919956+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:10.920200+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:11.920447+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521298 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:12.920612+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:13.921010+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:14.921275+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:15.921549+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:16.921737+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521298 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:17.921925+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:18.922144+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 68665344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:19.922414+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:20.922703+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:21.922969+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521298 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:22.923171+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:23.923358+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:24.923538+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:25.923758+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:26.923947+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:27.924155+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521298 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:28.924394+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:29.924639+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 68657152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:30.924862+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 68648960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:31.925056+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 68648960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:32.925380+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 68648960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521298 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:33.925605+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 68648960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:34.925713+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 68648960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:35.925901+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 68648960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:36.926048+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 68648960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:37.926255+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 68648960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521298 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:38.926490+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 68640768 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:39.926696+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 68640768 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:40.926904+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 68640768 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:41.927118+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 68640768 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:42.927249+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 68632576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521298 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:43.927465+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 68632576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:44.927667+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 68632576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:45.927873+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 68632576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:46.928069+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 68632576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd4303e00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd4002000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2f21c00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2f21c00 session 0x562bd212b2c0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd3cc34a0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.765380859s of 38.786224365s, submitted: 10
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd21f0780
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd3caa780
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4913400 session 0x562bd489ab40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:47.928194+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd8114400 session 0x562bd21f01e0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd8114400 session 0x562bd3cc2f00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e8432000/0x0/0x4ffc00000, data 0x12994de/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357236736 unmapped: 66535424 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3552485 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:48.928425+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357236736 unmapped: 66535424 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:49.928604+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357236736 unmapped: 66535424 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:50.928808+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357236736 unmapped: 66535424 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:51.928970+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357236736 unmapped: 66535424 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd3ff01e0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:52.929119+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357236736 unmapped: 66535424 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3552485 data_alloc: 218103808 data_used: 4255744
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd3ff0780
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x1494550/0x1609000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6c000 session 0x562bd2ec3a40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4913400 session 0x562bd2f1a1e0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:53.929248+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357253120 unmapped: 66519040 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:54.929423+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7094000/0x0/0x4ffc00000, data 0x1494573/0x160a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357253120 unmapped: 66519040 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:55.930011+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 66502656 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:56.930549+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 66502656 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:57.930809+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 66502656 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569479 data_alloc: 218103808 data_used: 6160384
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:58.931068+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 66502656 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:17:59.931241+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 66502656 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7094000/0x0/0x4ffc00000, data 0x1494573/0x160a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:00.931407+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 66502656 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:01.931578+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 66502656 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:02.931759+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569479 data_alloc: 218103808 data_used: 6160384
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 66494464 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:03.931899+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 66494464 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:04.932043+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 66494464 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e7094000/0x0/0x4ffc00000, data 0x1494573/0x160a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:05.932203+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.432277679s of 18.639139175s, submitted: 44
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 66494464 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:06.932363+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 63037440 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:07.932515+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3636503 data_alloc: 218103808 data_used: 7401472
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 62840832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:08.932825+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 62840832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:09.933013+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 62840832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:10.933455+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 62840832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6530000/0x0/0x4ffc00000, data 0x1be0573/0x1d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:11.933674+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 62840832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:12.933960+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3636519 data_alloc: 218103808 data_used: 7401472
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6530000/0x0/0x4ffc00000, data 0x1be0573/0x1d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 62840832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:13.934269+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 62840832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:14.934464+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 62840832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:15.934710+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6530000/0x0/0x4ffc00000, data 0x1be0573/0x1d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360873984 unmapped: 62898176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.024751663s of 10.314311981s, submitted: 135
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:16.934960+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360873984 unmapped: 62898176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:17.935203+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3631995 data_alloc: 218103808 data_used: 7401472
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 62889984 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd4913400 session 0x562bd20efc20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2409800 session 0x562bd4002000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:18.935374+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 heartbeat osd_stat(store_statfs(0x4e6537000/0x0/0x4ffc00000, data 0x1be1573/0x1d57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 62889984 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 ms_handle_reset con 0x562bd2e6b400 session 0x562bd489bc20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:19.935592+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360890368 unmapped: 62881792 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 265 handle_osd_map epochs [265,266], i have 265, src has [1,266]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 266 ms_handle_reset con 0x562bd2e6c000 session 0x562bd46d9860
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:20.935795+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 266 ms_handle_reset con 0x562bd8114400 session 0x562bd404d680
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 370401280 unmapped: 53370880 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 266 ms_handle_reset con 0x562bd8114400 session 0x562bd44d74a0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 267 ms_handle_reset con 0x562bd2409800 session 0x562bd3cabc20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:21.935929+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 370409472 unmapped: 53362688 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:22.936107+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 268 ms_handle_reset con 0x562bd2e6b400 session 0x562bd57f10e0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719735 data_alloc: 234881024 data_used: 14004224
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 59285504 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 268 ms_handle_reset con 0x562bd2e6c000 session 0x562bd489b860
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 268 ms_handle_reset con 0x562bd4913400 session 0x562bd4931680
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:23.936224+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 268 ms_handle_reset con 0x562bd4913400 session 0x562bd2b2cf00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 268 heartbeat osd_stat(store_statfs(0x4e5dc3000/0x0/0x4ffc00000, data 0x234f8bc/0x24ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 59277312 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:24.936460+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 59269120 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:25.936713+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 268 heartbeat osd_stat(store_statfs(0x4e5dc3000/0x0/0x4ffc00000, data 0x234f8bc/0x24ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 59269120 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:26.936951+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 59269120 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:27.937146+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719735 data_alloc: 234881024 data_used: 14004224
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 59269120 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.731040955s of 11.987977982s, submitted: 40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 268 handle_osd_map epochs [268,269], i have 268, src has [1,269]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:28.937266+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2409800 session 0x562bd2a8cb40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 63709184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:29.937400+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 heartbeat osd_stat(store_statfs(0x4e6489000/0x0/0x4ffc00000, data 0x1a0928a/0x1b82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 63709184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:30.937588+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 heartbeat osd_stat(store_statfs(0x4e6489000/0x0/0x4ffc00000, data 0x1a0928a/0x1b82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 63709184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:31.937723+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 63709184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:32.937871+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3605557 data_alloc: 218103808 data_used: 4263936
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 63709184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:33.938006+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 63709184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:34.938135+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 63709184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:35.938297+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2e6b400 session 0x562bd2258000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2e6c000 session 0x562bd48c9a40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd8114400 session 0x562bd4827a40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 63709184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd8114400 session 0x562bd46d81e0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2e6b400 session 0x562bd2f1a000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2409800 session 0x562bd3cc1860
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2e6c000 session 0x562bd4826b40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd4913400 session 0x562bd37932c0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd4913400 session 0x562bd44d70e0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2409800 session 0x562bd404c5a0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:36.938416+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 heartbeat osd_stat(store_statfs(0x4e6489000/0x0/0x4ffc00000, data 0x1a0928a/0x1b82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [0,1])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 368148480 unmapped: 55623680 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:37.938584+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2e6b400 session 0x562bd46dde00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 heartbeat osd_stat(store_statfs(0x4e5df2000/0x0/0x4ffc00000, data 0x232229a/0x249c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719608 data_alloc: 218103808 data_used: 12038144
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 54804480 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2e6c000 session 0x562bd404d4a0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:38.938711+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd8114400 session 0x562bd1e6ed20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 heartbeat osd_stat(store_statfs(0x4e5de5000/0x0/0x4ffc00000, data 0x232f29a/0x24a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 54804480 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd8114400 session 0x562bd48c8d20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.021324158s of 11.262329102s, submitted: 78
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 ms_handle_reset con 0x562bd2409800 session 0x562bd2ec30e0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:39.938880+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 368959488 unmapped: 54812672 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:40.939149+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 54804480 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 heartbeat osd_stat(store_statfs(0x4e5dc0000/0x0/0x4ffc00000, data 0x23532bd/0x24ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:41.939281+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 269 handle_osd_map epochs [269,270], i have 269, src has [1,270]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 59449344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 270 ms_handle_reset con 0x562bd4913400 session 0x562bd4719e00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:42.939519+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3673882 data_alloc: 218103808 data_used: 8560640
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:43.939677+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:44.939850+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:45.940040+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:46.940146+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 270 heartbeat osd_stat(store_statfs(0x4e6526000/0x0/0x4ffc00000, data 0x1bebe2c/0x1d67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:47.940274+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3673882 data_alloc: 218103808 data_used: 8560640
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:48.940364+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:49.940472+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:50.940623+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:51.940744+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.706192017s of 12.975371361s, submitted: 42
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:52.940906+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678708 data_alloc: 218103808 data_used: 8785920
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e6523000/0x0/0x4ffc00000, data 0x1bed88f/0x1d6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 63889408 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:53.941069+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 63881216 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:54.941181+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 63881216 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:55.941432+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 63881216 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:56.941593+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 63881216 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:57.941731+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3689444 data_alloc: 218103808 data_used: 9150464
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 63881216 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7564000/0x0/0x4ffc00000, data 0x1bed88f/0x1d6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:58.941903+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7564000/0x0/0x4ffc00000, data 0x1bed88f/0x1d6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 63881216 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:18:59.942145+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:00.942424+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:01.942649+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e755f000/0x0/0x4ffc00000, data 0x1bf288f/0x1d6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:02.942895+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3689530 data_alloc: 218103808 data_used: 9150464
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:03.943080+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:04.943252+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e755f000/0x0/0x4ffc00000, data 0x1bf288f/0x1d6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:05.943397+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.351364136s of 13.782290459s, submitted: 21
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:06.943551+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:07.943743+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3690402 data_alloc: 218103808 data_used: 9146368
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e755e000/0x0/0x4ffc00000, data 0x1bf388f/0x1d70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:08.943907+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 63946752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:09.944057+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 63938560 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:10.944191+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 63938560 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:11.944342+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7555000/0x0/0x4ffc00000, data 0x1bf888f/0x1d75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 63938560 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:12.944472+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3690402 data_alloc: 218103808 data_used: 9146368
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 63938560 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:13.944634+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 63930368 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:14.944830+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 63930368 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:15.944946+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 63815680 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:16.945087+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7555000/0x0/0x4ffc00000, data 0x1bf888f/0x1d75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 63815680 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:17.945297+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.745916367s of 11.788716316s, submitted: 10
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd2e6b400 session 0x562bd40025a0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd2e6c000 session 0x562bd21aba40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694526 data_alloc: 218103808 data_used: 10240000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 63864832 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd2409800 session 0x562bd47a05a0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:18.945556+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 63848448 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:19.945763+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 63848448 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eaf000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:20.945963+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 63848448 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:21.946092+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 63848448 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:22.946351+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eaf000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569389 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 63848448 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:23.946640+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 63848448 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:24.946779+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 63840256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:25.946924+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eaf000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 63840256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:26.947097+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 63840256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eaf000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:27.947261+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569389 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 63840256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:28.947462+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 63840256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:29.947626+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 63832064 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:30.947975+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 63832064 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:31.948157+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 63832064 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:32.948289+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569389 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 63832064 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eaf000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:33.948445+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 63832064 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eaf000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:34.948619+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.5 total, 600.0 interval
                                           Cumulative writes: 46K writes, 186K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.84 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2946 writes, 11K keys, 2946 commit groups, 1.0 writes per commit group, ingest: 12.87 MB, 0.02 MB/s
                                           Interval WAL: 2946 writes, 1143 syncs, 2.58 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 63832064 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets getting new tickets!
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:35.948862+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _finish_auth 0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:35.949989+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eaf000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 63832064 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:36.948964+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 63832064 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:37.949117+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569389 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 63823872 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:38.949228+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 63823872 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:39.949373+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 63823872 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:40.949516+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 63823872 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:41.949652+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eaf000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd2e6b400 session 0x562bd1e6fc20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd4913400 session 0x562bd404c5a0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd8114400 session 0x562bd40023c0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 63823872 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3e57000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd3e57000 session 0x562bd40021e0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.374252319s of 24.575435638s, submitted: 53
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:42.949788+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd2409800 session 0x562bd21f0b40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3632626 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd2e6b400 session 0x562bd3cab0e0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd4913400 session 0x562bd2abef00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 361283584 unmapped: 62488576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd8114400 session 0x562bd47a0960
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3fe9400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd3fe9400 session 0x562bd22590e0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:43.949925+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e77f8000/0x0/0x4ffc00000, data 0x19598ce/0x1ad6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3fe9400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd3fe9400 session 0x562bd47a1860
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 361283584 unmapped: 62488576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd2409800 session 0x562bd57f14a0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:44.950122+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd2e6b400 session 0x562bd160da40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4913400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 62480384 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:45.950382+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd4913400 session 0x562bd404c000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e77d3000/0x0/0x4ffc00000, data 0x197d8de/0x1afb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 361603072 unmapped: 62169088 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:46.950525+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4319000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 361611264 unmapped: 62160896 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:47.950706+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3651889 data_alloc: 218103808 data_used: 6205440
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 361611264 unmapped: 62160896 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:48.950874+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd8114400 session 0x562bd4002d20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd4319000 session 0x562bd4000960
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357392384 unmapped: 66379776 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd8114400 session 0x562bd57f1680
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:49.951048+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 66371584 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:50.951254+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 66371584 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:51.951389+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 66371584 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:52.951539+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 66371584 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:53.951704+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:54.951844+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:55.952002+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:56.952185+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:57.952346+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:58.952484+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:19:59.952704+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:00.952952+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:01.953105+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:02.953350+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:03.953530+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:04.953728+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:05.953889+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:06.954031+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:07.954216+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:08.954403+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:09.954704+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:10.954957+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 ms_handle_reset con 0x562bd21ad800 session 0x562bd2f1bc20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3f11c00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:11.955110+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:12.955368+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:13.955525+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:14.955748+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:15.955934+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:16.956113+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:17.956259+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 66363392 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:18.956367+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357416960 unmapped: 66355200 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:19.956519+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357416960 unmapped: 66355200 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:20.956718+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357416960 unmapped: 66355200 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:21.956858+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357425152 unmapped: 66347008 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:22.957035+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357425152 unmapped: 66347008 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:23.957182+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357425152 unmapped: 66347008 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:24.957335+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357425152 unmapped: 66347008 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:25.957517+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357425152 unmapped: 66347008 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:26.957661+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357425152 unmapped: 66347008 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:27.957790+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 66338816 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:28.957914+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 66338816 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:29.958089+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 66338816 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:30.958266+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 66330624 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:31.958425+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 66330624 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:32.958597+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 66330624 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:33.958801+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 66322432 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:34.958996+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 66322432 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:35.959208+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 66322432 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:36.959424+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 66314240 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:37.959592+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 66314240 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:38.959769+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 66306048 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:39.959946+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 66306048 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:40.960130+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 66306048 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:41.960240+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 66306048 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:42.960379+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 66306048 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:43.960546+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 66306048 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:44.960674+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 66306048 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:45.960829+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 66297856 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:46.960965+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 66297856 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:47.961092+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 66297856 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:48.961252+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 66297856 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:49.961426+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 66289664 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:50.961652+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 66289664 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:51.961762+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 66289664 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:52.961914+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 66289664 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:53.962088+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 66281472 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:54.962282+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:55.962724+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 66281472 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:56.962913+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 66281472 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:57.963173+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 66281472 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 heartbeat osd_stat(store_statfs(0x4e7eb0000/0x0/0x4ffc00000, data 0x12a385c/0x141e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578654 data_alloc: 218103808 data_used: 4272128
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:58.963403+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 66265088 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:20:59.963553+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 66265088 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd21ad800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:00.963752+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 66265088 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 77.214645386s of 78.329620361s, submitted: 82
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 272 ms_handle_reset con 0x562bd21ad800 session 0x562bd4302000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:01.963957+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 66240512 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:02.964090+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 66215936 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 273 ms_handle_reset con 0x562bd2e6b400 session 0x562bd48272c0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529296 data_alloc: 218103808 data_used: 4284416
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:03.964287+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 66215936 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 273 heartbeat osd_stat(store_statfs(0x4e86bb000/0x0/0x4ffc00000, data 0xa96fee/0xc12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:04.964549+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 66215936 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:05.964767+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 66215936 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:06.964975+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 66215936 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:07.965148+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 66215936 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 273 handle_osd_map epochs [273,274], i have 273, src has [1,274]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532270 data_alloc: 218103808 data_used: 4284416
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:08.965398+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 66199552 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:09.965535+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 66199552 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:10.965765+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 66191360 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:11.965940+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 66191360 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:12.966152+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 66191360 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532270 data_alloc: 218103808 data_used: 4284416
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:13.966442+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 66191360 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:14.966665+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 66191360 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:15.966829+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 66191360 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:16.967052+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 66183168 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:17.967265+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 66183168 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532270 data_alloc: 218103808 data_used: 4284416
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:18.967717+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 66183168 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:19.968078+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 66183168 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:20.968408+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 66183168 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:21.968625+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 66183168 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:22.968813+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 66174976 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 ms_handle_reset con 0x562bd2e6bc00 session 0x562bd44d72c0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6bc00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:23.969033+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532270 data_alloc: 218103808 data_used: 4284416
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 66174976 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:24.969264+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 66174976 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:25.969466+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 66174976 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:26.969720+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 66166784 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:27.969985+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 66166784 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:28.970230+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532430 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 66166784 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:29.970388+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 66166784 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:30.970596+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 66158592 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:31.970764+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 66158592 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:32.970935+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 66158592 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:33.971127+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532430 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 66150400 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:34.971376+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 66150400 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:35.971606+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 66150400 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.431751251s of 35.659019470s, submitted: 67
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:36.971793+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 66134016 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:37.972019+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:38.972175+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:39.972413+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:40.972590+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:41.972758+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:42.972915+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:43.973128+0000)
Nov 25 09:46:20 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 25 09:46:20 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1635297020' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:44.973390+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:45.973554+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 66093056 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:46.973699+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357687296 unmapped: 66084864 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:47.973928+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357687296 unmapped: 66084864 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:48.974121+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357687296 unmapped: 66084864 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:49.974298+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357687296 unmapped: 66084864 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:50.974531+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357687296 unmapped: 66084864 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:51.974752+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357687296 unmapped: 66084864 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:52.974922+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357687296 unmapped: 66084864 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:53.975115+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357695488 unmapped: 66076672 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:54.975355+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 66068480 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:55.975586+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 66068480 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:56.975766+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 66068480 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:57.975938+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 66068480 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:58.976086+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 66060288 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:21:59.976295+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 66060288 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:00.976559+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 66060288 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:01.976699+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 66060288 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:02.976876+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 66060288 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:03.977071+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 66060288 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:04.977254+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 66060288 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:05.977413+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 66052096 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:06.977602+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 66052096 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:07.977747+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 66052096 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:08.977946+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 66052096 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:09.978138+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 66052096 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:10.978403+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 66052096 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:11.978608+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 66052096 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:12.978840+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 66052096 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:13.979025+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357728256 unmapped: 66043904 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:14.979240+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357728256 unmapped: 66043904 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:15.979436+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357728256 unmapped: 66043904 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:16.979706+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357728256 unmapped: 66043904 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:17.979873+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 66035712 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:18.980066+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 66035712 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:19.980263+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 66035712 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:20.980492+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 66035712 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:21.980683+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 66035712 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:22.980844+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357744640 unmapped: 66027520 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:23.981004+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 66019328 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:24.981156+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 66019328 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:25.981328+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 66019328 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:26.981477+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 66019328 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:27.982267+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 66019328 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:28.982506+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 66019328 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:29.982689+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 66011136 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:30.982951+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 66011136 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:31.983188+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 66011136 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:32.983358+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 66011136 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:33.983570+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 66011136 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:34.983753+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 66011136 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:35.983932+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 66011136 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:36.984143+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 66011136 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:37.984342+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 65994752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:38.984516+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 65994752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:39.984708+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 65994752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:40.984882+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 65994752 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:41.985097+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 65986560 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:42.985264+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 65986560 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:43.985424+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 65986560 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:44.985592+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 65986560 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:45.985873+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 65978368 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:46.986107+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 65978368 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:47.986317+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 65978368 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:48.986664+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 65978368 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:49.986838+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 65970176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:50.987099+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 65970176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:51.987369+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 65970176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:52.987513+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 65970176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:53.987718+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 65970176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:54.987872+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 65970176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:55.988098+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 65970176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:56.988291+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 65970176 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:57.988489+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 65961984 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:58.988650+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 4288512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 65961984 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd21ad800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 ms_handle_reset con 0x562bd21ad800 session 0x562bd4827a40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:22:59.988832+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 65372160 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:00.989097+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 65372160 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:01.989243+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 65355776 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:02.989408+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 65355776 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:03.989564+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531550 data_alloc: 218103808 data_used: 5861376
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 65355776 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:04.989782+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 65355776 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:05.989974+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 65355776 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:06.990133+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 65355776 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:07.990360+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 heartbeat osd_stat(store_statfs(0x4e86b9000/0x0/0x4ffc00000, data 0xa98a71/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 274 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 90.954139709s of 91.414291382s, submitted: 106
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 65331200 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 275 ms_handle_reset con 0x562bd2e6b400 session 0x562bd2258000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:08.990554+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493620 data_alloc: 218103808 data_used: 1216512
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356818944 unmapped: 66953216 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:09.990779+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356818944 unmapped: 66953216 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 275 heartbeat osd_stat(store_statfs(0x4e8b26000/0x0/0x4ffc00000, data 0x62a642/0x7a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:10.991002+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4319000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 66945024 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:11.991147+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 66945024 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:12.991297+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 276 ms_handle_reset con 0x562bd4319000 session 0x562bd4002b40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356843520 unmapped: 66928640 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 276 handle_osd_map epochs [276,277], i have 276, src has [1,277]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:13.991540+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3463896 data_alloc: 218103808 data_used: 1224704
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356859904 unmapped: 66912256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 277 heartbeat osd_stat(store_statfs(0x4e8f8e000/0x0/0x4ffc00000, data 0x1bdc92/0x33e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:14.991789+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356859904 unmapped: 66912256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:15.992003+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356859904 unmapped: 66912256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:16.992219+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356859904 unmapped: 66912256 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:17.992375+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 277 handle_osd_map epochs [277,278], i have 277, src has [1,278]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 66895872 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.893241882s of 10.314013481s, submitted: 38
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:18.992505+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466198 data_alloc: 218103808 data_used: 1224704
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356892672 unmapped: 66879488 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:19.992591+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 278 heartbeat osd_stat(store_statfs(0x4e8f8c000/0x0/0x4ffc00000, data 0x1bf6f5/0x341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356892672 unmapped: 66879488 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:20.992832+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356892672 unmapped: 66879488 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:21.992948+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356909056 unmapped: 66863104 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:22.993113+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356909056 unmapped: 66863104 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:23.993286+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466518 data_alloc: 218103808 data_used: 1232896
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356909056 unmapped: 66863104 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:24.993465+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356909056 unmapped: 66863104 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 278 heartbeat osd_stat(store_statfs(0x4e8f8c000/0x0/0x4ffc00000, data 0x1bf6f5/0x341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:25.993596+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 278 ms_handle_reset con 0x562bd8114400 session 0x562bd46dd680
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356925440 unmapped: 66846720 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:26.993755+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356925440 unmapped: 66846720 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:27.993980+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356925440 unmapped: 66846720 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:28.994128+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356925440 unmapped: 66846720 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:29.994361+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356933632 unmapped: 66838528 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:30.994597+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356933632 unmapped: 66838528 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:31.994761+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356933632 unmapped: 66838528 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:32.994916+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356933632 unmapped: 66838528 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:33.995088+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356941824 unmapped: 66830336 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:34.995258+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356941824 unmapped: 66830336 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:35.995448+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356941824 unmapped: 66830336 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:36.995632+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356941824 unmapped: 66830336 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:37.995786+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356958208 unmapped: 66813952 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:38.995975+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356958208 unmapped: 66813952 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:39.996120+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356958208 unmapped: 66813952 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:40.996351+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356958208 unmapped: 66813952 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:41.996577+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356966400 unmapped: 66805760 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:42.996744+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356966400 unmapped: 66805760 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:43.997118+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356966400 unmapped: 66805760 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:44.997451+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356966400 unmapped: 66805760 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:45.997702+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356974592 unmapped: 66797568 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:46.997917+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356974592 unmapped: 66797568 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:47.998142+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356974592 unmapped: 66797568 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:48.998400+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356982784 unmapped: 66789376 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:49.998735+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356982784 unmapped: 66789376 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:50.999018+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356982784 unmapped: 66789376 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:51.999400+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356982784 unmapped: 66789376 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:52.999536+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356982784 unmapped: 66789376 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:53.999700+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356990976 unmapped: 66781184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:55.000138+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356990976 unmapped: 66781184 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:56.000839+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356999168 unmapped: 66772992 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:57.001406+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356999168 unmapped: 66772992 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:58.001682+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 356999168 unmapped: 66772992 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:23:59.002118+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357007360 unmapped: 66764800 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:00.002351+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357007360 unmapped: 66764800 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:01.002806+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357007360 unmapped: 66764800 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:02.003092+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357007360 unmapped: 66764800 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:03.003278+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357007360 unmapped: 66764800 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:04.003518+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357015552 unmapped: 66756608 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:05.003733+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357015552 unmapped: 66756608 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:06.004012+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357023744 unmapped: 66748416 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:07.004256+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357031936 unmapped: 66740224 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:08.004563+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357031936 unmapped: 66740224 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:09.004760+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357031936 unmapped: 66740224 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:10.004943+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357031936 unmapped: 66740224 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:11.005210+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357031936 unmapped: 66740224 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:12.005429+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357040128 unmapped: 66732032 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:13.005615+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357040128 unmapped: 66732032 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:14.005782+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357040128 unmapped: 66732032 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:15.005932+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357048320 unmapped: 66723840 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:16.006078+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357048320 unmapped: 66723840 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:17.006246+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357048320 unmapped: 66723840 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:18.006436+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357048320 unmapped: 66723840 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:19.006594+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357056512 unmapped: 66715648 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:20.006850+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 66707456 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:21.007104+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 66707456 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:22.007371+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 66707456 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:23.007524+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 66707456 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:24.007699+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 66707456 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:25.007880+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 66707456 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:26.008076+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 66707456 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:27.008272+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 66707456 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:28.008372+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357081088 unmapped: 66691072 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:29.008537+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357081088 unmapped: 66691072 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:30.008713+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357081088 unmapped: 66691072 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:31.008912+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357081088 unmapped: 66691072 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:32.009093+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357081088 unmapped: 66691072 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:33.009255+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357089280 unmapped: 66682880 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:34.009427+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357089280 unmapped: 66682880 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:35.009582+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357089280 unmapped: 66682880 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:36.009989+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357089280 unmapped: 66682880 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:37.010244+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357089280 unmapped: 66682880 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:38.010401+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357105664 unmapped: 66666496 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:39.010545+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357105664 unmapped: 66666496 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:40.010664+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357105664 unmapped: 66666496 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:41.010873+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357105664 unmapped: 66666496 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:42.011002+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357105664 unmapped: 66666496 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:43.011166+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357105664 unmapped: 66666496 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:44.011346+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357130240 unmapped: 66641920 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:45.011498+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357130240 unmapped: 66641920 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:46.011671+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357130240 unmapped: 66641920 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:47.011933+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357130240 unmapped: 66641920 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:48.012153+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357130240 unmapped: 66641920 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:49.012389+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357130240 unmapped: 66641920 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:50.012534+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357138432 unmapped: 66633728 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:51.012704+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357138432 unmapped: 66633728 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:52.012878+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357138432 unmapped: 66633728 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:53.013074+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357138432 unmapped: 66633728 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:54.013265+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:55.013589+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357146624 unmapped: 66625536 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:56.013840+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357146624 unmapped: 66625536 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:57.014052+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357146624 unmapped: 66625536 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:58.014248+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357146624 unmapped: 66625536 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:24:59.014433+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357154816 unmapped: 66617344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:00.014590+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357154816 unmapped: 66617344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:01.014790+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357154816 unmapped: 66617344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:02.014945+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357154816 unmapped: 66617344 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:03.015163+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357163008 unmapped: 66609152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:04.015814+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357163008 unmapped: 66609152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:05.016015+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357163008 unmapped: 66609152 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:06.016153+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357171200 unmapped: 66600960 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:07.016370+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 66592768 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:08.016603+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 66592768 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:09.016763+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 66592768 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471616 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:10.016995+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 66592768 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:11.017236+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357187584 unmapped: 66584576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8f88000/0x0/0x4ffc00000, data 0x1c1383/0x345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:12.017445+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357187584 unmapped: 66584576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:13.017594+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357187584 unmapped: 66584576 heap: 423772160 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3fe9400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:14.017746+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 115.481742859s of 115.539009094s, submitted: 13
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 66592768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:15.017947+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525408 data_alloc: 218103808 data_used: 1241088
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357187584 unmapped: 74981376 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:16.018130+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357187584 unmapped: 74981376 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:17.018281+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 handle_osd_map epochs [279,280], i have 279, src has [1,280]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357187584 unmapped: 74981376 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 279 heartbeat osd_stat(store_statfs(0x4e8319000/0x0/0x4ffc00000, data 0xe31383/0xfb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:18.018480+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357195776 unmapped: 74973184 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8315000/0x0/0x4ffc00000, data 0xe32f00/0xfb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:19.018634+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357212160 unmapped: 74956800 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 280 ms_handle_reset con 0x562bd3fe9400 session 0x562bd2a95e00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:20.018820+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3560558 data_alloc: 218103808 data_used: 1249280
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357212160 unmapped: 74956800 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3fe9400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:21.019024+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357212160 unmapped: 74956800 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:22.019180+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357220352 unmapped: 74948608 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:23.019375+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357228544 unmapped: 74940416 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:24.019528+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357228544 unmapped: 74940416 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.588495731s of 10.099582672s, submitted: 10
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8315000/0x0/0x4ffc00000, data 0xe32f10/0xfb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:25.019695+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563037 data_alloc: 218103808 data_used: 1249280
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357236736 unmapped: 74932224 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 280 handle_osd_map epochs [280,281], i have 280, src has [1,281]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 280 handle_osd_map epochs [281,281], i have 281, src has [1,281]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:26.019882+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357244928 unmapped: 74924032 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 ms_handle_reset con 0x562bd3fe9400 session 0x562bd3792d20
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:27.020275+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357244928 unmapped: 74924032 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:28.020569+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357244928 unmapped: 74924032 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:29.020835+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357244928 unmapped: 74924032 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:30.021070+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566987 data_alloc: 218103808 data_used: 1257472
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357261312 unmapped: 74907648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:31.021414+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357261312 unmapped: 74907648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:32.021598+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357261312 unmapped: 74907648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:33.022065+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 74899456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:34.022375+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357269504 unmapped: 74899456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:35.022792+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566987 data_alloc: 218103808 data_used: 1257472
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 74891264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:36.023089+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 74891264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:37.023249+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 74891264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:38.023477+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 74891264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:39.023744+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 74891264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:40.023969+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566987 data_alloc: 218103808 data_used: 1257472
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 74891264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:41.024455+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 74891264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:42.024609+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357285888 unmapped: 74883072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:43.024750+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357285888 unmapped: 74883072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:44.024997+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357285888 unmapped: 74883072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:45.025192+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566987 data_alloc: 218103808 data_used: 1257472
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357285888 unmapped: 74883072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:46.025446+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:47.025649+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:48.025921+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:49.026117+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:50.026401+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566987 data_alloc: 218103808 data_used: 1257472
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:51.026666+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:52.026875+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:53.027032+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:54.027212+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:55.027356+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566987 data_alloc: 218103808 data_used: 1257472
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:56.027551+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:57.027773+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357302272 unmapped: 74866688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:58.027968+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357310464 unmapped: 74858496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:25:59.028201+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357310464 unmapped: 74858496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:00.028429+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566987 data_alloc: 218103808 data_used: 1257472
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357310464 unmapped: 74858496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:01.028655+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357310464 unmapped: 74858496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:02.028823+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 74842112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:03.028975+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 74842112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:04.029083+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd21ad800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 74842112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0xe34a8d/0xfbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:05.029202+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.229564667s of 40.942382812s, submitted: 6
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569253 data_alloc: 218103808 data_used: 1257472
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 282 ms_handle_reset con 0x562bd21ad800 session 0x562bd3cc01e0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357384192 unmapped: 74784768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:06.029387+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357384192 unmapped: 74784768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:07.029590+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357384192 unmapped: 74784768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:08.029787+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357384192 unmapped: 74784768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:09.030021+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357384192 unmapped: 74784768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 282 heartbeat osd_stat(store_statfs(0x4e830f000/0x0/0x4ffc00000, data 0xe3654d/0xfbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:10.030173+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569253 data_alloc: 218103808 data_used: 1257472
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357392384 unmapped: 74776576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:11.030415+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357392384 unmapped: 74776576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:12.030547+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357392384 unmapped: 74776576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:13.030731+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 282 heartbeat osd_stat(store_statfs(0x4e830f000/0x0/0x4ffc00000, data 0xe3654d/0xfbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357392384 unmapped: 74776576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:14.030915+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 74735616 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 282 handle_osd_map epochs [283,283], i have 283, src has [1,283]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:15.031090+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572227 data_alloc: 218103808 data_used: 1257472
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 74711040 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:16.031338+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 74711040 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:17.031502+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 74711040 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:18.031693+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 74702848 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:19.031884+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 74702848 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:20.032017+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572227 data_alloc: 218103808 data_used: 1257472
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 74702848 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:21.032176+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 74702848 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:22.032301+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 74686464 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:23.032507+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 74686464 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:24.032698+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 74686464 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:25.032842+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357490688 unmapped: 74678272 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:26.033048+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 74670080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:27.033222+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 74670080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:28.033372+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 74670080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:29.033540+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 74670080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:30.033674+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 74670080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:31.033921+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 74670080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:32.034105+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 74670080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:33.034298+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 74670080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:34.034491+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 74653696 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:35.034770+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 74645504 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:36.034920+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 74645504 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:37.035093+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 74645504 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:38.035244+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 74637312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:39.035390+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 74637312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:40.035544+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 74637312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:41.035712+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 74637312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:42.035868+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 74637312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:43.036037+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 74637312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:44.036238+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 74637312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:45.036393+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 74637312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:46.036521+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 74620928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:47.036948+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 74620928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:48.037162+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 74620928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:49.037466+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 74620928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:50.037626+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 74612736 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:51.037839+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 74612736 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:52.038010+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 74612736 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:53.038239+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 74612736 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:54.038420+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 74612736 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:55.038649+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 74604544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:56.038802+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 74604544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:57.038942+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 74604544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:58.039103+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 74604544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:26:59.039265+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 74604544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:00.039446+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 74604544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:01.039694+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 74596352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:02.039867+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 74596352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:03.040048+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 74596352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:04.040221+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 74596352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:05.040349+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 74596352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:06.040489+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 74571776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:07.040643+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 74571776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:08.040774+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 74571776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:09.040946+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 74571776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:10.045095+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 74571776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:11.045362+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 74563584 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:12.045520+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 74563584 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:13.045713+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 74563584 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:14.045920+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 74555392 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:15.046112+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 74547200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:16.046290+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 74547200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:17.046453+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 74547200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:18.046610+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 74547200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:19.046727+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357629952 unmapped: 74539008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:20.046931+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357629952 unmapped: 74539008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:21.047107+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357629952 unmapped: 74539008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:22.047249+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 74530816 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:23.047364+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 74530816 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:24.047493+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 74530816 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:25.047607+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 74530816 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:26.047750+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 74522624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:27.047890+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 74522624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:28.048040+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 74522624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:29.048192+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 74522624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:30.048349+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 74522624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:31.048584+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 74522624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:32.048710+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 74522624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:33.048836+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357654528 unmapped: 74514432 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:34.048996+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357662720 unmapped: 74506240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:35.049127+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357662720 unmapped: 74506240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:36.049381+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357662720 unmapped: 74506240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:37.049745+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357662720 unmapped: 74506240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:38.049907+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 74498048 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:39.050395+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 74498048 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:40.050627+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 74498048 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:41.051047+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 74489856 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:42.051253+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 74489856 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:43.051468+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 74489856 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:44.051650+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 74489856 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:45.051876+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357679104 unmapped: 74489856 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:46.052046+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357695488 unmapped: 74473472 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:47.052542+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357695488 unmapped: 74473472 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:48.052796+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357695488 unmapped: 74473472 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:49.053164+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357695488 unmapped: 74473472 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:50.053396+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 74457088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:51.053699+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 74457088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:52.053902+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 74457088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:53.054286+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 74457088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:54.054562+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 74457088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:55.054905+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 74457088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:56.055093+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 74448896 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:57.055454+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 74448896 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:58.055651+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 74448896 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:27:59.056427+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 74448896 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:00.056643+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 74448896 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:01.056854+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357728256 unmapped: 74440704 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:02.057009+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 74432512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:03.057217+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 74432512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:04.057376+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 74432512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:05.057698+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 74432512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:06.057851+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 74432512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:07.058017+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 74432512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:08.058234+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 74432512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:09.058577+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357744640 unmapped: 74424320 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:10.058765+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 74416128 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:11.059161+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 74416128 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:12.059352+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 74416128 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:13.059776+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 74416128 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:14.059946+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 74407936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:15.060117+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 74407936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:16.060266+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 74407936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:17.060523+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 74407936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:18.060680+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 74399744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:19.060829+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 74399744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:20.060951+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:21.061243+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 74399744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:22.061460+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 74399744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:23.061814+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:24.062031+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:25.062557+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:26.062720+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:27.062857+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:28.063031+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:29.063192+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 74375168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:30.063369+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 74375168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:31.063548+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 74375168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:32.064152+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 74375168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:33.064402+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 74375168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:34.064593+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 74375168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:35.064851+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 74366976 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:36.065009+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 74366976 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:37.065186+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 74366976 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:38.065367+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 74358784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:39.065519+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 74350592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:40.065677+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 74350592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:41.065860+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 74350592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:42.066032+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 74350592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:43.066205+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 74350592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:44.066456+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 74350592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:45.066755+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357826560 unmapped: 74342400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:46.067015+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357826560 unmapped: 74342400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:47.067290+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357826560 unmapped: 74342400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:48.067684+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357826560 unmapped: 74342400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:49.068272+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357826560 unmapped: 74342400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:50.068604+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 74334208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:51.068854+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 74334208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:52.068981+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 74334208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:53.069185+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357842944 unmapped: 74326016 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:54.069453+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357842944 unmapped: 74326016 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:55.069638+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 74317824 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:56.069902+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 74317824 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:57.070085+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 74317824 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:58.070370+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 74309632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:28:59.070543+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 74309632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:00.070737+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 74309632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:01.070917+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357867520 unmapped: 74301440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:02.071217+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357867520 unmapped: 74301440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:03.071415+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357867520 unmapped: 74301440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:04.071751+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357867520 unmapped: 74301440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:05.071932+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357867520 unmapped: 74301440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:06.072193+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 74293248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:07.072377+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 74293248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:08.072553+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 74293248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:09.072785+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 74293248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:10.072969+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 74293248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:11.073200+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 74285056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:12.073383+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 74285056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:13.073568+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 74285056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:14.073775+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 74285056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:15.073938+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 74285056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:16.074090+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 74285056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:17.074222+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 74276864 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:18.074451+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357900288 unmapped: 74268672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:19.074619+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357900288 unmapped: 74268672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:20.074739+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357900288 unmapped: 74268672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:21.074896+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357900288 unmapped: 74268672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:22.075059+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 74260480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:23.075202+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 74260480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:24.075400+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 74260480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:25.075592+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 74260480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:26.075754+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 74260480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:27.075909+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 74244096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:28.076096+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 74244096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:29.076277+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 74244096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:30.076423+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 74244096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:31.076634+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 74244096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:32.076840+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357933056 unmapped: 74235904 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:33.077001+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357933056 unmapped: 74235904 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:34.077133+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 74227712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:35.077280+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.5 total, 600.0 interval
                                           Cumulative writes: 46K writes, 188K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.83 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 620 writes, 1636 keys, 620 commit groups, 1.0 writes per commit group, ingest: 0.86 MB, 0.00 MB/s
                                           Interval WAL: 620 writes, 275 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.077       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.14              0.00         1    0.137       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.14              0.00         1    0.137       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.14              0.00         1    0.137       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd0833090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd0833090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd0833090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bd08331f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 74227712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:36.077405+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 74227712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:37.077599+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 74227712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:38.077741+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 74227712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:39.077937+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 74227712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:40.078130+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 74227712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:41.078373+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 74227712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:42.078545+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 74219520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:43.078693+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 74219520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:44.078867+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357957632 unmapped: 74211328 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:45.079037+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357957632 unmapped: 74211328 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:46.079157+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:47.079345+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:48.079492+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:49.079639+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:50.079754+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:51.079921+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:52.080054+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:53.080192+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:54.080333+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357990400 unmapped: 74178560 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:55.080528+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357990400 unmapped: 74178560 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:56.080791+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357990400 unmapped: 74178560 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:57.081371+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357990400 unmapped: 74178560 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:58.081496+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357990400 unmapped: 74178560 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:29:59.081633+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357998592 unmapped: 74170368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:00.081765+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357998592 unmapped: 74170368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:01.081965+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357998592 unmapped: 74170368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:02.082099+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 74162176 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:03.082227+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 74162176 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:04.082360+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 74162176 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:05.082486+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:06.082614+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:07.082774+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:08.082915+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:09.083046+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:10.083192+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358023168 unmapped: 74145792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:11.083443+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358023168 unmapped: 74145792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:12.083619+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358023168 unmapped: 74145792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:13.083818+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 74137600 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:14.083965+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 74137600 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:15.084123+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 74129408 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:16.084249+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 74129408 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:17.084374+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 74129408 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:18.084820+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 74121216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:19.085013+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 74121216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:20.085231+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 74121216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:21.085414+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 74121216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:22.085572+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 74121216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:23.085706+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 74121216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:24.085851+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 74121216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:25.085984+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 74113024 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:26.086116+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 74104832 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:27.101958+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 74104832 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:28.102137+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 74104832 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:29.102283+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358072320 unmapped: 74096640 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:30.102487+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358072320 unmapped: 74096640 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:31.102662+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358080512 unmapped: 74088448 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:32.102851+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358080512 unmapped: 74088448 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:33.103053+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358080512 unmapped: 74088448 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:34.103199+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:35.103382+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:36.103514+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:37.103683+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:38.103870+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:39.104034+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:40.104201+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:41.104446+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:42.104625+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 74072064 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:43.104812+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 74072064 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:44.105009+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 74072064 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:45.105108+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 74072064 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:46.105247+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 74063872 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:47.105414+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 74063872 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:48.105550+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 74063872 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:49.105740+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 74063872 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:50.106103+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:51.106627+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:52.106848+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:53.107826+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:54.108404+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:55.109087+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:56.109538+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:57.109727+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:58.110085+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:30:59.110398+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:00.110514+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358113280 unmapped: 74055680 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:01.110678+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358113280 unmapped: 74055680 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:02.110806+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358113280 unmapped: 74055680 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:03.111210+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358121472 unmapped: 74047488 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:04.111488+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358121472 unmapped: 74047488 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:05.111638+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358121472 unmapped: 74047488 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:06.112095+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:07.112439+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:08.112700+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:09.112934+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:10.113149+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:11.113424+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:12.113570+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:13.114011+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:14.114295+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:15.114518+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:16.114721+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:17.114890+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:18.115164+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:19.115328+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358154240 unmapped: 74014720 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:20.115473+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358154240 unmapped: 74014720 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:21.115722+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 74006528 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:22.116052+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 73998336 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:23.116986+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 73998336 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:24.117645+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 73998336 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:25.117803+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 73998336 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:26.118442+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 73990144 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:27.119010+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 73990144 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:28.119136+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 73990144 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:29.119390+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 73990144 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:30.119554+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 73981952 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:31.119966+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 73981952 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:32.120123+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 73981952 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:33.120419+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830c000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 73981952 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:34.120582+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 73981952 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:35.120738+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358203392 unmapped: 73965568 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:36.120914+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572387 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358203392 unmapped: 73965568 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:37.121079+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 331.406250000s of 332.066192627s, submitted: 23
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358268928 unmapped: 73900032 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:38.121194+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:39.121395+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:40.121535+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:41.121724+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:42.121899+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:43.122079+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:44.122278+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:45.122474+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:46.122628+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:47.122820+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:48.122948+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:49.123097+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:50.123285+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:51.123546+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:52.123701+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:53.123944+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:54.124209+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:55.124743+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:56.124967+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:57.125201+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:58.125411+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:31:59.125646+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:00.125835+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:01.126162+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:02.126350+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:03.126472+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 73875456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:04.126639+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 73875456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:05.126801+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 73875456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:06.126939+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:07.127146+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:08.127402+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:09.127537+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:10.127701+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:11.127894+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:12.128064+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:13.128289+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:14.128544+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 73859072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:15.128723+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 73859072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:16.128880+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 73859072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:17.129050+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 73859072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:18.129208+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:19.129373+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:20.129555+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:21.129810+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:22.129989+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:23.130167+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:24.130400+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:25.130582+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:26.130738+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:27.130950+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:28.131078+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:29.137242+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:30.137375+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:31.137603+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:32.137777+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:33.141801+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:34.141946+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358334464 unmapped: 73834496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:35.142089+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358334464 unmapped: 73834496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:36.142279+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358334464 unmapped: 73834496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:37.142461+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358334464 unmapped: 73834496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:38.142591+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:39.142720+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:40.142862+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:41.143041+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:42.143186+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:43.143379+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:44.143515+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:45.143683+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:46.143827+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:47.144007+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 73809920 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:48.144135+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 73809920 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:49.144282+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 73809920 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:50.144456+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 73801728 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:51.144602+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 73801728 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:52.144740+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 73801728 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:53.144871+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 73801728 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:54.144995+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 73793536 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:55.145140+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 73793536 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:56.145365+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 73793536 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:57.145710+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 73793536 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:58.145944+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 73785344 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:32:59.146117+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 73785344 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:00.146375+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 73785344 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:01.146601+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 73785344 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:02.146755+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 73768960 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:03.146939+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 73768960 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:04.147269+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 73768960 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:05.147471+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 73768960 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:06.147665+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 73760768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:07.147821+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 73760768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:08.148037+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 73760768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:09.148229+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 73760768 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:10.148386+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 73752576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:11.148622+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 73752576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571507 data_alloc: 218103808 data_used: 1261568
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:12.148827+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 73752576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:13.148982+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 73752576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:14.149136+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e830d000/0x0/0x4ffc00000, data 0xe37fb0/0xfc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358424576 unmapped: 73744384 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:15.149356+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2e6b400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358424576 unmapped: 73744384 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:16.149570+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 98.870841980s of 99.186439514s, submitted: 106
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358424576 unmapped: 73744384 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575681 data_alloc: 218103808 data_used: 1269760
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:17.149745+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 284 ms_handle_reset con 0x562bd2e6b400 session 0x562bd2258b40
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 73728000 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:18.149910+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 73719808 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:19.150146+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd4319000
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358457344 unmapped: 73711616 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8f79000/0x0/0x4ffc00000, data 0x1c9b81/0x354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _renew_subs
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:20.150423+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358465536 unmapped: 73703424 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:21.150707+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358473728 unmapped: 73695232 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493942 data_alloc: 218103808 data_used: 1277952
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:22.150965+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 73687040 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 285 ms_handle_reset con 0x562bd4319000 session 0x562bd21aa1e0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:23.151206+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 73687040 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 285 heartbeat osd_stat(store_statfs(0x4e8f78000/0x0/0x4ffc00000, data 0x1cb742/0x356000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:24.151452+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 73687040 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:25.151592+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 73687040 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 285 heartbeat osd_stat(store_statfs(0x4e8f78000/0x0/0x4ffc00000, data 0x1cb742/0x356000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 285 handle_osd_map epochs [286,286], i have 286, src has [1,286]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:26.151779+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 73678848 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd8114400
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.876462460s of 10.545830727s, submitted: 70
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3499658 data_alloc: 218103808 data_used: 1294336
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:27.151929+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 73670656 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:28.152114+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 73670656 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:29.152283+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 73621504 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 ms_handle_reset con 0x562bd8114400 session 0x562bd57f12c0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:30.152479+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358563840 unmapped: 73605120 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:31.154107+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358563840 unmapped: 73605120 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:32.154658+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358563840 unmapped: 73605120 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:33.155038+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 73596928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:34.157788+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 73596928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:35.158294+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 73596928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:36.158615+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 73596928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:37.158878+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 73596928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:38.159208+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:39.159500+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:40.159644+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:41.159875+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:42.160136+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:43.160542+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:44.160825+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:45.161105+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:46.161414+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:47.161668+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:48.161897+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:49.162155+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:50.162386+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:51.162889+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:52.163082+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:53.163397+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:54.163610+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358604800 unmapped: 73564160 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:55.163839+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358604800 unmapped: 73564160 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:56.164018+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358604800 unmapped: 73564160 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:57.164244+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358604800 unmapped: 73564160 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:58.164421+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358621184 unmapped: 73547776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:33:59.164640+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358621184 unmapped: 73547776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:00.164851+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358621184 unmapped: 73547776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:01.165089+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358621184 unmapped: 73547776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:02.165263+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358629376 unmapped: 73539584 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:03.165365+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358629376 unmapped: 73539584 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:04.165482+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358637568 unmapped: 73531392 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:05.165620+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358637568 unmapped: 73531392 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:06.165749+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358637568 unmapped: 73531392 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:07.165916+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358637568 unmapped: 73531392 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:08.166050+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358645760 unmapped: 73523200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:09.166247+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358645760 unmapped: 73523200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:10.166467+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358645760 unmapped: 73523200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:11.166689+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358645760 unmapped: 73523200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:12.166858+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358645760 unmapped: 73523200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:13.166977+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358653952 unmapped: 73515008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:14.167170+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358653952 unmapped: 73515008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:15.167292+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358653952 unmapped: 73515008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:16.167515+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358653952 unmapped: 73515008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:17.167648+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358653952 unmapped: 73515008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:18.167786+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358670336 unmapped: 73498624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:19.168055+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358670336 unmapped: 73498624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:20.168289+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358670336 unmapped: 73498624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:21.168521+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358670336 unmapped: 73498624 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:22.168667+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358678528 unmapped: 73490432 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:23.168811+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358678528 unmapped: 73490432 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:24.169027+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:25.169153+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:26.169337+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:27.169541+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505590 data_alloc: 218103808 data_used: 1298432
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:28.169710+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd21ad800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e8f6d000/0x0/0x4ffc00000, data 0x1d07cd/0x360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:29.170875+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:30.171025+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358694912 unmapped: 73474048 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:31.171200+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 64.013931274s of 64.528137207s, submitted: 13
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358719488 unmapped: 73449472 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:32.171363+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3508724 data_alloc: 218103808 data_used: 1302528
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 289 heartbeat osd_stat(store_statfs(0x4e8f6a000/0x0/0x4ffc00000, data 0x1d239e/0x363000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,0,1])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:33.171510+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:34.171719+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 289 heartbeat osd_stat(store_statfs(0x4e8f6c000/0x0/0x4ffc00000, data 0x1d238e/0x362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 289 ms_handle_reset con 0x562bd21ad800 session 0x562bd2f183c0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:35.171894+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:36.172050+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:37.172213+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 289 heartbeat osd_stat(store_statfs(0x4e8f6c000/0x0/0x4ffc00000, data 0x1d238e/0x362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3506971 data_alloc: 218103808 data_used: 1298432
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 289 heartbeat osd_stat(store_statfs(0x4e8f6c000/0x0/0x4ffc00000, data 0x1d238e/0x362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:38.172383+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 73433088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:39.172551+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358752256 unmapped: 73416704 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f6c000/0x0/0x4ffc00000, data 0x1d238e/0x362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:40.172726+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358752256 unmapped: 73416704 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:41.172937+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358752256 unmapped: 73416704 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:42.173109+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358760448 unmapped: 73408512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:43.173274+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358760448 unmapped: 73408512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:44.173464+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: mgrc ms_handle_reset ms_handle_reset con 0x562bd2e6cc00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3119838916
Nov 25 09:46:20 compute-0 ceph-osd[88620]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3119838916,v1:192.168.122.100:6801/3119838916]
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: get_auth_request con 0x562bd8114400 auth_method 0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: mgrc handle_mgr_configure stats_period=5
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358768640 unmapped: 73400320 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:45.173632+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358768640 unmapped: 73400320 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:46.173820+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358768640 unmapped: 73400320 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:47.173971+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358768640 unmapped: 73400320 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:48.174113+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358768640 unmapped: 73400320 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:49.174295+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358768640 unmapped: 73400320 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:50.174489+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358785024 unmapped: 73383936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:51.174687+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358785024 unmapped: 73383936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:52.174965+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358785024 unmapped: 73383936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:53.175196+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358785024 unmapped: 73383936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:54.175391+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 73375744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:55.175586+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 73375744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:56.175721+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 73375744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:57.175898+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 73375744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:58.176073+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 73375744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:34:59.176204+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 73367552 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:00.176380+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 73367552 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:01.176541+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 73367552 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:02.176758+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358809600 unmapped: 73359360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:03.177004+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358809600 unmapped: 73359360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:04.177154+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358809600 unmapped: 73359360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:05.177358+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358809600 unmapped: 73359360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:06.177529+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358809600 unmapped: 73359360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:07.177743+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358809600 unmapped: 73359360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:08.177877+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358809600 unmapped: 73359360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:09.178055+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358817792 unmapped: 73351168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:10.178190+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358817792 unmapped: 73351168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:11.178394+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 ms_handle_reset con 0x562bd3f11c00 session 0x562bd43023c0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd2409800
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358817792 unmapped: 73351168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:12.178567+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358817792 unmapped: 73351168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:13.178778+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358817792 unmapped: 73351168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:14.178963+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:15.179128+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:16.179253+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:17.179372+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:18.179538+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:19.179724+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 73326592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:20.179887+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 73326592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:21.180084+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 73326592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:22.180231+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 73326592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:23.180404+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:24.180545+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:25.180733+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:26.181013+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:27.181192+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:28.186828+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:29.187017+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:30.187156+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358858752 unmapped: 73310208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:31.187404+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358858752 unmapped: 73310208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:32.187581+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358858752 unmapped: 73310208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:33.187739+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358858752 unmapped: 73310208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:34.188128+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358875136 unmapped: 73293824 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:35.188348+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:36.188486+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:37.188639+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:38.188776+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:39.188942+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:40.189099+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:41.189271+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:42.189431+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 73277440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:43.189555+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 73277440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:44.189690+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358899712 unmapped: 73269248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:45.189876+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358899712 unmapped: 73269248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:46.190077+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 73261056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:47.190440+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 73261056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:48.190587+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 73261056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:49.190722+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 73261056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:50.190873+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 73252864 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:51.191035+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 73252864 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:52.191164+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 73252864 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:53.191354+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 73252864 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:54.191484+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 73244672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:55.191640+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 73244672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:56.191791+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 73244672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:57.191950+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 73244672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:58.192160+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 73236480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:59.192317+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 73228288 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:00.192437+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 73228288 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:01.192697+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 73228288 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:02.192824+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 73220096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:03.192938+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 73220096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:04.193042+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 73220096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:05.193158+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 73220096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:06.193348+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 73203712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:07.193469+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 73195520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:08.193614+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 73195520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:09.193765+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 73195520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:10.193967+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 73195520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:11.194722+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 73195520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:12.194860+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 73195520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: do_command 'config diff' '{prefix=config diff}'
Nov 25 09:46:20 compute-0 ceph-osd[88620]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: do_command 'config show' '{prefix=config show}'
Nov 25 09:46:20 compute-0 ceph-osd[88620]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 09:46:20 compute-0 ceph-osd[88620]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 09:46:20 compute-0 ceph-osd[88620]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 09:46:20 compute-0 ceph-osd[88620]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 09:46:20 compute-0 ceph-osd[88620]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:13.194988+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358236160 unmapped: 73932800 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:14.195140+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: do_command 'log dump' '{prefix=log dump}'
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:15.195274+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358744064 unmapped: 73424896 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: do_command 'perf dump' '{prefix=perf dump}'
Nov 25 09:46:20 compute-0 ceph-osd[88620]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 25 09:46:20 compute-0 ceph-osd[88620]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 25 09:46:20 compute-0 ceph-osd[88620]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 25 09:46:20 compute-0 ceph-osd[88620]: do_command 'perf schema' '{prefix=perf schema}'
Nov 25 09:46:20 compute-0 ceph-osd[88620]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:16.195361+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 73670656 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:17.195576+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 73670656 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:18.195902+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358514688 unmapped: 73654272 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:19.196585+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358514688 unmapped: 73654272 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:20.196709+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358514688 unmapped: 73654272 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:21.196871+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358514688 unmapped: 73654272 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:22.197025+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358522880 unmapped: 73646080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:23.197378+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358522880 unmapped: 73646080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 ms_handle_reset con 0x562bd2e6bc00 session 0x562bd489a780
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: handle_auth_request added challenge on 0x562bd3f11c00
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:24.197531+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358522880 unmapped: 73646080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:25.197658+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358522880 unmapped: 73646080 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:26.197787+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358531072 unmapped: 73637888 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:27.197939+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358531072 unmapped: 73637888 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:28.198085+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358531072 unmapped: 73637888 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:29.198213+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358531072 unmapped: 73637888 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:30.198348+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358539264 unmapped: 73629696 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:31.198488+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358539264 unmapped: 73629696 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:32.198619+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358539264 unmapped: 73629696 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:33.198792+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358539264 unmapped: 73629696 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:34.198929+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 73621504 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:35.199119+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 73621504 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:36.199285+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 73621504 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:37.199450+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 73621504 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:38.199583+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358547456 unmapped: 73621504 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:39.199751+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358555648 unmapped: 73613312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:40.199890+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358555648 unmapped: 73613312 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:41.200061+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358563840 unmapped: 73605120 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:42.200191+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 73596928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:43.200333+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 73596928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:44.200445+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 73596928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:45.200566+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 73596928 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:46.200685+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358580224 unmapped: 73588736 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:47.200823+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358580224 unmapped: 73588736 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:48.200996+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358580224 unmapped: 73588736 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:49.201116+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358580224 unmapped: 73588736 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:50.201243+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:51.473999+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:52.474140+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:53.474483+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358588416 unmapped: 73580544 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:54.474650+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:55.474801+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:56.474934+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:57.475109+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358596608 unmapped: 73572352 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:58.475255+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358612992 unmapped: 73555968 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:59.475411+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358612992 unmapped: 73555968 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:00.475571+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358612992 unmapped: 73555968 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:01.475747+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358612992 unmapped: 73555968 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:02.475898+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358612992 unmapped: 73555968 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:03.476064+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358612992 unmapped: 73555968 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:04.476226+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358612992 unmapped: 73555968 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:05.476396+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358612992 unmapped: 73555968 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:06.476579+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358621184 unmapped: 73547776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:07.476731+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358621184 unmapped: 73547776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:08.476903+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358621184 unmapped: 73547776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:09.477120+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358621184 unmapped: 73547776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:10.477274+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358621184 unmapped: 73547776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:11.477505+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358621184 unmapped: 73547776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:12.477687+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358621184 unmapped: 73547776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:13.477819+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358621184 unmapped: 73547776 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:14.477958+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358645760 unmapped: 73523200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:15.478099+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358645760 unmapped: 73523200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:16.478270+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358645760 unmapped: 73523200 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:17.478450+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358653952 unmapped: 73515008 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:18.478586+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358662144 unmapped: 73506816 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:19.478776+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358662144 unmapped: 73506816 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:20.478951+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358662144 unmapped: 73506816 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:21.479166+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358662144 unmapped: 73506816 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:22.479392+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358662144 unmapped: 73506816 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:23.479540+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358662144 unmapped: 73506816 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:24.479685+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358662144 unmapped: 73506816 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:25.479847+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358662144 unmapped: 73506816 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:26.480074+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358678528 unmapped: 73490432 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:27.480239+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358678528 unmapped: 73490432 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:28.480409+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358678528 unmapped: 73490432 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:29.480583+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:30.480766+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:31.480957+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:32.481141+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:33.481352+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358686720 unmapped: 73482240 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:34.481511+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358694912 unmapped: 73474048 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:35.481677+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358694912 unmapped: 73474048 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:36.481869+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358694912 unmapped: 73474048 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:37.482081+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358694912 unmapped: 73474048 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:38.482217+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358694912 unmapped: 73474048 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:39.482410+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358694912 unmapped: 73474048 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:40.482564+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358711296 unmapped: 73457664 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:41.482752+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358719488 unmapped: 73449472 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:42.482968+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358719488 unmapped: 73449472 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:43.483167+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358719488 unmapped: 73449472 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:44.483339+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358719488 unmapped: 73449472 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:45.483556+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358719488 unmapped: 73449472 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:46.483783+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:47.483945+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:48.484139+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:49.484381+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:50.484606+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358727680 unmapped: 73441280 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:51.484830+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 73433088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:52.517839+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 73433088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:53.517995+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 73433088 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:54.518154+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358744064 unmapped: 73424896 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:55.518370+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358744064 unmapped: 73424896 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:56.518569+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358744064 unmapped: 73424896 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:57.518756+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358744064 unmapped: 73424896 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:20 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:20 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:58.518976+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358752256 unmapped: 73416704 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:59.519131+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358752256 unmapped: 73416704 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:00.519288+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358752256 unmapped: 73416704 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:01.519479+0000)
Nov 25 09:46:20 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358752256 unmapped: 73416704 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:20 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:02.519591+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358760448 unmapped: 73408512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:03.519736+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358760448 unmapped: 73408512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:04.519947+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358760448 unmapped: 73408512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:05.520193+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358760448 unmapped: 73408512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:06.520375+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358760448 unmapped: 73408512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:07.520505+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358760448 unmapped: 73408512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:08.520666+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358760448 unmapped: 73408512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:09.520805+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358760448 unmapped: 73408512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:10.520973+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358776832 unmapped: 73392128 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:11.521245+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358776832 unmapped: 73392128 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:12.521371+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358785024 unmapped: 73383936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:13.521486+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358785024 unmapped: 73383936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:14.521610+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358785024 unmapped: 73383936 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:15.521788+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 73367552 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:16.521916+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 73367552 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:17.522057+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 73367552 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:18.522240+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 73367552 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:19.522480+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 73367552 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:20.522686+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 73367552 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:21.522964+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358809600 unmapped: 73359360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:22.523142+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358817792 unmapped: 73351168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:23.523401+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358817792 unmapped: 73351168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:24.523602+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358825984 unmapped: 73342976 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:25.523798+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358825984 unmapped: 73342976 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:26.524164+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:27.524343+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:28.524593+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:29.524785+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:30.524965+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:31.525184+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:32.525375+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:33.525513+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:34.525665+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:35.525885+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358834176 unmapped: 73334784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:36.526092+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 73326592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:37.526322+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 73326592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:38.526462+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:39.526587+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:40.526740+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:41.526931+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 73318400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:42.527108+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358858752 unmapped: 73310208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:43.527269+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358858752 unmapped: 73310208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:44.527455+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 73302016 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:45.527646+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 73302016 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:46.527796+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 73302016 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:47.527987+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 73302016 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:48.528116+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 73302016 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:49.528243+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:50.528435+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:51.528657+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:52.528817+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:53.528993+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 73285632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:54.529125+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 73277440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:55.529367+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 73277440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:56.529573+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 73277440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:57.529735+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 73277440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:58.529917+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 73277440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:59.530063+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 73277440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:00.530259+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 73277440 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:01.530485+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358899712 unmapped: 73269248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:02.537890+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358899712 unmapped: 73269248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:03.538105+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358899712 unmapped: 73269248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:04.538234+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 73261056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:05.538396+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 73236480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:06.538562+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 73236480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:07.538695+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 73236480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:08.538853+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 73236480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:09.539056+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 73236480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:10.539204+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 73228288 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:11.539383+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 73228288 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:12.539540+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 73228288 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:13.539682+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 73228288 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:14.539851+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 73228288 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:15.539995+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 73228288 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:16.540165+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 73220096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:17.540353+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 73211904 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:18.540479+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 73211904 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:19.540634+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 73211904 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:20.540789+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 73211904 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:21.540992+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 73203712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:22.541161+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 73203712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:23.541296+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 73203712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:24.541473+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 73203712 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:25.542569+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 73195520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:26.542726+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 73187328 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:27.542881+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 73187328 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:28.543029+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 73187328 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:29.543151+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 73187328 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:30.543354+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 73187328 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:31.543564+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 73179136 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:32.543705+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 73179136 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:33.543878+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 73179136 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:34.544024+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.5 total, 600.0 interval
                                           Cumulative writes: 47K writes, 189K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 16K syncs, 2.82 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 377 writes, 852 keys, 377 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s
                                           Interval WAL: 377 writes, 164 syncs, 2.30 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 73179136 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:35.544229+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 73179136 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:36.544367+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 73179136 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:37.544561+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 73170944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:38.544767+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359006208 unmapped: 73162752 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:39.544943+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359006208 unmapped: 73162752 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:40.545113+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359006208 unmapped: 73162752 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:41.546120+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359022592 unmapped: 73146368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:42.546277+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359022592 unmapped: 73146368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:43.546446+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359022592 unmapped: 73146368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:44.546605+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359022592 unmapped: 73146368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:45.546746+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359022592 unmapped: 73146368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:46.546912+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359022592 unmapped: 73146368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:47.547073+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359022592 unmapped: 73146368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:48.547243+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359022592 unmapped: 73146368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:49.547395+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359030784 unmapped: 73138176 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:50.547563+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359030784 unmapped: 73138176 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:51.548381+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359030784 unmapped: 73138176 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:52.548505+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359030784 unmapped: 73138176 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:53.548714+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 73129984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:54.548879+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 73121792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:55.549046+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 73121792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:56.549200+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 73121792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:57.549371+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 73121792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:58.549529+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 73121792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:59.549658+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:00.549814+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 73121792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:01.550031+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 73121792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:02.550203+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359055360 unmapped: 73113600 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:03.550379+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359055360 unmapped: 73113600 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:04.550561+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359055360 unmapped: 73113600 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:05.550732+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359055360 unmapped: 73113600 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:06.550901+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 73105408 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:07.551071+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 73097216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:08.551254+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 73097216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:09.551448+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 73097216 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:10.551640+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359088128 unmapped: 73080832 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:11.551814+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359088128 unmapped: 73080832 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:12.551991+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359088128 unmapped: 73080832 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:13.552214+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359088128 unmapped: 73080832 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:14.552378+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359096320 unmapped: 73072640 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:15.552550+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359096320 unmapped: 73072640 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:16.552688+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359096320 unmapped: 73072640 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:17.552871+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359096320 unmapped: 73072640 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:18.553056+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359104512 unmapped: 73064448 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:19.553233+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359104512 unmapped: 73064448 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:20.553414+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 359104512 unmapped: 73064448 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:21.553608+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 74399744 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:22.553760+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 74391552 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:23.553916+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:24.554038+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:25.554272+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:26.554647+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:27.554778+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:28.554981+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:29.555131+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:30.555349+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:31.555529+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:32.555666+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:33.555814+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357785600 unmapped: 74383360 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:34.555995+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 74375168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:35.556133+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 74375168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:36.556281+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 74375168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:37.556486+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 74375168 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:38.556660+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 74358784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:39.556906+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 74358784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:40.557065+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 74358784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:41.557250+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 74358784 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:42.557425+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 74350592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:43.557586+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 74350592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:44.557729+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 74350592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:45.557975+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 74350592 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:46.558120+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357826560 unmapped: 74342400 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:47.558300+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 74334208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:48.558514+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 74334208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:49.558669+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 74334208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:50.558898+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 74334208 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:51.559067+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357842944 unmapped: 74326016 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:52.559248+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357842944 unmapped: 74326016 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:53.559401+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357842944 unmapped: 74326016 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:54.559548+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 74317824 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:55.559722+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 74317824 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:56.559886+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 74317824 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:57.560083+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 74317824 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:58.560216+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 74309632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:59.560370+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 74309632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:00.560536+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 74309632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:01.560764+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 74309632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:02.560928+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 74309632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:03.561108+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 74309632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:04.561279+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 74309632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:05.561446+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 74309632 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:06.561593+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 74293248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:07.561732+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 74293248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:08.561888+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 74293248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:09.562034+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 74293248 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:10.562176+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 74285056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:11.562397+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 74285056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:12.562554+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 74285056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:13.562698+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 74285056 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:14.562833+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 74276864 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:15.563001+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 74276864 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:16.563179+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 74276864 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:17.563381+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 74276864 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:18.563590+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357900288 unmapped: 74268672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:19.563779+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357900288 unmapped: 74268672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:20.563960+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357900288 unmapped: 74268672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:21.564178+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357900288 unmapped: 74268672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:22.564354+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357900288 unmapped: 74268672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:23.564548+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357900288 unmapped: 74268672 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:24.564711+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 74260480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:25.564842+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 74260480 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:26.565022+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357916672 unmapped: 74252288 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:27.565160+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357916672 unmapped: 74252288 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:28.565299+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357916672 unmapped: 74252288 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:29.565448+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357916672 unmapped: 74252288 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:30.565642+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 74244096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:31.565848+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 74244096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:32.566050+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 74244096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:33.566240+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 74244096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511145 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:34.566453+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 74244096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:35.566722+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 74244096 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:36.567034+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f68000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357933056 unmapped: 74235904 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 424.865814209s of 425.864105225s, submitted: 24
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:37.567230+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357933056 unmapped: 74235904 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:38.567365+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 74219520 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:39.567558+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:40.567756+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:41.568022+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:42.568255+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:43.568481+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:44.568625+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:45.568817+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:46.569015+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:47.569243+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:48.569428+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:49.569658+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:50.569945+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:51.570369+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:52.570893+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:53.571056+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 74194944 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:54.571237+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357982208 unmapped: 74186752 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:55.571393+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357982208 unmapped: 74186752 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:56.571551+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357982208 unmapped: 74186752 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:57.571758+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357982208 unmapped: 74186752 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:58.571917+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357982208 unmapped: 74186752 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:59.572072+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357998592 unmapped: 74170368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:00.572199+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357998592 unmapped: 74170368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:01.572416+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357998592 unmapped: 74170368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:02.572613+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357998592 unmapped: 74170368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:03.572831+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 357998592 unmapped: 74170368 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:04.573044+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 74162176 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:05.573295+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 74162176 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:06.573509+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 74162176 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:07.573716+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 74162176 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:08.573851+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 74162176 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:09.574068+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 74162176 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:10.574398+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:11.574599+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:12.574765+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:13.574900+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:14.575042+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:15.575195+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:16.575382+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 74153984 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:17.575713+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358023168 unmapped: 74145792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:18.575901+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358023168 unmapped: 74145792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:19.576096+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358023168 unmapped: 74145792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:20.576268+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358023168 unmapped: 74145792 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:21.576527+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 74137600 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:22.576807+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 74137600 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:23.577021+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 74137600 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:24.577209+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 74137600 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:25.577451+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 74129408 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:26.577673+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 74129408 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:27.577842+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 74129408 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:28.577996+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 74129408 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:29.578156+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 74129408 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:30.578419+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 74129408 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:31.578652+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 74129408 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:32.578866+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 74129408 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:33.579011+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 74113024 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:34.579251+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 74113024 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:35.579503+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 74113024 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:36.579654+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 74113024 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:37.579798+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 74113024 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:38.579951+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 74113024 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:39.580178+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 74104832 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:40.580396+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 74104832 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:41.580612+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 74104832 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:42.580809+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 74104832 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:43.580960+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 74104832 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:44.581098+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 74104832 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:45.581272+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358072320 unmapped: 74096640 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:46.581445+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358072320 unmapped: 74096640 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:47.581619+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358072320 unmapped: 74096640 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:48.581800+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358072320 unmapped: 74096640 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:49.581963+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:50.582133+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:51.582343+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:52.582576+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:53.582773+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:54.583140+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:55.583500+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:56.583687+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 74080256 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:57.583882+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 74072064 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:58.584042+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 74072064 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:59.584178+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 74072064 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:00.584382+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 74072064 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:01.584573+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 74063872 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:02.584728+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 74063872 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:03.584864+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 74063872 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:04.585010+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 74063872 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:05.585227+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358113280 unmapped: 74055680 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:06.585436+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358113280 unmapped: 74055680 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:07.585626+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358113280 unmapped: 74055680 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:08.585829+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358113280 unmapped: 74055680 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:09.586016+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358121472 unmapped: 74047488 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:10.586169+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358121472 unmapped: 74047488 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:11.586419+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358121472 unmapped: 74047488 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:12.586571+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358121472 unmapped: 74047488 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:13.586707+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:14.586868+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:15.587022+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:16.587253+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:17.587448+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:18.587638+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 74039296 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:19.587845+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:20.588008+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 74031104 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:21.588188+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:22.588364+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:23.588486+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:24.588615+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 74022912 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:25.588797+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358154240 unmapped: 74014720 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:26.588932+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358154240 unmapped: 74014720 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:27.589072+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358154240 unmapped: 74014720 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:28.589257+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358154240 unmapped: 74014720 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:29.589388+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 74006528 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:30.589522+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 74006528 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:31.589716+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 74006528 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:32.589852+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 74006528 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:33.589986+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 74006528 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:34.590139+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 73998336 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:35.590298+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 73998336 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:36.590455+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 73998336 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:37.590580+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 73998336 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:38.591252+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 73998336 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:39.591911+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 73998336 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:40.592039+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 73981952 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:41.592218+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 73981952 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:42.592367+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 73981952 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:43.592539+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 73981952 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:44.592672+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 73981952 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:45.592828+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 73973760 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:46.593012+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 73973760 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:47.593162+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 73973760 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:48.593344+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 73973760 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:49.593506+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 73973760 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:50.593688+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 73973760 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:51.593878+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 73973760 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:52.594022+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 73973760 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:53.594359+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358203392 unmapped: 73965568 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:54.594646+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358203392 unmapped: 73965568 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:55.594862+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358203392 unmapped: 73965568 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:56.595062+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358211584 unmapped: 73957376 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:57.595257+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358219776 unmapped: 73949184 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:58.596649+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358219776 unmapped: 73949184 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:59.596773+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358219776 unmapped: 73949184 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:00.596928+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358219776 unmapped: 73949184 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:01.597127+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358219776 unmapped: 73949184 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:02.597295+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358219776 unmapped: 73949184 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:03.597493+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358219776 unmapped: 73949184 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:04.597661+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 73940992 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:05.597833+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 73940992 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:06.598028+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 73940992 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:07.598193+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 73940992 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:08.598394+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 73940992 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:09.598558+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358244352 unmapped: 73924608 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:10.598762+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:11.599025+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358244352 unmapped: 73924608 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:12.599171+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358244352 unmapped: 73924608 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:13.599414+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358244352 unmapped: 73924608 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:14.599570+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358244352 unmapped: 73924608 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:15.599791+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358244352 unmapped: 73924608 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:16.599985+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358252544 unmapped: 73916416 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:17.600177+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358252544 unmapped: 73916416 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:18.600351+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 73908224 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:19.600530+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 73908224 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:20.600735+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 73908224 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:21.600970+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 73908224 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:22.601182+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358268928 unmapped: 73900032 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:23.601383+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:24.601534+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:25.601732+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:26.601890+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:27.602073+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:28.602255+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 73891840 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:29.602476+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:30.602642+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 73883648 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:31.602849+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 73875456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:32.603056+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 73875456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:33.603272+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 73875456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:34.603476+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 73875456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:35.603660+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 73875456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:36.603821+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 73875456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:37.603938+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 73875456 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:38.604116+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 73867264 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:39.604253+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 73859072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:40.604447+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 73859072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:41.604655+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 73859072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:42.604795+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 73859072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:43.604935+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 73859072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:44.605112+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 73859072 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:45.605377+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:46.605627+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:47.605806+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:48.605975+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:49.606101+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 73850880 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:50.606237+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:51.606387+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:52.606515+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 73842688 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:53.606684+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358334464 unmapped: 73834496 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:54.606835+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358342656 unmapped: 73826304 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:55.607017+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358342656 unmapped: 73826304 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:56.607229+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358342656 unmapped: 73826304 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:57.607431+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358342656 unmapped: 73826304 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:58.607627+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:59.607825+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:00.608031+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:01.608254+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:02.608491+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:03.608708+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 73801728 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:04.608923+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 73801728 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:05.609144+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 73801728 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:06.609379+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 73793536 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:07.609585+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 73793536 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:08.609776+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 73793536 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:09.609991+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 73793536 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:10.610196+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 73793536 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:11.610440+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 73785344 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:12.610658+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 73785344 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:13.610831+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 73785344 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:14.610965+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 73785344 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:15.611159+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 73785344 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:16.611385+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 73785344 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:17.611591+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 73785344 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:18.611800+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358391808 unmapped: 73777152 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:19.612021+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358391808 unmapped: 73777152 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:20.612214+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358391808 unmapped: 73777152 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:21.612399+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358391808 unmapped: 73777152 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:22.612611+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 73752576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:23.612893+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 73752576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:24.613150+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 73752576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:25.613425+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 73752576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:26.613599+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 73752576 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:27.613793+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358424576 unmapped: 73744384 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:28.613948+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358424576 unmapped: 73744384 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:29.614098+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358424576 unmapped: 73744384 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:30.614251+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358424576 unmapped: 73744384 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:31.614408+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358424576 unmapped: 73744384 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:32.614609+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 73736192 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:33.614763+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 73736192 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:34.614969+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 73728000 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:35.615105+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 73728000 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:36.615224+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 73728000 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:37.615365+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 73728000 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:38.615518+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 73719808 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:39.615654+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 73719808 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:40.615781+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 73719808 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:41.615959+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 73719808 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:42.616104+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 73719808 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:43.616252+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 73719808 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:44.616387+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 73719808 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:45.616522+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8f69000/0x0/0x4ffc00000, data 0x1d3df1/0x365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 73719808 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:46.619369+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358457344 unmapped: 73711616 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:47.619505+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358457344 unmapped: 73711616 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: do_command 'config diff' '{prefix=config diff}'
Nov 25 09:46:21 compute-0 ceph-osd[88620]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 09:46:21 compute-0 ceph-osd[88620]: do_command 'config show' '{prefix=config show}'
Nov 25 09:46:21 compute-0 ceph-osd[88620]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 09:46:21 compute-0 ceph-osd[88620]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 09:46:21 compute-0 ceph-osd[88620]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:48.619643+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 09:46:21 compute-0 ceph-osd[88620]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358760448 unmapped: 73408512 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:49.619785+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 73818112 heap: 432168960 old mem: 2845415832 new mem: 2845415832
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 09:46:21 compute-0 ceph-osd[88620]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 09:46:21 compute-0 ceph-osd[88620]: bluestore.MempoolThread(0x562bd0911b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510265 data_alloc: 218103808 data_used: 1306624
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: tick
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_tickets
Nov 25 09:46:21 compute-0 ceph-osd[88620]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:50.619913+0000)
Nov 25 09:46:21 compute-0 ceph-osd[88620]: do_command 'log dump' '{prefix=log dump}'
Nov 25 09:46:21 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 09:46:21 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 09:46:21 compute-0 ceph-mon[75015]: from='client.23575 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 09:46:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2641831591' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 09:46:21 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1635297020' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 09:46:21 compute-0 ceph-mon[75015]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 09:46:21 compute-0 ceph-mon[75015]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 09:46:21 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3803: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:21 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 25 09:46:21 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002106071' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 09:46:21 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23589 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:22 compute-0 nova_compute[253538]: 2025-11-25 09:46:22.021 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:46:22 compute-0 ceph-mon[75015]: pgmap v3803: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:22 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2002106071' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 09:46:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 25 09:46:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3395767374' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 09:46:22 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 25 09:46:22 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4044238724' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 09:46:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 25 09:46:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/765434557' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 09:46:23 compute-0 ceph-mon[75015]: from='client.23589 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3395767374' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 09:46:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4044238724' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 09:46:23 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/765434557' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 09:46:23 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3804: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:23 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 25 09:46:23 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2003970259' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 09:46:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:46:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:46:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:46:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:46:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:46:23 compute-0 ceph-mgr[75313]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:46:23 compute-0 systemd[1]: Starting Hostname Service...
Nov 25 09:46:23 compute-0 systemd[1]: Started Hostname Service.
Nov 25 09:46:23 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23599 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:24 compute-0 ceph-mon[75015]: pgmap v3804: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:24 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2003970259' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 09:46:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 25 09:46:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3199059800' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 09:46:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 25 09:46:24 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2403591145' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 09:46:24 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:25 compute-0 nova_compute[253538]: 2025-11-25 09:46:25.179 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:46:25 compute-0 ceph-mon[75015]: from='client.23599 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3199059800' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 09:46:25 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2403591145' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 09:46:25 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23605 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:25 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3805: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:25 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 25 09:46:25 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4188367386' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 09:46:25 compute-0 sudo[461721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:46:25 compute-0 sudo[461721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:25 compute-0 sudo[461721]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:25 compute-0 sudo[461769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:46:25 compute-0 sudo[461769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:25 compute-0 sudo[461769]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:25 compute-0 sudo[461794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:46:25 compute-0 sudo[461794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:25 compute-0 sudo[461794]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:26 compute-0 sudo[461826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 09:46:26 compute-0 sudo[461826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:26 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23609 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:26 compute-0 sshd-session[461408]: Invalid user admin from 45.78.222.2 port 48354
Nov 25 09:46:26 compute-0 ceph-mon[75015]: from='client.23605 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:26 compute-0 ceph-mon[75015]: pgmap v3805: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:26 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/4188367386' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 09:46:26 compute-0 sshd-session[461408]: Received disconnect from 45.78.222.2 port 48354:11: Bye Bye [preauth]
Nov 25 09:46:26 compute-0 sshd-session[461408]: Disconnected from invalid user admin 45.78.222.2 port 48354 [preauth]
Nov 25 09:46:26 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23611 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:26 compute-0 sudo[461826]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:46:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:46:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 09:46:26 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:46:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 09:46:26 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:46:26 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev ef58e8da-0f5f-4174-83c9-27cc13bde999 does not exist
Nov 25 09:46:26 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev e20b6622-58f3-422d-8a39-5df5770ee2b0 does not exist
Nov 25 09:46:26 compute-0 ceph-mgr[75313]: [progress WARNING root] complete: ev 0f9128f5-0871-427f-9fb5-10ec5edaabf9 does not exist
Nov 25 09:46:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 09:46:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:46:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 09:46:26 compute-0 ceph-mon[75015]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:46:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 09:46:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:46:26 compute-0 sudo[461955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:46:26 compute-0 sudo[461955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:26 compute-0 sudo[461955]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:26 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 25 09:46:26 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2454829264' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 09:46:26 compute-0 sudo[461981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:46:26 compute-0 sudo[461981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:26 compute-0 sudo[461981]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:26 compute-0 sudo[462008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:46:26 compute-0 sudo[462008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:26 compute-0 sudo[462008]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:27 compute-0 nova_compute[253538]: 2025-11-25 09:46:27.022 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:46:27 compute-0 sudo[462036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 09:46:27 compute-0 sudo[462036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:27 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 25 09:46:27 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3337419805' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 09:46:27 compute-0 ceph-mon[75015]: from='client.23609 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:27 compute-0 ceph-mon[75015]: from='client.23611 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:27 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:46:27 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:46:27 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' 
Nov 25 09:46:27 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:46:27 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:46:27 compute-0 ceph-mon[75015]: from='mgr.14130 192.168.122.100:0/2484775184' entity='mgr.compute-0.cpskve' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:46:27 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2454829264' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 09:46:27 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3806: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:27 compute-0 podman[462120]: 2025-11-25 09:46:27.438344919 +0000 UTC m=+0.116881141 container create 9f8fb4de00008b139ec38f87a49fbbc5c5049d38c713e7a69d12ebac39bc902c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khayyam, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:46:27 compute-0 podman[462120]: 2025-11-25 09:46:27.348486827 +0000 UTC m=+0.027023129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:46:27 compute-0 systemd[1]: Started libpod-conmon-9f8fb4de00008b139ec38f87a49fbbc5c5049d38c713e7a69d12ebac39bc902c.scope.
Nov 25 09:46:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:46:27 compute-0 podman[462120]: 2025-11-25 09:46:27.560558265 +0000 UTC m=+0.239094507 container init 9f8fb4de00008b139ec38f87a49fbbc5c5049d38c713e7a69d12ebac39bc902c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:46:27 compute-0 podman[462120]: 2025-11-25 09:46:27.569429928 +0000 UTC m=+0.247966150 container start 9f8fb4de00008b139ec38f87a49fbbc5c5049d38c713e7a69d12ebac39bc902c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 09:46:27 compute-0 blissful_khayyam[462161]: 167 167
Nov 25 09:46:27 compute-0 systemd[1]: libpod-9f8fb4de00008b139ec38f87a49fbbc5c5049d38c713e7a69d12ebac39bc902c.scope: Deactivated successfully.
Nov 25 09:46:27 compute-0 podman[462120]: 2025-11-25 09:46:27.609647605 +0000 UTC m=+0.288183827 container attach 9f8fb4de00008b139ec38f87a49fbbc5c5049d38c713e7a69d12ebac39bc902c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:46:27 compute-0 podman[462120]: 2025-11-25 09:46:27.612046781 +0000 UTC m=+0.290583003 container died 9f8fb4de00008b139ec38f87a49fbbc5c5049d38c713e7a69d12ebac39bc902c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khayyam, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:46:27 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23617 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3780714d9dc67c235ac679acff7f5f49fbc239b9a585b1f32f9d6ec2cd676633-merged.mount: Deactivated successfully.
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23619 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:28 compute-0 ceph-mgr[75313]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:46:28 compute-0 podman[462120]: 2025-11-25 09:46:28.151058594 +0000 UTC m=+0.829594816 container remove 9f8fb4de00008b139ec38f87a49fbbc5c5049d38c713e7a69d12ebac39bc902c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:46:28 compute-0 systemd[1]: libpod-conmon-9f8fb4de00008b139ec38f87a49fbbc5c5049d38c713e7a69d12ebac39bc902c.scope: Deactivated successfully.
Nov 25 09:46:28 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3337419805' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 09:46:28 compute-0 ceph-mon[75015]: pgmap v3806: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:28 compute-0 podman[462279]: 2025-11-25 09:46:28.329808313 +0000 UTC m=+0.022609668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 09:46:28 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 25 09:46:28 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2250635520' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 09:46:28 compute-0 podman[462279]: 2025-11-25 09:46:28.617369443 +0000 UTC m=+0.310170828 container create 256efcb1fae6df60146089c49f8034fb0784bb6e33706248ca8e1ae3150d3c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:46:28 compute-0 systemd[1]: Started libpod-conmon-256efcb1fae6df60146089c49f8034fb0784bb6e33706248ca8e1ae3150d3c44.scope.
Nov 25 09:46:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd46738af75649a313103222cff500eece4d8b42505cfced44cd44f4b5124234/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd46738af75649a313103222cff500eece4d8b42505cfced44cd44f4b5124234/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd46738af75649a313103222cff500eece4d8b42505cfced44cd44f4b5124234/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd46738af75649a313103222cff500eece4d8b42505cfced44cd44f4b5124234/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd46738af75649a313103222cff500eece4d8b42505cfced44cd44f4b5124234/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:29 compute-0 podman[462279]: 2025-11-25 09:46:29.042280141 +0000 UTC m=+0.735081496 container init 256efcb1fae6df60146089c49f8034fb0784bb6e33706248ca8e1ae3150d3c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:46:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 25 09:46:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2624044125' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 09:46:29 compute-0 podman[462279]: 2025-11-25 09:46:29.053076286 +0000 UTC m=+0.745877621 container start 256efcb1fae6df60146089c49f8034fb0784bb6e33706248ca8e1ae3150d3c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 09:46:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 09:46:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1581543054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:46:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 09:46:29 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1581543054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:46:29 compute-0 podman[462279]: 2025-11-25 09:46:29.300056128 +0000 UTC m=+0.992857513 container attach 256efcb1fae6df60146089c49f8034fb0784bb6e33706248ca8e1ae3150d3c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_matsumoto, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:46:29 compute-0 ceph-mgr[75313]: log_channel(cluster) log [DBG] : pgmap v3807: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:29 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23629 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:29 compute-0 ceph-mgr[75313]: log_channel(audit) log [DBG] : from='client.23631 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:29 compute-0 ceph-mon[75015]: from='client.23617 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:29 compute-0 ceph-mon[75015]: from='client.23619 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2250635520' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 09:46:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/2624044125' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 09:46:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1581543054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:46:29 compute-0 ceph-mon[75015]: from='client.? 192.168.122.10:0/1581543054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:46:29 compute-0 ceph-mon[75015]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:30 compute-0 xenodochial_matsumoto[462349]: --> passed data devices: 0 physical, 3 LVM
Nov 25 09:46:30 compute-0 xenodochial_matsumoto[462349]: --> relative data size: 1.0
Nov 25 09:46:30 compute-0 xenodochial_matsumoto[462349]: --> All data devices are unavailable
Nov 25 09:46:30 compute-0 systemd[1]: libpod-256efcb1fae6df60146089c49f8034fb0784bb6e33706248ca8e1ae3150d3c44.scope: Deactivated successfully.
Nov 25 09:46:30 compute-0 podman[462279]: 2025-11-25 09:46:30.154886983 +0000 UTC m=+1.847688318 container died 256efcb1fae6df60146089c49f8034fb0784bb6e33706248ca8e1ae3150d3c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_matsumoto, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:46:30 compute-0 systemd[1]: libpod-256efcb1fae6df60146089c49f8034fb0784bb6e33706248ca8e1ae3150d3c44.scope: Consumed 1.029s CPU time.
Nov 25 09:46:30 compute-0 nova_compute[253538]: 2025-11-25 09:46:30.225 253542 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:46:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 09:46:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1827761100' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 09:46:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd46738af75649a313103222cff500eece4d8b42505cfced44cd44f4b5124234-merged.mount: Deactivated successfully.
Nov 25 09:46:30 compute-0 nova_compute[253538]: 2025-11-25 09:46:30.547 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:46:30 compute-0 nova_compute[253538]: 2025-11-25 09:46:30.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:46:30 compute-0 nova_compute[253538]: 2025-11-25 09:46:30.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:46:30 compute-0 nova_compute[253538]: 2025-11-25 09:46:30.553 253542 DEBUG oslo_service.periodic_task [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:46:30 compute-0 nova_compute[253538]: 2025-11-25 09:46:30.553 253542 DEBUG nova.compute.manager [None req-db473128-0c68-4c5a-bf61-b4c67e0bbfb2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:46:30 compute-0 podman[462279]: 2025-11-25 09:46:30.591863731 +0000 UTC m=+2.284665066 container remove 256efcb1fae6df60146089c49f8034fb0784bb6e33706248ca8e1ae3150d3c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:46:30 compute-0 sudo[462036]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:30 compute-0 systemd[1]: libpod-conmon-256efcb1fae6df60146089c49f8034fb0784bb6e33706248ca8e1ae3150d3c44.scope: Deactivated successfully.
Nov 25 09:46:30 compute-0 sudo[462827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:46:30 compute-0 sudo[462827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:30 compute-0 sudo[462827]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:30 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Nov 25 09:46:30 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3825701336' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 09:46:30 compute-0 sudo[462865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:46:30 compute-0 sudo[462865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:30 compute-0 sudo[462865]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:30 compute-0 sudo[462900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:46:30 compute-0 sudo[462900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:30 compute-0 sudo[462900]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:30 compute-0 ceph-mon[75015]: pgmap v3807: 321 pgs: 321 active+clean; 457 KiB data, 975 MiB used, 59 GiB / 60 GiB avail
Nov 25 09:46:30 compute-0 ceph-mon[75015]: from='client.23629 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:30 compute-0 ceph-mon[75015]: from='client.23631 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:46:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/1827761100' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 09:46:30 compute-0 ceph-mon[75015]: from='client.? 192.168.122.100:0/3825701336' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 09:46:30 compute-0 sudo[462944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a058ea16-8b73-51e1-b172-ed66107102bf/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a058ea16-8b73-51e1-b172-ed66107102bf -- lvm list --format json
Nov 25 09:46:30 compute-0 sudo[462944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:31 compute-0 ceph-mon[75015]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Nov 25 09:46:31 compute-0 ceph-mon[75015]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3227903946' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 25 09:46:31 compute-0 podman[463081]: 2025-11-25 09:46:31.266045093 +0000 UTC m=+0.063363600 container create 5cb39646253d2ab47f62775fd1b578b6f7087c03f5059adf886a6f72cded7042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_archimedes, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
